About us
Cognitive cyber-physical systems are key innovation drivers in many industries, such as autonomous vehicles, robotics, intelligent manufacturing, and medical devices. In many areas, traditional software systems are reaching their limits as a means to further increase productivity and automation, which is why artificial intelligence plays an increasingly important role. However, cognitive cyber-physical systems are more than AI; they are complex, software-intensive systems that use AI. Moreover, cyber-physical systems (as opposed to traditional interactive systems such as smartphone or web apps) interact with the real world. And while a failing smartphone app is annoying, a failing autonomous driving system can easily become life-threatening. The development of high-quality - above all, safe and reliable - systems is therefore central to the success of cognitive cyber-physical systems. We call such systems resilient cognitive systems.
Cognitive systems not only need intelligent behavior; they must also become aware of their environment, their own state, and their goals in order to intelligently adapt themselves, optimizing their utility whilst preserving safety - Resilient Cognitive Systems.
Although AI is currently in the spotlight, engineering will be the key to transforming ideas into real innovations in the field of safety-critical and high-consequence systems. However, it is not enough to simply use the "old" engineering paradigms. In fact, the engineering of intelligent systems requires intelligent engineering, or as we call it: Engineering-Intelligence.
The idea of engineering resilient cognitive systems has two main strands of research:
- We define resilience as "optimizing utility whilst preserving safety in uncertain contexts". Cognitive cyber-physical systems such as autonomous vehicles must cope with the infinite variety of the real world, which is full of surprises, constantly changing, and which we as engineers can no longer (fully) predict at design time. Yet the systems must provide maximum utility while still ensuring safety. Cognitive systems will therefore only make their breakthrough when they become resilient, i.e., when they become aware of their goals, their context, and their own state in order to continuously adapt to their environment - always with the goal of optimizing their utility while guaranteeing their safety. This means transferring parts of the engineering process from design time to run time, from human engineers to the system itself. Instead of rigidly adhering to a static specification, most likely based on invalid a priori assumptions about the system's context, the system continuously evaluates its goal achievement and can adapt its requirements, its architecture, and its behavior to autonomously (i.e., without human intervention) optimize its goal achievement even in changing, unpredictable environments. To do this, we need to imbue systems with engineering intelligence so that they can perform the final engineering steps themselves at runtime, instead of relying on a human at design time. In this way, we obtain cognitive self-adaptive systems.
- In addition to shifting parts of the engineering process to run time, we also need to rethink design-time engineering to cope with ever-increasing complexity. Approaches such as model-based software systems engineering (MBSE) and automated software engineering provide a solid foundation for managing complexity. But what makes a system complex is not just its size, structure, or sophisticated behavior; it is also its variability. We may need to assure the safety of hundreds of versions of a system per year, or the system may evolve over time. With each change, the models erode, and it takes considerable (and unaffordable) manual effort to keep them up to date. However, there are several ways to make MBSE more efficient. A key lever is AI, which can help us keep the models up to date and also integrate aspects from various sources - code, test results, safety analyses, etc. - where AI is good at finding correlations that humans miss. Therefore, we need to take MBSE to the next level of "Engineering-Intelligence" by creating an intelligent engineering companion that teams up with human engineers, combining the strengths of human and artificial intelligence to reduce the manual effort required. While large language models provide us with approaches for coding co-pilots, we need different approaches at the symbolic level for engineering, because we do not have enough data for brute-force training and because we need full transparency to ensure the safety of our co-engineered systems. Integrating intelligence into software engineering therefore still requires solving many open challenges, but it will be another key to engineering resilient cognitive systems.
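The runtime self-adaptation described above can be pictured as a monitor-analyze-plan loop that trades utility against risk. The following sketch is purely illustrative - `Context`, `Plan`, the visibility signal, and all thresholds are hypothetical placeholders, not an actual architecture used at the chair:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Monitored state of the environment (hypothetical: a single visibility signal)."""
    visibility: float  # 0.0 (none) .. 1.0 (perfect)

@dataclass
class Plan:
    """Adapted behavior (hypothetical: a speed limit in m/s)."""
    speed_limit: float

# Fallback when the context leaves the assumed safe operating envelope.
SAFE_STOP = Plan(speed_limit=0.0)

def analyze(ctx: Context) -> bool:
    """Analyze: is the current context still within the safe envelope?"""
    return ctx.visibility >= 0.2

def plan(ctx: Context) -> Plan:
    """Plan: optimize utility (speed) while scaling back under poor visibility."""
    return Plan(speed_limit=round(30.0 * ctx.visibility, 1))

def adapt(ctx: Context) -> Plan:
    """One iteration of the adaptation loop: analyze the monitored context,
    then either plan for utility or fall back to a guaranteed-safe state."""
    if not analyze(ctx):
        return SAFE_STOP
    return plan(ctx)
```

A real resilient system would, of course, run this loop continuously, reason over far richer context and goal models, and provide a safety argument for the adaptation logic itself.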
The work at the chair is closely related to the Fraunhofer Institute for Cognitive Systems IKS, where you can find some additional information about our activities.