This transcript was automatically generated.
Adrien Bron: Good morning. As we are here at the automotive day of this year's Hannover Messe, let me tell you the story of a car. A car is being driven on the highway in Germany. Of course, it's being driven fast, and during the drive, it picks up a small low-level anomaly in its braking system. That signal is sent via telemetry to the connected car platform of the manufacturer—the OEM—where data from other vehicles on the roads is being streamed. And because the manufacturer has linked this platform to its production system, it can very quickly and automatically find out in which factories, on which production lines, and in which assembly cells this braking system was assembled. It can find out that another two thousand cars have been produced to the same tolerances, and therefore it can issue a preventive warning for the remaining vehicles. But more importantly, it can tighten the tolerance threshold to which this braking system is assembled. And if it does that, then the braking system itself needs to be designed slightly differently. The supplier needs to reduce the thickness of the brake pad, so the manufacturer can inform the supplier to adapt the design. And that little issue—below a driver warning or a malfunction notice—that our fast German car picked up on the road has been solved before it becomes a real problem.
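The closed loop in this story is, at its core, a traceability query: an anomaly report carries identifiers that link a part back to the plant, line, and cell where it was assembled. A minimal sketch in Python, with invented records and field names (no OEM uses this exact schema):

```python
# Hypothetical traceability sketch: given an anomaly report from one vehicle,
# find every other vehicle assembled on the same cell to the same tolerance.
# All records and field names are illustrative, not a real OEM schema.

production_records = [
    {"vin": "VIN001", "plant": "Wolfsburg", "line": 3, "cell": "B7", "pad_tolerance_mm": 0.35},
    {"vin": "VIN002", "plant": "Wolfsburg", "line": 3, "cell": "B7", "pad_tolerance_mm": 0.35},
    {"vin": "VIN003", "plant": "Leipzig", "line": 1, "cell": "A2", "pad_tolerance_mm": 0.20},
]

def affected_vehicles(anomaly_vin, records):
    """Return VINs built on the same cell to the same tolerance as the anomalous vehicle."""
    source = next(r for r in records if r["vin"] == anomaly_vin)
    return [
        r["vin"]
        for r in records
        if r["vin"] != anomaly_vin
        and (r["plant"], r["line"], r["cell"]) == (source["plant"], source["line"], source["cell"])
        and r["pad_tolerance_mm"] == source["pad_tolerance_mm"]
    ]

print(affected_vehicles("VIN001", production_records))  # prints ['VIN002']
```

In a real deployment the same join would run across the OEM's manufacturing execution and PLM systems rather than an in-memory list, but the logical step is the same: shared identifiers make the loop closable.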
Now, the most remarkable part of this story is that it is actually pretty common. Connected cars have been on the road for years, and closed automation feedback loops that allow such use cases are being deployed at scale, starting with the most safety-critical features. But if you step outside automotive, and especially if you step outside production-centric use cases, you will see that the vast majority of automation systems that are powering and enabling our factories have never been designed for that kind of intelligence.
So here at the Hannover Messe, we want to discuss what comes next: to talk about automation systems that create value beyond control, automation systems that create value beyond the four walls of the factories they are installed in, and automation systems that, with AI, will progressively increase their own autonomy.
We are entering a new era of industrial automation, and this is the result of forces on both the demand and the supply side of the market. On the demand side, manufacturers are asking for more and more of these cross-cutting use cases—these closed feedback loops of continuous improvement from operations back to design and forward again. And they do so because their clients, the drivers, you and I, are asking for better convenience of use, for better reliability and safety of our products. But also they do that because they see significant opportunities to improve their own operations, their supply chain management, their ability to do product recall.
And on the supply side, what has been traditionally challenging in the industrial world—fragmented data, expensive integration of systems, cybersecurity concerns, all the way to more mundane things such as low connectivity—that is being tackled fast. So we are going to a market where demand will increase and supply will improve. That will create significant innovation and growth in the automation space.
And growth will be everywhere, but not evenly distributed. A good way to think about growth is to think about automation value as a percentage of the sector's value add. In automotive, that number is twenty: twenty euros of automation for every thousand euros of automotive value add. So if you remember the example of that car, let's say it is a hundred-thousand-euro car. That car has two thousand euros of automation embedded in the car itself and in the factories that were used to produce it. Outside of automotive and oil and gas, that number is significantly lower; those industries are not yet automated to the same extent.
And if we look forward, we will see the highest growth in the automation market in what we call the hybrid verticals—those industries that combine discrete and process production methods, continuous flows alongside discrete items in lower quantities. This is where the bulk of the growth will come, in addition to automotive and oil and gas, which are at the forefront of automation.
And at the core of this new era of automation is a digital loop, the digital thread—a thread that combines product design data, production data, and operations data and enables such cross-cutting use cases. Today, manufacturers that use such digital loops are seeing significant productivity gains, in innovation cycles as well as in industrialization processes and in shop floor productivity. And with AI, this digital thread will further increase its value and its autonomy.
But we are at the onset of an autonomous automation world, and the main reason for that is that our data is still too fragmented and not accessible enough. In an industrial world, we have local data models. Think about a product data model, an equipment data model, and a process data model. Those can be combined, and then we have a digital twin—we can even call it an end-to-end digital twin. But for production-grade, AI-ready data, we need to contextualize this digital twin in the real-world environment in which it operates. And companies are developing contextual data models, but they are still rather isolated and not deployed at scale.
So what the industry is working towards is what we would call a factory world model—a more comprehensive representation of what happens in which environment, and of the relationship between a digital product and its contextual world. And when AI has access to that kind of data, everything will change. The potential will increase. AI will become physical.
Even before those factory world models are deployed at scale, AI is already significantly impacting the industrial automation landscape. We asked a hundred executives about where AI will have the biggest impact on the automation stack and market. And as you see on this chart, the answer is very heterogeneous. We have use cases in the top right territory, where AI will drive significant growth but also significant substitution of current ways of working and technologies—think about use cases such as visual detection or production scheduling in the S&OP process. And we also have use cases at the bottom left, such as line calibration and line commissioning, which will be more resilient. Overall, we expect by 2030 that about half of the automation market will rely on AI-enabled offerings—which is significantly faster than most manufacturers currently expect and plan for.
So let me bring this together, because the shifts we are talking about are not only about technology—they are about where value is created and captured in the automation market. A good way to think about that in the past has been along the automation pyramid. Most of the value of automation sat at the bottom of this pyramid, in the field layer—the machines with their embedded sensors and actuators, orchestrated by control systems across the production lines. Today, that pyramid has already become a stack, which means equal importance of enterprise software, operations software, control, and field devices. And as we look forward, the value distribution of this pyramid will look more like an hourglass: value will be created at the top, in software, data, and AI-enabled workflows, and at the bottom, in hardware devices and connectivity, while the traditional orchestration layer may see some compression.
And to discuss more what vendors and manufacturers must do in that march towards autonomous operation, we have assembled a panel here that my colleague Michael Schertler will now introduce. Thank you.
Michael Schertler: Thank you very much, Adrien. Thank you for walking us through the structural changes that are reshaping industrial automation. But the real question is: what does this all practically mean for those who are building and deploying these systems? I'm delighted to have a distinguished panel with us today. We have Rainer Brehm, CEO of Siemens Digital Industries; Timo Kistner from NVIDIA; Jon Lervik, CEO of Cognite; and Florian Dorr from Schaeffler.
Let me start with the big picture. Adrien described how AI and software will reshape value in the automation stack—the hourglass model. Rainer, from your vantage point, you have described this as a shift toward "automating automation" by using software, digital twins, and AI agents. What are the decisive control points in this stack? Is it the orchestration of these different elements? Is it the digital thread? Or is it the process know-how?
Rainer Brehm: I think there are different topics. First of all, I like that you picked up the digital thread—that's also how we describe it. In the end, these are workflows we need to automate, and at Siemens we work on connecting the digital and the real world. That means starting from product design and production design and then going into real operations—we need to connect all of that, and there is huge value in making these workflows more automated. In the future, agents will probably replace more of the manual handovers between those workflows, sitting on a data layer—more of a PLM data backbone, as Jon mentioned.
Number two: if you talk about automating automation, it's not just about the workflow. It's also about how we make automation smarter, because the profit pool of rule-based automation is shrinking: rule-based systems are not flexible enough to adapt to the needs that arise in the factory. AI will go into the control layer as software—that's what we work on with NVIDIA; we call it physical AI—and then you move from rule-based to goal-based automation. You tell the automation system what you need done: grab this glass, put it there—and you don't write a line of code, because the system is smart enough. I think this is a very important aspect that will enhance automation control as we see it today.
Michael Schertler: Would be great to get also your perspective, Timo, as you span the full AI stack. I think you also have a view that accelerated computing specifically will change what is actually possible in industrial AI. How do you see this?
Timo Kistner: Absolutely. When we think about accelerated compute, this is not just a couple of GPUs or server infrastructure that accelerates workloads—it's a full stack including all the frameworks and libraries needed to accelerate those workloads. In my mind, if you look across the value chain of an industrial company, every step in that value chain will be automated and accelerated at one point in time.
Think of simulation workloads in the R&D space, like fluid dynamic simulations—tools like Star-CCM+ from Siemens today. They are accelerated with GPUs. We had those simulations running for days or weeks. Today, we can run those workloads in hours if not less. And the next step is the production side: every kind of computer vision model, video search and summarization, industrial AI agents on the shop floor—everything will be automated. Physical AI needs to be simulated. We need simulation capabilities to have a physics-based representation in which we can train and simulate AI models that will ultimately be deployed on the shop floor. From our perspective, you're going to have a fully accelerated value chain, with all workloads accelerated and everything simulated.
Michael Schertler: Thank you. I think that raises a good point, and you can see this from these different answers: we are clearly witnessing how value is moving towards software, towards AI, and the smart edge. That has always raised a key question in the industry—is it open ecosystems? Is it more integrated platforms? Or is it vertical solution stacks? Our research shows that about sixty percent of the incremental growth and value coming until 2030 will come from vertical-specific offerings. But that requires building ecosystems and working with partners. The question always is: where do you compete and where do you collaborate? Timo, NVIDIA has built deep partnerships with automation vendors, data platforms, and manufacturers. How do you see your role in that ecosystem?
Timo Kistner: Let me take exactly that point. When we think of physical AI, we have what we call the three-computer model—three areas that we need to cover. We need the AI training ground to train the physical AI model we ultimately want to deploy on the shop floor. We need to simulate that model in a physics-based environment, because we don't want to deploy an AI model on a robot straight after training it just to see if it works—we want to simulate it to find out how the model behaves. And last, once that training-simulation cycle is complete, we need to deploy it. This is where partnerships become so relevant.
For AI training, we first need to have the understanding and data context—which is why we need to partner with companies like Siemens as well as end users to really enable that training. For simulation, we need to leverage the know-how that industrial automation players and data platform and contextualization layers bring. And for deployment, it doesn't work without collaborating very closely on the shop floor and literally putting all of that into the physics-based representation of what's happening there.
Michael Schertler: That raises a question for you, Rainer. Based on what Timo just said: Siemens has always had a strong hold on the control layer, and you have now invested heavily in software and made some quite bold acquisitions. How do you keep the stack open for partners, and how do you prioritize where you invest in further verticalization?
Rainer Brehm: First of all, I also believe in openness. Siemens just launched Siemens Xcelerator, which is easy, open, and flexible—that's the core of it. The era of proprietary fieldbuses is over. We need to work with IT standards adapted to the industrial world. Our industrial edge is Docker-based. We incorporate GPUs from NVIDIA to make it accessible for partners, and we use this stack ourselves—deploying workloads like a virtual PLC—but we also invite partners to deploy Docker container apps that run in a safe, secure, and deterministic environment on the shop floor. We also enable, for example, ingesting real-time data from an edge device into Cognite. So we need to be open.
Number two: where we invest, we invest where the biggest return is and where the biggest customer need is. Going into areas where a lot of manual work exists today is an interesting area—that's why we looked at physical AI because that will be a lever. Our acquisitions around Altair and Dotmatics were about the comprehensive digital twin as an enabler to doing all that work we discussed. Altair is very strong in simulation, and Dotmatics is very strong on the pharmaceutical side, where I think we also have a big impact.
Michael Schertler: Very interesting. Let's move into the actual deployment of AI and what that practically means. We have seen many manufacturers investing quite heavily in automation technologies, and many have gone through investments in Industry 4.0—investing in connectivity and upgrading their OT architectures. Some are hesitant to say, "Okay, now is the next wave of technology coming—should I really invest again, or do I wait?" It's important to understand what is really delivering results. Rainer, you mentioned that people will not just rip out everything they just invested in—it has to work in a brownfield environment. What are some learnings from brownfield settings that can drive adoption?
Rainer Brehm: As I said, you need to be open enough to connect to most systems, and you need to use standards. For example, with our industrial edge you can connect to a Rockwell PLC—it talks EtherNet/IP, Modbus, and all the different fieldbuses. Then you have data layers, where we use DDS and MQTT for northbound connectivity. We work with AWS and Microsoft on direct integrations, using the stacks from NVIDIA. So we need to combine it all.
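As one hypothetical illustration of such northbound connectivity, an edge gateway might wrap a field value into a JSON payload on a hierarchical MQTT topic. The topic layout, field names, and quality flag below are assumptions for illustration, not a Siemens schema or an MQTT standard:

```python
import json
import time

# Illustrative northbound message builder for an MQTT bridge on an
# industrial edge device. Topic hierarchy and payload fields are invented.

def build_northbound_message(plant, line, tag, value, unit):
    """Build an MQTT (topic, payload) pair for a single tag reading."""
    topic = f"factory/{plant}/{line}/{tag}"  # hierarchical topic for broker-side filtering
    payload = json.dumps({
        "tag": tag,
        "value": value,
        "unit": unit,
        "ts": int(time.time() * 1000),       # epoch milliseconds
        "quality": "GOOD",                   # OPC UA-style quality indicator
    })
    return topic, payload

topic, payload = build_northbound_message("wolfsburg", "line3", "brake_press_force", 4.2, "kN")
# A client library such as paho-mqtt would then publish it:
#   client.publish(topic, payload)
```

The hierarchical topic is the design choice that matters here: it lets a cloud consumer subscribe to `factory/wolfsburg/#` for one plant or `factory/+/+/brake_press_force` for one signal across all plants.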
I think our value add comes when these technologies hit the shop floor, because then they need to be reliable—eighty percent is not enough to deploy in a machine. They need to be easy to use for customers who are not tech experts. They need to be safe. And they need to have a return on investment, because great technology is good, but it needs to deliver ROI.
Michael Schertler: I think that's a perfect segue to you, Florian. You are actually one of the leading companies deploying this in brownfield settings. What has scaled within factories, what has scaled across factories on the AI side? What were the arguments and business cases that convinced your leadership to make the investment decision?
Florian Dorr: In the end, all solutions which are implemented have to show a real value-add after a certain period of time. Of course, there is a ramp-up period that is accepted by everyone, but in the end, a plant manager needs improved numbers—whether from AI or automation solutions. These solutions also need to be reliable in daily production. If there are big problems with the implementation of a technology, it always creates a difficult business case. We are also talking about brownfield applications—about running machines that are producing parts and generating money—so the downtime of these machines has to be considered when implementing solutions. To sum it up: it should be a reliable solution, easy to implement, and accepted by the people, meaning also easy to use.
Michael Schertler: Maybe a comment on scalability—my claim is that without scalability, it is not possible to have a positive ROI on digitalization.
Florian Dorr: Correct. There is scalability in automating more and more workflows within a given plant, and then scalability across plants as well. And of course, what has changed now with generative AI is that we can automate workflows in a matter of hours, instead of the months and years it took when we had to develop conventional software applications. So the question is: how can we then scale those automated workflows efficiently across plants and across the value chain? And this brings us back to the data foundation. If you can't ground AI-powered workflows in trustworthy data, there's no way you can scale.
Michael Schertler: Jon, let me turn to you. You are coming from the data layer perspective—from the industrial knowledge graph, the contextualization of data. How do you see your role in enabling that kind of scalability?
Jon Lervik: I think the key challenge is that industry today has an enormous amount of data—but very little of it is actually put to work. We look at the industrial world and we see that most companies are sitting on vast amounts of OT data, engineering data, asset data, maintenance data, but it's siloed. It's not contextualized. It's not connected. And without that connection and contextualization, you can't really get the AI models to work reliably at scale.
So what we do at Cognite is essentially provide the data foundation—the industrial knowledge graph that contextualizes data from different source systems, whether that's SAP, OSIsoft PI, other historians, sensor data, CAD data, whatever it might be. And when you have that contextualized data, you can start building reliable AI models. You can automate workflows. You can close the loop from sensor to insight to action at scale. And I think that's really the key unlock for industrial AI deployment.
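In the simplest case, contextualization means linking raw historian tags to known assets, for example through an equipment code embedded in the tag name. The sketch below uses an invented naming convention and a trivial regex match; a platform like Cognite does this at scale with fuzzy matching and parsing of engineering diagrams:

```python
import re

# Minimal contextualization sketch: link raw historian tags to assets via a
# shared equipment code embedded in the tag name. Asset records and tag
# naming conventions here are invented for illustration.

assets = {"P-101": "Feed pump A", "V-204": "Separator vessel"}
tags = ["SITE1.P-101.FLOW", "SITE1.P-101.TEMP", "SITE1.V-204.LEVEL", "SITE1.X-999.MISC"]

def contextualize(tags, assets):
    """Map each tag to its asset when the tag embeds a known equipment code."""
    links = {}
    for tag in tags:
        match = re.search(r"\b([A-Z]-\d{3})\b", tag)  # e.g. "P-101"
        if match and match.group(1) in assets:
            links[tag] = assets[match.group(1)]
    return links

links = contextualize(tags, assets)
# "SITE1.X-999.MISC" matches the pattern but no known asset, so it stays unlinked
```

Tags that cannot be linked automatically, like `X-999` here, are exactly the residue that makes real contextualization hard and why it is rarely a one-shot batch job.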
Michael Schertler: I want to get to our last topic, which is AI adoption in the industrial space. We've heard that it requires interoperability, interfaces, the right governance for cybersecurity, and an architecture that works across different vendors and allows you to create a business case that actually works on the shop floor. From your perspectives: what do vendors and manufacturers need to do to make deployment even faster, safer, and more scalable? Let me start with you, Florian.
Florian Dorr: First of all, the biggest problem we are currently facing is the very heterogeneous quality of data—that's difficult to handle, and also difficult to process, whether by AI or by software tools in general. A good baseline for me would be if everyone could align on the same, or at least a similar, data foundation that we can all access. Then we can provide the data in a consistent way—consistent naming, labeling, and so on—and create a process chain that can access this data pool from end to end: from the first idea to the finished product, even during production. We also already have a digital twin of a running machine implemented in one facility, which is fed with real data and then improved with our simulation model during operation. We don't need to stop production, we don't need downtime—we can simply simulate which process parameters to improve to make the machine run faster, produce fewer failed parts, and so on. Having that data foundation for all these steps would help a lot. We all agree here on what needs to be done—it's just difficult to get there from the legacy data we have.
Michael Schertler: Timo, you made the point that physical AI raises the demands when it comes to safety certification, and that you need a lot of real-world data to get models safe and reliable. Now, that data often doesn't exist, so you can work with synthetic data and simulations. What do you think is possible and needed to close the gap between models trained in simulation and their performance in the real world?
Timo Kistner: Today we have actually reached exactly the point where we can close that gap. Think of a safety-relevant use case: do we want to reproduce something safety-critical just for the sake of collecting data on a real-world situation? Obviously not. We don't want to try that out. We don't want to destroy our machines or bring someone into a critical situation. So we need data that we simply cannot collect in such high volumes, because it doesn't exist. This is exactly where world foundation models come into play—foundation models that understand the physics around us and that we can leverage to synthesize the data we need: hundreds, thousands, millions of different scenarios that then become the input for all those physics-based simulation environments. So if you're in that situation today and thinking, "How do I get there? I don't even have enough data"—well, the tools are there. You can synthesize data to an extent that you can be confident it still represents actual behavior in the real world.
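The idea of synthesizing scenarios instead of collecting rare real-world failures can be illustrated with a toy domain-randomization loop: sample the physical parameters of a scene and hand each sampled scenario to a simulator. The parameter names and ranges here are invented, and real pipelines use world foundation models and physics-based simulators rather than plain random sampling:

```python
import random

# Toy domain-randomization sketch: synthesize training scenarios for a
# safety-critical grasp task by sampling scene parameters. Ranges and
# parameter names are illustrative only.

def synthesize_scenarios(n, seed=42):
    """Generate n randomized scene descriptions, deterministic for a given seed."""
    rng = random.Random(seed)  # seeded so experiments are reproducible
    scenarios = []
    for _ in range(n):
        scenarios.append({
            "object_mass_kg": round(rng.uniform(0.1, 5.0), 2),
            "friction_coeff": round(rng.uniform(0.2, 1.0), 2),
            "lighting_lux": rng.randint(100, 2000),
            "occlusion": rng.random() < 0.3,  # roughly 30% of scenes partially occluded
        })
    return scenarios

batch = synthesize_scenarios(1000)
print(len(batch))  # 1000
```

Each dictionary would configure one simulation episode; training across thousands of such randomized episodes is what lets a policy encounter the rare, dangerous combinations that no one wants to stage on a real line.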
Michael Schertler: And Rainer, coming back to you—as you really span the full stack from the OT side to the software side, where do you see misconceptions about these new technologies that slow down adoption? What should people think about differently in order to accelerate AI adoption in the industrial space?
Rainer Brehm: One aspect, building on what Florian said: I also head the automation topic at ZVEI, the German Association for the Electrical Industry. You know, thirteen or fourteen years ago, we launched an initiative called Industry 4.0, and we have since developed it further into what's called the Asset Administration Shell. So for a lot of what Florian mentioned, there are standard data formats that are very well suited to execute on what was just described. I think we simply need to implement them across industries. A lot of the basis is there, but we need to move from concept to implementation. That's a big task ahead of us. And beyond that, on what's truly necessary: it really comes down to reliability and ease of use.
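For a flavor of what such a standard data format provides, here is a heavily simplified, illustrative sketch of an Asset Administration Shell submodel for a brake-pad component. The real IDTA metamodel is far richer, and the identifiers, property names, and values below are invented:

```python
# Heavily simplified sketch of an AAS-style submodel: a standardized digital
# nameplate that any tool can query by a well-known idShort. Not the full
# IDTA metamodel; all identifiers and values here are illustrative.

submodel = {
    "idShort": "TechnicalData",
    "semanticId": "https://example.org/semantics/TechnicalData",  # invented IRI
    "submodelElements": [
        {"idShort": "Manufacturer", "value": "ExampleSupplier GmbH"},
        {"idShort": "PadThickness", "value": "12.5", "unit": "mm"},
        {"idShort": "ToleranceClass", "value": "IT7"},
    ],
}

def get_property(submodel, id_short):
    """Look up a submodel element by its idShort; None if absent."""
    for element in submodel["submodelElements"]:
        if element["idShort"] == id_short:
            return element["value"]
    return None

print(get_property(submodel, "PadThickness"))  # prints 12.5
```

The point of the standard is exactly this uniformity: a supplier, an OEM, and a tool vendor can all resolve `PadThickness` the same way without a bilateral integration project.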
Michael Schertler: Jon, your idea or wish—what could be done to further drive adoption and acceleration from the data perspective?
Jon Lervik: We all share the perspective: we want to drive towards autonomous industry, and that means we need to be able to close the loop in how we operate. That also means we need to have trust in the data layer and performance in the data layer. And I think, as Florian said, we also need more common industry standards on data modeling—on how you represent data. That can of course plug into physics-based or industrial foundation models. So my hope is a more unified industrial movement to leverage AI and move much quicker towards autonomous industry.
Michael Schertler: Thank you. I think it is becoming very clear and obvious that industrial automation is entering a new phase—a very exciting phase where factories become adaptive systems that sense, learn, and act across the value chain, and where intelligence is used across the full life cycle. Productivity gains are real and they can scale, but it requires investing in the right layers. It requires building the right ecosystem, which very often a single company cannot fulfill but needs to be orchestrated. We believe companies that move now, that move first, will be the ones that shape the rules of this era and that will reap the benefits of that trend. With that, big thank you to the panelists: Rainer from Siemens, Timo from NVIDIA, Jon from Cognite, and Florian from Schaeffler. Very much looking forward to keeping in touch and observing how we together shape the next era of industrial automation. Thank you.
Hannover Messe 2026: The Technology Is Ready. Are Manufacturers?