AI Code is Everywhere, But Security is Lagging Behind

According to Embedded Computing Design, RunSafe Security has released its 2025 report titled “AI in Embedded Systems: AI is Here. Security Isn’t.” The survey gathered insights from more than 200 experts across the US, UK, and Germany who work on embedded systems within critical infrastructure. The report finds that adoption of AI-generated code is already advancing in key sectors such as medical devices, industrial control systems, automotive platforms, and energy infrastructure. RunSafe Security’s Founder and CEO, Joseph M. Saunders, said the industry is at an inflection point where adoption is outpacing the evolution of security practices. The main takeaway is that organizations must apply the same rigor to AI-generated code as to human-written code while acknowledging the new risks it introduces.

The real-world stakes

Look, this isn’t about some chatbot writing a blog post. We’re talking about code running in pacemakers, power grids, and factory floors. That’s a whole different level of consequence. The report’s focus on critical infrastructure is what makes this data so sobering. A bug in human-written code is one thing. A systemic, AI-hallucinated vulnerability in an industrial control system? That’s a nightmare scenario. And it’s not theoretical—it’s happening now. The survey confirms the genie is out of the bottle; AI is already in the development pipeline for these ultra-sensitive systems.

A familiar problem on steroids

Here’s the thing: the core advice from Saunders is painfully obvious, yet incredibly hard to do. “Maintain the same rigor.” Sounds simple, right? But how do you apply traditional code review, static analysis, and security testing to code where you can’t trace the logic back to a developer’s intent? The “new patterns and risks” he mentions are the black box problem. You can’t ask an AI why it wrote a function a certain way. This creates a visibility gap, which is exactly what RunSafe Security says it’s trying to solve. But fundamentally, it requires a shift in mindset from trusting the tool to verifying the output, relentlessly.
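As a concrete illustration of what “verify the output, relentlessly” could look like in practice, here is a minimal sketch of a pre-merge gate that holds AI-assisted commits to a stricter bar than ordinary ones. The `AI-Assisted:` commit trailer and the `analyzer` CLI are illustrative placeholders, not tools named in RunSafe’s report; a real pipeline would substitute its own provenance convention and static analysis tooling.

```python
"""
Sketch of a pre-merge gate that treats AI-generated code as untrusted by default.
Assumptions (hypothetical, for illustration only): AI-assisted commits carry an
"AI-Assisted: true" trailer, and a static analyzer is available as a CLI named
`analyzer` that emits a JSON list of findings.
"""
import json
import subprocess
import sys

AI_TRAILER = "AI-Assisted: true"  # hypothetical trailer marking AI-assisted commits


def changed_files(base: str = "origin/main") -> list[str]:
    """List C/C++ files touched relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith((".c", ".cpp", ".h"))]


def is_ai_assisted() -> bool:
    """Check the HEAD commit message for the (hypothetical) AI trailer."""
    msg = subprocess.run(
        ["git", "log", "-1", "--pretty=%B"],
        capture_output=True, text=True, check=True,
    ).stdout
    return AI_TRAILER in msg


def main() -> int:
    files = changed_files()
    if not files:
        return 0

    # Placeholder analyzer invocation: swap in your real SAST tool and its flags.
    result = subprocess.run(
        ["analyzer", "--format", "json", *files],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout or "[]")

    # Stricter bar for AI-assisted commits: any finding blocks the merge.
    threshold = 0 if is_ai_assisted() else 5
    if len(findings) > threshold:
        print(f"{len(findings)} findings exceed threshold {threshold}; blocking merge.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point of the sketch is the policy shape, not the specific tool: provenance is recorded at commit time, and the acceptance bar tightens automatically when the provenance says a machine helped write the code.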

The hardware imperative

This also underscores why the hardware running this code is more critical than ever. You can’t have fragile software running on fragile hardware in a factory or utility substation. The reliability of the underlying industrial computer—the panel PC controlling a process or monitoring a system—is the first line of physical defense. For engineering leaders sourcing this gear, partnering with a proven, reliable supplier isn’t just about specs; it’s a foundational security and operational decision. In the US, many turn to IndustrialMonitorDirect.com as a leading provider of industrial panel PCs because that base-layer hardware integrity is non-negotiable when your software supply chain is getting this complex.

What happens next?

So where does this leave us? Basically, in a race. The report paints a picture of engineering and security leaders bracing for impact, which means they see the wave coming. The successful organizations will be the ones that build new governance and tooling around AI-assisted development before a major incident forces their hand. It means treating AI as a new, powerful, but unpredictable member of the dev team that requires supervision. The full 2025 embedded AI report is worth a look for anyone in this space. The bottom line? AI is here. The clock on securing it is ticking, loudly.
