Not long ago, I joined a call with one of the largest software companies in the world. It was the kind of meeting where cameras were off and everyone was multitasking. The AI research team was on the line, and so was the application security team. They were technically in the same meeting, but strategically they were in different worlds.
About 10 minutes in, an AI researcher spoke up. “This security discussion doesn’t really apply to us. We are experimenting. Nothing is in production.”
Before I could respond, the head of application security stepped in. “Are you sourcing models from external repositories?”
“Yes.”
“Are you training them on customer data?”
“Yes.”
“Are you running this in our cloud environment?”
“Yes.”
There was a long silence. “You are importing significant risk,” the security leader said, “and you do not even realize it.”
That exchange reflects what is happening inside many enterprises today. AI innovation is accelerating at extraordinary speed. Boards are asking how fast the organization can move. Meanwhile, security leaders are trying to determine what is running in their environment.
The conversation now is about understanding what is under the hood, not slowing innovation.
We Secured the Fuel, but We Ignored the Engine
For more than a decade, enterprise security strategy has focused on protecting data. In the context of AI, data is the fuel. Organizations have invested heavily in data protection, encryption, governance and privacy controls. Those investments were necessary.
But if data is the fuel, the machine learning model is the engine.
Today, enterprises routinely download pretrained models from public repositories, integrate them into internal systems, fine-tune them with proprietary data and deploy them into production environments. In many cases, these models are treated as opaque components. They are assumed to be safe because they are popular or open source. This assumption is flawed.
Public model hubs host millions of models. The ecosystem drives remarkable innovation, but it also creates opportunity for abuse. We have observed models impersonating trusted brands through name squatting techniques. Some of these models were downloaded thousands of times before anyone recognized that they attempted to exfiltrate credentials or execute malicious code. In these scenarios, the compromise is in the model itself.
When I ask CISOs how many machine learning models exist in their environment, I often hear confident estimates in the low hundreds. After scanning cloud storage, developer endpoints and container registries, the actual number is frequently in the tens of thousands. In one financial institution, the gap between perception and reality was more than 90,000 models.
That level of blind spot would be unacceptable in any other domain of cybersecurity.
The Velocity Problem
At the same time, development velocity has changed. Generative AI tools are amplifying productivity across engineering teams. Developers are using AI to write code, refactor systems and build new services at unprecedented speed.
In the right hands, this acceleration creates a significant competitive advantage. Speed without control, however, introduces risk.
Advanced AI tooling is a high-performance vehicle. Experienced engineers can use it to build resilient, secure systems. Less experienced practitioners can unknowingly introduce vulnerabilities, insecure dependencies and flawed model integrations at scale.
The objective is to ensure that governance, visibility and control mechanisms evolve at the same pace as innovation. In security terms, velocity demands stronger control planes.
Moving Beyond Experimental AI
Many enterprises still treat AI as a pilot initiative. It is viewed as experimental or contained within innovation teams. That framing is increasingly inaccurate. AI systems now influence customer interactions, operational workflows, financial decisioning and product development.
When AI moves from the lab into enterprise infrastructure, it inherits the same accountability requirements as any other critical system, which is where Machine Learning Security Operations becomes essential. MLSecOps applies operational discipline to the unique characteristics of AI systems. It recognizes that models are probabilistic, they can contain hidden behaviors, and they may originate from complex supply chains.
For CISOs, three imperatives stand out:
- Establish Comprehensive Visibility
You cannot protect what you cannot inventory. Model discovery must extend across cloud storage, developer workstations, build pipelines and runtime environments. Organizations need to know precisely how many models exist, where they reside and how they are being used.
- Assess the Model Itself
Traditional application testing is insufficient. Security teams must evaluate models for prompt injection susceptibility, data leakage risk, hidden backdoors and supply chain manipulation. The model is an executable intelligence layer.
- Unify Research and Security Functions
In many enterprises, AI research teams and security teams operate in parallel. That separation creates risk. Cross-functional governance, shared review processes and aligned accountability structures are critical. Security cannot be an afterthought once experimentation becomes deployment.
AI is arguably the most transformative technology of this era. It has the potential to reduce costs, increase efficiency and unlock new revenue streams. But, transformative technologies also reshape the threat landscape.
Data security remains foundational, although it is no longer sufficient. The enterprise must secure both the engine and the fuel.
AI models should not be treated as mysterious black boxes. They are powerful computational systems that require inspection, validation, continuous monitoring and governance before they are trusted in production.
Speed and security are not opposing forces. In the AI era, security is what enables sustainable speed. AI should not be in any enterprise without an AI security strategy embedded at its core.
Editor’s note: Ian shared these thoughts in the Threat Vector podcast, “Securing the AI Supply Chain.” Catch the whole story and listen to the full podcast.
Curious about what else Ian has to say? Check out his other articles on Perspectives and AI Security Nexus.