AI security is no longer optional, it’s urgent. In this episode of Threat Vector, David Moulton sits down with Ian Swanson, former CEO of Protect AI and now the AI Security Leader at Palo Alto Networks. Ian shares how securing the AI supply chain has become the next frontier in cybersecurity and why every enterprise building or integrating AI needs to treat it like any other software pipeline—rife with dependencies, blind spots, and adversaries ready to exploit them. They also explore "vibe coding" the practice of developers relying on instinct and intuition rather than rigorous review when coding with or around LLMs. It's a fun name for a very real risk. Whether you're a CISO, a developer, or anyone helping shape AI in your organization, this conversation is your guide to locking down AI before it locks you out.
Protect yourself from the evolving threat landscape – more episodes of Threat Vector are a click away
Transcript
[ Music ]
David Moulton: Welcome to "Threat Vector" the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.
Ian Swanson: We should put any AI live in any enterprise use case without securing it first. AI has a ton of problems. AI can drive major transformation within a company, you know, whether it's again, reducing operational costs through automating processes, all the way through to co-development, to improving customer experiences with better products. AI can do some pretty amazing things; however, it can go off the rails. It can generate malware. It can execute attacks that live within these-these artifacts. It can exfiltrate very important data. It can even lead to perhaps brand reputation if doesn't have the right guardrails. Even AI is so impactful, it can be so amazing in many different ways, we need to make sure that it's safe, that it's trusted, and that it's secure and that there really should not be any AI in any enterprise without security of AI. [ Music ]
David Moulton: Today, I'm speaking with Ian Swanson AI Security Leader here at Palo Alto Networks. Ian is a founder of three successful exits, most recently as CEO of Protect AI. Before that, he led AI/ML at Amazon Web Services and served as VP of Machine Learning at Oracle. Today, we're going to talk about securing the AI supply chain, why it matters, where the risks are hiding, and how leaders can take practical steps to close the gaps. [ Music ] Ian Swanson, so glad that you're here on "Threat Vector," we've had a little bit of a slow start here as we were getting started, but I'm expecting a great conversation with you today.
Ian Swanson: Hey, thanks David. I really appreciate you having me on "Threat Vector."
David Moulton: Ian, you had this really interesting journey and I noticed something off-mic when we were putting things together, the way that you think in memos and, you know, that alludes to your time at Amazon where you're leading AI there, now you're at Palo leading AI. Talk to me a little bit about that journey and, you know, how those different pieces string together to get to this moment and maybe what's the secret of what's next.
Ian Swanson: Yeah. So, I've been in AI for roughly 20 years. So, I've been pretty fortunate. Definitely AI is having its moment, you know, in the last three years, but I've had multiple companies that, you know, I helped start and was CEO of, that ultimately became successful exits to American Express to Oracle and now Protect AI which I started about four years ago as part of Palo Alto Networks. And the genesis for Protect AI was really when I was at Amazon. I was leading the worldwide AWS business for AI and I saw risks firsthand, and so we had over 80,000 customers, AI at Scale back when I was running that business and I didn't see any cybersecurity companies focusing on the risks, specific risks that can be inherit in artificial intelligence. So, I chose to start Protect AI to go after this kind of greenfield space and it's been a phenomenal ride over the last four years, and this past July, we finished acquisition; we're going to become part of Palo Alto Networks and it only took us two-and-a-half months to fully integrate the Protect AI product set into what is now called Prisma AIRS at Palo Alto Networks.
David Moulton: Well, I'm glad that you went on that journey, and as somebody who is involved in a lot of the conversations in and around the risks to AI from AI, you know, how do we use AI? I think this will be a very clarifying conversation and today we're going to get into AI supply chain, we're going to talk to you about your thoughts on what can go right and wrong when teams are too heavily reliant on things like vibe coding, so let's get into it. Let's go back to this concept of MLSecOps and it's a concept that I think that you helped pioneer, and maybe for the listeners that might be new to it, can you explain what MLSecOps is and why it's so incredibly crucial right now?
Ian Swanson: MLSecOps stands for Machine Learning Security Operations. It's a term that Protect AI helped coin about three-and-a-half-four years ago. And it really was a play on DevSecOps, you know, as we think about secure design and "shift left" moments and, again, there wasn't any company, or I should say, companies in general, were not focused on the security of AI. And so, we released frameworks around how you should think about securing AI all through the development lifecycle from how you build AI to when AI is in applications and in production workloads. And all that sat underneath this banner of MLSecOps, again, how do we secure AI by design?
David Moulton: I want to ask you something. A lot of times I'll get into these nerd conversations and somebody will say like, "Oh! The AI supply chain." And you just kind of nod along and you're like, I kind of have an idea of what that is, but before we get into some of the deeper questions, I think it's important that we define exactly what we're referring to when we talk about something like the AI supply chain, and then if you can go a touch deeper and say, what component should CISOs or security leaders really be paying attention to in that AI supply chain?
Ian Swanson: Yeah, no it's a great question. As I look at the supply chain, you know, clearly data is the fuel, you know, to AI and machine learning. And there's been a lot of security around data for the last 10-20 years, but what's something that is new that we really talk to CISOs about at depth is machine learning models themselves. And so, if data is the fuel, the machine learning model is the engine to an AI application. And there's a lot of these great foundational models that live in the open-source environments, and so, you know, you can go to Hugging Face which is the world's number one AI community where there's over two million models that companies are able to pull in and train on their datasets and release, but there's all these, you know, various model repositories-they have a really rich supply chain of building blocks that companies use as they are putting forth their AI applications. Now, what are the risks? So, again, often times when I meet with a CISO I say, "How many machine learning models do you have live?" A common answer that I get is somewhere between a hundred and hundred and fifty; the real answer is tens of thousands. And we have many customers that have hundreds of thousands of models that are live in production. And as we scan our team's devices, the network, the cloud we also need to scan machine learning models for risk. We need to scan the engine that powers AI applications. And within that engine, can be a lot of malicious code, unsafe operators, neural backdoors, and so that's one of the first areas that we tell companies to really look out for, because they have this deployed and have had it deployed in production a quite a large scale.
David Moulton: So, as you're talking about that, you're basically saying that the perception is they have a couple hundred and reality is they have tens of thousands, maybe even more, and that lack of visibility is, I suppose, a first problem, but then specifically, are there common AI or ML vulnerabilities that you see out there in the wild that companies are consuming today that really concern you?
Ian Swanson: Yes. So, I think there is multiple areas in the development lifecycle where there are hidden risks and important risks that CISOs need to pay attention to. As I said, if data is the fuel, you know, to AI, the engine is the machine learning model; we need to deserialize these models, look in them for risks, and we found real risks that if you deploy these in, for example, your cloud environment, it's going to try to steal credentials. It's going to try to exfiltrate data. But as those engines, those models go into AI applications, we should test drive these AI applications before we put them in production. What does that mean? Test, benchmark, evaluate, red team these applications and models before you put them in production at the point of inference, let's say in customer-facing applications. So, throughout this development lifecycle, we need to run continuous testing and find real threats. I'll give you an example of a threat. We saw within the supply chain of open-source some [audio cuts out]. We found a model pretending to be from a well-known health care life sciences company. It was a name squatting attack. It wasn't the company that put that model live, but it was an attacker, a malicious actor. And that particular model we saw was downloaded tens and tens of thousands of times. If you put that model within your AWS infrastructure and at the point of deserialization, one of its core goals was to steal and exfiltrate your credentials on your cloud. And, so we see a lot of attacks that ten-twenty years ago were just in the typical software supply chain that are re-manifesting themselves within the AI supply chain, specifically around data, models, and now agents.
David Moulton: And do you think that there is a tension between the securing and looking for those vulnerabilities and making some of those mistakes again, and then the desire to move fast that's causing a lot of organizations to quickly skip the validating where did this come from and just inserting it and moving quickly? What stops that human behavior?
Ian Swanson: Yeah. So, first off, I truly believe that there should be no AI in any enterprise without security of that AI. And if you take a look at the boardroom conversations over the last few years, the CEO is saying, "We're going to move faster with AI. We're going to reduce our operational costs. We're going to improve the customer experiences of our various products." But in that same boardroom they're turning to the CISO and saying, "What are you doing to make sure that this is safe, that this is trusted, and that this is secure?" And so, at that point, they ride that fine balance David of how do they act as an enabler to all these innovative teams, but yet do it in a way that is, again, safe, trusted, and secure? And it's a healthy dialogue and one of the most important things is this is a truly a team sport of how we develop AI that drives true value, that is secure. We need to make sure that these teams are having a discussion; that they're both being educated on the risks, but also the opportunity of AI.
David Moulton: So, you mentioned a moment ago this idea of name squatting and that's an old tactic, and it's showing up in a new domain. Are there other attacks, other ways that threat actors are out there that you're seeing as real-world campaigns or is really some of this very theoretical and, you know, hypotheticals? I'm curious, if you can illuminate that for our audience.
Ian Swanson: Yep. So, as I said, we see attacks that manifest within the supply chain, you know, within data, within models, but a lot of attacks are also happening at runtime. So, let me explain runtime. Runtime is at the point of inference. It's inline. It's in the example of let's say a chatbot, as somebody is communicating with this chatbot, they're giving a prompt and they're getting a response. And in many cases, we see malicious actors that are trying to fool these AI systems and manipulate them to leak sensitive data; leak personally identifiable information, bank information, you know, on that. And so, this is at the point of runtime that we need to take a look at all the inputs, all the outputs, run it through many different security checks across security, but also safety concerns for brand reputation. So, there is two places that attacks manifest; at the point of when you're building AI within the supply chain. It can be third party as well as first party threats, but also as you get in production, you need to be-have constant guardrails looking at all the inputs and outputs, including embeddings, code, tool calls and agentic workloads, and you run that through a series of policies that make sure that we shield ourselves from attacks at the point of inference.
David Moulton: So, Ian, I know you've talked about this idea of vibe coding quite a bit. First off, what is vibe coding and then why is it so especially dangerous in AI development?
Ian Swanson: Yeah. Vibe coding is a slang term basically for a process in tools where a developer is able to utilize AI for code generation, and there is a lot of amazing companies, startups, as well as big cloud providers that have many different vibe coding solutions. It really is a force multiplier for a development team. I see, within our team, thirty percent gains, you know, in terms of just the value and the velocity that happens there. Now, the challenge is, how do we make sure again that the AI doesn't go off, you know, off the rails and introduce malicious content, malicious URLs, that it's not as it's understanding what is it that it's going to build and it's not reaching out to repositories that might have a what's called "indirect prompt injection" attack. The bottom line is as you are vibe coding with these solutions, you have AI on the side that is making plans, it's perceiving, it's executing steps and you need to make sure that you have controls there otherwise these processes can perhaps go rogue. And that's where we see that there are solutions for, again, runtime security checks looking at all the code to make sure that we're not introducing anything malicious within our various environments.
David Moulton: So, as you were talking about that idea of thirty percent gains; was it thirty percent? Did I get that right? That's-.
Ian Swanson: Yes.
David Moulton: That's incredible.
Ian Swanson: Roughly.
David Moulton: Right?
Ian Swanson: Yeah, it is.
David Moulton: And-and you have this allure, right? Like, I've got an idea, I want to go faster, I want to be able to build something quicker, maybe the blank page or the blank cursor, you know, that's a daunting space to be in, so you're able to build the boiler plates and some of the different pieces of code very, very quickly and I can understand why you would want to go fast, but how do you advise teams to strike the balance between the dangers of using an LLM to pulling in some of that malicious code or, you know, having an injection and also then still getting that value out of it so that it's fast and safe?
Ian Swanson: Yep. So, I think a lot of our development teams want to use these tools. And so, it's really important that security enables the tools because of all the efficiency gains, you know, that I, you know, previously brought up. That being said, we do see that these engineering teams are cognizant of the fact that there could be introduced security concerns. So, they're okay with security teams putting in some guardrails, runtime protections, analyzing all of the code at the point of read and write, you know, on it. And they're okay with maybe adding a hundred milliseconds of latency to be able to go through these various security checks, again, because we're able to enable them with the tool that's incredibly powerful. Now, these tools-I'll give a quick analogy, if I gave a Formula 1 car to my daughter who just learned how to drive and she's 16, she'll probably cmash it a wall. That car is a little bit too high-performance. But if we give a Formula 1 car to a highly trained driver, they're going to just smash it, you know, on the race course. They're going to be able to use that car to its fullest potential. I think the same way in vibe coding, I see my senior distinguished engineers building incredible things, but also doing it in a safe and trusted way. But if I give it to maybe somebody a little bit more junior that doesn't understand the basics of security or the foundational building blocks of engineering, sometimes things can get a little bit hairy there. And one way to build and control it in an even and balanced way is to put in place security at the point of inference so that we can make sure whether you are a very senior distinguished engineer or a junior engineer that just came out of the university system; we basically put a firewall in between all the interactions that you have with these vibe coding solutions to make sure again that they're safe and secure. [ Music ]
David Moulton: You know, a couple months ago I got to talk to our CIO, Meerah and she talked about needing security to be a strong break so that she could go fast. And I think you're getting at the exact same point of, you can go fast, you have the Formula 1, I don't recommend it for a 16-year-old, but you know.
Ian Swanson: Yeah.
David Moulton: Maybe your daughter really likes to drive. But you do need that strong break and sometimes it needs to kick in for you before you go and-you said "smash it out there" I know what you mean, but you don't want to wreck, right?
Ian Swanson: Yeah.
David Moulton: So, I like the idea that there is this moment of here's the risk, as a business you can accept the risk, because the reward is so good, but we're going to put in medications and be thoughtful about, you know, where could things go wrong and how do we stop before there's a real problem? I think one of the ways that you can go and determine if those-those controls, those hypothetical controls that are going to stop things actually works is through red teaming. And I'm curious what your perception of where does red teaming play in securing an AI system?
Ian Swanson: Yep. First off, red teaming is incredibly important. And so, where does it live within the development lifecycle? Again, if data is the fuel and engine to an AI application is the model, when we put that engine into the AI app, we need to be able to test drive it. We need to be able to red team it before we put it in production. And so, it's incredibly important that before any application goes live, we need to test it and so that testing-what is that? First off, it's continuous and it's integrated within the developer toolsets. That as various points of the lifecycle as they are training these models, we need to red team; as we put them in production we need to red team it. And what we're doing is we're running these series of attacks. We have a couple different approaches to this Palo Alto Networks. Number one is we have this stack attack library that's tied to all these frameworks, NIST, MITRE ATLAS and more, and it runs through these series of attacks and creates a scorecard. And that scorecard right now is being used by our customers as they make go/no go decisions for these AI applications. They're using that scorecard as it starts to build policies that they're going to put in production to enforce guardrails around data leakage; how susceptible these applications might be to prompt injection attacks and stop that. So, it's really important that we test and that we understand where the applications are vulnerable to one, improve on the application side, perhaps improve the model, but two, to inform these runtime policies so that we can de-risk the application even further mimicking it as if it was in a production environment.
David Moulton: And I think it's important too to run those red teams, because you've got that human ingenuity, that craftiness, and you're going to go in and say like things don't stay the same over time, what didn't work before might work now, something else comes up and you've got that continuous attack going on so that you've built your resilience and you've hardened things. Another-another area that has emerged as more and more tools have become available to everyone in the enterprise, everyone in a business, is this idea of Shadow AI, right? And I can't imagine the difficulty of getting your arms around Shadow AI as a leader. The tools are better than what we had before in whatever work we're doing, and we want those tools. And so, sometimes you skirt the rules, right? You say like, I'm going to go around. I'm going to use a different model and I'm going to skip the governance and not necessarily acknowledge that visibility is needed. How do you talk to security leaders and advise them on getting that under control?
Ian Swanson: So, that's really the first step, you know, which is discoverability, discover all the assets that you have and what makes this incredibly complicated in the AI space, is that these assets can live in multiple different forms and places. And so, let's think about an agentic workload, you know on that, they can live on end devices, they can live on your laptop, they can live in your infrastructure, on premises, in your cloud and then you also have SAS agents, you know, that live within Salesforce to ServiceNow and many of these SAS solutions that are opening up to agentic workloads. So, Shadow AI takes form in many different places and I like to break it down two categories; number one is employees' usage of AI. So, whether it's through the browser, your other methods, we need to understand how we can govern what you're team members are able to access, you know, what generative AI solutions, like perhaps ChatGPT. What are they allowed from a governance perspective to share with these solutions? Like a complete visibility on the employee usage in all the applications that happen on that side. The other side is as you build, train, deploy models, AI applications, agentic workflows, we need to figure out where all those live and we need to bring that to light so that we have visibility and then the next step is, how do we audit and assess risk on all these assets?
David Moulton: So that visibility piece that you're talking about, is really critical and I'm curious Ian, if you were to say like what's the one first task or policy that you would recommend for that discovery, you know, and monitoring where do you-where do you tell concerned leaders to go first?
Ian Swanson: Yeah. So, again I break this down in those two different categories. On the employee usage, it's "Hey, let's start figuring out from a browser perspective all the AI that's being consumed and used and catalog it." And then on AI that's being built and deployed, it's really going into your buckets, or object storage, you know, trying to understand where are all these artifacts? And then once we get a handle of all of the artifacts, from there we can start to assess what's the real risk; scan them if they're models. If it's applications, how do we red team them? If there is agentic workloads, let's start to understand the security profile of the identity, the permissions, the tools it has access to. And so, again, two different areas; the employee side, start at the browser and then as you're building your own AI applications or perhaps using SAS, you know, AI solutions, start to scan all of your infrastructure to see where that lives.
David Moulton: And so, I want to close the loop a little bit here on something you said earlier where organizations think that they've got, you know, hundreds and ends up with tens of thousands, this is exactly what you're talking about right?
Ian Swanson: Yeah. So, I was-I was meeting with a top ten bank, you know, just in the past few months and I asked the question. "How many models do you think you have live?" And they said, "In the hundreds." And I said, "Message or teams, your AI teams?" So, I was talking to the security team. And they messaged, you know, their team and their team came back and said, "No, we actually have," you know, I think it was like ninety-three thousand, you know, plus models. And that was enlightening to the security team. The security team was like, "Wait a minute. Where are all these models? Who's building these models? What are third party assets and supply chain versus first party?" And it started to really paint the picture for them of "Wait a minute. Maybe we don't have the visibility that we thought." And in some cases, it requires different tools and that's why we invented this category of secure AI and we brought it over to Palo Alto; is to help customers understand the differences, but provide them unique tools that can actually assess the risk and stop it as well at runtime.
David Moulton: I have a question that is related to your role starting and building Protect AI and now you're here. You know, you talked a little bit about the gaps that you saw in the industry. It drove you to start a company. And now you're inside of Palo Alto Networks, which has maybe we'll say a slightly bigger view of things, maybe more data. How has your thinking evolved now that you're part of the Palo Alto Networks enterprise?
Ian Swanson: Yeah. So, first off, the acquisition of Protect AI closed at the end of July 2025. Within two-and-a-half months, we completely integrated all of Protect AI's offerings into what we call Prisma AIRS at Palo Alto Networks incredibly fast. One of the ways we were able to do that is we had a lot of micro services within our architecture that we were able to, you know, kind of tie together if you will with Palo Alto's existing offerings. And what we were able to get was a truly better together scenario where Palo Alto had amazing capabilities in data loss prevention, all the way through to detecting malicious URLs; things that kind of added a lot more foundational capabilities to what we also brought to the table with Protect AI. And so, one of the learnings, to answer your question, was Palo Alto Networks was already doing a ton in this space. And we were able to really provide our customers with this one plus one offering, Protect AI plus Palo Alto Networks that solves end-to-end AI security. I hear from some other, you know, people in the space that we have a platform for AI security, but they only work with AI security at the point of production. We go all the way back into how it's being developed, all the way through to it being in production. And we were able to do that from the power of all sorts of integrations we were able to get from Palo Alto Networks. We were working with the Cortex Cloud team of how we pull in posture. We were able to share with these teams what we think about security of model artifacts as they see them in S3 buckets perhaps. So, there's all these capabilities we were able to bring under, you know, this common tent and platform that is Palo Alto Networks to be able to secure AI end-to-end.
David Moulton: So, we're sitting here in early 2026.
Ian Swanson: Yep.
David Moulton: I'm wondering what you see is the biggest blind spot for security leaders when it comes to AI risk this year?
Ian Swanson: So, I'll say right now, I meet with probably seven to ten CISOs every week, chief information security officers. My broader team meets with many more than that. So, we're having a lot of leadership discussions. And top-of-mind for all these CISOs are these agentic workloads and, specifically, where they start to lose control on visibility as it relates to building within software as service offerings. So, one of the first things that we did here at Palo Alto Networks, as it relates to agent security, was leverage all of our great partnerships to be able to build in native security solutions that can give our customers a single pane of glass of all the AI agents that are being worked on in SAS environments. So, think ServiceNow, Salesforce, Microsoft, Foundry, we've developed native integrations into all of that which is incredibly powerful and it's the thing in 2026 that is, I think, the biggest risk which is agents, because we've given AI arms and legs to go carry out tasks. We need to make sure they don't go rogue.
David Moulton: If there is one thing that a leader could do to improve their AI security this quarter, what would it be?
Ian Swanson: That's a good question. I think it starts with education. And that education starts to drive that visibility of understanding all their AI assets and where they live. Why did I start with something as simple as education? I'll tell a story. I was on a call with a top five software company in the world probably a year-and-a-half-two years ago, where this situation took place. Where their research team was in one room, their security team was in a different room, Zoom meeting, cameras pointing down. And we're talking about all the security risks, my team is with them, and the research team goes off mute and they say, "Ian, we're going to leave early because none of this really pertains to us. We are the AI research team. We don't put stuff in production. We do experiments. It doesn't apply to us." I didn't have to say a word. The application security team at their own company comes off mute and goes. "Wow! Wait. Are you pulling in AI artifacts from third party's supply chain?" Their answer was, "Yes." They said, "Are you training it on our customer data?" The answer was, "Yes." "Are you putting it in this particular cloud environment?" "Yes." And the security team came back and said, "Hang on here. You are pulling in grenades and pulling pins and you don't even know it." But I would also say the security team didn't know it, right? And so, what needs to first start with is this team approach of let's understand the gaps that we have. The AI teams might understand security risks. The security teams might be blind to all the AI that's already in development and how it's being developed within companies. So, even though it's really simple, I think we need to start with internally at a company, let's catalog and let's understand all the AI that's being built and that needs to happen through conversation across all these teams.
David Moulton: Yeah, I like that. And it reminds me of a conversation I had with Noelle Russell last spring and she talked about this idea of a baby tiger and that research team. You know, they're just building something. It's a cute little tiger. We're not sure what it's going to do, but it suddenly grows and it grows quickly and it still has its claws, it still has its teeth, as you said it's a grenade with the pin pulled, right? These things get big fast and all of a sudden you're left with a menace. You're left a lot of risk and if you're not communicating that well, if you're not aware of what's going on or aware of the risk that you're accepting when you bring in code from somewhere else and put it in your own systems, train on your own data, that's a, you know, sure maybe they didn't know. But it's a wildly risky behavior and I like the idea that, you know, transparency and a lot of good conversations allow for a company to understand what they're doing or any team to really understand what they're trying to secure. Ian, thanks for this awesome conversation today. I really appreciate you taking the time to hop on "Threat Vector" and share your insights around the AI supply chain, and some of these recommendations on education and specific behaviors that you're seeing out there that are risky and how leaders can reduce their risk around AI.
Ian Swanson: Thanks for having me David. [ Music ]
David Moulton: If you like what you heard, please subscribe wherever you listen and leave us a review on Apple Podcast or Spotify. Your reviews and feedback really do help me understand what you want to hear about. I want to thank our executive producer, Michael Heller, our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Original music and mixed by Elliott Peltzman. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]