Recorded Live at the Canopy Hotel in San Francisco, David Moulton speaks with Noelle Russell, CEO of the AI Leadership Institute and a leading voice in responsible AI. Drawing from her new book Scaling Responsible AI, Noelle explains why early-stage AI projects must move beyond hype to operational maturity—addressing accuracy, fairness, and security as foundational pillars. Together, they explore how generative AI models introduce new risks, how red teaming helps organizations prepare, and how to embed responsible practices into AI systems.
From the Show:
Protect yourself from the evolving threat landscape - more episodes of Threat Vector are a click away
Transcript
[ Music ]
Noelle Russell: The most important thing is to be a doer, not a talker. To learn by doing. It's not hard. Today you use your words to create a model. Start building models, not just using them, because that's going to teach them about security, accuracy, and, you know, all the things that they're already doing in their role, but how AI's going to impact it. So yeah. Become a doer. Get out there. [ Music ]
David Moulton: Welcome to "Threat Vector," the Palo Alto Network's podcast where we discuss pressing cybersecurity threats and resilience and uncover insights in to the latest industry trends. I'm your host David Moulton, director of thought leadership for Unit 42. And today I'm speaking with Noelle Russell, founder and chief AI officer at the AI Leadership Institute, Microsoft MVP, in responsible AI, and one of the most influential voices in the AI space today. Noelle is a multi award winning futurist and an executive AI strategist whose career spans roles at Amazon Alexa, AWS, Microsoft, IBM, Accenture, and NPR. And now she's the author of a powerful new book "Scaling Responsible AI: From Enthusiasm to Execution" where she outlines the framework and principles that organizations can use to scale AI ethically, securely, and effectively. I downloaded the PDF copy of the book and got in to it as far as I could before I said, "You know what? I need to have a conversation with you about it." And today we're going to talk about AI leadership going from prototyping in to production and then how organizations can rapidly adopt what they're doing and generative AI and what is the tipping point, that balancing innovation with risk, speed, and responsibility. So Noelle your book "Scaling Responsible AI: From Enthusiasm to Execution" I think it's already making waves. And I especially liked your baby tiger metaphor. And I see you've got your baby tiger with you today.
Noelle Russell: Bruiser.
David Moulton: Bruiser. I love the framing. It's both cute, but you know baby tigers are dangerous if mishandled. Can you tell us where that analogy came from and what you want business leaders to take away from that analogy?
Noelle Russell: Absolutely. It actually came from my journey as you mentioned. Yes. I've worked at a lot of companies. The interesting thing about my career is that I always at the -- I end up at these companies before they've done a thing, before they've gone in to the world of Amazon Alexa or before they've launched cognitive services at Microsoft. And so I was at Microsoft. I was hired to help the research organization productize AI. So they had 17 research models that were going to be in my purview and I immediately thought of them like I would use the term herding cats. And so herding cats kind of transformed in to this concept of a tiger because cats aren't that fierce. And I'm a cat owner, but you know you don't want a bunch of cats around, but they're more nuisance than like a danger. And so I realized like I needed to change that a little bit and so we ended up with a tiger. And that metaphor though has now become even more interesting over time because now we're looking at, you know, I always will tell people when you start an AI project you start with this like adorable cute little model that you think, you know, it does novel things, trite things. It's exciting. Everyone loves it. People want to be on the team. And then at some point you're hoping someone will go, "Wow, baby tiger. Like how big are you going to be?" Or "What are you going to eat?" Or "You have razor sharp teeth. Like how much do you have to eat? Where are you going to live? What happens when I don't want you anymore?" Like no one asks that in baby tiger mode. And so that's how this book was created was literally I was like what happens when -- like it's still a baby tiger, but like nobody's asking these questions. So.
David Moulton: What happens when it grows up?
Noelle Russell: Yes. How do we, you know, avoid -- yeah. Baby tigers become big tigers and big tigers eat people. Right? Like so --
David Moulton: Yep.
Noelle Russell: Let's be careful.
David Moulton: Let's talk about balancing AI risk and reward. Right? You've worked with a lot of organizations at every stage of AI maturity. What are some of the most common risks you see when organizations try to scale the AI too quickly?
Noelle Russell: Yeah. So I'd say the risks are always in three basic buckets. One. Accuracy. So they don't care that much. They're not thinking that a model could really be very wrong or grow wrong over time. Right? In baby tiger mode you're like, "Oh my gosh. I love this model. It's doing a really good job." And the job of a generative model specifically is to give you an answer that's pretty good. So you get this pretty good answer and you're like, "Well, that's not bad." And that becomes your bar as opposed to the floor. It becomes kind of the ceiling of like well this is pretty good. We should keep going. And what happens on day two or when you get to the point where you're scaling is all of a sudden now this model's ground truth begins to change in the spirit of machine learning. Right? Model drift begins to happen unless you're monitoring it. In research that's the whole job is to build the hypothesis, test it, monitor it, repeat. When you move research in to production those people don't know to do that. So accuracy is huge. The second one is fairness, is just making sure your models not only do they not lie, but how do they maybe even not hurt the people we're trying to help? Right? So like if I'm a financial company I want to make sure that I'm not accidentally leveraging a data set that's going to tell my model to disenfranchise women over men or, you know, zip codes, one zip code over another.
David Moulton: Of course.
Noelle Russell: And then the final one which is right in your wheelhouse is defense and security. It started off in kind of military domain, but now it's really much more in just general like let's make sure that this model does not -- it's going to increase your attack surfaces. But as it increases it let's not exponentially grow the threat to your organization with that. And that means you have to have people that care about all three of these things and they're not always the same person. As a matter of fact they're almost never the same person. The people that care about accuracy kind of they're the guys that plan their vacations on an Excel spreadsheet. And then, you know, there's a whole wonderful like area in the business that cares about belonging and inclusion. Like so now there's a space for them to make sure models are fair and just. And then of course we have an entire security organization and that should definitely be included in these AI journeys.
David Moulton: So accuracy, fairness, and security. And that early stage making sure that you've found the Venn diagram or the balance between those things and/or remember to invite everyone in while you're in that early stage mode. So that's something you've seen over and over. Let's talk about the security beats. How can CISOs or maybe the risk officers evaluate when an AI use case is ready for that deployment, that production, that you're talking about?
Noelle Russell: Well it's kind of like that the whole baby tiger analysis. Right? Like how do you ask? The number one thing you want to ask is what's the worst thing this thing can do to somebody? And a good data scientist will not have thought about that typically because they're very narrowly focused on the hypothesis and the result of that hypothesis being positive. There's no value in spending research energy investigating the worst thing that could happen. But I always say to people like you're one acquisition away from the death star. Like you need to make sure that building these things securely, safely, responsibly, that it's woven in to the fabric of what you do. You can't sprinkle it in at the end or have a checklist that says, "Make sure you're, you know, safe and secure." Like it has to be woven in so that if somebody, you know, acquires that technology it can't be plucked out. I often will say like a raisin in a bun or a chocolate chip in a muffin. It has to be more like water in a wave.
David Moulton: Okay. So it's part of the DNA. It's part of sort of the ethos of building it as opposed to something that's bolted on at the very end.
Noelle Russell: That's right.
David Moulton: I think I've heard that story of security before. So you've got this secure AI framework. And that helps organizations align AI with security and compliance. What's security's biggest blind spot in those AI deployments?
Noelle Russell: It's probably a similar blind spot to what most executive leaders in the security space face right now which is alignment. Right? Like right now when I go in to an AI organization there's one of two things might have happened. One. The technology team, the data scientists, have driven the development of the thing. Whatever it is. Some problem that they uniquely want to solve. Or the business has driven it and it's driven by productivity or operational excellence or profit. In neither of those two cases are they inviting security or legal to the conversation. And when I started building these systems I was at Accenture and I helped build a generative AI practice from nothing to about 300 million a year in 9 months. So it was a rocket ship. And this was of course the golden age of November 2022 to like September '23. So this is why nobody knew what was going on, but if you did, and I was an MVP at Microsoft so I got a chance to play with GitHub Copilot before it became GitHub Copilot -- so I knew what it could do. So I would go in to these clients and be like, "No. You should do this right now. It's 2023. I know nobody knows this yet and everyone's still learning. But this is going to be awesome." Yeah. So that drove -- you know I think the executives really will struggle with this. Like how do you get in the room? And that's what I do now is I do a lot of executive education where I just tell the executives, "It starts with legal, security, DevSecOps." Like those people need to be there first number one because this technology is going to sit in an infrastructure you've already created. You're not building in most cases. You're a new rack to stick this model in. Right? It's going to go in your already secured cloud architecture. And I will tell people AI is as secured as you are. The danger of course is that most companies aren't that secure.
David Moulton: We've noticed that as well.
Noelle Russell: Yeah. Yeah.
David Moulton: So when you're thinking about those blind spots maybe it's something around AI governance that could help out. Do you recommend integrating an AI governance practice with the cybersecurity programs?
Noelle Russell: Absolutely. I mean my hope is, and I know that this is not true so fun fact, when I go in to an organization and I build out a solution with them I will get them to preapprove a -- like let's say I'm going to generate, I don't know, a 25% increase in ROI for the investment they made in the project. I will tell them to take half of that and dedicate it to cybersecurity because it's money they haven't made yet or, you know, profit that they don't have. They haven't spent it yet. So I'm like for every net new dollar 20 -- and I would say 50 or 75% if I could, but 25%. I've been able to sell that to executives to preallocate benefit from an AI project. And the reason I say that is because that is one of the biggest challenges. The good news, though, is that you do already have some governance structures in most organizations. Data governance is ultimately AI governance. Like they are the same thing. It is an evolution of the same process. So start with your -- or your data governance team. And expand the scope to now include AI systems. The challenge of course is that that data it's not clear because you don't own the data. But there are tools. So I teach a lot of my clients about the tools that are available, open source tools for measuring accuracy, ll of the things we talked about. Accuracy. Security. Fairness. There's an organization at Stanford. Human centered AI. They created a tool called helm, the holistic evaluation of language models.
David Moulton: Okay.
Noelle Russell: And this is a tool that anyone can use. It's not technical. But every model you pick, every baby tiger you pick, like I go in to companies and there's little AIs running all over the place like shadow IT. And I'm like, "So when you picked that model did you know that it had a propensity to lie?" Like there are models that will lie more than others. Did you make that intentional choice? And they're like, "I didn't even know I could, that they had metrics for that."
David Moulton: Yeah.
Noelle Russell: So baselining is not a new concept to the people on the technology side of the organization, security professionals, developers. It is very new to the business that's now driving these technology changes.
David Moulton: Yeah. No. You were talking about the idea of that propensity to lie and I think that's a more accurate way to put it, but the folks at Carnegie Mellon were presenting a couple months ago and one of the things they found in their research was that the models were between 40 and 70% of the time hallucinating which is a great term for lying. But they did it with such confidence or these models did it with such confidence that if you weren't an expert in what that answer was you could be quickly conned in to believing that this machine had told you something that was accurate. They also found that these same models wanted to please.
Noelle Russell: Yes.
David Moulton: They had an intended let's get an outcome that you're happy with. Right? And if they could lie to you because they couldn't get the answer, or it could lie to you, I shouldn't personify this, and get past that that was success. And I think that is a really --
Noelle Russell: Turing test, but in reverse.
David Moulton: Yeah.
Noelle Russell: Like the machine is trying to get you to say that's pretty good.
David Moulton: That's pretty good. And that's a moment where you're going what other portion of my life outside of IT, outside of technology, just find it, where you go, "You know, if I was lied to 40 to 70% of the time I would consider that great." I can't come up with it, Noelle. Right? That's not for me.
Noelle Russell: And I think it's interesting because in that study what they ended up doing was asking the model, "Why did you lie?" Right? Why did you make that up? And it would say -- this is how they came to that assessment. It would say, "Because I didn't want to hurt your feelings."
David Moulton: Yeah.
Noelle Russell: Like what do you mean hurt my feelings? How are you quantifying hurting my feelings? Like I'm afraid. I didn't want to embarrass you. Or I wanted to make you look good. Like these are the answers coming from a pretrained model. Yeah. So these are the things we -- and that's why I always say like it's about the questions you ask.
David Moulton: Right. And to take it to, you know, a human factor, if I had somebody on my team or that I was working with that admitted to me that they did this lie, right, I couldn't trust them anymore.
Noelle Russell: Right.
David Moulton: And it would be one of those moments of like we should part ways.
Noelle Russell: Yeah.
David Moulton: Right? And yet you're like okay well --
Noelle Russell: Maybe three strikes you're out, but we're talking 40 to 70%.
David Moulton: That's too much. That's too much. You know maybe if you were talking about like I made egg salad sandwiches and then you're like they were great and they were not. Like --
Noelle Russell: Yeah. Exactly. If you're trying to. But that is the philosophy.
David Moulton: But I'm not asking it to do something like, you know, left over egg salad for me.
Noelle Russell: You're like help me yeah find the right customer for this problem.
David Moulton: That's right. Looking at what the Carnegie Mellon researchers had, I walked about -- you know walked out and they were like, "How do you use AI to be smarter?" And/or a smarter user of AI. And generally it was like very, very cautious and look up every single thing it tells you and verify it. And not just that it put a link that said this because sometimes the link was the lie.
Noelle Russell: And it will manufacture a link and host it on its own server. What?
David Moulton: Yeah. It was just awful.
Noelle Russell: It's not good. It's not good.
David Moulton: So it has given me that moment where I'm going like oh.
Noelle Russell: Yeah. But I will say, you know, we're talking kind of about the big AI in the sky. And we're -- these are public models. Right? So this is true for public models. This is not true when you take a model and you host it inside your organization, inside your firewall. You control it through a retrieval augmented generation architecture. You control the data set that it uses. Like all of these things, you know, I don't want to give the impression that all AI is this way. Like it is if you go to Chat GPT or Claude on the internet. If you look in to a URL browser and you see a link that you don't own, then these are all the risks that you present yourself with. The sad thing is that most employees in the company that's exactly what they're doing is that they get a problem and they're like, "We just go out to Chat GPT on my phone." And that's the worst place to do it because on your phone you have no idea what the domain is.
David Moulton: Right. It's through the app.
Noelle Russell: Yeah. So it's complete -- it gives you a sense of security when it's not secure at all.
David Moulton: Yeah. And the Carnegie Mellon researchers were looking at the public --
Noelle Russell: Yes. Of course.
David Moulton: AIs. And looking for that range because that's the most successful. That's the one that you wake up and you say, "I'm going to be an AI pro today." And, you know, not even a few dollars later you're using it. If you want to throw a few dollars at it so that it keeps going you can.
Noelle Russell: Yeah. [ Music ]
David Moulton: Well let's -- let's talk about the human element of responsible AI. You emphasize that people, not just the technology, are the key to responsible AI. What's the role of a security culture in helping AI succeed at scale?
Noelle Russell: So in this case, you know, we look at that weaving. I like that you said the DNA. Haven't used that analogy in a while, but it is -- like it has to be part of the DNA. It has to be woven in to the fabric of these projects. So now all of a sudden -- which is why most of the time the technology part is probably 25% of what I do when I go to an organization and help them build a solution or deploy a solution. The tech is usually not the hard part. The hard part is how do you get a team of people that are going to care about all the things that we've shared, that are going to care about accuracy and fairness and security. And how do you get them in to that project early enough to ensure that you've built it in to the model's behavior, not just bundled it on? That's why governance is required, but it's not enough because you can just change your governance policies or worse get acquired by a company that completely dismantles your governance process. Then what are you going to do? So it needs to be built in and that's the beauty of having LLMs as part of your infrastructure. So I'll encourage, you know, yes, if we expand our mind and think about how do we use an LLM to actually be the security auditor in these systems and embed it in to the deployed feature so now when you get that feature an LLM's built in to say, "Oh no. These are the rules by which I'm abided."
David Moulton: Yeah.
Noelle Russell: And there's a framework called the AI safety system and Microsoft and Amazon both use it. I think Microsoft's the only one that's kind of called it out, but this is what they do intentionally. But that safety system is like four layers and it starts with the human AI experience which is like that's when you involve security, legal, compliance. Everyone's in the room. Plus the line of business owners. Plus the engineers. And you're like what are we trying to do. And this is when you define delegation. What's the AI going to do? What are the humans going to do? Because it's like the Skynet moment. Right? When you decide if you want to can you give everything to the AI? You could. It will hurt you. Baby tiger. Right? But most organizations are like no. There's stuff I want to keep. And usually security is one of those things. Accuracy is one. Fairness is one. So there are certain things. But once that human AI experience is defined that's not a technical problem. That's like a designer problem. So you have these user experience designers designing how AI will be integrated in to a workflow or a process. The next thing is the system prompt, is realizing with every machine you deploy you have the ability to control the way it operates. Most people when they think prompt engineering they think of the prompts they use to ask their questions, but this is the prompt that's used to tell the bot how to answer the questions. And that's completely controlled and most context windows for that it's like 375,000 characters. It's a lot of space for you to -- and I go. That's the first thing I do in an executive briefing when they're like, "Yeah. We're using AI." I'm like, "Great. Let's take a look at one." And I go in to the configuration of the system prompt and it's like "You are a bot that does blah." Does it? I'm pretty sure it's a default setting. I mean it's not uncommon to many of these security things. You walk in. You're like, "You know we wrote a book on this." There's a document on this. Like it's well documented. But people just won't do it. Many reasons. Time. Resources. But now you can build an LLM that will infuse it in to the life of your systems and feature releases. Like there's no excuses now. And then just quickly the last two are less controlled. One model selection. So we talked about HELM. Right? Picking the right model for the right task. And then the last one is infrastructure which again we're getting deeper and deeper so if you're not building a model you won't get to choose the infrastructure it runs on, but you should know like are you running on Amazon, are you running on Microsoft, are you running on Google, are you running on hardware in your basement? Are you good at that? Have you ever built a NIC card? Like nobody, you know, asks these questions.
David Moulton: Yeah. How far down the stack do you want to go?
Noelle Russell: Yes.
David Moulton: But you should know.
Noelle Russell: But you should know. Like or at least they should be transparent about it. Like even if they have what are called system cards -- so I was just speaking with a CISO an Anthropic and Meta at the event here. And they both were like, "We have system cards." And they monitor how many people read them and it's like less than 1% of people who use their systems go to that page and download their system cards. Not because they didn't publish it. Not because they didn't say, "We're responsible. Here you go." Explainability. People aren't even asking the question which I think is a big challenge.
David Moulton: Well before we started recording we talked about this idea of curiosity and perhaps one of the things we need in the culture in and around AI is more curiosity. Where is this coming from? How is it governed? What's going on with it? How did you get there? Do I believe you? Who else is involved? Why aren't they involved?
Noelle Russell: Literally.
David Moulton: Yeah.
Noelle Russell: Yes.
David Moulton: So going back to that culture idea, how can you train employees to spot and then escalate the potential ethical or security issues in AI systems?
Noelle Russell: Yeah. This is probably my favorite. So I do it in two different ways. One is by we do what are called break your bot challenges. So we have everyone in the team, and this goes from the boardroom to the whiteboard to the keyboard, everybody in the company, goes through a process where they build an AI system end to end. It takes less than four hours. I've done it in an hour. And the job is to build an AI system like a chat bot, but a custom GPT where you're providing the instructions and then you build it, you feel pretty good about it, and you hand it to your friend and their job is to break your bot. Basically red team. Right? Like just beat it up. Right? Like hammer it. And the good news is at that time none of this requires code. They're just using their words. Clarity of thought to define a system that is now completely -- like completely functional. So they deploy it. This person tries to break it by saying, "Hey, help me with my taxes," even though it's supposed to help with sales force data. So they try to get -- they try to identify vulnerabilities. That alone just educating people on if you build a machine you own the truth. You own the answers. You own desirable conversations, undesirable conversations. All of that is up to the person deploying the machine to define it. And if you don't define it that's where hallucination comes in. In a lack of definition it's going to make it up. That's the difference between where we were 10 years ago when we created these declarative systems. These systems when you asked it a question it didn't understand it would just fail. Like oh no. And that was kind of better from a security perspective.
David Moulton: Right.
Noelle Russell: Right? Like I'd prefer that.
David Moulton: You know when it broke and you didn't have to worry about 40 to 70% of the time you're going to get something that the sycophants wanted to tell you so that you felt better about yourself. Right?
Noelle Russell: Like the Chevron. Or like I don't know. It wasn't Chevron. It was like Chevrolet maybe. Maybe it was GM. I don't know. But you can google it and find out that they ended up deploying a bot too early, baby tiger mode, and they ended up selling a car for $1 on the internet.
David Moulton: I do remember.
Noelle Russell: Remember that? Like that was a huge press cycle because people were like, "Wait. What? I want a car for $1."
David Moulton: Right.
Noelle Russell: And that's all it takes is like somebody says it and they put it on Reddit and it goes like before you've had a chance to even fix it.
David Moulton: Yep.
Noelle Russell: And these are all things if you have someone in the room that's testing it. That's why so red teaming becomes this whole new concept in the world of AI. I don't know if we're going to talk about it so I don't want to steal my own thunder, but red teaming becomes this whole new concept because it's not just about like vulnerability and attacking and adversarial attacks. It's all about like that yes and benign attacks. How is someone going to accidentally do something to this system? And how -- and the only way to solve for that is like this symphony of talent you're going to need in the room to go, "Well as a mom I'd probably say this" or "As a cat owner I'd say this" or "As a person who never graduated from high school I'd say this." Right? Like the more people you have, but they're not going in and saying "How can I get something from you? How can I do something nefarious?" It's just literally like --
David Moulton: How would I use this? And then by having enough different points of view, experiences, you start to patch against the bias and/or the blindness that you carry in if you're the person who's making it. Or you spend a lot of time with it. And/or you just always approach problem solving in a particular way. You plan your vacations with a spreadsheet. I laugh because there are times when that's what I have done and I was like, "Oh no." When I like I just was like I need it all written down. I can see where things are at and it's very clear. You know, but then I think other people maybe don't do that, but --
Noelle Russell: Yeah. But you need at least one of those people on the team.
David Moulton: Absolutely. Yes.
Noelle Russell: Somebody's got to be very detail focused oriented.
David Moulton: So let's talk about some of the regulatory sides. Right? Like being ready for regulation. With some of the global regulations that are coming out - and I'm thinking about like the EU AI Act. Right? Maybe some of the U.S executive orders I think these are moving -- a moving target right now. How can CISOs prepare their organizations for the regulatory environment that's coming up in and around AI compliance?
Noelle Russell: Yeah. So my biggest advice is that they don't have to go it alone. The department of energy, the department of state, they've released enterprise AI architecture documents that are very good and most importantly acquired the brightest minds across the biggest companies. Like $40 million was invested to create these documents for the federal government, you know, in the previous administration granted. But they are public documents. So at least as a model of like what questions to ask it's a great format. So I kind of want to say department of energy, but then also thinking state and local governments. So the state of Arizona. Maricopa County is a county in Arizona, one of the largest counties. They also released an enterprise AI. Like it's not an act. It was a guidance guiding principle, but they hired big companies, you know, Accenture, PWC, McKinsey to help them build this document. And then they made it basically it's not open source, but it's public data. So you can go out and use that. So that's what I always encourage is don't start from scratch. Like take a model and if it -- I really think a federal model is like the lowest possible denominator because they're going to do the least amount of money for, you know -- do the least amount of work for the least amount of money.
David Moulton: But it gives you a floor.
Noelle Russell: At least. Yes. And most people don't do that at all.
David Moulton: It actually gets on the floor. Yeah. Not below the floor.
Noelle Russell: Right. Right.
David Moulton: They'll be in the basement.
Noelle Russell: Yes. Exactly. And they'll -- when you read that document you're going to be like, "Holy mama. We don't have any of that." Any of that.
David Moulton: For keeping the threat vector safe. So today there are financial audits that are very common with large public companies. Do you see a future where there are going to be AI audits at that same level because it's so important?
Noelle Russell: Absolutely. I think it's happening now. So in the financial space as well as in the healthcare space. The good news is that we are now building technology that was born in research and that research continues to happen. What's different is that those research organizations used to be and continue to be at like MIT and Stanford. Now companies like Open AI, Deep Mind at Google, Amazon research, like their research organizations are productized. So now we're seeing that same benefit happen inside companies, but it is research. Like someone has to kind of push the envelope of what's possible without making it a product feature. Like just testing. Like testing, hypothesis testing, asking questions, that's what academic people do. Like that's what PhDs -- and so I was just at MIT last week and I saw all these amazing AI PhD programs. They were -- and I'm a little worried because no one's productizing that. But these are like healthcare, finance. They are using AI to audit AI. And I remember someone telling me like you can't use AI. In the world where we think it's one big AI in the sky you wouldn't want the one big -- you know, the right hand to know what the left hand is doing or whatever that phrase is. But that's not what happens. Like when you build a model it's completely different. It's like Bob and Jeff.
David Moulton: Right. It's not the student grading their own homework.
Noelle Russell: Right. At all.
David Moulton: It can be the TA grading the homework of the students or another faculty member.
Noelle Russell: A Nobel Prize winner actually grading a student like that works all the time without fail, never gets tired.
David Moulton: Right.
Noelle Russell: It does get wrong because you might not be caring for the accuracy or security part of it. But these models like right now for example Microsoft just released a brand new agent called the response. What was it called? The red teaming, AI red teaming, agent. And it came out of those three years of research and productization of red team, like humans doing the red teaming. And then they fine tuned a model and now they're selling it as a product. So look at your organization. What do you do enough as humans that you can right now just make an intention we're going to capture enough data to train a model to augment this team. Not replace it, but make that team's job more effective. And I think there's so much nonsense that we have to do like in security and documentation and meta data management and all of that that like building models to help us with that is going -- I mean if you don't do it you'll be behind. Like your competitors will eat your lunch.
David Moulton: So earlier you talked about this idea that AI security is data security. And I want to talk about data protection in this age of generative AI. The generative models they often include this like large scale data in gestation. What are the privacy and the security implications that leaders often overlook?
Noelle Russell: I think one of the biggest misconceptions is that a leader when looking at a model they're looking at it through the lens of like a public AI model. The data piece is going to be when you start using these models you're going to use it privately. It will be inside your walled garden. Right? Inside your VPN. So once you're inside your VPN the -- it's very different. You now own that data. The problem they'll then run in to is that the data itself is conflicting. So they'll have data from 20 years ago meeting data from today and it will actually confuse the models. So models will be like, "But you said." Or worse. They will use the word like I was working with the state of New York and they used the word policy. And we created a model for the entire state of New York for like the whole organization. When you used policy it was not possible to differentiate between residential policy and pet owning policy. And so as a result we got really weird behavior, really weird suggestions about pet owning and human like habitation. You don't want that, by the way. But that just gave us the ability to think about how we might be able to like -- a leader needs to think, "How do I make sure that I'm following guidance that I'm already using in the rest of my organization?" It's kind of like habits. Who's that guy? John something. He wrote like "Atomic Habits." Like take something you're already doing really well in the organization and take the new thing and attach it to that. So like if you're already [inaudible 00:31:34] a security policy like attach the new AI policy to that so that the habit is created that you asked the same -- they're different questions, but the process is the hard part.
David Moulton: Okay.
Noelle Russell: Change management I think is going to be much more harmful to organizations than the tech itself. [ Music ]
David Moulton: So Noelle thank you so much.
Noelle Russell: It was super fun.
David Moulton: For coming on and talking to me. This has been fantastic. Thank you so much.
Noelle Russell: Thanks for having me. I really appreciate it.
David Moulton: If you like what you heard today, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify. Those reviews really do help me understand what you want to hear about. And if you want to reach out to me directly email me at the show at threatvector@ paloaltonetworks.com. I want to thank our executive producer Michael Heller, our content and production teams which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits the show and mixes our audio. We'll be back next week. Until then stay secure. Stay vigilant. Goodbye for now. [ Music ]