How to Build an AI Security Program from Scratch.

View Show Notes and Transcript

Is your organization an Adopter, a Builder, or a Scaler of AI? And do you have the right security blueprint to match? In this episode, Shannon Murphy (Global Security & Risk Strategist at Trend Micro) joins Ashish to break down the practical realities of securing AI in the enterprise. Shannon explains why 95% of AI projects are failing, often due to skipping the "boring" but critical step of governance. We explore the "AI Security Blueprint," covering the essential building blocks: Data Security Posture Management (DSPM), AI-specific vulnerability scanners, and treating AI agents like identities. Whether you're just adopting Copilot or building a full-scale AI factory, this episode lays out a roadmap for visibility, access control, and risk management in a non-deterministic world.

Questions:
00:00 Introduction
01:55 Who is Shannon Murphy? (Trend Micro)
03:00 AI Risk Scenarios: Internal Data Leakage vs. External Attacks
05:15 "Tell Me My Boss's Salary": The Failure of Traditional DLP
06:20 AI Frameworks: NIST AI RMF, OWASP, MITRE ATLAS
08:50 Why 95% of AI Projects Fail (The Governance Gap)
12:30 The AI Security Stack: Do You Need New Tools?
16:00 Building Trust with "Model Cards"
18:30 Adopters, Builders, and Scalers: The 3 Stages of AI Maturity
20:00 The 2025 AI Security Blueprint: Data, AppSec, and Identity
25:40 Risk Ownership: The Role of the AI Governance Committee
30:20 Securing AI Agents: Treating Agents as Identities
32:15 Shift Left is Not Dead: Testing AI Before Runtime
33:45 Milestones for Your AI Security Program
38:50 Fun Questions: Guitar, Las Vegas Food, and Balut

Shannon Murphy: [00:00:00] I'll tell you for a fact: in every single CISO conversation over the last two years, data governance has become that central bingo card conversation. Any security leader who attempts to drive an AI governance strategy in a silo will fail. 95% of AI projects are failing because we're not having all the stakeholders at the table.

I often talk about adopters, builders, and scalers. The convergence between those two groups is actually getting smaller, because the adopters are building internal tools for their own teams. I need to get a better idea, particularly when we're looking at agentic AI: maybe we wanna start treating them a little bit like identities.

The expectation for security teams now is that they have an IQ in this space. We're going to see massive specialization as far as AI security.

Ashish Rajan: If you're an organization working with AI, you're probably in one of three categories. You're either someone who's adopted AI into your organization, you're building applications with AI, or you are someone who's scaling AI across the board.

The problem is there's not a lot of framework that covers what we are working on today. [00:01:00] For this particular conversation I had Shannon from Trend Micro come and talk about AI blueprints from a security risk angle, and what that means for organizations, whether you're an organization which is adopting AI, building AI, or scaling AI, all that, and a lot more in this conversation with Shannon from Trend Micro.

If you know someone who's building a security program for GenAI, even if you have an existing stack of AppSec, CloudSec, and a few other things that we have been using and working with, but somehow have a blind spot with AI, definitely check out this episode and definitely share this with someone who's trying to build a program as well.

As always, if you are here for a second or third time and have been enjoying episodes of Cloud Security Podcast, I really appreciate you taking a second to hit follow or subscribe; it helps us grow and get even bigger guests. Thank you so much for tuning in. I hope you enjoy this episode and I'll talk to you soon.

Hello and welcome to another episode of Cloud Security Podcast. I've got Shannon with me. Shannon, thank you for coming on the show.

Shannon Murphy: Thanks so much for having me. I'm super excited to be here.

Ashish Rajan: I'm excited as well. Maybe to start off with, could you share a bit about yourself and your professional background, and what brought you to where you are now?

Shannon Murphy: Yeah, absolutely. It's been about a decade in high technology across a lot of different [00:02:00] industries, and over the last five years working specifically in the cybersecurity space, looking at emerging technologies like artificial intelligence, and the tool sets, the toolkits, that we're gonna use to actually secure that.

That landscape and that surface. I spend a lot of time with security and technology leaders working on their security stack, collaborating together, experimenting together, to figure out how we're gonna secure what's next. I love spending that time in the field with those folks in particular.

Ashish Rajan: So talking about people that you've been spending time with in the field, what are some of the risk-based scenarios that you've been seeing with different leaders? I mean, I guess it goes without saying, AI is everywhere. So there is this obsession with AI in most businesses.

What are some of the risk scenarios that you've been seeing, that people are talking about, that are top of mind for them?

Shannon Murphy: Yeah. I love how you teed that up, right? Because compare the way that we're looking at AI now to maybe 18 months ago, where you'd [00:03:00] kind of tab over to Perplexity, ChatGPT, whatever it is, and then you'd be in your other workflow, your standard workflow that you've been doing for years.

And now AI is really the new operating model, right? So we have this integrated workflow; there's no tabbing around anymore. It's in your life, and from the moment you wake up to the moment you go to bed, you're interacting with this technology as an individual.

So when we look at the enterprise, we can only imagine how that scales, and I think what security leaders are really seeing, where the conversation is going, there are kind of two categories. One is the standard stuff in security that we've always looked at when we think about bad actors, ransomware gangs, nation state attacks, and how they're looking to capitalize on this new opportunity.

But really where I think a lot of the conversation is, is actually operational risk. What are we doing in the enterprise with our AI applications, with our projects, that could be [00:04:00] sort of self-harming? When we look at things like drift, hallucination, data leakage, unintended access to data.

These applications really flatten the permissions for what people have access to. These are leading concerns when we're looking at people who are adopting, so they're just looking to get the productivity gains, and for those who are building, so those who are actually going to market and selling an application as well.

Ashish Rajan: Interesting. So, to your point, there is a split between internal problems and external problems. And a lot of these sound like internal problems, like the data leakage.

Shannon Murphy: Yes, because I'm concerned that Ashish is gonna accidentally put my proprietary data in there. Or I go to my copilot and say, hey, tell me Ashish's salary.

Maybe I wouldn't have known where to look for that information before, but the AI knows exactly where to look. And if we don't have our data permissions set up appropriately, that's just a simple example, right? [00:05:00] Of information getting into the wrong hands. That scales in lots of different areas.
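Shannon's "flattened permissions" point can be made concrete in code. The sketch below (not from the episode; all names are hypothetical) shows one common mitigation: enforcing per-document access control at retrieval time, so an AI assistant can only surface documents the requesting user is already entitled to read.

```python
# Hypothetical sketch: filter retrieved documents by the requesting
# user's roles BEFORE they ever reach the model's context window.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)


def retrieve_for_user(query_hits, user_roles):
    """Drop any hit the user is not entitled to see, before prompting."""
    return [d for d in query_hits if d.allowed_roles & set(user_roles)]


hits = [
    Document("hr-001", "Salary band data ...", {"hr-admin"}),
    Document("kb-042", "VPN setup guide ...", {"employee", "hr-admin"}),
]

# A regular employee asking "tell me my boss's salary" never gets the
# HR record into the prompt, regardless of how the question is phrased.
visible = retrieve_for_user(hits, ["employee"])
```

The key design choice is that authorization happens outside the model: the model can't leak a document it never saw, which is more robust than asking the model itself to withhold information.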

Ashish Rajan: Yeah. I wonder, after listening to this conversation, how many people will go to the corporate chatbot and ask, what's my boss's salary? Am I being paid enough? Gimme this intel that I need. Gimme the goss.

Shannon Murphy: It's a simple example, but it is just one thing, when we're just looking at a simple internal example of where data can start to get really leaky.

Yeah.

Ashish Rajan: What are your thoughts on the whole framework question? Because obviously, traditionally as a CISO myself, we've gone down the NIST path; CSA has one, OWASP has one. Do any of these stand the test of time for this?

Shannon Murphy: Yes and no. So the standard risk management frameworks that we've had in the past, there are nuggets of truth in there that we can apply to this new evolution, revolution, with AI.

However, what I think these institutions have been doing really well is they're actually publishing AI-specific frameworks. So we look at the NIST AI Risk Management Framework, OWASP Top 10 for [00:06:00] LLMs, MITRE ATLAS. I met the author Martin Stanley of the NIST AI RMF in DC earlier this year, and they've just done a phenomenal job in getting super granular and super strict in all of the different areas that the enterprise needs to be looking at.

Whether that's in the cloud, whether that's the data, whether that's the identity, and applying the best of NIST with new strategies, new considerations for that risk register that's unique to the AI landscape.

Ashish Rajan: What would be some of these new components that we're talking about, in terms of how it's different?

Shannon Murphy: Yeah, for sure. Well, we have to look at models, right? Because we talk about deterministic and non-deterministic applications.

And that is a fundamental difference, right? That's why we need to be looking at monitoring during runtime more closely, looking at new vulnerabilities for applications that are live, and looking at scanning those AI [00:07:00] applications when they're actually live as well, while minimizing as much disruption as possible.

So of course we have a very similar landscape if we look at containers, workloads, data, identity. But I'll tell you for a fact, and I'm sure you've heard it as well, in every single CISO conversation over the last two years, data governance, data security posture management, has become that central bingo card conversation.

It's gonna come up every single time, because we have to get so much more intentional and thoughtful about these various surfaces, because of the world that we're living in now.

Ashish Rajan: Yep. And do you find that, I mean, I guess models and agents are just some of the components as well.

On these frameworks, I think you were referring to NIST having done a good job. Am I, as a CISO, okay to just rely on these for this new world that we are living in with models? 'Cause data has been part of frameworks before.

Shannon Murphy: Yeah.

Ashish Rajan: Is it keeping up with the new challenges of data?

Shannon Murphy: I think that it's [00:08:00] going to give you a really solid foundation to get started, right? But you also have to do your own governance process. We cannot be flying blind. So we're looking at getting an asset inventory, understanding the risk posture of those assets, doing your risk assessment, understanding relatively how that risk stacks up against the rest of the assets in your organization.

This is so critical, and you cannot do this alone. This is the biggest piece, right? Any security leader who attempts to drive an AI governance strategy in a silo will fail. And we saw this report earlier this year that everyone shared on LinkedIn, from MIT: 95% of AI projects are failing.

And I think the reason this is happening is because we're not shifting left much earlier in the conversation. We're not having all the stakeholders at the table. You need to have that C-suite alignment [00:09:00] when you are building that governance and that policy, because often what happens is that the pressure comes from your business units, right?

To move really fast. It's not centralized. We need to centralize that governance, centralize that policy, so that we can actually start to move a lot faster.

Ashish Rajan: Yeah. Also, that's where people have the governance council. They build that with legal, compliance, and other people as well.

Shannon Murphy: Have a committee, right. I think you have a committee with different stakeholders who bring a different point of view to the table, and security is very central to that discussion. And I think what a lot of security leaders are finding in their role over the last two years is that they have a seat at that table now, because the business understands what they don't understand, right? They know that there are gaps in their knowledge where they need to bring in someone who's in the IT and security space who's going to provide that visibility, which is super exciting, because I think for the first time we're using security intelligence to [00:10:00] make business decisions, and this was a gap that has existed for a very long time.

Ashish Rajan: Yeah, and I think maybe because security was always treated as a silo before, where they didn't have to be part of a decision. But now, because of AI, it's...

Shannon Murphy: Bolt on, right. "We'll just bolt it on," that's what I mean. Quote unquote bolted on.

Ashish Rajan: As I do with air quotes. Are the adversary challenges different as well with this? 'Cause I almost feel, from what you said earlier in the beginning of this conversation, between internal and external: on the external side there's so much prompt injection and so much more there. In terms of the internal ones, what's the difference in how adversaries approached things before and what they're looking at now in the AI world? What are the newer adversary challenges that people are seeing?

Shannon Murphy: Yeah, for sure. I think that there are kind of two pieces to this. One is actually looking at that application: how can I exploit the application? You mentioned prompt injections. Prompt injections, model manipulation, jailbreaking.

Absolutely. We've seen lots of bad actors try to use open models [00:11:00] to do their own kind of special criminal LLM, but also targeting applications as well, because you might be able to get crafty and creative and, say, get the application to share its system prompt.

Well, once I have its system prompt, that's a lot of valuable and juicy information for how I might be able to make my next move, right? So this is one category. The other category is your kind of attack surface of assets. What's the data that I want? What models are they running?

Where are the containers? Is there a vulnerability that I could exploit in order to get in? Because ultimately, bad actors don't do things just to be annoying, right? They do it for money most of the time. Disruption or money; normally money. So follow the money.

Where is there going to be an opportunity to get in, get some juicy data, to either extort or to sell? And there you can start to see, okay, where the common attack paths are gonna start emerging, [00:12:00] from that level, which is why you really wanna get some good visibility into where those AI assets are, what the building blocks are, and what their posture is.

Ashish Rajan: Interesting. I may wanna double click on the visibility thing, because most CISOs today have pretty decent coverage of cybersecurity. If you look at the security stack, like what I had as well, most people have security operations, AppSec, cloud security posture management; throw that in there as well.

How different is this new stack that we are looking at there? Am I not covered by what I already have?

Shannon Murphy: I don't think you are, to be honest. And I think that we need to reimagine and modernize the stack to meet this moment, right? So to me that looks like a total artificial intelligence risk management approach, where you're looking at the full life cycle, you're looking at all of your building blocks. In order to do that, when we're looking at things like [00:13:00] data, identity, your cloud assets: we've secured those before, but we've maybe done them in a way that isn't appropriate anymore.

If you look at kind of legacy DLP, right? That's not going to fix the problem that you have right now, when you're looking at data leakage or manipulated models. Who gets access to those containers? If something changes, how do we track that? So you really need to be taking, in my opinion, a much more shift-left, proactive position when you're securing these assets.

So you want to look at AI-specific vulnerability scanners, and you don't wanna have that sitting over here disconnected from the rest of your security stack. I think this is a perfect scenario for when bolt-on is not going to work, because you need to see all of these assets, all of that risk, in context of one another.

If you do not have the context, how are you going [00:14:00] to prioritize? It's going to become an absolutely overwhelming task for the security team, and you're going to start leaving the door wide open. Whenever we have new applications, we have new code, right? And what happens when we have new code?

It gets buggy, right? So we need to be able to account for that, because the scale and the volume is so massive. So I think the way we wanna look at it is almost with that kind of platform approach that a lot of people have been so interested in over the last couple of years, but taking that platform approach and applying it to the AI ecosystem specifically.
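Shannon's argument that findings must be prioritized "in context of one another" can be sketched numerically. The following is a hypothetical toy scoring function (not from the episode; the weights and field names are invented for illustration) that combines a technical severity score with business context such as internet exposure and data sensitivity:

```python
# Hypothetical scoring sketch: rank AI assets by technical severity
# weighted by business context (internet exposure, data sensitivity).
def risk_score(asset):
    base = asset["cvss"]  # 0-10 technical severity
    exposure = 1.5 if asset["internet_facing"] else 1.0
    sensitivity = {"public": 1.0, "internal": 1.2, "restricted": 1.5}
    return base * exposure * sensitivity[asset["data_class"]]


assets = [
    {"name": "rag-api", "cvss": 7.0,
     "internet_facing": True, "data_class": "restricted"},
    {"name": "eval-sandbox", "cvss": 9.0,
     "internet_facing": False, "data_class": "public"},
]

# Note the inversion: the lower-CVSS asset ranks first once its
# exposure and data sensitivity are factored in.
ranked = sorted(assets, key=risk_score, reverse=True)
```

The point of the sketch is Shannon's: without context, the team would chase the CVSS 9.0 finding first, even though the internet-facing service sitting on restricted data is the likelier attack path.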

Ashish Rajan: And when you mentioned the DLP example, the first thought that came to mind was the example I was laughing at earlier, which is, tell me my boss's salary.

Shannon Murphy: There is no DLP that covers for that.

Ashish Rajan: No. And most DLPs are usually focused on your Outlook, the email system; they're not even looking at the chatbot that's talking about your solutions.

So there's already a blind spot there. But I love the example that you shared in terms of the broader context. I guess where you're going with this also is that you're not saying the current tool set is no longer required. It's just that it's one of the things you already should have, table stakes, but AI has brought another layer that you're just completely blind to.

Shannon Murphy: Exactly. And we need to get specific visibility into these things, especially now that AI is embedded in every single SaaS application and every single tool that your team is using. You need to know what people are using, right? You need to know what content is going into that experience and what content is going out.

This is just for an internal purpose. When we look at organizations who are going to market and selling an AI application, they need to take that so much more seriously, particularly if you're in healthcare, if you're in finance. There's a whole host of requirements that you need to meet.

I think a [00:16:00] really great thing that some organizations are doing to build that transparency with their customers is using a model card. I call it a license to thrive, where they actually use that model card to show: here are the models that we use, here are the safety precautions we take, this is how we use a zero trust approach to protect your data and make sure everything's in a contained system.

And this is helping organizations build that trust and transparency with their clients without giving away their IP at the same time. So I think we'll maybe start to see this type of approach a little bit more in 2026. I hope we get to a place of standardization, so that people can really start to adopt much faster and with more confidence.

And we're not seeing these failing projects, and we're not seeing all of this risk in the enterprise, because they're doing it in a way that is safe and scalable.

Ashish Rajan: What are some of the challenges you see in this particular space right now? To your point, it's a pretty straightforward path if you are able to just follow it.

What are some of the challenges [00:17:00] you're seeing people have, whether it's ownership or, I don't know, what do you see as challenges that people face as they're trying to walk this path?

Shannon Murphy: Yeah. I think a lot of CISOs and CTOs are super realistic. They know that they have gaps right now.

They're not blind to that, so they are working on that AI policy and that AI governance process. I think where people are having the most challenges is when they skip that boring step and just try to hit the ground running. We need to have really strong governance in place in order to move fast.

So I think a lot of the time it does come down to things like internal communication, bringing the stakeholders together, getting buy-in and support for these types of shift-left activities. But once we're able to have those conversations, we can communicate risk effectively.

We know the bets that we want to take. This is where people are overcoming that challenge. But a lot of the time, [00:18:00] business units are moving fast, security feels a lot of pressure, and that's when the gaps start to surface.

Ashish Rajan: Interesting. And a lot of chief architects and enterprise architects are making calls about this particular space.

What we may have spoken about is very much on the side of, hey, I have one ChatGPT or whatever. But scaling AI across a large organization, especially a large enterprise, comes with its own complexity. If you were to put a risk lens to that as well, how would you approach architecture today, given that you have to scale AI across an enterprise?

Shannon Murphy: Yeah, and I think, you know, you've heard it quite a bit, but it comes down to this visibility, right? We need to get visibility into what we have. We have to choose our priorities. What are our projects, looking at the broader business? And this is why I love this connection of security coming to the decision-making table, because they're actually getting insight into what our 2026 priorities are.

What are the AI projects we want to [00:19:00] achieve? What do we want to build internally, for our own teams to use? Or do we wanna push out an application that's generally available? Security being a part of that discussion is making it possible to actually start to architect and apply a blueprint approach.

I think "blueprint" was one of the top trendy words in AI for 2025, and I think it started at NVIDIA GTC earlier this year, because we started hearing this blueprint term come up, and we heard it more in the SaaS space actually, where we looked at what that architecture is, the technical architecture of what we're building. What an amazing roadmap now, right? Now we have the visibility into the plans, into what people are building, and we can take a security blueprint and layer it right on top. I think this is how you're building really [00:20:00] resilient architectures that are going to actually bring the enterprise the value that they wanna see, and bring their clients the value that they wanna see.

Ashish Rajan: So what does the security blueprint look like then, I guess today?

Shannon Murphy: I think it's a beautiful question, right? We have to look at our building blocks. The first thing we look at is data, so we wanna look at data security posture management, especially when we're looking at fine-tuning and using open weights and open models, and how we're improving those models and making them more specific to our use cases. So what is the data that we're using to do that, and how are we selecting that data and limiting who touches it? If data is being added, we wanna be very intentional.

If data is being taken away, we wanna be very intentional. You need to have that data provenance and governance across the entire chain there. The next piece that we want to look at is on the application side, so we think about application security. This includes things like container security, right?

This is where a [00:21:00] lot of the models are running, and we also want to look at vulnerabilities as well. We know today how attackers get into organizations: social engineering, internet-facing vulnerabilities, internet-facing misconfigurations. We know that this is the way they get in, so we need to think about that, right?

Mm-hmm. When we're looking at the ai security blueprint and that risk management framework, so let's look at that application layer and have an AI scanner that's continuously monitoring for AI vulnerabilities, right? And they happen right at then the software happens, right? Yeah. So we have to look at that cloud kind of application layer.

Then we wanna look at our identities. What are our identities doing? What do they get access to? What information do they get access to? So tools and information. And start to take something like a zero trust secure access approach to our identities as well. We also wanna look at things like pre-filtering and post-filtering on [00:22:00] content, and red teaming our applications.

Finding those gaps before someone else can. And really taking a full cloud risk management framework from inventory, to assessment, to prioritization and mitigation. That is your blueprint when you're looking forward to 2026: what are the different pieces that I need to be taking care of?

It's in the cloud, it's in your identities, your data. There's a network, kind of IPS, layer to this as well. And that's going to do a lot for the stack.
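The pre-filtering and post-filtering Shannon mentions can be sketched as two small hooks around the model call. This is a deliberately naive illustration, not from the episode: the deny-list patterns are placeholders, and a production system would use a trained classifier or policy engine rather than regexes.

```python
import re

# Hypothetical deny-list for illustration only; real deployments use
# classifiers or policy engines, not a handful of regexes.
INJECTION_PATTERNS = [
    r"(?i)ignore (all )?previous instructions",
    r"(?i)reveal your system prompt",
]


def pre_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed to reach the model."""
    return not any(re.search(p, prompt) for p in INJECTION_PATTERNS)


def post_filter(output: str) -> str:
    """Redact SSN-shaped strings before the response leaves the boundary."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", output)
```

The design point matches the blueprint: both checks live outside the model, at the trust boundary, so they apply regardless of which model or prompt is behind them.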

Ashish Rajan: Okay. And do you find that, because obviously there are different levels of adoption too, as you were saying as well: some organizations may have gone, you know what, I'm gonna wait and see, because the frameworks are not there yet, so I'm not gonna take a risk.

Some organizations may be at the stage where every employee has access to some form of AI copilot. And the others are your early adopters, who are totally knee-deep in this and going, okay, we are an AI company; moving forward, everything has to be AI first. [00:23:00] Does the blueprint evolve or change depending on which kind of AI adoption curve you're in?

Shannon Murphy: Not only does it, but it has to.

Ashish Rajan: Right.

Shannon Murphy: Because the maturity journey is different for everyone, and there's so much speed. What's really different in what we're dealing with here, when we look at threat modeling, is the speed at which we need to move. Similar to how you've teed this up, I often talk about adopters, builders, and scalers. There are adopters who are looking to get the productivity gains. There are builders who are building applications, and their development teams are either experimenting or seriously going to market with a tool.

And then there are the scalers, and those are the ones investing in the AI factories. AWS announced a big AI factory initiative today; of course, NVIDIA's been in that space, and we've been in that space as well. But for the most part, 80 to 90% of organizations are in that adopter or builder space.

What I'm seeing, actually, even in the last few weeks, [00:24:00] is that the convergence between those two groups is getting smaller as time goes on, because the adopters are building internal tools for their own teams. They're not selling something, but they're still building applications.

And with any application you have vulnerability and misconfiguration risk. As you're going on that journey: okay, I just wanna adopt, right? So I need to take some kind of zero trust approach here, because there's a lot of data, and I need to protect that.

So that's great; you can start with that approach and get your feet wet. Okay, I need to get a better idea, particularly when we're looking at agentic AI, of our agents. How are we going to treat agents? Maybe we wanna start treating them a little bit like identities, right?

So taking an identity risk management approach to those agents, that's great. Those are the tools that are coming into your organization, and nobody's saying no anymore. I can't have a single conversation where somebody says no; everybody is saying yes. Two or [00:25:00] three years ago people were saying, ah, shut it all off.

Right now, everybody's saying yes, and that is kind of a top-down decision that's happening in every single organization. When you're building, you just want to have a toolkit available that's going to make it easy for you to scale up into that next phase of your journey very quickly. So that whole philosophy: discover, assess, prioritize, mitigate.

That is applicable regardless of what stage you're in. The level of security, perhaps even the number of capabilities, will probably scale up as you increase your surface and have different types of assets to secure; however, that philosophy remains consistent.
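"Treating agents a little bit like identities" can be sketched in code. The example below is a hypothetical illustration (the class, function names, and TTL are all invented, not from the episode) of the two properties that identity-style agent governance usually implies: short-lived credentials and a least-privilege tool allow-list checked on every call.

```python
import time
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    agent_id: str
    allowed_tools: frozenset
    expires_at: float


def mint_identity(agent_id, tools, ttl_seconds=900):
    """Issue a short-lived, least-privilege credential for an agent."""
    return AgentIdentity(agent_id, frozenset(tools), time.time() + ttl_seconds)


def authorize(identity, tool):
    """Check every tool call, exactly as you would for a human identity."""
    return time.time() < identity.expires_at and tool in identity.allowed_tools


# A billing agent can read invoices but nothing else, and its
# credential silently expires instead of living forever.
billing_agent = mint_identity("billing-bot", ["read_invoices"])
```

The usage mirrors human IAM: an agent that needs a new capability gets a new, explicitly granted identity rather than a broadened standing one.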

Ashish Rajan: And I guess, do you find that the risk ownership question, the concept of risk, also evolves as you go through adopting, building, and scaling to producing as well?

Ownership is a big one that a lot of people talk about at this point in time. Is there a gap there as well, [00:26:00] still, as things are evolving, whether it's an AI factory or not?

Shannon Murphy: I think you, sitting in this role before, would probably say yes. I don't wanna speak for you.

However, I think, um, you know, very strategic CISOs know how to, uh, maintain their job security. Yeah. Um, and they do that by kind of flattening the risk ownership and acting as advisor and consultant to the organization, setting expectations up front, being very transparent about what the operational and the external, you know, risk or threat scenarios are going to be.

And I think it's important to talk to mm-hmm. Uh, the business in terms of scenarios. Yeah. Um, I think the best approach that organizations can take is that committee approach, uh, where you have leaders at the table, um, and the board. As well plays a big role in this as well, because they're also pushing for more AI adoption [00:27:00] innovation.

What are we doing with it? So if that directive is coming from that level yeah. Some ownership also needs to be coming from. Mm-hmm. We can't just shift that risk appetite over to another person. Right. I think taking that committee approach and really. Working in the name of Breaking down silos Yeah.

Is the way, um, that CISOs can protect themselves a little bit, and they also get to be part of a much higher level discussion as well. They're speaking the language of the business. Right. The business understands risk.

Ashish Rajan: Yeah.

Shannon Murphy: When we're talking about, you know, previously in security, we were talking about Salt Typhoon and Scattered Spider, and, you know, mean time to detect and mean time to respond.

Like, it's like, I don't even know what you're talking about, man. Yeah. But when I'm coming to the table and I'm saying, look at these scenarios, I'm gonna explain them to you. Here's the risk associated with it. Here's how we can actually meaningfully bring that risk down. Yeah. I need your support to do that.

Mm-hmm. I'm gonna make sure that there's very little friction. I'm [00:28:00] gonna make sure that people don't feel it in their day to day. Yeah. But I do need, you know, your support to do that. That is a much more productive conversation. And I do think that this AI era is enabling that to happen in more organizations than ever before.

Ashish Rajan: I think it's very well said, because earlier, for every risk, at least especially technical risk, it mostly used to come around to security. Yes. So: you figure it out. And we had this battle of frantically trying to get other people to follow us. But now, because of AI, credit to it for breaking those silos, you have legal being a thing, third party, fourth party, first party, whatever, then you have compliance as a thing, engineering as a thing, cybersecurity.

And maybe, to what you said as well, it's probably the best time to be able to have that flat structure for risk. Mm-hmm. Where it's not just one team or individual who owns the risk. Now it's spread across the board, because now everyone's responsible. If there was a legal breach, it's not technically cybersecurity, but

Shannon Murphy: we're all involved.

Right. Yeah. We're all, we're all involved in that

Ashish Rajan: context. And I love [00:29:00] the breaking-the-silos part as well, because it definitely goes into this kind of new, evolving world we are going into with the new threat vectors. I think you touched on the whole probabilistic systems that we are building.

Yes. Which are not deterministic. What's the new way to approach these? 'Cause I imagine, to your point, people who are in the adoption stage probably only have the Copilot pieces. They haven't really gone to that builder or scaler stage yet. Yeah. So they're almost going, okay, that sounds like a pretty big task to just, mm-hmm,

jump from someone who's adopting AI versus scaling, building, all of that. How is the threat model evolving in terms of the way they should approach this? 'Cause a lot of people, I imagine, are at AWS re:Invent today. Mm-hmm. And to what you said as well, they've announced the AI factory and all the AI agents you can build within AWS now.

Yes. So I'm sure this is gonna bring up a lot more conversation. How should people look at this now when they go back and build a program tomorrow for the next year? What are some of the things that you think should be top of mind for them [00:30:00] as they build the program?

Shannon Murphy: Yeah. I think that next phase of where we're going to be focusing our attention, you know, as security leaders, is mm-hmm. in the inferencing piece. Right. Interesting. Okay. So we're looking at, you know, monitoring changes, right? Monitoring for drift, monitoring for hallucination, monitoring for novel vulnerabilities, you know, zero-day vulnerabilities in mm-hmm.

In these, um, in these stacks, right? So I think that it, the focus really shifts into that runtime, um, space when we're looking at, okay, while we're actually live, what are we doing in that inferencing piece in order to make sure that things are still valuable, are accurate, are giving us the outcomes that we want?

And I think that this conversation even leaps like, you know, 10 times when we look at agents and we look at, yeah. Um, you know, leveraging model context protocols or agent to agent protocols in order to actually do way more workflow [00:31:00] automation, um, and have agents carry out these tasks, right? Yeah. Yeah.

Because now we're really relying on it like an individual. Mm-hmm. Um, and having, you know, uh, guardrails in place for when those things do break. And I think that's why we want to treat agents like an identity, uh, in that sense, because we do know how to manage identity.
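Treating an agent like an identity could look roughly like the sketch below: enroll it with a unique ID, least-privilege scopes, and a short-lived credential, and deny by default. The registry shape, scope names, and TTL are all hypothetical, invented for illustration rather than drawn from any real IAM product:

```python
import time
import uuid

# Registry mapping agent IDs to the permissions they were granted.
AGENT_REGISTRY: dict[str, dict] = {}

def register_agent(name: str, scopes: set[str], ttl_seconds: int = 900) -> str:
    """Enroll an agent like any other identity: unique ID, least-privilege
    scopes, and a short-lived credential that must be renewed."""
    agent_id = str(uuid.uuid4())
    AGENT_REGISTRY[agent_id] = {
        "name": name,
        "scopes": scopes,
        "expires": time.time() + ttl_seconds,
    }
    return agent_id

def authorize(agent_id: str, scope: str) -> bool:
    """Deny by default: unknown, expired, or out-of-scope requests fail."""
    entry = AGENT_REGISTRY.get(agent_id)
    if entry is None or time.time() > entry["expires"]:
        return False
    return scope in entry["scopes"]

# A ticket-triage agent gets only the scopes its workflow needs.
ticket_bot = register_agent("ticket-triage-agent", {"tickets:read", "tickets:comment"})
print(authorize(ticket_bot, "tickets:read"))    # within granted scopes
print(authorize(ticket_bot, "tickets:delete"))  # outside granted scopes
```

The point of the sketch is the shape, not the mechanism: because agents are enrolled like identities, the existing identity-risk playbook (scoping, expiry, revocation, audit) applies to them directly.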

Ashish Rajan: Yeah. Yeah, we do. We've done that before.

Yeah. Yeah. And backups and stuff has gone through. Data security has been a thing as well.

Shannon Murphy: Exactly right. So we know the roadmap to do this. Now we're applying it to a brand new landscape and leveraging kind of a reinforced set of tools in order to make that possible in this probabilistic, non-deterministic world that we find ourselves in now, which is super exciting. Right? This is so incredible, the outcomes that we're going to get from that. Yeah. We're not, you know, black-and-white binary anymore. The world opens up, and we've seen just how powerful that can be, uh, in many, many [00:32:00] industries.

Ashish Rajan: Yeah. Do you think that,

before the whole Gen AI thing became what it is today,

Shannon Murphy: Sure.

Ashish Rajan: uh, shift left was, like, top of mind for people. That used to be one of those things that, yes, was trendy. Yeah. Yeah. It was also one of those things where people said, if you're doing shift left, you'll be fine.

Shannon Murphy: You're right.

Ashish Rajan: Like, yeah, you do.

Has that conversation evolved as well in terms of shift left? Because a lot of people clearly spent a lot of time on DevSecOps as a whole thing as well. Yes. Now, in this world we were talking about: CISOs who already have AppSec programs, mm-hmm, CloudSec programs, posture management. Just insert the entire security stack gamut in there.

Shannon Murphy: Yeah.

Ashish Rajan: Is shift left still valuable?

Shannon Murphy: Absolutely. And in fact, it's more needed than ever before. And it is trendy, right? It's like one of those terms: zero trust, shift left, you know, agentic, whatever. It is one of these terms, but fundamentally, when we drill down into, mm-hmm, what that means for the business,

it is absolutely, incredibly relevant, and you see that come through in application security and code security, mm-hmm, um, tools that are [00:33:00] looking at that before we're actually live. Right. So, um, it's absolutely critical, and it is what is going to keep you out of trouble from a quality perspective. And when we layer in things like an AI vulnerability scanner, that's what's going to keep you out of trouble even when we're, you know, live in runtime.
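A shift-left gate for AI artifacts could be as simple as refusing to promote a model that fails basic pre-release checks. The check names and metadata fields below are assumptions made up for this sketch, not a real scanner's API:

```python
# A toy pre-deployment gate in the spirit of "shift left" for AI:
# run checks against a model artifact before it ever reaches runtime.

def scan_model(metadata: dict) -> list[str]:
    """Return a list of findings; an empty list means the artifact is clean."""
    findings = []
    if not metadata.get("model_card"):
        findings.append("missing model card")
    if metadata.get("base_model_cves", 0) > 0:
        findings.append("known CVEs in base model")
    if not metadata.get("prompt_injection_tested"):
        findings.append("no prompt-injection test results")
    return findings

def gate(metadata: dict) -> bool:
    """Promote to runtime only if the artifact passes all pre-release checks."""
    return not scan_model(metadata)

candidate = {"model_card": True, "base_model_cves": 0, "prompt_injection_tested": False}
print(gate(candidate))         # blocked: one check still failing
print(scan_model(candidate))   # shows which check failed
```

In a real pipeline the same gate would sit in CI, the same place DevSecOps teams already run SAST and dependency scans, which is why shift left carries over so naturally.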

Ashish Rajan: So this AI security risk blueprint that we are talking about, mm-hmm. If I were to start building this or start thinking about this for my security program Sure.

Are there any milestones that come to mind that I can use as my, for lack of a better word, maturity levels? Like, I think I may already have my established security framework. Yeah. But I'm trying to go up a notch, 'cause I've come in and heard Shannon talk about all these new problems that I'm gonna have as I move to the builder and scaler stage.

Sure. Um, how would you kind of, uh, measure that, and are there any milestones that come to mind that I can use as I'm building that risk-based security blueprint for my AI capability?

Shannon Murphy: Yeah, I think the very first thing that you're going to start with is that [00:34:00] visibility piece. Shadow AI is an absolutely massive issue that everyone is dealing with, and you need to be able to wrap your arms around it.

And the most amazing part is there are tools available today that make it very easy for you to start to wrap your arms around that stuff. Mm-hmm. So the very first milestone, milestone one, that you're gonna hit on that roadmap or that journey is that visibility: getting an accurate, up-to-date inventory.

A real-time inventory, right. Because, especially in the AI landscape, what we have in place today is not what we have tomorrow. And I mean that literally: tomorrow. Yeah. This is not, um, you know, just something made up; it is literally tomorrow. So we need to have real-time visibility into what those assets are.

And that's gonna be your very first thing. The next thing that you're going to want to look at is your access, right? Who gets access to what, whether you are building, whether you're adopting, whatever you're doing, and managing that. And you can only do that once you have the visibility, right?

Yeah. [00:35:00] Otherwise you have no idea who is accessing what. Yeah. And you can start to build, you know, your governance and policy around those types of things. Your biggest project, your milestone three that you're gonna be working on up until this point, is your data piece: really understanding your data provenance, you know, where it's coming from, why it's there, the intention of that data, and getting very thoughtful about that. Um, and taking a data security posture management approach in order to do that. So I think those are kind of your big things: you need the visibility, you need to control your identities, you need a really great understanding of your data. Mm. Once you've done this and you start getting into, you know, builder mode, then you have your full suite of, you know, cloud security capabilities: AI scanners, AI guardrails, application security, container security. That's gonna, you know, keep you outta trouble there.

Ashish Rajan: Right. And I think, so to your point, we already have a lot of capability; we're just trying to cover the gap that we have with AI in this new world that the blueprint addresses. Mm-hmm. Um, I'm curious also, because it's not that my regular [00:36:00] day job of being a cloud security person has changed, or my AppSec job. Not really. I mean, yes, there is a lot of AI there.

I'm still doing mostly what I was doing before, but now I have to account for AI into this place as well.

Shannon Murphy: You're probably doing way more AI actually than you have before.

Ashish Rajan: Yeah. So we're doing a bit more AI today. Yeah. I guess, because I imagine there's an adoption scale even between individuals and the teams that CISOs have as well, for sure.

Right? Yeah. Like, so we spoke about AppSec, cloud, all of that, in terms of doing this as, as I like to call it, a side project. Sure. You know, hey, I need to show my boss, the CTO or CIO, that, hey, I have increased the adoption of AI in my team. So I've basically been pushing my AppSec team, my CloudSec team:

Hey guys, you should do more AI. Mm-hmm. In honesty, it's like a side project, and yeah, I don't feel it's the right thing, but I don't know where you stand on that: whether it's the right thing for people to have it as a side project, or to make it more ingrained, for lack of a better word.

Mm-hmm. But if they're making it ingrained, what other things should they consider? To your point, visibility helps. Or are we still on the right path if we get that same visibility, the transition that you spoke about? Mm-hmm. Does that make sense?

Shannon Murphy: I think the expectation for security teams now is that you have an IQ in this space, right?

Yeah. You have a knowledge base in this space, and I think that absolutely we're going to see massive specialization, um, as far as AI security understanding and skill sets go. With that said, not everybody has a, you know, 50-person IT team with specialists who can do all of this stuff. So it's incumbent on the, you know, security technology community to make this stuff dead simple.

Yeah. Like we have to make it dead simple for you to secure this. And that's what the blueprint, you know, kind of idea, um, is providing, it's a, it's a very explicit roadmap. It's very clear in, you know, your categories. You're able to see. You know, what do I have in place and how does that map over to the security stack?

Mm-hmm. Um, and for that reason, you're kind of [00:38:00] satiating both groups, right? Someone who's highly specialized in this is gonna be in it day in, day out; it's the kind of toolkit that they're playing in. Or you're in a scenario where, you know, I've got three guys on the team, or three ladies on the team, and, you know, we have to work with what we've got, right?

Yeah. And I need to be able to have one single pane, right? One source of truth, in order to make these decisions, because I need to be so focused on making sure that this is airtight. Um, and I think that's what this, you know, central, kind of all-in-one risk management framework is going to do for both of those teams.

Ashish Rajan: Yeah. It sounds like we have our work cut out for us as well. Yes. Um, I mean, those were mostly technical questions. I've got three fun questions for you as well.

Shannon Murphy: Ah, I love a fun question.

Ashish Rajan: First one being, what do you spend most time on when you're not trying to solve AI security problems of the world?

Shannon Murphy: Um, just in life, uh, you know, it could be in life as well. Yeah. Um, you know what, I love playing in this space, so I [00:39:00] think experimenting as much as possible. Um, also on the education piece as well I, there's a lot of, you know, uh, kids, families, seniors who are, playing with these tools as well.

You just see their eyes light up. Uh, when these do these, uh, when they're participating or playing with these types of things, I think, you know, we do a ton of education in this space. I love talking to, you know, as much as I love spending time with the technical community, spending time with people who, um, you know, are just kind of getting their feet wet, I think is always such a joy, uh, to see that kind of, oh my God moment.

You, you can't get enough of that type of stuff.

Ashish Rajan: Yeah. Uh, second question. What is something that you're proud of that is not on your social media?

Shannon Murphy: Not on my social media. You know what? I started learning to play the guitar three years ago. Oh, wow. I'm still pretty bad. Sorry, Danielle, my guitar teacher, if you're listening to this.

Um, but you know, I am super proud of that. I think it's just another one of those skill sets, right? It's completely different. It's totally 100%, uh, creative. And I think that that balance is actually so important in this industry. The more that you [00:40:00] can kind of, you know, expand the way that your brain is working.

Just gonna come right back. Yeah.

Ashish Rajan: And acoustic guitar or electric guitar?

Shannon Murphy: It's actually a classical guitar. A nylon-string guitar. Wow. Okay. Wow.

Ashish Rajan: You've gone hardcore. Okay. Fair. I was thinking it would be, like, uh, what's the easier one? I remember one of my bosses used to have this, like, little tiny electric guitar.

It looked like a classical guitar, but it was an electric guitar. Does that make sense? Yeah. Yeah. But you could put headphones on, 'cause his family hated him trying to play it. Yes, yes.

Shannon Murphy: Oh my God. Yes. So it's like, yes,

Ashish Rajan: I get it. You were trying to learn, but not everyone has to be involved in this kind of a thing.

So yeah.

Shannon Murphy: Yeah. That stuff definitely doesn't make it on a social media. Yeah. Yeah. That's right.

Ashish Rajan: Uh, okay. And the final question of your favorite cuisine or restaurant that you can share with us.

Shannon Murphy: Oh my gosh, favorite cuisine. You know what? We're here in Las Vegas today. My favorite restaurant in Las Vegas is Bazaar Meat.

It's, like, a kind of Argentinian open-flame style of cooking. Ooh, okay. Uh, best steaks, best beef tartare. Um, you'll just absolutely love it there. Um, but I love to eat, you know, anything. [00:41:00] Not picky; anything, I'm open to it. I think the only thing I've said no to... we have a wonderful, large team in the Philippines.

I was there earlier this year. They tried to give me the balut.

Ashish Rajan: Oh, what is that?

Shannon Murphy: Um, it's like a, it's like a half... it's like a chicken that's not quite yet a chicken, still in the egg. Oh

Ashish Rajan: wow. Oh wow. Okay. Unborn child. I was like, it's, it's the only

Shannon Murphy: thing I've ever said no to. I'm sorry guys. Maybe next time, but you didn't get me this time.

Ashish Rajan: I did not realize that was actually, like, a thing. Okay.

Shannon Murphy: It's a delicacy.

Ashish Rajan: Oh, wow. Mean, that's why they offered it, obviously. Yeah. It's culturally I can,

Shannon Murphy: I think they were messing with me, but, uh, yeah,

Ashish Rajan: We'll find out. We'll let the internet decide if it's actually a delicacy, or whether Shannon was being put into a trap.

But I think, where can people find out about, I guess, the work that you're doing at Trend Micro and everything else, just to connect with you and know more about the AI blueprint?

Shannon Murphy: Yes, absolutely. Um, I would say check us out at trendmicro.com. Uh, you can look up our white paper; we actually have a whole white paper dedicated to this blueprint [00:42:00] concept.

Um, and you can dig into what we're doing with this really all-in-one approach, a platform within a platform, so that you're able to really centralize all of that visibility, bring the exposure down, and get in front of your risk.

Ashish Rajan: Awesome. Thank you so much for getting on the show and, uh, really appreciate the conversation as well.

Shannon Murphy: Awesome.

Ashish Rajan: Thanks, everyone, for tuning in. We'll see you next time. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on cloudsecuritypodcast.tv, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify. In case you are interested in learning about AI security as well, do check out our sister podcast called AI Security Podcast,

which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you the top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com.

I'll see you in the next episode.
