The Truth About AI in the SOC: From Alert Fatigue to Detection Engineering

View Show Notes and Transcript

"The next five years are gonna be wild." That's the verdict from Forrester Principal Analyst Allie Mellen on the state of Security Operations. This episode dives into the "massive reset" that is transforming the SOC, driven by the rise of generative AI and a revolution in data management.Allie explains why the traditional L1, L2, L3 SOC model, long considered a "rite of passage" that leads to burnout is being replaced by a more agile and effective Detection Engineering structure. As a self-proclaimed "AI skeptic," she cuts through the marketing hype to reveal what's real and what's not, arguing that while we are "not really at the point of agentic" AI, the real value lies in specialized triage and investigation agents.

Questions asked:
00:00 Introduction
02:35 Who is Allie Mellen?
03:15 What is Security Operations in 2025? The SIEM & XDR Shakeup
06:20 The Rise of Security Data Lakes & Data Pipeline Tools
09:20 A "Great Reset" is Coming for the SOC
10:30 Why the L1/L2/L3 Model is a Burnout Machine
13:25 The Future is Detection Engineering: An "Infinite Loop of Improvement"
17:10 Using AI Hallucinations as a Feature for New Detections
18:30 AI in the SOC: Separating Hype from Reality
22:30 What is "Agentic AI" (and Are We There Yet?)
26:20 "No One Knows How to Secure AI": The Detection & Response Challenge
28:10 The Critical Role of Observability Data for AI Security
31:30 Are SOC Teams Actually Using AI Today?
34:30 How to Build a SOC Team in the AI Era: Uplift & Upskill
39:20 The 3 Things to Look for When Buying Security AI Tools
41:40 Final Questions: Reading, Cooking, and Sushi

-------📱Cloud Security Podcast Social Media📱_____________________________________

🛜 Website: https://cloudsecuritypodcast.tv/

🧑🏾‍💻 Cloud Security Bootcamp - https://www.cloudsecuritybootcamp.com/

✉️ Cloud Security Newsletter - https://www.cloudsecuritynewsletter.com/

Twitter:  / cloudsecpod  

LinkedIn: / cloud-security-podcast  

#cloudsecurity #securityoperations #soc #detectionengineering

Allie Mellen: [00:00:00] One of the, the CISOs who's a friend of mine recently said to me like, no one knows how to secure AI. And I think that that's like true. Yeah. Unfortunately, this is a moment of reset. This is a moment of like massive change on many levels, between like the data management stuff, between the generative AI stuff.

The next five years are gonna be wild.

Ashish Rajan: I think you're scaring a lot of L1, L2 people as you, as you say that. I think, is she trying to hint that our jobs are gone?

Allie Mellen: Like, no, no. I wholeheartedly believe that we need the people. Yeah. Involved. And being an L1 has always been a rite of passage. Yeah.

But only because it burns you out and you're miserable, you know? Yeah, yeah. Yeah. The structure evolving to detection engineer, as opposed to an L1, L2, L3, is something that should stay consistent regardless of AI.

Ashish Rajan: Right. How has detection kind of kept up with this?

Allie Mellen: We see a lot of teams that are starting to look at observability data as their foundational detection surface for AI agents.

Ashish Rajan: The security operations team is probably the most disrupted with AI. If you have been watching a lot of the AI disruption that's happening [00:01:00] across cybersecurity, you probably would've come across how the SOC specifically is being disrupted.

How SIEM, how log aggregation, XDR, EDR, they're all up for disruption with AI. To have a look at this particular problem with a closer look and, quote unquote, analyze this, I had Allie Mellen. She's a principal analyst at Forrester, and we spoke about the transition, how the traditional L1, L2, L3 is probably not the right way to approach security operations in an AI world.

How do you separate the marketing noise from the signal in a world where people are trying to solve the SOC problem with AI? How far can we really go versus what the reality is? How do you prepare and build a SOC team today in 2025 with AI being so prolific, and how far can you plan ahead? All that, and a lot more, in this conversation with Allie Mellen.

If you know someone who's working on the security operations problem, whether it's to uplift it or trying to build a new one, definitely check out this episode and share it with others who are trying to tackle the same problem as well. And as always, if you're here for the second or third time and have been enjoying the episodes of Cloud Security Podcast, I would really appreciate if you take a quick second to hit the [00:02:00] subscribe or follow button, for listeners on Apple or Spotify, or if you're watching this on YouTube or LinkedIn.

I really appreciate the love and support you show us. So thank you so much for supporting the work we do by hitting that subscribe or follow button. And, uh, I hope you enjoy this episode with Allie. I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I've got Allie with me.

Thanks for coming on the show. Allie.

Allie Mellen: Thanks so much for having me. I'm thrilled to be here.

Ashish Rajan: Yeah. And on a second time here as well. I know, it's great. It seems to be a pattern, uh, if people haven't recognized it yet. Uh, but could you just give a brief introduction about yourself, your professional background? Would love to hear that.

Allie Mellen: Definitely. Yeah, so I am a principal analyst at Forrester Research. I cover SIEM, or security analytics, as well as nation-state threats, XDR, EDR, and generative AI in security tools. So I already mentioned generative AI in this conversation. I know, I, wait,

Ashish Rajan: I mean we haven't even gotten to start the conversation.

I'll bring GenAI later on again, but 'cause you work in that XDR SIEM space as well. Obviously we had the same conversation about, uh, where that is in 2024. How much of it, or [00:03:00] maybe a better way to put this is, how do you describe security operations and all these tools around it in 2025? What's your viewpoint on that?

Allie Mellen: It's changing so much. I just did the security analytics platform Wave, which is our SIEM Wave, and it was released in June, and it was just incredible to see the differences, like comparing even from the last one, which was 2022. So many market changes. Everything is changing there. The XDR vendors are getting into SIEM in a very real way.

Mm-hmm. A lot of customers are very excited about that. So it's just been fascinating to see. And then on the other side of things, for customers who want like a lot around data management and data strategy, they're like, how do we use a data lake for long-term storage? How can we manage some of the costs that way, but also keep the data in a place where we can still access it for some use cases? Mm-hmm. So it's very complicated, but the market is just, I'm excited about it, because I think we need a little bit of a mix-up. Yeah. And now that's coming,

Ashish Rajan: the new challenges are coming in.

Allie Mellen: Exactly. Yeah,

Ashish Rajan: because I think you [00:04:00] mentioned the, the cost associated with the SIEM as well, which is definitely top of mind for a lot of people. Having been a CISO before, I've kind of realized there used to be conversations about how much log should we really send to a SIEM, and how much log is enough that we still find an incident. If we were to go down the rabbit hole, we find all the parts we need without causing us to have like a million dollar bill or whatever.

Allie Mellen: Yes.

Ashish Rajan: So why has that tension still continued to be a case in 2025 with SIEM platforms?

Allie Mellen: You know, I think there's a couple of factors. Mm-hmm. Right? The first, and I think most important is that you don't wanna miss data that you really need in an incident. And so having access to all that data, it's like, let's just bring it all in and deal with the cost and suck it up.

Not fun though. And not something you wanna do long term. I think it's also really difficult to determine your detection coverage and the detection surfaces that will make up your detection coverage. And so a lot of times you're like, I'll bring in all this data. I won't necessarily like cut any of it down.

Mm-hmm. Because I just wanna make sure that when I build new detections in the future, I have [00:05:00] the data that I need. Now there's a lot changing, which is very cool, because now we have these data pipeline management tools, like the Cribls of the world, DataBahn, Tenzir, those types of things. What's really neat is not only can you reduce the amount of data that you're bringing in and route the data to different locations, but one of the advancements that's happening there is you can kind of draw a correlation between the rules that you have and the data you're bringing in.

So you can actually fine-tune the data that you're bringing in based on the specific detection use cases that you have.

Ashish Rajan: All right, so being able to slice and dice the data the way you want, it's actually possible now.

Allie Mellen: Yes. It's so much more powerful than it was. It's not just like send it all there.

It's like, you can reduce the amount of data that you're bringing in, you can route it to a different location. That's one of the biggest use cases that we see: a lot of teams doing the data reduction as the immediate cost savings. Mm-hmm. The long-term use is, now we can route the data in a bunch of different places.

And then even like redaction, tokenization, you can do all of that [00:06:00] with the data as well. So it's just, the tools that we have now are so different than what we had before, and so much more advanced. And now it's more about, okay, how do you think about this strategically and have the knowledge that you need to make the best decisions for where you wanna bring the data?
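The reduce/redact/route pipeline pattern described here can be sketched in a few lines. This is a hypothetical illustration of the idea, not any vendor's real API: the field lists, the tokenization rule, and the routing condition are all invented for the example.

```python
import hashlib

# Hypothetical sketch of the pipeline pattern described above (reduce,
# redact/tokenize, and route events before they reach the SIEM). These
# field names and rules are illustrative, not any vendor's real API.

# Fields that current detection rules actually reference; anything else
# can be dropped or shipped to cheap storage instead.
FIELDS_USED_BY_DETECTIONS = {"timestamp", "src_ip", "user", "event_type"}
SENSITIVE_FIELDS = {"user"}  # tokenize before long-term storage


def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]


def process(event: dict) -> tuple[str, dict]:
    """Reduce, redact, and route one event; returns (destination, event)."""
    # Reduction: keep only the fields some detection rule can use.
    reduced = {k: v for k, v in event.items() if k in FIELDS_USED_BY_DETECTIONS}
    # Tokenization of sensitive fields.
    for field in SENSITIVE_FIELDS & reduced.keys():
        reduced[field] = tokenize(reduced[field])
    # Routing: high-signal events go to the SIEM, the rest to the data lake.
    destination = "siem" if reduced.get("event_type") == "auth_failure" else "lake"
    return destination, reduced
```

The point of the sketch is the rule-to-data correlation Allie mentions: `FIELDS_USED_BY_DETECTIONS` would be derived from the detection rules you actually run, so the reduction step is tied directly to detection coverage rather than guesswork.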

Ashish Rajan: Is that where the traditional SIEMs tend to fail us, in a way?

Allie Mellen: Honestly, yeah, because a lot of it was just, okay, let's just set up a log collector and bring in all the data. Mm-hmm. And even for a long time, a lot of the SIEMs in the market wouldn't even tell you if the log collection wasn't working; you wouldn't get a notification.

So how do you even know if you're bringing in the logs that you need? Yeah. And so this is really cool, because it's bringing these really great data collection and manipulation features so that you can actually make better decisions. But, you know, it makes sense that the traditional SIEMs wouldn't have wanted to do that, because it, it affects their pricing.

Ashish Rajan: Oh, of course. Yeah. I mean, and you're charging them for storing the data as well in the first place. Yes. That's where the redirection of data comes in. Yes. So, and you mentioned, uh, data lakes as well. Mm-hmm. Uh, and I guess that's usually looked at as a [00:07:00] concept from the other side of the planet, which is basically, hey, I'm doing data analytics and all of that. So do you find that people are building security data lakes as well?

Allie Mellen: Yes. Okay. We have a lot of security data lakes cropping up. It's been like the panacea of, oh, if we could just keep everything in this long-term storage and have it be cheap. But especially with like Amazon Security Lake and what they've done with the Open Cybersecurity Schema Framework, mm-hmm, just the ability to standardize the data a little bit and access it at a low cost. Cribl is also doing this with Cribl Lake. Mm-hmm. There's a lot of different options. Even Microsoft just recently released, this is still within Sentinel, but their data lake tier of storage, which is much cheaper; it's just expensive if you wanna bring out the data and use it.

And so that plays directly into the conversation about, um, data storage and data cost optimization.
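The standardization benefit attributed to OCSF above can be sketched loosely: once logs from different products map into one shape, a single query works across all of them. The field names below are simplified illustrations, not the authoritative OCSF schema.

```python
# A loose sketch of what schema standardization (in the spirit of OCSF)
# buys you: logs from different products map into one shape, so a single
# query works across all of them. Field names here are illustrative and
# simplified, not the authoritative OCSF schema.

def normalize_cloudtrail(raw: dict) -> dict:
    """Map a (simplified) AWS CloudTrail record into the common shape."""
    return {
        "time": raw["eventTime"],
        "actor_user": raw["userIdentity"]["userName"],
        "activity": raw["eventName"],
        "source_product": "aws_cloudtrail",
    }

def normalize_okta(raw: dict) -> dict:
    """Map a (simplified) Okta system log record into the common shape."""
    return {
        "time": raw["published"],
        "actor_user": raw["actor"]["alternateId"],
        "activity": raw["eventType"],
        "source_product": "okta",
    }

def failed_activity(events: list[dict]) -> list[dict]:
    # One query over every product, because the data now shares a schema.
    return [e for e in events if "fail" in e["activity"].lower()]
```

In practice OCSF defines the common shape (event classes, attribute names) so each producer ships its own normalizer; the payoff is the last function, a single detection query instead of one per product.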

Ashish Rajan: Yeah.

Allie Mellen: And the way that teams had been doing this prior was, they were like, okay, we're just gonna throw it in an S3 bucket. Mm-hmm. Or we're just gonna like throw it into Azure Log Analytics.

Ashish Rajan: Yeah.

Allie Mellen: And the SIEM vendors were always like, no, thank you. We can figure out a [00:08:00] way to, to, yeah, still make money off of this.

Ashish Rajan: Actually, because the one other thing people talk about is just the difference in the depth. There's one thing for it to be able to detect everything, for you to be able to query it. The other challenge on that side was flexibility, and being able to have some kind of depth of coverage, for lack of a better word.

Allie Mellen: Yes.

Ashish Rajan: In terms of, like, me as a SOC analyst, I would not even know, uh, sometimes what I'm looking for, especially if it's a cloud environment or, God forbid, an AI environment, where they're trying to figure out what log is relevant.

Like, Ashish said this, that he wants a salary upgrade. Is that like a hack? I know, it could be anything. So, is that also playing a role in this?

Allie Mellen: Definitely, because it's also, I mean, one of the things that has made this so difficult is that the SIEM is not just a detection and response tool. There's also the compliance reporting that you can do with it.

There's the dashboarding, there's the use cases for vulnerability management and threat intel and threat hunting. And so there's so many different things you need data for, and you have so many different stakeholders that you have to deal with. Yeah. So [00:09:00] making those kinds of data decisions, like, you need a dedicated person to be making the data strategy decisions, especially if you're in a large organization.

Ashish Rajan: I almost feel like what you're suggesting is, is there a reset coming for the SOC teams, the way they operate and work today?

Allie Mellen: Oh, we are. This is a moment of reset. This is a moment of like massive change on many levels between like the data management stuff, between the generative AI stuff. The next five years are gonna be wild.

Ashish Rajan: Interesting, because obviously we spoke about this topic of how to build a SOC team in 2024 last year, and now, at our second Black Hat, we're talking about how basically the entire thing is gonna be reset.

Allie Mellen: It's crazy. The shift has been so crazy because I am like a huge AI skeptic.

Ashish Rajan: Okay.

Allie Mellen: Anytime I see AI, I'm like, I don't believe that you're actually doing this.

Ashish Rajan: And yet you've mentioned AI like 20 times already. Okay, we're like five minutes into the conversation. You're like, I was a skeptic, but I just mentioned it 20 times.

Allie Mellen: Well, this is the thing. This is the first time I'm actually excited about it.

Ashish Rajan: Oh, right, okay. So

Allie Mellen: now it's like we're [00:10:00] off to the races with it, and I do think that the things that we're seeing now are going to fundamentally change how security operations teams use it. Okay. And to that point, it's also going to be a matter of changing how the team operates as a whole. Like, I still think, and I believe this wholeheartedly, that detection engineering is extremely important, but this does kind of change the difficulty of detection engineering and detection coverage when you have agents that can serve as a threat intel agent, or start building detections for you, or start evaluating the environment for where there might be gaps. Mm-hmm. Like, that's where we get into a new level of agility and speed for the security operations team.

Ashish Rajan: Yeah. I think you're scaring a lot of L1, L2 people as you, as you say that. I think, is she trying to hint that our jobs are gone?

Allie Mellen: No. No. Okay. That's the thing. I wholeheartedly believe that we need the people. Yeah. Involved in this. Yeah. I know, I go back and forth on this. I'm actually kind of curious about your [00:11:00] perspective on this, because sometimes I'm like, oh, how is this gonna change the L1 role?

What are the skills they're gonna need to develop? Mm. Because it's not gonna be the hands-on type of experience that we're used to. And then I'm like, well, we've been dealing with technology making these changes forever. Yeah. Right. Yeah. What do you think of that? Do you think that this is like

Ashish Rajan: No, I, I would love to hear your response too, but I think I personally definitely feel a lot of us have gone through similar changes. Our parents have gone through similar changes. Like, my dad used to be a typist, and I don't do typing. The only reason I know that is because I found a notebook which had all these weird symbols in it, and I'm like, what is that?

And he's like, it's called shorthand. And I'm like, what is shorthand? He's like, come on, you don't know? I'm like, no, I don't, but he typed all day. I'm like, yeah, but I don't type shorthand, or whatever. So I got to discover that he had gone through a transition, from being a typist being an actual full-time job, to the fact that someday,

oh, you just don't need to do that anymore. Yeah. [00:12:00] Like, we've gotten to the point where I can just voice record myself, and it translates that into a transcript and does all of that. So I feel like the SOC teams who are level one today, I don't think they have anything to be afraid of. Mm-hmm. There's a whole thing about whether super intelligence and everything else can actually completely do what we want to do.

Yeah. But if I were to bring it back to the real world, where super intelligent, uh, AI is not there right now, the level one would be doing better things with their job in hand, instead of going through 10,000 alerts and figuring out, hey, which one of these 10,000 alerts that I go through every day is a false positive versus not.

We are definitely at a point where you can triage it to an extent where the 10,000 reduces to, say, a hundred or 200 or whatever the list may be, because it doesn't have the context of everything else. So you can safely assume that a lot of the job now, if I was a level one today, is that time saved from me not being frustrated by my on-call duty or whatever that thing was.

And I'm not being frustrated by the fact [00:13:00] that I see the same thing again and again.

Allie Mellen: Yes. It's just

Ashish Rajan: different applications. How can I just, and being able to pattern recognize, we can see that if we could see 10 things; we can't do that when we see 10,000 things. Mm-hmm. Which is where AI can help amplify that.

Amazingly, I think, to your point, I'm excited for that perspective of AI usage in our scenario. Uh, and I definitely find there is a transition that's gonna happen where L1, or sorry, level one, would become level two, because that entire field almost only existed because that was not a solved problem.

Allie Mellen: Yes.

Ashish Rajan: When that is a solved problem, people are now like, hey, the machine is taking care of that. I don't need a human for it. I want the human for the next job above.

Allie Mellen: Yeah.

Ashish Rajan: How do I add the business context into it? How do I make sure that the potential 5%, or the thousand alerts that are discovered, are potentially vulnerable?

How do I add some context into it? Can I learn detection and response? I think, uh, we had a conversation yesterday for the AI Security Podcast with the [00:14:00] researcher from Microsoft, uh, who's doing some threat intel work, and he was talking about how Microsoft is releasing a tool where anyone can become a reverse engineer using LLMs.

Reverse engineering is one of those things. It used to be like, I even made a confession during the episode. I was like, I tried for one hot second and I was like, this is not for me. I'd rather be something else than this. But he's saying that the bar is so low now that if you have some context, and most L1s are

Allie Mellen: Pretty smart people, right?

Ashish Rajan: Technical people. I think they're just gonna graduate to L2. And, but obviously you have your perspective on how the L1, L2, L3 may not even exist. So I'm curious to see what you're seeing.

Allie Mellen: Yeah, I think that it's a great point and I think that's one of the reasons why when I look at the AI tools today, I am like all about explainability.

Yeah. Because I think we do have a challenge here where, unfortunately, a lot of these tools have been marketed towards like new analysts. I don't agree with that at all. Mm-hmm. I think that's a huge mistake, because they're wrong sometimes.

Ashish Rajan: Mm-hmm.

Allie Mellen: And we don't need our L1s [00:15:00] to be learning from something that's wrong.

Ashish Rajan: Yeah.

Allie Mellen: And end up thinking they can trust it. But I do think that once we get to the point where these can really start to make a difference, it's gonna almost be like QA. Like, you're gonna be walking through the steps and being like, is this actually doing what's expected? Is this returning what I want?

And then making the determination and doing the response. Yeah. Because we should never be automating that, with exceptions for where you're okay with some risk.

Ashish Rajan: Yeah.

Allie Mellen: So I do think it's a big opportunity and my hope is that it, like, lets people focus on things that are more important. Mm-hmm.

And also, to your point, like, being an L1 has always been a rite of passage. Yeah. But only because it burns you out and you're miserable, you know? Yeah, yeah, yeah. And so if we can remove that, that's, that's a good thing. Yeah.

Ashish Rajan: I think many people don't even move to L2 'cause they're just so sick of it.

Like, oh, literally, is there anything else I can do, though, where I don't have to be awake at odd hours?

Allie Mellen: Oh. The number of people that I've seen leave the industry and be like, I have enough that I can go into tech and have a very cushy job now. Yeah. So I might as well just do that instead. Yeah. And not have to

Ashish Rajan: be awake at odd hours and try to figure this out.

Yes, exactly. [00:16:00] At 3:00 AM in the morning: why is this thing publicly available? Why was this not stored by developers to begin with? Yeah. So, but do you find the structure needs to change or evolve for that?

Allie Mellen: I do think so. Like, I still think that the structure evolving to more of a detection engineer model, as opposed to an L1, L2, L3, is something that should stay consistent regardless of AI, right?

Because the teams that we see do that, they just uplevel people so much faster. And now, especially if we have the AI that's able to pull all this context in, they can uplevel faster as well. And so I do think that breaking down that structure is very important. Turning them into detection engineers, so that they are not only responding to these alerts, but also helping to optimize and improve them and build new detections, is where we start to see that infinite loop of improvement that we really need to start seeing.

And then, once they're detection engineers, that's where they can start to experiment with, like, I wanna look at threat intel for this detection; I wanna do a threat hunt and see if I [00:17:00] can find something that's worth turning into a detection, that type of thing. And we actually get them into the interesting stuff.

Ashish Rajan: Yeah. That, that's music to my ears. Obviously we had the conversation with the Microsoft person on the AI Security Podcast; we had a conversation at BSides SF with the detection and response team at Anthropic. And I think Jackie Bow's, uh, episode is on there as well, and she was talking about how they're using hallucination as a feature to identify new kinds of detections.

Like, oh my God. Yeah. So I'm like, oh, tell me more, what do you mean? Uh, I love the explanation where, as humans, we are tuned to certain kinds of patterns: hey, I know there is an excess here, or whatever the specialty needs to be; maybe I can identify that with my eyes closed, but I don't know what new patterns exist.

There are new kinds of complexity, new kinds of connections, and the way she and her team are using it is like, hey, why not just let it decide what the new detection should be? You can still be a human in the loop and go, hey, this actually doesn't make sense. Yeah. But then she said, [00:18:00] every now and then, there's one feature where I did not even think that's possible.

That's cool. So it almost makes you feel that what we have been calling out as a shortcoming is potentially a feature. Mm-hmm. We just don't market it as a feature. We just talk about, oh, it's hallucinating, so it's all bad. But in certain scenarios, that's actually a feature.

Allie Mellen: Oh yeah. It can teach you a new way of doing things.

Oh my God. Which is kind of crazy. Hundred percent.

Ashish Rajan: But to your point, if most people would become detection engineers and uplevel themselves, where do you, and maybe, I guess, how is AI being used in the SOC today? How much of it is reality versus marketing fluff?

Allie Mellen: It depends on the vendor you talk to. I'll say that the majority have like query language translation from human language to whatever query it is or have incident summarization.

Mm-hmm. So you can see what's going on in an incident in a human-readable way. They have a chatbot. But to me, these are the least interesting use cases. Like, when that was coming about, I was like, okay, AI is not gonna do very much for us. The really cool [00:19:00] stuff is in the investigation and triage agents, the agents that are domain-specific.

They're responsible for the task of, like, we're gonna triage or investigate a phishing email, and we're gonna give you a recommendation as to what to do.

Ashish Rajan: Or just that, like...

Allie Mellen: Very specialized. Very specialized. Okay. And the specialization is actually so important, because what we really need is kind of like a microservices approach to AI.

Where you have a bunch of individual task agents that do specific things, but that's all they do. That's all they know how to do. That is their responsibility. Yeah. And they don't even have to be special-purpose models. Yeah. It's just that the prompts that you're giving them ahead of time are very specialized to the task.

Ashish Rajan: Yeah. Yeah.

Allie Mellen: And so once you have that, you can not only have them interacting, but that's when you can start to build an agentic system where they're talking to each other and helping make decisions that way. So that's how you maintain the efficacy. Now, there are a couple different vendors, especially in the XDR and SIEM space, who have released AI agents that are doing triage and, in some cases, [00:20:00] investigation.

We're not really at the point of agentic, despite what all of the messaging in the world will tell you right now.
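The "microservices" agent pattern described above, one general-purpose model with many narrowly scoped task agents whose specialization lives entirely in their prompts, can be sketched like this. Every name here (the class, the prompt, the router) is a hypothetical illustration, not a vendor product or a real API.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of the "microservices" agent pattern described above: one
# general-purpose model, many narrowly scoped task agents whose
# specialization lives entirely in their prompts. All names here are
# hypothetical illustrations, not a vendor's product.

@dataclass
class TaskAgent:
    name: str
    system_prompt: str  # the specialization: scope, steps, output format
    llm: Callable[[str, str], str]  # (system_prompt, user_input) -> reply

    def run(self, alert: str) -> str:
        return self.llm(self.system_prompt, alert)


PHISHING_TRIAGE_PROMPT = (
    "You triage suspected phishing emails, and only that. "
    "Given headers and body, reply with: a verdict (phishing, benign, or "
    "unsure), the indicators you relied on, and a recommended next step."
)

def route(alert_type: str, agents: dict, alert: str) -> str:
    # An orchestrator hands each alert to the one agent responsible for
    # that task; agents never handle work outside their scope.
    return agents[alert_type].run(alert)
```

Plugging a real model client in as `llm` gives a working triage agent; swapping the prompt, not the model, is what makes the next agent specialized, which is the point being made in the conversation.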

Ashish Rajan: Oh, wait, we don't? I mean, that's like, it blew everyone's mind, like, there's no agentic. Yeah, right.

Allie Mellen: Go

Ashish Rajan: bigger.

Allie Mellen: But it is really interesting to see some of the early AI agents and the impact that they're having. Now, I'll caveat all of this with:

Efficacy is incredibly important. Mm. Because, to your point, hallucinations can be good, they can be bringing about new ways to do things, but not if you're gonna be building an agentic system. Yeah. Yeah. I mean, maybe, because you could have them interacting, but you still want one of these guys to be very accurate.

Ashish Rajan: Poor thing. Yeah. Yeah, yeah,

Allie Mellen: Yeah. Exactly. And so that's one of the biggest questions that I'm asking: one, what's the explainability like? Mm-hmm. Is there an opportunity for you to look at the steps that the AI took, and then potentially rerun something and be like, this step was incorrect, you're gonna rerun it until you get it right?

Kind of thing. Yeah. And then the other part of that is, how are they doing testing? Mm-hmm. And this is where a [00:21:00] lot of this falls apart, unfortunately. There are exceptions, which is why I'm still excited about it. But in a lot of cases it's either, oh, we have a thumbs up, thumbs down, okay, and we're crowdsourcing accuracy, which to me is a nightmare, because they're marketing this to new people in the field.

Yeah. And then they're expecting them to give the right responses as to whether it's right or wrong. Yeah. Makes no sense. No. Yeah. But others are doing more of a golden data set approach, where they run the golden data set through, have like 500 different responses that they gather, and then manually compare: is this an expected response or not?

Because the response can be different and still be, right.

Ashish Rajan: Yeah.

Allie Mellen: And so that is, that's a step in the right direction, but still, it's so manual, and we don't really have the automated testing infrastructure to do this at scale. Mm-hmm. The exception to this is with some of the service providers, 'cause they typically have pretty talented staff who have done this for a long time.

They're responding to these alerts day in and day out. And so [00:22:00] they are the ones who, at scale, can look at the responses from AI and say, yes, this is what I would've done, or, no, this isn't. And so they can do testing in a much bigger way, do QA in a much bigger way.
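The golden-data-set QA loop described above can be sketched in a few lines: replay a fixed set of alerts through the agent, auto-pass responses that match a known-good answer, and queue everything else for human review, since, as noted, a different response can still be correct. All names here are hypothetical, a minimal sketch of the workflow rather than any vendor's implementation.

```python
# Sketch of the "golden data set" QA loop described above: replay a fixed
# set of alerts through a triage agent, auto-pass responses matching a
# known-good answer, and queue the rest for human review, since a
# different response can still be correct. Names are hypothetical.

def normalize(text: str) -> str:
    """Cheap normalization so trivial formatting differences don't fail QA."""
    return " ".join(text.lower().split())

def evaluate(agent, golden_set):
    """agent: alert -> response. golden_set: list of (alert, expected) pairs."""
    auto_pass, needs_review = [], []
    for alert, expected in golden_set:
        response = agent(alert)
        if normalize(response) == normalize(expected):
            auto_pass.append(alert)
        else:
            # Not necessarily wrong: a human compares response vs expected.
            needs_review.append((alert, expected, response))
    return auto_pass, needs_review
```

The manual bottleneck she describes is the `needs_review` queue: every response that doesn't exactly match still has to be judged by a person, which is why this approach doesn't yet scale without experienced staff.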

Ashish Rajan: Yeah, and I definitely relate to the point that we are not truly agentic, in a lot of ways. And I'm sure people would fill the conversation with, you're not truly agentic, and this is what true agentic is, or whatever. But I definitely find that people are also open to the idea of being, quote unquote, agentic, whatever the version is today.

As to what you said as well, are you finding that, compared to the 2024 version of what people were trying to sell as, hey, this is where agentic is, the customers are smarter about this as well? I'm sure there's marketing on the other side, but are they also smarter about the fact that, hey, that sounds like bullshit?

Allie Mellen: I think that security, maybe I'm just biased, but I think security people are unique in that they tend to be very technical and also very skeptical. Yeah. And so it's very rare that I run into someone that is trusting exactly what people are saying about AI. That said, I get a lot of questions on it, and it's a lot of, okay, what is real and what's not real?

Not because they can't figure it out, but because they don't have time to be sifting through all these messages. Yeah. And we at Forrester do so much research on this. Like, one of the biggest benefits for me is it's not just me, like, uh, going crazy behind a keyboard, writing up something. You know, we have a lot of analysts that only cover AI.

Yeah. And so I interface with them regularly to be like, stack me up: is this right? Is this wrong? Is this a way that I should be approaching this? We have a definition of agentic AI from that team that I also collaborated with them on, and so it helps a lot, because I don't have to do everything alone. And I can also get that validation of, like, am I just talking crazy right now, or is this something real? Am I just in an echo

Ashish Rajan: chamber? Just an echo chamber?

Allie Mellen: Exactly.

Ashish Rajan: Fair. Wait, so what is your definition of agentic AI then?

Allie Mellen: Yes. So agentic [00:24:00] AI is when you have AI agents working together in an autonomous way. Yeah, multiple of them. Now, we see that across disciplines, whether it's security or product management or marketing.

There's a spectrum of this, right? There are the AI agents that are standalone, or maybe they're working together, but only really through an automation or orchestration platform. It's still very brittle. Then you get to the next point, where the AI agents can work together with specific other agents to accomplish things. And then what the team considers the mecca (I'm not convinced, they still have to convince me) is where AI agents can work with any other AI agent

and come to the decisions that they need to come to. They call it the executive agent approach.

Ashish Rajan: Right. Or the judge, or whatever you wanna call it, I guess. Yeah, there are a few names for it. Yeah.

Allie Mellen: The LLM-as-judge is so interesting to me, because it's [00:25:00] very... I still feel like it's so aspirational.

It's like, okay, you're having something that's sometimes wrong decide whether something else that's sometimes wrong is right. And it's just like, I don't know. So I do see some teams using it with success; I see other teams that are struggling. Now, the other thing that I'll say is, one of the reasons I am excited about this is because I've talked to a couple of customers who are building their own LLMs for their security team, and they're building them to be triage and investigation agents.

And so to me, when I see a team that's successfully doing this in an end user environment, that's where I'm like, okay, there's something to this. Yeah. You know, because they can't justify it otherwise.

Ashish Rajan: Yeah.

Allie Mellen: That's been kind of cool.

Ashish Rajan: And I'm glad you mentioned it, 'cause I definitely find agentic AI... at least my definition matches yours, so I'll give you that. And I'd also agree on the superintelligence and everything else. Yeah. Apparently, as per some people, we could record an entire episode just on [00:26:00] superintelligence and what it should be. And there is something above that as well. I'm like, okay, clearly I'm in a crazy world these days.

But I also find that these days, to what you said, the structure of security operations is changing. Yes. The environments are different now. It's not just agentic AI; there are agentic workflows. How far has detection come for understanding AI-related threats? Because last year we were primarily talking about things like, hey, I've got security vulnerabilities in my cloud environment, security vulnerabilities in my AppSec, in my functions and everything else. Now it's basically all of that, plus I have some agents floating around as well, which have their own identity and everything else. How has detection kept up with this?

Allie Mellen: It's a great question, and I'm very glad you asked it. Mm-hmm. One of the CISOs, who's a friend of mine, recently said to me, no one knows how to secure AI. And I think that that's [00:27:00] true, unfortunately. That said, we did just release a framework that I'm really excited about, which is a big collaboration between a lot of our analysts, led by one of our analysts, Jeff Pollard. It's called AEGIS, and it's our framework for securing AI agents and agentic AI. One of its six core pillars is detection and response.

Ashish Rajan: Mm-hmm.

Allie Mellen: And so I'm very excited about that. But it's also daunting, because it's so much more complicated. This is actually one of the reasons why I recommend detection engineering so much: detection engineering is ultimately about aligning to the processes that the product team has, the processes that you would expect from the cloud,

and making it so that you are able to iterate at the same pace and capacity as those other functions. So it's about keeping up with them. Now, to this end, when we talk about detection and response for these AI agents, a lot of it comes down to: how are you looking at the observability data that you can get?

That's a very [00:28:00] important starting point to determine whether there is something off here: maybe it's an issue with the agent, or maybe it's a security issue with the agent. And so we see a lot of teams that are starting to look at observability data as their foundational detection surface for AI agents.

And there are other detection surfaces within that, it's not just observability, but especially for agents you're building in house, that's a good starting point for what should normal behavior be, and what should not-so-normal behavior be. But it is so new that I don't see a lot of teams that have figured out quite how to balance detection and response with the QA work of identifying issues that are not really security issues with the AI agents themselves.
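The idea Allie describes, baselining normal agent behavior from observability data and flagging deviations, can be sketched in a few lines. Everything here is illustrative: the per-hour tool-call counts, the field semantics, and the z-score threshold are assumptions, not taken from any real agent platform or tool.

```python
from statistics import mean, stdev

# Hypothetical per-hour tool-call counts from an AI agent's observability
# pipeline (e.g. traces exported to a metrics store); purely illustrative.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

def is_anomalous(observed: float, history: list, z_threshold: float = 3.0) -> bool:
    # Flag an observation more than z_threshold standard deviations from
    # the baseline: a crude stand-in for "what should normal behavior be?"
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

print(is_anomalous(14, baseline))  # a typical hour
print(is_anomalous(90, baseline))  # e.g. a runaway or hijacked agent
```

In practice the baseline would come from the agent's exported traces or metrics, and the statistic would be something more robust than a z-score, but the shape of the check, learn normal, then flag deviations, is the same.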

Ashish Rajan: I think, um, something to add there, and I'm sure you would've observed this as well: a lot of people are using cloud providers to build their AI [00:29:00] capability or application or whatever, and we had this conversation with a healthcare company.

It was really fascinating that they're using AWS and Azure to build their AI capability. And in that conversation, I think the episode should go out soon, what we came across was that there's actually not enough proper logging available for AI as well. To your point about detection being done on the logs you're producing: if you don't even know whether what you're producing is AI or not AI, that's a gap in the market at the moment. A lot of people believe that if I buy a vendor, then hey, all my AI problems are solved. But if you take a step back, to what you said, for people in enterprises building their own AI-embedded applications, one of the things they're not even talking about is that there's not enough logging available for you to even identify it. We spoke about SIEMs before: hey, which log should I take over here? I get a ton of logs, but is it the right log [00:30:00] for the model that I care about, or the model and the application that I'm building? We haven't even opened that Pandora's box.

Have you kinda seen any of that as well?

Allie Mellen: It's so true. It's so complicated. A couple of years ago I wrote a report with one of my other colleagues on how to do detection and response for homegrown applications. Mm-hmm. And to me, it's the same thing: it's so manual. Yeah. It takes so much interfacing with the AI team or the application team to even make a determination of what good looks like and what good doesn't look like.

It's very tedious right now. Yeah. And then you put on top of that that, A, in a lot of cases, the agents that are being built, especially if they're interacting with MCP, have so many security issues in and of themselves. And typically the permissions that they're getting from an identity perspective are way higher than they should be.

Yeah. We talk about this in the report as least agency, mm-hmm, and ensuring least agency. And if you're not there, then doing detection and response is gonna be miserable. It's gonna be miserable because you're gonna have so many false positives, you're not gonna know what's going on. [00:31:00] So yes, a hundred percent.

It takes a lot of hardcore detection engineering work right now. Yeah. Because it's not standardized. Yeah. There are too many issues from a protection and identification standpoint, let alone getting to detection and response.
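Allie's "least agency" point, that agents typically hold far more permissions than they actually use, reduces in its simplest form to a set difference. The permission names below are hypothetical, invented purely to illustrate the review; they are not drawn from any real identity provider or IAM scheme.

```python
# Hypothetical permission strings for an AI agent's identity.
def excess_agency(granted: set, required: set) -> set:
    # Permissions the agent holds but never needs: candidates for removal
    # in a "least agency" review, before detection and response even starts.
    return granted - required

granted = {"tickets:read", "tickets:write", "users:delete", "billing:read"}
required = {"tickets:read", "tickets:write"}
print(sorted(excess_agency(granted, required)))  # the over-grants to revoke
```

A real review would compare the permissions an agent's identity holds against the ones actually observed in its audit logs, but the principle is the same: anything in the difference is noise waiting to become a false positive, or worse.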

Ashish Rajan: Yeah. Yeah.

Allie Mellen: And so it's gonna take time. Yeah.

Ashish Rajan: We don't even know how we can isolate an AI application. I don't even know if you can actually isolate the LLM. Do you just call up OpenAI and go, hey, Sam, I'm just wondering if you can let me hook up my server here for two seconds? Yeah, I definitely find... we spoke about incident response as well yesterday.

I personally feel we haven't even needed to revisit incident response in the cloud context, it's been decades of it, and now we're in this world of AI. We haven't even caught up with the cloud world of incident response, but now we have another vector there. It's so true. Yeah.

And so, as much as I would not like to be a skeptic in this context: you mentioned that people are doing AI, using AI. Are [00:32:00] SOC teams already using AI, in the conversations you're having? Maybe they don't have the most modern SIEM or XDR with the AI capability, even if it's just the one which gives you a chat interface.

Are you seeing people starting to adopt AI within SOC teams as well?

Allie Mellen: Yes. Um, there are kind of two sides to this. First off, I totally agree with you on the cloud front. That is one of the things, and this is why I think security operations and detection engineers are gonna have jobs for a long time to come,

'cause we're constantly finding new detection surfaces that we have to track, and it's a mess. But on how SOC teams are using it: I have seen SOC teams using it, which is really exciting. I will say it's mostly large enterprises who want to experiment with this. Yeah. But that's one of the things that's been so validating to me.

I've been like, you guys are not gonna like these features, they're not that exciting. And then they come back and they're like, yeah, we don't really use 'em. Yeah. With one exception. The chatbot is interesting, because when I saw the chatbots [00:33:00] originally, I was like, this sucks. Who's taking time outta their day to go and use this?

Yeah. It's a novelty. They're gonna do it for five minutes and then they're never gonna look at it again. The exception has been Gen Z, who have been using it a lot more. Really? Isn't that crazy? Interesting. It's so interesting, and I think it's twofold, right? First off, obviously they're more exposed to chatbots than other generations, and to good chatbots, not just what we remember from 10 years ago. Yeah. Those were terrible.

Ashish Rajan: And they're primarily on messages, they don't do calls, they're texting you. Exactly. Like, why are you calling me? Text me. I'm like, well, for you to answer the phone. But yes, anyway. Yeah. Yeah. Okay.

Allie Mellen: There's that angle, but I think also, since they're newer to the field, they have more questions that they wanna ask. And so they're trying to understand: okay, what does this threat actor typically do? Looking at product documentation: how should I approach this particular thing?

So that's been interesting. Mm-hmm. But overall, again, those features haven't been the value add for what they're costing. Now, I do [00:34:00] think that's gonna change with the triage and investigation agents; that's what I've heard from the customers that are using them. But it's much less common than the other features, which are more standardized.

Ashish Rajan: Yeah.

Allie Mellen: And also like one of the things that we saw in the wave was I think there was one vendor at the time that had triage and investigation agents out of all 10 that were included. And these are the 10 biggest, most important providers in the market. So it's new, you know. Yeah. We're just starting to see it, but it's definitely something where we're seeing the value out of that far more than some of these other features.

Ashish Rajan: Yeah. And I'm glad you called out that SOC teams in large enterprises are primarily the ones doing this, 'cause they have the time for it. Yes. Because everyone else is primarily being bogged down by the 10,000 alerts that they have to look at as well. So, how do people build SOC teams today?

I feel like we almost need an updated version of the 2024 one now. If I'm a SOC leader or a CISO trying to build a SOC team, even if it's to [00:35:00] outsource it, or to build in-house, what am I looking for in the SOC team that I'm building? What are some of the things you're recommending to people? I feel like in large enterprises it would primarily be a SOC uplift, so maybe we can start there.

What should a SOC uplift consider in terms of things they need to plan for? Where are we going with this? What are your thoughts there?

Allie Mellen: I think, fortunately and unfortunately, it now has to be people who are interested in and actively using AI. I think we talked about this last year as well: AI isn't going to replace you in security, but if you don't use it, someone who does is going to replace you. Yeah. Because they're using it well.

Ashish Rajan: Yeah.

Allie Mellen: And so that, unfortunately, needs to be a fundamental part of this: what is your aptitude and willingness to start adopting AI? And then, of course, the typical skills. I think we're gonna constantly see that skills like network fundamentals, OS fundamentals, a clear [00:36:00] understanding of technology, are critical. And then on top of that, if you're starting net new, I always recommend trying to get someone as senior as you can, obviously, 'cause you're gonna need that type of support. Yeah. It's very difficult. But for the uplift scenario:

we talk about T-shaped people. People who have empathy, that's kind of the crossbar of the T: do they have empathy and understanding for other disciplines? That's very important, especially in an incident response scenario. I can't tell you the number of times I've had employees come to me and say, the incident response team just doesn't care about me at all. Yeah. They're sitting there feeling all this dread 'cause they caused an incident, and the IR team is like, we're out. Yeah. You know, not great. But then the experience and the actual knowledge is the other part of the T: actually understanding how computers work, what a threat actor typically looks like, and having that ability to grow and [00:37:00] learn more, with a solid foundation to build from. We try very hard to recommend to clients, wherever possible, to look for that aptitude, and instead of focusing on who you're going to hire, to focus on how you're going to upskill the people that you do hire, because it makes such a bigger difference.

'Cause at the end of the day, everybody needs something or is missing some component. Yeah. And that training is so fundamental, and we see that in the data too. One of the questions we ask every year in our security survey of thousands of security decision makers is: what factors into your purchase decisions when it comes to products or services?

And one of the main things is consistently the training that's available and whether it's actually high quality. Mm-hmm. Unfortunately, in most cases it is not, but when it is, people love it and it helps the team so much. It becomes a requirement for the team: you have to take this vendor's training for this particular thing. And it makes a huge difference for them.

Ashish Rajan: And I guess, to your point, it's a great career move for that individual as well, because they get whatever company certification there is onto their [00:38:00] resume and all of that as well.

Allie Mellen: Exactly. And it's great for the team, because they're coming up in this company. First off, that company took a chance on them when they maybe didn't have the experience. And also, ideally in a detection engineering structure, they have mentors, the people who are more experienced, and then they become the mentor. So not only did they learn all of this and get all this training, but they also feel connected to the team, because they are a part of the team and a part of upleveling people on the team.

Ashish Rajan: Yeah. So to your point, should the leaders in SOC teams plan for: how do I uplift my current team? And in an AI world, which is already complex, where would you say they should focus? I don't know if "technology coverage" is the right word, because to what we were talking about, there's already a lot of marketing around agentic AI, and how much is truly agentic versus what you can really do with it today.

Would you say [00:39:00] that people making decisions today should definitely consider AI capability, either building it in their team or having a tool or vendor solution that actually has it, one which is not just a chat feature? Although, mm-hmm, Gen Z would require that, but you know what I mean.

So where should they focus? If I'm making a purchase decision today, what must I have in my decision tree? Are there any top three that come to mind?

Allie Mellen: They need to be evaluating the AI capabilities, for sure. I can't stress how much of a pivot point this is for security operations, and how much this is gonna change the speed at which we're able to respond, especially doing the investigation, doing the triage. Yeah. And we're gonna see it on the other side too, right? If we're building these agentic systems, the attackers are eventually going to do this as well. Yeah. It's gonna start with nation states.

We probably have more years than we think we do, but it's [00:40:00] going to be very interesting to see how they approach much more dynamic exploits, much more dynamic identification of vulnerabilities, much more dynamic privilege escalation. Mm-hmm. And I'm not even going to the...

Ashish Rajan: because the day or de Yeah.

Yeah.

Allie Mellen: I don't think that's realistic. Right. Yeah. Because AI is just really good at finding patterns in things that have already happened. Of course. Yeah. But unfortunately, most organizations have something that is not patched the way it should be, or multifactor authentication isn't working on this particular system or isn't implemented, and that's what they're gonna be looking for.

Yeah. And so I think that evaluating the generative AI capabilities is fundamental. The second piece is the analyst experience, which is so important right now and is going to be so important, mm-hmm, because of the explainability element that comes into play. Yeah. And then I do think automation actually remains very important in this conversation [00:41:00] and is something that security teams should be thinking about.

Because if you have outlined incident response processes and you know the steps you're supposed to take, you're able to automate them. One of the areas I think is really compelling for what's going to happen with generative AI is the SOAR playbook: enabling the analyst to customize when they want to actually invoke the generative AI tool.

'Cause you could just have your typical SOAR playbook, and then at the end you're like, I wanna call out to an LLM and have it do these six prompts. Mm. That's very useful, and it's very customizable, as opposed to what's being built into vendors, where they're just doing what they think is best.
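The pattern Allie sketches, a deterministic SOAR playbook with an analyst-chosen LLM call-out at the end, might look something like this. Everything here is an assumption for illustration: `call_llm` is a placeholder for whatever LLM client a team actually uses, and the step function and prompts are invented; no real SOAR or LLM API is being modeled.

```python
from typing import Callable, Optional

def call_llm(prompt: str, context: dict) -> str:
    # Placeholder for a real LLM client call; returns a canned string here.
    return f"[LLM answer for: {prompt}]"

def run_playbook(alert: dict,
                 steps: list,
                 llm_prompts: Optional[list] = None) -> dict:
    # Deterministic, pre-approved playbook steps run first, in order.
    for step in steps:
        alert = step(alert)
    # The generative step only runs if the analyst opted in for this playbook.
    if llm_prompts:
        alert["llm_notes"] = [call_llm(p, alert) for p in llm_prompts]
    return alert

def enrich(alert: dict) -> dict:
    # Hypothetical enrichment step, e.g. a geo-IP lookup.
    return {**alert, "geo": "lookup(" + alert["src_ip"] + ")"}

result = run_playbook(
    {"src_ip": "203.0.113.7"},
    steps=[enrich],
    llm_prompts=["Summarize this alert", "Suggest next steps"],
)
print(result["llm_notes"])
```

The design point is the ordering: the repeatable automation does its work first, and the LLM is invoked last, on demand, over an already-enriched alert, rather than being wired invisibly into every step by the vendor.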

Ashish Rajan: I love that. Thank you for sharing that, and thank you for answering the technical questions. I've got three fun questions, which you've done before, so I wonder if your answers have changed now. First one: what do you spend most of your time on when you're not trying to solve the security operations problems of the world?

Allie Mellen: Good question. So I love to read. I have a ban on reading right now, and I'm holding to it because I'm writing a [00:42:00] book. If I read, I'll spend all of my time reading.

Ashish Rajan: Oh, okay. Fair.

Allie Mellen: But I love to draw. I've really enjoyed drawing lately, especially going to a park and drawing. New York has been so nice this summer.

Ashish Rajan: Oh, nice.

Ashish Rajan: Oh, lovely. A park, your canvas, and you just draw.

Allie Mellen: Exactly, yeah. And there are so many people, there's a tennis court near me. It's just great. It's a great experience. Yeah.

Ashish Rajan: it sounds like a great idea. Second question, what is something that you're proud of that is not on your social media?

Allie Mellen: Oh, something I'm proud of that's not on my social media? Any of my social media?

Ashish Rajan: Could be any of them.

Allie Mellen: Yeah. I'm thinking about my answer last time, and I'm like, not again. Okay. I'm actually really proud of this one. It's interesting, 'cause growing up I was not into cooking; I was really into baking,

right, but I was never into cooking. And I've gotten so into cooking, and I'm really enjoying it. My grandfather is Italian, he's from Italy. Yeah. He came to the US when he was young, and he's a chef, so [00:43:00] he had his own restaurant. Right. And I have been taking a lot of his recipes and making them for dinners and things like that.

Wow. And I'm loving it. I'm having so much fun. I'm very proud when I can test out some meals and then make them for my friends; that's such a treat for me. Especially full-course meals: I wanna make the dinner, I wanna make the dessert, I'm getting the crunchy bread. It's really nice. So I've just loved doing that lately.

Ashish Rajan: Interesting. I look forward to the invite, I guess.

Allie Mellen: It's so meditative, you know? Yeah.

Ashish Rajan: A grandfather's recipe coming into the picture... I'm like, oh.

Allie Mellen: Actually, I listen to a lot of podcasts, including this one.

Ashish Rajan: Oh, there you go. I'm like, okay, there you go. I mean, I look forward to that. Final question: what is your favorite cuisine or restaurant you can share with us?

Allie Mellen: Okay, so last year I said steak. Mm-hmm. Which is still true, but I am just so into sushi right now. I'm just loving sushi. Yeah, I'm [00:44:00] just loving it. Especially hand rolls; I'm very into the little, not quite poke bowls, but very similar, like a smaller version.

There are so many good sushi spots in New York too. Okay. Especially omakase.

Ashish Rajan: Really?

Allie Mellen: Oh, outta this world. Out of this world. It's so good.

Ashish Rajan: Interesting. I'm gonna be there in September, so I need to get this from you. I'll hit you up.

Allie Mellen: Hit me up. I'll take you out.

Ashish Rajan: I'll wait, though. I wanna do the Italian grandfather recipe first, before we jump straight to a Michelin-star chef.

Allie Mellen: Oh my god, I have the perfect recipe.

Ashish Rajan: Oh, perfect. Right.

Ashish Rajan: Where can people find all the work you're doing, and connect with you and all of that as well?

Allie Mellen: Yeah, definitely. LinkedIn is the best. Please, I love to chat with people on LinkedIn and hear from them. I also have my Forrester blog, which I post to very regularly. So both of those places, and Substack.

Ashish Rajan: Alright, I'll put those links in there. But thank you so much for coming on the show.

Allie Mellen: Thank you so much for having me. This is always so fun.

Ashish Rajan: Well, I always enjoy it, so thank you. And thank you, everyone, for tuning in. We'll see you next time. [00:45:00] Thank you for listening or watching this episode of Cloud Security Podcast.

This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify. In case you are interested in learning about AI security as well,

do check out our sister podcast, AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about the latest in the world of AI security. Finally, if you're after a newsletter that just gives you the top news and insights from all the experts we talk to at Cloud Security Podcast,

you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.
