Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering. Antoinette shares her experience building a detection program from scratch, explaining why she doesn't trust AI to close alerts due to hallucinations and faulty logic. We explore the "engineering-led" approach to detection, moving beyond simple hunting to building rigorous testing suites for detection-as-code. We discuss the shrinking entry-level job market for security roles, why software engineering skills are becoming non-negotiable, and the critical importance of treating AI as a "force multiplier, not your brain".
Questions asked:
00:00 Introduction
02:25 Who is Antoinette Stevens?
04:10 What is an "Engineering-Led" Approach to Detection?
06:00 Moving from Hunting to Automated Testing Suites
09:30 Build vs. Buy: Is AI Making it Easier to Build Your Own Tools?
11:30 Using AI for Documentation & Playbook Updates
14:30 Why Software Engineers Still Need to Learn Detection Domain Knowledge
17:50 The Problem with AI SOC: Why ChatGPT Lies During Triage
23:30 Defining AI Concepts: Memory, Evals, and Inference
26:30 Multi-Agent Architectures: Using Specialized "Persona" Agents
28:40 Advice for Building a Detection Program in 2025 (Back to Basics)
33:00 Measuring Success: Noise Reduction vs. False Positive Rates
36:30 Building an Alerting Data Lake for Metrics
40:00 The Disappearing Entry-Level Security Job & Career Advice
44:20 Why Junior Roles are Becoming "Personality Hires"
48:20 Fun Questions: Wine Certification, Side Quests, and Georgian Food
Antoinette Stevens: [00:00:00] If you don't know how to write code, AI won't help you get anywhere faster, because it's gonna write slop and then things will break and you won't know how to fix it. I do not trust it to close out alerts. I have watched ChatGPT just lie to me continuously. I am really nervous for the entry-level positions in security; a lot of the entry-level work that we would hire people for can be done now with AI. If you're early on in your career, your job is to be a personality hire. If something was public to the internet, it would say, well, this action was taken by an engineer, and so it is legitimate. And it's like, well, no.
Ashish Rajan: If you work as a detection engineer or are building a detection engineering team,
you probably have seen AI around the corner for the past couple of years, and you're probably wondering: what do I do with AI? How far can I go with it? Can I do a complete AI SOC with it? I was lucky enough to have a conversation with Antoinette Stevens. She runs the detection engineering team at a company called Ramp in the US, and we spoke about how she has been using AI to augment some of her detection engineering [00:01:00] work, along with increasing the maturity of what AI could do for detection in general.
Across the board, we spoke about some of the realities, the challenges, whether you should build or buy, can you really go as far as some people may claim with AI in today's world for an AI SOC, and what are some of the limitations? What should you think about for a detection program that you may be building, from a maturity perspective?
How much AI-ification should you be doing to it? All that and a lot more in this episode with Antoinette. Do share this episode with a colleague of yours who is trying to work on detection as code, or perhaps trying to adopt more AI into their detection teams. As always, if you are watching or listening to an episode of Cloud Security Podcast for a second or third time and have been finding it valuable, if you can take a quick second to hit the subscribe or follow button on whichever platform you're listening or watching this on, Apple, Spotify, YouTube, LinkedIn, whichever platform that is, I really appreciate it.
It helps more people find out about us and get more interesting guests to talk about more interesting topics as well. Thank you so much for supporting the work. I hope you enjoy this [00:02:00] episode. Talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. Today I've got Antoinette with me.
Hey, thank you. Thank you for coming to the show.
Antoinette Stevens: Thank you for having me.
Ashish Rajan: I'm looking forward to this conversation. Uh, by the way, I didn't realize we had so many mutual people who said you should definitely have her on the podcast. So I'm glad I'm having this conversation with you.
But for people who may not have heard of you before, could you share a bit about yourself and your experience?
Antoinette Stevens: Absolutely. So I am Antoinette Stevens. I am a principal security engineer at Ramp. I was hired as the first detection and response engineer and built out the program. And I'm currently working on scaling that up and attempting to navigate how our threat landscape is shifting, given, uh, new emerging threats related to AI and endpoint security and other things.
And I am also the tech lead for Ramp's internal tools engineering team. Where I'm working on building out internal tooling to help our customer facing operations teams [00:03:00] do their best work. Before that I've done nothing but security. I did a very short stint at Slack. Um, I spent some time at Cisco Meraki, and before that I was at Principal Financial Group in Iowa for four and a half years.
Ashish Rajan: Oh wow. So I mean, I guess it's been a while in the security space for yourself. Yeah. You've changed the way, or at least seen the way, that detection engineering has changed. And I think when we were talking about this before we started recording, we were talking about the whole engineering-led approach.
It'd be great if you could share a bit about what you mean by an engineering-led approach to detection.
Antoinette Stevens: Yeah, I think there are certain things that you learn when you've been trained to do engineering work, um, especially around testing and validation and reliability and observability.
I think some of those things that some people might take for granted, it feels obvious until you realize you haven't been doing it. And so I think for someone with an engineering background, that [00:04:00] comes like breathing. But otherwise you might not think about it, and you might just assume it's already there, until you're asked to actually make sure things work and validate it.
And we've seen a rise in that as we watch detection as code become more popular. That's an engineering-led approach to detection engineering, where you are source controlling something, you might make a test suite to make sure it works, you have validations, you do various things before you move things to production.
I think the other side of that coin actually comes in where, if you have engineers on your team, people who can build software, your approach to buying tooling changes, such that my approach is now: can I build it myself? And if so, is the cost of me building it myself more or less than the cost of buying it?
So, is it better for me to have it, meaning it might not require a lot of support? Yeah. Or am I building [00:05:00] something that's gonna require a lot of complex support, and so in the long run it makes more sense to buy it and use someone else's support and have someone else maintain it?
So, like, there's platforms that help you with log ingestion. Mm-hmm. I'm not paying for that. I could write a script and then never touch it again. So there are trade-offs that come with, A, being an engineer and having the mindset, but also having the skill sets to be a bit more self-sustainable when it comes to functional needs.
Ashish Rajan: Wait, I mean, so many threads to pull there, 'cause I was gonna go down the path of: is it AI that makes you feel you can make that build-or-buy decision more easily today? But I'll double click on that in a bit. I think, to set the scene for a lot of people, 'cause a lot of people, when you mention a testing suite for the detection engineering space, or you have detection...
I would say, and this is what my approach used to be before, if I had someone in my team who's looking at detection engineering, they go in, they look at a space, let's [00:06:00] just say AWS for example. Obviously AWS has a lot of services, so you go, oh, I'm just kind of looking at S3 buckets, because that seems to be top of mind for a lot of people.
I go down the S3 bucket path, oh my god, so many permissions. But how would I go into one of my, I don't know, my cloud security tools, like my CSPM, CNAPP, whatever, and figure out which is coming out the most, how much? It almost turns into this hunting exercise of finding something, validating if it actually works, and then if it works, you turn it on and you walk away.
Whereas, again, it sounds like you are saying that with engineering, in my head the engineering approach is: you have a testing suite, you're actually retiring tests that do not need to be there anymore. And obviously I'm putting words in your mouth. What is the difference? Because I think a lot of people would just get lost in that, oh, it just means the same thing, or I do the same thing, but I think you mean something else over there.
Right?
Antoinette Stevens: I think when it comes to testing and validating detection rules, there's [00:07:00] clearly a life cycle to follow. And I think every detection engineering lifecycle actually begins with hunting, and looking through your logs and seeing what's already happened. And it's not always based on that.
You might see an article and go, I wonder if this has happened to us, and go look for it. And if it hasn't, you're like, okay, well, it could, and so you'll write a rule about it. And then you're asking yourself, well, this hasn't happened to us, which means I can't run a historical, um, analysis and make sure my rule triggers.
And so the question is, okay, how can I test that this rule will actually work? And so your options here are to try the thing yourself. So try whatever you're afraid of and see if your rule triggers. The other option is you can mock the logs. AWS is a great example; CloudTrail logs are incredibly uniform.
Uh, you could just make the logs and try to trigger the alert yourself. But that's what I mean. It's making sure you have a way to [00:08:00] make sure that the thing you think should happen will actually happen.
Ashish Rajan: And I guess, to your point, the testing suite that you were referring to, so it's not that the lifecycle has changed, it's more that your approach to the lifecycle has changed.
Would that be right? Yeah,
Antoinette Stevens: yeah. It's a more automatic approach, if you can. Um, so with our detection-as-code repository, my detection engineer built out validation of the rules. So that's a test that runs in GitHub. And then the tool that we use has a built-in testing framework, so we can put in mock logs and make sure that the alert fires the way we think it will.
And we have that as testing.
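For readers who want a concrete picture of the mock-log testing Antoinette describes, here is a minimal sketch, assuming a Panther-style detection rule written as a Python function and pytest as the test runner; the rule logic, field names, and fixtures are illustrative, not Ramp's actual setup.

```python
# Hypothetical detection rule: fire when an S3 bucket policy grants public access.
# The event shape mirrors CloudTrail's uniform JSON structure mentioned above.

def rule(event: dict) -> bool:
    """Return True if this log event should raise an alert."""
    if event.get("eventName") != "PutBucketPolicy":
        return False
    policy = event.get("requestParameters", {}).get("bucketPolicy", {})
    statements = policy.get("Statement", [])
    return any(s.get("Principal") == "*" for s in statements)


# Mock CloudTrail-style events used as fixtures -- no real activity needed.
PUBLIC_POLICY_EVENT = {
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutBucketPolicy",
    "requestParameters": {
        "bucketName": "example-bucket",
        "bucketPolicy": {"Statement": [{"Effect": "Allow", "Principal": "*"}]},
    },
}

PRIVATE_POLICY_EVENT = {
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutBucketPolicy",
    "requestParameters": {
        "bucketName": "example-bucket",
        "bucketPolicy": {
            "Statement": [{"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::123456789012:root"}}]
        },
    },
}


def test_public_policy_fires():
    assert rule(PUBLIC_POLICY_EVENT) is True


def test_private_policy_does_not_fire():
    assert rule(PRIVATE_POLICY_EVENT) is False
```

A test like this can run in CI on every pull request to the detection repository, which is the "test that runs in GitHub" pattern described here.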
Ashish Rajan: And I guess the other part to this is, there's obviously newer spaces that people keep going into, multi-cloud, and now AI is another one of those, where now we are in this space of, like it or not, we are all living in an AI world. Yeah. Uh, I wonder, going back to what you were saying about build versus buy:
for people who are looking at the whole detection engineering space today, is AI helping [00:09:00] you build, or make that decision between build and buy? Is it making it easier to build yourself? How far can you go with it? What's the thinking there?
Antoinette Stevens: Uh, for me personally, I think AI has made me faster.
Okay. I think with the job that I currently have, my time is incredibly limited, and so the things that I can do have become faster through AI. But I would say, if you don't know how to write code, AI won't help you get anywhere faster, 'cause it's gonna write slop, and then things will break and you won't know how to fix it.
And so for people who already know how to build software, I think it accelerates, but for almost anyone else, it likely slows you down a bit. Mm-hmm. Wait, so you should have been a programmer before you get into doing this? I honestly think you should at least know something.
Because if you don't understand how architecture works, or the [00:10:00] basics of writing code, then if your AI generates code that is, for example, over-engineered or missing something, especially if you're prompting it and you don't know how, like you're either too vague or you're being specific in the wrong way, you end up with a product that down the line, at best, isn't sustainable, and at worst is vulnerable to something.
And so a lot of the code that we write or might need has to talk to the internet or be on the internet in some way. And so we end up in a situation where someone might create vulnerabilities if they don't know what they're doing. And so, in my opinion, it is helpful to already know something so that you can review the code and make sure it's sound.
I don't think it's influenced much of my build-versus-buy decisioning, because the things that I'm worried about aren't necessarily things that I believe AI would solve for me, which is, how much [00:11:00] documentation do I need to maintain this thing? Which I can have AI write, sure, but
keeping up with it, or managing it, the maintenance of it, it just depends on what the use case is.
Ashish Rajan: So has your approach changed now that you're using more AI? Like, for example, to your point about, uh, documentation. A lot of people, and I'm just obviously giving a very simplified version of this: I write a detection today,
I write some documentation around it, but over time it evolves and I did not update the documentation. So would AI become that capability where you can just give it the detection piece of code and go, hey, what is this doing? And has it drifted from what the documentation has been? What's been your experience with that?
Antoinette Stevens: Absolutely. I think that's a really good use case for it, actually. We have, like, an alerting and detection kind of strategy in all of our alerts. I think, especially if all of your stuff is codified, [00:12:00] it's really easy to have it say: look at this change, update the playbook if it's relevant.
The issue that I have with AI sometimes is that it can over-communicate or be verbose in really odd ways. And so making sure that your documentation is still clear, which is through prompting, and it's all prompt engineering. Yeah, making sure it's still clear and kind of what you need it to be becomes the more important part.
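As a rough illustration of that drift-check workflow (not Ramp's actual pipeline), a CI step could hand the current rule and its playbook to a model and ask only whether they have diverged; the OpenAI client call, the model name, and the repository paths below are all assumptions.

```python
import os
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python SDK is installed

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

SYSTEM_PROMPT = (
    "You are reviewing a detection-as-code repository. Compare the detection rule "
    "and its playbook below. Answer 'IN SYNC' or 'DRIFTED', followed by a short, "
    "factual list of anything the playbook no longer describes correctly. "
    "Do not speculate and do not rewrite the playbook."
)

def check_drift(rule_path: str, playbook_path: str) -> str:
    """Ask the model whether a playbook still matches its detection rule."""
    rule_src = Path(rule_path).read_text()
    playbook = Path(playbook_path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"RULE:\n{rule_src}\n\nPLAYBOOK:\n{playbook}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical paths; in practice you would iterate over the files changed in a PR.
    print(check_drift("rules/s3_public_policy.py", "playbooks/s3_public_policy.md"))
```

Keeping the prompt tightly scoped, as above, is one way to push back on the over-verbose output Antoinette mentions.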
Ashish Rajan: And do you feel that, I guess, obviously there are prompting skills and everything else that comes with it, for someone who's starting today as a detection engineer? And to your point, say I've been a software engineer before and I feel like, oh, I mean, how hard can detection engineering be,
really? Because I've got AI, right? So I'm kind of thinking about those people as well, 'cause what we are saying is, hey, it's an engineering-first approach or engineering-led approach, and if you have been doing software engineering before, you'd find the transition to be, uh, easier. But
with AI, are we saying that the curve to learn is a lot more? 'Cause in my [00:13:00] mind, as much as it's a security problem, it's a software problem as well. Yeah. Where you have to figure out, uh, the new environment, the new service. We used the AWS S3 bucket example; you're basically understanding that from scratch. I imagine that learning time is now compressed because of AI, uh, but at the same time, you're able to do a lot more tests that you would not otherwise have possibly thought of as well. What does that workflow look like now that you do it with AI?
Antoinette Stevens: I wanna make a quick distinction there. I think that the infrastructure around doing detection engineering might be easier to understand if you come from a software engineering background, 'cause, mm-hmm, you know, you don't have to relearn Git or learn how a bunch of things work.
Ashish Rajan: Yeah.
Antoinette Stevens: I do think that there is a domain expertise that comes with detection engineering that is a learning curve. It's like two separate people: if someone's purely done detection work and someone's purely done software engineering work, they have things to learn from one [00:14:00] another. Oh, yeah, yeah, yeah, yeah.
So there's a way of thinking about writing detections that I don't believe is inherent in someone with a software engineering background. It just might not be there. There are multiple things that I think through when I'm writing a detection rule or thinking about how to approach it, where it's just sometimes something you pick up with experience.
Like you see enough alert balloon where you can kind of look at a rule and go, that's not a good one.
Ashish Rajan: I love the alert balloon word as well.
Antoinette Stevens: Like, you just kind of know, I think, from looking at it. Yeah. And so I think it's the same problem both ways: if you've never done detection engineering work before and you go to an AI and say, write this rule for me, you're gonna end up with something generic instead of something highly specific to your environment.
Like, highly business-contextual, because it's trained on the most generic stuff. And so if you don't know anything about detection and response and you have AI write rules [00:15:00] for you, you're probably looking at something that's gonna be noisy and not helpful to you, versus you really understanding your environment and saying: I know that these are the things I'm worried about here, even if it's not standard in the industry to be worried about this, so I'm gonna write a rule about it. Or, I'm not gonna write any rules and I'm just gonna go fix the thing so that it can't happen in the first place.
And so that kind of thought process is not where I believe we should be utilizing AI. Once I know what I'm looking for, of course you can use it to help you write the syntax, but the full thought process, I think, has to start with a human being who understands business context and understands the history of what they're trying to do and all of that stuff.
Ashish Rajan: So detection engineering with AI is actually starting with the human first, then using AI to accelerate that detection creation [00:16:00] process.
Antoinette Stevens: Yeah. AI should be a force multiplier, but it should not be your brain.
Ashish Rajan: Mm-hmm. Well, at least it feels like it's my brain, you know. But I mean, I joke about this, but I definitely find that doing a lot more complex things is easier today in terms of me walking in.
Like, I've been primarily an AWS person, but I know for sure that if I walked into an Azure or a GCP environment with the help of, uh, AI, I would be able to transition that skill fairly quickly. I may not be the expert, but at least I know enough to be dangerous. I'll just say that.
Antoinette Stevens: Yeah, because you already have AWS experience, you have a generic cloud computing understanding.
Yeah. And so you could easily walk into an Azure environment and go: show me the equivalent of object storage, I'm looking for this specific type of thing, what's possible here, where can I find these? That's because you already have [00:17:00] the contextual knowledge; you can do that. But if you had knowledge of nothing, yeah, you would have no way of prompting, or of knowing whether the AI is correct or hallucinating or anything like that.
Ashish Rajan: Yep. And while you were using AI to build detection and scale the team and everything, were there any challenges that you came across? And I don't know if the challenges kind of went away by themselves when the new version of GPT came along or not, but I'm curious what some of the challenges were, and did it make a difference if you changed the model, or did you have to think about any of that?
Antoinette Stevens: I think one of the more interesting challenges has to do with how we use AI for first-level triage. It goes through and does the initial investigation, and then a human comes behind it and says, I agree or disagree, and takes the action. Yeah. I think what's interesting with that is, if you're not clear with it on how an investigation [00:18:00] should be run, it tends to try to fill in the gaps for you.
It likes to make a lot of logical summarizations, if you've noticed, where it says: and this happened because of X, and the result of this is that it's making X better, or whatever. And it's like, I don't need you to guess at why someone did something. I just need the facts of the situation. Yeah.
Um, I don't want inferences. And so we went through and fixed that. Or it would do things where, if something was public to the internet, like a subnet got opened up, it would say, well, this action was taken by an engineer, and so it is legitimate. And it's like, well, no.
It doesn't matter if it's legitimate. We should always know, and wanna do something, if a resource is open to the internet. And so it's that kind of nuance, I think.
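The constraint she describes, facts only and no inferred legitimacy, is the kind of thing that typically ends up encoded in the triage agent's system prompt. A rough, hypothetical example of such a prompt (illustrative wording, not Ramp's actual prompt):

```python
# Hypothetical system prompt for a first-level triage agent, capturing the
# "facts, not inferences" constraint discussed above.
TRIAGE_SYSTEM_PROMPT = """
You are performing first-level triage on a security alert. Follow the investigation
steps attached to the alert, in order.

Rules:
- Report only facts you can support with a specific log line or API response,
  and cite the source for each one.
- Do not infer intent. Never conclude that an action is legitimate simply because
  it was performed by an engineer or a known account.
- If a resource is exposed to the public internet, flag it explicitly, regardless
  of who made the change or why.
- If a step cannot be completed, say so; do not fill the gap with a guess.
- End with a short factual summary. A human analyst makes the close or escalate decision.
"""
```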
Ashish Rajan: Ooh, actually, that's a very good example you shared, because the whole industry at the moment is building that AI SOC, or AI-enabled SOC, for level one.
What's the reality [00:19:00] of it? Because some of the promises from some of the cybersecurity vendors, who I will not name right now, are that you can go all the way. There is obviously the detection piece, the isolation piece, and there are so many stages to the investigation.
Even if you've detected it. Yeah. Identifying whether it's an incident or an event, you've gone to the next stage. Okay, it's an incident, now I have to trigger something, and go all the way to remediation and the action that comes after, as a post-incident response report.
Antoinette Stevens: Yeah.
Ashish Rajan: How much of that can AI do today realistically, without a human?
Antoinette Stevens: Promises, promises. So, uh, I do not fault them. They have product to sell, so I don't,
Ashish Rajan: I mean, yeah, I'm not throwing, for lack of a better word, dirt on anyone, but I'm just curious, as someone who's, because we spoke about the build versus buy thing.
Antoinette Stevens: Yeah.
And so we're not building that in-house, and I can break down the rationale there. Actually, it is perfectly [00:20:00] possible for me to build my own agent that does this. Yeah. Where I think that a vendor comes in handy would be on the evaluation side, and the ability to rotate between different models if necessary, and kind of have more infrastructure around the observability of the agent itself.
Yeah. Like, evals, I think, become super important when we're talking about security investigations, because you wanna make sure that you have a really good confidence rate in what the model is bringing back to you. I would also say it's important to make sure it has memory.
And so if I go in and I prompt it differently, it should remember what I said, or remember corrections. And all of that, I don't wanna run that myself. I know that for a fact. And so we bought a platform that does this for us. And, so, I do not trust it to close out alerts.
Like, okay, I just don't believe in that. And we [00:21:00] have the ability to write our own prompt for the agents. And so, because we have that kind of flexibility, we can get closer to me being comfortable with ticket close-out, but I'm not there yet, simply because I have seen it either make an inference that was incorrect or just flat out hallucinate.
I think there's wild variability between the models; especially, a new version of GPT could come out and be completely wrong. And I have watched ChatGPT just lie to me continuously, and so I'm definitely not fully on board with just letting it close out things. I think that if someone really wants to, there are probably really noisy alerts that someone could use it to close out.
But then I would start asking, well, why do you have these alerts? And so I think there's a really good conversation there. I just don't fully want it to close things out. I would prefer someone still look at it, because if we [00:22:00] get something wrong and miss something, I can't say, well, it's this AI's fault. It's mine.
Like, it's our fault for not validating it. And so I still want someone to go through and make sure that we agree with the analysis, make sure that we're consistently giving feedback and tuning it so that it gets more accurate. Um, but also just making sure that, again, business-context-wise, there might be things that it says that I don't agree with,
'cause I understand how Ramp runs, and understand what is or is not okay.
Ashish Rajan: Yep. And do you find that as you go through that decision process for... actually, maybe I should take a step back, 'cause you mentioned a few words there which I understand, but I don't know how many people in the audience will understand.
You mentioned memory, you mentioned evals, you mentioned inference as well. Could you just quickly do, like, a 30-second version of those three, so people can understand? Because if you work in this space, you kind of go, oh yeah, I know what that is, but I think it's fair to assume that knowledge of AI is [00:23:00] quite unevenly distributed.
So would you mind giving a 30-second version of that?
Antoinette Stevens: The easiest way I can say it is, in order for a model to be accurate, there's a lot of surrounding infrastructure to help make that true. Yeah. And so one of them is memory, where ChatGPT is a really good example, when you see your chat history. Newer versions of ChatGPT will now use your chat history to inform how it behaves going forward.
Yeah. And so if you make a correction in ChatGPT and say, no, I don't want you to speak to me this way, or no, this is what's true or false about this topic, it will remember that and use it going forward, which is important. Yeah. That does not necessarily retrain the model, but it does reference that to inform how to talk to you.
Yeah. Evaluations would be how accurate the output from the model is. Um, and so you can give it test data, or you can just [00:24:00] evaluate in real time the inputs and outputs from it and do scoring, basically, on whether this is accurate or not accurate. And then you can update your prompts and your, um,
system prompt to try to get it more into the realm of accuracy that you're looking for.
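A stripped-down version of that scoring loop might look like the following, assuming you keep a small set of labeled historical alerts and can treat the triage agent as a black-box function; the names, labels, and threshold are illustrative.

```python
# Minimal eval harness: run a triage agent against labeled historical alerts and
# score how often its verdict matches the analyst's ground truth.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    alert: dict     # the raw alert handed to the agent
    expected: str   # ground-truth disposition, e.g. "true_positive" or "benign"

def run_evals(agent: Callable[[dict], str], cases: list[EvalCase]) -> float:
    """Return the fraction of cases where the agent's verdict matched ground truth."""
    if not cases:
        return 0.0
    hits = 0
    for case in cases:
        verdict = agent(case.alert)
        if verdict == case.expected:
            hits += 1
        else:
            print(f"MISS: expected {case.expected}, got {verdict} "
                  f"for alert {case.alert.get('title')!r}")
    return hits / len(cases)

# Usage sketch: gate a prompt or model change on eval accuracy before it ships.
# accuracy = run_evals(my_triage_agent, labeled_cases)
# assert accuracy >= 0.90, "prompt/model change regressed triage accuracy"
```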
Ashish Rajan: Awesome. Thank you for sharing that. And I think it's almost like, um, because you're building your own detection as code, you are able to plug and play with the memory part, the prompt part, and the inference part of it, the infrastructure part of it as well, which kind of gives you a lot more control.
So if you wanted to switch... And in a funny way, when I started talking about AI and working in this space some time ago, I was almost thinking, why would you go outside of ChatGPT? But then I realized even the prompting evolves over time. It doesn't even have to be a change of model.
Like, the way I used to prompt two years ago is not the same way I prompt today. Yeah. And it's [00:25:00] like, you want to be able to go through that journey with everyone in your team so people continue to improve, 'cause there is, I think, a false sense of, I'll just buy an AI SOC and I don't have to do anything, because the learning curve is still there, quite a bit of it.
Even if you have an AI SOC looking after the entire level one.
Antoinette Stevens: Yeah, I think it's interesting, because a lot of the AI SOC products, I think, are trying to obfuscate away some of those nitty-gritty details. Yeah. But sometimes you kind of want them. Like, yeah, I might wanna know what the temperature on my model is, and if you don't know what that means, you should look it up, so that, at least when you're talking to some of these vendors, you can ask them really good questions to understand where you might see issues.
Ashish Rajan: Yeah. Yeah. And I guess, do you recommend having something like a knowledge source or a RAG for your... I guess, I don't know, people just design it in different ways. You can have it, you cannot have it, but have you experimented with that [00:26:00] to kind of see, maybe, how you can extend the scope of what you can do and how far you can get with it?
Antoinette Stevens: I haven't used that, uh, for a security application just yet. Maybe in the future. Right now, we have our system prompt for it. Um, and then every alert that comes in has, attached to it, which is already true, like, investigation steps. And so the basics of the prompt are: follow the steps here.
And it has access to go search. And now we are exploring, which I think is actually a really good idea, um, a multi-persona setup, like an architecture where individual agents do very specific things and there's an orchestrator agent pulling data from each of them.
Ashish Rajan: Ooh. Like a multi-agent architecture.
Actually, that's a good point, 'cause I was talking to someone about how these, uh, automated pentesting companies work in the background. 'Cause they actually have, like, an agent which is just a SQL injection expert. It does not know anything outside of SQL injection, [00:27:00] but it's a really good SQL injection, quote-unquote, SOC analyst, for lack of a better word.
Antoinette Stevens: Yep. They do really well when they do a very specific job. I saw a vendor once that had, like, a legal agent and a pentest agent that did very specific things, and then another validator agent, and they got all of them to agree and would take action, or if certain ones did or didn't agree, it would do certain things.
It's very interesting. Um, but I think having personas in the agents will also help with accuracy of output.
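A toy sketch of the orchestrator-plus-personas shape being described, with each agent reduced to a plain function so the structure is visible; in practice each persona would be an LLM-backed agent, and the personas and aggregation rule here are made up for illustration.

```python
from typing import Callable

Alert = dict
Finding = dict

def identity_agent(alert: Alert) -> Finding:
    # Persona: only answers "who did this, and does the account look healthy?"
    return {"persona": "identity", "suspicious": alert.get("actor_mfa") is False}

def exposure_agent(alert: Alert) -> Finding:
    # Persona: only answers "is anything now reachable from the internet?"
    return {"persona": "exposure", "suspicious": bool(alert.get("public_exposure"))}

def change_agent(alert: Alert) -> Finding:
    # Persona: only answers "did this change go through an approved pipeline?"
    return {"persona": "change", "suspicious": not alert.get("via_ci_pipeline", True)}

PERSONAS: list[Callable[[Alert], Finding]] = [identity_agent, exposure_agent, change_agent]

def orchestrate(alert: Alert) -> dict:
    """Collect each persona's finding and combine them into a single recommendation."""
    findings = [agent(alert) for agent in PERSONAS]
    flagged = [f["persona"] for f in findings if f["suspicious"]]
    # Simple aggregation rule for the sketch: escalate if any persona objects;
    # a human still reviews either way, per the discussion above.
    return {"recommendation": "escalate" if flagged else "likely_benign", "flagged_by": flagged}

print(orchestrate({"actor_mfa": True, "public_exposure": True, "via_ci_pipeline": True}))
# -> {'recommendation': 'escalate', 'flagged_by': ['exposure']}
```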
Ashish Rajan: And maybe, to take this a bit further, 'cause I imagine there's a change of culture, there's a change in how you even build detection programs for this as well.
And I imagine in 2026 there'll be a lot of people, well, people would've already planned in 2025 what they would build their detection programs as. And you've been scaling yours for some time. What's your advice to other people who are starting to do this, and perhaps they don't even come from an [00:28:00] engineering-led approach, they take the traditional path?
Like, what should they be looking at in this new world that we are going into, which AI is slowly eating up?
Antoinette Stevens: I think that, from the perspective of building out teams, you should seek out ways where, again, it can force multiply. So, like, my team is very small, but we get a lot done, 'cause we know how to utilize the tooling around us.
So I think it's worth looking at ways that it can help your team without, again, becoming a hindrance in the long run. And so I would not just rush out and buy tooling that I don't understand. Hmm. 'Cause I think it will just slow me down if something breaks, and things like that. Um, I think from a threat perspective, my opinion there has held steady for the past few months, which is: a lot of this is just getting the basics right.
So, like, if you don't have a good endpoint detection program, you should probably consider getting one now, considering all [00:29:00] the npm packages that are getting compromised and, yep, the move to targeting engineering and developer machines. I think that getting a shadow IT program going, if you don't have one, is a good move,
where we've seen a lot of malware be propagated through tooling claiming to be, like, AI, um, or different AI software. So it's a lot of: just go back and review, do you have the basics down?
Ashish Rajan: Actually, to your point, because if you don't have people who have the right skill set, and who even understand how to use coding or detection, like, I've never built detection before, then just throwing in an enterprise-level AI subscription may not be the answer at that point in time.
Antoinette Stevens: Yeah. Reevaluate the skill sets on the team too. I, I think it's worth doing that. Go get training, go learn together. If AI is what you wanna do, this is a great opportunity to learn it. It's never too late.
Ashish Rajan: What about maturing an [00:30:00] existing detection program? In terms of, obviously, they might want to add AI.
It's different for people who are starting from scratch; I guess they have an opportunity. But for people who have been doing it for some time, who already have people with the detection skill set,
Antoinette Stevens: Yeah.
Ashish Rajan: And the next level of maturity with AI, what's your recommendation for that?
Antoinette Stevens: That one's harder, 'cause I think,
for larger security teams, there's probably a healthy skepticism of what AI can do for them. I would say it's a good opportunity to play. I think it levels the playing field; we've all been leveled. Not everyone, unless you, I guess, maybe started this years ago or have a PhD in artificial intelligence and you've been doing this for a while.
We're all learning together, and so I think that it's time to experiment. Yeah. Um, I would caution against turning up your nose at new technology because it's been annoying you. You know, I think a lot of us may have an impulse where, if we see something that's become a buzzword, we try to stay away [00:31:00] from it,
either out of wanting to be contrarian or, again, having a real and healthy, um, skepticism of it. I think that it's a good idea to take some time, it doesn't have to be too much time, but take some time to experiment and see if it can automate some part of your workflow that has been tedious, and see if you can get that completely out of your way.
See if you can use it to build random tooling that you have been paying a vendor for and don't feel like you need to anymore. You know, I think the message that I would have is: experiment in whatever way is safest and most comfortable for you. But to choose not to touch it at all, I think, would be a mistake.
Ashish Rajan: And I guess for people who are thinking about, everyone likes to almost measure maturity. What are some of the metrics or signals that you think people can track as a way to know they're improving, whether it's using AI or... I mean, sure, I'm sure they have metrics beforehand, like detections and [00:32:00] how quickly you close them,
MTTR and all of that.
Antoinette Stevens: Yeah.
Ashish Rajan: In terms of using it with AI, does it change or not change?
Antoinette Stevens: I think it's different for everyone. 'Cause if we're saying we're measuring maturity, are we saying that we wanna use AI to measure how we, like,
Ashish Rajan: fewer false positives or, yeah, responses we
Antoinette Stevens: have, we changed it.
Like, I would see it in two ways: AI is helping me with my detection engineering maturity as a program, or it's my maturity of AI adoption, which I think are
Ashish Rajan: two. Oh, actually, yeah, I'm curious what your thoughts are on both of them. That's, I mean, that's a good point. Yeah.
Antoinette Stevens: And so I would say, at Ramp, my day-to-day is fully AI-integrated. Hmm. It's had to be, 'cause I have to keep up with the engineering teams who are actively using it, so that I understand where threats and risks are. Um, and so it has become something that I use daily, and something I've encouraged my team to use as well.
I think from a detection [00:33:00] engineering perspective, it's been helpful with noise reduction. Mm-hmm. Uh, for sure. It's been helpful with tuning, so when it sees a rule it can go in and tune it. I will say that I have not seen the drop in false positive rate that I would like to see, but I think it's also because we have onboarded more alert sources, like more log sources, and so it's kind of balancing out.
Yeah. I think my next goal is to get aggressive about reducing our false positive rate. 'Cause even if a human's not looking at every single alert, like, not being bogged down by every single alert, I still have a base philosophy that if an alert is not useful, it should not fire.
Ashish Rajan: Yeah.
Antoinette Stevens: Fair. So I would like to keep to that.
I think that the time for my detection engineers to focus on new log sources, to focus on how we expand the program, instead of all of [00:34:00] their time going towards alerting, is a good indicator of the way that we've been able to build.
Ashish Rajan: And, you mentioned source code repositories for detection as code. I'm sure there's a prompt library somewhere as well. What are some of the newer things that people may have to consider in their tool set or toolbox or arsenal in their detection programs as, I guess to what you were saying earlier, they start adopting more AI?
Antoinette Stevens: Yeah.
I think what will become important will be the various instructions. Like, oh gosh, it's become such a big thing with AI, where if you're working out of a repository with an AI coding agent, there's an instructions file in there, like an AGENTS.md or something. Yeah. Um, that's become a big thing.
I think the other thing is, uh, there is no real standard language for detection rules, and so making sure that you know how it can write Sigma rules [00:35:00] versus JSON rules versus whatever it is Splunk's using these days. And so one of my engineers prompted it to be able to translate between Datadog rules and a JSON rule format.
Oh wow. Um, yeah. And so I think something that comes in with using AI is, it's less about your technical capabilities, and your ability to think through how to write good prompts becomes a bit more important.
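As a rough illustration of that translation prompt, here is a hypothetical example going from a Sigma rule to a JSON-style rule; the target schema and field names are assumptions, and whatever the model returns should still go through the mock-log tests discussed earlier before it ships.

```python
# Hypothetical prompt for translating a Sigma rule into a JSON detection rule.
# In practice you would paste your platform's real rule schema and examples,
# then validate and test the output before committing it.

SIGMA_RULE = """
title: AWS S3 Bucket Policy Made Public
logsource:
  product: aws
  service: cloudtrail
detection:
  selection:
    eventName: PutBucketPolicy
    requestParameters.bucketPolicy.Statement.Principal: '*'
  condition: selection
level: high
"""

TRANSLATION_PROMPT = f"""
Translate the following Sigma rule into a JSON detection rule for our platform.
Preserve the detection logic exactly; do not add or drop any conditions.
Output only a JSON object with the keys: name, query, severity, tags.

Sigma rule:
{SIGMA_RULE}
"""

# The assembled prompt is then sent to whichever model the team uses.
print(TRANSLATION_PROMPT)
```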
Ashish Rajan: Do you feel like, over time, as you continue to use AI, you guys would start building your own, quote-unquote, knowledge source? You know, it's funny, 'cause I think,
after years of being a CISO, and maybe I didn't realize this then, but with AI I'm also realizing, there used to be this same set of patterns, you know, that your organization experiences. In my case, it was SQL injection, mm-hmm, which we knew was a false positive, which every time we saw it, we just basically ignored, because we didn't [00:36:00] even have a SQL server to begin with.
So it was an automatic, like, hey, I don't need this information ever in my life. But then, if that does change tomorrow, I just don't know, I'm just not aware of it. I wanted an ability to be able to go: hey, this is the history of all the detections we have, we track, we manage, we see if it's still triggering, or if it hasn't triggered for three years,
do we still keep it? The entire life cycle of it. It's almost like, I don't wanna use the word data lake, but I kind of can't think of any other word, do you end up building your own internal repository of every alert you have, either created in the past, or a detection and its future lifecycle? What's your thinking about those kinds of things?
'Cause I feel like now, to your point about the memory, we are building a memory for the organization for this.
Antoinette Stevens: I'm actually working towards building, like, an alerting data lake. Not for this use case; I just want a metrics [00:37:00] dashboard I don't know how to generate manually.
I'm working on, uh, getting all our alerts into, like, a Snowflake table. And so that's why I hadn't thought about that, but that's a really good direction and idea, to kind of have this one place. I think what helps with having, again, detection as code, is that the full history of a detection rule
is there. Yeah. And, like, commits. And so you don't really have to work that hard for that one. But I think what would be really powerful would be being able to tie a commit to a change in the trends that we saw in a specific alert. Hmm. 'Cause, like, after this commit we saw the false positive rate for this alert drop here, but then this commit was made and it went back up.
And being able to make that connection is very interesting, because then I think what you could do is, over time, inform how you write future rules, where it's like, well, in the past you made a similar change on a rule and [00:38:00] it didn't work for you, and so maybe you should try this strategy. But again, that's highly specific to your environment, and would involve the memory part and the evals and so on and so forth.
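As a toy version of that connection, assuming alerts have already landed in a table with a rule name, a fired timestamp, and an analyst disposition, you can compare a rule's false positive rate before and after a given tuning commit; the column names, export file, and rule name here are made up.

```python
import pandas as pd

# Assumed columns in the alerting data lake export:
#   rule_name, fired_at (timestamp), disposition ("true_positive" | "false_positive")
alerts = pd.read_csv("alerts_export.csv", parse_dates=["fired_at"])

def fp_rate(df: pd.DataFrame) -> float:
    """Fraction of alerts in df that analysts marked as false positives."""
    return (df["disposition"] == "false_positive").mean() if len(df) else float("nan")

def fp_rate_around_commit(rule_name: str, commit_time: str) -> tuple[float, float]:
    """False positive rate for one rule before vs. after a tuning commit."""
    commit_ts = pd.Timestamp(commit_time)
    rule_alerts = alerts[alerts["rule_name"] == rule_name]
    before = rule_alerts[rule_alerts["fired_at"] < commit_ts]
    after = rule_alerts[rule_alerts["fired_at"] >= commit_ts]
    return fp_rate(before), fp_rate(after)

before, after = fp_rate_around_commit("aws_s3_public_policy", "2025-06-01")
print(f"FP rate before commit: {before:.0%}, after: {after:.0%}")
```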
Ashish Rajan: Yeah. Yeah. And I guess, to your point, that's because at the end of the day, you almost want the security part, and I know people think about Terminator every time I mention this, but essentially you're almost trying to make a profile for your organization internally. Of course,
Antoinette Stevens: yeah. I think some places are now offering like fine tuned models specific to your business to kind of help with this problem.
Wow. I mean, teams, if they wanna do it, can also fine-tune and train their own models and do it, but it comes back to: how much time do you have?
Ashish Rajan: Are you gonna build a product inside your organization, or are you just gonna... Exactly, yeah. Fair. Yeah. Or just use someone else's, uh, fine-tuned model.
Yeah, I mean, it's a fair thing, 'cause there's the investment of time. If you hire a detection engineer, are you really expecting him or her to go down that path of, oh, I'm [00:39:00] just gonna spend my time building this product for us?
Yeah, because, I dunno, we're gonna become this cybersecurity vendor tomorrow, even though we are a financial organization today? But I heard there's great revenue in cybersecurity. So no, no one's gonna do that.
Antoinette Stevens: Exactly. Yeah, I wanna stay away from building an entire product internally that won't outlast my tenure.
Ashish Rajan: Yeah. Fair. What kind of people do you think... I'm also thinking about people who are probably considering their shift into detection engineering.
Maybe they are security engineers transitioning over to the space. What's the hiring in this particular space going to look like moving forward, the way you see it, in this AI world?
Antoinette Stevens: I, you know, I've been thinking about that actually a lot lately. I, I don't have the most optimistic outlook.
Mm-hmm. There, to be honest with you.
Ashish Rajan: I don't think anyone has an optimistic answer, but Sure. I, I'll hear you.
Antoinette Stevens: I, [00:40:00] I am really nervous for the entry-level positions in security. And it's really fascinating, because there used to be no such thing as an entry-level position in security. Yeah. All of the roles were you pivoting from some other domain expertise, like, you were in IT, yeah, for six years and you're like, well, I understand enough here, I'm gonna do security. Or you did software. That used to not be a thing. You didn't graduate from college and just go directly into security.
It is a thing now, and so many programs are focused on getting people graduated and into entry-level roles that are maybe, like, SOC positions or GRC or things like that.
Ashish Rajan: Yeah.
Antoinette Stevens: What I'm nervous about is that a lot of the entry-level work that we would hire people for can be done now with AI.
Yeah. Like, uh, we don't even have to wait five years for that to come; it can be [00:41:00] done right now. Yeah. Um, so I'm nervous for the people who are coming from completely non-technical backgrounds and wanna pivot into security, and they've been told for the past three or four years that all they had to do was get a certification and it would be okay.
I do not have the most optimistic... like, I wish I did, but I don't have the most optimistic outlook there. Um, so I also think it's hard for me to give advice, because I did not follow a non-traditional path. Like, I went to school for computer science, I graduated, I got a job as a network analyst, and then pivoted into security from there.
And I interned on the security team first. And so it's like, I took a very straightforward approach. It is harder for me to give advice to someone who maybe is, like, an English teacher and they say they wanna pivot into security: how do I do that? Yeah. So I wish I had something profound and useful to say, but I am legitimately nervous.
Ashish Rajan: There is [00:42:00] some silver lining, I would say. I think, and I've been talking about this for some time too, a lot of people, a lot of CISOs, were hiring as well. The collective belief at the moment... I spoke to someone recently, they hired an ER nurse into their security operations team, specifically the fraud department there, and they said this person was cool as a cucumber in the middle of an incident.
Everyone else is panicking, and it's, like, the skill set they have.
Antoinette Stevens: Yeah.
Ashish Rajan: Is that something that... and I guess I'm sure it's the same with teachers and other people, especially if you're a primary school teacher. I'm like, holy shit, I don't even know how you manage all these young kids across the board.
But the point being, yeah, I mean, I don't know what is easier, but when you have, like, 50, 60 of them, it's not even like one or two of your own, it's just 50, 60 kids. But I think the silver lining that I wanted to get to was the fact that there is a slower adoption of AI, slower than what people are thinking.
And so we are fortunate [00:43:00] enough to work with companies that are leading that charge, but there's definitely a lot of companies that have not even started adopting. So my hope is that, okay, if you're graduating this year or maybe next year, there is still hope that you would find a job which is entry level, whether it's in a SOC or GRC, or IAM, which is where I got my entry.
There are fields where you can get in. However, when it does become, like, the thing, it's the same thing that happened with cloud.
Antoinette Stevens: Mm-hmm.
Ashish Rajan: In the beginning, people were like, data centers. Suddenly everything seems to be starting with cloud, and it's assumed you would know some cloud for sure
if you work in a tech company or a tech role. I feel, whatever the time period is, if it's one year, two years, whatever it is, my hope is people would start shifting towards a focus of: I can teach Ashish the technical skills, because they have AI to help with that. What I can't teach him is to be cool as a cucumber when the incident is running at that point in time.
That is a skill set, I like that, that is a very individual human skill set. You don't need AI for that. You just need [00:44:00] the soft skills, or whatever you call it. I think, yeah, they would become a lot more important.
Antoinette Stevens: Yeah. I actually have a theory about that, where if you're early on in your career in anything, your job is to be a personality hire.
Ashish Rajan: Yeah. Yeah. A hundred percent.
Antoinette Stevens: That is your goal. That is the job. Yeah. If you don't do anything, you contribute almost no value at that stage. Your job is to be a good person. Yeah. And to learn as much as you can. I amend my statement, then. I think, if you're pivoting in right now, then you're right:
there are a lot of larger companies that are slower to adopt. The first company I worked at, Principal, I went there in 20, and they were just starting to get on the cloud train. There you go. They still had data centers. And data centers are making a comeback, mind you. Yeah. They were. Oh yeah.
Ashish Rajan: Yeah. I mean, now even more like, you can't get enough of the data centers anymore.
Antoinette Stevens: But I think if you're pivoting, finding an [00:45:00] old, I won't say older, we'll go with larger, company
Ashish Rajan: Oh yeah, yeah. Like a really large enterprise. Been there for a while. Yeah. They're not throwing away data centers anytime soon.
Antoinette Stevens: Yeah. That's gonna be your best bet. But the realistic possibility of getting a job at, like, a smaller startup without some sort of deeply technical skill to go with it
Ashish Rajan: Um, and beyond's
Antoinette Stevens: is like very slim.
Ashish Rajan: Yeah. Yeah. Those are definitely, because they would, like... every startup that I know of... and maybe, to your point, it's actually worthwhile clarifying it.
If you are watching or listening and you are in that beginning journey, you probably want to understand that, okay, if I'm spending time looking for a job, instead of focusing on the startup area, I probably wanna focus on internships and stuff, and I guess the bigger enterprises that do offer internships in cybersecurity.
So there'll be plenty of opportunities there. It's just that you're cutting yourself off from that market for now. I mean, it may level out, but right now even they don't know what they're doing. They just, yeah, [00:46:00] take the money and figure, we'll figure it out.
Antoinette Stevens: Yeah. Start big and go small, but get in somewhere, I think.
Yeah. Don't let the flashy names catch your eye. The goal is to just get the job. Yeah. And then you can do what you want from there.
Ashish Rajan: And definitely don't believe someone who just tells you getting a certification will get you a job.
Antoinette Stevens: Yeah,
Ashish Rajan: Definitely that. I mean, that's highlighted, AI or no AI, it doesn't really matter.
Even in a pre-AI world that was not true; in a post-AI world, also, it's not true, so
Antoinette Stevens: it's not always true. Yeah, no, I'm very interested to see what happens to the field, I think in the next five or so years.
Ashish Rajan: Yeah. Yeah. Same. I mean, funny enough, I actually have so many conversations about AI and I help so many people with AI,
but I still also feel like we haven't really figured out what's the, I don't know what the right word is, maybe, what's the stable point for this? It still keeps evolving.
Antoinette Stevens: Do you wanna know what I think the silver lining to that is? I felt like there was an assumption, uh, earlier this year [00:47:00] that threat actors would somehow adopt AI and just be inherently good at it. But they are also just figuring it out. Yeah. Yeah. Yeah. Hundred percent.
Ashish Rajan: I mean, I dunno if you saw, there was one that came out from Anthropic, about someone using Anthropic's Claude Code for doing, whatever, like, you know. Yeah. But if you actually read through the paper,
I'll be honest, it was not the most sophisticated thing that Claude Code was doing. You're almost like, well, I could do it if I had the entry to it as well, and I'm not even a high-level, level three, level four cybersecurity expert. So I don't wanna discredit it; it is possible.
I'm sure it would be. But right now I definitely feel like the scale of it is not what people think. It's still very non-deterministic.
Antoinette Stevens: Yes. Still very non-deterministic. I don't think the scale is anywhere near where, uh, some of the earlier predictions, uh, wanted it to be, um. And again, I think it's because a lot of the threat actors are also having to learn [00:48:00] alongside.
Yeah.
Ashish Rajan: Yeah. They're probably listening to this episode going, what is Antoinette gonna share that I can learn about detection? They might reach out to you as well, actually. Those were most of the technical questions. I've got three fun questions for you as well, actually. First one being: what do you spend time on when you're not working on detection engineering, cloud, all of that?
What is something you are spending time on?
Antoinette Stevens: Yes. I have a Level 2 WSET, uh, which is a wine certification. And so I spend a lot of time learning wine. I won't say drinking wine, 'cause that sounds problematic, but I do learn. I mean, I imagine that's part of the job of working on that certification.
Is that fair? Yes, but you can spit it out. So I don't wanna encourage, like, overconsumption of alcohol, but... Oh, fair. Are you gonna be a sommelier then? I don't wanna be a... I don't know. I think it's just fascinating. Wine is the most chemically complex beverage that we have, and I think it's fascinating to learn about.
And so that is how I've been [00:49:00] kind of decompressing.
Ashish Rajan: Interesting. A lot of restaurants in your, uh, free time then, I guess.
Antoinette Stevens: Yeah.
Ashish Rajan: Second question. What is something that you're proud of that is not on your social media?
Antoinette Stevens: What is something that I'm proud of? I am proud of my ability to, uh, side quest, I'd say.
Okay. Uh, I'm a big side quester. I mean, the wine thing is one; I take voice lessons, I've done comedy, I've danced. I like that I do a bunch of things just to try them out, and it doesn't have to be super serious. Yeah. And then I move on.
Ashish Rajan: Yeah. You're definitely sounding like my soul sister now,
'cause I'm kind of the same, where I do one thing, then I move on to the next. I'm like, okay, I've done that, now moving on to the next one. Yeah. But I dunno if you can keep doing that; I feel like you only get one life, so can you keep exploring newer skills? I don't know what the limit to that is.
You just keep exploring.
Antoinette Stevens: Exactly. There is no limit.
Ashish Rajan: I mean, [00:50:00] I love that. And I think the final question I have is what's your favorite cuisine or restaurant that you can share with us?
Antoinette Stevens: My favorite cuisine, I'm a big Italian food person, but my favorite restaurant right now in New York is Chama Mama.
It's a Georgian restaurant. I think I told you about it, maybe.
Ashish Rajan: Yeah. Is that the one? Wait, 'cause I went to a Georgian restaurant recently in Austria, and they do this big dumpling thing where there's a slice over the dumpling and there are, like, mini dumplings inside it. Is that it?
Antoinette Stevens: Oh no, I haven't had that. But Chama Mama has, like, their equivalent of a soup dumpling that is so, oh
Ashish Rajan: yeah, yeah, that's right.
Yeah. And apparently, what my friend was telling me, growing up she would have, like, 15 of those at the same time.
Antoinette Stevens: So good. It's like, yeah.
Ashish Rajan: If people haven't tried it, I would definitely recommend it. So a Georgian restaurant, yeah, is right up there.
Awesome. Actually, that's all the questions I had. Where can people reach out to you, connect with you, and learn more about the detection engineering work you're doing, and maybe get some wine lessons while they're at it?
Antoinette Stevens: Uh, you can find me on LinkedIn. It's just my [00:51:00] name, Antoinette Stevens. And then I will also be talking at Sprawl on December 2nd.
Oh. In New York. Which is just a continuation of the build versus buy conversation.
Ashish Rajan: I will put the link in there as well. Well, thank you so much for coming on the show, and I look forward to having more conversations. I think it'll be maybe a wine conversation after this; we'll find out.
But, uh, thank you so much for joining, and thank you everyone else for tuning in as well. I appreciate you tuning in. I hope you enjoyed the episode. See you in the next one. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, and Apple, Spotify.
In case you are interested in learning about AI security as well, do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that [00:52:00] just gives you top news and insights from all the experts we talk to at Cloud Security Podcast,
you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.
