The 2-Minute Dwell Time: Why Agentic AI is Redefining Threat Hunting


Is your SOC prepared to defend against an attack that takes less than three minutes to move laterally? In this episode, Ashish sits down with Damien Lewke, CEO of Nebulock and former threat hunting leader at CrowdStrike and Arctic Wolf, to discuss the dramatic evolution of cyber threats. Damien explains how adversaries have shifted from human-driven, localized attacks to machine-speed AI orchestration, citing a recent example where a single actor used Anthropic and OpenAI to exfiltrate 150GB of data. We dive into the critical differences between "agentic" systems (copilots requiring a human in the loop) and "autonomous" systems (agents that drive end-to-end hunts), and how to leverage this technology to empower Tier 2 and Tier 3 analysts. If your threat hunts are dying in a SharePoint folder, this episode would be an interesting listen for you. Learn how to differentiate human behavior from agent behavior, the blind spots of AI agents, and why identity is the new true perimeter in cloud security.

Questions asked:
00:00 Introduction
02:00 Damien's Background (Northrop Grumman, CrowdStrike, MIT, Arctic Wolf)
03:30 The Evolution of Threat Hunting: 2015 vs. Today
08:50 Why EDR and SIEM Tools Aren't Enough (The Gray Area)
10:40 AI-Orchestrated Attacks: 150GB Stolen from the Mexican Government
12:00 The Shrinking Dwell Time: From 12 Hours to 2.5 Minutes
14:30 The Difference Between Agentic and Autonomous Systems
17:30 How to Detect Humans vs. AI Agents (NHIs & Typos)
21:30 The Blind Spots of AI Agents (Insiders & Slow Attacks)
26:50 The Future of Detection Engineering & SIEMs
31:00 The Lifecycle of Security Detections
34:30 The Cardinal Sin of Threat Hunting (SharePoint Graveyards)
38:30 Crocodile Jerky Tasting & Fun Questions
39:30 Active Meditation: Running 1,000 Miles (1600km) a Year
41:30 Favorite Restaurant: Bangkok Bento in Boston

Damien Lewke: [00:00:00] By design, we have accepted that we're not covered. It's just more hoping. A single threat actor using Anthropic with a sprinkling of OpenAI to exfiltrate 150 gigabytes of data in February. Lateral movement started as days, moved down to 12 hours, has now dropped down to two, two and a half minutes.

Ashish Rajan: I don't know how many people actually think about the lifecycle for detection rules as well.

Damien Lewke: Oh my gosh,

Ashish Rajan: Should we even water this plant, or should we leave it alone?

Damien Lewke: The cardinal sin of hunting. We did all this great work. And then it lives on a SharePoint drive. Yeah. And the adversary could come back and use those exact same TTPs a week later. People have the creativity and skills, they just haven't had the time to do it in the past.

Yeah, yeah. And now we're in a world where

Ashish Rajan: If you have been using AI agents for threat hunting, you probably understand the difference between autonomous threat hunting and agentic threat hunting. I had a chance to speak to Damien Lewke, who has been doing threat hunting for a long time, leading SOC teams, and is now the CEO of Nebulock.

We spoke about how threat hunting has evolved, what it looks like with agentic threat hunting and how realistic that is, all the way up [00:01:00] to autonomous threat hunting: what that could mean, how you even maintain accuracy on it, and whether you can build this on your own if you want to. If you know someone who's trying to understand how AI agents are practically applicable in the threat hunting context and how much you can scale that, including autonomous threat hunting and why that would be the next evolution of where this could go,

Definitely share this with the others, you know, who are interested in this topic as well. As always, if you have been a long time listener or viewer of the podcast episodes, I would appreciate if you could take a quick second to hit the subscribe follow button on whichever platform you listen or watch podcast episodes on.

We are everywhere, including Apple, Spotify, YouTube, and LinkedIn. It is a free thing for you, but it means a lot, because more people get to see the work we are doing over here as well. I also wanted to say thank you to everyone who came and said hello to us at RSA and shared the love they have for the work we do here on the podcast as well.

Thank you so much for all the love. I also look forward to seeing you at another conference soon too. I hope you enjoy this episode with Damien. I'll talk to you soon. Hello and welcome to another episode of Cloud Security podcast. I've got Damien with me. Hey man, thanks for coming on the show.

Damien Lewke: Thanks, Ashish, [00:02:00] great to be here.

Thank you so much for having me.

Ashish Rajan: Oh, man. I'm so excited for this conversation, partly because you've had some history in Australia, there is that. But for people who don't know you, could you share a bit about your background and what you've been up to before?

Damien Lewke: Mm-hmm.

A hundred percent. I mean, I'm fortunate. I think a lot of folks spend most of their careers trying to figure out what their passion is. And for me, I found at the age of 20, my first day on an internship at Northrop Grumman, that it was cybersecurity. So I've been in security my whole career. Uh, started out in the DOD building out the threat modeling, hunting and cyber ops team for a nuclear weapon system.

Mm-hmm. Um, then transitioned over to CrowdStrike, uh, at the end of 2016. Really? Because I realized like it wasn't about just cyber physical systems and adversaries targeting weapon systems, but also how and where free speech and democracy could be impacted. So the fact that they did the IR on the DNC really got me motivated.

So I was fortunate to join CrowdStrike after the Series C. I worked across go-to-market, integrations, engineering, and partnerships roles through the [00:03:00] IPO. Mm-hmm. Uh, we talked about our shared history in Australia, so I did spend a couple of years out in Oz. I did enterprise systems engineering at Palo Alto Networks.

Yeah. Um, before really deciding to take a step back, go back into research, I was fortunate to go to MIT for grad school, a master's in engineering. I did my thesis out of our computer science and AI lab before running the AI, threat intelligence, detection engineering, and security research product teams at an MDR called Arctic Wolf, building the detection systems for our 1,200-person SOC.

Ashish Rajan: yeah,

Damien Lewke: the net benefit being to the thousands of our customers, kind of how and where we could identify novel things with the stack that they had.

Ashish Rajan: So I just, I mean, obviously clearly you've worked in this space for a long time.

Mm-hmm. I'm curious, how has the threat hunting workflow changed? What was it like before, so people have some context here?

Damien Lewke: Yeah, a hundred percent. So threat hunting is very different than when I was doing it, you know, 10, 11, 12 years ago.

Ashish Rajan: Right.

Damien Lewke: Uh, you had a pretty fixed stack. And I think as an abstraction, security very much still remains the same in terms of pillars.

Ashish Rajan: Yeah.

Damien Lewke: You had email [00:04:00] security, you had endpoint security, you had identity, you had network security. But this whole cloud security thing wasn't really a thing.

Ashish Rajan: Yeah.

Damien Lewke: Um, so the landscape has changed a lot and even within these domains right across email identity endpoint now cloud and and network, you've seen a, a fragmentation of bifurcation of different markets.

Mm-hmm. Um, so back in the day, the workflow was relatively straightforward. Um, if I were to do a threat hunt, so threat hunting was the whole idea of: can I identify, convict, and attribute anomalous behavior?

Ashish Rajan: right?

Damien Lewke: So in security, right, we've got known good, known bad, and then there's this big old gray area that we're not really sure about.

Yeah. And threat hunting really was around focusing in and being able to provide clarity on that, and then, based on what you found, determining a course of action. Did I find a malicious insider? Yeah. Did we find a nothing burger, and it's just a sysadmin running a bunch of remote scripts? Did we find a malicious actor inside our network?

So in 2015. Pretty straightforward. We [00:05:00] would have an intelligence report. So we would get some sort of intel from one of our partners who would say, Hey, we see these threat actors. These are the, the capabilities that they have. Um, they're targeting organizations or systems like yours.

Ashish Rajan: Yeah.

Damien Lewke: So the process would be, okay, great.

I have this input. I'm now going to try and break down: hey, what am I looking for? Right? Are these IP addresses for command and control servers? Are these specific hashes related to malware? Um, and then I would search for that in my environment, and ultimately get to a conclusion. Yeah.

If that sounds a little laborious or time-consuming, it was. Yeah. Right. Yeah. Trying to get the data, capture the data, distill the data. A lot of times you'd not necessarily get it right the first time. You'd have to iterate, and this process could take days, weeks, creeping into months.
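
The 2015-era workflow Damien describes (intel report in, indicators out, sweep the environment) can be sketched roughly in Python. Every indicator value and log field name below is an invented placeholder, not from any real report or product:

```python
# Sketch of the intel-driven IOC sweep: take indicators from a report and
# check each log record against them. All values here are illustrative.

KNOWN_BAD_HASHES = {"a1b2c3d4e5f6"}               # file hashes from the report
KNOWN_C2_IPS = {"203.0.113.7", "198.51.100.9"}    # command-and-control servers

def sweep(records):
    """Return (reason, record) pairs whose hash or destination IP matches."""
    hits = []
    for rec in records:
        if rec.get("file_hash") in KNOWN_BAD_HASHES:
            hits.append(("hash", rec))
        elif rec.get("dest_ip") in KNOWN_C2_IPS:
            hits.append(("c2_ip", rec))
    return hits

logs = [
    {"host": "ws-01", "file_hash": "ffffff", "dest_ip": "192.0.2.1"},
    {"host": "ws-02", "file_hash": "a1b2c3d4e5f6", "dest_ip": "192.0.2.2"},
    {"host": "ws-03", "file_hash": "000000", "dest_ip": "203.0.113.7"},
]
for reason, rec in sweep(logs):
    print(reason, rec["host"])
```

Even this toy version shows why the loop was slow: every new report meant new indicator lists, new queries, and another full pass over the data.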

Ashish Rajan: Yeah.

Wow.

Damien Lewke: Um, and and and that was in 2015?

Ashish Rajan: Yeah. Wow. 2015.

Damien Lewke: That's like me running McAfee, you know all that.

Ashish Rajan: Right. Okay.

Damien Lewke: Uh, that same [00:06:00] process still remains, but the environment is so different. Mm-hmm. Right. Um, we had brick-and-mortar offices. Now identity is the perimeter. Um, you didn't really have cloud workloads.

You had an on-prem data center that you backhauled everything to. Yeah. Now I'm running typically in multiple public cloud environments.

Ashish Rajan: Yeah,

Damien Lewke: right. Yeah. So the process still remains, but the threat vectors, the different ways that actors can target your environment, the amount of intelligence, the amount of data that you have, uh, at your disposal has changed.

And I'd also say the discipline of threat hunting has evolved. Um, it used to be primarily intel-driven. Now it's really predicated around this idea called a hypothesis.

Ashish Rajan: Yeah, yeah.

Damien Lewke: So the hypothesis is based on a variety of inputs. I have an idea, I have an intuition.

Ashish Rajan: Yeah.

Damien Lewke: I'm going to go look for that.

Ashish Rajan: Yeah.

Damien Lewke: Um, and that process, when you look at the time, still takes a tremendous amount of time and people, and most organizations, yeah,

would love to threat hunt, but being able to do [00:07:00] that consistently, proactively, continuously is really hard, I would argue almost a non-existent practice. Yeah. Not because existing operators can't do it, but it's so hard, with the time constraints that you have, yeah, to get the mind share and space.

Ashish Rajan: Yeah. Yeah.

Damien Lewke: To think about the different ways adversaries are targeting your environment. And then go look for it and then convict it and do something about it.

Ashish Rajan: Yeah. Also because companies may not be large enough to have a security budget that was that big.

Damien Lewke: A hundred percent. Everything that I just talked about, yeah,

is predicated on a human operator. Yeah.

Ashish Rajan: Yeah,

Damien Lewke: yeah. Um, and typically, I mean, in classic fashion, what we had at a large enterprise was 24 by 7. Right? This was like, we're following the sun, this process runs all the time. That's

Ashish Rajan: right.

Damien Lewke: Most organizations can't afford one, let alone many of these people.

Ashish Rajan: Yeah. And that's when they end up outsourcing it. But I guess, to your point, if people were to just look at the models from a 10,000-foot view, I guess, mm-hmm, there are obviously people who have their own SOCs, and those follow the sun. Yep. Some of them have [00:08:00] outsourced it.

Correct. And some of them are big enough that they actually have a threat hunting and detection engineering team. Mm-hmm. But the first two categories don't really do this as a concept. For them it's like, oh, I don't know if I have the resources for it, I don't know if I have the time for it as well.

A lot of them perhaps may think, oh, because I have an EDR solution, or some kind of XDR, EDR, fill in another acronym, on my endpoint or my mobile phone, I don't know. Is that the right way to look at this: if I have that, I don't need threat hunting?

Damien Lewke: That is a great question.

Yeah. Um, I don't think the two are mutually exclusive. In fact, what we see is that a lot of times threat hunting is seen as aspirational. Yeah. We'd love to get to a point where we can threat hunt. Because, I mean, we all understand the "DR" in EDR.

Ashish Rajan: Yeah. Yeah.

Damien Lewke: Um, or the existence of the entire SIEM market. Yeah. We all accept that we're not going to detect everything with the vendors that we have.

Ashish Rajan: Yeah.

Damien Lewke: And the reality is [00:09:00] most breaches don't happen because of the high and critical alerts that you're seeing in your CrowdStrike. Um, they're in the low- or no-signal events, right? Mm-hmm. It's the phished Okta API token that allows you to temporarily assume escalated privileges within your AWS environment and achieve your objective.

Mm-hmm. Um, the hope, I think, for most organizations, is defense in depth. So if I buy a bunch of different tools, if I log this telemetry, that should have me covered. The reality is, by design, we have accepted that we're not covered. It's just more hoping,

Ashish Rajan: yeah,

Damien Lewke: that the breach isn't hiding in those low and no signal events.

Ashish Rajan: Yeah, yeah, yeah. As you kind of said, the evolution. When I think about EDR, they traditionally go down the path of looking at signatures. Mm-hmm. The hashes. Is that still applicable in this world that you're in now?

Damien Lewke: Uh, short answer is no. Um, the longer answer is signatures are still helpful.

Yeah. Um, [00:10:00] behavioral, event-based detections are still helpful. Yeah. If I want to codify different hashes related to LockBit malware, or I know that Scattered Spider uses a particular remote access Trojan, yeah, that is helpful.

Ashish Rajan: Yeah.

Damien Lewke: Uh, but the reality is twofold. Um, about 10 years ago, we started to notice that adversaries were, were shifting from static atomic indicators.

So IOCs Yeah. Yeah. Things like file hashes or IP addresses.

Yeah.

Damien Lewke: To more behavior-based threats. Right. I would achieve access to a network. I would be able to do remote code execution. Mm-hmm. I would blend into the environment. Mm-hmm. And then I'd ultimately move laterally until I got to my objective.

Right. That whole phenomenon was called dwell time and estimates were 180 to 270 days.

Ashish Rajan: Yep.

Damien Lewke: agents have changed everything about that. If you look at what agentic threat actors can do, you know, Anthropic, uh, I mean goodness, like OpenAI, Google and Anthropic have all published research on how adversaries are using their [00:11:00] infrastructure to orchestrate attacks.

Ashish Rajan: Yeah.

Damien Lewke: Um, the most recent public example I can think of is a single threat actor using Anthropic, with a sprinkling of OpenAI, to exfiltrate 150 gigabytes of data from the Mexican government in February.

Ashish Rajan: Yeah, yeah.

Damien Lewke: Um, so we've moved almost from human-driven, human-orchestrated attacks to agentically orchestrated and executed attacks. In that world,

when you're trying to codify what blends in, what normal looks like, that's operating now at machine speed. These signatures, these static indicators just aren't quite ready for that. Yeah. Um, because the threat landscape is changing so quickly, and because the actual threats you need to worry about are going to look normal, or normal-ish,

Yeah. In your environment. Not somebody detonating a known ransomware payload in your environment. Yeah. That's not stealthy.

Ashish Rajan: Yeah. So what's the new version then, I guess? Threat hunting has kind of evolved from a hypothesis world into this autonomous world that we're moving

Damien Lewke: towards? A hundred [00:12:00] percent. Yeah. Um, so really the evolution is, it's not just about being able to identify and convict behavior, but to do so at speed and at scale.

Yeah. The missing ingredient that we had, uh, and, and I would admit I had

Ashish Rajan: Yeah.

Damien Lewke: As a human, is that it was all human-driven. Mm-hmm. It was all single-threaded. I had an idea, I had an input, I went and looked for it.

Ashish Rajan: Yeah.

Damien Lewke: In a world where breaches move, you know, the, the time dilation that we've seen is, um, lateral movement.

So adversaries being able to break out from one machine to another started as days, moved down, you know, seven, eight years ago, to 12 hours, and has now dropped down to, you know, two, two and a half minutes. I need to be able to quickly see this, convict it, and inform what to do. The missing ingredient, and this is where agents are so valuable, is organizational context. Because back to this known good, known bad,

Gray area.

Ashish Rajan: Yeah.

Damien Lewke: You want to be able to quickly capture context around all these different signals you [00:13:00] see and go, hey, wait a minute, that's totally cool, engineers are supposed to do SSH port forwarding. Or, hey, wait a minute, Damien in accounting, yeah, has never done that. Let's look at the identity data, and, oh my gosh,

look, oh wow, he's logged in on five different machines. Looks like that identity's been compromised.

Yeah.

Damien Lewke: But if you're not looking at the context of an org, and using something simple like different departments

Ashish Rajan: Yeah,

Damien Lewke: yeah, and privileges, you're gonna miss that. So it's combining context with conviction, at speed and scale. That's the missing ingredient.

And that's where selfishly, what we've done at Nebulock has been able to really accelerate that, that response pattern by capturing context and informing the next right step.
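
The context check Damien sketches, where the same signal reads differently for an engineer than for someone in accounting, can be illustrated with a toy rule. The directory, department names, and thresholds here are all made up for illustration:

```python
# Toy organizational-context triage: the same action (SSH port forwarding)
# is expected for engineering but suspicious for accounting, and a burst of
# distinct host logins suggests a compromised identity.

USER_DIRECTORY = {
    "eng-alice": {"department": "engineering"},
    "acct-damien": {"department": "accounting"},
}

# Expected behavior per department (invented policy).
ALLOWED = {"engineering": {"ssh_port_forward"}}

def triage(user, action, distinct_hosts):
    dept = USER_DIRECTORY[user]["department"]
    if action in ALLOWED.get(dept, set()) and distinct_hosts <= 2:
        return "expected"
    if distinct_hosts >= 5:
        return "possible identity compromise"
    return "needs review"

print(triage("eng-alice", "ssh_port_forward", 1))    # engineers do this
print(triage("acct-damien", "ssh_port_forward", 5))  # accounting on 5 hosts
```

The signal alone is ambiguous; only the department and privilege context turns it into a conviction either way.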

Ashish Rajan: But now there's a whole skepticism around hallucination. Mm-hmm. Hey, I still need a human in the loop as well. Mm-hmm. Where does that fit in?

'Cause the old detection, to what you said, started off with: I have a hypothesis. Yep. I dug up some research. Yeah. And over months, well, maybe not years, hopefully, mm-hmm, I spent weeks and months trying to figure out [00:14:00] what my data sources are for the detection. But what's the workflow in this autonomous world, then, that would give us that confidence?

Or are humans still very much a part of that?

Damien Lewke: Yeah. So I think kind of upleveling it one bit. I promise I'll get to your question. So. In this world of agents, there are two kinds of systems, right? Agentic and autonomous.

Ashish Rajan: Yeah.

Damien Lewke: Agentic systems and autonomous systems exist on a spectrum.

Ashish Rajan: Yeah.

Damien Lewke: Right. I, I think there's a bit of confusion.

A lot of folks will claim agentic solutions versus autonomous solutions, and that nuance is important.

Ashish Rajan: Mm-hmm.

Damien Lewke: Um, when it comes to agents and humans in the loop, right, an agentic system is an intelligent copilot. It'll take different actions for you, but it will require a human in the loop. And there are a lot of cases where that is incredibly valuable.

Mm-hmm. Um, a great example of this would be incident response. I think we're at a point where agents can capture context, understand what's happening, and provide a list of [00:15:00] remediation steps. But you might not necessarily want to have an agent run amok doing remote response, forensic analysis, and remediation.

Yeah. Uh, not that that's not a world where we're going, but it still is a point in time where these are non-deterministic systems. They are incredibly intelligent, mm-hmm, the more context you give them.

Ashish Rajan: Yeah.

Damien Lewke: But you still want a human in the loop.

Ashish Rajan: Okay.

Damien Lewke: The way that we think about this at Nebulock, and we do have threat hunters doing RLHF, improving our agents, ensuring that we are continuing to ever improve this, is that we've gotten to a point

where we are confident about what's autonomous. So back to this copilot analogy, right? Your agentic copilot is flying the plane: it'll talk to the tower, it'll, you know, give you instructions about which landing field might make sense. For autonomy, the agent's able to take things like weather patterns and previous inputs from earlier flights, looking at other flight data, to really steer the plane.

We think about that in the same way for hunting. So what our autonomous [00:16:00] agents are able to do is take inputs like threat intelligence or generate their own hypotheses and then run a threat hunt to ground, right? Go through the entire life cycle, capture the data, use the context and knowledge that it has, and then ultimately get to a conclusive point.

But because we're not doing the response yet, right, we're just providing the visibility to start, uh, we can do so confidently when it comes to autonomy.

Ashish Rajan: Yeah.

Damien Lewke: So that's how we think about it. And obviously there is that spectrum.

Ashish Rajan: Yeah,

Damien Lewke: we started out very much as agentic, so we have agentic workflows within the platform.

We've been able to make the shift to autonomous just as the platform has matured, as the agents have gotten better and smarter. And obviously as, as we start to see just what you're able to do with real practical application of knowledge and context when it comes to designing the agent systems you have and applying them to the appropriate use case.

Ashish Rajan: So you can be more autonomous. I guess what I'm taking away from this is that if your system has the right context and remembers the context, you're able to make decisions which are a lot more trustworthy, [00:17:00] for lack of a better word. Yes. Yeah, yeah.

Damien Lewke: Absolutely.

Ashish Rajan: Yeah. Okay.

Damien Lewke: Um, now I wanna be very clear.

You still do want humans in the loop, yeah, from an evaluations perspective.

Ashish Rajan: Oh yeah, of course.

Damien Lewke: Um, but yes. Right, to your point, more context, more knowledge, more experience. You can let them run more and more autonomously.

Ashish Rajan: But how would you identify it? Like, one of the bigger things that people have been talking about in the industry, mm-hmm,

is the whole notion of: it could be Ashish, or it could be Ashish's agent doing this. Yes. How would you differentiate? 'Cause, I mean, would the behavior stand out? And this is going back to your, yeah, yeah, old days as well.

Damien Lewke: Oh, that's a great question. Um, so there are a few different dimensions that tell you if you're dealing with a human or an agent.

Yeah. And the amazing thing is like we all have access to that telemetry already. Oh. Um. Simplest example is like an EDR, right? Agents typically operate within the command line. So you can see command line arguments and parameters, but humans and agents do things very differently. Um, there was an old thought experiment, and it's [00:18:00] funny, we have almost inverted the pyramid.

So 10 years ago, if you wanted to find a persistent actor, you would look for hands on keyboard activity delivered via remote code execution. Mm-hmm. Right? So someone was remotely logged into another machine. Able to execute commands hands on keyboard.

Ashish Rajan: That's a signature. Okay.

Damien Lewke: And the idea was, if I wanted to tell if it was an automated script or a human, it was entirely based on the amount of time it took.

So back to Ashish and Ashish's agent: humans make mistakes. We make typos, and it takes us time to write these commands, even if I'm copying and pasting them from a, you know, wiki. Agents, however, operate at a speed and iteration that makes it much easier to codify. Right? These are rapidly executed commands with little deviation.

Uh, it may not be Ashish's account, right? The agent might be using a service account or another NHI, yeah, to be able to run all the commands and take the actions that it does.

Ashish Rajan: Yeah.
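
That timing-and-cadence tell can be turned into a crude heuristic. This sketch only looks at inter-command intervals; the thresholds are invented, and a real detection would also weigh typos, NHI usage, and command content:

```python
# Crude human-vs-agent heuristic on command-line telemetry: agents tend to
# fire commands fast AND uniformly; humans are slow and irregular.
# Thresholds below are illustrative, not tuned against real data.

from statistics import mean, pstdev

def looks_like_agent(timestamps, max_mean_gap=2.0, max_jitter=0.5):
    """timestamps: seconds at which each command was executed, ascending."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return False  # a single command tells us nothing
    # Small average gap with little deviation between gaps = machine cadence.
    return mean(gaps) <= max_mean_gap and pstdev(gaps) <= max_jitter

human = [0.0, 7.4, 21.0, 29.8, 55.1]   # slow, irregular typing
agent = [0.0, 0.4, 0.8, 1.3, 1.7]      # rapid, near-constant cadence
print(looks_like_agent(human), looks_like_agent(agent))
```

The point of the sketch is the inversion Damien mentions: the old "is it a script or a human" timing question now doubles as an "is it an agent" question.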

Damien Lewke: So we [00:19:00] actually had an example of this, um, a couple of months ago, when OpenClaw was all the rage. Yeah. Uh, it still is very big, a really interesting space, to see what OpenClaw means

Yeah. Uh, to the security industry, and productivity writ large. What we saw, and this goes back to what hunting, and the output of what we're trying to do at Nebulock, is really about, was it wasn't just about finding that static indicator of, okay, OpenClaw is running, therefore bad,

Ashish Rajan: mm-hmm.

Damien Lewke: but rather, based on the behavior, the frequency of execution, the commands being written, the service accounts and other NHIs being used by these agents: what is this agent really trying to do? Yeah. And based on the agent's actions, being able to say, okay, that looks totally normal. Right? A little weird, but, you know, you wanna run OpenClaw?

You run OpenClaw. Versus: that one actually is trying to access resources that the agent shouldn't be trying to [00:20:00] access within the corporate environment.

Ashish Rajan: Mm-hmm.

Damien Lewke: What that allowed us to do is abstract that away and say, okay. In general, when you're trying to define agent activity, what does that look like? Typically?

Faster, more iterative, more repetitive. Typically not necessarily using your identity, but using NHIs to do it. And then ultimately, when it comes to codifying that, a lot of organizational context is helpful. Some organizations say you can run your own OpenClaw, you can do whatever you want. Most organizations, I think, are a bit more prescriptive.

Ashish Rajan: Oh yeah.

Damien Lewke: Right. Yeah. Um, so as soon as it starts touching corporate repos, corporate assets, a lot of that can be told, again, just by looking at the command line.

Ashish Rajan: Yeah. So, to your point, actually, what you said is really interesting. Because back in the day, and when I say back in the day it sounds like 20 years ago, but it's like just pre-GenAI,

I could just be running a script, mm-hmm, which, to your point, is the system using a service account and just running a script. But [00:21:00] are there blind spots in what agents can find? 'Cause, to your point, it could just genuinely be a script that I run versus an agent, and they probably would be at the same speed, I would think.

So I'm curious to know, what are some of the things that agents are not that good at today? Because obviously we can talk about the fact that they can do a lot of things, but I'm curious, what are some of the gaps here?

Damien Lewke: Yeah. Um, and there are, yeah. Right. I wanna be very clear that like, agents are not a silver bullet.

Ashish Rajan: Yeah.

Damien Lewke: Um, we've talked about human in the loop verification validation and also things like agent training and, and RLHF, right? Yeah. Like it's, it's really important. To continue to ever improve, uh, the ENT systems that you have, and also recognize the value of existing signal generation systems. Heuristics and supervised ML are great ways that we have used for 15 plus years.

And you could argue AI has been in cyber since the seventies because we've, we've applied ML to cyber [00:22:00] problems for quite some time,

Ashish Rajan: right?

Damien Lewke: Um, agents do struggle, ironically enough, in two key areas. One is: if I'm a fully credentialed malicious insider

Ashish Rajan: mm-hmm.

Damien Lewke: And I am doing things that appear normal, an agent is not magically going to say like, yep, totally convicted, malicious insider.

Right. Like it still is challenging for an agent to find that.

Ashish Rajan: Yeah.

Damien Lewke: The other challenge you do run into, and this is something that we have really been thinking through, is low-and-slow attack paths. Right. Back to your point on the script versus the agent: with the script, it's pretty deterministic; an agent might make some iterations, you might see some tool calls, but that all operates within a very fast time horizon.

Ashish Rajan: Yeah.

Damien Lewke: Whereas like tried and true, an agent trying to capture state and context over like many days or weeks

Ashish Rajan: Oh

Damien Lewke: yeah, is gonna be a challenge. Yeah. Uh, and the final one, really, just given that this is the Cloud Security Podcast, is what I love to call the undiscovered country of [00:23:00] cloud hunting.

Ashish Rajan: Mm.

Damien Lewke: Threat hunting shifts when you think about the rich telemetry, state, and process hierarchy that you can get on the endpoint, and when you look at identity authentication patterns and other elements around identity, versus, like, CloudTrail logs. Right? Like, I can look at API calls.

Ashish Rajan: Yeah.

Damien Lewke: Uh, and if someone is fully credentialed,

Ashish Rajan: yeah.

Damien Lewke: Um, that's tough. Right? Or ephemeral instances, trying to find persistence. There is no silver bullet. The real key, in the way that we've thought about this, and I think where I would suggest most organizations go, is looking at entity relationships between each of these attack surfaces.

Ashish Rajan: Mm-hmm.

Damien Lewke: Um, right. Okay, based on the identity that's accessing these workloads within AWS, what are they doing within your own endpoint environment? What does this user normally do, and what are the [00:24:00] deviations? Yeah. So I know we went a little bit

Ashish Rajan: no off

Damien Lewke: filter, but at some point

Ashish Rajan: because I, I was thinking, sorry.

Damien Lewke: Oh, no, no. Just to your point, back to what agents are good and bad at: good at quickly identifying and convicting something.

Ashish Rajan: Yeah.

Damien Lewke: Not great at low-and-slow over longer horizons. Yeah. And they also are only as good as the data they have access to. Yeah. Right. So when telemetry can be hard to get, or when dealing with traditional human-based challenges,

Ashish Rajan: Yeah.

Damien Lewke: Um, especially on the cloud side, that is something that they struggle with, and we really are working towards that. The way that we work to solve it is by doing this cross-entity stitching, as opposed to viewing this data, which we typically do, in silos (endpoint, identity, cloud) and then trying to backwards-induct. You want to do that all in real time.
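
The cross-entity stitching Damien describes can be sketched as a simple group-by on identity. A real pipeline would normalize timestamps and schemas across sources; the event shapes below are hypothetical:

```python
# Sketch of cross-entity stitching: key events from endpoint and cloud
# telemetry by the identity behind them, so one entity's activity is read
# as a single story instead of unrelated low-signal events in silos.

from collections import defaultdict

def stitch(endpoint_events, cloud_events):
    """Group events from both sources under the identity that produced them."""
    timeline = defaultdict(list)
    for ev in endpoint_events:
        timeline[ev["user"]].append(("endpoint", ev["action"]))
    for ev in cloud_events:
        timeline[ev["identity"]].append(("cloud", ev["api_call"]))
    return dict(timeline)

endpoint = [{"user": "svc-build", "action": "spawned shell"}]
cloud = [
    {"identity": "svc-build", "api_call": "sts:AssumeRole"},
    {"identity": "svc-build", "api_call": "s3:GetObject"},
]
merged = stitch(endpoint, cloud)
# A service account that spawns a shell AND assumes a role AND pulls objects
# is one entity's behavior, which is far easier to convict than three
# separate alerts viewed in isolation.
print(merged["svc-build"])
```
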

Ashish Rajan: Yeah. 'Cause I think, to your point, the delay comes in, especially when you're doing threat hunting in the cloud as well. The delay is great: you have CloudTrail logs, but you may be talking to a data center which is in-house.

Damien Lewke: Yep.

Ashish Rajan: Or from my laptop, so my laptop doesn't have connectivity to that data center.

You have to go through this VPN, mm-hmm, or do something there. [00:25:00] There's a lot more complexity than as simple as, oh, it's in the cloud. I'm like,

Damien Lewke: yeah. Oh, a hundred percent. Well, and like, VPN's a great example, right? Yeah. Like, if I don't have endpoint-based or some sort of network-based decryption, I can't read that.

Yeah, right. Like it, it's gibberish.

Ashish Rajan: Yeah, yeah, yeah. You're

Damien Lewke: not gonna be like, "Hey, reverse AES-256 encryption and try and figure out what's happening." Like, it doesn't work that

Ashish Rajan: way. Doesn't work that way. Yeah, it doesn't work that way. Well, 'cause I think, as you mentioned, the whole, uh, delayed attack as well.

'Cause that reminded me that even in CloudFormation, or in general when people write a script, you normally would add a sleep.

Damien Lewke: Yep.

Ashish Rajan: 'Cause we are waiting for the script to finish. There used to be this thing where, like, oh, I want this to finish first, because somehow it's blazing through the entire script.

Yep. I want it to wait for five seconds. And to your point, those could very much be part of the way that I normally work.

Damien Lewke: Mm-hmm.
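The cadence question here — a script sleeping a fixed five seconds versus a human pausing between steps — can be sketched as a simple inter-event timing heuristic. This is an illustrative aside, not anything from Nebulock; the thresholds and helper names are invented.

```python
from statistics import mean, pstdev

def timing_profile(timestamps):
    """Summarize the gaps (seconds) between consecutive events."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {"mean_gap": mean(gaps), "jitter": pstdev(gaps)}

def looks_machine_driven(timestamps, max_mean=1.0, max_jitter=0.2):
    """Flag activity that is either machine-fast (tiny average gap)
    or suspiciously uniform (near-zero jitter). Thresholds are
    illustrative guesses, not tuned values."""
    profile = timing_profile(timestamps)
    return profile["mean_gap"] < max_mean or profile["jitter"] < max_jitter

# A script sleeping exactly 5s between steps has zero jitter...
scripted = [0, 5, 10, 15]
# ...while human-paced work is slower and noisier.
human = [0, 3, 9, 20]
```

Note the blind spot being discussed: a benign deploy script with a fixed sleep trips the same uniformity check that scripted agent activity would, so timing alone can't cleanly separate the two.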

Ashish Rajan: So it may be harder to identify. But it'll be really interesting — I'm curious on your thoughts where, moving forward... maybe planning for the next five years is unrealistic at this point.

Just say the next six [00:26:00] months. Yeah. Yeah. The six-month horizon, the rest of 2026. Where do you see this kind of going? 'Cause there's almost a use case for — maybe signatures are still important, to what you're saying as well. Mm-hmm. But then there's a requirement for context. Where do you kind of see this detection engineering space going?

'Cause a lot of people, obviously — oh, actually, before we go into it, one more question that I had was — please. Uh, I have been talking to a lot of detection engineers, and, mm-hmm, Claude Code and all of these things seem to give them confidence.

Damien Lewke: Yep.

Ashish Rajan: And maybe, I don't know if it's false confidence or well-founded confidence, but irrespective, yep,

there is this notion that we as an organization don't need anything — like, we can create these AI tools ourselves. Mm-hmm. Where do you stand on the reality of actually making it happen? 'Cause obviously you guys are building a company on this, right? Mm-hmm. So you guys realize the end-to-end of it. For an outsider, it's like, oh, I just need Claude Code and no detection tool, and that'll be it, right?

Yeah. I just keep adding to it, 'cause we can figure it out. Yeah. I just prompt it out. So what's the reality of this?

Damien Lewke: Yeah. Um, we were both just at, uh, BSides SF. Yeah. And what's really encouraging to see, as an AI and security company that's building agentic systems, is that agents are in security.

That time has happened — security has gone agentic. Practitioners are using Claude Code, um, and are building detections with it. And honestly, yeah, some of the presentations I saw were really, really exciting. Like, I'm so excited to see that we are all on the same page here. Mm-hmm. Uh, I feel like a year ago it was a very different landscape, so I'm really excited about what's gonna continue to happen in 2026.

Short answer is, Claude Code plus your SIEM gets you pretty far, right? You're able to write detections, um, you can write complex queries, and you can even start to ideate on [00:28:00] sophisticated threat hunts that you might want to run. But it's about 20% of the battle. Mm-hmm. I think the real problem that you run into is around three tiers.

First tier is data normalization. Um, if you go back to SIEMs and data transformations — if you're interrogating a system where you've done all these data transformations: A, you've got latency; B, the data might be incomplete or lossy. Yeah. And C, without having all that data cohesive, um, writing complex queries is gonna be a challenge.

They may be expensive or incorrectly written.

Ashish Rajan: Yeah.
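The first tier Damien describes — the same event shaped differently by every tool — can be made concrete as a small field-mapping layer. The schemas and field names below are invented for the sketch (real CloudTrail fields like `userIdentity` are nested objects, not flat strings):

```python
# Hypothetical mappings from each source's native field names onto
# one common event shape: identity, ts, action.
FIELD_MAPS = {
    "edr":        {"user_name": "identity", "event_time": "ts", "verb": "action"},
    "cloudtrail": {"userIdentity": "identity", "eventTime": "ts", "eventName": "action"},
}

def normalize(source, raw_event):
    """Rewrite one raw event into the common schema, so downstream
    queries and detections are written once, not once per tool."""
    mapping = FIELD_MAPS[source]
    return {common: raw_event[native] for native, common in mapping.items()}

event = normalize(
    "cloudtrail",
    {"userIdentity": "alice", "eventTime": 95, "eventName": "AssumeRole"},
)
```

The hard parts Damien lists — latency, lossy transformations, incomplete data — live outside this mapping step: a clean schema doesn't help if the fields arrived late or never arrived at all.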

Damien Lewke: Um, there's a whole lot more I could talk about with data normalization, but the second point that I would make is capturing and maintaining context and knowledge around that. I only know what I know, and I only have access to my organization's intelligence.

And while, depending on the company, that may be significant, that is not everything. And given how fast threat actors are moving, one of the big challenges that you're gonna have when it [00:29:00] comes to intelligently writing detections is not just being able to generate the right ideas and have the time to do this — you then have to build and maintain this whole system, right?

You have to evaluate and watch context creep. You have to watch agent creep. Yeah. You have to ensure that the inputs keep matching the outputs and that the system is robust and intelligent. The final tier that I really think about is not just, hey, can I write the question? 'Cause the question is just the beginning of the process.

Ashish Rajan: Yeah.

Damien Lewke: I think where a lot of us struggle is, beyond being able to interrogate a data set: how do I test it? How do I validate it? How do I do that performantly and, more importantly, continuously? Um, you know, the way that we think about this at Nebulock is, it's great that folks do that. I'm super excited — we've got folks that we work with who are exploring these areas. Really, where we're able to augment that is normalizing all that endpoint, identity, and cloud data.

We also now do, uh, network — but normalizing that [00:30:00] data, doing the entity stitching, being able to do real-time streaming detections as well as retrospective hunting and analysis. Then being able to say, hey, you know, you've given us context about your organization; we're gonna proactively flag things that you yourself can write

detections for. Oh yeah. Uh, using Nebulock or your own tool. Um, and then finally, being able to use the collective intelligence of all the organizations that we work with, right? Instead of this still being a very human-driven workflow, you want the agents to empower and augment the human. Mm. Um, and that diversity of data, that diversity of perspective, really solves the inherent core problem of coverage.

Mm-hmm. Because it's not just about asking the question — it's about testing, validating, and generating pragmatic detections with your org's context. But I'm so excited, and I think the folks that are doing that are doing amazing work, and I'm so encouraged, as a practitioner and also as a founder of an agentic security company, that that's happening.

Yeah, I think it's amazing.

Ashish Rajan: But to your point, there's the maintenance part also. I don't know how many people actually think about the lifecycle for detection rules as well.

Damien Lewke: Oh my gosh.

Ashish Rajan: Because, I mean, that itself is a whole — I mean, we could spend a lot of time on that. Because I feel like it's great to create a rule.

And then maybe I even use AI to create that rule.

Damien Lewke: Yeah.

Ashish Rajan: I mean, now my skill set is higher up as a detection engineer — I'm thinking, oh, I've uplevelled myself. As a CISO or a head of detection engineering, I'm thinking I'm gonna lose this person really soon, because he or she's gonna leave,

Damien Lewke: Yeah.

Ashish Rajan: then go somewhere else to do this. Because, uh, that detection that was made by me —

mm-hmm — now someone has to, to your point, continuously monitor if it's actually still valid to keep.

Damien Lewke: Yep.

Ashish Rajan: Should we even water this plant, or should we leave it alone?

Damien Lewke: Well, and it's funny, back to the plants, right? AI is very good right now at, you know, I'd say single-event-based detections.

Ashish Rajan: Yeah.

Damien Lewke: But if you think about what the modern attack looks like, it's a sequence of behaviors

over time. Yeah. And writing things like multi-entity [00:32:00] correlation rules, and being able to do that across hundreds of thousands of entities at scale — it's a very different problem. You know? Yeah, I agree. Right. The care and feeding of detections, versus being like, do we just throw this all out and start from scratch?

It's an interesting question. We could talk a lot about that. Yeah. Uh, detection lifecycle management is a phenomenal topic — we can talk about that after the podcast. Yeah. Over dinner or something. Yeah.
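The single-event versus sequence-of-behaviors distinction can be made concrete. A minimal sketch of a multi-event correlation check — does this ordered pattern of actions occur within a time window for one entity — might look like this. The event shapes and the example pattern are invented for the sketch:

```python
def sequence_match(events, pattern, window):
    """True if `pattern` (an ordered list of actions, length >= 2)
    occurs within `window` seconds of its first step. Assumes the
    events all belong to one entity; other events may be interleaved."""
    events = sorted(events, key=lambda e: e["ts"])
    for i, first in enumerate(events):
        if first["action"] != pattern[0]:
            continue
        idx, deadline = 1, first["ts"] + window
        for later in events[i + 1:]:
            if later["ts"] > deadline:
                break  # this candidate start ran out of time
            if later["action"] == pattern[idx]:
                idx += 1
                if idx == len(pattern):
                    return True
    return False

# A hypothetical three-step behavior: credential use, recon, exfil.
chain = ["assume_role", "list_buckets", "s3_get_object"]
activity = [
    {"ts": 0, "action": "assume_role"},
    {"ts": 60, "action": "list_buckets"},
    {"ts": 120, "action": "s3_get_object"},
]
```

The same three events spread across days would slip past a short window — which is exactly the low-and-slow blind spot discussed earlier, and why window choice and entity scoping carry most of the difficulty at scale.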

Ashish Rajan: And so, going back to, uh, the thing that I was talking about in terms of the evolution of threat hunting as a landscape.

Damien Lewke: Mm-hmm.

Ashish Rajan: Uh, a lot of CISOs — obviously we are at RSA — people have all come with a plan: what am I doing for my detection program, mm-hmm, or my threat hunting program, for that matter? Um, and again, not going for the five-year plan, but even within 2026, as I'm rebuilding my team,

how do I do detection engineering in general? Mm-hmm. What do you see as the shape of those teams, as to what their roles would be?

Damien Lewke: Yes.

Ashish Rajan: And what they should focus on and what should they give up on from before, I guess. 'cause are those practices [00:33:00] even valid anymore?

Damien Lewke: Yeah. So there is validity in old practices, but I think we have to shift the discussion.

Yeah. Uh, I completely agree. Uh, the first is, we all need to accept that identity really is the new perimeter.

Ashish Rajan: Mm-hmm.

Damien Lewke: Right. And not just human, but non-human. Mm-hmm. Right. Uh, that's just where we're at. And if you think about threat hunting and detection engineering as a practice, the vast majority of it for a long time has been focused around the endpoint.

That's not to say that the endpoint still isn't valuable, right? I'm a former CrowdStriker — I love the endpoint, but

Ashish Rajan: yeah,

Damien Lewke: the threat landscape has shifted. So I think the first is, let's understand what our perimeter is. What's the core problem we're trying to solve? What are the resources and assets that these identities engage with?

The second, in terms of teams, is shifting away from single-event-driven rules to correlation rules. Um, one of the comments from one of the discussions at BSides that I thought was [00:34:00] quite insightful was the importance of systems thinking around detection engineering. We want to think about how all of these systems interact with one another,

as opposed to single-event conviction: "this one event is bad."

Ashish Rajan: Mm-hmm.

Damien Lewke: That still could be very valid, but given how fast everything is moving, it's more about codifying how all of these correlation rules work together. Then the final one is — um, I think the moniker is DEATH, right? Uh, there's DEATHCon — it was in San Diego last November, right?

Detection engineering and threat hunting — really, the two practices should be blended. Um, the whole idea is, if I'm gonna write a detection, it's based on an instinct, an intuition. That could be from an internal wiki or a threat intelligence report — I have an idea. But ultimately, what we think about is shifting beyond "hey, I have a question" or "I'm gonna look for data" to: if I have confidence that this behavioral pattern does matter to my enterprise, why don't we just make this a detection?

Right? [00:35:00] Let's proactively flag that, instead of what I believe to be the cardinal sin of hunting, which is: we did all this great work, um, and then it lives on a SharePoint drive. Yeah. And the adversary could come back and use those exact same TTPs a week later, and we wouldn't have a proactive detective control.

So if I'm a CISO, I'd say: identity is the perimeter. Think about identities, human and non-human, and how they access different assets and applications within my environment. Understand the importance of folks who can do systems thinking and cross-correlation detection rules, because that's where we need to go.

Ashish Rajan: Yeah. Yeah.

Damien Lewke: And then finally, uh, really marry up the disciplines — proactive security isn't restricted to one or the other; these two teams should work together. Um, because that creates a flywheel where the idea generation and the detections that we're writing are all pragmatically based around threat-informed defense, as opposed to ad hoc, backlog-driven workflows, which people use.

But uh, I think we need to shift our [00:36:00] perspective in 2026.

Ashish Rajan: Would you say — um, maybe it's my final question, at least technical question — yeah, would you say that this also allows me as a CISO to uplift my level-three, level-four SOC analysts to be more threat hunters and detection engineers? Because there's this separation — that's an elite field.

Mm-hmm. I just do deep dives into the logs that I'm getting, but I don't build detections. Yeah. Do you reckon that, with systems like the ones you guys are building, and the way the industry is going, that gap's going to shrink?

Damien Lewke: Yeah. Um, short answer is yes. Longer answer is, um, that's exactly what we're designed to do.

Mm-hmm. Right. We understand that automation is here to stay. Yeah. You've got AI SOC tools helping automate tier-one and tier-two workflows. You've got SOAR that's able to orchestrate response. But you've got this lovely gray area, to your point, of the tier-two, tier-three folks, for whom back in the day hunting was triage and search.

Yeah. [00:37:00] That's not threat hunting, but that was what people had the time to do. Yeah. Our whole focus is: use agentic systems and autonomous systems to uplevel your folks, right? The idea is you can enable mastery and help folks build and develop new skills, so that something that was once aspirational —

"I'd love to get to threat hunting, but we have to focus on our tier three," or "I'd love to do detection engineering, but I only have three people" —

Ashish Rajan: Yeah, yeah, yeah.

Damien Lewke: You now can do that.

Ashish Rajan: Yeah. Yeah.

Damien Lewke: And I think the key is: the brilliant folks who are in security operations teams — now that automation exists on both sides,

they can do these things, and that potential gets unlocked by these agentic systems that have been built.

Ashish Rajan: Yeah.

Damien Lewke: Um, because the people have the creativity and the skills; they just haven't had the time to do it in the past. Yeah. Yeah. And now we're in a world where they can.

Ashish Rajan: Yeah.

Awesome. Now, uh, that's the technical questions I had. Yeah. I've got, apparently, a snack war. Oh, let me grab that.

Damien Lewke: Love it.

Ashish Rajan: Uh, so, actually, for context for people, [00:38:00] we have, uh, British snacks and Australian snacks. Well, I guess the choice to be made is — there's a crocodile and a kangaroo version.

Damien Lewke: Crocodile,

Ashish Rajan: you're going for the crocodile version. Thank you. And the way we are doing this — so, one question... uh, wait, how big are the pieces? I don't even know how big the crocodile pieces are.

Damien Lewke: That is a great question. About four centimeters tall.

Ashish Rajan: Oh, right.

Okay, cool. Alright. So I'm gonna see how big the kangaroo one is. Okay, I want to try the kangaroo one. Oh, should we do a swap? But we'll figure it out. So anyway, the way we're gonna do this: we're gonna try it out, and, uh, you and I can do it together. I've never had these before.

Damien Lewke: Right.

Ashish Rajan: Uh, so sweetened, sweetened heart Jeep jerky.

This is, that's

Damien Lewke: Game jerky — crocodile. Oh, it's "just the authentic taste." I'm intrigued.

Ashish Rajan: It's a real deal. It's a real crocodile.

Damien Lewke: Great for, great for the morning.

Ashish Rajan: Yeah. Cheers.

Damien Lewke: Cheers. Absolutely.

Ashish Rajan: Is this like any beef jerky?

Damien Lewke: Yeah, that's actually pretty good. It just tastes like chicken,

Ashish Rajan: right? [00:39:00] If you wanna try this, I'm gonna try that.

I'm

Damien Lewke: just curious. Yeah,

Ashish Rajan: like

Damien Lewke: it's a little dry.

Ashish Rajan: It is like chicken. Mm-hmm. It's like chicken packaged as crocodile.

Damien Lewke: That's some smart arbitrage.

Ashish Rajan: Yeah. Yeah. Also — while we're eating... uh, the first question that I had for you, once I finish chewing: what is something that you spend the most time on when you're maybe not trying to solve autonomous threat hunting problems?

Damien Lewke: Um, my big thing is, uh, running. So I started this practice a few years ago — long-distance running. So I run — we'll use non-imaginary units, so we'll use metric. Yeah. I run 1600 kilometers a year, so a thousand miles a year.

Ashish Rajan: Right.

Damien Lewke: And the real point around it is it's an exercise in discipline. Mm-hmm. If you actually boil it down, you're running about 4.9 Ks a day.

Ashish Rajan: Right.

Damien Lewke: So, like, that's not that much. But I live in [00:40:00] the tundra that is Boston, Massachusetts.

Ashish Rajan: Right.

Damien Lewke: So six months out of the year, the weather's terrible. But it forces me to be consistent, and it really has become, for me, a way of active meditation. It allows me to process things, think through things. My family are all here in California.

Ashish Rajan: Oh right.

Damien Lewke: So I'll also like call my parents on a long run.

Ashish Rajan: Oh yeah.

Damien Lewke: Um, and I have an amazing and supportive wife who endures my running hobby. You know, I have way too many running shoes. Yeah,

Ashish Rajan: fair. Uh, second question. What is something that you are proud of that is not on your social media?

Damien Lewke: Um, actually

Ashish Rajan: I got it.

Damien Lewke: Oh, that is not on my social media... the problem is I started posting about running on my social media, you know. Um, I'll be personal. Uh, I'm proud of the fact that, uh, despite all the busyness, um, as a founder — and I wanna be clear, I'm so grateful to get the opportunity to do this; we have an amazing team at Nebulock of really smart folks, um, who are very [00:41:00] mission-driven —

I'm so grateful, just as a person, yeah, to work with such amazing folks. Um, honestly, what I'm most proud of is that, despite all that busyness, um, I still do take the time to prioritize time with my wife. Yeah. So we have a deal that Tuesday nights are date nights, um, and Saturday afternoons — no laptop, no nothing.

Um, I know that's an odd thing to be proud of. It's more that I naturally always wanna be working, always wanna solve that problem. Um, it's really important to me that I also take the time for myself and show up for the people that I love.

Ashish Rajan: Yeah.

Damien Lewke: So I'm really proud of that.

Ashish Rajan: Yeah. Yeah.

Damien Lewke: You know,

Ashish Rajan: That balance is hard, especially if you're running your own thing as well.

Damien Lewke: Oh my gosh.

Ashish Rajan: Yeah. It's funny — I don't know how many people can relate to it as well, so, mm-hmm, I guess a lot of people who are doing multiple things and have their side quests at the same time, yeah, they probably will understand this as well. Uh, final question: what's your favorite cuisine or restaurant that you can share with us?

Damien Lewke: As I go back to kangaroo,

Ashish Rajan: I was gonna say, crocodile or kangaroo.

Damien Lewke: [00:42:00] Okay. Favorite cuisine or restaurant? Um, restaurant — easy. Uh, it's a place called Bangkok Bento. Okay. On Newbury Street in Boston. Mm-hmm. It's a hidden gem. Ironically enough, it's also, uh, a Nebulock new-hire tradition: we onboard you in Boston,

we take you out to this place. It's a Thai and Japanese fusion place.

Ashish Rajan: Oh,

Damien Lewke: It's so funny 'cause it's so nondescript. It's in the basement of a brownstone, right? It doesn't look like anything. And the food is incredible. The sushi is fresh, uh, the curries are spicy. Like, it is just so good.

So, Bangkok Bento on Newbury Street. I cannot recommend

Ashish Rajan: it enough. And I mean, even the name has Thailand and Japan combined.

Damien Lewke: Exactly.

Ashish Rajan: Oh wait, I'm gonna check that out the next time. I'll see you there. Uh, let's wrap it up then, I guess. Where can people connect with you, mm-hmm, and learn more about Nebulock and the work that you guys are doing?

Damien Lewke: Absolutely — as I eat more kangaroo jerky, so you

Ashish Rajan: can take your time from my day.

Damien Lewke: It's all good. [00:43:00] So our website is nebulock.io

Ashish Rajan: Yeah.

Damien Lewke: Um, we also have kind of a free, open demo that allows you to play around with our agentic threat hunter — not the autonomous one, the agentic one.

Ashish Rajan: Yeah.

Damien Lewke: That's vibe hunting — so, like vibe coding, vibe hunting.

vibehunting.com.

Ashish Rajan: Alright.

Damien Lewke: Um, and of course on LinkedIn. Yep. So, Damien Lewke, there aren't many of me on LinkedIn. And, uh, of course, Nebulock's LinkedIn page.

Ashish Rajan: I will add those in the show notes as well. Dude, thanks so much for coming on the show.

Damien Lewke: No, thank you so much for having me. Pleasure. Thank

Ashish Rajan: you. And thanks, everyone tuning in, as well.

See you next time. Thank you. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple Podcasts, and Spotify. In case you are interested in learning about AI security as well,

do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world [00:44:00] of AI security. Finally, if you're after a newsletter that just gives you top news and insights from all the experts we talk to at Cloud Security Podcast,

you can check that out on cloudsecuritynewsletter.com. I'll see you in the next episode.
