The Zero-Day Clock: How AI Shrank Exploit Times from Months to Hours

View Show Notes and Transcript

What happens when an attacker goes from zero stolen credentials to full AWS Admin in just eight minutes? In this episode, Sergej Epp (CISO at Sysdig and former CISO at Deutsche Bank) joins Ashish to discuss the terrifying new speed of AI-driven cyber attacks .Sergej breaks down the reality of the "Zero-Day Clock"a metric showing how the time between vulnerability disclosure and live exploitation has plummeted from 1.5 years in 2020 to under 24 hours today . We explore why traditional, human-in-the-loop SOC teams can no longer keep up, and why defenders must adopt "YOLO mode" automation to fight back .We also dive deep into the fundamental architecture of LLMs. Sergej explains why prompt injection is essentially unsolvable (because AI cannot separate data from instructions) and why runtime security is the only way to gain the "ground truth" needed to stop autonomous attacks . If you're a security leader trying to explain the urgency of AI threats to your board, this episode gives you the exact metrics and mental models you need.

Questions asked:
00:00 Introduction
03:40 Who is Sergej Epp? (Sysdig, Deutsche Bank, Palo Alto)
05:10 Why Prompt Injection is Unsolvable (Data vs. Instructions)
09:10 "YOLO Mode": Taking the Human Out of the Loop in Defense
11:30 The Verification Law: Why Offense is Winning the AI Arms Race
14:30 Scaffolding for AI: How to Give Models Context (Project IRE)
18:20 Honey Tokens: The Most Trustful Signal in Defense
19:40 The Zero-Day Clock: Exploit Times Shrinking to Hours
22:30 How AI Automatically Generates Exploits from Patches
26:30 Case Study: The 8-Minute AWS Compromise via AI Agents
28:40 Why Posture Management Fails Against Fast Attacks
30:20 The Need for Runtime Security and "Ground Truth"
34:30 Restructuring the Security Team for the AI Era
38:30 The Defender's Advantage: Attackers Don't Know Your Environment
41:10 Fostering AI Adoption Through Hackathons
44:50 Fun Questions: Crocodile Tasting, Kids Building Games with AI, and Butter Chicken

Sergej Epp: [00:00:00] Offence just got this superpower. And the same question I would ask now for us in cyber defense, how do we. Take out the human out of the loop for everything we've been doing. The timeframe between a CVE of vulnerability being released to exploitation of exploited was more than one and a half years.

Now we are like under a day, right between eight hours and one that three days. You can expect this is going to drop to hours.

Ashish Rajan: What used to be called quote unquote script kiddies now have access to the same information. It's a very different way of thinking from the way a SOC analyst looks at incidents today.

Sergej Epp: One AWS environment being compromised. Where the attacker moved from zero stolen credentials to full admin admin and eight minutes,

Ashish Rajan: you have tens of thousands of false positive risk came through as well.

Sergej Epp: How a busy SOC analyst supposed to cope with an attack, which is taking under 10 minutes.

Ashish Rajan: I had to fit my vulnerability management into like a, it's into a sprint somewhere. Which could be in six months, it could be one year. Does that way of thinking need to change now?

On one side, the offensive AI teams have started increasing the amount of attacks they can potentially build and do in a [00:01:00] very short amount of time. One of the recent surveys that came out on how quickly a vulnerability can be exposed was down to less than 24 hours before the exposure was announced, all the way up to some of the, the first hack for that happened as well.

We are living in a world where the attack span window from exposure to impact is actually very short. And the defense, which is people like you and I and others are not losing the battle because we don't have access to the same tool. We are losing the battle because most of the time we are not able to verify if the detection we had is actually working or not.

Now we have ai, red teaming, we have runtime, but the fundamental principle a lot of people are heading towards, including Sergej Epp from Sydig who we had the conversation with. There's been the CISO for Deutsche Bank before, now he's a CISO at Sysdig. We were talking about two things. One, as a defense team, we need to get an assurance of how are our security products verifying the output.

Instead of creating multiple false positives, you should be able to walk away with knowing the exact. [00:02:00] Top 10. Let's just say things that are actually valid and are exposed when you do runtime security or security in general across your detection platforms. The second is that it's the harness you build in your security team in an AI ecosystem that would be a moat, not the most expensive or the smartest LLM in the world, because let's face it, it would not have the right context.

We spoke about. How he ran a hackathon inside his organization to start enabling more AI adoption and how interesting use cases came out of the security team adopting AI and why runtime security is becoming top of mind for a lot of organizations as they start to build towards a verifying capability where there is a higher accuracy to the results that you get from all the detection you have put in.

If you know someone who's working on expanding the security program with runtime or probably looking at detection as a capability. I'll definitely recommend you share this episode with them as well. And if you are here for a second and third time and have been finding the podcast episodes valuable, I really appreciate if you take a quick second to hit the subscribe, a follow button on whichever platform you're listening or watching this podcast on.

We are on all [00:03:00] platforms, including Spotify, apple, YouTube, and LinkedIn. It's a free thing for you, but it means a lot to us because more people find out about the work we are doing here in the podcast as well. I also wanted to say a quick shout out to everyone who came and said hello to us at RSA. Thank you so much for all the love and support you shared with us.

It really meant a lot to hear from you, meet you in person, and get to hear how the podcast have helped you in your career as well. Thank you so much for the support. I hope you enjoyed the episode with Sergej. I'll talk to you soon. Peace. Hello and welcome to another episode classically podcast. I got Serge with me.

Thanks for coming on the show, man.

Sergej Epp: Thanks for having me here.

Ashish Rajan: Uh, maybe to set some contexts, if you can share a bit about your background and where you're from.

Sergej Epp: Sure. I'm, uh, CISO at Sysdig. I'm running internal security. IT teams, uh, Strat Research and Field CISO programs. Uh, my background is I've spent like 15 years in cybersecurity, most of that time, um, running cyber defense at Deutsche Bank and then at Palo Alto Networks.

Ashish Rajan: Yeah. And obviously you've done, you've seen kind of like the, the regulated side. You've seen the non-regulated side. We just. Before we started recording, we're just starting to talk about the AI security components, just top of mind, especially [00:04:00] if you're walking the RSA floor or in cyber security. What are some of the things that you guys are seeing as people on the frontline for AI security that not used to be a thing before?

It's almost like a, what's the pre AI thing that we can. Not expecting this post gen ai, or it's not even post now, it's current gen AI era.

Sergej Epp: I mean, I'm, I'm looking just for, for vendor startup, which is not doing AI security. Everybody's just, is just pitching, uh, for that. But I think what's becoming pretty obvious, you need to have an deterministic layer for everything you're doing in cybersecurity.

You can't just rely on AI because it's probabilistic. No. And then, you know, um, that's a big challenge. How do you write now understanding how do you cut through all this noise, which is happening out there? Yeah. It feels like we have 10 times more startups at this RSA

Ashish Rajan: Yeah.

Sergej Epp: And, um, you know, just to get an understanding as well, um, who's doing what is a really big thing.

So it feels like we need an AI right now to digest all that. Put this all LLM notebook or whatever.

Ashish Rajan: Yeah, yeah, yeah.

Sergej Epp: Just ask questions like, what is the best pattern for me now?

Ashish Rajan: So what's been the most, uh. [00:05:00] Eyeopening one that you've come across. Obviously you've been in this space for a long time.

Sergej Epp: Yeah.

Ashish Rajan: What's been the most eyeopening one that you probably the, like Sergey from 10 years ago would've gone, what? We can do this.

Sergej Epp: Right. Interesting. I mean, um, I saw a couple of companies, I think, um, trying to approach securing AI and obviously we're ourself in this business.

Ashish Rajan: Yeah.

Sergej Epp: What was very interesting is that we start to sing about approaches around explainability.

So some startups are trying really to understand like. How's the LLM sinking and, you know, what is the like what is the bad pattern or what is the bad area of this LLM? Yeah. So if, if syncing starts to happen in this space, um, how can we block them the prompt? So, uh, trying really to find out different approaches how to secure and fight prompt injections.

Yeah. Um, I think I'm still a bit, uh, skeptic about that. I don't think that we can solve problem injections because right now everything we're putting in context. Post instructions and data is, is becoming the same. It's all becoming instruction.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: [00:06:00] And, um, I have not really seen a proof that anybody can solve that.

Ashish Rajan: Actually, maybe to cut some context, a lot of people hear the word prompt injection, but may not be sure what that is like. Right. How would you describe it's to other people who may not have dug deep into the space.

Sergej Epp: Right.

Ashish Rajan: How, what would you describe the problem? Objection.

Sergej Epp: So let's perhaps go one step back.

And try to understand how models are trained and used.

Ashish Rajan: Yeah.

Sergej Epp: If you train a model, you train us based on specific data set, and then it's becoming, um, you know, it has a state Yeah. And the state doesn't, doesn't change.

Ashish Rajan: Yeah.

Sergej Epp: So if you want to do then certain activities with this, you can just provide a system prompt in the context window to tell it, Hey, you are a nice, um, assistant, or you're a bad assistant, or you're a fun assistant, like Grok.

Ashish Rajan: Yeah.

Sergej Epp: Um, and then on top of that, users can ask questions. Mm-hmm. Uh, which is also going into. Into the context window. Um, you can as well define certain rules.

Ashish Rajan: Yeah.

Sergej Epp: So everything we are adding on top of this train model is effectively becoming the context of this model. This is becoming more or less the, the changing state.

[00:07:00] Which we iterate over and over again once the model starts to reason, once the model starts to do certain activities.

Ashish Rajan: Yeah.

Sergej Epp: And the point is just, um, the model doesn't really differentiate between what is data and what is instruction. So if you think a bit about the compute architectures and what we've tried to solve for the last 20 years

Ashish Rajan: Yeah.

Sergej Epp: Is to split data from instruction.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: Across, uh, you know, across our operating systems, across our ma management, across pretty much everything. Yeah. File. And for, um, for LLMs, it's not possible. And the problem becomes, now let's say you have now, um, a specific model and you give it a system prompt to tell it.

You are, you're a very nice, uh, chatbot assistant who is working for this specific shop. And now you define this. Let's say you're using this model now as a claude code, or let's say an open claw. You give it as bonus instruction. Okay? These are the activities you're allowed to perform and these are the activities you're not allowed to perform.

Please never give this personal data out. I can just. Come on top of that and then query this bot and just ask it. Hey, everything you've heard before is just, [00:08:00] um, uh, it's just a wishful thinking or whatever. Let's just be real ignore and just ignore that. Right? Yeah. And all this data instructions becoming then complete obsolete and the system tries to behave differently.

So I think that's, I'm not sure if it's the easiest explanation of ion. I think it goes back into the, to

Ashish Rajan: your point, unintentional access. So unintentional action that you're making an AI do is what you're going at. Correct. 'cause I feel like. So far we've been in the world for cybersecurity, where it was predictable.

I think, uh, there was like a claude code or claude code was used to come up with, I dunno, 500 plus open source vulnerabilities. There was a whole, how do you see people approach it and what's wrong about the way people are approaching AI security specifically where AI is quote unquote being weaponized.

Sergej Epp: So I think that's a different type of problem. Right. We've talked about model security Yeah. From injections. Now we talk about, uh, I think you're referring to the traffic release of 500 Ities.

Ashish Rajan: Yeah.

Sergej Epp: And you know, there's a lot of questions like, uh, I'm hearing through the conference as well.

How do we, I don't know, how do we put the right priority now? How do we keep up with the [00:09:00] speed of this attacks? How do we patch it very fast? How do we enable our rules? And I don't think that these questions are the right ones because it's sort of. Of going after the symptoms rather than trying to understand the root cause of the problem.

Ashish Rajan: Mm-hmm.

Sergej Epp: And the root cause of the problem right now is, um, offense just got this, this superpower, right? Mm-hmm. And um, what we're trying to do around offense, uh, we start to see that you have a fully enabled, completely autonomous lapse already try, uh, you know, being in the position to find zero days.

Ashish Rajan: Yeah.

Sergej Epp: And the same question I would ask now for force in cyber effects. Us, how do we take out the human out of the loop?

Ashish Rajan: Yeah.

Sergej Epp: For everything we've been doing. I dunno if you're using like YOLO mode and claude code.

Ashish Rajan: Yeah, yeah, yeah.

Sergej Epp: I mean, YOLO

Ashish Rajan: mode is, uh, yeah. Okay. I'll let you explain it. Yes.

Sergej Epp: So YOLO mode is, is you don't have to click and approve every instruction, right?

Go for you. Just go ahead, give it the problem and say. I wanna, I want you to go out and find a zero day and only come back to me once it's validated and proved and you verified as well. That's really novel.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: And then you go for, [00:10:00] for break, whatever, like, and you have it after a couple of hours, right?

So what is YOLO mode going to look like For cybersecurity and for everything we're doing in this, we need a yellow mode for cybersecurity.

Ashish Rajan: Yeah.

Sergej Epp: Wow. We, to keep our people out of that, to be able to keep up with the speed of, um, of the engineers, right. Of software engineering, but also of offense.

Ashish Rajan: Yeah.

Actually that's a good way to put it. 'cause you almost, uh, I, I love the analogy because I've traditionally, so far we have kind of worked on having people say focus only on technical problems, SQL, injection whatever. Mm-hmm. And cv. There, there's a process defined. And to your point, obviously I wanted to kind of cover both the spectrums because.

On one side, we would have the model prom ejection. There's obviously a growing storm there, but then on the other side, we already have existing challenge of dealing with the usual cvs that come out right, which probably are coming at at a lot more volume. Right than, than what we used to before. So, and I love something you were talking about in talks about verification law.

Sergej Epp: Mm-hmm.

Ashish Rajan: What is [00:11:00] it and why is it important now? Like what, what inspired that thinking? I'll just start with that first.

Sergej Epp: Right, right. Saying what inspired me is, um, just looking back at all the research which was published, um, last year around cybersecurity and around AI. I think specifically looking at the AI research you start to see that in every domain where you can measure something and verification is easy.

You know, from playing chess to playing su Sudoku mathematics, um, every time you can verify and validate something, AI is becoming very good. So all the benchmarks we've created so far

Ashish Rajan: Yeah.

Sergej Epp: Across all the domains they've been, you know, AI was able just to reach 90, 95% of successes. Or even more sometimes.

Ashish Rajan: Yeah.

Sergej Epp: So, and what I've tried to do is, um, um, you know, middle of last year is just to map back this principle to cybersecurity and specifically trying to distinguish between the different offense and defense domains.

Ashish Rajan: Yeah.

Sergej Epp: And it turns out that if you just map back this specific principle of where's verification cheap and.

Where's the problem? Space [00:12:00] big. It turns out offense is, is dominating. Um, because, if you, if you pop a shell, if you unexplored, you get a very easy binary feedback. You get this feedback loop during the training, uh, but also during the interference when AI agents are working that yes, this happened really, now you've got this.

Capture the flag, you've got this, this token, you've got the flag or the exploit work. And so offenses has this very cheap verifiers.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: But if you apply this to defense just look at binary, right? Oh, this binary is 56% malicious or look at, uh, like, uh, our SIEM data, right? I call it, uh, the suspicious information every minute, right?

Yeah. Yeah. How do you, how do you, now, do we have now, what's your confidence level right now to find something bad? So in, in defense, we don't really have this deterministic. Binary verifiers.

Ashish Rajan: Yeah.

Sergej Epp: And therefore saying that explains as well a lot we're seeing right now that offense is pretty much getting the superpower and accelerating much more compared to defense.

Ashish Rajan: Oh, and I, I love [00:13:00] this also because, um, it's very easy for us to go down the path of we are gonna, we're gonna build a huge amount of defense accounts, but then a lot of that is. Probably working from, when you think about, from a deterministic control, it doesn't work from non-deterministic one where AI is supposed to be non-deterministic, probabilistic, all that.

Sergej Epp: Yep.

Ashish Rajan: So does the verification part apply there and can we apply that into our current way of thinking and how we approach security overall?

Sergej Epp: Yeah, I think, look, we've been doing this like through the history of cybersecurity. We've been always trying to push everything. What is hard to verify?

To be very easy to verify. Mm-hmm. Like pick binary for an example. We've been trying, you just push a lot of human knowledge into analyzing of a binary analysis of a binary. Then you identify this one bad pattern, you create a signature to just go and look for this pattern. Yeah. So making verification easy because you can scan very quickly for specific binary versus having now 10 reverse engineers looking at this specific binary for 10 days.

Right. Where it's, it's very difficult. You know, look [00:14:00] at, um. Uh, and you know, there's some other scaffolding techniques as well to do that. Yeah. Like implementing sandboxes for instance. So through the entire history we've been trying always to, to make verification easy.

Ashish Rajan: Yeah.

Sergej Epp: Code analysis is another one.

Can you really get rid of some static code rules like Semgrep for instance. You can't really, because you can apply now on ai, but then how long will the AI reason across all this data set Yeah. Will it really identify as well, this, this. Bad patterns.

Ashish Rajan: Yeah.

Sergej Epp: So potentially not for everything you can replace a lot of that.

Yeah. But especially in the domains where you don't have static problems, but dynamic problems. Yeah. I think it would be very hard,

Ashish Rajan: but didn't like, so Microsoft came up with their version of, I think, pro project IRE I, I don't know how to say it. IRE Yeah. Project. So that was along the same lines as well.

Right. So is there, is that a good starter for people? We are thinking about how to apply this.

Sergej Epp: Yeah, I think look, um, yes. So what they came up with is just to try to scaffold. So rather than having, you know, the AI [00:15:00] just to look at the binary and say, Hey, is it malicious or not? Yeah. You have for certain steps, you have a chain of steps to be performed.

Looking at static analysis, dynamic analysis, like they've been using Gira really trying to get all this understanding together.

Ashish Rajan: Yeah.

Sergej Epp: Before making the assessment and, um. I think this is something where a year ago from now, you still require the scaffolding techniques to be explicitly written down.

Mm-hmm. So the AI can follow them. Now I see that this is even not required because AI learned in this recent trainings cycles. If you use Opus four six, for instance, around. Finding zero days. Yeah. They already and has a good understanding that specific steps have, have been to be performed before. So you have first all to create the inventory, do the static analysis come up with candidates.

Ashish Rajan: Yeah.

Sergej Epp: So it has already this type of F flow.

Ashish Rajan: Yeah.

Sergej Epp: And I think that's, that's what we're going to see more and more going forward. So how do we scaffold as well the defense domain more than the offense domain because. Offense has the ground truth. Yeah. [00:16:00] Defense doesn't really have this ground truth, and you can't really learn that.

The model cannot really learn that. So you, you, you would still require a lot of scaffolding in defense going forward.

Ashish Rajan: So, so what would defendable, like, what would a posture look like then if, because to your point, we already. Being part of defense is almost your walking into a conversation handcuffed, for lack of a better word.

Right. Like, whereas on offensive side there are no handcuffs. You do whatever you want to get the objective, which is what red team and which is why red team always wins.

Sergej Epp: Yeah.

Ashish Rajan: So in this particular scenario, what do you think would be defense postures? Because a lot of people, thesis are obvious. Speak here, trying to build a security program.

Right. Or think about how they're gonna build a security program. If we say, I love the verification analogy because it kind of allows you a, a number to understand, okay. Yes. As a defense I have a certain baseline of regulation that I need to manage a baseline of. Like I can't have unhappy employees as well who could basically become disgruntled internal risk.

Right. As well. Right. I, I don't think offensive security person think, well, they're the only maybe a team [00:17:00] or whatever. So what would defense posture look like if we were start assessing our current security programs?

How would you approach defense now knowing this, that there are these things that are already. Limiters, for lack of a better word.

Sergej Epp: Yeah. So first of all, lemme ask you a question, like what, what do you think is the, like from your experiences, well, being a CISO, what is the most trustful signal in defense? Oh, you can get, I like where you say, damn, something bad is happening.

Ashish Rajan: Oh,

Sergej Epp: it's, it's pretty binary.

Ashish Rajan: Uh, it's pretty binary, I think. Well, I definitely think it's, it's not the false positive part for sure, but. Earlier it used to be the fact that I have a, a single source of truth.

And my, that used to inform my posture.

Sergej Epp: Right.

Ashish Rajan: The moment it drifts away from it, okay, I have something wrong.

Sergej Epp: Right.

Ashish Rajan: But then over time, I feel like the longer I spent in cybersecurity, I realized that the lines got blurry because it's a lot more false positive than start coming out.

And I, I don't, I don't think I have a great answer apart from that. I know where it started.

Sergej Epp: I think you, you might have, but I think we, we sort of, we are talking a lot about that. It's, it's honey [00:18:00] tokens. Yeah. Token tokens. Like if you look at any EDR, XDR solution. All of them are, throwing around honey tokens to detect ransomware.

Yeah. Because it's the, the only unique binary signal that something is happening. Some, somebody's eliminating all this honey

Ashish Rajan: tokens. Yeah, yeah.

Sergej Epp: Right. Same in the cloud. You can, you can position a lot of honey tokens. Yeah. So if you, if you restrict them down, that's the, the only exception I saying where we have a very binary toque, uh, binary signal.

Ashish Rajan: Yeah.

Sergej Epp: But then, um, in other domains, I think there's a lot of scaffolding techniques. You can, you can use, for instance, in cloud security. What became the, the out ultra important, you know, scaffolding techniques is having a graph, right? Yes. Because if you have now a vulnerability in one of your workloads and that's how you can fix this, uh, you have to know if this specific workload is exposed externally.

Yes. If somebody can reach that or not. What kind of identities are being mapped to this, you know, workload as well, what can happen really? So graph is becoming a really, really important scaffolding technique as well. I think we have some of these tricks already and they're not going away. They're going to be.

Leveraged as well bi going forward.

Ashish Rajan: Yeah.

Sergej Epp: [00:19:00] What's I think is changing is where I feel we need to go is, um, what I've said at the beginning, like, how do we take as well the human out of the loop? Because now you have simply to assume that attacks are going to happen in real time. Yeah. Once there's a vulnerability, once there's a misconfiguration, you're going to get attacked.

So how can you defend at this speed as well.

Ashish Rajan: Yeah. Actually, uh, this reminds me of the zero day clock that you guys had mm-hmm. Um, released on unprompted. What was that, the motivation for this? Because the speed of, uh,

Sergej Epp: yeah, yeah. So I think that there are two things to that. The motivation was really, I was, I've heard from a lot of friends how successful they were finding zero. Vulnerability across, um, some of the open source projects. So I saw on Sunday, Hey, let's try it out.

Ashish Rajan: Yeah.

Sergej Epp: Myself. And literally within a couple of hours I found some zero days, including in our own product. And, um, you know, then I just thought, okay, like that was very difficult a couple of years ago, 10 years ago when I was still hands on.

Ashish Rajan: Yeah.

Sergej Epp: Uh, using f and so it was very difficult. So I've just tried to see [00:20:00] how do we map down and. Visualize this as well. Yeah. Because there's one thing we are really bad at, uh, cybersecurity experts is just how do you transfer this message that things have to change at the board level? At the society level.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: And, you know, yeah, I think the, the idea of zero day clock is if you have not been there, just 12, listen and check this out. It's, it's just one simple dashboard where you can see how the timeframe between a CVE of vulnerability being, um. Released to exploitation of exploited vulnerabilities dropped down.

Ashish Rajan: Yeah.

Sergej Epp: And it was just to give you two data points in 2020, it was, um, more than one, one and a half years.

Ashish Rajan: Right. Okay. Before it got exploited. Correct.

Sergej Epp: Okay. And, and now we are like under a day, right? Under, under a

Ashish Rajan: day,

Sergej Epp: under 24 hours. I think it's just varying between, uh, eight hours and one to three days depending on route this year and this a annual basis, but that's crazy, right?

Ashish Rajan: Yeah.

Sergej Epp: You can expect this is going to drop two minutes or hours.

Ashish Rajan: Yeah. [00:21:00] Yeah.

Sergej Epp: Wow. 'cause there's a lot of, you know, techniques, which I don't think we've started to see it being used by, by the offense.

Ashish Rajan: Yeah.

Sergej Epp: Community. Just to show potentially one, one of them, it's, uh, it's, uh, the, the patching problem, right?

Mm-hmm. So effectively when a, when a vendor ships a patch,

Ashish Rajan: yeah.

Sergej Epp: It ships the blueprint. Of the vulnerability and the AI is really good right now in reconstructing this blueprint and building the exploit out of this blueprint.

Ashish Rajan: Oh, actually,

Sergej Epp: that's a good

Ashish Rajan: point. Yeah. So

Sergej Epp: right now, as you know, as a bad nation state, yeah, you can just deploy a lab and then collect all these different patches and instantly create exploits and go out and you know, uses exploits.

So there's still a lot of potential as in way AI can accelerate. Yeah. It doesn't mean that. It can accelerate, but it might accelerate.

Ashish Rajan: Yeah.

Sergej Epp: Um, so I think we need to be, we need to be prepared for that.

Ashish Rajan: I think I only when you mentioned like Yeah, [00:22:00] actually, 'cause the whole patch is supposed to be, it's patching something.

Sergej Epp: Right.

Ashish Rajan: But until now the unlock was not there for what's changed between patching and not patching or we, I don't know if he has a technology, but I think the overall theme at RSA has been so far that earlier some of these things used to be that, hey, you need to be funded by nation state. To be able to get to that point where you are reverse engineering or this, but what used to be called quote unquote script kiddies now have access to the same information.

Sergej Epp: Right.

Ashish Rajan: All they need is just some motivation to go down the path of I'm just gonna un reverse engineer what was then. 'cause that means that everyone else who has not patched, it's still exploitable by this. It's a very different way of thinking from the way. A SOC analyst looks at incidents today.

Sergej Epp: Yeah.

Ashish Rajan: What do you think as well change over there as well?

Sergej Epp: I mean, again, it's, um, I think it's, um, it's not just every script kiddies it is every human out there, because right now it's, it's that easy, right? Just Claude code, the stands as well as scaffolding for offense. Uh, just had a great discussion yesterday with, [00:23:00] um, Carlini from Anthropic who's just saying like, highly respected person on this field of offense.

Are just much more better than I am in offense and don't expect any anything to change in the next few months. It's probably just worse. Yeah. So, and we have just to listen to this type of warnings and prepare for this future because our patching cycle and others hand, you know, of the, of the coin is 20 or 30 days, sometimes even longer.

Like all the environments.

Ashish Rajan: Yeah.

Sergej Epp: So we have to do something about that.

Ashish Rajan: That's right. And especially if you're a regulated environment, it could be even longer because the change management takes a while. And so it's funny, it's funny, I think, 'cause the more you said that, the more I'm thinking that Okay.

The window. Or the way organizations were designed with change management and the way, hey, I can have, I had to fit my vulnerability management into like a, into a sprint somewhere,

Sergej Epp: right?

Ashish Rajan: Which could be in six months, it could be one year because I have lowered my risk. And does that way of thinking need to change now in [00:24:00] terms of how organizations are structured, how security teams are structured as well?

Is there like a crossover happening between, obviously you've been in this space for a while in the regular industry as well? Cloud native is your quite, quite a bit of interest, is quite a bit of interest as well. Are you seeing a lot of crossover there in terms of how organizations have typically designed security?

Does that still work in this world?

Sergej Epp: I don't see that a lot of people are really hearing and seeing the warning right now. Okay. I mean, I'm still, still hearing from some colleagues. I'm German. Right. Based in Germany. Yeah. AI security is not important.

Ashish Rajan: Right, right. Okay. Can

Sergej Epp: you imagine this all this RSA is about AI security, so there's still, like I'm talking about here, big top 40 big companies in Yeah.

In Germany. Look, I think we need to find a way, not just to apply ai, but to change as you are, you're saying the entire way of operating. So again, the, yeah. Like the challenge would be how do you take the human out of the loop? Yeah. And use all this different [00:25:00] advantage. Superpowers, AI is now bringing offense as well.

So what you can do is, for instance, you can, you can be the first to do the exploitation. You can be the first based on this to. Build rules and motivate rules to make sure that you can attack potential attacks. Yeah. 'cause patching is not going to change.

Ashish Rajan: Yeah.

Sergej Epp: You, you're still going to have a delay.

You'll be not able to patch everything immediately. Hopefully it's going to work at some point in time For tech companies, traditional company, uh, traditional companies are going definitely to have this delay because you have a lot of dependencies of, on, on all technology. So the only way just really to be prepared is to understand how can you.

Stop the attacks, how can you detect the attacks? But then if you detect us, will respond to them, um, automatically. Yeah, because human in the loop, a so analyst, like how is he, so analyst supposed to cope with an attack, which is, taking under 10 minutes or so. Right. Yeah.

Ashish Rajan: Also, because that's not the only alert you're looking at, you have tens of thousands of false positives in that Exactly.

You're trying to ski through as well. Well,

Sergej Epp: yeah. I mean, if the attackers, let me just share perhaps something we, we just saw recently with [00:26:00] detected that one AWS environment being compromised. Where the attacker moved from zero. He just got some stolen credentials to full admin in eight minutes.

Ashish Rajan: Eight minutes

Sergej Epp: in eight minutes.

And you know what happened there Is he was using ai. Yeah. And why do we know that he was using ai? Because he was assuming roles which were called Claude. Because whenever the, the AI was stuck. So he was trying to spin up some gpu. GPU as well to mine. Cryptocurrency. Um, when it was stuck, it was just trying to call, for instance, Anthropic GitHub repo.

Yeah. Which didn't exist. Almost like internal GitHub repo as well. I don't, I don't know. Wow. Which were used for the training purposes. So I think that's the new speed of attack and if the attackers are just, running this warmth of offense agents.

Ashish Rajan: Yeah.

Sergej Epp: How we're approaching this from a defense point of view, if you're still having a console and we look at alerts and I would go and do forensic analysis.

Yeah. Try to understand and figure out how Kubernetes is working. It's not going to work. So we have to take the human out of the loop.

Ashish Rajan: Yeah. I think I don't know how much you agree on this, 'cause Forrester and [00:27:00] other analyst firms have started talking about runtime security as a way for at least for you to even know that, hey, things they're putting out there, are you, you, you are getting to at least test your exposure to an extent, right.

How realistic is that A and B, depending on the short exploit time that you're seeing.

Would that work in that context? Yeah. Or is this more like, hey, instead of trying to worry about, um, you're gonna get exploit in 10 minutes, you should probably worry about runtime security is what's the balance that I'm trying to find here?

And does Red Dam security actually an answer for the.

Sergej Epp: Yeah, I mean, look, I mean, I think you've answered this yourself, right? Yeah. Yeah. So there's no need. The question is just why and how it's going to work. Yeah. What good looks like. Um, and I can share my opinion how we are looking at that as well.

If in that hack is happening within I eight minutes you inventory posture management is not going to help. Whatever's just misconfigured, you'll be not able to go back, send this, open up a ticket in Jira and have the engineer just work on that. Right? Unless you automate this, uh, lifecycle as well.

But even [00:28:00] then, you might still have. To go through CICD and get all the approvals. Yeah. So what you need to do is you need now to understand what the, is it a real attack? First of all? What is the attack trying to do?

Ashish Rajan: Yeah.

Sergej Epp: And how can I stop and intervene with the attack in, in real time? And to do this properly, what do you need?

What do you need to AI to be really. Confident in all the actions you need data.

Ashish Rajan: Yeah, yeah, yeah.

Sergej Epp: You need to have a lot of telemetry as well from the Kubernetes cluster, from the containers, from posture management with all this scaffolding required to be able to have really precise answer and not something which is hallucinated.

Ashish Rajan: Yeah.

Sergej Epp: Right. And, um, to have this ground truth, what's going on. Because if you just have parts of that. Five or six events suggesting, oh, something happened in this communities container. But I don't really know what processes we're running. I don't really know if any, uh, you know, container escapes were performed.

So if I don't have a lot of this telemetry,

Ashish Rajan: yeah.

Sergej Epp: I'm not going to be able to. Be confident to say now, oh, I'm going to kill [00:29:00] this container, or I'm going to kill this process because you are going to introduce interruptions as well.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: And I think that's, that's what becoming obvious, um, for runtime security, be able to have a lot of ground truth data, be able to have as well, as much as possible deterministic rules.

Yeah. And with, you know, with far, we have a big community as well of people are there building deterministic rules specifically for kubernetes, Cisco. And then you just bring this together and give it to an AI agent to reason about that and understand, is that really bad? How bad is that? And is it for us now to have an automated action being performed?

Ashish Rajan: Does it go back to what we were talking about earlier, similar to how Honey tokens gives us signals for am I being attacked or am I being exploited? Does the runtime security kind of answer that part for AI as well?

Sergej Epp: Yes. I think with ai I think the situation's even much more worse because, you know, with traditional applications sitting in somewhere in the, in the [00:30:00] container they're deterministic.

So you can find the flows. You can fix the flows,

Ashish Rajan: yeah.

Sergej Epp: If you have good, proper code, security checks. Now with ai, you know, you can, and we've started this discussion as well we can't fix prompt injections, so you can always lu the agent to do something, and then it comes down to three. Uh, you know, three, uh, categories I think which are being discussed right now.

What kind of access to data is the AI agent having? Can the AI agent as well execute commands?

Ashish Rajan: Mm-hmm.

Sergej Epp: Most of them can. Does ai, AI agent have as well access to the internet? And, you know, SIM Sim Williams posted about that, uh, recently where he was suggesting like, you have to take at least one of this.

Out. Otherwise it's going to be a, a huge blast radius and, and could lead to a nightmare scenario.

Ashish Rajan: Yeah.

Sergej Epp: Which I feel is a good like, mental model to try to understand how do you restrict as well the possibilities of NI agent. But even if you take one of those aspects out, out, you can't really take out the network capability in the cloud.

Right. Let's say you, you're going to say. It cannot run commands. I'm even not sure if [00:31:00] that's possible because even if you analyze like a PDF or whatever, it's always writing a, the AI is going to write a script and then the script is going to do something.

Ashish Rajan: Yeah.

Sergej Epp: So it's becoming very challenging. And therefore, I think runtime security for every LLM is.

The key foundation just to get visibility. And by the way, not just for L lambs.

Ashish Rajan: Yeah.

Sergej Epp: We've just released, uh, yesterday as well. The, um, you know, the plugin Foggo parking plugin for, uh, coding agents as well to be able to tap into the coding agents. Understand what the coding agents are doing. Yeah.

Because now you start to see this cycle between the coding agent writing the code, right. And trying to misbehave as well, and the production infrastructure environment coming or going into a loop. And that's where you can start bringing this information together.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: And reason with that,

Ashish Rajan: but is that, so a lot of people obviously have existing EDR identity network.

Especially people who've been doing this for a long time. And some may be skeptic AI securities not needed, maybe. Uh, but for people [00:32:00] who are walking down that path, what are some of the. Top three things you recommend because we kind of spoke about the verification law,

Sergej Epp: right?

Ashish Rajan: We spoke about the zero o'clock, that the time window is getting shorter and shorter.

In terms of when, when, if you were to build a program today, and for people who may be listening, watching, what do you think is at minimum people should consider for their AI security program to feel. I'm not saying we'll solve the problem 'cause I don't think anyone can solve the problem. We don't even know what the new problem is yet.

Right. Yeah. So what's the one of, what's the one or two things that come up in your mind that, hey, at least have these. Right to feel some level of comfort, but it may not be the end all answer, but at least get you somewhere to a point where, oh, I'm starting to do something where I, maybe I can build on this maturity after.

So other things that come to mind.

Sergej Epp: Yeah, I mean, and we're talking about both AI security and security for you, right?

Ashish Rajan: Yeah, a hundred percent. Yeah.

Sergej Epp: So, um, yeah, I mean, a couple of things I think, think I would recommend, and I will start with, with soft principles, because it's all about the culture itself.

Yeah. Yeah. So we're effectively trying to build a new operating system [00:33:00] for cybersecurity. So first, I guess. How do you really, uh, ensure that you can get access to all fuel stuff and take them together with you on this journey?

Ashish Rajan: Yeah.

Sergej Epp: Yeah, because a lot of them are highly specialized. They, they've been experts in forensics, in cloud, security, whatever.

Ashish Rajan: Yeah.

Sergej Epp: A lot of people are pushing back. So I think that's, that's where you have to start. You have just to enable the people, not just by mandating or to use ai, but let them play, uh, make. Make this a habit just to try out, for instance, every day, every week, uh, a couple of tools, trying to see how everything is working.

Yeah. Because you have to understand that and, um, you know, start from there. As we, as I said initially, we're building a new operating system. Yeah. So just using AI or trying to say, oh, which vendors do I need not to cover all this issues? Is not going to help.

Ashish Rajan: Yeah.

Sergej Epp: And um, I think the, the second would be again, uh, more or less of a soft type of recommendation where you start to build organization around that.[00:34:00]

And to me, I'm still trying to find this out. I'd love to hear your opinion as well. To me, we're going to see like two type of roles going forward. The ones where you have more or less all the security engineering forensics coming together, the, the architects of security. The validators of security. So people who are building a validation architecture.

Ashish Rajan: Yeah.

Sergej Epp: Trying to understand, okay, now I've got this and this controls what kind of ground rules, what kind of data are those controls collecting? Is this EDR really in the position to explain this type of, um, you know, attack and reconstruct this back?

Ashish Rajan: Yeah.

Sergej Epp: Others rules really validated.

Ashish Rajan: Yeah. Yeah.

Sergej Epp: So I think that's, that's becoming very important.

And then you go back to the technology level where. You have simply to assume bridge.

Ashish Rajan: Yeah, yeah.

Sergej Epp: You know, that's, that's the reality we are living in. And based on that, you build, uh, build up your runtime, real time controls.

Ashish Rajan: Yeah.

Sergej Epp: Where you take all the human out of the loop.

Ashish Rajan: And I guess to your point, I love the analogy, by the way.

I think there's a, there's some, some weight to the fact if you split your current team instead of [00:35:00] focusing on individual disciplines. I, I think there's, we had this conversation a few months ago. The idea was that there is a part of IT that we're. Let's just say 1.0, pre ai, 1.0 that's gonna continue to exist, right?

That would probably not be upgraded with AI that may exist like that forever. So there's a part of the security team or the organization that will continue to maintain, we did this, so mainframes, we are doing this today. Even today we talk about mainframes, so that that hasn't gone away yet, right?

So

Sergej Epp: our financial Swiss to running on mainframe,

Ashish Rajan: so yeah, yeah, yeah. A hundred percent. Like anyone who's in banking still in. So for that, I feel like it's just business as usual. They'll discontinue and I, I think,

Sergej Epp: well, security, web security and by complexity doesn't work anymore. So that will be a very fun time.

Yeah.

Sergej Epp: I think 'cause now everybody can just hack a mainframe, literally.

Ashish Rajan: Yeah. They want to. Yeah.

Sergej Epp: Three years ago it was not possible. So that would be fun.

Ashish Rajan: Because I don't even remember, mainframes can't even have complex passwords as well. It's like a,

Sergej Epp: it's a aff uh, like access management system. Actually, I did a couple of time forensic there and you had always to get at least two experts just to explain what, how things are working.

Ashish Rajan: Yeah, yeah. So coming from that kind of [00:36:00] world, you feel, okay, that's gonna continue. But I love the analogy which you went with, where it's almost like you want to have you split your entire team into, instead of teaming them as a skill set of, you're a cloud person, Kubernetes person. AppSec person, you kind of go, okay, you are, you are the, you're the builders of the guidance, the framework, the harness, whatever you wanna call it for AI systems.

And the other half is like your verifier to your point. Because it's not that, because the previous version of the 1.0 was that hey, I can have a guardrail or control made once until that gets triggered. I don't have to think about it.

Sergej Epp: Right.

Ashish Rajan: But that's not the world we are moving towards now. Where Right.

My permission today. Could be the same for next six months, but for the seventh month, I don't know, some change, something changed. I became more like a super AI user. I have, oh, I've got MCP. I'm putting connections into Anthropic, putting connections with chat GPT and seeing, oh, I'm going experimentation with whatever.

My overall posture has [00:37:00] changed considerably. But then in the older model, I've already reviewed Ashish. I don't have to look after 'em anymore, so I, I love the, the verification split. Yeah. It'll be really interesting because then your red team goes in there, your, all, your authentic thing goes there as well.

Sergej Epp: Let me share something as well with you saying that's something which is not really proven, but what, going back to the verification law, there's one advantage, one like first principle advantage for defenders, and this first principle advantage for defenders is the attackers do not understand the environment.

So every time they're just going to go in and reach the objective, oh, I've hacked now this kuber, this container. They're going to start to perform steps from scratch based on the training data. So they will try to assume certain roles with certain usernames. Yeah. Based on, you know, trying to hallucinate down something.

Ashish Rajan: Yeah.

Sergej Epp: And that's going to create a lot of noise.

You can hear this noise, you can start build your detection around that.

Ashish Rajan: Yeah.

Sergej Epp: It can obviously go slow and low and just try to enumerate, but then, you are losing disadvantage of these threats.

Ashish Rajan: That's right. Yeah.

Sergej Epp: So, and that's where, if you just go back [00:38:00] to the cybersecurity verification law, the offenses owning the objective verifies and the defense is owning the, you know, the environmental verifiers.

Yeah, yeah. So understanding, like going back to behavior security, everything we've been doing around that. Now if you can explain how your environment is looking like, for instance, what kind of naming conventions are you using for your, um, for your clusters? What kind of naming conventions are you using for your identities?

So starting really to understand that equipping your team. Yeah. With this understanding building detections rules on top of that Yeah. Is quite powerful. Mm-hmm. And that's by the way, something also vendors will be not able to help with.

Ashish Rajan: Yeah. Yeah,

Sergej Epp: that's right. 'cause that's you unique. Experience, insights as well you have as a company.

Ashish Rajan: Yeah. And to your point, I actually, I've got a whole thesis on this, that the, the future would have security people build their own AI ecosystem internally.

Sergej Epp: Right.

Ashish Rajan: And exactly to what you said about the first principle, because I may not want an external body to [00:39:00] have the, all the context over my insight.

Sergej Epp: Right.

Ashish Rajan: As much as I trust a product and I've signed up, have a contract with them. It does happen even today. Like I may go for, to your point, I may have Sysdig as a product, but I would have some things that I have on my side to identify what's a high, medium, low, based on the constraints that I have, based on the context that, hey, I know Ashish doesn't work for us anymore, right?

So I can ignore this. So there's a lot more which just cannot be passed on to a product company anyways.

Sergej Epp: Yeah. That's a scaffolding which is happening, right? Yeah. Yeah. This is something what you need to do.

Ashish Rajan: That's right.

Sergej Epp: Lift the vendors. You know, alone was building like this framework, the statistics of this world.

Right. We're happy to do that, but then there's still a lot of work.

Ashish Rajan: Yeah.

Sergej Epp: Which you can bring in by, ensuring that all these detections protections are working based on your knowledge. I think that's, that's the biggest constraint right now. Um, that's

Ashish Rajan: right. Yeah. Do you find that it's like I, because I, I remember seeing a post from you on the whole hackathon thing that you guys did as well as a team, alright?

Sergej Epp: Yes, yes, yes.

Ashish Rajan: Do those kind of things increase the adoption? 'cause a lot of people that I'm talking to are also on that bandwagon that, hey, I want to increase AI adoption.

Sergej Epp: Right? [00:40:00]

Ashish Rajan: I want my team to be, but then they are sometimes lost on what's my use case here.

Sergej Epp: Right.

Ashish Rajan: I love the hackathon thing because it cut to your point, it opens an environment.

But I love to for you to share overthinking behind it. What was some of the, uh, did it help with the adoption? What were some of the takeaways from there?

Sergej Epp: Yeah, look, I mean, we've been trying to organize, first of all, company-wide hackathon, and then there's a lot of privacy questions. Can I, can my CFO now go and use AI to use this financial data, right?

And then deploy an app with this financial data. So there, there's a lot of constraints as well around that.

Ashish Rajan: Yeah.

Sergej Epp: And we saw, let's just lead. You know, lead by example, a security team, and try to go very deep and you know, try out a lot of things.

Ashish Rajan: Yeah.

Sergej Epp: And a lot of companies are trying to mandate from the top, like, you have to use AI L you have to do that, and this is a tool you have to use.

I think like the first thing what AI is going to disrupt is the entire management layer.

Ashish Rajan: Yeah, yeah,

Sergej Epp: yeah. From that perspective, because the experts, they understand the pain points, they understand the problem, they understand how certain things are working, so let them just [00:41:00] try out. How the AI is working, let them try to fix this these problems.

Um, and that's what we did with, uh, you know, I had a couple of days, uh, offsite and half of this day we've tried to dedicate time and just, um, have a preview of a couple of use cases and just let people build.

Ashish Rajan: Yeah,

Sergej Epp: it was just amazing to see, like I had a lot of it people who never, who didn't have an understanding of security and never touched really a development or threat researcher who never wrote a line of code.

And she was able then to come up with a, you know, with a framework to check if any APIs of Azure you know, if there was any drift of how we adopt automatically the rules on top of that.

Ashish Rajan: Oh,

Sergej Epp: wow. So there are like tons of use cases, um, which you can now build on top of that.

Ashish Rajan: Yeah.

Sergej Epp: I think it's first step just to make sure that people understand that.

And then the next step is going to be how do you now bring this all. To life and make it operational.

Ashish Rajan: Yeah.

Sergej Epp: You know, the larger context. So I think that's, that's gonna be important. What I like as well, the audience to to live [00:42:00] with from that is just how, how do we make this a habit just to try really and enable the people, give them time back,

Ashish Rajan: yeah.

Sergej Epp: On the calendar to try out things. Because if you still live in this whole world. Don't try the new world. We'll be not able to build this new operating system for cyber security.

Ashish Rajan: I mean, we are almost done with technical questions, but I think I, I love the hackathon analogy because also it helps people focus on the right kind of metrics as well from an ROI perspective of how much usage and.

Gives people ideas for what else can I use it for in, and maybe inspires the rest of the organization as well to go, Hey, we should probably do more of this as well.

Sergej Epp: Yeah.

Ashish Rajan: Like, I, I love what you guys did. Those are all the technical questions I had. I've got fun snack war for you. Uh, obviously, as I said, the crowd favorite between the British and the Australian.

Is it kangaroo and crocodile? Uh, or you can go for some of the sweeter side as well.

Sergej Epp: I, I love, I love the exotic stuff, so

Ashish Rajan: Yeah.

Sergej Epp: For I take the crocodile.

Ashish Rajan: Oh,

Sergej Epp: is it, is it a real one?

Ashish Rajan: Yeah. Well, yeah, that's, well, apparently.

Sergej Epp: How, how does it taste?

Ashish Rajan: I'll let you, I'll let you try first.

Sergej Epp: Oh, wow.

Ashish Rajan: Yeah. [00:43:00] I, I'll let you try fun and see what you think.

What, what, I mean, it's a co I'm disappointed. Let's just say that No one has been disappointed.

Sergej Epp: Okay.

Ashish Rajan: By that, right?

Sergej Epp: Okay.

Ashish Rajan: So I'll let you try it out. Does it taste like chicken?

Sergej Epp: Yeah, it's pretty much like chicken. It's good.

Ashish Rajan: Yeah.

Sergej Epp: I mean, my most horrible experience was with turtle soups and, you know, shark meat.

But

Ashish Rajan: yeah, because I, in, in my

Sergej Epp: interesting, '

Ashish Rajan: cause I, I think it was, obviously Australia is a lot of alligators or crocodiles, but I might, the first thing that I had was crocodile would be like to rubbery chewy, right. All of that. I'm like, I wasn't expecting to be like chicken, but I'm like, oh, it's a bit. And unless I'm like, wait, unless they try and make it like chicken so people can consume it.

But I was just definitely pleasantly surprised by the fact that it taste like chicken bread. I mean you can still that kang kangaroo as well if you want. Yeah. Did you want to try the, have you had the kangaroo for

Sergej Epp: I didn't have the kangaroo.

Ashish Rajan: Yes. You can try the kangaroo as well.

Sergej Epp: First time I've been to Australia last year.

Ashish Rajan: Yeah.

Sergej Epp: But I didn't have this type of special things. That's, um,

Ashish Rajan: yeah. I only saw this because, uh, we [00:44:00] were. At the airport and we were trying to, we basically, we were trying to step out of, we at the airport about leave and I saw this, this thing was selling out. I'm going, people are that much into crocodile and kangaroos.

So I'm like, I'm gonna include that in the snack. Interesting.

Sergej Epp: Yeah. I think I'm more of a crocodile fan.

Ashish Rajan: Yeah. The kangaroo fan. Oh, fair, fair. Also, oh, cool. I've got three, uh, fun questions as well. First one being. What do you spend most time on when you're not trying to solve AI security? Ally from the world?

Sergej Epp: So I've got, I've got three kids.

Ashish Rajan: Oh yeah.

Sergej Epp: I am trying just to, um, to do some sort of a hackathon. We spoke about the hackathons, so Yeah. Yeah. I laugh just to try out, uh, seeing, just building robots, building weather, balloons. Oh, nice. Stuff like that. And, um, and that's what we are doing during the weekend a lot.

Uh,

Ashish Rajan: nice.

Sergej Epp: My son got, he's 14 and he just got a job with a, you know, gaming startup because. We've been just building some games and actually I've showed him how to, how to do that and a lot of people reached out afterwards. Oh really? Because of [00:45:00] that.

Ashish Rajan: And he got a job.

Sergej Epp: That's fine. He got a job Yeah.

Internship right now. Yeah,

Ashish Rajan: of

course.

Ashish Rajan: Yeah. Yeah. Wow.

Sergej Epp: Yeah. Allowed to work yet full time in Germany. Beijing,

Ashish Rajan: of course. Yeah. But they are, uh, sorry, but they, they can be an intern at 14. I thought it was more like 1617 already.

Sergej Epp: No. So they can, they can have some small drops as well. Big take as well.

Ashish Rajan: They can have experience.

Sergej Epp: Yeah.

Ashish Rajan: Oh, that's pretty good. And so he is now building games.

Sergej Epp: See games and he was pretty big in coding by the way, but then he lost interest about two years ago.

Ashish Rajan: Right.

Sergej Epp: Because, you know, teenage age, all those things. Yeah. But then, um, when claude code came out, um, you know, the story was quite of a fun story.

During the birthday party here, the birthday party and then we were, they were playing games.

Ashish Rajan: Yeah, yeah.

Sergej Epp: And, um, I just showed them quickly claude code and just challenged 'em, Hey, why don't you guys try out just to build your own games?

Ashish Rajan: Yeah, yeah, yeah.

Sergej Epp: And literally stopped playing.

Ashish Rajan: Oh,

Sergej Epp: really?

And then started to build games and make a competition out of that.

Ashish Rajan: Oh wow.

Sergej Epp: Um, so I started, it was a very unique experience to be able. To give them something better than playing Fortnite. So if you have kids at 14 now, nothing is better than the Fortnite. Just give us time and leave us alone.

Ashish Rajan: Yeah,

Sergej Epp: that was pretty cool.

Wow. Uh, yeah, then I've posted that and you know [00:46:00] me, a couple, a couple of gaming studios, uh, reached out, reached out, and, and asked, Hey,

Ashish Rajan: there you go man. Um, so second question may be, uh, similar vein as well. Then what's something that you're proud of that is not on your social media?

Sergej Epp: Hmm. A lot of things.

I'm, proud first of all that right now I'm just, traveling a lot between Germany and and us almost every second week. I'm proud of my wife that she's handing all this mess with teenagers at home. Yeah. Wow. And I have my escape first of all.

Ashish Rajan: Yeah, yeah, yeah. Fair. You have your escape while she's the one who's joining away at the moon.

Yeah,

Sergej Epp: I think that's, um, that's the first thing. I'm proud pretty much uh, about the community as well and it's pretty cool to see that a lot of people are in cybersecurity. Whatever is happening, you know, we're coming together, we're finding good ways to, to share stuff, right? Yeah. And, um, to build together.

And I think your instrumental part, I appreciate that. Thank you. Cloud Security podcast as well around that. So yeah, we should do more of that and I think we should try to make the bridge as well between us as hack our security leaders. [00:47:00] And the government and society. Yeah. 'cause now it feels like I, I've heard this you know, last week it feels like being back in the nineties, right?

Yeah. With cybersecurity. Remember back then we were all nerds, like all the people didn't wanted to, to, to do anything with us. So

Ashish Rajan: That's right. Yeah.

Sergej Epp: How can we get, again, like being humans and being on the table?

Ashish Rajan: That's right.

Sergej Epp: Um,

Ashish Rajan: because I think to your point, the, that being technical is a good thing again,

Sergej Epp: right?

Ashish Rajan: Earlier it became, I mean, obviously you still need to be able to translate the technical into business, but. You can be a nerd again in AI if you want to.

Sergej Epp: That's true.

Ashish Rajan: And you don't have to be just computer nerd. You could just be any kind of nerd.

Sergej Epp: Right.

Ashish Rajan: Yeah. I mean, great answer as well. Uh, final question.

What is your favorite cuisine or restaurant that you can share with us?

Sergej Epp: Oh yeah, I think that the couple of them as well. I like, uh, Indian food actually. Oh, nice. So I used to study in New Zealand and they had this nice butter chicken, you know. Oh, really? They show where the other universities. So, um, that's, um, that's something I, I like to, I still, not sure if it, is it from UK or is it from India?

Ashish Rajan: Uh, I think ButtER chicken [00:48:00] technically is Indian. Uh, we were talking about this the other day where I think there's land Go Buting. I know you

Sergej Epp: in between right now,

Ashish Rajan: so Yeah, it's definitely Indian. There's one which is not Indian, but it's like a, it's like a British version. They created it and all that, butter chicken is definitely Indian.

Yes.

Sergej Epp: Right.

Ashish Rajan: Yeah.

Sergej Epp: Okay. It's like the Italians always trying to fight with Americans who invented the pit service.

Ashish Rajan: Yeah. Or, or we were talking about this the other day where there is nothing like a Californian roll in Japan.

Sergej Epp: Oh, okay.

Ashish Rajan: Or, and there is no, um, Mongolian beef in China.

Sergej Epp: Right.

Ashish Rajan: But there is a Mongolian beef.

If you go to any Chinese restaurant. So, but it's technically from Mongolia, not from China.

Sergej Epp: Right.

Ashish Rajan: So it's like this. Yeah. So as far, I was thinking for a second, like butter chicken. I'm like, yeah, it is Indian. Yeah. So,

Sergej Epp: and it's, it's really, really difficult to find good butter chicken.

Ashish Rajan: Oh. I

Sergej Epp: mean,

Ashish Rajan: in the uk maybe a lot more easier.

Definitely not that difficult in

Sergej Epp: Germany is, it's almost impossible. I mean, we have some good Indian restaurants. It's, I dunno, it's not really that type of level.

Ashish Rajan: Oh fair. So that's the favorite cuisine? Or are we have favorite?

Sergej Epp: Yeah, I think so. Um. Oh [00:49:00] gosh. I mean, it's just, um, I love, uh, I love sushi. I love, um, uh, Italian food.

I love, uh, you know, Indian food. I think that's, um, it's, I don't think that I have the favorite cuisine right now. Oh.

Ashish Rajan: All of it is, is always good. Perfect. I'm a

Sergej Epp: food. You so

Ashish Rajan: Perfect. Uh, so I mean, that's all the questions I had. Where can people find more about the work you guys are doing at Sysdig and connect with you as well on the work you're doing

Sergej Epp: look, I'm, I'm pretty active on LinkedIn. Uh, not so much X dotcom, unfortunately. Uh, trying just to be focused. Um, just please reach out, check out the, the Strat research we're doing at Sysdig as well. We're publishing quite a lot right now.

There's a lot of automated things which are. Helping to do this right now. And um, yeah, check out zero day clock. If you need one slide for the board that's a slide. And if they're not listening to this slide and not, going to ignore this or saying we have different problems, yeah, I wanna look for another job.

Yeah. And um, everywhere on LinkedIn,

Ashish Rajan: I will share

Sergej Epp: that, all that as well. But dude, thank you so much for coming. Thank you for having me. Thank you. Thanks for everyone tuning in as well.

Ashish Rajan: Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you [00:50:00] by Tech riot.io.

If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast tv, our website, or on social media platforms like YouTube, LinkedIn, and Apple, Spotify. In case you are interested in learning about AI security as well. To check out a sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, apple as well, where we talk.

To other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter, it just gives you top news and insight from all the experts we talk to at Cloud Security Podcast. You can check that out on cloud security newsletter.com. I'll see you in the next episode, please.

No items found.
More Videos