Is the traditional detection engineer becoming obsolete? According to Jack Naglieri, CEO of Panther and creator of StreamAlert, the role isn't dying; it's evolving into something much more powerful: prompt engineering and agent orchestration. In this episode, Ashish and Jack explore the "New World of Detection Engineering." Jack explains how the industry has shifted from writing manual Splunk queries to Python-based "Detection as Code," and now into an era where AI agents absorb the heavy lifting of alert triage. The future SOC analyst won't be memorizing syntax; they will act as a "puppet master" orchestrating multi-agent systems to make critical decisions. We dive deep into what it takes to thrive in this new paradigm. Learn why providing deep organizational context is the ultimate moat, how to use "thinking tokens" to debug AI logic, and how to set governance frameworks so your agents don't accidentally delete your production database trying to fix an error.
00:00 Introduction
02:50 Jack's Background: Yahoo, Airbnb, StreamAlert, and Panther
04:30 The 3 Phases of Detection Engineering (Splunk - Code - Agents)
07:50 How Agents Eliminate Alert Fatigue and Manual Triage
10:30 Is the SIEM Dead? Why AI Still Needs a Data Pipeline
16:20 AI Hallucinations vs. Human Errors (The Context Problem)
19:30 The Build vs. Buy Debate: Why You Shouldn't Build Your Own SIEM
24:20 Building Trust in AI: Governance and "Thinking Tokens"
27:00 The Shift from Gathering Information to Decision Making
35:30 Why the Future of Detection Engineering is Prompt Engineering
37:30 Fun Questions: Kangaroo Jerky Tasting
38:30 Hobbies & Pride: Workaholic Founders and Moving to San Francisco
40:30 Favorite Food: Mexican and Japanese Cuisine
Jack Naglieri: [00:00:00] Detection engineering as a practice is probably just gonna get kind of absorbed by agents. You're not just creating detections for a human anymore. You're creating detections for an agent. Right. And it's a totally different paradigm. I would assume that a hundred percent of alerts should be triaged by agents.
If you give an agent too much power, it'll go delete your production database on accident. Yeah. 'cause it's like, well, it had an error, so I fixed the error by deleting the production database. Humans also have hallucinations. A lot of times, if you look at something and you misinterpret it, why did you misinterpret it?
Because you didn't have the right context. It's the same with an agent. Everyone says agentic. Is it agentic in that it's giving me a five-bullet summary of what the alert was? Is it agentic to the point where it has spun up 30 agents to go investigate this simultaneously? Those are vastly different worlds.
We live in a bubble here in Silicon Valley. I think it's very easy to think that everyone's like us, but they aren't. Am I spending more time making decisions, or gathering information? By the time I've triaged through 10,000 alerts, I was just gathering information, not doing much.
Ashish Rajan: If you work in security operations, you probably already know that AI is [00:01:00] evolving how level one as a whole works.
But what you may not have realized is that SIEM, as we have known it, to collect information and help you triage, has also evolved too. I had a great conversation with Jack Naglieri, who is the CEO of Panther. You may also know him from his open source work called StreamAlert, and all the detection engineering conversations he has been having on his podcast as well.
We spoke about how detection has evolved since the time he has been involved with the community; the gap called detection debt, which is the things that you feel you can discover and detect versus what you're actually capable of detecting; whether it's hosted on cloud or on a data lake platform and how that adds complexity; and how you can use AI to help.
We also spoke about how people are building detection programs in a world where a lot of things can be automated using AI agents, and what metrics you should be looking at for a typical investigation that you do as a detection engineer. All that, and a lot more, in this episode with Jack Naglieri from Panther.
As always, if you have been listening to or watching episodes of the podcast for [00:02:00] a while, I would really appreciate it if you take a quick second to hit the subscribe or follow button on whichever platform you listen to or watch podcast episodes on. We are on all podcast platforms, including Apple, Spotify, YouTube, and LinkedIn.
It does not cost you anything, but it helps a lot in spreading the work we do to other people as well. I also wanted to say thank you to everyone who came and said hello to us at RSA Conference in San Francisco. It meant a lot that you stopped us, said hello, and shared all the love that you have for the work that we do over here.
Thank you for all the love and support. I hope you enjoy this episode with Jack, and I'll talk to you soon. Peace.
Ashish Rajan: Hello and welcome to another episode of Cloud Security Podcast. I've got Jack with me. Thanks for coming on the show, man.
Jack Naglieri: Thanks for having me, man.
Ashish Rajan: I'm excited, one, because I've known you personally for a while. But for people who have not been around your circle and don't know who you are, can you share a bit about your background? Cybersecurity, all that as well.
Jack Naglieri: Yeah, so I've been working in the security operations space for about 15 years, and I don't know if I look like it. I haven't worked in the SOC for all 15 years, but I spent about half the time as a practitioner. I started at Yahoo, which was a big company [00:03:00] back in the day.
Yeah. In the two thousands, I think, as we all know. And then I moved to Airbnb, uh, which was a very beloved Silicon Valley growth company in the, in the 2010s.
Ashish Rajan: Yeah.
Jack Naglieri: And, um, after Airbnb, I went to become a founder. While I was at Airbnb, I built an open source project called StreamAlert.
And StreamAlert was an alternative to traditional SIEM. And, uh, it really solved a ton of big problems that we were having in cloud native tech companies.
Ashish Rajan: Mm-hmm.
Jack Naglieri: Right? Like, we wanted something that was much more scalable, that could be much more capable on the detection side. Yeah. And really cost-effective and scalable on the data side.
That led us to build our own SIEM, which is something I would recommend to nobody, because now I've done it probably about three times at this point. It's extremely, extremely difficult. But that journey led me to start Panther back in 2018, which was almost eight years ago.
Ashish Rajan: Yeah. Wow.
Jack Naglieri: And, um, Panther's whole mission, my mission, was to make security teams smarter and faster than attackers.
And we sort of carried the learnings from the Airbnb days into what we're working on now.
Ashish Rajan: Detection engineering as a [00:04:00] whole, I imagine, has changed quite a bit. And now with GenAI being like the accelerant, I'm curious, just for people to have an understanding, what detection engineering used to be and what it has become after AI.
I'm curious on your thoughts. 'cause you are so deep in the space.
Jack Naglieri: Yeah. I think about it in three phases. I think the first phase was Splunk.
Ashish Rajan: Oh yeah.
Jack Naglieri: Right. Yeah. It was like write a query and hope that you get a response within an hour.
Ashish Rajan: Yeah.
Jack Naglieri: And, um, you're bound to this Splunk DSL. And I remember I had a printout of the Splunk language cheat sheet on my desk as an analyst, as a, you know, 22-year-old analyst.
Yeah, yeah. I was just like, how do I use this thing?
Ashish Rajan: Yeah.
Jack Naglieri: So that was kind of like my introduction and actually even probably before that, where just the really, really old SIEMs from like 2008, 2009, where you just get an alert. You don't even get to decide what the alert is about.
Ashish Rajan: Yeah.
Jack Naglieri: And, or it's from an IDS.
Like, yeah, like Snort or something kind of like that. So I think of it as far back as there, like just having a way of expressing what you're looking for. And when [00:05:00] we built StreamAlert and then Panther, detection engineering was really around detection as code. Okay. So we built StreamAlert with this concept of Python-based detection as code.
Mm-hmm. Streaming analytics, real time detection. That was like the core thesis behind that product.
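The detection-as-code idea described here can be sketched as a plain Python function evaluated against each parsed log event. The function names and event fields below are illustrative only, loosely in the StreamAlert/Panther style, and not any vendor's exact API:

```python
# Detection-as-code sketch: a detection is a Python function over a parsed
# log event, returning True when it should fire. Field names here assume a
# CloudTrail-shaped event and are illustrative, not a vendor schema.

def rule(event: dict) -> bool:
    """Fire when a CloudTrail event disables CloudTrail logging."""
    return (
        event.get("eventSource") == "cloudtrail.amazonaws.com"
        and event.get("eventName") in ("StopLogging", "DeleteTrail")
    )

def title(event: dict) -> str:
    """Human-readable alert title, built from the event itself."""
    actor = event.get("userIdentity", {}).get("arn", "unknown actor")
    return f"CloudTrail logging disabled by {actor}"
```

Because the detection is just code, it can be unit-tested, reviewed, and versioned like any other software, which is the core of the detection-as-code workflow being described.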
Ashish Rajan: Yeah.
Jack Naglieri: And Panther has the same roots.
Ashish Rajan: Yeah.
Jack Naglieri: So when we started building detections with agents, you didn't need to understand how to write code anymore. You didn't need to really understand the structure of the logs.
Yeah. And where the logs exist, and how you test the alerts, and all of that kind of SDLC that comes with detection engineering. Yeah. So I'm really excited about the current state of the world with agents, how far we've come in the last few years. And I'm super excited about where things are headed. And I actually think that detection engineering as a practice is probably just gonna get kind of absorbed by agents, to be quite honest.
Ashish Rajan: Wow. And actually, as you say that, I'm thinking about all the mature organizations. 'cause detection engineering used to be a thing where only a certain size of organization would get to the point of doing detection engineering. Your [00:06:00] small to medium-sized businesses traditionally may not have been able to access that, which is kind of to your point: with it becoming an AI agent thing, they get access to that as well. But a lot of these mature organizations have carried, and we were talking about this the other day, what's called the detection debt: the gap between the detections you want to find versus what you can actually find, versus the entire lifecycle of keeping and maintaining that across the board.
Jack Naglieri: Mm-hmm.
Ashish Rajan: Is that still the case? And if it is, how real is it? Or has it become easier to kind of get rid of that debt, thanks to AI?
Jack Naglieri: For sure. I think that everything has become simpler with agents. Yeah. So if we sort of think about where we are today on the spectrum of AI adoption, I'm gonna explain it how I would explain our own evolution, because I think it's a good analogy for where things are headed, right?
Sure. So in the first version of SIEM, detection engineering, and detection as a whole, you had detection engineers creating rules. Mm-hmm. [00:07:00] Right. Thinking about what's important for our organization and getting that alignment from your greater team. Like, hey, what are we here to do?
Ashish Rajan: Yeah.
Jack Naglieri: What's the most important thing to the business for us to protect?
Ashish Rajan: Yeah.
Jack Naglieri: And then how do we work backwards from that?
Ashish Rajan: Yeah.
Jack Naglieri: Into what logs would help us create detections that would help us get alerted when things are actually going wrong.
Ashish Rajan: Yeah,
Jack Naglieri: right. So it's like that's kind of the mental math that I go through if I were to create a detection.
Ashish Rajan: Yeah,
Jack Naglieri: It's not just about the log, it's not just about the TTP or MITRE; it's about why is this important?
Ashish Rajan: Yeah.
Jack Naglieri: Right. And then I think over time things will shift, right? The things the company cares about will shift, the tech will shift, the systems creating the logs will change. And I think if you've done this for long enough, you'll know that if a vendor changes a log field, like in CloudTrail, you find out about it when all your stuff breaks.
So it's just representative of, like, there's always this overhead that comes with maintenance. And then it's that times the number of detections
Ashish Rajan: Yeah.
Jack Naglieri: To do this on. So agents have allowed us to delegate a lot of that work. Yeah. And agents are exceptional at tool calling. Right. And all tool calling is, is just a [00:08:00] way of translating a need or a job into an action.
Ashish Rajan: Yeah.
Jack Naglieri: So you're saying, Hey, I need to, I wanna tune this detection.
Ashish Rajan: Mm-hmm.
Jack Naglieri: So I can give my agent a skill that understands what that means.
Ashish Rajan: Yeah.
Jack Naglieri: When I say tune a detection, I'm talking about: go look at all the alerts from the last 30, 60, 90 days.
Ashish Rajan: Yeah.
Jack Naglieri: Look at what the outcome of those alerts was. Was it noise?
Was it helpful? And then look at the actual underlying logs for all of those 90 days of alerts for that particular detection. Look at the patterns, and then tell me what things I could do to that detection to reduce the volume of noise.
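The tuning loop being described here, pulling a window of alert outcomes, finding the noisy patterns, and weighing the false-negative risk, can be sketched as a small helper an agent might invoke as a tool. The data shape and the 80% threshold below are hypothetical, purely for illustration:

```python
# Sketch of a "tune this detection" skill: summarize recent alert outcomes
# for one detection and flag tuning candidates. Outcome labels, entity field,
# and the noise threshold are all illustrative assumptions.
from collections import Counter

def tune_report(alerts: list[dict], noise_threshold: float = 0.8) -> dict:
    """alerts: [{'outcome': 'noise' or 'useful', 'entity': str}, ...]"""
    outcomes = Counter(a["outcome"] for a in alerts)
    noisy = outcomes.get("noise", 0)
    ratio = noisy / len(alerts) if alerts else 0.0
    # Which entities generate the noise? Repeat offenders are tuning candidates.
    noisy_entities = Counter(a["entity"] for a in alerts if a["outcome"] == "noise")
    return {
        "noise_ratio": round(ratio, 2),
        "recommend_tuning": ratio >= noise_threshold,
        "top_noisy_entities": noisy_entities.most_common(3),
        # The risk side of the trade-off Jack mentions next: every exclusion
        # you add is a potential false negative.
        "risk_note": "excluding these entities may hide a true positive",
    }
```

In a real system, the agent would run this kind of aggregation over the alert store and then propose a concrete filter, leaving the risk assessment to a human reviewer.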
Ashish Rajan: Oh,
Jack Naglieri: and what are the risks associated with that tuning?
Yeah,
Jack Naglieri: because there's always risk.
Ashish Rajan: Yeah.
Jack Naglieri: Right? Because now you have false negative potential. 'cause you're like, okay, cool, well, I've been getting a lot of alerts, but I don't know, maybe that's fine, right? Yeah. Because it isn't just about detection anymore. It's also about response. Yes. And when you are in this agentic world, you're not just creating detections for a human anymore.
You're creating detections for an agent.
Ashish Rajan: That's
Jack Naglieri: right. And it's a totally different paradigm. Yeah. So this idea of alert fatigue has naturally [00:09:00] basically been eliminated.
Ashish Rajan: Yeah.
Jack Naglieri: Because now you're putting agents in front of the alerts.
Ashish Rajan: Yeah.
Jack Naglieri: And you don't have to worry about noise as much, if that makes sense.
So that whole thing is gonna change, I think.
Ashish Rajan: Yeah. Yeah.
Jack Naglieri: Or has changed for at least the teams that we serve.
Ashish Rajan: Yeah.
Jack Naglieri: Because we do provide the agents Yeah. On both sides, detection and alerting.
So yeah, it's just the idea of maintaining the detections, the idea of even responding to the alerts. Like, the baseline assumption in my world now is that you are using an agent for literally everything in the SOC,
Ashish Rajan: right?
Jack Naglieri: For searching logs, for creating detections, for looking at alerts. And that is going to help you get, like, massive workload improvements. It's gonna allow you to just do the job easier and better. And you still need someone guiding the agents.
Ashish Rajan: That's right.
Jack Naglieri: Which I think is, is still the need. You still have the need for security practitioners.
Ashish Rajan: Yeah.
Jack Naglieri: Right. But the day-to-day of their workflows is drastically different.
Ashish Rajan: I'm just, as you said that, thinking about all the mature organizations who have invested quite heavily in SIEM log aggregators, and most likely [00:10:00] have a detection engineering team as well. What does this mean for that world, where it's a mature organization that has invested heavily? And also, because this week we are at RSA as well, a lot of people are walking through thinking, to what you said, hey, how do I do AI security?
What would detection look like for that? I am concerned about the fact that, in my pursuit to increase adoption of AI, I have opened the door quite a bit. Is there no place for SIEM in that world where we are moving forward, in this 2.0 world, as I like to call tech after GenAI?
Or is the world a bit different?
Jack Naglieri: It's a good question. The world is definitely different.
Ashish Rajan: Yeah.
Jack Naglieri: You still need data.
Ashish Rajan: Okay.
Jack Naglieri: Right. Data is the most important lever that we have for agents.
Ashish Rajan: Yeah.
Jack Naglieri: Right. And the way that we always talk about it internally, and now externally with our new messaging on our website, is: agents need really three things. They need a ton of context. Context about what's important to the organization. Yeah. What is actually happening in all the systems? Who are the people and the assets being monitored?
Ashish Rajan: Yeah.
Jack Naglieri: And what are the patterns it's sort of seen over time? Right. So context is [00:11:00] extremely critical.
Ashish Rajan: Yeah.
Jack Naglieri: And then they need the ability to take action.
Ashish Rajan: Yeah.
Jack Naglieri: So the ability to do hunts on their own, the ability to close alerts out or escalate alerts. A lot of these sort of, like, traditional things that analysts were trained to do.
Ashish Rajan: Yeah.
Jack Naglieri: And then they need guidance. They need guidance on what's important.
They need guidance on really just ensuring that they're aligned really well to the needs and the desires of the organization's security team.
Ashish Rajan: Yeah.
Jack Naglieri: So if you're able to achieve that without a SIEM, I mean, you might be able to, but a SIEM actually has all those components. Right. So it's like
Ashish Rajan: Right.
Jack Naglieri: SIEM as a traditional thing was just, like, a way of aggregating and focusing your telemetry.
Ashish Rajan: Yeah.
Jack Naglieri: So you could do correlation.
Ashish Rajan: Yeah.
Jack Naglieri: Right. That was the whole premise of it. Yeah. And I always associate SIEM with manual, like human-led work. And that, I think, was the foundation of what we're now doing with agents, but agents really need that foundation to really thrive. Whether it's a SIEM or a data pipeline or a data lake, all of them have kind of [00:12:00] blended together in interesting ways.
Ashish Rajan: Mm.
Jack Naglieri: Right. Like we, for example, we were considered a cloud native SIEM, you know, as of, you know, a year ago.
Ashish Rajan: Yeah, yeah.
Jack Naglieri: And now I wouldn't call us that because we're more than that.
Ashish Rajan: Yeah.
Jack Naglieri: But if you think about the components, it's like data pipeline.
Ashish Rajan: Yep.
Jack Naglieri: A data lake.
Ashish Rajan: Yep.
Jack Naglieri: A detection correlation engine, and a way of doing searching, alerting.
So it's like
Ashish Rajan: all the ingredients of a SIEM.
Jack Naglieri: Yeah, it's basically all the ingredients, but when you give an agent all those tools, it does remarkable things. So you still, it doesn't remove the need to have those dependencies, if that makes sense.
Ashish Rajan: Yeah.
Jack Naglieri: But like, I don't know if the tool will be called a SIEM in two, three years.
Ashish Rajan: Right. Interesting. And do you find there is a level of automation coming to the SOC as well? 'cause I mean, obviously we started the conversation by saying detection engineering is for large organizations who had the budget. But there's a whole ecosystem out there: there are SOC people who are trying to level up to a detection engineer.
There are companies that have outsourced their SOC to MSSPs. There's this plethora of variety in the ways people detect [00:13:00] and respond.
Is this SOC automation happening at the same time, and what is that looking like? In terms of, like, the SOC Level 1 process that used to be: hey, Ashish made an alert, or Ashish made a detection, a SOC analyst gets the alert, and he or she goes down the rabbit hole finding out if it's a false positive or a true positive.
What does it look like in this current world, where my detection capabilities are in a quote-unquote agent?
Jack Naglieri: The way it looks is that you're not doing any of the manual triage anymore. I would assume that a hundred percent of alerts should be triaged by agents going forward.
Ashish Rajan: Oh, because it has the context.
It should be able to at least give you enough information that the recon is done across the endpoints.
Jack Naglieri: Yeah. We're seeing, we're seeing this already, by the way.
Ashish Rajan: So what would be an example, so at least people have some more context? Is that on how the
Jack Naglieri: Process? Or
Ashish Rajan: No, as in, anything that you can think of. Like, hey, this is the CloudTrail example that we were saying earlier. I would imagine I get an alert, and I may not even know what CloudTrail is, 'cause I'm SOC Level 1.
And I go into AWS 'cause someone gave me the credentials, and I'm trying to figure out what is this [00:14:00] CloudTrail, what is EC2, what is any of that. So now you're saying AI agents can do all of that.
They can do an API call to an AWS environment, identify CloudTrail, identify resources, identify, like, you know, the five-step process to identify that first thing. They can actually do it really well.
Jack Naglieri: Well, I, I'm just referring to the old triage process.
Ashish Rajan: Oh,
Jack Naglieri: right. So let's say, let's assume that you have a detection in CloudTrail or Okta or something like that.
Yeah, yeah. You get an alert that is relevant for something that you care about.
Ashish Rajan: Yeah.
Jack Naglieri: And the agents should be able to read all the detection context to know that, understand the intention of the alert.
Ashish Rajan: Yeah.
Jack Naglieri: They should be able to read what actually caused the alert.
Ashish Rajan: Mm-hmm.
Jack Naglieri: They should be able to go pull enrichment and surrounding details about that asset or identity.
So the most common question that we always ask as analysts is: what happened before this? Right. It's like a crime scene.
Ashish Rajan: Yeah. Yeah.
Jack Naglieri: It's very investigative, right? Yeah, yeah, yeah. Like, you are given clues and you need to fill in the blanks. Yes. That's the
Ashish Rajan: job of
Jack Naglieri: of the SOC. Yes. I did it for a very long time.
I also did forensics for a very long time, [00:15:00] so it's highly similar, and agents are extremely capable of creating those queries, understanding where to look, and taking a huge amount of context and then distilling it down into something that is very approachable for someone who is a cyber analyst.
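The triage flow just described (read the detection's intent, read the triggering event, pull history for the actor, distill a summary) can be sketched as a plain function. A real agent would perform each step with an LLM and live enrichment sources; every name and the verdict logic below are hypothetical:

```python
# Sketch of an agent triage step. `enrich` stands in for whatever tool the
# agent calls to answer "what happened before this?" for an actor; the
# alert/event shapes are illustrative assumptions, not a real schema.

def triage(alert: dict, enrich) -> dict:
    """Collect the context an analyst would gather, then distill a summary."""
    # 1. Understand the intention of the detection that fired.
    intent = alert["detection"]["description"]
    # 2. Look at what actually caused the alert.
    event = alert["event"]
    # 3. Pull enrichment and surrounding details about the actor/identity.
    history = enrich(event["actor"])
    # 4. Distill it into something approachable, with an escalation verdict.
    suspicious = event["action"] not in history["typical_actions"]
    return {
        "intent": intent,
        "actor": event["actor"],
        "escalate": suspicious,
        "summary": f"{event['actor']} did {event['action']}; "
                   f"{'unusual' if suspicious else 'consistent'} given their history.",
    }
```

The point of the sketch is the shape of the loop: gather intent, evidence, and history before judging, exactly the sequence a human analyst was trained to follow.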
Ashish Rajan: Mm-hmm.
Jack Naglieri: And, um, that's, I think, the reality right now. And of course there's a lot that goes into that, again, on the context side.
Ashish Rajan: Yeah.
Jack Naglieri: Right. Like, because you need to understand: is this actually an important occurrence for you, first of all? And then you need to understand what's the criticality. Like, is this person the CEO?
Yeah. Are they someone in finance who's doing finance things? Mm-hmm. Or someone in finance who looks like they're doing engineering things? Because that's really bad.
Ashish Rajan: Yeah, yeah.
Jack Naglieri: Right. Like, oh no. Like that's horrible.
Ashish Rajan: Yeah.
Jack Naglieri: But every piece of context could drastically change the outcome. And that's always how security has been.
Ashish Rajan: Yeah.
Jack Naglieri: If you give it to an agent and you give it the right guidance, then it should be able to perform just like an analyst. And that's been our experience so far.
Ashish Rajan: Would you say, 'cause obviously the entire industry complains about the fact that there's a lot of hallucinations still there, in terms [00:16:00] of, and obviously we're talking about a multi-agent kind of ecosystem that you guys are creating.
Everyone's creating.
Jack Naglieri: Yeah.
Ashish Rajan: In all the work that you guys have been doing, have you been able to solve that? 'cause detection is one of those ones where you want a consistent response. You don't want it to be like, hey, this is a false positive, and two seconds later, actually, maybe it's a false negative.
So how do you find ways to at least get more accuracy on that? And are we much closer? Because there are definitely people on the practitioner side who are also wondering that: I've heard this hallucination thing is real.
Jack Naglieri: Yeah.
Ashish Rajan: If I build an AI agent, what does that mean for the deterministic things I actually want? I want it to be: yes, this is a false positive, because the AI agent did the investigation. What are your thoughts there, in terms of increasing accuracy?
Jack Naglieri: The first thing that came to my head is, like, well, humans also have hallucinations. That happens, like, at 3:00 AM, as we can all see.
Yeah. You look at something and you misinterpret it.
Ashish Rajan: Yeah.
Jack Naglieri: Why did you misinterpret it? Because you didn't have the right context or the right perspective. Yeah.
Ashish Rajan: Yeah.
Jack Naglieri: And I think it's the same with an agent. If an agent is going off the [00:17:00] rails, I think it's kind of a skill issue in a lot of ways. Uh, the agents are very capable of getting to the right conclusion.
Yeah. If they have the right level of context and ability to see all the perspectives,
Ashish Rajan: that's fine.
Jack Naglieri: So I think it just comes down to that.
Ashish Rajan: Yeah.
Jack Naglieri: I think a fully off-the-rails hallucination is really not a thing of this year or even last year.
Ashish Rajan: Yeah.
Jack Naglieri: I think when frontier models were more primitive, where they were trained on a much smaller corpus of data and their reasoning capabilities weren't quite there, like the ability to really do chain of thought and chains of reasoning and all these things, I think we did see more of the traditional, like, wait, why is it talking about identity when it's an Okta or an AWS alert? Right? Like, you would see those things more often, where it just felt like, yes, this is a hallucination, just totally off the rails.
Ashish Rajan: Yeah.
Jack Naglieri: That's, it's much more rare these days to see it. Of course, the models can have bugs.
Ashish Rajan: Yeah. Yeah.
Jack Naglieri: But you report it to the lab, right? Or you report it to whatever frontier provider that you're using, and they typically fix it or roll it back. So I think [00:18:00] hallucinations in the traditional sense,
Yeah, like in the last year, are not really an issue anymore. Yeah. You just have to make sure that the agents have the right depth of context. And the other thing I would say as well is you wanna be able to incorporate the past as well.
Ashish Rajan: Mm-hmm.
Jack Naglieri: So you want to be able to give the agent not only what happened at this point in time, but
historically what it has seen. And this is something that we've been calling, effectively, closing the loop.
Ashish Rajan: Yeah.
Jack Naglieri: Right. Because it isn't really about like this one stateless thing, it's about this is one event over the course of months or years.
Ashish Rajan: Yeah.
Jack Naglieri: Right. Like, we're continuously monitoring things, so we have a history of, like, where you typically log in from.
Ashish Rajan: Mm.
Jack Naglieri: Right. So I should be able to get that context in the context of this alert. And then I should be able to say, like, oh, well, you know, Jack lives in San Francisco.
Ashish Rajan: Yeah.
Jack Naglieri: He logs in 90% of the time from here.
Ashish Rajan: Yeah.
Jack Naglieri: And 5% of the time he goes to Northern Virginia. Yeah. 'cause that's where he is from.
Ashish Rajan: Yeah. Yeah.
Jack Naglieri: Right. So it's like that isn't anomalous anymore.
Ashish Rajan: Yeah.
Jack Naglieri: And the agent should know that because it has access to the past and baseline. Right?
Ashish Rajan: Yeah.
Jack Naglieri: Yeah. So I think it really comes down to context. You have to have context. If the [00:19:00] agents know your environment and know your assets and identities really well,
Ashish Rajan: yeah,
Jack Naglieri: they can make these same assumptions.
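The login-location baseline in Jack's example can be sketched as a tiny frequency check against history. The 5% cutoff below is an arbitrary illustrative threshold, not a recommendation:

```python
# Baseline sketch: flag a login location as anomalous when it appears in
# less than `min_freq` of the historical logins. Data and threshold are
# illustrative only; real baselining would be per-identity and time-aware.
from collections import Counter

def is_anomalous_login(history: list[str], location: str, min_freq: float = 0.05) -> bool:
    """history: past login locations for this identity."""
    counts = Counter(history)
    freq = counts.get(location, 0) / len(history) if history else 0.0
    return freq < min_freq

# Mirroring the example: ~90% San Francisco, ~5% Northern Virginia.
history = ["San Francisco"] * 90 + ["Northern Virginia"] * 5 + ["New York"] * 5
```

With access to that baseline, the agent can reason the same way Jack does: Northern Virginia at 5% of logins isn't anomalous anymore, while a never-seen location is.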
Ashish Rajan: Clearly you've been working on this for some time and are quite knowledgeable in at least the building process of it as well. A lot of people who are probably equally invested in the space may just go, hey, why can't I build this too? You mentioned you have an MCP server. Yeah. And, uh, a combination of Claude Code and MCP.
Why should that not be good enough for me to just build this on my own? Like, what's the thinking there? Obviously you guys have built this.
Jack Naglieri: Yeah.
Ashish Rajan: So I'm just curious as to where you see this. More like, hey, anyone can do this? Versus, actually, these are the challenges with this?
Jack Naglieri: Yeah, I think it's a great question.
I would answer it in a few different ways, but the first thing I'll say is that there is a segment of our customers that are really talented builders, and I have a ton of respect for them, because that was me, you know, 10 years ago, right? Like, I was a security engineer, I was building software. I really enjoyed it.
It was very fun.
Ashish Rajan: Yeah.
Jack Naglieri: And I think those are organizations that have heavily adopted AI.
Ashish Rajan: Yeah. [00:20:00]
Jack Naglieri: They have adopted coding agents like Codex or Claude Code. Yeah. They are builders, and they are people who have opinions about how security should run. And they are creating these multi-agent systems on top of multiple platforms.
Right? Yeah. Ours included. But there's so much more outside of the SOC and security, right? There's product security. There's identity, right? There's now like AI agent security.
Ashish Rajan: Yes.
Jack Naglieri: Right. So for all of these things, there's a lot of benefit for that sliver of our customers.
Ashish Rajan: Yeah.
Jack Naglieri: But I would say it's the same as why I would never recommend someone build their own SIEM, right?
Because it's production infrastructure. Yes. That is not in your core competency as a security professional, right? Like, security professionals are not developers, and it's not an accurate parallel to say the creator of Claude Code doesn't write anything by hand. It's like, yeah, but he was also an elite software developer for, like, 20 years.
Right? Like, he was one of the best Instagram engineers. Yeah. It's like, those people don't work in the SOC. Yeah. Yeah. That's just not the archetype, right? Like, people in the SOC [00:21:00] are really good at understanding attackers and knowing, like, the latest LLM vulnerability that came out, right?
Yeah, yeah.
Ashish Rajan: Like, yeah.
Jack Naglieri: But they're not the ones asking, what type of infrastructure do I need to build to process a petabyte of logs per month, or per day? It's just outside the core competency. And I'll also use another analogy, which is, like, yes, theoretically software has become commoditized, a hundred percent.
Ashish Rajan: Yeah.
Jack Naglieri: However, there are still pieces of software that we will never try to build, right? Because by that logic, it's like, well, why do you need to buy any software? Because I'm just gonna go build it. It's like, but then you waste your time building and testing and deploying software that's outside your core competency.
Yeah. So anyway, it's just a way of saying like, my preference is to support the builders.
Ashish Rajan: Yes.
Jack Naglieri: Right. Like I, that's why we have an MCP server. Yeah. And that's why we really continue to invest in it.
Ashish Rajan: Yeah.
Jack Naglieri: And we released it with Block.
Ashish Rajan: Yeah.
Jack Naglieri: So when we were building it at Panther, I went to all the customers and I was like, who cares about MCP?
Like who cares about it? Who wants it? And like, what are you doing today?
Ashish Rajan: Yeah.
Jack Naglieri: And Block had actually created their own MCP server for Panther.
Ashish Rajan: Oh,
Jack Naglieri: right. [00:22:00] And then I talked to them, and I'm like, well, what if we hosted it? What if we kind of joined forces? We took your learnings and our learnings, put them together, and made it open source.
Ashish Rajan: Yeah, yeah,
Jack Naglieri: yeah. Right. And that's what we did last year.
Ashish Rajan: Yeah.
Jack Naglieri: And like that was that project and it was awesome.
Ashish Rajan: Yeah.
Jack Naglieri: And a ton of our customers have adopted it through the MCP.
Ashish Rajan: Yeah.
Jack Naglieri: But again, those are the ones who have invested in their own agents or had some internal sort of agent infrastructure.
Ashish Rajan: Yeah.
Jack Naglieri: But the majority of security teams
don't have that, and the majority of enterprises don't have that. We live in a bubble here in Silicon Valley.
Ashish Rajan: Yeah.
Jack Naglieri: I think it's very easy to think that everyone's like us, but they aren't. Yeah. And that's okay. Yeah. So we build agents on top of this infrastructure and we deliver it to them so they can just consume it.
And build content for the agents.
Ashish Rajan: Yeah.
Jack Naglieri: That, I think, is where the interesting security work happens. Not in trying to recreate a triage agent, because you're not gonna do it better than me. Right? Yeah. Yeah. 'Cause I see hundreds of different customers doing it.
Ashish Rajan: Yeah.
Jack Naglieri: And I can get the feedback from them individually, incorporate the feedback for everybody.
Ashish Rajan: Yeah.
Jack Naglieri: And that's why you have products that exist. This is the [00:23:00] purpose that I serve in being a founder. Yeah,
Ashish Rajan: yeah,
Jack Naglieri: yeah. My purpose is to learn and build and provide the best product possible. Before, that was just software, and now it's through agents. Yeah. And agents are effectively the new operating layer.
Yeah. So I'm gonna continue to do that, and we're gonna ship services that are gonna make the agents better, and hopefully also help the builders build better with agents. So those are my spicy opinions on build versus buy.
Ashish Rajan: I'm with you on this, also because of the analogy I use: if I am an engineer at an insurance company.
To your point about core competency, I don't think a publicly listed company, or even a private one, would want one of their competencies to be, we have a cybersecurity product.
Jack Naglieri: Yeah. It makes no sense.
Ashish Rajan: Yeah, it does not make sense. I would be really surprised if anyone in the executive team or the management team goes, I think that's a great idea.
We should make our own security products. It doesn't make sense. But I appreciate that there would still be a core competency of understanding detection, the context of what's relevant for your [00:24:00] organization.
Jack Naglieri: Yeah.
Ashish Rajan: It doesn't have to be SQL injection; it could be from an AI perspective, or whatever else comes next after this. One thing that
a lot of CISOs still struggle with is building that trust capability with the LLM ecosystem in general, whether it's AI agents or LLM models. And perhaps that's because once the data goes in, it's like a black box and you have no idea what's happening. What ways have you seen that people are able to raise the trust level for what's actually happening behind that black box?
Jack Naglieri: That's a good question. I think there's two things that immediately come to mind. The first one is not so much about understanding how inference happens and how the agents get to a conclusion based on your input; it's really around how you can control what the agents do.
Ashish Rajan: Oh, okay.
Jack Naglieri: That's the first piece of this because agents are much more than just ask a question, get an answer.
Ashish Rajan: Yeah.
Jack Naglieri: It's now: do a job for me, and do it within the bounds that I approve of. So for us, the way that we've defined that is we have policies that say these specific types of agents can and cannot do certain things.
Ashish Rajan: [00:25:00] Mm-hmm.
Jack Naglieri: So alert triage is a great example of that. Do you want to give the agents the ability to close alerts? Do you want to give them the ability to ping someone on Slack? Do you want to give them the ability to revoke a session?
Ashish Rajan: Mm-hmm.
Jack Naglieri: If you can entirely control the world that they're in,
yeah, that is one way of gaining trust. And it's more of a governance-based answer, so that's fine.
Ashish Rajan: Of course. Yeah. Yeah.
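The governance idea here, policies stating which actions a given agent type can and cannot take, can be sketched roughly as follows. This is an illustrative sketch, not Panther's actual configuration or API; the agent types and action names are assumptions:

```python
# Hypothetical policy table: agent type -> actions it is explicitly allowed to take.
# These names are illustrative, not Panther's real configuration.
AGENT_POLICIES = {
    "alert_triage":  {"add_comment", "ping_slack", "close_alert"},
    "investigation": {"add_comment", "ping_slack"},   # read-mostly: cannot close or respond
    "response":      {"ping_slack", "revoke_session"},
}

def is_permitted(agent_type: str, action: str) -> bool:
    """An action is allowed only if the policy explicitly grants it (default deny)."""
    return action in AGENT_POLICIES.get(agent_type, set())

def dispatch(agent_type: str, action: str) -> str:
    """Gate every agent action through the policy before executing it."""
    if not is_permitted(agent_type, action):
        raise PermissionError(f"{agent_type} agent is not allowed to {action}")
    return f"executed {action}"
```

The key design choice is default deny: an agent can only do what its policy explicitly grants, which is the "entirely control the world that they're in" point.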
Jack Naglieri: I think the other one is exposing thinking tokens is a really important one.
Ashish Rajan: Okay.
Jack Naglieri: So now, with the reasoning models, it will tell you its chain of thought. It'll say, okay, I'm looking at this alert, I'm noticing these things.
I'm gonna go make these tool calls, I'm gonna get more context.
Ashish Rajan: Yep.
Jack Naglieri: And even just the act of exposing that is a really great way to build trust, because then it allows the users, the security engineers and security analysts, to debug how it got to that answer. So I really think it's those two core primitives, starting with governing the agents with the right level of permission.
Ashish Rajan: Yeah.
Jack Naglieri: There's maybe even an extension of that one, where you log the actions.
Ashish Rajan: Yeah.
Jack Naglieri: Okay, and audit it. You can typically see it in a single agent run: all the tool calls it did. And then being able to see the reasoning is a really important [00:26:00] aspect of trust. So if you do those things, then you have pretty high clarity of how that answer was achieved.
And then you can say, hey, actually, you know what, it did miss this one piece of context because it wrote a slightly wrong query. Maybe it looked in the wrong time range, which is something that is actually pretty hard to build, as someone who just spent, you know, two years building this thing out.
It's like pivoting: teaching the agent how to pivot correctly isn't always obvious to an off-the-shelf frontier model. But if you expose the tool call and you expose the work it did,
Ashish Rajan: Yeah.
Jack Naglieri: A seasoned security engineer can be like, actually, it looked too far back. It looked a day ago; I needed it to look one hour ago, and that's why it missed something or gave me the wrong conclusion.
So it all goes back to context at the end of the day. Yeah. But for trust, it's governance around what the agents can and can't do, and it's also exposing the tool calls and the reasoning.
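The second trust primitive, exposing the chain of thought and the tool calls, amounts to keeping an auditable trace of every agent run. Here is a minimal sketch of that idea; the class and method names are hypothetical, not any real agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    """Auditable trace of one agent run: every thought and tool call it made."""
    alert_id: str
    steps: list = field(default_factory=list)

    def think(self, thought: str) -> None:
        self.steps.append(("thought", thought))

    def call_tool(self, tool: str, args: dict, result: str) -> None:
        self.steps.append(("tool_call", f"{tool}({args}) -> {result}"))

    def transcript(self) -> str:
        # What a security engineer would read to debug how the agent got its
        # answer, e.g. to spot that it queried the wrong time range.
        return "\n".join(f"[{kind}] {detail}" for kind, detail in self.steps)

run = AgentRun(alert_id="alert-123")
run.think("Login from a new location; pulling recent sessions for this user")
run.call_tool("query_logs", {"user": "jack", "window": "24h"}, "1 prior login")
print(run.transcript())
```

In the wrong-time-range example Jack gives, the analyst would see `"window": "24h"` in the trace and know to correct the agent's pivot to one hour.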
Ashish Rajan: There is a split between, say, gathering more information versus decision making. Do you see that changing?
'Cause going back to where we started the conversation, when I was a detection engineer in a pre-gen-AI era, I had a [00:27:00] hypothesis. I went down the path of collecting the logs, understanding what data I'm getting, building detections for it, and then hopefully maintaining them till kingdom come, kind of a thing.
But now we are in this world where we are almost questioning: am I spending more time making decisions or gathering information? Because obviously it seems like we were more on the gathering information piece before and less on the decision making. By the time I've triaged through 10,000 alerts, all of them turned out to be false positives because I was just gathering information, not doing much.
So do you see that paradigm shift happening as well?
Jack Naglieri: A hundred percent. A hundred percent, right. Like everything has gone to decision making, in my opinion.
Ashish Rajan: And,
Jack Naglieri: And you still need to drive the work that will lead to the decision. Yeah. So an example of that is, I wanna detect X, Y, Z. I wanna detect, again, every time Jack logs in not from California.
Ashish Rajan: Yeah, yeah. Let's just say,
Jack Naglieri: Yep. This is how I think I would want to handle that alert. Yeah. So you encode that. Then when you get the alert, it's like, did it handle it in the right ways, or do you wanna take a [00:28:00] next step? It moves so far away from the tactical, what's the syntax that I have to remember,
Ashish Rajan: Mm-hmm.
Jack Naglieri: again, for this, or where does this log live and what does it look like? That's all in the weeds. It goes much higher level: is this detection working as I intended it to work?
Ashish Rajan: Yeah.
Jack Naglieri: And are we responding now in the ways that we think are efficient?
Ashish Rajan: Representative of how we want to protect our threat models.
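Panther's detection-as-code model is Python functions over parsed log events, so the "Jack logs in not from California" example above might look roughly like this. The event field names ("event_type", "actor", "region") are assumptions for illustration, since real schemas vary by log source:

```python
# Hypothetical detection-as-code rule for the example in the conversation:
# alert whenever Jack logs in from outside California.
# Field names are assumed, not a real log schema.

def rule(event: dict) -> bool:
    """Return True (fire an alert) when Jack authenticates from outside California."""
    return (
        event.get("event_type") == "login"
        and event.get("actor") == "jack"
        and event.get("region") != "California"
    )

def title(event: dict) -> str:
    """Alert title the triage agent (or analyst) will see."""
    return f"Jack logged in from {event.get('region', 'an unknown region')}"
```

The paradigm shift being described is that "this is how I would want to handle that alert" gets encoded alongside the rule as guidance for a triage agent, rather than as a runbook for a human.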
Do you find that the AI-driven workflows, 'cause we are almost hinting towards the fact that the new world is primarily AI driven, mean, to what you said, a lot more decision making? That means there's a lot of autonomous actions happening, agentic actions happening. Is that creating a risk
we're probably not accounting for, and how should people approach it? Because I don't even know what role would take that responsibility.
Jack Naglieri: Yeah. I think you're orchestrating at that point,
Ashish Rajan: right? Yeah.
Jack Naglieri: Kind of a puppet master.
Ashish Rajan: Yeah. Yeah, yeah.
Jack Naglieri: In a lot of ways. And I think the risks, it kind of goes back to what we were saying before.
Do the agents have the right level of context and guidance, governance, right? Because if you give an agent too much power, it might
Ashish Rajan: Yeah.
Jack Naglieri: delete your production database on [00:29:00] accident. Yeah. Because it's like, well, it had an error. Yeah. I fixed the error by deleting the production database, so I won. So I think risks like that are always gonna be there, for now, until there's a well-accepted governance framework.
Right? Yeah. And the agents will get better at tool calling, and, you know, it depends on what frontier model you go with. So us personally, we're big fans of Anthropic. Okay. We've been working with them for a while, well before the craze of Claude Code. Right when they were actually just starting Claude Code, one of the engineers was like, we're all using Claude Code internally.
I'm like, I've literally never heard of this. I don't know what it is.
Ashish Rajan: Yeah.
Jack Naglieri: And then I got into it and I was like, wow, this is incredible. Yeah. And then obviously in the last three to six months it has exploded, right? Yeah. In their revenue.
Ashish Rajan: It's like the entire internet is on Claude Code now.
And I bet, as a software engineer, you would've been excited as well. At least, someone explained this as the era where we are going back to the nineties, where it's cool to be technical again, 'cause Claude Code kind of allows you to be, even though you are still explaining it in natural language.
But I don't have to be a Python nerd to do the work. I could have just been doing JavaScript before
Jack Naglieri: True.
Ashish Rajan: And just go down any rabbit hole that I want. Which kinda leads me to another question where in the same vein of, I don't have to think what language that I'm writing my code in.
Jack Naglieri: Yeah.
Ashish Rajan: A lot of people walking the floors would obviously be bombarded by agentic everything.
I'm sure there'd be agentic automation, detection, observation as well. As a technical person, how do you recommend people separate the signal from the noise, for lack of a better word? How do I even know which agent is good versus which agentic product is just a wrapper? Because I imagine the technical side of the audience, technical people like yourself,
are all going, okay, I get it. I can't build it myself unless I'm planning to just become a cybersecurity vendor myself.
Jack Naglieri: Yeah.
Ashish Rajan: Uh, what are one or two things you've found where you go, that's the red flag for me?
Something you can pass on to other leaders as well, so that they can look out for it, I guess.
Jack Naglieri: It's hard because it's hard to get a demo and know that this is the right solution for you.
Ashish Rajan: Oh, I think
Jack Naglieri: this is why we have POC processes,
Ashish Rajan: right? Yeah, yeah, yeah, yeah, yeah.
Jack Naglieri: But the thing that I would look at in [00:31:00] particular is a demo.
Ashish Rajan: Okay.
Jack Naglieri: I would always ask for a demo: just take me through what it's like to do this workflow.
Ashish Rajan: Yeah. In this product. Yeah.
Jack Naglieri: And then I can sort of piece together, it's like, well, what is it actually taking into consideration? What does it visually look like? Yeah. Whose work is the agent doing?
Ashish Rajan: Yeah.
Jack Naglieri: Because everyone, again, to your point, says agentic, agentic. But agentic has a lot of levels to it.
Ashish Rajan: Yeah.
Jack Naglieri: Is it agentic in that it's giving me a five-bullet summary of what the alert was? Or is it agentic to the point where it has spun up 30 agents to go investigate this simultaneously? Those are vastly different worlds.
Ashish Rajan: Yeah.
Jack Naglieri: Yeah. And they have vastly different cost structures as well.
Ashish Rajan: Yeah.
Jack Naglieri: Right. So I think ultimately, just generally, my opinion is: if you wanna find the right vendor, it's really a process of who meshes well with the technology that we have.
Ashish Rajan: Yeah.
Jack Naglieri: Are we in Amazon? Are we a Google shop? Et cetera. Do we have our data in Snowflake, in Databricks, or nowhere?
Right? There's a lot of these questions that get answered, and then you sort of narrow it down to, well, who aligns with me on [00:32:00] the process and technology side?
Ashish Rajan: Yeah.
Jack Naglieri: And even the people side. In cyber, there's a ton of teams that just outsource to MDRs and MSPs. Yeah. And that's totally fine.
Ashish Rajan: Yeah, yeah, yeah.
Jack Naglieri: Right. Because you don't always need the muscle of doing it internally.
Ashish Rajan: Yeah.
Jack Naglieri: So that also is a way of self-selecting.
Ashish Rajan: Yes.
Jack Naglieri: Right. But then, when you're looking at products to run the program yourself: I'm a product person, I'm a pragmatist. I spent many years of my life building these tools.
I'm really good at spotting like what a good solution is because I've built it numerous times.
Ashish Rajan: Yeah.
Jack Naglieri: So it's always: just take me through a workflow. Take me through what it looks like to triage an alert, take me through what it takes to write a detection, and I can usually spot limitations in the workflow and kind of get a
sense of what it would be like to live with this tool day to day.
Ashish Rajan: One thing I also want to double-click on is that some people may be hesitant to share what model they're using in the background. You are very transparent about the fact that you've been supporting Anthropic from the beginning. Yeah. Before Claude Code was a thing.
Jack Naglieri: Yeah.
Ashish Rajan: Whereas if you hear people just say, [00:33:00] oh, sorry, it's our secret, we can't share the model we're using. I don't know if it's the right red flag, but you almost feel like, if someone's trying to build trust in something that's going to be quote-unquote agentic, you probably wanna know what's running in the background.
Because
Jack Naglieri: Yeah.
Ashish Rajan: You are technically becoming part of my supply chain. True. At that point in time.
Jack Naglieri: It's true.
Ashish Rajan: Yeah.
Jack Naglieri: And also, you reminded me of an opinion that I think exists a lot in the space right now, which is, well, we all have access to the same frontier model, so no one has differentiation or a moat.
And I'm like, that's like saying, oh, we all have access to AWS so no one has a moat. It's like, all right. Like clearly we figured out how to build like unique products.
Ashish Rajan: Yeah, yeah,
Jack Naglieri: yeah. I think the cloud provider has just become more abstracted. Yeah. Agents are the unit of work now, and frontier models are allowing us to build new types of software.
Ashish Rajan: Yeah.
Jack Naglieri: But I do think that there is a moat still. Yeah. It starts with data.
Ashish Rajan: Yeah. Okay.
Jack Naglieri: Right. You have to have great data and you have to prepare it in a way so the agents understand it. So it's really around the whole ecosystem of what supports the agent, the [00:34:00] data, the tooling layer, right? Like what actions can it take?
How is it governed?
Ashish Rajan: Yeah.
Jack Naglieri: What is their memory, right? Like there's so much that goes into it, just like there's so much that goes into building web apps.
Ashish Rajan: Yeah.
Jack Naglieri: It's the same thing, right? It, it's like, what's the backend, like what's the caching, right? Like how does it scale? It's all the same types of things now.
Just applied in a new way.
Ashish Rajan: What do you think is the future for detection engineering then? Because I imagine people who are uplifting detection programs today probably aren't planning for 2027, but they might be thinking about 2026 at least. If I'm uplifting my detection program, what am I looking for in terms of one or two things I should consider making part of my detection engineering? And maybe I'm from that 1.0 world,
Jack Naglieri: Mm-hmm.
Ashish Rajan: Trying to transition into this 2.0 world.
Jack Naglieri: Yeah.
Ashish Rajan: What are some of the capabilities, and I guess because you mentioned that a lot of the triage work is being automated. What is my team looking like now as well?
Jack Naglieri: Mm-hmm. Yeah, it's a great question and my mind goes in many different places. But the, the way I would start to answer this is like, talking a little bit about our journey for just a second.
[00:35:00] Mm-hmm. So our journey was, we wanted to solve technology-scale problems. Yeah. We wanted security teams to get all their log data into the SIEM, because historically that was either extremely cost-ineffective or just technically infeasible. Right. A lot of SIEMs, especially when I was a practitioner, would just legitimately fall over at our scale.
Yeah. Because I was at Yahoo, remember, right?
Ashish Rajan: Yeah, yeah,
Jack Naglieri: yeah, yeah. Yahoo had a, you know, billion users.
Ashish Rajan: Yeah, yeah,
Jack Naglieri: yeah. So they were producing an astronomical amount of data that no security product could handle. Yeah, and that's kind of where I came up.
Ashish Rajan: Yeah,
Jack Naglieri: yeah. As a practitioner. So we wanted to solve the, the technology scale problem.
Like, okay, great. So I think after that, which is where we are now, we're solving a security expertise problem, right? How do we take the expertise that's in the minds of security people and delegate it out to agents to scale the work? And I think that's a vastly different world that we're in.
So the reason I say that is because I think detection is changing drastically.
And it's changing because we can effectively use agents to delegate threat hunting. Right. And I think that's gonna [00:36:00] become a new form of detection in a lot of ways. We're gonna rely much more on agents with the security mindset.
Knowing what to look for.
Ashish Rajan: Yeah. Yeah.
Jack Naglieri: And I think we'll be in a world where we start by augmenting the current sort of linearity of detection, where it's like: log, alert, agent runs, conclusion. It's a very linear process.
Ashish Rajan: Yeah.
Jack Naglieri: I think what we're gonna start seeing is that we may do that and then we may also at the same time be looking at groups of signals with agents.
Ashish Rajan: Yeah.
Jack Naglieri: And then trying to elevate new things. And that actually might allow us to identify attacks that we don't have detections for. And that will only work if we give agents the expertise that we have as security practitioners in trying to identify things that are novel. Yeah. Right. And I think that's gonna be the new world of detection engineering.
It's really drifting into prompt engineering.
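The "groups of signals" idea above can be sketched as a simple correlation pass an agent might run over low-severity signals before any single one crosses an alert threshold. This is purely an illustrative sketch; the signal shapes, scores, and threshold are assumptions:

```python
from collections import defaultdict

# Hypothetical low-severity signals, none alert-worthy on its own.
signals = [
    {"entity": "jack", "kind": "login_new_region", "score": 30},
    {"entity": "jack", "kind": "mfa_push_denied",  "score": 25},
    {"entity": "jack", "kind": "token_reuse",      "score": 35},
    {"entity": "amy",  "kind": "login_new_region", "score": 30},
]

ESCALATE_AT = 80  # assumed combined-score threshold for handing a group to an agent

def group_and_escalate(signals, threshold=ESCALATE_AT):
    """Group signals by entity; escalate entities whose combined score crosses the threshold."""
    totals = defaultdict(int)
    for s in signals:
        totals[s["entity"]] += s["score"]
    return [entity for entity, total in totals.items() if total >= threshold]

print(group_and_escalate(signals))  # -> ['jack']
```

The interesting part is what happens after escalation: instead of a static correlation rule, the grouped signals go to an agent that carries the practitioner's encoded expertise about what novel attack chains look like.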
Ashish Rajan: Yeah.
Jack Naglieri: Right. Yeah. And I think that's the shift that we're all going through. So if you're a security person, you should become an expert at prompt engineering.
Right? And you should really be an expert at understanding how the models really work.
Ashish Rajan: Yeah.
Jack Naglieri: Right. Like, what a token is, a really [00:37:00] primitive understanding of how the models work. Because I do think that's where the world is headed when it comes to detection.
Ashish Rajan: That's a good note to end on. Those were the technical questions I had; next are the fun questions. But with this we get to the, uh, snack war round.
As I said, Australian and British snacks. Our favourites have been kangaroo and crocodile, so you can pick your poison if you want, although these are the sweeter ones, by the way. These are caramel, British.
Jack Naglieri: Uh-huh.
Ashish Rajan: Dodges are British. Tim Tams are Australian, shapes are Australian; these are Vegemite shapes. Twisties are like Cheetos, but these are like lollies.
I'm just calling it out. You don't wanna be the only one that kind of does not go for crocodile and kangaroo. But
Jack Naglieri: Yeah, I think I might go for kangaroo. I don't know what impression I'm giving by making this choice, but I'll try it. What's, uh,
Ashish Rajan: I'm curious as to what it tastes like to you, because you've never had kangaroo before, right?
Jack Naglieri: Never.
Ashish Rajan: Oh yeah. What are your first thoughts on kangaroo jerky? If you were given a packet and you were, I don't know, watching something or just Claude-coding away, would you, uh, finish a packet? Is it [00:38:00] good enough for that? And does it taste exactly as you expected it to be? Like gamey, meaty?
Jack Naglieri: oh, yeah.
Ashish Rajan: All right. Okay. Yeah.
Jack Naglieri: Yeah, I can give it a five out of 10.
Ashish Rajan: I was gonna ask, 'cause you're a foodie yourself, so what's the rating? Five out of 10?
Jack Naglieri: Yeah, I'd give it a five
Ashish Rajan: out of 10. Oh wait, do you wanna try the crocodile as well?
Jack Naglieri: I'm good.
Ashish Rajan: Fair.
Jack Naglieri: You know my stance,
Ashish Rajan: and I appreciate that.
I've got three fun questions for you. First one: what do you spend most of your time on when not trying to solve the detection engineering problems of the world?
Jack Naglieri: To be honest, like I am a workaholic. I'm a founder, so I'm, I'm, I've never really turned that off, especially in the last two years.
Ashish Rajan: Yeah,
Jack Naglieri: really. Like as AI started eating the world.
Ashish Rajan: Yeah.
Jack Naglieri: I never really took a break, and I honestly don't really have any hobbies right now. It sounds so depressing, right? If you asked me this like two years ago, I would've been like, I love to cook, I'm really into fitness, I love running.
Like, and I still do those things.
Ashish Rajan: Yeah.
Jack Naglieri: But much [00:39:00] less, right? Like I, I, yeah. My schedule is very crazy and I squeeze in a 30 minute workout in the morning.
Ashish Rajan: Yeah.
Jack Naglieri: But yeah, I would've said nutrition, fitness, longevity. Those are my big interests outside of cyber and tech.
Ashish Rajan: Awesome. And second question, what is something that you're proud of that is not on your social media?
Jack Naglieri: That's not on my social media?
Ashish Rajan: Yeah.
Jack Naglieri: You mean like LinkedIn?
Ashish Rajan: Yeah, it could be any social media. It could be Instagram, could be anything.
Jack Naglieri: That's a good question. I try not to... I don't have an immediate answer in my mind, which is boring. I just, I don't like to brag.
Ashish Rajan: Some people go for family, some people go for, I don't know, their team, but whatever connects with you the most, I guess. If you don't have anything, that's totally fine as well.
But I'm just giving you what I've heard other people say, and 98% of them go for family. 'Cause I imagine their families are gonna watch this later on, which is [00:40:00] why they look straight into the camera like, my family and my kids are the treasures of my life.
Jack Naglieri: Okay, I have an answer. I mean, I don't really post on LinkedIn much, and if I do, it's purely about work and AI and stuff. But yeah, I would say I am proud of being here in San Francisco. I never would've thought that I would've been a founder, and I never would've thought that I would have the opportunity to live in San Francisco.
Ashish Rajan: Oh, wow.
Jack Naglieri: And it always felt like a very unachievable thing to me, as someone who came out here 14 years ago as an analyst. Right. Like,
Ashish Rajan: yeah.
Jack Naglieri: I could barely afford it. Living in the Bay Area is very, very expensive.
Ashish Rajan: Yeah.
Jack Naglieri: Um, so I think the fact that I said yes to opportunities, and I also had a lot of people help me along the way. It's certainly not... I'm not self-made. I'm a solo founder, but I have said yes to a lot of things. So I think it's just the willingness to try new things, and yeah, I am proud of that.
Ashish Rajan: That's awesome, man. Thank you for sharing that. The third question I had for you was: favorite cuisine or restaurant that you can share with us?
Jack Naglieri: Ooh. That's [00:41:00] such a tough question. I think I would say Japanese is one of my favorites, and it's tied with Mexican. Oh. Um, I'm married into a Mexican family, so.
Ashish Rajan: Oh,
Jack Naglieri: right, fair. So I absolutely love Mexican food. It is an incredible cuisine, especially when you travel to Mexico.
Ashish Rajan: Oh,
Jack Naglieri: it is totally different from having it in the States. Oh really? It's a hundred times better. It probably is my favorite.
Ashish Rajan: You probably get more local food as well then, I guess, which is probably not served in restaurants as well.
Jack Naglieri: Yeah, for sure. I mean, things are just significantly fresher as well. Right.
It's like, I think with food, like you always want to have stuff as soon as possible.
Ashish Rajan: Yeah. Yeah. Fair man.
Jack Naglieri: Awesome. So yeah, Mexican and Japanese. We went to Japan in October. Absolutely incredible. A hundred percent recommend it to everybody.
Ashish Rajan: I would check that out as well. But where can people find out more about the work you guys are doing at Panther, and connect with you?
Jack Naglieri: Yeah, Panther.com.
Ashish Rajan: And I'll link your LinkedIn as well, so people can connect and talk to you more about this. But
Jack Naglieri: yes, thank
Ashish Rajan: you so much. Coming on the show.
Jack Naglieri: Oh, detectionatscale.com too.
Ashish Rajan: detectionatscale.com.
Jack Naglieri: Yeah, my own podcast,
Ashish Rajan: Oh,
Jack Naglieri: [00:42:00] actually, yeah, and my blog.
Ashish Rajan: Yes, definitely check out his podcast and show as well. Great conversation there. Thank you.
Ashish Rajan: Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify.
In case you are interested in learning about AI security as well, do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you the top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.




















