The nature of Security Operations is changing. As cloud environments grow in complexity and data volumes explode, traditional approaches to detection and response are proving insufficient. This episode features an in-depth conversation with Kyle Polley, who leads the AI security team at Perplexity, about a modern blueprint for the Security Operations Center (SOC).
The discussion centers on a necessary architectural shift away from traditional SIEMs, which were not built for today's scale, toward a "data lake infrastructure built for detection and response". Kyle explains how this model provides the scalability needed to handle modern data loads and enables a more effective incident response process.A cornerstone of this new model is the use of centralized AI agents. The conversation explores how these agents can be tasked with performing in-depth alert investigations, helping to reduce analyst burnout and allowing security teams to focus on more proactive, high-impact work.
This approach moves beyond simple automation to create a system where AI augments and enhances the capabilities of the human team.Questions asked:00:00 Introduction to Kyle Polley & The Future of SOCs
01:03 The Core Argument: Why You Must Build Your SOC Before Compliance
03:34 Beyond the Certificate: The Difference Between Being Compliant vs. Secure
04:20 Today's #1 AI Threat: The Challenge of Prompt Injection
06:00 The Architectural Flaw: Handling Untrusted Data in AI Systems
08:20 The "Security Data Lake": Moving Beyond the Traditional SIEM
15:00 The Future is Now: A Centralized AI Agent for Automated Investigations
20:06 Will AI Take My Job? How AI Elevates, Not Replaces, the Security Analyst
25:20 Redefining "Shifting Left" with Personal AI Security Agents
31:00 Can AI Reason? How Modern AI Agents Intelligently Query Logs
37:05 Rethinking Incident Response Playbooks in the Age of AI
41:00 The MVP SOC: A Practical Roadmap for Small & Medium Companies
46:08 Final Questions: Maintaining Optimism, Woodworking, and Tex-Mex
50:08 Where to Connect with Kyle Polley
--------------------------------------------------------------------------------📱Cloud Security Podcast Social Media📱_____________________________________
🛜 Website: https://cloudsecuritypodcast.tv/
🧑🏾💻 Cloud Security Bootcamp - https://www.cloudsecuritybootcamp.com/
✉️ Cloud Security Newsletter - https://www.cloudsecuritynewsletter.com/
Twitter: / cloudsecpod
LinkedIn: / cloud-security-podcast
Ashish Rajan: [00:00:00] Hello and welcome to another cloud free podcast. Today I've got Kyle. Hey man, thanks for coming on the show.
Kyle Polley: No, super excited to be here. Thanks for having me.
Ashish Rajan: Ah, man. We should probably start with the intro. So can people have some idea about what you've been up to where you today? Would you mind starting there?
Kyle Polley: Yeah, sure. And right now I'm at the security team at Prop. I focused on all things AI security and building the program out there. I spent a lot of my career in various parts of security, a lot of areas in detection, response and have done a lot of research and operations there. In addition, I was an early member of the security team at Robinhood and built the security program at a FinTech company called Pipe.
And so a lot of DNR, but honestly, I've at this point have had my hands in almost everything related to security.
Ashish Rajan: Excited to build a security operation team at Perplexity as well. But I think when you and I were talking about this previously you said something which kind of stuck with me, where a lot of people normally focus on [00:01:00] compliance first and then soc.
You went the other way around. I'm curious why would you do that and what was your thinking behind it?
Kyle Polley: Yeah, that's, it's, and it's a good question. And I do think that I think there are two reasons. One is. Attackers and hackers they're not gonna wait for you to become compliant and spin up all this, all these resources and infrastructure, right?
And so a lot of things like when talking to leadership and other members of the team and they're like, Hey, why are you gonna focus on a soc and detection response? And it's imagine a breach happen tomorrow, right? The first question that everyone that. The company's gonna ask, the enterprise customers are gonna ask is what happened and what did they take?
Like what went wrong? Like, where'd it go? And without a detection response process, you're just not gonna know. You just don't have the audit logs or the capabilities to uncover what happened. And so you're stuck. And so that's why always, I'm not saying you have to like the [00:02:00] first thing you do, just.
Only focus on DNR and build a super mature DNR and not focus on anything else, but at the same time, like turn on CloudTrail, have it shipped to a bucket turn on guard duty, have it sent to a Slack channel. There's a lot of easy ways to get started that will take you very far. That I think is super important.
So that's one piece. One piece I think is, it's just super important. The second piece is, I think a general philosophy that I have with building security programs is if you build a really great security program, compliance just comes along with it and everything becomes super easy. And so to me it's like if compliance is challenging, we're probably doing something wrong or we're not doing something that we should be doing.
Yeah. And so I think going at the approach of. What makes a great security program, what needs to be done? I found that sometimes being compliance driven leads you down the [00:03:00] wrong path. And it's easier to just yeah, go down the happy path of building a great security program in compliance.
Generally tax along with it.
Ashish Rajan: To be fair to I think what you started off with, if there was an incident today, no one's gonna ask for your ISO or SOC two certification.
Kyle Polley: Exactly. It was a hard take
Ashish Rajan: on it. That is the reality of it. And I think you hit it on the nail by saying that because if you know what your, important critical systems are and you just rely on the fact that, Hey, I'm SOC two compliant, and I think it's a, it's just a myth.
I won't really say it's a myth. I think it's definitely a understanding between practitioners across the board that being compliant is not the same as being, secure. Yeah. Yeah. Just to understanding that people have talking about being secure as well. Obviously I imagine your feed is not filled with all the, all kinds of AI threats like most people on the internet.
Considering you are on this detection and response space and security operation, is your jam, what kind of AI threats are you seeing in your [00:04:00] feed these days and for that talk team that you're facing today?
Kyle Polley: Yeah, it's. Yeah, that's another good question. I think a lot of what folks see is at least specifically for AI attacks, is prompt injection.
And there are a lot of ways to mitigate prompt injection these days. And a lot of LLMs especially the really the brand new intelligent ones, they're pretty good at spotting it and ignoring it. But I think a lot of the issue is that similar to how engineers when. When they write code, they don't necessarily think about the security implications Very similarly, when they perform, when they write system prompts or like context engineering it's called now too.
They don't really think about the security implications of that too. And so you have this like AI agent with access to a ton of different systems and data and it's easy to forget that you need to also instruct the agent. To handle untrusted data differently than trusted data and to watch [00:05:00] out for here are the things that you shouldn't be doing.
I feel like it's very easy to say, Hey, agent, here are the things you should do. It's not, it's very hard to say, here's the list of all the things you shouldn't do. And I think, yeah, I think introducing that, I think that's like a big problem that you see.
Ashish Rajan: Sorry. I was gonna say, you know how there is this allow list and denial list that people talk about in the WAF context and otherwise Yeah.
Broadly speaking where, hey, it's so much more harder for me to say what's allowed versus what denied. Yeah. Is it the opposite when it comes to AI systems or is it more the fact that is more in terms of data that's being put into the AI system for what's a good data versus what's a bad data? Anything else?
Yeah. Like where's the thinking coming from?
Kyle Polley: I think, yeah, I think I think both cases are right. Yeah. Where this agent has broad access to things and and kinda what you touched on oh, what's trusted versus untrusted. Yeah. And so right now when you're designing AI systems and using AI systems, there's a system prompt [00:06:00] and a user prompt.
That's right. And so when you're. Ingesting untrusted text. Either the user prompt or maybe the agent calls from the internet and that. HTML content is also untrusted. There's no place to put that. So you have to also put that in the user prompt or the system prompt. And yeah, and I do wonder if I know actually Meta recently released an open source variant of LAMA that includes a third section specifically for untrusted content.
So I, yeah, so that actually does exist specifically for this purpose. I forget the name of it, but it was released maybe a week ago. And it's just normal. It's normal llama. It's not like an extra tool or something you have to attach to llama. It's just normal llama, but it includes that like safeguard built into the model.
Ashish Rajan: Oh, and I guess it's a good word to use for it. It's more to your point. General thinking is also that be mindful of the input and output you, the input that goes into the system, AI system and the output that comes out of it. In terms of building a soc for this, I [00:07:00] think you had described to me a data lake with detection.
And obviously volume is a huge thing in security operations. It's it's not that you are only looking at a few gigabytes. There's a reason why all the scene providers are so expensive. Like when it comes to the data ingestion and data what is it look like? Obviously not thinking of it as a compliance artifact.
What's your thinking behind the whole data lake with detection for a security operation team? Yeah, I never liked the term sim. I don't know why. It just rubs me the wrong way. It just feels so,
the other option was which,
Kyle Polley: right? Yeah, exactly. It just feels so I don't know, a little old school to me because I think at the time you had you had like your internal infrastructure, see a bunch of Linux boxes. You had you had like DNS like network traffic. You have your server with log traffic and that, and then authentication and like three different log sources. And so I think a sim was designed for those three log services in mind.
You're not getting that much activity. And [00:08:00] you can easily correlate between those logs and it does a good job there. Yeah. But I feel like now one, everything's in the cloud, then you have a hundred different SaaS providers and I just think it's getting a lot crazier. And so that's why I like to say it's like a data lake infrastructure built for detection response.
Because you see like a lot of the modern. SIM providers literally using data like infrastructure in the backend. And I think that's what it takes
Ashish Rajan: to, and I think maybe to add a few more layers so people understand where you're coming from as well. Would you say the data lake concept is more from a.
I guess because you need the context from everywhere, not just one place. Yeah. Like any given point in time, you may be on premise cloud, multi-cloud, and I don't know what other AI system or agents come and talk to you in English or Chinese or whatever the other language they may use. And you wanna be able to look at everything, not just one particular log source, and I think being able to combine that was always a challenge with traditional scenes. And yeah, I think [00:09:00] a lot of people have been promoting the whole data lake piece for soc teams to begin with. I almost feel like, I don't know how people explain this to the executives, 'cause I think a lot of them would've just been told, Hey, you need I think and to your point, it's an old term, everyone in that board knows it.
Yeah. How do you change that narrative for, hey, you don't need a theme detection in terms of building a data lake is the key. Yeah. And is this data lake being shared with engineering or are we having our own data lake and they have their own data lake?
Kyle Polley: Yeah. Yeah. And that's, yeah, that's a really good question.
The way I like to pitch it is that like a traditional SIM just couldn't handle what we're doing. And like I, if you want to call, if you want to call modern systems a sim, like that's fine. I'm just like. This is, here are the tools that I wanna work with and I wanna work with it because they could scale with us.
And I think they could digest that. And if they want to call it a sim, that's fine. I'm not gonna argue about naming conventions, but
Ashish Rajan: whatever gets the budget through.
Kyle Polley: Yeah, exactly. Yeah, exactly. And I have [00:10:00] really thought about the whole idea of why do we have our own data lake? Why is the security data lake separate from a normal data engineering data lake?
Yeah. I do always I like the idea of sharing tools and systems. It seems crazy that the security team is handling sometimes as much data, if not more than like the normal data team. Yeah. And security engineers are not data experts. And so why are they dealing with this burden alongside the data team hired to do data stuff. And so I don't know, we've, I've we've explored ways to build this internal, I've definitely spent a lot of time on like side projects basically using like Spark similar to what a data team would use Jupyter Notebooks and Spark to do the same as like a to do threat detection analysis, the same way you would do data science.
Yeah. And I think it works on paper. Haven't seen it actually work in practice though. However, actually, before, before we move on to that though there is a really great talk by Apple's threat detection response team at a Databricks [00:11:00] conference, and it was many years ago. But they do use Jupyter Notebooks and Databricks for their whole type detection response operations, which is pretty cool.
So I guess that's like the closest thing I've seen to this works in practice.
Ashish Rajan: I believe the Netflix team does that as well. I think I, yeah. They have I think they were the first ones. I still remember. I think, I don't know if it was my second or third reinvent, and they were on stage talking about data scientists and security.
Like, why do you need a data scientist in a security team? And this is obviously way before Gen AI and all of that. Yes. And I know, I'm going it just almost seems like a they were clearly working on a very different problem than the rest of the world. Yeah. Everyone thought they were crazy. Now suddenly we are in this AI world going.
That kind of made sense. It made
Kyle Polley: sense like, wow, they're onto something.
Ashish Rajan: Yeah. Obviously they were ahead of the time. Yeah, and I'll definitely try and link that the link that video from the Netflix team as well then when they were hiring data scientist people too. Yeah. And maybe to your point then, yeah, sorry, go on.
Kyle Polley: Oh, no, I was just gonna say I think. It sounds crazy. [00:12:00] And even now it could sound crazy, like why are you hiring a data scientist for quick detection response? But one, I would say it's, it was ne like it is a data science problem, right? It is a big data problem that's very hard to wrangle.
And two, threat detection response up until that point, like doesn't work. And like teams are stru, like they're burnt out. They're getting false positives everywhere. Yeah. Companies are still getting breached. Even like companies who have a massive sock, hundreds, like millions of dollars poured into just their sock operation and they're still getting breach.
Like it doesn't work. So it's like I do see this concept of something's gotta change. If if you just keep doing the same thing over and over again, it's not gonna work out right?
Ashish Rajan: A hundred percent. I would say that to an extent. I definitely find there's definitely a change in the industry where at least a lot of the conversations that I've had I keep seeing a lot more people are maybe even more braver to test this out with gen ai.
Yeah. I think [00:13:00] earlier the bar for every time people like you and I would talk about data scientist people were like, I don't know, man. Jupyter Notebook. There's a lot of things that I have to learn. I already have so much happening in my seam provider that I have to learn, I have to certify I to do this, I have to do that.
Yeah. But to, I think to what you said in such a simple way, which is actually true, that detection response is technically a data problem. It's just the kind of questions we ask versus a data person would ask to for a business, they just happen to have two different questions. Yeah. Which is, by the way, a normal concept today for anyone who has an AI subscription.
It's oh, I want it to be the best. I don't know, LinkedIn post creator. I want it to be the best ciso. I want it to be the best. Like we are obviously already asking it to be different people. Yeah. And I think people are being trained with the idea that, oh, I wonder if it can be my threat detection response person.
Yep. With that said, what do you think is required to build a SOC team, which is scalable today? Obviously now. Yeah. Building on that detection using data lake or [00:14:00] people who may already have a SOC team and they're reshaping it. Vote for what AI would look like. Yeah. How do you build a resilience, scalable SOC team today?
Kyle Polley: Yeah. Yeah. It's a good question. I feel like it could sometimes be like, it's, the whole industry has been. I feel like things change every month. It's crazy. But this is what I will say. The things that I've noticed that have been consistent are, one, you need a modern system or infrastructure that can scale.
And so that means using modern tools such as like snowflake. Some data like infrastructure that has proven to handle tons and tons of data. 'cause I don't think that's going away. And you could, you can give the best AI agent access to, to a legacy sim and it's still not gonna work because it's querying stuff the way we do.
And if we can't, if we can't query the data we need to query, then it won't either. And so it, it needs to have [00:15:00] some scalable, modern infrastructure behind it. The next thing I'll say too is I've been playing around a ton with cps and having at least a very approachable API or some standardized way of querying the data that's like easy to understand and can be applied to like different systems I think is important too.
And so given what I've learned recently about cps, like I, I will at least require every security tool I use to have a robust API, if not an MCP server.
Ashish Rajan: Why is that? What, what changed?
Kyle Polley: I've, it's a good question. I've learned that, I think the future or the future's headed, you have all these security tools.
Even outside of security, you have a lot of vendors who are attaching AI to their product. And so now you have even for like I think all the modern sims, now you can go to the sim, you write a query and there's a button that says talk to the AI agent. And I think that's a step in the right direction, but I [00:16:00] personally, I see the future of, instead of having all of these.
Third party vendors each having their own AI agent. Instead, security teams will have a centralized AI agent with access to third party tools. And not to show my own project, but literally this morning I open sourced a project that basically does this. I don't know if you saw that on LinkedIn.
No, I haven't. Okay
Ashish Rajan: I'll look it up. Yeah, I'll put that in there. Yeah. So basically, what does the agent do?
Kyle Polley: So it's basically an agent and it's powered by Cloud code and cps. Let's say you wanted to create a security alert investigator agent. You would simply describe it in natural language and say, Hey this is your task.
You're here to investigate alerts. Here are the steps. I want you to look at the alert. Run a very in-depth investigation and know that this is super high level. Write a report about what happened and whether you think it's benign or malicious, and send it to a Slack channel. [00:17:00] And that's a pretty complex, imagine like trying to code that that'd be super, super complex, but now it's like literally five steps in English, and then you give the agent access to your sim.
In this case, I was using Panther for my example, Panther, MCP I gave it access to the virus total MCP. I gave it access to Slack, MCP. Okay. And so I just said, here are the five instructions that you need you to follow. Here are the tools that you can use. Every time an alert comes in, go and do those things.
And it works surprisingly well. I I think it was like, honestly like one of the cl moments I've had that feel the a GI movement, like it, it's crazy how good it is and it performs a really thorough investigation end. I think when we talk about having da hiring data scientists for the team, I think that kind of feels appalling and challenging for a lot of teams.
Just because that just sounds hard. Like one, like hiring great data scientists and then [00:18:00] two, convincing them to work in DNR, which is not a fine, I wouldn't necessarily call it like. A super fun place to work at. I don't know if many data sciences would choose to work there. But now you have this AI agent who's really good at querying data and they have all the time in the world.
And so you can have a hundred alerts and you spin up a hundred agents and each agent doesn't get tired. They just go in. They perform a really in-depth investigation and do a really good job at it. And so I personally, that's where I see the future headed.
Ashish Rajan: It's funny how you and I think alike about this.
So literally yesterday and you made a post this morning, I made a post yesterday about how. The, all the, everything that I'm seeing today, I definitely believe each department would have its own AI agent as well. Yeah, and I think ultimately and this is why I was curious about why your belief about MCP changed, because, I'm keeping the thing about challenges for like at the moment you and I mentioned MCP, I'm sure people in the audience would've gone, but there's so many security [00:19:00] flaws and like what Yeah.
Keep that, acknowledge, pin that on the side for a second. Yeah. But outside of all of that, yeah, I definitely, I genuinely believe that I feel every business unit, it just not, I think everyone just becomes the norm. It's almost silly to not have an AI agent per the business unit that understands the context of the AI agents.
Sorry the business unit specifically? Yeah. What logs it collects what third parties it's connected to. What am I really here for? What's my purpose? Someone to clearly talk about this and I don't think the CEO's gonna have a hard time talking about this and cannot be lose anymore. Like you have to literally type it out 'cause it's gonna do what, exactly what you say.
Yeah I definitely, genuinely believe where the role of these cybersecurity vendors would be in that world where to your point, are we just gonna talk to the CPS or whatever, the agent to agent protocol or whatever the thing turns out to be. And that's where all of us are just basically asking simple for when was the last time this was seen?
Like the whole thing that apps [00:20:00] people talk about where reachability is a thing. It's not just about, there's a, yeah, there's a flaw. But can it be reached from the internet? Yeah. And if that's a simple question of you and I just asking the question to an agent, I'm like, oh my God, I'll take my money today.
That's a feeling most people would have. Yeah. But it does raise another thing, and I think this is where some people challenge me as well, where there's this whole I thing about is AI helping in this case or is it replacing a stock agent? And how do you identify AI use cases that elevate rather than replace.
Or person?
Kyle Polley: That's a good question. And potentially a hot topic. And so I'm on the team of I don't think it's gonna replace, I don't think it's gonna just replace humans, I think, I don't think it's gonna take anyone's job. Yeah. I do think it'll take tasks that were super slow and repetitive.
And it will empower. The security team. To do things [00:21:00] that are more impactful to the business and more proactive in security. And so consider like going back to DNR like right now, security teams we're losing, right? We're not we're not doing a great it's very hard.
It's a very hard job and there's a laundry list of things to do. There is a nonstop thing. There's, you'll ne it's never ending, right? And when I hear that argument of oh, it's gonna take, it's gonna take jobs it's gonna act as an analyst, and we don't need analysts anymore.
It's like there are a hundred other things that we could be working on that are much more impactful, but we're too busy. We're too busy firefighting to even consider it right now. Yeah. And so I think this future is super bright. If I think of oh wow, if I don't have to triage alerts and if I don't have to be like looking at bug value ports or static code analysis alerts all day long, what can I do at that time?
And it's it's super exciting to me.
Ashish Rajan: Yeah. Yeah. I can already think of so many things to what you said, where in a future where you [00:22:00] probably keep the same amount of people in your SOC team. Who are managing agents with summary. 'cause there, there's a lot of moving parts in detection responses.
Yes. Every day there's a threat alert coming out. Like a identifying, is this relevant or not? 'cause your morning start for the news feed or threat intel for the day and you're basically doing in your minds pattern check for, is this relevant? No. Maybe, but I need to change. Yeah. And the end almost, you almost feel drained by the end of it to begin with.
And then you go after. 'Cause I built a SOC team and remember the frustration everyone would have, I think we had a VI won't mention the vendor name, but it was just this. Tremendous amount of false positives that the team had to go through, and they were just sick of it. I think after a while, I remember we just made the call for, hey, there should be a better way.
And they didn't have an API back then. The team wanted to do it, but I genuinely agree the parts where this can go is very exciting for, yeah, for broadly cybersecurity and tech in general for me as well. Yeah.
Kyle Polley: So you're saying you, you had to turn the alert [00:23:00] off.
Ashish Rajan: Yeah. No, so what we ended up doing was we basically started putting, we made a feature request for Ignore, and we had to basically on an Excel sheet, we had to make a note of all the things that are control, quote unquote.
It can be ignored because yeah, for example, we didn't have SQL injection, but we would start SQ injection alerts.
Kyle Polley: Yeah.
Ashish Rajan: Yes. Internet do SQL injection great. I don't care.
Kyle Polley: Like as long as it didn't actually happen. That's how I always feel about the Okta failed login alerts. How it's like it worked.
I don't even know this. And so it like, that's always so funny to me.
Ashish Rajan: But no thanks. You just wasted my brain cells on something, which is not even relevant.
Kyle Polley: Yeah. But what I love about that is like. Why do you have to go and have this allow list or block list, whatever, and it's because you were limited by how many people could actually be reviewing each alert.
Yeah. And now that burden, like that restriction isn't there anymore. You could just, every alert, just assign a new AI agent to [00:24:00] it and have them triage it and deal with it. And maybe once a month you can look to see which alert. Created the most false positives and for cost sake, you go and tweak it.
Instead of burning out your entire soc team, hundred percent or even worse, turning off the alert entirely and missing a true positive. Oh
Ashish Rajan: my God. Yeah. But I'll add another layer to this as well, where there was a whole co thing about we were a cloud native company and we acquired.
Couple of other so we, we became a multi-cloud shop from being an A one cloud provider shop in a matter of three months. Oh, wow. And no one in the team had access or understanding of all the other cloud providers. But in my mind, if that happens today, I would be fine and I would might be panicking for the team because you are almost like, oh wow, you have the AI agent, it understands what service is for.
And it can help you, to your point, summary, summarize and understand is this relevant and help you ask the right question. Yeah. I don't know if you find that already. 'cause I feel like there's more to [00:25:00] these use cases that people don't even talk about from a, like how much more simpler life can be with this.
Yeah.
Kyle Polley: I, so I've all, yeah, this is something, this is such a rabbit hole and it makes me so excited for the future. But if you think about too, like a lot of modern security teams, I'm sure you talk about on the pod a lot about building guardrails, not roadblocks, and oh, let's almost like this security DevOps operation where it's oh, we're gonna integrate security deep into our processes to make it so it's super easy for security to, for developers to do things securely.
And I love that. Like I love that philosophy. I live by it. I breathe it. But it's very challenging to do. It requires, right? It requires a team of software engineers who can beat. Who are like Excel in infrastructure. They excel in product engineering. They excel in every single tool imaginable, right?
And so you're looking for these unicorns of [00:26:00] engineers in order to actually achieve that goal. But now if you think about it, it's like before we had to build these guardrails because we couldn't look at every single pr. And so we needed to make it so engineers can self-serve and.
And do it on their own without security intervention. But now again, we go back to you have this automated intelligence that can be like a security expert. Like they're incredible at this stuff. And so imagine a security engineer like hanging out in the developer's, IDE and it'll say Hey, you should do this instead of that.
Or, Hey, like this text is untrusted. You should add some sanitation layer to it. Or they're part of the PR and they do a review. And so to me it's like shifting left, used to be like. Building these massive systems and guardrails that like gently, are in the security pipe or in the engineer lifecycle.
But now it's just oh, just give everyone an AI agent, give every single person a security AI agent. It's with them 24 7. It'll intervene and talk to [00:27:00] you, and they can also ask the agent questions. And maybe that's what shifting left is in the future. Maybe it's literally just enabling an AI agent.
Ashish Rajan: Yeah. I clearly, you and I have been drinking the AI cooler for some time.
Kyle Polley: I have. I'm all in it. I'm all I'm a hundred percent convinced this is the future. And the faster we get there, the better chance we have at stopping attacks.
Ashish Rajan: We should probably end the episode here, like
Kyle Polley: done,
Ashish Rajan: like mic drop moment and you walk away at that point.
I, I guess maybe to bring it home as well and come
Kyle Polley: I'm real, real quick. I'm super curious though. You've, you talked to a lot of folks, like what is the sentiment?
Ashish Rajan: I definitely believe people have the right sentiment about this. They want to go down the path. Yeah. But Shannon has been to what you said.
The balance between the old and new Yeah. Shaking the tree for what has been the process for 12 years versus Yeah. Hey, I'm gonna start from scratch. So much more easier to start from scratch today. Yeah. And not try and change [00:28:00] mainframe for some reason. So you're trying to figure out, and I think that's where the friction is coming from the most where I and there is a movement in that direction where applications that are being made, AI enables.
They're making people think about, Hey, MCP server is a thing. We need to have the MCP servers, but at the same time. Like we haven't changed the structures to be driving more AI innovation. 'cause it cost is a thing as well. Yeah. There's the number of programs being spent, then you're like, Hey, how long is the experiment gonna go?
I thought you only needed a month there. Yeah, there is, there are more challenges, but there are good challenges, boss is a big one, irrespective of challenges. Yeah. But it definitely is in that I definitely feel in all the conversations that I'm having, people are all for it. It just the question of whether they have something new to prove as a use case for the business to go, Hey, yes, we can invest more here.
Yeah. Instead, at the moment, the money is being spent on engineering for a we need to be better than a competitive [00:29:00] Yeah. Which is fair. I get that. Which is why I hear a lot of people in the cybersecurity teams assume the financial sector talk about we are improving productivity and all of that using ai.
Yeah, it's productivity. It's not like I'm going through logs. Yeah. But you talk to a tech company, tech first company, like a Netflix at that level. Obviously, I haven't spoken to specific Netflix about this, but Netflix kind of companies, as you said, rippling and other people who are openly talking about AI and all that as well.
Like they are very forward. They're very open about what they're doing and I love that, but. At that point in time, I definitely feel the big wave would only happen when the bigger enterprises start opening, talking about at the moment, they're like, what's my competitive edge? I need to make more money.
I need to make this foster. How many people can, how many people can I get rid of? Which is not,
And everyone also understands there is a gap in reasoning as well. At this point in time that there is a Yes you can do summary. Yeah. But there is a complexity that comes with [00:30:00] reasoning that at the moment perhaps humans are still better. So there's a human loop. Yeah. But I also almost want, which kind of leads me to the context part as well.
One of the challenges that a lot of people have is with the context window. Yeah. Due to what you said, you made an open source tool, MCP, ready now in a small use case probably. It makes really good sense and it really amazing in a small team. Yeah. If you were to scale that and I, my hope is we can try.
Sparking some hope with all the cloud and security operation people who are listening to this conversation as well. What's a good way to, whether it's a cloud security alert or whether it's a general security alert, how are you tuning or is it possible to tune detection rule where, or are we just at that stage where we can do AI triage?
But can't go that step further. Like where is the boundaries that you've tested so far?
Kyle Polley: Yeah. Yeah, that's a good question. And things like, yeah, we could talk about, 'cause about autonomous detection and response, right? And so you could imagine if you [00:31:00] if you're an analyst or instant response engineer and you get an alert and you go to your sim.
What I normally do is I would start just let's say I wanna look into an IP address. I'll just say, show me all logs with the IP address. And I'll like maybe score around a little bit, but then I'll hone in and I can't look at thousands of logs, right? So I'm just gonna hone in and improve my query to really dig into what happened.
Yeah. And I found that a lot of MCP clients. Are capable, a cloud code is capable of doing this. And so you can actually watch cloud codes. The open source tool I use uses cloud code. That's why I've talked a lot about this. But you can, you watch it go through these steps and think, and so it'll literally run a query and say, this is too much.
It'll literally tell itself that. It'll say, this is too much info. I need to refine my query, and it will try again with a better query and eventually get to the answer. It will. And that's what, that's why it's blowing my mind so much. And so I'm [00:32:00] convinced that with the right MCP client and as intelligence gets better.
It will reason about things really intelligently in a way that a normal human would that's exact, like it did exactly what I would've done in that situation. Look at the logs that I have up to like my own context. I have my own context window, right? I'll scroll around a little bit, but I'm not gonna look at everything.
And then I'll say, okay, this is useful, but let me hone in this way. Let me add this parameter to my query. Or. Filter out all this noise to try to try again and reduce it. Yeah. I will say too I think chat GPT was released like three years ago, like in it's crazy how fast this is moving and everything's just improving so quickly, including context window that we could enter a future where context window just isn't an issue.
I could totally see that happening at this rate we're moving so quickly that it's just getting smarter and smarter. But honestly, like even [00:33:00] now with what we have it does a pretty good job.
Ashish Rajan: Yeah. Yeah. I think I was watching a open AI. Researcher talk about how humans view growth. And they said if you ever watch a plant grow, it feels like it's not growing.
If you see it every day. Yeah. But it's a week later, a month later, it's like significant growth. Yes. And it's. The same with AI as well, as much as we are all complaining about, Hey, it's not good enough, but if you put it, if you take a step back, it's only been three years and we've been talking about like future, like back to the future for how many years now?
Yeah. Still waiting. So I'm still in the biggest scheme of things. I think the it's all the credit to everyone who's trying to push the envelope on what this would mean as a benchmark. Yeah. It just takes a moment to realize that, we've come really far, mean
Kyle Polley: this, like we wouldn't be, like, this discussion would've been like impossible three years ago.
Cabin
Ashish Rajan: at that point in time, cabin [00:34:00] talking about this like crazy people and we probably have a cult of people who just we believe data lake is the future.
Kyle Polley: Yeah. Exactly. Yeah. And no, it'd be this, it'd be this impossible task of automating intelligence. The things that we're talking about now just weren't possible.
Yeah. And now it suddenly became. It's real. It became real, it became very possible in just a few years which is really crazy.
Ashish Rajan: I would also say putting another lens of the enterprise thing, 'cause the MCP client that you were referring to, are you able to, and I don't know how much. Down that rabbit hole that you go into.
But I'm thinking about from a enterprise perspective, which may want to say include not just Slack or my Jira tickets or my, all of that come there's a lot more context required. I'm gonna say a simple example if someone says, Hey, S3 bucket, open the internet.
Kyle Polley: My
Ashish Rajan: C app or whatever called it out, and I'm trying to figure out my it came into my feed.
The AI agent with the MCP that I've triggered for it, it goes ahead and sees that, Hey, by, this is the region, this is the account number, blah, blah, [00:35:00] blah. Yeah. And there just happens to be a conversation, which was six months before that Ashish had said, Hey, we are making this public, making a decision.
I think a lot of people have that use case in mind, is that, yeah. Are we far from it? No. Or have you experienced it?
Kyle Polley: Yeah. No, we could, you could do that. You can do that now. And it's crazy because you tell, like you give the AI like some high level instruction, perform a very thorough in-depth investigation on this alert.
And if you give it the right tools. It will search Slack. It will search Slack for that query, for that bucket name. It will right now I don't think an A-S-M-C-P server exists, but theoretically you could imagine like what I would do if I was investing in this, I would look at the bucket and say, is there anything sensitive in here?
What's going on? Yeah, nothing's stopping it from doing that. You can also. Find the person, it'll look through the cloud trail logs, find the person who did it, and it'll send 'em a message and be like, Hey, did you do this? My, per the project that I recently open sourced it doesn't have [00:36:00] some like feedback.
Like I would love for it to like message the person who did it and say, click this button if you meant to make it public or understand the implications of it being public. And then they click it and then the agent like, okay, this is fine. Here's the report. It doesn't do that yet, but. You can. And I think what's also really cool about this about LLMs and CPS is that if you give it some high level steps, it will just go and do it.
And and Claude is you'll, it'll be pretty creative. It'll figure out, it'll say Hey, this is what I think needs to be done. But let's say if I, as the engineer telling the agent what to do. If I wanted it to be more specific, if I said okay, like you could, I guess as the engineer, you can set the bar to how creative you want the LM to be.
So if you're worried about the LM going off and doing its own thing, you can give it very specific instructions and like literally these are like the 50 things I want you to do. And it will just go down that list. And so you could gauge like how much [00:37:00] you want it to think critically and do its own thing versus.
Hey, we have this set process and I want you to follow it. And when you write the report, I want it to be in this structure in markdown and it's all up to you and it's all in natural language.
Ashish Rajan: Yeah. Yeah. And I guess to your point, it could be a non-exhaustive list as well. You can keep adding to it.
Kyle Polley: Yeah, you can keep adding to it. And it's like reading a instant response playbook or a playbook just for any operation.
Ashish Rajan: Talking about playbooks as well, what is the structure of a security operation team in today's day and age? And yeah, I feel like the old instant response playbooks may not even apply anymore.
But do they apply in this context?
Kyle Polley: I think they maybe a good example is, let's say google com credentials were compromised. An employee was like, Hey, I, got an alert that I logged in from a state that I'm not in or something, right? Yeah. And the playbook would be one revoked those credentials.
Maybe suspend the [00:38:00] user in Google Workspace to investigate. What that user did with those credentials and see what systems they access and whether there was like lateral movement or something. And then maybe also investigate the machine, the endpoint, like the laptop to see okay, how were these credentials stolen?
And then write a report about it. And I don't know if that changed very much. That's still the case today.
Ashish Rajan: Yeah.
Kyle Polley: I think. The issue today is there's just a lot more, there's just a lot more alerts and it's just hard to keep up with it all. I think the general practice remains the same.
And I will say too I think everything's moving much faster. Where let's say before if you got a Phish if an employee had a phishing email there's a like. There's a chance you could respond to it fast enough to where, oh suspend the user, rotate the creds, or something like that, right?
But now you have these automated phishing systems where it'll ask for the username password, and then ask for the MFA code, and then automatically log in and start spamming like [00:39:00] Twitter crypto posts, right? Stuff like that. And so the opportunity is like, the opportunity to mitigate it is like it's very small.
The diamond goes baseball. And I think that changed too. And I think that's why there was this big push for automations as code like Soar, and now we're going to soar. But. I don't know. I don't know how many folks you've talked to who actually have a robust SOAR process.
Ashish Rajan: SOAR is a very soar topic, let's just say.
Kyle Polley: Yeah. It's a soar. It's SOAR because it's too rigid, right? You like defining these things as code is too challenging. There's too many variables. And so I know there's a lot of, there's a lot of great tools like tines who's really fantastic. No code, you plug in and play.
But then if you look at like really mature. Sore operations, there are tines. It's just like this massive web of little blocks that's like really scary to look at and you have no idea what's going on. And I think just having an intel, automating intelligence just opens up so many things.
Ashish Rajan: Fair.
[00:40:00] And maybe if, for people who are building, of, building a new capability for security operation, I think yeah, the other day, and they are ramping up their soc, especially people who are coming out of MSSP and coming back into the organization now that they've realized AI capabilities, what's based on the conversation we have had so far.
Yeah. What would you say is, should it be a MVP? So stack in like a mid-market kind of a, like a medium sized company. Obviously large enterprise, they have to go for a sea or whatever, legal obligation, realization, whatever, but for a small to medium sized company. Yeah. What's an MVP SOX stack that people can actually build up on based on whatever you're seeing today?
Today in terms of team size or team skill, or whatever you want, call it, whichever direction you want to go in.
Kyle Polley: Yeah, no, that's, it's a really good question. And I would say there's a lot of options out there. The ones I would focus on are mo like threat detection response tools that are focused on or built on modern [00:41:00] infrastructure scalable infrastructure that proved to do well in high throughput environments.
I would look at their APIs and. Not saying you need to jump into the MCP and AI bandwagon on day one, right? But you wanna make sure they're building for that future. Just have that in mind. They, the company also needs to be aligned on that mission and say Hey I know there's a MCP, the auth.
Like no one has auth built out, ai, stuff like that, but we're building towards it, and when it's ready we'll be there. And so as long as you go for a company that does those things, I know like the two that, I know have been really forward thinking with AI and MCP or Panther and Run Reveal is are the two.
And so that'd be like my go-to, but as long as they have those principles, that's really what matters.
Ashish Rajan: What about maturity milestones? So if you're building one, like what's a good baseline to think about, hey, at least have this and then build up? Is there like stages of maturity that you can think of?
Kyle Polley: Yeah. I would say starting off, like day one, [00:42:00] don't even purchase a sim, just turn on cloud trail logs and garity logs. That's literally, you can do that. You can do that the day you're hired. It's a flip of a button. Yeah. And then I would purchase one of those modern sim platforms that we talked about and I would start ingesting critical logs.
And I would say only. Have the absolute critical 'cause you don't wanna just have a flood of alerts. So start with high impact, high severity alerts where there are very little false positives. Things like S3 bucket becoming public. It's like a really good example. And so I would say one start with just very basic guard duty.
You get it out of the box, it's great. Okta also has some detections out of the box. It's great. Then have a sim We are collecting those logs in a format that's easy to query. Because keep in mind to, for me, especially right now, not talking about ai just in general detection. Detecting threats is [00:43:00] like a Hail Mary.
Like it's very hard to do it. And so like threat detection is great. It's like when all of your other controls fail, you turn to threat detection as like the last resort. And so to me, actually the value of a sim is less on threat detection. More on instant response and being able to query those logs and be able to answer the question what actually happened, right?
And so I would say only alert on like super critical super critical alerts and make sure that the logs are coming in, in a format that is easy and faster query. And then I would say after that step, then you start hiring a detection or response team who could then now start to fine tune those detections and have a start building out that Hail Mary process where it's like, Hey, like we want to do our best job at trying to detect the threat as a last resort because it is super important.
Yeah. And then a great point. Sorry. Yeah, [00:44:00] I'll let you finish. Oh, no, it's okay. I guess the last point is then focusing on automations for instant response. And I think that's when I think you sh, even in the age of ai, still focus on those first three, get to that point where you have a pretty good like detection pipeline and a repository of solid detections.
They don't have to be perfect, but a good amount. And then. When you're focusing on automation, what I would suggest is now instead of immediately investing into soar or writing your own Python code for instant response, now I think start exploring AI and LLMs.
Ashish Rajan: Yeah. No, makes sense. It's funny, I think I was having a debate with someone on LinkedIn the other day about whether the traditional they call it traditional cybersecurity categories of identity, AppSec.
Detection response. All of these have any value in a world of AI agents. I clearly stand by the fact that considering they're still building applications or embedding [00:45:00] AI into existing applications, they still have a lot of value. Yeah, but I think you validated my point as well, that before you go down that, Hey, I'm gonna be these genetic AI on one corner.
Yeah, maybe start with the basics.
Kyle Polley: Lemme turn on the log start. Exactly. Start with the basics. And I think you'd be surp like going back to that DNR point of like steps one, two, and three, right before AI is still the same. You could, if you never, if you're like, if you hey AI and don't wanna use it at all.
Like you should still do steps one, two, and three. Yeah. And so I think the future in an AI world, won't look a whole lot different. It's just the tasks that are being run are now handled by autonomous agents. Instead of humans.
Ashish Rajan: Probably help done better as well, doing get
Kyle Polley: tired better, faster.
Yeah. All the things. Awesome. I'm really excited.
Ashish Rajan: I know, I'm excited as well. I's just I feel like every time, people are gonna think you and I just drinking the AI Kool-Aid, these guys like, I, okay.
Kyle Polley: I'm in it. I'm [00:46:00] okay to accept that I'm drinking it.
Ashish Rajan: No, but I guess it's a possibility of what's what it can do, and I'm seeing what it is doing.
It's just amazing. Obviously those are, that's the end of the. Technical questions. I have got three fun questions for you as well. Sure. General one, which is the first one being, what do you spend most time on when you're not working on solving the security operation detection response problems with ai.
Kyle Polley: Oh, that's a good question. Just in like life or?
Ashish Rajan: Yeah. Which pick whichever you want. Life or opposite. Probably life is probably makes more sense I guess, in this context.
Kyle Polley: Yeah. One in terms of, in, in terms of work, I just. I have really enjoyed listening to a lot of the talks and blog po blog posts about people just like messing around with ai.
I just think this is a super fun area to be in. And it's constantly changing and so it's like a really great time to be an engineer, in my opinion. But yeah, outside of that I've been taking up woodworking and golf and those are the probably the two things that I've been spending most of my time on.
Ashish Rajan: Hopefully we'll see some of your woodworking [00:47:00] out of the woodwork.
Kyle Polley: I'll send some texts.
Ashish Rajan: Yeah. Yeah. I look forward to it. Second question being, what is something that you're proud of, but not on your social media?
Kyle Polley: That's a good question. I, ooh, I
Ashish Rajan: don't know, family and other things, but obviously you can dig whichever direction you want.
Kyle Polley: Things that I'm proud of? I don't know. That's a good question. Maybe that means I talk too much. I share my accomplishment too much on I, I think something that I guess I'm proud of is, I think in security it's very challenging to. Have an optimistic view on things. I think it's very easy to go into this rabbit hole of anxiety and burnout.
Yeah. Ambulance chasing. Yeah. And I don't know. I'm I try to have that mentality of optimism and I think it's just, it [00:48:00] motivates me and drives me and, yeah that's what I'm proud of. I'd rather spend my energy on trying to build a better future than being up, upset about the present.
Ashish Rajan: I love that. And no wonder you and I gel so much, man. I think I have very optic,
Kyle Polley: this is,
Ashish Rajan: I walk around the world with roast tin glasses, how people describe it.
Kyle Polley: Yeah, exactly. That's how I, that's how I feel. But you know what it's a fun life this way.
Ashish Rajan: A hundred percent. You only get one.
Why don't just be happy in there. Yeah. Final question. What's your favorite cuisine or restaurant that you can share with us?
Kyle Polley: Oh I'm located in Austin, Texas, and so Tex-Mex is definitely my favorite dish and cuisine and fajitas, just fajitas in general. If it's on the menu, I'm gonna get it.
Absolutely.
Ashish Rajan: Wow. Okay. What's yours? Thanks for sharing that. I actually, funny enough, I have not been in Texas yet, so I haven't had my share of Texas. No way. You'll have to come visit. It'll be fun. I know like smoke meat is like my jam as well. I haven't been able to have that in Texas, if anyone has an invite coming, I would [00:49:00] definitely take that.
But dude, hey, anytime to come
Kyle Polley: to Austin, please let me know.
Ashish Rajan: I would definitely let you know. But dude, this is awesome man. Where can people find you to connect with you and talk more about this as well and the gentech future of SOC as we are calling it now?
Kyle Polley: Yeah. You could find me on LinkedIn. Pretty active there.
I think it's literally just Kyle Polly. And I'm also fairly active on Twitter. Just Kate Polly and, maybe we'll throw some links. The only cyber
Ashish Rajan: security guy left on air Twitter. Now. I know, right?
Kyle Polley: I tried Blue Sky and all that, but yeah, no, Twitter keeps really lean back in.
Ashish Rajan: No, I'm there as well, but it's dude, I'll leave those links in the con in the description as well for people to catch out, to check it out as well.
But dude, thanks so much for doing this and yeah thank you everyone for tuning in as well. Yeah, hopefully we can have this conversation again. But thanks so much for doing this, Kyle.
Kyle Polley: Yeah, no, thanks for having me. This was
Ashish Rajan: a ton of fun. Alright I'll talk to you everyone else soon and yeah, enjoy this episode.
Peace.