What does the integration of AI into a Security Operations Center (SOC) practically look like? This episode explores the concept of the "Agentic SOC," moving beyond marketing terms to discuss its real-world applications and limitations.

Ashish Rajan is joined by Edward Wu, CEO of Dropzone AI, for an in-depth discussion on the current state of artificial intelligence in cybersecurity. Edward, who holds numerous patents in the field, shares his perspective on how AI is changing security operations. The conversation details how AI agents can function as a tool to support human analysts rather than replace them, and why the idea of a fully autonomous SOC is not yet a reality.

This episode examines:
- The "Agentic SOC" model: A framework where AI agents assist human security engineers.
- AI's role in alert investigation: How AI can autonomously investigate alerts by making over a hundred large language model invocations for a single alert.
- Practical limitations of AI: A discussion on challenges like AI hallucinations and the need for organizational context.
- Building vs. buying AI tools: An overview of the complexities involved in creating in-house AI agents for security.
- The impact on SOC metrics: How AI can influence Mean Time To Resolution (MTTR) by investigating alerts in parallel within minutes.
- The future for security professionals: How the role of a Level 1 SOC analyst is expected to evolve as AI handles more repetitive tasks.
Questions asked:
00:00 Introduction: Why Agentic AI in the SOC Matters Now
03:03 Meet Edward Wu: 30 Patents and a Mission to Fix Alert Fatigue
04:03 What is an "Agentic SOC"? (AI Foot Soldiers & Human Generals)
06:27 Why SOAR & Playbooks Are Not Enough for Modern Threats
08:18 Reality vs. Hype: Can AI Create a Fully Autonomous SOC?
11:55 The New SOC Workflow: How AI Changes Daily Operations
14:10 Can You Build Your Own AI Agent? The Hidden Complexities
19:06 From Skepticism to Demand: The Evolution of AI in Security
22:00 Slashing MTTR: How AI Transforms Key SOC Metrics
28:42 Are AI-Powered Cyber Attacks Really on the Rise?
31:01 How Smart SOC Teams Use ChatGPT & Co-Pilots Today
32:38 The 4 Maturity Levels of Adopting AI in Your SOC
37:04 How to Build Trust in Your AI's Security Decisions
41:28 Beyond the SOC: Which Cybersecurity Jobs Will AI Disrupt Next?
46:44 What is the Future for Level 1 SOC Analysts?
49:11 Getting to Know Edward: Sim Racing & StarCraft Champion
Edward Wu: [00:00:00] What about SOAR and automation and stuff? The challenge with most SOAR technologies is they are leveraging playbooks. At the end of the day, playbooks are if-else decision trees.
Ashish Rajan: A lot of security operation people are quite smart, quite technical. They probably would hear this and also go, I'm smart enough, I can build an AI agent.
What's the reality?
Edward Wu: Our system is making over a hundred distinct large language model invocations in order to autonomously investigate a single alert. Generating the correct SPL search queries is actually very difficult.
Ashish Rajan: Has this changed the metrics as well?
Edward Wu: With AI agents, what we have seen is the MTTR generally can be immediately reduced to within minutes, and that is something that a security team simply cannot achieve purely with humans.
Ashish Rajan: Agentic AI. Yes. In the SOC. That's what we are talking about today. I have been talking about how security operations teams have been overwhelmed for some time now, especially in the last year or [00:01:00] so since the explosion of AI applications around most companies out there. And adding to that, most security operations people were never trained in AWS, Azure, or Google Cloud. Some of them were, but you have specialized people. I can go on a rant about this, but I'm so glad I'm having this conversation. I had Edward Wu, who has, I think, about 30 patents in this particular space, and is also the founder of a company called Dropzone AI. Him and I had a conversation on our sister podcast called AI Security Podcast, and I thought, man, Agentic AI seems to be everywhere, especially in the SOC, and I wanted him to have an honest and transparent conversation about what agentic AI is, how it works in the background, and how we separate the signal, which is what agentic AI is truly able to do, versus what the marketing hype is. Yes, he was very transparent and honest, and I think him and I, and a lot of people in security, believe that the foundation of AI, at least in cybersecurity, would be laid by transparency. This is the belief across the industry: OpenAI, Claude, everyone else already shows you a thought process, and it [00:02:00] probably is even more important in cybersecurity.
So I'm super excited for you to hear this conversation with Edward. As always, if you know someone who is thinking about an Agentic SOC for their cloud or hybrid cloud environment, or is looking to uplift their security operations team for the world of AI, I would definitely share this episode with them as well.
And as always, if you have been listening or watching an episode of Cloud Security Podcast for some time and have been supporting us for a while, I would really appreciate if you could take a few seconds to hit that follow subscribe button, whether it's on the audio platform like Apple or Spotify or on the video platforms like YouTube or LinkedIn.
It means a lot when you take that second to support us as well. So thank you for all the continued support and all the recent reviews that were left on Spotify as well as on Apple Podcast. Really appreciate the kind words that have been dropped in there. We continue to be the Top 10 cybersecurity podcast globally.
Thanks to you guys. So I'm really thankful. That's all I would wanna say. So I hope you enjoy this episode with Edward and I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I've got Edward with me [00:03:00] today. Hey, man, thanks for coming in on the show. Thank you for having me.
Could you give us a bit about your professional background and what you've been up to before we get into the meat of the episode?
Edward Wu: Yeah, absolutely. My name is Edward. I am the founder and CEO of Dropzone AI. We are a Seattle-based cybersecurity startup that's leveraging large language models to build essentially AI security analysts.
So before Dropzone, I was at ExtraHop Networks for eight years, where I built its AI/ML and detection product from scratch. So I really spent eight years generating millions of security alerts and saw firsthand how most security teams don't need yet another alert cannon. So once I saw the capabilities of large language models, I decided to start Dropzone to partially redeem myself, as well as
try to build a piece of technology that can once and for all address the alert fatigue and overload problem.
Ashish Rajan: So now that you've figured out a way [00:04:00] to redeem yourself, and maybe to continue redeeming yourself: something that the industry has been talking about quite a bit has been the Agentic SOC.
And maybe if I can get a definition from you on what is Agentic SOC to start off with, at least we can level the playing field for everyone.
Edward Wu: Yeah, from my perspective, I consider an Agentic SOC to be a security operations center where human security engineers and analysts work side by side with AI agents.
And the analogy I oftentimes use is: think of AI agents as your foot soldiers, and then your human engineers and analysts working as generals directing the foot soldiers, as well as special forces tackling complex missions that foot soldiers are not well equipped for.
Ashish Rajan: Oh, wait, what about SOAR and automation and stuff?
I thought that was the promised land for, I guess, what the world could be with AI.
Edward Wu: Yeah, so the challenge with [00:05:00] most SOAR technologies is they are leveraging playbooks. And the challenge with playbooks is, at the end of the day, playbooks are if-else decision trees. And one of the key components that SOCs need automation for is alert investigations.
But if you actually look at alert investigations, they actually require a lot of improvisation as well as dynamic planning, because how you investigate a security alert is not the same every single time. You have to adjust your techniques, you have to pull in different data sources, depending on the unique situation of each alert, and this is where
traditional SOAR playbooks really struggle to fully automate alert investigations. But with large language models and AI agents, these technologies are capable of dynamic planning, so they [00:06:00] can look at an alert and tailor an investigation that's specific to the situation of that particular alert.
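To make that contrast concrete, here is a minimal, hypothetical sketch, not any vendor's actual implementation: a SOAR-style playbook encoded as a fixed if-else tree, next to an agent-style loop where a language model chooses the next investigative step based on the evidence gathered so far. The function names, tool registry, and decision format are illustrative assumptions.

```python
# Illustrative sketch only: contrasts a static SOAR-style playbook with an
# agent-style loop that plans dynamically. All names and tools are hypothetical.

def playbook_triage(alert: dict) -> str:
    # A classic playbook: a fixed if-else decision tree, identical for every alert.
    if alert["type"] == "phishing":
        if alert.get("sender_reputation", 0) < 20:
            return "escalate"
        return "close_benign"
    if alert["type"] == "impossible_travel":
        return "escalate" if alert.get("mfa_passed") is False else "close_benign"
    return "send_to_human"  # anything the tree never anticipated falls through

def agent_triage(alert: dict, tools: dict, llm_choose_next_step, max_steps: int = 20) -> str:
    # Agent-style loop: the LLM looks at the evidence so far and decides which
    # tool to call next, so each alert gets a tailored investigation.
    evidence = [{"observation": "initial_alert", "data": alert}]
    for _ in range(max_steps):
        decision = llm_choose_next_step(alert, evidence)  # e.g. {"action": "query_edr", "args": {...}}
        if decision["action"] == "conclude":
            return decision["verdict"]                    # "true_positive" / "false_positive"
        tool = tools[decision["action"]]                  # e.g. a SIEM search or an EDR lookup
        evidence.append({"observation": decision["action"],
                         "data": tool(**decision.get("args", {}))})
    return "send_to_human"  # step budget exhausted, hand back to the analyst
```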
Ashish Rajan: It is very interesting that you didn't mention getting rid of humans, 'cause a lot of people, when they think about AI coming into the SOC world, or I guess any world for that matter today, a lot of that falls under the umbrella of: oh, that's why humans would be replaced by AI. But even your definition of Agentic SOC was more around the fact that, hey, it's a superpower being given to a SOC analyst to be able to do a lot more in this world of AI. What do you see as the future of the SOC based on all the work that you guys have done so far?
Edward Wu: Yeah. What we have seen is ultimately it's about force multiplication. One thing we consistently ask every single security leader we run into is: what would you do with
10 or 20 additional security engineers? And I think every single one of them [00:07:00] has a lot of different projects in mind. We haven't run into a single security leader or practitioner who said, I don't need 10 or 20 additional security engineers.
Ashish Rajan: That's too many.
Edward Wu: So this is, again, yeah, this is where I do think cybersecurity is a unique domain,
yeah, where there is a rare win-win situation between the human workforce and AI augmentation, because ultimately the cybersecurity demand is so high that there are still tons of projects for the human engineers and analysts to work on. And in fact, those projects are more exciting, they're more intellectually rewarding,
and they're not as repetitive as looking at phishing emails over and over again.
Ashish Rajan: I hear you, 'cause it's a good point you raised there talking about exciting projects. 'Cause I always feel one of the reasons why people go down the path of exciting projects is, a lot of the time in the SOC,
especially in the teams [00:08:00] that I managed, I remember level one was primarily just bogged down with hundreds of alerts, whether it's firewall, phishing, you name it, the entire gamut just thrown at them. But on the other side, AI is also being looked at as something which is, hey, it's still evolving. We are not even there in terms of a final state yet.
So if I were to ask you, what's the reality versus what is not the reality, and probably, to a large extent, perhaps it's marketing overselling the idea of AI?
Edward Wu: Yeah, there has definitely been a lot of over-marketing within the Agentic SOC or AI SOC analyst space. Yeah. From what we have seen in the field, it is still technically impossible to have a fully autonomous SOC.
What I mean by that is you cannot have a piece of software that fully replaces the entire SOC, from tier one to tier three. I know there are a number of vendors claiming that. But from our perspective, it's unrealistic both in terms [00:09:00] of technology readiness as well as the real-world implications of having a piece of software running around and quarantining different users,
blocking different IP addresses at will, without any human oversight. What we have seen being practical is, again, delegating the manual, repetitive tasks such as initial alert investigation and triage to a piece of software, and having those AI agents essentially work for the human engineers and analysts.
Ashish Rajan: So to your point, then, practically AI itself is also not there. I guess we don't even have it from a non-cybersecurity perspective. Cybersecurity is not so special that suddenly we have solved the problem the world is still trying to figure out.
Which is reasoning.
Edward Wu: Yeah, exactly. Large language models do hallucinate. And some say hallucinations are fundamental to the models, while other [00:10:00] people might say hallucinations are caused by a lack of contextual information. And this is where, yes, even if we had AI superintelligence right now,
I think there would still be challenges for this AI superintelligence to fully automate the entire SOC, because in order to do that, the superintelligence needs to know everything about the organization: the preferences, policies, and practices. Unfortunately, a lot of that knowledge exists within people's heads.
Not everything is digitally accessible using an API. So one can argue, even with superintelligence, unless everything everybody knows about the organization is written down in an Excel spreadsheet or a Google Doc, it will still be very hard for that superintelligence to make all the right decisions.
Ashish Rajan: I think you've hit a good point there, maybe also because I get to work with a lot of [00:11:00] regulated financial industry CISOs and such. One of the challenges when I was talking to them about data security, and this is when we were talking about Zero Trust and all that, it was really interesting.
We spoke about the fact that some of the institutes have been there for hundreds of years, possibly back when it was all on paper, which probably was never digitized as well. To your point, a lot of the knowledge perhaps has, a, been left in people's minds, but, b, they left, they moved on,
or some perhaps may not be alive after hundreds of years have gone past, but we still use that same mainframe system that everyone wants to use. That, to your point, and I agree with you as you mentioned, is missing context. Even if we had the superintelligence, we would not even have the full context.
Edward Wu: Absolutely, because when you make a decision, it's about intelligence and access to context, right? Yeah. The smartest person on the planet might not be able to make the best decision if he or she was not informed.
Ashish Rajan: So what would this new workflow look like for SOC teams in an AI world?[00:12:00]
Edward Wu: Yeah, so similar to, again, the foot soldier and the general analogy. Yeah. Most AI SOC analyst products work by sitting in front of the teams of human analysts. So security alerts will go to these AI agents first. The AI agents will autonomously perform investigations, and then, within a couple of minutes, make a recommended conclusion on whether each security alert is a true positive or a false positive. At the same time, these agents will also generate investigation reports so that the human analysts and engineers can simply review those reports if they want to understand exactly why, or the reasoning behind, a specific
alert being considered as malicious.
Ashish Rajan: All right, because I was also gonna say, in terms of accelerating, to your point, the regular workflow that people have, at least it gets you to a point where you're not looking [00:13:00] at 20,000 alerts. Instead of looking at 50,000, 60,000, you're probably only looking at a hundred, or whatever the number may be.
That's basically what's practical today.
Edward Wu: Absolutely. So there are two big time savings. One is, let's say you have 50,000 alerts. With AI agents, first and foremost, the agents will go through all 50,000 alerts. So for every single alert, regardless of whether it's malicious or benign, there will be a pre-existing investigation report.
Yeah. So it's like somebody already prefilled all the exam questions, and you can just review the answers really quickly and move on. So that alone is a lot of time saving. Beyond that, there's a second level of time saving, which is once you start to build trust with the AI SOC analysts, you can actually just scroll past all the alerts that have been deemed benign. For example, for our technology, we have been really making sure the false negative rate is very [00:14:00] low. So our customers can truly trust Dropzone. When Dropzone deems certain alerts to be benign, they can safely ignore them and skip them.
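The workflow described here can be pictured as a simple data shape: every alert gets a verdict plus a written report, and the review queue changes depending on how much the team trusts benign verdicts. A rough illustrative sketch follows; the field names are assumptions, not any product's schema.

```python
# Hypothetical shape of an AI triage pass, and how an analyst queue might
# consume it. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class TriageResult:
    alert_id: str
    verdict: str          # "true_positive" or "false_positive"
    confidence: float     # 0.0 - 1.0
    report_markdown: str  # evidence chain and reasoning, for human review

def build_review_queue(results: list[TriageResult], trust_benign_verdicts: bool) -> list[TriageResult]:
    # Early on, everything is reviewed (the "prefilled exam answers" stage).
    # Once the team trusts the low false-negative rate, benign verdicts are skipped.
    if not trust_benign_verdicts:
        return sorted(results, key=lambda r: r.verdict != "true_positive")
    return [r for r in results if r.verdict == "true_positive"]
```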
Ashish Rajan: A lot of security operations people are quite smart, quite technical. They probably would hear this and also go, I'm smart enough, I can build an AI agent. What's the reality of that?
And I do wanna put in a caveat by saying these days, at least the way OpenAI, Claude, or whatever, all of these providers, talk about it is that, hey, you should be able to go from zero to a hundred in a matter of a few hours or a few minutes if we spend time on it. What's the reality? Because I think, and I would definitely recommend people check out the episode that you and I did for the AI Security Podcast last year,
in terms of what's the reality of a SOC team trying to integrate an agent themselves, in terms of the time invested: how long does it take to get to the point of what you describe as the current reality?
Edward Wu: Yeah. We have definitely run into a number of organizations who are trying to build this technology [00:15:00] in-house.
In our experience, there are a couple of challenges to making this work. First and foremost, alert investigation is very complicated. It looks simple, and subconsciously I think most of us know how to do it, but if you actually break down the different cognitive steps involved in an alert investigation, there are actually a lot of them.
To give you a concrete example, at Dropzone, for a typical alert, our system is making over a hundred distinct large language model invocations in order to autonomously investigate a single alert. So yes, there are a lot of agent tooling frameworks out there that allow you to stitch together multiple large language model invocations to perform complex tasks.
But we are talking about on the order of a hundred large language model invocations. So the complexity of orchestrating [00:16:00] all of those invocations, making sure the system is not going crazy, is very high. So I think that's one difficulty, because alert investigation is a very complex, you could say sometimes recursive, reasoning process. Versus, say, take these email addresses from this Excel spreadsheet, search these email addresses in Google or in a customer database, and pull out the companies they work for. That's maybe a 10-step process. In comparison, alert investigation is like a hundred-step process. So that's number one. Number two, which is also very difficult, is, again, how do you bring your organizational context to the AI agents? There are actually a lot of challenges there with regard to different companies keeping their organizational data in different shapes and formats. So how do you support that? And then number three, which is the elephant in the room, is integrating the [00:17:00] AI agents with different cybersecurity tools.
As far as I'm aware, most SIEMs don't have an MCP server yet. Most EDRs, most firewalls. I'm not aware of an MCP server for things like Azure Active Directory and stuff like that. So you have to hand-build a whole set of integrations with different security products. And the unfortunate reality is that every security product nowadays has its own specialized query language or search API, so you really have to teach the AI agents how to use them.
Making API calls to Splunk is not that difficult; there are Python libraries that you can use to talk to Splunk. Generating the correct SPL search queries is actually very difficult, because you not only need to master the search query syntax, [00:18:00] you also actually need to understand what the schema of the data is, so you know which fields to filter on when you are looking for logs associated with a specific user.
So these are all different types of complexities that need to be addressed if somebody were to build an AI agent for their own internal SOC use.
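A small sketch of the SPL point above: calling Splunk is the easy part, while getting a model to emit a query against the right index and field names is where the schema knowledge has to live. The schema values, prompt, and the `llm_complete` / `run_splunk_oneshot` helpers are all hypothetical stand-ins, not real product or SDK calls.

```python
# Illustrative only: the hard part is not calling Splunk, it is giving the model
# enough schema context to emit a *correct* SPL query. All names are assumptions.

FIELD_SCHEMA = {
    "index": "okta_logs",
    "user_field": "actor.alternateId",   # if the model guesses "user", the search silently returns nothing
    "outcome_field": "outcome.result",
    "src_ip_field": "client.ipAddress",
}

def build_spl_prompt(question: str, schema: dict) -> str:
    # The schema has to travel with the question, otherwise the LLM invents field names.
    return (
        "Write one Splunk SPL search.\n"
        f"Index: {schema['index']}\n"
        f"Username field: {schema['user_field']}\n"
        f"Outcome field: {schema['outcome_field']}\n"
        f"Source IP field: {schema['src_ip_field']}\n"
        f"Question: {question}\n"
        "Return only the SPL, no commentary."
    )

def investigate_user_logins(user: str, llm_complete, run_splunk_oneshot) -> list:
    # llm_complete and run_splunk_oneshot are hypothetical callables: one wraps an
    # LLM API, the other wraps a Splunk search client. Neither is a vendor API.
    spl = llm_complete(build_spl_prompt(
        f"All failed logins for {user} in the last 24 hours", FIELD_SCHEMA))
    return run_splunk_oneshot(spl)
```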
Ashish Rajan: Oh, those two definitely hit home. In a lot of organizations that I've spoken to, especially from the perspective of, sometimes we don't even know if the kind of logs we have are the right logs that we need to investigate with as well.
Edward Wu: Yeah. What's interesting is, oftentimes it's the more complex and sophisticated organizations that have, you can say, the willingness and interest to DIY something. But to some extent, because of the complexity and sophistication of their environment, the difficulty of DIY-ing an AI agent is actually exponentially higher,
versus maybe a small startup with 200 employees that's a Google Workspace [00:19:00] shop with AWS, just Mac laptops, and that's pretty much it.
Ashish Rajan: I think you made me remember something, 'cause the last time we had this conversation, obviously even then the same promise was being sold to every cybersecurity person out there: that you can solve most of your problems.
How far along are we? Obviously we understand the current state. I imagine a lot of people would've started doing agentic AI, or quote unquote agentic AI, back then as well. There is that whole challenge of, I guess, continuing to keep up with the models as they continue to improve, as a third thing I can think of on top of what you added.
Yep, I can build this amazing system, I can teach my SOC team how to be super AI engineers, but at the end of the day, I have the same problem that I had with cloud, which was: I need to stay updated on the LLM model that I'm using. Is this the right one? And, to your point, if I need hundreds of them,
I don't know. In terms of the change that you're seeing in the market for using AI in the SOC, how [00:20:00] much has that changed in the one year since we've had this conversation?
Edward Wu: Yeah. In terms of the actual utilization, I would say the last year has led to a lot of progress.
A year ago, most security teams were still skeptical of the technology; they had never heard of it. But fast forward a year later, at this point I will say we have seen a significant uptick in customer demand, in large part because, as the initial success stories of adoption of AI agents within the SOC start to spread,
more and more security teams are becoming interested. They want to learn about this technology. And then simultaneously, all of us have seen the news: the $500 billion Stargate project, right? All the headlines of Meta paying hundred-million-dollar sign-on bonuses to top researchers, stuff like that.
I think that continues to essentially [00:21:00] evangelize the technology. Yeah. I remember a time, like two years ago when I first started Dropzone, when I would say 70% of the security practitioners I met were skeptical of GenAI. They were like, hey, this is a stochastic parrot, and a stochastic parrot is not going to be able to do much or reason about complex things.
But now, I'm not sure if you saw the news, it turns out I think Google's models actually could have won the International Math Olympiad gold medal. Yeah. And I think this is where, again, a lot of progress has been made. And what we have seen is, nowadays, the vast majority of security practitioners, leaders, as well as people in other industries, understand that GenAI is real technology.
It is here to stay. And everybody has been very interested in learning what GenAI could do for my business, what GenAI could do for my team, what GenAI could do for my [00:22:00] profession.
Ashish Rajan: Has this changed the metrics as well? I am thinking about the people who are leading security operations teams or the lead engineers there.
How has this changed the metrics, I guess, pre-AI and now with AI?
Edward Wu: Yeah. Generally there are a lot of theoretical metrics a SOC should be tracking, right? Things like how many alerts the SOC has received, what is the mean time to resolution, what is the close rate.
And what we have seen is a lot of SOCs don't really have these metrics at hand. But from a high level, ultimately, when we look at AI agents, we see them helping two key metrics. One is the efficiency of the SOC. What that means is: how many hours of manual work does the team have to put in
to achieve the desired alert investigation coverage or timeliness? So [00:23:00] that's number one. Yeah, obviously with, again, AI agents or AI foot soldiers, we have seen security teams spending way less time on these manual, repetitive tasks. So maybe previously they were spending a hundred hours a week on alert investigations.
Now they only need to spend 20 hours or 10 hours. So that's number one, which is efficiency. And number two is effectiveness. And when we think about effectiveness, I think of it from two different axes. One is the response time. So one immediate benefit of having AI agents investigate security alerts is the reduction in mean time to response.
Because software is infinitely parallelizable. It's not taking lunch breaks, it never goes to meetings. That means when you have 10 alerts that show up in your alert queue, the AI agents are able to investigate all 10 alerts in parallel. [00:24:00] In comparison, if you have humans only, those 10 alerts might only get picked up by your team a couple of hours later.
So with AI agents, what we have seen is the MTTR generally can be immediately reduced to within minutes. And that is something that, regardless of how much budget you have, a security team simply cannot achieve, sub-10-minute MTTRs, purely with humans. So that's one big efficacy difference. Yeah. And this difference is huge for larger organizations, where every minute of the attacker running free within the environment makes the potential damage exponentially larger. Beyond that, we have also seen efficacy gains with regard to coverage. A lot of organizations have a lot of different alerts
that they have either tuned out or simply swept under the rug. [00:25:00] And historically the team has had to do that because they have to reduce the volume of alerts to the capacity of the team. But now, with AI agents and AI foot soldiers, security teams can really operate as if they have double, triple, 10x the capacity.
And this is where a lot of our customers are using our technology to look at security alerts that they previously tuned out because they deemed them to be too behavioral and too noisy, or security alerts they previously just swept under the rug. And obviously, as we know, the more security alerts you look at,
the higher the probability of catching attackers.
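The parallelism argument can be shown in a few lines: if each investigation is independent, ten alerts finish in roughly the time of the slowest one rather than queueing behind a single analyst. This is a toy sketch with the investigation itself stubbed out, not a vendor API.

```python
# Minimal sketch of the "infinitely parallelizable" point: ten alerts are
# investigated concurrently instead of waiting in a queue.
import asyncio
import time

async def investigate(alert_id: str) -> dict:
    # Stand-in for a real AI investigation; sleep simulates the work, scaled down.
    await asyncio.sleep(2)
    return {"alert_id": alert_id, "verdict": "false_positive"}

async def triage_queue(alert_ids: list) -> list:
    # All alerts start immediately; wall-clock time is roughly the slowest single
    # investigation, not the sum of all of them.
    return await asyncio.gather(*(investigate(a) for a in alert_ids))

if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(triage_queue([f"ALERT-{i}" for i in range(10)]))
    print(f"{len(results)} alerts triaged in {time.perf_counter() - start:.1f}s")
```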
Ashish Rajan: I love this. I love this also because one of the patterns, at least, that we have seen on the Cloud Security Podcast since late last year and the beginning of this year has been that, because of all the AI applications being put out there, a lot of SOC teams were being told: hey, now you're not just [00:26:00] looking at my on-premise, hybrid-world logs,
you're also looking at my cloud logs, and by the way, I've also dumped Azure and GCP in there as well. One of the reasons why a lot of SOC teams were complaining, and in many conversations that I had across conferences a lot of them complained about the fact that: great, now you've given me AWS or Azure or Google Cloud, and I have no idea what the right logs are that I'm looking for.
And, very rightly said, a lot of the time the MTTR is impacted also because there's a lack of knowledge in the team about whatever this new technology is. Even though there's plenty of documentation on it on the internet, that may not be relevant for the alerts you have found. Hey, is this a legit thing, not a legit thing?
I don't know. Have you found that as something that you guys are seeing as well?
Edward Wu: Yeah. One of the top challenges of staffing a SOC is that, because of the diversity of security tools and systems as well as IT systems, it takes a lot of different [00:27:00] expertise to be effective in a SOC.
For example, if you have a Splunk shop, then you need to have your team be very familiar with Splunk itself. You might use CrowdStrike, you might use AWS. So as the number of tools the SOC team has access to increases, you can say the number of key skill sets the team needs to master continues to increase. After a certain point, you are looking for a unicorn.
And this is where what we have seen is, generally within a SOC, not everybody is good at everything. Generally you have a Splunk guy or an AWS guy, right? You have a Google Workspace guy or kind of an Azure guy. Yeah. Or girl. And this kind of setup works well during business hours, when everybody is online.
But on evenings and weekends, you see this alert from AWS, [00:28:00] but your on-call engineer or analyst is more experienced with, for example, CrowdStrike and EDR malware reverse engineering. What do you do? So this is actually another area where, because we are building AI agents, we have pre-trained our AI agents to know how to investigate different types of common security alerts, as well as how to use all these security tools.
So we have seen these AI agents being a backstop during evenings and weekends, when the on-call engineer might not be the expert at that particular alert type.
Ashish Rajan: Would you say, and I'm glad you brought this up as well, are you also seeing that the incidents typically experienced are also evolving, in terms of volume? 'Cause a lot of reports these days are calling out the fact that, hey, AI attacks are on the rise on the internet.
How [00:29:00] true is that statement, based on the work that you guys have been doing?
Edward Wu: Yeah, overall, what we have seen is that the alert volume is definitely increasing steadily over the years, but we haven't seen a significant uptick, a 10x or 5x uptick, recently. I think part of this, and I sometimes tell this joke to my non-security friends, is that at this point it's obvious that GenAI can really force-multiply the attackers, but the reality is most attackers are still able to be very successful without GenAI.
So this is where, again, attackers will find the weakest link. Yeah. They will choose the path of least resistance. Yeah. So until the defenders really up-level our game, the incentive for attackers to learn GenAI and adopt new technology is not as high. So that's number one.
Number two is that a lot of the [00:30:00] techniques, for example within phishing email generation or spear phishing, are actually very similar to AI SDRs and AI outbounding. So I was also telling people that, instead of building AI-based spear phishing tools, a lot of this talent is starting AI SDR startups.
So they're using their knowledge and insights to really up-level outbound sales emails. And as a founder, every morning I wake up, I see a hundred emails trying to sell me stuff in my inbox. And I can absolutely tell that, within the last 12 months, the degree of personalization of outbound sales emails has leveled up drastically.
Nowadays, it's very common for me to see my team members' names in the subject line, our customers' names in the subject line, or recent awards or podcast appearances in the subject line. [00:31:00] Oh wow.
Ashish Rajan: Okay. And what about SOC teams themselves? For people who may or may not be using an agentic AI SOC today, how are SOC teams using AI at the moment?
What do you see them use? 'Cause, to your point, the adversaries are also using it, everyone's using it. Have you seen any examples of how SOC teams may already be using it in their organization? Perhaps not to solve complicated level one false positives, but are they using it otherwise as well?
Edward Wu: Yeah, definitely. Technologies like copilots and ChatGPT have really made a lot of these capabilities accessible to SOC team members. For example, I've heard a lot of success stories of just using ChatGPT to look at, for example, obfuscated JavaScript, or to help interpret or understand PowerShell scripts.
And I've also heard cases where folks will use ChatGPT to perform things like OCR, [00:32:00] to really understand what a particular image or attachment is saying. So I think, in the day-to-day work of a SOC analyst, there are small, tricky tasks that folks have been successfully having ChatGPT or copilots perform.
Oh, another aspect would be writing a report. ChatGPT and copilots are tremendous. You have an alert, you perform the investigation, you write down five bullet points, and you need, like, two paragraphs. Yeah. They're tremendous at hydrating those five bullet points into a full, well-written paragraph or two.
Ashish Rajan: I can understand that, 'cause I used to hate, at least my team used to hate, writing the post-incident response document along with the entire timeline of: hey, nine o'clock Ashish did this, 9:01 he said we should use an agentic SOC, but whatever. But for people who may already be thinking about, hey, how much is this investment going to be in terms of the [00:33:00] time, energy and everything: what are the maturity levels like in terms of starting to uplift your existing SOC team? 'Cause most people already have a SOC team. Either they've outsourced it or they already have an in-house one, but it may or may not be AI-ready or using AI today. If they wanted to go down that path of being, say, agentic in the future, even if it is to build your own agentic AI SOC capability tomorrow inside your own organization for whatever reason,
what are the different levels of maturity that people can expect to go through, and some of the challenges they would face as they go through that maturity?
Edward Wu: Yeah. So the way we look at it is through four levels. It's similar to how you would actually train or onboard a new team member.
So I think the first level of maturity is what you can call the independent contractor. What that means is, when you have specific tasks such as write this report for [00:34:00] me, or summarize this PowerShell script for me, you can use chatbots. In my mind, chatbots are like part-time contractors, where you give them a very specific task that's not too difficult and you have them do it.
So the first level is to start using large language model tools on specific tasks that you don't want to spend time on. The second level is kind of treating AI agents or agentic tools as an intern. So you are giving them full intern projects. And this is where we start to see that you can have agentic AI systems maybe start by looking at all the alerts for you,
but with you still reviewing the findings and the conclusions of every single analysis. So this is where, yes, you are trusting the agent to do the work, but at the same time you are reviewing and validating every [00:35:00] piece of output. So we think of this as stage two. And then stage three is when you start to treat your AI agents as a senior individual contributor on your team.
What that means is you start to build more confidence in the technology. The technology is accurate enough in terms of its analytical output that you can start to trust it without reviewing every single piece of the output. And that's where, again, you are treating agentic AI systems as additional senior individual contributors.
You might coach it to do certain things in a specific way, but you are not handholding it, you are not standing behind its back, breathing down its neck, checking everything. And then the fourth stage is, when the technology is mature enough, you start to really think about what it could do,
or what it could provide, [00:36:00] that your existing team is not capable of. So it's treating AI agents like additional members of your team with genius-level intelligence, right? For example, large language models are really good if you give them 2,000 security alerts and say, go find patterns within those 2,000 security alerts.
I don't think anybody on the planet enjoys doing that kind of work. But GenAI doesn't mind, and oftentimes it's tremendous at identifying patterns and connecting dots in a very noisy data set. So these are the kinds of projects that are beyond typical human intelligence or interest but could be very valuable to the organization.
So, in short, we think about the maturity as: first, you treat AI agents as part-time contractors, and then you treat them as interns. After that, you treat them, or up-level them, as senior individual contributors, and then ultimately they become additional [00:37:00] members of your team with genius-level intelligence.
Ashish Rajan: Interesting. And as more people try to adopt AI, I'm sure they'll go through these stages as well. It's definitely aligned with a lot of the thinking that is in the market, and it's what I feel as well. And people may be at different stages of this maturity. A lot of people may even go down the path of,
I guess, at this point in time, a lot of supply chain questions come up quite a bit. There's a lot of third party involved in terms of AI; using agentic AI these days is quite a third-party question as well. How does one, and especially for something like security operations, where, to your point, every minute counts in most enterprises these days,
how does one increase the level of trust in the triage or the output of the AI, or even agentic AI for that matter, in a world where hallucination is a thing and, to your point, you need hundreds of LLM invocations to go through one triage to figure out whether it's a false positive or a true positive?[00:38:00]
What are ways that you have found that people can increase trust in the output of AI?
Edward Wu: Yeah, trust is a little bit hard to measure, right? Because it's a subconscious measurement, right? There's no, oh, I see this, the AI agent gained two additional trust points.
So it's a subconscious thing. In my mind, how you build trust with an AI agent is actually very similar to how you build trust with your human colleagues, right? This is where it does take time. You cannot trust on the first day.
But as you work with the technology and you see, consistently, day in and day out, that it's doing the right thing, you start to trust it more. And this is where, for mature agentic AI technologies, oftentimes you see those technologies, like ours, allow [00:39:00] operators or human engineers to specify the scope of authorization.
What I mean by that is, initially when our technology, for example, is deployed in an environment, our scope of authorization involves: look at security alerts, make sense of them, generate a report, and send those to me. That's the initial scope of authorization. But over time, as the security team builds more trust,
that might change to: okay, look at all the alerts, make sense of them. If the alerts are benign, close the case with your notes and move on. But if the alerts are urgent and malicious, escalate them to me. That's the second level of trust, you can say. And then the third level of trust might be: look at all the alerts, make sense of them, dismiss the benign ones,
escalate the malicious ones. But, by the way, if a malicious alert involves these assets [00:40:00] or these alert types, I give you the authorization to lock this user's account or quarantine this particular endpoint or host on the network. So I think a big part of this actually has to do with the fact that trust is not a binary thing, it's a spectrum.
And what we have seen is it takes time to build trust, and as security teams start to build trust with the technology, the scope of authorization will continue to expand.
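One way to picture the expanding "scope of authorization" is as explicit policy configuration that a team widens over time. The structure below is purely illustrative, an assumption for the sake of the example, not a description of how any product encodes it.

```python
# Hypothetical policy objects for the three trust levels described above.
# Structure and field names are illustrative, not a product schema.
AUTHORIZATION_LEVELS = {
    "level_1_report_only": {
        "investigate": True,
        "close_benign": False,        # every conclusion still goes to a human
        "escalate_malicious": True,
        "response_actions": [],       # no autonomous containment
    },
    "level_2_close_benign": {
        "investigate": True,
        "close_benign": True,         # benign alerts closed with notes attached
        "escalate_malicious": True,
        "response_actions": [],
    },
    "level_3_scoped_response": {
        "investigate": True,
        "close_benign": True,
        "escalate_malicious": True,
        # containment allowed only for named alert types / asset groups
        "response_actions": [
            {"action": "lock_account", "when_alert_type": ["credential_stuffing"]},
            {"action": "quarantine_host", "when_asset_group": ["workstations"]},
        ],
    },
}
```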
Ashish Rajan: I love those points, but I also wanted to add something, maybe something that the foundation model companies are preaching in terms of transparency as well.
Is there a role for transparency in this? Considering a lot of people may be using agentic AI or any other kind of AI SOC today, which may be third party, would transparency be helpful in at least covering some of that distance?
Edward Wu: Yeah, absolutely. I think at this stage transparency is table stakes.
[00:41:00] If you have a product that can investigate security alerts, it has to provide the full evidence chain: all the reasoning and the metadata gathered that led to that particular determination. So I would say, at this stage, transparency is not a nice-to-have; it is table stakes in order for the technology to show its work and be trusted by the security teams.
Ashish Rajan: Yeah, and I guess to your point, the backbone of most security incidents is the chain of custody as well. Like, how did it get to the conclusion, and even the decision that was made that, hey, this is a false positive, or this is a true incident and Ashish should wake up in the middle of the night?
Why was that decision reached, and how? To your point, that goes a long way in building that trust. I'm curious, obviously your focus over the past two, three years has been the AI SOC space, and you obviously have a lot of patents as well. So I'm curious, where do [00:42:00] you see AI impacting other parts of cybersecurity?
What other areas do you feel are up for disruption?
Edward Wu: Yeah, good question. One common strategy startup founders in any vertical have been utilizing is to just look for any chunk of manual and repetitive work and build AI agents for it. If you turn on a searchlight for manual and repetitive work,
I think alert investigation is an obvious chunk, but there are a whole lot of other types of this work within cybersecurity. Vulnerability management is pretty manual and pretty repetitive. I know there are a lot of GRC projects and tasks that are very manual and repetitive. Patching is very manual and repetitive.
Reading threat actor reports that are relevant to your business is very manual and repetitive. Reviewing code is manual and repetitive. Running Metasploit [00:43:00] against 500 public-facing IP addresses is also manual and repetitive. So I think there are a lot of other chunks of manual and repetitive work in security, and all of them
should be ripe for automation.
Ashish Rajan: Actually, that reminds me, 'cause one of the things that a lot of regulatory people care about, and that has been top of mind for a lot of people, has been whether third parties use their data, as in the customer's data, to train their models. A, is it true that it's required, or can it be done without it? And b, how does one make sure, or verify, that it doesn't happen? Has there been a way around this?
Edward Wu: Yeah, absolutely. Obviously there are a number of different ways to train models or improve the capabilities of the systems.
Some of them involve taking raw inputs and outputs, yeah, and raw data, including PII, from the [00:44:00] customers. But there are definitely techniques around it. For example, at Dropzone we use a single-tenant architecture, so our customers' sensitive data is all segmented in dedicated compute, network, and storage components.
But beyond that, when we improve our system, we only utilize de-identified telemetry from our customer base. And our customers can audit exactly what we are sharing to the backend, as well as opt out. And you might ask, how is de-identified telemetry sufficient?
What we are building is very similar to a technique called federated learning, and the best example of federated learning is how doctors up-level each other. I think if you look at how doctors improve their ability to diagnose diseases, they leverage de-identified case studies [00:45:00] continuously.
So it would be like: a 70-year-old male in upstate New York with symptoms A, B, and C and test results X, Y, and Z was diagnosed with this rare disease. And medical professionals have a whole database of these de-identified telemetries that allows them to improve the discovery of new diseases as well as improve diagnostics globally.
To some extent we're doing very similar things, where we are gathering de-identified telemetry of different types of alerts and different types of determinations our system has seen in the field, and feeding those back into our product. But there is no PII involved. And these de-identified,
you can say, cases or scenarios help to up-level our system as it sees more variations of security alerts and more variations of different [00:46:00] organizational setups and environments.
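As an illustration of the de-identified telemetry idea, and not a description of Dropzone's actual pipeline, the general shape is to pseudonymize or drop identifiers before a case record ever leaves the tenant, while keeping the fields that make the case useful for learning. The field names and salting scheme below are assumptions.

```python
# Illustrative only: the general shape of de-identifying a case record before it
# is shared for cross-customer learning. Not any vendor's implementation.
import hashlib

PII_FIELDS = {"username", "email", "hostname", "src_ip"}

def pseudonymize(value: str, salt: str) -> str:
    # Stable per-tenant pseudonym: the same user maps to the same token locally,
    # but the raw identifier never leaves the tenant.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def deidentify_case(case: dict, tenant_salt: str) -> dict:
    out = {}
    for key, value in case.items():
        if key in PII_FIELDS:
            out[key] = pseudonymize(str(value), tenant_salt)
        else:
            out[key] = value  # alert type, verdict, evidence categories, etc.
    return out

example = {"alert_type": "impossible_travel", "verdict": "false_positive",
           "username": "jdoe", "src_ip": "203.0.113.7"}
print(deidentify_case(example, tenant_salt="tenant-123"))
```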
Ashish Rajan: Oh man, thank you for sharing that, 'cause I think it doubles down on that transparency thing that I was talking about earlier, and you were talking about this as well.
What I'm taking away from this is that one of the questions people can ask whatever third party they go with who claims to have AI is: hey, what are you doing in the background? How are you doing this? And, to your point, it can be simple to explain without going into becoming a data scientist.
And without the, hey, you wouldn't understand this because the models are complicated and whatever behind the scenes. I appreciate you saying that, because it adds a lot more weight to the whole transparency thing, as well as to: hey, it's great, but what is it doing in the background? Am I talking to a black box, et cetera.
Am I talking to a black box, et cetera. But final question. In terms of the SOC teams preparing for this AI world, what do you feel the level one SOC is in the future? As it we, as you progress to this more agentic AI happens, AI gets [00:47:00] integrated, whether through MSSPs or people buy a product or people just build it internally.
Where do you see this SOC space go, at least for the level one?
Edward Wu: Yeah, good question. I definitely get asked this question a lot. In my mind, what will happen with the Level 1 SOC analyst is that level one as a job role will, over time, be substituted by software. But what that means is folks who previously worked in this role will be up-leveled to level two or level three analysts, or they will transition to other parts of security. I'm sure most security teams would love more red teamers. They would love more security architects who can push through the organizational boundaries to evangelize and implement technologies like Zero Trust networking, or implement [00:48:00] hygiene
improvements like: hey, business, let's stop making a thousand Windows workstations accessible via RDP from the public internet, right? Let's build a better solution to this problem without excessively exposing our systems. So this is where, again, what I'm talking about is the job role itself will be substituted. But the expectation is, because there are so many additional projects and needs and improvements required within cybersecurity, folks who are currently Level 1 SOC analysts, I would expect all of them to be able to find more interesting roles within cybersecurity. Yeah. To work on.
Ashish Rajan: Yeah. And, to your point, I'll probably throw another analogy in here: people who were doing typewriter work. It's not that they were replaced, they were re-skilled into becoming better on computers and [00:49:00] Microsoft Word and everything else that came after.
It's not that suddenly we don't need typing. We still need typing; we are typing right now as we're doing this. Yeah, I like the comparison as well. Then, those are most of the technical questions I had. I've got three fun questions for you as well, so I hope people will get to know you a bit more.
First one being: what do you spend most time on when you're not trying to solve the agentic AI SOC problems of the world?
Edward Wu: Yeah, great question. Obviously, in addition to my family, I love racing, so I spend a lot of time sim racing, and I also drive my car on track to practice as well.
Ashish Rajan: Oh, nice. So you go to track days. Correct. Oh, wow, man. I think you definitely should put that on there as well. The second one: what is something that you're proud of that is not on your social media?
Edward Wu: That is a good question. Probably one thing I was really proud of was, in the early days of StarCraft II when it first came out, I was ranked [00:50:00] like top 100 in the Masters league. I was quite good at it back then. Oh, wow. But nowadays, obviously, I'm a lot worse. But at the peak of my performance, I was quite good at StarCraft II.
Ashish Rajan: Wow. There you go. Because that's a pretty intense game as well, by the way.
So good on you for being able to be good at this as well. Final question, what is your favorite cuisine or restaurant that you can share with us?
Edward Wu: Ah, I would say my favorite restaurant at this point is a Chinese hot pot place in Seattle. It's called Chowing. It's insanely spicy, but it's very good.
Ashish Rajan: I will add that to my list for the next time I come in. I love spicy, but I think I haven't had hot pot for a while as well. I haven't had a great one in London yet, so hopefully when I make my way to Seattle. But dude, that's most of the questions I had. Where can people find out more about you and all the work that you guys are doing, from a transparency perspective, on this particular space and agentic AI?
Where can they find all the information about [00:51:00] this?
Edward Wu: Yeah, they can check out our website at Dropzone, so D-R-O-P-Z-O-N-E dot AI. And one thing we have on our website is an actually public and ungated test drive of our product, so anybody on the internet can jump into our product without talking to a salesperson, by filling out a simple form with three questions.
Ashish Rajan: Oh, I will put that link in there as well then. And where can people connect with you to talk more about this particular space and where it's going?
Edward Wu: Yeah, I'm mostly active on LinkedIn, so folks can definitely find me. They can search for Edward Wu and Dropzone. I expect I should be the first in the search results.
Ashish Rajan: Perfect, I will drop that in there as well, man. But dude, thanks so much for doing this, and thank you for being, quote unquote, transparent about the AI space as well, about what the reality is versus the marketing hype around it. I look forward to more conversations with you, but all the best, man.
Thanks so much for doing this interview, and I look forward to talking to you more about this. [00:52:00] Thank you for having me. No problem. Alright, thanks everyone, talk to you next time. Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv
We are also publishing these episodes on social media as well, so you can definitely find these episodes there. Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way up to what is CNAPP, or whatever new acronym comes out tomorrow.
Thank you so much for supporting, listening and watching. I'll see you next time.