Is the AI SOC analyst just hype, or is there measurable ROI? We spoke to Edward Wu, founder of Dropzone AI, about this, and he shared insights from a recent Cloud Security Alliance (CSA) benchmark report that quantified the impact of AI augmentation on SOC teams. The study revealed significant improvements in speed (45-60% faster investigations) and completeness, even for analysts using the technology for the first time.
Edward contrasted the "robotic" limitations of traditional SOAR playbooks with the adaptive capabilities of agentic AI systems, which can autonomously investigate alerts end-to-end without pre-defined scripts. He shared that while AI won't entirely replace human analysts ("That's not going to happen"), it will automate much of the manual Tier 1 toil, freeing up humans for higher-value roles like security architecture, transformation, and detection engineering.
Questions asked:
00:00 Introduction
02:40 Who is Edward Wu?
03:30 The Evolution of AI Agents Since ChatGPT
04:35 Surprising Findings from the CSA AI SOC Benchmark Report
06:40 Why Has Traditional Security Automation (SOAR) Underdelivered?
09:30 How AI SOC Analysts Differ from SOAR Playbooks
11:30 Does Agentic AI Reduce the Need for Security Data Lakes?
13:20 The Evolving ROI for SOC in the AI Era
14:50 ROI Use Case 1: Reducing Alert Investigation Latency
15:15 ROI Use Case 2: Increasing Alert Coverage (Mediums & Lows)
16:20 ROI Use Case 3: Depth of Coverage & Skill Uniformity
18:15 Achieving Both Speed and Thoroughness with AI
19:40 How Far Can AI Go? Detection vs. Investigation vs. Response
21:35 AI SOC Hype vs. Reality: Receptiveness and Trust
24:20 The Future Role of Tier 1 SOC Analysts
27:40 What Scale Benefits Most from AI SOC Analysts? (Enterprise & MSPs)
29:00 The Build vs. Buy Dilemma for AI SOC Technology ($20M R&D Reality)
33:10 Training Budgets: What Skills Should Future SOC Teams Learn?
Edward Wu: [00:00:00] Is there something wrong in the way we have done SOC so far?
The challenge with these types of automation is they are very robotic. From our perspective, the technology has underdelivered compared to the promise that was made. People do believe that the AI hype and AI bubble is about to burst, or whatever. Can AI SOC analysts automate everything in a SOC, so that as a security leader you can fire everybody in your SOC? That's not going to happen. I do see a world where in the future there will not be that many tier one security analysts as a job role. What we will have is a whole lot more security architects, a whole lot more, you know, security transformation folks.
Ashish Rajan: Is agentic AI for the SOC complete hype, or is there actual, measurable ROI from it? Well, I may not be the best person to answer this, but I found Edward Wu from Dropzone, who has a couple of patents, one of them specifically on anomaly detection using device relationship graphs. So I thought, why not have him talk about this?
Dropzone worked with [00:01:00] Cloud Security Alliance recently on a report on how much impact an AI-augmented SOC environment can have on detection accuracy, speed, and the results you can expect. So we spoke about the hype and the questions you may have about AI: whether it's going to take away jobs, what the ROI should look like in the AI world we are moving towards, what is actually true about AI usage today, whether we can use it for detection, response, and investigation, and which parts are truly just not there yet. You'll walk away from this conversation with some clarity on where things stand today: is AI in the SOC really hype, or are people actually solving problems with it?
If you know someone who's trying to build a SOC team for the world of AI, or thinking about what that could look like, I would definitely recommend checking out this episode and sharing it with someone who's doing the same. And in case you've been following us for some time and have been finding all the episodes, or [00:02:00] some of the episodes, of Cloud Security Podcast valuable, I would really appreciate if you could take two seconds to hit that subscribe or follow button. It helps notify new people about the episodes we release and helps more people like yourself as well. Thank you so much for all the support you have shown us, and I hope you enjoy this episode with Edward Wu. I'll talk to you in the next episode. Peace.
Hello and welcome to another episode of Cloud Security Podcast. I've got Edward Wu with me. Hey man, thanks for coming on the show. I know you've been on the podcast before, but for people who may not have caught the previous episode, could you share a bit about yourself and your background?
Edward Wu: My name is Edward Wu. I am the founder and CEO of Dropzone AI. We are a cybersecurity startup that's leveraging large language models to build essentially AI security analysts. So we're building a piece of software that can autonomously investigate security alerts end-to-end, replicating the techniques of expert human security analysts.
My personal background: before founding Dropzone, I worked in detection and response and [00:03:00] spent eight years more or less generating alerts for different enterprises. During that time I came to the realization that most security teams already have too many alerts. What they really need help with is the processing of those alerts.
Ashish Rajan: And it's clearly been a while since the AI, I wanna say, boom, for lack of a better word. You and I spoke on the AI Security Podcast as well, and we're speaking over here too. How much has the AI wave, or AI agent wave if you wanna call it that, evolved in the past three years since ChatGPT came onto the scene?
Edward Wu: Yeah, a lot has happened. ChatGPT to some extent was the first successful application of large language models. But in the last two to three years, we have seen a lot more adoption of large language models in other applications as well. I think at this point in time, any engineering leader who is [00:04:00] not using agentic coding tools probably has their job security in question. Every single company expects developers to use AI coding tools. Every single person that's running contact centers expects the contact center to leverage a lot of GenAI. Everybody who's doing web design is expected to leverage image generation models to help with creatives, building prototypes, and drafting out different ideas.
Ashish Rajan: So, this report that you and I were linked with, the benchmark report that CSA came out with and that you guys were involved in as well. I think there were some interesting findings in terms of how much an AI-assisted analyst was impacted. I'm curious how humans, or human analysts in this case, were interacting with AI or using it.
Was there anything surprising [00:05:00] in how it was being used by humans?
Edward Wu: Yeah. Ultimately nothing was very surprising. It's kind of expected that with AI augmentation, human analysts are more effective, and they're more efficient as well. Mm-hmm. And they are less fatigued. From our perspective, maybe the biggest surprise is that the actual magnitude of the differences was honestly larger than we originally anticipated.
Because keep in mind, we recruited 148 participants. They are operational, in-the-seat security analysts, and this was their first time using our product. So we're looking at the impact of AI assistance the first time they have even, you know, experienced such technology. So I would say the magnitude of the improvement was actually much bigger than we originally anticipated for first-time [00:06:00] users.
Ashish Rajan: Wow. 'Cause I read the whole 45 to 60% faster and being able to be more complete. I guess, how broad was the test set, and how broad was the skillset of the participants?
Edward Wu: Good question.
Um, so with regards to the skillset, we had analysts across tier one to tier three. If I remember correctly, most of the analysts were on the more junior side. And then with regards to the test cases, we used two security alerts that are pretty common. One is an AWS S3 bucket policy change alert, and the second is a Microsoft Entra ID failed login alert.
Ashish Rajan: So based on, I guess, the spike in how quickly people were able to investigate with AI assistance, and to your point that people are adopting more AI: is there something wrong in the way we have [00:07:00] done SOC so far? And I'll include detection in the broader circle, because security automation has been a conversation for some time. Has that changed, or does that just not work in the AI world today? Is that why we saw such dramatic improvements?
Edward Wu: Yeah. Security automation has been around for a long time. Historically, the way you automate different tasks within alert investigation or response is by building playbooks. You can build playbooks using code, you can also build playbooks using drag-and-drop interfaces, but at the end of the day, you are essentially telling computers exactly: this is the API to call, these are the parameters you're putting into this API, and do that like 10 times.
The challenge with these types of automation is they are very robotic. And if we look at the type of tasks we're trying to automate within the [00:08:00] SOC, like alert investigations, being a SOC analyst or investigating alerts requires somebody to go through a sequence of steps that actually resembles being a detective in the physical world. You have to look at the evidence, you have to look at, you know, blood stains or fingerprints on the window trims, and start to formulate hypotheses and gather additional evidence to validate or invalidate those hypotheses. It's impossible to be a detective in the physical world just by following a sequence of if-and-else statements.
That's actually why, if we look at security automation, yes, it's a very well-known concept, but from our perspective, the technology has underdelivered compared to the promise that was made. And it's not surprising that no security automation vendor [00:09:00] ever became public. Some of them got acquired, some of them are still moving along, but none of them went public, because playbook-based automation wasn't able to provide enough automation to really move the needle for SOC teams.
But in comparison, with large language models and agentic AI systems, now we are able to build systems that investigate security alerts end-to-end automatically. You don't need to write any playbooks, you don't need to write any code. The system is pre-trained out of the box. You point it at an S3 bucket alert or an Entra ID alert, and it already knows how to query the SIEMs. It already knows how to make API calls to look at CloudTrail, how to identify S3 bucket metadata, and everything in between.
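To make the contrast concrete, here is a minimal sketch of the two approaches. This is illustrative only, not Dropzone's implementation; the connector functions and the llm() helper are hypothetical stand-ins.

```python
# SOAR-style playbook: every step, API call, and branch is hard-coded in
# advance, so it breaks the moment the alert deviates from the script.
# query_cloudtrail() and lookup_iam_user() are hypothetical connectors.
def playbook_s3_policy_change(alert):
    events = query_cloudtrail(bucket=alert["bucket"], hours=24)  # step 1, always
    owner = lookup_iam_user(alert["principal"])                  # step 2, always
    if owner["mfa_enabled"] and len(events) < 10:                # fixed if/else logic
        return "benign"
    return "escalate"

# Agentic loop: the model chooses the next tool to call based on the
# evidence gathered so far, like a detective forming hypotheses.
def agentic_investigation(alert, tools, llm, max_steps=10):
    evidence = [f"Alert: {alert}"]
    for _ in range(max_steps):
        decision = llm(
            "Given the evidence so far, either call one of these tools "
            f"({list(tools)}) with arguments, or return a final verdict.",
            context=evidence,
        )
        if decision["action"] == "verdict":
            return decision["conclusion"]        # disposition plus rationale
        result = tools[decision["tool"]](**decision["args"])
        evidence.append(f"{decision['tool']} -> {result}")
    return "inconclusive: escalate to a human analyst"
```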
Ashish Rajan: That's interesting you say that. 'Cause I mean, I'm no [00:10:00] data scientist, and obviously you're the person with the patents here, so I'm curious. With the previous generation, the running joke was that SOAR became the sore topic that no one spoke about. Obviously those tools were doing log analysis and all of this. What's different about AI that it's able to do the same thing better? Is it just because it can process log data, or is there more?
Edward Wu: Yeah. If we just look at log analysis, for example, during the investigation of an Entra ID login alert, one question one might have is: has this person logged in previously from the same IP address in the last, maybe, seven days? Historically, in order to automate this, one had to write a query template against whatever SIEM is in the environment and actually codify that in the playbook. But with large language models, they're capable of programmatically generating those [00:11:00] queries without any templates or prior examples. So this drastically reduces the effort required to automate this task. Very similar, again, to ChatGPT, which is general-purpose and open-ended: you can ask it different questions and it's able to answer them, versus traditional automation, where you have to build a playbook for every single distinct sequence of actions you want to take.
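As a rough illustration of that point, the snippet below turns the natural-language question into a SIEM query on the fly rather than maintaining a hand-written template per SIEM dialect. The llm and run_siem_query callables are assumptions for the sketch, not a real product API.

```python
def seen_ip_before(llm, run_siem_query, user: str, ip: str, dialect: str = "SPL"):
    """Answer 'has this user logged in from this IP in the last 7 days?'
    by letting the model write the query for whatever SIEM is deployed.
    llm and run_siem_query are hypothetical helpers passed in by the caller."""
    prompt = (
        f"Write a {dialect} query over the sign-in logs that returns any "
        f"logins by user '{user}' from source IP '{ip}' in the last 7 days. "
        "Return only the query text."
    )
    query = llm(prompt)            # e.g. "index=signin user=... src_ip=... earliest=-7d"
    rows = run_siem_query(query)   # execute against the org's SIEM
    return len(rows) > 0           # prior logins suggest a familiar IP, not a takeover
```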
Ashish Rajan: Is this also where the data lake piece comes in? 'Cause I guess there was kind of that pit stop in between, where a lot of people were like, oh, I should just build a data lake, and that should be a good enough starting point to help with my SOAR problem, because I don't have enough data to begin with. So where does the data lake fit, or not fit, into all of this, for people who may be thinking that was the answer?
Edward Wu: There are probably a lot of different reasons why people decide to build a data lake. But I know one of the [00:12:00] reasons is that historically, in order to perform analysis on data across different sources, you had to aggregate it first, you had to normalize it first. Essentially, you had to put that data in the same spot before you could analyze it. That's kind of why you want to build a data lake: it enables you to aggregate data from your endpoints, from your identity systems, from your cloud workloads, all in a single place.
But with large language models and AI agents, you do not need to pre-aggregate or pre-normalize the data anymore, because these agentic systems can pivot across distinct data sources and piece together that information without requiring you to put it all in the same place. We have actually seen some examples of agentic AI systems [00:13:00] reducing the need for a data lake, because pre-aggregation and pre-normalization are no longer prerequisites for complex analysis.
Ashish Rajan: And so does that mean the ROI for SOC, which is kind of where the topic of this episode was inspired from, and what the report itself is all about: is the AI SOC agent hype, or is there actually measurable ROI from it? Is the ROI for SOC supposed to evolve as we've now moved into the AI world?
Edward Wu: Yeah. The study only evaluated one of the ROIs of agentic AI systems in the SOC, which is augmenting human analysts. So human analysts are more efficient, meaning they can go through more alerts within the same number of hours, and they are a lot more effective, meaning they are able to get to the ground truth and actually have higher accuracy.
[00:14:00] But beyond that, we have also seen other aspects of ROI from AI SOC technology. For example, we have seen customers and organizations leveraging AI SOC analysts to drastically reduce response time, where they want to get to a place where every single alert is looked at and reaches a definitive disposition within, like, 20 minutes. Historically, it was almost impossible to do that, regardless of how much money you have and how many humans you can hire. But software can immediately start an investigation on a new alert within 20 seconds. The software is not taking lunch breaks, the software is not going to get stuck in team meetings or all sorts of different administrative activities, and a piece of software can investigate 20 alerts in parallel. So that's another use case: reducing the [00:15:00] latency between an alert being flagged in the first place and the alert being dispositioned, so that remediation or containment activity can start if the alert is pointing at a true positive.
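The parallelism claim is easy to picture as a worker pool over an alert queue. A minimal asyncio sketch, assuming investigate() stands in for a full end-to-end agentic investigation:

```python
import asyncio

async def investigate(alert: dict) -> str:
    # Placeholder: a real agent would query SIEM/EDR/cloud APIs here and
    # reason over the results before returning a disposition.
    await asyncio.sleep(1)  # stands in for minutes of tool calls
    return "benign"

async def run_soc(alert_queue: asyncio.Queue, workers: int = 20) -> None:
    """Twenty concurrent investigations; a new alert is picked up within
    seconds of arriving, with no lunch breaks or team meetings."""
    async def worker():
        while True:
            alert = await alert_queue.get()
            print(alert["id"], "->", await investigate(alert))
            alert_queue.task_done()

    tasks = [asyncio.create_task(worker()) for _ in range(workers)]
    await alert_queue.join()   # wait until every queued alert is dispositioned
    for t in tasks:
        t.cancel()
```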
Another use case we have seen is actually around the ability to look at a lot more alerts. The unfortunate, brutal reality is most SOCs are not looking at every single alert that's coming into their system. Most SOCs are probably doing their best with the criticals and the highs. If they have a lot of resources, maybe they'll look at the mediums every now and then. Most SOCs are not paying any attention to the lows and the informationals. But we also know that when we look at historical breaches, a good [00:16:00] percentage of those breaches actually had signals in the medium, low, and informational alerts. So by having AI SOC analysts that are able to go through the same number of alerts but at a significantly lower price point, it enables organizations to look at double, triple, or quadruple the number of alerts they historically could, without breaking the bank.
Ashish Rajan: Interesting. And I guess maybe there is something to be said about depth of coverage as well. I'll let you talk about it, 'cause I feel like that was an area I definitely saw back when I was building a SOC team: I would have a team member who would be really good with AWS but have no clue about Azure or on-premise technology or something. Is there something to be said about depth of coverage?
Edward Wu: Yeah, maybe two components to that. Number one is a lot of SOCs are 24/7, but the reality [00:17:00] is not every member of the SOC team is equally good at every single type of alert or every single type of technology. You know, within a team there's always a CrowdStrike gal or a Splunk guy, and when you are on a rotation, attackers and alerts are not waiting for the Splunk guy to be on call before an alert that requires in-depth Splunk analysis shows up. So this is where the unevenness of team members' skillsets sometimes causes a big challenge, especially for smaller teams. But the beauty of a piece of software is, once you've taught it how to use Splunk, how to use Microsoft Sentinel, how to look at CrowdStrike, how to look at Palo Alto, it becomes equally good at everything.
So that really helps to even out some of [00:18:00] the uneven skill distribution and technology familiarity within the team. And then the other part where the technology can really help is its ability to achieve both low latency and in-depth analysis. One of the trade-offs when you are looking at an alert as a security analyst is: you can spend five days on a single alert and be very thorough, but five days on every alert, that latency is unacceptable. At the same time, sometimes security analysts spend one minute on an alert and make an educated guess, where the latency is very good but the thoroughness is quite lacking. What software, an AI SOC analyst, gives you is the best of both worlds, because again, it's software, it's parallelizable. What that means is it is able to [00:19:00] cram a couple of hours of analysis into a five-minute window, because software can make five parallel queries against Splunk and Microsoft Defender and reason through, you know, a thousand lines of returned logs within a couple of seconds. So having the best of both worlds, where you don't have to sacrifice or find this delicate balance between latency and thoroughness, is also very valuable.
Ashish Rajan: Interesting. 'Cause those examples are quite close to home for a lot of SOC people, where people spend years trying to be Splunk experts or CrowdStrike experts, and they get all the certifications and everything. But to your point, in an enterprise the reality is it's not just one environment; there are multiple environments people are looking at. Is AI augmenting only the detection piece, or are [00:20:00] we able to use AI to get into investigation and remediation? How far have we actually been able to go in this space?
Edward Wu: So at Dropzone we're focused on building an AI SOC analyst that's really targeted at automating investigations. We have seen other startups leveraging agentic AI systems to programmatically write detection rules, a little bit like Cursor for detection engineering teams, and we have seen other teams building interesting technologies there.
Response is an interesting place where, in the fullness of time, yes, maybe there's a future where AI can do as good a job as a human IR team from Mandiant. But the reality is, at this point, the technology is not there yet. So we actually haven't seen tons of use cases and real-world adoption of [00:21:00] agentic AI systems with regards to response, because incident response is very nuanced, it requires a lot of organizational context, and it's actually also very risky. The analogy I use is: investigating alerts is akin to a medical doctor giving you a diagnosis, while performing IR is more like open-heart surgery. It's very easy to cut the wrong blood vessel and make a big mess. So this is where the current level of accuracy and reliability of agentic AI systems is not there yet to fully automate the IR process.
Ashish Rajan: Interesting. And I guess maybe the follow-up to that is, 'cause people do believe, or at least some people do believe, that the AI hype and AI bubble will, you know, burst or whatever: how receptive are people? 'Cause I imagine, to [00:22:00] what you're saying, large enterprises are already seeing all of this at scale as a challenge. Are people being receptive to having AI assistance? There's a whole question of accuracy, to what you were saying. Every time, you hear: oh, it doesn't give me the same response twice, I can't risk that with my detection piece. And I think maybe that's where a lot of the hype conversation comes from as well. For the SOC people in a managerial role, thinking about SOC for 2026 and beyond, how do you even increase trust in the AI assistance they can actually use or augment into their SOC teams?
Edward Wu: Yeah, this is a question not unique to cybersecurity, but for all the other white-collar job families as well. From my perspective, there are definitely overhyped expectations within the agentic AI space, but at the same time, we have all seen a lot of [00:23:00] real-world, concrete ROI from AI systems as well.
Maybe another way to think of it is: think about coding for a second. Can agentic AI systems help improve the productivity of developers by 20%? Absolutely, I don't think anybody has any doubts about that. But can we, within the next 12 months, get to a place where we can fire all the software developers and just have AI work directly with PMs? I would say that's the bubble part. So I do think agentic AI is going to unlock a tremendous amount of value, but there is a limit to its capabilities, and it is a bubble once the expectations are that inflated. So if we take a moment and look at the SOC: in 2026, can AI SOC agents meaningfully reduce the amount of time human analysts spend investigating alerts? [00:24:00] Absolutely. But can AI SOC analysts or agentic AI systems automate everything in a SOC, so that as a security leader you can fire everybody in your SOC? That's not going to happen. So I think part of this is making sure you have the right expectations of this technology and these solutions. And if you have the right expectations, I don't think you will be too disappointed.
Ashish Rajan: Yeah, fair. And I guess to your point about SOC teams who are trying to build their program for 2026: I know personally so many people have gone through their end-of-year planning, thinking about what that means for next year. So what's the role for tier one in this AI-augmented world? I imagine a lot of them would probably be worried about their jobs with more of this augmentation.
Edward Wu: Yeah. We have heard this a lot across our customers, prospects, and early adopters as well. From my perspective, it's very similar to what happened to [00:25:00] human calculators when you got Excel. A big part is, yes, the historical manual, repetitive toil is going to be automated. But it's also opening opportunities where, if you get upleveled on this new technology, you can actually focus on other parts of cybersecurity that are a lot more exciting and interesting. For example, a piece of AI is never going to be able to work with other application teams or network teams to expand the visibility of the security fabric. A piece of AI is not going to convince an application team to add additional logging so the security team can better identify abuses within business applications, or convince network teams to deploy five additional network sensors because the security team really wants to see what all the ICS or IoT [00:26:00] devices are doing in the environment.
What's exciting about cybersecurity, from my perspective, is that there's so much we can do as defenders that it is probably one of the very few career or job families where there is a unique win-win situation between the human workforce and AI augmentation. It is not a zero-sum game. Every single security leader we talk to, when we ask the question, what would you do with 10 additional security engineers, every single one of them can come up with a long list of projects very quickly, within 30 seconds. Oh hey, if we had more resources, I would love to evaluate and deploy zero-trust networking, I would love to reorganize how access control is granted within the system, I would love to uplevel our patching strategies, I would love to uplevel how we do [00:27:00] pen testing or red teaming exercises. So this is where I do see a world where, in the future, there will not be that many tier one security analysts as a job role. But what we will have is a whole lot more security architects, a whole lot more, you know, security transformation folks, a whole lot more incident responders, tier three analysts, detection engineers. So I'm very positive, not only because I'm a founder, but because cybersecurity teams have historically been so under-resourced and understaffed that the augmentation of AI is actually going to open up more interesting job opportunities for folks who are right now stuck investigating alerts over and over again.
Ashish Rajan: Does AI augmentation make sense at a certain level, or [00:28:00] a certain scale for that matter? I'm also thinking about the people who may listen to this and go: I already have a SIEM which has, quote-unquote, AI capability. I'm sure those vendors are also saying they're AI-augmented today. But what kind of organizations, at what scale, can benefit the most from an AI SOC? Because I don't imagine it works the same at every scale.
Edward Wu: What we have seen in the field is that AI SOC analysts, or technologies like what we're building, generally benefit two groups of folks. One is enterprise security teams with an internal, in-house security practice. Those are the folks who are leveraging internal resources; you know, they have full-time security analysts. Those security teams will definitely benefit from our technology. And then the other group is actually security service providers. AI augmentation [00:29:00] drastically increases the quality of the security services that providers like MSPs or MDRs can offer. So this is where, even with AI, if you are an organization of 200 employees, I still think security service providers are the best way to get your initial set of security protections and capabilities. But now, with AI augmentation, you can get a whole lot more from your security service providers.
Ashish Rajan: Interesting. And to operationalize this, I'm also thinking about another conversation that comes up: the whole build-versus-buy conversation. A lot of people believe, I mean, how hard can it be to attach an AI to my SIEM or my log aggregator or my data lake? 'Cause these days, thanks to AI, even [00:30:00] people who did not have a data lake are now building one. Not that they're getting the budget for it, but at least there's a data lake in the company that they can ask permission to access. From the conversations you've had, what does it take to operationalize AI in a SOC within an organization?
Edward Wu: Yeah. Obviously nowadays it's very cool to start new projects around, hey, you know, I can take this open-source library, I can connect it to a couple of APIs, and voila, I have an AI SOC analyst. Based on what we have seen in the field, it's definitely not that easy. In fact, you might have noticed there are close to 30 or 40 different startups trying to build similar technologies, but very few actually have working technology, so it's much more difficult than it looks on paper.
And that's true for any agentic AI system. Look at Cursor: how hard could it be? ChatGPT can already be used [00:31:00] for coding, so how hard is it to bolt a couple of additional APIs and features onto ChatGPT, and you have Cursor? But if you talk to investors and startup founders, they'll share that it's actually incredibly difficult to build Cursor, even though ChatGPT already seems to offer very capable building blocks.
From our experience, the biggest challenge when building AI agents for security is how you manage large language models: how do you find the right balance between allowing large language models to improvise and adapt while keeping them within certain guardrails, so they can offer trustworthy and deterministic outputs? That's actually very difficult.
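One common pattern for that balance, offered here as a hedged sketch rather than Dropzone's actual approach, is to let the model improvise its next step but force every step through a strict schema, a tool allowlist, and a fixed verdict vocabulary, so the surrounding system stays deterministic:

```python
import json

# Hypothetical guardrail vocabulary for the sketch, not a product API.
ALLOWED_TOOLS = {"siem_search", "cloudtrail_lookup", "idp_signin_logs"}
ALLOWED_VERDICTS = {"benign", "suspicious", "malicious", "needs_human"}

def validate_step(llm_output: str) -> dict:
    """Reject any model output that is not well-formed or steps outside
    the guardrails; the model improvises, the harness stays predictable."""
    step = json.loads(llm_output)  # must be valid JSON, or this raises
    if step.get("action") == "call_tool":
        if step.get("tool") not in ALLOWED_TOOLS:
            raise ValueError(f"tool {step.get('tool')!r} not on the allowlist")
    elif step.get("action") == "verdict":
        if step.get("conclusion") not in ALLOWED_VERDICTS:
            raise ValueError("verdict outside the fixed vocabulary")
    else:
        raise ValueError(f"unknown action {step.get('action')!r}")
    return step  # safe to execute and record
```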
And for folks who are trying to [00:32:00] build this internally, I generally share a simple statistic: by the end of 2026, Dropzone will have spent close to $20 million purely on R&D to build this technology. Unless, as a security organization, you are allocating five or ten million dollars of budget to build such a technology, you are probably not going to be able to get it right.
Ashish Rajan: Yeah, and I guess to your point, it's not just that you need security people; you also need data science people, people who understand and can work with data pipelines. There's a lot more to it. To your point, it's very easy to get to that POC where I have, quote-unquote, created a detection and I can investigate. But then you say: let's try that again on the same alert and see if we get the same result, and it's a different result. That's not [00:33:00] something you want from an area of your organization as sensitive as the SOC, which requires you to get onto true positives really quickly.
One final question as well, 'cause we were talking about the tier one role slowly disappearing: as part of year-end planning, a lot of people would also be thinking about training budgets for their teams. What should they be sending their SOC teams to learn in an AI SOC world?
Edward Wu: Great question. I think a lot of that is, again, using software development as the analogy. With AI coding tools, what we have seen is it's a lot more important for software developers to essentially pick up more program management or project management skills, because now, with an army of AI coding agents, a single software developer can operate as a team of developers. That means as a human developer there are [00:34:00] actually more quasi-managerial, technical-leadership tasks: you have to divvy up the feature into different components, and then you assign each component to an AI coding agent or Cursor to help you with it. So looking at that analogy moving forward: the ability to be a tech lead, the ability to know what is good and what is not good, the ability to divide complex projects into smaller pieces, the ability to coach, to tune, to configure AI solutions to achieve maximum efficiency. I think those are all good skills that practitioners can focus more time on.
Ashish Rajan: Awesome. No, thank you so much for sharing that. And that's all the technical questions I had. Where can people find you and connect with you? I'll put a link to the report in the show notes as well so people can download it. But where can people [00:35:00] connect with you about this stuff?
Edward Wu: Yeah. So folks can check us out at dropzone.ai. And we also actually have a public, ungated test drive. So if you are curious how an AI SOC analyst works and you don't want to talk to a salesperson, you can just check out our website. We have a three-question form that you need to fill out, and after that you immediately get access to a live environment with our product running, where you can see different examples of how an AI SOC analyst investigated alerts from different sources and of different types.
Ashish Rajan: Awesome. I'll put that in the show notes as well. But thank you so much for coming on the show.
Edward Wu: Thank you for having me.
Ashish Rajan: Appreciate you doing this.
Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify. In case you are interested in [00:36:00] learning about AI security as well, do check out our sister podcast, AI Security Podcast, available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about the latest in the world of AI security. Finally, if you're after a newsletter that gives you the top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.