Are attackers really using AI to run end-to-end cyber campaigns? In this episode, Edward Wu joins Ashish to separate the hype from reality when it comes to AI-driven attacks .Edward explains how attackers are currently using open-source LLMs for reconnaissance and spear-phishing , and why the major commercial models now explicitly prohibit users from generating exploits without vetting . On the defense side, Edward shares how AI agents have successfully automated over 160 years' worth of alert investigations in the real world proving that 100% software-delivered SOC triage is already here .We also debunk the myth of AI "hallucinations," explaining why most errors are actually just poor context management . If you're building a security operations center or working with an MSSP, this episode will teach you how to shift from manual alert fatigue to leveraging AI for threat hunting.
Questions asked:
00:00 Introduction
02:50 Who is Edward Wu? (Founder of Dropzone AI)
04:50 The Reality of AI Cyber Attacks Today (Recon vs. End-to-End)
07:20 Why Commercial LLMs Are Blocking Exploit Generation
11:50 How MSSPs are Evolving with AI Triage
18:20 The Asymmetric Capacity Gap: Why Humans Can't Keep Up
22:30 Automating 160 Years of Alert Investigations
23:50 Why AI Hallucinations are Actually Context Management Failures
26:00 Build vs. Buy: The Data Network Effect for AI Agents
29:20 The New Workflow for SOC Analysts & Threat Hunters
31:30 Defining "Threategy": Scope, Authorization, and Context
35:50 How to Detect Prompt Injection (Treat it like an Insider Threat)
38:30 Dropzone AI Announcements at RSAC
Edward Wu: [00:00:00] They are seeing 10 15 step attack campaign being completed within minutes. It is pretty difficult, if not borderline impossible to have a SOC that can respond to alerts within minutes without extensively leveraging AI.
Ashish Rajan: When someone says there's AI capabilities, someone in some country wakes up just to triage the alert to maintain the quality.
Edward Wu: We have seen a pure software solution. Working very well in the real world. We are a hundred percent software delivered outcome, and our software has already automated over 160 years. Worse of alert investigations.
Ashish Rajan: So what happened with all the hallucination and everything that people used to have concern?
Edward Wu: Vast majority of the mistakes, uh, made by AI agents were actually not caused by models hallucinating, but rather actually caused by improper contact management. Prompt injection is kind of the same thing. You are [00:01:00] kind of like social engineering, a piece of software instead of a human.
Ashish Rajan: If you have been keeping up on how AI attacks are evolving you.
Probably have heard that AI attacks are on the rise and. AI world we are moving towards is increasing the number of kind of attacks we get. So to unpack this, I had a conversation with Edward W who is the founder and CEO of Drop Zone AI to talk about are there actual AI attacks to increase and whether they are actually agent, how much of it is truly possible to automate a lot of that ai?
How easy it is for you to have an attack path that can be exploited by ai and how, what should we change? Especially if you're an organization who probably don't have an internal security operations team and outsource the entire security operations team, what's the expectation moving forward? Should everyone be agentic AI and are we truly in the era of Twitter AI hype as I would like to call it, where you can plug in an AI repository to where an AI model and it goes and finds all these vulnerabilities and.
Basically hacks it. So if you know someone who is trying to unpack this themselves for the [00:02:00] security operation teams that they're building or working towards uplifting them for an AI First world, I would definitely share this episode with them. And as always, if you are here for a second and third time and have been finding the podcast valuable, I really appreciate if you're taking a quick second to hit the subscribe or follow button, whichever platform you're listening on.
Spotify, LinkedIn, YouTube, apple Podcast, where everywhere. It only takes a second from you. It's free for you, but it also means that. Hitting the subscribe, follow button. Let's see algorithm. Know that you want others to know about us as well. So I appreciate you taking that quick. Second, I hope you enjoy this episode with Edward Woo and I'll talk to you soon.
Peace. Hello and welcome to another episode. Today we are talking with Edward. Edward is a returning guest. Edward, thank you for coming on the show, man.
Edward Wu: Thank you for having me.
Ashish Rajan: And for people who probably did not get to hear about you from the previous episodes, could you just share a bit about yourself, your background?
How did you get stuck into the AI world?
Edward Wu: Yeah, absolutely. So my name is Edward Wu. I am the founder and CEO of drops on ai. We are a cybersecurity [00:03:00] startup that's leveraging large language models to build AI agents for cybersecurity teams. And the first agent we built is an AI SOC analyst that can autonomously investigate cybersecurity alerts with a goal of force multiplying your existing teams human team members so they can focus on only the real threats.
And other critical projects. My background before funding Drop Zone is I worked at Actual Hub Networks, one of the MDR vendors for eight years, where I built, its A IML and detection product from scratch. So I spent eight years generating millions of security alerts and ultimately came to the realizations that most security teams already have too many alerts.
You guys probably need a lot more help on the processing of those alerts. So that's why I decided to switch sites and started Drop Zone three years ago.
Ashish Rajan: I wanna say that now we are living in this world where AI agent and agentic AI seems to [00:04:00] be the norm where multiple agents are being spoken about.
I I, I had a session recently where we were doing a live episode and in San Francisco, I asked a crowd the question of what's the level of ai. Driven attacks people are seeing. And I think that's kind of to a lot of the idea that people have around, Hey, I need an agent sock is coming from the fact that because the quote unquote AI attacks on the rise.
I'm curious the kind of attacks you guys are seeing because you're doing a lot of threat intel stuff and we're trying to work towards this. Has the kind of attacks you were seeing, say when you started. They started, the company started looking into, and even your previous company as well, when you were, when you were developing detection, I'm sure the detections have come, come a long way.
What are the kind of AI attacks you're seeing today and are they sophisticated agent or. Are they just like AI bots on the internet?
Edward Wu: Yeah, great question. I think this is where like there's [00:05:00] a difference between the reality we're seeing today versus the. inevitable future that will come very soon. Um, at this moment what we have seen is mostly AI still being used for the initial stages of an actual attack campaign.
Uh, we have not seen AI agents end to end performing. A 10 step or a 15 step attack campaign, but we have , absolutely seen a lot of cases of AI generated very personalized spear phishing emails, for example. And we have also seen a lot of cases of utilization of AI in the early kind of reconnaissance phase, such as scanning.
Most of those can be trivially automated via lateral language. Using to open source, large language models. And on the flip side of kind of the application security, 'cause that's drop zone. Right now we're primarily focused on [00:06:00] corporate. So enterprise security, we deal with endpoint alerts, firewall alerts, cloud workloads, alerts and stuff like that.
But if you look at the AppSec world using AI to find vulnerabilities, uh, using ais to. Weaponize vulnerabilities or build exploits has been increasingly increasingly common. In fact, I think like every couple weeks nowadays, we see new. New blogs, new research reports coming out saying, Hey, I just took this model and pointed it at, you know, 50 open source code base on GitHub.
And I found, n number of remote code executions. And then I used large language models to also weaponize. Vulnerabilities into working proof of concept exploits without really doing much hardcore research, uh, oneself. Okay.
Ashish Rajan: So wait, so the, all that Twitter hype of, I just point my repo that we are not there yet, or is it more, [00:07:00] more that it can help you with level one informa, like say the reconnaissance of whether it's actually possible, but it can't take that next step of I'm just gonna go exploit this.
Edward Wu: Yeah. At least from what we have seen, not yet. Um, I think there has been a lot of research on the ability of large language models to find vulnerabilities as well as. Generate, generate exploits. In fact, the, the capacity has increased so significantly that nowadays the top commercial model providers are explicitly prohibiting folks from using their models to find and exploit vulnerabilities.
Unless you ex you, you are explicitly vetted and approved for such use cases.
Ashish Rajan: Right. So, so it in the user agreement between LLMs and the, I guess, the customer in that case, and even if an enterprise customer
Edward Wu: Yeah. [00:08:00] You, there's a, uh, for example, if you look at Cloud or OpenAI specifically you do have to, for example, submit a formal proof of identity as well as kind of sharings, like writing up a couple.
Sentences or paragraphs of exactly what you are trying to use the models for before they actually allow you to use the models to generate exploits or find look at GitHub repositories and find vulnerabilities. So it's one of the prohibited uses un unless you are explicitly vetted and opt in.
Ashish Rajan: Interesting because you know how a lot of people when agentic AI or just doing AI workflow automation became like a it still is a huge a demand for it. I wonder if people were not doing this, if they're not doing it, are they like, obviously not using agent capability? How, how would you describe this?
'cause I, I, almost a lot of people just imagine [00:09:00] chatbot talking to a chat GPT Web. Web application and hey, go look into this GitHub repository because now I can connect my GitHub repository into my cloud, my open open AI connection or ChatGPT connection. Is that what we're talking to? Or are we talking about CLI and going beyond just that?
Right? So I'm curious in terms of what is advanced today in terms of this capability. 'cause I, we, I'm assuming we are not talking about using web application, connecting my GitHub and hey, find me vulnerabilities.
Edward Wu: Yeah. So I think what's advanced is like how attackers are using large language models to some extent mirrors how defenders are using large language models, but attackers are kind of slightly ahead.
I think like right now it's pretty clear. Both on the attack and defense side, you can use large language models to automate a lot of the grunt work, right? Traditionally eyeballing source codes of open like [00:10:00] GitHub repositories is not a fun exercise, right? Or doing any source of like reverse engineering of different, you know, applications is not a fun exercise for attackers.
Googling a potential target, doing background research on you know, on a website or entity, like m mapping of their external attack surfaces is not fun. Exercise for attackers. And I feel like at this point, from what we have seen, like those are trivially, automateable. With larger language models, obviously you cannot use a lot of the commercial models for that.
There are, but there are plenty of open source models without you know, any sort of cybersecurity guardrails. Yeah. So you can easily leverage those models to say, Hey, I want to research, you know, a particular organization helped me find all the subdomains, helped me find all the IP ranges, uh, help me find all the [00:11:00] open ports.
Um, and let's start to understand, you know, what are the external attack surfaces? That is absolutely trivially automateable with large language models. But if you are asking whether the attacker is using large language models to perform, again, end to end, like a 10 step, 15 step attack campaign. Not yet.
Okay. But I do think the world is trending towards that direction. So,
Ashish Rajan: hey, so what about people who are outsourcing their security operation team as well? Like what kind of capability? 'cause obviously you mentioned this conversation started with having the ability being provided to one SOC analyst to be able to multiply the productivity they could have in terms of triaging across hundreds and probably thousands of alerts that they go through every single day.
A lot of people may not have a dedicated SOC team, and so internally they have MSPs that they work with is in your, in what you're seeing in terms of the usage [00:12:00] is . The bare minimum that we used to expect from A-M-S-S-P earlier where they look after they, they triage the level one uh, reports that come in, but if they're level two, level three, they come they escalate and all that.
How have you seen that evolve in terms of the expectation that. People may have, or customers may have from MSPs, but also at the same time are you seeing them evolve about with this, not just, I mean we're obviously talking about defenders and attackers, but they're kinda like the people in the middle who have to figure out, Hey, how do we continue servicing hundreds of customers as these can country evolve?
Edward Wu: Yeah. M SSPs is managed Security Service Providers is definitely a very interesting topic. So first and foremost, at Drop Zone, we actually sell to both enterprises as well as, uh, managed security service providers. What we have seen is a couple different things. First and foremost, there are generally two reasons or two, two type of two types of [00:13:00] engagements that security teams have with MSPs.
The first type is they essentially outsource the entire SOC from tier one to tier three, right? This is where if, they do FC service provider come across. A true positive or high likelihood, true, positive. They themselves will handle the incident response. And more or less have the internal CISO or, or the security organization as chaperones.
So that's one way as the second way type of engagement is more. Leveraging MSPs for tier one staff augmentation. But then when MSPs find something, they actually escalated back to the tier two and tier three analysts on the internal security team. And what we have seen is for the first buck category of use cases where, uh, or business kind of [00:14:00] engagements where.
See, the internal security team is outsourcing the entire security operation, tier one to tier three, to have two service providers. That's still happening. And the biggest difference is nowadays, as with all service providers, the expectation is higher, right? And what I mean by that is the quality of the analysis, the speed of the analysis.
And also most importantly, the customizability of the analysis. Mm-hmm. Uh, one of the biggest challenges for security service providers is customizability. A very common complaint we have heard over and over again from enterprises who outsourced their entire SOC to MSPs is, Hey, they're not understanding our environment.
We told them this certain activity is normal. But they get tripped up every single time, which is actually not really a fault of our MSPs. I think [00:15:00] it's an artifact of their business model where they are. They have to, allocate fractional security expertise to a pool of 10, 50, a hundred clients.
Yeah, it's impossible for anybody to memorize what client A wants during a shift when that person is also looking at alerts from 50 other clients. So this is where we actually have seen a, a number of really AI forward and progressive MSPs leveraging ai technologies like Drop Zone to really change how they deliver service so that they move away from a essentially a hundred percent human delivered outcome, kind of a service model to a 80 to 90% AI delivered and then having their, having their existing security analysts focus on focusing on only kind of the final 10%, uh, of the outcome. So that's that. And then [00:16:00] for the second engagement, where as the security team is not outsourcing the entire soc, but only leveraging MSPs for staff augmentation of the initial triage in the investigation.
We have seen a number of cases where these enterprises actually started to bring, uh, bring the entire SOC in-house. So they are actually leveraging, instead of using external service providers as a tier one staff augmentation. Uh, a lot of them are looking at AI SOC software such as DropZone to provide these staff augmentation instead.
Ashish Rajan: And do you find that the expectation that MSS ps now have to evolve as quickly as a threat? With accuracy and speed. I imagine there's different levels to it as well. I think there should be, and I don't know if it's still a joke, but there should be a joke about, uh, when someone says there's AI capabilities, someone in some country wakes up just to [00:17:00] triage the alert to maintain the quality of, of this.
And I, I don't know where I, I can't remember the source of the, the joke, but there was definitely a joke around this thing. 'cause um, there a lot of companies may come into this play where. There's probably human augmentation in the background to check the quality of this, uh, kind of threats coming because to what you said it, how has that been different like this?
How are people managing the speed and quality with, uh, with the new world that you're in, A and B, why is it. Today even more 'cause is has the volume has increased. 'cause we were talking about this AI tech earlier. To you, what you said that the level one, the triage, the recons, and finding out vulnerabilities that is truly possible.
And byproduct of that, has that been that the increased. There has been an increase in the volume of attacks, and that's why we need more. Yeah. 'cause I, I'm kind of lost in the, in the, in the track of like, what, what's the right way moving forward to maintain [00:18:00] speed and quality when there's a higher, uh, chance that you are being potentially targeted for an ai AI attack.
Like,
Edward Wu: yeah. What it, it really boils down to is. See number of alerts is definitely increasing year over year. If I remember correctly, there is a sense survey published a while back that talked about year over year. Most security teams are probably seeing somewhere around average 30% increase in just in terms of the volume of alerts.
Ashish Rajan: Yeah.
Edward Wu: But beyond that, there are also elements that's not captured by the volume of alerts, which is the. See ultimate surface area that are exposed to attackers. I think everybody would agree that year over year we're also seeing rapidly increasing attack surfaces. Uh, publicly facing OpemClaw servers or machines was not really a thing a year ago.
But now I think [00:19:00] a lot of security teams. Are forced to, you know, figure out ways to protect their enterprise users and enterprise data when people, run installations of, of OpenClaw in kind of insecure ways. So I think what it really boils down to is, again, we are seeing a continuous divergence between the.
Budget resource headcount of security teams and the, all the attack surfaces that they are asked to protect, as well as the intensity of the attacks. And what we are seeing at Drop Zone is, and frankly one of the reasons we started the company in the first place, is ultimately we believe that humans alone is insufficient to close this asymmetric capacity gap.
And cyber defenders really need more help. And silicon electricity can, can be great at performing a lot of analysis, [00:20:00] pennies on the dollar and can really help to plug this ever expanding analytical gap, uh, between c. Between the analytics that's required to sufficiently protect the organization and the limited analytics capacity constraint, again by headcount budget as well as staffing.
Ashish Rajan: Do you find that the, i I, I agree with you, but I, with, in terms of the complexity, but. What about speed and speed response? You know, the whole mean tend response. Mean tend protection. And how is that, I mean, I guess in terms of expectation, how has that changed?
Edward Wu: Yeah, I think I came across, I believe CrowdStrike recently published a threat report that mentioned nowadays they are seeing.
A lot more attacks being completed or the entire attack campaign, 10, 15 step attack campaign being [00:21:00] completed within minutes versus traditionally, I think all of us have heard stories of, 30 day due time. Yeah, yeah, yeah. Or attackers really taking their time. But I think with kind of any sort of, you know, attack and defense, speed is a key component of a lot of these.
And when attackers are dialing up the speed of end-to-end attack campaigns, it's raising the bar for the for the defenders. And this is where, practically speaking, it is pretty difficult, if not borderline impossible to have a SOC that can respond to alerts within minutes without extensively leveraging ai.
Ashish Rajan: And do you find that is, and I obviously you, you've been dedicated your four, four plus years into this. Do I need a human waking up every time an alert is raised to maintain quality or is it possible to do a lot of that with agents or AI in general? [00:22:00]
Edward Wu: Yeah, absolutely. So I would say a year or two.
Two, three years ago, the AI agents technology are not there. But from what we have seen, uh, across our, our fleet you know, across over 300 organizations is it is very real. Today to have AI agents end to end autonomously investigating security alerts at a quality level that's at or above a typical tier one human security analyst.
For example, at Drop Zone, we actually have no artificial intelligence behind our software. Uh, the outcome and the investigations conducted by Drop Zone is actually a hundred percent coming from our software. So we have seen a pure software solution working very well in the real world, obviously.
You know, I'm a little bit biased. I, I don't think that's the case across the entire industry. And this is [00:23:00] where we absolutely have seen cases where sometimes other startups, uh, or vendors would use human analysts, uh, to help bridge kind of the gap between their product reality and the expectation of of the actual end users.
Uh, but for DropZone. We are a hundred percent software delivered outcome and our software has already automated over 160 years worth of alert investigations.
Ashish Rajan: Wow. Wait, so what happened with all the hallucination and everything that people used to have concerns? So, and I mean, you and I spoke about that as well last couple of episodes ago, so how has that gotten to a point where it can be managed as long as you know what you're doing?
Edward Wu: Yeah, absolutely. So for hallucination specifically, what we have really seen over the last kind of year and a half is it's really a, an artifact of context management. [00:24:00] Modern day large language models, uh, hallucinate. At far fewer, far less frequency than models in the past. I think a lot of us probably still have a pretty outdated, uh, mental image of like bad hallucination coming from the models.
At least what we have seen in the field is vast majority of the mistakes. Made by AI agents are, were actually not caused by, uh, models hallucinating, but rather actually caused by improper context management. For example, you are asking the model to look at a specific piece of data but you didn't give it the full business context or the analytical context behind it, so it got confused.
Uh, we actually have seen a number of times when, when things are, were viewed as like ha hallucinations, were actually caused by kind malformed or poorly engineered prompts that. Actually [00:25:00] will confuse a very well educated human analyst as well.
Ashish Rajan: Oh wait. So I guess it is, if he is try and talk to a, an AI the same way we would talk to a colleague.
It's not fair until the AI has a context of that colleague you are expecting it to be.
Edward Wu: Exactly.
Ashish Rajan: Interesting. Do you find that, would it be now possible for organizations to build their own software version of sac? I mean, is is it, I mean obviously you've spent a lot of time building this and a lot of people still have that question for, can I just build this myself?
Is AI models have become smarter overall. What's the, I'm curious about the challenges you guys had in terms of building this, I'm assuming it sound, multi-agent and going stronger. Like, so what has been different since the time you did this? However many years ago to today, if you were to rebuild this and if people listen to you talk about this and go, well, if can I build this on my own using the new AI models?[00:26:00]
Edward Wu: Yeah. I think build versus buy is definitely a very interesting conversation. A lot of it ultimately boils down to how much engineering resources you actually have in your security team. Mm-hmm. And whether you are willing to kind of. You know, whether the ROI is there as with any piece of software, vibe, coding, it is one thing, but maintaining it over, you know, a number of years and making sure it actually works when you have like new attacks showing up new alert types showing up, you know, new tools being brought into the environment is the most expensive part.
Uh, and this is still where, as with any software vendors, there's economy of scale, right? The r and d budget that drop zone has spent in the last couple years. Uh, a lot of times far it is far larger than the overall security budget of many organizations. But at the same [00:27:00] time, we have seen cases where in very specialized environments where all the toolings are internal, there are and as the organization have extensive access to to software developers over many years you know, teams of software developers.
Over many years we have seen cases where they were able to successfully DIY, kind of a, you know, a solutions that works quite well for their own environment. Obviously as a vendor. See bar for us is much higher 'cause we need to build a solutions that not only works for one environment, but across several hundred environments of different permutations of security stack.
Yeah. Um, and that's also one element where sometimes people. Maybe overlook when they consider build versus buy is any AI technology, whether it's autonomous driving or autonomous alert investigation, really benefit from what people [00:28:00] sometimes call the data effect and the data network effect. What that means is see more data and the environments see more types of alerts.
A system processes actually will lead to the system becoming better. So for a vendor like us, when we are building this technology, you know, we, we are seeing alerts from several hundred different organizations and the system is benefiting from the collective experience and the gotchas that it's, it has learned across all those organizations.
But when you are DIYing a solution, all you see is your own environment. So sometimes you actually see some of those agents being less capable because they have been more or less kind of, locked into an, a single environment. It's kind of like an echo chamber. Yeah. Versus being benefiting from.
Okay. I see another organization have running into this rare alert that [00:29:00] the environment has not seen previously. And when you work with a vendor, they can actually take the learnings. From other environments and uplevel you know the software, uh, across the fleet.
Ashish Rajan: So I, I guess what's the new workflow for SOC analysts then?
I guess there's, there's obviously, uh, people who I imagine touch a lot of this space. Threat hunters, there are soc analyst people. How have you seen their workflow now be different now that we live in this AI world?
Edward Wu: Yeah. So most of our customers, when they're looking at their security alerts day to day, uh, they're not performing the investigations from scratch anymore. Um, they're starting off with the outputs. Of our AI SOC analyst. And when it comes to other, you can say chunks of manual and repetitive work within cybersecurity see seeing re reinforcements or the help is, is coming as well.
It's pretty clear [00:30:00] if we look at whether it's threat hunting, detection, engineering, threat intelligence there are many areas where a a dedicated AI agent could also automate a lot of that work. If we just take threat hunting, for example. I think what humans see parts of the threat huntings that really benefits from like a human threat hunter, is the beginning, which is the formulation of the hunt, the formulation of the hypothesis, and you know, the subtle human judgment.
Of what should be, what we should hunt, and what are the elements of a hunt that makes certain activities abnormal? Mm-hmm. Looking at 200,000 rows of query results from a sim is not what, any of us is really good at. But frankly that is where large language models is really good at.
So we have seen, already [00:31:00] seen a good number of examples where, for example, threat hunting, the actual execution of the hunt could really benefit from additional AI augmentation as well.
Ashish Rajan: Interesting. And would you say, 'cause you know how you mentioned the context earlier. In terms of being able to benefit, say threat hunters or SOC analyst people, considering they have all the context of the organization, shouldn't they technically have an advantage or what's the word for it?
I, I guess where I'm going with this is I feel like the workflow for them includes using AI to build that context layer for themselves, and then they can say, plug into anyone, even like drop zone or whatever as that additional layer for, hey, triage this, or whatever that they need to. 'cause I imagine in my, in, in, in this world as you're moving towards is it is a growing expectation that there would be an AI layer at, I guess in, in the security team of the organization.
Is that how [00:32:00] you see the future to be as a workflow evolve or do you see that the role of a so would still be just what is a, so doing, I guess, in this world where a lot of the AI is being done by products like yourself and others?
Edward Wu: Yeah. We cause these kind of human threategies. So we, we see the future of soc detection and response and then over time, probably, different parts of the security team to involve humans providing the threategies. Yeah. And then AI and machine responsible for execution. And what I mean by threategy ultimately is three things. The first component is scope of work. Human team members need to tell AI what it should really focus on. You know, what kind of alerts is, should investigate roughly how many threat hunts and what are the preferences of hunts.
That the system should analyze, right? Stuff like that. The second component of the human threategy is the [00:33:00] the scope of authorization around what actions can a, an AI agent take is an AI agent allowed to con, contain certain users or hosts, maybe doing weekends and off hours. When they are impacted by high priority alerts.
Uh, for example, the third component of human threategy is what you kind of described, which is the business context. Yeah. Uh, there's no software on this planet that can read minds. And for folks who might have played with, whether it's chat GPT or Open Claw, I say. It is becoming increasing clear that in order to get the best out of any AI agent, and frankly even a coworker,
Ashish Rajan: yeah,
Edward Wu: Dumping, like making your context knowledge accessible to that system, whether, again it could be an AI agent, like open Claw, it could be drop zone, it could also be a [00:34:00] human coworker, is vitally important.
So I, I do see a world where a lot of times as you know, human operators of AI agents, we are essentially spending time materializing the context knowledge that we have in our head into an accessible format that could be in the form of, recording a podcast, uh, you interviewing yourself to give a brain dump of everything you know about the organization, about the soc.
It could be. We have also seen for, uh. Kind of ways where you can actually use AI to generate a large, a pretty long survey. And then have the practitioners like actually fill out. So the system is starting off with a bare bone understanding of the environment. A little bit similar to I think a lot of service providers, uh, you know, whether you are getting security alert investigations or you are getting other type of services.
Oftentimes they have an [00:35:00] onboarding survey. And we have seen increasingly, actually onboarding surveys is a great tool to CoStar an AI agent and boost up its understanding of you. Uh, in a very short period of time.
Ashish Rajan: Interesting. I know, I mean, uh, the final question was gonna be more around the AI specific attacks being identified.
You know, a lot of people are building AI applications now. Prompt congestion is also being spoken about. Is that kind of capability at the moment, is that detectable at that level one soc. Or is that still, I mean, you're not, you're not seeing a lot of key use cases for it. I'm curious as to where the reality of that is as well.
Where we had the stop 10, hey, AI application being built and looked after. No one's really talking about how is that ending up in the SOC world. I'm curious if you, if you're seeing any use cases for that as well?
Edward Wu: Yeah, it's, it's not very common. Because I think there are products out there that are [00:36:00] kind of like.
Prompt firewalls, you know, stuff like that, right? That might be able to flag attempts of prompt injections, but most of the time see results of prompt injection. We see those at the tail end of the prompt injection attack, which is the application itself being tricked or being confused to perform activities that it typically does not perform.
And this is where a lot of these. At the end of the day come across as, uh, abnormal behavioral alerts. So to some extent, detecting kind of service credentials, being misused or being tricked by prompting objection is actually very similar to detections. You write to identify insider threats, right?
Insider threats is one. That's the human. You know, credential or identity being kind of misused for malicious purposes and prompt injection is kind of the same thing, but in that case, [00:37:00] you are kind of like social engineering, a piece of software instead of a human. Mm-hmm. And trying to get that software to leverage its service credential to do things that you want, that it typically does not do.
So a lot of these ultimately translate to abnormal behavioral alerts. But as you know, abnormal behavior alerts are some of the most complex and difficult to investigate ones. 'cause you can say, Hey, you know, service, account of application X, Y, and Z. Suddenly read 50 gigabytes of data from our internal, code repository.
Is that normal or not? Romo, sometimes it's not very easy to tell on surface. Yeah, and this is exactly what, an AI SOC analyst solution like Drop Zone can help you track down. 'cause we can looking into like what is this service account actually used for? What are the historical behavior baseline of that service account?
And when needed we could even reach out to the owner of the service account asking, hey. [00:38:00] Like, is this expected for the service account to perform activity like these?
Ashish Rajan: Yeah, that, that's all the questions I had. Thank you for sharing that as well. I think you had some RSAC things that you wanted to share as well
Edward Wu: yeah, absolutely. So we, we, we obviously will be present at RSAC and we have a couple exciting announcements coming up. And a key component of that is we are, uh, unsurprisingly going to unveil a couple additional AI agents for other parts of the detection and response teams beyond just investigating alerts.
So if you're interested in checking out, like how AI agents could help other parts of the detection response, like threat intelligence. Or threat hunting and, and maybe even for the first time ever get us into a world of 24 7 autonomous threat hunting. Feel free to stop by our booth which is a number of 4, 5 5 in the South Expo [00:39:00] hall.
Ashish Rajan: Awesome. And, uh, we can, people can touch with you if they wanna talk more about any of the, the things that we spoke about earlier.
Edward Wu: Yeah, folks can find us at, uh, dropzone.ai.
Ashish Rajan: Yep. I'll put the links in the, uh, links in there as well. But dude, thank you so much for this as always. I appreciate your time and, uh, insight shared as well.
But I look forward to, uh, I look forward to seeing your booth at RSA.
Edward Wu: Sounds great. Thank you. Thanks
Ashish Rajan: everyone. Tuning again.
Edward Wu: You soon.
Ashish Rajan: Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security podcast tv, our website, or on social media platforms like YouTube, LinkedIn, and Apple, Spotify, in case you are interested in learning about AI security as well.
To check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, apple as well, where we talk. To other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter, it just gives you top news and insight from all the experts we talk to at Cloud Security Podcast.
You can check that out on cloud security [00:40:00] newsletter.com. I'll see you in the next episode, please.




















