Is your EDR blinding you to insider threats? In this episode, Ashish is joined by Brandon Dixon (Co-Founder & CTO of Ent AI, and former Microsoft Security Copilot leader) to discuss why traditional endpoint security tools are failing in the AI era. Brandon talks about the reality of modern insider risk: attackers are no longer relying on malware; they are "living off the land" by using legitimate enterprise software (like Zoom or Microsoft Office) to look like everyday employees. EDR tools can see that Zoom is running, but they are completely blind to a user granting remote control to an outsider. We also explore the explosion of Shadow AI, highlighting a real-world HIPAA violation attempt where an HR employee tried to feed patient records into Meta AI via WhatsApp. If your SOC team is drowning in alerts from "dumb control points," this episode covers how to move from reactive pattern matching (legacy DLP) to proactive behavioral intent modeling at the endpoint.
Questions asked:
00:00 Introduction
02:50 Who is Brandon Dixon? (RiskIQ, Microsoft Copilot, Ent AI)
04:00 Redefining Insider Risk: Malice vs. Mistakes
05:10 "Living Off the Land": Why Adversaries Use Legitimate Tools
06:30 The Zoom Example: Why EDR is Blind to Remote Control Hacks
09:30 The Failure of Security Training against "Click Fix" Attacks
11:50 Case Study: A HIPAA Violation via Meta AI in WhatsApp
13:50 Why Traditional DLP Fails at Semantic Context
16:50 Local AI Usage: Why Workloads Are Returning to the Endpoint
18:50 The Problem with UEBA: Putting Anomalies in Context
22:30 Why You Can't Build This With a Data Lake
26:30 Stopping the "Trophy SOC" and Dumb Alerts
27:40 Fun Questions: Kangaroo Jerky Tasting
28:40 Hobbies & Pride: Ultramarathons and Growing Up in Baltimore
29:20 Favorite Cuisine: Burmese Food (Tea Leaf Salad)
Brandon Dixon: [00:00:00] So the adversary is now using the same software as the enterprise, and they're trying to look like an employee specifically, so they don't get detected. All we know is that Zoom's running. We don't understand why they gave remote control over and what they did after that happened.
Ashish Rajan: All these systems, they're all isolated from each other as well.
Brandon Dixon: Yes,
Ashish Rajan: by design
Brandon Dixon: unsanctioned AI in a sanctioned communication tool, that's a HIPAA violation, and a lot of these alerts don't need to exist. They exist because we've got dumb control points. Do you actually understand who your risky users are, and more importantly, why are they risky? Most people can't answer the question what's actually happening in their business.
We think it's time to start preventing mistakes.
Ashish Rajan: Detection with AI is all about understanding the intent, because gone are the days when you're looking for a SQL injection. These days, detecting AI security issues is all about understanding the intent across multiple things. For example, in this particular episode I had Brandon Dixon from Ent AI, where we spoke about how a non-suspicious employee was trying to copy information from their Word document into Meta [00:01:00] AI, which was not one of the authorized LLMs.
Now, if you have worked in detection engineering for some time or worked with endpoints, you probably understand that picking this up would be really hard with the traditional tools we've had. Whether it's endpoint security, browser security, or third-party tools, there are so many combinations you have to work through just to understand what the person was unintentionally trying to do. Which is why, in this new world, it's all about understanding the intent of the users on the endpoint and what they do across multiple applications, not just the individual siloed ecosystems you may have to triage, to identify whether there's a malicious or an unintended action taking place on any given endpoint.
If you're someone who's working on enterprise AI security, on what that means for AI-capable applications on your endpoints, and on what the program would look like in a world where AI actions are not being picked up by your traditional security tools,
the approach needs to change. All that and a lot more in this episode with Brandon Dixon from Ent AI. As always, if you have [00:02:00] been listening to or watching podcast episodes for a while, I would really appreciate if you take a quick second to hit the follow or subscribe button on whichever platform you listen to or watch podcast episodes on. We are everywhere, including Apple Podcasts, Spotify, YouTube and LinkedIn. It does not cost you anything, but it means a lot, because it means the algorithm will spread us to more people, and more people get to know about the podcast we create over here. I also wanted to give a shout out to everyone who came and said hello to us at RSA. It really meant a lot that you came and said hello.
And shared the love you have for the work we are doing as well. So thank you so much for all the support, and I look forward to seeing you at other events as well. Enjoy this episode with Brandon. I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I am excited today. I've got Brandon.
Hey man, Brandon, thanks for coming on the show.
Brandon Dixon: Thank you.
Ashish Rajan: For people who may not have heard of you before, could you share a bit about yourself, your background, Brandon?
Brandon Dixon: Sure. Been in cybersecurity my entire career. I've done multiple startups, including PassiveTotal and Ninja Jobs. More recently, I was part of RiskIQ, mm-hmm,
for the past couple of years until the company got acquired by Microsoft. [00:03:00] So we launched a couple of Defender solutions, including Defender Threat Intelligence and Defender External Attack Surface Management. And then we were also the team that led out Microsoft Security Copilot. And so now I'm doing another startup
Ashish Rajan: Oh, nice.
Brandon Dixon: With the crew again. And, uh, yeah, that's it.
Ashish Rajan: Oh, right. So in terms of when we were talking about this the first time, we spoke about threat hunting, incident response, and you spoke about the whole concept of the compromised insider.
Brandon Dixon: Yes.
Ashish Rajan: And coming from this land of Defender and Security Copilot and all that, which kind of fits exactly in that category.
Yes, 'cause Copilot is primarily used by internal people. How do you describe that to people? What's a compromised insider in that case?
Brandon Dixon: I think it depends. There's the traditional insider risk, which is assuming that someone's bad within your organization. And I think those are incredibly rare.
Mm-hmm. I mean, they obviously do happen; there are cases where someone's doing something egregious. They're trying to take data outside of your organization. [00:04:00] They're trying to manipulate the tools to do something they shouldn't. And then there's another side of insider risk, which I perceive as: most people wanna do the right thing.
Ashish Rajan: Mm-hmm.
Brandon Dixon: And you have the accepted path of the business. You have the exception path, which is typically pretty cumbersome. And then you have the path that the user's gonna follow. Yeah. Right. To get their job done. And sometimes when people are taking that pathway, they make mistakes, they violate policy. So when we look at things like insider risk, we kind of view it from two different dimensions.
You have your users making mistakes. Yeah. And then you have your threats that actually show up.
Ashish Rajan: And would you say, 'cause I'm thinking more from the perspective of the people listening to us, who have different levels of detection engineering in their teams, would that be a different kind of data point compared to what they normally see from an insider compromise?
Brandon Dixon: Well, I think most solutions that exist today from a traditional security perspective, they're pretty good at seeing a lot of telemetry on control points. In fact, they've gotten so good that [00:05:00] adversaries have effectively moved out of some of those control points. They've started to use legitimate tools, a technique they call living off the land.
So the adversary is now using the same software as the enterprise, and they're trying to look like an employee specifically, so they don't get detected.
Ashish Rajan: Interesting.
Brandon Dixon: So when you think about a system like EDR: EDR was designed in the era where malware was prevalent and the file system was being manipulated frequently, and there's still
a need for EDR solutions. They bring tremendous value to help us understand what happened at the system level.
Ashish Rajan: Yeah.
Brandon Dixon: But what's historically been missing, and what we're building at Ent AI, is effectively adding a layer of behavior on top: understanding what the user is doing. And if you understand what the user is doing, you can see that full story.
Ashish Rajan: Like what would be an example of this? 'cause in my mind, the insider risk normally is that I'm just a disgruntled employee who knows it's my last day, so I'm exporting all my important files from projects [00:06:00] that I've worked on. Yeah. Maybe not to sell, but at least to reuse in my next job.
Brandon Dixon: Yeah.
Ashish Rajan: Like, is that what we are talking about here?
Brandon Dixon: I think there's two sides of the coin again, but I'll give you a concrete example. A lot of companies use Zoom.
Ashish Rajan: Mm.
Brandon Dixon: Right? So Zoom is prevalent software for conducting meetings. I think, unbeknownst to people maybe, is that when you share your screen on Zoom, you actually have an additional option to give remote control of your system through Zoom.
Ashish Rajan: Oh,
Brandon Dixon: right. And there's legitimate reasons potentially to do this, but if I'm on the EDR and I'm looking at system level telemetry, I see that the Zoom process is running.
But I don't see how the user is interacting with Zoom and then giving over remote control. So let's pick on that insider example.
Someone could use Zoom and give remote control for the purposes of trying to exfiltrate data.
Ashish Rajan: Mm.
Brandon Dixon: But they could also do it from a legitimate perspective as well, where they're inviting someone in to help them troubleshoot something or get additional help on their system. Yeah. And maybe they can't do it themselves.
The problem with the [00:07:00] EDR systems today is that all we know is that Zoom's running. Maybe through SaaS logs, we recognize that someone gave remote control over, but we don't understand why they gave remote control over and what they did after that happened. And that's something that we're bringing light to today.
So it could be somebody who's doing something bad, could be somebody who's doing something completely normal.
Ashish Rajan: Interesting.
Brandon Dixon: But you should know what that is.
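To make the visibility gap Brandon describes concrete: process-level telemetry and a behavioral event describe the same moment very differently. Below is a minimal Python sketch with made-up event shapes; the field names are illustrative assumptions and do not reflect Ent AI's or any EDR vendor's actual schema.

```python
from dataclasses import dataclass

# What a process-level (EDR-style) view typically captures: Zoom is running.
edr_event = {"host": "LAPTOP-42", "process": "Zoom.exe", "action": "process_start"}

# What a behavioral layer would additionally capture: the user granted
# remote control, to whom, and in what context. Hypothetical fields only.
@dataclass
class UiBehaviorEvent:
    user: str
    app: str
    action: str            # e.g. "grant_remote_control"
    target: str            # participant who received control
    target_is_external: bool

def needs_review(ev: UiBehaviorEvent) -> bool:
    """Flag remote-control grants to participants outside the org."""
    return ev.action == "grant_remote_control" and ev.target_is_external

ev = UiBehaviorEvent("alice@corp.com", "Zoom", "grant_remote_control",
                     "support@unknown-vendor.net", target_is_external=True)
print(needs_review(ev))  # True: surface for context, not necessarily malicious
```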
Ashish Rajan: But a lot of mature practices already have an EDR. Sure. And maybe even an SSPM for, to your case, my SaaS services or my thick client applications.
Is this not visible there?
Brandon Dixon: I've not seen it. Yeah. I've got plenty of friends that do incident response, and the biggest challenge that they have is they know that someone activated Zoom.
Ashish Rajan: Yeah,
Brandon Dixon: they've activated remote control, but they don't understand what actually took place. So they don't know if somebody did something malicious or did some sort of insider activity.
And if the user is bad, they're obviously not gonna tell them what they did.
Ashish Rajan: Yeah.
Brandon Dixon: And if they're otherwise [00:08:00] naive or ignorant, they're not gonna think back to that moment. And so when they get interviewed, they're gonna be like, I don't know what I was doing. It was completely normal.
Ashish Rajan: Well, so they don't have the ability to understand the intent
Brandon Dixon: Exactly. They miss what it is that the user's trying to do. They can't see that behavior. So imagine you can now all of a sudden see that.
Ashish Rajan: Oh, because to your point, just because I know that someone has turned on Zoom doesn't really mean I can get the telemetry to tell if the remote control feature was activated.
Sure.
Brandon Dixon: Or I'll pick another example: AI is big right now, everybody's using it, and most companies wanna understand who's using AI, which services are they using, and then most importantly, why are they using it?
So there might be cases where there's legitimate uses for AI as part of someone's job.
They also need to then understand cases where it's being abused or sensitive data is potentially leaking out of the environment. Yeah. How do you get that unless you have some instrumentation? Yeah. Especially with the fact that AI is being embedded into all of these tools. Yeah. How are you supposed to keep up with that?
That's the biggest [00:09:00] challenge.
Ashish Rajan: Because are we focusing on the wrong problem then? 'cause it feels like a lot of people who build security awareness programs, and obviously you've kind of pulled out the EDR pieces, but I would think a lot of people would hear the example that you gave for Zoom and go, oh, I've got training enabled for this as well.
Is that a good control for this?
Brandon Dixon: I haven't seen the training be really effective.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: Right. Another type of attack that we see that goes across multiple channels is ClickFix.
This is a very simple attack that's effectively trying to get someone to manipulate their own system
by making it appear like there's some sort of legitimate reason; they're getting social engineered on the fly.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: Right. You need to go and enable this, and here's how you go and do it: you hit Windows+R, go to Run, type in PowerShell, then paste this thing that we've already preloaded into your clipboard. And it's highly effective.
It's effective because people still don't understand what's normal and what's not.
Ashish Rajan: Yeah,
Brandon Dixon: and [00:10:00] honestly, I think that's a failure of security: why do we have to wait for the boom to occur to try and educate the user? Is there not a better way to surface it sooner and deter them from making that mistake?
If I'm a program manager and I never use PowerShell, I haven't used PowerShell for over six months or ever in my job, and all of a sudden I'm doing that, that seems weird. Right? We should be able to point that out. And if we see that there's code inside of your clipboard that will manipulate your system, and it came from a website you haven't browsed before,
yeah, that's also weird. So I think that training is one element, but it's tough because these people have jobs to do. You can't put that cognitive load on everybody all the time.
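The two "weird" signals Brandon lists, a user with no PowerShell history suddenly launching it, plus system-manipulating code in the clipboard from an unfamiliar site, compose into a single detection idea. A minimal Python sketch with hypothetical event feeds and baselines, not a production rule:

```python
def clickfix_suspect(user_history: set[str], clipboard_source: str,
                     visited_sites: set[str], clipboard_text: str,
                     launched: str) -> bool:
    """Combine the two anomalies into one signal.

    user_history: process names this user normally runs (hypothetical baseline)
    clipboard_source: origin URL of the current clipboard contents (assumed feed)
    """
    never_uses_shell = "powershell.exe" not in user_history
    unfamiliar_source = clipboard_source not in visited_sites
    looks_like_code = any(tok in clipboard_text.lower()
                          for tok in ("iex", "invoke-", "downloadstring", "-enc"))
    return (launched == "powershell.exe" and never_uses_shell
            and unfamiliar_source and looks_like_code)

# A program manager who has never run PowerShell pastes preloaded code:
print(clickfix_suspect(
    user_history={"outlook.exe", "excel.exe", "zoom.exe"},
    clipboard_source="https://fake-captcha.example",
    visited_sites={"https://intranet.corp"},
    clipboard_text="powershell -enc SQBFAFgA...",
    launched="powershell.exe"))  # True: intervene before the boom
```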
Ashish Rajan: But then there are also two sides to this. One is, to what you said, the action of me opening up Zoom and giving someone remote control.
But then there's the other side: I'm a cloud-first organization, I'm a developer who logs into a cloud environment. Are these signals [00:11:00] different, or what do they look like in a cloud-first kind of environment, where most of it is cloud native?
Brandon Dixon: I mean, even in cloud native systems, you still have to understand what people are doing.
Ashish Rajan: Mm.
Brandon Dixon: Right. Is it normal for their job or not? Is that a standard behavior that should be taking place? There are a lot of normal things that are innocuous to a job that can lead to mistakes, even in a cloud environment, that could cause a potential breach. Mm-hmm. It only takes getting tricked into adding a user, increasing permissions, putting data in the wrong spot.
Just because it's in the cloud doesn't mean that you've somehow safeguarded it completely.
Ashish Rajan: Yeah.
Brandon Dixon: There are a lot of corner cases that just occur across the operating system and what it is that people do.
Ashish Rajan: You actually had a data example that you were sharing with me. I'd love for you to share that one, the WhatsApp one.
Brandon Dixon: Oh,
Ashish Rajan: yeah, yeah, yeah. 'cause I think I was gonna ask a question about it, but if you don't mind sharing the example of what you had seen.
Brandon Dixon: We're deployed within, you know, Fortune 500 environments. And one of the [00:12:00] environments in particular is a global organization, and one of the communication tools that they have is WhatsApp.
Mm-hmm. So they have WhatsApp on the web and they have WhatsApp, the thick client, installed on the desktop. And WhatsApp is a sanctioned tool within the business, so it's okay to use it. Yeah. They communicate with clients that way; that especially occurs in the Middle East and in Asia as well.
So they're using WhatsApp, and in this particular case, someone within Human Resources wanted to go and get a summary of the patient records that they had. What they did was they went and activated Meta AI directly within WhatsApp. So now we've gotten to a point where AI is becoming more ubiquitous.
It was inside of the application: an unsanctioned AI in a sanctioned communication tool. And so what happened there is that person went to try and put those patient records directly into Meta AI to summarize them. Yeah. And if you would've let that occur, that's a HIPAA violation.
Ashish Rajan: Mm.
Brandon Dixon: Right.
So it's getting increasingly complex now that [00:13:00] you have all of these different controls. And I was talking to one of my buddies, I'm like, how would you have even written a rule, from an incident response perspective, to stop that from happening?
Ashish Rajan: Yeah. So what would telemetry look like in that scenario, in the traditional sense?
Like, what would I see if I was looking at the
Brandon Dixon: process.
Ashish Rajan: Yeah,
Brandon Dixon: I think you would see the process. But Meta AI would be, I think, buried within WhatsApp, and it would probably even be part of that network traffic as well. I don't know if they would call it out separately.
Ashish Rajan: But then does that,
Brandon Dixon: that's it. You know that WhatsApp was running, but what were the contents? You don't know,
Ashish Rajan: because there's no copy-paste information coming across that says I pasted information somewhere.
Brandon Dixon: Or what if you dragged the file? What if it's not even a copy or paste?
Ashish Rajan: Yeah.
Brandon Dixon: Okay.
So I mean, DLP aims to try and disrupt some of these things. Yeah. But it's not gonna be perfect, because it requires that you instrument all of these different applications. So what we're doing is we sit above all of that.
Ashish Rajan: Yeah.
Brandon Dixon: Within Ent AI, we're just a behavior layer within the operating [00:14:00] system that understands what the user is doing.
Ashish Rajan: Yeah.
Brandon Dixon: We're not attached to any given application, but we understand when people are engaging with WhatsApp: what they're clicking, how they're clicking, what they're doing.
Ashish Rajan: I guess even if a SIEM or an EDR actually had the telemetry, to your point, they would also only have: I've gone on WhatsApp.
Yeah.
Brandon Dixon: And this is not a knock on EDR. I think EDR is important to understand how the system gets manipulated. Yeah. And ultimately, when an organization's compromised, there's generally some lateral movement taking place and the adversary wants to be persistent.
Ashish Rajan: Yeah.
Brandon Dixon: But I think the problem is, is that living off the land techniques have become more prevalent.
Ashish Rajan: Yeah.
Brandon Dixon: And that is just more innocuous behavior, using tools that are part of the enterprise.
Ashish Rajan: Actually, as you say that, the one thing that came to mind was context. You mentioned a global organization working across maybe the Middle East or Asia. Me sitting in the US may not have the context for what the policy may [00:15:00] be on the other side of the planet, even though we're in the same company. That's right. Have you seen that as well then?
Brandon Dixon: Yeah, I'll come at it from a different angle. So, short answer, yes. But also, as an analyst, my job would historically have been to understand who is the person that's involved in the incident.
Ashish Rajan: Yeah.
Brandon Dixon: Right. So I wanna understand what is their role, what do they do, what is normal for them, and what's not normal, to determine whether or not the actions that I'm seeing in traditional telemetry match up with what's expected. And the biggest challenge has been: how is it that you understand what's normal for a user?
Ashish Rajan: Yeah.
Brandon Dixon: Like if it's a role that I'm not familiar with, like, I don't know, a sports betting director or something. Yeah, right. I don't know what that person does throughout the day. Yeah. I don't know what's normal. I don't know where risk could potentially occur or something weird might take place. But with AI and where we are right now, by seeing that behavioral layer, I can understand what that person does throughout the day, what's normal for them, what's not.
And then see areas of risk, like, oh, they're doing sports betting right here, a [00:16:00] risky transaction might occur. As an analyst, I might wanna look at that activity if I suspect that there's something wrong with this person and how they're acting.
Ashish Rajan: But, I mean, obviously these days most people are using AI agents.
Brandon Dixon: Mm-hmm.
Ashish Rajan: And if you're focusing on behavior as a CISO or a detection engineer or a security engineer for that matter, is the behavior going to be different? And if it is going to be different, would something at the user level, or user space level, be able to pick it up?
Brandon Dixon: I mean, the way that we see people using AI right now, there's an increase of local usage on the endpoint again.
So we're getting to a point where the hardware is sufficiently powerful that it can run these AI models locally. I don't have to go and put everything into the cloud. And from a cost perspective and a privacy perspective, people are starting to run these things more locally to do things on their behalf.
So what we're seeing, at least, is cases where someone is instrumenting AI. So maybe they're using Claude Cowork to do their job and they wanna get some assistance from AI, [00:17:00] and so they activate Claude Cowork and they give it some instructions. Mm-hmm. And then they hit the button, and that thing's gonna go do what it's gonna do.
Ashish Rajan: Yeah.
Brandon Dixon: But because you're at the endpoint, you can see how the system's being manipulated, and you can proxy that agent traffic as well. So it allows us to understand the handoff between the human, then the AI, and then back to the human again.
Ashish Rajan: Oh. But across multiple agents as well then, if you're looking across endpoints?
Brandon Dixon: Well, we look predominantly at a single endpoint.
Ashish Rajan: Yeah.
Brandon Dixon: If those agents are instrumented in the cloud or they're doing something else, we're not gonna be able to see all of that activity. But if it's running locally on the system and it's manipulating it, we will see it.
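One way to picture "proxying that agent traffic" is a local intercepting proxy that the agent's HTTP(S) traffic is routed through, so each model call can be logged next to the user's instruction. A minimal sketch as a mitmproxy addon; the host list is an illustrative assumption, and this is not how any specific product instruments agents:

```python
# log_agent_calls.py - run with: mitmdump -s log_agent_calls.py
# Assumes a local agent configured to route its HTTP(S) traffic through
# this proxy (and to trust the proxy's CA certificate for TLS interception).
from mitmproxy import http

AI_HOSTS = {"api.anthropic.com", "api.openai.com"}  # illustrative, not exhaustive

def request(flow: http.HTTPFlow) -> None:
    # Record each model call the local agent makes, so the human -> AI -> human
    # handoff can be reconstructed later.
    if flow.request.pretty_host in AI_HOSTS:
        size = len(flow.request.content or b"")
        print(f"[agent] {flow.request.method} {flow.request.pretty_url} "
              f"({size} bytes of prompt/tool data)")
```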
Ashish Rajan: Interesting. And I guess it goes back to the behavior question again then.
Bringing it back to the behavior piece: I've found, over the years of working in companies, that there's nothing really normal, or maybe normal is not the word, there's nothing standard about my behavior today. Yeah. I feel like having coffee today; tomorrow I feel like I'm a tea person.
Brandon Dixon: Yeah.
Ashish Rajan: But obviously, how do you account for that kind of a transition?
I mean, because we're [00:18:00] still talking about humans here.
Brandon Dixon: Yes.
Ashish Rajan: And, for lack of a better word, we are pretty sporadic and very moody and all that. Yes. Are behavior systems mature enough to understand those kinds of patterns from a work context? Obviously I went for a very normal human context, but I imagine it's very similar when you look at it from an endpoint perspective.
Brandon Dixon: Yeah, I think where we've seen previous incarnations of this are systems like UEBA, user and entity behavior analytics, and where those struggled is they simply tried to model independent variables or anomalies. Like, is this person working their normal time? Are they doing their normal process?
Ashish Rajan: Are they from the right country? Sure. Yeah.
Brandon Dixon: There are a number of different factors that can create anomalies, but as you stated, sometimes I need to work on the weekend because maybe it's a crunch, or I wanna work late at night because I'm excited about a problem.
Ashish Rajan: Yeah.
Brandon Dixon: Like those anomalies are not inherently malicious.
Yeah. There's nothing bad about that. But when taken independently, it's not particularly useful. Mm-hmm. So you have to put it in context. So to answer your question: yes, these systems are capable of [00:19:00] understanding what's normal for a user, and it's not just looking at these independent variables, it's putting them in context.
Ashish Rajan: Yeah.
Brandon Dixon: And it's trying to understand where the risk might occur.
So working late at night is potentially not a big deal, but working late at night and giving remote control of your system to someone else outside the business Well. You know, that's kind of odd, right? Like there's a context there.
Ashish Rajan: Yeah.
Brandon Dixon: Yeah. That I think has historically been missing.
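Brandon's point, context over independent anomalies, can be sketched as scoring co-occurring signals together instead of alerting on each one alone. A toy Python sketch; the weights are made up for illustration, and a real system would learn per-user baselines rather than use a static table:

```python
def contextual_risk(events: list[str]) -> float:
    """Score co-occurring signals together instead of alerting on each alone."""
    base = {"late_night_work": 0.1,        # benign alone: crunch time happens
            "grant_remote_control": 0.3,   # benign alone: IT support exists
            "external_participant": 0.3}   # benign alone: vendors exist
    score = sum(base.get(e, 0.0) for e in events)
    # The combination is what matters: escalate when all three co-occur.
    if {"late_night_work", "grant_remote_control",
        "external_participant"} <= set(events):
        score += 0.3
    return min(score, 1.0)

print(contextual_risk(["late_night_work"]))                        # 0.1 - ignore
print(contextual_risk(["late_night_work", "grant_remote_control",
                       "external_participant"]))                   # 1.0 - review
```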
Ashish Rajan: So for people who would've used UEBA systems before, and maybe they are scarred by it, as they walk down the RSA floor (since we're obviously at RSA) and try to think about, hey, what's the uplift that I'm looking for in this AI world that I'm moving towards,
what's your thinking on how they should approach this as a program? Is this just a program that is now API driven, or is it more endpoint focused? 'cause they're also looking at behavior being pulled in from multiple sources and everything. So first, is that thinking still accurate? And if it's not, [00:20:00] what's the new level of thinking that they should be looking at these programs from?
Brandon Dixon: I mean, I think pulling in behavior and having some central security organization is not gonna go away.
Ashish Rajan: Mm-hmm. Right.
Brandon Dixon: I don't think we disrupt that entirely, but I do see more workloads coming back to the endpoint because of AI. Mm-hmm. And that's incredibly exciting: people who historically have not been able to do certain jobs can now do them because AI is capable of helping them. But as those workloads come back to the endpoint, it creates risk.
There's risk of data leaks, there's risk of deletion, there's risk of exfiltration of information, risk of increasingly more mistakes. Mm-hmm. And we read about these in the news. And so I think that the endpoint is just one pillar of many. You have identity, you have the network stack, you obviously have cloud as it exists. But the endpoint's very important.
Ashish Rajan: Mm-hmm.
Brandon Dixon: And I think that for too long we've just relied on the EDR solutions to backhaul everything to the cloud and simply react when something bad occurs. If I'm [00:21:00] capable of modeling behavior, either user or agent behavior, at the endpoint where the workload is actually running,
Ashish Rajan: yeah,
Brandon Dixon: then I'm in the best position to actually intervene and disrupt somebody, stop them from making a mistake before it actually occurs, right?
And if I can successfully do that, there are no tickets, there are no alerts, there's no downstream theater that occurs in security today. And at the same time, I can educate the user so they don't do that again. They recognize they violated policy or did something they shouldn't have.
Ashish Rajan: Yeah.
And do you find like the should the focus, if someone's trying to build this as well, 'cause obviously some people may already have a program, they're uplifting it, but some people may think, hey, maybe. My stand on how I deal with the risk that AI poses is by behavior because I understand that there would be a lot of AI agents running as CLI on my browser extensions as plugins.
Brandon Dixon: Yep.
Ashish Rajan: I may even have AI enabled in the IDE for coding. There's all kinds of AI, plus the SaaS-enabled AI as well. For people who are taking those first steps, even before they [00:22:00] go into the solution part, would they be looking at telemetry, or what should the first few steps be?
'cause I almost feel like simply having a solution which is behavior based is not gonna answer the question, 'cause you kind of need some groundwork before you can even have that happen. What are some of the groundwork things they should already have, to enable a successful start of a program?
Brandon Dixon: Yeah, not to plug our own stuff, but one of the problems that we saw in traditional solutions is that nobody had that behavioral layer. I have not seen it.
Ashish Rajan: Oh.
Brandon Dixon: If it existed, I wouldn't have built it.
Ashish Rajan: Right.
Brandon Dixon: So what we looked at was that there was no... and we've talked to enough larger, like Global 2000, customers at this point that many of them have tried to create their own, which is interesting.
They've taken signals, they've tried to mint their own signals to understand behavior using the logs that they have access to today, and it's just sparse. It doesn't tell them exactly what happened. They're trying to piece it together indirectly through an EDR solution or their SIEM in some form.
And the problem that they have is they don't get enough [00:23:00] context. They can't understand what's happening. So if someone's building this program, what I would tell them is: do you actually understand who your risky users are? And more importantly, why are they risky? Because most people can't answer the question of what's actually happening in their business.
They have these draconian policies that they draft, where they say, this is how the business is supposed to be operating, and this is how data's supposed to be handled.
Ashish Rajan: Yeah, yeah.
Brandon Dixon: And it's only as good as its ability to enforce it at a control point.
Ashish Rajan: Yeah,
Brandon Dixon: and the problem is that those policies are overly broad and the control points are not sufficiently deep, and they miss context.
So if you're gonna roll out a program, you need to kind of look at the solutions from a first principle perspective, which is what we did. And we're like, nobody does the actual behavioral layer and that's what we created.
Ashish Rajan: So to kind of put that into, I guess, another context: you're still able to pull the telemetry from your endpoint, your XDR, EDR, SIEM, whatever, and make an intelligent
[00:24:00] understanding of it, to make sense that, oh, this is me, or Ashish, trying to copy-paste sensitive data into WhatsApp. Mm-hmm. Whereas traditionally what has happened is the connectivity was never there; the tissue was never connected together for this to be possible.
Brandon Dixon: It was that, and it's also that the data wasn't there.
Take the example of all of our logs, and you're in WhatsApp. Yeah. I don't have the contents of your files or your WhatsApp or any other engagements, semantically, to understand what's normal and what's not. So DLP is gonna run a pattern expression on the box, and it's gonna say, this looks like a social security number.
But it doesn't have the context for where that occurred. It's simply doing pattern matching; it's missing the surrounding context. How do you get that? You have to be at the endpoint.
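The "pattern expression on the box" is easy to reproduce, and so is its blindness: the match fires identically whether the number sits in a test fixture or is leaving through a chat app. A minimal Python sketch:

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # classic DLP-style pattern

def dlp_match(text: str) -> bool:
    """All a pattern matcher knows: something shaped like an SSN appeared."""
    return bool(SSN.search(text))

# Identical verdicts, wildly different risk; the context lives elsewhere:
print(dlp_match("unit-test fixture: 123-45-6789"))                 # True
print(dlp_match("pasting 123-45-6789 into Meta AI via WhatsApp"))  # True
# A behavioral layer would attach the missing context: which app, which
# user, what they were doing before, and whether the destination is sanctioned.
```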
Ashish Rajan: Yeah. Actually, the more we talk, the more I realize that all these systems, maybe because of the nature of how the industry has worked, are all isolated from each other as well.
Brandon Dixon: Yes.
Ashish Rajan: By design
Brandon Dixon: and they're sparse. They don't see
Ashish Rajan: each other
Brandon Dixon: again. They look at the [00:25:00] limited amount of information that they have access to. That visibility is not enough to make an informed decision.
Ashish Rajan: Interesting. Wait, so what about people who may think, I have a data lake, man, I got this. Can I just use AI to build that context in a data lake?
And how realistic is that? You guys are building a whole product around it, so I'm curious, have you had customers who've gone, I can build this, don't worry, with a data lake and AI? Is that realistic?
Brandon Dixon: I think if you can instrument your system, sure.
Ashish Rajan: Yeah.
Brandon Dixon: But who wants to maintain instrumenting a system?
You're not gonna get your EDR agent to be able to do that additional collection that's required.
You're not gonna be able to take AI and just magically have it show up on your endpoints and reliably run, right? And be able to do that with sub-second performance to intervene when a mistake's going to occur.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: So the short answer is no. I mean, I think you can obviously collect sparse signals like we've been doing, but how has that worked? Compromises still occur. ClickFix is still successful. Insider risk is a big deal.
Ashish Rajan: Yeah,
Brandon Dixon: Living off the land is how [00:26:00] the adversaries operate. You have all of these things that are now normal for business, and yet these traditional solutions don't capture it.
Ashish Rajan: No. No.
Brandon Dixon: And it creates a bunch of... like, I hate the trophy security operations thing, they're drowning in alerts and all this stuff. It is true though.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: It does. And a lot of these alerts don't need to exist. They exist because we've got dumb control points. We've instrumented a control point in a dumb way, to have it go and light up an alert and force a user to go and triage it.
And now we're gonna have an AI SOC that, magically, without the context, is gonna go and try and triage this, without understanding what's normal for the business, what's normal for that user, what's normal for that system.
Ashish Rajan: Yeah.
Brandon Dixon: They don't know.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: And so they're gonna try and piece it together on the fly.
And it's not to say that those solutions can't be successful.
Ashish Rajan: Yeah.
Brandon Dixon: But I think they're fundamentally flawed. They're missing an understanding of who is the person, what are they doing, what's the intent behind their job, and what they're doing throughout the day.
Ashish Rajan: Yeah. Yeah.
Brandon Dixon: And that allows you to help understand: is this a [00:27:00] problem or not?
Ashish Rajan: That's a good one. And for people who are at least thinking about these programs, I feel like we covered enough sides for them to make more informed decisions. So that was all the technical questions. I have got the snack wars here. Okay. I'm gonna bring some snacks over. I mean, I did tell you what the favorites are.
So we have the kangaroo, the crocodile,
Brandon Dixon: Yeah.
Ashish Rajan: the British, and this. So which one are you gonna go for?
Brandon Dixon: I'm going for the kangaroo.
Ashish Rajan: All right. I'm gonna go for the crocodile this time, 'cause I've been having kangaroo this morning. What are your thoughts on the crocodile jerky?
Brandon Dixon: It's good.
Ashish Rajan: Is it like you expected it to be?
Brandon Dixon: I love it. Yeah. I mean, it's a little bit more chewy. It's got a toughness to it.
Ashish Rajan: Like a gamey meat, would you say?
Brandon Dixon: No.
Ashish Rajan: No, but in a good way.
Brandon Dixon: In a great way.
Ashish Rajan: This is your first kangaroo though. You haven't had it before? Yeah,
Brandon Dixon: I've never had this. I would eat a whole bag of this.
It is like one of those like dangerous things. You put a bag in front of me and it's like, man, I'm gonna probably eat that whole thing.
Ashish Rajan: Yeah, probably take it for your run as well. I mean, I don't know if it's a good running snack though. I've got three [00:28:00] fun questions for you, man.
Brandon Dixon: Yeah.
Ashish Rajan: First one being, what do you spend most time on when you're not working on solving the endpoint
Brandon Dixon: Uh-huh,
Ashish Rajan: behavior problem of the world?
Brandon Dixon: So for me, I have three kids, so obviously I'm busy with them. But outside of work and family, I run. I do ultramarathons and do my best to kinda stay in shape that way.
Ashish Rajan: Yeah. Awesome. And, uh, second question. What is something that you're proud of that is not on your social media?
Brandon Dixon: I mean, I don't know if it's directly on social media, but I'm particularly proud of, you know, growing up in Baltimore in a pretty rough area, and then being in the position that I am now. I'm incredibly fortunate. A lot of the circumstances of building companies and having success there is timing, luck, and skill.
And so I'm incredibly proud of the solutions that I've built in cybersecurity, and I feel like it's made an impact over time. So that's awesome. It feels good.
Ashish Rajan: I mean, yeah, I'm sure. And it doesn't sound like the path was easy, from what you shared. No, [00:29:00] no. Final question.
What's your favorite cuisine or restaurant that you can share with us?
Brandon Dixon: I'll pick a place, at least in San Francisco where we are, on the theme of the week here at RSA. I was at Burma Love last night. So Burmese food. I love super spicy food. Like that tea leaf salad.
Ashish Rajan: What's tea leaf salad?
Brandon Dixon: Tea leaf salad.
Get the tea leaf salad. It's got a little bit of funk to it. Incredible flavor. Very, very unique. And it's wonderful.
Ashish Rajan: Which is... what's a Burmese dish? So tea leaf salad. Anything else that stands out for you?
Brandon Dixon: Tea leaf salad. Um,
Ashish Rajan: is it vegetarian? I'm assuming
Brandon Dixon: there's vegetarian as well. So curries. Oh,
Ashish Rajan: and the tea leaf salad is vegetarian?
Brandon Dixon: Yes.
Ashish Rajan: Okay. Right. Okay.
Brandon Dixon: Yeah. So, vegetarian there, and I believe it's vegetarian. I'm pretty sure they use like a fermented style. It's not like a dressing, a soy...
Ashish Rajan: like a soybean? No,
Brandon Dixon: it's like, they mix something into it. It's fermented and tastes
a bit funky, but it's really good.
Ashish Rajan: Alright.
Brandon Dixon: Okay.
Ashish Rajan: Yeah. Okay.
Brandon Dixon: But a great, incredible place. I love Burmese food. Love Chinese food. Oh, traditional Chinese, Sichuan.
Ashish Rajan: [00:30:00] Spicy. Spicy Sichuan. Yeah. Fair. Okay, cool. And that's all the questions I have from the fun perspective. Where can people find you on the internet, so we can learn about the work you guys are doing as well?
Brandon Dixon: Pretty simple: Ent AI, ent.ai, is the best way to get to us and learn about what it is that we're building. And then you can find my social media directly from that page as well.
Ashish Rajan: Yeah. Awesome. I think I'll put your LinkedIn in there as well. Perfect. Awesome. But thank you so much for coming on the show.
Brandon Dixon: Thank you.
Ashish Rajan: Thank you. Thanks everyone for tuning in as well. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on cloudsecuritypodcast.tv, our website, or on social media platforms like YouTube, LinkedIn, Apple Podcasts, and Spotify.
In case you are interested in learning about AI security as well, do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you the top [00:31:00] news and insights from all the experts we talk to at Cloud Security Podcast,
you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.