How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shared that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game. The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant also walks through the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability. In this episode we compare AI SOC to traditional MDRs and talk through real-world "bake-off" results, where an AI SOC had 99.3% agreement with a human team across 12,000 alerts but was 11x faster, with an average investigation time of just four minutes.
Questions asked:
00:00 Introduction
02:00 Who is Grant Oviatt?
02:30 How to Establish Trust in an AI SOC for Regulated Environments
03:45 Explainability vs. Traceability: The Two Pillars of Trust
06:00 The "Hard SOC Life": Pre-AI vs. AI SOC
09:00 From AI Skeptic to AI SOC Founder: What Changed?
10:50 The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
12:30 What Regulated Bodies Expect from an AI SOC
13:30 Data Management: The Key for Regulated Industries (PII/PHI)
14:40 Why Point-in-Time Queries are Safer than a SIEM
15:10 Bring-Your-Own-Cloud (BYOC) for Financial Services
16:20 Single-Tenant Architecture & No Training on Customer Data
17:40 Bring-Your-Own-Model: The Rise of Model Portability
19:20 AI SOC vs. MDR: Can it Replace Your Provider?
19:50 The 4-Minute Investigation: Speed & Custom Detections
21:20 The Reality of Building Your Own AI SOC (Build vs. Buy)
23:10 Managing Model Drift & Updates
24:30 Why Prophet Avoids MCPs: The Lack of Auditability
26:10 How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
27:40 The Future: From "Human in the Loop" to "Manager in the Loop"
28:20 Do We Still Need a Human in the Loop? (95% Auto-Closed)
29:20 The Red Lines: What AI Shouldn't Automate (Yet)
30:20 The Problem with "Creative" AI Remediation
33:10 What AI SOC is Not Ready For (Risk Appetite)
35:00 Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
37:40 Fun Questions: Ironman Races, Texas BBQ & Seafood
Grant Oviatt: [00:00:00] How does one establish trust in the regulated space, especially as an AI SOC? Skeptics out there, don't blame me. You know, I was you. If I think of 15 years ago, anything with AI detection in it scared me to death, and I was very skeptical. But do we still need the human in the loop? Today, our average time to complete an investigation is right around four minutes.
Most security analyst teams won't even start to look at an alert in four minutes. For 95-plus percent of our investigations, we're automatically closing items as false positives. For that additional 5%, we think bringing in a human is still the right approach today.
Ashish Rajan: I don't know if you know this, but in a regulated environment, whether it's in cloud, on-premise, or multi-cloud, there are certain kinds of models you can work with. There are certain kinds of applications of AI you could have, and one of them seems to be AI SOC. And to understand this a bit more, I had Grant Oviatt, who used to work for Mandiant and Red Canary, so he's done the whole SOC world, and he's now working with Prophet Security to build an AI SOC there.
We spoke about some of the requirements for AI to [00:01:00] make it more trustworthy for a regulated environment, and even for a non-regulated environment; the hype versus reality of how far you can go with an AI SOC in a regulated space; how much quote-unquote data management you may need to do; and a lot more in this episode of Cloud Security Podcast. If you know someone who's trying to build an AI SOC in a regulated environment, or someone in a non-regulated environment considering an AI SOC, even if it's to replace an MDR, definitely check out this episode and share it with them as well. And if you have been finding Cloud Security Podcast episodes valuable, maybe you're here for a second or third time, I would really appreciate it if you take a quick second to support us by hitting that subscribe and follow button on Apple, Spotify, YouTube, LinkedIn, or wherever you listen or watch this episode. It really helps us grow and get bigger guests.
I look forward to hearing what you think about the episode in the comment section. I'll let you enjoy the episode. Talk to you soon. Peace. Hello. Welcome to another episode of Cloud Security Podcast. I've got Grant with me. Hey man, thanks for coming on the show.
Grant Oviatt: Yeah, happy to be here. Thanks for having [00:02:00] me.
Ashish Rajan: We are gonna talk about the regulated environment but before that, could you just share a bit about yourself and your professional background?
Grant Oviatt: Yeah, happy to. So I am the head of security operations over at Prophet Security. We build AI SOC agents to do triage and investigation for security teams. So, happy to jump into today. Just as a forewarning, I am not a regulated-environment expert, but we do have customers in that space, so happy to jump in and traverse that world together: what are our customers' needs in relation to regulation support, how can we support them, things of that nature.
Ashish Rajan: Sure. And also, man, maybe to set this context as well, I guess the biggest thing that a lot of people talk about in the AI SOC space is trust. And a regulated environment requires a lot more, let's just say, love, for lack of a better word. So, the hard one first: how does one establish trust in the regulated space, especially as an AI SOC, I guess?
Grant Oviatt: I think there's a [00:03:00] couple different pieces to it, right? And since AI SOC is so new, there's a transparency and explainability piece to it that's super important. So, being able to see what data was gathered from the environment to make a decision. What were the inputs, or how was the data manipulated in order to be passed to a model or an agent to make a decision?
Then what was the resulting decision? And so, similar to a human brain, right? There is some level of what actual decision was made, but you see all the inputs, the process, and the outputs to make that line of thought come together. You're spot on: regulated environments have a high degree of scrutiny.
There are two elements to it, really. Explainability: can we trust the model's decision-making process, and how do we validate whether the model is coming to the right answer? And traceability, or transparency: what data was used, from the input perspective all the way to the output and coming to a final decision, and can we track the journey from a data [00:04:00] perspective to get to the right place?
So, can we ensure that there are no hallucinations, that the data is accurate and valid, and that the decision making is correct? Very similar to auditing human decision making, right? Like, is this a reasonable decision by a security professional, and are they gathering data from the right place? And so those are the two factors that become extremely important.
Even more on the data side when it comes to regulated environments.
Ashish Rajan: I guess, could you just double-click on explainability and traceability? I imagine most people know what a SOC or security operations center is, and what MDRs are, but what do explainability and traceability mean in this AI-driven SOC world that we're talking about?
Grant Oviatt: Yeah, it's a good question. I think traceability really comes down to data inputs. Like, what queries and API requests is the AI SOC issuing across different technologies? What [00:05:00] queries is it issuing across your SIEM? What are the requests that it's making? What are the responses that it's returning?
So really, tracing the inputs and the data transformation that's happening. You may gather 10,000 logs, then you're going to filter them down to just the set of IPs that are relevant to answering a particular question in the environment. Then that will be passed to a model to make some decision, an agent that's really focused on a given task, and then you'll see the output of that information: I was given this input, focused on this analysis, and here was the end result. And so that's sort of the traceability.
Explainability, to me, comes down to the decision making: regardless of the chain of lineage of the evidence, given this particular situation, is it reasonable to make this decision with this input? And, in an explainable way, is the analysis understandable and reasonable for a security practitioner, [00:06:00] right?
Ashish Rajan: And maybe just to unpack some of the realities of a regulated environment: you've obviously done this, I think from memory you were at Mandiant before, so you've done the hard SOC life, as people call it, the real SOC life, the pre-AI SOC life.
Yeah. What's the change that you've seen between the non-AI SOC world and the AI SOC world that we're going towards? And how did traceability change between the two, and why is it more important in the AI world? Was it obvious in the previous world? Is that why we didn't talk about it much then?
Grant Oviatt: I don't think it was obvious. It was just taken for granted, because people were doing the work. So explainability was really the focus rather than traceability, because you're expecting humans to produce the right queries and you're expecting reasonable analysis from someone in that professional role.
And so the validation tended to be on the reported output: I am doing this [00:07:00] investigation, here are the findings that I had, and here's a timeline of the evidence that's present. That's typically how it went. There might be some copying and pasting of timeline events, then a quick summary report of what you identified.
And that was good enough. You had some level of traceability from copying and pasting from logs, and then you had explainability from a typing perspective. Now what we're seeing with AI SOC is, you know, the average investigation is 40 to 50 queries across six different tools in a customer's environment.
And instead of just having best-effort copying and pasting of logs from a traceability perspective, you have line-by-line detail of everything that was gathered, all the evidence that was gathered, and all the decision making that happened along the way. And so I think explainability and traceability have been significantly raised as a bar for AI SOC, generally by our customers.
And with that expectation, regulators are looking for sort of a 10x magnification, I would [00:08:00] quantify it as, in terms of explainability and transparency, just given the skepticism with AI and the work product.
Ashish Rajan: And I guess that's what builds the trust as well, to bring it back to the trust that regulated environments require when you go to an auditor. I mean, we had this transition when we were going through the cloud transition as well: oh, my cloud is doing automated encryption.
How do you know it's doing it automatically? And I think, to your point, this allows us to have that traceability, to be able to say, hey, here you can see Ashish applied the encryption, or it was applied, and we can follow it step by step, on the timeline that it happened as well. Is that where we're moving towards, to your point, where rather than just being able to explain the situation, we have a more auditable AI SOC workflow, if I want to call it that?
Grant Oviatt: Yeah. I believe that people, regulated or not, need to be able to go from the [00:09:00] raw evidence, to the query that was issued to gather it, to the decision that was made and the analysis process at a high level. I think that has to be captured by an AI SOC, regulated or not, just to build trust and say, this is doing the job that I expect. And that may be what I'd call the most useful feature that is never used, in the sense that at some point only auditors are looking at it. But that initial time spent with an AI SOC, going through that and just validating, for peace of mind, that it's consistently coming to the same decisions you would, is hugely important.
I don't wanna mince words. If I think of 15 years ago, anything with AI detection in it scared me to death, 'cause I knew it was gonna be the highest false-positive source in the entire company, right? And being a SOC practitioner, I felt the burn. I was very skeptical of how LLMs would perform in an AI SOC environment until I got into it and started experimenting, and then believed in it enough to help start an organization centered around it. So I've gone from skeptic to huge [00:10:00] proponent, now seeing the merits it's generating in customer environments. But there's a lot of heartburn from that AI tag in the past, and it's come a long way now and can be beneficial.
Ashish Rajan: Out of curiosity, what changed the skepticism? Because I personally still talk to a lot of people who are very skeptical: AI cannot do what I've done for 15 years, insert the experience here, or whatever. So out of curiosity, for the skeptics who are listening or watching, and I'm sure they're rolling their eyes as we're talking about this as well.
Yeah. What changed for you? I mean, considering you actually come from that world, you were exactly where the people watching and listening to this are. What changed for you, and what was the aha moment? Maybe they can also experience it themselves and validate for themselves that, hey, maybe there is something here.
Grant Oviatt: Yeah. Skeptics out there, don't blame me. You know, I was you. I think my experience started with just throwing some security alerts at ChatGPT, and the result was a coin flip, right? [00:11:00] Sometimes it was really compelling, sometimes it wasn't. It wasn't reliable enough to come to a place where I felt I could build an organizational security framework around it or enable a security operations team. But when it was great, it was great. And so it started becoming more of a consistency problem: how can you chunk up this problem of doing a security investigation, the way security operators think of it, into small bite-sized pieces that agents can tackle really well, instead of trying to eat the elephant, for lack of a better term?
And in breaking it down into composable pieces, we started seeing higher degrees of consistency and really fast responses. Those were the early signs for me of, okay, this is a tractable problem. I think we're at a place where this is gonna be meaningful.
It's also a very interesting world where, you know, we have an entire machine learning team that is looking at all the different model providers out [00:12:00] there. Your product can get better or worse overnight depending on how model providers change their models. And so, without writing a lot of code, you might see a 3% improvement somewhere, or a 0.5% degradation in consistency or quality of analysis.
And so we have a whole operational team that's kind of wrangling the weather, so to speak, looking at what the improvements are and making sure that customers are getting the benefits of that. So, very interesting problems, exciting times. But we also knew that if security operations is here today, and if we can make investigations consistent, it only continues to improve as model providers improve.
And we see the benefits of that as well.
Ashish Rajan: Yeah, I guess, because you mentioned you've been working with regulated customers, I'm curious: what are some of the things that are expected by a regulated body from an AI SOC? 'Cause I imagine it's not just government, but also your financial sector, your insurance sector, your health [00:13:00] sector.
There's so much. What is expected? 'Cause when I was thinking about this, I'm like, there are so many regulatory standards like HIPAA and everything else, but I don't know if there's an AI SOC standard. So how do you translate that in this regulated world so people feel a bit more comfortable that, hey, if I show this to an auditor, they'll be okay?
And obviously, none of us are auditors here; we're just sharing a "this is my personal experience" kind of thing. So I wanna preface it with that.
Grant Oviatt: Yeah, thank you for giving me that get-out-of-jail-free card there. But I can talk about, you know, our overall customer experience.
I think there are two pieces. One we've been talking about: the product side, the auditability side. Can I look at the components and trace where the data came from, see the raw evidence and where it was gathered, and have a really strong understanding of how answers were derived as part of this investigation?
There's a data management piece that's really important here too, [00:14:00] right? Perhaps the most important thing that regulated customers are managing is PHI or card data processing, and what should an AI SOC process? There are two things here that are really helpful in our world. One is, for us specifically, we're not a SIEM, so we don't require you to stream all of your data to Prophet to make decisions.
Hugely valuable, right? From a cost perspective you don't want to duplicate that data anyway, but it also reduces the exposure. So we make point-in-time queries across your different security tools to make decisions, much like a person would: when they see a security event, they're gonna log into the different tools and issue queries.
So the subset of data that we're processing is much, much smaller and much more focused, which gets us out of a lot of problems from a data management perspective. Customers also have total control of what they want to send to us and the capabilities that Prophet can use. And so we have a [00:15:00] healthcare regulated customer that said, hey, my HIPAA data, I'm just not even gonna put that in the purview of Prophet to look at.
We're gonna start without going down that path, and we can explore it later. And others may say, hey, I'm gonna restrict this log set to just these fields, so we stay in PII land rather than PHI land, and Prophet can still make investigations using that content without having visibility into things that would be sensitive or that regulators wouldn't allow us to see. The next piece is being able to move the data plane of our environment.
So we have a SaaS model, but we also have a bring-your-own-key model, or a bring-your-own-cloud model, so the data can live defensibly within the organization's perimeter. This has been really meaningful for financial organizations. There's defensibility in that they own all of the evidence and data.
They can remove our access at any time or blow away the data plane and all of that raw evidence is gone. Prophet is no longer able to perform investigations on that [00:16:00] data. So that's been really meaningful as well.
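A rough sketch of the data-minimization pattern described above: point-in-time queries against the customer's own tools, with a customer-controlled field allowlist applied before anything reaches a model. The tool names, fields, and connector function are assumptions for illustration, not Prophet's actual interface.

```python
from typing import Callable

# Customer-controlled allowlist: only these fields ever leave the source tool.
# (Illustrative fields; a healthcare customer might strip anything PHI-adjacent.)
FIELD_ALLOWLIST: dict[str, set[str]] = {
    "siem": {"timestamp", "src_ip", "dst_ip", "event_id", "action"},
    "edr":  {"timestamp", "hostname", "process_name", "parent_process", "sha256"},
}


def point_in_time_query(
    tool: str,
    query: str,
    run_query: Callable[[str, str], list[dict]],  # hypothetical connector into the tenant
) -> list[dict]:
    """Issue one scoped query against the customer's own tool (no streamed copy),
    then drop every field that isn't explicitly allowed for that tool."""
    allowed = FIELD_ALLOWLIST.get(tool, set())
    rows = run_query(tool, query)
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]
```

Because only the scoped, filtered result of each query is processed, there is no duplicated data lake to govern, and the customer can tighten the allowlist or revoke the connector without involving the vendor.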
Ashish Rajan: I'm curious what it takes to build this, 'cause you said that these days your AI SOC could be something you host locally on your own cloud, or you can use SaaS.
The one question that people have been tossing around in their minds as well is: everyone is turning "my data is being used for training" into an option that isn't really optional, it's already opted in, and I imagine there's a lot of skepticism around that as well. I'm curious how you answer those kinds of questions about whether our data is being used for training, or how your AI is making calls without having the context of the integration or any of that.
I would be skeptical from that perspective. So, yeah, how do you normally address those kinds of concerns?
Grant Oviatt: Yeah, great question there. Our architecture's entirely single-tenant; there's no cross-contamination. And you can think of it like onboarding a new security staff member to the team, like it's a new analyst.
All the learning that [00:17:00] happens within your tenant stays in your tenant. And we strictly enforce, even at a contractual level, that your data is never used for training or improving the model outside of your tenant. It improves the product for you, but there's no model training on the raw data. So that's another huge component that's meaningful for regulated environments: that single-tenant architecture.
There's no option otherwise, and that assurance that no raw data is being used to train models.
Ashish Rajan: Yeah. And in the AI space specifically, what are some of the core architecture elements you would need to make it more trustworthy for people in the regulated, or even the non-regulated, space as well?
Are there certain things people should expect an AI SOC to have in terms of architecture elements, core architecture, that make something like an AI SOC more trustworthy for the organization?
Grant Oviatt: I would say just be [00:18:00] on the lookout for flexibility, right?
Hey, I want the data plane to be able to live in my environment, or to manage the evidence that's collected by Prophet. I think there needs to be a separation between your AI SOC provider and your data management, and you should have control of your data. For anything AI-product related, the ability to have ownership of that data is paramount.
And then in regulated environments there's a drift, or a trend I would say, towards being able to bring your own everything, and that includes model providers. And so model portability is something that we've invested in as well, where regulated industries may have their own model gateway, with specific models that they are comfortable with and others that they're not.
So, being able to comply with those types of regulations too. They can manage the traceability of the inputs going to the model, they're paying the outbound costs, and they're able to manage that through their own AI governance process. That's another [00:19:00] piece that we're starting to see: AI auditability and management. And looking out for organizations that are thinking that way in the AI SOC space is important even if your organization isn't there yet. I think it's a positive sign when a vendor is thinking about trust and management and continuing to support a regulated environment.
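As a purely illustrative example of what that model portability might look like in configuration terms: a per-tenant policy that routes every LLM call through the customer's own gateway and pins each agent task to a model their governance process has approved. All names and URLs here are made up, not Prophet's actual configuration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TenantModelPolicy:
    """Per-tenant routing: which gateway to call and which model each task may use."""
    gateway_url: str                 # customer-owned model gateway
    approved_models: dict[str, str]  # agent task -> pinned model identifier


# Hypothetical regulated tenant: only governance-approved models, via their own gateway.
FIN_SERVICES_TENANT = TenantModelPolicy(
    gateway_url="https://llm-gateway.customer.internal",  # illustrative URL
    approved_models={
        "triage": "approved-model-a",
        "summarization": "approved-model-b",
    },
)


def resolve_model(policy: TenantModelPolicy, task: str) -> str:
    """Fail closed: tasks with no approved model are rejected rather than improvised."""
    if task not in policy.approved_models:
        raise ValueError(f"No approved model for task '{task}' in this tenant")
    return policy.approved_models[task]
```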
Ashish Rajan: I guess most regulated organizations would already have an MDR. Is the AI SOC component changing the need for that, or is it more of an added layer on top? A lot of people are using AI SOC as a layer on top of their SIEM, or just saying, hey, I'm replacing my tier one, tier two, or whatever. But a lot of people also have MDRs taking care of the entire problem for them. Is an AI SOC still relevant in that world with an MDR in place?
Grant Oviatt: Totally. I think there are two things to talk about there. One is, we have many customers that are moving from MDR [00:20:00] to AI SOC as their holistic approach for doing investigations, for many reasons. One is, you know, our average time to complete an investigation is right around four minutes over the last quarter.
Most security analyst teams or MDR teams, even the best ones that I've been a part of in the past, won't even start to look at an alert in four minutes, much less complete the investigation in that time. So there's just a level of responsiveness that can't be contended with, and that has been really meaningful.
I think the other piece that's really interesting, where we have a few customers that coexist and have both, is that AI SOCs aren't scared off by custom detections or things that your team has generated. That's often 20 to 30% of the burden of a security operations team, where they have specific applications or custom detections that their security engineering team has developed that their MDR can't reasonably look at.
And when you think about it, an MDR has hundreds if not thousands of customers, and they're focused on the base case that's consistent across [00:21:00] all of those customers and building detections around those spaces. So being able to adapt to your custom detections doesn't scale at a people level, but at an AI SOC level, one that's deployed for your environment specifically, it scales perfectly well.
And so we see some of that division of labor too, where it's, hey, my MDR contract is going for another two or three years, but I still have all these custom detections that are a problem for me. I would love to put AI SOC on that, and then revisit whether this is a replacement, or expand my budget to bring Prophet in in a broader way when that renewal is over. So we see some of that conversation too.
Ashish Rajan: So I guess, to your point then, for people who are probably not seeing this enough, there's also a band of people who may be wanting to build this on their own as well, people who are replacing their MDR and trying to build this themselves. Now, you've obviously been on the other side, where you were part of that team, doing the security operations piece and running the whole team. [00:22:00] What's the reality if I wanted to build this AI capability myself, regulated or not?
Grant Oviatt: It's a really hard problem. I think it's a fun thing to experiment with. There's a lot you can do on your own to build enrichment and build context, but trusting the decision making and that consistency with AI agents is very hard to do inside an internal organization. From a workflow perspective, we've talked to some creative companies that are now customers, where they've built a workflow in their SOAR or something else where they go grab data, then make an LLM call, then send the data back and make another LLM call.
And it's very strict logic to try to make a single investigation work. And then they're like, well, now I've gotta do it for the other 500 detections in my environment. And I was like, that sounds like a chore. That doesn't sound like an answer to your problem; you're just focusing your energy in a different place.
And it's a fun problem, but I [00:23:00] don't think it's a scalable way for a business to solve this. So my recommendation is: play around with it, get an understanding of how prompts work and how models work. But when you're looking to scale this and trust this and not worry about security problems, I would try out some AI SOCs and see how they compare to what you build.
Ashish Rajan: Because I'm already thinking about the fact that there could be model updates, there could be drift. How are you dealing with all of that while staying in a state where you don't drift away from the regulatory compliance or data sovereignty you may have to maintain for some of your regulated customers?
Grant Oviatt: Yeah. Regulated is a little bit different, where you may pin specific models to problems, or there's a single provider being used in that entire tenant. But as I mentioned a bit earlier, we have an evaluation team that is constantly looking at changes in upcoming models, and each agent is focused on a very specific task. For those tasks, the quality [00:24:00] improvements differ based on model type and version. And so we have a whole team intent on optimizing our product's output for customer security success, and that changes on a quarterly basis as new releases happen. So the answer is, we're kind of doing it the old-fashioned way.
We have a team that evaluates these manually and uses the labeled data to help make decisions and make customers better.
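A hedged sketch of what that "old-fashioned" evaluation loop can look like: replay a set of labeled historical alerts through the current and candidate model versions, and only promote the new one if it doesn't regress. The function signatures and model names are placeholders, not Prophet's internal tooling.

```python
from typing import Callable

# Each labeled alert: {"alert": {...}, "label": "false_positive" | "true_positive"}
LabeledAlert = dict
Investigate = Callable[[dict, str], str]  # (alert, model_name) -> verdict


def agreement_rate(alerts: list[LabeledAlert], investigate: Investigate, model: str) -> float:
    """Fraction of labeled alerts where the model's verdict matches the analyst label."""
    hits = sum(1 for a in alerts if investigate(a["alert"], model) == a["label"])
    return hits / len(alerts)


def should_promote(
    alerts: list[LabeledAlert],
    investigate: Investigate,
    current: str = "model-v1",
    candidate: str = "model-v2",
) -> bool:
    """Only roll a model update out to tenants if it doesn't regress on the label set."""
    return agreement_rate(alerts, investigate, candidate) >= agreement_rate(
        alerts, investigate, current
    )
```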
Ashish Rajan: But do you feel that, regulated or not, people who have gone down the path of building this on their own probably need to consider this part as well, to your point, to reduce hallucination? If there's a model update, suddenly the same prompt that worked before behaves a bit differently, and you're not getting the same answer twice. You don't want a discrepancy in the answer: the first time you investigate an incident it's a false positive, but the second time you do it, it's a true positive.
Now you're like, wait, which one was it?
Grant Oviatt: Yeah. And then MCP adds a [00:25:00] whole other layer, right? There are MCP servers that make it very easy to grab this information from your environment, but when you talk about regulated environments specifically, transparency is lost. We've evaluated a bunch of MCP agents and opted to build our own collection agents for that transparency piece, because today, when I ask an MCP agent a question, it'll give me an answer, but it won't give me the query that it ran to generate the answer.
So there's a mismatch in auditability from our perspective, in making it clear to you, a human, what happened, and that break in the chain is just one too many black boxes in the cycle for us to be comfortable pushing it to customers. I expect that to change. I'm hopeful that MCP agents can expand to be more scrupulous on the auditability side, track the REST API request that was queried on their end along with the results, and send that entire package over to maintain the evidence chain of custody, so to speak.
Today it's ask a question, get [00:26:00] an answer, and that's a little scary. So that would also be a concern if you're gonna go and build it yourself: MCP is the fastest way to do it in a lot of cases, but who knows what's happening at a bunch of different levels at that point.
Ashish Rajan: How far have we gotten into the AI-driven SOC world? Before, when you were working in a SOC, you had to do the quote-unquote explainability part yourself, spend hours trying to figure out, hey, is this alert or event an actual security event, is it an incident? Going from there, how far have we come with AI SOC, or maybe how far can we go, between the pre-AI world and what it is today with AI SOC?
Grant Oviatt: How far can we go? I think it's a significant difference. AI SOC today has largely been about replicating and improving the thought processes and investigative tempo that teams experience today.
So instead of your investigations taking an hour and a half, they take 10 minutes, and [00:27:00] they're at a consistent or higher quality than the senior analysts on your team. That's the same process that exists today, just improved in a very significant way. Where I think organizations are moving is: how do we expand this from analysis to support things like threat hunting capabilities and other parts of the team?
How can I ask bigger hypothesis-driven questions and have AI systems go pull larger sets of data for me and start to find unidentified threats in my environment? Or how can I manage my detection management and posture: how can I look over the alerts I've seen in the past, identify my gaps in line with the MITRE ATT&CK framework or similar, and suggest tuning recommendations?
So where I see AI SOC continuing to move is taking the groundwork out of SOC operations, the grunt-work-level tasks, out of people's purview. Instead, you operate more as a [00:28:00] manager in the loop, where I'm managing my detection program and saying, these are the risks that are important to me.
These are the things that are not working for me, I want to tune or look for new things in the space, and have that orchestration happen by an army of agents, with you getting the feedback to make decisions on whether this is helpful or is expending energy that's unneeded in your organization.
So I continue to think that AI SOC is going to shift to build security programs that I honestly dreamed of in past places.
Ashish Rajan: But do we still need the human in the loop today, or can we go fully autonomous, or as it's called, agentic?
Grant Oviatt: Yeah. For 95-plus percent of our investigations, we're automatically closing items as false positives with all of that explainability and transparency. For that additional 5% of things that are malicious or that we have a question on, we think bringing in a human is still the right approach [00:29:00] today.
Just to have eyes on, validate the activity, confirm remediation actions, and move on. For us, I think that's more of where the market is and less of where the product is. I think we're gonna move more and more to a state where you're only involved if you have an issue: a threat has been identified and it's been remediated in three minutes.
This has all happened, you get a rollup report of what's been observed, and your team isn't escalated to because the threat was mitigated; there's no further action to do. So I think we'll move further into that state. The technology is there. I actually don't think the operations teams are ready to be there, and that's okay.
I think trust becomes the important element.
Ashish Rajan: So are there any red lines, actions that you believe AI shouldn't be automating without human oversight day to day, especially given the previous experience you've had as a SOC person? You've clearly seen multiple environments, multiple technologies, containers, Kubernetes, throw in multi-cloud and anything you want.
Have you found any red lines where people [00:30:00] should totally not be thinking of automating with AI in that space? Or maybe the AI capability is already there in a particular space, and people may be trying to shove it in just because they have AI as the hammer in their hand and every problem is a nail?
Grant Oviatt: Yeah, my take on remediation is you still want a strict approach. Agents are very creative, maybe a nicer way of saying probabilistic. And so if you were to tell an AI agent, hey, go and remediate this file on a system, there are several different ways it could go and do that if it was just given API docs or access to an EDR tool. So I am a big believer today in strict coupling: instead of AI having creativity on the query part, there's a specific API that an agent is allowed to access, it performs one function, and that is the capability the agent has in your environment.
So I think with remediation today, constrain the creativity and really treat it more as a traditional integration in that [00:31:00] sense, where the AI SOC is managing the reasoning of when this is appropriate, but the action is a single call or a series of calls that are more deterministic. If agents are inspired and agentically performing remediative actions in your environment, rather than coming up with a playbook of things you should do and aligning those to REST API calls, that gets a little scary, and I'd red-line it, in my opinion.
But I think there's a very safe and manageable way to do so.
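A minimal sketch of that "constrained creativity" idea: the agent reasons about when remediation is appropriate, but the only actions it can trigger are named, single-purpose functions behind an allowlist, each wrapping one deterministic call. The action names and EDR client are invented for illustration.

```python
from typing import Callable


class RemediationRegistry:
    """The agent may only invoke actions registered here; each does exactly one thing."""

    def __init__(self) -> None:
        self._actions: dict[str, Callable[..., dict]] = {}

    def register(self, name: str, fn: Callable[..., dict]) -> None:
        self._actions[name] = fn

    def execute(self, name: str, **kwargs) -> dict:
        if name not in self._actions:
            # No improvisation: unknown or free-form actions are rejected outright.
            raise PermissionError(f"'{name}' is not an approved remediation action")
        return self._actions[name](**kwargs)


# Illustrative deterministic actions wrapping a hypothetical EDR client.
def isolate_host(edr_client, host_id: str) -> dict:
    return edr_client.isolate(host_id)  # one function, one API call


def quarantine_file(edr_client, host_id: str, file_hash: str) -> dict:
    return edr_client.quarantine(host_id, file_hash)
```

The agent's output is then just a recommended action name plus parameters, which a human or a policy check approves before execute() runs.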
Ashish Rajan: So to your point then, if you were to think about detection, investigation, and threat hunting, if you want to use that word, it's better for investigation today, where you give it a problem and it can give you, to your point, about 95% or a similar understanding of, hey, this is probably what this investigation is, and then as a human you make a choice: I agree, I disagree, or I need to change this.
Is that where you reckon most of that AI SOC capability can be utilized to the [00:32:00] maximum today?
Grant Oviatt: I think detection has ample opportunity, and I think on the response side there is as well. It's more of an implementation detail of how your AI SOC vendor performs the action, rather than a question of whether it's safe or not.
So I think it's more of an implementation challenge: set up a test environment for remediations and make sure, for five of five tests, that it's containing hosts in the same way and performing consistent actions, just so you're not introducing additional risk. But yeah, investigation is where we see the most opportunity, because it's the most human-constraining problem today. Clicking the remediate button is pretty easy. Writing new detection logic is cumbersome, but it's not super time consuming and you don't do it constantly; you're trying to build a detection and refine it only a handful of times. Investigations happen all the time, and that's the overlooked part that takes 10, 20, 30 minutes, an hour to go and do.
And I'm tired of folks having to deal with impossible-travel alerts, their eyes glazing over, missing [00:33:00] threats, and creating risk. So I think it's a great problem to set AI on.
Ashish Rajan: For a regulated environment, are there any parts that AI SOC is not ready for? It could be that, no matter what, you don't give it any PII because it's a black box; I don't know what it would be, I'm just making it up. But for people who are running their SOC teams, or CISOs who are thinking, especially at this time of the year when a lot of people are planning for 2026 and onwards, about what their SOC team would look like: are there any parts of a SOC that AI is not ready for, according to you, whether it's regulated or not?
Grant Oviatt: You know, I'm not an expert on the regulated side. I think it becomes a data problem for the security organizations; it's more of a risk tolerance level. So if you're not comfortable having your MDR perform remediations in your environment today, letting AI SOC agents perform remediations in your environment today is probably not a risk chasm that [00:34:00] you're willing to cross yet.
But being notified, seeing the actions in front of you, clicking the button to perform them, and compressing your investigation time may make sense. On the investigation front, if you are working with subprocessors or MDRs that are using your PHI or PII data to do investigations, you're probably comfortable having an AI SOC do the same thing, since it's the same process and honestly less risk, because there's no human that's going to copy and paste this to some resource and, you know, share it on the internet. You're not having to manage those contracts and relationships and things like that.
And if you're not, or if you were segmenting some sensitive or regulated data before, we're finding that our customers are starting to have those conversations with us now and say, hey, we didn't show this to our SaaS log vendor in the past; what are the processes we would go through to get Prophet onboarded? Or, hey, could we be FedRAMP compliant if we only sent you these two fields?
Or our team says, if we [00:35:00] just sent you two fields instead of these ten, we would be fine within our boundary of processing FedRAMP data. Or, can we start to look at financial fraud, and what would that look like, and how would we manage that? So we're starting to have those conversations, but it's sort of a risk appetite level specific to the organization.
Ashish Rajan: I guess you've touched on something interesting, 'cause these days trust in AI is not only a technical thing, it's a cultural thing too.
Grant Oviatt: Yeah.
Ashish Rajan: Because a lot of people, I would imagine, are skeptical, whether it's the people who are security analysts or executives.
How does one gain confidence in decisions being made with assistance from AI? 'Cause to your point, you said 95%. There's always a skeptic brain, and unfortunately we as security people are designed to always be suspecting everything. How do you even gain trust or confidence, I guess?
[00:36:00] Yeah, it kind of boggles me at that point: well, 95% accuracy, how do I know I didn't miss something that was really important?
Grant Oviatt: Yeah, it's the same way you would gain trust in anything, or in a person: you try it. And so we continue to tell our prospects to take us and other vendors to the mat.
We're not afraid to have that conversation. I think AI demos really well; it's nice to pull up a compelling story, but it's something else to see it in your environment. I can share an example from a customer of ours: they sent us 12,000 investigations over a two-week period as just kind of a bake-off.
This is not unfamiliar territory for us. They have a security operations team that was doing their daily function; they sent us the exact same alerts but kept a black box on our side, so they didn't know the decisions that were made. And then at the end was the big reveal, where they had a big spreadsheet and compared the decisions made, one for one, and the time differences.
[00:37:00] And we had 99.3% agreement between their security operations team and Prophet during that 12,000-investigation stint, and we were significantly faster: an 11x difference in the mean time to investigate. And that was really compelling for them to say, hey, I can trust you in this situation and how you're gonna handle this using my data.
Others may bring in pen testers and see it on our worst day: does Prophet have our back in a reasonable way? And we welcome that as well. So the short answer is: how you would evaluate any great analyst is the same way you would evaluate an AI SOC. And some of that may be just trying it out.
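For a rough sense of the arithmetic behind those numbers: 99.3% agreement across 12,000 investigations is roughly 84 verdicts where the AI and the human team differed, and an 11x speedup relative to the roughly four-minute AI average quoted earlier would put the human mean time somewhere around 44 minutes. A toy summary over an assumed spreadsheet layout (hypothetical column names, not the customer's actual data):

```python
def bakeoff_summary(rows: list[dict]) -> dict:
    """rows: one entry per alert, e.g.
    {"human_verdict": "fp", "ai_verdict": "fp", "human_minutes": 38.0, "ai_minutes": 3.5}"""
    total = len(rows)
    agreed = sum(1 for r in rows if r["human_verdict"] == r["ai_verdict"])
    human_mean = sum(r["human_minutes"] for r in rows) / total
    ai_mean = sum(r["ai_minutes"] for r in rows) / total
    return {
        "agreement_rate": agreed / total,           # e.g. 0.993 across 12,000 alerts
        "disagreements": total - agreed,            # ~84 at that rate
        "mean_time_speedup": human_mean / ai_mean,  # e.g. ~11x
    }
```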
Ashish Rajan: Awesome, thank you for sharing that. For people who are on this journey: I used the example of regulated environments, but I think everything we spoke about definitely applies to unregulated environments as well. Those are all the technical questions I had.
I've got three fun questions for you as well, man. The first one: where do you spend most of your time [00:38:00] when you're not trying to solve the security operations problems of the world?
Grant Oviatt: Oh, great question. I would say most of my time is spent with family. I've got a 2-year-old daughter and my wife, who's very supportive; they're the best. On the fun side of things, though, I'm a glutton for punishment, I guess. I do triathlons and Ironman races and things like that. So in my spare time, it's a lot of time on the bike, or in pools, or running.
Ashish Rajan: Yeah, wow, okay. Have you already participated in a few Ironmans?
Grant Oviatt: I've done two half Ironmans. My wife actually joined me on the last one too, which was super. Yeah, she's awesome. So we've got a little bit of a competition going in our household. I'm not winning any awards, let me be clear, but I'm crossing the finish line, and I still have legs that can tell the tale.
Ashish Rajan: So, I mean, plus, if you survive that, that's a battle. You don't wanna win, you wanna lose that one.
Grant Oviatt: You said it, not me. I think you're absolutely right.
Ashish Rajan: Yeah. Uh, the second question I have for you is, what is something that you're proud of that is not on your social media?
Grant Oviatt: Oh, nothing's on my social media, so maybe everything. I don't know, maybe a cheesy answer: I'm really proud of what we're building in the AI SOC space. It's the only time in my career where customers are coming to us and saying, you absolutely can't take this away.
Like, I would be devastated if you removed this from my life; it's so much better being a security analyst right now, working with you. I've of course had successful outcomes in the past where people are like, yeah, this is great, this is helpful. But people are very excited to interact with and see the benefits of the product.
So, maybe in a cheesy, self-promoting way, I'm very excited for what we're focused on right now.
Ashish Rajan: That's awesome. Final question. What's your favorite cuisine or restaurant that you can share with us?
Grant Oviatt: Ooh, great question. I'll give you two answers. One is, I'm originally from Texas. I live in Seattle now, but I've [00:40:00] got a really soft spot for great Texas barbecue. It always just hits home for me, just a great brisket. In Seattle we don't have any, so I bought a smoker and used it two times like everyone else, but, you know, I tried.
So I would say Texas barbecue. I'm also a big seafood fan, so any sushi, shrimp, crab, fish, and I'm having a great day. So if I'm not living the dream with Texas barbecue, I'm probably trying to find some good seafood somewhere.
Ashish Rajan: Awesome, thank you for sharing that as well. And for people who wanna continue the conversation about AI SOC, understand that world, and see what you guys are doing in terms of building the AI SOC, where can they find out more about you, connect with you, and follow the work that Prophet is doing?
Grant Oviatt: Yeah, I'm on LinkedIn, [00:41:00] Grant Oviatt, feel free to reach out to me. My email is grant@prophetsecurity.ai, feel free to just shoot me a line and I'll of course hit you back up. Or if you want to see more about the company, prophetsecurity.ai, you can reach out to us directly there and check out the product. Yeah, let's stay in touch.
Ashish Rajan: Awesome, I'll put those in the show notes as well. Thank you so much for spending the time with us, Grant, and I look forward to hearing all about the AI SOC as well, regulated or not.
Grant Oviatt: There we go. I appreciate the time, Ashish. This has been super fun, so awesome. Happy to be here.
Ashish Rajan: Thank you. Thanks everyone for tuning in as well. See you next time. Peace. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify.
In case you are interested in learning about AI security as well, do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, [00:42:00] Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.


















