How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shared that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game. The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant walks through the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability. In this episode we compare AI SOC to traditional MDRs and discuss real-world "bake-off" results, where an AI SOC had 99.3% agreement with a human team on 12,000 alerts but was 11x faster, with an average investigation time of just four minutes.
Questions asked:
00:00 Introduction
02:00 Who is Grant Oviatt?
02:30 How to Establish Trust in an AI SOC for Regulated Environments
03:45 Explainability vs. Traceability: The Two Pillars of Trust
06:00 The "Hard SOC Life": Pre-AI vs. AI SOC
09:00 From AI Skeptic to AI SOC Founder: What Changed?
10:50 The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
12:30 What Regulated Bodies Expect from an AI SOC
13:30 Data Management: The Key for Regulated Industries (PII/PHI)
14:40 Why Point-in-Time Queries are Safer than a SIEM
15:10 Bring-Your-Own-Cloud (BYOC) for Financial Services
16:20 Single-Tenant Architecture & No Training on Customer Data
17:40 Bring-Your-Own-Model: The Rise of Model Portability
19:20 AI SOC vs. MDR: Can it Replace Your Provider?
19:50 The 4-Minute Investigation: Speed & Custom Detections
21:20 The Reality of Building Your Own AI SOC (Build vs. Buy)
23:10 Managing Model Drift & Updates
24:30 Why Prophet Avoids MCPs: The Lack of Auditability
26:10 How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
27:40 The Future: From "Human in the Loop" to "Manager in the Loop"
28:20 Do We Still Need a Human in the Loop? (95% Auto-Closed)
29:20 The Red Lines: What AI Shouldn't Automate (Yet)
30:20 The Problem with "Creative" AI Remediation
33:10 What AI SOC is Not Ready For (Risk Appetite)
35:00 Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
37:40 Fun Questions: Iron Mans, Texas BBQ & Seafood
--------------------------------------------------------------------------------
📱Cloud Security Podcast Social Media📱_____________________________________
🛜 Website: https://cloudsecuritypodcast.tv/
🧑🏾💻 Cloud Security Bootcamp - https://www.cloudsecuritybootcamp.com/
✉️ Cloud Security Newsletter - https://www.cloudsecuritynewsletter.com/
Twitter: / cloudsecpod
LinkedIn: / cloud-security-podcast
#cloudsecurity #aisecurity #secops


















