We are officially entering the "Multi-AI Era." Much like the multi-cloud times, organizations are no longer just using a single AI tool like Microsoft Copilot, they are building custom, agentic workflows using diverse third-party models and MCP servers . In this episode, Ashish sits down with Shawn Hays from Varonis to discuss why the security market has over-pivoted on AISPM (AI Security Posture Management) . Shawn spoke about how having visibility and an inventory of your AI models is a great start, but it fails to secure the enterprise if you lack the guardrails to actually stop an agent from going off the rails and exfiltrating data . Shawn breaks down the components of a robust AI security platform (like Varonis Atlas) and explains why data security is inseparable from AI security. He spoke about why AI agents will blindly "read whatever is on the teleprompter," meaning your AI is only as secure as the data access and identity controls surrounding it . Tune in to learn how to apply Zero Trust across the entire AI chain from the prompter to the cloud infrastructure .
Questions asked:
00:00 Introduction
02:50 Shawn's Background: Microsoft, CMMC, and Varonis
03:50 The Biggest AI Security Challenges (Copilot to Agentic AI)
05:50 Third-Party AI Risk (Jira and Salesforce Agents)
08:40 The Connector Ecosystem Danger (Copilot + Salesforce)
11:50 8 Distinct Areas of an AI Security Platform (Varonis Atlas)
14:00 Entering the "Multi-AI Era" (Analogies to Multi-Cloud)
16:00 The AI Bill of Materials (Athena AI & Grammarly)
20:50 Why Data Security and AI Security are Intertwined
22:00 Applying Zero Trust to the Entire AI Chain
24:50 The Role of Identity and ITDR in AI Systems
27:00 HIPAA, OCR, and Regulating AI Data Access
31:30 Creating a Governance Plan for Microsoft Copilot
33:50 Securing Pro-Code AI Systems (AWS Bedrock & MCP Servers)
38:30 Why the Security Market is Over-Pivoting on AISPM
44:10 The "Ron Burgundy" Analogy for AI Agents
45:50 Fun Questions: Crocodile & Caramel Tasting
47:20 The Ed Sheeran & Yelawolf Mixtape Connectio
48:50 Hobbies & Pride: DJing Weddings and Playing Ice Hockey in Alabama
51:50 Favorite Food: Alabama White Sauce BBQ & Milo's Burgers
Shawn Hays: [00:00:00] We are now entering this multi AI era where no longer is sophisticated organization just using copilot. They're using all these different pieces. A lot of security teams are struggling with not even the fancy, crazy, sophisticated AISPM type stuff. It's the simple AI. That's new in their environment, that's embedded.
I need to not trust that agent. I need to even not trust the user prompting. I need to also not trust the cloud architect, apply zero trust to that entire chain and the reason being data
Ashish Rajan: With ai, we are that horse has left the barn
Shawn Hays: The market's over pivoted on AISPM, but we've seen great visibility. They know every piece of their AI system.
But they really have no way of preventing that agent or going off the rails. Can I look at all the ai?
Ashish Rajan: Yeah.
Shawn Hays: Both the stuff that I built, the stuff that I'm going to build and the stuff I don't even know about.
Ashish Rajan: Building an AI security program is not just about buying an AISPM. It has so much more that goes into building an [00:01:00] AI security program.
I had a conversation with Shawn Hayes from Varonis where we spoke about what does it take to build an AI security program? What are some of the components that you need to look into and the stages that you go through as you build the AI security program itself? We all spoke about what does it mean to have identity third parties that connect into your AI ecosystem, and what does data security perhaps could look like if you were to start building an AI security program today?
If you're someone who's working in this particular space and building an AI security program, definitely share this episode with them. As long as if you have been listening or watching episodes of the podcast for a while and I have been finding it valuable, I would really appreciate it if you take a quick second to hit the follow subscribe button.
All podcast platforms, including Apple, Spotify, YouTube, and LinkedIn. It doesn't cost you anything, but it helps us spread the word to many more people to enjoy the episode and the work we do over here. And also thank you to everyone who came and said hello to me at RSA. It was great to finally meet a lot of you and also hear about the love and support you had for the podcast.
Really appreciate that and I look forward to seeing you again another conference or event soon as well. [00:02:00] Enjoy this conversation with Shawn and I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I've got Shawn with me. Thanks for coming on the show, man.
Shawn Hays: Yeah. Glad to be here.
Glad to be here.
Ashish Rajan: Could you share a bit about yourselves, your background for people who have some context about you? Well,
Shawn Hays: yeah. Yeah. So, uh, I first started working at least in cybersecurity, working for a Microsoft consulting company. Mm-hmm. Uh, we were predominantly working within. The defense industry. So those organizations, commercial businesses, not the federal government, but commercial businesses that are supplying goods or services to the Department of Defense, or even in aerospace industries.
Um, so think Boeing, Northrop Grumman, those types of organizations and helping them configure their Microsoft 365 tenant in their Azure environment to meet CMMC requirements, uh, which is basically audit dependent. Then I was doing that for about six years, and then after that time, I joined Microsoft for three years where I was predominantly working on go to market for all the various components of the Microsoft 365 and Azure [00:03:00] Stack.
So think intra purview, defender, Sentinel, et cetera. But verticalize, so in healthcare, state and local government, financial services, manufacturing, that type of thing. And then. I joined Varonis. And so I've been at Varonis going on about a year and eight months, uh, almost about to hit the two year mark.
And my responsibilities is I'm on the product marketing team, specifically working on our Microsoft applications. So how Varonis secures data in Microsoft 365 and in Azure. And then also with the introduction of Atlas, which I'm sure we'll get into in some way, shape or form, is I'm also the PMM for our AI security solutions.
Ashish Rajan: So just on the. Adoption of AI security then, 'cause clearly that's, you're spending a lot of time in there. Uh, what are some of the components that you find that people have for AI security capabilities in the enterprise, typically?
Shawn Hays: Yeah.
Ashish Rajan: And what are some of the blind spots?
Shawn Hays: Yeah. So I think. The biggest challenge right now that I'm seeing, well, I shouldn't say the biggest.
There's like a couple big ones. Um, it's, it's hard to pick a favorite. It's like my picking my [00:04:00] favorite kids. But one of the challenges that I'm seeing is a lot of organizations have looked to solve the, we'll call it at a broad, at a broad lens at the AI security challenge by purchasing maybe a particular solution to solve a, to solve the AI security challenge for a type of ai. And then as they mature, they find that that solution they bought or that they're looking to buy doesn't meet the entirety of the AI that they have, or maybe the AI they're going to have. 'cause maybe they're in a crawl, walk, run phase. And so they've started with copilot.
They have a beta test group. They have some sort of initial test group that's rolling out copilot, but then all of a sudden they realize, oh, to solve the certain challenges we wanna solve, we need to start making agents to do certain things. So then they roll out copilot studio, but then they realize, oh, the same solution or tool that we, that we've procured or that we're using only looks at prompts and responses in copilot. It doesn't cover this agentic AI [00:05:00] framework and how do we secure that? And we don't have guardrails around how those agents are functioning. And so nevertheless that's one area that I'm seeing. And so. When they deploy these solutions, they realizing they're kind of like reaching the end.
So that's one piece of it. The other piece of it I think that is concerning for a lot of folks is they have agents deployed in different solution areas. I'll use, uh, JIRA for example.
Ashish Rajan: Yeah,
Shawn Hays: so Atlassian just announced for Jira that they now have an agentic ecosystem. So much like in Salesforce, you can make agents with agent force that's grounded on a, uh, Salesforce data.
And those agents could do certain things within the Salesforce environment, on Salesforce data. Well, now Jira has that same type of capability with their agents. And so what I think organizations are struggling with is. Okay we had copilot down, like I said, that beta test group, but then now all of a sudden we have to.
We have to understand how those agents are accessing data [00:06:00] in Jira. And then I also have to understand how agents are accessing data in Salesforce. And that, and how you access data in those different places are extremely different. In Salesforce, it's maybe like records as it relates to accounts and CRMs and things of that nature.
And then when you go into Jira, it's based off projects and. When you look at Microsoft, it's based off intra security groups and teams and teams, groups, and all of these things. Permissions look distinctly different in all these different places. And ultimately, these agents, what they can do, how they access data is based off of, in most cases, received tokens.
Like who you are, who you are, like what you can access in terms of data. That agent is only gonna access the stuff you have access to in Jira, in Microsoft 365 in Salesforce, all these places.
Ashish Rajan: Yeah.
Shawn Hays: And a lot of security teams are struggling with not even the fancy, crazy, sophisticated A-I-S-P-M type stuff with custom pro code agents built with Microsoft [00:07:00] Foundry or Foundry or AWS Bedrock.
It's the simple AI that's new in their environment that's embedded.
Ashish Rajan: Does
Shawn Hays: that make sense?
Ashish Rajan: Yeah, it does. And I think it's, it's kind of like the the open clause of the world or any of the other insert new AI tool that gets released and suddenly everyone wants to adopt it.
Shawn Hays: Yeah.
Ashish Rajan: Uh, it's, it's, those cases are the, probably the ones which are a bit more hairy in this context.
Shawn Hays: Absolutely. Yeah.
Ashish Rajan: And do you find that, obviously a lot of people have, different kinds of use cases. You kind of mentioned a few, but you mentioned the Jira use case. You mentioned copilot. 'cause I'm thinking also this in third party as well.
Is third parties using AI capabilities or having AI capabilities, is that also being considered or are they also, are there any other blind spots there too?
Shawn Hays: Yeah, so we will, let's go down the third party path. I think there's two issues there too. There's the. Connector ecosystem, and I'm gonna go Sam, and come back to them. There's the connector ecosystem. Then there is the introduction of pro code custom AI solutions built with [00:08:00] third party platforms, um, and using third party.
Elements within an AI system. And what I mean by that is like, you know, grabbing a model. So you could be using Microsoft Foundry, but you're bringing in a model that is not a Microsoft model. Does that make sense?
Ashish Rajan: Yeah.
Shawn Hays: Or it's a an agent built with Microsoft Foundry, but it's using an MCP server as a part of whatever that agent is doing and how it's configured.
It's bringing that MCP server from somewhere else, right. Let's go back to the connector thing. Those, there's two challenges. The connector ecosystem is what I like to call it, which is if you look at just co-pilot, co-pilot, co-pilot studio, I'm an ex Microsoft guy, so we'll have a lot of Microsoft examples here.
But within Microsoft. 365 copilot and also with agents created in Copilot Studio. They now have a very robust connector ecosystem where I can use copilot, not even anything really crazy like an agent made with Microsoft Foundry, but I can use copilot, but connect it to. Salesforce, we were talking about Salesforce earlier.
[00:09:00] I can connect it to Salesforce. So now Copilot, which is a rag AI system, is able to not only look at my emails and files and SharePoint, but it's also able to go and query against Salesforce data.
Ashish Rajan: Yeah,
Shawn Hays: and you know, it goes back to what I said earlier about can access in Salesforce when now copilot is following that same can access model.
So what can I see in Salesforce? It's gonna serve to me. In copilot if I turn that connector on. And that's just a connector, not a, a highly configured, sophisticated pro code AI solution. So that's a, that's a third party risk that's introduced by just connecting to a new data source that a no code off the shelf AI solution can use.
Ashish Rajan: Sounds like there's, 'cause you mentioned ISPM, you are mentioned third party as well.
Shawn Hays: Yeah.
Ashish Rajan: So are there like a lot of categories within, because I imagine, obviously we're an RSA and a lot of CISOs and lead security leaders are walking around thinking about what [00:10:00] my security programs gonna look like for, I won't say next five years, but let's say six months.
So maybe 20, 26.
Shawn Hays: Yeah.
Ashish Rajan: Just because how AI works at the moment, the industry obviously CISOs who are walking on the floors, sort of, uh, RSA. They're looking at AI security as something that they probably want to care, they take care, they want to take care of in their organization.
Shawn Hays: Mm-hmm.
Ashish Rajan: They obviously have multiple things to look at.
Yeah. There is the AISPM category spoke about, there's third party as we spoke about.
Shawn Hays: Yeah.
Ashish Rajan: What are some of the other kind of categories that need to think about from a security program perspective if they are trying to focus on the AI security?
Shawn Hays: Yeah, absolutely. Uh, it kind of ties into your question earlier about the third party pieces.
Yeah.
Shawn Hays: Um, because there's a. Typically the, or the s CISOs that I talk to the organizations, depending on if they're maybe in a regulated industry, typically state and local government can sometimes be behind on the adoption curve. Not by intent. In many cases, a lot of their users and even their technology leaders want to adopt AI faster.
But there's procurement cycles. There's you know, regulatory frameworks that they have [00:11:00] to comply with. And so it's a little bit more sensitive, but nevertheless. All the organizations that I talk to and the scissors that I meet with, they typically fall into one or two camps. They are in that early stage with the co-pilot beta group, or maybe they've rolled out co-pilots, all their users, but they are not developing and building.
AI solutions. Mm-hmm. Um, agents, things of that nature. But then the other half, if you will, that are in other industries, they are, they already have multiple different agents. They have different business units that are also developing. And this gets into your question though, which is what makes up, uh, AI security platform.
Obviously I'm biased. We just announced Varonis Atlas. And in that platform specifically, we, at the very beginning when we launched that solution, we wanted to make it clear that there's pretty much eight distinct areas that you wanna look at. I'll hit on several of them. We talked about.
Inventory, or we've touched on it, but you wanna have observability of every layer of the AI [00:12:00] stack or system. So we want to be able to see what are all the models we have in our, in our environment. We want to see what MCP servers are in our ecosystem. We want to see what. Agents we have, we wanna see, uh, what services we're using.
We're wanting to look at code repositories, uh, that are being called. Uh, we wanna see even within the code repositories, are they calling, let's say a Jupyter Notebook, and does that Jupyter Notebook have secrets in it? Um, because some developer wanted to make it easy on themselves and they threw in secrets within that particular notebook.
We wanna see all of those things, and we want it to continuously be refreshed. So that's like, that's just one area.
Ashish Rajan: Yeah.
Shawn Hays: And that is one thing too, and we may hit on it later, is there is a lot of native solutions that are starting to hit the market within different places. Uh, you look at Agent 365.
Yeah. Most recently announced that Ignite. We were at Ignite not too long ago. It is, at least by design, makes it super easy to see some of that inventory. For the [00:13:00] Microsoft stack, super easy. But then once you start getting outside of it, there's a little bit more configuration and manual work to try to bring in any of that.
Ashish Rajan: Yeah.
Shawn Hays: So you don't want to have just something that's exclusively looking at just that. Because what we find in those organization I was talking about is that they're not exclusive, they're not using just one. Vendor, if you wanna call them that.
Ashish Rajan: And not only Microsoft agents as well,
Shawn Hays: right? And a lot of times it's, it's divided by the organization.
They'll have a business unit or a department trying to solve a problem, and so they pick what's best for them to solve that problem.
Ashish Rajan: Yeah.
Shawn Hays: So if it's clawed, if it's AWS bedrock, if it's this model, if it's a model from hugging face, wherever it's coming from. They just wanna solve the problem. They wanna pick the tools that best solve the problem.
And so then you get a very diverse ecosystem. Mm-hmm. And you wanna have inventory over that entire ecosystem, not just a sliver. Yeah.
Ashish Rajan: Does that make sense? Yeah. Yeah. Yeah.
Shawn Hays: So that's one piece. Yeah. Yeah. I've told you there's a think
Ashish Rajan: it's a first of many. Yeah.
Shawn Hays: I told you there's a, but do you have any questions on that?
Ashish Rajan: Oh, no. I think I, [00:14:00] it's a good one because I. The foundational piece. So if you can't see them, you can't protect them, kind of thing. Comes into picture what I hear up inventory. I'm sure we'll double check on this as well, but yeah, keep punching me.
Shawn Hays: Yeah. And one last thing on that I think we're in a moment of, if you recall, gosh, like 20 years ago, it's, it's crazy to think in terms of 20 years ago, but.
Two decades ago, up till about 10 years ago. So not the last 10 years, but the 10 years prior to the last 10, we went through this multi-cloud phase. Oh yeah. So if you were at RSA, I was not candidly, but if you were at RSA. 20 years ago, 15 years ago, 16 years ago, every booth had something about multi-cloud.
Ashish Rajan: Yeah.
Shawn Hays: And the reason being is because you had organizations lifting and shifting different data sources for the first time ever into this place and that place, and this place and that place. And they didn't know how to protect and secure. This place and that place and that place in the cloud. Um, and it was a new paradigm.
It created new threat vectors. It created new [00:15:00] risk factors.
Ashish Rajan: Yeah.
Shawn Hays: And so you had all these multi-cloud solutions hitting the street to solve that multi-cloud problem.
Ashish Rajan: Yeah.
Shawn Hays: I think we are now entering this multi AI era where no longer is an organization and enterprise sophisticated organization just using copilot.
They're using all these different pieces and so that's why. Inventory is one of the biggest areas is because you need to see it all. Yeah. It's not just one AI solution.
Ashish Rajan: Yeah.
Shawn Hays: You need to see it all and it needs to be accommodate for where people are building things and what they're building with.
Ashish Rajan: And that's where the whole shadow AI thing comes in as well that people are concerned about.
Shawn Hays: Absolutely.
Ashish Rajan: Because to your point, you may use a native agent studio, uh, to identify any Microsoft created agents perhaps, but then you have, I don't know, another OpenClaw gets released. That's just wild on the internet at that point in time. Someone uses it. That would, that would not even come in the radar at that point then,
Shawn Hays: right?
And that's a, a key piece you hit on. We were talking about third party, and this is one of the eights, I'm, I'm checking 'em off as we go. But with third party risk [00:16:00] management, ai, third party risk management, one of the, the key pieces of Atlas is being able to pull in the AI bill of materials of the AI that your third parties are using as a service to you.
So if it's maybe I'll give you an example. We have an AI solution called a Athena.
Ashish Rajan: Oh,
Shawn Hays: and Athena AI is basically a SOC assistant. That is AI embedded into the platform, into the platform that you can use to query against varonis stuff, if you will to serve information up to you by way of a prompt and a response.
Right? Well, that AI has a bill of materials, it has a model it's using, um, it has different components that are, uh, grounded in maybe the customer's data, whatever the case may be. And we make sure to lock it down, secure that to where we are meeting also privacy standards, that type of thing.
But. You have Grammarly. Let's say you're a writer. For any of the writers that are gonna watch this later, you're using Grammarly. Well, Grammarly has a build of materials. They have a model they're using, and so [00:17:00] it's critical for you when you're using these third party solutions that have AI embedded in them or you're using their agent, that they give you those because much like if.
How our team, we have a business unit that just developed this agent. It's got a bill of materials, it's got models that it's using. We would want to know if that model has a vulnerability associated with it. Yeah like basically in the nist. National vulnerability database. If there's a CVE associated with that model and we're using it for our agent, we would wanna know that.
But similarly, if I'm using Grammarly and maybe something that's not as well known, but I'm using some other software that has AI in it, I would wanna know their bill of materials. So if they're using a model that has a CVE associated with it, that I can. Basically turn that service off for a time until that, you know, CVEI know for a fact that they've addressed that.
So maybe they're using a different model now or whatever the case may be. Uh, so that's just like one example, why third party risk management is a key [00:18:00] piece of any AI security. Program. Yeah. Within a organization as well as like a platform. The technology.
Ashish Rajan: Do you find that a bill of material specifically for ai?
'cause there was a whole thing some time ago, there was an executive order to have bill of material for application libraries you were using.
Shawn Hays: Oh yeah.
Ashish Rajan: And now it's more about similar ask for ai. A the question obviously is more along the lines of, uh, this helps you from a third party risk management to identify.
What vulnerable LM borderline may be using. But I think to what you're saying, even it goes back to the shadow AI piece as well. Yeah. And that makes you understand that hey, this is what we should be seeing.
Shawn Hays: Yep.
Ashish Rajan: And then anything outside of that toku to what you said? Yep. Shadow ai. Exactly. If anyone else comes up in that space, like I'm kind of able to tie that back to visibility.
Shawn Hays: Yeah.
Ashish Rajan: And say that, oh, okay, now I understand that this is my bill of material, which will be constantly updated. But just on the updated part, is it possible today for us to be, I don't know, have like [00:19:00] real time information of what the AI bill material would be? Is that practical, technically possible, or, yeah.
Is it just more like a Hey, someone would manually keep updating it?
Shawn Hays: Yeah. There's various ways that you can get visibility into the different AI components and as their use changes, uh, as their behavior changes and also to what they're comprised of changes. An example of that is, you know, once you have registered Microsoft, uh, excuse me.
Yeah. Af after you've registered Microsoft Foundry, within Varonis Atlas. Or even just your Azure instance, your subscription, anything that anybody builds in that instance is gonna show up. Or any existing agents that are within that subscription, any changes, they, you know, they add certain components that's gonna be added to the map.
So you can see exactly what, which pieces parts have been added. Because as you add new things, which is why continuous scanning is a. Big piece of it.
Ashish Rajan: Yeah.
Shawn Hays: Which gets into AISPM is as you introduce new things into the environment, you want to be able to see those and see what risks that may have introduced by adding a component.
Yeah. [00:20:00] Um, you know, you added an MCP server over here. Well now it's calling certain tools. Oh, well that tool's poisoned, so, oh, we need to, you know, disconnect that, that. Connection point between that and the MCP server, whatever API is being called, we wanna cut that off or we wanna remedy that. So it doesn't call that tool 'cause that tool's poison now.
Ashish Rajan: Right.
But do you find that the data based approach, 'cause it is, the data is an interesting one, right? 'cause a lot of people, and I'm probably guilty of this as well, even when I was CISO where we had data classification, it was no. rubber never hit the road, for lack of better word. Yeah. It like, it was an easy way for me to identify how exposed I was before.
Yeah. And I think maybe because the understanding was if it's confidential data, it would just not leave us. So it would not leave the boundaries of my organization. Yeah. But now we are mean with ai. We are that that horse had left the barn for er better word.
Do you find that. What's the advantage of using a data driven approach for an AI ecosystem?
Shawn Hays: Yeah.
Ashish Rajan: Versus say, because a lot of people who would. [00:21:00] Have been enterprise and building for a while. They have parameters of their based on identity network. They're all these other components. So are we saying this is a complimentary thing to it, so start building on top of it? Or is this more like, hey, this, you almost need to have this as like a parallel because of the a s ecosystem?
Shawn Hays: Yeah, you definitely have to implement with a data security platform. And that's not just predicated on me being at Varonis. And the reason being is because there's different ai, there's two reasons really. It kind of goes back to my initial conversation of different types of AI are just going to be what they can access their RAG AI.
So they're only gonna be able to access what you have access to. 'cause there's a token exchange. I know it's you.
Ashish Rajan: Yeah.
Shawn Hays: You're the one that's prompting me and thus that's how I'm going to ground my response to you. Yeah. By token exchanges and knowing identity and metadata about you and what you can access.
Ashish Rajan: Yeah.
Shawn Hays: All of that's being exchanged by identity providers and various calls. I can only access what I can access by way of this agent, [00:22:00] by what I can access to data. So that's why data security is important there. Yeah. But then on the pro code side we need to be able to kind of apply zero trust to everything.
And what I mean by that and why it relates to data is we wanna look to see if, let's say I'm in healthcare.
There are certain instances where I want to ingest, 'cause I have a patient facing agent. I wanna be able to ingest sensitive data. But then once I do that, and maybe it updates a record in a database or I do something where a tool and an agent is going to do that, take that sensitive data.
Put it somewhere, I need to not trust that agent. Yeah. So I need to be monitoring how it's handling that sensitive data. I need to even not trust the user prompting to see like, like, yes, it's supposed to uh, ingest sensitive data, but embedded in that sensitive data. I don't want it to be jail breaking the solution.
Ashish Rajan: Yeah.
Shawn Hays: And so I want to not trust the, the user and the prompt. As it relates to data, yeah, I want to not trust the agent that's supposed to then be maybe [00:23:00] updating a record somewhere. I need to also not trust the cloud architect that's over that. Maybe data SQL database that's sitting in Azure.
I wanna apply zero trust to that entire chain, and the reason being data. Mm-hmm. Because I don't want that comp, uh, you know, maybe that that architect, that Azure architect for that user to have compromised identity.
Ashish Rajan: Yeah.
Shawn Hays: And then all of a sudden data is exfiltrated that way. I wanna make sure that that Azure instance is locked down.
There's not just nor normal, boring run of the mill cloud. Vulnerabilities there.
Ashish Rajan: Yeah.
Shawn Hays: And misconfigurations where that data could leak out there. I wanna make sure that I don't trust that agent and because I don't want data to somehow leak out.
Ashish Rajan: Yeah.
Shawn Hays: And I also want to not trust the, the prompt interface in that user that they're trusted and that they're never going to try to do anything malicious to get sensitive data out.
Yeah. So that's just like one example of why, why data security and AI security are so in tandem and might possibly even intertwined in certain [00:24:00] cases.
Ashish Rajan: Is it also because there's not much trust on the AI being used as well? Because is this an exercise to build trust on the ai Because you, to what you said, I don't know how much can I control open AI or cloud or else, right? But what I can control is on my side of the world. And data is just data and identity seems to be the two pieces that come to mind. Is huge identity playing a role in there as well?
Shawn Hays: Yes, absolutely. You know, so much of it is predicated on identity and, and Microsoft, to their credit on the Agent 365, they make it super easy if you're building in the Microsoft ecosystem and.
Part of why it is easy is because identity is very uniform there. When a, an agent gets created in copilot studio, Microsoft Foundry, immediately an agent ID is created and that agent ID will show up in agent 365, and you can manage and monitor identity related vectors. Super simple. Yeah. But then once you start introducing other systems, other agent ecosystems, other developer platforms, within that, then identity gets a little bit more complicated, both on the builder side, [00:25:00] the agents, the identities they have.
All of a sudden you're evoking so many different identities. You got identities for the folks that a, that can access data in the cloud store or resource that's being called.
Ashish Rajan: Yeah.
Shawn Hays: You have the identity of the agent. You have identity of the builders, like the people making these agents that have access to these agents on the development side?
Ashish Rajan: Yeah.
Shawn Hays: You have, uh, if they're using some sort of like Bitbucket, GitHub, you have identities for those that can have access to the code repos. It's like there's an identity layer woven throughout. Which is why also too you need an IT TDR solution. On each of those things. So when identities start to be maybe like additional entitlements get added to a user
That is. Again, a cloud architect. So this is like run of the mill identity stuff, not even crazy AI stuff, but you need to have ITDR or uh, maybe like a Kim solution, CIEM for identity and entitle management in your cloud [00:26:00] resources. Again, run of the mill boring identity stuff. To alert on any type of elevated privileges, moving laterally, blogging in from weird locations, all that normal boring identity stuff on the cloud repo.
But then also I apply ITDR, you know, threat detection on identities for the agent. Same thing. Yeah. An agent all of a sudden getting privileges, entitlements that it shouldn't, or an agent all of a sudden accessing things that it typically doesn't, or you know, an agent all of a sudden lighting up. You know, maybe it's been stale and it hasn't necessarily been using a certain resource, but all of a sudden now it's feverishly, you know, logging in and authenticating and using and, you know, using authorization that it typically wouldn't.
Ashish Rajan: Yeah.
Shawn Hays: Anyway, so yeah, it's, it's identity everywhere as well.
Ashish Rajan: I think a lot of people, every time I'm had this conversation with people, they say, I have an E five license or an E seven license, and I'm kind of covered, or I, why not? Just are the native services that we have? It doesn't really matter what the cloud provider [00:27:00] is.
Shawn Hays: Yeah.
Ashish Rajan: Are they not enough to have data and identity being managed? 'cause I think they all have 'cause the data security option. Identity security option.
Shawn Hays: Yeah.
Ashish Rajan: Where do, where do they fall behind?
Shawn Hays: Yeah. Let me give you an example. So in healthcare right now. It's projected that OCR within HHS, that they're gonna update some of the HIPAA requirements.
Ashish Rajan: Okay.
Shawn Hays: I doubt they're actually gonna hit that mark. The government's going through an interesting time as it relates to regulation just in the states. I know. Yeah. Yeah. Uh, you're not based in the states, but, uh, sorry. I'll use a US reference here. But so hipaa, if. The, they put out a notice of proposed rulemaking that they were gonna change hipaa.
And part of it is that they're gonna now require these healthcare organizations to provide evidence of how they're protecting PHI, not only in cloud systems but also within these AI systems that's tapping into the same protective health information if you're gonna do that. Yeah. You're not gonna [00:28:00] be able to show that inventory I talked to you about.
You're not gonna be able to show the posture management. You're applying to all these different components of the AI system, which is diverse and has a broad ecosystem that's not just, you know, Microsoft or Azure or whatever. It's very diverse. And so you're not gonna be able to provide that inventory.
You're not gonna be able to provide that posture management to meet that regulations to say, Hey, we are protecting PHI. Protected health information that's sitting in these things and being used by these agents. And here's how we're doing it. Yeah. Here's how we're protecting it. You're not gonna be able to show guardrails that's preventing, I gave you an example of ingesting PHI.
Well, maybe I don't want that agent to give protected health information, and so I need an output guardrail. Uh, to prevent that from happening, you're not gonna be able to have that guardrail on all these other things that you've built, because at least I'll use Microsoft. An example. You may have labeled PHI in your Microsoft 365 tenant.
They're gonna do a great job [00:29:00] if it's labeled with DOP policies to prevent that data from xFi.
Ashish Rajan: Yeah.
Shawn Hays: Right?
Ashish Rajan: Yeah.
Shawn Hays: But then once you start getting outside of that. Parameter and you start looking at an agent that was built in AWS bedrock, and it's tapping into other, maybe an EHR system, like an electronic health record system, and it's calling that by way of an MCP server, whatever the complicated scenario is, you need to also, for HIPAA's sake, not let's forget about breaches and those types of things, but just from a HIPAA standpoint, to avoid a fine, you're gonna need to be able to show OCR or some auditor, Hey, we've.
We've protected this we put the right configurations in place to make sure PHI is not gonna be able to be exposed by way of this agent. We built in AWS bedrock. So to answer your question, native tools are really good about solving some of the native challenges. But then once you start, broadening the scope and the aperture, that's when it gets a little tough.
Ashish Rajan: And I guess most enterprises are not just using one, even though they [00:30:00] might think they're Microsoft, Microsoft Shop. They may have, it doesn't need to be like an Amazon or Google as well. It just could be like a third party AI agent as well,
Shawn Hays: right? Yeah, absolutely. Like my Grammarly example. Which gets into another thing too that is going to be tough for like a native solution is.
At least with Atlas, we're tapping into network telemetry. Uh, we're rolling out new features as it relates to browser activity. So you're gonna need to see activity not only on the ai, you know, which AI, you know, can be shadow I ai. That's a, that's a topic we'll put a bookmark in that.
Ashish Rajan: Yeah.
Shawn Hays: But not just the ai, you know, but the ai you don't know that's being used and being able to flag it based off looking at what's happening over the network, looking at what's happening.
On the browser and on certain endpoints. Being able to bring in all of that so that way you can sit down and have a conversation with that user or that business unit and not be the department of no, but enable them in a way to say, Hey, we can use [00:31:00] this, but we need to possibly add this to our inventory.
Or, Hey, you're using this ChatGPT is a good example. Hey, you're using this. We have chat. GPT Enterprise, can you just use this same solution? It's not gonna change your workflow, but just. Authenticate into ChatGPT Enterprise and start using this instead of your, public Chad g pt t uh, account as like an example.
Ashish Rajan: Yeah. But it's all always makes me think that people have, maybe it could be an overwhelming sense of how many options they have to consider they going on the path of AI security today.
Shawn Hays: Yeah.
Ashish Rajan: If I were to just ask you just like, let, let's say, let focus only on co buyer for a second.
Shawn Hays: Yeah.
Ashish Rajan: If I were to build a governance plan for copilot, what?
What would be some of the things I should consider?
Shawn Hays: Yeah,
Ashish Rajan: for Microsoft copilot and how should I approach, 'cause a lot of people obviously have that on my default. They may not even know the gaps that are there. If I just had that as a component of an AI component that I use at the moment. Yeah, because that comes with my license or whatever, how would I approach governance plans [00:32:00] for that?
Shawn Hays: Yeah. So I think it really is, it goes back to, I think your other question about, is AI security, data security related?
Ashish Rajan: Yeah.
Shawn Hays: To create a governance plan just for copilot, I think you need to have something that's looking at all the data that copilot can access. So if that is just grounding on Microsoft 365 and that tenant, then you need to understand what data you have in your tenant.
So getting in very rudimentary beginning stages, understanding what data you have, understanding if and how it's being classified.
And is that successful? You know, testing that, seeing whether or not you've actually classified all the data in your environment. Understand also new data. So I talked about existing data, who has access to it and is it classified or not?
Yeah. You know, what sensitive information types do you have?
Ashish Rajan: Yeah.
Shawn Hays: I would say also, is it labeled, but then new data, how is new data going to be classified and is it going to be labeled? And then lastly, how is copilot using that data? So [00:33:00] knowing the data, knowing who has access to it. Knowing how new data is being classified and labeled, and then how AI is going to be using that data.
I think those are three key tenets. And then lastly, kind of applying zero trust from a monitoring perspective. Yeah, so once you've right sized permissions you've classified all the data, you know, you're going to be classifying and labeling new data and setting the right access controls on new data.
Then also too, we need to have zero trust on those users and how copilot behaves and make sure that. Even if we've set everything up as perfectly as we can, that we have something looking at how copilot is interacting with data from a runtime perspective.
Ashish Rajan: And oh, is this where we go into the whole runtime
uh, in the prompt injection, all that, is that where the Yeah. Component comes in?
Shawn Hays: Yeah. So we're gonna leave copilot for a second. Yeah. Um, that was, that's a good, like, I, I think it's a good starting point for most companies that are in that, Hey, we're just using copilot. I want to have some sort of governance plan for copilot.
Um, but when we start looking at. Other agentic solutions. So like my healthcare example, for [00:34:00] one, I'm building an agent with AWS Bedrock. It's calling MCP servers and doing these other things to serve up something. Maybe it's, uh, an agent that's used for billing within a healthcare service provider organization.
And so I wanna pull up patient records where we have, we have late billing and why, and what were the services created? And then give me information about those patients. And yeah maybe there's certain things that we need to do to extend different types of services to those patients that still meets their needs, but doesn't necessarily require them to have a type of insurance or whatever the case may be.
I wanna have some sort of learning and an agent that helps me understand patient outcomes along with billing. And I want to, you know, tie those things together. And I have some sort of agentic system built for that. You are not going to be able to, even with the best A-I-S-P-M tool and solution know for 100% certainty that.
That agent is never gonna have a vulnerability or some sort of hallucination, or it's [00:35:00] never going to expose sensitive data no matter how it's prompted. So you need to do some type of pen testing ahead of time on that agent. Yeah, you need to put it through, um, the ringer, so to speak on, and put it on the ropes to test to see what is it actually going to do in real time.
Ashish Rajan: Yeah.
Shawn Hays: Both from a. From a, like a, like jail breaking, poisoning, all of those things, but also very run of the nil interactions to see how it's gonna behave to just normal prompting to see if it's going to expose sensitive data or it's going to be very vulnerable and exposed for jail breaking or whatever the case may be.
So there's that. Then lastly, set up guardrails. So if you know that, oh, this agent possibly has a tendency to do certain things and you want to block it from doing those certain things, then you know, guardrails are another way to prevent an agent from going kind of off the rails.
Ashish Rajan: So what components should people consider?
And going back to the CSOs, walking the show floor on [00:36:00] RSA?
Shawn Hays: Yeah.
Ashish Rajan: Was building an AI security program. What would you say? Because I love the example you gave for the copilot pieces that the three stools of the chair, if you wanna call them, I guess. Yeah. In terms similar to that in a security program, a where should they start today At minimum.
Shawn Hays: Yeah.
Ashish Rajan: And what would some of the maturity stages would look like as they kind of hit different milestones?
Shawn Hays: Yeah. I'd say two biggest components. And maybe as I talk, I'll come up with a third. Yeah. The two biggest things as you're talking to other individuals on the floor. At RSA or really as you leave R sac and you go home and you follow up with some of these other, uh, possibly providers, is all ai can it cover all the AI that I'm, I have already built, that I am going to be building?
You? You talked about, uh, 2026 and 2027. Even if you do not have current AI that you have built with other different parts of the ecosystem, the AI ecosystem. Can it protect all AI that I've built today? That I plan to do, [00:37:00] mm-hmm. That I plan to build tomorrow?
Ashish Rajan: Yeah.
Shawn Hays: Is it suitable for that? You know, can I protect all of the various elements of AI and shadow AI that you, you, you keep bringing up too?
Ashish Rajan: Yeah.
Shawn Hays: You know, am I, can I look at all the ai?
Ashish Rajan: Yeah.
Shawn Hays: Both the stuff that I've built, the stuff that I'm going to build and the stuff I don't even know about. So like, can I see all ai? That's number one. The second thing is this platform going to be able to. Apply to the entire AI life cycle. Yeah. Meaning from inception, so inventorying all of the things that I have.
From A-I-S-P-M securing it continuously from pen testing it like we talked about. And then once it goes live, can I apply guardrails to it to block certain behaviors? From a compliance standpoint, can I regularly see, hey, as I change things or add things, or as this agent continues to move and work and do things for me, is it going to impact my compliance risk?
How have I applied, you know, maybe the NIST IRMF to this particular agent? And as it changes, is [00:38:00] it still compliant? And then also too, looking at, you know, third party risk, things of that nature. Um, and being able to do monitoring on all of it continuously. So again, all ai
all lifecycle. Yeah. So not just at the beginning, not just doing AI SPM, but also to being able to the other pieces throughout the entire lifecycle all the way to the time you turn that agent off.
Ashish Rajan: Yeah.
Shawn Hays: Sad day. Yeah. But you turn it off.
Ashish Rajan: Is there a wrong metric people focus on? 'cause a lot of people may have already purchased an AI security program. They have. They watch and listen to this conversation. Go, oh. I should, I didn't really plan for the other part.
Shawn Hays: Yeah.
Ashish Rajan: What, what's the wrong metric that people end up focusing on sometimes that you find for AI security?
Shawn Hays: Yeah. A big one is on visibility in ISBM. We have seen this too in the data security space where there was a, a over emphasis and a huge pivot on DSBM. So Varonis as a part of the data security platform
Ashish Rajan: Yeah.
Shawn Hays: Provides DSPM
Ashish Rajan: right
Shawn Hays: to customers. But then there's also other pieces of the data security approach and platform and strategy that are not.
Just [00:39:00] DSPM. Mm-hmm. Um, and so when you limit yourself to just A-I-S-P-M, so just configurations and vulnerabilities, things that are very important and good.
Ashish Rajan: Yeah.
Shawn Hays: You lose some of the things like guardrails, like actually blocking the agents from doing bad things PIN testing, things of that nature. So that's one area where I think.
The industry or maybe the market's over pivoted on AISPM. Yeah. Or put a put a a, an elevated emphasis on AISPM, which it is key. It's a huge component. But there's that, the other piece of it is visibility. In many cases, uh, we've seen where an organization may have great visibility, they have inventory, they know every piece of their AI system but they really have no way of preventing that.
Agent or AI system from going off the rails. Yeah. Um, so that, that's a big one.
Ashish Rajan: Do you find that the, uh, example you gave earlier for the three sets of things people can focus on when they build an AI security program, what are some of the milestones that people can consider as they progress from one [00:40:00] maturity to another?
Like perhaps you just only look at co-pilot only in the beginning, but 'cause to what we said, it's not just co-pilot today. This is like third party. Then there's the sales force of the world, of the world have ai. Do you find in the customers you guys talk to? Is there an easier one to tackle first so that there's a, it doesn't feel like you're trying, trying to, I don't know, chew an elephant, for lack of a better word.
Yeah. All, all in one go.
Shawn Hays: Yeah. Yeah. I would say it's gotta be the inventory in the AISPM piece. Yeah. I told you earlier that. We're overemphasizing and over pivoting on those areas.
Ashish Rajan: Yeah.
Shawn Hays: However, it is a good place to start.
Ashish Rajan: Yeah.
Shawn Hays: I don't want to, I don't wanna dismiss the fact that it's a great place to start for organizations just knowing what you have.
Ashish Rajan: Yeah.
Shawn Hays: Is such a key piece of it, because then, and then all the ISBM pieces of it as well. Once you know what you have, you can start to look at where are mis configs, where are vulnerabilities, those types of things, which. AI system you have or agent you have, and even just with Copilot or Chad Enterprise being [00:41:00] able to do that.
If you do not have expectations, maybe my state and local government friends that are watching this you have no ideas of grander that you're gonna be rolling out any kind of agentic ai? Yeah. Um, besides co-pilot or besides shed c the enterprise. I think it goes back to one of your questions earlier, which is.
What data do I have? Is it being classified and protected in some way, shape or form? Because a lot of these agents that are rag ai, JIRA example, copilot Chad GT Enterprise, it's gonna be grounded in what your users can already access.
Ashish Rajan: Yeah. Yeah.
Shawn Hays: So I think having some sort of data security platform to look at that, to rightsize access and all those different places that AI may crop up, and maybe it's shadow AI you don't even know about.
As long as you're right sizing access to the data in those places. In many cases, this out of the box AI copilot in many cases it's, it's going to, not misbehave and your users aren't gonna be able to abuse it or misuse it, um, as long as you've right sized. Data. So if you're looking at more complicated [00:42:00] pro code, complex AI systems, then I would say good place to start inventory ISPM.
Mm-hmm. Before you get into pin testing and guardrails and compliance and monitoring and those types of things just starting there. But then if you're looking for a starting place for just copilot or Chad GBT enterprise or Gemini Rightsizing data access in those places and, and getting a hold on that, because at the end of the day what that agent is going to, or excuse me, what that.
You know, rag AI solution is going to be able to access is gonna be ba you know, bound to what that user can access.
Ashish Rajan: Uh, I love this. So considering there is also this uncertainty, 'cause I love the example you had earlier where it's not just the AI that we have in 2026, but 2027. Yeah. And perhaps even after that as well, because obviously we can't predict where 2027 AI or what, what would that look like?
Shawn Hays: Yeah.
Ashish Rajan: But at least what we. Are there controls that people can rely on from a long term defense perspective against ai? Which would probably stand the [00:43:00] test of time for
Shawn Hays: Yeah.
Ashish Rajan: Whatever comes up next. And that could be, because I think so far we've spoken about the identity layer, we spoke about Yeah. Data layer and that people are like, Hey, is, is any of this going less last period, extra time, or am I, should I focus on the interconnectivity or should I focus on third party?
Yeah. Or which one of these are you banking on as. Something that would, you know, go beyond just the six month, the 2026 mark, I guess.
Shawn Hays: Yeah, I think it's, uh, we've kind of done it in this conversation where we've talked about identity data, et cetera. I think there's gonna be a little bit of a Renaissance, not just in the Zero Trust.
'cause I've been hitting on that. That's a buzzword that's been around for quite some time.
Ashish Rajan: Yeah,
Shawn Hays: yeah. It's, I think it's more than a buzzword, but still it's been around for quite some time. I think you're gonna see somewhat of a renaissance of these old. Older ideas and concept like defense in depth potentially.
Ashish Rajan: Yeah.
Shawn Hays: Because you know, who knows when we're gonna have quantum and all of a sudden now the cryptography that we use for all different types of things may be crackable.
Ashish Rajan: Yeah.
Shawn Hays: And so you're gonna need to [00:44:00] have different pieces and parts to, um, you know, apply zero trust and have some sort of defense and death strategy.
So it's gonna require all these pieces.
Ashish Rajan: Yeah.
Shawn Hays: Um, now I will say this, an analogy that I've started to use now that we're tapping into old things like zero trust in defense and depth and applying it to new things. I don't necessarily know. 'cause you're from Australia originally, or Well, from India, then Australia, then the uk
Ashish Rajan: Yeah.
Shawn Hays: Anchorman.
Ashish Rajan: Oh yeah. You
Shawn Hays: familiar with the film? Okay.
Ashish Rajan: Ron Burgundy.
Shawn Hays: So yeah. So Ron's gonna read anything that's on the teleprompter, right? Like even if it's profanity even if he's saying, you know, am, you know, I'm Ron Burgundy. Yeah. Yeah. He's gonna read what's on the teleprompter in many cases. The AI that we have today, and maybe it's changed, maybe it's gonna get to a point where we reach gen, uh, you know, general intelligence or something to that effect.
But at least the AI we have now, and maybe for the next five years, who knows, AI is Ron Burgundy. It's only gonna read what's on the teleprompter.
Ashish Rajan: Yeah.
Shawn Hays: And so if we're looking at. Identity solutions and data security, identity security, and [00:45:00] data security solutions. It's like how are we gonna make sure that the right data shows up on the teleprompter?
Ashish Rajan: Yeah.
Shawn Hays: Whether it's copilot that's surfacing information to a user and reading the teleprompter, or is it an agent that's calling these things MCP servers and tools and other things, and serving that information up to the user. Basically it, it's all gonna boil down to how do we control the teleprompter?
'cause at the end of the day, what's gonna be shown to the user is gonna be how we control what's on the teleprompter. Does that make sense?
Ashish Rajan: It does.
Shawn Hays: Um,
Ashish Rajan: it's a good analogy as well. I mean, those are the technical questions I had. I've got three fun questions as well. What, let's do
Shawn Hays: it, let's do it.
Ashish Rajan: So as I mentioned the. So there's a British version. Okay. And then there is the Australian version. Which one are you gonna go for? Are gonna be ignoring the, the cloud favorite. That, that.
Shawn Hays: Can I choose two?
Ashish Rajan: Oh, yeah, yeah, yeah. A hundred percent. Okay. You can go with more than two if you want.
Shawn Hays: So I'm gonna, I'm gonna span both. I'm gonna, I'm gonna go with a crocodile.
Ashish Rajan: Okay.
Shawn Hays: And then what's the British,
Ashish Rajan: uh, British version would be like caramel, or I guess let's go with the
Shawn Hays: caramel. Let's go [00:46:00] with the caramel.
Ashish Rajan: Yeah. These
Shawn Hays: are like, I'll wash my crocodile down with my caramel.
Ashish Rajan: Sure. Well start with the crocodile.
I'm curious to know what you think of the crocodile first as well.
Shawn Hays: In fact can I share with you? So I'm from Alabama. Alright. The other a place, not Australia.
Ashish Rajan: Yeah, yeah, yeah.
Shawn Hays: So I'm from Alabama. Oh, this is interesting.
Ashish Rajan: I'll be curious to know what you think.
Shawn Hays: Okay. As I'm telling this story, I'm gonna try the crocodile.
Okay. So my UK in Alabama connection, you know, it's not too bad.
Ashish Rajan: Is is that chicken?
Shawn Hays: It's a little dry. Yes. Chicken. Consistency.
Ashish Rajan: I kept getting it. I mean, 'cause I. The first time I tried it, I just assume it would be a bit more gamey. You know how it's like you look in an alligator or you think
Shawn Hays: Yeah, not at all.
Ashish Rajan: You're expecting it to be like a tough meat.
But maybe that's as a chewy part comes in. But I would just like chicken. It's not the thing the thing that I was thinking of maybe, or I just have too much chicken. Maybe
Shawn Hays: the Australians know how to, how to season their, uh,
Ashish Rajan: crocodile.
Shawn Hays: They're crocodile.
Ashish Rajan: Yeah.
Shawn Hays: It's interesting
Ashish Rajan: they're sharing a story as well.
Shawn Hays: It's interesting because I come from Microsoft and I [00:47:00] care about labeling things. Like data. It's interesting, it says on your maid in Australia from at least 95% Australian ingredient.
Ashish Rajan: Oh.
Shawn Hays: So I guess maybe they 5% is the, uh, is the seasoning or the,
Ashish Rajan: that probably
Shawn Hays: or crocodile source from somewhere else.
Maybe Florida. I don't know.
Ashish Rajan: Yeah,
Shawn Hays: yeah. Okay. My connection,
Ashish Rajan: I double story. Yeah.
Shawn Hays: As I try this now. So are you familiar with the musical artist, ed Sheeran?
Ashish Rajan: Yes.
Shawn Hays: Okay. Ed Sheeran. Has only made one hip hop mixtape. I'm not kidding. You can look this up. It is a, a record called Slum Dun Bridge, and this is not made up.
And Ed Sheeran made a hip hop mixtape with
Ashish Rajan: Hello?
Shawn Hays: I boy.
Ashish Rajan: Mm-hmm.
Shawn Hays: A rapper from Alabama.
Ashish Rajan: Really?
Shawn Hays: I am not kidding. It's
Ashish Rajan: a mix tape.
Shawn Hays: It is a mixed tape.
Ashish Rajan: Oh, has he been there for, 'cause mix tape is not like a 2020. 2020s did not have mix tape.
Shawn Hays: It's an old, it's an old concept. Yeah. Yeah. Hip hop artists used to [00:48:00] make mix tape and sell 'em out of their trunks.
Yeah, yeah. Before they got popular. Right. Sorry, this is actually really chewy and really good. This is, um,
Ashish Rajan: this, this is British childhood in a, in a rapper.
Shawn Hays: Oh my gosh. Milk chocolate coated caramel wafer biscuit. Of course it's a biscuit. So he evidently was a fan of this rapper. His name is Yellow Wolf.
Y-E-L-A-W-O-L-F, and he wanted to make a mixtape that kind of tapped into. The, like Alabama Southern roots thing. Yeah. That he has.
Ashish Rajan: Yeah.
Shawn Hays: And it's really, really good. And it's, it is like Ed Sheeran mostly singing like the hook. Yeah. He'll, he'll sing a couple verses and he'll even like rap a couple lines.
Uh, but it's predominantly him singing. And then the rapper, yellow Wolf from Alabama, Gatson, Alabama specifically. Uh, shout out to Ston. Yeah, so that's my British, Alabama,
Ashish Rajan: yeah. Oh,
Shawn Hays: fair connection.
Ashish Rajan: Fair, fair. Oh, kinda is a good segue to my first question as well. Then for the fun question, what do you spend time on, maybe not, not trying to solve the data [00:49:00] security problems in the world?
Shawn Hays: Oh, man. When I'm not trying to solve the data security challenge, uh, AI security no. Oh, that's a great, that's a great question. I do DJ on the side predominantly like weddings.
Ashish Rajan: Oh, yeah.
Shawn Hays: So I do that on the side. And then also too, I have three children.
Ashish Rajan: Oh, yeah. So,
Shawn Hays: A lot of the challenges that I'm trying to solve is how do they become productive humans, uh, that can deal with the nuances of a, a crazy world they're growing up in.
Ashish Rajan: Yeah, yeah. Yeah.
Shawn Hays: And trying just to navigate, you know, life. Yeah. Uh, so that, that is probably one of the big things. Yeah.
Ashish Rajan: A good segue to my second question as well in that case, oh, here we go. Uh,
Shawn Hays: take
Ashish Rajan: my, yeah, go for it. What is something that you're proud of that is not on your social media?
Shawn Hays: Oh, not proud of is on my social media.
Oh man. No. Gosh. If my kids are gonna watch this later talking my kids, I'm really proud of my kids.
Ashish Rajan: If you ever voice this,
Shawn Hays: my greatest achievement in life is them and everything they've done and will do. Honestly oh [00:50:00] man, that's, I gotta think about it. I will say this, I, my oldest, he plays hockey, so I keep it on my kids' thing.
Sorry. Oldest plays hockey. Uh, growing up in Alabama, I did not play. Oh
Ashish Rajan: yeah.
Shawn Hays: For obvious reasons. And there's not a lot of ice in Alabama. Okay.
Ashish Rajan: I was was gonna say that because you cannot need some ice for it to be a thing.
Shawn Hays: Um, but there is now and it's, it is a growing sport even in the south.
'cause now the NHL has expanded to Florida.
Ashish Rajan: Oh, right, okay.
Shawn Hays: Tennessee and other places.
Ashish Rajan: Florida as well. Yeah. ICE in, I mean, I'll get Can you
Shawn Hays: They have two, they have two NHL teams in Florida now, and so it's quite the mecca, at least in the south for hockey. Anyway, long story short, my oldest, he plays, and so I, I wanted to be able to hang with him, so to speak.
And so I picked up playing and I now actually play on a team.
Ashish Rajan: Oh, right
Shawn Hays: Now, granted, it's, you know, it's not high level play, but it is a team. Nonetheless. I can skate backwards. I can make a pass. You know, there's normal stuff.
Ashish Rajan: Oh, wow.
Shawn Hays: So, uh, that's an achievement that I'm, I'm a little proud of.
Ashish Rajan: That's a good
one.
Shawn Hays: Picked up a new thing, much like our folks, you know, trying to, you know, get [00:51:00] into AI security or,
Ashish Rajan: yeah.
Shawn Hays: Enhance their new craft of AI security. I think it's like a thing I'm proud of.
Ashish Rajan: Awesome.
Shawn Hays: Yeah. Learn something I didn't know.
Ashish Rajan: Yeah. Wow. They actually, it's a good skill as well. It's really hard to love pick up new things when you're older as well.
As much as I should read about it, I definitely experience it these days. Like I've very learn new language and it's like,
Shawn Hays: oh,
Ashish Rajan: I learn French is like it. Yeah. The brain just stops working after a while.
Shawn Hays: Tough. It's tough.
Ashish Rajan: I mean, you're doing the physical part as well. Uh, final question is the, uh, what's your favorite cuisine or restaurant?
Shawn Hays: Oh wow. Okay. My favorite cuisine or restaurant? Man. I would have to say, so there's two things that are regional to Alabama. I just had this conversation yesterday. It's two things. White sauce on barbecue. Okay. So barbecue in the south is a thing.
Ashish Rajan: Yeah. Yeah.
Shawn Hays: And so we, Alabama kind of adapts from Tennessee and some other places, the Carolinas, but we have a white sauce.
So in North Carolina they put this like yellow vinegar based sauce on their barbecue.
Ashish Rajan: Right.
Shawn Hays: Um, in other [00:52:00] places it's, you know, the red sauce, traditional barbecue sauce, that type of thing. So in Alabama, they have a white sauce. It's literally like white with peppers in it and stuff like that.
Ashish Rajan: Okay.
Shawn Hays: It's like a vinegar based sauce that we put on
Ashish Rajan: our barbecue.
Shawn Hays: Amazing. Put it on chicken, put it on barbecue, any kind. And that would be one thing. And then there's also a burger joint that is local to Alabama. So think you know, we're in California, in and out.
Ashish Rajan: Yeah.
Shawn Hays: There is a regional burger chain in Alabama called Milo's.
Ashish Rajan: Milo's.
Shawn Hays: Okay. It's also famous for its sweet tea, which is a, a, a regional delicacy.
So yeah, so the sweet tea, white sauce, and a Milo's burger with their particular milo sauce on it.
Ashish Rajan: Oh, that's where
Shawn Hays: it's at.
Ashish Rajan: I need to go to Alabama. That's where it's at, dude. This is great.
Shawn Hays: Super healthy.
Ashish Rajan: Yeah. That's all the questions I had. Uh, where can people know more about Atlas Veronas? And I see you guys as well.
Shawn Hays: Uh, so yeah, so you can find me on LinkedIn. So nevertheless, I think maybe I'll drop it in the, I'll, I'll drop it in description or something to matter of fact that's on a personal level. If you want to connect there. Um, also to varonis.com.
Ashish Rajan: Yeah,
Shawn Hays: you [00:53:00] will see, because we just announced Varonis Atlas not too long ago.
Uh, that's on there. And also too, if you hit our blog, I just wrote a blog about applying zero trust to MCP servers. Oh,
Ashish Rajan: M
Shawn Hays: mc MCP environments. So nevertheless, you can check that out. Should be a good read, I hope. Yeah, those would be my three.
Ashish Rajan: Awesome. No, thank you so much for doing this and uh, thank you everyone for tuning us all then.
Shawn Hays: Thank you.
Ashish Rajan: Thank you.
Shawn Hays: See you next time. Thanks everybody.
Ashish Rajan: Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security podcast tv, our website, or on social media platforms like YouTube, LinkedIn, and Apple, Spotify, in case you are interested in learning about AI security as well.
To check out assistant podcast called AI Security Podcast, which. Is available on YouTube, LinkedIn, Spotify, apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter, it just gives you top news and insight from all the experts we talk to at Cloud Security Podcast.
You can check that out on cloud security [00:54:00] newsletter.com. I'll see you in the next episode, please.












.jpg)








