AI is reshaping cybersecurity as we know it. From sophisticated AI-driven phishing attacks to the amplified risk of insider threats using tools like Copilot, the landscape is shifting at an unprecedented pace. How can security leaders and practitioners adapt? Join Ashish Rajan and Matthew Radolec (Varonis) as they explore the critical challenges and opportunities AI presents. Learn why 86% of attacks involve credential misuse and how AI agents are making it easier than ever for non-technical insiders to exfiltrate data. In this episode, you'll learn about:
- The "Blast Radius": How AI tools can dramatically increase data exposure.
- From "Breaking In" to "Logging In": The dominance of credential-based attacks.
- AI-Powered Social Engineering: The rise of "conversational bait".
- Copilot Use Cases & "Aha!" Moments.
- Data Integrity in AI: The critical, overlooked pillar of AI security.
- The Enduring Importance of Access Management in an AI World.
- Transforming Security Operations: AI for incident response, playbooks, and forensics.
Questions asked:
00:00 Introduction
01:57 New Threat Landscape in Cloud & AI
08:08 Use cases for regulated industries
10:03 Impact of Agentic AI in the cybersecurity space
12:22 Blind spots of going into AI
18:06 Shared responsibility for LLM providers
20:56 Lifting up security programs for AI
27:41 How is incident response changing with AI?
29:30 Cybersecurity areas that will be most impacted by AI
34:43 The Fun Section
Matthew Radolec: [00:00:00] Hackers aren't just dropping malware and establishing persistence anymore. A lot of times they're reusing credentials. They're not breaking in, they're logging in. 86% of attacks are coming from some type of credential misuse or credential theft, and now that's only multiplied and magnified by agentic AI and Copilots. Because this concept called the blast radius, or how much data one person has access to, is something that a lot of organizations can't get a handle on. And when you try to measure it, you realize it's huge. And now you put a Copilot in their hands or an agentic AI in their hands, and you've removed the need to be technical to get data out.
Yeah. So now your insider threats, like an insider equipped with a Copilot, is just as good as a nation state actor in terms of accessing and exfiltrating data. They just, you just don't realize it.
Ashish Rajan: Welcome to another episode of Cloud Security Podcast, I've got Matt here with me. Hey man. Thanks for coming in.
Matthew Radolec: It's great to see you again.
How's everything?
Ashish Rajan: Oh my God. We've been talking so much about, I wanna say, makeup. And I'll encourage people to watch his keynote and some of his social media after. But for people who don't know about you yet, could you give us a [00:01:00] 30-second version of your background and what got you into cybersecurity?
Matthew Radolec: Sure.
My name is Matt Radolec. I run systems engineering incident response cloud, managed data detection and response at Varonis. I've been there for about eight years. I'm deeply passionate about protecting the world's data. I feel like since a young age, I had this calling to be a protector of some sort, and it took me a while to figure out that it was data that I was after the whole time.
And now I'm really excited to share my passion about data security and cybersecurity with the world.
Ashish Rajan: And to your point about protecting the world, one thing that seems to be taking over the world right now is AI, and different forms of it, like Copilot. There's a lot of it.
Maybe to set the scene for the threat landscape that you're noticing as it's changing. It feels like the entire cybersecurity as we know it today, and the threats we knew about, have all been challenged now. What's the new threat landscape that you're seeing? Especially when you also have cloud and IR teams that report into you as well.
Yeah. So what's the new threat landscape that you're finding cybersecurity is dealing with in AI?
Matthew Radolec: [00:02:00] There are a couple things I want to mention. One of those is, we've always thought and talked about the insider threats. Yeah. And a lot of people jump to the Edward Snowden, like disgruntled IT administrators.
Yeah. But that's not your number one concern anymore. Hackers aren't just dropping malware and establishing persistence anymore. A lot of times they're reusing credentials, or what we like to say at Varonis is, they're not breaking in, they're logging in. Oh yes. 86% of attacks are coming from some type of credential misuse or credential theft.
And now that's only multiplied and magnified by agentic AI and Copilots. Because this concept called the blast radius, or how much data one person has access to, is something that a lot of organizations can't get a handle on. And when you try to measure it, you realize it's huge.
And now you put a Copilot in their hand or an agentic AI in their hands. And you've removed the need to be technical to get data out.
Yeah. So now your insider threats, like an insider equipped with a Copilot is just as good as a nation state actor in terms of accessing and exfiltrating data. You just don't realize it.
It's not something that you're thinking about. So one of the things, the trends [00:03:00] that we see is there's a lot more threats or incidents or breaches that are occurring where one user gets compromised and has access to a lot of data. Yeah. And that causes a data breach. And that is not a sophisticated threat at all.
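The "blast radius" Matthew describes, how much data one compromised account can reach, is at its core a walk over your access-control lists. A minimal sketch of how you might measure it; the directory data, group names, and file names here are all hypothetical:

```python
# Hypothetical sketch: a user's "blast radius" = the sensitive resources
# their account can reach, directly or via group membership.

# Group memberships (made-up directory data)
groups = {
    "all-staff": {"ashish", "matt", "dana"},
    "finance": {"dana"},
}

# Resource ACLs: who can read each resource, and whether it is sensitive
resources = {
    "q3-salaries.xlsx":  {"readers": {"finance"},   "sensitive": True},
    "merger-notes.docx": {"readers": {"all-staff"}, "sensitive": True},
    "lunch-menu.pdf":    {"readers": {"all-staff"}, "sensitive": False},
}

def principals_for(user: str) -> set[str]:
    """A user acts as themselves plus every group they belong to."""
    return {user} | {g for g, members in groups.items() if user in members}

def blast_radius(user: str) -> set[str]:
    """All sensitive resources the user's principals can read."""
    who = principals_for(user)
    return {
        name for name, meta in resources.items()
        if meta["sensitive"] and who & meta["readers"]
    }

print(blast_radius("matt"))   # reaches the merger notes via all-staff
print(blast_radius("dana"))   # reaches salaries too, via finance
```

A real deployment would pull this from Active Directory, SharePoint, S3 bucket policies, and so on; the point is simply that the number is measurable, and it is usually much larger than expected.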
Now when we think about this battle of AI, though, it is a little bit of the robots versus the robots.
Ashish Rajan: Yeah.
Matthew Radolec: Because we see cyber criminals and nation state actors also starting to use AI, in two ways that I really wanna mention. One of those is this concept of conversational bait.
So we've all heard of the Nigerian prince scam, right? Oh, congratulations, you're my cousin and you won $20 million. You just need to wire me 20,000 so I can get you this 20 million. Yeah. Everyone knows about this scam. But what if it was different? What if it started with a conversation like, hey, it's been so long. How are you? How's it going with that podcast I just listened to about you? So what's happening is that the hackers have built large language models to have conversations as the ruse before actually sending the phishing email or sending the request for the wire. Oh my God.
Yeah. And you don't realize that you're having a conversation with a [00:04:00] bot, and that bot is using ChatGPT to learn about you. So they know a little bit about you and they're able to have a meaningful conversation all before they send the ruse.
Ashish Rajan: Oh my God.
Matthew Radolec: So I think this is something that we're up against and a lot of people also worry about like deep fakes and stuff like that.
Yeah. The more that your content is out there, the easier that's gonna be. And I think it's gonna get to the point where we're gonna trust what we see on the internet even less than we do now.
Ashish Rajan: I think it's funny, the first thought when you mentioned the Nigerian prince was, wait, is that not real?
Yeah. I'm never gonna get the money. But I agree with you, because the phishing emails are so sophisticated as well these days. All the security awareness training that was done before, it almost feels like, is that really gonna be helpful? To your point, when I'm having a conversation with this person, I've built trust with this person.
And I can't even differentiate if the person is the real Ashish on the other end, versus just someone who happens to use a ChatGPT LLM with my profile picture on it.
Matthew Radolec: Right. And did a little bit of homework on you. Yeah. Asked, tell me about Matt. Yeah. What can you tell me about Matt?
Because if you put that in ChatGPT right now, it's gonna list podcasts that I do. Yeah. [00:05:00] Yeah. My city. Cybercrime podcasts. All the public experience you have. All the public experience I have. Stuff that crawled from my LinkedIn. Yeah. Any blog posts or articles I might have written or been quoted in.
And if you were having a conversation with me about that stuff,
Ashish Rajan: yeah,
Matthew Radolec: I might reply.
Ashish Rajan: Yeah. Actually, it's a good point as well. Do you find that in the newer landscape that has been evolving, there's obviously the email perspective, and you mentioned the fact that it's people inside your firewall now who are potentially threats with Copilots? It's literally, I'm gonna copy paste from one directory to another directory.
It's the same as, oh, I'm just gonna copy paste into my ChatGPT or whatever. What's the difference? Same thing, right? 'Cause it's the enterprise edition or whatever. Because there has not been a big public AI incident, like the Capital One moment that happened with cloud, where suddenly everyone became more aware of it.
In the conversations you've had with customers and otherwise, are you finding there are AI use cases where, oh shit, that should not have happened? Oh shit moments, like,
Matthew Radolec: Oh, this story gets told dozens and dozens of times, I could tell you, and it's [00:06:00] probably one of the reasons why we are absolutely swamped at this RSA conference with people that want to talk to us, because we have the world's first solution for Microsoft Copilot and it's expanding. But here's the use case that happens. They do a pilot.
Yep.
They maybe buy a dozen licenses for Copilot, or 30 licenses, or a hundred licenses.
And they turn it on. Yeah. They realize they can get to salary information, they can get passwords, they can get merger and acquisition details, all with a couple of keystrokes. And then they're like, oh my gosh. But also our business is benefiting so much. Like the note taking, the Copilot for Excel or Copilot for GitHub is really benefiting our dev teams.
And so there's tension that builds between security and the business. Yeah. And they want to move forward. So they're coming and talking to Varonis because they want to get a handle on this and they wanna do it fast 'cause they don't want security to be the reason that they don't roll out Copilot. But it is from some aha moment, almost every time it's salary information.
Oh wow. It's like the review cycle or the bonuses. Yeah. Or it's an upcoming merger and acquisition. Or [00:07:00] even layoffs and things like that. Yeah. Or closures of factories and facilities, especially in the manufacturing space. This is stuff that obscurity protected from the user's view before, but now Copilot removes the obscurity layer from security.
Ashish Rajan: Oh, so I guess what you're trying to defend against is the curiosity of the person to ask the question. Yes.
Matthew Radolec: But before that person had to be technical. They had to know how to use search or they had to write scripts to be able to find this data. Yeah. Now it's just curiosity.
Yeah. Literally, it's just like a little bit of self-control to not ask the Copilot all these dangerous questions
Ashish Rajan: in plain English as well.
Matthew Radolec: Correct. Exactly. And it doesn't even have to be good English. Yeah. That's right. It could be a little broken, and Copilot will say, did you mean this?
Ashish Rajan: Yeah. And I think that's what's the scariest, and it goes back to the blast radius you mentioned earlier. So what about regulated industries like health and manufacturing and any of that? Are you seeing any use cases there as well?
Matthew Radolec: Oh, absolutely. I, and I think these are the ones that oftentimes have the biggest benefit, right?
Let's take healthcare as an example. Some of our healthcare clients were the very first ones to want to [00:08:00] deploy Copilot, because if you can buy doctors more time, not only are you gonna make a healthcare company more money, but maybe you're gonna actually help more people. So healthcare, like the big hospital systems, right, can see one of the biggest benefits of agentic AI. Think about note taking. They have to produce all these medical notes. That's right. Every single interaction has to be put into a medical note in the patient's medical record. If you can speed that up or make it more accurate, yeah.
Not only are you gonna help the hospital make more money, which, I mean, it's all for-profit in the States, but you're also gonna be able to help the doctors see more patients, and see them more effectively. So they're the ones that stand to benefit the most, but there is also the most risk. Yeah.
So that's why they're having conversations with companies like ours, like Varonis, 'cause they want to get a handle on it, and they want to be the first to adopt AI and get these benefits as a competitive advantage, but also to deliver better patient care. The same thing with financial services.
And I hear a lot, when you talk to traders, yeah, that knowing something seconds ahead, or tenths of a second ahead, or even a microsecond in processing, yeah, makes a [00:09:00] difference. So what if the AI that you have
Yeah.
Can get that analysis done a little bit faster?
Than your competitors and you make the trade seconds before. This could be millions or tens of millions of dollars. That's right. Yeah. Or the difference between catching the jump or catching the dip, right? Yeah. Yeah.
Ashish Rajan: And manufacturing as well.
Matthew Radolec: Manufacturing is the same. Now, I think with manufacturing it's more about being able to invent new things, and the ideation phase, right?
And I also see this a lot with companies that are trying to do creative thinking. I don't personally get a lot of new ideas when I'm using a Copilot. Okay. But if I take an existing idea and I want 10 different spins on it, and I'm in that ideation phase, yeah, I find that Copilots can be really useful at helping you break through to that next layer of innovation.
Ashish Rajan: And to be specific, so many things are called a Copilot these days. When you are looking at these use cases, are they Copilot for GitHub, Copilot for Office 365,
Matthew Radolec: or maybe Gemini. Oh yeah, [00:10:00] there is that as well. Agentforce on top of Salesforce is another really popular one.
And this agentic AI is also starting to gain a lot of steam. That's right. Where you move away from the person when you call a company. Yeah. Whether it's to book an appointment or get a service. Yeah. I think almost all real estate is like this now, where you're trying to get to see an apartment.
Yeah. And you're just talking to some agentic AI about it the whole time. Oh my God. Even when you say you wanna talk to a person, they just give you a different bot.
Ashish Rajan: Yeah. Funny story, I was trying to, so we do this creative dinner every year for a bunch of cybersecurity content creators.
And I was trying to book a restaurant. It's a Korean barbecue place somewhere in SF. When you call them, you get this automated service. And in a lot of ways it was a good reminder. I didn't talk to one single person. I made the reservation. It just asked me for prompts. I gave it the prompts over the phone, by voice.
It made the booking. I got the email. Everything was end to end. To your point, every little smaller business is also benefiting from this. But at the same time, I wonder about the internal threat part [00:11:00] you've called out as an attack vector. 'Cause that's where a lot of people are, like you mentioned, blast radius.
Outside of the data part, are there other components that perhaps require attention even more?
Matthew Radolec: Yeah, I think denial of service is one to think about in this agentic space. If you have a public-facing agent, again, let's just use the reservation example that you brought up. Yeah. How easily could that be overwhelmed?
Now, what's the impact? All your reservations book up and you have to clear them out. Yeah. So maybe it's frustrating, but if it happened in a sustained way, it could definitely hurt your business. Yeah. I also think this draws out a need, and I've been talking about this a lot in security in the last, I'll say, 10 years.
Yeah. We've talked a lot about confidentiality. In terms of like keeping data a secret.
That's right.
And because of ransomware, we've talked a lot about availability. Yes. Which is, if I lose access to my data and it's encrypted, I can't deliver my business outcomes. Yeah. AI draws the need for us to talk about integrity. Now, more than ever, you need to make sure that the data you feed into the LLMs you're building, yeah, is good, clean data. Especially in stuff like healthcare and medical [00:12:00] research. Oh, because if that data got altered, yeah, and then fed into an AI, yeah, the outcomes and the consequences of that are far worse than that data getting publicly leaked, because now decisions are gonna be made based on faulty data.
Yeah. So I predict the trend where we start to see the ransomware actors start to try to poison the data that goes into the models as opposed to just capture the data that comes out of the model.
Ashish Rajan: In every conversation that I've had at every CISO dinner, at least for the last couple of days, I found that everyone is being asked by their executives, who are pushing AI: AI is gonna be the number one thing we want.
And to your point, every industry's gonna see multitudes of benefit from it. They have to adopt it for the edge, 'cause if they don't do it, the competitors are doing it. So everyone's pushed into a corner with it. What are the blind spots of rushing into it? And I don't mean it in the context that we should not go into AI. I'm just conscious of the fact that you're also talking to the people who are implementing this at scale: healthcare, manufacturing,
Yeah. You mentioned all of the financial. What's the blind spot here?
Matthew Radolec: [00:13:00] One of them is non-determinism. It's the fact that you and me could ask ChatGPT the same question and get different answers.
Ashish Rajan: Ah. Okay. That's a big deal.
Matthew Radolec: Yeah, right. Like it's based off of the previous queries that we've made.
Yeah. It's based off of our search history and the conversations that we've had with the Copilots. Yeah. So the fact that it isn't scientifically provable, it's not a fact that gets produced on the other end. I don't think we've necessarily seen the impact of that yet. I think the other thing that we have to worry about is what I was trying to bring up before.
If bad data is fed in. A lot of this was happening with DeepSeek, where people were just dumping lots of data into DeepSeek. Yeah. And it wasn't as, let's say, powerful as a model. And it was getting results back that were less than valuable. Yeah. Now you've hurt the environment with all the compute you've just wasted, but you've also wasted a lot of time to get an inaccurate result.
And what if bad data is then used to train something that's then used for downstream business decisions? So the way I think about it is this: in business, a bad decision you can recover from, if you fail fast. Yeah, like this DevOps mindset of fail fast, innovate fast.
Ashish Rajan: Yeah. Yeah.
Matthew Radolec: If you don't realize you're [00:14:00] failing with AI, because you built a bad model and you continuously make bad decisions with that model, how are you making sure you're failing fast with AI?
This is the question I'd want to ask. Actually, I'd wanna spin that back to that same person that's asked me: how do you know you're failing fast? Yeah. How do you know you're winning, though? How do you know you're getting the gains? Do you have a metric from before and after?
Because I'll give you an example, if you don't mind. Yeah. Yeah. So I run the managed data detection and response. Yeah. And one of the things that we measure really closely is how long it takes us to investigate, respond to, and close alerts for our clients. Yeah. So we built an AI security analyst.
Okay. And we baseline our humans against the AI. At baseline, you have to be better than the AI, or unfortunately, you're not gonna last very long. Which is a fair point. They have to be more accurate, more precise, and faster. Yeah. Yeah. And if not, then again, you might not have a long tenure in our managed detection and response team.
Yeah. And I think that this tension is healthy. Yeah. Because it's my job as a practitioner, but also, we're a publicly traded company. We gotta be efficient, we gotta be precise, we gotta be accurate. We gotta protect the [00:15:00] world's data. Yeah. And so I think, per use case, per organization, per company, there are ways to say, am I getting it right, or is it actually making things worse?
Because if it increased the rate at which I had false negatives or false positives, then it's bad. That'd be bad client outcomes. Yeah. So they gotta be better than that too. Oh yeah. And so simply from a standpoint of, how do you measure success, or how do you measure failure?
How do you fail fast to innovate fast? That's how I would try to answer the, how do I know, or what do I need to be worried about in terms of AI?
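The before-and-after measurement Matthew describes, baselining humans against an AI analyst on time-to-close and verdict accuracy, could be sketched like this; the alert records and field names are hypothetical, not Varonis's actual schema:

```python
# Hypothetical sketch: baselining analysts (human or AI) on closed alerts.
# Each record notes who closed the alert, how long it took, and whether
# the verdict was later found wrong (a false positive or false negative).
from statistics import mean

alerts = [
    {"analyst": "human", "minutes": 42, "wrong_verdict": False},
    {"analyst": "human", "minutes": 35, "wrong_verdict": True},
    {"analyst": "ai",    "minutes": 6,  "wrong_verdict": False},
    {"analyst": "ai",    "minutes": 9,  "wrong_verdict": False},
]

def baseline(analyst: str) -> dict:
    """Mean time to close and error rate for one analyst's alerts."""
    closed = [a for a in alerts if a["analyst"] == analyst]
    return {
        "mean_minutes": mean(a["minutes"] for a in closed),
        "error_rate": sum(a["wrong_verdict"] for a in closed) / len(closed),
    }

# "At baseline, you have to be better than the AI": compare the two.
print(baseline("human"))
print(baseline("ai"))
```

With a metric like this from before and after a rollout, "are we failing fast or winning?" becomes an answerable question rather than a feeling.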
Ashish Rajan: Where does the whole data vault and stuff fit into this? 'Cause obviously a lot of people are building this in cloud and working with applications in cloud; a lot of that data is in cloud.
Yeah. And you kind of mentioned the data security part, the data leakage part, and now the integrity part being more important. Data vaults are a thing. Identity abuse is a thing. What are some of those things?
Matthew Radolec: I also think that masking and tokenization and encryption are gonna come back with a vengeance.
Oh, okay. Because we might not be able to prevent data from getting fed into a model, but maybe it doesn't need to be everything. Does it really need the social security numbers and the credit [00:16:00] card numbers? No, it just needs to know, Matt likes to shop here, or this is Matt's health information.
Yeah. Yeah. And so if we can mask it inside the models, it can still get the value it wants to derive from the other parts of those data elements, but not necessarily the sensitive, regulated parts. And I think this is what businesses are faced with right now: how do I move fast but not dump all this extra data in?
Yeah. The same could be said about data lakes, though. We did this five years ago with Snowflake and Databricks and all these other data lakes. This is just the next place for people to make a ton of mistakes.
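The masking idea above, stripping regulated identifiers while keeping the useful parts of a record, might look like this in its simplest form. Real tokenization would use reversible, format-preserving tokens rather than fixed placeholders, and the patterns and sample record here are illustrative only:

```python
# Hypothetical sketch: mask regulated identifiers before a record is fed
# to a model, keeping the rest of the text useful for analysis.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")                 # e.g. 123-45-6789
CARD = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")           # 16-digit card numbers

def mask(text: str) -> str:
    """Replace SSNs and card numbers with placeholders."""
    text = SSN.sub("[SSN]", text)
    text = CARD.sub("[CARD]", text)
    return text

record = "Matt shops here often. SSN 123-45-6789, card 4111-1111-1111-1111."
print(mask(record))
# The model still learns "Matt shops here often" without the regulated parts.
```

The model gets the behavioral signal ("Matt shops here") without ever seeing the identifiers it doesn't need.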
Ashish Rajan: And do you find that the people who did this in the beginning, or maybe are doing it now, a lot of them are focusing, at least initially, on Office 365?
Matthew Radolec: Office 365, or the build-your-own agentic AI, which is also getting really hot right now. Oh, and Agentforce from Salesforce is another one that has just taken off. They really want to be the leaders in helping you have an agent for your small business.
Yeah. And help you gain those efficiencies from a Salesforce perspective. So we see a lot of that. We see a lot of the Copilot. We also see a lot of [00:17:00] companies that are building their own LLMs on top of, or adjacent to, ChatGPT.
Ashish Rajan: Oh wow. So contrary to the popular belief that, oh, I'm just using OpenAI, or I'm just using Gemini, people are actually using multiple. Yeah.
Matthew Radolec: They wanna combine the data streams together. I think some of the largest tech companies in the world right now have their own data lake that they're combining with a ChatGPT type of model. Yeah. In order to get unique results out of it. 'Cause that's where the value to your business comes from.
You take all this institutional knowledge in your business, yeah, you combine it with the large language model, or the hive mind as you'd like to think about it, yeah, and that net new end product, that's the power of AI.
Ashish Rajan: Actually, you raise an interesting point with SaaS applications. I'm gonna take Salesforce as an example.
Them having an agent, and us being a consumer of that agent, or a consumer of Salesforce, whatever. Does that kind of change the threat landscape a bit more? Because, to your point about data, me as an individual with access to data as an internal threat, versus [00:18:00] now I have sensitive data in Salesforce and other places.
What's the change in threat landscape with those?
Matthew Radolec: Now you need to worry about AI security. You need to worry about stuff like model poisoning. Yeah. You need to worry about stuff like prompt injection. But fortunately, I think for everyone here at RSA conference and all your listeners,
a lot of the AI companies themselves are shifting security left in their models. Yeah. And they're trying to build AI governance and AI ethics into their models. I ultimately think that this AI security startup space around LLM security isn't going to take off, for this reason. I don't really believe that it should be on the consumer of AI to make sure the model itself is safe.
I think this is upstream needs to be the responsibility of the companies building the models and the toolkits and not the consumers of them. But there, there is something to be said about a balance on this too though. And let's just use Salesforce, an example. Salesforce is on the hook for certain parts of security. Yeah. And customers of Salesforce are on the hook for other parts of security. Yeah. AI needs to be the same, but in terms of the [00:19:00] model itself being vulnerable, that needs to be on the creators of the model. I don't think that's on the consumers.
'Cause the consumer can't fix it. I can't rewrite the code. Yeah. ChatGPT has flaws in its logic and reasoning. I, as a consumer of ChatGPT, cannot solve for that.
Ashish Rajan: What about people using the Snowflakes of the world and others? I guess maybe where I'm going with this is, as people are using more SaaS services that have agents, we're talking about shared responsibility here.
As you evolve, and you mentioned that, hey, the LLM security companies that exist at the moment may not exist the same way, is the thinking there, obviously you guys are in the data security space, as long as the data is clean and structured in a way that you can consume the right way
Matthew Radolec: and the rights are set up the right way, this is what it all comes down to.
It comes down to an access management problem, which is the same one we've been facing in security for as long as I've been in the industry. I don't know, the first time I had to deal with Active Directory group permissions, I feel like I was in middle school or something like that, and I was trying to set up a domain on Windows.
Oh my God. Yeah. [00:20:00] 3.1 or XP or something like this. Yeah. Group policies in Server 2003 and access rights, that was the problem then. Yep. That's what got you in a breach. Yep. Has it changed? No. Can we name one part of that that's changed? Or is it still just basic access management that saves you from a breach, or leads to one?
I think it's the same problem. Snowflake, Databricks, Salesforce, Microsoft 365, file servers and NAS arrays, S3 buckets. I could keep naming every single place that people put data. If the access management is tight, they probably don't have a data breach, by AI or by their own employees or by an external threat actor.
But if access management's wide open, they have a big blast radius. That is the weakest part of your security program. Yeah, you can have the most sophisticated stack in the world. Modern firewalls, SASE, next-gen endpoint, next-gen EDR, IPSes, managed detection and response. But if, when I compromise a single account, I can use Copilot to access all the salary data and export it, then what's the point?
Yeah. What's the point?
Ashish Rajan: Yeah. What's the point, actually? 'Cause when you say it this way, right, I think it's [00:21:00] funny, I'm also thinking about the CISOs and security leaders. Not that they're only waking up now, they've been trying to tackle the Copilots of this world and the AI adoption of the company.
No one's trying to slow it down. I don't think anyone believes that this is not the future, but there is definitely a sense of nervousness, for the fear of the unknown, for lack of a better word. Yeah. FUD. Yes. Are there things you've found in the way people have adopted AI, or maybe in the blind spots? What does good maturity look like?
'Cause a lot of people already have a security program. A lot of people may already have a data security policy. They may even have an AI council in-house as well, an AI security council for governance. What are the things they can include in their security program, or to uplift it for AI, as it's going up?
Matthew Radolec: Yeah. So the first one, how are you using AI? As a security professional, how are you arming your teams with AI powered tool sets? Your actual teams? How are you using AI to do a better job at being a protector?
Ashish Rajan: Right?
Matthew Radolec: Because if you're talking about enabling your business with AI, are you enabling your security team with AI?
Or are you sitting on [00:22:00] the back burner and just trying to get the business gains without trying to get the security gains? Maybe that's one of the ways that you can help combat that threat: you can make your team more agile, more efficient, much like how you're trying to make your business more agile, more efficient.
Yeah. I take this really seriously. Yeah. It's my job as a leader, and this is what my talk is about on Wednesday. Yeah. To give my team the tools they need to be the best possible at doing their job. That's my job as a leader. I have to give them an AI-powered tool set, yeah, or I'm lagging behind.
Ashish Rajan: Yeah. Fair. And to your point, does that mean in an existing security program, once you start using AI, and I guess you're doing it yourself as well, are there areas, whether it's to still continue starting with Office 365 or to look at SaaS services? I guess where I'm going with this is because
a lot of people don't even know what maturity could look like from an AI security perspective. We understand that, yes, data is important. We understand that identity is important. Sometimes access management may not be the best across the board. Single sign-on was such a big thing for a long time. Authentication,
yeah, oh yeah, you do that and you'll be fine, everything is fixed. Yeah. Yeah. But we also found that over time these things are not enough. You still have to have, hey, make sure Ashish actually doesn't have access to salary information if he should not be active,
Matthew Radolec: and if you can't, then police the prompts.
Yes. So I think this is the balance point, right? Yeah. We sometimes in security need to enable the business. Yeah. But then we shouldn't give up the monitoring. We should demand that we need to do the monitoring. Yeah. 'Cause you can still be the police. Yeah.
Ashish Rajan: Yeah.
Matthew Radolec: You can still look at what people are using the prompts for, look at the responses that they're getting and you can still say, Hey, why did you look up salary information for the CEO?
Yeah. Like you didn't need to do that for your job. You're not on some kind of special project. Same as if they somehow got administrative access to Outlook or Exchange, or an IT admin. You know that Snowden use case we talked about before; you still need to police for these insider threats.
Yeah. And I think if you do that, there's some risk that's mitigated, simply from not being blind to it. But the other thing you just have to face is: whatever you're feeding into this, you need to know exactly what's gonna happen to it, or you aren't ready. If you turn on Copilot and your [00:24:00] permission set is wide open and you think you can hit the undo button on that, you can't.
No. There's no undo button for something getting chewed into a large language model, at least not yet. Not yet. Yeah. And so you need to be thinking about these things before you go buck wild and turn it on for everyone, everywhere, all the time. But that's also what's helpful about running a pilot.
Security should be using the pilot, same as the business that's using it. What gains can we get? What controls do we need to put in place before we go company-wide with it? Or, if we do go company-wide with it, there's some level of risk acceptance. I'm not here to make risk decisions for your company.
Yeah. Maybe the benefit far outweighs the risk. Maybe the danger of this person knowing everyone's salary is overplayed. I don't know your business enough to tell you that. I wouldn't want a business to not get the benefits of AI. I'd want them to sit here, listen to this, and think: am I doing enough with AI, as well as am I doing enough to secure it?
Because we are at the infancy. We haven't even left the crib when it comes to AI, in terms of the world and the [00:25:00] benefit of AI yet. And there's a lot more that we have to learn, and a lot of it's gonna be learned by doing it.
Ashish Rajan: You are on the money there, and I'm glad you used the example of how your team is using it at level one, and how they need to at least match up to that.
Matthew Radolec: Absolutely. Yeah. Or they get weeded out.
Ashish Rajan: Yeah. And as a sidekick or whatever, but they should be leveling up to be better than that rather than, oh, I'm at the same level as the AI. Correct. Like why are you here in the first place?
Matthew Radolec: Yeah.
You're not adding value on top of the robots.
Yeah.
Ashish Rajan: And to, to your point, as harsh as it may sound. But that is truly there's the reality of it as well as a business. You don't want a security incident. Not a security incident to be missed just because the right information was not provided. Context was not there. Or you didn't want to work on it,
Matthew Radolec: but let's just spin it.
That's the same reason that these agents, this agentic AI, is successful. Do you really wanna talk to someone on the phone to make a dinner reservation? No, not really. My AI agent could do that. I could send my AI to talk to your AI, and we save time and money and headcount and benefits.
Yeah. And no, I have empathy for that [00:26:00] person that lost that job. Maybe they really liked it. They should maybe look at the American Express concierge line or something now; people still value that in different parts of society. But we have to level up. We as a society have to find new skills, or new ways to train people to use the AIs or build the AIs, because it's time.
Yeah. It's time to be more efficient and to be better than the robots. Maybe that's a whole other podcast I'll have to tell you about.
Ashish Rajan: And I'm definitely in agreement with you on this, because it's still in the beginning stages. We haven't really left the crib, to quote you.
I definitely find that a lot of people have either taken a stand for, oh, it's completely blocked off in our organization, or,
Matthew Radolec: which, I will say out loud, I think is a career-limiting maneuver. Yeah,
Ashish Rajan: Yeah. I'm fully in agreement. It's almost like the people who said, oh, I'm not gonna do cloud.
Or the internet. I don't want internet, no internet access, we only do internal LAN networks. It's kinda like that kind of conversation. I feel like, yes, you can try and push back on it, but for how long would you be able to do that, and at what [00:27:00] expense?
Because as a business, is your executive team or board okay to take the risk of not using it, and waiting and seeing what happens? There's a question to be asked about what edge you were looking for, and whether this was the edge you could have used to, I don't know, go IPO or whatever.
But it also opens another door. We spoke about attack vectors, we spoke about how leaders should be trying to build this capability in their organization. Incident response is often not looked into, and fortunately you have the detection and response team under you as well. How has that world changed with AI?
Because now we are multimodal: data is one component, identity and access management is another. How is incident response evolving?
Matthew Radolec: So in one way, you can get AI-generated playbooks. One of the things we factored in is, I've been running the IR team at Varonis for almost six years.
So I think we had data on something like 12,000 investigations, 80,000 hours of services delivered. Oh, wow. And everything was tracked in a case management [00:28:00] system. Okay. So we could feed all those tickets into a large language model to write playbooks on how to handle certain types of incidents.
Tremendously successful at that. Wow. What comes out, like what you should do for a brute force attack, a ransomware investigation, a domain generation algorithm, all these different styles of threats or detection techniques? Really good at that. Wow. But for what to do when there's a live incident, you still need incident commanders.
Oh yeah. Of course. Yeah. Because there's just a little bit too much input and nuance to the risk tolerance. Until we can teach AI how to understand the risk tolerance of a business and its business-critical applications, I don't know how much it's gonna be used in the live handling of an incident.
Yeah. But on the IR side, like the collection of artifacts, the reviewing of those artifacts, the building of a timeline, AI's already doing a great job at all that.
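The playbook-generation idea Matthew describes can be sketched in miniature. This is a hypothetical illustration only: the ticket fields (`type`, `summary`, `resolution`) and the prompt wording are invented stand-ins for whatever a real case-management export and LLM integration would look like, not Varonis's actual pipeline.

```python
from collections import defaultdict

def build_playbook_prompt(tickets, incident_type):
    """Gather resolved tickets of one incident type into an LLM prompt.

    `tickets` is a list of dicts with hypothetical keys: 'type',
    'summary', and 'resolution'.
    """
    by_type = defaultdict(list)
    for t in tickets:
        by_type[t["type"]].append(t)

    lines = [
        f"Write an incident response playbook for: {incident_type}.",
        "Base it on these past investigations:",
    ]
    for t in by_type[incident_type]:
        lines.append(f"- {t['summary']} => resolved by: {t['resolution']}")
    return "\n".join(lines)

# Invented sample tickets standing in for a case-management export.
tickets = [
    {"type": "brute_force", "summary": "500 failed logins on VPN",
     "resolution": "locked account, forced MFA reset"},
    {"type": "ransomware", "summary": "mass file renames on share",
     "resolution": "isolated host, restored from snapshot"},
    {"type": "brute_force", "summary": "password spray on O365",
     "resolution": "blocked source IPs, rotated credentials"},
]

prompt = build_playbook_prompt(tickets, "brute_force")
print(prompt)
```

The prompt text, grouped by incident type, is what would be sent to the model; the model call itself is omitted here.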
Ashish Rajan: And are you finding that the
Matthew Radolec: writing of the playbooks?
Ashish Rajan: Yeah. So do you find that, because you already had the data that the incident response team is going to use,
and you already had information about what has been a potential [00:29:00] false positive in the past as well, correct? Correct. So now, when your team sees that same information come down, you're able to say quickly, oh, we can ignore this one.
Matthew Radolec: And what we're doing is we're having the AI analyst provide a risk score.
Yeah. So there are elements that will increase the score and elements that will decrease the score, which ultimately lead to a recommendation of whether or not an alert is a true positive or a false positive. That's what then gets processed by the person.
Ashish Rajan: Yeah.
Matthew Radolec: So the person's combining this risk score, the AI's recommendation, and then their own.
Yeah. And that's how we're able to be more precise, but also cut down on time to investigation and time to resolution.
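The scoring flow described here, signals that raise or lower a score and a threshold verdict that a human then reviews, could be sketched like this. The factor names and weights are made up for illustration, not an actual product's scoring model.

```python
# Hypothetical signals and weights -- invented for illustration.
RISK_FACTORS = {
    "off_hours_access": 30,        # raises the score
    "sensitive_data_touched": 40,  # raises the score
    "known_admin_account": -20,    # expected behaviour lowers it
    "change_ticket_open": -25,     # planned work lowers it
}

def score_alert(signals, threshold=50):
    """Sum the increasing/decreasing elements and recommend a verdict."""
    score = sum(RISK_FACTORS.get(s, 0) for s in signals)
    verdict = "true positive" if score >= threshold else "false positive"
    return score, verdict

# The analyst combines this recommendation with their own judgment.
print(score_alert(["off_hours_access", "sensitive_data_touched"]))  # (70, 'true positive')
print(score_alert(["off_hours_access", "change_ticket_open"]))      # (5, 'false positive')
```

The point is the shape, additive evidence plus a threshold, not the specific numbers.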
Ashish Rajan: Oh, that is so cool. I'm curious on your opinion on this: if you take a step back and look at people who are using AI, and people who are talking about AI within cybersecurity, what are some of the areas that you feel will be disrupted in the coming years?
You're obviously giving a keynote here at RSA as well, and you're trying to bring that element to this. Are there signals you've already seen where you feel, you know what, these one or two or three fields in security are gonna change, or may not [00:30:00] even exist in a few years?
Matthew Radolec: Forensics, I think, is gonna really change, like how we collect. Yeah.
Once we get an AI with enough of the packages from GitHub or from paid products to collect artifacts, why have a person go pull memory from a computer? Why even have a person analyze it? Because it gets to a certain point where the only thing a person's gonna help with is novelty.
Ashish Rajan: Yeah,
Matthew Radolec: right. This is a never-before-seen malware variant, so the AI is not gonna do such a great job. But even then, there's AI detection, AI antivirus on the floor, and if you run it through that or some type of AI sandbox, it might even spit back more information for you.
Ashish Rajan: Yeah.
Matthew Radolec: So I think the SOAR space is another one that could likely be disrupted by that.
Okay. Because a lot of times with a SOAR, you're just kinda reading one company's API, and maybe using Python or PowerShell to create an integration by pinging two APIs. Copilot for GitHub can already do a lot of that: hey, read this API manual, read this API manual, write me a script to query this one, dump the data, and push it [00:31:00] into this particular API.
Yeah, what you get back usually compiles. Now, it might not be the most efficient version of it, it might not be perfect. But I think these are places, like SOAR and log artifact analysis and forensic analysis, where I could see AI starting to gain ground over the humans doing this work.
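The two-API glue Matthew describes is, at its core, fetch from one system, reshape, push to another. A minimal sketch, with the HTTP clients swapped out for injected callables so nothing here pretends to be a real vendor API:

```python
def sync_alerts(fetch_alerts, push_event, min_severity=5):
    """Pull alerts via one API and push the severe ones into another.

    `fetch_alerts` and `push_event` stand in for real HTTP clients,
    e.g. thin wrappers around two vendors' REST endpoints.
    """
    forwarded = []
    for alert in fetch_alerts():
        if alert["severity"] >= min_severity:
            # Reshape one (hypothetical) schema into the other's.
            push_event({"title": alert["name"], "sev": alert["severity"]})
            forwarded.append(alert["name"])
    return forwarded

# Stub transports so the sketch runs without a network.
def fake_fetch():
    return [
        {"name": "dga-beacon", "severity": 8},
        {"name": "failed-login", "severity": 2},
    ]

received = []
print(sync_alerts(fake_fetch, received.append))  # ['dga-beacon']
```

Swapping the stubs for real API wrappers is exactly the boilerplate the conversation suggests an LLM can draft.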
Ashish Rajan: Wait, any blue-collar jobs you reckon will be affected as well? 'Cause we were talking about health and manufacturing earlier. Or is it primarily just white collar?
Matthew Radolec: This is a really great question, and I'm gonna give you a direct answer on that.
Low-skilled white-collar jobs, yeah, are gonna disappear.
Ashish Rajan: Oh, okay.
Matthew Radolec: So let's use note-takers as an example. Oh, yeah. And keeping a calendar, someone to answer the phones, make appointments, take notes. And there are instances already where I think the AI is better.
It doesn't sleep, doesn't yawn, doesn't get tired, doesn't drink water. It doesn't need a pause. It's sometimes better with accents. I have an international business at [00:32:00] Varonis, and obviously sometimes I have to listen a little more closely for certain accents.
The AI does a pretty good job at that, maybe even better than me with some accents. Wow. And it can do translation as well, clearly, in many languages, oftentimes live translation. Now, do I know if it's accurate or not? I'm just trusting it. But I think these are places where we used to value this.
Yeah. And I think the reality is that we valued the interpersonal relationship we had with the person delivering that work, and that's what's gonna be gone. Let's take a secretary, for instance. You could develop a lifelong relationship with a secretary.
They get you. Do you still want that? Is that still valued by society, or are you gonna value precision and timeliness over it? Because that was their role in the beginning, right? That's a real question. They're in your life, like a supportive person.
Are you gonna take the robot over that? This is the existential question about AI replacing these, again, lower-skill but still valued jobs in the workforce. These [00:33:00] clerical-type jobs.
Ashish Rajan: It's an interesting one, right? 'Cause a lot of people are nervous, but there are also a lot of people in the camp of, it'll take a long time before that happens.
Do you feel like there's a timeline already on the horizon?
Matthew Radolec: Oh, I think for those jobs, that's already happening.
Ashish Rajan: Already happening?
Matthew Radolec: Absolutely. You started this conversation, what, 10 minutes ago, telling me about how you called a restaurant.
Ashish Rajan: Oh yeah, that's right.
Yeah, I did. That job's gone, actually. To your point, that person's job is gone. And that's at a restaurant as well, correct, which is probably like the lowest common denominator.
Matthew Radolec: One of the most common things that exist. Yeah. So the question really is, does that mean fewer jobs?
I don't think so.
Ashish Rajan: No,
Matthew Radolec: I think we now have to develop the agents, secure the models.
Ashish Rajan: Yeah.
Matthew Radolec: Deploy them, and all the services around getting there with AI. The jobs are just shifting.
Ashish Rajan: Yeah. I agree. I agree.
Matthew Radolec: Until we hit this critical mass point. Again, we leave the crib, we learn how to walk with AI.
Yeah. We get to this critical mass point where it's ubiquitous, it's everywhere, it's integrated in everything. This is where I think we start to see that there [00:34:00] isn't as much of this explosion of, I need services to build AI, run AI, and secure AI. We kind of figure that out, so it levels off, it plateaus.
And then we're at a point where we've replaced jobs and we're continuously replacing jobs, but we're not adding new ones.
Ashish Rajan: Oh, yeah. We just need to go through that period.
Matthew Radolec: I think it's our whole lifetime though. Yeah. I don't think you and me will see this.
Ashish Rajan: No, yeah. It'll be a while before that happens. Hopefully.
Matthew Radolec: Hopefully I'll be retired.
Ashish Rajan: Yeah. On a yacht somewhere, hopefully. Getting a face mask on at that point. But that's most of the technical questions I have. I'd park it in Monaco, oh, that would be fun. Watching the Grand Prix or something.
Matthew Radolec: I'm dreaming here.
Ashish Rajan: Yeah. I'd love to do that: Monaco, on a cruise, watching the Grand Prix. It's a great dream. It's a great vacation as well, if anyone wants to take it. I've got three fun questions towards the tail end of this.
First question: what do you spend most of your time on when you're not running cloud incident response or traveling for all of these data security conversations? What do you spend most of your time on?
Matthew Radolec: I'll give you three things. One, [00:35:00] self-care. Oh, nice. So I'm gonna do some social media after my RSA talk.
That might include face masks, or self-grooming, or even journaling. I picked up journaling a couple of years ago because I feel like I get to experience so much in a day that if I don't write it down, I won't remember everything, or all the cool conversations or cool ideas I had.
Yeah. It's also a great way to dump your thoughts out if you're thinking really intensively about something. So let's put self-care as number one. Two, I started racing cars. I love racing cars. I'm a big fan of 911s, oh, and Porsches, wow, and racing GT3s. I just did my first track day at Spa-Francorchamps in Belgium in March.
Wow. I did 60-something laps over the course of five hours. Oh my God. Had a couple of breaks in between. I went to Porsche's racing school at Barber Motorsports Park in Birmingham, Alabama. Yeah. Had an incredible time. I'm learning how to do that. And then I'm also a precision marksman.
Okay. So I love shooting long range. Oh, nice. And if I'm not traveling the world for my business or trying to take care of myself or my mind, I do a lot of audiobooks [00:36:00] too. Oh wow. I'm probably racing cars or shooting guns.
Ashish Rajan: Oh my God. That's pretty awesome. The second question, what is something that you're proud of that is not on your social media?
Matthew Radolec: The first house I ever bought was in Silver Spring, Maryland. And it had, like, weeds and dirt for a lawn. And over the course of two years, I grew a country-club, golf-course-level lawn. Oh, wow. And took meticulous care of it. Even if I was only home once every two weeks, I was out there trimming it and fertilizing it and aerating it.
Oh, wow. And edging it. And you could have had a picnic on my yard and thought you were at Augusta National.
Ashish Rajan: Oh, wow.
Matthew Radolec: And just like growing grass, I also learned how to grow roses during that time, which are quite a temperamental flower. Okay. And so I'm very proud of that. Also, as someone that travels a lot, right?
I don't get to have a lot of my own space and stuff. Of course, I'm in hotels or car services and airplanes. And so connecting with your land, there's something really [00:37:00] special about that.
Ashish Rajan: I guess the sand and everything.
Matthew Radolec: Yeah, the dirt and the sand and it's your grass. That's right. Yeah. Yeah. I hope to get back to that someday. I am living a bit of a nomadic life right now, so I'm not really staying in any one place for too long. I hope to get to the point where I can grow grass again, and I can say I'm proud of it.
Ashish Rajan: I was gonna say Monaco only has water, so I don't know if Monaco would,
Matthew Radolec: maybe I'll grow something on my balcony
Ashish Rajan: Fair. Final question. What's your favorite cuisine or restaurant that you can share with us?
Matthew Radolec: Oh, I'm like an aspiring carnivore. Oh, it's a steak, man. Take me to a steakhouse with a ribeye with salt and butter, or like a T-bone steak. Oh, yeah. Ribeye. Oh, ribeye. Oh, fair. And it's gotta be cooked medium.
I know this is gonna spur some discussion. Oh yeah. Fair. But you need the fat to marbleize and cook out, so that what's left is just a juicy pile of protein. Oh. And throw some flaky Morton salt on top of that. Oh, nice. And I'm done, man. Call it done.
Ashish Rajan: That definitely is pretty awesome.
And what's your favorite restaurant then? Considering you travel so much.
Matthew Radolec: If I had to pick one, yeah, it would be [00:38:00] Bourbon Steak, which is at the Four Seasons. There's a couple of them, yeah. Okay. But my absolute favorite is the one at the Four Seasons in Washington, DC, on the canal in Georgetown.
Yeah. And I don't know what it is about the ambiance or just the way the tables are set up, but I just feel like you get a great steak there. You can get a great glass of red wine to go with it, and a great view of the canal. And it's like colonial, but also modern at the same time, in the heart of DC.
Ashish Rajan: Oh, wow. I'll check that out. But if people wanna talk more about any of the conversation we've had so far in terms of data security, changing landscape of AI and everything even in fact if it's an incident response in cloud AI as well. Where can people find you online and connect with you?
Yeah,
Matthew Radolec: LinkedIn is definitely the best place. Yeah. And I have a whole team that helps me support my LinkedIn. Oh, awesome. And they're standing in front of us right now while we're recording this podcast. And so that's a great place to find me.
Ashish Rajan: Yeah. Awesome. I'll put that in the show notes as well. But dude, this was a great conversation.
Thank you so much for coming in. Yeah,
Matthew Radolec: I really enjoyed it. Thank you.
Ashish Rajan: Yeah, likewise
Matthew Radolec: man. Thanks very much for watching. See you next time.
Ashish Rajan: Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more [00:39:00] episodes like these on www.cloudsecuritypodcast.tv
We are also publishing these episodes on social media, so you can definitely find them there. Oh, by the way, just in case there's interest in learning about AI cybersecurity, we also have a podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do an in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way up to what is CNAPP, or whatever new acronym comes out tomorrow.
Thank you so much for supporting, listening and watching. I'll see you next time.