AI Vulnerability Management: Why You Can't Patch a Neural Network


Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong?

In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models. Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world.

The conversation also covers practical ways to use AI *within* your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME, and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Questions asked:
00:00 Introduction
02:00 Who is Sapna Paul?
02:40 What is Vulnerability Management in the Age of AI?
05:00 Defining the New Asset: Neural Networks & Models
07:00 The 3 Layers of AI Vulnerability (Production, Data, Behavior)
10:20 Updating the Risk Register for AI Business Risks
13:30 Compliance vs. Innovation: Preventing AI from Going Rogue
18:20 Using AI to Solve Vulnerability Alert Fatigue
23:00 Skills Required for Future VM Professionals
25:40 Measuring AI Adoption in Security Teams
29:20 Key Frameworks: NIST AI RMF & EU AI Act
31:30 Tools for AI Security: Counterfit, SHAP, and LIME
33:30 Where to Start: Learning & Persona-Based Prompts
38:30 Fun Questions: Painting, Mentoring, and Vegan Ramen

Sapna Paul: [00:00:00] You find the flaw, you patch it, and you verify the fix. Yeah, it's as simple as that. What would you do if an AI model has learned something wrong? There's no Patch Tuesday; you can't just go there and patch the vulnerabilities. A model is giving you an outcome which is technically correct but ethically wrong. If compliance is not there,

AI can become rogue. The asset is a neural network. The asset is a model. We have not done or assessed that sort of asset before.

Ashish Rajan: If you work in vulnerability management, you have probably already dealt with cloud, containers, and a lot of other complexity, but you probably were not ready for AI. In today's conversation with Sapna Paul, who is a senior manager running vulnerability management programs at a company called Dayforce,

we're talking about how AI is changing the way she approaches vulnerability management, the talent she's hiring for this AI-first world we are all moving towards, whether you should build or buy your AI solution for this vulnerability management world, the new stakeholders you'll be dealing with, all that, and a lot more in this conversation with Sapna.

Now, if you're someone who's working in vulnerability [00:01:00] management or building a vulnerability management program for the AI assets that are growing in your organization, then this is the episode for you, and if you know someone who is, definitely share this with them. And as always, if you are here for a second or third time and have been enjoying Cloud Security Podcast episodes, I would really appreciate it if you could take a quick second to hit that follow or subscribe button on whichever platform you're listening or following us on.

We are on Spotify, Apple, LinkedIn, and YouTube; it helps us grow and get more amazing guests. As always, thank you so much for tuning in. I hope you enjoy this episode. I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I've got Sapna with me. Hey Sapna, thanks for coming on the show.

Sapna Paul: Hi. Hi Ashish. Thanks for having me.

Ashish Rajan: Maybe to kick this off, would you mind sharing a bit about yourself, what you've been up to, and what's your background?

Sapna Paul: Yeah, I'm a senior manager at Dayforce. It's an HCM SaaS-based company. I lead the vulnerability management team within cybersecurity in this organization.

Before that, I gained a bunch of cybersecurity experience at other product-based SaaS [00:02:00] companies as well as in financial institutions. And I've been on the other side, where I've done software development and DevSecOps as well. So that's a little bit about myself. I live in Austin.

I have my family here. Yeah, that's it.

Ashish Rajan: I was gonna say, since you've come from the development side and you've moved into this vulnerability management world, the whole idea of AI is top of mind for a lot of people. But maybe before we even jump into this, I have to assume that some people may not even understand what vulnerability management does.

What is vulnerability management, for people who are from other parts of cybersecurity or technology?

Sapna Paul: Yeah, thank you for asking that, because it's important to level-set the concepts of vulnerability management. And it's not difficult. Let's just take a step back, right?

So it's based on a simple premise, and what is that? You find the flaw, you patch it, and you verify the fix. That's as simple as [00:03:00] that. But as you mentioned, AI, right? Things are changing in this era. It's not as simple as just finding a flaw and fixing it, because what would you do if an AI model has learned something wrong?

You can't patch it. There are no Patch Tuesdays, right? Yeah. You can't just go there and patch the vulnerabilities Microsoft is releasing, and that's it. No, you have to take a step back, detect what it's doing, what weaknesses and flaws the model has, and then retrain. You know, that's the key word here: for every model, if a vulnerability or a flaw is identified, you have to retrain.

And that takes so many cycles of compute and weeks of testing, revalidation, and redeployment. So it's so important to understand vulnerability management for AI from this angle: it's actually evolving. It's not point-in-time, first of [00:04:00] all.

Ashish Rajan: Yeah.

Sapna Paul: Continuous monitoring and continuous retraining. So if I just put it in one line, it's this: you don't scan and patch anymore. You observe, you detect anomalies, and you retrain the model. So that's how the vulnerability management space is evolving with AI.
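To make that one-liner concrete, here's a minimal sketch of an observe-detect-retrain loop; the drift test (SciPy's two-sample KS test on prediction scores) and every name and threshold here are illustrative assumptions, not the specific tooling Sapna describes:

```python
# Minimal sketch: observe -> detect anomalies -> retrain.
# Hypothetical threshold; drift detected via SciPy's two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(baseline_scores: np.ndarray,
                     live_scores: np.ndarray,
                     alpha: float = 0.01) -> bool:
    """Flag the model for retraining when live prediction scores
    drift significantly away from the validation baseline."""
    _stat, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha  # distributions differ -> investigate and retrain

# Usage: compare this week's production scores against the baseline.
baseline = np.random.beta(2, 5, size=10_000)  # stand-in for validation scores
live = np.random.beta(2, 3, size=10_000)      # stand-in for production scores
if needs_retraining(baseline, live):
    print("Drift detected: queue the model for revalidation and retraining.")
```

The point of the sketch is the shape of the loop: continuous observation feeds a detection gate, and the gate's output is a retraining decision rather than a patch.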

Ashish Rajan: And to your point, the observability part.

'Cause I guess, to what you said, a lot of that had always been about asset management, about what kind of asset you have. And an asset usually used to be stationary unless humans or programs were changing it. So is the asset itself evolving with AI now?

Sapna Paul: Yeah, absolutely. It's not the traditional assets anymore.

The asset is a neural network. The asset is a model, right? Yeah. It's an AI, artificial intelligence. We have not done or assessed that sort of asset before in a typical, traditional vulnerability management or cybersecurity space. Yeah. Like we [00:05:00] have analyzed data, complex data models.

Yeah. But those were deterministic, you know, rule-based, and in this boom of data that has happened, security was built into those processes. But now, in this era with AI, it's an intelligent system. It's learning on its own. It's a neural network of its own.

So there are billions of parameters, billions of different things you have to think about when you are doing vulnerability management in this space. So the asset has definitely evolved, and it is still evolving. Tomorrow it's gonna be something else. But to your point, we can't keep a typical asset inventory in this era.

We have to think outside the box about how this asset behaves and revolutionize our vulnerability management in the cybersecurity space accordingly. [00:06:00]

Ashish Rajan: So is asset management evolving as a concept as well? To your point, your assets are now almost like learning models; their behavior changes.

So how would you define vulnerability now? Are we moving to a world where there's AI vulnerability and then there's the regular vulnerability that we used to care about before?

Sapna Paul: Not really. I don't wanna, you know, throw away the basics.

Right. The basic definition of a vulnerability is the same. What is it? Yeah. It's a flaw, a weakness of a particular asset. And that asset could be a neural network. That asset could be an AI. Yeah. And that flaw can be leveraged by an adversary against it. It's the same definition. But the solution to that definition, or the approach to that problem, is different.

That's what I think security teams need to learn: how that solution is different. Let me take a step back and also explain how this is different, right? Like with AI, yeah, there are three layers in my mind that we have to think about when it comes to vulnerability management.

The first layer: you have to keep analyzing the production models that are running, that are doing something, that are solving a use case, and how an adversary can impact the working of that model. So red teaming and, you know, out-of-the-box security testing of these models is so important.

Yeah. The second layer in my mind is the data layer, which is so, so important with AI, because the vulnerabilities that can make their way into the model come through data, through the training that is happening to the left of production.

Ashish Rajan: Right.

Sapna Paul: So poisoning, and the [00:08:00] bias that an AI can learn, are on the left side.

So it's so important to understand what data is going in, and the controls that need to be put around that data layer are so important. And third, in my mind, is behavioral: how the model is behaving. So take this as an example. A model is giving you an outcome which is technically correct but ethically wrong.

Ashish Rajan: Yeah, yeah.

Sapna Paul: You know, how would you solve that? How would you explain that? How would you then retrain? So these are the three layers we have to keep in mind, so that vulnerabilities are identified in every layer. And it's again the same concept, defense in depth, applied to understand the depth of the AI.

It's not just about production or how this thing is behaving at the end. And then, I know this is a [00:09:00] cliché in the security world, but it is 10x more important now: shift left. The shift-left principle is more important than ever, because it's all about how you're training the model.

Ashish Rajan: Yeah. Yep. I love the three perspectives you shared, especially the behavior one, and I love the example of the ethics behind the output rather than the output simply being correct or wrong. Because I guess traditionally security has always been about, is there a SQL injection vulnerability, yes or no? And then you move forward. But that used to be a quote-unquote technical vulnerability.

Right. It sounds like AI vulnerability is a lot more complex. You've mentioned behavior. Mm-hmm. You mentioned the fact that now the responses could be different, but at the same time, I may be looking at bad training data to begin with as well. Someone has poisoned my data.

Sapna Paul: Yeah.

Ashish Rajan: So there's a lot more complexity to this, 'cause there's a huge risk aspect here as well.

And as a person who's been working in vulnerability [00:10:00] management, there's obviously the risk register we have loved and managed for the entirety of our careers. Yeah. Do you feel there should be a separate one? 'Cause I feel like the technical risks that we have traditionally had in a risk register do not match the three that you mentioned, which I would consider need a new risk register of their own kind.

Does it need to be a separate one, or do you see yourself thinking that maybe it makes sense to combine the two?

Sapna Paul: It is. I do feel we cannot, I am a big advocate of not throwing away what we have learned as a security community, right? It's really important that we go back to our basics, because a risk register is all about business risk.

Yeah. When you put anything in a risk register, it's about how it affects the business. If AI is generating revenue, generating content recommendations, marketing campaigns, sales, making high-stakes decisions like loan denials and stuff like [00:11:00] that, you have to make a risk register that talks in the business language. So whether you make a new risk register or you use the existing one,

I think it's a principle across the board: you have to make the business understand what risk AI is bringing to it. Yeah. Right. At the end of the day, every risk is just that. If you cannot tie it back to the business or whatever your company is going for, it's just not a useful risk register.

So you can either combine the AI risks into it or keep them separate, but you have to tie it back to the business goals, strategic goals, or BAU goals that your organization has. Yeah. And yeah, you have to keep different line items for AI. Yes. Because they're different.

Like, the risk scenario is different: a model is [00:12:00] discriminating against certain demographics. It's a risk. Why? Because the training happened on certain, you know, data sets. Yeah. I'm just taking an example of a particular risk. That means you're losing the market somewhere else; in some country you're losing it.

So yeah, that's just one example, and you can keep expanding from there.
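As a purely illustrative sketch of what an AI line item alongside traditional entries might look like (the field names and values are assumptions for the example, not a standard and not Dayforce's actual register):

```python
# Hypothetical AI risk-register entry, phrased as business risk.
ai_risk_entry = {
    "risk_id": "AI-RSK-042",                     # made-up identifier
    "layer": "data",                             # production / data / behavioral
    "scenario": ("Loan-approval model discriminates against a demographic "
                 "because the training data set was skewed"),
    "business_impact": ("Lost market share in affected regions; "
                        "regulatory exposure under the EU AI Act"),
    "treatment": ("Bias-testing gate in the ML pipeline; "
                  "retrain on a rebalanced data set"),
    "owner": "Head of Data Science",             # accountable business owner
}
print(ai_risk_entry["scenario"])
```

Note how the scenario and impact fields speak business language (market share, regulatory exposure) rather than model internals, which is exactly the tie-back she argues for.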

Ashish Rajan: Yeah. And I guess, is the day-to-day life of a vulnerability management person changing too? Because there's a lot of interaction with the risk people in the team, and a lot of interaction with the people who are actually trying to remediate the vulnerabilities. On one side you have to keep tabs on whether your new AI assets or existing assets are quote-unquote patched or not patched, and if they haven't been patched, why haven't they been patched, and what does that mean for the risk? Yeah, because vulnerability management is an interesting space in organizations: you're the bridge between the two sides, the technical side and the business risk side.

Yeah. Is that also changing with AI? And if it [00:13:00] is, how do you see that tension being maintained? Because on one side, I imagine, to what you said, there's a constantly learning model, but on the other side, the business is like, oh no, you need to manage the risk. How do you do the balancing act of compliance and innovation?

Sapna Paul: Compliance and innovation, right? They go hand in hand. I feel they complement each other. Compliance allows you to put guardrails on your AI. If compliance is not there, if risk assessment is not there, AI can become rogue, you know?

Mm-hmm. And we don't need that: AI that could work completely against your business and innovation. So the business needs to understand that risk and compliance are enablers. Enablers. Yep. They're not blockers. So that understanding, that mindset, needs to be there. And I think that is also relevant to other innovations, not just AI.

It's the same thing; it's just 10x more [00:14:00] important now to see them as enablers. And I wanna talk about certain solutions here, right? Like, what are some of the things that AI teams and data teams can do to put those guardrails in? First of all, let me talk about this:

you have to talk in the language that data people understand. Mm-hmm. So if you go and say, okay, your model has this security vulnerability, they'll be like, what? No. But if you go and say, your model is going to be 30% less effective because an adversary can do something to it and change its scores,

yeah, now you can sit with that person and make them understand why you're coming from that lens. So that's the first thing: talk in the language that they understand. Right. And that is why my job is so interesting, that I am able to bridge that [00:15:00] gap, to make them understand.

And for that, we need to upskill as well. Security and compliance people need to upskill on AI concepts and knowledge, and hence there's a lot of emphasis on upskilling in the AI field, so you can talk in their language. So that's one. And then secondly, there are different embedded controls that you can put in your AI and ML pipelines.

As I was talking about data, that's why the compliance and innovation pieces need to go hand in hand, because you can't treat compliance or security as an afterthought. Yeah. You have to shift left. So from the moment you have started your pipeline, from the moment the data comes into your organization, it has to be within certain guardrails. Access control, very important: who is accessing that data? Are we maintaining audit trails and logs of all of that? Integrity testing, bias testing, all of that data-layer control [00:16:00] needs to be there. And then, for whoever is working on it, this all goes back to the way we develop our DevOps pipelines, right?

Version control: you have to start from the beginning. You can't just start coding and say, oh, we've developed a great AI, but who's controlling the version of it, right? Is it being controlled in something like Git? Git does work for Python models, but there are ML-specific kits as well, right?

Yeah. So use something like that to start governance, and I think that's the key word when we talk about compliance and innovation. You can't really have a perfect system in ML. Let's say one vulnerability is unexplainability, right? An ML model cannot just explain how it is reaching that particular outcome.

So you won't have perfect explainability, but what you can have is good AI governance that will take care of these [00:17:00] things I just laid out, where it controls the ML pipelines, the security gates, and production. All of these, if governed properly

and audited properly, can become a big enabler for people who are trying to get AI models out the door. Right. So yeah, that's gonna go hand in hand.
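To ground those data-layer guardrails, here's a minimal sketch, assuming a file-based training set, of the kind of integrity check plus audit trail she describes; the paths, log schema, and approval flow are assumptions for the example, not a real pipeline:

```python
# Minimal sketch: dataset integrity check plus an append-only audit trail.
# Paths and log schema are illustrative assumptions.
import hashlib
import json
import getpass
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Fingerprint the training data so tampering is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_and_log(path: str, expected_digest: str, audit_log: str) -> None:
    digest = sha256_of(path)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),           # who touched the data
        "dataset": path,
        "digest": digest,
        "matches_approved": digest == expected_digest,
    }
    with open(audit_log, "a") as log:
        log.write(json.dumps(entry) + "\n")  # append-only audit trail
    if not entry["matches_approved"]:
        raise RuntimeError(f"{path} changed since approval; halt training.")

# verify_and_log("train.csv", APPROVED_DIGEST, "data_audit.jsonl")
```

A gate like this, run before every training job, is one small concrete form of the "shift left" control she keeps returning to.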

Ashish Rajan: So a couple of points I took from there, to your point about the evolving nature of this, make it quite fascinating. Because on one side there is a whole conversation of

build versus buy for this as well. Yeah. On one side you may look at this as, hey, all of us have now got Copilot or ChatGPT or whatever access. And the way people traditionally worked in vulnerability management would not have involved, I guess, the use of these tools to even understand the context of AI.

So, to what you said earlier, [00:18:00] that commentary around a 30% drop in performance is like, oh yeah, I definitely don't want that to be happening. To a large extent, we can use AI to build that understanding of your new stakeholders. I guess I'm curious, have you seen AI being used in the vulnerability management vertical?

And if yes, how is it being used? Is it being used to help the person doing the vulnerability management, or is it the other way around? Where do you see AI being used here?

Sapna Paul: Multiple facets, Ashish. There are multiple facets where we can use AI as vulnerability professionals. And I'll take some examples that I have run in the past and am doing in my current company as well.

I'm doing it, first and the foremost, like how this is a very like tri like. Problem that we, A nice to have problem alert fatigue, right? Yeah. It's still there. It's still there. And I think the most, this is like just to take a step back and look at the whole workflow where it [00:19:00] can help, where AI can help reduce the noise of the vulnerabilities.

Now, vulnerabilities are increasing because of ai, but AI can also help you maneuver through those vulnerabilities to the most critical. Vulnerabilities and the risks of the company, right? So, um, one of the one facet is use AI in your workflow. If you have not done that, start thinking and start planning for it.

'cause it needs a lot of planning. Um, you can't just use. Public, models and start giving vulnerability data to it. No. That is not gonna help you. That's just gonna increase more risk for you. In a more contained environment, how can you take the model, feed the data it needs to, and then see what, what, you know, outcomes you can get from the model.

And see if that can help prioritize even farther than you are [00:20:00] prioritizing today. Oh wow. Right, right. Yeah. So, um, yeah, and, and you would be amazed to see what it can do. It's not, it could be deterministic, it could be rule-based. You can have boundaries associated with it. You have to work on the prompt engineering part of it.

Definitely. That what would be the prompt? And you have to put some boundaries and stuff like that. So once that's settled, you know, just see what AI can do. You can. You can, plug AI to your existing workflow systems like, you know, ServiceNow or Jira, and they have their own ais as well. So each tool has their own AI too.

And these companies are doing great job in giving that functionality to the users. So use that to, to lessen and lessen the burden of vulnerabilities on our stakeholders. Because they are. They are, they have the pressure of releasing ai Yeah. For the business, plus pressure of, of bridging of, [00:21:00] uh, reducing the security gaps.
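As a sketch of the deterministic, rule-based boundaries she mentions, here's one way a pre-filter could cut alert noise before a contained model (or an analyst) ever sees the data; the weights, threshold, and CVE IDs are illustrative placeholders, not a recommended scoring standard:

```python
# Minimal sketch (illustrative, hypothetical weights): a rule-based
# pre-filter that cuts alert noise before any AI step sees the data.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float              # base severity, 0-10
    internet_facing: bool    # exposure of the affected asset
    exploit_available: bool  # known public exploit

def priority_score(f: Finding) -> float:
    """Blend severity with business context so only the riskiest
    findings reach an analyst or a contained AI triage step."""
    score = f.cvss
    if f.internet_facing:
        score += 3.0
    if f.exploit_available:
        score += 2.0
    return score

findings = [
    Finding("CVE-2024-0001", 9.8, True, True),    # placeholder CVEs
    Finding("CVE-2024-0002", 5.4, False, False),
]
urgent = sorted((f for f in findings if priority_score(f) >= 10),
                key=priority_score, reverse=True)
print([f.cve for f in urgent])  # -> ['CVE-2024-0001']
```

The deterministic gate keeps the boundaries auditable; whatever AI sits behind it only ever sees the shortlist, never the raw firehose.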

So that's one facet. Other than that, it's improving efficiency, operational efficiency in within security. Your analyst hours can be saved if you just give him or heard the, you know, the magic of ai. And how all that can help reduce and focus that person to the quality con, the quality work rather than coordination, facilitation, all that manual work.

A person can really focus on vulnerability research. Then figuring out where to get this data, where to get this one, and, you know, all those traditional, uh, workflows can just die. You know, they don't they don't really exist in the coming years. Yeah. The data would be like on the plate for you to investigate to quality assess.

So it's gonna be a very interesting, that's a second facet of improving the operational efficiency within security with ai. These are some of the things that really help [00:22:00] security. Teams. Yeah. To think outside the box with ai,

Ashish Rajan: Do you find that the skillset for a vulnerability management team member

is different moving forward with AI? 'Cause to your point, there's the use of AI for understanding the different facets. And we haven't even touched on the cloud part, where vulnerability management spans not just your SaaS services but also cloud, multi-cloud; most companies are multi-cloud these days.

Yeah. So do you find that the skill requirement is different from before? Because I'm also thinking about people who are trying to build a vulnerability management program in 2026 and after. Mm-hmm. Where, to what you said, assets are now quote-unquote AI-enabled, our SaaS services are AI-enabled. For the people who are gonna join vulnerability management teams, I mean, what skills do you think they should have

to be able to work effectively, to use [00:23:00] AI the way you are saying? Do you see that as a quality you would want in your future team members?

Sapna Paul: Yeah, great question. I think that's gonna help future leaders and professionals think about this. First of all, you just have to have the knack to learn AI, how it works, because if you don't understand what an asset does, at the end of the day, even for traditional vulnerability management, you won't understand the vulnerability,

yeah, and how it can become a business risk. So you have to understand the concepts of AI: how it works, what neural networks are, how it learns, how it trains, and what the data is. You don't have to go deep into the algorithms; then you're becoming a complete data scientist and ML engineer. That's not the point. The point is to understand the concepts. It's like, you know, even as a vulnerability or risk manager, right, you have to have a good [00:24:00] acumen of security across the domain. The same thing applies here. You need to have acumen of AI and its life cycle. Mm. So you can understand the vulnerabilities in the different stages of AI.

Right. And to get to that part, you need to understand how an AI works and how a model is generated. So future professionals have to learn it. It's a mandate for everybody to learn AI. Yeah. On top of your traditional vulnerability management skills, you have to know AI.

Mm. So these are the two things that are going to be in demand for future professionals. Yeah.

Ashish Rajan: Sorry, I was gonna say, I think you touched on something interesting, because a lot of leaders like yourself have a directive to increase AI adoption as well. How do you measure AI adoption? 'Cause I'm not saying [00:25:00] any field in technology is easy; each field comes with its complexity, and AI is even more complex. So what's your thinking behind it?

There's obviously the solution of, I can just buy a product and call it a day, and that was my AI. But there's a whole other side of, oh, you could build, you could do a lot more things with it.

Yeah. So I don't know if you've given both approaches a thought, and if you have, how would you measure the adoption or maturity of it?

Sapna Paul: Yes. And I think our company is doing a great job in terms of adoption, because it is really thinking it through, not just buying a tool, switching it on, and then that's our AI, right? We're going a very different way: adoption through training.

So there are a lot of metrics that we as leaders are measuring today around training and knowledge: certificates, courses that people are doing. And a lot of companies are giving something like the ChatGPT Enterprise [00:26:00] version to employees.

How much are they using it? The usage metrics: are they using it on a daily basis? These are some of the metrics that are obviously on leaders' minds, and there is going to be some tracking, some data monitoring, around it.

Yeah. So that's definitely happening. Plus, on the other side that you just mentioned, yes, there are tools out there that can help you elevate your AI journey. How much are you thinking outside the box about bringing that AI into your workflows and then doing the research around it? So it's actually both.

There is the training aspect of it that leaders and management are measuring through the platform that's been provided to employees. And people have the freedom to also do their own research. If ChatGPT is in the organization, nobody's forbidding you to use Claude or Gemini. Yeah, don't

put in the company data, but you can [00:27:00] think outside the box and see how different AIs work for you. Yeah. And these training platforms do a really great job delineating what different platforms are good at, like one is good at research, one is good at writing. I'm really impressed with how they delineate these things across the AI platforms. So just have the knack to learn,

because these metrics will be, and are today being, monitored by management. And then the second aspect is the AI that you bring in to help ease your workflows, to help the company, to develop the program, stuff like that. I understand this is different for different industry domains. If I talk about regulated domains, and based on my background, those are risk-averse organizations, right? They won't just bring in AI and start using it. [00:28:00]

So that aspect is not gonna be appreciated in a regulated setting, because there's high risk involved in feeding data into the AI. But for low-stakes decisions, for improving your own brand and upskilling yourself, those are low-stakes decisions where you can definitely use AI and start building your acumen.

So yeah, that's my true sense, and that's happening right now, Ashish. People are going to look at you and start to monitor how much you are using and adopting AI in your own life.

Ashish Rajan: Yeah, I definitely find that, what's the word for it, the job requirement is also evolving for most of these roles. And to your point, it's not just the people doing the groundwork; leaders are also being asked, what's your AI adoption like?

Yeah. Are there any frameworks that you have found, especially around AI security and governance, those kinds [00:29:00] of things, any frameworks or controls relevant to vulnerability management that have been helpful for you in this AI world?

Sapna Paul: Yeah, I'm in the US, so the main framework I go to is the NIST AI RMF.

It's the go-to; it's becoming a de facto standard for US companies. And it is actually comprehensive, in the sense that its horizons are wider. It's not gonna go deep into the controls that you need to put in; it's not that technical.

And that's how every NIST framework is, right? It's not gonna be as technical as, you need to put this guardrail here or things are gonna get worse. It just tells you, at a high level, the principles that an AI system should have. That is covered by the NIST AI RMF.

The other one [00:30:00] is the EU AI Act. You know, it's becoming law that you have to abide by, with rules about explainability, auditability, traceability, data lineage, stuff like that. So that's another one which I have been closer to. These two frameworks are the ones I really go to when I'm asked about a framework, and they really help.

From them come the different vulnerabilities that we can tie to these AI systems, yeah: fairness, bias, poisoning, jailbreaking, prompt injection. All these different vulnerabilities can be tied back to the framework, and there are many companies trying to put this together:

the vulnerabilities, the weaknesses identified, and, you know, the compliance aspect of it. So you get a full, centralized picture of where your AI is standing and where it is lacking. I've seen a lot of companies doing that. But you can also use, [00:31:00] and here I'm gonna talk a little bit about the embedded controls part that I mentioned,

a lot of Python open-source libraries within your ML pipelines. You know, Microsoft has Counterfit, and there's, Greca, I don't remember if Greca is the exact name, but there are open-source tools you can put in; they can red-team for you. There are good GitHub repositories for ML as well, right? So you can use those. These are all open source. And then, to explain what an AI is doing, there are tools like SHAP and LIME, which are really good. Oh, right. Yeah. They're really catching a lot of regulatory eyes, because they explain how a model reached a particular conclusion, and that is a big step in the right direction,

because regulators, like under the EU AI Act, demand explainability. A lack of explainability [00:32:00] is in itself a vulnerability, and a lack of traceability is itself a vulnerability. So you need these things to be embedded into your pipelines, into your flows, and then tied back to compliance, right? To get that full holistic picture.

Yeah. And you'll be in good shape. Yeah.
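For a taste of the explainability tooling she names, here's a minimal SHAP sketch; the model and synthetic data are made up for the example, and the shape of the returned values varies across SHAP versions:

```python
# Minimal sketch (illustrative): explaining model output with SHAP.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # explainer for tree models
shap_values = explainer.shap_values(X[:1])   # per-feature contributions
print(shap_values)  # how each feature pushed this one prediction
```

LIME works similarly but perturbs inputs around a single prediction; either way, the per-feature attributions are what give auditors a story for how the model reached a conclusion.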

Ashish Rajan: I guess, now that we are moving more into this world of AI workloads and AI assets, and teams are also using AI for vulnerability management to kind of bridge the gap, shifting left, I'm curious about a starting point.

'Cause a lot of people may have an established vulnerability management program they've been running for years. Where do you even start doing this? As we touched on with cloud before, there's containers, Kubernetes; the list of technology complexity in any company goes on.

Yeah. Where can people start, either on the AI adoption piece or on managing the risk of AI? What's a good place to start, and what's probably a bad place to start?

Sapna Paul: There's no bad place to start.

You know, you can start anywhere. And then I think on the adoption piece, there's a lot out there.

I think you just have to figure out what resonates with you: which platform, which tool, which AI vendor resonates with you. You have to find that, and it's high time that you find that. I'm not trying to create pressure, but there is a lot of noise out there, and people are

going crazy over the new developments in AI. I don't want anybody to be in a state of, okay, I've missed the ship, or the train is gone. Yeah, you can still be on the train. But if you are coming [00:34:00] to board the train of AI, come prepared: know what you need to get out of this train, know what benefits you need.

On the adoption piece, you should just hash out the learning platforms. There's so much out there. Just pick one and start. You know, this is what I've done: start with one video and just make a note that you have to watch one video every week on AI or whatever. For me, I'm trying to build a brand, right?

So I go out there and just watch AI on PR and communications. It's not related to my job, but I love it, because it has me thinking outside the box about how I can use AI to build my brand. Right? So just know what you need to do and just get on the train.

So that's the adoption piece, right? And then on the usage piece, I'm gonna be very basic here: every [00:35:00] company is using a cloud provider, right? Your platforms, your tools, your applications are on AWS, Azure, or GCP.

And every cloud provider has an AI platform. So you have access to their platform, right? And you don't have to set up infrastructure or anything in the back. You can just go to your cloud service provider's platform, start using AWS Bedrock or Azure AI Foundry, and start writing prompts about what would help you in your job.

First of all, you will be amazed to see the results of that. Secondly, use the personas that you deal with in your day-to-day professional life. For example, a vulnerability management person deals with a cloud engineer, a systems engineer, an analyst who is trying to patch a bunch of servers, and [00:36:00] whatnot, right?

Put yourself in those shoes. Write a prompt like: if I am a systems engineer and I need to patch these servers in this weekend's cycle, what should I focus on? And put some boundaries. Just make a simple prompt, and then you will see, okay, yes, if I give this AI to a person working on the cloud side, he will be thanking me because I'm easing his job so much, right?

Yeah. And these prompts will improve over time. You will just start with a basic prompt first; it will improve over time. And once you put it in your users' hands, you will be amazed to see how they can build such great prompts. You just have to give that functionality, that flexibility, to that person to start using AI to ease their job.

So these are some of the things I would say, and this is something we've done in our own teams: user-persona-based [00:37:00] AI in Teams, right? So in Teams we have said, okay, use an AI bot and start talking to it if you're going through a patching cycle. And you'll be amazed to see, it has so much data; it can learn through the data it has been fed. The compensating controls are there, the actual vulnerability is there, and if you can't patch something, you don't have to stop at, okay, I can't do it. You need to understand what the controls are. You know, if this is a critical vulnerability, there are controls that you can put in place, and AI can give you those suggestions.

The compensating controls are there, you know, the actual vulnerability is there, and if you can't patch something, then you don't have to be, uh, okay, I can't do it. But you need to understand what are the controls. You know, if this is a critical vulnerability, there are controls that you can put and AI can give you those suggestions.

Yeah, so these are some of the things I would, I would think in vulnerability management space that a person could just. Start doing.

Ashish Rajan: Yeah. Fair. No, thank you for sharing that as well. I mean, those are most of the technical questions I had. I've got three fun questions for you as well. Okay. The first one being: what do you spend most of your time on when not working on solving vulnerability management problems with AI?

Sapna Paul: What am I, what do I do? Okay, I spend a lot of time... Yeah. [00:38:00]

Ashish Rajan: What's your thing outside of solving vulnerability management? Professionally, personally, whatever direction you want to go.

Sapna Paul: Professionally, I would say I like to be a good mentor and a coach to people, so I keep improving that skill in me. Outside of the AI work: how can I coach better? How can I mentor? How can I help somebody grow in their career, or even personally? So I do a lot of coaching myself to coach better.

So that's something outside vulnerability management. And the rest of the time, I spend a lot of time with my daughters. You can see some of the paintings on the wall behind me. Yeah. So I do a lot of painting with them and, you know, just spend whatever time I can outside my job with my daughters.

They're at a very tender, growing age, so a lot of time with them does help me de-stress.

Ashish Rajan: That's awesome. Thank you for sharing that. Second question: what is something that you're [00:39:00] proud of that is not on your social media?

Sapna Paul: I'm not too much on my social media.

Ashish Rajan: Oh, so everything then.

Sapna Paul: Yeah. Yeah, I'm a, I'm a, yeah, I think a lot. I don't, I, yeah, I'm a very shy person, uh, in social media. I, I don't, I'm not so active. Uh, I've been sharing a lot of, uh, you know, what I'm learning in my vulnerability management and, and job. I do share on LinkedIn. But other than that, I'm proud of. You know where I've come.

Ashish Rajan: Yeah.

Sapna Paul: From where I was. So, just proud of what I've achieved.

Ashish Rajan: Yeah. Thank you for sharing that. And the final question: what's your favorite cuisine or restaurant that you can share with us?

Sapna Paul: Okay. I love Japanese, ramen, a vegan ramen place. I'm a vegetarian, so a vegan ramen place would be my go-to comfort.

There was one in San Francisco; it was very close to my office. I don't remember its name anymore. [00:40:00] It's been a long time since I moved away.

Ashish Rajan: A place in San Francisco. Well, thank you for sharing that; Japanese is your favorite cuisine, then. I appreciate all the insights you shared on vulnerability management and how AI is impacting it.

Where can people find you and connect with you to, I guess, learn more about the space and how you're working on this particular problem?

Sapna Paul: Yes, absolutely. I'm approachable on LinkedIn; that's my go-to place to connect with professionals. I think I have all my information there, if anybody wants to reach out and talk to me.

Ashish Rajan: I will put that in the show notes as well.

Thank you so much for coming on the show, and I look forward to more conversations with you. Thank you so much for doing this, and thank you, everyone, for tuning in as well.

I hope you enjoyed the conversation. I'll see you in the next one. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on our website, Cloud Security Podcast TV, or on platforms like YouTube, LinkedIn, Apple Podcasts, and Spotify.

In case you are interested in learning about AI security [00:41:00] as well, do check out our sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you top news and insights from all the experts we talk to at Cloud Security Podcast,

you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode. Peace.
