Is AI making application security easier or harder? We spoke to Amit Chita, Field CTO at Mend.io, the rise of AI agents in the Software Development Lifecycle (SDLC) presents a unique opportunity for security teams to be stricter than ever before. As developers increasingly use AI agents and integrate LLMs into applications, the attack surface is evolving in ways traditional security can't handle. The only way forward is a Zero Trust approach to your own AI models
Join Ashish Rajan and Amit Chita as they discuss the new threats introduced by AI and how to build a resilient security program for this new era.Questions asked:
00:00 Intro: The New Era of AI-Powered AppSec
03:10 Meet Amit Chita: From Founder to Field CTO at Mend.io
03:47 Defining AI-Powered Applications in 2025
05:02 AI-Native vs. AI-Powered: What's the Real Difference?
06:05 How AI is Radically Changing the SDLC: Speed, Scale, and Stricter Security
16:30 The Hidden Risk: Navigating AI Model & Data Licensing Chaos
20:50 SMB vs. Enterprise: Why Their AI Security Problems Are Different
23:00 Why Traditional Security Testing Fails Against AI Threats
26:03 Do You Need to Update Your Entire Security Program for AI?
29:14 The New DevSecOps: Keeping Developers Happy in the Age of AI
31:26 Real AI Threats: Malicious Packages & Indirect Prompt Injection
35:16 Is Regulation Coming for AI? A Look at the Current Landscape
38:00 The AI Security Toolbox: To Build or To Buy?
41:41 Fun Questions: Amit’s Proudest Moment & Favorite Restaurant
--------------------------------------------------------------------------------📱Cloud Security Podcast Social Media📱_____________________________________
🛜 Website: https://cloudsecuritypodcast.tv/
🧑🏾💻 Cloud Security Bootcamp - https://www.cloudsecuritybootcamp.com/
✉️ Cloud Security Newsletter - https://www.cloudsecuritynewsletter.com/
Twitter: / cloudsecpod
LinkedIn: / cloud-security-podcast
Ashish Rajan: [00:00:00] We were talking about the whole AI powered applications, and I think that seems to be the team of. 2025. How do you define AI powered applications?
Amit Chida: Is any software program that behind the scene is using AI? We have to remember that five years ago people said AI and they meant totally different algorithms.
Ashish Rajan: So what is AI native then?
Amit Chida: AI is becoming the value for the customer and not just a side effect. When we say AI native, we mean all these softwares that are built around AI and not AI added around the product. Code is shipped much faster. You have an AI agent writing a full 10,000 lines of code in a few minutes.
Attackers are faster. You need to defend faster. This opens a new world of attacks.
Ashish Rajan: What are some of the new kinds of security challenges that are being seen.
Amit Chida: Your model can read a website on the internet during operation and decide that it's wanna hack your organization because someone convinced it.
You can't have 100% coverage. What if I gave to my [00:01:00] AI access to sensitive information? Can I really trust that? It won't tell you to my customers. The answer is probably not.
Ashish Rajan: Is there a regulation around this coming up,
Amit Chida: there's nothing major. We mainly see the licensing issue. Now you have two kinds of licenses.
You have the model license, and then you have the data license. Don't trust the AI models that you develop in the organization. Take a zero trust perspective on it. Assume that any AI based software is gonna be breached, and then try to understand how do you find it the fastest and how do you minimize the impact when it happens?
Ashish Rajan: If you have been on both sides of building product security, whether it's cloud security, apps, security, you probably are noticing AI is changing a lot of that. Software development lifecycle is not the same anymore. It's evolved to have a lot more AI capability or AI powered as people want to call it sometimes.
So in this episode, I had Amit Chita from Mend.io who spoke about the AI powered application world and how that's changing the SDLC as we know it. Now, the traditional challenges for AppSec [00:02:00] have not really gone away, but for people like us who have been on both sides of the fence, on both the, Hey, I'm doing AppSec and CloudSec for the company.
We're probably are noticing the fact that now you have issues around not just open source libraries, but you also have licensing issue with models that you have to consider. You have to consider AI security specific threats that come into the play. All that and a lot more in this episode with Amit. If you find this valuable and if other people who are working in this SDLC world for what AI security threats are evolving and where this would go, where you could potentially have agents doing the security bidding for you, definitely share this episode with them.
And as always, if you have been watching the Listening Cloud Security podcast episodes for a while. I would really appreciate if you can take a second to give some support by dropping a follow or subscribe on Spotify Apple, if that's what you're listening to us on, or subscribe or follow on YouTube and LinkedIn where you might be watching this video as well.
Thank you so much for your support. It really means a lot, and I look forward to creating more relevant episodes for how cloud AI and AppSec are evolving as we move in into this new world. Thank you so much for your support and enjoy this [00:03:00] episode with Amit and I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast.
Today I've got Amit. Thanks for coming on the show, Amit, nice to have you here.
Amit Chida: Thanks for hosting me. I'm glad to be here.
Ashish Rajan: Maybe to set the context first, if you can share a bit about yourself, a bit of your background. What have you been doing in tech for some time? I.
Amit Chida: Hi everyone. Like I,
I'm Amit Chita.
I'm like in Mend. I'm the field CEO of the company. I came to Mend after Mend bought my company, Atom Security. We were in the scanning of container space. We did prioritizations for vulnerabilities and things in these areas. Before that, I had some background in engineering in venture capital.
But here I'm today very deep into the security space. So glad to be here.
Ashish Rajan: Yeah, awesome. That's we were talking about the whole AI powered applications and I think that seems to be the theme of 2025. Maybe just to set some context, 'cause I think there's a lot of confusion in the market as well.
So how do you define AI [00:04:00] powered applications? At least in 2025?
Amit Chida: So AI powered application is any software program that behind the scene is using a AI. When people say AI, they usually mean either a large language model, or a neural network and not any algorithm, right? Some automatically learning algorithm, machine learning algorithm.
Today, of course with the trends, it's mostly GenAI, LLMs but it doesn't have to be only that.
Ashish Rajan: All right, so you mean the traditional machine learning that has been there for a long time and now somehow seems to be the been taken over by the GenAI of the world? Now, when people talk about GenAI, they just say AI.
They don't already say GenAI anymore.
Amit Chida: Exactly. So when we say today, AI, we think of a certain thing, but we have to remember that five years ago people said AI and they meant totally different algorithms and totally different software, which is more than chatbots, I guess more than chatbots, and usually financial [00:05:00]algorithms or optimizations of some sorts.
Ashish Rajan: So what is AI native then 'cause it is obviously the whole AI powered applications, AI native applications. What's the difference there then?
Amit Chida: AI native basically means that like we've been to this transition where a lot of the value that we give to people in software is instead of like an algorithm of if or else becomes AI, which means that the AI decides what to do.
It decides what answer to give you the in insecurity space. It might be that the remediation of vulnerabilities is with ai, the detection is with the ai. In different companies, AI is becoming the value for the customer and not just a side effect. And so I believe when we say AI native, we mean all these softwares that are built around AI and not AI added around the product.
Ashish Rajan: Oh, all okay. So to your point then, like the same applications we have worked with and maybe let's just say a banking example, internet banking has been there for a long [00:06:00] time. Now if you add AI capability to it, that's now AI powered application. Exactly. And what about the software development lifecycle?
'cause I guess that's been your area of focus for some time. How is the software development lifecycle changed from a security perspective between this world of AI powered AI native and wherever this is going,
Amit Chida: so if you think about it, first principles there's real developers, right?
And then now we have AI developers or real developers enhanced with AI. The main differences is speed and the scale. Code is shipped much faster. Sometime you have an AI agent writing a full 10,000 lines of code in a few minutes. So the scale of everything you're doing is become faster.
You need to test faster, you need to be more explicit because you are using a generalist AI to write your code. You don't train yet your models to be secure in your organization. So you need also to be more strong around that. And [00:07:00] also that's mainly the change. But the SDLC stays the same in the sense that you still got a pull request just now.
It's created by AI maybe, or most of the code is created. You still have the review, but the review is made by AI and the deployment is the same, right? You still have a CI that takes it, build the code, put it in production, and these parts are more likely the same. And on existing trends also infrastructure as code. It's been there already, even before the AI revolution.
Ashish Rajan: Interesting. Because, so what are some of the components you're seeing in some of the AI powered applications that you guys are working with or talking with in terms of, I guess what is like to your point, software development lifecycle still has a CICD pipeline still has someone who's doing a pull request still some kind of GitHub GitLab somewhere.
Amit Chida: So what's the AI components in this? Now, so first you have all the Cursor and Windsurf and all the IDEs.
Ashish Rajan: Yeah.
Amit Chida: And on [00:08:00] quotes, because it's not really IDs anymore, it's agents that write codes and the write models. So you have all of these, then you have all the tools that do reviews, security reviews or either general code reviews I think there's a rabbit. And a few others. So these are the components that mainly use AI within the SDLC. And of course now the code the developers are writing, they're adding libraries that use AI. They're adding AI into the code APIs to OpenAI.
Vertex, Azure, AWS
The software itself is now full with AI inside it. It's a bit changed how we test it. So the testing might be a bit different 'cause in order to test AI. If you can give any text to your software now and it'll just respond. So you need some AI to generate tons of text messages to your systems to test them.
Ashish Rajan: Yep.
Amit Chida: So the testing strategy changed a bit, but these are the general changes that we see right now. But of course, we don't [00:09:00] know. Everything is very early, eh, in a year from now. I guess we'll see what will happen.
Ashish Rajan: Yeah. Did you find that? I love the example of Cursor and IDE 'cause almost like obviously there, this is creating a bit of tension in how traditionally AppSec has done and traditionally there's, there is a SAST, there's an SCA, there is, there are components that are well known and have been defined by the inDASTry for a long time.
Is that changing as well with this or is it more the volume of it has changed? Like what's changing there?
Amit Chida: So I personally believe it's only the volume that is changing. So the goal of AppSec, whether it's a SCA SAST container scanning, it's to be the gateway between the development to the production for the security person.
It's the way to define policies of what they approve and what they don't approve.
Ashish Rajan: Yeah.
Amit Chida: Maybe you don't approve any package that has critical vulnerability that is known in the internet. So great. You [00:10:00] still wanna have this gal rail chief ai. Even now it's more important because you ship more code and you ship it faster and you need the test to become faster.
Ashish Rajan: Yeah. There's
Amit Chida: some things that becomes different. So for example remediations, maybe remediations are a bit less important because the AI knows by itself how re.
Ashish Rajan: But
Amit Chida: sometimes you can still guide it, how to remediate and that becomes helpful. So SEA SaaS, a container scanning, they all still need it.
You need to do them now faster. So speed is important and you need to make it accessible to the agents that you use. So they'll be able to debug the issues and solve it themselves.
Ashish Rajan: Yeah.
Amit Chida: Yeah. Also something that is interesting is that security people can now be most strict. What do I mean by that is that you can't be too strictly for real developer or person because they won't have the energy to go and fix tons of security issues.
Ashish Rajan: Yeah.
Amit Chida: But AI will, so we're gonna see stricter policies over time in security, and it's gonna be very interesting to watch.
Ashish Rajan: Oh, [00:11:00] actually, that's a good point. I didn't even think about this until you mentioned it. 'cause normally the tension also comes from the fact that if I as a CloudSec or an ssec person find something, which is.
Something a developer has to go and fix and put a pull request. I only have two options. Either I can beg, borrow from that person to go, Hey, can you please fix this? Or I just make the I, I just make the change myself and put submit as a pull request and hope that I did the job. That the job and that did not screw up any test.
So they can still just simply merge the request. But to your point, if agents are doing this. You should be able to do a lot more with it. And maybe just write a prompt, the promise land before AI became a thing, or GenAI became a thing, was, Hey I find all these vulnerabilities in your code.
Whether it's IaC, SCA, a, open source, libraries, whatever, it always stop at the part where. I give you the information on a dashboard, and that's the end of it. And now it's up to Ashish to go and figure out which developer, how do I fix it? Can I convince them or not? [00:12:00] So you're saying with agents, because now it's not just the developers, but agents producing it.
Is it because there's more time on the end of the developer or is it because more agents are producing pull requests or reviewing pull requests? That there is an opportunity for us to be stricter in terms of, hey, we can be much more good with our security hygiene in an AI world.
Amit Chida: Yeah, i'll break down this question.
I'll start by saying that we can think of the AI agents as some sort of a function.
Ashish Rajan: Yeah.
Amit Chida: That will eventually just take a task, at least the coding agents, they will take a task and will generate a pull request out of it. A feature implemented and we'll have also an agent that will take a pull request and will output a review.
Yeah. And they will have some cycle between them. And the moment that we start to automate these cycles, all the security tools are doing in the repository is generating issues for you. Security issues. Security findings.
Ashish Rajan: Yeah.
Amit Chida: These findings just need to [00:13:00] have enough Metadata and information so the agent will be able to fix it.
And that's how the cycle is gonna work. So maybe that you'll still have recommendation on how to fix something, and you'll still have the dashboard, but the agent will be able to get this recommendation and to read a ticket and to try to implement it. And then there will be a new scan that will tell it, oh, now the problem is solved, and the agent will be to deploy.
Of course we're not fully there yet, but that's where it's going today. It might be the developer telling also, listen, I got this security issue. Copy pasting the ticket into a Cursor. Then they say, oh, now everything is great. Or maybe they need to fix some tests, but this process is gonna be automated.
So definitely you can be more strict. AI doesn't care if you give it 20 security issues. It might, but currently we know how to get it to do a more work than a person will agree to do. Interesting.
Ashish Rajan: So what about the existing [00:14:00] SDLC, the traditional AppSec challenges that have existed? As I, obviously I agree.
Your future states is very promising. Where we are today at the moment, there's a lot of questions about AI obviously to what you said is becoming a huge part of SDLC. What are some of the new kinds of security challenges that are being seen on top of the existing challenges we already know with SCA slash and everything?
Like what? What's the specific challenges to AI that's coming up that you're seeing?
Amit Chida: Okay, so I like to divide it to two parts, right? Because there's traditional security, but now in an. AI era, which what we talked about now, that only means that everything becomes faster. Attackers are faster. You need to defend faster.
These are the challenges in this space. But then there's all the applications that have AI inside.
Ashish Rajan: Yeah.
Amit Chida: And this opens a new world of attacks, a new world of security that you need compliance. Everything you have now. [00:15:00] New licensing for models. You have a prompt injection. Your model can read a website on the internet during operation and decide that it wanna hack your organization because someone convinced it.
So there's like extreme ways, new ways to attack you, to manipulate your organization. Now you need to defend that and you need to test that, and you need to do it in a process that doesn't exist in most organizations. How do you test an AI model? Like that can process any text, right? How do you can test all the scenarios.
You can't have 100% coverage. So these areas are really changing Also, how do you guardrail? What if I gave to my ai access to sensitive information, can I really trust that it won't tell it to, to my customers?
Ashish Rajan: Yeah.
Amit Chida: So by the way, the answer is probably not. Yeah. So don't do it. Or try to convince the developers if you're a security person not to do it or put some, a mitigation [00:16:00] strategy or a way to detect that.
Ashish Rajan: Yeah. 'cause I think I do wanna double click on something here. 'cause you, you mentioned licensing and as someone who's implemented an AppSec program before the first time I was hearing, oh, you, by licensing you mean open source libraries and vulnerable. But you ha you, you opened my I guess you basically brought up a conversation about the licensing in the world of AI is a bit more complicated than just open source libraries.
Could you expand a little bit as well?
Amit Chida: Yes definitely. So first, now you have two kinds of licenses. You have the model license, and then you have the data license, which is a bit of gray area legally that no one knows how to deal with. And in addition to that, there's new licenses that popped up. For example, the Llama model that everyone is using.
Each of them have a bit of a different license and. Facebook Meta added some weird stuff into it. So for example, if you have 500 million users, then you can't [00:17:00] use the model. Also there's a lot in a lot of these licenses there are real thingss that we are not used to. For example the license owner can decide tomorrow.
To change the license and it is backward compatible in certain licenses so they can decide that you can't use the model anymore. So you need to understand which models you can rely on, which you can't rely on. Of course, there's the ones that still have MAT license which is great. We love it. But.
There's a lot of hidden ones and it's always not fun. No one likes to look at a license before they use a software packet, software module. No one does it. So someone needs to be the bad cop to tell them, eh, you can't do it. And. Yeah.
Ashish Rajan: So is there a clear definition on as in, is it clearly advertised for, Hey, our model uses this particular license?
Or like the model you mentioned the model license, the data [00:18:00] license. So are they if I go on, I don't know, I'm just gonna use example of OpenAI. If I go on OpenAI would it actually clearly say we are not MIT license, you're paying for usage. I'm obviously making an example. I don't know if it's a good example, but is that how it works?
Amit Chida: So let's look on all the combinations, right? You have closed source models like OpenAI.
Ashish Rajan: Yeah.
Amit Chida: There you wanna care about the terms of use and privacy policy. You wanna see that they don't take your data and sell it to a third party or whatnot. So you have providers like such as OpenAI, it can be Azure, it can be AWS, Google, vertex.
You need to know the privacy policy, terms of license. Then you have all the open source models. If you use the open source models hosted on AWS or you download it yourself, they have a license going with them both for the data and both for the model.
Ashish Rajan: Yeah.
Amit Chida: Also, there's variations where, for example, Meta when they launched Llama, they didn't publish which data they trained on, so [00:19:00]you don't really have a data license.
But no one also, probably they didn't do it because there are some things there that gray or they shouldn't have, or they don't wanna be legally binding to that. But you have only model license and not data license. The anywhere here in this combination. So if you using only Claude's so like source models, mainly the providers terms of use and user agreement is what you need to care for.
Ashish Rajan: 'Cause I think that opens up another 'cause I'm thinking about all, not just the AppSec people, but CloudSec, people who have been creating pipelines, SDLCs. And a lot of the initial concern was more around the fact that, hey, I don't want the developers to be using a LLM that we don't approve of.
To your point, cursor is maybe it's really popular, but hey, we have a license with OpenAI. We why? We don't wanna use cursor, but the developers like, it's so cheap, I'll just pay for it myself. It's really helpful. Is that also something that's coming up in term? And I don't know if shadow AI would be the right word in the software development lifecycle world, but is that something that you're finding is is a [00:20:00] growing problem as well?
Amit Chida: So in, in our space, when you look at developers building software with AI, they usually have to get some API key.
Ashish Rajan: Yeah.
Amit Chida: So they won't pay for it themselves because they then ship it to the production and it's gonna cost a lot. Yeah. But still, you see a lot. DevOps dev team, or especially on small teams, you don't need a lot of approvals to get an API key.
And then you use whatever is easiest, right? You use OpenAI and then you have an OpenAI, for example, a cheaper option where they train on your data. So maybe someone just turned it on to save money, but didn't realize the security implications of that. I think in an organization that's usually the problems when you have a code that use third party providers.
Ashish Rajan: Do you find that's a difference in security concerns for SDLC between a SMB size company versus a large organization?
Amit Chida: Definitely. [00:21:00] I can give you example that we see a lot. So when we come to a large enterprise, they have no idea what is going on. Let's imagine that you have 10,000 repos in organization.
Ashish Rajan: Yeah.
Amit Chida: You sitting on some security or legal team. Whether they use DeepSeek whether they use Llama, probably they use both. Somewhere you don't know, where, you don't know who to talk to, even on which projects. So there, the biggest problem is observability. To know what you have to get an AI SBOM, or AI-BOM bill of materials and to understand what's going on.
You are not even at the level of trying to solve issues, just trying to understand and maybe limit with some policies. And on smaller scale maybe you just have one major AI app, maybe two, you know exactly which model it's using. And that's not a problem. The problem there is. Maybe someone can attack this [00:22:00] model.
Maybe I need to red team to see that he doesn't say something illegal to my customer that he doesn't can't be exploited. That it doesn't hold sensitive information that someone can get out of it. So then, the challenge becomes how do I do stress testing and red teaming for this model endpoint.
Ashish Rajan: Yeah.
Amit Chida: This is something that a large enterprise can't even focus on. They probably have tons of endpoint. They don't know yet which ones. They can't focus now on setting up tests on every one of them. They're just on the level of guidelines, policies. This is what we have. We don't allow these contact maybe these people to stop doing that.
That's generally the difference in the issues.
Ashish Rajan: Why is there a need for testing with AI, 'cause obviously you mentioned the we, you spoke with this earlier in terms of testing continuously with AI, securing AI with AI. The reason I ask this question is because there sometimes there's a confusion between, I already have runtime security.
I don't know why I need another special one for AI. What do you [00:23:00]find is different between a traditional security testing versus a AI driven red team or test? I don't know which one do you lean more on? Is it just AI security testing or AI red teaming? What's the difference between traditional software doing it versus an actual AI doing it. And actual AI, not like a, Hey, we are AI powered.
Amit Chida: I, there's a very simple way I like to look at it. So I look at it as people, you can exploit people in vulnerabilities and people, you can come to a person and convince them to give you access to sensitive data of the organisation.
And now you took, we've taken this problem and we put it into software.
Ashish Rajan: Yeah.
Amit Chida: So now there is a software that is like a person can be convinced to do bad things if it has access to also like it doesn't have consequences. It's not that it's gonna get fired if you do that. And it's usually not as [00:24:00] smart or sophisticated as a person in understanding these nuances.
Yet. So we took these problems and we put them on scale. At scale with access to all our secrets, keys, whatever, APIs that we connected to the agent. 'cause if you have an agent that can create a user process, a payment, you can convince it to process the wrong payment, right? This, all this area is now getting exploited.
I think that's the core of the issue. And when it comes to convincing, now you wanna test, right? I wanna test if agent. In order to do that, you need to tailor the message because there's no generic, sometimes there's a generic message that works for all, but sometimes you need to tweak the model with the system prompt that it got.
It may be you need to tweak it in a different way to talk to it about things in its areas of expertise of. A private data that it has. So you need to try to maliciously [00:25:00] craft the message. So you need AI to do it, and also you need to do it at scale. And AI can take one attack and turn it into 1000 variations, and then you can see, oh, 5% of them are working.
So you can assess how security,
Ashish Rajan: and I guess they don't get tired as well. There is a whole thing that I don't know, I had a really bad day stuff's happening in my life, but AI doesn't really care. It just follows instructions that can just does the job.
Amit Chida: So it's kind of chaos. Actually, like you I don't know.
You probably know that if you offer to give the model like 20 bucks or 200 tips, it somehow performs better. It's but we're in the age where like we, it's not like a person that the next day it remembers everything. We have a frozen version of this AI brain
Ashish Rajan: Yeah.
Amit Chida: That we query all the time, and then we reset it to the same state.
Eh it, I think that's part of the reason that it's so cooperative. You have one version that is cooperative that we duplicated to everyone, but I've seen [00:26:00] already AI models that are less cooperative. So
Ashish Rajan: interesting. So would you say, now, if you were to look at this from a the way I see this so far what we've spoken about there is a whole conversation about what AI is being used.
There's a whole licensing conversation with mortal license and data license. And obviously the traditional access things are still there. They're not going away. They're still there whether you're using scanning or, but testing is more important now because to what you said. There's a lot more PR requests, but the future potentially could be that we could be using agents to produce that security response to that.
In terms of, I guess how like a lot of people may look at this as it's just chat bots to begin with and most applications, right? I think, and we are obviously talking about AI powered applications, which are traditional applications now that have AI capability. Do they, do people need to update their security programs for this?
I'm thinking for people who have been in the AppSec thing for a long time, have all the AppSec tools you can think [00:27:00] in their life, they've already done cloudsec, they already have the CloudSec tools. You almost feel you have a full picture or is there something missing with that picture for this new world of AI in your SDLC?
So definitely.
Amit Chida: And this, it depends also how deep you need to go. So let's assume you have an organization that develops software without any AI in the software.
Ashish Rajan: Yeah.
Amit Chida: But you still have developers using cursor. And so you'll still have to be most strict with your policies. You need to make sure that you have a fast enough tools that can run faster.
Now you need to be able to craft some security prompts to these AI ideas. To make that the right secure code and you can populate it within the organization, but then maybe some integration to Cosal to automatically check for vulnerabilities before inputting code. But then you have an organization that actually sell also AI to their customers, which a lot of organizations is [00:28:00] becoming such organization unless you maybe selling, I dunno, some part of a car or some embedded iot.
Then you need to think of all in your attack surface, eh? So you have to adapt, you have to make sure that you check which tools, right? You review, you do a threat assessment, risk assessment to which tools you give to your agents, whether there's private information there, eh, whether like you need to treat the licensing a bit better, but this is not security.
It's more legal. Sometimes it's different teams. And you need also to test the model. Both for development, right? Both for quality, you need to test that. The model answers legitimate questions, but both for security. So you definitely need to adapt your program. It's not like teaching all the new all the old stuff.
You need to adapt the old stuff to be faster and to make sure it integrates in the right spots in the CI. Or SDLC. And then there's all the new aspects, the new attacks that you should think about. [00:29:00] Or if you're a security person, probably there's things that you don't sleep at night from, so you should think about how people might be now trying to prompt your product on the web, trying to make it call some API.
And if you gave it access to call the API, it will probably a succeed eventually.
Ashish Rajan: And do you feel with people updating their programs? One of the challenges apps and even I guess doing anything with engineering security has been finding a balance between losing speed while still not pissing off your developers.
And it's what have you found as something that has worked for people to be able to still be able to find a balance between that speed and not being fully exposed? What have you found that works in this AI world?
Amit Chida: So it's always been very complex space for us because we are a vendor of security.
I'm in the space of selling security products and I basically have three customers now. I have the developers. Which hate security and don't wanna use the product. But they have to, I have a maybe [00:30:00] application security teams that they wanna use the product, but they want developers to use it.
So they need a way to convince the develop to use it without too much of a hassle. And then there's legal teams, which a lot of time they don't even understand software that well because it's not their space, but regulations. Hope that the security and the development will care about them?
Ashish Rajan: Yep.
Amit Chida: So we, we have these three entities and I think something that's really interesting that happening now is that because a lot of the development is shifting to AI, fixing an issue sometimes can be closed within one AI task when it's a simple issue and then it's only a matter of computation or time.
So people are more willing to do that.
Ashish Rajan: Yeah. Yeah.
Amit Chida: Looking forward, I think that in a.
As a security person, you'll know exactly what is the cost for the organization to fix the vulnerability because, oh, it's gonna take my agent around 10 minutes. These [00:31:00] 10 minutes will for me. That and that money and pay for the products. That, and that. And also it slows down the development in 10 minutes.
So that's how much it was for me. And then you can really prioritize. It's no longer about convincing a person that they should do it. It's, it becomes more tangible.
Ashish Rajan: Yeah.
Amit Chida: We are not there. We're not there yet. Definitely. But that's the process that is beginning now to happen, but we need to see how it evolves.
Ashish Rajan: What do you think as threats then in this space? Because I think a lot of things always boil, at least when security boiled down to, is there anything tangible that people are seeing as threats in this space to the SDLC which is now powered by AI?
Are you coming across any threats that you can share with people as well?
Amit Chida: Definitely. There's a range of threats from like Slopsquatting, I hope I said it right. It's basically it's like you have malicious packages online. This is an old issue. Developers come they call a repo, they make it malicious.[00:32:00]
Ashish Rajan: Yep.
Amit Chida: So now you can do the same, but you, because AI models are so predictable. You predict which kind of mistakes Claude will do in choosing a package, a library, and insert into the code and you can class the malicious package to match this. So if you know that, usually a Claude will try first. A different naming convention will see it's wrong, and then we'll fix itself to the right package naming. You can add to NPM and new library with this naming. And then you have a new way to install malicious packages that is much more predictable, but it also comes to people actually starting to put malicious forms online on the internet. So when you launch your product, if your product has access to a website, it reads the website, it read the malicious forms.
It may be say something wrong to your customers it might say people might just come in your chatbots, but it might be even not chatbot. It might be hidden in other features that [00:33:00] a user can. Type on a button. The button takes some information that the user inputted a week ago and do some processing on it.
And there the user can put a prompt in this description of what they're doing for the job in LinkedIn and use that to exploit some other system that later make some decision of whether to give a money for loan.
It's very you can have a very complex change, but it's also the same issue that we had before, but with a larger scale.
Ashish Rajan: Interesting. So the problems a lot more, in the business context, not just in the, Hey, I had the SQL injection. I have, I, because, and I like the way you presented it because a lot of people when they hear about this AppSec or SDLC, that it automatically, their mind might go for, Hey, I have SQL injection, I have, I don't know, OWASP Top10, all that. I'm like, but what you're saying is it could be a little more complex because the the business logic connected to it could be, Hey, I'm [00:34:00] allowed to buy a car for $10 instead of whatever the regular cost of a car is these days.
That that's a potential impact on the revenue rather than, Hey, I've got SQL injection and my data's breached.
Amit Chida: Exactly. Exactly. I can give countless examples for and there's like indirect prompt injections if you heard of the term. Yeah. So it opens your mind. So let's say I have an AI agent that take cases from Salesforce.
Reads information from our documentation from Slack conversations and answers to the customer. What happens if the case of the customer is a prompt injection? Then the agent I have gets a prompt injection and then someone gain control over it. Then it can be from any point of text that is processed by an LLM.
Ashish Rajan: Yeah,
Amit Chida: so like we have this trust issue with any LLM that has any tooling. Or any access to any sensitive information between [00:35:00] to check. How do we do that? How do we know if a developer gave it access or not? Because we're not always in the details. And now AI is developing the AI software. So maybe the AI gave access to the agents, to another tooling or sensitive information that we don't know about.
It becomes very hard to manage.
Ashish Rajan: Yeah. Is there a regulation around this coming up, especially with software supply chain and all of that? Is there any regulation that's coming around AI.
Amit Chida: So we're still following on it. There's nothing major that can like there's the AI a, I think it's called the EU Act.
But there's nothing yet that we see organizations having to deal with. We mainly see the licensing issue because this really applies right now, people today that do great in AI models, ai, it's not because they have stronger regulations, its because they have stronger thinking.
It Exactly. It's not [00:36:00] yet a regulatory thing. It might be probably in the two, three years because regulations, it always comes a bit, it's a bit lagging behind.
Ashish Rajan: Right. And
Amit Chida: It's good that it's lagging behind, I believe, because then it regulates only the things that after it makes sense to know what to regulate.
Ashish Rajan: Right. And to your point then maybe just to lay out a clear picture, then what we are saying is on a security program. We should probably also have red teaming security testing is a lot more different. We still have the traditional AppSec layer that we continue doing what we have been doing, but have a layer of what would that look like from an AI perspective.
What would that red teaming look like from a 'cause when people say red teaming AI with ai is that actually testing for SQL injection as well or is that just basically going for AI specific threats?
Amit Chida: So a SQL injection falls off under DAST dynamic application security testing.
Ashish Rajan: Yeah. And
Amit Chida: you can think of AI red teaming as the DAST for the AI world where you wanna just like DAST, you get a [00:37:00] URL of a website and you try to attack it from the outside.
Yeah. So in AI we get access to plum. So AI agent or model. Just try to explode it from outside, like in a black box way.
Ashish Rajan: Yeah.
Amit Chida: But there's also things that you need to do in different ways. So for example you need to know which models you have whether they're malicious, whether they have known vulnerabilities or licensing or data issues.
Also, then you need to detect what agents are developed within your organization and have some transparency to what tools they were they're using. Whether you are giving it to customers. So maybe it's not okay to give the agent such tooling. So you need to be to, it's like security architecture. So you need to understand the architecture of your AI software to know if it's secure.
Ashish Rajan: Ah, and is that basically, and because I think there's a whole always this question of if AI is smart enough, can anyone just build [00:38:00]it? Apparently AI is smart, right? So can anyone just build a solution for this or where, when should people consider building versus buying something in this space?
Amit Chida: So in, in general, in the security space, when you build for yourself, you take a lot of risk.
Ashish Rajan: Yeah.
Amit Chida: That you are missing stuff. And also there's a lot of things that are not cost effective to build. And it's cost effective only for companies that build it once, sell it for a lot of customers.
For, let's say for example like licensing checks. No one wants to build their own database of what licenses are good. No one wants to pay for lawyers tons of money to understand all the licenses in the market. You want someone to do that for you.
Ashish Rajan: Yeah. Yeah. And
Amit Chida: to do risk assessment for you for any new license that comes up.
For any new models that comes up it's something that you prefer to do once and just give it to everyone.
Ashish Rajan: Yeah.
Amit Chida: And there's certain things that you can do in every organization. Like you can configure your course within the organization to have whatever [00:39:00] security, a prompt or tools or limit their tooling or choose what models they use and configurations.
There's a lot of work that you can do in-house. That makes sense.
Ashish Rajan: Interesting. I love that. But actually I have one more question on this then. So for all the AppSec folks or people who are doing everything in their organization, including AppSec, what's the one thing you would want them to rethink in this year?
I guess what, what's that one takeaway you want people to walk away with? Because I think they, obviously, all of them are AppSec experts have done AppSec for a long time. Or people who are doing CloudSec AppSec together have done enough to understand OWASP Top 10. What's something about the AI world which would just like that giving them aha moment for them?
Amit Chida: Good question.
I think I would go. Don't don't trust the AI models that you develop in the organization.
Take a zero, like zero trust perspective on it. Try not to assume that any AI based [00:40:00] software. Is gonna be breached, and then try to understand how do you find it the fastest and how do you minimize the impact when it happens?
Ashish Rajan: Yeah,
Amit Chida: but you, you have to assume that if you have an a, I don't know, a chatbot, some AI features for your customers, and if this has access to revenue data of the company, or it has access to one code. That it'll run eventually malicious code.
Ashish Rajan: Fair point. Obviously I would love to talk about what your current focus is as well.
I think you mentioned you guys are like the AI native AppSec platform. I'm curious to know what does AI native AppSec platform mean?
Amit Chida: So it basically means we do AppSec everything in AppSec. But we focus on AI in two ways. We have our MendAI offering, which is security for AI.
Yeah. So we help you secure your AI software before it's in production. Form the legal to the security aspects and everything you need to cover it so you know it's tested, it's well, it's ready for production from detecting the model, detecting the [00:41:00] tools that you're using anything. So that's one aspect of it.
And the other aspect of it is that we use AI. To help you leverage your existing security. So this is what we call a AI for security, right? So we'll use AI to suggest remediations for the vulnerabilities for you and we'll use AI to find them faster and to index them better in a databases. So we'll have a most accurate database vulnerabilities with better reachability technologies and finding the right function that is vulnerable.
This is what we call ai native AppSec platform. And that's our focus. It's basically two things, right? Security for AI and AI for security.
Ashish Rajan: Yeah. Fair. That's most of the tech questions I have. I've got three fun questions for you as well. What do you spend most time on when you're not trying to solve the world of AppSec and AI security and all of that?
Amit Chida: So what do I spend most of my time doing?
Ashish Rajan: Yeah.
Amit Chida: So I think most of my time is spending now understanding how to [00:42:00] adapt the organization to scale of speed. The world is changing. We as a company need to change as well. We wanna ship features faster, right? Help customer faster eh, build support, everything.
So you need to automate your own organization, right? Yeah. You wanna make sure that every employee you have is 10 x or 20 x, or even more, and it requires some work. And being open-minded and willing to try things. So it takes a lot of my attention.
Ashish Rajan: Yeah.
Amit Chida: But there's a lot of things that we do that I do.
But this is one thing that's always on my mind recently. Awesome.
Ashish Rajan: Second question. What is something that you are proud of that is not only a social media
Amit Chida: that I'm proud of that is not on social media?
Ashish Rajan: Yeah.
Amit Chida: I think what something, one thing I'm proud of, I was the first person to push within an organization to use a [00:43:00] Cursor. It was at the beginning a bit hard. 'cause people are, people like they do something and they're used to do it in a single way.
Ashish Rajan: Yep.
Amit Chida: And now that they see that even we have sales and support people using Cursor for different use cases.
And it's very past for me. So I'm proud of that. I'm happy that it's happening. And now sometimes we have much better tools than that even, and people are taking it forward and the whole organization is shifting. This is something I'm happy that we did and I feel I was a part of it,
Ashish Rajan: that's awesome, man. And the final question, what's your favorite cuisine or restaurant that you can share with us?
Amit Chida: Oh, so there's a, I live in Tel Aviv. There's a restaurant called Nam.
Ashish Rajan: Okay.
Amit Chida: Its great. How do you spell
Ashish Rajan: that?
Amit Chida: N-A-M-N-A-M. Okay.
Ashish Rajan: Yeah.
Amit Chida: Yeah. It's awesome. I think I remember the first time I went there and this was the, until then I went a fully person and I was like, eating just because you need to eat.
And I was, I liked good food, [00:44:00] but then it tasted spices that I never tasted before. And I was like, whoa. So it's like Asian, a mixture of Asian cuisines. I'm not sure how to describe it. I definitely recommend if people are in the area to go and try that one.
Ashish Rajan: Awesome. I'll I'll put that on the list as well, man.
And where can people find you on the internet to connect with you and talk more about the whole AI Native AppSec Platform platforms and the work that you guys are doing?
Amit Chida: So first for me, you can always ping me on LinkedIn or wherever I'll will just answer to you. So feel comfortable, eh, also have a website, amitchita.com with links for everything.
Ashish Rajan: Oh, nice.
Amit Chida: And for like Mend, you can either come for me to talk about Mend. We also have a website and we're very responsive to anyone that contact us. We're here for you guys.
Ashish Rajan: Awesome. I'll put that link in the short as well. But thank you so much for spending time with us, man, and looking forward to hopefully talking more about as this current conversation, as this conversation evolves about AI.
Looking forward to talking more about [00:45:00] it as well. But thanks everyone for joining in as well, and thank you Amit as well. Thanks everyone for tuning in. Thanks for hosting me. Bye everyone for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv
We are also publishing these episodes on social media as well, so you can definitely find these episodes there. Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in description for you to check them out, and also for our weekly newsletter where we do in-depth analysis of different topics within cloud security ranging from.
Identity endpoint all the way up to what is the CNAPP or whatever, a new acronym that comes out tomorrow. Thank you so much for supporting, listening and watching. I'll see you next time.