Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure.We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni spoke about the hidden dangers of default AI architectures, why you should never connect an MCP (Model Context Protocol) directly to a database.We discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if not monitored.
Questions asked:
00:00 Introduction
02:50 Who is Toni De La Fuente? (Creator of Prowler)
03:50 AI Security vs. Cloud Security: What's the Difference?
07:20 The Shared Responsibility Gap in AI Services (Bedrock, OpenAI)
11:30 The "Fifth Party" Risk: Managed AI Access
13:40 AI Architecture Best Practices: Never Connect MCP to DB Directly
16:40 Prowler's AI Pillars: Generating Dashboards & Detections
22:30 The New SDLC: Securing Code from Claude Code & Lovable
25:30 The "Magic" Trap: Why AI Doesn't Know Your Security Context
28:30 Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI)
30:40 Future Predictions: Why Predicting 12 Months Out is Impossible
Toni De La Fuente: [00:00:00] The same way that we distinguish between cloud infrastructure, application infrastructure, we can distinguish between AI infrastructure or AI configuration itself. If a company like AWS launches AI service like Bedrock, you could expect that this following all the best practices. But what about your side or what about your expectations?
Ashish Rajan: I could be using open AI or Claude or whatever in the background. Now I have fourth party, fifth party as well, because now my Bedrock is the access to my Claude.
Toni De La Fuente: We have seen many how insecure at the default configurational or default AI architecture with MCP can be never compute an MCP talking to a database directly, for example.
We think that AI is going to know everything magically. This is not magic.
Ashish Rajan: If you are working on an AI workload these days, it's highly likely you're probably using a managed cloud service today, like a Bedrock vertex ai, or whatever the Azure new service is called. Now. They've changed name three times, so I'm not gonna keep up with the name anymore.
But the reality is, for all of us who have been [00:01:00] working in the cloud security space, a lot of what we believed. Used to be shared responsibility model. There's been a gap that's been created. There's also opportunities for us to use things like Claude Code and others to be able to produce a lot more things that we could not before.
Thanks to open Source. In today's conversation, I had Toni De La Fuente, who runs one of the most popular cloud security open source projects that does run and has been running for decades now called Prowler, and he was on the episode talking about the shift that's coming from cloud security to AI workloads that are running on cloud, how software development lifecycle has changed and what he's seeing across the community that is using his open source projects.
I think you realize how much open source is evolving, not just because of the fact that over 90% of open source is being used in most office nowadays, but now with AI in the picture and the deep seeks of the world. What does that look like for. People who come from a cloud background looking into this AI workload space, all that, and a lot more in this conversation with Tony.
And if you know someone who's in the cloud space working towards securing AI workloads, definitely share this episode with them. And as [00:02:00] always, if you are here for the second or third time, and I've been finding. Episodes of Cloud Security Podcast helpful. Definitely take a few seconds to hit the subscribe button, whether it's on iTunes, Spotify, YouTube, or LinkedIn.
It's free for you. Takes only a second. It means a lot that you support the work we do and helps us reach more people as well. Thank you for taking the time and I hope you enjoy this episode with Tony. I'll talk to you soon. Peace. Hello. Welcome to another episode of Cloud Security podcast. I've got Tony with me.
He's a return guest to Cloud Security Podcast. Tony, welcome to the show again, man.
Toni De La Fuente: Hi. Thanks. Thanks for having me.
Ashish Rajan: Happy to
Toni De La Fuente: be here again.
Ashish Rajan: I am excited for this conversation because now the last conversation you and I had was in the cloud world. Now we're in this AI world, but for people who may not, uh, have heard of you before, could you share a bit about yourself and what you do and what you've been up to?
Toni De La Fuente: Well, I have been into computers and cybersecurity more than 25 years in cloud security and around cloud security. Pretty match. Yeah. 12, 15 years. And, um, yeah, and since almost 10 years [00:03:00] I started Prowler as an open source project, and two years and a half ago I started the Prowler, that is also called Prowler.
And so that is, that is Tony is an enthusiast of cybersecurity and of course I, I love cloud security and. Ai, of course. And AI security.
Ashish Rajan: Yeah. And maybe, uh, double click on the whole AI security and cloud security as well. Obviously you kinda like me, you come from a cloud security background. Uh, and maybe just to kinda lay the land for people, how, when you separate cloud security and AI security, how do you distinguish between the two?
'cause this is obviously, I did a LinkedIn post the other day about how AI security is not, uh, cloud Security 2.0. I don't know where you stand on that, but what is your thinking about how do you distinguish between the two? Like for people who are trying to get their head around AI security and cloud security?
Toni De La Fuente: Well, so we have to probably distinguish between. The, the same way that we distinguish [00:04:00] between cloud infrastructure and application infrastructure, we can distinguish between AI infrastructure or AI configuration itself. The way we use AI or an application that has AI underneath, right? So our two components, so AI has infrastructure, ai, and the infrastructure of ai.
In many cases it's cloud infrastructure, right?
Ashish Rajan: Yep.
Toni De La Fuente: We are talking about GPUs. GPUs are in the cloud. Of course, you can buy GPUs and have them in your data center or in your stairs, right? In your roof. But people has use, um, providers for a.
Data, where is the data where, where you build your LLMs or you build those, uh, capabilities in the cloud or even services to build those, like let's say sagemaker or s3 or, uh, [00:05:00] database, you name it, multiple services that. Very often in the cloud. So when we talk about building AI or those capabilities there are, there is a lot of overlapping with building cloud, right?
Or in the cloud. So that is why I think, of course it's not the same, but has so many different capabilities overlapped,
Ashish Rajan: to your point, the CISOs who are probably dealing with managed gen AI services, like for example, obviously there is the what's the word for it? The Amazon Bedrocks of the world, Google, vertex, Azure.
There's so, there's so many variations to the, uh, the, I guess, managed gen AI as people like to call it. What's the, how is that different to when we were building workloads in cloud? I guess because, because you know how people think about this as like, Hey, I have an EC2 instance in Amazon. What does that really translate to in this AI security world in terms of workload?
Toni De La Fuente: I like to see it like, uh, an additional component with it, with its own [00:06:00] specifics in terms of security. We, we support, of course, Bedrock. And when it comes to Bedrock, uh, you have to take to know what is, um, uh, AI service, right? And, but remember. So many different articles talking about improvements or security improvements, let's say that way, about Bedrock.
So we have controls for Bedrock because Bedrock can be configured in many different ways. And again, if you think about what is the biggest issue in cloud is probably the same in ai, which is the shared responsibility model when you are using a an AI service. It can be Bedrock, but we can talk about chat, g, pt, or even chat, GPT Enterprise.
So you have a lot of different components and confirmation points that they belong to your responsibility, and you probably don't know. Yeah, so that issue [00:07:00] happens exactly the same in the cloud or in ai. And again, the AI is in the cloud like Bedrock, and you have to configure it properly to guarantee the, you know, the guardrails, controls, prompt injection, et cetera.
This, at the end of the day, you are putting some security best practices around your lms, right? Um. Is that,
Ashish Rajan: I think I remember seeing something from you around the whole shared responsibility gap. Is that kind of where you kinda mentioned? Yeah, I think we, I think maybe it was from you or someone from your team about shared responsibility gap that comes with gen AI being introduced into a traditional cloud workload.
Toni De La Fuente: Yep. It's super hard think, think about any new service. Always. It's always the same. Yeah. The people doesn't know how to. Configure those services properly and or what is their responsibility? I we may, we may guess if you move to, let's use a, again, the Bedrock example. If some, a [00:08:00] company like AWS launches AI service like Bedrock, you could expect that is following all the best practices.
Ashish Rajan: Yeah,
Toni De La Fuente: because they may say, well, on my side of the responsibility model, yes. Yeah, but what about your side or what about your expectations? I mean, that is very blurry, right? Yeah. Sometimes it's not even clear for anybody, not for the customer, not for integrators, and of course not for the csp.
Ashish Rajan: Yeah. Yeah.
Toni De La Fuente: So that is why we said sometimes, uh, well, many times we, we, we said, okay. Can tell you that is wrong. It's just responsibility. Right? It's, it's the responsibility of any CS PN right? Or any start spn to tell you what you have to do directly.
Ashish Rajan: So do you find that, like for people who are building AI workloads on Bedrocks or Vertex or any of this as well, what's the, you know, obviously there was a whole shared responsibility model that [00:09:00] AWS had for cloud.
Where you could see the line between, Hey, this is, this is the stuff you manage. This is the stuff we manage. And security's kind of like a blurry line between that, where security, we manage security of our stuff. You manage security of your thing. Is that not that straightforward with Bedrocks and stuff?
I guess
Toni De La Fuente: it is. No, it is not. It is not. It's not even easy for functions for several
Ashish Rajan: stuff as well. Yeah,
Toni De La Fuente: exactly. Exactly. So that was very easy for S3 for digital machines. Yeah. Yeah. And that's all because now RDS. RDS you may think if you see, if you know the service or RDS or the, or the equivalent for any other cloud provider, right?
Everything that is inside the, the AWS is your responsibility, but also many things outside the configuration, right? Yeah. That is started the challenge to understand and now Aurora serverless Aurora. [00:10:00] Now what? Yeah. What is the responsibility at the end of the day, if you can't access an API endpoint with an option that you can see and you can modify, is your responsibility.
The problem with that is that you don't always are aware Yeah. Of those options. And exactly the same happens with services like, uh, AI services in any equal provider. Yeah. Yeah. So we didn't know about, I remember the first time computing Bedrock is like, okay, the guard rates. But now if you look at the service, uh, a year and a half ago, I think it was the launch, right?
In, yeah. Uh, during event 2024, I think. Yeah. Compared to now, there are at least more than twice the security documentations compared to the launch. So that is because you vendor are pushing to the customer that part of the responsibility.
Ashish Rajan: Interesting. So we are more [00:11:00] responsible. And to your point about the serverless notion here, bringing that back to AI workloads.
'cause I could be using open AI or cloud or whatever in the background or building on top of it. And Bedrock is my, is my foundation stepping stone to access that. So now I have. The, I used to be first party with just me and things that I manage on my virtual machine. Then I had third party with me and Amazon managing Amazon or Microsoft or Google, managing my workload.
Now I have fourth party, fifth party as well, because now my Bedrock is the access to my cloud or open air.
Toni De La Fuente: Exactly. It's like, its like the, um. The hub, right? Yeah. Between, between you and any LLM or any other, other service underneath that may work better for your workload. Depending on what your application is going to do, uh, which is which of course, those are great services.
Ashish Rajan: Yeah.
Toni De La Fuente: But in terms of security, it's [00:12:00] very important to know what you have to do in that regard and make that very clear to the users. So that is why we cloud security companies or AI security companies have, uh, important. We have to tell people this is what you have to do. Um, prevent. To happen before.
Ashish Rajan: What are you seeing in the, uh, AI securities landscape then? Because obviously a lot of the cloud security landscape is always about misconfiguration exposure, identity. What are the top things that you're seeing in the AI security space? Obviously, probably being an open source tool is being used by multiple engineering teams across the board, secured engineering otherwise as well.
And in the, even the regulated space. I'm just curious, uh, even as an open source how much. Is now being challenged by this new AI worker in terms of, is it still misconfiguration or are there specific AI security vulnerabilities? Like what are you seeing more of? [00:13:00]
Toni De La Fuente: I will, I will say stuff from.
Inside the model or let's say, or interacting with the model and outside the model is how your agents or your cps, the tools, et cetera, are using to that information. First of all, make sure your model has what it has to have, right? And is computer or is distilled or um, has include anything you need, but, and only what you need or users need to do with, with whatever you build with that gene ai, right?
And then once you have that, and, and sometimes you have to do some assessment to that LLM to make sure it has the correct content, no more content, et cetera. You need to put diff different layers of security to control the access. So we have seen many how insecure at default configuration or default AI architecture
Ashish Rajan: Yeah.
Toni De La Fuente: With MCP can be. So we, so we [00:14:00] recommend of course, uh, and that is how we have built our own implementation, is put your NCP on top of your RAC
Ashish Rajan: Oh. Okay.
Toni De La Fuente: And the R back is below the API. So don't make don't never compute an MCP talking to a database directly, for example, right? Yeah. Or so, those type of basics.
But those are very important basics when it comes to developing uh, an application, a production application, production level application with ai, right? RA is in, in, in ai. It's possible if the AI is probably, and the architecture is probably computer it's beyond configuration, it's more like a architecture, best practices.
Right? So is are different layers of security where we can put best practices in place. In place.
Ashish Rajan: Are there any specific AI vulnerabilities that come to mind that you're seeing more of versus the others?
Toni De La Fuente: I'm not able to say [00:15:00] any specific. Other than the one that was was a dip sick long ago that happened.
Um, so there, there are very good tools to assess lms. We do have Promptfoo, for example, inside Prowler. Yeah. Now Promptfoo is a very popular assessment tool for for LLMs. And we use Promptfoo underneath Prowler to assess LLMs as well. So, which is very, uh, capable, has a lot of different detections. Uh, and it does the Mitre mapping with, uh, or the, oh, sorry, the OS for ai.
Ashish Rajan: Yeah,
Toni De La Fuente: into, into any lms. Not everybody is building their own lms. We all know the cost of building your own lms but it's important to know there are tools that you can use so that you can use progress for. That is to assess inside LM and also everything outside, which can be local answers, can be storage, can be, uh, a [00:16:00] p gateway.
Multiple services around your, your workload.
Ashish Rajan: So I'm curious because obviously, um, a lot of people who will t tune into this episode would also be very pro open source they would wanna start. So a lot of people may have already been doing cloud security using open source and now are looking at AI workload.
What's your advice to people who are adopting the open source mindset into this cloud and AI world? What's a good place to start? Doing this, obviously there, there is there like a ma my, my mind map that you have for, hey, anyone who's listening, who's a CSO or who a engineering leader going, Hey, we, we, we are gonna do AI security as well, but I wanna do it open source way.
What's your recommended approach to start doing, start building that mental block? Should they look at cloud first? Should they look at AI first? There is that question as well now for new services and existing,
Toni De La Fuente: Uh, I don't, I, I don't think we have to look [00:17:00] at open source or not. Open source is part of 95, 90 6% of the software.
So open source is, is what we have to do basically. It's not a choice. It's what It's yeah. Uh, um. So we, we have something called Owler AI pillars. And it's basically one pillar of open source platform like Owler or any other is like, okay, if it's open source with ai, you can extend it.
Way faster and way better in our case. And I recommend other, uh, security vendors to do the same regardless if it's open source. But with open source, of course, you have way more advantage is to create your developer skills. For ai. Ai. Mm-hmm. So if you create those skills, you can grow your software, add more capabilities in our, in the case of product for example, you can have more detections, more remediations, more compliance frameworks or better mapping in those compliance frameworks, [00:18:00] additional cloud providers.
So we of course support the, the major cloud providers, but also we support niche cloud providers and we're adding cloud providers all the time. Or SaaS, even SaaS, because now we support Microsoft 365, MongoDB Atlas and many others. And that is possible to go and to grow that way because AI knows very well about how to build those components into power.
So skills is one of them. Then the the coverage for new checks, et cetera. Then the dashboard. Of course you have to bring a dashboard if you have, uh, AI capabilities or any other application. With ai, now you can generate dashboards as a service in five seconds. You can generate dashboards if you have a proper access to the data.
So that is part of ai. Thanks to the ai, now it's no longer a dashboard that you have a static information that you have to look at, but you can build it [00:19:00] based on your requirements with a very simple prompt.
Ashish Rajan: Yeah, wow.
Toni De La Fuente: Even adding your colors, et cetera. You can do that with product and many other tools.
So the dashboard as a service is key context. So with ai, AI is, is great to create a narrative, right? It's great to create the literature. Yeah. Gluing, gluing, gluing words together, right? So if you give AI context and you take advantage of that context, you get. Very good and precise information about what you are looking for.
So that is part of the, the information that you have to, or the AI pillars when it comes to building applications and that, that is not necessarily open source, but with open source you get even better context because you have access to, uh, deeper information, right? Or deeper way of, of doing stuff. The, the last one will will be configuration.
Ashish Rajan: Yeah,
Toni De La Fuente: so with with open source, you have access, more configuration options, more [00:20:00] customization of the platform and ai. If you point, for example, our MCP points to docs doer com, um, and prouder hub that is com, that has also an API. So if you point your AI to the proper places to learn, you are going to get a monster in the good way, right?
Uh, and very smart. Or very smart, uh, application, giving you a lot of information. Again, open source is key for the success of ai of course.
Ashish Rajan: Yeah. I guess Steve, what you said I love what you said about these days. You don't have to have a static uh, looking dashboard. Even if you're using open source to build it, you can hack, have it live.
'cause there's a whole concept of continuous testing when it comes to ai, which was there in cloud. And I think it used to be something that you would feel would take a lot of effort and a lot of resources. But is that's probably getting a lot [00:21:00] more easier with ai. So. Is the concept of continuous testing that has come, has been introduced with this, where it's no longer enough for me to know what my AI is doing or how many AI services that I have right now, but any, every second, 'cause I don't know, like one of those, uh, what's it open bot or whatever the new thing is called, Clawdbot or Moltbot, whatever the thing we call
Toni De La Fuente: Yeah.
Ashish Rajan: That gets introduced tomorrow. It's being used in your organization. You're trying to find out. Like having capability to consistently run something ongoingly and maybe across SDLC as well. Would you think those are things that people can actually now start thinking about themselves and then say they can go, Hey, now I should be able to do, I don't need to buy, I don't know, like a full on platform somewhere?
Uh, I can obviously buy one if I want to, but at least if I have the capability using something like an open source solution, I can go it on that part.
Toni De La Fuente: Open, open source, of course, helped, helps. On, on, on that. If you think about the the, all these tools that they, they can do pretty much anything.
[00:22:00] They would, wouldn't be possible or even, trust on those, uh, on those applications without open source, because you can look at them, you can see what they do, et cetera. So if you think it's the same paradigm as in security or cybersecurity tools, being open source is easy to trust those tools because you can see them, right?
I don't think it's, it's a, it's, we have to pick one, one way or another is it's a matter of. Speed trust, and of course impact, right?
Ashish Rajan: Mm-hmm. Yeah. And do you find that SDLC has also been, you know how there was a whole influx code that was going for a long time where all our pipelines were being transformed into from a static cloud formation template into a IAC Terraform and other things were being brought in.
How has that changed with the world of cloud and AI that you are seeing at your end?
Toni De La Fuente: That is very good point because we totally see that, that is changing. You know, as I said, the, the paradigm of building applications [00:23:00] is, is becoming very different now than before. So right now, how is the people building applications, and I say building applications because when it comes to qa, maintenance, integration, deployment, et cetera, may be different.
But building applications and pushing those applications to prout. Or to make your application accessible. That is cloud, that involves cloud somehow, right? Yeah. Or a SaaS or infrastructure service, you name it, whatever that is. Cloud. Yeah. What, what happens with the security of that of that new, uh, software development life cycle?
Ashish Rajan: Yeah,
Toni De La Fuente: so when you create a new application with Lovable or even with with Claude Code, that application, if you want to manage everything, Claude Code is going to generate also the Terraform code. It's gonna ask you for credentials in AWS, let's say eh, Azure with Terraform, and then you deploy something there, and then you deploy your [00:24:00] to servers or to application servers with your application running with the storage, with the database, everything.
Okay. Good. You are a great software architect. Now what?
Ashish Rajan: Yeah.
Toni De La Fuente: Now your boss said, man, I'm going to give you a promotion because you are, uh, you're the hero. But we have a ski issue. What can you do? And now is when allowing. To scan those infrastructure, that infrastructure as code with infrastructure as code.
We do infrastructure, running, infrastructure, infrastructure in the cloud, and then fixing those issues. Both sides in runtime, but also in the infrastructure as code to have the whole se cycle cover that is key. So the same way that, I mean, we, we can keep creating applications very fast, but at the same time, we have to keep hardening and actively monitoring those applications, the workloads, infrastructure, uh, with expert applications.
[00:25:00] Yeah. Everything is, I think everything is better with AI is faster with ai, but also we have to take into account the security point. So an example that that is, is for me is beautiful because we realize how important power it is, is. So we, we did this experiment. You, we said in cloud, cloud, Claude Code.
I said, okay, Claude Code, this is, these are my AWS account credential. Credentials, the role attached to these credentials.
Ashish Rajan: Yeah.
Toni De La Fuente: Please tell me in this account if I have any bucket open to the internet. Very simple.
Ashish Rajan: Yeah. Check.
Toni De La Fuente: That was probably the check that I wrote in, I wrote in Power 10 years ago.
One of the first checks. Checks for sure. Okay. Do you know what Claude Code does? Word does it first. It, it, it tries. To learn about what the AWS, [00:26:00] S3. Three, it says, okay, I'm going to use bottle three to, to get to know that. I'm gonna write a script to get to know that. I'm gonna try to run that in a container in order, in order to put everything and run that.
Ashish Rajan: Okay.
Toni De La Fuente: It doesn't work well because it looks to the default region. I mean, you say no, that is not correct it, and ends up saying, okay, I'm going to use power for that. Because it's done. So sometimes we think, and this is very important for security, for ai, for cloud we think that AI is going to know everything magically.
Now. There's no magic here. Yeah. So it is because, so you have to measure between ai. Created detections or to use the AI to take advantage of rule-based detections? Yeah. At the end of the day, that is what we truly believe is needed. So AI around everything, but sometimes you have to tell the AI A no, this is the A, [00:27:00] B, C that you have to take into account.
Ashish Rajan: Mm-hmm.
Toni De La Fuente: I'm having, but this looks magic. But it's not magic. It's because we have have exposed. The MCP, the problem MCP exposed and the AI is saying, okay, to do this, I need the expert agent, which is probably in this case. We, we have that, that experiment and we were very, very happily surprised to see that without telling anything and Tropic was speaking proud.
Ashish Rajan: That's pretty awesome man. I think. And I was also gonna do what you said about the changing STLC. How should security leaders, and it's kind of my last question as well. 'cause I find people who have come from kinda like you and I, who've come from a cloud security background. And now in this world of ai, what are, is there any top three or. Mindset shifts or security controls that come to mind that people should think about before they take these AI first cloud applications into production? Because I, I guess there is a whole notion [00:28:00] of people have done traditional workload for, for a while, so they may have an understanding of. Cloud security and how that needs to be for traditional application when it comes to moving, say, a cloud hosted AI workload into production.
Are there any top three things that come to mind, especially now that, now that we understand there's a shared responsibility gap that you mentioned earlier. What, any top three things that come to mind that people should definitely have before they move things?
Toni De La Fuente: Yeah, I mean, I will say more than three, but let me, lemme try, lemme try to find only three, which is a very good ex exercise.
First of all, if you are building an application. Most likely it's going to be in the cloud. Make sure, oh the whole infrastructure is secure, right? Yeah. Second, the LLM underneath. If you're using an LM, where is that? LLM is local, is with a third party. Is that third party using that LM to improve the LLM is an L lm That is only a tenant for you?
Ashish Rajan: Yes.
Toni De La Fuente: And and third shadow ai. Who is [00:29:00] using that? That ai. Who is uploading contracts with PPII asking, Hey, give me a, you know, an, give me a, an abstract, or gimme the most important points of this contract.
Ashish Rajan: Yeah,
Toni De La Fuente: people is doing that in charge GPT all the time and in deep seek and all that stuff. And you don't know that there is an option to say, don't learn, no disable learning from anything that I upload.
Things like that. Yeah. That happens also in the tools. And I could go even farther. When we call, we talk about who can access to, uh, the AI through the NM CP, for example. Mm-hmm. Authentication, uh, authorization, access through the AR, A-A-A-A-P endpoint. All all that stuff. Yeah. Many other things.
But I will say that the infra secure, LM secure and who has access to the tool.
Ashish Rajan: Looking ahead, where? Where do you see people fo should focus their attention, especially people who are engineering [00:30:00] leaders or security leaders? Where should they focus their attention for the next 1224 months?
Toni De La Fuente: To the tropic and open AI commercials. Keep, keep watching them and have fun.
Ashish Rajan: Yeah.
Toni De La Fuente: I dunno, Ashish, I don't like to play, uh, with being a missionary of, of anything, you know, I prefer to build what people needs now, and I think they're gonna need tomorrow, but not in two years. I'm not even now in times of ai.
Yeah. Um, so I have no idea if I, I can tell you something, but it's more like no joke than reality. Uh,
Ashish Rajan: I was curious because you also speak to so many people who are in such varied field as well. I people talking to people who are customers of you in Europe, customers, you, you know, America. So I was wondering if you had some pattern, but I'm with you man.
It's like so hard to predict the future here. Like, we would say something here and then TRO would release something right now. And the moment this thing goes live, it just would be completely obsolete.
Toni De La Fuente: Totally. Yeah. Yeah. So that is why [00:31:00] I don't really like to, to figure that out. I know, I've been talking to CISOs and CEOs and other founders in Europe and governments, et cetera in the US as well.
And I think it's exciting. What I can tell is that I love what is going on now in the IT space, in the cybersecurity space. I think AI is something great to be working with.
Ashish Rajan: Yeah.
Toni De La Fuente: I think I feel very passionate with everything that is possible now compared to 12 months ago. So can you imagine in 12 months.
So it is going to be amazing for us as a product company with 20 people. We are building like. If we were an army, and also we have more than, more than 300 contributors. So now with ai it's like, okay, now we're talking. So it's like, it's like we, of course we are not the big ones, but we are in terms of, the company size.
But we are one of, I, I want to think we are. One of the big [00:32:00] ones in terms of the product capabilities and also product deep into the community. And remember, at the end of the day, the AI or cloud community are companies using the cloud?
Ashish Rajan: Yeah, yeah, yeah, yeah. Yeah. A hundred percent. A hundred percent, man.
Where can people find more about, uh, Prowler and get to know you and connect with you on what the work you're doing with the AI security and cloud security spaceman.
Toni De La Fuente: I think the easier way is to find us@prowler.com. We can, they can go from there to our GitHub repo or sign up for our cloud service and they can find me very easily at, on LinkedIn at Toni De La Fuente.
This is,
Ashish Rajan: I, I'll put that on the shownotes. But dude, thanks so much for coming on the show, man. And thank you for sharing all that information too.
Toni De La Fuente: Yeah, always, always a pleasure to share time with you, and every time that we see each other in conferences, I, I always tell you, Hey, we're building this, we're building that.
Ashish Rajan: So I get excited you, man, I'm doing it for so many years. I always get excited because I think I, because I've seen you because build the open source. You're working at AWS and you've kind of [00:33:00] gone through the journey and now you have your own thing as well. I'm really happy for you, man. I think I, I, I, and I think because the open source mission the flag you've been raising for such a long time is what commendable.
So I think I'm, I'm looking forward to where this next chapter for you guys for AI security space as well. But thanks so much for coming on the show, man, and I look forward to seeing you and hopefully get many people get to know more about Prowler and the other work you're doing. Uh, but for everyone else, thanks for tuning again.
See you next time. Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast tv, our website, or on social media platforms like YouTube, LinkedIn, and Apple, Spotify, in case you are interested in learning about AI security as well, do check out a sister podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, apple as well, where we talk.
To other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter, it just gives you top news and insight from all the experts we talk to at Cloud Security Podcast. You can [00:34:00] check that out on cloud security newsletter.com. I'll see you in the next episode,
please.




















