AI Is Already Breaking the Silos Between AppSec & CloudSec

View Show Notes and Transcript

The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security at Bloomberg Industry Group, and Aditya Patel, VP of Cybersecurity Architecture, discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.

The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code. We discuss the urgent need for security to evolve from the "Department of No" to the "Department of Safe Yes" by building "paved road" solutions and automated governance for developers.

This episode can be a blueprint for the future of security teams. Learn why traditional threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.

Questions asked:
00:00 Introduction
02:30 Who is Tejas Dakve? (AppSec)
03:40 Who is Aditya Patel? (CloudSec)
04:30 Common Use Cases for AI in Cloud & Applications
08:00 How AI Changed the Landscape for AppSec Teams
09:00 Why Traditional Security Models Don't Work for AI
11:00 AI is Breaking Down Security Silos (CloudSec & AppSec)
12:15 The "Hallucination" Problem: AI Knows Everything Until You're the Expert
12:45 The Speed & Volume of AI-Generated Code is the Real Challenge
14:30 How to Handle the AI Code Explosion? "Paved Roads"
15:45 From "Department of No" to "Department of Safe Yes"
16:30 Baking Security into the AI Lifecycle (Like DevSecOps)
18:25 Securing Agentic AI: Why IAM is More Important than the Chat
24:00 The Silo: AppSec Doesn't Have Visibility into Cloud IAM
25:00 Merging Threat Models: AppSec + CloudSec
26:20 Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10
27:30 Threat Modeling Must Be a "Living & Breathing Process"
28:30 Using AI for Automated Threat Modeling
31:00 Building vs. Buying AI Security Tools
34:10 Prioritizing Vulnerabilities: Quality Over Quantity
37:20 The Rise of the "T-Shaped" Security Engineer
39:20 Building AI Governance with Cross-Functional Teams
40:10 Secure by Design for AI-Native Applications
44:10 AI Adoption Maturity: The 5 Stages of Grief
50:00 How the Security Role is Evolving with AI
53:00 The "Range" Analogy: Tiger Woods vs. Roger Federer
55:20 Career Advice for Evolving in the Age of AI
01:00:00 Career Advice for Newcomers: Get an IT Help Desk Job
01:03:00 Fun Questions: Cats, Philanthropy, and Thai Food

Tejas Dakve: [00:00:00] Why do you feel the traditional security models don't work? It has absolutely changed the landscape from an application security perspective. We are now expected to know more about the latest threat vectors out there.

Aditya Patel: AI seems to know everything until it's a topic where you have the firsthand knowledge.

That's when you know that there might be mistakes.

Tejas Dakve: The speed of writing the code and the volume of code that security teams have to secure.

Ashish Rajan: Yeah.

Tejas Dakve: Is unparallel, it's absolutely impossible to use our traditional security gates.

Ashish Rajan: What would that look like?

Tejas Dakve: As security leaders, we have traditionally been recognized as the department of no.

I think we have to change our mindset from the department of no to, you know, the department of safe yes.

Ashish Rajan: If you have been tackling AI security in your organization, you probably have seen the changes it's bringing to cloud security, AI security, application security, and every other vertical security you may have in your organization.

In this particular [00:01:00] conversation, I had Tejas and Aditya, who have been working in the cloud and AppSec space for a while, and we spoke about the silos that are breaking as AI security is becoming more mainstream, how some of the traditional things we used to do in terms of AppSec, DevSecOps, and CloudSec, how those barriers are already being broken, and what you can do to change your mindset as well as your maturity level on how you review and assess AI security at AI speed.

And we also speak about how you can mature your practices and perhaps even teach your team how to be a better AI security person, not just a CloudSec or an AppSec person. All that, and a lot more, in this episode of Cloud Security Podcast. If you know someone who's working on this particular thing, or about to start their AI security journey, either in a CloudSec role or an AppSec role, definitely share this episode with them.

And if you have been listening to or watching Cloud Security Podcast episodes for a while, maybe this is your second or third episode, I would really appreciate it if you hit the subscribe or follow button, whether you are watching this on YouTube or LinkedIn, [00:02:00] or listening on Apple Podcasts or Spotify. It really means a lot.

It not only helps support our work, but also means that more people like you find this podcast and get to see our work as well. So thank you so much for doing that and supporting us. I hope you enjoy this episode and I'll talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast.

I've got Aditya and Tejas with me. Hey guys, thanks for coming on the show.

Tejas Dakve: Hey, Ashish, and hello to everyone tuning in remotely. Thanks for having us.

Ashish Rajan: I am looking forward to this conversation, man. I mean, we talked about this before recording. I have a lot of questions about this particular space, considering we have existed in a world where there are silos between AppSec and CloudSec.

So it's really good to have two sides of the coin here. And maybe to kick things off, if I can get just an introduction and a bit about your professional background. Tejas, maybe we can start with you.

Tejas Dakve: Sure. Perfect. First of all, I'm very excited to be speaking with you. We ran into each other at RSA, and glad we have managed to put this thing together.

[00:03:00] Myself, Tejas. I lead the application security program at Bloomberg Industry Group, with a very passionate and dedicated team of security engineers, with the goal of maturing our program, fitting it right into the development lifecycle, and also aligning with the organization's security culture. But today I'm here to talk about my personal views, and I won't be speaking on behalf of my employer. I'm really looking forward to sharing some hot takes on this podcast.

Ashish Rajan: Likewise, I'm excited for this conversation, man. Aditya, yourself as well?

Aditya Patel: Hi everyone. My name is Aditya. I'm a security professional based out of Dallas, Texas. I've been in the industry close to 15 years now. I come from a computer science background and have been in almost all types of security personas in my career: pen tester, security engineer, security architect, consultant, et cetera.

And I've had the privilege to [00:04:00] work with some of the largest companies in the world, currently working as a security architect at a major cloud provider. Just one quick disclaimer: the views that I share here are my own, not my employer's. But I'm excited to be part of this discussion.

Ashish Rajan: Yeah, I mean, I'm excited. I've got an AppSec person and a CloudSec person, and we're talking about silos. Maybe to kick things off, as much as AI is top of mind for a lot of people, I'm curious, Aditya, maybe we can start with you: what kind of use cases are you seeing for AI, especially when building applications, whether it's embedding AI into an application in the cloud, or just using AI for some activity in an application or as part of it?

What are you seeing as AI use cases in the cloud context, or more widely, in the customers or people you see around you?

Aditya Patel: Yeah. So my take is that cloud has become the new default [00:05:00] for workload development, for AI-driven development. And the reason, I mean, if you look at some of the early adopters of cloud, Netflix or Spotify or Airbnb, these were all startups who ended up becoming unicorns and giants.

So if you're a startup specifically, you would want to, you know, consider cloud. And as for the use cases, almost every company is embracing AI. What comes to my mind is, look at Canva or Figma. These are visual, graphics, web-based photo editing and video editing companies.

They are what I call pre-AI-era companies who started on the web, but when AI came up, they quickly pivoted. Now you have a lot of AI generation tools within these products. Or if you want to look at very specific AI companies, like Anthropic building models, or Hugging Face, or Perplexity, these are out-and-out AI companies, and they need a lot of compute.

They need a lot of [00:06:00] memory, GPU, CPU, storage, essentially a lot of elasticity in terms of what they are building and how they want to build it. Yeah, for model training and inference. So all sorts of use cases, and almost every company is adopting it, specifically the companies that are building models.

Ashish Rajan: Alright, so, is it also being embedded into applications or primarily for people who are building models?

Aditya Patel: Both now. You can think of it as multiple stacks. At the foundation level, there are companies that are building foundational models, including major cloud providers and folks like OpenAI, Anthropic, and Mistral.

And then on top of it, as you go up the chain, companies are embedding these models, either from these providers or from open source, into their applications. There are AI-driven summaries; Amazon has Rufus, an AI review summarization chatbot, for example. So a lot of use cases.

Ashish Rajan: Yeah. And maybe that's a good segue into your world as well, Tejas.

What are you seeing in the [00:07:00] AI landscape for how it's being used by applications, and application security use cases, if there are any as well?

Tejas Dakve: Because I work very closely with products, I would focus on that side. I would categorize it in three different buckets, which are fairly common.

One is, of course, conversational AI. Almost every SaaS-based product out there in the market is introducing a chatbot or some sort of conversational AI feature, and the business does see value in it. The second, I would say, is generative AI. It could be in the form of a content generation platform, or like GitHub Copilot or Cursor that developers use for automated code generation.

Yeah. And the third bucket, I would say, is a recommendation or decision support system, where a lot of products are enhancing their [00:08:00] service with AI, helping the human user make decisions faster and more accurately using a backend algorithm. Like I said, I would say three buckets:

conversational AI, generative AI, and the recommendation and decision support systems. But what it has done for AppSec is that it has absolutely changed the landscape from an application security perspective. We are now expected to know more about the latest threat vectors out there. Previously, you know, with, let's say, a standard web-based application,

we were expected to look out for SQL injection, cross-site scripting, or in the worst-case scenario, remote code execution. But now we have to worry about all these different threats being introduced by conversational AI, generative AI, and recommendation-based AI as well. And each type of AI has a [00:09:00] different threat associated with it.

So these new introductions have actually forced application security teams, or InfoSec in general, to expand the threat vectors and expand the scope of threat modeling as well.

Ashish Rajan: Why do you feel the traditional security models don't work, out of curiosity? I think, to your point, we've done OWASP Top 10.

Why does this feel different?

Tejas Dakve: I mean, first of all, we don't have to move away from traditional application security or information security concepts. It's just that this is a totally new attack vector, where simply doing a SAST scan, SCA scan, DAST scan, or even a one-time threat modeling or one-time pen testing is not going to be sufficient.

Because of AI, we have to consider threat modeling a living and breathing process, whereas traditionally we have done threat modeling against any system when the system was in its [00:10:00] early phases of design and development. We used to do penetration testing before the product got released, or periodically after the product had been released. But now, because of the changing landscape, we have to do threat modeling as a living process. We have to do red teaming and pen testing as periodic events. And I think that is the main reason we can no longer rely on traditional security practices.

We still have to leverage them, but tweak our existing practices to make them more suitable for AI-based systems.

Ashish Rajan: And from your world, is this changing there as well, or do traditional approaches seem to work, at least in terms of how security is scaled?

Aditya Patel: Yeah, I mean, I would echo what Tejas just said and probably add one more aspect to it.

So in terms of technology and processes, you have Cursor or GitHub Copilot, a lot of code [00:11:00] generation that's happening, and that introduces new attack vectors. Obviously you need to address that; that's different. But from a people point of view, or from a culture point of view, traditionally we have had very siloed security teams focusing on application security or cloud security.

If you are a bigger player, then you have a centralized team, and then you have a decentralized team, a hybrid model where you might have a business security team living in an organization, or you have a security champion built into a development team. With AI now coming into play, with the new attack vectors that are coming in, there's new tooling that's now available for security practitioners as well, right? You need to essentially update your tooling for identifying the OWASP Top 10 for LLMs, for instance. So I think that needs to shift a little bit. And then there is the problem that comes with these [00:12:00] models, what we call hallucination.

And if they're writing code, yes, they might be addressing, on the surface, some SQL injection attack by writing parameterized queries, but they might be introducing new ones. I read a quote somewhere that really resonated with me: AI seems to know everything until it's a topic where you have the firsthand knowledge.

That's when you know that there might be mistakes. And I think that needs to be different for AI security than traditional security.

Tejas Dakve: And if I can add to that, I think one aspect that we also have to consider, because of the generative AI that developers are leveraging, is the speed of writing the code and the volume of code that security teams have to secure.

Ashish Rajan: Yeah.

Tejas Dakve: It is unparalleled. It's absolutely impossible to use our traditional security gates, where security teams used to be a gate checker. Yeah. We used to provide some sort of approval before the feature goes [00:13:00] to production.

It's just impossible to continue functioning in that way, because now we no longer only have developers writing the code; we have AI agents writing the code. And I think that is also one important factor. Because AI is writing the code, there is a huge volume of code that security teams have to secure, and a huge volume of code also leads to more vulnerabilities, which ultimately get dumped onto developers for remediation.

We have to find a newer way, where security is no longer playing the role of a gatekeeper but acting as an enabler, providing secure guardrails where developers have a free hand to achieve innovation, rather than security preventing them and asking them to go through some sort of approval at every single stage before taking a feature to production.

Ashish Rajan: What would that look like? It sounds like a culture change as well, culture [00:14:00] and process. So much there to unpack. What are some of the, I don't know, top two or three that come to mind that people can look at? To your point, it's a complete unknown, and we're talking about silos where the code being produced is now an overwhelming amount compared to what it used to be, and I'm sure it's the same on the cloud side as well,

with infra code being produced now alongside the actual application code. What are you guys seeing as the ways people are tackling this? What's the way that works? And I would say that, as a topic on top, yes, we all agree that we need to be working with them more, especially because we don't even know the kind of threats that are gonna come tomorrow.

As we do this recording, I'm sure there's a new threat being developed somewhere, which is gonna appeal more to natural language, which we would have never had an idea about. Maybe vendors may not even cover that for the next six months, because they're trying to figure out what that model is.

What are you seeing as the things that seem to work? Is this where agentic workflows come in, and we [00:15:00] start doing agentic workflows in security teams?

Aditya Patel: I think that's what's coming up as the next big hype. But I think there is meat behind this hype in general.

I think, addressing your original question: how do we operationalize some of those differences in AI security? One approach that I've seen being taken is building what we call paved road mechanisms, paved road solutions. I think this term came out of Netflix and their open source tooling.

But it's a really neat idea. You don't need the developers to be security experts; their core competency is writing code. So give them a solution, give them a testing framework, give them a library that they can just inherit, that will take care of some of the biggest, or most common, vulnerabilities.

So make it easy for them. One button click. That's one approach that I've seen taken at scale.
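[Editor's note: to make the paved road idea concrete, here is a minimal sketch, not from the episode; the helper name, table, and allow-list are all hypothetical. The point is a library developers inherit, where identifiers are allow-listed and values always go through bound parameters, so the easy path is also the safe one.]

```python
import sqlite3

def paved_road_query(conn, table, column, value):
    """Secure-by-default lookup: identifiers come from an allow-list,
    values always go through bound parameters, never string
    concatenation. Developers inherit this instead of writing SQL."""
    allowed = {"users": {"name", "email"}}
    if table not in allowed or column not in allowed[table]:
        raise ValueError(f"identifier not on the paved road: {table}.{column}")
    # The bound parameter (?) means a hostile value is data, not SQL.
    cur = conn.execute(f"SELECT * FROM {table} WHERE {column} = ?", (value,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

print(paved_road_query(conn, "users", "name", "alice"))
# An injection attempt is treated as a literal value and matches nothing.
print(paved_road_query(conn, "users", "name", "alice' OR '1'='1"))
```

The design choice: the wrapper refuses anything off the paved road rather than trying to sanitize it, which is the "one button click" experience described above.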

Tejas Dakve: From my end, I think [00:16:00] it's a mindset change that needs to happen. As security leaders, we have traditionally been, you know, recognized as the department of no.

Mm-hmm. You go and ask the security team, hey, can I do this innovation? Can I use this thing? The easiest answer is no. But I think we have to change our mindset from the department of no to, you know, the department of safe yes. And the important part is the safe yes. We have to find ways where we can allow developers some sort of leeway.

And that can only happen by integrating security into the AI lifecycle, the way we introduced security into the development lifecycle. I'm talking about CI/CD, and that's where the term DevSecOps came into the picture. Because we took security and introduced it into CI/CD pipelines, we have to take a similar approach to introduce security into the AI lifecycle as well.

We have to [00:17:00] give developers some sort of option to raise a request, and that request should trigger some sort of automated review process. And it could be anything; I'm just going to take an LLM model review process as an example. As a developer, let's say I'm interested in using some open source model from Hugging Face.

Yeah. I don't want the InfoSec team to take a week or two to review and approve the model. I want a process where I can simply submit a request, and the request should trigger an automated scan. We do have tools available in the market; the process should use those tools to look for LLM-specific vulnerabilities and identify how resilient that model is.

It can also send that approval to compliance and legal, because we need to involve compliance and legal in AI-related topics as well. So this is just an example. We have to change our mindset and take security into the [00:18:00] AI lifecycle, the way we took security and baked it into CI/CD pipelines.
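[Editor's note: a toy sketch of that automated intake idea. Every check, field name, and license list here is hypothetical; a real pipeline would invoke actual model scanners rather than the stubbed check names below.]

```python
def review_model_request(request):
    """Hypothetical automated intake for an open source model request:
    policy checks run the moment a developer submits, instead of a
    week-long manual InfoSec review."""
    findings = []
    if request["license"] not in {"apache-2.0", "mit"}:
        findings.append("license not pre-approved")
    if request["source"] != "huggingface":
        findings.append("unknown model registry")
    # A real pipeline would invoke an LLM vulnerability scanner here;
    # these are stubbed check names standing in for that step.
    scans = ["prompt-injection-resilience", "serialized-payload-scan"]
    # Clean requests pass automatically; anything else routes to
    # compliance and legal, as described above.
    status = "auto-approved" if not findings else "escalate-to-compliance"
    return {"status": status, "findings": findings, "scans_queued": scans}

print(review_model_request(
    {"model": "example/summarizer", "source": "huggingface", "license": "mit"}))
```

The developer gets an answer in seconds for the common case, and humans only see the exceptions.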

Ashish Rajan: And would you say, I mean, I guess you obviously touched on agentic workflows.

How do you guys describe an agentic workflow? 'Cause it's the same as agentic AI. We started off, at least most of the industry, the smart people who are way smarter than I am, basically saying we are way too far away from agentic AI. But now everything seems to be agentic AI, and apparently we're onto agentic workflows as well.

How do you guys describe an agentic workflow today? Maybe, Aditya, you want to start first, 'cause you brought up the term before?

Aditya Patel: Yeah, yeah. So I think agentic AI, or agentic workflows, is the organic evolution of where AI is headed, because you don't want AI just to be limited to a chatbot. You want AI to do things for you.

And that involves taking actions across multiple avenues, multiple parameters. So that is coming, for sure. And security, again, we are [00:19:00] playing catch-up once again on this. But I think the good news there is that the fundamentals are still the same. To give you an example of how agentic AI security is different: the fundamentals, the authentication, authorization, auditing, all the best practices, you know, defense in depth, they are the same.

But let's take authentication, for example. You can authenticate using something you know and something you have, right? Your password or secret. For an agent to go and do some complex action for you, say it is interacting with five or ten different endpoints, it needs to authenticate across each of those flows.

Some of them can be on the internet, some of them can be internal. So it needs to pass the secret, password, or certificate, what have you, to authenticate itself. Hopefully it's not hard-coded; I think we are past that stage, I hope. Mm-hmm. So hopefully there's an API call [00:20:00] embedded where it goes to a vault or a secrets manager, fetches the credential, and authenticates to the different endpoints.

But then you have the problem of: what if there is MFA? Then it needs to have a way of getting the OTPs from somewhere. That can also be done programmatically, but it needs to be coded in, tested, checked for edge cases. It needs to reliably fetch it from your vault. Again, it can generate OTPs that can get synced.

The last thing, I think, that you need to keep in mind there is bot detection, or CAPTCHAs. Now it needs to solve CAPTCHAs, and most things like Akamai or Cloudflare have automated bot detection. They would flag an agent. They're used to looking for fingerprints or patterns that a human would produce, not an agent.

So they have not kept up there. So that is also [00:21:00] something that the agent needs to take care of: how it is reliably solving the CAPTCHA. If it fails at any of these steps, your workflow is not going to go through.
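[Editor's note: the "fetch from a vault, never hard-code" step can be sketched as below. The vault class is a toy stand-in; a real agent would call the SDK of an actual secrets manager (Vault, AWS Secrets Manager, and so on), and the endpoint names are invented.]

```python
class SecretsVault:
    """Toy stand-in for a secrets manager. The important property is
    that the credential lives here, not in the agent's code."""
    def __init__(self):
        self._store = {}

    def put(self, name, value):
        self._store[name] = value

    def get(self, name):
        return self._store[name]

def agent_call_endpoint(vault, endpoint, secret_name):
    # Fetch the credential per call, so rotation in the vault takes
    # effect without redeploying the agent.
    token = vault.get(secret_name)
    # A real implementation would attach this as a bearer header on an
    # HTTPS request to the endpoint.
    return f"authenticated to {endpoint} with token ending ...{token[-4:]}"

vault = SecretsVault()
vault.put("crm-api", "tok-9f3a77d1")
vault.put("billing-api", "tok-51c0be22")

# The agent authenticates across each hop of its workflow.
for endpoint, secret in [("https://crm.internal/api", "crm-api"),
                         ("https://billing.internal/api", "billing-api")]:
    print(agent_call_endpoint(vault, endpoint, secret))
```

Each hop in the workflow gets its own credential lookup; if any fetch or authentication fails, the whole workflow stops, which is exactly the failure mode described above.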

Ashish Rajan: Is that the same as what people mean when they say agentic workflow?

Aditya Patel: Yeah.

It's a very similar workflow, and I'm talking about how you add just the authentication layer, in this case, for the agentic workflow.

Ashish Rajan: Sure. And Tejas, what are your thoughts on the whole agentic workflow and the security for it, man?

Tejas Dakve: I actually look at agentic AI a bit differently than traditional AI-based systems.

Of course, some of the threat vectors still apply to it. But what is agentic AI, and how different is it from a traditional AI-based system? With a traditional AI system, you communicate with the AI: you pass some command and the AI responds back. It's like a conversation.

The conversation, or the data, is the main [00:22:00] aspect here. But with agentic AI, you of course communicate with it, but the agent is going to take some sort of action on your behalf. It could be sending an email or calling some API. So for me, the action associated with that agent becomes a very important factor here.

With conversational AI, or traditional AI, the conversation is the most important factor. But with agentic AI, the IAM roles, policies, and permissions associated with that agent are more important to me. So from an application security perspective, of course I would care about the traditional aspects, but I would care more about what the IAM roles, permissions, and policies associated with it are.

What can go wrong if that agent gets controlled by some malicious actor? So this is one important distinction that I want security leaders to be aware of.
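[Editor's note: a small sketch of what "caring about the agent's IAM roles and permissions" can look like in practice. The lint function and policy document below are hypothetical, written in an AWS-like policy shape; the idea is flagging wildcards that widen the blast radius if the agent is hijacked.]

```python
def flag_risky_agent_policy(policy):
    """Hypothetical lint for an agent's IAM-style policy document.
    AppSec reviews the code, but the blast radius lives here: flag
    wildcard actions and unscoped resources the agent does not need."""
    findings = []
    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        for action in actions:
            # "s3:*" or "*" grants far more than an email-sending agent needs.
            if action == "*" or action.endswith(":*"):
                findings.append(f"wildcard action: {action}")
        if stmt.get("Resource") == "*":
            findings.append("policy applies to every resource")
    return findings

agent_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "ses:SendEmail",
         "Resource": "arn:aws:ses:::identity/app"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ]
}
print(flag_risky_agent_policy(agent_policy))
```

A check like this is one place AppSec and CloudSec naturally meet: the same review that looks at the agent's code can run it over the agent's permissions.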

Ashish Rajan: Yep. And I guess, [00:23:00] obviously, I think you were talking about cloud being the default to build AI agents on, and there's also the term multi-AI-agent frameworks. How does this work across the cloud in terms of AI security, specifically from a cloud perspective? I think Tejas just mentioned the whole IAM roles and stuff being important as well, which made me think of the whole cloud world.

So, yeah, how is that different in the cloud world?

Tejas Dakve: So before Aditya answers it, I just want to add one more thing. From my perspective, I think there is a gap between application security and cloud security when it comes to agentic AI, because AppSec cares about source code, logic, things like that.

But we don't always have visibility into the roles, permissions, and policies associated with that agent. And typically that is handled by the cloud security team.

Ashish Rajan: Yeah.

Tejas Dakve: That's the silos? Yes, the silo. Yeah. And that definitely gets highlighted when we are talking about [00:24:00] agentic AI. I have to go and figure it out in Orca, Wiz, or some CNAPP tool, if I'm fluent with it.

If not, I have to go and find someone who can explain to me what sort of permissions are going to be associated with this agent.

Ashish Rajan: Yeah. Sorry, Aditya, I'll pass it to you. 'Cause going back to it, for a lot of people, and obviously it may be different for different organizations, but for most people, when they're trying to start a new feature or function or program being built, the threat model initially doesn't really go into the details of the cloud pieces.

It just talks about, hey, we are doing an agentic AI application threat model. And as someone who's trying to do a threat model of that application, you may not have the full context at that point in time.

Aditya Patel: Yeah. For all practical purposes, these are two different threat models. There's a threat model for the underlying platform, or the cloud, and then there's a threat model for the application itself.

Ashish Rajan: So how would you describe it? Because, going back to what you said about the paved road, and this is maybe one of the reasons why we are talking about the [00:25:00] silos as well, to what Tejas just mentioned, that separation is his experience, and a lot of other application security people would go down the path of the OWASP Top 10.

Hey, I know that SQL injection is like the number one thing that we find, or cross-site scripting is number one. So a lot of energy is spent on threat modeling from that perspective, and they may not have the knowledge of CloudSec, especially if you throw multi-cloud in there; that's another layer on top of that. And Kubernetes is another layer on top of that. You can keep adding more layers in cloud.

On the other end, the threat model for that, I don't know how many people account for it in the beginning. So to your point, if that's a different threat model, maybe in this agentic AI world, how would you do a threat model of an agentic AI application?

And I think now, because we have both the cloud and the app people here, how do you suppose it should be happening, versus how it traditionally happened?

Aditya Patel: Yeah, yeah. So I think the first thing is, and we have been talking about how the fundamentals [00:26:00] apply, they still apply, but we have to acknowledge the fact that new attacks are being introduced, and new architectural considerations need to be there for multi-agent or agentic workflows. If you look at threat models specifically, there are frameworks like STRIDE or PASTA; there's a whole soup of acronyms for these threat modeling methods.

Essentially they're all the same, slightly different approaches, but you want to look at how you're doing authentication, how you're doing authorization, what's the encryption layer, what's the observability and auditing, et cetera. But none of the existing ones address the new threat vectors: things like prompt injection, data poisoning, model theft, the complexities you get with automated requests made through these agents. Fortunately, there has been some progress in the open source world on this. There is the MITRE ATT&CK [00:27:00] framework; now they have launched something called the ATLAS framework.

Ashish Rajan: Yep.

Aditya Patel: And that covers the AI threats from a library point of view, what type of gaps to look for. And I think the Cloud Security Alliance also has something similar called MAESTRO. But the idea is you need to look for these specific threats, the OWASP Top 10 for LLMs, the specific threats that are being introduced by these models, in your threat model. So that, I think, needs to be factored in.

Tejas Dakve: And I would say automated threat modeling wherever possible, or periodic threat modeling. A one-time threat model may have worked for our traditional systems, but with AI-based systems it's just not suitable anymore.

I can give you one example as well: GitHub Copilot. When it got rolled out, and when organizations seriously considered it, almost every [00:28:00] organization did some sort of threat modeling before they introduced GitHub Copilot into their development ecosystem. Then, a few months back, a researcher released a paper saying that GitHub Copilot can be tricked into introducing a backdoor into your product.

So now there is a new threat associated with GitHub Copilot. Yeah. And I want to use this as a use case because it's a serious one: the fact that GitHub Copilot can introduce a backdoor into your product. The introduction of such threats makes your previous threat model a moot point.

So we have to look for automated threat modeling, if not periodic threat modeling, as much as possible.

Aditya Patel: And, and use, uh, and, and use AI for it, right? Yeah, you can just ask your chat bot to, uh, you know, give you 10, top 10. This is my architecture. These are my you know, business cases, use cases, technical use cases.

Give me the top 10 threats to expect. So use it to your advantage.

Tejas Dakve: And I have been [00:29:00] preaching that, actually, to be honest. If developers are using AI for their benefit, security teams need to embrace AI for security purposes as well. Almost every security vendor is now releasing an MCP server from their end.

And that can fit into... that's a whole other topic. Yes, a separate podcast. But I want leaders from security teams to be cautiously ready for this change. I don't think we can sit on the side and let developers rely on AI for their benefit. We have to use it for security purposes as well.

Maybe not for automated remediation, I don't think we are there yet, but at least for automated suggestions or automated feedback for developers.

Aditya Patel: There's just one thing I want to add there, and this is a hot take: I don't think we are at a point where we can completely rely on these systems.

There has to be a human in the loop [00:30:00] from a security point of view. Specifically, if we are using these tools for remediation guidance, or for writing our test cases, security use cases, or threat model use cases, you can't take it for granted, or just take it at face value.

The term I've seen used is that these chatbots are dreaming up the internet: they have read the internet, and now, word by word, they're dreaming things up. So you still need the subject matter experts. Good news for us, our job is safe, at least for some time, but you need a human in the loop, at least for the next couple of years.

Things are evolving fast, but I think for now our job seems to be safe.

Ashish Rajan: Yeah, I was gonna say, a friend of mine, Daniel Miessler, mentioned this in one of his blogs: MCP servers are like my prompt talking to another prompt and hoping that a third prompt understands. It's a chain of prompts at that [00:31:00] point in time.

Tejas, you mentioned automated threat modeling and using AI within the security teams. How are you seeing that? Because I think there's an overwhelming sensation a lot of people have with AI: hey, there's an overwhelming amount of code coming through.

And I'm a bit nervous about the fact that I used to look at 500 security things, and now I'm looking at 5,000. Decision fatigue, for lack of a better word. And you are saying the solution could be that you use AI to solve your problems.

Is this something that people can build on their own, or do they have to go to a vendor, or can they just build the skills in-house?

Tejas Dakve: It depends on your underlying resources, to be honest. Oh, yeah. I depends on Oh yeah. Keeping,

Ashish Rajan: Keeping ideal scenarios in mind, yeah, of course.

Tejas Dakve: Right. And this is my personal thought. I believe if there is a solution out there in the market which is well supported [00:32:00] by a mature security vendor, I would rely on their expertise, make use of their support, and use that product or tool within my program to enhance it further, rather than using my own resources.

Of course, people will have different opinions, but this is how I look at it. And you gave the example of thousands and thousands of different projects. It is true: we are no longer able to keep track of thousands of different vulnerabilities and then find the top 5% or 10% that need to be prioritized.

It's like finding a needle in a haystack. What I have been preaching is: create a pipeline that is most suitable to your program and to your organization. What I mean by that is, accumulate all vulnerabilities onto one single platform, and I'm talking [00:33:00] about a posture-management-type solution.

Ashish Rajan: Okay, right, right.

Tejas Dakve: Accumulate all vulnerabilities, all threats, all risks onto one single platform and create a pipeline from it. Move from all vulnerabilities to just the top 5% or 1% of vulnerabilities that matter most to you and to your organization. You can add different factors or criteria to build that pipeline.

It could be all vulnerabilities, then applicable vulnerabilities or reachable vulnerabilities. You can consider factors such as the EPSS score or the CISA KEV advisory. You can also add factors such as an internet-available proof of concept, or whether it is internet facing or internal only. Use these different criteria, build a pipeline, and focus on the top 5% of vulnerabilities that matter most to your organization.
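To make the pipeline Tejas describes concrete, here is a minimal sketch. The field names, weights, and thresholds are assumptions for illustration, not any vendor's schema; a real posture-management tool would pull EPSS and KEV data from live feeds.

```python
# Illustrative vulnerability-prioritization pipeline.
# Field names and weights are assumptions, not a real product's schema.
def priority_score(vuln):
    score = 0.0
    if vuln.get("internet_facing"):
        score += 3                        # exposed to the internet
    if vuln.get("in_cisa_kev"):
        score += 3                        # listed in CISA's KEV catalog
    if vuln.get("public_poc"):
        score += 2                        # public proof of concept exists
    score += vuln.get("epss", 0.0) * 2    # EPSS probability, 0.0-1.0
    if not vuln.get("reachable", True):
        score = 0.0                       # unreachable code path: deprioritize
    return score

def top_slice(vulns, fraction=0.05):
    """Keep only the top fraction (e.g. 5%) of vulnerabilities by score."""
    ranked = sorted(vulns, key=priority_score, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

vulns = [
    {"id": "V1", "internet_facing": True, "in_cisa_kev": True, "epss": 0.9},
    {"id": "V2", "reachable": False, "epss": 0.8},
    {"id": "V3", "internet_facing": False, "epss": 0.1},
]
top = [v["id"] for v in top_slice(vulns, fraction=0.34)]
```

Here the internet-facing, KEV-listed finding outranks everything, and the unreachable one drops to zero, which is exactly the "top 5% that matter" filtering described above.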

This is the best way to handle [00:34:00] all these vulnerabilities. I would say we are in an era where quality matters more than quantity. Your CTO is not going to care that you fixed a hundred vulnerabilities. What if half of those vulnerabilities are internal only or have no exploitation path?

Ashish Rajan: Yeah.

Tejas Dakve: Fix only five vulnerabilities which are internet facing and have the highest likelihood of exploitation, and your CTO and CEO will be very happy to know that.

Ashish Rajan: Yep.

Aditya Patel: And to lay out the cloud view on that, when we spoke about cloud earlier: there is IAM in cloud. So on one end, as Tejas said, we are talking about how to prioritize the remediation, what to fix.

Of course, you can't fix everything. On the other end, when you are in cloud, you need to determine, maybe by environments, separate accounts, or organizations within your cloud footprint, which [00:35:00] ones to move fast in, where velocity is more important, and which ones to be more pragmatic and slower in, for business-critical or regulated industries.

And depending on that, you can adjust your IAM posture. You can have more restrictive IAM policies. You can limit your developers to certain services and API calls in a more restrictive way. But to complement that, you can also have environments where the restrictions are not as tight, where velocity needs to be higher, so they don't have to come back.

Going back to silos: they don't have to come back to the security team saying, hey, I need to perform these three functions, I don't have the IAM policy, it's too restrictive. Then you are just adding friction. So separate out the two. That way, I think, you can move fast where you can, and at the same time be careful in some of the regulated or business-critical workloads.
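As a concrete aside, the tiered IAM posture Aditya describes can be sketched by generating AWS-style policy documents per environment tier. The tier names and service lists below are illustrative assumptions; real policies should be scoped to specific resources rather than `"*"`.

```python
# Sketch: generating AWS-style IAM policy documents per environment tier.
# Tier names and allowed-action lists are illustrative assumptions.
TIER_ALLOWED_ACTIONS = {
    # Sandbox: high velocity, broad (but not unlimited) service access.
    "sandbox": ["s3:*", "lambda:*", "dynamodb:*", "logs:*"],
    # Regulated / business-critical: narrow, explicit actions only.
    "regulated": ["s3:GetObject", "dynamodb:GetItem", "logs:PutLogEvents"],
}

def developer_policy(tier):
    """Build a policy document allowing only the tier's vetted actions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": TIER_ALLOWED_ACTIONS[tier],
            "Resource": "*",  # in practice, scope this to team resources
        }],
    }

policy = developer_policy("regulated")
```

The design choice is the one from the conversation: developers in the sandbox tier rarely need to file a ticket, while the regulated tier trades velocity for explicit, reviewable permissions.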

Ashish Rajan: Would that be different if AI is already part of your pipeline? I mean, Tejas, you mentioned the CI/CD pipeline earlier. A lot [00:36:00] of applications we've had for years in most organizations are already embedded, and a lot of them have done a great job of making it fully DevOps friendly: CI/CD pipeline, DevSecOps, all of that.

Because I think we've mentioned GitHub Copilot, which is an AI component that's now part of our CI/CD pipeline. Developers may be using Cursor or any other code generation tool. We may have the cloud providers giving their versions of foundation models that you can use as well.

And I can go and get my own personal one if I want to, on an enterprise license. Where I'm going with this is that I almost feel like we already have our silos within the organization, right? And there's a collaboration gap here. How do you see the collaboration between the two as AI becomes more integral to CI/CD pipelines that have existed for many years?

How do you see this collaboration happen? Are we turning into product [00:37:00] security teams now?

Aditya Patel: So I think it becomes even more important for the lines to blur between security and the different product teams. Still, at a large enough scale, you need to have some sort of federation: security teams, security skills, security professionals.

Security is not revenue generating, so companies will never have a one-to-one security engineer to software developer ratio. So security has always had to do more with less, which means you have to enable the developers and, to some extent, rely on them.

So those lines need to blur. And I think the way collaboration can happen there, and I've seen this work really well, is: have open lines of communication, have cross-team training, use paved-road solutions. Don't burden the developers too much. They don't [00:38:00] have to know the nitty-gritty of security fundamentals on each and every thing, but they should know the top three issues affecting the organization and how to address them.

So, yeah, I think a combination of these should work. And it all starts with the culture and mindset.

Tejas Dakve: Yeah. And I think we have to develop T-shaped security engineers. What I mean by that is, of course we want a security engineer who is an expert in their own domain, like AppSec, CloudSec, whatever it is.

But they should also be able to connect the dots between various different sections. I'm not advocating hiring a jack of all trades. I still believe in hiring for depth. But when it comes to rewarding, we have to reward for breadth as well. Security leaders need to create pods which are cross-functional.

And this needs to go beyond cloud security and [00:39:00] application security as well. If we are talking about AI, and since this is the theme of this conversation, we have to create cross-functional pods between application security, cloud security, legal, compliance, data security, and product engineering.

These teams need to come together, and only then can we have some sort of AI governance within an organization, before shadow AI starts taking place everywhere without anyone knowing about it.

Ashish Rajan: I love the pods concept as well, because it's very cross-functional, and the problem is way bigger than just one or two teams can solve, I guess.

So in terms of secure by design, because we spoke about the existing applications that are in a CI/CD pipeline, for the new AI agents or the new AI applications being built, you mentioned secure by design. How does that change? Would that be [00:40:00] different in how you would build new applications today?

Now knowing what we know so far: we have DevSecOps practices in the organization, and we have done basic foundational security for a long time, so those practices are still there. For people who are going, hey, I want to be fully AI native: the same shift happened in the cloud.

There was a whole wave of bandaging a cloud solution, then it became cloud native, then Kubernetes and container native. We are kind of seeing the same movie in replay, in a way. People are trying to think about, hey, how do I become more AI native? So when thinking about AI native specifically, with the mindset that security leaders cannot be gatekeepers anymore, what should people be doing for safe adoption?

I guess the practices you mentioned, do they still apply?

Aditya Patel: Yeah, of course, a hundred percent they still apply. You just need to evolve the practices, or evolve the toolchain a bit. As you do your [00:41:00] threat models, you need to update, say, the library of threats that you'll now be factoring in.

You need to update the architecture patterns that are approved in the organization. For an MCP server, this is the approved pattern. As long as the team adheres to more or less the same pattern, they're good. If there's any deviation, cut a ticket and the security teams will review. This way you're not reviewing everything, only the delta.
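An editor's sketch of that "review only the delta" idea: compare a team's proposed architecture against an approved paved-road pattern and surface only the deviations. The pattern fields here are hypothetical.

```python
# Sketch of "review only the delta": diff a proposed architecture
# against an approved paved-road pattern. Field names are hypothetical.
APPROVED_MCP_PATTERN = {
    "auth": "oauth2",
    "network": "private-vpc",
    "egress": "allowlist",
    "logging": "centralized",
}

def deviations(proposed, pattern=APPROVED_MCP_PATTERN):
    """Return only the settings that differ from the approved pattern."""
    return {k: proposed.get(k) for k in pattern
            if proposed.get(k) != pattern[k]}

proposed = {"auth": "oauth2", "network": "public",
            "egress": "allowlist", "logging": "centralized"}
delta = deviations(proposed)
# Only the deviation needs a security ticket; everything matching
# the paved road passes automatically.
```

In practice this logic would live in a policy-as-code gate in the pipeline, but the principle is the same: security reviews scale because conforming designs never reach a human queue.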

So I think those sorts of optimizations will continue to be made. But in terms of secure by design, it depends on the threat model. And the threat models for cloud and for applications, which we spoke about, are merging now. They have merged a good deal.

Back in the day, Tejas and I worked together at one point, early in our careers, on a project where we would have different threat models for cloud, different threat models for applications, different threat models depending on the use case. Now that is [00:42:00] merging, because the compute that you use, or the database that you use,

is an integral part. It has been vetted already by hundreds of other reviews. So those lines are merging, and I think the same will happen with AI. You need not have a separate threat model just for AI, because AI is an integral part of your architecture, of your toolchain.

So it doesn't have to be separate, but it needs to evolve.

Ashish Rajan: Did you have some thoughts on that? I mean, your world also had the whole DevSecOps play, where we've been trying to convince developers to do security. When AI native comes in, are we asking them to do security for applications and for AI as well?

Tejas Dakve: Not really. I would say my focus has been primarily on building the guardrails and the paved roads. I see a lot of similarities with the way we introduced security into CI/CD, and I want us to rely on that same experience. The way we introduced security to [00:43:00] CI/CD, I want us to follow the same pattern and introduce security into the AI lifecycle.

We don't have to reinvent the wheel here. That's what I feel.

Ashish Rajan: So what would that look like? Would that still be SCA, SAST, and DAST?

Tejas Dakve: Oh, no, no. That would be different. The test cases, tools, and the scanning type of activities, those have to be different. If I go into the technical details, I would say licensing checks for the LLM, resiliency tests for the model, hallucination tests for the model. Those are the things we need to bake into the pipeline, the way we baked SCA, SAST, and DAST into the pipeline. And that's what I meant by not having to reinvent the wheel. We can rely on what we did to successfully achieve a mature DevSecOps practice, and use the same pattern for a mature AI-related practice [00:44:00] as well.
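To illustrate what "baking AI checks into the pipeline like SCA/SAST/DAST" might look like, here is a hedged sketch of a CI gate. The check names, the approved-license set, and the hallucination threshold are all assumptions; real evaluations would come from a model card and an eval harness.

```python
# Sketch: AI-specific checks wired into CI the same way SCA/SAST/DAST
# stages are. Check names and thresholds are illustrative assumptions.
def check_model_license(model_meta):
    """Gate on an org-approved license list (hypothetical set)."""
    approved = {"apache-2.0", "mit", "llama-community"}
    return model_meta.get("license", "").lower() in approved

def check_hallucination_rate(eval_results, max_rate=0.05):
    """Gate on the fraction of benchmark answers flagged as fabricated."""
    return eval_results.get("hallucination_rate", 1.0) <= max_rate

def ai_pipeline_gate(model_meta, eval_results):
    """Fail the build if any AI-specific check fails, like a SAST gate."""
    checks = {
        "license": check_model_license(model_meta),
        "hallucination": check_hallucination_rate(eval_results),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, failed = ai_pipeline_gate({"license": "MIT"},
                              {"hallucination_rate": 0.02})
```

The structure mirrors the existing DevSecOps pattern exactly: each check is a named stage, the gate aggregates results, and a failing stage blocks the merge with an actionable reason.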

Right.

Ashish Rajan: Okay. So obviously you guys are talking to and seeing people at different stages of AI security as well. What are some of the mature ones doing? And could you put it into some sort of maturity levels for people who are listening or watching this and going, hey, you know what, they gave me good insight:

I'm ready to start my journey on maturing my AI security practice. Because, I'll be honest, not everyone's doing AI as much as we would like to live in a world where everyone's doing AI. The reality is there are still a lot of people who have to come on that journey, especially people who are perhaps thinking about doing a security uplift based on what we are saying about how the silos need to merge in this new AI-native world.

What's the maturity of some of the super mature people you come across in terms of how they do AI security? And for people who are starting today, what are some of the different levels or stages they can think about for [00:45:00] how to implement AI security in their organization?

Aditya Patel: Yeah. Funny you mention the stages. Have you heard of this thing called the five stages of grief?

Ashish Rajan: I was gonna say, is it the five stages of grief? Yeah, I have heard of it. Not sure I've reached the acceptance stage yet.

Aditya Patel: Yeah. So: denial, anger, bargaining, depression, and acceptance. That's right. AI is on a similar curve.

Some companies are still at the denial stage, but most have moved past it. I think very few have reached the acceptance stage; the more mature ones are closer to the right on this curve. But I think the first thing a company needs to do is just accept that AI is here to stay and you need to adopt now.

Adopt or go extinct, basically. Then, the more mature companies that I've seen doing this, it's a combination of building, I think we spoke about it, the paved-road solutions.

Ashish Rajan: Oh yeah.

Aditya Patel: Specific tooling. [00:46:00] Reorganizing some of the teams into a purpose-driven mission of solving the AI security problems.

Coming up with risk matrices or a risk library for AI-specific threats. Open source is making good progress there with some of the frameworks we spoke about. But you need to take that open source tooling or common library and convert it into something very specific to your business: the threats that apply to you and to your applications. And then don't stop there.

Have a remediation library also. So if your AI threat model, or your application threat model, has these three things: how do you sanitize input for a particular library? How do you address AI-generated code? Maybe have a way of adding signatures, or a way of doing prompt injection testing, or some extra bit of testing on code [00:47:00] that is more heavily written by AI.
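An editor's sketch of the remediation library idea: a mapping from threat categories to concrete, org-specific guidance, so a gap in the library is as visible as a gap in the threat model. The entries below are illustrative examples, not a complete catalog.

```python
# Sketch of a remediation library keyed to a threat library: each threat
# category maps to concrete remediation guidance. Entries are illustrative.
REMEDIATION_LIBRARY = {
    "prompt_injection": [
        "Treat all model output as untrusted input",
        "Separate system instructions from user-supplied content",
    ],
    "insecure_output_handling": [
        "Contextually encode model output before rendering it",
        "Never execute model-generated code without sandboxing and review",
    ],
}

def remediation_for(threat_id):
    """Look up guidance; an empty list signals a gap in the library."""
    return REMEDIATION_LIBRARY.get(threat_id, [])
```

Pairing the two libraries means a threat-model finding always arrives with an actionable next step, which is what keeps the "don't stop at the threat list" advice operational.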

It is difficult, because it is very hard to tell apart which code has been written by humans and which by AI. And then, to tie it all together, the governance needs to evolve as well. So, end to end: what are your standards, policies, and procedures around AI? How do you operationalize them?

I think earlier we spoke about using a new model. For example, if some development team wants to use a new open source model, how soon are you bringing that model into your company so you're not disrupting the pace of innovation? But at the same time, you want to vet that model, hosted locally.

Maybe do some RAG on it. So I think governance is the answer there, and that's the road to being proactive here.

Ashish Rajan: Yeah,

Tejas Dakve: Adi actually covered my points as well. But I would bring governance upfront, because what I have seen is that mature organizations, the way they have successfully tackled the AI problem [00:48:00] is they have gone ahead and established AI governance through some sort of AI-related policy.

That's the best way: clearly describe what is okay, what is not okay, and how to pursue what is okay. As long as you are establishing that AI governance and AI-related policy, you can then control the technical debt associated with shadow AI that may come along the way.

I would highlight AI governance as of utmost importance. And I think we have talked briefly about cross-functional teams as well. That is another important factor. A lot of organizations don't have a dedicated AI security team. They have an AppSec team, a CloudSec team, a data science team, compliance, legal, but organizations still don't have a dedicated AI security team.

And that's the reason I want to again highlight the cross-functional teams and the role that [00:49:00] they play with respect to AI governance. So these are the two main things I would highlight that I feel mature organizations are doing better, compared to organizations which are still trying to tackle the AI-related problem.

Ashish Rajan: I think it definitely aligns. Funny enough, a few days ago I was working on this and came up with a framework called AGENT, aptly named: the A is for awareness, because a lot of people sometimes are not even aware people are using AI in their organization. And the second one, the G, is governance.

And I found that a lot of the CISOs I've been working with or talking to all felt, in line with what you guys mentioned, that it at least gives a quote-unquote paved road: hey, this is the happy path, go full throttle on this happy path. But the moment you steer away from it...

Let's have a conversation, let's see where we want to go. And that's where the rest of AGENT comes in; I made a post about this as well. I love the point about how [00:50:00] people may have dedicated AI security teams, but if we take a step back: what are some of the skills people require to be AI-security ready?

I don't even know what that would be, and it feels like it's changing every single day. I feel like if I knew n8n or one of those automation tools, until yesterday that was okay. But today I feel like I need to know how to do vibe coding. I need to go beyond that and build my own applications.

My product managers are building applications, prototypes. So I've got two questions there. One is: how does the security role change as the adoption of AI increases? We'll start with that, and I'll come to the skills part as well.

Tejas Dakve: Sure. If I go first, I would like to highlight how it has made things difficult for security folks as well, first of all.

Ashish Rajan: Yeah,

Tejas Dakve: so introduction of ai. Before introduction of ai, security engineers were expected to know, at least I'm talking [00:51:00] about application security engineers. Oh yeah. They were expected to know core AppSec concepts. They needed to know threat modeling web penetration testing, mobile related threats as well.

Then came along cloud security, ten years ago. Even though there may be a dedicated CloudSec team, AppSec engineers are still expected to know some aspects of cloud security. Then in between came DevSecOps, and AppSec engineers needed to know some aspects of DevSecOps as well.

Ashish Rajan: Yep.

Tejas Dakve: A couple of years back, AI came into the limelight, and since then we need to know the AI security aspect as well. What I mean by that is there is a whole new group of attack vectors that application security engineers need to know if they are going to pen test a product that is powered by AI.

AppSec engineers also need to know, first of all, how developers are using AI to write their software. Only [00:52:00] then can they understand and speak the language of developers. Developers are no longer building the logic on their own; they're leveraging AI to write software, so we have to understand how AI is helping them as well.

So these are the new problems I wanted to highlight. AppSec engineers are now expected to know more and do more compared to before AI security came along. And coming back to your question, I think this is a good segue to my answer, because AppSec engineers are expected to know a lot of these things.

We need a T-shaped application security engineer: a security engineer who is an expert in at least one or two domains, but who also has an understanding of various different fields. Only [00:53:00] then can we fill the void that has been presented to us because of AI security.

Aditya Patel: Let me take a quick detour to explain my point. There is this book called Range, by David Epstein. Have you heard of it?

It's a great book. The core message is that there are two types of individuals in the world, especially in sports.

Either you are an expert coming from a single expertise, or you are an expert or a world champion coming from a multidisciplinary background. And he gives the example of Tiger Woods and Roger Federer. Tiger Woods, when he was growing up: by age two he was hitting golf balls, by age seven or eight he was winning tournaments, and he had only ever played golf.

He was winning tournaments and he had only played golf and nothing else. Roger Federer by comparison. He played everything in anything like soccer, badminton, wrestling, skiing. And only when he became a teenager, he started playing tennis. [00:54:00] So both of them became like GOAT or legends, but they took different parts.

And I think the conclusion that comes out of this research is that there are two types of environments. One is a very deterministic environment, like chess or music or even golf, where if you just do one thing, you can be great. The other is a more non-deterministic environment:

tennis, or science, arts, or cybersecurity, where having a multidisciplinary background and training helps. So if you look at the journey Tejas just took us through: if you're starting out in cybersecurity, or if you want to pivot to cybersecurity, it's better to dip your feet into Android security, iOS security, AppSec, cloud security, secure coding. That will help you.

And again, AI is there to help, and then you can become an expert [00:55:00] on it. My personal view is that we will have dedicated AI security people or teams, but if you want to be successful, even as an AI security engineer or AI security architect, it helps to have that multidisciplinary background.

Tejas Dakve: For sure.

Ashish Rajan: Oh wait, I remember you telling me you wrote a CSA blog on something similar. Was that on a similar note?

Aditya Patel: Yeah, I can give you the link, but I wrote a blog on how security careers are evolving in the age of AI. Whether you are an analyst, an engineer, an architect, or a consultant, how you can skill up on AI no matter what function you're performing.

Ashish Rajan: What's the TLDR? Is it the five stages of grief, where you start with "I can't do this" and end up at "well, I guess I have to do it either way"? What was the TLDR?

Aditya Patel: The TLDR [00:56:00] is: no matter the functional security subdomain you are in, you have to skill up on AI.

And then I give some actionable steps. Like, if you're an analyst, go and do this, read up on this.

Ashish Rajan: I think what both of you said is actually, in a way, fascinating. And obviously our families and friends are different generations as well, but you know how a lot of quote-unquote tech people get asked, hey, come fix my computer, my internet?

We've all done this. To what Tejas is saying: hey, I may be a Java expert at my day job, but the moment I go home, I'm just the tech guy for internet support, for the laptop, for how do I use ChatGPT, for why is my phone not charging. There is this plethora of things we normally do, but we don't qualify that as, hey, we are all multi-skilled.

If you think about it, we have the foundational technical skills. We've all done that for years, but [00:57:00] we don't put it on a resume that, hey, I can fix the internet with just a router restart, or whatever.

Aditya Patel: On that note, I actually just helped one of my family members set up his Windows laptop, so I'm still doing that.

Ashish Rajan: Oh, there you go.

Aditya Patel: But that's what I said.

Ashish Rajan: In a way. Because, one more thing: hey, that used to be a sysadmin job. That was not a thing that someone who's qualified, who has done years of cloud security and app security, does. Yet you come back home and you're basically updating Windows machines, like, what is going on there?

And I feel like maybe it's just a realization that we've always done this, we've always been multi-skilled; it's just that in my work context I need to know something else as well. And I think maybe the AI piece makes the transition a bit easier, because you can ask it questions. But that only holds until we hit the point you mentioned in the beginning, where AI is smart until you know the topic you're talking about.

I think you said something about that. Once you become an expert yourself, then you can start judging it more: [00:58:00] is it giving me the right information?

Aditya Patel: For sure. Yeah. And it's not even trust but verify; you can't trust it. Especially if you're using it for your day job in a domain where you are supposedly the expert.

You have all the more onus to verify the information you're getting out of these systems.

Ashish Rajan: Do you guys think we are moving to a security generalist viewpoint for security teams? Or is it going to go down the path of, kind of, what happened with cloud security, where we had dedicated teams?

I kind of lean the same way, where I feel there will be a dedicated AI security team in the future as AI evolves. I don't know if both of you feel that's where we are headed once it matures a bit. Maybe not right now, we don't have enough AI-native work around, but eventually we'll get there.

And when we do, would we primarily be becoming more security generalists, where a Java person also understands cloud security because they can ask AI a question? Like, [00:59:00] hey, what's the Azure command for finding out the identity? I don't know what that is, but at least I'm hoping AI does. Is that where we are heading, if you were to look into the future?

Tejas Dakve: I think that trend has already started. I'm speaking from my personal experience as well: this trend has already started. AI is breaking the wall between these different segments of security. I also encourage my team members at my organization to go and look for collaborative opportunities with the different security teams we have.

Of course I want all of us to learn more about AI security, but I also want them to understand the cloud security aspect or the incident response aspect, because there are incidents associated with AI as well. So this change is already happening. I don't think teams will merge; there will be dedicated AppSec, CloudSec, and in the future AI [01:00:00] security teams, but the gap is narrowing.

We would still have experts, but we would also have people with broad knowledge, even someone who is junior in any particular role.

Ashish Rajan: Would you give different advice to someone who's starting today? Because for a long time, a lot of people were on the path of, hey, get a cloud certification, or get an OffSec certification from a pen testing perspective.

We've all done that, or at least I have. I've failed, and I just stopped. I don't even know if the OSCP is still a thing. I'm sure it is. It is? Yeah. Oh, good. But what do you advise people coming in? Because I don't know where they should be leaning, say, if they wanted to join AppSec or CloudSec.

Say they've made up their minds. A developer out there wants to become an AppSec person. Is your advice to them different today versus what it would've been two years ago?

Aditya Patel: Yeah, so I get these questions frequently, probably [01:01:00] not as much as you do, I don't have the social media reach that you do, Ashish.

But my advice to them is work backwards from what you are seeing. If you're job hunting, work backwards from the type of jobs that you're targeting. Don't create this ideal plan of, hey, I will learn only AppSec, or I will learn AppSec first and then CloudSec. It's a security generalist type of field, and if you're targeting a particular job market or job set, you need to know a bit about everything.

The T-shaped model that we just brought up. And then, depending on the role you're targeting, go deep on that one. So yeah, I think we will have dedicated AI security roles. It's starting, but it's very nascent. If you are new in the field, I think security generalist training across these different functional roles is the way, at least for now.

Tejas Dakve: I think the problem that young people have is getting into the [01:02:00] cybersecurity field.

And I speak with a lot of people. The question is, why do you want to work in it? Is it for salary purposes, or do you really have passion for it? Even when people have the passion, they lack the experience. So my advice to them: while you are upskilling yourself, while you are doing all these various courses, like online programs from different areas of security, see if you could get an entry-level job at an IT help desk.

Or a tier one, tier two support role, or a network engineer type role. You don't have to start in cybersecurity. It's difficult to get into cybersecurity. While you are doing that, look for other entry-level roles; even those roles are valuable. Then you get experience in how to network, how to communicate, how to troubleshoot.

Those skills are important in cybersecurity as well. So [01:03:00] look for other opportunities if you are not able to get into the field of cybersecurity directly. I know we all crave experience, and in order to get that experience, you have to start somewhere.

Ashish Rajan: As you say that, the first thing that came to mind is every Gen Z person out there, or anyone who's a parent of a Gen Z person, they're all thinking that.

It's easy for you to say, man, I want all the answers right now, 'cause that was supposed to be the case, I should have had this ages ago. But those are all the questions I had, so thank you guys for sharing this. Three fun questions as well, so we get to know a bit more about you outside the whole context as well.

The first one being: what do you spend most of your time on when you're not trying to solve the AI security problems of the world? Maybe I'll start with you first.

Aditya Patel: Trying to catch up on sleep. Young kids, any spare time goes there. Oh man. What about you, Tejas?

Tejas Dakve: I have a young family. I don't have kids, but I have two cats and a wife.

So I look forward to spending time there.

Ashish Rajan: Oh, [01:04:00] fair. And the second question: what is something that you're proud of that is not on your social media?

Tejas Dakve: Oh, wow. I actually come from a very small town in India. I didn't even have an English language school while growing up. I came from a proper regional school. I started learning English when I was in fifth grade. I sucked at English until 2013, to be honest. Adi may have seen me struggle as well, because he knows me from 2015.

I feel really proud about the journey that I have had, starting from a very small town in India with no exposure to the English language, and then ultimately landing here in the US and now leading a very mature application security program at an established organization.

Ashish Rajan: That's awesome, man.

Thank you for sharing that. What about you, Aditya?

Aditya Patel: Yeah, thanks for sharing that, [01:05:00] Tejas. Something that I don't post too much about is that I'm involved with a few organizations, NGOs in India, helping young kids with their schooling or mentoring kids from colleges. So I take great pride in helping them, especially the ones in need.

Because they need all the help and support, and they deserve it. So I'm really proud of that work. The organization is called Anjali Charitable Trust. It's based in, uh, so PR.

Ashish Rajan: Oh, nice. And they support, I guess, underprivileged kids, I imagine?

Aditya Patel: Underprivileged kids, yeah. They provide afterschool meals and afterschool coaching for underprivileged kids. Yes.

Ashish Rajan: Awesome, thanks for sharing that. That's awesome as well, man. And the final question: what's your favorite cuisine or restaurant? Tejas, we'll come to you first.

Tejas Dakve: Oh, me? I would have to pick Thai food. I love spicy food. I always go to a Thai restaurant without any hesitation. [01:06:00]

Ashish Rajan: You like one of those super high spicy level ones?

Tejas Dakve: Yeah. I always look at the menu, and typically they have chilies, like one chili, two chilies, three chilies. I always look at the number of chilies, and that's where I start in a Thai restaurant.

Ashish Rajan: Wait, apparently I've been told by a Thai friend of mine that the trick is you can go higher than three as well. You just need to ask them.

Tejas Dakve: Yes, I have tried that once and it was a mistake, so I don't go beyond that.

Aditya Patel: That level is not four or five. That level is, like, challenge accepted. Yeah, basically, you may die at the end of this, but that's okay.

Tejas Dakve: I wasn't able to finish my meal, I'm just being honest with you. So I stick to what they regularly offer.

Ashish Rajan: Fair. I think you definitely have a much higher tolerance than I do. The last time I tried doing that, a friend of mine, she's Sri Lankan, and her and I were, I guess, just young.

We were like, hey, how [01:07:00] hard can this really be? Indian background, Sri Lankan background, we got this. And it was so hot, we ended up not eating beyond the first morsel, and we ended up chugging half a litre of milk after that just to cool our tongues down.

But hey, story for another time. What about you? What's your favorite cuisine or restaurant?

Aditya Patel: Indian street food. Any day, all day. Part of it is, you know, nostalgia, and I miss it. I don't get it as much as I used to. But yeah, Indian street food.

Ashish Rajan: That's awesome, man. Well, that's definitely hard to get.

Although maybe when you come to the UK, there's definitely quite a bit of it, so I'll recommend a few places if you guys come down to London. How about me? I think I've got seasons. My wife and I are currently on a season of a lot of Japanese food.

And also because I'm at that age where I used to love rice, I used to love biryani, I used to love all of that. Now, every time I eat a biryani, the next morning I can see the belly [01:08:00] out, and I'm going, okay, this is not gonna work really well if I have it every single day. So I had to cut down a lot on rice.

So I've gone down the path of Korean barbecue, and I've also gone down the path of Japanese. It kind of keeps evolving at the current stage we're at. Because London has a huge variety of Indian food, I've been into Pakistani food, and I've been into Japanese food. And I think these are healthier choices too.

Aditya Patel: Like, we are big fans of pho. So you go for sushi, you go for pho, and you have no regrets in terms of health also.

Ashish Rajan: Yeah, yeah. I mean, pho, obviously, the Vietnamese food is definitely another level. But I was gonna give a shout out to Pakistani food, because they figured out how to mix vegetarian and meat together.

And I'm like, how is that possible? That was eye-opening for me. Is that even possible, that you can combine them? I think they had okra and beef in one, okra and chicken. I'm like, I thought these are two different dishes, they're not supposed to be together. But somehow they combine it anyway. [01:09:00] People from a Pakistani background would know this.

But those are my current go-tos at the moment. A friend of mine, he came on the podcast as well, was talking about getting into Michelin star restaurants. As you can tell, the reason this question exists is that I can talk about food forever.

The current season is definitely Japanese, Korean, and a little bit of Pakistani food in there, where we can definitely squeeze in some of that barbecue stuff. Awesome.

Aditya Patel: And I hear Indian food, like Pakistani food, is much better in the UK.

Ashish Rajan: Yeah, than what you get in the US.

The way people describe it is that the people who came from India over a hundred years ago kept leveling up. India stayed where it was, and these guys kept leveling up beyond that. So you almost feel like, how is this better than what you find in India?

To the point that there are branches of Indian restaurants and sweet shops opening up their [01:10:00] second branch in London. You go like, wait, normally you would expand from London to India, 'cause you're like, hey, we are bringing it home.

But they're like, no, no, we need to go there, because that's where the true competition is.

Aditya Patel: Ah, I see. Okay.

Ashish Rajan: Whenever you guys come over, I'll definitely recommend a few places, and maybe I'll even hang out with you guys then as well. But for sure, where can people find you on the internet to connect and talk more about AppSec, CloudSec, and how AI security is changing?

Aditya Patel: For me, LinkedIn, and I write a blog called secwale.com, S-E-C-W-A-L-E. But LinkedIn is the best place to get in touch with me.

Tejas Dakve: Pretty much the same with me. I'm very active on LinkedIn these days, so that would be the best place to reach out to me.

Ashish Rajan: Awesome, guys. Thank you so much for joining me for this, and hopefully after this conversation we are able to break some of the silos and get to that T-shaped world.

I look forward to more evolution in this conversation as people build dedicated AI security teams as well. So thank you for joining, and thank you for sharing all that.

Aditya Patel: Thank you for inviting us, Ashish.

Ashish Rajan: Yeah. Thank you all for listening or watching this episode [01:11:00] of Cloud Security Podcast.

This was brought to you by techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on our website, cloudsecuritypodcast.tv, or on platforms like YouTube, LinkedIn, Apple, and Spotify. In case you are interested in learning about AI security as well, do check out our podcast called AI Security Podcast, available on YouTube, LinkedIn, Spotify, and Apple, where we talk to other CISOs and practitioners about what's the latest in the world of AI security.

Finally, if you're after a newsletter that gives you the top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode.
