The Security Gaps in AWS Bedrock & Azure AI You Need to Know


The race to deploy AI is on, but are the cloud platforms we rely on secure by default? This episode features a practical, in-the-weeds discussion with Kyler Middleton, Principal Developer, Internal AI Solutions, Veradigm (AWS) and Sai Gunaranjan, Lead Architect, Veradigm (Azure), as they compare the security realities of building AI applications on the two largest cloud providers.

The conversation uncovers critical security gaps you need to be aware of. Sai reveals that Azure AI defaults to sending customer data globally for processing to keep costs low, a major compliance risk that must be manually disabled. Kyler breaks down the challenges with AWS Bedrock, including the lack of resource-level security policies and a consolidated logging system that mixes all AI conversations into one place, making incident response incredibly difficult.

This is an essential guide for any cloud security or platform engineer moving into the AI space. Learn about the real-world architectural patterns, the insecure defaults to watch out for, and the new skills required to transition from a Cloud Security Engineer to an AI Security Engineer.

Questions asked:
00:00 Introduction
02:30 Who are Kyler Middleton & Sai Gunaranjan?
03:40 Common AI Use Cases: Chatbots & Product Integration
05:15 Beyond IAM: The Full Scope of AI Security in the Cloud
07:30 The Role of the Cloud in Deploying Secure AI
13:10 AWS AI Architecture: Bedrock, Knowledge Bases & Vector Databases
15:10 Azure AI Architecture: AI Services, ML Workspaces & Foundry
21:00 The "Delete the Frontend" Problem: The Risk of Agentic AI
23:25 A Security Deep Dive into Microsoft Azure AI Services
29:20 Azure's Insecure Default: Sending Your Data Globally
31:35 A Security Deep Dive into AWS Bedrock
32:30 The Critical Gap: No Resource Policies in AWS Bedrock
33:20 AWS Bedrock's Logging Problem: A Nightmare for Incident Response
36:15 AWS vs. Azure: Which is More Secure for AI Today?
39:20 A Maturity Model for Adopting AI Security in the Cloud
44:15 From Cloud Security to AI Security Engineer: What's the Skill Gap?
48:45 Final Questions: Toddlers, Kickball, Barbecue & Ice Cream

Kyler Middleton: [00:00:00] If you choose a model that came out of a region of the world that likes censorship, then it might steer you towards censored ideas, or it might refuse to talk about some things you need to talk about.

Sai Gunaranjan : Microsoft, at least Azure, defaults to sending your data globally to any available compute for processing purposes.

We actually have to block those as well.

Kyler Middleton: The cloud providers aren't looking out for you as much as they should, and the wizards aren't looking out for you.

Sai Gunaranjan : We don't want anyone to just start downloading any model. The marketplaces actually host a lot of stuff that will not be fully approved legally or from a compliance point of view.

Kyler Middleton: One of the funny classic examples is: you have an alert that pops and says the database is overloaded, and the AI is like, oh my goodness, the front end is doing this. I delete the front end and the database is saved.

Sai Gunaranjan : The defaults are not very safe, and that's when we have to be a little bit more cautious.

Ashish Rajan: AI is top of mind for everyone, even in cloud security. Yes, people are deploying AI, both kinds of it, the conversational kind and the transactional kind, into AWS, Azure, Google Cloud, and all the popular clouds. And we had the fortune to talk to some of our previous guests, Kyler and Sai, who have been working on AWS and Azure to deploy [00:01:00] AI applications and AI bots.

Uh, specifically we spoke about some of the security gaps, the truth behind deploying an AI application in a cloud provider, what some of the gaps are, what are some of the good features about this as well, and where one cloud is probably better than the other. And if you are a cloud security engineer today, would you become an AI security engineer tomorrow?

What's the delta that you have to fill? All that and a lot more in this conversation with Kyler and Sai. I mean, AI is top of mind for so many people. I'm so glad we are having these conversations on how cloud plays a big role here, how you can build on some of the foundational pieces that you may have already done in your organization, and how you can even start testing out AI.

So I'm super excited for you to listen to or watch this conversation, and if you know someone who is working towards this, or wants to understand the AI security lens from a cloud perspective, definitely share this episode with them. They will thank you for it. And if you're someone who has been listening to or watching Cloud Security Podcast episodes for a long time, I would really appreciate it if you could take two seconds to hit the subscribe or follow button in case you're listening on Apple Podcasts or [00:02:00] Spotify, or if you're watching on YouTube or LinkedIn, definitely hit subscribe or follow there as well.

Your support means a lot. Thank you so much for taking those two seconds to support us. Enjoy this episode with Kyler and Sai, and I will talk to you soon. Peace. Hello and welcome to another episode of Cloud Security Podcast. I've got Kyler and Sai, returning guests. Thank you for coming and joining the show again, guys.

Kyler Middleton: It's good to be here.

Ashish Rajan: I was gonna say, maybe it's, it's been some time, so how about we start with some introductions. Kyler, if you wanna go first and then maybe follow by Sai.

Kyler Middleton: Absolutely. Hey everyone. It's good to be back. I think it's been like two or three years for me. It's been a while. Wow. So, okay.

Ashish Rajan: We should definitely make it more frequent than two, three years, but, okay, go on.

Kyler Middleton: I would love that. I have historically been focused on network, cloud, and DevOps engineering, and lately it's been AI, because AI is everywhere and it's in everything. And so that's what Sai and I have been focusing on: how do you do platform engineering for AI stuff? 'Cause you're gonna need to soon.

Sai Gunaranjan : Hi everyone, and hi Ashish.

Yeah, it's been a while since we spoke. I'm Sai. [00:03:00] I'm a cloud architect at Veradigm, focusing on Azure-based platforms and Microsoft-based platforms. And recently it's all about AI and hosting AI services on Azure, and AWS as well. So, yeah,

Ashish Rajan: I mean, it's a good segue into: what are people using AI for?

What are some of the common use cases for AI services in organizations? I mean, obviously you guys are in the health sector, which is probably even more regulated. What are you guys seeing as common use cases for AI? Maybe Sai, we can start with you, then followed by Kyler.

Sai Gunaranjan : Oh, sure. I think at this time, at least for us, our use cases are more to do with integrating with a lot of the products that we have. We have a good majority of them on Azure and AWS, and right now it's about securely integrating AI-based API endpoints and other solutions into the products and seeing how it works out.

That's one of the use cases that we are working on, and trying to design patterns that can [00:04:00] securely support those integrations. That's one big focus area for us right now.

Kyler Middleton: There's this clear schism between, like, internal AI developers, you know, human resources, marketing, and that's chatbots.

Everyone wants chatbots. Help me write my social media post, help me create this ad copy, help me write my program. So we're trying to secure that and monitor that, and it's difficult on all counts, but we're doing our best, more on that later. And externally, we're in the healthcare space, and one of our internal products is like a transcription service.

So we put a little microphone between you and your doctor, or you and your nurse, and it uses a model to listen to the conversation and then transcribe the information, correctly formatted, into your chart, so that you can, you know, get the medication you need and the doctors don't miss anything, 'cause they're incredibly busy, at least in the United States, but probably also everywhere.

So tools like that. Gen AI is incredible at summarization and pretty bad at novel stuff, so I'm sure we'll talk about all that [00:05:00] more.

Ashish Rajan: Yeah, that's funny. I think I made a post about this on LinkedIn a couple of days ago. When AI became a thing, people were still very skeptical, and some people are even today still skeptical about what security looks like for AI.

A lot of them say it would be primarily IAM and data: just take care of those and that should be the rest of it. But would you say there's a lot more to doing AI security than just looking at IAM, or just looking at data security and data classification?

Kyler Middleton: Yeah, absolutely. Uh, first of all, you need a model that fits what you're looking for, so that's just kind of a costing thing.

Yeah. So do you want a giant model that knows everything or do you want a distilled model that's really cheap that can just do the things you care about, but that can touch on security and regulatory compliance because if you choose a model that, uh, came out of a region of the world that likes censorship, then it might steer you towards censored ideas or it might refuse to talk about some things you need to talk about or the data [00:06:00] out of it might be harmful, which matters a lot in a medical context.

So there's security in the literal sense, from hackers and misuse, and there's security in a real operational sense, like telling people the wrong medical advice or telling doctors to do the wrong thing, and that can cause real human harm too.

Sai Gunaranjan : I think the stuff that I focus on primarily is not just IAM, it's also network and data access from the models. Also, the workspaces that actually run these models, for fine-tune training, you know, they have access to a lot of sensitive data, so how do you secure those data pipelines as well?

Ashish Rajan: Yeah.

Sai Gunaranjan : Um, and then other than that, you know, just content and overall hallucination and those things, how we can tone that down using content policies and filters and things like that. That's also a focus for us when we implement these solutions or have to design them.

Yeah.

Ashish Rajan: Right. Would you guys say that, I guess the reason I asked that question is because, before we started recording, Kyler and I were talking about how [00:07:00] AI security engineering as a field has started coming up. Now suddenly there are roles coming up which are AI security engineer, and people are saying, hey, it's a cloud security engineer who's turning into an AI security engineer.

And, not that I'm gonna go into a debate right now about this particular topic, but I definitely find that cloud plays a huge role in that AI ecosystem that we are building, and maybe that's the reason why we are having this conversation as well. I'm curious to hear from you guys, obviously with Azure and AWS, how is that being used?

In the AI use case context, I guess people look at it from a, hey, I'm using OpenAI, I'm using DeepSeek, I'm using Gemini. What role is cloud playing in this? And feel free to take it, I don't know, Kyler, if you wanna go with AWS first, and then we can have Azure as well.

Kyler Middleton: Yeah, that, that sounds great.

So you can absolutely use ChatGPT to do AI stuff, and you can even use the free tier, but in the same way that, like, Gmail and Google and things like that are free: [00:08:00] because you're the product, so they're listening, and that's okay. If you need a recipe for what's in your fridge or a pirate shanty, do that.

But if you need to handle secure data, like your financial data, your medical data, don't upload that to ChatGPT. Please don't do that. Use a private AI service with, you know, security and privacy and compliance policies, and probably one you have a BAA with, so you can share your private information.

So these platforms from Azure, from AWS, they have longstanding security and compliance processes and certifications and security teams, so they can make sure that when they have your data, it is treated securely. Whether it's AI or it's not AI, it's treated securely, 'cause you gotta keep it secure. There's a ton of logging.

There's a ton of model access that you can get. Like, these models are huge. They require a lot of compute, but you only need that compute while it's responding for 10 seconds. So that's a very bursty type of traffic for running, like, some bare metal in your [00:09:00] bathroom or basement or something.

So having cloud providers do it is excellent. They also have some wonderful guardrail tools, specifically on the AWS side. There's a couple of different guardrails for hate and sexualization, stuff like that. You don't want your car dealership chatbot on your website talking about race and talking about sexualization.

It's just not appropriate for work. So guardrails can filter bad stuff going in that you don't wanna process or have your AI see, or bad stuff going out, where the chatbot goes off the wall and tries to say something terrible, and the guardrail says, no, sorry, the chatbot's down right now, come back later.
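The input and output filtering Kyler describes corresponds to Bedrock's guardrail configuration. As a rough sketch, a content policy for a public-facing chatbot might look like the following; the filter types and strength values follow AWS's `CreateGuardrail` API, but treat the exact field names as an assumption to verify against the documentation, and note the actual API call (shown commented out) needs AWS credentials:

```python
# Sketch of a Bedrock guardrail content policy, as discussed above.
# Field names follow the CreateGuardrail API; verify against AWS docs.

def build_guardrail_request(name: str) -> dict:
    """Build the request body for a content-filtering guardrail."""
    filters = [
        {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
        for t in ("HATE", "SEXUAL", "INSULTS", "VIOLENCE")
    ]
    # Prompt-attack filtering applies only to input, so output must be NONE.
    filters.append(
        {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"}
    )
    return {
        "name": name,
        "description": "Blocks hate/sexual/violent content going in and out",
        "contentPolicyConfig": {"filtersConfig": filters},
        "blockedInputMessaging": "Sorry, I can't process that request.",
        "blockedOutputsMessaging": "Sorry, I can't answer that.",
    }

# Applying it would look roughly like:
#   import boto3
#   bedrock = boto3.client("bedrock")
#   bedrock.create_guardrail(**build_guardrail_request("dealership-chatbot"))
```

The same guardrail can then be attached at inference time, so both the prompt going in and the completion coming out are screened, which is the "bad stuff in, bad stuff out" behavior described above.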

Ashish Rajan: Uh, what about Azure, Sai?

Sai Gunaranjan : Yeah, I agree with all that Kyler was saying, but also, it's kind of like the question goes back to, like, 10 years ago: should you really host your own data center or go to the cloud? Yeah. I think the same thing's coming up with this as well. Do you really want to host your own models, built from scratch, or start using what's already hosted and then fine tune it to meet your requirements?[00:10:00]

And I think that's where the cloud providers actually have plenty of tools available to do both. You can build your own stuff if you want, or you can use what they already provide from the marketplace and then fine tune it or just use it as it is.

And I think that's where a lot of these cloud native companies, which are already fully on cloud and have all their data, everything, on cloud, will use those services much more efficiently, to just connect into the data sources and start training and responding back, and so on. I think that's where the cloud really helps.

Also, like Kyler was mentioning, there's a lot of pre-built security tooling that comes in with the cloud native tools. For example, the Azure AI services and stuff, they have all the content filtering.

They have policies that can actually limit certain usage and things like that, right? If you were to build natively, it's gonna be a much bigger challenge to build all of that into your data center, have all those guardrails secure, and then have someone review it. It's much more painstaking, and also, I think, maybe not cost effective to do on the [00:11:00] data center directly. So that's where cloud really helps, for folks to start using AI services.

Ashish Rajan: Fair. And I guess, Kyler, you mentioned a BAA, 'cause a lot of people would not even know what a BAA is between the cloud and your own company.

What role, especially in a regulated industry, would a BAA play, and why is it important in the context of what service is agreed between the cloud provider and your organization?

Kyler Middleton: Totally. I'll do my best on this, and I think you'll probably correct me. I'm pretty sure you're an expert here. So a BAA, a business associate agreement, is a compliance legal agreement that you sign with a company, and it says that you will do a good job of protecting our data, and, I believe, you are legally liable for protecting the data from our customers and patients and things like that. It helps make sure that your partner is going to make good choices with your data, as I tell my three and a half year old: make good [00:12:00] choices with their data, and do their best to protect the privacy of your patients and customers.

Did I hit it?

Ashish Rajan: Yeah, you did. And I was gonna say we should ask ChatGPT what a BAA is. Maybe, in keeping with the theme of the episode, we should bring up a console with ChatGPT and throw a BAA question there as well. But no, thank you for clarifying that. That kind of makes sense as well, because one of the reasons I asked that question is that compliance and regulation is quite huge.

And a lot of times it's easy for people to just jump onto a new service when it's announced: hey, re:Inforce has happened, or maybe even Microsoft Ignite happened, new services, hey, use this. Or maybe Google Vertex AI, whatever the service of choice is. It's important to understand the compliance standard for that particular service as well, especially if you work in a regulated body.

I think that's kind of where I was coming from as well. And you hit the nail on the head: just because a new [00:13:00] service has been announced and is sexy doesn't really mean I should jump onto it straightaway.

Sai Gunaranjan : Yeah, keep it short of production. Yeah, yeah. Jump on it and do some POCs.

Figure out how it works and what controls you have on it, but

Ashish Rajan: Yeah.

Sai Gunaranjan : Yeah, not directly exposing it. Yeah.

Ashish Rajan: And maybe that's a good segue into giving some people an idea of what kind of components are involved in AWS and Azure. Maybe if we start with AWS, Kyler: what are some of the architectural components that come to mind for people who are building applications that use AI services?

You mentioned, obviously, internal services using AI chatbots and all of that. Mm-hmm. What are the AWS components in there?

Kyler Middleton: Absolutely. So your chatbots generally can't just consume data. You can't just upload a PDF and say, read this. Well, you can, but there's some magic going on there called RAG, retrieval-augmented generation.

What that means is you are staging the data as vectors, as these [00:14:00] little chunks of data, in a vector database. So you need some sort of ingestion pipeline, and generally this is asynchronous. AWS calls this Knowledge Bases, and that's a pretty industry standard term that you see around. What that means is you point it at a data source, which is, you know, an S3 bucket filled with files, or an Atlassian Confluence, or a SharePoint, and say, here's how to authenticate, here's how to read it.

Go and ingest all the files, go ingest this SharePoint site or Confluence wiki, and put all of that as vectors into a vector database that's stored in AWS, like OpenSearch or Aurora. And then, when we do lexical search, or a sort of keyword-type semantic search, against it, it'll return relevant stuff.

And it kind of works like your memory does: you think of lizards, and you think of, oh, I saw a lizard at the zoo, and you can read about it in your, you know, brain vector search. And so that's very important for [00:15:00] making the data available, for the models that need to answer questions and do tasks in your network to know about stuff.

'Cause not everything about your business is on the internet. Your payroll's not on the internet, but maybe your bot needs to know about it to do something.
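The ingest-then-query flow Kyler outlines is what Bedrock's `RetrieveAndGenerate` API wraps up: the service runs the vector search against the knowledge base and hands the retrieved chunks to a foundation model to write the answer. A rough sketch of the query side (the knowledge base ID and model ARN are placeholders, and the API call itself is shown commented out since it needs credentials and a provisioned knowledge base):

```python
# Sketch of querying a Bedrock knowledge base (the RAG pattern described
# above). The request shape follows the bedrock-agent-runtime
# retrieve_and_generate API; IDs and ARNs below are placeholders.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Request body for a knowledge-base-backed RAG query."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,    # placeholder
                "modelArn": model_arn,       # placeholder
            },
        },
    }

# Invoking it would look roughly like:
#   import boto3
#   runtime = boto3.client("bedrock-agent-runtime")
#   resp = runtime.retrieve_and_generate(
#       **build_rag_request("How do we rotate API keys?", "KB123",
#                           "arn:aws:bedrock:us-east-1::foundation-model/...")
#   )
#   print(resp["output"]["text"])
```

The ingestion side (pointing the knowledge base at S3, Confluence, or SharePoint and syncing it) happens asynchronously and separately, exactly as described above.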

Ashish Rajan: Fair. And is that the same in Azure as well, Sai?

Sai Gunaranjan : Yeah, they have the same services, just that it's spread across multiple solutions.

Like, they have AI services, and they also have Azure ML services, and the AI services also integrate into OpenAI. And all of them together provide a set of AI services for Azure, and then you can connect it to your data sources and use, maybe, a compute engine at the backend to run those models, and so on.

And you can purchase them from the marketplace and things like that. Yeah, a very similar process in Azure as well.

Ashish Rajan: Interesting. So maybe, because we have both AWS and Azure people here, I wonder if you could just double click [00:16:00] on AWS for a second.

I'm curious to know what are some of the common services people see in AWS that are used quite often, and what are some of the security things that you found as you were building these conversation bots. I've been lucky enough to have a lot of conversations with financial and insurance people through the security advisory we run, called Tech Riot.

I found that a lot of the bots are of two types. One is the, hey, I ask you a question, I get a response back. The other one is like, it goes and does things. So I'm curious if you've had experience with both, and what are some of the things you found as you were working on, I guess, securing them and building them in AWS.

Kyler Middleton: That's a huge question, and I'll try to make it into a little, sorry.

Ashish Rajan: Sorry, I'm like, there's so many thoughts going through my mind as I was talking to you guys. I'm like, let's focus on AWS, but I didn't really narrow it down as much. So hopefully, even if you wanna do it part by part, that's totally okay.

Kyler Middleton: Let's do it. [00:17:00] So generally, like you said, there are transactional gen AI type applications.

You see this really commonly with chatbots, where you say, tell me about this team or project or something, and it has to go query your knowledge base to learn about, you know, data that's been asynchronously added to it from your Confluence or your SharePoint or whatever. And it'll get some chunks back, and then it'll talk to the big foundational model and generate a response for you.

That is transactional gen AI. Generally you can use, like, serverless models from the Bedrock platform, and Knowledge Bases and guardrails, to just make all that happen. And that just kind of runs and responds and then it dies. There is no state, there is no further action. It can't choose to go use tools, generally.

The other kind is cool, and we're starting to see it pop up all over, and it's called agentic, which means it acts like an agent, has agency of its own. There's a ton of security coming down [00:18:00] the pipeline, and there's a ton of terminology to learn here. What that means is these bots are generally given tools and given more broad-ranging goals: go and research this thing and get back to me in a couple minutes.

So they're generally more asynchronous. They don't run in five seconds; they run in a few minutes or a few hours or a few days. And you'll give them tools, sometimes through MCP and sometimes directly into their programming interface.

And you say, go use your tools, go research, go figure out how to solve this problem. I don't have as much experience with this yet; we're starting to build them. But one of the examples I use a lot, because I am currently building it, is an SRE bot that we will trigger when there's an alert pushed to our Splunk or PagerDuty or Slack or something.

And it says, you know, has this happened in the past week? Go research in Slack and see if this has happened in the past week. Go look at our PagerDuty. How did we resolve it last time? Was [00:19:00] it a false positive or was it real? And how did we solve it? And we're not yet giving the bot the ability to, like, go restart systems, but we're at least giving it the ability to say, last time you needed to restart this service, tag the SRE team and say, hey, you probably need to do this.

And if it can generate that kind of detailed response in, like, one to two minutes, that's pretty amazing. That's pretty useful. Yeah. We're still working on making it work, but that's the goal, and I think it's feasible.
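The SRE-bot pattern just described, read-only research tools plus a human in the loop for any action, can be sketched in a few lines. Everything below is illustrative: the tool functions are stand-ins, not real Slack or PagerDuty API calls, and the agent's reasoning is reduced to simple dispatch so the shape of the pattern is visible:

```python
# Illustrative sketch of the SRE-bot pattern described above: the agent
# gets read-only "tools" and a goal, researches, and only *recommends*;
# a human performs any restart. All tool bodies are stand-ins.

def search_slack(alert: str) -> str:
    # Stand-in for a real Slack search over the past week.
    return f"2 similar alerts for '{alert}' in the past week"

def search_pagerduty(alert: str) -> str:
    # Stand-in for a real PagerDuty incident-history lookup.
    return "last time: false positive, resolved by restarting the service"

READ_ONLY_TOOLS = {"search_slack": search_slack, "search_pagerduty": search_pagerduty}

def triage(alert: str) -> dict:
    """Research an alert with read-only tools; recommend, never act."""
    findings = {name: tool(alert) for name, tool in READ_ONLY_TOOLS.items()}
    # The bot only suggests; the SRE team performs any actual restart.
    findings["recommendation"] = "Tag the SRE team: consider restarting the service"
    return findings

report = triage("database overloaded")
```

The key design choice, echoing the write-access caution later in the conversation, is that the tool set contains nothing that mutates state, so the worst a confused agent can do is give a bad recommendation.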

Ashish Rajan: And I guess to your point about security considerations for these kinds of AI, agentic AI, I don't know what to call them anymore.

It's like, let's just say agents. Yeah,

Kyler Middleton: Totally. Well, you need to know where you're accepting data from, because these bots can do bad things. It depends on the permissions that you've given them, of course, but AIs are generally very trusting. They're just like, I don't know, the new employee.

And you say, yeah, you for sure need to go click, push that red button. [00:20:00] You need to let me in, I work here, I promise, pinky promise. And they say, well, yeah, of course, like, I don't know better. AIs are very trusting. They'll do what anyone asks. That can be really scary if they have any kind of write permissions, but also if they have read permissions.

If you say, yeah, I promise I'm in HR, tell me everyone's payroll, please, it will, if you don't have guardrails to prevent that, if you're not gating that access somehow through some mechanism. Because AIs themselves are pretty bad at security; they're very trusting. They're the human instinct in all of us, the untrained security person.

So that's something we need to worry about.

Ashish Rajan: I don't know, I feel like we are not giving enough freedom to AI. Sometimes I feel like I'm just ready for the day when AI just deletes the entire backlog and everyone's like, who's responsible for this? Like, no idea. So it'll be really funny when that happens.

Kyler Middleton: One of the funny classic examples is you have an alert that pops and says the database is overloaded, and the AI is like, oh my goodness, the front end is doing this. I delete the front end and the [00:21:00] database is saved. And it's like, well, you solved the problem, but not in a way that satisfies our business requirements.

So watch out for things like that.

Ashish Rajan: Yeah, and just don't delete the entire repo because of it. Exactly. While you're at it.

Kyler Middleton: Yeah. I would be very cautious with giving AIs write access at this point. You should have a human in the loop unless you have a ton of security checkpoints, and I just think our tooling is not there yet.

Maybe inside Google or the Department of Defense, but certainly not in the commercial sector yet. It's just not there. I wouldn't give it read access myself.

Ashish Rajan: You mentioned serverless for Bedrock. So I think the two services that keep coming up, at least in my conversations: I see Bedrock and SageMaker quite often.

I also see Kubernetes workloads, I also see serverless workloads. I'm curious, is that the pattern that you're seeing in the conversations you're having as well, around AWS and how people build AI on AWS platforms?

Kyler Middleton: Yeah. The pattern that I've seen so far for that transactional AI, the [00:22:00] non-agentic one, is Lambda, is serverless.

Yeah, as functions, because then you don't have to maintain any servers. Huge fan of that. I never want to patch another server again in my whole life if I can avoid it. And for Kubernetes, I see a lot of that running distilled models, like running your own models to avoid having to pay, you know, very large Bedrock model prices, 'cause running reserved instances is very expensive. And for Bedrock and SageMaker themselves, we're not quite to using SageMaker. SageMaker is a suite of tools from AWS that lets you create your own models from data and distill larger models into small ones that are, you know, customized for your use case, and then run them yourselves.

We haven't gotten there. I would love to get there in the future. We're seeing a lot of that in specialized spaces, regulatory-sensitive spaces like finance and healthcare. Yep. Because you need those models to not trust the inputs from folks and [00:23:00] follow security guidelines and follow the law.

It turns out you have to educate your AIs the same way you educate your users: please follow the law. One of our chatbots decided to be an insider trading advisor, so I'll tell that story later on. So there's a lot going on there. Those are the most common patterns that I see in AWS.

Ashish Rajan: Awesome, thank you for sharing that. And Sai, for Azure, what are some of the architecture patterns you're seeing, components you're seeing in the Azure space as they're being built? Is it the same there as well for AI bots?

Sai Gunaranjan : Yeah, so we do have serverless as well as, like, compute-hosted options for hosting models and stuff like that.

Depending on the use case: most of the serverless models don't come with a lot of network controls. Like, you can't really monitor the network ingress and egress controls for serverless, just because it is serverless, and so on. So based on the use case and what it's [00:24:00] integrating with, we either allow for a serverless implementation, or we go with a hosted compute model, wherein we're able to, you know, govern the networking and also govern data source access and stuff like that, and then run the models on top of it. It is expensive, but it's not worth the risk of not having the visibility into the whole network space. So that's why we have to do that as well.

And on Azure mainly, we don't have any of the just-chatbots kind of thing. Most of them are product integrations, so we are being extremely cautious when we roll something out, and we have a lot of checkpoints and stuff like that.

Yeah.
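Sai's caution about defaults ties back to the episode's headline Azure gap: deployment types whose names start with "Global" can route requests to any available region for processing, while regional "Standard" or "DataZoneStandard" deployments keep processing inside a boundary. As an illustrative sketch of the kind of policy check a platform team might run over its deployment inventory (the SKU names follow Azure OpenAI's documented deployment types, but the surrounding review logic is entirely hypothetical):

```python
# Illustrative policy check: flag Azure OpenAI deployments whose SKU
# permits global data processing. SKU names follow Azure's documented
# deployment types; the review logic itself is a sketch, not a real tool.

GLOBAL_SKUS = {"GlobalStandard", "GlobalProvisionedManaged", "GlobalBatch"}

def violates_data_residency(deployment: dict) -> bool:
    """True if this deployment's SKU allows processing in any region."""
    return deployment.get("sku", {}).get("name") in GLOBAL_SKUS

deployments = [
    {"name": "gpt4o-chat", "sku": {"name": "GlobalStandard"}},    # risky default
    {"name": "gpt4o-eu",   "sku": {"name": "DataZoneStandard"}},  # zone-bounded
    {"name": "gpt4o-east", "sku": {"name": "Standard"}},          # regional
]

flagged = [d["name"] for d in deployments if violates_data_residency(d)]
```

In practice the deployment list would come from the Azure management API rather than a literal, but the point stands: the residency-safe choice has to be made explicitly, because the convenient default is the global one.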

Ashish Rajan: Alright. So they're more embedded, like AI is more embedded into the product, per se?

Sai Gunaranjan : Yeah, that's the main primary use case on Azure. A lot of them are being embedded into products, and we are researching or going down that route. There's a lot of development happening there.

Ashish Rajan: And when you say embedded, does that mean, [00:25:00] I guess, to what Kyler was saying, it's not open-ended, I'm-asking-a-question AI; it's more like it does things in the background? I'm just gonna use an example from one of the conversations that I had. Say I'm using internet banking and I wanna show, hey, this is how much Ashish has saved, and this is how much Ashish has spent on all the food he's been eating everywhere.

Like, it does a pattern for the year and goes, oh, you spent a lot more on food rather than saving the money. That would be a quote-unquote AI thing. Is that a decent example of it?

Sai Gunaranjan : Kind of, yes. So basically, on the product side, it's integrating into the knowledge base, integrating into claims, integrating into that stuff, so that when, like, a support engineer actually has to research a claim or something like that, they can go to the chat and then go from there and look up the product information and things like that. That's where the embedding is going on. It's not directly patient-facing, but, yeah, it's on the other side.

Ashish Rajan: Oh yeah, of course. I think patient-facing would definitely be a very huge leap at that point in time. So what are the equivalent services in Azure that [00:26:00] are popular?

Like, obviously Bedrock and SageMaker are the two on the AWS side. What's on the Azure side?

Sai Gunaranjan: It's Azure ML Workspace and Azure AI Services, which have a lot of other services underneath them. They have Azure OpenAI, which integrates with OpenAI. But mainly it's Azure AI Foundry, which has the Azure AI Services, and then Azure ML Workspace.

So those are the two main things that actually come up with Azure.

Ashish Rajan: And how do they work? As in, is Azure Foundry the place where I can, say, connect to any model?

Sai Gunaranjan: Yeah, Foundry gets you access to the workspace or the project. From there you actually deploy models, and then you can create workspaces, connect to models, connect to data sources, and have endpoints created and things like that.

The ML Workspace is very similar, but you can run Python notebooks there. You can also have your own models hosted there and things like that. So ML Workspace gives you a more hands-on, build-your-own approach.

Then Foundry gives you a lot [00:27:00] of pre-built models that you can use for your integrations.

Ashish Rajan: Actually, that's a good point. Just because all of us are talking about GenAI, ML hasn't really died suddenly. It's been there for a long time. That was a good segue; I didn't even think about the whole notebook side and all of that.

I'm sure that is all still relevant, and the AI security that people talk about these days is primarily GenAI security. So, to your point, what does security for Foundry, or security for AI models, look like in an Azure context, in terms of architecture patterns that you may have seen?

Sai Gunaranjan: Sure. So we touched upon this in the talk as well. At least from the talk that we gave, it's primarily two areas. One is purely IAM-based, where you have access controls into the workspace, because the workspace now has access to a lot of sensitive data, right?

So one is access into the workspace. The other thing is [00:28:00] the underlying networking, or, once you get into the workspace, what access you have from there: are you able to push data out, pull data in, are you able to access untrusted models? So that's one, just from the resource implementation point of view.

Then overlaying that is model access itself. Now that you have private access to the workspace, as well as governed ingress and egress, we don't want anyone to start downloading any model that the marketplace actually hosts. The marketplace has access to Hugging Face, you have access to NVIDIA, but there's a lot of stuff that might not be fully approved legally or from a compliance point of view.

So you have to have a check and balance over there. Yeah. And that's another layer of implementation that we look at. And then the third thing would be mainly, like what Kyler was talking about earlier, more content-based, so that it doesn't have any kind of explicit material being returned back in the chat and stuff like that which is not relevant for the product. So it's content safety that we can build on top of all of these things. These are the three or four layers that we kind [00:29:00] of touch upon and look at when we implement any AI service or design patterns.
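The model-access layer Sai describes, deciding which marketplace models are even allowed into an environment, can be sketched as a simple allow-list gate that a platform team runs before approving a deployment request. This is an illustrative sketch, not either provider's API; the model names are hypothetical.

```python
# Illustrative model-access gate: approve a deployment request only if the
# model is on a reviewed allow-list. Model IDs here are made up.

APPROVED_MODELS = {
    "openai/gpt-4o",        # reviewed and approved for sensitive workloads
    "meta/llama-3-70b",     # approved for internal tooling only
}

def check_deployment(model_id: str, approved: set[str] = APPROVED_MODELS) -> bool:
    """Return True if the requested model is on the approved list."""
    return model_id.lower() in approved

# A marketplace model that hasn't been through legal/compliance review is
# rejected; approved models pass (case-insensitively).
assert check_deployment("Meta/Llama-3-70B")
assert not check_deployment("some-vendor/unreviewed-model")
```

In practice the same check would be enforced by policy (Azure Policy, or an admission step in the deployment pipeline) rather than application code, but the shape of the control is the same.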

Ashish Rajan: And I guess, would you still have Kubernetes or serverless? Oh, sorry, you did call out the serverless part earlier.

Sai Gunaranjan: Exactly. So with the models, when we allow a certain model to be pulled and deployed into your workspace, that's when the serverless, and maybe a dedicated-compute hosted method, comes into the picture.

Yeah. There's also a third viewpoint from Microsoft wherein, if you're doing serverless, they could actually send your data off to different geographies for processing.

Ashish Rajan: Yeah.

Sai Gunaranjan: So we block those as well. To keep cost low for the end user, Microsoft, at least Azure, defaults to sending your data globally to any available compute for processing purposes. We actually have to block those as well, so that you don't send your data to a geography that you don't trust or that you have issues with.

Right. There's a lot of data governance that actually happens there as [00:30:00] well. That's another viewpoint that goes into hosting these.

Ashish Rajan: Is that one of those ones where Azure says it's not a problem, it's a feature?

Sai Gunaranjan: They have a lot of options, but I think the most cost-effective one is what's defaulted. At least that's what we've been seeing. You have to actually disable that. There's a very good warning as well, which says, hey, using this model, your data can go anywhere. There's good text below it, but that's the default.

So when we say to a developer, hey, you just go deploy your own self-serve model, go deploy your own thing and do your own stuff, the defaults are not very safe, and that's when we have to be a little bit more cautious. One of the design principles that we've always had is that the developers can do whatever they want to do, right?

It's not like we build it for them and then they go start using it; that actually kind of breaks developer velocity. So we let them do whatever they want to do, and that's where the defaults become a bit more scary. You have to actually have policies or other things that block them, or [00:31:00] templates to deploy that don't allow these defaults and override them with other safe defaults that we want to use within the organization.
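The default Sai describes can also be audited for after the fact. Below is a minimal sketch that flags deployments whose SKU routes data to global capacity; the SKU names follow Azure OpenAI's deployment-type naming ("GlobalStandard" and friends are assumptions here and should be verified against current Azure documentation), and the inventory records are stand-ins for whatever the management API or IaC state returns.

```python
# Audit sketch: flag Azure OpenAI model deployments whose SKU may route
# data outside the resource's geography. SKU names are assumptions based
# on Azure's deployment-type naming; verify against current docs.

def flag_global_deployments(deployments: list[dict]) -> list[str]:
    """Return names of deployments that may process data globally."""
    global_skus = {"GlobalStandard", "GlobalBatch", "GlobalProvisionedManaged"}
    return [d["name"] for d in deployments if d.get("sku") in global_skus]

# Example inventory, e.g. pulled from the management API or Terraform state:
inventory = [
    {"name": "gpt4o-chat", "sku": "GlobalStandard"},    # the risky default
    {"name": "gpt4o-emea", "sku": "DataZoneStandard"},  # region-bounded
]
print(flag_global_deployments(inventory))  # ['gpt4o-chat']
```

The same list of "global" SKUs could back an Azure Policy deny rule, which is closer to the template-with-safe-defaults approach Sai describes.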

Kyler Middleton: We talked about this in our talk: it's not just you that has FOMO in your organization, that we're gonna miss out on this cool AI stuff that'll make our product better. These cloud providers do too. And they are shipping stuff before they traditionally would, and before the, you know, wizards and defaults are secure.

Yeah. And that's kind of scary, that both sides have FOMO and wanna move fast without double-checking the security. The cloud providers aren't looking out for you as much as they should, the wizards aren't looking out for you, and maybe your developers wanna move fast and not worry about it too.

So there's a big gap for platform teams to jump in and bridge over, like, you gotta bring security here, 'cause nobody else is looking out for you right now.

Ashish Rajan: Thank God, because security knows everything anyway, so they would solve all your problems. Thanks, guys. I was gonna say, what are the security layers like in AWS, Kyler? Because I think Sai [00:32:00] just mentioned the ones from Azure: there's identity, there's logging, all of that.

What have you found in that same kind of context for AWS?

Kyler Middleton: Totally. It relies on IAM, which, I love it, is well done. It's complicated and hard to get started with, but once you nail it, you've got it nailed. So generally you have a principal, which is an actor, that targets a resource and does a verb against it.

And that's what IAM validates on the principal side. On the resource side, Bedrock is a little bit strange, because there are no resources in the entirety of Bedrock. There's nothing to target. You target models, you target sort of groups of things in regions, but you can't add a resource policy to your knowledge base, where you store all your sensitive data, or to a particular model that's expensive.

You cannot protect that with a resource policy. There are other ways to do it, like organizational SCPs or RCPs, but not from the resource side. That's a little [00:33:00] concerning. The logging is also a little bit interesting. On the Azure side, you deploy a model with a guardrail, you target that, and you gather all the settings together, so when you use that one it has a guardrail, it logs to a particular place, it has this rate limiting, and so on.
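Since Bedrock resources can't carry their own policies, the deny has to come from the organization side, as Kyler notes. Here is a rough sketch of what such an SCP could look like, built as a plain Python dict so it can be templated; the model ARN pattern is illustrative, and exact action names and ARN formats should be checked against AWS documentation before use.

```python
# Sketch of an organization-side control for Bedrock: deny model
# invocation for anything outside an approved model list. The ARN
# pattern below is illustrative, not a recommendation.
import json

APPROVED_MODEL_ARNS = [
    "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-sonnet*",
]

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedBedrockModels",
        "Effect": "Deny",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        # Deny everything that is NOT one of the approved model ARNs.
        "NotResource": APPROVED_MODEL_ARNS,
    }],
}

print(json.dumps(scp, indent=2))
```

This compensates for the missing resource policies by pushing the control up to the organization, at the cost of SCP size limits and coarser granularity.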

The serverless models on AWS Bedrock are great; you can get started right away. But because there's no deployment to target, the logs all get combined into one bucket, there's no rate limiting, and no guardrail is required. You can't say you have to use a guardrail to use this model.

A guardrail is just an optional API flag when you call the model. If you, the developer, want to turn it off, you can. And that's a little scary, right? That's not great. And it gets a little harder later, too, to link when an AI conversation happened. All the conversations get put into one CloudWatch log group, one bucket, and they're all intermingled. Every conversation, all your re-ranking, all of your different knowledge base [00:34:00] lookups, all of your ingestion pipelines that use embedding models, all of that's in one place. And so how can you tell just the conversations that came from this application?

Well, you can't. Well, you kind of can: you can see which IAM role did it, 'cause that's an embedded key in what's logged. Yeah. So you need to make sure, and this is kind of weird, because generally the advice is the opposite: make an IAM role that can be used by lots of stuff, abstract it, don't have a hundred roles.

Have one role and make sure that one's right. No. Do the opposite here. Every application needs its own IAM role, because that's how you're going to link, how you're going to trace, what it's actually doing with your AI services. And it's just flatly a requirement now, 'cause all the logs are mixed together.
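Because the guardrail is just an optional parameter on the invocation call, one mitigation is to hide the raw client behind a wrapper that always injects it. A hedged sketch: the guardrail ID and version are hypothetical, and `guardrailIdentifier`/`guardrailVersion` are the parameter names as understood from the bedrock-runtime `InvokeModel` API, which is worth verifying against current boto3 documentation. The sketch only assembles the call arguments, so it runs without AWS credentials.

```python
# Sketch: make the guardrail non-optional by wrapping kwargs assembly.
# Guardrail ID/version are hypothetical placeholders.
import json

REQUIRED_GUARDRAIL = {"guardrailIdentifier": "gr-abc123", "guardrailVersion": "1"}

def build_invoke_kwargs(model_id: str, prompt: str) -> dict:
    """Assemble InvokeModel kwargs with the org guardrail always attached."""
    return {
        "modelId": model_id,
        "body": json.dumps({"prompt": prompt}),
        **REQUIRED_GUARDRAIL,  # developers can't opt out of this
    }

kwargs = build_invoke_kwargs("anthropic.claude-3-sonnet", "Hello")
assert kwargs["guardrailIdentifier"] == "gr-abc123"
# In real use, something like:
#   boto3.client("bedrock-runtime").invoke_model(**kwargs)
```

Pairing this wrapper with an SCP that denies direct `bedrock:InvokeModel` from developer roles (so only the wrapping service's role can call it) is one way to make "guardrail required" actually hold.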

Ashish Rajan: Wow, okay. 'Cause that would basically mean incident response and stuff would be harder as well, because you're basically looking for a needle in a haystack at that point.

Kyler Middleton: Yeah, absolutely. I am now looking through thousands of logs from dozens of [00:35:00] applications and trying to figure out which one's which. CloudTrail is not great at that.

So you're probably shipping it to a SIEM, 'cause you need to be able to disambiguate it with structured data and say, yeah, filter for this IAM role and just show the logs, please. It's just hard to do with native tools.
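The per-role disambiguation Kyler describes comes down to a filter, once every application has its own IAM role. The record shape below is a simplified stand-in for Bedrock's model-invocation log schema, which embeds the caller's identity; field names are assumptions to illustrate the pattern, not the exact schema.

```python
# Sketch: split intermingled Bedrock invocation logs back out per
# application by the assumed-role ARN embedded in each record.
# Record shape is a simplified stand-in for the real log schema.

def logs_for_role(records: list[dict], role_arn_prefix: str) -> list[dict]:
    """Keep only invocation records whose caller identity matches the role."""
    return [
        r for r in records
        if r.get("identity", {}).get("arn", "").startswith(role_arn_prefix)
    ]

records = [
    {"identity": {"arn": "arn:aws:sts::111:assumed-role/chatbot-app/s1"},
     "modelId": "m1"},
    {"identity": {"arn": "arn:aws:sts::111:assumed-role/ingest-pipeline/s2"},
     "modelId": "m2"},
]

chatbot = logs_for_role(records, "arn:aws:sts::111:assumed-role/chatbot-app")
assert len(chatbot) == 1  # only the chatbot's traffic, not the pipeline's
```

This only works if the one-role-per-application discipline holds; with a shared role, the identity field no longer distinguishes applications and the haystack problem returns.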

Ashish Rajan: Fair. I was gonna say, if we talk to AWS about this, I'm sure they'll say, well, we have an AI for that.

But I haven't heard of an agentic SOC, like, going through mountains of logs. They'll just give you another service.

Kyler Middleton: It's fighting fire with fire. It's throwing more AI at the problems AI is causing, and, like, I don't know if that holds up. I don't know if we should just keep piling AI on until the problems go away.

That sounds expensive.

Ashish Rajan: Yeah. I was gonna say, because you've raised a good point about visibility. Traditionally, and I guess, Sai, you mentioned this earlier as well, the choice used to be whether a new application should go into cloud or on-premise.

There are certain things you make that call on. These days, by default, people send that to cloud. [00:36:00] With AI as well, I think the choice is harder, probably between, hey, am I making my decision based on the LLM model, or am I making my decision based on the provider? Sounds like there are clearly a lot more control policies on the Azure side, which kind of counters the flexibility side on AWS.

If you guys had to throw a hat in the ring, for lack of a better word, is there some sense of where people watching or listening to this conversation should start? From a security-first mindset, while still keeping developer velocity at whatever rate it needs to run?

Is there some thought on that?

Sai Gunaranjan: Yeah. So from a security mindset, I think there are pretty good controls that I've seen on Azure, especially when integrating with sensitive data and third-party solutions, and private access to data sources and stuff like that. I think [00:37:00] that's really helpful for us, considering our industry type and things like that, right?

It is not straightforward. There are a lot of challenges when you start doing private access; your private DNS gets in the way, yeah. So there's a good amount of challenges that we've had to jump through to get there, but once it's working, you at least have a secure pattern that you can implement and reuse for other sensitive locations as well. So I think that's definitely there. Also, I think AWS also has good controls, but I've not worked on it that much.

I don't know. I can't speak in depth on it.

Kyler Middleton: Yeah, I think that's a good way to put it. I think the cloud will continue to be the slightly more expensive way to run applications, a way to have a lot of velocity. It's only cheaper when you have incredibly bursty workflows.

And AI is just one of those things that is incredibly bursty. It requires an absolute boatload of compute for 10 or 15 seconds every few [00:38:00] minutes, and that can make it a lot cheaper to run in the cloud, depending on how expensive it is. I think that if we just froze time at the place AWS is in, I would prefer Azure.

I like how they've secured and structured their AI services. It's gonna keep improving. And I love the Lego-block approach of AWS. I know they're gonna keep improving this stuff, and even just disambiguating the logs, like sending each IAM role's calls to a different log group, would make a huge difference.

And that's so minor, I'm sure stuff like that is coming. Hopefully they don't tell me to turn on an AI service to sort it out. Hopefully it's a checkbox. We'll see.

Ashish Rajan: If security-first is the mindset, then I think Azure at least seems to have a better play there in terms of how much control you have, especially, to what you said, Kyler. One of the biggest things you're probably afraid of is: is there a leak, is there an exposure? Which means there is potentially incident response that you have to do, which means logging is quite crucial, getting [00:39:00] your head around logging fairly quickly without having to spin up another AI agent to do that.

I love that perspective, so thank you for sharing that as well. I was gonna say, in terms of a starting point then, because, to what both of you said, a lot of people are probably feeling a lot of FOMO as well, because maybe their organization has not gone ahead with AI, or they themselves are trying to understand what that space is like, in terms of security challenges.

We kind of spoke about this, but maturity level: what is the maturity level that people go through as they deploy applications? You can pick any, the transactional one or the conversational chatbot. What are some of the maturity stages someone can go through as they start building?

Let's take it this way. Obviously there is no greenfield; most people are already using AWS or Azure of some sort, so there's [00:40:00] already a lot of foundational security best practice there, hopefully. Let's ignore the overprivileged IAM user for a second there.

Outside of that, if people are trying to build AI services, I imagine they can leverage a lot of existing practices. The reason I ask this question is because AI is at least being marketed as a thing where, hey, we don't have any tools for this, there is no tool for it, because how do you go against voice and deepfake and all kinds of things that you see on the internet?

At least bringing that back to the bare basics: what can we use today, that we already have, to start building that maturity? And then you can add more layers. Hopefully that made sense.

Sai Gunaranjan: Yes. Yeah. So, foundational security by default from cloud, you know, all the network stuff, the IAM stuff, you already just covered those two.

Yeah, that's already there. But when you start to do AI, I think the additional [00:41:00] control that I would at least start with is model access, using policies. So even keeping the content filters aside for a second, which are also very important, I'm not dismissing that, but just keeping that aside for a second: model access. If you don't trust a certain model, or where it's hosted, or where it's been provided from, like maybe you don't want to enable DeepSeek within your organization, or you don't want to enable marketplace models such as Hugging Face in your environment, and you only have a trusted model that you want to host, or something like that, right? I think that would be a great starting point.

Mm-hmm. And once developers have access to that and they start testing it out, then applying the content filters, and the other filters that provide more relevance to the conversation, would make more sense. That's how I would start with it, Ashish.

And basically do all of this before going to production, right? Before going live with it and exposing it to a larger audience and stuff like that. [00:42:00] Within a controlled space you have all of these policies implemented, test it out, and then roll it out to a much larger implementation.

That's how I'm seeing it.

Kyler Middleton: Yeah, absolutely. The truisms of platform engineering, where you need to shift as far left as you possibly can, are still true. With AI in particular, like we talked about, there are big gaps between the security of the wizards and the defaults, and the velocity that your teams will require to move forward, stay competitive, and keep your business's doors open.

So shift left as far as you can to help and guide. You gotta make it cheap, but you gotta make it secure, and all of that is gonna be work that you do at this point. Maybe in six months the defaults will be secure; we'll see. But clouds like to favor moving fast over secure defaults, so maybe that work will continue to be required for your developer teams that wanna get started.

I think a chatbot is an excellent way to get started. I've open sourced a Slack and Teams chatbot called [00:43:00] Vera that deploys with Terraform. You can go grab it from GitHub; we'll include it in the show notes. If you wanna go, just do it. That's my advice. I know that sounds, you know, infantile, but especially for clouds, you don't have to make this huge thousands-of-dollars commitment.

You can just go get started and play with it. These are really easily ingestible APIs for Bedrock, where you can access AI models, and you can play with guardrails and test things out. Particularly when you have the scaffolding that someone else has provided, you're welcome to put it in Slack, bring ChatGPT in-house privately, and play with it.

Add some of your internal data to a knowledge base and see if you can get the chatbot to talk about it. That kind of little stuff. It's not making huge business value; it's not making millions of dollars in sales. But you need to start upskilling yourself on how AI actually works, because pretty soon those features that you can actually sell, that will make you millions of dollars, are gonna come around.

So make sure your skillset is there to [00:44:00] meet the moment when that comes up.

Ashish Rajan: Do you both think that cloud security folks can upskill into this? 'Cause obviously the controls we mentioned so far are all around cloud security, but then there is a whole conversation around people having to upskill into MLOps as well.

And we kind of spoke briefly about how ML is very different to GenAI. I don't know how different it is; clearly I've never worked in ML. I've done some work in GenAI, thanks to everyone talking about GenAI. Out of curiosity, what's the delta there that you guys find? Obviously both of you have done cloud security and are now working in the AI space as well.

How did you work on that delta, if that makes sense?

Kyler Middleton: There are the sort of vanilla platform-engineer cloud challenges: you're working with IAM, you're working with cloud resources, you need to do logging and traceability and, you know, rate limiting and stuff like that. But there are these totally novel problems that you haven't had to face before.

Is your model [00:45:00] slanted? Is it deciding that, like, all the Indian names are actually doctors in your data set? Is it doing that sort of implicit racist thing that you need to solve? And that kind of measurement is new, right? We don't have to care about that with our logs in our cloud providers and in our IAM stuff.

But that is something that you'll need to address for your AI, and I'm not sure who that is. It could be the cloud team, it could be the DevOps team. I think at any significantly sized place it's gonna be a totally new skillset of AI security engineering, or AI engineering, where you are measuring models and building scaffolding to tell if your models are biased or making poor choices based on things that don't make sense.

There's this historical example where they trained a model to recognize cancerous tumors from pictures, which sounds incredible and so cool. Yeah. But it turns out it could actually just recognize how surgery centers were setting up the [00:46:00] framing, and, my God, there was a little card that had the name.

And so you have to make sure that it's doing the thing that you expect, because you expect it, and not just making a choice that seems right in the moment. That sort of large-scale testing and looking for bias is totally new. That's a new skillset. You're not alone in having no idea how to do that systematically.

So you'll just need to practice and read and learn, and listen to future Ashish podcasts, because it's coming and these tools are changing all the time. There's no right answer right now.
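The measurement scaffolding Kyler mentions can start very small: compare a model's positive-prediction rate across groups and flag large gaps. A toy demographic-parity check follows; the threshold and data are illustrative, not a standard, and real bias auditing needs far more than one metric.

```python
# Toy bias check: compare positive-prediction ("selection") rates across
# groups. A large gap is a signal to investigate, not proof of bias.
from collections import defaultdict

def selection_rates(predictions: list[tuple[str, int]]) -> dict[str, float]:
    """predictions: (group, predicted_label) pairs, label in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs tagged with a (made-up) group attribute:
preds = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = selection_rates(preds)          # A: 2/3, B: 1/3
assert abs(rates["A"] - rates["B"]) > 0.3  # gap worth investigating
```

The tumor-card story is the same failure mode viewed differently: the model keyed on an incidental feature, and only deliberate measurement against held-out, controlled data would have caught it.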

Ashish Rajan: I was gonna joke about the bias thing. All the old-school Indian parents would be really happy if all the Indian names came up as doctors.

I'm like, I knew it, Ashish wanted to be a doctor, so they would definitely be loving it. At least my old-school parents would. Sorry, Sai, go on.

Sai Gunaranjan: No, no. And I think a lot of the controls don't exist, you know, or are actually in very early preview mode on many of the cloud platforms.

Oh, right.

Ashish Rajan: Also, they're not, like, GA.

Sai Gunaranjan: No, no. So for example, like the names example that Kyler brought up, right? If you wanna do something like that, [00:47:00] there are custom blocks that you can create right now, a custom policy you can implement, but that's still in very early preview.

Right now the content filters are only categories like, you know, maybe only sexual content, or only explicit material, or self-harm, and stuff like that, right? If you wanna do a custom policy, it's still in early preview, at least on the Azure side. They have those in preview right now.

Ashish Rajan: Oh, is that why there's a lot of delay in all of these production-ready applications actually being publicly available? Because the content safety services, I guess, have not fully been tested? 'Cause prompt injection is still a thing, and all of that.

Sai Gunaranjan: So there are some built-in policies, right? But if you wanna do more customized ones, which are more specific to your use case, which is kind of what most

Ashish Rajan: enterprises would have to do anyways. Yeah, yeah.

Sai Gunaranjan: Mm-hmm. So I think those, at least as cloud-native solutions, are still not fully built in.

You have to maybe have other checkpoints in place or something else. But built into Azure AI Foundry and those native [00:48:00] solutions, they don't have those built in yet. It's in preview.

Ashish Rajan: Is that why people go for third-party solutions then? Where they go, hey, there's a whole AI security ecosystem being created.

Mm-hmm. And LLM firewalls, there are, like, content safety people. Oh, maybe that's where they're coming from. And Kyler, is that the same with AWS as well, out of curiosity?

Kyler Middleton: Yeah, absolutely. There are tools that are native to the cloud, like the guardrails in particular, that work pretty well, but the logging isn't great.

The resource policies aren't great. So these third-party tools that give you more transparency, and can help you score stuff in a more human-understandable, more feature-rich way, are for sure a gap that a lot of third parties are filling in.

Ashish Rajan: Yeah. Wow. Well, that's a good way to at least close all the technical conversations. Since you've both been here before, I've got three fun questions for you. I wonder if the answers have changed now that a couple of years have passed. I don't [00:49:00] know, maybe we'll find out. Maybe I'll do one question at a time to both of you, and we'll see how we go. So the first one being: what do you spend most time on when you're not trying to solve all the AI bot problems of the internet?

Uh, Kyler, you go first.

Uh, Kyler. You go first.

Kyler Middleton: Squishing my daughter. I have a three-and-a-half-year-old, and I'm just absolutely squishing her. It's funny, 'cause they say don't shake your babies, and she's three and a half, you shouldn't actually shake your babies, but she loves to be shaken and thrown for fun. And that's what I do in my free time.

I'm just manhandling a toddler, like, just tossing her. She loves it.

Ashish Rajan: But you know her, what's it called, her LLM model is learning this new behavior now. It's like, oh, this is fun, I should do more of this. Mm-hmm. Now, what about you, Sai?

Sai Gunaranjan: So yeah, I have a six-year-old and a two-year-old, and with both of them it's very similar games. Whatever I do with my two-year-old, throwing him around or whatever, my six-year-old wants it too.

But she's a different height and a different weight, and I have difficulty with that. [00:50:00]

Ashish Rajan: She's trying to tell you to work out, man. That's pretty much it.

Sai Gunaranjan: Yeah, exactly. It's like a mini workout whenever I go play with them. But that's most of the time: after I do all my work stuff, it's just with them, playing around, chasing them, whatever games they want to play.

Oh, fair. And maybe just cuddling and watching TV. Sometimes, this is also becoming a trend, we just watch some TV with them. Yeah, stuff like that. Fair.

Ashish Rajan: Awesome. And second question for you then: what is something that you're proud of that is not on your social media? Sai, you go first.

Sai Gunaranjan: Oh, I don't know. Most of my personal stuff, like the kids thing and everything else, yeah, I'm very proud of that. Also, I have my small backyard garden that I'm growing.

Oh, you have a veggie patch? Yeah, a fruit patch, a veggie patch, a mix and match of them. We harvested a lot of raspberries [00:51:00] this season. Again, that's not something that I've published, and the kids just love it. It's like a small snack when you're playing around in the backyard; they just go plucking stuff and eating it. They've actually been helping; I kind of take care of the plants right now, and we do have some veggies growing as well.

So yeah, I'm proud of my small garden.

Ashish Rajan: Awesome. Kyler, what about yourself?

Kyler Middleton: That's wonderful. I wish I had something as good as Sai. Outside of work, I do more work, and so I'm trying to find a place to just exist. The way that I do that is I play on a kickball team, like a grown-ups' kickball team.

Someone gets injured every time; that's how you know it's the grown-ups playing. And it's so fun. We also do makeup together, my little daughter and I. Not actual makeup, like wings; she just likes to paint her face purple with my colored makeup, and she loves it.

Ashish Rajan: Wait, I was gonna say, what's kickball?

Is that the same as soccer, or is that different?

Kyler Middleton: It's like baseball, but with a big red rubber ball, and you just roll it. I don't know how to compare [00:52:00] this; I need to Google it in British sports.

Ashish Rajan: Yeah, okay. I'll find out what kickball is; I've never heard of it. I mean, I've heard of padel, that paddle-racket thing, or whatever.

That was the theme, but clearly that's gone. Maybe the final question: what is your favorite cuisine or restaurant that you can share with us? Kyler?

Kyler Middleton: Oh my goodness, I love barbecue. I like Southern barbecue in particular, just eating my body weight in ribs until it's all over your face.

That's my happy place. I wanna be there right now, actually. I'm gonna go. That sounds amazing.

Ashish Rajan: That's one of my favorites as well, so I'll add that to the list, and I'll join you. Sai, what about yourself, man?

Sai Gunaranjan: Ice cream. Any kind of ice cream, really.

Ashish Rajan: Wait, any kind of ice cream, or gelato?

Sai Gunaranjan: Oh no, ice cream, not gelato. I don't like gelato at all. Oh, interesting. Okay. Yeah, proper, like vanilla. I think vanilla is my favorite, but I'm open to other options.

Ashish Rajan: But you won't say no to ice cream, no matter what flavor?

Sai Gunaranjan: Oh no. Yeah, yeah.

Ashish Rajan: Yeah, for sure. So your model is trained to not be biased towards [00:53:00] a flavor.

Kyler Middleton: Vanilla is the best flavor. People like to dog on vanilla, but it is by far the best flavor.

Ashish Rajan: You can add it to anything. Every once in a while we try something else.

Sai Gunaranjan : Yeah. Yeah. But good. Yeah, yeah, yeah. No, yeah.

Ashish Rajan: Awesome. Now well thanks. Thanks for sharing that, guys. Um, where can people find you and connect with you, Kyler?

Uh, you go first.

Kyler Middleton: I am terminally on, uh, LinkedIn, for better or worse, so it's easy to find me there, and I do a great deal of writing on letsdodevops.com, and I do a podcast with Ned Bellavance on Terraform and DevOps called Day Two DevOps. And,

Ashish Rajan: uh, s yourself?

Sai Gunaranjan : I'm primarily on, on LinkedIn. I don't have a lot of social media presence, uh, just primarily LinkedIn, and I post stuff there, maybe new releases and updates from Microsoft.

But yeah, I'm primarily on LinkedIn. Awesome. Yeah,

Ashish Rajan: I was, I was gonna say, maybe you should post some of those raspberry pictures there. Man. I think that would be a great viral post right there because it would be so out of the blue people are like. [00:54:00] Why is this guy posting raspberries? Yeah.

Sai Gunaranjan : That's why I don't maybe want to post, but

Ashish Rajan: but I appreciate both of you spending your time with this. I would definitely put the talk in there as well, which you both gave at fwd:cloudsec. That's pretty awesome. Uh, so yeah, thank you so much for joining me and, uh, sharing all that as well.

Uh, hopefully we don't have to wait two years for you to come back with us next time, and you come back a bit sooner. But thanks so much for tuning in, folks. Uh, and uh, thank you for joining us, Kyler and Sai. Thank you so much.

Thanks everyone. Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv We are also publishing these episodes on social media as well, so you can definitely find these episodes there.

Oh, by the way, just in case there was interest in learning about cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way [00:55:00] up to what is CNAPP, or whatever new acronym comes out tomorrow.

Thank you so much for supporting, listening and watching. I'll see you next episode.
