Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.
Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.
This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.
Questions asked:
00:00 Introduction
02:20 Who is Yash Kosaraju? (CISO at Sendbird)
03:30 Sendbird's Pivot: From Chat API to AI Agent Platform
05:00 Balancing Speed and Security in an AI Transition
06:50 Embedding Security Engineers into AI Sprint Teams
08:20 Threats in the AI Agent World (Data & Vendor Risks)
10:50 Blind Spots: "It's Microsoft, so it must be secure"
12:00 Securing AI Agents vs. AI-Embedded Applications
13:15 The Risk of Agents Making Changes in Customer Environments
14:30 Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
17:30 Practical Multi-Layer Security: Device, Browser, Identity, MFA
18:25 What is "Trust OS"? A Foundation for Responsible AI
20:45 Balancing Agent Security vs. Endpoint Security
24:15 AI Incident Response: When an AI Gives a Wrong Answer
29:20 Security for Platform Engineers: Enabling vs. Blocking
30:45 Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
32:45 Building a "Security as Enabler" Culture
36:15 What Questions to Ask AI Vendors (Paying with Data?)
39:20 Personal Use of Corporate AI Accounts
43:30 Using AI to Learn AI (Gemini Conversations)
45:00 The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
48:20 The AI CTF: Gamifying Security Training
50:10 Fun Questions: Outdoors, Team Building, and Indian/Korean Food
Yash Kosaraju: [00:00:00] You're now balancing speed with security at a very different level. We went from a mature organization to let's pivot, use this new technology and then sort of build something new from the ground up very, very quickly. The attack paths are different. The types of security issues are different. The data security models are different.
The major blind spot was the notion of, yes, we're using GitHub Copilot. It's an enterprise tool backed by Microsoft, so it's secure. It's okay to use.
Ashish Rajan: How different would that be to a traditional zero trust?
Yash Kosaraju: A true multi-layer approach doesn't use, I guess, flashy marketing terms and call it zero trust. It's plainly multiple layers of security.
The definition of what is an incident is also changing. When an AI agent gives a wrong answer or a suboptimal answer, how do you classify that? It is an incident of sorts. It's not a breach. This is new. A staff AppSec or product security engineer is not automatically a staff AI security engineer.
It does put a decent amount of stress on AppSec engineers where [00:01:00] companies are moving really fast on AI and they're expected to help secure them.
Ashish Rajan: AI systems are mostly of two types, AI agents or applications embedding AI. Now, the first one was a thing before ChatGPT or anything else kind of exploded in this world.
Fortunately, I had a chance to speak to the CISO of Sendbird, Yash Kosaraju. Sendbird is a chat agent platform for B2B enterprises, and it had been doing that for many years before ChatGPT exploded. So they had to quickly transition from the API-driven world they were living in before to an AI-driven world with AI agents and everything floating around. In this conversation with Yash,
we spoke about some of the changes and the challenges they had, the things that they had to do to evolve into this new world that they're moving into, how he's looking at multi-layer trust as the foundation for building a security team while giving the entire company access to AI, and how he believes others can position themselves and their organizations to do better with safe and secure usage of AI.
All that, and a lot more in this episode with Yash. I hope you enjoy this episode. If you know someone who's working on AI security, especially in the [00:02:00] cloud space with the GitHubs of the world, the AWSs of the world, all having AI, this is the episode that you send them, and hopefully, if you're in that space as well, you definitely enjoy this episode.
And as always, if this is your second or third episode, I would really appreciate it if you take a quick second to make sure you're subscribed and following us on the audio platform, if you're listening to this on Spotify or Apple, or if you're watching this on YouTube or LinkedIn. It really means a lot, because
we are doing this for you out of love. So if you have a few seconds, do this quick thing of making sure that you are subscribed and following us on whatever platform you're listening or watching this on. It really means a lot. Thank you so much for that. I hope you enjoy this episode and I'll talk to you soon.
Peace. Hello. Welcome to another episode of Cloud Security Podcast. I've got Yash with me. Thank you, man. Thanks for coming on the show.
Yash Kosaraju: Thanks for having me.
Ashish Rajan: Oh man. I'm excited for this conversation 'cause we've been trying to kind of coordinate a few time zones and a few things, but I'm super excited for this conversation.
Maybe to start off with, if you can share a bit about yourself and your professional journey, man.
Yash Kosaraju: Yeah, totally. So I have what you would call probably the most straightforward journey into [00:03:00] security. It started with a master's degree in cybersecurity. I got that from Johns Hopkins. Then started, uh, consulting with iSEC Partners in the Bay Area for about a year or so.
From there, moved on to Box, did a little bit of everything, uh, moved on to Twilio. Twilio was a crazy ride. The company grew from, I think, about 1,500, 1,600 people to about 8,000 in my four years there. So it was interesting to see that hypergrowth and how security scales with it. From there, moved on to Sendbird as their first CISO, about three and a half years now.
Ashish Rajan: Uh, and Sendbird, for people who don't know, I mean, I guess you've come from a Twilio background, so it kind of feels like a natural shift. So what's Sendbird?
Yash Kosaraju: Sendbird's an AI agent platform that serves customer service. So a lot of our customers use Sendbird's AI agents to interact with their customers or their end users and provide very good, I guess, 24/7, 365 customer support.
Ashish Rajan: And I guess the [00:04:00] reason I wanted that explanation is also for context for people. You guys were doing this before AI. Well, I say before AI, I mean before ChatGPT kind of exploded the world. You were doing it before, when it was very much the whole API-driven world, and then you kind of had to change that suddenly to this new AI agent world that we are moving on to. I would love to know, what were some of the biggest shifts you saw with the changes that you made?
What was the world before? And how did the world change as the transition happened?
Yash Kosaraju: Yeah, so Sendbird was historically a chat API company with a lot of enterprise customers. We had some of the big names both in the US and in APAC, and as we saw the technology evolve, the company had to evolve as well to serve our customers better than what we were doing.
So we went from a mature organization to let's pivot, use this new technology, and then [00:05:00] sort of build something new from the ground up very, very quickly. Couple of challenges. One very obvious: you're using AI. It's a new territory. The engineering teams and product teams are figuring it out, and so was security, right?
So we had to figure out a technology that we had not used before, and we had to help really smart engineers do the right thing, to do it securely. So that is a big challenge in itself. Yeah. The other one is you're now balancing speed with security at a very different level. You go from, fine, we are a really fast startup, but a mature startup, so be it.
And we had mature security practices, to now we have this seed-stage company within a mature company that's moving at lightning speed, wanting to build POCs and experiment with stuff. Like, how do you enable them to move at that speed? We made a bunch of changes. One of the big things that paid off was
going from general threat modeling and design reviews, where engineering would come to us, we all get into a room, sit for an hour, talk through the [00:06:00] design, to embedding a security engineer into the team that was actually doing the POCs and, like, testing things out. Yeah, being part of their sprints, their standups, their Slack channels, and giving real-time feedback.
Small tidbits of feedback, but like very real time in the moment so it doesn't feel like they have to stop and then go get a security review. That plus all the other like infrastructure security measures, like IAM just in time access, all of the things that we had built as foundational layers, those help keep those guardrails in place because it was the same infrastructure that we were leveraging to build this new product.
Ashish Rajan: Interesting. Because I think one of the biggest things a lot of people talk about, and I made a post about this yesterday as well, is that it can feel like a path unknown for a lot of people. Was it easier because, you know, these days most AI agents are either a chatbot or an application embedded with AI, and you were coming from an organization [00:07:00] which was already building chatbots for all these enterprises?
I mean, 'cause I imagine most people are like, oh, Yash had it easy 'cause he was already building a chatbot before, so I just want to add some more color to that as well. Did that background kind of help with that transition, or was it the same problem, or was it a different problem because it was a different challenge?
Yash Kosaraju: So the infrastructure was the same, right? So we had the same scaled infrastructure, which was built to scale, built with all the different multi-layered security approaches that we had taken. So that part, you could say, was a little easier to get past, but then on the application layer, you're building something totally different.
You're now relying on LLMs, and honestly, nobody truly knows how they work. We have engineers who know what LLMs do, how they work, but again, like, truly the details, it's really hard. So you go from, fine, we are building an application, go test for OWASP Top 10, go test for XSS, SQLi, all of those things.
Yeah, that changes to a lot of [00:08:00] different things, right? So the attack paths are different, the types of security issues are different, the data security models are different. It's a very different way of thinking when you're trying to secure an AI-based application versus a traditional application. So that mind shift of, okay, now we are an AI-first company, how do we first enable the company to be AI-first?
How do we do it in a secure way that needed a lot of work?
Ashish Rajan: Did you guys have to put in, like, special guardrails for agentic AI development? Well, and I say agentic, it's the AI agent, I use that word very loosely. And I see you smiling as well. 'Cause it seems we're technically all making AI agents, which are, yes, level one, but hey, everyone wants to call it agentic AI.
So the agentic AI agents that you guys are building, 'cause a lot of people are also asking, hey, what are some of the threats that we are seeing, and what kind of frameworks can we use for this? So maybe start there. What are some of the threats that changed with this new world for you?
Yash Kosaraju: I think it's that the [00:09:00] data's become, well, the data was always front and center, but it's become much more important, right?
If you're using an AI vendor, I think it's paramount to ask the question of like, how are you using my data? Are you building your own LLM and using my data for training your generic LLM? Or are you using a commercially available LLM? What contracts do you have with them? How is your customer data, which is my data?
Yeah, being used by this other LLM that you're using? So I think that's become very important. Sort of extending on that, you also then need to ask, okay, if and when I decide to terminate my contract, how do you delete my data if it has been used in some LLM training context, even if that is just for my account or app that you used the training for, right?
So the whole, I guess, 360 lifecycle of that data becomes paramount. And then if you look at all the new OWASP AI or LLM Top 10 items that have come up, like context injection or hallucinations, all of those have become paramount too. And you [00:10:00] will see recently there have been a bunch of cases where AI makes a mistake, and that does cause, like, monetary, financial sort of issues to the entity using it, right?
Like we saw the case where an airline AI bot made a mistake and the user was able to get a ticket for cheap, and there are multiple things that happen that way. So you need guardrails around that to make sure such things don't happen and the AI stays within the boundaries of the context that you set it for.
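To make that idea concrete, here is a minimal sketch of what an action-level guardrail in that spirit could look like. This is not Sendbird's implementation; the action names, refund threshold, and policy outcomes are all hypothetical, and a real system would load them from customer configuration.

```python
# Hypothetical sketch: a guardrail that sits between the LLM's proposed action
# and the system that executes it. Policy values below are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    action_type: str               # e.g. "issue_refund", "change_seat", "answer_question"
    amount_usd: float = 0.0
    requires_backend_write: bool = False

ALLOWED_ACTIONS = {"answer_question", "issue_refund", "change_seat"}
MAX_AUTO_REFUND_USD = 50.0         # anything above this goes to a human agent

def evaluate_action(action: ProposedAction) -> str:
    """Return 'allow', 'escalate', or 'deny' for an agent-proposed action."""
    if action.action_type not in ALLOWED_ACTIONS:
        return "deny"              # keep the agent inside the context it was set up for
    if action.action_type == "issue_refund" and action.amount_usd > MAX_AUTO_REFUND_USD:
        return "escalate"          # human oversight for high-impact actions
    if action.requires_backend_write and action.action_type == "answer_question":
        return "deny"              # a pure Q&A turn should never write to the backend
    return "allow"

if __name__ == "__main__":
    print(evaluate_action(ProposedAction("issue_refund", amount_usd=499.0)))  # escalate
```

The point of the sketch is that the decision happens outside the model: even if the LLM is talked into promising a cheap ticket, the deterministic layer decides what actually executes.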
Ashish Rajan: Yep. Yep. And do you find that with the AI agents that are being developed, what's the common pitfall? 'Cause I imagine as you're trying to focus on security, or maybe actually a better way to put this is, obviously most people, to what you said, are using third parties like GitHub, which has Copilot, AWS has Bedrock, SageMaker, and the other enterprise AI tools, which, as has been proven in the past, are not secure by default.
So what are some of the blind [00:11:00] spots that appeared? Obviously the data was one element you called out. Were there any other blind spots that came up as you guys were exploring how to secure this new AI capability in your organization?
Yash Kosaraju: Yeah, I think the major blind spot was the notion of, yes, we're using GitHub Copilot.
It's an enterprise tool backed by Microsoft, so it's secure, it's okay to use. But as you look at it, right, the different terms of service for, say, a beta model that Copilot has released versus a GA model. So it's important that you look at those, look at the, uh, nuances of using one model versus the other, one feature versus the other, and not have a notion of it's secure because it is Microsoft, or secure because it is on AWS.
Ashish Rajan: Yeah. Yep. And would you say, in terms of, I guess, endpoints, 'cause there's the whole agent side. Mm-hmm. But a lot of people are also building applications behind the scenes that are being embedded with LLMs, [00:12:00] which is kind of like the layer above the GitHubs and the Bedrocks of the world, the application being built on it.
Were there any blind spots there in terms of how to look at the agentic capability? And I feel like I've heard agentic so much that I keep using the word agentic as well. So the AI capability, let's just say, in an application. So when people are embedding AI into an application, just the application being built on top of this GitHub Copilot and AWS Bedrock or Azure, whatever.
Were there any changes that you guys had to do at that layer, uh, in terms of what that would look like?
Yash Kosaraju: Are you talking as, as a customer or like company building agents or as an enterprise?
Ashish Rajan: Sorry, I meant, because obviously you guys were doing what you were doing before to keep it secure anyway. I meant more on the blind spot side.
What were the newer threats that you saw from that perspective that you had to deal with? Was the OWASP Top 10 for LLMs enough? 'Cause I feel a lot of people talk about input and output. Hey, focus on input. Mm-hmm. And output. Let the LLM be the black box. [00:13:00] Was that the same framework to be applied to an AI-embedded application?
Yash Kosaraju: I think to some extent, but there are other nuances, right? As an AI agent, it needs to be able to make changes in the customer's backend. So for example, our customer service AI agent will be able to go make, let's say, account changes, top up something, give a refund, or get some information from the backend about the end user that's asking a question.
Yep. So you now go into this world where our applications have a path to make calls into the customer's environment. So that gets tricky, because you then have to figure out authentication, authorization, like, how do you get logs, and what's the visibility from what's happening in your environment to an API call that's going into the customer's environment?
Like, where's that handoff as a responsibility? How do you make sure the customer isn't shooting themselves in the foot, making sure they have secure defaults and they think it through, right? Because [00:14:00] you often see, and this is more of a B2B or an API vendor problem than a general AI problem, when our customers do vendor reviews, it's the security team that's involved.
They do a very thorough job, but post that, during implementation, I haven't seen a lot of customers have their security team involved. And how do you set up Sendbird? Like, how securely do you set it up, right? What settings do you have? What configurations can you use or not use? What's right for you?
And those things are amplified in the AI agent world because now the AI agent actually performs actions in your environment and securing those and having good configurations around that becomes much more important.
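As a rough illustration of what securing that bridge can involve, here is a sketch of a platform signing and logging the calls it makes into a customer's backend so the receiver can verify origin and both sides keep an audit trail. The header names and the HMAC scheme are assumptions for the example, not Sendbird's actual mechanism.

```python
# Illustrative sketch only: one way a platform could authenticate calls it makes
# into a customer's backend and keep a record of what crossed the boundary.
import hashlib
import hmac
import json
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO)
SHARED_SECRET = b"customer-provisioned-secret"  # exchanged out of band during setup

def call_customer_endpoint(url: str, payload: dict) -> int:
    body = json.dumps(payload).encode()
    timestamp = str(int(time.time()))
    signature = hmac.new(SHARED_SECRET, timestamp.encode() + b"." + body,
                         hashlib.sha256).hexdigest()
    req = urllib.request.Request(
        url, data=body, method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Request-Timestamp": timestamp,   # lets the receiver reject replays
            "X-Signature": signature,           # lets the receiver verify origin
        },
    )
    # Log on the platform side so there is visibility into the handoff.
    logging.info("agent action sent to %s: %s", url, payload.get("action"))
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The customer's side would recompute the signature over the same timestamp and body and reject anything that does not match or is too old, which is one way to draw that "dotted line" of responsibility a little more sharply.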
Ashish Rajan: And would you say, if you're building on AWS Bedrock like you guys are, how did you find those capabilities to be from your security perspective?
As you were building it, did you find that a zero trust approach or a multi-layer [00:15:00] approach, because I remember reading about this, that you guys went down the multi-layer trust path. Why that path? Were there any shortcomings in the way, say, GitHub, AWS, or any of those were building as you're building applications?
Why this added layer, if that makes sense? 'Cause obviously, to what we said, they obviously have some security capability. Amazon has some, GitHub has some. So obviously not just trusting that, but adding your own multi-layered trust approach. Why use that approach for securing your AI applications?
Yash Kosaraju: I guess that's applicable to securing any applications. That doesn't have to do with Bedrock or LLMs. It's like AWS and GitHub: they're very powerful, enterprise-ready tools. They have a lot of settings, they have a lot of features, but those settings and features are set by default in a way that enhances productivity.
They're not secure by default. So whenever you get onto these platforms, you have like a multi account strategy or you have a multi org strategy within GitHub. It is a lot of [00:16:00] work to set them up the right way. And one of the general philosophies I have is some of your security controls will fail. It's not a question of if they will fail, it's when will they fail, right?
So having a multi-layered approach where you say, okay, what if this fails? What protects us next, right? So having multiple layers as we mature, that has been the foundational model which we have been using to build, be it IAM, be it just-in-time access, be it browser trust, to other laptop-based solutions as well.
Ashish Rajan: So how different would that be to a traditional zero trust?
Yash Kosaraju: I guess traditional zero trust is also multi-layered in some sense. A true multi-layered approach doesn't use, I guess, flashy marketing terms and call it zero trust. It's plainly multiple layers of security.
Ashish Rajan: It just does the job. Yes. Fair. Uh, maybe if you can share an example. Like, when you say multi-layer, I think before we started recording, you were talking about the end-to-end flow [00:17:00].
Is it just as simple as Ashish, as a part of your workforce, logging in, and then from that point onwards I'm maintaining a trust level, as a lot of people say there is with zero trust? How does that work across this new world that you guys have created for yourselves?
Yash Kosaraju: So with multi-layered approach, what we do is like when you log in and you try to access a GitHub, there are multiple things that are checked, right?
Like, first, is the request coming from the laptop that the company has given you, or is it a personal device? Then, once it's from a company device, is it coming from a Chrome instance or Chrome browser that's enrolled within the Google enterprise workspace platform that we have? And then Okta, which is our single sign-on platform, does a bunch of checks, like, what's your device health? It does integrations with our endpoint protection system, which is CrowdStrike, of, like, what does that posture look like? Then we have passkeys and FIDO2-based MFA that you have to go through. There's [00:18:00] Okta Verify, there's fingerprint, there's YubiKey.
There's, like, multiple layers, and then you have passwords, because some of the countries we operate in still require passwords for people that have access to sensitive systems. So this is like four or five layers that happen in the background that you as a user wouldn't even realize.
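A toy sketch of that layered evaluation is below, just to show the shape of the idea: every layer is checked independently, so no single control passing (or failing silently) grants access on its own. The check names and signals are placeholders, not the actual Okta or CrowdStrike integration.

```python
# Toy model of layered access checks: all layers must pass; any one failing blocks access.
from typing import Callable, NamedTuple

class AccessRequest(NamedTuple):
    device_is_company_managed: bool
    browser_enrolled_in_enterprise: bool
    endpoint_posture_healthy: bool      # e.g. fed from the EDR agent
    fido2_mfa_passed: bool
    password_ok: bool

LAYERS: list[tuple[str, Callable[[AccessRequest], bool]]] = [
    ("managed device",   lambda r: r.device_is_company_managed),
    ("enrolled browser", lambda r: r.browser_enrolled_in_enterprise),
    ("device posture",   lambda r: r.endpoint_posture_healthy),
    ("FIDO2 MFA",        lambda r: r.fido2_mfa_passed),
    ("password",         lambda r: r.password_ok),
]

def evaluate(request: AccessRequest) -> tuple[bool, list[str]]:
    failures = [name for name, check in LAYERS if not check(request)]
    return (not failures, failures)

allowed, failed_layers = evaluate(AccessRequest(True, True, False, True, True))
print(allowed, failed_layers)  # False ['device posture'] -- one failed layer is enough to block
```

That is the "assume some control will fail" philosophy in miniature: the other layers are still standing when one does.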
Ashish Rajan: Interesting. And you also have something called Trust OS that you have spoken about as well, as a foundation for responsible AI. What's the thinking there?
Yash Kosaraju: So the thinking there is, like, as a company, we've always been big into, how do we do the right thing? How do we give our customers the ability to do the right thing and make their environment secure and safe? And with Trust OS, the company has built the foundation layer based on, like, observability and human oversight, so that they can get more confidence in enabling Sendbird's AI agents to do important actions on their behalf.
And for that to happen, they need to be able to see what the AI agent is doing. They need to have the ability to write, [00:19:00] like, automated test cases, just to have peace of mind of, like, yes, there are enough tools that the platform's providing us for us to be comfortable in giving it the ability to do things that would actually make a difference.
Ashish Rajan: Would you say it changes, or would it be any different? I think the question that I'm trying to ask is, is the Trust OS more about looking at a system as an OS? 'Cause a lot of people may look at the Trust OS as, like, an AMI or the right image layer. Or is it more that the entire system is being used as, like, a trust system?
Yash Kosaraju: It's the entire Sendbird platform. The controls you have within the platform, which are more targeted around, like, observability; the controls you as the owner of the application have; the human oversight your human agents can have on what the AI does. Yeah. It's more assuring our customers of, like, yes, there's an unknown AI that's performing actions, but here are all the [00:20:00] ways you have oversight.
You have the high-level visibility into what's going on, right? The conversations, the actions that were taken, the resolutions that were made, categorizing them, uh, surfacing them. And, like, also on the flip side, giving the customers the ability to configure the types of things the AI agent should and should not do, and build test cases around that, so they can get alerted when something changes and one of those test cases breaks.
Ashish Rajan: And would you say, I think that's a good approach as well, because then your entire ecosystem is the trust layer at that point. Yeah. One element of it.
Yash Kosaraju: Right.
Ashish Rajan: And as you guys are working on securing these agents and endpoints, you mentioned earlier that your team's now part of almost like a cross-functional team.
It's part of all the development, to kind of have that initial input when the design phase is happening and kind of enable a lot of that. How are you finding the balance between protecting the [00:21:00] agents and the endpoints? Uh, 'cause obviously, at the moment in cybersecurity, these are all very separated things.
Mm-hmm. There's a lot of AI vendors, there's a lot of endpoint vendors, there's a lot of cloud vendors. All of that is obviously a lot of different kinds of information. How are you guys finding a balance in terms of finding the right security posture, for lack of a better word, and what's the hardest part of doing this?
Yash Kosaraju: So let's split that out, right? Let's break that down. So one, it's choosing the LLMs that we work with. So my team works closely with the AI/ML team to figure out, like, who do we use under the hood, under the blanket in a way. What contracts do we have with them? So my GRC team's involved in figuring out contracts, the data flow issues, like data agreements that our vendors, be it AWS or Microsoft or OpenAI, will not use our customers' data for training.
So that's in the contract between Sendbird and those vendors. So that's one thing that we're involved in. And then the security engineering [00:22:00] team also works with them to figure out what versions of these we use. Like, what can we enable? What's in, like, beta mode? What's GA? What are the differences? Review those and make sure we are using the right models for our customers. And again, there's a balance of, like, usability versus safety there, right? So we figure that out with the team. I think a big part of it is acknowledging that the AI/ML team is the expert. Yeah, we are the security folks, but, like, they know what they're talking about.
It's us asking them the right questions of, like, what can go wrong. Getting them into the mindset of thinking, like, how can this break, and how can we detect things when they break early? How do we fix it? How do we minimize the risk to both Sendbird and its customers? So that's one phase of it. And moving to the application itself, right, it's now testing the application, testing for hallucination, context switching, how's the AI responding? Is there PII in this? So, like, building features where, in the Sendbird AI agent dashboard, you can go in and say, like, mask PII. So [00:23:00] as a human agent, when you go review all the back-and-forth chat or conversation that has happened with the agent, the admin can go and say, mask all the PII, so the other agents don't see PII.
So building security features into the platform, right. They all go into, again, what we talked about as the trust layer, or the Trust OS. And then the other side of it, which we briefly touched upon, is we also have the customer endpoints that they configure, that we would then make a call to when certain things happen.
Yeah. And, like, working with them to secure it the right way. Giving them the tools of, like, enforcing HTTPS, giving them the ability to have IP allowlisting on, like, a certain CIDR range of Sendbird's, from where these calls would originate, giving them the option to do, like, authentication, like header-based authentication and things like that.
So that communication is also secured as much as we can. Again, there's that balance of, like, what can we provide, what does the customer accept, and what's that middle ground of, like, [00:24:00] yes, it's usable, plus also it's just not, like, an open API that we are hitting that's open to the internet.
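For a sense of what those customer-side controls can amount to in practice, here is a minimal sketch of the three checks just mentioned, as seen from the customer's endpoint: HTTPS only, source-IP allowlisting against a published CIDR range, and header-based authentication. The CIDR value and header name are placeholders, not Sendbird's real published values.

```python
# Minimal sketch of customer-side validation for inbound calls from an AI agent platform.
import ipaddress

PLATFORM_CIDR = ipaddress.ip_network("203.0.113.0/24")   # example documentation range only
EXPECTED_TOKEN = "customer-configured-shared-token"       # set during integration

def accept_inbound_call(scheme: str, source_ip: str, headers: dict) -> bool:
    if scheme != "https":
        return False                                       # enforce TLS end to end
    if ipaddress.ip_address(source_ip) not in PLATFORM_CIDR:
        return False                                       # only the vendor's published range
    if headers.get("X-Auth-Token") != EXPECTED_TOKEN:
        return False                                       # header-based authentication
    return True

print(accept_inbound_call("https", "203.0.113.42", {"X-Auth-Token": EXPECTED_TOKEN}))  # True
print(accept_inbound_call("http",  "203.0.113.42", {"X-Auth-Token": EXPECTED_TOKEN}))  # False
```

Each check is cheap on its own; together they are exactly the kind of secure-default configuration that tends to get skipped when the security team drops out after the vendor review.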
Ashish Rajan: Interesting. And would you say, I guess the flip side to it also is the AI agents. At least the reason I keep hopping on the whole thing is because we are all talking about AI, that AI agents are taking autonomous actions. Mm-hmm. And one of the things that is probably not spoken about enough in this context is the whole incident response.
Like, I asked a whole question on LinkedIn and other places as well. What is an incident is also being redefined, because, security incident, what is an actual incident? 'Cause you come from that background where you guys have built an agent before, well, a chatbot agent, before it was a thing, and you've done this in the B2B space.
What is the new kind of challenge that came with this new AI from an incident response perspective? Is the multi-cloud world making it complicated? What are some of the challenges you're seeing, and perhaps where is there still work needed?
Yash Kosaraju: I think it's the [00:25:00] blurring of lines of our environment making calls into the customer's environment.
Historically, when it was just a bot which didn't have the ability to make state-changing actions, my incident response team had sort of control, in a way; they knew where the logs were. We have a SIEM that ingests, like, terabytes and terabytes of log data every month. So, like, detections are built on top of it, so we know where to look. We run simulations, we do tabletops. It's a tried and tested method that we keep on improving upon. But now with the AI agents, you add this other element where a call is going into a customer's environment, and then the origination of the incident could be either in our environment or somewhere in the customer's environment, with sort of a dotted line between the two.
Yeah, so figuring out how that works. And, like, when you're in the middle of an incident, these unknowns, even though they may be small, they cause big roadblocks, because you're now in [00:26:00] uncharted territory almost, and it's a very high-stress environment, right? And you can't test this out to cover all your bases.
I think that's one thing that we are internally figuring out, like, what does that look like? And to your point, the definition of what is an incident is also changing, right? When an AI agent gives a wrong answer or a suboptimal answer, how do you classify that? It is an incident of sorts. It's not a breach.
There's no data breach, but still there's something that has happened that wasn't supposed to happen. Like, how do you classify it, how do you detect that? So that's something my incident response team is working on with the product teams, to figure out how we best build around it, think through all of these things, so that when something like this happens, we are ready to help our customers and we are ready to work with them to sort of resolve these things quickly.
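One way to think about that classification question is a simple triage of agent events into severities, so quality failures get tracked without being handled like breaches. The categories and fields below are invented purely to illustrate the distinction being drawn here, not a description of Sendbird's process.

```python
# Hypothetical triage sketch: bucket AI agent events so a wrong answer is tracked,
# a boundary violation is investigated, and data exposure triggers the full IR process.
from enum import Enum

class AIEventSeverity(Enum):
    QUALITY_ISSUE = "quality_issue"        # wrong or suboptimal answer, no data exposed
    POLICY_VIOLATION = "policy_violation"  # agent acted outside configured boundaries
    DATA_INCIDENT = "data_incident"        # cross-user data or PII exposure

def classify(event: dict) -> AIEventSeverity:
    if event.get("exposed_other_users_data") or event.get("leaked_pii"):
        return AIEventSeverity.DATA_INCIDENT
    if event.get("action_outside_allowed_scope"):
        return AIEventSeverity.POLICY_VIOLATION
    return AIEventSeverity.QUALITY_ISSUE

print(classify({"wrong_answer": True}))                        # AIEventSeverity.QUALITY_ISSUE
print(classify({"action_outside_allowed_scope": True}).value)  # policy_violation
```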
Ashish Rajan: Interesting. Actually, maybe that makes me curious. Is there information being provided by these LLM providers for your incident response teams to work on? Or is it more at the input/output stage that you're able [00:27:00] to kind of pick up that information and go, hey, this doesn't seem right, this is potentially an incident?
Yash Kosaraju: I don't know the answer to the first question, if they have anything out there. We have looked, but not as thoroughly as...
Ashish Rajan: You don't need to, like, I've not heard of this, and obviously enterprise versions have logging enabled, but they're not giving you... I obviously don't wanna comment on that, but it sounds like there's a bit of an unknown black box there, which is where the thing comes in from.
Yash Kosaraju: Let me say this. As far as I know, and I'm not very confident when I say this, as much as I've seen, there isn't anything that they provide. Yeah. Now, moving past that, we are looking at the input and output, right? Like, is the output having data that it shouldn't? Is there cross-user data that's going through the LLM?
Like, we have safeguards, but then we also need monitoring systems on top of it, right? So that's what we are looking at. And the customers, our customers, can set specific rules of, like, do not touch on this [00:28:00] topic. Here's your RAG, here's what you're supposed to look at, here's how you do things. If it's deviating, that's what we would want to know.
Yeah, and that's, again, to your point, in the input/output layer of sorts.
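A rough sketch of that kind of input/output monitoring is below: check the agent's reply against customer-configured "do not touch" topics, flag possible cross-user data, and pattern-match obvious PII before anything feeds the alerting pipeline. The topic list, PII pattern, and function shape are all illustrative assumptions.

```python
# Rough sketch of an output monitor that feeds detections, not an actual product feature.
import re

BLOCKED_TOPICS = {"pricing negotiations", "legal advice"}   # customer-configured rules
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]        # e.g. SSN-like strings

def review_output(reply: str, requesting_user_id: str, referenced_user_ids: set[str]) -> list[str]:
    findings = []
    lowered = reply.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            findings.append(f"blocked topic mentioned: {topic}")
    if referenced_user_ids - {requesting_user_id}:
        findings.append("cross-user data referenced in reply")
    for pattern in PII_PATTERNS:
        if pattern.search(reply):
            findings.append("possible PII in reply")
    return findings  # non-empty findings go to the monitoring/alerting pipeline

print(review_output("Your order shipped. SSN 123-45-6789", "u1", {"u1", "u2"}))
```

The model itself stays a black box; the detections live entirely on what goes in and what comes out, which matches the layer of visibility the providers actually expose.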
Ashish Rajan: Yeah. Yeah. Which is kind of where most of this, and that's where the logs are coming from. That's where the, yep, potential breach is being picked up as well. Yes. At the end of the day, we don't really know what's happening inside OpenAI or Anthropic or whatever.
I don't think they're gonna give us their logs, let's just say that. No one's really that important enough yet. Well, maybe one day. 'Cause I think this is the same with AWS and Azure and things as well. Like, we get a version of, hey, this is the application log in your CloudWatch.
But we don't really know what's happening on a deeper layer, which could easily solve the problem for a lot of people. But anyway, I think it's that abstraction
Yash Kosaraju: layer, right? Like AWS says like, Hey, we are abstracting a bunch of things for you. Anything underneath that is our responsibility. We are not gonna tell you anything about it.
Same with OpenAI, just that layer seems to be much higher. [00:29:00] Yeah, and it's just, like, on the application layer. It's not infrastructure as a service, to be fair to them as well. Yeah. They're providing an LLM or software as a service, so that's the layer they're operating in.
Ashish Rajan: Yeah. Yeah. You get a log that Ashish logged in and did something, but that's pretty much, maybe I'm oversimplifying it, but that's the kind of information you can expect.
But what about engineers who are building platforms? 'Cause there's a lot of people that I'm talking to who are in this world where, initially, when there is no standard, many people would use any version of an LLM they find; they might go for whatever. And eventually there'll be a point, at least this is where my thinking is, where it'll go down the same path of what we did with cloud native, where there'll be a platform, there'll be engineers, and all of that. At the moment, what are you guys thinking in terms of security being usable for engineers and developers? You kind of mentioned earlier that your team is part of the development and [00:30:00] early in the cycle.
Is that the same for engineers who are building platforms and stuff as well, or is that different, in terms of how to approach the balance there for security?
Yash Kosaraju: I think, let's take it up a notch, right? You asked about, like, engineers and development teams. Let's open it up to the whole company. Yeah. So what you're seeing is, here's a new shiny piece of technology that promises to make you more efficient.
Yeah. Everybody, at the end of the day, is trying to get their job done faster, quicker, better. Yeah. Right. So if you don't enable them by providing them AI tools, they are going to find tools, either pay out of pocket or use free versions, again, to get the job done. So very early on, when these tools came about, what we did was, from the IT team, we started providing
enterprise-level tools to the company. So we have, like, Google Gemini, we have ChatGPT Teams, we have Claude Code, we have Cursor enterprise. So we have a bunch of these tools, right, like [00:31:00] each at different levels. Like, we have AI coding tools, we have copilots, we have GitHub Copilot enabled. And again,
talking about, like, non-engineers, we have the ChatGPT Teams plan, and I think Teams and Enterprise is a big difference. So I think there are, like, three paid tiers of ChatGPT, and the second paid one is where your data is not used for training and you get, like, single sign-on and all of those things as well.
So that's where we are, right? Which is a balance of, no, we don't need to pay for the biggest, shiniest, most expensive thing. Take what's good enough and what's secure enough, in a way. So that's there. So Google Gemini is available for everybody. And then this kind of opens the doors for everybody to test AI, to sort of even uplevel themselves in the AI world, right? Like, you're hearing a lot of
Like you're hearing a lot of. These of like, if you're not using ai, you're falling behind. Yeah. So there are two aspects of this. One, enabling people to do the right thing the right way. And that also includes communication of like, here's all the tools that we have enabled for you. Here are everything else [00:32:00] that we are disabling, right?
Like all the LLMs coming out of China are blocked. All the Chrome apps that might not be official that uh, like, Hey, this is a chat GPT based Chrome app that will let you do something or summarize. Those are blocked, but you have access to chat GPT. So that's the balance of like, you block everything that you know, could put the company at risk.
At the same time, you open up the paved path of like multiple, at multiple levels. You have AI enablement across the company.
Ashish Rajan: Do you find that, uh, it's a funny thing, right? Because a lot of people that I spoke to, especially other CISOs as well, a lot of them get nervous about this very thing that you just shared.
Where that also means my product managers can make a prototype, hopefully in a sandbox environment. I could have POCs floating around my organization. How do you balance this? I think that's kind of where I find it quite intriguing, in terms of, like, what's been your [00:33:00] approach to, hopefully, I mean, I'm saying there's no definite answer always, but finding some balance in security and usability?
Yash Kosaraju: I think when the rest of the organization realizes that your team, the security team, is coming in to help people do the right thing. Yeah. And just because they have taken a shortcut, they're not gonna be penalized; you're gonna understand why they did it, like what their pain point is, and help solve it.
You'll find that they often come to you saying, Hey, this is what I'm trying out. What do you recommend, or this is something I found, can you actually use this or not? So we have engineers that ping us saying, Hey, we see this new ML model or new version of some model that is available in our copilot. Can we enable it?
They don't have to ask us, they have admin, they can just go do it. But, like, building that relationship where you, you as in the security team, are looked at as a team that's helping protect the company, not sort of putting up roadblocks. I think that's the most crucial piece, [00:34:00] where they would come to you. Because you're right, you can never, it's a whack-a-mole game, right?
Like, you can never stop everything. There's something new coming up. Uh, if you ask me, how do you secure an MCP server, I don't have a great answer for you today. Mm-hmm. It's MCP today, it's something else tomorrow. So that's right. It's not about blocking those, it's about building a culture where there's enough trust that people will come to you and figure things out together.
Ashish Rajan: Was there something that you found that helped build that culture, in practices or something you did, basically? Obviously there's a whole mindset of the actual people as well. Outside of that, were there any things that you felt worked in, you know, kind of hitting the mark with some of the people, to explain what that is?
Yash Kosaraju: I think a lot of times we talk about what the security team does to build that culture. One thing that's not often talked about is, what's the company's perception of security? Right? Like, how seriously does the company take the security team? For example, the e-staff, the board, the actual people, like, how much do they care about [00:35:00] it, and are they being empowered and able to think about it?
So at Sendbird, me and my team, I think we are fortunate enough that the company itself cares about security quite a lot. Yeah. That makes our life, well, slightly easier, in that we step into a culture where security is welcomed. But also we have to be careful of what standards and what bars we set for ourselves,
like, how do we conduct ourselves and work with the rest of the company to sort of keep that trust and build upon it. And I guess the second part of your question is, like, what did we do? I don't think it's one or two things that we do. It's basically how we operate. It isn't going and saying, you can or you cannot do this.
It's basically saying, here's the problem, here's the risk we see. How can we solve it together?
Ashish Rajan: Yep. And because there's a culture already, rather than you blocking it using some kind of an AI security tool or something, you're basically going down the path of, how do we work together on this? Yeah. As you've identified that someone is using something.
Yash Kosaraju: Yeah. It's not only AI, right? It could be, like, permissions within infrastructure. [00:36:00] It's what they deploy. It's how things are deployed. And once you start having these conversations, you'll see that there are a lot of people within the rest of the company that want to do the right thing. You then become the enabler for these teams to do the right thing.
Something they wanted to do for a while, but they just didn't have the bandwidth or the resources to do it. And then you work with them. It's not that difficult in that case.
Ashish Rajan: Well, any thoughts for listeners or people who are watching on YouTube or LinkedIn? If people are integrating AI into their applications today, whether it's in the environment or whether it's the workforce that you were talking about, what kind of questions should they be asking of the AI services?
Because at this point in time there is a plethora of, like, there's obviously the popular ones, which have enterprise contracts and everything, but there are also all these open source ones. There's also all these third-party ones, which is, like we mentioned, GitHub has Copilot. You may not have an [00:37:00] enterprise license, you just have GitHub.
And they basically said, hey, you know what, for every GitHub user it's a free Copilot, just use it for one month and see how you go. And I understand there's pressure on all those companies to push AI on everyone. What have you found are some things that you can share with other people who are trying to walk this path, in terms of things they should look at? I mean, I don't know, top three things that come to mind that they should consider every single time they're in this unknown zone with AI services?
Yash Kosaraju: Yeah. One, think about how you're paying for the service, right? It could be marketed as free, but if you're not paying dollars for it or whatever your local currency is, you're paying with your data.
Yeah. And that has been said before in multiple contexts, but in AI that's very much true, where if there is a free service... Forget code generation, right? There are so many services online that say, hey, upload a photo from your iPhone, we're gonna make it into a professional photo with, like, suit and tie and things like that, and it's free.[00:38:00]
But what you need to realize is you're paying with your photo. Like, your image is now part of their library, based on which they're generating all of these things. So it doesn't mean you don't do it; it means you need to consciously make a risk decision of, like, what am I giving to the service? Am I willing to take the risk for the output that I'm getting?
I think that's one way to think of it. The other is, I guess, sort of shifting gears a little bit, instead of three things you should look out for, I would also say experiment with AI, right? Like, AI is something that's changing so much, and it's changing the way everybody does their work. So, again, this has been said, but I'll say it again: embrace AI.
See how it can make your life easier. I use AI to change our policies, like, create, I guess, templates of documents and things like that. So it's an interesting world we are living in. So experiment with it, but also be cautious of, like, what data you're putting in. The other thing, which came up talking to somebody else, very tangential, is, like, [00:39:00] healthcare data, right?
A lot of people are using ChatGPT as their personal doctor.
Ashish Rajan: Yeah.
Yash Kosaraju: One thing I would sort of encourage you to think about is, like, how can that data then be used to build sort of a digital model of you and what your health symptoms are? And that's an area we haven't thought of before. Like, there was a lot of tracking based on social media and your search results and all of those, but now with AI, it's like everything you put in can be, and eventually will be, used to create that digital persona of yours.
So again, just be cautious of what you're uploading out there.
Ashish Rajan: And I guess, to your point, one more thing that comes to mind as you were talking about this is that many people use their work email as a personal email sometimes as well. If you start using ChatGPT and Anthropic or whatever other AI service you get from your work for your, I don't know, family's AI account or whatever for doing things, that's a big unknown as well.
We don't really know. 'Cause I [00:40:00] remember when I was still a CISO, we had this thing where people would sign up for all these services with our company domain. And you'd get an email from Have I Been Pwned saying, hey, by the way, your service has been hacked. And I'm like, the person doesn't even exist anymore, but hey, they signed up for the service.
And I think we don't even know what that looks like in this AI world if people are using that for their personal things. I don't know if you've come across that yet.
Yash Kosaraju: So we talked about it within the security team. I think the origination of the conversation was, so we have ChatGPT Enterprise, uh, not Enterprise, I guess the mid-tier.
Yeah. Uh, call it whatever. We have a bunch of these, Gemini and things like that, that are enterprise-level. And the debate was, should we restrict them to company devices only, or should we let people use them on their mobile phones? Long story short, what we decided was, as long as people or employees of Sendbird want to use it for personal stuff and they know it's a company account, the data belongs to the company.
But I think I'm okay if they do, if they're using ChatGPT Enterprise for personal queries, as long as there are boundaries of [00:41:00] what's ethical and, like, all of those things. But if we can provide an AI service that employees can use on a personal note and have their personal data not be used for training,
I think I'm okay with that, because that doesn't hurt Sendbird or its customers, but it adds an additional layer of security around the employees, around their sort of personal perimeter, if you will. Right. So that's something we're okay with.
Ashish Rajan: Yeah, and I guess, just talking in the same vein for people who are listening, 'cause a lot of people still worry about, hey, how do I stop people from putting company data into an AI service? Is that using a third-party tool as a prevention measure, or is that a culture thing? Where do you stand on that?
Yash Kosaraju: I've seen both work. We have taken a culture-first approach with that. As I said, like, we give employees a lot of these tools. Yeah. That they can work with.
And there's communication around that: these are [00:42:00] what we are enabling you with, but these things you should be cautious with. Yeah, right. Like, the DeepSeeks of the world and things like that, which we block. We send out communications, like, here's why we are blocking it. Here's a reminder, these are the approved AI tools.
If you wanna try something out, be careful. Do not put real enterprise data in. If you want to test something out with fake data, that's fine. So, like, having that, giving them the ability to do something, and sort of keeping things in a responsible manner, right? Like, when people know that, hey, I can do this, but I need to be responsible with what we as a company stand for and make sure we are doing the right thing,
that, I believe, today goes a long way, rather than a tool approach where you're like, hey, I'm gonna block every single thing that can go wrong. We do block certain things. Again, I'm not saying we don't block anything and everything's open; like, we do block a bunch of things. Yeah. But there's this combination of that with the communication of, like, look, you're empowered to do the right thing.
Uh, think [00:43:00] about it. Make the right decision.
Ashish Rajan: Yep. Yep. And as you've gone through this journey with this new AI world, I'm curious: what are some of the hardest and maybe the easiest things about this journey that you guys went through, that people can look out for or can apply?
Yash Kosaraju: I think the easiest thing, somebody said this to me, is you can use AI to learn AI.
Ashish Rajan: No. Okay.
Yash Kosaraju: Whenever a new thing comes up, you're then looking for resources, like, how do I learn this? Yeah. With AI, you don't have to look for it. You just go ask AI, like, help me understand what RAG stands for. Help me understand how LLMs work. Or, like, ask ChatGPT how ChatGPT works. And the fun thing that I've been doing these days is, there's this Gemini app where you could have conversations with it.
So I go on walks and I'm like, explain this to me. And it's a conversation with the AI on anything and everything, and that could be how AI works, right? So that's been interesting. And I wouldn't say it's the easy part of it; it's the convenient [00:44:00] part of it, where the resources are abundant and there are straightforward ways.
You just have to spend time with it to learn things. Yeah. The hard thing is AI has been disrupting the status quo so much, and at such a rapid pace it's really hard to keep up with things.
Ashish Rajan: Yeah, as much as I would like to sit here and say that, hey, you know, I'm keeping up. I was talking to someone about the code being generated and this new information. Now suddenly everyone has access to all information. A Java developer is also now a JavaScript developer. And, like, you know, the boundaries that were there, from what used to be, hey, I know I don't know Java, but now I feel like maybe I can make a Java app too.
Like, you know, that sentiment that people have, I wonder, and the
Yash Kosaraju: problem that people don't realize with it is, all the AI-based code generators, yeah, they generate massive amounts of code. So I have a cousin who works at Salesforce, and he was complaining that the Salesforce scripts that they would debug from enterprise customers used to be 500, 600 lines.
Yeah. And the developer would [00:45:00] know what that does. Yeah. Now they're getting cases with, like, 2,000 lines of Salesforce code, and the customer says, hey, I don't know what this does, but it's not working. It was working yesterday. ChatGPT made a few edits and it's broken. Like, go figure it out.
Ashish Rajan: So these kinds of things are where a lot of people are feeling overwhelmed as well. And someone mentioned the fact that, even from an AppSec perspective, right, SAST, and SAST is known for having a lot of false positives. We already had 50,000, now we have 500,000. And then you don't know which one is the AI and which is not AI. And honestly, I'm sure the vendors out there are trying to make this place a better place. But in terms of building the confidence in teams
for this new level of threat, have you guys found a way to manage that? Because obviously it is a skills gap, and it's on all layers. It's a skills gap. It's not just security, but tech in general, development in general, product management. Everyone's now like, what does my job look like in this new world? I have no idea. [00:46:00] How are you guys maintaining that pace of staying updated and relevant for the tasks that you have at hand in your organization?
Yash Kosaraju: One thing I'll mention before we talk about how you stay updated is, when we started this journey, one of my product security engineers, a senior engineer, came to me and said, hey, we are building this new AI stuff.
We have to review and secure it. I don't know if I can do a good enough job with this, as I know I can do with, like, traditional AppSec, right? Yeah. So I think that's the aspect that we hardly talk about, where it does put a decent amount of stress on AppSec engineers, where companies are moving really fast on AI and they're expected to help secure them.
So I think the first part I would like to call out is, like, let's acknowledge it's new technology. We will make mistakes. It's okay to do that. We'll eventually catch up to it, right? Like, when SQL injection or XSS came out, like, in the very early days, those were big deals, and nobody knew exactly how they worked.
And it was something [00:47:00] that people were still figuring out. Today, we are much ahead of that game, right? The frameworks and all those things. So we will eventually get to something similar on the AI front. So everybody listening in AppSec, if you feel that sense of being overwhelmed, where you're like, I don't know what I'm doing, hang in there, things will get better.
Coming back to your question, like, what do we do? I think it's acknowledging that, right? This is new. A staff AppSec or product security engineer is not automatically a staff AI security engineer. Yes. Acknowledging that and giving them the space to experiment, to figure things out. The other thing is, they are staff engineers for a reason.
They can figure shit out. Yeah. So giving them the space, giving them the confidence to be like, go figure it out. Spend some time on AI. So I encourage, like, all my teams, including IT, GRC, incident response, to start using AI within their work. Be it questionnaire automation, be it giving your vendor's pen test or SOC 2 report to [00:48:00] an AI and saying, hey, summarize this for me, right?
Like, what's a qualified finding, instead of reading through the whole report? So using AI at each level, getting comfortable with it, understanding how it works. We also share a lot of knowledge. So everybody goes in a different way, and when they find something really unique, really good, they bring it back to the team.
We talk about it, sort of use that model to scale up. The other cool thing that one of my engineers did, he did this over a weekend, is, using Sendbird, he built an AI, uh, CTF. Oh, really? This is, basically anybody can use it, right?
So there's an AI chatbot, there's a clothes-and-apparel store, and you can, like, ask questions. And he has made it deliberately insecure in such a way that you could ask questions of, like, oh, what's in your toolbox? Like, what data do you have access to? And, like, there are flags, there are about 10 flags or so. And I guess that would help not just the security team, but also the rest of the company, just [00:49:00] wrap their heads around how things can go wrong, and all you're doing is just asking questions that weren't being expected.
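For readers who want a feel for that kind of exercise, here is a toy version of the idea: a deliberately over-permissive "support bot" whose hidden context contains flags, so players learn that simply asking the right question can leak data. This is entirely hypothetical and much simpler than a real CTF built on an actual product.

```python
# Toy AI-CTF target: a deliberately insecure support bot that describes its own
# toolbox and data access when asked, leaking CTF flags in the process.
FLAGS = {"tooling": "FLAG{tool_list_leaked}", "customer_db": "FLAG{data_scope_leaked}"}

SYSTEM_CONTEXT = {
    "tools": ["refund_api", "order_lookup", "customer_db"],
    "secret_flags": FLAGS,
}

def insecure_support_bot(question: str) -> str:
    q = question.lower()
    # The planted vulnerability: the bot happily reveals its tools and data scope.
    if "toolbox" in q or "tools" in q:
        return f"I can use: {', '.join(SYSTEM_CONTEXT['tools'])}. {FLAGS['tooling']}"
    if "data" in q and "access" in q:
        return f"I can read the whole customer_db. {FLAGS['customer_db']}"
    return "How can I help with your order today?"

print(insecure_support_bot("What's in your toolbox?"))
```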
Ashish Rajan: Yeah. I think, I feel like the new world of security awareness training needs to be on a CTF style.
Yash Kosaraju: That's it. We didn't talk about this before the podcast, I promise, but that's the world we are actually going into.
So we used to do multiple trainings, like one for the general employee base, one for engineers, where we would pick code diffs that had real vulnerabilities and fixes and sort of work through that. That worked. But as we were sort of talking through like the whole CTF thing came up. Like that's where we went.
Like this person went and built an actual CTF around our product and said, this is how our product can break and we'll give you prizes if you can figure it out. So that's something really exciting we are doing in the coming weeks.
Ashish Rajan: Yeah, I'm excited for this as well. I'm excited. Wait, so, but then, did the person used to build CTFs before, or did they [00:50:00] just use AI to build the CTF?
Yash Kosaraju: Uh, he's a staff engineer. He is an AppSec engineer. I think he got bored on a weekend and he probably vibe coded his way through it. I don't know. I'd be surprised if he didn't. But he built a CTF with, like, a complete scoreboard and things like that.
But based on Sendbird.
Ashish Rajan: Obviously, that's all the technical questions I had. I also have three fun questions for you, so people get to know a bit about you as well. The first one being: what do you spend most time on when you're not trying to solve cybersecurity problems for Sendbird?
Yash Kosaraju: That has changed over the years. So now we have two kids, so most of my time is around them, playing with them, jumping around with them, doing things that they enjoy. Before kids, we would go on a lot of hikes and big into photography. So I'd go do like astrophotography. We have done overnight trips to Yosemite where we would just leave at 4:00 PM in the afternoon, spend the whole night in Yosemite, taking photos, and then come back the following morning.
So big outdoors person, we travel [00:51:00] quite a bit as well.
Ashish Rajan: Awesome. And second question being, what is some something that you're proud of that is not on your social media?
Yash Kosaraju: I don't have social media, so let's start there.
Ashish Rajan: So what is something that you're proud of that you can share with us?
Yash Kosaraju: What I'm proud of, I think it's what we have been able to achieve as a team.
So like I started off as an IC and then quickly realized I have more fun building teams. Yeah. And I guess I'm more proud of the team and like what they do today. Yeah.
Ashish Rajan: Awesome. And final question: what's your favorite cuisine or restaurant that you can share with us?
Yash Kosaraju: Cuisine or restaurant? It depends.
Comfort food is definitely Indian. Yeah. General tip: if you're looking for Indian food, do not go to big restaurants, go to hole-in-the-wall places. Anything on Yelp that doesn't have great reviews might still give you good food. Other than Indian, I really enjoy ramen, uh, Ramen Dojo, Ori.
There are a bunch of restaurants here in the Bay that I really like.
Ashish Rajan: Awesome.
Yash Kosaraju: And I've also come to love Korean food because, the company has a big presence in [00:52:00] Korea, so I do travel quite a bit. Fun fact, I've been to Korea about a dozen times now and I have really started to enjoy their food.
Ashish Rajan: Interesting. Like the whole Bibimbap and everything else.
Yash Kosaraju: You know, it's funny with bibimbap. Like, the first thing people ask is, hey, do you eat bibimbap? People don't eat bibimbap. Bibimbap is one of those things that's, like, very foreign. And when you go out for bibimbap, it's like, okay, fine, it's there. Yeah. There's so many other things.
Ashish Rajan: Wait, so I mean, but Korean barbecue is a Korean barbecue thing though, right? They do barbecue.
Yash Kosaraju: Yes. Yes. They call it BBQ, just like that.
Ashish Rajan: That is true.
Yash Kosaraju: That is true. Bibimbap not so much. I've never had Bibimbap in Korea.
Ashish Rajan: Interesting. So, I mean, I guess now I'm just curious. So what's a popular dish for them then? Is it just kimchi and rice and curry and all of that?
Yash Kosaraju: Kimchi is something that they serve with pretty much everything you order. So they do this thing called banchan, which is, like, five or six side dishes that has kimchi and, like, shrimp and a bunch of other sort of spicy things. And then your main dish is something different. And [00:53:00] with rice, they have different types of rice, which, again, is like a small portion of the meal.
Yeah, they're big on meat. So you'll find a lot of, like, pork and beef dishes; that's the center of the meal, and there's, like, all of these side dishes that go with it.
Ashish Rajan: So rice is a side dish.
Yash Kosaraju: It's a small portion of the meal.
Ashish Rajan: Oh right. So, 'cause I imagine with most of the, like, any other cooking which is non-Western, mm-hmm, whether it's your Middle Eastern cooking or Indian cooking or anywhere, rice is like a major portion. Yeah. Or it's almost like a mountain of rice you have to eat for it to qualify as a meal as well.
Yash Kosaraju: That is Indian, uh, I guess cuisine in general, but not Korea.
Ashish Rajan: Interesting. Right. And I haven't been to Korea, so this is definitely something I learned.
So I may have to explore this as well. I may take some tips for Korea next time I go there, man. But dude, this was awesome, man. Thank you so much for sharing all that. And where can people connect with you if they wanted to talk more about this with you?
Yash Kosaraju: LinkedIn? Um, well I am on LinkedIn. I don't actively [00:54:00] post, but I'm on LinkedIn quite a bit.
So if you want to chat, uh, hit me up on LinkedIn.
Ashish Rajan: Awesome. I'll do that. I'll put the links in the show notes as well. But thank you so much for doing this, and, uh, thank you everyone for tuning in as well. Thanks so much for the time. Thanks,
Yash Kosaraju: Ashish. This was fun.
Ashish Rajan: Likewise. Thank you. Bye. Thank you for listening or watching this episode of Cloud Security Podcast.
This was brought to you by Tech riot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple, and Spotify. In case you are interested in learning about AI security as well, do check out our podcast called AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk
to other CISOs and practitioners about what's the latest in the world of AI security. Finally, if you're after a newsletter that just gives you top news and insights from all the experts we talk to at Cloud Security Podcast, you can check that out at cloudsecuritynewsletter.com. I'll see you in the next episode.
Peace.