Securing AI: Threat Modeling & Detection | Live Panel with Anthropic & Canva


Is Artificial Intelligence the ultimate security dragon 🐉 we need to slay, or a powerful ally we must train? Recorded LIVE at BSidesSF, this special episode dives headfirst into the most pressing debates around AI security. Join host Ashish Rajan as he navigates the complex landscape of AI threats and opportunities with two leading experts:

  • Jackie Bow (Anthropic): Championing the "How to Train Your Dragon" approach, Jackie reveals how we can leverage AI, and even its 'hallucinations,' for advanced threat detection, response, and creative security solutions.
  • Kane Narraway (Canva): Taking the "Knight/Wizard" stance, Kane illuminates the critical challenges in securing AI systems, understanding the new layers of risk, and the complexities of AI threat modeling.

🔥 In this episode, we tackle the tough questions:

  • Is the hype around past 'AI-powered' security justified, or was it "hot garbage"?
  • How can you build effective threat models when AI introduces new, complex failure points?
  • What are the real risks and challenges when implementing AI in production?
  • Can AI tools like 'vibe coding' democratize security, or do they risk deskilling professionals?
  • How can defenders possibly keep pace with AI-driven attacks without fully embracing AI themselves?
  • Exploring the future of AI in both offensive and defensive cybersecurity.

Questions asked:
00:00 🐉 Intro: Slaying or Training the AI Dragon at BSidesSF?
03:15 🎓 Meet Jackie Bow (Anthropic): Training AI for Security Defense
03:41 🛡️ Meet Kane Narraway (Canva): Securing AI Systems & Facing Risks
04:51 🤔 Was Traditional Security Ops "Hot Garbage"? Setting the Scene
06:32 ⚠️ The Real Risks: What AI Brings to Your Organisation
07:27 🤖 AI in Action: Leveraging AI for Threat Detection & Response
08:37 🤯 AI Hallucinations: Bug, Feature, or Security Blind Spot?
09:54 🗺️ Threat Modeling AI: The Core Challenges & Learnings
13:29 💡 Getting Started: Practical AI Threat Detection First Steps
17:56 ☁️ AI & Cloud: Integrating AI into Your Existing Environments
25:38 ❓ AI vs. Traditional: Is Threat Modeling Different Now?
29:52 🚀 Your First Step: Where to Begin with AI Threat Modeling?
33:17 😂 Fun Questions & Final Thoughts on the Future of AI Security

BSides SF Panel

Kane Narraway: [00:00:00] If you are working at, like, an AI provider, you have a very different set of risks than your standard company. A lot of those risks are just SaaS risks plus, in their own way. So they just add more layers in that can have risks. They add more areas that can be compromised, and they just increase the risk threshold a little bit.

And so I wouldn't say there's anything super specific. But it just makes things worse in general, right? And so it's more effort that you have to put into securing that tool set.

Dana Torgersen: Alright, we're live folks. How is everybody doing today here at BSides SF? Woo. Thank you for that. I'm Dana Torgersen, Vice President of Product Marketing with ArmorCode, and today I get the benefit of being the MC here in Theater 13.

For those of you here in the room with us and those of you watching from home, today is real special. We actually have a live broadcast going out. I call this a three-way: it's live here, it's live for the BSides folks, and it's gonna be live out on the podcast, www.cloudsecuritypodcast.tv. Folks, this is a podcast that has now been running [00:01:00] for six years, tons of episodes, with a following of 150,000 folks who I'm guessing give a darn about cybersecurity,

Ashish Rajan: right?

Kind of. Yes.

Dana Torgersen: There we go. So I'd like you to give a warm welcome to Ashish Rajan and our guest folks on the panel today. Thank you.

Ashish Rajan: Take it away. Thank you for the live audience as well. Before I introduce my esteemed guests over here, I need to share this story from this morning. I did what every person does to maximize the opportunity for a live podcast panel.

I asked AI, hey, how do I engage this audience? We are doing a live podcast panel, people are gonna have just had lunch, they probably would be feeling a bit low. I don't know if someone supports the SF team, with that loss yesterday, unfortunately, but it said not to mention that. So I said, hey, how do I run Cloud Security Podcast?

We are a weekly podcast. People primarily who are interested in us are security leaders and CISOs. How do I make a podcast engaging for BSides? And their theme, by the way, is a dragon. Oh, and the dragon is AI. Okay. So I gave that as an input, and [00:02:00] it came up and said, hey, by the way.

You should start with a cheer-up exercise for people to just shout out a word. I wanna give the word to you, and if you guys can shout, I just wanna check if it's hallucination or it actually works. So the people who would hear it would be the podcast audience and the live audience in the background as well.

And hopefully people on their phone in the overflow room, you guys get to participate as well. So what it said was, because the theme is Slay the Dragon, very medieval: great opportunity for you to dive into medieval times and ask people to shout AI. So in 3, 2, 1, I'll just do that. I feel like everyone can shout AI. That'll be great for the podcast audience and for my video as well, if you guys don't mind. So I'm gonna do just fingers, that'll make it easier.

3, 2, 1

Jackie Bow: AI. Woo.

Ashish Rajan: Awesome. That was the energy. So that worked. People who've been using GenAI probably would know this: once you start, the next level is, how do I make it even more exciting so more people just get in with it? [00:03:00] So it said, stop reading the script and just get back to the episode.

So I'm gonna, I'm gonna get to the episode. So everyone, welcome to Cloud Security Podcast. As I mentioned, we are a weekly podcast. Today's day is about slaying dragons. I've got two esteemed guests with me. Jackie, if you don't mind taking a few seconds to introduce yourself.

Jackie Bow: Sure. Happy to be here. This is like my favorite conference of the year. It's not my hometown, but I've lived here for long enough that it feels like home.

So yeah. I'm Jackie Bow. I've been working in security for just about 15 years now, mostly in detection and response, but I've bounced around. Currently I am the technical lead of the threat detection engineering platform at Anthropic.

Ashish Rajan: Awesome. And Kane.

Kane Narraway: Yeah. Hello everyone. So I lead the enterprise security team at Canva.

So a lot of that is dealing with zero trust, internal endpoints, that kind of stuff. And a big focus for me the last year or two has been on securing AI tools, LLMs, MCPs, all of that good stuff.

Ashish Rajan: Awesome. And as you can tell, there's a [00:04:00] theme already forming that I wanted to tell you guys about.

We have a bit of a debate, in that we have one side which is leveraging AI. So for people who will be listening to the audio, imagine a video, there's a slide up there. Just pretend there's a slide where there's a dragon on top of my head with an AI label on it. And we've got people who watched the How to Train Your Dragon movie.

So we've got Jackie on the How to Train Your Dragon side, working beside the AI dragon, defending against this big boss dragon AI. And we've got the knight in shining armor, Kane, on the other side, trying to defend against the fire flames from the dragon with a shield. So people who hear the audio, definitely check out the video as well.

So you get to hear that on the BSides thing as well.

Kane Narraway: Okay, I feel like I have more of a wizard vibe than a knight.

Ashish Rajan: Oh, okay. Fair. So you have your staff, is that what you... yeah. Okay, we'll go for that. So it is a wizard; it's not a shield, it's a wizard with a staff. To set the scene, we've got the first question, and I think I'm gonna start with yourself.

'Cause you've had security operations experience. In terms of, I guess, a [00:05:00] lot of people here being from different backgrounds: how has traditional security operations been done?

Jackie Bow: Yeah.

Ashish Rajan: Before the leveraging AI part, if you can set the scene for people. Of course.

Jackie Bow: Yeah. I think in the realm of threat detection and response, we have been locked into these monolithic tools, or SIEMs, security information and event management.

And most of the time these are ones that you purchase wholesale; they're black boxes. Or you're fortunate enough to work at a company that has custom built their own. I think most people have experience with things like Splunk or some of the other large SIEM providers.

But actually, AI has come into the picture, and it tainted, I think, a lot of detection and response people's view on AI, because for the past, like, at least 10 years we've been sold this idea of AI-powered, machine learning detection and response, next-gen XDR, and it's all trash, it's all hot garbage.

Ashish Rajan: Wait, she has had so much extra protein with breakfast. I did.

Jackie Bow: Yeah.

Ashish Rajan: That was our breakfast. If she's a bit spicy, that's the reason why.

Jackie Bow: Yes. Let's blame, [00:06:00] we'll blame the protein. It's definitely the protein. Yeah. But yeah, we've been sold this idea that, oh, this black box model can do detection and response for you, for the low premium of a high subscription cost to a vendor.

And I think up until this point, defenders are rightfully skeptical of AI, because they're like, this just gives me more false positives.

Ashish Rajan: Actually, that's a good point because the moment people talk about leveraging AI into an organization the first thing that comes up is hallucination.

Jackie Bow: Oh, yeah.

Ashish Rajan: There's a lot of things that go into it.

Jackie Bow: Yeah.

Ashish Rajan: I'll put a pin in it for a second. I'm gonna come back to Kane. Since you've been securing AI as the wizard, what are some of the risks being introduced by AI systems in organizations that come top of mind for you?

Kane Narraway: Yeah, I feel like it depends on what angle you're looking at. If you are working at, like, an AI provider, you have a very different set of risks than your standard company. A lot of those risks are just SaaS risks plus, in their own way. So they just add more layers in that can have risks.

They add more areas that can be compromised and they just increase the risk threshold a little bit. [00:07:00] And so I wouldn't say there's anything super specific. But it just makes things worse in general. And so it's more effort that you have to put into securing that tool set.

Ashish Rajan: Right.

And I guess talking about securing and detection response, they go hand in hand. To what you were saying earlier, whether it's Splunk or any other SIEM, all of us are familiar with false positives, where a lot of level one analysts spend a lot of time just triaging incidents for: is this even a false positive, or should I just call DEFCON one on this?

Jackie Bow: Yeah.

Ashish Rajan: So how can an organization leverage AI, tame that dragon for detection and response?

Jackie Bow: Yeah, that is a great question. Also, can I pump up my talk tomorrow?

Ashish Rajan: For sure. I was gonna say, please do. He's got a talk as well, so we need to pump up both your talks.

Jackie Bow: Yeah. Tomorrow I'll be presenting with my colleague Peter about some tools that we built using Claude Code, Anthropic's coding agent built on Claude, Anthropic's LLM. And then we actually do a lot of the investigation and triage using Claude. And so I think for me, the difference right now in [00:08:00] leveraging AI is, instead of a black box where alerts go in, something gets spit out, and you have no idea how it got there, especially with these models that have extended thinking, you can actually see what prompts go in.

You can tweak those prompts, and with the outputs, you actually have more control over seeing what's happening. You can also leverage things like best-of-N: you can have a model with the same prompt triage a detection N number of times, and then out of that choose the best response. So I think with the power for individual teams to leverage generative LLMs to do this work, there's just so much more visibility, and it's no longer that black box of, why am I getting this response.
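
For readers who want to see the shape of this, a minimal sketch of the best-of-N triage idea, using the official anthropic Python SDK. The model name, prompts, and function names are placeholder assumptions for illustration, not Anthropic's internal tooling:

```python
# Hedged sketch: triage one alert N times, then have the model pick the
# best-reasoned candidate. Prompts and model name are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute a current model

TRIAGE_PROMPT = (
    "You are a SOC analyst. Triage this alert, show your reasoning, and end "
    "with one line: VERDICT: true_positive | false_positive | needs_human.\n\n"
    "Alert:\n{alert}"
)

def triage_once(alert: str) -> str:
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": TRIAGE_PROMPT.format(alert=alert)}],
    )
    return resp.content[0].text

def best_of_n(alert: str, n: int = 5) -> str:
    # Generate N independent triages of the same alert.
    candidates = [triage_once(alert) for _ in range(n)]
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    # Second pass: ask the model to choose the strongest candidate.
    pick = client.messages.create(
        model=MODEL,
        max_tokens=8,
        messages=[{"role": "user", "content":
            "Reply with only the index of the best-reasoned triage:\n" + numbered}],
    )
    return candidates[int(pick.content[0].text.strip())]  # assumes a bare index reply
```

Because every prompt and every candidate answer is visible, the selection step can be audited in a way the old black-box SIEM verdicts never could.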

Ashish Rajan: How do you balance hallucination then? Because, and maybe it's my bias, I don't know if anyone else has this, we've been hearing about hallucination being the number one thing people talk about. Oh yeah, you should use AI, but be careful, there'll be hallucination.

Yeah. Yeah. So how do you balance that? Yeah.

Jackie Bow: So we'll talk about this a little bit tomorrow too, but hallucinations are basically the model being super helpful and coming up with convincing-sounding [00:09:00] answers. And in some cases we actually wanna encourage this. We don't want to encourage the model to make up events that have happened, but we actually want the model to break out of playbook-style or rigid human thinking and have creativity. Because any of us who are incident responders, or who work in any open-ended investigation, or even fixing bugs, know that most of the time our most incredible ideas come when we're doing things creatively, not the same way that we did before. So actually encouraging models to think for themselves and hallucinate, maybe, investigative actions that you wouldn't have thought of is actually good. But you wanna box them in a little bit, right? You don't want them to come up with, oh, here are all these network logs, and they're just completely not true.

Ashish Rajan: Wait, so we want models to hallucinate.

Jackie Bow: Yeah, a bit, within boundaries. Why not let your models have a good time too?

Ashish Rajan: Fair. Wait, so we tell them to hallucinate [00:10:00] on all these bugs? But to your point, I agree, it might bring up some creative things that you may not have thought of, so I'm with you on that one.

Yeah. In terms of building threat models, 'cause Kane's got a talk straight after this, by the way, if you wanna join in: how do you even build a threat model for something like an AI system? Where do you even start? 'Cause I can imagine doing a threat model for an AI system is not the same, and I'm sure there's a few AppSec people listening or watching this as well.

It's almost, hey, what STRIDE model should I use, or whatever other framework? How do you even start doing threat modeling for an AI system, so it doesn't hallucinate?

Kane Narraway: Yeah, it's an interesting question, and again, spoiler, that's a lot of what my talk goes into after this. So if you're interested, feel free to come along.

The high level is that I like to focus on two areas; whatever model you use is fine. I like to think of access at the beginning: how are you interacting with them? Desktops, phones, where are you accessing them from? And then on the other end, what integrations do you have?

[00:11:00] So what is your AI talking to? Is it talking to your Jira servers or your Salesforce or whatever? And those two things are the things that introduce the most risk, in my opinion, because that's increasing the surface area of the things that can go wrong. And this gets even worse when you start connecting it to customer data stores and doing public customer support.

'Cause then it's not your employees, it's an unknown third party that can potentially do weird things.

Ashish Rajan: Could you expand on the whole authorization part? Because now it seems like you can't spend a day on the internet without talking about MCP and A2A and whatever else comes with it.

How does that play a role in your threat model, on the authentication and authorization part?

Kane Narraway: Yeah, it's interesting, 'cause especially with MCP, they've got a spec for authorization in the model, which a lot of people have had problems with, let's say, and there's definitely a few blog posts on that that are worth reading.

But there are ways you can encapsulate it as well, right? There's a bunch of vendors; I think Cloudflare did one a few weeks ago, and I think Merge has one now, where you can host them [00:12:00] on a public service. So it's not a thing running on your workstation anymore, it is a public thing where all of your employees are accessing it.

And rather than having thousands and thousands of agents across all your laptops, you just have this one server that goes and connects to everything, which from a security point of view, I prefer to threat model one thing rather than a thousand different versions of open source code that people are running.

Jackie Bow: Just on MCP servers, I totally agree that having an open standard is a great first step, and then having the first pass at, okay, authorization or identity for agents and for MCP servers.

And I really love what I'm seeing coming out of Cloudflare, and it encourages the maturation of this technology, especially by security practitioners, so we can actually get the standard that is the most secure.

Kane Narraway: And you've gotta start somewhere. That's the thing at the end of the day.

And even taking this beyond, into using it: what we found is that we can build sort of triage bots using LLMs to then threat model our AI tools. Yes. And so [00:13:00] building up a corpus of info that you can ingest, and then have the AI do the triage rather than doing it yourself.

Jackie Bow: Yes.

Kane Narraway: Especially since every tool is an AI tool now, I don't wanna have to do this hundreds of times, for every vendor I use.
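
As a rough illustration of the triage-bot idea Kane describes, one could ingest a corpus of vendor review notes and ask a model for a first-pass threat model. The function, prompt, directory layout, and model name below are hypothetical assumptions, not Canva's actual tooling:

```python
# Hedged sketch: feed a corpus of security review notes to an LLM and get a
# first-pass threat model of a vendor's AI features for a human to refine.
import pathlib
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute a current model

def triage_vendor(notes_dir: str, vendor: str) -> str:
    # Assumption: prior reviews live as markdown files in one directory.
    corpus = "\n\n".join(p.read_text() for p in pathlib.Path(notes_dir).glob("*.md"))
    resp = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        messages=[{"role": "user", "content":
            f"Using these past security review notes:\n{corpus}\n\n"
            f"Draft a first-pass threat model for {vendor}'s AI features. "
            "Focus on access paths (who can reach it, from where) and "
            "integrations (what data it can touch), and flag anything "
            "that needs a human reviewer."}],
    )
    return resp.content[0].text
```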

Ashish Rajan: The next-gen AI agent that's out there everywhere; it feels like everything has a next-gen AI agent these days. And that's an interesting point about building capability as well.

So I guess in your talk, and we've been talking about, detection and response for infrastructure that is potentially using AI systems, or in your case, building and running AI systems.

Jackie Bow: Yeah. Yeah.

Ashish Rajan: How do you even, like, where do you start? And especially if you already have a team, like many of us may already have a security operations team.

We would like to do threat detection, but sometimes we don't have the resources or the time.

Jackie Bow: Yeah. So I think one of the most important things that we found is having a base technical stack that really allows integration with this tooling.

So it is engineering-forward, because you can think of models as software engineers: you give them tools to use, and their [00:14:00] efficacy is how open your stack is. Are you using common programming languages? Are you using either open source tools or well-documented tools? Are you using tools that have very good APIs? Because when you think about giving a model the ability to do work on your behalf, you actually need to give it hands, or access to things, which is the MCP servers, it is tools. And so I think a good place to start is if you're starting from square one, which honestly is what I think a lot of us dream about: coming into a company and being like, oh, I can just build this from scratch.

Rather than, here's the legacy SIEM, good luck. But if you are in that position, I think really focus on tooling that is open, that has very well-documented standards. If you can use a SIEM that uses an open detection standard like Sigma rules, that's better than a SIEM that has a proprietary, not well-known format.

And for us, we built most of our tooling using Claude Code, which is a coding agent that is really a [00:15:00] collaborator. So we use Claude in how we do triage and investigations, but we also use Claude to build our Terraform and our infrastructure.

Ashish Rajan: I don't know about you, but I personally fall in the camp of security people who don't code.

So I feel like...

Jackie Bow: You wanna go outside?

Ashish Rajan: I heard about vibe coding. I've been hearing a lot about vibe coding the entire day.

Jackie Bow: Oh, I vibe code all day.

Ashish Rajan: Yeah. Which is why it makes me nervous: whoa, does that mean all those ideas I've had before, when I wished I was a programmer?

Yes, exactly. So is it that easy, even as a security person?

Jackie Bow: Yes. Okay. So I will say some of the best security people are software engineers, or were software engineers, because in order to understand how to circumvent a system, understanding how the system works is great. But what I have seen with coding tools, especially Claude Code, and there's tons out there: there's Copilot, Cursor, Windsurf, Lovable. These [00:16:00] have lowered the barrier to entry from ideation to prototyping in a way that makes it so if you have these ideas, you can actually go and create a prototype relatively quickly.

And we could talk about whether this is a good thing or a bad thing. I think it's a good thing. I'm on team build more shit.

Ashish Rajan: Kane, how do we threat model this one?

Kane Narraway: It depends, right? And that's the typical security engineer answer right there. That's a consultant answer; I can drop the mic and leave now.

So I think people are gonna use it whether we want them to or not, at the end of the day. And I think you've gotta secure it in place the best you can. At the moment, a lot of that is through education, because there's not a lot of tooling out today that kind of helps with this.

And there's things like YOLO mode, right? Where you can just ask Cursor to go do its thing, whatever it's gonna say, and then you cross your fingers and hope for the best. And then you add "please don't make vulnerabilities, Claude, please" at the end, and that's how you secure it.

But I do think there are some things you can do. Like I said, if you are connecting to [00:17:00] sensitive integrations, that's where you wanna put your effort, because at the end of the day, you're not gonna be able to secure or threat model all of this stuff, right?

What data is it ingesting? That's fine. If you are connecting it to, or your log sources. Maybe it's fine if it's just telemetry, right? Yeah. In that case. But maybe if you're connecting it to like your customer, RDS or something. Then you are like, oh, now I need to put a bit more effort into securing this.

Ashish Rajan: I guess to your point, it's focusing on data, identity, access, rather than, hey, you can vibe code, or the AI agent can do its own thing, MCP, whatever else comes after.

Kane Narraway: Yeah, exactly. And you might have some guidance on use our provided MCP servers, don't go out to the internet and just download random ones.

It's a lot of typical stuff in package management, really, that is improving over time. It's getting there.

Ashish Rajan: Talking about MCPs for detection as well, 'cause I think Kane raised an interesting point about using the right kind of logs, which is kind of 101 for incident response, detection, all of that.

Obviously on the security podcast, people have spent years trying to learn [00:18:00] AWS, Azure, cloud logging, all of that. Now AI systems are being attached to their existing legacy systems as well. Some obviously may have started building applications today, so AI from day one, or AI native if you wanna call it that. But for people who are trying to incorporate detection response in legacy systems which are running on cloud,

how does AI fit into, like, a cloud environment?

Jackie Bow: I don't think you can separate cloud from most of the modern uses of AI, because in Claude's case, you can run Claude on Bedrock, which is AWS, or you can run it on Vertex, which is GCP. And so you can access the models that way.

You can access the API; we also have a first-party API. But most of what we build is in the cloud, so it's either in GCP or AWS. And I think one of the great things, you mentioned legacy, or people learning AWS: when I'm writing a detection signature, say for some random thing in AWS, 'cause AWS and GCP come out with new [00:19:00] services all the time, and you're like, what does this log look like?

I can just ask Claude, okay, what are the fields that I should look for? Claude's like, here they are, and then I can prototype a detection signature. Especially doing detection engineering, I can throw up a PR and have a detection written in like five, ten minutes.
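
A minimal sketch of the loop Jackie describes, asking the model for the log fields of a new service and a draft detection to put up as a PR. The prompt, model name, and Sigma-style output are assumptions for illustration:

```python
# Hedged sketch: ask the model what fields a new AWS service's CloudTrail
# events carry, then have it draft a detection rule to review in a PR.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute a current model

def draft_detection(service: str, behavior: str) -> str:
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1500,
        messages=[{"role": "user", "content":
            f"List the CloudTrail fields emitted by AWS {service}, then draft "
            f"a Sigma rule that detects: {behavior}. Output only the YAML rule."}],
    )
    return resp.content[0].text  # review and PR this draft; don't ship it blind

print(draft_detection("Bedrock", "model invocation from an unusual principal"))
```

The point is speed to a reviewable draft; the PR review is still where a human signs off.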

Ashish Rajan: Wow. So the entire life cycle from, we have a new service, to, we now have a preventative... of detection...

Jackie Bow: Detective control, yeah. A detective control.

Ashish Rajan: Control. And wait, so how do you balance when to retire the control? There's a whole question about, yes, you built one, someone's watering the plant, someone's making sure it grows into this big tree. But at some point you have to, hopefully not chop down a real tree, but in this context, retire a detection.

Jackie Bow: Yeah, the detection lifecycle. It's a great question, because it's very nuanced, it's very different everywhere you go, and a lot of people have different ideas. But the way that I like to break it down is, you have alerting detections, which are things that immediately need a human to look at them.

It needs human intervention. And then you [00:20:00] should have a ton of lower-confidence signals. One of the best things about using AI is I can spin up N number of Claude agents who can just look over all of my non-alerting detections and then surface things that are interesting.
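
A hedged sketch of that pattern: batch the low-confidence, non-alerting hits and ask the model to surface only what deserves a human's time. Names, prompt wording, and the model ID are illustrative assumptions:

```python
# Hedged sketch: sweep non-alerting detection hits and produce a short
# daily report of anything worth escalating to a human analyst.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute a current model

def surface_interesting(signals: list[str]) -> str:
    batch = "\n".join(f"- {s}" for s in signals)
    resp = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        messages=[{"role": "user", "content":
            "These are low-confidence, non-alerting detection hits from the "
            "last 24 hours. Cluster them, discard known-benign noise, and "
            "write a short report on anything worth a human's time:\n" + batch}],
    )
    return resp.content[0].text
```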

And one of the things we actually were surprised by, I don't know if people have used Claude, but Claude has a bit of a personality. I was running Claude over a bunch of detections that we had, and Claude wrote this report for me that was like, I'm seeing this alert happen a lot of times, and I worry about the security posture of a program that still has this as a firing detection.

And I was like, oh,

Ashish Rajan: Okay. I was like, are you really working? What are you doing?

Jackie Bow: We're just testing now, Claude. But yeah.

Ashish Rajan: Imagine it sending an email to HR: I've mentioned this to Jackie five times, she's not looking at this. Why

Jackie Bow: isn't she tuning this detection? Yeah.

Ashish Rajan: Yeah. Fair. So to your point, you're able to... wait.

So are you using MCP connectivity to AWS? In terms [00:21:00] of the foundational pillar, what Kane was talking about as well, I'm curious.

Jackie Bow: So you can think of MCP as an open standard for writing these connectors that you can provide to AI agents. But really, under the hood, everything can be broken down to tool use.

So tool use is the ability to give a model actions that it wouldn't normally do, or to coax it down a path. And so for us, we use a custom tool that we wrote. We could also use an MCP server, but we just wrote a tool that does querying into our data lakes. Or, Claude wrote a tool that queries our data lakes.
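
The tool-use loop Jackie outlines could look roughly like this with the Anthropic Messages API. The data lake query tool is a hypothetical stand-in you would wire to your own warehouse; the model name is a placeholder:

```python
# Hedged sketch: expose one read-only data lake query tool to the model and
# loop until it stops asking for tools. Tool and table names are hypothetical.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute a current model

TOOLS = [{
    "name": "query_data_lake",
    "description": "Run a read-only SQL query against the security data lake.",
    "input_schema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}]

def run_query(sql: str) -> str:
    # Assumption: wire this to your warehouse with read-only credentials.
    raise NotImplementedError

def investigate(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        resp = client.messages.create(
            model=MODEL, max_tokens=2048, tools=TOOLS, messages=messages)
        if resp.stop_reason != "tool_use":
            return "".join(b.text for b in resp.content if b.type == "text")
        # Echo the assistant turn, then return each tool result.
        messages.append({"role": "assistant", "content": resp.content})
        results = [
            {"type": "tool_result", "tool_use_id": b.id,
             "content": run_query(**b.input)}
            for b in resp.content if b.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```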

Kane Narraway: I was gonna ask a question, if that's okay. How much of your stuff is custom to you, like snowflake stuff, versus how much is possible for a wider audience?

Jackie Bow: Great question. So what we're building is SIEM-agnostic, ish. Because if you use a SIEM that treats how it works as a proprietary secret, I'm sorry.

I [00:22:00] won't go there. Yeah.

Ashish Rajan: Like one quirk bottle.

Jackie Bow: But I would say, this is the tooling that we're using, and none of it is Anthropic secret sauce. Nothing is only available to us. We're using models that are currently out, and everything we're building is in the cloud.

We're using Postgres databases, data lakes, things that you can have in GCP, AWS, or Azure.

Ashish Rajan: Yeah. 'Cause obviously we've got two camps here, for leveraging AI and securing AI as well. I'm curious, now that we know how to build, we can vibe code, let it hallucinate, and build interesting solutions, and hopefully we can figure out a way to not talk to a developer and still be able to figure out what the hell they're doing. I guess my question is, in the existing market that we are in, with securing AI being this

big unknown kind of on the side, and we are able to leverage something like Claude to make our own detection: what's the starting point for someone to enter into this, to Kane's point?

Jackie Bow: Yeah.

Ashish Rajan: Are we just able to leverage existing [00:23:00] cloud logs, existing application logs, put them into a data lake, to what you're saying, and just go, Claude, go hallucinate on it, and hopefully it comes back?

Jackie Bow: I feel like we're stuck on the hallucination.

Ashish Rajan: felt really right to say that. What else do you find here? But is that kind of where you going with that?

Yeah,

Jackie Bow: Basically, we found that once you have the logs, which is the first thing that you need to do, then it's giving Claude access and tools to both query your logs and also do some processing. We have some tools that we've created that write standardized reports based on a detection signature.

And there's a lot of ways that you can experiment and create different tools. I think one of the most exciting things for us is the ability to rapidly prototype and run experiments. We can try different strategies of triage, different modalities, and we can have an idea of, okay, I wanna have a thousand Claude agents go and look over every log, versus, I want it only to look at the [00:24:00] alerting detections and then give me really clear reports.

It's very exciting, the ability to just have an idea, prototype it, go out and test, and then get results. I've never had this kind of power before.

Ashish Rajan: It sounded like a wizard already, but to my wizard over here on the other hand: I love the passion and energy Jackie has about how AI is amazing.

Without giving up too much about your talk, what can you share about some of the things you found doing threat modeling across AI systems, both the SaaS ones and the in-house ones? 'Cause I wanna balance the picture as well. As much as I'm excited about AI and how we can leverage it to

do amazing things, I'm curious to know what you found in the whole threat modeling that you did.

Kane Narraway: Yeah. So my talk is about enterprise search. If you've used Glean, Atlassian Rovo, Slack has one, every vendor has one now, basically. And so it was looking at some of those tools and what some of the problems are.

And [00:25:00] what you find is that when you are doing these things, there are limitations. The biggest limitation is, of course, authorization. All the vendors have had to build on top of the already existing SaaS APIs, right? But those SaaS APIs aren't always good, and then the layer on top isn't always good.

And what happens when you're building auth on top of auth is that bad things usually happen. And so I find the issues are not, you'll read about things like prompt injection and realize this is the worst thing possible and we need to look at it, but really, that only matters if you're building public-facing platforms, right?

Yeah.

Jackie Bow: Yeah.

Kane Narraway: The bigger risk with a lot of this stuff is who has access to what.

Ashish Rajan: Yeah.

Kane Narraway: You're getting, like, thousands of service accounts now to connect all of this stuff together. And so again, it's a lot of the existing stuff that just gets amped up to 11, in that regard.

Ashish Rajan: Because, to your point, with the threat modeling space:

traditionally we have looked at, oh, what threats am I looking out for? A lot of the conversation around threat modeling AI systems goes, oh, it's a dynamic system, I don't [00:26:00] know what it would say next, you can't put it in the chatbot or wherever. But what you're saying also is, what's the true reality of the internals? Inside an organization there's a lot of SaaS which is using AI.

I'm just gonna throw a few names out: Salesforce has an AI, Atlassian has an AI, Canva has an AI, everyone has a customer-facing AI. And that's obviously being used by other customers on the other side. So you're obviously part of the consuming SaaS space yourself.

You have your own SaaS AI that you're looking at, and the Glean and everything else you mentioned as well. What's missing in the current approach to threat modeling? Or is it the same way to approach AI systems as well? 'Cause a lot of people would be thinking, am I learning something completely new here,

Kane Narraway: yeah.

Ashish Rajan: Or am I able to just leverage what I know?

Kane Narraway: There's a few new bugs and things like that. There's things like, as I said, YOLO mode and stuff like that, which is all brand new. But again, I think if you've threat modeled a lot of SaaS tools in the past, you'll pick this up pretty quickly.

I do [00:27:00] think what we'll see next will be interesting. MCP at the moment is like a layer for our LLMs, right? It kind of sits in front of our already existing APIs. I do wonder how long it will be until we go, we just don't need those original APIs, and we just LLM it.

And at that point it's much more scary, because MCP is basically taking the wide-ranging prompts that I'm putting in and turning them into specific API actions that I can usually see. When I can't see those things in the future, that will be really interesting.

We can audit them as well, is that what you mean?

Jackie Bow: Yeah. I think the thing that Kane is getting at, from the incident response and investigation side, is that we need to keep logs of what is happening. 'Cause I'm very excited about, look at how much I can do, look how much my work is amplified by using LLMs.

But you take that for every person at a company, even people who maybe have not historically interacted with [00:28:00] infrastructure or technical systems but are able to now. And we still have this really early, forming idea of what identity is when it comes to agentic workflows.

And so the ability to trace back where actions are coming from is gonna be more and more critical. Especially for me, if I'm looking at an incident of, why did this server go down? And it's, oh, this API call came from Bedrock, it came from an AI. That doesn't actually help me.

I need to know where it actually came from. And so tracing actions, I think, is critical.
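
As a small illustration of the traceability Jackie is asking for, every tool call an agent makes can be wrapped to emit a structured record tying the action to the agent and the human it acted for. The field names here are assumptions, not a standard:

```python
# Hedged sketch: wrap each tool an agent can call so every invocation emits
# a structured audit record that can be traced back during an incident.
import json
import time
import uuid

def audited(tool_fn, *, agent_id: str, on_behalf_of: str):
    """Return a wrapped tool that logs who acted, for whom, and with what."""
    def wrapper(**kwargs):
        record = {
            "event": "agent.tool_call",
            "call_id": str(uuid.uuid4()),
            "tool": tool_fn.__name__,
            "agent_id": agent_id,          # which agent performed the action
            "on_behalf_of": on_behalf_of,  # the human who kicked off the workflow
            "args": kwargs,                # assumes JSON-serializable arguments
            "ts": time.time(),
        }
        print(json.dumps(record))  # in practice, ship this to your SIEM
        return tool_fn(**kwargs)
    return wrapper
```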

Ashish Rajan: Are you able to use AI for those, like going into the rabbit holes as well?

Jackie Bow: I would say AI is pretty good at going into rabbit holes. But yeah, I think you need to guide it, massage it.

Kane Narraway: Yeah, it's been pretty classic as well, right, that security teams don't really scale with engineering departments, generally.

Jackie Bow: Yes. Yeah.

Kane Narraway: And so I feel like we have to. Even if you are one of the doomers who is like, no, I will do everything manually, learn the hard way: if your engineers are [00:29:00] doing it, then you are going to fall further and further behind.

Jackie Bow: I think that's such a good point. And the position I have is, we are not going to be able to keep up as defenders if we are not willing to use this technology. If we are only on the side of, oh, MCP servers are vulnerable, or let's only talk about prompt injection, which is something I feel this community sometimes gets stuck in a little bit, the hacking or the breaking...

We won't be able to scale with offensive capabilities and offensive technologies if we are just blocking ourselves on, we'll wait until it's more secure, we'll wait until it's better.

Ashish Rajan: Here we are, 16 years later, still talking about cloud adoption. I guess some things will always be slow.

I love both perspectives, but I also want people to walk away with a starting point. They heard how passionate you are about Claude Code, and people should definitely go and try that, even if you've never coded before. Open up Visual Studio Code or whatever your favorite editor [00:30:00] is.

What's a good starting point for someone who's inspired after hearing this, to just start leveraging AI?

Jackie Bow: Yeah, I feel like trying out some of the coding assistants; there's a ton of resources out there. I feel like Anthropic's documentation is pretty great.

Ashish Rajan: No bias. No bias. Completely. Yeah,

Jackie Bow: I'm completely not bias at all.

And yeah, there's a lot of YouTube videos and things to talk you through. And if you have an idea for something that you would've liked to build, prototype it yourself; stand up your own AWS or GCP account. And I don't recommend you do this on your corporate or production systems, on a livestream,

Ashish Rajan: so I'm glad you mentioned it.

Jackie Bow: But yeah, in a sandbox environment, or if where you work provides nice sandboxing, so you can have a playground. I would say definitely don't be afraid to just try things.

Ashish Rajan: So is there, I don't know, like an S3 bucket open to the internet or whatever, is there a threat that comes to mind?

What's probably the easiest one to start with, with this kind of vibe coding?

Jackie Bow: I think one [00:31:00] interesting thing is just throwing logs at an LLM and asking it to come up with patterns, or if you have an idea about a detection signature. But also if you're like, I want to create a system to collect logs, or, one of the interesting ones for me, I want to run some kind of analysis over a bunch of files.

And setting up infrastructure and systems to do that, I found Claude Code to be really helpful with.

Ashish Rajan: And how do you scale something like that? Because, yeah, it's one thing making one,

Jackie Bow: yes. Yeah.

Ashish Rajan: And now you're like, okay, now I do this across 300-plus AWS accounts or GCP accounts.

Yeah,

Jackie Bow: I think so. What we start with is an idea, and then we talk to Claude, and then we come up with a design doc. In a good design doc you have components about scalability, and I've found it's really collaborative. My colleagues and I will

come up with these design docs and iterate on them. We start broad and then move into the specificity of an actual technical deployment. And [00:32:00] then we'll go into the actual vibe coding with Claude, where we take the design doc in a markdown format, drop it into a repo, and then let Claude cook.

Yeah,

Ashish Rajan: With guidance we're hallucinating, we are cooking. I dunno, we need to update the vocabulary a bit. Kane, on the flip side, for threat modeling as well: what's a good starting point for AppSec folks, or people who've been on the other side, and how can they scale that as well?

Kane Narraway: Yeah. Here's my kind of opposite take, I guess, which is: if you are a cloud security engineer and you are building stuff, you should still learn to code manually, connect APIs, and do your day-to-day manually, so that you really understand it. Because if you use vibe coding to do that, you're stealing that learning away from yourself.

However, say you need to build a UI and you're not a front-end developer, you just need something to show.

Jackie Bow: Say you need to make a button.

Kane Narraway: Yeah. Then go nuts, vibe code that. And I feel like that way you will gain knowledge [00:33:00] in your domain and you'll keep it. And so specifically with something like threat modeling: do loads of it, build up a big knowledge base when it comes to that stuff.

Keep doing it so that you are good at it, and then you can tell what the AI is good and bad at in that regard. That way you can use it as a triage step and say, look, this one's high risk, it's done half the work for me, but here's a bunch of stuff I still need to do. And I needed to do the learning first to do that.

Jackie Bow: Yeah. That's such a good point.

Yeah.

Ashish Rajan: Oh, awesome. I just wanna take a moment for Dana. By the way, for folks who have questions, feel free to use the QR code. We've got someone walking around here as well. If someone does have a question, feel free to raise your hand.

Dana would find you in a few seconds, but

Jackie Bow: Dana will find you. Yeah,

Ashish Rajan: Dana will find you. Sounds like

Dana Torgersen: a threat.

Ashish Rajan: Yeah. Does anyone have any questions they'd want to ask before we proceed to the next question? I can't even see people, I just see lights. So I'm just assuming Dana can see them.

I'm like, okay, cool. [00:34:00] But if you do, feel free to find these people afterwards as well. I've got a couple more questions coming toward the tail end of the episode. These are the fun questions; we've been very serious so far. Yeah. Totally.

Not having fun at all. So I've got some fun questions too, just to lighten up the mood a bit. No AI was involved with these; I didn't let it hallucinate. First question: if AI could protect one thing in your life besides your passwords, what would you want it to guard?

Jackie Bow: So this, my answer, and I think Kane might have feelings about this too, is I would want something to protect my dog.

I have a Pomeranian and sometimes I can't be home with her, so I would like something that just,

Ashish Rajan: Like a shield that walks around with it.

Jackie Bow: Yeah. And make sure she's okay and that she has enrichment time.

Ashish Rajan: Plays with her as well when she's bored.

Kane Narraway: Fair, yeah, that's a good one. I can't steal that one; I have a little eight-week-old Pomeranian. So here's a funny story, right? My friend lives in rural Australia and he has a [00:35:00] big homestead, right? And he has these fat wombats that come and steal his strawberries every night.

And so he's shown me videos of them. So I think we need to make a startup for, like, wombat detection and response or something.

Ashish Rajan: For all the non-Australians, wombats are like giant rats or raccoons, for lack of a better comparison. If people have not seen them, they're everywhere in Australia.

Good. Great answer. I've got the second question: what's one totally ridiculous thing you think AI should have security for? I would love for it to protect my crypto wallet, but oh, what's a ridiculous thing?

Jackie Bow: Oh man, I'm just gonna say it. I have no filter at this point. I talk to AI a lot, as like a therapist, or for interpersonal things. And so I would like protection for when I'm talking to LLMs about emotions. Keep that private.

Ashish Rajan: Remind you that, hey, with everything you put in here, you may hear answers which I may be hallucinating. So don't take my advice seriously about your life choices.

Kane Narraway: Fair. It's really hard to follow the wombats. I feel like that is a pretty ridiculous one by itself.

Ashish Rajan: Okay, so that's the most ridiculous thing?

Kane Narraway: Fair. I think that's pretty ridiculous.

Ashish Rajan: I would think so. Oh wait, so then the next question is: if your AI security could be your spirit animal that would walk around with you. Don't say wombat now. Should we

Jackie Bow: answer at the same time?

Ashish Rajan: Oh, okay. So if people are ready for it, the question here is: if AI security could be your sidekick, what kind of animal form should it take? Which animal would you pick and why? So let's say it at the same time. I'll let you come to the mic as well.

Jackie Bow: One, two, three: Pomeranian!

Ashish Rajan: Pomeranian. Oh yeah. Alright, I'm gonna say a different one: golden doodles, like, another one in there. Someone needs to stand up for the golden doodles out there.

Jackie Bow: Chicken nugget.

Ashish Rajan: But wait, why Pomeranian? Why does your sidekick need to be your best friend in life?

Jackie Bow: Yeah. I just think, since I have a Pomeranian and she's already my sidekick, that I would, [00:37:00] yeah.

Ashish Rajan: Dogs are our best friend. We should just end the show at this point. Yeah.

Jackie Bow: We're actually gonna do a slideshow of our dogs now.

Ashish Rajan: We've got an AirDrop going on here, so the dog show-off should come to us. But that was the episode that we wanted to record.

Thank you so much to everyone who joined us, and in the overflow rooms as well, thank you for engaging in the conversation. I don't know if we have been able to sway you on the AI side of security, whether you're still testing the ground, or happy to test AI behind the passionate Jackie that we have here, or still threat modeling your way with the wizard Kane that we have.

Hopefully you can slay some dragons, AI dragons, for the rest of the conference as well. But thank you so much for joining us and being part of the podcast live as well. Thank you so much.

Dana Torgersen: Right ahead. Thank you. Thank you folks. We've got some gifts for you from our great sponsors. Oh,

Ashish Rajan: thank you.

Yay, AI sponsors.

Dana Torgersen: Kane.

Ashish Rajan: Here you go, Jackie. Oh,

Dana Torgersen: well done.

Ashish Rajan: Oh, I don't get one, 'cause I said AI. I said AI.

Dana Torgersen: I didn't [00:38:00] get one, no. AI sponsors, you get a thank you outta this guy. Oh, thank you so much. Thank you folks. That was a great conversation; I hope it all got captured on tape. For all the rest of you folks in the room here, if you wanna stay right here, we've got Future-Proof

Your Career: Evolving in the Age of AI. AI, AI, we get paid every time you say it. Don't forget, coffee upstairs till 4:00 PM. There's also headshots still, thanks to Opal Security, great folks over there; it's technically out and around by the concessions. Have a great rest of your BSides.

We get to do all of this again the rest of this afternoon and all day tomorrow. Thank you for coming.

Ashish Rajan: Thanks everyone. Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv.

We are also publishing these episodes on social media, so you can definitely find these episodes there. Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where [00:39:00] we do in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way up to what is CNAPP, or whatever new acronym comes out tomorrow.

Thank you so much for supporting, listening and watching. I'll see you next time.

