Bryan Woolgar-O'Neil, CTO of Harmonic Security, shares how to build a developer-first AI security culture while implementing MCP gateways and governance controls that actually work.
Bryan Woolgar-O'Neil: [00:00:00] They had like a mountain of bugs. Fixing a bug took three to five days. They could do something that was taking five days in like 20 minutes.
Ashish Rajan: I don't know if developer friendly is a thing that is possible
Bryan Woolgar-O'Neil: How are my engineers gonna react to this? So you could plug into a data store, pull data from that, and then push it to the web.
An engineer could have done that previously, but they probably wouldn't. AI doesn't have that knowledge. Roughly like 70% of MCP servers are local servers, run locally rather than things you connect to. If you just block, the engineer's like, what the hell? You've just stopped my flow from working.
Ashish Rajan: Is developer first security possible with AI?
A lot of times the answer may seem like no, because of the speed of AI. My conversation with Bryan, the CTO of an AI security company called Harmonic Security, covers how they have been working with their own developers to build a developer first AI culture. We also spoke about some of the changes that have been coming with MCP, or Model Context Protocol, which was released by Anthropic, how they're adopting it in their organization with MCP gateways, and also how you can progress through a maturity model for AI security as part of a security [00:01:00] program.
All that, and a lot more, in this conversation with Bryan from Harmonic Security. As always, if you enjoyed this episode, or if you know someone who would like to understand AI security, the different maturity levels, MCP security and all of that, definitely share this episode with them as well. And if you've been watching or listening to episodes of this podcast for some time and have been finding it valuable, I really appreciate it.
If you could take a quick second to hit the subscribe button, it only takes a second and it's free for you, but it means a lot, shows support for the work we do here, and helps grow our reach as well. Thank you so much for supporting the work we do, whether it's on Apple, Spotify, YouTube, or LinkedIn.
Enjoy this conversation with Bryan and I'll see you soon. Hello, welcome to another episode of Cloud Security Podcast. I've got Bryan with me from Harmonic Security. Hey man, thanks for coming on the show.
Bryan Woolgar-O'Neil: Hey, nice to meet you. It's lovely to be on the show today.
Ashish Rajan: But dude, I'm so excited for this developer first conversation. Maybe to kick things off, can you share a bit about yourself and your background in this world of IT and cybersecurity?
Bryan Woolgar-O'Neil: Yeah, yeah. I'm Bryan. I'm CTO [00:02:00] and co-founder at Harmonic. My background is in engineering, so I was a developer for 10 or 15 years, writing code every day. I haven't written it for 10 years, but I still kinda see myself, if I got the coding gloves on, I could still do it. I went from there to be the CTO of Digital Shadows, which is a threat intelligence company.
And then myself and my co-founder Al set up Harmonic a couple years ago. We set it up to help organizations adopt AI, but adopt it securely. So we give AI governance and controls, and the special sauce that we provide organizations is looking at user intent and the data, understanding what AI adoption is actually happening within an organization, and allowing them to control that and adopt it safely.
Ashish Rajan: And talking about AI adoption safely as well. A lot of people are in that experimental phase.
Some people have gone to the next point as well. [00:03:00] What do you find as the contention point for why people's usage of AI stops at that experimentation stage? Is it just governance failure, or what have been the conversations you've been hearing from CISOs and others on why some of that's failing, if it's not even picked up early?
Bryan Woolgar-O'Neil: Yeah, I think there are two things that pop into my head. One is, employees are definitely adopting AI, whether organizations like it or not. One of the interesting things for me, and probably why we started the company, right, was when ChatGPT came out. It was the first thing in ages where you're like, oh, Jesus, look at what it could actually do. You could just see, even at that early stage, what could happen and what capabilities it could provide. So if you think of why employees are adopting AI today, there's so much it can do to help them do their job better, to help their business run better.
And then if you [00:04:00] think of what the organizations are thinking: when we talk to customers, we're obviously a vendor, so we're trying to bucket everyone in, trying to sell to 'em, et cetera. But in general, people fall into two large buckets. You get the blocking company, where they've traditionally blocked everything and only open up a very small amount, and they've got a lot of controls, a lot of governance on every single thing to the nth degree. And then there's a permissive group who, because of their culture, are quite open. Both of them are trying to adopt AI, but they're coming from different routes into that problem.
And I think they see different things.
Ashish Rajan: I'm with you on the split behind this. There's a thought process where a lot of people say organizations may have started with experimentation, but generally that's also where governance failure happens, which a lot of CISOs and other people may not be able to pick up, because you [00:05:00] can't really tell the difference.
At what point is it experimentation versus now you have production data in there? Was there something along those lines that you were seeing as blind spots?
Bryan Woolgar-O'Neil: Yeah, definitely. There are people we speak to who know, we're using these sites, because they've got some firewall type telemetry around that, but they don't know what people are actually doing underneath it. So they don't have the context of, is it experimentation? Is it production? Is it someone asking about what they should eat for dinner tonight?
They've got no idea about what's actually going on there. So even though they can tell there's a certain amount of traffic going to this website, what does that actually mean to the organization? I think a lot of them don't know. They just go, well, we've bought this ChatGPT Enterprise license.
Ashish Rajan: Yeah,
Bryan Woolgar-O'Neil: we're all good now. And it's like, well, wait a minute. What are people actually doing on that? Are they actually using that particular license? Are they using lots of other sites as well? So to me, it's like [00:06:00] having the idea of governance when you don't really have any governance at all, because you don't actually know what's going on under the covers.
I think that's probably the big thing.
Ashish Rajan: So this whole idea or notion of, I have quote unquote my AI under control, would that be a false sense of security? And if so, why is it considered a false sense of security?
Bryan Woolgar-O'Neil: Yeah, I think some people probably do have it under control, right? The most mature organizations have probably leaned into it, and leaned into it early.
So I don't think it's a blanket statement, but in the majority case, there's probably a whole bunch of things going on, whether you call it shadow AI or something else, where
Ashish Rajan: yeah,
Bryan Woolgar-O'Neil: there's usage going on in an organization that you're not aware of and don't understand, and it could produce risks.
Whether that's the risk of your data going somewhere that's gonna train on it, for example, or
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: people using personal accounts, so that [00:07:00] if they ever move jobs, they take all your data and all your prompts with them. So I think there's a whole bunch of different things that could occur from that.
Ashish Rajan: But I guess, 'cause you've obviously come from a threat intelligence background, you would've noticed that a lot of this seems, to your point, like the personal Gmail versus work Gmail issue. It's not a new problem. So is AI fundamentally different?
And if it is, why? And if it's not, can we just reuse what we already have?
Bryan Woolgar-O'Neil: Yeah, I see it as different, probably for a couple of reasons. One is the kind of power that AI can bring to a business.
If you think about some of the newer things people are doing with agents, you're kind of doing the same jobs. An example one of our customers was telling me about last week: they had a mountain of bugs and [00:08:00] they never got to them at their organization.
Fixing a bug took three to five days per bug, and they had a whole bunch of them. They had a process where they'd look at it, look at the logs, do root cause analysis, write some code, check it works, deploy it, et cetera. And their engineers started experimenting with using Claude Code and an MCP to do that job automatically.
So they could do something that was taking five days in like 20 minutes, and they could parallelize it, so you could run multiple at once. To some extent you're doing the same job as before, right? You're still doing root cause analysis, you're still writing code, but the AI's doing it within that process, and it's doing it a lot faster and doing a lot more of it.
So all the governance you used to have around writing code and access to different systems is kind of being given away, and it's been [00:09:00] amplified. So do you need the same controls that you had before, which were probably a lot of manual human reviews and efforts like that?
Or do you need a more intelligent control that understands the intent, where information's going, and what's going on within that process?
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: For me, it's the scale of it, the access it has, and also that AI is a black box; it doesn't always produce the same answer. So with all those three things together, to me it does feel like a different problem that needs a different solution.
Ashish Rajan: Yeah. I think you've touched on an interesting point, 'cause a lot of the earlier problems we had were also driven by the fact that security was always told, hey, you have to be developer friendly.
Developer first is a slogan that I don't think people have forgotten in all the AI experimentation we are doing; that's definitely still a thing. And to what you were saying, hey, you may have visibility [00:10:00] into some of the enterprise licenses you have signed up for, like we were saying earlier.
But then there is this whole experimentation being allowed for developers, because clearly they need to innovate, and security needs to find a way: oh, I'm gonna keep all the production data separate and that's good enough. But I don't know if developer friendly is a thing that is possible.
Obviously you guys are running an AI security company yourselves, so you may have similar challenges with the developers on your side. One of the reasons I asked if it's fundamentally different is that a driving factor for a lot of organizations is that, hey, it needs to be developer first. Is that notion of developer first still possible in an AI world?
And if it is, how have you been able to walk that path, and where is the comfortable path? I know there's no perfect answer, but I'm curious where's the comfortable path you've been able to find between what you're [00:11:00] doing and the customers you're working with?
Bryan Woolgar-O'Neil: Yeah, you remind me: when we speak to customers about this, we have an MCP gateway, so you need to put it on a developer's workstation. And whenever we talk to them, you can see the fear in their eyes of, oh, how are my engineers gonna react to this?
But yeah, we use AI within Harmonic, and in some ways it's easier for us, right? We are an AI first company, we were born in the AI era. So we can adopt a whole bunch of things quickly, we can put controls in, and as I said, our product is about those controls. But I think we've evolved, or iterated, to a decent position in how we operate.
Our standard setup is using Claude Code for a lot of our more [00:12:00] agentic flows, but we also experiment quite a lot around the side of that. Essentially we've defined what's experimentation and what is production code as separate things, even though they're using the same tools.
I think our security team is really looking at the different pinch points they care about within that process, selectively. They're not trying to control everything, but it's, as you said, access to production data.
Ashish Rajan: Yeah. Yeah.
Bryan Woolgar-O'Neil: What can access that? Where can it access it, and how much? To that extent, the standard things that you put on employees. The difference is, obviously, AI is doing that task versus your employee, so how do you get those controls? And we've been, I guess, eating our own dog food, or whatever
the drinking our own champagne one is. Yeah, yeah.
Ashish Rajan: The way the French would say it, drink your own champagne. Yeah.
Bryan Woolgar-O'Neil: So the tools that we've been making for customers, we've been running at Harmonic. We had our [00:13:00] MCP gateway like three or four months before we put it out for customers, and we were trying different things out, seeing where we wanted to add in additional controls around certain things.
With an MCP, you've got things like destructive actions. We don't really want AI to be creating new Jira projects. We don't want it to be creating Git repositories, because we're not trying to build an, oh, go and build my whole website from scratch, prototype kind of thing.
It's more like, we want you to do this feature, so we want you to exist within our own architecture. So we've spent a lot of effort putting things in place that have kinda evolved into Claude Skills now, but we were trying to do a similar thing before, just using Claude Code and templates, like markdown documents with our architecture.
So you're pushing it in certain directions.
Ashish Rajan: And you [00:14:00] obviously touched on MCP. What I've found in a lot of conversations is that as much as, for people who are living and breathing the AI world, MCP is, oh yeah, yeah, I know what MCP is, it's definitely still not common among many organizations. The ones who are more aware of it are probably the ones on that developer first landscape.
Could you describe MCP, and what was the reason for going down the path of building this MCP gateway for yourselves? What led you guys down that path?
Bryan Woolgar-O'Neil: Yeah. So in terms of what MCP is: you've got a whole bunch of applications that had APIs, right?
They're all different, and essentially MCP, I think, has two main purposes. One is trying to standardize what that interface is to lots of different APIs, so that the AI can just use one interface. And the second thing that I think people forget is that as an [00:15:00] engineer, whenever you looked at API documentation, you'd be like, what the hell does all this mean?
I know what I wanna do, but as a developer, you wouldn't know which things to call.
Ashish Rajan: A great definition is, if you look at the list of slash endpoints, you could go down any path. And those are just the ones that have been documented, not the ones which are not even documented.
Bryan Woolgar-O'Neil: Yeah. So for me, MCP gives that kinda standard interface, but it also gives the instructions to the AI on how to use itself. The AI doesn't know anything about these interfaces.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: It pulls in the MCP spec and then understands, alright, how can I best use this GitHub MCP server?
So that's your server instructions, your tool instructions. It's almost like guidance for the AI to be able to use all these different tools. And the best MCP servers don't just wrap an API; they give that great guidance, so if an AI plugs one in, it can [00:16:00] automatically use it straight away without the user doing anything.
There are a lot of times where, as a user, you have to change your prompt to get it to call the right MCP tools, but if you've got a really well designed MCP server, it can call them directly. As for why we made our gateway: it was born out of, I guess, ourselves wanting those kinds of controls, but also the way we think about product, which is listening to our customers and responding. At first, people didn't come in and say, we want an MCP gateway. They came in and said, I'm really worried about what my engineers are doing. I know they're probably using lots of AI and lots of tools, but what are they doing?
Yeah, so we researched that area, and there were two or three different initiatives that came out of it that we're working on internally, but one of them was around MCP. People were seeing it in a similar way to things like third [00:17:00] party libraries you might add into a code repository.
So like,
Ashish Rajan: oh yeah, yeah,
Bryan Woolgar-O'Neil: we wanna have some way of reviewing and risk assessing those, and then putting some controls around what is available to be used or not. And then they were also looking at, if these can connect to my different data sources, which ones are they connecting to, and what data's flowing between different systems?
'Cause you could plug into a data store, pull data from that, and then push it to the web. An engineer could have done that previously, but they probably wouldn't, 'cause they've got an inbuilt knowledge of what is acceptable, whereas the AI doesn't have that knowledge. It'll just pull data here and push it over there.
And that's the issue that the organizations were worried about.
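To ground that in something concrete, here's a minimal sketch of a local MCP server, assuming the official MCP Python SDK and its FastMCP helper. The server name, tool, and data source are hypothetical stand-ins; the point is that the typed signature and docstring travel with the tool, which is the "instructions to the AI" Bryan describes.

```python
# Minimal sketch of a local MCP server, assuming the official MCP Python SDK
# ("mcp" package). The server name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")  # the name an AI client sees when it connects

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Look up the status of an order by its ID.

    The docstring and typed signature are exposed to the AI client as the
    tool description and input schema: this is the guidance that tells the
    model when and how to call the tool, not just a raw API wrapper.
    """
    # Hypothetical stand-in for a real data-store query.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # stdio transport by default: the client launches this process locally,
    # which is part of why so many MCP servers run on developer machines.
    mcp.run()
```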
Ashish Rajan: Interesting. So if I were to summarize: MCP, or Model Context Protocol, is a plug and play kind of thing for what traditionally was called an API before, where someone [00:18:00] would say, hey, I want to talk to the API of Amazon, or talk to the API of GitHub, to use your example.
But instead of doing that, I can use the AI to plug and play my GitHub MCP, or my Harmonic MCP, or any other MCP that I wanna go down the path of. And you said something really interesting there, because a lot of people, and again it goes back to what you were saying about the visibility part, may have an understanding, but don't know what kind of data is flowing through it.
And I would say a lot of things are still evolving, 'cause a lot of people will hear this for the first time and go, oh, what are the top three things that I should be looking out for from an MCP perspective? Am I getting an MCP gateway? Am I building an MCP gateway myself?
What are the top three things that come to mind for people who are looking at MCP, so they can go, okay, bare minimum, at least use these three to get some kind of comfort? Is there a top three?
Bryan Woolgar-O'Neil: Yeah, I [00:19:00] was on another panel the other week and someone asked this question, and I tried to summarize how we've been hearing about it.
I think the top level is just, give me visibility into what's being used. It's a basic level of inventory: what clients and servers have been used, what tools have been used, so you can get a sense of where your usage is, how much usage there is, what departments are using it within your organization.
So
Ashish Rajan: yeah,
Bryan Woolgar-O'Neil: Usage information, telemetry, call it what you like. And then the second thing was, we want access controls, so we can approve or risk assess a server. And most companies want to try and get down to an approved set of servers.
Ashish Rajan: Yeah. Um,
Bryan Woolgar-O'Neil: whether they're official ones from the sites, whether they're homegrown ones, they kinda want to get to a smaller subset that they're happy with, and a process of being able to add new ones over [00:20:00] time as their process evolves. So it's visibility first, and we call that middle bit access controls, but it's a way of doing risk assessment and providing access to MCP. Below that, there are the specific risks around it that they might care about: things around data, like what data's going where, and more adversarial risks, like prompt injection and other types of attacks that might happen within an organization.
Those things start to come in as the third level. But there's probably quite a gap between the first two and the third. The first two feel like the basics: getting something basic in place that gives you that control and that governance of the MCP usage. And then below that it's, now we've got those general basic controls in place, how do we worry about specific threat vectors, whether that's data loss or whether that is [00:21:00] an adversarial attack on your organization?
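As a rough illustration of that first, visibility tier: the inventory Bryan describes is essentially a stream of usage events recording which client, server, tool, and user were involved. The sketch below is a hypothetical stand-in, not how any particular gateway implements it.

```python
# Illustrative sketch of the "visibility" tier: an append-only inventory of
# MCP usage events. Names and the log format are hypothetical.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class McpUsageEvent:
    client: str      # which AI client made the call, e.g. "claude-code"
    server: str      # which MCP server was invoked
    tool: str        # which tool on that server
    user: str        # which employee or workstation it came from
    timestamp: float

def record_usage(event: McpUsageEvent, log_path: str = "mcp_usage.jsonl") -> None:
    """Append one usage event as a JSON line; over time this builds the
    inventory of who is using which clients, servers, and tools."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

record_usage(McpUsageEvent(
    client="claude-code", server="github", tool="create_pull_request",
    user="dev-laptop-42", timestamp=time.time(),
))
```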
Ashish Rajan: I think it's a good point. It's also worthwhile calling out, at least from what I've seen so far, that a lot of people are running MCP locally on their laptops as well. It's not that it's hosted. Obviously we used the example of GitHub earlier, but that's the other half: that's the port you're plugging into, but the cable is coming from you somewhere.
Is that how I'd describe it? Like it's someone's local laptop at that point, right?
Bryan Woolgar-O'Neil: Yeah. And I think there are a few reasons for that. From our stats, roughly 70% of MCP servers are local servers, and that's probably across the board in terms of what is running: servers that run locally rather than things you connect to.
So why is that? It's probably worth saying that you can write a local MCP server using Claude Code or something within three or four hours. It's not a long task. Imagine if you were setting up your own API: you'd have to set up [00:22:00] your own infrastructure. With a local MCP, three or four hours later you've got something that will help you connect into the different systems you need to connect to.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: Also, you don't need to worry about networking, to an extent. If an engineer already has access to some data or some systems on their local machine, and they run an MCP server locally, it's got the same access rights. Whereas if you start hosting it on the internet, and then you want to talk to your databases or different systems that you've got behind a VPN or whatnot, then you have to start worrying about networking and internet access.
So the engineer who wants to experiment wants to get going faster.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: It's easier for them to run it locally. And the last point is probably that some of the tools need local capabilities, 'cause they run commands and whatnot on the operating system in order to do their job.
So if you add those three things together, it makes sense that someone runs [00:23:00] it locally on their machine and develops it that way. And that's typically what we've seen internally and from our customers' usage.
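For context on what running locally means mechanically: MCP clients typically launch a local server as a subprocess and talk to it over stdio, based on an entry in the client's config file. The dict below mirrors the mcpServers JSON convention used by clients such as Claude Desktop; the server name and command are hypothetical.

```python
# A local MCP server is registered in the AI client's config as a command to
# launch over stdio. This Python dict mirrors the "mcpServers" JSON
# convention used by MCP clients; the entry itself is hypothetical.
import json

mcp_client_config = {
    "mcpServers": {
        "orders-demo": {
            # The client spawns this process locally, so the server runs with
            # whatever access rights the engineer already has on the machine.
            # No networking setup needed, which is Bryan's point about why
            # roughly 70% of MCP servers end up being local.
            "command": "python",
            "args": ["orders_server.py"],
        }
    }
}

print(json.dumps(mcp_client_config, indent=2))
```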
Ashish Rajan: Do you find that a lot of people have built their architecture, or at least evolved it, into a data security centric model? Does that kind of work in this context as well? Because throughout what you were saying: if my MCP server or my MCP client is on my local laptop, and I could access GitHub before, then by extension my MCP can access GitHub as well, because my laptop could. There was already a network port open.
So does a data security centric architecture at that point give you some layer of security? Because my hope is they would still be kept away from production data.
Bryan Woolgar-O'Neil: Yeah, it depends. [00:24:00] Some organizations have good controls around their production data, right?
But there probably still are routes where engineers can get access, even if it's just in time access to those databases for troubleshooting or other types of things. So it depends. If you go back to the beginning, we talked about permissive and blocking organizations.
Imagine a blocking organization: getting access to production data is probably hard, and you're probably going through five or six steps to get access to the data. So you probably can't run MCP locally without network changes and whatnot. Or even if you could, it's hard enough that it's a barrier against someone trying to do that. But in a lot of organizations that are more permissive, they might still have controls, but they're probably controls where an engineer can still access production data and data systems from their local desktop.
Ashish Rajan: I mean, because to your point, a lot of people have [00:25:00] to be developer first. And in a developer first world, you almost, to your point, would be more on the permissive side rather than the blocking side. Is there a developer friendly way of adopting MCP while remaining safe?
Bryan Woolgar-O'Neil: Yeah. The way we're trying to approach it is having that control sitting on the endpoint with the end user. One of the big things is we don't just do MCP; we look at the browser and the endpoint in general.
And one of the things we think about is a focus on coaching at the point in time of data loss, or other types of events that you wanna step into the middle of. If you think about the developer first view of that, you wanna be transparent when they're just doing their job and there's nothing to worry about.
But when you do need to step in, [00:26:00] something we're trying to do is step in in a way that doesn't just block. Take the example I gave earlier of fixing bugs. Let's say that takes 12 minutes, 'cause it has to look up logs, do root cause analysis, and write code.
So it might take 12 minutes to do that agent flow. And if after 10 minutes you just block, the engineer's like, what the hell? You've just stopped my flow from working. What's going on here? So whenever we do step in the middle: one, we don't just stop the flow. And two, we almost try and coach. If we're in an MCP flow, we're trying to coach the AI; we're trying to give the AI enough context for it to complete its task. Its job is to complete the task you've given it as a user, like, fix this bug, here's some information.
So if data's being sent somewhere it shouldn't be, [00:27:00] we will coach the AI: explain what that data is, why it shouldn't be sent there, and what it could do about it. What we've found is most AIs will read that information and then work out a new plan to complete the task without the data being exposed.
On the browser, it's a similar approach, but you're trying to coach the end users, the actual employees, to say: you're trying to do this, but you're sending a type of data that the organization doesn't want going into this type of destination. That could be a free version of a well-known tool that they've got an enterprise license for, so you're trying to redirect them to the right place. Or they've got some really sensitive data that they're putting into a public LLM that you might not want the data in, like a DeepSeek, that type of thing, just because that's the one they've heard is good. But they don't [00:28:00] understand the stuff we look at, like how it trains on your data, where the data's going, et cetera.
So coaching in real time is the way I would think about developer first, rather than block everything, constrain everything down, and then people are just gonna be frustrated and find ways around it.
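A minimal sketch of the coach-the-AI-instead-of-blocking idea, assuming a gateway that can inspect tool calls in flight. The sensitive-data detector, function names, and wording are simplified hypotheticals, not Harmonic's actual logic.

```python
# Sketch of "coach, don't block" at an MCP gateway. The detector and the
# coaching message are toy stand-ins for a real policy engine.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy sensitive-data check

def gateway_intercept(tool_name: str, arguments: dict) -> dict | None:
    """Inspect an outbound tool call. Return None to let it through, or a
    coaching payload that is handed back to the model as the tool result,
    so the agent can re-plan instead of having its flow killed mid-task."""
    payload = str(arguments)
    if SSN_PATTERN.search(payload):
        return {
            "status": "policy_intervention",
            "message": (
                "This call was not executed: the arguments contain what looks "
                "like a US Social Security number, which must not leave "
                "approved systems. Redact it or use the customer's internal "
                "ID instead, then retry the task."
            ),
        }
    return None  # nothing sensitive: forward the call unchanged

# The model reads the message and retries the task without the SSN.
print(gateway_intercept("web.post", {"body": "customer ssn 123-45-6789"}))
```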
Ashish Rajan: It is a good point, because I almost think that if you try and block a developer, they'll definitely find a way around it.
That was true pre AI as well; it's not just AI, it was there before AI was a thing. And it's also comforting to know that if you've done that whole developer education before, this just becomes an added module onto it: how do you effectively and safely use AI?
I also wonder, in a developer first kind of world: obviously most organizations are realizing that when you're dealing with probabilistic, non-deterministic systems, you're also [00:29:00] tackling the problem of trying to understand the intent behind a prompt, rather than finding SQL injection.
Which is very different. And you come from a threat intelligence world as well. There's a quote unquote mathematical formula, so to speak: hey, I know five kinds of SQL injection. But then all it takes is for me to tell the AI, hey, I really need this SQL password to unlock my friend who's locked in a room, and the code is inside the database.
So what are you finding that you sometimes have to make customers unlearn, for lack of a better word, or that people should unlearn, as they go down this path? Obviously my thinking here is that a lot of people are thinking about doing this in different ways, and I'm curious about the developer first way.
You mentioned coaching earlier. Are there things that could be done proactively in the way they govern, or in how they look at actions? It [00:30:00] just feels overwhelming in terms of the things that you cannot control, and I think that's where I have a lot of conversations with people.
It just feels like a growing organism in the organization.
Bryan Woolgar-O'Neil: Yeah. I think one thing that helps in some ways and doesn't help in others is that people try and compare. It's probably a human trait, right? We go, ah, this is the same as when cloud first came out and X, Y, Z happened. Or, this is a bit like DSPM, plus a bit of CASB and a bit of something else. It's almost like they're trying to put it in a box.
Ashish Rajan: Yep.
Bryan Woolgar-O'Neil: And I don't think it fits in a box that well. Some of the things we talked about earlier, like the employees driving the adoption.
That's right. Yeah.
Ashish Rajan: Non-security teams.
Bryan Woolgar-O'Neil: So it's not like cloud transformation, which is top down. [00:31:00] That's probably about saving money, saving costs, and it's, we've got a two year digital transformation project, so they had enough time to plan and whatever.
With AI, it's bottoms up. Employees are using it because they find it useful, and they wanna adopt it. The adoption is happening so fast, but it's also changing so fast, because every main AI provider has a new thing every week. Claude Cowork this week; I don't know if something else came out last week.
And so the teams are thinking, I wanna manage this, but thinking about it in traditional ways, how things used to be. To me, as you mentioned, it's employee driven, it's fast and changing, and the AI itself is also not predictable.
So it's gonna give you different results. The way we see it is: get that initial visibility. [00:32:00] Understand where usage is, and not just at the level of, you are using these sites, but with the right level of context and intent, so you can really understand what parts of an organization are using AI for what reasons. Then add the controls in, but try to take an almost positive approach to control: just in time coaching and development of your employees, rather than, let's lock everything down and not allow anything. It's hard to pick the thing to lock down in the abstract; it's easier if you're looking at that intent, focusing on what someone's trying to do, and then
Ashish Rajan: Yep.
Bryan Woolgar-O'Neil: blocking for a particular reason. And you talked about policies earlier as well. I think the policy can come through within those coaching moments, rather than, here's a 30 page [00:33:00] policy document no one's gonna read, or, we've given you 30 minutes of training, why aren't you following our policy? Policy has a control element, but it can also be a positive experience in terms of how you adopt and how you push people to the right areas.
Ashish Rajan: Do you find that accountability is also evolving? I mean, earlier there was a way for me to find out that it was Ashish who triggered an action.
And we were talking about MCP on a local laptop, we were talking about shadow AI earlier. A lot of that traditionally used to be a corporate IT problem, and in the majority of cases not a cybersecurity problem. There was a head of IT or head of corporate IT who was looking after it.
We were handling different kinds of scenarios as, hey, that is Bryan's department, this is Ashish's department, we don't talk to each other. And I [00:34:00] think to what you said, it's not another cloud movement or another digital transformation that's gonna run for two years.
It's not top down; this is very much bottom up. Developers are already using it. If they're telling you they're not using it, I'm pretty sure they're using it personally and copy-pasting the code when they work from home or whatever. Do you find that accountability and the way responsibilities are split for security is also changing with AI, or needs to change with AI?
And does the organization need to change the way they approach security in general?
Bryan Woolgar-O'Neil: Yeah. We talk to a whole bunch of customers and prospects, and in general we always talk to the security team, because there's a security element to it. But there's a whole bunch of new job titles, and they're not all the same yet. I've seen ones where they brought in a head of AI development, and they bring them into the engineering org. [00:35:00]
Ashish Rajan: Oh, in the engineering org? So
Bryan Woolgar-O'Neil: it's that specific: they've been brought in to manage AI, but they're more like core
Ashish Rajan: development.
Bryan Woolgar-O'Neil: Yes. They're probably from an engineering background, maybe a data background, but they've been brought in there. That's kinda one thing. I think most organizations have an AI committee now.
Oh yeah. But I've seen that evolve a little bit, 'cause that was almost, we've got an AI committee and we've pulled in the CIO, someone from security, someone from legal, someone from compliance,
Ashish Rajan: Privacy as well is in there, yeah. Privacy.
Bryan Woolgar-O'Neil: But now it seems like there are more permanent people in those roles.
It's like, oh, this is our head of AI, or this is our AI compliance person. So it almost feels like that committee model has evolved into, we have a department now, a small department around AI, and it's interesting how that evolved. I'm not sure where it's going to go, but I think [00:36:00] what's happening in the wild is you've got some micro stuff at the department level, where you're like, we need to adopt AI.
Our engineering team needs to develop faster, we can't just have a legacy approach to it, so we're bringing people in there. And then at the top level of the business it's, how do we adopt AI in general as a company? So there are new roles being created, and they're all slightly different, with different takes on it.
So to me it still needs to converge a little bit, and we'll probably see what is actually working in those organizations. And yeah, it's interesting.
Ashish Rajan: Are you finding that, at that point, obviously there are multiple usages of AI. On the cloud side, people are using it to develop infrastructure as code.
Software developers are using it to produce code. Then there's the other side, where people are using it for productivity. Code creation is a form of productivity, but there are also email summaries. And there's a whole [00:37:00] other side that's probably not getting a lot of attention, which is the integration of AI into your existing products, and features being developed for it.
Yeah, and quote unquote AI capabilities being added to existing products. In the conversations you've had with customers and prospects, are there clear signs that the majority of people have AI in production in some way, shape, or form? Or is it more an internal productivity thing, rather than, hey, I have a feature on my application which is more than a chatbot?
'Cause to your point, that raises the risk bar at that point in time, where I'm no longer just producing some CloudFormation template for my AWS instance. Now this thing is in production, and Ashish, the random malicious person on the internet, could actually do a prompt injection kind of thing.
So in terms of the level of coverage you see between your customers and prospects, which [00:38:00] side of the fence is the usage of AI on?
Bryan Woolgar-O'Neil: I guess with the people we work with, our customer base is generally a thousand employees and up, so generally fairly large enterprises, up to a hundred thousand plus kind of organizations. I think they're more worried about the workforce side of it. For example, I was on a call last week and someone described what they're trying to do as 10x-ing every employee, which I thought was great; I'm gonna use that again. They're a huge company, and that's one of their key company objectives: 10x every employee via AI. I think that just shows how people are thinking about it.
For the people building AI into their products, it's probably two things. One is, I've heard a lot of people talking about, we've got AI [00:39:00] agents in production.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: And when you get into it, it turns out that's still just a developer running an agentic flow within Claude Code.
So there is something agentic going on there.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: Or there was probably a rush in the beginning for everyone to essentially whack a chatbot onto their product. That's the classic one. So they whack that chatbot on, and then they're like, oh, no, no, that chatbot got on the air first. So that's probably the predominant thing that we see.
And then there are some that are a bit more interesting, where they're thinking about how to embed AI within the guts of their products, and where they can go with that.
And I think those are probably less common. It's not that they're not interesting; they're just not quite our focus. I think the majority of people are more [00:40:00] worried about their general workforce.
Ashish Rajan: Yeah. And I do wanna put this layer on top as well. Obviously that's where the majority of use cases are landing for people.
That's where the CISO concerns are coming from, from an internal perspective; the internal threat matrix has changed because of it. When it comes to the other side, where applications are integrating with AI, that'll be a whole other conversation. For the workforce piece, CISOs clearly have written a security program before, and they're obviously building, or at least trying to fill, the gap for what that could look like in this AI first world, for the workforce.
I'm curious, for people who have just started building one because they've finally started unblocking, or people who already have a security program built: what are you finding as ways people can start approaching security for AI, where the majority use case is on the productivity side?
Bryan Woolgar-O'Neil: [00:41:00] I think it's probably similar to something I mentioned earlier. The way people are thinking about it is getting that visibility layer first: what tools are being used within my organization, who's using them, how much are they using them?
But also, what are they using them for? So trying to get a lower level view of where the AI adoption is at, who's into it, what the key use cases are that people are using it for; getting that holistic view across the organization of what AI usage looks like.
Then the next thing the people we're working with are trying to do is, alright, where do we wanna make some changes? There are some parts where they can make changes, like trying to push people to approved tools. So there are use cases where you're like, [00:42:00] you're using, I don't know, Copilot, but we've bought ChatGPT Enterprise; why aren't you using the enterprise thing?
Ashish Rajan: Oh yeah.
Bryan Woolgar-O'Neil: And then there's giving that education just in time. So if you do put sensitive data in, telling people that, and trying to give them the direction you want them to go down.
So that kind of intent based coaching. And then we've seen other things where we almost feed into other programs: if you have that AI team or AI committee, they're taking a bunch of our information and doing other activities internally. There are a lot of people doing AI training, or trying to identify AI champions within their business, so you can get adoption to happen through them.
Because we can help you understand what's going on, and in some cases we can help with controls. But if you're [00:43:00] thinking about adoption, like how do you push adoption within sales in a 4,000 person organization, you probably wanna find the five people who are doing it really well and then use them to help drive the adoption within that area.
Over the last two years, everyone's described to me that they've got some high level AI training: this is how you write a prompt, this is what ChatGPT is. And I think that's useful. But now it's like, oh, do we just run that same training again?
Or do you go into more of, this is how Jim is using AI
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: to generate better outbound emails, or to do follow ups, or to look through his account list and determine who to target next and how to target them, and all the rest of it. So I think it's about that AI team within a business, or the one within engineering I described, [00:44:00] and fueling them with enough information for them to go and help the AI adoption story, while providing the controls and the governance around that. And that's where, as you mentioned, it's not just a security concern, right? It's more a people concern, and I think they need to work together.
The ones who do the best job either have someone who sits over a few different departments and can leverage across different bits, or, like the one customer I'm thinking about, one of our bigger customers: we meet them every week, and they've got security, compliance, and legal joining every week. All three of them join, rather than just security, which I think is probably a sign that
they all benefit, and they've all got slightly different takes on what we're doing. But they're working together to think about what their big thing is. The [00:45:00] industry they're in is around advertising, and they're like, we really need to be at the forefront of AI.
So they're permissive, but they're also like, we need to really focus on making sure that we are at the front of this. If they don't adopt AI, it's almost an adopt or die situation for them. Not everyone thinks that way, but that is of course a common thing that we hear.
Ashish Rajan: I guess, how would you describe the maturity levels?
'Cause in my mind, for people who are listening or watching this, they may have a question around, okay, I have some kind of security for AI, I'm in that camp of, I feel comfortable with AI. Are there, I don't know, two or three stages that come to mind? Where, hey, if you're on stage one, that's your basic foundation level,
and stage two is, there's room to improve, but you're better than most, or whatever. Is there a three scale or five scale version that comes to mind for maturity of security [00:46:00] in this space, so people can actually think about what they're missing in their security programs?
Yeah,
Bryan Woolgar-O'Neil: I've not got a maturity model, but we did lots of those in my threat intelligence background, so I'll try and make one up on the spot. The top level is probably the sort of basic telemetry you might get out of a firewall or a SASE tool: you can tell at a very high level that there's AI usage going on, but probably not what it actually is.
Or maybe a bit more information, almost subscription level information. So you can't quite tell where your risks are, but you can get a high level sense of where your AI is at. The next level down would be where you have that understanding of usage and intent within AI, so you can understand things like the key use cases, and things like [00:47:00] the types of tools you're using and the versions. Are they all public tools? Are they tools that train on your data? So a more in-depth view of your inventory of AI usage. Like, wait, what are my engineers doing around MCP?
Are we using any? Are they remote? Are they local? If you imagine that level of understanding: at that point, even if you don't wanna put any controls on, you can start setting policies against it. You can start having human conversations, like,
guys, we seem to be doing this over here, I need to go and talk to the head of engineering to stop that, or
Ashish Rajan: yeah.
Bryan Woolgar-O'Neil: find out more about it. So given that level, the next one would be adding those controls in. And I probably think about it as access controls first.
So some basic provisions. A lot of people will say to us, we don't want people using [00:48:00] LLMs based in geographic regions that we might not like, that train on data, so we wanna control that. And we wanna protect the investment we've already made: most people have picked one or two AI tools and paid a bunch of money for an enterprise agreement.
Yeah. So they wanna then push people to those rather than other tools. So you've got that kinda like basic access controls or if you broaden that to like MCP, that's where it can in around, like, we've got these approved MCP servers, we've not approved these, or we're not gonna allow you to delete production.
We're not gonna allow to delete data from the Postgres MCP server, for example. And then I think below that's then where you start putting in below. That's when you start putting in the bottom level, which is that more intelligent coaching aspect. Mm. Coaching the AI and the users in, in real time around like whenever [00:49:00] some data or intent that they're trying to do isn't something that would benefit the organization.
You can step in inline and coach and inform them about why it's not appropriate. That to me is policy in real time. I think about it as real time training: something is just about to happen, we stop it happening, and then we train the employee in real time.
So it's that sort of knowledge share, and they tend to do that thing less over time. They might do it again, but not for a little while, and less often, because you're training them piece by piece. It's like bitesize training for individuals.
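A sketch of what the access controls tier could look like expressed as data: an approved-server list plus per-server denied destructive tools. The server and tool names echo Bryan's Jira, Git, and Postgres examples, but the policy structure itself is hypothetical.

```python
# Illustrative access-control tier: approved MCP servers plus per-server
# denied destructive tools. The structure and names are hypothetical.
APPROVED_SERVERS = {"github", "jira", "postgres"}

DENIED_TOOLS = {
    "jira": {"create_project"},                 # no new Jira projects
    "github": {"create_repository"},            # no new repos from the agent
    "postgres": {"delete_rows", "drop_table"},  # no destructive DB actions
}

def is_allowed(server: str, tool: str) -> tuple[bool, str]:
    """Two gates: is the server approved at all, and is this specific tool
    considered a destructive action on that server?"""
    if server not in APPROVED_SERVERS:
        return False, f"MCP server '{server}' is not on the approved list"
    if tool in DENIED_TOOLS.get(server, set()):
        return False, f"Tool '{tool}' on '{server}' is a destructive action"
    return True, "ok"

print(is_allowed("postgres", "delete_rows"))  # (False, ...)
print(is_allowed("github", "get_issue"))      # (True, 'ok')
```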
Ashish Rajan: I think you've kind of laid out the maturity framework, 'cause in my mind it aligns really well with the way I was thinking about this. Depending on where the priority for the organization is, people can pick from this, and depending on what layer they're at, they can go down that path as well.
Is [00:50:00] there a question, and this is future looking now, for the next couple of years, that you feel CISOs should be looking out for in this particular space for AI security? Especially from a workforce perspective, something you think people are not talking enough about?
Bryan Woolgar-O'Neil: Yeah. I think the thing that's gonna change: the workforce is adopting AI, but they're still at the stage of trying to do a task, and that task might take them five prompts into an AI tool. And the AI is evolving as well to make those easier.
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: For me, those tasks are gonna become, for want of a better word, more agentic, more like flows. Instead of it being one task where you have to feed the AI lots of information, I imagine they'll become more connected and more knowledgeable, and they'll be able to understand more about your [00:51:00] business via the different techniques that are coming out.
So you'll end up having more agentic processes going on within your environment, whether it's through a ChatGPT interface, or Gemini, or Copilot, whichever one you're using, or a more specialized tool like Claude Code, or even Claude Cowork, which came out this week. I think where those are going is probably the most interesting thing from a workforce perspective.
From a security and AI governance point of view, I think you're still back to what we're doing now, which is: what are the tasks people are trying to do?
Ashish Rajan: Yeah.
Bryan Woolgar-O'Neil: In what cases are those tasks things that we want people to do within a business?
And then how can you do them securely? I think the big thing that's just changing is that they're gonna become a bit more automated. So they're gonna go from a user is doing 10 prompt responses in chat. JT to the user might [00:52:00] do three, and then the AI's doing more. It might connect to more systems, it might have more knowledge.
So it, but you, you're probably still trying to do the same task. You're still trying to send that sales outbound campaign that everybody loves, or you're trying to, you're trying to fix a bug in code or you're trying to write a new feature. Yeah. In an engineering world. So I think they, but I think people are talking about that and that, so think they, but I think they're talking about it as like.
"I'm worried about agents." And then you're like, what are you actually worried about? What do you mean? To me, everything's an agent now. Even Slack bots are being rebranded as agents.
Ashish Rajan: And Salesforce is an agent. Everyone has an agent.
Bryan Woolgar-O'Neil: So "what should I do about agents?" is probably the thing I hear the most. And then when you break it down, it comes back to the same challenges: what are my employees doing? What are they using AI for within those agent [00:53:00] flows? What controls do I need in place so that my data doesn't go somewhere else, so that I don't get hit by an adversarial attack, so that we don't end up spamming all our customers because we've got almost a logic error within the AI, where the prompt isn't clear, the AI doesn't understand it, and it does something we don't want? All of these things come back to almost the same problems. They're just gonna get more of them, because the ability for AI to do them is gonna be faster and greater.
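[Editor's note: a rough Python sketch of the three worries Bryan lists for agent flows: data going somewhere else, adversarial input, and a runaway action like spamming customers. Every name, threshold, and heuristic here is an illustrative assumption, not a real product API.]

```python
ALLOWED_DESTINATIONS = {"crm.internal", "wiki.internal"}   # hypothetical allow-list
MAX_RECIPIENTS_PER_RUN = 25                                # hypothetical blast-radius cap
INJECTION_HINTS = ("ignore previous instructions", "disregard your system prompt")

def check_tool_call(tool: str, args: dict) -> tuple[bool, str]:
    # 1. Egress control: data should only flow to approved systems.
    if tool == "send_data" and args.get("destination") not in ALLOWED_DESTINATIONS:
        return False, f"blocked: {args.get('destination')!r} is not an approved destination"
    # 2. Adversarial input: a crude prompt-injection screen on retrieved content.
    content = str(args.get("content", "")).lower()
    if any(hint in content for hint in INJECTION_HINTS):
        return False, "blocked: retrieved content looks like a prompt injection"
    # 3. Blast radius: cap how many customers one automated run can touch.
    if tool == "send_email" and len(args.get("recipients", [])) > MAX_RECIPIENTS_PER_RUN:
        return False, "blocked: too many recipients for a single automated run"
    return True, "ok"

ok, why = check_tool_call("send_email", {"recipients": [f"c{i}@example.com" for i in range(100)]})
print(ok, why)  # False blocked: too many recipients for a single automated run
```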
Ashish Rajan: That's very well put together, man. I think it's gonna continue to evolve, and I'm gonna add one more thing as well: it's not just about evolving prompts, it's evolving models, cheaper models, expensive models.
I think the other day we were talking about SLMs as well, moving from LLMs to SLMs because they're cheaper. Once people start looking into this, they'll soon realize this world of AI is so much bigger than just worrying about ChatGPT and Claude.
Bryan Woolgar-O'Neil: Yeah, and we do entirely. [00:54:00] We use small language models because, we talked about coaching, we need to get a response back within about 200 milliseconds. And we want to look at intent, so we need a model that can understand the context and semantics of what's going in there.
And we also want the model to explain to the end user what was wrong, so it needs to understand that context and generate that response. It's easier to build a small language model to do one thing right, or a smaller subset, than one big model. Even Claude isn't just one big model doing everything when you use it.
Ashish Rajan: That's right.
Bryan Woolgar-O'Neil: But yeah, I think they're gonna end up having more micro models, or small language models, or whatever you wanna call them,
Ashish Rajan: Yeah, yeah.
Bryan Woolgar-O'Neil: Embedded within that. And organizations will end up having them around their different job functions. We talked about sales; there's a whole bunch of sales AI out there today.
Some of it is probably a thin [00:55:00] wrapper on an API. But I think what will win out is someone developing their own small language models, or constrained models, that are great at sales, and they'll be way better than something like OpenAI, because they're trained for that specific purpose. I think it will evolve in that direction.
Definitely.
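[Editor's note: a small Python illustration of the 200-millisecond budget Bryan mentions, assuming a narrow, single-purpose classifier. `small_intent_model` is a stub standing in for a real small language model; the timeout-and-fallback policy is an assumption, not Harmonic's design.]

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

LATENCY_BUDGET_S = 0.2  # the ~200 ms ceiling for inline coaching

def small_intent_model(prompt: str) -> str:
    """Stub for a compact model trained to do one thing: label intent."""
    time.sleep(0.01)  # pretend inference cost of a small model
    return "sharing_source_code" if "def " in prompt else "general_question"

def classify_within_budget(prompt: str, fail_open: bool = True) -> str:
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(small_intent_model, prompt)
        try:
            return future.result(timeout=LATENCY_BUDGET_S)
        except TimeoutError:
            # Too slow to coach in real time; policy decides the fallback.
            return "unknown" if fail_open else "blocked"

start = time.perf_counter()
print(classify_within_budget("def transfer_funds(): ..."),
      f"({(time.perf_counter() - start) * 1000:.0f} ms)")
```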
Ashish Rajan: And I think this is an interesting one, right? Because with SLMs, very few people in an organization get to the SLM stage to begin with. A lot of them are still at the LLM stage; even starting to talk about SLMs is like talking about 10 years in the future for a lot of companies.
So maybe just on that, do you see a future where a lot of organizations try to build SLMs for their governance policy, and something like Harmonic would just be enabling it? Like, you guys have the SLMs and they just give you the policy? Where do you see the future going in that direction?
I mean, obviously we can't predict the future. I'm gonna put that [00:56:00] caveat in: we're not actually predicting the future here. Just, where's your gut feel going?
Bryan Woolgar-O'Neil: Yeah. I talked about someone writing an MCP server taking four hours; creating your own small language model is not a quick job. So I think it will be hard for organizations to make them themselves.
Ashish Rajan: That's the person you need. That's where your head of AI, or what were you saying, your AI development person, comes in and they're like, oh,
Bryan Woolgar-O'Neil: SLMs. Yeah. So our small language models are built by an ML team, right?
Yeah. It's an ML team who do all the traditional things those teams do. They get data together, however they do that, whether it's collecting real data or synthesizing data; they label it; they work out the different features within a small language model and what the architecture looks like.
They train the model, they have to test it, they have to deploy it and manage it. So no, I don't think organizations will build them out of [00:57:00] the box. My view is that vendors like ourselves will generate them, and then they'll get more specialized. We do ones around sensitive data detection.
There'll be ones around trying to classify the intent behind usage. But that's because we're focused on AI adoption, controls and governance. If you're a company doing sales or marketing or whatnot, and there's a bunch of AI companies out there, the question is: how do you build models for those areas?
Coding is probably the one that's exploded, right? All the main providers are trying to do AI coding, but then you've got the people at Cursor who've written their own models and whatnot. So I imagine we'll see more of that, but for lots of different parts of the business, in like a spread
Ashish Rajan: of SLM providers everywhere in your organization.
Bryan Woolgar-O'Neil: Yeah, I think so. And you might still have [00:58:00] a general one, like a Claude, that covers code and engineering and maybe two or three other cases. But I think there's room for specialist providers to take the same inspiration from how LLMs work, but focus it on smaller problem sets or smaller business domains.
I think that's gonna happen.
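[Editor's note: the lifecycle Bryan walks through, gather data, label it, design and train, evaluate, then deploy and manage, sketched with a deliberately tiny scikit-learn text classifier rather than an actual small language model. The labelled examples are synthetic and purely illustrative.]

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Data collection and labelling (real teams use far more data, real or synthetic).
texts = [
    "here is our customer list with emails", "summarise this press release",
    "paste of the internal salary spreadsheet", "translate this blog post",
]
labels = ["sensitive", "benign", "sensitive", "benign"]

# 2. Architecture and training: one narrow task, one compact model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# 3. Evaluation would use a held-out set; this is only a sanity check.
print(model.predict(["attaching the payroll spreadsheet"]))

# 4. Deployment and management: version the artifact, monitor drift, retrain.
```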
Ashish Rajan: Interesting. Yeah. Dude, thanks for sharing this 'cause those are all the technical questions I had. I've got three fun questions for you. I'll probably try and go through 'em real quickly. Okay. Uh, first one being, what do you spend most time on when you are not trying to solve AI security problems?
Bryan Woolgar-O'Neil: Oh, that's a good question. I've got three kids, so between running a startup and having three kids, that's pretty much all the hours in the day at the minute.
Ashish Rajan: That sounds like a busy, busy schedule in there.
Bryan Woolgar-O'Neil: Yeah, definitely.
Ashish Rajan: I've got the second question, which is, what is something that you are proud of that is not on your social [00:59:00] media?
Bryan Woolgar-O'Neil: Ooh. Not on social media? I think everything's on my social media. Probably the weirdest thing was when Digital Shadows was starting, like 12, 13 years ago, whatever it was. I wasn't a founder there, because I decided I wanted to make a feature film. So he went off and started Digital Shadows, and I went and made a feature film.
And then he got some money, and I went and joined them afterwards.
Ashish Rajan: Wait, so did your feature film play in one of those event cinemas or somewhere?
Bryan Woolgar-O'Neil: We did. We ended up doing a cinema tour of the UK, about 15 or 16 dates.
Ashish Rajan: Oh wow. So you had the whole movie poster in your house somewhere as well?
Bryan Woolgar-O'Neil: Well, there is one; it is somewhere, but it's not in my room. I do have one that's signed by all the cast and whatnot, so yeah.
Ashish Rajan: Oh, wow. Wait, were you the producer, the director? What was that?
Bryan Woolgar-O'Neil: I did it all. I wrote, directed and produced it.
Ashish Rajan: [01:00:00] So written, directed, and produced by Bryan. And behind the camera, Bryan; acting, Bryan.
Bryan Woolgar-O'Neil: Yeah I did do cameo, as you might expect, but yeah, that was, oh, there you go. But that's probably on social media, but like from a long time ago. So fair. You find, I
Ashish Rajan: look forward to, but what's the name of the movie, if you don't mind me asking? Uh,
Bryan Woolgar-O'Neil: It's called [inaudible]. YouTube will have the trailer somewhere.
Oh, right.
Ashish Rajan: Okay, I'll put the trailer in the mix as well. Final question: what's your favorite cuisine or restaurant that you can share with us?
Bryan Woolgar-O'Neil: Maybe not a cuisine or a restaurant, but my favorite thing is pork dumplings.
Ashish Rajan: Oh, with the chili sauce?
Bryan Woolgar-O'Neil: Yeah. Well, you get the ones that have the soup in them. Are they your favorite?
Ashish Rajan: Oh, yeah, yeah. But is that the one with the chili soup kind of thing, or the ones like, the xiao long bao, I think, I can't remember, I think they're called xiao long bao, but yeah, I know what you mean, the soup-filled dumplings thing. Yeah. Also pork dumplings.
Bryan Woolgar-O'Neil: Yeah. But if any, if anything, if I'm ever right and they've got pork dumplings on the [01:01:00] menu, even if I'm not that hungry, I'll get that as a side order as well. So like, it's just like I have to do it. If they're on there, have to try them, which Oh,
Ashish Rajan: Awesome, awesome.
Bryan Woolgar-O'Neil: Unless it's a proper dumpling restaurant and there's like a million to choose from. Then you're like, what?
Ashish Rajan: Yeah, I mean, just gimme the pork dumplings, the regular ones. Yeah. Fair, man. Those are good though. Those are all the fun questions I had, in terms of finding out about Harmonic and what you guys do, connecting with you, and talking about how you guys are going down the path of being AI secure, internally or externally, however it may be.
Where can people find you and connect with you? What's your LinkedIn?
Bryan Woolgar-O'Neil: Yeah, so obviously our website, Harmonic Security, has got a whole bunch of resources and information on it. And I'm on LinkedIn; I'm happy for people to reach out and start discussions with me on there.
And on our website there's kind of a guided walkthrough of our product. If you're someone like me who wants to see it before you talk to a salesperson, that's the [01:02:00] area we wanted to focus on, rather than just a "get a demo" button.
But that is all there on the website as well.
Ashish Rajan: I appreciate that. I'll put those links in the show notes as well. But thank you so much for coming on the show. This was a great conversation. Thank you for being upfront about what you guys are doing, and hearing how you're adopting AI in a developer friendly way is always good to know.
So thank you so much for sharing that as well.
Bryan Woolgar-O'Neil: Yeah, thanks a lot. It's been great fun.
Ashish Rajan: Thank you for listening or watching this episode of Cloud Security Podcast. This was brought to you by Techriot.io. If you are enjoying episodes on cloud security, you can find more episodes like these on Cloud Security Podcast TV, our website, or on social media platforms like YouTube, LinkedIn, Apple Podcasts, and Spotify.
In case you are interested in learning about AI security as well, do check out our sister podcast, AI Security Podcast, which is available on YouTube, LinkedIn, Spotify, and Apple as well, where we talk to other CISOs and practitioners about what's latest in the world of AI security. Finally, if you're after a newsletter that gives you top news and insights from all the experts we talk to at Cloud Security Podcast,
you can check that out at [01:03:00] cloudsecuritynewsletter.com. I'll see you in the next episode.