Fixing Version Matching with a Risk-Based Approach to SCA in the AI Era


"Every time I hear SCA, I'm like, isn't that a solved problem?". In this episode, Roy Gottlieb, co-founder of Hopper Security, shared how this is one of the biggest misconceptions in application security today. He makes the case that the 20-year-old method of version matching for open source vulnerabilities is obsolete, unsustainable, and costing companies a fortune in wasted developer time.The conversation reveals a critical business insight from a real-world customer: 5-8% of their entire R&D organization's time was being spent patching versions from SCA alerts . We spoke about why we "need to stop discussing versions and we need to start discussing the actual risk". The discussion outlines a new, risk-based approach that moves beyond noisy version matching to analyze if a vulnerable function is actually reachable by the application logic, a method that can eliminate over 90% of the noise.

Questions asked:
00:00 Introduction
02:22 Who is Roy Gottlieb?
03:20 What is Application Security in 2025?
05:50 Is SCA (Software Composition Analysis) a Solved Problem?
08:45 The Core Problem: Why We Need to Stop Discussing Versions & Start Discussing Risk
10:20 Beyond Reachability: A New Method for Risk Analysis
13:20 The Blind Spot of Traditional SCA
16:15 The Hidden Cost: How SCA Consumes 5-8% of R&D Time
19:15 The Internal Library Blind Spot: A Huge Undiscovered Risk
24:45 The Impact of "Vibe Coding" on the Future of AppSec
28:15 How MCP Servers & AI Coding Assistants Increase Risk
30:20 Are AppSec Teams Adopting AI for Defense?
34:15 How CISOs Can Separate Signal from Noise in the AppSec Market
36:10 Best-of-Breed vs. Platform in the Modern AppSec Stack
40:00 Final Questions: Family, Friends, and Italian Food

Roy Gottlieb: [00:00:00] Every time I hear SCA I'm like, isn't that a solved problem? The first thing that needs to happen in our community is we need to stop discussing versions and we need to start discussing the actual risk. We are no longer in a position where we can just keep on matching versions, because we are exposed to an exponentially growing number of versions in our code bases.

What are some of the blind spots people have that they do not realize? Uh, one of our customers approached us and they're like, we've learned that 5 to 8% of their time is being spent on technical security issues. Most of which come from SCA. We're quite literally spending 5 to 8% of our R&D organization's time patching versions.

Is this really the right way? There's an interesting question of what visibility and enforcement you have to really answer the question of which application is using an older version of that internally developed library.

Ashish Rajan: Has AI also opened the doors for more people to be AppSec people?

Roy Gottlieb: Our audience typically doesn't buy the "our AI model says this", that never flies.

You need to be as cautious as you would be in basically any other AI use [00:01:00] case.

Ashish Rajan: Application security has been spoken about for so long, and even today, now that AI is part of it, open source security has not been solved. And by that what I mean is the traditional way of approaching it, checking whether a vulnerable library is present because you have validated that against a database, is not the right way to approach SCA, or software composition analysis, which basically means you're trying to identify: is there an open source library that I'm using which is vulnerable?

We spoke about how enterprises use libraries today, the proliferation of AI and the production of more code than ever, and where some of the blind spots are for people trying to wrap their heads around this particular problem of a massive amount of code coming in, which is not just your Stack Overflow code, but code vibe coded along the way.

If you are someone who's looking at the intersection of application security and AI, this is the episode for you. And if you know someone who's working on the same problem, definitely share the episode with them as well. As always, if you have been here for the second or third time, I would really appreciate it if you take a quick second to hit the subscribe or follow button on Cloud Security [00:02:00] Podcast.

If you are watching this on YouTube or LinkedIn, or listening to this on Apple or Spotify, I really appreciate the support. I hope you enjoy this episode and I'll talk to you soon. Hello, welcome to another episode of Cloud Security Podcast. I've got Roy with me. Hey man. Thanks for coming on the show.

Hey, thank you for having me. Just to kind of set some context, could you share a bit about yourself, your background?

Roy Gottlieb: Yeah, absolutely. So, uh, Roy, uh, born and raised in Israel, uh, like many Israeli founders, started my professional career with the technological unit of the IDF. Very quickly understood that I am not the most technical person in the room. That's one of these privileges that you get, uh, when you're serving in these units. Yeah. Very quickly migrated from, you know, the security side, offensive and defensive, to the investing side for almost a decade. Alright. And then at some point the imposter syndrome kind of killed me and I'm like, okay, let's go back to, you know, building stuff that matters.

You, you didn't wanna miss out on the fun. Exactly. Fair. I'm like, at some point, uh, I'll take my own target and get it.

Ashish Rajan: Yeah. Fair. I mean, it's, it's not a bad thing. Uh, 'cause you picked up the [00:03:00] AppSec world as well. Uh, actually maybe a good point to start is seems like AI has disrupted everything.

So how do you describe application security in 2025?

Roy Gottlieb: So I, I don't think the application security definition has changed at all. Mm-hmm. I think there are just additional cases that we need to cater for, either with existing tools or with additional capabilities. So historically, or like five years ago, AppSec was the classic stack of SAST, SCA, container scanning, secret scanning, your infrastructure as code.

Yeah. And then a little bit of bug bounty, CI/CD security, et cetera. Now we kind of see these components getting more and more important because there's just an enormous amount of code generated by additional bots. Mm-hmm. And AI agents. Yeah. And in addition, you have to cater for specific cases of AI, whether it's incorporating AI in the code and what are the issues that come with that.

Yeah. [00:04:00] And also being able to just simply understand: what AI components am I exposed to? Where? What's the risk in that? Et cetera. Yeah. So it's just an evolution of the same thing: it amplifies the historical stack and also introduces a new layer of AI-specific problems.

Ashish Rajan: So what's different this time?

I guess your point of the evolution? 'Cause clearly it's, uh, I don't know, maybe 20-plus years of application security. Many people have gone through various iterations of it. We still talk about open source today. And we still talk about open source in the context of LLM models, we talk about open source from libraries being used in code.

There's so many variations of it as well. How has that open source kind of world evolved in AppSec specifically?

Roy Gottlieb: So that, that's a very good question, because the open source security side has mostly been reliant on open source vulnerability databases, the NVDs, the OSV.devs, the GHSAs, whatever. Yeah. And now if you're specifically asking about how AI is affecting this.

[00:05:00] So it just amplifies a problem with the existing ones that we have. We can discuss that later: the level of noise, the lack of evidence, actionability, developer pushback, all the stuff that we've been kind of seeing for quite some time. Yeah, and in addition, there's this question of: now I have open source packages, but also AI models.

Who maps the risks in models? How do I inventory which models I am exposed to? What are the known and unknown risks in each model? Mm-hmm. What is the remediation path? For an open source package, we know. What do I do if I have an AI-related risk? Yeah. Um, so this is kind of how we see this.

Transitioning to AI specifically, somebody needs to now map the known and unknown risks of using AI in, you know, software development processes. What's the ground truth? Where do I get the data from? And then how do I find this in my stack?

Ashish Rajan: Uh, so a lot of people may already have and maybe it's the fault of being marketed AppSec for so long.

And yeah, I'm probably in that category as well. Every time I hear [00:06:00] SCA I'm like, isn't that a solved problem? Like, I know that, hey, to your point, I match it. Someone is matching it to a database of threat intel information about libraries. So how, why is it different in this world of AI?

Because if it's the same code being used and otherwise, why isn't it the same?

Roy Gottlieb: So there, there are multiple differences. The first one is, we live with that sentiment that SCA has been commoditized. And for the vast majority, in full transparency, it has been commoditized. Mm-hmm. Uh, we see very, very advanced accounts that quite literally tell us:

Why would I pay a premium for a commercial product at six, seven figures a year? Yeah. Where my team is technically savvy enough to take the open source projects, run them internally, and build that system in a very cost-effective way. Yeah. So how does AI generally affect it? The main problem is that we have a data availability problem with the data that is available about open source risks today to begin with.

Oh, [00:07:00] okay. Uh, a decade ago, when we added open source vulnerabilities to the NVDs of the world, yeah, we chose the same pattern of granularity of data, the structure, the format. We were used to flagging risk on open source packages by a very simple process called version matching. Okay.

Maybe in addition, you have an unstructured textual description. Essentially, it doesn't really matter your vulnerability database of choice, whether that's NVD, OSV, the GHSAs, or the commercial ones: we are structurally looking for versions. And the first thing that AI did is amplify the problem with that method.

We are no longer in a position where we can just keep on matching versions, because we are exposed to an exponentially growing number of versions in our code bases.
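The version matching Roy describes can be sketched in a few lines. This is a hypothetical illustration, not any real scanner's implementation: the advisory data and helper names are made up for the example. The point is that the advisory carries only a package name and an affected version range, so the scanner flags a dependency whether or not the flaw is ever exercised by the application.

```python
# Sketch of classic SCA version matching (hypothetical advisory data).
# The scanner knows nothing about *where* the flaw lives in the package --
# it only compares the pinned version against an affected range.

def parse(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

# Advisory shaped like an NVD/OSV entry: package plus affected range only.
ADVISORY = {
    "package": "log4j-core",
    "introduced": "2.0.0",
    "fixed": "2.15.0",  # CVE-2021-44228 was fixed in 2.15.0
}

def is_flagged(package: str, version: str) -> bool:
    if package != ADVISORY["package"]:
        return False
    v = parse(version)
    return parse(ADVISORY["introduced"]) <= v < parse(ADVISORY["fixed"])

print(is_flagged("log4j-core", "2.14.1"))  # True -- flagged, exploitable or not
print(is_flagged("log4j-core", "2.15.0"))  # False
```

Every match becomes a ticket, which is exactly why the alert volume grows with the number of versions in the code base rather than with actual risk.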

Ashish Rajan: I was gonna say, 'cause I think it's a, 'cause there's obviously different kinds of open source as well.

There's the nested version, there is the, hey, what I can see by the name of its presence. So does that need to [00:08:00] change as well, in terms of, it's not just about whether, I don't know, my really-bad-library.js file is in a repository somewhere, and it is apparently in the NVD somewhere that, hey, it's vulnerable.

So that's kind of where most, in most knowledge of AppSec kind of stops with SCA.

Roy Gottlieb: Exactly. So the, the thing is, and I, I hate this example 'cause it's a boring one, but I mean, it's common enough. You can pick a better example. Go for it. No, no, no, no. It's just common enough so people kind of relate.

Uh, yeah. If you take the Log4j library as an example, yeah, it's 60,000 lines of code. These 60,000 lines of code are exposing 7,000 functions, ballpark. Okay. Yeah. The vulnerability itself is in the lookup function, within the JndiManager class, within Log4j Core. You could find that information: you can read the fix commit, you can compare the fixed version to the vulnerable version.

You can read exploit examples, you can read blog posts. Yeah. But the vulnerability database does not structurally tell us where [00:09:00] exactly, and again, that's across all of the open source vulnerabilities. So the first thing that needs to happen in our community is we need to stop discussing versions, and we need to start discussing the actual exploit, the actual risk, from every package.

And AI is very powerful and has a lot of promise in kind of helping us build that knowledge base, to map everything retroactively but also future-looking. Uh, we have seen some advancements, so some of the Go ecosystem, for example, is already richer in context: some of the disclosures are pinpointing specific functions.

But generally speaking, yeah, this is not where we are right now. So that's like one big problem with the data that is available: in theory, it's out there. Yeah. But we don't structurally share it, because historically we didn't want to help the attackers. Right.

We didn't wanna tell them, hey, here's the line of code, here's the function, please do this to hack this. Um, so that's changed now. We see, like, the Go database as an example. Yeah. [00:10:00] And we are building that knowledge base internally. So that's one of the things we are doing at Hopper: we're leveraging our vulnerability research background from the army, and the availability of AI, to kind of streamline and automate vulnerability research at scale, to build that knowledge base of which functions are actually impacted for every CVE in the world, to be able to really pinpoint risks in a much more granular way.

That's one element that we see.

Ashish Rajan: So when you say pinpoint vulnerability, 'cause I think most people can think of open source SCA, is that the same as the file being there, or is it the reachability of it? When you say context, what are you referring to, I guess?

Roy Gottlieb: so I'll try and avoid the term reachability.

Okay. Uh, although we use it as well, and I'll explain why. Because reachability is a broad term that could be used in many different ways. I think James Berthoty had a great, uh, you know, five types; probably there are like 10 types of different reachability. You have package level, function level, runtime, internet facing.

Yeah. Like so many different types. And basically what we do [00:11:00] is we say, let's first understand what the real issue is and where it resides within the package. Yeah. And is there even a remotely theoretical way that the application logic would get to that flow? You can later build on additional layers of prioritization.

Yeah. What is the actual flow in runtime? What are the compensating controls that I have? Uh, you know, can I add compensating controls, et cetera, et cetera. But at the very, very basic level, we're shifting the method of risk analysis. Yeah. From a package-version basis to: where exactly is the function, the method, the line of code, and the way I have architected my application, the APIs that I'm using, the function calls that I'm triggering, the logic that I've deployed, is it potentially, remotely, exposing me to that vulnerable part? And what we've seen is that essentially that makes sense, because almost by design you're increasing the resolution of the scan by an order of magnitude.

So it doesn't [00:12:00] ask you to do much. It's still the same, you know, read-only experience: no agent, no runtime, no CI/CD steps. Nothing heavy. Yeah. Um, you're just getting higher-fidelity results that off the bat eliminate over 90 to 95% of the noise.
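A minimal sketch of the function-level approach Roy outlines, under stated assumptions: all the names here (the knowledge-base entries, the toy call graph, the `reachable` helper) are hypothetical, and a real analysis would derive the call graph from the actual source code. The idea is that the advisory is enriched with the vulnerable function, and the question becomes whether any application entry point can even theoretically reach it.

```python
# Sketch of function-level risk analysis (all names hypothetical).
# Step 1: a knowledge base maps each CVE to the function that actually
# carries the flaw, not just to a version range.
# Step 2: walk the application's static call graph to see whether an
# entry point can even theoretically reach that function.
from collections import deque

KNOWLEDGE_BASE = {
    "CVE-2021-44228": ("log4j-core", "JndiManager.lookup"),
}

# Toy static call graph: caller -> callees.
CALL_GRAPH = {
    "app.handle_request": ["app.log_event"],
    "app.log_event": ["log4j.Logger.info"],
    "log4j.Logger.info": ["log4j.PatternLayout.format"],  # never hits JNDI here
    "app.admin_tool": ["log4j.Logger.error"],
    "log4j.Logger.error": ["JndiManager.lookup"],          # vulnerable path
}

def reachable(entry: str, target: str) -> bool:
    """Breadth-first search from an entry point to the vulnerable function."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(CALL_GRAPH.get(fn, []))
    return False

_, vuln_fn = KNOWLEDGE_BASE["CVE-2021-44228"]
print(reachable("app.handle_request", vuln_fn))  # False -> noise
print(reachable("app.admin_tool", vuln_fn))      # True  -> real risk
```

Both entry points use the same "vulnerable" library version, so a version-matching scanner would flag both; the function-level view keeps only the one with a theoretical path to the flaw, which is where the 90%+ noise reduction comes from.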

Ashish Rajan: Interesting, and interesting that you didn't want to use the word reachability as well, because I find that a lot of people kind of link to it, because, to what you said, the examples are quite complex sometimes, and you gave a good example as well.

You find that, uh, with open source libraries and pulling them into an API world, is there a misconception or blind spot that AppSec people have today about this particular industry? Um, and obviously it could be the fact that 20-plus years of just doing AppSec has made us very much attuned to a certain kind of SCA, and you're saying that's probably not the right way moving forward.

So for people who are trying to get their head around this, uh, whether it's through AI or not through AI, [00:13:00] what do you see as the change, or what do you see are the blind spots for people who are still using the legacy way of doing it? And maybe legacy is a loose term for it, but the previous versions of it. It's been

Roy Gottlieb: 20 years, so you have legacy, and you have semi-modern, and you have modern.

And so we, we have competition as well. So it's, it's,

Ashish Rajan: Oh yeah, there you go. So I wonder, what are some of the blind spots? 'Cause people are clearly at different stages of that. Like, some people may still be on a legacy product because they have a contract or whatever. Some may use semi-modern, some may use modern.

So in each one of those scenarios, what is a blind spot people have that they do not realize? Hey, I'm a little behind the eight ball, uh, if I'm sticking with the legacy version, I need to be either semi-modern or modern.

Roy Gottlieb: So I, I think it kind of starts from what are you trying to achieve?

Mm-hmm. You know, security leaders tend to always stick with, like, people, process, technology. Yeah. Yeah. The technology should serve the process. Yeah. We've seen companies that have the appetite and the capability and regulatory requirements to just patch everything. You know, these still exist.

Yeah. Recently we've been fortunate enough that even FedRAMP, I dunno if you followed the recent RFC, but even the most rigorous regulatory body is kind of getting to that stage of acknowledgment that we can't keep on patching everything, that's just not feasible. Yeah. But we have seen, uh, that segment.

It's not huge, but I've actually recently written, uh, a little bit of a rant on, oh, right, okay, on, uh, LinkedIn about that. But what I'm getting at is: the question is, what are you trying to achieve? What's your process? What's your policy? Are you trying to reduce risk? How mindful are you of the engineering velocity?

Um, you know, what are the regulatory requirements? What do your customers want? Yeah. And demand. And the main blind spot that we see now with customers is just that knowledge of: am I actually reducing risk by patching more? How do I know if what I am patching is the right thing to patch?

And another interesting thing is, we see more and more security leaders getting to the board level. So they're [00:15:00] now more of a business enabler. Uh, they have to do a lot more with less. Yeah. Uh, they've been elevated to be a business enabler. Yeah. And we see more and more of them being kind of a part of the discussion of:

what is the burden that we are putting on our engineering organization, and are we actually getting value out of all of that? You wanna call it taxation, technical debt, security debt, it doesn't really matter. But it's my job as a C-level at a company to make sure that we take that very expensive, precious resource called our R&D organization,

yeah, typically very expensive talent, yeah, and are we really utilizing it in an effective way, or are we just throwing them at a pile of tickets where we have zero visibility into the value that fixing them would deliver to the business?

Ashish Rajan: And talking about business deliverables, I guess cost and financial outcomes have always been used as a way to either get a security [00:16:00] product or probably help the board understand why it's important. Has that changed as well in this world now, in terms of the cost associated with a potential breach because of open source?

Roy Gottlieb: So I think the costs, what we see in terms of cost, and this is, uh, like an education phase we see in the market:

I think customers are getting a lot more aware that the cost is not just the cost of the SKU, or the line item in the budget. There is a total cost of ownership element. 'Cause you can buy a tool, but then you need your team to spend time on it, spending whatever amount of hours. Yeah. And then if you're really, you know, a business enabler, then you need to be mindful of what is that taxation element you're putting on the engineering organization.

So just to give you one example of a very extreme level of cost that we've seen: uh, one of our customers approached us and they're like, you know what? We've surveyed our engineering organization. Okay. And we've learned that 5 to 8% of their time is being spent on [00:17:00] technical security issues.

Most of which come from SCA, because, you know, we're highly regulated. Yeah. So we're quite literally spending 5 to 8% of our R&D organization's time patching versions. Is this really the right way? You know, is there a way to take that precious resource that is there to deliver value, to generate revenue, to build products, to release features, to provide uptime, to improve customer experience?

And can we do this in a bit of a different way, to actually deliver more with what we have? Yeah. Because if we keep on going, you know, in a year or two we'll be at the 15% mark, and then in five years we'll be at the 20, 25% mark. And you know what? Like, I love open source, man, but I'm not in the business of maintaining it.

That's,

Ashish Rajan: yeah.

Roy Gottlieb: Fair. So,

Ashish Rajan: Wait, what do people do today then? I guess obviously you guys are working towards challenging the norm, which has been the norm for a long time. What are people doing in the [00:18:00] legacy world for internal libraries and otherwise? What's been the path for them to triage this?

Roy Gottlieb: So what do you refer to as internal libraries?

Ashish Rajan: Uh, as in, like, the internal open source libraries. So every organization would have, like, a, hey, we use a library which is, let's just say, a bank's internal banking library. I have my own special libraries that I use for Ashish to log into his bank account. So that's how I know that, yes, it's Ashish, blah, blah, blah.

And that's the same library used across the organization. And that's usually where, at least in the DevSecOps program that I ran, we found that a lot of the libraries that we were creating for ourselves internally had a lot of open source libraries in them. And we found that some of them had the right license, some of them did not have the right license.

And I can go down the whole rabbit hole with that. But anyway, uh, long story short, I failed that program miserably. I didn't lose my job for a second. Oh, you've learned?

Roy Gottlieb: Yeah.

Ashish Rajan: Yeah. I learned. But I definitely found that, um, our approach to open source libraries was a lot more naive. Yeah. What are [00:19:00] people doing about that?

Because I don't wanna answer the question myself, but I'm curious: when you walk into a conversation with a potential customer or prospect, uh, what are they doing today?

Roy Gottlieb: Okay, now, now I understand the question. Yeah. And I have a very interesting story for you. The larger accounts, yeah, we typically see them heavily relying on these internal software artifact stores, yeah, to store reproducible, internally developed libraries. Yep. Typically that would be, uh, you know, Azure DevOps or JFrog Artifactory or Sonatype Nexus. So it's like where you store these internal ones, yeah.

Uh, typically there would be, like, com.acme.logging, com.acme.crypto, security, yeah, yep, uh, messaging, analytics, et cetera. Yeah. And we've identified, back to the blind spot case, yeah: in one case we've seen a critical, function-level reachable finding, and we're like, how did this happen?

And what we've identified is an interesting blind spot. 'Cause you ask how things are operating today. Yeah, yeah, yeah. So let's say [00:20:00] you have one of these classic SCAs that is a manifest-level scanner. Yeah. So the repo would be clean. Your platform team does an amazing job. They patch everything.

Yeah. They bump it. It's great. That internal library repo, yeah, is well maintained. Yep. And they've published it to that internal artifact-storing system. Okay. And you would see an updated artifact in that artifact store, JFrog Artifactory, let's say. Yeah. And even if you scan the Artifactory, you would see, yeah, I have a clean version.

Yeah. Now there's an interesting question of what visibility and enforcement you have to really answer the question of which application is using an older version of that internally developed library, still proliferating risk to the application layer. Because you know what? I've failed to see a single organization that deletes the older versions from the artifact store. Not a single one. Yeah. "Let's see what breaks, and then they'll call us", you know? Not a single time. No. And part of the interesting thing is, [00:21:00] you know, being able to actually understand how an application is being built, what the components are, how code is being bundled into internal libraries and applications and services.

Yeah. Yeah. And you know, that whole world of internal dependencies and interdependencies is something that we've learned we need to cover very, very quickly, 'cause otherwise it's a huge blind spot. Yeah. Although, yeah, you know, we're talking about reducing the amount of, uh, noise. Yeah.

In the category. Yeah. It's not an easy discussion with a customer until you know it, but you also have a million more
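The internal-library blind spot Roy describes, applications still pinning an old build of a library the platform team has long since patched, can be sketched roughly like this. All names and data here are hypothetical; a real check would query the Artifactory or Nexus API for published versions and parse each application's actual manifest.

```python
# Sketch: which applications still consume an old build of an internal
# library? (Hypothetical data -- a real check would query the artifact
# store's API and the applications' dependency manifests.)
LATEST_PUBLISHED = {"com.acme.crypto": "3.2.1"}

# Versions each application's manifest actually pins.
APP_MANIFESTS = {
    "payments-service": {"com.acme.crypto": "3.2.1"},
    "login-service":    {"com.acme.crypto": "2.0.5"},  # stale -> blind spot
}

def stale_consumers(library: str) -> list[str]:
    """Return apps pinning anything other than the latest published build."""
    latest = LATEST_PUBLISHED[library]
    return [app for app, deps in APP_MANIFESTS.items()
            if deps.get(library) not in (None, latest)]

print(stale_consumers("com.acme.crypto"))  # ['login-service']
```

Scanning only the library's repo or the artifact store shows the clean latest build; the risk lives in the consumers that never bumped, which is exactly the gap this kind of cross-reference surfaces.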

Ashish Rajan: Actually, yeah. 'Cause to your point, their focus is the latest version of the library, not the older versions. Yeah. Because to your point, when we talk about patching, we're like, oh yeah, I'm using the latest version of the OS.

But what does that mean from a library perspective? 'Cause the person who did version one of it probably left ages ago as well, so you don't even know what to do there.

Roy Gottlieb: Yep. But, but in theory, you would have expected the internal library use case to be the easiest patch. You have an internal library that the team has [00:22:00] published.

Yeah. You just need to ask the application team to move to, you know, the new crypto library, yeah, yeah, you know, from 2.0.5 to 3.2.1. Yeah. Uh, but they also know that they need to see what's gonna break, and, you know, what's my motivation to fix it? And that's kind of the blind spot that we see quite commonly around that.

And the issue with it, where that meets some of our customers, is that if you scan on the runtime side, yeah, you may actually be able to see that you're using a vulnerable version of a library, but then no matter where you look, you're like, it's a phantom one. You know, if you do, like, a dependency tree that doesn't analyze that internal dependency, everything would look clean.

Yeah. Like, the repo was clean, the Artifactory is clean. Triaging this becomes a very, very big pain. Yeah. Uh, so being able to pinpoint not just, you know, the function and the source code, but [00:23:00] also which vulnerability originates from where, yeah, is also a very interesting ask that we've learned from our customers.
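The "phantom" finding Roy mentions can be illustrated with a toy transitive resolver; everything here is hypothetical data. A manifest-level scan of the app sees only the internal library, but resolving through it surfaces the vulnerable package underneath, and, importantly, records which parent it originates from, which is what makes triage possible.

```python
# Sketch of the phantom internal-dependency problem: the app's manifest
# only names com.acme.logging, but the old build of that internal library
# bundles a vulnerable log4j-core underneath it. (Hypothetical data.)
DEPENDENCIES = {
    "app":                         [("com.acme.logging", "1.4.0")],
    ("com.acme.logging", "1.4.0"): [("log4j-core", "2.14.1")],  # vulnerable
    ("com.acme.logging", "2.0.0"): [("log4j-core", "2.17.1")],  # fixed build
}

def flatten(node, out=None):
    """Resolve the full transitive tree, recording each dep's provenance."""
    out = out if out is not None else {}
    for dep in DEPENDENCIES.get(node, []):
        out[dep] = node  # remember which parent pulled this in
        flatten(dep, out)
    return out

resolved = flatten("app")
# The vulnerable version surfaces, along with *where it originates from*:
print(resolved[("log4j-core", "2.14.1")])  # ('com.acme.logging', '1.4.0')
```

A scanner that stops at the direct dependency list never visits the second level, which is why the repo and the artifact store both "look clean" while runtime shows the vulnerable version loaded.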

Interesting man.

Ashish Rajan: Because I, I definitely find, wait, so does that mean the way people approach application security programs should change as well then? At least the SCA component of it.

Roy Gottlieb: I think that, again, when you build your AppSec program, you need to kind of prioritize and we see like the classic four pillars that we've discussed.

Yeah. But I do feel like you need to really understand that if you're now going to incorporate the classic basic legacy tools, that would probably put a lot of toil and false positives on your organization. We do see modern customers heavily relying on risk-based SCA. It could be, again, depending on what they need, their tech stack, their business, how they deploy software, et cetera, et cetera.

Yeah.

Ashish Rajan: But with the context as well of whether this is in use and where the function is used.

Roy Gottlieb: [00:24:00] Absolutely. Basically, you know, the whole category is going down that path. A very small portion of the customers that we see also have, like, a runtime component, although it's covering a little less.

Yeah. Yeah. Um, but every single customer we speak with, you know, at some point in their renewal, like, this has to be part of the SCA.

Ashish Rajan: So another term that's been popular in the AppSec world is the whole vibe coding thing as well.

Roy Gottlieb: Oh, that's a good one.

Ashish Rajan: Yeah. So what's your take on it?

Some people believe it would not take off. I've got my own thoughts on this, but what's your take, and what do you see as, uh, how it would impact the SCA world specifically?

Roy Gottlieb: I have failed to see a single instance in history where fighting a technology resulted in that technology not proliferating.

I'm a huge believer of, you know, agentic AI, and vibe coding is just basically saying, you know, now more people can develop, or developers can develop much [00:25:00] faster. And you know, that's, so I do feel we're gonna see a lot more code written by a lot more people.

Ashish Rajan: Yeah.

Potentially not even developers as well.

Roy Gottlieb: Not every single one of them is going to be a developer. Absolutely not. Yeah. And also, junior developers are just gonna generate a lot more code, 'cause, you know, you just need to have a basic understanding. Yeah. And what we see is that that would probably just keep on challenging that paradigm of how we run risk analysis today.

If we can't already keep up, then the question is, how are we doing things a bit differently to prepare for the scale and the amount of code that these vibe coding people and tools are going to generate? In addition, that means we're probably going to see, similarly to the shift-left approach, which in all honesty we are huge believers of, but I don't think it took off that widely.

Okay. It's a mentality thing. Shift left would actually happen when you motivate developers to generate more secure code. Yeah. And for most of my [00:26:00] friends, and my wife, by the way, is a software engineer, yeah, their compensation and performance have zero to do with, you know, the security of the code they ship.

Yeah. That's right. Zero. That's right. Not a single one. Yeah. Yeah. You know, shift left is a general mentality that is right, 'cause it's much more cost-effective to solve, uh, issues early on. Yeah. But it'll actually hit critical mass when developers are incentivized to generate more secure code. Specifically for vibe coding, I do think we have a huge opportunity to incorporate more and more security-aware capabilities into the modern development lifecycle, whether that's via MCP servers, which, great shirt, by the way.

Thank you. Or, uh, the classic IDE plugins. You know, the classic flow of empowering developers is just gonna be even more important [00:27:00] now than ever. Yeah, because they're not really aware, and they don't really care about security. They care about releasing features; uh, you know, they have the requirements they need to fulfill.

Yeah. And, and they just ship software. That's what they do.

Ashish Rajan: And I, it's funny, I think one, I feel one of the reasons why my DevSecOps program failed was kind of like that as well, because there was no true compensation for a developer to even care. Like I did not build a security champions program.

There was none of that stuff to even consider the fact that, hey, why would there be a motivation for a developer to solve it? Apart from my naive security brain thinking, well, everyone clearly wants to remove vulnerabilities. I mean, it sounds like a simple idea: you work for a company, they pay you to make good software, so you make good software, but you also make sure it's secure as well.

So vulnerabilities should be solved everywhere. Anyway, a very naive way of thinking. But you find that now people are vibe coding and MCP servers are pretty prevalent as well. How is the whole MCP server thing impacting this? Is there an impact of that as well, in terms of how people are

Developing and what risk it exposes [00:28:00] people to.

Roy Gottlieb: So yeah, first of all, there are AI-specific and now MCP-specific questions. Yeah, yeah.

Ashish Rajan: This is, for me, it's more around like the, the coding part of it rather than the MCP server part of it. But

Roy Gottlieb: The more you rely on different MCP servers and coding agents, for example, the more you're prone to hallucinated packages and, you know, generating insecure code, almost by design.

Yeah. Um, so that element is very, very critical and we see that proliferating very, very quickly to pretty much every account. Mm-hmm. Um, and as you probably know, there have already been multiple attacks where attackers are kind of, let me try and pinpoint what is the most likely package name to be hallucinated so I can register it.

Yeah, yeah. Um, so the more we use these tools, and the more access we provide different platforms and tools to our source code via these MCP servers, yeah, the more our code becomes bloated and the less control we have into what we are actually generating. Yeah. And that just calls for more guardrails at the coding level.

[00:29:00] Yeah. But also even after you build it, where you store it, at the repo. The repo's gonna have a lot more code that a lot fewer people reviewed and actually wrote.
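Roy's point about hallucinated packages can be made concrete with a simple registry check: before anything an AI assistant proposes gets installed, verify the name actually exists. The sketch below is illustrative and not from the episode; the `exists` callable is a stand-in you would back with the real PyPI JSON API (an HTTP 404 from `https://pypi.org/pypi/<name>/json` means the name is unregistered) or with your internal registry.

```python
# Sketch: flag dependencies that don't exist in a package registry,
# a cheap guard against "hallucinated" (slopsquattable) package names.
from typing import Callable

def parse_requirements(text: str) -> list[str]:
    """Extract bare package names from requirements.txt-style lines."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version specifiers like ==, >=, <=, ~=, >, <
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.append(line.strip())
    return names

def find_unknown_packages(text: str, exists: Callable[[str], bool]) -> list[str]:
    """Return dependency names the registry check does not recognise."""
    return [name for name in parse_requirements(text) if not exists(name)]
```

Wiring a check like this into CI and failing the build on any unknown name closes the window an attacker has to register a commonly hallucinated package first.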

Ashish Rajan: Yeah. So you actually feel there would be an overwhelming amount of code coming out for developers to review, which also means there are a lot of open source libraries being included that

may be real, or may just be AI hallucinations as well. Have you seen that as a real case happening across the board? There is a pattern in the industry where it's not just me, a developer, producing code by copy-pasting from Stack Overflow.

Now I'm using Copilot or Cursor or whatever other coding tool to vibe code my way through, building internal libraries, or, to your point, upgrading my versions because my security team has been on my back for a while. Are you seeing a lot of AppSec teams [00:30:00] adopting a lot of AI themselves as well?

Roy Gottlieb: I think they're still at the experimental phase. Okay. Um, some of the issues are widespread and some aren't. I don't think hallucinated packages are that common yet. Okay. It's an edge case that we're starting to learn about. Yeah. But what we are seeing is AppSec programs trying to heavily leverage AI, how can we use it to streamline processes, because we're going to have to support

a lot more code. Yeah. And we're not getting more budget for this. Yeah. We're not going to increase our headcount by five x. Yeah. But the amount of code that our teams are going to generate is going to grow exponentially. Yeah.

Ashish Rajan: Do you feel like AI has also opened the doors for more people to be AppSec people?

And the reason I say that is because I started in IAM and infrastructure, but I was never an AppSec person. And I always used to think that maybe my DevSecOps program would've been better if I had an AppSec person in the team. The saying seems to be that with AI, [00:31:00] a cloud person could also become an AppSec person.

A red team person could become an AppSec person if they want to as well; the skills should be transferable. And where I'm going with this is that CISOs obviously are making a choice here. I understand that with open source libraries it's not just about a vulnerability being present, but also, is it reachable, is it accessible where it's being used or not?

So how do you see this impacting the teams being created for AppSec, with the new way AppSec is being done? Should the team be different as well, or does the current team cater for the AI world too?

Roy Gottlieb: So I think very few teams really understand the intrinsic challenges that AI-specific things bring in.

Um, but what we do see is they have to very quickly keep up and learn what that risk means. I'm not a huge fan of just throwing the whole team at everything, although we do see some teams bringing that approach to the table. Yeah. But I don't think it's AppSec specific.

Uh, [00:32:00] I have met multiple teams, and you've probably come across these, where "we are all security engineers." Yeah. And you know, it's a romantic approach and it has merits to it, because we all need to understand the whole stack. Yeah. Yeah. Because, you know, essentially the code that we build is shipped to an environment and sits behind a network firewall,

for example. Yeah. And so somebody needs that end-to-end contextual analysis, but it's not very common and it's very, very difficult. Yeah. I do think AI is very good at quickly educating our team members about what everybody's doing. What does the day-to-day look like? What are the general risks that everybody's having to deal with?

But it's not going to replace deep understanding of deep problems. I'll give you a very interesting example from an internal process we had. We basically built an infrastructure where we took a vulnerability researcher, a human that has been doing this for like 20 years, in the Army,

um, very [00:33:00] capable of uncovering zero days, whatever, and then put AI in their hands, okay, and saw how quickly they can triage a vulnerability and understand, again, the function, et cetera, et cetera. And we compared that to just asking a question to a model. Yeah. Okay. Unfortunately for us, we're not yet in a position where we can just throw a question at a model like, hey, tell me what's the vulnerable function.

This has a lot of issues, and not only are models not yet capable of fully triaging and automating. They'll get there, I'm not saying they won't. But there is an AppSec-specific thing, which is we have a technical audience, typically software engineers, yeah, that likes deterministic findings.

Yeah. That are reproducible. That are evidence-based. Yeah. Our audience typically doesn't buy "our AI model says this." That never flies. Yeah. So [00:34:00] we see a lot of potential in augmenting humans using AI. Yeah. But you need to be cautious, as you would be in basically any other AI use case.

Ashish Rajan: So for CISOs or AppSec leaders who are trying to make a call on whether to uplift their AppSec program for AI, or who are looking ahead, how can they separate the signal from the noise today, in this 2025 version of AppSec?

Roy Gottlieb: So again, the first thing they need to do is they need to come up with a program that suits their business.

Yeah. And really understand that there is a modern way of identifying risk. Mm-hmm. Risk is no longer just inventorying theoretical vulnerabilities in a code base, but: what is actually potentially affecting me? What is the business criticality of this? What's the blast radius of this, et cetera. And they also need to understand that there is a

huge wave of code coming our way that humans [00:35:00] have very, very little understanding of. Historically, we had full ownership of every line of code, 'cause somebody wrote it and then another person typically did the code review. Yeah. These days we see debug and triage times getting longer and longer because the developers are less aware

of the internal decisions and the architectural decisions and why things are there. Yeah. So I do feel like CISOs have to take into account that the issues we're seeing now are going to be amplified 10, 20, a hundred times over the next two to three years with the adoption of AI.

Yeah. So we need to find the right tools to empower our teams, automate triaging and investigation, find the balance between AI-everything and the deterministic output our engineers can actually digest and work with. Mm-hmm. Uh, be even more mindful of the tax that we're going to put [00:36:00] on our development organization, because essentially we're here to make money, not patch software.

Yeah, yeah. And really find that balance between getting ready for the AI revolution, uplifting our tool stack and staff to prepare for what's coming, but still enabling the business, and not panicking and just blocking everything.
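The risk factors Roy lists, reachability, business criticality, blast radius, can be folded into a simple contextual score. The weights below are arbitrary illustrations, not an established formula; the point is only that an unreachable finding should be discounted far below its raw CVSS number when deciding what the development organization actually patches.

```python
# Sketch: rank findings by contextual risk rather than raw CVSS alone.
# The multipliers are illustrative placeholders, not calibrated values.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float            # base severity, 0-10
    reachable: bool        # is the vulnerable function on a call path?
    internet_facing: bool  # rough proxy for blast radius / exposure
    business_critical: bool

def risk_score(f: Finding) -> float:
    """Contextual score: severity heavily discounted when unreachable."""
    score = f.cvss
    score *= 1.0 if f.reachable else 0.1   # unreachable is mostly theoretical
    score *= 1.5 if f.internet_facing else 1.0
    score *= 1.5 if f.business_critical else 1.0
    return score

def prioritise(findings: list[Finding]) -> list[Finding]:
    """Highest contextual risk first."""
    return sorted(findings, key=risk_score, reverse=True)
```

Under this kind of scoring, a reachable medium-severity flaw in an internet-facing, business-critical service outranks an unreachable critical one, which is exactly the reordering that cuts the noise developers see.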

Ashish Rajan: Because you came from the VC world as well, and we were talking about this before we started the recording: the whole best of breed versus one-platform-rules-them-all debate.

There are so many variations of it as well. How do you see the AppSec space on that, in terms of best of breed? Is that still relevant today in the AI world, if you were to put your VC hat back on?

Roy Gottlieb: So I think it's more relevant than ever, but more from an ICP and market segmentation standpoint.

Mm-hmm. There will always be a subset of the market that needs the best of suite. Yeah. The thing is, if you look at a category as big as AppSec, you know, tens of billions of dollars have been invested in it over the last 20 years. Yeah. SAST, SCA, secret scanning, infrastructure as code, container scanning, bug bounty programs, CI/CD security.

If you really think your vendor is going to deliver the 80/20 on each component, then you're probably hallucinating. But you just need to understand the trade-offs. Okay. Yep. If you just need your bases covered, you're ticking the box, the main motivation is compliance, and you're okay with something subpar but "I need everything,

immediately, today," yeah, then I honestly believe best of suite could be the right decision for you. Nonetheless, that comes at a higher and higher tax rate every day. Okay. Especially in the key components, yeah: SCA, container scanning, SAST, et cetera. Yeah. Where we do see the market maturing,

and we also see the industry opening up, is that even the best-of-suite platforms are opening more and more connectors, kind of what we see with the ASPM category. Yeah. 'Cause broad platforms understand their customers' needs. Yeah. And they're trying to be a little less opinionated, they have to be, so you can plug in your own scanner if you think it's much better than theirs.

Ashish Rajan: Uh, interesting. Yeah, I guess to your point, it's about all the context you can get to make the right call for the vulnerability in front of you.

Roy Gottlieb: Yeah. And then obviously the reason we think there's a huge opportunity here is that we're not optimizing a small portion of the AppSec program.

Yeah. We're actually, and you know, we've by now been baked off against pretty much every vendor in the category. We're happy to be baked off against anyone: play with it for a week, call us back. You're not optimizing by a margin. We fully understand that to introduce a best of breed tool to a category that is already noisy,

yeah, yeah, as we've said many times in this conversation, the value has to be an order of magnitude more. Yeah. Or 20 times more

Ashish Rajan: just to stand out. Yeah.

Roy Gottlieb: Just to justify a switch. Yeah. Uh, it wouldn't make sense for us to come to you and say, you know what, we are 20 times better than your off-the-shelf,

wrapped open source project, or classic SCA. It doesn't make sense to you? No. But if you're optimizing the vast majority of your AppSec program with a tool that is 20, 25 times more accurate, yeah, and that your engineers actually use and take action on, yeah, this is where we see the value.

And from an investor's and a founder's perspective, I do think it's a little naive when founders think, you know, I'm gonna take 10, 20 million in capital injection and I'm gonna take on the whole AppSec category, like it was done in the cloud security market. Yeah. Yeah.

But that requires second-timers, an endless amount of capital, the best of everything. Yeah. Yeah. I think the startup opportunity is to become industry experts in the most painful problems of our customers, solving them 20, 30 times better than anyone else, and to be focused on a strategy that would make sense for a startup company.

Yeah. That is typically not a best of [00:40:00] suite.

Ashish Rajan: Interesting. Yeah. I love the perspective, man. Those are the technical questions I had. I've got three fun questions for you as well.

Roy Gottlieb: Okay. I'm not a fun person. Ah,

Ashish Rajan: Well, we'll find out. What is something that you spend time on when you're not trying to solve the AppSec problems of the world?

Roy Gottlieb: Family.

Ashish Rajan: Oh, perfect. Wow. A one-line, sharp answer. And you thought you're not fun.

Roy Gottlieb: but basically the, the way I see it is I have the places where I, I enjoy what I do, but I have the place where I exhaust energy. Yeah. And I have the places where I recharge.

Ashish Rajan: Fair

Roy Gottlieb: Family is where I recharge.

Ashish Rajan: That's awesome.

Ashish Rajan: And, uh, maybe as a related question as well, second one: what is something that you are proud of that is not on your social media?

Roy Gottlieb: It's a weird one, but I am very proud of the group of friends that I've been carrying around for, you know, almost forty years of my life. I have a few from kindergarten, a few from elementary school, all the way to the Army, and I think

sticking to great random people you meet along the way is an [00:41:00] underestimated skill. Yeah. And we try and implement that in the company culture. We're very selective in who we bring in. Yeah. But we haven't had churn. And, you know, you know how sought after great

security engineers, reverse engineers, and software developers are.

Ashish Rajan: Yes, they have other options as well to go to. Yeah,

Roy Gottlieb: They choose us every day. Um, they can easily make more money elsewhere. Sticking to the right people is kind of what I think my LinkedIn profile would not highlight.

Ashish Rajan: Awesome. And, uh, final question: what is your favorite cuisine or restaurant you can share with us?

Roy Gottlieb: I'm Italian. Like, my mother's Italian. It's not the healthiest one. Um, but my favorite cuisine is home, again. Oh, fair. Yes. Uh, Italian: lasagna, pasta, the unhealthy stuff. You know, I mean, that recharges the fun part. Yeah, yeah.

Ashish Rajan: Yeah, it's all comfort food, I think. That's how I'd describe it. Yeah. It's comforting.

Roy Gottlieb: Yeah. So for me, comforting is Italian.

Ashish Rajan: Yeah. Perfect. And where can people learn more about what Hopper Security is doing and what you guys are working on? [00:42:00]

Roy Gottlieb: So basically, you know, we have the website. Yeah. Like, uh, most companies do.

Yeah. It's hopper.security. And I'm more than happy, you know, we're still young and ambitious enough that I actually want to get to know people who want to get to know us. Yeah. So I just encourage a direct conversation. We're not that distant; hit us up directly on, you know, LinkedIn, Roy Gottlieb.

If you're into the more technical stuff, we'll pull in our CTO. We're still very much in evangelizing mode and we wanna solve this problem for the world. Yeah. Whether we're the right fit for you or not, we're always here to offer advice on what we see in the market.

Ashish Rajan: Yeah.

That's awesome, man. Thank you for sharing that. I'll put that in the links as well. But, uh, thank you for coming on the show.

Roy Gottlieb: Absolutely. Thank you for having me. Yeah, that's been great.

Ashish Rajan: Yeah, likewise, man. And, uh, thank you everyone. Thank you so much for listening and watching this episode of Cloud Security Podcast.

If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv. We are also publishing these episodes on social media, so you can definitely find them there. Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also [00:43:00] have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way up to what is CNAPP, or whatever new acronym comes out tomorrow.

Thank you so much for supporting, listening and watching. I'll see you in the next episode.
