Threat detection is often limited to popular cloud services, so what's happening with all the "not so popular or commonly known" cloud services in your environment? We speak to Suresh Vasudevan, CEO of Sysdig, about the challenges companies typically find in this space and what the approach to threat detection should be. If you believe you are looking at threats from all your cloud services, you might want to hear this episode to confirm you actually are.
Thank you to our episode sponsors, Vanta and Sysdig. You can find out more about Sysdig at sysdig.com/cloudsecuritypodcast and check out Vanta at vanta.com/cloud.
Questions asked: 00:00 Introduction 03:41 A bit about Suresh 05:14 How was threat detection done traditionally? 07:33 How does threat detection translate to cloud? 08:47 Uncommon services attack vector examples 11:00 Uncommon services explained 11:31 Problems with threat detection in cloud 16:53 How to approach prioritisation? 19:48 Bridging Cloud and Applications
Ashish Rajan: [00:00:00] Have you ever noticed how detection primarily talks about some of the common services that you will find on the cloud provider, like for example, you would have things around EC2 instances, virtual machines in Azure, but when you talk about machine learning or AI or LLMs being built on cloud, suddenly there's a lot less information available.
It's not really anyone's fault; it's just the nature of the industry. Conversations are easily searchable when you are looking for common services, but those of us trying to work with uncommon or not so popular services on a cloud service provider have to figure out the approach ourselves.
So in this conversation, I have Suresh Vasudevan, the CEO of Sysdig. This is a sponsored episode by Sysdig, but we focus primarily on a value-driven conversation, as always. We spoke about the approach behind Falco, their open source project that has over 6,000 stars and is used by millions of people for real-time threat detection [00:01:00] in both cloud and hybrid worlds, how they have worked on that, and how they have used those principles to make threat detection better as a community and as a space.
We spoke about what traditional threat detection is and what it should be like in the cloud space. What should you be asking yourself when you think about threat detection in cloud? If you are someone who runs a SOC team, sometimes you're just told, hey, take care of the cloud. What does that really mean?
What kind of capability should you have in the team that you can use to separate the signal from the noise? Also, should you get all the logs, or just the logs that you care about? That, and a lot more around the threat detection conversation, both for common as well as not so common services in the cloud context.
I hope you enjoy this episode. If you know someone who's trying to learn about threat detection and what it should be like, or if you're just looking at threat detection in general as a concept in the cloud context, this is definitely a great episode for you, or share it with a colleague who's trying to learn about threat detection in this space, so they get to learn about it as well. If you have been listening to us for a long time and have found value in the content we create, we would really appreciate it if, when [00:02:00] you listen to this on iTunes or Spotify, you leave us a review or rating, or if you watch on YouTube or LinkedIn, give us a follow or subscribe. It also helps us become more visible to people who can benefit from learning about cloud security.
And when you share an episode of Cloud Security Podcast with a friend or colleague who's trying to learn about cloud security, it definitely helps us spread the word. We appreciate you helping us grow the cloud security community and helping everyone learn together about the different topics of cloud security and how best practices should be applied in the cloud security context. I hope you enjoy this episode with Suresh, and I hope you had a great Halloween over the past few days, for those who were celebrating. I will see you next episode. Peace.
We interrupt this episode for a message from our episode sponsor.
Growing a business? That likely means more tools, third-party vendors, and data sharing, aka way more risk. Vanta's market-leading trust management platform brings GRC and security efforts together, integrates information from multiple systems, and reduces risk to your business and your brand, all without the need for additional staffing.
And by automating up to 90 percent of the work for SOC 2, ISO 27001 and more, [00:03:00] you will be able to focus on strategy and security, not maintaining compliance. Join 5,000 fast-growing companies that leverage Vanta to manage risk and prove security in real time. Cloud Security Podcast listeners get $1,000 off Vanta.
Just go to vanta.com/cloud to claim your discount. That's V-A-N-T-A dot com slash C-L-O-U-D. Now back to the episode.
Welcome to another episode of Cloud Security Podcast. We're talking about threat detection for some common services and not so common services, and I've got a really good person here for it. Hey Suresh, thanks for coming on the show.
Suresh Vasudevan: Pleasure. Thank you for having me Ashish.
Ashish Rajan: I'm looking forward to this conversation. I think threat detection and incident response are some of the topics that are not spoken about enough in the cloud security space, so I was really looking forward to this. But for people who may not know who Suresh is, could you share a bit about yourself and your background?
Suresh Vasudevan: Yeah, surely. I am Suresh Vasudevan, CEO of Sysdig, and I'll touch a bit on Sysdig. I started my professional career at McKinsey way back in the nineties. I moved to Silicon Valley in the late nineties and [00:04:00] joined NetApp, at the time a leader in the data storage space, and spent almost 10 years at NetApp, eventually running all of the product and R&D organization there.
I left in 2008 and since then have mostly been at younger companies, startups, where I've joined as CEO. Sysdig happens to be the third company that I'm CEO of. The first one after NetApp was a company called Omneon, in the video server and video streaming space, which was acquired by Harmonic. The last company prior to Sysdig was Nimble Storage; I joined when it was a very young company, pre-product. We grew quickly, went public within three years of launching the product, and ultimately became part of HP. I joined Sysdig a little over five years ago and was fascinated by the intersection of cloud and security.
And that really was what I fundamentally believed then and continue to believe: that securing applications in the cloud is likely the largest cybersecurity segment. Sysdig, [00:05:00] very briefly, got started when we invented the technology for threat detection in modern cloud native applications, and that became an open source project called Falco, which today has over a hundred million downloads and is used by all the major cloud providers.
Ashish Rajan: So maybe, to your point about Falco being that open source product which is already very popular in the space, how was threat detection done traditionally, out of curiosity? To define it for people who are probably hearing it for the first time, how do you see it being done traditionally?
Suresh Vasudevan: Yeah, so I think there are many aspects to threat detection, but let me start with some of the fundamental principles. If you think about SecOps as the function that's responsible for defending your enterprise against attackers, broadly speaking, there's a phase where you're thinking about how do I detect something anomalous taking place that could represent a threat. The next phase tends to be how do I triage all of these anomalies, because out of hundreds, thousands, tens of thousands of events, some are real incidents, real threats, and then there's a lot of false positives. So [00:06:00] what tools do I have for investigation and rapid triage? And then perhaps the most important phase: how do I respond when I see something that truly represents a threat unfolding in my environment, right?
So that's the broad detect, investigate, respond, if I had to simplify it down. And if you think about the traditional SOC or SecOps teams and where the focus of threat detection has traditionally been, it's typically involved a few fundamental platforms or tools that have been the cornerstone.
The first one, recognizing that endpoints often represent the origin of most threats: how do I have tooling in place so that I'm able to look at signals coming from my EDR tools? The second one, understanding that in a traditional, often on-prem environment, I want to use logs in order to either triage or connect the dots.
How do I have a SIEM platform in place so I'm aggregating logs from various sources? And then the third one, as I'm creating response actions: how do I automate those response actions, sometimes within [00:07:00] my SOAR solutions, if you will. So I think about it as: what data sources did you typically rely on in your SecOps teams?
The data sources were often EDR tooling, network tooling, and logs from a variety of application sources. Those were the data sources, and the techniques often involved SIEM-based tools, if you will, and writing a lot of heuristic rules: SOC analysts that are experts at essentially writing queries, looking at your structured data and extracting information that might allow you to triage.
That's how I would broadly describe the traditional approach.
Ashish Rajan: And how does that translate to a cloud world as well?
Suresh Vasudevan: Yeah, so I think it's useful to step back and think about what breaks down as you move into the cloud, given the traditional approach you had. Before we even go into what you need, if you think about the things that break down, the first thing I would say, maybe the most important problem in modern cloud environments and modern cloud native applications, [00:08:00] is lack of visibility. Do the traditional tools I've had give me enough visibility? Simple questions like: do I have a good understanding of all the assets and all the resources that I have in my cloud? If I have hundreds of thousands of services, do I know how those services are composed into applications?
So when I see an event, I can map it to which application is impacted. The first one, I would say, is visibility at the resource and service level. The second thing I would say is visibility at the identity layer. Do I know who has access to what resources in the cloud? And that "who" can be a person, or, more often than not, a service that's accessing another service.
So non-human identities. And that's a second problem that's very significant in the cloud.
Ashish Rajan: So a lot of people can talk about some of the popular services, but obviously we are talking about uncommon services. How would you define an uncommon service?
Suresh Vasudevan: So let me motivate the term uncommon services, and [00:09:00] I'm going to use a threat operation that our threat research team recently uncovered to illustrate what an uncommon service might look like.
I'm not even sure it's a formal term, but this example will illustrate it. This was an operation that we dubbed AmberSquid. In essence, if I summarize that particular attack vector: the ultimate exploit was a crypto miner, and the insertion point, I believe, was a software vulnerability.
And so that's how the attackers managed to get into the cloud environment. Now, thinking about how people get into your cloud environment, I generally see three vectors that seem to dominate: software vulnerabilities, misconfigurations in your cloud, and identity exploits, right? Those are often the three roads that lead into your cloud estate. So in this instance, they managed to get into the cloud. If you think about the exploit as a miner exploit, what you're typically looking for as an attacker is: how do I provision compute resources in order to use those resources to launch a miner?[00:10:00]
That could be a Kubernetes pod that I'm launching. It could be an EC2 instance or a Google compute instance that I'm spawning. And then I'm running my crypto miner on those instances. What was really interesting in this operation was that the provisioning of compute was obfuscated by using some other services where compute was a byproduct rather than the direct target.
So it makes detection a little bit more challenging. Let me give you an example. If I were to use SageMaker or Azure Machine Learning to provision a machine learning model, underpinning that model is a compute resource. And so what the attackers did in this case was launch a machine learning model.
And as part of spooling up that model, the underlying compute also had a miner being launched as part of the machine learning model. So it was hidden underneath that. I can give you other examples, right? If I use Azure DevOps or Google Cloud Build to launch a CI/CD instance, as part of creating that CI/CD instance, I'm using the underlying compute resource to also launch [00:11:00] a miner.
So that's the way I think about uncommon services in the context of threat detection: how do I obfuscate my ultimate goal by doing something that masquerades as normal activity, where it's much harder to detect because it's very nuanced and hidden underneath another layer, if you will. That's the way I think about it, Ashish.
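To make the pattern concrete, here is a minimal sketch, not from the episode and not Sysdig's implementation, of flagging compute provisioned through such indirect services in simplified CloudTrail-style events. The service and event name pairs are illustrative examples of the idea, not a vetted detection list.

```python
# Sketch: flag compute provisioned through "uncommon" services, where
# compute is a byproduct rather than the direct target. The pairs below
# are illustrative, not an authoritative detection list.
UNCOMMON_COMPUTE_EVENTS = {
    ("sagemaker.amazonaws.com", "CreateTrainingJob"),
    ("sagemaker.amazonaws.com", "CreateNotebookInstance"),
    ("codebuild.amazonaws.com", "StartBuild"),
}

def flag_indirect_compute(event: dict) -> bool:
    """True if a CloudTrail-style event provisions compute indirectly."""
    return (event.get("eventSource"), event.get("eventName")) in UNCOMMON_COMPUTE_EVENTS

sample = {
    "eventSource": "sagemaker.amazonaws.com",
    "eventName": "CreateTrainingJob",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/dev"},
}
print(flag_indirect_compute(sample))  # True
```

A real detection would also weigh who is making the call and from where; the point is simply that the rule set has to name these less common services explicitly, or the provisioning slips by as normal activity.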
Ashish Rajan: I appreciate you sharing the nuance of how some of the uncommon services work. I think it's very timely with the ML and AI conversation, and with any compute service like CodeBuild or Cloud Build. In terms of the traditional approach you mentioned initially, where we had the detection part and the response part almost isolated to a large extent, does that approach not work in the cloud context?
Suresh Vasudevan: At the high level of abstraction I described it at, it's not a different process for the cloud, but let's peel the onion a little and first understand the challenges of implementing that in the context of the cloud. I'm going to call out three in particular. The [00:12:00] first, maybe the most significant difference between an on-prem world or a traditional application landscape and a cloud landscape, is the speed of attacks.
What you often see is that once I have an insertion point as an attacker and I get into the cloud, a lot of what I do next is reconnaissance, but it's not someone sitting at a keyboard dreaming up the next way to find out what's available in your cloud. You're launching a series of automations. The very thing that makes the cloud powerful for users is also what lets attackers do things faster, right? So you're launching a bunch of scripts to say: let me quickly enumerate all of the roles I have in this AWS account and all of the associated permissions. If I see a permission I can exploit, let me quickly hijack that role to escalate my privileges and then do more damage.
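The scripted enumeration described above tends to show up on the defender's side as a burst of read-only API calls from a single identity. As a toy illustration, assuming simplified (timestamp, identity, event name) tuples rather than real CloudTrail records, and with made-up thresholds:

```python
from collections import defaultdict

RECON_PREFIXES = ("List", "Describe", "Get")  # read-only call families
WINDOW_SECONDS = 120    # illustrative bucket size
BURST_THRESHOLD = 25    # illustrative calls-per-bucket threshold

def find_recon_bursts(events):
    """events: iterable of (timestamp_seconds, identity_arn, event_name).
    Returns identities whose read-only calls exceed the threshold in any
    fixed time bucket (a crude approximation of a sliding window)."""
    buckets = defaultdict(int)
    for ts, arn, name in events:
        if name.startswith(RECON_PREFIXES):
            buckets[(arn, ts // WINDOW_SECONDS)] += 1
    return {arn for (arn, _), count in buckets.items() if count >= BURST_THRESHOLD}
```

A scripted attacker firing thirty `ListRoles`-style calls in two minutes stands out against an admin making a handful of the same calls in a day; the hard part in practice is tuning the window and threshold against legitimate automation.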
So there's a very short window of time in which the reconnaissance takes place and then the exploit is launched. That's the first problem, I would say: the [00:13:00] time from detection to response is far shorter. In fact, we put out a fairly significant piece of research talking about 10 minutes being the average time from insertion to exploit.
And if you think about that as the amount of time that SecOps teams have to detect, triage and respond, that's the first problem the cloud presents. The second one I want to call out is the data sources, right? In order to do detection, triage and response, what do I need in the cloud?
Let's go back to uncommon services. In order to know that someone is hijacking a CI/CD process to launch an exploit, I need to be able to get logs. I might have a dozen different regions in AWS, a dozen different regions in Google. I need to be able to get audit logs for every user activity and every cloud activity in all of those regions.
I need to correlate that cloud activity and user activity with workload actions. Can I see inside a container that someone's launched malware in? Can I see [00:14:00] inside an EC2 machine? Can I see inside a serverless environment like Fargate? And how do I correlate that with activity at the cloud layer? How do I connect the dots and aggregate these different types of data?
That's the other big problem you have. So that's what I would say: first is the speed of attack. The second is that the data sources are much larger and much more varied. And third, because a lot of the actions masquerade as normal admin activity, the only way to truly detect an attack is to be able to connect the dots across many things.
And that, I think, is where SecOps teams will face challenges as they think about taking their existing approach and extrapolating it to the cloud.
Ashish Rajan: 10 minutes seems like a really short time, and it's not enough, because I was thinking about services like CloudTrail and others, which take about 15 minutes to even let you know that, hey, by the way, Ashish just started doing something malicious.
It's really interesting, what you called out there.
Suresh Vasudevan: So this is a great example, right? In fact, you said 15 minutes. If you think about CloudTrail and the traditional approach our industry uses for [00:15:00] processing logs, it's an unstructured log.
I'm going to ingest the log into some repository, whether it's an XDR repository or a SIEM, and then I'm essentially going through the equivalent of an ETL process to convert it into a structured data set that I can then query on top of. You're talking sometimes hours, but definitely a substantial amount of time.
And then there's the concern that some set of detection engineers has to decide what queries to write on top of that underlying log in order to detect malicious activity. How do you do that when most of the activity is normal activity? So the first problem is that the traditional method takes hours, and you're lucky if it takes only 15 minutes. On the other hand, think about the same CloudTrail log source. Instead of the traditional process of store, structure, query, recognize that every log event is an API call. Just as those log events are being written into an S3 bucket, I can also [00:16:00] use an Amazon service like EventBridge to propagate each log event in real time. And that's now seconds, and I can then do the detection without storing and querying after the fact. This is entirely what Falco does. Falco started off as a streaming detection engine that sits and watches activity within containers. We've since expanded Falco to be able to look at logs, but instead of acting as a detection engine that first stores data and then processes it after the fact, Falco does streaming detection on logs as they're being appended to. So fundamentally, without necessarily going deep into Falco, the paradigm has to shift. You have to ask: are SIEMs good for investigation but not for detection? Do you need streaming detection in place? This is arguably one of the comments we would make as you think about what is necessary for cloud.
That's a really important distinction, I think.
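As a rough sketch of the store-then-query versus streaming contrast, here is what an ingest-time evaluation step could look like for events delivered one at a time (say, by an EventBridge rule invoking a function). The rule shapes below are an invented simplification, not Falco's actual rule format:

```python
# Illustrative streaming-style rules evaluated per event at ingest time,
# rather than after storing and querying. Not Falco's rule syntax.
RULES = [
    {"name": "cloudtrail_logging_disabled",
     "match": {"eventSource": "cloudtrail.amazonaws.com",
               "eventName": "StopLogging"}},
    {"name": "console_login_without_mfa",
     "match": {"eventName": "ConsoleLogin",
               "additionalEventData.MFAUsed": "No"}},
]

def get_path(event: dict, dotted: str):
    """Resolve a dotted key path inside a nested event dict."""
    current = event
    for part in dotted.split("."):
        if not isinstance(current, dict):
            return None
        current = current.get(part)
    return current

def evaluate(event: dict) -> list:
    """Return the names of rules this single event matches, at ingest time."""
    return [rule["name"] for rule in RULES
            if all(get_path(event, key) == value
                   for key, value in rule["match"].items())]
```

Because each event is evaluated as it arrives, the alert fires seconds after the API call, rather than after an ETL-and-query cycle over a log repository.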
Ashish Rajan: Because I think, to your point, a SIEM could also just be a log aggregator, basically aggregating logs, and on top of that you're putting [00:17:00] rules. Some people call that threat detection as well: I have a SIEM and some custom rules on top that call out that, oh, Ashish logged in from some unapproved region or whatever, and it can go in that direction. As a SOC team, it's obviously difficult to differentiate between the signal and the noise, and to know which signals are the right ones to go for. In terms of prioritisation and otherwise, are there some thoughts on how SOC teams should be approaching this? A lot of them may just go, I just need a Splunk license and my life is set after that.
Suresh Vasudevan: Yeah. So let me start by just saying the role of a SIEM or an XDR repository still is very important because as a log aggregation vehicle, if you want to do detailed investigation on a post incident basis, then it's important for you to have the underlying log sources.
And so it's still important to aggregate logs somewhere. I think the distinction I'm drawing is using the SIEM repository or XDR repository for doing detections [00:18:00] is not timely enough in the cloud and you need to complement that with almost an engine that acts as a cache before the logs are going into the SIEM and is able to do detections in real time.
So that's the first comment I'd make. So you still want to store the logs. And this is what really Falco does. It uses some amount of state and it basically looks at logs as they're being written into the SIEM to make sure that your first layer of detection is already in place without relying on manual detection rules being written on the underlying log structure.
So that's the first comment I'd make. The second observation you made is extremely important: the amount of noise from the number of data sources you're looking at in the cloud can be overwhelming. The signal-to-noise ratio can be such that SecOps teams and SOC analysts are overburdened trying to determine where the real incident is and where it is not.
And so the second aspect is equally important. First, you want to be able to do detection in stream. Second, a really good way to make sure you're focusing on the most important things is to be able to connect the dots across sources. So: [00:19:00] can I connect, in an attack path, the initial vulnerability that was used to come into my cloud environment with the fact that someone may have escalated their privileges through a cloud IAM role, with the fact that they then assumed a role to connect to an S3 repository that had sensitive data, and with the fact that I then saw them making a network connection to a command and control server? How do I connect these dots? And one of the biggest comments I make is that in the cloud, identity happens to be perhaps the most important connective tissue across actions.
So if you can somehow look at how an individual role, an individual identity, an individual user is connected across these multiple different event types, then you have a better chance of separating signal from noise. And that's really where a lot of our focus is as well, Ashish.
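To illustrate identity as the connective tissue, here is a minimal sketch, with invented stage labels standing in for normalized findings from different sources, of tracking whether a single identity progresses through an attack-path-like sequence. This is a toy, not Sysdig's correlation logic:

```python
from collections import defaultdict

# Invented stage labels standing in for normalized findings from
# different sources (IAM changes, storage access, network egress).
STAGES = ["privilege_escalation", "sensitive_data_access", "c2_connection"]

def correlated_identities(events):
    """events: dicts with 'identity' and 'stage' keys, in time order.
    Returns identities that moved through all stages in sequence."""
    progress = defaultdict(int)  # identity -> index of next expected stage
    flagged = set()
    for event in events:
        identity, stage = event["identity"], event["stage"]
        if progress[identity] < len(STAGES) and stage == STAGES[progress[identity]]:
            progress[identity] += 1
            if progress[identity] == len(STAGES):
                flagged.add(identity)
    return flagged
```

Any one of these stages in isolation looks like routine admin activity; keying the sequence on the same identity is what turns three noisy events into one high-confidence finding.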
Ashish Rajan: Interesting. Because I imagine a lot of SOC teams are just told, hey, can you just look after the cloud? That's the notion being thrown at them. A lot of SOC teams would be more around understanding: what am I really looking [00:20:00] for, what are the threats? Would you say there are two layers to this as well?
If I was a leader listening to this conversation, like a CISO or a SOC manager, there are obviously two aspects of looking at this. One is: hey, is there a misconfiguration in the cloud? Then there's the application side of it as well: is there something wrong with the application?
I know Falco is open source, and I haven't done a lot of work with it, but can Falco be that bridge, covering not just your AWS or cloud misconfiguration but also applications?
Suresh Vasudevan: Yeah, you ask a great question. So first of all, Falco is much more on the detection side, rather than looking across broader misconfigurations and posture. What Falco can do is detect when someone makes a change that breaks a good configuration and results in a noncompliant or poor configuration. It does that as a live detection. But the larger question you asked is really interesting, right? If someone is listening to this and saying, how do I gear up my SecOps team and my SOC to be able to detect threats in the cloud?
I would make two or three observations. In the cloud, I think there's a [00:21:00] learning curve that SecOps has to come up, and a lot of the constructs in the cloud are not necessarily deeply familiar to SecOps teams and SOC analysts. So I fundamentally believe that in the cloud context, you want a partnership between SecOps teams and CloudOps and DevOps, right?
And so we talk about DevSecOps. I fundamentally believe that hardening my cloud is not just a security task; it's a task shared between security and DevOps or CloudOps teams. In a similar way, when I think about how to detect and respond to incidents, it also needs to be a partnership between SecOps and DevOps teams, or SecOps and CloudOps teams.
So if you think about who the detection engineers are and how they're constructing the detection rules, whether those rules are being applied to Falco or to a SIEM: who is authoring those detection rules? Who is able to say, these are the kinds of things that represent risks in the cloud?
I think [00:22:00] trying to do that just as a SOC team is not going to be as effective as a collaboration between the SOC team and the CloudOps team. So as a leader, the first thing I would want to do is have my SecOps teams and CloudOps teams start to work together on what are the threat scenarios that we want to be protected against?
What kind of data sources do we need? Do we have them currently in our SOC team? What kind of tooling do we need? So it's both organizational as well as tooling, I think.
Ashish Rajan: What should leaders consider as a capability in their SOC team in terms of, yes, I've been told to look after the cloud, but what does that mean in terms of capability for people or maybe even processes as well that you've seen?
And it doesn't have to be like an extensive list, but in terms of what you think from a threat detection capability, what should teams consider having as capability within the team?
Suresh Vasudevan: So I'll take it as a given that the first phase of protection that lies outside the SOC is hardening and prevention. So let's take as a given that you're already investing in good vulnerability management practices, good entitlement management, good posture management practices.
And now we're focused on detection and response. Two comments I'd make, [00:23:00] organizationally: I would absolutely think that the most important thing you could do is create this bridge between whoever you pick from DevOps and CloudOps and whoever you pick from SecOps, to identify and start saying, let's quantify the risks.
Let's quantify the attack scenarios, the threat scenarios and figure out exactly what we need to make sure we have detections in place. From a data source standpoint, the SOC has always been heavily focused and has a deep understanding of EDR as a data source, network as a data source, and SIEM as a log aggregation tool.
When you start translating that into what new data types you need from the cloud: do I have a workload protection solution in place where I'm getting instrumentation from my cloud workloads, whether that's compute in the cloud, traditional VMs, containers, or serverless? Have I complemented my EDR tooling with cloud workload tooling? Secondly, have I complemented my traditional log sources with monitoring cloud logs? [00:24:00] Cloud logs could be things like CloudTrail and Azure activity logs, but also logs from services like Okta and GitHub. Do I have the mechanism to aggregate those? And this is really where the challenge is steep, Ashish, because if you think about taking all of your CloudTrail logs, remember, every single action, every single API call is logged.
So if I want to aggregate CloudTrail logs across dozens of regions, it's expensive. If I now want to look at every mutating and non-mutating activity in Okta and GitHub, it becomes expensive. So there are good strategies for how you optimize cost as well as get that data into your SOC.
So cloud monitoring complemented by workload monitoring as new data sources, and organizationally a blending of SecOps and CloudOps or DevOps, are, I think, some important steps to start with.
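On the cost point, one common-sense strategy, sketched here with an illustrative prefix list rather than a vetted policy, is to drop high-volume read-only events before long-term SIEM ingestion while always keeping failed calls, since denied requests are themselves a probing signal:

```python
READ_ONLY_PREFIXES = ("List", "Describe", "Get", "Head")  # illustrative

def should_ingest(event: dict) -> bool:
    """Decide whether a CloudTrail-style event is worth sending to the SIEM."""
    if event.get("errorCode"):  # failed calls often indicate probing
        return True
    return not event.get("eventName", "").startswith(READ_ONLY_PREFIXES)
```

The trade-off is that read-only events are exactly what reconnaissance detection feeds on, so this kind of filtering belongs upstream of long-term storage, not upstream of streaming detection.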
Ashish Rajan: And that made me think of going back to the whole uncommon services theme, because there's a broader context [00:25:00] beyond just the fact that you're using machine learning on AWS, Azure or Google Cloud. I think you were talking about this just before the call started, about the MGM attacks and the supply chain, for lack of a better word. People are relying on Okta being the point of entry rather than my AWS account or Azure account being the point of entry.
I think you guys have some research in that space as well. Are there some thoughts on uncommon attack paths? It's not just your AWS account or Azure account that's the way in; by the way, it could be anything else as well.
Suresh Vasudevan: Yeah, no, it's funny you mentioned this, Ashish.
I have to say, in the month since Caesars, MGM, et cetera, one of the things we're able to do in our product with our threat detection is that we don't just look at CloudTrail, Azure activity, Kubernetes audit logs and workloads. We're also able to parse Okta logs, and we have rules around Okta logs to detect anomalous things people may be doing within your Okta environment. The spike in the number of people saying, let me enable all of my Okta detection rules, was quite significant, right?
And so it tells you the degree of concern around Okta. There's a whole [00:26:00] lot more we'll discover as the details come out, but two things were really interesting to me in the MGM attack and what we know so far. The first is that, like almost every cloud attack, it started with social engineering. That's not news; every one of your listeners knows this. It was basically someone resetting all factors of authentication for a privileged user, and suddenly that person had the keys to the kingdom. What was really interesting was what came after: the sophistication of the exploit and how they leveraged Okta. They set up a parallel identity provider, something you do when you want contractors to have access into your environment, and mapped existing Okta users to fake users within this alternate identity provider. Through that mapping mechanism, they were able to access a wide-ranging set of apps within these organizations. The level of sophistication and the detection rules you need in order to detect that [00:27:00] those things are happening, that's where I'm afraid many of our SecOps teams are not able to make that investment.
First of all, do you have a comprehensive collection of all Okta logs? Secondly, have you thought deeply about what detection rules you would put in place to know that something unusual is happening within Okta? And it so happens that these particular exploits leveraged Okta. What happens when the next one leverages GitHub or Jenkins?
And so then you have to think about: do I have all the logs to understand when an unusual repo creation is occurring within GitHub? And is that normal developer activity or anomalous developer activity? So that's really where I would say the challenge becomes knowing the most important data sources, making sure that you're doing live detections on those, and having the expertise to be able to say, what am I looking for?
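The "normal versus anomalous developer activity" question above can be made concrete with a toy baseline check: flag any user whose repo-creation count in a window far exceeds a historical norm. The thresholds, field names, and the `repo.create` action string are illustrative assumptions, not a real GitHub audit-log integration.

```python
from collections import Counter

def anomalous_creators(audit_events, baseline_per_day=2, factor=5):
    """Return users whose repo creations exceed factor * baseline in the window."""
    counts = Counter(
        e["actor"] for e in audit_events if e.get("action") == "repo.create"
    )
    return {u: n for u, n in counts.items() if n > baseline_per_day * factor}

# Simulated audit window: one ordinary developer, one compromised account
# mass-creating repos, and unrelated push activity that the rule ignores.
events = (
    [{"actor": "dev1", "action": "repo.create"}]
    + [{"actor": "compromised", "action": "repo.create"}] * 40
    + [{"actor": "dev2", "action": "git.push"}] * 10
)
suspects = anomalous_creators(events)
```

A real deployment would learn per-user baselines rather than hard-code one, but the shape of the rule is the same.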
Ashish Rajan: Yep. I love the part about knowing the data sources, because unfortunately we security professionals are well known for saying, give me all the logs, because I need all the logs. Then the whole question about the volume of data going into your SIEM provider or whatever comes in as well. So, without [00:28:00] encouraging people to ingest all the logs in the world and create a large amount of noise by basically ingesting everything, what is your general recommendation for approaching threat detection in this context? I'm also thinking about the majority of enterprises that are hybrid cloud, or cloud first; the complexity of architecture is quite significant these days. So in terms of ingesting logs, how should people approach it?
Suresh Vasudevan: Yeah, I think there's no silver bullet. In fact, there's an architectural question first: why am I storing raw logs and then spending time processing those logs before I can get to detection, when time to detection should be short? So is there an alternate approach? As I mentioned, we are believers in streaming detection, complemented by logs for investigation.
That's the entire premise on which we were founded, if you will. The other question you're asking is which log sources, because they are extremely voluminous in the cloud and extremely multidimensional. And so it starts with stepping back and asking what are the [00:29:00] most important. That's why I started by saying: sit down with CloudOps and SecOps and ask, what are the most important threat vectors that we should address, and how do we address them?
What one of our large customers does, which really significantly changed the cost structure of these logs and allowed them to store a lot more, is this: not only are they using Falco and our threat detection, our commercial product in this instance, to do the detection on these streaming logs, they don't store the raw logs in their back end. In this particular instance, in the query database, they are storing filtered events rather than raw logs, because filtered events dropped the cost of their log storage by a factor of 20x. And they really don't need all of the events, so they figure out exactly which events they want to track.
Not only will they do detection on those events, they'll store them in the back end, so that if they have to do further investigation, they've captured most of the events they need in any case. And so some other things you'll need to think about are: how do I filter logs, how do I decide which logs are the most important given [00:30:00] my risk vectors, and can I really do detection on the fly instead of post-storage structured queries, if you will?
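The filter-before-store pattern in the customer example above can be sketched in a few lines: evaluate the stream, persist only the events on a watchlist, and drop the bulk of the raw volume. The event names and the watchlist are illustrative assumptions modeled on CloudTrail-style records.

```python
# Hypothetical watchlist of event names worth keeping for investigation.
INTERESTING = {"ConsoleLogin", "CreateUser", "PutBucketPolicy", "AssumeRole"}

def filter_events(raw_events):
    """Keep only the events worth storing for later investigation."""
    return [e for e in raw_events if e.get("eventName") in INTERESTING]

# Simulate a noisy stream: mostly read-only calls, a few writes that matter.
raw = [{"eventName": "DescribeInstances"}] * 95 + [{"eventName": "CreateUser"}] * 5
stored = filter_events(raw)
reduction = len(raw) / len(stored)  # on this toy stream, a 20x reduction
```

The 20x figure here is just the toy stream's ratio; the actual savings depend entirely on how noisy the environment is and how tight the watchlist can be made without losing investigative context.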
Those are some of the ways I would say people need to think about approaching this problem differently.
Ashish Rajan: Awesome. Thank you so much for sharing that. That's most of the technical questions I had, but I also had one more on uncommon services and common services, and threat detection in general: how it should be done in a multi-cloud or hybrid world is a question that a lot of people ask.
Is there any resource you recommend for this as well?
Suresh Vasudevan: The places that I go, part of it is just morbid fascination, but Dark Reading and BleepingComputer, where you're constantly getting updates on various breach types, all make for great, educational reading. Flashpoint is a service that we subscribe to.
It's a threat intel platform where we get a lot of information, and we have multiple threat intel sources that we track as well. And then I actually enjoy threat research publications, whether from competitive threat research teams or our own threat research team. I've basically bookmarked about five or six different threat [00:31:00] research team publications that I go to all the time, because they make for great reading.
And so those are my typical sources, right? Some that are sensationalist articles about breaches, some that are really educational threat research sources, and then our own threat intel platforms. Those are probably the places that I go to get educated most of the time.
Ashish Rajan: Awesome, thank you for sharing that.
And I've got three fun questions as well, which is a thing we do on the podcast. Where do you spend most of your time, I guess, outside of work?
Suresh Vasudevan: So I think the two things that I enjoy doing, one, I'm a mountain biker, unfortunately just weekends, but I love mountain biking.
So that's with a bunch of friends; it's probably the one time that I'm not thinking about work, so I enjoy that quite a bit. And now that the NBA season has started again, I basically don't miss a single game. Those are probably the two things that I always want to do.
Ashish Rajan: What team do you support?
Suresh Vasudevan: Warriors.
Ashish Rajan: Okay, fair enough. I'll get hate mail for it... I was gonna say, good luck, man.
Suresh Vasudevan: Thank you. It's crazy: when they win, there's a two-hour glow, and when they lose, the next two hours are down for me. [00:32:00]
Ashish Rajan: Enjoy watching.
Ashish Rajan: Thanks for sharing that. I guess the next question is, what is something that you're proud of, but is not on your social media?
Suresh Vasudevan: Not on my social media. I think sort of two things, personal and professional. Personally, my wife is my closest friend, has been for the longest time. We were together in college. And something really nice happened over the last couple of years. My older daughter is a sophomore in college. My younger daughter is a high school senior.
And somewhere in the last two years, it felt to me like I went from one friend and two children to three friends. By the debates I'm able to have with them, the conversations I'm able to have with them, somehow, in a sudden fashion, the nature of our relationship changed.
I thoroughly enjoy my conversations with them. It's learning how not to be a parent, but to still be a parent and to treat them almost as equals in an intellectual sense. I'm proud of them, and I'm enjoying the process of interacting with them as they've changed over the last few years.
That's probably personally what I'm most proud of. Professionally, as much as we fixate [00:33:00] on what my companies achieved and how big we got and so on and so forth, I started my career at NetApp, which arguably had one of the best cultures I've ever been part of. It was a fantastic culture, and I learned what it is to build a company with a good culture. Probably the thing I'm most proud of is that when I look at Nimble, when I look at Sysdig, irrespective of how people feel about what we achieved, I feel like almost everyone I know from those places would say it was one of the best places they've ever worked, that the culture there was really special, and that they learned a lot while they were there.
To me, that's probably the most gratifying thing about being at these companies, and that's something I'm proud of, is being part of the process. It's clearly many people that make a culture happen, but I certainly was part of that.
Ashish Rajan: Yeah, good on you for maintaining the culture as well. And I think we'll try and make sure we get your two daughters a copy of this video as well.
So they get to know you're proud of them as well. Last question. What is your favorite cuisine or restaurant that you can share?
Suresh Vasudevan: My favorite cuisine is Indian; Italian is a close second. My favorite restaurant is distant: it's in Mumbai, in India, a restaurant called Trishna.
It's a Mangalorean seafood place. I think of it as the best seafood in the world. And I don't go there often given how far I am from the place. Every time I'm in India, I try and make it a point to go to Mumbai and have food there.
Ashish Rajan: Wow. Thank you for sharing that. But where can people find you on the internet?
Suresh Vasudevan: LinkedIn is the one social platform that I go to frequently, so that's the easiest way to connect with me.
Ashish Rajan: Awesome. All right, I'll put that in the show notes, and I'll put the research report you spoke about in the show notes as well. But thank you so much for joining us, and thank you to everyone who tuned in. We will see you in the next episode, and hopefully one more episode with Suresh in the future as well.
But thanks so much for joining in everyone. We'll see you in the next episode.
Suresh Vasudevan: Thank you. Thank you all. Thank you, Ashish.