AWS Cloud Security Strategies for 2024

View Show Notes and Transcript

What is the future of Cloud Security with Gen AI, how has it evolved and where is it heading? We spoke to Clarke Rogers at AWS re:Invent 2023 about the AWS advancements in generative AI for security professionals, examining tools like Amazon Inspector for Lambda, Amazon Detective, and the potential of Amazon Q for security policy clarity. He also spoke about strategic advice for CISOs and security engineers on aligning security investments with business outcomes and managing risks effectively.

Questions asked:
00:00 Introduction
00:21 Theme for re:Invent 2023
02:03 Where to start with AWS Security?
04:02 Top priorities for Cloud Security
07:17 Cloud Security Engineer Skillset
09:05 Zero Trust and Ransomware
11:38 Thinking about GenAI in 2024
13:08 Data Security  
14:50 What not to include in 2024 Cloud Security Roadmap?
16:03 Where you can connect with Clarke to learn more

Ashish Rajan: [00:00:00] Welcome to another episode of Cloud Security Podcast. Today's conversation is going to be interesting. Could you introduce yourself to the audience first? Certainly.

Clarke Rogers: Thank you for having me. My name is Clarke Rogers, and I'm a director of enterprise strategy at AWS. I spend the majority of my time with customer executives, helping them along their cloud adoption journey, their digital transformation journey, and of course, that all touches security, risk, compliance, and privacy.

Ashish Rajan: And what's been the theme for you at re:Invent so far? We've had three days of keynote. What's been the general theme for you that you're taking away for your customers?

Clarke Rogers: So I'm really excited about three announcements that came out during Adam's keynote and they all are around the support of generative AI for security professionals, in my opinion.

Okay. So we have Amazon Inspector for Lambda, right? So this allows security professionals to use generative AI to actually look in the code of a Lambda function, see what's wrong, and address it as they need to. Second, Amazon Detective vulnerability management groups, right? So this is tiering by criticality and grouping [00:01:00] vulnerabilities so that a security operations professional can quickly triage what he or she needs to do and then go take action.

So I love that, because we've had this promise for years that computers are going to help us do the work, and now we actually see it in reality. So very exciting. And while it may not be obvious that Amazon Q is a security tool, and it's certainly not marketed as a security tool. Yep. I think security practitioners are going to love it, because one of the big things I see in large enterprises is mystery around, what's the security policy for this?

Yeah. What's the regulatory framework for that? And what specifically do I need to do to make my application or my infrastructure or whatever the case may be appropriate for my environment? Yeah. So I'm thinking about how security teams can upload all their documentation and point it to the databases and file shares that have all this stuff.

And then if I'm a new developer at the company and I want to know what the secure coding standards are for my particular application, it's not, get a meeting with somebody or talk to my [00:02:00] mentor. It's open up the box, just type it in, and go.

Ashish Rajan: Wow. It's very exciting. Yeah. And so on Cloud Security Podcast, we have a broad enough audience that stems from cloud security engineers all the way to architects, and to CISOs and VPs and Directors of Cloud Security as well. And some of the conversations we've been having on the floor have been around how the whole notion of cloud security has changed and evolved quite a bit, and the advice that used to be current for 2010 or 2015 is not the 2023 advice. So for people who are looking at this space now, because a lot of people here came for the first time, Sure. and I think it was amazing to see, but at the same time it felt like a firehose opened up in their face and now they're like, oh wow, there's a lot of information. So, for people who are trying to approach the cloud security space more from a strategic lens, people thinking, hey, I have a 2024 initiative for cloud security programs.

What do you think is a good place to start? And maybe some low-hanging fruit they can hit straight off the bat.

Clarke Rogers: Certainly from a strategic point of view, I always [00:03:00] encourage CISOs and security engineers to really take a step back from, hey, there's this new security tool that I can use for problem X, and actually see if problem X exists within your environment, or if that's a particular threat.

And then make sure that security investments are aligning with the appropriate business outcomes and the appropriate risk for the business. Yep. That can take a lot of working-backwards effort, but once you have that, and you have clear line of sight to what your security outcomes are, then you know where your investments go.

Maybe it is security engineering training for my developers. Maybe it is a new tool, because I see that there's a gap. Because maybe the threat model has changed. But it's really listening to the business objectives that you're trying to hit. Because ultimately security is there to mitigate risk for the business.

The business decides what risks they want to take on and what avenues that they want to approach from a business perspective. And security is there to help them be successful.

Ashish Rajan: And [00:04:00] maybe Amazon Q can help in some of that as well, a hundred percent. Yeah. I definitely find it interesting when people think about cloud security programs are there any?

Like top three things, because I imagine at the moment the spectrum is quite wide. We have companies that start today and tomorrow are at, I don't know, ChatGPT level, for lack of a better example, companies that within a year go to billions. I imagine there are a lot more examples like that in different areas.

What's usually your advice for, hey, you should consider these three things, and if you do these three or four things, whatever that small list is? It doesn't need to be an exhaustive list, but what do you normally recommend to people who are starting to build that program in a smaller setup rather than an enterprise, where they may not have the right kind of resources?

What do you tell them to do?

Clarke Rogers: You asked for three, I'll give you five. Oh, perfect. There are five sort of core tenets that I always advise customers to really focus on, regardless of whether you're a digital-native startup, a very small company, or a very large enterprise.

The first one, as you can appreciate in the cloud, identity is everything. Do I get my [00:05:00] identity correct? How am I authenticating? What am I putting MFA on? All those kinds of questions. And again, you're thinking about your business objectives and your business risk. So identity is number one. Number two is detective controls, or logging and monitoring.

Once I have an entity created in my environment, what is that thing doing? What telemetry am I getting from it? What do I want to get from it? How am I reporting on it? How am I reacting to it? Et cetera. The third is infrastructure security. So this is when we're thinking about VPC design, network segmentation, firewalls, security groups, all that good stuff, right?

And vulnerability management: patching, scanning, all that good stuff, right? So how am I going to implement that? Am I going to use native services? Am I going to use a third-party product? Whatever the case may be. The next is data protection, right? So where's my data? It's the age-old question.

Where's my data? What data do I have? What do I need to protect? What's my encryption scheme? What resilience patterns am I going to put in place, backup and all that good stuff? And then [00:06:00] lastly, and I would argue that this is probably the most important one even though it's number five, that's incident response.

Oh yes. Making sure that you've accounted for those first four, but that you've really practiced what happens when something goes bump in the night. Yes. Do my incident responders have their tools already in AWS? Do they know how to use AWS? Is there good documentation of all the different systems that are out there that they need to investigate?

Yeah. Do they have a quarantine area they can move boxes that might have been infected into, so they can do triage later? And then of course the tabletop exercises with senior management, legal, comms, all that kind of thing. So it's those five things that, regardless of the size or complexity of your industry, you need to be thinking about, and depending on whether you're highly regulated or not, you're going to do that to a different degree.
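The five tenets above could be captured as a simple program-review checklist. A minimal Python sketch; the tenet names and sample questions are paraphrased from the conversation, not an AWS artifact:

```python
# Hypothetical checklist of the five core tenets discussed above.
# Names and questions are paraphrased from the conversation.
CLOUD_SECURITY_TENETS = {
    "identity": "How am I authenticating, and where is MFA enforced?",
    "detective_controls": "What telemetry am I collecting, and how do I react to it?",
    "infrastructure_security": "How are VPCs, segmentation, firewalls, and patching handled?",
    "data_protection": "Where is my data, and what is my encryption and backup scheme?",
    "incident_response": "Have we practiced what happens when something goes bump in the night?",
}

def unreviewed_tenets(reviewed: set[str]) -> list[str]:
    """Return the tenets a program review has not yet covered."""
    return [t for t in CLOUD_SECURITY_TENETS if t not in reviewed]
```

For example, `unreviewed_tenets({"identity"})` would list the four areas a review has not yet touched, which is roughly the exercise Clarke describes regardless of company size.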

Ashish Rajan: Would you say the role of a cloud engineer or a cloud security engineer has also evolved? Like now, obviously we're talking about Amazon Q, and Gen AI seems to be the topic of the moment. In terms of [00:07:00] teams that are being involved in this, a lot of people would have mixed opinions on, hey, what am I training people on?

Obviously Amazon has Amazon Bedrock and other services as well. Do you find that the role has changed? As a CISO, am I looking at different skill sets that I need to add to the team now?

Clarke Rogers: I think it's more of an evolution of skill sets. Okay. Earlier you mentioned what cloud security would have been in 2011 to 2014 versus what it is now.

Yeah. In that time frame, it was a lot of point-and-click in the console. Yeah. Operations with security, a desire to automate certain things, right? But there was, quite frankly, not a lot of development talent in the security org, right? Now, not only is there development and engineering talent in the security org, the security org is basically acting as a security service to the rest of the enterprise.

Yeah. Where they're building tools that other people can use. So that's fantastic. But what I really love is the evolution of what I'll call the regular development teams, [00:08:00] or the product or service teams, in that they're building security in there, right? So they're using the quote-unquote security team as a resource for best practices, what kinds of threats are out there, that kind of thing. Yeah, but now the remit's on the developer to own that full stack.

So they're owning the security of it. They're owning the compliance of it. They're owning maybe the CI/CD pipeline as well as the functionality of whatever they're building. But we find, and I would hope you agree, that any security work you can do up front at the beginning of a development task saves your bacon at the end of it, because finding a security vulnerability the day before production is going to cause problems for both the business and security, right?
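As a toy illustration of that shift-left point, here is the kind of check a CI/CD pipeline step might run before code ever reaches production. The regex heuristic and function names are hypothetical; this is a sketch of the idea, not an AWS or production-grade scanner:

```python
import re

# AWS access key IDs commonly start with "AKIA" followed by 16
# uppercase alphanumeric characters. This is a well-known heuristic
# used by secret scanners, not an official specification.
ACCESS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_hardcoded_keys(source: str) -> list[str]:
    """Return any substrings of `source` that look like AWS access key IDs."""
    return ACCESS_KEY_PATTERN.findall(source)

def ci_gate(source: str) -> None:
    """Fail the (hypothetical) pipeline step if a likely credential is found."""
    hits = find_hardcoded_keys(source)
    if hits:
        raise SystemExit(f"Build failed: possible hardcoded credentials: {hits}")
```

Catching a leaked key at commit time, rather than the day before production, is exactly the bacon-saving Clarke describes.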

Ashish Rajan: No, I agree with this one. That's definitely not the right way to do it.

Clarke Rogers: But it's one of those things. And we see that the organizations that move that way not only see fewer security reworks coming back from production, but they're also seeing increased velocity. This is where you get into security [00:09:00] actually being a business enabler and a business benefit to the enterprise, or whatever your organization is.

Ashish Rajan: The space of security is quite interesting, right? I think for most of the year we were talking about ransomware and zero trust and all of that, but there hasn't been a lot of conversation about that here. Do you find, when you talk to customers at the moment, that it still comes up as a conversation?

Does that come up? Yes, Gen AI has become the focus, and I don't know if Gen AI is going to solve ransomware, but I'm sure it's going to be in there somewhere.

Clarke Rogers: I think that's a very good point, right? GenAI as of now is not a panacea for every security issue that's out there. In fact, we may see more security issues.

Ransomware still comes up and maybe not specifically ransomware, but business limiting events.

Ashish Rajan: Oh, yeah, that's a good way to put it.

Clarke Rogers: So back to our previous conversation about engineering from the beginning. Yeah. When I see organizations start to think about that, they are really thinking about security in the ideation process.

And then they actually start programming it in, etc. Then you start having the best [00:10:00] practices just being the norm across the enterprise instead of an exception. And once you start to see that, you have a certain level of business resilience that you didn't have before. For example, if you have an organization that is only pushing to production via pipeline.

There's no human access into production. You're following the best practices for identity and access management, for segregation of duties and MFA, et cetera. You're using strong encryption and protecting those keys. Yeah. And of course using that encryption on the sensitive data. If for some reason you were to get popped in a ransomware event.

Yeah. Let's say that the criminals have taken your data, but what have they taken? They've taken encrypted blobs that they ideally can't decrypt. So now you're in a completely different situation. Everything is code. You rebuild your environment in a day or two. Yeah. We have some customers who can rehydrate in about 24 hours.

Yeah. They do that, and now your ransomware event is a non-event. And that's a completely different story, again, when you're thinking about the C-suite and the board of directors. [00:11:00] Yeah. The level of confidence that you give them by having that capability. It's no longer cool security stuff.

Yeah. That is business-enabling excellence, right?
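Clarke's "encrypted blobs" point can be sketched in a few lines. This toy Python example builds a stream cipher from SHA-256 purely to illustrate that exfiltrated ciphertext is opaque without the key; it is not a vetted cipher, and in practice this role is played by a managed service like AWS KMS with standard algorithms such as AES-GCM:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256.
    Illustration only; real systems use vetted ciphers and KMS-managed keys."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """XOR the plaintext with a fresh keystream; return (nonce, ciphertext)."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """XOR again with the same keystream to recover the plaintext."""
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))
```

If criminals exfiltrate the ciphertext but the key stays protected, they hold an opaque blob; combined with everything-as-code rehydration, the theft becomes the non-event Clarke describes.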

Ashish Rajan: And to take that further with the board as well, I think one of the challenges that some people would have with the board, now that Gen AI is top of mind for a lot of people, would be getting the right amount of budget for it as well, that kind of was like the inspiration for that question about, is my team going to change? Or what would you say people should consider in the cloud security program they think for 2024? from a, hey Gen AI is going to be top of mind. Are there specific security components that they should think about to include in their program for 2024?

Clarke Rogers: I think that for security professionals, if they haven't done so already, there needs to be a deep dive from an education perspective on Gen AI. How are LLMs built, right? How do we interact with LLMs? How do we deconflict LLMs and the hallucination issue, and all that kind of thing? Fortunately, the Bedrock service allows [00:12:00] customers to keep their data in their own VPC, using the same security tools that they have for everything else at AWS.

So they can protect their data there and experiment with a wide variety of LLMs. But I think, for the security professional, they really need to be thinking about, how does this stuff work? How would I break it? Yeah. And then, what defenses do I need to put in place? Because many customers are building their own LLMs out of their own data, right?

Yeah. So they need to put the protections in there. And then of course be aware of the prompt manipulations and the prompt engineering as well.

Ashish Rajan: I'm glad you said that, because the sister podcast to Cloud Security Podcast is AI Cybersecurity Podcast, which I run with Caleb Sima, the former CISO of Robinhood.

Oh, yes. Yeah. So he and I started a podcast together for exactly that. The entire first season is literally a primer on AI and LLMs from a cybersecurity lens. He is absolutely fantastic. Yeah. I think I'm alright as well. No, but I think he is super awesome. The initiative came from conversations where a lot of the time we would go, oh, [00:13:00] am I just looking at prompt injection, or what's my privacy like? And data would not come up that often. Now people are realizing all data is important as well. Do you find the conversation about data coming up for you as well, from a data security perspective?

Clarke Rogers: I try not to laugh at it. But what I do find funny is for 30 years, we've been talking about data management. Yeah. Where's my data? What's sensitive? How do I protect it? All this kind of thing. The customers that have taken that seriously and really put effort behind that are the ones who are able to go build LLMs and engage with this and get rich insights out of their data.

Oh yeah. The customers that have not have put that on their back burner. There's something in Excel. There's something in Access.

Ashish Rajan: Yeah. Access is still a thing, yeah.

Clarke Rogers: There's something in another database, etc. They're the ones who's they are answering to the board. Oh, we have 10 AI projects going on right now, but they have to get their data clean and organized before they can do it.

So that's a challenge I see unfortunately a little bit too much.

Ashish Rajan: So data should be part of it, and get some consideration for AI. Education, [00:14:00] the general AI primer, should be part of the security program as well.

Clarke Rogers: And not just for security professionals, right? Oh, broadly across the board.

It's across the board, because we don't know where that next great idea may be coming from, HR or finance or someone else in the organization. We recently released PartyRock on AWS, which allows people to build their own applications on top of Bedrock in a very fun and easy way. It's almost gamified.

Using something like that to educate, let's call them, the non-technical folks inside of your organization, you may build that next great mousetrap in a part of the organization that you never thought of before. And that's what's exciting to me. It's the democratisation of tech across the enterprise.

Yeah. So everybody's position can be uplifted because of this.

Ashish Rajan: And I think final thoughts on advice for people who are obviously looking at working on this. We spoke about a few different things they should definitely include. Is there something they [00:15:00] should not include in their 2024 roadmap?

The reason I mention that is because, as with the earlier point, advice from 2012 or 2022 doesn't really matter anymore. We're now in the Gen AI world, basically living that bubble or whatever you call it. Are there mistakes that you've seen people make that you would ask them not to repeat in the next program they're building?

Clarke Rogers: Complacency. Okay. I think we all need to, because Gen AI came more or less out of nowhere in January of this year, December of the year before, and now it's all the rage. Yeah. It may or may not be all the rage four or five months from now, or it may be the thing for the next ten years.

Yeah. But what happens is people often see that bright, shiny object, focus on nothing but it, and miss something else wonderful that comes by. I encourage people to always be curious, always be asking questions. Don't be so confident in what you've done that you think it's bulletproof from a security perspective. I guess the idea is to be comfortable with being uncomfortable, with not knowing everything, and to continue to seek [00:16:00] new education and new points of view and execute on them.

Ashish Rajan: Awesome. That's most of the questions I had. Thank you for coming on the show. I'll put your LinkedIn in the show notes so people can connect with you. Thanks so much for coming on the show.

Fantastic. Thank you. Alright.