The Future of Software Development with AI


How can we leverage AI for more secure and efficient code, and how will it impact DevSecOps? Ashish spoke to Michael Hanley, CSO and SVP of Engineering at GitHub, about the transformative impact of GitHub Copilot and AI on software development and security. Michael speaks about GitHub's internal use of Copilot for over three years and its role in enhancing developer satisfaction and productivity by removing mundane coding tasks. They speak about the broader implications for DevSecOps, the future of AI in coding, and strategic tips for integrating AI tools within organizations.

Questions asked:
00:00 Introduction  
02:19 A bit about Michael Hanley
04:25 Making Security Easy for Developers
07:17 What is GitHub Copilot
10:01 What's the Future of AI for Security and Developers?
13:36 Security Recommendations for using AI
16:35 How is data stored in GitHub Copilot?
17:40 How is AI impacting DevSecOps?
21:50 The balance between Security and Innovation
24:18 The evolution of education with AI
27:30 Strategic Approach for CISOs implementing AI Pair Programmers
30:08 Bridging the gap between Security and Engineering
34:37 The Fun Questions


Michael Hanley: If you can just go into a ChatGPT window, or you can go into Copilot Chat or whatever your natural language interface is, and say, I'm looking at this function, am I following security best practices? Or, what are the security best practices for implementing X, Y, and Z in this language? This is a pretty big game changer.

There are probably not a lot of tools you can buy that make developers happier, but the data suggests that developers are overwhelmingly happier using things like GitHub Copilot or other types of AI pair programmers, because it actually takes some of the boilerplate parts out of their job and they can spend more time on the work that's creative.


Ashish Rajan: A few weeks ago, we spoke about GitHub Copilot, but I thought it was worthwhile letting you know that GitHub Copilot has probably been used within GitHub for over three years. That's right. You heard that right. So to talk about what it is like to run GitHub Copilot, and what are some of the business challenges, or maybe even cultural challenges or security challenges, you might think about as you're trying to think about how do I adopt AI in my organization while managing that balance between innovation and security, for that we have Michael Hanley, who is the CSO as well as the SVP of engineering at [00:01:00] GitHub. Michael and I spoke about how GitHub Copilot had been used internally for three years before it became a thing.

Yes, you heard that right, it was before ChatGPT. And we also spoke about some of the changes that security can start seeing, the possible future of what developer security could look like, where we could have all new code being produced at higher quality. It is worth saying that this does not mean we stop doing what we're doing, and in no way are Michael and I saying that, hey, it's going to affect any jobs. The reason it's called Copilot is because it is something that you add onto your existing workflow.

You can use it to enhance productivity and increase security overall by hopefully producing higher quality code, but you still need to do what you were doing from a DevSecOps perspective. You still need to keep an eyeball on it. Overall, it was a very fascinating conversation about what AI could be in the future from a developer security perspective, whether you're producing code that is code in cloud or cloud in code, whatever you're hearing on the internet these days about how cloud and code are changing, but I'll let you enjoy this episode with [00:02:00] Michael Hanley.

As always, for people who are joining us for the second or third time, I would really appreciate it if you could drop us a review or rating if you're listening to or watching this on iTunes or Spotify, but if you're watching this on YouTube, hit that subscribe button. All right, I will let you enjoy this episode with Michael Hanley, and I'll see you in the next one.

Welcome to the podcast. We have Michael Hanley from GitHub today. Michael, you have an interesting role with engineering and security. Maybe to start off with, can you give a bit of your background and how you got into the whole profession and everything?

Michael Hanley: Yeah, for sure. And thanks for having me on the show.

Ashish, I appreciate it. It's good to be here with you and your listeners. Yeah, I've been doing security for about 20 years now, a little bit longer than that depending on when you count the professional start date. But I, like a lot of people, got my start by taking things apart, probably without my parents' permission.

Probably starting with the Zenith Heathkit that we had in the basement that my brothers and I used to play text based games on. From a professional standpoint, when I got into college, I did a job working in a co-op, largely doing tech support, actually, for one of the big service providers at the time.

And from there, the interest in security sort of snowballed from there on [00:03:00] out. So I went to graduate school for security, worked for a Defense Department research lab for a while, then moved on to Duo Security in about 2015, which at the time was a small startup, and really had just an amazing experience there.

We sold the company to Cisco in 2018. I did a few years at Cisco, where I later became the CSO for the company for the last year that I was there. And then when GitHub called, look, if you can't make a dent in software security by going to work for GitHub, come on, that's a great opportunity.

I had a great run at a lot of those places, but I'm just very grateful and feel very fortunate to be a part of the team at GitHub. But as you alluded to, I now have a dual role, so for the last two years or so I've been running security and engineering, and I really find the thesis for that is, fundamentally, at GitHub we believe that good security starts with the developer, and everything that we do is around enabling all of that great security work to happen at the edge, where the developer is actually doing the work of building and operating GitHub services. So that shows up, hopefully, in the product thesis in terms of how we're building things for our customers, [00:04:00] but also internally, we want all that security work, enablement, and capability pushed to the edge where the engineers are doing work in real time.

Ashish Rajan: I love what you said about making a dent in software security as well, because I think that's where my next few questions were going to be, around the field itself. A lot of people who use GitHub as a product have challenges with open source, like, hey, what advisory am I going for, vulnerability management or open source vulnerability, whatever that may be.

And I think, to what you said about making it easier for developers, how is security being made easy for developers? So to your point, that is the motto that you obviously spoke about. I'm curious, I was watching a keynote that you did for RSA, and you had a few things that you guys are working on which were in that direction. I don't know how many people are aware of how many things are already in GitHub when they use it as the number one code repository for their company. I'm curious if you can share some of those.

Michael Hanley: It's definitely one of those, you're using it, whether you know it or not, probably at some level inside your company.

Ashish Rajan: Isn't that the best security, you use it without even knowing it?

Michael Hanley: Yeah. Yeah. Look, I'd say GitHub is in a unique [00:05:00] position, right? Because, just thinking about security for a second, most companies are primarily responsible for keeping their own business secure and their customers' data, and then shipping safe and secure products. I think GitHub has those two missions, which I would call the more traditional parts of thinking about the security role and remit in a company, but we also have a third, because developers, to your point, are counting on us as the home of all the open source projects that they use, and the home of where they're actually building the software that they then need to attest as secure back to their customers.

Since all of that workflow happens, for many people, on GitHub.com, we have a unique third mission, which is actually to make it easier and better for people to experience great security outcomes in their day to day work of doing software development. So there's a ton of work that we do that's oriented around that, which I feel is unique here.

Some of that shows up in the form of actually going out and helping open source projects with security. So we have security researchers who are literally out doing the volunteer firefighter brigade thing, where they go help open source projects adopt security tooling and processes, literally doing some of the bug fix [00:06:00] work, et cetera.

So that shows up in places like that. It also shows up in places like making sure that all the security tooling that we have, we give away for free to people who are doing work in the open in public repositories as part of open source projects, because, again, we all depend on that. It also shows up in places like, we are wrapping up our work to require everybody who contributes code on GitHub.com to use two factor authentication, which is frankly a massive undertaking, and not just to get that done, but to get it done well, in a way that doesn't stink for the people who need to use those security technologies. Because at the end of the day, because of GitHub's unique position in the ecosystem, if we do those things, it actually makes the entire software security ecosystem more secure. You think about all the open source dependencies that we have in everything that we do.

Ashish Rajan: Yeah.

Michael Hanley: If all you need to do is get somebody to click a link and have their GitHub account phished, and they're a maintainer on some project that you depend on the security industry would love for you to believe that their exotic zero day is creeping around every quarter and that you specifically will always be the target of that, undisclosed Oday that some foreign [00:07:00] intelligence service has.

But the reality is there's still actually a ton of phishing going on, and there's still a ton of things that aren't patched. Where GitHub can uniquely influence that, like stamping out, or making dramatically more expensive, the ATO (account takeover) problem, doing things like that to drive up security in the ecosystem as a whole, we try to find those unique leverage points and opportunities.

Ashish Rajan: Would you find that, to your point, your challenge is a lot broader, because most of us are trying to just solve cybersecurity challenges in our own organization, not across multiple organizations, who are paying enterprise customers for you guys as well. I guess one conversation that we've had recently is around the whole Copilot space, and I think you and I were talking about this. GitHub obviously has GitHub Copilot.

You guys have been using it internally for a lot longer and maybe a good place to start could be if you can define what that is and how you guys were using it before you made it public.

Michael Hanley: Yeah, I mentioned I've been at GitHub for about three years, and in the second or third month that I had been here, we had a company all hands where we showed the first internal demo of GitHub Copilot for code completion.

So the initial flavor of Copilot that we released, and we still have [00:08:00] this today and it's been very successful, is the experience in the editor: you're in the editor and you have your Copilot riding along with you in real time, using context to help you complete the tasks that you have at hand.

And when I saw the demo, it was one of those moments where I had to run the scroller back and say, wait a second, did I see what I thought I saw? Mind you, again, I had been at GitHub for two or three months at that point. And at the end of that, I really had this, this is actually going to change everything, sentiment.

And we've now seen, of course, that more than a million developers out there are now using this day in, day out. And the impact is pretty phenomenal if you think about it. Again, internally at GitHub, we've been using it for close to three years at this point.

And that's inclusive of things beyond code completion; I'll come back to that in a second. But when we go out and actually talk to developers who are using Copilot, some of them are doing coding tasks like 55 percent faster. It's writing up to 60 percent of their code in some languages.

We think this number could grow as high as 80 percent over the course of the next few quarters, next few years. And this is a fundamental shift and transformation in the landscape. And again, that's just code completion, right? That's just that task. And if you think [00:09:00] about the entirety of the work that you have, the entirety of the experience of being a software developer, that's where you're bringing the idea to code, which is an amazing leverage point for a lot of things around productivity, but especially security.

That is the cheapest, most effective, most durable place to have good security input impacting a developer because it's in the flow of them bringing an idea to code.

But there's tons of other places where we can have a positive impact on developers from an AI perspective. So some of the things that we showed last fall at GitHub universe, where we're embedding AI experiences or Copilot experiences into other parts of the broader software development life cycle.

That's also delighting developers: the ability to have a conversation with your docs, the ability to get automatically suggested security fixes. And mind you, this is less than two years of commercially available developer tools that are built on top of new AI technology.

So it's hard for me not to be very excited about what the future holds when we see these phenomenal results, again, at less than two years of it being commercially available. Even internally at GitHub, it's had just a massive impact on our developer experience. And I think pretty much every GitHubber would tell you they wouldn't go without [00:10:00] it at this point.

Ashish Rajan: Yeah, wow. And it's funny, I'm glad we spoke about the three years, cause that was definitely, for people who've been keeping a timeline on ChatGPT, before ChatGPT happened as well. So it would have happened at GitHub anyway; it just happens to be that suddenly the whole GenAI space blew up at that point in time.

I would probably say, maybe to paint a picture for people, because we started the conversation with, hey, these are some of the challenges people have, developers look at it, and now there's GitHub Copilot as well. Where do you see the future going for this? Obviously I'm thinking hypothetically, but is it that we're primarily after increased productivity, more secure code, higher quality code, more test coverage? What are we looking at in the potential future? And it doesn't have to be GitHub Copilot, just in general.

Michael Hanley: Yeah, I think yes to all of those things, maybe that's a good place to start.

There's a lot of different places where it could go, but I'll say one of the areas that I'm personally most excited about is, I think this will be a transformative moment, not just for developer experience, but in particular for security. So if you look at security in a very general sense, it's not something that, as an industry, we've had a great amount [00:11:00] of success with.

I would say certainly things have gotten better over the course of the last couple of decades, but also, it's 2024 and we are still telling people to patch their systems and turn on two factor authentication. And part of the reason we have to do that is because these things are harder than they should be for people to get the basics and foundational things right. So if you're telling organizations they still need to do that stuff because it's difficult, you're never going to get to the point where they're magically fixing all of their technical debt. But if you look at what AI is doing, we talk about shift left in security, and shift left has typically meant, hey, when you run tests, you're going to get feedback from your CI. Okay, that's cool. That's ideally before you ship to production, you're going to catch some stuff, and that's left of a vulnerability being in the wild. But now shift left means, as you are writing code, bringing the idea from an issue or a conversation you had, through the keyboard and the editor, to code at its genesis, you're getting security input there. And you're getting security input from a pair programmer that, even today, at less than two years of something being commercially [00:12:00] available, is well trained, understands security well, and can emulate some of the capabilities that you would have in static analysis tooling.

And you're going to actually shift so far left that you're going to prevent in many cases, vulnerabilities from ever being shipped in the first place. So this is like transformational in terms of thinking about secure software going forward. Yes, let's have conversations about memory safe languages.

Let's have conversations about lots of other good stuff that's going on in the space. But to have that expertise with you all the time, especially knowing that there's just a shortage of security expertise in the world, that matters. And frankly, a lot of security tools and experiences that are bolted on later are not good experiences for developers; they're taking you out of the flow to go somewhere else to get information that's probably designed by somebody who knows a lot about security, but not necessarily for the security layperson, if you will. Generally, security is just not always a great experience. So to have that is a big deal. But I also think it opens up another opportunity, because we talk a lot about security technical debt, or even just quality technical debt in general.

We now have the opportunity, because we have such leverage from AI, to say we might be able to go [00:13:00] back and actually exterminate things retroactively, and go fix things that we never would have had the person power for previously. And to me, these are some of the real opportunities. Again, it's early days. You mentioned ChatGPT a minute ago.

People forget that ChatGPT is only, what, 15 months old, and that was really when this went mainstream, because of the natural language interface. It's so early, so there's so much yet to be excited about on this front. Personally, that's where I'm really bullish. Again, there's lots of great stuff on productivity and how people interact with code and with the projects that they're building. But on topics like security, I think we really are now at the advent of a real possibility that we're maybe going to start getting caught up.

Ashish Rajan: Yeah, at least for the new code that we produce. Cause I also wonder, from a security perspective, a lot of people are a bit hesitant about using anything.

And it could be because ChatGPT is primarily being blocked by a lot of organizations, because they're scared of it, or the data, or whatever. Yeah. But I guess, having done this internally at scale, a lot of people who we have as listeners, or people who watch the content, are always working for large enterprises.

What are some of the [00:14:00] challenges you find people would or may come across if they were to use GitHub Copilot across the board? Or maybe, is there a recommendation for how they should start using it as well?

Michael Hanley: Yeah. What I find from talking to a lot of customers is it's just uncertainty, right?

Because AI sounds like this new thing, but if you think about it, we called self driving cars AI five years ago, and now we just call them self driving cars, right? So this is just that next thing. And what I always recommend to people is to step back and say, in security, we understand how to assess risk for things, right?

In fact, in many cases, we've actually been doing that the same way for 20, 30 years, and a lot of those practices and procedures have been durable. So when you think about assessing the risk of AI, it's actually a lot of the same questions that you would ask for any other third party tool. Who's got my data?

Where is it going? How is it secured? What's the security and trustworthiness of this company? In some cases you might be asking for things like compliance benchmarks. And I'd say start from the things that you already know; we don't need to invent new risk analysis frameworks. People will try, but we don't [00:15:00] need to invent new risk analysis frameworks at the outset for things like AI, especially, frankly, in a space that looks different every month because it's moving so quickly. And the rate of change is only accelerating, which is, again, exciting. It presents a lot of challenges and opportunities, but I always advise people to just go back to those things that you know, and ask a lot of those same questions.

And with AI, I think it's particularly important to work closely with partners in legal teams and finance teams, teams that traditionally are shared stakeholders in risk along with the security team. It's really about, don't be the security department of no, be the security department of yes.

Because of the pace of change here, and the impact on business, even if you just take software development, the impact of AI on software development has already been massive. When you're looking at returns like 50, 60 percent of the code being written by an AI pair programmer that costs a fraction of the salary of a developer, you're basically giving your developer a Superman cape to go get that much better at security. Those are undeniable economic returns. Or again, you can apply that to how AI can assist in really any other [00:16:00] job family. So I think getting to a yes actually ends up being important for the future of your company being competitive, successful, and operating well. And of course, regulatory environments and things like that are going to come along at different rates, but the upside, the opportunity, to me it's worth figuring this out. It's worth figuring out how to go through those risk management exercises to get to a yes.

And again, I think we already know how to do risk management well. So it's important not to get caught up in, okay, it's scary, it's new, I have a lot of uncertainty. Actually, I know how to ask questions about vendors and how they interact with, store, and process our data, and going back to what you know to help you assess those risks, I think, is a great place to start.

Ashish Rajan: Yeah. I guess that leads to the obvious question: how is data stored and processed in GitHub Copilot? Is there an answer to that already as well?

Michael Hanley: Yeah. So the good news is, and we can include this in the show notes, we publish all the specs for where we provide customers options for what we store and what we don't, and how things are trained, and how all of that is processed.

And we provide customers some amount of options in those cases too. I think the meta point is that transparency with AI, if you're a vendor, is [00:17:00] really important, because you need to acknowledge that, again, this is new and unfamiliar to a lot of folks.

And even though I would assert that a lot of the same risk management processes we've been doing for decades can still help us answer questions effectively for AI, you do have to acknowledge that it's novel, right? When you look at things like, wow, 50 percent of my code, it's a wow moment in a good way, but it can cause some people to want to understand the details of that.

So I think leading with transparency ends up being really important for those vendors, to make sure that customers can actually build trust and that you're acting in a trustworthy way, because otherwise you can see how people would get into some of those states. And getting people comfortable with this new technology through transparency and answering questions openly and honestly, I think, ends up being very critical.

Ashish Rajan: I'm glad you mentioned transparency. Maybe that should become part of the risk management exercise going forward.

Yeah. And I will definitely put the links in the show notes as well. I think another thing that kind of comes back to transparency, from a cultural perspective, is the whole DevSecOps world, which is in that realm of, hey, how do we make developers write higher quality code, and how do we work with [00:18:00] them, and all of that.

Is DevSecOps being redefined as well through the use of AI?

Michael Hanley: Probably, in a lot of the same ways that we talked about a minute ago. First off, a lot of people probably define DevSecOps slightly differently depending on where you are. But let's just say, for a moment, it's the idea that the developer teams and security teams are working on things that you're generally shipping to production multiple times a day.

And there's some continuity between your developer systems and the production, customer facing thing that you're building and running. With that in mind, yes, it's going to change. And it's going to change in part because that experience of where you add security value is going to continue to get moved to the left.

So the state of the practice up until the last two years has been: there's not enough security people; we rely on security tooling that's clunky and gives you feedback late in the process; developers have to do a lot of context switching in and out of security, as if it is this extracurricular thing, not something that is embedded into everything that you're doing.

But now what we're saying is, in real time, I don't even need to have a security engineer shoulder surfing you. In fact, that's probably not possible in really [00:19:00] any organization. I think we have a pretty well resourced security team here at GitHub, but I certainly don't have enough people to shoulder surf every single software engineer one to one, nor do I think that would be a good use of investment resources.

But you're going to effectively have some of that same amount of help and expertise. And by the way, you also get the benefit of, we talked about ChatGPT, or interfaces that are natural language. Let's abstract that a little bit and talk about just natural language interfaces with AI. I basically have a staff engineer that I can just ask questions of anytime I want.

And what's cool about natural language is, security can be a little scary to people who aren't in it, and it can be intimidating, especially because of how security has been done in certain places, or experiences that people have had with security teams. People may not actually feel comfortable asking a question in all cases. But if you can just go into a ChatGPT window, or you can go into Copilot Chat or whatever your natural language interface is, and say, I'm looking at this function, am I following security best practices? Or, what are the security best practices for implementing X, Y, and Z in this language? And you can get a response back that is trustworthy, that's factually correct, that you can go implement. This is a pretty big game changer [00:20:00] in terms of giving you real time feedback, where otherwise maybe it's submit a ticket and hear back in 48 hours.

So there's a big shift in that, which I think is very pro developer and pro developer experience, again, not just for productivity but for security, et cetera. So I do think it changes DevSecOps in some of those ways, and again, you're just pushing more of that security value further left. But I think the note of caution, and I mentioned self driving cars earlier, is that this is not level five self driving. You can't just be like, cool, I have the AI agent, hands off the wheel, I'm going to read a magazine or whatever while the car's driving. Of course you wouldn't do that. That's not safe, and that's not a good practice. So I view it as a major enhancement and a major force multiplier and lever, and you still need all the other stuff that you do across the secure software development life cycle, but you should just expect better results from it. So I think that's how I would describe the state of the practice and how it's changing.
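The exchange Michael describes, highlighting a function and asking whether it follows security best practices, often looks something like this sketch. The vulnerable function, the injected input, and the parameterized-query fix are all illustrative assumptions for this example, not code from the episode or from Copilot itself.

```python
import sqlite3

# A developer might highlight this function and ask their chat interface:
# "I'm looking at this function, am I following security best practices?"
def find_user_unsafe(conn, username):
    # String interpolation builds the SQL text directly, so a crafted
    # username like "x' OR '1'='1" rewrites the query: classic SQL injection.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

# The typical suggested fix: a parameterized query, which keeps user
# input out of the SQL text entirely.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
    injected = "x' OR '1'='1"
    # The unsafe version returns every row for the injected input;
    # the parameterized version matches nothing.
    print(len(find_user_unsafe(conn, injected)))  # 2
    print(len(find_user_safe(conn, injected)))    # 0
```

This is exactly the kind of feedback that used to arrive late from a scanner in CI, surfaced instead while the idea is being brought to code.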

Ashish Rajan: Also, to your point, you would still require whatever regular security practices you have from a DevSecOps perspective, or whatever else you do from a, hey, [00:21:00] I want to maintain high quality code perspective. It's just that if you start using AI technology, or at least start adopting it, you may start seeing better results.

So your overall workload should hopefully improve as well.

Michael Hanley: Yes. And I mentioned we've been using Copilot for three years. There's been no need to scale the number of security engineers as a function of the additional code written at GitHub. In fact, I would say there's been no discernible negative impact on the defect rate, based on the fact that we're writing one and a half X more code, which is, again, pretty phenomenal, right?

You're getting a big benefit there without having to offset it through additional person power. But again, I'm very excited and bullish on just where this is going to go generally, cause it's a phenomenal set of impacts. But yeah, you do need to keep your hands on the wheel.

You do need to have those corresponding investments. You can't say I'm going to cash out of my security budget and instead buy something that's a pair programmer. That's definitely not the signal that we'd want to send to folks.

Ashish Rajan: Awesome. I guess the next question following on, and maybe this is where a lot of the resistance from security to adopting new technologies comes in, especially if it looks like it may replace their jobs: [00:22:00] where is the balance between innovation and being secure when we go into the AI space?

Michael Hanley: First off, our view is that the Copilot branding is very intentional for GitHub. It's the idea that it is helping you do your best work, not replacing you. We would have called it Autopilot, or just Pilot, if we thought that was going to be the name for it, but it is Copilot, because it is unlocking human progress and also enabling really anybody who wants to be a developer to be a developer.

In that respect, you can think about it as an entirely new pathway for people to get into development. Some people do traditional four year degree programs. Some people get in through participating in open source. Some people do coding bootcamps. Now you can learn by just asking an agent, or having a conversation in natural language with a chat interface, which is pretty cool.

And it's going to unlock a lot of new possibilities. But in terms of the fear of, eh, I'm worried about this taking my job, I actually think this is going to create new jobs. And I just got a question about this. We do our sort of monthly internal AMAs, and I have one coming up later this week.

One of the developers asked me whether we are going to [00:23:00] change our practices internally for how we think about interviews, to explicitly expect that people interviewing for jobs at GitHub are going to use things like Copilot. And my answer was yes, because that is going to be the norm. It is the norm already for more than a million developers, and this is only going to accelerate.

So the question is not, can you still do it without Copilot? Though I think there's maybe some signal in that. The question increasingly becomes, how effective are you at pair programming with an AI? I do think that is an evolving way of working, but again, it's one that makes development much more accessible to a broader range of people.

And frankly, in my own personal experience in the editor now, I love the chat interface, because I can just highlight a block of code and say, I think this is doing this, can you explain it to me? That's a great, quick way to have a narrative dialogue back and forth to understand what's going on in something.

So I think we'll see more of those changes happening over time. But again, I would expect it actually creates net new opportunity because more people can get into development who haven't been able to be in it before.

Ashish Rajan: I love the analogy that [00:24:00] you might expect people to already know GitHub Copilot. I'm going to age myself as I say this, but a few decades ago, people used to put down, I know PowerPoint, I know Excel, I can type this many words per minute. That is not a thing anymore. People just expect you to be able to type on a computer at whatever speed and know Word and Excel, even if it's a Google version or whatever.

And I love what you said, that it is a Copilot, so it's going to work with you. It's the same way, I think, that universities nowadays have open book exams. Because before that, it was like, hey, you can't take a book to your exam. Now you can take a book to your exam, go bonkers.

Yeah. Is that how you see this as well? It is.

Michael Hanley: Yeah. I would describe the advent of AI enabled tooling and AI developer tooling generally as a permanent structural change in how software engineering is done. This is not a fad. It will not go back to the way it was.

This is not going to disappear and fade out in a year. In fact, as I mentioned a little earlier, my personal belief is that the rate of change is dramatically accelerating now. And we're still in the early innings on this; there's a lot of great work happening across the industry.

I'm really excited about all the work that we're doing at [00:25:00] GitHub. But at the end of the day, this is great for developers. It's going to mean new pathways for how we train people and get them into development, but we also might think about adapting the more traditional ways that we've trained developers.

I've talked to some leaders at some of the large leading computer science schools here in the United States, and they are looking at this similarly, which is to say: we want to teach students in a way that acknowledges how things are actually getting done in the real world now. And increasingly, even in large enterprises, that's with these tools.

I think there's a little bit of a myth that large enterprises are averse to AI. On the contrary, what we're seeing is that even more traditional large enterprises are looking for ways to unlock productivity and security improvements with AI. So these schools want to teach their students with the idea that they have access to these things, which, again, I think is great. But it is a different way of thinking than, to use your point, the closed textbook type of testing that we used to have.

And again, we'll adapt to this. I think it's one of those things where this is a good set of problems, a good set of challenges to have to adapt to, because the opportunity is [00:26:00] multiplicatively better than doing nothing.

Ashish Rajan: I love the example. When you mentioned the university, I'm like, Oh my God.

Yeah. I remember studying and going, this is not going to be helpful. And I learned Java ages ago, and I remember just being really bad at it, to the point that I used to hate it. I feel like having a Copilot back then would have helped me understand it.

Not that there was a Google back then, but I'm sure even with Google, you're still looking through hundreds of articles to figure out what the right way to do something is, and how you get to a point where it's safe to use the code from a security perspective. There's a lot of nuance to just writing code these days.

It's not just as simple as I'm going to copy paste from Stack Overflow.

Michael Hanley: Yeah, exactly. And I think what's also interesting is that there are a lot of different learning styles. I know personally, for me, online learning was always tough, probably just because I have a really short attention span; that was my own limitation and shortcoming.

But again, there's the idea that maybe you are sitting in a classroom getting a lecture from a teacher, whether it's in a bootcamp or a college classroom setting, but you can ask follow-up questions without needing to go to office hours, because you've got a chat interface [00:27:00] right there. I believe that we will see developers who are completely self-taught through interaction with agents or chat interfaces over the course of the next several years, because that's now generally accessible to people.

Ashish Rajan: Yeah. I think the future, for me, is definitely a lot brighter, I would say. And to quote what you said, this is still early days. We're not even in proper AGI mode yet. This is still only about 15 months of ChatGPT, and maybe three years of you guys using Copilot, and a lot can happen between now and another couple of years from now, or maybe even longer.

The other question that I have, having done this at a large organization: some of the listeners we have are obviously CISOs or SVPs of engineering like yourself. What would be your advice for them to start laying the foundation for this in their organization, for AI in general? Because I imagine it's not just as simple as, hey, I've purchased GitHub Copilot, I'm ready. I'm sure there are cultural challenges and other challenges around it.

I'm sure the cultural challenges, there are other challenges around it. What could be some of the low hanging fruits that they can aim for, at least to have the right foundations to start working on it?

Michael Hanley: First of all, go try and buy GitHub [00:28:00] Copilot. But I think really what you're getting at is, what's the strategic approach, abstracted from any particular tool?

And I think the most important thing, whether you're a security practitioner or a head of engineering, or you do both jobs like I do, is actually to go talk to your developers. What are your developers doing? Do you understand it? What do they need? What are their top friction and pain points?

What can you do that would make them 10 times more productive? What can you do that actually improves their happiness? Which I think is an oft-overlooked metric. I'm not aware of a happiness DORA metric, for example, but you can ask developers, do you enjoy doing your job?

And what's cool is, there's probably not actually a lot of tools that you can buy that make developers happier. But the data suggests that developers are overwhelmingly happier using things like GitHub Copilot or other types of AI pair programmers, because it actually takes some of the boilerplate parts out of their job, and they can spend more time doing the work that's creative. So if you're a business leader in that security and/or engineering leadership context, go and actually talk to the developers to understand where those friction and pain points are, and then wed those to the right medicine that [00:29:00] actually solves those problems or soothes those ailments. That's great because, on the one hand, you've actually gone and heard them, and they know that you've heard them.

And on the other, you've addressed it by doing something specific, actionable, and measurable to improve the quality of life for the developers. I think that's a great way to, A, engender trust with the team, and B, help them do the thing that they came to work to do, which is probably to create awesome stuff.

So I think talking to developers is really important. And then, if you're in a company that's selling stuff, which I guess would be most every company to some degree, if you have a service or something that you sell to customers, it's also talking to them to understand their concerns about AI, whether it's their own consumption of it, or things that were maybe built with the assistance of AI, to make sure that you're hearing that as well.

And again, transparency is a big part of this: bringing people along and socializing what's happening to help them get comfortable with it. Helping them go through their traditional risk management type approaches, to assess whether AI is right for them and how they can onboard it, is the other piece of it.

So really, talking to your developers internally, [00:30:00] talking to your customers externally, and making sure you really understand those constituencies is just going to help you be a better operator, period, independent of which role, or both, you have.

Ashish Rajan: And would you say bridging the gap between engineering and security is where you're going with this as well, with talking to developers? Obviously, you hold the dual role.

For people who don't have the dual role, are there any thoughts on how to bridge that gap? And how do you see the dual role when it comes to building developer-friendly security? Because from a security perspective, people think, I need to maintain compliance, I need to manage risk, while at the same time, on the engineering side, I want to do innovation as well.

And I feel you answered part of this with how you find the balance between innovation and security. Not that I want every organization out there to change to a dual role, but is there some advantage to going down that path?

Michael Hanley: Yeah. I think there's a ton of advantages that we realize on a daily basis.

And I would maybe be a little bold and say I hope more organizations look for ways to follow suit, because often the security team is walled off in an ivory tower and [00:31:00] disconnected from what's happening in the organization. I think that can be harmful, not just to productivity but to security as well.

You can be doing great security work in the ivory tower, but ultimately, if it doesn't translate to what's happening on the ground, or it's not something that your developers can work with, or it's otherwise just not landing well with teams, that's probably not the best outcome for the business. And I think what's probably a common misunderstanding or misconception in the space is this idea that very high security assurance is opposed to productivity, speed, and velocity, especially when you look at the modern software-driven organization, which is probably most organizations at this point, since software is only accelerating.

Generally in the space, you might say security is just a drag chute for a lot of these folks, or, we'll figure out how to do the minimum on this so that it doesn't slow us down too much. But I would reject that thesis. I would actually say that if you're thoughtful about it, you can build security not for what the security practitioner thinks is needed, but for what is actually needed, through an understanding that comes from [00:32:00] actually working with the developers.

The business outcome that I need from a security standpoint is this: I'm going to go talk to the developers and say, we need to do these things; how can I do this in a way that works great for you, that is a great experience for you? You're going to get some hard feedback at the onset of that, but I believe if you spend that time, it's possible to get it right.

I'll give you an example of this. When we rolled out two-factor authentication to the millions and millions of users that we've enrolled in two-factor on GitHub.com over the course of the last year or so, we actually made the decision to do that internally many months prior. And before we flipped the bit on enrolling a single person on GitHub.com, we first did a ton of customer research. We went out to open source maintainers, we talked to engineers internally, and we talked to some of our big commercial customers and said, we feel like we need to do this, because the outcome that we care about is better security for the overall ecosystem.

And I think everybody can agree that phishing and account takeover are big problems. When you can ground it in that, yep, everybody gets it and can generally agree with it. And it's not what Mike says; it's what the industry data [00:33:00] shows. Then you can move forward from there and say, okay, knowing that we want to do this, help us understand what you would need to be successful.

And the answer is different for open source communities, our internal employees, and some of our commercial customers. But we really spent the time to get that right. For example, we saw less support ticket volume than even our best-case expectations for that rollout.

And I would really credit the team for that; the reason is we spent just a ton of time talking to people about how to get this right, and it ended up being a no-op from a support ticket driver standpoint. I'm proud of the work that the team did on that, but it required the better part of a year of thoughtful research, actually talking to the humans who have to interact with this stuff to understand what they need. So I think that focus on the design of great security experiences is what's key, because 99 percent of the time it's not actually the security team experiencing them.

It's real people in finance and design and marketing and engineering, trying to get their jobs done. Thoughtfully putting the time into making sure that it's an easy, good experience for people is what matters. And again, when you're in [00:34:00] an organization that's very engineering-driven,

like GitHub, having those two organizations under the same roof and under the same leader means you can really have that super high degree of alignment on the priorities: the website always needs to be up, always needs to be secure, always needs to be accessible. And we just drive that through the organization.

It's actually worked quite well from a developer experience standpoint.

Ashish Rajan: Awesome, and very well said as well. It's funny how much of it is just basically finding the right answer: doing initial research, like any other product out there, just to know and validate that, hey, this is actually a thing we should work on. That's great business practice in general, I would say.

So yeah, good on you guys for doing that, and for reaching out in general. That's most of the technical questions I had; now for the three fun questions I have for you. The first one: where do you spend your time when you're not working on solving AI challenges?

Michael Hanley: When I'm not doing my day job, which is sometimes also my night job at GitHub, when I'm not working on things related to GitHub, I'm blessed to have a big family.

My wife and I have eight kids, with the ninth one on the way. In some ways you could say that being able to deal with uncertainty at scale, [00:35:00] and cope with it both at home and at work, ends up being a trait that plays well in both spaces. The main point I would make is that, as a leader, thinking through what's important to you and how you balance those priorities helps. If I do a great job with my first and most important job, which is to be dad to my kids and husband to my wife, I can do a great job in my work at GitHub. Always keeping that balance and prioritization in place ends up helping a lot. But yeah, most of my free time is dedicated to the folks under my roof.

Ashish Rajan: It definitely would help a lot.

I'm sure a lot of people reach out, like, how do you manage eight kids, or nine? So the second question I have is: what is something that you're proud of that is not on social media?

Michael Hanley: Yeah, my family, the answer I just gave you. I'm very proud of my whole family. And I am proud of some of the great career experiences that I've had along the way too.

I've really been fortunate to be a part of some great organizations and including GitHub and just feel very blessed at home and in my professional life.

Ashish Rajan: Awesome. And the final question, what is your favorite cuisine or restaurant that you can share?

Michael Hanley: So, my favorite cuisine: [00:36:00] I'm generally a smoked meat and barbecue kind of guy. It's like a side hobby. I'm not very good at it, but I manage to barbecue, and it's very forgiving. That's the advantage of me not being good at it: it's hard to have bad barbecue. So I have a tolerance for the barbecue that I produce on a somewhat regular basis.

Ashish Rajan: I love that. That's awesome. My favorite as well. And to your point, I'm really bad at it too, but I guess at least I get to eat a lot of it. I don't think I could be the one that stays up, though. They wake up at 3 AM for something that has to be served at 12, and I'm going, I don't know if I can do that. But I appreciate you sharing that.

And that was the end of the questions as well. Where can people find you to connect with you, or just find you on the internet to know more about what you guys are doing with GitHub Copilot and other things?

Michael Hanley: Sure. You can find me on LinkedIn and Twitter: just search for Mike Hanley and GitHub on LinkedIn, and you'll find me there. Same thing on Twitter. And then also, of course, you can find me on GitHub; my handle there is MPH4. If you're interested in hearing more about some of the work that we're doing at GitHub, you can just go to github.com. We [00:37:00] are hiring right now. If something like Copilot is really exciting to you and you want to get involved in that, we have many jobs open right now where you can come work on tools that are changing the lives of millions of developers every single day. We'd love to hear from you if you're interested.

Ashish Rajan: Awesome. I'll definitely include the hiring link in there as well, but thank you so much for coming on the show, Michael.

Michael Hanley: Yeah, it's my pleasure. Thanks for having me.

Ashish Rajan: I'm looking forward to doing this again, hopefully in person next time, but thank you for making the time. Awesome. Thank you again. Thank you for listening or watching this episode of Cloud Security Podcast. We have been running for the past five years, so I'm sure we haven't covered everything cloud security yet.

And if there's a particular cloud security topic that we can cover for you in an interview format on Cloud Security Podcast, or make a training video or tutorial on for Cloud Security Bootcamp, definitely reach out to us at info@cloudsecuritypodcast.tv. By the way, if you're interested in AI and cybersecurity, as many cybersecurity leaders are, you might be interested in our sister show, the AI Cybersecurity Podcast, which I run with Caleb Sima, former CSO of Robinhood, where we talk about everything AI and cybersecurity.

How organizations can deal with cybersecurity on AI [00:38:00] systems and AI platforms, and whatever AI brings next as the evolution of ChatGPT and everything else continues. If you have any other suggestions, definitely drop them at info@cloudsecuritypodcast.tv. I'll drop the links in the description and the show notes as well so you can reach out to us easily.

Otherwise, I will see you in the next episode. Peace.