- Google Cloud CISO: Shift Down not Left, 4 ways Google uses AI for Security (with Phil Venables)
Moving from artisanal to industrial security at scale, and how Google uses AI for reversing malware, analyzing hacker videos, fuzzing, and more
During RSA week 2025 I had the pleasure of interviewing Phil Venables, who writes one of my absolute favorite blogs, which contains tons of insightful posts on building highly effective, scalable security programs.
Phil Venables was CISO of Google Cloud for over four years, CISO at Goldman Sachs before that, and is now a partner at Ballistic Ventures.
In our conversation, Phil and I discussed:
Delivering security at scale (artisanal → industrial) and shifting down, not left
How Google leverages AI for security, what’s under- and over-hyped in AI
and more!
You can watch the video on YouTube below.
I've also pulled out the key takeaways in this post; the cleaned-up full transcript is at the bottom.
Questions, thoughts, feedback? Let us know!
Table of Contents
Key Takeaways
Delivering Security at Scale: From Artisanal to Industrial
The reality is that with the right experts and resourcing, we do know how to secure computer systems. The tough part is securing a system with all of its dependencies at scale.
Phil discussed an example from a prior job where a standard crypto upgrade in one product ended up taking 18 months and touched every system in the whole company.
Another example: for the Struts vulnerability, companies knew the importance of fixing it, but they struggled to upgrade all of their application software to be able to operate with the new version.
Shifting from Lagging to Leading Indicators in Security Programs
Many security programs focus on lagging indicators (How many vulnerabilities do we have? Are we having breaches? Are there data leaks?). These are important to track, but it's more important to track leading indicators.
One leading indicator: What percentage of software is reproducibly built? Without that, it's hard to patch and upgrade effectively.
Moving from lagging to leading indicators gives you better security plus adjacent business benefits (engineering agility, faster time to market) that make the security team more integrated with and supportive of the business and development teams.
Shifting Left Vs. Shifting Down: Embedding Security into Platforms and Frameworks
The evolution beyond "shift left": While shifting security earlier in the lifecycle matters, the bigger opportunity is "shift down" - embedding security controls into application frameworks and cloud platforms so developers inherit security by default.
Shift down goes beyond auth: It includes mitigating SQL injection, XSS, and other common vulnerabilities directly in the framework layer. Plus reliability, resilience, change management, and phased deployment patterns - all automated in the platform.
GCP’s Secure by Default Breaking Changes: When GCP shipped org policy changes with security features fully enabled by default (despite some being breaking changes), 99.5% of customers kept the controls on.
OpenSSF's Alpha-Omega targeted approach: Rather than trying to secure all open source, they focus on the critical packages. There are also industry-specific efforts, like FINOS for financial services' unique open source dependencies.
AI For Security: The Transformational and the Incremental
Beyond chatbots: The real AI transformation is happening in pharmaceutical discovery, material science, and other domains - not just conversational AI.
Google's internal AI security wins:
Auto-reversing malware uploaded to VirusTotal using AI, then feeding detections into Google Security Operations.
Using NotebookLM to analyze hacker videos and extract new techniques (saving hours of analyst time).
AI-generated incident reports that humans actually prefer over human-written ones.
30%+ improvement in fuzz testing coverage by using AI to develop better fuzzing harnesses.
The power of stacked benefits: While everyone looks for transformational AI use cases, the bigger wins often come from stacking 20-30 small workflow improvements that collectively 10x team productivity.
How to start with AI: Don't have a management meeting to decide the top 10 use cases. Instead, give teams controlled access to AI tools and let them self-innovate. Example: Mandiant's threat intel team independently started using LLMs for better attribution analysis of North Korean IT workers.
AI in Security: Overhyped vs Underhyped
Overhyped: High-end adversarial AI use. Attackers are mostly using AI for basic productivity gains (better phishing emails, deepfakes) - not for finding novel vulnerabilities or attack chains. The defense isn't AI, it's phishing-resistant auth and better business controls.
Underhyped: AI-generated secure configurations. LLMs fine-tuned on infrastructure-as-code + RAG with organizational templates = massive reduction in "time to hello world" while maintaining security standards.
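The retrieval half of that pattern can be sketched in a few lines. Below is a minimal, hypothetical Python sketch of the retrieval step only: picking the organizational IaC template most similar to a request (by naive token overlap, standing in for the embedding search a real RAG pipeline would use), which would then be fed into the LLM prompt as grounding context. All template names and contents here are made up.

```python
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def best_template(request: str, templates: dict[str, str]) -> str:
    """Pick the organizational IaC template with the highest token
    overlap with the request. A real pipeline would use embeddings,
    then stuff the winner into the LLM prompt as grounding context."""
    q = tokenize(request)
    return max(
        templates,
        key=lambda name: len(q & tokenize(templates[name])),
    )

# Hypothetical organizational templates, reduced to keyword summaries.
templates = {
    "gcs-bucket": "resource bucket storage uniform access private",
    "cloudsql": "resource database sql private ip no public access",
}
print(best_template("private storage bucket for logs", templates))  # gcs-bucket
```

The interesting property is that the generated config starts from the organization's own vetted baseline rather than from whatever the model happens to remember, which is what keeps "time to hello world" low without sacrificing the security standards baked into the templates.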
Transcript
Introduction and Phil Venables’ Background
Clint
Hello there. My name is Clint Gibler. I'm the head of security research at Semgrep, and today I'm thrilled to be joined by Phil Venables. Welcome, Phil.
Phil Venables
Yeah, pleasure to be here.
Clint
So if someone hasn't heard of you already, could you tell us a bit about yourself and your background?
Phil Venables
Yeah, so I'm sufficiently old that I've been doing security a long time: over 30 years, and I've been a CISO at four different companies.
So I started off as a software engineer building all sorts of industrial control system software. Then I stumbled into doing security for banks, building cryptographic software for high-value payment systems. And I got my first internet security exposure in 1995, putting a traditional bank online on the Internet for the first time, one of the first banks that went online.
And then after that was Chief Information Security Officer for Goldman Sachs for many years. And then for the past four and a half years I've been in senior security roles at Google, including CISO for Google Cloud, which is where I still am as a strategic adviser.
But I also do some other things for Ballistic Ventures, an early stage cyber venture capital firm, and some other things. So yeah, it's great to be here. And after a long time in the industry working on security.
Clint
Yeah, I think the breadth of experience really comes through in your blog, which is one of my favorite blogs to read on the entire internet. Whenever I get an email saying there's a new blog post, I'm very excited.
Phil Venables
That's good to know. I think there's a lot of technical content out there but not a lot of how to run a security program content based on experience from real practitioners. I would encourage a lot more CISOs to write blogs of their experiences as opposed to just technical blogs. We definitely need more of that in the community.
Clint
Yeah, I totally agree. There's a lot of, here's how to exploit this issue, here's how to secure this thing, like the technical weeds. Yes, but how do you run an effective security program? What are the metrics you should care about? Just like how to think about it and things like that, that are less covered.
Delivering Security at Scale: From Artisanal to Industrial
Clint
So actually, along those lines, one thing I'm curious to hear more about: you had a series of blog posts, and I think one of them was specifically about running a security program going from artisanal to industrial. There were a bunch of cool points in it, and I was wondering if you could share a bit about that.
Phil Venables
So I started thinking more about what the challenge of security in the real world actually is. This may or may not be a statement of the obvious, but the reality is: we do know how to secure computer systems.
I can take a computer system, put it on the Internet, and have it do something useful. With enough experts on software security, detection, operating system hardening, hardware security, and a whole array of other things, a collection of experts can run a secure system against all reasonable adversaries.
The problem is how you do that at scale. When you've got that one system with enough experts, or artisans really, doing that, the question is not whether we can secure computer systems; it's whether we can secure computer systems with all of their dependencies, across their entire large-scale environment.
And so how to think about moving from that artisanal to industrial scale is the key to doing security. A lot of it, and I think you guys work on this a lot, is around the issues with dependencies.
So again, I always remember this was burnt into me like over a decade ago. I was working in one organization and we had to upgrade the crypto in a particular directory server product. I think it was moving from like Triple DES to AES. It was just the standard crypto upgrade that you need to do. And it was literally just flicking a switch.
The problem is that flicking that switch required us to also upgrade a bunch of other authentication systems that needed to be upgraded to work with that directory server. And you think, okay, we'll go do that.
The problem then was there was a bunch of network file system software that needed to be upgraded. Then that caused a database upgrade, which caused a bunch of other application upgrades. So that one flicking of that switch required us to literally touch every system in the entire company, and it took us 18 months.
That's an example of where you know how to do security, but security at massive scale with intricate dependencies and other real world software is where all the challenges come.
I also remember, a few years ago, you'll probably remember the Struts series of vulnerabilities. That was a classic example where applying some of the security upgrades changed the nature of certain application servers, which then required a lot of companies to go upgrade their software. So for some companies that got criticized for having a breach associated with a Struts vulnerability, it's not that they were unaware of it or of the importance of fixing it; they were struggling to upgrade all their application software to be able to operate with the new version.
Clint
There's breaking changes in the library.
Phil Venables
Exactly. And so security of a system is different from security at massive scale with all the dependencies that real organizations have. And I think when you think about it that way, you start to have to do things in different ways.
Shifting from Lagging to Leading Indicators in Security Programs
Clint
Yeah, I like that framing of it and sort of calling out the intricacies and challenges there. I think one thing that comes across, and maybe you even have a specific blog post about this, is almost taking like a systems design thinking where like yes, security is an emergent property we want, but also, how does engineering work? How does the SDLC work? How does patch management work? All these things that all net contribute to sort of a system and organization that's well...
Phil Venables
I think also this is why it's important to kind of shift from lagging indicators to leading indicators. So the classic lagging indicators are just, you know, have I got vulnerabilities? Am I having breaches? Is there data leaks? All of those things, which you obviously have to measure.
But the real things to measure for effective security programs are things like: what percentage of my software is capable of being reproducibly built? Because, as we all know, if you can't do that, then you can't make a lot of changes; you're worried about making changes, and you can't patch and upgrade effectively. The great thing about that measure is it not only gives you a better security program, it also gives you better agility and time to market. So it has a whole bunch of other justifications, not just security. Many of these leading indicator metrics, like software reproducibility and infrastructure reproducibility, give you wider business benefit and great security. And you almost then can't help but have good lagging indicators.
And again, back to the industrial theme, it's a similar thing. When you look at high-tech manufacturing, or any type of manufacturing, those organizations are very focused on the quality of what comes off the production line. That's clearly important, but what they actually focus most of their metrics programs on is: have they designed for manufacturability? Are they doing preventative maintenance? Are there quality checks throughout the entire process? Are the machines working? If they get all of that right, they can't help but have good product coming off the end of the production line.
And I think doing that in security, moving from those lagging to leading indicators, not only gives you better security, but gives you all of those adjacent business benefits that make the security team more integrated with and supportive of the business and development teams. So I think that's a big key.
The Best Leading Indicators: Reproducible Builds and Infrastructure, Cold Start Recovery
Clint
Yeah, I like that. I also have some examples of engineering or broader company wins and security wins going together. But before we talk about that, I'm curious: do you think overall we should be focusing on leading indicators, lagging indicators, or both? How would you weight the two, and what would be your top three or so leading indicators?
Phil Venables
I think you have to really focus most on these leading indicators. I mean, you clearly have to look at the lagging indicators because ultimately you're judged on whether you have breaches, whether you have vulnerabilities.
You watch those outcomes, but you need to do other things to get those outcomes, and those things often aren't directly related to the outcomes themselves. Again, the more your software is reproducibly built, the more your software security program is going to be in good shape, and the more you can drive change, because you're less fearful of change.
I mean, in my experience, some of the best leading indicators are: what percentage of your software is capable of being reproducibly and reliably built and deployed? How much of your infrastructure is reproducibly built? So the classic immutable infrastructure design pattern: how much of your infrastructure looks like that? Being on the cloud, or in an on-premise cloud-like environment, helps you do that.
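As a rough illustration of how a reproducibility metric like this could be computed, one approach is to build each project twice in clean environments and compare artifact digests. This is a minimal sketch, not Google's actual tooling; the project names and byte strings are hypothetical stand-ins for real build outputs.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content hash of a built artifact."""
    return hashlib.sha256(artifact).hexdigest()

def reproducibility_rate(builds: dict[str, tuple[bytes, bytes]]) -> float:
    """Percentage of projects whose two independent builds produce
    bit-identical artifacts. `builds` maps project name to the pair
    (first build output, second build output)."""
    if not builds:
        return 0.0
    reproducible = sum(
        1 for a, b in builds.values() if digest(a) == digest(b)
    )
    return 100.0 * reproducible / len(builds)

# Toy data: one project embeds a build timestamp, so its builds differ.
builds = {
    "payments-svc": (b"binary-v1", b"binary-v1"),
    "auth-svc": (b"built at 10:00", b"built at 10:01"),
}
print(reproducibility_rate(builds))  # 50.0
```

Tracked over time, a number like this is exactly the kind of leading indicator described above: it moves before vulnerability counts do.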
Another one I like is this measure of what you would call cold start recovery, which is how quickly can you rebuild your entire environment in the face of some form of destruction or disruption? And again, this is a classic measure.
You see this a lot where organizations test their backups. They do a backup and they pull a file back, and if the file's there, they go, 'My backups are working.' But in an extreme case, that's not how backups work. You're effectively taking a bunch of immutable backups and a bare-metal or blank cloud environment and rebooting your entire company.
And in the work we've done with many organizations, we've seen a few failure modes in that. One, sometimes the backup system doesn't have the backup software, or it doesn't have the catalog and index of the backups that you need to use to restore them. Sometimes there are circular dependencies. I've seen this in some organizations where, to bring their authentication system back, they need DNS, but to bring DNS back, they need to reach the authentication system. And so that requires a different way of restoring.
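The circular-dependency failure mode described here can be caught ahead of a real incident by modeling recovery as a dependency graph and attempting a topological sort. A minimal sketch (the service names are hypothetical) using Python's standard-library graphlib:

```python
from graphlib import TopologicalSorter, CycleError

def restore_order(deps: dict[str, set[str]]) -> list[str]:
    """Return an order in which services can be restored, where
    deps[s] is the set of services s needs before it can come back.
    Raises CycleError when recovery is impossible without an
    out-of-band bootstrap path."""
    return list(TopologicalSorter(deps).static_order())

# A recoverable graph: storage, then DNS, then auth, then the app.
ok = {
    "dns": {"storage"},
    "auth": {"dns"},
    "app": {"auth", "dns"},
    "storage": set(),
}
print(restore_order(ok))  # a valid order: storage first, app last

# The failure mode above: auth needs DNS, DNS needs auth.
broken = {"dns": {"auth"}, "auth": {"dns"}}
try:
    restore_order(broken)
except CycleError:
    print("circular dependency: need an out-of-band bootstrap path")
```

Running this check against an up-to-date service inventory is one concrete way to turn "time to reboot the company" from a surprise into a measurable leading indicator.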
And then ultimately it's also a capacity issue. Sometimes you see these big ransomware or data destruction events where the organization has great backups, but it's still taking them three weeks to bring their environment back just because of the sheer bandwidth from the storage system to the recovery system. Very few organizations know how to measure the time to reboot their company. But when you get that measure, a classic leading indicator, you have to fix inventory, you have to fix backups, you have to do reproducibility. And that gives you great security outcomes, and great business outcomes and great efficiency as well.
So there's a few of these things. There's a lot of stuff you can do on data governance, identity and access, really measuring whether you've got your environment under control. That again, when you deliver that, you get better security outcomes, but you also get a lot of efficiency and better business outcomes as well.
Clint
Yeah, I like that a lot. To use the Struts example from before: if the leading indicator is how long it takes us to patch, or how easy it is to patch, and the answer is that we can patch everything very quickly on average, then you don't have to worry as much about the CVEs in the dependencies you're using.
Phil Venables
Yeah, because you just get into the mode of constantly upgrading.
Security Improving Engineering, CISO → CTO
Clint
Yeah. And you maybe have a good asset inventory, so you're like, oh, and we also know where all the software is deployed, so we know where to fix. Yeah, I like that. I think a lot of this is good engineering. Like a high functioning engineering org has a lot of these properties that also have sort of adjacent security value as well.
Phil Venables
Exactly. And in an environment where there may not be a high-functioning engineering organization, there's no reason why the security team can't consider its objective to be improving the engineering prowess of the organization, even if it's just to benefit security.
And it's one of the things that's interesting: I'm seeing an early trend in the industry where more and more chief information security officers are also becoming heads of infrastructure. There are a lot more CISOs that are CISO and CTO these days than there used to be. I think that's a consequence of CISOs pushing their organization to get better at infrastructure, and the business responding by saying, 'Okay, you're complaining about infrastructure, you go fix it.' Which I think is a positive thing.
Clint
Yeah, I've seen that too. Another trend I've seen a number of companies talk about is having the product security or appsec team be like, okay, you should be best buds with the platform engineering team or the people optimizing the efficiency and productivity of the engineers at that company. Because then you put the nice security controls into the, I don't know, git repo that everyone like forks for a new microservice, and you're like boom, like it's in from the start.
Shifting Left Vs. Shifting Down: Embedding Security into Platforms and Frameworks
Phil Venables
No, exactly. It's funny as well. So one of the things we're seeing a lot of is, we've talked for years about shift left, so putting security as early in the design life cycle as possible, which I think is important.
But I think the big thing we're seeing now is a push to shift down and exactly do that, to push more and more security controls into the application frameworks, the cloud platforms, everything else, so that when you're developing software and you're developing your applications, you're inheriting a higher degree of standard control from the environment. So I think we need to keep moving that dialogue from shift left to shift down into the platforms. And that's going to be especially important.
Clint
Yeah, I like that a lot. So going back in the day: this thing is already released, and then we're running a bunch of security scanning tools on it. Then, shifting left, we're giving you PR comments. Shifting left even more, we're giving you feedback in the IDE. And now we're shifting, to use your terminology, down: building into the libraries you're using, and into the infrastructure and platform it's running on, all these security controls that ideally are orthogonal and invisible to developers. You want mTLS, authentication, authorization? It's already set up for you, don't worry about it.
Phil Venables
No, that's right. My colleagues at Google did a great job on this, and it's encoded in the Building Secure and Reliable Systems book. And it's not just authentication and authorization. It could be mitigating common application security vulnerabilities, SQL injection and the like, right there in the application framework layers.
And the other important thing is that it's not just security. You can do this for reliability, for resilience, for constructing different patterns, embedding them in the application and infrastructure frameworks so that the development teams can focus on building the business logic, not on building the whole security, reliability, and resilience scaffolding. Even the change management and phased deployment approach can be automated as part of shifting down into the platform.
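As a toy illustration of shifting down (not taken from the book, just a hypothetical in-house data-access layer), the framework layer can expose only parameterized queries, so application code above it never interpolates user input into SQL:

```python
import sqlite3

class SafeDB:
    """Hypothetical framework-level data-access layer: callers can
    only pass parameters, never raw SQL fragments, so SQL injection
    is mitigated below the application code (the "shift down" idea)."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def find_user(self, username: str):
        # Parameterized query: user input is bound, never interpolated.
        cur = self._conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        )
        return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

db = SafeDB(conn)
print(db.find_user("alice"))        # [(1, 'alice')]
print(db.find_user("' OR '1'='1"))  # [] (the payload is treated as data)
```

Application teams that only ever see the `SafeDB` interface inherit the injection mitigation by default, which is exactly the developer experience described above.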
Cloud Providers and Secure By Default
Clint
Totally. Yeah, I think Google has done an excellent job of that. And yeah, the Building Secure and Reliable Systems is an excellent book.
I'm curious: it seems like a number of companies, especially the bigger ones that are well-funded in terms of security engineering, have built a lot of these useful frameworks and architectures internally. What do you think it looks like to adopt this at an ecosystem level, say, secure-by-default libraries for lots of types of things? There are isolated instances of this, specific libraries for specific things, but I don't know if it really makes sense as a business model for a venture-backed company. Maybe it does, but I'm curious about your thoughts.
Like how would one raise the security bar of the industry as a whole via like building more things in the platforms? Like I know GCP does this with the Shared Fate model and things like this, but I'm curious more broadly.
Phil Venables
Yeah, you're right. Clearly the cloud providers, especially at GCP, we've gone down this heavy mission of not just secure by design, building security into the platform, but also doing secure by default so that every product ships with full safeties on. And then if customers want to turn those things off, then okay, they can risk manage that. I wouldn't say we do a perfect job on that. I think we do a good job on that, but there's more to do. And so we're shipping more and more product with those, you know, high safeties on.
And what was interesting is we shipped a bunch of org policy changes last year where more products came with all of these security features fully enabled. And we had to manage that carefully because some of them were breaking changes for customers, so there was a bit of a phased deployment. But what was interesting, it was something crazy, like 99.5% of customers kept those controls on.
And we've been making multifactor authentication mandatory for admin activities. For many years we made it available and encouraged it; we now mandate MFA. And again, we've not seen a lot of pushback. Some of my colleagues were worried we'd get a lot of customer pushback, but the reality is organizations welcomed it, because I think they want all of the tech community, not just cloud providers, to be a lot more opinionated about what the right degree of control is.
Now I think you see more and more companies doing the same thing, whether it's a SaaS provider or a cloud provider. I think we also do a lot across the open source community. One thing that has not been that widely talked about is our work with the Open Source Security Foundation, equipping open-source projects and their maintainers with better tooling to help them manage vulnerabilities. It has also enabled us to help some open-source projects adopt their own version of not just secure by design but secure by default, to try to get this across all of the open-source community as well. So I think we're getting there, but it's definitely been led by cloud providers like us driving it into the platform itself.
Clint
Yeah, definitely. And I do have to give Google a shout out for contributing a lot to open source security: there's OSS-Fuzz, there's osv.dev, and, to your point, OpenSSF.
Lots of things that I think are nice to see. Because again, it's sort of like a tragedy of the commons in terms of like, oh, there's all these open source libraries that everyone uses. Who's actually securing them? So putting money and compute behind making the ecosystem better, I think is very admirable.
Phil Venables
That's right. Well, same thing with frameworks like SLSA, Supply-chain Levels for Software Artifacts, as a useful complement to SBOMs: you don't just know what's in the software, you're actually securely building it. And then through OpenSSF, providing the right tooling to help people do that.
And again, as you know, the open source package maintainers out there, a lot of them are doing it like for the love of developing that open source software. They don't necessarily want or should have to provide a whole kind of maintenance framework around it. And so what the OpenSSF have done a pretty good job of is giving all of these package maintainers better and better tools to kind of industrialize around the thing that they're developing. And I think that's having a lot of good effect.
Clint
I think so too. And recently there's OSV-SCALIBR and other related tools: open source scanners for dependencies, and I think some tooling helping people auto-patch, maybe for Maven and some other ecosystems.
Phil Venables
Well, one of the things the OpenSSF have done a pretty good job on with the Alpha-Omega work is understanding there's a classic 80/20 here (one of my favorite hobbies is looking for 80/20s everywhere, because you keep finding them): 80% of open source risk is concentrated in around 20% of packages, or probably even less; it might be more like 1/99 than 80/20. So I think the OpenSSF have done a really good job of focusing on those most-used packages.
The other thing that I don't think has been talked about enough is that each individual industry also has its own unique open-source concentrations. So for example, we partnered with an organization called FINOS, the financial sector open-source consortium, which manages a set of common open source packages used by major financial institutions, distinct from the major packages used by everybody else. They did a great job with OpenSSF on looking at how to improve the security of the open-source software that might be quite unique to financial services. And we see similar things in other industries as well.
Clint
Oh, interesting. I hadn't heard of that. That's super cool. I love the idea of a vertical or an industry being like, cool, what do we all rely on? Okay, cool, let's like all chip in to make it better.
AI For Transformational Things
Clint
So yeah, AI is obviously a very hot topic right now, and I wanted to shift into that and talk about a number of areas. Maybe to start off: what are you most excited about in AI or LLMs right now? Obviously there's MCP, etc.
Phil Venables
So it's interesting. I mean, this may not be the right answer, but I think the thing that's not talked about as much in AI, and it's nothing to do with security, is the use of AI for transformational things.
I've just been looking this week at how many organizations are using Google DeepMind, for example, to transform pharmaceutical discovery, material science, a whole array of things. The world tends to look at AI like it's just another chatbot, but the actual transformation is in these broader use cases. And I think we should not forget about that in a world where people think this is all about conversational AI or agentic AI. So there's a lot of transformation going on.
AI in Security Operations (malware analysis, alert generation, workflow automation)
But narrowing it in specifically: I think there are a lot of opportunities in security. We're seeing a lot of security teams, and we're providing tooling for this, but there's also a lot of homegrown innovation in organizations around automating security operations.
For example, internally at Google, we're using some of our models to reverse engineer malware that gets uploaded to VirusTotal. We're using them to autogenerate alerts and to automate a lot of workflows. And we're seeing a lot of that going on in security operations. We're also seeing it in the software pipeline and software delivery: automatic generation of secure configurations, for example. There's a whole array of things. So it's an exciting time, and it's exciting to watch individual security teams prototype, adopt, and drive this, as well as doing their other job, which is managing the safe and secure adoption of AI by their businesses. They have this dual mission of using AI to improve their own capability while managing the risk of their organization's deployment and use of AI.
Clint
Yeah, I think there's a lot of interesting work being done in both securing AI as well as AI for security.
Using AI to Automatically Reverse Malware, Stacked Productivity Benefits, Incident Write-ups, Fuzzing
Clint
To delve into some of those areas you just talked about, specifically Google using AI for its own security posture: you mentioned security operations and securely generating config. Could you tell me more about auto-reversing the malware that gets uploaded to VirusTotal?
Phil Venables
Yeah, so as everybody probably knows, VirusTotal is the world's biggest database of malware. When stuff gets uploaded to it, we're using models to reverse engineer and analyze it, and then auto-feed new types of detections into our Google Security Operations suite. So a lot of that goes on.
There's a lot of workflow automation being done as well. We have a thing called SecLM, which is a fine-tuned security model based on one of our foundation models, trained on all of our Google threat intelligence data. One simple use case is a natural language interface to query large-scale data repositories where you previously would have had to write a script or a rule base, and it goes all the way through to using LLMs to look for anomalies in large-scale data sets.
And even use cases... one of my favorites is some of our security teams using NotebookLM (notebooklm.google.com). Uploading activist and hacker videos to a tool that analyzes them and gives you a report on what new techniques might be in use can save hours of a threat intelligence analyst's time. So while there are transformational use cases for AI in security, sometimes I think we have to remember that some of the bigger gains are these stacked benefits of little productivity improvements that collectively 10x the productivity of the security team.
Another of my favorite examples: our detection and response team uses one of our LLMs to generate security incident write-ups and postmortem analyses. The team did that over a long period and ran A/B tests, having people quality-check what it was writing. And it turns out the humans prefer the incident reports and postmortems that the AI writes. So it's doing it better, and it's doing it more efficiently.
Now you take that one example and add in 20 or 30 other workflow improvements, and that liberates the security detection and response team to spend a much higher proportion of their time on higher-end activities, because the basic toil of running all this is now taken care of by AI. Those stacked benefits are immense, alongside the individual transformational benefits.
Another example: we've been using AI to improve fuzz testing coverage. I think we've probably increased fuzz testing coverage by over 30% just by using AI to develop some of the fuzzing harnesses in more effective ways. And that widens the aperture within which your existing vulnerability management programs can work, which uncovers more vulnerabilities that you want to fix, but is also transformative to the productivity, and in fact the job enjoyment, of the people using these technologies. So we're seeing opportunities everywhere.
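As a toy illustration of what a fuzzing harness actually does (Google's real harnesses target frameworks like libFuzzer and OSS-Fuzz; the `parse_record` target and its planted bug here are entirely hypothetical), a minimal random-input fuzz loop looks like this:

```python
# Toy stand-in for an AI-generated fuzz harness: drive a target function
# with random byte strings and collect the inputs that crash it. Real
# harnesses are coverage-guided (libFuzzer/OSS-Fuzz); this is random-only.
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target: parses 'key=value' records, with a planted bug."""
    text = data.decode("utf-8", errors="replace")
    if text.startswith("!"):
        raise ValueError("unhandled control record")  # the planted bug
    key, _, value = text.partition("=")
    return {key: value}

def fuzz(target, iterations: int = 10_000, seed: int = 0) -> list:
    """Feed random inputs to `target`, returning the inputs that raised."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashes = fuzz(parse_record)
print(f"found {len(crashes)} crashing inputs")
```

The value an LLM adds, in the workflow Phil describes, is writing harnesses like this for functions that previously had no fuzz coverage at all, which is what moves the coverage number.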
How Security Teams Should Start Adopting AI Tools
Clint
Nice. Yeah, I love all those examples. In terms of the stacked benefits, I'm curious: say you're a security team member at some company. Where do you think you would start, in terms of either replacing or augmenting some of your own work so that you can focus on higher-leverage things?
Phil Venables
Well, I think the main thing is to just give people the ability to use the tooling in a controlled way so that every individual can start figuring out how to prototype.
So a lot of our internal use cases didn't come from some structured effort. It wasn't like we had a management meeting one day and put the top 10 areas on a whiteboard. We just made the tools available and people self-innovated.
So for example, one of our Mandiant threat intelligence teams self-innovated the use of LLMs to do better attribution of various things.
For example, we talked about this in the research report we did on the North Korean IT worker incidents, where North Koreans have been getting jobs at companies not to attack them, but just to earn money. Our threat intel teams, helping the organizations that were victims of that, were able to use AI to do better attribution analysis based on analyzing the resumes of the people who'd been submitting job applications.
So that's something that could have been done with traditional tools, or with human experts, but the fact that people are able to use the platforms to self-innovate their own workflows, which we then spot, industrialize, and push across the rest of the teams, is a useful way of doing things.
So I think you do have to pick some priorities and focus top down, but giving people the tools so that they self-innovate, and bubbling up the useful use cases that you can then capture and promulgate, turns out to be a useful way of doing it.
But we see a lot of focus on security operations, secure configuration development, even things like automating compliance activities and compliance attestations. You see opportunities everywhere. Some are transformational; some are just incremental benefits that collectively add up to a lot of improvement.
Clint
Nice, that makes sense. I like the idea of giving people the tooling and freedom to experiment, seeing what works for them, and then when something looks like a great use case, figuring out how to industrialize it and apply it across everyone.
Give People a Controlled Prototyping Environment
Phil Venables
Yeah, but I think that's also why it's important, a bit like what we talked about earlier with shifting down, to provide a broader controlled AI platform. Rather than just letting everybody go use some random chatbot on the Internet, provide a platform that's enriched with your organization's fine-tuned data, with access to internal databases in controlled ways, so that people can prototype in a controlled environment.
And again, security teams, by partnering with the AI and infrastructure teams in their organizations, can add a lot of value by providing that controlled prototyping environment that's safe to use for finding those benefits.
Overhyped vs Underhyped Applications of AI to Security
Clint
Yeah, I like it. So you're making sure you're not leaking any sensitive data, it's not getting fed into model training, things like that.
You've listed a number of cool areas so far in terms of specific, concrete applications, but I'm curious if you have thoughts on which areas are maybe overhyped in applying LLMs to security, and what's underhyped that people should be looking at more?
Phil Venables
I think it builds on what we just talked about: people keep looking for the transformations and neglecting those stacked benefits of lots of little improvements.
But I think we're probably overhyping, at the moment, the potential for really high-end adversarial use of AI. We published a Google threat intelligence report recently based on our analysis of how North Korea, Russia, China, and Iran have been using some of our public AI platforms. They're using it to generate better phishing emails, but the answer to that is not AI; the answer is implementing strong phishing-resistant authentication. They're using it to generate deepfakes, fake voices and images that can dupe business processes. But again, the answer to that is a little bit of deepfake detection, and also better business control processes.
And we're seeing attackers use AI just like the rest of us, to improve their productivity. I think intense adversarial use will come, but we're not yet really seeing it on finding new forms of vulnerabilities or chaining attacks together in different ways. So I think that's a little bit overhyped. We should probably be quite sad that adversaries aren't having to do that, because they're still having success with more basic things.
I think what's underhyped is the use of AI to generate secure configurations. We've been embedding this in a lot of our products, not just the security products. So if you're standing up a Kubernetes cluster or a database environment, a lot of that configuration can now be autogenerated in a secure and opinionated way. People almost don't notice these transformational productivity effects, how quickly you can stand up environments and have them configured securely. That transformation is happening almost unnoticed.
Clint
I guess, where does the LLM come into play there? Because I could imagine you could have templates that largely deterministically say, this is how it should look. Where does the fuzzy matching come in?
Phil Venables
It's a combination of things. You'll have an LLM that is fine-tuned on how to generate infrastructure configuration as code. And then it will also have access, through some kind of RAG approach, to internal templates of your organizational standards.
As you'll know, many organizations have quite mature infrastructure deployment pattern libraries. But when you actually come down to it, say you've got a database with an AI pipeline, a Kubernetes cluster, and access to an external SaaS provider that you're feeding data to, building the configuration for that use case takes a lot of effort, even though you may be composing pre-templated patterns. Having an AI assemble that for you, trained to conform to your security, configuration, and resilience standards, is just a great productivity boon.
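A rough sketch of that compose-from-patterns flow, with a tiny hypothetical pattern library standing in for both the RAG retrieval step and an organization's real standards:

```python
# Sketch of the pattern Phil describes: retrieve the organization's
# approved templates for each requested component (RAG stand-in), then
# compose them into the prompt for a config-generating LLM. The library
# contents and prompt wording are illustrative assumptions.

PATTERN_LIBRARY = {
    "kubernetes": "kind: NetworkPolicy\nspec: {default: deny-all}",
    "database": "encryption: required\npublic_access: false",
}

def retrieve_patterns(components):
    """RAG stand-in: look up org-approved templates for each component."""
    return [PATTERN_LIBRARY[c] for c in components if c in PATTERN_LIBRARY]

def build_config_prompt(request: str, components) -> str:
    """Compose the generation prompt from the request plus retrieved patterns."""
    patterns = "\n---\n".join(retrieve_patterns(components))
    return (
        "Generate infrastructure-as-code for the request below, "
        "conforming to these organizational patterns:\n"
        f"{patterns}\n\nRequest: {request}\n"
    )

prompt = build_config_prompt(
    "database with an AI pipeline feeding a Kubernetes cluster",
    ["kubernetes", "database"],
)
print(prompt)
```

The "fuzzy" part Clint asks about lives in the model call that would consume this prompt; the deterministic part is exactly the retrieved pattern library that constrains it.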
I always like to think about it in terms of the old "time to hello world": when you sit down with a new programming language or an application framework, how long does it take you to get that first line of code productive? It's turning out that the use of AI is reducing time to hello world across the board.
And a lot of the developer use of this is to autogenerate the scaffolding, or to generate the IDL to access data stores. That's a huge productivity boon, so developers can focus on developing software rather than building the scaffolding they'd otherwise need before they can develop software.
Closing Advice on AI Adoption & Phil’s Socials
Clint
Nice. Yeah, that makes a lot of sense. I like that a lot.
Yeah, Phil, it's been a pleasure. Thanks so much for your time today. I guess are there any parting words or advice you would like to share for people, and also where can people find you or what should they be aware of?
Phil Venables
So I'm at philvenables.com, or @philvenables on X. I publish a lot of stuff regularly, and I'm always happy to take feedback from people on what they'd like me to publish next.
Rounding out the AI conversation, there are two things I would say to security teams. One, it's a great opportunity for security teams to exhibit some business leadership by working with their business and IT counterparts to help their organizations safely adopt AI. We're finding that organizations are looking to the security team to almost be the chief digital risk officer, as opposed to just the chief security officer.
And then similarly, the best way for organizations to figure out what to do with AI is to just get going, do some prototyping, and learn by doing. In many cases, as we've found, the big use cases actually come from lots of experimentation, as opposed to imagining in advance what the big use cases will be.
And finally, engage with the community. We set up the Coalition for Secure AI, which takes a community approach to building standards. There's MLCommons, which is making a lot of common tooling available, and things like the OpenSSF and other organizations. I'd just encourage people to sign up and get involved in the community. It's not just the big tech companies that should drive this; it's open to everybody.
Clint
Yeah, I think that's some great advice. Well, thanks so much for your time and take care!