[tl;dr sec] #306 - Claude Code's Hacking Campaign, Rust in Android, Secrets Scanners Miss

Claude used by state actors for a hacking campaign + industry weighs in, Rust -> 1000x reduction in memory safety vulns in Android, why your secret scanner is missing valid secrets

Hey there,

I hope you’ve been doing well!

I’m stoked to announce I’ll be doing a webinar with my friend Daniel Miessler on his epic AI setup.

(If you’re not familiar with Daniel, he writes the excellent Unsupervised Learning newsletter and created the Fabric project.)

Daniel has spent maybe more time than anyone I know on his personal AI infrastructure.

So I’m stoked to have him walk through his setup, do some live demos, and answer your questions.

We’ll cover:

  • Building out Claude Code as your command center.

  • A maturity model for integrating AI into your work.

  • Recent developments and how best to use them: Skills, sub-agents, etc.

  • “Productionizing” your AI infra and tactical tips: ensuring the right agents and skills are called, structuring context for easy agent retrieval, automatically managing agent history, coordinating between agents, tool calling, etc.

  • And more!

Hope to see you there 👋 

👉️ Join us Dec 3rd at 10am PT 👈️ 

🦃 P.S. No tl;dr sec next week due to Thanksgiving! Hope you get some time to relax with people you care about.

Sponsor

📣 Start your Red Team Journey with Altered Security

Altered Security offers multiple Red Team courses for on-prem and cloud, with affordable, enterprise-like hands-on labs.

Highlights of Altered Security courses:

  • Industry-recognized certifications like Certified Red Team Professional (CRTP), CRTE, CARTP, and more.

  • Easy-to-access, huge, enterprise-like labs.

  • Designed by Black Hat USA and DEF CON veterans. 

  • Trained more than 40K professionals from 130+ countries and 500+ organizations. 

Get 20% off all courses in our Black Friday deal, running until December 17, 2025. No coupon code required.

👉 Enroll Now 👈

Courses and online labs have really helped me level up my hands-on knowledge over the course of my career. Take a look if you want to gain some red team experience 👀 

AppSec

Secrets Story: The Prefixed Secrets That Tried%20to%2BGet\nAway
Semgrep’s Lewis Ardern describes how secret scanning tools miss valid leaked secrets due to over-reliance on false-positive reduction techniques like word-boundary anchors and keyword matching. He found hundreds of valid GitHub, OpenAI, Anthropic, and other tokens publicly leaked on GitHub that went undetected by popular scanning tools.

💡 Secret detection is actually surprisingly complex and nuanced, which this post does a good job at demonstrating with concrete examples. If you want to get into the nitty gritty, check it out.
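
To make the failure mode concrete, here’s a minimal, hypothetical sketch (the pattern and token are made up; real scanners’ rules differ): a word-boundary-anchored regex misses a token once part of it is percent-encoded, while decoding the content first recovers it.

```python
import re
from urllib.parse import unquote

# Illustrative pattern (not any real tool's rule): a GitHub-style token
# prefix with word boundaries on both sides.
TOKEN_RE = re.compile(r"\bghp_[A-Za-z0-9]{36}\b")

fake_token = "ghp_" + "a" * 36                           # made-up token, not a real secret
encoded_leak = "auth=" + fake_token.replace("_", "%5F")  # '_' percent-encoded

print(TOKEN_RE.search(encoded_leak))           # None: the raw pattern misses it
print(TOKEN_RE.search(unquote(encoded_leak)))  # matches once decoded
```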

Effortless Web Security: Secure by Design in the Wild
Aaron Shim shares two Google initiatives supporting Secure by Design in web development: contributing to the W3C Secure Web Application Guidelines (SWAG) Community Group and introducing Auto-CSP in Angular. The W3C group has some great documentation on common vulnerability classes, and discusses best practices and guidelines for web developers and maintainers of important web dependencies.

Auto-CSP in Angular automatically generates Content Security Policy configurations at build time. CSP is a great defense against XSS, but tough to deploy in practice. Auto-CSP works by rewriting all <script> tags with src attributes into an inline loader script that dynamically adds those scripts to the page; every inline script is then hashed, and the hashes are used in the CSP together with strict-dynamic.
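
As a rough sketch of the idea (my own illustration, not Angular’s actual implementation): hash the generated inline loader and emit a hash-based policy with strict-dynamic, so the loader, and the scripts it in turn loads, are trusted.

```python
import base64, hashlib

# Hypothetical inline loader that replaces the original <script src> tags
# (the real Auto-CSP-generated loader will differ).
loader = ("['/main.js','/polyfills.js'].forEach(s=>{"
          "var e=document.createElement('script');e.src=s;"
          "document.body.appendChild(e)})")

# Hash the inline script and build a strict-dynamic, hash-based CSP.
digest = base64.b64encode(hashlib.sha256(loader.encode()).digest()).decode()
print(f"Content-Security-Policy: script-src 'sha256-{digest}' 'strict-dynamic'; "
      "object-src 'none'; base-uri 'self'")
```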

Rust in Android: move fast and fix things
Google’s Jeff Vander Stoep shares how Android's adoption of Rust has led to a 1000x reduction in memory safety vulnerability density compared to Android’s C and C++ code. Memory safety vulnerabilities have fallen to below 20% of total vulnerabilities for the first time, and Rust has also significantly improved development efficiency with a 4x lower rollback rate and 25% less time spent in code review compared to C++.

Android is expanding Rust adoption to the Linux kernel, firmware, and first-party applications like Nearby Presence and MLS; Chromium is likewise adding Rust parsers for PNG, JSON, and web fonts.

See also Comprehensive Rust, a free Rust course developed by the Android team at Google.

Sponsor

📣 Meet Cortex Cloud 2.0: The Autonomous AI Workforce for Cloud Security

The complexity of securing modern cloud environments — from development to deployment — has reached a breaking point. Siloed tools and alert fatigue turn visibility into chaos.

Discover how Cortex Cloud™ 2.0 sets the new standard for autonomous cloud security, connecting code, cloud, runtime and automation.

You’ll learn how to:

  • Solve any cloud security challenge with automation driven by agentic AI trained on over 1.2 billion real-world responses.

  • Reduce risk across your multicloud environment with intuitive, actionable command centers that elegantly visualize risk. 

  • Stop advanced attacks with a performance-optimized version of our best-in-class CDR agent with 50% less resource consumption.

Watch this webinar now to see the new standard for cloud security for yourself.

👉 WATCH NOW 👈

Automatically investigating alerts, gathering context, and auto-fixing things where possible is going to be huge when done well. And 1.2B real-world responses is a pretty big dataset to train on 🤯 

Cloud Security

AWS re:Invent 2025: Your guide to security sessions across four transformative themes
There will be >80 security-focused sessions across four main themes: Securing and Leveraging AI, Architecting Security and Identity at Scale, Building a Culture of Security, and Innovations in AWS Security.

The AI security sessions cover protecting AI workloads, securing agentic AI systems, and using AI for security operations, with workshops on red teaming generative AI applications and implementing authentication for AI agents.

How I Overlooked the Problem and Shot Myself in the Foot
AWS consultant Dmytro Sirant on how he overlooked a critical issue during an IAM-to-SSO migration, where deleting an IAM user left a KMS key with an immutable policy that prevented further infrastructure changes. He discovered that Terraform would silently "succeed" when trying to update an EKS cluster's encryption key (which isn't actually possible), and had to go through AWS Support's specific recovery process, which involves creating specially-named IAM users for each affected key.

Lesson: always update KMS key policies before deleting the IAM users they reference.
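
A minimal sketch of that lesson in boto3 (all ARNs are hypothetical, and this only handles the simple string-principal case):

```python
import json
import boto3

# Before deleting an IAM user that a KMS key policy depends on, point the
# policy at a principal that will still exist, otherwise the key can become
# unmanageable.
kms = boto3.client("kms")
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
OLD = "arn:aws:iam::111122223333:user/user-being-deleted"
NEW = "arn:aws:iam::111122223333:role/KeyAdminRole"

policy = json.loads(kms.get_key_policy(KeyId=KEY_ID, PolicyName="default")["Policy"])
for stmt in policy["Statement"]:
    if stmt.get("Principal", {}).get("AWS") == OLD:
        stmt["Principal"]["AWS"] = NEW  # swap in a surviving administrator

kms.put_key_policy(KeyId=KEY_ID, PolicyName="default", Policy=json.dumps(policy))
```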

righteousgambit/quiet-riot
By Wes Ladd: An enumeration tool for scalable, unauthenticated validation of AWS, Azure, and GCP principals, including AWS account IDs, root account email addresses, IAM users and roles, Azure Active Directory users, and Google Workspace users/emails.
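
Under the hood, tools like this lean on AWS validating principals when a resource policy is attached. Here’s a rough sketch of that trick (my reconstruction, not quiet-riot’s actual code, and the bucket must be one you own):

```python
import json
import boto3
from botocore.exceptions import ClientError

# AWS validates principals at policy-attach time, so a MalformedPolicy error
# indicates the principal does not exist -- no permissions in the target
# account required.
s3 = boto3.client("s3")

def principal_exists(principal_arn: str, bucket: str = "my-throwaway-scan-bucket") -> bool:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",  # harmless: denies the probed principal on our own bucket
            "Principal": {"AWS": principal_arn},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        }],
    }
    try:
        s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
        return True
    except ClientError as e:
        return e.response["Error"]["Code"] != "MalformedPolicy"
```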

Things you wish you didn't need to know about AWS service-linked roles
Plerion’s Daniel Grzelak explores the peculiarities of AWS service-linked roles (SLRs), explaining how they differ from service roles (owned by AWS, not you, can’t be edited, and more), their security implications, and potential issues with their implementation.

The post shows how SLRs can be used to enumerate the services enabled in any AWS account and to bypass permission restrictions to discover resources. Some SLRs have loosely scoped policies, and dubious SLR policy practices, like using non-reserved tags (not prefixed with aws:), name-based policies, and policies that may allow privilege escalation, can create further security issues.

Blue Team

R3DRUN3/magnet
By Simone Ragonesi: A purple-team telemetry & simulation toolkit designed to generate both telemetry and malicious activity for testing detection capabilities and SOC analyst responses. Magnet includes simulation modules mapped to MITRE ATT&CK, like ransomware (which generates and encrypts thousands of files, attempts to delete shadow copies, and places ransom notes), discovery simulation, and high CPU miner simulation, while writing detailed activity logs in various formats to help security teams validate their detection rules and behavioral analytics.

Automation for Threat Detection Quality Assurance
Blake Hensley outlines various types of automated tests to verify threat detection rules before deployment, including:

  • Foundational checks: schema validation, query syntax validation, source health monitoring, and pipeline integrity checks.

  • A simple backtest: running the query backwards in time over historical data to determine if it returns too many results.

  • Execution efficiency: performance metrics like query time.

  • Dynamic backtests: comparing the original query’s results to the new query’s, using an LLM-as-judge.

  • Unit tests: known pass/fail cases.

  • Purple team validation: verifying the detection logic works against purple team activity in a lab environment.

See also Blake’s example GitHub implementation (kql-tester) that applies the first four tests to KQL in Azure Sentinel analytic rules.
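
The simple backtest, for example, amounts to a noise gate; here’s a toy sketch (function names and threshold are mine, not from kql-tester):

```python
from datetime import datetime, timedelta, timezone

# Run the detection over historical data and fail the deployment if it
# would be too noisy for analysts to handle.
MAX_ALERTS_PER_DAY = 10

def simple_backtest(run_query, detection_query: str, days: int = 30) -> bool:
    # run_query is a stand-in for your SIEM client's search API.
    since = datetime.now(timezone.utc) - timedelta(days=days)
    results = run_query(detection_query, start_time=since)
    return len(results) / days <= MAX_ALERTS_PER_DAY

# Fake SIEM client returning 45 historical hits -> 1.5 alerts/day, passes.
print(simple_backtest(lambda q, start_time: [{}] * 45, "SecurityEvent | where ..."))
```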

Threat Hunting vs. Threat Intelligence
Recorded Future’s Maddy Maletz explains how threat intelligence (understanding external threats, their motives, and TTPs) and threat hunting (proactively searching for threats already inside systems) work together to create a more effective security strategy. Threat intelligence guides hunting hypotheses and provides context for suspicious findings, while threat hunting validates intelligence and discovers what automated defenses miss.

💡 I thought this was a nice high level overview, and I like the comparison table.

Red Team

winsecurity/MaleficentVM
A practice VM for malware development, containing CTF-style challenges like enumerating the OS version or service configuration, injecting shellcode into a target process, using IAT hooking to hook a specific function, etc.

EvilBytecode/GoDefender
A Go-based security toolkit designed to detect and defend against debugging, virtualization, and DLL injection attacks. GoDefender provides multiple protection mechanisms to make reverse engineering and analysis significantly more difficult, including virtualization detection (VMware, VirtualBox, KVM, QEMU, Parallels), anti-debugging techniques (API monitoring, critical function patching, process validation), and DLL injection prevention that leverages the Binary Image Signature Mitigation Policy to block non-Microsoft binaries.

Evading Elastic EDR's call stack signatures with call gadgets
Almond OffSec’s SAERXCIT demonstrates a technique to evade Elastic EDR's call stack-based detection by inserting arbitrary modules into the call stack during module loading, allowing shellcode to load a network module without getting detected (PoC).

AI + Security

Private AI Compute: our next step in building private and helpful AI
Google’s Jay Yagnik announces Private AI Compute, a new cloud-based AI system designed to deliver AI capabilities while maintaining user data privacy. Private AI Compute runs on one Google stack powered by their custom Tensor Processing Units (TPUs) and Titanium Intelligence Enclaves (TIE). Remote attestation and encryption are used to connect your device to the hardware-secured sealed cloud environment, allowing Gemini models to securely process your data within a specialized, protected space that not even Google can access.

💡 Apple previously announced their Private Cloud Compute, so it makes sense that Google would get into the same game. I wonder if OpenAI or Anthropic will too 🤔 

Public Report: Google Private AI Compute Review
NCC Group shares their public report on Google’s Private AI Compute cloud system. The assessment included architecture review, cryptographic assessment of the attestation/encryption implementations, security analysis of IP-blinding relay, source code review, and more. Ten consultants, 100 person-days.

💡 Beyond just the findings, the system architecture overview, system components, and general discussion of how things work is neat. Also shout-out to my former NCC colleagues 🙌 

Disrupting the first reported AI-orchestrated cyber espionage campaign
Anthropic describes a campaign in which a Chinese state-sponsored group used Claude Code as an autonomous agent to target 30 global organizations, successfully infiltrating a small number. The attackers jailbroke Claude by breaking tasks into seemingly innocent components and convincing it that it was performing legitimate security testing, then had it autonomously perform reconnaissance, vulnerability discovery, exploit development, lateral movement, credential harvesting, and data exfiltration with minimal human intervention. Full Report.

“The threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign).”

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous
Article by Ars Technica’s Dan Goodin tying together several security researchers who question the Anthropic report’s claims, arguing that Claude regularly refuses legitimate security researchers who ask it to perform cybersecurity tasks, so why would these threat actors get much better performance? They also note that the report itself said Claude “frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn’t work or identifying critical discoveries that proved to be publicly available information.”

Capabilities of Lower Sophistication Actors Will Increase
LinkedIn post by Chris Thompson, founder of Offensive AI Con: “A lot of people are missing the point; offensive cyber capabilities in current models are a side-effect of being trained on coding datasets. As frontier model labs and private groups start to shift to tuning current models and training purpose-built cyber models on refined offensive datasets, the effectiveness of open and closed models will increase significantly, enabling stealth & evasion focused offensive cyber operations and advanced ransomware attacks.”

The dawn of AI-orchestrated cyberattacks: A call to action for cyber defense
PwC’s Morgan Adamski (former U.S. Cyber Command Executive Director) and David Ames, along with former NSA cybersecurity director Rob Joyce, weigh in on the Anthropic report. “Cost asymmetry: The operation showed that attackers can add more compute/data/test time to model exploits and get immediate, scaled impact, while defenders are working linearly (focusing on adding headcount, dealing with fragmented tools, etc.).”

From Rob Joyce on LinkedIn: “I’ve been following offensive AI developments with great interest over the last year. I don’t think defenders yet appreciate how rigorously capable Agentic AI will test their attack surface. Some remain dismissive of today’s AI-driven hacking capabilities, but they’re not accounting for the exponential rate of improvement.”

How to replicate the Claude Code attack with Promptfoo
Ian Webster walks through using Promptfoo to jailbreak Claude Code to perform tasks including creating and installing a keylogger and reverse shell, enumerating and exfiltrating SSH private keys and API keys, etc.

After running 332 adversarial scenarios, they found that many agents lose track of their safety training 15 turns into a conversation about "Blue Team playbooks."

💡 I found the discussion of attack strategies interesting, like the “meta” prompting strategy, effectively an agent reasoning loop on the attacker’s side that attempts a jailbreak, looks at why the jailbreak didn’t work, and then intelligently modifies it to try again; and “hydra”, which uses multi-turn conversations to gradually escalate, backtracking and resetting the agent’s state each time it hits a refusal.

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint

P.S. Feel free to connect with me on LinkedIn 👋