
[tl;dr sec] #254 - Cloud CTFs, AI + AppSec, Awesome Threat Detection

Practice your GCP and Azure security skills, scaling AppSec with LLMs, a curated list of awesome threat detection and hunting resources

Hey there,

I hope you’ve been doing well!

🤯 Over 90,000 subscribers!

*Crushes scouter*

I’m incredibly honored and humbled to say that tl;dr sec has >90K readers!

When I started this newsletter on a whim over 5 years ago, I never imagined a few hundred people reading it, let alone this.

Thank you for joining me on this journey, I’m very grateful that you take time out of your busy life to read it.

If you find tl;dr sec useful, I’d appreciate it if you’d share it with a friend, or reshare this LinkedIn post so more people can hear about it.

I’ll continue working hard to ensure this newsletter is one of the highest signal ways you spend your time every week.

Have a wonderful day and weekend!

Sponsor

📣 AWS EKS Security Best Practices

Tired of feeling overwhelmed by the complexities of securing your EKS environment? 

Look no further. This comprehensive cheat sheet is your one-stop resource for mastering EKS security best practices and safeguarding your cloud-native applications.

Packed with actionable advice you can put in place immediately, this cheat sheet shows you how to improve the security posture of your EKS environment, from initial setup to ongoing monitoring and compliance.

There are lots of potential footguns in Kubernetes, awesome to see a cheat sheet of the things you should care about 👌 

AppSec

BSidesSF 2025
Next year’s iteration of one of my favorite cons! A+ content and attendees. They’re open for sponsorship and the CFP is live until December 6th. I’m going to submit, you should too!

Escape-Technologies/graphinder
By Escape: Blazing fast GraphQL endpoint finder using subdomain enumeration, script analysis, and brute force.

gradio-app/safehttpx
A secure-by-default Python HTTP client library that performs DNS validation on the supplied URL, preventing SSRF and DNS rebinding attacks. This stops an attacker from scanning sensitive internal services, accessing cloud metadata services, etc. The library was created as a result of Trail of Bits’ security audit of Gradio 5 (Hugging Face).
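The core idea behind this kind of guard — resolve the hostname yourself, reject private or link-local addresses, and connect to the resolved IP so it can’t change under you — can be sketched roughly like this (an illustration of the technique, not safehttpx’s actual API; the function name is mine):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolve_and_validate(url: str) -> str:
    """SSRF guard: resolve the URL's hostname ourselves and refuse
    private, loopback, link-local, or reserved addresses.

    Returning the resolved IP lets the caller connect to that exact
    address, which also mitigates DNS rebinding (resolve once, then
    connect to the pinned IP instead of re-resolving).
    """
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no hostname")
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
        raise ValueError(f"refusing to connect to internal address {ip}")
    return str(ip)
```

For example, `resolve_and_validate("http://169.254.169.254/latest/meta-data/")` raises, blocking the classic cloud metadata endpoint.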

TypeError/secure
By Caleb Kinney: A lightweight modern Python library to add security headers (CSP, HSTS, etc.) to over 10 web frameworks, including Django, Flask, FastAPI, and more. Secure by default, customize as needed.
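For a sense of what a library like this does under the hood, here’s a minimal framework-agnostic sketch in the same spirit (illustrative header values and class name, not the library’s actual API):

```python
def default_security_headers() -> dict:
    # Conservative defaults of the kind secure-headers libraries apply;
    # real values should be tuned per application.
    return {
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        "Content-Security-Policy": "default-src 'self'",
        "Referrer-Policy": "strict-origin-when-cross-origin",
    }

class SecurityHeadersMiddleware:
    """WSGI middleware that appends security headers to every response."""

    def __init__(self, app, headers=None):
        self.app = app
        self.headers = headers or default_security_headers()

    def __call__(self, environ, start_response):
        def start_with_headers(status, response_headers, exc_info=None):
            # Append our headers to whatever the app already set.
            response_headers = list(response_headers) + list(self.headers.items())
            return start_response(status, response_headers, exc_info)
        return self.app(environ, start_with_headers)
```

The value of the library is doing this consistently across frameworks with sane defaults, so you don’t hand-roll (and forget) headers per app.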

Sponsor

📣 Secure and govern infrastructure without slowing down developers

Resourcely is a configuration platform for deploying secure-by-default cloud infrastructure.

With Resourcely Guardrails, security teams set policies that are enforced as part of your CI pipeline. Prevent risky infrastructure from being deployed, without getting in the way of developers.

Campaigns, specifically built for security teams, allow users to scan their environment for vulnerabilities that violate their policies and remediate them without asking developers to write Terraform.

👉 Sign Up Free 👈

I often include secure-by-default/secure guardrails links in tl;dr sec. Why? It’s one of the highest leverage things you can lean into, so I’m stoked to see people making it easier 🤘 

Cloud Security

narenaryan/whispr
By Naren Yellavula: A CLI tool to safely inject secrets from your favorite secret vault (Ex: AWS Secrets Manager, Azure Key Vault etc.) into your app's environment. Useful for secure local software development.

n0jam/gcp-ctf-workshop
The code used to set up the infrastructure for Marion Säckel and Marcus Hallberg’s BSides NY 2024 workshop "A hitchhiker's guide to a Google Cloud CTF."

Sentinel for Purple Teaming
Jason Ostrom announces PurpleCloud, an open-source Azure cyber range for automatically spinning up environments for on-premises AD and Azure cloud-native attack simulations. It automates the setup of best-practice Azure logging and ships logs to Microsoft Sentinel. The post walks through using Atomic Red Team to test detections, explores attacking Managed Identities, and discusses detection opportunities with various logs.

Bringing AWS Security to Mergers and Acquisitions
Almahdi Sahad shares a six-step 30/60/90-day plan for securely integrating AWS accounts from acquired companies: setting up a solid Landing Zone with automated security controls, obtaining root access to all accounts, onboarding accounts into your org, performing a risk assessment focused on critical issues like overly permissive IAM, public resources, and poor patch management, sustaining momentum (converting human access to SSO, replacing IAM user access keys with IAM roles), and adjusting to the new normal.

Exploring Google Cloud Default Service Accounts: Deep Dive and Real-World Adoption Trends
Datadog’s Christophe Tafani-Dereeper analyzes the usage and security implications of Google Cloud's machine identities, focusing on default service accounts for Compute Engine and GKE. As you might guess: it’s easy to overprivilege. Tip: enable Workload Identity Federation on GKE.

Out of a sample of thousands of Compute Engine instances and GKE clusters, Christophe found that over 1 in 3 Compute Engine instances use the default service account, with 13% having unrestricted project admin access. For GKE, 46% use the default account; 7% have an unrestricted scope (effectively project admins), and 27% have the default scope, allowing them to read all GCS buckets and pull container images in the project.
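If you want a starting point for auditing your own projects: the default service accounts are recognizable by their email patterns (to my knowledge, `PROJECT_NUMBER-compute@developer.gserviceaccount.com` for Compute Engine/GKE and `PROJECT_ID@appspot.gserviceaccount.com` for App Engine), so a rough inventory check can be as simple as:

```python
import re

# Well-known default service account email patterns (a heuristic;
# verify against your org's actual accounts).
DEFAULT_SA_PATTERNS = [
    re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$"),    # Compute Engine / GKE
    re.compile(r"^[a-z][a-z0-9-]*@appspot\.gserviceaccount\.com$"),  # App Engine
]

def uses_default_service_account(sa_email: str) -> bool:
    """True if the service account email matches a GCP default SA pattern."""
    return any(p.match(sa_email) for p in DEFAULT_SA_PATTERNS)
```

Run it over the service account emails attached to your instances and node pools to find candidates for migration to dedicated, least-privilege accounts.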

Container Security

Adventures in kernel land - Lessons learned from a deep dive in memory forensics on Google Kubernetes Engine
Spotify’s Marcus Hallberg summarizes his research from various presentations, describing his journey in developing memory forensics capabilities for containerized workloads in GKE: custom Docker image using AVML for memory acquisition and Volatility3 for analysis, deployed as a pod with CAP_SYS_ADMIN permissions. He describes overcoming challenges like accessing the kernel space and building a symbol file of the kernel.

CSI Forensics: Unraveling Kubernetes Crime Scenes
Sysdig’s Alberto Pellitteri describes how to use Kubernetes container checkpointing for automated Digital Forensics and Incident Response (DFIR), leveraging Falco, Falcosidekick, and Argo to trigger checkpoints upon detecting malicious activity. The post covers both static analysis of checkpointed containers using tools like checkpointctl and CRIT (examining the filesystem, running processes, mounts, and container memory), as well as dynamic analysis by restoring checkpoints in isolated environments (tools: Wireshark, Sysdig, strace).

Supply Chain

woodruffw/zizmor
By Trail of Bits’ William Woodruff: A tool for finding security issues in GitHub Actions CI/CD setups. Detects template injections, impostor commits, credential leaks, etc. See src/audit for the check implementations.
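Template injection in Actions happens when attacker-influenced context (an issue title, PR body, etc.) is interpolated into a `${{ ... }}` expression inside a script, since the value is substituted before the shell runs. A heavily simplified version of that check (my own sketch, not zizmor’s implementation) looks like:

```python
import re

# Expression contexts an attacker can often influence (a simplified
# list, not zizmor's actual rule set).
ATTACKER_CONTROLLED = r"github\.event\.(issue|pull_request|comment|head_commit)"

INJECTION = re.compile(r"\$\{\{\s*" + ATTACKER_CONTROLLED + r"[^}]*\}\}")

def find_template_injections(workflow_text: str) -> list:
    """Return workflow lines that interpolate attacker-influenced contexts
    directly into ${{ ... }} expressions -- a template injection smell."""
    return [line.strip()
            for line in workflow_text.splitlines()
            if INJECTION.search(line)]
```

The usual fix is to pass the value through an intermediate environment variable (`env:`) and reference it as `"$TITLE"` in the script, so the shell treats it as data rather than code.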

OSV's approach to data quality
OSV’s Andrew Pollock and Charl de Nysschen describe OSV's Data Quality Program, aimed at maintaining high-quality vulnerability data across nearly 30 ecosystems and multiple sources. They're developing an open-source OSV record linter to validate records against properties beyond just JSON Schema compliance, and will gate record imports based on these quality requirements.

Exploiting Fortune 500 Through Hidden Supply Chain Links
Lupin & Holmes’ Roni Carta describes how Depi, their supply chain security tool, identified a dependency confusion vulnerability in HashiCorp's Consul. The issue stemmed from wildcard dependencies in package.json files (“consul-lock-sessions”: “*”), which could be exploited by claiming unpublished npm packages. This led to a $17,000 bug bounty from a Fortune 500 company, and HashiCorp changed the package.json to use the file: protocol for local dependencies.
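A simplified version of the underlying check — flagging manifest entries that will happily resolve from the public registry — might look like this (function name is mine; Depi does far more):

```python
import json

# Version specs that match anything published to the public registry.
RISKY_SPECS = {"*", "latest", ""}

def find_risky_npm_deps(package_json_text: str) -> dict:
    """Flag dependency specs an attacker can satisfy by publishing an
    unclaimed package name (dependency confusion)."""
    manifest = json.loads(package_json_text)
    risky = {}
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if spec in RISKY_SPECS:
                risky[name] = spec
    return risky
```

In Consul’s case the intent was a local package, which is why the fix was switching to the `file:` protocol — a `file:./path` spec never touches the registry.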

Blue Team

0x4D31/awesome-threat-detection
A curated list of awesome threat detection and hunting resources by Adel Karimi and contributors, covering tools, detection rules, resources, podcasts, newsletters, videos, trainings, labs, threat simulation, and more.

Announcing LOLRMM: A Unified Approach to RMM Software Tracking
New project by Michael Haag, Jose Hernandez, Hare Sudhan, Kostas, and Nasreddine Bencherchali: LOLRMM (Living Off The Land Remote Monitoring and Management), a centralized platform for tracking and managing RMM software that often gets abused by threat actors. The project aims to help defenders proactively identify and remediate RMM tools in their environments by providing a comprehensive database, offering insights and artifacts, and facilitating collaboration.

Red Team

amjcyber/pwnlook
An offensive post-exploitation tool that gives you complete control over the Outlook desktop application and the emails configured in it.

Exploiting Microsoft Teams on macOS during a Purple Team engagement
Quarkslab’s Mathieu Farrell describes finding a dylib injection vulnerability in Microsoft Teams on macOS, allowing an attacker to secretly spy on users through their microphone and camera. Tip: check whether an app’s Disable Library Validation entitlement is set to true, which means the loading process doesn’t verify the signatures of the libraries it loads.
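On a real system you’d dump an app’s entitlements with `codesign -d --entitlements`; the check itself boils down to one plist key (the helper name is mine; the key `com.apple.security.cs.disable-library-validation` is Apple’s):

```python
import plistlib

DISABLE_LIBRARY_VALIDATION = "com.apple.security.cs.disable-library-validation"

def library_validation_disabled(entitlements_plist: bytes) -> bool:
    """True if the entitlements let the app load dylibs not signed by the
    same team -- the precondition for this kind of dylib injection."""
    entitlements = plistlib.loads(entitlements_plist)
    return bool(entitlements.get(DISABLE_LIBRARY_VALIDATION, False))
```

Apps that legitimately need this entitlement (e.g. for third-party plugins) are exactly the ones worth a closer look on red team engagements.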

AI + Security

ngalongc/AuthzAI
By Ron Chan: An automated tool to test and analyze API endpoints for potential permission model violations using OpenAI structured outputs.

Easily create custom SAST guardrails with human language and Semgrep Assistant
Nitin Nayar describes how the new Semgrep Assistant feature “Memories” allows you to use natural language to fine-tune AI-based remediation advice based on your organization’s specific guidelines.

💡 There are lots of folks doing “AI autofix,” but this is the first I’ve seen that lets you easily customize the fix to your org’s best practices. Which is rad, because rather than onboarding or annual dev training that’s easy to forget, it’s just-in-time, situation-specific guidance on whatever you want to ensure the dev knows.

💡 For example: we use <this secrets manager>, use <this library> to parse XML, or <these golden images>, or <this middleware> to apply AuthN/AuthZ and set the right headers and cookie flags, etc.

One Year of Using LLMs for Application Security: What We Learned
DryRun Security’s Ken Johnson shares key lessons learned and challenges faced. Tips: ask LLMs detailed questions; use the right LLM for the task (e.g. some specialize in embeddings or in understanding code); retrieval-augmented generation (RAG) adds real value; combining deterministic and probabilistic methods works best; and agent-based execution enhances performance (give the model access to tools and docs vs. single-shot question-and-answer interactions).

Ken highlights various challenges with CodeLlama, LLaMA, Mixtral, and Mistral. See this post for how DryRun tests their LLM workflows.

The Future of Application Security: Integrating LLMs and AI Agents into Manual Workflows
Anshuman Bhartiya explores how to scale AppSec activities using LLMs, demonstrating a workflow that automates risk classification, rapid risk assessment, and security reviews on technical specifications. He describes a proof-of-concept using OpenAI's Chat Completions API and Assistants API with RAG, as well as an alternative implementation using AI agents via OpenAI's Swarm framework.

Learnings: image parsing is not trivial (e.g. parsing architecture diagrams), a RAG assistant provides more detailed and company-specific relevant output, access control around RAG is still an open challenge, and of course you need to be careful when AI Agents can autonomously perform tasks.

💡 This is a great example of a solid blog post: some big picture ideas and context, great diagrams and visual overviews, demo videos, code snippets/tactical details, lessons learned 👌 

Misc

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate it if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler