
[tl;dr sec] #246 - GitHub Actions Attack Diagram, The Worst Places to Leak Secrets, Red Team TTPs

Common attack paths in identifying GHA vulns, how quickly AWS tokens are found by location, resources for red teamers and defenders

Hey there,

I hope you’ve been doing well!

🍞 Oh no you sourdoughn’t

New #PeakBayArea story.

This past weekend I brought a sourdough loaf from a local co-op bakery to a friend’s potluck.

I offered a slice to my friend, but he turned it down because he’s celiac. (Fun fact 1: celiac ≠ gluten intolerance. Fun fact 2: gluten intolerance is one of the only intolerances allowed/celebrated in the Bay Area.)

His partner also turned it down, not because she can’t eat bread or doesn’t like it, but because then there’d (temporarily) be no kisses, due to potential gluten transfer.

We were a bit skeptical that a kiss could have an effect, but my friend referenced that the FDA says “gluten-free is <20 parts per million,” and who knows what a kiss would be 😂 

The Bay Area: where gluten can affect your love life.

Have you ever wanted to get Tanya Janca (She Hacks Purple) to train your development team? Well now you can!

If you’re not familiar, Tanya is the author of “Alice and Bob Learn Security,” an IANS faculty member, and a frequent keynote speaker and trainer.

For a limited time, she’s offering a trade: she’ll come give a lesson or talk on secure coding (options here) in exchange for you chatting with a member of Semgrep’s sales team.

She’s currently only committed to doing this for Hacktober, and as she’s just one (incredibly productive) person, she only has so much time.

So if this sounds useful, reach out! 

P.S. She’s ALSO giving two free, live virtual secure coding training sessions, open to anyone. No need to sit down with sales. Sign up for that here.

Sponsor

📣 When Your SOC Analysts Need Backup

Every SOC team faces the same challenge: Too many alerts, not enough time. We get it—burnout is real, and so is the worry of missing critical threats. That’s where Dropzone AI steps in. It’s like adding an experienced analyst to your team, one that works tirelessly 24/7, digging through every alert, gathering context, and giving you concise summaries and conclusions. AI SOC automation is here, giving you the support you need to stay ahead. Ready to see what this could mean for your team?

Too many alerts and not enough people or time is a common challenge. I think the SOC is a great domain for AI, which Caleb Sima also highlighted in his BSidesSF keynote.

AppSec

Hard Truths your CISO won’t tell you
Slides from an off-the-record fireside chat given by my bud Travis McPeak. I like the 🌶️ opinions, and agree with many of them.

huntresslabs/rogueapps
By Huntress’s Matt Kiely: RogueApps are OIDC/OAuth 2.0 applications that, while not explicitly malicious, are frequently abused by attackers during intrusions. The repo documents this emerging SaaS/OIDC/OAuth 2.0 attack surface, surfacing application tradecraft to aid defenders in detection, deterrence, and mitigation.

Bypassing airport security via SQL injection
Ian Carroll and Sam Curry found SQL injection (and other issues) in FlyCASS, a system used by small airlines to manage Known Crewmember (KCM) and Cockpit Access Security System (CASS) authorizations. The bug gave them admin access to the system, which allowed them to add unauthorized personnel who could then bypass airport security and access cockpits.

Making Sense of the Application Security Product Market
James Chiappetta discusses the evolving AppSec product market, including challenges in categorizing the space, ASPM, and applying AI to AppSec. He breaks down AppSec products into four functional categories: Security Assurance & Governance, Secure the Cloud & App Environment, Automate Security Flaw Detection, and Prioritize, Track, & Remediate Flaws.

Sponsor

💎 Catching AppSec Design Risks Before Code is Even Written

Waiting until deployment of an app to address security is like putting on a helmet after you crash your bike. 

Apiiro is leveraging AI to analyze feature designs and catch potential vulnerabilities before a single line of code is written. This proactive approach saves time and resources while also ensuring security is baked into your applications from the start. 

Ready to shift left? Learn how Apiiro is redefining secure development and get ahead of risks before they become real threats.

Using AI for threat modeling and security design reviews is very promising. I gave a number of examples of both in my TL;DR: Applying AI to Cybersecurity talk.

Cloud Security

What’s the worst place to leave your secrets? – What happens to AWS credentials left in public places
Cybenari’s Idan Ben Ari deployed canary tokens (fake AWS credentials) across various public locations and analyzed access attempts. Creds on npm were used in under a minute, PyPI and GitHub in around 2 minutes, Pastebin in ~1 hour, a web server in ~2 days, DockerHub in ~7 days, and BitBucket and GitLab… never.
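The detection side of a canary-credential experiment like this can be sketched as a CloudTrail filter keyed on the planted access key ID. Everything below (the key ID, the event shapes, the function name) is an invented illustration, not code from the Cybenari post:

```python
# Hypothetical sketch: flag CloudTrail events issued with a planted canary
# access key. Key ID and event records below are made-up examples.
CANARY_KEY_IDS = {"AKIAEXAMPLECANARY123"}

def canary_hits(events):
    """Return (eventName, sourceIPAddress) pairs for events that used a canary key."""
    hits = []
    for e in events:
        key_id = e.get("userIdentity", {}).get("accessKeyId")
        if key_id in CANARY_KEY_IDS:
            hits.append((e.get("eventName"), e.get("sourceIPAddress")))
    return hits

events = [
    {"eventName": "GetCallerIdentity",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLECANARY123"},
     "sourceIPAddress": "203.0.113.7"},
    {"eventName": "ListBuckets",
     "userIdentity": {"accessKeyId": "AKIAREALKEY456"},
     "sourceIPAddress": "198.51.100.2"},
]
print(canary_hits(events))  # only the canary-key event is flagged
```

In practice the timestamp of the first hit per location is what yields the “time to first use” numbers above.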

Your queues, your responsibility
Plerion’s Sid Rajalakshmi conducted a large-scale scan of AWS SQS queues, finding 209 publicly accessible queues. Methodology: he gathered ~250K raw AWS account IDs from GitHub repos and other public sources, filtered them to ~215K valid account IDs, built a wordlist of common queue names by searching GitHub and using Sourcegraph, and then constructed requests that check whether a queue exists (without accessing data or making changes) for each account ID × common queue name × 10 most popular regions.
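The enumeration step boils down to a cross product over accounts, names, and regions. A minimal sketch, with illustrative queue names and regions (the actual existence check against AWS is deliberately left out):

```python
from itertools import product

# Illustrative inputs; the real research used ~215K account IDs, a
# GitHub/Sourcegraph-derived wordlist, and the 10 most popular regions.
REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]
COMMON_NAMES = ["prod-events", "dead-letter", "notifications"]

def candidate_queue_urls(account_ids, names=COMMON_NAMES, regions=REGIONS):
    """Build the account-ID x queue-name x region candidate URL space."""
    return [
        f"https://sqs.{region}.amazonaws.com/{account}/{name}"
        for account, name, region in product(account_ids, names, regions)
    ]

urls = candidate_queue_urls(["123456789012"])
print(len(urls))  # 1 account x 3 names x 3 regions = 9 candidates
```

Each candidate URL would then be probed with a read-only metadata call to see whether the queue exists and is publicly reachable.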

A Fresh Perspective on Exfiltrating ECS Task Metadata Credentials
CRED’s Saransh Rana demonstrates how an attacker with access to an EC2 host running ECS tasks can extract and exfiltrate IAM credentials for all task roles. The attack leverages the /proc filesystem to read container environment variables, including the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI used to fetch temporary credentials. Exfiltration methods include sending Base64-encoded creds to an attacker-controlled SQS queue.
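The /proc step relies on the fact that a container’s environment is readable as NUL-delimited KEY=VALUE pairs in /proc/&lt;pid&gt;/environ. A small parsing sketch (the sample bytes are fabricated, and the actual credential fetch against the ECS metadata endpoint is only noted in a comment):

```python
# Container environment variables live in /proc/<pid>/environ as
# NUL-delimited KEY=VALUE pairs; sample bytes below are fabricated.
CRED_VAR = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"

def parse_environ(raw: bytes) -> dict:
    """Parse /proc/<pid>/environ contents into a dict of env vars."""
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode()] = value.decode()
    return env

sample = (b"PATH=/usr/bin\x00"
          b"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/abc-123\x00")
env = parse_environ(sample)
# An attacker on the host would then GET http://169.254.170.2 + this path
# to receive the task role's temporary credentials.
print(env[CRED_VAR])
```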

My Methodology to AWS Detection Engineering (Part 1: Object Selection)
Chester Le Bron discusses threat detection engineering in AWS, focusing on tailored event aggregation and risk-based alerting using specific CloudTrail fields (userIdentity.arn, userIdentity.principalId, target resource ARN and principal ID, etc.), which enable more reliable event correlation than ephemeral attributes like IP addresses.

In Part 2: Risk Assignment, Chester describes risk scoring in more detail using Splunk and SPL, evaluating severity, fidelity, and base risk scores, handling deduplication, and other tips.
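The core aggregation idea can be illustrated outside Splunk: accumulate per-detection risk scores keyed on a stable identity field like userIdentity.arn and alert when the total crosses a threshold. The scores, threshold, and events below are toy values, not Chester’s actual SPL or scoring model:

```python
from collections import defaultdict

# Toy risk scores per detection; real deployments would tune these by
# severity and fidelity as the posts describe.
RISK_SCORES = {"ConsoleLogin": 10, "CreateAccessKey": 30, "PutBucketPolicy": 25}
THRESHOLD = 50

def risky_principals(events, threshold=THRESHOLD):
    """Sum risk per userIdentity.arn; return principals over threshold."""
    totals = defaultdict(int)
    for e in events:
        arn = e["userIdentity"]["arn"]
        totals[arn] += RISK_SCORES.get(e["eventName"], 0)
    return {arn: score for arn, score in totals.items() if score >= threshold}

events = [
    {"eventName": "ConsoleLogin",    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/mallory"}},
    {"eventName": "CreateAccessKey", "userIdentity": {"arn": "arn:aws:iam::111122223333:user/mallory"}},
    {"eventName": "PutBucketPolicy", "userIdentity": {"arn": "arn:aws:iam::111122223333:user/mallory"}},
    {"eventName": "ConsoleLogin",    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/alice"}},
]
print(risky_principals(events))  # only mallory (10+30+25 = 65) crosses 50
```

Keying on the ARN rather than source IP means the score survives the attacker rotating infrastructure mid-intrusion.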

Container Security

The hater’s guide to Kubernetes
Paul Butler explains how his startup successfully uses Kubernetes by focusing on a small subset of its features. They primarily use Deployments, Services, CronJobs, ConfigMaps, and Secrets, while cautiously using StatefulSets and RBAC. They avoid hand-writing YAML (using Pulumi instead), non-built-in resources and operators, Helm, service meshes, and Ingress resources.

Kubernetes Testing Environment for EKS, GKE, AKS
Orca’s Roi Nisimi announces KTE, an open-source Kubernetes Testing Environment for AWS (EKS), Azure (AKS), and GCP (GKE), designed to help organizations identify and address potential vulnerabilities before production. Using KTE, orgs can simulate various attack scenarios, test security patches, and evaluate the effectiveness of security configurations and policies.

It supports a number of OSS security tools, including polaris, kor, trivy-operator, kubesec, checkov, and kube-score, and recently added support for a basic AI assistant to query Kubernetes security findings, leveraging RAG and GPT-4o-mini.

Supply Chain

jstawinski/GitHub-Actions-Attack-Diagram
An overview diagram based on common attack paths John Stawinski and Adnan Khan have used in identifying GitHub Actions vulnerabilities. Starting with read-only or write access to a GitHub organization/repository, you can follow the diagram to determine how to identify and exploit GitHub Actions misconfigurations, from self-hosted runner takeover, to PWN requests, to secrets exfiltration.

3.7 Million Fake GitHub Stars: A Growing Threat Linked to Scams and Malware
Socket’s Hao He describes the issue of fake GitHub stars (pay $0.10 per star), which are used to make repos seem legitimate, to spread malware, cryptocurrency stealers, etc. Hao describes their methodology in finding fake stars: a low activity heuristic (GitHub users that star a repo and then become inactive), and a clustering heuristic (detecting groups of accounts that similarly star repos).
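A toy version of the low-activity heuristic: flag stargazers who have essentially no recorded activity beyond the star and who went quiet immediately afterwards. The field names and thresholds are invented for illustration, not Socket’s actual model:

```python
from datetime import date

# Hypothetical stargazer records; real detection would pull these from
# the GitHub events API and combine this with the clustering heuristic.
def low_activity_stargazers(stargazers, max_events=1, quiet_days=30, today=None):
    """Flag accounts whose only activity is the star and which then went silent."""
    today = today or date.today()
    flagged = []
    for user in stargazers:
        quiet = (today - user["last_event"]).days >= quiet_days
        if user["total_events"] <= max_events and quiet:
            flagged.append(user["login"])
    return flagged

stargazers = [
    {"login": "bot-4821",  "total_events": 1,   "last_event": date(2024, 1, 5)},
    {"login": "real-dev",  "total_events": 240, "last_event": date(2024, 9, 1)},
]
print(low_activity_stargazers(stargazers, today=date(2024, 9, 2)))  # ['bot-4821']
```

Either heuristic alone produces false positives (plenty of real users star one repo and leave), which is why the paper pairs it with cluster detection across accounts.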

Blue Team

cisagov/ScubaGear
Automation to assess the state of your M365 tenant against CISA's baselines.

State-backed attackers and commercial surveillance vendors repeatedly use the same exploits
Clement Lecigne of Google’s Threat Analysis Group (TAG) describes watering hole attacks targeting Mongolian government websites, delivering exploits for both iOS and Android devices. The campaigns, linked to Russian government-backed actor APT29, reused exploits previously seen in commercial surveillance vendor tools from Intellexa and NSO Group.

💡 Note: users with lockdown mode enabled were not affected even when running a vulnerable iOS version.

Red Team

RoseSecurity/Red-Teaming-TTPs
By RoseSecurity: Useful techniques, tactics, and procedures for red teamers and defenders, with cheatsheets and guides, covering cloud, ICS, Linux, macOS, threat intel, web, Windows, and more.

tehstoni/tryharder
A C++ staged shellcode loader with evasion capabilities, designed to work with Sliver and capable of ingesting shellcode from other sources like Cobalt Strike.

AI + Security

jthack/ffufai
By Joseph Thacker: An AI-powered wrapper for the popular web fuzzer ffuf that automatically suggests file extensions for fuzzing based on the target URL and its headers, using either OpenAI's or Anthropic's models.

WebFuzzing/EvoMaster
By Andrea Arcuri et al: An AI-driven tool for automatically generating system-level test cases (fuzzing) for web/enterprise applications. Currently targeting whitebox and blackbox testing of Web APIs, like REST, GraphQL and RPC (e.g., gRPC and Thrift). It uses an evolutionary algorithm and dynamic program analysis to generate effective test cases.

The tech behind Semgrep Assistant’s triage and remediation guidance
Rohit Jayaram and Chushi Li walk through some nice details about the prompt chains and evaluation loops behind Semgrep Assistant’s AI autofix and rule generation features. Assistant leverages project-specific data like dependencies and prior fixes, RAG for pulling in additional context (e.g. Semgrep rule documentation, OWASP docs), self-evaluation loops, and more.

💡 I love posts that actually get into some of the tactical weeds about how to make systems that leverage LLMs Actually Work™️ in the real world, which I find is meaningfully different than “mostly works 80% of the time on my machine.” Noice!

Provisioning cloud infrastructure the wrong way, but faster
Trail of Bits’ Artem Dinaburg walks through how GPT-4o and Claude Sonnet 3.5 readily generate insecure Terraform (hard-coded credentials) and insecure code to generate passwords (doesn’t use secure sources of entropy).
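For contrast with the insecure output described above, here is a sketch of the password task done with the OS CSPRNG via Python’s stdlib `secrets` module rather than the non-cryptographic `random` module (length and character set are arbitrary choices, not from the post):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a password using a cryptographically secure source of entropy."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(len(generate_password()))  # 20
```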

💡 I agree that most developers will probably write a very simple, non-security-focused prompt to generate code, but “when I mention nothing about security, LLMs don’t focus on security when generating code” is not that surprising. It would have been nice if the post had compared how security-focused prompts impact output quality, or had the model evaluate its prior output.

💡 Project idea: it would be cool to determine additional prompt text (a persona, step-by-step instructions, something else?) that you can add to any code generation prompt to meaningfully improve the security of the output, and to actually measure that improvement at some reasonable scale.

Misc

Security Research Threats
An ongoing collection of legal threats made against Security Researchers: over-reactions, demands, and cease & desist letters against good faith research. H/T Casey Ellis for sharing.

Especially relevant recently, given that the City of Columbus has filed a lawsuit against a security researcher for showing that the ransomware attack on the city’s systems, which officials claimed “didn’t steal any valuable or usable info,” actually did.

  • uv seems like a nice pip alternative for Python projects

  • Super/Man: The Christopher Reeve Story - A moving trailer. He seems like an inspiring man.

  • Researchers at the University of British Columbia have built an ‘AI Scientist’ that has been able to make some incremental improvements on existing algorithms and ideas. No crazy breakthroughs, but I still find it impressive that it can make some improvements autonomously.

  • The U.S. government has restricted the export of high-end AI chips to China, so state-linked Chinese entities are using Amazon Bedrock or Azure to gain access to frontier models or advanced GPUs.

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler