[tl;dr sec] #271 - Threat Modeling (+ AI), Backdoored GitHub Actions, Compromising a Threat Actor's Telegram

Threat modeling (with) LLMs, tj-actions woes, reading a threat actor's Telegram C2

Hey there,

I hope you’ve been doing well!

🎖️ Working on Your Personality

One family ritual I enjoy is having group FaceTime calls.

We do the normal stuff, like catching up on each other’s lives, wishing each other happy birthday, and having my siblings and me alternate doing push-ups while the others count out loud.

Two recent funny things stood out that I wanted to share.

First, I forget how we got on this subject, but my mom said something like, “…you’ve got this going well, you’ve got that going well… I guess you’ve just got to work on your personality.” 😂 

So now “You’ve just gotta work on your personality” is our collective go-to advice whenever one of us is struggling with something.

Second, my sister has taken to calling her boyfriend “bro,” which is… awkwardly close to “brother,” especially when we’re both on the call.

So if you need an endearing nickname for your significant other, and “baby” and “dear” aren’t cutting it, try “bro.” 😂 

Sponsor

📣 Secrets Are Dead: Why Machine and Workload Identities Are the Future

Static secrets like API keys, tokens, and passwords introduce major security risks, create compliance headaches, and are difficult to manage at scale. The future of cloud security lies in dynamic, cryptographic machine and workload identities—eliminating static secrets and enforcing zero-trust authentication across your infrastructure.

Join this webinar to explore how organizations are using short-lived certificates, automated identity issuance, and granular access controls to eliminate credential-based risks. Learn best practices for securing workloads across hybrid and multi-cloud environments while simplifying access management and ensuring compliance.

👉 Register Now 👈

Lots of breaches are due to stolen or leaked secrets, so it’s nice to hear about reducing credential-based risks 👍️

AppSec

Sign in as anyone: Bypassing SAML SSO authentication with parser differentials
GitHub’s Peter Stöckli describes two critical authentication bypass vulnerabilities in the ruby-saml library up to version 1.17.0. The vulnerabilities, stemming from parser differentials between REXML and Nokogiri, allow attackers with a single valid signature to impersonate any user.
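
To make the failure mode concrete, here’s a minimal Python sketch of the anti-pattern, with stub functions standing in for the two parsers (ruby-saml’s actual code is Ruby, using REXML and Nokogiri): verifying the signature against one parser’s tree while reading the assertion from another’s is what opens the door.

```python
# Illustrative sketch of the parser-differential anti-pattern -- not
# ruby-saml's actual code. The parse/verify helpers are hypothetical stubs.
import xml.etree.ElementTree as ET

def parse_for_signature(data: bytes) -> ET.Element:
    return ET.fromstring(data)  # stand-in for parser #1 (REXML)

def parse_for_assertion(data: bytes) -> ET.Element:
    # Stand-in for parser #2 (Nokogiri). The real bug needs the two parsers
    # to build *different* trees from the same bytes (e.g. differing
    # DOCTYPE/comment handling), which these toy stubs don't reproduce.
    return ET.fromstring(data)

def verify_signature(doc: ET.Element) -> None:
    pass  # stand-in for XML-DSig verification; raises on failure in real code

def vulnerable_login(saml_response: bytes) -> str:
    verify_signature(parse_for_signature(saml_response))  # checks parser #1's view
    assertion = parse_for_assertion(saml_response)        # trusts parser #2's view
    return assertion.findtext(".//NameID")

def safer_login(saml_response: bytes) -> str:
    doc = parse_for_signature(saml_response)  # one parser, one tree
    verify_signature(doc)
    return doc.findtext(".//NameID")          # read only from the verified tree
```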

Threat modeling the TRAIL of Bits way
Kelly Kaoudis introduces TRAIL (Threat and Risk Analysis Informed Lifecycle), a threat modeling process developed by Trail of Bits that combines elements from existing methodologies like Mozilla's Rapid Risk Assessment (RRA) and NIST guidelines. TRAIL analyzes connections between system components to uncover design-level weaknesses and architectural risks, going beyond individual vulnerabilities. The process involves building a detailed system model, identifying threat actor paths, and documenting threat scenarios, as well as including short-term mitigation options and long-term strategic recommendations. The post gives examples from ToB’s assessments of Arch Linux Pacman and Linkerd.

Continuous TRAIL
Follow-up post to the above: Spencer Michaels, Paweł Płatek, and Kelly Kaoudis describe how to further tailor a TRAIL threat model, how to maintain it and when to update it as development continues, and how to make use of it. Focus on keeping the trust zones, threat actors, trust zone connections, and security-relevant assumptions up to date. Questions to consider when deciding whether to update your threat model (a toy sketch of these checks follows the list):

  • Does this change add a new system component (e.g., microservice, module, major feature, or third-party integration)?

  • Does this change add a new trust zone (e.g., by adding a new network segment)?

  • Does this change introduce a new threat actor (e.g., a new user role)?

  • Does this change add a new connection between system components that crosses a boundary between trust zones (e.g., a new application service on an existing server instance that can be called by a service in a different zone)?
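
If you track your system model as data, these triggers are easy to check mechanically on each change. A toy Python encoding (my own sketch, not a Trail of Bits tool):

```python
from dataclasses import dataclass

@dataclass
class SystemModel:
    components: set[str]
    trust_zones: dict[str, str]        # component -> zone
    threat_actors: set[str]
    connections: set[tuple[str, str]]  # (caller, callee)

def update_triggers(old: SystemModel, new: SystemModel) -> list[str]:
    """Flag which of the four checklist questions a proposed change trips."""
    triggers = []
    if new.components - old.components:
        triggers.append("new system component")
    if set(new.trust_zones.values()) - set(old.trust_zones.values()):
        triggers.append("new trust zone")
    if new.threat_actors - old.threat_actors:
        triggers.append("new threat actor")
    for src, dst in new.connections - old.connections:
        if new.trust_zones.get(src) != new.trust_zones.get(dst):
            triggers.append(f"new cross-zone connection: {src} -> {dst}")
    return triggers  # any hit => revisit the threat model
```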

💡 Expert human-driven threat models are obviously the best, but I think there’s a lot of promise in leveraging LLMs (and program analysis) to do continuous threat model updates by analyzing design docs/new feature specs and code changes. There are a number of start-ups and OSS tools doing this, exciting times.

Sponsor

No matter the maturity level of your overall program, there is room for canaries. In this post, Rami McCarthy walks through the different categories and maturity levels to consider when building out your security canary program. Some ideas you will have thought of already, some you may not have!

People widely seem to find canaries quite useful, so it’s great to see a maturity model for them 🤘

Cloud Security

MistakenVMtity: Another cloud image confusion attack
Ian McKay describes how Azure CLI's image search functionality can be exploited for image confusion attacks, similar to Seth Art’s post on AWS AMIs (previously in tl;dr sec). Ian shows how an attacker could create a malicious VM image with a matching offer name (e.g. "GitHub-Enterprise") that appears in search results, potentially tricking users into deploying it.
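
A cheap guardrail is to resolve what a search actually returns and check publishers against an allowlist before anyone deploys. A sketch using the real `az vm image list` command (the allowlist contents and offer name are illustrative):

```python
"""Flag marketplace images whose publisher isn't on your vetted allowlist."""
import json
import subprocess

ALLOWED_PUBLISHERS = {"GitHub", "Canonical"}  # your vetted publishers here

out = subprocess.run(
    ["az", "vm", "image", "list", "--all",
     "--offer", "GitHub-Enterprise", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

for image in json.loads(out):
    if image.get("publisher") not in ALLOWED_PUBLISHERS:
        print(f"untrusted publisher: {image['publisher']} -> {image['urn']}")
```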

The Risk You Can't Afford to Ignore: AWS SES and Email Spoofing
Chandrapal Badshah describes a hidden risk in AWS Simple Email Service (SES) where verifying an entire domain allows any IAM user with the right permissions to spoof any email address in that domain. By default, these emails aren't logged, making detection difficult. He recommends using individually verified email identities where possible, or applying a Sending Authorization policy to allowlist specific email addresses.
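
To make the risk concrete, here’s a boto3 sketch (addresses and region are placeholders): once example.com is verified as a domain identity, nothing in SES itself stops a permitted principal from picking any From address in that domain.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    # Never individually verified -- the domain identity covers it.
    Source="ceo@example.com",
    Destination={"ToAddresses": ["finance@example.com"]},
    Message={
        "Subject": {"Data": "Urgent wire transfer"},
        "Body": {"Text": {"Data": "Please process the attached invoice today."}},
    },
)
```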

Evaluating AWS Native Approaches for Detecting Suspicious API Calls
Adan Alvarez evaluates three AWS native approaches for detecting suspicious API calls: 1) EventBridge → SNS → Email, 2) CloudTrail → S3 → Lambda → SNS → Email, and 3) CloudTrail → CloudWatch → MetricFilter → MetricAlert → SNS → Email.

Choose 1) if you clearly know what to do when a specific event occurs, and 3) is great for simple alerts. If manual investigation is expected and speed is less important, choose 2) or 3); 2) is the best choice if you require more complex alerts with additional context.
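
For flavor, a minimal boto3 sketch of approach 1): an EventBridge rule routing a couple of classically suspicious CloudTrail calls to an existing SNS topic (the topic ARN and event names are placeholders, and the topic’s resource policy must also allow events.amazonaws.com to publish):

```python
import json
import boto3

events = boto3.client("events")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # placeholder

events.put_rule(
    Name="suspicious-api-calls",
    EventPattern=json.dumps({
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["DeleteTrail", "StopLogging"]},
    }),
)
events.put_targets(
    Rule="suspicious-api-calls",
    Targets=[{"Id": "sns", "Arn": TOPIC_ARN}],
)
```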

Shrinking the haystack: The six phases of cloud threat detection
Red Canary’s Brian Davis describes a six-phase approach for detecting cloud threats in the control plane, focusing on telemetry from AWS CloudTrail, Azure Monitor, and GCP Audit Logs. The phases are: Ingest (getting data into your processing system), Standardize (normalizing formats, for example using a standard like OCSF), Combine (correlating events), Detect (applying analytics/detectors), Suppress (reducing noise), and Respond. Nice overview with some discussion on the important aspects of each part.
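
To illustrate the Standardize phase, here’s a toy normalizer mapping CloudTrail and GCP Audit Log records onto one internal shape; a real pipeline would target a full standard like OCSF rather than this made-up mini-schema:

```python
def normalize(provider: str, record: dict) -> dict:
    """Map provider-specific audit records onto one common shape."""
    if provider == "aws":  # CloudTrail record
        return {
            "time": record["eventTime"],
            "actor": record.get("userIdentity", {}).get("arn"),
            "action": record["eventName"],
            "source_ip": record.get("sourceIPAddress"),
        }
    if provider == "gcp":  # Cloud Audit Log entry
        payload = record["protoPayload"]
        return {
            "time": record["timestamp"],
            "actor": payload["authenticationInfo"]["principalEmail"],
            "action": payload["methodName"],
            "source_ip": payload.get("requestMetadata", {}).get("callerIp"),
        }
    raise ValueError(f"unknown provider: {provider}")
```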

Supply Chain

Announcing OSV-Scanner V2: Vulnerability scanner and remediation tool for open source
Google’s Rex Pan and Xueqin Cui announce the launch of OSV-Scanner V2, which now integrates OSV-SCALIBR (expands support for the kinds of dependencies that can be extracted from projects and containers), supports layer and base image-aware container scanning, supports guided remediation for Java via Maven pom.xml, and more.

Harden-Runner detection: tj-actions/changed-files action is compromised
StepSecurity’s Varun Sharma describes how their Harden-Runner (basically like EDR for GitHub Actions runners) detected anomalous traffic from the tj-actions/changed-files Action, and shares a summary of the incident, details of the malicious commit that caused public repos to leak secrets to logs, etc.

Shortly after, Semgrep posted a blog with details on determining if you’re affected and next steps, and Lewis Ardern wrote a rule you can run locally to check: semgrep --config r/10Uz5qo/semgrep.tj-actions-compromised .

  • See also Wiz’s post about it.

  • Wiz’s Rami McCarthy shared a follow-up post indicating that an additional supply chain attack on reviewdog/actions-setup@v1 may have contributed to the compromise of tj-actions/changed-files.
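
One takeaway from the incident: pin third-party Actions to full commit SHAs rather than mutable tags. A quick local check, sketched in Python (my own snippet, not StepSecurity’s or Semgrep’s tooling):

```python
"""Flag `uses:` references pinned to a tag/branch instead of a commit SHA."""
import pathlib
import re

USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

for wf in pathlib.Path(".github/workflows").glob("*.y*ml"):
    for n, line in enumerate(wf.read_text().splitlines(), 1):
        m = USES.search(line)
        if m and not FULL_SHA.match(m.group(2)):
            print(f"{wf}:{n}: {m.group(1)} pinned to '{m.group(2)}', not a SHA")
```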

💡 Congrats to Wiz for being acquired by Google for $32B 🥳 Wiz joins the esteemed club of tl;dr sec sponsors with excellent exits 😎 See also the Strategy of Security reflections on the acquisition.

Blue Team

detection.studio
Alex introduces detection.studio, a browser-based tool for converting Sigma rules to SIEM-specific languages like Splunk's SPL, Elasticsearch's ES|QL, and Grafana's Loki. The tool performs local in-browser conversion using pySigma, offers improved support for pipelines and filters, and includes features like persistent workspaces and rule sharing.
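
If you’d rather script it than use the browser, the same conversion works locally with pySigma, assuming `pysigma` and `pysigma-backend-splunk` are installed (the toy rule is mine):

```python
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

# Toy rule: flag encoded PowerShell command lines.
rule = SigmaCollection.from_yaml("""
title: Encoded PowerShell
logsource:
    category: process_creation
    product: windows
detection:
    sel:
        CommandLine|contains: ' -enc '
    condition: sel
""")

# Prints the SPL translation(s), e.g. ['CommandLine="* -enc *"']
print(SplunkBackend().convert(rule))
```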

Krook9d/PurpleLab
A project enabling you to easily deploy an entire lab to create/test your detection rules, simulate logs, download and run malware and MITRE ATT&CK techniques, restore the sandbox, and more.

Compromising Threat Actor Communications
Super cool post by Ben Folland on compromising threat actors’ Telegram-based command and control (C2) communications and analyzing the data. Using VirusTotal, he found malware samples that communicate with api.telegram.org, from which he extracted Telegram bot tokens, which could then be used to forward hundreds of screenshots and keylogs from when the threat actor tested his payload on his production hacking box.

This access allowed Ben to identify additional malspam campaigns being performed by this actor, the domains owned, email addresses used for services like Cloudflare and his VPS, and also all of the plaintext credentials used for the campaign’s infrastructure.
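
The pivot is this simple once you have a token: a Telegram bot token is a bearer credential for the Bot API. A sketch with a fake placeholder token (and only do this against infrastructure you’re authorized to investigate):

```python
import requests

TOKEN = "1234567890:AAExampleExampleExampleExample"  # recovered from the sample

# Who is this bot?
print(requests.get(f"https://api.telegram.org/bot{TOKEN}/getMe").json())

# Pull pending messages from the operator's C2 chat.
print(requests.get(f"https://api.telegram.org/bot{TOKEN}/getUpdates").json())
```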

Red Team

joaoviictorti/shadow-rs
By João Victor: A Windows kernel rootkit written in Rust, demonstrating advanced techniques for kernel manipulation while leveraging Rust’s safety and performance features.

Measuring the Success of Your Adversary Simulations
TrustedSec’s Jason Lang discusses how to measure the success of Adversary Simulation (AdSim) engagements, focusing on three key goals: testing controls, achieving defensive enlightenment (clarity on gaps in security controls), and maintaining professional courtesy. He provides detailed metrics and questions to evaluate each goal, including assessing the effectiveness of preventative and detective controls, the level of collaboration between red and blue teams, and the professionalism of the engagement.

AI + Security

👨‍🍳 This week I give you: threat modeling with LLMs, and threat modeling LLM systems.

TM-Bench - A Benchmark for LLM-Based Threat Modeling
Matt Adams has created TM-Bench, the first benchmark for evaluating LLMs’ capabilities in threat modeling across a diverse range of application scenarios, with a focus on local models that can run on consumer-grade hardware. Models are evaluated on STRIDE coverage and accuracy, threat completeness, and technical validity, with outputs graded by Claude 3.7 Sonnet using an “LLM-as-a-judge” approach.
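
An “LLM-as-a-judge” setup can be quite small. A minimal sketch in the spirit of TM-Bench (not Matt’s actual harness), assuming the anthropic Python SDK; the rubric is illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def judge(threat_model: str) -> str:
    """Ask a stronger model to grade a generated threat model."""
    msg = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Score this threat model 0-10 on each STRIDE category, "
                       "completeness, and technical validity. Return JSON only."
                       "\n\n" + threat_model,
        }],
    )
    return msg.content[0].text
```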

Scaling Threat Modeling with AI: Generating 1000 Threat Models Using Gemini 2.0 and AI Security Analyzer
Marcin Niemiec walks through using Google’s Gemini 2.0 to generate threat models and security documentation at scale for open source projects. Marcin found he needed a multi-step prompt strategy to get good results (vs. one long detailed prompt), and shares example results from Python’s Flask framework. The post links to the prompts used to generate four different types of security documents: security design documentation, threat modeling, attack surface analysis, and attack tree analysis.

In a follow-up post, Marcin walks through using Google’s Gemini 2.0 Flash Thinking Experimental model to perform more in-depth threat modeling.
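
The multi-step idea looks roughly like this (a sketch assuming the google-generativeai client, not Marcin’s actual prompts): first have the model produce a structured system description, then threat-model against that.

```python
import google.generativeai as genai

genai.configure(api_key="...")  # placeholder
model = genai.GenerativeModel("gemini-2.0-flash")

design = open("ARCHITECTURE.md").read()  # placeholder input doc

# Step 1: build a structured system description first...
summary = model.generate_content(
    "List this project's components, data flows, and trust boundaries:\n\n"
    + design
).text

# Step 2: ...then threat-model against that structured summary.
threats = model.generate_content(
    "Given this system description, enumerate threats per component with "
    "likelihood, impact, and mitigations:\n\n" + summary
).text
print(threats)
```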

Agentic AI Threat Modeling Framework: MAESTRO
Ken Huang introduces MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome), a novel threat modeling framework for Agentic AI that addresses gaps in existing threat modeling frameworks (STRIDE, PASTA, …). MAESTRO is built around a 7-layer reference architecture, going from 1) Foundation Models, to 2) Data Operations, 3) Agent Frameworks, 4) Deployment and Infrastructure, 5) Evaluation and Observability, 6) Security and Compliance, and 7) Agent Ecosystem. The framework considers AI-specific threats like adversarial attacks, goal misalignment, and multi-agent interactions, while providing layer-specific and cross-layer mitigation strategies.

Analyzing Secure AI Architectures
NCC Group shares several architectural patterns for secure AI application design, focusing on isolating trusted and untrusted execution contexts to mitigate risks like prompt injection. Key patterns include (a minimal sketch of the first follows the list):

  1. Gatekeeper: Using separate "user-facing" and "data-facing" model instances to isolate untrusted data.

  2. Orchestration Tree: A high-trust orchestrator model delegates tasks to restricted leaf nodes.

  3. State Machine: Dynamically adjusting model functionality (e.g. what tools can be called, data that can be accessed) based on application context.
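
Here’s that minimal gatekeeper sketch, with a hypothetical `llm()` helper standing in for any model API: untrusted content only ever reaches a quarantined, schema-constrained call, so injected instructions can’t steer the user-facing model or its tools.

```python
import json

def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # stand-in for your model client of choice

def summarize_untrusted(document: str) -> dict:
    # Data-facing instance: no tools, no user context, schema-constrained output.
    raw = llm(
        system='Return ONLY JSON: {"topics": [...], "sentiment": "pos|neg|neutral"}',
        prompt=document,  # untrusted input stays inside this call
    )
    data = json.loads(raw)
    # Validate before anything crosses the trust boundary.
    return {"topics": [str(t) for t in data["topics"][:10]],
            "sentiment": str(data["sentiment"])}

def answer_user(question: str, document: str) -> str:
    facts = summarize_untrusted(document)
    # User-facing instance sees only the validated structure, never the raw doc.
    return llm(system="You are a helpful assistant.",
               prompt=f"{question}\n\nContext: {json.dumps(facts)}")
```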

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler