
[tl;dr sec] #237 - fwd:cloudsec videos, LLM Bugfinders, Burp Suite Guide

>40 awesome CloudSec talks, finding vulns with LLMs, how to use Burp like a pro

Hey there,

I hope you’ve been doing well!

🎶 Funny + Feelz

One of the most delightful things I watched recently is Forrest Brazeal playing songs at fwd:cloudsec.

(If you’re not yet familiar with Forrest, you’re welcome for making your life better.)

He plays a number of hilarious tech/cloud-related songs, including the absolute banger Legacy Land and the moving You Belong.

I love seeing people in our community embrace their creative side and share:

Regardless of your background or skill, I want you to dabble and share your creative work. I bet it will be delightful 😃 

P.S. I’ve made progress on the tl;dr sec Tech Stack post, but it’s turning out to be longer than expected.


Keep your infrastructure safe, even if your IdP is compromised 🔒

Imagine waking up to find your sensitive data exposed, operations halted, and security protocols compromised. This nightmare scenario is increasingly common as attackers target identity ecosystems. If your Identity Provider (IdP) is breached, the fallout can be catastrophic: user impersonation, downgraded security, unauthorized admins, data theft, and full control over your infrastructure.

To address these threats, Teleport 16 includes device trust for verified access, phish-resistant MFA, and administrative actions independent of the IdP, keeping your infrastructure safe no matter what.

👉️ Learn More 👈️

Given recent Identity Provider compromises, not being solely reliant on them is 👌 


Execute commands by sending JSON? Learn how unsafe deserialization vulnerabilities work in Ruby projects
Excellent post by GitHub’s Peter Stöckli on creating an arbitrary code execution gadget for Oj, a Ruby JSON deserialization library. The post links to a repo containing working sample exploits for Oj (JSON), Ox (XML), Psych (YAML), and Marshal (Ruby’s built-in binary serialization format).
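The Ruby specifics are in the post, but the underlying mechanism is the same across languages. Here’s a minimal, hypothetical Python sketch of the equivalent bug in pickle (a harmless eval stands in for os.system):

```python
import pickle

# Hypothetical illustration (not from the post): Python's pickle has the same
# class of bug as Oj's unsafe modes. On deserialization, pickle invokes
# whatever callable __reduce__ returned -- here a harmless eval, but it could
# just as easily be os.system.
class Gadget:
    def __reduce__(self):
        return (eval, ("6 * 7",))

payload = pickle.dumps(Gadget())  # attacker-controlled bytes
result = pickle.loads(payload)    # executes eval("6 * 7") during load
print(result)                     # → 42: code ran before loads() returned
```

Moral: treat deserialization of untrusted data as remote code execution, whatever the language.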

Announcing the Burp Suite Professional chapter in the Testing Handbook
Trail of Bits’ Maciej Domanski announces the new Burp Suite chapter in their Testing Handbook, covering a step-by-step guide to rapidly mastering Burp for testing applications; using Burp Repeater, Intruder, and Collaborator; useful plugins; best practices; and more.

Security firm CertiK found a bug and drained $3M from the Kraken crypto exchange
Trail of Bits’ Dan Guido shares two new Semgrep rules to detect similar kinds of deposit issues. To be honest, I feel like being into crypto is like smoking: years ago it was cool, but given what we know now, it’s a bit negligent. See also: web3isgoinggreat.com.


📣 Stop PII Leaks at the Code-Level & Automate Data Mapping for Compliance

PII leaks in production are costly and disruptive. Privacy compliance workflows are often manual and error-prone, relying on spreadsheets and internal surveys. HoundDog.ai can help. 

Our AI-powered code scanner takes a proactive approach to data security and privacy, stopping PII data leaks at the code level from the very start and automating tedious data mapping tasks for privacy compliance; see the 1-minute explainer video.

Try our free scanner to map out the sensitive data flows in your application and leverage AI to maximize coverage. We would love to hear your feedback at [email protected].

👉️ Try Our Free Scanner Now 👈️

It’s neat to see AI being combined with code analysis, especially around privacy and data security, which doesn’t seem to have gotten its due yet 👍️ 

Cloud Security

By NetSPI’s Scott Weston: an enumeration/exploitation/analysis/download pentesting framework for GCP, modeled after Pacu for AWS. See also Scott’s fwd:cloudsec 2024 talk on it.

A tool that uses Terraform to automatically create vulnerable or concerning infrastructure in your environment to test if your alerting systems will detect it. It currently supports ~50 scenarios inspired by real security events, such as: a public S3 bucket, an AMI shared with another AWS account, an S3 subdomain takeover, a public Jenkins instance, and much more.

The Unauditable, Unmanageable HMAC Keys in Google Cloud
By Vectra’s Kat Traxler. The issues: Google Cloud Audit Logs do not record HMAC key creation or deletion events when the keys are associated with user accounts, there’s no API that lets admins audit HMAC keys associated with user accounts, and there are no Cloud IAM permissions to restrict the creation, deletion, or usage of HMAC keys.

But HMAC keys can be used to create Sigv4 signed headers used to authenticate against the Cloud Storage XML API for up to 7 days, which could be used for persistence or exfiltration.
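As a rough illustration of why that matters, here’s a stdlib-only Python sketch of minting a long-lived SigV4-presigned URL against the GCS XML API from nothing but a stolen HMAC key pair. All key/region values are placeholders and the canonicalization is simplified, but it shows that no further GCP credentials or client libraries are needed:

```python
import datetime, hashlib, hmac, urllib.parse

ACCESS_ID = "GOOG1EXAMPLE"    # hypothetical HMAC access ID
SECRET = "hmac-secret-here"   # hypothetical HMAC secret
HOST, REGION, SERVICE = "storage.googleapis.com", "auto", "s3"  # region is illustrative

def presign_get(bucket: str, key: str, expires: int = 7 * 24 * 3600) -> str:
    """Build a SigV4-presigned GET URL (simplified sketch)."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date, datestamp = now.strftime("%Y%m%dT%H%M%SZ"), now.strftime("%Y%m%d")
    scope = f"{datestamp}/{REGION}/{SERVICE}/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{ACCESS_ID}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),  # 604800s (7 days) is the max
        "X-Amz-SignedHeaders": "host",
    }
    qs = urllib.parse.urlencode(sorted(params.items()))
    # Canonical request: method, path, query, headers, signed headers, payload hash
    canonical = "\n".join(
        ["GET", f"/{bucket}/{key}", qs, f"host:{HOST}\n", "host", "UNSIGNED-PAYLOAD"]
    )
    to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical.encode()).hexdigest()]
    )
    # Derive the signing key by chained HMACs over date/region/service
    sig_key = f"AWS4{SECRET}".encode()
    for part in (datestamp, REGION, SERVICE, "aws4_request"):
        sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(sig_key, to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{HOST}/{bucket}/{key}?{qs}&X-Amz-Signature={sig}"

print(presign_get("victim-bucket", "data.csv"))
```

With valid (rather than placeholder) key material, a URL like this authenticates to Cloud Storage for up to 7 days with no further logging tied to the key itself.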

fwd:cloudsec 2024 talk recordings
If you’re somehow not familiar, fwd:cloudsec is THE cloud security conference. See this playlist for >40 talks; too many great ones to list them all here.

Container Security

Taking a look at Kubernetes Profiling
Rory McCune highlights that Kubernetes profiling is enabled by default in a number of components (API server, scheduler, controller-manager, Kubelet). If an attacker can authenticate to those endpoints, they can potentially change the service’s log level (avoiding detection), access sensitive info, or perform a denial-of-service attack. Rory recommends disabling profiling in production.

A Guide To Kubernetes Logs That Isn't A Vendor Pitch
Graham Helton gives a great overview of Kubernetes logging, categorizing logs into debug and security logs, across four layers: code, container, cluster, and cloud. Graham discusses kubectl logs, sidecar containers, tuning AuditPolicy to collect the right amount of logs, and more.

Stop worrying about 'allowPrivilegeEscalation' 
Christophe Tafani-Dereeper debunks some myths about the allowPrivilegeEscalation option in Kubernetes security contexts. You should turn it off if you can, but in itself it won’t let an exploited app escape its container. Basically the option controls whether a process can gain more privileges than its parent process (think setreuid, setregid). Great FAQ at the bottom.
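For reference, the option lives in the container-level securityContext; a minimal illustrative pod spec (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: example/app:1.0
      securityContext:
        # Sets the no_new_privs flag: child processes (e.g. setuid binaries
        # like sudo) can't gain more privileges than their parent.
        allowPrivilegeEscalation: false
```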


A WAF without false positives?

Now possible.

Last week, we soft launched “firewall” to the world. With firewall, you can automatically block critical injection attacks, introduce rate limiting for APIs, and monitor outbound traffic… all with 99.9% fewer false positives… with just one line of code.

We’re currently focused on Node.js apps and are quickly expanding to PHP, C++, and more.


Supply Chain

The Open Source Problem
Dave Aitel shares that of the top 5,000 PyPI (pip) packages, 310 fulfill the Jia Tan (xz backdoor) criteria: a maintainer with commit access, in the UTC+8 timezone (mostly China), with a Gmail address of the form NAME+NUMBER. More here and here.
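As a toy sketch of the idea (the real analysis queries a graph of package/committer metadata; the maintainer records here are hypothetical), filtering maintainers on those criteria might look like:

```python
import re

# Hypothetical maintainer records -- stand-ins for data pulled from a
# graph database of package/committer metadata.
maintainers = [
    {"pkg": "examplepkg", "email": "wei1987@gmail.com", "utc_offset": 8, "commit_access": True},
    {"pkg": "otherpkg",   "email": "dev@company.com",   "utc_offset": -5, "commit_access": True},
]

# "NAME + NUMBER @ gmail" heuristic
NAME_NUMBER_GMAIL = re.compile(r"^[a-z]+\d+@gmail\.com$", re.IGNORECASE)

def jia_tan_like(m: dict) -> bool:
    # Commit access + UTC+8 timezone + NAME+NUMBER Gmail address
    return (
        m["commit_access"]
        and m["utc_offset"] == 8
        and bool(NAME_NUMBER_GMAIL.match(m["email"]))
    )

flagged = [m["pkg"] for m in maintainers if jia_tan_like(m)]
print(flagged)  # → ['examplepkg']
```

Cheap heuristics like this obviously produce false positives; the interesting part is that the ecosystem is queryable for such properties at all.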

💡 I love the approach of querying the open source ecosystem for various security properties using a graph database.

The State of Software Supply Chain Security
Blackberry shares some stats from a survey of 1,000 senior IT decision-makers and cybersecurity leaders.

  • >75% of software supply chains have experienced cyberattacks in the last 12 months. (really?)

  • 51% of companies were able to recover from a breach within a week; ~40% took a month.

  • 74% of attacks originated from members of the software supply chain that companies were unaware of or did not monitor before the breach.

  • While 78% of companies track the impact of supply chain attacks, only 65% inform their customers about these incidents, potentially due to fear of public shaming or damage to their corporate reputation.

Polyfill supply chain attack hits 100K+ sites
polyfill.js is a popular open source library that adds modern functionality to older browsers. Sansec reports that in February a Chinese company bought the domain and GitHub account, and since then the domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io. After publicizing this, Sansec started “mysteriously” getting DDoS’d 🤔 I wonder by whom? More from Bleeping Computer and Cloudflare, and potentially the dev who sold it.

My colleague Kurt Boberg has created a free Semgrep rule you can use to find uses of this compromised polyfill library and automatically replace it with a safe version.
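Semgrep rules for this kind of check are short. A hypothetical sketch (not Kurt’s actual rule) that flags references to the compromised host might look like:

```yaml
rules:
  - id: polyfill-io-compromised-cdn
    severity: ERROR
    languages: [generic]
    message: cdn.polyfill.io was caught serving malware; self-host the polyfill or switch CDNs
    pattern: cdn.polyfill.io
    paths:
      include:
        - "*.html"
        - "*.js"
```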

Blue Team

Pivot Atlas
An educational pivoting handbook for cyber threat intelligence analysts by Amitai Cohen, with reference material on how to make the best use of various threat activity observables (domains, IP addresses, file hashes, TLS certs, malware samples) → what steps to take to reveal potentially related malicious infrastructure or tooling. Awesome resource, nice work! 🔥 

Red Team

Fuzz Map
A fuzzer for GUIs that automatically identifies states using code coverage and builds a visual map, by Jonathan Chan. Neat demo, and excellently detailed explanation of how it works.

By Caleb Gross: a tool to read Chromium data (namely, cookies and local storage) straight from disk, without spinning up the browser. He primarily uses this to extract tokens from authenticated browser sessions for use in automation, scraping, etc., but it could also be used on a red team to steal a compromised user’s creds.

A tool that provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command and volana executes it for you).

Also TIL: “On many Linux distros, you can prevent a command from being included in bash history by adding a space before you execute the command.”
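The caveat: this depends on HISTCONTROL containing ignorespace (or ignoreboth), which many distros set by default. A quick demo:

```shell
# Demo: with ignorespace set, lines starting with a space never reach history.
bash <<'EOF'
set -o history            # enable history in this non-interactive shell
HISTCONTROL=ignorespace
echo visible > /dev/null
 echo hidden > /dev/null
history                   # lists "echo visible" but not "echo hidden"
EOF
```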

AI + Security

Empowering defenders through our Cybersecurity Grant Program
OpenAI shares some information about eight projects that have received funding from its Cybersecurity Grant Program.

Mostly academics, some industry, working on topics like: fortifying LLMs against prompt injection, CoGuard (covered next), hardening inference infrastructure for LLMs, using AI for OSINT, preventing training data leakage, identifying and reasoning about code vulnerabilities, building SOC agents, autonomous red team agents / solving CTF challenges, and more.

Enhancing Automated Configuration Security Capabilities with OpenAI Grant Funding
CoGuard describes how they used an LLM-based workflow to automatically create configuration security rules by deriving security-relevant configuration parameters and (static analysis) rules from available software documentation and manuals. More methodology details on GitHub.

💡 To oversimplify, there are basically two time-intensive parts to building any security scanning tool (SAST, cloud, whatever): a set of rules codifying security domain expertise, and an engine that can look for those things (in code, your cloud environment, etc.) based on those rules. If you can automate the rule generation part, that can save a lot of time and effort.

Evaluating Offensive Security Capabilities of Large Language Models
Google Project Zero’s Sergei Glazunov and Mark Brand explore the offensive security capabilities of LLMs using the CyberSecEval 2 benchmark, released by Meta with Llama 3.

They were able to achieve up to a 20x improvement on vulnerabilities like buffer overflow and memory corruption via: giving the LLMs space to reason (vs just spit out an answer), giving the model access to an interactive environment, giving it access to tools like a debugger and Python scripting environment, setting up tasks so that potential solutions can be verified automatically, and letting the models explore multiple independent hypotheses.

💡 The key insight here, and I see this again and again, is basically that the scaffolding and infrastructure you build around LLMs can drastically increase task performance. Regularly like 2x to 3x, and in this case up to 20x.

Automated LLM Bugfinders
Dave Aitel’s response to the above Google Project Zero blog post, also an excellent read. Both posts agree that giving models tools is key, but Dave’s proposed approach is a bit different. He argues:

  • Instead of having Agents handle the control flow of analysis, transitions should be handled by Python logic (more reliable, won’t get confused). Basically a non-deterministic pushdown automaton (NDPA).

  • Use many different types of Agents, each with their own specialized prompt, focused on small tractable problems. Have a data structure that allows them to pass this data to the next set of agents.

  • Specialized tools beat more generalized tools. e.g. a backward-slice tool > a generic code_browser tool, a smart function tracer > a generalized debugger.
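A toy sketch of those ideas (all names are hypothetical, and stub functions stand in for LLM calls): deterministic Python owns the state transitions, while each specialized agent answers one small question and passes its result along:

```python
# Hypothetical sketch: Python controls the analysis state machine; each
# "agent" would be a specialized prompt + LLM call in real life.
def triage_agent(code: str) -> dict:
    return {"suspicious": "memcpy" in code}        # stub for an LLM call

def slice_agent(code: str) -> dict:
    return {"slice": [l for l in code.splitlines() if "memcpy" in l]}

def verify_agent(state: dict) -> dict:
    return {"confirmed": bool(state["slice"])}

def analyze(code: str) -> dict:
    # Transitions live in plain Python, not in an agent's context window,
    # so the control flow can't get "confused."
    state = {"code": code}
    state |= triage_agent(code)
    if not state["suspicious"]:
        return {"verdict": "clean"}
    state |= slice_agent(code)      # specialized tool: backward slice
    state |= verify_agent(state)    # separate agent verifies the finding
    return {"verdict": "bug" if state["confirmed"] else "clean"}

print(analyze("memcpy(dst, src, n);"))  # → {'verdict': 'bug'}
```

The shared `state` dict is the data structure that lets each agent hand its answer to the next one.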



The Gili Ra’anan model: Questions emerging from Cyberstarts' remarkable success
Some serious 🌶️ and 🍵 here: allegedly CISOs are getting points worth tens of thousands of dollars for buying security products from a VC fund’s portfolio companies, and those purchases make the start-ups appear more valuable. See Robert Hansen’s (RSnake) post for more.

Silicon Valley steps up staff screening over Chinese espionage threat
“In March, US prosecutors charged a former software engineer at Google with allegedly stealing AI trade secrets while secretly working with two companies based in China. Tesla, Micron and Motorola have all been subject to ‘egregious’ theft of intellectual property by China in the past five years, according to the US.”

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!