- tl;dr sec
- [tl;dr sec] #189 - CISA on Defending CI/CD, Backdooring NPM via S3, AI + Reverse Engineering
[tl;dr sec] #189 - CISA on Defending CI/CD, Backdooring NPM via S3, AI + Reverse Engineering
CISA/NSA's guide on defending cloud CI/CD, backdooring NPM modules depending on binaries in S3, I'm collecting AI + cybersecurity resources
I hope you’ve been doing well!
🎇 4th of July
I spent my 4th of July, ironically, with a group of Australians, who taught me what a “bubbler” is.
We did fulfill the American past time of smoking meats though, while listening to the classic Zucc Smokin Meats.
And in typical Bay Area fashion, while on one side of the fire pit people were talking about how people met their partners, on the other side people were discussing low level database architecture trade-offs and if LLMs truly “understand” things.
You know, just normal friends hanging out stuff 😂
Regardless of where you’re based, I hope you had the chance to relax!
🆕 Original Content: AI <> Cybersecurity
I’ve started collecting a number of AI applications to cybersecurity resources in this post.
Currently the post has ~8 OSS reverse engineering tools that leverage LLMs, and some meta analysis I wrote about trends we see and (to my knowledge) currently unexplored applications of AI/ML to reversing that seem promising.
I’m going to be adding sections on a bunch of other topics like AppSec, cloud security, pen testing, etc.
Let me know what you’d like me to add to the post!
🐐 Sacrifice to the Inbox Gods
A few longtime readers have reached out saying that their email provider has filtered tl;dr sec or it’s gone to spam, when it didn’t used to.
If you work at Google or Microsoft on a related team, or know someone who does, I’d like to shamelessly ask for your help in getting tl;dr sec on the naughty “nice” list, or generally how to help people receive it.
Feel free to reply to this directly 🙏
📣 How to automate the detection and prioritization of complex behavioral threats with AWS CloudTrail and Kubernetes audit logs
Monitoring AWS CloudTrail and Kubernetes audit logs are a critical part of maintaining security in your AWS cloud because it provides visibility into your account activity across your infrastructure. Because this data contains all actions performed by all authenticated users, identification of the attackers becomes extremely hard.
Learn from Jeff Vogt, Field CTO at Lacework (and former Senior DevOps Engineer), on how to automate the detection and prioritization of threats from your CloudTrail data so that you can easily (and quickly!) identify attacks such as compromised credentials, potential AWS defense evasion, cloud ransomware, and cloud-based cryptomining.
📜 In this newsletter…
AppSec: Why authorization is hard and an authz maturity model
Web Security: Burp’s new custom scripting engine, param analyzer Burp extension, overview of new Sec-Fetch HTTP headers
Cloud Security: How to tighten IAM policies
Container Security: Kubernetes Bill of Materials, executing arbitrary code in RO filesystems, Analyzing Volatile Memory on a Google Kubernetes Engine Node
Supply Chain: Backdooring NPM module via compromised S3 bucket, confidential computing project from Google that has adopted SLSA, 10K’s of GitHub repos potentially vulnerable to RepoJacking, CISA/NSA’s guide for hardening CI/CD
Blue Team: Automated audit log analysis tool for Google Workspace
Red Team: Tool for mTLS based on pre-shared connection key, REST-driven utility used to smuggle files in/out of networks defended by IDS
Machine Learning + Security: Cross Plugin Request Forgery example, hacking Auto-GPT and escaping its docker container
Machine Learning: Control your cloud via ChatGPT plugin, towards a generalist agent for the web
Misc: The Agony and Ecstasy of the World’s Biggest Tom Cruise Impersonator, it’s better to be born rich than smart
Rules are hard because they evolve over time
Oso's Graham Neray publishes the second part of his series on why authorization is hard, and proposes a maturity model for how companies need and use different authorization rule sets over time.
Graham also describes three classes of solutions to solving the complexity of authorization rules: language-specific libraries, Zanzibar clones, and domain-specific languages.
BChecks: Houston, we have a solution!
Ollie Whitehouse describes how Burp Suite professional now has a powerful scripting language that allows you to quickly build on Burp’s scanning engine to create custom checks.
A Burp extension for parameter analysis of large-scale web application penetration tests, assisting in: identifying sensitive data, identifying hash algorithms, decoding parameters, and determining which parameters are reflected in the response.
Use Fetch Metadata headers to prevent cross-origin attacks
Nice overview by Semgrep’s Pieter De Cremer on CSRF, how SameSite cookies can help protect against it but where they fall short, and how Fetch Metadata headers can offer a solution.
How to get rid of AWS access keys – Part 2: Reducing Privileges
Wiz's Scott Piper advises on how to reduce risk by tightening IAM policies, specifically: removing unnecessary privileges and applying additional restrictions, like proxying sensitive access (e.g. S3 Object Lambdas) and isolating functionality that requires access keys.
KSOC has published their Kubernetes Bill of Materials (KBOM) standard, which offers an initial specification in JSON and has been designed for extensibility across various cloud service providers (CSPs) as well as DIY Kubernetes setups.
Executing Arbitrary Code & Executables in Read-Only FileSystems
WithSecure's Golan Myers discusses various methods for achieving remote code execution in read-only file systems, specifically within Kubernetes environments where writable folders are designated as noexec. For example: using Bash's built in /dev/tcp utility and hijacking an existing process, reading and writing from temporary file systems, etc. Golan concludes with mitigation options: SELinux or detecting with Falco.
Analyzing Volatile Memory on a Google Kubernetes Engine Node
Spotify's Marcus Hallberg presents a new technique for conducting memory analysis on a GKE node by leveraging three open-source tools (AVML, dwarf2json, and Volatility 3), enabling a comprehensive snapshot of all processes and memory activities on a GKE node.
Hijacking S3 Buckets: New Attack Technique
Checkmarx's Guy Nachshon delves into a novel attack observed against the NPM package bignum, where the attacker hijacked the S3 bucket used to serve binaries and replaced them with malicious versions that stole users' credentials.
Since S3 buckets must have globally unique names, when a bucket is deleted, that name becomes available again, allowing attackers to put malicious contents in that bucket.
Bringing Transparency to Confidential Computing with SLSA
Google's Asra Ali, Razieh Behjati and Tiziano Santoro discuss Project Oak, which relies on the confidential computing paradigm to build an infrastructure for processing sensitive user data in a secure and privacy-preserving way: data is protected during transit, at rest, and while in use.
GitHub Dataset Research Reveals Millions Potentially Vulnerable to RepoJacking
Aqua Nautilus’ Ilay Goldman and Yakir Kadkoda found ~37K GitHub repos vulnerable to RepoJacking (e.g. repo owner changes or deletes their username), and provide real-life examples that could result in code execution through installation scripts, README instructions, and repository releases. The article also shares mitigations and current attacker techniques to bypass GitHub's restrictions.
Defending Continuous Integration/Continuous Delivery Environments
23 page PDF by CISA and NSA that outlines threats, attack surface, threat scenarios, and active hardening steps you can take to increase the security of cloud CI/CD deployments.
Machine Learning + Security
ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
Johann Rehberger discusses the first exploitable Cross Plugin Request Forgery that affects ChatGPT plugins. User visits site with malicious prompt injection text —> ChatGPT follows malicious instructions (e.g summarize user’s email using Zapier plugin and leak it to the attacker). tl;dr: ChatGPT could be tricked to access any sensitive info or perform sensitive functionality, like CSRF. Good ol’ Confused Deputy.
Hacking Auto-GPT and escaping its docker container
Positive Security's Lukas Euler describes an indirect prompt injection attack against Auto-GPT that results in arbitrary code execution when performing a seemingly harmless task, like summarizing an attacker-controlled website.
Lukas explains the attack chain used to overcome restrictions imposed by Auto-GPT's architecture and deceive users into unintentionally approving potentially malicious commands.
Mind2Web: Towards a Generalist Agent for the Web
Mind2Web is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.
Necessary Illusions: Thought Control in Democratic Societies - by Noam Chomsky
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!