[tl;dr sec] #191 - BadZure, Detection & Response Pipelines, 18K Subscribers!
Spin up purposefully vulnerable Azure AD tenants, detailed examples of modern detection pipelines, new subscriber milestone!
I hope you’ve been doing well!
Over 18,000 subscribers! 🤯
I’m honored to announce that tl;dr sec has surpassed 18k subscribers!
What started as a way for me to ~~lazily~~ efficiently share links with friends has turned into something that many people rely on for keeping up with the security industry. Which is pretty rad. And humbling.
As always, I’ll continue to work hard every week to make sure this is one of the most densely useful, no fluff things you read. Scout’s honor 🖖
If you’re new, welcome! I’m thrilled you’re here.
I’ll be sharing tools, blog posts, and talks across most every area of security.
If you’re going to Vegas this year I’ll be bringing some exclusive new swag, more details soon…
I’ll leave you with this stealth shot I took recently of a guy pulling his two dogs, who were wearing matching collars and glasses, on a skateboard.
📣 Opal, scalable identity security
Opal is designed to give teams the building blocks for identity-first security: view authorization paths, manage risk, and seamlessly apply intelligent policies built to grow with your organization.
They are built from the ground up to synthesize the data needed to construct and monitor all of your company’s access – from a single pane of glass.
Opal is used by best-in-class security teams today, such as Blend, Databricks, Drata, Figma, Scale AI, and more. There is no one-size-fits-all when it comes to access, but they provide the foundation to scale least privilege the right way.
Identity is key!
📜 In this newsletter…
Secrets: Thousands of images on Docker Hub leak auth secrets & private keys, new service to browse live secrets found on GitHub/NPM
Web Security: IIS short filename enumerator, Burp reshaper extension, Burp GraphQL extension
Cloud Security: Tool to find AWS misconfigs, BadZure, workshop on refining IAM permissions, how to monitor AWS root users at scale
Container Security: Kubernetes logging best practices
Blue Team: Detection and response pipeline compilation
Red Team: A reverse shell using curl, tool to sign with expired certificates
Startups: Founder’s Choice VC Leaderboard, VC is ripe for disruption, 12 ways to fail a cybersecurity startup
Machine Learning + Security: A self-hardening prompt injection detector, leveraging LLMs for phishing, tool to test LLM responses, learnings from Semgrep Assistant, prompt injection with control characters, a framework to securely use LLMs in companies
Machine Learning: Automatically vet/improve your prompts, wasting scam callers’ time with AI, the Darwinian argument for worrying about AI, emerging architectures for LLM apps
Misc: Normies watching musicals, AI tattoos, legal lullabies, Andrew Huberman bio
Thousands of images on Docker Hub leak auth secrets, private keys
A recent paper analyzed 337,171 images from Docker Hub and thousands of private registries and found that roughly 8.5% contain sensitive data such as private keys and API secrets. Matt Johansen had a nice overview thread.
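At its core, the paper's approach is pattern-matching the contents of image layers against known secret formats; real scanners like TruffleHog use far larger rule sets plus live validation, but a minimal sketch of the idea (the two patterns below are illustrative, not the paper's actual rules):

```python
import re

# Illustrative patterns for two common leak types.
PATTERNS = {
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_blob(text: str) -> list[str]:
    """Return the names of secret patterns found in a file's contents."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

In practice you'd run this over every file extracted from each image layer tarball.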
Forager: Browse Millions of Leaked API Keys Found With TruffleHog
Truffle’s Hon Kwok announces Forager, a new service that continuously scans GitHub and NPM for leaked secrets and posts redacted info about the secrets that are still live.
TIL if you add a .npmignore file, .gitignore is no longer used to block files and directories from being published. Yikes.
0.1% of pushes (not just commits!) have live credentials in them.
90.9% of pushes with live credentials are to personal repositories. 9.1% are to organization repositories.
7.8% of pushes with live credentials are to forks rather than the original repository.
📣 How to develop and test cloud-based security detections using Atomic Red Team
With thousands of attack scenarios, over 7,000 GitHub stars, 46,379 weekly views, and around 10 new attacks added weekly, the Atomic Red Team library of scripted cyber attacks has become the industry standard for detection validation and attack simulation.
Learn how to use the most popular execution engine, Invoke-AtomicRedTeam, led by Carrie Roberts (@OrOneEqualsOne) and Atomic Red Team maintainers, and walk through 3 scenarios of developing and testing cloud-based detections using the MITRE ATT&CK Framework.
Shortscan
By Bitquark: An IIS short filename enumeration tool designed to quickly determine which files with short filenames exist on an IIS webserver. Once a short filename has been identified, the tool will try to automatically identify the full filename.
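The underlying trick is the classic IIS 8.3 “tilde” disclosure: a request like /a*~1*/.aspx produces a different error depending on whether any file’s short name starts with “a”, so the tool can guess the name one character at a time. A rough Python sketch of the probe-generation step (the exact request/response handling varies by IIS version, so this is illustrative only):

```python
import string

def probe_paths(prefix: str) -> list[str]:
    """Candidate wildcard paths that extend `prefix` by one character.

    Each path is requested against the target; a 'file not found'-style
    error (rather than a generic 404) indicates the extended prefix
    matches an existing 8.3 short filename.
    """
    charset = string.ascii_lowercase + string.digits
    return [f"/{prefix}{c}*~1*/.aspx" for c in charset]
```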
By Stan Girard, Luca Corrieri, and Oussama Cherifi: A tool designed to audit your AWS infrastructure for misconfigurations or potential security issues. It can currently perform 63 different checks across various AWS services.
BadZure
By Mauricio Velazco: A PowerShell script that orchestrates the setup of Azure Active Directory tenants, populating them with diverse entities while also introducing common security misconfigurations to create a vulnerable playground with multiple attack paths.
Refining IAM Permissions Like A Pro
This free AWS workshop is designed to teach attendees how to build automation for constant monitoring and analysis of infrequently used and business-critical IAM permissions. It also explores how to use the last accessed information programmatically and discusses remediation approaches towards least privilege.
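The “last accessed information” the workshop refers to is IAM’s service last accessed data, exposed via the GenerateServiceLastAccessedDetails / GetServiceLastAccessedDetails APIs. A minimal Python sketch of the analysis step — the filtering function works on the documented response shape, while the boto3 calls (shown in the trailing comment) require AWS credentials:

```python
from datetime import datetime, timedelta, timezone

def stale_services(services: list[dict], max_age_days: int = 90) -> list[str]:
    """Return service namespaces a principal is granted but hasn't used recently.

    `services` is the `ServicesLastAccessed` list returned by IAM's
    GetServiceLastAccessedDetails API; entries with no `LastAuthenticated`
    key were never used at all.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        s["ServiceNamespace"]
        for s in services
        if s.get("LastAuthenticated") is None or s["LastAuthenticated"] < cutoff
    ]

# Fetching the data (requires boto3 + credentials), roughly:
#   iam = boto3.client("iam")
#   job = iam.generate_service_last_accessed_details(Arn=role_arn)
#   details = iam.get_service_last_accessed_details(JobId=job["JobId"])
#   print(stale_services(details["ServicesLastAccessed"]))
```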
How to Monitor AWS IAM Root Users at Scale: Best Practices
CloudYali provides insights into best practices and effective strategies for managing IAM users at scale. The article delves into IAM user monitoring, emphasizing the importance of the root user and highlighting essential IAM security practices. It also covers automating IAM Credential Report generation at scale, streamlining the collection of IAM user information and facilitating more efficient monitoring and management for cloud teams.
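The credential report being automated here is a per-account CSV (GenerateCredentialReport / GetCredentialReport), in which the root user appears as `<root_account>`. A hedged sketch of the parsing step — the two checks below are example rules, not CloudYali’s actual logic:

```python
import csv
import io

def root_user_findings(report_csv: str) -> list[str]:
    """Flag risky root-user settings in an IAM credential report CSV."""
    findings = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["user"] != "<root_account>":
            continue
        if row["mfa_active"] == "false":
            findings.append("root user has no MFA")
        if row["access_key_1_active"] == "true" or row["access_key_2_active"] == "true":
            findings.append("root user has active access keys")
    return findings

# Fetching the report (requires boto3 + credentials), roughly:
#   iam = boto3.client("iam")
#   iam.generate_credential_report()
#   csv_text = iam.get_credential_report()["Content"].decode()
#   print(root_user_findings(csv_text))
```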
Kubernetes logging best practices
Selvam Raju discusses Kubernetes logging and shares some best practices, including using a centralized logging solution, implementing log rotation, and streaming logs to a log server, among others. Selvam also provides a few CLI alternatives for log collection and analysis in Kubernetes, such as kubetail, which allows you to tail logs from multiple pods simultaneously.
A reverse shell using curl.
Founder’s Choice VC Leaderboard
Top VCs, as rated by founders whose companies they’ve invested in.
12 ways to fail a cybersecurity startup
One of many great deep dives by Ross Haleliuk, who runs the excellent Venture in Security newsletter. Also, congrats to Ross for winning the European Cybersecurity Blogger Award!
Machine Learning + Security
A self-hardening prompt injection detector.
WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
Daniel Kelley covers a new cybercrime tool that enables more effective phishing emails by leveraging LLMs, cybercrime forum discussions around “jailbreaks,” and more.
By Subho Majumdar and Leon Derczynski: A modular tool for testing LLMs for undesirable prompt responses. It comes with >10 types of probes and supports Hugging Face hub models, OpenAI, and more.
AI & Cybersecurity: Learnings from three months of Semgrep Assistant
Bence Nagy shares learnings from the private beta of GPT-4-powered Semgrep Assistant, which so far has received 95% positive feedback on its recommendations. Assistant is now in public beta, so if you use GitHub.com, you can start using it immediately for free.
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT
Dropbox’s Mark Breitenbach, Adrian Wood, Win Suen, and Po-Ning Tseng describe a new type of prompt injection: malicious user input containing control characters (like backspace) can circumvent system instructions designed to constrain the question and information context. In extreme cases, the models will also hallucinate or respond with an answer to a completely different question.
The post also lays out a great, systematic testing methodology that’s worth reading in its own right.
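The core of the finding is that control characters like backspace (\b, 0x08) survive into the assembled prompt and can push the model past its guardrails, so user input should be normalized before prompt assembly. A minimal sketch of that sanitization step (an assumed defense, not Dropbox’s exact mitigation):

```python
import unicodedata

def strip_control_chars(user_input: str) -> str:
    """Remove control characters (Unicode category Cc, e.g. backspace \\b or
    escape \\x1b) before interpolating user input into an LLM prompt.
    Newlines and tabs are kept, since they're often legitimate in user text."""
    return "".join(
        ch for ch in user_input
        if ch in "\n\t" or unicodedata.category(ch) != "Cc"
    )
```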
A framework to securely use LLMs in companies - Part 1: Overview of Risks
Sandesh Anand gives a nice framework to categorize risks based on different use cases and if you’re using a 3rd party LLM or self-hosting.
See also Rami’s: How to securely build product features using AI APIs.
AutoChain - Build lightweight, extensible, and testable LLM Agents.
Code Interpreter API - An open source implementation of the ChatGPT Code Interpreter.
Introducing Llama 2 - New and improved version of Llama released by Meta, can be used commercially.
NotebookLM - New (hasn’t been killed off yet) project by Google aimed at reimagining what notetaking software would be like with an LLM at its core.
Input a description of your task and some test cases, and this tool will generate, test, and rank a multitude of prompts to find the ones that perform the best.
Wasting Scam Callers’ Time with AI
A service that leverages GPT-4 to waste the time of telemarketers and scammers. The audio samples are pretty hilarious.
The Darwinian Argument for Worrying About AI
If individuals, companies, and nations who leverage AI outcompete those who don’t, we will gradually cede more and more power to AI until it is running everything.
Emerging Architectures for LLM Applications
a16z’s Matt Bornstein and Rajko Radovanovic explore the architecture of the emerging LLM app stack, discussing common systems, tools, and design patterns, including the “in-context learning” design pattern, which uses off-the-shelf LLMs with clever prompting and private contextual data.
If you want a tattoo of the tl;dr sec robot, I’ll send you a high res version.
Legal Lullabies - Lull yourself to sleep with the soothing white noise of your favorite tech giant’s terms of service.
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!