[tl;dr sec] #199 - Supply Chain Security Overview, Container Escapes, AI + Cybersecurity
Detailed overview of the areas of supply chain security, 7 ways to escape containers, AI for threat modeling, TTPs, & malicious packages
I hope you’ve been doing well!
I’ve got a few exciting announcements this week.
Come join my buds Tanya Janca and Leif Dreizler for a workshop next Tuesday Sept 19 in San Francisco, and I’ll be there for the happy hour after!
AI and LLM applications have been moving very quickly.
So I’m excited to distill the 100+ hours I’ve spent absorbing current research into a single 45-minute webinar with Wiz.
The focus will be: how is AI being applied to cybersecurity, across AppSec, web security, cloud security, blue team, red team, and more.
In addition to providing more tools and resources than you can shake a stick robot’s arm at, I’ll reflect on promising opportunities and where things are headed.
You can sign up here. When: next Thursday, Sept 21 at 11am PDT.
I’m stoked to announce a new long-form post!
If you keep hearing “supply chain security” but aren’t sure what that’s all about, Francis Odum has written a great intro / overview of the space.
In Part 2, he’ll analyze 12+ software supply chain vendors and their differentiators.
You can read part 1 here.
📣 Cloudy Visibility?
Cloud-first security teams are leading the pack in adopting Cloud Native Application Protection Platforms (CNAPP). This CNAPP Buyer’s Guide contains everything you need to know to make sure you’re adapting to the evolving threatscape and staying ahead of attackers, including:
What exactly is CNAPP
Why Gartner predicts that 80% of teams will move to CNAPP by 2026
How leading security orgs are consolidating their security stack (CSPM, CWPP, CIEM, CDR)
Bonus: An RFP template with a scorecard to assess potential solutions
Get the complete breakdown in the CNAPP Buyer’s Guide.
Static Taint Analysis for Go
HashiCorp’s Kent Gruber has been working on a static taint analysis package for Go, which can be a building block for detecting SQL injection, XSS, etc., and walks through how it works.
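The core idea of taint analysis, tracking data from untrusted sources to sensitive sinks, can be sketched in a few lines. This is a toy illustration of the concept only, not the API of the Go package above (real taint analysis works on a program's SSA/IR form, and the source/sink names here are made up):

```python
# Toy taint propagation: track which variables derive from untrusted
# sources, and flag any that reach a sensitive sink.

SOURCES = {"http_param"}   # hypothetical: functions producing attacker-controlled data
SINKS = {"db_query"}       # hypothetical: functions that must not receive tainted data

def find_taint_flows(statements):
    """statements: list of (lhs, func, args) tuples, in program order.
    Returns the sink calls reached by tainted data."""
    tainted = set()
    findings = []
    for lhs, func, args in statements:
        args_tainted = any(a in tainted for a in args)
        if func in SOURCES or args_tainted:
            if lhs is not None:
                tainted.add(lhs)  # taint propagates through assignment
        if func in SINKS and args_tainted:
            findings.append((func, args))
    return findings

program = [
    ("user", "http_param", []),   # user := r.URL.Query().Get("id")
    ("q", "concat", ["user"]),    # q := "SELECT ... " + user
    (None, "db_query", ["q"]),    # db.Query(q)  <-- potential SQL injection
]
print(find_taint_flows(program))  # [('db_query', ['q'])]
```

The same source-to-sink reachability question, asked over real control flow, is what lets a tool like this detect SQL injection and XSS.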
4,500 of the Top 1 Million Websites Leaked Source Code, Secrets
Truffle’s Joe Leon et al scanned the Alexa Top 1 Million sites for exposed /.git directories, and found 4,500 leaking their source code (note: this only checks the top-level path, not subpaths).
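The gist of this kind of scan is easy to sketch: fetch `https://<domain>/.git/HEAD` and check whether the response actually looks like a git HEAD file. This is my own sketch of the approach, not their tooling, and the URL pattern and heuristics are assumptions:

```python
# Sketch of a check for an exposed .git directory: a real HEAD file is
# either a symbolic ref ("ref: refs/heads/...") or a bare 40-char commit hash.

def git_head_url(domain):
    return f"https://{domain}/.git/HEAD"

def looks_like_git_head(body):
    body = body.strip()
    if body.startswith("ref: refs/"):
        return True
    return len(body) == 40 and all(c in "0123456789abcdef" for c in body)

# Usage (network call omitted here):
#   resp = urllib.request.urlopen(git_head_url("example.com"), timeout=5)
#   exposed = looks_like_git_head(resp.read().decode())
print(looks_like_git_head("ref: refs/heads/main\n"))       # True
print(looks_like_git_head("<html>404 Not Found</html>"))   # False
```

Validating the body matters because many sites return a 200 with an HTML error page for any path, so status codes alone would drown you in false positives.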
📣 Large US telco pentests 3x faster with Corellium
Security testing iOS apps can get complicated. Apple releases multiple devices and iOS updates each year. Devices get bricked. And public jailbreaks are harder to come by.
With Corellium, you can easily spin up near limitless combinations of virtual iPhone device models and iOS versions, jailbroken or not, with the click of a button. And advanced security testing tools are built into the platform for full-stack testing: OS, file, app, and network.
Read how our large US telco client was able to conduct three times as many pentests in the time it used to take to conduct one.
Whoa, 3x faster is legit 🤯
Datadog "Cloud Security Atlas" now has Azure content
Attacks and vulnerabilities, along with actionable reproduction/remediation information. H/T Christophe Tafani-Dereeper.
Establishing a data perimeter on AWS: Allow access to company data only from expected networks
AWS’ Laura Reith walks through how, giving example resource-based policies, including handling AWS services that use service roles to access resources on your behalf, partners accessing your resources from their own networks, etc.
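The shape of the pattern is a single deny statement on the resource that only bites when a request comes from outside your expected networks. A minimal sketch, not the post's exact policies: the bucket name, VPC ID, and CIDR below are placeholders, and the `...IfExists` operators plus `aws:ViaAWSService` are the standard way to avoid breaking AWS services acting on your behalf.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyAccessFromUnexpectedNetworks",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
    "Condition": {
      "StringNotEqualsIfExists": {"aws:SourceVpc": "vpc-1234567890abcdef0"},
      "NotIpAddressIfExists": {"aws:SourceIp": "203.0.113.0/24"},
      "BoolIfExists": {"aws:ViaAWSService": "false"}
    }
  }]
}
```

Conditions within one statement are ANDed, so the deny only applies to requests that come from neither your VPC nor your corporate IP range and aren't made by an AWS service on your behalf.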
A security community success story of mitigating a misconfiguration
This story by Scott Piper warms my heart. In short: a bunch of researchers wrote about how easy it was to insecurely configure the AWS IAM roles used by GitHub Actions, and after they reached out to AWS and HashiCorp, the defaults were changed to be secure by default, eliminating this vulnerability going forward. Let’s go!
How to Protect Yourself From the New Kubernetes Attacks in 2023
KSOC walks through four attacks targeting Kubernetes (the Dero and Monero cryptominers, Scarleteel, and RBAC-Buster), which likely matter to you, along with security mitigations to prevent them.
Container security fundamentals part 6: seccomp
Datadog's Rory McCune continues his great series on security layers that can isolate containers not only from other processes but also from the underlying host. In this post, he discusses how seccomp filters, which can restrict which Linux syscalls a process can perform, are used as a "last line of defense" by container runtimes.
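Seccomp policies are expressed as JSON profiles. Docker’s actual default is an allow-list of a few hundred syscalls; the tiny deny-list below is only to show the file format, and the blocked syscalls are illustrative examples, not a recommended profile:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["unshare", "keyctl"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

You can apply a custom profile with `docker run --security-opt seccomp=profile.json …`; any listed syscall then fails with an errno instead of reaching the kernel.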
Contain Yourself: Staying Undetected Using the Windows Container Isolation Framework
Deep Instinct’s Daniel Avinoam shares the blog post version of his DEF CON talk, covering the basics of Windows containers and their file system isolation framework, reverse engineering its main mini-filter driver, and how the framework can be abused by a bad actor to bypass EDR products in multiple domains.
7 Ways to Escape a Container
For each, Lightspin’s Ori Abargil shares the container configuration that makes it susceptible to the escape technique, outlines the minimal capabilities required inside the container to escape, and shares concrete commands to set up the vulnerable container and escape it.
The ways: mounting the host filesystem, using a mounted Docker socket, process injection, adding a malicious kernel module, reading secrets from the host, overwriting files on the host, and abusing notify_on_release.
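Several of these escapes hinge on what's mounted inside the container, so a quick first check from inside is to scan /proc/mounts for the riskiest cases, like an exposed Docker socket. A minimal sketch; the flagged paths are common defaults I've assumed, not an exhaustive list:

```python
# Spot mount-based escape opportunities by scanning a /proc/mounts-style
# listing (fields: device mountpoint fstype options dump pass).

RISKY_TARGETS = {
    "/var/run/docker.sock",   # mounted Docker socket -> control over the host daemon
    "/run/docker.sock",
    "/host",                  # common convention for a host-filesystem mount
}

def risky_mounts(mounts_text):
    findings = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] in RISKY_TARGETS:
            findings.append(fields[1])
    return findings

sample = """\
overlay / overlay rw,relatime 0 0
tmpfs /var/run/docker.sock tmpfs rw 0 0
proc /proc proc rw,nosuid 0 0
"""
print(risky_mounts(sample))  # ['/var/run/docker.sock']
```

In a real container you'd read `open("/proc/mounts").read()` instead of the sample string; the post's per-technique setup commands show how each risky configuration arises.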
Build a CVE library with aggregated CISA, EPSS and CVSS data.
CISA Open Source Software Security Roadmap
CISA has released their 8-page roadmap covering their four key priorities: (1) establishing CISA’s role in supporting the security of open source software, (2) driving visibility into open source software usage and risks, (3) reducing risks to the federal government, and (4) hardening the open source ecosystem. OpenSSF weighs in as well and shares some useful links.
Politics / Privacy
How China Demands Tech Firms Reveal Hackable Flaws in Their Products
Because of a 2021 law requiring that tech companies share discovered vulnerabilities with the government within two days, including with agencies that use them for offense as well as defense. So basically: leave China, or give its government’s offensive teams a head start. Full Atlantic Council report here.
The online portal for reporting hackable flaws includes a required entry field for details of where in the code to “trigger” the vulnerability or a video that demonstrates “detailed proof of the vulnerability discovery process,” as well as a nonrequired entry field for uploading a proof-of-concept exploit to demonstrate the flaw.
Also TIL there’s another law in China forbidding China-based executives of foreign firms from telling others at their own company about how they interact with the government. “Firms may not fully understand changes in their own local offices’ behavior because those local offices may not be permitted to talk to them about it, under pain of espionage charges.”
Machine Learning + Security
NSA ‘recently completed’ AI strategic study, General Paul Nakasone says, on how generative AI/ML will be used for missions
Bruce Schneier weighs in on the power and danger of LLMs being able to use tools (e.g. make phone calls, do their own research, interact with websites or APIs).
China is getting better at using AI to generate content designed to go viral in U.S. and other democracies. MS threat intel says China has “successfully engaged target audiences in at least 40 languages and grown its audience to over 103 million.”
Leveraging LLMs for Threat Modeling - GPT-3.5 vs Claude 2 vs GPT-4
Marcin Niemiec provided a fictitious high level project description, architecture description, and user story to the three models and evaluated how well they threat modeled. tl;dr: GPT-4 performed the best, you can review all of their outputs, and he created a GitHub Action to auto-threat model/security review.
Our TRAM Large Language Model Automates TTP Identification in CTI Reports
MITRE’s James Ross and Jackie Lasky describe how they’ve extended TRAM (Threat Report ATT&CK Mapper), a tool to automatically identify adversary tactics, techniques, and procedures (TTPs) in cyber threat intelligence (CTI) reports, to use LLMs, specifically by fine-tuning SciBERT on 50 ATT&CK techniques.
Introducing Socket AI – ChatGPT-Powered Threat Analysis
Socket’s Mikola Lysenko describes how they’re using LLMs to find malicious or potentially dangerous/vulnerable npm and PyPI packages. He gives examples of information exfiltration, injection vulnerabilities, exposed credentials, backdoors, code that installs dependencies, prompt poisoning, etc. Here’s their earlier post on the same topic.
MetaGPT - An LLM agent to replicate a software company. Takes a one line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents, etc.
promptfoo - A tool for testing your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.
Aviary Explorer - Nice web UI for comparing the output of 3 LLMs at a time.
Asking 60+ LLMs a set of 20 questions - To compare how they respond.
Wrapper’s Delight - OpenAI wrapper that auto-logs every interaction, provides analytics, AI-assisted query of logs, etc.
Absolutely epic 🔥 prompt engineering guide by Hrishi Olickel - One of the best, if not the best, I’ve seen.
Open Interpreter - An open source, locally running implementation of OpenAI’s Code Interpreter.
Lessons from 139 YC AI startups (S23) - That is… a lot of AI startups in one batch 😆
Microsoft’s Copilot Copyright Commitment - If you’re using Copilot services and get sued for copyright infringement, they’ll get your back in court #RideOrGenAI baby ✊
Can Ghidra play Super Mario Bros?
Playing NES ROMs with Ghidra's PCode Emulator 😆
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!