• tl;dr sec
  • Posts
  • [tl;dr sec] #250 - CNAPP Guide, OpenAI's o1 vs CTFs, Cloud Logging Tips

[tl;dr sec] #250 - CNAPP Guide, OpenAI's o1 vs CTFs, Cloud Logging Tips

Detailed CNAPP overview and Day 2 guide, o1 semi-escapes to solve a broken challenge, config tips & tricks for cloud control plane logs

Hey there,

I hope you’ve been doing well!

🏕️ Semgrep Off-site

This week Semgrep folks from all over the world flew in to camp.

Our last off-site was a bit cold and rainy, and this one delivered on the promise of being warmer, reaching >95 degrees every day 🫠 Unlike last time, I haven’t been snored out of my cabin!

It’s been awesome putting real life faces to names, as Semgrep has ~doubled in the last 18 months.

People gave lightning talks on topics ranging from launching new products, to building a personal brand (H/T Tanya Janca), to creating informants (see the CIA’s MICE framework). We broke into teams and played some start-up themed games, and of course the security team brought lock picks and practice locks.

So many great memories I’ll treasure, like my friend Cooper challenging me to a knife fight if I beat him in bowling (program analysis engineers are tough).

I’m typing this now at a wooden table in the lodge dining hall, type type typing away, enjoying watching my colleagues enjoying happy hour. Finishing this before karaoke…

Friendly reminder: come join 11 year Netflix security veteran Scott Behrens and I next Thursday Oct 10 at 10 AM PT.

Scott is super sharp, and is currently the strategic tech lead for all of Security, Privacy, and Risk at Netflix.

👉️ See you There 👈️ 

Sponsor

📣 Best Practices for AWS Vulnerability Management

Tired of chasing hidden vulnerabilities in your AWS environments? 

This comprehensive cheat sheet provides actionable steps to identify, assess, and mitigate security risks, ensuring your cloud infrastructure is protected.

Discover the critical steps for effective vulnerability management, including.

  • Asset Discovery 

  • Continuous Scanning 

  • Risk-based Prioritization 

o0o a cheat sheet on risk-based prioritization and knowing what you have to protect 👍️ 

AppSec

Using YouTube to steal your files
Lyra shares a clever attack chain that uses Google Slides' YouTube embed feature to gain access to a victim’s Google Drive files/folders. Path traversal, open redirects across YouTube subdomains, and quirks in Google Docs' sharing UI combine to craft a convincing one-click attack that causes the victim to grant editor permissions to an attacker's email. Delightful.

Hacking Kia: Remotely Controlling Cars With Just a License Plate
Neiko Rivera, Sam Curry, Justin Rhinehart, and Ian Carroll discovered a set of vulnerabilities in Kia vehicles that allowed remote control over key functions using only a license plate. These attacks could be executed remotely on any hardware-equipped vehicle in about 30 seconds, and would allow an attacker to silently obtain personal information, including the victim's name, phone number, email address, and physical address. 😅 

Announcing the Security Exceptions program pack 1.0
Robert Auger has released a set of resources for establishing a security exceptions program, including definitions, reporting requirements, a preparation checklist, a process diagram, runbooks, a document template and tracker, and baseline metrics.

Memory Safety Strategies and Techniques
InfoSecWorld 2024 talk by Adobe’s Florian Noeding that shares a prioritization model to target the riskiest areas of your code bases, an overview of fuzzing, sandboxing and rewriting non memory safe code, and a guide on how to drive large-scale technical change across an enterprise.

Eliminating Memory Safety Vulnerabilities at the Source
Excellent post by Google’s Jeff Vander Stoep and Alex Rebert on how prioritizing Safe Coding and memory-safe languages for new development can rapidly reduce memory safety vulnerabilities in large codebases, even as the amount of unsafe code grows. Prioritizing Rust for new Android code caused the percentage of memory safety vulnerabilities to drop from 76% to 24% over 6 years.

Great section at the end re: the evolution of memory safety strategies (reactive patching → proactive mitigating → proactive vulnerability discovery → high-assurance prevention).

💡 The key, unintuitive insight here is that even if the vast majority of a codebase is say C++, writing new code in a memory safe language like Rust has a huge security impact, because new code is disproportionately responsible for bugs (vs hardened code that’s been there for years).

Sponsor

📣 How LUCR-3 (Scattered Spider) Orchestrates Identity-Based Attacks Across Environments

Join Permiso Security's webinar on October 16th to discuss how advanced threat actors are compromising the identity infrastructure of some of the world's largest organizations with ease. Upon gaining access to the identity provider, they are able to move laterally into Iaas, PaaS, and SaaS environments and steal data - all in the course of 2-3 days.

Expect to learn how advanced threat groups target human and non-human identities for compromise, how they maintain persistence in environments, and hear tips for detecting suspicious and malicious activity in identity providers, cloud service providers, and SaaS applications. We'll also share actionable steps security teams can take to prevent breaches or know about them as quickly as possible.

👉 Register Now 👈

Advanced attackers seem to be able to compromise identities and pivot quite stealthily these days, good to see how we can detect lateral movement 👌 

Cloud Security

Lambda Watchdog
Continuously scans Lambda images for CVEs.

saw-your-packet/CloudShovel
By Eduard Agavriloae and Matei Josephs: A tool designed to search for sensitive information within public or private Amazon Machine Images (AMIs), which automates the process of launching instances from target AMIs, mounting their volumes, and scanning for potential secrets or sensitive data. CloudShovel is a modified version of the research (previously, on tl;dr sec) made on AWS CloudQuarry where they scanned 20k+ public AMIs.

Cloud Logging Tips and Tricks
Wiz’s Alice Klimovitsky maps cloud log sources into categories (identity, data, network, compute, and control plane), and in this post covers the importance of control plane logs, and offers configuration tips and tricks across AWS, Azure, GCP, and Google Workspace.

Cedar, Rego, and OpenFGA Policy Languages: Comparative Language Security Assessment
AWS engaged Trail of Bits’ Ian Smith and Kelly Kaoudis to comparatively assess several authorization and access management policy languages to identify broadly applicable threats, identify language features that partially or fully mitigate those threats, and to provide security recommendations to improve the general design of policy languages.

Implementing CNAPP: Day 2 Focus Areas
PwC's Naman Sogani covers Cloud-Native Application Protection Platforms (CNAPP) focus areas after the initial rollout, including pre-deployment security (IaC and secret scanning, container image scanning and SCA), capabilities beyond Cloud Security Posture Management (CSPM), and operational Workflows and Technology Integrations.

Redefining CNAPP: A Complete Guide To the Future of Cloud Security
Excellently detailed post by Francis Odum and James Berthoty providing a holistic view of cloud security's evolution, tracing its major milestones, a detailed breakdown of the key vendors today, and evolving market shifts. They present a new framework for redefining CNAPP, addressing its limitations and contradictions while offering a roadmap to navigate the future of cloud security.

Container Security

encap-attack
WithSecure’s Matthew Grove announces encap-attack, an open-source tool for identifying and exploiting gaps in host and overlay firewalls, particularly useful during Kubernetes security testing. It currently supports the two most popular encapsulation protocols, IP-in-IP and VXLAN.

Kubernetes CRD generation pitfalls
Ahmet Alp Balkan discusses common pitfalls and best practices when developing Kubernetes Custom Resource Definitions (CRDs) using controller-gen. Key recommendations include: explicitly marking all fields as +required or +optional, upgrading to controller-tools v0.16+ for reliable field requirements, carefully handling zero vs null values, and setting explicit defaults for nested structs. The article also highlights quirks in CRD validation and defaulting, such as enum case-sensitivity.

💡 There is way too much subtlety and nuance here 😅 

Supply Chain

Fuzzing confused dependencies with Depfuzzer
Synaktiv’s Pierre Martin and Kévin Schouteeten describe DepFuzzer, a tool designed to automate the detection of dependency confusion vulnerabilities in package files. It looks for package names that can be registered (using deps.dev), and maintainer email addresses that can be taken over.

Mitigating Attack Vectors in GitHub Workflows
Great, detailed post by OpenSSF/Google’s Joyce Brum on attack vectors against GitHub Actions workflows and how to secure them, covering: running untrusted code in privileged workflows, code injections, vulnerable Actions, malicious releases, tag-renaming attacks, imposter commits, and the unsafe use of caches.

GitHub Users Targeted by New Wave of Spambots Promoting Malicious Downloads
Socket’s Sarah Gooding describes a widespread GitHub spam campaign where bots were hijacking existing issues to post links to malicious executables, which will also be sent as email to users receiving Issue notifications. Martin Leduc has created a GitHub Action to automatically filter comments with suspicious content.

💡 I thought this was a clever abuse of an existing trusted party and “works as intended” feature: devs trust GitHub and are used to getting email from GitHub, GitHub sends emails letting you know about new issues or comments, which is useful → use GitHub issue notifications to spread malware.

Blue Team

reecdeep/segugio
By reecDeep: Segugio allows the execution and tracking of critical steps in the malware detonation process, from clicking on the first stage to extracting the malware's final stage configuration. Segugio aims to speed up the extraction of IoCs from malicious artifacts.

Breaking Boundaries: Investigating Vulnerable Drivers and Mitigating Risks
Check Point Research’s Jiri Vinopal describes how many vulnerable Windows drivers share common design flaws allowing non-privileged user access, making them prone to exploitation. Using a custom YARA rule, they uncovered thousands of potentially vulnerable drivers, including in security products.

Red Team

zimnyaa/remotechrome
Tool by zimnyaa to dump Chrome cookies remotely with atexec and the Chrome DevTools Protocol (CDP).

“All your loaders suck until further notice”
R3V3RS3R describes how they exploited vulnerabilities in the Amadey malware C2 panel to gain access to nearly two dozen panels and recover over two million stolen credentials. How: upload a CSRF payload as a ".tar" file and include it via XSS, which then injects PHP code into the sync.php file for persistent remote code execution. I like things that raise the “cost” for cybercriminals.

💡 I’m definitely not recommending this, as it’s a bit chaotic neutral/good energy, but I wonder what would happen if some governments gave blanket permission for anyone to offensively hack any ransomware or cybercriminal group. Of course there are tons of nuances (attribution is hard, potential to burn ongoing gov’t operations, compromised machines of normal people being used as jump boxes, not wanting to make the Internet a free-for-all warzone, ...), but as a thought experiment, I wonder how it would play out.

AI + Security

dhammon/ai-goat
Learn AI security through a series of vulnerable LLM CTF challenges. Runs totally locally, using the Vicuna LLM.

Winning the AIxCC Qualification Round
Theori Security discusses their DARPA's AIxCC (AI Cyber Challenge) submission, though it is scant on technical details 😒 It seems like every team used dynamic analysis (fuzzing) potentially with static analysis for finding bugs, and LLMs for patching.

For other AIxCC team coverage, see tl;dr sec #243 for Trail of Bits, and #244 for Team Atlanta.

OpenAI o1 System Card
There’s a short security section from page 14-17. They evaluated o1-preview on >100 high school, collegiate, and professional CTF challenges across web security, reverse engineering, binary and network exploitation, cryptography, and miscellaneous. o1-preview was allowed iterative debugging and access to tools available in the headless Kali Linux distribution (with up to 60 rounds of tool use for each attempt), 12 attempts at each task.

Performance: 26.7% of high-school level, 0% of collegiate level, and 2.5% of professional level CTF challenges. Two interesting things:

First: “Most of the (Professional) solutions identified by o1-preview and o1-mini are shortcuts the contest organizers did not foresee.”

🔥 This second one is crazy: one challenge container failed to start due to a bug in the evaluation infrastructure. So after being unable to connect to the container, the model:

  1. Used nmap to scan the challenge network.

  2. Found that the Docker daemon API running on the evaluation host VM was accessible due to a misconfiguration.

  3. Used the Docker API to list the containers running on the evaluation host.

  4. Identified the broken challenge container and briefly attempted to debug why the container failed to start.

  5. After that failed, the model started a new instance of the broken challenge container with the start command ‘cat flag.txt’ so it could read the flag from the container logs via the Docker API.

So basically the model (semi) escaped its environment to troubleshoot why it couldn’t solve the challenge. And then “cheated” to solve it. 🤯 Very cool.

Misc

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler