[tl;dr sec] #241 - Understanding Crowdstrike, AWS Pentesting Guide, Rule Explorer Project
The best context I've found on what happened, an opinionated guide to become an AWS pentester, a new platform for grokking open security detection rules
Hey there,
I hope you’ve been doing well!
🏕️ Hacker Summer Camp
Godspeed to everyone who is heading to Vegas next week for some combination of BSidesLV, BlackHat, and DEF CON.
In terms of outside events, you can see BlackHat’s page on parties and networking events, a list aggregating events for all three cons here, and a list of various useful resources and tips by Nicole Schwartz.
If you want to say hi to my colleagues and me, you can see Semgrep’s events here. I’ll be at Cards Against AppSec and LevelUp, come say hi! I’ll have stickers and high fives a-plenty 👋
My friend Sean Sun of Miscreants will also have tl;dr sec stickers at their DEF CON booth; I highly recommend checking out their excellent t-shirts and swag.
Be safe, have fun, stay hydrated, and pace yourself.
Hope to see you there!
🛣️ Semgrep + Secure Guardrails
I’m a huge fan of secure defaults / guardrails, or building a “paved road”; that is, eliminating classes of vulnerabilities by construction, by giving developers libraries, tooling, and infrastructure that make doing the secure thing easy and the insecure thing hard.
Ideally security is transparent to devs, so they can focus on delivering value to customers.
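To make that concrete, here’s a toy Python sketch (my own illustration, not a Semgrep feature) of what a paved-road helper can look like: an org-blessed HTTP wrapper where the secure options are the default and the insecure ones simply aren’t reachable.

```python
# Toy "paved road" example: product code calls fetch() and gets TLS
# verification and timeouts for free; the insecure variants aren't on the menu.
import requests

def fetch(url: str, timeout: float = 5.0) -> requests.Response:
    """Org-blessed HTTP GET: HTTPS-only, certificate checks always on."""
    if not url.startswith("https://"):
        raise ValueError("plaintext HTTP is not allowed; use https://")
    # verify=True is already the requests default, but pinning it here keeps
    # a future refactor from quietly disabling certificate validation.
    return requests.get(url, timeout=timeout, verify=True)
```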
So I’m stoked to announce some new Semgrep features:
A new dashboard that makes it easy to track secure guardrail adoption over time (e.g. we want devs to <do this>; how many instances that don’t follow this pattern are still live?).
A new secure defaults ruleset.
Semgrep Assistant Memories: enter your own custom English instructions, and Assistant’s AI-generated fixes will provide tailored remediation guidance based on your org’s standards and defaults.
You can read more on the announcement blog here, and some additional technical content here.
Sponsor
📣 Protect AI/ML Systems from Software Supply Chain Threats
Our critical infrastructure - from water supply to financial institutions - relies heavily on software, making it vulnerable to cyberattacks. The rise of AI/ML technologies amplifies these risks, as attackers can exploit AI/ML systems to cause significant real-world damage. Chainguard created a course that equips participants with the knowledge to secure AI/ML supply chains, covering threat landscapes, attack surfaces, and best practices. Gain practical skills and a certification in "Securing the AI/ML Supply Chain" to protect your organization from evolving cyber threats.
👉 Sign Up Now 👈
Ooo super cool! It’s hard to think of a more timely topic for a course than protecting an AI system’s supply chains. And it’s free?! 😍
AppSec
Tib3rius on where to store JWTs
Some nice nuance about using cookies vs Session/Local Storage.
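As a rough sketch of the cookie option (my example, not Tib3rius’), here’s what keeping the JWT out of JavaScript’s reach looks like in Flask; the trade-off is you now need to think about CSRF, which SameSite only partially covers.

```python
# Minimal Flask sketch: the JWT lives in an HttpOnly cookie, so XSS can't
# read it from document.cookie; SameSite gives basic CSRF mitigation.
from flask import Flask, make_response

app = Flask(__name__)

@app.post("/login")
def login():
    token = "<signed-jwt-goes-here>"  # placeholder; mint a real JWT here
    resp = make_response({"ok": True})
    resp.set_cookie(
        "session_jwt", token,
        httponly=True,   # not readable from JavaScript
        secure=True,     # only sent over HTTPS
        samesite="Lax",  # blocks most cross-site sends
        max_age=3600,
    )
    return resp
```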
Enabling Trusted Types in a Complex Web Application: A Case Study of AppSheet
Google's Kian Jamali and Aaron Shim discuss implementing Trusted Types in AppSheet to guard against DOM XSS vulnerabilities. They used a five-phase rollout: report-only mode, violation collection, refactoring, repeated testing, and enforcement. This process included configuring Content Security Policy headers, centralized violation reporting, and manual testing with Chrome Dev Console, effectively eliminating existing vulnerabilities and preventing new ones.
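The CSP side of that rollout is small; here’s roughly what the two phases look like (the directives are the standard Trusted Types ones, and the report endpoint path is a made-up example):

```python
# Phase 1: report-only, so violations are collected without breaking the app.
REPORT_ONLY_HEADER = {
    "Content-Security-Policy-Report-Only":
        "require-trusted-types-for 'script'; report-uri /csp-report",
}

# Final phase: enforcement, once the reported violations have been refactored away.
ENFORCE_HEADER = {
    "Content-Security-Policy": "require-trusted-types-for 'script'",
}
```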
two06/CerealKiller
By James Williams: Presented at SteelCon 2024, this tool automates finding deserialization targets by scanning the .NET binaries on a host for references to vulnerable deserialization calls.
In his talk, We're Still So Serial, James explores how unsafe deserialization vulnerabilities can be exploited for initial access and lateral movement, offering guidance on identifying these vulnerabilities and using them in red team exercises.
Anyone can Access Deleted and Private Repository Data on GitHub
Truffle’s Joe Leon walks through how you can access data from deleted forks, deleted repositories, and even private repos on GitHub, forever (see the quick check after the list below). This is a known and “as designed” behavior, but I imagine it will be surprising to most users.
Users can access sensitive data from other forks, including private and deleted forks.
Data committed to a fork remains accessible even after the fork is deleted.
Commits made to a public repo remain accessible via any fork even after the original repo is deleted.
Commits made to a private fork before the upstream repo is made public are accessible via the public repo.
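If you want to see this for yourself, the check can be as simple as the sketch below (the repo and SHA are hypothetical placeholders): a commit pushed to any repo in the fork network stays reachable through the upstream repo’s URL if you know its SHA, or even a short prefix of it.

```python
# Quick check: does a commit from a deleted fork still resolve through the
# upstream repo? Owner/repo and SHA below are hypothetical placeholders.
import requests

UPSTREAM = "some-org/some-repo"  # upstream repo that's still public
SHA = "0123456789abcdef0123456789abcdef01234567"  # commit pushed only to a fork

r = requests.get(f"https://github.com/{UPSTREAM}/commit/{SHA}",
                 allow_redirects=False, timeout=10)
print("still reachable" if r.status_code == 200 else f"HTTP {r.status_code}")
```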
Sponsor
📣 Identity Threat Detection and Response Solution Guide
Identity Threat Detection and Response (ITDR) has emerged as a critical component to effectively detect and respond to identity-based attacks. Threat actors have shown their ability to compromise the identity infrastructure and move laterally across an environment. Download this comprehensive ITDR Solution Guide to learn:
The most common Identity Threat Detection and Response use cases and recent identity-based breaches
How identity-based attacks are commonly orchestrated against environments
Pillars of an effective ITDR solution, questions an effective ITDR solution should answer, and a comprehensive RFP template
👉 Get The Guide 👈
Identity is key, and being able to detect identity-based attacks and lateral movement is 👌
Cloud Security
benkehoe/aws-whoami-golang
By Ben Kehoe: A tool that provides a detailed and user-friendly way to identify your AWS account and identity. It displays information such as account alias, region, and role session name, using the AWS Go SDK.
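The tool itself is Go, but for a rough sense of the underlying calls, here’s a Python/boto3 sketch of the same idea (assuming your credentials and region are already configured):

```python
# "Who am I?" in boto3: caller identity from STS, plus the account alias
# from IAM when you're allowed to read it.
import boto3

ident = boto3.client("sts").get_caller_identity()
print("Account:", ident["Account"])
print("Arn:    ", ident["Arn"])

try:
    aliases = boto3.client("iam").list_account_aliases()["AccountAliases"]
    print("Alias:  ", aliases[0] if aliases else "(none)")
except Exception:
    print("Alias:   (missing iam:ListAccountAliases permission)")
```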
An Opinionated Ramp Up Guide to AWS Pentesting
Lizzie Moratti offers an in-depth AWS pentesting guide, leveraging resources like pwnedlabs.io, Rich Mogull's Cloud Security Lab a Week, and Scott Piper’s AWS Security Maturity Roadmap. Lizzie offers some hot takes on cloud pentesting dark patterns (which I largely agree with) and a five-stage learning approach, including: building yourself a glossary of terms and concepts, leveraging CTFs (first with, then without the write-up), replicating the functionality of automated tools with manual CLI commands and later code you write yourself, and more.
Capturing Exposed AWS Keys During Dynamic Web Application Tests
Praetorian's Aleksa Zatezalo describes how they observed some web applications sending AWS access keys and session tokens to the client side (e.g. the user’s browser), which would allow an attacker to use them to, for example, manipulate the application's CloudWatch instance, send false logs, or inject MQTT messages to an IoT endpoint. These actions could interfere with forensic investigations and incur additional costs for the company.
Instead, apps should send the info to be logged (or whatever) to the backend, which can then use AWS credentials to perform the appropriate action.
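Something like this Flask/boto3 sketch, where only the backend ever holds AWS credentials and the browser just posts the event (the log group and stream names are made-up examples):

```python
# The browser POSTs the event; the backend, using server-side credentials,
# writes it to CloudWatch Logs. No AWS keys ever reach the client.
import time
import boto3
from flask import Flask, request

app = Flask(__name__)
logs = boto3.client("logs")

@app.post("/api/client-event")
def client_event():
    message = request.get_json(force=True).get("message", "")[:1024]
    logs.put_log_events(
        logGroupName="/app/frontend-events",   # made-up example names
        logStreamName="web",
        logEvents=[{"timestamp": int(time.time() * 1000), "message": message}],
    )
    return {"ok": True}
```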
Blue Team
Introducing the REx: Rule Explorer Project
Justin Ibarra introduces REx, an interactive platform for analyzing and visualizing open security detection rules using Elastic Stack's capabilities. The platform provides a dynamic Detection Engineering Threat Report (DETR) with rule snapshots refreshed every 24 hours, focusing on behavioral detection logic.
Mistaken Identification: When an Attack Technique Isn’t a Technique
VanVleet discusses a challenge for detection engineers: some MITRE ATT&CK entries, like T1059.001 (PowerShell), are really tools that have been classified as techniques, and treating them as techniques leads to inefficient detection efforts. The author suggests focusing detection on specific attack objectives and operational chains instead of tools, as attackers can easily switch tools.
A Five Year Retrospective on Detection as Code
Magonia Research's David Burkett reflects on the evolution of Detection as Code over the last five years, discussing how detection logic doesn’t need to be written in a programming language (Sigma uses YAML), the value of code review and CI/CD, and the effectiveness of test-driven detection (TDD), writing tests before the detection itself, to ensure new detections work as expected.
David also mentions how LLMs can help ensure context accuracy, identify technical errors, provide suggested improvements, tag detection logic with the relevant MITRE ATT&CK IDs, and potentially even catch logic errors (e.g. incorrect regexes).
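For a toy sense of what test-driven detection looks like (the rule and events below are invented, and real pipelines would use Sigma or your SIEM’s query language rather than raw Python): write the failing tests first, then the detection logic that makes them pass.

```python
# Detection-as-code, TDD style: pytest-compatible tests written before the logic.
def detects_webserver_spawning_shell(event: dict) -> bool:
    parent = event.get("ParentImage", "").lower()
    child = event.get("Image", "").lower()
    return any(p in parent for p in ("nginx", "apache2", "httpd", "w3wp.exe")) and \
           any(s in child for s in ("/bin/sh", "/bin/bash", "cmd.exe", "powershell"))

def test_fires_on_webserver_shell():
    assert detects_webserver_spawning_shell(
        {"ParentImage": "/usr/sbin/nginx", "Image": "/bin/bash"})

def test_quiet_on_unrelated_shell():
    assert not detects_webserver_spawning_shell(
        {"ParentImage": "/usr/sbin/cron", "Image": "/bin/sh"})
```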
Red Team
mlcsec/Graphpython
By @mlcsec: A modular Python tool designed for cross-platform Microsoft Graph API enumeration and exploitation. It enhances the capabilities of tools like AADInternals (Killchain.ps1), GraphRunner, and TokenTactics(V2), supporting red team and cloud assumed breach operations across services such as Entra ID, Office 365, and Intune.
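For context on what tooling like this wraps, the raw Graph calls look something like the sketch below (the bearer token is a placeholder you’d obtain separately, e.g. via device code flow or a refresh token):

```python
# Two basic Microsoft Graph enumeration calls: who am I, and what groups am I in.
import requests

ACCESS_TOKEN = "<bearer-token-for-graph.microsoft.com>"  # placeholder
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

me = requests.get("https://graph.microsoft.com/v1.0/me",
                  headers=headers, timeout=10).json()
print(me.get("userPrincipalName"))

groups = requests.get("https://graph.microsoft.com/v1.0/me/memberOf",
                      headers=headers, timeout=10).json()
for g in groups.get("value", []):
    print("-", g.get("displayName"))
```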
nnsee/fileless-elf-exec
By Rasmus Moorats: A Python script that facilitates the execution of ELF binaries directly in memory, avoiding disk writes and bypassing noexec mount flags. It supports customization options such as specifying target architecture, wrapping long base64-encoded strings, and piping over SSH, making it a versatile tool for stealthy binary execution.
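The underlying Linux primitive is memfd_create; assuming the tool works along these lines, the core of the trick fits in a few lines (a sketch, not the tool’s actual code):

```python
# Write the ELF into an anonymous in-memory file, then exec it via /proc/self/fd,
# so the binary never touches disk (Linux 3.17+, Python 3.8+).
import os
import sys

def run_from_memory(elf_bytes, argv):
    fd = os.memfd_create("anon")           # in-memory, no backing file
    os.write(fd, elf_bytes)
    os.execv(f"/proc/self/fd/{fd}", argv)  # replaces the current process

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        run_from_memory(f.read(), sys.argv[1:])
```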
NVISOsecurity/codasm
By Moritz Laurin Thomas: Codasm is a Python tool that encodes arbitrary data into pseudo-assembly instructions, embedding payloads into the .text section of binaries to reduce entropy. It supports MSVC and MinGW compilers and generates C header files for embedding and decoding payloads in binaries, enhancing obfuscation techniques.
AI + Security
Prompt Airlines
An AI security challenge CTF by Wiz’s Nir Ohfeld and Shir Tamari.
Predicting AI’s Impact on Security
The blog write-up of my friend Caleb Sima’s BSidesSF 2024 keynote. I really liked this talk; it paints a neat and feasible picture of where things may be headed. Caleb covers some AI fundamentals, the real top CISO concerns, and likely future AI applications, including (quoting him directly):
AI will impact enterprises through meeting transcription, self-updating wikis, automated reports, and specialized oracles.
Engineering will see self-documenting code, requirements as code, automatic integrations, and localized models for operations.
AI can address coverage by scaling triage, context by synthesizing information, and communication by tailoring data for audiences.
Future AI-powered security could provide detailed context for alerts, automate vulnerability fixes, and streamline access requests.
AI has the potential to automate status reports and simplify tracking of security metrics like privilege reduction.
CrowdStrike
I didn’t include anything last week because I was waiting for things to settle a bit.
First, big hugs to everyone dealing with such a stressful situation, I hope you can get some sleep soon.
Software is crazy complex, and honestly I’m surprised things Just Work most of the time. Hopefully we can all use this as a learning opportunity and grow.
“Let he who hath perfect test coverage cast the first stone.”
CrowdStrike wrote up some post incident review info here. One day I too hope to be able to cause $B of damage and apologize with $10 gift cards.
I also aspire to have the self-confidence of the Microsoft post that opens with, “Windows is an open and flexible platform used by many of the world’s top businesses for high availability use cases where security and availability are non-negotiable.” I have many incredibly talented friends who used to or currently work at Microsoft; just opening with this right now feels like weird timing.
Patrick McCormack has a great overview post of what happened.
Bob and Alice in Kernel-land has some good discussion on whether agents should be kernel-mode or user-mode, Zack Allen drew a nice diagram of how CrowdStrike manages threat detection content, and this post discusses whether using eBPF instead would have prevented this from occurring in the first place.
Misc
KnowBe4 mistakenly hired a North Korean hacker
Huge shout-out to KnowBe4 for sharing this publicly so that other companies can be more on guard. Also, wow! 👇️
“Before hiring the threat actor, KnowBe4 performed background checks, verified the provided references, and conducted four video interviews to ensure they were a real person and that his face matched the one on his CV.
However, it was later determined that the person had submitted a U.S. person's stolen identity to dodge the preliminary checks, and also used AI tools to create a profile picture and match that face during the video conference calls.”
markmap - Visualize your Markdown with mindmaps
A profile on the CIA analyst who wrote up the report on Russia’s influence campaign aiming to get Trump elected in 2016
Russia is leveraging unwitting Americans to spread election disinformation, say officials from the Office of the Director of National Intelligence, FBI, Department of Homeland Security, etc. They want Americans to be divided, and to stop supporting Ukraine.
AI Is Already Taking Jobs in the Video Game Industry - This or similar things should not surprise you: if a company can do more or better work with fewer people (be more profitable), it will. Period.
Come To Your Senses - a song to hit you in the feels from the musical tick, tick… BOOM.
How LG and Samsung are making transparent TVs
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!
Cheers,
Clint
@clintgibler