[tl;dr sec] #262 - Red Teaming AI, AWS Org Policies Deep Dive, Anti-EDR Compendium
Microsoft and OpenAI on red teaming AI, SCPs and Resource Control Policies in detail, how EDR works and how to bypass it
Hey there,
I hope you’ve been doing well!
🏖️ Working
Edit: I accidentally scheduled this for Friday instead of Thursday, so it’s not going out at the usual time. Apologies.
Most of this newsletter has been written at a table sitting next to a pool.
This sounds nice, and it is, but it would also be nice to not have laptop tan lines on my legs on a trip. But as the French say, “La Vee.”
Couple tidbits from this week:
One to two hour massages for <$50 USD.
Motorbikes to get everywhere.
Meeting people from all over, like a digital nomad from Kazakhstan, and a Singaporean with a dream of living in NYC.
I went to a pretty woo woo yoga class, which happened to be in the same strip of stores as “Bitcoin House Bali,” which had a parking sign that said “Bitcoiners Only” 🫠
Sponsor
📣 How to Conduct a GenAI Risk Assessment
Most orgs have moved from panic to practicality when it comes to GenAI use. But there are nuances to mitigating GenAI risk vs. other technologies. View this guide to learn how to:
Discover the AI tools in use in your org
Conduct security reviews for AI vendors
Determine where AI tools are connected to other apps
Educate your workforce on safe and compliant AI use
👉 View Now 👈
Discovering what AI tools your org is using and what they’re connecting to is non-trivial. And it’s super useful to know what data is going where 😅
AppSec
intigriti/CTF
A repository hosting challenge files, source code, solve scripts, etc. from the previous Intigriti 1337UP CTF competitions.
Gitxray: a security X-Ray for GitHub repositories
Kulkan Security’s Lucas Lavarello announces Gitxray, an open-source security tool for scanning GitHub repos, orgs, and contributors to identify information disclosure and suspicious behavior. Gitxray uses GitHub's public REST APIs to detect unintended disclosures in contributor profiles (e.g. armored PGP keys or key names), spot shared or fake contributors, analyze PR rejection patterns, detect tampered commit dates, and more.
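To make the disclosure angle concrete, here’s a minimal sketch (my illustration, not Gitxray’s actual code) that pulls a repo’s contributors and the email identities embedded in their public GPG keys via GitHub’s unauthenticated REST API, the kind of signal Gitxray inspects:

```python
import requests

GITHUB_API = "https://api.github.com"

def contributor_gpg_emails(owner: str, repo: str) -> dict:
    """For each contributor of a repo, collect the email identities
    embedded in their public GPG keys (all publicly queryable)."""
    contributors = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/contributors", timeout=10
    ).json()
    findings = {}
    for user in contributors:
        login = user["login"]
        keys = requests.get(
            f"{GITHUB_API}/users/{login}/gpg_keys", timeout=10
        ).json()
        findings[login] = [
            e["email"] for key in keys for e in key.get("emails", [])
        ]
    return findings

print(contributor_gpg_emails("intigriti", "CTF"))
```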
Vigilante Justice on GitHub
For years you’ve been able to decorate your GitHub contribution graph (e.g. spelling words) using fake commit Git histories, as there's no timestamp validation mechanism between Git and GitHub. Truffle Security’s Dylan Ayrey shares that you can also graffiti other GitHub users, like malicious fraudsters, if they open an Issue on a repo you have push access to.
💡 Dylan does cool research and is such a fun presenter, you should definitely check out his posts and talks if you haven’t already. And he’s super nice.
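The underlying trick is that Git records whatever author/committer dates you claim, and GitHub renders them without validation. A minimal sketch (hypothetical helper, not Dylan’s code):

```python
import os
import subprocess

def backdated_commit(repo_dir: str, message: str, date: str) -> None:
    """Create an empty commit claiming an arbitrary date. Git accepts
    the claimed timestamps, and GitHub's contribution graph renders them."""
    env = {
        **os.environ,
        "GIT_AUTHOR_DATE": date,       # e.g. "2012-01-01T12:00:00"
        "GIT_COMMITTER_DATE": date,
    }
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", message],
        cwd=repo_dir, env=env, check=True,
    )

backdated_commit(".", "graffiti", "2012-01-01T12:00:00")
```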
Unsafe Archive Unpacking: Labs and Semgrep Rules
Doyensec intern Michael Pastor describes his research into how unsafe archive extraction can lead to path traversal vulnerabilities across multiple programming languages (Python, Ruby, Swift, Java, PHP, JavaScript). The post provides code examples of unsafe implementations and mitigation strategies using path sanitization and validation. See the GitHub repo with labs, PoC code, and Semgrep rules.
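For a flavor of the mitigation pattern the post covers, here’s a minimal Python sketch that resolves each archive member’s destination and rejects anything that would escape the target directory (on Python 3.12+ you can also pass filter="data" to extractall):

```python
import tarfile
from pathlib import Path

def safe_extract(archive: str, dest: str) -> None:
    """Reject archive members whose resolved path escapes dest,
    e.g. '../../etc/passwd' or absolute paths. Note: symlinked
    members need extra checks; prefer tarfile's filter='data'
    on Python 3.12+."""
    dest_dir = Path(dest).resolve()
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            target = (dest_dir / member.name).resolve()
            if not target.is_relative_to(dest_dir):  # Python 3.9+
                raise ValueError(f"blocked traversal: {member.name}")
        tar.extractall(dest_dir)  # members validated above

safe_extract("upload.tar.gz", "/tmp/unpacked")
```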
💡 It’s cool that an intern can do security research, learn Semgrep, and then codify that research into Semgrep rules, all in a few months.
Slack actually had two of their interns help add support for Hack into Semgrep. During Ellen Wang’s Datadog internship, she built GuardDog, a tool for finding malicious dependencies using Semgrep, and gave a talk on it at OWASP Global AppSec SF.
Sponsor
Tracebit brings a new approach to a familiar idea - security canaries. We deploy new canaries (e.g. cloud infra, tokens) using up-to-date approaches (e.g. Terraform). We just partnered with Panther to make responding to our alerts even easier - check the link to see a worked example of an Okta compromise detection!
I’ve heard from a few red teamers that canaries are consistently one of the things that catch them. And done well, they’re pretty high signal / low overhead 🤘
Cloud Security
cloud-copilot/iam-simulate
By Cloud Copilot: An AWS IAM Simulator and Policy Tester that outputs detailed explanations of how a request was evaluated, built as a Node/TypeScript library. See also the announcement blog and live playground.
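The library itself is TypeScript; as a language-agnostic illustration, here’s a toy sketch of the core evaluation order any such simulator has to model: an explicit Deny in any applicable statement wins, otherwise an explicit Allow is required, otherwise the request is implicitly denied (real evaluation also handles resources, conditions, wildcards, SCPs, and more):

```python
def evaluate(request_action: str, policies: list) -> str:
    """Toy model of IAM's evaluation order: explicit Deny beats Allow,
    which beats the default implicit deny."""
    decision = "ImplicitDeny"
    for policy in policies:
        for stmt in policy.get("Statement", []):
            actions = stmt["Action"]
            actions = [actions] if isinstance(actions, str) else actions
            if request_action in actions:
                if stmt["Effect"] == "Deny":
                    return "ExplicitDeny"  # deny always wins
                decision = "Allow"
    return decision

policy = {"Statement": [{"Effect": "Allow", "Action": ["s3:GetObject"]}]}
print(evaluate("s3:GetObject", [policy]))  # Allow
```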
Fast Unauthenticated Role Scanning
Ryan Gerstenkorn announces roles, a Go tool for fast unauthenticated IAM principal enumeration in AWS accounts. Enumerating principals can reveal interesting details about the software a target uses, or even indicate a possible vulnerability leading to unauthenticated access to the AWS account. Roles supports caching and role name interpolation for variables like the Account ID and Region, and can hit 13.5k tests/second.
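The classic trick behind this kind of tool: AWS validates principals referenced in trust policies at write time, so updating the trust policy of a role you own leaks whether a principal in someone else’s account exists, with no permissions needed in the target account. A hedged boto3 sketch (the scratch role name is hypothetical and must already exist in your account):

```python
import json

import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

def principal_exists(principal_arn: str, scratch_role: str = "enum-scratch") -> bool:
    """Reference the target principal in a trust policy on a role we own;
    AWS rejects the write if the principal doesn't exist."""
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",  # Deny so nothing is actually granted
            "Principal": {"AWS": principal_arn},
            "Action": "sts:AssumeRole",
        }],
    }
    try:
        iam.update_assume_role_policy(
            RoleName=scratch_role, PolicyDocument=json.dumps(trust)
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "MalformedPolicyDocument":
            return False
        raise

print(principal_exists("arn:aws:iam::123456789012:role/SomeVendorRole"))
```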
The Security Canary Maturity Model
Rami McCarthy presents a Security Canary Maturity Model, categorizing canary deployments across Coverage, Impact, Management, and Program dimensions, and three maturity levels (Defined, Managed, Optimized).
💡 I’ve read a fair amount about canaries over the years, and this is a great distillation of what starting out, good, and excellent looks like in a number of areas. There were some smart ideas in a few of the Optimized categories I hadn’t thought of before.
Deep Dive - AWS Organization Policies (Part 1)
PwC’s Naman Sogani gives a thorough overview of AWS Organization Policies, focusing on Authorization Policies - Service Control Policies (SCPs) and Resource Control Policies (RCPs). He describes how these policies work together to create permission guardrails for IAM principals and resources across AWS Organizations, explaining their scope, evaluation logic, and best practices. The post also discusses the concept of a data perimeter and how SCPs and RCPs contribute to building one.
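For a flavor of what these guardrails look like in practice, here’s a common illustrative SCP (my example, not from the post) that denies all actions outside approved regions, expressed as a Python dict for readability:

```python
# Illustrative region guardrail SCP. Global services (IAM, STS,
# CloudFront, ...) would need a NotAction carve-out in practice.
region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
        },
    }],
}
```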
Supply Chain
2024 CVE Data Review
An overview and misc trends by Jerry Gamblin, including: 40,009 CVEs were published (up 38% from 2023), an average of 108 CVEs per day, XSS was the most popular CWE at 15.5%, and more.
The Year in Review
2024 update from the OSV team (a minimal API query sketch follows the list):
New Linux ecosystem support (Ubuntu, Chainguard, Red Hat, SUSE/openSUSE) and expanded existing coverage of Debian GNU/Linux, bringing the total to 30 ecosystems supported.
Continued expansion of C/C++ coverage, with Git range data for CVEs programmatically converted from the NVD, despite NVD woes.
New OSV-Scanner features: guided remediation for npm, transitive dependency scanning for Maven, support for more lockfile formats (NuGet, pdm, PNPM, gradle/verification-metadata.xml, CycloneDX 1.4 and 1.5).
Upcoming: better, layer-focused container scanning support, including base layer identification, guided remediation for Maven, improvements to reachability analysis and VEX autogeneration.
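All of this data is queryable through OSV.dev’s public API; a minimal sketch:

```python
import requests

def osv_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query OSV.dev for known vulnerabilities affecting a package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version,
              "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

print(osv_vulns("jinja2", "2.4.1"))  # e.g. GHSA/CVE/PYSEC identifiers
```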
Project Quarantine
Great walkthrough by Mike Fiedler of the implementation choices behind PyPI's new "Project Quarantine" feature, which allows admins to mark potentially harmful projects as quarantined, preventing users from installing them. PyPI admins have so far quarantined ~140 projects. TIL obfuscated code is against PyPI’s Acceptable Use Policy, and that there’s a beta API endpoint for “Observer” security researchers to submit malware reports. Future work: automatically place a project in quarantine when enough “credible reports” are received.
💡 “With the current full-time security staff for PyPI == 1.” Nothing to see here, just millions of companies, relying on the security posture of a package registry with 1 full time security person 😅
Snyk security researcher deploys malicious NPM packages targeting Cursor.com
Paul McCarty discovered three malicious NPM packages uploaded by Snyk’s Security Labs that appeared to be attempting a dependency confusion attack against Cursor, an AI startup. OpenSSF's package analysis scanner also flagged the packages.
💡 Attackers are already attempting to do these types of attacks, so in my opinion it’s great for good faith security researchers to be finding these issues first. My only feedback is that it’s probably better to not exfiltrate environment variables, as those could have sensitive secrets like API keys, credentials, etc.
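If you want to gauge your own exposure, the core check is cheap: any internal package name that isn’t registered on the public registry is claimable by an attacker. A minimal sketch (the package name is hypothetical):

```python
import requests

def unclaimed_on_npm(internal_packages: list) -> list:
    """Flag internal package names nobody has registered on the public
    npm registry, i.e. prime dependency confusion targets."""
    at_risk = []
    for name in internal_packages:
        r = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
        if r.status_code == 404:  # name is publicly unregistered
            at_risk.append(name)
    return at_risk

print(unclaimed_on_npm(["cursor-internal-utils"]))  # hypothetical name
```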
Blue Team
100DaysofYARA/2025
Rules shared by the community from 100 Days of YARA 2025.
milankovo/YaraVM
An IDA processor for loading and disassembling compiled YARA rules. Useful, for example, if your security vendor gives you YARA rules only as compiled binaries.
Summiting the Pyramid: Bring the Pain with Robust and Accurate Detection
MITRE’s Michaela Adams, Roman Daszczyszak, and Steve Luke summarize a new release of the Summiting the Pyramid methodology for building robust detections. The updated framework now includes guidance for creating accurate network-based detections in addition to host-based detections, introduces Detection Decomposition Diagrams to visualize technique implementations, and provides actionable steps for defenders to build detections that are resistant to adversary evasion while minimizing false positives.
How Google Does It: Modernizing threat detection
Google’s Anton Chuvakin and Tim Nguyen share an inside look at Google's threat detection and response approach, covering the world's largest Linux fleet and 180,000+ employees.
Roughly 97% of their events are generated through automated “hunts,” and then presented to a human along with a risk score and details about where to investigate. Thus they can triage events more quickly because the human starts with all the relevant context to make a decision.
LLM-generated drafts reduced the time that engineers spent writing executive summaries by 53%.
All threat hunts begin with threat modeling to ensure the right detections are built and necessary telemetry gathered.
They maintain a comprehensive cloud asset inventory so they know what they need to secure.
Detection engineers triage alerts - align the pain of false positives with who wrote the detection.
Security engineering is treated as software engineering - everyone writes code.
Average dwell time of attackers is “hours.”
Red Team
BlackSnufkin/LitterBox
By BlackSnufkin: A sandbox environment designed specifically for malware development and payload testing. A web app that enables red teamers to validate evasion techniques, assess detection signatures, and test implant behavior before deployment in the field.
ElJaviLuki/CobaltStrike_OpenBeacon
A fully functional, from-scratch alternative to the Cobalt Strike Beacon (a command and control, or C2 tool).
💡 I like to look at what other repos people have. In this case, I came across GrindrPlus, an Xposed Module (a framework that allows you to modify an app’s behavior at runtime, like Frida) that unlocks and adds unique features to the Grindr app. 🤔
The (Anti-)EDR Compendium
Epic post by Dobin Rutishauser describing how Endpoint Detection and Response (EDR) tools work, their three main detection techniques (file scanning, memory scanning, and telemetry/behavior of actions performed by the process), various attacks on EDR (command line spoofing, memory encryption, callstack spoofing, remote processes, …), and more, with tons of supporting references. Highly recommend if you want to quickly get up to speed.
AI + Security
Twitter mega post by Exa CEO Will Bryk: Thoughts on the eve of AGI
New AWS Skill Builder course available: Securing Generative AI on AWS
New course by AWS to help you understand and implement security best practices for genAI applications and models on AWS. Topics: Introduction to the Generative AI Security Scoping Matrix, key AI security frameworks (OWASP Top 10 for LLMs and the MITRE ATLAS framework), practical security strategies, and real-world application case studies.
3 takeaways from red teaming 100 generative AI products
Blake Bullwinkel and Ram Shankar Siva Kumar share insights from Microsoft's AI red team's experience in red teaming over 100 generative AI products in their new whitepaper. They introduce an AI red team ontology for modeling attacks, 8 main lessons learned, and share case studies covering security, responsible AI, and psychosocial harms.
OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
This VentureBeat article gives a nice overview of OpenAI’s recently released whitepapers on external red teaming and automated red teaming.
Misc
Notes on China
Fascinating post by Dwarkesh Patel, highly recommend. “It’s funny how China has basically the inverse problem as America. We subsidize demand and restrict supply. They subsidize supply and restrict demand. We can’t rebuild fallen bridges. They build bridges to nowhere. In the most desirable cities in this country, every random Victorian house and park bench is a historic site that can’t be disturbed. There, they’ll bulldoze a 500 year old temple to build an endless skyscraper complex that no one wants to live in.”
Chinese venture capitalists force failed founders on to debtor blacklist
Yikes: “redemption rights” clauses let Chinese VCs require founders to buy back investors’ shares plus interest if targets such as an IPO timeline, valuation goals, or revenue metrics are not met. Given the personal liability, I don’t know why someone would choose to take money under these terms.
Securing a city-sized event: How Amazon integrates physical and logical security at re:Invent
400,000 online attendees, 60,000 in-person, seven venues over twelve miles and nearly seven million square feet. Drones, K9 units, CCTV, network monitoring, and coordination with 40+ agencies. Wow.
Alex Hormozi - Letting Gen Z write his content script, saying no to everything except your goal
VC is a great gig - “You take risks with other people’s money, you get paid a hefty fee no matter what, you can spend lavishly on unlimited traveling, parties, events, & ‘tech week,’ …”
Andrew Wilkinson on how due to incentives you can build a profitable, successful business but still be considered a failing business by VCs
Ali Abdaal - My honest advice to someone who wants passive income
“Be realistic. Plan for a miracle.” —Bhagwan Shree Rajneesh
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!
Cheers,
Clint
@clintgibler