[tl;dr sec] #265 - Google's Secure-by-Design Web Framework, AI-powered Phishing, How Palantir Secures Source Control
How Google eliminates vuln classes, human expert-level AI spear phishing, how Palantir hardens their code writing process
Hey there,
I hope you’ve been doing well!
Self-described Chief Envelope Opener Isaac Evans announces Semgrep’s $100M Series D: the mission (make it expensive to exploit software), our focus on extensibility and on scaling security by eliminating vulnerability classes via secure guardrails, and a future vision that combines the best parts of deterministic static analysis with LLMs.
I’ve been at Semgrep almost 5 years, which is roughly 300 start-up years 👴
It’s been quite a wild ride seeing Semgrep go from:
A tool that’s a good idea, but kinda buggy, that no one’s heard of, to →
A mature tool that supports almost every programming language and is used by a who’s who of FAANG/$B company security teams.
To be honest, there have been a lot of stressful times and many forks in the road, company strategy decisions that could have gone either way, and it’s never really felt like “yeah, this is definitely going to work out” (it still doesn’t, lots to do).
Anywho, I hope whatever you’re working on is going well, and that you’re making security a little bit better 🙌
Sponsor
📣 Permiso’s Blueprint on Identity Risk Management
Download the program guide to learn why:
Organizations need to implement an effective IRM program and ensure it aligns with their unique needs, risk profile, and business objectives.
Understanding your identity risk profile requires dynamic monitoring across static identity controls and runtime behavior.
Successfully implementing an IRM strategy at scale requires multi-stakeholder involvement and championing across the organization, elevating identity from a security control to a strategic business enabler.
👉 Download 👈
Nice, turning a security control into a business enabler is always a huge win 🤘
AppSec
utkusen/baitroute
By Utku Sen: A web honeypot project that serves realistic, vulnerable-looking endpoints to detect vulnerability scans and mislead attackers with false positive results. It can be imported as a library into your project, ships with built-in rules (e.g. /etc/passwd, /.git-credentials, …), and currently has Go, Python, and JavaScript implementations.
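To make the idea concrete, here’s a minimal sketch of a decoy route handler in this spirit (not baitroute’s actual API; the routes, fake contents, and alert shape are illustrative):

```python
# Illustrative honeypot sketch: serve plausible-looking sensitive content
# on decoy paths, and treat any hit as a high-signal scanning indicator.
FAKE_PASSWD = (
    "root:x:0:0:root:/root:/bin/bash\n"
    "www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\n"
)

DECOY_ROUTES = {
    "/etc/passwd": FAKE_PASSWD,
    "/.git-credentials": "https://deploy:hunter2@git.example.com\n",
}

def handle_request(path, source_ip):
    """Return (status, body, alert); alert is a dict when a decoy is hit."""
    body = DECOY_ROUTES.get(path)
    if body is None:
        return 404, "", None
    # Legitimate users never request these paths, so false positives are rare.
    alert = {"event": "honeypot_hit", "path": path, "source_ip": source_ip}
    return 200, body, alert
```

A scanner that “finds” these endpoints wastes time chasing fake findings while you get a clean detection signal.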
Top 10 web hacking techniques of 2024
Portswigger’s James Kettle shares some epic research, a number of which have already been included in prior tl;dr sec issues. Top few:
Confusion Attacks: Exploiting Hidden Semantic Ambiguity in Apache HTTP Server by Orange Tsai
SQL Injection Isn't Dead: Smuggling Queries at the Protocol Level by Paul Gerste
Unveiling TE.0 HTTP Request Smuggling: Discovering a Critical Vulnerability in Thousands of Google Cloud Websites by Paolo Arnolfo, Guillermo Gregorio, and @_medusa_1_
Secure by Design: Google's Blueprint for a High-Assurance Web Framework
David Dworken and Lukas Weichselbaum describe the elements of a high-assurance web framework: safe by default, adaptable as the web and new security research evolves, and observability of framework behavior and security feature adoption. They then list 20 key security control features of Google’s high-assurance web framework, including enforcing secure cookies, XSRF protection, a contextual auto-escaping template system, strict CSP, Trusted Types, Prototype Pollution mitigations, and more. These security controls are maintained by the security team and work out of the box.
“For hundreds of services that have fully adopted recommended high-assurance web frameworks, the number of XSS vulnerabilities is ~1 per year across all of Google.” 🤯
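As a flavor of what two of those controls look like on the wire, here’s a strict nonce-based CSP plus a Trusted Types enforcement header, roughly following Google’s public strict-CSP guidance (the nonce placeholder would be generated per response):

```
Content-Security-Policy:
  script-src 'nonce-{random}' 'strict-dynamic' https: 'unsafe-inline';
  object-src 'none';
  base-uri 'none';
  require-trusted-types-for 'script'
```

The `https:` and `'unsafe-inline'` entries are backwards-compatibility fallbacks that modern browsers ignore once a nonce and `'strict-dynamic'` are present.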
Sponsor
In this research Rami McCarthy put GuardDuty through its paces for a number of cloud threats. There were some gaps that were expected and others that took us by surprise! We make the case that Tracebit Canaries represent a practical complementary strategy to these gaps.
I do love me some benchmarking security tools and putting them through their paces 🤓
Cloud Security
Securing Inherited Cloud: Top Lessons
Google’s Anton Chuvakin and Taylor Lehmann outline a 9-step playbook for securing inherited Google Cloud projects, including obtaining admin access, bringing the environment under control, implementing least privilege, enabling MFA and addressing security hygiene, conducting a compromise assessment, verifying or enabling logging and centralizing logs, setting up a Cloud Asset Inventory, minimizing use of service account keys, and finding and securing valuable data.
Own (or PWN) the Org with CloudFormation StackSets
Securosis’ Rich Mogull describes how StackSets let you deploy CloudFormation across multiple accounts and regions, how they’re a privileged administration tool that by default can perform any create action in any account not restricted by an SCP, and how they’re very valuable for deploying security capabilities across your entire organization. As always, with a great hands-on lab.
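Since SCPs are the backstop the post mentions, here’s a minimal sketch of one kind of guardrail: denying StackSet operations to everyone except a designated admin role (the role name is a placeholder, and you’d tailor the action list to your org):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "RestrictStackSetOps",
    "Effect": "Deny",
    "Action": [
      "cloudformation:CreateStackSet",
      "cloudformation:UpdateStackSet",
      "cloudformation:CreateStackInstances"
    ],
    "Resource": "*",
    "Condition": {
      "ArnNotLike": {
        "aws:PrincipalArn": "arn:aws:iam::*:role/stackset-admin"
      }
    }
  }]
}
```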
Testing and evaluating GuardDuty detections
AWS’ Marshall Jones introduces the updated GuardDuty Tester, an open-source tool for simulating over 100 unique GuardDuty finding types and resource combinations across various AWS services. The post walks through three practical use cases: manual investigation of findings, setting up and testing alerting mechanisms, and implementing automated responses using Lambda and EventBridge.
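For the automated-response use case, the glue is an EventBridge rule matching GuardDuty findings. A sketch of an event pattern that routes only higher-severity findings (≥ 7) to a response Lambda:

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{"numeric": [">=", 7]}]
  }
}
```

The severity threshold is an example; you’d tune it (or match specific finding types) to fit your alerting and auto-remediation appetite.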
RogueOIDC: AWS Persistence and Evasion through attacker-controlled OIDC Identity Provider
OffensAI’s Eduard Agavriloae demonstrates how an attacker-controlled OIDC provider can be used to gain persistent access and evade detection in an AWS account. He describes setting up and configuring a rogue OIDC server, and using it to assume IAM roles, along with evasion techniques like mimicking legitimate providers already used by the AWS account (e.g. Terraform Cloud) or using unique sessions per API call. Great list of related work 👍️
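The persistence primitive here is an IAM role whose trust policy federates to the attacker-controlled IdP. A minimal sketch of such a trust policy (the account ID, provider URL, and audience are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.example.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {"oidc.example.com:aud": "sts.amazonaws.com"}
    }
  }]
}
```

Defensively, this is a good reason to alert on new IAM OIDC providers and on `sts:AssumeRoleWithWebIdentity` calls against unfamiliar issuers.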
Supply Chain
Clone2Leak: Your Git Credentials Belong To Us
GMO Flatt Security’s RyotaK describes a series of vulnerabilities in Git-related projects that could leak user credentials to attackers, mostly around the Git Credential Protocol that git uses to retrieve credentials from the credential helper. The vulnerabilities included improper handling of carriage returns in GitHub Desktop and Git Credential Manager, newline injection in Git LFS, and logic flaws in GitHub CLI and GitHub Codespaces' credential helpers.
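A toy illustration of the carriage-return bug class (not any project’s actual parser): the credential protocol is newline-delimited `key=value` lines, so a helper that also treats a bare `\r` as a line break lets an attacker-controlled value (e.g. a URL-decoded `%0d` in a hostname) smuggle extra properties into the request.

```python
def naive_parse(raw):
    """Buggy credential-request parser: splits on \\r as well as \\n."""
    props = {}
    for line in raw.replace("\r", "\n").split("\n"):
        if "=" in line:
            key, _, value = line.partition("=")
            props[key] = value  # later occurrences silently win
    return props

# The host value contains a smuggled carriage return plus a forged host line,
# so the forged value overrides the real one — and credentials for github.com
# would be looked up (and sent) for attacker.example instead.
request = "protocol=https\nhost=github.com\rhost=attacker.example\n"
creds = naive_parse(request)
```

The actual fixes were the inverse: reject or strictly escape control characters before they ever reach the line-based protocol.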
Attacks on Maven proxy repositories
GitHub’s Michael Stepankin walks through several vulnerabilities he discovered in popular Maven repository managers like Sonatype Nexus, JFrog Artifactory, and Reposilite, including stored XSS, arbitrary file overwrite, arbitrary file read, and artifact poisoning. The post describes PoC exploits that can lead to pre-auth remote code execution and poisoning of the local artifacts in Sonatype Nexus and JFrog Artifactory.
OSV-Scanner v2.0.0-beta1 is ready!
New features in Google’s OSV-Scanner V2 beta, including layer-aware container scanning for Debian, Ubuntu, and Alpine images, providing vulnerability scans with layer-specific information and base image identification to pinpoint where vulnerable packages were introduced.
Also: interactive HTML output, expanded remediation support for Maven pom.xml files (automated patch suggestions for direct and transitive dependencies), and additional dependency extraction capabilities for various languages and artifact types like Haskell, .NET, Python wheels, Go binaries, and more.
How Palantir Secures Source Control
The third post in Palantir’s software supply chain security series. Palantir requires commit signing with a hardware-backed signing key + touch (and a custom PR status check that verifies commit signatures on commits made via the GitHub UI), enforces code reviews via another required PR status check called policy bot, uses restrictive GHE permissions, has multiple AppSec engineers dedicated to maintaining standard and custom CodeQL rules, and uses an internal tool called Autorelease to securely tag and release code.
💡 Great read, they’ve clearly put a lot of thought into this. Also enjoyed the discussion of the limitations of GitHub’s built-in feature that enforces signed commits.
Red Team
almounah/evil-go
By Haroun Al Mounayar: A fork of Golang designed to produce stealthier binaries by implementing IAT hiding and GoReSym evasion, reducing the number of visible imported functions in the binary, and generally giving AV/EDR a harder time.
zengfr/frida-codeshare-scripts
A curated collection of useful Frida scripts and resources for dynamic instrumentation and reverse engineering.
Being a good CLR host – Modernizing offensive .NET tradecraft
IBM X-Force Red’s Joshua Magri describes how to leverage CLR customizations to improve OPSEC when executing .NET assemblies in memory, including taking control of CLR memory management and implementing a custom assembly loading manager. Joshua also walks through how to load malicious assemblies without being scanned by AMSI (Windows Defender, EDRs) and shares a proof-of-concept on GitHub here.
AI + Security
raink: Use LLMs for Document Ranking
Bishop Fox’s Caleb Gross introduces raink, a new open source tool that implements a novel LLM-based listwise ranking algorithm. It can tackle complex ranking problems: for example, given the text of a recent security advisory and the list of code changes from a Ghidriff patch diff, raink can rank the changed functions most likely related to the issue described in the advisory. Rad.
See also Caleb and Josh Shomo’s RVAsec 2024 talk: Patch Perfect: Harmonizing with LLMs to Find Security Vulns.
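A hedged sketch of a listwise-ranking loop in this spirit (not raink’s actual algorithm): shuffle items into small batches, ask a ranker — an LLM in practice, a deterministic stub here — to order each batch, and accumulate normalized positions across passes so every item gets compared against many others.

```python
import random

def rank_items(items, rank_batch, batch_size=4, passes=10, seed=0):
    """Rank items by averaging their normalized position across many
    shuffled-batch ranking passes. Lower accumulated position = better."""
    rng = random.Random(seed)
    totals = {item: 0.0 for item in items}
    for _ in range(passes):
        shuffled = items[:]
        rng.shuffle(shuffled)
        for i in range(0, len(shuffled), batch_size):
            batch = shuffled[i:i + batch_size]
            for pos, item in enumerate(rank_batch(batch)):
                totals[item] += pos / max(len(batch) - 1, 1)
    return sorted(items, key=lambda item: totals[item])

# Stub standing in for an LLM call: pretend relevance == string length.
def fake_llm_rank(batch):
    return sorted(batch, key=len, reverse=True)

functions = ["parse_header", "main", "validate_input_length", "log"]
ranked = rank_items(functions, fake_llm_rank)
```

Batching sidesteps context-window limits and prompt-order bias, at the cost of extra LLM calls per pass.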
Evaluating LLMs’ Capability to Launch Fully Automated Spear Phishing Campaigns
Paper evaluating LLMs’ ability to conduct personalized phishing attacks, compared to human experts and to models from last year, with 101 participants. Control group of arbitrary phishing emails: 12% click-through rate (CTR). Emails from human experts: 54%. Fully AI: 54%. AI + human-in-the-loop: 56%. Note that human experts and a fully automated AI system (which gathers information and creates a personalized vulnerability profile for each target) performed the same 😅
They also used LLMs to detect the intention of emails, with Claude 3.5 Sonnet scoring >90% with low false positive rates, and it detected several seemingly benign emails that passed human detection. Lastly, they analyzed the economics of phishing, highlighting how AI can enable attackers to target more individuals at lower cost, potentially increasing profitability up to 50X.
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities
Academic paper by Arjun Krishna, Erick Galinkin, Leon Derczynski, and Jeffrey Martin that analyzes package hallucination in LLMs (when a model recommends a package that doesn’t exist, which an attacker could squat) across Python, JavaScript, and Rust, whether model size impacts package hallucination, and whether hallucination rates differ between coding-focused and general-purpose LLMs. Findings include:
All models tested hallucinated at least some packages on all languages tested.
Python had the most variance in hallucination rates, and JavaScript had the most stable performance across models.
Smaller models hallucinate packages more.
More + discussion on LinkedIn here.
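A minimal sketch of a guardrail against this: before installing anything an LLM suggests, triage each name against a vetted allowlist (in practice you might also query the registry’s API and flag brand-new or low-download packages; the allowlist and the typo-squatted name below are illustrative).

```python
# Vetted internal allowlist of dependencies — illustrative, not exhaustive.
VETTED_PACKAGES = {"requests", "numpy", "cryptography", "flask"}

def triage_suggestions(suggested):
    """Split LLM-suggested dependencies into approved vs needs-review."""
    approved = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    flagged = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return approved, flagged

# "reqeusts-oauth2lib" is plausible-looking but unvetted — exactly the kind
# of hallucinated name an attacker could register.
approved, flagged = triage_suggestions(["requests", "reqeusts-oauth2lib"])
```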
Misc
The State of the Cybersecurity Market in 2024
Friend of the newsletter Mike Privette provides a comprehensive analysis of the cybersecurity industry's financial landscape in 2024, highlighting key trends in funding, M&A, and public markets. The report reveals a 4% increase in total funding to $14B across 621 rounds, with 271 M&A transactions valued at $45.7B and AI-focused funding growing 96% YoY. Notable trends include a shift towards hybrid "Software and a Service" models, continued US dominance in global funding ($10.9B, 83% of total), and more.
Sahil Bloom discusses his upcoming book “The 5 Types of Wealth: A Transformative Guide to Design Your Dream Life” on Prof G. See also Ali Abdaal’s overview.
The Arrival Fallacy - “When I achieve this goal, then I will be happy.” “Upon meeting a goal, you will initially feel delighted. But, very quickly, you find yourself back at your usual level of happiness, or even facing a sense of emptiness.” The newsletter speaks #hardtruths 😅
An excerpt from They Thought They Were Free: The Germans, 1933-45
Ryan Holiday on Stoicism in 2025 and dealing with adversity
Productivity hack by @oreghall: write down every decision on paper; don’t act without first writing it down.
Interesting reflections from Sam Curry: “Let me pick out a team of 3 people and I guarantee you that we could hack pretty much any non-hyper secured target in just a few weeks. Knowing this, it’s a lot less honest to put out work like ‘we hacked X company’ because it’s surprising to no-one who actually does this stuff. It’s getting harder, sure, but it’s still beating the same drum.”
Surely this story of meeting someone you’re interested in is fake 😅
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!
Cheers,
Clint
@clintgibler