- tl;dr sec
- Posts
- [tl;dr sec] #277 - Cybersecurity (Anti)Patterns, $64K from Deleted Files, New from Meta AI Security
[tl;dr sec] #277 - Cybersecurity (Anti)Patterns, $64K from Deleted Files, New from Meta AI Security
How to avoid Busywork Generators, bug bounty story of secrets in deleted files, new AI security tools and evals from Meta
Hey there,
I hope you’ve been doing well!
😩 Almost… made it…
It’s getting late and I have an early morning interviewing some cool folks, so I’ll share a few fun anecdotes and reflect more next week.
Some stand-out moments:
All of the kind words, stories, and hugs after my BSidesSF keynote 🥹 And going around the conference with my new smoking jacket lol.
Wendy Nather giving a very moving keynote- audible sniffles from me and the people around me.
Meeting two Wiz researchers, and hearing that they were amused by my characterization of their IngressNightmare write-up as “bringing that 8200 energy.” And then I recommended them things to do in SF 😃
Me trying to nap curled up in a ball, with only a curtain separating me from the LARPers.
Hanging out with a number of awesome security creators and mind melding.
Congrats to ProjectDiscovery for winning the RSA Innovation Sandbox! I’ve been a fan for awhile, great to see good things headed their way.
Watching a friend walk through one of the most elaborate jailbreaks I’ve ever seen to get root on his humanoid robot, and then him changing its voice and walking it around the event (somewhat ominously).
Meeting some people I’ve looked up to for most of my career, and hearing that they like and read tl;dr sec 🤯 It’s a weird feeling man.
I hope you’ve been surviving and thriving this week my friend! 🤗
Sponsor
📣 Introducing the Permiso Platform:
Threat-Informed Risk Exposure with Best In Class Threat Detection for Human and Non-Human Identities
We’re excited to announce the launch of the Permiso Platform, the identity security platform that detects and protects against both human and non-human identity threats across your cloud and on-prem environments. Our latest threat informed, posture-based prevention features are driven by several years of threat research that have generated more than 1,500+ detection signals in our product. Those signals help security teams detect suspicious and malicious activity in their identity providers, cloud service providers and SaaS environments. Learn how Permiso covers your *aaS in the cloud.
👉 Learn More 👈
I’ve included a number of blogs and open source tools from Permiso in tl;dr sec, I appreciate the research sharing and focus on important areas like identity. Hm and this Universal Identity Graph looks pretty neat 👀
AppSec
OrgSec Guide
A guide by Luis Fontes on building and managing an effective security program across DevSecOps, endpoint security, GRC, IAM, infra & cloud, operational security, product security, security operations, and AI. Nice overview of different areas, and I like the discussion of outcomes, metrics, and lists of resources.
How I made $64k from deleted files — a bug bounty story
Sharon Brizinov describes how they built an automation to scan tens of thousands of public GitHub repos for leaked secrets. For each repo, the automation restored deleted files, found dangling blobs and unpacked .pack files to search them for exposed API keys, tokens, and credentials. How to get the deleted files: restore deleted files by diffing parent-child commits, unpack all .pack files using git unpack-objects
and find dangling objects using git fsck
.
Sometimes secrets were leaked in accidentally committed binary files (e.g. .pyc
) or in hidden files (e.g. .env
) or were zipped into an archive that was later committed.
Cybersecurity (Anti)Patterns: Busywork Generators
Love this post by Eugene Lim (spaceraccoon) arguing for secure by design approaches and addressing root causes vs generating thousands of findings/alerts and subsequent follow-ups and meetings. I like Kelly Shortridge’s Ice Cream Cone Hierarchy of Security Solutions, the point that getting clean is different than staying clean (e.g. eliminating SQL injection), and asking 5 why’s to get to the root cause.
The post ends with a nice tj-actions
supply chain attack case study: 5 why’s + what they did to systematically reduce GitHub Actions risk. See also starter-kitty for some safe-by-default components they’ve built.
Sponsor
📣 Do You Trust AI?
Yeah, we didn't think so. You know better. With so many known and unknown risks permeating the building blocks of AI, you are right to be hesitant.
No doubt shadow AI and board-level mandates to adopt AI yesterday have you scrambling for answers. You aren't alone, and at Noma Security we intimately understand the challenge based on 100s of conversations about AI security with leaders like you.
Noma Security will help you trust AI with an end-to-end approach to AI security with supply chain threat visibility, AI posture management, agentic AI monitoring and runtime guardrails.
Nice, I like the broad range of coverage from AI posture management, to agent monitoring and runtime guardrails. H/T AI innovation for new risk areas to cover 😆
Cloud Security
Yamato-Security/suzaku
By Yamato Security: A Sigma-based threat hunting and fast forensics timeline generator for cloud logs.
yanilov/control-tags
By Yoav Yanilov: A scalable integrity framework for Attribute-Based Access Control (ABAC) on AWS. Features: centralized, hierarchical management of tagging integrity for ABAC, multi-party approval for sensitive actions (guarded actions) and sensitive resources (resource seals), and more.
Azure's Apex Permissions: Elevate Access & The Logs Security Teams Overlook
Permiso’s Nathan Eades shares a number of technical details about Azure’s “Elevate Access” feature, including misconceptions (it grants an identity with the Global Administrator (Entra ID Role) role the User Access Administrator (Azure RBAC) role at the root scope (/
)), and logging challenges (it has historically not been logged in standard subscription Activity Logs, or in Management Group level activity logs, but instead in Azure Monitor Directory Activity Logs).
The post also discusses how an attacker could leverage Elevate Access, as well as detection and mitigation methods. Yikes, lots of additional complexity and details in this post, IAM is hard 🫠
Tag Your Way In: New Privilege Escalation Technique in GCP
Mitiga’s Ariel Kalman describes a novel privilege escalation technique in GCP that exploits IAM Conditions and tagBindings. An attacker with only roles/resourcemanager.tagUser
and roles/viewer
can escalate privileges by attaching specific tags to resources, satisfying conditional IAM bindings and gaining elevated access. For example, an attacker could add an env=sandbox
tag to a resource that isn’t actually part of the sandbox, but now due to the conditional IAM binding that gives them admin permissions in the sandbox, they can perform sensitive actions on that resource.
The post provides a Sigma detection rule and recommends treating tag management as privileged, separating tagging and access boundaries, and using deny policies to mitigate this issue.
Sponsored Tool
📣 BloodHound Enterprise by SpecterOps: Reduce Identity Risk with Attack Path Management
Identity Attack Paths plague every organization. Adversaries use them to move laterally and attempt to escalate to domain-level controls.
Close the door on the adversary with BloodHound. BloodHound continuously maps your identity architecture—exposing misconfigurations and critical attack paths— and provides proven remediation guidance to reduce risk.
👉 Learn More 👈
BloodHound is the way it seems folks determine identity attack paths in Azure and Active Directory. Pretty neat to be able to partner with them, as BloodHound and SpecterOps’ blog are 🔥
Blue Team
p0dalirius/FindUnusualSessions
By Rémi Gascou: A tool to remotely detect unusual sessions opened on Windows machines using RPC.
Sell Forum Accounts & Report Cybercrime Anonymously
Apparently PRODAFT is a threat intelligence company offering to buy forum accounts frequented by cybercriminals (XSS, Exploit in, RAMP4U, Verified, Breachforums) so that they can get better visibility into their infrastructure and TTPs. I need to think about this more, but I kind of like the “step away from your past life” framing + monetary incentive 🤔
My SIEM-Agnostic Creative Process to Detection Engineering
Ryan G. Cox outlines a seven-phase process for developing a detection suite in any SIEM environment for a new log source, starting from zero detections. The process:
Researching the log sources - What would an attacker try to do in this system?
Detection brainstorming
Diving into the logs - Review the frequency of different types of actions, what looks normal?
Detection organization - You might want to group related logic into a single rule to reduce redundancy and make your detections easier to manage.
Detection creation
Validation and testing - Test for success, failure, and edge cases.
Iteration - Tweak thresholds and exceptions to reduce noise.
Red Team
DarkSpaceSecurity/DocEx
By Dark Space Security: APT emulation tool to exfiltrate sensitive .docx, .pptx, .xlsx, .pdf files.
EricEsquivel/Inline-EA
By Eric Esquivel: A Cobalt Strike Beacon Object File (BOF) to execute .NET assemblies in your current Beacon process. This tool was built to bypass the latest Elastic at the time of making (version 8.17.4), and works against CrowdStrike Falcon and Microsoft Defender for Endpoint (MDE).
AI + Security
AsyncFuncAI/deepwiki-open
Automatically generate a Wiki for GitHub repos. Analyzes the code structure, generates documentation, creates visual diagrams explaining how things work, and more. Useful for orienting in a repo and doing code reviews. H/T my bud Louis Nyffenegger.
google/mcp-security
Google has released MCP servers that enable MCP clients to access Google’ security products and services, including Chronicle (threat detection, investigation, and hunting), Google Security Operations SOAR (security orchestration, automation, and response), Google Threat Intelligence (GTI), and Security Command Center (SCC).
Introducing the MCP Server for Wiz: Smarter AI Context, Stronger Cloud Security
Wiz announces their MCP server which integrates Wiz's security offerings to provide a unified security data source, cloud visibility, and contextual intelligence. The post discusses different use cases per product, like enabling natural language queries across Wiz Cloud for quick discovery of security issues, exposure analysis, and risk assessments, identifying and containing active threats with Wiz Defend, applying security fixes using Wiz Code, etc.
Jumping the line: How MCP servers can attack you before you ever use them
Trail of Bits describes how malicious tool descriptions can manipulate model behavior without ever being invoked. When a client application connects to an MCP server, it asks the server what tools it offers, the server responds with tool descriptions that the client adds to the model’s context to let it know what tools are available → prompt injection. Potential impact: generated code can be exfiltrated, vulnerabilities can be inserted, security alerts can be suppressed or miscategorized.
How MCP servers can steal your conversation history
Trail of Bits’ Keith Hoodlet on how an attacker could injecting trigger phrases into tool descriptions to exfiltrate the user’s entire conversation history, which can activate specifically when sensitive data patterns (like API keys, internal project names, or financial identifiers) appear in the conversation.
Deceiving users with ANSI terminal codes in MCP
Keith Hoodlet continues the blog series, describing how ANSI escape sequences can hide malicious instructions, for example: making text invisible (e.g. white text on white background), overwriting content through cursor movement, clearing the screen, or making hyperlinks that appear to go to a legit destination, but go to a malicious one instead.
LlamaFirewall - A security-first guardrail framework for mitigating agentic prompt injection, misalignment, and insecure coding risks.
AutoPatchBench - Evals to help the community build automatic AI security vulnerability fixing (GitHub). It features 136 fuzzing-identified C/C++ vulnerabilities in real-world code repos along with verified fixes sourced from the ARVO dataset.
A sensitive document classification framework that makes it easy to apply LLMs to preventing sensitive data exfiltration.
The PromptGuard v2 model family for stopping prompt injection attacks, which integrates w/ LlamaFirewall, and improves on PromptGuard v1 in precision, recall, latency, and model size.
CodeShield, which now integrates with LlamaFirewall, and does live blocking of insecure LLM code outputs, using regex and Semgrep.
Cybersecurity defender reasoning evals created in partnership with CrowdStrike (announced; to be released soon).
Audio deepfake detection (an ML model shared with trusted partners to defend against audio scams/fraud/social engineering attacks)
LlamaFirewall paper, which provides evals of misalignment detection, PromptGuard, and CodeShield.
💡 I haven’t had time to look into these in detail yet, but I love the release of more open source AI security tools, and the emphasis on evals, especially around AutoPatchBench, as tons of AppSec vendors are doing this but I’ve seen comparatively little in public benchmarking. It'd be nice to see this in non C/C++ code though.
Misc
The startling reason Australia is shooting koalas out of trees from helicopters - Getting shot out of a tree is not a g’day.
Some researchers claim, “Our team has devised a way to accurately and swiftly predict when prime numbers will appear,” which has large implications regarding cryptography. Big If True™️.
bugfish-nuke: A Windows tool for emergency privacy: instantly deletes sensitive data and active logins to protect your information during unexpected searches or house warrants.
The Hard Questions: 100 Essential Questions to Ask Before You Say "I Do"
University of Zurich researchers used the r/ChangeMyView subreddit to see how effective LLMs were at persuasion. Reddit’s response + additional links. A draft version of the study’s findings suggests the AI comments were 3-6x times more persuasive than humans.
Pete Buttigieg goes on the FLAGRANT podcast (more conservative-leaning). Love to see people with different views joking and debating respectfully, and how Pete emphasizes meeting people where they’re at vs finger wagging and shaming them for feeling differently.
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!
Cheers,
Clint
@clintgibler