
[tl;dr sec] #245 - Every Hacker Summer Camp AI Talk, AWS IAM Security Tools, Ransomware Tool Matrix

Skim >60 talks in ~15min, oodles of IAM tool-dles, all the tools ransomware gangs use

Hey there,

I hope you’ve been doing well!

I was curious about all of the AI talks at Hacker Summer Camp this year.

So I decided to gather them all in one place, summarize each abstract in 1-3 sentences, group the talks by category, and add supporting links (talk recordings, slides, papers, etc.) where possible.

So instead of spending hours scrolling through different conference websites, you can quickly get up to speed on the >60 talks in ~15 minutes.

👉️ Read the TL;DR here 👈️ 

P.S. Please let me know any feedback about what you liked, what you’d find more useful, etc., in case I’m a glutton for pain and decide to do this again!

I had a great chat with Sandesh Anand, formerly of Cigital and head of security at Razorpay, about both strategic and tactical ways to scale security programs, including:

  • Partnering with the Platform team to eliminate an entire vulnerability class via secure defaults.

  • Leveraging the core competencies of non-security teams for security.

  • Getting big, strategic security work prioritized by engineering teams.

  • How to never let a good incident go to waste.

  • How Sandesh’s new company Seezo is applying AI to AppSec.

Sponsor

📣 Start Here: 5 Tips to Fight AI Security Risk

Don’t let AI risks catch you off guard!

Most AI security risks fall into 4 primary categories: adversarial attacks, model inversion attacks, data poisoning, and model theft. 

Given these risks in AI systems, what’s the best way to secure them? 

Fortunately, Wiz recently put together an ebook, Getting Started with AI Security, to answer that exact question across the following areas:  

  • AI risks and best practices for mitigation: Understand the threats and how to tackle them. 

  • Safeguarding your AI development pipeline: Learn how AI-SPM can protect your AI projects. 

  • Using AI to power security: Discover how AI can enhance your security operations and incident response. 

Discover the 5 best strategies to mitigate AI risks in this essential guide.

Wiz researchers have found some critical issues in AI platforms recently; I’m curious about their advice on using AI securely 👍️ 

Container Security

Azure/dalec
A project designed to create secure, reproducible system packages and containers using a declarative configuration, with a focus on supply chain security through build-time SBOMs and Provenance attestations.

Level up your Kubernetes security with the CIS GKE Benchmarks
Google Cloud has partnered with the Center for Internet Security (CIS) to release updated CIS Benchmarks for GKE and GKE Autopilot, featuring >80 recommended controls that can help enhance your organization’s GKE security posture.

Unprotected container registries
Christopher Bleckmann-Dreher demonstrates how easy it is to discover and exploit unprotected container registries, finding over 10,000 exposed registries in a 2024 Internet-wide scan. He describes how to identify open registries, pull/push images, and backdoor existing images. He was able to push a canary’d image to >4,500 unprotected registries (which I would not recommend doing 😅).
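For a sense of how simple discovery is: an unauthenticated registry answers the standard Docker Registry HTTP API v2 catalog endpoint. A minimal sketch (only probe hosts you are authorized to test):

```python
import json
import urllib.request

def check_open_registry(host: str, timeout: float = 5.0) -> list[str]:
    """Probe the Docker Registry HTTP API v2 catalog endpoint.
    Returns the repository names if the registry answers without
    authentication, else an empty list."""
    url = f"https://{host}/v2/_catalog"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                return json.loads(resp.read()).get("repositories", [])
    except Exception:
        pass  # closed, auth-protected, or unreachable
    return []
```

If `/v2/_catalog` returns a repository list, the same API lets you enumerate tags and pull (and often push) manifests, which is exactly the exposure the talk demonstrates.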

Sponsor

📣 Discover, secure, and govern genAI use

Nudge Security has discovered over 500 unique genAI apps in customer environments to date, without the need for agents, browser plug-ins, network proxies, or any prior knowledge of an app’s existence. 

Within minutes of starting a free trial, you’ll have a full inventory of all genAI apps in use (along with every other SaaS app) and security profiles for each provider to quickly vet new or unfamiliar tools. 

Get your free genAI inventory today.  

👉 Free Trial 👈

Who all is using AI in your environment?! 😱

Cloud Security

The Hunt for ALBeast: A Technical Walkthrough
Miggo’s Liad Eliyahu describes ALBeast, a vulnerability in AWS Application Load Balancer (ALB) authentication that allows bypassing authentication and authorization in misconfigured applications. The issue stems from applications not validating the JWT token signer and from improper security group configurations. Mitigation steps: validate the ALB JWT token signer and restrict application traffic to only trusted ALBs.
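A minimal sketch of the signer check: the ALB forwards user claims in the `x-amzn-oidc-data` header as a JWT whose header segment carries a `signer` field (the ALB’s ARN). The ARN below is a hypothetical placeholder, and a real application must still verify the token’s signature against the regional ALB public key.

```python
import base64
import json

# Hypothetical ARN of the one ALB we trust to sign tokens
EXPECTED_ALB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/abc123"
)

def alb_jwt_signer(oidc_data: str) -> str:
    """Extract the 'signer' field from the header segment of the JWT
    that the ALB forwards in the x-amzn-oidc-data request header."""
    header_b64 = oidc_data.split(".")[0]
    # The ALB strips base64 padding; restore it before decoding
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return header.get("signer", "")

def is_trusted(oidc_data: str) -> bool:
    # Reject tokens whose signer is not our ALB (full signature
    # verification against the ALB public key is still required)
    return alb_jwt_signer(oidc_data) == EXPECTED_ALB_ARN
```

The point of the check is that an attacker who can reach the application directly can hand it a JWT signed by *their own* ALB; pinning the signer claim (plus locking the security group to the ALB) closes that path.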

💡 Cryptography and authn/authz are hard. How can we build platforms such that these nuances are irrelevant to people building on them and still be secure?

Exposing Security Observability Gaps in AWS Native Security Tooling
Security Runner’s Jonathan Walker evaluated AWS IAM Access Analyzer's effectiveness in detecting publicly exposed resources. It effectively identifies public resources for 65% of evaluated fundamental services, including S3, IAM roles, KMS keys, Lambda functions, and RDS snapshots, but coverage gaps exist for services like AWS Glue, SES, and CloudWatch Logs.

An AWS IAM Security Tooling Reference [2024]
Rami McCarthy shares an updated overview of AWS IAM security tools, including: PMapper for privilege escalation detection, Cloudsplaining for least privilege violations, Apeman for graph-based modeling, Parliament for policy linting, aws-lint-iam-policies for CI/CD integration, and IAMSpy for formally verifying whether a given IAM entity can perform an action against a particular resource.

Meet IAM APE: An Open Source Tool to Simplify AWS IAM Policy Management
Orca Security's Tohar Braun describes their cloud security tool IAM AWS Policy Evaluator (IAM APE), designed to simplify the process of calculating effective permissions for an AWS entity.

  1. The tool collects all the IAM policies present in your account,

  2. Calculates the effective permissions that each entity has,

  3. Presents you with a single policy, summarizing all of their actual permissions.
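The core of step 2 is merging allow and deny statements into one net result. A heavily simplified sketch of that idea (not IAM APE’s actual code; it ignores conditions, resources, NotAction, and wildcards):

```python
def effective_permissions(policies: list[dict]) -> set[str]:
    """Toy model of 'effective permissions': union all Allow actions
    across an entity's policies, then subtract all Deny actions,
    since an explicit Deny always wins in IAM evaluation."""
    allowed, denied = set(), set()
    for policy in policies:
        for stmt in policy.get("Statement", []):
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            target = allowed if stmt.get("Effect") == "Allow" else denied
            target.update(actions)
    return allowed - denied
```

Real evaluation is far subtler (SCPs, permission boundaries, resource policies, conditions), which is exactly why tooling like IAM APE exists.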

An AWS Administrator Identity Crisis: Part 1
SpecterOps’ Daniel Heinsen walks through some nuances in defining what an “administrator” in a cloud environment actually means, and advises instead focusing on specific resources and actions that matter to defenders. Permission sets, transitive permissions, and more, oh my!

Daniel also recently released Apeman at BlackHat Arsenal, a tool that models the AWS permission structure within a graph database, allowing you to more easily determine which principals can execute specific actions.
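The graph framing pays off because “who can do X?” becomes a reachability query. A toy illustration with hypothetical principals and edges (real tooling like Apeman uses a graph database, not a dict):

```python
from collections import deque

# Hypothetical edges: principal -> assumable role, role -> permitted action
edges = {
    "user:alice": ["role:dev"],
    "role:dev": ["role:admin"],
    "role:admin": ["action:s3:DeleteBucket"],
    "user:bob": ["action:s3:GetObject"],
}

def who_can(action: str) -> set[str]:
    """BFS the permission graph to find every principal with a path
    to the given action -- the kind of query a graph DB answers."""
    principals = set()
    for node in edges:
        queue, seen = deque([node]), {node}
        while queue:
            cur = queue.popleft()
            if cur == f"action:{action}":
                principals.add(node)
                break
            for nxt in edges.get(cur, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return principals
```

Note how transitive access falls out for free: alice can delete buckets only via two role assumptions, which a flat policy listing would hide.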

💡 SpecterOps maintains BloodHound, a tool that ingests Active Directory or Azure info into a graph database to identify hidden, unintended, or complex attack paths. Apeman basically does the same but for AWS. This is a nice example of taking the same approach/idea and applying it to a new domain.

Ask yourself: in what other domains are there complex relationships between actors, targets, and nuanced permission sets? Is there already a BloodHound for <X>?

Also: Primate themed cloud security tools, so hot right now! 😂 

Blue Team

BushidoUK/Ransomware-Tool-Matrix
A resource containing all the tools each ransomware gang uses, by @BushidoToken.

Linux Detection Engineering - A primer on persistence mechanisms
Deep dive by Elastic’s Ruben Groenewoud on basic to advanced Linux persistence techniques, including scheduled tasks (cron), systemd, shell profile modifications, XDG autostart entries, setuid/setgid binaries, sudoers file modification, account manipulation, adding to the SSH authorized_keys, and bind/reverse shells. For each, he discusses how it works, how to set it up, how to detect it (SIEM and Endpoint rules), and how to hunt for it (ES|QL and OSQuery hunts).
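As a flavor of the hunting side, here is a minimal sketch (my own, not from the post; real hunts use the ES|QL and osquery queries Ruben provides) that greps common persistence locations for simple reverse-shell and download-cradle indicators:

```python
import os

# Crude indicators of the persistence payloads the primer covers
SUSPICIOUS = ("bash -i", "/dev/tcp/", "curl ", "wget ", "nc -e")

def scan_file(path: str) -> list[str]:
    """Return lines in a file matching any indicator; missing or
    unreadable files are skipped silently."""
    hits = []
    try:
        with open(path, "r", errors="ignore") as f:
            for line in f:
                if any(tok in line for tok in SUSPICIOUS):
                    hits.append(line.strip())
    except OSError:
        pass
    return hits

def hunt() -> dict[str, list[str]]:
    """Scan cron files and the current user's authorized_keys."""
    targets = ["/etc/crontab", os.path.expanduser("~/.ssh/authorized_keys")]
    for d in ("/etc/cron.d", "/etc/cron.daily"):
        if os.path.isdir(d):
            targets += [os.path.join(d, f) for f in os.listdir(d)]
    return {p: h for p in targets if (h := scan_file(p))}
```

String matching like this is trivially evadable; it only illustrates where the persistence artifacts live, which is the primer’s real value.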

Ruben also released PANIX, a customized Linux persistence tool for security research and detection engineering.


Red Team

helviojunior/hookchain
By Helvio Junior: A new perspective for bypassing EDR solutions. “Through a precise combination of IAT Hooking techniques, dynamic SSN resolution, and indirect system calls, HookChain redirects the execution flow of Windows subsystems in a way that remains invisible to the vigilant eyes of EDRs that only act on Ntdll.dll, without requiring changes to the source code of the applications and malware involved.”

SSH Tunnelling to Punch Through Corporate Firewalls
JUMPSEC’s Sunny Chau describes advanced SSH tunneling techniques for red teaming, focusing on using the native Windows 10/11 OpenSSH client for reverse dynamic SOCKS proxies. He details three methods to bypass corporate firewalls: basic -R flag usage, leveraging Azure domains/ASNs, and using alternative egress ports (80, 443, 53). The post also covers getting payloads in/exfiltrating data via SCP and in-memory PowerShell execution.

AI + Security

Anthropic has taken the unusual step of publishing Claude’s system prompt, which is awesome for transparency 🙌 

AI Made a 0-Day: Noah Get the Boat
HOPE 2024 talk by Erica Burgess (watch the video at the 3:56:48 timestamp) on how she “used AI to generate an RCE zero-day for server compromise to manipulate search engine AI for vulnerability discovery, for captcha bypass, to make tools that would have been impossible without generative AI, and more.”

Using AI for Offensive Security
29 page Cloud Security Alliance whitepaper by Adam Lundqvist, Kirti Chopra, Michael Roza, Sven Vetsch et al covering various AI-powered offensive security domains, including: AI augmentation and autonomy, reconnaissance, scanning, vulnerability analysis, exploitation, reporting, threat actor’s use of AI, challenges and limitations, and more.

Relatedly, in his post Superhuman Hackbots on the Horizon, Adam Lundqvist describes a few hypothetical scenarios for the potential impacts of better-than-human hackbots (ransomware, disinformation, espionage, etc.), and hypothesizes that, at the current rate of LLM and tooling improvement, the first serious attacks could occur by mid-2025.

Offensive AI Agents: A Timeline of My Research
Jeff Sims gives an overview of 3 offensive AI systems he built, including links to the supporting papers, videos, and conference talks.

  • BlackMamba - Leverages GenAI for on-the-fly synthesis of a polymorphic keylogger payload, dynamically executed using Python’s exec() function.

  • EyeSpy - The agent operates entirely in memory, except for a benign stub, and has the ability to self-repair its AI-generated malicious code. The agent identifies compilation or runtime errors and initiates a repair cycle with the LLM until the code is executable in memory.

  • Red Reaper - Inspired by ISOON leaks (Chinese APT capabilities), it emulates advanced espionage and influence techniques, using an ensemble of machine learning approaches (NLP, graph analysis, GenAI) to analyze and operationalize intelligence from stolen emails.


✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate it if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler