[tl;dr sec] #220 - Detecting Manual AWS Actions, AI Threat Models, Living Off the False Positive

How to alert on non-infrastructure-as-code AWS actions, threat modeling apps that use AI, an autogenerated list of false positives from popular detection rulesets

Hey there,

I hope you’ve been doing well!

👨‍🎤 Musical Improv

Recently I’ve been really enjoying doing musical improv comedy. (Security obviously being my bridge job until my very stable acting career takes off.)

Though I’m certainly not (yet) good at singing, there’s something absolutely delightful about totally making up a song on the spot, and sometimes even a whole musical with a narrative arc.

If you want to see some masters of the musical improv craft, you’ve got to check out Off Book (example musical on YouTube, Spotify playlist). For Spotify, try episode 310: Infinity Time, Infinity Wine.

If this sounds fun and you’re in the San Francisco Bay Area and interested in trying musical improv, this site lists a bunch of opportunities.

For non-musical improv, I’d recommend Endgames, Leela, or BATS, all of whom have one-off drop-in classes where you can try improv with no commitment, as well as intro classes for people who’ve never done it before.

Improv jam at a conference? 🤔 

Sponsor

📣 Step up your AI security game: Free AI ThreatModels Demo for AWS, Azure, and GCP

Step up your AI security game with our ThreatModels of AWS, Azure, and GCP's AI services. TrustOnCloud helps you onboard cloud services faster for critical and compliance-bound workloads, so you can easily adopt new cloud services for your business while improving your security and implementing controls to maintain your risk appetite. After your tailored demo, you will receive a copy of your chosen AI ThreatModel. 

TrustOnCloud knows their threat models! Their threat models on S3 and Azure Storage are some of the most detailed I’ve seen, and were widely shared when I included them in tl;dr sec.

AppSec

Infisical/infisical
By Infisical: An open-source secret management platform. Includes a dashboard, client SDKs, CLI, secret versioning, audit logs, RBAC, native integrations with GitHub, AWS, Terraform, and more.

TruffleHog Now Detects AWS Canaries without setting them off
Dylan Ayrey shares how they did it: they scanned many GitHub repos looking for canaries, leveraged prior research to decode AWS account IDs from access key IDs, and thus found the unique AWS accounts associated with canarytokens.org. This approach could be used by threat actors or red teams to avoid tripping a canary. TruffleHog is intentionally not including any account IDs associated with Thinkst’s paid offerings. Neat work!
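For context on the underlying trick, here’s a minimal sketch of the publicly documented decoding research (my own illustration, not TruffleHog’s code; the function name is made up): the AWS account ID is recoverable from the access key ID itself, offline, with no API calls.

```python
import base64

def account_id_from_access_key_id(access_key_id: str) -> str:
    """Recover the AWS account ID embedded in an access key ID (per public research)."""
    trimmed = access_key_id[4:]                    # drop the 4-char prefix (AKIA, ASIA, ...)
    decoded = base64.b32decode(trimmed)            # the remainder is base32-encoded
    as_int = int.from_bytes(decoded[:6], "big")    # the first 6 bytes hold the account ID bits
    return str((as_int & 0x7FFFFFFFFF80) >> 7).zfill(12)

# Using the example access key ID from the AWS docs:
print(account_id_from_access_key_id("AKIAIOSFODNN7EXAMPLE"))
```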

Semgrep Code brings modern static analysis to C/C++
The Semgrep Pro engine now has GA support for C and C++. This post includes a great technical overview of why static analysis on C/C++ is hard (e.g. C++ grammar is undecidable, preprocessors and macros), the pros and cons of requiring compilation to do analysis, and how Semgrep understands and parses C/C++ source code effectively without requiring a build or compile step. If you’re a programming languages or compiler nerd, there’s good stuff here.

Great to see folks from Yahoo and Splunk already getting value from it 🥳 

Top 10 web hacking techniques of 2023
PortSwigger’s James Kettle shares the most innovative, must-read web security research published in the last year. This list (and those from prior years) is absolutely outstanding work; I highly recommend checking it out. The top few:

  1. Smashing the state machine: the true potential of web race conditions

  2. Exploiting Hardened .NET Deserialization

  3. SMTP Smuggling - Spoofing E-Mails Worldwide

  4. PHP filter chains: file read from error-based oracle

  5. Exploiting HTTP Parsers Inconsistencies

Go Go XSS Gadgets: Chaining a DOM Clobbering Exploit in the Wild
Some web hacking wizardry by Brett Buerhaus, Sam Curry, and Maik Robert, who describe a neat bug chain: XSS on one domain used to send a postMessage to the company’s authentication portal on a different domain, leading to XSS on the more sensitive authentication portal, bypassing a Content Security Policy, injecting external scripts into an already loaded DOM, and more. Very cool write-up.

Sponsor

📣 Kubernetes Security for Dummies  

How to implement a container security solution and 10 Kubernetes security best practices all rolled into one. 

Wiz partnered with Wiley to create the Kubernetes Security for Dummies eBook. This free guide includes everything essential to know about building a strong security foundation and running a well-protected operating system.  

You’ll learn:   

  • The fundamentals of cloud-native environments 

  • Expert advice for securing Kubernetes and container environments 

  • Strategies for keeping your cloud compliant 

  • Essential functions you want in a container security solution

  • 10 Kubernetes security best practices to start implementing today 

A free book on Kubernetes Security?! Let’s go!

Cloud Security

sigma/aws_console_getsignintoken.yml
TIL there are AWS-related Sigma rules, like this one by Chester Le Bron for detecting potentially suspicious events involving "GetSigninToken", which could indicate an adversary using the aws_consoler tool.
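If you’d rather hunt for this retroactively than wire up a SIEM rule, here’s a rough sketch of the same idea (mine, not the rule’s logic) using the CloudTrail Lookup API, assuming your trail records these sign-in/federation events in the region you query:

```python
import boto3

# Hunt for GetSigninToken calls, which aws_consoler uses to turn CLI
# credentials into a console session (the query region is an assumption).
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetSigninToken"}]
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username", "?"), event["EventName"])
```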

cycloidio/terracognita
Tool by Cycloid that reads from existing public and private cloud providers to generate infrastructure as code (Terraform). Currently supports AWS, GCP, AzureRM and VMware vSphere.

When AWS invariants aren’t [invariant]
Aidan Steele on how some AWS roles can still assume themselves even though their trust policy doesn’t explicitly allow it, which can violate assumptions you rely on, since a role that can self-assume can change its own session name. In short: search CloudTrail for AssumeRole events with additionalEventData.explicitTrustGrant == false. These will surface role assumptions that aren’t explicitly permitted by the trust policy, i.e. the ones that are going to surprise you and violate your invariants.
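As a rough sketch of that hunt (mine, not Aidan’s exact query), you could pull recent AssumeRole events via the CloudTrail Lookup API and keep the ones flagged as implicit trust grants:

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "AssumeRole"}]
)

for page in pages:
    for event in page["Events"]:
        record = json.loads(event["CloudTrailEvent"])        # full CloudTrail record
        extra = record.get("additionalEventData", {})
        # Keep assumptions not explicitly granted by the trust policy
        if str(extra.get("explicitTrustGrant")).lower() == "false":
            print(record["eventTime"],
                  record["userIdentity"].get("arn", "?"),
                  record.get("requestParameters", {}).get("roleSessionName"))
```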

Detecting Manual AWS Actions: An Update!
My bud Arkadiy Tetelman updated his excellent post on detecting manual AWS actions, that is, actions not made through an approved infrastructure-as-code workflow. He provides a new trigger mechanism (devs must access AWS through Okta / AssumeRoleWithSAML, with the AWS role session name set to their email address), an updated list of mutating AWS actions, and a way to detect session name bypasses (alert on AssumeRole calls where the new session name is not identical to the original).
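The session-name-bypass check boils down to something like this (an illustrative sketch over already-parsed CloudTrail records, not Arkadiy’s actual detection code):

```python
def find_session_renames(assume_role_records: list[dict]) -> list[dict]:
    """Flag chained AssumeRole calls where the caller picked a new session name,
    which would shed the email-based session name assigned by the Okta flow."""
    suspicious = []
    for rec in assume_role_records:
        caller_arn = rec.get("userIdentity", {}).get("arn", "")
        if ":assumed-role/" not in caller_arn:
            continue                                     # only role chaining is interesting here
        current_session = caller_arn.rsplit("/", 1)[-1]  # assumed-role ARNs end in /<session-name>
        requested = rec.get("requestParameters", {}).get("roleSessionName", "")
        if requested and requested != current_session:
            suspicious.append(rec)
    return suspicious
```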

How to secure CI/CD roles without burning production to the ground
Yan Cui recommends using identity federation instead of putting IAM credentials in CI/CD tools, discusses how least privilege can be a productivity burden (you need to update the cloud role every time you use new services), and provides hardening recommendations.

Including: use a separate AWS account for each environment to isolate them, use Service Control Policies to deny access to unused regions and resources, and use attribute-based access control to limit the CI/CD role’s ability to find out what it has access to.
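A minimal sketch of the identity-federation idea (the role ARN and token variable are placeholders, and most CI providers have first-class integrations that handle this for you): the CI job exchanges its OIDC token for short-lived credentials instead of reading long-lived IAM keys from secrets storage.

```python
import os
import boto3

sts = boto3.client("sts")

# Exchange the CI provider's OIDC token for temporary AWS credentials.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/ci-deploy",   # hypothetical deploy role
    RoleSessionName="ci-build",
    WebIdentityToken=os.environ["CI_OIDC_TOKEN"],         # token issued by the CI system
)["Credentials"]

# Use the short-lived credentials for the actual deploy work.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```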

Blue Team

Living off the False Positive
By Justin Ibarra: An autogenerated collection of false positives sourced from popular rule sets, categorized by ATT&CK technique, which can be used by red teams to blend in and by blue teams to identify weak spots in detection logic.

referefref/modpot
By James Brine: A modular web-application honeypot platform built using Go and Gin that lets you deploy simple HTML/JS honeypots mimicking web applications in order to detect attack-related requests and form entries.

Enhancing Sudoers File Security on MacOS with Osquery and Splunk
@Alexandarmatev describes how to monitor the sudoers file in a way that minimizes false positives and can catch subtle but dangerous changes by using osquery to collect detailed logs and Splunk for in-depth analysis. By including the actual contents of the sudoers file in the logs, you can compare it to a pre-defined list of known-safe entries in Splunk, ensuring that alerts are triggered only for modifications that might pose a security risk.
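The comparison logic amounts to something like this (a toy Python sketch of the idea, not the author’s osquery/Splunk pipeline; the baseline entries are placeholders):

```python
# Baseline of pre-approved sudoers entries (placeholders for your own).
KNOWN_SAFE = {
    "root ALL=(ALL) ALL",
    "%admin ALL=(ALL) ALL",
}

def unexpected_sudoers_entries(path: str = "/etc/sudoers") -> list[str]:
    """Return sudoers entries that aren't in the approved baseline."""
    with open(path) as f:
        entries = [line.strip() for line in f
                   if line.strip() and not line.strip().startswith("#")]
    return [entry for entry in entries if entry not in KNOWN_SAFE]

if unexpected_sudoers_entries():
    print("ALERT: sudoers contains entries outside the approved baseline")
```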

The NIST Cybersecurity Framework (CSF) 2.0
The NIST Cybersecurity Framework (CSF) 2.0 (32 page PDF) provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks. It offers a taxonomy of high-level cybersecurity outcomes that can be used by any org, regardless of size, sector, or maturity, to better understand, assess, prioritize, and communicate its cybersecurity efforts.

CSF 2.0 adds new content on governance and supply chains; Implementation Examples and Informative References; current- and target-state Organizational Profiles, so orgs can compare where they are to where they want to be and implement and assess security controls more quickly; and more.


Red Team

MacOS CI/CD with Tart
Justin Bui describes how the Snowflake Red Team built their macOS CI/CD and payload development environment with AWS EC2 and Tart, and automated the process with Terraform/Packer.

Bypassing EDRs With EDR-Preloading
Marcus Hutchins describes a technique called “EDR-Preloading,” which involves running malicious code before the EDR’s DLL is loaded into the process, enabling you to prevent the EDR DLL from running at all. Nicely detailed walkthrough, and he shares PoC code here.

AI + Security

At least 10% of Reddit’s S-1 revenue is from selling data to train LLMs
“Instead of Reddit building product experiences that create good advertising data to earn more on ads, Reddit will launch product experiences that produce more valuable data to feed to LLMs… User data remains the currency of the realm, but it’s packaged & sold in a very different way. Cookies will be replaced by data-purchasing contracts.”

Azure/PyRIT
The Python Risk Identification Tool for generative AI is an automation framework to empower security professionals and ML engineers to red team foundation models and their applications. It can help assess the robustness of LLM endpoints against different harm categories such as fabrication/ungrounded content (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment).

Analyzing AI Application Threat Models
NCC Group provides an analysis of the attack surface of applications that employ AI, a set of known and novel attack vectors enumerated by a Models-As-Threat-Actors (MATA) methodology, security controls that organizations can implement to mitigate vulnerabilities on the architecture layer, and best practices for security teams validating controls in dynamic environments. Very detailed post, nice work!

TensorFlow Threat Model and Security Guidelines Update
Google’s Jose Duart announces an update to the TensorFlow threat model, which updates the security recommendations, provides clear examples, and serves as a baseline to define scope in the Google Vulnerability Reward Program.

Misc

Gartner Identifies the Top Cybersecurity Trends for 2024
Generative AI, improving insecure employee behavior via human-centric design, third-party risks, continuous threat exposure, adopting outcome-driven metrics to bridge boardroom communication gaps, and identity-first approaches to security.

“By 2027, 50% of large enterprise CISOs will have adopted human-centric security design practices to minimize cybersecurity-induced friction and maximize control adoption.”

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler