[tl;dr sec] #215 - Cloud Threat Landscape, Web LLM Security Labs, Azure Logs Primer
A database of cloud security incidents, campaigns, and techniques, Portswigger's labs on testing LLMs in web apps, using Azure logs for detection
I hope you’ve been doing well!
🐕️ Dogn’t You Dare
Last week, I shared that I recently slid on something questionable while entering a BART train.
Afterwards, a kind reader shared that apparently a province in Italy is requiring owners to register their dog’s DNA so they can fine people who don’t properly dispose of their dog’s waste.
I did not have that on my “things DNA will be used for” BINGO card. In SF, this would settle the age-old “human or pet” question.
What a time to be alive.
New guest post!
My friend Ross Haleliuk, best-selling author of the recently released Cyber for Builders book and the excellent Venture In Security newsletter, shared his thoughts on the similarities between how QA and pentesting have evolved.
Two things especially resonated with me: 1) that lack of continuous security posture testing will one day be seen as negligent, and 2) that security often tries to reinvent the wheel, when we should be learning from other fields.
I have a really neat interview about learning from other fields that should be released soon…
📣 The Critical State of AI in the Cloud
What 200,000+ cloud accounts revealed about the AI surge.
Over the past year and a half, generative AI has seen explosive growth among both end-users and businesses. But at what rate? In this new Wiz Research report, discover how many organizations are adopting managed AI services and what this could mean for your security teams.
An in-depth breakdown of generative AI’s impact on cloud landscapes
Statistics on the surge of AI services across major cloud providers
What this means to you: Rising costs and new security concerns
Be the first to get access to these exclusive insights – grab your free copy now.
Apparently Wiz has AI-SPM features (TIL that term), which gives security teams insight into potentially sensitive data, secrets, etc. being sent to model providers, and puts that data on the same Wiz Security Graph. Neat.
Identify Slack Workspace Names from Webhook URLs
Truffle’s Joe Leon introduces whoamislack, a tool to enumerate Slack Workspace Names from Slack Webhook URLs, which works even if the keys are no longer active.
Methodology: How we discovered over 18,000 API secret tokens
Announcing cvemap from ProjectDiscovery
ProjectDiscovery’s Brendan O’Leary discusses cvemap, a new tool that integrates data from CISA’s Known Exploited Vulnerabilities Catalog (KEV), Exploit Prediction Scoring System (EPSS), PoCs from official sources and GitHub, CVEs reported on HackerOne, live exposure on the Internet, GitHub and OSS popularity data, and nuclei templates that can fingerprint the CVE. Requires an API key.
📣 Shinobi: A simple way to understand, govern and fix your cloud infrastructure.
Your cloud is ever-changing. Incidents and misconfigurations are a fact of life, but let's get real. Applying that fix recommended by your cloud security tool is rarely as simple as clicking "accept", at least not without breaking something. Changes need context.
With Shinobi's knowledge engine and visuals, you can get detailed context about your environment by simply talking to it. We've created a sandbox for you to explore cloud investigations with Shinobi. We hope you enjoy it, and feel free to book a demo.
Investigating and securing your cloud environment via chat (vs memorizing all the commands) sounds 👌 Cool that there’s a sandbox to play around with it.
Cloud Threat Landscape
A curated public instance of Wiz Research’s internal cloud threat intelligence database, summarizing info about publicly disclosed cloud security incidents and campaigns. It also lists threat actors known to have compromised cloud environments, the tools and techniques in their arsenal, and the technologies they prefer to target. Explore 107 incidents, 96 threat actors, and 100+ attack techniques.
By welldone.cloud’s Michael Kirchner: A collection of example Service Control Policies (SCPs) that are useful for sandbox and training AWS accounts. The SCPs deny API calls that change baseline account settings, have long-term financial effects, or operate outside allow-listed AWS regions or services.
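To make the idea concrete, here's a minimal sketch of the kind of SCP described: one statement denies API calls outside an allow-listed region (exempting global services like IAM and STS), and another denies long-term financial commitments. The specific region and action names are illustrative, not taken from the collection.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideAllowedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "sts:*", "organizations:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": ["eu-central-1"] }
      }
    },
    {
      "Sid": "DenyLongTermFinancialCommitments",
      "Effect": "Deny",
      "Action": [
        "ec2:PurchaseReservedInstancesOffering",
        "savingsplans:CreateSavingsPlan"
      ],
      "Resource": "*"
    }
  ]
}
```

Since SCPs only deny, attaching this to a sandbox OU leaves everything else permitted by the accounts' own IAM policies.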
Azure Logs: Breaking Through the Cloud Cover
Permiso’s Nathan Eades shares a primer on the log structure of Azure Monitor Activity Logs, highlighting the most critical log fields, and why understanding and using correlationId and operationId is crucial for accurately sequencing and making sense of operations within Azure.
The final answer: AWS account IDs are secrets
Plerion’s Daniel Grzelak argues that while AWS’ docs state that account IDs are not secrets, many cloud attacks require knowing a target account ID or ARN, so if an attacker can’t identify a target account ID, it’s harder to hack it. Daniel says that many companies are unintentionally exposing AWS resources publicly, and the reason they haven’t been compromised is that AWS identifiers, like account IDs, are not well known.
Google Cloud Incident Response Cheat Sheet
Noah McDonald and Wes Guerra share a cheat sheet covering an overview of IR in GCP, logs for threat hunting and incident response, log analysis, service accounts, the GCP attack matrix, and more.
An Open Container Image (OCI) hook to generate seccomp profiles by tracing the syscalls made by the container. The generated profile allows all the syscalls made and denies every other syscall, thus reducing the container’s attack surface. This can be high effort to do by hand, great to see it streamlined.
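For reference, a generated profile is standard OCI seccomp JSON: everything is denied by default and only the observed syscalls are allowed. A hand-wavy sketch (the syscall list here is illustrative, not output from the tool):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "mmap", "execve", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```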
Kubernetes security: Safeguarding your container kingdom
Red Canary’s Madhav Nakar provides an overview of the core components in Kubernetes, container-related issues (malicious images, suspicious registry locations, on-the-fly containers), cluster misconfigurations (RBAC misconfigs, pod securityContext specification, exposed kubelet API), and other Kubernetes attacks.
How the Sys:All Loophole Allowed Us To Penetrate GKE Clusters in Production
Orca Security’s Ofir Yakobi describes a misconfiguration issue that centers on how the system:authenticated group in GKE includes any Google authenticated account, including outside the organization. They found over 1,000 clusters with varying degrees of exposure.
This reminds me of the recent issue with GitHub-to-AWS keyless authentication (1, 2) where if you weren’t careful you could accidentally let any GitHub Action from any GitHub repo assume the AWS role, rather than just your repo.
Perhaps platforms should name permissions like: dangerouslyAllowAnyoneFromAnyOrgIncludingNotYoursYouProbablyDontWantThis. That gives a different vibe than system:authenticated.
Dependency Confusions in Docker and remote pwning of your infra
Dependency confusion is when you accidentally pull a malicious dependency or container instead of the legitimate intended one. Guillaume Quéré describes how this can occur when the following conditions are met:
Two or more mirrors are configured and Docker Hub is declared first
The project uses one or more namespaces in their internal registry
That namespace is not registered on Docker Hub
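The risky ordering can be sketched with a podman/CRI-O-style registries.conf (the exact mechanism varies by container runtime, and the registry names here are hypothetical):

```toml
# /etc/containers/registries.conf
# Unqualified image names are resolved against these registries in order,
# so "acme/build-tools" is looked up on Docker Hub *first*.
unqualified-search-registries = ["docker.io", "registry.internal.acme.example"]
```

If the `acme/` namespace exists only in the internal registry, an attacker who registers `acme` on Docker Hub and publishes a matching image name wins the lookup.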
Deceptive Deprecation: The Truth About npm Deprecated Packages
Aqua Nautilus’ Ilay Goldman and Yakir Kadkoda discuss how ~21% of the top 50K downloaded npm packages are deprecated, meaning they might contain unfixed vulnerabilities or can be taken over by attackers. They’ve released Dependency Deprecation Checker, which scans your package.json file and checks for dependencies that rely on deprecated packages.
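The underlying check is simple, since the npm registry marks deprecated versions with a `deprecated` field in the package metadata. A minimal sketch of that check (the function names are mine, not the released tool's):

```python
import json
from urllib.request import urlopen

REGISTRY = "https://registry.npmjs.org"


def is_deprecated(metadata: dict) -> bool:
    """Return True if a package's latest version carries a deprecation notice.

    `metadata` is the JSON document the npm registry returns for a package;
    npm adds a 'deprecated' string field to deprecated versions.
    """
    latest = metadata.get("dist-tags", {}).get("latest")
    version_info = metadata.get("versions", {}).get(latest, {})
    return "deprecated" in version_info


def check_package(name: str) -> bool:
    """Fetch registry metadata for `name` and check its latest version."""
    with urlopen(f"{REGISTRY}/{name}") as resp:
        return is_deprecated(json.load(resp))
```

Walking a package.json's dependencies through `check_package` approximates what the released tool does at the surface level.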
They arrived at this line of research while bug hunting at scale with Semgrep, where they found that maintainers would often just archive a repo instead of fixing a security issue 🤦
MavenGate: a supply chain attack method for Java & Android apps
Oversecured found a number of Maven packages with an expired domain that an attacker could potentially register and thus compromise. How? Take all the package names (groupId) on Maven Central → convert to domain → see if you can register it using GoDaddy Bulk Domain Search. 3,710 packages (~14%) had domains you could purchase, and there were 291 that pointed to available io.github.[username]’s.
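The groupId → domain step is just reversing Maven's reverse-DNS convention. A minimal sketch (function name is mine; Oversecured checked availability via GoDaddy Bulk Domain Search, which isn't shown here):

```python
def group_id_to_domain(group_id: str) -> str:
    """Reverse a Maven groupId into the domain it implies.

    Maven convention is reverse-DNS, so 'com.example.utils' implies the
    publisher controls 'example.com'. Only the first two labels matter
    for domain ownership.
    """
    labels = group_id.split(".")
    if len(labels) < 2:
        raise ValueError(f"not a reverse-DNS groupId: {group_id!r}")
    # Reverse the first two labels: top-level domain + registrable name.
    return f"{labels[1]}.{labels[0]}"
```

Note that `io.github.[username]` groupIds map to github.io, which isn't registrable; there the question is whether the GitHub username itself is claimable, which is how they found the 291 available ones.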
💡 Research Tip: If you’re looking for a project idea, try taking an attack or approach and then apply it to a new programming language, ecosystem, web framework, etc. That’s basically what this is, as people previously examined a similar thing for NPM and PyPI.
A threat-hunting tool for Linux that uses eBPF probes to monitor for security-relevant events, and enriches and correlates them. Like Sysmon for Windows.
The Efficacy of Dynamic Honeypots for Novel Exploitation Discovery
Say there’s a new CVE but no public proof of concept yet. Rather than trying to recreate the PoC from the public details, which is time intensive, Greynoise discusses the effectiveness of spinning up the vulnerable software in a honeypot and then just catching PoCs sent to your app. Note: it’s important that the honeypot can handle multiple steps in an exploit chain.
On Detection: Tactical to Functional
SpecterOps’ Jared Atkinson encourages thinking about the subroutines an attacker must implement, and the steps and their order, to accomplish their goal. For example, System Token Impersonation/Theft’s function chain has four parts, each relying on the output of a prior step. By thinking this way, in terms of attacker goals and the various ways they can be achieved, you can create more robust and finely tuned detections.
Machine Learning + Security
Web LLM attacks
Portswigger’s Web Security Academy document on attacking and defending apps that leverage LLMs, with links to hands-on labs.
AI-poisoning tool Nightshade now available for artists to use
University of Chicago academics have released a tool that artists can use to subtly modify their art so that AI models trained on it miscategorize objects and later render them incorrectly (e.g. treating a cow as a purse). The goal is to ensure artists are compensated for their work.
Using Multi-Modal Large Language Models For Breaking Captchas
Aashiq Ramachandran walks through how straightforward it is to solve text and math CAPTCHAs using LLMs that can analyze images, such as Google Gemini Vision Pro, and releases i-am-a-bot, a tool that uses the same approach to solve CAPTCHAs programmatically.
What used to be an effective protection can now be solved in a few hundred lines of Python. In what other domains is this also the case?
Thread by Trail of Bits’ Dan Guido on the stress of being CEO and a detailed walk through on what he does for his personal healthcare
Dingboard - A free AI powered meme image editor. A Gen Z colleague told me about this so it must be legit.
About Stolen Device Protection for iPhone
A new layer of security when your iPhone is away from familiar locations, like home or work, that helps protect your accounts and personal info in case your iPhone is stolen. For example, requiring biometric auth to perform sensitive actions and a security delay before certain actions can be taken (so you can report the device as lost).
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!