
[tl;dr sec] #221 - Hacking Google AI for $50K, Detecting Secrets in Videos, Securing Medicare & Medicaid

Bugs found in a private Google bug bounty event, GitLab's new OSS tool to find secrets leaked in video, how to secure a massive U.S. gov't org

Hey there,

I hope you’ve been doing well!

One of the biggest AWS accounts in the world.

Developers who don't work at your company.

Highly sensitive PII you must protect.

I had a great time chatting with my friend Robert Wood about how he handles this as CISO of the Centers for Medicare and Medicaid Services (CMS). We covered a lot, including:

  • How do you encourage secure software development when it's being built by vendors?

  • Leaning into secure defaults across a diverse environment

  • Building centralized platforms and data ingestion pipelines to streamline security processes

  • Making security a people-aligned function

And much more! You can 👉️ watch it here 👈️ 

Sponsor

📣 Conveyor’s market-leading AI autofills security questionnaires for you.

Conveyor’s AI for security review automation is so good, it will:

  • Deflect questionnaires when customers can self-serve what they need from a sophisticated trust center

  • Auto-generate accurate answers to entire questionnaires + export in the original format

  • Autofill OneTrust portal questionnaires in one click

  • Use both documents & Q&A pairs to generate answers so knowledge base maintenance is a breeze

Try it for free and mention this ad to get 5 free questionnaire credits.

Instead of linking you to a boring landing page, we thought a little SNL-worthy skit might tell the story better.

Never fill out another security questionnaire?! 😍 More info here. Also, I thought the skit approach was different and fun, nice.

Secrets

Finding secrets on GitLab
Richard Tweed shares gitlab-secrets, a tool to find dangling or force-pushed commits in GitLab repos that may contain secrets. Two issues ago I included github-secrets by Neodyme, who originally discussed how this can occur and included a tool for examining GitHub repos for the same.

Keeping secrets out of public repositories
GitHub has enabled secret scanning push protection by default for all pushes to public repos. Raising the security bar / defaults for an ecosystem is 👌 Great work. Also: “In just the first eight weeks of 2024, GitHub has detected over 1 million leaked secrets on public repositories.”

How-to: Detecting secrets in video content
GitLab’s Dennis Appelt describes their newly open-sourced tool that can detect secrets in video content. Approach: split the video into frames, run optical character recognition (OCR) over each frame (Google Cloud Platform's Video Intelligence API handles both), then run approximate regex matching on the resulting text against known secret patterns. There’s also an automated workflow that scans new videos uploaded to GitLab’s YouTube channel.
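
For a feel of the pipeline, here’s a minimal Python sketch of the frames → OCR → regex idea using the Video Intelligence API mentioned above. The bucket URI and the single GitLab-PAT pattern are placeholder assumptions, and it does plain regex matching rather than the approximate matching the post describes.

```python
import re
from google.cloud import videointelligence

# One example pattern (GitLab personal access token); a real scanner would load many.
SECRET_PATTERNS = {
    "gitlab_pat": re.compile(r"glpat-[0-9A-Za-z_\-]{20}"),
}

def scan_video_for_secrets(gcs_uri: str) -> list[tuple[str, str]]:
    """OCR a video with the Video Intelligence API, then regex-match the text."""
    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.TEXT_DETECTION],
            "input_uri": gcs_uri,
        }
    )
    result = operation.result(timeout=600)

    findings = []
    # One annotation_result per input video; each text_annotation is a detected string.
    for annotation in result.annotation_results[0].text_annotations:
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(annotation.text):
                findings.append((name, annotation.text))
    return findings

print(scan_video_for_secrets("gs://example-bucket/meeting-recording.mp4"))
```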

Fun fact: GitLab had to implement this because they publish their meeting recordings over at GitLab Unfiltered. Someone used them to make the GitLab Meeting Simulator, which people would use to pretend they were working.

Sponsor

📣 Compete in a Cloud Security CTF Challenge

Join Lacework for a virtual Cloud Security Capture the Flag Challenge on March 19. You’ll have one hour to complete as many challenges as possible. Plus, the top 3 scorers will win a Valve Steam Deck. Spots are limited, so register now.

oOo a CTF and a chance to win a gaming device, let’s go! 🙌 

AppSec

LocoMocoSec CFP is open until March 31st
I love LocoMocoSec - it’s a single-track conference in Hawai’i that focuses on what I would call “modern security” talks: scaling security, modern tooling, building a “paved road,” and other stuff you’ll probably love if you like this newsletter. I highly recommend submitting and/or attending!

I Stopped Using Passwords. It's Great—and a Total Mess
A Wired article walking through what it’s like trying to use passkeys everywhere. In short, where it works, it’s great, but some devices don’t support passkeys, passkeys aren’t supported on all sites, and a few other edge cases exist.

One does not simply implement passkeys
Josh Grossman shares a detailed walkthrough of the user experience trying to use passkeys on a site (Kayak), navigating challenges logging in on mobile and desktop, different browsers, etc.

classvsoftware/under-new-management
By Matt Frisbie: A Chrome extension that periodically checks if your installed extensions change owners, which could indicate an open source developer being bought out by someone shady. Note that the check appears to be based on developer info listed on the Chrome Web Store, not based on source code changes (e.g. new contributions or ownership change on GitHub).

For additional context (and sadness), see the many offers to buy the Hover Zoom extension, or Matt’s post Let's buy some fake reviews for a pointless Chrome extension.

Using form hijacking to bypass CSP
PortSwigger’s Gareth Heyes gives a great overview of form hijacking: using HTML injection to override an existing form’s action via the formaction attribute on an injected button, or injecting your own form, so that submitted data is sent to the attacker’s server. Overeager password managers can fill in injected input elements with credentials, leading to them being stolen.

Gareth shares real-world examples of form hijacking (on InfoSec Mastodon and PortSwigger’s own site) and walks through a number of CSP mistakes and how to fix them.
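
One relevant control here is CSP’s form-action directive, which restricts where forms on a page are allowed to submit (including targets set via formaction). A minimal sketch of setting it, assuming a hypothetical Flask app rather than anything from the post:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # form-action limits where <form> submissions (including targets set via the
    # formaction attribute) may go, so an injected form can't post credentials
    # to an attacker-controlled origin.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; form-action 'self'"
    )
    return response

@app.route("/login")
def login():
    return "<form action='/login' method='post'><input name='username'></form>"
```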

Cloud Security

Hacking Terraform State for Privilege Escalation
Plerion’s Daniel Grzelak demonstrates how an attacker with access to modify a Terraform State file can insert a malicious provider that will execute code on the next terraform init, and provides mitigation guidance.

Navigating the Cloud: Exploring Lateral Movement Techniques
Palo Alto Networks’ Eden Elazar put together a good overview of four in-the-wild attacker techniques, including how the technique works in each of the three main cloud service providers (AWS, GCP, Azure), covering: Snapshot Creation, SSH Keys, Serial Console Access, and Management Services.

Introducing pql, a pipelined query language that compiles to SQL
Anyone else get excited about new query languages? Just me? RunReveal’s Evan Johnson announces the open source pql, a pipeline-based query language that compiles down to SQL. Unsupported functions are passed through, providing full usage of the underlying APIs. The goal is to have a vendor-agnostic query language so you’re not locked into a given SIEM.

The state of ABAC on AWS (in 2024)
Rami McCarthy shares a great overview of the state of attribute-based access control (ABAC) in AWS, compares how it’s evolved since Scott Piper’s post in 2020, and discusses a number of problems: lack of privilege support, its inconsistent interface, challenges scaling ABAC, and difficulties when trying to do temporary access via ABAC.

Container Security

NilsIrl/dockerc
By Nils: A tool to compile Docker images to standalone portable binaries. The README includes a Reddit screenshot that made me actually lol 😂 

Auditing AWS EKS Pod Permissions
Puma Security’s Eric Johnson discusses blocking pod IMDS access (via IMDSv2 hop limits or Network Policy), IAM Roles for Service Accounts (IRSA), auditing IRSA permissions, EKS Pod Identity, and auditing EKS Pod Identity permissions. See also Datadog’s Managed Kubernetes Auditing Toolkit (MKAT).
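
For the IMDS piece specifically, here’s a hedged boto3 sketch that enforces IMDSv2 and drops the hop limit to 1 on a worker node, so pods an extra network hop away can’t reach the node’s metadata service. The instance ID is a placeholder; in practice you’d bake this into your launch template or node group config.

```python
import boto3

ec2 = boto3.client("ec2")

# Require IMDSv2 tokens and set the PUT response hop limit to 1 so containers
# (an extra network hop away from the node) can't reach instance metadata.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder worker node instance ID
    HttpTokens="required",             # IMDSv2 only
    HttpPutResponseHopLimit=1,
    HttpEndpoint="enabled",
)
```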

Supply Chain

Over 100,000 Infected Repos Found on GitHub
Apiiro’s Matan Giladi and Gil David describe a large scale campaign in which malicious actors cloned legit repos, inserted malware, uploaded them back to GitHub with identical names, automatically forked each thousands of times, and then promoted them across the web via forums, Discord, etc.

Exploring the GitHub Advisory Database for fun and (no) profit
Aquia’s Dakota Riley walks through analyzing the 16,488 advisories in the open source GitHub Advisory Database using Pandas/Jupyter, including looking at KEV prevalence (73) as well as high EPSS (263, 43 of which overlap with KEV advisories), vulnerability breakdown by ecosystem (Maven, npm, …), common CWEs, 16 advisories for GitHub Actions, and more. Great example of exploring a dataset with Pandas.
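
In the same spirit, a tiny pandas sketch of that kind of exploration. The repo path and OSV field names below are my assumptions about the advisory database layout, not taken from Dakota's notebook:

```python
import glob
import json
import pandas as pd

rows = []
# Assumed layout: one OSV-format JSON file per advisory in the cloned repo.
for path in glob.glob("advisory-database/advisories/github-reviewed/**/*.json", recursive=True):
    with open(path) as f:
        adv = json.load(f)
    rows.append({
        "id": adv.get("id"),
        "severity": adv.get("database_specific", {}).get("severity"),
        "cwes": tuple(adv.get("database_specific", {}).get("cwe_ids", [])),
        "ecosystems": tuple(
            a["package"]["ecosystem"] for a in adv.get("affected", []) if "package" in a
        ),
    })

df = pd.DataFrame(rows)

# Advisory counts by ecosystem (an advisory can affect packages in several).
print(df.explode("ecosystems")["ecosystems"].value_counts().head(10))

# Most common CWEs across advisories.
print(df.explode("cwes")["cwes"].value_counts().head(10))
```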


Blue Team

binref/refinery
By Jesko Hüttenhain: A collection of Python scripts that implement transformations of binary data such as compression and encryption, built to aid with malware triage. All scripts read from stdin and write to stdout; think CyberChef on the command line.

The Threat Detection Balancing Act: Coverage vs Cost
I really like this post by Andrew VanVleet, in which he argues that there is a fundamental upper limit to the number of detections a company can reasonably employ, based on false positive rates and maintenance time. Thus, detection engineers should focus on maximizing the incremental coverage and minimizing the incremental cost of each deployed detection, to cover as much of the total attack surface as possible.

The napkin math calculations provide great perspective, and I like the emphasis at the end of mechanisms that eliminate classes of attack techniques entirely.
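
To make the shape of that argument concrete, here’s a back-of-the-napkin sketch; every number in it is an illustrative assumption, not a figure from the post:

```python
# All numbers below are illustrative assumptions, not figures from the post.
analysts = 4
hours_per_analyst_per_week = 40
triage_hours_per_alert = 0.5            # average time to investigate one alert
maintenance_hours_per_detection = 0.25  # weekly tuning/upkeep per detection
false_positives_per_detection_per_week = 2

weekly_capacity = analysts * hours_per_analyst_per_week

# Each deployed detection costs triage time for its false positives plus upkeep.
cost_per_detection = (
    false_positives_per_detection_per_week * triage_hours_per_alert
    + maintenance_hours_per_detection
)

max_detections = weekly_capacity / cost_per_detection
print(f"Rough ceiling: {max_detections:.0f} detections")  # ~128 with these numbers
```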

Red Team

naksyn/Pyramid
By Diego Capriotti: A tool to help operate in EDRs' blind spots. Pyramid can perform post-exploitation tasks in an evasive manner, executing offensive tooling from a signed binary (e.g. python.exe) by importing dependencies in memory.

Mail in the Middle – A tool to automate spear phishing campaigns
Orange Cyberdefense’s Felipe Molina describes their newly open-sourced mail-in-the-middle, a tool that automates the infrastructure to catch “stranded emails” sent to typosquatted domains, and can automatically forward them to the intended recipient after making malicious modifications, like changing links, adding tracking pixels, including UNC paths to exfiltrate NetNTLM hashes, modifying attachments to be C2 payloads, etc.

This reminds me of the millions of US military emails that were mistakenly sent to .ml (Mali) domains.

AI + Security

  • aiocrioc - Script by James Brine that uses an LLM to extract Indicators of Compromise (IoCs) such as IP addresses, domains, and URLs from text, then does the same for images after OCR-ing them with Pytesseract (see the rough sketch after this list).

  • create_threat_model - New Fabric pattern by Daniel Miessler. There are also Fabric patterns for creating MermaidJS visuals and mindmaps, which are pretty rad.
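
Here’s a rough sketch of the image half of that flow, using Pytesseract for OCR plus a couple of simple regexes. Note this is a simplification: aiocrioc itself uses an LLM for the extraction step, and the patterns and file path below are placeholders.

```python
import re

import pytesseract
from PIL import Image

# Crude example patterns; a real extractor would handle defanged IoCs and many more types.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "url": re.compile(r"https?://[^\s\"'<>]+"),
}

def iocs_from_image(path: str) -> dict[str, list[str]]:
    """OCR an image and pull out anything matching the IoC regexes."""
    text = pytesseract.image_to_string(Image.open(path))
    return {name: pattern.findall(text) for name, pattern in IOC_PATTERNS.items()}

print(iocs_from_image("screenshot.png"))  # placeholder path
```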

Cloudflare announces Firewall for AI
Cloudflare has launched a Web Application Firewall (WAF) specifically tailored to be deployed in front of LLMs. It currently provides DoS protection and sensitive-info detection in responses (both using existing Cloudflare features), plus a detector for potential model abuse (the likelihood that an input is a prompt injection or will lead to inaccurate or offensive output).

See also Cloudflare’s AI Assistant, which lets you query your security event data in plain language, turning your input into GraphQL filters for their API and then visualizing the results.

We Hacked Google A.I. for $50,000
Roni Carta, Joseph Thacker, and Justin Gardner discuss several vulnerabilities they found in Google’s AI systems, including an IDOR in Bard that allowed unauthorized access to other users' images, a DoS in Google Cloud's GraphQL API via Directive Overloading, a hard-coded secret, and exfiltrating Google Workspace info via prompt injection (access email, then a Markdown button whose URL leaks the email text).

ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications 
This sounds more hype-y than it really is in my opinion. Basically the authors tested prompts that caused models to “replicate the input as output (replication) and engage in malicious activities (payload).”

They tested “Morris II” (I’m doing air quotes right now and making a serious face), the malicious prompt, in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box access), using two types of input data (text and images), against three different models (Gemini Pro, ChatGPT 4.0, and LLaVA).

Misc

microsoft/Security-101
Seven lessons from Microsoft to kick-start your cybersecurity learning, each divided into short segments that take 30-60 minutes. It covers basic concepts (the CIA triad, the difference between risks and threats), understanding what a security control is and what forms it takes, zero trust, concepts and themes across identity, networking, security operations, infra and data security, and more.

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler