[tl;dr sec] #34 - Game Theory + 0days, Kubernetes Hacking Practice, AWS Least Privilege
Game theory applied to finding and disclosing 0days, Kubernetes training labs, and rightsizing your AWS IAM policies into least-privilege Terraform.
Hey there,
I hope you’re doing well!
My response last week after hearing from readers who are also fans of musicals and plays:
The overlap of people into both security and musicals/theater
The last I’ll say about it (for now) is that the Hamilton movie, featuring the original cast, is coming to Disney’s streaming platform on July 3. Interestingly, Disney paid $75 million for the worldwide rights.
Note: this week’s issue is a bit longer than usual because I’ve included lengthy blurbs from an article I really liked at the bottom. It’s worth it.
📜 In this newsletter...
🔗 Links:
AppSec: macOS Calendar deep dive, finding secrets in Slack and the GitHub commit stream, mobile pen testing checklist, preventing SQLi in Django, new stuff from GitHub Satellite 2020
Web Security: Automating scans with Selenium + ZAP, proxying thick clients, 5 most common GraphQL vulnerabilities
Cloud Security: Privilege escalation in GCP, scan IAM usage patterns => create least privileged Terraform code, analyzing S3 bucket accesses through "access points," monitoring security-related AWS metrics, creating an AWS sandbox
Container Security: scan K8s clusters for risky RBAC permissions, intentionally vulnerable K8s cluster for practice, K8s Easter CTF source code, validate K8s policies using Conftest
Red Team: Netcat with firewall and IDS/IPS evasion and other useful features, hijacking library functions using `LD_PRELOAD`
Politics / Privacy: Being a trustworthy custodian of customer data, your WeChat messages are helping train the Chinese censorship machine, applications of behavioral psychology in military, industry, and politics.
📚 Game Theory: Why System Security Is Like Poker, Not Chess
Interesting thoughts on game theory, responsible disclosure, and national security by David Brumley.
AppSec
Exploring macOS Calendar Alerts: Part 1 – Attempting to execute code
NCC Group’s Andy Grant will take you on a glorious journey into Calendar.app and iCalendar RFCs. Fun fact: alerts for calendar events can be used to launch apps or run binaries, because why not 🤷‍♂️
PaperMtn/slack-watchman
“Slack Watchman is an application that uses the Slack API to look for potentially sensitive data exposed in your Slack workspaces.”
How I made 10K in bug bounties from GitHub secret leaks
Tillson Galloway has made nearly $10,000 reporting leaked secrets on HackerOne, and he’s released git-hound, a “batch-catching, pattern-matching, patch-attacking secret snatcher.” 👏
Mobile Application Penetration Testing Checklist
Reasonably detailed checklist for Android and iOS.
Preventing SQL injection: a Django author’s perspective
Jacob Kaplan-Moss, a co-creator of Django, on how to prevent SQLi in your Django apps. The post lists a number of API calls that are not safe by default, which is a good starting point for manual review if you’re an AppSec engineer or security consultant.
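To make the distinction concrete, here’s a minimal sketch (the model and field names are hypothetical, not from the post) contrasting raw SQL built with string formatting against the parameterized and ORM forms Django supports:

```python
# Hypothetical model/field names, for illustration only.
from myapp.models import User  # assumed example model

def find_users_unsafe(name):
    # UNSAFE: user input is interpolated directly into the SQL string.
    return User.objects.raw(
        "SELECT * FROM myapp_user WHERE username = '%s'" % name
    )

def find_users_safe(name):
    # Safer: pass query parameters so the driver handles quoting.
    return User.objects.raw(
        "SELECT * FROM myapp_user WHERE username = %s", [name]
    )

def count_users_safe(name):
    # ...or avoid raw SQL entirely and let the ORM build the query.
    return User.objects.filter(username=name).count()
```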
New stuff from GitHub Satellite 2020:
Codespaces - Basically, open a project in VS Code in your browser from its project page. Cool!
Insights - SDLC metrics like code review turnaround time, PR size, etc. for the purpose of understanding your processes and streamlining them.
Code and secret scanning beta - Scan every commit with CodeQL for vulnerabilities and hard-coded secrets.
I thought code scanning would be perceived as a big deal, but so far the response has mostly been “meh”; I’m not sure why.
Web Security
Automating security tests using Selenium and OWASP ZAP
Intro to automated vulnerability scans and their limitations, proxying Selenium tests through Zap, invoking authenticated active scans using Zap, and more.
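As a rough sketch of that workflow (not necessarily how the post wires it up; the target URL, proxy address, and API key below are placeholders), you can point a Selenium-driven browser at ZAP’s proxy and then kick off an active scan through ZAP’s Python API:

```python
# Sketch only: target URL, proxy address, and API key are placeholders.
import time

from selenium import webdriver
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

ZAP_PROXY = "http://127.0.0.1:8080"
TARGET = "http://example.test"

# Route the browser's traffic through ZAP so it records the session.
options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server={ZAP_PROXY}")
driver = webdriver.Chrome(options=options)

driver.get(TARGET)
# ... run your normal Selenium test steps here (log in, click through flows) ...
driver.quit()

# Ask ZAP to actively scan everything it saw during the test run.
zap = ZAPv2(apikey="changeme", proxies={"http": ZAP_PROXY, "https": ZAP_PROXY})
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)  # poll until the active scan finishes

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```

Because the scan only covers what the browser actually exercised, the better your Selenium coverage of authenticated flows, the more ZAP has to work with.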
Proxying Unaware Thick Clients
Some tips on how to proxy thick clients that ignore all system and environment variable proxy settings, including hooking network functions, firewall based traffic redirection, etc.
The 5 Most Common GraphQL Security Vulnerabilities
You can also test the attacks yourself: CarveSystems/vulnerable-graphql-api.
Inconsistent Authorization Checks - layers of resolvers tend to make doing authz correctly more complicated.
REST Proxies Allow Attacks on Underlying APIs - a malicious request may be able to modify the path or parameters to the backend API.
Missing Validation of Custom Scalars - e.g. a malicious input may be interpreted as a JS object rather than a primitive type.
Failure to Appropriately Rate-limit - loops in relationships between objects can lead to DoS.
Introspection Reveals Non-public Information - GraphQL introspection makes it easy to discover backup or admin functionality that was intended to be hidden (see the sketch below).
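For the introspection point in particular, it’s easy to see what a server exposes by sending the standard introspection query yourself. A minimal sketch, assuming a hypothetical local endpoint (e.g. the Carve Systems test API running locally):

```python
# Minimal introspection probe; the endpoint URL is a placeholder.
import requests

INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    mutationType { name }
    types {
      name
      fields { name }
    }
  }
}
"""

resp = requests.post(
    "http://localhost:4000/graphql",  # hypothetical endpoint
    json={"query": INTROSPECTION_QUERY},
    timeout=10,
)
resp.raise_for_status()

for t in resp.json()["data"]["__schema"]["types"]:
    # Hidden "admin"/"backup" types and fields tend to show up here.
    print(t["name"], [f["name"] for f in (t.get("fields") or [])])
```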
Cloud Security
Privilege Escalation in Google Cloud Platform – Part 1 (IAM)
More great work by Spencer Gietzen of Rhino Security Labs. See also: part 2 and the accompanying GCP privilege escalation scanner.
bridgecrewio/AirIAM
“AirIAM scans existing IAM usage patterns and provides a simple method to migrate IAM configurations into a rightsized Terraform plan. It identifies unused users, roles, groups, policies and policy attachments and replaces them with a Least Privileges Terraform code modeled to manage AWS IAM.” 🔥
IAM Access Analyzer flags unintended access to S3 buckets shared through access points
“With S3 Access Points, customers can easily manage shared data sets by creating separate access points for individual applications. Access points are unique hostnames attached to a bucket and customers can set distinct permissions using access point policies. To help you identify buckets that can be accessed publicly or from other AWS accounts or organizations, IAM Access Analyzer analyzes resource policies. Now, Access Analyzer analyzes access point policies in addition to bucket policies and bucket ACLs. Access Analyzer makes it easier to identify and remediate unintended public, cross-account, or cross-organization sharing of your S3 buckets that use access points.”
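If you’d rather pull Access Analyzer findings programmatically than click around the console, a rough boto3 sketch (it assumes an analyzer already exists in the account/region, and the filter key is an assumption on my part) looks like this:

```python
# Sketch: list active IAM Access Analyzer findings for this account/region.
import boto3

analyzer = boto3.client("accessanalyzer")

# Assumes at least one analyzer already exists in this account/region.
analyzer_arn = analyzer.list_analyzers()["analyzers"][0]["arn"]

paginator = analyzer.get_paginator("list_findings")
for page in paginator.paginate(
    analyzerArn=analyzer_arn,
    filter={"status": {"eq": ["ACTIVE"]}},  # only findings that still need attention
):
    for finding in page["findings"]:
        print(finding["resourceType"], finding["resource"], finding.get("isPublic"))
```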
Monitor and Notify on AWS Account Root User Activity and Other Security Metrics
Setting up logging/alerting for a personal AWS account on a budget using Terraform (repo). Creates a multi-region CloudTrail trail, connects it to a CloudWatch Log Group, and creates a number of metric filters and alarms that send SNS notifications for: root login, console login without MFA, actions without MFA, illegal use of a KMS key, use of a KMS key to decrypt, and changes to security groups, IAM, route tables, or NACLs.
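The post does all of this with Terraform; as a rough boto3 equivalent of just the root-login piece (the log group name, SNS topic ARN, and metric namespace below are placeholders, and the filter pattern is the common CIS-style one, not necessarily the exact pattern the repo uses):

```python
# Sketch: alarm on root-user activity found in CloudTrail logs.
# Log group name, SNS topic ARN, and namespace are placeholders.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "cloudtrail-log-group"  # the group your trail writes to
SNS_TOPIC = "arn:aws:sns:us-east-1:123456789012:security-alerts"

# Turn matching CloudTrail events into a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="root-account-usage",
    filterPattern='{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS '
                  '&& $.eventType != "AwsServiceEvent" }',
    metricTransformations=[{
        "metricName": "RootAccountUsage",
        "metricNamespace": "SecurityMetrics",
        "metricValue": "1",
    }],
)

# Alarm (and notify via SNS) whenever the metric is non-zero.
cloudwatch.put_metric_alarm(
    AlarmName="root-account-usage",
    Namespace="SecurityMetrics",
    MetricName="RootAccountUsage",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC],
)
```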
Creating an AWS Sandbox
How to create an AWS sandbox account that lets you experiment freely, with safeguards in place so a misconfiguration doesn’t open up a security hole in the account.
Container Security
cyberark/KubiScan
“A tool for scanning Kubernetes cluster for risky permissions in Kubernetes’s role-based access control authorization model. The tool was published as part of the Securing Kubernetes Clusters by Eliminating Risky Permissions research.”
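To get a feel for the kind of check a tool like this automates (this is not KubiScan itself, just a simplified sketch using the official Kubernetes Python client), you can enumerate which subjects are bound to cluster-admin:

```python
# Simplified sketch (not KubiScan): list subjects bound to cluster-admin.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name == "cluster-admin":
        for subject in binding.subjects or []:
            print(f"{subject.kind}/{subject.name} "
                  f"(namespace={subject.namespace}) via {binding.metadata.name}")
```

KubiScan goes much further (risky verbs, secrets access, pods running with privileged service accounts, etc.), but the core idea is walking RBAC objects like this.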
Bust-a-Kube
“An intentionally-vulnerable Kubernetes cluster, intended to help people self-train on attacking and defending Kubernetes clusters” by Jay Beale of InGuardians. I know what you’re thinking: “With a catchy name like that, surely they must have an in-theme accompanying music video.” And you’re right 🤣
NodyHub/k8s-ctf-rocks
The source code for the Kubernetes Easter CTF (http://k8s-ctf.rocks/). Contains a Vagrantfile that lets you set up the CTF locally.
Validate Kubernetes API Versions With Conftest
“Conftest is a utility to help you write tests against structured configuration data. For instance you could write tests for your Kubernetes configurations, Terraform code, Serverless configs or any other structured data.” This post describes writing validation policies for deprecated Kubernetes API versions.
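Conftest policies themselves are written in Rego; as a loose approximation of what such a policy checks (not Conftest, just an illustration), here’s a tiny PyYAML sketch that flags a couple of well-known deprecated apiVersion/kind combinations:

```python
# Rough approximation of the check (Conftest itself uses Rego policies).
import sys
import yaml  # pip install pyyaml

# A couple of well-known deprecated combinations; extend as needed.
DEPRECATED = {
    ("extensions/v1beta1", "Deployment"): "apps/v1",
    ("extensions/v1beta1", "Ingress"): "networking.k8s.io/v1",
}

def check(path):
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue
            key = (doc.get("apiVersion"), doc.get("kind"))
            if key in DEPRECATED:
                print(f"{path}: {key[1]} uses deprecated {key[0]}, "
                      f"use {DEPRECATED[key]} instead")

for manifest in sys.argv[1:]:
    check(manifest)
```

The nice thing about doing it in Conftest/Rego instead is that the same tooling then covers Terraform, Serverless configs, and anything else structured.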
Red Team
cytopia/pwncat
“Netcat on steroids with Firewall, IDS/IPS evasion, bind and reverse shell and port forwarding magic - and its fully scriptable with Python (PSE).”
Hijacking Library Functions and Injecting Code Using the Dynamic Linker
Nice overview of static vs dynamic linking and how to hijack library functions using LD_PRELOAD.
Politics / Privacy
Being a Trustworthy Custodian of Customer Data
While data may still be the new oil, businesses can no longer view it as an asset to be extracted and exploited. The reality is that no company owns customer data — rather, customers entrust them with it. Instead of thinking like “data owners,” companies must act as “data custodians” who protect personal information and use it only with a customer’s best interests in mind.
Some tactics to get started:
Create a monthly or quarterly data accountability report to promote transparency and peer-accountability around personal data use.
Celebrate teams that access the least amount of personal data to perform their jobs, and share their tips with other teams.
Pair employees when the most sensitive data must be accessed.
Establish an organization-wide privacy committee to make decisions about a company’s data culture and policies.
Drilling for data can seem like a limitless source of profit, but trust is the real resource that businesses should be mining. When a consumer shares their personal information, they should trust companies to use it with their best interests in mind. The organizations that succeed in the coming years will be those that build long-term, trusted relationships with customers by humanizing data and taking responsibility as its custodians.
We Chat, They Watch: How International Users Unwittingly Build up WeChat’s Chinese Censorship Apparatus
Lengthy report by The Citizen Lab:
“We present results from technical experiments which reveal that WeChat communications conducted entirely among non-China-registered accounts are subject to pervasive content surveillance that was previously thought to be exclusively reserved for China-registered accounts.
Documents and images transmitted entirely among non-China-registered accounts undergo content surveillance wherein these files are analyzed for content that is politically sensitive in China.
Upon analysis, files deemed politically sensitive are used to invisibly train and build up WeChat’s Chinese political censorship system.”
Invisible Manipulators of Your Mind
A discussion of the behavioral psychology work by Amos Tversky and Daniel Kahneman (Thinking, Fast and Slow) applied to military, industry, and politics.
📚 Game Theory: Why System Security Is Like Poker, Not Chess
Interesting thoughts on game theory, responsible disclosure, and national security by David Brumley.
Cyber offense and defense isn’t chess. It’s a game of poker. In chess, you have complete visibility into your opponent’s position and moves. In poker, you lack that visibility, and the same is true in the cyber realm. You don’t have certainty in what exploits your adversary knows about, whether they are using an exploit they disclosed, and whether your zero-day is really a zero-day globally.
Strategy means you’ve thought through the larger picture of various alternatives, risks, and rewards. You’ve built a game, not in the playful, fun sense, but one that allows you to reason through actions, incentives, and possibilities.
The Zero Day Game
Imagine you found a new 0day. You can either disclose it or create an exploit and attack others.
1. Exploit only. There is value in offense and you get some utility such as access, intelligence, or control.
2. Disclose only. We disclose because it leads to a patch, network filter, or other remediation. Disclosure isn’t defense; it’s just a precursor to defense. After disclosure, a patch or remedy is created and eventually rolled out. The remedy is what takes the vulnerability off the table for an attacker.
3. Exploit then disclose. Why not exploit a few systems, hope not to get caught, then disclose? If your few exploits are never noticed, you could still be seen as the good guy.
4. Disclose then exploit. The disclosure opportunity window is the time between when a vulnerability is disclosed and when the remedy is protecting a system. We know the opportunity window can be very large. Using known vulnerabilities works (at least in part) because we can’t patch everything immediately.
5. Stockpile. You take no action and keep the information to yourself to decide at some later time. But zero-days have a shelf life that expires when someone else finds the same vulnerability. You don’t know how long it would take, but it’s a fair bet that any zero-day you find will eventually be discovered by others, as long as the software stays relevant.
The Actions We Take Are in the Context of the Overall World
In game theory, we create a game state to capture that context. Game theory also asks us to be formal and provide the utility — positive or negative — for each action. Ask anyone in risk assessment; if you don’t have a cost for an action, you can’t assess the risk. The nice thing about game theory is you can use different utility functions to understand how they change the outcome. For example, how does a defender’s strategy change if the cost of being exploited is $10 vs. $1 million?
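As a toy illustration of that last point (all of the utilities below are made-up numbers, not anything from the article), you can plug different costs into a tiny model and watch the preferred action flip:

```python
# Toy model with made-up utilities; it only illustrates how changing the cost
# of being exploited changes which action a rational player prefers.
def expected_utility(action, cost_of_being_exploited, p_rediscovery=0.5):
    gain_from_offense = 50  # hypothetical value of access/intel/control
    if action == "exploit":
        # You gain from offense, but stay exposed if someone else finds the bug.
        return gain_from_offense - p_rediscovery * cost_of_being_exploited
    if action == "disclose":
        # No offensive gain, but the remedy removes most of your exposure.
        return -0.1 * cost_of_being_exploited
    if action == "stockpile":
        # No gain yet; exposure persists until someone else finds it.
        return -p_rediscovery * cost_of_being_exploited
    raise ValueError(action)

for cost in (10, 1_000_000):
    best = max(("exploit", "disclose", "stockpile"),
               key=lambda a: expected_utility(a, cost))
    print(f"cost of being exploited = ${cost:,}: prefer to {best}")
```

With a $10 downside the toy model prefers exploiting; at $1 million it flips to disclosing, which is the whole point of being explicit about utilities.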
Some lessons and properties:
1. Measure how quickly you get remedies deployed for disclosed vulnerabilities. Knowing you are running vulnerable software isn’t enough. What matters is how quickly you can deploy a remedy or patch. That is something you can measure and optimize.
2. Small powers — or those with less to lose — are more prone to attack. This checks the box that the model represents reality in this dimension.
3. Responsible disclosure has two worlds: one where the vendor fixes the software and one where they don’t. If the vendor never fixes the issue (cough cough, IoT), does it help? In the near term, it gives bad guys information on where to look. On the other hand, the traditional argument is it helps provide public awareness on who is a “responsible” vendor and who is not. Beyond that we can start to model such scenarios. What is the (negative) utility in shaming an unresponsive vendor? How bad does it need to be for them to take action? Would a rational person simply ignore the vulnerability knowing it kicks the can down the road?
4. The player that finds the zero-day has the choice. If you don’t spend time finding vulnerabilities in your own software and supply chain, your strategy is by definition reactive. I’d also add if you don’t use techniques, at the very least, as comprehensive as an adversary (e.g., criminals — if you are a business), you are also choosing to be reactive.
Zero-Day Is Really Zero-Disclosure
The term “zero-day vulnerability” is a bit of a misnomer. If you find a previously unreported vulnerability, that doesn’t mean no one else knows about it. What it means is that no one else has publicly disclosed it.
The method you used to find the vulnerability can change the probability that your opponent also finds the vulnerability. The time it takes to find a vulnerability relates to how easy or difficult the vulnerability is to discover.
If you have a super-secret technique that’s able to find a vulnerability that cannot be found using commonly available tools, that vulnerability likely has a longer shelf life before someone else discovers it.
You can also start to estimate how many new exploits your adversary may have. For example, Google has reported over 3,849 new security-critical bugs using their oss-fuzz infrastructure over three years, which works out to about 3.5 per day. Think about it: Google, statistically, will find 28 new security issues between Christmas and New Year’s. Google has nation-state offensive capabilities. Yes, weaponization takes more time and not all 3.5 vulnerabilities per day can be weaponized, but you get the gist.
Ricochet Attacks and the Glass House
Exploits are bits and can be copied. What if you got really good at ricochet, i.e., capturing an exploit used against you and firing it right back? That changes the strategy in game models. Interestingly, it can provide a real incentive for everyone not to attack.
What is interesting about ricochet is that it incentivizes peace even when there is a vulnerability. If Red and Blue have an equal number of systems, and both have ricochet, neither should attack. It’s like the old saying: those who live in glass houses should not throw stones.
To me, the framework suggests the US is behaving rationally. They likely have the most to lose if someone else finds and weaponizes a vulnerability. Rationally (not just ethically), it makes sense to put their thumb on the disclosure side of the scale.
Imagine a crazy world where Russia simply said, “If I see a cyberattack, I will ricochet the same attack against every vulnerable computer in Israel.” That would incentivize Israel to not just keep the peace with Russia, but also incentivize Israel to pressure allies to not attack as well. It would also guide national policy (e.g., getting really good at ricochet).
Even if you can’t ricochet, the game theory suggests you should disclose not just vulnerabilities you find, but also those launched against you. Attack/defense hacking competitions teach us the best thing to do is attack the weakest player first. If you use an exploit against a weak player and they detect it, you know not to use it against a strong player. That doesn’t mean a stronger player wouldn’t detect it as well, but it does provide some information.
If you disclosed any attack on your network, especially if you disclosed a new zero-day, you could be disincentivizing attackers. It would make sense, at least, that they don’t attack you first but someone else.
For example, suppose you can invest in a really deep static analysis tool that highlights buggy lines of code and identifies 100% of all flaws, but whose reports are difficult to act on. Is that deep analysis really benefiting you compared to something less deep but more actionable? The goal is typically not to find flaws, but to reduce the window from when a vulnerability is introduced to when a patch is fielded. Think through all the incentives that go into such a program.
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I'd love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate it if you'd forward it to them 🙏
Thanks for reading!
Cheers,
Clint