I hope you’ve been doing well.
A few days ago I clicked on a LinkedIn notification and learned that apparently my AppSec Cali talk was discussed on the Application Security Weekly podcast, episode #94, which is pretty cool. Their discussion of AppSec Cali starts at 3:56.
This week I tried something new: I recorded an early version of an upcoming talk (an extended/improved version of the AppSec Cali one) and reached out to a number of sharp friends of mine at various companies for feedback. I always seek feedback, but usually I do it internally amongst my colleagues, and if I ask externally, it’s been just one or two people.
The people I asked have been incredibly generous with their time and have given me excellent feedback. The next version is going to be markedly better thanks to them, and with even more appendix slides 😅 Asking for and incorporating their feedback has been a fair amount of work, but I’d highly recommend trying this for your next talk.
Local Event: Detect complex code patterns using semantic grep
One thing I like about the Bay Area is you’re always finding out about cool new companies you’d never heard of that are on their way to #changingtheworld. This week, on 3 Startups and a Lie™:
- TOMS Shoes but for avocado toast - for every slice you buy, they donate one to a village in a third world country
- Flying in dateable women from NYC to SF
- A subscription service for quarters ($20 of quarters for only $26.99, what a steal!)
- Dating apps on the blockchain
One of my favorite up-and-coming companies is r2c, a startup building static analysis tooling designed to be fast and high signal. They’ve built a tool, sgrep, that lets you match interesting code patterns without writing messy abstract syntax tree (AST) visitors or using overly heavyweight tools. Think of it like a code- and dataflow-aware grep.
r2c is having a meetup to discuss sgrep and other cool program analysis topics on Monday, Feb 10 (Eventbrite). Hope to see you there!
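To see why "no messy AST visitors" is a big deal, here's a minimal sketch (in plain Python, using the standard library's `ast` module) of the boilerplate it takes to find even a trivial pattern - every call to `exec()` - the hard way. This is illustrative only, not r2c's code; with a pattern-matching tool like sgrep the equivalent query is roughly a one-liner like `exec(...)`.

```python
import ast

# Finding every call to exec() with Python's ast module requires a full
# visitor class -- exactly the boilerplate that pattern-based tools aim
# to replace with a short, grep-like pattern.
class ExecCallFinder(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Match direct calls to the bare name `exec`
        if isinstance(node.func, ast.Name) and node.func.id == "exec":
            self.findings.append(node.lineno)
        self.generic_visit(node)

source = """
x = 1
exec("print(x)")
def f():
    exec("x += 1")
"""

finder = ExecCallFinder()
finder.visit(ast.parse(source))
print(finder.findings)  # line numbers of each exec() call
```

And this visitor only handles the syntactic case; it knows nothing about dataflow, which is where tools in this space add real value.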
📜 In this newsletter...
🔗 Links:
- Cloud Security: Security notes for the core AWS services, Azure security best practices from Microsoft, NSA on mitigating cloud vulns, Check Point finds critical bugs in Azure, training lab that simulates an attack in AWS, interview about Parliament
- Web Security: CSP evaluator, Dropbox's 5 favorite bug bounty bugs, Collaborator++, a service that live streams secrets pushed to GitHub
- Automatic Exploit Generation: Papers and presentations on automatically finding bugs + creating working exploits for them 🔥
- Politics: U.S. vs ISIS in cyberspace; Britain and Huawei, sitting in a tree, s.t.e.a.l.i.n.g national IP; new tech to help journalists detect doctored images, guns that guarantee hits, journalist election safety kit
- OSINT: The Courthouse News Service, a public dataset for coordinated misinformation campaigns on Facebook, theHarvester, and other tools
- Misc: Post exploitation scripts, Apple's endpoint security framework, security architecture anti-patterns, the impact of kindness and a life well lived
- Leading Indicators of a Great Info/Cybersecurity Program: How to assess the security posture of another company given limited time / resources
mykter / aws-security-cert-service-notes
“An all-in-one-place collection of security information about all of the core AWS services.” Some notes created while the author was studying for the AWS Certified Security - Specialty exam.
Azure Security Compass
Presentations (slides + video) by Microsoft on how to rapidly increase your Azure security posture. Includes a main presentation, top 10 Azure security best practices, a tracking worksheet, and cloud role tracking. (Thanks Marco Lancini)
Mitigating Cloud Vulnerabilities
Eight page PDF from the NSA covering common cloud components, cloud encryption and key management, cloud threat actors, and cloud vulnerabilities and mitigations (misconfiguration, poor access control, shared tenancy vulns, supply chain vulns).
Remote Cloud Execution – Critical Vulnerabilities in Azure Cloud Infrastructure
Solid work by the Check Point Research team in which they found some critical vulnerabilities in Azure Stack, including being able to access other tenants’ information. Nice write-up, and a good reminder that no target should be considered invulnerable, even things you think must have already been tested a million times.
Incident Response Generator
An application that simulates an attack on AWS infrastructure, built for the Securosis/Cloud Security Alliance Advanced Cloud Security Practitioner training class. Students run a CloudFormation template that sets things up, then they attempt to contain and respond to the simulated attack. (Thanks Marco Lancini)
The tool has continued to evolve from finding what are basically spelling mistakes to looking for privilege escalations and the ability to create custom auditors to identify concerns that are specific to a customer’s environment, such as which policies grant access to a specific sensitive bucket.
Parliament can also identify when a Condition is used that does not make sense for the given actions. That’s a much harder problem and requires a more in-depth data set of knowledge than any other tool uses. To get that data, I had to write a scraper for the IAM docs and have worked with the AWS security team, reporting over a hundred issues that no other tool was capable of finding in AWS’s own policies, in their docs, and elsewhere.
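The condition-vs-action check described above can be sketched in a few lines. This is a hypothetical illustration, not Parliament's actual implementation: the core idea is that each IAM action supports only certain condition keys, so a `Condition` on an unsupported key silently never restricts anything. The action-to-key mapping below is an assumed subset for demonstration; the real data, as the interview notes, comes from scraping the AWS IAM docs.

```python
# Hypothetical sketch of checking whether a Condition key actually
# applies to a given IAM action. The mapping is an illustrative subset;
# a real tool needs the full dataset scraped from AWS's documentation.
SUPPORTED_CONDITION_KEYS = {
    "s3:GetObject": {"s3:ExistingObjectTag", "s3:x-amz-server-side-encryption"},
    "ec2:RunInstances": {"ec2:InstanceType", "ec2:Tenancy"},
}

def find_mismatched_conditions(action, condition_keys):
    """Return condition keys that have no effect on the given action."""
    supported = SUPPORTED_CONDITION_KEYS.get(action, set())
    return sorted(k for k in condition_keys if k not in supported)

# An S3 condition key applied to an EC2 action never matches,
# so the policy is weaker than its author probably intended:
print(find_mismatched_conditions(
    "ec2:RunInstances", ["ec2:InstanceType", "s3:ExistingObjectTag"]))
```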
Content Security Policy Evaluator
Given a URL, displays its CSP header nicely color-highlighted and applies some light rules to check whether it follows best practices.
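For a flavor of what those "light rules" look like, here's a minimal sketch in Python - illustrative only, and far less thorough than Google's actual CSP Evaluator. It parses a policy string and flags a couple of well-known weaknesses (`'unsafe-inline'`, wildcard sources, missing fallback directives).

```python
# Toy CSP checker: parse a Content-Security-Policy header and flag a
# few common weaknesses. Real evaluators apply many more rules.
def check_csp(header):
    policy = {}
    for directive in header.split(";"):
        parts = directive.strip().split()
        if parts:
            policy[parts[0]] = parts[1:]

    findings = []
    for name, values in policy.items():
        if "'unsafe-inline'" in values:
            findings.append(f"{name} allows 'unsafe-inline'")
        if "*" in values:
            findings.append(f"{name} allows any origin (*)")
    if "object-src" not in policy and "default-src" not in policy:
        findings.append("no object-src or default-src fallback")
    return findings

print(check_csp("default-src 'self'; script-src 'self' 'unsafe-inline'"))
# → ["script-src allows 'unsafe-inline'"]
```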
Dropbox bug bounty program has paid out over $1,000,000
The Dropbox ProdSec team’s 5 favorite bugs from the past 5 years of submissions, including a shared link password bypass, Paper notification CSS injection, Gopher (the protocol) SSRF, app cache manifest, and ImageTragick.
Tool Release – Collaborator++
NCC Group’s Corey Arthur released Collaborator++, a Burp plugin that extends the built-in Collaborator’s existing functionality to ease finding complex out-of-band bugs (e.g. SSRF, XXE, blind XSS).
By searching Shodan.io for response headers sent by Burp Collaborator, NCC Group discovered the existence of 187 private Collaborator servers. 122 of which were externally pollable, many of which were configured with SSL certificates with common name attributes suggesting ownership by leading security companies and private researchers, highlighting the importance of an alternative security mechanism. This tool aims to secure Collaborator servers by providing an authenticated proxy for polling for Collaborator interactions, additionally and optionally enabling server owners to limit unauthenticated polling to the local network.
On average, I was finding and verifying secrets within 7 minutes of them being committed… around 50% of them were valid, meaning I could access the respective service using the captured credentials/keys.
What I wasn’t expecting to find was valid package manager API keys, i.e., npm for Node.js; PyPi for Python; and NuGet for C#. The total number of downloads for these packages is in the millions. And the majority of these keys had publishing permissions. Meaning a bad actor could theoretically embed malicious code into the packages, reupload them without detection, and potentially infect millions of devices.
In case you weren’t worried enough about supply chain attacks 😅 One cool feature is that he has code that will try to automatically verify if DB creds are valid and raise a GitHub issue if they are.
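The detection side of this kind of scanner is conceptually simple: regexes for well-known key formats. Here's a toy sketch - my own illustration, not the author's code - using two example patterns (AWS access key IDs and a GitHub-token-style prefix format). Real scanners add entropy checks and, as described above, then attempt to verify each hit against the issuing service.

```python
import re

# Toy secret scanner: regex patterns for a couple of well-known key
# formats. The AWS key below is Amazon's documented example value.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_for_secrets(text):
    """Return (pattern_name, match) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

diff = 'AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_for_secrets(diff))
# → [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```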
Automatic Exploit Generation
Automatic exploit generation is a neat field in which the goal is, given a program, to automatically find vulnerabilities and create working exploits for them. Work in this space often combines deep low-level knowledge (assembly, reverse engineering, exploit development) with traditionally more academic topics like program analysis and formal methods.
Gollum: Modular and Greybox Exploit Generation for Heap Overflows in Interpreters
CCS 2019 paper by Sean Heelan et al. (blog | slides)
We present the first approach to automatic exploit generation for heap overflows in interpreters. It is also the first approach to exploit generation in any class of program that integrates a solution for automatic heap layout manipulation. At the core of the approach is a novel method for discovering exploit primitives—inputs to the target program that result in a sensitive operation, such as a function call or a memory write, utilizing attacker-injected data. To produce an exploit primitive from a heap overflow vulnerability, one has to discover a target data structure to corrupt, ensure an instance of that data structure is adjacent to the source of the overflow on the heap, and ensure that the post-overflow corrupted data is used in a manner desired by the attacker. Our system addresses all three tasks in an automatic, greybox, and modular manner. Our implementation is called
Gollum, and we demonstrate its capabilities by producing exploits from 10 unique vulnerabilities in the PHP and Python interpreters, 5 of which do not have existing public exploits.
Greybox Automatic Exploit Generation in Language Interpreters, and Some Other Stuff
Slides from a lecture Sean Heelan gave, which includes a brief history of AEG, highlights trends, and discusses the CCS paper. You can see the bibliography of the talk here.
A Bootcamp for your Cloud
If you haven’t heard of it already, Sean Heelan and Halvar Flake started a company, optimyze.cloud, with a really interesting business model.
Basically, they examine your computing stack, find optimizations, help you implement them, then you pay them 50% of your savings over the next 2 years. That is, they only get paid if they save you money. I don’t have first hand experience with this service, but Sean and Halvar are both brilliant.
USCYBERCOM After Action Assessments of Operation GLOWING SYMPHONY
FOIA requests shedding some light on how the U.S. government is working to counter ISIS’s efforts online.
Britain Knows It’s Selling Out Its National Security to Huawei
Why let a company you’ve labeled a “high risk vendor” bid on up to 35% market share on your country’s 5G network? If you feel compelled to have test centers doing in-depth supply chain reviews to de-risk the situation, and previous testing found the examined code to be “dangerously sloppy,” maybe uhh just don’t?
The real reason for Britain’s nonexclusion of Huawei was kept under wraps by its government: fear of retaliation. After Brexit, London sees itself as dependent on Beijing’s goodwill. In an interview with the Global Times on Jan. 20, the Chinese ambassador to Britain made it clear that an exclusion of Huawei would severely damage economic and political relations. And for Johnson, the threats from Beijing—a government with expansive control over its national economy—were more credible than those of U.S. President Donald Trump’s administration.
Tool to Help Journalists Spot Doctored Images Is Unveiled by Jigsaw
Jigsaw, a company owned by Alphabet (Google’s parent company), has built some tech to help journalists detect when an image is likely fake or altered. It will likely not be available to the public.
Jigsaw also announced an interactive platform showing coordinated disinformation campaigns from around the world over the past decade. They include Ukrainian soldiers receiving targeted disinformation encouraging them to defect during the 2014 Russian annexation of Crimea; associates of President Rodrigo Duterte of the Philippines hiring “click armies” to write pro-Duterte comments and stories online; and a small-town California hospital hiring a private firm, Psy-Group, to influence public opinion about a contested seat on the hospital board.
This ‘Guaranteed Hit’ Tech Could Be Added to Army’s Next-Gen Squad Weapon
“…allows you to acquire, lock on and engage targets” using a weapon-mounted optic and special pistol grip that “allows the weapon to fire only when it’s a guaranteed hit.”
CPJ Emergencies has compiled a safety kit for journalists covering the 2020 election that includes information for editors, reporters, and photojournalists on how to prepare for assignments and how to mitigate digital, physical, and psychological risk.
Thanks InfoSecSherpa for the heads up on the first two links.
OSINT-y Goodness, #23 — Courthouse News Service
The CNS is a legal news service for civil litigation (not criminal law) that includes articles written by CNS staff about various court cases, including “Dingers” - up-to-the-minute updates about filings and motions made in court cases.
If you are looking for court documents, there are more straightforward sites like PACER or individual court websites you can search to get information more quickly. The value of the Courthouse News Service is the commentary provided by their writers and the free PDF links to court documents.
Facebook’s Coordinated Inauthentic Behavior - An OSINT Analysis
Fascinating article that examines public data Facebook has released about various misinformation campaigns on its platform. The authors manually extracted and centralized information from articles released on Facebook Newsroom and placed them in this GitHub repo. The site has interactive visualizations that allow you to slice the data by targeted audience, budget, keywords, and more.
Spiderfoot 3.0 Released
SpiderFoot (source code) is an OSINT collection and reconnaissance tool. The new version features a richer CLI; support for IPv6 addresses, phone numbers, email addresses, usernames, real names, and ASNs; and more new features and bug fixes.
laramies / theHarvester
Gathers emails, names, subdomains, IPs, and URLs using multiple public data sources.
Week in OSINT #2020–05
Mentions a few tools and approaches, including TrellOSINT (Google site:trello.com <key words>), NameCheckup (searches usernames across popular sites), and Google Open Directory Search from ex16x41’s OSINT repo on GitHub.
mubix / post-exploitation
A collection of post-exploitation scripts, commands, and binaries for msf4, win32bins, Linux, macOS, and BSD.
Detection Engineering using Apple’s Endpoint Security Framework
Post by SpecterOps’ Richie Cyrus on the collection and detection stages of the Funnel of Fidelity.
Clayton Christensen taught me patience, showed me love, then gave me confidence and hope
Harvard Business School professor Clayton Christensen, author of classics like The Innovator’s Dilemma and How Will You Measure Your Life? ( 👈 I’d highly recommend), passed away on January 23rd. This blog post describes the personal interactions and impact Clayton had on the author, which I found inspiring. Media nowadays likes to emphasize conflicted heroes, anti-heroes, etc. because it makes a compelling narrative. But genuine kindness is incredibly powerful and should be celebrated. It’s how I hope to live my life.
The Leading Indicators of a Great Info/Cybersecurity Program
Great Twitter thread by Phil Venables on assessing a company’s security program (largely quoted or paraphrased below). See tl;dr sec #12 for Phil on risk and tl;dr sec #17 for the art of influencing. If you’re not following Phil on Twitter, you’re missing out!
Phil: You can assess the security of suppliers, counter-parties, or companies you’re about to invest in with on-site reviews, security testing, people capability assessments, governance check-ups, etc. But often you have limited time, money, skills, or access. Here are some leading indicators that will help give you more than a cursory point-in-time view of their security:
- Accountable Executive. There’s a senior (in the org hierarchy) accountable leader for security, a CISO/other role - someone at an executive level clearly & indisputably on the hook - with the support of other management - for the effectiveness of the security program.
- Experience Depth. That leader and others (e.g. senior engineers, PMs, CIO, CTO, Chief Risk Officer, Head of Audit) have a depth of expertise built up over some years. Not just time served, the quality of experience is more important.
- High Reliability Organization. The organization has at least some of the qualities of a high reliability organization such as preoccupation with failure/incident learning. A signal of this could be as simple as how welcome your questions are.
- Independent Challenge. There is some organization (internal or external) that provides a regular independent view of security (be it a risk function and/or an audit function, or external counterparts) that report directly to the Board or one of its committees.
- Strategic Architecture. An enterprise architecture or framework that establishes a defensible environment, an approach for embedding controls in business and IT processes (ambient control) and an overall zeal for creating secure products by design (shift left).
- Transparency. They don’t keep you at a distance and their risk register, controls, incident and issue history is widely shared and discussed within the organization so they can improve. The same incidents / issues rarely recur - they learn and adapt.
- Preventative Maintenance. There is a tangible budget / plan for preventative maintenance - system improvements, end of life systems replacement, upgrades, technical debt pay down and so on.
- Extended Enterprise. They look at their business and technology processes from the customer (upstream), through their environment, to their supply chain (downstream) - even to their 4th parties (suppliers of suppliers).
Bottom line: Not only are leading indicators a potentially efficient way of assessing risk, they are perhaps more of a reliable indicator to assess the sustainability of an organization’s security program.
✉️ Wrapping Up
Have questions, comments, or feedback? Just reply directly, I'd love to hear from you.
If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏
Thanks for reading!

Cheers,