
Mechanizing the Methodology: How to find vulnerabilities while you're doing other things

Daniel Miessler describes how to automate your OSINT and recon processes so you can find more and better bugs with less manual effort.

Daniel Miessler, Founder, Unsupervised Learning twitter, linkedin
Red Team Village, DEF CON. August 8th, 2020.

You can also check out Daniel’s excellent blog at danielmiessler.com.

In this talk, Daniel Miessler discusses:

  • The philosophy behind how he automates his recon and OSINT

  • Several concrete examples of useful automations he’s built

  • How these individual automation building blocks can be combined into powerful and complex chains

  • How to set up lightweight, continuous scanning so you can focus your manual time on interesting targets

  • Where to learn more

Philosophy / Methodology

The core philosophy behind Daniel’s automation approach is the Unix Philosophy:

* Write programs that do one thing and do it well.

* Write programs to work together.

* Write programs to handle text streams, because that is a universal interface.

Rather than creating one complex, comprehensive tool that does many things, Daniel creates a series of small, self-contained scripts that each do one thing well, such as finding the subdomains of a given domain.

Each utility is designed to be composable: its output can be easily fed into another utility as input. Input formats are generally simple: text files with one IP address, domain name, etc. per line.
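As a minimal sketch of why that one-item-per-line convention pays off (the file names and domains here are illustrative, not from the talk): diffing two runs of a subdomain module to surface newly appeared hosts becomes a single coreutils call.

```shell
# Two runs of a hypothetical subdomain module, one name per line.
# sort -u both dedupes and produces the sorted input that comm requires.
printf 'a.example.com\nb.example.com\n'                | sort -u > old.txt
printf 'b.example.com\nc.example.com\na.example.com\n' | sort -u > new.txt

# Lines only in new.txt, i.e. subdomains that appeared since the last run.
comm -13 old.txt new.txt    # prints: c.example.com
```

Because the result is itself one item per line, it can be piped straight into the next utility in the chain.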

Another interesting aspect of Daniel’s approach is that each utility is designed to answer a question, for example: “Which of these IPs are running web servers?”
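A hedged sketch of what such a question-answering module might look like (the function names and the curl-based probe are assumptions for illustration, not Daniel’s exact tooling):

```shell
#!/usr/bin/env bash
# Hypothetical module answering: "Which of these hosts are running web servers?"
# Reads hosts one per line on stdin; prints only the ones that respond.

has_webserver() {
  # Minimal probe: succeed if an HTTP request to the host gets any response.
  curl -s -o /dev/null --max-time 3 "http://$1/"
}

filter_webservers() {
  while read -r host; do
    has_webserver "$host" && echo "$host"
  done
}

# Usage: filter_webservers < hosts.txt > webservers.txt
```

The module stays small and single-purpose: it takes the standard one-host-per-line format in and emits the same format out, so it slots into any chain.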

But what should these individual scripts look like?

There are a few primary approaches one can take, and each has fundamental tradeoffs.

Frameworks (e.g. Amass, Intrigue, Spiderfoot)

  • Pros: These are very powerful and can automate a number of steps and tasks for you, saving you time.

  • Cons: They abstract steps from you so it can be unclear how they’re doing certain tasks.

    • This is important not just for your own personal learning: there are often subtle tradeoffs between different ways of accomplishing the same task (e.g. in speed, accuracy, or completeness), and when you don’t know how your framework is accomplishing what you ask of it, you have blind spots.

Custom (writing custom code from scratch)

  • Pros: You know exactly what your tools are doing and how. No magic.

  • Cons: Doing this takes a massive amount of upfront time and effort.

Hybrid (leverage existing utilities, but wrap and organize them in a custom way)

  • Pros: Highly customized for your workflow and you’re building a solid foundation - you know all the tradeoffs being made under the hood.

  • Cons: Takes a bit of expertise and custom glue.

Daniel focuses on the Hybrid approach and recommends you do too.

Recon Building Block Examples

After describing the thought process and philosophy behind his approach, Daniel gives a number of concrete examples.

Chaining Modules Together

Alright, so far we’ve built some interesting standalone primitives, building blocks that each answer an interesting question about a target.

The true power of this approach comes from combining these utilities to answer complex questions.

This is a simplified model. In practice you might have a number of modules that contribute sources and others that do validation or clean up the results of other modules.

Now turn this idea up to 11: for example, you could have one module whose output feeds 10 other modules.

These modules on the right can then gather additional info and feed into each other.
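A toy sketch of that fan-out shape, where the “modules” are stand-in shell functions rather than real recon tools (all names and domains are illustrative):

```shell
#!/usr/bin/env bash
# One source module feeds several consumer modules.
# The functions below are placeholders standing in for real utilities.

enum_subdomains()   { printf 'app.example.com\nmail.example.com\n'; }
check_webservers()  { grep '^app\.'; }    # pretend-probe: keep web hosts
check_mailservers() { grep '^mail\.'; }   # pretend-probe: keep mail hosts

enum_subdomains > hosts.txt                  # the source module runs once...

check_webservers  < hosts.txt > web.txt  &   # ...then multiple consumers
check_mailservers < hosts.txt > mail.txt &   # process its output in parallel
wait                                         # block until all consumers finish
```

Each consumer’s output file is again one item per line, ready to feed yet another layer of modules.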

Continuous Monitoring

There are fancy ways to do automation, but you can also just use cron, a simple utility available on any Linux box.
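For example, a single crontab line (the paths and script name here are illustrative) is enough to rerun a scan module every night:

```shell
# m h dom mon dow  command
0 2 * * * /opt/recon/enum-subdomains.sh >> /var/log/recon/subdomains.log 2>&1
```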

And you can not only automate the scanning, but also automatically send yourself alerts (e.g. over email or Slack) when something interesting has been discovered. You can even have separate Slack channels dedicated to your favorite targets.

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "New open PostgresDB @ 1.2.3.4"}' WEBHOOK_URL

Note: you’ll want to make sure that source modules have completed by the time the modules that consume their output are started.
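One lightweight way to get that ordering (script names here are hypothetical) is to chain producer and consumer in a single cron entry, so the consumer only starts if the producer finished successfully:

```shell
# One entry instead of two separately-timed ones: probe-webservers.sh only
# starts after enum-subdomains.sh has exited with a zero (success) status.
0 3 * * * /opt/recon/enum-subdomains.sh && /opt/recon/probe-webservers.sh
```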


Manually making sure your Linux scanning box (don’t run scans from your personal computer) has the right tools and dependencies installed can be a pain.

Instead, we want to leverage modern infrastructure as code practices to define the tools and dependencies our scanning box needs. Then, we can make local updates, apply them, and be up and running in a few minutes.

Daniel uses Terraform and Ansible to deploy boxes to AWS, but check out Ben Bidmead’s Axiom for a slick way to deploy a Linux stack to Digital Ocean.

Continuous Improvement: Learn, Adjust, Repeat

One powerful aspect of this approach is that we’re codifying our domain knowledge into tools, which we can then easily leverage forever going forward.

How often do you watch a conference talk and think, “Oh nice, that’s neat. I should use that some day…” only to then forget whatever the clever trick was a few weeks later?

With this approach, when you come across something new and helpful, you can codify that into a module. It becomes another tool in your toolbelt, making you ever more effective and leveraged over time.

Convert your knowledge and learnings into something tangible and repeatable.

For keeping up with the latest tips and tactics, Daniel recommends following:

You can also keep up with Daniel’s writing and research on his blog: danielmiessler.com.


And finally, Daniel ties it all together nicely:

Meta-reflections on This Talk

I'd like to take a moment to talk about this presentation, specifically, its content and structure.

Just as a filmmaker watches someone else’s film or a composer examines the composition of a song, I think it’s useful to reflect on well-constructed talks to see what you can learn.

Here are a few things I enjoyed about this talk.

First, I enjoyed Daniel's emphasis on his thought process and the methodology behind his approach. Talks that give you insights into how the speaker views the world and thinks through problems are especially valuable in my opinion. This allows you to generalize and extrapolate the concrete details given in the talks to situations and problem spaces beyond what the talk directly covered. Jason Haddix also does this really well with his Bug Hunter's Methodology (v4.01 slides) talks, which I highly recommend checking out.

Conversely, talks that are only “Here are some things I found or broke” may give you some point-in-time knowledge, but don’t necessarily make the listener better.

Second, I thought this talk had a nice structure: it flowed naturally and was easy to follow. I've attempted to sketch the overall structure below, making the flow and groupings visually clear by color.