This page lists a number of resources (books, videos, links, and tools) for topics we find interesting.

Currently the primary focus is static analysis, but check back later for links about other automated bug finding techniques (fuzzing, symbolic execution, model checking, etc.), DevSecOps, and more.

Learning about Static Analysis

There are a number of software engineering and security conferences where relevant static analysis research is regularly published.

Software engineering

  • ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL)
  • ACM/IEEE International Conference on Software Engineering (ICSE)
  • ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE)
  • ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)

Security

  • CCC
  • BlackHat
  • OWASP AppSec Conferences


There’s a Foundations of Programming Languages lecture series from SEPL Goethe University Frankfurt that discusses some core static analysis concepts and principles in short, fairly accessible videos.

Courses and Papers

On the industry, applied side, see the following whitepaper from Jeremy Boone of NCC Group: Best Practices for the use of Static Code Analysis within a Real-World Secure Development Lifecycle.

On the academic side, Monica Lam, one of the authors of the Dragon Book, teaches the “CS243: Program Analysis and Optimization” class at Stanford. She’s kindly put the slides on the course website.

Rolf Rolles ( @RolfRolles ) has built an impressive career around combining program analysis and reverse engineering. His blog posts are absolutely worth reading, and his Program Analysis Reading List has an excellent set of links to books, PhD theses, and tools to review.

Doing Static Analysis


Tools for Security Bugs

We’ve found the following list of tools useful for finding bugs, or at least useful to review as reference implementations.

Brakeman is a static analysis tool for Ruby on Rails apps. It's probably the only static analysis tool we've consistently heard good feedback about from security engineers and pen testers everywhere (homepage, source code).
  • We believe this is because Brakeman doesn't try to do fancy static analysis; it focuses on classes of problems that matter and can be detected in a high-signal, low-noise way.
SpotBugs, with the find-sec-bugs extension, is a popular tool for analyzing Java code.
Puma Scan can analyze C# source code either in an IDE or in a continuous integration server such as Jenkins.
Bandit uses AST matching to find security issues in Python code.
While no longer maintained, PyT provides some good examples of doing static taint analysis on Python web apps.
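Bandit's core technique, walking a parsed AST and matching suspicious node patterns, is easy to illustrate. Here's a toy sketch of the idea (not Bandit's actual implementation): flag every call to the builtin eval().

```python
import ast

# Toy AST-matching check: walk a parsed module and flag calls to eval(),
# which can execute attacker-controlled expressions. Real tools like
# Bandit register many such node-pattern checks; this is only the idea.
SOURCE = """
x = input()
y = eval(x)      # dangerous: executes arbitrary expressions
z = len(x)       # fine
"""

def find_eval_calls(source):
    """Return the line numbers of every call to the builtin eval()."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

print(find_eval_calls(SOURCE))  # → [3]
```

Working on the AST rather than raw text is what lets this kind of check avoid false positives on, say, the string "eval" appearing in a comment.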

Dlint, from Duo Labs, is a linter for checking that Python code follows security best practices.

Pyre, from Facebook, is a performant static type checker for Python.
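A couple of the Python tools above perform static taint analysis: tracking data from untrusted sources (like request parameters) to sensitive sinks (like SQL execution). Here's a heavily simplified sketch of the idea, using a made-up three-statement program representation (nothing like any real tool's internals):

```python
# Minimal sketch of the source->sink idea behind static taint analysis.
# A variable is tainted if it is a source or derives from a tainted
# variable; report when tainted data reaches a sink.
SOURCES = {"request_param"}          # untrusted input
SINKS = {"run_sql"}                  # sensitive operation

# program statements as (assigned_var, input_vars_used, sink_called_or_None)
program = [
    ("user", {"request_param"}, None),   # user = request_param
    ("query", {"user"}, None),           # query = "..." + user
    (None, {"query"}, "run_sql"),        # run_sql(query)
]

def analyze(program):
    tainted = set(SOURCES)
    issues = []
    for target, inputs, sink in program:
        flows = inputs & tainted
        if sink and flows:
            issues.append((sink, sorted(flows)))
        if target and flows:
            tainted.add(target)   # taint propagates through assignment
    return issues

print(analyze(program))  # → [('run_sql', ['query'])]
```

Real tools additionally model sanitizers (which clear taint), control flow, and calls across functions, which is where most of the difficulty lies.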

Klee is “a symbolic virtual machine built on top of the LLVM compiler infrastructure” that can handle C and C++ code.
  • The original Klee paper demonstrated how Klee could automatically generate high coverage test cases for GNU coreutils, in a number of cases with higher coverage than developer-written tests. In this paper, Klee ended up finding 56 serious bugs, including 3 that had been present but not found via other means for over 15 years.
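The branch-exploration idea at the heart of symbolic execution can be sketched in a few lines: treat the program as a tree of branches over a symbolic input, and collect the path constraints under which each outcome is reachable. This is a toy illustration only; Klee operates on LLVM bitcode and discharges constraints with a real solver.

```python
# Toy symbolic executor: a "program" is either an outcome string (a leaf)
# or a branch node (condition, true_subtree, false_subtree). Explore every
# path, recording the constraints under which each outcome is reached.
PROGRAM = ("x > 10",
           ("x < 20", "bug", "big"),   # taken when x > 10
           "ok")                       # taken when x <= 10

def explore(node, constraints=()):
    if isinstance(node, str):                # leaf: an outcome
        return [(constraints, node)]
    cond, if_true, if_false = node
    return (explore(if_true, constraints + (cond,)) +
            explore(if_false, constraints + ("not " + cond,)))

for path, outcome in explore(PROGRAM):
    print(" and ".join(path), "->", outcome)
# x > 10 and x < 20 -> bug
# x > 10 and not x < 20 -> big
# not x > 10 -> ok
```

A solver would then turn each satisfiable constraint set into a concrete test input, which is how Klee generates its high-coverage test cases.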
Sobelow is a static analysis tool for web applications written in Elixir that use the Phoenix web framework.
SonarQube is a static analysis tool with plugins for a number of different languages. The core is open source, but you have to pay for some of the language support (see supported languages here, and pricing info here). The paid versions offer CI/CD integration and other enterprise-y features.
  • Honestly, we haven’t played with SonarQube much, so we can’t vouch for its usefulness. Many of the checks seem to be code quality / code smell rather than security, but there are some security rules, and you can write your own.
FlowDroid uses static analysis to track the flow of sensitive information in Android applications. It was created by a number of academics at Paderborn University and TU Darmstadt.
  • One quite interesting thing about FlowDroid is that it accurately models Android's lifecycles and callbacks, which is challenging to do.
  • FlowDroid is also context-, flow-, field-, and object-sensitive, which makes it a useful reference implementation to review.

Tools for Correctness and Robustness

These tools focus on correctness and robustness rather than security, unlike the tools in the list above. They can aid developers in writing more robust code, and their implementations can be useful to review for security professionals wishing to perform similarly sophisticated static analyses.

Infer is an advanced static analysis tool from Facebook that can analyze Java as well as C/C++/Objective C. Infer checks for:
  • Java/Android: null pointer exceptions, resource leaks, annotation reachability, missing lock guards, and concurrency race conditions.
  • C/C++/Objective C: null pointer dereferences, memory leaks, coding conventions, and unavailable APIs.
Error Prone
Google’s Error Prone analyzes Java code, can hook into your build system, and even suggest fixes. It has an impressive array of bug patterns it looks for, but similar to Infer, they’re mostly around correctness, not security.
Clang Static Analyzer
The Clang Static Analyzer is an open source, path-sensitive symbolic execution tool built on clang and LLVM that can analyze C, C++, and Objective C code.
  • It’s integrated into Xcode or can be run as a standalone CLI tool.
  • Like many LLVM things, it’s largely driven by Apple.
  • Similar to Infer and Error Prone, it tends to focus on correctness and robustness, though it has some security-specific checks (like looking for unsafe function usage, such as gets or bcopy or calling set[e|g|eu]id without checking the return). See here for the available checkers.
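A denylist check like the unsafe-function lookups mentioned above can be approximated crudely in a few lines. A real checker matches calls on the AST rather than raw source text, but the essence is just flagging calls to known-dangerous APIs:

```python
import re

# Crude sketch of a "banned function" check in the spirit of the Clang
# Static Analyzer's security checkers: flag calls to known-unsafe C APIs.
# Regexes over raw text are only an approximation of an AST-based check.
BANNED = {"gets", "bcopy", "strcpy", "sprintf"}
CALL_RE = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")

C_SOURCE = """\
char buf[16];
gets(buf);              /* unbounded read */
strcpy(buf, argv[1]);   /* unbounded copy */
snprintf(buf, sizeof buf, "%s", argv[1]);  /* ok */
"""

def check(source):
    """Return (line_number, banned_function) pairs."""
    return [(i, m.group(1))
            for i, line in enumerate(source.splitlines(), 1)
            for m in CALL_RE.finditer(line)]

print(check(C_SOURCE))  # → [(2, 'gets'), (3, 'strcpy')]
```

Note that the `\b` word boundary keeps `snprintf` from matching the `sprintf` pattern; an AST-based checker avoids this whole class of textual pitfalls for free.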

Tool Lists

For an utterly massive list of static analysis tools, not all related to security, see the awesome-static-analysis repo on GitHub.


Frameworks

There are a number of static analysis frameworks that can be useful to build on, or to review for insights into how people tackling this problem space structure their code, the implementation challenges they face, and so on.

Warning: all of these frameworks are pretty academic, so the barrier to entry is not low, and the documentation can leave much to be desired.

Soot is “a framework for analyzing and transforming Java and Android applications” originally developed by the Sable Research Group of McGill University.
  • Soot can parse Java bytecode and source code up to Java 7, including languages that compile to Java bytecode (e.g. Scala, Clojure, …)
  • Analyses provided include: call-graph construction, points-to analysis, def/use chains, template-driven intra- and inter-procedural data-flow analysis.
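To give a flavor of one of those analyses, here is a minimal def/use-chain computation over a straight-line program, using a made-up statement representation (not Soot's Jimple IR; real frameworks handle full control-flow graphs, where a use may have multiple reaching definitions):

```python
# Minimal def/use chains: for a straight-line program, link each use of
# a variable to the definition that reaches it. The straight-line case
# is easy because exactly one definition reaches each use.
# statements: (line, defined_var_or_None, set_of_used_vars)
STMTS = [
    (1, "a", set()),          # a = 1
    (2, "b", {"a"}),          # b = a + 2
    (3, "a", {"b"}),          # a = b * 3   (redefines a)
    (4, None, {"a", "b"}),    # print(a, b)
]

def def_use_chains(stmts):
    last_def = {}                 # var -> line of its most recent definition
    chains = []                   # (use_line, var, def_line)
    for line, defined, used in stmts:
        for var in sorted(used):
            chains.append((line, var, last_def[var]))
        if defined:
            last_def[defined] = line
    return chains

print(def_use_chains(STMTS))
# → [(2, 'a', 1), (3, 'b', 2), (4, 'a', 3), (4, 'b', 2)]
```

With branches and loops this becomes a fixed-point computation over the control-flow graph, which is exactly the machinery frameworks like Soot provide so you don't have to write it yourself.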
WALA, the IBM T. J. Watson Libraries for Analysis, provides static analysis capabilities for Java bytecode and related languages as well as JavaScript (homepage, source code).
  • Warning: when Clint used WALA during grad school, it was pretty tough to use. We haven’t had time to review it recently in detail, so perhaps the documentation is better now. At the very least, there appears to have been some WALA-based talks (1) and workshops (1, 2, 3) that have been recorded.
Phasar is a relatively new LLVM-based static analysis framework for C and C++ code. Its focus is data-flow analysis, but it’s designed to be flexible so it can be leveraged for other analyses.


Benchmarks

A common challenge when developing or evaluating automated bug finding tools is analyzing their performance:

  • Overall, how well does a given tool perform?
  • When does it do well? When does it do poorly?
  • How does one tool compare against another?

Several groups have created benchmarks to aid in evaluating tools, which are largely composed of sample vulnerable code snippets that demonstrate how a type of vulnerability may manifest.

In theory, these are modeled after how bugs appear in the real world. But like any benchmark and metric, they are not a guarantee that a tool will be effective in your specific situation.

Here are a few benchmark projects:

  1. The NIST SAMATE (Software Assurance Metrics And Tool Evaluation) project has some example vulnerable code aimed at providing a benchmark for static analysis as well as other types of tools.
  2. The OWASP Benchmark project is “a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services.” It can be used for SAST, DAST, and IAST tools.
  3. Google released Firing Range: “a test bed for web application security scanners, providing synthetic, wide coverage for an array of vulnerabilities.” From a quick perusal of the source code, it appears to support several types of XSS, CORS issues, and other vulnerabilities.
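When scoring a tool against a benchmark like these, results are typically summarized as precision (how many reported findings are real) and recall (how many real bugs were found). A minimal sketch of that scoring, over hypothetical test-case IDs:

```python
# Score a tool against a benchmark with labeled test cases:
#   precision = TP / (TP + FP)   -- how trustworthy each finding is
#   recall    = TP / (TP + FN)   -- how many real bugs were caught
def score(expected, reported):
    """expected / reported are sets of test-case IDs that are truly /
    reportedly vulnerable."""
    tp = len(expected & reported)          # true positives
    fp = len(reported - expected)          # false positives
    fn = len(expected - reported)          # false negatives (missed bugs)
    precision = tp / (tp + fp) if reported else 0.0
    recall = tp / (tp + fn) if expected else 0.0
    return precision, recall

truly_vulnerable = {"case1", "case2", "case3", "case4"}
tool_findings = {"case1", "case2", "case9"}   # case9 is a false positive

precision, recall = score(truly_vulnerable, tool_findings)
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.67 recall=0.50
```

The tension between these two numbers is the trade-off discussed throughout this page: tools like Brakeman deliberately sacrifice recall on hard-to-detect bug classes to keep precision, and thus developer trust, high.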

Commercial Tools

There are a plethora of commercial static analysis tools (static application security testing, or SAST), including a number of older, more established players.

Most of these tools support a variety of languages, though RIPS only supports PHP and Java currently.

There are a few newer companies with some interesting tech, including ShiftLeft and Semmle.