
Stepping Up Our Game: Re-focusing the Security Community on Defense and Making Security Work for Everyone

tl;dr In this Black Hat USA 2017 keynote, Alex Stamos discusses how the world has changed around the security community, some ways we’re focusing on the wrong things, and how we can do better.

Alex Stamos, CSO, Facebook
Black Hat USA 2017
💬 abstract 📹️ video

This talk is a nice blend of:

  • Reminiscing about the past, the early days of the security community.

  • A frank discussion of some ways the security community isn’t living up to its potential in making the world a better and safer place.

  • Some recommendations on what should be focused on going forward.

  • A rousing call to action and an optimistic vision of the future.

The World Has Changed

We’re no longer the upstarts. We’re no longer the hacker kids fighting against corporate conformity. My friends from that era are now CEOs, CISOs, VCs, and some of them have worked in the White House. We don’t fight the Man anymore. In some ways, we are the Man.

Times have changed, but the community and industry have not. We haven’t changed our attitude to reflect the responsibility that this new position puts on us.

Three Ways the Infosec Community Can Improve

1. We focus on complexity, not harm

We glorify the complexity of a hack, when in reality attackers will do the simplest thing that works.

The vast majority of actual human harm occurs outside the space we generally consider “infosec”: abuse, that is, using legitimate features to cause harm. Doxxing, for example.

The day-to-day issues that cause people’s privacy to be violated are generally not technically complex (e.g. password reuse). They’re the issues we’ve had for decades.

(Slide: zooming in on the small “Traditional Infosec” part of the triangle of harms)

2. Our field punishes imperfect solutions in an imperfect world

The modern technology landscape requires people to walk on tightropes, and we haven’t built users a safety net. “Don’t click that link/open that doc. Cert warning? Just use your knowledge of X509 to know if it’s safe.”

We need to put ourselves in the shoes of the people using our products. And not just our parents and our family members, but the billions of people who will be online in the next 20 years who have never been exposed to the Internet.

We have a problem with empathy: “I found where the problem is, it’s between the chair and the keyboard.” This attitude is dangerous, because it makes it easy to shift the responsibility for building trustworthy, dependable systems onto other people.

Every day we make billions of people walk these tightropes, and if they fall off, we say we can do nothing to help them.

This dismissiveness is a common affliction in the infosec community. But let’s be realistic: most people aren’t being targeted by advanced nation-state actors, and oftentimes making small compromises to make something massively more widespread (e.g. end-to-end encryption) can meaningfully raise the security bar.

Case Study: Infrastructure as a Service (IaaS)

During the first introduction of the "public cloud," there were a number of highly visible talks about bypassing security protections via incredibly technical attacks, for example, targeting GPU or hypervisor bugs. This research was valuable in that, in the long term, it made the cloud safer.

However, these issues were hyped in a way that left many businesses with the perception that using a public cloud wasn't safe, when in reality these cloud platforms' security mechanisms were generally good enough, and adopting IaaS could have been a major security and operational benefit for many companies.

Security researchers like to focus on interesting, deeply technical problems, and this distracted people from the real problems of the public cloud, which were simple issues such as:

  • People having credentials that are too powerful.

  • Not setting up good network isolation.

  • Burning API keys into software.
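The last bullet in particular lends itself to simple tooling. As a rough illustration (the patterns below are heuristic assumptions for this sketch, not any real secret scanner's ruleset), a few lines of Python can flag source lines that look like burned-in credentials:

```python
import re

# Heuristic patterns for credentials commonly "burned" into source code.
# Illustrative only: real scanners use far larger rulesets plus entropy checks.
AWS_KEY_ID = re.compile(r"AKIA[0-9A-Z]{16}")  # classic AWS access key ID shape
GENERIC_SECRET = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
)

def find_hardcoded_secrets(source: str) -> list:
    """Return the (stripped) lines of `source` that look like hardcoded credentials."""
    hits = []
    for line in source.splitlines():
        if AWS_KEY_ID.search(line) or GENERIC_SECRET.search(line):
            hits.append(line.strip())
    return hits
```

Running a check like this in CI is exactly the kind of unglamorous defensive work the talk argues deserves more attention than hypervisor-escape demos.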

Case Study: Rolling Out End-to-End Encryption in WhatsApp

When Facebook rolled out end-to-end encryption in WhatsApp for 1 billion users, they had to make some hard trade-offs to make it usable by the huge diversity of people who use the product.

Some people called these trade-offs an intentional backdoor. Eventually, over 70 experts chimed in to say the trade-offs were reasonable, but incidents like this keep people from attempting hard things and make it difficult to have an honest discussion about which trade-offs should be made.

Because Facebook is a massive company, it could weather this criticism. But Alex worries that smaller companies won't be willing to take risks like this, despite the potentially huge benefits, for fear that the trade-offs they had to make will be perceived as flaws or intentional backdoors.

3. We don’t engage the world effectively

This applies both at the level of individual developers and at a geopolitical level.

A recent intense period of debate about encryption was the Department of Justice vs. Apple case after the San Bernardino terrorist attack. While there was some good, well-reasoned criticism of the government’s views by advocacy groups, tech companies, and academic papers, there were many more hot takes on Twitter from security professionals calling government officials stupid and/or evil.

A number of infosec professionals assumed that if someone wants a solution to the “encryption problem,” it must be because they don’t understand how crypto works, don’t understand math, haven’t thought about it deeply, or are part of some mass surveillance conspiracy.

The infosec community’s views on encryption aren’t always obvious to people with different backgrounds. Coming from a different community, the trade-offs can look quite different.

As the Facebook CISO, part of Alex’s job is to go around the world and engage with government and law enforcement. Like infosec, law enforcement is kind of like a family, a profession with a real sense of community.

One conference he’s been to several times is the Crimes Against Children Conference. Not exactly the most uplifting subject matter, but it is uplifting to attend and see thousands of people who have dedicated their lives to protecting children against abuse in the physical world and online.

These are people just like us who believe they can leave the world a little bit better than they found it. They have seen something that is truly evil, and they’ve decided that they’re going to combat it.

As a community, we need to have empathy with people who disagree with us.

Put yourself in the shoes of someone whose job it is to put child molesters in jail, or to stop the growth of terrorist networks, and think about: what kinds of solutions might we be able to offer that don’t require backdoors, and don’t require us to violate some of the principles that we hold very dear?

Doing this will allow our voices to actually be heard and demonstrate that we are adults willing to engage in a difficult topic.

What the Infosec Community Should Do Going Forward

1. Focus more of our attention and innovation on defense

We’re not going to bug squash ourselves out of this situation. The only way systems will get better in the long term is by eliminating entire classes of bugs, by building architectures that are resilient to failure and fail gracefully, and by building relationships between the security side and the builder side so that we can move forward together.

Facebook is putting up $1 million in 2018 for the best new defensive research in the Internet Defense Prize.

2. We need to broaden what we consider our responsibility

A huge amount of harm comes from areas outside what we traditionally consider our domain, and unfortunately they don’t easily fall under other people’s domains either.

The best people to work on these problems are us, as we’ve spent decades thinking about how technology can be subverted and abused.

Alex and several of his colleagues wrote a whitepaper about Facebook’s observations about the U.S. 2016 election and more broadly what a modern information campaign looks like that attempts to subvert another country’s democracy using technical means.

To properly handle these threats, we need to leverage people’s expertise from other domains.

3. We need to support the diversity of people, backgrounds, and thoughts in the infosec community

The members of the security community need to be representative of the types of people we need to protect. This is the only way we can foresee the challenges people are going to have with technology and understand which solutions are actually possible in a specific cultural situation.

For example, when you’re doing election security, it’s incredibly useful to have people with degrees in international relations or foreign service, and the use cases and threat models of someone building a product in the Bay Area are totally different from those of someone using it in a small town in India.

Building a team with diverse backgrounds is key, because you never know what sort of problems you’re going to get into.

4. We need to retain the talent we already have

One way to create an environment that’s open and welcoming to diverse people and diverse viewpoints is to diversify the management team. This helps with finding talent, and it provides an example of how you can rise through the ranks even if you’re new to the field.

Every one of us has an important power - the power of inclusion.

By making people feel included in conversations and at cons, we build them up, giving them experiences and knowledge that’s beneficial for their career and makes them powerful and valuable to the world in the future.

Our collective behavior determines who feels welcome in the infosec community, and thus who our colleagues will be over the next several decades. Before making a snide comment to someone, consider whether it will make them feel like they don’t belong in our community.

Call to Action

The keynote ends with the following call to action (note: slightly edited/condensed):

I’d like us to focus on fixing. I’d like us to have empathy for the people who use the technology we build, and I’d like us to foresee the ways they may be harmed, and to move quickly to mitigate it.

I’d like us to put as much thought into: how do we eliminate entire classes of vulnerabilities? And not just into spectacular demonstrations on stage.

Be careful of how we talk to people in our companies and around the world, because it really does have an impact on our ability to be heard.

Let’s work to make this a community where everybody feels they can be a part of making the future more safe and secure.

Stay in Touch!

If you have any feedback, questions, or comments about this post, please reach out! We’d love to chat.