tl;dr Koen Hendrix describes analyzing the security maturity of Riot product teams, quantifies that maturity’s impact using bug bounty data, and shares one lightweight prompt that can be added to the sprint planning process to prime developers about security.
- Based on observing how development teams discuss security and interact (or don’t) with the security team, Koen groups dev teams into 4 security maturity levels.
- Teams at these maturity levels range from largely not thinking about security (Level 1), to having one or two security advocates (Level 2), to security being a consistent part of discussions but it’s not yet easy and natural (Level 3), to security consciousness being pervasive and ever-present (Level 4).
- To examine if a dev team’s level had a measurable impact on the security of the code bases they worked on, Koen analyzed Riot’s 2017 bug bounty data grouped by team maturity level. The differences were clear and significant.
- Compared to teams at Level 1, teams at Levels 2-4 had:
- A 20% / 35% / 45% reduced average bug cost
- A 35% / 55% / 70% reduced average time to fix
- The average issue severity found from internal testing was 30% / 35% / 42% lower
- Riot chose to focus on raising Level 1 and 2 teams to Level 3, as that yields the biggest security benefits vs effort required, makes teams’ security processes self-sustaining without constant security team involvement, and makes them more accepting of future security tools and processes provided by the security team.
- They did this by shaping development team behaviour, rather than purely focusing on automation and technical competencies and capabilities.
How to uplevel dev teams? During standard sprint planning, dev teams now ask the following prompt and spend 2-3 minutes discussing it, recording the outcomes as part of the story in Jira/Trello/etc.:
How can a malicious user intentionally abuse this functionality? How can we prevent that?
Though the dev team may not think of every possible abuse case, this approach is highly scalable, as it primes devs to think about security continuously during design and development without the security team needing to attend every meeting (which is not feasible).
Some final thoughts:
- The security level of a team influences how the security team should interact with them.
- If the majority of your teams are Level 1 and 2, rolling out optional tooling and processes isn’t going to help. First, you need to level up how much they care about security.
- Work with Level 3 and 4 teams when building new tooling to get early feedback and iterate to smooth out friction points before rolling the tooling out to the rest of the org.
In Search of the Magical Unicorn Dev Team
Like all hero’s journeys, whether it’s from Joseph Campbell or a league of err… legends, this story starts with a goal.
At Riot, some development teams get security right without any interaction with the security team: they follow the right practices, make the right tech choices, and constantly think about security.
Koen set out to find what makes these teams special, so that their practices, skills, and processes can be distilled to help other teams improve.
About Riot Games
Riot has 3,500 employees (~700 engineers) across 20 offices world-wide, supporting over 100 million monthly active users.
Challenges and Trends
Riot has similar culture and security challenges to what I’ve seen at many tech companies:
- Developers have high autonomy to use the technology they want, agility and flexibility are encouraged, and friction is minimized.
- Security generally can’t block except for critical issues; a carrot approach must be used.
Koen has found that:
- Hard-to-use security tools and libraries will only be adopted by the teams who are already quite good at security.
- Things the security team produces must fit into dev teams’ existing systems and workflows.
- Many teams see security as a phase to go through.
- So they’ll build a new product and think about security after it’s been deployed.
Why Some Product Teams are Great
Initial Investigation: Observe how product teams work and communicate
To gather some initial data, Koen met with 10-15 dev teams and sat in a few of their meetings, observing things like: does security come up regularly? How do they talk about security? Is it usually just one person bringing it up or many? How do they document the conversation?
The 4 Levels of Product Team Security Maturity
While he groups teams into buckets, in reality it’s a spectrum.
- Level 1: These teams typically don’t talk about security at all. It’s not that they don’t care, it’s just not on their minds.
- This means that the security of a piece of code depends on the engineer writing it + any automation you have in place.
- Level 2: These teams have key stakeholders asking them about security criteria, or a key team member advocating for specific security activities, such as a review prior to release.
- These teams tend to see security as a phase, and security is bolted on after the product has been built.
- Security at this level is not sustainable: if the security advocate is absent or changes teams, the team will regress to Level 1.
- Level 3: Security is a consistent part of conversations, whether it’s during design or discussing a given piece of code.
- It may not yet feel natural for them (it’s still work), but they value security conversations and make sure to have them.
- This level is sustainable: even if the person who cares most about security leaves the team, the team will continue being security conscious.
- Level 4: Security isn’t something the team has to consciously do; they live and breathe it.
- In Level 3 teams, security is typically a process that is repeated, for example as a standard agenda item. Level 4 teams don’t need that; they have a culture of security.
- Not all, but many Level 4 teams work in sensitive areas, such as authentication or dealing with payment processing.
2017 Product Team Security Levels
Most of the Level 1 teams were ones the security team had a hard time having good conversations and building solid relationships with: teams in smaller, local offices, or teams where development is outsourced.
Trends are nice, but:
Is there a measurable difference in how secure a product team’s code is based on the levels we’ve created?
Ideally we want some concrete numbers. Bug bounty to the rescue!
2017 Bug Bounty Statistics by Product Team Security Level
| Metric (normalized, Level 1 = 1) | Level 1 - Absence | Level 2 - Reactive | Level 3 - Proactive Process | Level 4 - Proactive Mindset |
| --- | --- | --- | --- | --- |
| Avg $ Per Bug | 1.00 | 0.80 | 0.65 | 0.55 |
| Avg Time to Fix High Risk | 1.00 | 0.65 | 0.45 | 0.30 |
| Avg Issue Severity | 1.00 | 0.70 | 0.65 | 0.58 |
The table shows a pretty drastic reduction in average bug cost (20% / 35% / 45%), average time to fix (35% / 55% / 70%), and average issue severity found from internal testing (30% / 35% / 42%), with the biggest gains coming from moving from Level 1 to 2 and Level 2 to 3.
Note: the “Avg Issue Severity” does not capture the volume of bugs, only the average severity, because currently it’s hard for them to get volume stats due to the difficulty of accounting for differences in complexity and size of the project.
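The percentage reductions quoted above follow directly from the normalized table values. A minimal sketch of the arithmetic (values taken from the table, with Level 1 normalized to 1.0):

```python
# Normalized 2017 bug bounty metrics by team security level
# (Levels 1-4, with Level 1 = 1.0), taken from the table above.
metrics = {
    "avg_cost_per_bug":     [1.0, 0.80, 0.65, 0.55],
    "avg_time_to_fix_high": [1.0, 0.65, 0.45, 0.30],
    "avg_issue_severity":   [1.0, 0.70, 0.65, 0.58],
}

def reductions(values):
    """Percent reduction of Levels 2-4 relative to Level 1."""
    base = values[0]
    return [round(100 * (base - v) / base) for v in values[1:]]

for name, values in metrics.items():
    print(name, reductions(values))
# avg_cost_per_bug     -> [20, 35, 45]
# avg_time_to_fix_high -> [35, 55, 70]
# avg_issue_severity   -> [30, 35, 42]
```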
How Do We Improve?
There are a lot of things you could do, but as an AppSec team you have a finite amount of time. So where do you focus for maximum impact?
- Having Level 4 teams is great, but transitioning from Level 3 -> 4 is a lot of work and the security benefits are comparatively small.
- It's tough to get Level 1 and 2 teams to adopt new security tools and processes.
- The security outcome improvements from transitioning from Level 1 or 2 to Level 3 are significant.
Thus, the highest impact action is getting teams to Level 3, where security is self-sustaining without outside security team involvement and subsequent systemic security improvements will be more readily adopted.
Every engineer wants to write secure code, but they don’t always consider security. When prompted and primed, many devs are actually quite good at writing secure code.
How can we prime devs to think about security?
Goal: create a systematic way to have devs think about security as they’re building that’s lightweight, and easy to adopt and stick with.
So how, when, and where should we insert this security primer?
Answer: User Stories
They decided to insert the security prompt into existing dev processes for user stories, which the dev team discusses during sprint planning meetings. The results of user story discussions are written down in JIRA, Trello, etc.
Again, this needs to be very lightweight if it’s going to be a conversation point for every user story a team discusses.
The One Security Prompt
For each sprint/backlog item, the dev team asks the following question and has a 2-3 minute conversation about it:
How can a malicious user intentionally abuse this functionality? How can we prevent that?
I really like this approach for several reasons:
- Because it's just one, simple question (with a follow-up sub question), it's easy, lightweight, scalable, and feasible to actually get adoption.
- By having this conversation before building the feature has begun, the team may be able to avoid insecure architecture decisions that would be challenging or time-intensive to fix later if the feature only received a security review pre-launch.
- Lastly, by priming the development team to think about security, hopefully that mindset will carry over into the implementation as well.
Example: Building a Site to Support Tournament Organizers
Koen then goes through an example (25:18) of a dev team using this question when they’re planning to build a website to help support tournament organizers.
The last 2 acceptance criteria were added to the story because of the security conversation. Nice! With only a minute or two of conversation, the security posture of this product is a little bit better.
When you make this conversation standard across all dev teams, you get a nice scaling of security consciousness across an entire organization without the AppSec team having to be involved in every sprint planning meeting. Very cool.
In many cases, the dev team may not think of every abuse case a security engineer might.
But, the security team can’t join every sprint planning meeting, so while this approach isn’t perfect, it is scalable.
Stay in Touch!
If you have any feedback, questions, or comments about this post, please reach out! We’d love to chat.