How pentesting mirrors the evolution of quality assurance

And why software engineering can help us to mature the security industry

Hey there! This guest post is by my friend Ross Haleliuk, best-selling author of the recently released Cyber for Builders, who also has an excellent Venture In Security newsletter. Enjoy! -Clint

I often talk about the fact that security is becoming more and more similar to software engineering, and that adopting an engineering approach to cybersecurity will greatly help us advance the profession. Despite the strong arguments in favor of this approach, some are naturally skeptical. Rather than debating the merits of the idea, I would like to show the similarities between security and software engineering. Nothing illustrates them better than the parallels one can draw between two areas of these fields: pentesting in security and quality assurance in software engineering.

Why pentesting and quality assurance are conceptually similar

Quality assurance in software development is the process of ensuring that software, once it’s deployed to production, will be free of “bugs” - unintended defects that prevent customers from accomplishing their goals, negatively impact user experience, and make it hard to realize the full value of the product. It’s understood that software developers cannot ship perfect code all the time, and even when everyone does their part, changes to the product can unexpectedly break existing functionality. Not every bug should be fixed: if the goal were to eliminate 100% of bugs, no company would be able to innovate and deliver new products to the market. Instead of trying to solve all issues, smart technology leaders ruthlessly prioritize what should be looked at, and by doing so ensure that engineering resources are always tackling the problems most impactful to the business.

Pentesting plays a similar role in security: its goal is to test a product or a specific technical implementation to find and document gaps so that they can be addressed. If left unchecked, some issues can enable attackers to break into the customer’s environment, obtain access to sensitive information, or otherwise impact the so-called CIA triad - the confidentiality, integrity, and availability of data. There is also a shared understanding that every organization has some vulnerabilities - and therefore the goal is not to eliminate all of them (much like getting rid of all bugs, that is neither possible nor practical), but to address the issues that have the highest potential to lead to negative consequences for the business.

The evolution of approaches to quality assurance

Over the past several decades, we have witnessed an evolution of approaches to quality assurance. Before the adoption of Agile and Lean methodologies, when software was shipped once a year after months or even years of planning, writing code, and then testing, quality review was something a team would do for several months prior to the release, according to plan.

When the new Agile and Lean methodologies started to gain traction, often in the form of teams adopting Scrum and Kanban frameworks for work management, it became clear that QA needed to change as well. Instead of being a manual step done once a year, testing became part of the two-week iteration (often called a “sprint”). For a long time, the testing process stayed manual: people trained to use the product would be given specifications detailing how the new functionality was supposed to work, and then use their creative thinking to break it and find bugs.

At a certain point, it started to look like all software was going to be built outside of North America and Europe: organizations looking to cut costs were outsourcing quality assurance to regions with a lower cost of labor, most commonly India. Countries such as Argentina, Ukraine, and Russia, to name a few, offered similar services, but because of weaker English proficiency in those regions, they remained less common choices. There were several challenges with outsourcing QA:

  • The quality of work varied, but for the most part it was much lower than that of in-house QA. QA outsourcing agencies didn’t pay their employees well, so they couldn’t attract and retain good talent. Instead, they relied on people without relevant education, or on recent graduates, to do the work.

  • Even when the people hired wanted to do good work, they didn’t have enough context to do it well. To test a software product and find bugs, one needs to know its functionality deeply and understand what problems it is designed to solve. Outsourced quality assurance teams had none of this: they would work on several products at once without developing close relationships with specific software or domain areas.

It didn’t take long for companies to see that lower QA cost came at the expense of quality - ironically, the very thing QA was supposed to ensure. With that, and to a degree because of the rising cost of labor in previously affordable countries, the era of outsourcing quality assurance for the most part ended.

Following advances in other areas of technology, the rise of test automation, and the insourcing of the QA function, many companies started to evolve the way they did quality assurance from manual testing to automation. QA analysts willing and able to learn the new way of working stayed in the industry and were generously rewarded; those who preferred to stick with what they knew - manual testing - have found it increasingly hard to secure good pay, or even to find a job, in a competitive field with relatively low barriers to entry.

Today, the role of QA in software organizations continues to evolve. Companies with mature software development practices now expect engineers to test their own work, and writing automated tests is becoming a critical part of the “definition of done.” A profession that started as an entry-level, software-adjacent area of work is now inseparable from software engineering. In fact, large organizations such as Amazon are now hiring software engineers focused on QA, typically titled “software development engineer in test.”
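
To make the “definition of done” idea concrete, here is a minimal sketch in Python; the feature and all names are hypothetical, not taken from any specific product. The engineer who ships the feature also ships the automated test that guards it on every future change.

    # discount.py - a hypothetical feature an engineer is shipping
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)


    # test_discount.py - the automated test shipped alongside the feature,
    # written by the same engineer as part of the "definition of done"
    import pytest
    from discount import apply_discount

    def test_applies_discount():
        assert apply_discount(100.0, 20) == 80.0

    def test_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)

A test like this runs on every change to the codebase, so a regression in the feature is caught long before a human tester would ever see it.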

The evolution of approaches to pentesting

When we reflect on the trajectory of pentesting as a discipline, it becomes clear that it neatly follows the evolution the QA function has been through over the past several decades.

At first, the need for pentesting wasn’t well understood - companies assumed they were “fine” and their organizations must be “secure” - after all, their employees were required to use passwords with at least one number and a special character. Then, a once-or-twice-a-year cadence for pentesting was established to meet ever-expanding compliance requirements. Pentesting was done manually: at first by local practitioners, and over time increasingly by service providers based in India and other countries with a lower cost of labor, just as it happened with QA. Outsourcing, however, didn’t consume the whole pentesting space: demand for talented professionals from established players such as NCC Group, Bishop Fox, Trail of Bits, Doyensec, and Include Security, to name a few, remained high.

Driven by the desire to create the “Uber of pentesting”, a number of savvy founders launched pentesting-as-a-service providers. Many of them were structured as two-sided marketplaces: on one side, there were buyers (customers), and on the other side - sellers (pentesters). Initially, this model looked promising, but then it became clear that pentesting is not the same as driving cabs: it requires a very specific, hard-to-obtain skill set. Many people who joined pentesting-as-a-service providers as pentesters were new to the field. Those with a true passion for the space, not just a desire to make a quick buck, would quickly grow their skills, amass impressive portfolios, and go out on their own. As a result, many pentesting-as-a-service platforms turned into bootcamps for entry-level pentesters: great for the industry, but not ideal for the business. Another issue these companies ran into was customer acquisition and retention. Enterprises needed to see strong talent and valuable findings, and because both were few and far between, they retreated. The only reliable customer base for these “Ubers of pentesting” was small and medium-size businesses, but they had little to no budget and churned in large numbers every year.

Over time, platforms that relied on inexpensive workers and service providers focused on outsourcing started to struggle to show value: freelance pentesters would easily uncover many insignificant issues, but findings of impactful vulnerabilities were rare. Today, the space has matured considerably, and bug bounty platforms such as HackerOne and Bugcrowd are generally doing a great job designing incentive systems that encourage bounty hunters to surface only findings that could be impactful. Moreover, given that customers only pay for valid reports, the model works quite well overall.

Similar to how the adoption of QA automation has been steadily growing, the number of companies leveraging automated pentesting tools has also been going up. The challenge is that neither external consultants nor automation tools have a good understanding of the business context, and therefore the value of their findings will always be limited. Moreover, since attackers (and users of software tools) are people, truly emulating their thinking means that pentesting (and QA) functions will continue to need people - ideally fully dedicated, and armed with the right business, technical, and organizational context.

While small and medium-size businesses rely, and will continue to rely, on consultants and service providers, large companies are learning the importance of developing offensive expertise internally. It is incredibly valuable to have an external tester, unencumbered by internal biases and preexisting knowledge of the product, try to break it the way a QA analyst would or perform a pentest. However, an outsider’s perspective cannot be a replacement for an insider’s knowledge and expertise.

Looking into the future: bringing the learnings from QA to security

As we look into the future, I anticipate that many of the same trends that have shaped the present of QA will define the state of pentesting. Similar to how software engineers are now expected to test their own work, defense-focused security practitioners will be asked to validate their organization’s defenses. The two previously separate disciplines - security offense and security defense - are merging into one, giving rise to so-called purple teaming, just as writing software and testing software are now inseparable under the umbrella of software engineering. Developers can no longer just write code and throw it over the fence for someone else to test; in the same way, blue teams are unlikely to keep relying on another team to validate whether their work has indeed strengthened the organization’s security posture.

Another trend is the idea of continuous testing: as new software is deployed (or changes are made to the organization’s security environment), both QA and security tests should run automatically. A decade from now, the idea that an organization can live without continuous validation of its security posture will be as unacceptable as the idea of shipping software without any tests running in the background.
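
As an illustration only - a minimal sketch that assumes a hypothetical internal service at https://app.example.internal - the same continuously running pipeline can execute a functional smoke test and a couple of basic security posture checks side by side after every deployment:

    # post_deploy_checks.py - run automatically after every deployment
    # (a sketch: the URL, endpoints, and expectations are placeholders)
    import requests

    BASE_URL = "https://app.example.internal"  # hypothetical service under test

    def test_health_endpoint_responds():
        # Functional (QA) check: the service is up and serving requests
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200

    def test_security_headers_present():
        # Security check: basic hardening headers have not regressed
        resp = requests.get(BASE_URL, timeout=5)
        assert "Strict-Transport-Security" in resp.headers
        assert resp.headers.get("X-Content-Type-Options") == "nosniff"

    def test_debug_endpoint_not_exposed():
        # Security check: internal debug routes must not be publicly reachable
        resp = requests.get(f"{BASE_URL}/debug", timeout=5)
        assert resp.status_code in (401, 403, 404)

The point isn’t the specific checks; it’s that security validation runs on the same trigger, and with the same regularity, as the functional tests.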

Conceptually, pentesting is a form of QA: instead of testing whether a product or a system works as intended, it looks to ensure that it is secure. What differs is what happens when an issue is not discovered by the organization. If a software bug gets through, customers are likely to report it to the company that built the product. Users want to have a good experience with the product, so they are incentivized to report any gaps they come across. If a security issue is discovered, customers can report it through the company’s vulnerability disclosure program (a security equivalent of bug reporting). However, not everyone who finds an issue is incentivized to share it with the company - some actors proactively look for vulnerabilities with the intent to exploit them. This is the main factor that makes a good pentesting practice in some ways much more critical than quality assurance.

Another distinction is that software bugs are found during regular use of the product, while security issues are something people must intentionally look for. This is why bug bounties play, and will continue to play, such an important role in the security space.

The intent of this short piece wasn’t to prove that pentesting is 100% the same as quality assurance; each has its unique sides, and it’s natural that there are parts that differ. What matters is that cybersecurity practitioners, leaders, and investors keep an open mind and continue to learn from other disciplines. Too often we try to reinvent the wheel despite the fact that areas such as software engineering have already solved a lot of the same problems security is tackling today. By embracing an engineering approach to security, we will find inspiration for solving many of the problems that at first glance may appear unique to our field.