This guide is synthesized from almost two hundred resources, as well as a survey of over a hundred security professionals (who have bought a collective thirteen hundred assessments, and sold over twenty-five thousand). Check out the bibliography.
You can’t buy security, but security services vendors play a key role in effective security programs. Shopping for and deriving value from these vendors is becoming a core competency for security professionals. However, many organizations struggle due to information asymmetry and the difficulty of assessing performance or quality. Combine that with misleading sales tactics, a lack of standardization in delivery, and an overwhelming marketplace that now offers even more alternatives to traditional single-vendor assessments, and buying a decent security engagement is harder than ever.
As one survey respondent put it:
“There is too little reliable information about the type and quality of services offered, and too little knowledge about how to sell and buy them.”
In Penetration Testing Considered Harmful Today (2012), Haroon Meer identified penetration testing as a “market for lemons” - in which information asymmetry (buyers can’t tell good products from bad) drives buyers to offer a price averaging their expectations. This phenomenon then forces good products out of the market, leaving one dominated by subpar products.
In 2007, Gary McGraw coined the “badness-o-meter” for security assessments. It’s a dial that shows possible assessment outcomes, a range from “your security sucks” to “we don’t know.” It’s impossible to prove security, so assessments can only prove your consultant wasn’t good enough to get in within the time and scope allotted.
There isn’t a strong belief in the quality of the average vendor. In our survey, buyers expressed more confidence than sellers. But sellers are better positioned to judge average quality, and I trust their judgement more:
This guide will share everything you need to know to effectively buy and get value from security services.
Types of Security Consulting
Security is big business: the global security services market likely crests 100 billion USD. 1
Both professional and managed services are components of this market. We will put aside the marketplace for managed services, which has unique complexities and contracting models. This guide will focus on professional services, which include a variety of common security consulting engagements.
More specifically, we’ll discuss the process of buying a security assessment - a consulting engagement focused on evaluating security design, architecture, and/or implemented controls to identify whether they are operating as intended and helping the organization to meet its security requirements (NIST). To start, let’s break down the types of assessments on offer.
Vulnerability assessments and penetration testing are the most common and broad modes of assessment.
A vulnerability assessment is a systematic approach to comprehensively identifying vulnerabilities or information security deficiencies. These assessments tend to be highly automated. Human input is primarily used for prioritization and confirmation of tool-reported vulnerabilities. Vulnerability assessments are an efficient, and therefore cheap, way to identify common and known risks. Generally, identified vulnerabilities are not exploited during a vulnerability assessment. A vulnerability assessment is the least expensive and least comprehensive service that is generally available.
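The division of labor described above - tools find, humans prioritize and confirm - can be sketched in a few lines. This is an illustrative sketch only: the findings format and the CVSS threshold are assumptions, not the output format of any particular scanner.

```python
# Sketch of the triage step in a vulnerability assessment: tool output is
# filtered and prioritized, then a human confirms what remains.
# The finding dicts and the min_score cutoff are illustrative assumptions.

def prioritize(findings, min_score=4.0):
    """Drop informational noise and sort the rest by CVSS score, descending."""
    actionable = [f for f in findings if f["cvss"] >= min_score]
    return sorted(actionable, key=lambda f: f["cvss"], reverse=True)

findings = [
    {"id": "CVE-2021-0001", "cvss": 9.8, "host": "web-01"},
    {"id": "SSL-WEAK-CIPHER", "cvss": 5.3, "host": "web-01"},
    {"id": "INFO-BANNER", "cvss": 0.0, "host": "db-01"},
]

for f in prioritize(findings):
    print(f"{f['cvss']:>4}  {f['id']}  ({f['host']})")
```

The point of the sketch is the shape of the service: the expensive part is not producing the raw list, it is the human judgement applied to it.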
A penetration test is a security exercise that attempts to safely identify vulnerabilities and weak spots in a system or network’s controls. Identified vulnerabilities are exploited to determine impact and to identify further vulnerabilities accessible post-exploitation.
There are a few specific subtypes of penetration test, characterized by the transparency provided to both the vendor and the target:
- White-box (“full knowledge”): The vendor will be provided full access to the system, as well as documentation on its internals, source code, and partnership from subject-matter experts throughout the assessment.
- Grey-box (“partial knowledge”): The vendor will be granted standard access for the varying personas of users of the system. They may be provided some documentation on architecture or implementation.
- Black-box (“no knowledge”): The vendor will be in the same position as a black-hat hacker, with no special access granted or information provided.
Check out Daniel Miessler’s blog post for more on the difference between a vulnerability assessment and a penetration test.
Other services are denoted specifically by the methodology, deliverable, or goals of the engagement. The most common examples follow.
A code review focuses on reviewing the source code as comprehensively as possible for vulnerabilities. These assessments leverage static analysis tooling as a matter of course. For any finding, all instances of that pattern should be identified. Sophisticated vendors will develop custom rules or analysis tooling. These can be specific to a client, or more generally applicable to a specific language or framework.
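As a toy illustration of such a custom rule, here is a sketch using Python’s standard-library ast module to flag every call to eval() - the “find all instances of the pattern” discipline in miniature. Vendors typically build rules for richer engines (Semgrep, CodeQL, and the like); the rule and the target snippet here are hypothetical.

```python
# Toy custom static-analysis rule: report every direct eval() call,
# one hit per instance of the pattern, with its location.
import ast

def find_eval_calls(source: str):
    """Return (line, column) for each direct eval() call in the source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append((node.lineno, node.col_offset))
    return hits

sample = "x = eval(user_input)\ny = int('2')\nz = eval('1+1')\n"
print(find_eval_calls(sample))  # one hit per instance of the pattern
```

A real rule would also catch indirect patterns (aliases, getattr tricks), which is exactly where vendor sophistication shows.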
A threat modeling engagement takes a step back from vulnerability discovery. Instead, it focuses on enumerating likely threats to a system. These assessments may follow a popular methodology, like STRIDE, or leverage a vendor’s custom approach.
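To make the STRIDE approach concrete, here is a minimal sketch that maps each element of a hypothetical system model to the threat categories that apply to it. The component list and per-element mapping are illustrative assumptions, not a complete methodology.

```python
# Minimal STRIDE-style enumeration sketch: walk each element of a
# hypothetical system model and expand the threat categories that apply.
# The model and its per-element letters are illustrative assumptions,
# loosely following STRIDE-per-element practice.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

model = {
    "browser -> api (data flow)": "TID",
    "api service (process)":      "STRIDE",
    "user (external entity)":     "SR",
    "orders db (data store)":     "TRID",
}

def enumerate_threats(model):
    """Expand each element's category letters into named threats."""
    return {
        element: [STRIDE[c] for c in letters]
        for element, letters in model.items()
    }

for element, threats in enumerate_threats(model).items():
    print(f"{element}: {', '.join(threats)}")
```

The output is a checklist, not a finding list - which is the point of stepping back from vulnerability discovery.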
Red team (adversary simulation)
A red team assessment is in many ways an extension to a penetration test. It is characterized by an attempt to model the behavior of a specific adversary and/or an attempt to compromise an organization by any means possible, while avoiding detection. The term “red team” originated with late 90s DoD cyber exercises2, where “red” emulated the opposing force (think Cold War). Red teaming requires a mature defensive posture to provide meaningful additional value.
Social engineering (phishing, vishing, smishing)
A social engineering assessment is narrowly targeted at that class of threat. The goal is to evaluate an organization’s resistance to social engineering. These assessments can target an entire company, or more narrowly focus on specific employees or assets. Social engineering can also be performed as a component of a larger assessment.
Technical Specialty (hardware, cryptography, cryptocurrency)
|￣￣￣￣￣￣￣￣￣￣|
| CRYPTO |
| MEANS |
| CRYPTOGRAPHY |
— EFF (@EFF) August 11, 2018
Many technologies have unique risks that require specific expertise to identify. In these cases, you may need a specialty assessment, from a consultant or consultancy that has deep experience in your problem space.
On newer models
When researching vendors, you will stumble across a few emergent offerings - bug bounties and “automated” pentests or red teaming. Without too much digression, suffice to say that vendor marketing often makes false comparisons between these services and traditional consulting engagements. While these new offerings can provide value, they are not a drop-in replacement for a services engagement. As one survey respondent put it: “Getting tough to find something that’s not just a dressed up Nessus scan.”
Motivations for the assessment and impact on vendor selection
To book a successful security consulting engagement, you must first understand your motivation. There are a wide range of reasons to contract with a security consultancy. Yours will have outsized downstream considerations for vendor selection. Some of the most common drivers are:
The most proactive motivation for an assessment is pure risk reduction. This is where an organization voluntarily commissions a security assessment in order to holistically improve their security. They can use the assessment to find gaps in security architecture, implementation and program to prioritize for remediation and investment.
This motivation allows for the most flexibility in vendor selection, as it is entirely for the benefit of organizational security. There are two considerations that also apply to all other cases. First, when interviewing possible vendors, focus on their collaboration model and the experience of working with them. Second, focus on their ability to map their assessment to concrete, contextual business risk.
Many compliance schemes either suggest or mandate a third-party security assessment. Some examples include PCI DSS (Requirement 11.3), FedRAMP, FINRA, NCUA, SOC2, and HITRUST.
When procuring an assessment due to compliance, you must ensure that the vendor you’re working with carries any certification required by the scheme (like PCI Approved Scanning Vendor) or otherwise meets the certification requirement. Additionally, it is worth weighing the benefits of finding a stronger assessor who will provide a more critical assessment, versus a vendor that performs a less substantial assessment, offering an easy path to meet the compliance requirement. You should balance this consideration with your overall security program and other assessment activities. Finally, when contracting for compliance, you may find value in working with your auditor to identify a vendor who has a good reputation with them.
Companies, especially in the Business-to-Business space, hit a size at which certain customers will require a vendor security review as part of procurement. 3 These reviews vary, but standard forms such as Shared Assessments’ SIG Questionnaire or the Cloud Security Alliance’s Consensus Assessment Initiative Questionnaire are available.
Evidence of an external assessment is a frequent requirement for these reviews. The CAIQ AAC-02, for example, specifies that “independent reviews and assessments shall be performed at least annually to ensure that the organization addresses nonconformities of established policies, standards, procedures, and compliance obligations.”
Often, a security assessment can serve multiple agendas. If your organization has heard customer demand for evidence of such an assessment, this should influence your vendor selection. For very large clients, you may hear implicitly or explicitly of one or more preferred vendor(s). Depending on the size of the deal, it can be worthwhile to bow to these preferences, especially if the client has recommended a vendor you were otherwise considering. It is also worth potentially paying the premium to conduct your assessment with a more notable “brand name” firm. Name recognition can engender client confidence in the assessment. Finally, be sure to understand the deliverables your vendor offers, to keep the proper artifacts as sales collateral. This may be a letter of engagement, an attestation of the assessment, or executive and engagement summaries.
Investment or M&A
Security assessments can occur pre- or post-deal in a merger, acquisition, or investment. They can be used as due diligence before a deal, or afterward, to baseline security program planning. The assessment may be an explicit deal contingency, or used to understand the potential risk and resultant costs associated with the deal. 4
Often, M&A or investment activity is fast-moving, making engagement speed a crucial criterion. It is also important to vet for depth of industry experience. These assessments rely on the third party’s ability to identify and communicate business risk, even without the level of partnership from the target that a standard assessment enjoys. Experience with M&A engagements is also worth seeking. Most organizations will have limited chances to gain experience in these activities. A vendor may work dozens of these engagements a year and be able to provide more value from that broad base of experience. In some of these cases, if you are the party being assessed, you may have little choice in the matter. The investor or acquirer will pick the vendor, or at least have preferred relationships. You may be able to get ahead by having your own assessment handy - as discussed in Sales above.
Third-party assessments are internal political fuel. They can provide an objective view of gaps, highlight problems the security team is hoping to communicate upwards, or capture progress to qualify return on investment.
To roadshow an assessment internally, you should consider your audience. One tack could be focusing during procurement on the report’s business-friendliness, such as the quality of the executive summary. It also can be beneficial to favor any executive preference or pre-existing relationship, and to account for vendor name-recognition.
Following a breach, more than just incident response is called for. In the long term, it is often wise to commission an external assessment to confirm that remediation and hardening activities were effective. It also can re-engender internal trust, in line with Internal attestation above.
Though the assessment is not itself incident response, there is value in working with a vendor with experience in incidents. This could be your response partner, or another vendor who can directly leverage the outcomes of the response. Additionally, it is helpful to select a partner who has the ability to support longer-term advisory work. Finally, consider working with internal or external legal counsel to commission the engagement. That way you may be able to conduct it under privilege5.
Types of Security Services Vendors
In the lede, we spoke about the difficulties presented by the marketplace for security services. They include misleading sales tactics, evolving terminology, and an overwhelming array of vendors and services. To navigate the complexity as a buyer, it helps to understand the common vendor profiles. You can then take this knowledge and your understanding of your goals, and use it to narrow down the possible vendors for your contract.
The biggest, name-brand companies offering security services are general professional services firms. They offer security services as a small part of their portfolio. For example, all the “Big Four” accounting firms (Deloitte, Ernst & Young, KPMG and PwC6) have cybersecurity practices that offer assessment services. IBM, Booz Allen Hamilton, and HPE fall into this category as well.
A step-down in size and name recognition, you will find the large “pure play” cybersecurity services companies. These are organizations that focus on security and have a large staff of security consultants (generally in the hundreds). A couple examples would be NCC Group or Optiv.
On an even smaller scale, you’ll find the boutique security consultancies. These consultancies have a shallow bench of consultants and offer a more limited range of services - frequently targeting a narrow customer base. They may work locally (like the consultancies that popped up in Seattle in Microsoft’s orbit in the Vista days7), or in a specific industry or vertical.
The specialty vendor is a subclass of the boutique consultancy. These are consultancies that focus on a single service offering.
Luta Security is an example, offering services exclusively in the vulnerability coordination space. Latacora is another vendor which has carved out a niche, by offering a virtual security team to startups.
At the smallest scale, you have vendors that are operating entities for sole practitioners. Most of the other types of vendors will have one or more cases where individual consultants have spun out independent practices. These vendors may enjoy working alone, or can grow over time into a larger team.
A trend in this space involves independent security researchers and bug bounty hunters who also offer direct consulting services. Companies who run bug bounties may retain the most successful participants to do more targeted or collaborative assessments.
These vendors are identifiable by competing primarily on price for contracts. They tend to prominently offer vulnerability assessments in addition to more comprehensive services. Their advertising and execution leans heavily on automation, driving cost-efficiencies.
Managed Security Service Provider
If you are looking for more than assessment services, or already have a relationship with a managed security service provider (MSSP), they may be an option. Many MSSPs have an assessment offering in house, or a trusted partner for consulting services.
While this relationship is already high-trust, consider the conflict of interest - especially if you’re hoping to validate the risk reduction provided by your MSSP.
Value Added Reseller
While the line between MSSP and VAR has blurred in recent years, historically the clear difference lay with the period of engagement. A VAR performs a short term (generally under a year long) transactional service - offering third-party software and hardware, plus consulting, configuration, and customization services. A VAR may offer security assessment as part of their engagement. Consider the value proposition, as you are normally paying a premium for working through a VAR. However, it can be worthwhile to use one if your organization is already leaning on them, indicating a lack of resourcing or qualifications to manage the engagement internally.
How to find potential vendors
“How do I find a vendor” is one of the first, and most surprisingly difficult, problems when building a security assessment program from scratch.
Some trends from the survey:
What is the greatest challenge in buying security services?
Finding good vendors, who provide significant value
… who are available when you need them
… who can support specific systems or architecture
… who provide consistent quality staff
In some cases, the trouble is figuring out where to start to explore the industry. In others, it is separating the wheat from the chaff. At this point, security consulting is a big business and there are hundreds of options in the U.S. alone. In this section we’ll present a variety of approaches to help you find a strong partner for your assessment program or one-off test.
“Ask your network” is likely the first piece of advice you’ll receive. There is a good reason for this: a recommendation from a trusted party, based on their personal experience with a vendor, is an exceptionally high signal. But, when seeking network recommendations, be sure to keep in mind both the quality of your network and their ability to accurately assess the quality of their experience.
Be sure to take recommendations from non-clients of the vendor with a grain of salt, as there is more potential bias. Often, organizations can have successful engagements and be happy with a clean report – or maybe a flashy finding. This is not a clear indication of a good assessment in all cases. It’s possible that the finding was actually low-hanging fruit, or the clean report a sign of poor coverage.
Follow-the-leader (industry endorsements)
Security assessments are common practice at most of the largest tech companies, including all the Big Five (FAANG, or is it MANGA?). These companies also tend to be large enough to support marketplaces or integrations - and many require a security assessment for inclusion inside these walled gardens. In such cases, they often advertise a specific set of approved vendors. Those vendors are validated and vetted by a sophisticated security program, and can be considered a safe bet. However, be wary that your needs (and budget!) may be different from those of these large organizations.
Security conferences are a rich hunting ground for vendors. Unfortunately, it can be hard to distinguish between high quality talks and vendor pitches, especially if you’re only referencing a conference program. Be wary of pay-to-play schemes and sponsored sessions. Despite that caveat, it can be productive to review the speakers and trainers, especially at “tier one” conferences (like BlackHat, Defcon, and CCC). Keep an eye out for talks and workshops specific to your technologies and problem space.
Like conference speakers, published research can also indicate the appropriate quality vendor. Security assessment is partially a creative pursuit. Research can reflect curiosity and innovation. Additionally, research can indicate a vendor’s technical depth on the bench. You may not need an expert in that specific topic, but if you’re looking for adjacent work, there is a correlated signal. Again, like with talks and trainings, this requires confidence separating marketing fluff and FUD from real research. Competitions like Pwn2Own can also demonstrate the proficiency of a vendor’s engineers.
Like research, you can try to select a vendor based on their prominent staff or leadership. This is only valuable for vendors small enough that you could reasonably expect the direct engagement of that person. This is irrelevant for large vendors. They are more than happy to sell you on a big name, but with hundreds of consultants are likely to keep their notable employees for their most prestigious or challenging engagements, or research and presentations. If you’re considering hiring a former security researcher, bug hunter, or independent consultant, you should be vetting them individually.
Most security vendors will either have public reports available, or will provide an example upon request. Reviewing these examples can help indicate the quality of the deliverable you’ll receive. They may contain evidence of the vendor’s methodology and reasoning around risk within a business context. Assume that these reports are of a higher quality than average for the vendor. Any deficiencies should be a red flag.
A generic repository of reports has been collated over on GitHub. The CNCF has also commissioned a set of high quality reports as part of graduating projects, for example:
If you are seeking a public report, expect a substantial surcharge. Public reports should always be objective and comprehensive. Reputable vendors should not suppress or remove findings from a report based on customer demand.
If one of your motivations for a security assessment is meeting compliance obligations, ensuring your provider satisfies the relevant requirements is obviously essential. For example, the Payment Card Industry (PCI) standard requires that organizations contract an Approved Scanning Vendor (ASV) for certain levels of certification. Another example would be CREST accreditation. Even if you’re not motivated by compliance, these can still be valuable broad indexes of possible vendors, although inclusion in an arbitrary scheme does not provide a strong signal.
Assessment standards work
There have been numerous efforts to standardize penetration testing and security assessments10. While none have achieved universal adoption, their contributors can be a useful reference list. Not contributing to these efforts isn’t a negative signal, but participants in major standardization efforts, such as the Penetration Testing Execution Standard, show a focus on methodology, consistency in delivery, and alignment with industry best practices.
Certifications are controversial in the security industry. A few maintain some cachet, such as lab-based certs like those from GIAC and Offensive Security for penetration testers. There are regional certification schemes with fairly broad adoption, like CREST in the UK. You should value the certification of an individual delivering your assessment more than a vendor carrying an organizational certification. Know that not all certification schemes are equal. Be dubious of non-technical certification, as well as overly broad ones.
Industry analysts have a mixed reputation, with the Gartner Magic Quadrant a meme in many circles. That being said, if you’re in the type of organization for which analysts hold a lot of sway, turning to Gartner, Forrester, or another advisory is a reasonable option. You’ll need to pay for access to these reports. Based on those I’ve reviewed, they at least won’t steer you terribly wrong. More importantly, just like “nobody ever got fired for buying IBM,” if your company respects these rankings you benefit from following suit. However, you should accept that in using these recommendations you will be directed at the top of the market. The only companies in most of these analyst reports are those that are large enough to afford the inefficiency of directly courting inclusion.
Respondents were asked to stack-rank their preferred methods of finding an assessment vendor. As expected, network recommendations are considered best. I personally dispute the value of published research. It’s less likely to be directly correlated to real world assessment competency, but it is certainly flashy.
How you should scope your assessment
Scope management is the process of defining what work is required, and then making sure that all of that work, and only that work, is done. 11
Accurate scoping is key to a successful assessment. If the scope focuses on the wrong areas, the important risks will not be identified. If an assessment is scoped too narrowly, there will be insufficient coverage of the targeted risks. If it is scoped too broadly, then either depth of coverage will suffer or excessive costs will be introduced.
Strike a balance between performing a comprehensive set of tests and evaluating functionality and features that present the greatest risk
GSA IT Security Procedural Guide: Conducting Penetration Test Exercises
It is important to identify the limitations of your resources - time, budget, or internal expertise and ability to support assessment efforts. The scope must be grounded in your organization’s risk, informed by your threat model and asset classification schemas.
In order to minimize the cost of penetration testing, one essential thing that must be done is to reduce the amount of labor time associated with each test. In order for penetration testing to be truly useful to software developers, testing scenarios and test cases need to involve a great deal of in-depth knowledge of the application undergoing the testing. This means that more, not less, human time and energy needs to go into testing.
CISA - Adapting Penetration Testing for Software Development Purposes
Define your budget before beginning the scoping process, especially in organizations with significant processes around approvals. The cost range of “a security assessment” is practically unbounded, which means that overall budget and desired scope will be the salient factors defining the vendors you can afford. When considering possible elements of the scope, take into account the trade-offs of cutting down the scope versus selecting a vendor with a cheaper rate.
Bear in mind Parkinson’s law - “work expands so as to fill the time available for its completion”
How to pick your target
There is no rule for targeting an assessment. Take advantage of your familiarity with your environment - you know your assets best! Targeting is nuanced, based on a variety of factors, including:
- Your motivations for the assessment
- Your desired balance of breadth and depth
- Your risk assessment, threat model, and data classification (you have these, right?)
- Your measurement goals, such as controls testing or detection capability assessment
For your first assessment with a new vendor, consider limiting the scope to a cost-efficient trial run.
Every requirement you specify limits your pool of vendors. This will in most cases increase your costs. The major justifications for a constraint tend to be technological limitations or regulatory obligation. Carefully consider requirements, weighing the benefits. Some common, significant requirements I’ve seen include:
- The assessment must be performed onsite
- Clearance, certification, or citizenship requirements
- The vendor must follow specific methodologies (e.g. a specific tool, set of test cases, or host)
Additional follow-on requirements
Remediation assistance is a common follow-on requirement. However, that can significantly limit the available vendors. It can force you into using a global firm, MSSP, or VAR. Know that any competent vendor’s report should be usable by an internal team or existing MSP. High-pressure sales tactics pushing remediation are generally FUD.
How to request and review proposals
Running an RFP
A request for proposal (RFP) is a document that solicits proposals, often made through a bidding process, by an agency or company interested in procurement of a commodity, service, or valuable asset, to potential suppliers to submit business proposals. 12
Large enterprises frequently conduct structured RFPs. However, in small and medium organizations procurement much more closely resembles a few vendor solicitations and a comparison of the proposals.
How many vendors should you consider?
Gartner, and my anecdotal experience, recommend between three and five providers for a shortlist. 13 Remember to make informed decisions with regard to the cost of running this process in terms of internal resourcing. Any more than five vendors and you’re making an outsized investment in vendor selection alone, which invites decision paralysis. If you are planning to procure a five-day assessment, the RFP process can easily become a large component of your total costs. For less mature organizations, it can be more effective to start with a (formal or informal) RFI (Request For Information). This is a more casual introduction that can help you narrow your specific requirements for a vendor.
Fielding initial sales calls
All but the smallest firms have dedicated sales, who will be your point-of-contact during the process. Small vendors may have managing or principal consultants participate directly, which provides an advantage for your ability to directly vet the technical resources. Make sure that before the call you get a good understanding of who you’re speaking to: sales, sales and delivery, or the actual person who will deliver your assessment?
No matter the vendor, do not sign a contract before getting a chance to speak with a technical resource! That’s ultimately who you’re paying for.
What to ask prospective vendors
In order to make an informed decision, you need data. For less experienced clients, it can be unclear what to ask or what questions are valuable. Here’s a cheat sheet of key questions:
How soon would you be able to staff this engagement?
Security services vendors can frequently book weeks or months out. If you have a tight timeline it can be efficient to lead with that constraint. You also can use this opportunity to discuss staffing models. There are benefits and drawbacks to staffing multiple people on an engagement. It is easier to staff and quicker to deliver an engagement when multiple testers can be used. These collaborative engagements can also benefit from thought partnership and complementary skill sets. They also take less chronological time, which makes them easier for you to support as a client and present shorter requirements for maintaining the test environment or change freeze. However, in a solo engagement the consultant is more likely to develop domain- and target-specific knowledge - which can have outsized impact if you have complicated business logic. If you have rigorous Assessor Requirements, onboarding multiple consultants can be more challenging.
What experience do you have with organizations like ours?
Be aware of any specifics of your client profile - industry, technology, or maturity. Identify a vendor that has evidence of success with such clients or at least is able to clearly identify how those will impact the assessment. Ask for a relevant reference or case study as evidence of this competency.
What is your engagement model?
This is an open ended question, but you should be sure to dig into:
Their supported collaboration models
Do they provide real time access and feedback via Slack (or other chat)? Do they prefer to operate over email or phone calls? What do they include in project updates and on what cadence?
How they assign people to projects
You should expect the vendor to align resourcing based on vertical, technology stack, or level of experience. You should ask whether you will be able to review consultant biographies, or whether you are guaranteed a level of experience or expertise.
Despite this callout, it is not uncommon or unreasonable for vendors to only agree to general requirements. Substantial contracts (think ~$500k+) provide more leeway for client specification.
Their project management approach
This has an outsized impact on large engagements and for enterprise clients. There are a variety of common models: from informal project management, to identification of a lead consultant with responsibility for the project, to dedicated project management staff - a PM or a TPM, or simply client-managed projects with no vendor-side coordination.
The deliverable timeline and quality assurance process
There are several considerations for the deliverable logistics. This starts with the timeline, which can range from last-day-of-testing to multiple weeks. It can also be insightful to ask who is responsible for editing the report. It could be a technical writer, a peer of the consultant, management, or something else entirely. Finally, gauge the vendor’s willingness to update reports post-readout, and the timeline (and any costs) in that case as well.
The methodology and tools
Frankly, the specific methodology is less important than a clear signal that the vendor will ensure consistency in delivery. This is particularly important for large vendors, who have a broad slate of potential consultants. A clear methodology can also be valuable evidence when using the report for sales or compliance, which expect mention of common frameworks. This conversation should also lean into a discussion of how the vendor calculates risk. You should ensure they will be willing to rate risk within the context of your specific environment and business.
While the methodology may not be the most important component, it certainly can be vetted. Using a web application assessment as an example: “We test the OWASP Top 10” is significantly less promising than “We follow a methodology aligned to the OWASP ASVS”
My favorite methodology question (for web assessments) is “How will you test our authorization logic?” I like this question because it can take a side-channel look at elements like manual versus automated testing, as well as level of diligence and consistency.
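To make the authorization question concrete, here is a minimal sketch of the kind of systematic check a diligent vendor might describe: exercising every (user, object) pair and flagging cross-tenant access. The stubbed `fetch` API, the object names, and the roles are all hypothetical stand-ins for a real application.

```python
# Objects and their owners in the stubbed application.
OBJECTS = {"doc-1": "alice", "doc-2": "bob"}

def fetch(session_user, object_id):
    """Stub API: a vulnerable endpoint that never checks ownership."""
    if object_id not in OBJECTS:
        return 404
    return 200  # missing: an OBJECTS[object_id] == session_user check

def authz_matrix(users, object_ids):
    """Exercise every (user, object) pair and flag cross-tenant access."""
    findings = []
    for user in users:
        for oid in object_ids:
            status = fetch(user, oid)
            if status == 200 and OBJECTS[oid] != user:
                findings.append((user, oid))
    return findings

print(authz_matrix(["alice", "bob"], ["doc-1", "doc-2"]))
# Flags alice→doc-2 and bob→doc-1: each user can read an object they don't own.
```

A vendor answering with something this systematic - rather than "our scanner covers that" - is a good sign of manual diligence and consistency.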
During these initial conversations, there are a few important themes:
- You should hear potential vendors speak to your specific industry, business logic, threat model, and needs
- You should favor vendors who are able to ask informed questions on your stack, implementation, and prioritization
- You should take these opportunities to judge vendors on communication, responsiveness, and collaboration skills
What to expect from vendors?
In How you should scope your assessment, we discussed how you should enter procurement with a strong conception of the scope of the assessment. Following the initial conversation(s) with a vendor they should propose a scope of work.
The penetration test team should identify what testing they believe will give a full picture of the vulnerability status of the estate.
- Penetration Testing: Advice on how to get the most from penetration testing, National Cybersecurity Centre
While scoping, it is important to explicitly communicate your expectations and requirements. This starts with your proposed scope. Consider requesting multiple scoping options from the vendor. These can be broken down by level of effort, coverage, or inclusion/exclusion of certain elements. The proposal should document clear objectives for the assessment, as well as the expected level of effort and depth of coverage. You should also ensure the scope includes concrete reporting requirements as necessary - for example, whether you expect the report to use CVSS scores or CHECK severity levels for risk ratings.
Good scoping lays the groundwork for a successful assessment. A suitable vendor will make sure to take into account your needs, priorities, and specific attack surface. Beware a vendor that quotes based on a naive metric like “number of IP addresses” or “number of services.” That is insufficient to accurately scope an assessment and lends itself to false precision. The process generally involves the vendor providing a questionnaire, or asking to run a perimeter scan or review code to understand complexity. This can be followed by a discussion or demonstration of the items in scope, all in service of generating a quote.
An assessment proposal should be accompanied by detailed scope and pricing information. This will be either fixed price or time and materials 14.
Get a quote with a detailed breakdown of costs. Beware “one size fits all pricing.” Detailed pricing is necessary to ensure you’re evaluating possible vendor proposals like-for-like. Some line items you may see broken out include:
- Pricing per-consultant, or varying rates for different levels of expertise
- Dedicated project management
- Reporting time
- Surcharges for additional deliverables, such as attestations of engagement, executive reports or retest reports
- Surcharges for travel, or expenses. Be wary as these can add up fast, especially for onsite engagements.
- Surcharges for out-of-hours testing or for retesting
The quote may also denote a sticker price with a discount listed. This might be earmarked for volume, a first-time customer, or a recurring customer. However, don’t let a proposed discount obscure the actual value of the rate.
The quote should also spell out payment terms, such as:
- Net 30 (or 60, 180, 365, with penalties for late payment)
- Percent upfront (commonly half, either as a deposit or delivered at kickoff)
- Milestone based
Why are security assessments so expensive?
This is a common question about security services. It’s often paired with a perceived breakdown of the cost-per-consultant (e.g. “Why is the company charging $X,XXX/day, that’s $XXX/day/consultant?”). While many vendors will charge as much as they can get away with, there are a variety of contributing factors to the price:
- A significant portion of the cost is the need to compensate technical staff competitively. Hiring security engineers is difficult. A lack of experienced candidates has raised the average salary, and the market is only getting more competitive.
- This cost is increased when you account for the fact that consultants may not be billable 100% of the time. This occurs due to scheduling constraints, inconsistent demand, ongoing professional development, and general burnout prevention.
- Cost of staff also goes beyond salary, with the general rule of thumb for fully loaded cost of an employee (benefits, office space) as double raw salary.
- The vendor also will incur overhead for non-technical functions, including sales and marketing. This tends to scale for larger organizations, which require more sophisticated logistics and need to drive increased demand. Sales commissions can be sizable, as well as advertising and executive costs.
The 3 factors that determine how much money you will make (or lose) in a consulting practice:
- Bill Rate: how much do you charge your customers. This is pretty familiar to most folks.
- Utilization: what percentage of your employees’ time is spent being billable. The trick here is if you can get them to work 50 hours/week because then they’re at 125% utilization and suspiciously close to “uncompensated overtime”, a concept I’ll maybe explain in the future.
- Leverage: the ratio of bosses to worker bees. More experienced people are more expensive to have as employees. Usually a company loses money on these folks because the bill rate is less than what they are paid. Conversely, the biggest margin is on work done by junior folks. A leveraged ratio is 1:25, a less leveraged ratio is 1:5 or less.
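Putting the factors above together makes the pricing math tangible. The numbers below are illustrative assumptions only (salary, utilization, and markup vary widely), not industry data:

```python
# Back-of-the-envelope consultancy economics using the factors above.
# All inputs are assumed for illustration.

salary = 150_000          # consultant base salary (assumed)
loaded_cost = salary * 2  # rule of thumb: fully loaded cost ~= 2x raw salary
utilization = 0.60        # fraction of ~250 working days that are billable
billable_days = 250 * utilization  # 150 billable days/year

# Minimum day rate just to break even on this one consultant,
# before sales, marketing, and management overhead:
breakeven_rate = loaded_cost / billable_days
print(round(breakeven_rate))  # → 2000

# A 2x markup to fund overhead and margin lands at a familiar sticker price:
print(round(breakeven_rate * 2))  # → 4000
```

Under these assumptions, a $2,000/day rate earns the vendor nothing - which is why quoted day rates that look steep per-consultant are often unremarkable once utilization and loaded cost are accounted for.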
Signing the deal
Once you have multiple quotes for your security assessment, you should be ready to lock down a vendor. There are a few steps in this process, beyond blindly signing on a dotted line.
Security assessment services are variably priced, which means there can be room for negotiation on:
- Rate: Price is an obvious axis for negotiation. A strong client may be able to directly negotiate a lower rate. However you can also consider cost savings tied to the consultant profile. As discussed in Quotes, some vendors charge per-consultant, with cost tied to expertise. In these cases, it may be possible to suggest a less experienced consultant if that meets your needs, providing cost savings. For smaller vendors, flexibility around payment terms may be met with more flexibility on pricing.
- Scope: When looking to cut scope to save on costs, it is important to understand the tradeoffs. Cutting scope involves sacrificing breadth for depth. This may involve a sampling approach, or removing some assets or asset classes entirely.
- Level of effort: Lowering the level of effort is the corollary to cutting scope. You will not receive the same coverage in fewer days/hours, so balancing cuts to level of effort with cuts to scope can be a multi-dimensional strategy. Cutting the level of effort means you sacrifice depth for breadth.
- Reporting: In proposals that include retesting, multiple revisions, or multiple forms of report, it may be possible to drop those line items to reduce overall cost. Before taking that step, make sure you’ll have the right artifacts to drive your business outcomes.
- Relationship: Some vendors may provide a discount on volume. Negotiating multi-assessment contracts (whether multi-year or different services) can decrease cost-per-assessment.
It is important to vet the proposals you receive and to not just take the lowest bid. Some considerations:
- Prefer explicit proposals: Vendor proposals should clearly document goals and capture contracted level of effort in terms of person-hours/days/weeks. Ideally, the proposal will also clearly state the methodology to be applied, as well as timelines for delivery.
- Compare like-for-like: One common trap new clients fall into is going for the lowest quote, without comparing what they’ll get. Underscoping proposals in order to provide the lowest quote is a common tactic for less scrupulous vendors. I’d recommend suspicion of any quotes that significantly deviate from the pack. Go back to those vendors to interrogate how they can offer cost savings over competitors.
- Think long-term: When reviewing proposals, it can be valuable to take a step back. If your organization is large or scaling, you should look for a vendor who can support that growth. You may also want your vendor to support both security and compliance. This allows you to develop a partnership, not just commission one-off testing. Working with a single vendor for more of your assessments opens up the potential for economies of scale and allows you to tap more strategic advisory work based on a history of collaboration.
- Vet specific consultants: If you’re a large enterprise client, you may have the leverage to specifically approve the consultants who will be working on your projects. I have been required to interview before onboarding onto engagements for some large clients. However, I’ve also seen confusion over why it isn’t generally possible to choose which consultants will work on a project, or to review bios. There are a few reasons:
- Often, a specific consultant is unavailable due to scheduling constraints. Sometimes a vendor will have a consultant booked on long-term or recurring engagements occupying their calendar for months.
- You, the client, may not actually be a good judge of skill matching. Not all consultants deliver all types of engagements. A good vendor will pair consultants with projects where they can have the most impact.
- Consultancies are incentivized to balance utilization across their workforce. This isn’t, as clients suspect, to foist less experienced or less competent consultants on clients. But it can certainly be a means of ensuring junior consultants are applied to reasonable engagements.
- Frequently, a client may want specific consultants - whether well-known or a personal connection - written into contracts. Vendors are unlikely to lock in a consultant’s schedule, especially for a deal that could easily fall through.
How does this triad play into proposals?
If you are flexible on timeline, you can gain on both cost and quality. Trying to book in Q4 is especially tenuous - vendors’ calendars are crowded, and getting on them can be expensive.
Similarly, while good services come at varying prices, be suspicious of the speed and quality behind an abnormally cheap quote.
One of the more effective ways to vet a consultancy is a reference check, similar to leveraging your network for recommendations. Reviewing example reports is a start, but ideally you will find a reference client through your network. If not, you can often request that the vendor put you in touch with someone. This may not be feasible if you’re pursuing a small deal - vendors are hesitant to ask too much of their reference clients - but it can certainly be demanded at a certain deal size. Remember that the vendor is curating their image here, but a signal remains. Case studies are another option, albeit marketing material.
Contracting security assessments
The amount of paperwork involved in buying an assessment can be surprising. Especially so for technical people buying or selling their first engagement. This section will provide a survey of the elements to expect in the contracting process. However, I recommend consulting a lawyer before signing any document!
PSA: German pentesting outfit Cure53 has open-sourced a set of standard contract templates for penetration testing
(Mutual) Non-Disclosure Agreement - (m)NDA
A mutual NDA is typically signed early in the process. This allows the client to candidly share sensitive information for scoping and procurement. The vendor can also confidentially share their proprietary methodologies and other internal information. Depending on the relative size of the companies, either party may provide standard language. For more on NDAs, see bitmovin’s Universal NDA project.
Master Service Agreement - MSA
The MSA is the overarching document establishing the terms and conditions for the deal. This document will often be red-lined and negotiated. It is referenced in all future Statements of Work (SOWs). For an example, see Secureworks’ MSA - available publicly at secureworks.com/msa.
Significant terms include:
- Service Fees; Taxes; Invoicing and Payment
- Proprietary Rights
- Warranties; Limitation of Liability; Insurance
The vendor will frequently draft the MSA and provide it for review. Some enterprise clients require negotiations to start from their own standard MSA terms.
Statement of Work - SOW
A Statement of Work is the main legal document and serves as the formal contract for a specific engagement. Once you’ve gone through your RFP and selected a vendor, you’ll execute the SOW. The folks at Triaxiom Security have provided a good breakdown of what elements you should expect:
- A detailed scope for the proposed assessment, including level of effort
- Agreed deliverables for the project, including report formats and any retests or revisions
- Key dates for kickoff, completion, and delivery
- Price, including payment terms (superseding the MSA)
Rules of engagement
The statement of work may also contain rules of engagement, or it may be a separate document. These will be more stringent when the client is a regulated organization. The rules of engagement can define:
- Sensitive data handling
- Country of operation
- Country of data residency
- Data retention period
- Security of data storage
- Secure communication mechanisms
- Escalation and pivoting
- Red-line out-of-scope systems
- Prohibited activities such as social engineering and targeting employee desktops
- Emergency contact/notification protocols (contact for functional issues, logic issues)
The security services industry has a standard set of specific risks and implications that drive common contractual clauses. Any vendor you receive a proposal from should have standard language. Clauses commonly include:
- Report sharing/Intellectual Property
If the customer wants to modify the scope or scale mid-assessment, a change order is required. However, consider that the vendor may not be able to accommodate an immediate expansion; due to scheduling constraints, extra time may require a delay or even re-engagement. Work with your vendor to plan a flexible engagement if you expect a change mid-flight.
Preparing for your assessment
Before anything else, preparation is the key to success.
- Alexander Graham Bell
Sufficient preparation is key to getting value from your security engagement. As a consultant, I was astounded by how many clients spent thousands of dollars after kickoff while we waited for access or documentation. Preparation has multiple components that contribute to getting your vendor up and running.
The first set of requirements are organizational:
- Internal alignment: Security services engagements can be fraught, especially if they are allowed to turn adversarial. Before the vendor starts, make sure you garner buy-in from stakeholders across your organization. A crucial element of this communication should be to warn the blue team, unless it is a covert assessment. So long as testing the organization’s response to an attack is not a goal, avoid putting unnecessary strain on your coworkers.
- Communication channels: Define and document a single channel for the consultants to interface with your organization. This will ensure you can easily track progress, respond to questions, and manage dispatch into the rest of your company. You should also set up an escalation policy and channel. For example, you might want critical issues reported immediately and fed into your vulnerability management process.
- Known risks: Consider documenting and sharing your internal risk assessments, threat models, and previous assessment results with your vendor. This level of transparency is uncommon, but can substantially improve their ability to hit the ground running and speak directly to your most business-critical risks.
Technical preparation is also necessary before your vendors can start any assessment. For a deep dive into why, I recommend Jerome Smith of NCC Group’s The Why Behind Web Application Penetration Test Prerequisites.
One often overlooked step is to resolve any low-hanging fruit and outstanding issues before commissioning a new engagement. Especially for anything beyond a vulnerability scan, you should preemptively conduct a scan and resolve obvious issues. This may seem redundant with the services you’re commissioning, but it is a way to optimize the engagement value. Otherwise, your vendor will spend time reporting these findings at a much higher cost-per-bug. Do you want to pay someone to tell you things you already know?
Network assessments generally must be conducted in production, but where possible you should instead set up a test environment. This environment should duplicate production, including infrastructure and integrations. This is most important for security-sensitive features, such as single sign-on, or those called out in threat models and risk assessments. Additionally, make sure you properly configure and enable relevant features. Provide access to all possible roles (including internal ones) and pre-populate test data where possible.
That last item is fairly uncommon, but do you want to be spending consultant day rates on “true positive pentest noise”?
On a related note, to the extent possible you should institute a change freeze on the target during the test period. This will avoid any disruption to testing and can clarify (via versioning) root cause analysis.
Finally, be sure to disable undifferentiated or out-of-scope defense-in-depth controls.
Assessing whether defined security controls are functioning is not a valuable use of penetration testing resources.
Avoiding or bypassing these controls can consume significant time while providing no value when they are not the target. Common examples include web application firewalls, IP allowlisting, and risk-based authentication. However, be sure to document the disabled controls so they can be considered when assessing the risk of findings.
There are several important components of onboarding that will make your vendor’s personnel most effective. This should start with any hardware and software needed for their task. It can be efficient to start from your standard employee onboarding and profile. For vendors, it is also more likely you will need to facilitate remote access. The larger your organization, the earlier you may need to start this process. In enterprises, it can require involvement from legal, human resources, and IT. Beyond general setup, you should also ensure you get test accounts and access provisioned. Any additional demos, developer and customer documentation, and code you can provide can expedite and increase the depth of coverage of the engagement.
Tips from the survey
Here’s what other professionals recommend:
“Need about double the resource to manage than you think!”
“Always have technical staff work with your procurement team – and that’s on both sides (vendor and client).”
“Have one primary person for all contact with vendor”
“Leverage business initiatives (e.g. new product launches) to fund pentest procurement as a capital expenditure, leverage ROI to bring it to routine operational budget for the rest of the landscape.”
After the engagement
Above all else, how you process the results of your engagement is where most of the value is derived.
Make sure your vendor cleans up after themselves. More than once on a security assessment I stumbled on escalated access or shells left behind by a previous assessment. Dr. McGrew has discussed extensively how vendors can reduce these risks, for example in his 2016 Black Hat talk “Secure Penetration Testing Operations: Demonstrated Weaknesses in Learning Material and Tools”
Receiving the deliverable
Almost all assessments result in a deliverable, the concrete artifact of the assessment and its outcomes. The deliverable is crucial. It is the main part of the assessment that remains beyond the time-boxed period.
Consider negotiating progressive deliverables for larger engagements. Instead of needing to process the results all at once, you can get a jump on analysis and remediation. Milestone deliverables can also drive confidence in vendor performance. In the best case, you can ask for re-prioritization or pivot targeting based on progress.
The deliverable is often paired with a readout. This is a meeting or presentation focused on communicating the outcomes, providing any necessary context, and formally marking the close of the assessment. As a final touch point with the technical resources from your assessment, it is another opportunity to resolve any open questions and garner additional value. It is also the best opportunity to impart additional business context that may require revised ratings or front matter.
Consider asking the following questions at your next readout:
- If you had more time, where would you dig further or look next?
- What would you recommend we do differently for our next engagement?
- Were there any trends you observed? Are there any systemic mitigations you’d recommend?
- Were there any areas that were particularly well hardened?
- How does our posture compare to the (industry/benchmark/average engagement)?
Your vendor is a resource - take full advantage! One massive benefit of consultants is that they can see inside more companies in a year than most security practitioners will in a career.15
Security consulting firms are the only way you have to know how you compare to others in your field as only a consulting firm can combine trust-based data acquisition with identity-protecting pooling of that otherwise unobtainable comparability data.
- Penetration testing: a duet16
Reading an assessment report
In order to act on your assessment results, you need to understand how to navigate the report. You can refer back to [Public reports](#public-reports) for examples.
The average report will contain:
- Assessment details: Scope, level of effort, tools and methodology, vendor and consultant information
- Executive summary: A (often narrative) recounting of the overall outcome, including risk posture, findings of note, and executive or meta recommendations
- Findings: Summarized, and then each with details, impact and risk, reproduction information, and remediation guidance
- Appendix: Expanding on contents of the report, such as additional information on bug classes, detailed remediation steps, custom tools or scripts developed, or raw data from testing
Generally, the idea is that the report can be decomposed. Sections present different levels of detail for different audiences. The assessment details and executive report should target leadership, providing the high level outcomes in a risk oriented and narrative fashion. The findings summary is generally suited to security leadership, to understand the types and quantities of vulnerabilities found, and their relative risk. Finding details are then ideal for line-level engineers who have to address the vulnerabilities, with all the details necessary to understand, reproduce, and fix the issues.
From the Survey: what do vendors do when there are no findings?
- They manage client expectations, and note limitations
- They provide extensive details on testing methodology or coverage
- They strive to provide assurance of diligence
- They highlight defenses encountered and “true negatives”
- Some firms perform internal quality control investigations
- They follow up and validate the results with client expectations (as a sanity check)
- They try to add value via additional conversation on “other security related observations and best practices that can be deployed given the lack of findings”
Ingesting the results
Processing the outcome of a security assessment should be a team effort. To smooth this process:
Manage follow up work through your standard processes: You should already have vulnerability management procedures. This should include clear ownership and SLAs for remediation. Ensure you feed assessment results into this process. It allows you another opportunity to exercise that flow, centralizes tracking and reporting, and takes advantage of existing organizational infrastructure.
Triage all findings: It is important to reassess all findings, no matter the vendor. You have substantially more context on your organization and its risks. This is your opportunity to re-rate vulnerabilities and ensure they are prioritized accordingly. Beyond risk, you should also perform root cause and variant analysis. Go from an assessment that can identify an instance of a vulnerability to a posture that has killed the bug class. For each finding, as with all risks, you will have the choice to fix, mitigate, or accept it. At this point, you should also start estimating fix difficulty and level of effort. Mature organizations may find that the assessment reinforces their pending remediation efforts.
Consider requesting a parsable report: Assessment reports are generally delivered as either a document or presentation. Depending on your internal process for handling vulnerabilities, it can be helpful to request a different format. Enterprise clients will go so far as to have vendors file tickets within their issue tracker. At the very least, a CSV file (or similar) of the findings will let you programmatically import them. Set aside capacity to handle this process, including remediation within your established SLAs.
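As an illustration of why a parsable format helps, here is a minimal sketch of ingesting a findings CSV and mapping each finding onto an internal remediation SLA. The column names (`title`, `severity`, `asset`) and the SLA values are assumptions - match them to whatever schema you negotiate with your vendor:

```python
import csv
import io

# Hypothetical findings export, standing in for a vendor-provided CSV file.
SAMPLE = """title,severity,asset
SQL injection in search,High,app.example.com
Verbose error messages,Low,api.example.com
"""

# Map vendor severities onto your internal remediation SLAs (in days).
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def ingest(csv_text):
    """Parse the CSV and attach an SLA to each finding for tracking."""
    findings = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        findings.append({
            "title": row["title"],
            "asset": row["asset"],
            "severity": row["severity"],
            "sla_days": SLA_DAYS.get(row["severity"]),
        })
    return findings

for f in ingest(SAMPLE):
    print(f["title"], "->", f["sla_days"], "day SLA")
```

Even a ten-line script like this beats re-keying findings from a PDF, and the same structure feeds directly into ticket creation in your issue tracker.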
Looking to the future
Each security assessment is only a step in your larger security journey. Minimizing the transactional nature of a point-in-time test maximizes value. When closing out an assessment, take the opportunity to reflect and revise your strategic plan.
How’d it go?
How do you decide if you had a good experience with the vendor?
Generally, answering this question is easiest with obviously bad outcomes - for example, poor communication or blatant false positives in the report.
If the assessment is successful on its face due to the quality of findings or vendor interactions, it can be harder to critique. Here are a few prompts to consider:
- How were their answers to the questions in the readout? Did they demonstrate a high level of diligence in testing? Did they show a strong understanding of your risk profile and business?
- How was the report quality? Was the narrative clear and digestible? Did the report show a sufficient depth of coverage? Were findings reproducible and contextualized? Were trends or areas of future discovery identified?
- Were there false positives or false negatives? How did the vendor handle false positives? What were the provided reasons for any issues? How egregious are the mistakes?
- Are there existing open vulnerabilities within the scope? How did the vendor perform against known risks? If you had outstanding vulnerabilities, were they reported? Were these “canary bugs” rediscovered in the assessment?
- How do you feel about price for value given the category and quality of the vendor chosen? Was there a clear benefit (presuming a non-bargain vendor) over a vulnerability scan?
- How was the vendor’s communication? Did you feel prepared by the kickoff? Were you able to easily reach the testers? Were communication rules of engagement respected? Did the readout answer any questions?
Common testing cadences
Whether or not you felt your assessment was a success, the next element of planning focuses on scheduling. When creating an assessment calendar, the first question is frequency. For the average organization, annual assessments are a good starting point. Other common patterns include quarterly, development-cycle aligned (e.g. per-release), and compliance aligned (e.g. pre-audit).
Scope and vendor
You also need to reconsider scope and vendors. For scoping, retro the most recent assessment on both breadth and depth. Consider any limitations listed in the assessment report. Retargeting largely similar scopes can improve overall coverage of deficiencies. Targeting more diverse scopes can ensure broad, minimal coverage.
There are competing beliefs on using the same or different vendors for your assessments. Vendor rotation for assessment traces back to at least a 2010 SANS whitepaper. Vendors including Atredis, SecureIdeas, and Triaxiom Security have blogs on this topic that conclude by recommending vendor consistency. A review of the tradeoffs of rotating vendors:
Pros of rotation:
- Provides comparison point for cross-vendor performance and value
- Reasonable if you believe vendors work hardest for new clients and that firms are of fungible quality
- May be recommended or required by policy or auditors
- Different firms may have different specialties and methodologies that surface different vulnerabilities
Pros of repetition:
- Decrease ramp up time on subsequent engagements
- Improved communication and project management from long term relationship
- Improved technical and business impact due to experience with target
- Potential cost-savings for volume or relationship
- Avoids the risk of underperformance that comes with a new vendor
A compromise approach also exists. If you’re happy with your vendor, you can stick with them but request they rotate who performs the testing. This balances fresh eyes and historic context. This works so long as the vendor is sufficiently staffed relative to your assessment frequency.
Scaling your program
There are special considerations for enterprise assessment programs. Large companies have more significant requirements. They also tend to have additional budget and leverage with vendors. Focusing on a small set of vendors and using large contracts allow the enterprise to optimize their assessment program. They can use this leverage on price, scheduling, and consultant selection. At scale, project management also becomes an outsized element of the program. This includes how the vendor tracks and passes context, range of skill sets on offer, and how much the vendor can manage the program with minimal overhead.
Standardization will be essential as you scale. It ensures repeatability, consistency, and a positive internal customer experience. One of the most sizable decisions will be when to bring penetration testing in house. Your goal should be to staff internally in a way that allows increased value from internal alignment, heavy utilization, and decreased costs.
You will also need to define ownership and standardize the intake process. This project management function can have all assessments scheduled and arranged by a dedicated team, or can provide utilities and guidelines for distributed management. Guidelines should include:
- What needs to be assessed and how frequently - per the Cobalt.io State of Penetration Testing 2021 report, the average respondent assessed "63% of their application portfolio."
- How projects are prioritized (on a risk basis), with default scope boundaries such as product, feature, or team established in advance
- Standards for time and budget, such as "all application versions shall have a 1-week assessment, pre-approved under $15,000."
Unify the intake process and leverage your size to demand standard and automatable reporting formats from vendors. Develop internal triage guidelines on severity. Deploy internal guidance on remediation for common vulnerability classes using consistent mechanisms.
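As a sketch of what a standard, automatable reporting format enables: the snippet below normalizes a hypothetical vendor finding into an internal triage level. The field names, severity labels, and triage scale here are illustrative assumptions, not an industry standard.

```python
# Sketch: normalizing vendor findings into an internal triage level.
# Field names and the P1-P5 scale are invented for illustration.
from dataclasses import dataclass
from typing import Optional

# Vendor severity label -> internal triage level (example mapping)
SEVERITY_MAP = {
    "critical": "P1",
    "high": "P2",
    "medium": "P3",
    "low": "P4",
    "informational": "P5",
}

@dataclass
class Finding:
    title: str
    vendor_severity: str
    asset: str
    cwe: Optional[str] = None

    def triage_level(self) -> str:
        # Unknown vendor labels route to manual review rather than
        # being silently dropped or guessed.
        return SEVERITY_MAP.get(self.vendor_severity.lower(), "needs-review")

finding = Finding(title="Stored XSS in comments", vendor_severity="High", asset="webapp")
print(finding.triage_level())  # "P2"
```

With every vendor's report parsed into a record like this, triage guidance and remediation playbooks can key off the internal severity level instead of each vendor's house style.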
Develop metrics for your security assessment program. You can bootstrap with metrics on coverage of your portfolio, finding characteristic measures (risk, impact, exploitability), and finding density. Your existing vulnerability management program should account for measurement such as mean time to resolution. 17
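A minimal sketch of bootstrapping those metrics, assuming findings are available as simple records (the asset names, dates, and portfolio size are invented for illustration):

```python
# Toy bootstrap metrics for an assessment program: portfolio coverage,
# finding density, and mean time to resolution. All data is illustrative.
from datetime import date

assets_in_portfolio = 40                                  # total assets tracked
assessed_assets = {"api", "webapp", "billing", "mobile"}  # assessed this cycle

findings = [
    {"asset": "api",    "opened": date(2023, 1, 10), "closed": date(2023, 2, 1)},
    {"asset": "api",    "opened": date(2023, 1, 12), "closed": date(2023, 1, 20)},
    {"asset": "webapp", "opened": date(2023, 3, 5),  "closed": date(2023, 4, 4)},
]

coverage = len(assessed_assets) / assets_in_portfolio          # share of portfolio assessed
density = len(findings) / len(assessed_assets)                 # findings per assessed asset
mttr_days = sum((f["closed"] - f["opened"]).days for f in findings) / len(findings)

print(f"coverage={coverage:.0%} density={density:.2f}/asset mttr={mttr_days:.1f} days")
# coverage=10% density=0.75/asset mttr=20.0 days
```

Even crude numbers like these give the program a baseline to trend against quarter over quarter.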
Force-multiply the value of your assessments and 10x your security. Do not treat them as transactional bug hunts. Use the opportunity to identify trends in your security posture, build guardrails and secure wrapper libraries around differentiated weaknesses, and kill bug classes at scale.
From the Survey: How are companies calculating return on investment?
- Many don’t
- Some try to judge quality:
- “Look at the overall quality from the pentest provider over time (can’t do it for an individual assessment)”
- “Depth of analysis and quality of analysis that goes beyond scanning tools.”
- “Quality of findings, specifically those that are scalable across our company.”
- “Quality of the assessment, quality of the findings”
- Others look at impact:
- “identify and close critical or high bugs … a general sentiment from those who hear about it.”
- “Risk reduction” / “Aggregate organizational risk identified”
- “Value in contributing to sales success” / “$ business lost from potential risks”
- “1. grading the visibility to areas needing improvement, 2. grading the efficacy of monitoring and our response capabilities”
Security service procurement is often done poorly. Security is a market for lemons, and assessments are emblematic of this problem. Buyers struggle to define their motivations, to find and distinguish vendors, to contract, to partner in delivery, and to close and evolve the assessment in a way that drives business value. Information asymmetry runs rampant, and much of the available guidance is biased by its origin. Procuring a low-quality assessment results in buyer's remorse, wasted budget, and residual unknown risks.
I hope this guide helps you in building a successful security assessment program at your company. Constantly interrogate and evolve your approach and good luck keeping pace with information security risk.
Per Grand View Research’s “Cybersecurity Services Market Size, Share & Trends Analysis Report By Service Type, By Professional Services (Penetration Testing, Training, Consulting & Advisory), By Managed Services, By Industry Vertical, And Segment Forecasts, 2021 - 2028” ↩
If you haven’t heard of the Vista penetration test, I highly recommend reading it. It was, in my opinion, one of the seminal moments in the history of the security services industry. Slide 26 is a good place to start ↩
OWASP has compiled a variety of these frameworks as part of its Web Security Testing Guide project. This includes the Penetration Testing Execution Standard, PCI DSS Penetration Testing Guidance, Penetration Testing Framework, NIST 800-115, and Open Source Security Testing Methodology Manual. ↩
Mulcahy, R. (2009). PMP Exam Prep, Sixth Edition. USA: RMC Publications, Inc. ↩
Blake, Gary & Bly, Robert W. (1993). The Elements of Technical Writing. New York: Macmillan Publishers. p. 100. ISBN 0020130856. ↩
In a fixed price assessment, there is a flat price for the outcome. In time and materials, there is an hourly or daily rate that is agreed upon, with costs based on consumption and for additional materials. ↩
Geer, D., & Harthorne, J. (2002). Penetration testing: a duet. Proceedings of the 18th Annual Computer Security Applications Conference. doi:10.1109/csac.2002.1176290 ↩
Check out Douglas W. Hubbard and Richard Seiersen’s How to Measure Anything in Cybersecurity Risk, which is the canonical book on measuring risk, developing metrics, and determining return on investment. ↩