When Headcount Doesn’t Help

This is part four of our series on empirical exposure management. This post focuses on a single chart: Figure 19 from Prioritization to Prediction Vol. 8. It is, we think, the most actionable benchmark in the exposure management literature.

Most exposure programs are run like a throughput problem. The implicit theory of change is straightforward: fix more vulnerabilities, reduce more risk. The data says that is an incomplete model. In practice, exposure is a constrained optimization problem. You have finite remediation capacity. What you pick matters as much as how fast you patch. Figure 19 puts numbers on that claim in a way that is hard to argue with.

The Simulation Setup

The P2P Vol. 8 methodology starts from observed enterprise data, not hypotheticals. Each organization enters the simulation with its actual open vulnerability inventory. A capacity constraint is imposed based on empirically grounded thresholds: a low-capacity organization closes roughly 6.6% of open vulnerabilities per month, a median organization closes about 15.2%, and a high-capacity organization closes around 27.1% (you may recall these from our last article). These are not theoretical bands—they reflect the distribution of mean monthly close rates measured across real enterprises.
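One way to see what these tiers imply: if a fixed fraction of the currently open inventory closes each month, the backlog decays geometrically. The sketch below assumes exactly that (the starting backlog of 10,000 is an illustrative number, not from the report):

```python
# Capacity tiers as empirical monthly close rates from P2P Vol. 8.
# Assumption: each tier closes a fixed fraction of its *currently open*
# vulnerabilities per month, so the open backlog decays geometrically.
CLOSE_RATES = {"low": 0.066, "median": 0.152, "high": 0.271}

def open_after(months: int, tier: str, backlog: float = 10_000) -> float:
    """Open vulnerabilities remaining after `months` at a tier's close rate."""
    return backlog * (1 - CLOSE_RATES[tier]) ** months

for tier in CLOSE_RATES:
    print(tier, round(open_after(12, tier)))
```

Even a year of median-capacity throughput leaves a substantial backlog open, which is why which items fill each month's slots matters so much.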

The simulation then applies different prioritization strategies within each capacity tier and measures a single output: exploitability reduction relative to random selection. How much better you do than chance is the cleanest executive benchmark available, because it answers the only question that matters in exposure management: what do I get, in risk reduction, for the remediation I can actually afford?

The Punchline: Strategy Dominates Capacity

At median capacity, CVSS-based prioritization performs roughly the same as random selection. This is worth sitting with. An organization that has invested in tooling, process, and headcount to prioritize by CVSS severity is capturing almost no benefit from that investment relative to a coin flip. CISOs have long intuited that severity isn't a strategy. Figure 19 quantifies the intuition.

The more striking result is what happens when you compare strategies across capacity tiers. Exploit-code-driven prioritization at low capacity outperforms CVSS-based prioritization at high capacity by a factor that should end most budget conversations about hiring more patchers versus improving the signal those patchers act on. Adding headcount will not rescue a weak prioritization function. The data is unambiguous on this.

Reading the Decision Matrix

Figure 19 effectively partitions organizations into four cells defined by the intersection of capacity and strategy quality. The exploitability reduction multiples tell the story directly:

[Figure 19: exploitability reduction relative to random selection, by capacity tier and prioritization strategy]

With low capacity and CVSS-based prioritization, an organization achieves roughly a 2× improvement over random. Quadrupling that capacity while keeping the same poor strategy gets you to around 6×—meaningful, but expensive. Switching to exploit-code-driven prioritization at low capacity jumps the outcome to approximately 22×. Combine high capacity with exploit-code strategy and the figure reaches roughly 29×.

The ratio between the best and worst outcomes is approximately 14:1. The ratio between the two capacity extremes, holding strategy constant at CVSS, is only 3:1. Strategy variance dwarfs capacity variance. That is the core governance insight this chart delivers, and it belongs in every board-level exposure management review.
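The four cells and the two ratios can be laid out in a few lines. The multiples are approximate values read from the chart as cited above, not exact figures from the underlying data:

```python
# Approximate Figure 19 cells: exploitability reduction as a multiple over
# random selection, keyed by (capacity tier, prioritization strategy).
reduction_multiple = {
    ("low", "cvss"): 2,
    ("high", "cvss"): 6,
    ("low", "exploit_code"): 22,
    ("high", "exploit_code"): 29,
}

# Best cell vs. worst cell: strategy + capacity combined.
best_vs_worst = reduction_multiple[("high", "exploit_code")] / reduction_multiple[("low", "cvss")]
# Capacity extremes alone, holding strategy fixed at CVSS.
capacity_only = reduction_multiple[("high", "cvss")] / reduction_multiple[("low", "cvss")]

print(best_vs_worst, capacity_only)  # strategy variance dwarfs capacity variance
```

Note also that exploit-code prioritization at low capacity (22×) beats CVSS at high capacity (6×) by well over 3×, which is the headcount-versus-signal comparison from the previous section.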

Why Exploitability Is the Right Signal

The simulation result makes intuitive sense once you accept the underlying distributional reality of vulnerability risk. Nearly 95% of assets carry at least one highly exploitable vulnerability. For any organization with two or more assets, it is near-certain that at least one vulnerability in their environment is being—or will shortly be—exploited somewhere in the wild. The exposure surface is effectively universal. The question is never whether exploitable vulnerabilities exist; it is whether the organization is closing the ones that are actively being weaponized before they are used against it.

CVSS was designed to describe the intrinsic properties of a vulnerability in isolation. It was not designed to predict exploitation probability in a specific organization's environment. Exploit code availability, active exploitation in the wild, and prevalence of the vulnerability across the organization's asset inventory are the signals that predict exploitation probability. The model architecture that underlies P2P's risk score treats these as primary features; CVSS variables, when tested, did not improve predictive performance and were excluded from the final model. The simulation's outcome reflects that signal quality difference directly.

Treating Remediation as Constrained Optimization

The practical implication is a reframe of how programs are governed. Most teams measure remediation volume—tickets closed, patch compliance rates, mean time to remediate. These metrics describe throughput. They do not measure the marginal risk reduction of each remediation slot. Under constrained optimization, what matters is not how many findings you close but whether each slot is allocated to the vulnerability that reduces exploitability probability the most at the margin.

That reframe has operational consequences. Prioritization queues need to be built from exploitability signals, not severity thresholds. Capacity investment decisions need to model the expected risk reduction per additional remediation unit. And program performance reporting needs to track exploitability reduction (not patch volume) as the primary outcome metric.
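A queue built this way is simple to express. The sketch below is a minimal illustration of filling capacity-constrained slots from exploitation signals; the field names (`exploit_code`, `in_the_wild`, `prevalence`) are illustrative stand-ins, not P2P's actual model features or weights:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    exploit_code: bool   # public exploit code exists
    in_the_wild: bool    # observed active exploitation
    prevalence: int      # affected asset count in this environment
    cvss: float          # kept for reference; deliberately not used to rank

def priority(v: Vuln) -> tuple:
    # Rank by exploitation signals first, then blast radius. CVSS is absent,
    # mirroring the finding that it added no predictive value.
    return (v.in_the_wild, v.exploit_code, v.prevalence)

def remediation_queue(vulns: list[Vuln], capacity: int) -> list[Vuln]:
    """Fill this month's remediation slots with the highest-signal findings."""
    return sorted(vulns, key=priority, reverse=True)[:capacity]
```

Under this ordering, a low-severity vulnerability being exploited in the wild outranks a critical-severity finding with no exploitation signal, which is exactly the inversion that CVSS-based queues never make.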

Figure 19 makes this concrete. It separates programs that optimize from programs that process. The gap between them, measured in exploitability reduction, is roughly an order of magnitude.

Next

Capacity is King