EPSS V5 Is Here
It’s an exciting day at Empirical: on June 15 we’re releasing V5 of EPSS, the scoring system that our team helped create and which was recently recommended by Anthropic to help teams prepare for an AI-accelerated tsunami of vulnerabilities.
The new version of the Exploit Prediction Scoring System represents a 23% improvement over the prior model, continuing the work we’ve been doing to make vulnerability prioritization more accurate, more useful, and more grounded in how exploitation actually happens. We’ll unpack that more in a moment.
But let’s take a step back first. EPSS was built to answer a simple question that security teams have been wrestling with for years: which vulnerabilities are actually likely to be exploited? That question has only become more urgent as vulnerability volume has continued to rise and remediation teams are being asked to cover more ground with the same people and resources.
We think Anthropic was right to recommend EPSS. If the number of findings continues to grow, teams need a practical way to decide what deserves attention first. Severity alone has never been enough to do that well, and that's always been the problem EPSS was designed to solve. A severity-based approach, such as remediating every CVE with a CVSS score of 7.0 or higher, can force organizations to act on more than half of all published vulnerabilities, even though only a small fraction of that work is focused on vulnerabilities that will actually see exploitation.

EPSS tackles the problem differently. Rather than measuring severity, it predicts the likelihood of real-world exploitation, which lets teams cover comparable or greater risk with a much smaller remediation burden.
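To make the trade-off concrete, here is a minimal sketch comparing a severity-threshold policy against a probability-threshold policy. All data here is synthetic and the thresholds are illustrative assumptions; none of these numbers come from EPSS or real CVE data.

```python
# Hypothetical comparison of two remediation policies on synthetic data:
#   - severity-based:   fix every vuln with CVSS >= 7.0
#   - probability-based: fix every vuln with predicted exploitation prob >= 0.1
# The distributions below are invented for illustration only.
import random

random.seed(0)

# Each synthetic vuln: (cvss score, predicted exploitation probability, exploited?)
vulns = []
for _ in range(10_000):
    cvss = round(random.uniform(0.1, 10.0), 1)
    prob = random.betavariate(0.3, 8.0)     # heavily skewed low: most vulns rarely exploited
    exploited = random.random() < prob      # outcomes drawn from the assumed probability
    vulns.append((cvss, prob, exploited))

def evaluate(selected):
    """Return (share of all vulns remediated, share of exploited vulns covered)."""
    effort = len(selected) / len(vulns)
    caught = sum(1 for _, _, ex in selected if ex)
    total = sum(1 for _, _, ex in vulns if ex)
    coverage = caught / total if total else 0.0
    return effort, coverage

severity_policy = [v for v in vulns if v[0] >= 7.0]
probability_policy = [v for v in vulns if v[1] >= 0.1]

print("CVSS >= 7.0 policy:  effort %.2f, coverage %.2f" % evaluate(severity_policy))
print("prob >= 0.1 policy:  effort %.2f, coverage %.2f" % evaluate(probability_policy))
```

Under these invented distributions the probability-based policy selects a much smaller slice of the population while still covering most of the vulnerabilities that end up exploited, which is the shape of the argument the post is making.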
And EPSS V5 pushes that work even further. The model now scores all 318,000-plus published CVEs and includes improvements across the modeling pipeline, the underlying data, and the calibration process that turns model output into usable probabilities.
A big part of the work in V5 went into model optimization. We improved the modeling techniques and optimization methods used to rank vulnerabilities by likely exploitation. We also refined probability calibration so that the scores more closely reflect the true observed likelihood of exploitation. That’s especially important for organizations that use EPSS in quantitative risk models or as part of broader decision-making around remediation and exposure management.
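The post doesn't describe EPSS's calibration procedure, but the idea of "scores reflecting true observed likelihood" can be sketched with a standard reliability check: bin predicted probabilities and compare each bin's mean prediction to the observed outcome rate. The data below is synthetic; this is not the EPSS pipeline.

```python
# Reliability-check sketch: for a well-calibrated model, the mean predicted
# probability in each bin should match the observed exploitation rate in that bin.
# Synthetic data only; outcomes are drawn directly from the predicted probability.
from collections import defaultdict
import random

random.seed(1)

# (predicted probability, actually exploited) pairs
pairs = []
for _ in range(50_000):
    p = random.betavariate(0.3, 8.0)
    pairs.append((p, random.random() < p))

def reliability(pairs, n_bins=10):
    """Per bin: (mean predicted probability, observed outcome rate, count)."""
    bins = defaultdict(list)
    for p, outcome in pairs:
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, outcome))
    rows = []
    for idx in sorted(bins):
        members = bins[idx]
        mean_pred = sum(p for p, _ in members) / len(members)
        obs_rate = sum(o for _, o in members) / len(members)
        rows.append((mean_pred, obs_rate, len(members)))
    return rows

for mean_pred, obs_rate, n in reliability(pairs):
    print(f"predicted {mean_pred:.3f}  observed {obs_rate:.3f}  n={n}")
```

When the two columns track each other closely, a score of 0.3 really does mean roughly a 30% chance of the outcome, which is what makes probabilities usable in quantitative risk models.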
We also improved the exploit-code intelligence that feeds the model. The upstream classifier used to detect and categorize published exploit code now does a better job of identifying repositories and artifacts that may indicate elevated exploitation risk. On top of that, we made a series of smaller feature and data-feed improvements that strengthen the model overall.
Taken together, those changes produced a meaningful jump in performance. Empirical measures success using a scoring method that rewards the model for ranking genuinely exploited vulnerabilities above ones that pose no real threat; a random guess would score around 0.025, and a perfect model would score 1.0. On May 4th, 2026, the current model (V4) scored 0.514. The new model (V5) scored 0.633 on the same data, a 23% improvement in the model's ability to correctly surface the vulnerabilities that matter.
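The post doesn't name the metric, but its description (random baseline near the prevalence of exploited CVEs, perfect score of 1.0, reward for ranking exploited vulnerabilities first) matches precision-recall-style measures such as average precision. As an assumption for illustration, here is a minimal average-precision sketch, not a statement of how EPSS is actually scored:

```python
# Average precision: walk the ranked list from best to worst; each time an
# exploited item appears, add the precision at that rank; average over the
# number of exploited items. Random ranking scores near the prevalence;
# a perfect ranking scores 1.0.

def average_precision(ranked_labels):
    """ranked_labels: True (exploited) / False, best-ranked item first."""
    hits = 0
    precision_sum = 0.0
    for i, is_positive in enumerate(ranked_labels, start=1):
        if is_positive:
            hits += 1
            precision_sum += hits / i   # precision at this recall point
    return precision_sum / hits if hits else 0.0

# A perfect ranking puts every exploited CVE ahead of the rest:
print(average_precision([True, True, False, False]))   # 1.0
# A fully inverted ranking scores far lower:
print(average_precision([False, False, True, True]))   # ~0.417
```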
As Jay Jacobs puts it, every version of EPSS reflects a commitment to building the most accurate model we can. That is the standard we hold ourselves to because teams use these predictions to make real remediation decisions. Better predictions lead to better prioritization, and better prioritization leads to better risk reduction.
EPSS has become one of the most widely deployed vulnerability prioritization models in the industry. It’s integrated into security products and used by organizations across sectors to inform remediation workflows, exposure management programs, and risk reporting. At Empirical, we provide the training, infrastructure, and expertise behind the model, along with enterprise support for organizations that need higher-frequency updates, version stability, and operational support in production environments.
If you want to explore the model for yourself, EPSS scores are freely available on our site. And if you’d like to engage more deeply with the community around EPSS, the EPSS SIG at FIRST remains the main forum for practitioners, researchers, and contributors working on these questions together.
We’re proud of what V5 delivers, and we think it gives defenders a stronger foundation for making the kinds of prioritization decisions that modern security operations demand.