Research & Articles

Sharing what the data shows us.

Michael Roytman

The Knowing Machine

Dan Geer posed the right question at Black Hat in 2014: are vulnerabilities sparse or dense? If sparse, you can patch your way to safety. If dense, patching without prioritization is Sisyphean labor, except the boulder keeps growing. Eleven years later, writing with Dave Aitel in Lawfare, he conceded the terms and moved to the next problem. What the industry needs, he argued, is a "knowing machine" that converts hazards into risks, not a "pointing machine" that merely enumerates flaws and screams equally about all of them.

Read More
Michael Roytman

The 500 Organization Reality Check

This is part five of our series on empirical exposure management. Data from 500+ organizations in P2P Vol. 8 showed that 95% of enterprise assets already carried a top-tier exploitable vulnerability back when the CVE firehose ran at 18,000 a year; at 48,000+ in 2025 and climbing, that exposure has only compounded while remediation capacity has not.

Read More
Michael Roytman

Using ServiceNow? We’ve got you covered.

Empirical Security’s new ServiceNow Vulnerability Response app brings predictive exploit intelligence directly into ServiceNow workflows so security teams can move from reactive vulnerability management to forward-looking remediation.

Read More
Michael Roytman

When Headcount Doesn’t Help

The practical implication is a reframe of how programs are governed. Most teams measure remediation volume—tickets closed, patch compliance rates, mean time to remediate. These metrics describe throughput. They do not measure the marginal risk reduction of each remediation slot. Under constrained optimization, what matters is not how many findings you close but whether each slot is allocated to the vulnerability that reduces exploitability probability the most at the margin.
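The allocation idea above can be sketched in a few lines. This is a hypothetical illustration, not the post's actual method: with a fixed number of remediation slots, a greedy policy fills each slot with the finding whose fix removes the most exploitation probability at the margin. CVE names and probabilities here are made up for the example.

```python
# Hypothetical sketch: fill a fixed number of remediation "slots" by
# marginal risk reduction, not by raw ticket count. All data is illustrative.

def allocate_slots(findings, slots):
    """Return the CVE IDs whose remediation removes the most
    exploitation probability, greedy on marginal risk reduction."""
    ranked = sorted(findings, key=lambda f: f["p_exploit"], reverse=True)
    return [f["cve"] for f in ranked[:slots]]

findings = [
    {"cve": "CVE-A", "p_exploit": 0.92},
    {"cve": "CVE-B", "p_exploit": 0.05},
    {"cve": "CVE-C", "p_exploit": 0.61},
    {"cve": "CVE-D", "p_exploit": 0.02},
]

print(allocate_slots(findings, slots=2))  # ['CVE-A', 'CVE-C']
```

A throughput metric would count all four closures equally; under this framing, the two slots spent on CVE-A and CVE-C account for nearly all of the risk reduction available.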

Read More
Ed Bellis

Say Goodbye to Kenna — Say Hello to Local Models at Scale

Last week, Cisco announced the End of Life of Cisco VM (Kenna Vulnerability Management), a company and product I spent the better part of 13 years building. Needless to say, this took me through the whole range of emotions, but it also served as a great occasion to reflect on that time.

Read More
Joe Clay

New Features: Critical Indicators & Known Exploitation Calendar Heatmap

We built critical indicators to explain the reasoning behind any CVE's Empirical Score (a 0%–100% probability of real-world exploitation). Every CVE we analyze is modeled against over 2,000 data points. We took these model weight contributions and grouped them into the following categories: Chatter, Exploitation, Threat Intelligence, Vulnerability Attributes, Exploit Code, References, and Vendor.
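The grouping step can be pictured with a toy roll-up. This is a hypothetical sketch only: the feature names, the category mapping, and the contribution values below are invented for illustration, and the real model draws on 2,000+ data points.

```python
# Hypothetical sketch: roll per-feature model contributions up into
# named indicator categories. Mapping and values are illustrative.

CATEGORY_OF = {
    "chatter_mentions": "Chatter",
    "observed_in_the_wild": "Exploitation",
    "metasploit_module": "Exploit Code",
    "cvss_av_network": "Vulnerability Attributes",
}

def group_contributions(contribs):
    """Sum each feature's weight contribution into its category bucket."""
    totals = {}
    for feature, weight in contribs.items():
        category = CATEGORY_OF.get(feature, "References")
        totals[category] = totals.get(category, 0.0) + weight
    return totals

contribs = {
    "chatter_mentions": 0.10,
    "observed_in_the_wild": 0.45,
    "metasploit_module": 0.20,
}
print(group_contributions(contribs))
```

Collapsing thousands of raw contributions into seven buckets is what makes the score explainable at a glance: an analyst sees which category drove the number rather than a wall of feature weights.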

Read More
Michael Roytman

Risk Model Slop

In cybersecurity risk scoring, “risk model slop” is the quiet but widening gap between what a probability means in a model and how vendors distort it once it leaves its original calibration.

Read More
Jay Jacobs

Finding New Exploits with A Bespoke Model


Read More
Jay Jacobs

It’s Not About Making a Scoring System

“Why do we need another scoring system?” is not the best question to ask. Instead we need to get accustomed to asking about performance. This post walks through an example from our latest improvement to our exploit code classifier.

Read More