Local Models vs Global Scores: Why Context Isn’t Enough
Security teams often start risk‑based vulnerability management with global signals like EPSS and CVSS, then “add context” about their own assets, networks, and controls. That workflow improves how people interpret a score, but it doesn’t guarantee better decisions. The reason is structural: global models are optimized for population‑wide performance, not for the unique mix of technologies, identities, and controls inside your organization. When you rely on a model trained on everyone else’s data, you inherit their averages and their blind spots. If the goal is defensible, automated action (what to patch, when to gate, what to suppress), you need a model that is trained to understand your environment.
The first gap is about features. Global models are built from variables that are visible everywhere: publication age, CVE metadata, exploit chatter, and so on. They cannot directly encode local‑only drivers such as reachability inside your network, the true business criticality of a service, the real state of your controls, or how identities and lateral‑movement paths change exposure. Those local variables interact in ways that reshape the risk landscape from one organization to the next. Adding them after the fact as “context” on a global prediction tweaks presentation, not the underlying resolution of the model. In other words, you can annotate a coarse signal, but you can’t make it precise without changing how it’s learned.
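As a rough illustration, the sketch below trains a local exploitation-risk model on global signals and local-only drivers together, rather than bolting context onto a finished score. The column names, the findings file, and the outcome label are hypothetical stand-ins for whatever your asset inventory and incident history actually provide.

```python
# Minimal sketch: a local exploitation-risk model that learns from local-only
# drivers alongside global signals. All column names, the findings file, and
# the outcome label are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

GLOBAL_FEATURES = ["epss_score", "cvss_base", "days_since_publication"]
LOCAL_FEATURES = [
    "internet_reachable",   # can the vulnerable service be reached from outside?
    "asset_criticality",    # business-criticality tier of the host
    "edr_present",          # is EDR deployed and reporting on the asset?
    "segment_exposure",     # how many other segments are routable from this host
    "privileged_logon",     # does a privileged identity log on to the asset?
]
FEATURES = GLOBAL_FEATURES + LOCAL_FEATURES

findings = pd.read_parquet("findings_with_outcomes.parquet")  # hypothetical history
X, y = findings[FEATURES], findings["exploited_or_incident"]  # locally observed label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
local_risk = model.predict_proba(X_test)[:, 1]  # per-finding local probability
```

The point of the sketch is not the particular algorithm; it is that reachability, criticality, and control state enter the model as learned features rather than as annotations on someone else's prediction.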
The second gap is calibration. Even when a global model reports well‑calibrated probabilities in aggregate, that calibration rarely transfers intact to a specific estate. A vulnerability that shows a 5% chance of exploitation across the population might be closer to 20% on a flat network with weak controls or effectively 0% behind strong segmentation and monitoring. Decisions like patching, gating, and suppression depend on the local probability being accurate, not the global average. Achieving that requires training or at least reweighting on your own outcomes and priors, and continuously checking calibration metrics as your environment evolves.
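One lightweight way to do that, sketched below, is to learn a monotone recalibration from the global score to your locally observed outcomes and to track a calibration metric such as the Brier score over time. The input files are hypothetical historical data, one row per finding.

```python
# Minimal sketch: recalibrate a global probability against locally observed
# outcomes, then compare calibration before and after. The input arrays are
# hypothetical historical data (one row per finding).
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss
from sklearn.calibration import calibration_curve

global_prob = np.load("global_scores.npy")     # e.g. an EPSS-style probability
local_outcome = np.load("local_outcomes.npy")  # 1 if exploited here, else 0

# Learn a monotone mapping from the global score to the local probability.
recal = IsotonicRegression(out_of_bounds="clip").fit(global_prob, local_outcome)
local_prob = recal.predict(global_prob)

print("Brier, global score as-is:  ", brier_score_loss(local_outcome, global_prob))
print("Brier, locally recalibrated:", brier_score_loss(local_outcome, local_prob))

# Reliability curve: observed frequency vs. predicted probability per bin,
# worth re-running whenever the estate or controls change materially.
frac_positive, mean_predicted = calibration_curve(local_outcome, local_prob, n_bins=10)
```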
The third gap is prescriptive power. A local model lets you ask counterfactual questions that drive operations: What happens to predicted risk if we patch this class of vulnerabilities on these assets? How much does risk drop if MFA coverage increases by 20% or EDR response tightens by ten minutes? Which tickets deliver the greatest marginal risk reduction per hour of effort? Those “what‑ifs” enable shadow pricing, policy optimization, and realistic capacity planning. These are capabilities a global model can’t offer because it doesn’t represent your control graph or operational constraints. That’s the difference between scoring vulnerabilities and optimizing a security program.
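Reusing the hypothetical model and findings dataframe from the earlier sketch, the snippet below shows what counterfactual scoring and effort-aware ordering might look like. The effort estimates and the assumption that a patch removes a finding's full risk contribution are deliberate simplifications.

```python
# Minimal sketch of counterfactual scoring and effort-aware prioritization,
# reusing the hypothetical `model`, `findings`, and FEATURES from above.
import pandas as pd

def total_risk(model, findings: pd.DataFrame, features: list[str]) -> float:
    """Sum of predicted exploitation probabilities across open findings."""
    return model.predict_proba(findings[features])[:, 1].sum()

def what_if(model, findings: pd.DataFrame, features: list[str], column: str, value) -> float:
    """Re-score the estate with one control or state changed everywhere."""
    counterfactual = findings.copy()
    counterfactual[column] = value
    return total_risk(model, counterfactual, features)

baseline = total_risk(model, findings, FEATURES)
with_full_edr = what_if(model, findings, FEATURES, "edr_present", 1)
print(f"Predicted risk removed by full EDR coverage: {baseline - with_full_edr:.2f}")

# Greedy ticket ordering: predicted risk removed per estimated hour of effort,
# assuming a patch removes the finding's full contribution. `effort_hours`
# is a hypothetical per-ticket estimate.
findings["risk_removed"] = model.predict_proba(findings[FEATURES])[:, 1]
findings["priority"] = findings["risk_removed"] / findings["effort_hours"]
queue = findings.sort_values("priority", ascending=False)
```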
None of this means global models should be discarded. They remain valuable as shared language and as priors: CVSS communicates severity characteristics, EPSS contributes an evidence‑based probability signal, KEV identifies vulnerabilities known to be exploited. The practical approach is to combine global and local models: use global signals for awareness, benchmarking, and as model inputs, then let a locally trained, locally calibrated model drive day‑to‑day decisions. Context improves interpretation; local models improve decisions. If you want autonomous, defensible security actions, invest in a model that learns your environment and keeps learning as it changes.
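One way to encode that division of labor is sketched below: global signals set a hard floor (KEV membership), while the locally calibrated probability drives routine ordering. The thresholds and field names are illustrative assumptions, not recommendations.

```python
# Illustrative sketch of a combined policy: global signals set a hard floor
# (KEV membership), while the locally calibrated probability drives routine
# ordering. Thresholds and field names are assumptions, not recommendations.
def decide(finding: dict) -> str:
    if finding["in_kev"]:
        return "patch_now"         # known-exploited: act regardless of local score
    if finding["local_prob"] >= 0.20:
        return "patch_this_cycle"  # locally material risk
    if finding["local_prob"] >= 0.05:
        return "schedule"          # batch with routine maintenance
    return "accept_with_review"    # revisit as the model and estate change
```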