Simplified Total Risk Management (STORM)
STORM is a way to measure risk as a number instead of a label. It produces bounded, comparable, statistically analyzable risk measurements from the same qualitative judgments a traditional low / medium / high assessment uses. You get the analytical properties of a quantitative risk assessment without having to collect the monetary valuations, loss-exceedance curves, and actuarial frequencies a traditional quantitative assessment requires.
What STORM Is For
STORM is designed for organizations that currently run qualitative risk assessments and find the output too vague to act on. "We have 40 High risks" is not a measurement: it does not tell you whether you are getting better or worse, it cannot be compared against last quarter's report, and it does not tell an executive how much of the organization's value is actually exposed. STORM replaces the labels with numbers that do all of those things.
It fits three common cases:
- A compliance or audit program that produces an annual risk report and needs the report to support year-over-year comparison and trending.
- A security testing engagement (vulnerability scan, penetration test, audit) whose findings need to be aggregated into a single risk picture that the Board can read.
- An enterprise risk program spanning operational, financial, legal, reputational, and cyber risk that wants all of those kinds of risk on one comparable scale.
What It Measures
A STORM measurement is a bounded positive integer, called a Risk Unit (RU). An RU can also be read as an approximate percentage of the asset at risk: a 25 RU measurement is roughly a 25% exposure. Larger numbers mean greater risk. Because the scale is bounded and the construction is algorithmic, two measurements are directly comparable — whether they are of the same organization six months apart, of two organizations in the same industry, or of two organizations in different industries.
The same measurement applies at every level of an organization: a single application, the host it runs on, the network the host is part of, the business unit that owns the network, the enterprise. Sub-measurements aggregate into super-measurements through the same procedure, which is what lets a single board-level number sit on top of the hundreds of individual findings underneath it.
How It Works
STORM has two stages. The first stage, the Transforms, converts ordinary qualitative judgments into numbers between 0 and 1. The second stage, the diminishing-impact aggregation, combines a variable-length list of those numbers into a single bounded measurement. Together they are the qualitative-to-quantitative (L2N) transition.
Stage 1 — The Transforms
Four Transforms handle the four inputs any risk assessment needs:
- Asset Transform
- Turns the question "how important is this asset?" into a value between 0 and 1. Two methods: Basic Criticality (a 1–10 or 1–100 score, fine for most cases) and Container-Content-Process (separate scoring of infrastructure, information, and processes on a 0–5 maturity scale, for assets that need finer resolution).
- Threat Transform (HAM533)
- Turns the question "how likely is this threat, and what is its potential impact?" into a probability and impact pair. Uses three axes: History (how often the threat occurs), Access (how much access the threat agent has or can obtain), and Means (the technical capability and resources of the threat agent). You can try the calculator below.
- Vulnerability Transform
- Turns the question "how exposed does this weakness leave the asset?" into an exposure value between 0 and 1. Three options depending on the kind of vulnerability: CVSS Adaptation for technical findings that already carry a CVSS score, Simple Exposure for non-technical findings whose exposure can be estimated as a percentage, and Capability-Resource-Visibility-Effects for structured evaluation of complex vulnerabilities.
- Control Transform
- Turns the question "how effective is this control?" into a reduction value between 0 and 1. A control with a Transform value of 0.75 reduces the residual exposure by 75%. The same mechanism supports "what if" analysis — you can measure the effect of a proposed control before spending any money on it.
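As an illustration of how Transform outputs compose, here is a minimal sketch in Python. The function names, the 1–10 normalization, and the simple multiplicative control reduction are assumptions for illustration; the only behavior taken from the text is that a control with a Transform value of 0.75 reduces residual exposure by 75%.

```python
def basic_criticality(score: int, scale_max: int = 10) -> float:
    """Basic Criticality sketch: map a 1-10 (or 1-100) importance score
    onto (0, 1]. The real Asset Transform is defined in the white paper."""
    if not 1 <= score <= scale_max:
        raise ValueError(f"score must be in 1..{scale_max}")
    return score / scale_max

def apply_control(exposure: float, control_value: float) -> float:
    """Control Transform effect: a control valued c reduces the residual
    exposure by c * 100%, e.g. c = 0.75 leaves 25% of the exposure."""
    return exposure * (1.0 - control_value)

# A finding leaving 80% exposure, mitigated by a 0.75-effective control:
residual = apply_control(0.80, 0.75)  # 20% residual exposure
```

The same call with a proposed (not yet purchased) control value is the "what if" analysis mentioned above: compute the residual before spending the money.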
Stage 2 — The Aggregation
Once a set of risk factors has been described through the Transforms, the result is a list of scalar values. Stage 2 combines them into a single number using a diminishing-impact function: the largest risk factor contributes in full, each successive factor contributes less than the last, and the series converges, so the measurement stays within a fixed range no matter how many findings are on the list.
This is the property that makes STORM work where counting, averaging, and summing qualitative findings fail. Counting does not tell you whether 40 Highs are getting better or worse. Averaging dilutes severe findings into a soup of trivial ones. Summing grows with organization size rather than with actual risk. A diminishing-impact aggregation preserves the worst case, rewards remediation of the dominant exposures, stays bounded, and is comparable across asset bases of any size. The specific mathematical form of the function is covered in the white paper.
Try the Threat Transform
The HAM533 calculator below runs the Threat Transform on whatever threats you enter. Qualitative descriptions for history, access, and means become a bounded numeric threat probability. You can add, edit, and delete threats, and export the assessment as CSV.
A Measurement Is a Comparison
A qualitative risk label is a classification; a STORM measurement is a coordinate. Two coordinates can be subtracted. A program that reduced its measurement from 48 RU to 31 RU over a year has a defensible, quantifiable story to tell about the year's security investment. A program that produced "we had 40 Highs last year, we have 38 Highs this year" does not.
Figure — Aggregate STORM measurements across five client engagements, 2012–2025. Healthcare line is an actual STORM-RM measurement series from the ATRA HIPAA engagement; the four financial institution and retail lines are RSK/VM measurements from earlier security-testing engagements scaled by 0.15 to the STORM-RM range. Hover over any point for year and Risk Unit value.
The same property supports drill-down: an enterprise measurement is the aggregate of its component measurements, so a board-level number can always be broken down to the specific findings driving it. The example below uses real measurements from a large transportation company. The highest-risk areas are Information Technology (19%) and Business Operations (13%). Drilling down, IT risk concentrates in obsolete Windows systems, and Business Operations risk concentrates in Enterprise Risk Management itself, precisely the exposure STORM is being adopted to reduce.
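The drill-down structure can be sketched as a tree in which every node, from finding to enterprise, is measured by the same procedure. The tree shape and the geometric-weight aggregation are illustrative assumptions (STORM's actual function is in the white paper), and renormalization onto the RU range is omitted for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A unit in the measurement hierarchy: a finding, a host, a network,
    a business unit, or the enterprise. Hypothetical structure for
    illustration; not STORM's published data model."""
    name: str
    factors: list[float] = field(default_factory=list)   # this node's own findings
    children: list["Node"] = field(default_factory=list) # sub-measurements

    def measure(self) -> float:
        # Sub-measurements feed the parent through the same diminishing-impact
        # procedure, so a board-level number decomposes back to the specific
        # findings driving it.
        inputs = self.factors + [c.measure() for c in self.children]
        return sum(f * 0.5**i for i, f in enumerate(sorted(inputs, reverse=True)))

it = Node("Information Technology", factors=[0.9, 0.4])  # e.g. obsolete Windows
ops = Node("Business Operations", factors=[0.5])
enterprise = Node("Enterprise", children=[it, ops])
top = enterprise.measure()  # aggregate of the two component measurements
```

Because the dominant sub-measurement always enters the parent at full weight, the enterprise number can never understate its worst component.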
Fitting Existing Frameworks
STORM does not replace NIST 800-30, OCTAVE, ISO 27005, FAIR, or COBIT. It substitutes for the qualitative or custom-quantitative internals each of those frameworks leaves to the organization. The framework's process remains the same; its inputs become quantitative; its outputs become comparable. The white paper covers the full mapping.
What STORM Does Not Require
A traditional quantitative risk assessment asks for monetary asset valuations, loss-exceedance curves, and actuarial threat frequencies — data most organizations cannot produce and cannot share even when they can. STORM does not require any of those inputs. It requires only the same qualitative judgments a conventional qualitative assessment requires: what matters, what could go wrong, and how bad it would be.
A STORM program has roughly the same information cost as the qualitative program it replaces. It does not need new data collection processes, it does not force disclosure of proprietary valuations, and it can begin with the information the organization already has.
White Paper
The full public white paper covers the mathematical foundations, the measurement requirements, the five formal constraints on the diminishing-impact function, and the framework mapping in detail. It is written in IEEE research-paper format and is readable by an executive, an auditor, or an academic.
Download the STORM/RSK white paper (PDF)
A non-disclosure companion document covering the specific form of the diminishing-impact function, Transform constants, and client case studies is available on request via the contact page.
Simplified Total Risk Management, STORM, ATRA, StrongCOR, RAPID, and RSK are trademarks of Andrew T. Robinson.