Advanced Total Risk Assessment (ATRA)

ATRA is the AI-integrated form of STORM. It performs a complete quantitative risk assessment — threat, asset, vulnerability, and control Transforms plus the STORM diminishing-impact aggregation — and uses AI to extract the underlying findings, recommendations, and Transform inputs directly from client artifacts.

The result is an audit-grade quantitative risk assessment produced at a fraction of the labor cost of a manual engagement, in the same deliverable format, and updated continuously rather than once a year.

What ATRA Does

ATRA combines three capabilities the industry has historically delivered separately:

  1. A complete STORM risk engine — all four Transforms, the aggregation, the framework mappings, and maturity-level scoring.
  2. A graph-backed data model that preserves every assessment, finding, recommendation, control, and interview as a first-class entity linked by the relationships that actually govern risk data.
  3. An AI layer that reads the client's own artifacts — interview transcripts, vulnerability-scanner output, penetration-test reports, policy documents, audit findings, CMDB and cloud inventories, threat feeds — and extracts findings, recommendations, and Transform-input proposals.

An analyst reviews, adjusts, and approves. The methodology is unchanged. The labor curve is radically flatter.

Why AI + STORM

Every STORM engagement has spent most of its time on the same few activities: interviewing stakeholders and extracting findings, correlating scanner output with controls, mapping vulnerabilities to asset values, drafting recommendations that cite specific evidence. None of that is where the intellectual value lives. The intellectual value is in the methodology itself and in the judgment calls an analyst makes on edge cases.

ATRA automates the mechanical work and leaves the judgment calls to the analyst. A security-test report that used to take days to transcribe into a STORM assessment is ingested in minutes, with every finding proposed, every CVE referenced, every recommendation drafted, and every Transform input populated. The result is a quantitative assessment delivered at the cost and cadence of a qualitative one — which is the original point of STORM, and which AI finally makes universally practical.

Complete L2N Coverage

L2N stands for quaLitative to quaNtitative — the STORM commitment that every qualitative judgment about an asset, threat, vulnerability, or control can be converted into a bounded numeric measurement that is objective, repeatable, comparable, and scalable. ATRA implements the full L2N pipeline end to end, with AI-driven ingestion populating each of the four Transforms from the matching class of client artifact:

Asset Transform (Basic Criticality / Container-Content-Process)
Populated from configuration-management databases, cloud resource inventories, identity directories, SaaS registries, and business-process documentation.
Threat Transform (HAM533)
Populated from threat-intelligence feeds, sector-specific breach data, historical incident records, and organizational context.
Vulnerability Transform (CVSSA / SEM / CRVE)
Populated from vulnerability-scanner output, CVE and vendor advisory feeds, penetration-test reports, and configuration audits.
Control Transform (SCEP)
Populated from policy documents, control-test evidence, audit artifacts, and continuous-monitoring data.
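
The L2N conversion that each Transform performs can be sketched in a few lines. This is an illustrative stand-in only: the scale, the labels, and the constants below are invented for the example, and the actual Transform constants are covered in the non-disclosure companion document. What the sketch shows is the property L2N commits to: the mapping is explicit (objective), deterministic (repeatable), and bounded (comparable and scalable).

```python
# Illustrative L2N sketch. The five-point scale and its numeric values
# are assumptions for this example, not STORM's actual constants.
QUALITATIVE_SCALE = {
    "negligible": 0.1,
    "low": 0.3,
    "moderate": 0.5,
    "high": 0.7,
    "critical": 0.9,
}

def l2n(judgment: str) -> float:
    """Convert a qualitative judgment to a bounded numeric measurement.

    The same input always yields the same output (repeatable), every
    output lies in [0, 1] (bounded, comparable), and the mapping is
    written down rather than held in an analyst's head (objective).
    """
    try:
        return QUALITATIVE_SCALE[judgment.lower()]
    except KeyError:
        raise ValueError(f"unknown qualitative judgment: {judgment!r}")

print(l2n("High"))  # 0.7
```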

Every AI-proposed value carries provenance — the source artifact it was derived from, the model that produced it, and the model's self-reported confidence. Values that appear in deliverables are human-approved.
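
A provenance record of the kind described above might look like the following sketch. The field names and types are assumptions for illustration, not ATRA's actual schema; the structural point is that the source artifact, producing model, and self-reported confidence travel with the value, and that nothing reaches a deliverable without an analyst's approval.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical provenance record for one AI-proposed Transform input.
@dataclass(frozen=True)
class TransformInput:
    transform: str            # e.g. "vulnerability"
    factor: str               # e.g. "exploitability"
    value: float              # bounded scalar proposed for the Transform
    source_artifact: str      # artifact the value was derived from
    model: str                # model that produced the proposal
    model_confidence: float   # model's self-reported confidence
    approved_by: Optional[str] = None  # analyst name; None until reviewed

    @property
    def deliverable_ready(self) -> bool:
        # Only human-approved values may appear in deliverables.
        return self.approved_by is not None

proposal = TransformInput(
    transform="vulnerability",
    factor="exploitability",
    value=0.72,
    source_artifact="pentest-report-2025-q1.pdf",
    model="extractor-v3",
    model_confidence=0.91,
)
print(proposal.deliverable_ready)  # False until an analyst approves
```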

The chart below shows real aggregate measurements across five client engagements over a decade. The emphasized line is the ATRA-produced STORM measurement series for an active healthcare HIPAA engagement; the others are longitudinal RSK/VM measurements scaled for STORM comparability. What ATRA delivers is visible in the shape: continuous, comparable, trendable numbers that support decision-making year after year.

Figure — Aggregate STORM risk measurements across five client engagements, 2012–2025. Healthcare line is an ATRA-generated STORM-RM series; the others are RSK/VM measurements scaled for STORM comparability. Hover any point for year and value.

Year-Over-Year, Without the Reconciliation

ATRA stores each assessment in a graph. Findings, recommendations, controls, and measurements are first-class nodes with stable identity across re-numbering, re-titling, and re-scoping. Historical comparison is a query — the 2018 risk number for any asset, finding, or business unit is directly comparable to the 2025 risk number, without spreadsheet reconciliation.
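
The stable-identity property can be illustrated with a minimal sketch. Plain dictionaries stand in for the graph store here, and the node id and figures are invented; the point is that because a finding keeps one node identity across re-numbering and re-titling, a historical comparison is a lookup keyed on that identity, not a spreadsheet reconciliation.

```python
# Hypothetical measurement store: (stable_node_id, year) -> aggregate risk.
# Node ids and values are invented for illustration.
measurements = {
    ("finding:payroll-db-encryption", 2018): 6.4,
    ("finding:payroll-db-encryption", 2025): 3.1,
}

def year_over_year(node_id: str, then: int, now: int) -> float:
    """Directly comparable because both values share one node identity."""
    return round(measurements[(node_id, now)] - measurements[(node_id, then)], 2)

print(year_over_year("finding:payroll-db-encryption", 2018, 2025))  # -3.3
```

A negative delta means measured risk fell between the two assessments; the same query shape serves any asset, finding, or business unit in the graph.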

Year-over-year trend charts, unresolved-recommendation reports, safeguard-coverage matrices, and orphan-control detection all fall out of the graph structure without custom joins. The annual risk-analysis Word document is generated from the same graph, in the same format the engagement has always produced, with a full audit trail.

Auditability Is Structural, Not Bolted On

A fair objection to AI in risk tooling is that the outputs become opaque. ATRA is built so this cannot happen: the AI produces Transform inputs, not risk outputs. The Transforms and the aggregation remain deterministic, inspectable, and reproducible. For any ATRA measurement, an auditor can see the contributing risk factors with their scalar values; which values were AI-proposed, human-authored, or AI-proposed and human-approved; the exact source artifact each AI proposal came from; and the aggregation that produced the number. Every measurement is reproducible from the stored inputs.
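
The structural claim can be made concrete with a sketch. The aggregation below is a generic stand-in, not STORM's diminishing-impact function, whose specific form is covered only in the non-disclosure companion document. What the sketch demonstrates is the auditability property itself: the number is a pure, deterministic function of the stored inputs, so re-running it on the same inputs always reproduces the same measurement.

```python
# Generic diminishing-impact stand-in (NOT the STORM function): the
# largest factor contributes fully and each further factor contributes
# a diminishing share -- halving, in this sketch.
def aggregate(factors: list[float]) -> float:
    total = 0.0
    for rank, value in enumerate(sorted(factors, reverse=True)):
        total += value / (2 ** rank)
    return round(total, 4)

# Stored, human-approved Transform inputs (invented values).
inputs = [0.8, 0.5, 0.2]

# Deterministic and order-independent: the same stored inputs always
# reproduce the same measurement, which is what makes an audit possible.
assert aggregate(inputs) == aggregate([0.2, 0.8, 0.5])
print(aggregate(inputs))  # 1.1
```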

Further Reading

To evaluate ATRA, request the non-disclosure companion document, or discuss an engagement, use the contact page. The companion document covers Transform constants, the specific form of the diminishing-impact function, and client case studies.

Simplified Total Risk Management, STORM, ATRA, StrongCOR, RAPID, and RSK are trademarks of Andrew T. Robinson.