Published 2026-04-09

Friendly Fire: How Your Security Program Makes You Less Secure

Security

You've heard of "friendly fire" — when military forces accidentally harm their own side. In the world of cybersecurity, the same concept applies: your controls can become both a vulnerability and a threat to your organization. This occurs at the point where "defense in depth" meets diminishing returns.

Alongside nation-state APTs, ransomware, and zero-day supply chain compromises, this self-inflicted damage accumulates every day in every organization that follows the standard playbook: overly granular access controls that nobody can audit, layers upon layers of controls that cost more than the risks they mitigate, and risk assessments that produce colors instead of decisions.

The security industry has spent thirty years building security programs on four orthodoxies that feel right but are demonstrably wrong. This article examines all four — and offers alternatives (which many will consider heresies) that produce better outcomes at lower cost. The full treatment is in Guerilla Security: The Martial Art of Information Security (revised 2026), particularly Chapters 9 and 17. What follows is the argument in concentrated form.

Heresy #1: Your Definition of Risk Is Wrong

The traditional definition of risk, enshrined in the CISSP body of knowledge and in the mental models of most security practitioners, is: risk is the probability of loss events. Under this definition, risk is inherently negative. Every risk is a bad thing. Controls reduce risk. More controls reduce more risk. The only question is how much risk you're willing to tolerate. This creates a discipline that can only say "no."

ISO 31000 offers a fundamentally different definition: risk is the effect of uncertainty on objectives. That single word — objectives — transforms risk from a one-dimensional negative into a multi-dimensional analysis of outcomes. A decision to implement a control has both upside risk (it improves security) and downside risk (it costs money, creates friction, may break something). A decision not to implement a control also has both upside risk (saves money, reduces friction) and downside risk (leaves a vulnerability unaddressed).

This isn't academic hairsplitting. The wrong definition produces the wrong outcomes. A security program built on "probability of loss" can never justify removing a control, even when that control costs more than the risk it mitigates, even when it drives 61% of employees to use unsafe workarounds (Ivanti, 2024), even when the workarounds create worse vulnerabilities than the control was supposed to prevent. Under the CISSP definition, more controls are always better. Under ISO 31000, you can finally ask: does this action produce a net benefit to the organization's objectives, considering both the upside and the downside?

Every argument that follows depends on this reframe. If you accept that security decisions have costs, and that those costs are themselves risks, then the entire orthodoxy unravels.

Heresy #2: Least Privilege Is Making You Less Secure

The principle of least privilege — reduce every user's access to the absolute minimum required for their current task — is a fifty-year-old idea that made sense when systems had simple privilege models and users numbered in the dozens. Applied literally to modern platforms, it produces the opposite of its intent.

The evidence is damning: "least privilege" in practice grants more privilege, encourages insecure workarounds, and costs even modest-sized organizations millions of dollars per year in lost productivity.

  • A 2023 Palo Alto Unit 42 analysis of 680,000 cloud identities found that 99% had excessive permissions — many unused for 60+ days. The mechanism is predictable: when getting access is painful, people request more than they need, and no one revokes what goes unused.
  • The StrongDM Access Productivity Report (2022) found that 64% of organizations experience daily or weekly productivity impacts from access issues, and 53% share credentials across teams as a workaround.
  • Ivanti's 2024 survey of 7,800 IT professionals found that access friction costs an average of 1.6 hours per employee per month. For a 2,000-person organization at $100/hour loaded cost, that is approximately $3.8 million per year in lost productivity — before counting the security cost of the workarounds it drives.
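The arithmetic behind that last figure is easy to reproduce. A quick sketch (the hourly rate and headcount are the article's illustrative assumptions, not universal constants):

```python
# Estimate the annual productivity cost of access friction.
# Inputs mirror the example above: 1.6 hours lost per employee per
# month (Ivanti, 2024), 2,000 employees, $100/hour fully loaded cost.
hours_lost_per_employee_per_month = 1.6
employees = 2_000
loaded_cost_per_hour = 100  # USD

annual_hours_lost = hours_lost_per_employee_per_month * employees * 12
annual_cost = annual_hours_lost * loaded_cost_per_hour

print(f"{annual_hours_lost:,.0f} hours/year")  # 38,400 hours/year
print(f"${annual_cost:,.0f}/year")             # $3,840,000/year
```

Swap in your own headcount and loaded cost; the point is that friction converts to dollars with one line of arithmetic, which makes it a budgetable quantity rather than a vague complaint.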

Charles Perrow's Normal Accidents explains why. Systems with interactive complexity (multiple components interact in unexpected ways) and tight coupling (changes propagate rapidly) produce failures that no safety system can prevent. When your access control system becomes a tightly coupled complex system in its own right — hundreds or thousands of fine-grained roles that no one can enumerate, much less audit — the security mechanism itself becomes a source of failure.

The irony is that the paper that gave us the principle of least privilege also gave us the answer. Saltzer and Schroeder (1975) articulated psychological acceptability in the same paper — the principle that security mechanisms must be designed for ease of use so that users routinely apply them correctly. The industry worshipped one principle and ignored the other. For fifty years.

The alternative is optimal privilege: the smallest number of well-defined roles that let your organization function effectively while remaining auditable and monitorable. Aim for tens of roles, not hundreds or thousands. Accept that most roles will have more privileges than strictly necessary — and compensate with monitoring rather than restriction. A smaller number of well-understood roles is far easier to monitor for anomalous behavior than thousands of fine-grained roles that no one can explain.

A cumbersome control that is routinely bypassed is worse than a slightly less restrictive control that is followed reliably. Beautement and Sasse formalized this in 2008 as the compliance budget: every user has a finite tolerance for security overhead, and when a control exceeds that budget the user works around it (Beautement, Sasse & Wonham, 2008). Industry surveys put bypass rates among IT and engineering staff in the 60–80% range — and these are the populations most aware of the risks. (Full treatment: Least Privilege Can Be Poor Practice.)

Heresy #3: Your Risk Assessment Is Theater

Most organizations measure risk using qualitative labels — low, medium, high, critical. These labels feel like measurement. They are not. Qualitative risk assessment is a consensus exercise, not an analytical one. Ten people in a room assign "medium" to a risk because it feels medium, not because they've evaluated probability and impact against a defined scale.

Qualitative labels fail for four specific reasons:

  1. They are not comparable. Is your "high" the same as my "high"? Without a common unit, you cannot prioritize across risk categories. Every risk is "high" to the person who identified it.
  2. They cannot be trended. If your risk was "medium" last year and "medium" this year, has it improved or deteriorated? You cannot tell.
  3. They cannot inform decisions. If Risk A is "high" and Risk B is "high" and you have budget for one, which do you choose? The labels provide no basis for that decision.
  4. The counting problem. Organizations compensate by counting labels or assigning numbers (Low=1, Medium=2, High=3) and adding them up. The result looks quantitative but is not. A "3" at a 200-person credit union does not represent the same risk as a "3" at a 20,000-person commercial bank. The numbers are ordinal — they indicate rank order. You cannot add, average, or perform arithmetic on them. The organizations that do this anyway are performing mathematical operations on labels and treating the output as measurement. It is not.
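The counting problem is easy to demonstrate. The sketch below uses hypothetical figures, chosen only for illustration: two portfolios with identical label "totals" and wildly different dollar exposure.

```python
# Two risk portfolios scored with the common Low=1, Medium=2, High=3 scheme.
# The ordinal "totals" are identical; the actual exposure is not.
labels = {"Low": 1, "Medium": 2, "High": 3}

credit_union    = ["High", "Medium", "Medium"]  # small institution
commercial_bank = ["High", "Medium", "Medium"]  # 100x the assets

score_a = sum(labels[l] for l in credit_union)     # 7
score_b = sum(labels[l] for l in commercial_bank)  # 7

# Hypothetical annualized loss exposure behind the same labels:
exposure_a = 250_000      # USD
exposure_b = 25_000_000   # USD

print(score_a == score_b)       # True: the labels say the risks are equal
print(exposure_b / exposure_a)  # 100.0: the dollars say otherwise
```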

The reason organizations settle for qualitative assessment is not preference — it's information cost. Traditional quantitative methods (Monte Carlo simulation, full FAIR analysis) require detailed data about threat frequency, loss magnitude distributions, and control effectiveness that most organizations don't have and can't afford to collect. The information cost of a rigorous quantitative assessment can exceed the cost of the controls it recommends.

This is a real problem — but it is a problem with the implementation, not with quantitative measurement itself. RESCOR's Simplified Total Risk Management (STORM) methodology resolves it through the qualitative-to-quantitative (L-to-N) transition — a successive-approximation quantitative result derived from inputs that cost no more to collect than a qualitative assessment. STORM uses the same inputs (assets, threats, vulnerabilities, controls) but processes them through transforms that produce numeric values on a consistent scale. The result is not as precise as a full quantitative analysis — but it doesn't need to be. A risk measured at 14% is meaningfully different from a risk measured at 7%, and both are far more useful than two risks labeled "medium."
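STORM's actual transforms are beyond the scope of this article, but the general shape of an L-to-N transition can be sketched. The mapping below is a hypothetical illustration of the idea, not STORM's calibration: ordinal ratings are mapped to numeric midpoints, which combine into a value on a consistent percentage scale.

```python
# Hypothetical L-to-N sketch: map ordinal ratings to numeric midpoints,
# then combine into a 0-100% risk value. These ranges are illustrative
# assumptions, not STORM's actual transforms.
LIKELIHOOD = {"Low": 0.05, "Medium": 0.20, "High": 0.50}  # annual probability
IMPACT     = {"Low": 0.10, "Medium": 0.35, "High": 0.70}  # fraction of asset value

def risk_pct(likelihood: str, impact: str) -> float:
    """Numeric risk on a consistent 0-100 scale from ordinal inputs."""
    return round(LIKELIHOOD[likelihood] * IMPACT[impact] * 100, 1)

print(risk_pct("High", "Medium"))    # 17.5 - comparable and trendable
print(risk_pct("Medium", "Medium"))  # 7.0  - meaningfully different
```

The inputs cost exactly what a qualitative assessment costs — someone still picks "Medium" — but the outputs can be compared, trended, and prioritized.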

The practical argument: quantitative measurement at qualitative cost. There is no legitimate reason to settle for colors when numbers are available at the same price. (Full treatment: Guerilla Security, Chapter 18.)

Heresy #4: "An Ounce of Prevention" Requires an Ounce of Detection and Correction

The aphorism "an ounce of prevention is worth a pound of cure" is often cited in security contexts. Taken literally, it leads to an endemic overinvestment in preventive controls and a corresponding complacency about detection and correction.

Being compromised is a "when, not if" calculation. Your security program is a set piece in a world of dynamic and infinitely mobile threats (think "Maginot Line"). You may have the best preventive controls money can buy, but attackers will find ways over, under, or around them. If your detective and corrective controls aren't at least as good, the attackers only have to get lucky once, and you have to be lucky every time, all the time.

The data bears this out. IBM's 2024 Cost of a Data Breach Report found that organizations with mature detection and response capabilities contained breaches in an average of 168 days. Organizations without them: 292 days. That 124-day gap is the difference between a contained incident and a catastrophe — and it's a gap that no amount of preventive spending can close after the attacker is already inside.

And even the best technical detective controls have a ceiling. SIEM platforms generate alerts. EDR tools flag anomalies. But who notices the CFO's email account sending wire transfer instructions at 2 AM? Who recognizes that the "IT help desk" caller asking for a password reset doesn't sound like anyone in IT? Who reports the USB drive left in the parking lot instead of plugging it in? Your workforce — trained, empowered, and listened to — is a detective layer that no technology can replicate. Organizations that treat employees as a liability to be controlled rather than a force multiplier to be engaged are discarding their most scalable, most adaptive detection capability.

The alternative: balance your investment across all three control types. Measure your mean time to detect and mean time to respond, not just your preventive control coverage. Run tabletop exercises and phishing simulations that treat employees as participants, not targets. And create reporting channels so frictionless that reporting a suspicious email is easier than ignoring it. (Full treatment: Guerilla Security, Chapters 9 and 14.)
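Mean time to detect and mean time to respond are straightforward to compute once incident timestamps are captured. A minimal sketch — the record layout and field names are assumptions about your incident log, not a standard:

```python
from datetime import datetime
from statistics import mean

# Minimal incident records; field names are illustrative assumptions.
incidents = [
    {"occurred":  datetime(2026, 1, 3, 2, 0),
     "detected":  datetime(2026, 1, 5, 9, 30),
     "contained": datetime(2026, 1, 6, 14, 0)},
    {"occurred":  datetime(2026, 2, 10, 11, 0),
     "detected":  datetime(2026, 2, 10, 16, 0),
     "contained": datetime(2026, 2, 11, 10, 0)},
]

def hours(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

# MTTD: occurrence to detection. MTTR: detection to containment.
mttd = mean(hours(i["occurred"], i["detected"]) for i in incidents)
mttr = mean(hours(i["detected"], i["contained"]) for i in incidents)

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```

Trend these two numbers quarter over quarter and you have a direct measure of detective and corrective capability — something preventive control counts can never give you.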

The Unifying Thesis: Your Controls Can Be a Threat and a Vulnerability

These four orthodoxies — the wrong definition of risk, the wrong approach to access control, the wrong approach to risk measurement, and the overreliance on preventive controls — reinforce each other into a single systemic failure:

  • The wrong risk definition prevents you from seeing that controls have costs.
  • Because you can't see costs, you implement maximum controls (least privilege, short session timeouts, MFA on everything, complex password rules).
  • Maximum controls create friction that drives workarounds.
  • Workarounds create vulnerabilities that are worse than the original risks.
  • You can't see this happening because your risk assessment produces colors instead of numbers, and you aren't measuring friction at all.
  • You focus on preventive controls at the expense of detective and corrective controls, and don't engage your workforce as a critical layer of defense.
  • So you add more controls. The cycle repeats.

System accidents arise when the controls and their interactions can no longer be enumerated, much less understood. And as you bolt on additional controls, the complexity grows. If an experienced security professional requires more than a few minutes to understand your security architecture, your access control model, or your risk profile, then your security program is a source of risk, not a mitigator of it.

Control friction — the cumulative productivity cost of security controls on the people who must interact with them — is not a side effect. It is a measurable, trackable security metric that belongs alongside vulnerability counts and patch compliance. Organizations that measure it discover what the research already shows: that excessive security controls are, in a very real sense, a denial-of-service attack perpetrated by the organization against itself.

Productivity is a form of availability in the CIA triad. A system that is technically "up" but costs an employee 45 minutes to authenticate into is not fully available. A customer who abandons a transaction because the fraud detection system blocked a legitimate purchase has experienced a denial of service — not from an attacker, but from you.

What the Alternative Looks Like

The alternative is not less security. It is better security — measured by outcomes rather than by effort or spend.

  • Redefine risk. Adopt ISO 31000's definition. Evaluate every security decision — including the decision to implement a control — as a trade-off with both upside and downside. Stop treating "more controls" as inherently better.
  • Measure friction. Track access request volume, time-to-resolution, help desk tickets for authentication issues, workaround frequency, and customer abandonment rates. These are security metrics. If they're high, your security program is working against you.
  • Reduce complexity. Simplify your security architecture, access control model, and risk profile. Do more with fewer, well-monitored controls. Complexity is a source of risk, not a mitigator.
  • Replace least privilege with optimal privilege. Consolidate roles aggressively. Accept slightly broader access. Compensate with monitoring. A smaller number of well-understood roles is more auditable, more monitorable, and more secure than thousands of fine-grained roles that no one can explain.
  • Measure risk quantitatively. STORM provides quantitative measurement at qualitative cost. If you can measure risk as a number, you can compare, trend, prioritize, and make actual decisions. If you're using colors, you're guessing.
  • Balance preventive, detective, and corrective controls. Measure your mean time to detect and mean time to respond — not just your preventive coverage. Train your workforce as a detection layer: employees who know what to look for and how to report it are a scalable capability that no technology can replicate.
  • Apply the friction test. For every control: does the risk this control mitigates exceed the cost this control imposes? If you can't answer that quantitatively, you are operating on faith, not risk management.
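The friction test in that last bullet reduces to a single comparison. A hedged sketch — the figures are hypothetical placeholders for your own measurements:

```python
# Friction test: does the risk a control mitigates exceed the cost it imposes?
# All figures are hypothetical annualized estimates in USD.
def net_benefit(risk_reduction_usd: int,
                license_cost_usd: int,
                friction_cost_usd: int) -> int:
    """Positive: the control pays for itself. Negative: it does not."""
    return risk_reduction_usd - (license_cost_usd + friction_cost_usd)

# A control that mitigates $150k of annualized risk but imposes $40k in
# licensing and $180k in measured productivity friction:
print(net_benefit(150_000, 40_000, 180_000))  # -70000: remove or redesign it
```

The hard part is not the subtraction — it is that most programs measure only the first term and treat the other two as zero.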

The organizations that produce the best security outcomes are not the ones that implement the most controls. They are the ones that implement the right controls — and can prove it with numbers.

Guerilla Security: The Full Argument

This article summarizes the distinctive arguments from Guerilla Security: The Martial Art of Information Security (revised 2026), a 20-chapter treatment of security philosophy, threat landscape, defense, detection and response, and governance. The book has been in continuous publication since 1994 — the Three Laws of Guerilla Security, the RAPID methodology, and the entropy-energy model of risk were groundbreaking then and remain the foundation of RESCOR's practice today.

Download the complete booklet: Guerilla Security (2026 Edition) →

RESCOR Can Help

RESCOR builds security programs that produce measurable outcomes — not checkbox compliance. STORM quantitative risk measurement, RAPID iterative governance, and StrongCOR subscription services provide the methodology, the tools, and the ongoing support to replace orthodoxy with evidence.

Schedule a consultation → | +1 863 SECURE1 (+1 863 732-8731)