Published 2026-03-29
The Limits of Artificial Intelligence (AI)
Artificial Intelligence offers tremendous opportunity for improved human productivity. It also has hard limits — some theoretical, some practical, some regulatory — that every organization must understand before deploying AI in any capacity where the consequences of failure matter.
Computability
Some problems can be defined but are not computable. The classic example is the halting problem: no general procedure can determine whether an arbitrary program terminates or runs forever. This is a hard mathematical limit that applies to all automata, including AI systems.
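A minimal sketch of why, using Turing's diagonal argument: suppose a perfect halting oracle existed. The `halts` function below is hypothetical by construction; the point of the sketch is that no correct implementation of it can ever be written.

```python
# Sketch of the diagonal argument: assume a perfect halting oracle exists.
def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts.
    No correct implementation can exist; this stub only sets up the contradiction."""
    raise NotImplementedError("no such oracle can be built")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:          # oracle says "halts" -> loop forever
            pass
    return "halted"          # oracle says "loops forever" -> halt immediately

# Feeding paradox to itself defeats the oracle either way:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Both outcomes contradict it.
```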
While computability limits are real, they are not the primary reason AI systems produce incorrect results. The more immediate problem is how modern AI actually works.
Why AI Hallucinates
Large language models (LLMs) are statistical pattern matchers. They predict the next likely token in a sequence — not the next correct token. When an LLM generates a confident-sounding answer that is factually wrong, it is not experiencing a computability failure. It is doing exactly what it was designed to do: producing output that is statistically plausible based on its training data. The output sounds right because sounding right is what the model optimizes for.
This means hallucinations are not bugs that will be fixed in the next version. They are an inherent property of how statistical language models work. The frequency of hallucinations can be reduced through better training data, retrieval-augmented generation (RAG), and other techniques — but the fundamental mechanism that produces them cannot be eliminated without changing what these models are.
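A toy illustration of that mechanism, with invented token probabilities: nothing in the sampling step consults facts, only the distribution it is handed, so a frequent-but-wrong continuation can easily win.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The probabilities are invented for illustration; in a real LLM they come from
# training-data statistics, which can make a wrong answer the most "plausible"
# continuation. Nothing in this step checks facts.
next_token_probs = {
    "Sydney": 0.55,      # common in casual text, factually wrong
    "Canberra": 0.40,    # correct, but less frequent in the training corpus
    "Melbourne": 0.05,
}

def sample_next_token(probs):
    """Pick a token according to its probability: plausibility, not truth."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # often "Sydney": confident and wrong
```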
Articulated vs. Unarticulated Knowledge
No AI can be trained on knowledge that has not been made explicit or articulated. Humans have extensive stores of latent, unconscious, or unarticulated knowledge — insight, intuition, moral conscience. The prohibition against murder is universal to all known human societies, arising from millions of years of evolution, not rational decision [Sowell 1996], [Hayek 1945].
Such unarticulated knowledge is inaccessible to AIs. You can ask an AI "why is murder wrong" and it will recapitulate various articulated arguments — not answer from intrinsic moral knowledge. The challenge for the contrary view is to describe a causal mechanism by which an LLM acquires the latent storehouse of unconscious information that produces human insight, intuition, and moral conscience. Insight — the leap from observed pattern to a previously unimagined explanatory frame — depends almost entirely on knowledge that has never been articulated. Ask an automaton trained on the existing literature to produce the leap of insight that comes after string theory or M-theory and it cannot: the next theory, by definition, is not in the training data.
There is an honest epistemic problem behind this: we cannot definitively know whether a sufficiently sophisticated automaton possesses something like sentience or moral agency, because we have no agreed test for those properties even in humans. The question collapses into "what is sentience?" and we do not have a good answer. This is not a position to be argued out of by better evidence; it is a question that may be settled by statute rather than by experiment. The European AI Act, the various US executive orders on AI, and the case law that will follow them are already moving toward a position that — for legal and liability purposes — no automaton is to be treated as sentient or as possessing moral agency, irrespective of how convincingly its outputs simulate either. The practical position this article takes is that even in cases of genuine ambiguity, the responsible default is to treat AI outputs as the product of statistical machinery, not of mind, and to reserve moral and legal accountability for the humans who deploy and supervise it.
This distinction matters in any domain where judgment, ethics, or context that cannot be fully specified in advance are important to the outcome.
Determinism & Complex Systems
Classical computation is deterministic — given the same input, you get the same output. Modern LLMs introduce stochastic elements (temperature-based sampling) that make their output genuinely non-deterministic in practice. But whether the model is deterministic or stochastic, the deeper issue remains: no computational model can fully represent the behavior of complex systems.
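A minimal sketch of the temperature-scaled sampling referred to above, with invented logits standing in for the scores a real model produces over tens of thousands of candidate tokens; low temperature is nearly deterministic, higher temperature makes repeated runs diverge.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into a probability distribution.
    Low temperature sharpens the distribution toward the top score;
    higher temperature flattens it, so repeated runs diverge."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["rises", "falls", "explodes"]
logits = [2.0, 1.5, 0.2]                          # invented scores for illustration

for temperature in (0.1, 1.0):
    probs = softmax_with_temperature(logits, temperature)
    sample = random.choices(tokens, weights=probs, k=10)
    print(f"temperature={temperature}: {sample}")
# At 0.1 nearly every draw is "rises"; at 1.0 the same prompt yields varied output.
```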
Complex System Characteristics
If you have heard of the "butterfly effect," you have heard a metaphor for complex systems, which share these characteristics (a numerical illustration follows the list):
- Component Count — large number of interacting elements.
- Coupling — nth-order interactions cannot be enumerated.
- Non-Linearity — outputs are not proportional to inputs; small changes can produce outsized effects.
- Chaos — possible outputs are constrained, but which will occur cannot be predicted.
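A small numerical illustration using the Lorenz system, the model from which the butterfly metaphor comes: two trajectories that start a billionth apart end up far apart, yet both stay inside the same bounded region (constrained outputs, unpredictable which). The step size and run length are arbitrary choices for the demo.

```python
# Sensitivity to initial conditions in the Lorenz system. Two trajectories
# starting 1e-9 apart diverge widely, yet both remain on the same bounded attractor.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturb one coordinate by a billionth

for step in range(1, 10001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 2000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={step * 0.005:5.1f}  separation={gap:.3e}")
# The separation grows by many orders of magnitude from its 1e-9 start,
# while both trajectories stay within the same bounded region of state space.
```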
Weather Prediction: A Case Study
Weather predictions out to 72 hours improved from ~70% accuracy in 1960 to over 90% in 2020, a gain of roughly 20 percentage points over 60 years. That improvement is nowhere near proportional to the thousand-fold increase in computing power applied to the problem.
Since 2023, AI-based weather models (Google DeepMind's GraphCast, Huawei's Pangu-Weather) have actually outperformed traditional numerical weather prediction for many short-range forecast types. This is a genuine achievement. But it does not invalidate the complexity argument — it illustrates it. Even with dramatically better AI models, there is a hard predictability horizon around 10-14 days that no amount of data or computing power can extend. The weather is a coupled non-linear chaotic system, and its long-term future states cannot be predicted.
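A back-of-envelope way to see why: if forecast error grows roughly exponentially, the useful horizon scales with the logarithm of the initial observation error. The doubling time below is an illustrative assumption, not a measured value.

```python
import math

# Back-of-envelope: how far out is a chaotic forecast useful?
# Assumes forecast error doubles every 1.5 days (illustrative figure) and that
# a forecast stops being useful once the error reaches the size of the signal.
DOUBLING_TIME_DAYS = 1.5

def horizon_days(initial_error_fraction):
    """Days until an initial relative error grows to ~100% of the signal."""
    return DOUBLING_TIME_DAYS * math.log2(1.0 / initial_error_fraction)

for err in (0.1, 0.01, 0.001, 0.0001):
    print(f"initial error {err:>7.2%}  ->  useful horizon ~{horizon_days(err):4.1f} days")
# Each ten-fold improvement in the initial observations buys only a constant
# ~5 extra days in this simplified picture. In the real atmosphere the return
# is smaller still, because errors at small, unresolved scales grow faster and
# re-contaminate the large scales, which pins the practical horizon near 10-14 days.
```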
Complex Systems & AI
The economy, the climate, global supply chains, and large-scale technology environments are all complex systems. AI can improve short-term prediction and pattern recognition within the predictability horizon — that is what GraphCast and Pangu-Weather do, and they are real achievements. What AI cannot do is extend the predictability horizon. No amount of computational power closes the gap between the last day a chaotic system can be predicted and the first day it cannot. To "solve" prediction in the sense the marketing implies — perfect forecasts at arbitrary horizons — is not a problem AI is failing at. It is a problem nothing can succeed at, by the mathematical structure of complexity itself.
Accountability & Liability
Only humans have legal accountability for their actions. AIs cannot be held accountable or liable — the AI has no assets, and shutting it down has no punitive or rehabilitative effect. Removing humans from decision loops has serious implications for autonomous vehicles, AI-driven medical diagnosis, automated trading systems, and other applications where errors have significant consequences [Perrow 1999].
Some argue that liability can be transferred to AI developers or users. But transferring liability to manufacturers is difficult in the US, where manufacturers are generally not liable for how a non-defective product is used. Transferring liability to users would reduce AI deployment in any application where wrong answers carry risk — which is precisely where AI oversight matters most.
System (Normal) Accidents
Modern technological systems tend toward interactive complexity and tight coupling, leading to cascading failures from seemingly innocuous inputs [Perrow 1999]. AI amplifies this risk in two ways, illustrated in the sketch after this list:
- Correlated failures. When millions of systems use the same AI model, a single model error affects all of them simultaneously. The 2024 CrowdStrike incident — where one bad update grounded airlines worldwide — demonstrates this at a smaller scale. AI-driven automation creates the same risk with potentially larger blast radius.
- Speed of propagation. Human-in-the-loop systems have a natural buffer: the time it takes a human to notice, evaluate, and act. Fully automated AI systems propagate errors at machine speed, leaving no time to intervene before cascading failures develop.
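A toy calculation of the correlated-failure point, with invented figures: when failure modes are independent, a day on which even five percent of a fleet fails together is astronomically unlikely; when every system calls the same model, one defect takes the whole fleet down at once.

```python
# Toy comparison of independent vs. correlated (shared-model) failure.
# All figures are invented for illustration.
from math import comb

N_SYSTEMS = 1_000
P_FAIL = 0.01          # daily failure probability of any single decision path

def prob_at_least(k, n, p):
    """P(at least k of n independent systems fail), exact binomial tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Independent failure modes: a day when 5% of the fleet fails is vanishingly rare.
print(f"independent, >=50 of 1000 fail: {prob_at_least(50, N_SYSTEMS, P_FAIL):.2e}")

# Shared model: every system inherits the same defect, so the same 1% chance
# now means a 1% chance that all 1,000 fail together on a given day.
print(f"shared model, all 1000 fail:    {P_FAIL:.2%}")
```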
Data Quality & Bias
AI amplifies the "garbage in, garbage out" problem. Models trained on biased data produce biased outputs — a well-documented issue affecting hiring algorithms, lending decisions, criminal justice risk scoring, and medical diagnosis. The bias is often invisible because it is embedded in historical data that reflects historical discrimination.
Organizations deploying AI must evaluate training data quality, test for bias in outputs, and establish ongoing monitoring. This is not a one-time activity — model behavior can drift as the world changes while the training data remains static.
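One concrete form such output testing can take is a selection-rate comparison across groups, shown below on invented decisions; the 0.8 threshold follows the common four-fifths rule of thumb, and the group labels and data are placeholders.

```python
# Minimal output-bias check: compare the model's "advance to interview" rates
# across groups and flag disparate impact. Data and group labels are invented.
from collections import defaultdict

decisions = [
    # (applicant_group, model_recommended_interview)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])            # group -> [recommended, total]
for group, recommended in decisions:
    counts[group][0] += int(recommended)
    counts[group][1] += 1

rates = {group: rec / total for group, (rec, total) in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```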
Regulatory & Compliance Risk
AI regulation is accelerating worldwide. The EU AI Act establishes risk-based categories for AI systems with mandatory requirements for high-risk applications. In the US, sector-specific regulations are emerging for AI in healthcare (FDA), financial services (SEC, OCC), and government (Executive Order 14110). State-level AI legislation (Colorado, Connecticut, and others) adds further compliance requirements.
Organizations using AI — especially in regulated industries — must assess whether their AI deployments comply with current and emerging regulations. This includes transparency requirements, impact assessments, human oversight obligations, and documentation of training data and model behavior.
Energy & Environmental Costs
Training and operating large AI models consumes significant energy. A single large model training run can consume as much electricity as dozens of US homes use in a year. Inference (running the trained model) is less expensive per query but scales with usage. Organizations should factor energy costs and environmental impact into AI deployment decisions — both as a financial consideration and as part of ESG commitments.
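A back-of-envelope estimator for a training run, with every input an assumption to be replaced by real figures for a specific deployment; the household constant is a rough US average for annual consumption.

```python
# Back-of-envelope training-energy estimate. Every input is an assumption;
# substitute real figures for your own deployment.
GPU_COUNT = 1_000                 # accelerators used for the training run (assumed)
GPU_POWER_KW = 0.7                # average draw per accelerator, kW (assumed)
TRAINING_DAYS = 30                # wall-clock duration of the run (assumed)
PUE = 1.2                         # data-center overhead multiplier (assumed)
HOUSEHOLD_KWH_PER_YEAR = 10_500   # rough US average annual household usage

training_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_DAYS * 24 * PUE
print(f"Estimated training energy: {training_kwh / 1000:,.0f} MWh")
print(f"Equivalent to ~{training_kwh / HOUSEHOLD_KWH_PER_YEAR:,.0f} US homes for a year")
```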
What To Do About It
AI should be embraced for its genuine potential to improve human productivity and decision-making. But every organization deploying AI must conduct a risk assessment that considers:
- What happens when the AI is wrong — and how will you detect it?
- What data is the AI trained on, and does it reflect the biases you want to perpetuate?
- What regulatory requirements apply to your use of AI?
- Where are humans in the decision loop, and is that sufficient?
- What is the blast radius of a correlated AI failure across your systems?
RESCOR helps organizations assess AI risk using STORM quantitative risk measurement and develop AI governance programs through StrongCOR. The goal is not to avoid AI — it is to deploy AI with the same rigor you apply to any other technology decision that affects your organization's security, compliance, and operational resilience.