Recursive systems—whether in control loops, optimisation engines, or autonomous agents—exhibit a class of failure modes that conventional safeguards cannot reliably prevent. The core issue is structural: when a system's output feeds back into its own inputs, small errors can compound exponentially.
Runaway gradients occur when optimisation processes update parameters without bounds: once the effective step size crosses the stability threshold of the dynamics, parameter values diverge or oscillate with growing amplitude. In recursive systems, this is not an edge case but a predictable failure mode whose trigger conditions, such as a step size too large for the curvature of the objective, can be stated precisely in advance.
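A minimal sketch of this failure mode, using plain gradient descent on f(x) = x² with illustrative values: the update is x ← x − lr·2x = (1 − 2·lr)·x, so the iteration contracts when lr < 1.0 and diverges when lr > 1.0.

```python
def descend(lr: float, steps: int = 20, x0: float = 1.0) -> float:
    """Plain gradient descent on f(x) = x^2 with no bound on updates."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * x          # gradient of x^2
        x = x - lr * grad       # unbounded update: no clipping, no trust region
    return x

print(abs(descend(lr=0.4)))     # |1 - 0.8| = 0.2 per step: contracts toward 0
print(abs(descend(lr=1.1)))     # |1 - 2.2| = 1.2 per step: diverges
```

The divergent case is not noise or bad luck: any lr above the threshold diverges on every run, which is what makes the condition predictable.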
Recursive amplification compounds errors across iterations: if each pass through the loop multiplies a deviation by a gain g > 1, the deviation grows as gᵏ. A per-iteration gain of roughly 1.5 turns a 1% deviation at iteration n into a 50% deviation by iteration n+10 if the system lacks inherent stability constraints. This is why systems that "usually work" fail catastrophically under pressure.
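The arithmetic behind that claim can be checked directly; this sketch uses a hypothetical gain of 1.48, for which 0.01 · 1.48¹⁰ ≈ 0.5.

```python
def amplify(deviation: float, gain: float, iterations: int) -> float:
    """Deviation after repeated passes through a loop with fixed error gain."""
    for _ in range(iterations):
        deviation *= gain       # no damping term pulls the error back down
    return deviation

print(amplify(0.01, gain=1.48, iterations=10))  # ~0.5: 1% has become ~50%
print(amplify(0.01, gain=0.90, iterations=10))  # contractive gain: error decays
```

The contrast shows why the gain, not the initial deviation, is the quantity a stability constraint must bound: with g < 1 the same 1% error shrinks instead.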
Delayed failure detection is endemic to recursive architectures. By the time monitoring systems register anomalous behaviour, the underlying state may already be irrecoverable. Probabilistic safeguards, which trigger on statistical anomalies, often activate too late or not at all when the system drifts gradually rather than failing abruptly.
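A sketch of the detection lag under gradual drift, with illustrative drift and threshold values: a threshold monitor stays silent while a 2% multiplicative drift compounds, and by the time it fires the state has already moved 50% from nominal.

```python
def first_alarm(drift: float, threshold: float, horizon: int = 200):
    """Return (step, state) at the first threshold crossing, or None."""
    state = 1.0                             # nominal operating point
    for step in range(horizon):
        state *= (1.0 + drift)              # gradual multiplicative drift
        if abs(state - 1.0) > threshold:
            return step, state              # alarm fires only here
    return None

step, state = first_alarm(drift=0.02, threshold=0.5)
print(step, state)   # roughly twenty iterations of drift before any alarm
```

If the recoverable region is narrower than the monitor's threshold, every one of those silent iterations is lost ground.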
Optimisation loops in safety-critical systems present a specific risk: they are designed to seek extrema, but without deterministic constraints, they cannot distinguish between beneficial optimisation and runaway optimisation toward destructive states. Alignment—ensuring a system pursues intended goals—is necessary but insufficient. A well-aligned system can still exhibit recursive instability if its underlying dynamics are unbounded.
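The difference between optimisation with and without a deterministic constraint can be made concrete. This sketch ascends an unbounded reward r(x) = x; the bounds argument is a hypothetical safe interval, enforced by projecting each iterate back into it.

```python
def ascend(steps: int, lr: float = 1.0, bounds=None) -> float:
    """Gradient ascent on r(x) = x, optionally projected onto [lo, hi]."""
    x = 0.0
    for _ in range(steps):
        x += lr * 1.0                       # gradient of r(x) = x is 1
        if bounds is not None:
            lo, hi = bounds
            x = min(max(x, lo), hi)         # projection: deterministic bound
    return x

print(ascend(1000))                       # runaway: grows with every step
print(ascend(1000, bounds=(0.0, 5.0)))    # never leaves the safe interval
```

The unconstrained loop is doing exactly what it was built to do, seek the extremum; only the projection distinguishes beneficial optimisation from runaway optimisation.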
In safety-critical systems, probabilistic safeguards provide statistical guarantees that hold on average. But averages do not prevent the single catastrophic failure that destroys a turbine, crashes an aircraft, or corrupts a financial system. Deterministic constraints, by contrast, enforce zero-overshoot behaviour in every execution, without exception.
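A minimal sketch of such an every-execution guarantee, with illustrative gain and setpoint: the controller's step toward the setpoint is clamped to the remaining gap, which makes "the state never passes the setpoint" an invariant of the loop rather than a statistical property.

```python
def approach(setpoint: float, gain: float, steps: int = 50) -> list:
    """Drive state from 0 toward setpoint; clamped step forbids overshoot."""
    state, trace = 0.0, []
    for _ in range(steps):
        gap = setpoint - state
        step = min(gain * gap, gap)   # clamp: never step past the setpoint
        state += step
        trace.append(state)
    return trace

trace = approach(setpoint=10.0, gain=2.5)   # aggressive gain, yet no overshoot
assert all(s <= 10.0 for s in trace)        # holds in every execution
```

Note that the guarantee does not depend on tuning: even a gain of 2.5, which would make an unclamped loop oscillate divergently, cannot violate the invariant.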