The software powering your bank’s ATM network was likely written in COBOL sometime in the 1970s or 1980s. The flight management system on a commercial aircraft traces its lineage to code certified in the 1990s, or earlier. The control software for many nuclear facilities has not been substantially rewritten in decades. None of this is an accident or a failure of modernization. It is, counterintuitively, the reason those systems still work.

Modern software development culture prizes novelty. New frameworks, new languages, new architectural patterns. The implicit assumption is that newer means better. The evidence from the most safety-critical systems in the world suggests the opposite: age, under the right conditions, is a feature.

1. Old Code Has Already Failed

Software fails in ways its authors didn’t anticipate. A new codebase contains every latent bug its developers haven’t found yet, plus every interaction with external systems they haven’t tested. A codebase that has been running in production for thirty years has, by definition, encountered an enormous range of real-world conditions. The bugs that could be found have been found. What remains has survived.

This is not a theoretical point. The COBOL systems running at many of the world’s largest banks process an estimated $3 trillion in daily transactions. IBM has noted that COBOL handles more transactions per day than Google searches. Those systems are not reliable despite their age. They are reliable partly because of it. Every edge case that crashed them got fixed. Every race condition that appeared under load got patched. The surviving code is a distillation of three decades of real-world adversarial testing.

2. Certification Creates a Forcing Function That Modern Development Avoids

In aviation, medical devices, and nuclear power, software must be certified before it can be deployed. For DO-178C, the aviation software standard, certification can cost more than the initial development. It requires exhaustive documentation of requirements, traceability from each requirement to the test cases that verify it, and structural coverage evidence showing that every line of code is exercised by tests (at the highest assurance level, every decision path as well). The process is slow and expensive enough that nobody rewrites certified software unless they absolutely have to.
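
To make that traceability demand concrete, here is a minimal sketch, in Python with invented requirement and test identifiers, of the bookkeeping a certification toolchain enforces: every requirement must trace to at least one verifying test, and no test may trace to a requirement that does not exist.

```python
# Minimal sketch of a requirement-to-test traceability check, in the spirit of
# DO-178C's traceability objectives. All identifiers are invented examples.

requirements = {
    "REQ-001": "Display stall warning when angle of attack exceeds threshold",
    "REQ-002": "Suppress stall warning during ground operations",
}

# Each test case declares which requirements it verifies.
test_traces = {
    "TEST-101": ["REQ-001"],
    "TEST-102": ["REQ-001", "REQ-002"],
}

def check_traceability(reqs, traces):
    covered = {r for trace in traces.values() for r in trace}
    untested = set(reqs) - covered   # requirements no test verifies
    orphaned = covered - set(reqs)   # tests pointing at unknown requirements
    return untested, orphaned

untested, orphaned = check_traceability(requirements, test_traces)
assert not untested, f"requirements without tests: {untested}"
assert not orphaned, f"tests tracing to unknown requirements: {orphaned}"
print("traceability check passed")
```

Real certification tooling goes much further, adding structural coverage analysis and review of derived requirements, but the underlying discipline is exactly this bookkeeping, applied exhaustively.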

That reluctance is the point. The cost of certification forces a discipline that commercial software development rarely achieves: you define exactly what the software must do, you prove it does that, and then you stop changing it. The absence of continuous updates is not stagnation. It is stability by design. Modern software, updated weekly or daily, is a moving target whose current behavior is never fully characterized.

[Figure: flowchart of the DO-178C aviation software certification process. The standard requires traceability from every requirement to every test case; the expense of compliance is a feature, not a bug.]

3. The Rewrite Fallacy Kills More Systems Than Technical Debt Does

When organizations do attempt to replace legacy systems, the results are instructive. The UK’s National Health Service attempted to replace its aging patient record systems with a unified national program launched in 2002. After nearly a decade and roughly £10 billion spent, the program was dismantled in 2011 with the original patchwork of older systems still running. The State of California attempted a similar modernization of its Department of Motor Vehicles systems, spending over $44 million before canceling the project in 1994.

The pattern repeats across industries. The new system cannot replicate every behavior of the old one, because no one fully documented what the old one does. Requirements that were implicit in the legacy code, baked in through years of patches and workarounds, get missed. The old system accumulated institutional knowledge in a form that resists transfer.
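
A hypothetical sketch of how such an accident becomes load-bearing: a legacy routine pads names to a fixed width because of a long-dead file format, and a consumer written years later silently depends on that padding. The names and formats below are invented for illustration.

```python
# Hypothetical example of an implicit requirement hiding in legacy behavior.
# The 30-character fixed-width padding began as a file-format limitation,
# not a documented requirement.

def legacy_format_name(name: str) -> str:
    return name[:30].ljust(30)   # truncate and pad to exactly 30 characters

def downstream_parse_record(record: str) -> tuple:
    # A consumer written years later assumes the name occupies columns 0-29.
    return record[:30].rstrip(), record[30:]

record = legacy_format_name("ACME HOLDINGS") + "2024-01-15"
print(downstream_parse_record(record))   # ('ACME HOLDINGS', '2024-01-15')

# A "clean" rewrite that returns the name unpadded would pass its own unit
# tests and still break every consumer relying on the fixed offset.
def rewritten_format_name(name: str) -> str:
    return name   # looks tidier, silently violates the implicit contract
```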

4. Simplicity at the Hardware Level Is a Genuine Advantage

The processors running safety-critical embedded systems are often 16-bit or 32-bit designs dating back to the 1980s and 1990s. This is not nostalgia. It is a deliberate choice rooted in a practical reality: a simpler processor has fewer possible states, no speculative execution pathways, and a smaller attack surface. You can formally verify what a simple processor will do. You cannot formally verify what a modern out-of-order superscalar processor will do under every possible instruction sequence.
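
A toy sketch of why the size of the state space matters: when a machine is small enough, you can enumerate every reachable state and check a safety property against all of them, which is the brute-force core of model checking. The signal interlock below is invented for illustration.

```python
# Toy illustration of exhaustive state-space exploration, the brute-force core
# of model checking. The interlock and its safety property are invented.

def step(state, cmd):
    """A tiny signal interlock: granting one direction forces the other red."""
    if cmd == "north_go":
        return (True, False)
    if cmd == "east_go":
        return (False, True)
    if cmd == "all_stop":
        return (False, False)
    return state

def safe(state):
    north_green, east_green = state
    return not (north_green and east_green)   # never both directions green

# Enumerate every state reachable from the initial state under every command.
initial = (False, False)
frontier, seen = [initial], {initial}
while frontier:
    state = frontier.pop()
    for cmd in ("north_go", "east_go", "all_stop"):
        nxt = step(state, cmd)
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

assert all(safe(s) for s in seen)
print(f"explored {len(seen)} reachable states; the interlock property holds in all")
```

Three reachable states can be checked by hand. A processor with caches, deep pipelines, and speculation has a state space no such enumeration can cover, which is the practical content of the claim above.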

The Voyager probes, launched in 1977, are still transmitting data from interstellar space. Their onboard computers have a combined memory of roughly 69 kilobytes; a modern smartphone processor contains more than 15 billion transistors and carries millions of times more memory. The Voyager software has been running continuously for nearly 50 years. The comparison isn’t entirely fair, but the underlying point holds: complexity is the enemy of reliability, and old systems often have less of it.

5. The Modern Development Cycle Is Structurally Hostile to Reliability

Agile development, continuous integration, and rapid release cycles are genuinely useful for consumer software where fast iteration matters. They are structurally incompatible with building reliable systems, because reliability requires stability and stability requires not changing things. A codebase that deploys new versions daily is a codebase whose behavior at any given moment is not fully known.

This is not a criticism of agile as a methodology. It is an observation about the mismatch between development culture and reliability requirements. The industries that produce the most reliable software (aviation, medical devices, industrial control systems) are also the industries most resistant to modern development practices. That correlation is not coincidental.

6. Longevity Creates Expertise That Cannot Be Replicated Quickly

The engineers who maintain legacy COBOL systems at major financial institutions are, in many cases, retired or approaching retirement. Banks have been running advertisements and workshops to attract younger COBOL programmers for years, with limited success. This is framed as a crisis, and in workforce terms it is. But it also reflects something important: deep expertise in a system takes decades to accumulate.

The engineers who have maintained these systems for thirty years carry knowledge that was never written down. They know which modules are fragile, which patches interact in unexpected ways, which behaviors were intentional and which were accidents that became load-bearing. That knowledge is the actual reliability asset, not just the code. Modern systems, rewritten every few years and staffed by developers with short tenures, cannot accumulate an equivalent depth.

7. Age Reveals Which Abstractions Actually Hold

Software is built on abstractions: assumptions about how data will be structured, how systems will communicate, what the operating environment will look like. Most abstractions leak eventually. The ones that have survived thirty or forty years in production are, by revealed preference, the ones that were accurate enough to hold under real conditions.

SQL, released commercially in the late 1970s, still underlies most of the world’s data infrastructure. The internet protocol suite was designed in the mid-1970s and routes essentially all global internet traffic. These are not the technologies that most programmers find exciting. They are the technologies that turned out to be right. There is a selection effect at work in old software that novelty cannot replicate: the bad abstractions have already been eliminated, usually painfully.
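
As a small illustration of that durability, a query written in the SQL of the early 1980s runs unchanged against a modern engine. The sketch below uses Python’s built-in sqlite3 module; the table and rows are invented.

```python
# A SELECT statement in the SQL of the early 1980s, run unchanged on a modern
# engine. Uses Python's built-in sqlite3 module; table and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [(1, "ADA", 1200.0), (2, "GRACE", 310.5), (3, "EDSGER", 87.25)],
)

# The query itself predates every framework currently in fashion.
rows = conn.execute(
    "SELECT owner, balance FROM accounts WHERE balance > 100 ORDER BY balance DESC"
).fetchall()
print(rows)   # [('ADA', 1200.0), ('GRACE', 310.5)]
```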

The uncomfortable implication for the industry is that the push to modernize legacy systems is often driven less by technical necessity than by developer preference and management anxiety about running something nobody fully understands anymore. Both are real pressures. Neither is a good reason to replace software that has been proven, at scale, over decades.