The most expensive software in your organization probably isn’t on any vendor invoice. It’s the internal tool from 2014, the integration nobody fully understands, the microservice that one engineer kept alive for six years and then left. It doesn’t send you a bill. It just slowly consumes everything around it.
This is how technical debt actually works in practice. Not as a dramatic collapse, but as a slow tax on every subsequent decision. Here are the specific mechanisms through which un-sunsetted software quietly becomes your largest expense.
1. The Maintenance Floor Never Goes to Zero
Every running system has a maintenance floor: the irreducible minimum of attention it requires just to stay functional. Security patches, dependency updates, credential rotations, the occasional 3am page when something upstream changes. Companies routinely undercount this cost because it’s distributed across people’s time rather than appearing as a line item.
A system that takes eight hours a month to maintain sounds cheap until you have forty of them. That’s 320 hours a month, or two full-time engineers doing nothing but keeping the lights on. Amazon’s phrase for this kind of work, “undifferentiated heavy lifting,” captures it precisely: effort that consumes real resources without moving the product forward. The older the system, the higher its floor, because the environment around it keeps changing even when the system itself doesn’t.
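The arithmetic above can be made explicit. This is a back-of-the-envelope sketch, not a benchmark; the hours-per-FTE and loaded-cost figures are illustrative assumptions you would replace with your own numbers.

```python
# Rough maintenance-floor estimate. All constants are assumptions.
HOURS_PER_FTE_MONTH = 160            # ~40 h/week * 4 weeks
LOADED_COST_PER_FTE_YEAR = 250_000   # assumed fully loaded engineer cost

def maintenance_floor(num_systems: int, hours_per_system_month: float) -> dict:
    """Translate distributed upkeep hours into FTEs and annual dollars."""
    monthly_hours = num_systems * hours_per_system_month
    ftes = monthly_hours / HOURS_PER_FTE_MONTH
    annual_cost = ftes * LOADED_COST_PER_FTE_YEAR
    return {
        "monthly_hours": monthly_hours,
        "ftes": round(ftes, 1),
        "annual_cost": round(annual_cost),
    }

# 40 systems * 8 h/month = 320 h/month = 2.0 FTEs
print(maintenance_floor(40, 8))
```

The point of running the numbers isn’t precision; it’s making a cost visible that is otherwise smeared invisibly across many calendars.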
2. Old Code Raises the Cost of New Code
The second-order effect is where legacy software really destroys value. Every new feature has to be built around the constraints of what already exists. New engineers spend weeks learning the edges of old systems before they can ship anything adjacent to them. Integration points multiply. What should be a two-week project becomes a two-month project because three of those weeks are archaeology.
This is the compounding nature of technical debt that the “debt” metaphor actually gets right. You’re not just paying interest on the old system; you’re paying interest on every future system that has to accommodate it. Stripe’s engineering blog has written candidly about the cost of long-lived systems at scale, and the pattern they describe is consistent: the blast radius of old architectural decisions keeps expanding.
3. The Engineers Who Built It Won’t Be There
Institutional knowledge evaporates faster than codebases do. Median engineer tenure at tech companies is commonly reported at around two years; the system an engineer builds might run for ten. What remains after they leave is documentation that was already incomplete when it was written, and tribal knowledge that lives nowhere except the heads of whoever is still around.
This creates a specific kind of operational risk that doesn’t show up in any cost model until something breaks. When it does, the remediation cost isn’t just the fix; it’s the hours of forensic investigation, the decisions made under uncertainty, and the shadow of anxiety that follows. The software nobody rewrites is usually the most critical, and that’s exactly why the knowledge problem is so dangerous. Critical systems attract the most caution, which translates directly into the slowest engineering.
4. Security Debt Has a Different Risk Profile Than Technical Debt
Most technical debt degrades performance or velocity. Security debt can end the company. The distinction matters because the two types of debt are often conflated in conversations about legacy systems, and they warrant different urgency.
Systems built before modern security practices became standard often carry structural vulnerabilities that can’t be patched out because they’re architectural. An internal tool that was never hardened because it was “just internal” sits inside a perimeter that means little in an era of credential phishing and compromised contractors. The cost of a breach (legal fees, customer notification, regulatory scrutiny, reputational damage) dwarfs the engineering cost of decommissioning the system that enabled it. Companies that have done the math on this almost uniformly say they should have acted sooner.
5. Vendor Lock-In Compounds Over Time
Legacy systems often depend on vendors or libraries that have themselves become legacy. The database version the vendor no longer supports. The authentication library with a CVE that can’t be patched without breaking the application. The cloud service that’s being deprecated on a timeline you don’t control.
Each of these dependencies creates a forcing function that eventually arrives whether you’re ready or not. The organizations that migrate on their own schedule pay once. Those that migrate under pressure from a vendor EOL or a security incident pay two or three times: once for the panic, once for the shortcuts, and again when those shortcuts need to be fixed. The cheapest time to deal with a dependency is before it’s urgent, which is almost never when it gets dealt with.
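Tracking these forcing functions doesn’t require tooling more sophisticated than an inventory with dates on it. A minimal sketch, where the dependency names and end-of-life dates are entirely hypothetical:

```python
from datetime import date

# Hypothetical inventory: dependency -> vendor end-of-life date.
# Names and dates are illustrative, not real EOL schedules.
DEPENDENCY_EOL = {
    "legacy-db-9.6": date(2021, 11, 11),
    "auth-lib-2.x": date(2025, 6, 30),
    "queue-service-v1": date(2026, 3, 1),
}

def migration_urgency(today: date, warn_days: int = 365) -> dict:
    """Bucket each dependency by how close its forcing function is."""
    buckets = {"past_eol": [], "eol_within_window": [], "ok": []}
    for dep, eol in DEPENDENCY_EOL.items():
        days_left = (eol - today).days
        if days_left < 0:
            buckets["past_eol"].append(dep)
        elif days_left <= warn_days:
            buckets["eol_within_window"].append(dep)
        else:
            buckets["ok"].append(dep)
    return buckets

print(migration_urgency(date(2025, 1, 15)))
```

Anything in the `past_eol` bucket is already being paid for, whether or not anyone is looking.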
6. The Sunset Decision Keeps Getting Deferred for Rational Reasons
Here’s the uncomfortable part: the individual decisions that lead to these outcomes are usually defensible. Sunsetting a system is expensive, disruptive, and risky. Migration projects have a long history of going over time and budget. The engineers most qualified to replace the old system are also the most qualified to keep it running, and keeping it running is faster in the short term.
So nothing happens. Quarter after quarter, the system survives by inertia. The costs accumulate in the background while every decision-maker can point to a rational reason for not acting now. This is less a failure of individual judgment than a failure of accounting. If the true cost of a legacy system were visible on a dashboard, alongside the engineering time it consumes, the knowledge risk it represents, and the velocity tax it imposes on adjacent work, the sunset case would make itself. The problem is that most organizations never build that dashboard. They just pay the bill without reading it.
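The dashboard that paragraph imagines can be sketched as a simple cost model. Every coefficient below is an assumption an organization would have to measure for itself; the structure, not the numbers, is the point.

```python
from dataclasses import dataclass

HOURLY_RATE = 150  # assumed loaded engineering cost per hour

@dataclass
class LegacySystem:
    name: str
    maintenance_hours_month: float   # direct upkeep (the maintenance floor)
    velocity_tax_hours_month: float  # extra time imposed on adjacent work
    knowledge_risk_hours: float      # estimated forensic hours if it breaks
    break_probability_year: float    # chance of a serious incident per year

    def annual_cost(self) -> float:
        """True yearly cost: upkeep + velocity tax + expected incident cost."""
        steady = (self.maintenance_hours_month
                  + self.velocity_tax_hours_month) * 12
        expected_incident = self.knowledge_risk_hours * self.break_probability_year
        return (steady + expected_incident) * HOURLY_RATE

# Illustrative figures for a hypothetical internal tool.
reporting_tool = LegacySystem("reporting-tool-2014", 8, 20, 400, 0.25)
print(round(reporting_tool.annual_cost()))
```

Even with made-up inputs, the exercise changes the conversation: a sunset proposal stops competing against “free” and starts competing against a number.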