Software accumulates. Features get added, requirements shift, and the codebase grows in one direction: larger. But reliability doesn’t scale with size. If anything, the relationship runs the other way. The teams that ship the most dependable software tend to be the ones who treat deletion as seriously as addition.

This isn’t about minimalism as an aesthetic. It’s about how complexity actually kills software quality.

1. Every Line of Code Is a Liability, Not an Asset

Code doesn’t just sit there. It has to be read by new engineers, executed by processors, compiled by toolchains, scanned by security audits, and kept compatible with every library update that touches it. A function nobody calls still adds cognitive overhead. A configuration flag nobody flips still has to be considered when someone refactors the surrounding logic.

The Linux kernel team takes this seriously. Patches that remove code often receive the same scrutiny as patches that add it, because reviewers understand that less code means fewer places for bugs to hide. When you delete a function, you don’t just remove its bugs. You also remove the bugs that other code would have introduced by interacting with it.
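The hunt for never-called functions can even be partially automated. Here is a toy static sweep over a hypothetical module, finding functions that are defined but never invoked by name. Real tools such as vulture are far more thorough; this only sketches the idea, and the function names are invented for illustration.

```python
import ast

SOURCE = """
def active():
    return helper()

def helper():
    return 1

def forgotten():
    return 2

active()
"""

tree = ast.parse(SOURCE)
# Every function definition in the module
defined = {node.name for node in ast.walk(tree)
           if isinstance(node, ast.FunctionDef)}
# Every function invoked by a bare name
called = {node.func.id for node in ast.walk(tree)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
print(sorted(defined - called))  # ['forgotten']
```

A sweep like this produces candidates, not verdicts: dynamic dispatch, reflection, and external callers all evade it, which is part of why deletion requires human judgment.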

2. Dead Code Actively Misleads Engineers

Code that isn’t executed still looks like it might be. A developer new to a codebase has no reliable way to distinguish active logic from abandoned experiments without running the system or doing careful analysis. This isn’t a hypothetical problem. Engineers regularly spend time understanding, preserving, or working around code that has been functionally dead for years.

The danger compounds when someone finds a bug in a live code path and, while fixing it, assumes a nearby dormant path is relevant. They might mirror the fix there too, adding complexity to something that runs zero times in production, or worse, they might leave it inconsistent with the live path and create a latent problem for the next time the dead code gets accidentally reactivated.
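A hypothetical billing module makes the second failure mode concrete: the live path gets a fix, the dormant path never does, and reactivating the dormant path reintroduces the bug. Both function names and the pricing logic here are invented for illustration.

```python
# Live path: received a fix to reject negative amounts.
def price_live(amount):
    if amount < 0:
        raise ValueError("negative amount")
    return amount * 1.2

# Dormant "legacy" path: the fix was never mirrored here.
def price_legacy(amount):
    return amount * 1.2  # silently accepts negatives

print(price_legacy(-10))  # -12.0, no error raised
```

The inconsistency costs nothing while the legacy path stays dead, which is exactly why it survives review after review.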

[Figure: diagram of abstraction layers fading and dissolving upward. Caption: Abstractions that outlive their purpose don't become neutral. They become traps.]

As the article Deleting Dead Code Is Harder Than Writing New Code argues, the friction isn’t technical. It’s psychological. Nobody wants to delete something they can’t fully trace. That fear is understandable, and it is exactly why dead code accumulates for years.

3. Smaller Codebases Have Fewer Interactions to Go Wrong

Bugs don’t usually live in single functions in isolation. They emerge from interactions: function A passes an assumption to function B that function C doesn’t share, and the failure only surfaces when all three execute in sequence under specific conditions. The number of possible interactions in a system grows much faster than the number of components. Double the codebase and you more than double the interaction surface.
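The growth rate is easy to see with pairwise interactions alone. Treating components as nodes that can interact in pairs, the count is n choose 2, so doubling the component count roughly quadruples the pairs (and real interaction chains of three or more components grow faster still):

```python
from math import comb

# Pairwise interactions among n components grow quadratically.
for n in (10, 20, 40):
    print(n, comb(n, 2))  # 10 -> 45, 20 -> 190, 40 -> 780
```

Halving a codebase therefore removes far more than half of the places where mismatched assumptions can meet.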

This is why rewrites sometimes work, even though they’re risky. When teams at Amazon’s retail organization and Netflix’s streaming infrastructure ran major internal simplification efforts, they weren’t just cleaning up code aesthetically. They reduced the number of components that could interact in unexpected ways. With fewer components, the space of possible failures shrinks.

4. Feature Flags and Configuration Branches Are Hidden Complexity

Feature flags are useful. They’re also a form of code that multiplies complexity. Every independent flag doubles the number of states the system can be in: two flags mean four states, and ten flags mean 1,024. Most of those states have never been tested together, because nobody thought to test them.
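The state explosion can be enumerated directly. The flag names below are invented; the point is only that each independent boolean flag doubles the reachable configurations:

```python
from itertools import product

# Hypothetical flag names; each boolean flag doubles the state space.
flags = ["new_checkout", "dark_mode", "beta_search"]
states = list(product((False, True), repeat=len(flags)))
print(len(states))  # 8 == 2**3
print(2 ** 10)      # 1024 configurations for ten flags
```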

The reasonable response isn’t to avoid feature flags. It’s to delete them aggressively once a feature is fully rolled out. A flag that controlled a gradual release two years ago, now set to 100% everywhere, is pure overhead. It adds branching logic to code, adds noise to configuration files, and occasionally gets toggled incorrectly during an incident. Deleting it makes the system simpler and the incident runbook shorter.
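A minimal before-and-after sketch of that deletion, with hypothetical function names. Since the flag has been pinned to 100% everywhere, removing it cannot change production behavior:

```python
def process_new(order):
    return f"new:{order}"

def process_legacy(order):
    return f"legacy:{order}"

# Before: branch on a flag that has been pinned to 100% for two years.
def handle_before(order, flag_enabled=True):
    if flag_enabled:                    # always True in production
        return process_new(order)
    return process_legacy(order)        # effectively dead, still maintained

# After: delete the flag, the branch, and the legacy path outright.
def handle_after(order):
    return process_new(order)

print(handle_before("A1") == handle_after("A1"))  # True: behavior unchanged
```

Confirming the flag really is at 100% in every environment is the one piece of due diligence the deletion requires; after that, the branch is pure risk.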

5. Abstraction Layers That Outlive Their Purpose Become Traps

Abstractions are worth building when they reduce repetition or isolate genuine complexity. They become liabilities when the thing they were abstracting over no longer exists. A database abstraction layer written to support three different backends, when the company settled on one backend three years ago, now adds code that engineers have to navigate without getting any benefit from the flexibility it once provided.
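A stripped-down sketch of that situation, with invented class names. The interface once dispatched across three backends; with one survivor, it is a layer every reader must traverse for zero flexibility in return:

```python
# Before: an interface written for three backends; only one survived.
class StorageBackend:
    def get(self, key):
        raise NotImplementedError

    def put(self, key, value):
        raise NotImplementedError

class PostgresBackend(StorageBackend):  # the sole remaining implementation
    def __init__(self):
        self._rows = {}

    def get(self, key):
        return self._rows.get(key)

    def put(self, key, value):
        self._rows[key] = value

# After the deletion, callers would simply hold the concrete type
# (or plain functions) and the StorageBackend layer would disappear.
store = PostgresBackend()
store.put("user:1", "ada")
print(store.get("user:1"))  # ada
```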

This is premature abstraction’s quieter cousin: abstraction that was justified when it was built but lost its justification as the system simplified around it. Removing it requires understanding the original intent, which takes time, which is why it rarely happens. The fix is building deletion reviews into the same process as addition reviews.

6. Tests for Deleted Features Create False Confidence

Test coverage is valuable, but only if the tests cover behavior the system is supposed to have. When features get removed without removing their tests, the test suite keeps passing while providing no information about the current system’s correctness. Worse, it inflates coverage numbers. Engineers see high test coverage and feel safe. Some of that coverage is testing code paths that should no longer exist.

This connects to a broader point about reliability metrics. A system’s real reliability comes from how well its behavior matches its intended specification, not from the raw size of its test suite or its codebase. Deleting tests for deleted features makes the coverage signal honest again.
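The inflation is plain arithmetic. With hypothetical numbers, suppose a 1,000-line codebase reports 80% line coverage, but a removed feature accounts for 200 of the covered lines:

```python
# Hypothetical numbers: 200 lines belong to a removed feature that is
# still fully exercised by its old tests.
covered, total = 800, 1000
dead_lines = 200

inflated = covered / total
honest = (covered - dead_lines) / (total - dead_lines)
print(f"{inflated:.0%} vs {honest:.0%}")  # 80% vs 75%
```

Deleting the dead code and its tests drops the reported number but raises its meaning: every remaining percentage point now describes behavior the system is actually supposed to have.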

7. The Teams That Delete Code Have Different Instincts

Deletion is a cultural signal. Teams that delete code regularly have accepted that software is not a monument. They treat the codebase as a working tool, something to be kept sharp rather than preserved in amber. That mindset changes how engineers approach new additions too. If you know that code you add today might be deleted in six months when requirements shift, you write it differently: with less ceremony, fewer layers, more directness.

The real cost of keeping software alive includes the cognitive tax on every engineer who has to read and maintain what came before them. Teams that pay down that cost proactively, through deliberate deletion, ship faster and with fewer surprises. Their codebases aren’t just smaller. They’re more honest about what the system actually does.