There is a running joke in software engineering that the best commit you will ever make is the one that shows a negative line count. Delete 500 lines, ship nothing visibly new, and somehow the codebase gets faster, the tests get greener, and your teammates start buying you coffee. If you have spent any real time in a production environment, you already know this is not really a joke. The engineers who command the highest salaries and the most respect are very often the ones who have learned to see code not as output, but as liability.

This counterintuitive truth shapes how the most effective engineering teams operate, and it connects to a broader pattern in tech where the most valuable moves are often invisible ones. The same logic explains why tech companies deliberately hide their best features rather than showing everything they can do. Restraint, in software as in product strategy, turns out to be a form of mastery.

Why Code Is a Liability, Not an Asset

Here is the mental model shift that separates senior engineers from everyone else: every line of code you write is a line someone has to read, debug, test, maintain, and eventually migrate. It is not an asset sitting in a vault accruing value. It is more like a mortgage. The feature you shipped last quarter is now a monthly payment in the form of cognitive overhead, security surface area, and refactoring friction.

This is why the concept of technical debt (the accumulated cost of shortcuts, redundant systems, and poorly considered abstractions) is one of the most financially meaningful ideas in software engineering, even though it rarely shows up on a balance sheet. A codebase with 200,000 lines of tightly coupled logic is not twice as valuable as one with 100,000 lines. It is often significantly more expensive to operate and extend.

The engineers who understand this at a gut level are the ones who look at a pull request and ask “do we need this at all?” before asking “does this work?”

[Image: side-by-side comparison of a complex, tangled codebase versus a clean, minimal one]
Complexity accumulates quietly. The left side did not start that way.

The Real Skill Is Knowing What to Cut

Deleting code sounds simple. It is not. Removing the wrong abstraction breaks three services you forgot were depending on it. Deleting a function that looks unused turns out to disable a logging path that only activates under specific error conditions you will not see for six months. This is why senior engineers with deep context in a system are uniquely positioned to do this work, and why that context commands a premium.

There is a term called “dead code” for logic that can never be reached during execution, and most large production systems are full of it. Flags for experiments that ended two years ago. Compatibility shims for API versions nobody uses. Migration utilities that ran once in 2019 and have been dormant ever since. Identifying and safely removing this code requires understanding the system’s history, its current behavior, and its failure modes well enough to be confident nothing breaks when you pull a thread.
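The stale-flag case above can be sketched in a few lines. This is a hypothetical illustration, not taken from any real codebase: the flag name and the pricing rule are invented, and the point is only that a branch guarded by a permanently false constant can never run, so removing it leaves behavior unchanged.

```python
# Hypothetical illustration of dead code behind a stale experiment flag.
# The flag was hardcoded off when the experiment ended and never flipped
# back, so the guarded branch is unreachable.

CHECKOUT_EXPERIMENT_2019 = False  # experiment concluded; flag is permanently off

def price_with_flag(base_price: float) -> float:
    if CHECKOUT_EXPERIMENT_2019:
        # Dead branch: cannot execute while the flag stays False.
        return base_price * 0.9
    return base_price

# After confirming via logs, traffic, and the flag's history that the
# branch never fires, the safe deletion leaves only:
def price(base_price: float) -> float:
    return base_price
```

The two functions are behaviorally identical for every input, which is exactly the property you must establish before deleting: `price_with_flag(100.0) == price(100.0)`.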

That kind of knowledge is not cheap. It is built over years of being paged at 2am, of reading changelogs nobody else reads, of asking “why does this exist” until you find the person who wrote it or the incident report that prompted it. It is, in the most literal sense, institutional knowledge with a price tag.

The Complexity Trap and Why Teams Fall Into It

Most codebases do not become complicated overnight. They accumulate complexity the way a city accumulates infrastructure: one reasonable decision at a time, each one solving a real problem, none of them accounting for what the system will look like five years later. A microservice that made sense at 10 engineers becomes a coordination nightmare at 100. An ORM (Object-Relational Mapper, a tool that abstracts database queries into code objects) that simplified early development becomes a performance bottleneck when the data model grows complex enough.

The trap is that adding code is always the path of least resistance. It is faster to write a new handler than to refactor an old one. It is easier to add a flag than to redesign a flow. Teams under deadline pressure consistently choose the local optimum (ship the feature now) over the global one (keep the system comprehensible). This is not a failure of intelligence. It is a structural incentive problem, which is worth thinking about carefully when you consider how tech companies deliberately design software that becomes slower over time as a predictable consequence of exactly this kind of accumulated decision-making.

[Image: geological strata metaphor showing layers of accumulated legacy code being excavated]
Most production systems carry years of sediment. The work of removing it safely is where deep expertise earns its price.

What Deletion Actually Looks Like in Practice

Let me give you a concrete example. Imagine a payments service that has grown over five years to support four different checkout flows: a legacy cart system, a mobile-optimized version, an enterprise API, and an experimental one-click flow that never launched. Each flow has its own validation logic, its own error handling, its own test suite. They share a database but diverged in their business logic years ago.

A junior engineer asked to “add discount code support” will add it to each flow independently, probably copying and pasting with minor variations. A senior engineer recognizes this as the moment to consolidate. They spend two weeks not adding the feature but reducing four flows to one unified flow with well-tested edge cases, then add discount support once in a single, auditable place. The feature ships two weeks later than the naive approach. The codebase is now 30 percent smaller. The next ten features take half as long to build.

This is the work that does not show up in velocity metrics. It does not generate a satisfying stream of green checkboxes. It requires confidence, deep knowledge, and the organizational trust to say “I am going to make this slower now so we can go faster forever.” Those qualities are rare, which is why they are expensive.

It also requires the kind of focused, distraction-free thinking that is genuinely hard to protect in a typical engineering organization. The engineers who do this work best tend to be deliberate about how they structure their time, which connects to why tech workers are scheduling their calendars backwards and reclaiming hours a week to do exactly this kind of deep, high-leverage thinking.

The Market Finally Catches Up

For a long time, software compensation was implicitly tied to output: features shipped, tickets closed, lines written. This made a certain kind of intuitive sense in the early days of a product when raw building speed was the actual bottleneck. But as systems matured and teams scaled, the real bottleneck shifted. It stopped being “can we build this” and became “can we understand what we already built well enough to change it safely.”

The market has, slowly and unevenly, started pricing this correctly. Staff and principal engineers (the senior-most individual contributor levels at most tech companies) are often evaluated explicitly on their ability to reduce complexity, improve system legibility, and make other engineers more effective. That last point is crucial. When a great engineer deletes the right 500 lines, they are not just improving the codebase for themselves. They are making every future engineer who touches that code faster, less confused, and less likely to introduce bugs.

That multiplier effect is the real reason deletion pays more than addition. You are not just doing less work. You are compressing the future cost of the system for everyone who comes after you. And in a field where engineering time is one of the most expensive inputs a company has, that compression is genuinely, measurably valuable. The best engineers have always known this. The industry is just now learning to pay for it properly.