The engineer who owns 40% of your codebase is probably a liability.
That’s a hard thing to say, and most engineering managers won’t say it, because the number feels like evidence of contribution. Forty percent sounds like work. It sounds like dedication. In software economics, it is almost always a sign that something has gone wrong.
The Metric That Ate Engineering Culture
Lines of code as a productivity signal has been discredited for decades. Fred Brooks identified the problem in The Mythical Man-Month in 1975. Every subsequent generation of engineering leadership has nodded along and then quietly reinstated some version of it, because the pressure to quantify developer output never goes away. Commit frequency, story points, pull request volume, and code coverage percentages are all, at some level, the same mistake wearing different clothes.
The engineer who writes the most code is not building the most value. In many cases, they’re destroying it. More code means more surface area for bugs, more complexity for the next person, more maintenance burden compounding indefinitely. Software debt doesn’t expire. It accrues interest every day the system runs.
This is why the engineer who costs $400K is often cheaper than the one you hired to save money. The cost of an engineer isn’t their salary. It’s their salary plus the total long-term cost of every decision they embed into your system. A prolific but undisciplined engineer can write enough code in two years to occupy three engineers for a decade.
What 3% Actually Signals
The engineer who contributed 3% of your codebase might have spent their time doing any of the following: deleting redundant systems, refactoring critical paths that nobody else understood well enough to touch, writing the one authentication module that everything else depends on, or designing the data model that made the next two years of feature development tractable. None of that shows up impressively in contribution graphs.
Knowledge concentration is the deeper story. Research on software systems consistently finds that a small portion of files account for a disproportionate share of bugs and change activity. The engineers who understand those files, who can reason about them under pressure and modify them safely, are the engineers whose absence would hurt you most. Their contribution isn’t measured in lines; it’s measured in the confidence the team has when approaching the hard parts of the system.
This is related to what organizational theorists call “key person risk,” but the software version is more pernicious because it’s invisible. When your highest-volume committer leaves, everyone notices immediately. When the engineer who really understood the payment processing logic leaves, you might not notice for six months, until someone has to change it.
The Difference Between Writing Code and Owning Complexity
There’s a useful distinction between engineers who produce code and engineers who manage complexity. The best engineers are complexity destroyers. They look at a system and ask what can be removed, standardized, or made so simple that it doesn’t need to be understood again. This work produces negative line counts and enormous positive value.
Kent Beck is credited with the observation that he could delete 90% of the code in any system without losing 10% of the value. That’s an overstatement made for rhetorical effect, but the directional point is correct. Most code in most mature systems exists because it was easier to add than to think, because deadlines compressed the design process, because the engineer who wrote it left before anyone understood it well enough to simplify it. The accumulation is not neutral. It is a drag on every future decision the team makes.
Software is never truly finished, and the systems that age best are the ones where someone cared about the shape of the thing, not just whether the tests passed. An engineer who writes less but writes with intention is compressing future costs in ways that show up nowhere on a performance review.
Why Volume Signals Get Rewarded Anyway
The persistence of lines-of-code thinking isn’t irrational. It’s a response to a real measurement problem. High-quality architectural work is genuinely hard to evaluate, especially for managers who aren’t deep in the code. Commit volume is legible. PR counts are legible. Closed Jira tickets are legible. The contribution of the engineer who spent three weeks figuring out that you didn’t need a microservices migration is not legible. Neither is the hour they spent explaining to a junior engineer why a particular abstraction was wrong, saving two weeks of rework.
This creates a systematic bias in hiring and promotion toward engineers who generate visible output rather than engineers who generate good outcomes. It also creates a systematic bias toward building over refactoring, adding over removing, and moving fast over moving correctly. The cumulative effect on a codebase over several years is predictable. Deleting a feature is harder than building one for exactly this reason: the incentives consistently favor addition.
The Bus Factor Is the Right Question
One practical proxy for evaluating what an engineer is actually worth to your system is the bus factor question: if this person were unavailable tomorrow, how much would the team’s effective capability drop, and for how long?
The 40% committer might matter less here than the raw number suggests. If their code is readable, well-documented, and structurally sound, the team absorbs the loss quickly. If it’s sprawling, underdocumented, and idiosyncratic (which high-volume code often is), the loss is significant, but it is a loss of navigation ability, not of architectural understanding. Someone else will eventually figure it out.
The 3% engineer who owns the authentication layer, understands the database schema at a conceptual level, and has been the person everyone goes to when something breaks in an unexpected way: that person’s absence lands differently. Their knowledge was never in the code to begin with. It was in their head, in the questions they asked, in the decisions they prevented.
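One crude way to surface candidate key-person risk from version control is to measure how concentrated authorship is per file. The sketch below, with hypothetical file and author names, computes each file’s share of changes made by its single most active author; in practice you would feed it records parsed from `git log --name-only`. It is only a starting point, since, as noted above, the most dangerous knowledge never reaches the repository at all.

```python
from collections import Counter, defaultdict

def author_concentration(commits):
    """Given (author, changed_files) records, return per-file the share of
    changes made by that file's single most active author (0.0 to 1.0).
    Values near 1.0 flag files one person effectively owns."""
    per_file = defaultdict(Counter)
    for author, files in commits:
        for f in files:
            per_file[f][author] += 1
    return {
        f: counts.most_common(1)[0][1] / sum(counts.values())
        for f, counts in per_file.items()
    }

# Hypothetical history; real input would come from parsing git log output.
history = [
    ("alice", ["auth/login.py", "auth/session.py"]),
    ("alice", ["auth/session.py"]),
    ("bob",   ["ui/nav.py"]),
    ("carol", ["ui/nav.py", "auth/login.py"]),
]
risk = author_concentration(history)
# auth/session.py has only one author, so its concentration is 1.0.
```

A follow-up conversation, not the score itself, is the point: a concentration of 1.0 on a peripheral file is noise, while the same score on the payment logic is exactly the six-months-later problem described above.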
How to Actually Measure Engineering Contribution
This is the hard part, because there’s no clean answer. The most credible frameworks focus on outcomes rather than outputs: did the system get more reliable, more extensible, or faster to develop against during this person’s tenure? Did junior engineers on the team grow faster in their presence? Did architectural decisions hold up over time or require constant renegotiation?
Some teams use techniques borrowed from academia, tracking change coupling (which files change together) and fault density (which files produce the most bugs) to identify the high-leverage areas of a codebase. Engineers who make measurable improvements to those specific areas are doing disproportionately valuable work regardless of commit count. The tool doesn’t make the evaluation automatic, but it makes the conversation more honest.
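Both measures above can be computed directly from commit history. The sketch below is a minimal illustration, not a standard tool: the function names are my own, the keyword heuristic for spotting bug-fix commits is a rough assumption, and real input would be parsed from `git log --name-only` rather than hard-coded.

```python
from collections import Counter
from itertools import combinations

def change_coupling(commits):
    """Count how often each pair of files changes in the same commit.
    High counts suggest hidden dependencies between the two files."""
    pairs = Counter()
    for _msg, files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

def fault_density(commits, bug_markers=("fix", "bug")):
    """Count how often each file appears in a bug-fix commit, using
    commit-message keywords as a crude (and noisy) bug signal."""
    faults = Counter()
    for msg, files in commits:
        if any(marker in msg.lower() for marker in bug_markers):
            faults.update(set(files))
    return faults

# Hypothetical log entries: (commit message, files changed).
log = [
    ("add checkout flow", ["pay/charge.py", "pay/invoice.py"]),
    ("fix rounding bug",  ["pay/charge.py"]),
    ("fix timeout",       ["pay/charge.py", "net/client.py"]),
]
coupling = change_coupling(log)  # charge.py and invoice.py co-change once
hotspots = fault_density(log)    # pay/charge.py appears in two fix commits
```

Here `pay/charge.py` stands out on both measures, which is the pattern worth investigating: an engineer who quietly stabilizes that one file is doing higher-leverage work than one adding thousands of lines elsewhere.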
The cultural shift required is harder than any tooling change. It means explicitly valuing the engineer who comes back from a sprint with fewer lines than they started with. It means treating “I figured out we don’t need to build this” as a legitimate and laudable outcome. It means evaluating the senior engineer not by what they produced but by what they made possible.
What This Means
If you manage engineers, the relevant question about any contributor isn’t how much they wrote but what the system looks like because of them. High output and high value can coexist, but they often don’t, and the bias in most engineering organizations runs hard toward rewarding the former while hoping for the latter.
The engineers who write less but write well, who delete more than they add, who spend time in code review and architecture discussions and informal mentorship, are consistently undervalued by metrics-driven evaluation. They’re also, in many cases, the reason your system still works.
You can’t fix this with a dashboard. You fix it by understanding what your codebase actually depends on, and who actually understands it, and then making sure those things are not entirely the same person.