Walk into any software team and ask them to point to the worst code in their codebase. They will not hesitate. Everyone knows where the bodies are buried. What is harder to explain is why the bodies are there in the first place, and why, in many cases, the engineers who buried them were smart people making rational decisions.
The story of deliberately obscure code is not really about bad programmers. It is about incentive structures, time pressure, job security, and the peculiar economics of software development. And once you understand those forces, the “bad” code starts to look a lot less like failure and a lot more like a predictable output of a broken system. That pattern of seemingly irrational behavior driven by hidden incentives shows up everywhere in tech, from how APIs are deliberately made difficult to use to how pricing tiers are structured to trap you.
The Job Security Myth That Turned Into a Real Strategy
The most charitable explanation for deliberately unreadable code is also the most cynical: engineers sometimes write code only they can understand precisely because being the sole person who understands it has value. The logic is almost embarrassingly simple. If you are the only person who knows how the payment processing module works, you are very difficult to fire.
This is not a fringe behavior. A 2020 survey by Tidelift found that nearly 60 percent of professional developers reported inheriting code so opaque that the original author had to be tracked down to explain it, and in many cases that author was still employed specifically because of that opacity. The technical term for this is “knowledge hoarding,” and it is a documented organizational pathology.
But here is where it gets more nuanced. Many engineers who write opaque code are not consciously scheming. They are optimizing for the wrong metric under time pressure. When a sprint deadline is 48 hours away, writing a beautifully documented, modular function that any junior developer could extend takes three times as long as writing a dense, highly optimized block that works right now. The engineer chooses speed, ships the feature, and moves on. The unreadable code is a byproduct, not a goal. That said, the byproduct still functions as job security, whether intended or not.
Obfuscation as Intellectual Property Protection
There is a second category of deliberately unreadable code that operates at the organizational rather than individual level: intentional obfuscation as a form of intellectual property protection.
Commercially distributed software is often run through an obfuscator before shipping. JavaScript minifiers strip descriptive variable names down to single-character identifiers and compress the logic into dense one-liners. Native binaries are compiled and sometimes further obfuscated to resist reverse engineering. This is not accidental and it is not laziness. It is a deliberate business decision.
The logic mirrors what you see in other areas of tech economics. Companies that deliberately design software to become obsolete every few years are applying the same principle: control the asset, control the customer relationship. If a competitor can read your source code by decompiling your binary, your moat shrinks. Obfuscation, in this context, is a product strategy.
The same logic applies inside enterprise software teams. If a rival internal team or an outsourcing contractor can fully understand your module in a week, you are replaceable. If understanding it requires six months of institutional knowledge, you are not.
The Performance Optimization Trap
Some of the most genuinely unreadable code is also some of the most technically impressive, and this is where the story gets sympathetic.
High-performance systems, particularly those operating in game engines, financial trading platforms, database kernels, or real-time signal processing, often require optimizations that look insane to anyone who has not spent years working at that level. Bit manipulation tricks, branch-free algorithms, SIMD intrinsics, cache-line alignment hacks: these techniques produce code that is nearly impossible to read but routinely outperforms readable alternatives by factors of ten or more.
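A small, self-contained taste of the branch-free style (this example is a classic bit-twiddling idiom, not drawn from any of the systems named above): computing the minimum of two integers without a conditional jump. It is functionally identical to `x < y ? x : y`, but the reasoning required to trust it is exactly the readability cost under discussion.

```c
#include <stdint.h>

/* Branchless minimum of two signed 32-bit integers.
 * The comparison (x < y) yields 0 or 1; negating it produces an
 * all-zeros or all-ones mask; the XOR/AND dance then selects x or y
 * with no conditional branch for the CPU to mispredict. */
static int32_t min_branchless(int32_t x, int32_t y) {
    int32_t mask = -(int32_t)(x < y);   /* all ones if x < y, else zero */
    return y ^ ((x ^ y) & mask);
}
```

On hot paths where branch mispredictions dominate, idioms like this can be a genuine win; everywhere else, they are a maintenance tax disguised as cleverness.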
The famous “fast inverse square root” function from the Quake III source code is a classic example. The implementation uses a hardcoded hexadecimal constant (0x5f3759df) and a Newton-Raphson iteration in a way that makes no intuitive sense until you understand the underlying floating-point representation trick. A comment in the original code simply reads: “what the f—.” The function was, and remains, brilliant. It was also, at the time, nearly incomprehensible without significant context.
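For readers who have never seen it, the trick can be rendered in modern, well-defined C. The magic constant and the single Newton-Raphson step below are from the published Quake III Arena source; the original's pointer-cast type punning (undefined behavior by today's standards) is replaced here with `memcpy`:

```c
#include <stdint.h>
#include <string.h>

/* Fast inverse square root, after the Quake III Arena implementation.
 * The constant 0x5f3759df, subtracted from the half-shifted float bits,
 * exploits the structure of the IEEE 754 representation to produce a
 * surprisingly good first guess at 1/sqrt(x); one Newton-Raphson
 * iteration then refines it to within roughly 0.2% relative error. */
float Q_rsqrt(float number) {
    uint32_t i;
    float y = number;
    memcpy(&i, &y, sizeof i);           /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);          /* the "what the f—" line */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (number * 0.5f * y * y));  /* one Newton-Raphson step */
    return y;                           /* no sqrt call, no division */
}
```

Calling `Q_rsqrt(4.0f)` returns approximately 0.5. The point stands either way: without the floating-point representation insight, the middle line is pure noise.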
This creates a real tension. The engineers who write this code are often the most skilled people on the team. They are not hiding knowledge maliciously. They are solving a genuinely hard problem in the only way the constraints allow. But the result is the same: code that only a handful of people can safely modify, debug, or extend.
The cognitive cost of this is real and measurable. Developers context-switching into unfamiliar, dense code pay a significant mental toll, and that context switch tax compounds across a team, slowing down every engineer who has to touch the module later.
The Documentation Debt That Makes Everything Worse
The reason opaque code stays opaque almost always comes down to documentation, or the absence of it. Most engineering teams treat documentation as something that will happen after the real work is done. It almost never does.
Tech documentation is always out of date because keeping it current is nobody’s job. That is not a metaphor. In most organizations, there is no formal role responsible for documentation accuracy. Writing a comment or a README is voluntary, unrewarded behavior. Shipping features is tracked, incentivized, and celebrated. The incentive gap is enormous.
This means that even code written with good intentions, code that was readable when it was written, becomes opaque over time as the context around it erodes. The original author leaves. The comments fall out of sync with the implementation. The README describes a version of the system that no longer exists. What was once understandable becomes archaeology.
Why This Problem Is Getting More Complex, Not Less
You might expect that better tooling, AI-assisted code review, and higher engineering standards would be eroding the problem of deliberately or accidentally unreadable code. The evidence suggests the opposite.
As systems grow more complex and teams more distributed, the surface area of incomprehensible code expands. More code is now generated rather than written, including by language models that produce syntactically correct but semantically opaque output. AI-generated code often lacks the authorial intent that makes human-written code, even messy human-written code, interpretable in context. And AI systems have their own failure modes that are not obvious from the outside, which extends to the code they help produce.
The underlying problem is not technical. It is organizational. Code is unreadable when the people who write it are not incentivized to make it readable, when speed is rewarded over craft, when documentation is a second-class citizen, and when job security is something you build into the codebase rather than earn through visible contribution.
Fix those incentives, and the code gets cleaner. Leave them in place, and all the linting tools in the world will not save you.