The Wrong Goal

Most developers learn to value abstractions for the wrong reason: they make code shorter. You have three functions doing similar things, so you extract a helper. You have repeated logic across services, so you build a library. The code gets smaller. You feel good. But shorter isn’t the goal. The goal is fewer decisions, fewer places for bugs to live, and fewer things for the next developer to hold in their head at once.

A good abstraction reduces the surface area of a problem. A great abstraction reconceives the problem so that surface area collapses entirely. The difference isn’t aesthetic. It determines whether a codebase grows manageable complexity or grows into a system that slowly consumes the people maintaining it.

There’s a useful way to think about this in layers. At the bottom, you have raw implementation: every detail explicit, every decision made in-line. One level up, a good abstraction hides those details behind a name and an interface. But one level above that, a great abstraction removes the need to make the decision at all. The code doesn’t call a cleaned-up version of the thing. It doesn’t call anything, because the problem it was solving has been dissolved.
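The three layers can be made concrete with a minimal sketch, using sorting as a stand-in problem (the example is illustrative; any decision-laden task would do):

```python
# A minimal sketch of the three layers, using sorting as the
# stand-in problem.

data = [5, 3, 9, 1]

# Layer 0, raw implementation: every decision made in-line
# (algorithm choice, comparison direction, swap order).
xs = list(data)
for i in range(len(xs)):
    for j in range(len(xs) - 1 - i):
        if xs[j] > xs[j + 1]:
            xs[j], xs[j + 1] = xs[j + 1], xs[j]

# Layer 1, a good abstraction: the same decisions, hidden
# behind a name and an interface.
def bubble_sort(items):
    out = list(items)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

# Layer 2, a great abstraction: the algorithm decision is no
# longer yours to make; the runtime owns it.
ys = sorted(data)
```

Layer 1 still commits the caller to an algorithm; layer 2 removes that decision from the caller's scope entirely.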

What Dissolving a Problem Looks Like

SQL is the clearest example I know. Before relational databases with declarative query languages, retrieving data meant writing traversal logic: open this file, seek to this offset, follow this pointer, accumulate these records, sort them like this. A good abstraction over that mess would have given you a cleaner API for the traversal. SQL didn’t do that. SQL let you describe what you wanted and delegated how to the query planner entirely. The decision about traversal strategy was removed from the programmer’s scope of concern.
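The contrast can be shown in miniature with SQLite. The table and data below are invented for the example; the point is the shape of the two retrievals, not the schema:

```python
# Illustrative sketch: the same retrieval written twice against an
# in-memory SQLite table. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 50.0), (2, "west", 20.0), (3, "east", 75.0)])

# The pre-declarative shape: application code owns the traversal,
# the filtering, and the sort.
rows = conn.execute("SELECT id, region, total FROM orders").fetchall()
manual = sorted((r for r in rows if r[1] == "east"),
                key=lambda r: r[2], reverse=True)

# The declarative shape: describe the result; the query planner
# decides how to produce it (scan, index, sort strategy).
declarative = conn.execute(
    "SELECT id, region, total FROM orders "
    "WHERE region = 'east' ORDER BY total DESC").fetchall()

assert manual == declarative
```

Both versions return the same rows. Only the second one delegates the traversal decisions to a layer that can change its strategy without touching this code.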

The result wasn’t just shorter code. It was the elimination of a class of decisions that programmers had previously needed to make and get right. Bugs in traversal logic, performance characteristics of different access patterns, index selection: these didn’t get wrapped behind a nicer interface. They got pushed into a layer where specialists (and eventually, optimizers) could handle them better than application developers ever could.

The same pattern shows up in garbage collection, build systems, and infrastructure-as-code. In each case, the abstraction that mattered wasn’t the one that cleaned up your existing workflow. It was the one that made the workflow obsolete.

[Figure: conceptual diagram showing a large implementation space being dissolved into a minimal interface.]
The gap between wrapping a problem and dissolving it is the gap between a good abstraction and a great one.

The Trap of Premature Compression

The difficulty is that building abstractions that dissolve problems requires understanding a problem deeply enough that you can see through it. Most abstractions get built earlier than that, when the pattern is visible but not yet fully understood. Those abstractions have a characteristic failure mode: they compress the common case beautifully and make the edge cases worse than they were before.

Every framework you’ve ever fought against has this property. Rails is an excellent example of both sides. For standard CRUD applications, ActiveRecord removes enormous amounts of boilerplate. The abstraction holds. But as soon as your data access patterns diverge from what ActiveRecord assumed, you end up writing around the framework, learning its internals so you can override them, or debugging SQL the ORM generates that doesn’t quite do what you need. The abstraction that made the common case trivial made the uncommon case harder than raw SQL would have been.
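The shape of the failure can be sketched without Rails. Below is a deliberately toy "ORM" in Python (all names invented, not ActiveRecord's API): its finder handles equality filters beautifully, and the first access pattern outside that shape sends you straight back to raw SQL:

```python
# Illustrative toy, not a real ORM: a finder that compresses the
# common case and has nothing to say about a divergent one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, team TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "ada", "infra"), (2, "lin", "infra"),
                  (3, "mel", "web")])

def find_by(**attrs):
    # The common case the toy abstraction covers: equality filters
    # only. (Interpolating column names is fine for a toy, never
    # for real code.)
    clause = " AND ".join(f"{k} = ?" for k in attrs)
    return conn.execute(
        f"SELECT id, name, team FROM users WHERE {clause}",
        tuple(attrs.values())).fetchall()

# Common case: trivial.
infra = find_by(team="infra")

# Divergent case: "the lowest user id per team" has no
# equality-filter shape, so the abstraction offers nothing and
# you write the SQL yourself anyway.
per_team = conn.execute(
    "SELECT team, MIN(id) FROM users "
    "GROUP BY team ORDER BY team").fetchall()
```

The cost of the second query isn't the SQL itself; it's that you now maintain two mental models of data access in one codebase.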

This isn’t a criticism of Rails specifically. It’s a description of what premature abstractions do in general. They solve the problem you had when you built them, and they become friction for the problem you develop later. The cost isn’t just the time to work around them. It’s the conceptual overhead of holding both the abstraction’s model and the underlying reality in your head simultaneously.

The honest way to avoid this is to build abstractions later than feels comfortable. Wait until you’ve seen the problem from enough angles that you know which variations matter and which are incidental. The Ruby community has a shorthand for this: don’t abstract until you have three concrete instances of a pattern. Even that rule is probably too eager. Two instances look like a pattern. Three instances look like three instances that might diverge.
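A minimal sketch of why two instances mislead (all function names invented):

```python
# Two call sites that look like one pattern, and a third that
# diverges. Purely illustrative.

def notify_email(user, msg):
    return f"email to {user}: {msg}"

def notify_sms(user, msg):
    return f"sms to {user}: {msg}"

# After two instances, the "obvious" abstraction is a channel
# parameter:
def notify(channel, user, msg):
    return f"{channel} to {user}: {msg}"

# The third instance diverges: push notifications key on a device
# token, not a user, and impose a truncation rule. The helper's
# signature has nothing to say about either.
def notify_push(device_token, msg):
    return f"push to {device_token}: {msg[:140]}"
```

Had `notify` been built after two instances, the third would have forced either a parameter explosion or a workaround outside the abstraction.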

When AI Enters the Abstraction Stack

Large language models are now doing something interesting to this hierarchy. For a narrow but growing set of problems, they’re not providing a better API for a known task. They’re accepting a description of an intent and handling the implementation decisions entirely. In the best cases, the problem that would have required a pipeline of discrete coded steps gets handled by a single call that takes English as input.
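A sketch of the shape, not of any real model API: `call_llm` below is a hypothetical stand-in whose behavior is stubbed so the example runs. A real client would accept the same English intent and own every decision the pipeline used to make explicitly:

```python
# Illustrative sketch: a task that once needed a pipeline of coded
# steps, re-expressed as a single intent-shaped call. `call_llm` is
# a hypothetical stand-in, stubbed for the example.
import re

def extract_emails_pipeline(text):
    # The coded pipeline: match, normalize, deduplicate, order.
    # (The regex is deliberately simplistic.)
    found = re.findall(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", text)
    return sorted({e.lower() for e in found})

def call_llm(instruction, text):
    # Stand-in for a model call. The stub routes to the pipeline so
    # the sketch is runnable; a real model would make its own
    # implementation decisions, which you could no longer inspect.
    if "email addresses" in instruction:
        return extract_emails_pipeline(text)
    raise NotImplementedError(instruction)

doc = "Contact Ada@Example.com or lin@example.org. Ada@example.com again."
via_intent = call_llm("List the unique email addresses, lowercased.", doc)
```

The interesting property is the call site: the intent-shaped version has no regex, no normalization, no dedup logic to get wrong, and equally no regex, normalization, or dedup logic to audit.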

This is exactly the shape of a great abstraction, and it carries exactly the risks that come with one. The LLM, like the query planner in SQL, is making decisions you can no longer inspect directly. When those decisions are good, you’ve genuinely dissolved the problem. When they’re wrong, diagnosing and correcting them requires understanding a layer you’ve deliberately abstracted away. That tradeoff has always existed in software. It’s just moved to a new layer where the decisions are harder to audit and harder to predict. The prompt you write is not the prompt the model reads, and the gap between those two things is where your abstraction leaks.

What This Asks of You as a Designer

Building great abstractions is fundamentally a writing problem. Not writing code, but writing down what you understand about a domain. The test of whether an abstraction is ready to be built is whether you can state, clearly and concisely, what decisions it removes and what it leaves to the caller. If you can’t write that down in a paragraph, you’re not ready to build the abstraction. You’re ready to build one more concrete implementation that teaches you something you don’t yet know.

This also means that the discipline of abstraction is closely tied to the discipline of compression. You’re not just making code smaller. You’re removing degrees of freedom from a problem, collapsing option spaces, deciding which decisions the abstraction owns permanently and which it surfaces to the caller. Done well, that process produces systems where the right thing is easy and the wrong thing is difficult, not because of constraints in the code but because the problem has been framed so clearly that the wrong thing barely makes sense to attempt.

The developers who are consistently good at this aren’t necessarily the ones who know the most techniques. They’re the ones who are most willing to sit with a problem before reaching for a solution. The abstraction that makes code unnecessary isn’t found. It’s earned by spending enough time with the problem that you can finally see its actual shape, rather than the shape it presented when you first encountered it.