We are good at abstractions. We are bad at abstractions.

21 Feb, 2025 (updated: 21 Feb, 2025)
1281 words | 6 min to read | 3 hr, 51 min to write

Software engineers, despite being trained problem solvers, often struggle with defining proper abstraction levels. It’s baffling when you consider that humans are, by far, the most capable animals on the planet when it comes to abstract thinking. Why, then, are we so absurdly bad at translating that strength into clean, effective abstractions in code? The answer lies not in our capacity for abstract thought, but in the difficulty of applying the right level of abstraction in the right context.

How good are we, really?

Humans don’t just think in abstractions — we live in them. Our entire understanding of the world depends on navigating thousands of abstraction layers stacked on top of each other. Nature itself is an endless hierarchy of concepts: atoms form molecules, molecules form cells, cells form tissues, tissues form organs, and somehow, all of this turns into you sitting in front of a screen, reading about abstractions.

And you don’t even notice. Your brain seamlessly jumps between abstraction layers, filtering out irrelevant complexity to focus on what matters. It’s abstraction on autopilot. You think house, not walls, roof, pipes, and electrical wiring. You think forest, not trees, soil composition, fungal networks, and wildlife populations. Life would be unbearable if you had to consciously manage every layer of detail.

Words are the ultimate example. Take the word chair. It’s absurdly abstract. It doesn’t specify the material, color, shape, or whether it has armrests. It simply communicates the idea of “something you sit on.” Yet, when someone says “chair,” you don’t freeze, paralyzed by the ambiguity. Your mind immediately conjures something reasonable, adjusting the mental image depending on whether you’re picturing a classroom, a beach, or a medieval throne room.

So, humans are phenomenally good at abstraction—not just as a skill but as the foundation of cognition itself. We think in concepts, speak in symbols, and navigate a world where reality is sliced into mental models so seamlessly that we forget the slicing even happened.

Until, of course, we sit down to write code and realize that, unlike our brains, computers need every layer spelled out, every boundary defined, and every abstraction implemented with brutal specificity. And that's where everything starts to break: at our ability to slice nature at its joints… for computers.

Concrete Abstractions

The irony of software design is that abstractions, by nature fluid and flexible in thought, must become rigid and concrete in code. We can't just implement all the layers of abstraction we unconsciously navigate in our minds. If we tried, our programs would spiral into abstraction hell: overly abstract, overly complex, and ultimately brittle. Every change would ripple through the layers like the butterfly effect, until the whole thing collapsed under its own weight.

That’s why good software design demands restraint. Abstractions can’t remain vague ideas—they must translate into clear, enforceable boundaries. You can’t just say “handle payments” and move on. You need to define what a payment is, how it’s processed, where it’s logged, and what happens when it fails. Every abstraction must crystallize into a concrete structure, shaping how the system operates and evolves.
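To make that concrete, here is a minimal Python sketch of what "handle payments" might crystallize into. The names (`Payment`, `PaymentResult`, `PaymentProcessor`) are mine, invented for illustration rather than taken from any real payment library:

```python
from dataclasses import dataclass
from decimal import Decimal
from typing import Protocol


@dataclass(frozen=True)
class Payment:
    """What a payment is: an amount, a currency, and who is paying."""
    amount: Decimal
    currency: str
    customer_id: str


@dataclass(frozen=True)
class PaymentResult:
    """What processing produces: success, or an explicit failure reason."""
    succeeded: bool
    failure_reason: str | None = None


class PaymentProcessor(Protocol):
    """The boundary the rest of the system depends on: how a payment
    gets processed and how failure is reported."""

    def process(self, payment: Payment) -> PaymentResult:
        ...
```

Nothing clever, but notice how much the vague idea of "handle payments" had to commit to: a concrete type for the payment itself, an explicit failure path, and an interface everything downstream now depends on.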

And this is the ultimate trick. If you were looking for the answer, here it is:

We’re brilliant at abstract abstractions. We’re terrible at defining concrete ones.

This is why software engineers struggle—not because humans are bad at abstract thinking, but because translating abstract ideas into concrete, maintainable code is a skill, not an instinct.

I could stop here. That conclusion alone answers the question posed by the title. And, as a bonus, it also explains why naming things in code is notoriously hard—because names are nothing more than concrete labels for abstract ideas.

But doesn't it raise further questions? After spending so much time thinking about this, I can already anticipate them. So let's keep going and break down what makes us so consistently bad at defining abstractions in our software and, more importantly, how we can get better at it.

Experience Matters, and Most Don’t Have It

Even when engineers do think abstractly, they often get it wrong. Why? Because good abstraction isn’t just about thinking in concepts; it’s about thinking in the right concepts for the right problem. That requires experience, and most engineers start their careers working on existing systems that are often abstraction graveyards. Inheritance hierarchies six levels deep. Microservices for applications that could be a single script. Layers of “flexibility” nobody asked for. It’s hard to learn elegant abstraction when you’re drowning in legacy code that looks like it was designed by an algorithm trained on spite.

Premature Abstraction: The Original Sin

Nothing wrecks a codebase faster than an abstraction born before its time. Engineers love to “future-proof” things, extrapolating from one requirement to build a towering generalization fit for every imaginable use case. This leads to bloated codebases full of indirection and awkward interfaces. It’s how you end up with a FactoryFactory or a 300-line class that exists solely to format dates. As the saying goes, duplication is cheaper than the wrong abstraction—yet engineers keep trying to solve problems they don’t have, as if code complexity were some kind of investment portfolio.
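A deliberately silly sketch of how this goes wrong, with hypothetical names rather than code from any real project: one requirement ("send a welcome email") gets buried under a generalization built for channels that do not exist, next to the version the requirement actually needed:

```python
# The "future-proofed" version: layers of indirection for notification
# channels nobody has asked for.
class EmailChannel:
    def send(self, address: str, body: str) -> None:
        print(f"sending to {address}: {body}")


class NotificationChannelFactory:
    def create_channel(self, kind: str) -> EmailChannel:
        if kind == "email":
            return EmailChannel()
        raise ValueError(f"unknown channel: {kind}")  # there is only one kind


class NotificationChannelFactoryFactory:
    """A factory that makes factories, in a codebase with exactly one notification."""

    def create_factory(self) -> NotificationChannelFactory:
        return NotificationChannelFactory()


# The version the single existing requirement actually called for.
def send_welcome_email(address: str) -> None:
    EmailChannel().send(address, "Thanks for signing up!")
```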

The Reusability Mirage

There’s also the classic confusion between “reusability” and “good abstraction.” Just because you can reuse something doesn’t mean it’s well-designed. Plenty of reusable code is brittle, confusing, and only adaptable if you enjoy reading documentation written like IKEA instructions. Overgeneralization turns what should be simple, focused abstractions into sprawling, rigid frameworks that crack the moment reality deviates from the developer’s initial vision—which it always does.
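As a hypothetical illustration of the difference (the function names are invented): a "reusable" formatter whose flexibility every caller has to pay for, versus the focused helper the codebase actually needed:

```python
# "Reusable", in the sense that every caller must understand six knobs
# just to put one number on the screen.
def format_value(value, currency=None, percent=False, precision=2,
                 thousands_sep=True, null_text="-"):
    if value is None:
        return null_text
    text = f"{value:,.{precision}f}" if thousands_sep else f"{value:.{precision}f}"
    if percent:
        text = f"{text}%"
    if currency:
        text = f"{currency}{text}"
    return text


# Focused: does the one thing this codebase needs, and its name says so.
def format_price_eur(value: float) -> str:
    return f"€{value:,.2f}"
```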

Business Pressure and the Abstraction Debt Spiral

Of course, it’s easy to blame individual engineers, but the industry itself encourages bad abstractions. Under pressure to ship features, nobody has time to step back and reconsider architecture. You need the feature working now, not after a philosophical debate about whether the service layer should be abstracted one more level. This rush leads to technical debt, where every abstraction is reactive, hastily patched rather than thoughtfully designed.

Cognitive Biases: Your Brain, Working Against You

Cognitive biases make the problem worse. Experts fall into the “curse of knowledge,” designing abstractions that make perfect sense to them but look like hieroglyphics to everyone else. Engineers overfit abstractions to current problems, creating elegant solutions that break the moment requirements change. Underfitting happens just as often—abstractions that leak complexity like a rusty faucet, leaving users to wrestle with the underlying mess.
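A small, hypothetical sketch of that underfitting case, the leaky abstraction (names invented for illustration): a "storage" interface that claims to hide the database but makes every caller think in SQL anyway, next to a boundary that speaks the domain's language:

```python
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass
class User:
    email: str
    name: str


# Leaky: claims to abstract storage, yet every caller must know table names,
# write WHERE clauses, and handle driver-level errors on its own.
class UserStorage(Protocol):
    def query(self, sql: str, params: tuple[Any, ...]) -> list[tuple]:
        ...


# Tighter: the boundary speaks the domain's language, not the database's.
class UserRepository(Protocol):
    def get_by_email(self, email: str) -> User | None:
        ...
```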

We Never Really Learned How to Abstract

I mean, not for computers.

Part of the problem is education. Most computer science curricula emphasize algorithms, data structures, and language syntax but gloss over abstraction design. You learn how to implement a binary search tree, not how to decide if you even need one. Concepts like Domain-Driven Design or layered architecture sound great in theory but require years of practice to apply without tripping over your own cleverness.

How to Actually Get Better at Abstraction

So, how do you get better at defining abstractions? Start by asking: What problem is this abstraction solving? If the answer sounds like “future-proofing,” “reusability,” or “because it looks cleaner,” you’re already off track. Abstraction should clarify, not complicate. Delay abstraction until patterns emerge naturally. If you find yourself writing the same code three times, fine—abstract it. If not, leave it alone. Good abstraction takes experience, judgment, and, most importantly, the humility to recognize when an abstraction complicates more than it simplifies. Learn from well-designed systems, not just popular ones. UNIX gets abstraction right because it prioritizes clarity and composition, not because it has more layers than a wedding cake.
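A minimal sketch of that "write it three times" rule, with invented names: the first two occurrences stay as plain duplication, and the abstraction only appears once the pattern has actually shown itself:

```python
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    email: str


# First and second occurrence: just write it out. The duplication is cheap.
user = Person("Ada", "ada@example.com")
admin = Person("Grace", "grace@example.com")
user_label = f"{user.name} ({user.email})"
admin_label = f"{admin.name} ({admin.email})"


# By the third occurrence the pattern is visible, so the abstraction is small,
# obvious, and named for what it does, not for what it might someday do.
def contact_label(person: Person) -> str:
    return f"{person.name} ({person.email})"


support_label = contact_label(Person("Linus", "support@example.com"))
```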

Conclusion

Figure it out yourself. Isn’t abstract thinking your superpower?