There is a principle in systems design called accretion: complexity builds up in a codebase over time, layer by layer, like sediment. Nobody adds complexity on purpose. They add a feature, then a workaround, then a tool to manage the workaround, and before long the system is doing three times the work to produce the same output it managed on day one. The same thing happens to teams. Not to their code, but to their tooling. And the teams that figure this out early tend to quietly outperform everyone around them.

Teams that deliberately ignore most tooling trends consistently outperform teams that chase every new tool, and the pattern repeats across company sizes and industries. The question worth digging into is why, because the answer is more technically interesting than “less is more.”

The Hidden Cost of Tool Sprawl

Think about what happens when a team adopts a new tool. There is the obvious cost: the subscription fee, the onboarding time, the hour someone spends configuring it. But there is also a set of costs that rarely appear in any budget conversation.

Every tool a team uses creates what you might call cognitive switching overhead: the mental cost of moving between different interfaces, different mental models, and different notification streams. It is similar to what happens at the CPU level when a processor switches between threads: there is a real cost to saving state, flushing caches, and loading a new context. For humans, that cost is measured not in nanoseconds but in minutes; studies of interrupted knowledge work suggest it can take 15 to 20 minutes or more to return to deep focus after an interruption.

Now multiply that by eight tools, each with its own notification cadence. You can see the problem.

What makes this worse is that many tools are specifically engineered to pull your attention back. Many of these apps are built to make you switch tasks more often, because engagement metrics drive their business model. The tool that promises to organize your work is simultaneously optimizing for how many times you open it per day. Those two goals are not compatible.

Why Quarterly Deletion Works as a System

The teams that actively prune their toolset every quarter are essentially running a process that mirrors good dependency management in software. If you have worked in a large JavaScript project, you know the feeling of opening package.json and finding 200 dependencies, half of which nobody can explain. Some were added for a feature that got rolled back. Some are redundant with tools added six months later. A few are actively conflicting with each other.

The solution is not to add a tool that manages your dependencies better. It is to remove the dependencies you do not need.

Quarterly audits work because they create a forcing function, a term borrowed from interaction design for a constraint that makes the desired behavior the path of least resistance. Without a scheduled review, the default is always accumulation. Nobody removes a tool because it costs effort and carries political risk (someone on the team championed it). With a scheduled audit, the question is no longer “should we remove this?” but “can you justify keeping it?”

That inversion is everything. It shifts the burden of proof.

Some teams formalize this with a simple scoring rubric. Each tool gets rated on three dimensions: how many team members use it daily (not just have it installed), whether its function overlaps with another tool already in the stack, and whether it reduces or increases the number of places a piece of information lives. If a tool scores poorly on all three, it gets removed regardless of sunk cost.
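A rubric like that is easy to make mechanical. Here is a minimal sketch in Python; the `Tool` fields, the 50 percent daily-adoption floor, and the rule that a tool must fail all three dimensions before it is cut are illustrative assumptions, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    daily_users: int          # members who use it daily, not just have it installed
    team_size: int
    overlaps_existing: bool   # duplicates a job another tool in the stack already does
    adds_info_location: bool  # increases the number of places information lives

def fails_audit(tool: Tool, adoption_floor: float = 0.5) -> bool:
    """Cut a tool only when it scores poorly on all three dimensions."""
    low_adoption = tool.daily_users < adoption_floor * tool.team_size
    return low_adoption and tool.overlaps_existing and tool.adds_info_location

# A tool with one daily user out of eight that duplicates an existing job
# and scatters information fails on all three counts, sunk cost or not.
whiteboard = Tool("SketchPad", daily_users=1, team_size=8,
                  overlaps_existing=True, adds_info_location=True)
print(fails_audit(whiteboard))  # → True
```

The point of the all-three rule is restraint: a tool that is genuinely load-bearing on even one dimension survives the quarter.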

The Information Consolidation Principle

One of the most underrated costs of tool sprawl is what happens to information locality, meaning where your team’s knowledge actually lives. A healthy system has high locality: you know exactly where to find a thing, and it only lives in one place. A sprawling tool stack destroys locality.

Consider a typical engineering team with a chat tool, a project management tool, a documentation wiki, a shared drive, a code review platform, and a separate ticket tracker. A decision made in a chat thread about a project’s architecture might never make it to the wiki. The ticket tracker references a spec doc that lives in the shared drive, which links to a chat thread nobody can find. The code review has comments that contradict the wiki.

This is not a people problem. It is a systems problem. And it is worth noting that good code comments exist precisely because the logic behind a decision rarely survives in the code alone. The same principle applies to organizational decisions. If the context behind a choice is scattered across six tools, it effectively does not exist.

Teams that constrain their toolset to three or four core systems force decisions to be recorded in a small number of canonical locations. New team members can get up to speed faster. Decisions are easier to revisit and audit. The team’s collective memory becomes a real, searchable asset rather than a distributed system with no consistency guarantees.

What the Deletion Process Actually Looks Like

In practice, the best-performing teams treat tool audits like sprint retrospectives: scheduled, structured, and blameless. Here is a rough version of how one high-output engineering team structures theirs.

First, they pull usage data from every tool they pay for. Most SaaS platforms expose this if you know where to look. They are not asking “do people like this tool?” but “do people actually use it?”

Second, they map each tool to a job it is supposed to do. If two tools are doing the same job, one gets cut. No debate about which is better, just which one has more adoption and lower switching cost.

Third, they look at integration overhead. Every tool that requires a custom integration, a Zapier bridge, or a manual sync process is a liability. Each integration is a failure point and a maintenance burden.

Finally, and this is the part most teams skip, they document why they removed each tool. That record prevents the same tool from being re-adopted six months later by someone who missed the original audit.
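The middle steps, mapping each tool to a job and cutting duplicates by adoption, are simple enough to sketch. Everything below is hypothetical: the tool names, the usage numbers, and the tie-breaking rule (highest daily active use wins) are made up to illustrate the mechanics, not drawn from any real audit.

```python
from collections import defaultdict

# Hypothetical audit inputs: tool -> (job it is supposed to do, daily active users).
stack = {
    "ChatApp":   ("messaging",     8),
    "WikiTool":  ("documentation", 5),
    "NotesTool": ("documentation", 2),  # same job as WikiTool
    "TaskBoard": ("task tracking", 7),
}

def audit(stack: dict) -> dict:
    by_job = defaultdict(list)
    for name, (job, dau) in stack.items():
        by_job[job].append((dau, name))

    # The record of why each tool was removed -- kept so the same tool
    # is not quietly re-adopted six months after the audit.
    removed = {}
    for job, tools in by_job.items():
        tools.sort(reverse=True)         # highest adoption first
        for dau, name in tools[1:]:      # keep only the most-adopted tool per job
            removed[name] = f"duplicates {tools[0][1]} for '{job}' (only {dau} daily users)"
    return removed

print(audit(stack))  # only NotesTool is cut: it shares a job with a better-adopted tool
```

The returned dictionary doubles as the documentation step: each removal carries its reason, which is exactly the record that prevents re-adoption.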

The Counterintuitive Truth About Constraints

There is a version of this argument that sounds like productivity minimalism, a lifestyle choice rather than a systems decision. But the engineering framing is more useful and more honest. People who build productivity software often do not use their own products, and that tells you something important: the people who think hardest about productivity tools frequently conclude that fewer, simpler systems beat more sophisticated ones.

Constraints are not a compromise. In compiler design, constraints allow for better optimization because the optimizer has fewer possible states to reason about. The same logic applies to teams. A team working within a constrained, well-understood tool stack can optimize their workflows in ways that a sprawling stack simply does not permit.

Delete half your tools. See what survives. The things that come back are the things that actually mattered.