There is a practice among experienced engineers that looks, from the outside, like pure self-sabotage. They delete tools that are working fine. They switch editors mid-project. They impose artificial constraints on processes that already run smoothly. They break things that are not broken. And then, somehow, they come out the other side faster, sharper, and more capable than before. This is not masochism. It is a deliberate technique with a real name and a defensible logic, and once you understand it, you will start seeing it everywhere in high-performing technical teams.
Successful remote teams have quietly figured out something many in-person offices have not: a big part of staying effective is a willingness to periodically question whether the way they are working is actually the best way, rather than just the familiar way.
The Problem With Smooth Workflows
When a workflow becomes frictionless, it also becomes invisible. That sounds like a good thing, right? Automation, muscle memory, zero cognitive overhead. But invisibility has a cost. When you stop noticing how you work, you also stop noticing when how you work has quietly become wrong.
Think about it in code terms. A function that has been running without complaint for two years is one nobody touches. Nobody refactors it. Nobody questions whether it still fits the architecture. It just runs. And then one day, you need to extend it, and you open it up and find something terrifying: a 400-line function with global state dependencies, no tests, and three different abstractions layered on top of each other like geological strata. The function worked, but precisely because it worked, it calcified into something fragile and unmaintainable.
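To make that concrete, here is an invented miniature of the pattern, compressed from 400 lines down to a dozen; every name and global in it is hypothetical:

```python
# A condensed, hypothetical example of a calcified function: global state,
# no tests, and three eras of abstraction stacked like geological strata.

LEGACY_CACHE = {}             # era one: module-level global state
CONFIG = {"tax_mode": "v2"}   # era two: a config flag nobody remembers adding

def calculate_invoice_total(order):
    # era two, continued: a "temporary" special case that became permanent
    if order.get("legacy_customer"):
        key = order["id"]
        if key not in LEGACY_CACHE:
            LEGACY_CACHE[key] = sum(i["price"] for i in order["items"]) * 1.05
        return LEGACY_CACHE[key]
    # era three: a newer abstraction bolted on top without removing the old one
    subtotal = sum(i["price"] * i.get("qty", 1) for i in order["items"])
    rate = 0.08 if CONFIG["tax_mode"] == "v2" else 0.05
    return round(subtotal * (1 + rate), 2)
```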
The same thing happens to personal and team workflows. The task management system that made perfect sense for a three-person team becomes a bottleneck for a team of twelve, but nobody questions it because it still technically works. The deployment checklist written in 2019 still gets followed step by step, even though half the steps are now automated and two of them actively slow things down. Smooth workflows hide their own obsolescence.
Deliberate Workflow Disruption as a Debugging Tool
The engineers who deliberately break their workflows are essentially running a diagnostic. They are treating their own processes the way a good debugger treats suspicious code: not by assuming it works, but by proving it.
Here is how it tends to work in practice. A developer might spend two weeks working exclusively in a stripped-down environment, without their usual suite of extensions, linters, and autocomplete tools. The friction this creates is uncomfortable, but it is also revealing. Suddenly they notice which parts of their work actually require skill and judgment, and which parts they have been outsourcing to tooling without realizing it. They find gaps in their own understanding that their tools had been quietly papering over.
This connects to something deeper about how expertise actually forms. There is solid research in cognitive science on desirable difficulty, psychologist Robert Bjork's term for the measurable finding that certain kinds of productive struggle improve long-term learning and retention. When everything is too smooth, your brain has no reason to consolidate what it is doing into durable skill. Struggle, the right kind of struggle, builds the neural pathways that smooth workflows bypass entirely.
It is a little like rubber duck debugging, which works not because the duck is magic but because the act of explaining your problem forces you to re-examine assumptions you had stopped examining. Deliberately breaking your workflow forces the same kind of re-examination at the process level instead of the code level.
What “Breaking” Actually Looks Like
The technique is more structured than it sounds. There are a few common patterns worth knowing.
Constraint imposition is the most common form. You artificially limit yourself: one terminal window only, no browser during focused work, write all logic before looking at documentation. The constraint is not the point. The friction the constraint creates, and what you learn from navigating that friction, is the point. A similar logic drives the single-tab rule that some top performers swear by: collapsing the digital environment down to the minimum viable workspace to surface what actually matters.
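Constraints can even be made mechanical, so you cannot quietly abandon them. As a sketch, here is a hypothetical git pre-commit hook in Python that imposes a small-commits-only constraint; the three-file limit is arbitrary by design:

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: an artificial "small commits only" constraint.
# Save as .git/hooks/pre-commit and make it executable.
import subprocess
import sys

MAX_FILES = 3  # the constraint: deliberately arbitrary

# List the files staged for this commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

if len(staged) > MAX_FILES:
    print(f"Constraint violated: {len(staged)} files staged, limit is {MAX_FILES}.")
    print("The friction is the point: split this into smaller commits.")
    sys.exit(1)  # a non-zero exit code aborts the commit
```

Every oversized commit then becomes a forced moment of reflection rather than a habit you never notice.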
Tool rotation is the more aggressive version. You swap out a core tool (your editor, your terminal, your project management system) for an unfamiliar alternative for a defined period. The goal is not to find a better tool (though sometimes you do). The goal is to identify which parts of your current tool you actually depend on versus which parts you have simply habituated to.
Process audits triggered by intentional failure separate the most systematic practitioners from the casual ones. Instead of just swapping tools, they deliberately fail to follow a step in their workflow, then observe what breaks. If nothing breaks, that step was probably overhead. If something breaks badly, they have just discovered an undocumented dependency, and that is valuable.
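You can even script the audit itself. Here is a minimal sketch with hypothetical step names: it models a workflow as steps with declared dependencies, skips one on purpose, and reports what breaks downstream. (In real life the dependencies are the unknowns you are hunting for; the toy declares them so it can run.)

```python
# A minimal intentional-failure audit: skip one step, watch what breaks.

def run_workflow(steps, skip=None):
    artifacts = {}
    for name, requires, produces in steps:
        if name == skip:
            print(f"SKIPPED  {name}")
            continue
        missing = [r for r in requires if r not in artifacts]
        if missing:
            print(f"BROKE    {name} (missing: {', '.join(missing)})")
            continue
        artifacts[produces] = True
        print(f"OK       {name}")

# (step name, artifacts it requires, artifact it produces)
STEPS = [
    ("run linter",       [],                  "lint report"),
    ("update changelog", [],                  "changelog entry"),
    ("build release",    ["lint report"],     "build"),
    ("tag version",      ["changelog entry"], "tag"),
    ("deploy",           ["build", "tag"],    "deployment"),
]

run_workflow(STEPS, skip="update changelog")
# If skipping a step breaks nothing, that step may be pure overhead.
# If it breaks something, you have surfaced a dependency worth documenting.
```

Here, skipping "update changelog" cascades into broken tagging and a broken deploy: exactly the kind of hidden coupling the technique is designed to expose.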
The Organizational Dimension
This principle scales beyond individuals. The best engineering teams build deliberate disruption into their rhythms at the team level. Game days (where you intentionally break production systems in controlled ways to test your recovery processes) are the most famous example, with roots in Amazon's GameDay exercises and popularized by Netflix's chaos engineering tools such as Chaos Monkey. But the same logic applies to workflows, not just infrastructure.
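A toy game-day drill looks something like the sketch below, assuming a Unix-like system; the sleeping processes are stand-ins for real services, not anyone's actual tooling:

```python
# Toy game day: three sleeping processes stand in for services.
# Kill one at random, then verify the recovery path replaces it.
import random
import subprocess
import time

def spawn_worker():
    # Stand-in for a real service: a process that just sleeps.
    return subprocess.Popen(["sleep", "60"])

workers = [spawn_worker() for _ in range(3)]
print("Pool started:", [w.pid for w in workers])

# Chaos step: terminate one worker at random, the way a game day would.
victim = random.choice(workers)
victim.terminate()
print(f"Killed worker pid={victim.pid}")

# Recovery step: a supervisor pass that replaces any dead worker.
time.sleep(1)  # give the signal time to land
for i, w in enumerate(workers):
    if w.poll() is not None:  # poll() returns an exit code once a process dies
        workers[i] = spawn_worker()
        print(f"Restarted worker: new pid={workers[i].pid}")

assert all(w.poll() is None for w in workers), "recovery path failed"
print("All workers healthy; recovery path verified.")

for w in workers:  # clean up the demo processes
    w.terminate()
```

The point is not the kill; it is proving, on a schedule you chose, that the recovery path actually works.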
Teams that run periodic “workflow postmortems,” dedicated sessions where they examine not what broke in production but how they are working and what has calcified, tend to catch problems that retrospectives miss. Retrospectives are triggered by events. Workflow postmortems are triggered by calendars, which means they catch the quiet dysfunction that never rises to the level of an incident.
This also connects to the argument that digital calendars are making some teams worse at time management. The problem is not that calendars are bad; it is that most calendar tools are designed to optimize for booking time, not for reflecting on how that time is actually being used.
The Counterintuitive Productivity Math
Here is the part that takes some convincing. Breaking your workflow costs time in the short term, always. The first week of a tool rotation is slower. A constraint-imposition sprint will produce less output than a normal sprint. A workflow postmortem takes an afternoon that you could have spent shipping.
The math only works if you think in longer time horizons. A workflow audit that costs four hours and eliminates thirty minutes of daily friction pays for itself in eight days. A two-week constraint sprint that surfaces three skill gaps you did not know you had compounds across the next several years of your career. The short-term cost is real and visible. The long-term gain is diffuse and invisible, which is exactly why most people skip it.
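The arithmetic is worth sanity-checking. A quick sketch using the numbers above; the 250 working days per year is an added assumption:

```python
# Break-even math for the workflow audit example.
audit_cost_hours = 4.0
daily_savings_hours = 0.5  # thirty minutes of daily friction eliminated

payback_days = audit_cost_hours / daily_savings_hours
print(f"The audit pays for itself after {payback_days:.0f} working days")  # 8

# Assuming roughly 250 working days per year:
net_yearly_gain = daily_savings_hours * 250 - audit_cost_hours
print(f"Net gain over ~250 working days: {net_yearly_gain:.0f} hours")     # 121
```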
This is the same asymmetry you see in why tech companies build features they never plan to release: the value is not in the immediate output but in the organizational learning and capability that the work generates. The feature is the vehicle. The knowledge is the asset.
The engineers who understand this, who can tolerate short-term friction for compounding long-term gains, are the ones who seem to keep getting better in ways that are hard to attribute to any single decision. That is because no single decision explains it. It is the accumulated effect of dozens of small deliberate disruptions, each one a tiny diagnostic, each one a small course correction, each one a bet that understanding how you work is as important as the work itself.