The single most mocked piece of advice in all of consumer technology turns out to be correct. When a support technician asks whether you’ve tried turning it off and on again, they aren’t stalling. They aren’t reading from a script designed to filter out impatient callers before escalating to the real help. They’re giving you the fastest path to a working machine, and the reason why exposes something uncomfortable about how software is built.

Software Is Not Designed to Run Forever

Every program running on your computer is managing memory. It requests chunks of RAM from the operating system, uses them, and is supposed to release them when it’s done. In theory, this is orderly. In practice, it isn’t. A program might hold onto memory it no longer needs because of a coding error, an edge case the developer didn’t anticipate, or a dependency library that behaves badly under certain conditions. This is called a memory leak, and virtually every complex piece of software has some.
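Leaks of this kind rarely look like mistakes in isolation. A minimal Python sketch (the EventBus and Document names here are hypothetical, not from any real library) shows how a single forgotten reference keeps “closed” objects alive:

```python
# Sketch of a common leak: an event bus that holds references forever.
class EventBus:
    def __init__(self):
        self._listeners = []            # every reference stored here keeps its object alive

    def subscribe(self, callback):
        self._listeners.append(callback)


class Document:
    def __init__(self, bus, name):
        self.name = name
        self.data = bytearray(1_000_000)   # ~1 MB of content per document
        bus.subscribe(self.on_change)      # the bound method references self

    def on_change(self, event):
        pass


bus = EventBus()
for i in range(100):
    doc = Document(bus, f"doc-{i}")
    del doc   # the user "closed" the document...

# ...but the bus still references every Document through its bound method,
# so none of the ~100 MB can be garbage collected.
print(len(bus._listeners))  # → 100
```

Nothing here crashes or reports an error; the program simply holds roughly 100 MB it will never use again, which is exactly why such leaks survive testing.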

Over hours or days of continuous operation, these leaks accumulate. A browser that ran fine at startup starts stuttering after a week because it has quietly consumed gigabytes of RAM across hundreds of open-and-close cycles of tabs, extensions, and background processes. The operating system itself isn’t immune. Background daemons, update services, and system processes all have the same vulnerability. The machine isn’t broken in any permanent sense. It is, accurately, tired.

Restarting flushes all of this. The operating system reclaims every byte of memory, terminates every process, resets every connection, and starts clean. It is, in computer science terms, returning the system to a known good state. That phrase matters. A known good state is where debugging can begin, which is why rebooting is the first step in almost every serious troubleshooting protocol, not just the casual advice of a frustrated help desk worker.

[Figure: abstract diagram of a network with stale and corrupted connection states.]
Network connections maintain state across every hop between your device and the server. Any of those states can silently degrade without the hardware itself failing.

Why Developers Don’t Just Fix the Leaks

The obvious question is why software ships with memory leaks in the first place. The answer is more structural than it might appear.

Modern applications are not written from scratch. They’re assembled from libraries, frameworks, and third-party components, each of which has its own memory management behavior. A web application might depend on hundreds of packages. The developer controls their own code. They do not fully control what those packages do internally, and they certainly don’t control how those packages interact with each other under every possible combination of user behavior and system state.

Testing catches obvious leaks. It doesn’t catch the leak that only manifests after 72 hours of continuous use on a machine running 47 other processes. Reproducing that environment in a test suite is expensive and imperfect. So companies ship software that they know will degrade over time, because the degradation is slow enough that most users will reboot before they notice, and because fixing every edge-case leak would take more engineering time than the problem costs in support tickets. This is a rational economic decision, not negligence. It’s worth being honest about that distinction.

This dynamic is part of a broader pattern in how software quality is managed. Software updates slow your device down on purpose for related structural reasons: the economics of maintaining old hardware rarely justify the engineering investment required to do it cleanly.

The Network Problem Is Separate and Equally Real

Memory isn’t the only thing that rebooting fixes. Network connections are stateful, meaning your device and the router (and the router and the modem, and the modem and your ISP’s infrastructure) maintain ongoing handshakes that track what’s been sent, received, and acknowledged. These states can become corrupted or stale without any single component actually failing.

A router that’s been running for months may have its routing table cluttered with dead entries. A DHCP lease may have technically expired without the device noticing. A TCP connection may be stuck in a half-open state, neither party sure whether the other has ended the session. None of these conditions represent a hardware problem. All of them can cause symptoms that look catastrophic from a user’s perspective: pages that won’t load, apps that can’t authenticate, calls that drop for no visible reason.

Restarting the router clears the routing table and forces fresh DHCP negotiations. Restarting the device closes every open socket and starts new ones. The network, too, returns to a known good state.
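The same known-good-state logic applies at the level of a single connection. This sketch, using Python’s standard socket module and assuming a simple request/response protocol, recovers from a stale connection by discarding the socket entirely rather than trying to diagnose or repair it:

```python
import socket

def fetch_with_reset(host, port, request, retries=2):
    """Send a request; on any socket error, discard the connection and retry fresh."""
    for attempt in range(retries + 1):
        # A brand-new socket every attempt: no inherited half-open state.
        sock = socket.create_connection((host, port), timeout=5)
        try:
            sock.sendall(request)
            return sock.recv(4096)
        except OSError:
            if attempt == retries:
                raise
            # The old connection may be stale or half-open. Don't diagnose it;
            # the close in `finally` returns us to a known good state.
        finally:
            sock.close()
```

The design choice mirrors the reboot itself: tearing down and rebuilding state is cheaper and more reliable than enumerating everything that might have gone wrong with the old state.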

What the Reboot Reveals About Software Architecture

The persistence of the reboot as a fix isn’t a sign that the industry is lazy. It’s a sign that the alternative (software that truly manages its own state correctly under all conditions, across all hardware configurations, indefinitely) is an enormously hard problem. Deterministic behavior in a non-deterministic environment is one of the central unsolved challenges in systems engineering.

Some systems are built to handle this more gracefully. Server software in high-availability environments uses techniques like process isolation, watchdog timers, and automatic restarts of specific services rather than entire machines, precisely because rebooting a production server is expensive. The reason your laptop doesn’t work this way is partly cost and partly the assumption that users will reboot it themselves.
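That gracefulness can be sketched in miniature. The toy supervisor below (a stand-in for real tools like systemd or supervisord, not a production pattern) restarts a single failing worker process with a capped retry budget, restoring a known good state without touching anything else on the machine:

```python
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3, backoff=0.1):
    """Run cmd; if it exits nonzero, restart it up to max_restarts times."""
    restarts = 0
    while True:
        proc = subprocess.Popen(cmd)
        proc.wait()
        if proc.returncode == 0:
            return restarts                  # clean exit: the service finished normally
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError("worker keeps failing; escalate to a human")
        time.sleep(backoff)                  # brief pause, then a fresh process

# A well-behaved worker needs no restarts; here we just show a clean run.
print(supervise([sys.executable, "-c", "pass"]))  # → 0
```

The restart budget is the key detail: a bounded number of automatic recoveries handles transient failures, while persistent failures still surface to an operator instead of looping silently forever.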

There’s also a support economics argument. For a help desk fielding thousands of calls, rebooting resolves a significant fraction of reported problems immediately and with no diagnostic effort required. It’s not that technicians are hiding deeper knowledge behind a dismissive first step. It’s that the reboot genuinely solves the problem in enough cases that it would be wasteful to skip it. Any support workflow that began with advanced diagnostics before confirming the machine had been restarted would be poorly designed.

The Joke Survives Because It Works

The reboot has become cultural shorthand for tech condescension, immortalized by The IT Crowd and a thousand office memes. That framing is unfair to what’s actually a sound piece of engineering advice. The joke works because the advice sounds too simple to be real. But simplicity and validity aren’t opposites.

The deeper truth is that rebooting is only necessary because of compromises made at every layer of the stack: in language design, in library ecosystems, in testing practices, in the hardware abstraction layer. Those compromises exist because perfect software is more expensive than software that can be periodically restarted. Users pay for that tradeoff in mild inconvenience. The tech support script is just the visible surface of a bargain struck long before you called.