In 2013, Google strapped a $1,500 wearable computer onto the faces of journalists, developers, and early adopters, who then spent much of their time getting asked to leave restaurants. Google Glass was clunky, invasive-looking, and solved no problem that most people actually had. The company had engineers. It had user researchers. It had enough institutional knowledge to know, before the first unit shipped, that the mass market wasn’t ready for this product and this product wasn’t ready for the mass market.

They launched it anyway.

The standard narrative is that Glass was a failed moonshot, a cautionary tale about hubris and Silicon Valley’s disconnect from normal human behavior. That narrative is too comfortable. It lets everyone off the hook and misses the actual story.

The Setup

Glass was incubated inside Google X, the company’s semi-secretive lab for high-risk, high-reward projects. The Explorer Program, which sold early units to developers and selected enthusiasts, was never positioned as a real product launch. Google called participants “Explorers.” The language was deliberate. You don’t call your customers explorers unless you’re acknowledging, at least implicitly, that the territory is unmapped.

But Google didn’t quietly test Glass in a lab. They put it on the cover of magazines. They sent units to fashion shows. Models at Diane von Furstenberg’s show wore them down the runway at New York Fashion Week in 2012. Google co-founder Sergey Brin wore a pair on the subway and let photographers catch him. This was not the behavior of a company running a low-key beta test.

So which was it? A genuine product push, or a controlled experiment? The answer is that it was neither, and both, and the distinction matters more than people realize.

[Figure: abstract diagram showing a single failed product trajectory branching into multiple successful capability streams.]
The product is the decoy. The capabilities are the point.

What Actually Happened

Google wasn’t trying to sell Glass to consumers. They were using the consumer launch to accomplish several other things simultaneously, none of which appeared in any press release.

First, they were stress-testing the supply chain and manufacturing tolerances for miniaturized wearable hardware. You cannot simulate this in a lab. You need real units in the real world, exposed to sweat, rain, varying temperatures, and the specific ways humans abuse objects they wear on their faces. The Explorer Program generated that data at scale.

Second, and more importantly, they were mapping the regulatory and social friction points that any wearable camera technology would eventually face. The backlash against Glass users in bars, gyms, and theaters wasn’t a PR disaster. It was a dataset. Google learned exactly where the privacy fault lines sat, which jurisdictions were hostile, and what the specific objections were, before those objections could sink a product they actually cared about.

Third, they were recruiting. The Explorers skewed heavily toward developers and technically sophisticated users. Many of them went on to build applications for Android Wear, Google’s subsequent wearable platform. Others joined Google directly. The Explorer Program was partly a very expensive, very public recruiting event.

The commercial failure of Glass as a consumer product was priced in. The other returns were not.

Why This Pattern Keeps Repeating

Google is not the only company that does this, and wearables are not the only category where it happens. Amazon launched the Fire Phone in 2014 with features nobody asked for, at a price point that made no sense, in a market already dominated by Apple and Samsung. Amazon took a $170 million write-down on it within months and discontinued it the following year. But the Fire Phone’s failure accelerated Amazon’s investment in what became Alexa, because the project forced the company to get serious about voice interfaces and natural language processing at a time when those investments might not have been approved otherwise.

Microsoft’s Zune lost badly to the iPod. It also gave Microsoft the hardware competency and supply chain relationships that eventually made the Surface line possible. You can draw a reasonably straight line from Zune’s failure to Surface Pro’s success if you’re willing to look at what the organization learned rather than what the product sold.

The pattern is consistent enough that it deserves a name. Call it capability harvesting. The product fails in the market. The capabilities built to support the product survive inside the organization, often redeployed somewhere more useful. The failure absorbs political and budget risk that might otherwise prevent the underlying capability from being developed at all.

The Uncomfortable Implication

If this is a real strategy, and I’m arguing it is, then a lot of what looks like incompetence in big tech is actually intentional. Companies launch products they know will struggle because the launch itself generates value that has nothing to do with whether the product succeeds.

This should make you skeptical of the “fail fast” mythology that saturates startup culture. Failing fast is good advice for startups that are genuinely uncertain about product-market fit. It’s different advice when a company with deep pockets launches something primarily to harvest capabilities, map resistance, or create organizational cover for a larger bet. Those companies aren’t failing fast. They’re spending deliberately on things that look like failures.

The distinction matters for anyone trying to read the market. When a company with Google’s resources launches something that seems obviously doomed, the right question isn’t “how did they get this so wrong?” It’s “what are they learning from this that we can’t see?”

Glass itself never died cleanly. Google relaunched it as Glass Enterprise Edition in 2017, targeting industrial and manufacturing use cases: logistics workers, surgeons, field technicians. That version found real customers. The consumer backlash that torpedoed the original Explorer Program turned out to be largely irrelevant in a factory setting where everyone is already wearing safety equipment and nobody cares about looking conspicuous. Google knew those enterprise use cases existed when they launched the consumer version. The consumer launch wasn’t a detour. It was a subsidy.

What We Can Learn

None of this means every product failure is secretly strategic. Most failures are just failures. Companies misjudge markets, build the wrong thing, ship too late, price incorrectly, and suffer for it. The mythology of the deliberate failure can become its own kind of cope, a way for post-hoc rationalization to launder genuine mistakes into clever strategy.

The tell is in the organizational behavior after the failure. If the team disperses and the technology gets shelved, it was probably just a failure. If the team reconvenes around a related project and the underlying capabilities show up somewhere else inside the organization within two or three years, you’re probably looking at capability harvesting.

Watch where the engineers go. That’s the real product roadmap.

For founders and strategists at companies that aren’t Google-sized, the lesson is narrower but still useful. Before you write off a product experiment as a mistake, ask what the experiment actually generated. Customer conversations that shifted your understanding of the problem. Infrastructure you built that now supports something else. Credibility in a space you’d otherwise have had to enter cold. The market result and the organizational result are separate questions, and conflating them is how companies talk themselves out of the value they already captured.