In 2023, it emerged that Google had committed roughly $300 million to Anthropic, the AI safety company founded largely by senior researchers who had left OpenAI, the very lab whose ChatGPT had just upended Google's sense of security. Anthropic's Claude was, by any honest assessment, a direct competitor to Google's search business, its Bard product, and its broader ambitions in AI. Google was, in effect, funding the researchers most capable of understanding exactly where Google was vulnerable.
This was not an oversight. It was a calculation.
The Setup
To understand why, you have to go back to a moment in late 2022 that rattled Google’s senior leadership enough that they reportedly issued an internal “code red.” OpenAI had released ChatGPT, and within two months it had reached 100 million users, a pace of adoption that no consumer product had previously matched. Google’s search business, which generated the majority of its revenue, suddenly faced a plausible alternative model for finding information.
Dario Amodei, Anthropic’s CEO, and his sister Daniela had left OpenAI in 2021 along with several other senior researchers. They had taken with them a specific thesis: that the most capable AI systems would also be the most dangerous, and that safety research had to be developed in parallel with capability research, not as an afterthought. Their company was built around that belief. It was also, incidentally, built to compete directly with OpenAI and, by extension, with Google.
Google’s investment in Anthropic came alongside a separate and larger commitment from Amazon, which put in up to $4 billion. Microsoft, meanwhile, had already committed billions to OpenAI. The major cloud providers were essentially placing bets on every serious AI lab simultaneously.
What Actually Happened
The standard explanation for why large companies invest in potential competitors is that they want a financial return if they can’t win outright. That’s partly true but misses the more important mechanisms at work.
First, consider what Google actually bought. The investment came with a cloud computing contract. Anthropic agreed to use Google Cloud's infrastructure for a significant portion of its training workloads. Google's cloud division, which trails Amazon Web Services and Microsoft Azure in market share, gained a high-profile customer whose compute bills reportedly run into the hundreds of millions of dollars annually. The investment was partly a customer acquisition strategy dressed up as venture capital.
Second, and more important, Google bought information. Significant minority investors typically gain information rights: regular financial updates, technical briefings, visibility into the research directions a well-funded competitor is pursuing. In a field where the difference between capability levels can hinge on architectural choices made six months earlier, knowing what Anthropic is working on has real strategic value. This is not industrial espionage. It is the standard information flow that comes with being a significant investor.
Third, and this is the part most analyses underweight: Google bought optionality on an outcome it couldn’t fully control. If transformer-based models trained on internet-scale data turn out to be a dead end, and some serious researchers believe this, Google has hedged by funding groups pursuing alternative approaches. If regulation tightens around AI development, having invested in the company most focused on safety gives Google credibility in those conversations. If Anthropic builds something genuinely transformative, Google participates in the upside.
None of this requires believing that Claude will never threaten Google Search. It requires only believing that the cost of being wrong about AI’s trajectory is higher than the cost of the investment itself.
Why This Pattern Repeats
Google’s move followed a logic that large technology companies have applied many times, with varying levels of self-awareness.
Intel Capital, the chip company's venture arm, spent decades investing in software companies that could, in theory, have reduced dependence on Intel hardware. In practice, those investments gave Intel early visibility into where computing workloads were heading, which informed its own product roadmap. The venture arm served as an early warning system.
Salesforce Ventures has invested in companies that compete with Salesforce’s core CRM product. The firm’s public rationale involves building the broader enterprise software market, which is true as far as it goes. But the investments also mean Salesforce sees term sheets, pitch decks, and growth metrics from competitors before those companies are mature enough to pose a serious threat. By the time a startup is a genuine problem, Salesforce has already had years of data about its trajectory.
Microsoft’s $13 billion commitment to OpenAI is perhaps the clearest recent example of a company betting on a partner-competitor relationship. OpenAI’s GPT models power Microsoft’s Copilot features, which means Microsoft is both distributing the technology and dependent on a company it doesn’t fully control. The relationship gives Microsoft capabilities it couldn’t build as fast internally, while giving OpenAI distribution and compute it couldn’t afford independently. The tension in that arrangement is not a bug. It is precisely what makes both parties continue to invest in making it work.
What This Tells Us About How Tech Giants Think About Survival
The conventional view of corporate strategy treats incumbents as trying to protect their current position. The more accurate view is that large technology companies with significant cash reserves treat every major technological shift as an event they must have exposure to, regardless of whether that exposure is comfortable.
This is not generosity. Intel didn’t fund competing architectures out of open-mindedness. Google didn’t fund Anthropic out of philosophical commitment to AI safety, though that may be a genuine secondary concern. These decisions follow from a specific kind of institutional risk management: the recognition that the cost of missing a transition entirely exceeds the cost of funding the people most likely to cause it.
There is a related dynamic worth examining. Large companies are often bad at acquiring the thing that would actually help them. Full acquisitions absorb startups into bureaucracies that can slow or eliminate what made those companies valuable. Minority investment is a way to maintain the benefits of the relationship without absorbing the costs of ownership. Anthropic inside Google would probably be slower than Anthropic outside Google. Google benefits more from Anthropic being fast.
The Lesson
When a company worth more than a trillion dollars invests in a startup that could theoretically end its primary business, the instinct is to read it as cognitive dissonance or corporate theater. It is usually neither.
The more accurate read is that large technology companies have developed, out of necessity, a sophisticated understanding of their own limitations. They know their internal research organizations move slower than well-funded startups with no legacy products to protect. They know that the next major platform shift will come from somewhere they didn’t fully anticipate. And they know that a minority stake in the companies most likely to cause that shift costs a fraction of what it would cost to be caught entirely flat-footed.
Google’s investment in Anthropic may look, in five years, like one of two things: a shrewd hedge that gave Google a foothold in the AI era it was slow to lead, or an expensive reminder that you cannot buy your way out of disruption once it has already started. The honest answer is that Google’s leadership almost certainly doesn’t know which outcome is more likely. That uncertainty is precisely why they wrote the check.