In April 2010, China Telecom announced to the global internet that it was the best path to reach roughly 15% of all internet destinations. For about 18 minutes, traffic meant for the U.S. Senate, the Department of Defense, NASA, and major commercial networks traveled through Chinese routers instead of along its intended paths. No authentication was required. No alarm went off. The protocol simply believed what it was told.
This was not a novel attack. It was the ordinary, everyday operation of the Border Gateway Protocol, known as BGP, doing exactly what it was designed to do.
The Setup
BGP is the routing protocol that determines how data moves between the roughly 75,000 autonomous networks that together constitute the internet. When you load a webpage, BGP is what decides which sequence of networks your request traverses to reach a server and return data to you. Every internet service provider, cloud provider, and large enterprise runs BGP to advertise which IP address blocks they control and to learn where everyone else’s addresses live.
The protocol was formalized in 1989. Its designers, Yakov Rekhter and Kirk Lougheed, drafted the original specification over lunch on three paper napkins, a detail that has since become foundational internet lore. The design philosophy was pragmatic: build something that works, assume the people running it are competent and honest, and ship it. BGP was intended as a temporary fix while something more robust was developed.
That something more robust never arrived. BGP became the permanent solution.
The core vulnerability is straightforward. When a network operator announces through BGP that they own a block of IP addresses, there is no cryptographic verification of that claim. Other networks on the internet accept the announcement and update their routing tables accordingly. If you announce a more specific or seemingly more efficient path to a destination, traffic will flow toward you. Whether you have any legitimate connection to those addresses is irrelevant to the protocol.
This is called a BGP route hijack, and it is not rare.
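The mechanics can be sketched in a few lines. The routing table and functions below are a simplified illustration, not a real BGP implementation: routers forward traffic to the most specific matching prefix (longest-prefix match), and classic BGP performs no check that the announcing network actually owns the address space. The prefixes and AS numbers shown are the real ones from the 2008 YouTube incident described below.

```python
import ipaddress

# Hypothetical minimal routing table: prefix -> origin network (AS).
routing_table = {}

def announce(prefix, origin_as):
    """Accept any announcement at face value, as classic BGP does."""
    routing_table[ipaddress.ip_network(prefix)] = origin_as

def lookup(address):
    """Longest-prefix match: the most specific covering route wins."""
    addr = ipaddress.ip_address(address)
    matches = [p for p in routing_table if addr in p]
    if not matches:
        return None
    return routing_table[max(matches, key=lambda p: p.prefixlen)]

# The legitimate holder announces its block...
announce("208.65.152.0/22", "AS36561")   # YouTube's 2008-era prefix
# ...then anyone at all announces a more specific slice of the same space.
announce("208.65.153.0/24", "AS17557")   # Pakistan Telecom's announcement

print(lookup("208.65.153.238"))  # AS17557: traffic now flows to the hijacker
```

Because the /24 is more specific than the /22, every router that hears both announcements prefers the hijacker's route. Nothing in the protocol asks who is entitled to make the claim.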
What Happened
The China Telecom incident is the most widely cited example, but it sits inside a longer history of similar events. In 2008, Pakistan Telecom accidentally knocked YouTube offline globally for about two hours by announcing that it owned YouTube’s IP address space, intending to block the service domestically. The announcement leaked beyond Pakistan’s borders and was accepted by upstream providers. Traffic meant for YouTube’s servers traveled to Pakistan instead, found nothing useful, and dropped.
In 2018, a small internet exchange in Nigeria briefly attracted significant portions of Google’s traffic, redirecting it through Russia and China before reaching its destination. In 2019, a misconfiguration at a small ISP in Pennsylvania caused a cascade where Cloudflare, Facebook, and other major providers lost traffic for several hours. The ISP had used a network optimization tool that generated BGP announcements it had no business making, and those announcements were trusted and propagated.
None of these required sophisticated attackers. They required, variously: a misconfigured router, a poorly tuned optimization tool, and a state-owned carrier with ambiguous motives. The common thread is that BGP accepted all of it.
The technical community has understood this problem for decades. RPKI (Resource Public Key Infrastructure) was developed to address it. Under RPKI, IP address holders can cryptographically sign records stating which networks are authorized to announce their addresses. Routers can then check announcements against these signed records, a process called Route Origin Validation, and reject invalid ones. The technology works. It has been available for years.
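The validation logic itself is simple; the hard part is deployment, not computation. The sketch below follows the semantics standardized in RFC 6811, simplified for illustration: a signed record (a ROA) names a prefix, the one origin network authorized to announce it, and the maximum prefix length that origin may announce. An announcement is "valid" if a covering ROA matches, "invalid" if covered but mismatched, and "not-found" if no ROA covers it at all. The ROA list here is hypothetical.

```python
import ipaddress

# Hypothetical signed records: (covered prefix, max length, authorized origin AS).
roas = [
    (ipaddress.ip_network("208.65.152.0/22"), 24, "AS36561"),
]

def validate(prefix, origin_as):
    """Classify an announcement as valid, invalid, or not-found."""
    prefix = ipaddress.ip_network(prefix)
    covered = [r for r in roas if prefix.subnet_of(r[0])]
    if not covered:
        return "not-found"   # no ROA covers this prefix; nothing to judge
    if any(prefix.prefixlen <= max_len and origin_as == asn
           for _, max_len, asn in covered):
        return "valid"
    return "invalid"         # covered by a ROA, but wrong origin or too specific

print(validate("208.65.153.0/24", "AS36561"))  # valid: authorized origin
print(validate("208.65.153.0/24", "AS17557"))  # invalid: wrong origin
print(validate("192.0.2.0/24", "AS64500"))     # not-found: no ROA exists
```

The middle case is the 2008 hijack: with this ROA in place, a router performing Route Origin Validation would have discarded Pakistan Telecom's announcement. The third case is why partial adoption leaves gaps — an unsigned prefix yields "not-found", which routers cannot safely treat as a rejection.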
Adoption, as of this writing, covers a meaningful but far from complete share of global routes. Major providers such as Cloudflare and Amazon have implemented it. Many regional carriers and smaller ISPs have not. RPKI only prevents hijacks for routes where the legitimate holder has created a signed record. Until adoption is near-universal, significant gaps remain.
Why It Matters
The business stakes here are not abstract. BGP hijacks have been used to intercept cryptocurrency transaction traffic, to steal IP addresses and use them for spam campaigns, and to reroute traffic in ways that allow passive surveillance of communications. A 2018 paper from researchers at the Naval War College and Tel Aviv University documented repeated incidents where traffic to and from financial networks was rerouted through networks in former Soviet states, with timing and specificity that the authors found difficult to attribute to simple misconfiguration.
For anyone building infrastructure that depends on network reliability, BGP is the layer below the layers you think about. You can deploy robust redundancy, multiple cloud providers, sophisticated failover, and still find that a misconfigured router at an ISP you have never heard of in a country you do not operate in has redirected your traffic for twenty minutes. Your monitoring will show packet loss. The cause will be invisible to you until someone else notices and publishes an analysis.
The economics of the fix are also instructive. RPKI costs money to implement and creates operational complexity. The networks that implement it bear the full cost, but only see the benefit if enough other networks implement it too. This is a classic collective action problem. The unpaid labor holding up the internet often looks like this: costly maintenance work that benefits everyone but that no individual actor has a strong incentive to prioritize.
What We Can Learn
The BGP story is not primarily a story about a technical flaw that engineers failed to fix. It is a story about what happens when a protocol optimized for a small, trusted community gets inherited by a global, adversarial one.
The 1989 internet ran on the assumption that the people operating networks knew each other, had professional reputations to protect, and shared interests in keeping things running. For those conditions, BGP was adequate. The protocol scaled remarkably well in every dimension except the social one. When the assumption of good faith became less reliable, the protocol had no fallback.
This pattern appears repeatedly in technology. A system is built for a context where trust is cheap and verification is expensive. The system succeeds and grows. Growth introduces actors who do not share the original community’s norms. The trust model becomes the vulnerability, but by then, replacing it requires convincing thousands of independent operators to update their infrastructure simultaneously, which is effectively impossible to mandate.
The lesson for anyone building infrastructure today is not to design for the users you have, but for the worst-faith actor who will eventually share your protocol, your platform, or your API. The cost of retrofitting trust mechanisms into a widely deployed system is almost always higher than building them in from the start, and the interim period between when a vulnerability is understood and when it is fixed is measured not in months but in decades.
BGP will not be replaced soon. The fix, to the extent one arrives, will be gradual adoption of RPKI and related standards, nudged along by regulatory pressure in some jurisdictions and by the sheer embarrassment of continued incidents in others. The internet will keep routing traffic on a protocol held together, in part, by the assumption that most people running routers are trying to do the right thing.
Mostly, that assumption holds. The problem is that “mostly” is a poor security model for infrastructure that routes the world’s information.