The Cloudflare-Verizon BGP Leak (2019)
On June 24, 2019, a large portion of Cloudflare's (AS13335) traffic was rerouted through the network of a steel company in Pennsylvania. For over two hours, internet users around the world experienced degraded connectivity to millions of websites protected by Cloudflare, including major services, APIs, and infrastructure endpoints. The root cause was not a sophisticated cyberattack. It was a misconfigured BGP optimizer at a small regional ISP, a chain of missing route filters, and a Tier 1 carrier that blindly propagated the damage to the entire global routing table.
This incident remains one of the most studied BGP route leaks in internet history. It exposed systemic failures in how the world's largest networks handle route validation and filtering, and it accelerated industry-wide adoption of RPKI and other routing security measures.
The Players
Four organizations are central to understanding this incident:
- Allegheny Technologies (AS396531) -- a specialty metals manufacturer headquartered in Pittsburgh, Pennsylvania. Allegheny had no business re-announcing other networks' routes to anyone. They were a BGP customer of both DQE Communications and Verizon, which put them squarely in the middle of the leak path.
- DQE Communications (AS33154) -- a small regional ISP based in Pittsburgh that provided transit to Allegheny Technologies. DQE ran a BGP optimizer product called Noction, which generated the more-specific routes at the heart of the incident and announced them to Allegheny.
- Verizon (AS701) -- one of the largest Tier 1 carriers in the world. Verizon accepted the 20,000+ leaked prefixes from its customer Allegheny without applying any prefix limits or route filtering, then propagated them to the global internet.
- Cloudflare (AS13335) -- a global CDN and security provider whose traffic was the primary victim of the leak. Cloudflare's anycast prefixes were among the thousands of routes that got rerouted through Allegheny's network.
What Is a BGP Optimizer?
A BGP optimizer is a piece of software that manipulates BGP routing to improve performance for a network's outbound traffic. Products like Noction BGP Optimizer work by taking the prefixes a network learns from its upstream providers and splitting them into more-specific prefixes. The idea is straightforward: if a network has two upstream links and learns a /20 prefix from both, the optimizer can split it into two /21s, send each one out a different link, and then measure which link performs better for traffic to each half of the address space.
The problem is that these more-specific prefixes are created internally -- they are synthetic routes that do not correspond to any real allocation or authorization. They exist purely to influence local traffic engineering. Under normal circumstances, these routes should never be advertised externally. They should stay inside the network that created them.
When a BGP optimizer's routes leak externally, the consequences are severe. BGP routers everywhere prefer more-specific prefixes over less-specific ones due to the longest prefix match rule. If the legitimate holder announces a /20 and the leaked route is a /21 (a more-specific subset), every router on the internet that learns both routes will prefer the /21 -- sending traffic toward the leaking network instead of the legitimate origin.
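The splitting step is easy to reproduce. Here is a minimal Python sketch using only the standard library; the /20 is an illustrative value, not taken from the incident data:

```python
import ipaddress

# A /20 learned from an upstream provider (illustrative value).
learned = ipaddress.ip_network("104.16.0.0/20")

# The optimizer splits it into more-specific halves so each half can be
# steered out a different upstream link and measured independently.
more_specifics = list(learned.subnets(prefixlen_diff=1))
print(more_specifics)
# [IPv4Network('104.16.0.0/21'), IPv4Network('104.16.8.0/21')]

# These synthetic /21s exist only for local traffic engineering. If they
# leak, every router that hears both the real /20 and a /21 prefers the /21.
```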
The Chain of Failures
The June 2019 leak was not a single mistake. It was a cascade of failures at every link in the chain. Each organization had an opportunity to prevent the damage, and each one failed.
Failure 1: DQE Communications (AS33154)
DQE ran a Noction BGP optimizer that generated more-specific prefixes from the routes in its table, intended purely to steer DQE's own outbound traffic. Those synthetic routes were supposed to stay inside DQE's network. Instead, they were advertised via BGP to DQE's customer, Allegheny Technologies. Over 20,000 prefixes that DQE had no authority to slice into more-specifics -- prefixes belonging to Cloudflare, Amazon, and many other networks -- were handed to a customer's routers.
This was the ignition point. Routes that existed only as internal traffic engineering artifacts were now sitting in an external BGP table, one missing egress filter away from the global internet.
Failure 2: Allegheny Technologies (AS396531)
Allegheny bought transit from both DQE and Verizon. Its routers re-advertised the routes learned from DQE to Verizon -- a textbook route leak. A multihomed customer should announce only its own prefixes to its transit providers, and a basic egress filter enforces exactly that. Allegheny had none.
In effect, a steel manufacturer's network told the internet: "Send me all the traffic for these 20,000 IP blocks." Allegheny's network had a fraction of the capacity needed to handle even a tiny percentage of that traffic.
Failure 3: Verizon (AS701)
This was the most consequential failure in the chain. Verizon (AS701) is a Tier 1 network -- one of the largest carriers on the planet. When Verizon received over 20,000 unexpected route announcements from Allegheny -- a steel company that should have been announcing a handful of prefixes at most -- their routers accepted every single one and propagated them globally.
Verizon had no RPKI validation. No IRR-based route filtering. No max-prefix limits on their session with Allegheny. None of the standard safety mechanisms that the internet engineering community has been advocating for decades. A Tier 1 carrier with the ability to propagate routes to every corner of the internet did exactly that -- with routes that had no business existing in the first place.
How the Leak Worked Technically
To understand why this leak was so effective at rerouting traffic, you need to understand the longest prefix match rule that governs all IP routing.
When a router has multiple routes that match a destination IP address, it always selects the most specific one -- the route with the longest prefix length. A /24 (256 addresses) is preferred over a /20 (4,096 addresses), which is preferred over a /16 (65,536 addresses). This rule is fundamental and cannot be overridden by any BGP attribute.
Cloudflare legitimately announced their prefixes as /20s and similar aggregates. The Noction optimizer at DQE split these into /21s and /22s -- more-specific subnets of the same address space. When Allegheny leaked these routes and Verizon propagated them globally, routers everywhere preferred them over Cloudflare's legitimate announcements. Traffic that should have gone directly to Cloudflare's anycast network instead flowed toward Pittsburgh, through Verizon into Allegheny's and DQE's networks.
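To see the rule in action, here is a minimal longest-prefix-match selection in Python. The prefixes and paths are illustrative stand-ins, not actual incident data; the point is that the more-specific route wins no matter how implausible its AS path looks:

```python
import ipaddress

# Candidate routes for a destination: prefix -> AS path (illustrative data).
routes = {
    ipaddress.ip_network("104.16.0.0/20"): "13335",                   # legitimate
    ipaddress.ip_network("104.16.0.0/21"): "701 396531 33154 13335",  # leaked
}

def best_route(dst: ipaddress.IPv4Address) -> ipaddress.IPv4Network:
    """Return the matching route with the longest prefix length."""
    return max((net for net in routes if dst in net),
               key=lambda net: net.prefixlen)

winner = best_route(ipaddress.ip_address("104.16.2.7"))
print(winner, "->", routes[winner])
# 104.16.0.0/21 -> 701 396531 33154 13335  (the leaked path wins)
```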
The AS paths for the leaked routes looked something like this:
701 396531 33154 13335
Reading right to left: the routes still claimed to originate from Cloudflare (AS13335), but they passed through DQE (AS33154), then Allegheny (AS396531), then Verizon (AS701). Because the prefix length was more specific, it did not matter that the AS path was longer than the legitimate Cloudflare path. Longest prefix match always wins.
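That right-to-left reading is mechanical, which is part of why this incident counts as a leak rather than a hijack: the origin field still pointed at Cloudflare. A small helper, purely for illustration:

```python
def parse_as_path(path: str) -> tuple[str, list[str]]:
    """Split an AS path string into (origin AS, transit ASes).

    The origin is the rightmost AS; everything to its left is transit.
    """
    hops = path.split()
    return hops[-1], hops[:-1]

origin, transit = parse_as_path("701 396531 33154 13335")
print(origin)   # '13335' -- still claims Cloudflare as the origin
print(transit)  # ['701', '396531', '33154'] -- the unexpected detour
```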
The Impact
The consequences were immediate and severe:
- Massive packet loss -- Allegheny's network was a small corporate network with bandwidth measured in the low gigabits. When internet-scale traffic -- tens or hundreds of gigabits per second -- arrived at their routers, the vast majority of packets were dropped. Users trying to reach Cloudflare-protected sites experienced timeouts, partial page loads, and complete unreachability.
- Global scope -- Because Verizon is a Tier 1 carrier with global reach, the leaked routes propagated worldwide. The impact was not limited to Verizon's customers or the US east coast. Networks in Europe, Asia, and elsewhere that used Verizon as a transit provider or peer also received the bad routes.
- Duration -- The leak persisted for approximately two hours before it was resolved. In internet time, where BGP convergence for a simple route withdrawal happens in seconds to minutes, two hours is an eternity. The delay was partly because the affected parties had to identify the source, contact the responsible networks, and convince them to take action.
- Collateral damage -- While Cloudflare was the most prominent victim, the leak included over 20,000 prefixes. Routes for Amazon, Fastly, and numerous other networks were also affected. Any network whose prefixes happened to be in DQE's routing table and were fed through the Noction optimizer was a potential victim.
Why Existing Safeguards Failed
The internet routing ecosystem has well-documented best practices for preventing exactly this type of incident. Every single one of them was absent at one or more points in the chain.
Max-Prefix Limits
Every BGP session between a transit provider and a customer should have a maximum prefix limit configured. If the customer announces more prefixes than the agreed-upon limit, the session should be automatically torn down. A steel company's network should be announcing perhaps 1-5 prefixes. When Allegheny suddenly announced 20,000+ routes, Verizon's routers should have shut the session down immediately. They did not.
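On real routers this is a one-line session option with automatic teardown built in. The toy model below, with illustrative names and numbers, just shows the behavior that configuration buys you:

```python
class BGPSession:
    """Toy model of a customer BGP session with a max-prefix limit."""

    def __init__(self, peer: str, max_prefixes: int):
        self.peer = peer
        self.max_prefixes = max_prefixes
        self.prefixes: set[str] = set()
        self.established = True

    def receive(self, prefix: str) -> None:
        if not self.established:
            return  # a torn-down session accepts nothing further
        self.prefixes.add(prefix)
        if len(self.prefixes) > self.max_prefixes:
            # Real routers tear the session down and typically wait out a
            # hold timer before retrying, rather than accepting more routes.
            self.established = False
            print(f"session to {self.peer} torn down: "
                  f"{len(self.prefixes)} prefixes > limit {self.max_prefixes}")

# A customer expected to announce ~5 prefixes gets a limit of 10, not "unlimited".
session = BGPSession("AS396531", max_prefixes=10)
for i in range(20_000):
    session.receive(f"10.{i // 256}.{i % 256}.0/24")
print(session.established)  # False -- the leak stops at the 11th prefix
```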
IRR Filtering
The Internet Routing Registry (IRR) is a set of databases where networks register the prefixes they intend to announce. Transit providers can build prefix filters from IRR data, accepting only the prefixes their customers have registered. Verizon applied no IRR-based filter to its customer session with Allegheny. If it had, the 20,000 unregistered prefixes would have been rejected at the first hop outside Pittsburgh.
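In practice, operators generate these filters from IRR route objects with tools like bgpq4. The sketch below hard-codes a registered set (using documentation address space) just to show the accept/reject decision:

```python
import ipaddress

# Prefixes this customer has registered in the IRR (illustrative values).
registered = {
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
}

def accept_from_customer(announcement: str) -> bool:
    """Accept an announcement only if it matches a registered prefix.

    Real policies are often looser (e.g. allowing registered
    more-specifics up to a length cap), but the principle is the same.
    """
    return ipaddress.ip_network(announcement) in registered

print(accept_from_customer("192.0.2.0/24"))   # True  -- registered
print(accept_from_customer("104.16.0.0/21"))  # False -- someone else's space
```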
RPKI and ROAs
Cloudflare had done their part: they had created RPKI Route Origin Authorizations (ROAs) for their prefixes, cryptographically asserting that only AS13335 was authorized to originate them, with specific maximum prefix lengths. The leaked routes violated these ROAs: they were more specific than the maximum length Cloudflare's ROAs permitted, so any router performing validation could have recognized and rejected them.
But RPKI only works if the networks receiving and propagating routes actually validate them. In 2019, Verizon did not perform RPKI Route Origin Validation. The cryptographic proof that these routes were illegitimate existed, but the network that most needed to check it simply did not. This was perhaps the most frustrating aspect of the entire incident: the defense existed, but the critical party chose not to use it.
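The validation logic Verizon skipped is not complicated. Here is a sketch of Route Origin Validation in the spirit of RFC 6811, with a single illustrative ROA standing in for Cloudflare's real published data:

```python
import ipaddress

# One ROA: (authorized prefix, maxLength, authorized origin AS).
# Illustrative of a Cloudflare-style ROA, not actual published ROA data.
ROAS = [(ipaddress.ip_network("104.16.0.0/20"), 20, 13335)]

def rov(prefix: str, origin: int) -> str:
    """Route Origin Validation: 'valid', 'invalid', or 'not-found'."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.subnet_of(roa_net):  # a ROA covers this prefix
            covered = True
            if origin == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov("104.16.0.0/20", 13335))  # valid   -- the legitimate route
print(rov("104.16.0.0/21", 13335))  # invalid -- exceeds maxLength 20
```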
MANRS and BCP38/BCP84
The MANRS (Mutually Agreed Norms for Routing Security) initiative and IETF Best Current Practice documents have long recommended comprehensive filtering at network boundaries -- route filtering per RFC 7454 (BCP 194) and anti-spoofing source-address filtering per BCP38 and BCP84. These are industry-standard recommendations, not obscure research proposals. Verizon, as one of the largest networks on the planet, was a particularly conspicuous violator of these norms.
The Timeline
The incident unfolded roughly as follows:
- ~13:34 UTC -- Allegheny Technologies begins re-advertising to Verizon the more-specific prefixes generated by DQE's Noction BGP optimizer.
- ~13:35 UTC -- Verizon accepts the leaked routes. Within seconds, it is distributing them to the global routing table.
- ~13:40 UTC -- Cloudflare's monitoring systems detect anomalous traffic patterns and increased error rates. Engineers begin investigating.
- ~14:00 UTC -- The source of the leak is identified through BGP monitoring data. Cloudflare and other affected parties begin reaching out to the upstream networks.
- ~15:30 UTC -- After extensive communication with DQE and Verizon, the leaked routes are finally withdrawn. BGP convergence begins, and traffic starts returning to normal paths.
- ~16:00 UTC -- Global routing is largely restored. Post-incident analysis begins.
The two-hour resolution time highlights a persistent problem with BGP incidents: there is no centralized authority that can force a network to withdraw bad routes. The affected parties had to identify the source through publicly available BGP monitoring data, then make phone calls and send emails to the responsible organizations to convince them to act. In a world where BGP changes propagate globally in under a minute, a two-hour response time is painfully slow.
What the AS Paths Revealed
During the incident, anyone looking at the global routing table through a BGP looking glass could see the anomaly. For Cloudflare prefixes that were normally announced directly by AS13335 with short AS paths, the routing table suddenly showed paths like:
... 701 396531 33154 13335
This path tells a clear story: the route appears to originate from Cloudflare (AS13335), but it passes through three unexpected networks -- a small Pittsburgh ISP (AS33154), a steel company (AS396531), and then Verizon (AS701) -- before reaching the rest of the internet. For anyone familiar with BGP, this path was immediately suspicious. Cloudflare's routes normally appear with short paths through major transit and peering links. Seeing a Tier 1 carrier routing Cloudflare traffic through a steel company was a red flag visible from any vantage point.
The leaked routes also had the telltale sign of a BGP optimizer: the prefix lengths were more specific than what Cloudflare normally announces. Where Cloudflare announced a /20, the routing table now showed /21s and /22s -- the exact pattern produced by an optimizer splitting prefixes for traffic engineering purposes.
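That pattern is detectable automatically. Here is a sketch of the heuristic -- compare announcements observed at a route collector against the aggregates you expect to originate, and alert on any covered more-specific (all values illustrative):

```python
import ipaddress

# Aggregates we legitimately originate (illustrative).
expected = [ipaddress.ip_network("104.16.0.0/20")]

# Announcements observed at a route collector or looking glass (illustrative).
observed = [
    ipaddress.ip_network("104.16.0.0/20"),  # our own aggregate
    ipaddress.ip_network("104.16.0.0/21"),  # optimizer-style more-specific
    ipaddress.ip_network("104.16.8.0/22"),  # another one
]

for seen in observed:
    for agg in expected:
        if seen.subnet_of(agg) and seen.prefixlen > agg.prefixlen:
            print(f"ALERT: more-specific {seen} of our {agg} is in the table")
# ALERT: more-specific 104.16.0.0/21 of our 104.16.0.0/20 is in the table
# ALERT: more-specific 104.16.8.0/22 of our 104.16.0.0/20 is in the table
```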
Comparing to Other Major BGP Incidents
The 2019 Cloudflare-Verizon leak was not the first or last major BGP incident, but it stands out because of the specific combination of factors involved:
- Pakistan vs YouTube (2008) -- Pakistan Telecom announced a more-specific /24 of YouTube's address space to block YouTube domestically, but the announcement leaked globally via PCCW. That was a BGP hijack -- an unauthorized origin announcing someone else's prefix. The 2019 incident was a route leak -- the routes still showed Cloudflare as the origin, but they were propagated through an unauthorized path.
- Google's 2018 Leak via MainOne -- MainOne, a Nigerian ISP, accidentally leaked Google prefixes it had learned over peering to China Telecom, which propagated them onward. Similar pattern: a small network leaked routes, and a larger network failed to filter them. Google traffic was briefly rerouted through China and Nigeria.
- Facebook Outage (2021) -- Facebook's own internal configuration change withdrew all BGP routes for Facebook, Instagram, and WhatsApp. That was a self-inflicted outage, not a route leak by a third party.
What made the 2019 Cloudflare incident particularly impactful was the scale (20,000+ prefixes), the amplification factor (Verizon's global reach), and the irony (RPKI protections existed but were ignored by the propagating carrier). It became a case study in why routing security cannot depend on just one party doing the right thing -- every network in the chain must participate.
The Aftermath and Industry Response
The June 2019 incident had lasting effects on the internet routing ecosystem:
Accelerated RPKI Adoption
The incident became a powerful argument for RPKI deployment. Cloudflare's blog post about the incident -- which directly called out Verizon for not performing Route Origin Validation -- generated significant public pressure. In the years following, RPKI adoption accelerated. By 2024, over 50% of global prefixes had valid ROAs, up from roughly 20% at the time of the incident. Major networks that had been slow to deploy RPKI, including several Tier 1 carriers, accelerated their timelines.
Renewed Focus on Route Filtering
The incident reinforced the importance of MANRS (Mutually Agreed Norms for Routing Security) principles. ISPs faced increased pressure to implement:
- Max-prefix limits on all BGP sessions, especially customer sessions
- IRR-based prefix filtering that only accepts registered routes from customers
- RPKI validation that drops or deprioritizes RPKI-invalid routes
- RFC 7454 (BCP 194) compliance for BGP operations and security
BGP Optimizer Scrutiny
The incident drew attention to BGP optimizer products and their potential for causing harm. While the tools themselves serve a legitimate purpose (outbound traffic engineering), the risk of external leakage became a widely discussed concern. Network operators became more cautious about deploying these tools, and vendors improved their documentation around proper egress filtering.
ASPA Development
The incident highlighted that RPKI alone cannot prevent route leaks -- it only validates the origin AS, not the path. This accelerated work on ASPA (Autonomous System Provider Authorization), a mechanism that allows autonomous systems to declare their authorized upstream providers. With ASPA, networks can detect when a route takes an unauthorized path, even if the origin is valid. ASPA is progressing through IETF standardization and is expected to complement RPKI ROAs.
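The idea behind the check is simple even though the full ASPA verification algorithm is more involved. The deliberately simplified sketch below uses hypothetical provider sets -- both the ASPA data and the single-direction walk are illustrative, not the IETF draft's complete upstream/downstream procedure:

```python
# ASPA records: customer AS -> ASes authorized to act as its providers.
# These sets are hypothetical illustrations, not real published ASPA data.
ASPA = {
    13335: {174, 1299, 3356},  # pretend provider set for Cloudflare
    33154: {701, 3356},        # pretend provider set for DQE
}

def leak_suspected(as_path: list[int]) -> bool:
    """Walk the path origin-first and flag any hop where the next AS is
    not an authorized provider of the current one.

    Deliberately simplified: the real ASPA algorithm also handles
    peering links and validates both halves of the path.
    """
    hops = list(reversed(as_path))  # origin first
    for customer, neighbor in zip(hops, hops[1:]):
        providers = ASPA.get(customer)
        if providers is not None and neighbor not in providers:
            return True
    return False

print(leak_suspected([701, 396531, 33154, 13335]))
# True -- AS33154 is not an authorized provider of AS13335
```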
Lessons for Network Operators
The 2019 leak offers concrete lessons for anyone operating a network with BGP:
- Always set max-prefix limits -- Every customer BGP session should have a prefix limit that matches the expected number of announcements. If a customer should announce 5 prefixes, set the limit to 10, not unlimited.
- Filter customer routes -- Use IRR data or explicit prefix lists to accept only the routes your customers are authorized to announce. Never accept arbitrary routes from a customer without validation.
- Deploy RPKI -- Create ROAs for your own prefixes and perform Route Origin Validation on routes you receive. Drop or deprioritize RPKI-invalid routes.
- Monitor your routes -- Use BGP monitoring services and looking glass tools to verify that your prefixes are being announced correctly from all vantage points. Set up alerts for unexpected origin changes or more-specific announcements.
- Be cautious with BGP optimizers -- If you use a BGP optimizer, ensure that the synthetic routes it creates cannot leak externally. Apply strict egress filters on all BGP sessions to prevent unintended announcements.
- Set ROA maxLength carefully -- When creating RPKI ROAs, set the maxLength to match exactly the prefix length you announce. If you announce a /20, set maxLength to 20, not 24. This ensures that any more-specific announcement will be RPKI-invalid and can be rejected by networks performing validation.
Checking the Current State
You can use this looking glass to verify the current routing state of the networks involved in the 2019 incident. Check that Cloudflare's prefixes show normal, direct AS paths and valid RPKI status:
- AS13335 -- Cloudflare's routes and RPKI validation
- AS701 -- Verizon's network and peering
- AS33154 -- DQE Communications
- 1.1.1.1 -- Cloudflare DNS: verify the AS path and RPKI status
- 104.16.0.0 -- One of Cloudflare's most-used prefix ranges
If all is well, you should see Cloudflare's prefixes originated cleanly by AS13335 with short AS paths and valid RPKI status -- the way it should have looked on June 24, 2019, had the right safeguards been in place.
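If you prefer to script the check, public APIs expose the same data. The sketch below queries the RIPEstat data API; the endpoint name and response layout here are assumptions to verify against the current RIPEstat documentation before relying on it:

```python
import json
import urllib.request

# Endpoint and response fields are assumptions based on RIPEstat's public
# data API; confirm against the current documentation before relying on it.
URL = ("https://stat.ripe.net/data/rpki-validation/data.json"
       "?resource=AS13335&prefix=1.1.1.0/24")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)["data"]

# Expect 'valid': Cloudflare publishes a ROA covering 1.1.1.0/24.
print(data.get("status"))
```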