How the Internet Backbone Works: Tier 1 Networks, Peering, and Transit
The internet backbone is the collection of high-capacity fiber optic networks operated by a small number of large carriers that interconnect to form the core transport fabric of the global internet. These backbone networks — often called Tier 1 networks — carry the vast majority of long-haul internet traffic between cities, countries, and continents. They are the structural layer between the submarine cables that cross oceans and the last-mile networks that deliver data to homes and businesses. Understanding the backbone means understanding how traffic actually moves between autonomous systems, why some BGP paths are shorter than others, and why the economics of interconnection shape the topology of the entire internet.
The ISP Hierarchy: Tier 1, Tier 2, and Tier 3
The internet is not a flat mesh of equals. It is a rough hierarchy of networks classified by their position in the interconnection food chain. This hierarchy is not formally defined by any standards body — it emerged organically from the economics of who pays whom to carry traffic.
Tier 1 Networks
A Tier 1 network is a network that can reach every destination on the internet without purchasing transit from any other network. It achieves this through settlement-free peering agreements with every other Tier 1 network. Because each Tier 1 network peers with all the others, and each has its own extensive customer base, the set of Tier 1 networks collectively provides full reachability. A Tier 1 network never pays another network for transit — it either peers for free or sells transit to smaller networks.
The current set of widely recognized Tier 1 networks includes:
- Lumen / Level 3 (AS3356) — The largest backbone by number of customer routes. CenturyLink acquired Level 3 in 2017 and rebranded as Lumen Technologies in 2020. Its network spans North America, Europe, and Latin America with deep fiber assets.
- NTT Communications (AS2914) — Operates the NTT Global IP Network (formerly Verio), one of the largest backbones with particularly strong presence in Asia-Pacific, North America, and Europe.
- Cogent Communications (AS174) — Known for aggressive pricing and a willingness to engage in peering disputes. Cogent operates a dense metro fiber network optimized for data center connectivity.
- GTT Communications (AS3257) — A global Tier 1 carrier formed through acquisitions of Tinet, nLayer, and Interoute, serving enterprise and carrier customers.
- Arelion / Telia Carrier (AS1299) — The former Telia Carrier division, rebranded to Arelion in 2022. One of the oldest backbone networks, with dominant presence in Northern Europe and extensive transatlantic and transpacific capacity.
- Telecom Italia Sparkle (AS6762) — Strong Mediterranean and Southern European backbone with extensive reach into the Middle East and Latin America.
- Zayo (AS6461) — Operates extensive fiber infrastructure in North America and Western Europe, with a focus on high-bandwidth connectivity for data centers and cloud providers.
- Verizon Business (AS701) — One of the legacy backbones (formerly UUNET/MCI/WorldCom), though its relative importance has declined as Verizon has de-emphasized wholesale transit.
The exact list of Tier 1 networks is debatable and shifts slowly over time as networks merge, go bankrupt, or change their interconnection strategies. What defines the group is the mutual settlement-free peering — each member peers with every other member, and none pays any of the others for transit.
Tier 2 Networks
A Tier 2 network peers with some networks but also purchases transit from one or more Tier 1 providers to reach the rest of the internet. Most large regional ISPs, national carriers, and many content and cloud providers fall into this category. Examples include Hurricane Electric (AS6939) and Comcast (AS7922).
Tier 2 networks often have extensive peering at Internet Exchange Points, which reduces their transit costs by handling a large fraction of their traffic through settlement-free interconnection. The line between a large Tier 2 and a Tier 1 is fuzzy — a network that peers with most of the internet but still buys transit from one Tier 1 is technically Tier 2, even if it looks almost like a Tier 1 from the outside.
Tier 3 Networks
A Tier 3 network is a smaller ISP or enterprise network that primarily purchases transit and does little or no peering. These are the leaf nodes of the internet topology — the access networks, regional ISPs, hosting providers, and corporate networks whose only connection to the broader internet is through one or two upstream transit providers. A Tier 3 network shows up in BGP as an origin AS at the end of AS paths, always reached through its transit providers.
Transit vs. Peering vs. Paid Peering
The three fundamental commercial relationships that determine how traffic flows between networks are transit, settlement-free peering, and paid peering. Each has different economics and different implications for BGP route propagation.
Transit
In a transit relationship, a customer network pays a provider network for access to the entire internet. The transit provider announces the customer's routes to all of its peers and upstream providers, and provides the customer with a default route or a full BGP table so the customer can reach every destination. Transit is sold by committed bandwidth — a typical price might be $0.50 to $3.00 per Mbps per month for a 10 Gbps port, depending on the market and location. In competitive markets like Ashburn, Virginia or Amsterdam, prices are lower; in remote or undersea-cable-constrained locations, prices can be dramatically higher.
From a BGP perspective, transit means the provider includes the customer's prefixes in announcements to everyone — upstream, peers, and other customers. The customer's AS appears at the end of AS paths originated by the provider.
Settlement-Free Peering
In settlement-free peering, two networks agree to exchange traffic destined for each other's networks (and their respective customers) at no charge. Neither network pays the other. Each side only announces its own routes and its customers' routes to the peer — not routes learned from other peers or upstream providers. This distinction is crucial: peering provides access to a subset of the internet (the peer's network and customers), not the full routing table.
Settlement-free peering typically happens between networks of roughly comparable size and traffic volume. The rationale is mutual benefit: both networks save on transit costs by exchanging traffic directly. Peering is established either at Internet Exchange Points (public peering) or via dedicated cross-connects in colocation facilities (private peering).
Paid Peering
Paid peering is a hybrid arrangement where one network pays the other for a peering-style interconnection. Unlike transit, the paying network typically only receives routes for the other network's customers — not a full routing table. Unlike settlement-free peering, money changes hands. Paid peering usually arises when traffic ratios between two networks are significantly imbalanced, or when a large eyeball network (ISP with many end users) has enough market power to demand payment from content providers who want direct access to those eyeballs.
The distinction between paid peering and transit is subtle but real. In paid peering, the paying network gets access only to the other network's customer cone — the set of networks reachable through it. In transit, the customer gets access to the entire internet. The BGP behavior differs accordingly: paid peering announcements are limited in scope, just like settlement-free peering, while transit announcements propagate broadly.
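The export rules implied by these three relationships — announce customer routes to everyone, but announce peer- and provider-learned routes only to customers — can be sketched as a small predicate. This is a minimal model (often called the Gao-Rexford rules); the relationship labels are illustrative, not any router vendor's configuration syntax:

```python
# Sketch of BGP export rules by relationship type (the Gao-Rexford model).
# Relationship names are illustrative, not any vendor's API.

def exportable(route_learned_from: str, announce_to: str) -> bool:
    """Return True if a route learned from a neighbor of type
    `route_learned_from` may be announced to a neighbor of type
    `announce_to`. Types (from our point of view): 'customer',
    'peer', 'provider'.
    """
    if route_learned_from == "customer":
        return True                   # customer routes: announce everywhere
    return announce_to == "customer"  # peer/provider routes: customers only

# A transit provider announces its customer's prefix to all neighbors:
assert exportable("customer", "peer") is True
# A peer's routes are NOT re-announced to another peer (no free transit):
assert exportable("peer", "peer") is False
# Provider-learned routes go only to paying customers:
assert exportable("provider", "customer") is True
```

These two lines of policy are what make peering "limited in scope" in BGP terms: a route learned from a peer simply never propagates past your own customer cone.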
The Default-Free Zone and the Full BGP Table
The Default-Free Zone (DFZ) is the set of routers on the internet that carry a complete copy of the global routing table and do not use a default route. These routers know an explicit path to every routable prefix on the internet. Every Tier 1 network operates entirely within the DFZ — their routers must have full routing knowledge because there is no upstream to use as a default destination for unknown routes.
The full BGP routing table — the set of all prefixes announced globally — has grown enormously over the decades. As of 2025, the IPv4 DFZ table contains over 1 million prefixes, while the IPv6 table exceeds 200,000 prefixes. This growth is driven by increased address usage, more specific route announcements for traffic engineering, and the proliferation of smaller autonomous systems.
Carrying the full table requires substantial router memory and processing power. Each prefix in the BGP table may have multiple candidate paths (from different peers and transit providers), and the router must run the BGP best-path selection algorithm on each to determine the winning route. A DFZ router at a major backbone exchange point might maintain BGP sessions with hundreds of peers, each sending a large fraction of the full table, resulting in millions of path entries in the Adj-RIB-In (the raw received routes before best-path selection).
Smaller networks avoid this complexity by purchasing transit and using a default route — any traffic they do not have a specific route for is sent to their transit provider, which does carry the full table. The decision of whether to carry a full table, partial routes, or just a default is a significant architectural choice for any network.
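The default-route alternative is just longest-prefix matching with a /0 catch-all. A minimal sketch using Python's standard `ipaddress` module (the prefixes and next-hop names are made up for illustration):

```python
import ipaddress

# A tiny longest-prefix-match table with a default route fallback — how a
# small network routes with a default plus a few specifics instead of the
# full DFZ table. Prefixes and next hops are illustrative.
table = {
    ipaddress.ip_network("0.0.0.0/0"): "transit-provider",  # default route
    ipaddress.ip_network("198.51.100.0/24"): "peer-A",      # direct peer
    ipaddress.ip_network("203.0.113.0/24"): "customer-B",   # own customer
}

def lookup(dst: str) -> str:
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return table[best]

assert lookup("203.0.113.7") == "customer-B"    # specific route wins
assert lookup("8.8.8.8") == "transit-provider"  # everything else: default
```

A DFZ router is conceptually the same lookup — minus the /0 entry, plus a million specifics.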
Backbone Fiber Routes and Physical Topology
The logical topology of BGP — which AS connects to which — sits atop a physical topology of fiber optic cables. Backbone networks own or lease vast amounts of long-haul fiber, and the routes this fiber follows are shaped by geography, rights-of-way, and demand.
North American Backbone Routes
The major US backbone fiber routes generally follow railroad rights-of-way, interstate highway corridors, and pipeline easements. Key routes include:
- Northern corridor — New York/New Jersey through Chicago to Seattle, roughly following the I-90/I-94 corridor. This connects the major East Coast data center markets (Ashburn, northern NJ) to the Chicago hub and onward to the Pacific Northwest and transpacific cable landing points.
- Southern corridor — The Sunbelt route from northern Virginia through Atlanta, Dallas, and on to Los Angeles. This follows the I-85/I-20/I-10 corridor and connects the East Coast to the major West Coast cable landing stations and data centers.
- Central corridor — Routes through the Midwest, connecting Chicago to Dallas and Kansas City, and east-west across the central US.
- Pacific coastal — The West Coast route from Seattle through Portland, San Francisco/San Jose, and Los Angeles, connecting submarine cable landing stations with major metro data center clusters.
The most important single intersection in the US backbone is Chicago, where virtually every major backbone has a point of presence. Ashburn, Virginia is the largest concentration of data center capacity and internet exchange traffic, while 60 Hudson Street and 111 8th Avenue in New York City are historic carrier hotels where many long-haul fiber routes terminate.
European and Global Routes
In Europe, backbone fiber follows major population corridors: London to Amsterdam, Frankfurt, Paris, and onward to Southern and Eastern Europe. Frankfurt and Amsterdam are the continent's primary interconnection hubs, hosting DE-CIX and AMS-IX respectively — two of the world's largest IXPs by traffic volume. London (Docklands) is the primary terminus for transatlantic cables and hosts LINX.
Intercontinental backbone capacity rides on submarine cables. The transatlantic route (US East Coast to UK/France/Iberia) is the most heavily provisioned, with dozens of cable systems providing aggregate capacity measured in petabits per second. The transpacific route (US West Coast to Japan/Southeast Asia) is the second largest. Routes to Africa, South America, the Middle East, and South/Southeast Asia have grown rapidly with cables like 2Africa, Equiano, and PEACE.
Major Tier 1 Networks in Detail
Lumen / Level 3 (AS3356)
Lumen Technologies (formerly CenturyLink, which acquired Level 3 Communications in 2017) operates what is by many measures the largest backbone on the internet. AS3356 consistently appears as the AS with the most customer routes in the global BGP table — meaning more networks buy transit from Lumen than from any other single provider. Its fiber network spans roughly 450,000 route miles (about 720,000 km) globally, with dense coverage across North America, Europe, and Latin America.
Level 3's dominance was built through a series of acquisitions: it absorbed Genuity (the former BBN Planet), WilTel (the former Williams Communications), Broadwing, Global Crossing, and tw telecom, each adding fiber routes and customer bases. This history is visible in BGP data — Lumen still announces prefixes acquired from these legacy networks, and many older peering arrangements were inherited from predecessor companies.
NTT Communications (AS2914)
NTT's Global IP Network is one of the top-tier global backbones, operated by the Japanese telecommunications giant. NTT acquired Verio in 2000, which gave it a US backbone, and has since built one of the most geographically diverse Tier 1 networks, with particularly strong infrastructure in Asia-Pacific (Japan, Hong Kong, Singapore), North America, and Europe. NTT is known for consistently strong network performance metrics, frequently topping independent backbone performance benchmarks.
Cogent Communications (AS174)
Cogent is a famously aggressive Tier 1 provider. It offers some of the lowest transit prices in the industry, built on an efficient operating model focused on data center colocation connectivity. Cogent's network is heavily optimized for metro Ethernet within major data center markets, with long-haul backbone capacity connecting them. Cogent is notable for its history of peering disputes — it has been temporarily de-peered by other major networks over the years, including a well-publicized de-peering by Sprint in 2008 and a prolonged congestion dispute with Verizon. These disputes, while usually resolved within days or weeks, demonstrate that even Tier 1 peering is not unconditional.
Arelion / Telia Carrier (AS1299)
Arelion (formerly Telia Carrier, rebranded in 2022) is one of the most storied backbone networks. Originating from the Swedish national telecom operator Telia, its carrier division built one of the first pan-European IP backbones and expanded globally. Arelion has a dominant position in Northern and Central Europe and operates extensive transatlantic and transpacific links. It is frequently seen in AS paths for traffic entering or leaving Scandinavia, the Baltics, and Eastern Europe. Arelion is also notable for appearing in major BGP route leak incidents, where misconfigured customers accidentally re-announced Arelion's full table to other providers.
GTT Communications (AS3257)
GTT was formed through a series of mergers that combined Tinet (a European backbone), nLayer (a US backbone), and Interoute (a European fiber network). The result is a global Tier 1 network with particular strength in Europe. GTT went through a financial restructuring in 2022 but continues to operate its backbone. In BGP data, GTT's AS frequently appears in European transit paths.
Traffic Exchange Economics
The economics of internet interconnection drive the structure of the backbone. Understanding who pays whom — and why — explains much of the internet's topology.
The Value of Eyeballs vs. Content
The internet interconnection market has a fundamental asymmetry: eyeball networks (ISPs serving end users) and content networks (providers serving content) exchange traffic that is heavily asymmetric in volume. A residential broadband user downloads far more than they upload — streaming video, web pages, software updates all flow from content to eyeball networks. This means content networks send much more traffic to eyeball networks than they receive in return.
This asymmetry gives large eyeball networks negotiating power. They can argue that content networks benefit more from the interconnection (since the content provider's customers are requesting the data), and therefore the content network should pay — either through paid peering or by purchasing transit. This is the root of many peering disputes and the reason paid peering exists.
Transit Pricing Trends
Transit prices have declined dramatically over the past two decades, falling from hundreds of dollars per Mbps in the early 2000s to under $1 per Mbps in competitive markets today. This decline is driven by: fiber capacity growing faster than demand (thanks to DWDM improvements), intense competition among transit providers, and the rise of content delivery networks that reduce demand for long-haul transit.
Despite declining unit prices, the total transit market remains large because bandwidth consumption continues to grow. Video streaming, cloud computing, remote work, and IoT devices all drive traffic growth rates of 25-35% per year, partially offsetting per-unit price declines.
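The interaction between falling unit prices and growing volume is simple compound arithmetic. A back-of-envelope sketch (the figures are illustrative, not market data):

```python
# If transit unit price falls X% per year while traffic grows Y% per year,
# does total spend rise or fall? Illustrative figures only.
def spend_after(years: int, price0: float, traffic_mbps: float,
                price_decline: float, traffic_growth: float) -> float:
    """Monthly transit spend after `years`, compounding both trends."""
    price = price0 * (1 - price_decline) ** years
    traffic = traffic_mbps * (1 + traffic_growth) ** years
    return price * traffic

# $1.00/Mbps on 10,000 Mbps, price falling 20%/yr, traffic growing 30%/yr:
start = spend_after(0, 1.00, 10_000, 0.20, 0.30)  # $10,000/month
later = spend_after(5, 1.00, 10_000, 0.20, 0.30)
# 1.3^5 ≈ 3.71 outpaces 0.8^5 ≈ 0.33, so total spend still rises:
assert later > start
```

With these sample rates, the traffic multiplier outgrows the price decline, which is why the total transit market can stay large even as per-Mbps prices approach zero.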
The Peering Negotiation
Peering agreements are negotiated bilaterally, and each network sets its own peering policy. Typical criteria for settlement-free peering include:
- Traffic volume — Minimum traffic exchange thresholds (e.g., 1 Gbps sustained)
- Traffic ratio — Traffic should be roughly balanced (e.g., no worse than 2:1)
- Geographic scope — The peer should have presence in multiple regions
- Mutual benefit — Both sides should save on transit costs by peering
- NOC capability — The peer should have a 24/7 network operations center
- Interconnection points — The peer should be willing to interconnect at multiple locations
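Criteria like these amount to a checklist that a peering coordinator evaluates per candidate. A hypothetical sketch using the example thresholds above (the field names and exact numbers are illustrative, not any network's actual policy):

```python
from dataclasses import dataclass

# Hypothetical settlement-free peering policy check; thresholds mirror the
# examples in the text (1 Gbps sustained, 2:1 ratio, multi-region, 24/7 NOC).
@dataclass
class PeeringCandidate:
    traffic_out_gbps: float   # traffic we send them
    traffic_in_gbps: float    # traffic they send us
    regions_present: int
    has_24x7_noc: bool
    interconnect_sites: int

def meets_policy(c: PeeringCandidate) -> bool:
    total = c.traffic_out_gbps + c.traffic_in_gbps
    ratio = max(c.traffic_out_gbps, c.traffic_in_gbps) / max(
        min(c.traffic_out_gbps, c.traffic_in_gbps), 0.001)
    return (total >= 1.0            # minimum sustained exchange
            and ratio <= 2.0        # no worse than 2:1 imbalance
            and c.regions_present >= 2
            and c.has_24x7_noc
            and c.interconnect_sites >= 2)

balanced = PeeringCandidate(3.0, 2.0, 3, True, 4)
lopsided = PeeringCandidate(9.0, 1.0, 3, True, 4)  # 9:1 ratio fails
assert meets_policy(balanced) and not meets_policy(lopsided)
```

In practice these evaluations are bilateral and often negotiable — the checklist is the starting point, not the final word.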
Networks with open peering policies (like Hurricane Electric) will peer with almost anyone who asks. Networks with selective or restrictive peering policies carefully evaluate each potential peer and may reject requests that do not meet their criteria. Some large eyeball networks have effectively closed peering policies, demanding paid peering from all but the largest counterparts.
Content Provider Networks: The New Backbone
Over the past decade, the traditional Tier 1 backbone hierarchy has been disrupted by the rise of content provider networks — massive private networks built by companies like Google, Meta (Facebook), Amazon, Microsoft, and Netflix. These networks now carry a significant fraction of all internet traffic, and their architecture looks very different from a traditional backbone.
Google (AS15169)
Google's network is one of the largest on the planet. Google owns or leases substantial submarine cable capacity (including investments in the Dunant, Grace Hopper, Firmina, and Topaz cables), operates a global backbone connecting its data centers, and peers with thousands of networks at hundreds of Internet Exchange Points and private interconnection points worldwide. Google's peering policy is generally open — it peers freely to offload traffic from transit providers, minimizing its transit costs (which are substantial given the volume of YouTube, Search, Cloud, and Android traffic). Google's approach has effectively made it a near-Tier-1 network for its own traffic, even though it technically buys transit from traditional Tier 1s for full reachability.
Meta / Facebook (AS32934)
Meta's network follows a similar model. It operates a private global backbone connecting its data centers, invests in submarine cables (including the 2Africa cable encircling the African continent), and maintains extensive peering. Meta is one of the largest sources of traffic on the internet due to Facebook, Instagram, WhatsApp, and video content. Its approach to interconnection is well documented — Meta maintains a public peering policy and actively peers at major IXPs globally.
Netflix Open Connect
Netflix takes a distinctive approach through its Open Connect program. Rather than relying on traditional CDN providers or building a massive backbone, Netflix places dedicated caching servers (called Open Connect Appliances or OCAs) directly inside ISP networks. An ISP that joins the Open Connect program gets a free Netflix server installed in their facility, which caches Netflix content and serves it directly to subscribers without the traffic ever crossing the ISP's upstream transit links.
This model is enormously efficient. Netflix traffic, which represents a significant fraction of peak evening internet traffic in many countries, is served from inside the ISP's own network. The ISP saves on transit costs, Netflix reduces its own network expenses, and users get better streaming performance with lower latency. From a BGP perspective, this means Netflix traffic often does not appear in backbone transit paths at all — it originates from within the ISP's own AS.
CDN Peering and Direct Interconnection
The rise of content delivery networks and content provider networks has fundamentally reshaped internet traffic patterns. Historically, most traffic traversed multiple transit providers on its way from source to destination, generating long AS paths. Today, a large fraction of traffic takes a shortcut: it flows directly from a content network to an eyeball network through a single peering link, producing an AS path of just two hops — the content AS and the eyeball AS.
This trend has reduced the relative importance of traditional Tier 1 transit networks. The backbone providers still carry enormous volumes of traffic, but the highest-volume flows (streaming video, social media, cloud services) increasingly bypass them entirely through direct peering and embedded caching. This is sometimes called the flattening of the internet — the hierarchy is compressing as content moves closer to end users.
How Backbone Routing Works in Practice
When a packet travels across the backbone, its path is determined by a combination of BGP routing decisions and interior gateway protocol (IGP) routing within each autonomous system.
Inter-Domain Routing: BGP
Between autonomous systems, BGP determines which AS path a packet follows. Each backbone router maintains the full BGP table and selects the best path for each prefix based on a series of criteria: local preference (which reflects the network's business relationships — customer routes are preferred over peer routes, which are preferred over transit routes), AS path length, origin type, multi-exit discriminator (MED), and various tie-breaking rules.
The business-relationship-driven local preference is the most important factor in backbone routing decisions. A Tier 1 network always prefers to route traffic through a customer (which generates revenue), then through a settlement-free peer (which costs nothing), and only as a last resort through a paid transit provider (which the Tier 1 should never need, by definition). This preference hierarchy ensures that traffic follows the most economically efficient path from the backbone operator's perspective.
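This preference hierarchy can be sketched as the first two steps of best-path selection: compare local preference (set from the relationship), then prefer the shorter AS path. The local-pref values below are conventional examples, not mandated by any standard, and the ASNs in the sample paths are illustrative:

```python
# Sketch of the relationship-driven step of BGP best-path selection.
# Local-pref values are conventional examples (customer > peer > provider).
LOCAL_PREF = {"customer": 300, "peer": 200, "provider": 100}

def best_path(candidates):
    """candidates: list of (relationship, as_path) tuples for one prefix.
    Highest local preference wins; shorter AS path breaks ties."""
    return max(candidates,
               key=lambda c: (LOCAL_PREF[c[0]], -len(c[1])))

routes = [
    ("provider", ["AS1299", "AS64500"]),             # shortest, costs money
    ("peer",     ["AS2914", "AS64496", "AS64500"]),  # free
    ("customer", ["AS64499", "AS64497", "AS64500"]), # longest, earns revenue
]
# The customer route wins despite having the longest AS path:
assert best_path(routes)[0] == "customer"
```

This is why AS path length, though the most visible attribute in looking-glass output, is only a tiebreaker after the economics have had their say.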
Intra-Domain Routing: IGP and Traffic Engineering
Within a single backbone network, an interior gateway protocol (typically IS-IS or OSPF) determines how traffic moves between routers. The IGP calculates shortest paths based on link metrics, which network engineers tune to control traffic distribution across the backbone's fiber links. This is traffic engineering — the art of distributing traffic load across available capacity to avoid congestion on any single link.
Many backbone networks also use MPLS for traffic engineering, which allows explicit path selection independent of IGP shortest-path routing. MPLS-TE tunnels can route traffic along non-shortest paths to balance load, avoid congested links, or provide fast failover when a link or node fails. Segment routing, a newer approach, achieves similar traffic engineering goals with less operational overhead than traditional MPLS-TE.
Hot-Potato vs. Cold-Potato Routing
When a backbone network receives traffic destined for a peer's network, it must decide where to hand off the traffic. There are two philosophical approaches:
- Hot-potato routing — Hand off the traffic at the nearest peering point, minimizing how far the traffic travels on your own network. This is the default behavior and is economically rational for the sending network: carry the traffic the shortest distance on your own (expensive) backbone and push it onto the peer's network as quickly as possible.
- Cold-potato routing — Carry the traffic on your own backbone to the peering point closest to the destination, then hand it off. This provides better end-to-end performance for the user but costs the sending network more because it carries the traffic farther on its own infrastructure.
In practice, most backbone networks use hot-potato routing by default. Cold-potato routing is sometimes used when a customer pays a premium for better performance, or when a peering agreement specifies it. The choice between hot-potato and cold-potato routing is controlled through BGP's MED attribute and IGP metrics, and it can significantly affect the AS path and latency experienced by end users.
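The two strategies reduce to which cost you minimize when picking an exit: your own IGP distance to the peering point (hot potato) or the peer's advertised distance to the destination, carried in the MED (cold potato). A sketch with made-up sites and metrics:

```python
# Hot-potato vs cold-potato exit selection. Each candidate exit carries our
# IGP cost to reach it and the MED the peer advertised (its own distance to
# the destination). Sites and metrics are made up for illustration.
exits = [
    {"site": "chicago",    "igp_cost": 10, "med": 300},  # nearest to us
    {"site": "losangeles", "igp_cost": 90, "med": 20},   # nearest to dest
]

def hot_potato(exits):
    # Hand off at the exit cheapest to reach on OUR backbone.
    return min(exits, key=lambda e: e["igp_cost"])["site"]

def cold_potato(exits):
    # Carry traffic to the exit closest to the destination (honor the MED).
    return min(exits, key=lambda e: e["med"])["site"]

assert hot_potato(exits) == "chicago"
assert cold_potato(exits) == "losangeles"
```

The same two candidate exits yield opposite answers, which is exactly the asymmetric-routing effect seen in traceroutes between networks that disagree on potato temperature.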
Resilience and Redundancy
Backbone networks are engineered for high availability. A single fiber cut, router failure, or peering point outage should not cause a network-wide disruption. This resilience comes from several design principles:
- Diverse fiber paths — Backbone networks use geographically diverse fiber routes so that a single event (construction dig, earthquake, hurricane) cannot sever all paths between two cities. Critical routes are protected by fiber on completely separate physical paths.
- Multiple peering points — Tier 1 networks peer with each other at many locations. If one peering point fails, traffic shifts to alternative peering points within seconds as BGP withdraws the failed routes and reconverges.
- Fast restoration — MPLS fast-reroute and IGP loop-free alternates can restore traffic within 50 milliseconds of a failure — far faster than BGP convergence, which can take seconds to minutes.
- Capacity headroom — Well-run backbones keep significant spare capacity on every link (typically operating at 30-50% utilization during peak), so that when a parallel link fails, the remaining links can absorb the rerouted traffic without congestion.
Despite these precautions, backbone failures do occur and can have wide-reaching effects. The CenturyLink/Level 3 outage in 2020 demonstrated how a single backbone provider's failure can affect a large swath of the internet. When a Tier 1 network experiences a significant outage, the effects are visible in BGP data: routes are withdrawn, AS paths change, and convergence events ripple across the global routing table.
The Future of the Backbone
Several trends are reshaping the internet backbone:
- Content provider dominance — By many estimates, Google, Meta, Amazon, and Microsoft now collectively invest more in network infrastructure than all traditional Tier 1 providers combined. Their private backbones carry growing shares of global traffic, and their submarine cable investments are expanding the physical infrastructure of the internet. The traditional Tier 1 model of selling transit is under pressure as the largest traffic sources bypass it entirely.
- 400G and 800G optics — New coherent optical technology is pushing per-wavelength speeds from 100 Gbps to 400 Gbps and 800 Gbps, dramatically increasing backbone capacity without new fiber deployment. This keeps ahead of bandwidth demand growth and continues the long-term decline in per-bit transport costs.
- Edge computing and distributed architectures — As computation moves closer to end users (edge clouds, distributed databases, embedded CDN caches), less traffic needs to traverse long-haul backbone paths. This flattens the traffic pattern and reduces the relative importance of cross-country backbone capacity.
- Consolidation — The backbone industry has consolidated significantly through mergers (Level 3 + CenturyLink/Lumen, GTT's acquisitions, Zayo's growth). Further consolidation is likely as transit prices decline and scale advantages increase.
- IPv6 table growth — The IPv6 routing table is growing faster proportionally than the IPv4 table, increasing the memory and processing demands on DFZ routers. This growth will accelerate as IPv4 exhaustion forces more networks to adopt IPv6.
Seeing the Backbone in BGP Data
The backbone hierarchy is directly visible in BGP routing data. When you look up a prefix or ASN in a BGP looking glass, you can observe:
- AS paths through Tier 1 networks — Routes that traverse a Tier 1 AS (like AS3356 or AS1299) indicate traffic flowing through the backbone. Shorter paths through Tier 1 networks generally mean better connectivity.
- Transit relationships — When an AS consistently appears upstream of another in AS paths, the upstream is likely a transit provider. The AS that appears most frequently as an upstream across many paths is typically a backbone carrier.
- Peering shortcuts — When two ASes are adjacent in an AS path without a Tier 1 intermediary, they likely have a direct peering relationship. Content providers like Google (AS15169) or Meta (AS32934) often appear directly adjacent to eyeball ISPs, reflecting their extensive peering.
- Path diversity — A well-connected network will have multiple available paths to any destination, often through different Tier 1 providers. You can see this in the looking glass by examining alternative paths for a given prefix.
- Customer cone size — The number of ASes reachable through a given AS (its customer cone) is a measure of backbone importance. Tier 1 networks have the largest customer cones because they are the upstream for thousands of smaller networks.
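Some of these observations can be automated. A small sketch that infers likely transit providers by counting which AS appears directly upstream of an origin across a set of AS paths (the paths are illustrative, not live BGP data):

```python
from collections import Counter

# Infer likely transit providers from AS paths: an AS that repeatedly
# appears immediately upstream of an origin is probably its transit
# provider. Sample paths are illustrative; AS64500 is a documentation ASN.
paths = [
    ["AS3356", "AS64500"],            # Lumen -> origin
    ["AS1299", "AS3356", "AS64500"],  # Arelion -> Lumen -> origin
    ["AS174",  "AS3356", "AS64500"],  # Cogent -> Lumen -> origin
    ["AS2914", "AS64500"],            # NTT directly -> origin
]

def upstream_counts(paths, origin):
    """Count which AS appears directly before `origin` across all paths."""
    c = Counter()
    for p in paths:
        if origin in p:
            i = p.index(origin)
            if i > 0:
                c[p[i - 1]] += 1
    return c

counts = upstream_counts(paths, "AS64500")
# AS3356 is the most frequent direct upstream -> likely transit provider:
assert counts.most_common(1)[0][0] == "AS3356"
```

Real relationship-inference algorithms (such as CAIDA's) add validation against traffic ratios and customer cones, but the core signal is this adjacency frequency.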
Try looking up any of the major backbone ASNs in the god.ad BGP Looking Glass to see their interconnection relationships, customer counts, and the routes they carry. Examining Lumen (AS3356), Cogent (AS174), or Arelion (AS1299) reveals the scope and structure of backbone networks directly from live BGP data.