How Submarine Cables Work: The Physical Internet

Nearly every byte of intercontinental internet traffic travels through cables laid on the ocean floor. Not satellites, not radio waves — fiber optic cables resting on seabed sediment, sometimes at depths exceeding 8,000 meters. These submarine cables carry an estimated 99% of all intercontinental data, handling everything from streaming video and financial transactions to BGP route announcements between networks on different continents. The entire global internet depends on roughly 600 cable systems comprising over 1.4 million kilometers of undersea fiber.

Understanding submarine cables means understanding the physical infrastructure that makes the internet's logical routing — the BGP routes you see in a looking glass — actually work. When a cable is cut, the routing changes are immediate and visible: AS paths shift, latency increases, and traffic reroutes through surviving cables within seconds.

Cable Construction: Engineering for the Deep

A submarine cable is not simply a fiber optic strand dragged across the ocean. It is a precision-engineered system designed to survive decades of continuous operation in one of the harshest environments on earth — crushing pressure, corrosive saltwater, marine life, anchors, earthquakes, and fishing trawlers.

The Fiber Core

At the center of every modern submarine cable are optical fiber pairs. Each fiber is a hair-thin strand of ultra-pure glass (about 9 micrometers core diameter for single-mode fiber) through which pulses of laser light carry data. Fibers are deployed in pairs — one for each direction of transmission. A modern cable typically contains between 8 and 24 fiber pairs, with newer designs trending toward the higher counts. Google's Dunant cable (Virginia to France) carries 12 fiber pairs; the 2Africa cable circling the African continent uses up to 16 fiber pairs on some segments.

Each fiber pair can carry enormous bandwidth. Using Dense Wavelength Division Multiplexing (DWDM), a single fiber pair on a modern system can carry 25 Tbps or more. A 16-fiber-pair cable can therefore deliver over 400 Tbps of total capacity — enough to stream tens of millions of 4K video feeds simultaneously.

The Layer Cake

The fibers are packaged in a carefully engineered series of protective layers, building outward from the delicate glass core:

  1. Fiber optic strands — The glass fibers themselves, each with a primary coating of acrylate polymer to prevent micro-bending losses.
  2. Fiber bundle / tube — Fibers are arranged inside a central tube or slotted core, typically filled with a thixotropic gel that prevents water ingress and cushions the fibers against mechanical stress.
  3. Steel strength member — High-tensile steel wires surround the fiber tube, providing the mechanical strength needed to withstand laying tension (cables are lowered from ships under their own weight, so the top portion bears the load of kilometers of cable hanging below it).
  4. Copper conductor — A copper tube or conductor carries electrical power to the repeaters along the cable. Voltages at the shore end can reach 10,000 to 15,000 volts DC, with the current path running through the conductor and returning via seawater ground. Total power feed for a long transoceanic cable can be 15,000 to 20,000 watts.
  5. Polyethylene insulation — A thick layer of high-density polyethylene provides electrical insulation for the copper conductor and waterproofing for the entire assembly.
  6. Steel armor (near shore) — In shallow-water segments (typically in water depths down to 1,000-1,500 meters), one or two layers of galvanized steel armor wires are wrapped helically around the cable, increasing its diameter but providing critical protection against anchors, fishing gear, and abrasion on rocky seabeds. Some near-shore sections are also buried in trenches 1-2 meters below the seafloor using specialized plows.

The result is a cable that ranges from about 17mm in diameter in the deep ocean (roughly the size of a garden hose) to 50mm or more in armored shallow-water sections. The deep-sea lightweight design is intentional — it minimizes the weight that the cable ship must handle and reduces the tension loads during laying operations that can span thousands of kilometers.

[Diagram: submarine cable cross-section — fiber pairs (8-24, 9 µm single-mode glass) in a gel-filled tube, surrounded by the steel strength member, copper conductor (~10 kV DC power feed), polyethylene insulation, and near-shore steel armor; roughly 17 mm diameter in the deep sea, ~50 mm armored near shore.]

Repeaters: Amplifying the Light

Light signals attenuate as they travel through glass fiber — losing about 0.2 dB per kilometer at the 1550nm wavelength used in submarine systems. After roughly 60 to 100 kilometers, the signal is too weak to be reliably detected. Submarine cables solve this by placing optical repeaters (also called amplifiers) at regular intervals along the cable.
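A rough loss-budget calculation shows why repeaters end up 60 to 100 kilometers apart. The sketch below is illustrative only: the 0.2 dB/km attenuation figure comes from the text, while the amplifier gain and design margin are assumed round numbers, not the parameters of any specific system.

```python
# Illustrative span-loss calculation (assumed numbers, not a real system design).
ATTENUATION_DB_PER_KM = 0.2   # fiber loss at 1550 nm, as quoted above
AMPLIFIER_GAIN_DB = 18.0      # assumed gain available from one EDFA stage
DESIGN_MARGIN_DB = 3.0        # assumed margin for splices, aging, and repairs

def span_loss_db(length_km: float) -> float:
    """Optical loss accumulated over one span of fiber."""
    return ATTENUATION_DB_PER_KM * length_km

# Longest span the assumed amplifier gain can compensate, after margin:
max_span_km = (AMPLIFIER_GAIN_DB - DESIGN_MARGIN_DB) / ATTENUATION_DB_PER_KM

print(f"Loss over an 80 km span: {span_loss_db(80):.1f} dB")
print(f"Max span with {AMPLIFIER_GAIN_DB} dB gain and {DESIGN_MARGIN_DB} dB margin: "
      f"{max_span_km:.0f} km")
# -> 16.0 dB over 80 km, and a ~75 km maximum span under these assumptions,
#    consistent with the 60-100 km repeater spacing described above.
```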

Each repeater contains Erbium-Doped Fiber Amplifiers (EDFAs). These are short segments of fiber doped with erbium ions that, when energized by a pump laser, amplify the optical signal directly — no conversion to electrical signals required. This "all-optical" amplification is crucial because it means the repeater works regardless of the modulation format or data rate, enabling future capacity upgrades without replacing hardware on the ocean floor.

A typical transoceanic cable contains 50 to 150 repeaters, each housed in a pressure-resistant cylindrical casing about 1 meter long and weighing around 300 kg. They are powered by the constant DC current running through the copper conductor, with shore-end power feed equipment at each landing station. The repeaters are designed for a service life of at least 25 years — because replacing one means pulling up the cable from potentially 4,000 meters of water.

Repeater design is one of the most critical engineering challenges in submarine cable systems. Each repeater adds a tiny amount of noise to the signal (Amplified Spontaneous Emission, or ASE noise). Over a 6,000-kilometer crossing with 80+ repeaters, this accumulated noise defines the ultimate capacity of the system. This is why submarine cable engineers obsess over signal-to-noise ratio budgets and why advances in coherent modulation technology — which can extract more data from a noisier signal — directly translate to increased cable capacity.
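The way ASE noise eats into the signal-to-noise budget can be approximated with a widely used rule-of-thumb OSNR formula (referenced to a 0.1 nm noise bandwidth at 1550 nm). The launch power, span loss, and amplifier noise figure below are assumed values chosen only to show the trend, not figures for any particular cable.

```python
import math

def osnr_db(launch_power_dbm: float, span_loss_db: float,
            noise_figure_db: float, num_spans: int) -> float:
    """Rule-of-thumb OSNR after a chain of identical amplified spans.

    Uses the common approximation
        OSNR ~= 58 + P_ch - L_span - NF - 10*log10(N)
    where 58 dB corresponds to the ASE noise floor in a 0.1 nm
    reference bandwidth at 1550 nm.
    """
    return (58.0 + launch_power_dbm - span_loss_db - noise_figure_db
            - 10.0 * math.log10(num_spans))

# Assumed illustrative values: 0 dBm per channel, 16 dB span loss, 5 dB noise figure.
for spans in (10, 40, 80, 120):
    print(f"{spans:>3} spans -> OSNR ~ {osnr_db(0.0, 16.0, 5.0, spans):.1f} dB")
# Every doubling of the span count costs about 3 dB of OSNR, which is why long
# transoceanic systems depend on modulation formats and forward error correction
# that tolerate a low signal-to-noise ratio.
```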

DWDM: Multiplying Capacity with Color

Dense Wavelength Division Multiplexing (DWDM) is the technology that transformed submarine cables from carrying a few gigabits per second to carrying tens of terabits. The principle is conceptually simple: send many different wavelengths (colors) of laser light simultaneously through the same fiber, each carrying an independent data stream.

Modern submarine DWDM systems use the C-band (approximately 1530-1565 nm wavelength range) and increasingly the L-band (approximately 1565-1625 nm). Within the C-band alone, a system can carry 100 or more wavelength channels, each spaced 50 GHz or 37.5 GHz apart using a flexible grid.

Each wavelength channel on a modern system uses coherent modulation — typically 16QAM or 64QAM with polarization multiplexing — to achieve data rates of 400 Gbps to 800 Gbps per channel. The transmitter encodes data onto both the amplitude and phase of the optical signal, in both polarization states, effectively creating four independent data lanes per wavelength. The receiver uses a coherent detector with a local oscillator laser and sophisticated digital signal processing (DSP) to recover the data despite accumulated noise and fiber impairments.

The math works out impressively. A modern submarine fiber pair using C+L band DWDM with, say, 150 channels at 400 Gbps each yields 60 Tbps per fiber pair. A 16-fiber-pair cable then delivers nearly 1 Petabit per second of total capacity. These numbers are not theoretical — they represent what the latest generation of cables being deployed in the mid-2020s can achieve.
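The capacity arithmetic from the last two paragraphs can be written out directly. The symbol rate, modulation, FEC overhead, channel count, and fiber-pair count below are illustrative assumptions in line with the figures quoted above, not the specification of any real cable.

```python
# Back-of-the-envelope DWDM capacity math (illustrative assumptions).
SYMBOL_RATE_GBAUD = 64        # assumed symbol rate per optical carrier
BITS_PER_SYMBOL = 4           # 16QAM carries 4 bits per symbol
POLARIZATIONS = 2             # polarization multiplexing doubles throughput
FEC_OVERHEAD = 0.20           # assume ~20% of raw bits spent on error correction

per_channel_gbps = (SYMBOL_RATE_GBAUD * BITS_PER_SYMBOL * POLARIZATIONS
                    * (1 - FEC_OVERHEAD))     # net rate per wavelength

CHANNELS_PER_PAIR = 150       # assumed C+L band channel count
FIBER_PAIRS = 16

per_pair_tbps = per_channel_gbps * CHANNELS_PER_PAIR / 1000
total_tbps = per_pair_tbps * FIBER_PAIRS

print(f"Net rate per wavelength : ~{per_channel_gbps:.0f} Gbps")
print(f"Per fiber pair          : ~{per_pair_tbps:.0f} Tbps")
print(f"Whole cable             : ~{total_tbps:.0f} Tbps")
# -> roughly 410 Gbps per channel, ~61 Tbps per pair, ~980 Tbps per cable,
#    matching the "nearly 1 Petabit per second" figure in the text.
```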

Landing Stations: Where the Ocean Meets the Network

At each end of a submarine cable is a cable landing station (CLS) — the facility where the cable transitions from the marine environment to terrestrial fiber networks. Landing stations are critical infrastructure, and their design reflects this importance.

A cable arrives at the shore through a beach manhole — a reinforced underground vault typically located just above the high-tide line. From there, conduits carry the cable to the landing station building, which may be a few hundred meters to several kilometers inland. The cable is terminated at submarine line terminal equipment (SLTE), which contains the DWDM transponders, power feed equipment, and network management systems.

The power feed equipment is particularly interesting. It supplies the DC voltage (up to 15 kV) that powers all repeaters along the cable. For a long system, both landing stations feed power simultaneously — one at positive voltage, the other at negative — with a virtual ground point in the middle of the ocean. If one end's power fails, the other can sometimes sustain the system at reduced capacity by powering a subset of repeaters.
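To make the double-ended feeding arrangement concrete, the sketch below walks through a simplified voltage budget. Every number in it, from conductor resistance and feed current to per-repeater drop, cable length, and repeater count, is an assumed round value for illustration; real power-feed designs are far more detailed.

```python
# Simplified power-feed voltage budget (all values are illustrative assumptions).
CABLE_LENGTH_KM = 6_500
NUM_REPEATERS = 80
CONDUCTOR_OHMS_PER_KM = 0.8    # assumed resistance of the copper conductor
FEED_CURRENT_A = 1.0           # submarine systems run a constant series current
REPEATER_DROP_V = 45.0         # assumed voltage consumed by each repeater

resistive_drop_v = CABLE_LENGTH_KM * CONDUCTOR_OHMS_PER_KM * FEED_CURRENT_A
repeater_drop_v = NUM_REPEATERS * REPEATER_DROP_V
total_voltage_v = resistive_drop_v + repeater_drop_v

print(f"Resistive drop along the conductor : {resistive_drop_v:,.0f} V")
print(f"Drop across all repeaters          : {repeater_drop_v:,.0f} V")
print(f"Total voltage the system needs     : {total_voltage_v:,.0f} V")
# With both landing stations feeding (one positive, one negative), each end
# supplies roughly half of this, and the virtual ground sits near the midpoint.
print(f"Approximate feed per end           : +/-{total_voltage_v / 2:,.0f} V")
```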

Landing stations connect to backhaul networks — high-capacity terrestrial fiber links that carry traffic to major cities, data centers, and Internet Exchange Points. The choice of landing station location is therefore partly determined by proximity to population centers and network infrastructure. Many landing stations are located near major metro areas; for example, several transatlantic cables land in New Jersey and on the Virginia coast, giving them short backhaul paths to the New York metro exchanges (including DE-CIX New York) and to the massive data center cluster in Ashburn, Virginia.

Backhaul connectivity matters enormously for routing. A cable with landing stations well-connected to IXPs and peering hubs will attract more traffic because networks can reach more destinations with fewer AS hops. This is reflected in the BGP AS paths — a cable landing at a well-connected hub produces shorter, more efficient routes.

Cable Ships: The Installation Fleet

Submarine cables are installed by a small fleet of specialized vessels — cable ships. Only about 40 cable-laying vessels exist worldwide, operated by companies like SubCom (US), Alcatel Submarine Networks (France), NEC (Japan), and a handful of others. These ships are among the most specialized vessels afloat.

A cable ship carries thousands of kilometers of cable in massive circular tanks below deck. The cable is paid out over the stern through a series of sheaves (pulleys) and tensioners that control the rate of deployment. In deep water, the cable free-falls to the ocean floor under its own weight. In shallow water, the ship deploys a cable plow — a remotely operated sled towed behind the ship that cuts a trench in the seabed and simultaneously lays the cable into it, with the trench walls collapsing back to bury the cable.

Laying speed depends on water depth and seabed conditions. In deep, flat ocean, a cable ship might lay 150-200 km of cable per day. Near shore, where burial and careful navigation around obstacles is required, progress can slow to 5-10 km per day. A major transoceanic cable installation takes several months and costs hundreds of millions of dollars.
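A quick back-of-the-envelope check of those figures shows why installation stretches into months. The route split and daily rates below are assumed round numbers within the ranges quoted above.

```python
# Rough installation-time arithmetic (assumed route split and daily laying rates).
DEEP_WATER_KM, DEEP_RATE_KM_PER_DAY = 6_000, 175     # mid-range of 150-200 km/day
NEAR_SHORE_KM, SHORE_RATE_KM_PER_DAY = 300, 7        # mid-range of 5-10 km/day

deep_days = DEEP_WATER_KM / DEEP_RATE_KM_PER_DAY     # ~34 days of deep-water laying
shore_days = NEAR_SHORE_KM / SHORE_RATE_KM_PER_DAY   # ~43 days of near-shore burial

print(f"Deep-water laying : ~{deep_days:.0f} days")
print(f"Near-shore burial : ~{shore_days:.0f} days")
print(f"Laying alone      : ~{(deep_days + shore_days) / 30:.1f} months")
# Weather windows, port calls, splicing, and branch installations push the real
# schedule well beyond this laying-only estimate.
```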

The cable-laying process begins with an extensive marine survey that maps the seabed along the planned route using multibeam sonar and sub-bottom profilers. The survey identifies hazards (rocky outcrops, steep slopes, submarine canyons, active fault lines, areas of heavy fishing or anchoring) and determines the optimal route. Routes are planned to avoid these hazards, which is why cables rarely follow a straight line between landing points — they zigzag around terrain features, staying on flat, soft seabed wherever possible.

Route Planning and Hazards

Planning a submarine cable route is a multi-year process involving oceanographic surveys, geological analysis, geopolitical considerations, and regulatory approvals from every country whose territorial waters the cable enters.

Natural Hazards

The ocean floor presents numerous threats to cable integrity:

Human Hazards

The majority of cable faults — roughly 70-80% — are caused by human activity, not natural events:

Geopolitical Routing

Cable routes are also shaped by geopolitics. Cables must obtain landing rights from sovereign nations, and some countries have complicated or time-consuming permitting processes. Sanctions regimes affect where cables can land. Strategic considerations influence route diversity — for example, a nation might want cables that reach different continents via different ocean basins to avoid single points of failure. The routing of cables around or through contested waters (South China Sea, Arctic) is inherently political.

Latency: Speed of Light Math

One of the most important properties of submarine cables for network engineers is latency — the time it takes for a signal to traverse the cable. This matters enormously for BGP convergence, real-time applications, and financial trading.

Light in a vacuum travels at c = 299,792 km/s. But light in glass fiber travels slower — the refractive index of silica glass is approximately 1.467 at the 1550nm wavelength used in submarine systems. This means the speed of light in fiber is roughly:

c / n = 299,792 / 1.467 = ~204,360 km/s

This gives us a simple formula for one-way propagation delay:

delay = distance / 204,360 km/s

For real cable systems, the relevant distance is the cable route length, not the great-circle distance, since cables rarely follow the shortest path between landing points.
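The short sketch below applies the formula to a few example route lengths. The distances are illustrative round numbers, not the measured lengths of any particular cable; only the propagation speed comes from the refractive-index math above.

```python
# One-way fiber propagation delay: delay = distance / (c / n).
SPEED_IN_FIBER_KM_PER_S = 299_792 / 1.467   # ~204,360 km/s, from the formula above

def one_way_delay_ms(route_km: float) -> float:
    """Propagation delay in milliseconds for a given cable route length."""
    return route_km / SPEED_IN_FIBER_KM_PER_S * 1000

# Illustrative route lengths (assumed round numbers, not actual cable lengths):
for label, km in [("~6,500 km transatlantic-class route", 6_500),
                  ("~9,500 km transpacific-class route", 9_500),
                  ("~14,000 km multi-segment route", 14_000)]:
    delay = one_way_delay_ms(km)
    print(f"{label}: {delay:.1f} ms one-way, {2 * delay:.1f} ms round trip")
# -> roughly 32 / 46 / 69 ms one-way under these assumptions.
```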

These are fiber propagation delays only. Real-world latency adds a few milliseconds for equipment processing at each end, DWDM transponder latency, and any regeneration or switching at intermediate points. But the dominant factor is always the speed of light in glass — physics sets the floor, and no amount of engineering can beat it.

For financial trading firms, these latencies are critical. A transatlantic route even 100 km shorter than a competitor's translates to roughly 0.5 ms less one-way latency, which is why companies have invested in cables optimized for the shortest practical path between financial centers like New York, London, and Tokyo. The Hibernia Express cable (built by Hibernia Networks, later acquired by GTT) was specifically marketed on its low-latency route between New York and London.

In BGP terms, latency affects path selection. When a network has multiple paths available (for example, via different transatlantic cables), the latency difference can influence which path is preferred for latency-sensitive traffic, even though BGP's default decision process does not directly consider latency.

Major Cable Systems

The global submarine cable network has grown from a handful of transatlantic telegraph cables in the 1850s to a dense web connecting every inhabited continent. Here are some of the most significant modern systems:

Transatlantic

Transpacific

Circumnavigating Africa

Intra-Asia and Global

A clear trend is visible: hyperscale cloud providers and content companies — Google, Meta, Microsoft, and Amazon — are now the dominant investors in new submarine cable construction. As of the mid-2020s, these four companies own or have significant investment stakes in cables carrying the majority of new transoceanic capacity. This represents a fundamental shift from the traditional model where cables were built by consortia of telecommunications carriers.

Ownership Models

Submarine cables have historically been funded through three models, and the balance among them has shifted dramatically:

Consortium Cables

The traditional model. A group of telecommunications carriers jointly fund the construction of a cable, with each receiving a share of the fiber pairs or a guaranteed amount of capacity. A consortium cable might have 20 or more members, each contributing to the cost in proportion to their capacity allocation. Examples include the SEA-ME-WE series and the Asia-America Gateway (AAG). This model spreads risk and cost but requires complex governance — decisions about upgrades, repairs, and routing must be agreed upon by all members.

Private Cables

A single company funds the entire cable for its own use. Google pioneered this approach among content providers with its Curie cable and has since built Dunant, Equiano, Grace Hopper, and others. Meta, Microsoft, and Amazon have followed. Private cables give the owner full control over capacity, routing, and maintenance schedules. They also avoid the governance overhead of consortium negotiations and allow the owner to provision capacity exactly when and where they need it.

Carrier-Neutral / Open Access

A cable is built by a company (often a dedicated submarine cable operator like Aqua Comms, EllaLink, or Crosslake) that sells capacity to multiple customers on commercial terms. This is similar to a real estate developer building office space for lease rather than for their own use. The cable operator handles all maintenance and operations, selling individual fiber pairs, wavelengths, or bandwidth units to customers.

The shift toward private cables owned by hyperscalers has profound implications for internet architecture. When Google builds its own cable from the US to South America, traffic between Google's servers and South American users can flow entirely over Google-owned infrastructure — from data center to submarine cable to landing station to local network. This vertical integration reduces the number of peering and transit relationships needed, potentially simplifying BGP routing but also concentrating control over physical infrastructure in fewer hands.

Route Diversity and Redundancy

No single submarine cable can be considered reliable enough to carry critical traffic on its own. Cables fail — frequently. The International Cable Protection Committee records an average of roughly 100-150 cable faults per year worldwide. Most are repaired within one to two weeks, but during that time, traffic must reroute.

Network engineers achieve resilience through route diversity — ensuring that traffic between two regions can flow over multiple independent cables, ideally following geographically separate routes. This is where submarine cable infrastructure directly intersects with BGP routing.

[Diagram: transatlantic route diversity — multiple cable paths between North American landings (NJ, VA, FL) and European landings (UK, FR, ES); when one route faults, BGP reconverges in seconds and traffic shifts to the surviving cables via alternate AS paths.]

Consider the transatlantic corridor. A network might have capacity on three cables: one landing in northern Europe via a northern great-circle route, another connecting the US mid-Atlantic coast to France, and a third taking a southern route to Spain. Under normal conditions, traffic is distributed across all three. If the central cable is cut, BGP sessions on that path go down, the affected prefixes are withdrawn from the cut cable's landing station, and within seconds, BGP reconverges to route traffic over the remaining two cables.

Key diversity strategies include:

Cable Cuts and Repairs

When a submarine cable breaks, the effects ripple through the global routing table within seconds. Here is what happens, from the physical event through to BGP reconvergence:

Detection

Cable operators monitor their systems continuously from Network Operations Centers (NOCs). When a fiber goes dark, the SLTE equipment at the landing stations detects the loss of signal immediately. More sophisticated monitoring uses Optical Time Domain Reflectometry (OTDR) — sending a pulse of light down the fiber and measuring the reflection. The time delay of the reflected pulse reveals exactly where the break occurred, typically to within a few hundred meters over thousands of kilometers of cable.
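The distance-to-fault arithmetic behind OTDR is the same speed-of-light-in-glass math used in the latency section, halved because the pulse travels out and back. The sketch below is illustrative: the group index is the 1.467 figure used earlier, and the round-trip time is an assumed example value.

```python
# Locating a fiber break from an OTDR reflection (illustrative example).
SPEED_IN_FIBER_KM_PER_S = 299_792 / 1.467   # same group velocity as above

def fault_distance_km(round_trip_seconds: float) -> float:
    """Distance to a reflective event from the two-way travel time of the pulse."""
    return SPEED_IN_FIBER_KM_PER_S * round_trip_seconds / 2

# Example: a reflection arriving 41.2 ms after the pulse was launched (an assumed
# value) places the break roughly 4,200 km from the test end.
print(f"{fault_distance_km(41.2e-3):,.0f} km")
```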

BGP Impact

The routing impact of a cable cut depends on what BGP sessions were running over that cable. When the cable fails:

  1. Link layer goes down — The physical interfaces at the landing stations lose light, triggering interface-down events.
  2. BGP hold timer expires — If the BGP session between routers at opposite landing stations relied on that cable, the session drops. With default hold timers (commonly 90 or 180 seconds, depending on the implementation), detection can take minutes; with BFD (Bidirectional Forwarding Detection), failure detection is sub-second and the BGP neighbor relationship is declared dead almost immediately.
  3. Route withdrawal — The router withdraws all prefixes that were learned through the failed session, sending BGP UPDATE messages with withdrawn routes to all its other peers.
  4. Reconvergence — Networks that were using the failed cable's path select their next-best route from their BGP tables. Traffic shifts to alternative cables. This process typically completes in seconds to low tens of seconds for well-engineered networks, though full global convergence can take longer as withdrawal messages propagate hop by hop through the internet's AS path graph.
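The sketch below models steps 3 and 4 in miniature: a router holds two candidate paths for a prefix, withdraws the one learned over the failed cable, and falls back to the next-best path. It is a toy illustration of the selection logic only; the prefix, AS numbers, and the simplified two-attribute comparison are invented for the example, and real BGP implementations run the full decision process over many more attributes.

```python
from dataclasses import dataclass, field

@dataclass
class Path:
    next_hop: str
    as_path: list[int]
    local_pref: int = 100

@dataclass
class Rib:
    """Toy routing table: prefix -> candidate paths."""
    routes: dict[str, list[Path]] = field(default_factory=dict)

    def best(self, prefix: str) -> Path | None:
        # Simplified decision process: highest local preference, then shortest AS path.
        return max(self.routes.get(prefix, []),
                   key=lambda p: (p.local_pref, -len(p.as_path)),
                   default=None)

    def withdraw(self, prefix: str, next_hop: str) -> None:
        # Drop every path learned from the neighbor whose session just failed.
        self.routes[prefix] = [p for p in self.routes.get(prefix, [])
                               if p.next_hop != next_hop]

rib = Rib({"203.0.113.0/24": [
    Path("cable-A-landing", as_path=[64500, 64510]),         # learned over the cut cable
    Path("cable-B-landing", as_path=[64501, 64520, 64510]),  # longer backup path
]})

print("before cut:", rib.best("203.0.113.0/24").next_hop)   # -> cable-A-landing
rib.withdraw("203.0.113.0/24", "cable-A-landing")            # session over cable A goes down
print("after cut: ", rib.best("203.0.113.0/24").next_hop)   # -> cable-B-landing
```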

The 2008 double cable cut in the Mediterranean (the FLAG and SEA-ME-WE 4 cables were cut within days of each other, likely by dragging ship anchors) caused widespread disruption in the Middle East and South Asia. India lost an estimated 50-60% of its westbound internet capacity. Traffic that normally flowed via the Mediterranean was forced to reroute via the Pacific or via terrestrial paths, significantly increasing latency and congestion. BGP monitoring systems showed massive route instability as networks scrambled to find alternative paths.

Repair Operations

Repairing a submarine cable is a logistically complex operation that typically takes 1-3 weeks from fault identification to traffic restoration:

  1. Fault localization — OTDR measurements from both ends pinpoint the break location. The cable owner dispatches a repair ship.
  2. Ship mobilization — Repair vessels are stationed at strategic ports around the world, but even so, it can take days for a ship to reach the fault location. Weather conditions must also be favorable — operations cannot proceed in heavy seas.
  3. Cable recovery — The ship uses a grapnel — a specialized hook dragged along the ocean floor — to snag and raise the cable. In deep water, this process can take many hours, as the cable must be raised from potentially 3,000-5,000 meters depth.
  4. Cable repair — The damaged section is cut out. New cable is spliced in using a cable joint — a meticulous process of aligning and fusing individual fiber pairs, testing each splice for loss, then sealing the joint in a pressure-resistant housing. Because the repair adds cable length (the new section plus the slack needed for the splice), the repaired cable is longer than the original, and the excess is laid in a gentle loop or "omega" on the seabed.
  5. Testing and restoration — Each fiber pair is tested end-to-end. Once the cable operator confirms all fibers are operational, traffic is restored and BGP sessions are re-established.

Cable repair is expensive — a single repair operation typically costs $1-3 million, including ship time, crew, and materials. The small number of repair vessels worldwide means that during periods of multiple concurrent faults (which unfortunately do occur), repairs can be delayed as ships are dispatched to higher-priority breaks first.

Impact on BGP Routing

Submarine cables are the physical substrate upon which intercontinental BGP routing operates. The relationship between physical cable infrastructure and logical BGP routing runs deep:

AS Path Length and Cable Topology

The submarine cable network's topology directly influences AS path lengths. Regions well-served by cables with direct connections to major hubs have shorter AS paths to the rest of the internet. Regions with limited cable connectivity must route traffic through intermediate countries, adding AS hops. For example, many African countries historically had to route traffic to Europe and back just to reach other African countries, because the available submarine cables all terminated in Europe. The 2Africa cable is specifically designed to address this by providing direct connectivity between African nations.

Latency-Aware Traffic Engineering

Large networks use BGP communities and traffic engineering to select specific submarine cable paths based on latency requirements. A network might tag routes learned via a low-latency northern transatlantic cable with a community value that causes latency-sensitive traffic to prefer that path, while bulk traffic uses a higher-capacity but slightly higher-latency southern route.
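As a toy illustration of that kind of policy, the snippet below tags candidate routes with community values and lets latency-sensitive traffic prefer the path carrying a "low-latency" tag. The community values, route names, and selection rule are invented for the example; real deployments express this in router policy configuration, and the meaning of each community is defined per network.

```python
# Toy illustration of community-driven path preference (all values are invented).
LOW_LATENCY_COMMUNITY = "64500:100"   # hypothetical tag on the northern cable's routes

routes = [
    {"via": "northern-cable", "communities": {"64500:100"}, "capacity_gbps": 200},
    {"via": "southern-cable", "communities": {"64500:200"}, "capacity_gbps": 400},
]

def pick_path(latency_sensitive: bool) -> dict:
    """Prefer the low-latency tagged route for sensitive traffic, capacity otherwise."""
    if latency_sensitive:
        tagged = [r for r in routes if LOW_LATENCY_COMMUNITY in r["communities"]]
        if tagged:
            return tagged[0]
    # Bulk traffic, or no tagged path available: take the highest-capacity route.
    return max(routes, key=lambda r: r["capacity_gbps"])

print(pick_path(latency_sensitive=True)["via"])    # -> northern-cable
print(pick_path(latency_sensitive=False)["via"])   # -> southern-cable
```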

CDN and Content Delivery

The global footprint of CDN edge servers is partly determined by submarine cable landing points. Placing a CDN node near a cable landing station ensures low-latency access for users on the far side of that cable. This is why CDN providers study cable maps carefully when planning their infrastructure deployments.

Convergence During Failures

When a major cable fails, the global BGP routing table can experience a storm of updates as networks switch to alternative paths. These convergence events are visible in BGP monitoring tools. A looking glass can show the change in real-time: AS paths to affected prefixes will suddenly include different transit networks, and the path lengths may increase as traffic is forced through longer routes. This is one of the most practical uses of a BGP looking glass — observing how physical cable events manifest as routing changes.

The Future: Longer, Faster, Deeper

Submarine cable technology continues to advance on multiple fronts:

The economics of submarine cables are also evolving. As hyperscale cloud and content companies become the dominant builders of new cables, the traditional telecom consortium model is giving way to private, vertically integrated infrastructure. This consolidation means that a growing portion of the internet's physical layer is owned by the same companies that generate and consume the majority of internet traffic.

See It in Action

The submarine cable network is the foundation that makes global BGP routing work. You can observe the effects of cable topology directly by examining routes to networks in different regions:

Try looking up IP addresses in different regions with the BGP Looking Glass tool and compare the AS paths. Notice how paths between continents typically traverse the networks of companies that operate submarine cables — these are the transit providers whose physical infrastructure connects the world's autonomous systems across the oceans.
