The Akamai Edge DNS Outage (July 2021)

On July 22, 2021, at approximately 15:46 UTC, a significant portion of the internet went dark. Websites for major airlines, banks, gaming platforms, and logistics companies became unreachable. The cause was not a cyberattack or a natural disaster. It was a software configuration update pushed to Akamai's (AS20940) Edge DNS platform that triggered a latent bug, causing DNS resolution failures for thousands of domains that depended on Akamai's authoritative DNS infrastructure.

The outage lasted approximately one hour before Akamai rolled back the change. But in that hour, the incident exposed a structural vulnerability in how the modern internet is built: too many critical services depend on too few infrastructure providers. This was the third major CDN/DNS provider outage in 2021 alone, following Fastly in June and Cloudflare earlier in July.

What is Akamai Edge DNS?

Akamai Technologies is one of the oldest and largest content delivery and internet infrastructure companies. Founded in 1998 at MIT, Akamai operates a massive globally distributed network that handles an estimated 30% of all web traffic. Its infrastructure spans more than 4,100 Points of Presence in over 130 countries.

Edge DNS is Akamai's authoritative DNS service. When a company delegates its domain's DNS to Akamai, Edge DNS becomes responsible for answering all DNS queries for that domain. If a user anywhere in the world types delta.com into their browser, the query eventually reaches Akamai's Edge DNS servers, which return the IP address needed to connect.

This is different from a recursive DNS resolver like Google's 8.8.8.8 or Cloudflare's 1.1.1.1. Those resolvers are the intermediaries that look up answers on behalf of end users. Akamai's Edge DNS sits at the other end of the chain: it is the authoritative source of truth for the domains it hosts. When the authoritative server fails, no recursive resolver in the world can get an answer, because the answer simply does not exist anywhere else.

[Diagram: user's browser -> recursive resolver -> root/TLD referral -> Akamai Edge DNS (FAILURE) -> response: SERVFAIL]
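The failure path above can be sketched as a toy model in Python. The domain, IP address, and zone contents are illustrative placeholders (not real Akamai data), and the root/TLD referral steps are elided:

```python
# Toy model of the DNS resolution chain during the outage.
# Names and addresses are illustrative; 203.0.113.10 is from the
# documentation range, not a real Delta address.

def authoritative_lookup(domain, healthy):
    """Akamai Edge DNS: returns an answer, or SERVFAIL when broken."""
    zone = {"delta.com": "203.0.113.10"}
    if not healthy:
        return ("SERVFAIL", None)
    return ("NOERROR", zone.get(domain))

def recursive_resolve(domain, authoritative_healthy):
    """Recursive resolver: it ultimately asks the authoritative server
    and can only relay that server's failure -- no other source of
    truth for the domain exists."""
    rcode, answer = authoritative_lookup(domain, authoritative_healthy)
    if rcode != "NOERROR":
        return ("SERVFAIL", None)
    return (rcode, answer)

print(recursive_resolve("delta.com", authoritative_healthy=True))   # ('NOERROR', '203.0.113.10')
print(recursive_resolve("delta.com", authoritative_healthy=False))  # ('SERVFAIL', None)
```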

What Happened: The Timeline

Akamai's post-incident summary described the root cause as a software configuration update to its Edge DNS service that triggered a bug in the DNS software. The update was part of routine maintenance -- the kind of change that infrastructure providers push regularly to their global networks. But this particular update interacted with an existing code defect that had gone undetected.

The timeline unfolded rapidly:

- 15:46 UTC -- The configuration update triggers the latent bug; DNS resolution failures begin.
- 15:50 UTC -- Users report widespread outages across affected sites.
- 16:00 UTC -- Akamai identifies the root cause and begins rolling back the change.
- 16:45 UTC -- The rollback completes and recovery begins.
- 17:00 UTC -- Full service is restored.

The total impact window was roughly one hour, but the effects were not uniform. Some domains recovered within minutes as the rollback propagated, while others took longer depending on DNS TTL values and caching behavior at recursive resolvers.

Who Was Affected

The outage affected a wide swath of the internet economy. Akamai's Edge DNS customers include some of the largest companies in the world. During the outage, services including Delta Air Lines, Southwest Airlines, FedEx, the PlayStation Network, and several major banks experienced partial or complete DNS resolution failures.

The breadth of the impact illustrates how deeply Akamai is embedded in the internet's infrastructure. When a provider handling 30% of global web traffic has a DNS failure, the blast radius is enormous.

The Technical Failure

Akamai did not publish a detailed technical post-mortem with specific code-level root cause analysis. What they disclosed was that a software configuration update triggered a bug that had existed in the Edge DNS platform but had never been exercised by prior configurations.

This is a common pattern in distributed systems failures. The bug was latent -- it existed in the codebase but required a specific combination of configuration values to manifest. Normal testing and validation processes did not catch it because the triggering conditions had never been tested together. When the configuration update changed the operating parameters, the bug activated and caused DNS servers to fail.
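A hypothetical sketch of this failure class (not Akamai's actual code or configuration): a defect that stays dormant until a specific, previously untested combination of configuration values is deployed.

```python
# Hypothetical illustration of a latent bug. The parameter names and
# the buggy condition are invented for the example.

def build_response_buffer(record_count, compression_enabled):
    # Latent defect: the size calculation is wrong only when compression
    # is enabled AND the record count is zero -- a combination that no
    # previously deployed configuration ever exercised.
    if compression_enabled and record_count == 0:
        size = -1                      # invalid size: the dormant bug
    else:
        size = 12 + 16 * record_count  # header + fixed-size records
    if size < 0:
        raise RuntimeError("internal error -> server answers SERVFAIL")
    return bytearray(size)

# Every prior configuration stayed on the safe code path:
print(len(build_response_buffer(3, compression_enabled=False)))  # 60

# The new configuration update hits the untested combination:
try:
    build_response_buffer(0, compression_enabled=True)
except RuntimeError as e:
    print(e)
```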

In DNS terms, the failure mode was straightforward: Edge DNS servers stopped returning valid responses. Instead of answering queries with the correct IP addresses for hosted domains, they returned SERVFAIL (Server Failure) responses. A SERVFAIL tells the querying recursive resolver that the authoritative server encountered an error and cannot provide an answer.

When a recursive resolver gets a SERVFAIL from all authoritative servers for a domain, it has no choice but to return that failure to the end user's application. The browser then shows an error page. The domain effectively ceases to exist on the internet for the duration of the failure.
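The SERVFAIL signal itself is just a 4-bit response code in the DNS message header: RFC 1035 defines SERVFAIL as RCODE 2. A minimal sketch of building and parsing that header with Python's struct module:

```python
import struct

# Minimal DNS response header (RFC 1035, section 4.1.1).
# The RCODE occupies the low 4 bits of the flags word.

SERVFAIL = 2  # RCODE 2: server failure

def make_response_header(query_id, rcode):
    qr = 0x8000                      # QR bit set: this is a response
    flags = qr | (rcode & 0x000F)    # RCODE in the low 4 bits
    # Fields: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT
    return struct.pack("!HHHHHH", query_id, flags, 1, 0, 0, 0)

def parse_rcode(header):
    _, flags, *_ = struct.unpack("!HHHHHH", header)
    return flags & 0x000F

hdr = make_response_header(query_id=0x1234, rcode=SERVFAIL)
print(parse_rcode(hdr))  # 2 -> SERVFAIL
```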

[Timeline diagram: 15:46 config update triggers bug; 15:50 widespread outage reports; 16:00 root cause found, rollback begins; 16:45 rollback complete, recovery begins; 17:00 full service restored]

Why DNS Failures Are So Devastating

DNS is often called the phone book of the internet, but that metaphor understates its criticality. DNS is more like the nervous system. Every single action on the internet -- loading a webpage, sending an email, connecting to an API, authenticating a user -- begins with a DNS lookup. When DNS fails, nothing works.

What makes authoritative DNS failures particularly severe is that there is no fallback. When a CDN fails to serve cached content, the request can potentially fall back to an origin server. When a BGP route is withdrawn, traffic may reroute through alternative paths. But when an authoritative DNS server fails, the domain it hosts has no alternative source of truth.

Recursive resolvers can mitigate the impact briefly through cached records. If a resolver recently looked up delta.com and cached the result with a TTL of 300 seconds, users querying that resolver within those 5 minutes will still get an answer. But once the cached TTL expires and the resolver tries to refresh the record from the authoritative server, it hits the same SERVFAIL, and the domain goes dark for users of that resolver too.

This is why DNS TTL values matter. Domains with very short TTLs (30-60 seconds) lost resolution almost immediately. Domains with longer TTLs (300+ seconds) had a brief grace period from cached records. But no TTL is long enough to survive an hour-long authoritative DNS outage without some users being affected.
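This caching behavior can be sketched as a toy resolver model. The record value and the 300-second TTL are illustrative assumptions:

```python
# Toy model: a recursive resolver's cache buys a short grace period
# during an authoritative outage. Times are in seconds; the outage
# begins at t=0.

class Resolver:
    def __init__(self):
        self.cache = {}  # name -> (answer, expires_at)

    def resolve(self, name, now, authoritative_up):
        entry = self.cache.get(name)
        if entry and entry[1] > now:
            return entry[0]              # served from cache
        if not authoritative_up:
            return "SERVFAIL"            # cache expired, refresh impossible
        answer = "203.0.113.10"          # illustrative record, TTL 300
        self.cache[name] = (answer, now + 300)
        return answer

r = Resolver()
r.resolve("delta.com", now=-10, authoritative_up=True)            # cached pre-outage
print(r.resolve("delta.com", now=100, authoritative_up=False))    # 203.0.113.10 (cache)
print(r.resolve("delta.com", now=400, authoritative_up=False))    # SERVFAIL (TTL expired)
```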

The 2021 Infrastructure Outage Pattern

The Akamai outage did not happen in isolation. It was the third major infrastructure provider outage in less than two months, forming a pattern that forced the industry to confront the concentration of the internet's critical services.

Three Major CDN/DNS Outages in 2021

  Provider    ASN      Date           Root Cause                              Duration  Affected
  Fastly      AS54113  June 8, 2021   Customer config triggered CDN edge bug  ~1 hour   Reddit, NYT, gov.uk, Twitch, Amazon, GitHub
  Cloudflare  AS13335  July 17, 2021  BGP route leak via Verizon's backbone   ~30 min   Cloudflare customers in some regions
  Akamai      AS20940  July 22, 2021  DNS config triggered software bug       ~1 hour   Delta, FedEx, Southwest, banks, PSN

June 8: Fastly CDN Outage

Fastly (AS54113) experienced a global outage when a customer's valid configuration change triggered an undiscovered software bug in Fastly's edge servers. The outage took down Reddit, The New York Times, the UK government's gov.uk, Twitch, Amazon, and many others. Fastly restored service within about an hour. The root cause pattern is strikingly similar to Akamai's: a routine configuration change that activated a latent bug.

July 17: Cloudflare BGP Route Leak

Cloudflare (AS13335) was partially affected by a BGP route leak that caused some of its traffic to be misrouted. A network misconfiguration at a backbone provider caused BGP routes to be improperly propagated, resulting in connectivity issues for Cloudflare customers in certain regions. The incident lasted roughly 30 minutes.

July 22: Akamai Edge DNS Outage

Just five days after the Cloudflare incident, Akamai's Edge DNS failure brought down another swath of the internet. The clustering of three incidents from three different providers using three different failure modes underscored that the problem was systemic, not specific to any one company.

DNS Provider Concentration Risk

The internet's DNS infrastructure is remarkably concentrated. A handful of managed DNS providers -- Akamai, Cloudflare, AWS Route 53, Google Cloud DNS, Dyn (now Oracle) -- host the authoritative DNS for millions of domains. This concentration creates single points of failure that can take down large portions of the internet simultaneously.

The concentration problem exists at every level of the DNS hierarchy. The root servers, while operated by multiple organizations, share the same root zone. The TLD servers for .com are all operated by Verisign. And below the TLD level, a few large managed DNS providers dominate the market.

[Diagram: single-provider DNS, where every domain on Provider X (banks, airlines, FedEx, PSN, retail) fails together, versus multi-provider DNS, where delta.com lists NS records at both Provider X and Provider Y so that Provider Y still answers when Provider X fails]

This concentration is not the result of carelessness. Managed DNS services are complex to operate reliably. They require a globally distributed anycast network, DDoS mitigation, DNSSEC support, low-latency response times, and 100% uptime SLAs. Building this in-house is beyond the capabilities of most organizations, so they delegate to a specialist. But that delegation creates shared fate: when the specialist fails, all of its customers fail together.

Mitigation: Multi-Provider DNS

The most effective mitigation against DNS provider failure is running authoritative DNS on multiple independent providers simultaneously. DNS natively supports this through NS (nameserver) records: a domain can list nameservers from two or more providers, and recursive resolvers will query any of them.

For example, a domain could configure its NS records to point to both Akamai and AWS Route 53:

example.com.  IN  NS  a1-1.akam.net.         ; Akamai
example.com.  IN  NS  a2-2.akam.net.         ; Akamai
example.com.  IN  NS  ns-111.awsdns-11.com.  ; Route 53
example.com.  IN  NS  ns-222.awsdns-22.net.  ; Route 53

If Akamai's Edge DNS goes down, recursive resolvers will fail over to the Route 53 nameservers and get valid answers. The domain stays up even though one of its DNS providers has failed completely.
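A toy simulation of that failover behavior, reusing the nameserver names from the example above. The provider-status flags and responses are simulated; real resolvers have more elaborate server-selection and retry logic:

```python
import random

# Simulated resolver behavior with NS records from two providers.
# The answer address is illustrative (documentation range).

def query_nameserver(ns, provider_status):
    provider = "akamai" if ns.endswith(".akam.net.") else "route53"
    if provider_status[provider]:
        return ("NOERROR", "203.0.113.10")
    return ("SERVFAIL", None)

def resolve(nameservers, provider_status):
    # Real resolvers pick nameservers in varying order and retry on
    # SERVFAIL; shuffling approximates that behavior.
    for ns in random.sample(nameservers, len(nameservers)):
        rcode, answer = query_nameserver(ns, provider_status)
        if rcode == "NOERROR":
            return answer
    return None  # every provider failed

ns_set = ["a1-1.akam.net.", "a2-2.akam.net.",
          "ns-111.awsdns-11.com.", "ns-222.awsdns-22.net."]

# Akamai down, Route 53 up: the domain still resolves.
print(resolve(ns_set, {"akamai": False, "route53": True}))  # 203.0.113.10
```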

In practice, multi-provider DNS is operationally complex. The two providers must serve identical zone data, which requires synchronization. Zone transfers (AXFR/IXFR) or API-based synchronization must be configured and monitored. DNSSEC adds another layer of complexity, since both providers must serve the same signed records. But the resilience benefit is substantial -- it eliminates the single provider as a single point of failure.

What This Means for BGP

While the Akamai outage was a DNS failure rather than a BGP routing failure, the two systems are deeply intertwined. DNS tells clients which IP address to connect to, and BGP determines how to route packets to that IP address. A failure in either system breaks connectivity.

From a BGP perspective, the Akamai outage is instructive because it shows how internet infrastructure failures do not always manifest as BGP hijacks or route withdrawals. During the Akamai outage, the BGP routes for Akamai's DNS servers were perfectly healthy. AS20940 continued to announce its prefixes normally. Traffic was reaching Akamai's DNS servers without any routing issues. The problem was at the application layer: the DNS software on those servers was returning errors.

This highlights an important limitation of BGP monitoring. A BGP looking glass can detect route withdrawals, hijacks, and path changes, but it cannot detect application-layer failures. The routes to Akamai's DNS servers looked perfectly normal throughout the incident. To detect DNS failures specifically, you need DNS monitoring in addition to BGP monitoring.
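A sketch of this layered-monitoring idea, with both checks simulated. A real implementation would consume a BGP feed such as RIS or RouteViews and send actual DNS queries to the authoritative servers; the prefix shown is illustrative:

```python
# Why BGP monitoring alone misses application-layer failures:
# combine a (simulated) route-visibility check with a (simulated)
# DNS health check.

def bgp_route_visible(prefix, routing_table):
    return prefix in routing_table

def dns_answers(server_state):
    return server_state == "NOERROR"

def health(prefix, routing_table, server_state):
    route_ok = bgp_route_visible(prefix, routing_table)
    dns_ok = dns_answers(server_state)
    if route_ok and dns_ok:
        return "healthy"
    if route_ok and not dns_ok:
        return "routes fine, application failing"   # the Akamai case
    return "routing problem"

# During the outage: AS20940's prefixes stayed announced while the
# DNS software returned errors.
table = {"23.192.0.0/11"}  # illustrative Akamai-style prefix
print(health("23.192.0.0/11", table, server_state="SERVFAIL"))
```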

That said, BGP analysis is valuable for understanding the broader infrastructure picture. By looking up a DNS provider's autonomous system, you can see its peering relationships, the number of prefixes it announces, and its position in the internet topology. This gives you insight into the blast radius of a potential failure.

The Dyn Attack Precedent

The Akamai outage also echoed a landmark incident from 2016: the Mirai botnet DDoS attack against Dyn, another major managed DNS provider. On October 21, 2016, a massive distributed denial-of-service attack using hundreds of thousands of compromised IoT devices overwhelmed Dyn's DNS infrastructure, causing widespread outages for Twitter, Netflix, Reddit, GitHub, Spotify, and many others.

The Dyn attack and the Akamai outage share a common lesson: authoritative DNS is a critical chokepoint. Whether the failure is caused by a DDoS attack, a software bug, or a configuration error, the result is the same: every domain hosted on the failed provider becomes unreachable. The Dyn attack prompted some companies to adopt multi-provider DNS, but five years later, the Akamai outage showed that many still had not.

A few months after the Akamai incident, in October 2021, Facebook (now Meta) experienced its own catastrophic outage when an internal maintenance operation accidentally withdrew all of Facebook's BGP routes, including the routes to its own DNS servers. That outage lasted roughly six hours and was a BGP failure rather than a DNS software failure, but the result was identical: Facebook, Instagram, and WhatsApp became completely unreachable worldwide.

Lessons for Internet Resilience

The Akamai Edge DNS outage of July 2021 offers several durable lessons:

- Latent bugs hide behind untested configuration combinations. Routine changes need staged rollouts, canary testing, and fast rollback paths.
- Authoritative DNS has no fallback. Multi-provider DNS is the only mitigation that removes a single provider's shared fate.
- TTLs shape the blast radius. Short TTLs trade routing agility for fragility during authoritative outages, and no TTL survives an hour-long failure unscathed.
- Monitor every layer. BGP monitoring alone cannot detect application-layer failures; DNS health must be checked directly.
- Concentration is systemic. Three providers failed in three different ways within two months; the risk belongs to the architecture, not to any one company.

Investigate the Infrastructure

You can explore the BGP routing data for the networks involved in this incident. Look up their autonomous systems -- AS20940 (Akamai), AS54113 (Fastly), and AS13335 (Cloudflare) -- to see announced prefixes, peering relationships, and AS paths.

Understanding how these networks are interconnected -- and how dependent the rest of the internet is on them -- is the first step toward building more resilient infrastructure.
