Domain Fronting, SNI Spoofing, and Encrypted Client Hello

Every time your browser connects to a website over HTTPS, it performs a TLS handshake. Before any encrypted data flows, the client must tell the server which hostname it wants to reach. This happens through a field called the Server Name Indication (SNI), and it is sent in plaintext. That single exposed field has become the battleground for censorship, surveillance, circumvention tools, and advanced persistent threat groups alike.

This article examines three interrelated techniques that exploit or protect the SNI field: domain fronting, which hides the true destination behind a CDN's shared infrastructure; SNI spoofing, which lies about the destination outright; and Encrypted Client Hello (ECH), which encrypts the SNI so that no on-path observer can read it at all.

TLS SNI: The Plaintext Leak in Every HTTPS Connection

When the protocol was first designed (as SSL 3.0, in 1996), each IP address typically hosted a single website. The server knew which certificate to present because there was a one-to-one mapping between IP and hostname. But as IPv4 address space became scarce and shared hosting became the norm, a problem emerged: if a server hosts hundreds of domains on one IP address, how does it know which certificate to use before the encrypted session begins?

The answer was Server Name Indication (SNI), first standardized as a TLS extension in RFC 3546 (2003) and currently defined in RFC 6066. The client includes the desired hostname in the ClientHello message at the very start of the TLS handshake. The server reads this field, selects the appropriate certificate, and proceeds with the handshake.

The critical detail: the ClientHello is sent before any encryption is established. The SNI field is therefore visible to anyone who can observe the network traffic — your ISP, a corporate firewall, a national censor, or an attacker performing a man-in-the-middle interception.

[Diagram: TLS handshake with plaintext SNI. The ClientHello carrying SNI = "secret-site.com" is readable by any network observer; the rest of the handshake (ServerHello, certificate, key exchange) is partially visible; application data, including the HTTP Host header, is fully encrypted.]
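The wire format makes the leak concrete. The sketch below (plain Python, no third-party libraries) builds a server_name extension as it appears unencrypted inside a ClientHello and then parses the hostname back out, which is all a DPI box has to do with sniffed traffic; the hostname is illustrative.

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Build a server_name extension (type 0) as it appears,
    unencrypted, inside a TLS ClientHello (RFC 6066)."""
    name = hostname.encode("ascii")
    server_name = b"\x00" + struct.pack("!H", len(name)) + name  # name_type 0 = host_name
    server_name_list = struct.pack("!H", len(server_name)) + server_name
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def parse_sni_extension(ext: bytes) -> str:
    """Recover the hostname from the raw extension bytes --
    exactly what an on-path observer does with captured traffic."""
    ext_type, ext_len = struct.unpack("!HH", ext[:4])
    assert ext_type == 0  # server_name
    name_len = struct.unpack("!H", ext[7:9])[0]  # skip list length (2) + name type (1)
    return ext[9:9 + name_len].decode("ascii")

raw = build_sni_extension("secret-site.com")
print(parse_sni_extension(raw))  # an observer reads the hostname directly
```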

This means that even though HTTPS encrypts the content of your communications, the identity of the server you are connecting to is exposed in every connection. A censor can block access to specific domains by inspecting the SNI field without breaking TLS. An ISP can log every website you visit by reading SNI, even though it cannot see the page content. This is precisely how Deep Packet Inspection (DPI) systems in countries like China, Russia, Iran, and Egypt enforce domain-level blocking.

Notice that the HTTP Host header — which also identifies the destination — travels inside the encrypted TLS tunnel. Only the SNI field in the ClientHello is exposed. This discrepancy between the plaintext SNI and the encrypted Host header is exactly what domain fronting exploits.

Domain Fronting: Hiding Behind a CDN

Domain fronting is a technique that abuses the gap between TLS SNI and the HTTP Host header. It works by connecting to a CDN that hosts many domains on the same set of IP addresses. The client places an innocuous, uncensored domain in the SNI field (and in the TLS certificate verification), but puts the actual target domain in the HTTP Host header inside the encrypted stream.

Here is how it works step by step:

  1. The client initiates a TLS connection to a CDN edge server (e.g., Cloudflare, Amazon CloudFront, or Google).
  2. In the ClientHello, the SNI field says allowed-site.example.com — a domain that is not blocked by the censor.
  3. The CDN edge server accepts the connection and completes the TLS handshake using the certificate for the front domain.
  4. Inside the now-encrypted HTTP request, the Host header says blocked-site.example.com — the actual destination.
  5. The CDN sees the Host header, routes the request to the backend for blocked-site.example.com, and returns the response through the same encrypted tunnel.

To a network observer or censor performing DPI, this looks like an ordinary HTTPS connection to allowed-site.example.com. Blocking it would require blocking the entire CDN, which would collaterally disable thousands of legitimate websites.
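The five steps above can be sketched in code. The helper below only constructs the two halves (hostnames are hypothetical): the SNI the client would present during the TLS handshake, and the raw HTTP request that travels inside the tunnel with a different Host header. Wiring it to a real socket would mean passing the front as server_name to an ssl-wrapped connection and sending the request bytes through it.

```python
def fronted_request(front: str, target: str, path: str = "/") -> tuple[str, bytes]:
    """Return (sni_hostname, http_request_bytes) for a domain-fronted fetch.

    The first element goes into the plaintext SNI (what the censor sees);
    the Host header inside the request names the real, hidden target.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {target}\r\n"        # routed on by the CDN, inside TLS
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")
    return front, request

# Hypothetical domains standing in for a front/target pair on the same CDN.
sni, req = fronted_request("allowed-site.example.com", "blocked-site.example.com")
```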

Domain Fronting Through a CDN Client Censor / DPI Inspects SNI only CDN Edge (Cloudflare, AWS, etc.) Shared IP for all domains allowed-site.com blocked-site.com 1 Client connects to CDN IP. SNI = "allowed-site.com" 2 Censor sees "allowed-site.com" in SNI. Connection passes. 3 TLS handshake completes. Tunnel is now encrypted. 4 Inside TLS: HTTP Host header = "blocked-site.com" 5 CDN routes request to blocked-site.com backend. Response returns. What the censor sees (outside TLS): TCP dst: 104.16.x.x:443 (CDN IP) TLS SNI: allowed-site.com What the CDN sees (inside TLS): Host: blocked-site.com

Domain fronting was first described in a 2015 academic paper by Fifield, Lan, Hynes, Wegmann, and Paxson at UC Berkeley, though the underlying technique had been observed in practice earlier. Its power lies in a fundamental asymmetry: the censor can only see the SNI (and the IP address), but the actual routing decision happens inside the encrypted tunnel based on the Host header.

Requirements for Domain Fronting

Domain fronting only works when specific conditions are met:

  1. The front domain and the hidden target must be served from the same CDN or shared edge infrastructure, so that a single IP range and a single TLS termination point cover both.
  2. The CDN must route requests on the HTTP Host header rather than the TLS SNI, and must not reject connections where the two disagree.
  3. The front domain must be popular or important enough that the censor is unwilling to block it, or the CDN's address space, outright.

Censorship Circumvention: Signal, Tor, and Telegram

Domain fronting gained widespread attention through its use by censorship circumvention tools:

Signal Messenger

In 2016, Open Whisper Systems (the organization behind Signal) deployed domain fronting to keep Signal accessible in Egypt, the UAE, Oman, and Qatar. When these countries blocked Signal's servers, the app was modified to route its traffic through Google App Engine, using Google domains as the front. The censors could see connections to Google servers but could not distinguish Signal traffic from any other Google service. Blocking Signal would have required blocking all of Google.

This approach worked until Google intervened. In April 2018, Google updated its infrastructure to reject requests where the SNI and Host header did not match, explicitly breaking domain fronting. Google's stated reason was that domain fronting "has never been a supported feature" and that it could "never be considered a reliable approach."

Tor Meek Pluggable Transport

The Tor project developed the meek pluggable transport specifically around domain fronting. Meek disguises Tor traffic as ordinary HTTPS connections to major cloud services. Users in countries that block Tor could connect through fronts on Google, Amazon CloudFront, or Microsoft Azure.

Meek used a relay hosted on a cloud platform (e.g., an Amazon CloudFront distribution). The client would connect with SNI set to an unblocked domain on the same CDN. Inside the TLS tunnel, the request would reach the meek relay, which would forward it into the Tor network. From the censor's perspective, the user appeared to be browsing a mainstream website.

Telegram

When Russia attempted to block Telegram in 2018, Telegram deployed domain fronting through Amazon and Google infrastructure. Russia's response was to block millions of IP addresses belonging to Amazon AWS (AS16509) and Google Cloud (AS15169), causing widespread collateral damage to unrelated services. This incident vividly demonstrated both the power and the controversy of domain fronting.

CDN Response: Shutting Down Domain Fronting

The major CDN and cloud providers systematically disabled domain fronting between 2017 and 2018:

  1. Google (April 2018) reconfigured its infrastructure to reject requests whose Host header did not match the SNI, ending fronting through Google App Engine and other Google domains.
  2. Amazon CloudFront (April 2018) followed within weeks, announcing "enhanced domain protections" that require the Host header to match a domain associated with the distribution being contacted.

The simultaneous shutdown by all major providers within weeks of each other effectively ended domain fronting as a reliable censorship circumvention strategy. The Tor project noted that meek over Google and Amazon was no longer viable, and Signal was forced to find alternative approaches (they eventually switched to proxying through other infrastructure).

The Dark Side: APT29 and Domain Fronting for C2

Domain fronting was not used only by privacy advocates. Advanced Persistent Threat (APT) groups recognized its value for hiding command-and-control (C2) communication.

APT29 / Cozy Bear

APT29 (also known as Cozy Bear, attributed to Russia's SVR intelligence service) is among the most sophisticated threat actors known to use domain fronting. This group, responsible for the 2016 DNC hack and the 2020 SolarWinds supply-chain compromise, used domain fronting to hide C2 traffic within legitimate CDN connections.

The technique worked like this:

  1. APT29 hosted their C2 server behind a CDN like Amazon CloudFront or a similar service.
  2. Malware on a compromised machine would initiate HTTPS connections with the SNI set to a high-reputation domain (e.g., a popular news site hosted on the same CDN).
  3. Inside the encrypted tunnel, the Host header pointed to the C2 server's CloudFront distribution.
  4. The CDN routed the request to the C2 backend. Commands flowed back through the same encrypted path.

Network monitoring tools and enterprise firewalls saw connections to legitimate-looking domains over standard HTTPS on port 443. The traffic blended perfectly with normal web browsing, making detection extremely difficult. Even TLS inspection proxies that could decrypt traffic would see requests going to a legitimate CDN domain — the C2 infrastructure was hidden behind the CDN's edge network.
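For defenders who do run TLS inspection, the one reliable signal is the mismatch itself. A minimal heuristic (a sketch, not a production detection rule) compares the SNI recorded at handshake time with the Host header recovered after decryption:

```python
def fronting_suspected(tls_sni: str, http_host: str) -> bool:
    """Flag decrypted flows where the TLS SNI and the HTTP Host header
    name different domains: the telltale of domain fronting."""
    sni = tls_sni.strip().lower().rstrip(".")
    host = http_host.strip().lower().rstrip(".").split(":")[0]  # drop any port
    return sni != host

print(fronting_suspected("allowed-site.com", "blocked-site.com"))  # True
print(fronting_suspected("example.com", "EXAMPLE.COM:443"))        # False
```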

Other Malware Using Domain Fronting

APT29 was not alone. The technique also appeared in offensive security tooling: Cobalt Strike's malleable C2 profiles, for example, let operators present a benign SNI while the Host header directs the CDN to their team server, and both red teams and criminal campaigns adopted the pattern while the major CDNs still permitted it.

The dual-use nature of domain fronting — simultaneously a lifeline for dissidents and a tool for espionage — made the ethical and policy discussions around its shutdown deeply contentious.

SNI Spoofing: A Cruder Approach

SNI spoofing is a simpler but less effective technique. Instead of routing through a CDN that legitimately handles both domains, the client simply lies: it puts a fake domain in the SNI field while connecting directly to the real server.

For example, a client might set the SNI to google.com while actually connecting to the IP address of blocked-site.com. A naive DPI system that only checks the SNI field and not the destination IP would be fooled.

Why SNI Spoofing Usually Fails

SNI spoofing has significant limitations compared to domain fronting:

  1. Certificate mismatch: the real server presents a certificate for its own hostname, not for the spoofed one, so the client must skip certificate validation, and a censor that watches the server's side of the handshake can spot the lie.
  2. Server behavior: many servers reject a ClientHello carrying an SNI they do not host, or fall back to a default certificate that names the true site.
  3. IP cross-checking: a censor can compare the destination address against the claimed hostname's known hosting networks; an SNI of google.com pointed at a non-Google IP is an immediate red flag.

Some DPI systems in countries with less sophisticated censorship infrastructure can be bypassed by SNI spoofing, especially those that only inspect the SNI field for keyword matching without cross-referencing DNS responses or performing IP-to-AS lookups. But this is an arms race: as censorship technology improves, crude SNI spoofing becomes less effective.
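A censor's countermeasure is easy to sketch. With a table mapping hosting networks to the domains they actually serve (toy data below; a real system would use IP-to-AS and DNS telemetry), the DPI box simply checks whether the claimed SNI is plausible for the destination address:

```python
# Toy prefix-to-domains table; a real censor builds this from BGP and
# DNS observation. 203.0.113.0/24 is a documentation range (TEST-NET-3).
KNOWN_HOSTING = {
    "142.250.": {"google.com", "youtube.com"},
    "203.0.113.": {"blocked-site.com"},
}

def sni_plausible(dst_ip: str, claimed_sni: str) -> bool:
    """Return False when the SNI names a domain the destination
    network is not known to host, exposing a crude SNI spoof."""
    for prefix, domains in KNOWN_HOSTING.items():
        if dst_ip.startswith(prefix):
            return claimed_sni in domains
    return True  # unknown network: no evidence either way

print(sni_plausible("203.0.113.7", "google.com"))  # False: spoof detected
```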

Geneva and Automated Evasion

Researchers at the University of Maryland developed Geneva, a genetic algorithm that automatically discovers packet-level censorship evasion strategies. Some of Geneva's discovered techniques involve fragmenting the ClientHello so that the SNI field is split across TCP segments, confusing DPI systems that cannot reassemble fragmented TLS handshakes. This is related to SNI spoofing but operates at a lower level — manipulating how the SNI is transmitted rather than what it contains.
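The fragmentation idea is simple to illustrate. Splitting the ClientHello bytes in the middle of the hostname means neither TCP segment contains the full SNI string, defeating any DPI matcher that inspects packets individually. The bytes below are a stand-in for a real handshake:

```python
def fragment_at_sni(hello: bytes, sni: bytes) -> list[bytes]:
    """Split a ClientHello so the SNI straddles two TCP segments."""
    cut = hello.index(sni) + len(sni) // 2  # cut point inside the hostname
    return [hello[:cut], hello[cut:]]

# Toy ClientHello: TLS record header, filler fields, then the SNI.
hello = b"\x16\x03\x01" + b"A" * 40 + b"secret-site.com" + b"B" * 20
segments = fragment_at_sni(hello, b"secret-site.com")

assert b"".join(segments) == hello  # the server reassembles the stream
assert all(b"secret-site.com" not in s for s in segments)  # per-packet match fails
```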

Encrypted Client Hello (ECH): Solving the Problem Properly

While domain fronting was a clever hack and SNI spoofing was a crude workaround, Encrypted Client Hello (ECH) is the IETF's proper engineering solution to the SNI exposure problem. ECH encrypts the entire ClientHello message — including the SNI field — so that no on-path observer can see which hostname the client is requesting.

The Evolution: ESNI to ECH

ECH evolved from an earlier proposal called Encrypted SNI (ESNI), which encrypted only the SNI extension. ESNI was deployed experimentally by Cloudflare and Firefox in 2018-2019, but the IETF TLS working group determined that encrypting SNI alone was insufficient — other ClientHello extensions like ALPN (Application-Layer Protocol Negotiation) and the supported versions list also leak information. The solution was broadened to encrypt the entire ClientHello, and the protocol was renamed to Encrypted Client Hello (ECH).

How ECH Works

ECH uses a split ClientHello design. The client constructs two versions of the ClientHello:

  1. ClientHelloInner: the real handshake parameters, including the true SNI. This version is encrypted and carried as an extension of the outer message.
  2. ClientHelloOuter: an innocuous shell whose visible SNI is the server's published public_name. This is the only version an on-path observer can read.

The key exchange works as follows:

  1. The server publishes its ECH configuration, including an HPKE (Hybrid Public Key Encryption) public key, in a DNS HTTPS resource record (or via the SVCB record type). This DNS record also specifies which algorithms to use and the public_name that should appear in the outer SNI.
  2. The client resolves the DNS record for the target domain (ideally over DNS over HTTPS to prevent the DNS query itself from leaking the domain name) and retrieves the ECH configuration.
  3. The client generates the ClientHelloInner with the real SNI, encrypts it using the server's HPKE public key, and wraps it in the ClientHelloOuter with the innocuous public_name as the visible SNI.
  4. The server (or the client-facing server, if using a split architecture) decrypts the inner ClientHello, learns the real target hostname, and completes the TLS handshake with the appropriate certificate.
  5. If decryption fails (e.g., due to key rotation), the server can send a retry_configs response containing its current ECH keys, allowing the client to retry.
[Diagram: ECH key exchange. Step 0: the client fetches the ECH config and HPKE public key from a DNS resolver via an HTTPS record query. Step 1: the ClientHelloOuter shows the visible SNI "cdn-edge.example.net" and carries the encrypted ClientHelloInner. Step 2: the client-facing server decrypts the inner ClientHello with its private key, recovering the real SNI "secret-site.com". Step 3: the handshake completes with the certificate for secret-site.com. Step 4: encrypted application data flows; the observer cannot determine the destination. Unlike domain fronting, ECH is a standard protocol mechanism with no mismatch between SNI and Host header.]
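The split can be sketched structurally. The code below models only the shape of the protocol: the inner hello with the real SNI is sealed to the server's key and carried opaquely inside an outer hello whose SNI is the public_name. The toy_seal function is a stand-in for HPKE (RFC 9180), not real cryptography, and all names are illustrative.

```python
import hashlib
import json

def toy_seal(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in for HPKE seal/open: XOR with a hash-derived keystream.
    Symmetric, so applying it twice round-trips. NOT real cryptography."""
    stream = (hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1))[:len(plaintext)]
    return bytes(a ^ b for a, b in zip(plaintext, stream))

def build_ech_client_hello(real_sni: str, public_name: str, server_key: bytes) -> dict:
    inner = json.dumps({"sni": real_sni, "alpn": ["h2", "h3"]}).encode()
    return {
        "sni": public_name,                  # the only name an observer sees
        "ech": toy_seal(server_key, inner),  # opaque ClientHelloInner payload
    }

key = b"server-hpke-public-key"  # in reality, published via the DNS HTTPS record
outer = build_ech_client_hello("secret-site.com", "cdn-edge.example.net", key)

# The client-facing server, holding the key, recovers the real SNI:
inner = json.loads(toy_seal(key, outer["ech"]))
```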

The Role of DNS HTTPS Records

ECH relies on DNS HTTPS (or SVCB) resource records to distribute the server's ECH configuration. These record types, defined in RFC 9460, carry service binding parameters including the ECH public key, supported cipher suites, and the public_name for the outer SNI.

A DNS query for the HTTPS record of secret-site.com might return:

secret-site.com. IN HTTPS 1 . ech="..." alpn="h2,h3"

The ech parameter contains the base64-encoded ECHConfigList, which includes the HPKE public key and cipher suite identifiers. This is the information the client needs to encrypt its ClientHelloInner.

There is an obvious bootstrapping concern: if the DNS query for the ECH configuration is itself unencrypted, a censor could block or tamper with it. This is why ECH works best in combination with DNS over HTTPS (DoH) or DNS over TLS (DoT), which encrypts the DNS resolution itself. The full privacy chain requires both encrypted DNS and ECH: encrypted DNS hides which domain you are looking up, and ECH hides which domain you are connecting to.
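The bootstrapping step can be sketched as a DoH query. The helper below builds the query URL for a JSON-style DoH API such as Cloudflare's (the resolver endpoint and domain are assumptions; actually sending the request additionally requires the header Accept: application/dns-json):

```python
from urllib.parse import urlencode

def doh_https_record_url(domain: str,
                         resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """URL of a DNS-over-HTTPS JSON query for the HTTPS (type 65) record
    that carries the ECH configuration for `domain`."""
    return resolver + "?" + urlencode({"name": domain, "type": "HTTPS"})

url = doh_https_record_url("secret-site.com")
```

Because the lookup itself travels over HTTPS, an on-path observer learns neither the queried name nor the returned ECH configuration.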

ECH vs Domain Fronting: Key Differences

While ECH and domain fronting both hide the destination hostname from observers, they are fundamentally different:

  1. Cooperation: domain fronting exploited CDN behavior the providers never intended to support and eventually shut down; ECH is a standards-track mechanism that the server operator deliberately enables and advertises.
  2. Mechanism: fronting relies on a mismatch between the plaintext SNI and the encrypted Host header; ECH encrypts the ClientHello itself, so there is no mismatch for a provider to police.
  3. Visibility: a fronted connection still exposes a real (decoy) hostname in the SNI; an ECH connection exposes only the public_name, though the presence of the ECH extension itself remains observable.

The Tension: Privacy vs Enterprise Security

ECH has ignited a fierce debate between privacy advocates and enterprise security teams. The two camps have fundamentally incompatible goals.

The Privacy Case for ECH

Privacy advocates argue that the plaintext SNI is one of the last significant metadata leaks in HTTPS connections. Even with encrypted DNS, an observer can see which sites you visit by reading the SNI field. ECH closes this gap, providing meaningful privacy improvements for:

  1. Users in censored regions, where SNI inspection is the primary mechanism for domain-level blocking.
  2. Anyone whose ISP logs, sells, or is compelled to disclose browsing metadata.
  3. Users on untrusted networks such as public Wi-Fi, where any passive observer can read the SNI today.

The Enterprise Security Case Against ECH

Enterprise security teams see ECH as a threat to network visibility. Many organizations deploy TLS inspection (also called SSL/TLS interception or "break and inspect") to monitor encrypted traffic for threats. These systems work by acting as a trusted man-in-the-middle: the enterprise CA issues certificates dynamically, and the DPI appliance decrypts, inspects, and re-encrypts traffic.

ECH complicates this in several ways:

  1. Selective bypass breaks: many deployments exempt sensitive categories such as banking or healthcare based on the SNI; with ECH, the firewall cannot tell which connections to exempt.
  2. Hostname-based policy fails: allow, block, and inspect decisions made on the ClientHello have nothing to act on when the real hostname is encrypted.
  3. Interception gets harder: a middlebox that does not hold the ECH private key cannot read the inner ClientHello, so it must strip the extension, block the connection, or lose visibility.

Some enterprise security vendors have already announced that they will block ECH at their perimeter by dropping connections that carry the ECH extension. This is straightforward: the presence of the encrypted_client_hello extension in the ClientHelloOuter is visible, so a firewall can detect and block ECH connections even though it cannot read the encrypted contents. The resulting arms race mirrors the earlier dynamic around DoH, which many enterprises block for similar reasons.
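Detecting (and hence blocking) ECH at a perimeter is mechanically trivial, because the extension's presence is plaintext even though its payload is not. A minimal parser over a ClientHello extensions block shows the check; 0xfe0d is the draft codepoint for encrypted_client_hello, and the sample blocks are toy data:

```python
ECH_EXTENSION = 0xFE0D  # encrypted_client_hello (draft-ietf-tls-esni codepoint)
SNI_EXTENSION = 0x0000  # server_name

def iter_extensions(ext_block: bytes):
    """Yield (type, body) for each extension in a ClientHello extensions block."""
    i = 0
    while i + 4 <= len(ext_block):
        etype = int.from_bytes(ext_block[i:i + 2], "big")
        elen = int.from_bytes(ext_block[i + 2:i + 4], "big")
        yield etype, ext_block[i + 4:i + 4 + elen]
        i += 4 + elen

def carries_ech(ext_block: bytes) -> bool:
    return any(t == ECH_EXTENSION for t, _ in iter_extensions(ext_block))

# Toy extension blocks: one with only an SNI, one with an ECH payload too.
plain = SNI_EXTENSION.to_bytes(2, "big") + (3).to_bytes(2, "big") + b"abc"
ech = plain + ECH_EXTENSION.to_bytes(2, "big") + (4).to_bytes(2, "big") + b"\x00" * 4

print(carries_ech(plain), carries_ech(ech))  # False True
```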

Russia and China's Response

Russia's Roskomnadzor began blocking ECH (and ESNI before it) in 2020, adding detection rules to its DPI infrastructure. When a connection carrying the encrypted_client_hello extension is detected, it is dropped. China's Great Firewall similarly detects and blocks connections with the ESNI/ECH extensions. These countries view ECH as a direct threat to their censorship infrastructure, which relies heavily on SNI-based filtering.

This creates a paradox: ECH is designed to prevent SNI-based censorship, but authoritarian regimes can simply block ECH itself, forcing connections back to plaintext SNI. The effectiveness of ECH as a censorship circumvention tool depends on its adoption being widespread enough that blocking ECH would cause unacceptable collateral damage — the same dynamic that made domain fronting effective.

Browser and Server Support for ECH

As of 2026, ECH support is actively being deployed but is not yet universal:

Browsers

Firefox and Chromium-based browsers (Chrome, Edge) ship ECH and enable it when the server advertises an ECH configuration and DNS over HTTPS is in use; without encrypted DNS, they fall back to plaintext SNI.

Servers and CDNs

Cloudflare has been the most prominent adopter, offering ECH across its edge network. Support among other CDNs and in stock web servers remains uneven.

TLS Libraries

BoringSSL (used by Chromium) implements ECH. Mainline support in other widely used libraries, notably OpenSSL, has lagged, which limits server-side deployment outside the large CDNs.

The BGP and Network-Level Perspective

From a BGP and network routing perspective, domain fronting, SNI spoofing, and ECH all share a common thread: they exploit the tension between network-level identity (IP addresses and AS numbers) and application-level identity (domain names).

BGP routes traffic based on IP prefixes and autonomous systems. It has no concept of domain names. When a CDN like Cloudflare (AS13335) announces the prefix 104.16.0.0/12, all traffic to millions of different websites flows to the same set of edge servers. The differentiation between cloudflare.com and any other Cloudflare-hosted site happens at the application layer, not the network layer.

This architectural reality is what makes all three techniques possible:

  1. Domain fronting exploits the fact that blocking one tenant at the network layer means blocking every co-hosted tenant.
  2. SNI spoofing exploits censors that filter on the application-layer name while ignoring the network-layer destination.
  3. ECH formalizes the same property: the outer SNI names the shared client-facing server, and only that server can learn which of its many tenants the client actually wants.

You can observe this multi-tenant architecture directly. Look up any Cloudflare IP and you will see a single prefix covering thousands of domains. The AS path to reach all of those domains is identical because, at the BGP level, they are the same destination.
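You can check this property with the standard library: any address inside the announced prefix is, from BGP's point of view, the same destination regardless of which of the hosted domains it serves. The prefix matches the one cited above; the third address is outside it.

```python
import ipaddress

# The Cloudflare announcement cited above: one /12 covering 104.16.0.0-104.31.255.255.
cdn_prefix = ipaddress.ip_network("104.16.0.0/12")

for ip in ("104.16.1.1", "104.31.254.9", "93.184.216.34"):
    inside = ipaddress.ip_address(ip) in cdn_prefix
    print(ip, "inside CDN prefix" if inside else "outside")
```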

The Full Privacy Stack

SNI encryption does not exist in isolation. True connection privacy requires encrypting metadata at every layer:

  1. DNS: DNS over HTTPS or DNS over TLS hides which name you look up.
  2. SNI: ECH hides which name you connect to.
  3. Content: TLS itself hides what you send and receive.
  4. IP address: none of the above hides the destination IP; only a VPN, a proxy, or Tor can do that.

For users seeking the strongest possible privacy, Tor remains the most robust solution, as it encrypts the destination at every layer and routes traffic through multiple independent relays. ECH is not a replacement for Tor; it is a baseline improvement that benefits all web users without requiring special software.

What Comes Next

The trajectory is clear. The IETF is standardizing ECH, browser vendors are shipping it, and the largest CDN provider (Cloudflare) has enabled it by default for all its customers. The practical effect is that SNI-based censorship and surveillance are becoming progressively less effective for traffic routed through ECH-enabled infrastructure.

However, the countermeasures are equally clear. Countries that censor the internet can block ECH connections entirely, reverting clients to plaintext SNI. Enterprise security vendors are developing alternatives to SNI-based inspection, including client-side agents, certificate-based device identity, and zero-trust architectures that do not depend on network-level inspection.

Domain fronting, despite being shut down by CDN providers, demonstrated a fundamental principle: when many domains share the same network infrastructure, it becomes difficult to selectively block individual destinations without collateral damage. ECH enshrines this principle in a standards-track protocol, transforming what was once a hack into an engineered privacy guarantee.

You can explore the network infrastructure that makes these techniques possible by examining how the major CDN providers announce their address space in a BGP looking glass.
