Domain Fronting, SNI Spoofing, and Encrypted Client Hello
Every time your browser connects to a website over HTTPS, it performs a TLS handshake. Before any encrypted data flows, the client must tell the server which hostname it wants to reach. This happens through a field called the Server Name Indication (SNI), and it is sent in plaintext. That single exposed field has become the battleground for censorship, surveillance, circumvention tools, and advanced persistent threat groups alike.
This article examines three interrelated techniques that exploit or protect the SNI field: domain fronting, which hides the true destination behind a CDN's shared infrastructure; SNI spoofing, which lies about the destination outright; and Encrypted Client Hello (ECH), which encrypts the SNI so that no on-path observer can read it at all.
TLS SNI: The Plaintext Leak in Every HTTPS Connection
When TLS was first designed (as SSL 3.0 in 1996), each IP address typically hosted a single website. The server knew which certificate to present because there was a one-to-one mapping between IP and hostname. But as IPv4 address space became scarce and shared hosting became the norm, a problem emerged: if a server hosts hundreds of domains on one IP address, how does it know which certificate to use before the encrypted session begins?
The answer was Server Name Indication (SNI), standardized in RFC 4366 (2006) as a TLS extension. The client includes the desired hostname in the ClientHello message at the very start of the TLS handshake. The server reads this field, selects the appropriate certificate, and proceeds with the handshake.
The critical detail: the ClientHello is sent before any encryption is established. The SNI field is therefore visible to anyone who can observe the network traffic — your ISP, a corporate firewall, a national censor, or an attacker performing a man-in-the-middle interception.
This means that even though HTTPS encrypts the content of your communications, the identity of the server you are connecting to is exposed in every connection. A censor can block access to specific domains by inspecting the SNI field without breaking TLS. An ISP can log every website you visit by reading SNI, even though it cannot see the page content. This is precisely how Deep Packet Inspection (DPI) systems in countries like China, Russia, Iran, and Egypt enforce domain-level blocking.
Notice that the HTTP Host header — which also identifies the destination — travels inside the encrypted TLS tunnel. Only the SNI field in the ClientHello is exposed. This discrepancy between the plaintext SNI and the encrypted Host header is exactly what domain fronting exploits.
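You can observe this leak directly with Python's standard ssl module. The sketch below serializes a ClientHello entirely in memory (no network connection is made) and shows that the hostname appears verbatim in the unencrypted handshake bytes; the domain name is illustrative:

```python
import ssl

# Start a TLS handshake against an in-memory BIO pair so we can capture
# the raw ClientHello bytes the client would put on the wire.
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="secret-site.example.com")

try:
    tls.do_handshake()          # no peer yet, so this cannot complete...
except ssl.SSLWantReadError:
    pass                        # ...but the ClientHello is already serialized

client_hello = outgoing.read()

# The hostname sits in plaintext inside the handshake record:
print(b"secret-site.example.com" in client_hello)  # True
```

Any on-path observer with access to these bytes can extract the SNI the same way, which is exactly what DPI systems do.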
Domain Fronting: Hiding Behind a CDN
Domain fronting is a technique that abuses the gap between TLS SNI and the HTTP Host header. It works by connecting to a CDN that hosts many domains on the same set of IP addresses. The client places an innocuous, uncensored domain in the SNI field (and in the TLS certificate verification), but puts the actual target domain in the HTTP Host header inside the encrypted stream.
Here is how it works step by step:
- The client initiates a TLS connection to a CDN edge server (e.g., Cloudflare, Amazon CloudFront, or Google).
- In the ClientHello, the SNI field says allowed-site.example.com — a domain that is not blocked by the censor.
- The CDN edge server accepts the connection and completes the TLS handshake using the certificate for the front domain.
- Inside the now-encrypted HTTP request, the Host header says blocked-site.example.com — the actual destination.
- The CDN sees the Host header, routes the request to the backend for blocked-site.example.com, and returns the response through the same encrypted tunnel.
To a network observer or censor performing DPI, this looks like an ordinary HTTPS connection to allowed-site.example.com. Blocking it would require blocking the entire CDN, which would collaterally disable thousands of legitimate websites.
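The asymmetry can be sketched with the same in-memory technique used above: the handshake bytes (all the censor sees) name only the front domain, while the Host header naming the real destination exists only in the plaintext that would travel inside the tunnel. The domain names are illustrative, and real fronting additionally requires a CDN that routes on the Host header:

```python
import ssl

FRONT = "allowed-site.example.com"    # visible in the SNI (illustrative)
HIDDEN = "blocked-site.example.com"   # only ever inside the encrypted stream

# What the network observer sees: a ClientHello naming the front domain.
ctx = ssl.create_default_context()
inc, out = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(inc, out, server_hostname=FRONT)
try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass
wire_bytes = out.read()

# What travels only inside the tunnel: an HTTP request for the real site.
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HIDDEN}\r\n"
    f"Connection: close\r\n\r\n"
).encode()

print(FRONT.encode() in wire_bytes)    # True  - the censor sees the front
print(HIDDEN.encode() in wire_bytes)   # False - the real target never appears
print(HIDDEN.encode() in request)      # True  - but the CDN will see it after decryption
```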
Domain fronting was first described in a 2015 academic paper by Fifield, Lan, Hynes, Wegmann, and Paxson at UC Berkeley, though the underlying technique had been observed in practice earlier. Its power lies in a fundamental asymmetry: the censor can only see the SNI (and the IP address), but the actual routing decision happens inside the encrypted tunnel based on the Host header.
Requirements for Domain Fronting
Domain fronting only works when specific conditions are met:
- Shared infrastructure — The front domain and the hidden domain must be served by the same CDN, on the same IP addresses. This is why CDNs are the natural vehicle: they host millions of domains on shared anycast IP pools.
- Permissive routing — The CDN must route requests based on the HTTP Host header rather than strictly enforcing that it matches the SNI. This was the default behavior on most CDNs before they recognized the abuse potential.
- Collateral damage risk — The front domain must be important enough that blocking it entirely is unacceptable. Large CDN IP ranges hosting Google, Amazon, or Microsoft services are ideal fronts because censors cannot afford to block all of Google (AS15169) or Amazon (AS16509).
Censorship Circumvention: Signal, Tor, and Telegram
Domain fronting gained widespread attention through its use by censorship circumvention tools:
Signal Messenger
In 2016, Open Whisper Systems (the organization behind Signal) deployed domain fronting to keep Signal accessible in Egypt, the UAE, Oman, and Qatar. When these countries blocked Signal's servers, the app was modified to route its traffic through Google App Engine, using Google domains as the front. The censors could see connections to Google servers but could not distinguish Signal traffic from any other Google service. Blocking Signal would have required blocking all of Google.
This approach worked until Google intervened. In April 2018, Google updated its infrastructure to reject requests where the SNI and Host header did not match, explicitly breaking domain fronting. Google's stated reason was that domain fronting "has never been a supported feature" and that it could "never be considered a reliable approach."
Tor Meek Pluggable Transport
The Tor project developed the meek pluggable transport specifically around domain fronting. Meek disguises Tor traffic as ordinary HTTPS connections to major cloud services. Users in countries that block Tor could connect through fronts on Google, Amazon CloudFront, or Microsoft Azure.
Meek used a relay hosted on a cloud platform (e.g., an Amazon CloudFront distribution). The client would connect with SNI set to an unblocked domain on the same CDN. Inside the TLS tunnel, the request would reach the meek relay, which would forward it into the Tor network. From the censor's perspective, the user appeared to be browsing a mainstream website.
Telegram
When Russia attempted to block Telegram in 2018, Telegram deployed domain fronting through Amazon and Google infrastructure. Russia's response was to block millions of IP addresses belonging to Amazon AWS (AS16509) and Google Cloud (AS15169), causing widespread collateral damage to unrelated services. This incident vividly demonstrated both the power and the controversy of domain fronting.
CDN Response: Shutting Down Domain Fronting
The major CDN and cloud providers systematically disabled domain fronting between 2017 and 2018:
- Google (April 2018) — Began enforcing SNI/Host header matching across Google Cloud, App Engine, and its CDN infrastructure. Google stated that domain fronting was an unintended side effect of how its serving infrastructure worked, not a deliberate feature.
- Amazon CloudFront (April 2018) — Updated its terms of service and its edge infrastructure to prohibit domain fronting. AWS now rejects requests where the Host header does not match the CloudFront distribution associated with the SNI domain. Amazon framed this as addressing a security concern, since domain fronting could be used to bypass AWS WAF and other controls.
- Cloudflare — Implemented SNI/Host matching enforcement. Cloudflare has been vocal about the fact that domain fronting was being used by both censorship circumvention tools and by malware operators.
- Microsoft Azure (2018) — Similarly blocked domain fronting on its CDN and cloud services.
The simultaneous shutdown by all major providers within weeks of each other effectively ended domain fronting as a reliable censorship circumvention strategy. The Tor project noted that meek over Google and Amazon was no longer viable, and Signal was forced to find alternative approaches (they eventually switched to proxying through other infrastructure).
The Dark Side: APT29 and Domain Fronting for C2
Domain fronting was not used only by privacy advocates. Advanced Persistent Threat (APT) groups recognized its value for hiding command-and-control (C2) communication.
APT29 / Cozy Bear
APT29 (also known as Cozy Bear, attributed to Russia's SVR intelligence service) is among the most sophisticated threat actors known to use domain fronting. This group, responsible for the 2016 DNC hack and the 2020 SolarWinds supply-chain compromise, used domain fronting to hide C2 traffic within legitimate CDN connections.
The technique worked like this:
- APT29 hosted their C2 server behind a CDN like Amazon CloudFront or a similar service.
- Malware on a compromised machine would initiate HTTPS connections with the SNI set to a high-reputation domain (e.g., a popular news site hosted on the same CDN).
- Inside the encrypted tunnel, the Host header pointed to the C2 server's CloudFront distribution.
- The CDN routed the request to the C2 backend. Commands flowed back through the same encrypted path.
Network monitoring tools and enterprise firewalls saw connections to legitimate-looking domains over standard HTTPS on port 443. The traffic blended perfectly with normal web browsing, making detection extremely difficult. Even TLS inspection proxies that could decrypt traffic would see requests going to a legitimate CDN domain — the C2 infrastructure was hidden behind the CDN's edge network.
Other Malware Using Domain Fronting
APT29 was not alone. The technique appeared in several offensive security frameworks and malware campaigns:
- Cobalt Strike — The widely used penetration testing (and, unfortunately, crimeware) framework included built-in support for domain fronting C2 profiles.
- PhantomLance / OceanLotus (APT32) — Used domain fronting through Google infrastructure for mobile malware C2.
- Various nation-state actors — Multiple APT groups incorporated domain fronting into their toolkits before CDN providers shut it down.
The dual-use nature of domain fronting — simultaneously a lifeline for dissidents and a tool for espionage — made the ethical and policy discussions around its shutdown deeply contentious.
SNI Spoofing: A Cruder Approach
SNI spoofing is a simpler but less effective technique. Instead of routing through a CDN that legitimately handles both domains, the client simply lies: it puts a fake domain in the SNI field while connecting directly to the real server.
For example, a client might set the SNI to google.com while actually connecting to the IP address of blocked-site.com. A naive DPI system that only checks the SNI field and not the destination IP would be fooled.
Why SNI Spoofing Usually Fails
SNI spoofing has significant limitations compared to domain fronting:
- IP address mismatch — Sophisticated DPI systems check both the SNI and the destination IP. If the SNI says google.com but the IP address does not belong to Google's autonomous system, the forgery is obvious; a simple whois or IP-to-AS lookup reveals which AS originates any given address.
- Certificate mismatch — The destination server's TLS certificate will not match the spoofed SNI. If the client enforces certificate validation (which well-written software always does), the connection will fail. If the client disables validation to make the connection work, it becomes vulnerable to man-in-the-middle attacks.
- No CDN routing — Unlike domain fronting, there is no CDN in the middle performing legitimate routing based on the Host header. The traffic goes directly to the blocked server's IP.
Some DPI systems in countries with less sophisticated censorship infrastructure can be bypassed by SNI spoofing, especially those that only inspect the SNI field for keyword matching without cross-referencing DNS responses or performing IP-to-AS lookups. But this is an arms race: as censorship technology improves, crude SNI spoofing becomes less effective.
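The difference between a naive and a cross-referencing censor can be illustrated with a toy DPI model. All tables here are made-up stand-ins for a real BGP or whois lookup, and the function names are hypothetical:

```python
import ipaddress

# Toy IP-to-AS table standing in for a real BGP/whois lookup (made-up data).
PREFIX_TO_AS = {
    ipaddress.ip_network("142.250.0.0/15"): 15169,   # Google
    ipaddress.ip_network("203.0.113.0/24"): 64500,   # host of the blocked site
}
SNI_TO_EXPECTED_AS = {"google.com": 15169}
BLOCKLIST = {"blocked-site.com"}

def naive_dpi(sni: str, dst_ip: str) -> str:
    """Keyword-only DPI: trusts whatever the SNI claims."""
    return "BLOCK" if sni in BLOCKLIST else "ALLOW"

def strict_dpi(sni: str, dst_ip: str) -> str:
    """Cross-references the claimed SNI against the destination IP's origin AS."""
    if sni in BLOCKLIST:
        return "BLOCK"
    addr = ipaddress.ip_address(dst_ip)
    origin = next((asn for net, asn in PREFIX_TO_AS.items() if addr in net), None)
    expected = SNI_TO_EXPECTED_AS.get(sni)
    if expected is not None and origin != expected:
        return "BLOCK"          # SNI says Google, but the route says otherwise
    return "ALLOW"

# Spoofed SNI "google.com" sent toward the blocked server's real IP:
print(naive_dpi("google.com", "203.0.113.7"))   # ALLOW - fooled by the spoof
print(strict_dpi("google.com", "203.0.113.7"))  # BLOCK - mismatch caught
```

This is exactly the arms race described above: the spoof works only until the censor invests in IP-to-AS correlation.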
Geneva and Automated Evasion
Researchers at the University of Maryland developed Geneva, a genetic algorithm that automatically discovers packet-level censorship evasion strategies. Some of Geneva's discovered techniques involve fragmenting the ClientHello so that the SNI field is split across TCP segments, confusing DPI systems that cannot reassemble fragmented TLS handshakes. This is related to SNI spoofing but operates at a lower level — manipulating how the SNI is transmitted rather than what it contains.
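A toy illustration shows why fragmentation defeats per-segment keyword matching. The byte strings below are placeholders, not a real ClientHello, and the splitting point is chosen by hand rather than by Geneva's genetic search:

```python
def per_segment_dpi(segments, keyword=b"blocked-site.com"):
    """Naive DPI that scans each TCP segment in isolation,
    without reassembling the TLS handshake."""
    return any(keyword in seg for seg in segments)

client_hello = b"...TLS ClientHello...server_name=blocked-site.com..."

# Unfragmented: keyword-matching DPI fires.
print(per_segment_dpi([client_hello]))          # True

# Split mid-hostname, as Geneva-style strategies fragment the handshake:
cut = client_hello.index(b"blocked-site.com") + 7
fragments = [client_hello[:cut], client_hello[cut:]]
print(per_segment_dpi(fragments))               # False - no segment holds it whole
print(b"".join(fragments) == client_hello)      # True  - the server reassembles fine
```

The server's TCP stack reassembles the fragments transparently, so the handshake succeeds while the DPI box, which lacks full reassembly, sees nothing to match.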
Encrypted Client Hello (ECH): Solving the Problem Properly
While domain fronting was a clever hack and SNI spoofing was a crude workaround, Encrypted Client Hello (ECH) is the IETF's proper engineering solution to the SNI exposure problem. ECH encrypts the entire ClientHello message — including the SNI field — so that no on-path observer can see which hostname the client is requesting.
The Evolution: ESNI to ECH
ECH evolved from an earlier proposal called Encrypted SNI (ESNI), which encrypted only the SNI extension. ESNI was deployed experimentally by Cloudflare and Firefox in 2018-2019, but the IETF TLS working group determined that encrypting SNI alone was insufficient — other ClientHello extensions like ALPN (Application-Layer Protocol Negotiation) and the supported versions list also leak information. The solution was broadened to encrypt the entire ClientHello, and the protocol was renamed to Encrypted Client Hello (ECH).
How ECH Works
ECH uses a split ClientHello design. The client constructs two versions of the ClientHello:
- ClientHelloOuter — This is sent in plaintext (as in normal TLS). It contains a non-sensitive SNI, typically the name of the client-facing server (e.g., the CDN's edge hostname). It also contains the encrypted payload.
- ClientHelloInner — This contains the actual target hostname and is encrypted using the server's ECH public key. Only the server can decrypt it.
The key exchange works as follows:
- The server publishes its ECH configuration, including an HPKE (Hybrid Public Key Encryption) public key, in a DNS HTTPS resource record (or via the SVCB record type). This DNS record also specifies which algorithms to use and the public_name that should appear in the outer SNI.
- The client resolves the DNS record for the target domain (ideally over DNS over HTTPS, to prevent the DNS query itself from leaking the domain name) and retrieves the ECH configuration.
- The client generates the ClientHelloInner with the real SNI, encrypts it using the server's HPKE public key, and wraps it in the ClientHelloOuter with the innocuous public_name as the visible SNI.
- The server (or the client-facing server, if using a split architecture) decrypts the inner ClientHello, learns the real target hostname, and completes the TLS handshake with the appropriate certificate.
- If decryption fails (e.g., due to key rotation), the server can send a retry_configs response containing its current ECH keys, allowing the client to retry.
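The split-hello structure can be sketched as follows. HPKE is replaced here by a toy XOR keystream cipher (purely illustrative and not secure), and every name in the sketch is hypothetical:

```python
import hashlib
import secrets

def toy_seal(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in for HPKE seal: XOR with a SHA-256 keystream. NOT real crypto."""
    stream, counter = b"", 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

toy_open = toy_seal  # XOR with the same keystream is its own inverse

# Server side: a key and a public_name, published via the DNS HTTPS record.
server_key = secrets.token_bytes(32)
public_name = "cdn-edge.example.net"

# Client side: the real SNI goes only into the encrypted inner hello.
inner = b"sni=secret-site.example.com"
outer = {"sni": public_name, "ech_payload": toy_seal(server_key, inner)}

# An on-path observer sees only the outer structure:
print(outer["sni"])                               # cdn-edge.example.net
print(b"secret-site" in outer["ech_payload"])     # False

# The client-facing server decrypts and learns the real destination:
print(toy_open(server_key, outer["ech_payload"]))  # b'sni=secret-site.example.com'
```

Real ECH binds the inner and outer hellos together cryptographically and pads the payload, but the core shape is the same: the observer gets a public_name, the server gets the real SNI.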
The Role of DNS HTTPS Records
ECH relies on DNS HTTPS (or SVCB) resource records to distribute the server's ECH configuration. These record types, defined in RFC 9460, carry service binding parameters including the ECH public key, supported cipher suites, and the public_name for the outer SNI.
A DNS HTTPS record query for secret-site.com might return:
secret-site.com. IN HTTPS 1 . ech="..." alpn="h2,h3"
The ech parameter contains the base64-encoded ECHConfigList, which includes the HPKE public key and cipher suite identifiers. This is the information the client needs to encrypt its ClientHelloInner.
There is an obvious bootstrapping concern: if the DNS query for the ECH configuration is itself unencrypted, a censor could block or tamper with it. This is why ECH works best in combination with DNS over HTTPS (DoH) or DNS over TLS (DoT), which encrypts the DNS resolution itself. The full privacy chain requires both encrypted DNS and ECH: encrypted DNS hides which domain you are looking up, and ECH hides which domain you are connecting to.
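A client needs to pull the ech and alpn parameters out of such a record before it can build an encrypted ClientHello. The simplified parser below handles the presentation format of the example record above (a real implementation must follow RFC 9460's full grammar and decode the wire format); the base64 value is a placeholder:

```python
def parse_https_rdata(rdata: str) -> dict:
    """Parse the priority, target, and SvcParams of an HTTPS record in
    presentation format. Simplified: assumes no spaces inside values."""
    fields = rdata.split()
    priority, target = int(fields[0]), fields[1]
    params = {}
    for item in fields[2:]:
        key, _, value = item.partition("=")
        params[key] = value.strip('"')
    return {"priority": priority, "target": target, "params": params}

# Hypothetical record; the ech value is placeholder base64, not a real config.
record = 'secret-site.com. IN HTTPS 1 . ech="AEX+DQBB..." alpn="h2,h3"'
rdata = record.split("IN HTTPS ", 1)[1]
parsed = parse_https_rdata(rdata)

print(parsed["params"]["alpn"])   # h2,h3
print("ech" in parsed["params"])  # True - base64 ECHConfigList for the client
```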
ECH vs Domain Fronting: Key Differences
While ECH and domain fronting both hide the destination hostname from observers, they are fundamentally different:
- Legitimacy — ECH is a standards-track IETF protocol (draft-ietf-tls-esni). Domain fronting is an undocumented abuse of CDN routing behavior.
- Server cooperation — ECH requires the server to explicitly publish its ECH keys and support the protocol. Domain fronting works without the target server's knowledge or consent.
- SNI/Host consistency — In ECH, the outer SNI and the inner (real) SNI are both legitimate; the outer one identifies the client-facing server that will decrypt the inner one. In domain fronting, the SNI and Host header point to genuinely different destinations.
- Security model — ECH provides cryptographic privacy guarantees. Domain fronting provides security-through-obscurity that depends on CDN permissiveness.
The Tension: Privacy vs Enterprise Security
ECH has ignited a fierce debate between privacy advocates and enterprise security teams. The two camps have fundamentally incompatible goals.
The Privacy Case for ECH
Privacy advocates argue that the plaintext SNI is one of the last significant metadata leaks in HTTPS connections. Even with encrypted DNS, an observer can see which sites you visit by reading the SNI field. ECH closes this gap, providing meaningful privacy improvements for:
- Users under authoritarian regimes — ECH prevents SNI-based censorship, making it harder for states to selectively block websites without blocking entire CDN platforms.
- Journalists and activists — Connecting to sensitive resources (whistleblower platforms, dissident news sites) without revealing the destination to ISPs or state surveillance.
- General privacy — Preventing ISPs from building browsing profiles based on SNI monitoring, even when users use encrypted DNS.
The Enterprise Security Case Against ECH
Enterprise security teams see ECH as a threat to network visibility. Many organizations deploy TLS inspection (also called SSL/TLS interception or "break and inspect") to monitor encrypted traffic for threats. These systems work by acting as a trusted man-in-the-middle: the enterprise CA issues certificates dynamically, and the DPI appliance decrypts, inspects, and re-encrypts traffic.
ECH complicates this in several ways:
- SNI-based policy enforcement — Many firewalls and web proxies use the plaintext SNI to make access control decisions (e.g., blocking known malware domains, enforcing acceptable use policies). ECH hides the SNI, making these policies unenforceable without full TLS interception.
- DLP and compliance — Organizations in regulated industries (finance, healthcare) are required to inspect outbound traffic for data exfiltration. If the destination is hidden, compliance becomes more difficult.
- Threat detection — Security teams use SNI-based indicators of compromise (IoCs) to detect malware C2 traffic. ECH renders these IoCs invisible.
- Split-tunnel complexity — Organizations that route some traffic through inspection and some directly (based on the destination domain) lose the ability to make that decision.
Some enterprise security vendors have already announced that they will block ECH at their perimeter by dropping connections that carry the ECH extension. This is straightforward: the presence of the encrypted_client_hello extension in the ClientHelloOuter is visible, so a firewall can detect and block ECH connections even though it cannot read the encrypted contents. The resulting arms race mirrors the earlier dynamic around DoH, which many enterprises block for similar reasons.
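This detectability is easy to verify: the extension type codes in a ClientHello are themselves plaintext, so a middlebox only needs to walk the extension list and look for the ECH code point (0xfe0d in current drafts). The minimal parser below assumes a single handshake message in a single TLS record, which is how common stacks emit the ClientHello:

```python
import ssl

def clienthello_extensions(hello: bytes) -> list:
    """Walk a raw ClientHello and return its extension type codes.
    Minimal parser: one handshake message in one TLS record assumed."""
    p = 5 + 4            # skip TLS record header (5) + handshake header (4)
    p += 2 + 32          # legacy_version + random
    p += 1 + hello[p]    # session_id
    cs_len = int.from_bytes(hello[p:p + 2], "big"); p += 2 + cs_len
    p += 1 + hello[p]    # compression_methods
    ext_total = int.from_bytes(hello[p:p + 2], "big"); p += 2
    types, end = [], p + ext_total
    while p < end:
        ext_type = int.from_bytes(hello[p:p + 2], "big")
        ext_len = int.from_bytes(hello[p + 2:p + 4], "big")
        types.append(ext_type)
        p += 4 + ext_len
    return types

# Capture a ClientHello in memory (as in the earlier sketch) and inspect it.
ctx = ssl.create_default_context()
inc, out = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(inc, out, server_hostname="example.com")
try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass
exts = clienthello_extensions(out.read())

print(0 in exts)        # True - server_name (SNI) extension present
print(0xFE0D in exts)   # ECH code point; False unless the local stack sends ECH
```

A firewall that wants to block ECH simply drops any connection whose extension list contains that code point, without ever needing to decrypt anything.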
Russia and China's Response
Russia's Roskomnadzor began blocking ECH (and ESNI before it) in 2020, adding detection rules to its DPI infrastructure. When a connection carrying the encrypted_client_hello extension is detected, it is dropped. China's Great Firewall similarly detects and blocks connections with the ESNI/ECH extensions. These countries view ECH as a direct threat to their censorship infrastructure, which relies heavily on SNI-based filtering.
This creates a paradox: ECH is designed to prevent SNI-based censorship, but authoritarian regimes can simply block ECH itself, forcing connections back to plaintext SNI. The effectiveness of ECH as a censorship circumvention tool depends on its adoption being widespread enough that blocking ECH would cause unacceptable collateral damage — the same dynamic that made domain fronting effective.
Browser and Server Support for ECH
As of 2026, ECH support is actively being deployed but is not yet universal:
Browsers
- Firefox — Has supported ECH since Firefox 118 (September 2023), enabled by default when DNS over HTTPS is active. Firefox was the first major browser to ship ECH to its general user base.
- Chrome/Chromium — Enabled ECH support in Chrome 117 (September 2023). Chrome uses the DNS HTTPS record to discover ECH configurations automatically. ECH is active when Chrome is configured to use secure DNS.
- Edge — Supports ECH through the Chromium engine, following Chrome's timeline.
- Safari — Apple has been implementing ECH support in WebKit, with initial support appearing in Safari 18.
Servers and CDNs
- Cloudflare — The most aggressive ECH adopter. Cloudflare has published ECH keys for all domains on its platform, making ECH available to millions of websites without requiring individual site operators to configure anything. Cloudflare publishes ECH configurations via DNS HTTPS records automatically.
- Apache and Nginx — Server-side ECH support through OpenSSL and BoringSSL is progressing. OpenSSL 3.x has experimental ECH support.
- AWS CloudFront — Has not yet deployed ECH support for its CDN customers.
- Deno / other runtimes — Various web server frameworks and runtimes are adding ECH support as the underlying TLS libraries implement the protocol.
TLS Libraries
- BoringSSL — Google's fork of OpenSSL, used in Chrome and Cloudflare's infrastructure, has full ECH support.
- OpenSSL — ECH support has been merged and is available in recent versions, though not yet enabled by default in all distributions.
- wolfSSL — Has implemented ECH support.
- rustls — The Rust TLS library has experimental ECH support.
The BGP and Network-Level Perspective
From a BGP and network routing perspective, domain fronting, SNI spoofing, and ECH all share a common thread: they exploit the tension between network-level identity (IP addresses and AS numbers) and application-level identity (domain names).
BGP routes traffic based on IP prefixes and autonomous systems. It has no concept of domain names. When a CDN like Cloudflare (AS13335) announces the prefix 104.16.0.0/12, all traffic to millions of different websites flows to the same set of edge servers. The differentiation between cloudflare.com and any other Cloudflare-hosted site happens at the application layer, not the network layer.
This architectural reality is what makes all three techniques possible:
- Domain fronting exploits the fact that many domains share IP addresses on CDNs, and the routing from CDN edge to backend happens based on the Host header, not the IP address.
- SNI spoofing exploits the fact that DPI systems operate between the network and application layers, and may not correlate the SNI with the destination IP's actual BGP route.
- ECH leverages the existing multi-tenant CDN architecture: the outer SNI identifies the CDN edge (which is the correct network destination), while the inner SNI identifies the specific origin behind the CDN.
You can observe this multi-tenant architecture directly. Look up any Cloudflare IP and you will see a single prefix covering thousands of domains. The AS path to reach all of those domains is identical because, at the BGP level, they are the same destination.
The Full Privacy Stack
SNI encryption does not exist in isolation. True connection privacy requires encrypting metadata at every layer:
- DNS resolution — Use DNS over HTTPS (DoH) or DNS over TLS (DoT) to prevent the DNS query from revealing the target domain.
- TLS handshake — Use ECH to encrypt the SNI and other ClientHello fields.
- IP-level identity — The destination IP address is still visible in every packet. If the IP is uniquely associated with a single domain (not hosted on a shared CDN), the destination is revealed regardless of DNS or SNI encryption. CDN-hosted sites benefit because the IP maps to the CDN, not the individual site.
- Traffic analysis — Even with all metadata encrypted, patterns like connection timing, packet sizes, and traffic volume can reveal the destination through fingerprinting. Defenses like padding (built into TLS 1.3 and HTTP/2) help, but do not eliminate this vector.
For users seeking the strongest possible privacy, Tor remains the most robust solution, as it encrypts the destination at every layer and routes traffic through multiple independent relays. ECH is not a replacement for Tor; it is a baseline improvement that benefits all web users without requiring special software.
What Comes Next
The trajectory is clear. The IETF is standardizing ECH, browser vendors are shipping it, and the largest CDN provider (Cloudflare) has enabled it by default for all its customers. The practical effect is that SNI-based censorship and surveillance are becoming progressively less effective for traffic routed through ECH-enabled infrastructure.
However, the countermeasures are equally clear. Countries that censor the internet can block ECH connections entirely, reverting clients to plaintext SNI. Enterprise security vendors are developing alternatives to SNI-based inspection, including client-side agents, certificate-based device identity, and zero-trust architectures that do not depend on network-level inspection.
Domain fronting, despite being shut down by CDN providers, demonstrated a fundamental principle: when many domains share the same network infrastructure, it becomes difficult to selectively block individual destinations without collateral damage. ECH enshrines this principle in a standards-track protocol, transforming what was once a hack into an engineered privacy guarantee.
You can explore the network infrastructure that makes these techniques possible by examining the major CDN providers' prefixes and AS announcements with public BGP looking glass and route lookup tools.