How QUIC and HTTP/3 Work: The End of TCP for the Web
QUIC is a transport protocol that replaces TCP for web traffic. Originally developed by Google and standardized by the IETF as RFC 9000 in 2021, QUIC runs over UDP and integrates TLS 1.3 encryption directly into the transport layer. HTTP/3 (RFC 9114) is the version of HTTP designed to run exclusively on top of QUIC, replacing the TCP+TLS stack that HTTP/1.1 and HTTP/2 depended on. Together, QUIC and HTTP/3 address fundamental performance problems that have plagued web transport for decades — head-of-line blocking, slow connection establishment, and fragile connections that break when users switch networks.
If you look up a major website in the looking glass — say Cloudflare (AS13335) or Google (AS15169) — the servers behind those AS paths are almost certainly serving HTTP/3 today. The protocol has moved from experimental to dominant: as of 2025, over 30% of all web traffic uses HTTP/3, and that number continues to climb.
The Problem with TCP: Why a New Transport Protocol?
TCP (Transmission Control Protocol) has been the backbone of reliable internet communication since 1981 (RFC 793). It provides ordered, reliable byte-stream delivery — if a packet is lost, TCP retransmits it and holds all subsequent data until the gap is filled. For a single stream of data, this is exactly right. But modern web pages are not a single stream.
A typical web page loads dozens of resources in parallel: HTML, CSS, JavaScript files, images, fonts, API responses. HTTP/2 introduced multiplexing, allowing multiple logical streams to share a single TCP connection. This was a major improvement over HTTP/1.1, which required multiple TCP connections for parallelism. But HTTP/2's multiplexing has a fatal flaw: it runs over TCP, and TCP only sees a single byte stream.
TCP Head-of-Line Blocking
When TCP detects a lost packet, it stops delivering all data to the application until the lost packet is retransmitted and received. If you have 10 HTTP/2 streams multiplexed over one TCP connection, and a single packet belonging to stream 3 is lost, all 10 streams are blocked. Streams 1, 2, and 4 through 10 have their data sitting in the kernel's receive buffer, fully received and ready — but TCP will not deliver any of it until stream 3's missing packet arrives.
This is head-of-line (HOL) blocking, and it gets worse on lossy networks. On a mobile connection with 2% packet loss, HTTP/2 over TCP can actually perform worse than HTTP/1.1 with six parallel connections, because HTTP/1.1's six independent TCP connections mean a loss on one connection only blocks that connection's resources. Research by Google showed that at loss rates above 1%, HTTP/2's single-connection multiplexing became a net negative compared to HTTP/1.1's multiple connections.
This is not a bug in HTTP/2's design — it is a fundamental limitation of TCP. TCP guarantees ordered delivery of a byte stream, and there is no way to tell TCP that bytes 50,000 through 51,000 belong to a different logical stream than bytes 48,000 through 49,000. The only real fix is to move below TCP and build a transport protocol that understands multiplexing natively.
The Connection Setup Tax
Before any HTTP data can flow over TCP+TLS, two sequential handshakes must complete. First, TCP's three-way handshake (SYN, SYN-ACK, ACK) takes one round trip. Then, TLS 1.3 adds another round trip for its handshake (Client Hello, Server Hello + encrypted extensions, Client Finished). That is a minimum of 2 round trips (2-RTT) before the first byte of application data — and with TLS 1.2, it was 3-RTT.
On a 50ms latency connection, 2-RTT means 100ms of dead time. On a transatlantic connection with 150ms RTT, it is 300ms. On a mobile connection routed through a CDN edge node in another city, it can easily exceed 200ms. For every new connection — and web browsers open many — this tax is paid in full.
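The handshake tax is simple arithmetic. Here is a small sketch comparing time-to-first-byte across the stacks discussed in this article (RTT values are illustrative, and real connections add processing time on top of pure round trips):

```python
# Handshake round trips before the first byte of application data can flow.
# Assumes a fresh connection path with no packet loss.

def setup_delay_ms(rtt_ms: float, handshake_rtts: int) -> float:
    """Dead time spent on handshakes before HTTP data starts flowing."""
    return rtt_ms * handshake_rtts

STACKS = {
    "TCP + TLS 1.2": 3,   # TCP 3-way handshake + 2-RTT TLS 1.2
    "TCP + TLS 1.3": 2,   # TCP 3-way handshake + 1-RTT TLS 1.3
    "QUIC (initial)": 1,  # combined transport + crypto handshake
    "QUIC (0-RTT)":  0,   # resumed connection, data in the first packet
}

for rtt in (50, 150):
    for stack, rtts in STACKS.items():
        print(f"{rtt}ms RTT, {stack}: {setup_delay_ms(rtt, rtts):.0f}ms of setup")
```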
Connection Fragility
A TCP connection is identified by a 4-tuple: source IP, source port, destination IP, destination port. If any of these change, the connection is broken. When a mobile user walks from Wi-Fi to cellular, their source IP changes. When a device moves between Wi-Fi access points, the IP may change. When a NAT device reassigns port mappings (common on cellular networks), the source port changes. In every case, the TCP connection dies, and the application must establish a new connection from scratch — paying the full handshake cost again, losing any in-flight data, and often causing a visible interruption.
QUIC: A New Transport Protocol Built on UDP
QUIC solves all three problems — head-of-line blocking, connection setup latency, and connection fragility — by building a new transport protocol on top of UDP. UDP (User Datagram Protocol) provides only basic packet delivery: no ordering, no reliability, no congestion control. QUIC implements all of these features itself, in userspace, giving it complete control over the transport behavior.
The choice of UDP is pragmatic, not ideological. Deploying a new IP protocol number (TCP is protocol 6, UDP is protocol 17) is essentially impossible on today's internet — middleboxes, firewalls, and NAT devices would drop packets with unknown protocol numbers. UDP, however, passes through virtually all network paths because it is already widely used for DNS, video streaming, and gaming. QUIC piggybacks on this existing deployability.
QUIC's Multiplexed Streams: Solving Head-of-Line Blocking
The defining feature of QUIC is native stream multiplexing. Unlike TCP, which sees only a single byte stream, QUIC understands that data belongs to independent streams. Each QUIC stream is an independent, ordered sequence of bytes. Packet loss on one stream has zero effect on other streams — they continue to deliver data to the application immediately.
Internally, QUIC assigns each chunk of data to a specific stream ID. When a packet is lost, only the stream(s) whose data was in that packet are paused for retransmission. All other streams continue flowing. This is the fundamental architectural difference from TCP+HTTP/2, and it eliminates HOL blocking at the transport layer entirely.
QUIC supports two stream types: bidirectional streams (both sides can send data) and unidirectional streams (one side sends, the other receives). Flow control operates both per-stream and at the connection level (capping the total amount of buffered data across all streams). Stream IDs encode which side initiated the stream and whether it is bidirectional or unidirectional, using the two least significant bits:
- 0x0 — Client-initiated, bidirectional
- 0x1 — Server-initiated, bidirectional
- 0x2 — Client-initiated, unidirectional
- 0x3 — Server-initiated, unidirectional
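The two-bit encoding above can be decoded directly. A minimal sketch (per RFC 9000 §2.1):

```python
def classify_stream(stream_id: int) -> tuple[str, str]:
    """Decode the two least-significant bits of a QUIC stream ID."""
    initiator = "client" if stream_id & 0x1 == 0 else "server"
    direction = "bidirectional" if stream_id & 0x2 == 0 else "unidirectional"
    return initiator, direction

# Stream 0 is the first client-initiated bidirectional stream; successive
# streams of the same type increment the ID by 4 (0, 4, 8, ...).
print(classify_stream(0))   # ('client', 'bidirectional')
print(classify_stream(3))   # ('server', 'unidirectional')
```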
Streams are lightweight — creating a new QUIC stream costs essentially nothing, unlike opening a new TCP connection. An HTTP/3 client opens a new bidirectional stream for each request, which is analogous to HTTP/2's stream multiplexing but without the TCP HOL blocking penalty.
The QUIC Handshake: Merging Transport and Crypto
QUIC merges the transport handshake and the cryptographic handshake into a single operation. Instead of TCP's 1-RTT three-way handshake followed by TLS's 1-RTT handshake, QUIC completes both in a single round trip for initial connections, and zero round trips for resumed connections.
1-RTT Initial Handshake
When a client connects to a QUIC server for the first time, the handshake proceeds as follows:
- Client Initial — The client sends a QUIC Initial packet containing a TLS 1.3 Client Hello. This packet includes the client's supported cipher suites, key shares (X25519 or P-256), the server name (SNI), and ALPN (advertising "h3" for HTTP/3). The packet is protected with keys derived from the client's Destination Connection ID and a published, version-specific salt (this protects against accidental corruption but provides no confidentiality, since any observer can derive the same keys).
- Server Initial + Handshake — The server responds with its own Initial packet (containing the TLS Server Hello) and immediately follows with Handshake packets containing the encrypted server certificate, certificate verify, and Finished messages. At this point, the server has everything it needs to derive the 1-RTT application keys.
- Client Handshake + 1-RTT data — The client verifies the server's certificate, sends its Handshake Finished, and immediately begins sending 1-RTT application data (e.g., the HTTP/3 request). The entire process completes in 1 round trip.
Compared to TCP+TLS 1.3 (2-RTT), QUIC saves one full round trip on every new connection. On a 100ms RTT path, this means data starts flowing 100ms sooner. Across billions of connections per day, this savings is significant.
0-RTT Resumption
When a client has previously connected to a server, it caches the server's transport parameters and a TLS resumption ticket (a pre-shared key). On reconnection, the client can send encrypted application data — typically an HTTP/3 request — in its very first packet, before the server has responded at all. This is 0-RTT: the client sends data immediately, with no handshake latency.
0-RTT is powerful but carries an important caveat: replayability. Because the server has not yet contributed any randomness to the key derivation, an attacker who captures a 0-RTT Client Initial can replay it to the server, and the server will process the replayed request as if it were legitimate. For this reason:
- 0-RTT should only carry requests that are safe to replay (GET, HEAD), never state-changing methods such as POST, PUT, or DELETE
- Servers must implement replay detection (e.g., tracking previously seen 0-RTT tickets)
- 0-RTT data does not have forward secrecy until the handshake completes and keys are rotated
Most QUIC implementations and CDN providers enable 0-RTT by default for safe request methods, giving repeat visitors essentially instant connection establishment.
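Enforcing the first rule above needs only a method check. A minimal sketch, assuming the set of replay-safe methods is restricted to the two the text names:

```python
# Only requests with no server-side effects should ride in 0-RTT early data,
# because a captured 0-RTT flight can be replayed against the server.
SAFE_FOR_0RTT = {"GET", "HEAD"}

def allow_in_0rtt(method: str) -> bool:
    """Decide whether a request may be sent as 0-RTT early data."""
    return method.upper() in SAFE_FOR_0RTT

print(allow_in_0rtt("GET"))    # True
print(allow_in_0rtt("POST"))   # False — a replay would repeat a state change
```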
Built-in TLS 1.3: Encryption Is Not Optional
Unlike TCP, where TLS is a separate layer that can be omitted, QUIC mandates encryption. Every QUIC packet is encrypted: the very first Initial packets use keys that any observer can derive (so they are integrity-protected rather than confidential), and everything after that is protected by keys negotiated via TLS 1.3. There is no unencrypted QUIC — the protocol simply does not support it.
This design decision has several consequences:
- Privacy — Not only is the payload encrypted, but most of the QUIC header is also encrypted (or at least integrity-protected). Middleboxes cannot inspect QUIC traffic, cannot modify it, and cannot even reliably identify individual QUIC connections by examining the wire format.
- Middlebox ossification resistance — TCP has suffered from decades of middlebox interference. Firewalls, NATs, and load balancers inspect TCP headers and make assumptions about the protocol's behavior. This has made it effectively impossible to deploy TCP extensions — new TCP features get dropped or mangled by middleboxes that do not recognize them. QUIC encrypts its transport headers precisely to prevent this ossification.
- Reduced attack surface — Because encryption is mandatory, there is no fallback to cleartext. Downgrade attacks that force a connection to unencrypted mode are impossible.
- Cipher suite simplicity — QUIC uses exactly the same cipher suites as TLS 1.3: AES-128-GCM, AES-256-GCM, and ChaCha20-Poly1305. All provide AEAD (Authenticated Encryption with Associated Data), ensuring both confidentiality and integrity.
QUIC uses different encryption keys at different phases of the connection: Initial keys (derived from the connection ID, essentially public), Handshake keys (derived after key exchange, protecting handshake messages), and 1-RTT keys (the session keys used for application data). Keys can also be updated mid-connection using the Key Update mechanism, providing ongoing forward secrecy.
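The "essentially public" nature of the Initial keys is easy to demonstrate with the standard library alone. A sketch of the derivation (HKDF over SHA-256, as specified for QUIC v1), using the sample Destination Connection ID from RFC 9001's appendix:

```python
import hashlib, hmac, struct

# QUIC v1's published initial salt (RFC 9001) — a fixed constant, so anyone
# who sees the first packet's Destination Connection ID can derive these keys.
INITIAL_SALT_V1 = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: str, length: int) -> bytes:
    # TLS 1.3 HkdfLabel structure (RFC 8446 §7.1) with an empty context;
    # a single HMAC block suffices because length <= the SHA-256 output size.
    full_label = b"tls13 " + label.encode()
    info = struct.pack(">H", length) + bytes([len(full_label)]) + full_label + b"\x00"
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

dcid = bytes.fromhex("8394c8f03e515708")  # sample DCID from RFC 9001 Appendix A
initial_secret = hkdf_extract(INITIAL_SALT_V1, dcid)
client_initial = hkdf_expand_label(initial_secret, "client in", 32)
print(client_initial.hex())
```

Running this reproduces the client Initial secret from RFC 9001's test vectors, confirming that no secret input is involved at this stage.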
Connection IDs and Migration
TCP identifies connections by a 4-tuple: (source IP, source port, destination IP, destination port). QUIC uses Connection IDs (CIDs) — opaque tokens embedded in the QUIC header that identify the connection independently of the network path. When a client's IP address changes (moving from Wi-Fi to cellular, roaming between access points), it can continue the QUIC connection seamlessly by sending packets from the new IP with the same Connection ID.
The details of connection migration involve careful cryptographic validation:
- Each endpoint provides the other with a set of connection IDs during the handshake and can issue new ones at any time via NEW_CONNECTION_ID frames.
- When a client's network path changes, it sends a PATH_CHALLENGE frame from the new address. The server responds with a PATH_RESPONSE, proving that the new path is valid (this prevents off-path attackers from hijacking connections).
- After path validation, the client begins using a new Connection ID for the new path. This prevents a network observer who sees traffic on both the old and new paths from linking them — improving privacy during migration.
Connection migration is particularly valuable for mobile devices. A user on a voice call, video stream, or large file download can walk from their home Wi-Fi to cellular without the application experiencing any interruption. With TCP, this transition kills the connection and forces a full reconnect.
QUIC Packet Structure and Loss Recovery
Every QUIC packet has a header and a payload. The header contains a Connection ID and a packet number. The payload contains one or more frames — QUIC's fundamental unit of communication. Frame types include:
- STREAM — carries application data for a specific stream, identified by stream ID and offset
- ACK — acknowledges received packets using ranges of packet numbers, rather than the single cumulative acknowledgment number of base TCP
- CRYPTO — carries TLS handshake messages
- NEW_CONNECTION_ID — provides new connection IDs for migration
- PATH_CHALLENGE / PATH_RESPONSE — validates new network paths
- MAX_DATA / MAX_STREAM_DATA — flow control
- CONNECTION_CLOSE — terminates the connection
- PING — keepalive
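Nearly every field in these frames — types, stream IDs, offsets, lengths — is written with QUIC's variable-length integer encoding (RFC 9000 §16): the top two bits of the first byte select a 1-, 2-, 4-, or 8-byte encoding. A minimal decoder, checked against the worked examples in RFC 9000's appendix:

```python
def decode_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    """Return (value, bytes_consumed) for the QUIC varint at buf[pos]."""
    first = buf[pos]
    length = 1 << (first >> 6)       # prefix 00->1, 01->2, 10->4, 11->8 bytes
    value = first & 0x3F             # remaining 6 bits of the first byte
    for b in buf[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, length

# Examples from RFC 9000 Appendix A:
print(decode_varint(b"\x25"))                # (37, 1)
print(decode_varint(b"\x7b\xbd"))            # (15293, 2)
print(decode_varint(b"\x9d\x7f\x3e\x7d"))    # (494878333, 4)
```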
QUIC's packet numbers are monotonically increasing and never reused, even for retransmissions. When QUIC retransmits lost data, it sends it in a new packet with a new packet number. This is fundamentally different from TCP, where retransmitted segments carry the same sequence number as the original, creating the retransmission ambiguity problem (was the ACK for the original or the retransmission?). QUIC's approach allows precise RTT measurement even during loss recovery.
Loss detection in QUIC uses two mechanisms: a packet threshold (a packet is considered lost once a packet sent at least a few packet numbers later — three, in the RFC 9002 recommendation — has been acknowledged, similar to TCP's fast retransmit) and a time threshold (a packet is lost if too much time has elapsed since a later packet was acknowledged). QUIC also implements congestion control — typically Cubic (matching modern TCP) or BBR — but because QUIC runs in userspace, the congestion control algorithm can be updated without waiting for OS kernel updates.
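The packet-threshold rule fits in a few lines. A sketch with the time-threshold check and probe timeouts omitted, using the threshold value recommended in RFC 9002:

```python
K_PACKET_THRESHOLD = 3  # RFC 9002 §6.1.1 recommended reordering threshold

def detect_lost(in_flight: set[int], largest_acked: int) -> set[int]:
    """Packets trailing the largest acknowledged number by >= 3 are lost."""
    return {pn for pn in in_flight if largest_acked - pn >= K_PACKET_THRESHOLD}

# Packet 6 has been acked; 2, 5, and 7 are still unacknowledged.
print(detect_lost({2, 5, 7}, largest_acked=6))   # {2} — only 2 trails far enough
```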
HTTP/3: HTTP Over QUIC
HTTP/3 (RFC 9114) is the mapping of HTTP semantics onto QUIC transport. It is conceptually similar to HTTP/2 but redesigned to take advantage of QUIC's native stream multiplexing. The core HTTP semantics — methods (GET, POST), status codes (200, 404), headers — are identical. What changes is how they are framed and transmitted.
HTTP/3 Stream Mapping
In HTTP/3, each HTTP request-response pair uses its own QUIC bidirectional stream. The client opens a new stream, sends a HEADERS frame followed by optional DATA frames, and the server responds on the same stream. Because each request is on an independent QUIC stream, loss or delay on one request has no effect on others.
HTTP/3 also uses several unidirectional streams for control purposes:
- Control stream — carries HTTP/3 settings and configuration (SETTINGS, GOAWAY frames)
- QPACK encoder stream — sends header table updates for header compression
- QPACK decoder stream — acknowledges header table updates
Both the client and server open one of each of these unidirectional streams at the start of the connection.
HTTP/3 Frame Types
HTTP/3 defines its own framing layer on top of QUIC streams (separate from QUIC's packet-level frames). Each HTTP/3 frame has a type and length. The key frame types are:
- HEADERS — encoded request or response headers (using QPACK)
- DATA — request or response body payload
- SETTINGS — connection-level configuration (max header list size, QPACK table capacity)
- GOAWAY — graceful connection shutdown, indicating the last stream ID the server will process
- CANCEL_PUSH — cancels a previously promised server push
Note that HTTP/3 does not have a PRIORITY frame or WINDOW_UPDATE frame — priority signaling uses the Extensible Priorities scheme (RFC 9218), and flow control is handled entirely by QUIC at the transport layer.
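An HTTP/3 frame on the wire is just a varint type, a varint payload length, and the payload (RFC 9114 §7.1). A sketch of the encoder side, using the QUIC varint scheme (1, 2, 4, or 8 bytes selected by the two high bits) and the frame type codes from the list above:

```python
def encode_varint(v: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000 §16)."""
    if v < 1 << 6:
        return bytes([v])
    if v < 1 << 14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 1 << 30:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | (0xC0 << 56)).to_bytes(8, "big")

DATA, HEADERS, SETTINGS = 0x0, 0x1, 0x4   # HTTP/3 frame type codes

def h3_frame(frame_type: int, payload: bytes) -> bytes:
    """Serialize one HTTP/3 frame: type varint + length varint + payload."""
    return encode_varint(frame_type) + encode_varint(len(payload)) + payload

frame = h3_frame(DATA, b"hello")
print(frame.hex())   # 000568656c6c6f — type 0x0, length 5, then "hello"
```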
QPACK: Header Compression for HTTP/3
HTTP/2 used HPACK for header compression, which relies on both sides maintaining an identical, ordered header table. Because TCP guarantees ordered delivery, HPACK updates are always processed in the correct sequence. QUIC's streams, however, are delivered independently and potentially out of order — a HEADERS frame on stream 7 might arrive before one on stream 5. If both reference the same HPACK table state, the receiver cannot decode stream 7's headers until stream 5's have been processed. This would reintroduce head-of-line blocking at the header compression layer.
QPACK (RFC 9204) solves this by separating header table updates from header references. Table updates are sent on a dedicated unidirectional encoder stream, and headers on request streams either reference already-acknowledged table entries (safe, no blocking) or use literal encoding (slightly less efficient but never blocks). The receiver sends acknowledgments on the decoder stream, and the encoder only references table entries that it knows the decoder has processed.
This design means QPACK can achieve compression ratios close to HPACK's without introducing any inter-stream dependencies. In practice, the first few requests on a connection may use slightly more bytes for headers (before the dynamic table is populated), but steady-state compression is comparable to HTTP/2.
Connection Establishment: Alt-Svc and HTTPS DNS Records
A browser cannot simply send a QUIC packet to a server — it first needs to discover that the server supports HTTP/3. There are two primary discovery mechanisms:
Alt-Svc header — When a browser connects to a server over HTTP/2 (on TCP), the server includes an Alt-Svc response header advertising HTTP/3 support:
Alt-Svc: h3=":443"; ma=86400
This tells the browser: "I support HTTP/3 (h3) on UDP port 443, and you can cache this for 86400 seconds." The browser will attempt a QUIC connection for subsequent requests and, if it succeeds, switch to HTTP/3. If the QUIC connection fails (perhaps UDP is blocked by a firewall), the browser falls back to HTTP/2 over TCP. This means the first visit to a site always uses TCP — HTTP/3 kicks in on the second connection.
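A client deciding whether to attempt QUIC needs to pull the protocol, authority, and lifetime out of that header. A deliberately minimal parser, assuming the simple single-alternative form shown above (a production parser must also handle multiple comma-separated alternatives, the "clear" token, and full quoting rules from RFC 7838):

```python
import re

def parse_alt_svc(value: str) -> dict[str, tuple[str, int]]:
    """Map ALPN protocol id -> (alternative authority, max-age in seconds)."""
    result = {}
    for m in re.finditer(r'([\w-]+)="([^"]*)"(?:;\s*ma=(\d+))?', value):
        proto, authority, ma = m.groups()
        result[proto] = (authority, int(ma) if ma else 86400)  # RFC 7838 default: 24h
    return result

print(parse_alt_svc('h3=":443"; ma=86400'))   # {'h3': (':443', 86400)}
```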
HTTPS DNS records (SVCB/HTTPS, RFC 9460) allow the server to advertise HTTP/3 support in DNS itself. A browser can query for the HTTPS record type and learn that the server supports h3 before making any TCP connection at all:
example.com. 300 IN HTTPS 1 . alpn="h3,h2" ipv4hint=93.184.216.34
This enables first-visit HTTP/3 — the browser can attempt QUIC from the very first connection. Major browsers and CDN providers now support HTTPS DNS records.
Congestion Control and Flow Control
QUIC implements its own congestion control, independent of the OS kernel. The initial QUIC specification does not mandate a specific algorithm — it describes a sender-side algorithm similar to TCP's NewReno as a baseline, but implementations are free to use any congestion control scheme.
In practice, most QUIC deployments use one of:
- Cubic — the same algorithm used by default in Linux TCP, based on a cubic function for window growth
- BBR (Bottleneck Bandwidth and Round-trip propagation time) — a model-based algorithm developed by Google that estimates the bottleneck bandwidth and minimum RTT, often achieving better throughput on high-BDP (bandwidth-delay product) paths
- Reno/NewReno — the classic AIMD (additive increase, multiplicative decrease) algorithm, used mainly as a reference implementation
Because QUIC's congestion control runs in userspace (as part of the application or library, not the OS kernel), it can be updated rapidly. When Google deploys a new congestion control variant to Chrome, it takes effect immediately for QUIC connections — no kernel patches, no OS upgrades, no waiting for ISPs to update routers. This agility has allowed QUIC to iterate on congestion control far faster than TCP.
QUIC's flow control operates at two levels: per-stream (limiting how much data can be buffered on a single stream) and connection-level (limiting total data across all streams). The receiver advertises its buffer capacity through MAX_STREAM_DATA and MAX_DATA frames, and the sender must respect these limits. This two-level system prevents a single stream from consuming the entire connection's buffer capacity.
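The two-level check described above can be sketched as a sender-side bookkeeping class (a real sender also raises these limits as the peer sends new MAX_DATA / MAX_STREAM_DATA frames; that update path is omitted here):

```python
class FlowControl:
    """Sender-side view of QUIC's two-level flow control limits."""

    def __init__(self, max_data: int, max_stream_data: int):
        self.max_data = max_data                  # connection-level limit (MAX_DATA)
        self.max_stream_data = max_stream_data    # per-stream limit (MAX_STREAM_DATA)
        self.conn_sent = 0
        self.stream_sent: dict[int, int] = {}

    def can_send(self, stream_id: int, nbytes: int) -> bool:
        # Both the stream's limit and the connection-wide limit must hold.
        stream_ok = self.stream_sent.get(stream_id, 0) + nbytes <= self.max_stream_data
        conn_ok = self.conn_sent + nbytes <= self.max_data
        return stream_ok and conn_ok

    def record_send(self, stream_id: int, nbytes: int) -> None:
        assert self.can_send(stream_id, nbytes)
        self.stream_sent[stream_id] = self.stream_sent.get(stream_id, 0) + nbytes
        self.conn_sent += nbytes

fc = FlowControl(max_data=10_000, max_stream_data=6_000)
fc.record_send(0, 6_000)        # stream 0 exhausts its per-stream credit
print(fc.can_send(0, 1))        # False — stream limit reached
print(fc.can_send(4, 5_000))    # False — would exceed the connection limit
print(fc.can_send(4, 4_000))    # True  — fits under both limits
```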
QUIC and the Network: NATs, Firewalls, and Middleboxes
Running over UDP means QUIC faces different network challenges than TCP:
UDP blocking and rate limiting — Some corporate firewalls block non-DNS UDP traffic. Some ISPs rate-limit UDP. In these environments, QUIC connections fail and browsers fall back to HTTP/2 over TCP. This is why HTTP/3 discovery (Alt-Svc) includes a graceful fallback mechanism — if the QUIC connection cannot be established within a few hundred milliseconds, the browser continues using TCP.
NAT timeouts — NAT devices maintain state for UDP flows, but UDP NAT bindings typically have shorter timeouts than TCP (often 30-120 seconds versus minutes for TCP). QUIC implementations send periodic keepalive packets (PING frames) to prevent NAT bindings from expiring. The exact interval is implementation-dependent but typically ranges from 15 to 30 seconds.
Middlebox ossification — A key motivation for QUIC's encrypted headers is preventing middlebox ossification. When middleboxes can inspect transport headers, they inevitably start making assumptions about the protocol's behavior and blocking packets that do not match their expectations. TCP extension deployment has been plagued by this — features like TCP Fast Open see limited deployment because middleboxes interfere with them. By encrypting QUIC's transport parameters, the protocol ensures that only the endpoints can interpret the connection state, making it safe to evolve the protocol in the future.
ECN (Explicit Congestion Notification) — QUIC supports ECN feedback via ACK frames, allowing routers to signal congestion without dropping packets. This can improve performance on congested links, but requires ECN support along the network path.
Deployment Status and Real-World Adoption
QUIC and HTTP/3 have moved from experimental to production-ready across the ecosystem:
CDN and server support:
- Cloudflare (AS13335) — enabled HTTP/3 on all plans since 2020, one of the earliest large-scale deployments
- Google (AS15169) — QUIC originated at Google; YouTube, Search, Gmail, and all Google services use QUIC
- Amazon CloudFront (AS16509) — supports HTTP/3 on all distributions
- Fastly (AS54113) — HTTP/3 support via their H2O-based stack
- Akamai — HTTP/3 support across their platform
- nginx — HTTP/3 support since version 1.25.0 (2023)
Browser support:
- Chrome/Chromium — full HTTP/3 since Chrome 87 (2020), using Google's own QUIC implementation
- Firefox — full HTTP/3 since Firefox 88 (2021), using the neqo library
- Safari — HTTP/3 beginning with Safari 14 (2020) on macOS Big Sur and iOS 14, initially as an experimental feature
- Edge — full support via Chromium
- curl — HTTP/3 support via multiple QUIC backends (ngtcp2, quiche, msh3)
QUIC libraries:
- quiche (Cloudflare) — Rust implementation used in Cloudflare's edge
- ngtcp2 — C implementation used by curl and nginx
- MsQuic (Microsoft) — C implementation used in Windows, .NET, and IIS
- s2n-quic (Amazon) — Rust implementation used in AWS services
- Quinn — Pure Rust async QUIC implementation
- LSQUIC (LiteSpeed) — C implementation used in LiteSpeed Web Server
As of 2025, HTTP/3 accounts for roughly 30% of global web traffic. The percentage is higher for mobile traffic (where QUIC's connection migration and reduced handshake latency are most beneficial) and for traffic served by Google and Cloudflare (which aggressively prefer QUIC).
QUIC vs TCP: Performance in Practice
The theoretical advantages of QUIC translate to measurable real-world improvements, but the magnitude depends on network conditions:
- Low-latency, reliable networks (e.g., wired connections with <1% loss) — QUIC shows modest improvements, mainly from the 1-RTT handshake savings. Steady-state throughput is similar to TCP.
- High-latency networks (intercontinental links, satellite) — the 1-RTT handshake saves 100-300ms per new connection. 0-RTT resumption eliminates connection setup latency entirely for repeat visits.
- Lossy networks (mobile, congested Wi-Fi) — this is where QUIC shines most. Eliminating HOL blocking means that a 2% packet loss rate affects individual streams rather than the entire connection. Google reported that QUIC reduced search latency by 3.6% on desktop and 8% on mobile, with even larger gains on the slowest 1% of connections.
- Network transitions (Wi-Fi to cellular) — with TCP, a network change causes a full connection reset (several hundred milliseconds to re-establish). With QUIC connection migration, the transition is seamless — typically a single RTT for path validation.
One area where QUIC can underperform is CPU usage. QUIC's userspace implementation means it does not benefit from kernel-level optimizations like TCP offloading, GRO (Generic Receive Offload), and hardware checksum offloading that modern NICs provide for TCP. UDP processing in the kernel is also less optimized than TCP. QUIC servers typically use more CPU per connection than equivalent TCP servers. However, techniques like GSO (Generic Segmentation Offload) for UDP, io_uring for batch syscalls, and the ongoing work on UDP hardware offload are closing this gap.
Beyond the Web: QUIC for Other Protocols
While HTTP/3 is the primary application of QUIC, the protocol is designed as a general-purpose transport. Other protocols building on QUIC include:
- DNS over QUIC (DoQ, RFC 9250) — encrypts DNS queries using QUIC, offering similar privacy benefits to DNS over HTTPS but with lower overhead since QUIC avoids the HTTP framing layer
- WebTransport — a browser API that provides reliable streams and unreliable datagrams over QUIC, enabling low-latency bidirectional communication for applications like gaming and live collaboration
- MASQUE — a framework for proxying UDP traffic over QUIC, used for VPN-like functionality
- SMB over QUIC — Microsoft's implementation for remote file access, replacing TCP-based SMB tunneled over VPNs
Security Considerations
QUIC's design addresses several security concerns that have affected TCP:
- Amplification attacks — QUIC requires the server to validate the client's address before sending large responses. The Initial packet from the client must be at least 1200 bytes (padded if necessary), and the server's response must not exceed three times the data received until the client's address is validated. This prevents QUIC from being used as an amplification vector in DDoS attacks.
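The three-times rule above is just accounting on the server side. A sketch of that accounting (address validation itself, via the handshake or a Retry token, is out of scope here):

```python
AMPLIFICATION_FACTOR = 3  # RFC 9000 §8.1 anti-amplification limit

class AddressState:
    """Server-side byte accounting for one unvalidated client address."""

    def __init__(self):
        self.received = 0
        self.sent = 0
        self.validated = False

    def on_receive(self, nbytes: int) -> None:
        self.received += nbytes

    def send_budget(self) -> float:
        # Once the address is validated, the limit no longer applies.
        if self.validated:
            return float("inf")
        return AMPLIFICATION_FACTOR * self.received - self.sent

state = AddressState()
state.on_receive(1200)       # the client's padded Initial packet
print(state.send_budget())   # 3600 — at most 3x received bytes before validation
```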
- Reset attacks — TCP's RST packets are unauthenticated; an off-path attacker who can guess the 4-tuple can terminate connections. QUIC's CONNECTION_CLOSE frames are encrypted and authenticated, making off-path termination infeasible.
- Injection attacks — because all QUIC packets (after Initial) are authenticated and encrypted, an off-path attacker cannot inject data into a QUIC connection. This is a strict improvement over TCP, where data injection is possible if the attacker can guess sequence numbers.
- Retry mechanism — QUIC servers under load can send a Retry packet containing an encrypted token. The client must re-send its Initial with this token, proving it can receive packets at its claimed address. This functions as a lightweight proof-of-work against address spoofing.
The Future: QUIC v2 and Multipath QUIC
The QUIC protocol continues to evolve:
QUIC v2 (RFC 9369) — published in 2023, QUIC v2 is an intentional version bump that changes the wire format of Initial packets. The primary goal is to defeat middleboxes that have already started to ossify around QUIC v1's wire image. QUIC v2 is semantically identical to v1 — it is the same protocol with different cryptographic constants, proving that QUIC's version negotiation works in practice.
Multipath QUIC — an IETF extension (developed as draft-ietf-quic-multipath) that allows a QUIC connection to use multiple network paths simultaneously. A mobile device could send data over both Wi-Fi and cellular at the same time, using whichever path is faster for each packet. This goes beyond connection migration (which switches paths) to enable true path aggregation, improving both throughput and resilience.
Unreliable QUIC datagrams (RFC 9221) — adds support for sending unreliable data over QUIC connections, alongside reliable streams. This is useful for applications like video conferencing or gaming where some data (e.g., a single video frame) is not worth retransmitting if lost because newer data has already replaced it.
See It in Action
Every HTTP/3 connection ultimately relies on the same BGP routing infrastructure that carries all internet traffic. The QUIC packets travel as UDP datagrams inside IP packets, following the same AS paths as TCP traffic. You can explore the networks that power major QUIC/HTTP/3 deployments:
- AS13335 — Cloudflare, serving HTTP/3 from 300+ cities worldwide
- AS15169 — Google, where QUIC was invented and carries the majority of Google's traffic
- AS16509 — Amazon Web Services / CloudFront, with global HTTP/3 support
- AS32934 — Meta, using QUIC for Facebook, Instagram, and WhatsApp traffic
- 1.1.1.1 — Cloudflare's DNS resolver, which also supports DNS over QUIC
Use the looking glass to look up any IP address or domain and see the BGP route your traffic follows. Whether that traffic is carried by TCP or QUIC at the transport layer, it traverses the same autonomous systems, crosses the same internet exchange points, and is routed by the same BGP decisions. QUIC changes how data is framed and encrypted on the wire — but BGP determines which wire it travels on.