Post-Quantum Cryptography and Its Impact on the Internet
The cryptographic algorithms that protect virtually all internet communication today -- RSA, Diffie-Hellman, DSA, and their elliptic-curve variants (ECDH, ECDSA) -- rely on mathematical problems that classical computers cannot solve efficiently. Factoring large integers and computing discrete logarithms would take billions of years on the fastest supercomputers. But a sufficiently powerful quantum computer running Shor's algorithm could break all of them in hours. This is not science fiction: governments and corporations are spending billions on quantum computing, and the cryptographic community has already begun deploying replacements. The transition to post-quantum cryptography (PQC) will touch every layer of the internet stack, from TLS handshakes and JWT tokens to RPKI route origin authorizations and cryptocurrency wallets.
Why Quantum Computers Threaten Current Cryptography
To understand the threat, you need to understand two quantum algorithms: Shor's algorithm and Grover's algorithm. They attack different categories of cryptography in fundamentally different ways.
Shor's Algorithm: Breaking Asymmetric Cryptography
Peter Shor published his quantum factoring algorithm in 1994. It efficiently solves two mathematical problems that underpin all widely deployed public-key cryptography:
- Integer factorization -- RSA relies on the difficulty of factoring the product of two large primes. A 2048-bit RSA key would require classical computers roughly 2^112 operations to break. Shor's algorithm reduces this to a polynomial-time computation on a quantum computer with sufficient qubits.
- Discrete logarithm problem -- Diffie-Hellman key exchange, DSA, ECDSA, and ECDH all rely on the difficulty of computing discrete logarithms in finite groups or on elliptic curves. Shor's algorithm solves both the finite-field and elliptic curve variants efficiently.
The critical word is efficiently. A quantum computer with approximately 4,000 error-corrected logical qubits could break RSA-2048 in hours. For 256-bit elliptic curve keys (used in ECDSA P-256, the standard for TLS certificates and Bitcoin transactions), roughly 2,500 logical qubits suffice. Current quantum computers have thousands of physical qubits but very few logical (error-corrected) qubits -- the gap is narrowing.
Grover's Algorithm: Weakening Symmetric Cryptography
Lov Grover's 1996 algorithm provides a quadratic speedup for unstructured search problems. Applied to cryptography, it effectively halves the security level of symmetric ciphers and hash functions:
- AES-128 drops from 128-bit to 64-bit security -- dangerously weak
- AES-256 drops from 256-bit to 128-bit security -- still considered secure
- SHA-256 pre-image resistance drops from 256-bit to 128-bit -- still adequate
The mitigation is straightforward: double the key sizes. AES-256 and SHA-384 or SHA-512 provide sufficient post-quantum security margins. This is why the symmetric side of PQC is relatively simple -- the asymmetric side is where the real challenge lies.
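The halving rule above can be captured in a couple of lines; a minimal sketch of the effective post-quantum security levels:

```python
# Grover's algorithm searches 2^n possibilities in roughly 2^(n/2) steps, so
# an n-bit symmetric key or hash pre-image offers about n/2 bits of
# post-quantum security.
def grover_security(classical_bits: int) -> int:
    """Effective post-quantum security level of an n-bit brute-force search."""
    return classical_bits // 2

for name, bits in [("AES-128", 128), ("AES-256", 256), ("SHA-256 pre-image", 256)]:
    print(f"{name}: {bits}-bit classical -> {grover_security(bits)}-bit post-quantum")
```

This is why AES-128 (64-bit post-quantum) falls below acceptable margins while AES-256 (128-bit) does not.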
Forward Secrecy and "Harvest Now, Decrypt Later"
One concept makes PQC migration urgent even though large-scale quantum computers may be years away: harvest now, decrypt later (HNDL).
Intelligence agencies and sophisticated attackers can record encrypted network traffic today and store it. When a quantum computer becomes available -- even decades from now -- they can decrypt all that stored traffic retroactively. Every TLS session, every VPN tunnel, every encrypted email that was captured becomes readable.
This is why forward secrecy (also called perfect forward secrecy, PFS) matters so much. Forward secrecy means that even if a server's long-term private key is compromised in the future, past session keys cannot be recovered. It works by using ephemeral (one-time) key pairs for each session:
- DHE (Diffie-Hellman Ephemeral) -- generates a fresh DH key pair for every session
- ECDHE (Elliptic Curve Diffie-Hellman Ephemeral) -- the modern, faster variant using elliptic curves
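The ephemeral pattern is easy to see in textbook Diffie-Hellman. A toy sketch with deliberately tiny, insecure parameters (real DHE uses groups of 2048 bits or more; p = 23 here is illustration only):

```python
import secrets

# Textbook finite-field Diffie-Hellman with ephemeral per-session secrets.
# WARNING: p = 23 provides no security; it only illustrates the pattern.
p, g = 23, 5

def dh_session() -> int:
    a = secrets.randbelow(p - 2) + 1    # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 1    # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)   # public values, sent in the clear
    k_client, k_server = pow(B, a, p), pow(A, b, p)
    assert k_client == k_server         # both sides derive the same secret
    return k_client                     # a and b are discarded afterwards

session_key = dh_session()
```

Because the secrets a and b are discarded after each session, compromising a long-term key later reveals nothing about past sessions -- against a classical attacker. A quantum attacker running Shor's algorithm can compute a from the recorded public value A, which is exactly the HNDL problem described below.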
TLS 1.3 mandates forward secrecy -- static RSA key exchange (where the client encrypts the session key directly with the server's RSA public key) was removed entirely. Every TLS 1.3 connection uses ECDHE (or DHE). But here is the problem: ECDHE is itself vulnerable to Shor's algorithm. An attacker who recorded the TLS handshake -- including the ephemeral ECDH public keys exchanged in the clear -- can use a future quantum computer to derive the shared secret and decrypt the entire session.
This means even forward-secret TLS 1.3 connections are vulnerable to HNDL if the key exchange is not post-quantum. Data with long secrecy requirements -- medical records, legal communications, trade secrets, classified information -- is at risk today. This is why key exchange (KEM) migration is the most urgent part of the PQC transition, and why Google and Cloudflare began deploying hybrid post-quantum key exchange in 2023.
NIST Post-Quantum Cryptography Standards
The U.S. National Institute of Standards and Technology (NIST) began its PQC standardization process in 2016, evaluating 82 submissions across multiple rounds. In 2022 it selected four algorithms; three were published as final standards in August 2024 (FIPS 203, 204, and 205), with FN-DSA (FIPS 206) and additional selections still in progress. These algorithms are based on mathematical problems believed to be hard even for quantum computers.
ML-KEM (Kyber) -- Key Encapsulation
ML-KEM (Module Lattice-Based Key Encapsulation Mechanism), previously known as CRYSTALS-Kyber, is NIST's primary standard for key exchange (FIPS 203). It is based on the Module Learning With Errors (MLWE) problem -- a lattice problem where you must recover a secret vector from noisy linear equations over a polynomial ring.
ML-KEM comes in three parameter sets:
- ML-KEM-512 -- roughly equivalent to AES-128 security (NIST Level 1)
- ML-KEM-768 -- roughly equivalent to AES-192 security (NIST Level 3) -- the most widely deployed
- ML-KEM-1024 -- roughly equivalent to AES-256 security (NIST Level 5)
The key advantage of ML-KEM is performance: key generation, encapsulation, and decapsulation are all extremely fast -- comparable to or faster than X25519 ECDH. The tradeoff is key size: an ML-KEM-768 public key is 1,184 bytes, versus 32 bytes for X25519.
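The wire-size tradeoff can be made concrete. A small sketch using the public key (encapsulation key) and ciphertext sizes from FIPS 203:

```python
# ML-KEM sizes from FIPS 203: (public key bytes, ciphertext bytes),
# compared against X25519's 32-byte public keys.
MLKEM_SIZES = {
    "ML-KEM-512":  (800, 768),
    "ML-KEM-768":  (1184, 1088),
    "ML-KEM-1024": (1568, 1568),
}
X25519_BYTES = 32

for name, (pk, ct) in MLKEM_SIZES.items():
    print(f"{name}: pk {pk} B ({pk // X25519_BYTES}x X25519), ciphertext {ct} B")
```

An ML-KEM-768 public key is thus 37 times the size of an X25519 key -- the source of the handshake-size issues discussed later.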
ML-DSA (Dilithium) -- Digital Signatures
ML-DSA (Module Lattice-Based Digital Signature Algorithm), previously CRYSTALS-Dilithium, is NIST's primary digital signature standard (FIPS 204). It is also based on lattice problems (MLWE and MSIS -- Module Short Integer Solution).
ML-DSA parameter sets:
- ML-DSA-44 -- NIST Level 2, public key 1,312 bytes, signature 2,420 bytes
- ML-DSA-65 -- NIST Level 3, public key 1,952 bytes, signature 3,309 bytes
- ML-DSA-87 -- NIST Level 5, public key 2,592 bytes, signature 4,627 bytes
For comparison, an Ed25519 signature is 64 bytes and its public key is 32 bytes. The size increase is dramatic but the computational performance is reasonable.
SLH-DSA (SPHINCS+) -- Hash-Based Signatures
SLH-DSA (Stateless Hash-Based Digital Signature Algorithm), based on SPHINCS+, is NIST's conservative backup signature scheme (FIPS 205). Its security relies solely on the security of hash functions -- no lattice assumptions required. This makes it the most conservative choice: even if lattice problems turn out to be easier than expected, hash-based signatures remain secure as long as SHA-256 or SHAKE are not broken.
The tradeoff is large signatures (up to 49,856 bytes for the highest security level) and slower signing. It is suitable for scenarios where signature size is not critical and maximum conservatism is desired, such as firmware signing or certificate authority root keys.
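The "security from hash functions alone" idea is easiest to see in a Lamport one-time signature -- the classic building block that hash-based schemes elaborate on. This toy sketch is not SLH-DSA itself (which composes many such structures to become stateless and many-time), and each key pair here must sign only once:

```python
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one preimage per message bit. ONE-TIME ONLY: signing a second
    # message with the same key leaks enough preimages to enable forgery.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits(msg)))
```

Breaking this requires inverting the hash function -- no lattice or number-theoretic assumption anywhere, which is exactly the conservatism SLH-DSA offers.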
FN-DSA (Falcon) -- Compact Lattice Signatures
FN-DSA (FFT over NTRU Lattice-Based Digital Signature Algorithm), based on Falcon, produces significantly smaller signatures than ML-DSA (666 bytes at NIST Level 1 vs. 2,420 bytes for ML-DSA-44). It is based on the NTRU lattice and uses fast Fourier transforms for signing.
However, FN-DSA has a more complex implementation, requires careful constant-time floating-point arithmetic to resist side-channel attacks, and has a trickier key generation process. It is expected to be standardized as FIPS 206 and is particularly attractive for bandwidth-constrained applications.
Hybrid Key Exchange in TLS
The internet cannot simply swap X25519 for ML-KEM overnight. There are two risks: quantum computers might arrive sooner than expected (making classical-only key exchange vulnerable), and PQC algorithms might harbor undiscovered weaknesses (making PQC-only key exchange risky). The solution is hybrid key exchange: combine a classical algorithm with a post-quantum algorithm so that the connection is secure as long as either one remains unbroken.
The most widely deployed hybrid was initially X25519Kyber768 (a draft codepoint combining X25519 with pre-standard Kyber), since superseded by the standardized X25519MLKEM768. Both combine an X25519 ECDH exchange with an ML-KEM-768 encapsulation, and the two shared secrets are fed together into a key derivation function (typically HKDF) to produce the session key. An attacker would need to break both X25519 (requiring a quantum computer) and ML-KEM-768 (requiring a new mathematical breakthrough) to recover the session key.
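The combining step can be sketched with a minimal HKDF (RFC 5869). The two input secrets here are random stand-ins; in a real handshake they would come from the X25519 exchange and the ML-KEM-768 decapsulation, and the exact concatenation order and labels are protocol-defined:

```python
import hashlib, hmac, os

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t = b"", b""
    for i in range((length + 31) // 32):
        t = hmac.new(prk, t + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

ss_classical = os.urandom(32)   # stand-in for the X25519 shared secret
ss_pq        = os.urandom(32)   # stand-in for the ML-KEM-768 shared secret

# Concatenating both secrets into one KDF input means the derived key stays
# secret as long as EITHER input secret does.
session_key = hkdf(salt=b"", ikm=ss_classical + ss_pq, info=b"hybrid-handshake", length=32)
```

The security argument is simple: an attacker missing either input cannot reconstruct the IKM, so breaking one primitive alone gains nothing.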
Deployment Status
Hybrid PQC key exchange is already widely deployed:
- Google Chrome enabled X25519Kyber768 by default in Chrome 124 (April 2024) for all TLS 1.3 connections
- Cloudflare enabled post-quantum key exchange across all customer domains -- all traffic to sites behind Cloudflare's network (AS13335) can negotiate PQC
- Apple added ML-KEM support to iMessage in iOS 17.4 (PQ3 protocol) and to Safari
- Mozilla Firefox added support for ML-KEM in TLS
- AWS, Signal, and Cloudflare WARP have all deployed PQC key exchange
As of early 2026, a significant fraction of TLS connections on the internet already use hybrid post-quantum key exchange. The key exchange transition is well underway.
Impact on TLS/HTTPS Performance
The most immediate practical impact of PQC on TLS/HTTPS is increased data sizes during the handshake. A classical TLS 1.3 handshake using X25519 sends 32 bytes of key material in the ClientHello and 32 bytes in the ServerHello. With X25519+ML-KEM-768, the ClientHello carries ~1,216 bytes of key material and the ServerHello carries ~1,120 bytes.
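The arithmetic behind those figures, plus a rough segment count (the ~400 bytes assumed for other ClientHello extensions is an illustrative figure, not a measured one):

```python
import math

X25519_PK, MLKEM768_PK, MLKEM768_CT = 32, 1184, 1088
MSS = 1460   # typical TCP maximum segment size on Ethernet

ch_key_share = X25519_PK + MLKEM768_PK   # ClientHello key material: 1,216 B
sh_key_share = X25519_PK + MLKEM768_CT   # ServerHello key material: 1,120 B

# With an assumed ~400 bytes of other extensions, the hybrid ClientHello no
# longer fits in a single TCP segment.
segments = math.ceil((ch_key_share + 400) / MSS)
print(f"hybrid ClientHello spans ~{segments} TCP segments")
```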
ClientHello Fragmentation
The larger ClientHello is the main pain point. The combined key shares plus other TLS extensions can push the ClientHello beyond a single TCP segment's Maximum Segment Size (typically 1,460 bytes on Ethernet). This means the ClientHello gets split across multiple TCP packets, adding a round trip before the server can even begin processing. Some middleboxes (firewalls, load balancers, DPI devices) that were not designed to handle multi-fragment ClientHellos have been observed to drop or malfunction on these larger messages.
This is not hypothetical: early deployments of Kyber-based key exchange triggered compatibility issues with certain enterprise firewalls and TLS inspection proxies. Chrome and other browsers implemented fallback logic to retry with classical-only key exchange if the hybrid handshake fails.
Performance Benchmarks
The computational cost of ML-KEM is minimal -- encapsulation and decapsulation take microseconds, comparable to X25519. The performance hit is dominated by the additional bytes on the wire:
- First-connection latency may increase by 1-5ms on broadband connections due to the larger handshake, and more on high-latency links
- QUIC/HTTP/3 is somewhat more affected because the entire handshake must fit within the initial QUIC packet flight, and QUIC packets are limited by UDP MTU constraints
- 0-RTT resumption in TLS 1.3 is unaffected since it uses pre-shared keys, not key exchange
For digital signatures (needed when PQC certificates are deployed), the impact will be more pronounced. An ML-DSA-65 certificate chain with three certificates would add roughly 15 KB to the handshake -- a significant increase from the ~3 KB typical of current ECDSA certificate chains.
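The ~15 KB estimate follows from the FIPS 204 sizes, counting only the cryptographic material (full certificates add names, extensions, and other metadata on top; the ECDSA figures are typical values, not exact):

```python
MLDSA65_PK, MLDSA65_SIG = 1952, 3309   # ML-DSA-65 sizes from FIPS 204
ECDSA_PK, ECDSA_SIG = 65, 72           # uncompressed P-256 point, typical DER signature

certs = 3
pqc_crypto_bytes = certs * (MLDSA65_PK + MLDSA65_SIG)        # 15,783 bytes
classical_crypto_bytes = certs * (ECDSA_PK + ECDSA_SIG)      # 411 bytes
print(f"PQC chain: {pqc_crypto_bytes} B vs classical: {classical_crypto_bytes} B")
```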
Impact on JWTs and Digital Signatures
JSON Web Tokens (JWTs) are used throughout the internet for authentication and authorization. The signature algorithms currently used for JWTs are all quantum-vulnerable:
- RS256 (RSA-PKCS1-v1_5 with SHA-256) -- broken by Shor's algorithm
- RS384, RS512 -- same vulnerability
- ES256 (ECDSA P-256 with SHA-256) -- broken by Shor's algorithm
- ES384, ES512 -- same vulnerability
- PS256, PS384, PS512 (RSA-PSS) -- broken by Shor's algorithm
The IETF has proposed extensions to JOSE (JSON Object Signing and Encryption) and COSE (CBOR Object Signing and Encryption) to support post-quantum algorithms. Draft specifications introduce new algorithm identifiers for ML-DSA and SLH-DSA in JWS (JSON Web Signature) and JWE (JSON Web Encryption).
Migration Challenges for JWTs
The JWT migration faces several practical challenges:
- Token size -- A JWT signed with ES256 carries a 64-byte raw signature, which base64url-encodes to ~86 characters. With ML-DSA-65, the raw signature alone is 3,309 bytes (~4,412 characters encoded). JWTs are often passed in HTTP headers (Authorization: Bearer), cookies, and URL parameters, where size limits apply. Many HTTP servers default to 8 KB header limits; a PQC JWT with claims could approach or exceed this.
- JWKS endpoints -- JSON Web Key Sets publish the public keys used to validate JWTs. PQC public keys are 20-80x larger than current keys. JWKS endpoints, key rotation mechanisms, and key caching logic all need updating.
- Ecosystem inertia -- JWT libraries exist in every programming language. All need to add PQC algorithm support. Authorization servers (OAuth 2.0 / OpenID Connect providers) and resource servers must upgrade in coordination.
- Hybrid signatures -- During the transition, tokens may need dual signatures (classical + PQC) for backward compatibility. Standards for composite signatures in JWS are still being developed.
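The token-size figures above can be checked with the base64url length formula (unpadded encoding uses ceil(4n/3) characters for n bytes):

```python
import base64

def b64url_len(n_bytes: int) -> int:
    """Characters needed to base64url-encode n bytes without padding."""
    return (4 * n_bytes + 2) // 3

# Cross-check the formula against the real encoder, then compare algorithms.
assert b64url_len(64) == len(base64.urlsafe_b64encode(bytes(64)).rstrip(b"="))

for alg, raw in [("ES256", 64), ("ML-DSA-44", 2420), ("ML-DSA-65", 3309)]:
    print(f"{alg}: {raw}-byte signature -> {b64url_len(raw)} chars in the token")
```

At roughly 4.4 KB for the ML-DSA-65 signature segment alone, more than half of a typical 8 KB header budget is gone before any claims are encoded.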
Impact on SSH and Code Signing
SSH
OpenSSH added post-quantum key exchange in version 9.0 (April 2022), using a hybrid of NTRU Prime and X25519 ([email protected]). This is enabled by default for all new SSH connections. More recently, OpenSSH has been updated to support ML-KEM-768 as well.
SSH host keys and user authentication keys (typically Ed25519 or RSA) also need post-quantum replacements. However, because SSH key exchange already provides forward secrecy, the urgency is lower than for TLS -- an attacker recording SSH traffic today cannot decrypt it even with a future quantum computer, assuming the key exchange is PQC-protected. The authentication keys protect against impersonation, not retroactive decryption.
Code Signing
Code signing (for OS packages, firmware, application binaries) has a different threat model. A forged signature on malware only needs to be valid at the time of verification. If quantum computers arrive in 2035, an attacker could forge signatures on any code that uses RSA or ECDSA unless the verification infrastructure has been upgraded.
For code that must remain verifiable for decades (firmware updates, legal documents, archival records), PQC signatures are important even now. Apple, Microsoft, and Linux distributions are all planning PQC transitions for their code signing infrastructure.
Certificate Authorities
The Web PKI -- the system of Certificate Authorities (CAs) that issue TLS certificates -- faces an especially complex PQC transition. CA root certificates have lifetimes of 20-30 years. A root certificate issued today with RSA-4096 will still be trusted in 2050. If a quantum computer becomes available before that root expires, every certificate chaining to it becomes forgeable.
This means CAs need to begin issuing PQC root certificates soon. The CA/Browser Forum is developing requirements for PQC certificates, including hybrid certificates that contain both a classical and a post-quantum signature. The size implications are significant: a certificate chain with ML-DSA-65 signatures would be roughly 5x larger than the current ECDSA equivalent.
Impact on Blockchain and Cryptocurrency
Blockchain networks are uniquely vulnerable to quantum attacks because their security model depends fundamentally on public-key cryptography.
Bitcoin
Bitcoin uses ECDSA on the secp256k1 curve for transaction signing. Every Bitcoin address is derived from an ECDSA public key. A quantum computer running Shor's algorithm could derive the private key from any exposed public key and steal the associated funds.
Public keys are exposed in two scenarios: (1) when a transaction is broadcast (the public key is revealed in the spending script), and (2) for addresses that have been reused (the public key is on the blockchain permanently). Addresses using Pay-to-Public-Key-Hash (P2PKH) only expose the hash of the public key until they are spent, providing some protection -- but once spent from, the public key is visible.
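The hash-commitment idea can be sketched in a few lines. This is simplified: Bitcoin's actual HASH160 is RIPEMD-160 applied over SHA-256, and the key below is a random stand-in rather than a real secp256k1 point; plain SHA-256 is used so the sketch runs everywhere:

```python
import hashlib, secrets

# A P2PKH-style address commits only to a HASH of the public key. Until the
# owner spends, an on-chain observer sees only this commitment -- and Shor's
# algorithm needs the public key itself, not its hash.
pubkey = b"\x02" + secrets.token_bytes(32)   # stand-in for a compressed secp256k1 key
address_commitment = hashlib.sha256(pubkey).digest()

# Spending reveals `pubkey` on-chain so nodes can check the ECDSA signature;
# from that moment a quantum attacker could derive the private key from it.
```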
An estimated 25% of all Bitcoin (by value) is in addresses with exposed public keys. A quantum-capable attacker could attempt to steal these funds. The Bitcoin community has discussed various migration strategies, but no concrete PQC upgrade timeline exists yet, and a hard fork would likely be required.
Ethereum
Ethereum also uses ECDSA (on secp256k1) for transaction signing, with the same vulnerability. However, Ethereum's roadmap explicitly includes quantum resistance. Vitalik Buterin has discussed account abstraction (EIP-4337) as a path to PQC: with account abstraction, each account can define its own signature verification logic, allowing individual accounts to upgrade to PQC signature schemes without a protocol-level hard fork.
Ethereum's longer-term roadmap includes replacing the current BLS signature scheme used for proof-of-stake consensus with a PQC alternative, potentially based on STARKs (which are hash-based and thus quantum-resistant).
Hash Functions Remain Secure
Notably, the SHA-256 hashing used in Bitcoin mining and Ethereum's keccak256 are not broken by quantum computers -- Grover's algorithm only provides a quadratic speedup, reducing SHA-256 mining security to 128-bit, which remains practically secure. Proof-of-work mining is not threatened by quantum computers in any near-term scenario.
Impact on BGP Security (RPKI)
RPKI (Resource Public Key Infrastructure) is the cryptographic system that secures BGP routing. It uses RSA and ECDSA signatures for Route Origin Authorizations (ROAs) -- the signed statements that authorize specific autonomous systems to announce specific IP prefixes.
A quantum-capable attacker could forge ROAs, creating cryptographically valid authorizations for BGP hijacks that would pass RPKI validation. This would undermine the primary defense against BGP hijacking and potentially allow large-scale traffic interception.
The RPKI PQC transition faces several challenges specific to internet routing:
- Size constraints -- RPKI validation data is distributed via repositories (rsync and RRDP). Larger PQC signatures mean larger repository sizes and longer synchronization times for the validators that every ISP runs.
- Validation performance -- RPKI validators must verify thousands of ROAs. PQC signature verification is generally fast, but the increased data size affects parse and transfer times.
- Coordination across RIRs -- The five Regional Internet Registries (ARIN, RIPE NCC, APNIC, AFRINIC, LACNIC) must coordinate the transition. RPKI trust anchors are rooted at the RIRs, and replacing them requires careful sequencing.
- Router resource constraints -- BGP routers that perform RPKI validation have limited memory and CPU. PQC certificates and ROAs must be processable on existing router hardware.
- BGPsec -- The BGPsec path validation protocol (which cryptographically signs each AS hop, unlike RPKI which only validates the origin) uses ECDSA signatures on every BGP update. PQC signatures for BGPsec would add kilobytes to every BGP update message, which is problematic for a protocol that processes millions of updates.
The IETF's SIDROPS working group is actively developing specifications for PQC in RPKI. The transition will likely involve a hybrid period where ROAs carry both classical and PQC signatures, similar to the TLS approach. You can see the current RPKI validation status of any route by looking up an IP address or prefix -- for example, 1.1.1.1 shows a valid RPKI status because Cloudflare has published ROAs for its prefixes.
Timeline Estimates and Q-Day
Q-Day is the term for the day a cryptographically relevant quantum computer (CRQC) becomes operational -- one powerful enough to break RSA-2048 or ECDSA P-256 in practice. Estimates vary widely:
- Optimistic (quantum-skeptic) view: 2040-2050 or later. Building a fault-tolerant quantum computer with millions of physical qubits remains an enormous engineering challenge. Current error rates are too high, and no clear path to the required scale exists.
- Moderate view: 2030-2040. Investments from Google, IBM, Microsoft, and nation-states are accelerating progress. IBM's roadmap targets 100,000+ qubit systems by 2033. Error correction breakthroughs could compress timelines.
- Aggressive view: 2028-2032. Some researchers argue that novel approaches (topological qubits, photonic systems) could achieve breakthroughs faster than expected. China's quantum computing program is opaque, and their progress may be further along than publicly known.
The critical insight is that the migration itself takes years. An organization that begins PQC migration when Q-Day arrives is already too late -- their historical data has been harvestable for the entire preceding period. The NSA recommended in 2022 that National Security Systems begin transitioning to PQC immediately, with a target completion of 2035. The White House issued a National Security Memorandum (NSM-10) setting similar timelines for all federal agencies.
What Organizations Should Do Now
Regardless of when Q-Day arrives, organizations should begin preparing for the PQC transition today. The core principle is crypto-agility: designing systems so that cryptographic algorithms can be swapped without redesigning the entire system.
1. Inventory Cryptographic Usage
Most organizations have no clear picture of where and how they use cryptography. A cryptographic inventory should catalog:
- All TLS/HTTPS endpoints and their cipher suites
- Certificate types, key sizes, and CA chains
- JWT/JWS signing algorithms across all services
- SSH key types used for infrastructure access
- VPN protocols and their key exchange mechanisms
- Database encryption, disk encryption, backup encryption
- Code signing certificates and processes
- API authentication mechanisms (mTLS, signed requests)
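Once the inventory exists, triage can be mechanical. A sketch that classifies entries by quantum risk -- the category lists and the example inventory are illustrative, not exhaustive:

```python
# Illustrative risk buckets; a real inventory tool would be far more complete.
SHOR_BROKEN = {"RSA-2048", "RSA-4096", "ECDSA-P256", "Ed25519", "X25519", "DH-2048"}
GROVER_WEAK = {"AES-128", "3DES", "SHA-1"}
PQC_OK      = {"AES-256", "SHA-384", "ML-KEM-768", "ML-DSA-65", "SLH-DSA"}

def quantum_risk(algorithm: str) -> str:
    if algorithm in SHOR_BROKEN:
        return "replace: broken by Shor's algorithm"
    if algorithm in GROVER_WEAK:
        return "upgrade: weakened by Grover's algorithm"
    if algorithm in PQC_OK:
        return "ok: adequate post-quantum margin"
    return "review: unrecognized algorithm"

inventory = [                      # hypothetical systems, for illustration
    ("public API TLS endpoint", "ECDSA-P256"),
    ("internal JWT signing",    "RSA-2048"),
    ("backup encryption",       "AES-256"),
]
for system, alg in inventory:
    print(f"{system}: {alg} -> {quantum_risk(alg)}")
```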
2. Prioritize by Data Sensitivity and Lifespan
Focus first on data with long secrecy requirements. Medical records, legal documents, financial data, and government communications that must remain confidential for 10+ years are at risk from HNDL attacks today. Key exchange (KEM) migration for these data flows should be the highest priority.
3. Enable Hybrid PQC Where Available
Many systems already support hybrid PQC:
- Ensure web servers and CDNs support X25519+ML-KEM-768 (Cloudflare and AWS already enable this by default)
- Update SSH to OpenSSH 9.0+ and verify post-quantum key exchange is active
- Configure VPNs (WireGuard, IPsec) to use PQC-capable configurations where available
- Update TLS libraries (OpenSSL 3.5+, BoringSSL, rustls) to versions with PQC support
4. Design for Crypto-Agility
Architect new systems so that cryptographic algorithms are configurable, not hardcoded:
- Abstract cryptographic operations behind interfaces that can swap algorithms
- Use algorithm negotiation (like TLS cipher suite negotiation) rather than fixed algorithms
- Design token and certificate formats to accommodate larger PQC keys and signatures
- Ensure JWKS endpoints and key management systems can handle PQC key types
- Plan for increased bandwidth and storage from larger PQC artifacts
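The first two points can be sketched as a pluggable signer interface. The HMAC backend is only a runnable stand-in (HMAC is a MAC, not a public-key signature); the point is that application code depends on the interface and an algorithm identifier, so registering an ML-DSA backend later is a configuration change rather than a rewrite:

```python
import hashlib, hmac, secrets
from typing import Protocol

class Signer(Protocol):
    """Algorithm-agnostic signing interface: callers never name a concrete
    algorithm, which is the core of crypto-agility."""
    alg: str
    def sign(self, message: bytes) -> bytes: ...
    def verify(self, message: bytes, signature: bytes) -> bool: ...

class HmacSigner:
    """Stand-in backend so the sketch runs with the stdlib alone; a real
    deployment would register Ed25519 and ML-DSA backends behind Signer."""
    alg = "HS256"
    def __init__(self, key: bytes) -> None:
        self._key = key
    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()
    def verify(self, message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), signature)

REGISTRY: dict[str, Signer] = {}
REGISTRY["HS256"] = HmacSigner(secrets.token_bytes(32))

# Application code selects by identifier, never by concrete class.
sig = REGISTRY["HS256"].sign(b"payload")
assert REGISTRY["HS256"].verify(b"payload", sig)
```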
5. Test and Monitor
Run PQC interoperability tests with your infrastructure:
- Test web applications with Chrome's PQC key exchange to verify no middlebox breakage
- Benchmark handshake performance with PQC cipher suites
- Verify that firewalls, WAFs, and TLS inspection proxies handle larger handshakes
- Monitor for PQC-related connection failures in production
The Bigger Picture
The PQC transition is the largest coordinated cryptographic migration in internet history. Unlike the SHA-1 deprecation or the TLS 1.0/1.1 sunset, PQC affects every cryptographic protocol simultaneously. The internet's routing security (BGP/RPKI), its transport security (TLS), its authentication tokens (JWT), and its financial systems (blockchain) all depend on the same vulnerable mathematical assumptions.
The good news is that the transition has already begun. Every major browser, most CDNs, and many cloud providers now support hybrid PQC key exchange. NIST has finalized its standards. The open-source cryptographic library ecosystem is adding support rapidly. The path forward is clear -- the question is whether organizations will begin walking it before Q-Day makes it a sprint.
You can explore the networks leading the PQC deployment using the looking glass. Cloudflare (AS13335) was among the first to deploy PQC across its entire network. Google (AS15169) enabled it in Chrome and across Google services. Look up any IP or domain to see the BGP routing data for the networks that carry your encrypted traffic today: