How PROXY Protocol Works: Preserving Client IPs Through L4 Proxies

PROXY protocol is a connection-level protocol that preserves the original client IP address and port when traffic passes through a Layer 4 proxy or load balancer. Without it, the upstream server sees the proxy's IP address as the source of every connection, making client identification, geolocation, rate limiting, abuse detection, and access logging impossible. PROXY protocol solves this by prepending a small header to the beginning of each TCP connection that carries the original source and destination addresses. It was designed by Willy Tarreau, the author of HAProxy, and has become the standard mechanism for passing client connection metadata through L4 proxies across the industry.

If you operate load balancers, reverse proxies, or any multi-tier infrastructure where TCP connections are terminated and re-established at intermediate hops, understanding PROXY protocol is essential. It is the only reliable way to convey original client identity through L4 forwarding without modifying the application protocol itself.

The Problem: Client IP Lost at Layer 4

When a client connects to a web server directly, the server can read the client's IP address from the TCP socket. The kernel exposes the remote peer's address via getpeername(), and application code uses it for logging, authentication, and security decisions. This works because the TCP connection is end-to-end between client and server.

The moment you place a proxy, load balancer, or NAT device in the path, this breaks. A Layer 4 proxy terminates the client's TCP connection and opens a new connection to the backend server. The backend sees the proxy's IP address as the source — not the client's. If you have 10,000 clients connecting through a single load balancer, the backend sees 10,000 connections all originating from the same IP address. Every audit log entry, every rate limit counter, every geo-IP lookup returns information about the proxy, not the actual client.

[Diagram: Client IP lost through an L4 proxy. A client at 203.0.113.45 opens TCP connection 1 to an L4 proxy at 10.0.0.1, which terminates it and opens TCP connection 2 to the backend with source 10.0.0.1. The client IP 203.0.113.45 is lost, so the backend cannot distinguish clients: all connections appear to come from 10.0.0.1. With PROXY protocol, the line "PROXY TCP4 203.0.113.45 198.51.100.1 52312 443\r\n" precedes the application data.]

At Layer 7, HTTP reverse proxies can insert an X-Forwarded-For header into the HTTP request, carrying the client's IP inside the application protocol. But this only works for HTTP traffic. For arbitrary TCP protocols — databases, mail servers, custom binary protocols, TLS passthrough, SSH, gRPC over raw TCP — there is no application-layer mechanism to carry the client address. PROXY protocol was invented precisely for this gap.

PROXY Protocol v1: The Text Header

PROXY protocol version 1 (PPv1) is a human-readable text header prepended to the very beginning of a TCP connection, before any application data. The proxy writes this header immediately after establishing the connection to the backend, and the backend reads and strips it before processing the application protocol.

The format is a single line terminated by \r\n:

PROXY TCP4 203.0.113.45 198.51.100.1 52312 443\r\n

The fields are space-delimited and fixed in order:

PROXY                  Protocol signature: always the literal string "PROXY"
TCP4 | TCP6            Address family: TCP4 for IPv4, TCP6 for IPv6
Source address         The original client IP address
Destination address    The original destination IP the client connected to
Source port            The client's ephemeral source port
Destination port       The destination port the client connected to

For IPv6, the header looks like:

PROXY TCP6 2001:db8::1 2001:db8::2 52312 443\r\n

There is also a special UNKNOWN keyword used when the proxy cannot determine the address (for example, health check connections originating from the proxy itself):

PROXY UNKNOWN\r\n

The entire v1 header must fit in 108 bytes (the maximum length defined by the specification). This is more than enough: the longest possible header, "PROXY TCP6 " with two fully expanded 39-character IPv6 addresses and two 5-digit ports, is 102 characters plus the two-byte \r\n terminator, 104 bytes in total.
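The text format above can be parsed in a few lines. Here is a minimal sketch in Python; parse_proxy_v1 is a hypothetical helper name, and a real server would read the header line with bounded buffering before calling it:

```python
def parse_proxy_v1(line: bytes):
    """Parse one PROXY protocol v1 header line, CRLF included."""
    if len(line) > 108 or not line.endswith(b"\r\n"):
        raise ValueError("not a valid PPv1 header line")
    parts = line[:-2].decode("ascii").split(" ")
    if len(parts) < 2 or parts[0] != "PROXY":
        raise ValueError("missing PROXY signature")
    if parts[1] == "UNKNOWN":
        return None  # proxy could not determine addresses (e.g. health check)
    if len(parts) != 6 or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("malformed address fields")
    family, src_ip, dst_ip, src_port, dst_port = parts[1:]
    return family, src_ip, dst_ip, int(src_port), int(dst_port)
```

Everything after the \r\n belongs to the application protocol and must be handed to it untouched.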

Parsing Constraints

The backend must read the PROXY header before reading any application data. This is critical: the header is not a separate protocol layer that can be negotiated. It appears on the wire as the first bytes of the connection. If the backend does not expect PROXY protocol and a proxy sends it, the backend will try to parse the "PROXY TCP4..." string as application data — an HTTP server would return a 400 Bad Request, a TLS server would fail the handshake, and a database server would drop the connection with a protocol error.

This means PROXY protocol must be configured on both sides of the connection simultaneously. You enable "send PROXY protocol" on the proxy and "accept PROXY protocol" on the backend. There is no auto-detection in v1 — the listener must know in advance whether to expect the header.

The text format of v1 is intentionally simple: any developer can implement a parser in a few dozen lines of code. The header is easy to inspect in packet captures and debug with tcpdump or Wireshark. This simplicity drove rapid adoption, but it has limitations: it cannot carry arbitrary metadata beyond the four address fields, and text parsing (while trivial) is slower than binary parsing for high-throughput proxies processing millions of connections per second.

PROXY Protocol v2: The Binary Header

PROXY protocol version 2 (PPv2) replaced the text format with a compact binary encoding that is faster to parse, supports extensibility via Type-Length-Value (TLV) fields, and handles edge cases like Unix domain sockets and AF_UNSPEC connections.

The v2 header structure is:

+--------------------------------------------+
| 0x0D 0x0A 0x0D 0x0A 0x00 0x0D 0x0A 0x51   |  12-byte
| 0x55 0x49 0x54 0x0A                        |  signature
+-------------+-----------+------------------+
| ver/cmd     | fam/proto | address length   |
| (1 byte)    | (1 byte)  | (2 bytes)        |
+-------------+-----------+------------------+
| source address + dest address + ports      |  (variable)
+--------------------------------------------+
| TLV extensions (optional)                  |  (variable)
+--------------------------------------------+

The 12-byte signature is carefully chosen. Its first bytes (\r\n\r\n\0\r\n) intentionally break any HTTP parser that accidentally receives it — the null byte and the pattern of carriage returns will cause HTTP parsers to reject the data immediately rather than silently misinterpreting it. The remaining bytes QUIT\n form a mnemonic that can be spotted in hex dumps.

Version and Command Byte

The 13th byte encodes the protocol version in the high nibble and the command in the low nibble. The only defined version is 0x2, and two commands exist: PROXY (0x1), meaning the header carries a real proxied client's addresses, and LOCAL (0x0), meaning the connection was originated by the proxy itself. A normal proxied connection therefore has a ver/cmd byte of 0x21; a LOCAL connection has 0x20.

The LOCAL command is significant. When a load balancer sends health check probes to backends, it is the proxy itself originating the connection, not a real client. The LOCAL command tells the backend "this is not a proxied connection — treat the peer address normally." Without LOCAL, backends would either need to be configured to accept health checks without PROXY headers (requiring a separate listener) or the health check would appear to come from a fake client address.

Address Family and Transport Protocol Byte

The 14th byte encodes the address family in the high nibble (AF_UNSPEC 0x0, AF_INET 0x1, AF_INET6 0x2, AF_UNIX 0x3) and the transport protocol in the low nibble (UNSPEC 0x0, STREAM 0x1, DGRAM 0x2). TCP over IPv4 is therefore 0x11, TCP over IPv6 is 0x21, and a Unix stream socket is 0x31.

AF_UNIX support is notable: it allows PROXY protocol to work with Unix domain socket connections within the same machine, which is useful in container and microservice architectures where an L4 proxy (like HAProxy) forwards to a backend over a Unix socket rather than a TCP connection.

Address Block

The addresses follow the family/protocol byte. For IPv4/TCP, the block is 12 bytes: 4 bytes source address, 4 bytes destination address, 2 bytes source port, 2 bytes destination port. For IPv6/TCP, it is 36 bytes: 16 + 16 + 2 + 2. All multi-byte values are in network byte order (big-endian).

The total header length for IPv4 is 28 bytes (12 signature + 4 ver/cmd/fam/len + 12 addresses), and for IPv6 it is 52 bytes (12 + 4 + 36), plus any TLV extensions. This is substantially more compact than the text format for IPv6, where addresses alone can consume 78 characters.
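The fixed 16-byte prefix and the address block can be decoded with straightforward binary unpacking. The sketch below is a hypothetical helper, not a hardened implementation, and only handles the TCP-over-IPv4 and TCP-over-IPv6 cases described above:

```python
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def parse_proxy_v2(buf: bytes):
    """Decode the fixed header and address block of a PPv2 header."""
    if buf[:12] != PP2_SIGNATURE:
        raise ValueError("missing PPv2 signature")
    ver_cmd, fam_proto, addr_len = struct.unpack("!BBH", buf[12:16])
    if ver_cmd >> 4 != 0x2:
        raise ValueError("unsupported version")
    if ver_cmd & 0x0F == 0x0:      # LOCAL command (e.g. health checks)
        return None                # caller should use the real socket addresses
    body = buf[16:16 + addr_len]
    if fam_proto == 0x11:          # AF_INET / STREAM: 4 + 4 + 2 + 2 bytes
        src, dst, sport, dport = struct.unpack("!4s4sHH", body[:12])
        af = socket.AF_INET
    elif fam_proto == 0x21:        # AF_INET6 / STREAM: 16 + 16 + 2 + 2 bytes
        src, dst, sport, dport = struct.unpack("!16s16sHH", body[:36])
        af = socket.AF_INET6
    else:
        raise ValueError("unhandled family/protocol")
    return socket.inet_ntop(af, src), socket.inet_ntop(af, dst), sport, dport
```

Note that all multi-byte fields use network byte order, which is why every struct format string starts with "!".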

PP2 Type-Length-Value (TLV) Extensions

The most significant improvement in PPv2 over v1 is TLV-based extensibility. After the address block, the remaining bytes (up to the length declared in the header) consist of a sequence of TLV records. Each TLV has a 1-byte type, a 2-byte length, and a variable-length value:

+------+---------+-----------+
| type | length  |   value   |
| 1B   | 2B (BE) | variable  |
+------+---------+-----------+

TLVs carry metadata that goes far beyond source and destination addresses. The PROXY protocol specification defines several standard TLV types, and cloud providers have added vendor-specific extensions.
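Walking the TLV region is a simple offset loop over the type, length, and value fields. A minimal sketch (iter_tlvs is a hypothetical helper; tlv_bytes is everything after the address block, up to the length declared in the fixed header):

```python
import struct

def iter_tlvs(tlv_bytes: bytes):
    """Yield (type, value) pairs from a PPv2 TLV region."""
    offset = 0
    while offset + 3 <= len(tlv_bytes):
        tlv_type = tlv_bytes[offset]                       # 1-byte type
        (length,) = struct.unpack_from("!H", tlv_bytes, offset + 1)  # 2-byte big-endian length
        value = tlv_bytes[offset + 3:offset + 3 + length]  # variable-length value
        yield tlv_type, value
        offset += 3 + length
```

A receiver dispatches on the type byte and must skip unknown types rather than reject the connection.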

Standard TLV Types

PP2_TYPE_ALPN (0x01)        Application-Layer Protocol Negotiation value from the TLS handshake (e.g., h2, http/1.1)
PP2_TYPE_AUTHORITY (0x02)   The host name from the TLS SNI extension: the server name the client requested
PP2_TYPE_CRC32C (0x03)      CRC32c checksum of the entire PROXY protocol header for integrity verification
PP2_TYPE_NOOP (0x04)        Padding bytes, used for alignment; receivers must skip these
PP2_TYPE_UNIQUE_ID (0x05)   Unique connection identifier assigned by the proxy, useful for log correlation
PP2_TYPE_SSL (0x20)         SSL/TLS information sub-structure (detailed below)
PP2_TYPE_NETNS (0x30)       Network namespace name (Linux-specific)

PP2_TYPE_SSL: TLS Session Metadata

The SSL TLV (type 0x20) is particularly rich. When the proxy terminates TLS and forwards plaintext to the backend, the backend loses all information about the client's TLS session. The SSL TLV preserves it by embedding a sub-structure with its own nested TLVs:

PP2_TYPE_SSL structure:
  client     (1 byte)   - bitfield of client capabilities
  verify     (4 bytes)  - OpenSSL verification result
  sub-TLVs:
    PP2_SUBTYPE_SSL_VERSION  (0x21) - e.g., "TLSv1.3"
    PP2_SUBTYPE_SSL_CN       (0x22) - client cert Common Name
    PP2_SUBTYPE_SSL_CIPHER   (0x23) - e.g., "ECDHE-RSA-AES256-GCM-SHA384"
    PP2_SUBTYPE_SSL_SIG_ALG  (0x24) - signature algorithm
    PP2_SUBTYPE_SSL_KEY_ALG  (0x25) - key algorithm

The client bitfield indicates whether the client connected via SSL, presented a certificate, and whether that certificate was validated. This allows backend applications to enforce mutual TLS (mTLS) policies even when TLS is terminated at the proxy layer.

Cloud Provider TLV Extensions

Cloud providers have extended PPv2 with custom TLV types to expose infrastructure metadata to backend applications.

AWS (PP2_TYPE_AWS, 0xEA): AWS Network Load Balancers (NLBs) inject a TLV with type 0xEA that carries the VPC endpoint ID when traffic arrives through an AWS PrivateLink endpoint. The sub-type 0x01 (PP2_SUBTYPE_AWS_VPCE_ID) contains the VPC endpoint identifier as a string, e.g., vpce-0123456789abcdef0. This allows multi-tenant backend services to identify which PrivateLink endpoint the request arrived through, enabling per-customer authentication and routing without relying on source IP (which is NATted within the AWS network).

Azure (PP2_TYPE_AZURE, 0xEE): Azure Private Link services inject a TLV with type 0xEE carrying the LinkID of the private endpoint connection. The sub-type 0x01 (PP2_SUBTYPE_AZURE_PRIVATEENDPOINT_LINKID) contains a 4-byte integer identifying the private endpoint. Like AWS, this enables multi-tenant services to identify which Azure customer is connecting through Private Link, even though all connections appear to originate from NAT addresses within the Azure fabric.

Google Cloud: Google Cloud Load Balancers also support PROXY protocol v2, though Google typically uses its own metadata passing mechanisms for internal services. When PROXY protocol is enabled on a Google Cloud Network Load Balancer, it sends standard PPv2 headers with client addresses.
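As an illustration of consuming a vendor TLV, the sketch below extracts the PrivateLink endpoint ID from an AWS 0xEA TLV, whose value begins with a one-byte sub-type followed by the endpoint ID string (aws_vpce_id is a hypothetical helper name):

```python
PP2_TYPE_AWS = 0xEA
PP2_SUBTYPE_AWS_VPCE_ID = 0x01

def aws_vpce_id(tlv_type: int, value: bytes):
    """Return the VPC endpoint ID if this TLV is an AWS VPCE-ID TLV, else None."""
    if tlv_type == PP2_TYPE_AWS and value[:1] == bytes([PP2_SUBTYPE_AWS_VPCE_ID]):
        return value[1:].decode("ascii")  # e.g. "vpce-0123456789abcdef0"
    return None
```

A multi-tenant backend would call this while iterating the TLVs of each accepted connection and map the endpoint ID to a customer account.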

PROXY Protocol vs X-Forwarded-For

The most common alternative to PROXY protocol is the X-Forwarded-For (XFF) HTTP header. Understanding when to use each is important for infrastructure design.

X-Forwarded-For is an HTTP header that each proxy in the chain appends to. If a client at 203.0.113.45 connects through a CDN at 198.51.100.10 and then a load balancer at 10.0.0.1, the backend receives:

X-Forwarded-For: 203.0.113.45, 198.51.100.10

Key differences between the two approaches:

                     PROXY Protocol                      X-Forwarded-For
Protocol layer       Transport (below application)       Application (HTTP header)
Protocol support     Any TCP/UDP protocol                HTTP only
Chaining             Single hop only (last proxy)        Accumulates through proxy chain
Spoofability         Cannot be spoofed by client         Client can set fake XFF header
Metadata             TLVs carry TLS info, cloud IDs      Limited to IP addresses
Performance          Fixed-size binary (v2), fast parse  Variable string, requires HTTP parsing
Backend support      Requires explicit support           Any HTTP application can read headers

PROXY protocol operates at a fundamentally different layer than XFF. It is injected once by the first proxy that terminates the client's TCP connection. If there are multiple L4 proxies in series, each one strips the incoming PROXY header and writes a new one. The backend always sees the header from the last proxy in the chain. In contrast, XFF is cumulative — each HTTP proxy appends its address.

In practice, production deployments often use both. A common pattern: an AWS NLB uses PROXY protocol v2 to pass the client IP to an Nginx instance, which reads the PROXY header to obtain the client IP and then sets X-Forwarded-For in the HTTP request before forwarding to the application server. The L4 hop uses PROXY protocol (because NLB speaks TCP, not HTTP), and the L7 hop uses XFF (because the application understands HTTP).

[Diagram: Combined PROXY protocol + X-Forwarded-For. The client at 203.0.113.45 connects over TCP to an AWS NLB (L4 load balancer), which sends a binary PROXY v2 header to Nginx (L7 reverse proxy): "\r\n\r\n\0\r\nQUIT\n [v2, PROXY | IPv4 | TCP] [203.0.113.45:52312 -> 198.51.100.1:443] [TLVs...]" prepended to the raw TCP payload. Nginx reads the PROXY header, then forwards HTTP to the app server with "GET /api/data HTTP/1.1" and "X-Forwarded-For: 203.0.113.45".]

Adoption: Where PROXY Protocol Is Used

PROXY protocol has become the de facto standard for L4 client IP preservation. Every major proxy, load balancer, and cloud provider supports it.

HAProxy

HAProxy is the origin of PROXY protocol. It can both send and receive v1 and v2 headers. Sending is enabled per backend server with the send-proxy (v1) or send-proxy-v2 (v2) keyword. Receiving is enabled per frontend bind with the accept-proxy keyword:

# HAProxy: sending PROXY protocol to backends
backend web_servers
    mode tcp
    server web1 10.0.1.10:8080 send-proxy-v2 check

# HAProxy: accepting PROXY protocol from upstream
frontend https_in
    bind :443 accept-proxy
    default_backend app_servers

HAProxy also supports send-proxy-v2-ssl and send-proxy-v2-ssl-cn to include TLS session information in the v2 header's SSL TLV.

Nginx

Nginx supports PROXY protocol on both the receiving and sending sides, though the configuration differs between the stream (L4) and http (L7) modules:

# Nginx: accepting PROXY protocol on HTTP
server {
    listen 80 proxy_protocol;
    listen 443 ssl proxy_protocol;

    # Use the real client IP from PROXY header
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    # Pass the client IP to application
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}

# Nginx: accepting PROXY protocol on TCP stream
stream {
    server {
        listen 3306 proxy_protocol;
        proxy_pass mysql_backend;
        proxy_protocol on;  # send PROXY to upstream
    }
}

The $proxy_protocol_addr and $proxy_protocol_port variables expose the client's real address from the PROXY header, making it available for logging, access control, and forwarding.

AWS Elastic Load Balancing

AWS Network Load Balancer (NLB) supports PROXY protocol v2 as a target group attribute. When enabled, the NLB prepends a PPv2 header to every connection forwarded to targets. This is the primary mechanism for preserving client IPs through NLB, since NLB operates at Layer 4 and cannot modify HTTP headers.

NLB adds the AWS VPC endpoint ID TLV (type 0xEA) when traffic arrives through PrivateLink, enabling multi-tenant SaaS architectures where the backend needs to identify which customer's VPC endpoint initiated the connection.

AWS Application Load Balancer (ALB) does not use PROXY protocol because it operates at Layer 7 and uses X-Forwarded-For headers instead. However, when an ALB is placed behind an NLB (a common pattern for combining L4 and L7 features), the NLB can send PROXY protocol to the ALB, which then extracts the client IP and propagates it via XFF.

Cloudflare

Cloudflare supports PROXY protocol v1 and v2 on Spectrum, its L4 proxy product. When Spectrum proxies arbitrary TCP traffic to an origin server, enabling PROXY protocol is the only way for the origin to learn the client's real IP address, since the traffic may not be HTTP (and thus XFF is not available). Cloudflare's HTTP products use their own CF-Connecting-IP and X-Forwarded-For headers for client IP propagation at Layer 7.

Other Implementations

Beyond the products above, PROXY protocol support is widespread: Envoy and Traefik can both send and receive it, Varnish accepts it on its listeners, stunnel supports it for TLS tunnels, and mail servers such as Postfix and Dovecot can accept it from upstream proxies so that SMTP and IMAP logs show real client addresses.

Security Considerations

PROXY protocol is a powerful mechanism, and like all trust-based protocols, it introduces significant security risks if misconfigured.

Trust Boundary: Accept Only from Known Sources

The fundamental security rule: only accept PROXY protocol from trusted sources. The PROXY header is self-asserted — the sender claims "the real client is 203.0.113.45" and the receiver believes it. If an attacker can connect directly to a backend that accepts PROXY protocol, they can forge any client IP address.

Consider a backend server that accepts PROXY protocol and uses the client IP for access control. If port 8080 is exposed to the internet (even accidentally), an attacker can connect directly and send:

PROXY TCP4 10.0.0.1 198.51.100.1 1234 8080\r\n

The backend now believes the connection came from 10.0.0.1, potentially bypassing IP-based access controls that trust internal addresses. This is analogous to XFF spoofing, but harder to detect because it happens below the application layer.

Mitigations are straightforward but non-negotiable: never expose a PROXY-accepting listener to untrusted networks; restrict it with firewall rules or security groups so that only the proxy's addresses can connect; and, where the software supports it, limit which source addresses are trusted to send PROXY headers (for example, Nginx's set_real_ip_from directive).

Connection-Level vs Request-Level

PROXY protocol is per-connection, not per-request. For HTTP/1.1 with keep-alive or HTTP/2 multiplexing, a single TCP connection may carry many HTTP requests from the same client. The PROXY header is read once at connection establishment. This is correct behavior for L4 forwarding: the client IP does not change during a TCP connection. It also means PROXY protocol cannot attribute individual requests when an L7 proxy pools and reuses backend connections across many clients; per-request attribution in that scenario is the job of X-Forwarded-For.

No Encryption or Authentication

PROXY protocol headers are sent in plaintext (even v2's binary format is not encrypted). If the network path between the proxy and backend is untrusted, an attacker who can perform a man-in-the-middle attack could modify the PROXY header to change the claimed client IP. In practice, proxy-to-backend connections traverse trusted internal networks, VPCs, or Unix domain sockets, so this is rarely an issue. For environments requiring transport security between proxy and backend, use TLS on the backend connection, with the PROXY header sent inside the encrypted tunnel.

The PPv2 CRC32c TLV (type 0x03) provides integrity checking but not authentication — it detects accidental corruption but cannot prevent deliberate tampering by an attacker who can recalculate the checksum.

Health Checks and the LOCAL Command

One of the most common operational issues with PROXY protocol is health check compatibility. When a load balancer is configured to send PROXY protocol to backends, its health check probes must also include PROXY protocol headers. Otherwise, the backend rejects the health check connection (because it expects a PROXY header and receives bare HTTP or TCP data), marks the backend as unhealthy, and removes it from rotation — a false positive that takes down the entire service.

PPv2's LOCAL command solves this cleanly. When the proxy sends a health check, it prepends a PPv2 header with the LOCAL command (ver/cmd byte = 0x20). The backend reads the header, sees LOCAL, and treats the connection as a direct (non-proxied) connection using the real socket addresses. This works because the health check is genuinely from the proxy, not a proxied client connection.

In HAProxy, this happens automatically when send-proxy-v2 is used with health checks. HAProxy sends a LOCAL PROXY v2 header for health check connections and a PROXY v2 header with the client address for real traffic:

backend web_servers
    mode tcp
    option httpchk GET /health
    server web1 10.0.1.10:8080 send-proxy-v2 check
    # Health checks automatically use LOCAL command
    # Real traffic uses PROXY command with client addresses

PPv1 has no equivalent of LOCAL. In v1 deployments, health checks either use PROXY UNKNOWN\r\n (which some backends accept) or are sent on a separate port that does not expect PROXY protocol headers.

Implementation Details and Edge Cases

Connection Splicing and Zero-Copy

After reading the PROXY header, high-performance proxies like HAProxy can use connection splicing (splice() on Linux) to forward data between the client and backend connections without copying bytes through userspace. The PROXY header is the only data the proxy needs to read and write to the backend connection — the remaining data can be spliced at kernel level. This is particularly effective in TCP mode where the proxy has no reason to inspect application data.

Timeout Considerations

When PROXY protocol is enabled, the backend must read the header within its initial connection timeout. If the proxy is slow to send the header (or does not send it at all because of a misconfiguration), the backend's accept timeout will fire and close the connection. This timeout is separate from the application protocol timeout — Nginx uses proxy_protocol_timeout (default 30s) to control how long it waits for the PROXY header before giving up.

Protocol Detection

Some implementations support auto-detection of PROXY protocol. They peek at the first bytes of a new connection: if the bytes match the PPv1 signature (PROXY ) or the PPv2 12-byte signature, they parse a PROXY header; otherwise they treat the connection as a direct connection without PROXY protocol. This is useful during migration (enabling PROXY protocol on some upstreams before others), but introduces a small parsing overhead on every connection and expands the attack surface slightly — an attacker could send crafted data starting with PROXY to confuse the parser.
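The detection logic amounts to a prefix comparison against the two signatures. A minimal sketch (detect_proxy_protocol is a hypothetical helper; a real server would peek at the socket with MSG_PEEK so that non-PROXY bytes are not consumed):

```python
PP1_SIG = b"PROXY "                     # v1: literal keyword plus space
PP2_SIG = b"\r\n\r\n\x00\r\nQUIT\n"     # v2: 12-byte binary signature

def detect_proxy_protocol(first_bytes: bytes) -> str:
    """Classify the first bytes of a connection as PPv1, PPv2, or plain data."""
    if first_bytes.startswith(PP2_SIG):
        return "v2"
    if first_bytes.startswith(PP1_SIG):
        return "v1"
    return "none"
```

The "none" path is what makes migration possible: connections from not-yet-upgraded upstreams are simply treated as direct connections.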

UDP and QUIC

PROXY protocol was designed for connection-oriented TCP streams, but PPv2 includes a DGRAM transport type for UDP. In practice, UDP support is less common because UDP is connectionless — there is no "connection setup" moment where a header can be naturally prepended. For QUIC (HTTP/3), which runs over UDP, PROXY protocol support is emerging but not yet standardized. The QUIC protocol itself carries client addresses in its headers, so the need for an external mechanism like PROXY protocol is reduced.

Maximum Header Size

PPv1 headers are limited to 108 bytes by specification. PPv2 headers use a 16-bit length field, supporting up to 65,535 bytes of address and TLV data. In practice, PPv2 headers with typical TLVs (SSL info, AWS VPC endpoint, unique ID) are under 200 bytes. A receiver can always read the 16-byte fixed header first and then read exactly the number of additional bytes declared in the length field; a 232-byte buffer (the 16-byte fixed header plus 216 bytes, the size of the largest defined address block: two 108-byte AF_UNIX socket paths) covers every header without TLVs.

Practical Deployment Patterns

TLS Passthrough with Client IP

A common pattern is using an L4 proxy for TLS passthrough (the backend terminates TLS) while still preserving the client IP. The proxy inspects the TLS ClientHello to read the SNI for routing, then prepends a PROXY header and forwards the raw TLS stream:

# HAProxy TLS passthrough with PROXY protocol
frontend tls_front
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }

    use_backend sni_app1 if { req.ssl_sni -i app1.example.com }
    use_backend sni_app2 if { req.ssl_sni -i app2.example.com }

backend sni_app1
    mode tcp
    server app1 10.0.1.10:443 send-proxy-v2 check

The backend receives the PROXY header followed by the client's raw TLS ClientHello. It reads the PROXY header to obtain the client IP, then processes the TLS handshake normally. The backend has both the client's real IP and the ability to terminate TLS with its own certificate.

Multi-Tier Proxy Chains

In large deployments, traffic may pass through multiple proxy layers: a CDN edge, a regional load balancer, and a local reverse proxy. PROXY protocol only covers a single hop — each proxy reads the incoming PROXY header (if any), uses the client IP internally, and writes a new PROXY header on the outgoing connection.

The recommended pattern is: use PROXY protocol for L4 hops and switch to XFF at the first L7 hop. The L7 proxy (Nginx, HAProxy in HTTP mode, Envoy) reads the PROXY header, extracts the client IP, and injects it into X-Forwarded-For for all downstream HTTP services.

Kubernetes and Container Environments

In Kubernetes, PROXY protocol is commonly used between external load balancers (cloud NLBs or bare-metal MetalLB) and ingress controllers (Nginx Ingress, Traefik, Envoy-based Istio gateways). Without PROXY protocol or externalTrafficPolicy: Local, the client IP is lost when traffic crosses the Service network through kube-proxy's DNAT.

Enabling PROXY protocol in Kubernetes requires configuration on both sides: the cloud LB target group and the ingress controller. A common misconfiguration is enabling PROXY protocol on the LB but not the ingress controller (or vice versa), causing all traffic to fail.

Network Infrastructure and BGP

PROXY protocol operates at the application edge, but the underlying network infrastructure that routes client traffic to the proxy is managed by BGP. Understanding both layers gives you complete visibility into how traffic reaches your services.

When a client at IP 203.0.113.45 connects to your load balancer, that client's IP belongs to a prefix announced by an autonomous system. Looking up the IP in a BGP routing table tells you which network the client originates from, the AS path between your infrastructure and the client, and the geographic region of the announcing AS — all information that complements what PROXY protocol provides at the connection level.

Major infrastructure providers that implement PROXY protocol are themselves participants in the global BGP routing system. AWS NLBs run within AS16509 (Amazon), Cloudflare Spectrum operates from AS13335, and Google Cloud load balancers route through AS15169. The PROXY protocol header tells you the client's IP; the god.ad BGP Looking Glass tells you everything about the network that IP belongs to.

Look up any IP address or ASN to explore the BGP routing infrastructure behind the client connections flowing through your proxies and load balancers.
