How NAT Works: Network Address Translation Explained
Network Address Translation (NAT) is the mechanism that allows millions of devices on private networks to share a single public IP address when communicating on the internet. If you are reading this from a home network, a coffee shop, or an office, your device almost certainly has a private IP address like 192.168.1.x or 10.0.0.x — and a NAT device (typically your router) is translating that address to the single public IP your ISP assigned you. You can look up that public address in the looking glass to see how it routes on the internet.
NAT was introduced in the mid-1990s as a stopgap to slow IPv4 address exhaustion. IPv4 provides roughly 4.3 billion addresses — far too few for every device to have its own. NAT solved this by allowing entire networks to hide behind a single public address, effectively decoupling the internal network topology from the public internet. What was meant as a temporary fix became permanent infrastructure. NAT now sits in the forwarding path of virtually every packet on the internet, and its presence has profoundly shaped protocol design, application architecture, and the structure of the network itself.
Private Address Space: The Foundation of NAT
NAT depends on the existence of private IP address ranges, defined in RFC 1918. These addresses are not routable on the public internet — no autonomous system should announce them in BGP, and any packet with a private source or destination address that reaches the internet backbone is dropped by convention. The three reserved ranges are:
| 10.0.0.0/8 | 16,777,216 addresses | Large enterprises, cloud providers |
| 172.16.0.0/12 | 1,048,576 addresses | Medium networks |
| 192.168.0.0/16 | 65,536 addresses | Home and small office networks |
Any organization can use these addresses internally without coordination or registration. Your home router assigns addresses from 192.168.0.0/24 or 192.168.1.0/24; a corporate data center might use 10.0.0.0/8 for thousands of servers. Multiple organizations can — and do — use identical private addresses simultaneously, since these addresses never appear on the public internet. NAT is the boundary that makes this possible: it translates between the private, unroutable addresses inside and the public, globally unique addresses outside.
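Because the three RFC 1918 ranges are fixed, checking whether an address is private is a simple containment test. A minimal sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

# The three RFC 1918 private ranges
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls within any RFC 1918 private range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)
```

For example, is_rfc1918("172.31.255.1") is True — 172.16.0.0/12 runs through 172.31.255.255 — while is_rfc1918("172.32.0.1") is False, a boundary that regularly surprises people.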
How NAT Works: The Translation Table
At its core, NAT is a packet-rewriting mechanism. A NAT device (router, firewall, or load balancer) sits at the boundary between the private network and the public internet. When a packet crosses that boundary, the NAT device modifies the IP header — rewriting source or destination addresses, and often port numbers — and maintains a translation table (also called a NAT table or session table) to ensure that return traffic is correctly mapped back to the original sender.
Consider the simplest case: a device at 192.168.1.100 wants to reach a web server at 93.184.216.34. The NAT router has a public address of 203.0.113.5. The process works as follows:
- The internal device sends a TCP SYN packet: source 192.168.1.100:52431, destination 93.184.216.34:443
- The NAT router intercepts the packet, rewrites the source address to 203.0.113.5:52431 (or assigns a different port), and forwards it to the internet
- The NAT router records the mapping in its translation table: 192.168.1.100:52431 ↔ 203.0.113.5:52431 → 93.184.216.34:443
- The web server receives a packet from 203.0.113.5:52431 and replies to that address
- The NAT router receives the reply, looks up the mapping, rewrites the destination back to 192.168.1.100:52431, and forwards it to the internal network
The remote server never sees the private address. From its perspective, the connection came from 203.0.113.5. If you look up 203.0.113.5 in the looking glass, you see the prefix and AS path for that public address — the private addresses behind it are invisible to the routing system.
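The translation logic in this example can be sketched as a toy session table — illustrative only, with the example's public address hard-coded and port collisions ignored; a real NAT rewrites headers in the kernel's forwarding path:

```python
PUBLIC_IP = "203.0.113.5"  # the router's public address (from the example)

# Session table: (private_ip, private_port, dst_ip, dst_port) -> external port
nat_table = {}

def translate_outbound(src_ip, src_port, dst_ip, dst_port):
    """Rewrite the source of an outgoing packet and record the mapping."""
    key = (src_ip, src_port, dst_ip, dst_port)
    ext_port = nat_table.setdefault(key, src_port)  # keep the port when possible
    return (PUBLIC_IP, ext_port, dst_ip, dst_port)

def translate_inbound(ext_port, remote_ip, remote_port):
    """Map a reply back to the internal sender, or None (drop) if unsolicited."""
    for (src_ip, src_port, dst_ip, dst_port), port in nat_table.items():
        if port == ext_port and (dst_ip, dst_port) == (remote_ip, remote_port):
            return (src_ip, src_port)
    return None
```

Translating the example SYN (192.168.1.100:52431 → 93.184.216.34:443) yields a packet sourced from 203.0.113.5:52431; the server's reply to that port maps back to 192.168.1.100:52431, while a packet from any other remote endpoint finds no mapping and would be dropped.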
Types of NAT
NAT is not a single technique — it is a family of address-translation strategies, each suited to different scenarios. The terminology can be confusing because different RFCs, vendors, and textbooks use overlapping names, but the core mechanisms are distinct.
Source NAT (SNAT)
Source NAT rewrites the source address of outgoing packets. This is the most common form — the one described above. When an internal device initiates a connection to the internet, the NAT device replaces the private source address with its own public address. Return traffic is translated back using the session table.
SNAT is the default behavior of every home router. It is also used extensively in cloud environments: when a VM in AWS or GCP with a private address makes an outbound request, a NAT gateway performs SNAT to give it a routable source address. Linux implements SNAT via iptables / nftables with the SNAT or MASQUERADE target — the difference being that MASQUERADE dynamically uses the outgoing interface's current address, making it suitable for interfaces with dynamic IPs (like a home broadband connection).
Destination NAT (DNAT)
Destination NAT rewrites the destination address of incoming packets. This is used to expose internal servers to the internet. If you run a web server at 192.168.1.50 and configure a DNAT rule (often called port forwarding) on your router, incoming connections to 203.0.113.5:443 are translated to 192.168.1.50:443.
DNAT is the basis of load balancing. A load balancer receives connections on a public virtual IP (VIP) and rewrites the destination to one of several backend servers. The backend servers see the original client's source address (unless the load balancer also performs SNAT). In Linux, DNAT is implemented with the DNAT target in the PREROUTING chain. Kubernetes NodePort and ClusterIP services rely on DNAT rules (via iptables or IPVS) to route traffic to the correct pod.
PAT / NAPT: Port Address Translation
When most people say "NAT," they actually mean PAT (Port Address Translation), also called NAPT (Network Address Port Translation). PAT allows many internal devices to share a single public IP address by using different source ports to distinguish between connections.
Without PAT, basic NAT would need one public IP per simultaneous internal device — defeating the purpose. PAT solves this by including the transport-layer port number in the translation mapping. Two internal devices can both connect to the same external server, and the NAT router differentiates them by assigning different external source ports:
| Internal | External (after PAT) | Destination |
| 192.168.1.100:52431 | 203.0.113.5:52431 | 93.184.216.34:443 |
| 192.168.1.101:52431 | 203.0.113.5:52432 | 93.184.216.34:443 |
| 192.168.1.102:38200 | 203.0.113.5:38200 | 93.184.216.34:443 |
Notice that .100 and .101 both used source port 52431 internally, but the NAT router assigned a different external port to .101. The port space is 16 bits (ports 1024-65535 for ephemeral use), so a single public IP can theoretically support roughly 64,000 simultaneous mappings per destination. In practice, with multiple destinations, a single PAT address can handle hundreds of thousands of concurrent connections.
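The collision handling shown in the table can be sketched as a simple port allocator — a toy model; real NAT implementations use smarter selection (randomization, per-destination reuse, reserved blocks):

```python
def allocate_external_port(used_ports: set, wanted: int) -> int:
    """Prefer the internal source port; on collision, take the next free one."""
    port = wanted
    while port in used_ports:
        port = port + 1 if port < 65535 else 1024  # wrap within the usable space
    used_ports.add(port)
    return port

used = set()
p1 = allocate_external_port(used, 52431)  # .100 keeps 52431 (port preserved)
p2 = allocate_external_port(used, 52431)  # .101 collides, gets 52432
p3 = allocate_external_port(used, 38200)  # .102 keeps 38200 (port preserved)
```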
Static NAT vs Dynamic NAT
Static NAT creates a permanent one-to-one mapping between an internal address and an external address. It is used for servers that must be consistently reachable at a fixed public address. The mapping exists whether or not traffic is flowing. AWS Elastic IPs and Azure static public IPs are examples of static NAT assignments.
Dynamic NAT assigns public addresses from a pool on demand. When an internal device initiates a connection, it gets the next available public address from the pool. When the session ends, the address returns to the pool. If the pool is exhausted, new connections fail — which is why dynamic NAT without PAT is rarely used in modern networks.
Connection Tracking: The State Machine Behind NAT
NAT cannot function without connection tracking (also called stateful inspection). The NAT device must maintain state for every active connection so it can correctly translate return traffic. This state tracking is one of the most performance-critical components in any NAT implementation.
For TCP, the connection tracker monitors the three-way handshake (SYN, SYN-ACK, ACK), the data transfer phase, and the four-way teardown (FIN, ACK, FIN, ACK). Each state transition updates the NAT entry's timer. A typical TCP NAT entry persists for the duration of the connection plus a short timeout after the final FIN (often 120 seconds). An established TCP connection that goes idle may have its entry kept far longer — Linux's nf_conntrack defaults to a 5-day timeout for established TCP connections.
For UDP, there is no connection state in the protocol itself, so the connection tracker creates a pseudo-connection based on the 5-tuple (source IP, source port, destination IP, destination port, protocol). UDP NAT entries typically time out after 30-180 seconds of inactivity. This short timeout is why UDP-based protocols like VoIP and gaming sometimes experience connectivity issues behind NAT — if the entry expires, return traffic is dropped.
For ICMP (used by ping and traceroute), NAT uses the ICMP identifier field as a pseudo-port to track sessions. Each ping gets a unique identifier, allowing the NAT to map replies back to the correct internal host.
The connection tracking table has a finite size. On Linux, nf_conntrack_max commonly defaults to 262144 entries (the exact default scales with available memory). On a busy NAT gateway, exhausting the connection tracking table causes all new connections to be dropped — a common failure mode on underprovisioned NAT gateways. Monitoring the /proc/net/nf_conntrack table size is essential for any Linux-based NAT deployment.
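The mechanics described above — per-flow idle timers plus a hard cap on table size — can be sketched as follows. The timeout value and class shape are illustrative, not the kernel's actual data structures:

```python
import time

UDP_TIMEOUT = 30.0  # seconds of inactivity before a UDP entry expires (illustrative)

class ConntrackTable:
    def __init__(self, max_entries: int = 262144):
        self.max_entries = max_entries
        self.entries = {}  # 5-tuple -> last-seen timestamp

    def note_packet(self, five_tuple, now=None):
        """Refresh (or create) the entry for a flow; drop if the table is full."""
        now = time.monotonic() if now is None else now
        self._expire(now)
        if five_tuple not in self.entries and len(self.entries) >= self.max_entries:
            raise OverflowError("conntrack: table full, dropping packet")
        self.entries[five_tuple] = now

    def _expire(self, now):
        """Remove entries idle longer than the timeout."""
        for k in [k for k, t in self.entries.items() if now - t > UDP_TIMEOUT]:
            del self.entries[k]
```

With max_entries set to 1, a second distinct flow is refused until the first goes idle past the timeout — the same behavior, at toy scale, as a CGNAT gateway under port-scan or SYN-flood load.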
NAT Traversal: The Problem with Inbound Connections
NAT creates a fundamental asymmetry: outbound connections work transparently, but inbound connections to devices behind NAT are blocked by default. The NAT router has no translation entry for unsolicited inbound packets, so it drops them. This breaks any protocol that requires both endpoints to accept incoming connections — which includes VoIP, video conferencing, peer-to-peer file sharing, online gaming, and WebRTC.
Over the decades, a suite of protocols and techniques has evolved to work around this limitation. Collectively, these are called NAT traversal.
STUN: Discovering Your Public Address
STUN (Session Traversal Utilities for NAT), defined in RFC 5389, is the simplest NAT traversal mechanism. A STUN client sends a request to a public STUN server, which replies with the client's observed public IP address and port — the address as seen after NAT translation. This tells the client what its "reflexive transport address" is.
Once two peers behind NAT know their public addresses (via STUN), they can attempt to communicate directly. This works when the NAT is well-behaved — specifically, when the external port assigned by NAT is the same regardless of the destination. RFC 4787 classifies NAT behavior types:
- Endpoint-Independent Mapping (EIM) — The NAT assigns the same external port for a given internal IP:port, regardless of destination. STUN works with this type because the address learned from the STUN server is the same one the peer will see.
- Address-Dependent Mapping — The NAT assigns different external ports for different destination IPs. STUN still works, but only if the peer is at the same IP as the STUN server used for discovery.
- Address-and-Port-Dependent Mapping — The NAT assigns different external ports for different destination IP:port pairs. This is the most restrictive type and makes direct peer-to-peer communication via STUN nearly impossible.
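The STUN request itself is tiny: a 20-byte header with no attributes. A sketch of building a Binding Request per RFC 5389 (the wire format is real; actually sending it to a STUN server is left out):

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def stun_binding_request() -> bytes:
    """Build a minimal STUN Binding Request: type, length, cookie, transaction ID."""
    msg_type = 0x0001      # Binding Request
    msg_length = 0         # no attributes follow the header
    transaction_id = os.urandom(12)
    return struct.pack("!HHI", msg_type, msg_length, MAGIC_COOKIE) + transaction_id

req = stun_binding_request()  # 20 bytes: the fixed STUN header size
```

Sent over UDP to a public STUN server (conventionally port 3478), the Binding Response carries an XOR-MAPPED-ADDRESS attribute containing the client's public IP:port as seen after NAT translation — the reflexive transport address.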
TURN: Relay When Direct Connection Fails
TURN (Traversal Using Relays around NAT), defined in RFC 5766, is the fallback when STUN fails. Instead of a direct connection, both peers send their traffic through a TURN relay server on the public internet. The relay has a public IP address and can accept connections from both peers, then forwards traffic between them.
TURN always works because both peers are making outbound connections to the relay, which NAT handles naturally. The cost is latency (traffic takes a detour through the relay) and bandwidth (the relay operator must pay for all the traffic). For this reason, TURN is a last resort — used only when direct communication is impossible.
ICE: Putting It All Together
ICE (Interactive Connectivity Establishment), defined in RFC 8445, is the framework that combines STUN and TURN into a systematic connection process. ICE is used by WebRTC, SIP, and other real-time communication protocols. It works as follows:
- Gather candidates — Each peer collects a list of possible addresses: its local address (host candidate), its STUN-discovered public address (server-reflexive candidate), and a TURN relay address (relay candidate).
- Exchange candidates — Peers exchange their candidate lists through a signaling server (via SIP, WebSocket, or any out-of-band channel).
- Connectivity checks — Each peer attempts STUN binding requests to every candidate of the other peer, in priority order. Host candidates are tried first (fastest if both peers are on the same network), then server-reflexive (direct peer-to-peer through NAT), then relay (TURN fallback).
- Select best path — The highest-priority candidate pair that successfully completes a connectivity check becomes the active path.
In practice, ICE achieves direct peer-to-peer connectivity about 80-90% of the time, falling back to TURN relay for the remainder (typically when one or both peers are behind symmetric NAT or a strict corporate firewall).
Hole Punching
UDP hole punching is the specific technique that makes direct peer-to-peer connections possible through NAT. The basic idea: if both peers send packets to each other's STUN-discovered public addresses simultaneously, each outbound packet creates a NAT mapping that allows the other peer's packets in. The "hole" is the NAT table entry created by the outbound packet.
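The simultaneous-send pattern can be demonstrated locally. This sketch runs both "peers" on loopback, so there is no real NAT in the path — the point is the packet choreography, with each sendto standing in for the outbound packet that would open a NAT mapping:

```python
import socket

# Two "peers", each bound to an ephemeral UDP port on loopback
peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_a.bind(("127.0.0.1", 0))
peer_b.bind(("127.0.0.1", 0))
peer_a.settimeout(2.0)
peer_b.settimeout(2.0)

# In real hole punching these addresses come from STUN plus a signaling channel;
# here we just read them off the sockets
addr_a = peer_a.getsockname()
addr_b = peer_b.getsockname()

# Simultaneous sends: each outbound packet would create the NAT mapping
# that lets the other peer's packet in
peer_a.sendto(b"punch", addr_b)
peer_b.sendto(b"punch", addr_a)

msg_b, _ = peer_b.recvfrom(1024)
msg_a, _ = peer_a.recvfrom(1024)
```

Behind real NATs the first packet in each direction may be dropped (it arrives before the far side's mapping exists), which is why implementations retransmit the punch packets for a few seconds.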
TCP hole punching is also possible but more complex, requiring simultaneous SYN packets from both sides — a technique supported by some operating systems via the SO_REUSEADDR and SO_REUSEPORT socket options. It is less commonly used because TCP's three-way handshake makes the timing trickier.
Carrier-Grade NAT (CGNAT)
As IPv4 address exhaustion worsened, ISPs began deploying Carrier-Grade NAT (CGNAT), also called Large-Scale NAT (LSN), defined in RFC 6888. CGNAT places a NAT device inside the ISP's network, adding a second layer of translation on top of the customer's home NAT. This means your traffic passes through two NAT devices: your home router's NAT and the ISP's CGNAT.
With CGNAT, the address your home router receives from the ISP is not a public address at all — it is from the 100.64.0.0/10 range, the "shared address space" defined in RFC 6598 specifically for CGNAT. The ISP's CGNAT device then translates this to a genuine public address shared by hundreds or thousands of subscribers.
CGNAT has significant implications:
- Port forwarding is impossible — You cannot configure DNAT on a device you do not control. Running a server behind CGNAT requires either IPv6 (which bypasses CGNAT) or a tunneling service.
- Port exhaustion — With thousands of subscribers sharing a public IP, the 65,535-port limit becomes a real constraint. ISPs typically allocate a port block to each subscriber (e.g., 1,000-2,000 ports), limiting simultaneous connections.
- Logging and compliance — When law enforcement traces an IP address involved in abuse, the ISP must correlate the timestamp and port number against CGNAT logs to identify the specific subscriber. This requires logging every NAT translation — generating enormous volumes of log data (RFC 6302).
- Application breakage — Double NAT makes NAT traversal harder. Some applications fail behind CGNAT because their NAT traversal logic does not account for two translation layers.
- Geo-IP inaccuracy — Many subscribers share one public IP, so geolocation databases may map that address to a generic region rather than any subscriber's actual location.
You can detect CGNAT by checking whether your router's WAN address is in the 100.64.0.0/10 range. If it is, you are behind CGNAT. Many mobile carriers (especially in developing regions) and some fixed-line ISPs in Asia, Europe, and Latin America use CGNAT extensively. Looking up your public IP in the looking glass shows the ISP's shared address and the AS it belongs to — but dozens or hundreds of other subscribers share that same address.
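That check is easy to automate — a sketch using the standard ipaddress module (the function name is illustrative; the WAN address would come from your router's status page or UPnP):

```python
import ipaddress

SHARED_SPACE = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

def behind_cgnat(wan_ip: str) -> bool:
    """True if the router's WAN address falls in the CGNAT shared address space."""
    return ipaddress.ip_address(wan_ip) in SHARED_SPACE
```

For example, behind_cgnat("100.72.13.5") is True, while behind_cgnat("203.0.113.5") is False. A WAN address in 10.0.0.0/8 or 192.168.0.0/16 would also indicate an upstream NAT, just not one using RFC 6598 space.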
NAT and Protocol Complications
NAT was designed in an era when protocols used a single connection between a known source and destination. Many protocols that came before (or did not anticipate) NAT carry IP addresses inside their payloads — not just in the IP header that NAT modifies. This creates a category of problems called embedded address issues.
FTP: The Classic NAT Headache
FTP's active mode has the client tell the server to connect back to it on a specified port (via the PORT command), embedding its IP address in the application-layer data. NAT rewrites the IP header but not the FTP payload, so the server receives a private IP address it cannot reach. The solution was the ALG (Application-Level Gateway) — a NAT module that inspects FTP traffic and rewrites embedded addresses. Linux's nf_nat_ftp kernel module does this, and most home routers include an FTP ALG.
FTP passive mode (PASV) reverses the data connection direction, which works better with NAT since the client initiates the connection. This is why passive mode became the de facto standard for FTP through NAT.
SIP and VoIP
SIP (Session Initiation Protocol) for VoIP embeds IP addresses and port numbers in its SDP (Session Description Protocol) body to tell the peer where to send media streams. Behind NAT, these embedded addresses are private and unreachable. SIP ALGs on home routers are notoriously buggy — they often mangle SIP messages, causing one-way audio or failed call setup. Many VoIP providers instruct users to disable the SIP ALG and rely on STUN/TURN/ICE instead.
IPsec
IPsec in its original form is incompatible with NAT. AH (Authentication Header) includes the IP addresses in its integrity check, so NAT's address rewriting invalidates the authentication. ESP (Encapsulating Security Payload) does not authenticate the outer header, but NAT cannot inspect the encrypted payload to perform port translation. The solution is NAT-T (NAT Traversal for IPsec), defined in RFCs 3947 and 3948, which encapsulates ESP packets in UDP on port 4500, giving NAT a port number to translate.
NAT Behavioral Requirements
Not all NAT implementations behave the same way. RFC 4787 (for UDP) and RFC 7857 (for TCP) define behavioral requirements that NAT devices should follow for maximum compatibility. Key requirements include:
- Endpoint-Independent Mapping — A NAT should assign the same external IP:port for all traffic from a given internal IP:port, regardless of destination. This is essential for STUN-based NAT traversal.
- Endpoint-Independent Filtering — A NAT should accept incoming packets on a mapped port from any external address, not just the address the outbound packet was sent to. Strict filtering (address-dependent or address-and-port-dependent) breaks many peer-to-peer protocols.
- Port preservation — When possible, the NAT should use the same external port as the internal port. This simplifies debugging and improves compatibility.
- Hairpinning — When an internal device sends traffic to the NAT's external address, the NAT should translate it back to the internal destination without sending it to the internet. This allows internal devices to reach each other via the public address.
- Deterministic timeout — NAT entries should have predictable timeouts. RFC 4787 recommends at least 2 minutes for UDP, with refresh on both inbound and outbound traffic.
Many cheap home routers violate these requirements, causing intermittent connectivity issues for applications that rely on NAT traversal. Enterprise and carrier-grade NAT devices generally comply, but behavior varies.
NAT in Linux: Netfilter and nftables
On Linux, NAT is implemented in the Netfilter framework, configured via iptables (legacy) or nftables (modern). The NAT process hooks into the packet processing pipeline at specific points:
- PREROUTING — DNAT happens here, before the routing decision. This allows the destination address rewrite to affect where the packet is forwarded.
- POSTROUTING — SNAT happens here, after the routing decision. The source address is rewritten just before the packet leaves the interface.
A typical Linux NAT gateway configuration:
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
# SNAT: masquerade all outbound traffic from the LAN
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
# DNAT: port forward port 443 to an internal web server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to 192.168.1.50:443
# Allow forwarded traffic
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
The conntrack subsystem automatically tracks connections and handles the reverse translation. You can inspect the current connection tracking table with conntrack -L to see every active NAT mapping, including protocol, addresses, ports, and state.
NAT and IPv6: Is NAT Still Necessary?
IPv6 was designed with enough addresses — 2^128, or roughly 340 undecillion — that NAT should be unnecessary. Every device can have a globally unique, publicly routable IPv6 address. The original IPv6 architecture explicitly rejected NAT, viewing it as a harmful workaround that broke the end-to-end principle of the internet.
In practice, NAT is far less common in IPv6, but it has not disappeared entirely:
- NPTv6 (Network Prefix Translation) — Defined in RFC 6296, NPTv6 translates between internal and external IPv6 prefixes without rewriting ports. Unlike NAT44, NPTv6 is stateless and one-to-one, preserving the end-to-end reachability of IPv6. It is used by organizations that want to renumber their network without changing internal addresses, or that want provider-independent addressing without owning their own IPv6 prefix.
- NAT64 — Translates between IPv6 and IPv4, allowing IPv6-only networks to reach IPv4-only servers. NAT64 (RFC 6146) is paired with DNS64 (RFC 6147), which synthesizes AAAA records for domains that only have A records. This is widely used by mobile carriers running IPv6-only networks — Apple requires all iOS apps to work on IPv6-only networks with NAT64.
- 464XLAT — Defined in RFC 6877, this combines a stateless SIIT translator (CLAT) on the client side with a stateful NAT64 translator (PLAT) on the provider side. It allows IPv4-only applications to work on IPv6-only networks by translating IPv4 packets to IPv6 on the device, then back to IPv4 at the provider's NAT64. Android uses 464XLAT extensively on IPv6-only mobile networks.
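The synthesis step DNS64 performs is mechanical: embed the 32-bit IPv4 address in the low bits of the NAT64 prefix. A sketch using the well-known prefix 64:ff9b::/96 defined in RFC 6052:

```python
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.ip_network("64:ff9b::/96")  # RFC 6052

def nat64_synthesize(ipv4: str) -> str:
    """Embed an IPv4 address in the low 32 bits of the NAT64 well-known prefix."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = int(WELL_KNOWN_PREFIX.network_address) | v4
    return str(ipaddress.IPv6Address(v6))
```

nat64_synthesize("192.0.2.1") returns "64:ff9b::c000:201" — the synthesized AAAA record a DNS64 resolver would hand an IPv6-only client for a host whose only A record is 192.0.2.1. The NAT64 gateway reverses the embedding to recover the IPv4 destination.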
The key difference between IPv4 NAT and IPv6 approaches is that IPv6 translation is primarily used for backward compatibility with IPv4, not for address conservation. As IPv6 deployment increases, the need for translation decreases. You can check whether an address is IPv4 or IPv6 in the looking glass — try looking up 2606:4700:4700::1111 (Cloudflare's IPv6 DNS) to see a native IPv6 route with no NAT involved.
How NAT Shaped the Internet
NAT was a pragmatic engineering decision that had profound unintended consequences for the internet's architecture. Understanding these effects explains why the internet works the way it does today.
The End of End-to-End
The internet was originally designed around the end-to-end principle: any host should be able to communicate directly with any other host. NAT broke this by making most hosts unreachable from the outside. The internet evolved from a symmetric network where every node was both a client and a server into an asymmetric network where most endpoints are clients behind NAT, and servers require special configurations (port forwarding, public IPs, or NAT traversal protocols) to be reachable.
This asymmetry drove the rise of the client-server model and centralized services. Peer-to-peer architectures, which require both endpoints to accept connections, became difficult to implement reliably. Email, which originally assumed any host could receive connections, had to adapt (though mail servers typically have public IPs). The modern web's architecture — clients making requests to centralized servers behind load balancers — fits naturally into a NAT-dominated network.
NAT as Accidental Security
NAT provides incidental security by blocking unsolicited inbound connections. A device behind NAT cannot be directly scanned or probed from the internet. This is not the same as a firewall — NAT does not inspect or filter traffic, and it does not protect against outbound-initiated attacks — but it provides a significant reduction in attack surface for consumer devices.
This "security" became a feature that users and administrators relied on, even though it was never NAT's intended purpose. It became an argument against IPv6 adoption: "If every device has a public IP, won't they all be attacked?" The answer is that IPv6 networks should use stateful firewalls (and the sheer size of IPv6 subnets makes scanning impractical), but the perception persists.
NAT Extended IPv4's Lifetime
IPv4 address exhaustion was predicted in the early 1990s. Without NAT, the internet would have run out of addresses years before IPv6 was ready. NAT, combined with CIDR (classless routing), delayed the crisis long enough for IPv6 to be developed, standardized, and gradually deployed. IANA allocated its last /8 blocks in 2011, and the Regional Internet Registries have been running out progressively since then. Yet the internet continues to grow on IPv4, largely because NAT allows thousands of devices to share each remaining address.
Ironically, NAT's success in extending IPv4's life also slowed IPv6 adoption. With NAT making IPv4 "good enough" for most use cases, the economic pressure to deploy IPv6 remained low for years. The result is the prolonged dual-stack transition period we are in now, where much of the internet supports both IPv4 (with NAT) and IPv6 (without).
Impact on Protocol Design
Every network protocol designed since the mid-1990s has had to account for NAT. HTTP works naturally with NAT because it is client-initiated. WebSocket maintains a persistent connection through NAT. WebRTC includes ICE for NAT traversal as a core component, not an afterthought. QUIC (HTTP/3) runs over UDP partly because UDP's connectionless nature makes NAT traversal simpler — and QUIC includes its own connection migration mechanism that handles NAT rebinding (when the NAT assigns a new external port mid-session).
Protocols that predate NAT, or that were designed without NAT in mind (FTP, SIP, H.323, many gaming protocols), required ALGs, protocol extensions, or workarounds. The history of internet protocol design is, in large part, a history of adapting to NAT.
NAT Performance and Scaling
NAT is not free. Every translated packet requires a table lookup and header rewrite. For high-throughput NAT devices (ISP CGNAT, cloud NAT gateways, container networking), performance is critical.
- Connection tracking memory — Each conntrack entry in Linux consumes roughly 300-400 bytes. At 1 million entries, that is 300-400 MB of kernel memory dedicated to tracking NAT state.
- Hash table contention — The conntrack table is a hash table. High connection rates cause hash collisions and lock contention, especially on multi-core systems. The nf_conntrack_buckets parameter controls the hash table size and should be tuned for the expected connection count.
- New connection rate — Creating new conntrack entries is more expensive than matching existing ones. A Linux NAT gateway can typically handle 50,000-200,000 new connections per second before becoming a bottleneck, depending on hardware. DDoS attacks that send SYN floods with randomized source ports create a new conntrack entry per packet, quickly exhausting the table.
- Hardware offload — Enterprise NAT devices and modern NICs support conntrack offload, where established flows are programmed into the NIC's flow table and translated in hardware, bypassing the kernel entirely. This reduces CPU overhead and can achieve line-rate NAT at 100 Gbps and above.
Cloud providers have productized NAT as managed services: AWS NAT Gateway, Google Cloud NAT, and Azure NAT Gateway all handle the scaling challenges behind a simple API. These services automatically scale conntrack tables, distribute traffic across multiple NAT instances, and handle port allocation — abstracting away the operational complexity of high-performance NAT.
Debugging NAT Issues
When something breaks behind NAT, the symptoms are often confusing: connections that work one way but not the other, intermittent timeouts, or protocols that partially work. A systematic approach helps:
- Identify your public IP — Use the looking glass or a service like curl ifconfig.me to see your public address. Compare it to your router's WAN address. If they differ, you are behind CGNAT or another upstream NAT.
- Check the conntrack table — On a Linux NAT gateway, conntrack -L shows all active mappings. Look for your connection's 5-tuple to verify the translation is happening.
- Watch for table exhaustion — dmesg | grep conntrack will show "table full, dropping packet" messages if the conntrack table is exhausted. Increase nf_conntrack_max or investigate why so many connections are open.
- Test with TCP and UDP separately — NAT handles them differently. If TCP works but UDP does not, the UDP timeout may be too short. If UDP works but TCP does not, a stateful firewall rule may be blocking the TCP handshake.
- Check for ALG interference — SIP ALGs on home routers cause frequent VoIP issues. Disabling the ALG (if your router allows it) often resolves one-way audio or registration failures.
- Verify hairpinning — If internal devices cannot reach your server via its public address, your router may not support hairpin NAT. Use the internal address instead, or configure split-horizon DNS.
Look Up Your NAT-Translated Address
Every connection you make to the internet passes through at least one NAT device. The public IP address that the world sees — the one after NAT translation — is the address that appears in the global BGP routing table, assigned to the prefix announced by your ISP's autonomous system. Use the god.ad looking glass to look up your public IP and see which network announces your prefix, the AS path traffic takes to reach you, and where your address fits in the global routing table.