How IP Multicast Works: IGMP, PIM, and Multicast Distribution Trees

IP Multicast is a network-layer communication model that allows a single source to send packets to a group of receivers simultaneously, without needing to send individual copies to each destination. Unlike unicast, where a packet travels a dedicated path from one source to one destination, and broadcast, where a packet is sent to every host on a network segment, multicast delivers traffic only to hosts that have explicitly expressed interest in receiving it. This one-to-many (or many-to-many) model is essential for applications like live video streaming, IPTV, real-time financial market data, software updates at scale, and routing protocol communication. The multicast address range 224.0.0.0/4 in IPv4 and ff00::/8 in IPv6 identifies groups rather than individual hosts, and a suite of protocols — BGP, IGMP, PIM, and MSDP — work together to build distribution trees that carry traffic from sources to receivers across complex network topologies.

Multicast matters for network operations because it fundamentally changes how traffic flows through a network. In a unicast model, sending a live video stream to 10,000 viewers requires 10,000 individual streams from the source, consuming bandwidth proportional to the number of receivers. With multicast, the source sends a single stream, and the network itself replicates packets only at branch points in the distribution tree. This means a 10 Mbps video stream to 10,000 receivers consumes roughly 10 Mbps at the source — not 100 Gbps. The tradeoff is complexity: multicast requires stateful forwarding in routers (each router must know which interfaces have interested receivers), group membership management, and specialized tree-building protocols that interact with both IGPs like OSPF and interdomain routing via BGP.
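The bandwidth arithmetic above can be sketched directly. This is a hedged back-of-envelope calculation that ignores header overhead and retransmissions; the function name and numbers are illustrative:

```python
def source_bandwidth_bps(stream_bps: int, receivers: int, multicast: bool) -> int:
    """Bandwidth leaving the source for one stream.

    Unicast: one copy per receiver. Multicast: one copy total, because
    routers replicate only at branch points in the distribution tree.
    """
    return stream_bps if multicast else stream_bps * receivers

stream = 10_000_000   # 10 Mbps video stream
viewers = 10_000

unicast = source_bandwidth_bps(stream, viewers, multicast=False)
mcast = source_bandwidth_bps(stream, viewers, multicast=True)
print(unicast)  # 100_000_000_000 -> 100 Gbps at the source
print(mcast)    # 10_000_000 -> 10 Mbps at the source
```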

Multicast Addressing

Multicast relies on a dedicated address space to identify groups. An IP address in the multicast range does not identify a single host — it identifies a group of zero or more receivers. Any host can join or leave a group at any time, and any host can send packets to a group address without being a member of that group.

IPv4 Multicast Addresses: 224.0.0.0/4

IPv4 reserves the Class D address space, 224.0.0.0 through 239.255.255.255, for multicast. This /4 block is subdivided into several ranges with different scoping and administrative purposes:

  224.0.0.0/24: Local network control. Never forwarded by routers; used by routing protocols (224.0.0.5 and 224.0.0.6 for OSPF, 224.0.0.9 for RIPv2, 224.0.0.13 for PIM).
  224.0.1.0/24: Internetwork control. Routable; used by protocols such as NTP (224.0.1.1).
  232.0.0.0/8: Source-Specific Multicast (SSM).
  233.0.0.0/8: GLOP addressing (RFC 3180), which embeds a 16-bit AS number in the middle two octets.
  239.0.0.0/8: Administratively scoped multicast (RFC 2365), the multicast analogue of RFC 1918 private address space.
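A small classifier makes these boundaries concrete. This sketch assumes the well-known IANA sub-range assignments (SSM in 232/8, GLOP in 233/8, administratively scoped 239/8):

```python
import ipaddress

# Well-known IPv4 multicast sub-ranges (IANA assignments).
RANGES = [
    ("224.0.0.0/24", "local network control (never forwarded)"),
    ("224.0.1.0/24", "internetwork control"),
    ("232.0.0.0/8",  "source-specific multicast (SSM)"),
    ("233.0.0.0/8",  "GLOP (AS-derived)"),
    ("239.0.0.0/8",  "administratively scoped"),
]

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        return "not multicast"
    for prefix, label in RANGES:
        if ip in ipaddress.ip_network(prefix):
            return label
    return "other multicast"

print(classify("224.0.0.5"))  # local network control (never forwarded)
print(classify("232.1.1.1"))  # source-specific multicast (SSM)
print(classify("239.1.1.1"))  # administratively scoped
print(classify("8.8.8.8"))    # not multicast
```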

IPv6 Multicast Addresses: ff00::/8

IPv6 was designed with multicast as a first-class citizen — there is no broadcast in IPv6, so many functions that used broadcast in IPv4 use multicast instead. IPv6 multicast addresses begin with ff (0xff) and encode the scope and group identity in a structured format:

  ff[flags][scope]:[group ID]

  Flags (4 bits):
    0 = permanently assigned (well-known)
    1 = transient (dynamically assigned)

  Scope (4 bits):
    1 = interface-local
    2 = link-local
    5 = site-local
    8 = organization-local
    e = global

Key IPv6 multicast addresses include ff02::1 (all nodes, link-local — replaces IPv4 broadcast), ff02::2 (all routers, link-local), ff02::5 (OSPFv3 AllSPFRouters), ff02::9 (RIPng), ff02::d (PIM routers), and ff02::1:ff00:0/104 (solicited-node multicast, used by Neighbor Discovery Protocol to efficiently resolve addresses without broadcasting ARP-like requests to every host on the link).
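Decoding the flags and scope fields is a matter of inspecting the second byte of the address. A minimal decoder, using only the standard library:

```python
import ipaddress

SCOPES = {0x1: "interface-local", 0x2: "link-local", 0x5: "site-local",
          0x8: "organization-local", 0xE: "global"}

def decode_v6_multicast(addr: str) -> tuple:
    """Return (flags, scope) for an IPv6 multicast address.

    Byte 0 must be 0xff; byte 1 holds the 4 flag bits, then 4 scope bits.
    """
    packed = ipaddress.IPv6Address(addr).packed
    assert packed[0] == 0xFF, "not an IPv6 multicast address"
    flags, scope = packed[1] >> 4, packed[1] & 0xF
    flag_desc = "transient" if flags & 0x1 else "permanent (well-known)"
    return flag_desc, SCOPES.get(scope, "scope %x" % scope)

print(decode_v6_multicast("ff02::1"))         # ('permanent (well-known)', 'link-local')
print(decode_v6_multicast("ff3e::8000:1"))    # ('transient', 'global')
```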

Layer 2 Multicast Address Mapping

Multicast packets need a Layer 2 destination address too. For IPv4, the IEEE allocated the OUI 01:00:5e, and the lower 23 bits of the multicast MAC address map to the lower 23 bits of the IPv4 multicast group address. Because a multicast IPv4 address has 28 variable bits but only 23 are mapped, there is a 32:1 ambiguity — 32 different multicast groups can map to the same MAC address. This means switches that rely purely on MAC-level filtering may deliver packets for groups a host did not join. IGMP snooping addresses this issue. For IPv6, the mapping uses the prefix 33:33 followed by the lower 32 bits of the IPv6 multicast address, providing a 1:1 mapping (no ambiguity).
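The IPv4 mapping and its 32:1 ambiguity can be demonstrated in a few lines. This sketch keeps the low 23 bits of the group address under the 01:00:5e OUI:

```python
import ipaddress

def ipv4_multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC (01:00:5e + low 23 bits)."""
    octets = ipaddress.IPv4Address(group).packed
    assert octets[0] >> 4 == 0xE, "not an IPv4 multicast address"
    # Clear the top bit of the second octet: only 23 of 28 variable bits map.
    return "01:00:5e:%02x:%02x:%02x" % (octets[1] & 0x7F, octets[2], octets[3])

# 32:1 ambiguity: groups differing only in the 5 unmapped bits collide.
print(ipv4_multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(ipv4_multicast_mac("225.129.1.1"))  # 01:00:5e:01:01:01  (same MAC!)
print(ipv4_multicast_mac("239.1.1.1"))    # 01:00:5e:01:01:01  (same MAC!)
```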

IGMP: Group Membership on the LAN

The Internet Group Management Protocol (IGMP) operates between hosts and their directly connected multicast router. Its sole purpose is to let a router know which multicast groups have active listeners on each of its interfaces. IGMP does not build distribution trees across the network — that is PIM's job. IGMP operates only on the last hop, between the receiver and its first-hop router.

IGMPv1 (RFC 1112)

The original version defined two message types: Membership Query (sent by the router to 224.0.0.1) and Membership Report (sent by hosts to the group address). The router periodically queries the network (default: every 125 seconds), and hosts respond with reports for each group they have joined. If a host on the subnet sends a report for a group, other hosts suppress their own report for that same group (to reduce traffic). The critical weakness of IGMPv1 is the absence of an explicit leave mechanism — the router only learns that a group has no listeners by timing out after multiple query intervals (typically 3 minutes), causing unnecessary multicast traffic to continue flowing to a segment with no receivers.

IGMPv2 (RFC 2236)

IGMPv2 added the Leave Group message (sent to 224.0.0.2, all-routers), allowing hosts to immediately signal when they no longer want multicast traffic. When a router receives a Leave, it sends a Group-Specific Query to the group address on that interface, asking if any remaining hosts are still interested. If no host responds within a short timeout (last member query interval, default 1 second, repeated last member query count times), the router prunes the group from that interface. This reduces leave latency from minutes to seconds. IGMPv2 also introduced the querier election mechanism: on a shared subnet with multiple multicast routers, the router with the lowest IP address becomes the designated IGMP querier, preventing redundant queries.

IGMPv3 (RFC 3376)

IGMPv3 is the most significant evolution, adding source filtering capability that enables Source-Specific Multicast (SSM). A host can now report membership in one of two modes:

  INCLUDE: receive traffic for the group only from the listed sources.
  EXCLUDE: receive traffic for the group from all sources except the listed ones. An EXCLUDE report with an empty source list reproduces the classic "join everything" behavior of IGMPv2.

IGMPv3 eliminates report suppression (every host sends its own report, to 224.0.0.22, the all-IGMPv3-routers address) so the router can track individual host state. This is critical for source filtering, since different hosts may want different source lists, and it is why SSM requires IGMPv3 on the last hop.
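Applications never craft IGMP themselves; the kernel sends the Membership Report when a socket joins a group. A minimal receiver sketch (the group 239.1.1.1 and port 5004 are arbitrary examples, and actually receiving traffic would require a sender on the network):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # example group/port, not standardized

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# struct ip_mreq: group address + local interface (0.0.0.0 = kernel chooses).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # This setsockopt triggers the kernel to send an IGMP Membership Report.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # e.g. no multicast-capable interface available
print(len(mreq), joined)  # mreq is 8 bytes: two packed IPv4 addresses
```

Closing the socket (or process exit) triggers the corresponding Leave: membership state lives in the kernel, not the application.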

IGMP Join / Query / Leave Sequence (IGMPv2)

  t=0     Host A → Router: Membership Report (join 239.1.1.1)
          Router adds 239.1.1.1 to its interface state
  t=5s    Host B → Router: Membership Report (join 239.1.1.1)
  t=125s  Router → 224.0.0.1: General Query
  t=126s  Host A → Router: Membership Report; Host B suppresses its own
  t=200s  Host A → 224.0.0.2: Leave Group
          Router → 239.1.1.1: Group-Specific Query
          Host B responds with a Report, so the group stays active

MLD: Multicast Listener Discovery for IPv6

Multicast Listener Discovery (MLD) is the IPv6 equivalent of IGMP. It provides the same functionality — letting routers learn which multicast groups have interested listeners on attached links — but it runs on top of ICMPv6 rather than as a standalone protocol. MLD messages are ICMPv6 type 130-132 (MLDv1) and type 143 (MLDv2).

MLDv1 (RFC 2710) corresponds to IGMPv2 and provides the same query/report/done message exchange. MLDv2 (RFC 3810) corresponds to IGMPv3 and adds source filtering support for SSM. In IPv6 networks, MLD is critical because IPv6 Neighbor Discovery itself depends on multicast — every IPv6 host joins the solicited-node multicast group (ff02::1:ff00:0/104 + lower 24 bits of its address) to receive neighbor solicitation messages, so MLD is active on every IPv6-capable interface even if no application-level multicast is in use.
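The solicited-node derivation mentioned above is purely mechanical: take the fixed ff02::1:ff00:0/104 prefix and append the low 24 bits of the unicast address. A short helper:

```python
import ipaddress

SOLICITED_BASE = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(addr: str) -> str:
    """Derive the solicited-node multicast group for an IPv6 address.

    ff02::1:ff00:0/104 plus the low 24 bits of the unicast address.
    """
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return str(ipaddress.IPv6Address(SOLICITED_BASE | low24))

print(solicited_node("2001:db8::1:800:200e:8c6c"))  # ff02::1:ff0e:8c6c
print(solicited_node("fe80::aa:bbff:fecc:dd01"))    # ff02::1:ffcc:dd01
```

Because only the low 24 bits are used, many hosts on a link typically map to distinct solicited-node groups, which is what lets Neighbor Discovery avoid interrupting every host the way ARP broadcasts do.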

MLDv2 report messages are sent to ff02::16 (all MLDv2-capable routers). The querier election works identically to IGMPv2 but uses the lowest IPv6 link-local address instead of the lowest IPv4 address. One important operational difference: MLD requires the Hop Limit to be 1 and the source address to be a link-local address (fe80::/10), which provides built-in security against spoofed MLD messages from remote attackers.

IGMP Snooping and MLD Snooping

Layer 2 switches do not run multicast routing protocols, but they still need to constrain multicast traffic to only those ports with interested receivers. Without IGMP snooping, a switch would treat multicast traffic like broadcast — flooding it out every port in the VLAN. For a high-bandwidth multicast stream, this would waste switch backplane capacity and link bandwidth on ports with no receivers.

IGMP snooping is the process of a Layer 2 switch inspecting IGMP messages passing between hosts and multicast routers. The switch builds a table mapping (VLAN, multicast group) to the set of ports that have active group members. When multicast traffic arrives, the switch forwards it only to those ports (plus the multicast router port). The multicast router port is identified by listening for IGMP queries, PIM Hellos, or other router-generated multicast protocol traffic.

IGMP snooping is standardized in RFC 4541 but predates that RFC as a vendor feature. Critical operational considerations include: the snooping switch must identify and maintain a list of multicast router ports (via PIM Hellos, IGMP queries, or static configuration); it must handle report suppression correctly (with IGMPv2, where hosts suppress reports, the switch must not prune a port just because it did not see a report); and it must forward IGMP queries to all ports in the VLAN. Misconfigurations in IGMP snooping are a common cause of multicast failures in enterprise networks — symptoms include receivers intermittently losing streams or multicast traffic unexpectedly flooding an entire VLAN.
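The snooping table described above is essentially a map from (VLAN, group) to a set of member ports. A toy model (ignoring entry aging, the querier, and report-suppression handling):

```python
# Toy IGMP snooping table: (vlan, group) -> set of member ports.
# Real switches also age entries out and learn router ports dynamically.

class SnoopingTable:
    def __init__(self, router_ports):
        self.members = {}                 # (vlan, group) -> set of ports
        self.router_ports = set(router_ports)  # via IGMP queries / PIM Hellos

    def report(self, vlan, group, port):
        self.members.setdefault((vlan, group), set()).add(port)

    def leave(self, vlan, group, port):
        self.members.get((vlan, group), set()).discard(port)

    def egress_ports(self, vlan, group, ingress):
        """Forward to member ports plus router ports, never back out the ingress."""
        out = self.members.get((vlan, group), set()) | self.router_ports
        return out - {ingress}

t = SnoopingTable(router_ports={24})
t.report(10, "239.1.1.1", 3)
t.report(10, "239.1.1.1", 7)
print(sorted(t.egress_ports(10, "239.1.1.1", ingress=24)))  # [3, 7]
t.leave(10, "239.1.1.1", 7)
print(sorted(t.egress_ports(10, "239.1.1.1", ingress=24)))  # [3]
```

Note that traffic arriving from a host port still egresses the router port: the upstream router must always see the stream so it can forward it off-segment.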

MLD snooping provides the identical function for IPv6. It is standardized in RFC 4541 alongside IGMP snooping. The switch inspects MLDv1/v2 messages within ICMPv6 packets. One critical detail: IPv6 Neighbor Discovery depends on multicast, and if MLD snooping is misconfigured, it can break IPv6 connectivity entirely by preventing solicited-node multicast messages from reaching the correct ports.

PIM: Protocol Independent Multicast

Protocol Independent Multicast (PIM) is the dominant multicast routing protocol used to build distribution trees across routers. The "protocol independent" name reflects that PIM does not carry its own topology information — it relies on the underlying unicast routing table (populated by any IGP or BGP) to make RPF (Reverse Path Forwarding) decisions. This is in contrast to earlier protocols like DVMRP and MOSPF, which maintained their own separate routing tables.

PIM operates by exchanging messages between routers to build and maintain multicast distribution trees. All PIM routers send periodic PIM Hello messages to 224.0.0.13 (ff02::d for IPv6) on all PIM-enabled interfaces to discover neighbors and elect a Designated Router (DR) on multi-access networks. The DR is responsible for sending PIM Join/Prune messages upstream and registering sources with the Rendezvous Point.

PIM-SM: Sparse Mode (RFC 7761)

PIM Sparse Mode is by far the most widely deployed PIM variant and is suitable for networks where group members are sparsely distributed (which describes most real-world deployments). PIM-SM uses an explicit join model — multicast traffic is only delivered to parts of the network that have requested it via Join messages. It builds two types of distribution trees:

  Shared tree (RPT): rooted at the Rendezvous Point and represented by (*,G) state. Receivers' DRs send (*,G) Joins toward the RP, and traffic from all sources for the group flows down this tree.
  Shortest-path tree (SPT): rooted at the source and represented by (S,G) state. Once a receiver's DR learns a source's address, it can send an (S,G) Join directly toward the source and prune itself off the shared tree (the SPT switchover).

The source registration process handles the bootstrapping problem — when a new source starts sending to a group, the RP may not yet be receiving that traffic. The source's DR encapsulates multicast packets in PIM Register messages (unicast) and sends them to the RP. The RP decapsulates them and forwards them down the shared tree. The RP also issues a (S,G) Join toward the source to begin receiving traffic natively (not encapsulated). Once native traffic arrives, the RP sends a Register-Stop back to the source's DR to end the encapsulation.

PIM-DM: Dense Mode (RFC 3973)

PIM Dense Mode takes the opposite approach: it assumes all routers want multicast traffic and uses a flood-and-prune model. When a source starts sending, traffic is flooded throughout the entire PIM-DM domain. Routers that have no downstream receivers send Prune messages upstream, causing the traffic to stop on those branches. Prune state times out (typically after 3 minutes), causing traffic to flood again, and the cycle repeats.

PIM-DM does not require a Rendezvous Point, which simplifies configuration, but the periodic flooding makes it unsuitable for large networks or groups with sparse receivers. PIM-DM is rarely used in modern networks. It was historically deployed in small LAN environments where most hosts were expected to be receivers (such as early corporate video deployments). RFC 3973 classifies PIM-DM as Experimental.

PIM-SSM: Source-Specific Multicast (RFC 4607)

Source-Specific Multicast is not a separate protocol but rather a subset of PIM-SM operation that dramatically simplifies the multicast model. With SSM, receivers subscribe to a specific (S,G) channel rather than just a group (*,G). This eliminates the need for:

  Rendezvous Points and shared trees
  Source registration (PIM Register / Register-Stop encapsulation)
  MSDP for interdomain source discovery
  The SPT switchover, since traffic is on the shortest-path tree from the start

SSM requires IGMPv3 (or MLDv2) on the last hop so that hosts can specify the source address in their join request. The SSM address range is 232.0.0.0/8 for IPv4 and ff3x::/32 for IPv6. When a host sends an IGMPv3 INCLUDE report for (S, 232.1.1.1), the DR creates (S,G) state and sends a PIM (S,G) Join directly toward the source. No RP is involved.

SSM is the recommended model for new deployments, particularly for one-to-many applications like IPTV and live streaming where the source address is known in advance. It is simpler to operate, more secure (no unauthorized sources can inject traffic since receivers only accept from specified sources), and more scalable. The primary limitation is that it does not support any-source multicast — receivers must know the source address ahead of time.

Reverse Path Forwarding (RPF)

The Reverse Path Forwarding (RPF) check is the fundamental loop-prevention mechanism in multicast routing. For every multicast packet that arrives at a router, the router checks whether the packet arrived on the interface that the unicast routing table says is the best path back toward the source. If it did, the packet passes the RPF check and is forwarded on downstream interfaces. If it arrived on a different interface, it fails the RPF check and is dropped.

This is the inverse of unicast forwarding: instead of looking up the destination and forwarding toward it, the router looks up the source and verifies the packet came from the correct direction. The RPF check ensures that multicast traffic follows a tree topology (no loops) without requiring a spanning tree protocol or hop-count-based loop prevention.
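The check itself reduces to a longest-prefix match on the source address followed by an interface comparison. A simplified sketch (real routers consult the MRIB/FIB; the dict-based RIB here is illustrative):

```python
import ipaddress

def rpf_check(source, arriving_iface, unicast_rib):
    """Pass iff the packet arrived on the interface of the best route to the source.

    unicast_rib maps prefix -> egress interface. Longest-prefix match finds
    the route the router would use to send unicast traffic *toward* the source.
    """
    src = ipaddress.ip_address(source)
    best = max(
        (ipaddress.ip_network(p) for p in unicast_rib
         if src in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
        default=None,
    )
    return best is not None and unicast_rib[str(best)] == arriving_iface

rib = {"10.0.0.0/8": "eth0", "10.1.0.0/16": "eth1", "0.0.0.0/0": "eth2"}

print(rpf_check("10.1.5.5", "eth1", rib))  # True: best path to source via eth1
print(rpf_check("10.1.5.5", "eth0", rib))  # False: RPF failure, packet dropped
print(rpf_check("10.2.5.5", "eth0", rib))  # True: the /8 route wins here
```

The second call shows the failure mode described below: the packet is simply dropped, with no error sent back to the source.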

The RPF check uses the unicast routing table by default, which is why PIM is "protocol independent" — any unicast routing protocol (OSPF, IS-IS, BGP, static routes) can provide the RPF information. Some networks use a separate multicast RIB (MRIB) or static multicast routes (ip mroute) when the desired multicast topology differs from the unicast topology — for example, when traffic engineering causes asymmetric unicast paths but multicast should follow a symmetric tree.

RPF failures are one of the most common causes of multicast troubleshooting. Asymmetric routing (where unicast traffic takes a different path in each direction), floating static routes, VRF misconfigurations, and BGP next-hop resolution issues can all cause RPF checks to fail, silently dropping multicast traffic.

PIM-SM: Shared Tree (*,G) vs Shortest-Path Tree (S,G)

  Phase 1: Shared Tree (*,G)
    Source → RP: Register (encapsulated traffic)
    RP → R1 → R2 → Receiver: traffic flows down the shared tree via the RP

  Phase 2: Source Tree (S,G)
    Receiver's DR sends an (S,G) Join toward the source, bypassing the RP
    Source → R1 → Receiver: traffic takes the direct path
    SPT switchover: a Prune is sent toward the RP to stop the RPT traffic

Rendezvous Points

The Rendezvous Point (RP) is central to PIM-SM operation for Any-Source Multicast (ASM). It serves as the meeting point where sources and receivers initially converge. Sources register with the RP (via PIM Register messages), and receivers join the shared tree rooted at the RP. Without an RP, PIM-SM ASM cannot function because there would be no common root for the shared tree and no way for receivers to discover sources.

Choosing and configuring the RP is a critical operational decision. The RP should be placed topologically close to the center of the multicast domain to minimize the path length on the shared tree. It must be stable and reachable — if the RP goes down, new receivers cannot join groups and new sources cannot register. There are three primary mechanisms for RP distribution:

  Static RP: the RP address is configured manually on every router. Simple and predictable, but any change requires touching every device.
  Auto-RP: a Cisco-proprietary mechanism that predates the standard, using 224.0.1.39 (announce) and 224.0.1.40 (discovery) to distribute group-to-RP mappings.
  Bootstrap Router (BSR, RFC 5059): the standards-based mechanism, in which candidate RPs advertise themselves to an elected BSR, which floods the RP-set hop by hop through the domain.

Anycast RP (RFC 3446) provides RP redundancy by configuring the same RP address (as a /32 loopback) on multiple routers. When combined with MSDP (see below) between the Anycast RP peers, this provides seamless failover — if one RP fails, the unicast routing protocol converges to the next-closest RP. PIM Anycast RP (RFC 4610) is an alternative that does not require MSDP, using PIM Register messages between RP peers to synchronize source state.

MSDP: Multicast Source Discovery Protocol

MSDP (RFC 3618) was designed to solve the interdomain multicast problem: how does an RP in one autonomous system learn about active multicast sources in another AS? Each domain runs its own RP, and MSDP creates TCP-based peering sessions (port 639) between RPs (or MSDP speakers) in different domains. When a source begins sending in one domain, that domain's RP originates a Source-Active (SA) message containing the source address, group address, and originator RP address. This SA message is flooded across the MSDP mesh to all MSDP peers.

When an RP in a remote domain receives an SA message for a group that has local receivers, it can join the source tree across domain boundaries (via an interdomain (S,G) Join, typically routed via BGP). This enables receivers in one AS to receive multicast traffic from sources in another AS, with each domain maintaining its own RP and shared tree internally.

MSDP applies RPF checking to SA messages — an SA message is accepted only if it arrives from the MSDP peer that is the next hop toward the originating RP (as determined by the BGP routing table). This prevents SA message loops in the MSDP mesh. MSDP also supports SA filters to control which source-group pairs are advertised or accepted, providing policy control over interdomain multicast.
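The SA message format is simple enough to sketch. This is a simplified encoder following the RFC 3618 Source-Active TLV layout, omitting the optional encapsulated data that may trail the entries:

```python
import socket
import struct

def pack_sa(rp, entries):
    """Encode a simplified MSDP Source-Active TLV (RFC 3618, no encapsulated data).

    TLV: type=1 (1 byte), length (2 bytes), entry count (1 byte), RP address,
    then one 12-byte entry per (group, source) pair: 3 reserved bytes,
    source-prefix length (32 for a host source), group address, source address.
    """
    length = 8 + 12 * len(entries)
    msg = struct.pack("!BHB4s", 1, length, len(entries), socket.inet_aton(rp))
    for group, source in entries:
        msg += struct.pack("!3xB4s4s", 32, socket.inet_aton(group),
                           socket.inet_aton(source))
    return msg

sa = pack_sa("192.0.2.1", [("239.1.1.1", "198.51.100.7")])
print(len(sa), sa[0])  # 20 1 -> 8-byte header plus one 12-byte entry, type 1
```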

MSDP is considered legacy technology for new deployments. SSM eliminates the need for MSDP entirely because receivers specify the source address directly. For ASM, RFC 3618 classifies MSDP as Experimental, and the IETF has not advanced it to Standards Track. However, MSDP remains widely deployed in networks that run interdomain ASM, and Anycast RP typically relies on MSDP to synchronize source state between RP peers.

ASM vs SSM: Two Models of Multicast

The multicast world is fundamentally divided into two models, and understanding the distinction is critical for deployment decisions:

Any-Source Multicast (ASM) is the original model defined in RFC 1112. Receivers join a group address (*,G) without specifying a source. Any host can send to the group, and traffic from all sources is delivered to all receivers. ASM requires RPs, shared trees, source registration, and (for interdomain operation) MSDP. It supports many-to-many communication patterns where any participant can be both a source and a receiver (e.g., multicast-based conferencing, gaming). The complexity of ASM comes from the RP infrastructure, the source discovery problem, and the potential for unauthorized sources to inject traffic into a group.

Source-Specific Multicast (SSM), defined in RFC 4607, simplifies the model to one-to-many. Receivers subscribe to a (S,G) channel — a specific source S sending to group G. This means: no RPs, no shared trees, no MSDP, no source registration, no source discovery, and no possibility of unauthorized source injection. SSM is more secure, simpler to operate, and more scalable. It requires IGMPv3/MLDv2 on the last hop, and receivers must know the source address (typically via an out-of-band signaling mechanism like a web page, SDP description, or application configuration).

Most new multicast deployments — particularly IPTV and live streaming — use SSM. The simplicity of the (S,G) model, combined with the elimination of RP-related failure modes, makes SSM operationally superior for content delivery use cases. ASM remains relevant for applications that genuinely need any-source semantics (some financial trading platforms, routing protocol multicast).

Multicast VPNs (MVPN)

Enterprise and service provider networks often need to carry multicast traffic within MPLS VPNs. Multicast VPN (MVPN) extends the L3VPN framework to support multicast routing within VRFs across a provider's backbone.

The evolution of MVPN technology has gone through several phases:

  Draft Rosen (PIM/GRE MVPN): the original approach, which runs PIM in the provider core and carries customer multicast inside GRE-encapsulated multicast distribution trees (MDTs), with one default MDT per VPN plus optional data MDTs for high-bandwidth groups.
  BGP/MPLS MVPN (RFC 6513/6514, often called NG-MVPN): the current standards-based approach, which signals customer multicast routes in BGP and builds provider tunnels with mLDP or P2MP RSVP-TE instead of running PIM in the core.

MVPN adds considerable complexity but is essential for service providers offering multicast-enabled VPN services. IPTV delivery over MPLS networks is one of the largest MVPN use cases, where the service provider's own video headend sends multicast streams that must reach set-top boxes across thousands of customer sites connected via L3VPN.

Interdomain Multicast and BGP

Multicast across autonomous system boundaries introduces additional challenges beyond what MSDP addresses. The RPF check, which is the foundation of multicast forwarding, depends on the unicast routing table. For interdomain multicast, this means BGP routes must provide correct RPF information. Two key BGP mechanisms support this:

  Multiprotocol BGP (MBGP, RFC 4760) with the IPv4 multicast SAFI (AFI 1, SAFI 2), which carries a separate set of prefixes used only for multicast RPF, allowing the multicast topology to differ from the unicast topology across AS boundaries.
  BGP-based peer-RPF checking for MSDP, in which the BGP best path toward the originating RP determines which MSDP peer's SA messages are accepted.

In practice, truly global interdomain multicast (ASM across the public internet) never achieved widespread adoption. The complexity of coordinating RPs, MSDP peering, and RPF across thousands of autonomous systems proved too difficult. Most large-scale multicast deployments are intradomain (within a single AS or a coordinated set of ASes), and content delivery to the broader internet uses unicast CDNs instead. SSM simplifies the interdomain case somewhat because it eliminates MSDP and RPs, but adoption of interdomain SSM also remains limited.

Multicast State and Scalability

A fundamental challenge of multicast is that it requires per-group, per-source state in every router along the distribution tree. Each active (S,G) or (*,G) entry in a router's Multicast Forwarding Information Base (MFIB) consumes memory and requires processing for creation, maintenance, and teardown. In networks with thousands of active multicast groups and sources, this state can become significant.

Router-level multicast state includes:

  (*,G) and (S,G) entries in the mroute table, each with an incoming (RPF) interface and an outgoing interface list (OIL)
  IGMP/MLD group membership state per interface
  PIM neighbor adjacencies plus periodic Join/Prune refresh timers for each tree
  Group-to-RP mappings and, on RPs and first-hop routers, Register state

Scalability strategies include: using SSM (which eliminates *,G and RP-related state), aggregating groups to reduce the number of active entries, deploying IGMP/MLD limits to cap the number of groups per interface, using PIM state-limit features, and properly scoping multicast domains to contain state growth.

Multicast in Modern Networks

Multicast deployment patterns have evolved significantly over the past two decades. While native IP multicast never became the universal content delivery mechanism some predicted in the 1990s, it remains critical in specific domains:

  Financial market data: exchanges distribute real-time quote and trade feeds over multicast, where every subscriber needs the same data simultaneously and latency is measured in microseconds.
  IPTV and service provider video: carrier networks deliver broadcast TV channels to set-top boxes over multicast, often within MVPNs.
  Enterprise applications: live all-hands video, large-scale OS and software deployment, and audio/video-over-IP systems.
  Protocol infrastructure: OSPF, PIM, and IPv6 Neighbor Discovery all depend on link-local multicast, so every network runs at least this much multicast whether operators think about it or not.

For general internet content delivery, unicast CDNs (Cloudflare, Akamai, Fastly) won the scalability battle over native multicast. CDNs replicate content at the edge via caching and anycast, achieving the same bandwidth savings as multicast without requiring multicast support in every router along the path. This is why you can watch a live stream on YouTube or Twitch without your ISP enabling multicast — the CDN handles replication at the application layer.

Troubleshooting Multicast

Multicast troubleshooting is notoriously difficult because failures are often silent — packets are simply dropped rather than generating error messages. Common troubleshooting areas include:

  RPF failures: the unicast route back toward the source does not match the interface traffic arrives on, so routers silently drop the stream.
  IGMP snooping problems: receivers never see the stream, or the stream floods an entire VLAN, often because the switch cannot identify the multicast router port.
  RP issues: an unreachable or inconsistently configured RP prevents new sources from registering and new receivers from joining.
  Scoping and TTL: administratively scoped boundaries or low TTL values quietly stop traffic at domain edges.
  Topology asymmetry: tunnels, traffic engineering, or VRF configurations that make the unicast path diverge from the tree the RPF check expects.

Explore Multicast-Capable Networks

While multicast routing state is internal to each network and not visible in the global BGP routing table, the autonomous systems that carry multicast traffic are the same ones you can examine via BGP. Major ISPs, content delivery networks, and IPTV providers all maintain extensive multicast infrastructure within their networks. Use the god.ad BGP Looking Glass to explore the autonomous systems, IP address allocations, and peering relationships of networks that rely on multicast for their services:
