Basic PIM-SSM

Load base.ipv4.and.ipv6.cfg

#IOS-XE
config replace flash:base.ipv4.and.ipv6.cfg
 
#IOS-XR
configure
load bootflash:base.ipv4.and.ipv6.cfg
commit replace
y

Using PIM-SSM, configure R2 and R4 to join the channels:

  • (7.1.7.1, 232.1.1.1)

  • (8.3.8.3, 232.1.1.1)

  • (2007:7:1:7::1, ff38::1)

  • (2008:8:3:8::3, ff38::1)

Ensure that multicast traffic from R1 and R3 is delivered correctly to R2 and R4.

Answer

#R5
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.525
 ip pim sparse-mode
 ip igmp ver 3
int GigabitEthernet2.550
 ip pim sparse-mode

#R6
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.560
 ip pim sparse-mode
int GigabitEthernet2.562
 ip pim sparse-mode
int GigabitEthernet2.569
 ip pim sparse-mode

#R7
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.517
 ip pim sparse-mode
int GigabitEthernet2.571
 ip pim sparse-mode
int GigabitEthernet2.574
 ip pim sparse-mode

#R8
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.538
 ip pim sparse-mode
int GigabitEthernet2.582
 ip pim sparse-mode

#R9
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.569
 ip pim sparse-mode
int GigabitEthernet2.593
 ip pim sparse-mode
int GigabitEthernet2.594
 ip pim sparse-mode

#R10
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
int GigabitEthernet2.501
 ip pim sparse-mode
int GigabitEthernet2.550
 ip pim sparse-mode
int GigabitEthernet2.560
 ip pim sparse-mode

#XR1
multicast-routing
 add ipv4
  interface GigabitEthernet0/0/0/0.501 enable
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.571 enable
 add ipv6
  interface GigabitEthernet0/0/0/0.501 enable
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.571 enable

#XR2
multicast-routing
 add ipv4
  interface GigabitEthernet0/0/0/0.524 enable
  interface GigabitEthernet0/0/0/0.562 enable
  interface GigabitEthernet0/0/0/0.582 enable
 add ipv6
  interface GigabitEthernet0/0/0/0.524 enable
  interface GigabitEthernet0/0/0/0.562 enable
  interface GigabitEthernet0/0/0/0.582 enable

#XR3
multicast-routing
 add ipv4
  interface GigabitEthernet0/0/0/0.543 enable
  interface GigabitEthernet0/0/0/0.593 enable
 add ipv6
  interface GigabitEthernet0/0/0/0.543 enable
  interface GigabitEthernet0/0/0/0.593 enable

#XR4
multicast-routing
 add ipv4
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.524 enable
  interface GigabitEthernet0/0/0/0.574 enable
  interface GigabitEthernet0/0/0/0.594 enable
 add ipv6
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.524 enable
  interface GigabitEthernet0/0/0/0.574 enable
  interface GigabitEthernet0/0/0/0.594 enable

#R2
! If you want the router to respond to ICMP for testing, we must enable PIM
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
ipv6 access-list V6_SOURCE_LIST
 sequence 10 permit ipv6 host 2007:7:1:7::1 any
 sequence 20 permit ipv6 host 2008:8:3:8::3 any
!
int gi2.525
 ip pim sparse-mode
 ip igmp ver 3
 ip igmp join-group 232.1.1.1 source 7.1.7.1
 ip igmp join-group 232.1.1.1 source 8.3.8.3
 ipv6 mld join-group FF38::1 source-list V6_SOURCE_LIST

#R4
! If you want the router to respond to ICMP for testing, we must enable PIM
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
ipv6 access-list V6_SOURCE_LIST
 sequence 10 permit ipv6 host 2007:7:1:7::1 any
 sequence 20 permit ipv6 host 2008:8:3:8::3 any
!
int GigabitEthernet2.543
 ip pim sparse-mode
 ip igmp ver 3
 ip igmp join-group 232.1.1.1 source 7.1.7.1
 ip igmp join-group 232.1.1.1 source 8.3.8.3
 ipv6 mld join-group FF38::1 source-list V6_SOURCE_LIST

Explanation

IP Multicast routing is a forwarding concept that allows efficient distribution of traffic intended for multiple hosts. Instead of requiring the source server to replicate a unicast stream per interested receiver, which would consume resources on the server and waste bandwidth in the network, the server can send a single stream and let the network devices do the work of packet replication. This is extremely efficient, because every network segment carries only a single copy of the packet. As the tree branches out, each router makes copies of the packet as needed.

To ensure that this forwarding is loop-free, the routers must communicate and build a loop-free distribution tree. The protocol that routers use to build this tree is PIM. PIM-SSM (Source Specific Multicast) is the simplest PIM mode. In PIM-SSM, the receiver already knows the source and group of the “channel” it is interested in receiving, notated as (S, G). IGMPv3 is required to signal the interested group and the list of acceptable sources for that group. The last-hop router (LHR) learns about the (S, G) via IGMPv3 or MLDv2 and signals a PIM Join upstream. The upstream interface, also called the incoming interface, is found using an RPF check: the LHR consults the unicast RIB to find the best path towards the source. This is the interface on which the router should receive the traffic, so the router sends the PIM Join out this interface.

PIM routers upstream continue this process, adding the interface on which the PIM Join was received to the OIL (outgoing interface list), and sending a new PIM Join out the IIF (incoming interface).

In PIM-SSM, the PIM Join will eventually terminate at the first-hop router (FHR). At this point, a full multicast distribution tree has been built. We can say that this distribution tree is overlaid on top of the IP unicast topology, as the unicast RIB is used to determine the IIF and therefore to build the tree. Since any unicast route can satisfy the RPF check, regardless of which protocol installed it, PIM is protocol-independent.

To enable multicast-routing on IOS-XE, we require the ip multicast-routing distributed global command. Additionally, we must globally enable PIM-SSM by specifying what SSM range we are using. In this lab we are using the default range, 232/8, but you can also use a different range if desired, and specify this using a standard ACL.

#IOS-XE
ip multicast-routing distributed
ip pim ssm default

On IOS-XR, we instead enable the multicast-routing process, and list the interfaces we want to enable. As a shortcut, we have the option to use interface all. Any interface activated for multicast is also implicitly activated for PIM and IGMPv3/MLDv2. Additionally, IOS-XR enables PIM-SSM by default for 232/8, so we don’t need to specify this separately.

#IOS-XR
multicast-routing
 add ipv4
  interface GigabitEthernet0/0/0/0.501 enable
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.571 enable
  !
  ! or
  !
  interface all enable

We also must enable IPv6 multicast-routing. This is done on IOS-XE using the ipv6 multicast-routing global command. This automatically enables PIM-SSM for IPv6, so we don’t need to worry about enabling SSM separately.

#IOS-XE
ipv6 multicast-routing

On IOS-XR, we simply enable the interfaces under multicast-routing add ipv6.

#IOS-XR
multicast-routing
 add ipv6
  interface GigabitEthernet0/0/0/0.501 enable
  interface GigabitEthernet0/0/0/0.514 enable
  interface GigabitEthernet0/0/0/0.571 enable
  !
  ! or
  !
  interface all enable

Next, we enable PIM. This is done on IOS-XE by enabling PIM on every single interface, whether it faces another PIM neighbor, a receiver, or a source.

#IOS-XE
int GigabitEthernet2.525
 ip pim sparse-mode
int GigabitEthernet2.550
 ip pim sparse-mode

On IOS-XE, PIM is enabled for IPv6 for all IPv6-enabled interfaces automatically. There is no need to specify this on a per-interface basis. However, you can disable IPv6 PIM on a per-interface basis using the following command:

#IOS-XE
int GigabitEthernet2.525
 no ipv6 pim

On IOS-XR, PIM is enabled by default on every multicast-enabled interface. However, if you need to specify non-default settings for a PIM interface, you do so under router pim.
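
For example, to set a non-default hello interval or DR priority on one of XR1’s interfaces, the configuration looks like this (the values are arbitrary examples, not something this lab requires):

#XR1
router pim
 address-family ipv4
  interface GigabitEthernet0/0/0/0.501
   hello-interval 15
   dr-priority 10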

Lastly, we must enable IGMPv3 and MLDv2 on interfaces facing receivers. MLDv2 is already the default on both IOS-XE and IOS-XR when using PIM for IPv6, so we have nothing to configure there. Additionally, IGMPv3 is the default on IOS-XR when enabling multicast on an interface. IGMPv3 is backwards compatible with IGMPv2, so it makes sense for IOS-XR to use it by default. However, on IOS-XE, IGMPv2 is the default when enabling PIM on an interface, so we must set R5’s interface towards R2 to use IGMPv3 explicitly.

#R5
int GigabitEthernet2.525
 ip pim sparse-mode
 ip igmp ver 3

At this point, our core network is fully set up to support PIM-SSM for IPv4 and IPv6. Note that there is no RP needed for PIM-SSM. This lets us focus on the core principles of multicast forwarding.

Verification

IGMPv3 and MLDv2 Joins

We’ll now configure our receivers to join the (S, G) groups. Additionally, the CSR1000v receivers will need to enable PIM if we want to verify that multicast is working by using ICMP. Without this, the CSR1000v receiver will not respond to the ICMP Request destined to the multicast group address.

#R2
! If you want the router to respond to ICMP for testing, we must enable PIM
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
ipv6 access-list V6_SOURCE_LIST
 sequence 10 permit ipv6 host 2007:7:1:7::1 any
 sequence 20 permit ipv6 host 2008:8:3:8::3 any
!
int gi2.525
 ip pim sparse-mode
 ip igmp ver 3
 ip igmp join-group 232.1.1.1 source 7.1.7.1
 ip igmp join-group 232.1.1.1 source 8.3.8.3
 ipv6 mld join-group FF38::1 source-list V6_SOURCE_LIST

#R4
! If you want the router to respond to ICMP for testing, we must enable PIM
ip multicast-routing distributed
ip pim ssm default
ipv6 multicast-routing
!
ipv6 access-list V6_SOURCE_LIST
 sequence 10 permit ipv6 host 2007:7:1:7::1 any
 sequence 20 permit ipv6 host 2008:8:3:8::3 any
!
int GigabitEthernet2.543
 ip pim sparse-mode
 ip igmp ver 3
 ip igmp join-group 232.1.1.1 source 7.1.7.1
 ip igmp join-group 232.1.1.1 source 8.3.8.3
 ipv6 mld join-group FF38::1 source-list V6_SOURCE_LIST

For IPv4 IGMP, we can specify multiple sources per group, and the router adds a separate configuration line for each source.

#R2
int gi2.525
 ip igmp join-group 232.1.1.1 source 7.1.7.1
 ip igmp join-group 232.1.1.1 source 8.3.8.3

However, if we try this for IPv6 MLD, the router overwrites the previous entry with the new source. Instead, we must specify a source list using an ACL.

#R2
ipv6 access-list V6_SOURCE_LIST
 sequence 10 permit ipv6 host 2007:7:1:7::1 any
 sequence 20 permit ipv6 host 2008:8:3:8::3 any
!
int gi2.525
 ipv6 mld join-group FF38::1 source-list V6_SOURCE_LIST

IGMPv3 and MLDv2 Verification

On R5, we can watch for IGMPv3 Reports using debug ip igmp. We’ll also filter our debug to only gi2.525, otherwise we will see IGMP activity for the AutoRP 224.0.1.40 group on other interfaces.

#R5
debug condition interface gi2.525
debug ip igmp

We can verify the same thing on XRv3. It seems that using debug igmp <interface> does not work for some reason. Perhaps something to do with the interface-specific debug being implemented in hardware on the line card.

#XR3
debug igmp

We can also verify joined groups on the LHR via show ip igmp groups. Notice that the group mode is “INCLUDE” and two sources are listed.
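
On R5, the detail keyword shows the INCLUDE mode and the per-source state for the group:

#R5
show ip igmp groups
show ip igmp groups 232.1.1.1 detail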

This can be seen on IOS-XR as well on XRv3:
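
On XRv3 the equivalent is run against the receiver-facing interface (Gi0/0/0/0.543 per the answer above):

#XR3
show igmp groups GigabitEthernet0/0/0/0.543 detail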

Verification of MLDv2 on IOS-XE using debugs does not seem very useful. We can verify that a V2 Report is received but not the contents of the report.

#R5
debug ipv6 mld

Above, 5 groups are reported, but we can see 4 groups are ignored. This leaves only one group in the MLDv2 group database. Just like IGMPv3, the mode is INCLUDE and the two sources are listed:
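
The MLDv2 group database on R5 can be checked with:

#R5
show ipv6 mld groups
show ipv6 mld groups detail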

On IOS-XR, the mld debug proves very similar to the igmp debug. We can explicitly see the groups the host has joined:

#XRv3
debug mld

We can also see the joined groups using show mld groups:
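
On XRv3 the command looks like this (interface name per the answer above):

#XR3
show mld groups GigabitEthernet0/0/0/0.543 detail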

IGMPv3 Overview

IGMPv3 is quite simple. A host indicates interest in a particular (S, G) using an IGMPv3 Membership Report that is mode: INCLUDE, with the sources it is willing to receive multicast traffic from for that particular group. ASM works by using a mode: EXCLUDE for zero sources, i.e. “exclude nothing.” Below, we see 232.1.1.1 is in include mode and 224.0.1.40 is in exclude mode:

To add sources to an SSM group, the host uses mode “Allow new sources”:

To remove sources from an SSM group, the host uses mode “Block old sources.” This is effectively an IGMP Leave.

To leave an ASM group, the host does the opposite of the IGMP Join, as you might expect. Whereas an IGMPv3 Join for an ASM group is “exclude mode: none,” the IGMPv3 Leave for an ASM group is “change to include mode: none.”

MLDv2 Overview

MLDv2 (Multicast Listener Discovery) uses ICMPv6 for operation. It uses the same modes as IGMPv3, so it can essentially be seen as the IPv6 equivalent of IGMPv3.

For example, notice that the ASM group ff02::2 and SSM group ff38::1 look identical to IGMPv3. ASM uses “exclude none” and SSM uses “include” mode.

Likewise, adding a source to an SSM group is the same “Allow new sources” mode:

Removing a source uses the same “Block old sources” mode:

Leaving an ASM group is the same “Change to include” mode:

In addition to accommodating ASM Joins, IGMPv3 and MLDv2 are fully backwards compatible with IGMPv2 and MLDv1. This allows you to enable IGMPv3 and MLDv2 on all router interfaces and still support IGMPv2-only or MLDv1-only hosts.

Distribution Tree Verification

Now that we have verified that the host-to-LHR signaling is working correctly, we can look at how the distribution tree is built to deliver packets from the source to the interested receivers over an L3 routed network. In PIM-SSM, the tree that is built is a shortest-path tree (SPT) using (S, G) state. It is called the shortest-path tree because, from a unicast routing perspective, it follows the IGP shortest path. This happens naturally because the only interface on which each router will accept multicast traffic is the interface used for the unicast best path towards the source.

First we’ll verify PIM is enabled and that PIM neighborships are established. Without a PIM neighbor on the other end of a link, a router cannot have a valid RPF interface for a given (S, G) pair.

The command show ip pim int displays interfaces enabled for PIM, and the current DR on the segment. The DR is used on LAN segments where multiple PIM routers exist. It decides which router will forward traffic onto the segment (when the routers are acting as LHR) and which router will register traffic to the RP (when the routers are acting as FHR).

The command show ip pim neighbor displays PIM adjacencies. Note that these are only one-way adjacencies; PIM does not use a two-way check, so it is possible to see a PIM neighbor on one end of the link but not the other. Also, IOS-XR strangely creates neighbor state for itself. I’m not quite sure why this is. It is indicated with an asterisk, so you can mentally filter these entries out.
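
For reference, the relevant commands on each platform are:

#IOS-XE
show ip pim interface
show ip pim neighbor

#IOS-XR
show pim interface
show pim neighbor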

PIM neighborship is established simply upon receiving PIM Hellos. These are sent by default at a 30 second interval. The Hello contains some options, such as the hold time, DR priority, and supported capabilities. The hold time does not need to match; it can be asymmetrical between PIM neighbors.
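
If you want to experiment with these values on IOS-XE, the hello (query) interval and DR priority are set per interface. The numbers below are arbitrary examples, not something this lab requires:

#R5
int GigabitEthernet2.525
 ip pim query-interval 15
 ip pim dr-priority 10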

In PIM-SSM, the distribution tree is explicitly built at each branch (router hop) using a PIM Join message. The PIM Join is sent out the RPF interface, which is the interface on the unicast best path towards the source for a given (S, G) entry. The RPF check controls two things: it provides loop prevention, as multicast traffic is only accepted on the RPF interface, and it decides which interface is the IIF, and therefore out of which interface, and to which PIM neighbor, the PIM Join is sent. These go hand-in-hand, because it only makes sense to send a PIM Join out the interface on which you will accept the traffic.

The RPF check uses the unicast RIB by default. We can verify the RPF interface for a given source using the following command. Note that the ? in the IOS-XE output simply means the RPF neighbor could not be resolved to a hostname. If you have DNS lookups enabled, this command output will hang due to the DNS resolution attempt.

Note that IOS-XE allows you to do an RPF check for any source address, while IOS-XR only shows RPF checks for sources that have existing state in the MRIB. This means we can do an arbitrary lookup on IOS-XE but not on IOS-XR. Below, R5 shows the RPF check for a valid source and an invalid source, while IOS-XR simply tells us that no RPF lookup has been performed for either address.
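
The commands behind those lookups are below. The second IOS-XE lookup simply uses an arbitrary address assumed to have no route in this topology, to show the failed case:

#R5
show ip rpf 7.1.7.1
show ip rpf 100.1.1.1

#IOS-XR
show pim rpf 7.1.7.1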

In IOS-XE, we can also see the routing table used to perform RPF lookups. This is a separate structure from the unicast RIB, because although the unicast RIB is used to populate this table, you can also use multicast BGP and static mroutes to override the unicast RIB routes for RPF checks.

The RPF neighbor is the PIM neighbor that is reachable out the RPF interface. If the RPF interface is connected to a LAN, it is the neighbor on the path of the unicast best route or, if ECMP routes exist, the PIM neighbor with the highest IP address on the LAN.

In case ECMP routes exist for a given source, only one interface can be used for the RPF interface. The local interface with the highest IP address is used as the tiebreaker.

Now that the RPF interfaces have been found, the LHRs can start building the distribution tree by sending a PIM Join out the RPF interface. We can see this on their PIM neighbors using debug ip pim.

These routers, upon receiving a PIM Join, will add the interface that received the PIM Join to the OIL for the (S, G) entry, perform an RPF check on the S address, and send a PIM Join to their PIM RPF neighbor. This process repeats until the PIM Join arrives at the FHRs (R7 and R8). In this way, the distribution tree is built “backwards” - from the leaves (LHRs) to the root (FHR).

State Verification

We can verify the state along the path by using show ip mroute on IOS-XE and show mrib route on IOS-XR. Let’s look at the tree that is built from R1 to R2:
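
Both commands accept the (S, G) to limit the output:

#IOS-XE
show ip mroute 232.1.1.1 7.1.7.1

#IOS-XR
show mrib route 232.1.1.1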

R5 has an (S, G) entry with the IIF pointing towards R10, and an OIL containing R2. The OIL was populated via the IGMP Membership Report.

R10 has the IIF pointing towards XRv1 and the OIL contains the interface towards R5. The OIL was populated via the PIM Join that R5 sent to R10.

XRv1 shows similar information but in a slightly different format. The most important flags are A (accept) which is associated with the IIF, and F (forward) which is associated with the interfaces in the OIL.

Finally, R7 shows the IIF as the interface facing the host, and the OIL pointing towards both XRv1 and XRv4. This is because XRv4 is on the path towards R4, which also joined this (S, G) channel.

IPv6 Verification

Note that this process works exactly the same way for IPv6. PIM Joins are sent out the RPF interface. We can see the RPF interfaces the LHRs use:
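
On R5 (the IOS-XE LHR), the IPv6 RPF lookup for the source is:

#R5
show ipv6 rpf 2007:7:1:7::1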

The PIM Join process works exactly the same way as well. For brevity we will only verify the state on R5 and XRv1.
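
The IPv6 equivalents of the state commands are:

#R5
show ipv6 mroute FF38::1

#XR1
show mrib ipv6 route ff38::1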

This process repeats for each interested receiver and each (S, G) channel, building a multicast distribution tree per (S, G) entry.
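
A quick end-to-end check is to source pings to the group from the sender routers; if the trees are built correctly, both R2 and R4 should reply (they only answer because PIM was enabled on them, as noted above). The IPv6 channels can be tested the same way against ff38::1.

#R1
ping 232.1.1.1 source 7.1.7.1 repeat 5

#R3
ping 232.1.1.1 source 8.3.8.3 repeat 5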

What happens if PIM is disabled on the RPF interface?

Let’s turn off PIM on R10’s interface facing XRv1:

#R10
int gi2.501
 no ip pim sparse-mode

R10 no longer has a valid RPF check for the source.

The entry in the mroute table will show that the RPF neighbor is 0.0.0.0 and IIF is null.
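
Both conditions can be seen on R10:

#R10
show ip rpf 7.1.7.1
show ip mroute 232.1.1.1 7.1.7.1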

You might wonder why R10 can’t just use its interface towards R6 as the RPF interface, since it still has a valid PIM adjacency with R6. This would violate the loop prevention mechanism inherent to PIM - the RPF interface must be on the unicast best path towards the source. However, we can manually override the RPF check with a static mroute. This will solve the issue:

#R10
ip mroute 7.1.7.1 255.255.255.255 9.6.10.6

R10 sends a PIM Join out the interface. R6 now has three ECMP routes to 7.1.7.1 to choose between for the RPF check. It finds that interface Gi2.562 has the highest IP address, so this is used for the RPF check. Luckily for us, it did not choose R10 as the RPF neighbor, which would have broken the tree.
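
R6’s choice can be confirmed with an RPF lookup:

#R6
show ip rpf 7.1.7.1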

The tree for (7.1.7.1, 232.1.1.1) now looks like this (at least for the portion that delivers to R2).

What happens if R6 loses its PIM adjacency on the current RPF interface? Because the RPF interface was chosen from among ECMP routes, R6 can automatically fall back to the next-best RPF interface:

#R6
int GigabitEthernet2.562
 no ip pim sparse-mode