Profile 6

Load basic.startup.config.with.cpim.cfg

#IOS-XE
config replace flash:basic.startup.config.with.cpim.cfg
Y

#IOS-XR
configure
load bootflash:basic.startup.config.with.cpim.cfg
commit replace
y

The basic IP addresses, L3VPN, and C-PIM between the PEs and CEs are pre-configured.

  • Configure multicast VPN using mLDP in the core.

  • You cannot use C-PIM between PEs. Instead, use mLDP to signal customer multicast state.

  • Configure the customer PIM domain to use SSM.

  • Test that C3 can join (C2, 232.1.1.1) and that C2 can ping this group and receive replies from C3.

See answer below (scroll down).

Answer

We will use mLDP in-band signaling, which does not use C-PIM between the PEs. First, ensure that mLDP is configured on the IOS-XR routers.

#P1, P2, PE3
mpls ldp mldp add ipv4
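
As a quick sanity check (exact output fields vary by release), you can confirm that the LDP sessions negotiated the P2MP capability and that the mLDP database exists on the IOS-XR routers. The database stays empty until trees are actually signaled:

#P1, P2, PE3
show mpls ldp neighbor detail
show mpls mldp database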

Next, because we are no longer using C-PIM as the overlay signaling, and mLDP in-band signaling does not support ASM, we must enable SSM in the customer PIM domain. SSM is enabled by default on IOS XR (PE3).

#PE1, PE2
ip pim vrf CUSTOMER ssm default

#CE1, CE2, CE3
ip pim ssm default
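
If you want to double-check the SSM range (ssm default maps 232.0.0.0/8 to SSM), a typical check on the IOS-XR PE is the group-to-mode mapping; the command below is the usual IOS-XR one, output not shown here:

#PE3
show pim vrf CUSTOMER group-map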

Lastly, we enable mLDP as in-band signaling on the PEs.

#PE1, PE2
ip multicast vrf CUSTOMER mpls mldp
ip pim vrf CUSTOMER mpls source lo0
#PE3
route-policy USE_MLDP_INBAND
 set core-tree mldp-inband
end-policy
!
multicast-routing add ipv4 int lo0 enable
!
multicast-routing vrf CUSTOMER add ipv4
 mdt so lo0
 mdt mldp in-band-signaling ipv4
!
router pim vrf CUSTOMER add ipv4
 rpf topology route-policy USE_MLDP_INBAND
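
Before testing, you can sanity-check the PEs with the usual commands below; the exact output differs per platform, and the mLDP database will only show (S, G)-specific trees once receivers join:

#PE1, PE2
show ip multicast vrf CUSTOMER
show mpls mldp database

#PE3
show mpls mldp database
show pim vrf CUSTOMER topology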

Join an SSM group on C3 and test multicast traffic.

#CE3
int gi3
 ip igmp ver 3

#C3
int gi0/1
 ip igmp ver 3
 ip igmp join-group 232.1.1.1 so 10.1.2.10

#C2
do ping 232.1.1.1 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 232.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.1.3.10, 32 ms
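
If the ping does not get replies, a reasonable first check (using the interface and addresses above) is whether the IGMPv3 (S, G) membership and the corresponding mroute actually exist on CE3:

#CE3
show ip igmp groups 232.1.1.1 detail
show ip mroute 232.1.1.1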

Verification

First, C3 joins the (S, G) using IGMPv3. CE3 installs state for this and sends a PIM Join towards PE3. PE3 installs this state into its C-PIM table.
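
(The output is not reproduced here; on PE3, the usual way to view this state is the PIM topology table for the VRF.)

#PE3
show pim vrf CUSTOMER topology 232.1.1.1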

PE3 finds that the S is reachable via PE2, so it creates an mLDP P2MP FEC for this (S, G) state rooted at PE2.
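
The FEC and its opaque value can be inspected in the mLDP database on PE3 (output omitted here; the opaque encoding is described next):

#PE3
show mpls mldp database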

The opaque value consists of the customer (S, G), an indication that this is VPNv4 traffic, and the RD, which is learned from the BGP route used as the unicast route to the source (10.1.2.10). The usage of the RD is a little confusing. Typically an RD is never used for importing state on a PE; that is the job of the RT. However, it works in this case because the RD is taken from the original VPNv4 route that this single PE advertised. The RD must be unique per VRF on the PE, so there is no issue with using the RD to associate the tree with the PE’s VRF.

To elaborate further, the problem with the egress PE using its export RT is that the root PE may not be importing that RT. Say, for example, that PE3 exports RT A and imports RT B. PE3 cannot encode its own export RT (RT A), because PE2 may not import RT A. PE3 also cannot encode its import RT (RT B), because while PE2 exports RT B (which is why PE3 imports it), PE2 may not be importing RT B itself. Also, multiple VRFs can export the same RT. Therefore, using the RD to identify the VRF on the root PE makes the most sense in this case.

Moving on, this forms the mLDP P2MP tree that is rooted at PE2. At PE2, the router extracts the (S, G) state from the opaque value and issues its own PIM Join towards CE2. From CE2’s perspective, it has no idea whether the core used PIM or mLDP in-band to signal this state.

Multicast traffic flows over the P2MP mLDP tree that was created. On PE2 the outgoing interface is an Lspvif. On PE3 the incoming interface is ImdtCUSTOMER (the “I” meaning in-band).
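
You can see these interfaces in the VRF multicast routing tables (the Lspvif/Imdt interface numbers depend on the setup):

#PE2
show ip mroute vrf CUSTOMER 232.1.1.1

#PE3
show mrib vrf CUSTOMER route 232.1.1.1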

You can see a summary list of the mLDP trees using the following commands:
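
The captured output is omitted here; assuming current IOS-XE and IOS-XR syntax, the typical summary views are:

#PE1, PE2
show mpls mldp database summary

#PE3
show mpls mldp database brief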

Theory

In this profile, mLDP replaces C-PIM in the core. PEs no longer need to maintain PIM adjacencies with each other. Instead, the customer PIM join state is encoded in the mLDP opaque value in the FEC. BSR is not supported in this model, so PIM-SSM must be used. All state must be signaled explicitly, and there is no “shared LAN” mechanism overlaid on the provider core anymore.

The signaling here is called “in-band” mLDP because the signaling (learning the customer PIM state in the control plane) is carried in-band of the underlay protocol, which is mLDP itself. Out-of-band signaling would be PIM or BGP, where the control plane runs separately, out-of-band of the underlay protocol.

When a PE receives a PIM Join for an (S, G), it does an RPF lookup to determine the nexthop for that source (S). That nexthop should be another PE. It then joins an mLDP P2MP tree rooted at that PE and encodes the (S, G) into the opaque value. An mLDP P2MP tree is built from the receiver towards the root using mLDP label mapping messages that carry the P2MP FEC. The mapping messages start at the receiver side and travel hop-by-hop toward the root, just as a PIM Join starts at the receiver and travels hop-by-hop towards the FHR.

When the tree is formed with that nexthop as the root, that root PE will convert the P2MP tree into a PIM Join and send it towards the source (which is towards the CE).

The advantage of this is that you no longer need to maintain a full mesh of C-PIM adjacencies between PEs in the core. The disadvantage is that it requires PIM-SSM on the customer side, and requires a unique P2MP mLDP tree for every single customer state in the provider core.

In-band mLDP does not use explicit data MDTs, but in effect it behaves like data MDTs by default, because there is one tree per (S, G) state. Traffic only flows to PEs that have joined the mLDP tree for that particular state. There is no default MDT with in-band mLDP, because a default or “LAN broadcast-like” tree is not necessary. (The default MDT in profiles 0 and 1 is mostly used to maintain C-PIM adjacencies between PEs.)

The drawback with this is that there is a P2MP FEC for every single customer (S, G) state. This does not scale well, as it can quickly use up labels on PE routers as the customer MRIB state grows.
