PIM Boundaries with AutoRP
Load multicast.multi.domain.init.cfg
Configure the group of routers circled on the left as a sub-PIM domain. R6 should be the RP for group range 239.10.0.0/24. This information should be propagated using AutoRP and should not leak outside this sub domain.
Configure R9 as the RP for group range 239.20.0.0/24 using AutoRP. This group range should be for domain-wide traffic, and should propagate to all routers in the topology.
Note that an interface has been added between R9 and XR2.
PIM’s use of the underlying unicast topology makes PIM an overlay technology: the PIM domain is overlaid on top of the unicast underlay. Routers also cannot run multiple PIM processes the way they can run multiple IGP processes. This means the PIM domain is essentially bounded by the collection of routers that share RP mapping information.
In some scenarios, you may want to create multiple PIM sub domains within a single IGP. The easiest way to do this is to limit the propagation of RP information to each sub domain.
In this lab, we have a sub domain that is circled on the left, yet we also want to create a topology-wide domain of all PIM routers. This makes it tricky to bound the sub domain’s RP advertisements.
To begin, we advertise RP mappings in AutoRP just like before. AutoRP actually lets us define the TTL of the C-RP announcement and MA discovery messages, but this is rarely a good tool for bounding the domain. For example, on R6, if we set the TTL to 2, R9 will still learn the mappings but R7 will not.
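For reference, this TTL is set with the scope keyword on IOS-XE. A minimal sketch, assuming R6 sources both the announcements and the discoveries from Loopback0 (the interface and ACL number are assumptions, and this is not the approach used in the rest of this lab):

! scope 2 limits the RP-Announce/RP-Discovery packets to a TTL of 2
ip pim send-rp-announce Loopback0 scope 2 group-list 10
ip pim send-rp-discovery Loopback0 scope 2
!
access-list 10 permit 239.10.0.0 0.0.0.255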
Next, we use the multicast boundary feature to prohibit the advertisement of the sub domain’s AutoRP mappings to routers that do not belong to the sub domain. On IOS-XE, we can actually filter the contents of the AutoRP messages: any mappings that are not permitted by the ACL are stripped from the AutoRP message.
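A minimal sketch of this on R6 (the ACL number is an assumption; the subinterfaces are the ones referenced in the debug below):

! Strip the sub domain's mapping from AutoRP messages; permit everything else
access-list 10 deny   239.10.0.0 0.0.0.255
access-list 10 permit any
!
interface GigabitEthernet2.562
 ip multicast boundary 10 filter-autorp
!
interface GigabitEthernet2.569
 ip multicast boundary 10 filter-autorp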
Using debug ip pim auto-rp, we can verify that R6 filters the 239.10.0.0/24 mapping from Auto-RP messages sent in/out of Gi2.562 and Gi2.569:
IOS-XR does not have this ability. Instead, all we can do is filter the Auto-RP RP-discovery packets (224.0.1.40) entirely. In addition, the boundary ACL on IOS-XR denies the local multicast group 239.10.0.0/24. This prevents multicast traffic in this group range from crossing these interfaces, as this traffic should stay within the local sub domain.
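A rough equivalent on an IOS-XR edge router might look like the following (the ACL name and interface are assumptions):

ipv4 access-list SUB-DOMAIN-BOUNDARY
 ! Drop Auto-RP RP-discovery packets entirely
 10 deny ipv4 any host 224.0.1.40
 ! Keep the sub domain's data traffic inside the sub domain
 20 deny ipv4 any 239.10.0.0 0.0.0.255
 30 permit ipv4 any any
!
multicast-routing
 address-family ipv4
  interface GigabitEthernet0/0/0/1
   boundary SUB-DOMAIN-BOUNDARY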
Note that if all we had was IOS-XR routers on the edge between the sub domain and the rest of the topology, we could not use AutoRP for disseminating RP information, as there would be no way to selectively permit the 239.20.0.0/24 RP mapping but deny 239.10.0.0/24.
We’ve also introduced another problem: R7 and XR4 now only receive the 239.20.0.0/24 RP mapping information through XR1, which is not along the shortest path to R9. For this reason, the (R9, 224.0.1.40) traffic does not pass the RPF check (remember that this is plain dense mode traffic): XR4’s best route to R9 is via the direct link, and R7’s best route to R9 is via XR4.
To fix this, we can increase the IGP cost of the XR4-R9 link so that the best path towards R9 (and therefore the RPF interface) matches the path the Auto-RP traffic actually takes, allowing the RPF check to pass.
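The exact commands depend on the IGP in the initial configuration; as an illustration, if the underlay runs OSPF, raising the cost on XR4's link to R9 might look something like this (process ID, interface, and cost value are assumptions):

router ospf 1
 area 0
  interface GigabitEthernet0/0/0/3
   ! Make the direct link less preferred for reaching R9
   cost 100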
On all routers in the sub domain, we should see two RP mappings:
On a router that does not belong to the sub domain, we should only see an RP mapping for the 239.20.0.0/24 group range:
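For reference, the commands used to view these mappings are:

show ip pim rp mapping   (IOS-XE)
show pim rp mapping      (IOS-XR)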
When you apply the AutoRP filter, it appears to also act as a normal data plane boundary, so you must ensure that the 224.0.1.40 group is permitted through.
In this lab, we simply had a permit all statement:
However, if you use inverse logic, permitting only the groups you want to pass the filter, you must remember to add a permit host 224.0.1.40 entry.
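For example, a sketch of such an inverse-style ACL (the ACL number and the 239.20.0.0/24 entry are illustrative):

access-list 10 permit host 224.0.1.40
access-list 10 permit 239.20.0.0 0.0.0.255
! everything else, including 239.10.0.0/24, hits the implicit deny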