PIM Boundaries with BSR
Last updated
Load multicast.multi.domain.init.cfg
Configure the group of routers circled on the left as a sub-PIM domain. R6 should be the RP for group range 239.10.0.0/24.
Configure R9 as the RP for group range 239.20.0.0/24. This group range should be for domain-wide traffic, and should propagate to all routers in the topology.
Use BSR to disseminate the information. Ensure that local multicast traffic between the sub domain and the rest of the topology cannot leak through.
Note that an interface has been added between R9 and XR2.
When using BSR for RP mapping dissemination, there is no way to filter the contents of the BSR messages. We could use ip pim bsr-border, but this would prevent the sub domain from learning the RP mapping for the domain-wide group 239.20.0.0/24. So this is not a good solution.
Instead, we can simply allow all BSR RP mappings to be advertised among the entire domain and then create a hard multicast boundary on each router on the edge of the sub domain.
First we advertise RP information as usual using BSR:
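A minimal sketch of the BSR setup on IOS-XE, assuming Loopback0 is used for the candidate-BSR and candidate-RP addresses and that the ACL names are our own choice (the lab's actual names and interfaces may differ):

```
! R6 - candidate BSR, and candidate RP for the sub-domain range
ip access-list standard GROUPS-LOCAL
 permit 239.10.0.0 0.0.0.255
ip pim bsr-candidate Loopback0
ip pim rp-candidate Loopback0 group-list GROUPS-LOCAL

! R9 - candidate RP for the domain-wide range
ip access-list standard GROUPS-DOMAIN
 permit 239.20.0.0 0.0.0.255
ip pim rp-candidate Loopback0 group-list GROUPS-DOMAIN
```

Only one candidate BSR is strictly required; here R6 is arbitrarily chosen for that role.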
All routers should have both RP mappings. For example, now R8 knows about the local 239.10.0.0/24 group mapping:
To prevent local multicast traffic from leaking out of the sub domain, we create a multicast boundary which denies 239.10.0.0/24. This is fairly simple on IOS-XR, as our only option is to apply an ACL:
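On IOS-XR the boundary is an ACL applied under multicast-routing. A sketch, with the ACL name and interface assumed for illustration:

```
ipv4 access-list MCAST-BOUNDARY
 10 deny ipv4 any 239.10.0.0 0.0.0.255
 20 permit ipv4 any any
!
multicast-routing
 address-family ipv4
  interface GigabitEthernet0/0/0/1
   boundary MCAST-BOUNDARY
```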
However, on IOS-XE, this is a little more confusing. We are given both an in and out direction option. What does this mean exactly, and which should we use?
The in option means that multicast traffic arriving from a source, coming into the interface will be filtered out. So if a source outside the local domain sends to 239.10.0.0/24, the traffic will be filtered. This is filtered by simply not creating (S, G) state for this traffic.
Let’s test this out by joining 239.10.0.1 from both R5 and XR3 and pinging this from R3.
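The receivers can be simulated with IGMP joins on the routers themselves. A sketch, with receiver-facing interfaces assumed:

```
! R5 (IOS-XE)
interface GigabitEthernet2
 ip igmp join-group 239.10.0.1

! XR3 (IOS-XR)
router igmp
 interface GigabitEthernet0/0/0/0
  join-group 239.10.0.1
```

From R3, ping 239.10.0.1 and check for replies from both receivers.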
First, we’ll make sure the multicast traffic works before applying any filtering. R6 has both the link towards R10 and towards R9 in the OIL for (*, 239.10.0.1).
Pinging the group from R3 results in responses from both routers:
We’ll now clear ip mroute 239.10.0.1 on R6 and R8, and apply a multicast boundary inbound to deny 239.10.0.1.
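On R6 this looks roughly as follows, assuming an ACL (here named DENY-LOCAL-GROUPS) that denies the group and permits everything else, and using the Gi2.562 interface seen in the later output:

```
ip access-list standard DENY-LOCAL-GROUPS
 deny 239.10.0.1
 permit any
!
interface GigabitEthernet2.562
 ip multicast boundary DENY-LOCAL-GROUPS in
!
clear ip mroute 239.10.0.1
```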
Interestingly, the first packet, which uses the shared tree, gets a response from R5. Subsequent packets only receive a response from XR3.
What happened here is the following:
R3 sends to 239.10.0.1 and R8 registers this with R6.
R6 receives the Register, decapsulates it and sends it down the shared tree.
R6 sends a Register stop to R8.
R5 tries to build a (S, G) tree rooted at R3, but this is denied by the boundary ACL: a router will not create (S, G) state for a group denied by a multicast boundary ACL applied inbound on the RPF interface.
More specifically, we see that a join for (8.3.8.3, 239.10.0.1) is denied from being created, because the RPF interface is Gi2.562. A multicast boundary is applied in the in direction to this interface, denying this group, so the (S, G) state cannot be created.
So we can say that this successfully blocks multicast traffic, although the first packet is allowed through. That leak happens only because the default SPT switchover is enabled: the inbound boundary blocks the formation of the (S, G) tree, not the initial delivery down the shared tree.
What happens if a source within the local domain sends to this group?
Again, only the first packet flows down the shared tree. The receiver’s LHR in the non-local domain tries to join the (S, G), but the router denies this state from being created.
Let’s examine what happens if we apply the ACL in the out direction. The out direction means that multicast traffic is not allowed out the interface (sent from a receiver and out the given interface). In practice, this means that the router will not add this interface to an OIL for a group that is denied by the ACL.
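Switching the direction on R6 is a one-line change, assuming the same style of ACL as before (the name DENY-LOCAL-GROUPS is our own) and using the Gi2.569 interface seen in the output below:

```
interface GigabitEthernet2.569
 no ip multicast boundary DENY-LOCAL-GROUPS in
 ip multicast boundary DENY-LOCAL-GROUPS out
```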
We can see that now R6 will not even add Gi2.569 to the OIL for (*, 239.10.0.1). R6 receives a Join from R9, but it denies adding the interface to the OIL.
Now R4 will never receive the traffic, no matter where the sender resides.
However, we still have a problem: R3 can send multicast traffic to R5, which is not what we want.
The best solution is to apply the ACL in both directions. This can simply be done by omitting the direction keyword:
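Sketched on the same assumed interface and ACL name as the earlier examples, with no direction keyword so the boundary applies both inbound and outbound:

```
interface GigabitEthernet2.569
 ip multicast boundary DENY-LOCAL-GROUPS
```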
Now traffic from R3 simply never works. R4 cannot join the shared tree, and R5 cannot receive the traffic. (With the exception that R5 does receive the first packet which is forwarded down the shared tree. This is because the inbound direction only blocks formation of (S, G) trees.)
Traffic from a source within the sub domain is only delivered to R5.
Creating a multicast boundary on IOS-XR does not require much thought, because all we can do is apply the boundary in both directions at once.
On IOS-XE, we have the option to apply an ACL in either the in direction, out direction, or both.
The in direction means that multicast traffic is not allowed to ingress the given interface: no (S, G) state can be created which uses that interface as the RPF interface.
The out direction means that multicast traffic is not allowed to be sent out the given interface: the interface cannot be added to an OIL.
In most cases, it is easiest to just apply the boundary in both directions.
Also note that in this lab we only used a standard ACL, which matches the multicast group address. You can also use an extended ACL, in which the first parameter matches the source and the second parameter matches the group address. If you want to match (*, G) you use host 0.0.0.0, which is 0.0.0.0/32, not 0.0.0.0/0. (This special meaning lets you specifically identify the shared tree rather than "any" (S, G) tree.)
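For illustration, an extended ACL (name assumed) that distinguishes the shared tree from source trees for a single group might look like this:

```
ip access-list extended MCAST-FILTER
 ! Matches only the shared tree (*, 239.10.0.1)
 deny ip host 0.0.0.0 host 239.10.0.1
 ! Matches any (S, 239.10.0.1) source tree
 deny ip any host 239.10.0.1
 permit ip any any
```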