PIM Boundaries with BSR

Load multicast.multi.domain.init.cfg

#IOS-XE
config replace flash:multicast.multi.domain.init.cfg
 
#IOS-XR
configure
load bootflash:multicast.multi.domain.init.cfg
commit replace
y

Configure the group of routers circled on the left as a sub-PIM domain. R6 should be the RP for group range 239.10.0.0/24.

Configure R9 as the RP for group range 239.20.0.0/24. This group range should be for domain-wide traffic, and should propagate to all routers in the topology.

Use BSR to disseminate the information. Ensure that local multicast traffic between the sub domain and the rest of the topology cannot leak through.

Note that an interface has been added between R9 and XR2.

Answer

#R9
int lo0
 ip pim sparse-mode
!
ip access-list standard DOMAIN_SCOPE_RANGE
 10 permit 239.20.0.0 0.0.0.255
!
ip pim bsr-candidate lo0
ip pim rp-candidate lo0 group-list DOMAIN_SCOPE_RANGE

#R6
int lo0
 ip pim sparse-mode
!
ip access-list standard LOCAL_SCOPE_RANGE
 10 permit 239.10.0.0 0.0.0.255
!
ip pim bsr-candidate lo0
ip pim rp-candidate lo0 group-list LOCAL_SCOPE_RANGE
!
access-l 1 deny 239.10.0.0 0.0.0.255
access-l 1 permit any
!
int GigabitEthernet2.562
 ip multicast boundary 1
int GigabitEthernet2.569
 ip multicast boundary 1

#XR4
ipv4 access-list BOUNDARY_ACL
 10 deny ipv4 239.10.0.0/24 any
 20 permit ipv4 any any
!
multicast-routing
 address-family ipv4
  interface GigabitEthernet0/0/0/0.524
   boundary BOUNDARY_ACL
  !
  interface GigabitEthernet0/0/0/0.594
   boundary BOUNDARY_ACL

Explanation

When using BSR for RP mapping dissemination, there is no way to filter the contents of the BSR messages. We could use ip pim bsr-border, but this would also prevent the sub domain from learning the RP mapping for the domain-wide group range 239.20.0.0/24, so it is not a good solution here.
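For reference, that rejected bsr-border approach would have looked something like this on R6's edge interfaces (shown only to illustrate the alternative we are avoiding):

#R6
int GigabitEthernet2.562
 ip pim bsr-border
int GigabitEthernet2.569
 ip pim bsr-border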

Instead, we can simply allow all BSR RP mappings to be advertised among the entire domain and then create a hard multicast boundary on each router on the edge of the sub domain.

First we advertise RP information as usual using BSR:

#R9
int lo0
 ip pim sparse-mode
!
ip access-list standard DOMAIN_SCOPE_RANGE
 10 permit 239.20.0.0 0.0.0.255
!
ip pim bsr-candidate lo0
ip pim rp-candidate lo0 group-list DOMAIN_SCOPE_RANGE

#R6
int lo0
 ip pim sparse-mode
!
ip access-list standard LOCAL_SCOPE_RANGE
 10 permit 239.10.0.0 0.0.0.255
!
ip pim bsr-candidate lo0
ip pim rp-candidate lo0 group-list LOCAL_SCOPE_RANGE
!
access-l 1 deny 239.10.0.0 0.0.0.255
access-l 1 permit any

All routers should have both RP mappings. For example, R8 now knows about the local 239.10.0.0/24 group mapping in addition to the domain-wide 239.20.0.0/24 mapping.
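This can be checked with the following commands (exact output depends on your addressing, but both group ranges should be listed as learned via bootstrap):

#R8
show ip pim rp mapping

#XR4
show pim ipv4 group-map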

To prevent local multicast traffic from leaking out of the sub domain, we create a multicast boundary which denies 239.10.0.0/24. This is fairly simple on IOS-XR, as our only option is to apply an ACL:

#XR4
ipv4 access-list BOUNDARY_ACL
 10 deny ipv4 239.10.0.0/24 any
 20 permit ipv4 any any
!
multicast-routing
 address-family ipv4
  interface GigabitEthernet0/0/0/0.524
   boundary BOUNDARY_ACL
  !
  interface GigabitEthernet0/0/0/0.594
   boundary BOUNDARY_ACL

However, on IOS-XE, this is a little more confusing. We are given both an in and out direction option. What does this mean exactly, and which should we use?

#R6
access-l 1 deny 239.10.0.0 0.0.0.255
access-l 1 permit any

The in option means that multicast traffic arriving inbound on the interface from a source will be filtered out. So if a source outside the local domain sends to 239.10.0.0/24, the traffic will be filtered. This is filtered by simply not creating (S, G) state for this traffic.

Let’s test this out by joining 239.10.0.1 on both R5 and XR3 and pinging the group from R3.

#R5
int lo0
 ip pim sparse-mode
 ip igmp join-group 239.10.0.1

#XR3
multicast-routing
 address-family ipv4
  interface Loopback0
   enable
!
router igmp
 interface Loopback0
  join-group 239.10.0.1

First, we’ll make sure the multicast traffic works before applying any filtering. R6 has both the link towards R10 and towards R9 in the OIL for (*, 239.10.0.1).
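This can be verified on R6 before the test (output omitted here; both interfaces should appear in the OIL):

#R6
show ip mroute 239.10.0.1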

Pinging the group from R3 results in responses from both routers.

We’ll now clear ip mroute 239.10.0.1 on R6 and R8, and apply a multicast boundary inbound to deny 239.10.0.1.
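The stale state is flushed like so:

#R6
clear ip mroute 239.10.0.1

#R8
clear ip mroute 239.10.0.1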

#R6
int GigabitEthernet2.562
 ip multicast boundary 1 in
int GigabitEthernet2.569
 ip multicast boundary 1 in

Interestingly, the first packet, which uses the shared tree, gets a response from R5. Subsequent packets only receive a response from XR3.

What happened here is the following:

  • R3 sends to 239.10.0.1 and R8 registers this with R6.

  • R6 receives the Register, decapsulates it and sends it down the shared tree.

  • R6 sends a Register-Stop to R8.

  • R5 tries to build an (S, G) tree rooted at R3, but this is blocked by the boundary ACL: R6 will not create (S, G) state for a group denied by a multicast boundary ACL applied to the RPF interface.

More specifically, we see that (8.3.8.3, 239.10.0.1) state is denied from being created on R6, because its RPF interface is Gi2.562. A multicast boundary denying this group is applied in the in direction on that interface, so the (S, G) state cannot be created.

So we can say that this successfully blocks the multicast traffic, although the first packet is allowed through. However, this only works because the default SPT switchover is turned on; the inbound boundary blocks the formation of the (S, G) tree rather than the shared-tree traffic itself.
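For reference, SPT switchover can be disabled on a last-hop router with the spt-threshold command (a side note, not part of the solution):

#R5
ip pim spt-threshold infinity

With the threshold set to infinity, the last-hop router stays on the shared tree instead of joining the (S, G) tree.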

What happens if a source within the local domain sends to this group?

Again, only the first packet flows down the shared tree. The receiver’s LHR in the non-local domain tries to join the (S, G), but the router denies this state from being created.

Let’s examine what happens if we apply the ACL in the out direction. The out direction means that multicast traffic is not allowed out the interface (toward receivers reached via that interface). In practice, this means that the router will not add the interface to the OIL for any group denied by the ACL.

#R6
int GigabitEthernet2.562
 no ip multicast boundary 1 in
 ip multicast boundary 1 out
int GigabitEthernet2.569
 no ip multicast boundary 1 in
 ip multicast boundary 1 out

We can see that now R6 will not even add Gi2.569 to the OIL for (*, 239.10.0.1). R6 receives a Join from R9, but it denies adding the interface to the OIL.

Now R4 will never receive the traffic, no matter where the sender resides.

However, we still have a problem: R3 can send multicast traffic to R5, which is not what we want.

The best solution is to apply the ACL in both directions. This can simply be done by omitting the direction keyword:

#R6
int GigabitEthernet2.562
 no ip multicast boundary 1 out
 ip multicast boundary 1
int GigabitEthernet2.569
 no ip multicast boundary 1 out
 ip multicast boundary 1

Now traffic from R3 never works at all. R4 cannot join the shared tree, and R5 cannot receive the traffic. (The one exception is that R5 does receive the first packet, which is forwarded down the shared tree; the inbound direction only blocks the formation of (S, G) trees.)

Traffic from a source within the sub domain is only delivered to R5.

Summary

Creating a multicast boundary on IOS-XR does not require much thought, because all we can do is apply the boundary in both directions at once.

On IOS-XE, we have the option to apply an ACL in either the in direction, out direction, or both.

  • The in direction means that multicast traffic is not allowed to ingress the given interface

    • No (S, G) state is allowed to be created which uses that interface as the RPF interface

  • The out direction means that multicast traffic is not allowed to be sent out the given interface

    • The interface is not allowed to be added to the OIL

In most cases, it is easiest to just apply the boundary in both directions.

Also note that in this lab we only used a standard ACL, which matches the multicast group address. You can also use an extended ACL, in which the first parameter matches the source and the second parameter matches the group address. To match (*, G), use host 0.0.0.0, which is 0.0.0.0/32, not 0.0.0.0/0. (This special value lets you identify the shared tree specifically, rather than “any” (S, G) tree.)
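For example, a hypothetical extended boundary ACL that denies only shared-tree state for 239.10.0.0/24, while still permitting (S, G) trees, could look like this:

ip access-list extended BLOCK_SHARED_TREE
 deny ip host 0.0.0.0 239.10.0.0 0.0.0.255
 permit ip any any

Here BLOCK_SHARED_TREE is just an illustrative name; the host 0.0.0.0 source matches (*, G) entries specifically.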
