Partitioned RRs
Last updated
Load vpnv4.partitioned.rr.init.cfg
The L3VPN is fully set up, except for iBGP within the core. All PEs are already configured to peer with both RRs.
R3 should reflect routes only for VPN_A, and R6 should reflect routes only for VPN_B. When you look at the VPNv4 table on each RR, you should see only VPN_A routes on R3 and only VPN_B routes on R6. Achieve this by configuring only the RRs.
By default, an RR disables its local RT filter, which allows it to reflect VPNv4 routes even when it has no local VRF importing the matching RT. We can control this with the bgp rr-group command, which takes a named extcommunity-list as a sort of RT ACL: any received route whose RTs are not permitted by the extcommunity-list is discarded.
In our lab, we specified the individual VPN RTs that each RR will permit.
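A minimal sketch of what this looks like, assuming RT 100:1 for VPN_A and 100:2 for VPN_B (the extcommunity-list names are placeholders, and exact command placement can vary by IOS release):

```
! R3 -- permit only VPN_A's RT
ip extcommunity-list standard VPN_A_RT permit rt 100:1
!
router bgp 100
 address-family vpnv4
  bgp rr-group VPN_A_RT

! R6 -- permit only VPN_B's RT
ip extcommunity-list standard VPN_B_RT permit rt 100:2
!
router bgp 100
 address-family vpnv4
  bgp rr-group VPN_B_RT
```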
We only have two VRFs in our lab, so this works fine. But if we had hundreds of VRFs, this would not scale well. (And this feature is only used for scalability in the first place, when lots of VRFs are defined in the core).
Note that a better option might be to designate a unique RT for each RR. Then each customer VRF would export two RT values: one for the VPN membership, and one for which RR should reflect the VPNv4 routes.
We turn on VPNv4 update debugging and refresh VPNv4 inbound on R3.
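Something along these lines on R3 (exact debug and clear syntax varies by IOS release):

```
R3# debug bgp vpnv4 unicast updates
R3# clear bgp vpnv4 unicast * soft in
```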
We can see that the VPN_B routes are denied by the RT filter. 100:2 is not permitted.
All we see in R3’s local VPNv4 table are VPN_A routes:
Let’s change the RT policy so that R3 and R6 each have a unique RT they are matching:
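A sketch of the change, assuming 100:3 is the RT designating R3 and 100:6 the RT designating R6 (these specific values are illustrative):

```
! R3 -- match only its designated RR RT
ip extcommunity-list standard R3_RR permit rt 100:3
!
router bgp 100
 address-family vpnv4
  bgp rr-group R3_RR

! R6 -- match only its designated RR RT
ip extcommunity-list standard R6_RR permit rt 100:6
!
router bgp 100
 address-family vpnv4
  bgp rr-group R6_RR
```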
On the PEs, we export the appropriate RR RTs in addition to the regular VPN RTs. Except we make a mistake on R5, exporting R6's RT instead of R3's. The question is: will this break the L3VPN?
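A sketch of the PE side, assuming both PEs carry VRF VPN_A with RT 100:1, and the illustrative RR RTs 100:3 (R3) and 100:6 (R6) from above:

```
! R2 -- correct: export the VPN RT plus R3's RR RT
vrf definition VPN_A
 address-family ipv4
  route-target import 100:1
  route-target export 100:1
  route-target export 100:3

! R5 -- mistake: exports R6's RR RT (100:6) instead of R3's (100:3)
vrf definition VPN_A
 address-family ipv4
  route-target import 100:1
  route-target export 100:1
  route-target export 100:6
```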
The answer is that the L3VPN is still fully operational. Because both R2 and R5 peer with both RRs, every route still reaches every PE: R3 reflects 1.1.1.1/32 (tagged with R3's RT) and R6 reflects 8.8.8.8/32 (tagged, by mistake, with R6's RT), so there is no issue.