EVPN VPWS Multihomed Single-Active
Load basic.evpn.multihomed.vpws.init.cfg
BGP l2vpn/evpn is already preconfigured in the core.
Configure EVPN VPWS multihomed service between CE7 and CE9.
Use static port-channels, and single-active mode so that traffic is only forwarded between two PEs (i.e. PE5 and PE7 only). The CEs are already preconfigured with port-channels, but you may need to add this config:
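The referenced config block appears to be missing here. As a hedged sketch only, a static (mode on) bundle on an XR-based CE would look roughly like this; the bundle number, member interfaces, and addressing are assumptions, not taken from the lab:

```
! Hypothetical CE-side static port-channel (interface names and IP assumed)
interface Bundle-Ether1
 ipv4 address 10.7.9.7/24
!
interface GigabitEthernet0/0/0/1
 bundle id 1 mode on
!
interface GigabitEthernet0/0/0/2
 bundle id 1 mode on
```

"mode on" makes the bundle static (no LACP), which matters later when we look at failure detection on the CE side.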
If any PE loses both of its core-facing interfaces, it should bring down the Bundle-Ether (BE) interface on the AC side.
Answer
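The answer configuration is not shown in this copy of the chapter. As a rough, hedged sketch of what the PE-side config could look like (the EVI number, AC-IDs, ESI value, and interface names below are assumptions for illustration):

```
! Hypothetical PE-side EVPN VPWS single-active config
evpn
 interface Bundle-Ether1
  ethernet-segment
   identifier type 0 00.00.00.00.00.00.00.00.07
   load-balancing-mode single-active
!
interface GigabitEthernet0/0/0/3
 bundle id 1 mode on
!
interface Bundle-Ether1
 l2transport
!
l2vpn
 xconnect group EVPN
  p2p CE7-CE9
   interface Bundle-Ether1
   neighbor evpn evi 79 target 9 source 7
```

The key difference from the previous (all-active) lab is the `load-balancing-mode single-active` line under the Ethernet segment.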
Explanation
In the previous lab, we saw that all-active mode was used for the ESI by default. Each PE would load-share traffic to both remote PEs: the local label allocated for the VPWS neighbor carried the instruction to load balance across both remote PEs.
When we specify the ESI mode as single-active, each PE will use only a single remote PE. For example, on XR7 we see that both XR5 and XR6 advertise participation in the VPWS:
Since this route is using an ESI (0.0.0.0.0.0.0.0.7), XR7 checks the load balancing mode of the ESI using the per-ESI type 1 route. It sees that it is single-active:
Therefore XR7 uses only a single PE as the VPWS neighbor. It appears that XR7 essentially runs its own DF election between XR5 and XR6 to determine which remote PE to use.
Every PE performs these steps, so traffic flows between only two PEs at a time (which two depends on which ingress PE receives the traffic).
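To verify this behavior yourself, the following IOS XR show commands are useful (a suggested starting point, not the lab's exact output):

```
! Check the per-ESI type 1 route and its single-active flag
show bgp l2vpn evpn route-type 1

! Confirm the load-balancing mode of the Ethernet segment
show evpn ethernet-segment detail

! See which remote PE was chosen as the VPWS neighbor
show l2vpn xconnect detail
```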
This task also asks us to configure the PEs to bring down the AC if the core links go down. Without this step, if a PE becomes isolated from the core, its AC bundle will stay up and traffic will be blackholed.
This is a fairly simple feature: if all links specified in the group go down, the AC-side bundle interface is brought down (err-disabled).
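On IOS XR this is the EVPN core isolation group feature. A minimal sketch, assuming the core interface names and bundle number used here (they are not taken from the lab topology):

```
! Hypothetical core isolation group config (interface names assumed)
evpn
 group 1
  core interface GigabitEthernet0/0/0/0
  core interface GigabitEthernet0/0/0/1
 !
 interface Bundle-Ether1
  core-isolation-group 1
```

When every interface listed under `group 1` goes down, the AC bundle tied to that group is err-disabled; it recovers automatically once a core interface comes back up.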
The potential problem is that the BGP routes will remain active for the full BGP hold time (3 minutes). In the real world, we would run BFD on our IGP links to facilitate fast failover. To test this more easily in our lab, we can reduce the BGP keepalive/hold timers to 1/3 seconds on PE8.
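The timer change could look like this (the AS number and RR neighbor address below are assumptions):

```
! Hypothetical BGP timer tuning on PE8 (AS and neighbor address assumed)
router bgp 100
 neighbor 10.0.0.10
  timers 1 3
```

This is purely a lab convenience; in production, BFD on the IGP links is the right tool for fast failure detection.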
PE8’s core interfaces then go down, and the bundle member interfaces are placed into the err-disable state. In our lab we are using static port-channels, so the CE still sees the link as up; in the real world (running LACP), the CE would react to the missing LACPDUs.
PE8 withdraws its EVPN BGP routes. The remote PEs see the withdrawal (once the BGP session between the RR and PE8 times out), and the next hop is updated to PE7.
When the core interfaces come back up, the err-disabled bundle members take 60 seconds to automatically recover.
Also note that the AC is not brought down when just one interface in the group goes down; every interface identified in the group must be down.