EVPN-VPWS Multihomed IOS-XR (All-Active)

Stop the evpn topology and start the mh-evpn topology in containerlab. Make sure to also transfer the config files. All nodes in the mh-evpn topology are XRd.

Load init.cfg:

```
configure
load init.cfg
commit replace
y
```

SR-MPLS with ISIS is already preconfigured in the core. BGP L2VPN/EVPN peering sessions are already established.
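As a quick sanity check on the preconfigured core (hostnames and prompts will vary), the underlay and overlay state can be verified with:

```
show isis adjacency
show bgp l2vpn evpn summary
```

All EVPN peering sessions should be in the Established state before continuing.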

  • Create an All-Active multi-homed EVPN-VPWS service from CE1 to CE2.

  • LACP active mode should be used.

  • Configure 10.0.0.1/24 on CE1 and 10.0.0.2/24 on CE2.

Answer

Explanation

XRv9K has full LACP functionality, allowing us to fully test EVPN multihoming modes. However, XRv9K only supports EVPN-VPWS, so we will use that service type to test all multihoming modes. (Note that I originally wrote this lab using XRv9K, but all of this functionality is also supported in XRd.)

The default mode is all-active. All PEs are actively forwarding traffic on the ESI. In EVPN, a DF must be elected to prevent duplication of BUM traffic onto the ESI. But with EVPN-VPWS, there is no bridging, so the DF election doesn’t make any difference with all-active mode.

First we must configure a common LACP SysID on the two PEs to “trick” the CE into believing that it is connected to the same LACP partner on both links.
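A minimal sketch of the shared system ID (the MAC value here is an arbitrary example) - the same value must be configured on both PE1 and PE2:

```
configure
lacp system mac 1111.1111.1111
commit
```

Because both PEs now advertise the same LACP system ID, the CE bundles both links toward the two PEs into a single LAG.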

Then we configure the bundle and give it the same ESI value on both PEs. Note that you should not use an ESI that contains many zeros in the middle octets. In other multi-homing modes, an ESI Import RT is generated from the ESI, and if these values are all zeros, it breaks the functionality.

We then simply configure the bundle on each PE and assign it to the VPWS. Note that on every PE, each bundle will only consist of a single member link.
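A sketch of the PE-side configuration, applied identically on PE1 and PE2 (the interface numbers, bundle number, ESI value, and EVI/AC IDs are all illustrative - only the requirement that the ESI matches on both PEs is fixed):

```
interface Bundle-Ether1
 l2transport
!
interface GigabitEthernet0/0/0/1
 bundle id 1 mode active
!
evpn
 interface Bundle-Ether1
  ethernet-segment
   identifier type 0 11.11.11.11.11.11.11.11.11
!
l2vpn
 xconnect group EVPN
  p2p CE1-CE2
   interface Bundle-Ether1
   neighbor evpn evi 100 target 2 source 1
```

On the remote PE attached to CE2, the `target` and `source` AC IDs would be mirrored (`target 1 source 2`).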

The CE should see the bundle as up, and both links as active:
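On the CE side, something like the following would bundle both uplinks and apply the required addressing (interface names are assumptions; CE2 would use 10.0.0.2/24):

```
interface Bundle-Ether1
 ipv4 address 10.0.0.1 255.255.255.0
!
interface GigabitEthernet0/0/0/0
 bundle id 1 mode active
!
interface GigabitEthernet0/0/0/1
 bundle id 1 mode active
```

`show bundle Bundle-Ether1` on CE1 should then report both member links as Active.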

We can see the details of the ESI using the following show command:
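Assuming the bundle is Bundle-Ether1, run this on either PE:

```
show evpn ethernet-segment interface Bundle-Ether1 detail
```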

Above, under Topology, we see that the mode is All-active. The configured mode AApF means All-Active per-Flow. We should see the remote peer under the Peering Details.

We should see that both PEs are acting as Primary for the EVPN-VPWS service:
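The service state can be checked on PE1 and PE2 with:

```
show l2vpn xconnect detail
```

In all-active mode, neither PE should show the attachment circuit as standby.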

The remote PEs will load balance traffic to both PE1 and PE2. This is achieved by allocating a local label with an ECMP instruction to forward to each remote PE. In this case, both PEs happened to reserve the same service label for the VPWS (24002):
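One way to inspect the programming for this label on PE1 or PE2 (the exact output fields vary by release):

```
show mpls forwarding labels 24002
```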

Traffic between the two CEs should be working. Traffic is hashed among both links in the bundle, and likewise the ingress PE will use hashing to determine which of the two egress PEs to send the traffic to.

This mode of course offers redundancy as well, because if any one link goes down, that PE will remove its type 1 BGP route. For example, let's shut down Gi0/0/0/1 on PE1.
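On PE1:

```
configure
interface GigabitEthernet0/0/0/1
 shutdown
commit
```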

PE1 withdraws its type 1 per-ESI route, removing itself from the ESI. This is a debug taken from PE3:

PE3 now only lists PE2 as the nexthop for the VPWS:

CE1 sees that the link is no longer receiving LACPDUs, so it is removed from the bundle:
