You must stop the evpn topology, and start the mh-evpn topology in containerlab. Make sure to also transfer the config files. All nodes in the mh-evpn topology are XRd.
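For example, from the containerlab host (the topology file names below are assumptions; use the names of your own lab files):
#containerlab host
containerlab destroy -t evpn.clab.yml
containerlab deploy -t mh-evpn.clab.yml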
Load init.cfg
configure
load init.cfg
commit replace
y
SR-MPLS with ISIS is already preconfigured in the core. BGP L2VPN/EVPN peering sessions are already established.
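To confirm this starting state, standard checks such as the following can be run on any PE (output will vary per node):
#any PE
show isis adjacency
show bgp l2vpn evpn summary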
Create an All-Active multi-homed EVPN-VPWS service from CE1 to CE2.
LACP active mode should be used.
Configure 10.0.0.1/24 on CE1 and 10.0.0.2/24 on CE2.
Answer
#CE1
int gi0/0/0/1
no shut
bundle id 1 mode active
!
int gi0/0/0/2
no shut
bundle id 1 mode active
!
int be1
ipv4 add 10.0.0.1/24
#CE2
int gi0/0/0/1
no shut
bundle id 1 mode active
!
int gi0/0/0/2
no shut
bundle id 1 mode active
!
int be1
ipv4 add 10.0.0.2/24
#PE1, PE2
int gi0/0/0/1
no shut
bundle id 1 mode active
!
lacp system mac 0000.0000.0012
!
int be1 l2transport
!
evpn
interface Bundle-Ether1
ethernet-segment
identifier type 0 12.12.12.12.12.12.12.12.12
!
l2vpn xc group VPWS p2p 1
int be1
neighbor evpn evi 100 service 100
#PE3, PE4
int gi0/0/0/1
no shut
bundle id 1 mode active
!
lacp system mac 0000.0000.0034
!
int be1 l2transport
!
evpn
interface Bundle-Ether1
ethernet-segment
identifier type 0 34.34.34.34.34.34.34.34.34
!
l2vpn xc group VPWS p2p 1
int be1
neighbor evpn evi 100 service 100
Explanation
XRv9K has full LACP functionality, allowing us to properly test the EVPN multihoming modes. However, XRv9K only supports EVPN-VPWS, so we will use EVPN-VPWS to test all of the multihoming modes. (Note that I originally wrote this lab using XRv9K, but all functionality is also supported in XRd).
The default mode is all-active. All PEs are actively forwarding traffic on the ESI. In EVPN, a DF must be elected to prevent duplication of BUM traffic onto the ESI. But with EVPN-VPWS, there is no bridging, so the DF election doesn’t make any difference with all-active mode.
First we must configure a common LACP SysID on the two PEs to “trick” the CE into believing that it is connected to the same LACP partner on both links.
#PE1, PE2
lacp system mac 0000.0000.0012
Then we configure the bundle and give it the same ESI value on both PEs. Note that you should not use an ESI that contains many zeros in the middle octets. In other multi-homing modes, an ES-Import RT is generated from the ESI, and if these values are all zeros, it breaks the functionality.
#PE1, PE2
evpn
interface Bundle-Ether1
ethernet-segment
identifier type 0 12.12.12.12.12.12.12.12.12
We then simply configure the bundle on each PE and assign it to the VPWS. Note that on every PE, each bundle will only consist of a single member link.
#PE1, PE2
int gi0/0/0/1
no shut
bundle id 1 mode active
!
int be1 l2transport
!
l2vpn xc group VPWS p2p 1
int be1
neighbor evpn evi 100 service 100
The CE should see the bundle as up, and both links as active.
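For example, a quick verification on the CE and on the PEs (output not reproduced here):
#CE1
show bundle Bundle-Ether1
#PE1, PE2
show lacp system-id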
We can see the details of the ESI using the following show command:
RP/0/RP0/CPU0:PE1#show evpn ethernet-segment detail
Mon Jan 8 16:24:40.669 UTC
Legend:
B - No Forwarders EVPN-enabled,
C - MAC missing (Backbone S-MAC PBB-EVPN / Grouping ES-MAC vES),
RT - ES-Import Route Target missing,
E - ESI missing,
H - Interface handle missing,
I - Name (Interface or Virtual Access) missing,
M - Interface in Down state,
O - BGP End of Download missing,
P - Interface already Access Protected,
Pf - Interface forced single-homed,
R - BGP RID not received,
S - Interface in redundancy standby state,
X - ESI-extracted MAC Conflict
SHG - No local split-horizon-group label allocated
Hp - Interface blocked on peering complete during HA event
Rc - Recovery timer running during peering sequence
Ethernet Segment Id Interface Nexthops
------------------------ ---------------------------------- --------------------
0012.1212.1212.1212.1212 BE1 10.0.0.1
10.0.0.2
ES to BGP Gates : Ready
ES to L2FIB Gates : Ready
Main port :
Interface name : Bundle-Ether1
Interface MAC : 0050.f003.a0fe
IfHandle : 0x0000002c
State : Up
Redundancy : Not Defined
ESI type : 0
Value : 12.1212.1212.1212.1212
ES Import RT : 1212.1212.1212 (from ESI)
Source MAC : 0000.0000.0000 (N/A)
Topology :
Operational : MH, All-active
Configured : All-active (AApF) (default)
Service Carving : Auto-selection
Multicast : Disabled
Convergence :
Peering Details : 2 Nexthops
10.0.0.1 [MOD:P:7fff:T]
10.0.0.2 [MOD:P:00:T]
Service Carving Synchronization:
Mode : NONE
Peer Updates :
10.0.0.1 [SCT: N/A]
10.0.0.2 [SCT: N/A]
Service Carving Results:
Forwarders : 1
Elected : 0
Not Elected : 0
EVPN-VPWS Service Carving Results:
Primary : 1
Backup : 0
Non-DF : 0
MAC Flushing mode : STP-TCN
Peering timer : 3 sec [not running]
Recovery timer : 30 sec [not running]
Carving timer : 0 sec [not running]
HRW Reset timer : 5 sec [not running]
Local SHG label : 24005
Remote SHG labels : 1
24005 : nexthop 10.0.0.2
Access signal mode: Bundle OOS (Default)
Above, under Topology, we see that the mode is All-active. The configured mode AApF means All-Active per Flow. We should see the remote peer under the Peering Details.
We should also see that both PEs are acting as Primary for the EVPN-VPWS service (on each PE, the EVPN-VPWS Service Carving Results should show Primary : 1).
The remote PEs will load balance traffic to both PE1 and PE2. This is achieved by allocating a local label with an ECMP instruction to forward to each remote PE. In this case, both PEs happened to reserve the same service label for the VPWS (24002).
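One way to see these nexthops and labels on a remote PE is the xconnect detail output (not reproduced here):
#PE3
show l2vpn xconnect group VPWS detail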
Traffic between the two CEs should be working. Traffic is hashed among both links in the bundle, and likewise the ingress PE will use hashing to determine which of the two egress PEs to send the traffic to.
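A simple end-to-end check from CE1:
#CE1
ping 10.0.0.2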
This mode of course offers redundancy as well, because if any one link goes down, that PE will remove its type 1 BGP route. For example, let's shut down Gi0/0/0/1 on PE1.
#PE1
int gi0/0/0/1
shut
PE1 withdraws its type 1 per-ESI route, removing itself from the ESI. This can be observed in a BGP debug on PE3.
PE3 now lists only PE2 as the nexthop for the VPWS.
CE1 sees that the link is no longer receiving LACPDUs, so the link is removed from the bundle.
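This can be confirmed on CE1 with the same bundle show command used earlier (output not shown):
#CE1
show bundle Bundle-Ether1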