H-VPLS with Redundancy

Load hvpls.bgp.init.cfg

#IOS-XE (R1-R6, CE1-3)
config replace flash:hvpls.bgp.init.cfg

  • Configure H-VPLS using LDP with BGP-AD for the N-PE mesh.

  • R1, R3, and R5 are the U-PEs, and R2, R4, and R6 are the N-PEs.

    • Each U-PE should peer only with the N-PE one number higher (i.e., R1 to R2, R3 to R4, and R5 to R6).

  • Configure N-PE redundancy on R1 so that if R2 goes down, R1 switches over to R4 as the N-PE.

  • BGP is already pre-configured in the core; a sketch of what this likely includes is shown below.
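
Although the core BGP is pre-configured and not part of the task, it likely resembles the sketch below. This is only illustrative: the AS number (100) is an assumption rather than something taken from the init config, and only R2 is shown; R4 and R6 would mirror it toward the other N-PE loopbacks.

#R2 (illustrative sketch; AS number assumed)
router bgp 100
 neighbor 4.4.4.4 remote-as 100
 neighbor 4.4.4.4 update-source Loopback0
 neighbor 6.6.6.6 remote-as 100
 neighbor 6.6.6.6 update-source Loopback0
 !
 address-family l2vpn vpls
  neighbor 4.4.4.4 activate
  neighbor 4.4.4.4 send-community extended
  neighbor 6.6.6.6 activate
  neighbor 6.6.6.6 send-community extended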

Answer

!
! N-PEs
!
#R2
l2vpn vfi context VPLS1
 vpn id 100
 autodiscovery bgp signaling ldp
!
bridge-domain 100
 member vfi VPLS1
 member 1.1.1.1 100 encap mpls

#R4
l2vpn vfi context VPLS1
 vpn id 100
 autodiscovery bgp signaling ldp
!
bridge-domain 100
 member vfi VPLS1
 member 3.3.3.3 100 encap mpls
 member 1.1.1.1 100 encap mpls

#R6
l2vpn vfi context VPLS1
 vpn id 100
 autodiscovery bgp signaling ldp
!
bridge-domain 100
 member vfi VPLS1
 member 5.5.5.5 100 encap mpls

!
! U-PEs
!
#R1
int Gi4
 service instance 1 eth
  encap default
 exit
!
l2vpn xconnect context N_PE
 member gi4 service-instance 1
 member 2.2.2.2 100 encap mpls group N_PE priority 1
 member 4.4.4.4 100 encap mpls group N_PE priority 2

#R3
int Gi6
 service instance 1 eth
  encap default
 exit
!
l2vpn vfi context VPLS1
 vpn id 100
 member 4.4.4.4 encap mpls
bridge-domain 100
 member gi6 service-instance 1
 member vfi VPLS1

#R5
int Gi6
 service instance 1 eth
  encap default
 exit
!
l2vpn vfi context VPLS1
 vpn id 100
 member 6.6.6.6 encap mpls
bridge-domain 100
 member gi6 service-instance 1
 member vfi VPLS1
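
Before moving on, the BGP autodiscovery and the resulting pseudowires can be sanity-checked on the N-PEs with standard IOS-XE show commands (output omitted here; formats vary by release):

#R2
show bgp l2vpn vpls all
show l2vpn vfi
show mpls l2transport vc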

Explanation

In order to achieve N-PE redundancy we must use pseudowire redundancy. Unfortunately, there does not appear to be a way to combine pseudowire redundancy with a VPLS bridge-domain, so we have no choice but to fall back to a regular VPWS service on R1. This also seems to prevent us from attaching multiple CEs to R1.

In summary, R1 uses basic VPWS pseudowire redundancy, which has nothing to do with VPLS. In fact, no MAC learning happens on R1.

l2vpn xconnect context N_PE
 member gi4 service-instance 1
 member 2.2.2.2 100 encap mpls group N_PE priority 1
 member 4.4.4.4 100 encap mpls group N_PE priority 2

R2 and R4 both configure R1 as a U-PE as normal. Note that the pseudowire toward R1 is placed directly under the bridge-domain rather than inside the VFI: VFI member pseudowires share a split-horizon group, while a directly configured bridge-domain member pseudowire does not, so traffic arriving from the U-PE can be flooded into the full mesh.

#R2
bridge-domain 100
 member vfi VPLS1
 member 1.1.1.1 100 encap mpls

#R4
bridge-domain 100
 member vfi VPLS1
 member 3.3.3.3 100 encap mpls
 member 1.1.1.1 100 encap mpls
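
Since MAC learning happens only on the N-PEs, that is where the bridge-domain MAC table can be inspected; show bridge-domain is a standard IOS-XE command:

#R2
show bridge-domain 100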

Verification

On R2 we can shut down Lo0. This is the PW endpoint, so shutting it will bring down the PW on R1. R1 will then switch over to R4 as the active xconnect endpoint. While this happens we can ping CE2 from CE1 to observe convergence.
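
For example, assuming CE2 is reachable at 10.0.0.2 (a hypothetical address; the init config's actual addressing applies), a long ping can be started from CE1 before triggering the failure:

#CE1
ping 10.0.0.2 repeat 1000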

#R2
int lo0
 shut

Only one ping is lost.

If we bring Lo0 back up, R1 will move the pseudowire back to R2. R2 is then forced to learn MACs again and to re-establish its full mesh of PWs in the N-PE VPLS. This seems to have even worse convergence than the initial failover.

We can try to optimize this a bit by playing with the delay timers:

#R1
l2vpn xconnect context N_PE
 redundancy delay 0 10 group N_PE

! syntax: redundancy delay enable-delay disable-delay group group-name

The first timer is the enable delay: how long to wait before switching over to the standby PW. The second timer is the disable delay, which is the one we want to increase: it controls how long to wait before switching back to the primary PW after it recovers.

We still see one lost ping when failing over, but now we get no packet loss when Lo0 comes back up.
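
To confirm which pseudowire member is currently active at any point, R1's xconnect state can be checked with standard IOS-XE show commands (output omitted):

#R1
show xconnect all
show l2vpn service all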
