H-VPLS with QinQ
Load hvpls.qinq.init.cfg
#IOS-XE (R1-R6, CE1-3, CE12)
config replace flash:hvpls.qinq.init.cfg

R1, R3 and R5 are “access switches” that run QinQ instead of MPLS. Configure H-VPLS using QinQ in the access network.
CE1 and CE2 should be in one VPLS domain, and CE12 and CE3 should be in another. Allow the customers to use any VLAN tag they wish, but make sure to keep customer traffic separated.
Each “access switch” router should peer with the N-PE that is one number higher.
R1 connects to R2 on Gi6
R3 connects to R4 on Gi7
R5 connects to R6 on Gi7
Use BGP for both AD and signaling in the N-PE full mesh. BGP is already preconfigured.
Answer
#R1
int gi4
service instance 1 ethernet
encapsulation default
rewrite ingress tag push dot1q 10 symmetric
exit
int gi5
service instance 1 ethernet
encapsulation default
rewrite ingress tag push dot1q 20 symmetric
exit
int gi6
service instance 10 ethernet
encapsulation dot1q 10
service instance 20 ethernet
encapsulation dot1q 20
exit
!
bridge-domain 10
member gi4 service-instance 1
member gi6 service-instance 10
!
bridge-domain 20
member gi5 service-instance 1
member gi6 service-instance 20
#R3
int gi6
service instance 1 ethernet
encapsulation default
rewrite ingress tag push dot1q 10 symmetric
exit
int gi7
service instance 10 ethernet
encapsulation dot1q 10
exit
!
bridge-domain 10
member gi6 service-instance 1
member gi7 service-instance 10
#R5
int gi6
service instance 1 ethernet
encapsulation default
rewrite ingress tag push dot1q 20 symmetric
exit
int gi7
service instance 20 ethernet
encapsulation dot1q 20
exit
!
bridge-domain 20
member gi6 service-instance 1
member gi7 service-instance 20
##
## N-PEs
##
#R2
l2vpn vfi context VPLS10
vpn id 10
autodiscovery bgp signaling bgp
ve id 2
!
l2vpn vfi context VPLS20
vpn id 20
autodiscovery bgp signaling bgp
ve id 2
!
int gi6
service instance 10 ethernet
encapsulation dot1q 10
service instance 20 ethernet
encapsulation dot1q 20
exit
!
bridge-domain 10
member vfi VPLS10
member gi6 service-instance 10
!
bridge-domain 20
member vfi VPLS20
member gi6 service-instance 20
#R4
l2vpn vfi context VPLS10
vpn id 10
autodiscovery bgp signaling bgp
ve id 4
!
int gi7
service instance 10 ethernet
encapsulation dot1q 10
exit
!
bridge-domain 10
member vfi VPLS10
member gi7 service-instance 10
#R6
l2vpn vfi context VPLS20
vpn id 20
autodiscovery bgp signaling bgp
ve id 6
!
int gi7
service instance 20 ethernet
encapsulation dot1q 20
exit
!
bridge-domain 20
member vfi VPLS20
member gi7 service-instance 20
Explanation
When using QinQ in the access network with H-VPLS, the access network does pure L2 bridging. The “access switches” push a tag on top of the customer traffic in order to keep customers separated at the NNI (the U-PE to N-PE connection). This allows for up to 4094 customers per U-PE switch.
We can implement QinQ on the CSR1000v by using EFPs. The U-PE pushes a tag upon ingress, one per customer, and pops it upon egress (symmetric). At the NNI, the traffic is classified based on the outer tag. Both the UNI and NNI belong to the same bridge-domain.
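As an annotated recap, this is the first customer’s path through R1 (the same lines as in the answer above, with comments added):

#R1 (annotated recap)
! UNI: match all customer traffic, push outer tag 10 on ingress, pop it on egress (symmetric)
int gi4
service instance 1 ethernet
encapsulation default
rewrite ingress tag push dot1q 10 symmetric
! NNI: classify on the outer tag only
int gi6
service instance 10 ethernet
encapsulation dot1q 10
! bridge the UNI and NNI EFPs together
bridge-domain 10
member gi4 service-instance 1
member gi6 service-instance 10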
The N-PE receives the traffic double-tagged at the NNI. The N-PEs form a basic VPLS domain among themselves, one per customer. Because the N-PEs are not forming pseudowires with the U-PEs (which would require split horizon to be disabled on those spoke pseudowires), we can use BGP for both autodiscovery and signaling in the N-PE VPLS domains.
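The task notes that BGP is already preconfigured. For reference, on R2 it would look roughly like the sketch below; the AS number (65000) and the neighbor loopback addresses are assumptions for illustration, not taken from the lab files. The l2vpn vpls address-family carries both the autodiscovery and the signaling (VE ID and label block) information, and extended communities are required for the route targets:

#R2 (sketch of the assumed BGP preconfig)
router bgp 65000
neighbor 4.4.4.4 remote-as 65000
neighbor 4.4.4.4 update-source Loopback0
neighbor 6.6.6.6 remote-as 65000
neighbor 6.6.6.6 update-source Loopback0
address-family l2vpn vpls
neighbor 4.4.4.4 activate
neighbor 4.4.4.4 send-community extended
neighbor 6.6.6.6 activate
neighbor 6.6.6.6 send-community extended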
Verification
On the U-PEs we simply have bridged traffic, no pseudowires. We should see local MACs on the UNIs and remote MACs on the NNIs:
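The bridge table can be checked per bridge-domain; on R1, for example, MACs learned via Gi4/Gi5 are local customers, while MACs learned via Gi6 live across the NNI:

#R1
show bridge-domain 10
show bridge-domain 20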

At the N-PEs, we should see a normal VPLS setup. The AC on the N-PEs is really an NNI instead of a UNI, but the operation is basically the same as we’ve seen in previous labs. We can see the VFI neighbors and the bridge table.
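On R2, for example, the VFI neighbors, the BGP l2vpn vpls table, and the bridge table can be checked with:

#R2
show l2vpn vfi name VPLS10
show bgp l2vpn vpls all
show bridge-domain 10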


On the CEs we can verify that we aren’t able to reach CEs in the other VPLS domain. All CEs use overlapping dot1q tags and IP subnets, which allows us to test this. Below, CE1 only gets a reply from CE2.
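Since the exact CE addressing comes from the initial config and is not shown here, one simple way to run this test is an all-hosts broadcast ping, which only CEs in the same bridged segment can answer:

#CE1
ping 255.255.255.255

Only CE2 should reply; CE3 and CE12 use the same dot1q tag and subnet but sit in the other VPLS domain.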

Below, CE3 only gets a reply from CE12:
