Lab 7: MPLS + Add Path ECMP
Load lab7.init.cfg
The core is now MPLS-enabled. R7 is the only RR. 6PE is used for global IPv6 traffic, so that it is tunneled across the IPv4 MPLS core.
Configure Add Path on the minimum number of routers so that traffic from R1 to R6 is load-shared across both R4 and R5.
https://www.youtube.com/watch?v=ZWEdIGurTDk&list=PL3Y9eZjZCcsejbVWD3wJIePqe3NiImqxB&index=10
The core is now MPLS-enabled, and R3 is a BGP-free P router. This means that the multipath decision happens at the ingress PE (R2), because it is the one that decides the ultimate egress PE by placing the egress PE’s transport label on the traffic. The ingress PE tunnels the traffic directly to the egress PE, so R2 needs to have both paths in order to do load sharing.
The minimum configuration we need is to enable Add Path between R2 and R7 and to configure R7 to advertise the additional path via R5. On R7, which runs IOS-XR, additional-path advertisement cannot be enabled on a per-neighbor basis; instead, we use an RPL to select a backup route and advertise it as an additional path.
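A minimal sketch of the R7 side, assuming AS 100 and the policy name ADD-PATH (both hypothetical); the 6PE/IPv6 address family would carry the equivalent statements:

```
! R7 (IOS-XR) -- AS number and policy name are assumptions
route-policy ADD-PATH
  ! select the best backup path and flag it for advertisement as an additional path
  set path-selection backup 1 advertise
end-policy
!
router bgp 100
 address-family ipv4 unicast
  ! Add Path advertisement is enabled per address family on IOS-XR, not per neighbor
  additional-paths send
  additional-paths selection route-policy ADD-PATH
```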
On R2, we simply need to activate the Add Path receive capability and enable load sharing for iBGP paths. We do not need to select paths for installation into the CEF table as repair routes, or for advertisement; enabling iBGP multipath automatically allows the additional paths received from R7 to be used.
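A corresponding sketch for R2, assuming IOS/IOS-XE syntax, AS 100, and R7 peering from 7.7.7.7 (all assumptions); the IPv6 address family would need the same treatment for the 6PE prefixes:

```
! R2 (IOS/IOS-XE) -- AS number and neighbor address are assumptions
router bgp 100
 address-family ipv4
  ! negotiate the Add Path receive capability with the RR
  bgp additional-paths receive
  neighbor 7.7.7.7 additional-paths receive
  ! install up to two iBGP paths for load sharing
  maximum-paths ibgp 2
```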
On R7 we can see that the Add Path capability is negotiated only with R2.
We also see that R7 marked the path via R5 as an additional path and is sending it to R2.
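On IOS-XR this can be checked with something like the following (the neighbor address and prefix are assumptions; output omitted here):

```
! capability negotiation with R2
show bgp ipv4 unicast neighbors 2.2.2.2
! the path via R5 should be flagged as a backup/add-path advertisement
show bgp ipv4 unicast 6.6.6.6/32
```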
On R2 we can see that it is selecting the path via R5 for multipath installation.
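And on R2 (again, the prefix is an assumption):

```
! the path via R5 should carry the multipath flag
show ip bgp 6.6.6.6
! both labeled paths should be installed in CEF
show ip cef 6.6.6.6 detail
```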
Overall, there were no new concepts in this lab. It was meant to demonstrate that the ingress PE makes all load sharing decisions when the core is running MPLS, because the P routers simply switch on the top (transport) label, transporting the traffic to the egress PE. They don’t independently make IPv4/IPv6 destination-based forwarding decisions, as R3 had been doing up to this point.
This lab also demonstrates 6PE functionality. The IPv4 prefixes do not need a BGP label, but the IPv6 prefixes do. The IPv4 traffic is automatically tunneled across the MPLS core based on the BGP next-hop value, but the IPv6 traffic needs a service label so that the egress PE can look up the label and determine that this is IPv6 traffic, not IPv4.
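For reference, the 6PE piece itself is just labeled BGP for the global IPv6 table. A minimal sketch of the ingress PE side, assuming IOS/IOS-XE syntax and the same hypothetical RR address as above:

```
! R2 (IOS/IOS-XE) -- 6PE: global IPv6 prefixes exchanged with a label
router bgp 100
 address-family ipv6
  neighbor 7.7.7.7 activate
  ! send-label turns this into 6PE: IPv6 prefixes are advertised with an MPLS label
  neighbor 7.7.7.7 send-label
```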
Update: After going through this again, I found that this isn’t 100% accurate. A service label isn’t technically needed, but the LSP cannot end prematurely, as the core-facing interface on the egress PE may not run IPv6. What we can do is set the label mode to per-vrf on the two egress PEs. Because the IPv6 routes are in the global table, the egress PEs do not allocate a service label; instead, they advertise label value 2, which is IPv6-explicit-null. This keeps the LSP intact until it reaches the egress PE, ensuring that the egress PE’s core-facing (ingress) interface never has to do an IPv6 lookup on an unlabeled packet (which would fail because IPv6 is not enabled on that interface). But this is not technically a service label.
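A sketch of what that could look like on the two egress PEs, assuming they run IOS-XR and that the label mode command is available under the global IPv6 address family in the release being used (both assumptions):

```
! R4/R5 (IOS-XR, assumed) -- for global-table IPv6 routes, per-vrf label mode
! results in advertising IPv6 explicit-null (label 2) instead of per-prefix labels
router bgp 100
 address-family ipv6 unicast
  label mode per-vrf
```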