Remote LFA
Load lfa.init.cfg
In the LFA lab we saw that we only achieved 50% coverage using LFA. Using remote LFA, achieve 100% coverage for all prefixes on R3 without breaking end-to-end LSPs.
In the previous LFA lab, we achieved only 50% coverage. This is because traffic for the loopbacks of R3's directly connected neighbors cannot simply be IP routed via the other directly connected neighbor: that neighbor's shortest path to the destination runs back through R3, so it would loop the packet straight back.
A solution to this is to use remote LFA. With remote LFA, we use LDP to tunnel traffic beyond our nexthop router. This prevents the nexthop router from making an IP forwarding decision and looping traffic back to the PLR. Instead, the nexthop makes an MPLS forwarding decision, and forwards traffic to the node specified in the top label. That node is the PQ node, which is explained later.
To enable remote LFA we add the command fast-reroute per-prefix remote-lfa tunnel mpls-ldp under the ISIS interface. Note that fast-reroute per-prefix must also be configured.
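As a rough sketch, the configuration on R3 would look something like the following, repeated for each interface that needs protection (the ISIS instance name "1" is an assumption for this topology):

router isis 1
 interface GigabitEthernet0/0/0/4
  address-family ipv4 unicast
   ! Enable per-prefix LFA on this interface
   fast-reroute per-prefix
   ! Extend it with remote LFA, tunnelling to the PQ node over LDP
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp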
Additionally, since this uses LDP, we must enable LDP on all routers.
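A minimal LDP stanza on each router might look like this (the interface names are placeholders for whichever core-facing links each router actually has):

mpls ldp
 interface GigabitEthernet0/0/0/4
 interface GigabitEthernet0/0/0/5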
To understand how remote LFAs are calculated, we need to explore the P and Q space. The P space is the set of nodes the PLR can reach without using the protected link. The Q space is the set of nodes that can reach the destination without using the protected link. If a node is in the P space, the PLR can tunnel a packet to it without traversing the protected link. If that node is also in the Q space, the PLR can release the packet there without risk of it being looped back. A node that is in both spaces is therefore a safe release point, and is called a PQ node.
In our topology, using R3, the P space for 4.4.4.1/32 (reachable via Gi0/0/0/4) is R5 and R6. R3 can reach these nodes without traversing Gi0/0/0/4. The Q space is R10 and R6. These nodes can reach 4.4.4.1/32 without traversing R3’s Gi0/0/0/4 link.
Therefore, for 4.4.4.1/32, if we tunnel the packet to R6 using R5’s LDP label for R6, we can achieve fast reroute.
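Assuming the backup has been computed, it can be inspected on R3 with standard IOS XR show commands such as the following (the exact output will vary per platform and release):

! RIB view of the primary path and the remote-LFA backup path:
show route 4.4.4.1/32 detail
! FIB view, including the label stack imposed on the backup path:
show cef 4.4.4.1/32 detail
! LDP label bindings (local and learned) for the prefix:
show mpls ldp bindings 4.4.4.1/32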
A common pitfall is not accepting targeted LDP sessions on the PQ nodes. Currently, R6 is not accepting tLDP sessions. Despite this, R3 can still install the LFA backup path via R6.
However, a closer look at the FIB shows that only one label is used for this backup path:
Label 24000 is R5’s label for R6:
This would end up breaking an end-to-end LSP between R3 and R4. VPN traffic would have <Transport to R6>/<VPN label>, and R5 would pop the top label, exposing the VPN label too early.
To solve this, R3 and R6 must form a tLDP session. R3 needs to push <Transport to R6>/<Transport to R4>/<VPN>, which means it must learn R6's LDP label for R4, and it can only learn that binding over a targeted session.
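Roughly, the difference between the broken and the working backup looks like this (label values 24000 and 24001 are the ones seen in this lab; whether R5 pops or swaps the outer label depends on whether it is the penultimate hop toward R6):

Without tLDP to R6 (broken LSP):
  R3 pushes:  [24000 = R5's label for R6] [VPN label]
  R5 pops the outer label, so R6 receives the bare VPN label and cannot
  forward it toward R4.

With tLDP to R6 (LSP preserved):
  R3 pushes:  [24000 = R5's label for R6] [24001 = R6's label for 4.4.4.1/32] [VPN label]
  R5 pops the outer label; R6 forwards on 24001 toward R4, leaving the
  VPN label intact for R4.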
R3 is in fact already trying to initiate the session, simply because the remote LFA feature is enabled and R3 has identified R6 as a PQ node. R3 still allows the LFA to be used while the tLDP session is down, because remote LFA might only be protecting plain IP traffic, in which case an end-to-end LSP is not needed.
If we simply enable targeted LDP acceptance on R6, we can see that R3 and R6 form a tLDP session. R3 learns R6's label binding for 4.4.4.1/32 and uses it as a second label in the stack for the backup path.
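On R6, accepting targeted hellos is a one-line change (it can optionally be restricted to specific sources with an ACL); something like:

mpls ldp
 ! Accept targeted hellos from remote LSRs (such as R3) so the tLDP session can form
 discovery targeted-hello accept

Once the session is up, show mpls ldp neighbor on R3 should list R6 as a targeted neighbor.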
24001 is R6’s local label for 4.4.4.1/32:
Now the end-to-end LSP is retained under fast reroute conditions.
Also notice that we’ve achieved 100% coverage. In an upcoming lab, we will see a situation that remote LFA cannot protect.