CCIE SPv5.1 Labs

SR-TE BGP EPE for Unified MPLS



Load multi-domain.init.cfg. You may need to load bootflash:blank.cfg and commit replace first.

configure
load bootflash:multi-domain.init.cfg
commit replace
y

There are two separate ISIS domains. eBGP is used between R3, R4, R5 and R6. Using BGP EPE with SR PCE, achieve an end-to-end LSP between R1 and R7.

Use R10 as the PCE. Do not distribute link state on R10 under the IGP processes.

Answer

#R1
segment-routing
 traffic-eng
  pcc
   source-address ipv4 1.1.1.1
   pce address ipv4 10.10.10.1
!
segment-routing
 traffic-eng
  policy POL_1_AS
   candidate-paths
    preference 100
     dynamic
      pcep

#R7
segment-routing
 traffic-eng
  pcc
   source-address ipv4 7.7.7.1
   pce address ipv4 10.10.10.1
!
segment-routing
 traffic-eng
  policy POL_1_AS
   candidate-paths
    preference 100
     dynamic
      pcep

#R3
router isis 1
 distribute link-state instance-id 101
!
router bgp 65001
 add link-state link-state
 neighbor 10.10.10.1
  remote-as 65010
  update-so lo1
  ebgp-multihop
  add link-state link-state
 neighbor 10.3.5.5
  egress-engineering
 neighbor 10.3.6.6
  egress-engineering
 !
!
mpls static
 int gi0/0/0/5
 int gi0/0/0/6

#R4
router isis 1
 distribute link-state instance-id 101
!
router bgp 65001
 add link-state link-state
 neighbor 10.10.10.1
  remote-as 65010
  update-so lo1
  ebgp-multihop
  add link-state link-state
 neighbor 10.4.5.5
  egress-engineering
 neighbor 10.4.6.6
  egress-engineering
 !
!
mpls static
 int gi0/0/0/5
 int gi0/0/0/6

#R5
router isis 2
 distribute link-state instance-id 102
!
router bgp 65002
 add link-state link-state
 neighbor 10.10.10.1
  remote-as 65010
  update-so lo1
  ebgp-multihop
  add link-state link-state
 neighbor 10.3.5.3
  egress-engineering
 neighbor 10.4.5.4
  egress-engineering
 !
!
mpls static
 int gi0/0/0/3
 int gi0/0/0/4

#R6
router isis 2
 distribute link-state instance-id 102
!
router bgp 65002
 add link-state link-state
 neighbor 10.10.10.1
  remote-as 65010
  update-so lo1
  ebgp-multihop
  add link-state link-state
 neighbor 10.3.6.3
  egress-engineering
 neighbor 10.4.6.4
  egress-engineering
 !
!
mpls static
 int gi0/0/0/3
 int gi0/0/0/4

#R10
router bgp 65010
 bgp unsafe-ebgp-policy
 address-family link-state link-state
 !
 neighbor 3.3.3.1
  remote-as 65001
  ebgp-multihop 255
  update-source Loopback1
  address-family link-state link-state
 !
 neighbor 4.4.4.1
  remote-as 65001
  ebgp-multihop 255
  update-source Loopback1
  address-family link-state link-state
 !
 neighbor 5.5.5.1
  remote-as 65002
  ebgp-multihop 255
  update-source Loopback1
  address-family link-state link-state
 !
 neighbor 6.6.6.1
  remote-as 65002
  ebgp-multihop 255
  update-source Loopback1
  address-family link-state link-state
 !
!
pce
 address ipv4 10.10.10.1

Explanation

Before making any changes, IPv4 reachability exists between the PEs, but traffic does not follow an end-to-end LSP. This is because each domain advertises its PEs’ loopbacks via BGP and redistributes them into the IGP.

BGP EPE allows for an elegant method to provide inter-domain “unified MPLS” style end-to-end LSPs. As in the previous lab, we simply need to enable egress-engineering under each neighbor:

#R3
router bgp 65001
 neighbor 10.3.5.5
  egress-engineering
 neighbor 10.3.6.6
  egress-engineering

However, the egress-engineering command does not enable MPLS on the interface towards the eBGP peer. This is intentional: you might be using BGP EPE only to engineer the egress node’s forwarding decision, without actually running MPLS over the link to the eBGP peer. But when we use BGP EPE for inter-domain MPLS, the link must forward labeled traffic. With BGP IPv4/LU, MPLS is enabled automatically, as the AFI/SAFI itself requires forwarding of labeled traffic. For BGP EPE we can simply enable MPLS on the interface using mpls static. Another option is enabling the interface under mpls traffic-eng. Note that using “router bgp mpls activate” does not work, likely because we are not running any labeled AFIs with the eBGP peer.

#R3
mpls static
 int gi0/0/0/5
 int gi0/0/0/6
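
The mpls traffic-eng alternative mentioned above would look something like this (a sketch; interface names follow the R3 example):

#R3 (alternative to mpls static)
mpls traffic-eng
 interface gi0/0/0/5
 interface gi0/0/0/6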

Next, we need to use a PCE to calculate the inter-domain paths. The PCE must receive all topology information: both ISIS domains, and all BGP EPE information. This is done using BGP-LS. The task instructs us not to use distribute link-state on the PCE itself. Instead, we can simply do this on the BGP edge routers. The IGP topology information will automatically be injected into BGP-LS locally on the router.

#R3
router isis 1
 distribute link-state instance-id 101
!
router bgp 65001
 add link-state link-state
 !
 neighbor 10.10.10.1
  remote-as 65010
  update-so lo1
  ebgp-multihop
  add link-state link-state

#R10
router bgp 65010
 address-family link-state link-state
 !
 neighbor 3.3.3.1
  remote-as 65001
  ebgp-multihop 255
  update-source Loopback1
  address-family link-state link-state
 !
!
pce
 address ipv4 10.10.10.1

BGP-LS is another AFI/SAFI. The AFI is link-state and the SAFI is link-state, so you configure address-family link-state link-state. This address-family carries IGP/link-state topology information in the form of BGP updates.
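
As a quick sanity check, the contents of this table can be inspected on any speaker that carries the AFI (a sketch; exact output varies by release):

show bgp link-state link-state summary
show bgp link-state link-state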

BGP-LS has three NLRI types:

  • Type 1 - Node [V]

    • Contains the hostname, area ID, RID, SRLB, SRGB, algos supported, etc. (information about the node)

    • Code [V] is for vertex

      • In graph theory, a vertex is a node in the graph

  • Type 2 - Link [E]

    • Contains the local/remote RID, IGP metric, admin group, max BW, TE metric, Adj SID etc. (information about the link)

    • Code [E] is for edge

      • In graph theory, an edge connects exactly two vertices

  • Type 3 - IPv4 Prefix [T]

    • Contains the prefix, metric, flags, and SID index

    • Code [T] is likely for Topology Prefix

The IGP topology is transcoded into a common BGP-LS format. The elegant aspect of this solution is that both OSPF and ISIS data are identically encoded into BGP-LS NLRI. You will see next that the NLRI acts as a “key” for an entry in the TED. The NLRI contains the minimum information that uniquely identifies an entry. For example, a link is identified by the protocol and topology ID, the node on either end of the link, and the link identifying information, such as ifIndex or IP addresses, in case multiple links exist between the two nodes. All attributes of the entry, such as IGP/TE metric, link affinity, SID information, etc., are carried in a BGP-LS attribute, not in the NLRI.

Let’s examine a type 1 NLRI, for example, for R1 (0000.0000.0001). In the NLRI you first see [L2] for ISIS L2 and [I0x65] for instance ID 101 (0x65 = 101). The protocol and instance ID together make an entry unique within a given IGP, which is why a unique instance ID is necessary per protocol. However, since the protocol itself is part of the entry, you can technically assign the same instance ID to two separate IGPs on the same node, e.g. OSPF 1 and ISIS 1. The XR parser allows you to do this, but will give you a commit error if you try to assign the same instance ID to two separate instances of the same protocol (i.e. ISIS 1 and ISIS 2). (Note that in our specific topology, we could use the same instance ID for both IGP domains without any real issues, because each node belongs to only one of the two IGPs, with the exception of R9 and R10; and R9 and R10 do not have to be in the LSP path.)

Next in the NLRI we see a BGP-LS ID of 0 (XR does not use BGP-LS ID), and the SysID of the node. Below in the LS attribute, you can see MSD (max SID depth), node name, area, TE RID, etc. All of this information is translated from R1’s LSP into a BGP-LS Update. As the IGP topology changes, BGP-LS is automatically updated so that the BGP-LS feed always accurately represents the current IGP topology.

Next we’ll look at a type 2 NLRI, for example the link between R1 and R3. The BGP NLRI is extremely complex, but makes more sense when you notice that it is broken down into three parts: the source node identifier (which contains the exact information from the node’s type 1 NLRI), the remote node identifier, and a link identifier. The fact that each node’s type 1 NLRI is present in this type 2 NLRI allows the PCE to place the link between these two nodes in its condensed topology.

In the BGP-LS attribute, you see all the attributes of the link, such as TE metric and Adj SID.

Finally we’ll look at a type 3 NLRI, for example R1’s Lo1 prefix. This shows the prefix SID (index 1), flags, and metric. Notice that this NLRI again contains the node’s type 1 NLRI, allowing the prefix to be linked to the node object. The prefix-SID flags value of 0x40 means the N-flag is set, and the 0 in 40/0 indicates algo 0. Likewise, the “extended IGP flags 0x20” indicates that the N-flag is set.

For all of these NLRI, you can use the detail keyword to get a breakdown of the NLRI fields; this proves especially useful for type 2 NLRI.

Additionally, the BGP-EPE information is automatically encoded in BGP-LS. For example, we can see the link between R3 and R5. All of the complexity is within the NLRI “prefix” itself: the ASNs, instance number (BGP uses instance 0 to represent the “global” instance), and link IPs. In the Link State attribute, we simply see the peer SID. A metric isn’t advertised for EPE links.

Note that each EPE node must use a BGP RID equal to its IGP TE-RID. This allows the PCE to collapse all topology information into a single, global topology. If the BGP RID and TE-RID are different, then the PCE will not know that the BGP node connects to that IGP.
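
As a hedged sketch of what this alignment might look like on R3 (assuming 3.3.3.1 is R3’s Loopback1, consistent with R10’s neighbor statement):

#R3
router bgp 65001
 bgp router-id 3.3.3.1
!
router isis 1
 address-family ipv4 unicast
  mpls traffic-eng router-id Loopback1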

The PCE now has a fully populated SR-TED. The next step is to configure the PEs to request PCE path computation for the existing SR-TE policy. (The init file had pre-existing SR-TE policies on R1 and R7.)

#R1
segment-routing
 traffic-eng
  pcc
   source-address ipv4 1.1.1.1
   pce address ipv4 10.10.10.1
!
segment-routing
 traffic-eng
  policy POL_1_AS
   candidate-paths
    preference 100
     dynamic
      pcep

#R7
segment-routing
 traffic-eng
  pcc
   source-address ipv4 7.7.7.1
   pce address ipv4 10.10.10.1
!
segment-routing
 traffic-eng
  policy POL_1_AS
   candidate-paths
    preference 100
     dynamic
      pcep

The PCE successfully computes a path between R1 and R7.

The accumulated metric of the computed path is 20, which at first glance seems too low: the cost from R1 to R3 is 10, the cost from R3 to R5 is 0, and the cost from R5 to R7 is 10. BGP EPE links are always considered to have a metric of 0.

We now have a working end-to-end LSP that is inter-domain.

The beauty of this is that the inter-domain end-to-end LSP is also fully TE-capable! This means we can use the PCE to calculate paths that minimize latency, avoid link colors, etc. This is only possible in a very limited sense with RSVP-TE. With RSVP-TE, we can use a sort of “hack” on an IOS-XE ASBR, running the inter-AS link as a passive TE interface. The headend then defines a loose ERO, which requests each ASBR to perform part of the path calculation. You cannot use end-to-end constraints or minimize a particular metric, and each section of the LSP is only optimized within its own domain (each ASBR calculates the best TE-metric path towards the next reachable hop in the ERO, not the complete end-to-end best-metric path). All of these downsides are solved by using SR-TE with a PCE.
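
For reference, the loose-ERO approach described above is expressed on an IOS-XE headend roughly like this (a sketch; the tunnel number and hop addresses are illustrative):

ip explicit-path name INTER_DOMAIN
 next-address loose 3.3.3.1
 next-address loose 5.5.5.1
 next-address loose 7.7.7.1
!
interface Tunnel1
 tunnel mpls traffic-eng path-option 10 explicit name INTER_DOMAIN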

Additionally, notice the beauty of using BGP EPE over BGP-LU. We did not have to implement complex next-hop-self behavior or add state by signaling labels with the PE prefixes; we can simply use basic BGP IPv4/unicast. The ASBRs allocate a BGP peer SID for each eBGP neighbor, which keeps the LSP intact. The only added complexity is that we must manually enable MPLS on these interfaces.

Topology Changes

Any IGP topology changes are immediately signaled to BGP-LS. For example, let’s bring down the link between R1 and R4.

#R1
int gi0/0/0/4
 shut

Using the command show pce verification, we can see that a topology update event occurred.

The BGP NLRI for the R1-R4 link has been removed from BGP-LS, and this is reflected immediately in the SR-TED, which now shows R1 with only a single link. Note that show pce ipv4 topology is used here for variety; it is fed by the SR-TED, so show segment-routing traffic-eng ipv4 topology displays the same information.

Let’s bring the R1-R4 link back up and examine what happens when the R3-R5 link is brought down:

#R1
int gi0/0/0/4
 no shut

#R3
int gi0/0/0/5
 shut

#R5
int gi0/0/0/3
 shut

We get a crazy 5-deep label-stack path.

This is because all of the BGP EPE “links” have a metric of 0. So this figure-8 path is preferred over simply going from R6 to R7.

One way to fix this is to use hopcount as the metric for the SR-TE policy. The EPE links have an IGP and TE metric of 0, but each one still counts as a hop, so the shorter path wins.

#R1
segment-routing
 traffic-eng
  policy POL_1_AS
   candidate-paths
    preference 100
     dynamic
      metric
       type hopcount

A note on BGP-LS troubleshooting

BGP-LS uses the standard BGP mechanisms for bestpath selection and path validation. Among multiple identical BGP-LS NLRI received from different peers, only a single bestpath is chosen and imported into the SR-TED. You can use standard BGP path attributes to influence the bestpath decision if needed, although you shouldn’t have to, since the copies should carry duplicate information.

Additionally, you can run into an issue where the BGP-LS NLRI next hop is not reachable. In that case, the NLRI is not valid and will not be imported into the SR-TED.

Comparison to BGP-LU with SR

An alternative to BGP-EPE for ipv4/uni on the ASBR links is to use BGP-LU and set the label-index. This will propagate the loopbacks with their correct prefix SID index between the IGP domains. While this works to produce an end-to-end LSP, this LSP cannot be traffic engineered. The ability to use TE is the advantage of using BGP ipv4/uni with EPE instead. You produce a per-eBGP peer SID at each ASBR, allowing the PCE to create traffic engineered policies that meet given constraints/metric objectives. With simple BGP-LU and the SR label-index attribute, all you get is a plain end-to-end LSP that is inter-domain.