CCIE SPv5.1 Labs

SR Inter-IGP using PCE



Load sr.inter.igp.pce.init.cfg

configure
load bootflash:sr.inter.igp.pce.init.cfg
commit replace
y

Nodes R1-R4 are running ISIS, and nodes R5-R8 are running OSPF. Node R3 is running both ISIS and OSPF.

Achieve an end-to-end LSP between R1 and R7 without redistributing routes into each IGP.

R10 is a PCE, not shown in the diagram. It belongs to both IGPs.

On R1 and R7, use an ODN policy with color 10 that simply uses the IGP metric and requests PCE computation.

Answer

#R1, R2, R3, R4
router isis 1
 add ipv4 uni
  mpls traffic-eng router-id lo1
  mpls traffic-eng level-2
!
mpls traffic-eng

#R3, R5, R6, R7, R8
router ospf 1
 mpls traffic-eng router-id lo1
 area 0
  mpls traffic-eng
root
!
mpls traffic-eng

#R1
router static add ipv4 uni 7.7.7.1/32 null0
!
segment-routing
 traffic-eng
  on-demand color 10
   dynamic
    pcep
    !
    metric
     type igp
    !
   !
  !
  pcc
   source-address ipv4 1.1.1.1
   pce address ipv4 10.10.10.1

#R7
router static add ipv4 uni 1.1.1.1/32 null0
!
segment-routing
 traffic-eng
  on-demand color 10
   dynamic
    pcep
    !
    metric
     type igp
    !
   !
  !
  pcc
   source-address ipv4 7.7.7.1
   pce address ipv4 10.10.10.1

#R10
pce address ipv4 10.10.10.1
!
router isis 1
 distribute link-state instance-id 100
!
router ospf 1
 distribute link-state instance-id 200

Explanation

A PCE (Path Computation Element) makes it possible to compute policies that require full network topology visibility, such as inter-domain and path-disjoint policies. The headend acts as a PCC (Path Computation Client) and requests the path calculation from the PCE. The PCE is active and stateful, so it maintains the delegated policy, updating it as needed when the IGP topology changes.

First, we must configure a TE RID on all nodes in the network. This is missing from the init file. Without this, the PCE won’t be able to calculate a path, as none of the nodes can be “placed” in the topology graph without a TE RID.

#R1, R2, R3, R4
router isis 1
 add ipv4 uni
  mpls traffic-eng router-id lo1
  mpls traffic-eng level-2
!
mpls traffic-eng

#R3, R5, R6, R7, R8
router ospf 1
 mpls traffic-eng router-id lo1
 area 0
  mpls traffic-eng
root
!
mpls traffic-eng

Next, we enable PCE server functionality on R10. This is simply done using the following command. The address is the local address on which the PCE listens for TCP SYNs on the PCEP port (4189).

#R10
pce
 address ipv4 10.10.10.1

Now R10 needs to populate its SR-TED with both IGP topologies. R10 already belongs to both IGPs, so we simply use the distribute link-state command under each IGP. We must make sure to use a separate instance ID for each IGP so that they are kept separate in R10’s consolidated SR-TED. As a reminder, when no instance-id is specified, the instance ID is 0. This is not a problem when the entire network is only a single IGP instance.

#R10
router isis 1
 distribute link-state instance-id 100
!
router ospf 1
 distribute link-state instance-id 200

Each PCC simply configures the PCE under the segment-routing traffic-eng configuration:

#R1
segment-routing
 traffic-eng
  pcc
   source-address ipv4 1.1.1.1
   pce address ipv4 10.10.10.1

#R7
segment-routing
 traffic-eng
  pcc
   source-address ipv4 7.7.7.1
   pce address ipv4 10.10.10.1

We should now see that the PCEP session is established between the PCCs and the PCE. On the PCCs we can use the following command:
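Assuming standard IOS-XR syntax, that command is:

#R1, R7
show segment-routing traffic-eng pcc ipv4 peer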

Above, the PCE is stateful, which appears to always be the case. (There does not appear to be a way to use XR as a stateless PCE.) The PCE statefully keeps track of policies, which allows it to update the PCC’s paths and instantiate new paths on the PCC. Instantiation means the PCE configures the policy locally and then pushes the policy to the client. The default precedence is 255, which is used for PCE redundancy. The lowest precedence number is the best PCE.

On the PCE we can verify PCC sessions using a similar command:

The detail keyword on either the PCC or PCE provides some details of the PCEP session statistics:
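Assuming standard IOS-XR syntax, the PCE-side equivalents are:

#R10
show pce ipv4 peer
show pce ipv4 peer detail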

The PCE should have an SR-TED consisting of all nodes in both IGPs. For example, verify that both 1.1.1.1 and 7.7.7.1 are present in R10’s SR-TED:
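Assuming standard IOS-XR syntax, R10’s SR-TED can be inspected with:

#R10
show pce ipv4 topology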

We can take this a step further and verify that R10, acting as PCE, can calculate an SR-TE policy between R1 and R7 using the following command:

We can now configure R1 and R7 to have an ODN policy which uses the PCE. The keyword pcep means to use PCEP (Path Computation Element Communication Protocol) for computation as opposed to headend (local) computation. Note that this is not strictly necessary for ODN policies, as ODN policies use two candidate paths by default: pref 200 for local computation, and pref 100 for PCE computation. The PCE computation then takes effect when local computation fails.

#R1, R7
segment-routing
 traffic-eng
  on-demand color 10
   dynamic
    pcep
    !
    metric
     type igp

We’ll color CE routes on R1 and R7 so that each PE will request path computation for the ODN policy from the PCE.

#R1, R7
extcommunity-set opaque COLOR10
 10
end-set
!
route-policy SET_COLOR
 set extcommunity color COLOR10
end-policy

#R1
router bgp 100
 vrf BLUE
  neighbor 192.168.101.101
   address-family ipv4 unicast
    route-policy SET_COLOR in

#R7
router bgp 100
 vrf BLUE
  neighbor 192.168.107.107
   address-family ipv4 unicast
    route-policy SET_COLOR in

We should see that the policy is up on each PE. Only R1 is shown for brevity. Note that the PCE included the SID descriptor (e.g. 7.7.7.1) along with the label (e.g. 16007). This is how R1 knows what each label resolves to, even though R1 is not itself resolving each label.
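Assuming standard IOS-XR syntax, the policy state on each PE can be checked with:

#R1, R7
show segment-routing traffic-eng policy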

The workflow for a PCE policy works as follows:

  1. The PCC sends a PCEP Report that contains the name, constraints, and optimization metric for the SR-TE policy, but with an empty SID list. The delegate flag is set in the LSP object, indicating that it wants to delegate this policy to the PCE.

  2. The PCE, noticing the empty SID list, interprets this as a request for path computation. The PCE computes the path and signals it to the PCC in a PCEP Update.

  3. The PCC installs the policy in its FIB and then sends a PCEP Report, echoing back the SID list and details of the policy. This is used as an acknowledgment mechanism so the PCE knows the PCC was able to install the policy. The PCEP Report allows the PCE to track the policy in its SR-TED.

If the PCE cannot calculate the policy, it sends back an empty PCEP Update with the delegate flag cleared.

If at any time the topology changes, the PCE recalculates the policy and if anything has changed, signals the changes to the PCC in a PCEP Update. The PCC replies with a PCEP Report.

So as you can see, the basic PCEP functionality is enabled with PCEP Reports (sent from PCCs) and PCEP Updates (sent by the PCE).

We can see the number of SR-TE policies (LSPs) the PCE is tracking for each peer using the following command:

Using the detail keyword, we can get details of a given LSP:
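Assuming standard IOS-XR PCE show commands, these are:

#R10
show pce lsp
show pce lsp detail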

Notice above that there is a Reported path section and a Computed path section. The computed path is the path that the PCE calculated and signaled to the PCC. The reported path is the path that was seen in the PCEP Report as an ACK from the PCC. So the reported path should be equal to the computed path.

You can also see above that any aspect of an SR-TE policy (such as metric margin, BSID value, metric of the path, name of the path) can be signaled via PCEP.

These LSPs are actually part of the SR-TED itself. The SR-TED is not only fed via the local IGP and BGP-LS, but also via PCEP. It is important for the PCE to track LSPs in its SR-TED so that it can enable features such as disjoint LSPs. The calculation of a new LSP might be based on the state of other existing LSPs.

As a note, we can now remove “distribute link-state” from all other nodes besides R10. R10 is the only node which requires a populated SR-TED. However, you can optionally still allow headends to compute intra-domain paths and only ask the PCE for inter-domain path calculation. In that case, you would leave “distribute link-state” configured on every node.

Finally, let’s confirm connectivity between CE101 and CE107. Currently we have an issue: the PEs are not selecting each other’s VPNv4 routes as valid, due to RIB failure:

Interestingly though, the color still triggered the ODN policy to come up, and a BSID was allocated. However, recursion on the BSID cannot overcome the inaccessible next hop. To solve this, we can use a null0 static route on each PE. BGP then no longer flags the next hop as inaccessible, since a route for it does exist in the RIB (albeit via null0), and BGP continues on, recursing the route via the BSID.

(Note, in IOS-XR 7.x, we can instead use the BGP knobs bgp bestpath igp-metric sr-policy and nexthop validation color-extcomm sr-policy.)

#R1
router static add ipv4 uni 7.7.7.1/32 null0

#R7
router static add ipv4 uni 1.1.1.1/32 null0

The VPNv4 route is now available for use:
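Assuming standard IOS-XR syntax, this can be confirmed on each PE with:

#R1, R7
show bgp vrf BLUE ipv4 unicast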

The CEs have reachability to each other via a PCE-computed end-to-end TE LSP!

Notes on PCE Northbound interface

A PCE’s northbound interface allows applications to program the PCE. The PCE’s southbound interface interacts with PCCs, generally over PCEP, to program their policies.

A PCE exposes REST APIs such as the following:

  • http://<sr-pce-ip-addr>:8080/topo/subscribe/json

    • Get topology info

  • http://<sr-pce-ip-addr>:8080/lsp/subscribe/json

    • Get LSP info

I’m not clear whether these are available on XRv as PCE.

Notes about Computation Design

There are three types of computation designs:

  • Centralized

    • Only an SDN controller programs policies. This is a “vertical” model. The PCE is responsible for pushing all policies to all routers. None of the routers do any local path computation.

  • Distributed

    • All routers calculate paths themselves. No PCE/controller is used.

  • Hybrid

    • Routers calculate paths when they can (intra-area and non-disjoint), but use a PCE to calculate when necessary (inter-domain and disjointness).

    • The routers build the policies themselves locally, whereas in the centralized model the PCE/controller instantiates the policies.

    • This is called a “horizontal” model and is generally what is recommended.

Notes about SR-TED in general

The SR-TED on R10 is fed via each IGP. A different instance ID must be used to keep the IGPs separated within the SR-TED. The SR-TED consolidates all information learned via the local IGPs and BGP-LS (which we will see next) into a single graph, so the unique instance ID keeps the IGPs separate once consolidated.

The distribute link-state command also has an optional throttle parameter. The default throttle is 50ms for ISIS and 5ms for OSPF. This is how long the router will wait before distributing an IGP topology change into SR-TED or BGP-LS. (Similar to throttling SPF runs, for example).
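For example, a sketch assuming the throttle keyword follows the instance-id (100 is a hypothetical value in milliseconds):

#R10
router isis 1
 distribute link-state instance-id 100 throttle 100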

All routers belonging to the same IGP must distribute using the same instance ID, even if they are in different IGP levels/areas. The instance ID is for completely separate IGP instances, not separate IGP areas.

The PCE treats all metrics and link attributes from different IGP instances as if they were global, since they are all consolidated into a single graph. This could create an issue if different operators manage the different IGPs and link metrics, link affinities, etc. are not comparable between the IGPs.

Nodes that connect to different IGPs must have the same TE RID so that the PCE can identify the node as belonging to both IGPs. However, a different RID should be used in each IGP to prevent the possibility of a duplicate RID if routers from each separate IGP inadvertently form an adjacency and the LSPs/LSAs are leaked between the IGPs.