MPLS QoS Design

Topology: topology1

Load top3.l3vpn.setup.cfg

#IOS-XE (CSR1-7)
config replace flash:top3.l3vpn.setup.cfg

#IOS-XR (XR1-4, XR8-10)
configure
load top3.l3vpn.setup.cfg
commit replace
y

This lab is set up as follows:

  • CSR1-3, XR8 are PEs

  • CSR4-7 are CEs

  • XR1-4 are P routers

Note that you can stage all configuration on XRd, but XRd will not let you commit input policies on an interface. You will need to change the node type to XRv9K if you’d like to do this.

Before you begin, know that this is a very extensive lab that covers many different aspects of QoS in an MPLS network. I’d suggest taking your time with it.

The service provider’s goals for QoS are the following:

  • Use RDM and make 100M available on all links for real-time traffic (CT1)

    • Use the following TE class definitions:

      • TE-Class 0 = CT1 priority 0

      • TE-Class 1 = CT1 priority 1

      • TE-Class 2 = CT0 priority 2

      • TE-Class 3 = CT0 priority 3

  • The service provider’s voice servers are located behind XR8. Configure CSR1-3 to have a TE tunnel with XR8. Each TE tunnel must reserve 25M bandwidth using TE-class 0. These three tunnels should be bidirectional.

    • Ensure real-time traffic uses these tunnels by using CBTS/PBTS

    • All non-EXP5 traffic between CSR1-3 and XR8 should use the IGP best path.

  • Use a queuing policy in the core as follows:

    • Real time traffic (EXP5) should get priority and be policed to a rate of 20% link bandwidth

    • Priority traffic (EXP3 and EXP4) should receive 40% of remaining link bandwidth and use WRED and fair-queuing

    • Network control traffic (EXP6) should be guaranteed 5% of remaining link bandwidth

    • BE traffic (EXP0) should receive 55% of remaining link bandwidth and use WRED

Configure policing/shaping ingress/egress on the PEs towards the CEs as follows:

  • Each PE should police traffic received from the customer with a CIR of 100Mbps and a PIR of 120Mbps. Excess traffic should be marked down to EXP0.

  • Voice traffic (EF) should be marked with EXP5

  • Priority traffic (AF3X, AF4X, and CS6) should be marked with EXP4

  • All other traffic should be marked with EXP0.

  • Each PE should shape traffic outbound to the customer based on the SP marking. The shaping policy should use the core queuing policy. Shape overall to 100M.

Configure shaping egress on the CEs towards the PEs as follows:

  • Each CE should shape traffic to 100Mbps as follows:

    • Voice traffic (EF) should get priority and be policed to 25Mbps

    • AF31-AF33 traffic should get 15% bandwidth and use WRED with a 50 msec queue-limit

      • Start dropping AF33 traffic up to 20% once the queue gets to 30 msec full

      • Start dropping AF32 traffic up to 20% once the queue gets to 35 msec full

      • Start dropping AF31 traffic up to 20% once the queue gets to 40 msec full

    • AF41-AF43 traffic should get 10% bandwidth and use WRED with a 30 msec queue-limit

      • Start dropping AF43 traffic up to 20% once the queue gets to 20 msec full

      • Start dropping AF42 traffic up to 20% once the queue gets to 22 msec full

      • Start dropping AF41 traffic up to 20% once the queue gets to 25 msec full

    • CS6 traffic should get 5% bandwidth

    • BE traffic should use fair-queuing

Answer

MPLS-TE with DS-TE

Core Queuing Policy

PE to CE policing/shaping

CE to PE shaping

MPLS-TE DS-TE Tunnels

Turn off PHP

Explanation

High-Level Overview

This extensive lab uses multiple technologies to achieve a comprehensive MPLS core QoS design. The core uses DiffServ for QoS in the data plane, and MPLS-TE with DS-TE for accommodating bandwidth reservations on a DiffServ-aware basis in the control plane. Traffic is steered onto the TE tunnels based on its EXP value. By combining these separate tools (DS-TE, CBTS/PBTS, and DiffServ QoS), we can produce a unified QoS solution.

Additionally, there is some complexity at the PE-CE edge. The CEs implement outbound shaping to ensure they do not burst above the PE’s ingress policer. The PEs mark traffic into one of four queues, which are identified in the core by EXP value. The MPLS DiffServ model is “pipe mode,” in which the SP’s marking is not translated into the customer’s tunneled marking, and the egress PE makes its queuing decision based on the EXP marking. For this to work we must turn off PHP (by advertising explicit-null via LDP and MPLS-TE), and we must translate incoming EXP values to QoS group values, because the EXP value is no longer available when queuing outbound towards the CE.

MPLS-TE

This lab uses MPLS-TE with DS-TE in RDM mode (the default). First we must enable MPLS-TE on IOS-XE and IOS-XR. We also specify our own TE-class definitions. Make sure to mark TE-classes 4 through 7 as unused; if you forget to do this for a class for which the router has a default definition, the default will be used in addition to your custom definitions.
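As a rough sketch (assuming IETF DS-TE is available on your release; verify the exact command paths), the global DS-TE configuration looks roughly like this on both operating systems:

#IOS-XE (CSR1-3, sketch)
mpls traffic-eng ds-te mode ietf
mpls traffic-eng ds-te te-classes
 te-class 0 class-type 1 priority 0
 te-class 1 class-type 1 priority 1
 te-class 2 class-type 0 priority 2
 te-class 3 class-type 0 priority 3
 te-class 4 unused
 te-class 5 unused
 te-class 6 unused
 te-class 7 unused

#IOS-XR (XR1-4, XR8)
mpls traffic-eng
 ds-te mode ietf
 ds-te te-classes
  te-class 0 class-type 1 priority 0
  te-class 1 class-type 1 priority 1
  te-class 2 class-type 0 priority 2
  te-class 3 class-type 0 priority 3
  te-class 4 unused
  te-class 5 unused
  te-class 6 unused
  te-class 7 unused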

Next we allocate bandwidth for each interface using the RDM model. In this model, CT0 can use any bandwidth left unused by CT1, so we simply specify BC0 as the total link bandwidth and BC1 as the real-time pool. Our custom TE-class definitions also prevent CT0 tunnels from ever preempting CT1 tunnels: class-type 0 can only use priorities 2 and 3, while class-type 1 always uses priority 0 or 1.
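Per interface, the RDM pools might be allocated as follows, assuming 1G links (so BC0 = 1000000 kbps total and BC1 = 100000 kbps for CT1); interface names are examples, and the exact RDM keyword order varies by release:

#IOS-XE
interface GigabitEthernet1
 mpls traffic-eng tunnels
 ip rsvp bandwidth rdm 1000000 bc1 100000

#IOS-XR
rsvp
 interface GigabitEthernet0/0/0/0
  bandwidth rdm 1000000 bc1 100000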

We then define our TE tunnel interfaces. In this lab we pretend that all voice traffic traverses XR8, as if there were a media server behind this PE, so we only need to build TE LSPs between the other PEs and XR8. We are asked to ensure that EXP5 traffic uses these LSPs, which we accomplish with CBTS (IOS-XE) and PBTS (IOS-XR). Since we need autoroute announce on the EXP5 tunnel, the only way to ensure that all other traffic follows the IGP path is to build a second tunnel for “exp default” that simply uses the IGP metric for its path optimization.
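On IOS-XE, a CBTS sketch along these lines uses two member tunnels plus a master tunnel (tunnel numbers and the destination address are illustrative):

#IOS-XE (CSR1-3, sketch)
interface Tunnel18
 description EXP5 LSP to XR8 (CT1 priority 0 = TE-Class 0)
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.8
 tunnel mpls traffic-eng priority 0 0
 tunnel mpls traffic-eng bandwidth sub-pool 25000
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng exp 5
!
interface Tunnel28
 description default-EXP LSP to XR8, following the IGP metric
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.8
 tunnel mpls traffic-eng priority 3 3
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng path-selection metric igp
 tunnel mpls traffic-eng exp default
!
interface Tunnel8
 description CBTS master tunnel
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.8
 tunnel mpls traffic-eng exp-bundle master
 tunnel mpls traffic-eng exp-bundle member Tunnel18
 tunnel mpls traffic-eng exp-bundle member Tunnel28
 tunnel mpls traffic-eng autoroute announce

Note that the default-EXP member uses priority 3 3 (CT0 priority 3 = TE-Class 3); the IOS default of 7 7 would not match any TE-class in our custom map, and the LSP would fail to signal.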

On XR8 we create two tunnels to each of the three PEs: one tunnel carries EXP5 traffic and one carries the default EXP. On IOS-XR we don’t need a master EXP-bundle tunnel like we do on IOS-XE; instead, PBTS maps EXP values to a forward-class on each tunnel.
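On IOS-XR the equivalent sketch is two tunnel-te interfaces per PE plus a PBTS mapping of EXP5 to a forward-class (tunnel numbers, the destination address, and the forward-class value are illustrative, and the cef pbts command is platform-dependent):

#IOS-XR (XR8, sketch)
interface tunnel-te81
 description EXP5 LSP to CSR1
 ipv4 unnumbered Loopback0
 priority 0 0
 signalled-bandwidth sub-pool 25000
 autoroute announce
 forward-class 1
 destination 10.0.0.1
 path-option 10 dynamic
!
interface tunnel-te181
 description default-EXP LSP to CSR1
 ipv4 unnumbered Loopback0
 priority 3 3
 autoroute announce
 destination 10.0.0.1
 path-option 10 dynamic
!
! Map EXP5 to forward-class 1; unmatched traffic uses the default (forward-class 0) tunnel
cef pbts class 1 exp 5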

Remember to configure MPLS-TE explicit-null and LDP explicit-null to ensure that we don’t lose the EXP value on the penultimate hop. This is needed in pipe mode so that the egress PE performs egress queuing based on the MPLS EXP value and not the customer’s QoS marking.
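A sketch of the explicit-null configuration (the IOS-XR LDP command path varies by release):

#IOS-XE (CSR1-3)
mpls ldp explicit-null

#IOS-XR (XR8)
mpls traffic-eng
 signalling advertise explicit-null
!
mpls ldp
 address-family ipv4
  label local advertise explicit-null

On IOS-XE, check whether your release also supports advertising explicit-null for TE tail-end labels; the goal is simply that no penultimate hop pops the label before the egress PE can read the EXP value.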

CE outbound shaping

We’ll now look at the CEs’ outbound shaping policies. The CE should shape to the rate at which the PE polices, minimizing packet loss from policer drops. Additionally, the CE can prioritize certain classes of traffic by giving them guaranteed bandwidth or priority.

Above, we give EF priority that is policed at 25Mbps during congestion. We give AF3 and AF4 minimum bandwidth guarantees and use WRED for congestion avoidance. The queue-limit for these classes is set as a time value in msec, which the router converts to a byte count based on the bandwidth of the parent policy or parent interface. The same is true of the custom WRED parameters, whose thresholds we also specify as time values.
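A sketch of the CE policy (class and policy names are arbitrary; the WRED maximum thresholds are assumed to sit at the queue-limit, and a mark-probability denominator of 5 gives the “up to 20%” drop probability):

#IOS-XE (CSR4-7, sketch)
class-map match-all VOICE
 match dscp ef
class-map match-any PRIO-AF3
 match dscp af31 af32 af33
class-map match-any PRIO-AF4
 match dscp af41 af42 af43
class-map match-all CONTROL
 match dscp cs6
!
policy-map CE-CHILD
 class VOICE
  priority 25000
 class PRIO-AF3
  bandwidth percent 15
  queue-limit 50 ms
  random-detect dscp-based
  random-detect dscp af33 30 ms 50 ms 5
  random-detect dscp af32 35 ms 50 ms 5
  random-detect dscp af31 40 ms 50 ms 5
 class PRIO-AF4
  bandwidth percent 10
  queue-limit 30 ms
  random-detect dscp-based
  random-detect dscp af43 20 ms 30 ms 5
  random-detect dscp af42 22 ms 30 ms 5
  random-detect dscp af41 25 ms 30 ms 5
 class CONTROL
  bandwidth percent 5
 class class-default
  fair-queue
!
policy-map CE-SHAPER
 class class-default
  shape average 100000000
  service-policy CE-CHILD
!
interface GigabitEthernet1
 service-policy output CE-SHAPER

Here priority 25000 is a conditional policer: the 25Mbps limit is only enforced during congestion.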

PE inbound policing/marking

The PEs perform two actions on ingress from the CEs: police at the CIR/PIR and mark the EXP value.

The PEs use a CIR of 100Mbps and a PIR of 120Mbps. Excess traffic is marked down to EXP0, and all conforming traffic is marked based on the customer’s DSCP marking. Perhaps the SP has an accounting mechanism that charges for priority/real-time traffic above a certain threshold, to discourage customers from marking all of their traffic as priority.
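A sketch of the ingress edge policy on the IOS-XE PEs (names are arbitrary; XR8 needs the same logic in IOS-XR syntax):

#IOS-XE (CSR1-3, sketch)
class-map match-all CUST-EF
 match dscp ef
class-map match-any CUST-PRIO
 match dscp af31 af32 af33
 match dscp af41 af42 af43
 match dscp cs6
!
policy-map PE-MARK
 class CUST-EF
  set mpls experimental imposition 5
 class CUST-PRIO
  set mpls experimental imposition 4
 class class-default
  set mpls experimental imposition 0
!
policy-map PE-INGRESS
 class class-default
  police cir 100000000 pir 120000000
   conform-action transmit
   exceed-action set-mpls-exp-imposition-transmit 0
   violate-action drop
  service-policy PE-MARK
!
interface GigabitEthernet2
 service-policy input PE-INGRESS

Conforming traffic keeps the EXP value set by the child PE-MARK policy, traffic between the CIR and PIR is remarked to EXP0, and traffic above the PIR is dropped.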

PE outbound queuing to the CE

The PEs are using pipe mode, so they must translate the EXP marking received from the core into an internal qos-group value. Additionally, since we want to do WRED, we set a discard-class marking as well.
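A sketch of the core-facing ingress map (class names and the qos-group/discard-class numbering are arbitrary):

#IOS-XE (CSR1-3, core-facing interfaces, sketch)
class-map match-all CORE-EXP5
 match mpls experimental topmost 5
class-map match-all CORE-EXP4
 match mpls experimental topmost 4
class-map match-all CORE-EXP3
 match mpls experimental topmost 3
class-map match-all CORE-EXP6
 match mpls experimental topmost 6
!
policy-map CORE-IN
 class CORE-EXP5
  set qos-group 5
 class CORE-EXP4
  set qos-group 4
  set discard-class 4
 class CORE-EXP3
  set qos-group 4
  set discard-class 3
 class CORE-EXP6
  set qos-group 6
 class class-default
  set qos-group 0
!
interface GigabitEthernet2
 service-policy input CORE-IN

EXP3 and EXP4 land in the same qos-group (one queue toward the CE) but keep distinct discard-classes so WRED can still treat them differently.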

Next we can reuse our core policy for outbound queuing to the CE, but matching qos-group values instead of EXP values, and using discard-class values for WRED instead of EXP.
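A sketch of the egress policy toward the CE, nesting the qos-group version of the queuing policy under a 100M shaper:

#IOS-XE (CSR1-3, sketch)
class-map match-all QG5
 match qos-group 5
class-map match-all QG4
 match qos-group 4
class-map match-all QG6
 match qos-group 6
!
policy-map PE-TO-CE-CHILD
 class QG5
  priority
  police cir percent 20
 class QG4
  bandwidth remaining percent 40
  fair-queue
  random-detect discard-class-based
 class QG6
  bandwidth remaining percent 5
 class class-default
  bandwidth remaining percent 55
  random-detect discard-class-based
!
policy-map PE-TO-CE
 class class-default
  shape average 100000000
  service-policy PE-TO-CE-CHILD
!
interface GigabitEthernet1
 service-policy output PE-TO-CE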

Core QoS Policy

Lastly, we look at the MPLS core DiffServ QoS policy. We police priority traffic to 20% of link bandwidth, which in this lab is always 200M. Since we reserve only 100M for CT1 on every link, this oversubscribes the priority queue relative to the reservations, giving us a buffer that makes it unlikely we drop priority traffic. This is how we align the data-plane QoS policy with the DS-TE control-plane reservations.

We also have a few other classes: a “priority” class for high-priority data traffic, which gets a large percentage of guaranteed bandwidth and uses WRED for congestion avoidance, and a control class for network control/signalling traffic, which is guaranteed 5 percent of bandwidth.

This queuing policy is applied outbound to every core interface.
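A sketch of the core queuing policy in IOS-XE syntax (the XR P routers need the same logic in IOS-XR syntax, where per-class fair-queue is not available; default precedence-based WRED acts on the EXP bits for MPLS packets):

#IOS-XE (core-facing interfaces, sketch)
class-map match-all EXP5
 match mpls experimental topmost 5
class-map match-any EXP3-4
 match mpls experimental topmost 3 4
class-map match-all EXP6
 match mpls experimental topmost 6
class-map match-all EXP0
 match mpls experimental topmost 0
!
policy-map CORE-QUEUING
 class EXP5
  priority
  police cir percent 20
 class EXP3-4
  bandwidth remaining percent 40
  fair-queue
  random-detect
 class EXP6
  bandwidth remaining percent 5
 class EXP0
  bandwidth remaining percent 55
  random-detect
!
interface GigabitEthernet2
 service-policy output CORE-QUEUING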
