Notes - QoS on IOS-XR
Last updated
QoS on IOS-XR is very similar to IOS-XE: the MQC syntax is essentially the same, with a few extra features available.
On IOS-XR, you can configure regular RED by using “random-detect default”
When using regular WRED, you can base it on DSCP, EXP, IPP, or CoS. For any of these cases, you must specify the values yourself. The platform does not provide defaults.
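As a sketch (policy and class names are made up; thresholds are arbitrary), default RED and DSCP-based WRED might look like this:

```
policy-map WRED-EXAMPLE
 class DATA
  bandwidth percent 50
  ! Regular RED using the platform's default thresholds
  random-detect default
 class class-default
  bandwidth percent 20
  ! DSCP-based WRED: min/max thresholds must be supplied explicitly
  random-detect dscp af11 10 ms 100 ms
  random-detect dscp af12 5 ms 50 ms
```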
IOS-XR uses MDRR (modified deficit round robin) instead of CBWFQ. It operates in essentially the same manner, even though it doesn’t use the same weighted fair queuing algorithm; as a user, you will likely not notice the difference. LLQ can still be used, which adds strict-priority queuing on top of the MDRR scheduling.
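A minimal sketch of LLQ on top of MDRR (class names assumed):

```
policy-map EGRESS-QUEUING
 class VOICE
  ! Strict-priority queue served ahead of the MDRR scheduler
  priority level 1
 class BUSINESS
  ! MDRR class with a bandwidth guarantee, analogous to CBWFQ
  bandwidth percent 40
 class class-default
  bandwidth percent 20
```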
Overhead accounting is configured on the interface where the policy-map is applied, not on the policy-map itself as in IOS-XE. On XRv, the only options are layer1 accounting, layer2 accounting, or turning layer2 accounting off.
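For example (interface and policy names assumed; the exact accounting keywords can vary by platform):

```
interface GigabitEthernet0/0/0/0
 ! Accounting is chosen where the policy attaches, not in the policy-map
 service-policy output SHAPER account layer1
!
interface GigabitEthernet0/0/0/1
 ! Turn off layer 2 overhead accounting instead
 service-policy output SHAPER account nolayer2
```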
On IOS-XR, 8 priority levels are supported. On IOS-XE, only 2 levels are supported.
On IOS-XR, you cannot specify a police rate on the priority command like you can on IOS-XE. Instead, you must configure a separate policer.
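A sketch of an IOS-XR priority class with its separate policer (names and rates are assumptions):

```
policy-map LLQ-POLICED
 class VOICE
  ! No rate is accepted on the priority command itself
  priority level 1
  ! The policer is configured separately in the same class
  police rate 10 mbps
```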
Note that on IOS-XE, the following configurations have two different meanings:
When the police rate is specified with the priority command, the queue can still use extra bandwidth when no congestion is present
When the police rate is specified separately, this queue can never go above the police rate
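The two IOS-XE variants side by side (names and rates assumed):

```
! Conditional policer: the rate is enforced only under congestion
policy-map PRIO-CONDITIONAL
 class VOICE
  priority 10000
!
! Explicit policer: the rate is always enforced
policy-map PRIO-ALWAYS
 class VOICE
  priority
  police 10000000
```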
This is essentially the IOS-XR equivalent of IOS-XE’s service-group. You can apply an aggregate policy to multiple subinterfaces. All subinterfaces must belong to the same physical interface, just like on IOS-XE.
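This is presumably the shared-policy-instance feature (my assumption); a sketch of attaching one aggregate policy instance to two subinterfaces:

```
interface GigabitEthernet0/0/0/0.100
 service-policy output AGG-POL shared-policy-instance CUSTOMER-A
!
interface GigabitEthernet0/0/0/0.200
 ! Same instance name, so both subinterfaces share one aggregate policy
 service-policy output AGG-POL shared-policy-instance CUSTOMER-A
```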
This feature allows you to share a single policer bucket among multiple classes, so one policer rate is applied in aggregate across those classes within a single policy.
This feature allows you to apply multiple QoS policies in the same direction. The first policy classifies traffic and marks it with a traffic-class value; the second policy performs queuing, matching on the traffic-class field set by the first policy to select the queue.
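A sketch of the two-policy arrangement (policy and class names assumed; exactly how both policies attach in the same direction can vary by platform):

```
policy-map CLASSIFY
 class VOICE-TRAFFIC
  ! Internal marking only; nothing in the packet changes
  set traffic-class 1
!
class-map match-any TC1
 match traffic-class 1
!
policy-map QUEUING
 class TC1
  priority level 1
 class class-default
  bandwidth percent 50
!
interface GigabitEthernet0/0/0/0
 ! Both policies in the same direction: classify first, then queue
 service-policy output CLASSIFY
 service-policy output QUEUING
```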
The traffic-class tool does not appear to be very well documented. It is an internal marking mechanism similar to qos-group. But traffic-class is used for egress queuing, while qos-group is only used for marking(?).
This comment provides more clarity:
This feature makes the parent policy aware of the colors of traffic as marked by the child policy. This prevents conformed traffic from being dropped if it is mixed with exceeded traffic when presented to the parent policy.
In this scenario, up to 100m of conforming traffic might be presented to the parent policy, along with excess traffic beyond 100m that the child has marked cs1. The parent policy needs to be aware of the child policy’s markings in order to drop the exceeded traffic and not the conforming traffic.
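A hedged sketch of this using the child-conform-aware keyword (rates and markings follow the scenario above; exact syntax may vary by platform):

```
policy-map CHILD
 class class-default
  police rate 100 mbps
   conform-action transmit
   exceed-action set dscp cs1
!
policy-map PARENT
 class class-default
  service-policy CHILD
  police rate 100 mbps
   ! Make the parent policer respect the child's conform marking
   child-conform-aware
   conform-action transmit
   exceed-action drop
```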
This feature allows individual classes, in separate policies attached to subinterfaces of the same physical interface, to share an aggregate policy on the main interface.
This diagram helps explain the feature:
The class-default present in the policies assigned to each individual subinterface is associated with an aggregate policy assigned to the main physical interface.
This feature allows the router to classify traffic into flows for the purpose of call admission control (CAC). For example, say you want to give priority treatment to video conference calls but police them at 5 Mbps, and each call is 1 Mbps. If more than 5 calls are present, all video calls will suffer. Call admission control lets the 6th call be dropped instead.
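A heavily hedged sketch of flow-based CAC for this scenario (the keywords are from memory of the ASR9000 flow-aware CAC feature and may differ by platform and release):

```
policy-map VIDEO-CAC
 class VIDEO
  priority level 1
  police rate 5 mbps
  ! Admit 1 Mbps flows until the 5 Mbps budget is full;
  ! a 6th flow is rejected rather than degrading all calls
  admit cac local
   rate 5 mbps
   flow rate 1 mbps
   flow idle-timeout 30
```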