Advanced QoS on IOS-XE Pt. 1
Topology: ine-spv4
Load advanced.qos.pt.1.init.cfg
Continuing on from the previous lab, we will optimize the QoS policies on the CE (R1).
The customer notices that during times of congestion, voice is receiving 10M, CS3 is receiving around 22M, and class-default is receiving around 18M. This is not what the customer was expecting. The customer would like CS3 to only use around 10M, and class-default should get more bandwidth, around 30M. Find a way to achieve this without using any additional shapers or policers.
The customer learned that shapers and policers do not account for the size of the Ethernet 4-byte CRC trailer and the 20-byte inter-packet overhead. The customer would like to account for this.
The customer has learned that the policer for voice traffic allows a large burst. The customer would like the pool of available tokens to be calculated every 10msec.
The customer has experienced that one flow in the class-default can starve out other flows. Implement a feature so that each flow gets a fair share of the class-default bandwidth.
In IOS-XE, each class has three parameters:
Min bandwidth (bandwidth keyword)
Max bandwidth (shape keyword or police keyword)
Excess bandwidth
By default a ratio of 1
It is important to be aware of how IOS-XE will distribute unused bandwidth to all classes.
Previously our policy looked like this:
EF
priority, policed at 10M
CS3
bandwidth 5M
class-default
no bandwidth reservation
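As a minimal sketch (the class-map names EF and CS3 and the 50M parent shaper are assumptions carried over from the previous lab), the policy might have looked like this:

```
policy-map QOS-CHILD
 class EF
  ! strict priority, policed to 10 Mbps during congestion
  priority 10000
 class CS3
  ! 5 Mbps minimum guarantee
  bandwidth 5000
 class class-default
  ! no reservation
!
policy-map QOS-PARENT
 class class-default
  ! shape to 50M and nest the child policy
  shape average 50000000
  service-policy QOS-CHILD
```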
This meant that there was 35M of remaining bandwidth that was not accounted for. IOS-XE will distribute this evenly between all remaining classes. If EF is using its max of 10M, then it of course doesn't get any extra bandwidth (during times of congestion). This means that CS3 gets (35/2) + 5, and class-default gets (35/2). This results in CS3 getting 22.5M and class-default getting 17.5M. Of course, this is a bit contrived, because it is rare that CS3 traffic would use so much bandwidth. But it demonstrates the point.
To more deterministically share bandwidth among classes, it is better to use bandwidth remaining ratio. This cannot be used if there is a bandwidth reservation on any class in the policy. (You can't use the bandwidth statement and the bandwidth remaining statement in the same policy-map.) IOS-XE will distribute all remaining bandwidth to all classes in the ratio you specify. So if we remove the bandwidth statement from CS3, we are left with 40M of unused bandwidth. If this is distributed in a 1:3 ratio (CS3:class-default), CS3 gets 10M of bandwidth and class-default gets 30M.
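Continuing the sketch above, the 1:3 split could be expressed like this (the ratio values are just one valid choice; the supported range depends on the platform):

```
policy-map QOS-CHILD
 class EF
  priority 10000
 class CS3
  ! no bandwidth statement; 1 share of the excess (~10M of the remaining 40M)
  bandwidth remaining ratio 1
 class class-default
  ! 3 shares of the excess (~30M of the remaining 40M)
  bandwidth remaining ratio 3
```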
In practice, you should either use bandwidth remaining ratios, or have the bandwidth in the policy map add up to the full bandwidth. For example, we instead could have done this:
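(The original example isn't reproduced here; one version where the reservations add up to the full 50M might be:)

```
policy-map QOS-CHILD
 class EF
  priority 10000
 class CS3
  ! explicit 10 Mbps minimum
  bandwidth 10000
 class class-default
  ! explicit 30 Mbps minimum; 10 + 10 + 30 = 50M
  bandwidth 30000
```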
Or we can do this:
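(Again, the original isn't shown; a percent-based version that sums to 100% of the parent shaper could look like this:)

```
policy-map QOS-CHILD
 class EF
  ! 20% of 50M = 10 Mbps
  priority percent 20
 class CS3
  ! 20% of 50M = 10 Mbps
  bandwidth percent 20
 class class-default
  ! 60% of 50M = 30 Mbps
  bandwidth percent 60
```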
However, both of these methods make it difficult to add classes to the policy in the future. Using bandwidth remaining ratio is a bit more flexible.
Update: I found that the explanation for this is the difference between a two-parameter and a three-parameter scheduler. It appears that CSR1000v actually uses a two-parameter scheduler, which doesn't allow you to mix bandwidth and bandwidth remaining statements. However, XRv does allow you to mix these statements within the same class. For this reason, this exercise seems to be more applicable to XRv, which implements a three-parameter scheduler (min BW, max BW, excess BW).
From page 115 of “QoS for IP/MPLS Networks”: “The explicit configuration of minimum- and excess-bandwidth allocations are mutually exclusive in platforms with two-parameter schedulers.”
Page 82 of this book also goes into more detail on two- vs. three-parameter schedulers. A two-parameter scheduler has min and max BW guarantees that are independent, and the excess BW depends on one of these parameters. The min BW generally implies the excess BW. A three-parameter scheduler supports independent configuration of min/max/excess BW for each queue.
A two-parameter scheduler will share excess BW proportionally to the min BW guarantee. If a min BW does not exist, the scheduler shares excess BW equally. A three-parameter scheduler, by default, shares excess BW equally among all queues; each queue has an implicit "bandwidth remaining ratio 1."
So this must mean that in this lab if CSR1000v is indeed a two-parameter scheduler, CS3 would receive 5x as much excess BW as class-default?
By default, a shaper and policer account for the L2 header and L3 payload, as shown below:
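Roughly, the breakdown looks like this (a reconstruction of the idea, not platform output):

```
counted by default:     L2 header + L3 payload
not counted by default: 4-byte CRC, 7-byte preamble + 1-byte SFD, 12-byte inter-frame gap (24 bytes total)
```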
In rare cases, you might want to also account for the 4 bytes of CRC and the 20 bytes of inter-packet overhead required on Ethernet (12 bytes of inter-frame gap, 7 bytes of preamble, and a 1-byte start frame delimiter).
To do this, you simply add the account keyword to the shape or police command. You can add up to 63 bytes to, or subtract up to 63 bytes from, the shaper/policer's counting of the size of each packet.
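The exact command isn't reproduced here, but on the 50M parent shaper it would look something like the following (the account user-defined syntax and supported range may vary by platform and version):

```
policy-map QOS-PARENT
 class class-default
  ! count an extra 24 bytes (4B CRC + 20B inter-packet overhead) per packet
  shape average 50000000 account user-defined 24
  service-policy QOS-CHILD
```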
This causes the shaper to add 24 bytes to the size of each packet that is sent. So if a 1300-byte packet is received, the shaper counts it as 1324 bytes. Note that overhead accounting is not reflected in the byte counters seen in show policy-map interface, so there doesn't seem to be a good way to confirm that it is taking effect.
A policer uses two different default intervals for determining the burst size:
When using the priority command, it uses a 200msec burst interval
When using the police command, it uses a 250msec burst interval
This simply means that the policer takes the target rate (CIR) and converts it from a bits-per-second value to a bytes-per-interval value. A 10Mbps policer with a 200msec burst allows 250KB every 200msec.
This might create a problem for voice traffic. It only takes 2msec to serialize 250KB (2,000,000 bits) at 1Gbps. If this gets used up immediately in one large burst, voice traffic will be dropped for the next 198msec. (Note that this typically would not occur in the real world, because voice traffic should use codecs that run at predictable bit rates, such as 56Kbps).
Instead we can set the burst interval lower, for example to 10msec. This means that only 12.5KB can be allowed every burst interval. We allow less bursting, but evaluate the rate more often.
We can set the burst size directly on the priority command as follows:
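For example, something like this (10,000 kbps with a 12,500-byte burst, which is 10 msec worth of tokens at 10 Mbps):

```
policy-map QOS-CHILD
 class EF
  ! 10 Mbps priority with a 12,500-byte (10 msec) burst
  priority 10000 12500
```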
If we used a separate police statement, we would have the option to set the police rate as a percentage and use a time-based burst in msec.
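A sketch of that alternative, assuming the percentage is taken from the 50M parent shaper (so 20% = 10 Mbps):

```
policy-map QOS-CHILD
 class EF
  priority
  ! 20% of the parent shaper rate, with a 10 msec burst
  police cir percent 20 bc 10 ms
```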
Note that when policing on a priority queue, the policer takes effect before queuing. So only a max of 10 Mbps of EF traffic is allowed into the queue. Once a packet is in the queue, the scheduler will service this queue as priority, meaning any time there is a packet in the queue, it is served immediately so that it does not incur delay.
By default, any flow can out-compete other flows in a given class. To change this, we can use the fair-queue keyword.
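Continuing the same child policy sketch (QOS-CHILD is an assumed name), it is just the fair-queue keyword under the class:

```
policy-map QOS-CHILD
 class class-default
  bandwidth remaining ratio 3
  ! enable per-flow queues within this class
  fair-queue
```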
This command creates 16 queues within a given class in order to create some fairness between flows. Without this, one single high-rate flow can starve out low-rate flows. By using fair queueing, each flow is hashed into its own queue, so low-rate flows are given an equal share of bandwidth.
Note that there is no weighting for DSCP marking with fair-queuing. Each flow is simply given an equal ratio of bandwidth (1/<num flows>) no matter its marking.