Flowspec (Global IPv4/6PE)
Last updated
Topology: ine-spv4
Load flowspec.global.init.cfg
R1, R7 and R8 are all dual-stacked internet peers in the global table. IPv6 uses 6PE in the core.
Configure source-based RTBH within the core so that traffic sourced from 1.1.1.1/32 and 2001:1::1/128 is dropped. Also configure destination-based RTBH so that traffic destined to 8.8.8.9/32 and 2001:8::9/128 is dropped at the edge. Use XR1 as the central policy control router using flowspec.
Flowspec is a BGP tool used to disseminate ACLs/firewall filters. A centralized controller defines ACL rules using QoS logic, and advertises this as BGP flowspec NLRI. All receiving routers will dynamically implement this as an ingress filter on their participating interfaces in hardware. The ability to turn the flowspec NLRI into hardware filters is called “ePBR” (enhanced Policy Based Routing).
Flowspec allows us to use a traffic rate of "zero bytes/sec," which means "drop the traffic." This gives us a very elegant method of implementing S/D-RTBH without the clunky null0 dummy routes. Instead, traffic matching the source or destination address in the flowspec policy is dropped by an ingress filter. In fact, we can be even more granular with our RTBH matching, for example matching only small packets to/from the IP address, or matching packets with certain L4 characteristics to/from the IP address. All of this is controlled dynamically from the controller, rather than being signaled with BGP communities as we did previously with regular RTBH.
IOS-XE routers can only implement flowspec policy on interfaces in hardware; we cannot originate flowspec policies from IOS-XE. XRv can only advertise flowspec policies, acting as a controller; without a true line card, it cannot actually implement the policies in the data plane. This means that in our labs, XRv will always be the controller and IOS-XE will always be the PE.
Flowspec uses BGP SAFIs 133 (unicast flowspec) and 134 (VPN flowspec). Combined with AFI 1 (IPv4) and AFI 2 (IPv6), this gives four address families: IPv4 flowspec, VPNv4 flowspec, IPv6 flowspec, and VPNv6 flowspec.
Traffic policies are coded as TLVs within the NLRI. Several NLRI component types exist to accommodate flexible and complex ACL logic. For example, types 1 and 2 code a destination/source prefix, types 4/5/6 code layer 4 ports, and type 11 matches the DSCP value. Essentially these all define different criteria for ACL matching. Here is the full list of component types (per RFC 8955):

Type 1 = Destination Prefix
Type 2 = Source Prefix
Type 3 = IP Protocol
Type 4 = Port (source or destination)
Type 5 = Destination Port
Type 6 = Source Port
Type 7 = ICMP Type
Type 8 = ICMP Code
Type 9 = TCP Flags
Type 10 = Packet Length
Type 11 = DSCP
Type 12 = Fragment
The action of the traffic policy is implemented as an extcommunity.
0x8006 = police rate
A rate of 0 bytes per second means “drop”
0x8009 = remark DSCP value
0x0800 = redirect the traffic to an IP next-hop
This means that flowspec is able to be used for much more than simple S/D-RTBH behavior.
Also note that flowspec allows multiple policies to be defined, unlike interface QoS, which permits only one service-policy per direction. When multiple policies have been pushed via flowspec, all are applied to the interfaces, but only the first matching flowspec rule takes effect. There does not appear to be a way to control the order in which the flowspec policies are evaluated.
To implement flowspec, we first must activate the address-family on all participating routers:
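A minimal sketch of the BGP activation, assuming AS 100 and hypothetical loopback peering addresses (10.0.0.11 for XR1, 10.0.0.1 for a PE; substitute the lab's real addresses):

```
! XRv controller (XR1)
router bgp 100
 address-family ipv4 flowspec
 !
 address-family ipv6 flowspec
 !
 neighbor 10.0.0.1
  remote-as 100
  update-source Loopback0
  address-family ipv4 flowspec
  !
  address-family ipv6 flowspec

! IOS-XE PE
router bgp 100
 neighbor 10.0.0.11 remote-as 100
 neighbor 10.0.0.11 update-source Loopback0
 address-family ipv4 flowspec
  neighbor 10.0.0.11 activate
 address-family ipv6 flowspec
  neighbor 10.0.0.11 activate
```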
To configure the IOS-XE PEs to implement the flowspec policy on their interfaces, we simply enable flowspec on “all-interfaces.” This is all that is required on IOS-XE to start implementing S/D-RTBH at the edge!
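On the IOS-XE PEs, this looks roughly as follows (the `flowspec` / `local-install interface-all` syntax mirrors IOS-XR; exact keywords may vary by release):

```
flowspec
 address-family ipv4
  local-install interface-all
 address-family ipv6
  local-install interface-all
```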
Note, to disable flowspec on individual interfaces, we can use the following command:
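A sketch of the per-interface opt-out (the interface name is a placeholder, and the exact `disable` keyword should be verified on the platform in use):

```
interface GigabitEthernet2
 ip flowspec disable
 ipv6 flowspec disable
```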
The complexity comes on the controller (XRv). We use QoS CLI tools (MQC) to define the policy. First we have class-maps that match the traffic criteria. Note that we must use “match-all” when using the “class-map type traffic.”
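For this task, the class-maps on XR1 might look like the following (class-map names are my own; the matched addresses come from the task):

```
class-map type traffic match-all CM-SRC-V4
 match source-address ipv4 1.1.1.1 255.255.255.255
 end-class-map
!
class-map type traffic match-all CM-SRC-V6
 match source-address ipv6 2001:1::1/128
 end-class-map
!
class-map type traffic match-all CM-DST-V4
 match destination-address ipv4 8.8.8.9 255.255.255.255
 end-class-map
!
class-map type traffic match-all CM-DST-V6
 match destination-address ipv6 2001:8::9/128
 end-class-map
```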
Next, we match these classes in a PBR policy-map, and define the action. This is not very different from regular QoS.
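A sketch of the PBR policy-maps, reusing the hypothetical class-map names above; the `drop` action is what gets encoded as the traffic-rate-0 extcommunity:

```
policy-map type pbr PM-FS-V4
 class type traffic CM-SRC-V4
  drop
 !
 class type traffic CM-DST-V4
  drop
 !
 class type traffic class-default
 !
 end-policy-map
!
policy-map type pbr PM-FS-V6
 class type traffic CM-SRC-V6
  drop
 !
 class type traffic CM-DST-V6
  drop
 !
 class type traffic class-default
 !
 end-policy-map
```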
Finally, we instruct the router to use these policy maps in flowspec:
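Attaching the policy-maps under the flowspec process on XR1 (policy-map names are placeholders from the sketch above):

```
flowspec
 address-family ipv4
  service-policy type pbr PM-FS-V4
 !
 address-family ipv6
  service-policy type pbr PM-FS-V6
```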
First we can verify the policy on the local controller, and verify that it is advertising the policies into BGP:
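Verification commands along these lines should work on the XR controller (exact output varies by release):

```
show flowspec ipv4
show flowspec ipv6
show bgp ipv4 flowspec
show bgp ipv6 flowspec
```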
To inspect the details of the NLRI, we can simply copy+paste the NLRI syntax:
Above, we can see that the traffic-rate is 0. The first number in the extcommunity is the ASN (100); this appears to be purely informational.
Looking at a pcap of the flowspec advertisement clears this up. Here’s an IPv4 flowspec Update. We can see how the NLRI types are used to define the match criteria, and how the extcommunity is used to define the action:
Here’s an IPv6 flowspec Update which looks very similar:
On the PEs, we can confirm that we receive these policies via BGP:
Notice that there is no nexthop, because the nexthop does not really matter here. This is not a route update that needs to point to a nexthop; it is just a policy instruction. On IOS-XE, we can only see the details for all NLRIs at once, using show bgp ipv4|ipv6 flowspec detail:
We can see that these policies were imported into the local flowspec table on the PE:
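On the PE, the BGP-side and flowspec-side state can typically be checked with commands along these lines (exact IOS-XE syntax may vary by release):

```
show bgp ipv4 flowspec summary
show bgp ipv4 flowspec detail
show flowspec ipv4 detail
show flowspec ipv6 detail
```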
To verify that this is working, we can source pings from R1 (1.1.1.1 and 2001:1::1), or send pings to R8 (8.8.8.9 and 2001:8::9). We should see drops on the PE. The PE is implementing both source-based and destination-based RTBH at the edge:
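For example (the destination addresses used from R1 are assumptions; use whatever global destinations exist in the lab):

```
! From R1, sourcing from the blackholed addresses (drops expected):
ping 8.8.8.8 source 1.1.1.1
ping 2001:8::8 source 2001:1::1
! Toward the blackholed destinations:
ping 8.8.8.9
ping 2001:8::9
! On the PE, the flowspec drop counters should increment:
show flowspec ipv4 detail
```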