R1, R7, and R8 are all dual-stacked internet peers. The internet table runs in an INET VRF in the core.
Configure source-based RTBH within the core so that traffic sourced from 1.1.1.1/32 and 2001:1::1/128 is dropped. Also configure destination-based RTBH so that traffic destined to 8.8.8.9/32 and 2001:8::9/128 is dropped at the edge. Use XR1 as the central policy control router using flowspec.
Running flowspec for VPNv4/v6 is very similar to running flowspec for regular IPv4/IPv6, so it is easy to implement either one without worrying about caveats. It also removes the complexity of using dummy null0 routes in the global VRF on IOS-XE for VPNv4/v6 nexthops in order to discard the traffic.
To begin, we must configure peering for vpnv4/v6 flowspec instead of ipv4/v6 flowspec:
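As a minimal sketch of that peering change on XR1 (the neighbor address 10.0.0.1 and AS 65000 are assumed for illustration; substitute your own values, and mirror the activation on the PE side):

#XR1
router bgp 65000
 neighbor 10.0.0.1
  address-family vpnv4 flowspec
  !
  address-family vpnv6 flowspec

The key difference from the earlier lab is simply that the vpnv4/v6 flowspec address-families are activated toward each peer instead of the ipv4/v6 flowspec address-families.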
There is a bit more configuration on the IOS-XR central policy router. We must define the VRF, because the XR router needs to export the policies with the correct RT, so that other PEs in the INET VRF will import these flowspec NLRI updates. We must activate the VRF under BGP with an RD and activate the ipv4/v6 flowspec address-families.
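A sketch of this on XR1, assuming AS 65000 and RD/RT 65000:1 (substitute the RD/RT values actually used for the INET VRF in your topology):

#XR1
vrf INET
 address-family ipv4 flowspec
  export route-target
   65000:1
 !
 address-family ipv6 flowspec
  export route-target
   65000:1
!
router bgp 65000
 vrf INET
  rd 65000:1
  address-family ipv4 flowspec
  !
  address-family ipv6 flowspec

The export RT under the flowspec address-families is what allows the other PEs to import the flowspec NLRI into their own INET VRF.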
Next, we configure the class-maps and policy-maps. This is no different from before.
#XR1
class-map type traffic match-all CM_FLOWSPEC_V4_R1
match source-address ipv4 1.1.1.1 255.255.255.255
end-class-map
!
class-map type traffic match-all CM_FLOWSPEC_V4_R8
match destination-address ipv4 8.8.8.9 255.255.255.255
end-class-map
!
class-map type traffic match-all CM_FLOWSPEC_V6_R1
match source-address ipv6 2001:1::1/128
end-class-map
!
class-map type traffic match-all CM_FLOWSPEC_V6_R8
match destination-address ipv6 2001:8::9/128
end-class-map
!
!
policy-map type pbr PM_FLOWSPEC_V4
class type traffic CM_FLOWSPEC_V4_R1
drop
!
class type traffic CM_FLOWSPEC_V4_R8
drop
!
end-policy-map
!
policy-map type pbr PM_FLOWSPEC_V6
class type traffic CM_FLOWSPEC_V6_R1
drop
!
class type traffic CM_FLOWSPEC_V6_R8
drop
!
end-policy-map
Finally, we attach the policy-maps as service-policies under the VRF in the flowspec configuration.
#XR1
flowspec
vrf INET
address-family ipv4
service-policy type pbr PM_FLOWSPEC_V4
!
address-family ipv6
service-policy type pbr PM_FLOWSPEC_V6
XR1 will now automatically inject the appropriate flowspec NLRI into vpnv4/v6 flowspec. We can confirm this on XR1 itself. First we see the local policies:
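One way to check the local policies on IOS-XR is with the flowspec show commands, scoped to the VRF (VRF name from this lab; output omitted here):

#XR1
show flowspec vrf INET ipv4
show flowspec vrf INET ipv6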
Next, we see that XR1 has created the BGP VRF ipv4/v6 flowspec entries:
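These entries should be visible in the per-VRF BGP flowspec tables, along the lines of:

#XR1
show bgp vrf INET ipv4 flowspec
show bgp vrf INET ipv6 flowspec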
These are taken from the BGP VRF table and injected into the vpnv4/v6 table with the appropriate export RT.
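The vpnv4/v6 flowspec tables themselves can be inspected directly, which should show the same NLRI carrying the export RT:

#XR1
show bgp vpnv4 flowspec
show bgp vpnv6 flowspec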
We can see the details of a given entry by copying and pasting the NLRI into the show command. The /48 appears to refer to the length of the NLRI in bits and can be ignored. Notice that we now see the RT, as we would in any VPN update.
The PEs should receive these NLRI. Again, these have no nexthop, because they are policy advertisements, not actual route advertisements.
On the PEs, we should see that these flowspec entries are imported into the flowspec table for the VRF:
If we send test traffic, we should see that the PE drops the traffic on ingress at the edge, just as in the previous lab, in which we ran the internet table in the global RIB.
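For example, from R1 we can test both policies (8.8.8.8 is an assumed reachable address on R8, used here only to exercise the source-based drop; the first three pings should fail, while traffic from other sources to other destinations succeeds):

#R1
ping 8.8.8.8 source 1.1.1.1
ping 8.8.8.9
ping 2001:8::9
ping 2001:8::8 source 2001:1::1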
In summary, flowspec for VRFs has very little difference compared to flowspec for the global RIB. All changes are fairly intuitive:
- Use BGP vpnv4/v6 flowspec instead of BGP ipv4/v6 flowspec
- Activate flowspec for all interfaces in the VRF, instead of the global RIB
- Define the VRF on the XR controller, using the export RT under ipv4/v6 flowspec
  - The IOS-XE PEs appear to use the unicast RTs for importing into the flowspec VRF policy
- Define the VRF under BGP on the XR controller, activating ipv4/v6 flowspec
- Define the service-policy for flowspec under the associated VRF on the XR controller