# Copyright (c) 2017 Cisco and/or its affiliates.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

*** Settings ***
| Resource | resources/libraries/robot/performance/performance_setup.robot
| ...
| Force Tags | 3_NODE_SINGLE_LINK_TOPO | PERFTEST | HW_ENV | NDRPDRDISC
| ... | NIC_Intel-X520-DA2 | ETH | L2BDMACLRN | FEATURE | MACIP | ACL_STATELESS
| ... | IACL | ACL10 | 100k_FLOWS
| ...
| Suite Setup | Run Keywords
| ... | Set up 3-node performance topology with DUT's NIC model | L2
| ... | Intel-X520-DA2
| ... | AND | Set up performance test suite with ACL
| Suite Teardown | Tear down 3-node performance topology
| ...
| Test Setup | Set up performance test
| ...
| Test Teardown | Tear down performance test with MACIP ACL
| ... | ${min_rate}pps | ${framesize} | ${traffic_profile}
| ...
| Documentation | *RFC2544: Packet throughput L2BD test cases with ACL*
| ...
| ... | *[Top] Network Topologies:* TG-DUT1-DUT2-TG 3-node circular topology\
| ... | with single links between nodes.
| ... | *[Enc] Packet Encapsulations:* Eth-IPv4 for L2 switching of IPv4.
| ... | *[Cfg] DUT configuration:* DUT1 is configured with L2 bridge domain\
| ... | and MAC learning enabled. DUT2 is configured with L2 cross-connects.\
| ... | Required MACIP ACL rules are applied to input paths of both DUT1\
| ... | interfaces. DUT1 and DUT2 are tested with 2p10GE NIC X520 Niantic by\
| ... | Intel.
| ... | *[Ver] TG verification:* TG finds and reports throughput NDR (Non Drop\
| ... | Rate) with zero packet loss tolerance or throughput PDR (Partial Drop\
| ... | Rate) with non-zero packet loss tolerance (LT) expressed in percentage\
| ... | of packets transmitted. NDR and PDR are discovered for different\
| ... | Ethernet L2 frame sizes using either binary search or linear search\
| ... | algorithms with configured starting rate and final step that determines\
| ... | throughput measurement resolution. Test packets are generated by TG on\
| ... | links to DUTs. TG traffic profile contains two L3 flow-groups\
| ... | (flow-group per direction, ${flows_per_dir} flows per flow-group) with\
| ... | all packets containing Ethernet header, IPv4 header with IP protocol=61\
| ... | and static payload. MAC addresses match the MAC addresses of the TG\
| ... | node interfaces.
| ... | *[Ref] Applicable standard specifications:* RFC2544.

*** Variables ***
# X520-DA2 bandwidth limit
| ${s_limit}= | ${10000000000}

# ACL test setup
| ${acl_action}= | permit
| ${no_hit_aces_number}= | 10
| ${flows_per_dir}= | 100k

# starting points for non-hitting ACLs
| ${src_ip_start}= | 30.30.30.1
| ${ip_step}= | ${1}
| ${src_mac_start}= | 01:02:03:04:05:06
| ${src_mac_step}= | ${1000}
| ${src_mac_mask}= | 00:00:00:00:00:00
| ${tg_stream1_mac}= | ca:fe:00:00:00:00
| ${tg_stream2_mac}= | fa:ce:00:00:00:00
| ${tg_mac_mask}= | ff:ff:ff:fe:00:00
| ${tg_stream1_subnet}= | 10.0.0.0/15
| ${tg_stream2_subnet}= | 20.0.0.0/15

# traffic profile
| ${traffic_profile}= | trex-sl-3n-ethip4-macsrc100kip4src100k

*** Keywords ***
| Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | ...
| | [Arguments] | ${wt} | ${rxq} | ${framesize} | ${min_rate} | ${search_type}
| | ...
| | Set Test Variable | ${framesize}
| | Set Test Variable | ${min_rate}
| | ${max_rate}= | Calculate pps | ${s_limit} | ${framesize}
| | ${binary_min}= | Set Variable | ${min_rate}
| | ${binary_max}= | Set Variable | ${max_rate}
| | ${threshold}= | Set Variable | ${min_rate}
| | ...
| | Given Add '${wt}' worker threads and '${rxq}' rxqueues in 3-node single-link circular topology
| | And Add PCI devices to DUTs in 3-node single link topology
| | ${get_framesize}= | Get Frame Size | ${framesize}
| | And Run Keyword If | ${get_framesize} < ${1522} | Add no multi seg to all DUTs
| | And Apply startup configuration on all VPP DUTs
| | When Initialize L2 bridge domain with MACIP ACLs on DUT1 in 3-node circular topology
| | Then Run Keyword If | '${search_type}' == 'NDR'
| | ... | Find NDR using binary search and pps
| | ... | ${framesize} | ${binary_min} | ${binary_max} | ${traffic_profile}
| | ... | ${min_rate} | ${max_rate} | ${threshold}
| | ... | ELSE IF | '${search_type}' == 'PDR'
| | ... | Find PDR using binary search and pps
| | ... | ${framesize} | ${binary_min} | ${binary_max} | ${traffic_profile}
| | ... | ${min_rate} | ${max_rate} | ${threshold}
| | ... | ${perf_pdr_loss_acceptance} | ${perf_pdr_loss_acceptance_type}

*** Test Cases ***
| tc01-64B-1t1c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-ndrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 1 thread, 1 phy core, 1 receive queue per NIC port.
| | ... | [Ver] Find NDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps.
| | ...
| | [Tags] | 64B | 1T1C | STHREAD | NDRDISC
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=1 | rxq=1 | framesize=${64} | min_rate=${50000} | search_type=NDR

| tc02-64B-1t1c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-pdrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 1 thread, 1 phy core, 1 receive queue per NIC port.
| | ... | [Ver] Find PDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps, LT=0.5%.
| | ...
| | [Tags] | 64B | 1T1C | STHREAD | PDRDISC | SKIP_PATCH
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=1 | rxq=1 | framesize=${64} | min_rate=${50000} | search_type=PDR

| tc03-64B-2t2c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-ndrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 2 threads, 2 phy cores, 1 receive queue per NIC port.
| | ... | [Ver] Find NDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps.
| | ...
| | [Tags] | 64B | 2T2C | MTHREAD | NDRDISC
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=2 | rxq=1 | framesize=${64} | min_rate=${50000} | search_type=NDR

| tc04-64B-2t2c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-pdrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 2 threads, 2 phy cores, 1 receive queue per NIC port.
| | ... | [Ver] Find PDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps, LT=0.5%.
| | ...
| | [Tags] | 64B | 2T2C | MTHREAD | PDRDISC | SKIP_PATCH
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=2 | rxq=1 | framesize=${64} | min_rate=${50000} | search_type=PDR

| tc05-64B-4t4c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-ndrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 4 threads, 4 phy cores, 2 receive queues per NIC port.
| | ... | [Ver] Find NDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps.
| | ...
| | [Tags] | 64B | 4T4C | MTHREAD | NDRDISC
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=4 | rxq=2 | framesize=${64} | min_rate=${50000} | search_type=NDR

| tc06-64B-4t4c-eth-l2bdbasemaclrn-macip-iacl10sl-100kflows-pdrdisc
| | [Documentation]
| | ... | [Cfg] DUT runs L2BD switching config with MACIP ACL with\
| | ... | 4 threads, 4 phy cores, 2 receive queues per NIC port.
| | ... | [Ver] Find PDR for 64 Byte frames using binary search starting at 10GE\
| | ... | linerate, step 50kpps, LT=0.5%.
| | ...
| | [Tags] | 64B | 4T4C | MTHREAD | PDRDISC | SKIP_PATCH
| | ...
| | [Template] | Discover NDR or PDR for L2 Bridge Domain with MACIP ACLs
| | wt=4 | rxq=2 | framesize=${64} | min_rate=${50000} | search_type=PDR
...upport ECMP, and VPP FIB is designed to quickly remove failed paths from
the ECMP set; however, it does not insert shared objects specific to the
protected resource into the forwarding object graph, since this would incur a
forwarding/performance cost. Failover time is thus dependent on the number of
routes. Details are provided in the implementation section below.

PIC

PIC refers to the concept that the convergence time should be independent of
the number of prefixes/routes that are affected by the failure. PIC is
therefore most appropriate when considering networks with a large number of
prefixes, i.e. BGP networks and thus recursive prefixes. There are several
flavours of PIC covering different locations of protection and failure
scenarios. An outline is given below; see the literature for more details:

    Y/16 - CE1 -- PE1---\
                   |     \ P1---\
                   |      \      PE3 -- CE3 - X/16
                   |      / P2---/
    Y/16 - CE2 -- PE2---/

CE = customer edge, PE = provider edge. External-BGP runs between customer and
provider, internal-BGP runs between provider and provider.

1) iBGP PIC-core: consider traffic from CE1 to X/16 via CE3. On PE1 there are
   routes:
       X/16 (and hundreds of thousands of others like it)
           via PE3
   and
       PE3/32 (its loopback address)
           via 10.0.0.1 Link0 (this is P1)
           via 10.1.1.1 Link1 (this is P2)
   The failure is the loss of link0 or link1.

   As in all PIC scenarios, in order to provide prefix-independent convergence
   it must be that the route for X/16 (and all other routes via PE3) does not
   need to be updated in the FIB. The FIB therefore needs to update a single
   object that is shared by all routes - once this shared object is updated,
   all routes using it are instantly updated to use the new forwarding
   information. In this case the shared object is the resolving route via PE3.
   Once the route via PE3 is updated via IGP (OSPF) convergence, all recursive
   routes that resolve through it are also updated. VPP FIB implements this
   scenario via a recursive adjacency: X/16 and its sibling routes share a
   recursive adjacency that links to/points at/stacks on the normal adjacency
   contributed by the route for PE3. Once this shared recursive adj is
   re-linked, all routes are switched to using the new forwarding information.
   This is shown below:

   pre-failure:
       X/16 --> R-ADJ-1 --> ADJ-1-PE3 (multi-path via P1 and P2)

   post-failure:
       X/16 --> R-ADJ-1 --> ADJ-2-PE3 (single path via P1)

   Note that R-ADJ-1 (the recursive adj) remains in the forwarding graph,
   therefore X/16 (and all its siblings) is not updated. X/16 and its siblings
   share the recursive adj since they share the same path-list. It is the
   path-list object that contributes the recursive adj (see the next section
   for more details).
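To make the shared-object idea concrete, the following is a minimal,
self-contained C sketch; the names (toy_radj_t, toy_entry_t and so on) are
invented for illustration and this is not the VPP API. Every route entry that
resolves via PE3 stores only the index of one shared recursive adjacency, so
re-linking that single object at failure time converges all dependent routes
at once and the work done is independent of the number of prefixes.

#include <stdio.h>

/* Toy model only: a "recursive adjacency" records which concrete
 * adjacency (forwarding path) it currently resolves through. */
typedef struct {
    int resolving_adj;      /* e.g. ADJ-1-PE3 (via P1+P2) or ADJ-2-PE3 (P1 only) */
} toy_radj_t;

/* A route entry does not store the concrete adjacency; it stores the
 * index of the shared recursive adjacency it stacks on. */
typedef struct {
    const char *prefix;
    int         radj_index;
} toy_entry_t;

#define N_ROUTES 100000
static toy_radj_t  radj_pool[1];       /* one shared recursive adj for all routes via PE3 */
static toy_entry_t routes[N_ROUTES];

int main(void)
{
    /* pre-failure: the shared recursive adj resolves via ADJ 1 (P1 and P2) */
    radj_pool[0].resolving_adj = 1;
    for (int i = 0; i < N_ROUTES; i++) {
        routes[i].prefix = "X/16-like";
        routes[i].radj_index = 0;      /* every route stacks on the same object */
    }

    /* failure of one core link: only the shared object is rewritten,
     * none of the 100k entries is touched. */
    radj_pool[0].resolving_adj = 2;    /* ADJ 2: single path via P1 */

    /* forwarding for any route now follows the updated shared object */
    printf("route[42] resolves via ADJ-%d\n",
           radj_pool[routes[42].radj_index].resolving_adj);
    return 0;
}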
2) iBGP PIC-edge: traffic from CE3 to Y/16. On PE3 there are routes:
       Y/16 (and hundreds of thousands of others like it)
           via PE1
           via PE2
   and
       PE1/32 (PE1's loopback address)
           via 10.0.2.2 Link0 (this is P1)
       PE2/32 (PE2's loopback address)
           via 10.0.3.3 Link1 (this is P2)

   The failure is the loss of reachability to PE2. This could be either the
   loss of the link P2-PE2 or the loss of the node PE2. It is detected either
   by the withdrawal of PE2's loopback route or by some form of failure
   detection (e.g. BFD).

   VPP FIB again provides PIC via the use of the shared recursive adj. Y/16
   and its siblings will again share a path-list for the list {PE1,PE2}; this
   path-list will contribute a multi-path recursive adj, i.e. a multi-path adj
   with each choice therein being another adj:

       Y/16 --> RM-ADJ --> ADJ1 (for PE1)
                       --> ADJ2 (for PE2)

   When the route for PE2 is withdrawn, the multi-path recursive adjacency is
   updated to be:

       Y/16 --> RM-ADJ --> ADJ1 (for PE1)
                       --> ADJ1 (for PE1)

   that is, both choices in the ECMP set are the same and thus all traffic is
   forwarded to PE1. Eventually the control plane will download a route update
   for Y/16 to be via PE1 only. At that time the situation will be:

       Y/16 --> R-ADJ --> ADJ1 (for PE1)

   In the scenario above we assumed that PE1 and PE2 are ECMP for Y/16. eBGP
   PIC core is also specified for the case where one PE is primary and the
   other backup - VPP FIB does not support that case at this time.

3) eBGP PIC-edge: traffic from CE3 to Y/16. On PE1 there are routes:
       Y/16 (and hundreds of thousands of others like it)
           via CE1 (primary)
           via PE2 (backup)
   and
       CE1 (this is an adj-fib)
           via 11.0.0.1 Link0 (this is CE1)  << this is an adj-fib
       PE2 (PE2's loopback address)
           via 10.0.5.5 Link1 (this is link PE1-PE2)

   The failure is the loss of link0 to CE1. The failure can be detected by FIB
   either as a link-down event or by the control plane withdrawing the
   connected prefix on link0 (say 10.0.5.4/30). The latter works because the
   resolving entry is an adj-fib, so removing the connected prefix will
   withdraw the adj-fib, and hence the recursive path becomes unresolved. The
   former is faster, particularly in the case of Inter-AS option A where there
   are many VLAN sub-interfaces on the PE-CE link, one for each VRF, and so
   the control plane must remove the connected prefix for each sub-interface
   to trigger PIC in each VRF. Note though that total PIC cutover time will
   depend on VRF scale with either trigger.

   Primary and backup paths in this eBGP PIC-edge scenario are calculated by
   BGP. Each peer is configured to always advertise its best external path to
   its iBGP peers. Backup paths therefore send traffic from the PE back into
   the core to an alternate PE. A PE may have multiple external paths, i.e.
   multiple directly connected CEs; it may also have multiple backup PEs.
   However, there is no correlation between the two, so unlike LFA-FRR the
   redundancy model is N-M: N primary paths are backed up by M backup paths,
   and only when all primary paths fail is the cutover performed onto the M
   backup paths. Note that PE2 must be suitably configured to forward traffic
   on its external path that was received from PE1. VPP FIB does not support
   external-internal-BGP (eiBGP) load-balancing.

   As with LFA-FRR, the use of primary and backup paths is not currently
   supported; however, the use of a recursive multi-path adj, and a suitably
   constrained hashing algorithm to choose from the primary or backup path
   sets, would again provide the necessary shared object and hence the
   prefix-scale-independent cutover.

Astute readers will recognise that both of the eBGP PIC scenarios refer only
to a BGP-free core.
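The PIC-edge cutover can be sketched in the same toy style (again invented
names, not VPP code): the shared multi-path recursive adjacency is a small
bucket table indexed by a flow hash, and when a next-hop is withdrawn its
buckets are overwritten with a surviving choice, so every flow keeps landing
on a valid path without any per-prefix work.

#include <stdio.h>

/* Toy multi-path recursive adjacency: a small table of "buckets", each
 * naming the adjacency of one resolved next-hop (PE1, PE2, ...). The
 * data-path hashes a flow and indexes this table. */
#define N_BUCKETS 2

typedef struct {
    int bucket[N_BUCKETS];     /* adjacency index per ECMP choice */
} toy_mp_radj_t;

/* When a next-hop is withdrawn, rewrite its buckets to point at a surviving
 * choice; the table size and the hash are unchanged, so the data-path needs
 * no per-prefix update. */
static void toy_mp_radj_remove(toy_mp_radj_t *mp, int failed_adj, int survivor_adj)
{
    for (int i = 0; i < N_BUCKETS; i++)
        if (mp->bucket[i] == failed_adj)
            mp->bucket[i] = survivor_adj;
}

int main(void)
{
    /* Y/16 and its siblings all share this one object:
     * bucket 0 -> ADJ1 (PE1), bucket 1 -> ADJ2 (PE2) */
    toy_mp_radj_t rm_adj = { .bucket = { 1, 2 } };

    /* reachability to PE2 (adj 2) is lost: both buckets now give PE1 */
    toy_mp_radj_remove(&rm_adj, 2, 1);

    for (int flow_hash = 0; flow_hash < 4; flow_hash++)
        printf("flow %d -> ADJ%d\n", flow_hash, rm_adj.bucket[flow_hash % N_BUCKETS]);
    return 0;
}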
Fast convergence implementation options come in two flavours:

 1) Insert switches into the data-path. The switch represents the protected
    resource. If the switch is 'on' the primary path is taken, otherwise the
    backup path is taken. Testing the switch in the data-path comes with an
    associated performance cost. A given packet may encounter more than one
    protected resource as it is forwarded. This approach minimises cutover
    times, as packets will be forwarded on the backup path as soon as the
    protected resource is detected to be down and the single switch is
    tripped. However, it comes at a performance cost, which increases with
    each shared resource a packet encounters in the data-path. This approach
    is thus best suited to LFA-FRR, where the protected routes are
    non-recursive (i.e. encounter few shared resources) and the expectation
    on cutover times is more stringent (<50 msecs).

 2) Update shared objects. Identify objects in the data-path that are
    required to be present whether or not fast convergence is required (i.e.
    adjacencies) and that can be shared by multiple routes. Create a
    dependency between these objects at the protected resource. When the
    protected resource fails, each of the shared objects is updated in a way
    that all users of it see a consistent change. This approach incurs no
    performance penalty as the data-path structure is unchanged; however, the
    cutover times are longer as more work is required when the resource
    fails. This scheme is thus more appropriate to recursive prefixes (where
    the packet will encounter multiple protected resources) and to
    fast-convergence technologies where the cutover times are less stringent
    (i.e. PIC).

Implementation:
---------------

Due to the requirements outlined above, not all routes known to FIB (e.g.
adj-fibs) are installed in forwarding. However, should circumstances change,
those routes will need to be added. This adds the requirement that a FIB
maintains two tables per-VRF, per-AF (where a 'table' is indexed by prefix):
the forwarding and non-forwarding tables.

For DP speed in VPP we want the lookup in the forwarding table to directly
result in the ADJ. So there are two tables: one contains all the routes (a
lookup therein yields a fib_entry_t), the other contains only the forwarding
routes (a lookup therein yields an ip_adjacency_t). The latter is used by the
DP. This trades memory for forwarding performance - a good trade-off in VPP's
expected operating environments.

Note these tables are keyed only by the prefix (and since there are two
per-VRF, implicitly by the VRF too). The key for an adjacency is the tuple
{next-hop, address (and its AF), interface, link/ether-type}. Consider this
curious, but allowed, config:

    set int ip addr 10.0.0.1/24 Gig0
    set ip arp Gig0 10.0.0.2 dead.dead.dead
    # a host in that sub-net is routed via a better next hop (say it avoids a
    # big L2 domain)
    ip route add 10.0.0.2 Gig1 192.168.1.1
    # this recursive should go via Gig1
    ip route add 1.1.1.1/32 via 10.0.0.2
    # this non-recursive should go via Gig0
    ip route add 2.2.2.2/32 via Gig0 10.0.0.2

For the last route, the lookup for the path (via {Gig0, 10.0.0.2}) in the
prefix table would not yield the correct result. To fix this we need a
separate table for the adjacencies.
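A minimal sketch of the forwarding/non-forwarding table split described
above, with invented types and a trivial linear match standing in for a real
longest-prefix lookup: every route lives in the non-forwarding table, only
resolved routes are mirrored into the forwarding table, and a data-path
lookup there yields the adjacency directly.

#include <stdio.h>
#include <string.h>

typedef struct {            /* "fib_entry_t"-like: control-plane view of a route */
    char prefix[32];
    int  resolved;          /* does the best source currently resolve?           */
    int  adj_index;         /* adjacency contributed by the best source          */
} toy_fib_entry_t;

typedef struct {            /* forwarding entry: prefix -> adjacency only        */
    char prefix[32];
    int  adj_index;
} toy_fwd_entry_t;

static toy_fib_entry_t non_fwd_table[8];   /* every route lives here              */
static toy_fwd_entry_t fwd_table[8];       /* only resolved routes mirrored here  */
static int n_routes, n_fwd;

static void toy_route_add(const char *pfx, int resolved, int adj)
{
    toy_fib_entry_t *e = &non_fwd_table[n_routes++];
    snprintf(e->prefix, sizeof(e->prefix), "%s", pfx);
    e->resolved  = resolved;
    e->adj_index = adj;

    if (resolved) {          /* install in forwarding: DP lookup yields the adj */
        toy_fwd_entry_t *f = &fwd_table[n_fwd++];
        snprintf(f->prefix, sizeof(f->prefix), "%s", pfx);
        f->adj_index = adj;
    }
}

/* data-path lookup: a single step from prefix to adjacency */
static int toy_dp_lookup(const char *pfx)
{
    for (int i = 0; i < n_fwd; i++)
        if (!strcmp(fwd_table[i].prefix, pfx))
            return fwd_table[i].adj_index;
    return -1;               /* no forwarding entry: drop */
}

int main(void)
{
    toy_route_add("1.1.1.1/32", 1, 7);   /* resolved: installed in forwarding     */
    toy_route_add("3.3.3.3/32", 0, 0);   /* e.g. an unresolved, adj-fib style route */

    printf("1.1.1.1/32 -> adj %d\n", toy_dp_lookup("1.1.1.1/32"));
    printf("3.3.3.3/32 -> adj %d\n", toy_dp_lookup("3.3.3.3/32"));
    return 0;
}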
FIB data structures:

fib_entry_t:
 - a representation of a route.
 - has a prefix.
 - maintains an array of path-lists that have been contributed by the
   different sources.
 - installs in the forwarding table the adjacency contributed by the best
   source's path-list.

fib_path_list_t:
 - a list of paths.
 - path-lists may be shared between FIB entries. The path-lists are thus kept
   in a DB. The key is the combined description of the paths. We share
   path-lists when it will aid convergence to do so. Adding path-lists to
   this DB that are never shared, or that are shared only by prefixes not
   subject to PIC, will increase the size of the DB unnecessarily and may
   lead to increased search times due to hash collisions.
 - the path-list contributes the appropriate adj for the entry in the
   forwarding table. The adj can be 'normal', multi-path or recursive,
   depending on the number of paths and their types.
 - since path-lists are shared there is only one instance of the multi-path
   adj that they [may] create. As such, multi-path adjacencies do not need a
   separate DB.
 The path-list with recursive paths and the recursive adjacency that it
 contributes form the backbone of the fast convergence architecture (as
 described previously).

fib_path_t:
 - a description of how to forward the traffic (i.e. via {Gig1, K}).
 - the path describes the intent on how to forward. This differs from how the
   path resolves, i.e. it might not be resolved at all (since the interface
   is deleted or down).
 - paths have different types, most notably recursive or non-recursive.
 - a fib_path_t will contribute the appropriate adjacency object. It is from
   these contributions that the DP graph/chain for the route is built.
 - if the path is recursive and a recursion loop is detected, then the path
   will contribute the special DROP adjacency. This way, whilst the control
   plane graph is looped, the data-plane graph is not.

We build a graph of these objects:

    fib_entry_t -> fib_path_list_t -> fib_path_t -> ...

for recursive paths:

    fib_path_t -> fib_entry_t -> ...

for non-recursive paths:

    fib_path_t -> ip_adjacency_t -> interface

These objects, which constitute the 'control plane' part of the FIB, are used
to represent the resolution of a route. As a whole this is referred to as the
control plane graph. There is a separate DP graph to represent the forwarding
of a packet. In the DP graph each object represents an action that is applied
to a packet as it traverses the graph. For example, a lookup of an IP address
in the forwarding table could result in the following graph:

    recursive-adj --> multi-path-adj --> interface_A
                                     --> interface_B

A packet traversing this FIB DP graph would thus also traverse a VPP node
graph of:

    ipX_recursive --> ipX_rewrite --> interface_A_tx --> etc

The taxonomy of objects in a FIB graph is as follows. Consider:

    A -->
    B --> D
    C -->

where A, B and C are (for example) routes that resolve through D.
    parent:   D is the parent of A, B, and C.
    children: A, B, and C are children of D.
    sibling:  A, B and C are siblings of one another.

All shared objects in the FIB are reference counted. Users of these objects
are thus expected to use the add_lock/unlock semantics (as one would normally
use malloc/free).
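The path-list DB and its reference counting might be sketched as below. The
key here is a plain string describing the paths, purely for illustration
(VPP keys on a structured path description, not a string), and the
lock/unlock names simply mirror the add_lock/unlock semantics mentioned
above; none of these types is the real API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy path-list: identified by the combined description of its paths. */
typedef struct toy_pl {
    char key[64];          /* e.g. "via PE1;via PE2" */
    int  locks;            /* reference count held by FIB entries */
    struct toy_pl *next;
} toy_pl_t;

static toy_pl_t *pl_db;    /* DB of shared path-lists */

/* find-or-create: entries asking for the same path description share one object */
static toy_pl_t *toy_pl_lock(const char *key)
{
    for (toy_pl_t *p = pl_db; p; p = p->next)
        if (!strcmp(p->key, key)) { p->locks++; return p; }

    toy_pl_t *p = calloc(1, sizeof(*p));
    snprintf(p->key, sizeof(p->key), "%s", key);
    p->locks = 1;
    p->next  = pl_db;
    pl_db    = p;
    return p;
}

static void toy_pl_unlock(toy_pl_t *pl)
{
    if (--pl->locks)
        return;
    /* last user gone: remove from the DB and free */
    for (toy_pl_t **pp = &pl_db; *pp; pp = &(*pp)->next)
        if (*pp == pl) { *pp = pl->next; break; }
    free(pl);
}

int main(void)
{
    /* two "prefixes" with identical paths share one path-list */
    toy_pl_t *a = toy_pl_lock("via PE1;via PE2");
    toy_pl_t *b = toy_pl_lock("via PE1;via PE2");
    printf("shared: %s, locks=%d\n", a == b ? "yes" : "no", a->locks);

    toy_pl_unlock(a);
    toy_pl_unlock(b);      /* last unlock frees the object */
    return 0;
}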
WALKS

It is necessary to walk/traverse the graph forwards (entry to interface) to
perform a collapse or build a recursive adj, and backwards (interface to
entry) to perform updates, i.e. when interface state changes or when
recursive route resolution updates occur. A forward walk follows simply by
navigating an object's parent pointer to access its parent object. For
objects with multiple parents (e.g. a path-list), each parent is walked in
turn.

To support back-walks, direct dependencies are maintained between objects;
i.e. in the relationship {A, B, C} --> D, object D will maintain a list of
'pointers' to its children {A, B, C}. Bare C-language pointers are not
allowed, so a pointer is described in terms of an object type (i.e. entry,
path-list, etc.) and an index - this allows the object to be retrieved from
the appropriate pool. A list is maintained to achieve fast convergence at
scale. When there are millions of recursive prefixes, it is very inefficient
to blindly walk the tables looking for entries that were affected by a given
topology change. The lowest-hanging fruit when optimising is to remove
actions that are not required, so all back-walks only traverse objects that
are directly affected by the change.

PIC core and fast-reroute rely on FIB reacting quickly to an interface state
change to update the multi-path adjacencies that use this interface. An
example graph is shown below:

    E_a -->
    E_b --> PL_2 --> P_a --> Interface_A
    ...          --> P_c -\
    E_k -->                \
                            Interface_K
                           /
    E_l -->               /
    E_m --> PL_1 --> P_d -/
    ...          --> P_f --> Interface_F
    E_z -->

    E  = fib_entry_t
    PL = fib_path_list_t
    P  = fib_path_t

The subscripts are arbitrary and serve only to distinguish object instances.
This CP graph results in the following DP graph:

    M-ADJ-2 --> Interface_A
            \
             -> Interface_K
            /
    M-ADJ-1 --> Interface_F

    M-ADJ = multi-path-adjacency.

When interface K goes down, a back-walk is started over its dependants in the
control plane graph. This back-walk will reach PL_1 and PL_2 and result in
the calculation of new adjacencies that have interface K removed. The walk
will continue to the entry objects and thus the forwarding table is updated
for each prefix with the new adjacency. The DP graph then becomes:

    ADJ-3 --> Interface_A

    ADJ-4 --> Interface_F

The eBGP PIC scenarios described above relied on the update of a path-list's
recursive adjacency to provide the shared point of cutover. This is shown
below:

    E_a -->
    E_b --> PL_2 --> P_a --> E_44 --> PL_a --> P_b --> Interface_A
    ...          --> P_c -\
    E_k -->                \
                            E_1 --> PL_k -> P_k --> Interface_K
                           /
    E_l -->               /
    E_m --> PL_1 --> P_d -/
    ...          --> P_f --> E_55 --> PL_e --> P_e --> Interface_E
    E_z -->

The failure scenario is the removal of entry E_1, and thus the paths P_c and
P_d become unresolved. To achieve PIC, the two shared recursive path-lists,
PL_1 and PL_2, must be updated to remove E_1 from the recursive multi-path
adjacencies that they contribute, before any entry E_a to E_z is updated.
This means that as the update propagates backwards (right to left) in the
graph it must do so breadth first, not depth first. Note this approach leads
to convergence times that are dependent on the number of path-lists, and so
on the number of combinations of egress PEs - this is desirable, as this
scale is considerably lower than the number of prefixes.
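A sketch of the dependency bookkeeping and a breadth-first back-walk, using
invented types: children are recorded as {object type, pool index} pairs, and
the walk triggered by an interface failure visits the dependent path-lists
first and then their dependent entries, never scanning whole tables.

#include <stdio.h>

typedef enum { TOY_PATH_LIST, TOY_ENTRY } toy_type_t;

/* A child reference: type + index into that type's pool, never a raw pointer. */
typedef struct { toy_type_t type; int index; } toy_child_ref_t;

typedef struct {
    const char     *name;
    toy_child_ref_t children[4];
    int             n_children;
} toy_obj_t;

/* Pools, one per object type, so a {type,index} pair is enough to find an object. */
static toy_obj_t path_list_pool[4];
static toy_obj_t entry_pool[8];

static toy_obj_t *deref(toy_child_ref_t r)
{
    return (r.type == TOY_PATH_LIST) ? &path_list_pool[r.index] : &entry_pool[r.index];
}

/* Breadth-first back-walk from the failed resource: visit the dependent
 * path-lists first (one generation), then their dependent entries.
 * Nothing outside these dependency lists is ever touched. */
static void toy_back_walk(toy_obj_t *failed)
{
    toy_child_ref_t queue[16];
    int head = 0, tail = 0;

    for (int i = 0; i < failed->n_children; i++)
        queue[tail++] = failed->children[i];

    while (head < tail) {
        toy_obj_t *o = deref(queue[head++]);
        printf("recalculate %s\n", o->name);
        for (int i = 0; i < o->n_children; i++)
            queue[tail++] = o->children[i];
    }
}

int main(void)
{
    /* Interface_K is used by PL_1 and PL_2; each path-list has dependent entries. */
    entry_pool[0] = (toy_obj_t){ .name = "E_b" };
    entry_pool[1] = (toy_obj_t){ .name = "E_m" };
    path_list_pool[0] = (toy_obj_t){ .name = "PL_2",
        .children = { { TOY_ENTRY, 0 } }, .n_children = 1 };
    path_list_pool[1] = (toy_obj_t){ .name = "PL_1",
        .children = { { TOY_ENTRY, 1 } }, .n_children = 1 };

    toy_obj_t interface_K = { .name = "Interface_K",
        .children = { { TOY_PATH_LIST, 0 }, { TOY_PATH_LIST, 1 } }, .n_children = 2 };

    toy_back_walk(&interface_K);   /* prints PL_2, PL_1, then E_b, E_m */
    return 0;
}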
Consider another section of the graph, similar to the one shown above, where
there is another prefix E_2 in a similar position to E_1 that also has many
dependent children. It is reasonable to expect that a particular network
failure may simultaneously render E_1 and E_2 unreachable. This means that
the update to withdraw E_2 is downloaded immediately after the update to
withdraw E_1. It is a requirement on the FIB not to spend large amounts of
time in a back-walk whilst processing the update for E_1, i.e. the back-walk
must not reach as far as E_a and its siblings. Therefore, after the back-walk
has traversed one generation (breadth first) to update all the path-lists, it
should be suspended/backgrounded and further updates allowed to be handled.
Once the update queue is empty, the suspended walks can be resumed. Note that
in the case that multiple updates affect the same entry (say E_1), multiple
similar walks are triggered; these are merged, so each child is updated only
once.

In the presence of more layers of recursion PIC is still a desirable feature.
Consider an extension to the diagram above, where more recursive routes
(E_100 -> E_200) are added as children of E_a:

    E_100 -->
    E_101 --> PL_3 --> P_j -\
    ...                      \
    E_199 -->                 E_a -->
                              E_b --> PL_2 --> P_a --> E_44 --> ...etc..
                              ...          --> P_c -\
                              E_k                    \
                                                      E_1 --> ...etc..
                                                     /
                              E_l -->               /
                              E_m --> PL_1 --> P_d -/
                              ...          --> P_e --> E_55 --> ...etc..
                              E_z -->

To achieve PIC for the routes E_100 -> E_199, PL_3 needs to be updated before
E_b -> E_z; a breadth-first traversal at each level would not achieve this.
Instead the walk must proceed intelligently. Children of PL_2 are sorted so
that those entry objects that themselves have children appear first in the
list, those without later. When an entry object that has children is walked,
a walk of its children is pushed to the front of the background queue. The
background queue is a priority queue. As the breadth-first traversal proceeds
across the dependent entry objects E_a to E_k, when the first entry that does
not have children is reached (E_b), the walk is suspended and placed at the
back of the queue. Following this prioritisation method, shared path-list
updates are performed before all non-resolving entry objects.

The CPU/core/thread that handles the updates is the same thread that handles
the back-walks. Handling updates has a higher priority than making walk
progress, so a walk is required to be interruptable/suspendable when new
updates are available.

!!! TODO - this section describes how walks should be, not how they are !!!

In the diagram above E_100 is an IP route; however, VPP has no restrictions
on the type of object that can be a dependent of a FIB entry. Children of a
FIB entry can be (and are) GRE & VXLAN tunnel endpoints, L2VPN LSPs etc. By
including all object types in the graph and extending the back-walk, we can
thus deliver fast convergence to technologies that overlay on an IP network.

If, having read all the above carefully, you are still thinking "I don't need
all this %&$*, I have a route only I know about and I just need to jam it
in", then fib_table_entry_special_add() is your only friend.

    #ifndef __FIB_H__
    #define __FIB_H__

    #include <vnet/fib/fib_table.h>
    #include <vnet/fib/fib_entry.h>

    #endif
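Finally, the walk scheduling described in the WALKS section (entries that
have children of their own walked ahead of plain resolving entries, walks
suspendable so that new updates can be serviced) could be modelled roughly as
below. The queue discipline and all names here are illustrative assumptions
only and, as the TODO above notes, not necessarily how the code behaves
today.

#include <stdio.h>

/* Toy dependency graph: node -> its children (indices); a leaf has none. */
#define MAX_NODES 16
#define MAX_KIDS   4
static int         n_kids[MAX_NODES];
static int         kids[MAX_NODES][MAX_KIDS];
static const char *name[MAX_NODES];

/* A pending walk: keep visiting children of 'node' starting at 'next_child'. */
typedef struct { int node, next_child; } toy_walk_t;

/* Tiny double-ended queue standing in for the prioritised background queue. */
static toy_walk_t q[64];
static int q_head = 32, q_tail = 32;
static void push_front(toy_walk_t w) { q[--q_head] = w; }
static void push_back (toy_walk_t w) { q[q_tail++] = w; }
static int  q_empty(void)            { return q_head == q_tail; }
static toy_walk_t pop_front(void)    { return q[q_head++]; }

int main(void)
{
    /* PL_2's children are sorted so entries that have children of their own
     * (here E_44) come before plain resolving entries (E_b, E_c). */
    name[0] = "PL_2";  n_kids[0] = 3; kids[0][0] = 1; kids[0][1] = 2; kids[0][2] = 3;
    name[1] = "E_44";  n_kids[1] = 2; kids[1][0] = 4; kids[1][1] = 5;
    name[2] = "E_b";   name[3] = "E_c";
    name[4] = "E_100"; name[5] = "E_101";

    push_back((toy_walk_t){ .node = 0, .next_child = 0 });

    while (!q_empty()) {
        toy_walk_t w = pop_front();
        while (w.next_child < n_kids[w.node]) {
            int c = kids[w.node][w.next_child++];
            printf("update %s\n", name[c]);
            if (n_kids[c]) {
                /* this child has dependents of its own: walk them ahead of
                 * the remaining leaves by pushing to the front of the queue */
                push_front((toy_walk_t){ .node = c, .next_child = 0 });
            } else {
                /* a plain resolving entry was reached: suspend the rest of
                 * this walk and put it at the back so other work can run */
                push_back(w);
                break;
            }
        }
    }
    return 0;
}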