Diffstat (limited to 'docs/content/overview')
-rw-r--r-- | docs/content/overview/_index.md | 6
-rw-r--r-- | docs/content/overview/c_dash/_index.md | 6
-rw-r--r-- | docs/content/overview/c_dash/design.md | 6
-rw-r--r-- | docs/content/overview/c_dash/releases.md | 8
-rw-r--r-- | docs/content/overview/c_dash/structure.md | 20
-rw-r--r-- | docs/content/overview/csit/_index.md | 6
-rw-r--r-- | docs/content/overview/csit/design.md | 148
-rw-r--r-- | docs/content/overview/csit/suite_generation.md | 123
-rw-r--r-- | docs/content/overview/csit/test_naming.md | 112
-rw-r--r-- | docs/content/overview/csit/test_scenarios.md | 66
-rw-r--r-- | docs/content/overview/csit/test_tags.md | 863
11 files changed, 1364 insertions, 0 deletions
diff --git a/docs/content/overview/_index.md b/docs/content/overview/_index.md new file mode 100644 index 0000000000..97fb5dec78 --- /dev/null +++ b/docs/content/overview/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: false +bookFlatSection: true +title: "Overview" +weight: 1 +--- diff --git a/docs/content/overview/c_dash/_index.md b/docs/content/overview/c_dash/_index.md new file mode 100644 index 0000000000..97b351006f --- /dev/null +++ b/docs/content/overview/c_dash/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "C-Dash" +weight: 1 +--- diff --git a/docs/content/overview/c_dash/design.md b/docs/content/overview/c_dash/design.md new file mode 100644 index 0000000000..ef8c62ab88 --- /dev/null +++ b/docs/content/overview/c_dash/design.md @@ -0,0 +1,6 @@ +--- +title: "Design" +weight: 1 +--- + +# Design diff --git a/docs/content/overview/c_dash/releases.md b/docs/content/overview/c_dash/releases.md new file mode 100644 index 0000000000..1e51c2978a --- /dev/null +++ b/docs/content/overview/c_dash/releases.md @@ -0,0 +1,8 @@ +--- +title: "Releases" +weight: 3 +--- + +# Releases + +## C-Dash v1 diff --git a/docs/content/overview/c_dash/structure.md b/docs/content/overview/c_dash/structure.md new file mode 100644 index 0000000000..ba427f1ee3 --- /dev/null +++ b/docs/content/overview/c_dash/structure.md @@ -0,0 +1,20 @@ +--- +title: "Structure" +weight: 2 +--- + +# Structure + +## Performance Trending + +## Per Release Performance + +## Per Release Performance Comparisons + +## Per Release Coverage Data + +## Test Job Statistics + +## Failures and Anomalies + +## Documentation diff --git a/docs/content/overview/csit/_index.md b/docs/content/overview/csit/_index.md new file mode 100644 index 0000000000..959348d2ae --- /dev/null +++ b/docs/content/overview/csit/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "CSIT" +weight: 2 +--- diff --git a/docs/content/overview/csit/design.md 
b/docs/content/overview/csit/design.md new file mode 100644 index 0000000000..53b764f5bb --- /dev/null +++ b/docs/content/overview/csit/design.md @@ -0,0 +1,148 @@ +--- +title: "Design" +weight: 1 +--- + +# Design + +FD.io CSIT system design needs to meet continuously expanding requirements of +FD.io projects including VPP, related sub-systems (e.g. plugin applications, +DPDK drivers) and FD.io applications (e.g. DPDK applications), as well as +growing number of compute platforms running those applications. With CSIT +project scope and charter including both FD.io continuous testing AND +performance trending/comparisons, those evolving requirements further amplify +the need for CSIT framework modularity, flexibility and usability. + +## Design Hierarchy + +CSIT follows a hierarchical system design with SUTs and DUTs at the bottom level +of the hierarchy, presentation level at the top level and a number of functional +layers in-between. The current CSIT system design including CSIT framework is +depicted in the figure below. + +{{< figure src="/cdocs/csit_design_picture.svg" title="CSIT Design" >}} + +A brief bottom-up description is provided here: + +1. SUTs, DUTs, TGs + - SUTs - Systems Under Test; + - DUTs - Devices Under Test; + - TGs - Traffic Generators; +2. Level-1 libraries - Robot and Python + - Lowest level CSIT libraries abstracting underlying test environment, SUT, + DUT and TG specifics; + - Used commonly across multiple L2 KWs; + - Performance and functional tests: + - L1 KWs (KeyWords) are implemented as RF libraries and Python + libraries; + - Performance TG L1 KWs: + - All L1 KWs are implemented as Python libraries: + - Support for TRex only today; + - CSIT IXIA drivers in progress; + - Performance data plane traffic profiles: + - TG-specific stream profiles provide full control of: + - Packet definition - layers, MACs, IPs, ports, combinations thereof + e.g. 
IPs and UDP ports; + - Stream definitions - different streams can run together, delayed, + one after the other; + - Stream profiles are independent of the CSIT framework and can be used + in any TRex setup, can be sent anywhere to repeat tests with + exactly the same setup; + - Easily extensible - one can create a new stream profile that meets + test requirements; + - The same stream profile can be used for different tests with the same + traffic needs; + - Functional data plane traffic scripts: + - Scapy specific traffic scripts; +3. Level-2 libraries - Robot resource files: + - Higher level CSIT libraries abstracting required functions for executing + tests; + - L2 KWs are classified into the following functional categories: + - Configuration, test, verification, state report; + - Suite setup, suite teardown; + - Test setup, test teardown; +4. Tests - Robot: + - Test suites with test cases; + - Performance tests using physical testbed environment: + - VPP; + - DPDK-Testpmd; + - DPDK-L3Fwd; + - Tools: + - Documentation generator; + - Report generator; + - Testbed environment setup ansible playbooks; + - Operational debugging scripts; + +5. Test Lifecycle Abstraction + +A well-coded test must follow a disciplined abstraction of the test +lifecycle that includes setup, configuration, test and verification. In +addition, to improve test execution efficiency, the common aspects of +test setup and configuration shared across multiple test cases should be +done only once. Translating these high-level guidelines into the Robot +Framework, one arrives at the definition of well-coded RF tests for FD.io +CSIT. Anatomy of Good Tests for CSIT: + +1. Suite Setup - Suite startup Configuration common to all Test Cases in suite: + uses Configuration KWs, Verification KWs, StateReport KWs; +2. Test Setup - Test startup Configuration common to multiple Test Cases: uses + Configuration KWs, StateReport KWs; +3.
Test Case - uses L2 KWs with RF Gherkin style: + - prefixed with {Given} - Verification of Test setup, reading state: uses + Configuration KWs, Verification KWs, StateReport KWs; + - prefixed with {When} - Test execution: Configuration KWs, Test KWs; + - prefixed with {Then} - Verification of Test execution, reading state: uses + Verification KWs, StateReport KWs; +4. Test Teardown - post Test teardown with Configuration cleanup and + Verification common to multiple Test Cases - uses: Configuration KWs, + Verification KWs, StateReport KWs; +5. Suite Teardown - Suite post-test Configuration cleanup: uses Configuration + KWs, Verification KWs, StateReport KWs; + +## RF Keywords Functional Classification + +CSIT RF KWs are classified into the functional categories matching the test +lifecycle events described earlier. All CSIT RF L2 and L1 KWs have been grouped +into the following functional categories: + +1. Configuration; +2. Test; +3. Verification; +4. StateReport; +5. SuiteSetup; +6. TestSetup; +7. SuiteTeardown; +8. TestTeardown; + +## RF Keywords Naming Guidelines + +Readability counts: "...code is read much more often than it is written." +Hence, following a good and consistent grammar practice is important when +writing Robot Framework KeyWords and Tests. All CSIT test cases +are coded using Gherkin style and include only L2 KW references. L2 KWs are +coded using simple style and include L2 KW, L1 KW, and L1 Python references. +To improve readability, the proposal is to use the same grammar for both +Robot Framework KW styles, and to formalize the grammar of English +sentences used for naming the Robot Framework KWs. Robot +Framework KW names are short sentences expressing a functional description of +the command. They must follow English sentence grammar in one of the following +forms: + +1. **Imperative** - verb-object(s): *"Do something"*, verb in base form. +2.
**Declarative** - subject-verb-object(s): *"Subject does something"*, verb in + a third-person singular present tense form. +3. **Affirmative** - modal_verb-verb-object(s): *"Subject should be something"*, + *"Object should exist"*, verb in base form. +4. **Negative** - modal_verb-Not-verb-object(s): *"Subject should not be + something"*, *"Object should not exist"*, verb in base form. + +Passive form MUST NOT be used. However, usage of a past participle as an +adjective is okay. See usage examples provided in the Coding guidelines +section below. The following sections list the applicability of the above +grammar forms to different Robot Framework KW categories. Usage +examples are provided, both good and bad. + +## Coding Guidelines + +Coding guidelines can be found on the +[Design optimizations wiki page](https://wiki.fd.io/view/CSIT/Design_Optimizations). diff --git a/docs/content/overview/csit/suite_generation.md b/docs/content/overview/csit/suite_generation.md new file mode 100644 index 0000000000..84a19b8ab9 --- /dev/null +++ b/docs/content/overview/csit/suite_generation.md @@ -0,0 +1,123 @@ +--- +title: "Suite Generation" +weight: 5 +--- + +# Suite Generation + +CSIT uses Robot suite files to define tests. However, not all suite files +available for Jenkins jobs (or manually started bootstrap scripts) are present +in the CSIT git repository. They are generated only when needed. + +## Autogen Library + +There is a code generation layer implemented as a Python library called +"autogen", invoked by various bash scripts. + +It generates the full extent of CSIT suites, using the ones in git as templates. + +## Sources + +The generated suites (and their contents) are affected by multiple information +sources, listed below. + +### Git Suites + +The suites present in the git repository act as templates for generating suites. +One of autogen's design principles is that any template suite should also act +as a full suite (no placeholders).
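The templates-are-full-suites principle can be illustrated with a minimal Python sketch. This is illustrative only: `expand_nics`, the template NIC code and the NIC list below are invented for this example, not autogen's actual API.

```python
# Illustrative sketch only; names and NIC codes here are hypothetical.
def expand_nics(template_name, template_nic, nic_codes):
    """Derive per-NIC suite filenames from one template suite filename."""
    return [template_name.replace(template_nic, nic) for nic in nic_codes]

suites = expand_nics(
    "10ge2p1x710-ethip4-ip4base-ndrpdr.robot",
    "x710",
    ["x710", "x520", "xxv710"],
)
# Because the template is itself a full, runnable suite, expanding it
# with its own NIC code reproduces the template name unchanged.
assert suites[0] == "10ge2p1x710-ethip4-ip4base-ndrpdr.robot"
```

Expanding the template over its own value first is what makes the "regenerated template equals template" self-check possible.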
+ +In practice, autogen always re-creates the template suite with exactly +the same content; this is one of the checks that autogen works correctly. + +### Regenerate Script + +Not all suites present in the CSIT git repository act as templates for autogen. +The distinction is on a per-directory level. Directories with +a `regenerate_testcases.py` script usually consider all suites as templates +(unless not matched by the glob pattern in the script). + +The script also specifies the minimal frame size, indirectly, by specifying +protocol (protocol "ip4" is the default, leading to a 64B frame size). + +### Constants + +Values in `Constants.py` are taken into consideration when generating suites. +The values are mostly related to different NIC models and NIC drivers. + +### Python Code + +Python code in `resources/libraries/python/autogen` contains several other +information sources. + +#### Testcase Templates + +The test case part of a template suite is ignored; test case lines +are created according to text templates in the `Testcase.py` file. + +#### Testcase Argument Lists + +Each testcase template has a different number of "arguments", i.e. values +to put into various placeholders. Different test types need different +lists of the argument values; the lists are in the `regenerate_glob` method +in the `Regenerator.py` file. + +#### Iteration Over Values + +Python code detects the test type (usually by substrings of the suite file +name), then iterates over different quantities based on the type. +For example, only ndrpdr suite templates generate other types (mrr and soak). + +#### Hardcoded Exclusions + +Some combinations of values are known not to work, so they are excluded. +Examples: Density tests for too many CPUs; IMIX for ASTF. + +## Non-Sources + +Some information sources are available in the CSIT repository, +but do not affect the suites generated by autogen. + +### Testbeds + +Overall, no information visible in topology yaml files is taken into account +by autogen.
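The file-name-driven test-type fan-out described under Iteration Over Values can be sketched as follows. This is a simplified illustration; `derive_test_types` is an invented name, not autogen's actual code.

```python
# Simplified sketch of autogen's type fan-out: only ndrpdr templates
# generate the other performance test types (mrr and soak).
def derive_test_types(suite_name):
    if "-ndrpdr." in suite_name:
        return [
            suite_name,
            suite_name.replace("-ndrpdr.", "-mrr."),
            suite_name.replace("-ndrpdr.", "-soak."),
        ]
    # Any other suite type generates only itself.
    return [suite_name]

generated = derive_test_types("10ge2p1x710-ethip4-ip4base-ndrpdr.robot")
```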
+ +#### Testbed Architecture + +Historically, suite files are agnostic to testbed architecture, e.g. ICX or ALT. + +#### Testbed Size + +Historically, 2-node and 3-node suites have different names, and while +most of the code is common, the differences are not always simple enough. +Autogen treats 2-node and 3-node suites as independent templates. + +TRex suites are intended for a 1-node circuit of otherwise 2-node or 3-node +testbeds, so they support all 3 robot tags. +They are also detected and treated differently by autogen, +mainly because they need different testcase arguments (no CPU count). +Autogen does nothing specifically related to the fact that they should run +only in testbeds/NICs with a TG-TG line available. + +#### Other Topology Info + +Some bonding tests need two (parallel) links between DUTs. Autogen does not +care, as suites are agnostic. A Robot tag marks the difference, but the link +presence is not explicitly checked. + +### Job specs + +Information in job spec files depends on generated suites (not the other way). +Autogen should generate more suites than the job specs select, as job specs +are limited by a time budget. +More suites should be available for manually triggered verify jobs, +so autogen covers that. + +### Bootstrap Scripts + +Historically, bootstrap scripts perform some logic, +perhaps adding exclusion options to the Robot invocation +(e.g. skipping testbed+NIC combinations for tests that need parallel links). + +Once again, the logic here relies on what autogen generates; +autogen does not look into bootstrap scripts. diff --git a/docs/content/overview/csit/test_naming.md b/docs/content/overview/csit/test_naming.md new file mode 100644 index 0000000000..d7a32518e5 --- /dev/null +++ b/docs/content/overview/csit/test_naming.md @@ -0,0 +1,112 @@ +--- +title: "Test Naming" +weight: 3 +--- + +# Test Naming + +## Background + +{{< release_csit >}} follows a common structured naming convention for all +performance and system functional tests, introduced in CSIT 17.01.
+ +The naming should be intuitive for the majority of the tests. A complete +description of the CSIT test naming convention is provided on the +[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming). +Below are a few illustrative examples of the naming usage for test suites across +CSIT performance, functional and Honeycomb management test areas. + +## Naming Convention + +The CSIT approach is to use a tree naming convention and to encode the following +testing information into test suite and test case names: + +1. packet network port configuration + * port type, physical or virtual; + * number of ports; + * NIC model, if applicable; + * port-NIC locality, if applicable; +2. packet encapsulations; +3. VPP packet processing + * packet forwarding mode; + * packet processing function(s); +4. packet forwarding path + * if present, network functions (processes, containers, VMs) and their + topology within the computer; +5. main measured variable, type of test. + +The proposed convention is to encode ports and NICs on the left (underlay), +followed by the outer-most frame header, then other stacked headers up to the +header processed by vSwitch-VPP, then the VPP forwarding function, then encap on +the vhost interface, number of vhost interfaces, number of VMs. If chained VMs +are present, they are added on the right. Test topology is expected to be +symmetric; in other words, packets enter and leave the SUT through ports +specified on the left of the test name. Here are some examples to illustrate the +convention, followed by the complete legend, and tables mapping the new test +filenames to old ones. + +## Naming Examples + +CSIT test suite naming examples (filename.robot) for common tested VPP +topologies: + +1. **Physical port to physical port - a.k.a.
NIC-to-NIC, Phy-to-Phy, P2P** + * *PortNICConfig-WireEncapsulation-PacketForwardingFunction- + PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType* + * *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on + Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching + with MAC learning, NDR throughput discovery. + * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE on + Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline switching + with MAC learning, NDR throughput discovery. + * *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel x520 + NIC, IPv4 baseline routed forwarding, NDR throughput discovery. + * *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on Intel + x520 NIC, IPv6 scaled up routed forwarding, NDR throughput discovery. + * *10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot* => 2 ports of 10GE + on Intel x520 NIC, IPv4 baseline routed forwarding, ingress Access Control + Lists baseline matching on destination, NDR throughput discovery. + * *40ge2p1vic1385-ethip4-ip4base-ndrdisc.robot* => 2 ports of 40GE on Cisco + vic1385 NIC, IPv4 baseline routed forwarding, NDR throughput discovery. + * *eth2p-ethip4-ip4base-func.robot* => 2 ports of Ethernet, IPv4 baseline + routed forwarding, functional tests. + +2. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC, + P2V2P, NIC2VMchain2NIC, P2V2V2P** + * *PortNICConfig-WireEncapsulation-PacketForwardingFunction- + PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation- + VirtPortConfig-VMconfig-TestType* + * *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports + of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain + switching to/from two vhost interfaces and one VM, NDR throughput + discovery. 
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 + ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain + switching to/from two vhost interfaces and one VM, NDR throughput + discovery. + * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2 + ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain + switching to/from four vhost interfaces and two VMs, NDR throughput + discovery. + * *eth2p-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-func.robot* => 2 ports of + Ethernet, IPv4 VXLAN Ethernet, L2 bridge-domain switching to/from two vhost + interfaces and one VM, functional tests. + +3. **API CRUD tests - Create (Write), Read (Retrieve), Update (Modify), Delete + (Destroy) operations for configuration and operational data** + * *ManagementTestKeyword-ManagementOperation-ManagedFunction1-...- + ManagedFunctionN-ManagementAPI1-ManagementAPIN-TestType* + * *mgmt-cfg-lisp-apivat-func* => configuration of LISP with VAT API calls, + functional tests. + * *mgmt-cfg-l2bd-apihc-apivat-func* => configuration of L2 Bridge-Domain with + Honeycomb API and VAT API calls, functional tests. + * *mgmt-oper-int-apihcnc-func* => reading status and operational data of + interface with Honeycomb NetConf API calls, functional tests. + * *mgmt-cfg-int-tap-apihcnc-func* => configuration of tap interfaces with + Honeycomb NetConf API calls, functional tests. + * *mgmt-notif-int-subint-apihcnc-func* => notifications of interface and + sub-interface events with Honeycomb NetConf Notifications, functional + tests. + +For complete description of CSIT test naming convention please refer to +[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming). 
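The convention above can be decoded mechanically: a suite filename splits on dashes into the port/NIC token, the processing functions, and the test type. The following Python sketch is purely illustrative (a hypothetical helper, not shipped by CSIT).

```python
import re

# Hypothetical helper for this illustration; not part of the CSIT codebase.
def split_suite_name(filename):
    """Split a CSIT suite filename into its convention-encoded parts."""
    stem = filename[:-len(".robot")] if filename.endswith(".robot") else filename
    parts = stem.split("-")
    # First token encodes ports and NIC, e.g. "10ge2p1x520" means
    # 2 ports of 10GE on 1 Intel x520 NIC; last token is the test type.
    m = re.fullmatch(r"(\d+ge)(\d+)p(\d+)(\w+)", parts[0])
    return {
        "speed": m.group(1),
        "ports": int(m.group(2)),
        "nics": int(m.group(3)),
        "nic_model": m.group(4),
        "functions": parts[1:-1],
        "test_type": parts[-1],
    }

info = split_suite_name("10ge2p1x520-ethip4-ip4base-ndrdisc.robot")
```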
diff --git a/docs/content/overview/csit/test_scenarios.md b/docs/content/overview/csit/test_scenarios.md new file mode 100644 index 0000000000..1f06765eae --- /dev/null +++ b/docs/content/overview/csit/test_scenarios.md @@ -0,0 +1,66 @@ +--- +title: "Test Scenarios" +weight: 2 +--- + +# Test Scenarios + +FD.io CSIT Dashboard includes multiple test scenarios of VPP-centric +applications, topologies and use cases. In addition, it covers +baseline tests of DPDK sample applications. Tests are executed in +physical (performance tests) and virtual environments (functional +tests). + +A brief overview of test scenarios covered in this documentation: + +1. **VPP Performance**: VPP performance tests are executed in physical + FD.io testbeds, focusing on VPP network data plane performance in + NIC-to-NIC switching topologies. The VPP application runs in + bare-metal host user-mode handling NICs. TRex is used as a traffic generator. + +2. **VPP Vhostuser Performance with KVM VMs**: VPP VM service switching + performance tests using the vhostuser virtual interface for + interconnecting multiple NF-in-VM instances. The VPP vswitch + instance runs in bare-metal user-mode handling NICs and connecting + over vhost-user interfaces to VM instances each running VPP with virtio + virtual interfaces. Similarly to VPP Performance, tests are run across a + range of configurations. TRex is used as a traffic generator. + +3. **VPP Memif Performance with LXC and Docker Containers**: VPP + Container service switching performance tests using the memif virtual + interface for interconnecting multiple VPP-in-container instances. + The VPP vswitch instance runs in bare-metal user-mode handling NICs and + connecting over memif (Slave side) interfaces to more instances of + VPP running in LXC or in Docker Containers, both with memif + interfaces (Master side). Similarly to VPP Performance, tests are + run across a range of configurations. TRex is used as a traffic + generator. + +4.
**DPDK Performance**: VPP uses DPDK to drive the NICs and physical + interfaces. DPDK performance tests are used as a baseline to + profile the performance of the DPDK sub-system. Two DPDK applications + are tested: Testpmd and L3fwd. DPDK tests are executed in the same + testing environment as VPP tests. DPDK Testpmd and L3fwd + applications run in host user-mode. TRex is used as a traffic + generator. + +5. **T-Rex Performance**: T-Rex performance tests are executed in physical + FD.io testbeds, focusing on T-Rex data plane performance in NIC-to-NIC + loopback topologies. + +6. **VPP Functional**: VPP functional tests are executed in virtual + FD.io testbeds, focusing on VPP packet processing functionality, + including both network data plane and in-line control plane. Tests + cover vNIC-to-vNIC and vNIC-to-nestedVM-to-vNIC forwarding topologies. + Scapy is used as a traffic generator. + +All CSIT test data included in this report is auto-generated from Robot +Framework json output files produced by Linux Foundation FD.io Jenkins jobs +executed against {{< release_vpp >}} artifacts. + +The FD.io CSIT system is developed using two main coding platforms: Robot +Framework and Python. {{< release_csit >}} source code for the executed test +suites is available in the corresponding CSIT branch in the directory +`./tests/<name_of_the_test_suite>`. A local copy of the CSIT source code +can be obtained by cloning the CSIT git repository - `git clone +https://gerrit.fd.io/r/csit`. diff --git a/docs/content/overview/csit/test_tags.md b/docs/content/overview/csit/test_tags.md new file mode 100644 index 0000000000..8fc3021d6f --- /dev/null +++ b/docs/content/overview/csit/test_tags.md @@ -0,0 +1,863 @@ +--- +title: "Test Tags" +weight: 4 +--- + +# Test Tags + +All CSIT test cases are labelled with Robot Framework tags used to allow for +easy test case type identification, test case grouping and selection for +execution. The following sections list currently used CSIT tags and their +descriptions.
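As a simplified illustration of tag-based selection: Robot Framework's real `--include`/`--exclude` matching also supports patterns and AND/OR combinations, which this sketch (with invented test names and function) omits.

```python
# Simplified tag filter; not Robot Framework's actual matcher.
def select(tests, include, exclude=frozenset()):
    """Return names of tests whose tag sets contain all 'include' tags
    and none of the 'exclude' tags."""
    chosen = []
    for name, tags in tests.items():
        if include <= tags and not (exclude & tags):
            chosen.append(name)
    return chosen

tests = {
    "64b-2t1c-ethip4-ip4base-ndrpdr": {"NDRPDR", "IP4FWD", "BASE", "64B"},
    "64b-2t1c-ethip4-ip4scale2m-mrr": {"MRR", "IP4FWD", "SCALE", "FIB_2M"},
}
picked = select(tests, include={"IP4FWD"}, exclude={"SCALE"})
```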
+ +## Testbed Topology Tags + +**2_NODE_DOUBLE_LINK_TOPO** + + 2 nodes connected in a circular topology with two links interconnecting + the devices. + +**2_NODE_SINGLE_LINK_TOPO** + + 2 nodes connected in a circular topology with at least one link + interconnecting devices. + +**3_NODE_DOUBLE_LINK_TOPO** + + 3 nodes connected in a circular topology with two links interconnecting + the devices. + +**3_NODE_SINGLE_LINK_TOPO** + + 3 nodes connected in a circular topology with at least one link + interconnecting devices. + +## Objective Tags + +**SKIP_PATCH** + + Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch) + and csit-vpp-verify jobs (i.e. CSIT patch). + +**SKIP_VPP_PATCH** + + Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch). + +## Environment Tags + +**HW_ENV** + + DUTs and TGs are running on bare metal. + +**VM_ENV** + + DUTs and TGs are running in a virtual environment. + +**VPP_VM_ENV** + + DUTs with VPP, capable of running a Virtual Machine. + +## NIC Model Tags + +**NIC_Intel-X520-DA2** + + Intel X520-DA2 NIC. + +**NIC_Intel-XL710** + + Intel XL710 NIC. + +**NIC_Intel-X710** + + Intel X710 NIC. + +**NIC_Intel-XXV710** + + Intel XXV710 NIC. + +**NIC_Cisco-VIC-1227** + + VIC-1227 by Cisco. + +**NIC_Cisco-VIC-1385** + + VIC-1385 by Cisco. + +**NIC_Amazon-Nitro-50G** + + Amazon EC2 ENA NIC. + +## Scaling Tags + +**FIB_20K** + + 2x10,000 entries in a single FIB table. + +**FIB_200K** + + 2x100,000 entries in a single FIB table. + +**FIB_1M** + + 2x500,000 entries in a single FIB table. + +**FIB_2M** + + 2x1,000,000 entries in a single FIB table. + +**L2BD_1** + + Test with 1 L2 bridge domain. + +**L2BD_10** + + Test with 10 L2 bridge domains. + +**L2BD_100** + + Test with 100 L2 bridge domains. + +**L2BD_1K** + + Test with 1000 L2 bridge domains. + +**VLAN_1** + + Test with 1 VLAN sub-interface. + +**VLAN_10** + + Test with 10 VLAN sub-interfaces. + +**VLAN_100** + + Test with 100 VLAN sub-interfaces.
+ +**VLAN_1K** + + Test with 1000 VLAN sub-interfaces. + +**VXLAN_1** + + Test with 1 VXLAN tunnel. + +**VXLAN_10** + + Test with 10 VXLAN tunnels. + +**VXLAN_100** + + Test with 100 VXLAN tunnels. + +**VXLAN_1K** + + Test with 1000 VXLAN tunnels. + +**TNL_{t}** + + IPSec in tunnel mode - {t} tunnels. + +**SRC_USER_{u}** + + Traffic flow with {u} unique IPs (users) in one direction. + {u}=(1,10,100,1000,2000,4000). + +**100_FLOWS** + + Traffic stream with 100 unique flows (10 IPs/users x 10 UDP ports) in one + direction. + +**10k_FLOWS** + + Traffic stream with 10 000 unique flows (10 IPs/users x 1000 UDP ports) in + one direction. + +**100k_FLOWS** + + Traffic stream with 100 000 unique flows (100 IPs/users x 1000 UDP ports) in + one direction. + +**HOSTS_{h}** + + Stateless or stateful traffic stream with {h} client source IP4 addresses, + usually with 63 flows differing in source port number. Could be UDP or TCP. + If NAT is used, the clients are inside. The outside IP range can differ. + {h}=(1024,4096,16384,65536,262144). + +**GENEVE4_{t}TUN** + + Test with {t} GENEVE IPv4 tunnels. {t}=(1,4,16,64,256,1024). + +## Test Category Tags + +**DEVICETEST** + + All vpp_device functional test cases. + +**PERFTEST** + + All performance test cases. + +## VPP Device Type Tags + +**SCAPY** + + All test cases that use Scapy for packet generation and validation. + +## Performance Type Tags + +**NDRPDR** + + Single test finding both No Drop Rate and Partial Drop Rate simultaneously. + The search is done by an optimized algorithm which performs + multiple trial runs at different durations and transmit rates. + The results come from the final trials, which have a duration of 30 seconds. + +**MRR** + + Performance tests where TG sends the traffic at maximum rate (line rate) + and reports total sent/received packets over trial duration. + The result is an average of 10 trials of 1 second duration. + +**SOAK** + + Performance tests using PLRsearch to find the critical load.
+ +**RECONF** + + Performance tests aimed at measuring lost packets (time) when performing + reconfiguration while a full throughput offered load is applied. + +## Ethernet Frame Size Tags + +These are describing the traffic offered by the Traffic Generator, +"primary" traffic in case of asymmetric load. +For traffic between DUTs, or for "secondary" traffic, see the ${overhead} value. + +**{b}B** + + {b} Byte frames used for test. + +**IMIX** + + IMIX frame sequence (28x 64B, 16x 570B, 4x 1518B) used for test. + +## Test Type Tags + +**BASE** + + Baseline test cases, no encapsulation, no feature(s) configured in tests. + No scaling whatsoever, beyond the minimum needed for RSS. + +**IP4BASE** + + IPv4 baseline test cases, no encapsulation, no feature(s) configured in + tests. Minimal number of routes. Other quantities may be scaled. + +**IP6BASE** + + IPv6 baseline test cases, no encapsulation, no feature(s) configured in + tests. + +**L2XCBASE** + + L2XC baseline test cases, no encapsulation, no feature(s) configured in + tests. + +**L2BDBASE** + + L2BD baseline test cases, no encapsulation, no feature(s) configured in + tests. + +**L2PATCH** + + L2PATCH baseline test cases, no encapsulation, no feature(s) configured in + tests. + +**SCALE** + + Scale test cases. Other tags specify which quantities are scaled. + Also applies if scaling is set on TG only (e.g. DUT works as IP4BASE). + +**ENCAP** + + Test cases where encapsulation is used. Use also encapsulation tag(s). + +**FEATURE** + + At least one feature is configured in test cases. Use also feature tag(s). + +**UDP** + + Tests which use any kind of UDP traffic (STL or ASTF profile). + +**TCP** + + Tests which use any kind of TCP traffic (STL or ASTF profile). + +**TREX** + + Tests which test TRex traffic without any software DUTs in the traffic path. + +**UDP_UDIR** + + Tests which use unidirectional UDP traffic (STL profile only). + +**UDP_BIDIR** + + Tests which use bidirectional UDP traffic (STL profile only).
+ +**UDP_CPS** + + Tests which measure connections per second on minimal UDP pseudoconnections. + This implies ASTF traffic profile is used. + This tag selects specific output processing in PAL. + +**TCP_CPS** + + Tests which measure connections per second on empty TCP connections. + This implies ASTF traffic profile is used. + This tag selects specific output processing in PAL. + +**TCP_RPS** + + Tests which measure requests per second on empty TCP connections. + This implies ASTF traffic profile is used. + This tag selects specific output processing in PAL. + +**UDP_PPS** + + Tests which measure packets per second on lightweight UDP transactions. + This implies ASTF traffic profile is used. + This tag selects specific output processing in PAL. + +**TCP_PPS** + + Tests which measure packets per second on lightweight TCP transactions. + This implies ASTF traffic profile is used. + This tag selects specific output processing in PAL. + +**HTTP** + + Tests which use traffic formed of valid HTTP requests (and responses). + +**LDP_NGINX** + + LDP NGINX is un-modified NGINX with VPP via LD_PRELOAD. + +**NF_DENSITY** + + Performance tests that measure throughput of multiple VNF and CNF + service topologies at different service densities. + +## NF Service Density Tags + +**CHAIN** + + NF service density tests with VNF or CNF service chain topology(ies). + +**PIPE** + + NF service density tests with CNF service pipeline topology(ies). + +**NF_L3FWDIP4** + + NF service density tests with DPDK l3fwd IPv4 routing as NF workload. + +**NF_VPPIP4** + + NF service density tests with VPP IPv4 routing as NF workload. + +**{r}R{c}C** + + Service density matrix locator {r}R{c}C, {r}Row denoting number of + service instances, {c}Column denoting number of NFs per service + instance. {r}=(1,2,4,6,8,10), {c}=(1,2,4,6,8,10). + +**{n}VM{t}T** + + Service density {n}VM{t}T, {n}Number of NF Qemu VMs, {t}Number of threads + per NF. 
+ +**{n}DCR{t}T** + + Service density {n}DCR{t}T, {n}Number of NF Docker containers, {t}Number of + threads per NF. + +**{n}_ADDED_CHAINS** + + {n}Number of chains (or pipelines) added (and/or removed) + during RECONF test. + +## Forwarding Mode Tags + +**L2BDMACSTAT** + + VPP L2 bridge-domain, L2 MAC static. + +**L2BDMACLRN** + + VPP L2 bridge-domain, L2 MAC learning. + +**L2XCFWD** + + VPP L2 point-to-point cross-connect. + +**IP4FWD** + + VPP IPv4 routed forwarding. + +**IP6FWD** + + VPP IPv6 routed forwarding. + +**LOADBALANCER_MAGLEV** + + VPP Load balancer maglev mode. + +**LOADBALANCER_L3DSR** + + VPP Load balancer l3dsr mode. + +**LOADBALANCER_NAT4** + + VPP Load balancer nat4 mode. + +**N2N** + + Mode where NICs from the same physical server are directly + connected with a cable. + +## Underlay Tags + +**IP4UNRLAY** + + IPv4 underlay. + +**IP6UNRLAY** + + IPv6 underlay. + +**MPLSUNRLAY** + + MPLS underlay. + +## Overlay Tags + +**L2OVRLAY** + + L2 overlay. + +**IP4OVRLAY** + + IPv4 overlay (IPv4 payload). + +**IP6OVRLAY** + + IPv6 overlay (IPv6 payload). + +## Tagging Tags + +**DOT1Q** + + All test cases with dot1q. + +**DOT1AD** + + All test cases with dot1ad. + +## Encapsulation Tags + +**ETH** + + All test cases with base Ethernet (no encapsulation). + +**LISP** + + All test cases with LISP. + +**LISPGPE** + + All test cases with LISP-GPE. + +**LISP_IP4o4** + + All test cases with LISP_IP4o4. + +**LISPGPE_IP4o4** + + All test cases with LISPGPE_IP4o4. + +**LISPGPE_IP6o4** + + All test cases with LISPGPE_IP6o4. + +**LISPGPE_IP4o6** + + All test cases with LISPGPE_IP4o6. + +**LISPGPE_IP6o6** + + All test cases with LISPGPE_IP6o6. + +**VXLAN** + + All test cases with VXLAN. + +**VXLANGPE** + + All test cases with VXLAN-GPE. + +**GRE** + + All test cases with GRE. + +**GTPU** + + All test cases with GTPU. + +**GTPU_HWACCEL** + + All test cases with GTPU_HWACCEL. + +**IPSEC** + + All test cases with IPSEC. + +**WIREGUARD** + + All test cases with WIREGUARD.
+
+**SRv6**
+
+ All test cases with Segment Routing over the IPv6 data plane.
+
+**SRv6_1SID**
+
+ All SRv6 test cases with a single SID.
+
+**SRv6_2SID_DECAP**
+
+ All SRv6 test cases with two SIDs and with decapsulation.
+
+**SRv6_2SID_NODECAP**
+
+ All SRv6 test cases with two SIDs and without decapsulation.
+
+**GENEVE**
+
+ All test cases with GENEVE.
+
+**GENEVE_L3MODE**
+
+ All test cases with GENEVE tunnel in L3 mode.
+
+**FLOW**
+
+ All test cases with FLOW.
+
+**FLOW_DIR**
+
+ All test cases with FLOW_DIR.
+
+**FLOW_RSS**
+
+ All test cases with FLOW_RSS.
+
+**NTUPLE**
+
+ All test cases with NTUPLE.
+
+**L2TPV3**
+
+ All test cases with L2TPV3.
+
+## Interface Tags
+
+**PHY**
+
+ All test cases which use physical interface(s).
+
+**GSO**
+
+ All test cases which use Generic Segmentation Offload.
+
+**VHOST**
+
+ All test cases which use VHOST.
+
+**VHOST_1024**
+
+ All test cases which use the VHOST DPDK driver with the QEMU queue size
+ set to 1024.
+
+**VIRTIO**
+
+ All test cases which use the native VPP VIRTIO driver.
+
+**VIRTIO_1024**
+
+ All test cases which use the native VPP VIRTIO driver with the QEMU queue
+ size set to 1024.
+
+**CFS_OPT**
+
+ All test cases which use a VM with an optimised scheduler policy.
+
+**TUNTAP**
+
+ All test cases which use TUN and TAP.
+
+**AFPKT**
+
+ All test cases which use AFPKT.
+
+**NETMAP**
+
+ All test cases which use Netmap.
+
+**MEMIF**
+
+ All test cases which use Memif.
+
+**SINGLE_MEMIF**
+
+ All test cases which use only a single Memif connection per DUT. One DUT
+ instance runs in a container with one physical interface exposed to the
+ container.
+
+**LBOND**
+
+ All test cases which use link bonding (BondEthernet interface).
+
+**LBOND_DPDK**
+
+ All test cases which use DPDK link bonding.
+
+**LBOND_VPP**
+
+ All test cases which use VPP link bonding.
+
+**LBOND_MODE_XOR**
+
+ All test cases which use link bonding with mode XOR.
+
+**LBOND_MODE_LACP**
+
+ All test cases which use link bonding with mode LACP.
+
+**LBOND_LB_L34**
+
+ All test cases which use link bonding with load-balance mode L34.
+
+**LBOND_{n}L**
+
+ All test cases which use {n} link(s) for link bonding.
+
+**DRV_{d}**
+
+ All test cases where the DUT NIC driver is set to {d}. Default is
+ VFIO_PCI. {d}=(AVF, RDMA_CORE, VFIO_PCI, AF_XDP).
+
+**TG_DRV_{d}**
+
+ All test cases where the TG NIC driver is set to {d}. Default is IGB_UIO.
+ {d}=(RDMA_CORE, IGB_UIO).
+
+**RXQ_SIZE_{n}**
+
+ All test cases where the RXQ size (RX descriptors) is set to {n}. Default
+ is 0, which means the VPP (API) default.
+
+**TXQ_SIZE_{n}**
+
+ All test cases where the TXQ size (TX descriptors) is set to {n}. Default
+ is 0, which means the VPP (API) default.
+
+## Feature Tags
+
+**IACLDST**
+
+ iACL destination.
+
+**ADLALWLIST**
+
+ ADL allowlist.
+
+**NAT44**
+
+ NAT44 configured and tested.
+
+**NAT64**
+
+ NAT64 configured and tested.
+
+**ACL**
+
+ ACL plugin configured and tested.
+
+**IACL**
+
+ ACL plugin configured and tested on the input path.
+
+**OACL**
+
+ ACL plugin configured and tested on the output path.
+
+**ACL_STATELESS**
+
+ ACL plugin configured and tested in stateless mode (permit action).
+
+**ACL_STATEFUL**
+
+ ACL plugin configured and tested in stateful mode (permit+reflect action).
+
+**ACL1**
+
+ ACL plugin configured and tested with 1 not-hitting ACE.
+
+**ACL10**
+
+ ACL plugin configured and tested with 10 not-hitting ACEs.
+
+**ACL50**
+
+ ACL plugin configured and tested with 50 not-hitting ACEs.
+
+**SRv6_PROXY**
+
+ SRv6 endpoint to SR-unaware appliance via proxy.
+
+**SRv6_PROXY_STAT**
+
+ SRv6 endpoint to SR-unaware appliance via static proxy.
+
+**SRv6_PROXY_DYN**
+
+ SRv6 endpoint to SR-unaware appliance via dynamic proxy.
+
+**SRv6_PROXY_MASQ**
+
+ SRv6 endpoint to SR-unaware appliance via masquerading proxy.
+
+## Encryption Tags
+
+**IPSECSW**
+
+ Crypto in software.
+
+**IPSECHW**
+
+ Crypto in hardware.
+
+**IPSECTRAN**
+
+ IPSec in transport mode.
+
+**IPSECTUN**
+
+ IPSec in tunnel mode.
+
+**IPSECINT**
+
+ IPSec in interface mode.
+
+**AES**
+
+ IPSec using AES algorithms.
+
+**AES_128_CBC**
+
+ IPSec using AES 128 CBC algorithms.
+
+**AES_128_GCM**
+
+ IPSec using AES 128 GCM algorithms.
+
+**AES_256_GCM**
+
+ IPSec using AES 256 GCM algorithms.
+
+**HMAC**
+
+ IPSec using HMAC integrity algorithms.
+
+**HMAC_SHA_256**
+
+ IPSec using HMAC SHA 256 integrity algorithms.
+
+**HMAC_SHA_512**
+
+ IPSec using HMAC SHA 512 integrity algorithms.
+
+**SCHEDULER**
+
+ IPSec using the crypto SW scheduler engine.
+
+**FASTPATH**
+
+ IPSec policy mode with SPD fast path enabled.
+
+## Client-Workload Tags
+
+**VM**
+
+ All test cases which use at least one virtual machine.
+
+**LXC**
+
+ All test cases which use a Linux container and LXC utils.
+
+**DRC**
+
+ All test cases which use at least one Docker container.
+
+**DOCKER**
+
+ All test cases which use Docker as the container manager.
+
+**APP**
+
+ All test cases with specific APP use.
+
+## Container Orchestration Tags
+
+**{n}VSWITCH**
+
+ {n} VPP instance(s) running in {n} Docker container(s) acting as a
+ VSWITCH. {n}=(1).
+
+**{n}VNF**
+
+ {n} VPP instance(s) running in {n} Docker container(s) acting as a VNF
+ workload. {n}=(1).
+
+## Multi-Threading Tags
+
+**STHREAD**
+
+ *Dynamic tag*.
+ All test cases using a single poll mode thread.
+
+**MTHREAD**
+
+ *Dynamic tag*.
+ All test cases using more than one poll mode driver thread.
+
+**{n}NUMA**
+
+ All test cases with packet processing on {n} socket(s). {n}=(1,2).
+
+**{c}C**
+
+ {c} worker thread(s) pinned to {c} dedicated physical core(s); or, if
+ HyperThreading is enabled, {c}*2 worker threads each pinned to a separate
+ logical core within 1 dedicated physical core. Main thread pinned to
+ core 1. {c}=(1,2,4).
+
+**{t}T{c}C**
+
+ *Dynamic tag*.
+ {t} worker threads pinned to {c} dedicated physical cores. Main thread
+ pinned to core 1. By default CSIT configures the same number of receive
+ queues per interface as worker threads. {t}=(1,2,4,8), {c}=(1,2,4).
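The dynamic multi-threading tag convention above can be sketched in a few
lines of Python. This is an illustrative sketch only, not CSIT's actual
tagging code; the function name `derive_mt_tags` is hypothetical.

```python
# Illustrative sketch (not CSIT's actual implementation) of how the
# dynamic {t}T{c}C and STHREAD/MTHREAD tags described above relate to a
# core allocation. The function name derive_mt_tags is hypothetical.

def derive_mt_tags(physical_cores: int, smt_enabled: bool) -> list[str]:
    """Return the dynamic multi-threading tags for a core allocation.

    With HyperThreading (SMT) enabled, each dedicated physical core runs
    two worker threads, so {t} = 2 * {c}; otherwise {t} = {c}.
    """
    threads = physical_cores * 2 if smt_enabled else physical_cores
    tags = [f"{threads}T{physical_cores}C"]
    # A single poll mode thread yields STHREAD, more than one MTHREAD.
    tags.append("MTHREAD" if threads > 1 else "STHREAD")
    return tags


# For example:
# derive_mt_tags(1, False) -> ["1T1C", "STHREAD"]
# derive_mt_tags(2, True)  -> ["4T2C", "MTHREAD"]
```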