Diffstat (limited to 'docs/content/overview')
-rw-r--r--  docs/content/overview/_index.md                      6
-rw-r--r--  docs/content/overview/c_dash/_index.md               6
-rw-r--r--  docs/content/overview/c_dash/design.md              16
-rw-r--r--  docs/content/overview/c_dash/structure.md          111
-rw-r--r--  docs/content/overview/csit/_index.md                45
-rw-r--r--  docs/content/overview/csit/branching_strategy.md   109
-rw-r--r--  docs/content/overview/csit/design.md               148
-rw-r--r--  docs/content/overview/csit/suite_generation.md     123
-rw-r--r--  docs/content/overview/csit/test_naming.md          112
-rw-r--r--  docs/content/overview/csit/test_scenarios.md        66
-rw-r--r--  docs/content/overview/csit/test_tags.md            876
11 files changed, 1618 insertions, 0 deletions
diff --git a/docs/content/overview/_index.md b/docs/content/overview/_index.md
new file mode 100644
index 0000000000..6c4d4210fd
--- /dev/null
+++ b/docs/content/overview/_index.md
@@ -0,0 +1,6 @@
+---
+bookCollapseSection: false
+bookFlatSection: true
+title: "Overview"
+weight: 1
+---
\ No newline at end of file
diff --git a/docs/content/overview/c_dash/_index.md b/docs/content/overview/c_dash/_index.md
new file mode 100644
index 0000000000..fdf583f377
--- /dev/null
+++ b/docs/content/overview/c_dash/_index.md
@@ -0,0 +1,6 @@
+---
+bookCollapseSection: true
+bookFlatSection: false
+title: "CSIT-Dash"
+weight: 1
+---
\ No newline at end of file
diff --git a/docs/content/overview/c_dash/design.md b/docs/content/overview/c_dash/design.md
new file mode 100644
index 0000000000..ca985f993e
--- /dev/null
+++ b/docs/content/overview/c_dash/design.md
@@ -0,0 +1,16 @@
+---
+title: "Design"
+weight: 1
+---
+
+# Design
+
+From a test to the graph or table.
+
+## Tests
+
+## ETL Pipeline
+
+{{< figure src="/cdocs/csit_etl_for_uti_data_flow_simplified.svg" title="CSIT ETL pipeline for UTI data" >}}
+
+## Presentation
diff --git a/docs/content/overview/c_dash/structure.md b/docs/content/overview/c_dash/structure.md
new file mode 100644
index 0000000000..982655e984
--- /dev/null
+++ b/docs/content/overview/c_dash/structure.md
@@ -0,0 +1,111 @@
+---
+title: "Structure"
+weight: 2
+---
+
+# Structure
+
+CSIT-Dash provides customizable views on performance data, split into two
+groups. The first one is performance trending, which displays data collected
+on a daily (MRR) or weekly (NDRPDR) basis.
+The other one presents data coming from release testing. In addition, we also
+publish information and statistics about our test jobs, failures and
+anomalies, and the CSIT documentation.
+The CSIT-Dash screen is divided into two parts. On the left side, there is the
+control panel which makes it possible to select the required information. The
+right side then displays the user-selected data in graphs or tables.
+
+The structure of CSIT-Dash consists of:
+
+- Performance Trending
+- Per Release Performance
+- Per Release Performance Comparisons
+- Per Release Coverage Data
+- Test Job Statistics
+- Failures and Anomalies
+- Documentation
+
+## Performance Trending
+
+Performance trending shows measured per run averages of MRR values, NDR or PDR
+values, user-selected telemetry metrics, group average values, and detected
+anomalies.
+
+In addition, the graphs show dynamic labels while hovering over graph data
+points. By clicking on data samples, the user gets detailed information and,
+for latency graphs, also a high dynamic range histogram of measured latency.
+Latency by percentile distribution plots are used to show packet latency
+percentiles at different packet rate load levels:
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
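+
+Since the three loaded levels are fixed fractions of the discovered PDR, the
+mapping is easy to reproduce. A minimal sketch in Python (names are
+illustrative, not CSIT's actual code):
+
+```python
+# Offered load for each latency measurement, as a fraction of the PDR rate.
+LOAD_LEVELS = {"low": 0.10, "mid": 0.50, "high": 0.90}
+
+def latency_loads(pdr_pps: float) -> dict:
+    """Return the offered loads [pps] for latency percentile measurements."""
+    return {name: pdr_pps * frac for name, frac in LOAD_LEVELS.items()}
+
+# Example: PDR equal to 64B line rate on a 10GE link (~14.88 Mpps).
+print(latency_loads(14_880_952))
+```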
+
+## Per Release Performance
+
+The per release performance section presents graphs based on the results data
+obtained from the release test jobs. In order to verify benchmark results
+repeatability, CSIT performance tests are executed multiple times (target: 10
+times) on each physical testbed type. Box-and-Whisker plots are used to display
+variations in measured throughput and latency (PDR tests only) values.
+
+In addition, the graphs show dynamic labels while hovering over graph data
+points. By clicking on data samples or the box, the user gets detailed
+information and, for latency graphs, also a high dynamic range histogram of
+measured latency.
+Latency by percentile distribution plots are used to show packet latency
+percentiles at different packet rate load levels:
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
+
+## Per Release Performance Comparisons
+
+Relative comparison of packet throughput (NDR, PDR and MRR) and latency (PDR)
+between user-selected releases, test beds, NICs, ... is calculated from results
+of tests running on physical test beds, in 1-core, 2-core and 4-core
+configurations.
+
+Listed mean and standard deviation values are computed based on a series of the
+same tests executed against respective VPP releases to verify test results
+repeatability, with percentage change calculated for mean values. Note that the
+standard deviation is quite high for a small number of packet throughput tests,
+which indicates poor test results repeatability and makes the relative change of
+mean throughput value not fully representative for these tests. The root causes
+behind poor results repeatability vary between the test cases.
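+
+As a worked example, the listed statistics reduce to a few lines of Python
+(a sketch only; the function and sample values are illustrative):
+
+```python
+import statistics
+
+def compare(old_runs, new_runs):
+    """Mean, stdev, and relative change of mean between two release series."""
+    old_mean = statistics.mean(old_runs)
+    new_mean = statistics.mean(new_runs)
+    return {
+        "old_mean": old_mean, "old_stdev": statistics.stdev(old_runs),
+        "new_mean": new_mean, "new_stdev": statistics.stdev(new_runs),
+        # Percentage change of the mean, relative to the older release.
+        "change_pct": 100.0 * (new_mean - old_mean) / old_mean,
+    }
+
+# Ten throughput samples [Mpps] per release, per the target of 10 runs.
+print(compare([11.2, 11.3, 11.1, 11.2] * 2 + [11.2, 11.3],
+              [11.9, 12.0, 11.8, 11.9] * 2 + [11.9, 12.0]))
+```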
+
+## Per Release Coverage Data
+
+Detailed result tables generated from CSIT test job executions. The coverage
+tests also include tests which are not run in iterative performance builds.
+The tables present NDR and PDR packet throughput (packets per second and bits
+per second) and latency percentiles (microseconds) at different packet rate load
+levels:
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
+
+## Test Job Statistics
+
+The elementary statistical data (number of passed and failed tests and the
+duration) of all daily and weekly trending performance jobs.
+In addition, the graphs show dynamic labels while hovering over graph data
+points with detailed information. By clicking on the graph, the user gets the
+job summary with the list of failed tests.
+
+## Failures and Anomalies
+
+The presented tables list:
+- last build summary,
+- failed tests,
+- progressions and
+- regressions
+
+for all daily and weekly trending performance jobs.
+
+## Documentation
+
+This documentation describes the methodology, infrastructure and release notes
+for each CSIT release.
diff --git a/docs/content/overview/csit/_index.md b/docs/content/overview/csit/_index.md
new file mode 100644
index 0000000000..167c872c0b
--- /dev/null
+++ b/docs/content/overview/csit/_index.md
@@ -0,0 +1,45 @@
+---
+bookCollapseSection: true
+bookFlatSection: false
+title: "CSIT"
+weight: 2
+---
+
+# Continuous System Integration and Testing
+
+## CSIT Description
+
+1. Development of software code for fully automated VPP code testing:
+   functionality, performance, regression and new functions.
+2. Execution of CSIT test suites on VPP code running on LF FD.io virtual and
+ physical compute environments.
+3. Integration with FD.io continuous integration systems (Gerrit, Jenkins and
+ such).
+4. Identified existing FD.io project dependencies and interactions:
+ - vpp - Vector Packet Processing.
+ - ci-management - Management repo for Jenkins Job Builder, script and
+ management related to the Jenkins CI configuration.
+
+## Project Scope
+
+1. Automated regression testing of VPP code changes
+ - Functionality of VPP data plane, network control plane, management plane
+ against functional specifications.
+ - Performance of VPP data plane including non-drop-rate packet throughput
+ and delay, against established reference benchmarks.
+ - Performance of network control plane against established reference
+ benchmarks.
+ - Performance of management plane against established reference benchmarks.
+2. Test case definitions driven by supported and planned VPP functionality,
+ interfaces and performance:
+ - Uni-dimensional tests: Data plane, (Network) Control plane, Management
+ plane.
+ - Multi-dimensional tests: Use case driven.
+3. Integration with FD.io Continuous Integration system including FD.io Gerrit
+ and Jenkins
+   - Automated test execution triggered by VPP-VERIFY jobs and other VPP and
+     CSIT project jobs.
+4. Integration with LF VPP test execution environment
+ - Functional tests execution on LF hosted VM environment.
+ - Performance and functional tests execution on LF hosted physical compute
+ environment.
diff --git a/docs/content/overview/csit/branching_strategy.md b/docs/content/overview/csit/branching_strategy.md
new file mode 100644
index 0000000000..16d8e0f471
--- /dev/null
+++ b/docs/content/overview/csit/branching_strategy.md
@@ -0,0 +1,109 @@
+---
+title: "Branching Strategy"
+weight: 6
+---
+
+# Branching Strategy
+
+## Definitions
+
+**CSIT development branch:** A CSIT branch used for test development which has a
+1:1 association with a VPP branch of the same name. CSIT development branches
+are never used for operational testing of VPP patches or images.
+
+**CSIT operational branch:** A CSIT branch pulled from a CSIT development or
+release branch which is used for operational testing of the VPP branch
+associated with its parent branch. CSIT operational branches are named
+`oper-<YYMMDD>` for master and `oper-<release>-<YYMMDD>` for release branches.
+CSIT operational branches are the only branches which should be used to run
+verify jobs against VPP patches or images.
+
+**CSIT release branch:** A CSIT branch which is pulled from a development branch
+and is associated with a VPP release branch. CSIT release branches are never
+merged back into their parent branch and are never used for operational testing
+of VPP patches or images.
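+
+The naming rules above are mechanical; a minimal Python sketch (illustrative
+only, not part of CSIT tooling):
+
+```python
+from datetime import date
+
+def oper_branch_name(day: date, release: str = "") -> str:
+    """Operational branch: oper-<YYMMDD>, or oper-<release>-<YYMMDD>."""
+    stamp = day.strftime("%y%m%d")
+    return f"oper-{release}-{stamp}" if release else f"oper-{stamp}"
+
+print(oper_branch_name(date(2016, 7, 10)))             # oper-160710
+print(oper_branch_name(date(2016, 6, 23), "rls1606"))  # oper-rls1606-160623
+```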
+
+## VPP Selection of CSIT Operational Branches
+
+Each VPP development and release branch will have a script which specifies
+which CSIT operational branch is used when executing the per-patch verify
+jobs. This is
+maintained in the VPP branch in the file
+`.../vpp/build-root/scripts/csit-test-branch`.
+
+## Branches
+
+### Main development branch: 'master'
+
+The CSIT development branch 'master' will be the main development branch for
+new VPP feature tests that have not been included in a release. Weekly CSIT
+operational branches will be pulled from 'master'. After validation of all CSIT
+verify jobs, the VPP script 'csit-test-branch' will be updated with the latest
+CSIT operational branch name. Older CSIT operational branches will remain
+available for manually triggered vpp-csit-verify-* jobs.
+
+### Release branch: 'rls1606', 'rls1609', ...
+
+CSIT release branches shall be pulled from 'master' with the convention
+`rls<release>` (e.g. rls1606, rls1609). New tests that are developed for
+existing VPP features will be committed into the 'master' branch, then
+cherry-picked or double-committed into the latest CSIT release branch.
+Periodically CSIT operational branches will be pulled from the CSIT release
+branch when necessary and the VPP release branch updated to use the new CSIT
+operational branch.
+
+**VPP branch diagram:**
+
+ -- master --------------------------------------------------------------->
+ \ \
+ \--- stable/1606 ---[end] \--- stable/1609---[end]
+
+
+**CSIT branch diagram:**
+
+ /--- oper-rls1606-160623
+ / /--- oper-rls1606-$(DATE)
+ / / . . .
+ / / /--- oper-rls1609-$(DATE)
+ / / / . . .
+ /--- rls1606 ---[end] /--- rls1609 ---[end]
+ / / / /
+ / (cherry-picking) / (cherry-picking)
+ / / / /
+ -- master --------------------------------------------------------------->
+ \ \ . . .
+ \ \--- oper-$(DATE)
+ \--- oper-160710
+
+## Creating a CSIT Operational Branch
+
+### Run verify weekly job
+
+`csit-vpp-device-master-<OS>-<arch>-<testbed>-weekly` is run on the CSIT
+development or release branch (e.g. 'master' or 'stable/1606') using the latest
+VPP package set on nexus.fd.io for the associated VPP branch. Any anomalies will
+have the root cause identified and be resolved in the CSIT development branch
+prior to pulling the CSIT operational branch.
+
+### Pull CSIT operational branch from parent
+
+The CSIT operational branch is pulled from the parent CSIT development or
+release branch.
+
+### Run verify semiweekly job
+
+`csit-vpp-device-master-<OS>-<arch>-<testbed>-semiweekly` is run on the CSIT
+operational branch with the latest image of the associated VPP development or
+release branch. This job validates the next reference VPP build used when
+evaluating the results of all of the csit-vpp-verify* jobs.
+
+### Update VPP branch to use the new CSIT operational branch
+
+Push a patch updating the VPP branch to use the new CSIT operational branch. The
+VPP verify jobs will then be run and any anomalies will have the root cause
+identified and fixed in the CSIT operational branch prior to 'csit-test-branch'
+being merged.
+
+### Periodically lock/deprecate old CSIT Operational Branches
+
+Periodically old CSIT operational branches will be locked and/or deprecated to
+prevent changes from being made to the operational branch.
diff --git a/docs/content/overview/csit/design.md b/docs/content/overview/csit/design.md
new file mode 100644
index 0000000000..f43d91a28e
--- /dev/null
+++ b/docs/content/overview/csit/design.md
@@ -0,0 +1,148 @@
+---
+title: "Design"
+weight: 1
+---
+
+# Design
+
+FD.io CSIT system design needs to meet continuously expanding requirements of
+FD.io projects including VPP, related sub-systems (e.g. plugin applications,
+DPDK drivers) and FD.io applications (e.g. DPDK applications), as well as
+a growing number of compute platforms running those applications. With CSIT
+project scope and charter including both FD.io continuous testing AND
+performance trending/comparisons, those evolving requirements further amplify
+the need for CSIT framework modularity, flexibility and usability.
+
+## Design Hierarchy
+
+CSIT follows a hierarchical system design with SUTs and DUTs at the bottom
+level of the hierarchy, the presentation level at the top, and a number of
+functional layers in-between. The current CSIT system design, including the
+CSIT framework, is
+depicted in the figure below.
+
+{{< figure src="/cdocs/csit_design_picture.svg" title="CSIT Design" >}}
+
+A brief bottom-up description is provided here:
+
+1. SUTs, DUTs, TGs
+ - SUTs - Systems Under Test;
+ - DUTs - Devices Under Test;
+ - TGs - Traffic Generators;
+2. Level-1 libraries - Robot and Python
+ - Lowest level CSIT libraries abstracting underlying test environment, SUT,
+ DUT and TG specifics;
+ - Used commonly across multiple L2 KWs;
+ - Performance and functional tests:
+ - L1 KWs (KeyWords) are implemented as RF libraries and Python
+ libraries;
+ - Performance TG L1 KWs:
+ - All L1 KWs are implemented as Python libraries:
+ - Support for TRex only today;
+ - CSIT IXIA drivers in progress;
+ - Performance data plane traffic profiles:
+ - TG-specific stream profiles provide full control of:
+ - Packet definition - layers, MACs, IPs, ports, combinations thereof
+ e.g. IPs and UDP ports;
+      - Stream definitions - different streams can run together, delayed,
+        one after another;
+      - Stream profiles are independent of the CSIT framework and can be used
+        in any TRex setup, and can be shared to repeat tests with exactly the
+        same setup (see the sketch following these lists);
+      - Easily extensible - one can create a new stream profile that meets
+        the test's requirements;
+ - Same stream profile can be used for different tests with the same
+ traffic needs;
+ - Functional data plane traffic scripts:
+ - Scapy specific traffic scripts;
+3. Level-2 libraries - Robot resource files:
+ - Higher level CSIT libraries abstracting required functions for executing
+ tests;
+ - L2 KWs are classified into the following functional categories:
+ - Configuration, test, verification, state report;
+ - Suite setup, suite teardown;
+ - Test setup, test teardown;
+4. Tests - Robot:
+ - Test suites with test cases;
+ - Performance tests using physical testbed environment:
+ - VPP;
+ - DPDK-Testpmd;
+ - DPDK-L3Fwd;
+    - TRex;
+  - Tools:
+    - CSIT-Dash;
+ - Testbed environment setup ansible playbooks;
+ - Operational debugging scripts;
+
+5. Test Lifecycle Abstraction
+
+A well coded test must follow a disciplined abstraction of the test
+lifecycle that includes setup, configuration, test and verification. In
+addition, to improve test execution efficiency, the common aspects of
+test setup and configuration shared across multiple test cases should be
+done only once. Translating these high-level guidelines into the Robot
+Framework, one arrives at the definition of a well coded RF test for FD.io
+CSIT. Anatomy of Good Tests for CSIT:
+
+1. Suite Setup - Suite startup Configuration common to all Test Cases in suite:
+ uses Configuration KWs, Verification KWs, StateReport KWs;
+2. Test Setup - Test startup Configuration common to multiple Test Cases: uses
+ Configuration KWs, StateReport KWs;
+3. Test Case - uses L2 KWs with RF Gherkin style:
+ - prefixed with {Given} - Verification of Test setup, reading state: uses
+ Configuration KWs, Verification KWs, StateReport KWs;
+ - prefixed with {When} - Test execution: Configuration KWs, Test KWs;
+ - prefixed with {Then} - Verification of Test execution, reading state: uses
+ Verification KWs, StateReport KWs;
+4. Test Teardown - post Test teardown with Configuration cleanup and
+ Verification common to multiple Test Cases - uses: Configuration KWs,
+ Verification KWs, StateReport KWs;
+5. Suite Teardown - Suite post-test Configuration cleanup: uses Configuration
+ KWs, Verification KWs, StateReport KWs;
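+
+As referenced in the stream profile notes above, here is a minimal sketch of
+a data plane stream profile. Packet definition uses Scapy; the StreamProfile
+wrapper is illustrative only and is not CSIT's actual class:
+
+```python
+from dataclasses import dataclass, field
+from scapy.layers.inet import IP, UDP
+from scapy.layers.l2 import Ether
+
+@dataclass
+class StreamProfile:
+    """Illustrative stand-in for a TG stream profile: packets plus streams."""
+    name: str
+    streams: list = field(default_factory=list)
+
+    def add_stream(self, packet, pps, delay_s=0.0):
+        """A stream is a packet template sent at a rate, optionally delayed."""
+        self.streams.append({"packet": packet, "pps": pps, "delay_s": delay_s})
+
+# Packet definition - layers, MACs, IPs and UDP ports, fully under control.
+base = Ether() / IP(src="10.0.0.1", dst="20.0.0.1") / UDP(sport=1024, dport=1024)
+
+profile = StreamProfile(name="ethip4-udp-64B")
+profile.add_stream(base, pps=1000)               # runs from the start
+profile.add_stream(base, pps=1000, delay_s=5.0)  # second stream, delayed
+```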
+
+## RF Keywords Functional Classification
+
+CSIT RF KWs are classified into the functional categories matching the test
+lifecycle events described earlier. All CSIT RF L2 and L1 KWs have been grouped
+into the following functional categories:
+
+1. Configuration;
+2. Test;
+3. Verification;
+4. StateReport;
+5. SuiteSetup;
+6. TestSetup;
+7. SuiteTeardown;
+8. TestTeardown;
+
+## RF Keywords Naming Guidelines
+
+Readability counts: "...code is read much more often than it is written."
+Hence following a good and consistent grammar practice is important when
+writing Robot Framework KeyWords and Tests. All CSIT test cases
+are coded using the Gherkin style and include only L2 KW references. L2 KWs are
+coded using a simple style and include L2 KW, L1 KW, and L1 Python references.
+To improve readability, the proposal is to use the same grammar for both
+Robot Framework KW styles, and to formalize the grammar of English
+sentences used for naming the Robot Framework KWs. Robot
+Framework KWs names are short sentences expressing functional description of
+the command. They must follow English sentence grammar in one of the following
+forms:
+
+1. **Imperative** - verb-object(s): *"Do something"*, verb in base form.
+2. **Declarative** - subject-verb-object(s): *"Subject does something"*, verb in
+ a third-person singular present tense form.
+3. **Affirmative** - modal_verb-verb-object(s): *"Subject should be something"*,
+ *"Object should exist"*, verb in base form.
+4. **Negative** - modal_verb-Not-verb-object(s): *"Subject should not be
+ something"*, *"Object should not exist"*, verb in base form.
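+
+Illustrative KW names for each form (hypothetical examples, not necessarily
+existing CSIT keywords):
+
+1. Imperative: *"Configure IPv4 addresses on all DUT interfaces"*.
+2. Declarative: *"Honeycomb configures IPv4 addresses"*.
+3. Affirmative: *"Interface state should be Up"*.
+4. Negative: *"Interface state should not be Down"*.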
+
+Passive form MUST NOT be used. However, usage of a past participle as an
+adjective is okay. See usage examples provided in the Coding guidelines
+section below. The following sections list applicability of the above
+grammar forms to different Robot Framework KW categories. Usage
+examples are provided, both good and bad.
+
+## Coding Guidelines
+
+Coding guidelines can be found on
+[Design optimizations wiki page](https://wiki.fd.io/view/CSIT/Design_Optimizations).
diff --git a/docs/content/overview/csit/suite_generation.md b/docs/content/overview/csit/suite_generation.md
new file mode 100644
index 0000000000..84a19b8ab9
--- /dev/null
+++ b/docs/content/overview/csit/suite_generation.md
@@ -0,0 +1,123 @@
+---
+title: "Suite Generation"
+weight: 5
+---
+
+# Suite Generation
+
+CSIT uses Robot suite files to define tests. However, not all suite files
+available to Jenkins jobs (or manually started bootstrap scripts) are present
+in the CSIT git repository. They are generated only when needed.
+
+## Autogen Library
+
+There is a code generation layer implemented as a Python library called
+"autogen", invoked by various bash scripts.
+
+It generates the full extent of CSIT suites, using the ones in git as templates.
+
+## Sources
+
+The generated suites (and their contents) are affected by multiple information
+sources, listed below.
+
+### Git Suites
+
+The suites present in the git repository act as templates for generating
+suites. One of autogen's design principles is that any template suite should
+also act as a full suite (no placeholders).
+
+In practice, autogen always re-creates the template suite with exactly
+the same content; this is one of the checks that autogen works correctly.
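+
+That self-check amounts to a simple round-trip, sketched here in Python (the
+`regenerate` callable stands in for autogen's actual entry point):
+
+```python
+from pathlib import Path
+
+def check_template_roundtrip(suite_path: Path, regenerate) -> None:
+    """Verify autogen reproduces a template suite byte-for-byte."""
+    original = suite_path.read_text()
+    assert regenerate(original) == original, f"{suite_path} not reproduced"
+```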
+
+### Regenerate Script
+
+Not all suites present in the CSIT git repository act as templates for
+autogen. The distinction is made at the per-directory level. Directories with
+a `regenerate_testcases.py` script usually consider all suites as templates
+(unless not included by the glob pattern in the script).
+
+The script also specifies the minimal frame size, indirectly, by specifying
+the protocol (protocol "ip4" is the default, leading to a 64B frame size).
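+
+The protocol-to-frame-size relation can be pictured as a small lookup; only
+the ip4 default (64B) is stated above, the ip6 value is an assumption:
+
+```python
+# Minimal frame size [B] implied by the chosen protocol.
+MIN_FRAME_SIZE = {"ip4": 64, "ip6": 78}  # ip6 entry is an assumption
+
+def min_frame_size(protocol: str = "ip4") -> int:
+    """Protocol 'ip4' is the default, leading to a 64B frame size."""
+    return MIN_FRAME_SIZE[protocol]
+```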
+
+### Constants
+
+Values in `Constants.py` are taken into consideration when generating suites.
+The values are mostly related to different NIC models and NIC drivers.
+
+### Python Code
+
+Python code in `resources/libraries/python/autogen` contains several other
+information sources.
+
+#### Testcase Templates
+
+The test case part of a template suite is ignored; test case lines
+are created according to text templates in the `Testcase.py` file.
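+
+A text template with per-test-case placeholders might look like this sketch
+(the template string is illustrative, not the one in `Testcase.py`):
+
+```python
+from string import Template
+
+# Illustrative test case template; placeholders are filled per test case.
+TESTCASE = Template(
+    "| ${tc_num}: ${frame_size}B-${cores}c-${suite_id}\n"
+    "| | [Tags] | ${frame_size}B | ${cores}C\n"
+    "| | frame_size=${frame_size} | phy_cores=${cores}\n"
+)
+
+print(TESTCASE.substitute(
+    tc_num="tc01", frame_size=64, cores=1, suite_id="ethip4-ip4base-ndrpdr"))
+```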
+
+#### Testcase Argument Lists
+
+Each testcase template has a different number of "arguments", i.e. values
+to put into various placeholders. Different test types need different
+lists of the argument values; the lists are in the `regenerate_glob` method
+in the `Regenerator.py` file.
+
+#### Iteration Over Values
+
+Python code detects the test type (usually by substrings of the suite file
+name), then iterates over different quantities based on the type.
+For example, only ndrpdr suite templates generate other types (mrr and soak).
+
+#### Hardcoded Exclusions
+
+Some combinations of values are known not to work, so they are excluded.
+Examples: density tests for too many CPUs; IMIX for ASTF.
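+
+Putting the last two points together, the generation loop can be sketched as
+follows (simplified; the real logic lives in `Regenerator.py`):
+
+```python
+from itertools import product
+
+FRAME_SIZES = [64, 1518, 9000, "IMIX"]
+CORE_COUNTS = [1, 2, 4]
+
+def expand(suite_id: str, is_astf: bool):
+    """Yield (test_type, frame_size, cores) combos, minus known-bad ones."""
+    # Only ndrpdr suite templates also generate the other throughput types.
+    types = (["ndrpdr", "mrr", "soak"] if suite_id.endswith("ndrpdr")
+             else [suite_id.rsplit("-", 1)[-1]])
+    for test_type, frame, cores in product(types, FRAME_SIZES, CORE_COUNTS):
+        if is_astf and frame == "IMIX":
+            continue  # hardcoded exclusion: IMIX is not supported for ASTF
+        yield test_type, frame, cores
+
+for combo in expand("ethip4-ip4base-ndrpdr", is_astf=False):
+    print(combo)
+```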
+
+## Non-Sources
+
+Some information sources are available in CSIT repository,
+but do not affect the suites generated by autogen.
+
+### Testbeds
+
+Overall, no information visible in topology yaml files is taken into account
+by autogen.
+
+#### Testbed Architecture
+
+Historically, suite files are agnostic to testbed architecture, e.g. ICX or ALT.
+
+#### Testbed Size
+
+Historically, 2-node and 3-node suites have different names, and while
+most of the code is common, the differences are not always simple enough.
+Autogen treats 2-node and 3-node suites as independent templates.
+
+TRex suites are intended for a 1-node circuit of otherwise 2-node or 3-node
+testbeds, so they support all 3 Robot tags.
+They are also detected and treated differently by autogen,
+mainly because they need different testcase arguments (no CPU count).
+Autogen does nothing specifically related to the fact that they should run
+only on testbeds/NICs with a TG-TG line available.
+
+#### Other Topology Info
+
+Some bonding tests need two (parallel) links between DUTs. Autogen does not
+care, as suites are agnostic. A Robot tag marks the difference, but the link
+presence is not explicitly checked.
+
+### Job specs
+
+Information in job spec files depends on generated suites (not the other way
+around). Autogen should generate more suites than a job spec uses, as the job
+spec is limited by a time budget. More suites should be available for manually
+triggered verify jobs, so autogen covers that.
+
+### Bootstrap Scripts
+
+Historically, bootstrap scripts perform some logic,
+perhaps adding exclusion options to the Robot invocation
+(e.g. skipping testbed+NIC combinations for tests that need parallel links).
+
+Once again, the logic here relies on what autogen generates;
+autogen does not look into bootstrap scripts.
diff --git a/docs/content/overview/csit/test_naming.md b/docs/content/overview/csit/test_naming.md
new file mode 100644
index 0000000000..d7a32518e5
--- /dev/null
+++ b/docs/content/overview/csit/test_naming.md
@@ -0,0 +1,112 @@
+---
+title: "Test Naming"
+weight: 3
+---
+
+# Test Naming
+
+## Background
+
+{{< release_csit >}} follows a common structured naming convention for all
+performance and system functional tests, introduced in CSIT 17.01.
+
+The naming should be intuitive for the majority of the tests. A complete
+description of the CSIT test naming convention is provided on the
+[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming).
+Below are a few illustrative examples of the naming usage for test suites
+across CSIT performance, functional and Honeycomb management test areas.
+
+## Naming Convention
+
+The CSIT approach is to use a tree naming convention and to encode the
+following testing information into test suite and test case names:
+
+1. packet network port configuration
+ * port type, physical or virtual;
+ * number of ports;
+ * NIC model, if applicable;
+ * port-NIC locality, if applicable;
+2. packet encapsulations;
+3. VPP packet processing
+ * packet forwarding mode;
+ * packet processing function(s);
+4. packet forwarding path
+ * if present, network functions (processes, containers, VMs) and their
+ topology within the computer;
+5. main measured variable, type of test.
+
+The proposed convention is to encode ports and NICs on the left (underlay),
+followed by the outer-most frame header, then other stacked headers up to the
+header processed by vSwitch-VPP, then the VPP forwarding function, then encap
+on the vhost interface, the number of vhost interfaces, and the number of VMs.
+If chained VMs are present, they get added on the right. Test topology is
+expected to be symmetric, in other words packets enter and leave the SUT
+through ports specified on the left of the test name. Here are some examples
+to illustrate the convention, followed by the complete legend, and tables
+mapping the new test filenames to old ones.
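+
+A sketch of how such a name decomposes (a simple split, not CSIT's actual
+parsing code):
+
+```python
+def parse_suite_name(filename: str) -> dict:
+    """Split a CSIT suite filename into its dash-separated naming fields."""
+    parts = filename.removesuffix(".robot").split("-")
+    return {
+        "port_nic_config": parts[0],       # e.g. 10ge2p1x520
+        "wire_encapsulation": parts[1],    # e.g. ethip4 or dot1q
+        "forwarding_function": parts[2],   # e.g. ip4base, l2bdbasemaclrn
+        "processing_and_type": parts[3:],  # features, virt config, test type
+    }
+
+print(parse_suite_name("10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot"))
+```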
+
+## Naming Examples
+
+CSIT test suite naming examples (filename.robot) for common tested VPP
+topologies:
+
+1. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**
+ * *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
+ * *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
+ Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
+ with MAC learning, NDR throughput discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE on
+ Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline switching
+    with MAC learning, NDR throughput check.
+ * *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel x520
+ NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
+ * *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on Intel
+ x520 NIC, IPv6 scaled up routed forwarding, NDR throughput discovery.
+ * *10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot* => 2 ports of 10GE
+ on Intel x520 NIC, IPv4 baseline routed forwarding, ingress Access Control
+ Lists baseline matching on destination, NDR throughput discovery.
+ * *40ge2p1vic1385-ethip4-ip4base-ndrdisc.robot* => 2 ports of 40GE on Cisco
+ vic1385 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
+ * *eth2p-ethip4-ip4base-func.robot* => 2 ports of Ethernet, IPv4 baseline
+ routed forwarding, functional tests.
+
+2. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
+ P2V2P, NIC2VMchain2NIC, P2V2V2P**
+ * *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
+ VirtPortConfig-VMconfig-TestType*
+ * *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
+ of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
+ switching to/from two vhost interfaces and one VM, NDR throughput
+ discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from two vhost interfaces and one VM, NDR throughput
+ discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from four vhost interfaces and two VMs, NDR throughput
+ discovery.
+ * *eth2p-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-func.robot* => 2 ports of
+ Ethernet, IPv4 VXLAN Ethernet, L2 bridge-domain switching to/from two vhost
+ interfaces and one VM, functional tests.
+
+3. **API CRUD tests - Create (Write), Read (Retrieve), Update (Modify), Delete
+ (Destroy) operations for configuration and operational data**
+ * *ManagementTestKeyword-ManagementOperation-ManagedFunction1-...-
+ ManagedFunctionN-ManagementAPI1-ManagementAPIN-TestType*
+ * *mgmt-cfg-lisp-apivat-func* => configuration of LISP with VAT API calls,
+ functional tests.
+ * *mgmt-cfg-l2bd-apihc-apivat-func* => configuration of L2 Bridge-Domain with
+ Honeycomb API and VAT API calls, functional tests.
+ * *mgmt-oper-int-apihcnc-func* => reading status and operational data of
+ interface with Honeycomb NetConf API calls, functional tests.
+ * *mgmt-cfg-int-tap-apihcnc-func* => configuration of tap interfaces with
+ Honeycomb NetConf API calls, functional tests.
+ * *mgmt-notif-int-subint-apihcnc-func* => notifications of interface and
+ sub-interface events with Honeycomb NetConf Notifications, functional
+ tests.
+
+For complete description of CSIT test naming convention please refer to
+[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming).
diff --git a/docs/content/overview/csit/test_scenarios.md b/docs/content/overview/csit/test_scenarios.md
new file mode 100644
index 0000000000..1f06765eae
--- /dev/null
+++ b/docs/content/overview/csit/test_scenarios.md
@@ -0,0 +1,66 @@
+---
+title: "Test Scenarios"
+weight: 2
+---
+
+# Test Scenarios
+
+FD.io CSIT Dashboard includes multiple test scenarios of VPP
+centric applications, topologies and use cases. In addition, it covers
+baseline tests of DPDK sample applications. Tests are executed in
+physical (performance tests) and virtual (functional tests)
+environments.
+
+Brief overview of test scenarios covered in this documentation:
+
+1. **VPP Performance**: VPP performance tests are executed in physical
+ FD.io testbeds, focusing on VPP network data plane performance in
+ NIC-to-NIC switching topologies. VPP application runs in
+ bare-metal host user-mode handling NICs. TRex is used as a traffic generator.
+
+2. **VPP Vhostuser Performance with KVM VMs**: VPP VM service switching
+ performance tests using vhostuser virtual interface for
+ interconnecting multiple NF-in-VM instances. VPP vswitch
+ instance runs in bare-metal user-mode handling NICs and connecting
+ over vhost-user interfaces to VM instances each running VPP with virtio
+ virtual interfaces. Similarly to VPP Performance, tests are run across a
+ range of configurations. TRex is used as a traffic generator.
+
+3. **VPP Memif Performance with LXC and Docker Containers**: VPP
+ Container service switching performance tests using memif virtual
+ interface for interconnecting multiple VPP-in-container instances.
+ VPP vswitch instance runs in bare-metal user-mode handling NICs and
+ connecting over memif (Slave side) interfaces to more instances of
+ VPP running in LXC or in Docker Containers, both with memif
+ interfaces (Master side). Similarly to VPP Performance, tests are
+ run across a range of configurations. TRex is used as a traffic
+ generator.
+
+4. **DPDK Performance**: VPP uses DPDK to drive the NICs and physical
+ interfaces. DPDK performance tests are used as a baseline to
+ profile performance of the DPDK sub-system. Two DPDK applications
+ are tested: Testpmd and L3fwd. DPDK tests are executed in the same
+ testing environment as VPP tests. DPDK Testpmd and L3fwd
+ applications run in host user-mode. TRex is used as a traffic
+ generator.
+
+5. **TRex Performance**: TRex performance tests are executed in physical
+   FD.io testbeds, focusing on TRex data plane performance in NIC-to-NIC
+ loopback topologies.
+
+6. **VPP Functional**: VPP functional tests are executed in virtual
+ FD.io testbeds, focusing on VPP packet processing functionality,
+ including both network data plane and in-line control plane. Tests
+   cover vNIC-to-vNIC and vNIC-to-nestedVM-to-vNIC forwarding topologies.
+ Scapy is used as a traffic generator.
+
+All CSIT test data included in this report is auto-generated from Robot
+Framework json output files produced by Linux Foundation FD.io Jenkins jobs
+executed against {{< release_vpp >}} artifacts.
+
+The FD.io CSIT system is developed using two main coding platforms: Robot
+Framework and Python. {{< release_csit >}} source code for the executed test
+suites is available in the corresponding CSIT branch in the directory
+`./tests/<name_of_the_test_suite>`. A local copy of the CSIT source code
+can be obtained by cloning the CSIT git repository - `git clone
+https://gerrit.fd.io/r/csit`.
diff --git a/docs/content/overview/csit/test_tags.md b/docs/content/overview/csit/test_tags.md
new file mode 100644
index 0000000000..de38945c17
--- /dev/null
+++ b/docs/content/overview/csit/test_tags.md
@@ -0,0 +1,876 @@
+---
+title: "Test Tags"
+weight: 4
+---
+
+# Test Tags
+
+All CSIT test cases are labelled with Robot Framework tags to allow for easy
+test case type identification, test case grouping and selection for
+execution. The following sections list currently used CSIT tags and their
+descriptions.
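+
+Tag-based selection is handled by Robot Framework itself; the idea can be
+sketched in Python (illustrative, not how Robot implements it):
+
+```python
+def select(test_cases: dict, include: set, exclude: set = frozenset()) -> list:
+    """Pick tests whose tags cover `include` and avoid `exclude`."""
+    return [
+        name for name, tags in test_cases.items()
+        if include <= tags and not (exclude & tags)
+    ]
+
+cases = {
+    "64B-1c-ethip4-ip4base-ndrpdr": {"NDRPDR", "IP4FWD", "BASE", "1C"},
+    "64B-1c-ethip4-ip4base-mrr": {"MRR", "IP4FWD", "BASE", "1C"},
+}
+print(select(cases, include={"NDRPDR"}))  # only the ndrpdr test matches
+```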
+
+## Testbed Topology Tags
+
+**2_NODE_DOUBLE_LINK_TOPO**
+
+ 2 nodes connected in a circular topology with two links interconnecting
+ the devices.
+
+**2_NODE_SINGLE_LINK_TOPO**
+
+ 2 nodes connected in a circular topology with at least one link
+ interconnecting devices.
+
+**3_NODE_DOUBLE_LINK_TOPO**
+
+ 3 nodes connected in a circular topology with two links interconnecting
+ the devices.
+
+**3_NODE_SINGLE_LINK_TOPO**
+
+ 3 nodes connected in a circular topology with at least one link
+ interconnecting devices.
+
+## Objective Tags
+
+**SKIP_PATCH**
+
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP
+ patch) and csit-vpp-verify jobs (i.e. CSIT patch).
+
+**SKIP_VPP_PATCH**
+
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP
+ patch).
+
+## Environment Tags
+
+**HW_ENV**
+
+ DUTs and TGs are running on bare metal.
+
+**VM_ENV**
+
+    DUTs and TGs are running in a virtual environment.
+
+**VPP_VM_ENV**
+
+    DUTs with VPP, capable of running a Virtual Machine.
+
+## NIC Model Tags
+
+**NIC_Intel-X520-DA2**
+
+ Intel X520-DA2 NIC.
+
+**NIC_Intel-XL710**
+
+ Intel XL710 NIC.
+
+**NIC_Intel-X710**
+
+ Intel X710 NIC.
+
+**NIC_Intel-XXV710**
+
+ Intel XXV710 NIC.
+
+**NIC_Cisco-VIC-1227**
+
+ VIC-1227 by Cisco.
+
+**NIC_Cisco-VIC-1385**
+
+ VIC-1385 by Cisco.
+
+**NIC_Amazon-Nitro-50G**
+
+ Amazon EC2 ENA NIC.
+
+## Scaling Tags
+
+**FIB_20K**
+
+    2x10,000 entries in a single FIB table.
+
+**FIB_200K**
+
+    2x100,000 entries in a single FIB table.
+
+**FIB_1M**
+
+    2x500,000 entries in a single FIB table.
+
+**FIB_2M**
+
+    2x1,000,000 entries in a single FIB table.
+
+**L2BD_1**
+
+ Test with 1 L2 bridge domain.
+
+**L2BD_10**
+
+ Test with 10 L2 bridge domains.
+
+**L2BD_100**
+
+ Test with 100 L2 bridge domains.
+
+**L2BD_1K**
+
+ Test with 1000 L2 bridge domains.
+
+**VLAN_1**
+
+ Test with 1 VLAN sub-interface.
+
+**VLAN_10**
+
+ Test with 10 VLAN sub-interfaces.
+
+**VLAN_100**
+
+ Test with 100 VLAN sub-interfaces.
+
+**VLAN_1K**
+
+ Test with 1000 VLAN sub-interfaces.
+
+**VXLAN_1**
+
+ Test with 1 VXLAN tunnel.
+
+**VXLAN_10**
+
+ Test with 10 VXLAN tunnels.
+
+**VXLAN_100**
+
+ Test with 100 VXLAN tunnels.
+
+**VXLAN_1K**
+
+ Test with 1000 VXLAN tunnels.
+
+**TNL_{t}**
+
+ IPSec in tunnel mode - {t} tunnels.
+
+**SRC_USER_{u}**
+
+ Traffic flow with {u} unique IPs (users) in one direction.
+ {u}=(1,10,100,1000,2000,4000).
+
+**100_FLOWS**
+
+ Traffic stream with 100 unique flows (10 IPs/users x 10 UDP ports) in
+ one direction.
+
+**10k_FLOWS**
+
+ Traffic stream with 10 000 unique flows (10 IPs/users x 1000 UDP ports)
+ in one direction.
+
+**100k_FLOWS**
+
+ Traffic stream with 100 000 unique flows (100 IPs/users x 1000 UDP
+ ports) in one direction.
+
+**HOSTS_{h}**
+
+ Stateless or stateful traffic stream with {h} client source IP4
+    addresses, usually with 63 flows differing in source port number.
+ Could be UDP or TCP. If NAT is used, the clients are inside.
+ Outside IP range can differ.
+ {h}=(1024,4096,16384,65536,262144).
+
+**GENEVE4_{t}TUN**
+
+    Test with {t} GENEVE IPv4 tunnels.
+    {t}=(1,4,16,64,256,1024).
+
+## Test Category Tags
+
+**DEVICETEST**
+
+ All vpp_device functional test cases.
+
+**PERFTEST**
+
+ All performance test cases.
+
+## VPP Device Type Tags
+
+**SCAPY**
+
+    All test cases that use Scapy for packet generation and validation.
+
+## Performance Type Tags
+
+**NDRPDR**
+
+ Single test finding both No Drop Rate and Partial Drop Rate
+    simultaneously. The search is done by an optimized algorithm which
+    performs multiple trial runs at different durations and transmit
+ rates. The results come from the final trials, which have duration
+ of 30 seconds.
+
+**MRR**
+
+ Performance tests where TG sends the traffic at maximum rate (line rate)
+ and reports total sent/received packets over trial duration.
+ The result is an average of 10 trials of 1 second duration.
+
+**SOAK**
+
+ Performance tests using PLRsearch to find the critical load.
+
+**RECONF**
+
+    Performance tests aimed at measuring lost packets (time) when performing
+    reconfiguration while full throughput offered load is applied.
+
+## Ethernet Frame Size Tags
+
+These describe the traffic offered by the Traffic Generator, the "primary"
+traffic in case of asymmetric load.
+For traffic between DUTs, or for "secondary" traffic, see the ${overhead}
+value.
+
+**{b}B**
+
+    {b} Byte frames used for the test.
+
+**IMIX**
+
+ IMIX frame sequence (28x 64B, 16x 570B, 4x 1518B) used for test.
+
+## Test Type Tags
+
+**BASE**
+
+ Baseline test cases, no encapsulation, no feature(s) configured in tests.
+    No scaling whatsoever, beyond the minimum needed for RSS.
+
+**IP4BASE**
+
+ IPv4 baseline test cases, no encapsulation, no feature(s) configured in
+ tests. Minimal number of routes. Other quantities may be scaled.
+
+**IP6BASE**
+
+ IPv6 baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2XCBASE**
+
+ L2XC baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2BDBASE**
+
+ L2BD baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2PATCH**
+
+ L2PATCH baseline test cases, no encapsulation, no feature(s) configured
+ in tests.
+
+**SCALE**
+
+ Scale test cases. Other tags specify which quantities are scaled.
+ Also applies if scaling is set on TG only (e.g. DUT works as IP4BASE).
+
+**ENCAP**
+
+ Test cases where encapsulation is used. Use also encapsulation tag(s).
+
+**FEATURE**
+
+ At least one feature is configured in test cases. Use also feature
+ tag(s).
+
+**UDP**
+
+ Tests which use any kind of UDP traffic (STL or ASTF profile).
+
+**TCP**
+
+ Tests which use any kind of TCP traffic (STL or ASTF profile).
+
+**TREX**
+
+    Tests which test TRex traffic without any software DUTs in the
+ traffic path.
+
+**UDP_UDIR**
+
+ Tests which use unidirectional UDP traffic (STL profile only).
+
+**UDP_BIDIR**
+
+ Tests which use bidirectional UDP traffic (STL profile only).
+
+**UDP_CPS**
+
+ Tests which measure connections per second on minimal UDP
+ pseudoconnections. This implies ASTF traffic profile is used.
+
+**TCP_CPS**
+
+ Tests which measure connections per second on empty TCP connections.
+ This implies ASTF traffic profile is used.
+
+**TCP_RPS**
+
+ Tests which measure requests per second on empty TCP connections.
+ This implies ASTF traffic profile is used.
+
+**UDP_PPS**
+
+ Tests which measure packets per second on lightweight UDP transactions.
+ This implies ASTF traffic profile is used.
+
+**TCP_PPS**
+
+ Tests which measure packets per second on lightweight TCP transactions.
+ This implies ASTF traffic profile is used.
+
+**HTTP**
+
+ Tests which use traffic formed of valid HTTP requests (and responses).
+
+**LDP_NGINX**
+
+    LDP NGINX is unmodified NGINX running over VPP via LD_PRELOAD.
+
+**NF_DENSITY**
+
+ Performance tests that measure throughput of multiple VNF and CNF
+ service topologies at different service densities.
+
+## NF Service Density Tags
+
+**CHAIN**
+
+ NF service density tests with VNF or CNF service chain topology(ies).
+
+**PIPE**
+
+ NF service density tests with CNF service pipeline topology(ies).
+
+**NF_L3FWDIP4**
+
+ NF service density tests with DPDK l3fwd IPv4 routing as NF workload.
+
+**NF_VPPIP4**
+
+ NF service density tests with VPP IPv4 routing as NF workload.
+
+**{r}R{c}C**
+
+    Service density matrix locator {r}R{c}C, {r}R denoting the number of
+    service instances (rows), {c}C denoting the number of NFs per service
+    instance (columns).
+    {r}=(1,2,4,6,8,10), {c}=(1,2,4,6,8,10).
+
+**{n}VM{t}T**
+
+    Service density {n}VM{t}T, {n} denoting the number of NF Qemu VMs, {t}
+    the number of threads per NF.
+
+**{n}DCR{t}T**
+
+    Service density {n}DCR{t}T, {n} denoting the number of NF Docker
+    containers, {t} the number of threads per NF.
+
+**{n}_ADDED_CHAINS**
+
+    {n} denoting the number of chains (or pipelines) added (and/or removed)
+    during the RECONF test.
+
+## Forwarding Mode Tags
+
+**L2BDMACSTAT**
+
+ VPP L2 bridge-domain, L2 MAC static.
+
+**L2BDMACLRN**
+
+ VPP L2 bridge-domain, L2 MAC learning.
+
+**L2XCFWD**
+
+ VPP L2 point-to-point cross-connect.
+
+**IP4FWD**
+
+ VPP IPv4 routed forwarding.
+
+**IP6FWD**
+
+ VPP IPv6 routed forwarding.
+
+**LOADBALANCER_MAGLEV**
+
+ VPP Load balancer maglev mode.
+
+**LOADBALANCER_L3DSR**
+
+ VPP Load balancer l3dsr mode.
+
+**LOADBALANCER_NAT4**
+
+ VPP Load balancer nat4 mode.
+
+**N2N**
+
+    Mode where NICs from the same physical server are directly
+ connected with a cable.
+
+## Underlay Tags
+
+**IP4UNRLAY**
+
+ IPv4 underlay.
+
+**IP6UNRLAY**
+
+ IPv6 underlay.
+
+**MPLSUNRLAY**
+
+ MPLS underlay.
+
+## Overlay Tags
+
+**L2OVRLAY**
+
+ L2 overlay.
+
+**IP4OVRLAY**
+
+ IPv4 overlay (IPv4 payload).
+
+**IP6OVRLAY**
+
+ IPv6 overlay (IPv6 payload).
+
+## Tagging Tags
+
+**DOT1Q**
+
+ All test cases with dot1q.
+
+**DOT1AD**
+
+ All test cases with dot1ad.
+
+## Encapsulation Tags
+
+**ETH**
+
+ All test cases with base Ethernet (no encapsulation).
+
+**LISP**
+
+ All test cases with LISP.
+
+**LISPGPE**
+
+ All test cases with LISP-GPE.
+
+**LISP_IP4o4**
+
+ All test cases with LISP_IP4o4.
+
+**LISPGPE_IP4o4**
+
+ All test cases with LISPGPE_IP4o4.
+
+**LISPGPE_IP6o4**
+
+ All test cases with LISPGPE_IP6o4.
+
+**LISPGPE_IP4o6**
+
+ All test cases with LISPGPE_IP4o6.
+
+**LISPGPE_IP6o6**
+
+ All test cases with LISPGPE_IP6o6.
+
+**VXLAN**
+
+    All test cases with VXLAN.
+
+**VXLANGPE**
+
+ All test cases with VXLAN-GPE.
+
+**GRE**
+
+ All test cases with GRE.
+
+**GTPU**
+
+ All test cases with GTPU.
+
+**GTPU_HWACCEL**
+
+ All test cases with GTPU_HWACCEL.
+
+**IPSEC**
+
+ All test cases with IPSEC.
+
+**WIREGUARD**
+
+ All test cases with WIREGUARD.
+
+**SRv6**
+
+    All test cases with Segment Routing over the IPv6 data plane.
+
+**SRv6_1SID**
+
+ All SRv6 test cases with single SID.
+
+**SRv6_2SID_DECAP**
+
+ All SRv6 test cases with two SIDs and with decapsulation.
+
+**SRv6_2SID_NODECAP**
+
+ All SRv6 test cases with two SIDs and without decapsulation.
+
+**GENEVE**
+
+ All test cases with GENEVE.
+
+**GENEVE_L3MODE**
+
+ All test cases with GENEVE tunnel in L3 mode.
+
+**FLOW**
+
+ All test cases with FLOW.
+
+**FLOW_DIR**
+
+ All test cases with FLOW_DIR.
+
+**FLOW_RSS**
+
+ All test cases with FLOW_RSS.
+
+**NTUPLE**
+
+ All test cases with NTUPLE.
+
+**L2TPV3**
+
+ All test cases with L2TPV3.
+
+**REASSEMBLY**
+
+ All encap/decap tests where MTU induces IP fragmentation and reassembly.
+
+## Interface Tags
+
+**PHY**
+
+ All test cases which use physical interface(s).
+
+**GSO**
+
+    All test cases which use Generic Segmentation Offload.
+
+**VHOST**
+
+    All test cases which use VHOST.
+
+**VHOST_1024**
+
+    All test cases which use the VHOST DPDK driver with qemu queue size set
+    to 1024.
+
+**VIRTIO**
+
+    All test cases which use the VIRTIO native VPP driver.
+
+**VIRTIO_1024**
+
+    All test cases which use the VIRTIO native VPP driver with qemu queue
+    size set to 1024.
+
+**CFS_OPT**
+
+    All test cases which use a VM with optimised scheduler policy.
+
+**TUNTAP**
+
+    All test cases which use TUN and TAP.
+
+**AFPKT**
+
+    All test cases which use AFPKT.
+
+**NETMAP**
+
+    All test cases which use Netmap.
+
+**MEMIF**
+
+    All test cases which use Memif.
+
+**SINGLE_MEMIF**
+
+    All test cases which use only a single Memif connection per DUT. One DUT
+    instance is running in a container having one physical interface exposed
+    to the container.
+
+**LBOND**
+
+    All test cases which use link bonding (BondEthernet interface).
+
+**LBOND_DPDK**
+
+    All test cases which use DPDK link bonding.
+
+**LBOND_VPP**
+
+    All test cases which use VPP link bonding.
+
+**LBOND_MODE_XOR**
+
+    All test cases which use link bonding with mode XOR.
+
+**LBOND_MODE_LACP**
+
+    All test cases which use link bonding with mode LACP.
+
+**LBOND_LB_L34**
+
+    All test cases which use link bonding with load-balance mode l34.
+
+**LBOND_{n}L**
+
+ All test cases which use {n} link(s) for link bonding.
+
+**DRV_{d}**
+
+    All test cases where the NIC driver for the DUT is set to {d}.
+ Default is VFIO_PCI.
+ {d}=(AVF, RDMA_CORE, VFIO_PCI, AF_XDP).
+
+**TG_DRV_{d}**
+
+    All test cases where the NIC driver for the TG is set to {d}.
+ Default is IGB_UIO.
+ {d}=(RDMA_CORE, IGB_UIO).
+
+**RXQ_SIZE_{n}**
+
+    All test cases where the RXQ size (number of RX descriptors) is set to
+    {n}. Default is 0, which means the VPP (API) default.
+
+**TXQ_SIZE_{n}**
+
+    All test cases where the TXQ size (number of TX descriptors) is set to
+    {n}. Default is 0, which means the VPP (API) default.
+
+## Feature Tags
+
+**IACLDST**
+
+ iACL destination.
+
+**ADLALWLIST**
+
+ ADL allowlist.
+
+**NAT44**
+
+ NAT44 configured and tested.
+
+**NAT64**
+
+    NAT64 configured and tested.
+
+**ACL**
+
+ ACL plugin configured and tested.
+
+**IACL**
+
+ ACL plugin configured and tested on input path.
+
+**OACL**
+
+ ACL plugin configured and tested on output path.
+
+**ACL_STATELESS**
+
+ ACL plugin configured and tested in stateless mode
+ (permit action).
+
+**ACL_STATEFUL**
+
+ ACL plugin configured and tested in stateful mode
+ (permit+reflect action).
+
+**ACL1**
+
+ ACL plugin configured and tested with 1 not-hitting ACE.
+
+**ACL10**
+
+ ACL plugin configured and tested with 10 not-hitting ACEs.
+
+**ACL50**
+
+ ACL plugin configured and tested with 50 not-hitting ACEs.
+
+**SRv6_PROXY**
+
+ SRv6 endpoint to SR-unaware appliance via proxy.
+
+**SRv6_PROXY_STAT**
+
+ SRv6 endpoint to SR-unaware appliance via static proxy.
+
+**SRv6_PROXY_DYN**
+
+ SRv6 endpoint to SR-unaware appliance via dynamic proxy.
+
+**SRv6_PROXY_MASQ**
+
+ SRv6 endpoint to SR-unaware appliance via masquerading proxy.
+
+## Encryption Tags
+
+**IPSECSW**
+
+ Crypto in software.
+
+**IPSECHW**
+
+ Crypto in hardware.
+
+**IPSECTRAN**
+
+ IPSec in transport mode.
+
+**IPSECTUN**
+
+ IPSec in tunnel mode.
+
+**IPSECINT**
+
+ IPSec in interface mode.
+
+**AES**
+
+ IPSec using AES algorithms.
+
+**AES_128_CBC**
+
+ IPSec using AES 128 CBC algorithms.
+
+**AES_128_GCM**
+
+ IPSec using AES 128 GCM algorithms.
+
+**AES_256_GCM**
+
+ IPSec using AES 256 GCM algorithms.
+
+**HMAC**
+
+ IPSec using HMAC integrity algorithms.
+
+**HMAC_SHA_256**
+
+ IPSec using HMAC SHA 256 integrity algorithms.
+
+**HMAC_SHA_512**
+
+ IPSec using HMAC SHA 512 integrity algorithms.
+
+**SCHEDULER**
+
+ IPSec using crypto sw scheduler engine.
+
+**FASTPATH**
+
+    IPSec policy mode with SPD fast path enabled.
+
+## Client-Workload Tags
+
+**VM**
+
+ All test cases which use at least one virtual machine.
+
+**LXC**
+
+ All test cases which use Linux container and LXC utils.
+
+**DRC**
+
+ All test cases which use at least one Docker container.
+
+**DOCKER**
+
+ All test cases which use Docker as container manager.
+
+**APP**
+
+ All test cases with specific APP use.
+
+## Container Orchestration Tags
+
+**{n}VSWITCH**
+
+    {n} VPP instance(s) running in {n} Docker container(s) acting as a
+    VSWITCH.
+    {n}=(1).
+
+**{n}VNF**
+
+    {n} VPP instance(s) running in {n} Docker container(s) acting as a VNF
+    workload.
+    {n}=(1).
+
+## Multi-Threading Tags
+
+**STHREAD**
+
+ Dynamic tag.
+    All test cases using a single poll mode thread.
+
+**MTHREAD**
+
+    Dynamic tag.
+    All test cases using more than one poll mode driver thread.
+
+**{n}NUMA**
+
+ All test cases with packet processing on {n} socket(s). {n}=(1,2).
+
+**{c}C**
+
+    {c} worker threads pinned to {c} dedicated physical cores; or if
+    HyperThreading is enabled, {c}*2 worker threads each pinned to
+    a separate logical core within 1 dedicated physical core. Main
+    thread pinned to core 1.
+    {c}=(1,2,4).
+
+**{t}T{c}C**
+
+ *Dynamic tag*.
+ {t} worker threads pinned to {c} dedicated physical cores. Main thread
+    pinned to core 1. By default CSIT configures the same number of receive
+    queues per interface as worker threads.
+ {t}=(1,2,4,8),
+ {c}=(1,2,4).
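+
+The relation between the {c}C and {t}T{c}C tags is mechanical; a minimal
+Python sketch (illustrative only):
+
+```python
+def thread_tag(cores: int, smt_enabled: bool) -> str:
+    """{t}T{c}C dynamic tag: t = 2*c with SMT enabled, else t = c."""
+    threads = cores * 2 if smt_enabled else cores
+    # By default, CSIT also configures `threads` receive queues per interface.
+    return f"{threads}T{cores}C"
+
+print(thread_tag(2, smt_enabled=True))   # 4T2C
+print(thread_tag(1, smt_enabled=False))  # 1T1C
+```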