path: root/docs/content/introduction
author pmikus <peter.mikus@protonmail.ch> 2023-03-09 13:32:33 +0000
committer Peter Mikus <peter.mikus@protonmail.ch> 2023-03-15 10:06:55 +0000
commit b928e56347c682fdb9762ccbe2f368329d4037e4 (patch)
tree 9c0f6a02efc1704821229b8e18a23468fdf8674a /docs/content/introduction
parent ddcdf45806d0efa9e89dd4446b4c7da39cfb27a8 (diff)
feat(docs): Hugo
Signed-off-by: pmikus <peter.mikus@protonmail.ch>
Change-Id: Id8b43ef1f31f39b19a0629c52581514fda278f3b
Diffstat (limited to 'docs/content/introduction')
-rw-r--r-- docs/content/introduction/_index.md | 5
-rw-r--r-- docs/content/introduction/automating_vpp_api_flag_day.md | 303
-rw-r--r-- docs/content/introduction/bash_code_style.md | 651
-rw-r--r-- docs/content/introduction/branches.md | 192
-rw-r--r-- docs/content/introduction/dashboard_history.md | 17
-rw-r--r-- docs/content/introduction/design.md | 148
-rw-r--r-- docs/content/introduction/model_schema.md | 60
-rw-r--r-- docs/content/introduction/perf_triggers_design.md | 44
-rw-r--r-- docs/content/introduction/test_code_guidelines.md | 294
-rw-r--r-- docs/content/introduction/test_naming.md | 107
-rw-r--r-- docs/content/introduction/test_scenarios_overview.md | 61
-rw-r--r-- docs/content/introduction/test_tag_description.md | 863
-rw-r--r-- docs/content/introduction/testing_in_vagrant.md | 85
13 files changed, 2830 insertions, 0 deletions
diff --git a/docs/content/introduction/_index.md b/docs/content/introduction/_index.md
new file mode 100644
index 0000000000..e028786bd1
--- /dev/null
+++ b/docs/content/introduction/_index.md
@@ -0,0 +1,5 @@
+---
+bookFlatSection: true
+title: "Introduction"
+weight: 1
+---
\ No newline at end of file
diff --git a/docs/content/introduction/automating_vpp_api_flag_day.md b/docs/content/introduction/automating_vpp_api_flag_day.md
new file mode 100644
index 0000000000..131adeab9d
--- /dev/null
+++ b/docs/content/introduction/automating_vpp_api_flag_day.md
@@ -0,0 +1,303 @@
+---
+bookHidden: true
+title: "VPP API Flag Day Algorithms"
+---
+
+# VPP API Flag Day Algorithm
+
+## Abstract
+
+This document describes the current solution to the problem of
+automating the detection of VPP API changes which are not backwards
+compatible with existing CSIT tests, by defining the "Flag Day"
+process of deploying a new set of CSIT tests which are compatible
+with the new version of the VPP API without causing a halt to the
+normal VPP/CSIT operational CI process. This is initially
+limited to changes in \*.api files contained in the vpp repo.
+Eventually the detection algorithm could be extended to include
+other integration points such as "directory" structure of stats
+segment or PAPI python library dependencies.
+
+## Motivation
+
+Aside from per-release activities (release report), CSIT also provides testing
+that requires somewhat tight coupling to the latest (merged but not released)
+VPP code. Currently, HEAD of one project is run against somewhat older codebase
+of the other project. Definition of what is the older codebase to use
+is maintained by CSIT project. For older CSIT codebase, there are so-called
+"oper" branches. For older VPP codebase, CSIT master HEAD contains identifiers
+for "stable" VPP builds. Such older codebases are also used for verify jobs,
+where HEAD of the other project is replaced by the commit under review.
+
+One particular type of jobs useful for VPP development is trending jobs.
+They test the latest VPP build with the latest oper branch of CSIT,
+and analytics is applied to detect regressions in performance.
+For this to work properly, VPP project needs a warning against breaking
+the assumptions the current oper branch makes about VPP behavior.
+In the past, the most frequent type of such breakage was API change.
+
+Earlier attempts to create a process to minimize breakage have focused
+on creating a new verify job for VPP (called api-crc job) that
+votes -1 on a change that affects CRC values for API messages CSIT uses.
+The list of messages and CRC values (multiple "collections" are allowed)
+is maintained in CSIT repository (in oper branch).
+The process was less explicit on how the CSIT project should maintain such a list.
+As CSIT was not willing to support two incompatible API messages
+by the same codebase (commit), there were unavoidable windows
+where either trending jobs or CSIT verify jobs were failing.
+
+Practice showed that human (or infra) errors can create two kinds of breakages.
+Either the unavoidable short window gets long, affecting a trending job run
+or two, or the api-crc job starts giving -1 to innocent changes
+because oper branch went out of sync with VPP HEAD codebase.
+This second type of failure prevents any merges to VPP for a long time
+(12 hours is the typical time, given time zone differences).
+
+The current version of this document introduces two new requirements.
+Firstly, the api-crc job should not give false -1, under any
+(reasonable) circumstances. That means, if a VPP change
+(including its unmerged ancestor commits) does not affect any CRC values
+for messages used by CSIT, -1 should only mean "rebase is needed",
+and rebasing to HEAD should result in +1 from the api-crc job.
+Secondly, no more than one VPP change is allowed to be processed
+(at the same time).
+
+## Naming
+
+It is easier to define the process after choosing shorter names
+for notions that need long definitions.
+
+Note: Every time a single job is mentioned,
+in practice it can be a set of jobs covering parts of functionality.
+A "run" of the set of jobs passes only if each job within the set
+has been run (again) and passed.
+
+## Jobs
+
++ A *vpp verify* job: Any job run automatically, and voting on open VPP changes.
+ Some verify jobs compile and package VPP for target operating system
+ and processor architecture, the packages are NOT archived (currently).
+ They should be cached somewhere in future to speed up downstream jobs,
+ but currently each such downstream job can clone and build.
+
++ The *api-crc* job: Quick verify job for VPP changes, that accesses
+ CSIT repository (checking out the latest oper branch HEAD) to figure out
+ whether merging the change is safe from CSIT point of view.
+ Here, -1 means CSIT is not ready. +1 means CSIT looks to be ready
+ for the new CRC values, but there still may be failures on real tests.
+
++ A *trending* job: Any job that is started by timer and performs testing.
+ It checks out the latest CSIT oper branch HEAD, downloads the most recent
+ completely uploaded VPP package, and unconditionally runs the tests.
+ CRC checks are optional, ideally only written to console log
+ without otherwise affecting the test cases.
+
++ A *vpp-csit* job: A slower verify job for VPP changes, that accesses CSIT
+ repository and runs tests from the correct CSIT commit (chosen as in trending)
+ against the VPP (built from the VPP patch under review).
+ Vote -1 means there were test failures. +1 means no test failures, meaning
+ there either was no API change, or it was backward compatible.
+
++ A *csit-vpp* job: Verify job for open CSIT changes. Downloads the
+ (completely uploaded) VPP package marked as "stable", and runs a selection
+ of tests (from the CSIT patch under review).
+ Vote +1 means all tests have passed, so it is safe to merge
+ the patch under review.
+
++ A *patch-on-patch* job: Manually triggered non-voting job
+ for open CSIT changes. Compiles and packages from VPP source
+ (usually of an unmerged change). Then runs the same tests as csit-vpp job.
+ This job is used to prove the CSIT patch under review is supporting
+ the specified VPP code.
+ In practice, this can be a vpp-csit job started with CSIT_REF set.
+
++ A *manual verification* is done by a CSIT committer, locally executing steps
+ equivalent to the patch-on-patch job. This can save time and resources.
+
+## CRC Collections
+
+Any commit in/for the CSIT repository contains a file (supported_crcs.yaml),
+which contains either one or two collections. A collection is a mapping
+that maps API message name to its CRC value.
+
+A collection name specifies which VPP build this collection is for.
+An API message name is present in a collection if and only if
+it is used by a test implementation (can be in different CSIT commit)
+targeted at the VPP build (pointed out by the collection name).
+
++ The *stable collection*: Usually required, listed first, has comments and name
+ pointing to the VPP build this CSIT commit marks as stable.
+ The stable collection is only missing in deactivating changes (see below)
+ when not mergeable yet.
+
++ The *active collection*: Optional, listed second, has comments and name
+ pointing to the VPP Gerrit (including patch set number)
+ the currently active API process is processing.
+ The patch set number part can be behind the actual Gerrit state.
+ This is safe, because api-crc job on the active API change will fail
+ if the older patch is no longer API-equivalent to the newer patch.
+
+## Changes
+
++ An *API change*: The name for any Gerrit Change for VPP repository
+ that does not pass api-crc job right away, and needs this whole process.
+ This usually means .api files are edited, but a patch that affects
+ the way CRC values are computed is also an API change.
+
+ Full name could be VPP API Change, but as no CSIT change is named "API change"
+ (and this document does not talk about other FD.io or external projects),
+ "API change" is shorter.
+
++ A *blocked change*: The name for open Gerrit Change for VPP repository
+ that got -1 from some of voting verify jobs.
+
++ A *VPP-blocked change*: A blocked change which got -1 from some "pure VPP"
+ verify job, meaning no CSIT code has been involved in the vote.
+ Example: "make test" fails.
+
+ VPP contributor is expected to fix the change, or VPP developers
+ are expected to find a cause in an earlier VPP change, and fix it.
+ No interaction with CSIT developers is necessary.
+
++ A *CSIT-blocked change*: A blocked change which is not VPP-blocked,
+ but does not pass some vpp-csit job.
+ To fix a CSIT-blocked change, an interaction with a CSIT committer
+ is usually necessary. Even if a VPP developer is experienced enough
+ to identify the cause of the failure, a merge to CSIT is usually needed
+ for a full fix.
+
+ This process does not specify what to do with CSIT-blocked changes
+ that are not also API changes.
+
++ A *candidate API change*: An API change that meets all requirements
+ to become active (see below). Currently, the requirements are:
+
+ + No -1 nor -2 from any human reviewer.
+
+ + All verify jobs (except vpp-csit ones) pass.
+
+ + +1 from a VPP committer.
+
+ The reason is to avoid situations where an API change becomes active,
+ but the VPP committers are unwilling to merge it for some reason.
+
++ The *active API change*: The candidate API change currently being processed
+ by the API Flag Day Algorithm.
+ While many API changes can be candidates at the same time,
+ only one is allowed to be active at a time.
+
++ The *activating change*: The name for a Gerrit Change for CSIT repository
+ that does not change the test code, but adds the active CRC collection.
+ Merge of the activating change (to the latest CSIT oper branch) defines
+ which API change has become active.
+
++ The *deactivating change*: The name for Gerrit Change for CSIT repository
+ that only supports tests and CRC values for VPP with the active API change.
+ That implies the previously stable CRC collection is deleted,
+ and any edits to the test implementation are done here.
+
++ The *mergeable deactivating change*: The deactivating change with additional
+ requirements. Details on the requirements are listed in the next section.
+ Merging this change finishes the process for the active API change.
+
+It is possible for a single CSIT change to act both as a mergeable
+deactivating change for one API change, and as an activating change
+for another API change. As English lacks a good adjective for such a thing,
+this document does not name this change.
+When this document says a change is activating or deactivating,
+it allows the possibility for the change to also fulfill other purposes
+(e.g. acting as deactivating / activating change for another API change).
+
+## Algorithm Steps
+
+The following steps describe the application of the API "Flag Day" algorithm:
+
+1. A VPP patch for an API change is submitted to
+   gerrit for review.
+1. The api-crc job detects the API CRC values have changed
+   for some messages used by CSIT.
+1. The api-crc job runs in parallel with any other vpp-csit verify job,
+   so those other jobs can hint at the impact on CSIT.
+   Currently, any such vpp-csit job is non-voting,
+   as the current process does not guarantee such jobs pass
+   when the API change is merged.
+1. If the api-crc job fails, an email with the appropriate reason
+   is sent to the VPP patch submitter and vpp-api-dev@lists.fd.io,
+   including the VPP patch information and the .api files that are edited.
+1. The VPP patch developer works with a VPP committer
+   to ensure the patch meets requirements to become a candidate (see above).
+1. The VPP patch developer and CSIT team create a CSIT JIRA ticket
+   to identify the work required to support the new VPP API version.
+1. CSIT developer creates a patch of the deactivating change
+   (upload to Gerrit not required yet).
+1. CSIT developer runs patch-on-patch job (or manual verification).
+   Both developers iterate until the verification passes.
+   Note that in this phase csit-vpp job is expected to vote -1,
+   as the deactivating change is not mergeable yet.
+1. CSIT developer creates the activating change, uploads it to Gerrit,
+   and waits for votes (the usual review cycle applies).
+1. When CSIT committer is satisfied, the activating change is merged
+   to CSIT master branch and cherry-picked to the latest oper branch.
+   This enters a "critical section" of the process.
+   Merges of other activating changes are not allowed from now on.
+   The targeted API change becomes the active API change.
+   This does not break any jobs.
+1. VPP developer (or CSIT committer) issues a recheck on the VPP patch.
+1. On failure, VPP and CSIT committers analyze what went wrong.
+   Typically, the active CRC collection matches only an older patch set,
+   but a newer patch set needs different CRC values,
+   either due to improvements on the VPP change in question,
+   or due to a rebase over a previously merged (unrelated) API change.
+   VPP perhaps needs to rebase, and CSIT definitely needs
+   to merge edits to the active collection. Then issue a recheck again,
+   and iterate until success.
+1. On success, VPP committer merges the active API change patch.
+   (This is also a delayed verification of the current active CRC collection.)
+1. VPP committer sends an e-mail to vpp-api-dev stating the support for
+   the previous CRC values will soon be removed, implying other changes
+   (whether API or not) should be rebased soon.
+1. VPP merge jobs create and upload new VPP packages.
+   This breaks trending jobs, but both VPP and CSIT verify jobs still work.
+1. CSIT developer makes the deactivating change mergeable:
+   The stable VPP build indicator is bumped to the build
+   that contains the active API change. The active CRC collection
+   (added by the activating change) is renamed to the new stable collection.
+   (The previous stable collection has already been deleted.)
+   At this time, the deactivating change should be uploaded to Gerrit and
+   csit verify jobs should be triggered.
+1. CSIT committer reviews the code, perhaps triggering any additional jobs
+   needed to verify the tests using the edited APIs are still working.
+1. When satisfied, CSIT committer merges the mergeable deactivating change
+   (to both master and oper).
+   The merge fixes trending jobs. VPP and CSIT verify jobs continue to work.
+   The merge also breaks some verify jobs for old changes in VPP,
+   as announced when the active API change was merged.
+   The merge is the point where the process leaves the "critical section",
+   thus allowing merges of activating changes for other API changes.
+1. CSIT committer sends an e-mail to vpp-api-dev stating the support for
+   the previous CRC values has been removed, and rebase is needed
+   for all affected VPP changes.
+1. Recheck of existing VPP patches in gerrit may cause the "VPP
+   API Incompatible Change Test" to send an email to the patch
+   submitter to rebase the patch to pick up the compatible VPP API
+   version files.
+
+### Real life examples
+
+Simple API change: https://gerrit.fd.io/r/c/vpp/+/23829
+
+Activating change: https://gerrit.fd.io/r/c/csit/+/23956
+
+Mergeable deactivating change: https://gerrit.fd.io/r/c/csit/+/24280
+
+Less straightforward mergeable deactivating change:
+https://gerrit.fd.io/r/c/csit/+/22526
+It shows:
+
++ CRC edits: supported_crcs.yaml
++ Version bump: VPP_STABLE_VER_UBUNTU_BIONIC
++ And even a way to work around failing tests:
+ eth2p-ethicmpv4-ip4base-eth-1tap-dev.robot
+
+Simple change that is both deactivating and activating:
+https://gerrit.fd.io/r/c/csit/+/23969
diff --git a/docs/content/introduction/bash_code_style.md b/docs/content/introduction/bash_code_style.md
new file mode 100644
index 0000000000..1cc1464056
--- /dev/null
+++ b/docs/content/introduction/bash_code_style.md
@@ -0,0 +1,651 @@
+---
+bookHidden: true
+title: "Bash Code Style"
+---
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
+"MAY", and "OPTIONAL" in this document are to be interpreted as
+described in [BCP 14](https://tools.ietf.org/html/bcp14)
+[[RFC2119]](https://tools.ietf.org/html/rfc2119)
+[[RFC8174]](https://tools.ietf.org/html/rfc8174)
+when, and only when, they appear in all capitals, as shown here.
+
+This document SHALL describe guidelines for writing reliable, maintainable,
+reusable and readable code for CSIT.
+
+# Proposed Style
+
+# File Types
+
+Bash files SHOULD NOT be monolithic. Generally, this document
+considers two types of bash files:
+
++ Entry script: Assumed to be called by user,
+ or a script "external" in some way.
+
+ + Sources bash libraries and calls functions defined there.
+
++ Library file: To be sourced by entry scripts, possibly also by other libraries.
+
+ + Sources other libraries for functions it needs.
+
+ + Or relies on a related file already having sourced that.
+
+ + Documentation SHALL imply which case it is.
+
+ + Defines multiple functions other scripts can call.
+
+# Safety
+
++ Variable expansions MUST be quoted, to prevent word splitting.
+
+ + This includes special "variables" such as "${1}".
+
+ + RECOMMENDED even if the value is safe, as in "$?" and "$#".
+
+ + It is RECOMMENDED to quote strings in general,
+ so text editors can syntax-highlight them.
+
+ + Even if the string is a numeric value.
+
+ + Commands and known options can get their own highlight, no need to quote.
+
+ + Example: You do not need to quote every word of
+ "pip install --upgrade virtualenv".
+
+ + Code SHALL NOT quote glob characters you need to expand (obviously).
+
+ + OPTIONALLY do not quote adjacent characters (such as dot or forward slash),
+ so that syntax highlighting makes them stand out compared to surrounding
+ ordinary strings.
+
+ + Example: cp "logs"/*."log" "."/
+
+ + Command substitution on right hand side of assignment is safe
+ without quotes.
+
+ + Note that command substitution limits the scope for quotes,
+ so it is NOT REQUIRED to escape the quotes in deeper levels.
+
+ + Both backticks and "dollar round-bracket" provide command substitution.
+ The following rules are RECOMMENDED:
+
+ + For simple constructs, use "dollar round-bracket".
+
+ + If there are round brackets in the surrounding text, use backticks,
+ as some editor highlighting logic can get confused.
+
+ + Avoid nested command substitution.
+
+ + Put intermediate results into local variables,
+ use "|| die" on each step of command substitution.
+
+ + Code SHOULD NOT be structured in a way where
+ word splitting is intended.
+
+ + Example: Variable holding a string of multiple command line arguments.
+
+ + Solution: Use an array variable, as sketched at the end of this section.
+
+ + Expansion MUST use quotes then: "${name[@]}".
+
+ + Word splitting MAY be used when creating arrays from command substitution.
+
++ Code MUST always check the exit code of commands.
+
+ + Traditionally, error code checking is done either by "set -e"
+ or by appending "|| die" after each command.
+ The first is unreliable, due to many rules affecting "set -e" behavior
+ (see <https://mywiki.wooledge.org/BashFAQ/105>), but "|| die"
+ relies on humans identifying each command, which is also unreliable.
+ When was the last time you checked error code of "echo" command,
+ for example?
+
+ + Another example: "set -e" in your function has no effect
+ if any ancestor call is done with logical or,
+ for example in "func || code=$?" construct.
+
+ + As there is no reliable method of error detection, and there are two
+ largely independent unreliable methods, the best we can do
+ is to apply both. So, code SHOULD explicitly
+ check each command (with "|| die" and similar) AND have "set -e" applied.
+
+ + Code MUST explicitly check each command, unless the command is well known,
+ and considered safe (such as the aforementioned "echo").
+
+ + The well known commands MUST still be checked implicitly via "set -e".
+
+ + See below for specific "set -e" recommendations.
+
++ Code SHOULD use "readlink -e" (or "-f" if target does not exist yet)
+ to normalize any path value to absolute path without symlinks.
+ It helps with debugging and identifies malformed paths.
+
++ Code SHOULD use such normalized paths for sourcing.
+
++ When exiting on a known error, code MUST print a longer, helpful message,
+ in order for the user to fix their situation if possible.
+
++ When error happens at an unexpected place, it is RECOMMENDED for the message
+ to be short and generic, instead of speculative.
+
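+A minimal sketch illustrating several of the rules above (the command name
+and option values are hypothetical, "die" is the error helper discussed
+throughout this document):
+
+```bash
+set -exuo pipefail
+# Arguments are held in an array, not in a word-split string.
+options=("--retries" "10")
+options+=("--log" "result.log")
+# Quoted expansion keeps each element a separate word; exit code is checked.
+some_command "${options[@]}" || die "Command failed."
+# Normalize a path value to an absolute path without symlinks.
+work_dir="$(readlink -e "./work")" || die "Readlink failed."
+```
+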
+# Bash Options
+
++ Code MUST apply "-x" to make debugging easier.
+
+ + Code MAY temporarily suppress such output in order to avoid spam
+ (e.g. in long busy loops), but it is still NOT RECOMMENDED to do so.
+
++ Code MUST apply "-e" for early error detection.
+
+ + But code still SHOULD use "|| die" for most commands,
+ as "-e" has numerous rules and exceptions.
+
+ + Code MAY apply "+e" temporarily for commands whose (possibly nonzero)
+ exit code it is interested in (see the sketch at the end of this section).
+
+ + Code MUST store "$?" and call "set -e" immediately afterwards.
+
+ + Code MUST NOT use this approach when calling functions.
+
+ + That is because functions are instructed to apply "set -e" on their own
+ which (when triggered) will exit the whole entry script.
+
+ + Unless overridden by ERR trap.
+ But code SHOULD NOT set any ERR trap.
+
+ + If code needs exit code of a function, it is RECOMMENDED to use
+ pattern 'code="0"; called_function || code="${?}"'.
+
+ + In this case, contributor MUST make sure nothing in the
+ called_function sub-graph relies on "set -e" behavior,
+ because the call being part of "or construct" disables it.
+
+ + Code MAY append "|| true" for benign commands,
+ when it is clear non-zero exit codes make no difference.
+
+ + Also in this case, the contributor MUST make sure nothing within
+ the called sub-graph depends on "set -e", as it is disabled.
+
++ Code MUST apply "-u" as unset variable is generally a typo, thus an error.
+
+ + Code MAY temporarily apply "+u" if a command needs that to pass.
+
+ + Virtualenv activation is the only known example so far.
+
++ Code MUST apply "-o pipefail" to make sure "-e" picks up errors
+ inside a piped construct.
+
+ + Code MAY use "|| true" inside a pipe construct, in the (improbable) case
+ when non-zero exit code still results in a meaningful pipe output.
+
++ All together: "set -exuo pipefail".
+
+ + Code MUST put that line near start of every file, so we are sure
+ the options are applied no matter what.
+
+ + "Near start" means "before any nontrivial code".
+
+ + Basically only copyright is RECOMMENDED to appear before.
+
+ + Also code MUST put the line near start of function bodies
+ and subshell invocations.
+
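+A minimal sketch of the temporary "+e" pattern above (the file name and
+pattern are hypothetical):
+
+```bash
+set -exuo pipefail
+# Temporarily allow a nonzero exit code from a command (not a function).
+set +e
+grep -q "pattern" "input.txt"
+code="${?}"
+set -e
+echo "grep exit code: ${code}"
+```
+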
+# Functions
+
+There are (at least) two possibilities for how code from an external file
+can be executed. Either the file contains a code block to execute
+on each "source" invocation, or the file just defines functions
+which have to be called separately.
+
+This document considers the "function way" to be better;
+here are some pros and cons:
+
++ Cons:
+
+ + The function way takes more space. Files have more lines,
+ and the code in function body is one indent deeper.
+
+ + It is not easy to create functions for low-level argument manipulation,
+ as "shift" command in the function code does not affect the caller context.
+
+ + Call sites frequently refer to code two times,
+ when sourcing the definition and when executing the function.
+
+ + It is not clear when a library can rely on its relative
+ to have performed the sourcing already.
+
+ + Ideally, each library should detect if it has been sourced already
+ and return early, which takes even more space.
+
++ Pros:
+
+ + Some code blocks are more useful when used as function,
+ to make call site shorter.
+
+ + Examples: Trap functions, "die" function.
+
+ + The "import" part and "function" part usually have different side effects,
+ making the documentation more focused (even if longer overall).
+
+ + There is zero risk of argument-less invocation picking arguments
+ from parent context.
+
+ + This safety feature is the main reason for choosing the "function way".
+
+ + This allows code blocks to support optional arguments.
+
++ Rules:
+
+ + Library files MUST be only "source"d. For example if "tox" calls a script,
+ it is an entry script.
+
+ + Library files (upon sourcing) MUST minimize side effects.
+
+ + The only permitted side effects MUST be directly related to:
+
+ + Defining functions (without executing them).
+
+ + Sourcing sub-library files.
+
+ + If a bash script indirectly calls another bash script,
+ it is not a "source" operation, variables are not shared,
+ so the called script MUST be considered an entry script,
+ even if it implements logic fitting into a single function.
+
+ + Entry scripts SHOULD avoid duplicating any logic.
+
+ + Clear duplicated blocks MUST be moved into libraries as functions.
+
+ + Blocks with low amount of duplication MAY remain in entry scripts.
+
+ + Usual motives for not creating functions are:
+
+ + The extracted function would have too much logic for processing
+ arguments (instead of hardcoding values as in entry script).
+
+ + The arguments needed would be too verbose.
+
+ + And using "set +x" would take too much vertical space
+ (when compared to entry script implementation).
+
+# Variables
+
+This document describes two kinds of variables: called "local" and "global".
+
++ Local variables:
+
+ + Variable name MUST contain only lower case letters, digits and underscores.
+
+ + Code MUST NOT export local variables.
+
+ + Code MUST NOT rely on local variables set in different contexts.
+
+ + Documentation is NOT REQUIRED.
+
+ + Variable name SHOULD be descriptive enough.
+
+ + Local variable MUST be initialized before first use.
+
+ + Code SHOULD have a comment if a reader might have missed
+ the initialization.
+
+ + Unset local variables when leaving the function.
+
+ + Explicitly typeset by "local" builtin command.
+
+ + Require strict naming convention, e.g. function_name__variable_name.
+
++ Global variables:
+
+ + Variable name MUST contain only upper case letters, digits and underscores.
+
+ + They SHOULD NOT be exported, unless external commands need them
+ (e.g. PYTHONPATH).
+
+ + Code MUST document if a function (or its inner call)
+ reads a global variable.
+
+ + Code MUST document if a function (or its inner call)
+ sets or rewrites a global variable.
+
+ + If a function "wants to return a value", it SHOULD be implemented
+ as the function setting (or rewriting) a global variable,
+ and the call sites reading that variable.
+
+ + If a function "wants to accept an argument", it IS RECOMMENDED
+ to be implemented as the call sites setting or rewriting global variables,
+ and the function reading those variables.
+ But see below for direct arguments.
+
++ Code MUST use curly brackets when referencing variables,
+ e.g. "${my_variable}".
+
+ + It makes related constructs (such as ${name:-default}) less surprising.
+
+ + It looks more similar to Robot Framework variables (which is good).
+
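+A minimal sketch of the conventions above (variable and function names
+are hypothetical):
+
+```bash
+function count_logs () {
+
+    # Reads global SEARCH_DIR, sets ("returns") global LOG_COUNT.
+
+    set -exuo pipefail
+
+    local pattern="*.log"
+    LOG_COUNT="$(find "${SEARCH_DIR}" -name "${pattern}" | wc -l)" || {
+        die "Find failed."
+    }
+}
+```
+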
+# Arguments
+
+Bash scripts and functions MAY accept arguments, named "${1}", "${2}" and so on.
+As a whole, they are available via "$@".
+You MAY use "shift" command to consume an argument.
+
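+A minimal sketch of argument handling (the function and the meaning of its
+arguments are hypothetical):
+
+```bash
+function print_args () {
+
+    # ${1} - Required first argument, printed and then consumed by shift.
+
+    set -exuo pipefail
+
+    echo "first argument: ${1}"
+    shift
+    echo "remaining arguments:" "${@}"
+}
+```
+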
+## Contexts
+
+Functions never have access to parent arguments, but they can read and write
+variables set or read by parent contexts.
+
+### Arguments Or Variables
+
++ Both arguments and global variables MAY act as an input.
+
++ In general, if the caller is likely to supply the value already placed
+ in a global variable of known name, it is RECOMMENDED
+ to use that global variable.
+
++ Construct "${NAME:-value}" can be used equally well for arguments,
+ so default values are possible for both input methods.
+
++ Arguments are positional, so there are restrictions on which input
+ is optional.
+
++ Functions SHOULD either look at arguments (possibly also
+ reading global variables to use as defaults), or look at variables only.
+
++ Code MUST NOT rely on "${0}", it SHOULD use "${BASH_SOURCE[0]}" instead
+ (and apply "readlink -e") to get the current block location,
+ as sketched at the end of this section.
+
++ For entry scripts, it is RECOMMENDED to use standard parsing capabilities.
+
+ + For most Linux distros, "getopt" is RECOMMENDED.
+
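+A minimal sketch of the "${BASH_SOURCE[0]}" rule above (the library
+name "common.sh" is hypothetical):
+
+```bash
+set -exuo pipefail
+# Locate the current file, normalized, then source a sibling library.
+this_file="$(readlink -e "${BASH_SOURCE[0]}")" || exit 1
+here="$(dirname "${this_file}")" || exit 1
+source "${here}/common.sh" || {
+    echo "Source failed." >&2
+    exit 1
+}
+```
+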
+# Working Directory Handling
+
++ Functions SHOULD act correctly without assuming
+ what the current working directory is, and without changing it.
+
+ + That is why global variables and arguments SHOULD contain
+ (normalized) full paths.
+
+ + Motivation: Different call sites MAY rely on different working directories.
+
++ A function MAY return (also with nonzero exit code) when working directory
+ is changed.
+
+ + In this case the function documentation MUST clearly state where (and when)
+ the working directory is changed.
+
+ + Exception: Functions with undocumented exit code.
+
+ + Those functions MUST return nonzero code only on "set -e" or "die".
+
+ + Note that both "set -e" and "die" by default result in exit of the whole
+ entry script, but the caller MAY have altered that behavior
+ (by registering ERR trap, or redefining die function).
+
+ + Any callers which use "set +e" or "|| true" MUST make sure
+ their (and their caller ancestors') assumptions on working directory
+ are not affected.
+
+ + Such callers SHOULD do that by restoring the original working directory
+ either in their code,
+
+ + or contributors SHOULD do such restoration in the function code
+ (see below) if that is more convenient.
+
+ + Motivation: Callers MAY rely on this side effect to simplify their logic.
+
++ A function MAY assume a particular directory is already set
+ as the working directory (to save space).
+
+ + In this case function documentation MUST clearly state what the assumed
+ working directory is.
+
+ + Motivation: Callers MAY call several functions with common
+ directory of interest.
+
+ + Example: Several download actions to execute in sequence,
+ implemented as functions assuming ${DOWNLOAD_DIR}
+ is the working directory.
+
++ A function MAY change the working directory transiently,
+ before restoring it back before return.
+
+ + Such functions SHOULD use command "pushd" to change the working directory.
+
+ + Such functions SHOULD use "trap 'trap - RETURN; popd' RETURN"
+ immediately after the pushd (see the sketch at the end of this section).
+
+ + In that case, the "trap - RETURN" part MUST be included,
+ to restore any trap set by ancestor.
+
+ + Functions MAY call "trap - RETURN; popd" explicitly.
+
+ + Such functions MUST NOT call another pushd (before an explicit popd),
+ as traps do not stack within a function.
+
++ If entry scripts also use traps to restore working directory (or other state),
+ they SHOULD use EXIT traps instead.
+
+ + That is because "exit" command, as well as the default behavior
+ of "die" or "set -e" cause direct exit (skipping function returns).
+
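+A minimal sketch of the transient working directory change described
+above (the global variable ${BUILD_DIR} and the make step are
+hypothetical):
+
+```bash
+function build_in_dir () {
+
+    # Work inside ${BUILD_DIR}, restoring caller's directory on return.
+
+    set -exuo pipefail
+
+    pushd "${BUILD_DIR}" || die "Pushd failed."
+    trap 'trap - RETURN; popd' RETURN
+    make || die "Make failed."
+}
+```
+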
+# Function Size
+
++ In general, code SHOULD follow reasoning similar to how pylint
+ limits code complexity.
+
++ It is RECOMMENDED to have functions somewhat simpler than Python functions,
+ as Bash is generally more verbose and less readable.
+
++ If code contains comments in order to partition a block
+ into sub-blocks, the sub-blocks SHOULD be moved into separate functions.
+
+ + Unless the sub-blocks are essentially one-liners,
+ not readable just because external commands do not have
+ obvious enough parameters. Use common sense.
+
+# Documentation
+
++ The library path and filename are visible from sourcing sites. They SHOULD be
+ descriptive enough, so readers do not need to look inside to determine
+ how and why the sourced file is used.
+
+ + If code would use several functions with similar names,
+ it is RECOMMENDED to create a (well-named) sub-library for them.
+
+ + Code MAY create deep library trees if needed, it SHOULD store
+ common path prefixes into global variables to make sourcing easier.
+
+ + Contributors, look at other files in the subdirectory. You SHOULD
+ improve their filenames when adding or removing files.
+
+ + Library files SHOULD NOT have executable flag set.
+
+ + Library files SHOULD have an extension .sh (or perhaps .bash).
+
+ + It is RECOMMENDED for entry scripts to also have executable flag unset
+ and have .sh extension.
+
++ Each entry script MUST start with a shebang.
+
+ + "#!/usr/bin/env bash" is RECOMMENDED.
+
+ + Code SHOULD put an empty line after shebang.
+
+ + Library files SHOULD NOT contain a shebang, as "source" is the primary
+ method to include them.
+
++ Following that, there SHOULD be a block of comment lines with copyright.
+
+ + It is a boilerplate, but human eyes are good at ignoring it.
+
+ + Overhead for git is also negligible.
+
++ Following that, there MUST be "set -exuo pipefail".
+
+ + It acts as an anchor for humans to start paying attention.
+
+Then it depends on script type.
+
+## Library Documentation
+
++ Following "set -exuo pipefail" SHALL come the "import part" documentation.
+
++ Then SHALL be the import code
+ ("source" commands and a bare minimum they need).
+
++ Then SHALL be the function definitions, and inside:
+
+ + The body SHALL start with the function documentation explaining API contract,
+ similar to Robot [Documentation] or Python function-level docstring.
+ A full sketch appears at the end of this section.
+
+ + See below.
+
+ + "set -exuo pipefail" SHALL be the first executable line
+ in the function body, except functions which legitimately need
+ different flags. Those SHALL also start with appropriate "set" command(s).
+
+ + Lines containing code itself SHALL follow.
+
+ + "Code itself" SHALL include comment lines
+ explaining any non-obvious logic.
+
+ + There SHALL be two empty lines between function definitions.
+
+More details on function documentation:
+
+Generally, code SHOULD use comments to explain anything
+not obvious from the function name.
+
++ Function documentation SHOULD start with short description of function
+ operation or motivation, but only if not obvious from function name.
+
++ Documentation SHOULD continue with listing any non-obvious side effect:
+
+ + Documentation MUST list all read global variables.
+
+ + Documentation SHOULD include descriptions of semantics
+ of global variable values.
+ It is RECOMMENDED to mention which function is supposed to set them.
+
+ + The "include descriptions" part SHOULD apply to other items as well.
+
+ + Documentation MUST list all global variables set, unset, reset,
+ or otherwise updated.
+
+ + It is RECOMMENDED to list all hardcoded values used in code.
+
+ + Not critical, but can hint at future improvements.
+
+ + Documentation MUST list all files or directories read
+ (so caller can make sure their content is ready).
+
+ + Documentation MUST list all files or directories updated
+ (created, deleted, emptied, otherwise edited).
+
+ + Documentation SHOULD list all functions called (so reader can look them up).
+
+ + Documentation SHOULD mention where the functions are defined,
+ if not in the current file.
+
+ + Documentation SHOULD list all external commands executed.
+
+ + Because their behavior can change "out of bounds", meaning
+ the contributor changing the implementation of the external command
+ can be unaware that this particular function is interested in its side effects.
+
+ + Documentation SHOULD explain exit code (coming from
+ the last executed command).
+
+ + Usually, most functions SHOULD be "pass or die",
+ but some callers MAY be interested in nonzero exit codes
+ without using global variables to store them.
+
+ + Remember, "exit 1" ends not only the function, but all scripts
+ in the source chain, so code MUST NOT use it for other purposes.
+
+ + Code SHOULD call "die" function instead. This way the caller can
+ redefine that function, if there is a good reason for not exiting
+ on function failure.
+
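+A hypothetical sketch of the library function layout described in this
+section (variable, directory and function names are illustrative only):
+
+```bash
+function archive_logs () {
+
+    # Copy log files into the archive directory.
+    #
+    # Variables read:
+    # - LOG_DIR - Path to existing directory with log files.
+    # - ARCHIVE_DIR - Path to directory where logs are copied to.
+    # Directories updated:
+    # - ${ARCHIVE_DIR} - Created if missing, log files copied into it.
+    # Functions called:
+    # - die - Print to stderr and exit.
+
+    set -exuo pipefail
+
+    mkdir -p "${ARCHIVE_DIR}" || die "Mkdir failed."
+    cp "${LOG_DIR}"/*."log" "${ARCHIVE_DIR}"/ || die "Copy failed."
+}
+```
+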
+## Entry Script Documentation
+
++ After "set -exuo pipefail", a high-level description SHALL come.
+
+ + Entry scripts are rarely reused, so detailed side effects
+ are OPTIONAL to document.
+
+ + But code SHOULD document the primary side effects.
+
++ Then SHALL come a few commented lines to import the library with "die" function.
+
++ Then a block of "source" commands for sourcing other needed libraries SHALL follow.
+
+ + In alphabetical order, any "special" library SHOULD be
+ in the previous block (for "die").
+
++ Then a block of commands processing arguments SHOULD follow (if needed).
+
++ Then SHALL come a block of function calls (with parameters as needed).
+
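+A hypothetical sketch of the entry script layout described above
+(library and function names are illustrative only):
+
+```bash
+#!/usr/bin/env bash
+
+# Entry script sketch: builds a package and runs tests.
+
+set -exuo pipefail
+
+# Import the library with the "die" function first.
+this_file="$(readlink -e "${BASH_SOURCE[0]}")" || exit 1
+here="$(dirname "${this_file}")" || exit 1
+source "${here}/lib/die.sh" || exit 1
+
+# Other needed libraries, in alphabetical order.
+source "${here}/lib/build.sh" || die "Source failed."
+source "${here}/lib/test.sh" || die "Source failed."
+
+# Argument processing would go here (if needed).
+
+# Function calls.
+build_package || die
+run_tests || die
+```
+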
+# Other General Recommendations
+
++ Code SHOULD NOT repeat itself, even in documentation:
+
+ + For hardcoded values, a general description SHOULD be written
+ (instead of copying the value), so when someone edits the value
+ in the code, the description still applies.
+
+ + If affected directory name is taken from a global variable,
+ documentation MAY distribute the directory description
+ over the two items.
+
+ + If most of side effects come from an inner call,
+ documentation MAY point the reader to the documentation
+ of the called function (instead of listing all the side effects).
+
++ But documentation SHOULD repeat it if the information crosses functions.
+
+ + Item description MUST NOT be skipped just because the reader
+ should have read parent/child documentation already.
+
+ + Frequently it is RECOMMENDED to copy&paste item descriptions
+ between functions.
+
+ + But sometimes it is RECOMMENDED to vary the descriptions. For example:
+
+ + A global variable setter MAY document how it figures out the value
+ (without caring about what it will be used for by other functions).
+
+ + A global variable reader MAY document how it uses the value
+ (without caring about how it has been figured out by the setter).
+
++ When possible, Bash code SHOULD be made to look like Python
+ (or Robot Framework). Those are the three primary languages CSIT code relies on,
+ so it is nicer for the readers to see similar expressions when possible.
+ Examples:
+
+ + Code MUST use indentation, 1 level is 4 spaces.
+
+ + Code SHOULD use "if" instead of "&&" constructs.
+
+ + For comparisons, code SHOULD use operators such as "!=" (needs "[[").
+
++ Code MUST NOT use more than 80 characters per line.
+
+ + If long external command invocations are needed,
+ code SHOULD use array variables to shorten them.
+
+ + If long strings (or arrays) are needed, code SHOULD use "+=" operator
+ to grow the value over multiple lines.
+
+ + If "|| die" does not fit with the command, code SHOULD use curly braces:
+
+ + Current line has "|| {",
+
+ + Next line has the die commands (indented one level deeper),
+
+ + Final line closes with "}" at original indent level.
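+
+A minimal sketch of the three patterns above (the command and message
+are hypothetical):
+
+```bash
+set -exuo pipefail
+# Array variable keeps a long invocation under the line length limit.
+cmd=("some_long_command" "--with" "quite" "many" "--arguments")
+# The "+=" operator grows a long string over multiple lines.
+message="This is the first part of a long message,"
+message+=" and this is the second part."
+"${cmd[@]}" "${message}" || {
+    die "The long command failed."
+}
+```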
diff --git a/docs/content/introduction/branches.md b/docs/content/introduction/branches.md
new file mode 100644
index 0000000000..20759b9c78
--- /dev/null
+++ b/docs/content/introduction/branches.md
@@ -0,0 +1,192 @@
+---
+bookHidden: true
+title: "Git Branches in CSIT"
+---
+
+# Git Branches in CSIT
+
+## Overview
+
+This document describes how to create and remove git branches in CSIT project.
+
+To be able to perform everything described in this file, you must be **logged
+in as a committer**.
+
+## Operational Branches
+
+For more information about operational branches see
+[CSIT/Branching Strategy](https://wiki.fd.io/view/CSIT/Branching_Strategy) and
+[CSIT/Jobs](https://wiki.fd.io/view/CSIT/Jobs) on
+[fd.io](https://fd.io) [wiki](https://wiki.fd.io/view/CSIT) pages.
+
+> Note: The branch `rls2009_lts` is used here only as an example.
+
+### Pre-requisites
+
+1. The last builds of weekly and semiweekly jobs must finish with status
+ *"Success"*.
+1. If any of the watched jobs failed, try to find the root cause, fix it and run it
+ again.
+
+The watched jobs are:
+
+- master:
+ - [csit-vpp-device-master-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-master-ubuntu1804-1n-skx-weekly)
+ - [csit-vpp-device-master-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-master-ubuntu1804-1n-skx-semiweekly)
+- 2009_lts:
+ - [csit-vpp-device-2009_lts-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2009_lts-ubuntu1804-1n-skx-weekly)
+ - [csit-vpp-device-2009_lts-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2009_lts-ubuntu1804-1n-skx-semiweekly)
+
+### Procedure
+
+**A. CSIT Operational Branch**
+1. Take the revision string from the last successful build of the **weekly**
+ job, e.g. **Revision**: 0f9b20775b4a656b67c7039e2dda4cf676af2b21.
+1. Open [Gerrit](https://gerrit.fd.io).
+1. Go to
+ [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches).
+1. Click `CREATE NEW`.
+1. Fill in the revision number and the name of the new operational branch. Its
+ format is: `oper-YYMMDD` for master and `oper-rls{RELEASE}-{YYMMDD}` or
+ `oper-rls{RELEASE}_lts-{YYMMDD}` for release branches.
+1. Click "CREATE".
+1. If needed, delete old operational branches by clicking "DELETE".
+
+**B. VPP Stable version**
+1. Open the console log of the last successful **semiweekly** build and search
+ for VPP version (e.g. vpp_21 ...).
+1. You should find the string with this structure:
+ `vpp_21.01-rc0~469-g7acab3790~b368_amd64.deb`
+1. Modify [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC)
+ and [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) files.
+1. Use a string with the build number, e.g. `21.01-rc0~469_g7acab3790~b129`
+ for [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) and a string
+ without the build number, e.g. `21.01-rc0~469_g7acab3790` for
+ [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC).
+1. Update the stable versions in master and in all LTS branches.
+
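+A hedged sketch of this update in a local CSIT checkout, using the
+example version strings from above:
+
+```bash
+# Stable version without build number for Ubuntu, with it for CentOS.
+echo "21.01-rc0~469_g7acab3790" > "VPP_STABLE_VER_UBUNTU_BIONIC"
+echo "21.01-rc0~469_g7acab3790~b129" > "VPP_STABLE_VER_CENTOS"
+git add "VPP_STABLE_VER_UBUNTU_BIONIC" "VPP_STABLE_VER_CENTOS"
+git commit -s -m "Update of VPP_STABLE_VER files"
+```
+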
+## Release Branches
+
+> Note: VPP release 21.01 is used here only as an example.
+
+### Pre-requisites
+
+1. VPP release manager sends the information email to announce that the RC1
+ milestone for VPP {release}, e.g. 21.01, is complete, and the artifacts are
+ available.
+1. The artifacts (*.deb and *.rpm) should be available at
+ `https://packagecloud.io/fdio/{release}`. For example see artifacts for the
+ [VPP release 21.01](https://packagecloud.io/fdio/2101). The last available
+ build is to be used.
+1. All CSIT patches for the release are merged in CSIT master branch.
+
+### Procedure
+
+**A. Release branch**
+
+1. Open [Gerrit](https://gerrit.fd.io).
+1. Go to
+ [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches).
+1. Save the revision string of master for further use.
+1. Click `CREATE NEW`.
+1. Fill in the revision number and the name of the new release branch. Its
+ format is: `rlsYYMM`, e.g. rls2101.
+1. Click "CREATE".
+
+**B. Jenkins jobs**
+
+See ["Add CSIT rls2101 branch"](https://gerrit.fd.io/r/c/ci-management/+/30439)
+and ["Add report jobs to csit rls2101 branch"](https://gerrit.fd.io/r/c/ci-management/+/30462)
+patches as an example.
+
+1. [csit.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit.yaml):
+ Documentation of the source code and the Report
+ - Add release branch (rls2101) for `csit-docs-merge-{stream}` and
+ `csit-report-merge-{stream}` (project --> stream).
+1. [csit-perf.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-perf.yaml):
+ Verify jobs
+ - Add release branch (rls2101) to `project --> jobs -->
+ csit-vpp-perf-verify-{stream}-{node-arch} --> stream`.
+ - Add release branch (rls2101) to `project --> project: 'csit' --> stream`.
+ - Add release branch (rls2101) to `project --> project: 'csit' --> stream_report`.
+1. [csit-tox.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-tox.yaml):
+ tox
+ - Add release branch (rls2101) to `project --> stream`.
+1. [csit-vpp-device.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-vpp-device.yaml):
+ csit-vpp-device
+ - Add release branch (rls2101) to `project --> jobs (weekly / semiweekly) --> stream`.
+ - Add release branch (rls2101) to `project --> project: 'csit' --> stream`.
+
+**C. VPP Stable version**
+
+See the patch
+[Update of VPP_REPO_URL and VPP_STABLE_VER files](https://gerrit.fd.io/r/c/csit/+/30461)
+and / or
+[rls2101: Update VPP_STABLE_VER files to release version](https://gerrit.fd.io/r/c/csit/+/30976)
+as an example.
+
+1. Find the last successful build on the
+ [Package Cloud](https://packagecloud.io) for the release, e.g.
+ [VPP release 21.01](https://packagecloud.io/fdio/2101).
+1. Clone the release branch to your PC:
+ `git clone --depth 1 ssh://<user>@gerrit.fd.io:29418/csit --branch rls{RELEASE}`
+1. Modify [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC)
+ and [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) files with the last
+ successful build.
+1. Modify [VPP_REPO_URL](../../VPP_REPO_URL) to point to the new release, e.g.
+ `https://packagecloud.io/install/repositories/fdio/2101`.
+1. You can also modify the [.gitreview](../../.gitreview) file and set the new
+ default branch.
+1. Wait until the verify jobs
+ - [csit-vpp-device-2101-ubuntu1804-1n-skx](https://jenkins.fd.io/job/csit-vpp-device-2101-ubuntu1804-1n-skx)
+ - [csit-vpp-device-2101-ubuntu1804-1n-tx2](https://jenkins.fd.io/job/csit-vpp-device-2101-ubuntu1804-1n-tx2)
+
+ successfully finish and merge the patch.
+
+**D. CSIT Operational Branch**
+
+1. Manually start (Build with Parameters) the weekly job
+ [csit-vpp-device-2101-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2101-ubuntu1804-1n-skx-weekly)
+1. When it successfully finishes, take the revision string e.g. **Revision**:
+ 876b6c1ae05bfb1ad54ff253ea021f3b46780fd4 to create a new operational branch
+ for the new release.
+1. Open [Gerrit](https://gerrit.fd.io).
+1. Go to
+ [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches).
+1. Click `CREATE NEW`.
+1. Fill in the revision number and the name of the new operational branch. Its
+ format is: `oper-rls{RELEASE}-YYMMDD` e.g. `oper-rls2101-201217`.
+1. Click "CREATE".
+1. Manually start (Build with Parameters) the semiweekly job
+ [csit-vpp-device-2101-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2101-ubuntu1804-1n-skx-semiweekly)
+1. When it successfully finishes, check in the console log if it used the right VPP
+ version (search for `VPP_VERSION=`) from the right repository (search for
+ `REPO_URL=`).
+
+**E. Announcement**
+
+If everything is as it should be, send the announcement email to
+`csit-dev@lists.fd.io` mailing list.
+
+*Example:*
+
+Subject:
+```text
+CSIT rls2101 branch pulled out
+```
+
+Body:
+```text
+CSIT rls2101 branch [0] is created and fully functional.
+
+Corresponding operational branch (oper-rls2101-201217) has been created too.
+
+We are starting dry runs for performance ndrpdr iterative tests to get initial
+ndrpdr values with available rc1 packages as well as to test all the infra
+before starting report data collection runs.
+
+Regards,
+<signature>
+
+[0] https://git.fd.io/csit/log/?h=rls2101
+```
diff --git a/docs/content/introduction/dashboard_history.md b/docs/content/introduction/dashboard_history.md
new file mode 100644
index 0000000000..f7f9db576a
--- /dev/null
+++ b/docs/content/introduction/dashboard_history.md
@@ -0,0 +1,17 @@
+---
+title: "Dashboard History"
+weight: 1
+---
+
+# Dashboard History
+
+FD.io {{< release_csit >}} Dashboard History and per .[ww] revision changes are
+listed below.
+
+ **.[ww] Revision** | **Changes**
+--------------------|------------------
+ .10 | Initial revision
+
+FD.io CSIT revisions follow the CSIT-[yy][mm].[ww] numbering format, with the
+version denoted by concatenation of two-digit year [yy] and two-digit month
+[mm], and the maintenance revision identified by two-digit calendar week
+number [ww].
diff --git a/docs/content/introduction/design.md b/docs/content/introduction/design.md
new file mode 100644
index 0000000000..e236b47d6f
--- /dev/null
+++ b/docs/content/introduction/design.md
@@ -0,0 +1,148 @@
+---
+title: "Design"
+weight: 3
+---
+
+# Design
+
+FD.io CSIT system design needs to meet continuously expanding requirements of
+FD.io projects including VPP, related sub-systems (e.g. plugin applications,
+DPDK drivers) and FD.io applications (e.g. DPDK applications), as well as
+growing number of compute platforms running those applications. With CSIT
+project scope and charter including both FD.io continuous testing AND
+performance trending/comparisons, those evolving requirements further amplify
+the need for CSIT framework modularity, flexibility and usability.
+
+## Design Hierarchy
+
+CSIT follows a hierarchical system design with SUTs and DUTs at the bottom level
+of the hierarchy, presentation level at the top level and a number of functional
+layers in-between. The current CSIT system design including CSIT framework is
+depicted in the figure below.
+
+{{< svg "static/csit_design_picture.svg" >}}
+
+A brief bottom-up description is provided here:
+
+1. SUTs, DUTs, TGs
+ - SUTs - Systems Under Test;
+ - DUTs - Devices Under Test;
+ - TGs - Traffic Generators;
+2. Level-1 libraries - Robot and Python
+ - Lowest level CSIT libraries abstracting underlying test environment, SUT,
+ DUT and TG specifics;
+ - Used commonly across multiple L2 KWs;
+ - Performance and functional tests:
+ - L1 KWs (KeyWords) are implemented as RF libraries and Python
+ libraries;
+ - Performance TG L1 KWs:
+ - All L1 KWs are implemented as Python libraries:
+ - Support for TRex only today;
+ - CSIT IXIA drivers in progress;
+ - Performance data plane traffic profiles:
+ - TG-specific stream profiles provide full control of:
+ - Packet definition - layers, MACs, IPs, ports, combinations thereof
+ e.g. IPs and UDP ports;
+ - Stream definitions - different streams can run together, delayed,
+ one after each other;
+ - Stream profiles are independent of CSIT framework and can be used
+ in any TRex setup, can be sent anywhere to repeat tests with
+ exactly the same setup;
+ - Easily extensible - one can create a new stream profile that meets
+ tests requirements;
+ - Same stream profile can be used for different tests with the same
+ traffic needs;
+ - Functional data plane traffic scripts:
+ - Scapy specific traffic scripts;
+3. Level-2 libraries - Robot resource files:
+ - Higher level CSIT libraries abstracting required functions for executing
+ tests;
+ - L2 KWs are classified into the following functional categories:
+ - Configuration, test, verification, state report;
+ - Suite setup, suite teardown;
+ - Test setup, test teardown;
+4. Tests - Robot:
+ - Test suites with test cases;
+ - Performance tests using physical testbed environment:
+ - VPP;
+ - DPDK-Testpmd;
+ - DPDK-L3Fwd;
+ - Tools:
+ - Documentation generator;
+ - Report generator;
+ - Testbed environment setup ansible playbooks;
+ - Operational debugging scripts;
+
+## Test Lifecycle Abstraction
+
+A well coded test must follow a disciplined abstraction of the test
+lifecycle that includes setup, configuration, test and verification.
+In addition, to improve test execution efficiency, the common aspects of
+test setup and configuration shared across multiple test cases should be
+done only once. Translating these high-level guidelines into Robot
+Framework, one arrives at the definition of well coded RF tests for FD.io
+CSIT. Anatomy of Good Tests for CSIT:
+
+1. Suite Setup - Suite startup Configuration common to all Test Cases in suite:
+ uses Configuration KWs, Verification KWs, StateReport KWs;
+2. Test Setup - Test startup Configuration common to multiple Test Cases: uses
+ Configuration KWs, StateReport KWs;
+3. Test Case - uses L2 KWs with RF Gherkin style:
+ - prefixed with {Given} - Verification of Test setup, reading state: uses
+ Configuration KWs, Verification KWs, StateReport KWs;
+ - prefixed with {When} - Test execution: Configuration KWs, Test KWs;
+ - prefixed with {Then} - Verification of Test execution, reading state: uses
+ Verification KWs, StateReport KWs;
+4. Test Teardown - post Test teardown with Configuration cleanup and
+ Verification common to multiple Test Cases - uses: Configuration KWs,
+ Verification KWs, StateReport KWs;
+5. Suite Teardown - Suite post-test Configuration cleanup: uses Configuration
+ KWs, Verification KWs, StateReport KWs;
+
+## RF Keywords Functional Classification
+
+CSIT RF KWs are classified into the functional categories matching the test
+lifecycle events described earlier. All CSIT RF L2 and L1 KWs have been grouped
+into the following functional categories:
+
+1. Configuration;
+2. Test;
+3. Verification;
+4. StateReport;
+5. SuiteSetup;
+6. TestSetup;
+7. SuiteTeardown;
+8. TestTeardown;
+
+## RF Keywords Naming Guidelines
+
+Readability counts: "..code is read much more often than it is written."
+Hence following a good and consistent grammar practice is important when
+writing Robot Framework KeyWords and Tests. All CSIT test cases
+are coded using Gherkin style and include only L2 KWs references. L2 KWs are
+coded using simple style and include L2 KWs, L1 KWs, and L1 python references.
+To improve readability, the proposal is to use the same grammar for both
+Robot Framework KW styles, and to formalize the grammar of English
+sentences used for naming the Robot Framework KWs. Robot
+Framework KWs names are short sentences expressing functional description of
+the command. They must follow English sentence grammar in one of the following
+forms:
+
+1. **Imperative** - verb-object(s): *"Do something"*, verb in base form.
+2. **Declarative** - subject-verb-object(s): *"Subject does something"*, verb in
+ a third-person singular present tense form.
+3. **Affirmative** - modal_verb-verb-object(s): *"Subject should be something"*,
+ *"Object should exist"*, verb in base form.
+4. **Negative** - modal_verb-Not-verb-object(s): *"Subject should not be
+ something"*, *"Object should not exist"*, verb in base form.
+
+Passive form MUST NOT be used. However, a usage of past participle as an
+adjective is okay. See usage examples provided in the Coding guidelines
+section below. Following sections list applicability of the above
+grammar forms to different Robot Framework KW categories. Usage
+examples are provided, both good and bad.
+
+## Coding Guidelines
+
+Coding guidelines can be found on
+[Design optimizations wiki page](https://wiki.fd.io/view/CSIT/Design_Optimizations).
\ No newline at end of file
diff --git a/docs/content/introduction/model_schema.md b/docs/content/introduction/model_schema.md
new file mode 100644
index 0000000000..ae3ba38fd7
--- /dev/null
+++ b/docs/content/introduction/model_schema.md
@@ -0,0 +1,60 @@
+---
+bookHidden: true
+title: "Model Schema"
+---
+
+# Model Schema
+
+This document describes what is currently implemented in CSIT,
+especially the export side (UTI), not import side (PAL).
+
+## Version
+
+This document is valid for CSIT model version 1.4.0.
+
+It is recommended to use semantic versioning: https://semver.org/
+That means, if the new model misses a field present in the old model,
+bump the major version. If the new model adds a field
+not present in the old model, bump the minor version.
+Any other edit in the implementation (or documentation) bumps the patch version.
+If you change value type or formatting,
+consider whether the parser (PAL) understands the new value correctly.
+Renaming a field is the same as adding a new one and removing the old one.
+The parser (PAL) has to know the exact major version and the minimal minor
+version; barring bugs, it can ignore the patch version and any higher minor
+version.
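+
+As a worked example (hypothetically starting from version 1.4.0): removing or
+renaming a field would result in 2.0.0, adding a new field in 1.5.0, and a
+documentation-only fix in 1.4.1.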
+
+## UTI
+
+UTI stands for Unified Test Interface.
+It mainly focuses on exporting information gathered during test run
+into JSON output files.
+
+### Output Structure
+
+UTI outputs form a filesystem tree (a single tree), where directories
+correspond to suite levels and files correspond to suite setup, suite teardown
+or any test case at that level of the suite.
+The directory name comes from the SUITE_NAME Robot variable (only the last
+part, as the previous parts name the higher level suites), converted to
+lowercase.
+If the suite name contains spaces (Robot converts underscores to spaces),
+they are replaced with underscores.
+
+The filesystem tree is rooted under tests/ (as the suites in git are there),
+and contains one file for each component (test case, suite setup, suite
+teardown).
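+
+As an illustration, the export of one suite might look like this (the suite
+and test names below are hypothetical):
+
+    tests/vpp/perf/ip4/2n1l-10ge2p1x710-ethip4-ip4base-ndrpdr/
+        setup.info.json
+        teardown.info.json
+        64b-1c-ethip4-ip4base-ndrpdr.info.json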
+
+Although we expect only ASCII text in the exported files,
+we manipulate files using UTF-8 encoding,
+so if Robot Framework emits a non-ASCII character, it is handled.
+
+### JSON schemas
+
+CSIT model is formally defined as a collection of JSON schema documents,
+one for each output file type.
+
+The current version specifies only one output file type:
+Info output for test case.
+
+The authoritative JSON schema documents are in JSON format.
+The git repository also contains a YAML formatted document and a conversion
+utility, which simplify maintenance of the JSON document (no need to track
+brackets and commas), but these are not authoritative.
diff --git a/docs/content/introduction/perf_triggers_design.md b/docs/content/introduction/perf_triggers_design.md
new file mode 100644
index 0000000000..445846f4d9
--- /dev/null
+++ b/docs/content/introduction/perf_triggers_design.md
@@ -0,0 +1,44 @@
+---
+bookHidden: true
+title: "Performance Triggers Design"
+---
+
+# Performance Triggers Design
+
+*Syntax*
+ trigger_keyword [{tag1} {tag2}AND{tag3} !{tag4} !{tag5}]
+
+*Inputs*
+ - trigger_keyword for vpp-* jobs: 'perftest'
+ - trigger_keyword for csit-* jobs: 'csit-perftest'
+ - tags: existing CSIT tags, e.g. ip4base, ip6base, iacldst, memif
+
+Set of default tags appended to user input, under CSIT control
+ - always-on for vpp-csit-* jobs: 'mrr' 'nic_intel_x710-da2' '1t1c'
+ - if input with no tags, following set applied:
+ - 'mrrANDnic_intel-x710AND1t1cAND64bANDip4base'
+ - 'mrrANDnic_intel-x710AND1t1cAND78bANDip6base'
+ - 'mrrANDnic_intel-x710AND1t1cAND64bANDl2bdbase'
+
+Examples
+ input: 'perftest'
+ expanded: 'mrrANDnic_intel_x710-da2AND1t1cAND64bANDl2bdbase mrrANDnic_intel_x710-da2AND1t1cAND64bANDip4base mrrANDnic_intel_x710-da2AND1t1cAND78bANDip6base'
+ input: 'perftest l2bdbase l2xcbase'
+ expanded: 'mrrANDnic_intel_x710-da2ANDl2bdbase mrrANDnic_intel_x710-da2ANDl2xcbase'
+ input: 'perftest ip4base !feature'
+ expanded: 'mrrANDnic_intel_x710-da2ANDip4base' not 'feature'
+ input: 'perftest ip4base !feature !lbond_dpdk'
+ expanded: 'mrrANDnic_intel_x710-da2ANDip4base' not 'feature' not 'lbond_dpdk'
+ input: 'perftestxyx ip4base !feature !lbond_dpdk'
+ invalid: detected as error
+ input: 'perftestip4base !feature !lbond_dpdk'
+ invalid: detected as error
+ input: 'perftest ip4base!feature!lbond_dpdk'
+ invalid expansion: 'mrrANDnic_intel_x710-da2ANDip4base!feature!lbond_dpdk'
+ execution of Robot Framework will fail
+
+Constraints
+ Trigger keyword must be different for every job to avoid running multiple jobs
+ at once. Trigger keyword must not be a substring of the job name or of any
+ other message printed by JJB back to the Gerrit message, as that could lead
+ to recursive execution.
diff --git a/docs/content/introduction/test_code_guidelines.md b/docs/content/introduction/test_code_guidelines.md
new file mode 100644
index 0000000000..9707d63ea6
--- /dev/null
+++ b/docs/content/introduction/test_code_guidelines.md
@@ -0,0 +1,294 @@
+---
+bookHidden: true
+title: "CSIT Test Code Guidelines"
+---
+
+# CSIT Test Code Guidelines
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
+"MAY", and "OPTIONAL" in this document are to be interpreted as
+described in [BCP 14](https://tools.ietf.org/html/bcp14),
+[RFC2119](https://tools.ietf.org/html/rfc2119),
+[RFC8174](https://tools.ietf.org/html/rfc8174)
+when, and only when, they appear in all capitals, as shown here.
+
+This document SHALL describe guidelines for writing reliable, maintainable,
+reusable and readable code for CSIT.
+
+## RobotFramework test case files and resource files
+
++ General
+
+ + Contributors SHOULD look at requirements.txt in root CSIT directory
+ for the currently used Robot Framework version.
+ Contributors SHOULD read
+ [Robot Framework User Guide](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html)
+ for more details.
+
+  + RobotFramework test case files and resource files
+    SHALL use the special extension .robot.
+
+ + Pipe and space separated file format (without trailing pipe
+ and without pipe aligning) SHALL be used.
+ Tabs are invisible characters, which are error prone.
+ 4-spaces separation is prone to accidental double space
+ acting as a separator.
+
+ + Files SHALL be encoded in UTF-8 (the default Robot source file encoding).
+ Usage of non-ASCII characters SHOULD be avoided if possible.
+ It is RECOMMENDED to
+ [escape](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#escaping)
+ non-ASCII characters.
+
+ + Line length SHALL be limited to 80 characters.
+
+ + There SHALL be licence text present at the beginning of each file.
+
+  + Copy-pasting of code is a NOT RECOMMENDED practice; any code that could be
+    re-used SHOULD be put into a library (Robot resource, Python library, ...).
+
++ Test cases
+
+  + It is RECOMMENDED to use data-driven test case definitions
+    anytime a suite contains test cases similar in structure.
+    Typically, a suite SHOULD define a Template keyword, and test cases
+    SHOULD only specify tags and argument values:
+
+ *** Settings ***
+ | Test Template | Local Template
+ ...
+
+ *** Test Cases ***
+ | tc01-64B-1c-eth-l2patch-mrr
+ | | [Tags] | 64B | 1C
+ | | framesize=${64} | phy_cores=${1}
+
+  + Test case templates (or test cases) SHALL be written in Behavior-driven
+    style, i.e. in readable English, so that even non-technical project
+    stakeholders can understand it:
+
+ *** Keywords ***
+ | Local Template
+ | | [Documentation]
+ | | ... | [Cfg] DUT runs L2 patch config with ${phy_cores} phy core(s).
+ | | ... | [Ver] Measure NDR and PDR values using MLRsearch algorithm.\
+ | | ...
+ | | ... | *Arguments:*
+ | | ... | - frame_size - Framesize in Bytes in integer
+ | | ... | or string (IMIX_v4_1). Type: integer, string
+ | | ... | - phy_cores - Number of physical cores. Type: integer
+ | | ... | - rxq - Number of RX queues, default value: ${None}.
+ | | ... | Type: integer
+ | | ...
+ | | [Arguments] | ${frame_size} | ${phy_cores} | ${rxq}=${None}
+ | | ...
+ | | Set Test Variable | \${frame_size}
+ | | ...
+ | | Given Add worker threads and rxqueues to all DUTs
+ | | ... | ${phy_cores} | ${rxq}
+ | | And Add PCI devices to all DUTs
+ | | Set Max Rate And Jumbo And Handle Multi Seg
+ | | And Apply startup configuration on all VPP DUTs
+ | | When Initialize L2 patch
+ | | Then Find NDR and PDR intervals using optimized search
+
+  + Every suite and test case template (or test case)
+    SHALL contain short documentation.
+    The generated CSIT web pages display this documentation.
+
+ + You SHOULD NOT use hard-coded constants.
+ It is RECOMMENDED to use the variable table
+ (\*\*\*Variables\*\*\*) to define test case specific values.
+ You SHALL use the assignment sign = after the variable name
+ to make assigning variables slightly more explicit
+
+ *** Variables ***
+ | ${traffic_profile}= | trex-stl-2n-ethip4-ip4src254
+
+ + Common test case specific settings of the test environment SHALL be done
+ in Test Setup keyword defined in the Setting table.
+
+ + Run Keywords construction is RECOMMENDED if it is more readable
+ than a keyword.
+
+ + Separate keyword is RECOMMENDED if the construction is less readable.
+
+ + Post-test cleaning and processing actions SHALL be done in Test Teardown
+ part of the Setting table (e.g. download statistics from VPP nodes).
+ This part is executed even if the test case has failed. On the other hand
+ it is possible to disable the tear-down from command line, thus leaving
+ the system in “broken” state for investigation.
+
+  + Every test case SHALL be correctly tagged. The list of defined tags is in
+    csit/docs/introduction/test_tag_documentation.rst
+
+ + Whenever possible, common tags SHALL be set using Force Tags
+ in Settings table.
+
+  + User-defined high-level keywords specific to the particular test suite
+    SHOULD be implemented in the Keywords table of a suitable Robot resource
+    file to enable readability and code-reuse.
+
+ + Such keywords MAY be implemented in Keywords table of the suite instead,
+ if the contributor believes no other test will use such keywords.
+ But this is NOT RECOMMENDED in general, as keywords in Resources
+ are easier to maintain.
+
+  + All test case names (and suite names) SHALL conform
+    to the current naming convention:
+    https://wiki.fd.io/view/CSIT/csit-test-naming
+
+  + Frequently, different suites use the same test case layout.
+    It is RECOMMENDED to use the available autogeneration scripts,
+    possibly extending them if their current functionality is not sufficient.
+
++ Resource files
+
+ + SHALL be used to implement higher-level keywords that are used in test cases
+ or other higher-level (or medium-level) keywords.
+
+ + Every keyword SHALL contain Documentation where the purpose and arguments
+ of the keyword are described. Also document types, return values,
+ and any specific assumptions the particular keyword relies on.
+
+  + A keyword usage example SHALL be part of the Documentation.
+    The example SHALL use pipe and space separated format
+    (with escaped pipes) and with a trailing pipe.
+
+    + The reason is the possible usage of Robot's libdoc tool
+      to generate documentation for tests and resources. In that case,
+      the example keyword usage would be rendered as a table.
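+
+      A hedged sketch of such a usage example inside a keyword's
+      Documentation (the keyword shown is illustrative):
+
+        | | [Documentation] | Initialize L2 patch on all DUTs.
+        | | ... |
+        | | ... | *Example:*
+        | | ... |
+        | | ... | \| Initialize L2 patch \|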
+
+ + Keyword name SHALL describe what the keyword does,
+ specifically and in a reasonable length (“short sentence”).
+
+ + Keyword names SHALL be short enough for call sites
+ to fit within line length limit.
+
+  + If a keyword argument has a most commonly used value, it is RECOMMENDED
+    to set it as the default. This makes the keyword code longer,
+    but the suite code shorter, and readability (and maintainability)
+    of suites SHALL always be more important.
+
+ + If there is intermediate data (created by one keyword, to be used
+ by another keyword) of singleton semantics (it is clear that the test case
+ can have at most one instance of such data, even if the instance
+ is complex, for example ${nodes}), it is RECOMMENDED to store it
+ in test variables. You SHALL document test variables read or written
+ by a keyword. This makes the test template code less verbose.
+ As soon as the data instance is not unique, you SHALL pass it around
+ via arguments and return values explicitly (this makes lower level keywords
+ more reusable and less bug prone).
+
+ + It is RECOMMENDED to pass arguments explicitly via [Arguments] line.
+ Setting test variables takes more space and is less explicit.
+ Using arguments embedded in keyword name makes them less visible,
+ and it makes it harder for the line containing the resulting long name
+ to fit into the maximum character limit, so you SHOULD NOT use them.
+
+## Python library files
+
++ General
+
+ + SHALL be used to implement low-level keywords that are called from
+ resource files (of higher-level keywords) or from test cases.
+
+  + Higher-level keywords MAY be implemented in a Python library file too.
+    It is RECOMMENDED especially in the case that their implementation
+    in a resource file would be too difficult or impossible,
+    e.g. due to complex data structures or functional programming.
+
+  + Every keyword, Python module, class, method and enum SHALL contain
+    a docstring with a short description, the input parameters used,
+    and possible return value(s) or raised exceptions.
+
+ + The docstrings SHOULD conform to
+ [PEP 257](https://www.python.org/dev/peps/pep-0257/)
+ and other quality standards.
+
+ + CSIT contributions SHALL use a specific formatting for documenting
+ arguments, return values and similar.
+
+ + Keyword usage examples MAY be grouped and used
+ in the class/module documentation string, to provide better overview
+ of the usage and relationships between keywords.
+
+ + Keyword name SHALL describe what the keyword does,
+ specifically and in a reasonable length (“short sentence”).
+ See https://wiki.fd.io/view/CSIT/csit-test-naming
+
+  + Python implementation of a keyword is a function,
+    so its name in the Python library should be lowercase_with_underscores.
+    Robot call sites should use the name with words capitalized
+    and underscores replaced by spaces.
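+
+    For example (an illustrative pairing), a library function defined in
+    Python as install_vpp_on_all_duts would be called from Robot code
+    as Install Vpp On All Duts.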
+
++ Coding
+
+ + It is RECOMMENDED to use some standard development tool
+ (e.g. PyCharm Community Edition) and follow
+ [PEP-8](https://www.python.org/dev/peps/pep-0008/) recommendations.
+
+ + All python code (not only Robot libraries) SHALL adhere to PEP-8 standard.
+ This is reported by CSIT Jenkins verify job.
+
+  + Indentation: You SHALL NOT use tabs for indentation!
+    An indent is defined as four spaces.
+
+ + Line length: SHALL be limited to 80 characters.
+
+  + CSIT Python code assumes PYTHONPATH is set
+    to the root of the cloned CSIT git repository, creating a tree of
+    sub-packages. You SHALL use that tree for importing, for example
+
+ from resources.libraries.python.ssh import exec_cmd_no_error
+
+ + Imports SHALL be grouped in the following order:
+
+ 1. standard library imports,
+ 2. related third party imports,
+ 3. local application/library specific imports.
+
+ You SHALL put a blank line between each group of imports.
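+
+    A minimal sketch of the grouping (the specific modules are illustrative):
+
+      import os
+
+      from robot.api import logger
+
+      from resources.libraries.python.ssh import exec_cmd_no_error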
+
+ + You SHALL use two blank lines between top-level definitions,
+ one blank line between method definitions.
+
+ + You SHALL NOT execute any active code on library import.
+
+ + You SHALL NOT use global variables inside library files.
+
+ + You MAY define constants inside library files.
+
+ + It is NOT RECOMMENDED to use hard-coded constants (e.g. numbers,
+ paths without any description). It is RECOMMENDED to use
+ configuration file(s), like /csit/resources/libraries/python/Constants.py,
+ with appropriate comments.
+
+  + The code SHALL log at the lowest possible level of implementation,
+    for debugging purposes. You SHALL use the same style for similar events.
+    You SHALL keep logging as verbose as necessary.
+
+  + You SHALL use the most appropriate exception, not the general one
+    (Exception), if possible. You SHOULD create your own exception
+    if necessary, and implement logging there at debug level.
+
+  + You MAY use RuntimeError for generally unexpected failures.
+
+ + It is RECOMMENDED to use RuntimeError also for
+ infrastructure failures, e.g. losing SSH connection to SUT.
+
+    + You MAY use EnvironmentError and its subclasses instead,
+      if the distinction is informative for callers.
+
+  + It is RECOMMENDED to use AssertionError when the SUT is at fault.
+
+ + For each class (e.g. exception) it is RECOMMENDED to implement __repr__()
+ which SHALL return a string usable as a constructor call
+ (including repr()ed arguments).
+ When logging, you SHOULD log the repr form, unless the internal structure
+ of the object in question would likely result in too long output.
+ This is helpful for debugging.
+
+ + For composing and formatting strings, you SHOULD use .format()
+ with named arguments.
+ Example: "repr() of name: {name!r}".format(name=name)
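+
+    A minimal sketch combining the exception and __repr__() recommendations
+    (the class and field names are illustrative, not an existing CSIT API):
+
+      class ProcessFailed(RuntimeError):
+          """Exception for a failed external command."""
+
+          def __init__(self, command, exit_code):
+              super().__init__(
+                  "Command {cmd!r} failed with code {code}.".format(
+                      cmd=command, code=exit_code))
+              self.command = command
+              self.exit_code = exit_code
+
+          def __repr__(self):
+              # A string usable as a constructor call, with repr()ed arguments.
+              return "ProcessFailed({cmd!r}, {code!r})".format(
+                  cmd=self.command, code=self.exit_code)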
diff --git a/docs/content/introduction/test_naming.md b/docs/content/introduction/test_naming.md
new file mode 100644
index 0000000000..22e2c0bf8a
--- /dev/null
+++ b/docs/content/introduction/test_naming.md
@@ -0,0 +1,107 @@
+---
+title: "Test Naming"
+weight: 4
+---
+
+# Test Naming
+
+## Background
+
+{{< release_csit >}} follows a common structured naming convention for all
+performance and system functional tests, introduced in CSIT 17.01.
+
+The naming should be intuitive for the majority of the tests. A complete
+description of the CSIT test naming convention is provided on the
+[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming).
+Below are a few illustrative examples of the naming usage for test suites
+across CSIT performance, functional and Honeycomb management test areas.
+
+## Naming Convention
+
+The CSIT approach is to use a tree naming convention and to encode the
+following testing information into test suite and test case names:
+
+1. packet network port configuration
+ * port type, physical or virtual;
+ * number of ports;
+ * NIC model, if applicable;
+ * port-NIC locality, if applicable;
+2. packet encapsulations;
+3. VPP packet processing
+ * packet forwarding mode;
+ * packet processing function(s);
+4. packet forwarding path
+ * if present, network functions (processes, containers, VMs) and their
+ topology within the computer;
+5. main measured variable, type of test.
+
+The proposed convention is to encode ports and NICs on the left (underlay),
+followed by the outer-most frame header, then other stacked headers up to the
+header processed by vSwitch-VPP, then the VPP forwarding function, then encap
+on the vhost interface, the number of vhost interfaces, and the number of VMs.
+If chained VMs are present, they get added on the right. The test topology is
+expected to be symmetric; in other words, packets enter and leave the SUT
+through ports specified on the left of the test name. Here are some examples
+to illustrate the convention, followed by the complete legend, and tables
+mapping the new test filenames to old ones.
+
+## Naming Examples
+
+CSIT test suite naming examples (filename.robot) for common tested VPP
+topologies:
+
+1. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**
+ * *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
+ * *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on Intel
+ x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching with
+ MAC learning, NDR throughput discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE on
+ Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline switching
+ with MAC learning, NDR throughput discovery.
+ * *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel x520
+ NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
+ * *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on Intel
+ x520 NIC, IPv6 scaled up routed forwarding, NDR throughput discovery.
+ * *10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot* => 2 ports of 10GE on
+ Intel x520 NIC, IPv4 baseline routed forwarding, ingress Access Control
+ Lists baseline matching on destination, NDR throughput discovery.
+ * *40ge2p1vic1385-ethip4-ip4base-ndrdisc.robot* => 2 ports of 40GE on Cisco
+ vic1385 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
+ * *eth2p-ethip4-ip4base-func.robot* => 2 ports of Ethernet, IPv4 baseline
+ routed forwarding, functional tests.
+2. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
+ P2V2P, NIC2VMchain2NIC, P2V2V2P**
+ * *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
+ VirtPortConfig-VMconfig-TestType*
+ * *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
+ of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain switching
+ to/from two vhost interfaces and one VM, NDR throughput discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from two vhost interfaces and one VM, NDR throughput discovery.
+ * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from four vhost interfaces and two VMs, NDR throughput
+ discovery.
+ * *eth2p-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-func.robot* => 2 ports of
+ Ethernet, IPv4 VXLAN Ethernet, L2 bridge-domain switching to/from two vhost
+ interfaces and one VM, functional tests.
+3. **API CRUD tests - Create (Write), Read (Retrieve), Update (Modify), Delete
+ (Destroy) operations for configuration and operational data**
+ * *ManagementTestKeyword-ManagementOperation-ManagedFunction1-...-
+ ManagedFunctionN-ManagementAPI1-ManagementAPIN-TestType*
+ * *mgmt-cfg-lisp-apivat-func* => configuration of LISP with VAT API calls,
+ functional tests.
+ * *mgmt-cfg-l2bd-apihc-apivat-func* => configuration of L2 Bridge-Domain with
+ Honeycomb API and VAT API calls, functional tests.
+ * *mgmt-oper-int-apihcnc-func* => reading status and operational data of
+ interface with Honeycomb NetConf API calls, functional tests.
+ * *mgmt-cfg-int-tap-apihcnc-func* => configuration of tap interfaces with
+ Honeycomb NetConf API calls, functional tests.
+ * *mgmt-notif-int-subint-apihcnc-func* => notifications of interface and
+ sub-interface events with Honeycomb NetConf Notifications, functional tests.
+
+For a complete description of the CSIT test naming convention please refer to
+the [CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming).
diff --git a/docs/content/introduction/test_scenarios_overview.md b/docs/content/introduction/test_scenarios_overview.md
new file mode 100644
index 0000000000..415ee3403f
--- /dev/null
+++ b/docs/content/introduction/test_scenarios_overview.md
@@ -0,0 +1,61 @@
+---
+title: "Test Scenarios Overview"
+weight: 2
+---
+
+# Test Scenarios Overview
+
+FD.io CSIT Dashboard includes multiple test scenarios of VPP
+centric applications, topologies and use cases. In addition it also
+covers baseline tests of DPDK sample applications. Tests are executed in
+physical (performance tests) and virtual environments (functional
+tests).
+
+Brief overview of test scenarios covered in this documentation:
+
+1. **VPP Performance**: VPP performance tests are executed in physical
+ FD.io testbeds, focusing on VPP network data plane performance in
+ NIC-to-NIC switching topologies. VPP application runs in
+ bare-metal host user-mode handling NICs. TRex is used as a traffic generator.
+2. **VPP Vhostuser Performance with KVM VMs**: VPP VM service switching
+ performance tests using vhostuser virtual interface for
+ interconnecting multiple NF-in-VM instances. VPP vswitch
+ instance runs in bare-metal user-mode handling NICs and connecting
+ over vhost-user interfaces to VM instances each running VPP with virtio
+ virtual interfaces. Similarly to VPP Performance, tests are run across a
+ range of configurations. TRex is used as a traffic generator.
+3. **VPP Memif Performance with LXC and Docker Containers**: VPP
+ Container service switching performance tests using memif virtual
+ interface for interconnecting multiple VPP-in-container instances.
+ VPP vswitch instance runs in bare-metal user-mode handling NICs and
+ connecting over memif (Slave side) interfaces to more instances of
+ VPP running in LXC or in Docker Containers, both with memif
+ interfaces (Master side). Similarly to VPP Performance, tests are
+ run across a range of configurations. TRex is used as a traffic
+ generator.
+4. **DPDK Performance**: VPP uses DPDK to drive the NICs and physical
+ interfaces. DPDK performance tests are used as a baseline to
+ profile performance of the DPDK sub-system. Two DPDK applications
+ are tested: Testpmd and L3fwd. DPDK tests are executed in the same
+ testing environment as VPP tests. DPDK Testpmd and L3fwd
+ applications run in host user-mode. TRex is used as a traffic
+ generator.
+5. **T-Rex Performance**: T-Rex performance tests are executed in physical
+ FD.io testbeds, focusing on T-Rex data plane performance in NIC-to-NIC
+ loopback topologies.
+6. **VPP Functional**: VPP functional tests are executed in virtual
+ FD.io testbeds, focusing on VPP packet processing functionality,
+   including both network data plane and in-line control plane. Tests
+   cover vNIC-to-vNIC and vNIC-to-nestedVM-to-vNIC forwarding topologies.
+ Scapy is used as a traffic generator.
+
+All CSIT test data included in this report is auto-generated from Robot
+Framework json output files produced by Linux Foundation FD.io Jenkins jobs
+executed against {{< release_vpp >}} artifacts.
+
+FD.io CSIT system is developed using two main coding platforms: Robot
+Framework and Python. {{< release_csit >}} source code for the executed test
+suites is available in corresponding CSIT branch in the directory
+`./tests/<name_of_the_test_suite>`. A local copy of CSIT source code
+can be obtained by cloning CSIT git repository - `git clone
+https://gerrit.fd.io/r/csit`.
diff --git a/docs/content/introduction/test_tag_description.md b/docs/content/introduction/test_tag_description.md
new file mode 100644
index 0000000000..630afa864e
--- /dev/null
+++ b/docs/content/introduction/test_tag_description.md
@@ -0,0 +1,863 @@
+---
+title: "Test Tags Descriptions"
+weight: 5
+---
+
+# Test Tags Descriptions
+
+All CSIT test cases are labelled with Robot Framework tags used to allow for
+easy test case type identification, test case grouping and selection for
+execution. The following sections list the currently used CSIT tags and their
+descriptions.
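+
+For example, tags can be combined into Robot Framework tag expressions when
+selecting tests for execution (an illustrative invocation, not an actual CSIT
+job definition):
+
+    robot --include mrrANDnic_intel-x710AND1t1cAND64bANDip4base tests/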
+
+## Testbed Topology Tags
+
+**2_NODE_DOUBLE_LINK_TOPO**
+
+ 2 nodes connected in a circular topology with two links interconnecting
+ the devices.
+
+**2_NODE_SINGLE_LINK_TOPO**
+
+ 2 nodes connected in a circular topology with at least one link
+ interconnecting devices.
+
+**3_NODE_DOUBLE_LINK_TOPO**
+
+ 3 nodes connected in a circular topology with two links interconnecting
+ the devices.
+
+**3_NODE_SINGLE_LINK_TOPO**
+
+ 3 nodes connected in a circular topology with at least one link
+ interconnecting devices.
+
+## Objective Tags
+
+**SKIP_PATCH**
+
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch)
+ and csit-vpp-verify jobs (i.e. CSIT patch).
+
+**SKIP_VPP_PATCH**
+
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch).
+
+## Environment Tags
+
+**HW_ENV**
+
+ DUTs and TGs are running on bare metal.
+
+**VM_ENV**
+
+ DUTs and TGs are running in virtual environment.
+
+**VPP_VM_ENV**
+
+  DUTs with VPP, capable of running a Virtual Machine.
+
+## NIC Model Tags
+
+**NIC_Intel-X520-DA2**
+
+ Intel X520-DA2 NIC.
+
+**NIC_Intel-XL710**
+
+ Intel XL710 NIC.
+
+**NIC_Intel-X710**
+
+ Intel X710 NIC.
+
+**NIC_Intel-XXV710**
+
+ Intel XXV710 NIC.
+
+**NIC_Cisco-VIC-1227**
+
+ VIC-1227 by Cisco.
+
+**NIC_Cisco-VIC-1385**
+
+ VIC-1385 by Cisco.
+
+**NIC_Amazon-Nitro-50G**
+
+ Amazon EC2 ENA NIC.
+
+## Scaling Tags
+
+**FIB_20K**
+
+  2x10,000 entries in a single FIB table.
+
+**FIB_200K**
+
+  2x100,000 entries in a single FIB table.
+
+**FIB_1M**
+
+  2x500,000 entries in a single FIB table.
+
+**FIB_2M**
+
+  2x1,000,000 entries in a single FIB table.
+
+**L2BD_1**
+
+ Test with 1 L2 bridge domain.
+
+**L2BD_10**
+
+ Test with 10 L2 bridge domains.
+
+**L2BD_100**
+
+ Test with 100 L2 bridge domains.
+
+**L2BD_1K**
+
+ Test with 1000 L2 bridge domains.
+
+**VLAN_1**
+
+ Test with 1 VLAN sub-interface.
+
+**VLAN_10**
+
+ Test with 10 VLAN sub-interfaces.
+
+**VLAN_100**
+
+ Test with 100 VLAN sub-interfaces.
+
+**VLAN_1K**
+
+ Test with 1000 VLAN sub-interfaces.
+
+**VXLAN_1**
+
+ Test with 1 VXLAN tunnel.
+
+**VXLAN_10**
+
+ Test with 10 VXLAN tunnels.
+
+**VXLAN_100**
+
+ Test with 100 VXLAN tunnels.
+
+**VXLAN_1K**
+
+ Test with 1000 VXLAN tunnels.
+
+**TNL_{t}**
+
+ IPSec in tunnel mode - {t} tunnels.
+
+**SRC_USER_{u}**
+
+ Traffic flow with {u} unique IPs (users) in one direction.
+ {u}=(1,10,100,1000,2000,4000).
+
+**100_FLOWS**
+
+ Traffic stream with 100 unique flows (10 IPs/users x 10 UDP ports) in one
+ direction.
+
+**10k_FLOWS**
+
+ Traffic stream with 10 000 unique flows (10 IPs/users x 1000 UDP ports) in
+ one direction.
+
+**100k_FLOWS**
+
+ Traffic stream with 100 000 unique flows (100 IPs/users x 1000 UDP ports) in
+ one direction.
+
+**HOSTS_{h}**
+
+  Stateless or stateful traffic stream with {h} client source IP4 addresses,
+  usually with 63 flows differing in source port number. Could be UDP or TCP.
+ If NAT is used, the clients are inside. Outside IP range can differ.
+ {h}=(1024,4096,16384,65536,262144).
+
+**GENEVE4_{t}TUN**
+
+  Test with {t} GENEVE IPv4 tunnel(s). {t}=(1,4,16,64,256,1024).
+
+## Test Category Tags
+
+**DEVICETEST**
+
+ All vpp_device functional test cases.
+
+**PERFTEST**
+
+ All performance test cases.
+
+## VPP Device Type Tags
+
+**SCAPY**
+
+  All test cases that use Scapy for packet generation and validation.
+
+## Performance Type Tags
+
+**NDRPDR**
+
+  Single test finding both No Drop Rate and Partial Drop Rate simultaneously.
+  The search is done by an optimized algorithm which performs
+  multiple trial runs at different durations and transmit rates.
+  The results come from the final trials, which have a duration of 30 seconds.
+
+**MRR**
+
+ Performance tests where TG sends the traffic at maximum rate (line rate)
+ and reports total sent/received packets over trial duration.
+ The result is an average of 10 trials of 1 second duration.
+
+**SOAK**
+
+ Performance tests using PLRsearch to find the critical load.
+
+**RECONF**
+
+ Performance tests aimed to measure lost packets (time) when performing
+ reconfiguration while full throughput offered load is applied.
+
+## Ethernet Frame Size Tags
+
+These describe the traffic offered by the Traffic Generator,
+the "primary" traffic in case of asymmetric load.
+For traffic between DUTs, or for "secondary" traffic, see the ${overhead} value.
+
+**{b}B**
+
+ {b} Bytes frames used for test.
+
+**IMIX**
+
+ IMIX frame sequence (28x 64B, 16x 570B, 4x 1518B) used for test.
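+
+  For reference, the weighted average frame size of this sequence is
+  (28*64 + 16*570 + 4*1518) / 48 = 353.83 Bytes.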
+
+## Test Type Tags
+
+**BASE**
+
+ Baseline test cases, no encapsulation, no feature(s) configured in tests.
+ No scaling whatsoever, beyond minimum needed for RSS.
+
+**IP4BASE**
+
+ IPv4 baseline test cases, no encapsulation, no feature(s) configured in
+ tests. Minimal number of routes. Other quantities may be scaled.
+
+**IP6BASE**
+
+ IPv6 baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2XCBASE**
+
+ L2XC baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2BDBASE**
+
+ L2BD baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**L2PATCH**
+
+ L2PATCH baseline test cases, no encapsulation, no feature(s) configured in
+ tests.
+
+**SCALE**
+
+ Scale test cases. Other tags specify which quantities are scaled.
+ Also applies if scaling is set on TG only (e.g. DUT works as IP4BASE).
+
+**ENCAP**
+
+ Test cases where encapsulation is used. Use also encapsulation tag(s).
+
+**FEATURE**
+
+ At least one feature is configured in test cases. Use also feature tag(s).
+
+**UDP**
+
+ Tests which use any kind of UDP traffic (STL or ASTF profile).
+
+**TCP**
+
+ Tests which use any kind of TCP traffic (STL or ASTF profile).
+
+**TREX**
+
+  Tests which run TRex traffic without any software DUTs in the traffic path.
+
+**UDP_UDIR**
+
+ Tests which use unidirectional UDP traffic (STL profile only).
+
+**UDP_BIDIR**
+
+ Tests which use bidirectional UDP traffic (STL profile only).
+
+**UDP_CPS**
+
+ Tests which measure connections per second on minimal UDP pseudoconnections.
+ This implies ASTF traffic profile is used.
+ This tag selects specific output processing in PAL.
+
+**TCP_CPS**
+
+ Tests which measure connections per second on empty TCP connections.
+ This implies ASTF traffic profile is used.
+ This tag selects specific output processing in PAL.
+
+**TCP_RPS**
+
+ Tests which measure requests per second on empty TCP connections.
+ This implies ASTF traffic profile is used.
+ This tag selects specific output processing in PAL.
+
+**UDP_PPS**
+
+ Tests which measure packets per second on lightweight UDP transactions.
+ This implies ASTF traffic profile is used.
+ This tag selects specific output processing in PAL.
+
+**TCP_PPS**
+
+ Tests which measure packets per second on lightweight TCP transactions.
+ This implies ASTF traffic profile is used.
+ This tag selects specific output processing in PAL.
+
+**HTTP**
+
+ Tests which use traffic formed of valid HTTP requests (and responses).
+
+**LDP_NGINX**
+
+  LDP NGINX is unmodified NGINX running with VPP via LD_PRELOAD.
+
+**NF_DENSITY**
+
+ Performance tests that measure throughput of multiple VNF and CNF
+ service topologies at different service densities.
+
+## NF Service Density Tags
+
+**CHAIN**
+
+ NF service density tests with VNF or CNF service chain topology(ies).
+
+**PIPE**
+
+ NF service density tests with CNF service pipeline topology(ies).
+
+**NF_L3FWDIP4**
+
+ NF service density tests with DPDK l3fwd IPv4 routing as NF workload.
+
+**NF_VPPIP4**
+
+ NF service density tests with VPP IPv4 routing as NF workload.
+
+**{r}R{c}C**
+
+ Service density matrix locator {r}R{c}C, {r}Row denoting number of
+ service instances, {c}Column denoting number of NFs per service
+ instance. {r}=(1,2,4,6,8,10), {c}=(1,2,4,6,8,10).
+
+**{n}VM{t}T**
+
+ Service density {n}VM{t}T, {n}Number of NF Qemu VMs, {t}Number of threads
+ per NF.
+
+**{n}DCR{t}T**
+
+ Service density {n}DCR{t}T, {n}Number of NF Docker containers, {t}Number of
+ threads per NF.
+
+**{n}_ADDED_CHAINS**
+
+ {n}Number of chains (or pipelines) added (and/or removed)
+ during RECONF test.
+
+## Forwarding Mode Tags
+
+**L2BDMACSTAT**
+
+ VPP L2 bridge-domain, L2 MAC static.
+
+**L2BDMACLRN**
+
+ VPP L2 bridge-domain, L2 MAC learning.
+
+**L2XCFWD**
+
+ VPP L2 point-to-point cross-connect.
+
+**IP4FWD**
+
+ VPP IPv4 routed forwarding.
+
+**IP6FWD**
+
+ VPP IPv6 routed forwarding.
+
+**LOADBALANCER_MAGLEV**
+
+ VPP Load balancer maglev mode.
+
+**LOADBALANCER_L3DSR**
+
+ VPP Load balancer l3dsr mode.
+
+**LOADBALANCER_NAT4**
+
+ VPP Load balancer nat4 mode.
+
+**N2N**
+
+  Mode where NICs from the same physical server are directly
+  connected with a cable.
+
+## Underlay Tags
+
+**IP4UNRLAY**
+
+ IPv4 underlay.
+
+**IP6UNRLAY**
+
+ IPv6 underlay.
+
+**MPLSUNRLAY**
+
+ MPLS underlay.
+
+## Overlay Tags
+
+**L2OVRLAY**
+
+ L2 overlay.
+
+**IP4OVRLAY**
+
+ IPv4 overlay (IPv4 payload).
+
+**IP6OVRLAY**
+
+ IPv6 overlay (IPv6 payload).
+
+## Tagging Tags
+
+**DOT1Q**
+
+ All test cases with dot1q.
+
+**DOT1AD**
+
+ All test cases with dot1ad.
+
+## Encapsulation Tags
+
+**ETH**
+
+ All test cases with base Ethernet (no encapsulation).
+
+**LISP**
+
+ All test cases with LISP.
+
+**LISPGPE**
+
+ All test cases with LISP-GPE.
+
+**LISP_IP4o4**
+
+ All test cases with LISP_IP4o4.
+
+**LISPGPE_IP4o4**
+
+ All test cases with LISPGPE_IP4o4.
+
+**LISPGPE_IP6o4**
+
+ All test cases with LISPGPE_IP6o4.
+
+**LISPGPE_IP4o6**
+
+ All test cases with LISPGPE_IP4o6.
+
+**LISPGPE_IP6o6**
+
+ All test cases with LISPGPE_IP6o6.
+
+**VXLAN**
+
+  All test cases with VXLAN.
+
+**VXLANGPE**
+
+ All test cases with VXLAN-GPE.
+
+**GRE**
+
+ All test cases with GRE.
+
+**GTPU**
+
+ All test cases with GTPU.
+
+**GTPU_HWACCEL**
+
+ All test cases with GTPU_HWACCEL.
+
+**IPSEC**
+
+ All test cases with IPSEC.
+
+**WIREGUARD**
+
+ All test cases with WIREGUARD.
+
+**SRv6**
+
+ All test cases with Segment routing over IPv6 dataplane.
+
+**SRv6_1SID**
+
+ All SRv6 test cases with single SID.
+
+**SRv6_2SID_DECAP**
+
+ All SRv6 test cases with two SIDs and with decapsulation.
+
+**SRv6_2SID_NODECAP**
+
+ All SRv6 test cases with two SIDs and without decapsulation.
+
+**GENEVE**
+
+ All test cases with GENEVE.
+
+**GENEVE_L3MODE**
+
+ All test cases with GENEVE tunnel in L3 mode.
+
+**FLOW**
+
+ All test cases with FLOW.
+
+**FLOW_DIR**
+
+ All test cases with FLOW_DIR.
+
+**FLOW_RSS**
+
+ All test cases with FLOW_RSS.
+
+**NTUPLE**
+
+ All test cases with NTUPLE.
+
+**L2TPV3**
+
+ All test cases with L2TPV3.
+
+## Interface Tags
+
+**PHY**
+
+ All test cases which use physical interface(s).
+
+**GSO**
+
+  All test cases which use Generic Segmentation Offload.
+
+**VHOST**
+
+  All test cases which use VHOST.
+
+**VHOST_1024**
+
+  All test cases which use the VHOST DPDK driver with qemu queue size set
+  to 1024.
+
+**VIRTIO**
+
+  All test cases which use the VIRTIO native VPP driver.
+
+**VIRTIO_1024**
+
+  All test cases which use the VIRTIO native VPP driver with qemu queue size
+  set to 1024.
+
+**CFS_OPT**
+
+  All test cases which use a VM with optimised scheduler policy.
+
+**TUNTAP**
+
+  All test cases which use TUN and TAP.
+
+**AFPKT**
+
+  All test cases which use AFPKT.
+
+**NETMAP**
+
+  All test cases which use Netmap.
+
+**MEMIF**
+
+  All test cases which use Memif.
+
+**SINGLE_MEMIF**
+
+  All test cases which use only a single Memif connection per DUT. One DUT
+  instance runs in a container with one physical interface exposed to the
+  container.
+
+**LBOND**
+
+  All test cases which use link bonding (BondEthernet interface).
+
+**LBOND_DPDK**
+
+  All test cases which use DPDK link bonding.
+
+**LBOND_VPP**
+
+  All test cases which use VPP link bonding.
+
+**LBOND_MODE_XOR**
+
+  All test cases which use link bonding with mode XOR.
+
+**LBOND_MODE_LACP**
+
+  All test cases which use link bonding with mode LACP.
+
+**LBOND_LB_L34**
+
+  All test cases which use link bonding with load-balance mode l34.
+
+**LBOND_{n}L**
+
+ All test cases which use {n} link(s) for link bonding.
+
+**DRV_{d}**
+
+  All test cases in which the NIC driver for the DUT is set to {d}.
+  Default is VFIO_PCI. {d}=(AVF, RDMA_CORE, VFIO_PCI, AF_XDP).
+
+**TG_DRV_{d}**
+
+  All test cases in which the NIC driver for the TG is set to {d}.
+  Default is IGB_UIO. {d}=(RDMA_CORE, IGB_UIO).
+
+**RXQ_SIZE_{n}**
+
+  All test cases in which the RXQ size (number of RX descriptors) is set
+  to {n}. Default is 0, which means the VPP (API) default.
+
+**TXQ_SIZE_{n}**
+
+  All test cases in which the TXQ size (number of TX descriptors) is set
+  to {n}. Default is 0, which means the VPP (API) default.
+
+## Feature Tags
+
+**IACLDST**
+
+ iACL destination.
+
+**ADLALWLIST**
+
+ ADL allowlist.
+
+**NAT44**
+
+ NAT44 configured and tested.
+
+**NAT64**
+
+  NAT64 configured and tested.
+
+**ACL**
+
+ ACL plugin configured and tested.
+
+**IACL**
+
+ ACL plugin configured and tested on input path.
+
+**OACL**
+
+ ACL plugin configured and tested on output path.
+
+**ACL_STATELESS**
+
+ ACL plugin configured and tested in stateless mode (permit action).
+
+**ACL_STATEFUL**
+
+ ACL plugin configured and tested in stateful mode (permit+reflect action).
+
+**ACL1**
+
+ ACL plugin configured and tested with 1 not-hitting ACE.
+
+**ACL10**
+
+ ACL plugin configured and tested with 10 not-hitting ACEs.
+
+**ACL50**
+
+ ACL plugin configured and tested with 50 not-hitting ACEs.
+
+**SRv6_PROXY**
+
+ SRv6 endpoint to SR-unaware appliance via proxy.
+
+**SRv6_PROXY_STAT**
+
+ SRv6 endpoint to SR-unaware appliance via static proxy.
+
+**SRv6_PROXY_DYN**
+
+ SRv6 endpoint to SR-unaware appliance via dynamic proxy.
+
+**SRv6_PROXY_MASQ**
+
+ SRv6 endpoint to SR-unaware appliance via masquerading proxy.
+
+## Encryption Tags
+
+**IPSECSW**
+
+ Crypto in software.
+
+**IPSECHW**
+
+ Crypto in hardware.
+
+**IPSECTRAN**
+
+ IPSec in transport mode.
+
+**IPSECTUN**
+
+ IPSec in tunnel mode.
+
+**IPSECINT**
+
+ IPSec in interface mode.
+
+**AES**
+
+ IPSec using AES algorithms.
+
+**AES_128_CBC**
+
+ IPSec using AES 128 CBC algorithms.
+
+**AES_128_GCM**
+
+ IPSec using AES 128 GCM algorithms.
+
+**AES_256_GCM**
+
+ IPSec using AES 256 GCM algorithms.
+
+**HMAC**
+
+ IPSec using HMAC integrity algorithms.
+
+**HMAC_SHA_256**
+
+ IPSec using HMAC SHA 256 integrity algorithms.
+
+**HMAC_SHA_512**
+
+ IPSec using HMAC SHA 512 integrity algorithms.
+
+**SCHEDULER**
+
+ IPSec using crypto sw scheduler engine.
+
+**FASTPATH**
+
+  IPSec policy mode with SPD fast path enabled.
+
+## Client-Workload Tags
+
+**VM**
+
+ All test cases which use at least one virtual machine.
+
+**LXC**
+
+ All test cases which use Linux container and LXC utils.
+
+**DRC**
+
+ All test cases which use at least one Docker container.
+
+**DOCKER**
+
+ All test cases which use Docker as container manager.
+
+**APP**
+
+ All test cases with specific APP use.
+
+## Container Orchestration Tags
+
+**{n}VSWITCH**
+
+ {n} VPP running in {n} Docker container(s) acting as a VSWITCH.
+ {n}=(1).
+
+**{n}VNF**
+
+ {n} VPP running in {n} Docker container(s) acting as a VNF work load.
+ {n}=(1).
+
+## Multi-Threading Tags
+
+**STHREAD**
+
+ *Dynamic tag*.
+  All test cases using a single poll mode thread.
+
+**MTHREAD**
+
+ *Dynamic tag*.
+  All test cases using more than one poll mode driver thread.
+
+**{n}NUMA**
+
+ All test cases with packet processing on {n} socket(s). {n}=(1,2).
+
+**{c}C**
+
+  {c} worker thread(s) pinned to {c} dedicated physical core(s); or if
+  HyperThreading is enabled, {c}*2 worker threads each pinned to a separate
+  logical core within 1 dedicated physical core. Main thread pinned to core 1.
+  {c}=(1,2,4).
+
+**{t}T{c}C**
+
+ *Dynamic tag*.
+  {t} worker threads pinned to {c} dedicated physical cores. Main thread
+  pinned to core 1. By default, CSIT configures the same number of receive
+  queues per interface as worker threads. {t}=(1,2,4,8), {c}=(1,2,4).
diff --git a/docs/content/introduction/testing_in_vagrant.md b/docs/content/introduction/testing_in_vagrant.md
new file mode 100644
index 0000000000..ef1e231393
--- /dev/null
+++ b/docs/content/introduction/testing_in_vagrant.md
@@ -0,0 +1,85 @@
+---
+bookHidden: true
+title: "Running CSIT locally in Vagrant"
+---
+
+# Running CSIT locally in Vagrant
+
+## Install prerequisites
+
+Run all commands from the command line.
+
+1. Download and install VirtualBox from the
+ [official page](https://www.virtualbox.org/wiki/Downloads).
+ To verify the installation, run VBoxManage
+
+   - on Windows
+
+ "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" --version
+
+   - on *nix
+
+ VBoxManage --version
+ Tested version: 6.1.16r140961
+
+2. Download and install the latest Vagrant
+ [from official page](https://www.vagrantup.com/downloads.html).
+   To verify the installation, run
+
+ vagrant -v
+ Tested version: Vagrant 2.2.15
+
+3. Install vagrant plugins:
+
+ vagrant plugin install vagrant-vbguest
+ vagrant plugin install vagrant-cachier
+
+   If you are behind a proxy, install the proxyconf plugin and update the
+   proxy settings in the Vagrantfile:
+
+ vagrant plugin install vagrant-proxyconf
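+
+   A hedged sketch of the corresponding Vagrantfile stanza (the proxy URLs
+   are placeholders; vagrant-proxyconf reads the config.proxy.* settings):
+
+      Vagrant.configure("2") do |config|
+        if Vagrant.has_plugin?("vagrant-proxyconf")
+          config.proxy.http     = "http://proxy.example.com:8080"
+          config.proxy.https    = "http://proxy.example.com:8080"
+          config.proxy.no_proxy = "localhost,127.0.0.1"
+        end
+      end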
+
+## Set up and run Vagrant virtualbox
+
+Before running the following commands, change the working directory to the
+Vagrant specific directory (from within the root CSIT directory)
+
+ cd csit.infra.vagrant
+
+This allows Vagrant to automatically find the Vagrantfile and the
+corresponding Vagrant environment.
+
+Start the provisioning
+
+ vagrant up --provider virtualbox
+
+Your new VPP Device virtualbox machine will be created and configured.
+The master branch of the CSIT project will be cloned inside the virtual
+machine into the /home/vagrant/csit folder.
+
+Once the process is finished, you can log in to the box using
+
+ vagrant ssh
+
+In case you need to completely rebuild the box and start from scratch,
+run these commands
+
+ vagrant destroy -f
+ vagrant up --provider virtualbox
+
+## Run tests
+
+From within the box, run the tests using
+
+ cd /home/vagrant/csit/resources/libraries/bash/entry
+ ./bootstrap_vpp_device.sh csit-vpp-device-master-ubuntu2004-1n-vbox
+
+To run only selected tests based on TAGS, export environment variables before
+running the test suite
+
+ export GERRIT_EVENT_TYPE="comment-added"
+ export GERRIT_EVENT_COMMENT_TEXT="devicetest memif"
+
+ # now it will run tests, selected based on tags
+ ./bootstrap_vpp_device.sh csit-vpp-device-master-ubuntu2004-1n-vbox
+
+