Diffstat (limited to 'docs/content')
79 files changed, 700 insertions, 2169 deletions
diff --git a/docs/content/_index.md b/docs/content/_index.md index 15cd1ec3f1..eda7ecf8f9 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -3,39 +3,27 @@ title: "FD.io CSIT" type: "docs" --- -# Report Structure +# Documentation Structure -FD.io CSIT Dashboard Documentation contains system performance and functional -testing data. - -Documentation is structured as follows: - -1. INTRODUCTION: General introduction to CSIT Performance Dashboard. - - **Dashboard History**: Version changes. - - **Test Scenarios Overview**: A brief overview of test scenarios - covered in this report. - - **Design**: Framework modular design hierarchy. - - **Test Naming**: Test naming convention. - - **Test Tags Descriptions**: Robot Framework Tags used for test suite and - test case grouping and selection. -2. METHODOLOGY: - - **Overview**: Tested logical topologies, test coverage and naming - specifics. -3. RELEASE NOTES: Performance tests executed in physical FD.io - testbeds. - - **VPP Performance**: Changes, added tests, environment or methodology - changes, known issues. - - **DPDK Performance**: Changes, added tests, environment or methodology - changes, known issues. - - **TRex Performance**: Changes, added tests, environment or methodology - changes, known issues. - - **VPP Device**: Changes, added tests, environment or methodology - changes, known issues. -4. INFRASTRUCTURE: +1. OVERVIEW: General introduction to CSIT Performance Dashboard and CSIT itself. + - **C-Dash** + - **CSIT** +2. METHODOLOGY + - **Overview** + - **Measurement** + - **Test** + - **Trending** + - **Per-patch Testing** +3. RELEASE NOTES: Performance tests executed in physical FD.io testbeds. + - **CSIT rls2306** + - **Previous** +4. INFRASTRUCTURE - **FD.io DC Vexxhost Inventory**: Physical testbeds location. - - **FD.io CSIT Testbed Specifications**: Specification of the physical + - **FD.io DC Testbed Specifications**: Specification of the physical testbed infrastructure. - - **FD.io CSIT Testbed Configuration**: Configuration of the physical + - **FD.io DC Testbed Configuration**: Configuration of the physical testbed infrastructure. - **FD.io CSIT Testbed Versioning**: CSIT testbed versioning. - **FD.io CSIT Logical Topologies**: CSIT Logical Topologies. + - **VPP Startup Settings** + - **TRex Traffic Generator** diff --git a/docs/content/infrastructure/_index.md b/docs/content/infrastructure/_index.md index 3ccc042a8b..c5dbd21d87 100644 --- a/docs/content/infrastructure/_index.md +++ b/docs/content/infrastructure/_index.md @@ -1,5 +1,6 @@ --- +bookCollapseSection: false bookFlatSection: true title: "Infrastructure" weight: 4 ----
\ No newline at end of file +--- diff --git a/docs/content/infrastructure/fdio_csit_logical_topologies.md b/docs/content/infrastructure/fdio_csit_logical_topologies.md index 5dd323d30c..4e9c22b357 100644 --- a/docs/content/infrastructure/fdio_csit_logical_topologies.md +++ b/docs/content/infrastructure/fdio_csit_logical_topologies.md @@ -1,6 +1,6 @@ --- title: "FD.io CSIT Logical Topologies" -weight: 4 +weight: 5 --- # FD.io CSIT Logical Topologies diff --git a/docs/content/infrastructure/fdio_csit_testbed_versioning.md b/docs/content/infrastructure/fdio_csit_testbed_versioning.md index 5185c787f7..4e8fb69659 100644 --- a/docs/content/infrastructure/fdio_csit_testbed_versioning.md +++ b/docs/content/infrastructure/fdio_csit_testbed_versioning.md @@ -1,7 +1,7 @@ --- bookToc: true title: "FD.io CSIT Testbed Versioning" -weight: 3 +weight: 4 --- # FD.io CSIT Testbed Versioning diff --git a/docs/content/infrastructure/fdio_csit_testbed_specifications.md b/docs/content/infrastructure/fdio_dc_testbed_specifications.md index 24a30cf1fa..3daa3824e2 100644 --- a/docs/content/infrastructure/fdio_csit_testbed_specifications.md +++ b/docs/content/infrastructure/fdio_dc_testbed_specifications.md @@ -1,10 +1,10 @@ --- bookToc: true -title: "FD.io CSIT Testbed Specifications" +title: "FD.io DC Testbed Specifications" weight: 2 --- -# FD.io CSIT Testbed Specifications +# FD.io DC Testbed Specifications ## Purpose diff --git a/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md b/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md index 25934af770..140c74ffc4 100644 --- a/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md +++ b/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md @@ -7,7 +7,7 @@ weight: 1 Captured inventory data: - **name**: CSIT functional server name as tracked in - [CSIT testbed specification]({{< ref "fdio_csit_testbed_specifications#FD.io CSIT Testbed Specifications" >}}), + [CSIT testbed specification]({{< ref "fdio_dc_testbed_specifications#FD.io CSIT Testbed Specifications" >}}), followed by "/" and the actual configured hostname, unless it is the same as CSIT name. - **oper-status**: operational status (up|down). @@ -24,8 +24,8 @@ Captured inventory data: ## Missing Equipment Inventory 1. Ixia PerfectStorm One Appliance - - [**Specification**]({{< ref "fdio_csit_testbed_specifications#2-node-ixiaps1l47-ixia-psone-l47-2n-ps1" >}}) - - [**Wiring**]({{< ref "fdio_csit_testbed_specifications#2-node-ixiaps1l47-2n-ps1" >}}) + - [**Specification**]({{< ref "fdio_dc_testbed_specifications#2-node-ixiaps1l47-ixia-psone-l47-2n-ps1" >}}) + - [**Wiring**]({{< ref "fdio_dc_testbed_specifications#2-node-ixiaps1l47-2n-ps1" >}}) - **mgmt-ip4**: 10.30.51.62 s26-t25-tg1 - **ipmi-ip4**: 10.30.50.59 s26-t25-tg1 diff --git a/docs/content/infrastructure/testbed_configuration/_index.md b/docs/content/infrastructure/testbed_configuration/_index.md index d0716003c5..79d0250474 100644 --- a/docs/content/infrastructure/testbed_configuration/_index.md +++ b/docs/content/infrastructure/testbed_configuration/_index.md @@ -1,6 +1,6 @@ --- bookCollapseSection: true bookFlatSection: false -title: "FD.io CSIT Testbed Configuration" +title: "FD.io DC Testbed Configuration" weight: 3 ---
\ No newline at end of file diff --git a/docs/content/methodology/trex_traffic_generator.md b/docs/content/infrastructure/trex_traffic_generator.md index 4f62d91c47..3497447cbf 100644 --- a/docs/content/methodology/trex_traffic_generator.md +++ b/docs/content/infrastructure/trex_traffic_generator.md @@ -1,6 +1,6 @@ --- title: "TRex Traffic Generator" -weight: 5 +weight: 7 --- # TRex Traffic Generator @@ -9,8 +9,8 @@ weight: 5 [TRex traffic generator](https://trex-tgn.cisco.com) is used for majority of CSIT performance tests. TRex is used in multiple types of performance tests, -see [Data Plane Throughtput]({{< ref "data_plane_throughput/data_plane_throughput/#Data Plane Throughtput" >}}) -for more detail. +see [Data Plane Throughtput]({{< ref "../methodology/measurements/data_plane_throughput/data_plane_throughput/#Data Plane Throughtput" >}}) +for more details. ## Traffic modes @@ -192,4 +192,4 @@ in Python part of CSIT code. If measurement of latency is requested, two more packet streams are created (one for each direction) with TRex flow_stats parameter set to STLFlowLatencyStats. In that case, returned statistics will also include -min/avg/max latency values and encoded HDRHistogram data.
\ No newline at end of file +min/avg/max latency values and encoded HDRHistogram data. diff --git a/docs/content/methodology/vpp_startup_settings.md b/docs/content/infrastructure/vpp_startup_settings.md index 6e40091a6c..7361d4b21f 100644 --- a/docs/content/methodology/vpp_startup_settings.md +++ b/docs/content/infrastructure/vpp_startup_settings.md @@ -1,6 +1,6 @@ --- title: "VPP Startup Settings" -weight: 17 +weight: 6 --- # VPP Startup Settings diff --git a/docs/content/introduction/_index.md b/docs/content/introduction/_index.md deleted file mode 100644 index e028786bd1..0000000000 --- a/docs/content/introduction/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -bookFlatSection: true -title: "Introduction" -weight: 1 ----
\ No newline at end of file diff --git a/docs/content/introduction/automating_vpp_api_flag_day.md b/docs/content/introduction/automating_vpp_api_flag_day.md deleted file mode 100644 index 131adeab9d..0000000000 --- a/docs/content/introduction/automating_vpp_api_flag_day.md +++ /dev/null @@ -1,303 +0,0 @@ ---- -bookHidden: true -title: "VPP API Flag Day Algorithms" ---- - -# VPP API Flag Day Algorithm - -## Abstract - -This document describes the current solution to the problem of -automating the detection of VPP API changes which are not backwards -compatible with existing CSIT tests, by defining the "Flag Day" -process of deploying a new set of CSIT tests which are compatible -with the new version of the VPP API without causing a halt to the -normal VPP/CSIT operational CI process. This is initially -limited to changes in \*.api files contained in the vpp repo. -Eventually the detection algorithm could be extended to include -other integration points such as "directory" structure of stats -segment or PAPI python library dependencies. - -## Motivation - -Aside of per-release activities (release report), CSIT also provides testing -that requires somewhat tight coupling to the latest (merged but not released) -VPP code. Currently, HEAD of one project is run against somewhat older codebase -of the other project. Definition of what is the older codebase to use -is maintained by CSIT project. For older CSIT codebase, there are so-called -"oper" branches. For older VPP codebase, CSIT master HEAD contains identifiers -for "stable" VPP builds. Such older codebases are also used for verify jobs, -where HEAD of the other project is replaced by the commit under review. - -One particular type of jobs useful for VPP development is trending jobs. -They test latests VPP build with latest oper branch of CSIT, -and analytics is applied to detect regressions in preformance. -For this to work properly, VPP project needs a warning against breaking -the assumptions the current oper branch makes about VPP behavior. -In the past, the most frequent type of such breakage was API change. - -Earlier attempts to create a process to minimize breakage have focused -on creating a new verify job for VPP (called api-crc job) that -votes -1 on a change that affects CRC values for API messages CSIT uses. -The list of messages and CRC values (multiple "collections" are allowed) -is maintained in CSIT repository (in oper branch). -The process was less explicit on how should CSIT project maintain such list. -As CSIT was not willing to support two incpompatible API messages -by the same codebase (commit), there were unavoidable windows -where either trenging jobs, or CSIT verify jobs were failing. - -Practice showed that human (or infra) errors can create two kinds of breakages. -Either the unavoidable short window gets long, affecting a trending job run -or two, or the api-crc job starts giving -1 to innocent changes -because oper branch went out of sync with VPP HEAD codebase. -This second type of failure prevents any merges to VPP for a long time -(12 hours is the typical time, give time zone differences). - -The current version of this document introduces two new requirements. -Firstly, the api-crc job should not give false -1, under any -(reasonable) circumstances. That means, if a VPP change -(nor any of its unmerged ancestor commits) does not affect any CRC values -for messages used by CSIT, -1 should only mean "rebase is needed", -and rebasing to HEAD should result in +1 from the api-crc job. 
-Secondly, no more than one VPP change is allowed to be processed -(at the same time). - -## Naming - -It is easier to define the process after chosing shorter names -for notions that need long definition. - -Note: Everytime a single job is mentioned, -in practice it can be a set of jobs covering parts of functionality. -A "run" of the set of jobs passes only if each job within the set -has been run (again) and passed. - -## Jobs - -+ A *vpp verify* job: Any job run automatically, and voting on open VPP changes. - Some verify jobs compile and package VPP for target operating system - and processor architecture, the packages are NOT archived (currently). - They should be cached somewhere in future to speed up in downstream jobs, - but currently each such downstream job can clone and build. - -+ The *api-crc* job: Quick verify job for VPP changes, that accesses - CSIT repository (checkout latest oper branch HEAD) to figure out - whether merging the change is safe from CSIT point of view. - Here, -1 means CSIT is not ready. +1 means CSIT looks to be ready - for the new CRC values, but there still may be failures on real tests. - -+ A *trending* job: Any job that is started by timer and performs testing. - It checkouts CSIT latest oper branch HEAD, downloads the most recent - completely uploaded VPP package, and unconditionally runs the tests. - CRC checks are optional, ideally only written to console log - without otherwise affecting the test cases. - -+ A *vpp-csit* job: A slower verify job for VPP changes, that accesses CSIT - repository and runs tests from the correct CSIT commit (chosen as in trending) - against the VPP (built from the VPP patch under review). - Vote -1 means there were test failures. +1 means no test failures, meaning - there either was no API change, or it was backward compatible. - -+ A *csit-vpp* job: Verify job for open CSIT changes. Downloads the - (completely uploaded) VPP package marked as "stable", and runs a selection - of tests (from the CSIT patch under review). - Vote +1 means all tests have passed, so it is safe to merge - the patch under review. - -+ A *patch-on-patch* job: Manually triggered non-voting job - for open CSIT changes. Compiles and packages from VPP source - (usually of an unmerged change). Then runs the same tests as csit-vpp job. - This job is used to prove the CSIT patch under review is supporting - the specified VPP code. - In practice, this can be a vpp-csit job started with CSIT_REF set. - -+ A *manual verification* is done by a CSIT committer, locally executing steps - equivalent to the patch-on-patch job. This can to save time and resources. - -## CRC Collections - -Any commit in/for the CSIT repository contains a file (supported_crcs.yaml), -which contains either one or two collections. A collection is a mapping -that maps API message name to its CRC value. - -A collection name specifies which VPP build is this collection for. -An API message name is present in a collection if and only if -it is used by a test implementation (can be in different CSIT commit) -targeted at the VPP build (pointed out by the collection name). - -+ The *stable collection*: Usually required, listed first, has comments and name - pointing to the VPP build this CSIT commit marks as stable. - The stable collection is only missing in deactivating changes (see below) - when not mergeable yet. 
- -+ The *active collection*: Optional, listed second, has comments and name - pointing to the VPP Gerrit (including patch set number) - the currently active API process is processing. - The patch set number part can be behind the actual Gerrit state. - This is safe, because api-crc job on the active API change will fail - if the older patch is no longer API-equivalent to the newer patch. - -## Changes - -+ An *API change*: The name for any Gerrit Change for VPP repository - that does not pass api-crc job right away, and needs this whole process. - This usually means .api files are edited, but a patch that affects - the way CRC values are computed is also an API change. - - Full name could be VPP API Change, but as no CSIT change is named "API change" - (and this document does not talk about other FD.io or external projects), - "API change" is shorter. - -+ A *blocked change*: The name for open Gerrit Change for VPP repository - that got -1 from some of voting verify jobs. - -+ A *VPP-blocked change": A blocked change which got -1 from some "pure VPP" - verify job, meaning no CSIT code has been involved in the vote. - Example: "make test" fails. - - VPP contributor is expected to fix the change, or VPP developers - are expected to found a cause in an earlier VPP change, and fix it. - No interaction with CSIT developers is necessary. - -+ A *CSIT-blocked change*: A blocked change which is not VPP-blocked, - but does not pass some vpp-csit job. - To fix a CSIT-blocked change, an interaction with a CSIT committer - is usually necessary. Even if a VPP developer is experienced enough - to identify the cause of the failure, a merge to CSIT is usually needed - for a full fix. - - This process does not specify what to do with CSIT-blocked changes - that are not also API changes. - -+ A *candidate API change*: An API change that meets all requirements - to become active (see below). Currently, the requirements are: - - + No -1 nor -2 from from any human reviewer. - - + All verify jobs (except vpp-csit ones) pass. - - + +1 from a VPP committer. - - The reason is to avoid situations where an API change becomes active, - but the VPP committers are unwilling to merge it for some reason. - -+ The *active API change*: The candidate API change currently being processed - by the API Flag Day Algorithm. - While many API changes can be candidates at the same time, - only one is allowed be active at a time. - -+ The *activating change*: The name for a Gerrit Change for CSIT repository - that does not change the test code, but adds the active CRC collection. - Merge of the opening change (to latest CSIT oper branch) defines - which API change has become active. - -+ The *deactivating change*: The name for Gerrit Change for CSIT repository - that only supports tests and CRC values for VPP with the active API change. - That implies the previously stable CRC collection is deleted, - and any edits to the test implementation are done here. - -+ The *mergeable deactivating change*: The deactivating change with additional - requirements. Details on the requirements are listed in the next section. - Merging this change finishes the process for the active API change. - -It is possible for a single CSIT change to act both as a mergeable -deactivating change for one API change, and as an activating change -for another API change. As English lacks a good adjective for such a thing, -this document does not name this change. 
-When this documents says a change is activating or deactivating, -it allows the possibility for the change to fullfill also other purposes -(e.g. acting as deactivating / activating change for another API change). - -## Algorithm Steps - -The following steps describe the application of the API "Flag Day" algorithm: - -#. A VPP patch for an API change is submitted to - gerrit for review. -#. The api-crc job detects the API CRC values have changed - for some messages used by CSIT. -#. The api-crc job runs in parallel with any other vpp-csit verify job, - so those other jobs can hint at the impact on CSIT. - Currently, any such vpp-csit job is non-voting, - as the current process does not guarantee such jobs passes - when the API change is merged. -#. If the api-crc job fails, an email with the appropriate reason - is sent to the VPP patch submitter and vpp-api-dev@lists.fd.io - including the VPP patch information and .api files that are edited. -#. The VPP patch developer works with a VPP committer - to ensure the patch meets requirements to become a candidate (see above). -#. The VPP patch developer and CSIT team create a CSIT JIRA ticket - to identify the work required to support the new VPP API version. -#. CSIT developer creates a patch of the deactivating change - (upload to Gerrit not required yet). -#. CSIT developer runs patch-on-patch job (or manual verification). - Both developers iterate until the verification passes. - Note that in this phase csit-vpp job is expected to vote -1, - as the deactivating change is not mergeable yet. -#. CSIT developer creates the activating change, uploads to Gerrit, - waits for vote (usual review cycle applies). -#. When CSIT committer is satisfied, the activating change is merged - to CSIT master branch and cherry-picked to the latest oper branch. - This enters a "critical section" of the process. - Merges of other activating changes are not allowed from now on. - The targeted API change becomes the active API change. - This does not break any jobs. -#. VPP developer (or CSIT committer) issues a recheck on the VPP patch. -#. On failure, VPP and CSIT committers analyze what went wrong. - Typically, the active CRC collection is matching only an older patch set, - but a newer patch set needs different CRC values. - Either due to improvements on the VPP change in question, - or due to a rebase over previously merged (unrelated) API change. - VPP perhaps needs to rebase, and CSIT definitely needs - to merge edits to the active collection. Then issue a recheck again, - and iterate until success. -#. On success, VPP Committer merges the active API change patch. - (This is also a delayed verification of the current active CRC collection.) -#. VPP committer sends an e-mail to vpp-api-dev stating the support for - the previous CRC values will soon be removed, implying other changes - (whether API or not) should be rebased soon. -#. VPP merge jobs create and upload new VPP packages. - This breaks trending jobs, but both VPP and CSIT verify jobs still work. -#. CSIT developer makes the deactivating change mergeable: - The stable VPP build indicator is bumped to the build - that contains the active API change. The active CRC collection - (added by the activating change) is renamed to the new stable collection. - (The previous stable collection has already been deleted.) - At this time, the deactivating change should be uploaded to Gerrit and - csit verify jobs should be triggered. -#. 
CSIT committer reviews the code, perhaps triggering any additional jobs - needed to verify the tests using the edited APIs are still working. -#. When satisfied, CSIT committer merges the mergeable deactivating change - (to both master and oper). - The merge fixes trending jobs. VPP and CSIT verify jobs continue to work. - The merge also breaks some verify jobs for old changes in VPP, - as announced when the active API change was merged. - The merge is the point where the process leaves the "critical section", - thus allowing merges of activating changes for other API changes. -#. CSIT committer sends an e-mail to vpp-api-dev stating the support for - the previous CRC values has been removed, and rebase is needed - for all affected VPP changes. -#. Recheck of existing VPP patches in gerrit may cause the "VPP - API Incompatible Change Test" to send an email to the patch - submitter to rebase the patch to pick up the compatible VPP API - version files. - -### Real life examples - -Simple API change: https://gerrit.fd.io/r/c/vpp/+/23829 - -Activating change: https://gerrit.fd.io/r/c/csit/+/23956 - -Mergeable deactivating change: https://gerrit.fd.io/r/c/csit/+/24280 - -Less straightforward mergeable deactivating change: -https://gerrit.fd.io/r/c/csit/+/22526 -It shows: - -+ Crc edits: supported_crcs.yaml -+ Version bump: VPP_STABLE_VER_UBUNTU_BIONIC -+ And even a way to work around failing tests: - eth2p-ethicmpv4-ip4base-eth-1tap-dev.robot - -Simple change that is both deactivating and activating: -https://gerrit.fd.io/r/c/csit/+/23969 diff --git a/docs/content/introduction/bash_code_style.md b/docs/content/introduction/bash_code_style.md deleted file mode 100644 index bbd0c37196..0000000000 --- a/docs/content/introduction/bash_code_style.md +++ /dev/null @@ -1,651 +0,0 @@ ---- -bookHidden: true -title: "Bash Code Style" ---- - -The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", -"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", -"MAY", and "OPTIONAL" in this document are to be interpreted as -described in [BCP 14](https://tools.ietf.org/html/bcp14), -[RFC2119](https://tools.ietf.org/html/rfc2119), -[RFC8174](https://tools.ietf.org/html/rfc8174) -when, and only when, they appear in all capitals, as shown here. - -This document SHALL describe guidelines for writing reliable, maintainable, -reusable and readable code for CSIT. - -# Proposed Style - -# File Types - -Bash files SHOULD NOT be monolithic. Generally, this document -considers two types of bash files: - -+ Entry script: Assumed to be called by user, - or a script "external" in some way. - - + Sources bash libraries and calls functions defined there. - -+ Library file: To be sourced by entry scipts, possibly also by other libraries. - - + Sources other libraries for functions it needs. - - + Or relies on a related file already having sourced that. - - + Documentation SHALL imply which case it is. - - + Defines multiple functions other scripts can call. - -# Safety - -+ Variable expansions MUST be quoted, to prevent word splitting. - - + This includes special "variables" such as "${1}". - - + RECOMMENDED even if the value is safe, as in "$?" and "$#". - - + It is RECOMMENDED to quote strings in general, - so text editors can syntax-highlight them. - - + Even if the string is a numeric value. - - + Commands and known options can get their own highlight, no need to quote. - - + Example: You do not need to quote every word of - "pip install --upgrade virtualenv". 
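A few lines sketching the quoting rules above; the variable names and values are only illustrative, and the "die" helper is assumed to be defined in a sourced library as described elsewhere in this guide:

```bash
# Hypothetical value; quoting prevents word splitting on the space.
archive_dir="/tmp/csit logs"
mkdir -p "${archive_dir}" || die
# Command substitution on the right-hand side of an assignment is safe unquoted.
vpp_version=$(cat "VPP_STABLE_VER_UBUNTU_BIONIC") || die
echo "Using VPP version: ${vpp_version}"
```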
- - + Code SHALL NOT quote glob characters you need to expand (obviously). - - + OPTIONALLY do not quote adjacent characters (such as dot or fore-slash), - so that syntax highlighting makes them stand out compared to surrounding - ordinary strings. - - + Example: cp "logs"/*."log" "."/ - - + Command substitution on right hand side of assignment are safe - without quotes. - - + Note that command substitution limits the scope for quotes, - so it is NOT REQUIRED to escape the quotes in deeper levels. - - + Both backtics and "dollar round-bracket" provide command substitution. - The folowing rules are RECOMMENDED: - - + For simple constructs, use "dollar round-bracket". - - + If there are round brackets in the surrounding text, use backticks, - as some editor highlighting logic can get confused. - - + Avoid nested command substitution. - - + Put intermediate results into local variables, - use "|| die" on each step of command substitution. - - + Code SHOULD NOT be structured in a way where - word splitting is intended. - - + Example: Variable holding string of multiple command lines arguments. - - + Solution: Array variable should be used in this case. - - + Expansion MUST use quotes then: "${name[@]}". - - + Word splitting MAY be used when creating arrays from command substitution. - -+ Code MUST always check the exit code of commands. - - + Traditionally, error code checking is done either by "set -e" - or by appending "|| die" after each command. - The first is unreliable, due to many rules affecting "set -e" behavior - (see <https://mywiki.wooledge.org/BashFAQ/105>), but "|| die" - relies on humans identifying each command, which is also unreliable. - When was the last time you checked error code of "echo" command, - for example? - - + Another example: "set -e" in your function has no effect - if any ancestor call is done with logical or, - for example in "func || code=$?" construct. - - + As there is no reliable method of error detection, and there are two - largely independent unreliable methods, the best what we can do - is to apply both. So, code SHOULD explicitly - check each command (with "|| die" and similar) AND have "set -e" applied. - - + Code MUST explicitly check each command, unless the command is well known, - and considered safe (such as the aforementioned "echo"). - - + The well known commands MUST still be checked implicitly via "set -e". - - + See below for specific "set -e" recommendations. - -+ Code SHOULD use "readlink -e" (or "-f" if target does not exist yet) - to normalize any path value to absolute path without symlinks. - It helps with debugging and identifies malformed paths. - -+ Code SHOULD use such normalized paths for sourcing. - -+ When exiting on a known error, code MUST print a longer, helpful message, - in order for the user to fix their situation if possible. - -+ When error happens at an unexpected place, it is RECOMMENDED for the message - to be short and generic, instead of speculative. - -# Bash Options - -+ Code MUST apply "-x" to make debugging easier. - - + Code MAY temporarily supress such output in order to avoid spam - (e.g. in long busy loops), but it is still NOT RECOMMENDED to do so. - -+ Code MUST apply "-e" for early error detection. - - + But code still SHOULD use "|| die" for most commands, - as "-e" has numerous rules and exceptions. - - + Code MAY apply "+e" temporarily for commands which (possibly nonzero) - exit code it interested in. - - + Code MUST to store "$?" and call "set -e" immediatelly afterwards. 
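A short sketch of the exit-code capture pattern just described; the grep command and file name are only illustrative:

```bash
# Interested in the (possibly nonzero) exit code of a plain command,
# so "-e" is suspended only for that single command.
set +e
grep -q "FAIL" "results.txt"
code="${?}"    # Stored immediately...
set -e         # ...and "-e" restored right away, as required above.
if [[ "${code}" != "0" ]]; then
    echo "No failures found in results."
fi
```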
- - + Code MUST NOT use this approach when calling functions. - - + That is because functions are instructed to apply "set -e" on their own - which (when triggered) will exit the whole entry script. - - + Unless overriden by ERR trap. - But code SHOULD NOT set any ERR trap. - - + If code needs exit code of a function, it is RECOMMENDED to use - pattern 'code="0"; called_function || code="${?}"'. - - + In this case, contributor MUST make sure nothing in the - called_function sub-graph relies on "set -e" behavior, - because the call being part of "or construct" disables it. - - + Code MAY append "|| true" for benign commands, - when it is clear non-zero exit codes make no difference. - - + Also in this case, the contributor MUST make sure nothing within - the called sub-graph depends on "set -e", as it is disabled. - -+ Code MUST apply "-u" as unset variable is generally a typo, thus an error. - - + Code MAY temporarily apply "+u" if a command needs that to pass. - - + Virtualenv activation is the only known example so far. - -+ Code MUST apply "-o pipefail" to make sure "-e" picks errors - inside piped construct. - - + Code MAY use "|| true" inside a pipe construct, in the (inprobable) case - when non-zero exit code still results in a meaningful pipe output. - -+ All together: "set -exuo pipefail". - - + Code MUST put that line near start of every file, so we are sure - the options are applied no matter what. - - + "Near start" means "before any nontrivial code". - - + Basically only copyright is RECOMMENDED to appear before. - - + Also code MUST put the line near start of function bodies - and subshell invocations. - -# Functions - -There are (at least) two possibilities how a code from an external file -can be executed. Either the file contains a code block to execute -on each "source" invocation, or the file just defines functions -which have to be called separately. - -This document considers the "function way" to be better, -here are some pros and cons: - -+ Cons: - - + The function way takes more space. Files have more lines, - and the code in function body is one indent deeper. - - + It is not easy to create functions for low-level argument manipulation, - as "shift" command in the function code does not affect the caller context. - - + Call sites frequently refer to code two times, - when sourcing the definition and when executing the function. - - + It is not clear when a library can rely on its relative - to have performed the sourcing already. - - + Ideally, each library should detect if it has been sourced already - and return early, which takes even more space. - -+ Pros: - - + Some code blocks are more useful when used as function, - to make call site shorter. - - + Examples: Trap functions, "die" function. - - + The "import" part and "function" part usually have different side effects, - making the documentation more focused (even if longer overall). - - + There is zero risk of argument-less invocation picking arguments - from parent context. - - + This safety feature is the main reason for chosing the "function way". - - + This allows code blocks to support optional arguments. - -+ Rules: - - + Library files MUST be only "source"d. For example if "tox" calls a script, - it is an entry script. - - + Library files (upon sourcing) MUST minimize size effect. - - + The only permitted side effects MUST by directly related to: - - + Defining functions (without executing them). - - + Sourcing sub-library files. 
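To make the library-file rules above concrete, a minimal sketch follows; the file path, sourced sub-library, and function are hypothetical:

```bash
# Hypothetical library file, e.g. resources/libraries/bash/function/archive.sh
# Only side effects on sourcing: defining a function and sourcing a sub-library.

set -exuo pipefail

# "die" is not available yet at this point, so plain exit is the fallback.
source "${BASH_FUNCTION_DIR}/common.sh" || exit 1


function archive_logs () {

    # Copy collected log files into the archive directory.
    #
    # Variables read:
    # - ARCHIVE_DIR - Path to existing directory to store the logs in.
    # Functions called:
    # - die - Print to stderr and exit, assumed defined in common.sh.

    set -exuo pipefail

    cp "logs"/*."log" "${ARCHIVE_DIR}"/ || die "Failed to archive logs."
}
```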
- - + If a bash script indirectly call another bash script, - it is not a "source" operation, variables are not shared, - so the called script MUST be considered an entry script, - even if it implements logic fitting into a single function. - - + Entry scripts SHOULD avoid duplicating any logic. - - + Clear duplicated blocks MUST be moved into libraries as functions. - - + Blocks with low amount of duplication MAY remain in entry scripts. - - + Usual motives for not creating functions are: - - + The extracted function would have too much logic for processing - arguments (instead of hardcoding values as in entry script). - - + The arguments needed would be too verbose. - - + And using "set +x" would take too much vertical space - (when compared to entry script implementation). - -# Variables - -This document describes two kinds of variables: called "local" and "global". - -+ Local variables: - - + Variable name MUST contain only lower case letters, digits and underscores. - - + Code MUST NOT export local variables. - - + Code MUST NOT rely on local variables set in different contexts. - - + Documentation is NOT REQUIRED. - - + Variable name SHOULD be descriptive enough. - - + Local variable MUST be initialized before first use. - - + Code SHOULD have a comment if a reader might have missed - the initialization. - - + Unset local variables when leaving the function. - - + Explicitly typeset by "local" builtin command. - - + Require strict naming convention, e.g. function_name__variable_name. - -+ Global variables: - - + Variable name MUST contain only upper case letters, digits and underscores. - - + They SHOULD NOT be exported, unless external commands need them - (e.g. PYTHONPATH). - - + Code MUST document if a function (or its inner call) - reads a global variable. - - + Code MUST document if a function (or its inner call) - sets or rewrites a global variable. - - + If a function "wants to return a value", it SHOULD be implemented - as the function setting (or rewriting) a global variable, - and the call sites reading that variable. - - + If a function "wants to accept an argument", it IS RECOMMENDED - to be implemented as the call sites setting or rewriting global variables, - and the function reading that variables. - But see below for direct arguments. - -+ Code MUST use curly brackets when referencing variables, - e.g. "${my_variable}". - - + It makes related constructs (such as ${name:-default}) less surprising. - - + It looks more similar to Robot Framework variables (which is good). - -# Arguments - -Bash scripts and functions MAY accept arguments, named "${1}", "${2}" and so on. -As a whole available via "$@". -You MAY use "shift" command to consume an argument. - -## Contexts - -Functions never have access to parent arguments, but they can read and write -variables set or read by parent contexts. - -### Arguments Or Variables - -+ Both arguments and global variables MAY act as an input. - -+ In general, if the caller is likely to supply the value already placed - in a global variable of known name, it is RECOMMENDED - to use that global variable. - -+ Construct "${NAME:-value}" can be used equally well for arguments, - so default values are possible for both input methods. - -+ Arguments are positional, so there are restrictions on which input - is optional. - -+ Functions SHOULD either look at arguments (possibly also - reading global variables to use as defaults), or look at variables only. 
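A compact sketch of a function taking a direct argument while reading a global variable as its default, as suggested above; all names are hypothetical and "die" is assumed to come from a sourced library:

```bash
function wait_for_file () {

    # Block until a file appears, or die after a timeout.
    #
    # Arguments:
    # - ${1} - Path to the file to wait for.
    # - ${2} - Timeout in seconds, defaults to global WAIT_TIMEOUT or 10.
    # Variables read:
    # - WAIT_TIMEOUT - Optional default timeout in seconds.

    set -exuo pipefail

    local file_path="${1}"
    local timeout="${2:-${WAIT_TIMEOUT:-10}}"
    local waited="0"
    while [[ ! -e "${file_path}" ]]; do
        if [[ "${waited}" -ge "${timeout}" ]]; then
            die "Timed out waiting for file: ${file_path}"
        fi
        sleep 1
        waited="$((waited + 1))"
    done
}
```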
- -+ Code MUST NOT rely on "${0}", it SHOULD use "${BASH_SOURCE[0]}" instead - (and apply "readlink -e") to get the current block location. - -+ For entry scripts, it is RECOMMENDED to use standard parsing capabilities. - - + For most Linux distros, "getopt" is RECOMMENDED. - -# Working Directory Handling - -+ Functions SHOULD act correctly without neither assuming - what the currect working directory is, nor changing it. - - + That is why global variables and arguments SHOULD contain - (normalized) full paths. - - + Motivation: Different call sites MAY rely on different working directories. - -+ A function MAY return (also with nonzero exit code) when working directory - is changed. - - + In this case the function documentation MUST clearly state where (and when) - is the working directory changed. - - + Exception: Functions with undocumented exit code. - - + Those functions MUST return nonzero code only on "set -e" or "die". - - + Note that both "set -e" and "die" by default result in exit of the whole - entry script, but the caller MAY have altered that behavior - (by registering ERR trap, or redefining die function). - - + Any callers which use "set +e" or "|| true" MUST make sure - their (and their caller ancestors') assumption on working directory - are not affected. - - + Such callers SHOULD do that by restoring the original working directory - either in their code, - - + or contributors SHOULD do such restoration in the function code, - (see below) if that is more convenient. - - + Motivation: Callers MAY rely on this side effect to simplify their logic. - -+ A function MAY assume a particular directory is already set - as the working directory (to save space). - - + In this case function documentation MUST clearly state what the assumed - working directory is. - - + Motivation: Callers MAY call several functions with common - directory of interest. - - + Example: Several dowload actions to execute in sequence, - implemented as functions assuming ${DOWNLOAD_DIR} - is the working directory. - -+ A function MAY change the working directory transiently, - before restoring it back before return. - - + Such functions SHOULD use command "pushd" to change the working directory. - - + Such functions SHOULD use "trap 'trap - RETURN; popd' RETURN" - imediately after the pushd. - - + In that case, the "trap - RETURN" part MUST be included, - to restore any trap set by ancestor. - - + Functions MAY call "trap - RETURN; popd" exlicitly. - - + Such functions MUST NOT call another pushd (before an explicit popd), - as traps do not stack within a function. - -+ If entry scripts also use traps to restore working directory (or other state), - they SHOULD use EXIT traps instead. - - + That is because "exit" command, as well as the default behavior - of "die" or "set -e" cause direct exit (without skipping function returns). - -# Function Size - -+ In general, code SHOULD follow reasoning similar to how pylint - limits code complexity. - -+ It is RECOMMENDED to have functions somewhat simpler than Python functions, - as Bash is generally more verbose and less readable. - -+ If code contains comments in order to partition a block - into sub-blocks, the sub-blocks SHOULD be moved into separate functions. - - + Unless the sub-blocks are essentially one-liners, - not readable just because external commands do not have - obvious enough parameters. Use common sense. - -# Documentation - -+ The library path and filename is visible from source sites. 
It SHOULD be - descriptive enough, so reader do not need to look inside to determine - how and why is the sourced file used. - - + If code would use several functions with similar names, - it is RECOMMENDED to create a (well-named) sub-library for them. - - + Code MAY create deep library trees if needed, it SHOULD store - common path prefixes into global variables to make sourcing easier. - - + Contributors, look at other files in the subdirectory. You SHOULD - improve their filenames when adding-removing other filenames. - - + Library files SHOULD NOT have executable flag set. - - + Library files SHOULD have an extension .sh (or perhaps .bash). - - + It is RECOMMENDED for entry scripts to also have executable flag unset - and have .sh extension. - -+ Each entry script MUST start with a shebang. - - + "#!/bin/usr/env bash" is RECOMMENDED. - - + Code SHOULD put an empty line after shebang. - - + Library files SHOULD NOT contain a shebang, as "source" is the primary - method to include them. - -+ Following that, there SHOULD be a block of comment lines with copyright. - - + It is a boilerplate, but human eyes are good at ignoring it. - - + Overhead for git is also negligible. - -+ Following that, there MUST be "set -exuo pipefail". - - + It acts as an anchor for humans to start paying attention. - -Then it depends on script type. - -## Library Documentation - -+ Following "set -exuo pipefail" SHALL come the "import part" documentation. - -+ Then SHALL be the import code - ("source" commands and a bare minimum they need). - -+ Then SHALL be the function definitions, and inside: - - + The body SHALL sart with the function documentation explaining API contract. - Similar to Robot [Documentation] or Python function-level docstring. - - + See below. - - + "set -exuo pipefail" SHALL be the first executable line - in the function body, except functions which legitimely need - different flags. Those SHALL also start with appropriate "set" command(s). - - + Lines containing code itself SHALL follow. - - + "Code itself" SHALL include comment lines - explaining any non-obvious logic. - - + There SHALL be two empty lines between function definitions. - -More details on function documentation: - -Generally, code SHOULD use comments to explain anything -not obvious from the funtion name. - -+ Function documentation SHOULD start with short description of function - operation or motivation, but only if not obvious from function name. - -+ Documentation SHOULD continue with listing any non-obvious side effect: - - + Documentation MUST list all read global variables. - - + Documentation SHOULD include descriptions of semantics - of global variable values. - It is RECOMMENDED to mention which function is supposed to set them. - - + The "include descriptions" part SHOULD apply to other items as well. - - + Documentation MUST list all global variables set, unset, reset, - or otherwise updated. - - + It is RECOMMENDED to list all hardcoded values used in code. - - + Not critical, but can hint at future improvements. - - + Documentation MUST list all files or directories read - (so caller can make sure their content is ready). - - + Documentation MUST list all files or directories updated - (created, deleted, emptied, otherwise edited). - - + Documentation SHOULD list all functions called (so reader can look them up). - - + Documentation SHOULD mention where are the functions defined, - if not in the current file. - - + Documentation SHOULD list all external commands executed. 
- - + Because their behavior can change "out of bounds", meaning - the contributor changing the implementation of the extrenal command - can be unaware of this particular function interested in its side effects. - - + Documentation SHOULD explain exit code (coming from - the last executed command). - - + Usually, most functions SHOULD be "pass or die", - but some callers MAY be interested in nonzero exit codes - without using global variables to store them. - - + Remember, "exit 1" ends not only the function, but all scripts - in the source chain, so code MUST NOT use it for other purposes. - - + Code SHOULD call "die" function instead. This way the caller can - redefine that function, if there is a good reason for not exiting - on function failure. - -## Entry Script Documentation - -+ After "set -exuo pipefail", high-level description SHALL come. - - + Entry scripts are rarely reused, so detailed side effects - are OPTIONAL to document. - - + But code SHOULD document the primary side effects. - -+ Then SHALL come few commented lines to import the library with "die" function. - -+ Then block of "source" commands for sourcing other libraries needed SHALL be. - - + In alphabetical order, any "special" library SHOULD be - in the previous block (for "die"). - -+ Then block os commands processing arguments SHOULD be (if needed). - -+ Then SHALL come block of function calls (with parameters as needed). - -# Other General Recommendations - -+ Code SHOULD NOT not repeat itself, even in documentation: - - + For hardcoded values, a general description SHOULD be written - (instead of copying the value), so when someone edits the value - in the code, the description still applies. - - + If affected directory name is taken from a global variable, - documentation MAY distribute the directory description - over the two items. - - + If most of side effects come from an inner call, - documentation MAY point the reader to the documentation - of the called function (instead of listing all the side effects). - -+ But documentation SHOULD repeat it if the information crosses functions. - - + Item description MUST NOT be skipped just because the reader - should have read parent/child documentation already. - - + Frequently it is RECOMMENDED to copy&paste item descriptions - between functions. - - + But sometimes it is RECOMMENDED to vary the descriptions. For example: - - + A global variable setter MAY document how does it figure out the value - (without caring about what it will be used for by other functions). - - + A global variable reader MAY document how does it use the value - (without caring about how has it been figured out by the setter). - -+ When possible, Bash code SHOULD be made to look like Python - (or Robot Framework). Those are three primary languages CSIT code relies on, - so it is nicer for the readers to see similar expressions when possible. - Examples: - - + Code MUST use indentation, 1 level is 4 spaces. - - + Code SHOULD use "if" instead of "&&" constructs. - - + For comparisons, code SHOULD use operators such as "!=" (needs "[["). - -+ Code MUST NOT use more than 80 characters per line. - - + If long external command invocations are needed, - code SHOULD use array variables to shorten them. - - + If long strings (or arrays) are needed, code SHOULD use "+=" operator - to grow the value over multiple lines. 
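A tiny sketch of both line-length techniques just listed, reusing the earlier pip example; the options shown are placeholders and "die" is again assumed to be defined:

```bash
# A long invocation: keep the arguments in an array, expand it quoted.
pip_options=("--upgrade" "--no-cache-dir" "--timeout" "60")
pip install "${pip_options[@]}" "virtualenv" || die

# A long string: grow the value over multiple lines with "+=".
message="Installation of virtualenv failed, please check connectivity"
message+=" to the package index and retry."
echo "${message}"
```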
- - + If "|| die" does not fit with the command, code SHOULD use curly braces: - - + Current line has "|| {", - - + Next line has the die commands (indented one level deeper), - - + Final line closes with "}" at original intent level. diff --git a/docs/content/introduction/branches.md b/docs/content/introduction/branches.md deleted file mode 100644 index 20759b9c78..0000000000 --- a/docs/content/introduction/branches.md +++ /dev/null @@ -1,192 +0,0 @@ ---- -bookHidden: true -title: "Git Branches in CSIT" ---- - -# Git Branches in CSIT - -## Overview - -This document describes how to create and remove git branches in CSIT project. - -To be able to perform everything described in this file, you must be **logged -in as a committer**. - -## Operational Branches - -For more information about operational branches see -[CSIT/Branching Strategy](https://wiki.fd.io/view/CSIT/Branching_Strategy) and -[CSIT/Jobs](https://wiki.fd.io/view/CSIT/Jobs) on -[fd.io](https://fd.io) [wiki](https://wiki.fd.io/view/CSIT) pages. - -> Note: The branch `rls2009_lts` is used here only as an example. - -### Pre-requisites - -1. The last builds of weekly and semiweekly jobs must finish with status - *"Success"*. -1. If any of watched jobs failed, try to find the root cause, fix it and run it - again. - -The watched jobs are: - -- master: - - [csit-vpp-device-master-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-master-ubuntu1804-1n-skx-weekly) - - [csit-vpp-device-master-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-master-ubuntu1804-1n-skx-semiweekly) -- 2009_lts: - - [csit-vpp-device-2009_lts-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2009_lts-ubuntu1804-1n-skx-weekly) - - [csit-vpp-device-2009_lts-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2009_lts-ubuntu1804-1n-skx-semiweekly) - -### Procedure - -**A. CSIT Operational Branch** -1. Take the revision string from the last successful build of the **weekly** - job, e.g. **Revision**: 0f9b20775b4a656b67c7039e2dda4cf676af2b21. -1. Open [Gerrit](https://gerrit.fd.io). -1. Go to - [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches). -1. Click `CREATE NEW`. -1. Fill in the revision number and the name of the new operational branch. Its - format is: `oper-YYMMDD` for master and `oper-rls{RELEASE}-{YYMMDD}` or - `oper-rls{RELEASE}_lts-{YYMMDD}` for release branches. -1. Click "CREATE". -1. If needed, delete old operational branches by clicking "DELETE". - -**B. VPP Stable version** -1. Open the console log of the last successful **semiweekly** build and search - for VPP version (e.g. vpp_21 ...). -1. You should find the string with this structure: - `vpp_21.01-rc0~469-g7acab3790~b368_amd64.deb` -1. Modify [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC) - and [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) files. -1. Use a string with the build number, e.g. `21.01-rc0~469_g7acab3790~b129` - for [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) and a string - without the build number, e.g. `21.01-rc0~469_g7acab3790` for - [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC). -1. Update the stable versions in master and in all LTS branches. - -## Release Branches - -> Note: VPP release 21.01 is used here only as an example. - -### Pre-requisites - -1. 
VPP release manager sends the information email to announce that the RC1 - milestone for VPP {release}, e.g. 21.01, is complete, and the artifacts are - available. -1. The artifacts (*.deb and *.rpm) should be available at - `https://packagecloud.io/fdio/{release}`. For example see artifacts for the - [VPP release 20.01](https://packagecloud.io/fdio/2101). The last available - build is to be used. -1. All CSIT patches for the release are merged in CSIT master branch. - -### Procedure - -**A. Release branch** - -1. Open [Gerrit](https://gerrit.fd.io). -1. Go to - [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches). -1. Save the revision string of master for further use. -1. Click `CREATE NEW`. -1. Fill in the revision number and the name of the new release branch. Its - format is: `rlsYYMM`, e.g. rls2101. -1. Click "CREATE". - -**B. Jenkins jobs** - -See ["Add CSIT rls2101 branch"](https://gerrit.fd.io/r/c/ci-management/+/30439) -and ["Add report jobs to csit rls2101 branch"](https://gerrit.fd.io/r/c/ci-management/+/30462) -patches as an example. - -1. [csit.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit.yaml): - Documentation of the source code and the Report - - Add release branch (rls2101) for `csit-docs-merge-{stream}` and - `csit-report-merge-{stream}` (project --> stream). -1. [csit-perf.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-perf.yaml): - Verify jobs - - Add release branch (rls2101) to `project --> jobs --> - csit-vpp-perf-verify-{stream}-{node-arch} --> stream`. - - Add release branch (rls2101) to `project --> project: 'csit' --> stream`. - - Add release branch (rls2101) to `project --> project: 'csit' --> stream_report`. -1. [csit-tox.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-tox.yaml): - tox - - Add release branch (rls2101) to `project --> stream`. -1. [csit-vpp-device.yaml](https://github.com/FDio/ci-management/blob/master/jjb/csit/csit-vpp-device.yaml): - csit-vpp-device - - Add release branch (rls2101) to `project --> jobs (weekly / semiweekly) --> stream`. - - Add release branch (rls2101) to `project --> project: 'csit' --> stream`. - -**C. VPP Stable version** - -See the patch -[Update of VPP_REPO_URL and VPP_STABLE_VER files](https://gerrit.fd.io/r/c/csit/+/30461) -and / or -[rls2101: Update VPP_STABLE_VER files to release version](https://gerrit.fd.io/r/c/csit/+/30976) -as an example. - -1. Find the last successful build on the - [Package Cloud](https://packagecloud.io) for the release, e.g. - [VPP release 20.01](https://packagecloud.io/fdio/2101). -1. Clone the release branch to your PC: - `git clone --depth 1 ssh://<user>@gerrit.fd.io:29418/csit --branch rls{RELEASE}` -1. Modify [VPP_STABLE_VER_UBUNTU_BIONIC](../../VPP_STABLE_VER_UBUNTU_BIONIC) - and [VPP_STABLE_VER_CENTOS](../../VPP_STABLE_VER_CENTOS) files with the last - successful build. -1. Modify [VPP_REPO_URL](../../VPP_REPO_URL) to point to the new release, e.g. - `https://packagecloud.io/install/repositories/fdio/2101`. -1. You can also modify the [.gitreview](../../.gitreview) file and set the new - default branch. -1. Wait until the verify jobs - - [csit-vpp-device-2101-ubuntu1804-1n-skx](https://jenkins.fd.io/job/csit-vpp-device-2101-ubuntu1804-1n-skx) - - [csit-vpp-device-2101-ubuntu1804-1n-tx2](https://jenkins.fd.io/job/csit-vpp-device-2101-ubuntu1804-1n-tx2) - - successfully finish and merge the patch. - -**D. CSIT Operational Branch** - -1. 
Manually start (Build with Parameters) the weekly job - [csit-vpp-device-2101-ubuntu1804-1n-skx-weekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2101-ubuntu1804-1n-skx-weekly) -1. When it successfully finishes, take the revision string e.g. **Revision**: - 876b6c1ae05bfb1ad54ff253ea021f3b46780fd4 to create a new operational branch - for the new release. -1. Open [Gerrit](https://gerrit.fd.io). -1. Go to - [Browse --> Repositories --> csit --> Branches](https://gerrit.fd.io/r/admin/repos/csit,branches). -1. Click `CREATE NEW`. -1. Fill in the revision number and the name of the new operational branch. Its - format is: `oper-rls{RELEASE}-YYMMDD` e.g. `oper-rls2101-201217`. -1. Click "CREATE". -1. Manually start (Build with Parameters) the semiweekly job - [csit-vpp-device-2101-ubuntu1804-1n-skx-semiweekly](https://jenkins.fd.io/view/csit/job/csit-vpp-device-2101-ubuntu1804-1n-skx-semiweekly) -1. When it successfully finishes check in console log if it used the right VPP - version (search for `VPP_VERSION=`) from the right repository (search for - `REPO_URL=`). - -**E. Announcement** - -If everything is as it should be, send the announcement email to -`csit-dev@lists.fd.io` mailing list. - -*Example:* - -Subject: -```text -CSIT rls2101 branch pulled out -``` - -Body: -```text -CSIT rls2101 branch [0] is created and fully functional. - -Corresponding operational branch (oper-rls2101-201217) has been created too. - -We are starting dry runs for performance ndrpdr iterative tests to get initial -ndrpdr values with available rc1 packages as well as to test all the infra -before starting report data collection runs. - -Regards, -<signature> - -[0] https://git.fd.io/csit/log/?h=rls2101 -``` diff --git a/docs/content/introduction/dashboard_history.md b/docs/content/introduction/dashboard_history.md deleted file mode 100644 index f7f9db576a..0000000000 --- a/docs/content/introduction/dashboard_history.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: "Dashboard History" -weight: 1 ---- - -# Dashboard History - -FD.io {{< release_csit >}} Dashboard History and per .[ww] revision changes are -listed below. - - **.[ww] Revision** | **Changes** ---------------------|------------------ - .10 | Initial revision - -FD.io CSIT Revision follow CSIT-[yy][mm].[ww] numbering format, with version -denoted by concatenation of two digit year [yy] and two digit month [mm], and -maintenance revision identified by two digit calendar week number [ww]. diff --git a/docs/content/introduction/model_schema.md b/docs/content/introduction/model_schema.md deleted file mode 100644 index ae3ba38fd7..0000000000 --- a/docs/content/introduction/model_schema.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -bookHidden: true -title: "Model Schema" ---- - -# Model Schema - -This document describes what is currently implemented in CSIT, -especially the export side (UTI), not import side (PAL). - -## Version - -This document is valid for CSIT model version 1.4.0. - -It is recommended to use semantic versioning: https://semver.org/ -That means, if the new model misses a field present in the old model, -bump the major version. If the new model adds a field -not present in the old model, bump the minor version. -Any other edit in the implmenetation (or documentation) bumps the patch version. -If you change value type or formatting, -consider whether the parser (PAL) understands the new value correctly. -Renaming a field is the same as adding a new one and removing the old one. 
-Parser (PAL) has to know exact major version and minimal minor version, -and unless bugs, it can ignore patch version and bumped minor version. - -## UTI - -UTI stands for Unified Test Interface. -It mainly focuses on exporting information gathered during test run -into JSON output files. - -### Output Structure - -UTI outputs come in filesystem tree structure (single tree), where directories -correspond to suite levels and files correspond to suite setup, suite teardown -or any test case at this level of suite. -The directory name comes from SUITE_NAME Robot variable (the last part -as the previous parts are higher level suites), converted to lowercase. -If the suite name contains spaces (Robot converts underscores to spaces), -they are replaced with underscores. - -The filesystem tree is rooted under tests/ (as suites in git are there), -and for each component (test case, suite setup, suite teardown). - -Although we expect only ASCII text in the exported files, -we manipulate files using UTF-8 encoding, -so if Robot Framework uses a non-ascii character, it will be handled. - -### JSON schemas - -CSIT model is formally defined as a collection of JSON schema documents, -one for each output file type. - -The current version specifies only one output file type: -Info output for test case. - -The authoritative JSON schema documents are in JSON format. -Git repository also contains YAML formatted document and conversion utility, -which simplifies maintaining of the JSON document -(no need to track brackets and commas), but are not authoritative. diff --git a/docs/content/introduction/perf_triggers_design.md b/docs/content/introduction/perf_triggers_design.md deleted file mode 100644 index 445846f4d9..0000000000 --- a/docs/content/introduction/perf_triggers_design.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -bookHidden: true -title: "Performance Triggers Design" ---- - -# Performance Triggers Design - -*Syntax* - trigger_keyword [{tag1} {tag2}AND{tag3} !{tag4} !{tag5}] - -*Inputs* - - trigger_keyword for vpp-* jobs: 'perftest' - - trigger_keyword for csit-* jobs: 'csit-perftest' - - tags: existing CSIT tags [4]_ i.e. ip4base, ip6base, iacldst, memif - -Set of default tags appended to user input, under control by CSIT - - always-on for vpp-csit*.job: 'mrr' 'nic_intel_x710-da2' '1t1c' - - if input with no tags, following set applied: - - 'mrrANDnic_intel-x710AND1t1cAND64bANDip4base' - - 'mrrANDnic_intel-x710AND1t1cAND78bANDip6base' - - 'mrrANDnic_intel-x710AND1t1cAND64bANDl2bdbase' - -Examples - input: 'perftest' - expanded: 'mrrANDnic_intel_x710-da2AND1t1cAND64bANDl2bdbase mrrANDnic_intel_x710-da2AND1t1cAND64bANDip4base mrrANDnic_intel_x710-da2AND1t1cAND78bANDip6base' - input: 'perftest l2bdbase l2xcbase' - expanded: 'mrrANDnic_intel_x710-da2ANDl2bdbase mrrANDnic_intel_x710-da2ANDl2xcbase' - input: 'perftest ip4base !feature' - expanded: 'mrrANDnic_intel_x710-da2ANDip4base' not 'feature' - input: 'perftest ip4base !feature !lbond_dpdk' - expanded: 'mrrANDnic_intel_x710-da2ANDip4base' not 'feature' not 'lbond_dpdk' - input: 'perftestxyx ip4base !feature !lbond_dpdk' - invalid: detected as error - input: 'perftestip4base !feature !lbond_dpdk' - invalid: detected as error - input: 'perftest ip4base!feature!lbond_dpdk' - invalid expand: 'mrrANDnic_intel_x710-da2ANDip4base!feature!lbond_dpdk' - execution of RobotFramework will fail - -Constrains - Trigger keyword must be different for every job to avoid running multiple jobs - at once. 
Trigger keyword must not be substring of job name or any other - message printed by JJB bach to gerrit message which can lead to recursive - execution. diff --git a/docs/content/introduction/test_code_guidelines.md b/docs/content/introduction/test_code_guidelines.md deleted file mode 100644 index 9707d63ea6..0000000000 --- a/docs/content/introduction/test_code_guidelines.md +++ /dev/null @@ -1,294 +0,0 @@ ---- -bookHidden: true -title: "CSIT Test Code Guidelines" ---- - -# CSIT Test Code Guidelines - -The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", -"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", -"MAY", and "OPTIONAL" in this document are to be interpreted as -described in [BCP 14](https://tools.ietf.org/html/bcp14), -[RFC2119](https://tools.ietf.org/html/rfc2119), -[RFC8174](https://tools.ietf.org/html/rfc8174) -when, and only when, they appear in all capitals, as shown here. - -This document SHALL describe guidelines for writing reliable, maintainable, -reusable and readable code for CSIT. - -# RobotFramework test case files and resource files - -+ General - - + Contributors SHOULD look at requirements.txt in root CSIT directory - for the currently used Robot Framework version. - Contributors SHOULD read - [Robot Framework User Guide](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html) - for more details. - - + RobotFramework test case files and resource files - SHALL use special extension .robot - - + Pipe and space separated file format (without trailing pipe - and without pipe aligning) SHALL be used. - Tabs are invisible characters, which are error prone. - 4-spaces separation is prone to accidental double space - acting as a separator. - - + Files SHALL be encoded in UTF-8 (the default Robot source file encoding). - Usage of non-ASCII characters SHOULD be avoided if possible. - It is RECOMMENDED to - [escape](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#escaping) - non-ASCII characters. - - + Line length SHALL be limited to 80 characters. - - + There SHALL be licence text present at the beginning of each file. - - + Copy-pasting of the code NOT RECOMMENDED practice, any code that could be - re-used SHOULD be put into a library (Robot resource, Python library, ...). - -+ Test cases - - + It is RECOMMENDED to use data-driven test case definitions - anytime suite contains test cases similar in structure. - Typically, a suite SHOULD define a Template keyword, and test cases - SHOULD only specify tags and argument values - - *** Settings *** - | Test Template | Local Template - ... - - *** Test Cases *** - | tc01-64B-1c-eth-l2patch-mrr - | | [Tags] | 64B | 1C - | | framesize=${64} | phy_cores=${1} - - + Test case templates (or testcases) SHALL be written in Behavior-driven style - i.e. in readable English, so that even non-technical project stakeholders - can understand it - - *** Keywords *** - | Local Template - | | [Documentation] - | | ... | [Cfg] DUT runs L2 patch config with ${phy_cores} phy core(s). - | | ... | [Ver] Measure NDR and PDR values using MLRsearch algorithm.\ - | | ... - | | ... | *Arguments:* - | | ... | - frame_size - Framesize in Bytes in integer - | | ... | or string (IMIX_v4_1). Type: integer, string - | | ... | - phy_cores - Number of physical cores. Type: integer - | | ... | - rxq - Number of RX queues, default value: ${None}. - | | ... | Type: integer - | | ... - | | [Arguments] | ${frame_size} | ${phy_cores} | ${rxq}=${None} - | | ... 
- | | Set Test Variable | \${frame_size} - | | ... - | | Given Add worker threads and rxqueues to all DUTs - | | ... | ${phy_cores} | ${rxq} - | | And Add PCI devices to all DUTs - | | Set Max Rate And Jumbo And Handle Multi Seg - | | And Apply startup configuration on all VPP DUTs - | | When Initialize L2 patch - | | Then Find NDR and PDR intervals using optimized search - - + Every suite and test case template (or testcase) - SHALL contain short documentation. - Generated CSIT web pages display the documentation. - - + You SHOULD NOT use hard-coded constants. - It is RECOMMENDED to use the variable table - (\*\*\*Variables\*\*\*) to define test case specific values. - You SHALL use the assignment sign = after the variable name - to make assigning variables slightly more explicit - - *** Variables *** - | ${traffic_profile}= | trex-stl-2n-ethip4-ip4src254 - - + Common test case specific settings of the test environment SHALL be done - in Test Setup keyword defined in the Setting table. - - + Run Keywords construction is RECOMMENDED if it is more readable - than a keyword. - - + Separate keyword is RECOMMENDED if the construction is less readable. - - + Post-test cleaning and processing actions SHALL be done in Test Teardown - part of the Setting table (e.g. download statistics from VPP nodes). - This part is executed even if the test case has failed. On the other hand - it is possible to disable the tear-down from command line, thus leaving - the system in “broken” state for investigation. - - + Every testcase SHALL be correctly tagged. List of defined tags is in - csit/docs/introduction/test_tag_documentation.rst - - + Whenever possible, common tags SHALL be set using Force Tags - in Settings table. - - + User high-level keywords specific for the particular test suite - SHOULD be implemented in the Keywords table of suitable Robot resource file - to enable readability and code-reuse. - - + Such keywords MAY be implemented in Keywords table of the suite instead, - if the contributor believes no other test will use such keywords. - But this is NOT RECOMMENDED in general, as keywords in Resources - are easier to maintain. - - + All test case names (and suite names) SHALL conform - to current naming convention. - https://wiki.fd.io/view/CSIT/csit-test-naming - - + Frequently, different suites use the same test case layout. - It is RECOMMENDED to use autogeneration scripts available, - possibly extending them if their current functionality is not sufficient. - -+ Resource files - - + SHALL be used to implement higher-level keywords that are used in test cases - or other higher-level (or medium-level) keywords. - - + Every keyword SHALL contain Documentation where the purpose and arguments - of the keyword are described. Also document types, return values, - and any specific assumptions the particular keyword relies on. - - + A keyword usage example SHALL be the part of the Documentation. - The example SHALL use pipe and space separated format - (with escaped pipes and) with a trailing pipe. - - + The reason was possbile usage of Robot's libdoc tool - to generate tests and resources documentation. In that case - example keyword usage would be rendered in table. - - + Keyword name SHALL describe what the keyword does, - specifically and in a reasonable length (“short sentence”). - - + Keyword names SHALL be short enough for call sites - to fit within line length limit. - - + If a keyword argument has a most commonly used value, it is RECOMMENDED - to set it as default. 
This makes keyword code longer, - but suite code shorter, and readability (and maintainability) - of suites SHALL always more important. - - + If there is intermediate data (created by one keyword, to be used - by another keyword) of singleton semantics (it is clear that the test case - can have at most one instance of such data, even if the instance - is complex, for example ${nodes}), it is RECOMMENDED to store it - in test variables. You SHALL document test variables read or written - by a keyword. This makes the test template code less verbose. - As soon as the data instance is not unique, you SHALL pass it around - via arguments and return values explicitly (this makes lower level keywords - more reusable and less bug prone). - - + It is RECOMMENDED to pass arguments explicitly via [Arguments] line. - Setting test variables takes more space and is less explicit. - Using arguments embedded in keyword name makes them less visible, - and it makes it harder for the line containing the resulting long name - to fit into the maximum character limit, so you SHOULD NOT use them. - -# Python library files - -+ General - - + SHALL be used to implement low-level keywords that are called from - resource files (of higher-level keywords) or from test cases. - - + Higher-level keywords MAY be implemented in python library file too. - it is RECOMMENDED especially in the case that their implementation - in resource file would be too difficult or impossible, - e.g. complex data structures or functional programming. - - + Every keyword, Python module, class, method, enum SHALL contain - docstring with the short description and used input parameters - and possible return value(s) or raised exceptions. - - + The docstrings SHOULD conform to - [PEP 257](https://www.python.org/dev/peps/pep-0257/) - and other quality standards. - - + CSIT contributions SHALL use a specific formatting for documenting - arguments, return values and similar. - - + Keyword usage examples MAY be grouped and used - in the class/module documentation string, to provide better overview - of the usage and relationships between keywords. - - + Keyword name SHALL describe what the keyword does, - specifically and in a reasonable length (“short sentence”). - See https://wiki.fd.io/view/CSIT/csit-test-naming - - + Python implementation of a keyword is a function, - so its name in the python library should be lowercase_with_underscores. - Robot call sites should usename with first letter capitalized, and spaces. - -+ Coding - - + It is RECOMMENDED to use some standard development tool - (e.g. PyCharm Community Edition) and follow - [PEP-8](https://www.python.org/dev/peps/pep-0008/) recommendations. - - + All python code (not only Robot libraries) SHALL adhere to PEP-8 standard. - This is reported by CSIT Jenkins verify job. - - + Indentation: You SHALL NOT use tab for indents! - Indent is defined as four spaces. - - + Line length: SHALL be limited to 80 characters. - - + CSIT Python code assumes PYTHONPATH is set - to the root of cloned CSIT git repository, creating a tree of sub-packages. - You SHALL use that tree for importing, for example - - from resources.libraries.python.ssh import exec_cmd_no_error - - + Imports SHALL be grouped in the following order: - - 1. standard library imports, - 2. related third party imports, - 3. local application/library specific imports. - - You SHALL put a blank line between each group of imports. 
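For illustration of the grouping above (the standard library and third party modules here are examples only; the local import is the one shown earlier in this document and assumes PYTHONPATH set to the CSIT root):

    # 1. Standard library imports.
    import json
    from time import sleep

    # 2. Related third party imports.
    from robot.api import logger

    # 3. Local application/library specific imports.
    from resources.libraries.python.ssh import exec_cmd_no_error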
- - + You SHALL use two blank lines between top-level definitions, - one blank line between method definitions. - - + You SHALL NOT execute any active code on library import. - - + You SHALL NOT use global variables inside library files. - - + You MAY define constants inside library files. - - + It is NOT RECOMMENDED to use hard-coded constants (e.g. numbers, - paths without any description). It is RECOMMENDED to use - configuration file(s), like /csit/resources/libraries/python/Constants.py, - with appropriate comments. - - + The code SHALL log at the lowest possible level of implementation, - for debugging purposes. You SHALL use same style for similar events. - You SHALL keep logging as verbose as necessary. - - + You SHALL use the most appropriate exception not general one (Exception) - if possible. You SHOULD create your own exception - if necessary and implement there logging, level debug. - - + You MAY use RuntimeException for generally unexpected failures. - - + It is RECOMMENDED to use RuntimeError also for - infrastructure failures, e.g. losing SSH connection to SUT. - - + You MAY use EnvironmentError and its cublasses instead, - if the distinction is informative for callers. - - + It is RECOMMENDED to use AssertionError when SUT is at fault. - - + For each class (e.g. exception) it is RECOMMENDED to implement __repr__() - which SHALL return a string usable as a constructor call - (including repr()ed arguments). - When logging, you SHOULD log the repr form, unless the internal structure - of the object in question would likely result in too long output. - This is helpful for debugging. - - + For composing and formatting strings, you SHOULD use .format() - with named arguments. - Example: "repr() of name: {name!r}".format(name=name) diff --git a/docs/content/introduction/testing_in_vagrant.md b/docs/content/introduction/testing_in_vagrant.md deleted file mode 100644 index 34ca596d0a..0000000000 --- a/docs/content/introduction/testing_in_vagrant.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -bookHidden: true -title: "Running CSIT locally in Vagrant" ---- - -# Running CSIT locally in Vagrant - -## Install prerequisites - -Run all commands from command line. - -1. Download and install virtualbox from - [official page](https://www.virtualbox.org/wiki/Downloads). - To verify the installation, run VBoxManage - - - on windows - - "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" --version - - - on nix - - VBoxManage --version - Tested version: 6.1.16r140961 - -2. Download and install latest vagrant from - [official page](https://www.vagrantup.com/downloads.html). - To verify the installtion, run - - vagrant -v - Tested version: Vagrant 2.2.15 - -3. Install vagrant plugins:: - - vagrant plugin install vagrant-vbguest - vagrant plugin install vagrant-cachier - - If you are behind a proxy, install proxyconf plugin and update proxy - settings in Vagrantfile:: - - vagrant plugin install vagrant-proxyconf - -## Set up and run Vagrant virtualbox - -Before running following commands change working directory to Vagrant specific directory -(from within root CSIT directory) - - cd csit.infra.vagrant - -This allows Vagrant to automatically find Vagrantfile and corresponding Vagrant environment. - -Start the provisioning - - vagrant up --provider virtualbox - -Your new VPP Device virtualbox machine will be created and configured. -Master branch of csit project will be cloned inside virtual machine into -/home/vagrant/csit folder. 
- -Once the process is finished, you can login to the box using - - vagrant ssh - -In case you need to completely rebuild the box and start from scratch, -run these commands - - vagrant destroy -f - vagrant up --provider virtualbox - -## Run tests - -From within the box run the tests using - - cd /home/vagrant/csit/resources/libraries/bash/entry - ./bootstrap_vpp_device.sh csit-vpp-device-master-ubuntu2004-1n-vbox - -To run only selected tests based on TAGS, export environment variables before -running the test suite - - export GERRIT_EVENT_TYPE="comment-added" - export GERRIT_EVENT_COMMENT_TEXT="devicetest memif" - - # now it will run tests, selected based on tags - ./bootstrap_vpp_device.sh csit-vpp-device-master-ubuntu2004-1n-vbox - - diff --git a/docs/content/methodology/_index.md b/docs/content/methodology/_index.md index 6f0dcae783..dbef64db94 100644 --- a/docs/content/methodology/_index.md +++ b/docs/content/methodology/_index.md @@ -1,6 +1,6 @@ --- -bookCollapseSection: true +bookCollapseSection: false bookFlatSection: true title: "Methodology" weight: 2 ----
\ No newline at end of file +--- diff --git a/docs/content/methodology/hoststack_testing/_index.md b/docs/content/methodology/measurements/_index.md index b658313040..9e9232969e 100644 --- a/docs/content/methodology/hoststack_testing/_index.md +++ b/docs/content/methodology/measurements/_index.md @@ -1,6 +1,6 @@ --- bookCollapseSection: true bookFlatSection: false -title: "Hoststack Testing" -weight: 14 ----
\ No newline at end of file +title: "Measurements" +weight: 2 +--- diff --git a/docs/content/methodology/data_plane_throughput/_index.md b/docs/content/methodology/measurements/data_plane_throughput/_index.md index 5791438b3b..8fc7f66f3e 100644 --- a/docs/content/methodology/data_plane_throughput/_index.md +++ b/docs/content/methodology/measurements/data_plane_throughput/_index.md @@ -2,5 +2,5 @@ bookCollapseSection: true bookFlatSection: false title: "Data Plane Throughput" -weight: 4 +weight: 1 ---
\ No newline at end of file diff --git a/docs/content/methodology/data_plane_throughput/data_plane_throughput.md b/docs/content/methodology/measurements/data_plane_throughput/data_plane_throughput.md index 7ff1d38d17..865405ba2f 100644 --- a/docs/content/methodology/data_plane_throughput/data_plane_throughput.md +++ b/docs/content/methodology/measurements/data_plane_throughput/data_plane_throughput.md @@ -1,5 +1,5 @@ --- -title: "Data Plane Throughput" +title: "Overview" weight: 1 --- @@ -12,8 +12,8 @@ set of performance test cases implemented and executed within CSIT. Following throughput test methods are used: - MLRsearch - Multiple Loss Ratio search -- MRR - Maximum Receive Rate - PLRsearch - Probabilistic Loss Ratio search +- MRR - Maximum Receive Rate Description of each test method is followed by generic test properties shared by all methods. @@ -34,7 +34,7 @@ RFC2544. MLRsearch tests are run to discover NDR and PDR rates for each VPP and DPDK release covered by CSIT report. Results for small frame sizes -(64b/78B, IMIX) are presented in packet throughput graphs +(64B/78B, IMIX) are presented in packet throughput graphs (Box-and-Whisker Plots) with NDR and PDR rates plotted against the test cases covering popular VPP packet paths. @@ -46,10 +46,36 @@ tables. ### Details -See [MLRSearch]({{< ref "mlrsearch/#MLRsearch" >}}) section for more detail. +See [MLRSearch]({{< ref "mlr_search/#MLRsearch" >}}) section for more detail. MLRsearch is being standardized in IETF in [draft-ietf-bmwg-mlrsearch](https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-mlrsearch-01). +## PLRsearch Tests + +### Description + +Probabilistic Loss Ratio search (PLRsearch) tests discovers a packet +throughput rate associated with configured Packet Loss Ratio (PLR) +criteria for tests run over an extended period of time a.k.a. soak +testing. PLRsearch assumes that system under test is probabilistic in +nature, and not deterministic. + +### Usage + +PLRsearch are run to discover a sustained throughput for PLR=10^-7^ +(close to NDR) for VPP release covered by CSIT report. Results for small +frame sizes (64B/78B) are presented in packet throughput graphs (Box +Plots) for a small subset of baseline tests. + +Each soak test lasts 30 minutes and is executed at least twice. Results are +compared against NDR and PDR rates discovered with MLRsearch. + +### Details + +See [PLRSearch]({{< ref "plr_search/#PLRsearch" >}}) methodology section for +more detail. PLRsearch is being standardized in IETF in +[draft-vpolak-bmwg-plrsearch](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch). + ## MRR Tests ### Description @@ -74,42 +100,16 @@ progressions) resulting from data plane code changes. MRR tests are also used for VPP per patch performance jobs verifying patch performance vs parent. CSIT reports include MRR throughput comparisons between releases and test environments. Small frame sizes -only (64b/78B, IMIX). +only (64B/78B, IMIX). ### Details -See [MRR Throughput]({{< ref "mrr_throughput/#MRR Throughput" >}}) +See [MRR Throughput]({{< ref "mrr/#MRR" >}}) section for more detail about MRR tests configuration. FD.io CSIT performance dashboard includes complete description of -[daily performance trending tests](https://s3-docs.fd.io/csit/master/trending/methodology/performance_tests.html) -and [VPP per patch tests](https://s3-docs.fd.io/csit/master/trending/methodology/perpatch_performance_tests.html). 
- -## PLRsearch Tests - -### Description - -Probabilistic Loss Ratio search (PLRsearch) tests discovers a packet -throughput rate associated with configured Packet Loss Ratio (PLR) -criteria for tests run over an extended period of time a.k.a. soak -testing. PLRsearch assumes that system under test is probabilistic in -nature, and not deterministic. - -### Usage - -PLRsearch are run to discover a sustained throughput for PLR=10^-7 -(close to NDR) for VPP release covered by CSIT report. Results for small -frame sizes (64b/78B) are presented in packet throughput graphs (Box -Plots) for a small subset of baseline tests. - -Each soak test lasts 30 minutes and is executed at least twice. Results are -compared against NDR and PDR rates discovered with MLRsearch. - -### Details - -See [PLRSearch]({{< ref "plrsearch/#PLRsearch" >}}) methodology section for -more detail. PLRsearch is being standardized in IETF in -[draft-vpolak-bmwg-plrsearch](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch). +[daily performance trending tests]({{< ref "../../trending/analysis" >}}) +and [VPP per patch tests]({{< ref "../../per_patch_testing.md" >}}). ## Generic Test Properties @@ -126,4 +126,4 @@ properties: - Offered packet load is always bi-directional and symmetric. - All measured and reported packet and bandwidth rates are aggregate bi-directional rates reported from external Traffic Generator - perspective.
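To put the PLR=10^-7 criterion used by the PLRsearch soak tests above into perspective, a quick back-of-the-envelope computation (the 10 Mpps offered load is an illustrative value only, not a CSIT result):

    offered_load_pps = 10_000_000          # illustrative sustained load only
    duration_s = 30 * 60                   # one soak test lasts 30 minutes
    packets_tx = offered_load_pps * duration_s
    max_lost = packets_tx * 1e-7           # target Packet Loss Ratio
    print(packets_tx, round(max_lost))     # 18000000000 1800

In other words, at such a load no more than roughly 1800 packets may be lost over the whole 30 minute trial for the rate to satisfy the criterion.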
\ No newline at end of file + perspective. diff --git a/docs/content/methodology/data_plane_throughput/mlrsearch.md b/docs/content/methodology/measurements/data_plane_throughput/mlr_search.md index 73039c9b02..93bdb51efe 100644 --- a/docs/content/methodology/data_plane_throughput/mlrsearch.md +++ b/docs/content/methodology/measurements/data_plane_throughput/mlr_search.md @@ -1,9 +1,9 @@ --- -title: "MLRsearch" +title: "MLR Search" weight: 2 --- -# MLRsearch +# MLR Search ## Overview @@ -23,10 +23,10 @@ conducted at the specified final trial duration. This results in the shorter overall execution time when compared to standard NDR/PDR binary search, while guaranteeing similar results. -.. Note:: All throughput rates are *always* bi-directional - aggregates of two equal (symmetric) uni-directional packet rates - received and reported by an external traffic generator, - unless the test specifically requires unidirectional traffic. + Note: All throughput rates are *always* bi-directional aggregates of two + equal (symmetric) uni-directional packet rates received and reported by an + external traffic generator, unless the test specifically requires + unidirectional traffic. ## Search Implementation diff --git a/docs/content/methodology/data_plane_throughput/mrr_throughput.md b/docs/content/methodology/measurements/data_plane_throughput/mrr.md index 076946fb66..e8c3e62eb6 100644 --- a/docs/content/methodology/data_plane_throughput/mrr_throughput.md +++ b/docs/content/methodology/measurements/data_plane_throughput/mrr.md @@ -1,9 +1,9 @@ --- -title: "MRR Throughput" +title: "MRR" weight: 4 --- -# MRR Throughput +# MRR Maximum Receive Rate (MRR) tests are complementary to MLRsearch tests, as they provide a maximum "raw" throughput benchmark for development and @@ -29,7 +29,7 @@ capacity, as follows: XXV710. - For 40GE NICs the maximum packet rate load is 2x18.75 Mpps for 64B, a 40GE bi-directional link sub-rate limited by 40GE NIC used on TRex - TG,XL710. Packet rate for other tested frame sizes is limited by + TG, XL710. Packet rate for other tested frame sizes is limited by PCIeGen3 x8 bandwidth limitation of ~50Gbps. MRR test code implements multiple bursts of offered packet load and has @@ -53,4 +53,4 @@ Burst parameter settings vary between different tests using MRR: - Daily performance trending: 10. - Per-patch performance verification: 5. - Initial iteration for MLRsearch: 1. - - Initial iteration for PLRsearch: 1.
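The per-frame-size maximum rates quoted earlier in this section follow directly from the Ethernet Layer-1 overhead (8 B preamble plus 12 B inter-frame gap per frame). A generic calculation, shown here for a 10GE link and not tied to any particular NIC limit:

    link_rate_bps = 10e9               # 10GE link, as an example
    frame_size = 64                    # bytes on the wire
    l1_overhead = 8 + 12               # preamble + inter-frame gap, bytes
    pps = link_rate_bps / ((frame_size + l1_overhead) * 8)
    print(f"{pps / 1e6:.2f} Mpps")     # 14.88 Mpps per direction

The same formula gives a higher theoretical 64B line rate for 25GE and 40GE links, which is why the maximum loads quoted above are capped by NIC and PCIe limits rather than by the link itself.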
\ No newline at end of file + - Initial iteration for PLRsearch: 1. diff --git a/docs/content/methodology/data_plane_throughput/plrsearch.md b/docs/content/methodology/measurements/data_plane_throughput/plr_search.md index 1facccc63b..529bac1f7f 100644 --- a/docs/content/methodology/data_plane_throughput/plrsearch.md +++ b/docs/content/methodology/measurements/data_plane_throughput/plr_search.md @@ -1,17 +1,17 @@ --- -title: "PLRsearch" +title: "PLR Search" weight: 3 --- -# PLRsearch +# PLR Search ## Motivation for PLRsearch Network providers are interested in throughput a system can sustain. -`RFC 2544`[^3] assumes loss ratio is given by a deterministic function of +`RFC 2544`[^1] assumes loss ratio is given by a deterministic function of offered load. But NFV software systems are not deterministic enough. -This makes deterministic algorithms (such as `binary search`[^9] per RFC 2544 +This makes deterministic algorithms (such as `binary search`[^2] per RFC 2544 and MLRsearch with single trial) to return results, which when repeated show relatively high standard deviation, thus making it harder to tell what "the throughput" actually is. @@ -21,8 +21,9 @@ We need another algorithm, which takes this indeterminism into account. ## Generic Algorithm Detailed description of the PLRsearch algorithm is included in the IETF -draft `draft-vpolak-bmwg-plrsearch-02`[^1] that is in the process -of being standardized in the IETF Benchmarking Methodology Working Group (BMWG). +draft `Probabilistic Loss Ratio Search for Packet Throughput`[^3] that is in the +process of being standardized in the IETF Benchmarking Methodology Working Group +(BMWG). ### Terms @@ -372,12 +373,11 @@ or is it better to have short periods of medium losses mixed with long periods of zero losses (as happens in Vhost test) with the same overall loss ratio? -[^1]: [draft-vpolak-bmwg-plrsearch-02](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-02) -[^2]: [plrsearch draft](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-00) -[^3]: [RFC 2544](https://tools.ietf.org/html/rfc2544) +[^1]: [RFC 2544: Benchmarking Methodology for Network Interconnect Devices](https://tools.ietf.org/html/rfc2544) +[^2]: [Binary search](https://en.wikipedia.org/wiki/Binary_search_algorithm) +[^3]: [Probabilistic Loss Ratio Search for Packet Throughput](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-02) [^4]: [Lomax distribution](https://en.wikipedia.org/wiki/Lomax_distribution) -[^5]: [reciprocal distribution](https://en.wikipedia.org/wiki/Reciprocal_distribution) +[^5]: [Reciprocal distribution](https://en.wikipedia.org/wiki/Reciprocal_distribution) [^6]: [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_integration) -[^7]: [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling) -[^8]: [bivariate Gaussian](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) -[^9]: [binary search](https://en.wikipedia.org/wiki/Binary_search_algorithm)
\ No newline at end of file +[^7]: [Importance sampling](https://en.wikipedia.org/wiki/Importance_sampling) +[^8]: [Bivariate Gaussian](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) diff --git a/docs/content/methodology/packet_latency.md b/docs/content/methodology/measurements/packet_latency.md index fd7c0e00e8..f3606b5ffb 100644 --- a/docs/content/methodology/packet_latency.md +++ b/docs/content/methodology/measurements/packet_latency.md @@ -1,6 +1,6 @@ --- title: "Packet Latency" -weight: 8 +weight: 2 --- # Packet Latency @@ -16,6 +16,7 @@ Following methodology is used: - Only NDRPDR test type measures latency and only after NDR and PDR values are determined. Other test types do not involve latency streams. + - Latency is measured at different background load packet rates: - No-Load: latency streams only. @@ -25,21 +26,27 @@ Following methodology is used: - Latency is measured for all tested packet sizes except IMIX due to TRex TG restriction. + - TG sends dedicated latency streams, one per direction, each at the rate of 9 kpps at the prescribed packet size; these are sent in addition to the main load streams. + - TG reports Min/Avg/Max and HDRH latency values distribution per stream direction, hence two sets of latency values are reported per test case (marked as E-W and W-E). + - +/- 1 usec is the measurement accuracy of TRex TG and the data in HDRH latency values distribution is rounded to microseconds. + - TRex TG introduces a (background) always-on Tx + Rx latency bias of 4 usec on average per direction resulting from TRex software writing and reading packet timestamps on CPU cores. Quoted values are based on TG back-to-back latency measurements. + - Latency graphs are not smoothed, each latency value has its own horizontal line across corresponding packet percentiles. + - Percentiles are shown on X-axis using a logarithmic scale, so the maximal latency value (ending at 100% percentile) would be in infinity. The graphs are cut at 99.9999% (hover information still - lists 100%).
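For scale, the dedicated 9 kpps latency streams described above are a negligible fraction of the offered load; assuming, purely for illustration, a 10 Mpps background load per direction:

    latency_stream_pps = 9_000          # dedicated latency stream, per direction
    background_pps = 10_000_000         # illustrative background load only
    share = latency_stream_pps / (background_pps + latency_stream_pps)
    print(f"{share:.4%}")               # about 0.09 % of the offered traffic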
\ No newline at end of file + lists 100%). diff --git a/docs/content/methodology/telemetry.md b/docs/content/methodology/measurements/telemetry.md index e7a2571573..aed32d9e17 100644 --- a/docs/content/methodology/telemetry.md +++ b/docs/content/methodology/measurements/telemetry.md @@ -1,6 +1,6 @@ --- title: "Telemetry" -weight: 20 +weight: 3 --- # Telemetry @@ -50,14 +50,13 @@ them. ## MRR measurement - traffic_start(r=mrr) traffic_stop |< measure >| - | | | (r=mrr) | - | pre_run_stat post_run_stat | pre_stat | | post_stat - | | | | | | | | - --o--------o---------------o---------o-------o--------+-------------------+------o------------> - t - - Legend: + traffic_start(r=mrr) traffic_stop |< measure >| + | | | (r=mrr) | + | pre_run_stat post_run_stat | pre_stat | | post_stat + | | | | | | | | + o--------o---------------o-------o------o------+---------------+------o------> + t + Legend: - pre_run_stat - vpp-clear-runtime - post_run_stat @@ -72,27 +71,25 @@ them. - vpp-show-packettrace // if extended_debug == True - vpp-show-elog - - |< measure >| - | (r=mrr) | - | | - |< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >| - | (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) | - | | | | - --o------------------------o------------------------o------------------------o---> + |< measure >| + | (r=mrr) | + | | + |< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >| + | (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) | + | | | | + o-----------------------o------------------------o------------------------o---> t ## MLR measurement - |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >| - | (r=mlr) | | | | | | .9/.5/.1/.0 | - | | | pre_run_stat post_run_stat | | pre_run_stat post_run_stat | | | - | | | | | | | | | | | | - --+-------------------+----o--------o---------------o---------o--------------o--------o---------------o---------o------------[---------------------]---> - t - - Legend: + |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >| + | (r=mlr) | | | | | | .9/.5/.1/.0 | + | | | pre_run_stat post_run_stat | | pre_run_stat post_run_stat | | | + | | | | | | | | | | | | + +-------------+---o-------o---------------o--------o-------------o-------o---------------o--------o------------[-------------------]---> + t + Legend: - pre_run_stat - vpp-clear-runtime - post_run_stat @@ -107,17 +104,15 @@ them. - vpp-show-packettrace // if extended_debug == True - vpp-show-elog - ## MRR measurement - traffic_start(r=mrr) traffic_stop |< measure >| - | | | (r=mrr) | - | |< stat_runtime >| | stat_pre_trial | | stat_post_trial - | | | | | | | | - ----o---+--------------------------+---o-------------o------------+-------------------+-----o-------------> - t - - Legend: + traffic_start(r=mrr) traffic_stop |< measure >| + | | | (r=mrr) | + | |< stat_runtime >| | stat_pre_trial | | stat_post_trial + | | | | | | | | + o---+------------------+---o------o------------+-------------+----o------------> + t + Legend: - stat_runtime - vpp-runtime - stat_pre_trial @@ -127,36 +122,32 @@ them. 
- vpp-show-stats - vpp-show-packettrace // if extended_debug == True - |< measure >| | (r=mrr) | | | |< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >| | (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) | | | | | - --o------------------------o------------------------o------------------------o---> - t - + o------------------------o------------------------o------------------------o---> + t |< stat_runtime >| | | |< program0 >|< program1 >|< programN >| | (@=params) | (@=params) | (@=params) | | | | | - --o------------------------o------------------------o------------------------o---> - t - + o------------------------o------------------------o------------------------o---> + t ## MLR measurement - |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >| - | (r=mlr) | | | | | | .9/.5/.1/.0 | - | | | |< stat_runtime >| | | |< stat_runtime >| | | | - | | | | | | | | | | | | - --+-------------------+-----o---+--------------------------+---o--------------o---+--------------------------+---o-----------[---------------------]---> - t - - Legend: + |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >| + | (r=mlr) | | | | | | .9/.5/.1/.0 | + | | | |< stat_runtime >| | | |< stat_runtime >| | | | + | | | | | | | | | | | | + +-------------+---o---+------------------+---o--------------o---+------------------+---o-----------[-----------------]---> + t + Legend: - stat_runtime - vpp-runtime - stat_pre_trial diff --git a/docs/content/methodology/root_cause_analysis/_index.md b/docs/content/methodology/overview/_index.md index 79cfe73769..10f362013f 100644 --- a/docs/content/methodology/root_cause_analysis/_index.md +++ b/docs/content/methodology/overview/_index.md @@ -1,6 +1,6 @@ --- bookCollapseSection: true bookFlatSection: false -title: "Root Cause Analysis" -weight: 20 ----
\ No newline at end of file +title: "Overview" +weight: 1 +--- diff --git a/docs/content/methodology/dut_state_considerations.md b/docs/content/methodology/overview/dut_state_considerations.md index 55e408f5f2..eca10a22cd 100644 --- a/docs/content/methodology/dut_state_considerations.md +++ b/docs/content/methodology/overview/dut_state_considerations.md @@ -1,9 +1,9 @@ --- -title: "DUT state considerations" -weight: 6 +title: "DUT State Considerations" +weight: 5 --- -# DUT state considerations +# DUT State Considerations This page discusses considerations for Device Under Test (DUT) state. DUTs such as VPP require configuration, to be provided before the aplication diff --git a/docs/content/methodology/multi_core_speedup.md b/docs/content/methodology/overview/multi_core_speedup.md index c0c9ae2570..f438e8e996 100644 --- a/docs/content/methodology/multi_core_speedup.md +++ b/docs/content/methodology/overview/multi_core_speedup.md @@ -1,6 +1,6 @@ --- title: "Multi-Core Speedup" -weight: 13 +weight: 3 --- # Multi-Core Speedup @@ -25,12 +25,12 @@ Cascadelake and Xeon Icelake testbeds. Multi-core tests are executed in the following VPP worker thread and physical core configurations: -#. Intel Xeon Icelake and Cascadelake testbeds (2n-icx, 3n-icx, 2n-clx) +1. Intel Xeon Icelake and Cascadelake testbeds (2n-icx, 3n-icx, 2n-clx) with Intel HT enabled (2 logical CPU cores per each physical core): - #. 2t1c - 2 VPP worker threads on 1 physical core. - #. 4t2c - 4 VPP worker threads on 2 physical cores. - #. 8t4c - 8 VPP worker threads on 4 physical cores. + 1. 2t1c - 2 VPP worker threads on 1 physical core. + 2. 4t2c - 4 VPP worker threads on 2 physical cores. + 3. 8t4c - 8 VPP worker threads on 4 physical cores. VPP worker threads are the data plane threads running on isolated logical cores. With Intel HT enabled VPP workers are placed as sibling @@ -48,4 +48,4 @@ the same amount of packet flows. If number of VPP workers is higher than number of physical or virtual interfaces, multiple receive queues are configured on each interface. NIC Receive Side Scaling (RSS) for physical interfaces and multi-queue -for virtual interfaces are used for this purpose.
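A throwaway sketch (not CSIT code) of how the 2t1c/4t2c/8t4c configurations listed earlier in this section map to physical core counts, assuming 2 logical cores per physical core when Intel HT is enabled:

    def worker_config(phy_cores, smt_enabled=True):
        """Return the NtMc style configuration name for a physical core count."""
        threads = phy_cores * (2 if smt_enabled else 1)
        return f"{threads}t{phy_cores}c"

    for cores in (1, 2, 4):
        print(worker_config(cores))     # 2t1c, 4t2c, 8t4c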
\ No newline at end of file +for virtual interfaces are used for this purpose. diff --git a/docs/content/methodology/per_thread_resources.md b/docs/content/methodology/overview/per_thread_resources.md index cd862fa824..c23efb50bd 100644 --- a/docs/content/methodology/per_thread_resources.md +++ b/docs/content/methodology/overview/per_thread_resources.md @@ -5,8 +5,7 @@ weight: 2 # Per Thread Resources -CSIT test framework is managing mapping of the following resources per -thread: +CSIT test framework is managing mapping of the following resources per thread: 1. Cores, physical cores (pcores) allocated as pairs of sibling logical cores (lcores) if server in HyperThreading/SMT mode, or as single lcores @@ -30,7 +29,7 @@ tested (VPP or DPDK apps) and associated thread types, as follows: configurations, where{T} stands for a total number of threads (lcores), and {C} for a total number of pcores. Tested configurations are encoded in CSIT test case names, - e.g. "1c", "2c", "4c", and test tags "2T1C"(or "1T1C"), "4T2C" + e.g. "1c", "2c", "4c", and test tags "2T1C" (or "1T1C"), "4T2C" (or "2T2C"), "8T4C" (or "4T4C"). - Interface Receive Queues (RxQ): as of CSIT-2106 release, number of RxQs used on each physical or virtual interface is equal to the @@ -58,7 +57,7 @@ tested (VPP or DPDK apps) and associated thread types, as follows: total number of lcores and pcores used for feature workers. Accordingly, tested configurations are encoded in CSIT test case names, e.g. "1c-1c", "1c-2c", "1c-3c", and test tags "2T1C_2T1C" - (or "1T1C_1T1C"), "2T1C_4T2C"(or "1T1C_2T2C"), "2T1C_6T3C" + (or "1T1C_1T1C"), "2T1C_4T2C" (or "1T1C_2T2C"), "2T1C_6T3C" (or "1T1C_3T3C"). - RxQ and TxQ: no RxQs and no TxQs are used by feature workers. - Applies to VPP only. @@ -67,8 +66,8 @@ tested (VPP or DPDK apps) and associated thread types, as follows: - Cores: single lcore. - RxQ: not used (VPP default behaviour). - - TxQ: single TxQ per interface, allocated but not used - (VPP default behaviour). + - TxQ: single TxQ per interface, allocated but not used (VPP default + behaviour). - Applies to VPP only. ## VPP Thread Configuration diff --git a/docs/content/methodology/terminology.md b/docs/content/methodology/overview/terminology.md index 229db7d145..c9115e9291 100644 --- a/docs/content/methodology/terminology.md +++ b/docs/content/methodology/overview/terminology.md @@ -8,13 +8,17 @@ weight: 1 - **Frame size**: size of an Ethernet Layer-2 frame on the wire, including any VLAN tags (dot1q, dot1ad) and Ethernet FCS, but excluding Ethernet preamble and inter-frame gap. Measured in Bytes. + - **Packet size**: same as frame size, both terms used interchangeably. + - **Inner L2 size**: for tunneled L2 frames only, size of an encapsulated Ethernet Layer-2 frame, preceded with tunnel header, and followed by tunnel trailer. Measured in Bytes. + - **Inner IP size**: for tunneled IP packets only, size of an encapsulated IPv4 or IPv6 packet, preceded with tunnel header, and followed by tunnel trailer. Measured in Bytes. + - **Device Under Test (DUT)**: In software networking, "device" denotes a specific piece of software tasked with packet processing. Such device is surrounded with other software components (such as operating system @@ -26,25 +30,30 @@ weight: 1 SUT instead of RFC2544 DUT. Device under test (DUT) can be re-introduced when analyzing test results using whitebox techniques, but this document sticks to blackbox testing. 
+ - **System Under Test (SUT)**: System under test (SUT) is a part of the whole test setup whose performance is to be benchmarked. The complete methodology contains other parts, whose performance is either already established, or not affecting the benchmarking result. + - **Bi-directional throughput tests**: involve packets/frames flowing in both east-west and west-east directions over every tested interface of SUT/DUT. Packet flow metrics are measured per direction, and can be reported as aggregate for both directions (i.e. throughput) and/or separately for each measured direction (i.e. latency). In most cases bi-directional tests use the same (symmetric) load in both directions. + - **Uni-directional throughput tests**: involve packets/frames flowing in only one direction, i.e. either east-west or west-east direction, over every tested interface of SUT/DUT. Packet flow metrics are measured and are reported for measured direction. + - **Packet Loss Ratio (PLR)**: ratio of packets received relative to packets transmitted over the test trial duration, calculated using formula: PLR = ( pkts_transmitted - pkts_received ) / pkts_transmitted. For bi-directional throughput tests aggregate PLR is calculated based on the aggregate number of packets transmitted and received. + - **Packet Throughput Rate**: maximum packet offered load DUT/SUT forwards within the specified Packet Loss Ratio (PLR). In many cases the rate depends on the frame size processed by DUT/SUT. Hence packet @@ -53,30 +62,36 @@ weight: 1 throughput rate should be reported as aggregate for both directions. Measured in packets-per-second (pps) or frames-per-second (fps), equivalent metrics. + - **Bandwidth Throughput Rate**: a secondary metric calculated from packet throughput rate using formula: bw_rate = pkt_rate * (frame_size + L1_overhead) * 8, where L1_overhead for Ethernet includes preamble (8 Bytes) and inter-frame gap (12 Bytes). For bi-directional tests, bandwidth throughput rate should be reported as aggregate for both directions. Expressed in bits-per-second (bps). + - **Non Drop Rate (NDR)**: maximum packet/bandwith throughput rate sustained by DUT/SUT at PLR equal zero (zero packet loss) specific to tested frame size(s). MUST be quoted with specific packet size as received by DUT/SUT during the measurement. Packet NDR measured in packets-per-second (or fps), bandwidth NDR expressed in bits-per-second (bps). + - **Partial Drop Rate (PDR)**: maximum packet/bandwith throughput rate sustained by DUT/SUT at PLR greater than zero (non-zero packet loss) specific to tested frame size(s). MUST be quoted with specific packet size as received by DUT/SUT during the measurement. Packet PDR measured in packets-per-second (or fps), bandwidth PDR expressed in bits-per-second (bps). + - **Maximum Receive Rate (MRR)**: packet/bandwidth rate regardless of PLR sustained by DUT/SUT under specified Maximum Transmit Rate (MTR) packet load offered by traffic generator. MUST be quoted with both specific packet size and MTR as received by DUT/SUT during the measurement. Packet MRR measured in packets-per-second (or fps), bandwidth MRR expressed in bits-per-second (bps). + - **Trial**: a single measurement step. + - **Trial duration**: amount of time over which packets are transmitted and received in a single measurement step. 
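As a worked example of the PLR and bandwidth throughput rate formulas defined above (all numbers are illustrative, not measured CSIT results):

    pkts_transmitted = 1_000_000_000
    pkts_received = 999_999_990
    plr = (pkts_transmitted - pkts_received) / pkts_transmitted
    print(plr)                              # 1e-08

    pkt_rate = 14_880_000                   # packets per second, 64B frames
    frame_size = 64                         # bytes
    l1_overhead = 8 + 12                    # preamble + inter-frame gap, bytes
    bw_rate = pkt_rate * (frame_size + l1_overhead) * 8
    print(bw_rate / 1e9, "Gbps")            # 9.99936 Gbps, i.e. ~10GE line rate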
diff --git a/docs/content/methodology/vpp_forwarding_modes.md b/docs/content/methodology/overview/vpp_forwarding_modes.md index 1cc199c607..b3c3bba984 100644 --- a/docs/content/methodology/vpp_forwarding_modes.md +++ b/docs/content/methodology/overview/vpp_forwarding_modes.md @@ -1,13 +1,13 @@ --- title: "VPP Forwarding Modes" -weight: 3 +weight: 4 --- # VPP Forwarding Modes -VPP is tested in a number of L2, IPv4 and IPv6 packet lookup and -forwarding modes. Within each mode baseline and scale tests are -executed, the latter with varying number of FIB entries. +VPP is tested in a number of L2, IPv4 and IPv6 packet lookup and forwarding +modes. Within each mode baseline and scale tests are executed, the latter with +varying number of FIB entries. ## L2 Ethernet Switching diff --git a/docs/content/methodology/root_cause_analysis/perpatch_performance_tests.md b/docs/content/methodology/per_patch_testing.md index 900ea0b874..a64a52caf6 100644 --- a/docs/content/methodology/root_cause_analysis/perpatch_performance_tests.md +++ b/docs/content/methodology/per_patch_testing.md @@ -1,9 +1,9 @@ --- -title: "Per-patch performance tests" -weight: 1 +title: "Per-patch Testing" +weight: 5 --- -# Per-patch performance tests +# Per-patch Testing Updated for CSIT git commit id: 72b45cfe662107c8e1bb549df71ba51352a898ee. @@ -101,15 +101,15 @@ where the variables are all lower case (so AND operator stands out). Currently only one test type is supported by the performance comparison jobs: "mrr". -The nic_driver options depend on nic_model. For Intel cards "drv_avf" (AVF plugin) -and "drv_vfio_pci" (DPDK plugin) are popular, for Mellanox "drv_rdma_core". -Currently, the performance using "drv_af_xdp" is not reliable enough, so do not use it -unless you are specifically testing for AF_XDP. +The nic_driver options depend on nic_model. For Intel cards "drv_avf" +(AVF plugin) and "drv_vfio_pci" (DPDK plugin) are popular, for Mellanox +"drv_rdma_core". Currently, the performance using "drv_af_xdp" is not reliable +enough, so do not use it unless you are specifically testing for AF_XDP. The most popular nic_model is "nic_intel-xxv710", but that is not available on all testbed types. -It is safe to use "1c" for cores (unless you are suspection multi-core performance -is affected differently) and "64b" for frame size ("78b" for ip6 +It is safe to use "1c" for cores (unless you are suspection multi-core +performance is affected differently) and "64b" for frame size ("78b" for ip6 and more for dot1q and other encapsulated traffic; "1518b" is popular for ipsec and other payload-bound tests). @@ -121,9 +121,11 @@ for example ### Shortening triggers -Advanced users may use the following tricks to avoid writing long trigger comments. +Advanced users may use the following tricks to avoid writing long trigger +comments. -Robot supports glob matching, which can be used to select multiple suite tags at once. +Robot supports glob matching, which can be used to select multiple suite tags at +once. Not specifying one of 6 parts of the recommended expression pattern will select all available options. For example not specifying nic_driver diff --git a/docs/content/methodology/trending_methodology/_index.md b/docs/content/methodology/test/_index.md index 551d950cc7..857cc7b168 100644 --- a/docs/content/methodology/trending_methodology/_index.md +++ b/docs/content/methodology/test/_index.md @@ -1,6 +1,6 @@ --- bookCollapseSection: true bookFlatSection: false -title: "Trending Methodology" -weight: 22 ----
\ No newline at end of file +title: "Test" +weight: 3 +--- diff --git a/docs/content/methodology/access_control_lists.md b/docs/content/methodology/test/access_control_lists.md index 9767d3f86a..354e6b72bb 100644 --- a/docs/content/methodology/access_control_lists.md +++ b/docs/content/methodology/test/access_control_lists.md @@ -1,6 +1,6 @@ --- title: "Access Control Lists" -weight: 12 +weight: 5 --- # Access Control Lists @@ -40,10 +40,8 @@ ACL tests are executed with the following combinations of ACL entries and number of flows: - ACL entry definitions - - flow non-matching deny entry: (src-ip4, dst-ip4, src-port, dst-port). - flow matching permit ACL entry: (src-ip4, dst-ip4). - - {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50]. - {F} - number of UDP flows with different tuple (src-ip4, dst-ip4, src-port, dst-port), {F} = [100, 10k, 100k]. @@ -60,10 +58,8 @@ MAC-IP ACL tests are executed with the following combinations of ACL entries and number of flows: - ACL entry definitions - - flow non-matching deny entry: (dst-ip4, dst-mac, bit-mask) - flow matching permit ACL entry: (dst-ip4, dst-mac, bit-mask) - - {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50] - {F} - number of UDP flows with different tuple (dst-ip4, dst-mac), {F} = [100, 10k, 100k] diff --git a/docs/content/methodology/generic_segmentation_offload.md b/docs/content/methodology/test/generic_segmentation_offload.md index ddb19ba826..0032d203de 100644 --- a/docs/content/methodology/generic_segmentation_offload.md +++ b/docs/content/methodology/test/generic_segmentation_offload.md @@ -1,6 +1,6 @@ --- title: "Generic Segmentation Offload" -weight: 15 +weight: 7 --- # Generic Segmentation Offload @@ -22,12 +22,9 @@ performance comparison the same tests are run without GSO enabled. Two VPP GSO test topologies are implemented: 1. iPerfC_GSOvirtio_LinuxVM --- GSOvhost_VPP_GSOvhost --- iPerfS_GSOvirtio_LinuxVM - - Tests VPP GSO on vhostuser interfaces and interaction with Linux virtio with GSO enabled. - 2. iPerfC_GSOtap_LinuxNspace --- GSOtapv2_VPP_GSOtapv2 --- iPerfS_GSOtap_LinuxNspace - - Tests VPP GSO on tapv2 interfaces and interaction with Linux tap with GSO enabled. @@ -60,9 +57,10 @@ separate namespace. Following core pinning scheme is used: iPerf3 version used 3.7 $ sudo -E -S ip netns exec tap1_namespace iperf3 \ - --server --daemon --pidfile /tmp/iperf3_server.pid --logfile /tmp/iperf3.log --port 5201 --affinity <X> + --server --daemon --pidfile /tmp/iperf3_server.pid \ + --logfile /tmp/iperf3.log --port 5201 --affinity <X> -For the full iPerf3 reference please see: +For the full iPerf3 reference please see [iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst). @@ -71,9 +69,10 @@ For the full iPerf3 reference please see: iPerf3 version used 3.7 $ sudo -E -S ip netns exec tap1_namespace iperf3 \ - --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> --time 30.0 --affinity <X> --zerocopy + --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \ + --time 30.0 --affinity <X> --zerocopy -For the full iPerf3 reference please see: +For the full iPerf3 reference please see [iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst). 
@@ -99,9 +98,10 @@ scheme is used: iPerf3 version used 3.7 $ sudo iperf3 \ - --server --daemon --pidfile /tmp/iperf3_server.pid --logfile /tmp/iperf3.log --port 5201 --affinity X + --server --daemon --pidfile /tmp/iperf3_server.pid \ + --logfile /tmp/iperf3.log --port 5201 --affinity X -For the full iPerf3 reference please see: +For the full iPerf3 reference please see [iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst). @@ -110,7 +110,8 @@ For the full iPerf3 reference please see: iPerf3 version used 3.7 $ sudo iperf3 \ - --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> --time 30.0 --affinity X --zerocopy + --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \ + --time 30.0 --affinity X --zerocopy -For the full iPerf3 reference please see: -[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
\ No newline at end of file +For the full iPerf3 reference please see +[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst). diff --git a/docs/content/methodology/test/hoststack/_index.md b/docs/content/methodology/test/hoststack/_index.md new file mode 100644 index 0000000000..2ae872c54e --- /dev/null +++ b/docs/content/methodology/test/hoststack/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "Hoststack" +weight: 6 +--- diff --git a/docs/content/methodology/hoststack_testing/quicudpip_with_vppecho.md b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md index c7d57a51b3..c7d57a51b3 100644 --- a/docs/content/methodology/hoststack_testing/quicudpip_with_vppecho.md +++ b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md diff --git a/docs/content/methodology/hoststack_testing/tcpip_with_iperf3.md b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md index 7baa88ab50..7baa88ab50 100644 --- a/docs/content/methodology/hoststack_testing/tcpip_with_iperf3.md +++ b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md diff --git a/docs/content/methodology/hoststack_testing/udpip_with_iperf3.md b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md index 01ddf61269..01ddf61269 100644 --- a/docs/content/methodology/hoststack_testing/udpip_with_iperf3.md +++ b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md diff --git a/docs/content/methodology/hoststack_testing/vsap_ab_with_nginx.md b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md index 2dc4d2b7f9..2dc4d2b7f9 100644 --- a/docs/content/methodology/hoststack_testing/vsap_ab_with_nginx.md +++ b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md diff --git a/docs/content/methodology/internet_protocol_security_ipsec.md b/docs/content/methodology/test/internet_protocol_security.md index 711004f2c0..1a02c43a0a 100644 --- a/docs/content/methodology/internet_protocol_security_ipsec.md +++ b/docs/content/methodology/test/internet_protocol_security.md @@ -1,17 +1,16 @@ --- -title: "Internet Protocol Security (IPsec)" -weight: 11 +title: "Internet Protocol Security" +weight: 4 --- -# Internet Protocol Security (IPsec) +# Internet Protocol Security -VPP IPsec performance tests are executed for the following crypto -plugins: +VPP Internet Protocol Security (IPsec) performance tests are executed for the +following crypto plugins: - `crypto_native`, used for software based crypto leveraging CPU platform optimizations e.g. Intel's AES-NI instruction set. -- `crypto_ipsecmb`, used for hardware based crypto with Intel QAT PCIe - cards. +- `crypto_ipsecmb`, used for hardware based crypto with Intel QAT PCIe cards. 
## IPsec with VPP Native SW Crypto diff --git a/docs/content/methodology/network_address_translation.md b/docs/content/methodology/test/network_address_translation.md index ef341dc892..f443eabc5f 100644 --- a/docs/content/methodology/network_address_translation.md +++ b/docs/content/methodology/test/network_address_translation.md @@ -1,6 +1,6 @@ --- title: "Network Address Translation" -weight: 7 +weight: 1 --- # Network Address Translation diff --git a/docs/content/methodology/packet_flow_ordering.md b/docs/content/methodology/test/packet_flow_ordering.md index d2b3bfb90c..c2c87038d4 100644 --- a/docs/content/methodology/packet_flow_ordering.md +++ b/docs/content/methodology/test/packet_flow_ordering.md @@ -1,6 +1,6 @@ --- title: "Packet Flow Ordering" -weight: 9 +weight: 2 --- # Packet Flow Ordering diff --git a/docs/content/methodology/reconfiguration_tests.md b/docs/content/methodology/test/reconfiguration.md index 837535526d..6dec4d918b 100644 --- a/docs/content/methodology/reconfiguration_tests.md +++ b/docs/content/methodology/test/reconfiguration.md @@ -1,9 +1,9 @@ --- -title: "Reconfiguration Tests" -weight: 16 +title: "Reconfiguration" +weight: 8 --- -# Reconfiguration Tests +# Reconfiguration ## Overview diff --git a/docs/content/methodology/geneve.md b/docs/content/methodology/test/tunnel_encapsulations.md index f4a0af92e7..c047c43dfa 100644 --- a/docs/content/methodology/geneve.md +++ b/docs/content/methodology/test/tunnel_encapsulations.md @@ -1,11 +1,48 @@ --- -title: "GENEVE" -weight: 21 +title: "Tunnel Encapsulations" +weight: 3 --- -# GENEVE +# Tunnel Encapsulations -## GENEVE Prefix Bindings +Tunnel encapsulations testing is grouped based on the type of outer +header: IPv4 or IPv6. + +## IPv4 Tunnels + +VPP is tested in the following IPv4 tunnel baseline configurations: + +- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain MAC + switching. +- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect. +- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing. +- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing. +- *ip4gtpusw-ip4base*: GTPU over IPv4 tunnels with IPv4 routing. + +In all cases listed above low number of MAC, IPv4, IPv6 flows (253 or 254 per +direction) is switched or routed by VPP. + +In addition selected IPv4 tunnels are tested at scale: + +- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2 bridge- + domain MAC switching, with scaled up dot1q VLANs (10, 100, 1k), + mapped to scaled up L2 bridge-domains (10, 100, 1k), that are in turn + mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are transmitted per + direction. + +## IPv6 Tunnels + +VPP is tested in the following IPv6 tunnel baseline configurations: + +- *ip6lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing. +- *ip6lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing. + +In all cases listed above low number of IPv4, IPv6 flows (253 or 254 per +direction) is routed by VPP. + +## GENEVE + +### GENEVE Prefix Bindings GENEVE prefix bindings should be representative to target applications, where a packet flows of particular set of IPv4 addresses (L3 underlay network) is @@ -14,46 +51,30 @@ routed via dedicated GENEVE interface by building an L2 overlay. 
Private address ranges to be used in tests: - East hosts ip address range: 10.0.1.0 - 10.127.255.255 (10.0/9 prefix) - - Total of 2^23 - 256 (8 388 352) of usable IPv4 addresses - Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks) - - West hosts ip address range: 10.128.1.0 - 10.255.255.255 (10.128/9 prefix) - - Total of 2^23 - 256 (8 388 352) of usable IPv4 addresses - Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks) -## GENEVE Tunnel Scale +### GENEVE Tunnel Scale If N is a number of GENEVE tunnels (and IPv4 underlay networks) then TG sends 256 packet flows in every of N different sets: - i = 1,2,3, ... N - GENEVE tunnel index - - East-West direction: GENEVE encapsulated packets - - Outer IP header: - - src ip: 1.1.1.1 - - dst ip: 1.1.1.2 - - GENEVE header: - - vni: i - - Inner IP header: - - src_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i/255)).(0-to-255) - - dst_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i/255)).(0-to-255) - - West-East direction: non-encapsulated packets - - IP header: - - src_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i/255)).(0-to-255) - - dst_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i/255)).(0-to-255) **geneve-tunnels** | **total-flows** @@ -63,4 +84,4 @@ If N is a number of GENEVE tunnels (and IPv4 underlay networks) then TG sends 16 | 4 096 64 | 16 384 256 | 65 536 - 1 024 | 262 144
\ No newline at end of file + 1 024 | 262 144 diff --git a/docs/content/methodology/vpp_device_functional.md b/docs/content/methodology/test/vpp_device.md index 2bad5973b6..0a5ee90308 100644 --- a/docs/content/methodology/vpp_device_functional.md +++ b/docs/content/methodology/test/vpp_device.md @@ -1,9 +1,9 @@ --- -title: "VPP_Device Functional" -weight: 18 +title: "VPP Device" +weight: 9 --- -# VPP_Device Functional +# VPP Device Includes VPP_Device test environment for functional VPP device tests integrated into LFN CI/CD infrastructure. VPP_Device tests diff --git a/docs/content/methodology/trending_methodology/overview.md b/docs/content/methodology/trending/_index.md index 90d8a2507c..4289e7ff96 100644 --- a/docs/content/methodology/trending_methodology/overview.md +++ b/docs/content/methodology/trending/_index.md @@ -1,9 +1,11 @@ --- -title: "Overview" -weight: 1 +bookCollapseSection: true +bookFlatSection: false +title: "Trending" +weight: 4 --- -# Overview +# Trending This document describes a high-level design of a system for continuous performance measuring, trending and change detection for FD.io VPP SW diff --git a/docs/content/methodology/trending_methodology/trend_analysis.md b/docs/content/methodology/trending/analysis.md index 7f1870f577..fe952259ab 100644 --- a/docs/content/methodology/trending_methodology/trend_analysis.md +++ b/docs/content/methodology/trending/analysis.md @@ -1,6 +1,6 @@ --- -title: "Trending Analysis" -weight: 2 +title: "Analysis" +weight: 1 --- # Trend Analysis @@ -220,5 +220,5 @@ It is good to exclude last week from the trend maximum, as including the last week would hide all real progressions. [^1]: [Minimum Description Length](https://en.wikipedia.org/wiki/Minimum_description_length) -[^2]: [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) -[^3]: [bimodal distribution](https://en.wikipedia.org/wiki/Bimodal_distribution) +[^2]: [Occam's Razor](https://en.wikipedia.org/wiki/Occam%27s_razor) +[^3]: [Bimodal Distribution](https://en.wikipedia.org/wiki/Bimodal_distribution) diff --git a/docs/content/methodology/trending_methodology/trend_presentation.md b/docs/content/methodology/trending/presentation.md index 4c58589a0b..84925b46c8 100644 --- a/docs/content/methodology/trending_methodology/trend_presentation.md +++ b/docs/content/methodology/trending/presentation.md @@ -1,6 +1,6 @@ --- -title: "Trending Presentation" -weight: 3 +title: "Presentation" +weight: 2 --- # Trend Presentation @@ -25,10 +25,8 @@ The graphs are constructed as follows: - Y-axis represents run-average MRR value, NDR or PDR values in Mpps. For PDR tests also a graph with average latency at 50% PDR [us] is generated. - Markers to indicate anomaly classification: - - Regression - red circle. - Progression - green circle. - - The line shows average MRR value of each group. In addition the graphs show dynamic labels while hovering over graph data diff --git a/docs/content/methodology/tunnel_encapsulations.md b/docs/content/methodology/tunnel_encapsulations.md deleted file mode 100644 index 52505b7efb..0000000000 --- a/docs/content/methodology/tunnel_encapsulations.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "Tunnel Encapsulations" -weight: 10 ---- - -# Tunnel Encapsulations - -Tunnel encapsulations testing is grouped based on the type of outer -header: IPv4 or IPv6. - -## IPv4 Tunnels - -VPP is tested in the following IPv4 tunnel baseline configurations: - -- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain MAC - switching. 
-- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect. -- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing. -- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing. -- *ip4gtpusw-ip4base*: GTPU over IPv4 tunnels with IPv4 routing. - -In all cases listed above low number of MAC, IPv4, IPv6 flows (253 or 254 per -direction) is switched or routed by VPP. - -In addition selected IPv4 tunnels are tested at scale: - -- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2 bridge- - domain MAC switching, with scaled up dot1q VLANs (10, 100, 1k), - mapped to scaled up L2 bridge-domains (10, 100, 1k), that are in turn - mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are transmitted per - direction. - -## IPv6 Tunnels - -VPP is tested in the following IPv6 tunnel baseline configurations: - -- *ip6lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing. -- *ip6lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing. - -In all cases listed above low number of IPv4, IPv6 flows (253 or 254 per -direction) is routed by VPP. diff --git a/docs/content/overview/_index.md b/docs/content/overview/_index.md new file mode 100644 index 0000000000..97fb5dec78 --- /dev/null +++ b/docs/content/overview/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: false +bookFlatSection: true +title: "Overview" +weight: 1 +--- diff --git a/docs/content/overview/c_dash/_index.md b/docs/content/overview/c_dash/_index.md new file mode 100644 index 0000000000..97b351006f --- /dev/null +++ b/docs/content/overview/c_dash/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "C-Dash" +weight: 1 +--- diff --git a/docs/content/overview/c_dash/design.md b/docs/content/overview/c_dash/design.md new file mode 100644 index 0000000000..ef8c62ab88 --- /dev/null +++ b/docs/content/overview/c_dash/design.md @@ -0,0 +1,6 @@ +--- +title: "Design" +weight: 1 +--- + +# Design diff --git a/docs/content/overview/c_dash/releases.md b/docs/content/overview/c_dash/releases.md new file mode 100644 index 0000000000..1e51c2978a --- /dev/null +++ b/docs/content/overview/c_dash/releases.md @@ -0,0 +1,8 @@ +--- +title: "Releases" +weight: 3 +--- + +# Releases + +## C-Dash v1 diff --git a/docs/content/overview/c_dash/structure.md b/docs/content/overview/c_dash/structure.md new file mode 100644 index 0000000000..ba427f1ee3 --- /dev/null +++ b/docs/content/overview/c_dash/structure.md @@ -0,0 +1,20 @@ +--- +title: "Structure" +weight: 2 +--- + +# Structure + +## Performance Trending + +## Per Release Performance + +## Per Release Performance Comparisons + +## Per Release Coverage Data + +## Test Job Statistics + +## Failures and Anomalies + +## Documentation diff --git a/docs/content/overview/csit/_index.md b/docs/content/overview/csit/_index.md new file mode 100644 index 0000000000..959348d2ae --- /dev/null +++ b/docs/content/overview/csit/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "CSIT" +weight: 2 +--- diff --git a/docs/content/introduction/design.md b/docs/content/overview/csit/design.md index ba31477c4d..53b764f5bb 100644 --- a/docs/content/introduction/design.md +++ b/docs/content/overview/csit/design.md @@ -1,6 +1,6 @@ --- title: "Design" -weight: 3 +weight: 1 --- # Design @@ -145,4 +145,4 @@ examples are provided, both good and bad. ## Coding Guidelines Coding guidelines can be found on -[Design optimizations wiki page](https://wiki.fd.io/view/CSIT/Design_Optimizations).
\ No newline at end of file +[Design optimizations wiki page](https://wiki.fd.io/view/CSIT/Design_Optimizations). diff --git a/docs/content/methodology/suite_generation.md b/docs/content/overview/csit/suite_generation.md index 4fa9dee0ce..84a19b8ab9 100644 --- a/docs/content/methodology/suite_generation.md +++ b/docs/content/overview/csit/suite_generation.md @@ -1,14 +1,13 @@ --- title: "Suite Generation" -weight: 19 +weight: 5 --- # Suite Generation -CSIT uses robot suite files to define tests. -However, not all suite files available for Jenkins jobs -(or manually started bootstrap scripts) are present in CSIT git repository. -They are generated only when needed. +CSIT uses robot suite files to define tests. However, not all suite files +available for Jenkins jobs (or manually started bootstrap scripts) are present +in CSIT git repository. They are generated only when needed. ## Autogen Library @@ -35,7 +34,7 @@ the same content, it is one of checks that autogen works correctly. Not all suites present in CSIT git repository act as template for autogen. The distinction is on per-directory level. Directories with -regenerate_testcases.py script usually consider all suites as templates +`regenerate_testcases.py` script usually consider all suites as templates (unless possibly not included by the glob patten in the script). The script also specifies minimal frame size, indirectly, by specifying protocol @@ -43,25 +42,25 @@ The script also specifies minimal frame size, indirectly, by specifying protocol ### Constants -Values in Constants.py are taken into consideration when generating suites. +Values in `Constants.py` are taken into consideration when generating suites. The values are mostly related to different NIC models and NIC drivers. ### Python Code -Python code in resources/libraries/python/autogen contains several other +Python code in `resources/libraries/python/autogen` contains several other information sources. #### Testcase Templates The test case part of template suite is ignored, test case lines -are created according to text templates in Testcase.py file. +are created according to text templates in `Testcase.py` file. #### Testcase Argument Lists Each testcase template has different number of "arguments", e.g. values to put into various placeholders. Different test types need different -lists of the argument values, the lists are in regenerate_glob method -in Regenerator.py file. +lists of the argument values, the lists are in `regenerate_glob` method +in `Regenerator.py` file. #### Iteration Over Values @@ -103,9 +102,9 @@ only in testbeds/NICs with TG-TG line available. #### Other Topology Info -Some bonding tests need two (parallel) links between DUTs. -Autogen does not care, as suites are agnostic. -Robot tag marks the difference, but the link presence is not explicitly checked. +Some bonding tests need two (parallel) links between DUTs. Autogen does not +care, as suites are agnostic. Robot tag marks the difference, but the link +presence is not explicitly checked. ### Job specs diff --git a/docs/content/introduction/test_naming.md b/docs/content/overview/csit/test_naming.md index 22e2c0bf8a..d7a32518e5 100644 --- a/docs/content/introduction/test_naming.md +++ b/docs/content/overview/csit/test_naming.md @@ -1,6 +1,6 @@ --- title: "Test Naming" -weight: 4 +weight: 3 --- # Test Naming @@ -53,9 +53,9 @@ topologies: 1. **Physical port to physical port - a.k.a. 
NIC-to-NIC, Phy-to-Phy, P2P** * *PortNICConfig-WireEncapsulation-PacketForwardingFunction- PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType* - * *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on Intel - x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching with - MAC learning, NDR throughput discovery. + * *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on + Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching + with MAC learning, NDR throughput discovery. * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline switching with MAC learning, NDR throughput discovery. @@ -63,24 +63,27 @@ topologies: NIC, IPv4 baseline routed forwarding, NDR throughput discovery. * *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput discovery. - * *10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot* => 2 ports of 10GE on - Intel x520 NIC, IPv4 baseline routed forwarding, ingress Access Control + * *10ge2p1x520-ethip4-ip4base-iacldstbase-ndrdisc.robot* => 2 ports of 10GE + on Intel x520 NIC, IPv4 baseline routed forwarding, ingress Access Control Lists baseline matching on destination, NDR throughput discovery. * *40ge2p1vic1385-ethip4-ip4base-ndrdisc.robot* => 2 ports of 40GE on Cisco vic1385 NIC, IPv4 baseline routed forwarding, NDR throughput discovery. * *eth2p-ethip4-ip4base-func.robot* => 2 ports of Ethernet, IPv4 baseline routed forwarding, functional tests. + 2. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC, P2V2P, NIC2VMchain2NIC, P2V2V2P** * *PortNICConfig-WireEncapsulation-PacketForwardingFunction- PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation- VirtPortConfig-VMconfig-TestType* * *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports - of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain switching - to/from two vhost interfaces and one VM, NDR throughput discovery. + of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain + switching to/from two vhost interfaces and one VM, NDR throughput + discovery. * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain - switching to/from two vhost interfaces and one VM, NDR throughput discovery. + switching to/from two vhost interfaces and one VM, NDR throughput + discovery. * *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2 ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain switching to/from four vhost interfaces and two VMs, NDR throughput @@ -88,6 +91,7 @@ topologies: * *eth2p-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-func.robot* => 2 ports of Ethernet, IPv4 VXLAN Ethernet, L2 bridge-domain switching to/from two vhost interfaces and one VM, functional tests. + 3. **API CRUD tests - Create (Write), Read (Retrieve), Update (Modify), Delete (Destroy) operations for configuration and operational data** * *ManagementTestKeyword-ManagementOperation-ManagedFunction1-...- @@ -101,7 +105,8 @@ topologies: * *mgmt-cfg-int-tap-apihcnc-func* => configuration of tap interfaces with Honeycomb NetConf API calls, functional tests. * *mgmt-notif-int-subint-apihcnc-func* => notifications of interface and - sub-interface events with Honeycomb NetConf Notifications, functional tests. 
+ sub-interface events with Honeycomb NetConf Notifications, functional + tests. For complete description of CSIT test naming convention please refer to -[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming>). +[CSIT test naming wiki page](https://wiki.fd.io/view/CSIT/csit-test-naming). diff --git a/docs/content/introduction/test_scenarios_overview.md b/docs/content/overview/csit/test_scenarios.md index 415ee3403f..1f06765eae 100644 --- a/docs/content/introduction/test_scenarios_overview.md +++ b/docs/content/overview/csit/test_scenarios.md @@ -1,9 +1,9 @@ --- -title: "Test Scenarios Overview" +title: "Test Scenarios" weight: 2 --- -# Test Scenarios Overview +# Test Scenarios FD.io CSIT Dashboard includes multiple test scenarios of VPP centric applications, topologies and use cases. In addition it also @@ -17,6 +17,7 @@ Brief overview of test scenarios covered in this documentation: FD.io testbeds, focusing on VPP network data plane performance in NIC-to-NIC switching topologies. VPP application runs in bare-metal host user-mode handling NICs. TRex is used as a traffic generator. + 2. **VPP Vhostuser Performance with KVM VMs**: VPP VM service switching performance tests using vhostuser virtual interface for interconnecting multiple NF-in-VM instances. VPP vswitch @@ -24,6 +25,7 @@ Brief overview of test scenarios covered in this documentation: over vhost-user interfaces to VM instances each running VPP with virtio virtual interfaces. Similarly to VPP Performance, tests are run across a range of configurations. TRex is used as a traffic generator. + 3. **VPP Memif Performance with LXC and Docker Containers**: VPP Container service switching performance tests using memif virtual interface for interconnecting multiple VPP-in-container instances. @@ -33,6 +35,7 @@ Brief overview of test scenarios covered in this documentation: interfaces (Master side). Similarly to VPP Performance, tests are run across a range of configurations. TRex is used as a traffic generator. + 4. **DPDK Performance**: VPP uses DPDK to drive the NICs and physical interfaces. DPDK performance tests are used as a baseline to profile performance of the DPDK sub-system. Two DPDK applications @@ -40,9 +43,11 @@ Brief overview of test scenarios covered in this documentation: testing environment as VPP tests. DPDK Testpmd and L3fwd applications run in host user-mode. TRex is used as a traffic generator. + 5. **T-Rex Performance**: T-Rex perfomance tests are executed in physical FD.io testbeds, focusing on T-Rex data plane performance in NIC-to-NIC loopback topologies. + 6. **VPP Functional**: VPP functional tests are executed in virtual FD.io testbeds, focusing on VPP packet processing functionality, including both network data plane and in-line control plane. Tests diff --git a/docs/content/introduction/test_tag_description.md b/docs/content/overview/csit/test_tags.md index 630afa864e..8fc3021d6f 100644 --- a/docs/content/introduction/test_tag_description.md +++ b/docs/content/overview/csit/test_tags.md @@ -1,9 +1,9 @@ --- -title: "Test Tags Descriptions" -weight: 5 +title: "Test Tags" +weight: 4 --- -# Test Tags Descriptions +# Test Tags All CSIT test cases are labelled with Robot Framework tags used to allow for easy test case type identification, test case grouping and selection for @@ -145,7 +145,7 @@ descriptions. Test with 10 VXLAN tunnels. -**VXLAN_100* +**VXLAN_100** Test with 100 VXLAN tunnels. @@ -190,7 +190,7 @@ descriptions. 
## Test Category Tags -**DEVICETEST* +**DEVICETEST** All vpp_device functional test cases. @@ -204,7 +204,7 @@ descriptions. All test cases that uses Scapy for packet generation and validation. -## erformance Type Tags +## Performance Type Tags **NDRPDR** @@ -379,7 +379,7 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value. Service density {n}VM{t}T, {n}Number of NF Qemu VMs, {t}Number of threads per NF. -**{n}DCRt}T** +**{n}DCR{t}T** Service density {n}DCR{t}T, {n}Number of NF Docker containers, {t}Number of threads per NF. @@ -604,7 +604,7 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value. All test cases which uses VM with optimised scheduler policy. -**TUNTAP* +**TUNTAP** All test cases which uses TUN and TAP. diff --git a/docs/content/release_notes/_index.md b/docs/content/release_notes/_index.md index c08254e068..3a8318d09f 100644 --- a/docs/content/release_notes/_index.md +++ b/docs/content/release_notes/_index.md @@ -1,5 +1,6 @@ --- +bookCollapseSection: false bookFlatSection: true -title: "Release notes" -weight: 2 ----
\ No newline at end of file +title: "Release Notes" +weight: 3 +--- diff --git a/docs/content/release_notes/csit_rls2306/_index.md b/docs/content/release_notes/csit_rls2306/_index.md new file mode 100644 index 0000000000..27abbb79a6 --- /dev/null +++ b/docs/content/release_notes/csit_rls2306/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "CSIT rls2306" +weight: 1 +--- diff --git a/docs/content/release_notes/csit_rls2306/dpdk_performance.md b/docs/content/release_notes/csit_rls2306/dpdk_performance.md new file mode 100644 index 0000000000..3d3172c7c9 --- /dev/null +++ b/docs/content/release_notes/csit_rls2306/dpdk_performance.md @@ -0,0 +1,27 @@ +--- +title: "DPDK Performance" +weight: 2 +--- + +# CSIT 23.06 - DPDK Performance + +1. TEST FRAMEWORK +2. DPDK PERFORMANCE TESTS +3. DPDK RELEASE VERSION CHANGE + +# Known Issues + +List of known issues in CSIT 23.06 for DPDK performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + + +## New + +List of new issues in CSIT 23.06 for DPDK performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/csit_rls2306/trex_performance.md b/docs/content/release_notes/csit_rls2306/trex_performance.md new file mode 100644 index 0000000000..02f7c68102 --- /dev/null +++ b/docs/content/release_notes/csit_rls2306/trex_performance.md @@ -0,0 +1,24 @@ +--- +title: "TRex Performance" +weight: 3 +--- + +# CSIT 23.06 - TRex Performance + +1. TEST FRAMEWORK + +# Known Issues + +List of known issues in CSIT 23.06 for TRex performance tests + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + +## New + +List of new issues in CSIT 23.06 for TRex performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/csit_rls2306/vpp_device.md b/docs/content/release_notes/csit_rls2306/vpp_device.md new file mode 100644 index 0000000000..c5d544b598 --- /dev/null +++ b/docs/content/release_notes/csit_rls2306/vpp_device.md @@ -0,0 +1,24 @@ +--- +title: "VPP Device" +weight: 4 +--- + +# CSIT 23.06 - VPP Device + +1. TEST FRAMEWORK + +# Known Issues + +List of known issues in CSIT 23.06 for VPP functional tests in VPP Device: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + +## New + +List of new issues in CSIT 23.06 for VPP functional tests in VPP Device: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/csit_rls2306/vpp_performance.md b/docs/content/release_notes/csit_rls2306/vpp_performance.md new file mode 100644 index 0000000000..686420fc0f --- /dev/null +++ b/docs/content/release_notes/csit_rls2306/vpp_performance.md @@ -0,0 +1,42 @@ +--- +title: "VPP Performance" +weight: 1 +--- + +# CSIT 23.06 - VPP Performance + +1. VPP PERFORMANCE TESTS +2. 
TEST FRAMEWORK +3. PRESENTATION AND ANALYTICS LAYER + +# Known Issues + +## New + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + +## Previous + +Issues reported in previous releases which still affect the current results. + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + +## Fixed + +Issues reported in previous releases which were fixed in this release: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- +1 | | + +# Root Cause Analysis for Performance Changes + +List of RCAs in CSIT 23.06 for VPP performance changes: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/dpdk.md b/docs/content/release_notes/dpdk.md deleted file mode 100644 index facefe4b23..0000000000 --- a/docs/content/release_notes/dpdk.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: "DPDK Performance" -weight: 2 ---- - -# Changes in {{< release_csit >}} - -1. TEST FRAMEWORK - - **CSIT test environment** version has been updated to ver. 11, see - [Environment Versioning]({{< ref "infrastructure#Release Notes" >}}). -2. DPDK PERFORMANCE TESTS - - No updates -3. DPDK RELEASE VERSION CHANGE - - {{< release_csit >}} tested {{< release_dpdk >}}, as used by - {{< release_vpp >}}. - -# Known Issues - -List of known issues in {{< release_csit >}} for DPDK performance tests: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|--------------------------------------------------------------------------- - 1 | [CSIT-1848](https://jira.fd.io/browse/CSIT-1848) | 2n-clx, 3n-alt: sporadic testpmd/l3fwd tests fail with no or low traffic. - - -## New - -List of new issues in {{< release_csit >}} for DPDK performance tests: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|---------------------------------------------------------------------------
\ No newline at end of file diff --git a/docs/content/release_notes/previous/_index.md b/docs/content/release_notes/previous/_index.md new file mode 100644 index 0000000000..40716f8315 --- /dev/null +++ b/docs/content/release_notes/previous/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: true +bookFlatSection: false +title: "Previous" +weight: 2 +--- diff --git a/docs/content/release_notes/previous/csit_rls2302/_index.md b/docs/content/release_notes/previous/csit_rls2302/_index.md new file mode 100644 index 0000000000..aac03a946d --- /dev/null +++ b/docs/content/release_notes/previous/csit_rls2302/_index.md @@ -0,0 +1,6 @@ +--- +bookCollapseSection: false +bookFlatSection: false +title: "CSIT rls2302" +weight: 1 +--- diff --git a/docs/content/release_notes/previous/csit_rls2302/dpdk_performance.md b/docs/content/release_notes/previous/csit_rls2302/dpdk_performance.md new file mode 100644 index 0000000000..320dccf746 --- /dev/null +++ b/docs/content/release_notes/previous/csit_rls2302/dpdk_performance.md @@ -0,0 +1,31 @@ +--- +title: "DPDK Performance" +weight: 2 +--- + +# CSIT 23.02 - DPDK Performance + +1. TEST FRAMEWORK + - **CSIT test environment** version has been updated to ver. 11, see + [Environment Versioning]({{< ref "../../../infrastructure/fdio_csit_testbed_versioning" >}}). +2. DPDK PERFORMANCE TESTS + - No updates +3. DPDK RELEASE VERSION CHANGE + - CSIT 23.02 tested DPDK 22.07, as used by VPP 23.02. + +# Known Issues + +List of known issues in CSIT 23.02 for DPDK performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | [CSIT-1848](https://jira.fd.io/browse/CSIT-1848) | 2n-clx, 3n-alt: sporadic testpmd/l3fwd tests fail with no or low traffic. + + +## New + +List of new issues in {{< release_csit >}} for DPDK performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/previous/csit_rls2302/trex_performance.md b/docs/content/release_notes/previous/csit_rls2302/trex_performance.md new file mode 100644 index 0000000000..67f2947891 --- /dev/null +++ b/docs/content/release_notes/previous/csit_rls2302/trex_performance.md @@ -0,0 +1,26 @@ +--- +title: "TRex Performance" +weight: 3 +--- + +# CSIT 23.02 - TRex Performance + +1. TEST FRAMEWORK + - **CSIT test environment** version has been updated to ver. 11, see + [Environment Versioning]({{< ref "../../../infrastructure/fdio_csit_testbed_versioning" >}}). + +# Known Issues + +List of known issues in CSIT 23.02 for TRex performance tests + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | [CSIT-1876](https://jira.fd.io/browse/CSIT-1876) | 1n-aws: TRex NDR PDR ALL IP4 scale and L2 scale tests failing with 50% packet loss. CSIT removed ip4scale and l2scale except ip4scale2m where it's still failing. 
+ +## New + +List of new issues in CSIT 23.02 for TRex performance tests: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/previous/csit_rls2302/vpp_device.md b/docs/content/release_notes/previous/csit_rls2302/vpp_device.md new file mode 100644 index 0000000000..44ba9f5ce5 --- /dev/null +++ b/docs/content/release_notes/previous/csit_rls2302/vpp_device.md @@ -0,0 +1,26 @@ +--- +title: "VPP Device" +weight: 4 +--- + +# CSIT 23.02 - VPP Device + +1. TEST FRAMEWORK + - **CSIT test environment** version has been updated to ver. 11, see + [Environment Versioning]({{< ref "../../../infrastructure/fdio_csit_testbed_versioning" >}}). + +# Known Issues + +List of known issues in CSIT 23.02 for VPP functional tests in VPP Device: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | + +## New + +List of new issues in CSIT 23.02 for VPP functional tests in VPP Device: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | | diff --git a/docs/content/release_notes/previous/csit_rls2302/vpp_performance.md b/docs/content/release_notes/previous/csit_rls2302/vpp_performance.md new file mode 100644 index 0000000000..072c55f14e --- /dev/null +++ b/docs/content/release_notes/previous/csit_rls2302/vpp_performance.md @@ -0,0 +1,93 @@ +--- +title: "VPP Performance" +weight: 1 +--- + +# CSIT 23.02 - VPP Performance + +1. VPP PERFORMANCE TESTS + - **Enhanced and added VPP hoststack tests** to daily and weekly + trending including: Quic VPP Echo, UDP+TCP LD_PRELOAD iPerf3, + LD_PRELOAD NGINX. + - **Added Nvidia/Mellanox DPDK tests** to daily and weekly trending + and report, in addition to RDMA_CORE ones that were already + there. + - **Jumbo frames tests** got fixed and re-added number of to report + coverage tests. + - **Intel Xeon SKX performance testbeds** got decommissioned and + removed from FD.io performance lab. +2. TEST FRAMEWORK + - **CSIT test environment** version has not changed from ver. 11 used + in previous release, see + [Environment Versioning]({{< ref "../../../infrastructure/fdio_csit_testbed_versioning" >}}). + - **CSIT PAPI optimizations for scale** got applied improving PAPI + programming speed especially for large scale tests. VAT has been + now completely deprecated from CSIT. + - **General Code Housekeeping**: Ongoing code optimizations and bug + fixes. +3. PRESENTATION AND ANALYTICS LAYER + - [Performance dashboard](https://csit.fd.io/) got updated with + addition of VPP telemetry trending across all VPP tests. A number + of code and AWS resource usage optimizations got applied to the + data processing pipeline and UI frontend and backend. 
+ - Examples of release iterative data visualisation: + - [Packet throughput 2n-icx-e810cq-ip4-base-scale-pdr](https://csit.fd.io/report/#eNrdVcluwjAQ_Zr0ggbZDml64QDkP5BxhhJlwYxNVPr1OAhpYiGO7cEHb3pv1qeRnT8T7h1266zYZuU2U2VThy3LN4twUOdULhSM1oLKl-FG2KF2CGqAxvyAFOIblZX4JYW5gB6P0NgVfK4OIA2gP02vsA6Tja1pcq12T9cvcRitr57RED1CRiQGo7SYZk-3GeddsszXhJoNQsYMeXSzZOKamHUk3aNrfpGpoQuMm9BohqSJ_fubnaHPRpXVg_F3qjijO1RCtEBDnZo8UXFJ6NQmKlGbgjp9ujPU_8cEFdXHcKb-8Q8V1R2PI8PX) + - [Speedup Multi-Core throughput graph for 2n-icx-e810cq-ip4-base-pdr](https://csit.fd.io/report/#eNrtlM8OgjAMxp8GL6aGFRAvHlTew8xRhAR1bpOoT-8wJIUYEg8mXjjsX35fu65fMusuhvaW6nWQbIN0G2Ba5X4Kos3cL6a2GIUIjdaA0cLvDNUkLQGeoVJ3EGF4JNSCViJUV5BNAZWOYRkfQCggV7YnPw5tjM5Nmxp3XeqPe5jmN8fU3z4gDRmGg7JYpstHTzNWLOulIckBvmJGjmyvmOGbWFUYeSJbPYmlvgvMlW80I6GG-d1D92jXqDR7K37qCk6ujLuC_3IlnlwZdyX-0pUkm50v5vT-yZLsBXP6Swk>) + - [MRR, NDR and PDR comparison for 2n-icx-e810cq-ip4-base](https://csit.fd.io/report/#eNrtVMsOgjAQ_Bq8mDW0gHjxoPIfppZVSQDrthLx6y2GuBBj4kVPHvrKzG6nM0mtOxFuLZbLIFkH6TqQaZH7KYhWU79QaWUUSmiMARnN_I6wRGURZA2FvoIIwwNKI3AhQn0G1eyhMDHM4x0IDeiO3cmPXVdTEXWt5aZv_XIPo_nFMepvHyENEoMjWUwzx3bAeSeW-YpQcYFXzJBDOxAzfhOz9qQqtMUNmepdYFx7oxkSetzftWaA9kal2YPx5VTq_J_KR6n0Rv0mFfNP5bNUzDOVJJvUJ6oeP1mS3QG2H0sT>) + - [Normalized throughput architecture comparison for 2n-[icx|clx]-e810cq-ip4-base-pdr](https://csit.fd.io/report/#eNrVk00OgjAQhU-DGzOGFhA3LlTuYUoZhKRibSsRT28hJANRF-500b98rzOvM6l1F4NHi2obJPsg3Qc8rQs_BdFu6RejLI9CDq3WwKOV3xlUKCwCb0CqO7AwPCHXDDcslFcQbQm1jmEd58AkoKv6kx95f0cXpg_ND2PolzxEi5sj6rPPSIuG4MwWyXTVTTSfzJJeGBR0wTsm5NBOzMzfRKrSiDPa-oEk9VUgLn2hCTE5j-86PaFjodJsUHzXlVr-UVfem_35riTZormY8_BneNpvhRpzJNkT6FzkMw>) + - [NICs comparison for 2n-icx-ip4-base-pdr](https://csit.fd.io/report/#eNrll99ugyAUh5_G3SxnESx1N7to53s0FI6rmbYMnKF7-qFrcmRmV7vReuG__A74wSckuvZi8eCwfknEPsn3Cc8rHU5JtnsMF1s7nqUcOmOAZ0_hzmKN0iHwM6jaA0vTN-SGKS_EVkJTewGV2cB2cwSmANtT_xSOY9_IaNv3zV9vfU9eRKn-bCkNr4-SDi2FEReVmdN1VPMnLTWQFiW1CMgUtehGNPGgqKq0skFXfSGVhmmgXIWppoipuP_2akbpbabyYqj4txerG7kcLz3tnXvBZ5aqD5BduQAtBLsOK9ro9-Vo6Wnv1sswUJ-zdPZLJSJdgY_ZL5IY9U6NcPEzTN8NX14JXpsZW_mNewi46zAz691rwroKJzPfwaaws7ciiofzxTbDv6QovgETwNPp>) + +# Known Issues + +Editing Note: below listed known issues need to be updated to reflect the +current state as tracked on +[CSIT TestFailuresTracking wiki](https://wiki.fd.io/view/CSIT/TestFailuresTracking). + +## New + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | [CSIT-1890](https://jira.fd.io/browse/CSIT-1890) | 3n-alt: Tests failing until 40Ge Interface comes up. + +## Previous + +Issues reported in previous releases which still affect the current results. + +**#** | **JiraID** | **Issue Description** +------|-------------------------------------------------------------------------------------------------|--------------- + 1 | [CSIT-1782](https://jira.fd.io/browse/CSIT-1782) | Multicore AVF tests are failing when trying to create interface. Frequency is reduced by CSIT workaround, but occasional failures do still happen. + 2 | [CSIT-1785](https://jira.fd.io/browse/CSIT-1785) [VPP-1972](https://jira.fd.io/browse/VPP-1972) | NAT44ED tests failing to establish all TCP sessions. At least for max scale, in allotted time (limited by session 500s timeout) due to worse slow path performance than previously measured and calibrated for. CSIT removed the max scale NAT tests to avoid this issue. 
+ 3 | [CSIT-1799](https://jira.fd.io/browse/CSIT-1799) | All NAT44-ED 16M sessions CPS scale tests fail while setting NAT44 address range. + 4 | [CSIT-1800](https://jira.fd.io/browse/CSIT-1800) | All Geneve L3 mode scale tests (1024 tunnels) are failing. + 5 | [CSIT-1801](https://jira.fd.io/browse/CSIT-1801) | 9000B payload frames not forwarded over tunnels due to violating supported Max Frame Size (VxLAN, LISP, + 6 | [CSIT-1802](https://jira.fd.io/browse/CSIT-1802) | all testbeds: AF-XDP - NDR tests failing from time to time. + 7 | [CSIT-1804](https://jira.fd.io/browse/CSIT-1804) | All testbeds: NDR tests failing from time to time. + 8 | [CSIT-1808](https://jira.fd.io/browse/CSIT-1808) | All tests with 9000B payload frames not forwarded over memif interfaces. + 9 | [CSIT-1827](https://jira.fd.io/browse/CSIT-1827) | 3n-icx, 3n-skx: all AVF crypto tests sporadically fail. 1518B with no traffic, IMIX with excessive + 10 | [CSIT-1835](https://jira.fd.io/browse/CSIT-1835) | 3n-icx: QUIC vppecho BPS tests failing on timeout when checking hoststack finished. + 11 | [CSIT-1849](https://jira.fd.io/browse/CSIT-1849) | 2n-skx, 2n-clx, 2n-icx: UDP 16m TPUT tests fail to create all sessions. + 12 | [CSIT-1864](https://jira.fd.io/browse/CSIT-1864) | 2n-clx: half of the packets lost on PDR tests. + 13 | [CSIT-1877](https://jira.fd.io/browse/CSIT-1877) | 3n-tsh: all VM tests failing to boot VM. + 14 | [CSIT-1883](https://jira.fd.io/browse/CSIT-1883) | 3n-snr: All hwasync wireguard tests failing when trying to verify device. + 15 | [CSIT-1884](https://jira.fd.io/browse/CSIT-1884) | 2n-clx, 2n-icx: All NAT44DET NDR PDR IMIX over 1M sessions BIDIR tests failing to create enough sessions. + 16 | [CSIT-1885](https://jira.fd.io/browse/CSIT-1885) | 3n-icx: 9000b ip4 ip6 l2 NDRPDR AVF tests are failing to forward traffic. + 17 | [CSIT-1886](https://jira.fd.io/browse/CSIT-1886) | 3n-icx: Wireguard tests with 100 and more tunnels are failing PDR criteria. + +## Fixed + +Issues reported in previous releases which were fixed in this release: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | [CSIT-1868](https://jira.fd.io/browse/CSIT-1868) | 2n-clx: ALL ldpreload-nginx tests fails when trying to start nginx. + 2 | [CSIT-1871](https://jira.fd.io/browse/CSIT-1871) | 3n-snr: 25GE interface between SUT and TG/TRex goes down randomly. + +# Root Cause Analysis for Performance Changes + +List of RCAs in CSIT 23.02 for VPP performance changes: + +**#** | **JiraID** | **Issue Description** +------|--------------------------------------------------|-------------------------------------------------------------- + 1 | [CSIT-1887](https://jira.fd.io/browse/CSIT-1887) | rls2210 RCA: ASTF tests TRex upgrade decreased TRex performance. NAT results not affected, except on Denverton due to interference from VPP-2010. + 2 | [CSIT-1888](https://jira.fd.io/browse/CSIT-1888) | rls2210 RCA: testbed differences, especially for ipsec. Not caused by VPP code nor CSIT code. Most probable cause is clang-14 behavior. + 3 | [CSIT-1889](https://jira.fd.io/browse/CSIT-1889) | rls2210 RCA: policy-outbound-nocrypto. When VPP added spd fast path matching (Gerrit 36097), it decreased MRR of the corresponding tests, at least on 3-alt. 
diff --git a/docs/content/release_notes/trex.md b/docs/content/release_notes/trex.md deleted file mode 100644 index 3794dc159c..0000000000 --- a/docs/content/release_notes/trex.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: "TRex Performance" -weight: 3 ---- - -# Changes in {{< release_csit >}} - -1. TEST FRAMEWORK - - **CSIT test environment** version has been updated to ver. 11, see - [Environment Versioning]({{< ref "infrastructure#Release Notes" >}}). - -# Known Issues - -List of known issues in {{< release_csit >}} for TRex performance tests - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 1 | [CSIT-1876](https://jira.fd.io/browse/CSIT-1876) | 1n-aws: TRex NDR PDR ALL IP4 scale and L2 scale tests failing with 50% packet loss. CSIT removed ip4scale and l2scale except ip4scale2m where it's still failing. - - -## New - -List of new issues in {{< release_csit >}} for TRex performance tests: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|---------------------------------------------------------------------------
\ No newline at end of file diff --git a/docs/content/release_notes/vpp.md b/docs/content/release_notes/vpp.md deleted file mode 100644 index 48805ba574..0000000000 --- a/docs/content/release_notes/vpp.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: "VPP Performance" -weight: 1 ---- - -# Changes in {{< release_csit >}} - -1. VPP PERFORMANCE TESTS - - **Enhanced and added VPP hoststack tests** to daily and weekly - trending including: Quic VPP Echo, UDP+TCP LD_PRELOAD iPerf3, - LD_PRELOAD NGINX. - - **Added Nvidia/Mellanox DPDK tests** to daily and weekly trending - and report, in addition to RDMA_CORE ones that were already - there. - - **Jumbo frames tests** got fixed and re-added number of to report - coverage tests. - - **Intel Xeon SKX performance testbeds** got decommissioned and - removed from FD.io performance lab. -2. TEST FRAMEWORK - - **CSIT test environment** version has not changed from ver. 11 used - in previous release, see - [Environment Versioning]({{< ref "infrastructure#Release Notes" >}}). - - **CSIT PAPI optimizations for scale** got applied improving PAPI - programming speed especially for large scale tests. VAT has been - now completely deprecated from CSIT. - - **General Code Housekeeping**: Ongoing code optimizations and bug - fixes. -3. PRESENTATION AND ANALYTICS LAYER - - [Performance dashboard](https://csit.fd.io/) got updated with - addition of VPP telemetry trending across all VPP tests. A number - of code and AWS resource usage optimizations got applied to the - data processing pipeline and UI frontend and backend. - - Examples of release iterative data visualisation: - - - [Packet throughput 2n-icx-e810cq-ip4-base-scale-pdr](https://csit.fd.io/report/#eNrdVcluwjAQ_Zr0ggbZDml64QDkP5BxhhJlwYxNVPr1OAhpYiGO7cEHb3pv1qeRnT8T7h1266zYZuU2U2VThy3LN4twUOdULhSM1oLKl-FG2KF2CGqAxvyAFOIblZX4JYW5gB6P0NgVfK4OIA2gP02vsA6Tja1pcq12T9cvcRitr57RED1CRiQGo7SYZk-3GeddsszXhJoNQsYMeXSzZOKamHUk3aNrfpGpoQuMm9BohqSJ_fubnaHPRpXVg_F3qjijO1RCtEBDnZo8UXFJ6NQmKlGbgjp9ujPU_8cEFdXHcKb-8Q8V1R2PI8PX) - - [Speedup Multi-Core throughput graph for 2n-icx-e810cq-ip4-base-pdr](https://csit.fd.io/report/#eNrtlM8OgjAMxp8GL6aGFRAvHlTew8xRhAR1bpOoT-8wJIUYEg8mXjjsX35fu65fMusuhvaW6nWQbIN0G2Ba5X4Kos3cL6a2GIUIjdaA0cLvDNUkLQGeoVJ3EGF4JNSCViJUV5BNAZWOYRkfQCggV7YnPw5tjM5Nmxp3XeqPe5jmN8fU3z4gDRmGg7JYpstHTzNWLOulIckBvmJGjmyvmOGbWFUYeSJbPYmlvgvMlW80I6GG-d1D92jXqDR7K37qCk6ujLuC_3IlnlwZdyX-0pUkm50v5vT-yZLsBXP6Swk>) - - [MRR, NDR and PDR comparison for 2n-icx-e810cq-ip4-base](https://csit.fd.io/report/#eNrtVMsOgjAQ_Bq8mDW0gHjxoPIfppZVSQDrthLx6y2GuBBj4kVPHvrKzG6nM0mtOxFuLZbLIFkH6TqQaZH7KYhWU79QaWUUSmiMARnN_I6wRGURZA2FvoIIwwNKI3AhQn0G1eyhMDHM4x0IDeiO3cmPXVdTEXWt5aZv_XIPo_nFMepvHyENEoMjWUwzx3bAeSeW-YpQcYFXzJBDOxAzfhOz9qQqtMUNmepdYFx7oxkSetzftWaA9kal2YPx5VTq_J_KR6n0Rv0mFfNP5bNUzDOVJJvUJ6oeP1mS3QG2H0sT>) - - [Normalized throughput architecture comparison for 2n-[icx|clx]-e810cq-ip4-base-pdr](https://csit.fd.io/report/#eNrVk00OgjAQhU-DGzOGFhA3LlTuYUoZhKRibSsRT28hJANRF-500b98rzOvM6l1F4NHi2obJPsg3Qc8rQs_BdFu6RejLI9CDq3WwKOV3xlUKCwCb0CqO7AwPCHXDDcslFcQbQm1jmEd58AkoKv6kx95f0cXpg_ND2PolzxEi5sj6rPPSIuG4MwWyXTVTTSfzJJeGBR0wTsm5NBOzMzfRKrSiDPa-oEk9VUgLn2hCTE5j-86PaFjodJsUHzXlVr-UVfem_35riTZormY8_BneNpvhRpzJNkT6FzkMw>) - - [NICs comparison for 
2n-icx-ip4-base-pdr](https://csit.fd.io/report/#eNrll99ugyAUh5_G3SxnESx1N7to53s0FI6rmbYMnKF7-qFrcmRmV7vReuG__A74wSckuvZi8eCwfknEPsn3Cc8rHU5JtnsMF1s7nqUcOmOAZ0_hzmKN0iHwM6jaA0vTN-SGKS_EVkJTewGV2cB2cwSmANtT_xSOY9_IaNv3zV9vfU9eRKn-bCkNr4-SDi2FEReVmdN1VPMnLTWQFiW1CMgUtehGNPGgqKq0skFXfSGVhmmgXIWppoipuP_2akbpbabyYqj4txerG7kcLz3tnXvBZ5aqD5BduQAtBLsOK9ro9-Vo6Wnv1sswUJ-zdPZLJSJdgY_ZL5IY9U6NcPEzTN8NX14JXpsZW_mNewi46zAz691rwroKJzPfwaaws7ciiofzxTbDv6QovgETwNPp>) - -# Known Issues - -Editing Note: below listed known issues need to be updated to reflect the -current state as tracked on -[CSIT TestFailuresTracking wiki](https://wiki.fd.io/view/CSIT/TestFailuresTracking). - -## New - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|------------------------------------------------------ - 1 | [CSIT-1890](https://jira.fd.io/browse/CSIT-1890) | 3n-alt: Tests failing until 40Ge Interface comes up. - - -## Previous - -Issues reported in previous releases which still affect the current results. - - **#** | **JiraID** | **Issue Description** --------|-------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - 1 | [CSIT-1782](https://jira.fd.io/browse/CSIT-1782) | Multicore AVF tests are failing when trying to create interface. Frequency is reduced by CSIT workaround, but occasional failures do still happen. - 2 | [CSIT-1785](https://jira.fd.io/browse/CSIT-1785) [VPP-1972](https://jira.fd.io/browse/VPP-1972) | NAT44ED tests failing to establish all TCP sessions. At least for max scale, in allotted time (limited by session 500s timeout) due to worse slow path performance than previously measured and calibrated for. CSIT removed the max scale NAT tests to avoid this issue. - 3 | [CSIT-1799](https://jira.fd.io/browse/CSIT-1799) | All NAT44-ED 16M sessions CPS scale tests fail while setting NAT44 address range. - 4 | [CSIT-1800](https://jira.fd.io/browse/CSIT-1800) | All Geneve L3 mode scale tests (1024 tunnels) are failing. - 5 | [CSIT-1801](https://jira.fd.io/browse/CSIT-1801) | 9000B payload frames not forwarded over tunnels due to violating supported Max Frame Size (VxLAN, LISP, - 6 | [CSIT-1802](https://jira.fd.io/browse/CSIT-1802) | all testbeds: AF-XDP - NDR tests failing from time to time. - 7 | [CSIT-1804](https://jira.fd.io/browse/CSIT-1804) | All testbeds: NDR tests failing from time to time. - 8 | [CSIT-1808](https://jira.fd.io/browse/CSIT-1808) | All tests with 9000B payload frames not forwarded over memif interfaces. - 9 | [CSIT-1827](https://jira.fd.io/browse/CSIT-1827) | 3n-icx, 3n-skx: all AVF crypto tests sporadically fail. 1518B with no traffic, IMIX with excessive - 10 | [CSIT-1835](https://jira.fd.io/browse/CSIT-1835) | 3n-icx: QUIC vppecho BPS tests failing on timeout when checking hoststack finished. - 11 | [CSIT-1849](https://jira.fd.io/browse/CSIT-1849) | 2n-skx, 2n-clx, 2n-icx: UDP 16m TPUT tests fail to create all sessions. - 12 | [CSIT-1864](https://jira.fd.io/browse/CSIT-1864) | 2n-clx: half of the packets lost on PDR tests. - 13 | [CSIT-1877](https://jira.fd.io/browse/CSIT-1877) | 3n-tsh: all VM tests failing to boot VM. 
- 14 | [CSIT-1883](https://jira.fd.io/browse/CSIT-1883) | 3n-snr: All hwasync wireguard tests failing when trying to verify device. - 15 | [CSIT-1884](https://jira.fd.io/browse/CSIT-1884) | 2n-clx, 2n-icx: All NAT44DET NDR PDR IMIX over 1M sessions BIDIR tests failing to create enough sessions. - 16 | [CSIT-1885](https://jira.fd.io/browse/CSIT-1885) | 3n-icx: 9000b ip4 ip6 l2 NDRPDR AVF tests are failing to forward traffic. - 17 | [CSIT-1886](https://jira.fd.io/browse/CSIT-1886) | 3n-icx: Wireguard tests with 100 and more tunnels are failing PDR criteria. - -## Fixed - -Issues reported in previous releases which were fixed in this release: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|--------------------------------------------------------------------- - 1 | [CSIT-1868](https://jira.fd.io/browse/CSIT-1868) | 2n-clx: ALL ldpreload-nginx tests fails when trying to start nginx. - 2 | [CSIT-1871](https://jira.fd.io/browse/CSIT-1871) | 3n-snr: 25GE interface between SUT and TG/TRex goes down randomly. - -# Root Cause Analysis for Performance Changes - -List of RCAs in {{< release_csit >}} for VPP performance changes: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------- - 1 | [CSIT-1887](https://jira.fd.io/browse/CSIT-1887) | rls2210 RCA: ASTF tests TRex upgrade decreased TRex performance. NAT results not affected, except on Denverton due to interference from VPP-2010. - 2 | [CSIT-1888](https://jira.fd.io/browse/CSIT-1888) | rls2210 RCA: testbed differences, especially for ipsec. Not caused by VPP code nor CSIT code. Most probable cause is clang-14 behavior. - 3 | [CSIT-1889](https://jira.fd.io/browse/CSIT-1889) | rls2210 RCA: policy-outbound-nocrypto. When VPP added spd fast path matching (Gerrit 36097), it decreased MRR of the corresponding tests, at least on 3-alt. diff --git a/docs/content/release_notes/vpp_device.md b/docs/content/release_notes/vpp_device.md deleted file mode 100644 index 2f1f6d34b5..0000000000 --- a/docs/content/release_notes/vpp_device.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: "VPP Device" -weight: 4 ---- - -# Changes in {{< release_csit >}} - -1. TEST FRAMEWORK - - **CSIT test environment** version has been updated to ver. 11, see - [Environment Versioning]({{< ref "infrastructure#Release Notes" >}}). - -# Known Issues - -List of known issues in {{< release_csit >}} for VPP functional tests in VPP Device: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|--------------------------------------------------------------------------- - -## New - -List of new issues in {{< release_csit >}} for VPP functional tests in VPP Device: - - **#** | **JiraID** | **Issue Description** --------|--------------------------------------------------|---------------------------------------------------------------------------
\ No newline at end of file