author    Tibor Frank <tifrank@cisco.com>    2023-05-09 10:41:38 +0000
committer Tibor Frank <tifrank@cisco.com>    2023-05-10 10:18:16 +0000
commit    4f3004e217d2514154fd82a8efb4125c54584e44 (patch)
tree      2ee4d0a71e127137da82d4f4f0c4f72fb4ace683
parent    772f4202ff7b79da6ce3f230adf496279c009b28 (diff)
C-Docs: Review, edit, add parts of the documentation
Change-Id: I83c3d93c6d71f3d9d03078d405bea9ef29392089
Signed-off-by: Tibor Frank <tifrank@cisco.com>
-rw-r--r--  docs/content/_index.md                               36
-rw-r--r--  docs/content/methodology/trending/presentation.md    22
-rw-r--r--  docs/content/overview/csit/design.md                  4
-rw-r--r--  docs/content/overview/csit/test_tags.md              109
4 files changed, 96 insertions, 75 deletions
diff --git a/docs/content/_index.md b/docs/content/_index.md
index eda7ecf8f9..dead278d42 100644
--- a/docs/content/_index.md
+++ b/docs/content/_index.md
@@ -6,24 +6,34 @@ type: "docs"
# Documentation Structure
1. OVERVIEW: General introduction to CSIT Performance Dashboard and CSIT itself.
- - **C-Dash**
- - **CSIT**
+ - **C-Dash**: The design and the structure of C-Dash dashboard.
+ - **CSIT**: The design of the [FD.io](https://fd.io/) CSIT system, and the
+ description of the test scenarios, test naming and test tags.
2. METHODOLOGY
- - **Overview**
- - **Measurement**
- - **Test**
- - **Trending**
- - **Per-patch Testing**
+ - **Overview**: Terminology, per-thread resources, multi-core speedup, VPP
+ forwarding modes and DUT state considerations.
+ - **Measurement**: Data plane throughput, packet latency, and telemetry.
+ - **Test**: Methodology of all tests used in CSIT.
+ - **Trending**: A high-level design of a system for continuous performance
+ measuring, trending and change detection for FD.io VPP SW data plane (and
+ other performance tests run within CSIT sub-project).
+ - **Per-patch Testing**: A methodology similar to trending analysis is used
+ for comparing performance before a DUT code change is merged.
3. RELEASE NOTES: Performance tests executed in physical FD.io testbeds.
- - **CSIT rls2306**
- - **Previous**
+ - **CSIT rls2306**: The release notes of the current CSIT release.
+ - **Previous**: Archived release notes for the past releases.
4. INFRASTRUCTURE
- **FD.io DC Vexxhost Inventory**: Physical testbeds location.
- **FD.io DC Testbed Specifications**: Specification of the physical
testbed infrastructure.
- **FD.io DC Testbed Configuration**: Configuration of the physical
testbed infrastructure.
- - **FD.io CSIT Testbed Versioning**: CSIT testbed versioning.
- - **FD.io CSIT Logical Topologies**: CSIT Logical Topologies.
- - **VPP Startup Settings**
- - **TRex Traffic Generator**
+ - **FD.io CSIT Testbed Versioning**: CSIT test environment versioning to
+ track modifications of the test environment.
+ - **FD.io CSIT Logical Topologies**: CSIT performance tests are executed on
+ physical testbeds. Based on the packet path through server SUTs, three
+ distinct logical topology types are used for VPP DUT data plane testing.
+ - **VPP Startup Settings**: List of common settings applied to all tests and
+ test dependent settings.
+ - **TRex Traffic Generator**: Usage of the TRex traffic generator and its
+ traffic modes, profiles, etc.
diff --git a/docs/content/methodology/trending/presentation.md b/docs/content/methodology/trending/presentation.md
index 84925b46c8..91bbef8db9 100644
--- a/docs/content/methodology/trending/presentation.md
+++ b/docs/content/methodology/trending/presentation.md
@@ -7,27 +7,29 @@ weight: 2
## Failed tests
-The Failed tests tables list the tests which failed during the last test run.
-Separate tables are generated for each testbed.
+The [Failed tests tables](https://csit.fd.io/news/) list the tests which failed
+during the last test run. Separate tables are generated for each testbed.
## Regressions and progressions
-These tables list tests which encountered a regression or progression during the
-specified time period, which is currently set to the last 21 days.
+[These tables](https://csit.fd.io/news/) list tests which encountered
+a regression or progression during the specified time period, which is currently
+set to the last 1, 7, and 130 days.
## Trendline Graphs
-Trendline graphs show measured per run averages of MRR values, NDR or PDR
-values, group average values, and detected anomalies.
-The graphs are constructed as follows:
+[Trendline graphs](https://csit.fd.io/trending/) show measured per run averages
+of MRR values, NDR or PDR values, user-selected telemetry metrics, group average
+values, and detected anomalies. The graphs are constructed as follows:
- X-axis represents the date in the format MMDD.
-- Y-axis represents run-average MRR value, NDR or PDR values in Mpps. For PDR
- tests also a graph with average latency at 50% PDR [us] is generated.
+- Y-axis represents run-average MRR value, NDR or PDR values in Mpps, or selected
+ metrics. For PDR tests, a graph with average latency at 50% PDR [us] is also
+ generated.
- Markers to indicate anomaly classification:
- Regression - red circle.
- Progression - green circle.
-- The line shows average MRR value of each group.
+- The line shows average value of each group.
In addition the graphs show dynamic labels while hovering over graph data
points, presenting the CSIT build date, measured value, VPP reference, trend job
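The regression and progression markers shown above come from a change-detection
step run over grouped samples. As a rough illustration only (this is not the
CSIT trending classifier; the function name and the 3-sigma rule are
assumptions), flagging a new run against its group average could look like this:

```python
from statistics import mean, stdev

def classify_run(group_values, new_value, sigma=3.0):
    """Illustrative sketch: flag a run whose value deviates from the
    group average by more than `sigma` standard deviations."""
    avg = mean(group_values)
    sd = stdev(group_values) if len(group_values) > 1 else 0.0
    if new_value < avg - sigma * sd:
        return "regression"   # red circle in the trendline graph
    if new_value > avg + sigma * sd:
        return "progression"  # green circle in the trendline graph
    return "normal"

# Example: previous run-average MRR values (Mpps) vs. a new run.
print(classify_run([12.1, 12.0, 12.2, 12.1], 10.5))  # -> regression
```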
diff --git a/docs/content/overview/csit/design.md b/docs/content/overview/csit/design.md
index 53b764f5bb..7bb08165b8 100644
--- a/docs/content/overview/csit/design.md
+++ b/docs/content/overview/csit/design.md
@@ -67,9 +67,9 @@ A brief bottom-up description is provided here:
- VPP;
- DPDK-Testpmd;
- DPDK-L3Fwd;
+ - TRex;
- Tools:
- - Documentation generator;
- - Report generator;
+ - C-Dash;
- Testbed environment setup ansible playbooks;
- Operational debugging scripts;
diff --git a/docs/content/overview/csit/test_tags.md b/docs/content/overview/csit/test_tags.md
index 8fc3021d6f..63bc845823 100644
--- a/docs/content/overview/csit/test_tags.md
+++ b/docs/content/overview/csit/test_tags.md
@@ -36,12 +36,13 @@ descriptions.
**SKIP_PATCH**
- Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch)
- and csit-vpp-verify jobs (i.e. CSIT patch).
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP
+ patch) and csit-vpp-verify jobs (i.e. CSIT patch).
**SKIP_VPP_PATCH**
- Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP patch).
+ Test case(s) marked to not run in case of vpp-csit-verify (i.e. VPP
+ patch).
## Environment Tags
@@ -164,29 +165,31 @@ descriptions.
**100_FLOWS**
- Traffic stream with 100 unique flows (10 IPs/users x 10 UDP ports) in one
- direction.
+ Traffic stream with 100 unique flows (10 IPs/users x 10 UDP ports) in
+ one direction.
**10k_FLOWS**
- Traffic stream with 10 000 unique flows (10 IPs/users x 1000 UDP ports) in
- one direction.
+ Traffic stream with 10 000 unique flows (10 IPs/users x 1000 UDP ports)
+ in one direction.
**100k_FLOWS**
- Traffic stream with 100 000 unique flows (100 IPs/users x 1000 UDP ports) in
- one direction.
+ Traffic stream with 100 000 unique flows (100 IPs/users x 1000 UDP
+ ports) in one direction.
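The flow counts in the *_FLOWS tags are simply the product of unique source
IPs/users and UDP ports per direction; a trivial check (the helper below is
hypothetical, not part of CSIT):

```python
def flow_count(ips, udp_ports):
    # Unique one-direction flows for the *_FLOWS tags.
    return ips * udp_ports

assert flow_count(10, 10) == 100         # 100_FLOWS
assert flow_count(10, 1000) == 10_000    # 10k_FLOWS
assert flow_count(100, 1000) == 100_000  # 100k_FLOWS
```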
**HOSTS_{h}**
- Stateless or stateful traffic stream with {h} client source IP4 addresses,
- usually with 63 flow differing in source port number. Could be UDP or TCP.
- If NAT is used, the clients are inside. Outside IP range can differ.
+ Stateless or stateful traffic stream with {h} client source IP4
+ addresses, usually with 63 flows differing in source port number.
+ Could be UDP or TCP. If NAT is used, the clients are inside.
+ Outside IP range can differ.
{h}=(1024,4096,16384,65536,262144).
**GENEVE4_{t}TUN**
- Test with {t} GENEVE IPv4 tunnel. {t}=(1,4,16,64,256,1024)
+ Test with {t} GENEVE IPv4 tunnel.
+ {t}=(1,4,16,64,256,1024)
## Test Category Tags
@@ -208,10 +211,11 @@ descriptions.
**NDRPDR**
- Single test finding both No Drop Rate and Partial Drop Rate simultaneously.
- The search is done by optimized algorithm which performs
- multiple trial runs at different durations and transmit rates.
- The results come from the final trials, which have duration of 30 seconds.
+ Single test finding both No Drop Rate and Partial Drop Rate
+ simultaneously. The search is done by an optimized algorithm which
+ performs multiple trial runs at different durations and transmit
+ rates. The results come from the final trials, which have a duration
+ of 30 seconds.
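CSIT's optimized NDR/PDR search (MLRsearch) varies both trial duration and
offered rate; as a rough intuition only, the core idea resembles a bisection
over offered rate. The helper `measure_loss`, the precision parameter and the
default values below are assumptions, not CSIT code:

```python
def find_max_rate(measure_loss, max_rate, loss_tolerance,
                  precision=0.005, trial_duration=30.0):
    """Simplified bisection sketch: highest offered rate whose measured
    loss ratio stays within `loss_tolerance` (0.0 for NDR, e.g. 0.005
    for PDR). `measure_loss(rate, duration)` is a hypothetical helper
    returning the loss ratio of one trial."""
    lo, hi = 0.0, max_rate
    while (hi - lo) / max_rate > precision:
        mid = (lo + hi) / 2.0
        if measure_loss(mid, trial_duration) <= loss_tolerance:
            lo = mid   # rate sustainable, search higher
        else:
            hi = mid   # loss too high, search lower
    return lo
```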
**MRR**
@@ -271,8 +275,8 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**L2PATCH**
- L2PATCH baseline test cases, no encapsulation, no feature(s) configured in
- tests.
+ L2PATCH baseline test cases, no encapsulation, no feature(s) configured
+ in tests.
**SCALE**
@@ -285,7 +289,8 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**FEATURE**
- At least one feature is configured in test cases. Use also feature tag(s).
+ At least one feature is configured in test cases. Use also feature
+ tag(s).
**UDP**
@@ -297,7 +302,8 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**TREX**
- Tests which test trex traffic without any software DUTs in the traffic path.
+ Tests which test TRex traffic without any software DUTs in the
+ traffic path.
**UDP_UDIR**
@@ -309,33 +315,28 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**UDP_CPS**
- Tests which measure connections per second on minimal UDP pseudoconnections.
- This implies ASTF traffic profile is used.
- This tag selects specific output processing in PAL.
+ Tests which measure connections per second on minimal UDP
+ pseudoconnections. This implies ASTF traffic profile is used.
**TCP_CPS**
Tests which measure connections per second on empty TCP connections.
This implies ASTF traffic profile is used.
- This tag selects specific output processing in PAL.
**TCP_RPS**
Tests which measure requests per second on empty TCP connections.
This implies ASTF traffic profile is used.
- This tag selects specific output processing in PAL.
**UDP_PPS**
Tests which measure packets per second on lightweight UDP transactions.
This implies ASTF traffic profile is used.
- This tag selects specific output processing in PAL.
**TCP_PPS**
Tests which measure packets per second on lightweight TCP transactions.
This implies ASTF traffic profile is used.
- This tag selects specific output processing in PAL.
**HTTP**
@@ -372,17 +373,18 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
Service density matrix locator {r}R{c}C, {r}Row denoting number of
service instances, {c}Column denoting number of NFs per service
- instance. {r}=(1,2,4,6,8,10), {c}=(1,2,4,6,8,10).
+ instance.
+ {r}=(1,2,4,6,8,10), {c}=(1,2,4,6,8,10).
**{n}VM{t}T**
- Service density {n}VM{t}T, {n}Number of NF Qemu VMs, {t}Number of threads
- per NF.
+ Service density {n}VM{t}T, {n}Number of NF Qemu VMs, {t}Number of
+ threads per NF.
**{n}DCR{t}T**
- Service density {n}DCR{t}T, {n}Number of NF Docker containers, {t}Number of
- threads per NF.
+ Service density {n}DCR{t}T, {n}Number of NF Docker containers,
+ {t}Number of threads per NF.
**{n}_ADDED_CHAINS**
@@ -597,8 +599,8 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**VIRTIO_1024**
- All test cases which uses VIRTIO native VPP driver with qemu queue size set
- to 1024.
+ All test cases which use the VIRTIO native VPP driver with qemu queue
+ size set to 1024.
**CFS_OPT**
@@ -623,8 +625,8 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**SINGLE_MEMIF**
All test cases which use only a single Memif connection per DUT. One DUT
- instance is running in container having one physical interface exposed to
- container.
+ instance is running in container having one physical interface exposed
+ to container.
**LBOND**
@@ -656,23 +658,25 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**DRV_{d}**
- All test cases which NIC Driver for DUT is set to {d}. Default is VFIO_PCI.
+ All test cases which NIC Driver for DUT is set to {d}.
+ Default is VFIO_PCI.
{d}=(AVF, RDMA_CORE, VFIO_PCI, AF_XDP).
**TG_DRV_{d}**
- All test cases which NIC Driver for TG is set to {d}. Default is IGB_UIO.
+ All test cases which NIC Driver for TG is set to {d}.
+ Default is IGB_UIO.
{d}=(RDMA_CORE, IGB_UIO).
**RXQ_SIZE_{n}**
- All test cases which RXQ size (RX descriptors) are set to {n}. Default is 0,
- which means VPP (API) default.
+ All test cases which RXQ size (RX descriptors) are set to {n}.
+ Default is 0, which means VPP (API) default.
**TXQ_SIZE_{n}**
- All test cases which TXQ size (TX descriptors) are set to {n}. Default is 0,
- which means VPP (API) default.
+ All test cases which TXQ size (TX descriptors) are set to {n}.
+ Default is 0, which means VPP (API) default.
## Feature Tags
@@ -706,11 +710,13 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**ACL_STATELESS**
- ACL plugin configured and tested in stateless mode (permit action).
+ ACL plugin configured and tested in stateless mode
+ (permit action).
**ACL_STATEFUL**
- ACL plugin configured and tested in stateful mode (permit+reflect action).
+ ACL plugin configured and tested in stateful mode
+ (permit+reflect action).
**ACL1**
@@ -836,12 +842,12 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**STHREAD**
- *Dynamic tag*.
+ Dynamic tag.
All test cases using single poll mode thread.
**MTHREAD**
- *Dynamic tag*.
+ Dynamic tag.
All test cases using more than one poll mode driver thread.
**{n}NUMA**
@@ -851,8 +857,9 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
**{c}C**
{c} worker thread pinned to {c} dedicated physical core; or if
- HyperThreading is enabled, {c}*2 worker threads each pinned to a separate
- logical core within 1 dedicated physical core. Main thread pinned to core 1.
+ HyperThreading is enabled, {c}*2 worker threads each pinned to
+ a separate logical core within 1 dedicated physical core. Main
+ thread pinned to core 1.
{c}=(1,2,4).
**{t}T{c}C**
@@ -860,4 +867,6 @@ For traffic between DUTs, or for "secondary" traffic, see ${overhead} value.
*Dynamic tag*.
{t} worker threads pinned to {c} dedicated physical cores. Main thread
pinned to core 1. By default CSIT configures the same number of receive
- queues per interface as worker threads. {t}=(1,2,4,8), {t}=(1,2,4).
+ queues per interface as worker threads.
+ {t}=(1,2,4,8),
+ {c}=(1,2,4).
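The {t}T{c}C (and {c}C) dynamic tags encode the thread and core counts directly
in the tag name, so they can be decoded mechanically; a minimal illustrative
parser (hypothetical, not part of CSIT) is:

```python
import re

def parse_tc_tag(tag):
    """Return (worker_threads, physical_cores) for tags like "4T2C"."""
    m = re.fullmatch(r"(\d+)T(\d+)C", tag)
    if not m:
        raise ValueError(f"not a {{t}}T{{c}}C tag: {tag}")
    return int(m.group(1)), int(m.group(2))

# With HyperThreading enabled, 2 dedicated physical cores run 4 workers:
print(parse_tc_tag("4T2C"))  # -> (4, 2)
```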