Diffstat (limited to 'docs/content/methodology')
-rw-r--r--  docs/content/methodology/_index.md | 4
-rw-r--r--  docs/content/methodology/measurements/_index.md (renamed from docs/content/methodology/hoststack_testing/_index.md) | 6
-rw-r--r--  docs/content/methodology/measurements/data_plane_throughput/_index.md (renamed from docs/content/methodology/data_plane_throughput/_index.md) | 2
-rw-r--r--  docs/content/methodology/measurements/data_plane_throughput/data_plane_throughput.md (renamed from docs/content/methodology/data_plane_throughput/data_plane_throughput.md) | 70
-rw-r--r--  docs/content/methodology/measurements/data_plane_throughput/mlr_search.md (renamed from docs/content/methodology/data_plane_throughput/mlrsearch.md) | 12
-rw-r--r--  docs/content/methodology/measurements/data_plane_throughput/mrr.md (renamed from docs/content/methodology/data_plane_throughput/mrr_throughput.md) | 8
-rw-r--r--  docs/content/methodology/measurements/data_plane_throughput/plr_search.md (renamed from docs/content/methodology/data_plane_throughput/plrsearch.md) | 26
-rw-r--r--  docs/content/methodology/measurements/packet_latency.md (renamed from docs/content/methodology/packet_latency.md) | 11
-rw-r--r--  docs/content/methodology/measurements/telemetry.md (renamed from docs/content/methodology/telemetry.md) | 89
-rw-r--r--  docs/content/methodology/overview/_index.md (renamed from docs/content/methodology/root_cause_analysis/_index.md) | 6
-rw-r--r--  docs/content/methodology/overview/dut_state_considerations.md (renamed from docs/content/methodology/dut_state_considerations.md) | 6
-rw-r--r--  docs/content/methodology/overview/multi_core_speedup.md (renamed from docs/content/methodology/multi_core_speedup.md) | 12
-rw-r--r--  docs/content/methodology/overview/per_thread_resources.md (renamed from docs/content/methodology/per_thread_resources.md) | 11
-rw-r--r--  docs/content/methodology/overview/terminology.md (renamed from docs/content/methodology/terminology.md) | 15
-rw-r--r--  docs/content/methodology/overview/vpp_forwarding_modes.md (renamed from docs/content/methodology/vpp_forwarding_modes.md) | 8
-rw-r--r--  docs/content/methodology/per_patch_testing.md (renamed from docs/content/methodology/root_cause_analysis/perpatch_performance_tests.md) | 24
-rw-r--r--  docs/content/methodology/suite_generation.md | 124
-rw-r--r--  docs/content/methodology/test/_index.md (renamed from docs/content/methodology/trending_methodology/_index.md) | 6
-rw-r--r--  docs/content/methodology/test/access_control_lists.md (renamed from docs/content/methodology/access_control_lists.md) | 6
-rw-r--r--  docs/content/methodology/test/generic_segmentation_offload.md (renamed from docs/content/methodology/generic_segmentation_offload.md) | 27
-rw-r--r--  docs/content/methodology/test/hoststack/_index.md | 6
-rw-r--r--  docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md (renamed from docs/content/methodology/hoststack_testing/quicudpip_with_vppecho.md) | 0
-rw-r--r--  docs/content/methodology/test/hoststack/tcpip_with_iperf3.md (renamed from docs/content/methodology/hoststack_testing/tcpip_with_iperf3.md) | 0
-rw-r--r--  docs/content/methodology/test/hoststack/udpip_with_iperf3.md (renamed from docs/content/methodology/hoststack_testing/udpip_with_iperf3.md) | 0
-rw-r--r--  docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md (renamed from docs/content/methodology/hoststack_testing/vsap_ab_with_nginx.md) | 0
-rw-r--r--  docs/content/methodology/test/internet_protocol_security.md (renamed from docs/content/methodology/internet_protocol_security_ipsec.md) | 13
-rw-r--r--  docs/content/methodology/test/network_address_translation.md (renamed from docs/content/methodology/network_address_translation.md) | 2
-rw-r--r--  docs/content/methodology/test/packet_flow_ordering.md (renamed from docs/content/methodology/packet_flow_ordering.md) | 2
-rw-r--r--  docs/content/methodology/test/reconfiguration.md (renamed from docs/content/methodology/reconfiguration_tests.md) | 6
-rw-r--r--  docs/content/methodology/test/tunnel_encapsulations.md (renamed from docs/content/methodology/geneve.md) | 65
-rw-r--r--  docs/content/methodology/test/vpp_device.md (renamed from docs/content/methodology/vpp_device_functional.md) | 6
-rw-r--r--  docs/content/methodology/trending/_index.md (renamed from docs/content/methodology/trending_methodology/overview.md) | 8
-rw-r--r--  docs/content/methodology/trending/analysis.md (renamed from docs/content/methodology/trending_methodology/trend_analysis.md) | 8
-rw-r--r--  docs/content/methodology/trending/presentation.md (renamed from docs/content/methodology/trending_methodology/trend_presentation.md) | 6
-rw-r--r--  docs/content/methodology/trex_traffic_generator.md | 195
-rw-r--r--  docs/content/methodology/tunnel_encapsulations.md | 41
-rw-r--r--  docs/content/methodology/vpp_startup_settings.md | 44
37 files changed, 254 insertions, 621 deletions
diff --git a/docs/content/methodology/_index.md b/docs/content/methodology/_index.md
index 6f0dcae783..dbef64db94 100644
--- a/docs/content/methodology/_index.md
+++ b/docs/content/methodology/_index.md
@@ -1,6 +1,6 @@
---
-bookCollapseSection: true
+bookCollapseSection: false
bookFlatSection: true
title: "Methodology"
weight: 2
----
\ No newline at end of file
+---
diff --git a/docs/content/methodology/hoststack_testing/_index.md b/docs/content/methodology/measurements/_index.md
index b658313040..9e9232969e 100644
--- a/docs/content/methodology/hoststack_testing/_index.md
+++ b/docs/content/methodology/measurements/_index.md
@@ -1,6 +1,6 @@
---
bookCollapseSection: true
bookFlatSection: false
-title: "Hoststack Testing"
-weight: 14
----
\ No newline at end of file
+title: "Measurements"
+weight: 2
+---
diff --git a/docs/content/methodology/data_plane_throughput/_index.md b/docs/content/methodology/measurements/data_plane_throughput/_index.md
index 5791438b3b..8fc7f66f3e 100644
--- a/docs/content/methodology/data_plane_throughput/_index.md
+++ b/docs/content/methodology/measurements/data_plane_throughput/_index.md
@@ -2,5 +2,5 @@
bookCollapseSection: true
bookFlatSection: false
title: "Data Plane Throughput"
-weight: 4
+weight: 1
---
\ No newline at end of file
diff --git a/docs/content/methodology/data_plane_throughput/data_plane_throughput.md b/docs/content/methodology/measurements/data_plane_throughput/data_plane_throughput.md
index 7ff1d38d17..865405ba2f 100644
--- a/docs/content/methodology/data_plane_throughput/data_plane_throughput.md
+++ b/docs/content/methodology/measurements/data_plane_throughput/data_plane_throughput.md
@@ -1,5 +1,5 @@
---
-title: "Data Plane Throughput"
+title: "Overview"
weight: 1
---
@@ -12,8 +12,8 @@ set of performance test cases implemented and executed within CSIT.
Following throughput test methods are used:
- MLRsearch - Multiple Loss Ratio search
-- MRR - Maximum Receive Rate
- PLRsearch - Probabilistic Loss Ratio search
+- MRR - Maximum Receive Rate
Description of each test method is followed by generic test properties
shared by all methods.
@@ -34,7 +34,7 @@ RFC2544.
MLRsearch tests are run to discover NDR and PDR rates for each VPP and
DPDK release covered by CSIT report. Results for small frame sizes
-(64b/78B, IMIX) are presented in packet throughput graphs
+(64B/78B, IMIX) are presented in packet throughput graphs
(Box-and-Whisker Plots) with NDR and PDR rates plotted against the test
cases covering popular VPP packet paths.
@@ -46,10 +46,36 @@ tables.
### Details
-See [MLRSearch]({{< ref "mlrsearch/#MLRsearch" >}}) section for more detail.
+See [MLRSearch]({{< ref "mlr_search/#MLRsearch" >}}) section for more detail.
MLRsearch is being standardized in IETF in
[draft-ietf-bmwg-mlrsearch](https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-mlrsearch-01).
+## PLRsearch Tests
+
+### Description
+
+Probabilistic Loss Ratio search (PLRsearch) tests discovers a packet
+throughput rate associated with configured Packet Loss Ratio (PLR)
+criteria for tests run over an extended period of time a.k.a. soak
+testing. PLRsearch assumes that system under test is probabilistic in
+nature, and not deterministic.
+
+### Usage
+
+PLRsearch are run to discover a sustained throughput for PLR=10^-7^
+(close to NDR) for VPP release covered by CSIT report. Results for small
+frame sizes (64B/78B) are presented in packet throughput graphs (Box
+Plots) for a small subset of baseline tests.
+
+Each soak test lasts 30 minutes and is executed at least twice. Results are
+compared against NDR and PDR rates discovered with MLRsearch.
+
+### Details
+
+See [PLRSearch]({{< ref "plr_search/#PLRsearch" >}}) methodology section for
+more detail. PLRsearch is being standardized in IETF in
+[draft-vpolak-bmwg-plrsearch](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch).
+
## MRR Tests
### Description
@@ -74,42 +100,16 @@ progressions) resulting from data plane code changes.
MRR tests are also used for VPP per patch performance jobs verifying
patch performance vs parent. CSIT reports include MRR throughput
comparisons between releases and test environments. Small frame sizes
-only (64b/78B, IMIX).
+only (64B/78B, IMIX).
### Details
-See [MRR Throughput]({{< ref "mrr_throughput/#MRR Throughput" >}})
+See [MRR Throughput]({{< ref "mrr/#MRR" >}})
section for more detail about MRR tests configuration.
FD.io CSIT performance dashboard includes complete description of
-[daily performance trending tests](https://s3-docs.fd.io/csit/master/trending/methodology/performance_tests.html)
-and [VPP per patch tests](https://s3-docs.fd.io/csit/master/trending/methodology/perpatch_performance_tests.html).
-
-## PLRsearch Tests
-
-### Description
-
-Probabilistic Loss Ratio search (PLRsearch) tests discovers a packet
-throughput rate associated with configured Packet Loss Ratio (PLR)
-criteria for tests run over an extended period of time a.k.a. soak
-testing. PLRsearch assumes that system under test is probabilistic in
-nature, and not deterministic.
-
-### Usage
-
-PLRsearch are run to discover a sustained throughput for PLR=10^-7
-(close to NDR) for VPP release covered by CSIT report. Results for small
-frame sizes (64b/78B) are presented in packet throughput graphs (Box
-Plots) for a small subset of baseline tests.
-
-Each soak test lasts 30 minutes and is executed at least twice. Results are
-compared against NDR and PDR rates discovered with MLRsearch.
-
-### Details
-
-See [PLRSearch]({{< ref "plrsearch/#PLRsearch" >}}) methodology section for
-more detail. PLRsearch is being standardized in IETF in
-[draft-vpolak-bmwg-plrsearch](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch).
+[daily performance trending tests]({{< ref "../../trending/analysis" >}})
+and [VPP per patch tests]({{< ref "../../per_patch_testing.md" >}}).
## Generic Test Properties
@@ -126,4 +126,4 @@ properties:
- Offered packet load is always bi-directional and symmetric.
- All measured and reported packet and bandwidth rates are aggregate
bi-directional rates reported from external Traffic Generator
- perspective.
\ No newline at end of file
+ perspective.
diff --git a/docs/content/methodology/data_plane_throughput/mlrsearch.md b/docs/content/methodology/measurements/data_plane_throughput/mlr_search.md
index 73039c9b02..93bdb51efe 100644
--- a/docs/content/methodology/data_plane_throughput/mlrsearch.md
+++ b/docs/content/methodology/measurements/data_plane_throughput/mlr_search.md
@@ -1,9 +1,9 @@
---
-title: "MLRsearch"
+title: "MLR Search"
weight: 2
---
-# MLRsearch
+# MLR Search
## Overview
@@ -23,10 +23,10 @@ conducted at the specified final trial duration. This results in the
shorter overall execution time when compared to standard NDR/PDR binary
search, while guaranteeing similar results.
-.. Note:: All throughput rates are *always* bi-directional
- aggregates of two equal (symmetric) uni-directional packet rates
- received and reported by an external traffic generator,
- unless the test specifically requires unidirectional traffic.
+ Note: All throughput rates are *always* bi-directional aggregates of two
+ equal (symmetric) uni-directional packet rates received and reported by an
+ external traffic generator, unless the test specifically requires
+ unidirectional traffic.
## Search Implementation
diff --git a/docs/content/methodology/data_plane_throughput/mrr_throughput.md b/docs/content/methodology/measurements/data_plane_throughput/mrr.md
index 076946fb66..e8c3e62eb6 100644
--- a/docs/content/methodology/data_plane_throughput/mrr_throughput.md
+++ b/docs/content/methodology/measurements/data_plane_throughput/mrr.md
@@ -1,9 +1,9 @@
---
-title: "MRR Throughput"
+title: "MRR"
weight: 4
---
-# MRR Throughput
+# MRR
Maximum Receive Rate (MRR) tests are complementary to MLRsearch tests,
as they provide a maximum "raw" throughput benchmark for development and
@@ -29,7 +29,7 @@ capacity, as follows:
XXV710.
- For 40GE NICs the maximum packet rate load is 2x18.75 Mpps for 64B, a
40GE bi-directional link sub-rate limited by 40GE NIC used on TRex
- TG,XL710. Packet rate for other tested frame sizes is limited by
+ TG, XL710. Packet rate for other tested frame sizes is limited by
PCIeGen3 x8 bandwidth limitation of ~50Gbps.
MRR test code implements multiple bursts of offered packet load and has
@@ -53,4 +53,4 @@ Burst parameter settings vary between different tests using MRR:
- Daily performance trending: 10.
- Per-patch performance verification: 5.
- Initial iteration for MLRsearch: 1.
- - Initial iteration for PLRsearch: 1.
\ No newline at end of file
+ - Initial iteration for PLRsearch: 1.
diff --git a/docs/content/methodology/data_plane_throughput/plrsearch.md b/docs/content/methodology/measurements/data_plane_throughput/plr_search.md
index 1facccc63b..529bac1f7f 100644
--- a/docs/content/methodology/data_plane_throughput/plrsearch.md
+++ b/docs/content/methodology/measurements/data_plane_throughput/plr_search.md
@@ -1,17 +1,17 @@
---
-title: "PLRsearch"
+title: "PLR Search"
weight: 3
---
-# PLRsearch
+# PLR Search
## Motivation for PLRsearch
Network providers are interested in throughput a system can sustain.
-`RFC 2544`[^3] assumes loss ratio is given by a deterministic function of
+`RFC 2544`[^1] assumes loss ratio is given by a deterministic function of
offered load. But NFV software systems are not deterministic enough.
-This makes deterministic algorithms (such as `binary search`[^9] per RFC 2544
+This makes deterministic algorithms (such as `binary search`[^2] per RFC 2544
and MLRsearch with single trial) to return results,
which when repeated show relatively high standard deviation,
thus making it harder to tell what "the throughput" actually is.
@@ -21,8 +21,9 @@ We need another algorithm, which takes this indeterminism into account.
## Generic Algorithm
Detailed description of the PLRsearch algorithm is included in the IETF
-draft `draft-vpolak-bmwg-plrsearch-02`[^1] that is in the process
-of being standardized in the IETF Benchmarking Methodology Working Group (BMWG).
+draft `Probabilistic Loss Ratio Search for Packet Throughput`[^3] that is in the
+process of being standardized in the IETF Benchmarking Methodology Working Group
+(BMWG).
### Terms
@@ -372,12 +373,11 @@ or is it better to have short periods of medium losses
mixed with long periods of zero losses (as happens in Vhost test)
with the same overall loss ratio?
-[^1]: [draft-vpolak-bmwg-plrsearch-02](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-02)
-[^2]: [plrsearch draft](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-00)
-[^3]: [RFC 2544](https://tools.ietf.org/html/rfc2544)
+[^1]: [RFC 2544: Benchmarking Methodology for Network Interconnect Devices](https://tools.ietf.org/html/rfc2544)
+[^2]: [Binary search](https://en.wikipedia.org/wiki/Binary_search_algorithm)
+[^3]: [Probabilistic Loss Ratio Search for Packet Throughput](https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-02)
[^4]: [Lomax distribution](https://en.wikipedia.org/wiki/Lomax_distribution)
-[^5]: [reciprocal distribution](https://en.wikipedia.org/wiki/Reciprocal_distribution)
+[^5]: [Reciprocal distribution](https://en.wikipedia.org/wiki/Reciprocal_distribution)
[^6]: [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_integration)
-[^7]: [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling)
-[^8]: [bivariate Gaussian](https://en.wikipedia.org/wiki/Multivariate_normal_distribution)
-[^9]: [binary search](https://en.wikipedia.org/wiki/Binary_search_algorithm)
\ No newline at end of file
+[^7]: [Importance sampling](https://en.wikipedia.org/wiki/Importance_sampling)
+[^8]: [Bivariate Gaussian](https://en.wikipedia.org/wiki/Multivariate_normal_distribution)
diff --git a/docs/content/methodology/packet_latency.md b/docs/content/methodology/measurements/packet_latency.md
index fd7c0e00e8..f3606b5ffb 100644
--- a/docs/content/methodology/packet_latency.md
+++ b/docs/content/methodology/measurements/packet_latency.md
@@ -1,6 +1,6 @@
---
title: "Packet Latency"
-weight: 8
+weight: 2
---
# Packet Latency
@@ -16,6 +16,7 @@ Following methodology is used:
- Only NDRPDR test type measures latency and only after NDR and PDR
values are determined. Other test types do not involve latency
streams.
+
- Latency is measured at different background load packet rates:
- No-Load: latency streams only.
@@ -25,21 +26,27 @@ Following methodology is used:
- Latency is measured for all tested packet sizes except IMIX due to
TRex TG restriction.
+
- TG sends dedicated latency streams, one per direction, each at the
rate of 9 kpps at the prescribed packet size; these are sent in
addition to the main load streams.
+
- TG reports Min/Avg/Max and HDRH latency values distribution per stream
direction, hence two sets of latency values are reported per test case
(marked as E-W and W-E).
+
- +/- 1 usec is the measurement accuracy of TRex TG and the data in HDRH
latency values distribution is rounded to microseconds.
+
- TRex TG introduces a (background) always-on Tx + Rx latency bias of 4
usec on average per direction resulting from TRex software writing and
reading packet timestamps on CPU cores. Quoted values are based on TG
back-to-back latency measurements.
+
- Latency graphs are not smoothed, each latency value has its own
horizontal line across corresponding packet percentiles.
+
- Percentiles are shown on X-axis using a logarithmic scale, so the
maximal latency value (ending at 100% percentile) would be in
infinity. The graphs are cut at 99.9999% (hover information still
- lists 100%).
\ No newline at end of file
+ lists 100%).
diff --git a/docs/content/methodology/telemetry.md b/docs/content/methodology/measurements/telemetry.md
index e7a2571573..aed32d9e17 100644
--- a/docs/content/methodology/telemetry.md
+++ b/docs/content/methodology/measurements/telemetry.md
@@ -1,6 +1,6 @@
---
title: "Telemetry"
-weight: 20
+weight: 3
---
# Telemetry
@@ -50,14 +50,13 @@ them.
## MRR measurement
- traffic_start(r=mrr) traffic_stop |< measure >|
- | | | (r=mrr) |
- | pre_run_stat post_run_stat | pre_stat | | post_stat
- | | | | | | | |
- --o--------o---------------o---------o-------o--------+-------------------+------o------------>
- t
-
- Legend:
+ traffic_start(r=mrr) traffic_stop |< measure >|
+ | | | (r=mrr) |
+ | pre_run_stat post_run_stat | pre_stat | | post_stat
+ | | | | | | | |
+ o--------o---------------o-------o------o------+---------------+------o------>
+ t
+ Legend:
- pre_run_stat
- vpp-clear-runtime
- post_run_stat
@@ -72,27 +71,25 @@ them.
- vpp-show-packettrace // if extended_debug == True
- vpp-show-elog
-
- |< measure >|
- | (r=mrr) |
- | |
- |< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >|
- | (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) |
- | | | |
- --o------------------------o------------------------o------------------------o--->
+ |< measure >|
+ | (r=mrr) |
+ | |
+ |< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >|
+ | (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) |
+ | | | |
+ o-----------------------o------------------------o------------------------o--->
t
## MLR measurement
- |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >|
- | (r=mlr) | | | | | | .9/.5/.1/.0 |
- | | | pre_run_stat post_run_stat | | pre_run_stat post_run_stat | | |
- | | | | | | | | | | | |
- --+-------------------+----o--------o---------------o---------o--------------o--------o---------------o---------o------------[---------------------]--->
- t
-
- Legend:
+ |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >|
+ | (r=mlr) | | | | | | .9/.5/.1/.0 |
+ | | | pre_run_stat post_run_stat | | pre_run_stat post_run_stat | | |
+ | | | | | | | | | | | |
+ +-------------+---o-------o---------------o--------o-------------o-------o---------------o--------o------------[-------------------]--->
+ t
+ Legend:
- pre_run_stat
- vpp-clear-runtime
- post_run_stat
@@ -107,17 +104,15 @@ them.
- vpp-show-packettrace // if extended_debug == True
- vpp-show-elog
-
## MRR measurement
- traffic_start(r=mrr) traffic_stop |< measure >|
- | | | (r=mrr) |
- | |< stat_runtime >| | stat_pre_trial | | stat_post_trial
- | | | | | | | |
- ----o---+--------------------------+---o-------------o------------+-------------------+-----o------------->
- t
-
- Legend:
+ traffic_start(r=mrr) traffic_stop |< measure >|
+ | | | (r=mrr) |
+ | |< stat_runtime >| | stat_pre_trial | | stat_post_trial
+ | | | | | | | |
+ o---+------------------+---o------o------------+-------------+----o------------>
+ t
+ Legend:
- stat_runtime
- vpp-runtime
- stat_pre_trial
@@ -127,36 +122,32 @@ them.
- vpp-show-stats
- vpp-show-packettrace // if extended_debug == True
-
|< measure >|
| (r=mrr) |
| |
|< traffic_trial0 >|< traffic_trial1 >|< traffic_trialN >|
| (i=0,t=duration) | (i=1,t=duration) | (i=N,t=duration) |
| | | |
- --o------------------------o------------------------o------------------------o--->
- t
-
+ o------------------------o------------------------o------------------------o--->
+ t
|< stat_runtime >|
| |
|< program0 >|< program1 >|< programN >|
| (@=params) | (@=params) | (@=params) |
| | | |
- --o------------------------o------------------------o------------------------o--->
- t
-
+ o------------------------o------------------------o------------------------o--->
+ t
## MLR measurement
- |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >|
- | (r=mlr) | | | | | | .9/.5/.1/.0 |
- | | | |< stat_runtime >| | | |< stat_runtime >| | | |
- | | | | | | | | | | | |
- --+-------------------+-----o---+--------------------------+---o--------------o---+--------------------------+---o-----------[---------------------]--->
- t
-
- Legend:
+ |< measure >| traffic_start(r=pdr) traffic_stop traffic_start(r=ndr) traffic_stop |< [ latency ] >|
+ | (r=mlr) | | | | | | .9/.5/.1/.0 |
+ | | | |< stat_runtime >| | | |< stat_runtime >| | | |
+ | | | | | | | | | | | |
+ +-------------+---o---+------------------+---o--------------o---+------------------+---o-----------[-----------------]--->
+ t
+ Legend:
- stat_runtime
- vpp-runtime
- stat_pre_trial
diff --git a/docs/content/methodology/root_cause_analysis/_index.md b/docs/content/methodology/overview/_index.md
index 79cfe73769..10f362013f 100644
--- a/docs/content/methodology/root_cause_analysis/_index.md
+++ b/docs/content/methodology/overview/_index.md
@@ -1,6 +1,6 @@
---
bookCollapseSection: true
bookFlatSection: false
-title: "Root Cause Analysis"
-weight: 20
----
\ No newline at end of file
+title: "Overview"
+weight: 1
+---
diff --git a/docs/content/methodology/dut_state_considerations.md b/docs/content/methodology/overview/dut_state_considerations.md
index 55e408f5f2..eca10a22cd 100644
--- a/docs/content/methodology/dut_state_considerations.md
+++ b/docs/content/methodology/overview/dut_state_considerations.md
@@ -1,9 +1,9 @@
---
-title: "DUT state considerations"
-weight: 6
+title: "DUT State Considerations"
+weight: 5
---
-# DUT state considerations
+# DUT State Considerations
This page discusses considerations for Device Under Test (DUT) state.
DUTs such as VPP require configuration, to be provided before the aplication
diff --git a/docs/content/methodology/multi_core_speedup.md b/docs/content/methodology/overview/multi_core_speedup.md
index c0c9ae2570..f438e8e996 100644
--- a/docs/content/methodology/multi_core_speedup.md
+++ b/docs/content/methodology/overview/multi_core_speedup.md
@@ -1,6 +1,6 @@
---
title: "Multi-Core Speedup"
-weight: 13
+weight: 3
---
# Multi-Core Speedup
@@ -25,12 +25,12 @@ Cascadelake and Xeon Icelake testbeds.
Multi-core tests are executed in the following VPP worker thread and physical
core configurations:
-#. Intel Xeon Icelake and Cascadelake testbeds (2n-icx, 3n-icx, 2n-clx)
+1. Intel Xeon Icelake and Cascadelake testbeds (2n-icx, 3n-icx, 2n-clx)
with Intel HT enabled (2 logical CPU cores per each physical core):
- #. 2t1c - 2 VPP worker threads on 1 physical core.
- #. 4t2c - 4 VPP worker threads on 2 physical cores.
- #. 8t4c - 8 VPP worker threads on 4 physical cores.
+ 1. 2t1c - 2 VPP worker threads on 1 physical core.
+ 2. 4t2c - 4 VPP worker threads on 2 physical cores.
+ 3. 8t4c - 8 VPP worker threads on 4 physical cores.
VPP worker threads are the data plane threads running on isolated
logical cores. With Intel HT enabled VPP workers are placed as sibling
@@ -48,4 +48,4 @@ the same amount of packet flows.
If number of VPP workers is higher than number of physical or virtual
interfaces, multiple receive queues are configured on each interface.
NIC Receive Side Scaling (RSS) for physical interfaces and multi-queue
-for virtual interfaces are used for this purpose.
\ No newline at end of file
+for virtual interfaces are used for this purpose.
diff --git a/docs/content/methodology/per_thread_resources.md b/docs/content/methodology/overview/per_thread_resources.md
index cd862fa824..c23efb50bd 100644
--- a/docs/content/methodology/per_thread_resources.md
+++ b/docs/content/methodology/overview/per_thread_resources.md
@@ -5,8 +5,7 @@ weight: 2
# Per Thread Resources
-CSIT test framework is managing mapping of the following resources per
-thread:
+CSIT test framework is managing mapping of the following resources per thread:
1. Cores, physical cores (pcores) allocated as pairs of sibling logical cores
(lcores) if server in HyperThreading/SMT mode, or as single lcores
@@ -30,7 +29,7 @@ tested (VPP or DPDK apps) and associated thread types, as follows:
configurations, where{T} stands for a total number of threads
(lcores), and {C} for a total number of pcores. Tested
configurations are encoded in CSIT test case names,
- e.g. "1c", "2c", "4c", and test tags "2T1C"(or "1T1C"), "4T2C"
+ e.g. "1c", "2c", "4c", and test tags "2T1C" (or "1T1C"), "4T2C"
(or "2T2C"), "8T4C" (or "4T4C").
- Interface Receive Queues (RxQ): as of CSIT-2106 release, number of
RxQs used on each physical or virtual interface is equal to the
@@ -58,7 +57,7 @@ tested (VPP or DPDK apps) and associated thread types, as follows:
total number of lcores and pcores used for feature workers.
Accordingly, tested configurations are encoded in CSIT test case
names, e.g. "1c-1c", "1c-2c", "1c-3c", and test tags "2T1C_2T1C"
- (or "1T1C_1T1C"), "2T1C_4T2C"(or "1T1C_2T2C"), "2T1C_6T3C"
+ (or "1T1C_1T1C"), "2T1C_4T2C" (or "1T1C_2T2C"), "2T1C_6T3C"
(or "1T1C_3T3C").
- RxQ and TxQ: no RxQs and no TxQs are used by feature workers.
- Applies to VPP only.
@@ -67,8 +66,8 @@ tested (VPP or DPDK apps) and associated thread types, as follows:
- Cores: single lcore.
- RxQ: not used (VPP default behaviour).
- - TxQ: single TxQ per interface, allocated but not used
- (VPP default behaviour).
+ - TxQ: single TxQ per interface, allocated but not used (VPP default
+ behaviour).
- Applies to VPP only.
## VPP Thread Configuration
diff --git a/docs/content/methodology/terminology.md b/docs/content/methodology/overview/terminology.md
index 229db7d145..c9115e9291 100644
--- a/docs/content/methodology/terminology.md
+++ b/docs/content/methodology/overview/terminology.md
@@ -8,13 +8,17 @@ weight: 1
- **Frame size**: size of an Ethernet Layer-2 frame on the wire, including
any VLAN tags (dot1q, dot1ad) and Ethernet FCS, but excluding Ethernet
preamble and inter-frame gap. Measured in Bytes.
+
- **Packet size**: same as frame size, both terms used interchangeably.
+
- **Inner L2 size**: for tunneled L2 frames only, size of an encapsulated
Ethernet Layer-2 frame, preceded with tunnel header, and followed by
tunnel trailer. Measured in Bytes.
+
- **Inner IP size**: for tunneled IP packets only, size of an encapsulated
IPv4 or IPv6 packet, preceded with tunnel header, and followed by
tunnel trailer. Measured in Bytes.
+
- **Device Under Test (DUT)**: In software networking, "device" denotes a
specific piece of software tasked with packet processing. Such device
is surrounded with other software components (such as operating system
@@ -26,25 +30,30 @@ weight: 1
SUT instead of RFC2544 DUT. Device under test
(DUT) can be re-introduced when analyzing test results using whitebox
techniques, but this document sticks to blackbox testing.
+
- **System Under Test (SUT)**: System under test (SUT) is a part of the
whole test setup whose performance is to be benchmarked. The complete
methodology contains other parts, whose performance is either already
established, or not affecting the benchmarking result.
+
- **Bi-directional throughput tests**: involve packets/frames flowing in
both east-west and west-east directions over every tested interface of
SUT/DUT. Packet flow metrics are measured per direction, and can be
reported as aggregate for both directions (i.e. throughput) and/or
separately for each measured direction (i.e. latency). In most cases
bi-directional tests use the same (symmetric) load in both directions.
+
- **Uni-directional throughput tests**: involve packets/frames flowing in
only one direction, i.e. either east-west or west-east direction, over
every tested interface of SUT/DUT. Packet flow metrics are measured
and are reported for measured direction.
+
- **Packet Loss Ratio (PLR)**: ratio of packets received relative to packets
transmitted over the test trial duration, calculated using formula:
PLR = ( pkts_transmitted - pkts_received ) / pkts_transmitted.
For bi-directional throughput tests aggregate PLR is calculated based
on the aggregate number of packets transmitted and received.
+
- **Packet Throughput Rate**: maximum packet offered load DUT/SUT forwards
within the specified Packet Loss Ratio (PLR). In many cases the rate
depends on the frame size processed by DUT/SUT. Hence packet
@@ -53,30 +62,36 @@ weight: 1
throughput rate should be reported as aggregate for both directions.
Measured in packets-per-second (pps) or frames-per-second (fps),
equivalent metrics.
+
- **Bandwidth Throughput Rate**: a secondary metric calculated from packet
throughput rate using formula: bw_rate = pkt_rate * (frame_size +
L1_overhead) * 8, where L1_overhead for Ethernet includes preamble (8
Bytes) and inter-frame gap (12 Bytes). For bi-directional tests,
bandwidth throughput rate should be reported as aggregate for both
directions. Expressed in bits-per-second (bps).
+
- **Non Drop Rate (NDR)**: maximum packet/bandwith throughput rate sustained
by DUT/SUT at PLR equal zero (zero packet loss) specific to tested
frame size(s). MUST be quoted with specific packet size as received by
DUT/SUT during the measurement. Packet NDR measured in
packets-per-second (or fps), bandwidth NDR expressed in
bits-per-second (bps).
+
- **Partial Drop Rate (PDR)**: maximum packet/bandwith throughput rate
sustained by DUT/SUT at PLR greater than zero (non-zero packet loss)
specific to tested frame size(s). MUST be quoted with specific packet
size as received by DUT/SUT during the measurement. Packet PDR
measured in packets-per-second (or fps), bandwidth PDR expressed in
bits-per-second (bps).
+
- **Maximum Receive Rate (MRR)**: packet/bandwidth rate regardless of PLR
sustained by DUT/SUT under specified Maximum Transmit Rate (MTR)
packet load offered by traffic generator. MUST be quoted with both
specific packet size and MTR as received by DUT/SUT during the
measurement. Packet MRR measured in packets-per-second (or fps),
bandwidth MRR expressed in bits-per-second (bps).
+
- **Trial**: a single measurement step.
+
- **Trial duration**: amount of time over which packets are transmitted and
received in a single measurement step.
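The PLR and bandwidth-rate formulas in the terminology above can be sketched in Python (illustrative helper names, not part of CSIT code):

```python
def packet_loss_ratio(pkts_transmitted, pkts_received):
    # PLR = (pkts_transmitted - pkts_received) / pkts_transmitted
    return (pkts_transmitted - pkts_received) / pkts_transmitted

def bandwidth_rate_bps(pkt_rate_pps, frame_size, l1_overhead=20):
    # bw_rate = pkt_rate * (frame_size + L1_overhead) * 8; for Ethernet the
    # L1 overhead is preamble (8 B) + inter-frame gap (12 B) = 20 B.
    return pkt_rate_pps * (frame_size + l1_overhead) * 8

# 64B frames at the 10 GbE line rate of 14.88 Mpps:
print(bandwidth_rate_bps(14_880_952, 64))  # 9999999744, i.e. ~10 Gbps at L1
```

For bi-directional tests the same formulas apply to the aggregate packet counts and rates of both directions.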
diff --git a/docs/content/methodology/vpp_forwarding_modes.md b/docs/content/methodology/overview/vpp_forwarding_modes.md
index 1cc199c607..b3c3bba984 100644
--- a/docs/content/methodology/vpp_forwarding_modes.md
+++ b/docs/content/methodology/overview/vpp_forwarding_modes.md
@@ -1,13 +1,13 @@
---
title: "VPP Forwarding Modes"
-weight: 3
+weight: 4
---
# VPP Forwarding Modes
-VPP is tested in a number of L2, IPv4 and IPv6 packet lookup and
-forwarding modes. Within each mode baseline and scale tests are
-executed, the latter with varying number of FIB entries.
+VPP is tested in a number of L2, IPv4 and IPv6 packet lookup and forwarding
+modes. Within each mode baseline and scale tests are executed, the latter with
+varying number of FIB entries.
## L2 Ethernet Switching
diff --git a/docs/content/methodology/root_cause_analysis/perpatch_performance_tests.md b/docs/content/methodology/per_patch_testing.md
index 900ea0b874..a64a52caf6 100644
--- a/docs/content/methodology/root_cause_analysis/perpatch_performance_tests.md
+++ b/docs/content/methodology/per_patch_testing.md
@@ -1,9 +1,9 @@
---
-title: "Per-patch performance tests"
-weight: 1
+title: "Per-patch Testing"
+weight: 5
---
-# Per-patch performance tests
+# Per-patch Testing
Updated for CSIT git commit id: 72b45cfe662107c8e1bb549df71ba51352a898ee.
@@ -101,15 +101,15 @@ where the variables are all lower case (so AND operator stands out).
Currently only one test type is supported by the performance comparison jobs:
"mrr".
-The nic_driver options depend on nic_model. For Intel cards "drv_avf" (AVF plugin)
-and "drv_vfio_pci" (DPDK plugin) are popular, for Mellanox "drv_rdma_core".
-Currently, the performance using "drv_af_xdp" is not reliable enough, so do not use it
-unless you are specifically testing for AF_XDP.
+The nic_driver options depend on nic_model. For Intel cards "drv_avf"
+(AVF plugin) and "drv_vfio_pci" (DPDK plugin) are popular, for Mellanox
+"drv_rdma_core". Currently, the performance using "drv_af_xdp" is not reliable
+enough, so do not use it unless you are specifically testing for AF_XDP.
The most popular nic_model is "nic_intel-xxv710", but that is not available
on all testbed types.
-It is safe to use "1c" for cores (unless you are suspection multi-core performance
-is affected differently) and "64b" for frame size ("78b" for ip6
+It is safe to use "1c" for cores (unless you suspect multi-core
+performance is affected differently) and "64b" for frame size ("78b" for ip6
and more for dot1q and other encapsulated traffic;
"1518b" is popular for ipsec and other payload-bound tests).
@@ -121,9 +121,11 @@ for example
### Shortening triggers
-Advanced users may use the following tricks to avoid writing long trigger comments.
+Advanced users may use the following tricks to avoid writing long trigger
+comments.
-Robot supports glob matching, which can be used to select multiple suite tags at once.
+Robot supports glob matching, which can be used to select multiple suite tags at
+once.
Not specifying one of 6 parts of the recommended expression pattern
will select all available options. For example not specifying nic_driver
diff --git a/docs/content/methodology/suite_generation.md b/docs/content/methodology/suite_generation.md
deleted file mode 100644
index 4fa9dee0ce..0000000000
--- a/docs/content/methodology/suite_generation.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: "Suite Generation"
-weight: 19
----
-
-# Suite Generation
-
-CSIT uses robot suite files to define tests.
-However, not all suite files available for Jenkins jobs
-(or manually started bootstrap scripts) are present in CSIT git repository.
-They are generated only when needed.
-
-## Autogen Library
-
-There is a code generation layer implemented as Python library called "autogen",
-called by various bash scripts.
-
-It generates the full extent of CSIT suites, using the ones in git as templates.
-
-## Sources
-
-The generated suites (and their contents) are affected by multiple information
-sources, listed below.
-
-### Git Suites
-
-The suites present in git repository act as templates for generating suites.
-One of autogen design principles is that any template suite should also act
-as a full suite (no placeholders).
-
-In practice, autogen always re-creates the template suite with exactly
-the same content, it is one of checks that autogen works correctly.
-
-### Regenerate Script
-
-Not all suites present in CSIT git repository act as template for autogen.
-The distinction is on per-directory level. Directories with
-regenerate_testcases.py script usually consider all suites as templates
-(unless possibly not included by the glob patten in the script).
-
-The script also specifies minimal frame size, indirectly, by specifying protocol
-(protocol "ip4" is the default, leading to 64B frame size).
-
-### Constants
-
-Values in Constants.py are taken into consideration when generating suites.
-The values are mostly related to different NIC models and NIC drivers.
-
-### Python Code
-
-Python code in resources/libraries/python/autogen contains several other
-information sources.
-
-#### Testcase Templates
-
-The test case part of template suite is ignored, test case lines
-are created according to text templates in Testcase.py file.
-
-#### Testcase Argument Lists
-
-Each testcase template has different number of "arguments", e.g. values
-to put into various placeholders. Different test types need different
-lists of the argument values, the lists are in regenerate_glob method
-in Regenerator.py file.
-
-#### Iteration Over Values
-
-Python code detects the test type (usually by substrings of suite file name),
-then iterates over different quantities based on type.
-For example, only ndrpdr suite templates generate other types (mrr and soak).
-
-#### Hardcoded Exclusions
-
-Some combinations of values are known not to work, so they are excluded.
-Examples: Density tests for too much CPUs; IMIX for ASTF.
-
-## Non-Sources
-
-Some information sources are available in CSIT repository,
-but do not affect the suites generated by autogen.
-
-### Testbeds
-
-Overall, no information visible in topology yaml files is taken into account
-by autogen.
-
-#### Testbed Architecture
-
-Historically, suite files are agnostic to testbed architecture, e.g. ICX or ALT.
-
-#### Testbed Size
-
-Historically, 2-node and 3-node suites have diferent names, and while
-most of the code is common, the differences are not always simple enough.
-Autogen treat 2-node and 3-node suites as independent templates.
-
-TRex suites are intended for a 1-node circuit of otherwise 2-node or 3-node
-testbeds, so they support all 3 robot tags.
-They are also detected and treated differently by autogen,
-mainly because they need different testcase arguments (no CPU count).
-Autogen does nothing specifically related to the fact they should run
-only in testbeds/NICs with TG-TG line available.
-
-#### Other Topology Info
-
-Some bonding tests need two (parallel) links between DUTs.
-Autogen does not care, as suites are agnostic.
-Robot tag marks the difference, but the link presence is not explicitly checked.
-
-### Job specs
-
-Information in job spec files depend on generated suites (not the other way).
-Autogen should generate more suites, as job spec is limited by time budget.
-More suites should be available for manually triggered verify jobs,
-so autogen covers that.
-
-### Bootstrap Scripts
-
-Historically, bootstrap scripts perform some logic,
-perhaps adding exclusion options to Robot invocation
-(e.g. skipping testbed+NIC combinations for tests that need parallel links).
-
-Once again, the logic here relies on what autogen generates,
-autogen does not look into bootstrap scripts.
diff --git a/docs/content/methodology/trending_methodology/_index.md b/docs/content/methodology/test/_index.md
index 551d950cc7..857cc7b168 100644
--- a/docs/content/methodology/trending_methodology/_index.md
+++ b/docs/content/methodology/test/_index.md
@@ -1,6 +1,6 @@
---
bookCollapseSection: true
bookFlatSection: false
-title: "Trending Methodology"
-weight: 22
---- \ No newline at end of file
+title: "Test"
+weight: 3
+---
diff --git a/docs/content/methodology/access_control_lists.md b/docs/content/methodology/test/access_control_lists.md
index 9767d3f86a..354e6b72bb 100644
--- a/docs/content/methodology/access_control_lists.md
+++ b/docs/content/methodology/test/access_control_lists.md
@@ -1,6 +1,6 @@
---
title: "Access Control Lists"
-weight: 12
+weight: 5
---
# Access Control Lists
@@ -40,10 +40,8 @@ ACL tests are executed with the following combinations of ACL entries
and number of flows:
- ACL entry definitions
-
- flow non-matching deny entry: (src-ip4, dst-ip4, src-port, dst-port).
- flow matching permit ACL entry: (src-ip4, dst-ip4).
-
- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50].
- {F} - number of UDP flows with different tuple (src-ip4, dst-ip4,
src-port, dst-port), {F} = [100, 10k, 100k].
@@ -60,10 +58,8 @@ MAC-IP ACL tests are executed with the following combinations of ACL
entries and number of flows:
- ACL entry definitions
-
- flow non-matching deny entry: (dst-ip4, dst-mac, bit-mask)
- flow matching permit ACL entry: (dst-ip4, dst-mac, bit-mask)
-
- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50]
- {F} - number of UDP flows with different tuple (dst-ip4, dst-mac),
{F} = [100, 10k, 100k]
diff --git a/docs/content/methodology/generic_segmentation_offload.md b/docs/content/methodology/test/generic_segmentation_offload.md
index ddb19ba826..0032d203de 100644
--- a/docs/content/methodology/generic_segmentation_offload.md
+++ b/docs/content/methodology/test/generic_segmentation_offload.md
@@ -1,6 +1,6 @@
---
title: "Generic Segmentation Offload"
-weight: 15
+weight: 7
---
# Generic Segmentation Offload
@@ -22,12 +22,9 @@ performance comparison the same tests are run without GSO enabled.
Two VPP GSO test topologies are implemented:
1. iPerfC_GSOvirtio_LinuxVM --- GSOvhost_VPP_GSOvhost --- iPerfS_GSOvirtio_LinuxVM
-
- Tests VPP GSO on vhostuser interfaces and interaction with Linux
virtio with GSO enabled.
-
2. iPerfC_GSOtap_LinuxNspace --- GSOtapv2_VPP_GSOtapv2 --- iPerfS_GSOtap_LinuxNspace
-
- Tests VPP GSO on tapv2 interfaces and interaction with Linux tap
with GSO enabled.
@@ -60,9 +57,10 @@ separate namespace. Following core pinning scheme is used:
iPerf3 version used 3.7
$ sudo -E -S ip netns exec tap1_namespace iperf3 \
- --server --daemon --pidfile /tmp/iperf3_server.pid --logfile /tmp/iperf3.log --port 5201 --affinity <X>
+ --server --daemon --pidfile /tmp/iperf3_server.pid \
+ --logfile /tmp/iperf3.log --port 5201 --affinity <X>
-For the full iPerf3 reference please see:
+For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
@@ -71,9 +69,10 @@ For the full iPerf3 reference please see:
iPerf3 version used 3.7
$ sudo -E -S ip netns exec tap1_namespace iperf3 \
- --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> --time 30.0 --affinity <X> --zerocopy
+ --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
+ --time 30.0 --affinity <X> --zerocopy
-For the full iPerf3 reference please see:
+For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
@@ -99,9 +98,10 @@ scheme is used:
iPerf3 version used 3.7
$ sudo iperf3 \
- --server --daemon --pidfile /tmp/iperf3_server.pid --logfile /tmp/iperf3.log --port 5201 --affinity X
+ --server --daemon --pidfile /tmp/iperf3_server.pid \
+ --logfile /tmp/iperf3.log --port 5201 --affinity X
-For the full iPerf3 reference please see:
+For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
@@ -110,7 +110,8 @@ For the full iPerf3 reference please see:
iPerf3 version used 3.7
$ sudo iperf3 \
- --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> --time 30.0 --affinity X --zerocopy
+ --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
+ --time 30.0 --affinity X --zerocopy
-For the full iPerf3 reference please see:
-[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst). \ No newline at end of file
+For the full iPerf3 reference please see
+[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
diff --git a/docs/content/methodology/test/hoststack/_index.md b/docs/content/methodology/test/hoststack/_index.md
new file mode 100644
index 0000000000..2ae872c54e
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/_index.md
@@ -0,0 +1,6 @@
+---
+bookCollapseSection: true
+bookFlatSection: false
+title: "Hoststack"
+weight: 6
+---
diff --git a/docs/content/methodology/hoststack_testing/quicudpip_with_vppecho.md b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md
index c7d57a51b3..c7d57a51b3 100644
--- a/docs/content/methodology/hoststack_testing/quicudpip_with_vppecho.md
+++ b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md
diff --git a/docs/content/methodology/hoststack_testing/tcpip_with_iperf3.md b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md
index 7baa88ab50..7baa88ab50 100644
--- a/docs/content/methodology/hoststack_testing/tcpip_with_iperf3.md
+++ b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md
diff --git a/docs/content/methodology/hoststack_testing/udpip_with_iperf3.md b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md
index 01ddf61269..01ddf61269 100644
--- a/docs/content/methodology/hoststack_testing/udpip_with_iperf3.md
+++ b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md
diff --git a/docs/content/methodology/hoststack_testing/vsap_ab_with_nginx.md b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md
index 2dc4d2b7f9..2dc4d2b7f9 100644
--- a/docs/content/methodology/hoststack_testing/vsap_ab_with_nginx.md
+++ b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md
diff --git a/docs/content/methodology/internet_protocol_security_ipsec.md b/docs/content/methodology/test/internet_protocol_security.md
index 711004f2c0..1a02c43a0a 100644
--- a/docs/content/methodology/internet_protocol_security_ipsec.md
+++ b/docs/content/methodology/test/internet_protocol_security.md
@@ -1,17 +1,16 @@
---
-title: "Internet Protocol Security (IPsec)"
-weight: 11
+title: "Internet Protocol Security"
+weight: 4
---
-# Internet Protocol Security (IPsec)
+# Internet Protocol Security
-VPP IPsec performance tests are executed for the following crypto
-plugins:
+VPP Internet Protocol Security (IPsec) performance tests are executed for the
+following crypto plugins:
- `crypto_native`, used for software based crypto leveraging CPU
platform optimizations e.g. Intel's AES-NI instruction set.
-- `crypto_ipsecmb`, used for hardware based crypto with Intel QAT PCIe
- cards.
+- `crypto_ipsecmb`, used for hardware based crypto with Intel QAT PCIe cards.
## IPsec with VPP Native SW Crypto
diff --git a/docs/content/methodology/network_address_translation.md b/docs/content/methodology/test/network_address_translation.md
index ef341dc892..f443eabc5f 100644
--- a/docs/content/methodology/network_address_translation.md
+++ b/docs/content/methodology/test/network_address_translation.md
@@ -1,6 +1,6 @@
---
title: "Network Address Translation"
-weight: 7
+weight: 1
---
# Network Address Translation
diff --git a/docs/content/methodology/packet_flow_ordering.md b/docs/content/methodology/test/packet_flow_ordering.md
index d2b3bfb90c..c2c87038d4 100644
--- a/docs/content/methodology/packet_flow_ordering.md
+++ b/docs/content/methodology/test/packet_flow_ordering.md
@@ -1,6 +1,6 @@
---
title: "Packet Flow Ordering"
-weight: 9
+weight: 2
---
# Packet Flow Ordering
diff --git a/docs/content/methodology/reconfiguration_tests.md b/docs/content/methodology/test/reconfiguration.md
index 837535526d..6dec4d918b 100644
--- a/docs/content/methodology/reconfiguration_tests.md
+++ b/docs/content/methodology/test/reconfiguration.md
@@ -1,9 +1,9 @@
---
-title: "Reconfiguration Tests"
-weight: 16
+title: "Reconfiguration"
+weight: 8
---
-# Reconfiguration Tests
+# Reconfiguration
## Overview
diff --git a/docs/content/methodology/geneve.md b/docs/content/methodology/test/tunnel_encapsulations.md
index f4a0af92e7..c047c43dfa 100644
--- a/docs/content/methodology/geneve.md
+++ b/docs/content/methodology/test/tunnel_encapsulations.md
@@ -1,11 +1,48 @@
---
-title: "GENEVE"
-weight: 21
+title: "Tunnel Encapsulations"
+weight: 3
---
-# GENEVE
+# Tunnel Encapsulations
-## GENEVE Prefix Bindings
+Tunnel encapsulations testing is grouped based on the type of outer
+header: IPv4 or IPv6.
+
+## IPv4 Tunnels
+
+VPP is tested in the following IPv4 tunnel baseline configurations:
+
+- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain MAC
+ switching.
+- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect.
+- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
+- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.
+- *ip4gtpusw-ip4base*: GTPU over IPv4 tunnels with IPv4 routing.
+
+In all cases listed above a low number of MAC, IPv4 and IPv6 flows (253 or 254
+per direction) is switched or routed by VPP.
+
+In addition, selected IPv4 tunnels are tested at scale:
+
+- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2 bridge-
+ domain MAC switching, with scaled up dot1q VLANs (10, 100, 1k),
+ mapped to scaled up L2 bridge-domains (10, 100, 1k), that are in turn
+ mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are transmitted per
+ direction.
+
+## IPv6 Tunnels
+
+VPP is tested in the following IPv6 tunnel baseline configurations:
+
+- *ip6lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
+- *ip6lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.
+
+In all cases listed above a low number of IPv4 and IPv6 flows (253 or 254 per
+direction) is routed by VPP.
+
+## GENEVE
+
+### GENEVE Prefix Bindings
GENEVE prefix bindings should be representative of target applications, where
a packet flow from a particular set of IPv4 addresses (L3 underlay network) is
@@ -14,46 +51,30 @@ routed via dedicated GENEVE interface by building an L2 overlay.
Private address ranges to be used in tests:
- East hosts ip address range: 10.0.1.0 - 10.127.255.255 (10.0/9 prefix)
-
- Total of 2^23 - 256 (8 388 352) usable IPv4 addresses
- Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks)
-
- West hosts ip address range: 10.128.1.0 - 10.255.255.255 (10.128/9 prefix)
-
- Total of 2^23 - 256 (8 388 352) usable IPv4 addresses
- Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks)
-## GENEVE Tunnel Scale
+### GENEVE Tunnel Scale
If N is the number of GENEVE tunnels (and IPv4 underlay networks), then TG
sends 256 packet flows in each of N different sets:
- i = 1,2,3, ... N - GENEVE tunnel index
-
- East-West direction: GENEVE encapsulated packets
-
- Outer IP header:
-
- src ip: 1.1.1.1
-
- dst ip: 1.1.1.2
-
- GENEVE header:
-
- vni: i
-
- Inner IP header:
-
- src_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i/255)).(0-to-255)
-
- dst_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i/255)).(0-to-255)
-
- West-East direction: non-encapsulated packets
-
- IP header:
-
- src_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i/255)).(0-to-255)
-
- dst_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i/255)).(0-to-255)
**geneve-tunnels** | **total-flows**
@@ -63,4 +84,4 @@ If N is a number of GENEVE tunnels (and IPv4 underlay networks) then TG sends
16 | 4 096
64 | 16 384
256 | 65 536
- 1 024 | 262 144 \ No newline at end of file
+ 1 024 | 262 144
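The scaling rules above — 256 flows per tunnel, with inner address ranges derived from the tunnel index i — can be sketched in Python (illustrative helpers, not CSIT code):

```python
def geneve_inner_ranges(i):
    # Per the formulas above: src 10.(0 + rounddown(i/255)).(modulo(i/255)).x,
    # dst 10.(128 + rounddown(i/255)).(modulo(i/255)).x for tunnel index i.
    src = f"10.{0 + i // 255}.{i % 255}.0-255"
    dst = f"10.{128 + i // 255}.{i % 255}.0-255"
    return src, dst

def total_flows(n_tunnels):
    # TG sends 256 packet flows in each of the N tunnel sets.
    return 256 * n_tunnels

# Reproduces the table: 1 024 tunnels carry 262 144 flows.
print(total_flows(1024))       # 262144
print(geneve_inner_ranges(1))  # ('10.0.1.0-255', '10.128.1.0-255')
```

Each /9 host range supplies 2^23 - 256 = 8 388 352 usable addresses, enough for 32 767 tunnels of 256 addresses per direction.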
diff --git a/docs/content/methodology/vpp_device_functional.md b/docs/content/methodology/test/vpp_device.md
index 2bad5973b6..0a5ee90308 100644
--- a/docs/content/methodology/vpp_device_functional.md
+++ b/docs/content/methodology/test/vpp_device.md
@@ -1,9 +1,9 @@
---
-title: "VPP_Device Functional"
-weight: 18
+title: "VPP Device"
+weight: 9
---
-# VPP_Device Functional
+# VPP Device
Includes VPP_Device test environment for functional VPP
device tests integrated into LFN CI/CD infrastructure. VPP_Device tests
diff --git a/docs/content/methodology/trending_methodology/overview.md b/docs/content/methodology/trending/_index.md
index 90d8a2507c..4289e7ff96 100644
--- a/docs/content/methodology/trending_methodology/overview.md
+++ b/docs/content/methodology/trending/_index.md
@@ -1,9 +1,11 @@
---
-title: "Overview"
-weight: 1
+bookCollapseSection: true
+bookFlatSection: false
+title: "Trending"
+weight: 4
---
-# Overview
+# Trending
This document describes a high-level design of a system for continuous
performance measuring, trending and change detection for FD.io VPP SW
diff --git a/docs/content/methodology/trending_methodology/trend_analysis.md b/docs/content/methodology/trending/analysis.md
index 7f1870f577..fe952259ab 100644
--- a/docs/content/methodology/trending_methodology/trend_analysis.md
+++ b/docs/content/methodology/trending/analysis.md
@@ -1,6 +1,6 @@
---
-title: "Trending Analysis"
-weight: 2
+title: "Analysis"
+weight: 1
---
# Trend Analysis
@@ -220,5 +220,5 @@ It is good to exclude last week from the trend maximum,
as including the last week would hide all real progressions.
[^1]: [Minimum Description Length](https://en.wikipedia.org/wiki/Minimum_description_length)
-[^2]: [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor)
-[^3]: [bimodal distribution](https://en.wikipedia.org/wiki/Bimodal_distribution)
+[^2]: [Occam's Razor](https://en.wikipedia.org/wiki/Occam%27s_razor)
+[^3]: [Bimodal Distribution](https://en.wikipedia.org/wiki/Bimodal_distribution)
diff --git a/docs/content/methodology/trending_methodology/trend_presentation.md b/docs/content/methodology/trending/presentation.md
index 4c58589a0b..84925b46c8 100644
--- a/docs/content/methodology/trending_methodology/trend_presentation.md
+++ b/docs/content/methodology/trending/presentation.md
@@ -1,6 +1,6 @@
---
-title: "Trending Presentation"
-weight: 3
+title: "Presentation"
+weight: 2
---
# Trend Presentation
@@ -25,10 +25,8 @@ The graphs are constructed as follows:
- Y-axis represents run-average MRR value, NDR or PDR values in Mpps. For PDR
tests also a graph with average latency at 50% PDR [us] is generated.
- Markers to indicate anomaly classification:
-
- Regression - red circle.
- Progression - green circle.
-
- The line shows average MRR value of each group.
In addition the graphs show dynamic labels while hovering over graph data
diff --git a/docs/content/methodology/trex_traffic_generator.md b/docs/content/methodology/trex_traffic_generator.md
deleted file mode 100644
index 4f62d91c47..0000000000
--- a/docs/content/methodology/trex_traffic_generator.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-title: "TRex Traffic Generator"
-weight: 5
----
-
-# TRex Traffic Generator
-
-## Usage
-
-[TRex traffic generator](https://trex-tgn.cisco.com) is used for majority of
-CSIT performance tests. TRex is used in multiple types of performance tests,
-see [Data Plane Throughtput]({{< ref "data_plane_throughput/data_plane_throughput/#Data Plane Throughtput" >}})
-for more detail.
-
-## Traffic modes
-
-TRex is primarily used in two (mutually incompatible) modes.
-
-### Stateless mode
-
-Sometimes abbreviated as STL.
-A mode with high performance, which is unable to react to incoming traffic.
-We use this mode whenever it is possible.
-Typical test where this mode is not applicable is NAT44ED,
-as DUT does not assign deterministic outside address+port combinations,
-so we are unable to create traffic that does not lose packets
-in out2in direction.
-
-Measurement results are based on simple L2 counters
-(opackets, ipackets) for each traffic direction.
-
-### Stateful mode
-
-A mode capable of reacting to incoming traffic.
-Contrary to the stateless mode, only UDP and TCP is supported
-(carried over IPv4 or IPv6 packets).
-Performance is limited, as TRex needs to do more CPU processing.
-TRex suports two subtypes of stateful traffic,
-CSIT uses ASTF (Advanced STateFul mode).
-
-This mode is suitable for NAT44ED tests, as clients send packets from inside,
-and servers react to it, so they see the outside address and port to respond to.
-Also, they do not send traffic before NAT44ED has created the corresponding
-translation entry.
-
-When possible, L2 counters (opackets, ipackets) are used.
-Some tests need L7 counters, which track protocol state (e.g. TCP),
-but those values are less than reliable on high loads.
-
-## Traffic Continuity
-
-Generated traffic is either continuous, or limited (by number of transactions).
-Both modes support both continuities in principle.
-
-### Continuous traffic
-
-Traffic is started without any data size goal.
-Traffic is ended based on time duration, as hinted by search algorithm.
-This is useful when DUT behavior does not depend on the traffic duration.
-The default for stateless mode.
-
-### Limited traffic
-
-Traffic has defined data size goal (given as number of transactions),
-duration is computed based on this goal.
-Traffic is ended when the size goal is reached,
-or when the computed duration is reached.
-This is useful when DUT behavior depends on traffic size,
-e.g. target number of NAT translation entries, each to be hit exactly once
-per direction.
-This is used mainly for stateful mode.
-
-## Traffic synchronicity
-
-Traffic can be generated synchronously (test waits for duration)
-or asynchronously (test operates during traffic and stops traffic explicitly).
-
-### Synchronous traffic
-
-Trial measurement is driven by given (or precomputed) duration,
-no activity from test driver during the traffic.
-Used for most trials.
-
-### Asynchronous traffic
-
-Traffic is started, but then the test driver is free to perform
-other actions, before stopping the traffic explicitly.
-This is used mainly by reconf tests, but also by some trials
-used for runtime telemetry.
-
-## Trafic profiles
-
-TRex supports several ways to define the traffic.
-CSIT uses small Python modules based on Scapy as definitions.
-Details of traffic profiles depend on modes (STL or ASTF),
-but some are common for both modes.
-
-Search algorithms are intentionally unaware of the traffic mode used,
-so CSIT defines some terms to use instead of mode-specific TRex terms.
-
-### Transactions
-
-TRex traffic profile defines a small number of behaviors,
-in CSIT called transaction templates. Traffic profiles also instruct
-TRex how to create a large number of transactions based on the templates.
-
-Continuous traffic loops over the generated transactions.
-Limited traffic usually executes each transaction once
-(typically as constant number of loops over source addresses,
-each loop with different source ports).
-
-Currently, ASTF profiles define one transaction template each.
-Number of packets expected per one transaction varies based on profile details,
-as does the criterion for when a transaction is considered successful.
-
-Stateless transactions are just one packet (sent from one TG port,
-successful if received on the other TG port).
-Thus unidirectional stateless profiles define one transaction template,
-bidirectional stateless profiles define two transaction templates.
-
-### TPS multiplier
-
-TRex aims to open transaction specified by the profile at a steady rate.
-While TRex allows the transaction template to define its intended "cps" value,
-CSIT does not specify it, so the default value of 1 is applied,
-meaning TRex will open one transaction per second (and transaction template)
-by default. But CSIT invocation uses "multiplier" (mult) argument
-when starting the traffic, that multiplies the cps value,
-meaning it acts as TPS (transactions per second) input.
-
-With a slight abuse of nomenclature, bidirectional stateless tests
-set "packets per transaction" value to 2, just to keep the TPS semantics
-as a unidirectional input value.
-
-### Duration stretching
-
-TRex can be IO-bound, CPU-bound, or have any other reason
-why it is not able to generate the traffic at the requested TPS.
-Some conditions are detected, leading to TRex failure,
-for example when the bandwidth does not fit into the line capacity.
-But many reasons are not detected.
-
-Unfortunately, TRex frequently reacts by not honoring the duration
-in synchronous mode, taking longer to send the traffic,
-leading to lower then requested load offered to DUT.
-This usualy breaks assumptions used in search algorithms,
-so it has to be avoided.
-
-For stateless traffic, the behavior is quite deterministic,
-so the workaround is to apply a fictional TPS limit (max_rate)
-to search algorithms, usually depending only on the NIC used.
-
-For stateful traffic the behavior is not deterministic enough,
-for example the limit for TCP traffic depends on DUT packet loss.
-In CSIT we decided to use logic similar to asynchronous traffic.
-The traffic driver sleeps for a time, then stops the traffic explicitly.
-The library that parses counters into measurement results
-than usually treats unsent packets/transactions as lost/failed.
-
-We have added a IP4base tests for every NAT44ED test,
-so that users can compare results.
-If the results are very similar, it is probable TRex was the bottleneck.
-
-### Startup delay
-
-By investigating TRex behavior, it was found that TRex does not start
-the traffic in ASTF mode immediately. There is a delay of zero traffic,
-after which the traffic rate ramps up to the defined TPS value.
-
-It is possible to poll for counters during the traffic
-(fist nonzero means traffic has started),
-but that was found to influence the NDR results.
-
-Thus "sleep and stop" stategy is used, which needs a correction
-to the computed duration so traffic is stopped after the intended
-duration of real traffic. Luckily, it turns out this correction
-is not dependend on traffic profile nor CPU used by TRex,
-so a fixed constant (0.112 seconds) works well.
-Unfortunately, the constant may depend on TRex version,
-or execution environment (e.g. TRex in AWS).
-
-The result computations need a precise enough duration of the real traffic,
-luckily server side of TRex has precise enough counter for that.
-
-It is unknown whether stateless traffic profiles also exhibit a startup delay.
-Unfortunately, stateless mode does not have similarly precise duration counter,
-so some results (mostly MRR) are affected by less precise duration measurement
-in Python part of CSIT code.
-
-## Measuring Latency
-
-If measurement of latency is requested, two more packet streams are
-created (one for each direction) with TRex flow_stats parameter set to
-STLFlowLatencyStats. In that case, returned statistics will also include
-min/avg/max latency values and encoded HDRHistogram data. \ No newline at end of file
diff --git a/docs/content/methodology/tunnel_encapsulations.md b/docs/content/methodology/tunnel_encapsulations.md
deleted file mode 100644
index 52505b7efb..0000000000
--- a/docs/content/methodology/tunnel_encapsulations.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "Tunnel Encapsulations"
-weight: 10
----
-
-# Tunnel Encapsulations
-
-Tunnel encapsulations testing is grouped based on the type of outer
-header: IPv4 or IPv6.
-
-## IPv4 Tunnels
-
-VPP is tested in the following IPv4 tunnel baseline configurations:
-
-- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain MAC
- switching.
-- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect.
-- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
-- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.
-- *ip4gtpusw-ip4base*: GTPU over IPv4 tunnels with IPv4 routing.
-
-In all cases listed above, a low number of MAC, IPv4 or IPv6 flows
-(253 or 254 per direction) is switched or routed by VPP.
-
-In addition, selected IPv4 tunnels are tested at scale:
-
-- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2
-  bridge-domain MAC switching, with scaled-up dot1q VLANs (10, 100, 1k)
-  mapped to scaled-up L2 bridge-domains (10, 100, 1k), which are in turn
-  mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are transmitted per
-  direction.
-
-## IPv6 Tunnels
-
-VPP is tested in the following IPv6 tunnel baseline configurations:
-
-- *ip6lispip4-ip4base*: LISP over IPv6 tunnels with IPv4 routing.
-- *ip6lispip6-ip6base*: LISP over IPv6 tunnels with IPv6 routing.
-
-In all cases listed above, a low number of IPv4 or IPv6 flows
-(253 or 254 per direction) is routed by VPP.
diff --git a/docs/content/methodology/vpp_startup_settings.md b/docs/content/methodology/vpp_startup_settings.md
deleted file mode 100644
index 6e40091a6c..0000000000
--- a/docs/content/methodology/vpp_startup_settings.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: "VPP Startup Settings"
-weight: 17
----
-
-# VPP Startup Settings
-
-CSIT code manipulates a number of VPP settings in startup.conf for
-optimized performance. A list of common settings applied to all tests,
-and of test-dependent settings, follows.
-
-## Common Settings
-
-List of VPP startup.conf settings applied to all tests:
-
-1. heap-size <value> - set separately for ip4, ip6, stats and main
-   heaps, depending on the scale tested.
-2. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in
-   DPDK. Typically needed to use faster vector PMDs (together with
-   no-multi-seg).
-3. buffers-per-numa <value> - sets the number of memory buffers allocated
-   to VPP per CPU socket. The VPP default is 16384. It needs to be
-   increased for scenarios with a large number of interfaces and worker
-   threads. To accommodate scale tests, CSIT sets it to the maximum value
-   allowed by the limit of DPDK memory mappings (currently 256). For Xeon
-   Skylake platforms configured with 2MB hugepages and VPP data-size and
-   buffer-size defaults (2048B and 2496B respectively), this results in a
-   value of 215040 (256 * 840 = 215040, as 840 buffers of 2496B fit in
-   one 2MB hugepage).
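The buffers-per-numa arithmetic above can be reproduced directly; the values (2 MB hugepages, 2496 B buffer size, 256 DPDK memory mappings) come from the text.

```python
# Reproducing the buffers-per-numa calculation described above.
HUGEPAGE_B = 2 * 1024 * 1024   # 2MB hugepage size
BUFFER_B = 2496                # VPP buffer-size default
MAX_MAPPINGS = 256             # DPDK memory-mapping limit

# Whole buffers fitting in one hugepage (integer division): 840.
buffers_per_hugepage = HUGEPAGE_B // BUFFER_B

# Maximum buffers-per-numa CSIT configures: 256 * 840 = 215040.
buffers_per_numa = MAX_MAPPINGS * buffers_per_hugepage
print(buffers_per_hugepage, buffers_per_numa)
```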
-
-## Per Test Settings
-
-List of VPP startup.conf settings applied dynamically per test:
-
-1. corelist-workers <list_of_cores> - list of logical cores to run VPP
-   worker data plane threads on. Depends on HyperThreading and the
-   per-test core configuration.
-2. num-rx-queues <value> - depends on the number of VPP threads and NIC
-   interfaces.
-3. no-multi-seg - disables multi-segment buffers in DPDK; improves
-   packet throughput, but disables Jumbo MTU support. Disabled for all
-   tests apart from those that require Jumbo 9000B frame support.
-4. UIO driver - depends on the topology file definition.
-5. QAT VFs - depends on NRThreads; each thread uses 1 QAT VF.