Diffstat (limited to 'docs/content')
-rw-r--r--  docs/content/overview/c_dash/design.md    | 10
-rw-r--r--  docs/content/overview/c_dash/releases.md  |  2
-rw-r--r--  docs/content/overview/c_dash/structure.md | 91
3 files changed, 103 insertions(+), 0 deletions(-)
diff --git a/docs/content/overview/c_dash/design.md b/docs/content/overview/c_dash/design.md
index ef8c62ab88..21f6b2582a 100644
--- a/docs/content/overview/c_dash/design.md
+++ b/docs/content/overview/c_dash/design.md
@@ -4,3 +4,13 @@ weight: 1
---
# Design
+
+This page describes the path of the data from a test to the graph (or table).
+
+## Tests
+
+## ETL Pipeline
+
+![CSIT ETL pipeline for UTI data](../../../static/csit_etl_for_uti_data_flow_simplified.svg)
+
+## Presentation
diff --git a/docs/content/overview/c_dash/releases.md b/docs/content/overview/c_dash/releases.md
index 1e51c2978a..3cf40ca1e9 100644
--- a/docs/content/overview/c_dash/releases.md
+++ b/docs/content/overview/c_dash/releases.md
@@ -6,3 +6,5 @@ weight: 3
# Releases
## C-Dash v1
+
+Initial release.
diff --git a/docs/content/overview/c_dash/structure.md b/docs/content/overview/c_dash/structure.md
index ba427f1ee3..1696d3429f 100644
--- a/docs/content/overview/c_dash/structure.md
+++ b/docs/content/overview/c_dash/structure.md
@@ -5,16 +5,107 @@ weight: 2
# Structure
+Our presentation layer, named “CSIT-Dash”, provides customizable views on
+performance data. It can be split into two groups. The first one is performance
+trending, which displays data collected on a daily basis. The other one
+presents data coming from release testing. In addition, we also publish
+information and statistics about our test jobs, failures and anomalies, and the
+CSIT documentation.
+
+The screen of CSIT-Dash is divided into two parts. On the left side, there is
+the control panel, which makes it possible to select the required information.
+The right side then displays the user-selected data in graphs or tables.
+
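+To make the layout concrete, below is a minimal, hypothetical sketch of this
+two-part screen, assuming a Plotly Dash application; the element names and
+selection options are illustrative, not the actual CSIT-Dash code.
+
+```python
+# Minimal sketch of the two-part screen: control panel on the left,
+# plotting area on the right. All names and options are hypothetical.
+from dash import Dash, dcc, html
+
+app = Dash(__name__)
+app.layout = html.Div(
+    style={"display": "flex"},
+    children=[
+        # Left side: control panel used to select the required information.
+        html.Div(
+            style={"width": "30%"},
+            children=[
+                dcc.Dropdown(
+                    id="test-type",
+                    options=["MRR", "NDR", "PDR"],  # illustrative choices
+                    value="MRR",
+                ),
+            ],
+        ),
+        # Right side: the user-selected data rendered as a graph (or table).
+        html.Div(
+            style={"width": "70%"},
+            children=[dcc.Graph(id="selected-data")],
+        ),
+    ],
+)
+
+if __name__ == "__main__":
+    app.run(debug=True)
+```
+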
+The structure of CSIT-Dash consists of:
+
+- Performance Trending
+- Per Release Performance
+- Per Release Performance Comparisons
+- Per Release Coverage Data
+- Test Job Statistics
+- Failures and Anomalies
+- Documentation
+
## Performance Trending
+Performance trending shows measured per-run averages of MRR values, NDR or PDR
+values, user-selected telemetry metrics, group average values, and detected
+anomalies.
+
+In addition, the graphs show dynamic labels while hovering over graph data
+points. By clicking on data samples, the user gets detailed information and,
+for latency graphs, also a high dynamic range histogram of the measured
+latency.
+
+Latency by percentile distribution plots are used to show packet latency
+percentiles at different packet rate load levels (see the worked example after
+the list):
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
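+
+As a worked example of these load levels, assume a hypothetical PDR of
+10 Mpps; the offered loads at which latency is measured would then be:
+
+```python
+# Hypothetical example only: offered loads derived from an assumed PDR of
+# 10 Mpps. The percentages follow the list above; the PDR value is made up.
+pdr_mpps = 10.0
+loads = {
+    "Low-Load (10% PDR)": 0.10 * pdr_mpps,   # 1.0 Mpps
+    "Mid-Load (50% PDR)": 0.50 * pdr_mpps,   # 5.0 Mpps
+    "High-Load (90% PDR)": 0.90 * pdr_mpps,  # 9.0 Mpps
+}
+for name, mpps in loads.items():
+    print(f"{name}: {mpps:.1f} Mpps")
+```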
+
## Per Release Performance
+The per release performance section presents graphs based on the results
+obtained from the release test jobs. To verify benchmark result repeatability,
+CSIT performance tests are executed multiple times (target: 10 times) on each
+physical testbed type. Box-and-whisker plots are used to display variations in
+the measured throughput and latency (PDR tests only) values.
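+
+As an illustration of how such run-to-run variation can be displayed, here is
+a minimal box-plot sketch using Plotly; the test name and the per-run values
+are made up, not actual CSIT results.
+
+```python
+import plotly.graph_objects as go
+
+# Hypothetical throughput results [Mpps] from ten repeated runs of one test.
+runs_mpps = [18.9, 19.1, 18.7, 19.0, 19.2, 18.8, 19.0, 18.9, 19.1, 18.6]
+
+fig = go.Figure(
+    go.Box(y=runs_mpps, name="64b-2t1c-ethip4-ip4base", boxpoints="all")
+)
+fig.update_layout(yaxis_title="Throughput [Mpps]")
+fig.show()
+```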
+
+In addition, the graphs show dynamic labels while hovering over graph data
+points. By clicking on data samples or the box, the user gets detailed
+information and, for latency graphs, also a high dynamic range histogram of
+the measured latency.
+
+Latency by percentile distribution plots are used to show packet latency
+percentiles at different packet rate load levels:
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
+
## Per Release Performance Comparisons
+A relative comparison of packet throughput (NDR, PDR and MRR) and latency (PDR)
+between user-selected releases, test beds, NICs, ... is calculated from the
+results of tests run on physical test beds, in 1-core, 2-core and 4-core
+configurations.
+
+The listed mean and standard deviation values are computed from a series of the
+same tests executed against the respective VPP releases to verify test result
+repeatability, with the percentage change calculated for the mean values. Note
+that the standard deviation is quite high for a small number of packet
+throughput tests, which indicates poor test result repeatability and makes the
+relative change of the mean throughput value not fully representative for these
+tests. The root causes behind poor result repeatability vary between the test
+cases.
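+
+The statistics behind these tables are straightforward; a short sketch with
+made-up per-run throughput values (not actual CSIT results) is shown below.
+
+```python
+from statistics import mean, stdev
+
+# Hypothetical per-run throughput [Mpps] of the same test in two releases.
+release_a = [18.9, 19.1, 18.7, 19.0, 19.2, 18.8, 19.0, 18.9, 19.1, 18.6]
+release_b = [19.6, 19.8, 19.5, 19.9, 19.7, 19.6, 19.8, 19.7, 19.9, 19.5]
+
+mean_a, mean_b = mean(release_a), mean(release_b)
+stdev_a, stdev_b = stdev(release_a), stdev(release_b)
+
+# Percentage change of the mean throughput between the two releases.
+change_pct = (mean_b - mean_a) / mean_a * 100.0
+
+print(f"release A: mean {mean_a:.2f} Mpps, stdev {stdev_a:.2f} Mpps")
+print(f"release B: mean {mean_b:.2f} Mpps, stdev {stdev_b:.2f} Mpps")
+print(f"relative change of the mean: {change_pct:+.1f} %")
+```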
+
## Per Release Coverage Data
+Detailed result tables are generated from CSIT test job executions. The
+coverage tests also include tests which are not run in iterative performance
+builds.
+
+The tables present NDR and PDR packet throughput (packets per second and bits
+per second) and latency percentiles (microseconds) at different packet rate
+load levels (see the conversion sketch after the list):
+- No-Load, latency streams only,
+- Low-Load at 10% PDR,
+- Mid-Load at 50% PDR and
+- High-Load at 90% PDR.
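+
+The conversion between the two throughput units is plain arithmetic. The
+sketch below assumes the bit rate covers the full L1 frame, i.e. the Ethernet
+frame plus 20 B of preamble and inter-packet gap per packet; it illustrates
+the arithmetic and is not necessarily the exact convention of the CSIT tables.
+
+```python
+# Hypothetical conversion from a packet rate to an L1 bit rate.
+L1_OVERHEAD_BYTES = 20  # preamble (8 B) + inter-packet gap (12 B)
+
+def pps_to_bps(pps: float, frame_size_bytes: int) -> float:
+    """Return the L1 bit rate for a given packet rate and frame size."""
+    return pps * (frame_size_bytes + L1_OVERHEAD_BYTES) * 8
+
+# Example: 10 Mpps of 64 B frames.
+print(pps_to_bps(10e6, 64) / 1e9, "Gbps")  # 6.72 Gbps
+```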
+
## Test Job Statistics
+This section presents the elementary statistical data (the number of passed and
+failed tests and the duration) of all daily and weekly trending performance
+jobs.
+
+In addition, the graphs show dynamic labels while hovering over graph data
+points with detailed information. By clicking on the graph, the user gets the
+job summary with the list of failed tests.
+
## Failures and Anomalies
+For all daily and weekly trending performance jobs, the presented tables list:
+- the last build summary,
+- failed tests,
+- progressions and
+- regressions.
+
## Documentation
+
+This documentation describes the methodology, the infrastructure and the
+release notes for each CSIT release.