author    | Tibor Frank <tifrank@cisco.com> | 2022-08-02 10:53:28 +0200
committer | Tibor Frank <tifrank@cisco.com> | 2022-08-02 12:21:51 +0200
commit    | 441d07d2b1be0bf7b9f6fbd917fdd89aeb4fb253 (patch)
tree      | 847b4aaf5d723b5586924c8e3025cd6deceb1081 /docs/cpta/methodology/trend_presentation.rst
parent    | d448485d152d3273fe4ef4db28566d99e23efd30 (diff)
Trending: Prepare static content for the new UTI
- Chapter "jenkins jobs": deleted, the necessary information is in
  "Performance Tests".
- Chapter "perpatch_performance_tests": deleted, this information belongs
  elsewhere.
- Chapter "testbed_hw_configuration": deleted, it had contained no information for years.
- Chapter "performance_tests": deleted.
- Other chapters: Edited, updated, simplified.
Change-Id: I04dffffea1200a9bf792458022ce868021c94745
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Diffstat (limited to 'docs/cpta/methodology/trend_presentation.rst')
-rw-r--r-- | docs/cpta/methodology/trend_presentation.rst | 41 |
1 file changed, 17 insertions(+), 24 deletions(-)
diff --git a/docs/cpta/methodology/trend_presentation.rst b/docs/cpta/methodology/trend_presentation.rst
index e9918020c5..67d0d3c45a 100644
--- a/docs/cpta/methodology/trend_presentation.rst
+++ b/docs/cpta/methodology/trend_presentation.rst
@@ -1,35 +1,28 @@
 Trend Presentation
-------------------
-
-Performance Dashboard
-`````````````````````
-
-Dashboard tables list a summary of per test-case VPP MRR performance
-trend and trend compliance metrics and detected number of anomalies.
-
-Separate tables are generated for each testbed and each tested number of
-physical cores for VPP workers (1c, 2c, 4c). Test case names are linked to
-respective trending graphs for ease of navigation through the test data.
+^^^^^^^^^^^^^^^^^^
 
 Failed tests
-````````````
+~~~~~~~~~~~~
+
+The Failed tests tables list the tests which failed during the last test run.
+Separate tables are generated for each testbed.
 
-The Failed tests tables list the tests which failed over the specified seven-
-day period together with the number of fails over the period and last failure
-details - Time, VPP-Build-Id and CSIT-Job-Build-Id.
+Regressions and progressions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Separate tables are generated for each testbed. Test case names are linked to
-respective trending graphs for ease of navigation through the test data.
+These tables list tests which encountered a regression or progression during the
+specified time period, which is currently set to the last 21 days.
 
 Trendline Graphs
-````````````````
+~~~~~~~~~~~~~~~~
 
-Trendline graphs show measured per run averages of MRR values,
-group average values, and detected anomalies.
+Trendline graphs show measured per run averages of MRR values, NDR or PDR
+values, group average values, and detected anomalies.
 
 The graphs are constructed as follows:
 
 - X-axis represents the date in the format MMDD.
-- Y-axis represents run-average MRR value in Mpps.
+- Y-axis represents run-average MRR value, NDR or PDR values in Mpps. For PDR
+  tests also a graph with average latency at 50% PDR [us] is generated.
 - Markers to indicate anomaly classification:
   - Regression - red circle.
@@ -37,6 +30,6 @@ The graphs are constructed as follows:
 
 - The line shows average MRR value of each group.
 
-In addition the graphs show dynamic labels while hovering over graph
-data points, presenting the CSIT build date, measured MRR value, VPP
-reference, trend job build ID and the LF testbed ID.
+In addition the graphs show dynamic labels while hovering over graph data
+points, presenting the CSIT build date, measured value, VPP reference, trend job
+build ID and the LF testbed ID.
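The anomaly markers described in the diff (regression vs progression relative to the group average) can be illustrated with a minimal sketch. Note this is not the actual CSIT anomaly-detection algorithm, which is based on statistical group-change detection; the function name and the 10% threshold here are illustrative assumptions only.

```python
# Illustrative sketch: mark a run as a regression/progression when its measured
# per-run average falls below / rises above the group average by more than a
# threshold. NOT the real CSIT detector; threshold and names are assumptions.

def classify_runs(run_averages, group_average, threshold=0.10):
    """Classify each per-run average [Mpps] against the group average."""
    labels = []
    for value in run_averages:
        if value < group_average * (1.0 - threshold):
            labels.append("regression")   # drawn as a red circle in the graph
        elif value > group_average * (1.0 + threshold):
            labels.append("progression")  # drawn with a distinct marker
        else:
            labels.append("normal")       # plain data point on the trendline
    return labels

if __name__ == "__main__":
    runs = [9.8, 10.1, 10.0, 8.5, 10.2, 11.5]  # per-run MRR averages [Mpps]
    print(classify_runs(runs, group_average=10.0))
```

With a group average of 10.0 Mpps, the 8.5 Mpps run is flagged as a regression and the 11.5 Mpps run as a progression; the rest stay on the trendline as normal points.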