path: root/docs/new/cpta/introduction
author     Vratko Polak <vrpolak@cisco.com>  2018-06-14 14:03:44 +0200
committer  Tibor Frank <tifrank@cisco.com>   2018-06-20 06:43:22 +0000
commit     f2430562835ded8aeb66db2c36379cf3ea54c748 (patch)
tree       0100045839806e1308c76ed7e89943fa96e616fb /docs/new/cpta/introduction
parent     47b807e7268231b35982dc3b5a0c3108537d6432 (diff)
CSIT-1110: Improve new detection methodology doc
Change-Id: I068fd4e9418f232ee1e1f13994e9c5c431478ec8
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Diffstat (limited to 'docs/new/cpta/introduction')
-rw-r--r--  docs/new/cpta/introduction/index.rst  | 14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/new/cpta/introduction/index.rst b/docs/new/cpta/introduction/index.rst
index 991181aff4..229e9e3da9 100644
--- a/docs/new/cpta/introduction/index.rst
+++ b/docs/new/cpta/introduction/index.rst
@@ -8,17 +8,18 @@ Performance dashboard tables provide the latest VPP throughput trend,
trend compliance and detected anomalies, all on a per VPP test case
basis. Linked trendline graphs enable further drill-down into the
trendline compliance, sequence and nature of anomalies, as well as
-pointers to performance test builds/logs and VPP builds. Performance
-trending is currently based on the Maximum Receive Rate (MRR) tests. MRR
-tests measure the packet forwarding rate under the maximum load offered
+pointers to performance test builds/logs and VPP (or DPDK) builds.
+Performance trending is currently based on the Maximum Receive Rate (MRR) tests.
+MRR tests measure the packet forwarding rate under the maximum load offered
by traffic generator over a set trial duration, regardless of packet
loss. See :ref:`trending_methodology` section for more detail including
trend and anomaly calculations.
-Data samples are generated by the CSIT VPP performance trending jobs
+Data samples are generated by the CSIT VPP (and DPDK) performance trending jobs
executed twice a day (target start: every 12 hrs, 02:00, 14:00 UTC). All
-trend and anomaly evaluation is based on a rolling window of <N=14> data
-samples, covering last 7 days.
+trend and anomaly evaluation is based on an algorithm which divides test runs
+into groups according to minimum description length principle.
+The trend value is the population average of the results within a group.
Failed tests
------------
@@ -53,7 +54,6 @@ Legend to the tables:
maximum of trend values over the last quarter except last week.
- **Regressions [#]**: Number of regressions detected.
- **Progressions [#]**: Number of progressions detected.
- - **Outliers [#]**: Number of outliers detected.
Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
in separate tables in section 1.x. Followed by trending methodology in
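The paragraph added by this patch states that test runs are partitioned into groups according to the minimum description length principle, and that the trend value is the population average of the results within a group. The sketch below (Python) illustrates only the averaging step under the assumption that group boundaries are already known; the trend_values helper, the sample numbers, and the boundaries are hypothetical, and the MDL-based partitioning itself is not reproduced here.

    # Illustrative sketch only, not the CSIT implementation: it assumes the
    # group boundaries are already known (in CSIT they come from a minimum
    # description length based partitioning of the run history).
    from statistics import mean

    def trend_values(samples, boundaries):
        """Return one trend value (population average) per group.

        samples    -- MRR results ordered by run time
        boundaries -- indices where a new group starts, e.g. [0, 5]
        """
        ends = boundaries[1:] + [len(samples)]
        groups = [samples[start:end] for start, end in zip(boundaries, ends)]
        return [mean(group) for group in groups]

    # Example: a progression after the fifth run yields a higher trend value
    # for the second group than for the first.
    print(trend_values([9.9, 10.1, 10.0, 9.8, 10.2, 12.1, 11.9, 12.0], [0, 5]))
    # -> [10.0, 12.0]

In this toy example the jump between the two group averages is what the dashboard would report as a progression; a drop would be reported as a regression.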