Diffstat (limited to 'docs/cpta/introduction/introduction.rst')
 docs/cpta/introduction/introduction.rst | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
diff --git a/docs/cpta/introduction/introduction.rst b/docs/cpta/introduction/introduction.rst
new file mode 100644
index 0000000000..e095d8f18b
--- /dev/null
+++ b/docs/cpta/introduction/introduction.rst
@@ -0,0 +1,24 @@
+Description
+===========
+
+Performance dashboard tables provide the latest VPP throughput trend,
+trend compliance and detected anomalies, all on a per VPP test case
+basis. Linked trendline graphs enable further drill-down into
+trendline compliance, the sequence and nature of anomalies, and
+pointers to performance test builds/logs and VPP (or DPDK) builds.
+Performance trending is currently based on Maximum Receive Rate (MRR)
+tests. MRR tests measure the packet forwarding rate under the maximum
+load offered by the traffic generator over a set trial duration,
+regardless of packet loss. See the :ref:`trending_methodology` section
+for more detail, including trend and anomaly calculations.
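+
+As a rough illustration only (a minimal sketch, not the CSIT
+implementation; the function name and figures below are hypothetical),
+a single MRR trial reduces to dividing the receive count by the trial
+duration:
+
+.. code-block:: python
+
+    def mrr_trial(rx_packets, trial_duration):
+        """Return the Maximum Receive Rate for one trial, in packets
+        per second, counting all received packets regardless of loss."""
+        return rx_packets / trial_duration
+
+    # E.g. 27.2 million packets received over a 10 s trial -> 2.72 Mpps.
+    print(mrr_trial(27.2e6, 10.0))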
+
+Data samples are generated by the CSIT VPP (and DPDK) performance
+trending jobs, executed twice a day (target start: every 12 hrs, at
+02:00 and 14:00 UTC). All trend and anomaly evaluation is based on an
+algorithm that divides test runs into groups according to the minimum
+description length (MDL) principle. The trend value is the population
+average of the results within a group.
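+
+A minimal sketch of the trend calculation, assuming the MDL grouping
+has already been computed (the ``groups`` input below is a hypothetical
+stand-in for that algorithm's output):
+
+.. code-block:: python
+
+    def trend_values(groups):
+        """Given test runs partitioned into groups (lists of MRR
+        results in packets per second), return each group's population
+        average, i.e. its trend value."""
+        return [sum(group) / len(group) for group in groups]
+
+    # Two groups separated by a detected progression anomaly.
+    groups = [[2.6e6, 2.7e6, 2.65e6], [3.1e6, 3.05e6]]
+    print(trend_values(groups))  # [2650000.0, 3075000.0]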
+
+Tested VPP worker-thread-core combinations (1t1c, 2t1c, 2t2c, 4t2c,
+4t4c, 8t4c) are listed in separate tables in sections 1.x, followed by
+trending methodology in section 2 and trending graphs in sections 3.x.
+Performance test data used for the trending graphs is provided in
+sections 4.x.