author     Tibor Frank <tifrank@cisco.com>    2018-07-30 09:49:06 +0200
committer  Tibor Frank <tifrank@cisco.com>    2018-07-30 13:03:18 +0000
commit     ce1088d88744f2c040801c9852565d522b3feb68 (patch)
tree       3683c1e740da518fc80d94e8b69e4b2cbad841c1 /docs/report/dpdk_performance_tests
parent     84f46d579b405873ce9746a0bae2e8fdc38bd415 (diff)
CSIT-1197: Add Comparison Across Testbeds to the Report
Change-Id: Ibf0c880926b87e781830d7b39438b5145144ef5b
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Diffstat (limited to 'docs/report/dpdk_performance_tests')
4 files changed, 109 insertions, 7 deletions
diff --git a/docs/report/dpdk_performance_tests/csit_release_notes.rst b/docs/report/dpdk_performance_tests/csit_release_notes.rst
index 214eb4c4e7..363c2e70f1 100644
--- a/docs/report/dpdk_performance_tests/csit_release_notes.rst
+++ b/docs/report/dpdk_performance_tests/csit_release_notes.rst
@@ -6,6 +6,100 @@ Changes in |csit-release|
 
 No code changes apart from bug fixes.
 
+Performance Changes
+-------------------
+
+Relative performance changes in measured packet throughput in |csit-release|
+are calculated against the results from the |csit-release-1|
+report. Listed mean and standard deviation values are computed based on
+a series of the same tests executed against respective VPP releases to
+verify test results repeatability, with percentage change calculated for
+mean values. Note that the standard deviation is quite high for a small
+number of packet throughput tests, which indicates poor test results
+repeatability and makes the relative change of the mean throughput value not
+fully representative for these tests. The root causes behind poor
+results repeatability vary between the test cases.
+
+NDR Changes
+~~~~~~~~~~~
+
+NDR throughput changes between releases are available in
+CSV and pretty ASCII formats:
+
+ - `csv format for 1t1c <../_static/dpdk/performance-changes-1t1c-ndr.csv>`_,
+ - `csv format for 2t2c <../_static/dpdk/performance-changes-2t2c-ndr.csv>`_,
+ - `pretty ASCII format for 1t1c <../_static/dpdk/performance-changes-1t1c-ndr.txt>`_,
+ - `pretty ASCII format for 2t2c <../_static/dpdk/performance-changes-2t2c-ndr.txt>`_.
+
+.. note::
+
+    Test results have been generated by
+    `FD.io test executor dpdk performance job 3n-hsw`_
+    with Robot Framework result
+    files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../_static/archive/>`_.
+
+PDR Changes
+~~~~~~~~~~~
+
+PDR throughput changes between releases are available in
+CSV and pretty ASCII formats:
+
+ - `csv format for 1t1c <../_static/dpdk/performance-changes-1t1c-pdr.csv>`_,
+ - `csv format for 2t2c <../_static/dpdk/performance-changes-2t2c-pdr.csv>`_,
+ - `pretty ASCII format for 1t1c <../_static/dpdk/performance-changes-1t1c-pdr.txt>`_,
+ - `pretty ASCII format for 2t2c <../_static/dpdk/performance-changes-2t2c-pdr.txt>`_.
+
+.. note::
+
+    Test results have been generated by
+    `FD.io test executor dpdk performance job 3n-hsw`_
+    with Robot Framework result
+    files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../_static/archive/>`_.
+
+Comparison Across Testbeds
+--------------------------
+
+Relative performance changes in measured packet throughput on the 3-Node Skx
+testbed are calculated against the results measured on the 3-Node Hsw testbed.
+
+NDR Changes
+~~~~~~~~~~~
+
+NDR throughput changes between testbeds are available in
+CSV and pretty ASCII formats:
+
+ - `csv format for ndr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-ndr.csv>`_,
+ - `pretty ASCII format for ndr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-ndr.txt>`_.
+
+.. note::
+
+    Test results have been generated by
+    `FD.io test executor dpdk performance job 3n-hsw`_ and
+    `FD.io test executor dpdk performance job 3n-skx`_
+    with Robot Framework result
+    files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../_static/archive/>`_.
+
+PDR Changes
+~~~~~~~~~~~
+
+PDR throughput changes between testbeds are available in
+CSV and pretty ASCII formats:
+
+ - `csv format for pdr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-pdr.csv>`_,
+ - `pretty ASCII format for pdr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-pdr.txt>`_.
+
+.. note::
+
+    Test results have been generated by
+    `FD.io test executor dpdk performance job 3n-hsw`_ and
+    `FD.io test executor dpdk performance job 3n-skx`_
+    with Robot Framework result
+    files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../_static/archive/>`_.
+
 Known Issues
 ------------
diff --git a/docs/report/dpdk_performance_tests/overview.rst b/docs/report/dpdk_performance_tests/overview.rst
index 037300b3ee..b38f9595be 100644
--- a/docs/report/dpdk_performance_tests/overview.rst
+++ b/docs/report/dpdk_performance_tests/overview.rst
@@ -116,7 +116,7 @@ Performance Tests Naming
 ------------------------
 
 |csit-release| follows a common structured naming convention for all performance
-and system functional tests, introduced in CSIT rls1701.
+and system functional tests, introduced in CSIT-17.01.
 
 The naming should be intuitive for majority of the tests. Complete description
 of CSIT test naming convention is provided on :ref:`csit_test_naming`.
diff --git a/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst b/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
index 1e59d1eae2..014d1095a7 100644
--- a/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
+++ b/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
@@ -8,7 +8,7 @@ latency per test.
 *Title of each graph* is a regex (regular expression) matching all throughput
 test cases plotted on this graph, *X-axis labels* are indices of individual
 test suites executed by
-`FD.io test executor dpdk performance jobs`_ that created result output file
+FD.io test executor dpdk performance jobs that created result output file
 used as data source for the graph, *Y-axis labels* are measured packet Latency
 [uSec] values, and the *Graph legend* lists the plotted test suites and their
 indices. Latency is reported for concurrent symmetric bi-directional flows,
@@ -19,8 +19,13 @@ TGint2-to-SUT2-to-SUT1-to-TGint1.
 .. note::
 
     Test results have been generated by
-    `FD.io test executor dpdk performance jobs`_ with Robot Framework result
-    files csit-dpdk-perf-\*.zip `archived here <../../_static/archive/>`_.
+    `FD.io test executor dpdk performance job 3n-hsw`_,
+    `FD.io test executor dpdk performance job 3n-skx`_ and
+    `FD.io test executor dpdk performance job 2n-skx`_ with Robot Framework
+    result files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../../_static/archive/>`_.
+    Plotted data set size per test case is equal to the number of job executions
+    presented in this report version: **10**.
 
 .. toctree::
     :maxdepth: 1
diff --git a/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst b/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
index 42dfe4f572..a243922798 100644
--- a/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
+++ b/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
@@ -20,7 +20,7 @@ have the same value, only a horizontal line is plotted.
 *Title of each graph* is a regex (regular expression) matching all throughput
 test cases plotted on this graph, *X-axis labels* are indices of individual
 test suites executed by
-`FD.io test executor dpdk performance jobs`_ jobs that created result output
+FD.io test executor dpdk performance jobs that created result output
 files used as data sources for the graph, *Y-axis labels* are measured Packets
 Per Second [pps] values, and the *Graph legend* lists the plotted test suites
 and their indices.
@@ -28,8 +28,11 @@ and their indices.
 .. note::
 
     Test results have been generated by
-    `FD.io test executor dpdk performance jobs`_ with Robot Framework result
-    files csit-dpdk-perf-\*.zip `archived here <../../_static/archive/>`_.
+    `FD.io test executor dpdk performance job 3n-hsw`_,
+    `FD.io test executor dpdk performance job 3n-skx`_ and
+    `FD.io test executor dpdk performance job 2n-skx`_ with Robot Framework
+    result files csit-vpp-perf-|srelease|-\*.zip
+    `archived here <../../_static/archive/>`_.
     Plotted data set size per test case is equal to the number of job executions
     presented in this report version: **10**.
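The percentage change listed in the new Performance Changes section above reduces to basic statistics over repeated runs of the same test: mean and standard deviation per release, then the relative change of the two means. The sketch below is illustrative only and is not CSIT presentation-layer code; the function name and throughput samples are made up, while the real report compares per-test NDR/PDR results parsed from the archived Robot Framework output files.

    # Illustrative sketch only (assumed helper name, made-up sample values):
    # mean, standard deviation and relative change of mean throughput between
    # two releases, computed from repeated runs of the same test.
    from statistics import mean, stdev

    def relative_change_pct(previous_runs, current_runs):
        """Percentage change of mean throughput between two series of runs."""
        mean_prev = mean(previous_runs)
        mean_curr = mean(current_runs)
        return (mean_curr - mean_prev) / mean_prev * 100.0

    # Made-up NDR throughput samples [Mpps] from repeated executions of one test.
    prev = [9.1, 9.3, 9.0, 9.2, 9.1]   # previous release
    curr = [9.6, 9.5, 9.7, 9.4, 9.6]   # current release

    print(f"previous: {mean(prev):.2f} Mpps (stdev {stdev(prev):.2f})")
    print(f"current:  {mean(curr):.2f} Mpps (stdev {stdev(curr):.2f})")
    print(f"change:   {relative_change_pct(prev, curr):+.1f} %")
    # A standard deviation that is large relative to the mean signals poor
    # repeatability, so the percentage change is less representative for that test.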