Diffstat (limited to 'docs/report')
22 files changed, 115 insertions, 118 deletions
diff --git a/docs/report/honeycomb_functional_tests/documentation.rst b/docs/report/honeycomb_functional_tests/documentation.rst
index e53052ddb1..7ded3d4181 100644
--- a/docs/report/honeycomb_functional_tests/documentation.rst
+++ b/docs/report/honeycomb_functional_tests/documentation.rst
@@ -1,6 +1,5 @@
Documentation
=============
-`CSIT Honeycomb Functional Tests Documentation
-<https://docs.fd.io/csit/master/doc/tests.func.html>`_ contains detailed
+`CSIT Honeycomb Functional Tests Documentation`_ contains detailed
functional description and input parameters for each test case.
diff --git a/docs/report/honeycomb_functional_tests/test_environment.rst b/docs/report/honeycomb_functional_tests/test_environment.rst
index 028911fd94..92431c6c32 100644
--- a/docs/report/honeycomb_functional_tests/test_environment.rst
+++ b/docs/report/honeycomb_functional_tests/test_environment.rst
@@ -13,20 +13,19 @@ versions:
Current VPP tests have been executed on a single VM operating system and
version only, as described in the following paragraphs.
-In CSIT terminology, the VM operating system for both SUTs and TG that VPP 17.04
-has been tested with, is the following:
+In CSIT terminology, the VM operating system for both SUTs and TG that
+|vpp-release| has been tested with, is the following:
- ubuntu-16.04.1_2017-02-23_1.8
+ |virl-image-ubuntu|
-This image implies Ubuntu 16.04.1 LTS, current as of 2017/02/23 (that is,
+This image implies Ubuntu 16.04.1 LTS, current as of yyyy-mm-dd (that is,
package versions are those that would have been installed by a "apt-get update",
-"apt-get upgrade" on February 23), produced by CSIT disk image build scripts
-version 1.8.
+"apt-get upgrade" on that day), produced by CSIT disk image build scripts.
The exact list of installed packages and their versions (including the Linux
kernel package version) are included in CSIT source repository:
- resources/tools/disk-image-builder/ubuntu/lists/ubuntu-16.04.1_2017-02-23_1.8
+ resources/tools/disk-image-builder/ubuntu/lists/|virl-image-ubuntu|
A replica of this VM image can be built by running the "build.sh" script in CSIT
repository resources/tools/disk-image-builder/ubuntu.
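For reference, a minimal sketch of building such a replica image, using only the repository URL quoted later in this report and the disk-image-builder path above; whether "build.sh" needs any arguments or elevated privileges is not shown in this diff and is left as an assumption::

    # clone the CSIT repository and run the Ubuntu disk image build script
    git clone https://gerrit.fd.io/r/csit
    cd csit/resources/tools/disk-image-builder/ubuntu
    ./build.sh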
diff --git a/docs/report/introduction/csit_tag_description.rst b/docs/report/introduction/csit_tag_description.rst
index 968db02c61..5ea6c7fc04 100644
--- a/docs/report/introduction/csit_tag_description.rst
+++ b/docs/report/introduction/csit_tag_description.rst
@@ -5,7 +5,7 @@ All CSIT test cases are labelled with Robot Framework tags used to
allow for easy test case type identification, test case grouping and
selection for execution. Following sections list currently used CSIT TAGs and
their documentation based on the content of
-`tag_documentation rst file <https://git.fd.io/csit/tree/docs/tag_documentation.rst?h=rls1704>`_.
+`tag documentation rst file`_.
Topology TAGs
-------------
diff --git a/docs/report/introduction/csit_test_naming.rst b/docs/report/introduction/csit_test_naming.rst
index a4de765e0a..682fcd941a 100644
--- a/docs/report/introduction/csit_test_naming.rst
+++ b/docs/report/introduction/csit_test_naming.rst
@@ -5,12 +5,12 @@ Background
----------
CSIT |release| follows a common structured naming convention for all
-performance and system functional tests, introduced in CSIT rls1701.
+performance and system functional tests, introduced in CSIT |release-1|.
The naming should be intuitive for majority of the tests. Complete
description of CSIT test naming convention is provided on
-`CSIT test naming wiki page <https://wiki.fd.io/view/CSIT/csit-test-naming>`_. Below
-few illustrative examples of the naming usage for test suites across CSIT
+`CSIT test naming wiki page <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
+Below few illustrative examples of the naming usage for test suites across CSIT
performance, functional and HoneyComb management test areas.
Naming Convention
diff --git a/docs/report/introduction/general_notes.rst b/docs/report/introduction/general_notes.rst
index ebb73cdc93..5015b6bb16 100644
--- a/docs/report/introduction/general_notes.rst
+++ b/docs/report/introduction/general_notes.rst
@@ -40,9 +40,11 @@ is listed separately, as follows:
functionality of VPP. Tests cover a range of CRUD operations executed
against VPP.
-In addition to above, CSIT |release| report does also include VPP unit test results. VPP unit tests are developed within the FD.io VPP project and as they complement CSIT system functional tests, they are provided mainly as a reference and to
-provide a more complete view of automated testing executed against
-VPP-17.04 release.
+In addition to above, CSIT |release| report does also include VPP unit test
+results. VPP unit tests are developed within the FD.io VPP project and as they
+complement CSIT system functional tests, they are provided mainly as a reference
+and to provide a more complete view of automated testing executed against
+|vpp-release|.
FD.io CSIT system is developed using two main coding platforms: Robot
Framework (RF) and Python. CSIT |release| source code for the executed test
@@ -52,4 +54,4 @@ obtained by cloning CSIT git repository - "git clone
https://gerrit.fd.io/r/csit". The CSIT testing virtual environment can be run
on a local computer workstation (laptop, server) using Vagrant by following
the instructions in `CSIT tutorials
-<https://wiki.fd.io/view/CSIT#Tutorials>`_.
\ No newline at end of file
+<https://wiki.fd.io/view/CSIT#Tutorials>`_.
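As a rough sketch of the local workflow described above; the presence and location of a Vagrantfile in the repository are assumptions here, and the linked CSIT tutorials remain the authoritative instructions::

    # obtain the CSIT source and bring up the testing virtual environment
    git clone https://gerrit.fd.io/r/csit
    cd csit
    vagrant up    # requires Vagrant plus a supported provider, e.g. VirtualBox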
diff --git a/docs/report/introduction/overview.rst b/docs/report/introduction/overview.rst
index 97b6abc5bb..30f62a90e2 100644
--- a/docs/report/introduction/overview.rst
+++ b/docs/report/introduction/overview.rst
@@ -3,17 +3,17 @@ Overview
This is the **F**\ast **D**\ata **I**/**O** Project (**FD.io**) **C**\ontinuous
**S**\ystem **I**\ntegration and **T**\esting (**CSIT**) project report for CSIT
-|release| system testing of VPP-17.04 release.
+|release| system testing of |vpp-release|.
The report describes CSIT functional and performance tests and their
continuous execution delivered in CSIT |release|. A high-level overview is
provided for each CSIT test environment running in Linux Foundation (LF) FD.io
Continuous Performance Labs. This is followed by summary of all executed tests
-against the VPP-17.04 release and associated FD.io projects and sub-systems (HoneyComb, DPDK),
-CSIT |release| release notes, result highlights and known issues discovered in CSIT. More
-detailed description of each environment, pointers to CSIT test code
-documentation and detailed test resuls with links to the source data files are
-also provided.
+against the |vpp-release| and associated FD.io projects and sub-systems
+(HoneyComb, DPDK), CSIT |release| release notes, result highlights and known
+issues discovered in CSIT. More detailed description of each environment,
+pointers to CSIT test code documentation and detailed test resuls with links to
+the source data files are also provided.
CSIT |release| report contains following main sections and sub-sections:
diff --git a/docs/report/testpmd_performance_tests/documentation.rst b/docs/report/testpmd_performance_tests/documentation.rst
index 4a0fe5dce5..67aae66ca7 100644
--- a/docs/report/testpmd_performance_tests/documentation.rst
+++ b/docs/report/testpmd_performance_tests/documentation.rst
@@ -1,6 +1,5 @@
Documentation
=============
-`CSIT Testpmd Performance Tests Documentation
-<https://docs.fd.io/csit/rls1704/doc/tests.perf.html>`_ contains detailed
+`CSIT Testpmd Performance Tests Documentation`_ contains detailed
functional description and input parameters for each test case.
diff --git a/docs/report/testpmd_performance_tests/overview.rst b/docs/report/testpmd_performance_tests/overview.rst
index 2182a00cd8..78d214808a 100644
--- a/docs/report/testpmd_performance_tests/overview.rst
+++ b/docs/report/testpmd_performance_tests/overview.rst
@@ -42,7 +42,8 @@ tested NIC models include:
#. 2port40GE VIC1385 Cisco.
#. 2port40GE XL710 Intel.
-For detailed LF FD.io test bed specification and physical topology please refer to `LF FDio CSIT testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
+For detailed LF FD.io test bed specification and physical topology please refer
+to `LF FDio CSIT testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
Performance Tests Coverage
--------------------------
@@ -150,4 +151,4 @@ Reported latency values are measured using following methodology:
- TRex setup introduces an always-on error of about 2*2usec per latency flow -
  additonal Tx/Rx interface latency induced by TRex SW writing and reading
  packet timestamps on CPU cores without HW acceleration on NICs closer to the
-  interface line.
\ No newline at end of file
+  interface line.
diff --git a/docs/report/testpmd_performance_tests/packet_latency_graphs/index.rst b/docs/report/testpmd_performance_tests/packet_latency_graphs/index.rst
index fb16912039..7252e2eb75 100644
--- a/docs/report/testpmd_performance_tests/packet_latency_graphs/index.rst
+++ b/docs/report/testpmd_performance_tests/packet_latency_graphs/index.rst
@@ -7,21 +7,20 @@ latency per test.
*Title of each graph* is a regex (regular expression) matching all
throughput test cases plotted on this graph, *X-axis labels* are indices
-of individual test suites executed by csit-dpdk-perf-1704-all job that
-created result output file used as data source for the graph, *Y-axis
-labels* are measured packet Latency [uSec] values, and the *Graph
-legend* lists the plotted test suites and their indices. Latency is
-reported for concurrent symmetric bi-directional flows, separately for
-each direction: i) West-to-East: TGint1-to-SUT1-to-SUT2-to-TGint2, and
-ii) East-to-West: TGint2-to-SUT2-to-SUT1-to-TGint1.
+of individual test suites executed by
+`FD.io test executor dpdk performance jobs`_ that created result output file
+used as data source for the graph, *Y-axis labels* are measured packet Latency
+[uSec] values, and the *Graph legend* lists the plotted test suites and their
+indices. Latency is reported for concurrent symmetric bi-directional flows,
+separately for each direction: i) West-to-East:
+TGint1-to-SUT1-to-SUT2-to-TGint2, and ii) East-to-West:
+TGint2-to-SUT2-to-SUT1-to-TGint1.
.. note::
-    Test results have been generated by FD.io test executor jobs
-    `csit-dpdk-perf-1704-all
-    <https://jenkins.fd.io/view/csit/job/csit-dpdk-perf-1704-all/>`_,
-    with Robot Framework result files csit-dpdk-perf-1704-all-1.zip
-    `archived here <../../_static/archive/>`_.
+    Test results have been generated by
+    `FD.io test executor dpdk performance jobs`_ with Robot Framework result
+    files csit-dpdk-perf-\*.zip `archived here <../../_static/archive/>`_.
.. toctree::
diff --git a/docs/report/testpmd_performance_tests/packet_throughput_graphs/index.rst b/docs/report/testpmd_performance_tests/packet_throughput_graphs/index.rst
index d607b6ead6..84d0603ac1 100644
--- a/docs/report/testpmd_performance_tests/packet_throughput_graphs/index.rst
+++ b/docs/report/testpmd_performance_tests/packet_throughput_graphs/index.rst
@@ -19,20 +19,19 @@ have the same value, only a horizontal line is plotted.
*Title of each graph* is a regex (regular expression) matching all
throughput test cases plotted on this graph, *X-axis labels* are indices
-of individual test suites executed by csit-dpdk-perf-1704-all jobs that
-created result output files used as data sources for the graph, *Y-axis
-labels* are measured Packets Per Second [pps] values, and the *Graph
-legend* lists the plotted test suites and their indices.
+of individual test suites executed by
+`FD.io test executor dpdk performance jobs`_ jobs that created result output
+files used as data sources for the graph, *Y-axis labels* are measured Packets
+Per Second [pps] values, and the *Graph legend* lists the plotted test suites
+and their indices.
.. note::
-    Test results have been generated by FD.io test executor jobs
-    `csit-dpdk-perf-1704-all
-    <https://jenkins.fd.io/view/csit/job/csit-dpdk-perf-1704-all/>`_,
-    with Robot Framework result files csit-dpdk-perf-1704-all-<id>.zip
-    `archived here <../../_static/archive/>`_. Plotted data set size per
-    test case is equal to the number of job executions presented in this
-    report version: **10**.
+    Test results have been generated by
+    `FD.io test executor dpdk performance jobs`_ with Robot Framework result
+    files csit-dpdk-perf-\*.zip `archived here <../../_static/archive/>`_.
+    Plotted data set size per test case is equal to the number of job executions
+    presented in this report version: **10**.
.. toctree::
diff --git a/docs/report/testpmd_performance_tests/test_environment.rst b/docs/report/testpmd_performance_tests/test_environment.rst
index dd51137b2f..d7d960f617 100644
--- a/docs/report/testpmd_performance_tests/test_environment.rst
+++ b/docs/report/testpmd_performance_tests/test_environment.rst
@@ -3498,7 +3498,7 @@ DUT Configuration - DPDK
**DPDK Version**
-17.02
+|dpdk-release|
**DPDK Compile Parameters**
@@ -3535,15 +3535,15 @@ TG Configuration - TRex
**TG Version**
-TRex v2.09
+|trex-release|
**DPDK version**
-DPDK v16.07 (20e2b6eba13d9eb61b23ea75f09f2aa966fa6325 - in DPDK repo)
+DPDK v17.02 (f4decdc59e9323ecff5ddb5de7ebf0c79d50a960 - in DPDK repo)
**TG Build Script used**
-https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-installer.sh;h=e89b06f9b12499996df18e5e3399fcd660ebc017;hb=refs/heads/rls1701
+`TRex intallation`_
**TG Startup Configuration**
@@ -3561,5 +3561,5 @@ https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-in
**TG common API - pointer to driver**
-https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-stateless.py;h=24f4a997389ba3f10ad42e1f9564ef915fd58b44;hb=refs/heads/rls1701
+`TRex driver`_
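An illustrative sketch of the installation step behind the `TRex intallation`_ link above; the script path is taken from the original gitweb URL it replaces, while the working directory, lack of arguments and need for root privileges are assumptions::

    # install TRex on the TG node using the CSIT-provided installer script
    cd csit/resources/tools/t-rex
    sudo ./t-rex-installer.sh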
diff --git a/docs/report/vpp_functional_tests/documentation.rst b/docs/report/vpp_functional_tests/documentation.rst
index d34877b0b3..e0f180c3ec 100644
--- a/docs/report/vpp_functional_tests/documentation.rst
+++ b/docs/report/vpp_functional_tests/documentation.rst
@@ -1,7 +1,6 @@
Documentation
=============
-`CSIT VPP Functional Tests Documentation
-<https://docs.fd.io/csit/rls1704/doc/tests.func.html>`_ contains detailed
+`CSIT VPP Functional Tests Documentation`_ contains detailed
functional description and input parameters for each test case.
diff --git a/docs/report/vpp_functional_tests/overview.rst b/docs/report/vpp_functional_tests/overview.rst
index 0fa15a5472..a6de3f3282 100644
--- a/docs/report/vpp_functional_tests/overview.rst
+++ b/docs/report/vpp_functional_tests/overview.rst
@@ -132,7 +132,7 @@ Functional Tests Naming
-----------------------
CSIT |release| follows a common structured naming convention for all
-performance and system functional tests, introduced in CSIT rls1701.
+performance and system functional tests, introduced in CSIT |release-1|.
The naming should be intuitive for majority of the tests. Complete
description of CSIT test naming convention is provided on `CSIT test naming
diff --git a/docs/report/vpp_functional_tests/test_environment.rst b/docs/report/vpp_functional_tests/test_environment.rst
index fc7c63351a..8add53189c 100644
--- a/docs/report/vpp_functional_tests/test_environment.rst
+++ b/docs/report/vpp_functional_tests/test_environment.rst
@@ -30,14 +30,14 @@ SUT Configuration - VIRL Guest VM
Configuration of the SUT VMs is defined in file
/csit/resources/tools/virl/topologies/double-ring-nested.xenial.virl
-
+
- List of SUT VM interfaces:::
<interface id="0" name="GigabitEthernet0/4/0"/>
<interface id="1" name="GigabitEthernet0/5/0"/>
<interface id="2" name="GigabitEthernet0/6/0"/>
<interface id="3" name="GigabitEthernet0/7/0"/>
-
+
- Number of 2MB hugepages: 1024
- Maximum number of memory map areas: 20000
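For illustration only, one common way to apply the two SUT VM settings listed above on a Linux guest; the mechanism actually used by the VIRL images is not part of this diff and the sysctl invocation is an assumption::

    # 1024 x 2MB hugepages and a raised memory map area limit
    sysctl -w vm.nr_hugepages=1024
    sysctl -w vm.max_map_count=20000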
@@ -47,25 +47,24 @@ Configuration of the SUT VMs is defined in file
SUT Configuration - VIRL Guest OS Linux
---------------------------------------
-In CSIT terminology, the VM operating system for both SUTs that VPP 17.01 has
+In CSIT terminology, the VM operating system for both SUTs that |vpp-release| has
been tested with, is the following:
-**#. ubuntu-16.04.1_2016-12-19_1.6**
+**#. |virl-image-ubuntu|**
-This image implies Ubuntu 16.04.1 LTS, current as of 2016/12/19 (that is,
+This image implies Ubuntu 16.04.1 LTS, current as of yyyy-mm-dd (that is,
package versions are those that would have been installed by a "apt-get update",
-"apt-get upgrade" on December 19), produced by CSIT disk image build scripts
-version 1.6.
+"apt-get upgrade" on that day), produced by CSIT disk image build scripts.
The exact list of installed packages and their versions (including the Linux
kernel package version) are included in CSIT source repository:
- resources/tools/disk-image-builder/ubuntu/lists/ubuntu-16.04.1_2016-12-19_1.6
+ resources/tools/disk-image-builder/ubuntu/lists/|virl-image-ubuntu|
A replica of this VM image can be built by running the "build.sh" script in CSIT
repository resources/tools/disk-image-builder/ubuntu.
-**#. centos-7.3-1611_2017-01-24_1.2**
+**#. |virl-image-centos|**
The Centos7.3 image is ready to be used but no tests running on it now.
Corresponding Jenkins jobs are under preparation.
@@ -73,7 +72,7 @@ Corresponding Jenkins jobs are under preparation.
The exact list of installed packages and their versions (including the Linux
kernel package version) are included in CSIT source repository:
- resources/tools/disk-image-builder/ubuntu/lists/centos-7.3-1611_2017-01-24_1.2
+ resources/tools/disk-image-builder/ubuntu/lists/|virl-image-centos|
A replica of this VM image can be built by running the "build.sh" script in CSIT
repository resources/tools/disk-image-builder/centos.
@@ -106,7 +105,7 @@ Port configuration of DUTs is defined in topology file that is
generated per VIRL simulation based on the definition stored in file
/csit/resources/tools/virl/topologies/double-ring-nested.xenial.yaml
-
+
Example of DUT nodes configuration:::
DUT1:
@@ -224,7 +223,7 @@ Example of DUT nodes configuration:::
**VPP Version**
-17.01-release_amd64
+|vpp-release|
**VPP Installed Packages**
::
@@ -235,13 +234,13 @@ Example of DUT nodes configuration:::
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-=============-============-=============================================
- ii vpp 17.01-release amd64 Vector Packet Processing--executables
- ii vpp-dbg 17.01-release amd64 Vector Packet Processing--debug symbols
- ii vpp-dev 17.01-release amd64 Vector Packet Processing--development support
- ii vpp-dpdk-dev 17.01-release amd64 Vector Packet Processing--development support
- ii vpp-dpdk-dkms 17.01-release amd64 DPDK 2.1 igb_uio_driver
- ii vpp-lib 17.01-release amd64 Vector Packet Processing--runtime libraries
- ii vpp-plugins 17.01-release amd64 Vector Packet Processing--runtime plugins
+ ii vpp 17.07-release amd64 Vector Packet Processing--executables
+ ii vpp-dbg 17.07-release amd64 Vector Packet Processing--debug symbols
+ ii vpp-dev 17.07-release amd64 Vector Packet Processing--development support
+ ii vpp-dpdk-dev 17.07-release amd64 Vector Packet Processing--development support
+ ii vpp-dpdk-dkms 17.07-release amd64 DPDK 2.1 igb_uio_driver
+ ii vpp-lib 17.07-release amd64 Vector Packet Processing--runtime libraries
+ ii vpp-plugins 17.07-release amd64 Vector Packet Processing--runtime plugins
**VPP Startup Configuration**
@@ -439,5 +438,5 @@ Example of TG node configuration:::
**Traffic generator**
Functional tests utilize Scapy as a traffic generator. There was used Scapy
-v2.3.1 for VPP 17.01 tests.
+v2.3.1 for |vpp-release| tests.
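The VPP package listing quoted earlier in this section is standard dpkg output; an equivalent check on a SUT can be reproduced with, for example::

    # list installed VPP packages and their versions on the SUT
    dpkg -l | grep vpp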
diff --git a/docs/report/vpp_performance_tests/csit_release_notes.rst b/docs/report/vpp_performance_tests/csit_release_notes.rst
index 2c0aaf7414..a4077c13a5 100644
--- a/docs/report/vpp_performance_tests/csit_release_notes.rst
+++ b/docs/report/vpp_performance_tests/csit_release_notes.rst
@@ -48,7 +48,7 @@ Performance Improvements
Substantial improvements in measured packet throughput have been
observed in a number of CSIT |release| tests listed below, with relative
increase of double-digit percentage points. Relative improvements are
-calculated against the test results listed in CSIT rls1701 report.
+calculated against the test results listed in CSIT |release-1| report.
VPP-16.09 numbers are provided for reference.
NDR Throughput
@@ -119,7 +119,7 @@ Other Performance Changes
Other changes in measured packet throughput, with either minor relative
increase or decrease, have been observed in a number of CSIT |release|
tests listed below. Relative changes are calculated against the test
-results listed in CSIT rls1701 report.
+results listed in CSIT |release-1| report.
NDR Throughput
~~~~~~~~~~~~~~
diff --git a/docs/report/vpp_performance_tests/documentation.rst b/docs/report/vpp_performance_tests/documentation.rst
index b282c2a9b6..9f0d0a4cb3 100644
--- a/docs/report/vpp_performance_tests/documentation.rst
+++ b/docs/report/vpp_performance_tests/documentation.rst
@@ -1,6 +1,5 @@
Documentation
=============
-`CSIT VPP Performance Tests Documentation
-<https://docs.fd.io/csit/rls1704/doc/tests.perf.html>`_ contains detailed
+`CSIT VPP Performance Tests Documentation`_ contains detailed
functional description and input parameters for each test case.
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 3ee9b2b955..ccf8063ec2 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -194,7 +194,7 @@ Performance Tests Naming
------------------------
CSIT |release| follows a common structured naming convention for all
-performance and system functional tests, introduced in CSIT rls1701.
+performance and system functional tests, introduced in CSIT |release-1|.
The naming should be intuitive for majority of the tests. Complete
description of CSIT test naming convention is provided on `CSIT test naming
wiki
@@ -313,8 +313,8 @@ Methodology: KVM VM vhost
-------------------------
CSIT |release| introduced environment configuration changes to KVM Qemu vhost-
-user tests in order to more representatively measure VPP-17.04 performance in
-configurations with vhost-user interfaces and VMs.
+user tests in order to more representatively measure |vpp-release| performance
+in configurations with vhost-user interfaces and VMs.
Current setup of CSIT FD.io performance lab is using tuned settings for more
optimal performance of KVM Qemu:
@@ -357,7 +357,7 @@ specific configuration.
TRex is installed and run on the TG compute node. The typical procedure is:
- If the TRex is not already installed on TG, it is installed in the
-  suite setup phase - see `TRex intallation <https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-installer.sh;h=8090b7568327ac5f869e82664bc51b24f89f603f;hb=refs/heads/rls1704>`_.
+  suite setup phase - see `TRex intallation`_.
- TRex configuration is set in its configuration file
  ::
@@ -366,7 +366,7 @@ TRex is installed and run on the TG compute node. The typical procedure is:
- TRex is started in the background mode
  ::
-    sh -c 'cd /opt/trex-core-2.22/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+    sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
- There are traffic streams dynamically prepared for each test. The traffic
  is sent and the statistics obtained using trex_stl_lib.api.STLClient.
diff --git a/docs/report/vpp_performance_tests/packet_latency_graphs/index.rst b/docs/report/vpp_performance_tests/packet_latency_graphs/index.rst
index f71709453f..307bf3caf1 100644
--- a/docs/report/vpp_performance_tests/packet_latency_graphs/index.rst
+++ b/docs/report/vpp_performance_tests/packet_latency_graphs/index.rst
@@ -7,21 +7,20 @@ latency per test.
*Title of each graph* is a regex (regular expression) matching all
throughput test cases plotted on this graph, *X-axis labels* are indices
-of individual test suites executed by csit-vpp-perf-1704-all job that
-created result output file used as data source for the graph, *Y-axis
-labels* are measured packet Latency [uSec] values, and the *Graph
-legend* lists the plotted test suites and their indices. Latency is
-reported for concurrent symmetric bi-directional flows, separately for
-each direction: i) West-to-East: TGint1-to-SUT1-to-SUT2-to-TGint2, and
-ii) East-to-West: TGint2-to-SUT2-to-SUT1-to-TGint1.
+of individual test suites executed by
+`FD.io test executor vpp performance jobs`_ that created result output file
+used as data source for the graph, *Y-axis labels* are measured packet Latency
+[uSec] values, and the *Graph legend* lists the plotted test suites and their
+indices. Latency is reported for concurrent symmetric bi-directional flows,
+separately for each direction: i) West-to-East:
+TGint1-to-SUT1-to-SUT2-to-TGint2, and ii) East-to-West:
+TGint2-to-SUT2-to-SUT1-to-TGint1.
.. note::
-    Test results have been generated by FD.io test executor jobs
-    `csit-vpp-perf-1704-all
-    <https://jenkins.fd.io/view/csit/job/csit-vpp-perf-1704-all/>`_,
-    with Robot Framework result files csit-vpp-perf-1704-all-6.zip
-    `archived here <../../_static/archive/>`_.
+    Test results have been generated by
+    `FD.io test executor vpp performance jobs`_ with Robot Framework result
+    files csit-vpp-perf-\*.zip `archived here <../../_static/archive/>`_.
.. toctree::
diff --git a/docs/report/vpp_performance_tests/packet_throughput_graphs/index.rst b/docs/report/vpp_performance_tests/packet_throughput_graphs/index.rst
index d744ef09e5..c203aa1144 100644
--- a/docs/report/vpp_performance_tests/packet_throughput_graphs/index.rst
+++ b/docs/report/vpp_performance_tests/packet_throughput_graphs/index.rst
@@ -19,20 +19,20 @@ have the same value, only a horizontal line is plotted.
*Title of each graph* is a regex (regular expression) matching all
throughput test cases plotted on this graph, *X-axis labels* are indices
-of individual test suites executed by csit-vpp-perf-1704-all jobs that
-created result output files used as data sources for the graph, *Y-axis
-labels* are measured Packets Per Second [pps] values, and the *Graph
-legend* lists the plotted test suites and their indices.
+of individual test suites executed by
+`FD.io test executor vpp performance jobs`_ jobs that created result output
+files used as data sources for the graph, *Y-axis labels* are measured Packets
+Per Second [pps] values, and the *Graph legend* lists the plotted test suites
+and their indices.
+
.. note::
-    Test results have been generated by FD.io test executor jobs
-    `csit-vpp-perf-1704-all
-    <https://jenkins.fd.io/view/csit/job/csit-vpp-perf-1704-all/>`_,
-    with Robot Framework result files csit-vpp-perf-1704-all-<id>.zip
-    `archived here <../../_static/archive/>`_. Plotted data set size per
-    test case is equal to the number of job executions presented in this
-    report version: **10**.
+    Test results have been generated by
+    `FD.io test executor vpp performance jobs`_ with Robot Framework result
+    files csit-vpp-perf-\*.zip `archived here <../../_static/archive/>`_.
+    Plotted data set size per test case is equal to the number of job executions
+    presented in this report version: **10**.
.. toctree::
diff --git a/docs/report/vpp_performance_tests/test_environment.rst b/docs/report/vpp_performance_tests/test_environment.rst
index 7198b01173..a232650e8c 100644
--- a/docs/report/vpp_performance_tests/test_environment.rst
+++ b/docs/report/vpp_performance_tests/test_environment.rst
@@ -3497,11 +3497,11 @@ DUT Configuration - VPP
**VPP Version**
-17.01-release_amd64
+|vpp-release|
**VPP Compile Parameters**
-VPP Compile Job: https://jenkins.fd.io/view/vpp/job/vpp-merge-1701-ubuntu1604/
+`FD.io VPP compile job`_
**VPP Install Parameters**
@@ -3539,6 +3539,7 @@ Tagged by **1T1C**::
dev 0000:0a:00.0
no-multi-seg
}
+ heapsize 3G
ip6 {
hash-buckets 2000000
heap-size 3G
@@ -3559,6 +3560,7 @@ Tagged by **2T1C**::
cpu {
main-core 0 corelist-workers 1,2
}
+ heapsize 3G
dpdk {
socket-mem 1024,1024
dev default {
@@ -3588,6 +3590,7 @@ Tagged by **4T4C**::
cpu {
main-core 0 corelist-workers 1,2,3,4
}
+ heapsize 3G
dpdk {
socket-mem 1024,1024
dev default {
@@ -3608,7 +3611,7 @@ TG Configuration - TRex
**TG Version**
-TRex v2.22
+|trex-release|
**DPDK version**
@@ -3616,7 +3619,7 @@ DPDK v17.02 (f4decdc59e9323ecff5ddb5de7ebf0c79d50a960 - in DPDK repo)
**TG Build Script used**
-https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-installer.sh;h=8090b7568327ac5f869e82664bc51b24f89f603f;hb=refs/heads/rls1704
+`TRex intallation`_
**TG Startup Configuration**
@@ -3634,4 +3637,4 @@ https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-in
**TG common API - pointer to driver**
-https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-stateless.py;h=ae8d18767013ebecb0bec6c732ac66e483408661;hb=refs/heads/rls1704
+`TRex driver`_
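To put the heapsize 3G lines added by the hunks above in context, a minimal, illustrative startup configuration fragment combining only the settings quoted in this section; the /etc/vpp/startup.conf path is the usual VPP default and an assumption here, and writing it requires appropriate privileges::

    # compose an illustrative VPP startup.conf fragment (run with root privileges)
    cat > /etc/vpp/startup.conf <<'EOF'
    cpu {
      main-core 0 corelist-workers 1,2
    }
    heapsize 3G
    dpdk {
      socket-mem 1024,1024
      dev 0000:0a:00.0
      no-multi-seg
    }
    ip6 {
      hash-buckets 2000000
      heap-size 3G
    }
    EOF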
diff --git a/docs/report/vpp_unit_tests/documentation.rst b/docs/report/vpp_unit_tests/documentation.rst
index 6a8ed89a6d..304db1121b 100644
--- a/docs/report/vpp_unit_tests/documentation.rst
+++ b/docs/report/vpp_unit_tests/documentation.rst
@@ -3,4 +3,4 @@ Documentation
For complete description of the VPP Test Framework including anatomy of a test
case and detailed documentation of existing VPP unit test cases please refer
-to the `VPP test framework documentation <https://docs.fd.io/vpp/17.04/vpp_make_test/html/>`_.
+to the `VPP test framework documentation`_.
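For readers who want to try the framework locally, an illustrative sketch; "make test" is the standard entry point in a VPP source tree, while the repository checkout location and the specific filter value are assumptions shown only as examples::

    # fetch VPP and run its Python-based unit test framework
    git clone https://gerrit.fd.io/r/vpp
    cd vpp
    make test                  # run the complete unit test suite
    make test TEST=l2bd        # hypothetical filter: run only the L2 bridge-domain tests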
diff --git a/docs/report/vpp_unit_tests/overview.rst b/docs/report/vpp_unit_tests/overview.rst
index 4ef50f53c0..c9808833f6 100644
--- a/docs/report/vpp_unit_tests/overview.rst
+++ b/docs/report/vpp_unit_tests/overview.rst
@@ -5,7 +5,7 @@ Overview
This section includes an abbreviated version of the VPP Test Framework
overview maintained within the VPP project. Complete overview can be found
- in `VPP test framework documentation <https://docs.fd.io/vpp/17.04/vpp_make_test/html/>`_.
+ in `VPP test framework documentation`_.
VPP Unit Test Framework
-----------------------
@@ -27,7 +27,7 @@ Framework.
For complete description of the VPP Test Framework including anatomy of a test
case and detailed documentation of existing VPP unit test cases please refer
-to the `VPP test framework documentation <https://docs.fd.io/vpp/17.04/vpp_make_test/html/>`_
+to the `VPP test framework documentation`_
Unit Tests Coverage
-------------------