author    Maciek Konstantynowicz <mkonstan@cisco.com>  2017-10-30 21:28:43 +0000
committer Tibor Frank <tifrank@cisco.com>  2017-10-31 11:53:21 +0000
commit    737fbd3341638e90478979779b5ee6c0b0bf5c39 (patch)
tree      44b891d199bf7d99c6921de037461697fce2429d
parent    5beb7d3d04a876ee37c7b9cf64b3e02366086968 (diff)
csit report static content updates for rls1710.
Change-Id: I6744f1c206410564114ce7b9d06f840432f97ed2
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
-rw-r--r--  docs/report/dpdk_performance_tests/csit_release_notes.rst  |  6
-rw-r--r--  docs/report/introduction/general_notes.rst                 |  2
-rw-r--r--  docs/report/introduction/overview.rst                      | 16
-rw-r--r--  docs/report/vpp_performance_tests/csit_release_notes.rst   | 63
-rw-r--r--  docs/report/vpp_performance_tests/overview.rst             | 24
5 files changed, 38 insertions, 73 deletions
diff --git a/docs/report/dpdk_performance_tests/csit_release_notes.rst b/docs/report/dpdk_performance_tests/csit_release_notes.rst
index 9673754d92..9c5c3d7ed7 100644
--- a/docs/report/dpdk_performance_tests/csit_release_notes.rst
+++ b/docs/report/dpdk_performance_tests/csit_release_notes.rst
@@ -4,11 +4,7 @@ CSIT Release Notes
Changes in CSIT |release|
-------------------------
-#. Improved performance of testpmd tests
-
- - Performance of NICs - 2p40GE Intel xl710, 2p10GE Intel x710
-
-#. Added L3FWD tests on 2p10GE Intel x520-DA2
+No code changes apart from bug fixes.
Known Issues
------------
diff --git a/docs/report/introduction/general_notes.rst b/docs/report/introduction/general_notes.rst
index e6a2a9875f..0ddeb6a569 100644
--- a/docs/report/introduction/general_notes.rst
+++ b/docs/report/introduction/general_notes.rst
@@ -12,7 +12,7 @@ references are provided to the :abbr:`RF (Robot Framework)` result files that
got archived in FD.io nexus online storage system.
FD.io CSIT project currently covers multiple FD.io system and sub-system
-testing areas and this is reflected in this report, where each testing area
+testing areas and this is reflected in this report, where each testing area
is listed separately, as follows:
#. **VPP Performance Tests** - VPP performance tests are executed in physical
diff --git a/docs/report/introduction/overview.rst b/docs/report/introduction/overview.rst
index 48d631f2e1..536d5d3cf1 100644
--- a/docs/report/introduction/overview.rst
+++ b/docs/report/introduction/overview.rst
@@ -5,8 +5,8 @@ This is the **F**\ast **D**\ata **I**/**O** Project (**FD.io**) **C**\ontinuous
**S**\ystem **I**\ntegration and **T**\esting (**CSIT**) project report for CSIT
|release| system testing of |vpp-release|.
-This is the full html version, there is also the reduced
-`pdf version of this report`_.
+This is the full html version; there is also a reduced
+`PDF version of this report`_.
The report describes CSIT functional and performance tests and their
continuous execution delivered in CSIT |release|. A high-level overview is
@@ -58,13 +58,6 @@ CSIT |release| report contains following main sections and sub-sections:
*Test Environment* - environment description;
*Documentation* - source code documentation for Honeycomb functional tests.
-#. **Honeycomb Performance Tests** - Honeycomb performance tests executed in
- physical FD.io testbeds; *Overview* - tested topologies, test
- coverage and naming specifics; *CSIT Release Notes* - changes in CSIT
- |release|, added tests, environment or methodology changes, known CSIT issues;
- *Test Environment* - environment description ;
- *Documentation* - source code documentation for Honeycomb performance tests.
-
#. **VPP Unit Tests** - refers to VPP functional unit tests executed as
part of vpp make test verify option within the FD.io VPP project; listed in
this report to give a more complete view about executed VPP functional tests;
@@ -90,3 +83,8 @@ CSIT |release| report contains following main sections and sub-sections:
#. **Test Operational Data** - auto-generated DUT operational data from CSIT jobs
executions using CSIT Robot Framework output files as source data; *VPP
Performance Operational Data*.
+
+#. **CSIT Framework Documentation** - description of the overall CSIT
+ framework design hierarchy, CSIT test naming convention, followed by
+ description of Presentation and Analytics Layer (PAL) introduced in
+ CSIT |release|.
\ No newline at end of file
diff --git a/docs/report/vpp_performance_tests/csit_release_notes.rst b/docs/report/vpp_performance_tests/csit_release_notes.rst
index 1c1bce17b3..69444d2a3a 100644
--- a/docs/report/vpp_performance_tests/csit_release_notes.rst
+++ b/docs/report/vpp_performance_tests/csit_release_notes.rst
@@ -4,61 +4,32 @@ CSIT Release Notes
Changes in CSIT |release|
-------------------------
-#. Test environment changes in VPP data plane performance tests:
-
- - Further characterization and optimizations of VPP vhost-user and VM test
- methodology and test environment;
-
- - Tests with varying Qemu virtio queue (a.k.a. vring) sizes:
- [vr256] default 256 descriptors, [vr1024] 1024 descriptors to
- optimize for packet throughput;
-
- - Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)`
- settings: [cfs] default settings, [cfsrr1] :abbr:`CFS (Completely Fair
- Scheduler)` RoundRobin(1) policy applied to all data plane threads
- handling test packet path including all VPP worker threads and all Qemu
- testpmd poll-mode threads;
-
- - Resulting test cases are all combinations with [vr256,vr1024] and
- [cfs,cfsrr1] settings;
-
- - For more detail see performance results observations section in
- this report;
-
-#. Code updates and optimizations in CSIT performance framework:
-
- - Complete CSIT framework code revision and optimizations as described
- on CSIT wiki page `Design_Optimizations
- <https://wiki.fd.io/view/CSIT/Design_Optimizations>`_.
-
- - For more detail see the :ref:`CSIT Framework Design <csit-design>` section
- in this report;
-
-#. Changes to CSIT driver for TRex Traffic Generator:
+#. Added VPP performance tests
- - Complete refactor of TRex CSIT driver;
+ - **Linux Container VPP memif tests**
- - Introduction of packet traffic profiles to improve usability and
- manageability of traffic profiles for a growing number of test
- scenarios.
+ - Tests with VPP in L2 Bridge-Domain configuration connecting over
+ memif virtual interfaces to VPPs running in LXCs;
- - Support for packet traffic profiles to test IPv4/IPv6 stateful and
- stateless DUT data plane features;
+ - **Docker Container VPP memif tests**
-#. Added VPP performance tests
+ - Tests with VPP in L2 Cross-Connect configuration connecting over
+ memif virtual interfaces to VPPs running in Docker containers;
- - **Linux Container VPP memif virtual interface tests**
+ - **Container Topologies Orchestrated by K8s with VPP memif tests**
- - New VPP Memif virtual interface (shared memory interface) tests
- with L2 Bridge-Domain switched-forwarding;
+ - Tests with VPP in L2 Cross-Connect and Bridge-Domain configurations
+ connecting over memif virtual interfaces to VPPs running in Docker
+ containers, with service chain topologies orchestrated by Kubernetes;
- **Stateful Security Groups**
- - New m-thread m-core VPP stateful security-groups tests;
+ - m-thread m-core VPP stateful and stateless security-groups tests;
- **MAC-IP binding**
- - New MACIP iACL single-thread single-core and m-thread m-core tests;
+ - MACIP input access-lists, single-thread single-core and m-thread
+ m-core tests;
- Statistical analysis of repeatability of results;
@@ -71,8 +42,8 @@ double-digit percentage points. Relative improvements for this release are
calculated against the test results listed in CSIT |release-1| report. The
comparison is calculated between the mean values based on collected and
archived test results' samples for involved VPP releases. Standard deviation
-has been also listed for CSIT |release|. VPP-16.09 and VPP-17.01 numbers are
-provided for reference.
+has also been listed for CSIT |release|. Performance numbers since the
+VPP-16.09 release are provided for reference.
NDR Throughput
~~~~~~~~~~~~~~
@@ -232,7 +203,7 @@ Here is the list of known issues in CSIT |release| for VPP performance tests:
| | of discovered NDR throughput values across | | throughput. Applies to XL710 and X710 NICs, x520 NICs are fine. |
| | multiple test runs with xl710 and x710 NICs. | | |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 4 | Lower than expected NDR and PDR throughput with | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
+| 4 | Lower than expected NDR throughput with | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
| | xl710 and x710 NICs, compared to x520 NICs. | | PDR throughput. Applies to XL710 and X710 NICs. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
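The release notes above describe the comparison methodology: relative improvements are computed between the mean values of archived per-run throughput samples for the two releases, with standard deviation reported for the current release. A minimal sketch of that calculation, using hypothetical sample values (the real per-run samples live in the archived Robot Framework results):

```python
import statistics

def relative_improvement(samples_new, samples_old):
    """Relative change (%) of mean throughput between two releases,
    computed from per-run samples as described in the release notes."""
    mean_new = statistics.mean(samples_new)
    mean_old = statistics.mean(samples_old)
    return 100.0 * (mean_new - mean_old) / mean_old

# Hypothetical NDR sample sets (Mpps) for the previous and current release:
rls_prev = [9.8, 10.0, 10.2]
rls_curr = [11.9, 12.0, 12.1]

print(f"mean change: {relative_improvement(rls_curr, rls_prev):+.1f} %")
print(f"stdev (current release): {statistics.stdev(rls_curr):.2f} Mpps")
```

The sample values and function name are illustrative only; the report's actual comparisons are generated from the archived test result files.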
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index b7aecc1c38..c1302866fc 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -54,18 +54,18 @@ performance labs to address larger scale multi-interface and multi-NIC
performance testing scenarios.
For test cases that require DUT (VPP) to communicate with
-VirtualMachines (VMs) / LinuxContainers (LXCs) over vhost-user/memif
-interfaces, N of VM/LXC instances are created on SUT1 and SUT2. For N=1
-DUT forwards packets between vhost/memif and physical interfaces. For
-N>1 DUT a logical service chain forwarding topology is created on DUT by
-applying L2 or IPv4/IPv6 configuration depending on the test suite. DUT
-test topology with N VM/LXC instances is shown in the figure below
-including applicable packet flow thru the DUTs and VMs/LXCs (marked in
-the figure with ``***``).::
+VirtualMachines (VMs) / Containers (Linux or Docker Containers) over
+vhost-user/memif interfaces, N of VM/Ctr instances are created on SUT1
+and SUT2. For N=1 DUT forwards packets between vhost/memif and physical
+interfaces. For N>1 DUT a logical service chain forwarding topology is
+created on DUT by applying L2 or IPv4/IPv6 configuration depending on
+the test suite. DUT test topology with N VM/Ctr instances is shown in
+the figure below including applicable packet flow thru the DUTs and
+VMs/Ctrs (marked in the figure with ``***``).::
+-------------------------+ +-------------------------+
| +---------+ +---------+ | | +---------+ +---------+ |
- | |VM/LXC[1]| |VM/LXC[N]| | | |VM/LXC[1]| |VM/LXC[N]| |
+ | |VM/Ctr[1]| |VM/Ctr[N]| | | |VM/Ctr[1]| |VM/Ctr[N]| |
| | ***** | | ***** | | | | ***** | | ***** | |
| +--^---^--+ +--^---^--+ | | +--^---^--+ +--^---^--+ |
| *| |* *| |* | | *| |* *| |* |
@@ -85,8 +85,8 @@ the figure with ``***``).::
**********************| |**********************
+-----------+
-For VM/LXC tests, packets are switched by DUT multiple times: twice for
-a single VM/LXC, three times for two VMs/LXCs, N+1 times for N VMs/LXCs.
+For VM/Ctr tests, packets are switched by DUT multiple times: twice for
+a single VM/Ctr, three times for two VMs/Ctrs, N+1 times for N VMs/Ctrs.
Hence the external throughput rates measured by TG and listed in this
report must be multiplied by (N+1) to represent the actual DUT aggregate
packet forwarding rate.
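The (N+1) scaling rule stated in the paragraph above can be sketched as a one-line helper (a hypothetical illustration, not part of the CSIT framework):

```python
def dut_aggregate_rate(tg_rate_mpps: float, n_instances: int) -> float:
    """For a service chain of N VMs/Ctrs the DUT switches each packet
    (N + 1) times, so the DUT aggregate forwarding rate is the externally
    measured TG rate multiplied by (N + 1)."""
    return tg_rate_mpps * (n_instances + 1)

# Single VM/Ctr: packets cross the DUT twice.
print(dut_aggregate_rate(5.0, 1))   # TG measured 5 Mpps -> DUT forwards 10 Mpps
# Two VMs/Ctrs: three DUT passes.
print(dut_aggregate_rate(5.0, 2))   # TG measured 5 Mpps -> DUT forwards 15 Mpps
```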
@@ -99,7 +99,7 @@ thoughput for Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology, is to expect
the forwarding performance to be proportional to CPU core frequency,
assuming CPU is the only limiting factor and all other SUT parameters
equivalent to FD.io CSIT environment. The same rule of thumb can be also
-applied for Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but due to
+applied for Phy-to-VM/Ctr-to-Phy (NIC-to-VM/Ctr-to-NIC) topology, but due to
much higher dependency on intensive memory operations and sensitivity to Linux
kernel scheduler settings and behaviour, this estimation may not always yield
good enough accuracy.
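The frequency-proportionality rule of thumb above can be expressed as a simple estimate; the numbers below are hypothetical and, as the text notes, the estimate is reliable only for Phy-to-Phy topologies where the CPU core is the limiting factor:

```python
def scaled_throughput(measured_mpps: float, csit_core_ghz: float,
                      target_core_ghz: float) -> float:
    """Rule-of-thumb Phy-to-Phy estimate: forwarding throughput scales
    proportionally with CPU core frequency, assuming all other SUT
    parameters are equivalent to the FD.io CSIT environment."""
    return measured_mpps * (target_core_ghz / csit_core_ghz)

# e.g. a result measured on a hypothetical 2.3 GHz core, estimated for 3.0 GHz:
estimate = scaled_throughput(10.0, 2.3, 3.0)
print(f"estimated throughput: {estimate:.2f} Mpps")
```

For Phy-to-VM/Ctr-to-Phy paths this linear estimate degrades, since memory-intensive vhost/memif operations and Linux scheduler behaviour also bound performance.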