Diffstat (limited to 'docs/report/vpp_performance_tests')
-rw-r--r--  docs/report/vpp_performance_tests/csit_release_notes.rst | 63
-rw-r--r--  docs/report/vpp_performance_tests/overview.rst           | 24
2 files changed, 29 insertions, 58 deletions
diff --git a/docs/report/vpp_performance_tests/csit_release_notes.rst b/docs/report/vpp_performance_tests/csit_release_notes.rst
index 1c1bce17b3..69444d2a3a 100644
--- a/docs/report/vpp_performance_tests/csit_release_notes.rst
+++ b/docs/report/vpp_performance_tests/csit_release_notes.rst
@@ -4,61 +4,32 @@ CSIT Release Notes
Changes in CSIT |release|
-------------------------
-#. Test environment changes in VPP data plane performance tests:
-
- - Further characterization and optimizations of VPP vhost-user and VM test
- methodology and test environment;
-
- - Tests with varying Qemu virtio queue (a.k.a. vring) sizes:
- [vr256] default 256 descriptors, [vr1024] 1024 descriptors to
- optimize for packet throughput;
-
- - Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)`
- settings: [cfs] default settings, [cfsrr1] :abbr:`CFS (Completely Fair
- Scheduler)` RoundRobin(1) policy applied to all data plane threads
- handling test packet path including all VPP worker threads and all Qemu
- testpmd poll-mode threads;
-
- - Resulting test cases are all combinations with [vr256,vr1024] and
- [cfs,cfsrr1] settings;
-
- - For more detail see performance results observations section in
- this report;
-
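As an illustrative aside, the [cfsrr1] setting corresponds to applying the
Linux SCHED_RR round-robin policy with priority 1 to each data plane thread,
i.e. what ``chrt -r -p 1 <pid>`` does. A minimal Python sketch, assuming the
VPP worker and Qemu testpmd thread PIDs have already been discovered (the
PIDs below are hypothetical)::

    import os

    def apply_cfsrr1(thread_pids):
        # Set SCHED_RR with priority 1 for each data plane thread;
        # equivalent to `chrt -r -p 1 <pid>`, requires root privileges.
        for pid in thread_pids:
            os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(1))

    # Hypothetical PIDs of VPP worker and Qemu testpmd poll-mode threads,
    # in practice discovered from /proc/<qemu-pid>/task and the VPP CLI.
    apply_cfsrr1([2501, 2502, 3601, 3602])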
-#. Code updates and optimizations in CSIT performance framework:
-
 - Complete CSIT framework code revision and optimizations as described
- on CSIT wiki page `Design_Optimizations
- <https://wiki.fd.io/view/CSIT/Design_Optimizations>`_.
-
- - For more detail see the :ref:`CSIT Framework Design <csit-design>` section
- in this report;
-
-#. Changes to CSIT driver for TRex Traffic Generator:
+#. Added VPP performance tests:
- - Complete refactor of TRex CSIT driver;
+ - **Linux Container VPP memif tests**
- - Introduction of packet traffic profiles to improve usability and
- manageability of traffic profiles for a growing number of test
- scenarios.
+ - Tests with VPP in L2 Bridge-Domain configuration connecting over
+ memif virtual interfaces to VPPs running in LXCs;
- - Support for packet traffic profiles to test IPv4/IPv6 stateful and
- stateless DUT data plane features;
+ - **Docker Container VPP memif tests**
-#. Added VPP performance tests
+ - Tests with VPP in L2 Cross-Connect configuration connecting over
+ memif virtual interfaces to VPPs running in Docker containers;
- - **Linux Container VPP memif virtual interface tests**
+ - **Container Topologies Orchestrated by K8s with VPP memif tests**
- - New VPP Memif virtual interface (shared memory interface) tests
- with L2 Bridge-Domain switched-forwarding;
+ - Tests with VPP in L2 Cross-Connect and Bridge-Domain configurations
+ connecting over memif virtual interfaces to VPPs running in Docker
+ containers, with service chain topologies orchestrated by Kubernetes;
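As an illustrative aside, the memif L2 Cross-Connect setup used by these
tests can be sketched with the VPP Python API (vpp_papi). Message and field
names below follow the memif and l2 API definitions but vary across VPP
releases, and the NIC ``sw_if_index`` is hypothetical; treat this as an
assumption-laden sketch, not the CSIT test code::

    from vpp_papi import VPPApiClient

    vpp = VPPApiClient()        # loads *.api.json from default paths
    vpp.connect("memif-sketch")

    # Create a slave memif interface (role=1); the container side runs
    # the master end of the shared-memory ring.
    reply = vpp.api.memif_create(role=1, id=0, ring_size=1024)
    memif_idx = reply.sw_if_index

    # L2 cross-connect the memif with a physical interface in both
    # directions; sw_if_index 1 stands in for the NIC.
    vpp.api.sw_interface_set_l2_xconnect(
        rx_sw_if_index=1, tx_sw_if_index=memif_idx, enable=1)
    vpp.api.sw_interface_set_l2_xconnect(
        rx_sw_if_index=memif_idx, tx_sw_if_index=1, enable=1)

    vpp.disconnect()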
- **Stateful Security Groups**
- - New m-thread m-core VPP stateful security-groups tests;
+ - m-thread m-core VPP stateful and stateless security-groups tests;
- **MAC-IP binding**
- - New MACIP iACL single-thread single-core and m-thread m-core tests;
+ - MACIP input access-lists, single-thread single-core and m-thread
+ m-core tests;
- Statistical analysis of repeatability of results;
@@ -71,8 +42,8 @@ double-digit percentage points. Relative improvements for this release are
calculated against the test results listed in CSIT |release-1| report. The
comparison is calculated between the mean values based on collected and
archived test results' samples for involved VPP releases. Standard deviation
-has been also listed for CSIT |release|. VPP-16.09 and VPP-17.01 numbers are
-provided for reference.
+has also been listed for CSIT |release|. Performance numbers since release
+VPP-16.09 are provided for reference.
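The comparison arithmetic is simple; for illustration, with hypothetical
per-release NDR throughput samples in Mpps::

    from statistics import mean, stdev

    prev_samples = [9.1, 9.0, 9.2]     # CSIT |release-1| samples [Mpps]
    curr_samples = [10.1, 10.3, 10.0]  # CSIT |release| samples [Mpps]

    change = (mean(curr_samples) - mean(prev_samples)) / mean(prev_samples)
    print(f"mean {mean(curr_samples):.2f} Mpps, "
          f"stdev {stdev(curr_samples):.2f} Mpps, "
          f"relative change {change:+.1%}")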
NDR Throughput
~~~~~~~~~~~~~~
@@ -232,7 +203,7 @@ Here is the list of known issues in CSIT |release| for VPP performance tests:
| | of discovered NDR throughput values across | | throughput. Applies to XL710 and X710 NICs, x520 NICs are fine. |
| | multiple test runs with xl710 and x710 NICs. | | |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 4 | Lower than expected NDR and PDR throughput with | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
+| 4 | Lower than expected NDR throughput with | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
| | xl710 and x710 NICs, compared to x520 NICs. | | PDR throughput. Applies to XL710 and X710 NICs. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index b7aecc1c38..c1302866fc 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -54,18 +54,18 @@ performance labs to address larger scale multi-interface and multi-NIC
performance testing scenarios.
For test cases that require DUT (VPP) to communicate with
-VirtualMachines (VMs) / LinuxContainers (LXCs) over vhost-user/memif
-interfaces, N of VM/LXC instances are created on SUT1 and SUT2. For N=1
-DUT forwards packets between vhost/memif and physical interfaces. For
-N>1 DUT a logical service chain forwarding topology is created on DUT by
-applying L2 or IPv4/IPv6 configuration depending on the test suite. DUT
-test topology with N VM/LXC instances is shown in the figure below
-including applicable packet flow thru the DUTs and VMs/LXCs (marked in
-the figure with ``***``).::
+VirtualMachines (VMs) / Containers (Linux or Docker Containers) over
+vhost-user/memif interfaces, N VM/Ctr instances are created on SUT1
+and SUT2. For N=1, the DUT forwards packets between vhost/memif and
+physical interfaces. For N>1, a logical service chain forwarding
+topology is created on the DUT by applying L2 or IPv4/IPv6
+configuration depending on the test suite. DUT test topology with N
+VM/Ctr instances is shown in the figure below, including the
+applicable packet flow through the DUTs and VMs/Ctrs (marked in the
+figure with ``***``).::
+-------------------------+ +-------------------------+
| +---------+ +---------+ | | +---------+ +---------+ |
- | |VM/LXC[1]| |VM/LXC[N]| | | |VM/LXC[1]| |VM/LXC[N]| |
+ | |VM/Ctr[1]| |VM/Ctr[N]| | | |VM/Ctr[1]| |VM/Ctr[N]| |
| | ***** | | ***** | | | | ***** | | ***** | |
| +--^---^--+ +--^---^--+ | | +--^---^--+ +--^---^--+ |
| *| |* *| |* | | *| |* *| |* |
@@ -85,8 +85,8 @@ the figure with ``***``).::
**********************| |**********************
+-----------+
-For VM/LXC tests, packets are switched by DUT multiple times: twice for
-a single VM/LXC, three times for two VMs/LXCs, N+1 times for N VMs/LXCs.
+For VM/Ctr tests, packets are switched by DUT multiple times: twice for
+a single VM/Ctr, three times for two VMs/Ctrs, N+1 times for N VMs/Ctrs.
Hence the external throughput rates measured by TG and listed in this
report must be multiplied by (N+1) to represent the actual DUT aggregate
packet forwarding rate.
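For illustration, the (N+1) correction expressed in Python (rates
hypothetical)::

    def dut_aggregate_rate(tg_rate_pps, n):
        # Packets traverse the DUT data plane N+1 times for a service
        # chain of N VM/Ctr instances.
        return tg_rate_pps * (n + 1)

    # TG measures 2.5 Mpps through a chain of 2 VMs/Ctrs, so the DUT
    # actually forwards 2.5 * 3 = 7.5 Mpps.
    print(dut_aggregate_rate(2_500_000, 2))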
@@ -99,7 +99,7 @@ throughput for Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology, is to expect
the forwarding performance to be proportional to CPU core frequency,
assuming CPU is the only limiting factor and all other SUT parameters
equivalent to FD.io CSIT environment. The same rule of thumb can be also
-applied for Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but due to
+applied for Phy-to-VM/Ctr-to-Phy (NIC-to-VM/Ctr-to-NIC) topology, but due to
much higher dependency on intensive memory operations and sensitivity to Linux
kernel scheduler settings and behaviour, this estimation may not always yield
good enough accuracy.
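For illustration, the rule of thumb amounts to linear scaling by core
frequency; a sketch under that assumption only (frequencies and rate are
illustrative, not the CSIT testbed specification)::

    def scaled_throughput(csit_rate_mpps, csit_core_ghz, target_core_ghz):
        # Estimate Phy-to-Phy forwarding rate on a different CPU by
        # scaling the CSIT-measured rate with core frequency.
        return csit_rate_mpps * (target_core_ghz / csit_core_ghz)

    # A 10.0 Mpps rate measured at 2.3 GHz, estimated for a 3.2 GHz core:
    print(f"{scaled_throughput(10.0, 2.3, 3.2):.1f} Mpps")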