Diffstat (limited to 'docs/report')
-rw-r--r--  docs/report/introduction/general_notes.rst                26
-rw-r--r--  docs/report/vpp_performance_tests/csit_release_notes.rst  39
-rw-r--r--  docs/report/vpp_performance_tests/overview.rst            43
-rw-r--r--  docs/report/vpp_performance_tests/test_environment.rst   170
4 files changed, 116 insertions, 162 deletions
diff --git a/docs/report/introduction/general_notes.rst b/docs/report/introduction/general_notes.rst
index b51ed159c5..b94a82e2dc 100644
--- a/docs/report/introduction/general_notes.rst
+++ b/docs/report/introduction/general_notes.rst
@@ -5,11 +5,9 @@
All CSIT test results listed in this report are sourced and auto-generated
from :file:`output.xml` :abbr:`RF (Robot Framework)` files resulting from
:abbr:`LF (Linux Foundation)` FD.io Jenkins jobs execution against
|vpp-release| release artifacts. References are provided to the original
:abbr:`LF (Linux
-Foundation)` FD.io Jenkins job results. However, as :abbr:`LF (Linux
-Foundation)` FD.io Jenkins infrastructure does not automatically archive all jobs
-(history record is provided for the last 30 days or 40 jobs only), additional
-references are provided to the :abbr:`RF (Robot Framework)` result files that
-got archived in FD.io nexus online storage system.
+Foundation)` FD.io Jenkins job results. Additional references are provided to
+the :abbr:`RF (Robot Framework)` result files that got archived in FD.io nexus
+online storage system.

FD.io CSIT project currently covers multiple FD.io system and sub-system
testing areas and this is reflected in this report, where each testing area
@@ -25,18 +23,18 @@ is listed separately, as follows:

#. **LXC and Docker Containers VPP memif - Performance** - VPP memif
   virtual interface tests interconnect multiple VPP instances running
   in containers. VPP vswitch instance runs in bare-metal user-mode
- handling Intel x520 NIC 10GbE interfaces and connecting over memif
- (Master side) virtual interfaces to more instances of VPP running in
- LXC or in Docker Containers, both with memif virtual interfaces (Slave
- side). Tested across a range of multi-thread and multi-core
- configurations. TRex is used as a traffic generator.
+ handling Intel x520 NIC 10GbE, Intel x710 NIC 10GbE, Intel xl710 NIC 40GbE
+ interfaces and connecting over memif (Slave side) virtual interfaces to more
+ instances of VPP running in LXC or in Docker Containers, both with memif
+ virtual interfaces (Master side). Tested across a range of multi-thread and
+ multi-core configurations. TRex is used as a traffic generator.

#. **Container Topologies Orchestrated by K8s - Performance** - CSIT Container
   topologies connected over the memif virtual interface (shared memory
   interface). For these tests VPP vswitch instance runs in a Docker Container
- handling Intel x520 NIC 10GbE interfaces and connecting over memif (Master
- side) virtual interfaces to more instances of VPP running in Docker
- Containers with memif virtual interfaces (Slave side). All containers are
+ handling Intel x520 NIC 10GbE, Intel x710 NIC 10GbE interfaces and connecting
+ over memif virtual interfaces to more instances of VPP running in Docker
+ Containers with memif virtual interfaces. All containers are
   orchestrated by Kubernetes, with `Ligato <https://github.com/ligato>`_ for
   container networking. TRex is used as a traffic generator.

@@ -71,7 +69,7 @@ and to provide a more complete view of automated testing executed against
|vpp-release|.

FD.io CSIT system is developed using two main coding platforms :abbr:`RF (Robot
-Framework)` and Python. CSIT |release| source code for the executed test
+Framework)` and Python 2.7. CSIT |release| source code for the executed test
suites is available in CSIT branch |release| in the directory
:file:`./tests/<name_of_the_test_suite>`.
A local copy of CSIT source code can be obtained by cloning CSIT git repository
- :command:`git clone

diff --git a/docs/report/vpp_performance_tests/csit_release_notes.rst b/docs/report/vpp_performance_tests/csit_release_notes.rst
index e7c61e665c..8fd8a3b634 100644
--- a/docs/report/vpp_performance_tests/csit_release_notes.rst
+++ b/docs/report/vpp_performance_tests/csit_release_notes.rst
@@ -19,10 +19,9 @@ Changes in CSIT |release|
        to container, then via "horizontal" memif to next container, and so on
until the last container, then back to VPP and NIC;
- - **VPP TCP/IP stack**
+ - **MRR tests**
- - Added tests for VPP TCP/IP stack using VPP built-in HTTP server.
- WRK traffic generator is used as a client-side;
+ - <placeholder>;
- **SRv6**
@@ -31,11 +30,6 @@ Changes in CSIT |release|
    lookups and rewrites based on configured End and End.DX6 SRv6 egress
functions;
- - **IPSecSW**
-
- - SW computed IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in
- combination with IPv4 routed-forwarding;
-
#. Presentation and Analytics Layer
- Added throughput speedup analysis for multi-core and multi-thread
@@ -46,20 +40,7 @@ Changes in CSIT |release|
  - **Framework optimizations**
- - Ability to run CSIT framework on ARM architecture;
-
- - Overall stability improvements;
-
- - **NDR and PDR throughput binary search change**
-
- - Increased binary search resolution by reducing final step from
- 100kpps to 50kpps;
-
- - **VPP plugin loaded as needed by tests**
-
- - From this release only plugins required by tests are loaded at
- VPP initialization time. Previously all plugins were loaded for
- all tests;
+ - Performance test duration and stability improvements;
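For context on the framework note removed above (binary search final step reduced from 100 kpps to 50 kpps), the NDR-style rate search works roughly as below. This is a minimal sketch, not CSIT's actual search code; `measure_loss`, the rate bounds, and the synthetic DUT are illustrative assumptions:

```python
# Illustrative NDR-style binary search: halve the offered-rate interval
# until its width drops below a final step resolution (e.g. 50 kpps).
# NOTE: hypothetical helper, not the CSIT implementation.

def binary_search_ndr(measure_loss, min_rate, max_rate, final_step=50_000):
    """Return the highest rate [pps] at which no packet loss was measured.

    measure_loss(rate) -> loss ratio (0.0 means zero drops).
    Assumes min_rate is loss-free and max_rate shows loss.
    """
    lo, hi = min_rate, max_rate
    while hi - lo > final_step:
        mid = (lo + hi) / 2.0
        if measure_loss(mid) == 0.0:
            lo = mid   # no loss: NDR is at or above mid
        else:
            hi = mid   # loss seen: NDR is below mid
    return lo

# Synthetic DUT that starts dropping packets above 7.4 Mpps:
rate = binary_search_ndr(lambda r: 0.0 if r <= 7_400_000 else 0.01,
                         min_rate=100_000, max_rate=14_880_000)
```

A smaller `final_step` tightens the reported NDR interval at the cost of extra trial measurements, which is the trade-off the removed note describes.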
Performance Changes
-------------------
@@ -132,23 +113,15 @@ Here is the list of known issues in CSIT |release| for VPP performance tests:
| 3 | Lower than expected NDR throughput with | CSIT-571 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
| | xl710 and x710 NICs, compared to x520 NICs. | | PDR throughput. Applies to XL710 and X710 NICs. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 4 | QAT IPSec scale with 1000 tunnels (interfaces) | VPP-1121 | VPP crashes during configuration of 1000 IPsec tunnels. |
-| | in 2t2c config, all tests are failing. | | 1t1c tests are not affected |
-+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 5 | rls1801 plugin related performance regression | CSIT-925 | With all plugins loaded NDR, PDR and MaxRates vary |
+| 4 | rls1801 plugin related performance regression | CSIT-925 | With all plugins loaded NDR, PDR and MaxRates vary |
| | | | intermittently from 3% to 5% across multiple test executions. |
| | | | Requires plugin code bisecting. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 6 | rls1801 generic small performance regression | CSIT-926 | Generic performance regression of discovered NDR, PDR and |
+| 5 | rls1801 generic small performance regression | CSIT-926 | Generic performance regression of discovered NDR, PDR and |
| | ip4base, l2xcbase, l2bdbase | | MaxRates of -3%..-1% vs. rls1710, affects ip4base, l2xcbase, |
| | | | l2bdbase test suites. Not detected by CSIT performance trending |
| | | | scheme as it was masked out by another issue CSIT-925. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 7 | rls1801 substantial NDR performance regression | CSIT-927 | Much lower NDR for vhostvr1024 tests, with mean values |
-| | for vhost-user vring size of 1024 | | regression of -17%..-42% vs. rls1710, but also very high |
-| | | | standard deviation of up to 1.46 Mpps => poor repeatibility. |
-| | | | Making mean values not fully representative. |
-+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 8 | rls1801 substantial NDR/PDR regression for | CSIT-928 | NDR regression of -7%..-15%, PDR regression of -3%..-15% |
+| 6 | rls1801 substantial NDR/PDR regression for | CSIT-928 | NDR regression of -7%..-15%, PDR regression of -3%..-15% |
| | IPSec tunnel scale with HW QAT crypto-dev | | compared to rls1710. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 86bea87c0b..5f85b77b51 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -124,8 +124,8 @@
For detailed FD.io CSIT testbed specification and topology, as well as
configuration and setup of SUTs and DUTs testbeds please refer to
:ref:`test_environment`.

-Similar SUT compute node and DUT VPP settings can be arrived to in a
-standalone VPP setup by using a `vpp-config configuration tool
+Similar SUT compute node settings can be arrived at in a standalone VPP setup
+by using a `vpp-config configuration tool
<https://wiki.fd.io/view/VPP/Configuration_Tool>`_ developed within the VPP
project using CSIT recommended settings and scripts.

@@ -144,10 +144,6 @@ Performance tests are split into two main categories:
  currently set to 0.5%; followed by one-way packet latency measurements at
  100% of discovered PDR throughput.

-- Throughput verification - verification of packet forwarding rate against
- previously discovered throughput rate. These tests are currently done against
- 0.9 of reference NDR, with reference rates updated periodically.
-
CSIT |release| includes following performance test suites, listed per NIC type:

- 2port10GE X520-DA2 Intel

@@ -200,6 +196,8 @@ CSIT |release| includes following performance test suites, listed per NIC type:
    with LISP-GPE overlay tunneling for IPv4-over-IPv4.
  - **VPP TCP/IP stack** - tests of VPP TCP/IP stack used with VPP built-in
    HTTP server.
+ - **Container memif connections** - VPP memif virtual interface tests to
+   interconnect VPP instances with L2XC and L2BD.

- 2port10GE X710 Intel

@@ -207,6 +205,10 @@ CSIT |release| includes following performance test suites, listed per NIC type:
    with MAC learning.
  - **VMs with vhost-user** - virtual topologies with 1 VM using vhost-user
    interfaces, with VPP forwarding modes incl. L2 Bridge-Domain.
+ - **Container memif connections** - VPP memif virtual interface tests to
+   interconnect VPP instances with L2XC and L2BD.
+ - **Container K8s Orchestrated Topologies** - Container topologies connected
+   over the memif virtual interface.

- 2port10GE VIC1227 Cisco

@@ -360,14 +362,14 @@ environment settings:
Methodology: LXC and Docker Containers memif
--------------------------------------------

-CSIT |release| introduced additional tests taking advantage of VPP memif
-virtual interface (shared memory interface) tests to interconnect VPP
-instances. VPP vswitch instance runs in bare-metal user-mode handling
-Intel x520 NIC 10GbE interfaces and connecting over memif (Master side)
-virtual interfaces to more instances of VPP running in :abbr:`LXC (Linux
-Container)` or in Docker Containers, both with memif virtual interfaces
-(Slave side). LXCs and Docker Containers run in a priviliged mode with
-VPP data plane worker threads pinned to dedicated physical CPU cores per
+CSIT |release| introduced additional tests taking advantage of VPP memif virtual
+interface (shared memory interface) tests to interconnect VPP instances. VPP
+vswitch instance runs in bare-metal user-mode handling Intel x520 NIC 10GbE,
+Intel x710 NIC 10GbE, Intel xl710 NIC 40GbE interfaces and connecting over memif
+(Slave side) virtual interfaces to more instances of VPP running in
+:abbr:`LXC (Linux Container)` or in Docker Containers, both with memif virtual
+interfaces (Master side). LXCs and Docker Containers run in a privileged mode
+with VPP data plane worker threads pinned to dedicated physical CPU cores per
usual CSIT practice. All VPP instances run the same version of software. This
test topology is equivalent to existing tests with vhost-user and VMs as
described earlier in :ref:`tested_physical_topologies`.

@@ -388,14 +390,13 @@ used to address the container networking orchestration that is integrated
with K8s, including memif support.
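The Slave/Master memif pairing described in the methodology above can be illustrated with a minimal VPP CLI fragment. This is a hedged sketch only: interface ids and the L2 cross-connect target are made up, the exact `create memif` syntax varies across VPP releases, and CSIT's per-test configuration differs.

```
# On the vswitch VPP instance (memif Slave side in these tests):
create memif id 1 slave
set interface state memif0/1 up
set interface l2 xconnect memif0/1 TenGigabitEthernet0/a/0

# Inside the LXC/Docker container VPP instance (memif Master side):
create memif id 1 master
set interface state memif0/1 up
```

Both endpoints must agree on the memif id (and socket file, when not using the default) for the shared-memory channel to come up.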
For these tests VPP vswitch instance runs in a Docker Container handling
-Intel x520 NIC 10GbE interfaces and connecting over memif (Master side)
+Intel x520 NIC 10GbE, Intel x710 NIC 10GbE interfaces and connecting over memif
virtual interfaces to more instances of VPP running in Docker Containers
-with memif virtual interfaces (Slave side). All Docker Containers run in
-a priviliged mode with VPP data plane worker threads pinned to dedicated
-physical CPU cores per usual CSIT practice. All VPP instances run the
-same version of software. This test topology is equivalent to existing
-tests with vhost-user and VMs as described earlier in
-:ref:`tested_physical_topologies`.
+with memif virtual interfaces. All Docker Containers run in a privileged mode
+with VPP data plane worker threads pinned to dedicated physical CPU cores per
+usual CSIT practice. All VPP instances run the same version of software. This
+test topology is equivalent to existing tests with vhost-user and VMs as
+described earlier in :ref:`tested_physical_topologies`.

More information about CSIT Container Topologies Orchestrated by K8s is
available in :ref:`container_orchestration_in_csit`.

diff --git a/docs/report/vpp_performance_tests/test_environment.rst b/docs/report/vpp_performance_tests/test_environment.rst
index fb1ed948a6..77fe3216e4 100644
--- a/docs/report/vpp_performance_tests/test_environment.rst
+++ b/docs/report/vpp_performance_tests/test_environment.rst
@@ -34,56 +34,50 @@ Tagged by **1T1C**
::
+ ip
+ {
+ heap-size 4G
+ }
unix
{
cli-listen localhost:5002
log /tmp/vpe.log
nodaemon
}
- cpu
+ ip6
{
- corelist-workers 2
- main-core 1
+ heap-size 4G
+ hash-buckets 2000000
}
- ip4
+ heapsize 4G
+ plugins
{
- heap-size "4G"
+ plugin default
+ {
+ disable
+ }
+ plugin dpdk_plugin.so
+ {
+ enable
+ }
}
- ip6
+ cpu
{
- heap-size "4G"
- hash-buckets "2000000"
+ corelist-workers 2
+ main-core 1
}
- plugins
- {
- plugin pppoe_plugin.so { disable }
- plugin kubeproxy_plugin.so { disable }
- plugin ioam_plugin.so { disable }
- plugin ila_plugin.so { disable }
- plugin stn_plugin.so { disable }
- plugin acl_plugin.so { disable }
- plugin l2e_plugin.so { disable }
- plugin sixrd_plugin.so { disable }
- plugin nat_plugin.so { disable }
- plugin ixge_plugin.so { disable }
- plugin lb_plugin.so { disable }
- plugin memif_plugin.so { disable }
- plugin gtpu_plugin.so { disable }
- plugin flowprobe_plugin.so { disable }
- }
- heapsize "4G"
dpdk
{
- dev 0000:88:00.1
- dev 0000:88:00.0
+ dev 0000:0a:00.0
+ dev 0000:0a:00.1
no-multi-seg
+ uio-driver uio_pci_generic
+ log-level debug
dev default
{
- num-rx-desc 2048
num-rx-queues 1
- num-tx-desc 2048
}
- socket-mem "1024,1024"
+ socket-mem 1024,1024
no-tx-checksum-offload
}
@@ -91,56 +85,50 @@ Tagged by **2T2C**
::
+ ip
+ {
+ heap-size 4G
+ }
unix
{
cli-listen localhost:5002
log /tmp/vpe.log
nodaemon
}
- cpu
+ ip6
{
- corelist-workers 2,3
- main-core 1
+ heap-size 4G
+ hash-buckets 2000000
}
- ip4
+ heapsize 4G
+ plugins
{
- heap-size "4G"
+ plugin default
+ {
+ disable
+ }
+ plugin dpdk_plugin.so
+ {
+ enable
+ }
}
- ip6
+ cpu
{
- heap-size "4G"
- hash-buckets "2000000"
+ corelist-workers 2,3
+ main-core 1
}
- plugins
- {
- plugin pppoe_plugin.so { disable }
- plugin kubeproxy_plugin.so { disable }
- plugin ioam_plugin.so { disable }
- plugin ila_plugin.so { disable }
- plugin stn_plugin.so { disable }
- plugin acl_plugin.so { disable }
- plugin l2e_plugin.so { disable }
- plugin sixrd_plugin.so { disable }
- plugin nat_plugin.so { disable }
- plugin ixge_plugin.so { disable }
- plugin lb_plugin.so { disable }
- plugin memif_plugin.so { disable }
- plugin gtpu_plugin.so { disable }
- plugin flowprobe_plugin.so { disable }
- }
- heapsize "4G"
dpdk
{
- dev 0000:88:00.1
- dev 0000:88:00.0
+ dev 0000:0a:00.0
+ dev 0000:0a:00.1
no-multi-seg
+ uio-driver uio_pci_generic
+ log-level debug
dev default
{
- num-rx-desc 2048
num-rx-queues 1
- num-tx-desc 2048
}
- socket-mem "1024,1024"
+ socket-mem 1024,1024
no-tx-checksum-offload
}
@@ -148,56 +136,50 @@ Tagged by **4T4C**
::
+ ip
+ {
+ heap-size 4G
+ }
unix
{
cli-listen localhost:5002
log /tmp/vpe.log
nodaemon
}
- cpu
+ ip6
{
- corelist-workers 2,3,4,5
- main-core 1
+ heap-size 4G
+ hash-buckets 2000000
}
- ip4
+ heapsize 4G
+ plugins
{
- heap-size "4G"
+ plugin default
+ {
+ disable
+ }
+ plugin dpdk_plugin.so
+ {
+ enable
+ }
}
- ip6
+ cpu
{
- heap-size "4G"
- hash-buckets "2000000"
+ corelist-workers 2,3,4,5
+ main-core 1
}
- plugins
- {
- plugin pppoe_plugin.so { disable }
- plugin kubeproxy_plugin.so { disable }
- plugin ioam_plugin.so { disable }
- plugin ila_plugin.so { disable }
- plugin stn_plugin.so { disable }
- plugin acl_plugin.so { disable }
- plugin l2e_plugin.so { disable }
- plugin sixrd_plugin.so { disable }
- plugin nat_plugin.so { disable }
- plugin ixge_plugin.so { disable }
- plugin lb_plugin.so { disable }
- plugin memif_plugin.so { disable }
- plugin gtpu_plugin.so { disable }
- plugin flowprobe_plugin.so { disable }
- }
- heapsize "4G"
dpdk
{
- dev 0000:88:00.1
- dev 0000:88:00.0
+ dev 0000:0a:00.0
+ dev 0000:0a:00.1
no-multi-seg
+ uio-driver uio_pci_generic
+ log-level debug
dev default
{
- num-rx-desc 2048
- num-rx-queues 2
- num-tx-desc 2048
+ num-rx-queues 1
}
- socket-mem "1024,1024"
+ socket-mem 1024,1024
no-tx-checksum-offload
}