Diffstat (limited to 'docs/report/testpmd_performance_tests_hw')
-rw-r--r--  docs/report/testpmd_performance_tests_hw/csit_release_notes.rst  72
-rw-r--r--  docs/report/testpmd_performance_tests_hw/overview.rst            58
2 files changed, 44 insertions(+), 86 deletions(-)
diff --git a/docs/report/testpmd_performance_tests_hw/csit_release_notes.rst b/docs/report/testpmd_performance_tests_hw/csit_release_notes.rst
index 3b138a6f0c..cc1bf8dd1c 100644
--- a/docs/report/testpmd_performance_tests_hw/csit_release_notes.rst
+++ b/docs/report/testpmd_performance_tests_hw/csit_release_notes.rst
@@ -4,68 +4,10 @@ CSIT Release Notes
Changes in CSIT |release|
-------------------------
-#. Naming change for all Testpmd performance test suites and test cases.
-
#. Added Testpmd tests
- new NICs - Intel x520
-
-Performance Tests Naming
-------------------------
-
-CSIT |release| introduced a common structured naming convention for all
-performance and functional tests. This change was driven by substantially
-growing number and type of CSIT test cases. Firstly, the original practice did
-not always follow any strict naming convention. Secondly test names did not
-always clearly capture tested packet encapsulations, and the actual type or
-content of the tests. Thirdly HW configurations in terms of NICs, ports and
-their locality were not captured either. These were but few reasons that drove
-the decision to change and define a new more complete and stricter test naming
-convention, and to apply this to all existing and new test cases.
-
-The new naming should be intuitive for majority of the tests. The complete
-description of CSIT test naming convention is provided on `CSIT test naming wiki
-<https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
-
-Here few illustrative examples of the new naming usage for performance test
-suites:
-
-#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**
-
- - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
- PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
- - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
- Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
- with MAC learning, NDR throughput discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
- on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
- switching with MAC learning, NDR throughput discovery.
- - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
- x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
- - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
- Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
- discovery.
-
-#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
- P2V2P, NIC2VMchain2NIC, P2V2V2P**
-
- - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
- PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
- VirtPortConfig-VMconfig-TestType*
- - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
- of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
- switching to/from two vhost interfaces and one VM, NDR throughput
- discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
- ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
- switching to/from two vhost interfaces and one VM, NDR throughput
- discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
- ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
- switching to/from four vhost interfaces and two VMs, NDR throughput
- discovery.
-
Multi-Thread and Multi-Core Measurements
----------------------------------------
@@ -79,12 +21,12 @@ HyperThreading Enabled (requires BIOS settings change and hard reboot).
**Multi-core Test** - CSIT |release| multi-core tests are executed in the
following Testpmd thread and core configurations:
-#. 1t1c - 1 Testpmd worker thread on 1 CPU physical core.
-#. 2t2c - 2 Testpmd worker threads on 2 CPU physical cores.
-#. 4t4c - 4 Testpmd threads on 4 CPU physical cores.
+#. 1t1c - 1 Testpmd pmd thread on 1 CPU physical core.
+#. 2t2c - 2 Testpmd pmd threads on 2 CPU physical cores.
+#. 4t4c - 4 Testpmd pmd threads on 4 CPU physical cores.
-Note that in quite a few test cases running Testpmd on 2 or 4 physical cores
-hits the tested NIC I/O bandwidth or packets-per-second limit.
+Note that in many test cases, running Testpmd on 2 or 4 physical cores reaches
+the tested NIC's I/O bandwidth or packets-per-second limit.
Packet Throughput Measurements
------------------------------
@@ -137,8 +79,8 @@ Reported latency values are measured using following methodology:
interface line.
-Report Addendum Tests - More NICs
----------------------------------
+Report Addendum Tests - Additional NICs
+---------------------------------------
Adding test cases with more NIC types. Once the results become available, they
will be published as an addendum to the current version of CSIT |release|
diff --git a/docs/report/testpmd_performance_tests_hw/overview.rst b/docs/report/testpmd_performance_tests_hw/overview.rst
index 5bd81abac2..4ef3982e49 100644
--- a/docs/report/testpmd_performance_tests_hw/overview.rst
+++ b/docs/report/testpmd_performance_tests_hw/overview.rst
@@ -1,11 +1,13 @@
Overview
========
-Testpmd Performance Test Topologies
------------------------------------
+Tested Topologies HW
+--------------------
CSIT Testpmd performance tests are executed on physical baremetal servers hosted
-by LF FD.io project. Testbed physical topology is shown in the figure below.::
+by LF FD.io project. Testbed physical topology is shown in the figure below.
+
+::
+------------------------+ +------------------------+
| | | |
@@ -25,13 +27,14 @@ by LF FD.io project. Testbed physical topology is shown in the figure below.::
| |
+-----------+
-SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
-Intel XEON CPUs), TG is a Traffic Generator (TG, another Cisco UCS C240, with
-two Intel XEON CPUs). SUTs run Testpmd SW application in Linux user-mode as a
-Device Under Test (DUT). TG runs TRex SW application as a packet Traffic
-Generator. Physical connectivity between SUTs and to TG is provided using
-different NIC models that need to be tested for performance. Currently
-installed and tested NIC models include:
+SUT1 and SUT2 are two System Under Test servers (currently Cisco UCS C240,
+each with two Intel XEON CPUs); TG is a Traffic Generator (currently another
+Cisco UCS C240, with two Intel XEON CPUs). SUTs run the Testpmd SW application
+in Linux user-mode as a Device Under Test (DUT). TG runs the TRex SW
+application as a packet Traffic Generator. Physical connectivity between SUTs
+and to TG is provided using direct links (no L2 switches) connecting different
+NIC models that need to be tested for performance. Currently installed and
+tested NIC models include:
#. 2port10GE X520-DA2 Intel.
#. 2port10GE X710 Intel.
@@ -39,26 +42,39 @@ installed and tested NIC models include:
#. 2port40GE VIC1385 Cisco.
#. 2port40GE XL710 Intel.
-Detailed LF FD.io test bed specification and topology is described on `CSIT LF
-testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
+Detailed LF FD.io test bed specification and topology are described on the
+`CSIT LF testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
-Testpmd Performance Tests Overview
-----------------------------------
+Testing Summary
+---------------
-Performance tests are split into two main categories:
+Performance tests are split into the two main categories:
- Throughput discovery - discovery of packet forwarding rate using binary search
- in accordance to RFC2544.
+ in accordance with RFC2544.
- - NDR - discovery of Non Drop Rate, zero packet loss.
- - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss.
+ - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
+ followed by packet one-way latency measurements at 10%, 50% and 100% of
+ discovered NDR throughput.
+ - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
+ currently set to 0.5%; followed by packet one-way latency measurements at
+ 100% of discovered PDR throughput.
- Throughput verification - verification of packet forwarding rate against
- previously discovered throughput rate. These tests are currently done against
+ previously discovered NDR throughput. These tests are currently done against
0.9 of reference NDR, with reference rates updated periodically.
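The NDR/PDR discovery described above is a binary search over offered rate, per RFC2544. The sketch below is a minimal illustration of that search; the function names, the search-termination precision, and the synthetic loss model are assumptions, not CSIT's actual implementation:

```python
def discover_rate(measure_loss, line_rate, loss_tolerance=0.0,
                  precision=0.01):
    """Binary-search the highest offered rate (as a fraction of line rate)
    whose measured packet loss ratio does not exceed loss_tolerance.

    measure_loss(rate) must return the loss ratio observed at that rate.
    loss_tolerance=0.0 discovers NDR; e.g. 0.005 (0.5%) discovers PDR.
    """
    lo, hi = 0.0, 1.0                       # rate bounds, fractions of line rate
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if measure_loss(mid * line_rate) <= loss_tolerance:
            lo = mid                        # trial passed: search higher rates
        else:
            hi = mid                        # trial failed: search lower rates
    return lo * line_rate                   # highest known-good rate found

# Example with a synthetic DUT that starts dropping packets above 7 Mpps:
loss = lambda rate: 0.0 if rate <= 7e6 else 0.02
ndr = discover_rate(loss, line_rate=14.88e6)  # 10GE 64B line rate in pps
```

In the real tests, each `measure_loss` trial is a TRex traffic run of fixed duration, and the discovered NDR/PDR rates are then used as the reference points for the follow-up one-way latency measurements.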
-CSIT |release| includes following performance test suites:
+CSIT |release| includes the following performance test suites, listed per NIC
+type:
- 2port10GE X520-DA2 Intel
- - **L2XC** - L2 Cross-Connect forwarding of untagged Ethernet frames.
+ - **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
+ two interfaces.
+
+Execution of performance tests takes time, especially the throughput discovery
+tests. Due to the limited HW testbed resources available within FD.io labs
+hosted by Linux Foundation, the number of tests for NICs other than X520
+(a.k.a. Niantic) has been limited to a few baseline tests. Over time we expect
+the HW testbed resources to grow, and we will be adding a complete set of
+performance tests for all hardware models, to be executed regularly and/or
+continuously.