author    Maciek Konstantynowicz <mkonstan@cisco.com>  2017-04-26 12:37:26 +0100
committer Peter Mikus <pmikus@cisco.com>  2017-04-26 15:52:35 +0200
commit    01f9e5ccff93c600c793b78f4b8957289ad3359f (patch)
tree      442fff38debb090901dad28726acaaa20afd0a92 /docs/report/vpp_performance_tests
parent    4e32bebc47a000b1424ed8a3141a5b4cd4d1f740 (diff)
csit rls1704 report - updated csit_release_notes.rst and overview.rst files.
Change-Id: I0b5005a4c8dc566e559638d981fb0e8a7b079499
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
Diffstat (limited to 'docs/report/vpp_performance_tests')
-rw-r--r-- docs/report/vpp_performance_tests/csit_release_notes.rst | 38
-rw-r--r-- docs/report/vpp_performance_tests/overview.rst           | 60
2 files changed, 74 insertions(+), 24 deletions(-)
diff --git a/docs/report/vpp_performance_tests/csit_release_notes.rst b/docs/report/vpp_performance_tests/csit_release_notes.rst
index 10e01ababa..7c17e0f448 100644
--- a/docs/report/vpp_performance_tests/csit_release_notes.rst
+++ b/docs/report/vpp_performance_tests/csit_release_notes.rst
@@ -6,31 +6,36 @@ Changes in CSIT |release|
#. VPP performance test environment changes
- - Further VM and vhost-user test environment optimizations - Qemu virtio
+ - Further optimizations of VM and vhost-user test environment - Qemu virtio
queue size increased from default value of 256 to 1024.
- - Addition of HW cryptodev devices in all three LF FD.io physical testbeds.
+ - Addition of HW cryptodev devices - Intel QAT 8950 50G - in all three
+ LF FD.io physical testbeds.
-#. Added tests
+#. VPP performance test framework changes
- - CGNAT
+ - Added VAT command history collection for every test case as part of teardown.
+
+#. Added VPP performance tests
+
+ - **CGNAT**
- Carrier Grade Network Address Translation tests with varying number
of users and ports per user: 1u-15p, 10u-15p, 100u-15p, 1000u-15p,
2000u-15p, 4000u-15p - with Intel x520 NIC.
- - vhost-user tests with one VM
+ - **vhost-user tests with one VM**
- L2 Bridge Domain switched-forwarding with Intel x710 NIC, Intel x520 NIC,
Intel xl710 NIC.
- VXLAN and L2 Bridge Domain switched-forwarding with Intel x520 NIC.
- - vhost-user tests with two VM service chain
+ - **vhost-user tests with two VMs service chain**
- L2 cross-connect switched-forwarding with Intel x520 NIC, Intel xl710 NIC.
- L2 Bridge Domain switched-forwarding with Intel x520 NIC, Intel xl710 NIC.
- IPv4 routed-forwarding with Intel x520 NIC, Intel xl710 NIC.
- - IPSec encryption with
+ - **IPSec encryption with**
- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
with Intel xl710 NIC.
@@ -205,22 +210,21 @@ Here is the list of known issues in CSIT |release| for VPP performance tests:
| | for ip4scale200k, ip4scale2m scale IPv4 routed- | | Observed frequency: all test runs. |
| | forwarding tests. ip4scale20k tests are fine. | | |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 2 | VAT API timeouts during ip6scale2m scale IPv6 | | Needs fixing VPP VAT API timeouts for large volume of IPv6 |
-| | routed-forwarding tests when volume adding IPv6 | VPP-? | routes. |
+| 2 | VAT API timeouts during ip6scale2m scale IPv6 | VPP-712 | Needs fixing VPP VAT API timeouts for large volume of IPv6 |
+| | routed-forwarding tests when volume adding IPv6 | | routes. |
| | routes - 2M in this case. ip6scale2kk works. | | |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 3 | Vic1385 and Vic1227 low performance | CSIT-? | Low NDR performance. |
+| 3 | Vic1385 and Vic1227 low performance | VPP-664 | Low NDR performance. |
| | | | . |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 4 | Sporadic NDR discovery test failures on x520 | CSIT-? | Suspected issue with HW settings (BIOS, FW) in LF |
+| 4 | Sporadic NDR discovery test failures on x520 | CSIT-750 | Suspected issue with HW settings (BIOS, FW) in LF |
| | | | infrastructure. Issue can't be replicated outside LF. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 5 | Testpmd - Non-repeatible zig-zagging NDR | CSIT-? | Suspected NIC firmware or driver issue affecting NDR |
-| | throughput in multi-thread/-core tests | | in multi-thread/-core operation. Need to update to latest |
-| | - 2t2c - for some tested NICs. | | firmware in NICs. Applies to XL710 and X710 NICs. |
+| 5 | VPP in 2t2c setups - large variation | CSIT-568 | Suspected NIC firmware or DPDK driver issue affecting NDR |
+| | of discovered NDR throughput values across | | throughput. Applies to XL710 and X710 NICs, x520 NICs are fine. |
+| | multiple test runs with xl710 and x710 NICs. | | . |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
-| 6 | VPP - Non-repeatible zig-zagging NDR | CSIT-? | Suspected NIC firmware or driver issue affecting NDR |
-| | throughput in multi-thread/-core tests | | in multi-thread/-core operation. Need to update to latest |
-| | - 2t2c - for some tested NICs. | | firmware in NICs. Applies to XL710 and X710 NICs. |
+| 6 | Lower than expected NDR and PDR throughput with | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR and |
+| | xl710 and x710 NICs, compared to x520 NICs. | | PDR throughput. Applies to XL710 and X710 NICs. |
+---+-------------------------------------------------+------------+-----------------------------------------------------------------+
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 96a9377511..56ffda03df 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -238,11 +238,6 @@ suites:
switching to/from four vhost interfaces and two VMs, NDR throughput
discovery.
-Methodology: TRex Traffic Generator Usage
------------------------------------------
-
-TODO Description to be added.
-
Methodology: Multi-Thread and Multi-Core
----------------------------------------
@@ -338,5 +333,56 @@ guest dealing with data plan.
Methodology: IPSec with Intel QAT HW cards
------------------------------------------
-TODO Description to be added.
-Intel QAT 8950 50G (Walnut Hill) \ No newline at end of file
+VPP IPSec performance tests use the DPDK cryptodev device driver in
+combination with HW cryptodev devices - Intel QAT 8950 50G - present in
+LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
+data plane functions supported by VPP.
+
+CSIT |release| currently implements the following IPSec test cases:
+
+- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
+ with Intel xl710 NIC.
+- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
+ IPv4-over-IPv4 with Intel xl710 NIC.
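To make the cryptodev wiring concrete, the fragment below sketches how QAT virtual functions could be whitelisted alongside the NIC in the VPP startup configuration. This is an illustrative sketch only; the PCI addresses are hypothetical and the exact stanza used in the LF testbeds may differ.

```
# /etc/vpp/startup.conf (fragment) -- PCI addresses are hypothetical
dpdk {
  dev 0000:86:00.0          # NIC port used for IPv4 routed-forwarding
  dev 0000:88:01.0          # Intel QAT 8950 virtual function (cryptodev)
  dev 0000:88:01.1          # second QAT VF, e.g. one per worker thread
}
```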
+
+Methodology: TRex Traffic Generator Usage
+-----------------------------------------
+
+The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
+CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
+throughputs using binary search (NDR and PDR discovery tests), and for quick
+checks of DUT performance against reference NDRs (NDR check tests) for a
+specific configuration.
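The binary search used by the discovery tests can be sketched in Python. This is a simplified illustration, not CSIT code; ``measure_loss`` is a hypothetical stand-in for one TRex trial measurement at a given rate.

```python
def discover_ndr(measure_loss, line_rate, precision=0.1):
    """Binary-search the highest rate (e.g. Mpps) with zero packet loss.

    measure_loss(rate) -> loss ratio observed in one trial at `rate`;
    a hypothetical stand-in for a TRex trial, not the CSIT API.
    """
    lo, hi = 0.0, line_rate
    ndr = 0.0
    while hi - lo > precision:
        rate = (lo + hi) / 2.0
        if measure_loss(rate) == 0.0:
            ndr, lo = rate, rate   # no loss: search higher rates
        else:
            hi = rate              # loss seen: search lower rates
    return ndr

# Toy DUT that starts dropping packets above 7.2 Mpps:
print(discover_ndr(lambda r: 0.0 if r <= 7.2 else 0.01, 10.0))
```

PDR discovery follows the same pattern, except the loss test compares against a non-zero allowed loss ratio instead of exact zero.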
+
+TRex is installed and run on the TG compute node. The typical procedure is:
+
+ - If TRex is not already installed on the TG, it is installed during the
+ suite setup phase - see `TRex installation <https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-installer.sh;h=8090b7568327ac5f869e82664bc51b24f89f603f;hb=refs/heads/rls1704>`_.
+ - TRex configuration is set in its configuration file::
+
+ /etc/trex_cfg.yaml
+
+ - TRex is started in the background mode::
+
+ sh -c 'cd /opt/trex-core-2.22/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+
+ - Traffic streams are prepared dynamically for each test. The traffic is
+ sent and the statistics are obtained using trex_stl_lib.api.STLClient.
+
+**Measuring packet loss**
+
+ - Create an instance of STLClient
+ - Connect to the client
+ - Add all streams
+ - Clear statistics
+ - Send the traffic for a defined time
+ - Get the statistics
+
+If a warm-up phase is required, the traffic is also sent before the test and
+the statistics from that phase are ignored.
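The loss computation over the collected statistics can be sketched as below. The nested ``stats`` layout is a simplified assumption for illustration; the exact structure returned by trex_stl_lib differs.

```python
def packet_loss(stats, tx_port, rx_port):
    """Return (lost_packets, loss_ratio) for one traffic direction.

    `stats` is a simplified per-port dict, assumed to resemble what is
    gathered after the traffic run; not the exact TRex structure.
    """
    tx = stats[tx_port]["opackets"]   # packets sent by the TG port
    rx = stats[rx_port]["ipackets"]   # packets received back on the peer
    lost = tx - rx
    return lost, (float(lost) / tx if tx else 0.0)

stats = {0: {"opackets": 1000000, "ipackets": 0},
         1: {"opackets": 0, "ipackets": 999500}}
print(packet_loss(stats, 0, 1))      # loss in the port 0 -> port 1 direction
```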
+
+**Measuring latency**
+
+If latency measurement is requested, two more packet streams are created (one
+for each direction) with the TRex flow_stats parameter set to
+STLFlowLatencyStats. In that case, the returned statistics also include
+min/avg/max latency values.
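Reducing per-packet latency samples to the reported min/avg/max triple can be sketched as follows. The sample list is illustrative; the real values come from the flow_stats section of the TRex statistics for the latency-enabled streams.

```python
def latency_summary(samples_usec):
    """Reduce raw latency samples (microseconds) to the min/avg/max
    triple reported for a latency-enabled stream."""
    return {"min": min(samples_usec),
            "avg": sum(samples_usec) / float(len(samples_usec)),
            "max": max(samples_usec)}

print(latency_summary([12.0, 15.0, 11.0, 30.0, 12.0]))
```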