Diffstat (limited to 'docs/report/introduction')
 docs/report/introduction/methodology.rst                   |  6
 docs/report/introduction/methodology_http_tcp_with_wrk.rst | 19
   (renamed from docs/report/introduction/methodology_http_tcp_with_wrk_tool.rst)
 docs/report/introduction/methodology_quic_with_vppecho.rst | 43
 docs/report/introduction/methodology_tcp_with_iperf3.rst   | 41
 4 files changed, 96 insertions(+), 13 deletions(-)
diff --git a/docs/report/introduction/methodology.rst b/docs/report/introduction/methodology.rst
index ff3ecc31a4..107a6954c6 100644
--- a/docs/report/introduction/methodology.rst
+++ b/docs/report/introduction/methodology.rst
@@ -13,6 +13,9 @@ Test Methodology
methodology_data_plane_throughput/index
methodology_packet_latency
methodology_multi_core_speedup
+ methodology_http_tcp_with_wrk
+ methodology_tcp_with_iperf3
+ methodology_quic_with_vppecho
methodology_reconf
methodology_vpp_startup_settings
methodology_kvm_vms_vhost_user
@@ -21,6 +24,3 @@ Test Methodology
methodology_vpp_device_functional
methodology_ipsec_on_intel_qat
methodology_trex_traffic_generator
-
-..
- methodology_http_tcp_with_wrk_tool
diff --git a/docs/report/introduction/methodology_http_tcp_with_wrk_tool.rst b/docs/report/introduction/methodology_http_tcp_with_wrk.rst
index 28f3fc6bbb..cd831b4481 100644
--- a/docs/report/introduction/methodology_http_tcp_with_wrk_tool.rst
+++ b/docs/report/introduction/methodology_http_tcp_with_wrk.rst
@@ -1,15 +1,14 @@
-HTTP/TCP with WRK Tool
-----------------------
+HTTP/TCP with WRK
+-----------------
`WRK HTTP benchmarking tool <https://github.com/wg/wrk>`_ is used for
-experimental TCP/IP and HTTP tests of VPP TCP/IP stack and built-in
-static HTTP server. WRK has been chosen as it is capable of generating
-significant TCP/IP and HTTP loads by scaling number of threads across
-multi-core processors.
-
-This in turn enables quite high scale benchmarking of the main TCP/IP
-and HTTP service including HTTP TCP/IP Connections-Per-Second (CPS),
-HTTP Requests-Per-Second and HTTP Bandwidth Throughput.
+TCP/IP and HTTP tests of VPP Host Stack and built-in static HTTP server.
+WRK has been chosen as it is capable of generating significant TCP/IP
+and HTTP loads by scaling the number of threads across multi-core processors.
+
+This in turn enables high scale benchmarking of the VPP Host Stack TCP/IP
+and HTTP service including HTTP TCP/IP Connections-Per-Second (CPS) and
+HTTP Requests-Per-Second.
The initial tests are designed as follows:
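The load-generation approach described above can be sketched as a wrk invocation; the thread count, connection count, duration, and target address below are illustrative assumptions, not values taken from the CSIT test configuration:

```shell
# Hypothetical wrk run against the VPP built-in static HTTP server.
# --threads scales workers across cores, --connections sets concurrent
# TCP connections, --duration bounds the test. Target IP/port and URI
# are assumptions for illustration only.
wrk --threads 4 --connections 1000 --duration 30s http://192.168.10.2:80/
```

wrk reports completed requests, transfer rate, and latency distribution, from which the CPS and RPS metrics named above are derived.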
diff --git a/docs/report/introduction/methodology_quic_with_vppecho.rst b/docs/report/introduction/methodology_quic_with_vppecho.rst
new file mode 100644
index 0000000000..12b64203db
--- /dev/null
+++ b/docs/report/introduction/methodology_quic_with_vppecho.rst
@@ -0,0 +1,43 @@
+Hoststack Throughput Testing over QUIC/UDP/IP with vpp_echo
+-----------------------------------------------------------
+
+`vpp_echo performance testing tool <https://wiki.fd.io/view/VPP/HostStack#External_Echo_Server.2FClient_.28vpp_echo.29>`_
+is a bespoke performance test application which utilizes the 'native
+HostStack APIs' to verify performance and correct handling of
+connection/stream events with uni-directional and bi-directional
+streams of data.
+
+Because iperf3 does not support the QUIC transport protocol, vpp_echo
+is used for measuring the maximum attainable bandwidth of the VPP Host
+Stack connection utilizing the QUIC transport protocol across two
+instances of VPP running on separate DUT nodes. The QUIC transport
+protocol supports multiple streams per connection and test cases
+utilize different combinations of QUIC connections and number of
+streams per connection.
+
+The test configuration is as follows:
+
+ DUT1 Network DUT2
+[ vpp_echo-client -> VPP1 ]=======[ VPP2 -> vpp_echo-server ]
+ N-streams/connection
+
+where,
+
+ 1. vpp_echo server attaches to VPP2 and LISTENs on VPP2:UDP port 1234.
+ 2. vpp_echo client creates one or more connections to VPP1 and opens
+ one or more streams per connection to VPP2:UDP port 1234.
+ 3. vpp_echo client transmits a uni-directional stream as fast as the
+ VPP Host Stack allows to the vpp_echo server for the test duration.
+ 4. At the end of the test the vpp_echo client emits the goodput
+ measurements for all streams and the sum of all streams.
+
+ Test cases include:
+ 1. 1 QUIC connection with 1 stream
+ 2. 1 QUIC connection with 10 streams
+ 3. 10 QUIC connections with 1 stream
+ 4. 10 QUIC connections with 10 streams
+
+ with stream sizes chosen to provide reasonable test durations. The VPP Host
+ Stack QUIC transport is configured to utilize the picotls encryption
+ library. In the future, tests utilizing additional encryption
+ algorithms will be added.
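The server/client steps above might be driven roughly as sketched below. vpp_echo's argument syntax varies between VPP releases, so every detail here (the `quic://` URI form, addresses, and the connection/stream count arguments) is an assumption loosely based on the fd.io HostStack wiki, not a verified command line:

```shell
# Server side (DUT2): attach to VPP2 and listen for QUIC connections
# on port 1234 (hypothetical arguments).
vpp_echo server uri quic://10.10.10.2/1234

# Client side (DUT1): e.g. 10 connections x 10 streams, uni-directional
# transmit (argument names are assumptions, not verified flags).
vpp_echo client uri quic://10.10.10.2/1234 nclients 10 quic-streams 10
```

At completion the client prints per-stream and aggregate goodput, matching step 4 above.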
diff --git a/docs/report/introduction/methodology_tcp_with_iperf3.rst b/docs/report/introduction/methodology_tcp_with_iperf3.rst
new file mode 100644
index 0000000000..ef28dec4a3
--- /dev/null
+++ b/docs/report/introduction/methodology_tcp_with_iperf3.rst
@@ -0,0 +1,41 @@
+Hoststack Throughput Testing over TCP/IP with iperf3
+----------------------------------------------------
+
+`iperf3 bandwidth measurement tool <https://github.com/esnet/iperf>`_
+is used for measuring the maximum attainable bandwidth of the VPP Host
+Stack connection across two instances of VPP running on separate DUT
+nodes. iperf3 is a popular open source tool for active measurements
+of the maximum achievable bandwidth on IP networks.
+
+Because iperf3 utilizes the POSIX socket interface APIs, the current
+test configuration utilizes the LD_PRELOAD mechanism of the Linux
+dynamic linker to connect iperf3 to the VPP Host Stack using the VPP
+Communications Library (VCL) LD_PRELOAD library (libvcl_ldpreload.so).
+
+In the future, a forked version of iperf3, modified to use the VCL
+application APIs directly, may be added to determine the performance
+difference between 'VCL Native' applications and LD_PRELOAD, which
+inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+ DUT1 Network DUT2
+[ iperf3-client -> VPP1 ]=======[ VPP2 -> iperf3-server ]
+
+where,
+
+ 1. iperf3 server attaches to VPP2 and LISTENs on VPP2:TCP port 5201.
+ 2. iperf3 client attaches to VPP1 and opens one or more stream
+ connections to VPP2:TCP port 5201.
+ 3. iperf3 client transmits a uni-directional stream as fast as the
+ VPP Host Stack allows to the iperf3 server for the test duration.
+ 4. At the end of the test the iperf3 client emits the goodput
+ measurements for all streams and the sum of all streams.
+
+ Test cases include 1 and 10 streams with a 20-second test duration,
+ with the VPP Host Stack configured to utilize the CUBIC TCP
+ congestion control algorithm.
+
+ Note: iperf3 is single-threaded, so the 10-stream test is not
+ expected to show any performance improvement due to
+ multi-thread/multi-core execution.
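The LD_PRELOAD attachment described above can be sketched as follows. The iperf3 flags (`-s`, `-c`, `-p`, `-t`, `-P`) are standard iperf3 options; the library path, VCL config path, and client address are assumptions specific to a given installation:

```shell
# Server on DUT2: iperf3 listens on TCP port 5201 via the VPP host stack.
# LD_PRELOAD redirects POSIX socket calls into VCL; VCL_CONFIG points the
# library at the local VPP instance (both paths are assumptions).
LD_PRELOAD=/usr/lib/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf \
    iperf3 -s -p 5201

# Client on DUT1: 20-second uni-directional test with 1 stream
# (-P 10 selects the 10-stream case).
LD_PRELOAD=/usr/lib/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf \
    iperf3 -c 10.10.10.2 -p 5201 -t 20 -P 1
```

At completion the client reports per-stream and summed goodput, matching step 4 above.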