author    | Tibor Frank <tifrank@cisco.com> | 2020-02-25 10:11:10 +0100
committer | Tibor Frank <tifrank@cisco.com> | 2020-02-25 10:14:29 +0100
commit    | 84ab8bd624aa016988fc9f56e5a07e9ec07128b5 (patch)
tree      | e57cd81de1eac98568324bd43cfb83f6cc3680c5 /docs/report/introduction/methodology_hoststack_testing
parent    | 384c0d7196654cc32625024e75d616ae5175e1d5 (diff)
Report: Hoststack methodology
Change-Id: I105e1d4823df42522bff1af50d1bb173cd84d958
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Diffstat (limited to 'docs/report/introduction/methodology_hoststack_testing')

 docs/report/introduction/methodology_hoststack_testing/index.rst                         |  8 +
 docs/report/introduction/methodology_hoststack_testing/methodology_http_tcp_with_wrk.rst | 39 +
 docs/report/introduction/methodology_hoststack_testing/methodology_quic_with_vppecho.rst | 46 +
 docs/report/introduction/methodology_hoststack_testing/methodology_tcp_with_iperf3.rst   | 43 +
 4 files changed, 136 insertions(+), 0 deletions(-)
diff --git a/docs/report/introduction/methodology_hoststack_testing/index.rst b/docs/report/introduction/methodology_hoststack_testing/index.rst
new file mode 100644
index 0000000000..e7b5b79610
--- /dev/null
+++ b/docs/report/introduction/methodology_hoststack_testing/index.rst
@@ -0,0 +1,8 @@
+Hoststack Testing
+-----------------
+
+.. toctree::
+
+    methodology_http_tcp_with_wrk
+    methodology_tcp_with_iperf3
+    methodology_quic_with_vppecho
diff --git a/docs/report/introduction/methodology_hoststack_testing/methodology_http_tcp_with_wrk.rst b/docs/report/introduction/methodology_hoststack_testing/methodology_http_tcp_with_wrk.rst
new file mode 100644
index 0000000000..f5da5339a0
--- /dev/null
+++ b/docs/report/introduction/methodology_hoststack_testing/methodology_http_tcp_with_wrk.rst
@@ -0,0 +1,39 @@
+HTTP/TCP with WRK
+^^^^^^^^^^^^^^^^^
+
+`WRK HTTP benchmarking tool <https://github.com/wg/wrk>`_ is used for
+TCP/IP and HTTP tests of the VPP Host Stack and its built-in static HTTP server.
+WRK has been chosen as it is capable of generating significant TCP/IP
+and HTTP loads by scaling the number of threads across multi-core processors.
+
+This in turn enables high-scale benchmarking of the VPP Host Stack TCP/IP
+and HTTP service, including HTTP TCP/IP Connections-Per-Second (CPS) and
+HTTP Requests-Per-Second (RPS).
+
+The initial tests are designed as follows:
+
+- HTTP and TCP/IP Connections-Per-Second (CPS)
+
+  - WRK configured to use 8 threads across 8 cores, 1 thread per core.
+  - Maximum of 50 concurrent connections across all WRK threads.
+  - Timeout for server responses set to 5 seconds.
+  - Test duration is 30 seconds.
+  - Expected HTTP test sequence:
+
+    - Single HTTP GET Request sent per open connection.
+    - Connection close after valid HTTP reply.
+    - Resulting flow sequence - 8 packets: >Syn, <Syn-Ack, >Ack, >Req,
+      <Rep, >Fin, <Fin, >Ack.
+
+- HTTP Requests-Per-Second (RPS)
+
+  - WRK configured to use 8 threads across 8 cores, 1 thread per core.
+  - Maximum of 50 concurrent connections across all WRK threads.
+  - Timeout for server responses set to 5 seconds.
+  - Test duration is 30 seconds.
+  - Expected HTTP test sequence:
+
+    - Multiple HTTP GET Requests sent in sequence per open connection.
+    - Connection close after set test duration time.
+    - Resulting flow sequence: >Syn, <Syn-Ack, >Ack, >Req[1], <Rep[1],
+      .., >Req[n], <Rep[n], >Fin, <Fin, >Ack.
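For readers who want to generate a comparable load outside the CSIT harness, the
parameters listed in the file above map onto a wrk command line roughly as
follows. This is a sketch, not the harness invocation; the target URL is a
placeholder for the DUT's HTTP server address:

    # 8 threads, 50 connections total, 30 s duration, 5 s response timeout
    wrk --threads 8 --connections 50 --duration 30s --timeout 5s \
        http://<dut-server-ip>/

wrk reports per-thread latency and request-rate statistics at the end of the
run; the CPS and RPS figures in the report correspond to the connection and
request rates measured over the 30-second window.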
diff --git a/docs/report/introduction/methodology_hoststack_testing/methodology_quic_with_vppecho.rst b/docs/report/introduction/methodology_hoststack_testing/methodology_quic_with_vppecho.rst
new file mode 100644
index 0000000000..329b9a2964
--- /dev/null
+++ b/docs/report/introduction/methodology_hoststack_testing/methodology_quic_with_vppecho.rst
@@ -0,0 +1,46 @@
+Hoststack Throughput Testing over QUIC/UDP/IP with vpp_echo
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+`vpp_echo performance testing tool <https://wiki.fd.io/view/VPP/HostStack#External_Echo_Server.2FClient_.28vpp_echo.29>`_
+is a bespoke performance test application which utilizes the 'native
+HostStack APIs' to verify performance and correct handling of
+connection/stream events with uni-directional and bi-directional
+streams of data.
+
+Because iperf3 does not support the QUIC transport protocol, vpp_echo
+is used for measuring the maximum attainable bandwidth of the VPP Host
+Stack connection utilizing the QUIC transport protocol across two
+instances of VPP running on separate DUT nodes. The QUIC transport
+protocol supports multiple streams per connection, and test cases
+utilize different combinations of QUIC connections and number of
+streams per connection.
+
+The test configuration is as follows:
+
+::
+
+       DUT1                       Network                       DUT2
+   [ vpp_echo-client -> VPP1 ]===========[ VPP2 -> vpp_echo-server ]
+                        N-streams/connection
+
+where,
+
+1. vpp_echo server attaches to VPP2 and LISTENs on VPP2:UDP port 1234.
+2. vpp_echo client creates one or more connections to VPP1 and opens
+   one or more streams per connection to VPP2:UDP port 1234.
+3. vpp_echo client transmits a uni-directional stream as fast as the
+   VPP Host Stack allows to the vpp_echo server for the test duration.
+4. At the end of the test the vpp_echo client emits the goodput
+   measurements for all streams and the sum of all streams.
+
+Test cases include:
+
+1. 1 QUIC connection with 1 stream
+2. 1 QUIC connection with 10 streams
+3. 10 QUIC connections with 1 stream
+4. 10 QUIC connections with 10 streams
+
+with stream sizes chosen to provide reasonable test durations. The VPP
+Host Stack QUIC transport is configured to utilize the picotls
+encryption library. In the future, tests utilizing additional
+encryption algorithms will be added.
diff --git a/docs/report/introduction/methodology_hoststack_testing/methodology_tcp_with_iperf3.rst b/docs/report/introduction/methodology_hoststack_testing/methodology_tcp_with_iperf3.rst
new file mode 100644
index 0000000000..1355a3cb21
--- /dev/null
+++ b/docs/report/introduction/methodology_hoststack_testing/methodology_tcp_with_iperf3.rst
@@ -0,0 +1,43 @@
+Hoststack Throughput Testing over TCP/IP with iperf3
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+`iperf3 goodput measurement tool <https://github.com/esnet/iperf>`_
+is used for measuring the maximum attainable goodput of the VPP Host
+Stack connection across two instances of VPP running on separate DUT
+nodes. iperf3 is a popular open source tool for active measurements
+of the maximum achievable goodput on IP networks.
+
+Because iperf3 utilizes the POSIX socket interface APIs, the current
+test configuration utilizes the LD_PRELOAD mechanism of the Linux
+dynamic linker to connect iperf3 to the VPP Host Stack using the VPP
+Communications Library (VCL) LD_PRELOAD library (libvcl_ldpreload.so).
+
+In the future, a forked version of iperf3 which has been modified to
+directly use the VCL application APIs may be added to determine the
+difference in performance of 'VCL Native' applications versus utilizing
+LD_PRELOAD, which inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+::
+
+       DUT1                    Network                    DUT2
+   [ iperf3-client -> VPP1 ]===========[ VPP2 -> iperf3-server ]
+
+where,
+
+1. iperf3 server attaches to VPP2 and LISTENs on VPP2:TCP port 5201.
+2. iperf3 client attaches to VPP1 and opens one or more stream
+   connections to VPP2:TCP port 5201.
+3. iperf3 client transmits a uni-directional stream as fast as the
+   VPP Host Stack allows to the iperf3 server for the test duration.
+4. At the end of the test the iperf3 client emits the goodput
+   measurements for all streams and the sum of all streams.
+
+Test cases include 1 and 10 streams with a 20-second test duration,
+with the VPP Host Stack configured to utilize the Cubic TCP
+congestion algorithm.
+
+Note: iperf3 is single-threaded, so it is expected that the 10-stream
+test does not show any performance improvement due to
+multi-thread/multi-core execution.
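To make the LD_PRELOAD mechanism in the iperf3 file above concrete, a minimal
sketch of the two-sided invocation follows. The library and VCL configuration
paths are placeholders, and the actual CSIT harness command line differs;
VCL_CONFIG points the VCL library at its configuration file:

    # DUT2: server side; VCL redirects iperf3's socket calls into VPP2
    VCL_CONFIG=/path/to/vcl.conf LD_PRELOAD=/path/to/libvcl_ldpreload.so \
        iperf3 -s -p 5201

    # DUT1: client side; 10 parallel streams (-P 10), 20 s duration (-t 20)
    VCL_CONFIG=/path/to/vcl.conf LD_PRELOAD=/path/to/libvcl_ldpreload.so \
        iperf3 -c <VPP2-ip> -p 5201 -t 20 -P 10

With a single stream, drop -P 10 (iperf3 defaults to one stream); per the note
above, the single-threaded client is not expected to scale with stream count.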
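Returning to the QUIC/vpp_echo test two files above: the exact vpp_echo
arguments are documented on the linked wiki page. As a rough, unverified
sketch of the shape of a 10-connection, 10-streams-per-connection run
(argument names here are assumptions, not confirmed against the vpp_echo
build used in this report; the address is a placeholder):

    # DUT2: vpp_echo server LISTENing for QUIC connections on UDP port 1234
    vpp_echo server uri quic://0.0.0.0/1234

    # DUT1: vpp_echo client; counts correspond to test case 4 above
    vpp_echo client uri quic://<VPP2-ip>/1234 nclients 10 quic-streams 10

At completion the client emits per-stream goodput and the sum across all
streams, which is the figure reported in the results.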