Overview
========

For a description of the physical testbeds used for DPDK performance
tests, please refer to :ref:`tested_physical_topologies`.

.. _tested_logical_topologies:

Logical Topologies
------------------

CSIT DPDK performance tests are executed on physical testbeds described
in :ref:`tested_physical_topologies`. Based on the packet path through
server SUTs, one distinct logical topology type is used for DPDK DUT
data plane testing:

#. NIC-to-NIC switching topologies.

NIC-to-NIC Switching
~~~~~~~~~~~~~~~~~~~~

The simplest logical topology for a software data plane application
like DPDK is NIC-to-NIC switching. Tested topologies for 2-Node and
3-Node testbeds are shown in the figures below.

.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/vpp_performance_tests/}}
                \includegraphics[width=0.90\textwidth]{logical-2n-nic2nic}
                \label{fig:logical-2n-nic2nic}
        \end{figure}

.. only:: html

    .. figure:: ../vpp_performance_tests/logical-2n-nic2nic.svg
        :alt: logical-2n-nic2nic
        :align: center


.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/vpp_performance_tests/}}
                \includegraphics[width=0.90\textwidth]{logical-3n-nic2nic}
                \label{fig:logical-3n-nic2nic}
        \end{figure}

.. only:: html

    .. figure:: ../vpp_performance_tests/logical-3n-nic2nic.svg
        :alt: logical-3n-nic2nic
        :align: center

Server Systems Under Test (SUT) run the DPDK Testpmd/L3FWD application
in Linux user mode as a Device Under Test (DUT). The Server Traffic
Generator (TG) runs the T-Rex application. Physical connectivity
between SUTs and TG is provided using different drivers and NIC models
that need to be tested for performance (packet/bandwidth throughput
and latency).
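
For illustration, a minimal sketch of launching a Testpmd DUT instance
in NIC-to-NIC io forwarding mode from Python is shown below. The PCI
addresses, core list and queue setup are hypothetical placeholders,
not the exact parameters used in CSIT test runs:

.. code-block:: python

    import subprocess

    # Hypothetical PCI addresses of the two SUT NIC ports under test.
    cmd = [
        "dpdk-testpmd",
        "-l", "0,1",           # EAL: core 0 as main, core 1 forwarding
        "-n", "4",             # EAL: number of memory channels
        "-a", "0000:03:00.0",  # allow-list first physical port
        "-a", "0000:03:00.1",  # allow-list second physical port
        "--",
        "--forward-mode=io",   # plain port-to-port forwarding, no rewrite
        "--nb-cores=1",        # one worker core forwards between the ports
        "--auto-start",        # start forwarding without the prompt
    ]
    subprocess.run(cmd, check=True)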

From the SUT and DUT perspectives, all performance tests involve
forwarding packets between two physical Ethernet ports (10GE, 25GE,
40GE, 100GE). In most cases both physical ports on the SUT are located
on the same NIC. The only exceptions are link bonding and 100GE tests.
In the latter case only one port per NIC can be driven at line rate
due to PCIe Gen3 x16 slot bandwidth limitations; 100GE NICs are not
supported in PCIe Gen3 x8 slots.
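
This constraint follows from simple arithmetic: a PCIe Gen3 lane
carries 8 GT/s with 128b/130b encoding, so a x16 slot provides
approximately

.. math::

   16 \times 8\,\mathrm{GT/s} \times \frac{128}{130}
   \approx 126\,\mathrm{Gbit/s}

per direction, below the 200 Gbit/s required to drive both ports of a
2-port 100GE NIC at line rate.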

Note that the reported DPDK DUT performance results are specific to
the SUTs tested. SUTs with processors other than the ones used in the
FD.io lab are likely to yield different results. A good rule of thumb
for estimating DPDK packet throughput in the NIC-to-NIC switching
topology is to expect forwarding performance proportional to processor
core frequency for the same processor architecture, assuming the
processor is the only limiting factor and all other SUT parameters are
equivalent to the FD.io CSIT environment; a worked example follows
below.
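
As a purely illustrative example (the numbers are hypothetical, not
measured CSIT results), the rule of thumb scales a rate
:math:`R_{meas}` measured at reference core frequency :math:`f_{ref}`
to an estimate at frequency :math:`f`:

.. math::

   R_{est} = R_{meas} \times \frac{f}{f_{ref}}

so 12 Mpps measured at 2.5 GHz would suggest roughly
:math:`12 \times 3.0 / 2.5 \approx 14.4` Mpps at 3.0 GHz on the same
processor architecture.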

Performance Tests Coverage
--------------------------

Performance tests measure the following metrics for tested DPDK DUT
topologies and configurations:

- Packet Throughput: measured in accordance with :rfc:`2544`, using
  FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized
  binary search algorithm, producing throughput at different Packet
  Loss Ratio (PLR) values (see the simplified sketch after this
  list):

  - Non Drop Rate (NDR): packet throughput at PLR=0%.
  - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.

- One-Way Packet Latency: measured at different offered packet loads:

  - 100% of discovered NDR throughput.
  - 100% of discovered PDR throughput.

- Maximum Receive Rate (MRR): measured packet forwarding rate under
  the maximum load offered by the traffic generator over a set trial
  duration, regardless of packet loss. The maximum load for a
  specified Ethernet frame size is set to the bi-directional link
  rate.
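
To make the search logic concrete, the following is a simplified
Python sketch of a rate search for a single PLR target. It is a plain
bisection with an assumed ``run_trial`` helper, not the actual
MLRsearch implementation, which additionally reuses trial results
across the NDR and PDR targets and varies trial durations:

.. code-block:: python

    def find_throughput(run_trial, max_rate, target_plr, precision=0.005):
        """Return the highest offered load whose measured Packet Loss
        Ratio stays at or below target_plr.

        run_trial(rate) is an assumed helper: it offers `rate` packets
        per second for one trial and returns the observed loss ratio.
        """
        lower, upper = 0.0, max_rate
        while (upper - lower) / max_rate > precision:
            rate = (lower + upper) / 2.0
            if run_trial(rate) <= target_plr:
                lower = rate  # trial passed: search the upper half
            else:
                upper = rate  # trial failed: search the lower half
        return lower

    # NDR corresponds to target_plr=0.0, PDR to target_plr=0.005.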

|csit-release| includes the following performance test suites:

- **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames
  between two interfaces.

- **IPv4 Routed Forwarding** - L3 IP forwarding of Ethernet frames
  between two interfaces (see the sketch below).
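
As with Testpmd, a minimal sketch of launching an L3FWD DUT instance
for the IPv4 routed forwarding suite is shown below; the core list,
port mask and (port, queue, lcore) mapping are hypothetical
placeholders:

.. code-block:: python

    import subprocess

    # Hypothetical mapping: ports 0 and 1, one RX queue each,
    # polled by worker cores 1 and 2 respectively.
    cmd = [
        "dpdk-l3fwd",
        "-l", "0,1,2",  # EAL: main core plus two worker cores
        "-n", "4",      # EAL: number of memory channels
        "--",
        "-p", "0x3",    # port mask: enable ports 0 and 1
        "-P",           # promiscuous mode on enabled ports
        "--config", "(0,0,1),(1,0,2)",  # (port,queue,lcore) mapping
    ]
    subprocess.run(cmd, check=True)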

Execution of performance tests takes time, especially the throughput
tests. Due to limited HW testbed resources available within FD.io labs
hosted by :abbr:`LF (Linux Foundation)`, the number of tests for some
NIC models has been limited to a few baseline tests.

Performance Tests Naming
------------------------

FD.io |csit-release| follows a common structured naming convention for
all performance and system functional tests, introduced in CSIT-17.01.

The naming should be intuitive for the majority of tests. A complete
description of the FD.io CSIT test naming convention is provided in
:ref:`csit_test_naming`.