Diffstat (limited to 'docs/report/dpdk_performance_tests')
-rw-r--r--  docs/report/dpdk_performance_tests/csit_release_notes.rst             |  62
-rw-r--r--  docs/report/dpdk_performance_tests/overview.rst                       | 292
-rw-r--r--  docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst    |   2
-rw-r--r--  docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst |   2
-rw-r--r--  docs/report/dpdk_performance_tests/test_environment.rst               |  50
5 files changed, 140 insertions, 268 deletions
diff --git a/docs/report/dpdk_performance_tests/csit_release_notes.rst b/docs/report/dpdk_performance_tests/csit_release_notes.rst
index 363c2e70f1..28e6a37768 100644
--- a/docs/report/dpdk_performance_tests/csit_release_notes.rst
+++ b/docs/report/dpdk_performance_tests/csit_release_notes.rst
@@ -4,7 +4,26 @@ Release Notes
Changes in |csit-release|
-------------------------
-No code changes apart from bug fixes.
+#. **DPDK performance tests**
+
+ - *MRR tests* - Maximum Receive Rate tests measure the packet forwarding rate
+ under the maximum load offered by the traffic generator over a set trial
+ duration, regardless of packet loss. MRR tests are used for continuous
+ performance trending and for comparison between releases.
+
+ - *MLR tests* - NDR and PDR tests measure the packet forwarding rate using
+ the MLRsearch library driving the traffic generator. All tests that
+ previously used binary search were converted to MLRsearch.
+
+ - *2-node tests* - Set of 2-node tests covering testpmd and l3fwd.
+
+ - Increased coverage of NIC-specific tests (Intel-xxv710-da2, Intel-x710).
+
+ - *Generated tests* - Simplified and unified test structure,
+ semi-autogenerated by a generator script. The test generator is currently
+ able to create tests with various combinations of frame sizes and core
+ counts. All existing test cases were converted to the new format.
+
Performance Changes
-------------------
@@ -12,19 +31,15 @@ Performance Changes
Relative performance changes in measured packet throughput in |csit-release|
are calculated against the results from |csit-release-1|
report. Listed mean and standard deviation values are computed based on
-a series of the same tests executed against respective VPP releases to
+a series of the same tests executed against respective DPDK releases to
verify test results repeatability, with percentage change calculated for
-mean values. Note that the standard deviation is quite high for a small
-number of packet throughput tests, what indicates poor test results
-repeatability and makes the relative change of mean throughput value not
-fully representative for these tests. The root causes behind poor
-results repeatability vary between the test cases.
+mean values.
NDR Changes
~~~~~~~~~~~
-NDR throughput changes between releases are available in a
-CSV and pretty ASCII formats:
+NDR throughput changes between releases are available in a CSV and pretty ASCII
+formats:
- `csv format for 1t1c <../_static/dpdk/performance-changes-1t1c-ndr.csv>`_,
- `csv format for 2t2c <../_static/dpdk/performance-changes-2t2c-ndr.csv>`_,
@@ -36,14 +51,14 @@ CSV and pretty ASCII formats:
Test results have been generated by
`FD.io test executor dpdk performance job 3n-hsw`_
with Robot Framework result
- files csit-vpp-perf-|srelease|-\*.zip
+ files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../_static/archive/>`_.
PDR Changes
~~~~~~~~~~~
-PDR throughput changes between releases are available in a
-CSV and pretty ASCII formats:
+PDR throughput changes between releases are available in a CSV and pretty ASCII
+formats:
- `csv format for 1t1c <../_static/dpdk/performance-changes-1t1c-pdr.csv>`_,
- `csv format for 2t2c <../_static/dpdk/performance-changes-2t2c-pdr.csv>`_,
@@ -55,20 +70,20 @@ CSV and pretty ASCII formats:
Test results have been generated by
`FD.io test executor dpdk performance job 3n-hsw`_
with Robot Framework result
- files csit-vpp-perf-|srelease|-\*.zip
+ files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../_static/archive/>`_.
Comparison Across Testbeds
--------------------------
-Relative performance changes in measured packet throughputon 3-Node Skx testbed
+Relative performance changes in measured packet throughput on 3-Node Skx testbed
are calculated against the results measured on 3-Node Hsw testbed.
NDR Changes
~~~~~~~~~~~
-NDR changes between testbeds are available in a
-CSV and pretty ASCII formats:
+NDR throughput changes between testbeds are available in a CSV and pretty ASCII
+formats:
- `csv format for ndr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-ndr.csv>`_,
- `pretty ASCII format for ndr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-ndr.txt>`_.
@@ -79,14 +94,14 @@ CSV and pretty ASCII formats:
`FD.io test executor dpdk performance job 3n-hsw`_ and
`FD.io test executor dpdk performance job 3n-skx`_
with Robot Framework result
- files csit-vpp-perf-|srelease|-\*.zip
+ files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../_static/archive/>`_.
PDR Changes
~~~~~~~~~~~
-PDR throughput changes between testbeds are available in a
-CSV and pretty ASCII formats:
+PDR throughput changes between testbeds are available in a CSV and pretty ASCII
+formats:
- `csv format for pdr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-pdr.csv>`_,
- `pretty ASCII format for pdr <../_static/dpdk/performance-compare-testbeds-3n-hsw-3n-skx-pdr.txt>`_.
@@ -97,7 +112,7 @@ CSV and pretty ASCII formats:
`FD.io test executor dpdk performance job 3n-hsw`_ and
`FD.io test executor dpdk performance job 3n-skx`_
with Robot Framework result
- files csit-vpp-perf-|srelease|-\*.zip
+ files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../_static/archive/>`_.
Known Issues
@@ -108,10 +123,5 @@ Here is the list of known issues in |csit-release| for Testpmd performance tests
+---+---------------------------------------------------+------------+-----------------------------------------------------------------+
| # | Issue | Jira ID | Description |
+---+---------------------------------------------------+------------+-----------------------------------------------------------------+
-| 1 | Testpmd in 1t1c and 2t2c setups - large variation | CSIT-569 | Suspected NIC firmware or DPDK driver issue affecting NDR |
-| | of discovered NDR throughput values across | | throughput. Applies to XL710 and X710 NICs, no issues observed |
-| | multiple test runs with xl710 and x710 NICs. | | on x520 NICs. |
-+---+---------------------------------------------------+------------+-----------------------------------------------------------------+
-| 2 | Lower than expected NDR throughput with xl710 | CSIT-571 | Suspected NIC firmware or DPDK driver issue affecting NDR |
-| | and x710 NICs, compared to x520 NICs. | | throughput. Applies to XL710 and X710 NICs. |
+| | No known issues | | |
+---+---------------------------------------------------+------------+-----------------------------------------------------------------+
diff --git a/docs/report/dpdk_performance_tests/overview.rst b/docs/report/dpdk_performance_tests/overview.rst
index b38f9595be..e6abb53c90 100644
--- a/docs/report/dpdk_performance_tests/overview.rst
+++ b/docs/report/dpdk_performance_tests/overview.rst
@@ -1,239 +1,127 @@
Overview
========
-Tested Physical Topologies
---------------------------
-
-CSIT DPDK performance tests are executed on physical baremetal servers hosted
-by :abbr:`LF (Linux Foundation)` FD.io project. Testbed physical topology is
-shown in the figure below.::
-
- +------------------------+ +------------------------+
- | | | |
- | +------------------+ | | +------------------+ |
- | | | | | | | |
- | | <-----------------> | |
- | | DUT1 | | | | DUT2 | |
- | +--^---------------+ | | +---------------^--+ |
- | | | | | |
- | | SUT1 | | SUT2 | |
- +------------------------+ +------------------^-----+
- | |
- | |
- | +-----------+ |
- | | | |
- +------------------> TG <------------------+
- | |
- +-----------+
-
-SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
-Intel XEON CPUs), TG is a Traffic Generator (TG, another Cisco UCS C240, with
-two Intel XEON CPUs). SUTs run Testpmd/L3FWD SW SW application in Linux
-user-mode as a Device Under Test (DUT). TG runs TRex SW application as a packet
-Traffic Generator. Physical connectivity between SUTs and to TG is provided
-using different NIC models that need to be tested for performance. Currently
-installed and tested NIC models include:
-
-#. 2port10GE X520-DA2 Intel.
-#. 2port10GE X710 Intel.
-#. 2port10GE VIC1227 Cisco.
-#. 2port40GE VIC1385 Cisco.
-#. 2port40GE XL710 Intel.
-
-From SUT and DUT perspective, all performance tests involve forwarding packets
-between two physical Ethernet ports (10GE or 40GE). Due to the number of
-listed NIC models tested and available PCI slot capacity in SUT servers, in
-all of the above cases both physical ports are located on the same NIC. In
-some test cases this results in measured packet throughput being limited not
-by VPP DUT but by either the physical interface or the NIC capacity.
-
-Going forward CSIT project will be looking to add more hardware into FD.io
-performance labs to address larger scale multi-interface and multi-NIC
-performance testing scenarios.
-
-Note that reported DUT (DPDK) performance results are specific to the SUTs
-tested. Current :abbr:`LF (Linux Foundation)` FD.io SUTs are based on Intel
-XEON E5-2699v3 2.3GHz CPUs. SUTs with other CPUs are likely to yield different
-results. A good rule of thumb, that can be applied to estimate DPDK packet
-thoughput for Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology, is to expect
-the forwarding performance to be proportional to CPU core frequency,
-assuming CPU is the only limiting factor and all other SUT parameters
-equivalent to FD.io CSIT environment. The same rule of thumb can be also
-applied for Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but due to
-much higher dependency on intensive memory operations and sensitivity to Linux
-kernel scheduler settings and behaviour, this estimation may not always yield
-good enough accuracy.
-
-For detailed :abbr:`LF (Linux Foundation)` FD.io test bed specification and
-physical topology please refer to `LF FD.io CSIT testbed wiki page
-<https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
+For a description of the physical testbeds used for DPDK performance tests,
+please refer to :ref:`tested_physical_topologies`.
-Performance Tests Coverage
---------------------------
+.. _tested_logical_topologies:
-Performance tests are split into two main categories:
+Logical Topologies
+------------------
-- Throughput discovery - discovery of packet forwarding rate using binary search
- in accordance to :rfc:`2544`.
+CSIT DPDK performance tests are executed on physical testbeds described
+in :ref:`tested_physical_topologies`. Based on the packet path thru
+server SUTs, one distinct logical topology type is used for DPDK DUT
+data plane testing:
- - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
- followed by one-way packet latency measurements at 10%, 50% and 100% of
- discovered NDR throughput.
- - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
- currently set to 0.5%; followed by one-way packet latency measurements at
- 100% of discovered PDR throughput.
-
-- Throughput verification - verification of packet forwarding rate against
- previously discovered throughput rate. These tests are currently done against
- 0.9 of reference NDR, with reference rates updated periodically.
-
-|csit-release| includes following performance test suites, listed per NIC type:
+#. NIC-to-NIC switching topologies.
-- 2port10GE X520-DA2 Intel
+NIC-to-NIC Switching
+~~~~~~~~~~~~~~~~~~~~
- - **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
- two Interfaces.
+The simplest logical topology for a software data plane application like
+DPDK is NIC-to-NIC switching. Tested topologies for 2-Node and 3-Node
+testbeds are shown in the figures below.
-- 2port40GE XL710 Intel
+.. only:: latex
- - **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
- two Interfaces.
-
-- 2port10GE X520-DA2 Intel
-
- - **IPv4 Routed Forwarding** - L3 IP forwarding of Ethernet frames between
- two Interfaces.
-
-Execution of performance tests takes time, especially the throughput discovery
-tests. Due to limited HW testbed resources available within FD.io labs hosted
-by Linux Foundation, the number of tests for NICs other than X520 (a.k.a.
-Niantic) has been limited to few baseline tests. Over time we expect the HW
-testbed resources to grow, and will be adding complete set of performance
-tests for all models of hardware to be executed regularly and(or)
-continuously.
-
-Performance Tests Naming
-------------------------
+ .. raw:: latex
-|csit-release| follows a common structured naming convention for all performance
-and system functional tests, introduced in CSIT-17.01.
+ \begin{figure}[H]
+ \centering
+ \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-2n-nic2nic}
+ \label{fig:logical-2n-nic2nic}
+ \end{figure}
-The naming should be intuitive for majority of the tests. Complete description
-of CSIT test naming convention is provided on :ref:`csit_test_naming`.
+.. only:: html
-Methodology: Multi-Core and Multi-Threading
--------------------------------------------
+ .. figure:: ../vpp_performance_tests/logical-2n-nic2nic.svg
+ :alt: logical-2n-nic2nic
+ :align: center
-**Intel Hyper-Threading** - |csit-release| performance tests are executed with
-SUT servers' Intel XEON processors configured in Intel Hyper-Threading Disabled
-mode (BIOS setting). This is the simplest configuration used to establish
-baseline single-thread single-core application packet processing and forwarding
-performance. Subsequent releases of CSIT will add performance tests with Intel
-Hyper-Threading Enabled (requires BIOS settings change and hard reboot of
-server).
-**Multi-core Tests** - |csit-release| multi-core tests are executed in the
-following VPP thread and core configurations:
+.. only:: latex
-#. 1t1c - 1 pmd worker thread on 1 CPU physical core.
-#. 2t2c - 2 pmd worker threads on 2 CPU physical cores.
+ .. raw:: latex
-Note that in many tests running Testpmd/L3FWD reaches tested NIC I/O bandwidth
-or packets-per-second limit.
+ \begin{figure}[H]
+ \centering
+ \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-3n-nic2nic}
+ \label{fig:logical-3n-nic2nic}
+ \end{figure}
-Methodology: Packet Throughput
-------------------------------
+.. only:: html
-Following values are measured and reported for packet throughput tests:
+ .. figure:: ../vpp_performance_tests/logical-3n-nic2nic.svg
+ :alt: logical-3n-nic2nic
+ :align: center
-- NDR binary search per :rfc:`2544`:
+Server Systems Under Test (SUTs) run the DPDK Testpmd/L3FWD application in
+Linux user-mode as a Device Under Test (DUT). The Server Traffic Generator
+(TG) runs the T-Rex application. Physical connectivity between SUTs and TG is provided
+using different drivers and NIC models that need to be tested for performance
+(packet/bandwidth throughput and latency).
- - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps
- (2x <per direction packets-per-second>)"
- - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
- second> Gbps (untagged)"
+From SUT and DUT perspectives, all performance tests involve forwarding
+packets between two physical Ethernet ports (10GE, 25GE, 40GE, 100GE).
+In most cases both physical ports on SUT are located on the same
+NIC. The only exceptions are link bonding and 100GE tests. In the latter
+case only one port per NIC can be driven at linerate due to PCIe Gen3
+x16 slot bandwidth limitations. 100GE NICs are not supported in PCIe Gen3
+x8 slots.
-- PDR binary search per :rfc:`2544`:
+Note that reported DPDK DUT performance results are specific to the SUTs
+tested. SUTs with other processors than the ones used in FD.io lab are
+likely to yield different results. A good rule of thumb, that can be
+applied to estimate DPDK packet throughput for NIC-to-NIC switching
+topology, is to expect the forwarding performance to be proportional to
+processor core frequency for the same processor architecture, assuming
+processor is the only limiting factor and all other SUT parameters are
+equivalent to FD.io CSIT environment.
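+
+As an illustrative reading of this rule of thumb (the frequencies below are
+example values, not a measured FD.io configuration), the estimate is a
+simple proportionality:
+
+.. math::
+
+   T_{est} = T_{meas} \times \frac{f_{target}}{f_{ref}}
+
+For example, a test measuring 10 Mpps on a 2.3 GHz reference core would be
+expected to yield roughly 10 × 3.0 / 2.3 ≈ 13 Mpps on an otherwise
+equivalent 3.0 GHz core of the same processor architecture.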
- - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x
- <per direction packets-per-second>)"
- - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
- second> Gbps (untagged)"
- - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets
- lost at PDR rate>""
-
-- NDR and PDR are measured for the following L2 frame sizes:
-
- - IPv4: 64B, 1518B, 9000B.
-
-All rates are reported from external Traffic Generator perspective.
-
-
-Methodology: Packet Latency
----------------------------
-
-TRex Traffic Generator (TG) is used for measuring latency of Testpmd DUTs.
-Reported latency values are measured using following methodology:
-
-- Latency tests are performed at 10%, 50% of discovered NDR rate (non drop rate)
- for each NDR throughput test and packet size (except IMIX).
-- TG sends dedicated latency streams, one per direction, each at the rate of
- 10kpps at the prescribed packet size; these are sent in addition to the main
- load streams.
-- TG reports min/avg/max latency values per stream direction, hence two sets
- of latency values are reported per test case; future release of TRex is
- expected to report latency percentiles.
-- Reported latency values are aggregate across two SUTs due to three node
- topology used for all performance tests; for per SUT latency, reported value
- should be divided by two.
-- 1usec is the measurement accuracy advertised by TRex TG for the setup used in
- FD.io labs used by CSIT project.
-- TRex setup introduces an always-on error of about 2*2usec per latency flow -
- additonal Tx/Rx interface latency induced by TRex SW writing and reading
- packet timestamps on CPU cores without HW acceleration on NICs closer to the
- interface line.
+Performance Tests Coverage
+--------------------------
-Methodology: TRex Traffic Generator Usage
------------------------------------------
+Performance tests measure the following metrics for tested DPDK DUT
+topologies and configurations:
-The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
-CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
-throughputs using binary search (NDR and PDR discovery tests) and for quick
-checks of DUT performance against the reference NDRs (NDR check tests) for
-specific configuration.
+- Packet Throughput: measured in accordance with :rfc:`2544`, using
+ FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized binary
+ search algorithm, producing throughput at different Packet Loss Ratio
+ (PLR) values (a simplified search sketch follows this list):
-TRex is installed and run on the TG compute node. The typical procedure is:
+ - Non Drop Rate (NDR): packet throughput at PLR=0%.
+ - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.
-- If the TRex is not already installed on TG, it is installed in the
- suite setup phase - see `TRex intallation`_.
-- TRex configuration is set in its configuration file
- ::
+- One-Way Packet Latency: measured at different offered packet loads:
- /etc/trex_cfg.yaml
+ - 100% of discovered NDR throughput.
+ - 100% of discovered PDR throughput.
-- TRex is started in the background mode
- ::
+- Maximum Receive Rate (MRR): measured packet forwarding rate under the
+ maximum load offered by the traffic generator over a set trial duration,
+ regardless of packet loss. Maximum load for a specified Ethernet frame
+ size is set to the bi-directional link rate.
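+
+As an illustrative calculation of the maximum load (assuming a 10GE link
+and 64B frames, with the standard 20B per-frame Ethernet wire overhead of
+preamble, SFD and inter-frame gap), the per-direction rate is:
+
+.. math::
+
+   R_{max} = \frac{10 \times 10^9\,\mathrm{bps}}{(64 + 20)\,\mathrm{B}
+   \times 8\,\mathrm{b/B}} \approx 14.88\,\mathrm{Mpps}
+
+i.e. a bi-directional maximum load of approximately 29.76 Mpps.
+
+Below is a minimal sketch of a rate search toward a single loss-ratio
+target. It is a plain binary search with a hypothetical
+measure_loss_ratio helper, shown for illustration only; the actual
+MLRsearch library optimizes trial durations and searches for NDR and PDR
+concurrently:
+
+.. code-block:: bash
+
+    # Illustrative binary rate search toward one target PLR. Assumes a
+    # hypothetical measure_loss_ratio helper that prints the observed
+    # loss ratio in percent for a given offered rate in pps.
+    lo=0; hi=14880952; target_plr="0.5"   # PDR target; use 0 for NDR
+    for _ in $(seq 1 12); do
+        rate=$(( (lo + hi) / 2 ))
+        loss=$(measure_loss_ratio "${rate}")
+        if [ "$(echo "${loss} <= ${target_plr}" | bc -l)" -eq 1 ]; then
+            lo=${rate}   # loss within target, try higher rates
+        else
+            hi=${rate}   # loss too high, try lower rates
+        fi
+    done
+    echo "discovered rate: ${lo} pps"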
- $ sh -c 'cd <t-rex-install-dir>/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null
+|csit-release| includes the following performance test suites:
-- There are traffic streams dynamically prepared for each test, based on traffic
- profiles. The traffic is sent and the statistics obtained using
- :command:`trex_stl_lib.api.STLClient`.
+- **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
+ two Interfaces.
-**Measuring packet loss**
+- **IPv4 Routed Forwarding** - L3 IP forwarding of Ethernet frames between
+ two Interfaces.
-- Create an instance of STLClient
-- Connect to the client
-- Add all streams
-- Clear statistics
-- Send the traffic for defined time
-- Get the statistics
+Execution of performance tests takes time, especially the throughput
+tests. Due to limited HW testbed resources available within FD.io labs
+hosted by :abbr:`LF (Linux Foundation)`, the number of tests for some
+NIC models has been limited to a few baseline tests.
-If there is a warm-up phase required, the traffic is sent also before test and
-the statistics are ignored.
+Performance Tests Naming
+------------------------
-**Measuring latency**
+FD.io |csit-release| follows a common structured naming convention for
+all performance and system functional tests, introduced in CSIT-17.01.
-If measurement of latency is requested, two more packet streams are created (one
-for each direction) with TRex flow_stats parameter set to STLFlowLatencyStats. In
-that case, returned statistics will also include min/avg/max latency values.
+The naming should be intuitive for the majority of the tests. Complete
+description of FD.io CSIT test naming convention is provided on
+:ref:`csit_test_naming`.
diff --git a/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst b/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
index 014d1095a7..43f557c517 100644
--- a/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
+++ b/docs/report/dpdk_performance_tests/packet_latency_graphs/index.rst
@@ -22,7 +22,7 @@ TGint2-to-SUT2-to-SUT1-to-TGint1.
`FD.io test executor dpdk performance job 3n-hsw`_,
`FD.io test executor dpdk performance job 3n-skx`_ and
`FD.io test executor dpdk performance job 2n-skx`_ with Robot Framework
- result files csit-vpp-perf-|srelease|-\*.zip
+ result files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../../_static/archive/>`_.
Plotted data set size per test case is equal to the number of job executions
presented in this report version: **10**.
diff --git a/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst b/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
index a243922798..cfa75419e6 100644
--- a/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
+++ b/docs/report/dpdk_performance_tests/packet_throughput_graphs/index.rst
@@ -31,7 +31,7 @@ and their indices.
`FD.io test executor dpdk performance job 3n-hsw`_,
`FD.io test executor dpdk performance job 3n-skx`_ and
`FD.io test executor dpdk performance job 2n-skx`_ with Robot Framework
- result files csit-vpp-perf-|srelease|-\*.zip
+ result files csit-dpdk-perf-|srelease|-\*.zip
`archived here <../../_static/archive/>`_.
Plotted data set size per test case is equal to the number of job executions
presented in this report version: **10**.
diff --git a/docs/report/dpdk_performance_tests/test_environment.rst b/docs/report/dpdk_performance_tests/test_environment.rst
index 204792015f..7de29ee332 100644
--- a/docs/report/dpdk_performance_tests/test_environment.rst
+++ b/docs/report/dpdk_performance_tests/test_environment.rst
@@ -1,8 +1,8 @@
-.. include:: ../vpp_performance_tests/test_environment_intro.rst
+.. include:: ../introduction/test_environment_intro.rst
-.. include:: ../vpp_performance_tests/test_environment_sut_conf_1.rst
+.. include:: ../introduction/test_environment_sut_conf_1.rst
-.. include:: ../vpp_performance_tests/test_environment_sut_conf_3.rst
+.. include:: ../introduction/test_environment_sut_conf_3.rst
DUT Configuration - DPDK
@@ -20,49 +20,23 @@ DUT Configuration - DPDK
**Testpmd Startup Configuration**
-Testpmd startup configuration changes per test case with different settings for CPU
-cores, rx-queues. Startup config is aligned with applied test case tag:
-
-Tagged by **1T1C**
-
-.. code-block:: bash
-
- testpmd -c 0x3 -n 4 -- --numa --nb-ports=2 --portmask=0x3 --nb-cores=1 --max-pkt-len=9000 --txqflags=0 --forward-mode=io --rxq=1 --txq=1 --burst=64 --rxd=1024 --txd=1024 --disable-link-check --auto-start
-
-Tagged by **2T2C**
-
-.. code-block:: bash
-
- testpmd -c 0x403 -n 4 -- --numa --nb-ports=2 --portmask=0x3 --nb-cores=2 --max-pkt-len=9000 --txqflags=0 --forward-mode=io --rxq=1 --txq=1 --burst=64 --rxd=1024 --txd=1024 --disable-link-check --auto-start
-
-Tagged by **4T4C**
+Testpmd startup configuration changes per test case, with different settings
+for `$$CORES`, `$$RXQ`, and the max-pkt-len parameter if the test is sending
+jumbo frames. Startup command template:
.. code-block:: bash
- testpmd -c 0xc07 -n 4 -- --numa --nb-ports=2 --portmask=0x3 --nb-cores=4 --max-pkt-len=9000 --txqflags=0 --forward-mode=io --rxq=2 --txq=2 --burst=64 --rxd=1024 --txd=1024 --disable-link-check --auto-start
+ testpmd -c $$CORE_MASK -n 4 -- --numa --nb-ports=2 --portmask=0x3 --nb-cores=$$CORES --max-pkt-len=9000 --txqflags=0 --forward-mode=io --rxq=$$RXQ --txq=$$TXQ --burst=64 --rxd=1024 --txd=1024 --disable-link-check --auto-start
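+
+For illustration, a 2-core instantiation of this template might look as
+follows; the substituted values (core mask 0x403, 2 cores, 1 RX/TX queue)
+mirror the former 2T2C configuration shown above and will differ per testbed:
+
+.. code-block:: bash
+
+    # Hypothetical substitution: $$CORE_MASK=0x403, $$CORES=2, $$RXQ=1, $$TXQ=1.
+    testpmd -c 0x403 -n 4 -- --numa --nb-ports=2 --portmask=0x3 --nb-cores=2 --max-pkt-len=9000 --txqflags=0 --forward-mode=io --rxq=1 --txq=1 --burst=64 --rxd=1024 --txd=1024 --disable-link-check --auto-start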
**L3FWD Startup Configuration**
-L3FWD startup configuration changes per test case with different settings for CPU
-cores, rx-queues. Startup config is aligned with applied test case tag:
-
-Tagged by **1T1C**
-
-.. code-block:: bash
-
- l3fwd -l 1 -n 4 -- -P -L -p 0x3 --config='${port_config}' --enable-jumbo --max-pkt-len=9000 --eth-dest=0,${adj_mac0} --eth-dest=1,${adj_mac1} --parse-ptype
-
-Tagged by **2T2C**
-
-.. code-block:: bash
-
- l3fwd -l 1,2 -n 4 -- -P -L -p 0x3 --config='${port_config}' --enable-jumbo --max-pkt-len=9000 --eth-dest=0,${adj_mac0} --eth-dest=1,${adj_mac1} --parse-ptype
-
-Tagged by **4T4C**
+L3FWD startup configuration changes per test case, with different settings
+for `$$CORES` and the enable-jumbo parameter if the test is sending jumbo
+frames. Startup command template:
.. code-block:: bash
- l3fwd -l 1,2,3,4 -n 4 -- -P -L -p 0x3 --config='${port_config}' --enable-jumbo --max-pkt-len=9000 --eth-dest=0,${adj_mac0} --eth-dest=1,${adj_mac1} --parse-ptype
+ l3fwd -l $$CORE_LIST -n 4 -- -P -L -p 0x3 --config='${port_config}' --enable-jumbo --max-pkt-len=9000 --eth-dest=0,${adj_mac0} --eth-dest=1,${adj_mac1} --parse-ptype
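+
+For illustration, a 2-core instantiation of this template might look as
+follows; the substituted core list (1,2) mirrors the former 2T2C
+configuration shown above:
+
+.. code-block:: bash
+
+    # Hypothetical substitution: $$CORE_LIST=1,2; port_config and the
+    # adjacent MAC variables are resolved by the test framework at runtime.
+    l3fwd -l 1,2 -n 4 -- -P -L -p 0x3 --config='${port_config}' --enable-jumbo --max-pkt-len=9000 --eth-dest=0,${adj_mac0} --eth-dest=1,${adj_mac1} --parse-ptype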
-.. include:: ../vpp_performance_tests/test_environment_tg.rst
+.. include:: ../introduction/test_environment_tg.rst