Diffstat (limited to 'docs/report/introduction')
-rw-r--r--  docs/report/introduction/methodology_autogen.rst | 14
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/index.rst | 2
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst | 28
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_mlrsearch_tests.rst | 2
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_mrr_throughput.rst | 2
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst | 10
-rw-r--r--  docs/report/introduction/methodology_dut_state.rst | 8
-rw-r--r--  docs/report/introduction/methodology_nat44.rst | 24
-rw-r--r--  docs/report/introduction/methodology_packet_flow_ordering.rst | 4
-rw-r--r--  docs/report/introduction/methodology_packet_latency.rst | 2
-rw-r--r--  docs/report/introduction/methodology_rca/methodology_perpatch_performance_tests.rst | 24
-rw-r--r--  docs/report/introduction/methodology_reconf.rst | 2
-rw-r--r--  docs/report/introduction/methodology_trending/index.rst | 2
-rw-r--r--  docs/report/introduction/methodology_trending/trend_analysis.rst | 14
-rw-r--r--  docs/report/introduction/methodology_trending/trend_presentation.rst | 2
-rw-r--r--  docs/report/introduction/methodology_trex_traffic_generator.rst | 22
16 files changed, 81 insertions, 81 deletions
diff --git a/docs/report/introduction/methodology_autogen.rst b/docs/report/introduction/methodology_autogen.rst
index 1bf5e9e5ea..5453775b7a 100644
--- a/docs/report/introduction/methodology_autogen.rst
+++ b/docs/report/introduction/methodology_autogen.rst
@@ -23,7 +23,7 @@ The generated suites (and their contents) are affected by multiple information
sources, listed below.
Git Suites
-----------
+``````````
The suites present in the git repository act as templates for generating suites.
One of the autogen design principles is that any template suite should also act
@@ -33,7 +33,7 @@ In practice, autogen always re-creates the template suite with exactly
the same content; this is one of the checks that autogen works correctly.
Regenerate Script
------------------
+`````````````````
Not all suites present in the CSIT git repository act as templates for autogen.
The distinction is made on a per-directory level. Directories with
@@ -44,13 +44,13 @@ The script also specifies minimal frame size, indirectly, by specifying protocol
(protocol "ip4" is the default, leading to 64B frame size).
Constants
----------
+`````````
Values in Constants.py are taken into consideration when generating suites.
The values are mostly related to different NIC models and NIC drivers.
Python Code
------------
+```````````
Python code in resources/libraries/python/autogen contains several other
information sources.
@@ -89,7 +89,7 @@ Some information sources are available in CSIT repository,
but do not affect the suites generated by autogen.
Testbeds
---------
+````````
Overall, no information visible in topology yaml files is taken into account
by autogen.
@@ -121,7 +121,7 @@ Autogen does not care, as suites are agnostic.
Robot tag marks the difference, but the link presence is not explicitly checked.
Job specs
----------
+`````````
Information in job spec files depends on generated suites (not the other way around).
Autogen should generate more suites, as the job spec is limited by a time budget.
@@ -129,7 +129,7 @@ More suites should be available for manually triggered verify jobs,
so autogen covers that.
Bootstrap Scripts
------------------
+`````````````````
Historically, bootstrap scripts perform some logic,
perhaps adding exclusion options to Robot invocation
diff --git a/docs/report/introduction/methodology_data_plane_throughput/index.rst b/docs/report/introduction/methodology_data_plane_throughput/index.rst
index 487d300b5b..e2e373c56f 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/index.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/index.rst
@@ -1,5 +1,5 @@
Data Plane Throughput
-=====================
+---------------------
.. toctree::
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
index a26b40088b..24e76ef391 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
@@ -1,7 +1,7 @@
.. _data_plane_throughput:
Data Plane Throughput Tests
----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
Network data plane throughput is measured using multiple test methods in
order to obtain representative and repeatable results across the large
@@ -21,10 +21,10 @@ Description of each test method is followed by generic test properties
shared by all methods.
MLRsearch Tests
-^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~
Description
-~~~~~~~~~~~
+```````````
Multiple Loss Ratio search (MLRsearch) tests discover multiple packet
throughput rates in a single search, reducing the overall test execution
@@ -35,7 +35,7 @@ and Partial Drop Rate (PDR, with PLR<0.5%). MLRsearch is compliant with
:rfc:`2544`.
Usage
-~~~~~
+`````
MLRsearch tests are run to discover NDR and PDR rates for each VPP and
DPDK release covered by CSIT report. Results for small frame sizes
@@ -50,17 +50,17 @@ all frame sizes and for all tests are presented in detailed results
tables.
Details
-~~~~~~~
+```````
See :ref:`mlrsearch_algorithm` section for more detail. MLRsearch is
being standardized in IETF in `draft-ietf-bmwg-mlrsearch
<https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-mlrsearch-01>`_.
MRR Tests
-^^^^^^^^^
+~~~~~~~~~
Description
-~~~~~~~~~~~
+```````````
Maximum Receive Rate (MRR) tests are complementary to MLRsearch tests,
as they provide a maximum “raw” throughput benchmark for development and
@@ -72,7 +72,7 @@ a set trial duration, regardless of packet loss. Maximum load for
specified Ethernet frame size is set to the bi-directional link rate.
Usage
-~~~~~
+`````
MRR tests are much faster than MLRsearch as they rely on a single trial
or a small set of trials with very short duration. It is this property
@@ -86,7 +86,7 @@ comparisons between releases and test environments. Small frame sizes
only (64B/78B, IMIX).
Details
-~~~~~~~
+```````
See :ref:`mrr_throughput` section for more detail about MRR tests
configuration.
@@ -98,10 +98,10 @@ and `VPP per patch tests
<https://s3-docs.fd.io/csit/master/trending/methodology/perpatch_performance_tests.html>`_.
PLRsearch Tests
-^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~
Description
-~~~~~~~~~~~
+```````````
Probabilistic Loss Ratio search (PLRsearch) tests discover a packet
throughput rate associated with configured Packet Loss Ratio (PLR)
@@ -110,7 +110,7 @@ testing. PLRsearch assumes that system under test is probabilistic in
nature, and not deterministic.
Usage
-~~~~~
+`````
PLRsearch tests are run to discover a sustained throughput for PLR=10^-7
(close to NDR) for VPP release covered by CSIT report. Results for small
@@ -121,14 +121,14 @@ Each soak test lasts 30 minutes and is executed at least twice. Results are
compared against NDR and PDR rates discovered with MLRsearch.
Details
-~~~~~~~
+```````
See :ref:`plrsearch` methodology section for more detail. PLRsearch is
being standardized in IETF in `draft-vpolak-bmwg-plrsearch
<https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch>`_.
Generic Test Properties
-^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~
All data plane throughput test methodologies share following generic
properties:
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_mlrsearch_tests.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_mlrsearch_tests.rst
index 6789dc073b..f834ca3e1b 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_mlrsearch_tests.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_mlrsearch_tests.rst
@@ -1,7 +1,7 @@
.. _mlrsearch_algorithm:
MLRsearch Tests
----------------
+^^^^^^^^^^^^^^^
Overview
~~~~~~~~
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_mrr_throughput.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_mrr_throughput.rst
index 4e8000b161..f2a6cd57cd 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_mrr_throughput.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_mrr_throughput.rst
@@ -1,7 +1,7 @@
.. _mrr_throughput:
MRR Throughput
---------------
+^^^^^^^^^^^^^^
Maximum Receive Rate (MRR) tests are complementary to MLRsearch tests,
as they provide a maximum "raw" throughput benchmark for development and
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
index d60e9203ff..a128674384 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
@@ -1,7 +1,7 @@
.. _plrsearch:
PLRsearch
----------
+^^^^^^^^^
Motivation for PLRsearch
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -25,7 +25,7 @@ draft `draft-vpolak-bmwg-plrsearch-02`_ that is in the process
of being standardized in the IETF Benchmarking Methodology Working Group (BMWG).
Terms
------
+`````
The rest of this page assumes the reader is familiar with the following terms
defined in the IETF draft:
@@ -319,7 +319,7 @@ low quality estimate at first sight, but a more detailed look
reveals the quality is good (considering the measurement results).
L2 patch
---------
+________
Both fitting functions give similar estimates, the graph shows
"stochasticity" of measurements (estimates increase and decrease
@@ -356,7 +356,7 @@ the performance stays constant.
:align: center
Vhost
------
+_____
This test case shows what looks like a quite broad estimation interval,
compared to other test cases with similarly looking zero loss frequencies.
@@ -390,7 +390,7 @@ agrees that the interval should be wider than that.
:align: center
Summary
--------
+_______
The two graphs show the behavior of the PLRsearch algorithm applied to a soaking test
when some of the PLRsearch assumptions do not hold:
diff --git a/docs/report/introduction/methodology_dut_state.rst b/docs/report/introduction/methodology_dut_state.rst
index d08e42513d..3792e31ecf 100644
--- a/docs/report/introduction/methodology_dut_state.rst
+++ b/docs/report/introduction/methodology_dut_state.rst
@@ -1,5 +1,5 @@
DUT state considerations
-------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^
This page discusses considerations for Device Under Test (DUT) state.
DUTs such as VPP require configuration, to be provided before the application
@@ -42,7 +42,7 @@ As the performance is different, each test has to choose which traffic
it wants to test, and manipulate the DUT state to achieve the intended impact.
Ramp-up trial
-_____________
+`````````````
Tests aiming at sustained performance need to make sure the DUT state is created.
We achieve this via a ramp-up trial, the specific purpose of which
@@ -67,7 +67,7 @@ it has been created as expected.
Test fails if the state is not (completely) created.
State Reset
-___________
+```````````
Tests aiming at ramp-up performance do not use ramp-up trial,
and they need to reset the DUT state before each trial measurement.
@@ -94,7 +94,7 @@ If neither were used, DUT will show different performance in subsequent trials,
violating assumptions of search algorithms.
DUT versus protocol ramp-up
-___________________________
+```````````````````````````
There are at least three different causes for bandwidth possibly increasing
within a single measurement trial.
diff --git a/docs/report/introduction/methodology_nat44.rst b/docs/report/introduction/methodology_nat44.rst
index 1b00ef281c..7dfd939ecc 100644
--- a/docs/report/introduction/methodology_nat44.rst
+++ b/docs/report/introduction/methodology_nat44.rst
@@ -1,10 +1,10 @@
.. _nat44_methodology:
Network Address Translation IPv4 to IPv4
-----------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NAT44 Prefix Bindings
-^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~
NAT44 prefix bindings should be representative of target applications,
where a number of private IPv4 addresses from the range defined by
@@ -67,7 +67,7 @@ Private address ranges to be used in tests:
- Used in tests for up to 1 048 576 inside addresses (inside hosts).
NAT44 Session Scale
-~~~~~~~~~~~~~~~~~~~
+```````````````````
The NAT44 session scale tested is governed by the following logic:
@@ -95,7 +95,7 @@ NAT44 session scale tested is govern by the following logic:
+---+---------+------------+
NAT44 Deterministic
-^^^^^^^^^^^^^^^^^^^
+```````````````````
NAT44det performance tests are using TRex STL (Stateless) API and traffic
profiles, similar to all other stateless packet forwarding tests like
@@ -134,7 +134,7 @@ NAT44det scenario tested:
TODO: Make traffic profile names resemble suite names more closely.
NAT44 Endpoint-Dependent
-^^^^^^^^^^^^^^^^^^^^^^^^
+````````````````````````
In order to exercise the NAT44ed ability to translate based on both
source and destination address and port, the inside-to-outside traffic
@@ -200,13 +200,13 @@ NAT44det case tested:
- [mrr|ndrpdr|soak], bidirectional stateful tests MRR, NDRPDR, or SOAK.
Stateful traffic profiles
-^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~
There are several important details which distinguish ASTF profiles
from stateless profiles.
General considerations
-~~~~~~~~~~~~~~~~~~~~~~
+``````````````````````
Protocols
_________
@@ -347,7 +347,7 @@ does not match the transactions as defined by ASTF programs.
See TCP TPUT profile below.
UDP CPS
-~~~~~~~
+```````
This profile uses a minimalistic transaction to verify NAT44ed session has been
created and it allows outside-to-inside traffic.
@@ -362,7 +362,7 @@ Transaction counts as attempted when opackets counter increases on client side.
Transaction counts as successful when ipackets counter increases on client side.
TCP CPS
-~~~~~~~
+```````
This profile uses a minimalistic transaction to verify NAT44ed session has been
created and it allows outside-to-inside traffic.
@@ -386,7 +386,7 @@ Transaction counts as successful when tcps_connects counter increases
on client side.
UDP TPUT
-~~~~~~~~
+````````
This profile uses a small transaction of "request-response" type,
with several packets simulating data payload.
@@ -411,7 +411,7 @@ in the reading phase. This probably decreases TRex performance,
but it leads to more stable results than the alternatives.
TCP TPUT
-~~~~~~~~
+````````
This profile uses a small transaction of "request-response" type,
with some data amount to be transferred both ways.
@@ -466,7 +466,7 @@ Although it is possibly more taxing for TRex CPU,
the results are comparable to the old traffic profile.
Ip4base tests
-^^^^^^^^^^^^^
+~~~~~~~~~~~~~
Contrary to stateless traffic profiles, we do not have a simple limit
that would guarantee TRex is able to send traffic at specified load.
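
(For illustration only: the stateful "request-response" transactions described
above can be sketched with the upstream TRex ASTF Python API roughly as follows.
This is not one of the CSIT traffic profiles; the address ranges, port, payload
sizes and class layout are illustrative assumptions, and exact API details may
differ between TRex versions.)::

    # Hypothetical sketch of a TCP "request-response" ASTF profile (not CSIT code).
    from trex.astf.api import (
        ASTFAssociationRule, ASTFIPGen, ASTFIPGenDist, ASTFIPGenGlobal,
        ASTFProfile, ASTFProgram, ASTFTCPClientTemplate, ASTFTCPServerTemplate,
        ASTFTemplate,
    )

    REQUEST = "request" * 16      # illustrative payload sizes only
    RESPONSE = "response" * 128

    class RequestResponseProfile:

        def get_profile(self, **kwargs):
            # Client program: send the request, then wait for the whole response.
            prog_c = ASTFProgram()
            prog_c.send(REQUEST)
            prog_c.recv(len(RESPONSE))

            # Server program: mirror image of the client program.
            prog_s = ASTFProgram()
            prog_s.recv(len(REQUEST))
            prog_s.send(RESPONSE)

            # Inside (client) and outside (server) address pools; ranges are made up.
            ip_gen = ASTFIPGen(
                glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
                dist_client=ASTFIPGenDist(
                    ip_range=["192.168.0.0", "192.168.3.255"], distribution="seq"),
                dist_server=ASTFIPGenDist(
                    ip_range=["20.0.0.0", "20.0.0.255"], distribution="seq"),
            )

            template = ASTFTemplate(
                client_template=ASTFTCPClientTemplate(
                    program=prog_c, ip_gen=ip_gen, cps=1),
                server_template=ASTFTCPServerTemplate(
                    program=prog_s, assoc=ASTFAssociationRule(port=80)),
            )
            return ASTFProfile(default_ip_gen=ip_gen, templates=template)

    def register():
        # Entry point used by the TRex profile loader.
        return RequestResponseProfile()
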
diff --git a/docs/report/introduction/methodology_packet_flow_ordering.rst b/docs/report/introduction/methodology_packet_flow_ordering.rst
index 3796b21796..47118f971c 100644
--- a/docs/report/introduction/methodology_packet_flow_ordering.rst
+++ b/docs/report/introduction/methodology_packet_flow_ordering.rst
@@ -11,7 +11,7 @@ altered, in some cases two fields (e.g. IPv4 destination address and UDP
destination port) are altered.
Incremental Ordering
---------------------
+~~~~~~~~~~~~~~~~~~~~
This case is simpler to implement and offers greater control.
@@ -24,7 +24,7 @@ combinations once before the "carry" field also wraps around.
It is possible to use increments other than 1.
Randomized Ordering
--------------------
+~~~~~~~~~~~~~~~~~~~
This case chooses each field value at random (from the allowed range).
In case of two fields, they are treated independently.
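
(For illustration only: a minimal Python sketch of the two orderings described
above, using a destination address and a destination port as the two altered
fields. The ranges and function names are hypothetical; this is not the CSIT or
TRex traffic profile code.)::

    import ipaddress
    import random
    from typing import Iterator, Tuple

    def incremental_flows(
        first_ip: str, ip_count: int, first_port: int, port_count: int
    ) -> Iterator[Tuple[str, int]]:
        """Port increments by 1 each packet; when it wraps around,
        the address (the "carry" field) advances by 1 and also wraps."""
        base = int(ipaddress.IPv4Address(first_ip))
        i = 0
        while True:
            ip = ipaddress.IPv4Address(base + (i // port_count) % ip_count)
            port = first_port + (i % port_count)
            yield str(ip), port
            i += 1

    def randomized_flows(
        first_ip: str, ip_count: int, first_port: int, port_count: int, seed: int = 1
    ) -> Iterator[Tuple[str, int]]:
        """Both fields are chosen independently at random from their allowed ranges."""
        rng = random.Random(seed)
        base = int(ipaddress.IPv4Address(first_ip))
        while True:
            yield (
                str(ipaddress.IPv4Address(base + rng.randrange(ip_count))),
                first_port + rng.randrange(port_count),
            )

    if __name__ == "__main__":
        inc = incremental_flows("20.0.0.0", ip_count=4, first_port=1024, port_count=3)
        print([next(inc) for _ in range(7)])  # port wraps after 3 packets, IP then advances
        rnd = randomized_flows("20.0.0.0", ip_count=4, first_port=1024, port_count=3)
        print([next(rnd) for _ in range(7)])
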
diff --git a/docs/report/introduction/methodology_packet_latency.rst b/docs/report/introduction/methodology_packet_latency.rst
index 35e2aad029..cd11663eaf 100644
--- a/docs/report/introduction/methodology_packet_latency.rst
+++ b/docs/report/introduction/methodology_packet_latency.rst
@@ -1,7 +1,7 @@
.. _latency_methodology:
Packet Latency
---------------
+^^^^^^^^^^^^^^
TRex Traffic Generator (TG) is used for measuring one-way latency in
2-Node and 3-Node physical testbed topologies. TRex integrates `High
diff --git a/docs/report/introduction/methodology_rca/methodology_perpatch_performance_tests.rst b/docs/report/introduction/methodology_rca/methodology_perpatch_performance_tests.rst
index 97c5e091ff..311fac456b 100644
--- a/docs/report/introduction/methodology_rca/methodology_perpatch_performance_tests.rst
+++ b/docs/report/introduction/methodology_rca/methodology_perpatch_performance_tests.rst
@@ -8,7 +8,7 @@ before a DUT code change is merged. This can act as a verify job to disallow
changes which would decrease performance without a good reason.
Existing jobs
-`````````````
+~~~~~~~~~~~~~
VPP is the only project currently using such jobs.
They are not started automatically; they must be triggered on demand.
@@ -21,7 +21,7 @@ where the node_arch combinations currently supported are:
2n-clx, 2n-dnv, 2n-tx2, 2n-zn2, 3n-dnv, 3n-tsh.
Test selection
---------------
+~~~~~~~~~~~~~~
..
TODO: Majority of this section is also useful for CSIT verify jobs. Move it somewhere.
@@ -37,7 +37,7 @@ What follows is a list of explanations and recommendations
to help users to select the minimal set of test cases.
Verify cycles
-_____________
+`````````````
When Gerrit schedules multiple jobs to run for the same patch set,
it waits until all runs are complete.
@@ -55,7 +55,7 @@ that comment gets ignored by Jenkins.
Only when 3n-icx job finishes, the user can trigger 2n-icx.
One comment many jobs
-_____________________
+`````````````````````
In the past, the CSIT code which parses for perftest trigger comments
was buggy, which led to bad behavior (such as selecting all performance tests,
@@ -66,7 +66,7 @@ The worst bugs were fixed since then, but it is still recommended
to use just one trigger word per Gerrit comment, just to be safe.
Multiple test cases in run
-__________________________
+``````````````````````````
While Robot supports OR operator, it does not support parentheses,
so the OR operator is not very useful. It is recommended
@@ -78,7 +78,7 @@ perftest-2n-icx {tag_expression_1} {tag_expression_2}
See below for more concrete examples.
Suite tags
-__________
+``````````
Traditionally, CSIT maintains broad Robot tags that can be used to select tests,
for details on existing tags, see
@@ -101,7 +101,7 @@ and user still probably wants to narrow down
to a single test case within a suite.
Fully specified tag expressions
-_______________________________
+```````````````````````````````
Here is one template to select a single test case:
{test_type}AND{nic_model}AND{nic_driver}AND{cores}AND{frame_size}AND{suite_tag}
@@ -130,7 +130,7 @@ for example `this one <https://github.com/FDio/csit/blob/master/docs/job_specs/r
TODO: Explain why "periodic" job spec link lands at report_iterative.
Shortening triggers
-___________________
+```````````````````
Advanced users may use the following tricks to avoid writing long trigger comments.
@@ -151,7 +151,7 @@ will fail, as the default nic_model is nic_intel-xxv710
which does not support RDMA core driver.
Complete example
-________________
+````````````````
A user wants to test a VPP change which may affect load balancing with bonding.
Searching tag documentation for "bonding" finds LBOND tag and its variants.
After shortening, this is the trigger comment finally used:
perftest-3n-icx mrrANDnic_intel-x710AND1cAND64bAND?lbvpplacp-dot1q-l2xcbase-eth-2vhostvr1024-1vm*NOTdrv_af_xdp
Basic operation
-```````````````
+~~~~~~~~~~~~~~~
The job builds VPP .deb packages for both the patch under test
(called "current") and its parent patch (called "parent").
@@ -199,7 +199,7 @@ The whole job fails (giving -1) if some trial measurement failed,
or if any test was declared a regression.
Temporary specifics
-```````````````````
+~~~~~~~~~~~~~~~~~~~
The Minimum Description Length analysis is performed by
CSIT code equivalent to the jumpavg-0.1.3 library available on PyPI.
@@ -216,7 +216,7 @@ This decreases sensitivity to regressions, but also decreases
probability of false positives.
Console output
-``````````````
+~~~~~~~~~~~~~~
The following information is visible towards the end of the Jenkins console output,
repeated for each analyzed test.
diff --git a/docs/report/introduction/methodology_reconf.rst b/docs/report/introduction/methodology_reconf.rst
index 1a1f4cc98c..976cd7a6a3 100644
--- a/docs/report/introduction/methodology_reconf.rst
+++ b/docs/report/introduction/methodology_reconf.rst
@@ -1,7 +1,7 @@
.. _reconf_tests:
Reconfiguration Tests
----------------------
+^^^^^^^^^^^^^^^^^^^^^
.. important::
diff --git a/docs/report/introduction/methodology_trending/index.rst b/docs/report/introduction/methodology_trending/index.rst
index 9105ec46b4..957900226b 100644
--- a/docs/report/introduction/methodology_trending/index.rst
+++ b/docs/report/introduction/methodology_trending/index.rst
@@ -1,5 +1,5 @@
Trending Methodology
-====================
+--------------------
.. toctree::
diff --git a/docs/report/introduction/methodology_trending/trend_analysis.rst b/docs/report/introduction/methodology_trending/trend_analysis.rst
index 2bb54997b0..20ad46405a 100644
--- a/docs/report/introduction/methodology_trending/trend_analysis.rst
+++ b/docs/report/introduction/methodology_trending/trend_analysis.rst
@@ -22,7 +22,7 @@ Implementation details
~~~~~~~~~~~~~~~~~~~~~~
Partitioning into groups
-------------------------
+````````````````````````
While sometimes the samples within a group are far from being distributed
normally, currently we do not have a better tractable model.
@@ -39,7 +39,7 @@ results).
The group boundaries are selected based on `Minimum Description Length`_.
Minimum Description Length
---------------------------
+``````````````````````````
`Minimum Description Length`_ (MDL) is a particular formalization
of `Occam's razor`_ principle.
@@ -103,7 +103,7 @@ We are going to describe the behaviors,
as they motivate our choice of trend compliance metrics.
Sample time and analysis time
------------------------------
+`````````````````````````````
But first we need to distinguish two roles time plays in analysis,
so it is more clear which role we are referring to.
@@ -132,7 +132,7 @@ so the values reported there are likely to be different
from the later analysis time results shown in dashboard and graphs.
Ordinary regression
--------------------
+```````````````````
The real performance changes from previously stable value
into a new stable value.
@@ -143,7 +143,7 @@ is enough for anomaly detection to mark this regression.
Ordinary progressions are detected in the same way.
Small regression
-----------------
+````````````````
The real performance changes from previously stable value
into a new stable value, but the difference is small.
@@ -162,7 +162,7 @@ is still not enough for the detection).
Small progressions have the same behavior.
Reverted regression
--------------------
+```````````````````
This pattern can have two different causes.
We would like to distinguish them, but that is usually
@@ -196,7 +196,7 @@ to increase performance, the opposite (temporary progression)
almost never happens.
Summary
--------
+```````
There is a trade-off between detecting small regressions
and not reporting the same old regressions for a long time.
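
(For illustration only: a toy sketch of the group-partitioning idea that
underlies the anomaly classification described in this file. It compares the
description length of "one group" against every single-boundary split, using a
Gaussian coding cost plus a BIC-style parameter penalty. This is deliberately
simplified and is not the CSIT/jumpavg implementation.)::

    import math
    from typing import List, Tuple

    def coding_cost(samples: List[float]) -> float:
        """Cost (in nats) of encoding samples with a normal model fitted to them."""
        n = len(samples)
        mean = sum(samples) / n
        var = max(sum((x - mean) ** 2 for x in samples) / n, 1e-12)
        return 0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)

    def description_length(groups: List[List[float]], total: int) -> float:
        """Data cost plus a BIC-style penalty: 2 parameters (mean, variance) per group."""
        penalty = len(groups) * 2 * 0.5 * math.log(total)
        return sum(coding_cost(g) for g in groups) + penalty

    def best_partition(series: List[float]) -> Tuple[List[List[float]], float]:
        """Pick "one group" or the best single split, whichever describes the data shorter."""
        best = [series]
        best_dl = description_length(best, len(series))
        for split in range(2, len(series) - 1):
            candidate = [series[:split], series[split:]]
            dl = description_length(candidate, len(series))
            if dl < best_dl:
                best, best_dl = candidate, dl
        return best, best_dl

    if __name__ == "__main__":
        # A step change (regression): trend drops from ~10 Mpps to ~8 Mpps.
        series = [10.0, 10.1, 9.9, 10.05, 9.95, 8.0, 8.1, 7.95, 8.05, 8.0]
        groups, dl = best_partition(series)
        print(f"{len(groups)} group(s), sizes {[len(g) for g in groups]}, DL={dl:.2f}")
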
diff --git a/docs/report/introduction/methodology_trending/trend_presentation.rst b/docs/report/introduction/methodology_trending/trend_presentation.rst
index 67d0d3c45a..7272ac6e38 100644
--- a/docs/report/introduction/methodology_trending/trend_presentation.rst
+++ b/docs/report/introduction/methodology_trending/trend_presentation.rst
@@ -8,7 +8,7 @@ The Failed tests tables list the tests which failed during the last test run.
Separate tables are generated for each testbed.
Regressions and progressions
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These tables list tests which encountered a regression or progression during the
specified time period, which is currently set to the last 21 days.
diff --git a/docs/report/introduction/methodology_trex_traffic_generator.rst b/docs/report/introduction/methodology_trex_traffic_generator.rst
index 180e3dda8c..02b46e0180 100644
--- a/docs/report/introduction/methodology_trex_traffic_generator.rst
+++ b/docs/report/introduction/methodology_trex_traffic_generator.rst
@@ -1,5 +1,5 @@
TRex Traffic Generator
-----------------------
+^^^^^^^^^^^^^^^^^^^^^^
Usage
~~~~~
@@ -18,7 +18,7 @@ Traffic modes
TRex is primarily used in two (mutually incompatible) modes.
Stateless mode
-______________
+``````````````
Sometimes abbreviated as STL.
A mode with high performance, which is unable to react to incoming traffic.
@@ -32,7 +32,7 @@ Measurement results are based on simple L2 counters
(opackets, ipackets) for each traffic direction.
Stateful mode
-_____________
+`````````````
A mode capable of reacting to incoming traffic.
Contrary to the stateless mode, only UDP and TCP are supported
@@ -57,7 +57,7 @@ Generated traffic is either continuous, or limited (by number of transactions).
Both modes support both continuities in principle.
Continuous traffic
-__________________
+``````````````````
Traffic is started without any data size goal.
Traffic is ended based on time duration, as hinted by search algorithm.
@@ -65,7 +65,7 @@ This is useful when DUT behavior does not depend on the traffic duration.
The default for stateless mode.
Limited traffic
-_______________
+```````````````
Traffic has a defined data size goal (given as a number of transactions);
the duration is computed based on this goal.
@@ -83,14 +83,14 @@ Traffic can be generated synchronously (test waits for duration)
or asynchronously (test operates during traffic and stops traffic explicitly).
Synchronous traffic
-___________________
+```````````````````
Trial measurement is driven by a given (or precomputed) duration,
with no activity from the test driver during the traffic.
Used for most trials.
Asynchronous traffic
-____________________
+````````````````````
Traffic is started, but then the test driver is free to perform
other actions, before stopping the traffic explicitly.
@@ -109,7 +109,7 @@ Search algorithms are intentionally unaware of the traffic mode used,
so CSIT defines some terms to use instead of mode-specific TRex terms.
Transactions
-____________
+````````````
TRex traffic profile defines a small number of behaviors,
in CSIT called transaction templates. Traffic profiles also instruct
@@ -130,7 +130,7 @@ Thus unidirectional stateless profiles define one transaction template,
bidirectional stateless profiles define two transaction templates.
TPS multiplier
-______________
+``````````````
TRex aims to open the transactions specified by the profile at a steady rate.
While TRex allows the transaction template to define its intended "cps" value,
@@ -145,7 +145,7 @@ set "packets per transaction" value to 2, just to keep the TPS semantics
as a unidirectional input value.
Duration stretching
-___________________
+```````````````````
TRex can be IO-bound, CPU-bound, or have any other reason
why it is not able to generate the traffic at the requested TPS.
@@ -175,7 +175,7 @@ so that users can compare results.
If the results are very similar, it is probable TRex was the bottleneck.
Startup delay
-_____________
+`````````````
By investigating TRex behavior, it was found that TRex does not start
the traffic in ASTF mode immediately. There is a delay of zero traffic,