author     Maciek Konstantynowicz <mkonstan@cisco.com>    2021-02-03 14:37:01 +0000
committer  Maciek Konstantynowicz <mkonstan@cisco.com>    2021-02-08 20:28:55 +0000
commit     a82dcd874cef79388e85383cb6c0784c14e8ae7b (patch)
tree       921ac28e52b99618fbf2b0875832de9db11c49e3 /docs
parent     127eb76dfeca04e123b0d7cc229c4a71ee3f2f9a (diff)
report: Update packet latency methodology
Change-Id: I0378e96c6f64a1dd035d56465710b82f64dc4d32
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
Diffstat (limited to 'docs')
3 files changed, 30 insertions, 18 deletions
diff --git a/docs/report/dpdk_performance_tests/packet_latency/index.rst b/docs/report/dpdk_performance_tests/packet_latency/index.rst
index 655773c8cf..bc45c3e4cd 100644
--- a/docs/report/dpdk_performance_tests/packet_latency/index.rst
+++ b/docs/report/dpdk_performance_tests/packet_latency/index.rst
@@ -16,6 +16,8 @@ percentiles at different packet rate load levels: i) No-Load latency streams
 only, ii) Low-Load at 10% PDR, iii) Mid-Load at 50% PDR and iv)
 High-Load at 90% PDR.
 
+For more details, see :ref:`latency_methodology`.
+
 Additional information about graph data:
 
 #. **Graph Title**: describes tested DUT packet path.
diff --git a/docs/report/introduction/methodology_packet_latency.rst b/docs/report/introduction/methodology_packet_latency.rst
index 1f7ad7f633..d5486786b6 100644
--- a/docs/report/introduction/methodology_packet_latency.rst
+++ b/docs/report/introduction/methodology_packet_latency.rst
@@ -1,33 +1,41 @@
 Packet Latency
 --------------
 
-TRex Traffic Generator (TG) is used for measuring latency across 2-Node
-and 3-Node SUT server topologies. TRex integrates `A High Dynamic Range
-Histogram (HDRH) <http://hdrhistogram.org/>`_ code providing per packet
-latency distribution for latency streams sent in parallel to the main
-load packet streams. Packet latency is measured using following
-methodology:
+TRex Traffic Generator (TG) is used for measuring one-way latency in
+2-Node and 3-Node physical testbed topologies. TRex integrates `High
+Dynamic Range Histogram (HDRH) <http://hdrhistogram.org/>`_
+functionality and reports per packet latency distribution for latency
+streams sent in parallel to the main load packet streams.
 
-- Latency tests are performed at following packet load levels:
+Following methodology is used:
+
+- Only NDRPDR test type measures latency and only after NDR and PDR
+  values are determined. Other test types do not involve latency
+  streams.
+- Latency is measured at different background load packet rates:
 
   - No-Load: latency streams only.
   - Low-Load: at 10% PDR.
   - Mid-Load: at 50% PDR.
   - High-Load: at 90% PDR.
-  - NDR-Load: at 100% NDR.
-  - PDR-Load: at 100% PDR.
 
 - Latency is measured for all tested packet sizes except IMIX due to
-  TG restriction.
+  TRex TG restriction.
 - TG sends dedicated latency streams, one per direction, each at the
   rate of 9 kpps at the prescribed packet size; these are sent in
   addition to the main load streams.
 - TG reports Min/Avg/Max and HDRH latency values distribution per stream
-  direction, hence two sets of latency values are reported per test
-  case.
-- Reported latency values are aggregate across tested topology.
-- +/- 1 usec is the measurement accuracy advertised by TRex TG for the
-  setup used.
-- TG setup introduces an always-on Tx/Rx interface latency of about 2
-  * 2 usec per direction induced by TRex SW writing and reading packet
-  timestamps on CPU cores.
+  direction, hence two sets of latency values are reported per test case
+  (marked as E-W and W-E).
+- +/- 1 usec is the measurement accuracy of TRex TG and the data in HDRH
+  latency values distribution is rounded to microseconds.
+- TRex TG introduces a (background) always-on Tx + Rx latency bias of 4
+  usec on average per direction resulting from TRex software writing and
+  reading packet timestamps on CPU cores. Quoted values are based on TG
+  back-to-back latency measurements.
+- Latency graphs are not smoothed, each latency value has its own
+  horizontal line across corresponding packet percentiles.
+- Percentiles are shown on X-axis using a logarithmic scale, so the
+  maximal latency value (ending at 100% percentile) would be in
+  infinity. The graphs are cut at 99.9999% (hover information still
+  lists 100%).
\ No newline at end of file
diff --git a/docs/report/vpp_performance_tests/packet_latency/index.rst b/docs/report/vpp_performance_tests/packet_latency/index.rst
index bc98cbb4a6..c079fd11b4 100644
--- a/docs/report/vpp_performance_tests/packet_latency/index.rst
+++ b/docs/report/vpp_performance_tests/packet_latency/index.rst
@@ -17,6 +17,8 @@ percentiles at different packet rate load levels: i) No-Load latency streams
 only, ii) Low-Load at 10% PDR, iii) Mid-Load at 50% PDR and iv)
 High-Load at 90% PDR.
 
+For more details, see :ref:`latency_methodology`.
+
 Additional information about graph data:
 
 #. **Graph Title**: describes tested DUT packet path.
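The updated methodology keeps the rule that the TG sends one dedicated latency stream per direction at a fixed 9 kpps on top of the main load streams, with the background load set to No-Load or 10/50/90% of PDR. Below is a hedged sketch of how such a pair of streams could be composed with the TRex stateless Python API; the import path, addressing, padding and pg_id values are assumptions that vary with TRex version and testbed, and this is not the CSIT implementation.

.. code-block:: python

    # Sketch: one background load stream plus one latency stream for a
    # single direction. Packet contents, pg_id and rates are illustrative.
    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether

    from trex_stl_lib.api import (
        STLFlowLatencyStats, STLPktBuilder, STLStream, STLTXCont,
    )

    def direction_streams(load_pps, frame_size=64, pg_id=0):
        """Return [main load stream, dedicated 9 kpps latency stream]."""
        base = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(sport=1025, dport=12)
        pad = max(0, frame_size - 4 - len(base)) * "x"   # leave room for FCS
        pkt = STLPktBuilder(pkt=base / pad)
        return [
            # Main load stream at the requested background rate.
            STLStream(packet=pkt, mode=STLTXCont(pps=load_pps)),
            # Latency stream: fixed 9 kpps, per-packet timestamps feeding HDRH.
            STLStream(packet=pkt, mode=STLTXCont(pps=9000),
                      flow_stats=STLFlowLatencyStats(pg_id=pg_id)),
        ]

    # Example: Mid-Load, i.e. 50% of a hypothetical 10 Mpps PDR, one direction.
    streams = direction_streams(load_pps=0.5 * 10_000_000, pg_id=7)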
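The methodology centers on HDRH: TRex returns the full per-stream latency distribution rather than only Min/Avg/Max, and the report graphs are read out of that distribution at selected percentiles. Here is a minimal sketch of that read-out, assuming the open-source ``hdrhistogram`` Python package (module ``hdrh``) and synthetic latency samples instead of real TRex output.

.. code-block:: python

    # Sketch only: build an HDR histogram from synthetic one-way latency
    # samples and read out the percentiles that the report graphs plot.
    # The hdrh package implements the same HdrHistogram format that TRex
    # uses for its per-stream latency statistics.
    import random

    from hdrh.histogram import HdrHistogram

    hist = HdrHistogram(1, 1_000_000, 3)   # track 1 usec .. 1 sec, 3 significant digits

    random.seed(1)
    for _ in range(9000):                  # roughly one second of a 9 kpps latency stream
        sample_usec = int(random.lognormvariate(3.0, 0.5))   # synthetic latency in usec
        hist.record_value(max(sample_usec, 1))

    for pct in (50.0, 90.0, 99.0, 99.9, 99.99, 99.9999):
        print(f"P{pct:<8} {hist.get_value_at_percentile(pct)} usec")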
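The last two added bullets describe how the graphs render that distribution: unsmoothed horizontal steps per latency value, percentiles on a logarithmic X-axis, and a cut at 99.9999% because 100% would sit at infinity. A small sketch of that axis construction follows, assuming matplotlib and made-up latency read-outs; it is not the report's own plotting code.

.. code-block:: python

    # Sketch: percentiles are spaced as 1 / (1 - p) on a log X-axis, so
    # p = 100% maps to infinity and the plot is cut at 99.9999%.
    import matplotlib.pyplot as plt

    percentiles = [50.0, 90.0, 99.0, 99.9, 99.99, 99.999, 99.9999]
    latency_usec = [9, 11, 15, 24, 40, 75, 120]         # made-up HDRH read-outs

    x = [1.0 / (1.0 - p / 100.0) for p in percentiles]  # 2, 10, 100, ..., 1e6

    plt.step(x, latency_usec, where="post")             # unsmoothed, step-wise
    plt.xscale("log")
    plt.xticks(x, [f"{p}%" for p in percentiles])
    plt.xlabel("Percentile")
    plt.ylabel("One-way latency [usec]")
    plt.savefig("latency_percentile_sketch.png")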