author    Vratko Polak <vrpolak@cisco.com>    2019-02-13 09:46:34 +0100
committer Tibor Frank <tifrank@cisco.com>     2019-02-13 08:50:07 +0000
commit    5a57661f740db06903327d29cad02c22ee42665d (patch)
tree      54016b40e71accf1f4d3dbddff1fb7b0deb0d7cc /docs
parent    0a5be92ed271b5bd55871e35a1af4b1f95c97dad (diff)
Apply minor improvements to methodology docs
Change-Id: Ice5625c2b04dce174b19748b0ccccdf813b66f2a
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
(cherry picked from commit bbfcee8d3cf51ec01d269245970ef41bb072c580)
Diffstat (limited to 'docs')
-rw-r--r--  docs/report/introduction/methodology_bmrr_throughput.rst        |  6
-rw-r--r--  docs/report/introduction/methodology_multi_core_speedup.rst     |  6
-rw-r--r--  docs/report/introduction/methodology_nfv_service_density.rst    |  2
-rw-r--r--  docs/report/introduction/methodology_packet_latency.rst         |  4
-rw-r--r--  docs/report/introduction/methodology_trex_traffic_generator.rst |  5
-rw-r--r--  docs/report/introduction/methodology_tunnel_encapsulations.rst  |  2
-rw-r--r--  docs/report/introduction/methodology_vpp_forwarding_modes.rst   | 12
-rw-r--r--  docs/report/introduction/methodology_vpp_startup_settings.rst   |  2
8 files changed, 19 insertions, 20 deletions
diff --git a/docs/report/introduction/methodology_bmrr_throughput.rst b/docs/report/introduction/methodology_bmrr_throughput.rst
index ac3c54e907..7bef3a4aaf 100644
--- a/docs/report/introduction/methodology_bmrr_throughput.rst
+++ b/docs/report/introduction/methodology_bmrr_throughput.rst
@@ -19,7 +19,7 @@ Current parameters for BMRR tests:
quoted sizes include frame CRC, but exclude per frame transmission
overhead of 20B (preamble, inter frame gap).
-- Maximum load offered: 10GE and 40GE link (sub-)rates depending on NIC
+- Maximum load offered: 10GE, 25GE and 40GE link (sub-)rates depending on NIC
tested, with the actual packet rate depending on frame size,
transmission overhead and traffic generator NIC forwarding capacity.
@@ -37,8 +37,8 @@ Current parameters for BMRR tests:
- Number of trials per burst: 10.
-Similarly to NDR/PDR throughput tests, MRR test should be reporting bi-
-directional link rate (or NIC rate, if lower) if tested VPP
+Similarly to NDR/PDR throughput tests, MRR test should be reporting
+bi-directional link rate (or NIC rate, if lower) if tested VPP
configuration can handle the packet rate higher than bi-directional link
rate, e.g. large packet tests and/or multi-core tests.
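
The maximum offered load quoted in the hunk above follows directly from
link rate, frame size and the 20B per-frame transmission overhead
(preamble, inter frame gap). A minimal sketch of that arithmetic,
illustrative only and not part of this patch or the CSIT code::

    def max_pps(link_gbps, frame_size_bytes, overhead_bytes=20):
        """Theoretical line-rate packets per second, one direction."""
        bits_per_frame = (frame_size_bytes + overhead_bytes) * 8
        return link_gbps * 1e9 / bits_per_frame

    for link_gbps in (10, 25, 40):
        print(f"{link_gbps}GE, 64B: {max_pps(link_gbps, 64) / 1e6:.2f} Mpps")
    # 10GE: 14.88 Mpps, 25GE: 37.20 Mpps, 40GE: 59.52 Mpps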
diff --git a/docs/report/introduction/methodology_multi_core_speedup.rst b/docs/report/introduction/methodology_multi_core_speedup.rst
index 94840406a1..b42bf42f92 100644
--- a/docs/report/introduction/methodology_multi_core_speedup.rst
+++ b/docs/report/introduction/methodology_multi_core_speedup.rst
@@ -51,7 +51,7 @@ In all CSIT tests care is taken to ensure that each VPP worker handles
the same amount of received packet load and does the same amount of
packet processing work. This is achieved by evenly distributing per
interface type (e.g. physical, virtual) receive queues over VPP workers
-using default VPP round- robin mapping and by loading these queues with
+using default VPP round-robin mapping and by loading these queues with
the same amount of packet flows.
If number of VPP workers is higher than number of physical or virtual
@@ -62,5 +62,5 @@ for virtual interfaces are used for this purpose.
Section :ref:`throughput_speedup_multi_core` includes a set of graphs
illustrating packet throughput speedup when running VPP worker threads
on multiple cores. Note that in quite a few test cases running VPP
-workers on 2 or 4 physical cores hits the I/O bandwidth or packets-per-
-second limit of tested NIC.
+workers on 2 or 4 physical cores hits the I/O bandwidth
+or packets-per-second limit of tested NIC.
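
The default round-robin queue placement mentioned in the first hunk can
be sketched as follows; this illustrates the mapping idea only, it is
not the actual VPP code::

    def round_robin_map(num_queues, num_workers):
        """Assign RX queue index -> worker index, round-robin."""
        return {q: q % num_workers for q in range(num_queues)}

    print(round_robin_map(num_queues=8, num_workers=4))
    # {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1, 6: 2, 7: 3}

With packet flows spread evenly over the queues, each worker then
receives the same load and does the same amount of processing work.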
diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst
index 2946ba2777..51e56e294d 100644
--- a/docs/report/introduction/methodology_nfv_service_density.rst
+++ b/docs/report/introduction/methodology_nfv_service_density.rst
@@ -103,4 +103,4 @@ physical core mapping ratios:
Maximum tested service densities are limited by the number of physical
cores per NUMA. |csit-release| allocates cores within NUMA0. Support for
-multi NUMA tests is to be added in future release.
\ No newline at end of file
+multi NUMA tests is to be added in future release.
diff --git a/docs/report/introduction/methodology_packet_latency.rst b/docs/report/introduction/methodology_packet_latency.rst
index 550d12f688..411fe3d6fe 100644
--- a/docs/report/introduction/methodology_packet_latency.rst
+++ b/docs/report/introduction/methodology_packet_latency.rst
@@ -12,8 +12,8 @@ Reported latency values are measured using following methodology:
- TG reports min/avg/max latency values per stream direction, hence two
sets of latency values are reported per test case; future release of
TRex is expected to report latency percentiles.
-- Reported latency values are aggregate across two SUTs due to three
- node topology used for all performance tests; for per SUT latency,
+- Reported latency values are aggregated across two SUTs if the three
+ node topology is used for a given performance test; for per SUT latency,
reported value should be divided by two.
- 1usec is the measurement accuracy advertised by TRex TG for the setup
used in FD.io labs by the CSIT project.
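
The per SUT derivation in the hunk above is a simple halving of the
reported aggregate; a worked example with made-up latency values::

    reported = {"min": 10.0, "avg": 26.0, "max": 124.0}  # usec, example only
    per_sut = {k: v / 2 for k, v in reported.items()}
    print(per_sut)  # {'min': 5.0, 'avg': 13.0, 'max': 62.0}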
diff --git a/docs/report/introduction/methodology_trex_traffic_generator.rst b/docs/report/introduction/methodology_trex_traffic_generator.rst
index 4d4de96fb0..2a25931faa 100644
--- a/docs/report/introduction/methodology_trex_traffic_generator.rst
+++ b/docs/report/introduction/methodology_trex_traffic_generator.rst
@@ -6,9 +6,8 @@ Usage
`TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
CSIT performance tests. TRex stateless mode is used to measure NDR and
-PDR throughputs using binary search (NDR and PDR discovery tests) and
-for quick checks of DUT performance against the reference NDRs (NDR
-check tests) for specific configuration.
+PDR throughputs using MLRsearch and to measure maximum transfer rate
+in MRR tests.
TRex is installed and run on the TG compute node. The typical procedure
is:
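
The rate discovery referenced above can be approximated by the sketch
below. It is a plain binary search, not the actual CSIT implementation;
the real MLRsearch refines this with multiple loss ratios and shorter
intermediate trials, and measure() is a hypothetical callback returning
the measured loss ratio at an offered load::

    def discover_rate(measure, max_rate_pps, loss_tolerance=0.0,
                      precision_pps=10_000.0):
        """Highest offered load whose loss ratio stays within tolerance."""
        lower, upper = 0.0, max_rate_pps
        while upper - lower > precision_pps:
            mid = (lower + upper) / 2
            if measure(mid) <= loss_tolerance:
                lower = mid  # load sustained, search higher
            else:
                upper = mid  # too much loss, search lower
        return lower

    # NDR uses zero loss tolerance; PDR allows a small non-zero loss ratio.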
diff --git a/docs/report/introduction/methodology_tunnel_encapsulations.rst b/docs/report/introduction/methodology_tunnel_encapsulations.rst
index 6c47d1bd33..d9e2f42f25 100644
--- a/docs/report/introduction/methodology_tunnel_encapsulations.rst
+++ b/docs/report/introduction/methodology_tunnel_encapsulations.rst
@@ -15,7 +15,7 @@ VPP is tested in the following IPv4 tunnel baseline configurations:
- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.
-In all cases listed above low number of MAC, IPv4, IPv6 flows (253 per
+In all cases listed above low number of MAC, IPv4, IPv6 flows (254 or 253 per
direction) is switched or routed by VPP.
In addition selected IPv4 tunnels are tested at scale:
diff --git a/docs/report/introduction/methodology_vpp_forwarding_modes.rst b/docs/report/introduction/methodology_vpp_forwarding_modes.rst
index 6cf206f2f6..1af3a46556 100644
--- a/docs/report/introduction/methodology_vpp_forwarding_modes.rst
+++ b/docs/report/introduction/methodology_vpp_forwarding_modes.rst
@@ -21,8 +21,8 @@ VPP is tested in three L2 forwarding modes:
l2bd tests are executed in baseline and scale configurations:
-- *l2bdbase*: low number of L2 flows (253 per direction) is switched by
- VPP. They drive the content of MAC FIB size (506 total MAC entries).
+- *l2bdbase*: low number of L2 flows (254 per direction) is switched by
+ VPP. They drive the content of MAC FIB size (508 total MAC entries).
Both source and destination MAC addresses are incremented on a packet
by packet basis.
@@ -40,8 +40,8 @@ IPv4 Routing
IPv4 routing tests are executed in baseline and scale configurations:
-- *ip4base*: low number of IPv4 flows (253 per direction) is routed by
- VPP. They drive the content of IPv4 FIB size (506 total /32 prefixes).
+- *ip4base*: low number of IPv4 flows (253 or 254 per direction) is routed by
+ VPP. They drive the content of IPv4 FIB size (506 or 508 total /32 prefixes).
Destination IPv4 addresses are incremented on a packet by packet
basis.
@@ -57,8 +57,8 @@ IPv6 Routing
IPv6 routing tests are executed in baseline and scale configurations:
-- *ip6base*: low number of IPv6 flows (253 per direction) is routed by
- VPP. They drive the content of IPv6 FIB size (506 total /128 prefixes).
+- *ip6base*: low number of IPv6 flows (253 or 254 per direction) is routed by
+ VPP. They drive the content of IPv6 FIB size (506 or 508 total /128 prefixes).
Destination IPv6 addresses are incremented on a packet by packet
basis.
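
The flow and FIB numbers above differ by a factor of two because flows
run in both directions; an illustrative rendering of the ip4base case,
using only the standard library and a made-up base address::

    import ipaddress

    base = ipaddress.IPv4Address("10.0.0.1")  # hypothetical base address
    flows = [base + i for i in range(254)]    # 254 flows per direction
    print(flows[0], flows[-1])                # 10.0.0.1 10.0.0.254
    # Two directions: 2 * 254 = 508 total /32 prefixes in the IPv4 FIB.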
diff --git a/docs/report/introduction/methodology_vpp_startup_settings.rst b/docs/report/introduction/methodology_vpp_startup_settings.rst
index 16185b4c05..2b0f485cc0 100644
--- a/docs/report/introduction/methodology_vpp_startup_settings.rst
+++ b/docs/report/introduction/methodology_vpp_startup_settings.rst
@@ -19,7 +19,7 @@ List of vpp startup.conf settings applied to all tests:
Typically needed to use faster vector PMDs (together with
no-multi-seg).
#. socket-mem <value>,<value> - memory per numa. (Not required anymore
- due to VPP code changes, should be removed in CSIT-18.10.)
+ due to VPP code changes, will be removed in CSIT-19.04.)
Per Test Settings
~~~~~~~~~~~~~~~~~
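
For context, the two settings discussed in the hunk above live in the
dpdk section of VPP startup.conf; an illustrative fragment with made-up
values, not the exact CSIT configuration::

    dpdk {
      # Needed for faster vector PMDs.
      no-multi-seg
      # Memory per NUMA node; no longer required, to be removed in CSIT-19.04.
      socket-mem 1024,1024
    }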