author     Vratko Polak <vrpolak@cisco.com>    2022-06-28 17:59:59 +0200
committer  Tibor Frank <tifrank@cisco.com>     2022-06-30 08:38:27 +0000
commit     fff956f9648ecf32be69c4b8cc2475e1e7a05e57 (patch)
tree       9cffd1b9628a6d5f0903cd09781ec2c68357b9ef
parent     0736123ef33e259027334b87edc9f304898a0b54 (diff)
feat(doc): update ASTF related methodology
Also added some TODOs for future methodology improvements.
Some of them will be addressed in subsequent changes soon,
but not sure yet which ones exactly.

Change-Id: Ib827371b7d7a598c5fa6ab41fb7e48c73185844f
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
(cherry picked from commit 2db1998c0432569702738524e66c9d4e3f553b20)
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst | 6
-rw-r--r--  docs/report/introduction/methodology_dut_state.rst | 49
-rw-r--r--  docs/report/introduction/methodology_nat44.rst | 193
-rw-r--r--  docs/report/introduction/methodology_trex_traffic_generator.rst | 32
4 files changed, 159 insertions, 121 deletions
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
index 06efbc2798..a26b40088b 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_data_plane_throughput.rst
@@ -15,6 +15,7 @@ Following throughput test methods are used:
..
TODO: Add RECONF.
+ TODO: Link to method-specific pages instead of duplicate info below.
Description of each test method is followed by generic test properties
shared by all methods.
@@ -143,3 +144,8 @@ properties:
- All measured and reported packet and bandwidth rates are aggregate
bi-directional rates reported from external Traffic Generator
perspective.
+
+..
+ TODO: Incorporate ASTF specifics: No IMIX, transactions instead of packets,
+ slightly non-symmetric traffic with TCP profiles, unsure max_rate.
+ TODO: Mention latency.
diff --git a/docs/report/introduction/methodology_dut_state.rst b/docs/report/introduction/methodology_dut_state.rst
index c66fe58277..d08e42513d 100644
--- a/docs/report/introduction/methodology_dut_state.rst
+++ b/docs/report/introduction/methodology_dut_state.rst
@@ -16,8 +16,8 @@ But there is one kind of state that needs specific handling.
This kind of DUT state is dynamically created based on incoming traffic,
it affects how DUT handles the traffic, and (unlike telemetry counters)
it has uneven impact on CPU load.
-Typical example is NAT where opening sessions takes more CPU than
-forwarding packet on existing sessions.
+A typical example is NAT, where detecting new sessions takes more CPU than
+forwarding packets on existing (open or recently closed) sessions.
We call DUT configurations with this kind of state "stateful",
and configurations without them "stateless".
(Even though stateless configurations contain state described in previous
@@ -46,20 +46,25 @@ _____________
Tests aiming at sustained performance need to make sure DUT state is created.
We achieve this via a ramp-up trial, specific purpose of which
-is to create the state. Subsequent trials need no specific handling,
-as state remains the same.
+is to create the state.
-For the state to be set completely, it is important DUT (nor TG) loses
-no packets, we achieve this by setting the profile multiplier (TPS from now on)
-to low enough value.
+Subsequent trials need no specific handling, as long as the state
+remains the same. But some state can time out, so additional ramp-up
+trials are inserted whenever the code detects the state can time out.
+Note that a trial with zero loss refreshes the state,
+so only the time since the last non-zero loss trial is tracked.
+
+For the state to be set completely, it is important that neither DUT nor TG
+loses any packets. We achieve this by setting the profile multiplier
+(TPS from now on) to a low enough value.
It is also important each state-affecting packet is sent.
For size-limited traffic profile it is guaranteed by the size limit.
For continuous traffic, we set a long enough duration (based on TPS).
-At the end of the ramp-up trial, we check telemetry to confirm
-the state has been created as expected.
-Test fails if the state is not complete.
+At the end of the ramp-up trial, we check DUT state to confirm
+it has been created as expected.
+The test fails if the state is not (completely) created.
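
As an editorial illustration of how the ramp-up duration follows from the
state size and TPS (a back-of-the-envelope sketch with hypothetical values,
not CSIT code):

.. code-block:: python

    # Hypothetical sketch: ramp-up duration for continuous traffic.
    target_sessions = 4_128_768    # state entries the ramp-up must create (one scale used in tests)
    ramp_up_rate = 50_000.0        # TPS, chosen low enough to avoid any packet loss
    ramp_up_duration = target_sessions / ramp_up_rate
    print(ramp_up_duration)        # ~82.6 seconds of ramp-up traffic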
State Reset
___________
@@ -75,7 +80,7 @@ If it is not set, DUT state is not reset.
If it is set, each search algorithm (including MRR) will invoke it
before all trial measurements (both main and telemetry ones).
Any configuration keyword enabling a feature with DUT state
-will check whether a test variable for ramp-up (duration) is present.
+will check whether a test variable for ramp-up rate is present.
If it is present, resetter is not set.
If it is not present, the keyword sets the appropriate resetter value.
This logic makes sure either ramp-up or state reset are used.
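
A minimal sketch of this either-or logic (function and variable names here are
hypothetical, not the actual CSIT keywords):

.. code-block:: python

    # Hypothetical sketch of the "ramp-up or reset" decision; not actual CSIT code.
    def setup_state_handling(test_variables, set_resetter, nat_state_resetter):
        """Enable exactly one of: ramp-up trials or per-trial state reset."""
        if "ramp_up_rate" in test_variables:
            # Ramp-up trials will create and refresh the state; no resetter needed.
            return
        # No ramp-up configured: reset DUT state before every trial measurement.
        set_resetter(nat_state_resetter)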
@@ -104,29 +109,27 @@ for tests wishing to measure performance of this phase.
The second is protocols such as TCP ramping up their throughput to utilize
the bandwidth available. This is the original meaning of "ramp up"
in the NGFW draft (see above).
-In existing tests we are not distinguishing such phases,
-trial measurment reports the telemetry from the whole trial
-(e.g. throughput is time averaged value).
+In existing tests we are not using this meaning of TCP ramp-up.
+Instead we use only small transactions, and a large enough initial window
+so TCP acts as already ramped up.
-The third is TCP increasing throughput due to retransmissions triggered by
-packet loss. In CSIT we currently try to avoid this behavior
+The third is TCP increasing offered load due to retransmissions triggered by
+packet loss. In CSIT we again try to avoid this behavior
by using small enough data to transfer, so overlap of multiple transactions
(primary cause of packet loss) is unlikely.
-But in MRR tests packet loss is still probable.
-Once again, we rely on using telemetry from the whole trial,
-resulting in time averaged throughput values.
+But in MRR tests, packet loss and non-constant offered load are still expected.
Stateless DUT configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-These are simply configurations, which do not set any resetter value
+These are simple configurations, which do not set any resetter value
(even if ramp-up duration is not configured).
Majority of existing tests are of this type, using continuous traffic profiles.
In order to identify limits of Trex performance,
we have added suites with stateless DUT configuration (VPP ip4base)
subjected to size-limited ASTF traffic.
-The discovered throughputs serve as a basis of comparison
+The discovered rates serve as a basis of comparison
for evaluating the results for stateful DUT configurations (VPP NAT44ed)
subjected to the same traffic profiles.
@@ -144,7 +147,7 @@ In CSIT we currently use all four possible configurations:
comparison.
- Some stateful DUT configurations (NAT44DET, NAT44ED unidirectional)
- are tested using stateless traffic profiles.
+ are tested using stateless traffic profiles and continuous traffic.
- The rest of stateful DUT configurations (NAT44ED bidirectional)
- are tested using stateful traffic profiles.
+ are tested using stateful traffic profiles and size limited traffic.
diff --git a/docs/report/introduction/methodology_nat44.rst b/docs/report/introduction/methodology_nat44.rst
index 29b7dfdf24..fb4cbd0c7e 100644
--- a/docs/report/introduction/methodology_nat44.rst
+++ b/docs/report/introduction/methodology_nat44.rst
@@ -24,7 +24,7 @@ and port bindings scenarios:
ports per outside source address. The maximal number of
ports-per-outside-address usable for NAT is 64 512
(in non-reserved port range 1024-65535, :rfc:`4787`).
-- Sharing-ratio, equal to inside-addresses / outside-addresses.
+- Sharing-ratio, equal to inside-addresses divided by outside-addresses.
CSIT NAT44 tests are designed to take into account the maximum number of
ports (sessions) required per inside host (inside-address) and at the
@@ -50,6 +50,10 @@ are based on ports-per-inside-address set to 63 and the sharing ratio of
NAT44det (NAT44 deterministic used for Carrier Grade NAT applications)
and NAT44ed (Endpoint Dependent).
+..
+ TODO: Will we ever test other than 63 ports-per-inside-address?
+ TODO: Will we ever test NAT44ei? What about NAT66, NAT64, NAT46?
+
Private address ranges to be used in tests:
- 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
@@ -104,6 +108,9 @@ and port (1024).
The inside-to-outside traffic covers whole inside address and port range,
the outside-to-inside traffic covers whole outside address and port range.
+..
+ TODO: Clarify outside-to-inside source and destination address+port.
+
NAT44det translation entries are created during the ramp-up phase,
followed by verification that all entries are present,
before proceeding to the main measurements of the test.
@@ -123,6 +130,7 @@ NAT44det scenario tested:
..
TODO: The -s{S} part is redundant,
we can save space by removing it.
+ TODO: Rename nat44det suites so it is clear they are throughput (not cps).
TODO: Make traffic profile names resemble suite names more closely.
NAT44 Endpoint-Dependent
@@ -136,16 +144,15 @@ but applied to different subnet (starting with 20.0.0.0).
As the mapping is not deterministic (for security reasons),
we cannot easily use stateless bidirectional traffic profiles.
-Outside address and port range is fully covered,
+Inside address and port range is fully covered,
but we do not know which outside-to-inside source address and port to use
-to hit an open session of a particular outside address and port.
+to hit an open session.
Therefore, NAT44ed is benchmarked using following methodologies:
- Unidirectional throughput using *stateless* traffic profile.
-- Connections-per-second using *stateful* traffic profile.
-- Bidirectional PPS (see below) using *stateful* traffic profile.
-- Bidirectional throughput (see below) using *stateful* traffic profile.
+- Connections-per-second (CPS) using *stateful* traffic profile.
+- Bidirectional throughput (TPUT, see below) using *stateful* traffic profile.
Unidirectional NAT44ed throughput tests are using TRex STL (Stateless)
APIs and traffic profiles, but with packets sent only in
@@ -159,13 +166,14 @@ so it acts also as a ramp-up.
Stateful NAT44ed tests are using TRex ASTF (Advanced Stateful) APIs and
traffic profiles, with packets sent in both directions. Tests are run
-with both UDP and TCP/IP sessions.
-As both NAT44ed CPS (connections-per-second) and PPS (packets-per-second)
-stateful tests measure (also) session opening performance,
+with both UDP and TCP sessions.
+As NAT44ed CPS (connections-per-second) stateful tests
+measure (also) session opening performance,
they use state reset instead of ramp-up trial.
-NAT44ed bidirectional throughput tests use the same traffic profile
-as PPS tests, but also prepend ramp-up trials as in the unidirectional tests,
-so the test results describe performance without session opening overhead.
+NAT44ed TPUT (bidirectional throughput) tests prepend ramp-up trials
+as in the unidirectional tests,
+so the test results describe performance without translation entry
+creation overhead.
Associated CSIT test cases use the following naming scheme to indicate
NAT44det case tested:
@@ -179,22 +187,22 @@ NAT44det case tested:
- udir-[mrr|ndrpdr|soak], unidirectional stateless tests MRR, NDRPDR
or SOAK.
-- Stateful: ethip4[udp|tcp]-nat44ed-h{H}-p{P}-s{S}-[cps|pps|tput]-[mrr|ndrpdr]
+- Stateful: ethip4[udp|tcp]-nat44ed-h{H}-p{P}-s{S}-[cps|tput]-[mrr|ndrpdr|soak]
- - [udp|tcp], UDP or TCP/IP sessions
+ - [udp|tcp], UDP or TCP sessions
- {H}, number of inside hosts, H = 1024, 4096, 16384, 65536, 262144.
- {P}, number of ports per inside host, P = 63.
- {S}, number of sessions, S = 64512, 258048, 1032192, 4128768,
16515072.
- - [cps|pps|tput], connections-per-second session establishment rate or
-   packets-per-second average rate, or packets-per-second rate
-   without session establishment.
+ - [cps|tput], connections-per-second session establishment rate,
+   or packets-per-second rate without session establishment.
- - [mrr|ndrpdr], bidirectional stateful tests MRR, NDRPDR.
+ - [mrr|ndrpdr|soak], bidirectional stateful tests MRR, NDRPDR, or SOAK.
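
The scale values above are mutually consistent, as the number of sessions is
simply the number of inside hosts multiplied by the ports per host; a quick check:

.. code-block:: python

    # Consistency check of the H, P and S values listed above.
    ports_per_host = 63
    for hosts in (1024, 4096, 16384, 65536, 262144):
        print(hosts, hosts * ports_per_host)
    # 64512, 258048, 1032192, 4128768, 16515072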
Stateful traffic profiles
^^^^^^^^^^^^^^^^^^^^^^^^^
-There are several important detais which distinguish ASTF profiles
+There are several important details which distinguish ASTF profiles
from stateless profiles.
General considerations
@@ -208,9 +216,11 @@ ASTF profiles are limited to either UDP or TCP protocol.
Programs
________
-Each template in the profile defines two "programs", one for client side
-and one for server side. Each program specifies when that side has to wait
-until enough data is received (counted in packets for UDP and in bytes for TCP)
+Each template in the profile defines two "programs", one for the client side
+and one for the server side.
+
+Each program specifies when that side has to wait until enough data is received
+(counted in packets for UDP and in bytes for TCP)
and when to send additional data. Together, the two programs
define a single transaction. Due to packet loss, transaction may take longer,
use more packets (retransmission) or never finish in its entirety.
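
As an illustration of the two-program idea (a sketch using upstream TRex ASTF
Python API names, with made-up payload sizes; not the actual CSIT profile),
a simple request-response transaction could be written as:

.. code-block:: python

    # Illustrative sketch only; real CSIT profiles live in the CSIT repository.
    from trex.astf.api import ASTFProgram

    REQUEST = b"x" * 100    # hypothetical request payload
    RESPONSE = b"y" * 200   # hypothetical response payload

    # Client side: send the request, wait for the whole response, end.
    prog_client = ASTFProgram()
    prog_client.send(REQUEST)
    prog_client.recv(len(RESPONSE))

    # Server side: wait for the whole request, send the response, end.
    prog_server = ASTFProgram()
    prog_server.recv(len(REQUEST))
    prog_server.send(RESPONSE)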
@@ -218,15 +228,24 @@ use more packets (retransmission) or never finish in its entirety.
Instances
_________
-Client instance is created according to TPS parameter for the trial,
+A client instance is created according to the TPS parameter for the trial,
and sends the first packet of the transaction (in some cases more packets).
-Server instance is created when first packet arrives on server side,
-each instance has different address or port.
-When a program reaches its end, the instance is deleted.
+Each client instance uses a different source address (see sequencing below)
+and some source port. The destination address also comes from a range,
+but destination port has to be constant for a given program.
+
+TRex uses an opaque way to choose source ports, but as session counting shows,
+the next client with the same source address uses a different source port.
+Server instance is created when the first packet arrives at the server side.
+Source address and port of the first packet are used as destination address
+and port for the server responses. This is exactly the ability we need
+when the outside address and port are not predictable.
+
+When a program reaches its end, the instance is deleted.
This creates possible issues with server instances. If the server instance
does not read all the data client has sent, late data packets
-can cause second copy of server instance to be created,
+can cause a second copy of the server instance to be created,
which breaks assumptions on how many packets a transaction should have.
The need for server instances to read all the data reduces the overall
@@ -244,13 +263,9 @@ for client programs: seqential and pseudorandom.
In current tests we are using sequential addressing only (if destination
address varies at all).
-For choosing client source UDP/TCP port, there is only one mode.
-We have not investigated whether it results in sequential or pseudorandom order.
-
-For client destination UDP/TCP port, we use a constant value,
-as typical TRex usage pattern binds the server instances (of the same program)
-to a single port. (If profile defines multiple server programs, different
-programs use different ports.)
+For client destination UDP/TCP port, we use a single constant value.
+(TRex can support multiple program pairs in the same traffic profile,
+distinguished by the port number.)
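
For illustration, sequential sequencing and the constant destination port map
onto the upstream TRex ASTF API roughly as below (address ranges are examples
taken from this document, not the exact CSIT values):

.. code-block:: python

    # Illustrative sketch of sequential address sequencing; not the exact CSIT profile.
    from trex.astf.api import ASTFIPGen, ASTFIPGenDist

    ip_gen = ASTFIPGen(
        dist_client=ASTFIPGenDist(ip_range=["192.168.0.1", "192.168.3.255"],
                                  distribution="seq"),
        dist_server=ASTFIPGenDist(ip_range=["20.0.0.1", "20.0.3.255"],
                                  distribution="seq"),
    )
    # The constant client destination port is set on the template, e.g.
    # ASTFTCPClientTemplate(program=prog_client, ip_gen=ip_gen, port=8080).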
Transaction overlap
___________________
@@ -264,7 +279,8 @@ This generally leads to duration stretching, and/or packet loss on TRex.
Currently used transactions were chosen to be short, so risk of bad behavior
is decreased. But in MRR tests, where load is computed based on NIC ability,
-not TRex ability, anomalous behavior is still possible.
+not TRex ability, anomalous behavior is still possible
+(e.g. MRR values being way lower than NDR).
Delays
______
@@ -309,12 +325,12 @@ can keep up with higher loads.
For some tests, we do not need to confirm the whole transaction was successful.
CPS (connections per second) tests are a typical example.
-We care only for NAT44ed creating a session (needs one packet in inside-to-outside
-direction per session) and being able to use it (needs one packet
-in outside-to-inside direction).
+We care only for NAT44ed creating a session (needs one packet
+in inside-to-outside direction per session) and being able to use it
+(needs one packet in outside-to-inside direction).
-Similarly in PPS (packets per second, combining session creation
-with data transfer) tests, we care about NAT44ed ability to forward packets,
+Similarly in TPUT tests (packet throughput, counting both control
+and data packets), we care about NAT44ed ability to forward packets,
we do not care whether applications (TRex) can fully process them at that rate.
Therefore each type of tests has its own formula (usually just one counter
@@ -324,11 +340,11 @@ use size-limited profiles, so they know what the count of attempted
transactions should be, but due to duration stretching
TRex might have been unable to send that many packets.
For search purposes, unattempted transactions are treated the same
-as attemted byt failed transactions.
+as attempted but failed transactions.
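
The effect on the loss ratio used by the search can be written down in a couple
of lines (a simplified sketch with example numbers; the actual counters differ
per test type, see below):

.. code-block:: python

    # Simplified sketch; real tests pick the counters per test type.
    expected_transactions = 64512   # known in advance from the size-limited profile
    successful = 63000              # example value from the relevant counter
    failed_or_unattempted = expected_transactions - successful
    loss_ratio = failed_or_unattempted / expected_transactions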
Sometimes even the number of transactions as tracked by search algorithm
does not match the transactions as defined by ASTF programs.
-See PPS profiles below.
+See TCP TPUT profile below.
UDP CPS
~~~~~~~
@@ -340,7 +356,7 @@ Client instance sends one packet and ends.
Server instance sends one packet upon creation and ends.
In principle, packet size is configurable,
-but currently used tests apply only one value (64 bytes frame).
+but currently used tests apply only one value (100 bytes frame).
Transaction counts as attempted when opackets counter increases on client side.
Transaction counts as successful when ipackets counter increases on client side.
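
In ASTF terms, the UDP CPS transaction is roughly the following sketch
(payload size is an assumption; the CSIT profile pads to the 100 byte frame
mentioned above):

.. code-block:: python

    # Illustrative sketch of the UDP CPS transaction; not the exact CSIT profile.
    from trex.astf.api import ASTFProgram

    payload = b"\x00" * 58   # assumption: padded so the resulting frame is ~100 bytes

    # Client: send one datagram and end.
    prog_client = ASTFProgram(stream=False)
    prog_client.send_msg(payload)

    # Server: consume the packet that created this instance, reply once, end.
    prog_server = ASTFProgram(stream=False)
    prog_server.recv_msg(1)
    prog_server.send_msg(payload)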
@@ -357,7 +373,7 @@ Server accepts the connection. Server waits for indirect confirmation
from client (by waiting for client to initiate close). Server ends.
Without packet loss, the whole transaction takes 7 packets to finish
-(4 and 3 per direction, respectively).
+(4 and 3 per direction).
From NAT44ed point of view, only the first two are needed to verify
the session got created.
@@ -369,66 +385,69 @@ on client side.
Transaction counts as successful when tcps_connects counter increases
on client side.
-UDP PPS
-~~~~~~~
+UDP TPUT
+~~~~~~~~
This profile uses a small transaction of "request-response" type,
with several packets simulating data payload.
-Client sends 33 packets and closes immediately.
-Server reads all 33 packets (needed to avoid late packets creating new
-server instances), then sends 33 packets and closes.
-The value 33 was chosen ad-hoc (1 "protocol" packet and 32 "data" packets).
-It is possible other values would still be safe from avoiding overlapping
-transactions point of view.
+Client sends 5 packets and closes immediately.
+Server reads all 5 packets (needed to avoid late packets creating new
+server instances), then sends 5 packets and closes.
+The value 5 was chosen to mirror what TCP TPUT (see below) chooses.
-..
- TODO: 32 was chosen as it is a batch size DPDK driver puts on the PCIe bus
- at a time. May want to verify this with TRex ASTF devs and see if better
- UDP transaction sizes can be found to yield higher performance out of TRex.
-
-In principle, packet size is configurable,
-but currently used tests apply only one value (64 bytes frame)
-for both "protocol" and "data" packets.
+Packet size is configurable; currently we have tests for 100,
+1518 and 9000 byte frames (to match the size of TCP TPUT data frames, see below).
-As this is a PPS tests, we do not track the big 66 packet transaction.
-Similarly to stateless tests, we treat each packet as a "transaction"
-for search algorthm purposes. Therefore a "transaction" is attempted
-when opacket counter on client or server side is increased.
-Transaction is successful if ipacket counter on client or server side
-is increased.
+As this is a packet-oriented test, we do not track the whole
+10-packet transaction. Similarly to stateless tests, we treat each packet
+as a "transaction" for search algorithm packet loss ratio purposes.
+Therefore a "transaction" is attempted when the opackets counter on client
+or server side is increased. A transaction is successful if the ipackets counter
+on client or server side is increased.
-If one of 33 client packets is lost, server instance will get stuck
+If one of the 5 client packets is lost, the server instance will get stuck
in the reading phase. This probably decreases TRex performance,
-but it leads to more stable results.
+but it leads to more stable results than alternatives.
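
The 5+5 packet exchange then looks roughly like this (again a sketch using
upstream TRex ASTF API names, not the exact CSIT profile):

.. code-block:: python

    # Illustrative sketch of the UDP TPUT transaction; not the exact CSIT profile.
    from trex.astf.api import ASTFProgram

    PACKETS = 5
    payload = b"\x00" * 58   # assumption: padded to the tested frame size

    # Client: send 5 datagrams and end immediately (no read of the responses).
    prog_client = ASTFProgram(stream=False)
    for _ in range(PACKETS):
        prog_client.send_msg(payload)

    # Server: read all 5 datagrams first (avoids late packets spawning a second
    # server instance), then send 5 datagrams and end.
    prog_server = ASTFProgram(stream=False)
    prog_server.recv_msg(PACKETS)
    for _ in range(PACKETS):
        prog_server.send_msg(payload)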
-TCP PPS
-~~~~~~~
+TCP TPUT
+~~~~~~~~
This profile uses a small transaction of "request-response" type,
-with some data size to be transferred both ways.
-
-Client connects, sends 11111 bytes of data, receives 11111 of data and closes.
-Server accepts connection, reads 11111 bytes of data, sends 11111 bytes
-of data and closes.
-Server read is needed to avoid premature close and second server instance.
-Client read is not stricly needed, but acks help TRex to close server quickly,
-thus saving CPU and improving performance.
+with some amount of data to be transferred both ways.
-The value of 11111 bytes was chosen ad-hoc. It leads to 22 packets
-(11 each direction) to be exchanged if no loss occurs.
-In principle, size of data packets is configurable via setting
-maximum segment size. Currently that is not applied, so the TRex default value
-(1460 bytes) is used, while the test name still (wrongly) mentions
-64 byte frame size.
+Client connects, sends 5 data packets worth of data,
+receives 5 data packets worth of data and closes its side of the connection.
+Server accepts the connection, reads 5 data packets worth of data,
+sends 5 data packets worth of data and closes its side of the connection.
+As usual in TCP, the sending side waits for an ACK from the receiving side
+before proceeding with the next step of its program.
-Exactly as in UDP_PPS, ipackets and opackets counters are used for counting
+Server read is needed to avoid a premature close and a second server instance.
+Client read is not strictly needed, but ACKs allow TRex to close
+the server instance quickly, thus saving CPU and improving performance.
+
+The number of data packets (5) was chosen so TRex is able to send them
+in a single burst, even with 9000 byte frames (TRex has a hard limit
+on the initial window size).
+That leads to 16 packets (9 of them in c2s direction) to be exchanged
+if no loss occurs.
+The size of data packets is controlled by the traffic profile setting
+the appropriate maximum segment size. Due to TRex restrictions,
+the minimal size for an IPv4 data frame achievable by this method is 70 bytes,
+which is more than our usual minimum of 64 bytes.
+For that reason, the data frame sizes available for testing are 100 bytes
+(that allows room for eventually adding IPv6 ASTF tests),
+1518 bytes and 9000 bytes. There is no control over control packet sizes.
+
+Exactly as in UDP TPUT, ipackets and opackets counters are used for counting
"transactions" (in fact packets).
-If packet loss occurs, there is large transaction overlap, even if most
-ASTF programs finish eventually. This leads to big duration stretching
+If packet loss occurs, there can be large transaction overlap, even if most
+ASTF programs finish eventually. This can lead to big duration stretching
and a somewhat uneven rate of packets sent. This makes it hard to interpret
-MRR results, but NDR and PDR results tend to be stable enough.
+MRR results (frequently MRR is below NDR for this reason),
+but NDR and PDR results tend to be stable enough.
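
A sketch of the TCP TPUT exchange and the MSS control it relies on
(ASTFGlobalInfo attribute names follow the upstream TRex ASTF documentation;
the values here are illustrative, not the exact CSIT settings):

.. code-block:: python

    # Illustrative sketch of the TCP TPUT transaction; not the exact CSIT profile.
    from trex.astf.api import ASTFGlobalInfo, ASTFProgram

    MSS = 1460                   # example segment size; the profile derives it per frame size
    DATA = b"\x00" * (5 * MSS)   # "5 data packets worth of data" in each direction

    # Client: connect, send, read the response (ACKs let the server close fast), close.
    prog_client = ASTFProgram()
    prog_client.send(DATA)
    prog_client.recv(len(DATA))

    # Server: read everything first (avoids premature close), then respond and close.
    prog_server = ASTFProgram()
    prog_server.recv(len(DATA))
    prog_server.send(DATA)

    # Global TCP tuning: MSS controls the data packet size; the initial window
    # must be large enough to send the 5 segments in a single burst.
    tcp_info = ASTFGlobalInfo()
    tcp_info.tcp.mss = MSS
    tcp_info.tcp.initwnd = 10    # assumption; must cover 5 segments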
Ip4base tests
^^^^^^^^^^^^^
@@ -443,7 +462,7 @@ directions.
The packets arrive to server end of TRex with different source address&port
than in NAT44ed tests (no translation to outside values is done with ip4base),
but those are not specified in the stateful traffic profiles.
-The server end uses the received address&port as destination
+The server end (as always) uses the received address&port as destination
for outside-to-inside traffic. Therefore the same stateful traffic profile
works for both NAT44ed and ip4base test (of the same scale).
diff --git a/docs/report/introduction/methodology_trex_traffic_generator.rst b/docs/report/introduction/methodology_trex_traffic_generator.rst
index 9813b28025..180e3dda8c 100644
--- a/docs/report/introduction/methodology_trex_traffic_generator.rst
+++ b/docs/report/introduction/methodology_trex_traffic_generator.rst
@@ -43,34 +43,37 @@ CSIT uses ASTF (Advanced STateFul mode).
This mode is suitable for NAT44ED tests, as clients send packets from inside,
and servers react to it, so they see the outside address and port to respond to.
-Also, they do not send traffic before NAT44ED has opened the sessions.
+Also, they do not send traffic before NAT44ED has created the corresponding
+translation entry.
When possible, L2 counters (opackets, ipackets) are used.
Some tests need L7 counters, which track protocol state (e.g. TCP),
-but the values are less than reliable on high loads.
+but those values are less than reliable on high loads.
Traffic Continuity
~~~~~~~~~~~~~~~~~~
-Generated traffic is either continuous, or limited.
+Generated traffic is either continuous, or limited (by number of transactions).
Both modes support both continuities in principle.
Continuous traffic
__________________
-Traffic is started without any size goal.
-Traffic is ended based on time duration as hinted by search algorithm.
+Traffic is started without any data size goal.
+Traffic is ended based on time duration, as hinted by search algorithm.
This is useful when DUT behavior does not depend on the traffic duration.
The default for stateless mode.
Limited traffic
_______________
-Traffic has defined size goal, duration is computed based on the goal.
+Traffic has defined data size goal (given as number of transactions),
+duration is computed based on this goal.
Traffic is ended when the size goal is reached,
or when the computed duration is reached.
This is useful when DUT behavior depends on traffic size,
-e.g. target number of session, each to be hit once.
+e.g. target number of NAT translation entries, each to be hit exactly once
+per direction.
This is used mainly for stateful mode.
Traffic synchronicity
@@ -113,7 +116,9 @@ in CSIT called transaction templates. Traffic profiles also instruct
TRex how to create a large number of transactions based on the templates.
Continuous traffic loops over the generated transactions.
-Limited traffic usually executes each transaction once.
+Limited traffic usually executes each transaction once
+(typically as a constant number of loops over source addresses,
+each loop using different source ports).
Currently, ASTF profiles define one transaction template each.
Number of packets expected per one transaction varies based on profile details,
@@ -163,11 +168,11 @@ for example the limit for TCP traffic depends on DUT packet loss.
In CSIT we decided to use logic similar to asynchronous traffic.
The traffic driver sleeps for a time, then stops the traffic explicitly.
The library that parses counters into measurement results
-than usually treats unsent packets as lost.
+then usually treats unsent packets/transactions as lost/failed.
We have added IP4base tests for every NAT44ED test,
so that users can compare results.
-Of the results are very similar, it is probable TRex was the bottleneck.
+If the results are very similar, it is probable TRex was the bottleneck.
Startup delay
_____________
@@ -184,7 +189,9 @@ Thus "sleep and stop" stategy is used, which needs a correction
to the computed duration so traffic is stopped after the intended
duration of real traffic. Luckily, it turns out this correction
is not dependent on the traffic profile or the CPU used by TRex,
-so a fixed constant (0.1115 seconds) works well.
+so a fixed constant (0.112 seconds) works well.
+Unfortunately, the constant may depend on TRex version,
+or execution environment (e.g. TRex in AWS).
The result computations need a precise enough duration of the real traffic,
luckily server side of TRex has precise enough counter for that.
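
One plausible reading of the correction, as a sketch (the start/stop callables
are hypothetical stand-ins for the CSIT traffic driver):

.. code-block:: python

    # Sketch of the "sleep and stop" strategy with the startup correction.
    import time

    STARTUP_CORRECTION = 0.112   # seconds; may differ per TRex version or environment

    def run_trial(start_traffic, stop_traffic, intended_duration):
        start_traffic()
        time.sleep(intended_duration + STARTUP_CORRECTION)
        stop_traffic()           # real traffic lasted roughly intended_duration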
@@ -201,3 +208,6 @@ If measurement of latency is requested, two more packet streams are
created (one for each direction) with TRex flow_stats parameter set to
STLFlowLatencyStats. In that case, returned statistics will also include
min/avg/max latency values and encoded HDRHistogram data.
+
+..
+ TODO: Mention we have added TRex self-test suites.
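
For illustration, a latency-enabled stream in the TRex STL Python API looks
roughly like this (packet contents, rate and pg_id are placeholders, not the
CSIT values):

.. code-block:: python

    # Illustrative sketch of a latency stream; not the exact CSIT implementation.
    from scapy.all import Ether, IP, UDP
    from trex.stl.api import (STLFlowLatencyStats, STLPktBuilder, STLStream,
                              STLTXCont)

    pkt = STLPktBuilder(
        pkt=Ether() / IP(src="192.168.0.1", dst="20.0.0.1") / UDP(dport=1024)
            / (b"\x00" * 18))

    latency_stream = STLStream(
        packet=pkt,
        mode=STLTXCont(pps=1000),                  # placeholder latency stream rate
        flow_stats=STLFlowLatencyStats(pg_id=0),   # enables min/avg/max and HDRHistogram
    )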