From 1daa6fdc0bae284dee1b61f34534e59b60b7526a Mon Sep 17 00:00:00 2001
From: Vratko Polak
Date: Mon, 25 Apr 2022 10:22:05 +0200
Subject: feat(astf): Support framesizes for ASTF

- No support for IMIX.
+ Fix a bad bug in padding (most ASTF profiles had wrong frame sizes).
+ Fix a big typo in TCP PPS profiles (s->c was not data, just RST).
+ Control transaction size via the ASTF_N_DATA_FRAMES env variable
  (see the sketch after this list).
 - Default value 5 leads to transactions smaller than before.
 + It ensures a transaction is one burst (per direction) even for jumbo.
+ Edit autogen to set supported frame sizes based on suite id.
 + Both TCP and UDP use the same values:
  + 64B for CPS (exact for UDP, nominal for TCP).
  + 100B, 1518B and 9000B for TPUT and PPS.
   - TCP TPUT achievable minimum is 70B.
   + Used 100B to leave room for possible IPv6 ASTF tests.
 + Separate function for code reused by vpp and trex tests.
  - I do not really like the new "copy and edit" approach added here.
  + But it is a quick edit; a better autogen refactor is low priority.
+ Consider both established and transitory sessions as valid.
 - Mostly for compatibility with 2202 behavior and to avoid ramp-ups.
 - Assuming both session states have similar enough VPP CPU overhead.
 + Added a TODO to investigate and maybe reconsider later.
+ Update the state timeout value to 240s.
 + That is the default for TCP (for transitory state).
 - UDP could keep using 300s.
 + But I prefer UDP and TCP to behave as similarly as possible.
+ Use TRex tunables to get the exact frame size (for data packets).
 - It is not clear why the recipe for MSS has to be this complicated.
 + Move code away from profile init, as frame size is not known there.
 + Change internal profile API, so values related to MSS are passed.
+ Lower ramp-up rate for TCP TPUT tests.
 + Because without the lower rate, jumbo fails on packet loss in ramp-up.
 + UDP TPUT ramp-up rate also lowered (just to keep suites more similar).
+ Distinguish one-direction and aggregated average frame size.
 + Update keyword documentation where the distinction matters.
 + One-direction is needed for turning a bandwidth limit into a TPS limit.
 + Aggregated is needed for a correct NDRPDR bandwidth result value.
 - TCP TPUT will always be a few percent below the bidirectional maximum.
  + That is unavoidable, as one direction sends more control packets.
+ Add runtime consistency checks so future refactors are safer.
 + Fail if the requested padding would be negative.
 + Fail if a suite claims unexpected values for packets per transaction.
+ Edit the 4 types of ASTF profiles to keep them similar to each other.
 + Move UDP TPUT limit value from a field back to a direct argument.
 + Stop pretending the first UDP packet is not data.
+ Apply small improvements where convenient.
 + Replace "aggregate" with "aggregated" where possible.
  + To lower the probability of any future typos in variable names.
 + Avoid calling Set Numeric Frame Sizes twice.
 + Code formatting, keyword documentation, code comments, ...
 + Add TODOs for less important code quality improvements.
- Postpone updating of methodology pages to a subsequent change.
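As a rough illustration only (simplified Ethernet/IPv4 header math, hypothetical
helper names, not the actual CSIT or TRex tunable code), the intended relation
between frame size, ASTF_N_DATA_FRAMES and per-direction transaction size is:

    import os

    L2_HEADER = 14    # Ethernet header; FCS is ignored in this simplified sketch.
    IPV4_HEADER = 20
    TCP_HEADER = 20   # Assuming no TCP options.
    UDP_HEADER = 8

    def data_payload_per_frame(frame_size, l4="tcp"):
        """Return how many L4 payload bytes fit into one data frame."""
        l4_header = TCP_HEADER if l4 == "tcp" else UDP_HEADER
        payload = frame_size - L2_HEADER - IPV4_HEADER - l4_header
        if payload < 0:
            # Mirrors the new consistency check: negative padding is an error.
            raise RuntimeError(f"Frame size {frame_size}B cannot hold {l4} headers.")
        return payload

    def transaction_data_size(frame_size, l4="tcp"):
        """Bytes of data one direction sends per transaction (one burst)."""
        n_data_frames = int(os.environ.get("ASTF_N_DATA_FRAMES", "5"))
        return n_data_frames * data_payload_per_frame(frame_size, l4)

    print(transaction_data_size(100))   # 5 * 46 = 230 bytes with the defaults.
    print(transaction_data_size(9000))  # Jumbo data still fits one 5-frame burst.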
Change-Id: I4b381e5210e69669f972326202fdcc5a2c9c923b
Signed-off-by: Vratko Polak
---
 .../robot/performance/performance_display.robot | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

(limited to 'resources/libraries/robot/performance/performance_display.robot')

diff --git a/resources/libraries/robot/performance/performance_display.robot b/resources/libraries/robot/performance/performance_display.robot
index db2b522091..a6df6f7b3a 100644
--- a/resources/libraries/robot/performance/performance_display.robot
+++ b/resources/libraries/robot/performance/performance_display.robot
@@ -1,4 +1,4 @@
-# Copyright (c) 2021 Cisco and/or its affiliates.
+# Copyright (c) 2022 Cisco and/or its affiliates.
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at:
@@ -46,19 +46,19 @@
 | | ... | ${message}${\n}${message_zero} | ${message}${\n}${message_other}
 | | Fail | ${message}
 
-| Compute bandwidth
+| Compute Bandwidth
 | | [Documentation]
 | | ... | Compute (bidir) bandwidth from given (unidir) transaction rate.
 | | ...
-| | ... | This keyword reads "ppta" and "avg_frame_size" set elsewhere.
-| | ... | The implementation should work for both pps and cps rates.
+| | ... | This keyword reads \${ppta} and \${avg_aggregated_frame_size} set
+| | ... | elsewhere. The implementation should work for both pps and cps rates.
 | | ...
 | | ... | *Arguments:*
 | | ... | - tps - Transaction rate (unidirectional) [tps]. Type: float
 | |
 | | ... | *Returns:*
 | | ... | - Computed bandwidth in Gbps.
-| | ... | - Computed aggregate packet rate in pps.
+| | ... | - Computed aggregated packet rate in pps.
 | |
 | | ... | *Example:*
 | |
@@ -68,7 +68,7 @@
 | |
 | | ${ppta} = | Get Packets Per Transaction Aggregated
 | | ${pps} = | Evaluate | ${tps} * ${ppta}
-| | ${bandwidth} = | Evaluate | ${pps} * (${avg_frame_size}+20)*8 / 1e9
+| | ${bandwidth} = | Evaluate | ${pps} * (${avg_aggregated_frame_size}+20)*8/1e9
 | | Return From Keyword | ${bandwidth} | ${pps}
 
 | Display Reconfig Test Message
@@ -96,7 +96,7 @@
 | Display result of NDRPDR search
 | | [Documentation]
 | | ... | Display result of NDR+PDR search, both quantities, both bounds,
-| | ... | aggregate in units given by trasaction type, e.g. by default
+| | ... | aggregated, in units given by trasaction type, e.g. by default
 | | ... | in packet per seconds and Gbps total bandwidth
 | | ... | (for initial packet size).
 | | ... |
@@ -115,7 +115,7 @@
 | | ... | - transaction_type - String identifier to determine how to count
 | | ... | transactions. Default is "packet".
 | | ... | *Arguments:*
-| | ... | - result - Measured result data. Aggregate rate, tps or pps.
+| | ... | - result - Measured result data. Aggregated rate, tps or pps.
 | | ... | Type: NdrPdrResult
 | |
 | | ... | *Example:*
@@ -175,7 +175,7 @@
 | | ... | it is in transactions per second. Bidirectional traffic
 | | ... | transaction is understood as having 2 packets, for this purpose.
 | | ... |
-| | ... | Pps values are aggregate in packet per seconds,
+| | ... | Pps values are aggregated, in packet per seconds
 | | ... | and Gbps total bandwidth (for initial packet size).
 | | ... |
 | | ... | Througput is calculated as:
@@ -231,8 +231,8 @@
 
 | Display single pps bound
 | | [Documentation]
-| | ... | Display one pps bound of NDR+PDR search,
-| | ... | aggregate in packet per seconds and Gbps total bandwidth
+| | ... | Display one pps bound of NDR+PDR search, aggregated,
+| | ... | in packet per seconds and Gbps total bandwidth
 | | ... | (for initial packet size).
 | | ... |
 | | ... | The bound to display is given as target transfer rate, it is assumed
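For reference, the bandwidth arithmetic touched by the diff above can be written
as a minimal standalone Python sketch (variable names mirror the Robot keyword;
this is an illustration, not the CSIT library code):

    def compute_bandwidth(tps, ppta, avg_aggregated_frame_size):
        """Return (bandwidth_gbps, pps) for a unidirectional transaction rate."""
        # Aggregated packet rate: transactions per second times packets
        # per transaction (aggregated over both directions).
        pps = tps * ppta
        # Add 20B of L1 overhead (preamble + IPG) per frame, convert to Gbps.
        bandwidth_gbps = pps * (avg_aggregated_frame_size + 20) * 8 / 1e9
        return bandwidth_gbps, pps

    # Example: 1M transactions/s, 2 packets per transaction, 64B average frame.
    print(compute_bandwidth(1e6, 2, 64))  # -> (1.344, 2000000.0)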