author    | Vratko Polak <vrpolak@cisco.com> | 2020-10-27 19:09:44 +0100
committer | Vratko Polak <vrpolak@cisco.com> | 2020-10-29 20:25:50 +0000
commit    | 023fa41e51c966a1956bda6b915ffd894ff10e84 (patch)
tree      | cdb96c99a8ade4855176c43969cbd9a06adf693b /resources/libraries/python/PLRsearch
parent    | e31998ea56c55879fbaae8e58b0dad0bc6549dae (diff)
Support existing test types with ASTF
+ Add UDP_CPS, TCP_CPS, UDP_PPS and TCP_PPS suites.
+ Update existing cps traffic profiles.
+ Add missing traffic profiles.
+ UDP:
+ Single burst of 32 packets was confirmed as safe enough for TRex.
+ Maybe 64 could work, but not enough testing for that.
+ Multiple bursts have led to reduced TRex performance,
as overlapping bursts (from different client instances)
tend to fill up the buffers.
+ TCP:
+ Data size set to 11111 bytes, completely arbitrarily.
+ Results look reasonable, so I have kept that.
- MSS is not set at all.
- No tested support for frame sizes other than 64B.
- Frame size does not even factor into TCP profiles.
+ So other frame sizes are skipped in autogen.
+ Update tags in related suites.
- HOSTS_{n} and SRC_USER_{n} should be unified.
- Questionable clarification on the difference between IP4BASE and SCALE.
+ Add NAT state resetters to tests that need them.
+ Resetter is called (if set) before each measurement.
+ If ramp-up is detected, resetter is not set.
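The resetter hook works roughly as in the following sketch; the callable names are illustrative placeholders, not the actual CSIT keywords.

```python
from typing import Callable, Optional

def measure_with_optional_reset(
    measure: Callable[[float, float], object],
    duration: float,
    transmit_rate: float,
    resetter: Optional[Callable[[], None]] = None,
):
    """Run one trial measurement, clearing DUT state first if requested.

    When ramp-up traffic is used, the suite leaves resetter as None,
    so established NAT sessions survive between trials.
    """
    if resetter is not None:
        resetter()  # e.g. flush NAT sessions on the DUT
    return measure(duration, transmit_rate)
```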
+ Rename "mult" argument to "multiplier".
+ Abstracted from packets to transactions.
+ Transaction corresponds to profile.
+ TRex multiplier argument sets target rate in transactions per second.
+ The familiar STL traffic:
+ Bidirectional is considered to be 2 packets per transaction.
+ Unidirectional is considered to be 1 packet per transaction.
+ The newer ASTF traffic:
+ 4 subtypes, each with a different number of packets per transaction.
+ For max rate computation:
+ Packets in the more numerous direction are considered (see the sketch below).
+ Rely on TRex reported traffic duration for ASTF:
+ Use the server side value.
- Client side value is higher by an overhead.
- TRex is not sending traffic during that overhead time.
+ Remove delays from traffic profiles.
- Those delays would increase the reported traffic time.
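A rough sketch of the packets-per-transaction accounting behind the max rate computation mentioned above; the subtype names and per-subtype packet counts are illustrative placeholders, the real values come from the suites and traffic profiles.

```python
# Packets per transaction in the more numerous direction.
# STL: 1 packet per direction (bidirectional = 2 per transaction in total).
# ASTF: each of the 4 subtypes has its own constant; the values below
# are illustrative placeholders only.
PACKETS_PER_TRANSACTION_AGGREGATED = {
    u"stl_unidirectional": 1,
    u"stl_bidirectional": 1,
    u"astf_udp_cps": 1,
    u"astf_tcp_cps": 3,
}

def max_transactions_per_second(max_pps_per_direction, transaction_type):
    """Convert the per-direction packet rate cap into a TRex multiplier.

    The TRex multiplier is in transactions per second, so the packet cap
    is divided by the packet count of the busier direction.
    """
    return max_pps_per_direction / PACKETS_PER_TRANSACTION_AGGREGATED[transaction_type]
```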
+ Support for scale-limited trials.
+ Only for ASTF profiles, each ASTF profile has limited scale.
+ Scale defined in suite variables.
+ For TRex to send all transactions, the provided duration value is ignored.
+ The appropriate value is computed in TrafficGenerator.
+ An ad-hoc time constant is added to match the TRex client side time overhead.
+ The profile driver receives the computed duration.
+ Measurements for PLRsearch add a sleep if the computed duration is smaller than the target (see the sketch below).
+ Alternative argument for search algos if scale is limited.
+ Both need a higher timeout to accommodate big scales.
+ MLRsearch can afford fewer phases.
+ Added a parameter to optionally shorten the duration.
+ Use short duration for runtime stats trial and failure stats trial.
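A minimal sketch of the duration handling for scale-limited ASTF trials; the overhead constant and function names are assumptions, the real values live in TrafficGenerator.

```python
import time

# Placeholder for the ad-hoc constant matching the TRex client side
# time overhead; the real value is defined in TrafficGenerator.
CLIENT_SIDE_OVERHEAD = 0.1

def compute_trial_duration(transaction_scale, transactions_per_second):
    """Time TRex needs to send every transaction of a limited-scale profile."""
    return transaction_scale / transactions_per_second + CLIENT_SIDE_OVERHEAD

def pad_to_target_duration(target_duration, computed_duration):
    """PLRsearch expects the full target duration to elapse,
    so sleep for the remainder when traffic finishes sooner."""
    if computed_duration < target_duration:
        time.sleep(target_duration - computed_duration)
```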
+ Use very large keepalive values in UDP profiles to avoid keepalive packets.
+ No polling in ASTF profile driver.
- Polling could eliminate the time overhead value.
+ But polling proved to introduce some loss, affecting the results.
+ Handle duration stretching in ASTF by stopping traffic.
+ The stop has several steps so that:
+ The traffic is really stopped entirely.
+ Late packets do not count (except maybe as errors).
+ Stats are preserved to read for results (and cleared afterwards).
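The stop handling is roughly as below; it relies on generic TRex client calls (stop, wait, read and clear stats), so treat the exact call order and arguments as an assumption rather than the real profile driver code.

```python
def stop_and_collect(client):
    """Stop ASTF traffic so duration stretching does not skew results.

    Assumed sequence: stop generating new transactions, wait until the
    ports are really idle so late packets are not counted, read the
    preserved counters for the result, then clear them for the next trial.
    """
    client.stop()
    client.wait_on_traffic()
    stats = client.get_stats()
    client.clear_stats()
    return stats
```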
+ Several quantities added to ReceiveRateMeasurement:
+ Original target duration is preserved (algos need that).
+ Input estimate (tps) for early search iterations.
+ Output estimate (maybe pps) for MRR output.
+ Strict result (unsent counts as loss) for NDR.
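An illustrative view of the added quantities; the field names below are descriptive placeholders, not the actual ReceiveRateMeasurement attribute names.

```python
from dataclasses import dataclass

@dataclass
class MeasurementQuantitiesSketch:
    """Not the real ReceiveRateMeasurement, just the newly tracked values."""
    target_duration: float   # original duration the search algorithm asked for
    input_rate_tps: float    # estimate in transactions per second, for early iterations
    output_rate_pps: float   # estimate in packets per second, for MRR output
    strict_loss_count: int   # unsent transactions counted as lost, for NDR
```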
+ Use L2 counters (opackets, ipackets) where possible.
- TRex has trouble processing packets for the L7 counters at high loads.
+ Remove warmup from profile drivers and keywords.
+ Suites should call "Send ramp-up traffic" explicitly if needed.
+ Added parsing for a few more counters.
+ To be used in formulas or just for debug purposes.
- Only 64B cases in autogen, frame size support to be added later.
+ Latency streams during search can be enabled via PERF_USE_LATENCY env var.
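Reading the flag amounts to something like the sketch below; the accepted string values are an assumption.

```python
import os

def latency_streams_enabled():
    """Latency streams are disabled unless PERF_USE_LATENCY opts in."""
    return os.environ.get(u"PERF_USE_LATENCY", u"false").lower() in (u"true", u"yes", u"1")
```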
+ MLRsearch improvements:
+ Rename arguments to min_rate and max_rate.
+ Use relative receive rate in initial phase.
+ PLRsearch improvements:
+ Careful computation when output (pps) does not match input (tps).
+ Use geometric distribution (instead of Poisson).
+ Helps against math errors.
+ This should improve estimate stability.
- But in practice big losses still lead to significant jumps.
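In log space the geometric likelihood from the PLRsearch.py hunk below boils down to the following standalone sketch (it reimplements the idea with math.log1p instead of the library's log_plus helper).

```python
import math

def log1p_exp(x):
    """Numerically careful log(1 + exp(x)) for both signs of x."""
    return math.log1p(math.exp(x)) if x < 0.0 else x + math.log1p(math.exp(-x))

def log_geometric_likelihood(loss_count, log_avg_loss_per_trial):
    """Log-likelihood of seeing loss_count losses in one trial.

    A geometric distribution with mean avg = exp(log_avg_loss_per_trial)
    is assumed: P(k) = (1 / (1 + avg)) * (avg / (1 + avg)) ** k.
    Staying in log space avoids overflow for huge or tiny averages,
    which is where the Poisson form used to hit math errors.
    """
    return (
        -loss_count * log1p_exp(-log_avg_loss_per_trial)
        - log1p_exp(+log_avg_loss_per_trial)
    )
```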
+ Traffic generator improvements:
+ send_traffic_on_tg now calls the full set_rate_provider_defaults.
+ _send_traffic_on_tg_internal for the logic without provider defaults.
+ As the internal function is re-used by measure() without affecting defaults.
+ Move _parse_traffic_results just before get_measurement_result.
+ As the latter uses fields set by the former, it is now easier to read.
+ Multiple sources for approximate duration.
+ Tried from more precise to more available.
+ Includes logic for _pps tests (added in later change).
+ Move explicit type conversions to earlier occurrences.
+ Profile driver output field uses semicolons to simplify parsing.
+ Performance Robot lib file split into several smaller ones.
+ performance_actions.robot:
+ Hosts Additional Statistics Action For * keywords.
+ performance_display.robot:
+ Hosts keyword for displaying and verifying results.
+ Change test message to use the correct unit (pps or cps).
+ performance_limits.robot renamed to performance_vars.robot.
+ Added many keywords, mostly for accessing test variables.
+ Moved variables for Policer into a new keyword there.
+ Some keywords need sophisticated logic.
- Others are basically Get Variable Value.
+ But in the future more logic can be added without editing callers.
+ Documentation for the new keywords acts as documentation for test variables.
+ performance_utils.robot has the rest.
+ Eliminated arguments if the value is in a test variable.
+ Small improvements to documentation.
- Still not enough cleanup with respect to arguments and test variables.
+ Keywords are now sorted alphabetically in each file.
+ Suites:
+ Unified variables table:
+ No colons in comments.
+ Add ${n_hosts} and ${n_ports} and use them instead of hardcoded numbers.
+ Add -cps to existing cps suite names.
+ Remove "trial data overwrite".
+ Compute max rate as in STL suites.
+ Each NAT suite has ip4base suite to compare results to.
- Those act as indirect TRex calibration.
- VPP does not lose packets in those.
+ Latency in ASTF suites is hard-disabled.
- As we do not support latency in ASTF profiles yet.
+ Unidirectional tests governed by suite variable, not an argument.
+ Write long argument lists vertically.
+ Prefer to use argument names.
+ In Python, the last argument is also followed by a comma.
+ It makes renaming and reordering easier.
+ The same applies to prints with long lists of values.
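For illustration, the vertical argument style with a trailing comma looks like this; the names and values are made up, not real suite data.

```python
def describe_trial(
    duration,
    rate,
    frame_size,
    traffic_profile,
):
    # Long argument lists are written vertically, one name per line,
    # with a comma after the last one as well.
    print(
        duration,
        rate,
        frame_size,
        traffic_profile,
    )

describe_trial(
    duration=30.0,
    rate=1e6,
    frame_size=64,
    traffic_profile=u"example-astf-profile",
)
```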
+ A TODO to update API CRC file comments.
Change-Id: I84729355edbec051298a9de1162107f88ff5737d
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Diffstat (limited to 'resources/libraries/python/PLRsearch')
-rw-r--r-- | resources/libraries/python/PLRsearch/PLRsearch.py | 27 |
1 files changed, 14 insertions, 13 deletions
diff --git a/resources/libraries/python/PLRsearch/PLRsearch.py b/resources/libraries/python/PLRsearch/PLRsearch.py
index ec58fbd10f..226b482d76 100644
--- a/resources/libraries/python/PLRsearch/PLRsearch.py
+++ b/resources/libraries/python/PLRsearch/PLRsearch.py
@@ -54,7 +54,7 @@ class PLRsearch:
 
     def __init__(
             self, measurer, trial_duration_per_trial, packet_loss_ratio_target,
-            trial_number_offset=0, timeout=1800.0, trace_enabled=False):
+            trial_number_offset=0, timeout=7200.0, trace_enabled=False):
         """Store rate measurer and additional parameters.
 
         The measurer must never report negative loss count.
@@ -205,7 +205,7 @@ class PLRsearch:
             if (trial_number - self.trial_number_offset) <= 1:
                 next_load = max_rate
             elif (trial_number - self.trial_number_offset) <= 3:
-                next_load = (measurement.receive_rate / (
+                next_load = (measurement.relative_receive_rate / (
                     1.0 - self.packet_loss_ratio_target))
             else:
                 next_load = (avg1 + avg2) / 2.0
@@ -439,7 +439,7 @@ class PLRsearch:
         :param lfit_func: Fitting function, typically lfit_spread or lfit_erf.
         :param trial_result_list: List of trial measurement results.
         :param mrr: The mrr parameter for the fitting function.
-        :param spread: The spread parameter for the fittinmg function.
+        :param spread: The spread parameter for the fitting function.
         :type trace: function (str, object) -> None
         :type lfit_func: Function from 3 floats to float.
         :type trial_result_list: list of MLRsearch.ReceiveRateMeasurement
@@ -455,20 +455,21 @@ class PLRsearch:
             trace(u"for tr", result.target_tr)
             trace(u"lc", result.loss_count)
             trace(u"d", result.duration)
-            log_avg_loss_per_second = lfit_func(
+            # _rel_ values use units of target_tr (transactions per second).
+            log_avg_rel_loss_per_second = lfit_func(
                 trace, result.target_tr, mrr, spread
             )
-            log_avg_loss_per_trial = (
-                log_avg_loss_per_second + math.log(result.duration)
+            # _abs_ values use units of loss count (maybe packets).
+            # There can be multiple packets per transaction.
+            log_avg_abs_loss_per_trial = log_avg_rel_loss_per_second + math.log(
+                result.transmit_count / result.target_tr
             )
-            # Poisson probability computation works nice for logarithms.
-            log_trial_likelihood = (
-                result.loss_count * log_avg_loss_per_trial
-                - math.exp(log_avg_loss_per_trial)
-            )
-            log_trial_likelihood -= math.lgamma(1 + result.loss_count)
+            # Geometric probability computation for logarithms.
+            log_trial_likelihood = log_plus(0.0, -log_avg_abs_loss_per_trial)
+            log_trial_likelihood *= -result.loss_count
+            log_trial_likelihood -= log_plus(0.0, +log_avg_abs_loss_per_trial)
             log_likelihood += log_trial_likelihood
-            trace(u"avg_loss_per_trial", math.exp(log_avg_loss_per_trial))
+            trace(u"avg_loss_per_trial", math.exp(log_avg_abs_loss_per_trial))
             trace(u"log_trial_likelihood", log_trial_likelihood)
         return log_likelihood