author Vratko Polak <vrpolak@cisco.com> 2024-07-17 20:12:47 +0200
committer Maciek Konstantynowicz <mkonstan@cisco.com> 2024-07-21 14:18:57 +0000
commit 25e7fb3b4fe955d1eea55ff282544d340ebcfe2b (patch)
tree 8cce39c7275fcfc3f8bbef93fb37b079f0367869 /docs/ietf
parent 901fba157df6da09800866c7eb80014ae2ffbf5b (diff)
feat(ietf): MLRsearch draft-07
+ date change 17jul to 18jul due to ietf submission tool reqs
+ .xml and .txt

Change-Id: I8b6fbba3a412ce41773b73592722905a9f361861
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
Diffstat (limited to 'docs/ietf')
-rw-r--r--  docs/ietf/draft-ietf-bmwg-mlrsearch-06.md   1634
-rw-r--r--  docs/ietf/draft-ietf-bmwg-mlrsearch-07.md   3123
-rw-r--r--  docs/ietf/draft-ietf-bmwg-mlrsearch-07.txt  2800
-rw-r--r--  docs/ietf/draft-ietf-bmwg-mlrsearch-07.xml  3136
-rw-r--r--  docs/ietf/process.txt                          2
5 files changed, 9060 insertions, 1635 deletions
diff --git a/docs/ietf/draft-ietf-bmwg-mlrsearch-06.md b/docs/ietf/draft-ietf-bmwg-mlrsearch-06.md
deleted file mode 100644
index 27d65e2690..0000000000
--- a/docs/ietf/draft-ietf-bmwg-mlrsearch-06.md
+++ /dev/null
@@ -1,1634 +0,0 @@
----
-
-title: Multiple Loss Ratio Search
-abbrev: MLRsearch
-docname: draft-ietf-bmwg-mlrsearch-06
-date: 2024-03-04
-
-ipr: trust200902
-area: ops
-wg: Benchmarking Working Group
-kw: Internet-Draft
-cat: info
-
-coding: us-ascii
-pi: # can use array (if all yes) or hash here
- toc: yes
- sortrefs: # defaults to yes
- symrefs: yes
-
-author:
- -
- ins: M. Konstantynowicz
- name: Maciek Konstantynowicz
- org: Cisco Systems
- email: mkonstan@cisco.com
- -
- ins: V. Polak
- name: Vratko Polak
- org: Cisco Systems
- email: vrpolak@cisco.com
-
-normative:
- RFC1242:
- RFC2285:
- RFC2544:
- RFC9004:
-
-informative:
- TST009:
- target: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf
- title: "TST 009"
- FDio-CSIT-MLRsearch:
- target: https://csit.fd.io/cdocs/methodology/measurements/data_plane_throughput/mlr_search/
- title: "FD.io CSIT Test Methodology - MLRsearch"
- date: 2023-10
- PyPI-MLRsearch:
- target: https://pypi.org/project/MLRsearch/1.2.1/
- title: "MLRsearch 1.2.1, Python Package Index"
- date: 2023-10
-
---- abstract
-
-This document proposes extensions to [RFC2544] throughput search by
-defining a new methodology called Multiple Loss Ratio search
-(MLRsearch). MLRsearch aims to minimize search duration,
-support multiple loss ratio searches,
-and enhance result repeatability and comparability.
-
-The primary reason for extending [RFC2544] is to address the challenges
-and requirements presented by the evaluation and testing
-of software-based networking systems' data planes.
-
-To give users more freedom, MLRsearch provides additional configuration options
-such as allowing multiple shorter trials per load instead of one large trial,
-tolerating a certain percentage of trial results with higher loss,
-and supporting the search for multiple goals with varying loss ratios.
-
---- middle
-
-{::comment}
- As we use Kramdown to convert from Markdown,
- we use this way of marking comments not to be visible in the rendered draft.
- https://stackoverflow.com/a/42323390
- If another engine is used, convert to this way:
- https://stackoverflow.com/a/20885980
-{:/comment}
-
-# Purpose and Scope
-
-The purpose of this document is to describe Multiple Loss Ratio search
-(MLRsearch), a data plane throughput search methodology optimized for software
-networking DUTs.
-
-Applying vanilla [RFC2544] throughput bisection to software DUTs
-results in several problems:
-
-- Binary search takes too long as most trials are done far from the
- eventually found throughput.
-- The required final trial duration and pauses between trials
- prolong the overall search duration.
-- Software DUTs show noisy trial results,
- leading to a big spread of possible discovered throughput values.
-- Throughput requires a loss of exactly zero frames, but the industry
- frequently allows for small but non-zero losses.
-- The definition of throughput is not clear when trial results are inconsistent.
-
-
-To address the problems mentioned above,
-the MLRsearch library employs the following enhancements:
-
-- Allow multiple shorter trials instead of one big trial per load.
- - Optionally, tolerate a percentage of trial results with higher loss.
-- Allow searching for multiple search goals, with differing loss ratios.
- - Any trial result can affect each search goal in principle.
-- Insert multiple coarse targets for each search goal, with earlier targets
- needing to spend less time on trials.
- - Earlier targets also aim for lesser precision.
- - Use Forwarding Rate (FR) at maximum offered load
- [RFC2285] (section 3.6.2) to initialize the earliest targets.
-- Take care when dealing with inconsistent trial results.
- - Reported throughput is smaller than the smallest load with high loss.
- - Smaller load candidates are measured first.
-- Apply several load selection heuristics to save even more time
- by trying hard to avoid unnecessarily narrow bounds.
-
-Some of these enhancements are formalized as MLRsearch specification,
-the remaining enhancements are treated as implementation details,
-thus achieving high comparability without limiting future improvements.
-
-MLRsearch configuration options are flexible enough to
-support both conservative settings and aggressive settings.
-Conservative settings lead to results
-unconditionally compliant with [RFC2544],
-but to a longer search duration and worse repeatability.
-Conversely, aggressive settings lead to shorter search duration
-and better repeatability, but the results are not compliant with [RFC2544].
-
-No part of [RFC2544] is intended to be obsoleted by this document.
-
-# Identified Problems
-
-This chapter describes the problems affecting usability
-of various performance testing methodologies,
-mainly a binary search for [RFC2544] unconditionally compliant throughput.
-
-## Long Search Duration
-
-The emergence of software DUTs, with frequent software updates and a
-number of different frame processing modes and configurations,
-has increased both the number of performance tests
-required to verify the DUT update and the frequency of running those tests.
-This makes the overall test execution time even more important than before.
-
-The current [RFC2544] throughput definition restricts the potential
-for time-efficiency improvements.
-A more generalized throughput concept could enable further enhancements
-while maintaining the precision of simpler methods.
-
-The bisection method, when unconditionally compliant with [RFC2544],
-is excessively slow.
-This is because a significant amount of time is spent on trials
-with loads that, in retrospect, are far from the final determined throughput.
-
-[RFC2544] does not specify any stopping condition for throughput search,
-so users already have access to a limited trade-off
-between search duration and achieved precision.
-However, each full 60-second trial doubles the precision,
-so not many trials can be removed without a substantial loss of precision.
-
-## DUT in SUT
-
-[RFC2285] defines:
-- DUT as
- - The network forwarding device to which stimulus is offered and
- response measured [RFC2285] (section 3.1.1).
-- SUT as
- - The collective set of network devices to which stimulus is offered
- as a single entity and response measured [RFC2285] (section 3.1.2).
-
-[RFC2544] specifies a test setup with an external tester stimulating the
-networking system, treating it either as a single DUT, or as a system
-of devices, an SUT.
-
-In the case of software networking, the SUT consists of not only the DUT
-as a software program processing frames, but also of
-server hardware and operating system functions,
-with server hardware resources shared across all programs
-and the operating system running on the same server.
-
-Given that the SUT is a shared multi-tenant environment
-encompassing the DUT and other components, the DUT might inadvertently
-experience interference from the operating system
-or other software operating on the same server.
-
-Some of this interference can be mitigated.
-For instance,
-pinning DUT program threads to specific CPU cores
-and isolating those cores can prevent context switching.
-
-Despite taking all feasible precautions, some adverse effects may still impact
-the DUT's network performance.
-In this document, these effects are collectively
-referred to as SUT noise, even if the effects are not as unpredictable
-as what other engineering disciplines call noise.
-
-DUT can also exhibit fluctuating performance itself, for reasons
-not related to the rest of SUT; for example due to pauses in execution
-as needed for internal stateful processing.
-In many cases this
-may be an expected per-design behavior, as it would be observable even
-in a hypothetical scenario where all sources of SUT noise are eliminated.
-Such behavior affects trial results in a way similar to SUT noise.
-As the two phenomena are hard to distinguish,
-in this document the term 'noise' is used to encompass
-both the internal performance fluctuations of the DUT
-and the genuine noise of the SUT.
-
-A simple model of SUT performance consists of an idealized noiseless performance,
-and additional noise effects.
-For a specific SUT, the noiseless performance is assumed to be constant,
-with all observed performance variations being attributed to noise.
-The impact of the noise can vary in time, sometimes wildly,
-even within a single trial.
-The noise can sometimes be negligible, but frequently
-it lowers the observed SUT performance as observed in trial results.
-
-In this model, the SUT does not have a single performance value; it has a spectrum.
-One end of the spectrum is the idealized noiseless performance value,
-the other end can be called a noiseful performance.
-In practice, trial results
-close to the noiseful end of the spectrum happen only rarely.
-The worse the performance value is, the more rarely it is seen in a trial.
-Therefore, the extreme noiseful end of the SUT spectrum is not observable
-among trial results.
-Also, the extreme noiseless end of the SUT spectrum
-is unlikely to be observable, this time because some small noise effects
-are likely to occur multiple times during a trial.
-
-Unless specified otherwise, this document's focus is
-on the potentially observable ends of the SUT performance spectrum,
-as opposed to the extreme ones.
-
-When focusing on the DUT, the benchmarking effort should ideally aim
-to eliminate only the SUT noise from SUT measurements.
-However,
-this is currently not feasible in practice, as there are no realistic enough
-models available to distinguish SUT noise from DUT fluctuations,
-based on the author's experience and available literature.
-
-Assuming a well-constructed SUT, the DUT is likely its
-primary performance bottleneck.
-In this case, we can define the DUT's
-ideal noiseless performance as the noiseless end of the SUT performance spectrum,
-especially for throughput.
-However, other performance metrics, such as latency,
-may require additional considerations.
-
-Note that by this definition, DUT noiseless performance
-also minimizes the impact of DUT fluctuations, as much as realistically possible
-for a given trial duration.
-
-This document aims to solve the DUT in SUT problem
-by estimating the noiseless end of the SUT performance spectrum
-using a limited number of trial results.
-
-Any improvements to the throughput search algorithm, aimed at better
-dealing with software networking SUT and DUT setup, should employ
-strategies recognizing the presence of SUT noise, allowing the discovery of
-(proxies for) DUT noiseless performance
-at different levels of sensitivity to SUT noise.
-
-## Repeatability and Comparability
-
-[RFC2544] does not suggest repeating the throughput search.
-And from just one
-discovered throughput value, it cannot be determined how repeatable that value is.
-Poor repeatability then leads to poor comparability,
-as different benchmarking teams may obtain varying throughput values
-for the same SUT, exceeding the expected differences from search precision.
-
-[RFC2544] throughput requirements (60 seconds trial and
-no tolerance of a single frame loss) affect the throughput results
-in the following way.
-The SUT behavior close to the noiseful end of its performance spectrum
-consists of rare occasions of significantly low performance,
-but the long trial duration makes those occasions not so rare on the trial level.
-Therefore, the binary search results tend to wander away from the noiseless end
-of SUT performance spectrum, more frequently and more widely than shorter
-trials would, thus causing poor throughput repeatability.
-
-The repeatability problem can be addressed by defining a search procedure
-that identifies a consistent level of performance,
-even if it does not meet the strict definition of throughput in [RFC2544].
-
-According to the SUT performance spectrum model, better repeatability
-will be at the noiseless end of the spectrum.
-Therefore, solutions to the DUT in SUT problem
-will help also with the repeatability problem.
-
-Conversely, any alteration to [RFC2544] throughput search
-that improves repeatability should be considered
-as less dependent on the SUT noise.
-
-An alternative option is to simply run a search multiple times, and report some
-statistics (e.g. average and standard deviation).
-This can be used
-for a subset of tests deemed more important,
-but it makes the search duration problem even more pronounced.
-
-## Throughput with Non-Zero Loss
-
-[RFC1242] (section 3.17) defines throughput as:
- The maximum rate at which none of the offered frames
- are dropped by the device.
-
-Then, it says:
- Since even the loss of one frame in a
- data stream can cause significant delays while
- waiting for the higher level protocols to time out,
- it is useful to know the actual maximum data
- rate that the device can support.
-
-However, many benchmarking teams accept a small,
-non-zero loss ratio as the goal for their load search.
-
-Motivations are many:
-
-- Modern protocols tolerate frame loss better,
- compared to the time when [RFC1242] and [RFC2544] were specified.
-
-- Trials nowadays send way more frames within the same duration,
- increasing the chance of a small SUT performance fluctuation
- being enough to cause frame loss.
-
-- Small bursts of frame loss caused by noise have otherwise smaller impact
- on the average frame loss ratio observed in the trial,
- as during other parts of the same trial the SUT may work more closely
- to its noiseless performance, thus perhaps lowering the trial loss ratio
- below the goal loss ratio value.
-
-- If an approximation of the SUT noise impact on the trial loss ratio is known,
- it can be set as the goal loss ratio.
-
-Regardless of the validity of all similar motivations,
-support for non-zero loss goals makes any search algorithm more user-friendly.
-[RFC2544] throughput is not user-friendly in this regard.
-
-Furthermore, allowing users to specify multiple loss ratio values,
-and enabling a single search to find all relevant bounds,
-significantly enhances the usefulness of the search algorithm.
-
-Searching for multiple search goals also helps to describe the SUT performance
-spectrum better than the result of a single search goal.
-For example, the repeated wide gap between zero and non-zero loss loads
-indicates the noise has a large impact on the observed performance,
-which is not evident from a single goal load search procedure result.
-
-It is easy to modify the vanilla bisection to find a lower bound
-for the intended load that satisfies a non-zero goal loss ratio.
-But it is not that obvious how to search for multiple goals at once,
-hence the support for multiple search goals remains a problem.
-
-## Inconsistent Trial Results
-
-While performing throughput search by executing a sequence of
-measurement trials, there is a risk of encountering inconsistencies
-between trial results.
-
-The plain bisection never encounters inconsistent trials.
-But [RFC2544] hints about the possibility of inconsistent trial results,
-in two places in its text.
-The first place is section 24, where full trial durations are required,
-presumably because they can be inconsistent with the results
-from shorter trial durations.
-The second place is section 26.3, where two successive zero-loss trials
-are recommended, presumably because after one zero-loss trial
-there can be a subsequent inconsistent non-zero-loss trial.
-
-Examples include:
-
-- A trial at the same load (same or different trial duration) results
- in a different trial loss ratio.
-- A trial at a higher load (same or different trial duration) results
- in a smaller trial loss ratio.
-
-Any robust throughput search algorithm needs to decide how to continue
-the search in the presence of such inconsistencies.
-Definitions of throughput in [RFC1242] and [RFC2544] are not specific enough
-to imply a unique way of handling such inconsistencies.
-
-Ideally, there will be a definition of a new quantity which both generalizes
-throughput for non-zero-loss (and other possible repeatability enhancements),
-while being precise enough to force a specific way to resolve trial result
-inconsistencies.
-But until such a definition is agreed upon, the correct way to handle
-inconsistent trial results remains an open problem.
-
-# MLRsearch Specification
-
-This chapter focuses on technical definitions needed for evaluating
-whether a particular test procedure adheres to MLRsearch specification.
-
-For motivations, explanations, and other comments see other chapters.
-
-## MLRsearch Architecture
-
-MLRsearch architecture consists of three main components:
-the manager, the controller, and the measurer.
-For definitions of the components, see the following sections.
-
-The architecture also implies the presence of other components, such as the SUT.
-
-These components can be seen as abstractions present in any testing procedure.
-
-### Measurer
-
-The measurer is the component that performs one trial
-as described in [RFC2544] section 23.
-
-Specifically, one call to the measurer accepts a trial load value
-and trial duration value, performs the trial, and returns
-the measured trial loss ratio, and optionally a different duration value.
-
-It is the responsibility of the measurer to uphold any requirements
-and assumptions present in MLRsearch specification
-(e.g. trial forwarding ratio not being larger than one).
-Implementers have some freedom, for example in the way they deal with
-duplicated frames, or what to return if the tester sent zero frames towards the SUT.
-Implementations are RECOMMENDED to document their behavior
-related to such freedoms in as detailed a way as possible.
-
-Implementations MUST document any deviations from RFC documents,
-for example if the wait time around traffic
-is shorter than what [RFC2544] section 23 specifies.
-
-### Controller
-
-The controller selects trial load and duration values
-to achieve the search goals in the shortest expected time.
-
-The controller calls the measurer multiple times,
-receiving the trial result from each call.
-After the exit condition is met, the controller returns
-the overall search results.
-
-The controller's role in optimizing trial load and duration selection
-distinguishes MLRsearch algorithms from simpler search procedures.
-
-For controller inputs, see later section Controller Inputs.
-For controller outputs, see later section Controller Outputs.
-
-### Manager
-
-The manager is the component that initializes the SUT, the traffic generator
-(tester in [RFC2544] terminology), the measurer and the controller
-with intended configurations.
-It then calls the controller once, and receives its outputs.
-
-The manager is also responsible for creating reports in the appropriate format,
-based on information in controller outputs.
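-
-The division of responsibilities above can be illustrated with a minimal
-sketch. The class and method names below are hypothetical, not taken from
-any published MLRsearch library; they only show which component calls which.
-
-~~~ python
-# Illustrative sketch only; names and signatures are not part of the specification.
-from dataclasses import dataclass
-from typing import Protocol
-
-
-@dataclass
-class TrialResult:
-    """What the measurer returns from one trial."""
-    intended_load: float  # e.g. frames per second, single interface
-    duration: float       # seconds, as returned by the measurer
-    loss_ratio: float     # trial loss ratio, 0.0 to 1.0
-
-
-class Measurer(Protocol):
-    """Performs one trial as described in RFC 2544 section 23."""
-    def measure(self, intended_load: float, duration: float) -> TrialResult: ...
-
-
-class Controller(Protocol):
-    """Selects loads and durations, calls the measurer, returns search results."""
-    def search(self, measurer: Measurer) -> dict: ...
-
-
-def manager(measurer: Measurer, controller: Controller) -> dict:
-    """Initialize components, call the controller once, report its outputs."""
-    search_result = controller.search(measurer)
-    # Creating the report (units, formatting) is the manager's responsibility.
-    return search_result
-~~~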
-
-## Units
-
-The specification deals with physical quantities, so it is assumed
-each numeric value is accompanied by an appropriate physical unit.
-
-The specification does not state which unit is appropriate,
-but implementations MUST make it explicit which unit is used
-for each value provided or received by the user.
-
-For example, load quantities (including the conditional throughput)
-returned by the controller are defined to be based on single-interface
-(unidirectional) loads.
-For bidirectional traffic, users are likely
-to expect bidirectional throughput quantities, so the manager is responsible
-for making its report clear.
-
-## SUT
-
-As defined in [RFC2285]:
-The collective set of network devices to which stimulus is offered
-as a single entity and response measured.
-
-## Trial
-
-A trial is the part of the test described in [RFC2544] section 23.
-
-### Trial Load
-
-The trial load is the intended constant load for a trial.
-
-Load is the quantity implied by Constant Load of [RFC1242],
-Data Rate of [RFC2544] and Intended Load of [RFC2285].
-All three specify this value applies to one (input or output) interface.
-
-### Trial Duration
-
-Trial duration is the intended duration of the traffic for a trial.
-
-In general, this quantity does not include any preparation nor waiting
-described in section 23 of [RFC2544].
-
-However, the measurer MAY return a duration value that deviates
-from the intended duration.
-This feature can be beneficial for users
-who wish to manage the overall search duration,
-rather than solely the traffic portion of it.
-The manager MUST report
-how the measurer computes the returned duration values in that case.
-
-### Trial Forwarding Ratio
-
-The trial forwarding ratio is a dimensionless floating point value
-that ranges from 0.0 to 1.0, inclusive.
-It is calculated by dividing the number of frames
-successfully forwarded by the SUT
-by the total number of frames expected to be forwarded during the trial.
-
-Note that, contrary to loads, frame counts used to compute
-trial forwarding ratio are aggregates over all SUT output ports.
-
-Questions around the correct number of frames
-that should have been forwarded are outside the scope of this document,
-e.g. what the measurer should return when it detects
-that the offered load differs significantly from the intended load.
-
-### Trial Loss Ratio
-
-The trial loss ratio is equal to one minus the trial forwarding ratio.
-
-### Trial Forwarding Rate
-
-The trial forwarding rate is a derived quantity, calculated by
-multiplying the trial load by the trial forwarding ratio.
-
-It is important to note that while similar, this quantity is not identical
-to the Forwarding Rate as defined in [RFC2285] section 3.6.1,
-as the latter is specific to one output interface,
-whereas the trial forwarding ratio is based
-on frame counts aggregated over all SUT output interfaces.
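-
-As an illustration (not part of the specification), the three per-trial
-quantities above can be computed as in the following sketch; the numbers
-are made up for this example.
-
-~~~ python
-# Hypothetical trial: 1,000,000 fps intended per interface for 60 s,
-# 120,000,000 frames expected to be forwarded in aggregate over all
-# SUT output ports, 119,940,000 actually forwarded.
-expected_frames = 120_000_000
-forwarded_frames = 119_940_000
-trial_load = 1_000_000.0  # frames per second, one interface
-
-trial_forwarding_ratio = forwarded_frames / expected_frames  # 0.9995
-trial_loss_ratio = 1.0 - trial_forwarding_ratio              # 0.0005
-trial_forwarding_rate = trial_load * trial_forwarding_ratio  # 999500.0 fps
-
-print(trial_forwarding_ratio, trial_loss_ratio, trial_forwarding_rate)
-~~~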
-
-## Traffic profile
-
-Any other specifics (besides trial load and trial duration)
-the measurer needs in order to perform the trial
-are understood as a composite called the traffic profile.
-All its attributes are assumed to be constant during the search,
-and the composite is configured on the measurer by the manager
-before the search starts.
-
-The traffic profile is REQUIRED by [RFC2544]
-to contain some specific quantities, for example frame size.
-Several more specific quantities may be RECOMMENDED.
-
-Depending on SUT configuration, e.g. when testing specific protocols,
-additional values need to be included in the traffic profile
-and in the test report.
-See other IETF documents.
-
-## Search Goal
-
-The search goal is a composite consisting of several attributes,
-some of them are required.
-Implementations are free to add their own attributes.
-
-A particular set of attribute values is called a search goal instance.
-
-Subsections list all required attributes and one recommended attribute.
-Each subsection contains a short informal description,
-but see other chapters for more in-depth explanations.
-
-The meaning of the attributes is formally given only by their effect
-on the controller output attributes (defined later in section Search Result).
-
-Informally, later chapters give additional intuitions and examples
-to the search goal attribute values.
-Later chapters also give motivation to formulas of computation of the outputs.
-
-### Goal Final Trial Duration
-
-A threshold value for trial durations.
-This attribute is REQUIRED, and the value MUST be positive.
-
-Informally, MLRsearch is allowed to perform trials shorter than this,
-but results from such short trials have only limited impact on search results.
-
-The full relation needs definitions given in later subsections.
-But for example, the conditional throughput
-(definition in subsection Conditional Throughput)
-for this goal will be computed only from trial results
-from trials at least as long as this.
-
-### Goal Duration Sum
-
-A threshold value for a particular sum of trial durations.
-This attribute is REQUIRED, and the value MUST be positive.
-
-This uses the duration values returned by the measurer.
-
-Informally, even when looking only at trials done at this goal's
-final trial duration, MLRsearch may spend up to this time measuring
-the same load value.
-If the goal duration sum is larger than
-the goal final trial duration, it means multiple trials need to be measured
-at the same load.
-
-### Goal Loss Ratio
-
-A threshold value for trial loss ratios.
-REQUIRED attribute, MUST be non-negative and smaller than one.
-
-Informally, if a load causes too many trials with trial loss ratios
-larger than this, the conditional throughput for this goal
-will be smaller than that load.
-
-### Goal Exceed Ratio
-
-A threshold value for a particular ratio of duration sums.
-REQUIRED attribute, MUST be non-negative and smaller than one.
-
-The duration sum values come from the duration values returned by the measurer.
-
-Informally, the impact of lossy trials is controlled by this value.
-The full relation needs definitions given in later subsections.
-
-But for example, the definition of the conditional throughput
-(given later in subsection Conditional Throughput)
-refers to a q-value for a quantile when selecting
-which trial result gives the conditional throughput.
-The goal exceed ratio acts as the q-value to use there.
-
-Specifically, when the goal exceed ratio is 0.5 and MLRsearch happened
-to use the whole goal duration sum (using full-length trials),
-it means the conditional throughput is the median of trial forwarding rates.
-
-### Goal Width
-
-A value used as a threshold for telling when two trial load values
-are close enough.
-
-RECOMMENDED attribute, positive.
-Implementations without this attribute
-MUST give the manager other ways to control the search exit condition.
-
-Absolute load difference and relative load difference are two popular choices,
-but implementations may choose a different way to specify width.
-
-Informally, this acts as a stopping condition, controlling the precision
-of the search.
-The search stops if every goal has reached its precision.
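-
-The attributes above can be grouped into a single structure.
-The following is a minimal sketch; the attribute names mirror this section
-and are not mandated by the specification.
-
-~~~ python
-# Illustrative sketch only.
-from dataclasses import dataclass
-from typing import Optional
-
-
-@dataclass(frozen=True)
-class SearchGoal:
-    """One search goal instance; example values shown below."""
-    final_trial_duration: float    # seconds, MUST be positive
-    duration_sum: float            # seconds, MUST be positive
-    loss_ratio: float              # MUST be >= 0.0 and < 1.0
-    exceed_ratio: float            # MUST be >= 0.0 and < 1.0
-    width: Optional[float] = None  # RECOMMENDED; here a relative width
-
-
-# Example: a 0.5% loss ratio goal with the recommended 0.5 exceed ratio.
-goal = SearchGoal(
-    final_trial_duration=1.0,
-    duration_sum=21.0,
-    loss_ratio=0.005,
-    exceed_ratio=0.5,
-    width=0.005,
-)
-~~~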
-
-## Controller Inputs
-
-The only REQUIRED input for controller is a set of search goal instances.
-MLRsearch implementations MAY use additional input parameters for the controller.
-
-The order of instances SHOULD NOT have a big impact on controller outputs,
-but MLRsearch implementations MAY base their behavior on the order
-of search goal instances.
-
-The search goal instances SHOULD NOT be identical.
-MLRsearch implementation MAY allow identical instances.
-
-## Goal Result
-
-Before defining the output of the controller,
-it is useful to define what the goal result is.
-
-The goal result is a composite object consisting of several attributes.
-A particular set of attribute values is called a goal result instance.
-
-Any goal result instance can be either regular or irregular.
-MLRsearch specification puts requirements on regular goal result instances.
-Any instance that does not meet the requirements is deemed irregular.
-
-Implementations are free to define their own irregular goal results,
-but the manager MUST report them clearly as not regular according to this section.
-
-All attribute values in one goal result instance
-are related to a single search goal instance,
-referred to as the given search goal.
-
-Some of the attributes of a regular goal result instance are required,
-some are recommended, implementations are free to add their own.
-
-The subsections define two required and one optional attribute
-for a regular goal result.
-
-A typical irregular result is when all trials at the maximal offered load
-have zero loss, as the relevant upper bound does not exist in that case.
-
-### Relevant Upper Bound
-
-The relevant upper bound is the smallest intended load value that is classified
-at the end of the search as an upper bound (see Appendix A)
-for the given search goal.
-This is a REQUIRED attribute.
-
-Informally, this is the smallest intended load that failed to uphold
-all the requirements of the given search goal, mainly the goal loss ratio
-in combination with the goal exceed ratio.
-
-### Relevant Lower Bound
-
-The relevant lower bound is the largest intended load value
-among those smaller than the relevant upper bound
-that got classified at the end of the search
-as a lower bound (see Appendix A) for the given search goal.
-This is a REQUIRED attribute.
-
-For a regular goal result, the distance between the relevant lower bound
-and the relevant upper bound MUST NOT be larger than the goal width,
-if the implementation offers width as a goal attribute.
-
-Informally, this is the largest intended load that managed to uphold
-all the requirements of the given search goal, mainly the goal loss ratio
-in combination with the goal exceed ratio, while not being larger
-than the relevant upper bound.
-
-### Conditional Throughput
-
-The conditional throughput (see Appendix B)
-as evaluated at the relevant lower bound of the given search goal
-at the end of the search.
-This is a RECOMMENDED attribute.
-
-Informally, this is a typical forwarding rate expected to be seen
-at the relevant lower bound of the given search goal.
-But frequently just a conservative estimate thereof,
-as MLRsearch implementations tend to stop gathering more data
-as soon as they confirm the result cannot get worse than this estimate
-within the goal duration sum.
-
-## Search Result
-
-The search result is a single composite object
-that maps each search goal to a corresponding goal result.
-
-In other words, search result is an unordered list of key-value pairs,
-where no two pairs contain equal keys.
-The key is a search goal instance, acting as the given search goal
-for the goal result instance in the value portion of the key-value pair.
-
-The search result (as a mapping)
-MUST map from all the search goals present in the controller input.
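-
-A sketch of how a regular goal result and the search result mapping could be
-represented follows. The names are illustrative; the specification only
-constrains which attributes have to be present.
-
-~~~ python
-# Illustrative sketch only.
-from dataclasses import dataclass
-from typing import Dict, Optional
-
-
-@dataclass(frozen=True)
-class GoalResult:
-    """Regular goal result for one given search goal."""
-    relevant_upper_bound: float  # REQUIRED, an intended load value
-    relevant_lower_bound: float  # REQUIRED, an intended load value
-    conditional_throughput: Optional[float] = None  # RECOMMENDED
-
-
-# The search result maps each search goal instance to its goal result.
-# "SearchGoal" refers to a hashable goal structure such as the earlier sketch.
-SearchResult = Dict["SearchGoal", GoalResult]
-~~~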
-
-## Controller Outputs
-
-The search result is the only REQUIRED output
-returned from the controller to the manager.
-
-MLRsearch implementation MAY return additional data in the controller output.
-
-# Further Explanations
-
-This chapter focuses on intuitions and motivations
-and skips over some important details.
-
-Familiarity with the MLRsearch specification is not required here,
-so this chapter can act as an introduction.
-For example, this chapter starts talking about the tightest lower bounds
-before it is ready to talk about the relevant lower bound from the specification.
-
-## MLRsearch Versions
-
-The MLRsearch algorithm has been developed in a code-first approach,
-a Python library has been created, debugged, and used in production
-before the first descriptions (even informal) were published.
-In fact, multiple versions of the library were used in the production
-over the past few years, and later code was usually not compatible
-with earlier descriptions.
-
-The code in (any version of) MLRsearch library fully determines
-the search process (for given configuration parameters),
-leaving no space for deviations.
-MLRsearch, as a name for a broad class of possible algorithms,
-leaves plenty of space for future improvements, at the cost
-of poor comparability of results of different MLRsearch implementations.
-
-There are two competing needs.
-There is the need for standardization in areas critical to comparability.
-There is also the need to allow flexibility for implementations
-to innovate and improve in other areas.
-This document defines the MLRsearch specification
-in a manner that aims to fairly balance both needs.
-
-## Exit Condition
-
-[RFC2544] prescribes that after performing one trial at a specific offered load,
-the next offered load should be larger or smaller, based on frame loss.
-
-The usual implementation uses binary search.
-Here a lossy trial becomes
-a new upper bound, a lossless trial becomes a new lower bound.
-The span of values between (including both) the tightest lower bound
-and the tightest upper bound forms an interval of possible results,
-and after each trial the width of that interval halves.
-
-Usually the binary search implementation tracks only the two tightest bounds,
-simply calling them bounds.
-But the old values still remain valid bounds,
-just not as tight as the new ones.
-
-After some number of trials, the tightest lower bound becomes the throughput.
-[RFC2544] does not specify when (if ever) the search should stop.
-
-MLRsearch library introduces a concept of goal width.
-The search stops
-when the distance between the tightest upper bound and the tightest lower bound
-is smaller than a user-configured value, called goal width from now on.
-In other words, the interval width at the end of the search
-has to be no larger than the goal width.
-
-This goal width value therefore determines the precision of the result.
-As MLRsearch specification requires a particular structure of the result,
-the result itself does contain enough information to determine its precision,
-thus it is not required to report the goal width value.
-
-This allows MLRsearch implementations to use exit conditions
-different from goal width.
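-
-A sketch of such a stopping check follows, assuming the goal width is
-expressed as a relative difference of loads; an implementation could
-equally use an absolute difference.
-
-~~~ python
-def width_reached(lower_bound: float, upper_bound: float, goal_width: float) -> bool:
-    """Return True if the interval between the tightest bounds is narrow enough.
-
-    Here goal_width is interpreted as (upper - lower) / upper;
-    other width definitions are equally valid.
-    """
-    return (upper_bound - lower_bound) / upper_bound <= goal_width
-
-
-# Example: bounds of 9.95 and 10.0 Mfps with a 1% goal width -> search can stop.
-print(width_reached(9_950_000.0, 10_000_000.0, 0.01))  # True
-~~~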
-
-## Load Classification
-
-MLRsearch keeps the basic logic of binary search (tracking tightest bounds,
-measuring at the middle), perhaps with minor technical clarifications.
-The algorithm chooses an intended load (as opposed to the offered load),
-the interval between bounds does not need to be split
-exactly into two equal halves,
-and the final reported structure specifies both bounds.
-
-The biggest difference is that to classify a load
-as an upper or lower bound, MLRsearch may need more than one trial
-(depending on configuration options) to be performed at the same intended load.
-
-As a consequence, even if a load already has a few trial results,
-it may still be classified as undecided, neither a lower bound nor an upper bound.
-
-An explanation of the classification logic is given in the next chapter,
-as it relies heavily on other sections of this chapter.
-
-For repeatability and comparability reasons, it is important that
-given a set of trial results, all implementations of MLRsearch
-classify the load equivalently.
-
-## Loss Ratios
-
-The next difference is in the goals of the search.
-[RFC2544] has a single goal,
-based on classifying full-length trials as either lossless or lossy.
-
-As the name suggests, MLRsearch can search for multiple goals,
-differing in their loss ratios.
-The precise definition of the goal loss ratio will be given later.
-The [RFC2544] throughput goal then simply becomes a zero goal loss ratio.
-Different goals also may have different goal widths.
-
-A set of trial results for one specific intended load value
-can classify the load as an upper bound for some goals, but a lower bound
-for some other goals, and undecided for the rest of the goals.
-
-Therefore, the load classification depends not only on trial results,
-but also on the goal.
-The overall search procedure becomes more complicated
-(compared to binary search with a single goal),
-but most of the complications do not affect the final result,
-except for one phenomenon, loss inversion.
-
-## Loss Inversion
-
-In [RFC2544] throughput search using bisection, any load with a lossy trial
-becomes a hard upper bound, meaning every subsequent trial has a smaller
-intended load.
-
-But in MLRsearch, a load that is classified as an upper bound for one goal
-may still be a lower bound for another goal, and due to the other goal
-MLRsearch will probably perform trials at even higher loads.
-What to do when all such higher load trials happen to have zero loss?
-Does it mean the earlier upper bound was not real?
-Does it mean the later lossless trials are not considered a lower bound?
-Surely we do not want to have an upper bound at a load smaller than a lower bound.
-
-MLRsearch is conservative in these situations.
-The upper bound is considered real, and the lossless trials at higher loads
-are considered to be a coincidence, at least when computing the final result.
-
-This is formalized using new notions, the relevant upper bound and
-the relevant lower bound.
-Load classification is still based just on the set of trial results
-at a given intended load (trials at other loads are ignored),
-making it possible to have a lower load classified as an upper bound,
-and a higher load classified as a lower bound (for the same goal).
-The relevant upper bound (for a goal) is the smallest load classified
-as an upper bound.
-But the relevant lower bound is not simply
-the largest among lower bounds.
-It is the largest load among loads
-that are lower bounds while also being smaller than the relevant upper bound.
-
-With these definitions, the relevant lower bound is always smaller
-than the relevant upper bound (if both exist), and the two relevant bounds
-are used analogously as the two tightest bounds in the binary search.
-When they are less than the goal width apart,
-the relevant bounds are used in the output.
-
-One consequence is that every trial result can have an impact on the search result.
-That means if your SUT (or your traffic generator) needs a warmup,
-be sure to warm it up before starting the search.
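-
-Assuming each examined load has already been classified for the given goal,
-the relevant bounds described above could be derived as in this sketch
-(the classification strings are hypothetical, used only for illustration):
-
-~~~ python
-def relevant_bounds(classified_loads):
-    """classified_loads maps intended load to 'upper', 'lower' or 'undecided'.
-
-    Returns (relevant_lower_bound, relevant_upper_bound); either may be None.
-    """
-    uppers = [load for load, cls in classified_loads.items() if cls == "upper"]
-    relevant_upper = min(uppers) if uppers else None
-    lowers = [
-        load for load, cls in classified_loads.items()
-        if cls == "lower" and (relevant_upper is None or load < relevant_upper)
-    ]
-    relevant_lower = max(lowers) if lowers else None
-    return relevant_lower, relevant_upper
-
-
-# Loss inversion example: the lower bound above an upper bound is ignored.
-print(relevant_bounds({1.0: "lower", 2.0: "upper", 3.0: "lower"}))  # (1.0, 2.0)
-~~~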
-
-## Exceed Ratio
-
-The idea of performing multiple trials at the same load comes from
-a model where some trial results (those with high loss) are affected
-by infrequent effects, causing poor repeatability of [RFC2544] throughput results.
-See the discussion about noiseful and noiseless ends
-of the SUT performance spectrum.
-Stable results are closer to the noiseless end of the SUT performance spectrum,
-so MLRsearch may need to allow some frequency of high-loss trials
-to ignore the rare but big effects near the noiseful end.
-
-MLRsearch can do such trial result filtering, but it needs
-a configuration option to tell it how frequent the infrequent big loss can be.
-This option is called the exceed ratio.
-It tells MLRsearch what ratio of trials
-(more exactly what ratio of trial seconds) can have a trial loss ratio
-larger than the goal loss ratio and still be classified as a lower bound.
-Zero exceed ratio means all trials have to have a trial loss ratio
-equal to or smaller than the goal loss ratio.
-
-For explainability reasons, the RECOMMENDED value for exceed ratio is 0.5,
-as it simplifies some later concepts by relating them to the concept of median.
-
-## Duration Sum
-
-When more than one trial is needed to classify a load,
-MLRsearch also needs something that controls the number of trials needed.
-Therefore, each goal also has an attribute called duration sum.
-
-The meaning of a goal duration sum is that when a load has trials
-(at full trial duration, details later)
-whose trial durations when summed up give a value at least this long,
-the load is guaranteed to be classified as an upper bound or a lower bound
-for the goal.
-
-As the duration sum has a big impact on the overall search duration,
-and [RFC2544] prescribes wait intervals around trial traffic,
-the MLRsearch algorithm is allowed to sum durations that are different
-from the actual trial traffic durations.
-
-## Short Trials
-
-MLRsearch requires each goal to specify its final trial duration.
-Full-length trial is a shorter name for a trial whose intended trial duration
-is equal to (or longer than) the goal final trial duration.
-
-Section 24 of [RFC2544] already anticipates possible time savings
-when short trials (shorter than full-length trials) are used.
-Full-length trials are the opposite of short trials,
-so they may also be called long trials.
-
-Any MLRsearch implementation may include its own configuration options
-which control when and how MLRsearch chooses to use shorter trial durations.
-
-For explainability reasons, when an exceed ratio of 0.5 is used,
-it is recommended for the goal duration sum to be an odd multiple
-of the full trial durations, so conditional throughput becomes identical to
-a median of a particular set of forwarding rates.
-
-The presence of shorter trial results complicates the load classification logic.
-Full details are given later.
-In short, results from short trials
-may cause a load to be classified as an upper bound.
-This may cause loss inversion, and thus lower the relevant lower bound
-(below what would classification say when considering full-length trials only).
-
-For explainability reasons, it is RECOMMENDED that users choose configurations
-that guarantee all trials have the same length.
-Alas, such configurations are usually not compliant with [RFC2544] requirements,
-or not time-saving enough.
-
-## Conditional Throughput
-
-As testing equipment takes the intended load as an input parameter
-for a trial measurement, any load search algorithm needs to deal
-with intended load values internally.
-
-But in the presence of goals with a non-zero loss ratio, the intended load
-usually does not match the user's intuition of what a throughput is.
-The forwarding rate (as defined in [RFC2285] section 3.6.1) is better,
-but it is not obvious how to generalize it
-for loads with multiple trial results and a non-zero goal loss ratio.
-
-MLRsearch defines one such generalization, called the conditional throughput.
-It is the forwarding rate from one of the trials performed at the load
-in question.
-Specification of which trial exactly is quite technical,
-see the specification and Appendix B.
-
-Conditional throughput is partially related to load classification.
-If a load is classified as a lower bound for a goal,
-the conditional throughput can be calculated,
-and guaranteed to show an effective loss ratio
-no larger than the goal loss ratio.
-
-While the conditional throughput gives more intuitive-looking values
-than the relevant lower bound, especially for non-zero goal loss ratio values,
-the actual definition is more complicated than the definition of the relevant
-lower bound.
-In the future, other intuitive values may become popular,
-but they are unlikely to supersede the definition of the relevant lower bound
-as the most fitting value for comparability purposes,
-therefore the relevant lower bound remains a required attribute
-of the goal result structure, while the conditional throughput is only optional.
-
-Note that, comparing the best and worst cases, the same relevant lower bound value
-may result in conditional throughput values that differ by up to the goal loss ratio.
-Therefore it is rarely needed to set the goal width (if expressed
-as the relative difference of loads) below the goal loss ratio.
-In other words, setting the goal width below the goal loss ratio
-may cause the conditional throughput for a larger loss ratio to become smaller
-than a conditional throughput for a goal with a smaller goal loss ratio,
-which is counter-intuitive, considering they come from the same search.
-Therefore it is RECOMMENDED to set the goal width to a value no smaller
-than the goal loss ratio.
-
-## Search Time
-
-MLRsearch was primarily developed to reduce the time
-required to determine a throughput, either the [RFC2544] compliant one,
-or some generalization thereof.
-The art of achieving short search times
-is mainly in the smart selection of intended loads (and intended durations)
-for the next trial to perform.
-
-While there is an indirect impact of the load selection on the reported values,
-in practice such impact tends to be small,
-even for SUTs with quite a broad performance spectrum.
-
-A typical example of two approaches to load selection leading to different
-relevant lower bounds is when the interval is split in a very uneven way.
-Any implementation choosing loads very close to the current relevant lower bound
-is quite likely to eventually stumble upon a trial result
-with poor performance (due to SUT noise).
-For an implementation choosing loads very close
-to the current relevant upper bound, this is unlikely,
-as it examines more loads that can see a performance
-close to the noiseless end of the SUT performance spectrum.
-
-However, as even splits optimize search duration at a given precision,
-MLRsearch implementations that prioritize minimizing search time
-are unlikely to suffer from any such bias.
-
-Therefore, this document remains quite vague on load selection
-and other optimization details, and configuration attributes related to them.
-Assuming users prefer libraries that achieve short overall search time,
-the definition of the relevant lower bound
-should be strict enough to ensure result repeatability
-and comparability between different implementations,
-while not restricting future implementations much.
-
-Sadly, different implementations may exhibit their sweet spot of
-the best repeatability for a given search duration
-at different goal attribute values, especially concerning
-any optional goal attributes such as the initial trial duration.
-Thus, this document does not comment much on which configurations
-are good for comparability between different implementations.
-For comparability between different SUTs using the same implementation,
-refer to configurations recommended by that particular implementation.
-
-## [RFC2544] compliance
-
-The following search goal ensures unconditional compliance with
-[RFC2544] throughput search procedure:
-
-- Goal loss ratio: zero.
-
-- Goal final trial duration: 60 seconds.
-
-- Goal duration sum: 60 seconds.
-
-- Goal exceed ratio: zero.
-
-The presence of other search goals does not affect the compliance
-of this goal result.
-The relevant lower bound and the conditional throughput are in this case
-equal to each other, and the value is the [RFC2544] throughput.
-
-If the 60 second quantity is replaced by a smaller quantity in both attributes,
-the conditional throughput is still conditionally compliant with
-[RFC2544] throughput.
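-
-Expressed with the attribute names used in this document, the goal above
-could be configured as in this sketch (the dictionary form is only
-illustrative, not an API of any particular implementation):
-
-~~~ python
-# Unconditionally RFC 2544 compliant goal, as plain attribute values.
-rfc2544_goal = dict(
-    final_trial_duration=60.0,  # seconds
-    duration_sum=60.0,          # seconds, so a single full-length trial suffices
-    loss_ratio=0.0,
-    exceed_ratio=0.0,
-)
-~~~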
-
-# Logic of Load Classification
-
-This chapter continues with explanations,
-but this time more precise definitions are needed
-for readers to follow the explanations.
-The definitions here are wordy, implementers should read the specification
-chapter and appendices for more concise definitions.
-
-The two related areas of focus in this chapter are load classification
-and the conditional throughput, starting with the latter.
-
-The section Performance Spectrum contains definitions
-needed to gain insight into what conditional throughput means.
-The rest of the subsections discuss load classification,
-they do not refer to Performance Spectrum, only to a few duration sums.
-
-For load classification, it is useful to define good and bad trials.
-A trial is called bad (according to a goal) if its trial loss ratio
-is larger than the goal loss ratio.
-The trial that is not bad is called good.
-
-## Performance Spectrum
-
-There are several equivalent ways to explain
-the conditional throughput computation.
-One of the ways relies on an object called the performance spectrum.
-First, two heavy definitions are needed.
-
-Take an intended load value, a trial duration value, and a finite set
-of trial results, all trials measured at that load value and duration value.
-The performance spectrum is the function that maps
-any non-negative real number into a sum of trial durations among all trials
-in the set that have that number as their forwarding rate,
-e.g. mapping to zero if no trial has that particular forwarding rate.
-
-A related function, defined if there is at least one trial in the set,
-is the performance spectrum divided by the sum of the durations
-of all trials in the set.
-That function is called the performance probability function, as it satisfies
-all the requirements for a probability mass function
-of a discrete probability distribution,
-the one-dimensional random variable being the trial forwarding rate.
-
-These functions are related to the SUT performance spectrum,
-as sampled by the trials in the set.
-
-As for any other probability function, we can talk about percentiles
-of the performance probability function, including the median.
-The conditional throughput will be one such quantile value
-for a specifically chosen set of trials.
-
-Take a set of all full-length trials performed at the relevant lower bound,
-sorted by decreasing forwarding rate.
-The sum of the durations of those trials
-may be less than the goal duration sum, or not.
-If it is less, add an imaginary trial result with zero forwarding rate,
-such that the new sum of durations is equal to the goal duration sum.
-This is the set of trials to use.
-The q-value for the quantile
-is the goal exceed ratio.
-If the quantile touches two trials,
-the larger forwarding rate (from the trial result sorted earlier) is used.
-The resulting quantity is the conditional throughput of the goal in question.
-
-First example.
-For zero exceed ratio, when goal duration sum has been reached.
-The conditional throughput is the smallest forwarding rate among the trials.
-
-Second example.
-For zero exceed ratio, when goal duration sum has not been reached yet.
-Due to the missing duration sum, the worst case may still happen,
-so the conditional throughput is zero.
-This is not reported to the user,
-as this load cannot become the relevant lower bound yet.
-
-Third example.
-Exceed ratio 50%, goal duration sum two seconds,
-one trial present with the duration of one second and zero loss.
-The imaginary trial is added with the duration
-of one second and zero forwarding rate.
-The median would touch both trials, so the conditional throughput
-is the forwarding rate of the one non-imaginary trial.
-As that had zero loss, the value is equal to the offered load.
-
-Note that Appendix B does not take into account short trial results.
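-
-A sketch of the computation just described follows, also ignoring short
-trial results; names and structure are illustrative only.
-
-~~~ python
-def conditional_throughput(trials, goal_duration_sum, goal_exceed_ratio):
-    """trials: list of (forwarding_rate, duration) pairs from full-length
-    trials at the relevant lower bound.  Returns the conditional throughput."""
-    # Sort by decreasing forwarding rate.
-    trials = sorted(trials, key=lambda t: t[0], reverse=True)
-    measured = sum(duration for _, duration in trials)
-    # If the goal duration sum is not reached, add an imaginary
-    # zero-forwarding-rate trial covering the missing duration.
-    if measured < goal_duration_sum:
-        trials.append((0.0, goal_duration_sum - measured))
-    whole = sum(duration for _, duration in trials)
-    # The goal exceed ratio acts as the q-value: at most that fraction of
-    # the whole duration may lie below the returned forwarding rate.
-    needed = (1.0 - goal_exceed_ratio) * whole
-    accumulated = 0.0
-    for rate, duration in trials:
-        accumulated += duration
-        if accumulated >= needed:
-            return rate
-    return trials[-1][0]
-
-
-# Third example above: exceed ratio 0.5, goal duration sum 2 s,
-# one 1 s zero-loss trial whose forwarding rate equals the load (100.0 here).
-print(conditional_throughput([(100.0, 1.0)], 2.0, 0.5))  # 100.0
-~~~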
-
-### Summary
-
-While the conditional throughput is a generalization of the forwarding rate,
-its definition is not an obvious one.
-
-Besides the forwarding rate, the other source of intuition
-is the quantile in general, and the median in the recommended case.
-
-In future, different quantities may prove more useful,
-especially when applying to specific problems,
-but currently the conditional throughput is the recommended compromise,
-especially for repeatability and comparability reasons.
-
-## Single Trial Duration
-
-When goal attributes are chosen in such a way that every trial has the same
-intended duration, the load classification is simpler.
-
-The following description looks technical, but it follows the motivation
-of goal loss ratio, goal exceed ratio, and goal duration sum.
-If the sum of the durations of all trials (at the given load)
-is less than the goal duration sum, imagine best case scenario
-(all subsequent trials having zero loss) and worst case scenario
-(all subsequent trials having 100% loss).
-Here we assume there are as many subsequent trials as needed
-to make the sum of all trials equal to the goal duration sum.
-As the exceed ratio is defined just using sums of durations
-(number of trials does not matter), it does not matter whether
-the "subsequent trials" can consist of an integer number of full-length trials.
-
-In either of the two scenarios, we can compute the load exceed ratio
-as the duration sum of bad trials divided by the duration sum of all trials,
-in both cases including the assumed trials.
-
-If even in the best case scenario the load exceed ratio would be larger
-than the goal exceed ratio, the load is an upper bound.
-If even in the worst case scenario the load exceed ratio would not be larger
-than the goal exceed ratio, the load is a lower bound.
-
-Even more specifically.
-Take all trials measured at a given load.
-The sum of the durations of all bad full-length trials is called the bad sum.
-The sum of the durations of all good full-length trials is called the good sum.
-The result of adding the bad sum plus the good sum is called the measured sum.
-The larger of the measured sum and the goal duration sum is called the whole sum.
-The whole sum minus the measured sum is called the missing sum.
-The optimistic exceed ratio is the bad sum divided by the whole sum.
-The pessimistic exceed ratio is the bad sum plus the missing sum,
-that divided by the whole sum.
-If the optimistic exceed ratio is larger than the goal exceed ratio,
-the load is classified as an upper bound.
-If the pessimistic exceed ratio is not larger than the goal exceed ratio,
-the load is classified as a lower bound.
-Else, the load is classified as undecided.
-
-The definition of pessimistic exceed ratio is compatible with the logic in
-the conditional throughput computation, so in this single trial duration case,
-a load is a lower bound if and only if the conditional throughput
-effective loss ratio is not larger than the goal loss ratio.
-If it is larger, the load is either an upper bound or undecided.
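-
-A sketch of this classification follows, assuming all trials were measured
-at the goal final trial duration; the variable names track the definitions
-above, while the function and goal structure are hypothetical.
-
-~~~ python
-from collections import namedtuple
-
-Goal = namedtuple("Goal", "loss_ratio exceed_ratio duration_sum")
-
-
-def classify_load(trials, goal):
-    """trials: list of (trial_loss_ratio, duration) pairs, all full-length.
-    Returns 'lower', 'upper' or 'undecided' for the given goal."""
-    bad_sum = sum(d for lr, d in trials if lr > goal.loss_ratio)
-    good_sum = sum(d for lr, d in trials if lr <= goal.loss_ratio)
-    measured_sum = bad_sum + good_sum
-    whole_sum = max(measured_sum, goal.duration_sum)
-    missing_sum = whole_sum - measured_sum
-    optimistic = bad_sum / whole_sum
-    pessimistic = (bad_sum + missing_sum) / whole_sum
-    if optimistic > goal.exceed_ratio:
-        return "upper"
-    if pessimistic <= goal.exceed_ratio:
-        return "lower"
-    return "undecided"
-
-
-# Example: exceed ratio 0.5, duration sum 3 s; two good 1 s trials already
-# guarantee a lower bound (pessimistic exceed ratio 1/3 <= 0.5).
-print(classify_load([(0.0, 1.0), (0.0, 1.0)], Goal(0.0, 0.5, 3.0)))  # lower
-~~~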
-
-## Short Trial Scenarios
-
-Trials with intended duration smaller than the goal final trial duration
-are called short trials.
-The motivation for load classification logic in the presence of short trials
-is based around a counter-factual case: What would the trial result be
-if a short trial has been measured as a full-length trial instead?
-
-There are three main scenarios where human intuition guides
-the intended behavior of load classification.
-
-False good scenario.
-The user had their reason for not configuring a shorter goal
-final trial duration.
-Perhaps SUT has buffers that may get full at longer
-trial durations.
-Perhaps SUT shows periodic decreases in performance
-the user does not want to be treated as noise.
-In any case, many good short trials may become bad full-length trials
-in the counter-factual case.
-In extreme cases, there are plenty of good short trials and no bad short trials.
-In this scenario, we want the load classification NOT to classify the load
-as a lower bound, despite the abundance of good short trials.
-Effectively, we want the good short trials to be ignored, so they
-do not contribute to comparisons with the goal duration sum.
-
-True bad scenario.
-When there is a frame loss in a short trial,
-the counter-factual full-length trial is expected to lose at least as many
-frames.
-And in practice, bad short trials are rarely turning into
-good full-length trials.
-In extreme cases, there are no good short trials.
-In this scenario, we want the load classification
-to classify the load as an upper bound just based on the abundance
-of short bad trials.
-Effectively, we want the bad short trials
-to contribute to comparisons with the goal duration sum,
-so the load can be classified sooner.
-
-Balanced scenario.
-Some SUTs are quite indifferent to trial duration.
-Performance probability function constructed from short trial results
-is likely to be similar to the performance probability function constructed
-from full-length trial results (perhaps with larger dispersion,
-but without a big impact on the median quantiles overall).
-For a moderate goal exceed ratio value, this may mean there are both
-good short trials and bad short trials.
-This scenario is there just to invalidate a simple heuristic
-of always ignoring good short trials and never ignoring bad short trials.
-That simple heuristic would be too biased.
-Yes, the short bad trials
-are likely to turn into full-length bad trials in the counter-factual case,
-but there is no information on what the good short trials would turn into.
-The only way to decide safely is to do more trials at full length,
-the same as in scenario one.
-
-## Short Trial Logic
-
-MLRsearch picks a particular logic for load classification
-in the presence of short trials, but it is still RECOMMENDED
-to use configurations that imply no short trials,
-so the possible inefficiencies in that logic
-do not affect the result, and the result has better explainability.
-
-With that said, the logic differs from the single trial duration case
-only in different definition of the bad sum.
-The good sum is still the sum across all good full-length trials.
-
-Few more notions are needed for defining the new bad sum.
-The sum of durations of all bad full-length trials is called the bad long sum.
-The sum of durations of all bad short trials is called the bad short sum.
-The sum of durations of all good short trials is called the good short sum.
-One minus the goal exceed ratio is called the inceed ratio.
-The goal exceed ratio divided by the inceed ratio is called the exceed coefficient.
-The good short sum multiplied by the exceed coefficient is called the balancing sum.
-The bad short sum minus the balancing sum is called the excess sum.
-If the excess sum is negative, the bad sum is equal to the bad long sum.
-Otherwise, the bad sum is equal to the bad long sum plus the excess sum.
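-
-Restated as code, the new bad sum computation is the following minimal
-sketch, mirroring the corresponding lines of the Appendix A pseudocode
-(variable names are illustrative only):
-
-~~~ python
-inceed_ratio = 1.0 - goal_exceed_ratio
-exceed_coefficient = goal_exceed_ratio / inceed_ratio
-balancing_sum = good_short_sum * exceed_coefficient
-excess_sum = bad_short_sum - balancing_sum
-# A negative excess sum has no impact.
-bad_sum = bad_long_sum + max(0.0, excess_sum)
-~~~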
-
-Here is how the new definition of the bad sum fares in the three scenarios,
-where the load is close to what the relevant bounds would be
-if only full-length trials were used for the search.
-
-False good scenario.
-If the duration is too short, we expect to see a higher frequency
-of good short trials.
-This could lead to a negative excess sum,
-which has no impact, hence the load classification is given just by
-full-length trials.
-Thus, MLRsearch using too short trials has no detrimental effect
-on result comparability in this scenario.
-But also using short trials does not help with overall search duration,
-probably making it worse.
-
-True bad scenario.
-Settings with a small exceed ratio
-have a small exceed coefficient, so the impact of the good short sum is small,
-and the bad short sum is almost wholly converted into excess sum,
-thus bad short trials have almost as big an impact as full-length bad trials.
-The same conclusion applies to moderate exceed ratio values
-when the good short sum is small.
-Thus, short trials can cause a load to get classified as an upper bound earlier,
-bringing time savings (while not affecting comparability).
-
-Balanced scenario.
-Here excess sum is small in absolute value, as the balancing sum
-is expected to be similar to the bad short sum.
-Once again, full-length trials are needed for final load classification;
-but usage of short trials probably means MLRsearch needed
-a shorter overall search time before selecting this load for measurement,
-thus bringing time savings (while not affecting comparability).
-
-Note that in the presence of short trial results,
-the comparability between the load classification
-and the conditional throughput is only partial.
-The conditional throughput still comes from a good long trial,
-but a load higher than the relevant lower bound may also compute to a good value.
-
-## Longer Trial Durations
-
-If there are trial results with an intended duration larger
-than the goal trial duration, the precise definitions
-in Appendix A and Appendix B treat them in exactly the same way
-as trials with duration equal to the goal trial duration.
-
-But in configurations with moderate (including 0.5) or small
-goal exceed ratio and small goal loss ratio (especially zero),
-bad trials with longer than goal durations may bias the search
-towards the lower load values, as the noiseful end of the spectrum
-gets a larger probability of causing the loss within the longer trials.
-
-For some users, this is an acceptable price
-for increased configuration flexibility
-(perhaps saving time for the related goals),
-so implementations SHOULD allow such configurations.
-Still, users are encouraged to avoid such configurations
-by making all goals use the same final trial duration,
-so their results remain comparable across implementations.
-
-# Addressed Problems
-
-Now that MLRsearch is clearly specified and explained,
-it is possible to summarize how the MLRsearch specification helps with the identified problems.
-
-Here, "multiple trials" is a shorthand for having the goal final trial duration
-significantly smaller than the goal duration sum.
-This results in MLRsearch performing multiple trials at the same load,
-which may not be the case with other configurations.
-
-## Long Test Duration
-
-As shortening the overall search duration is the main motivation
-of MLRsearch library development, the library implements
-multiple improvements on this front, both big and small.
-
-Most implementation details are not constrained by the MLRsearch specification,
-so that future implementations may keep shortening the search duration even more.
-
-One exception is the impact of short trial results on the relevant lower bound.
-While motivated by human intuition, the logic is not straightforward.
-In practice, configurations with only one common trial duration value
-are capable of achieving good overall search time and result repeatability
-without the need to consider short trials.
-
-### Impact of goal attribute values
-
-Among the required goal attributes, the goal duration sum
-remains the best way to get even shorter searches.
-
-Usage of multiple trials can also save time,
-depending on wait times around trial traffic.
-
-The farther the goal exceed ratio is from 0.5 (towards zero or one),
-the less predictable the overall search duration becomes in practice.
-
-Width parameter does not change search duration much in practice
-(compared to other, mainly optional goal attributes).
-
-## DUT in SUT
-
-In practice, using multiple trials and moderate exceed ratios
-often improves result repeatability without increasing the overall search time,
-depending on the specific SUT and DUT characteristics.
-Benefits for separating SUT noise are less clear though,
-as it is not easy to distinguish SUT noise from DUT instability in general.
-
-Conditional throughput has an intuitive meaning when described
-using the performance spectrum, so this is an improvement
-over existing simple (less configurable) search procedures.
-
-Multiple trials can also save time when the noisy end of
-the performance spectrum needs to be examined, e.g. for [RFC9004].
-
-Under some circumstances, testing the same DUT and SUT setup with different
-DUT configurations can give some hints on what part of noise is SUT noise
-and what part is DUT performance fluctuations.
-In practice, both types of noise tend to be too complicated for that analysis.
-
-MLRsearch enables users to search for multiple goals,
-potentially providing more insight at the cost of a longer overall search time.
-However, for a thorough and reliable examination of DUT-SUT interactions,
-it is necessary to employ additional methods beyond black-box benchmarking,
-such as collecting and analyzing DUT and SUT telemetry.
-
-## Repeatability and Comparability
-
-Multiple trials improve repeatability, depending on exceed ratio.
-
-In practice, one-second goal final trial duration with exceed ratio 0.5
-is good enough for modern SUTs.
-However, unless smaller wait times around the traffic part of the trial
-are allowed, too much of the overall search time would be wasted on waiting.
-
-It is not clear whether exceed ratios higher than 0.5 are better
-for repeatability.
-The 0.5 value is still preferred due to explainability using median.
-
-It is possible that the conditional throughput values (with non-zero goal
-loss ratio) are better for repeatability than the relevant lower bound values.
-This is especially true for implementations
-which pick load from a small set of discrete values,
-as that hides small variances in relevant lower bound values
-other implementations may find.
-
-Implementations focusing on shortening the overall search time
-are automatically forced to avoid comparability issues due to load selection,
-as they must prefer even splits wherever possible.
-But this conclusion only holds when the same goals are used.
-Larger adoption is needed before any further claims on comparability
-between MLRsearch implementations can be made.
-
-## Throughput with Non-Zero Loss
-
-Trivially supported by the goal loss ratio attribute.
-
-In practice, usage of non-zero loss ratio values
-improves the result repeatability
-(exactly as expected based on results from simpler search methods).
-
-## Inconsistent Trial Results
-
-MLRsearch is conservative wherever possible.
-This is built into the definition of conditional throughput,
-and into the treatment of short trial results for load classification.
-
-This is consistent with [RFC2544] zero loss tolerance motivation.
-
-If the noiseless part of the SUT performance spectrum is of interest,
-it should be enough to set a small value for the goal final trial duration,
-and perhaps also a large value for the goal exceed ratio.
-
-Implementations may offer other (optional) configuration attributes
-to become less conservative, but currently it is not clear
-what impact that would have on repeatability.
-
-# IANA Considerations
-
-No requests of IANA.
-
-# Security Considerations
-
-Benchmarking activities as described in this memo are limited to
-technology characterization of a DUT/SUT using controlled stimuli in a
-laboratory environment, with dedicated address space and the constraints
-specified in the sections above.
-
-The benchmarking network topology will be an independent test setup and
-MUST NOT be connected to devices that may forward the test traffic into
-a production network or misroute traffic to the test management network.
-
-Further, benchmarking is performed on a "black-box" basis, relying
-solely on measurements observable external to the DUT/SUT.
-
-Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
-benchmarking purposes. Any implications for network security arising
-from the DUT/SUT SHOULD be identical in the lab and in production
-networks.
-
-# Acknowledgements
-
-Some phrases and statements in this document were created
-with help of Mistral AI (mistral.ai).
-
-Many thanks to Alec Hothan of the OPNFV NFVbench project for thorough
-review and numerous useful comments and suggestions.
-
-Special wholehearted gratitude and thanks to the late Al Morton for his
-thorough reviews filled with very specific feedback and constructive
-guidelines. Thank you Al for the close collaboration over the years,
-for your continuous unwavering encouragement full of empathy and
-positive attitude.
-Al, you are dearly missed.
-
-# Appendix A: Load Classification
-
-This is the specification of how to perform the load classification.
-
-Any intended load value can be classified, according to the given search goal.
-
-The algorithm uses (some subsets of) the set of all available trial results
-from trials measured at a given intended load at the end of the search.
-All durations are those returned by the measurer.
-
-The block at the end of this appendix holds pseudocode
-which computes two values, stored in variables named optimistic and pessimistic.
-The pseudocode happens to be a valid Python code.
-
-If both values are computed to be true, the load in question
-is classified as a lower bound according to the given search goal.
-If both values are false, the load is classified as an upper bound.
-Otherwise, the load is classified as undecided.
-
-The pseudocode expects the following variables to hold values as follows:
-
-- goal_duration_sum: The duration sum value of the given search goal.
-
-- goal_exceed_ratio: The exceed ratio value of the given search goal.
-
-- good_long_sum: Sum of durations across trials with trial duration
- at least equal to the goal final trial duration and with a trial loss ratio
- not higher than the goal loss ratio.
-
-- bad_long_sum: Sum of durations across trials with trial duration
- at least equal to the goal final trial duration and with a trial loss ratio
- higher than the goal loss ratio.
-
-- good_short_sum: Sum of durations across trials with trial duration
- shorter than the goal final trial duration and with a trial loss ratio
- not higher than the goal loss ratio.
-
-- bad_short_sum: Sum of durations across trials with trial duration
- shorter than the goal final trial duration and with a trial loss ratio
- higher than the goal loss ratio.
-
-The code works correctly also when there are no trial results at the given load.
-
-~~~ python
-balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
-effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
-effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
-quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
-optimistic = effective_bad_sum <= quantile_duration_sum
-pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
-~~~
-
-# Appendix B: Conditional Throughput
-
-This is the specification of how to compute conditional throughput.
-
-Any intended load value can be used as the basis for the following computation,
-but only the relevant lower bound (at the end of the search)
-leads to the value called the conditional throughput for a given search goal.
-
-The algorithm uses (some subsets of) the set of all available trial results
-from trials measured at a given intended load at the end of the search.
-All durations are those returned by the measurer.
-
-The block at the end of this appendix holds pseudocode
-which computes a value stored as variable conditional_throughput.
-The pseudocode happens to be a valid Python code.
-
-The pseudocode expects the following variables to hold values as follows:
-
-- goal_duration_sum: The duration sum value of the given search goal.
-
-- goal_exceed_ratio: The exceed ratio value of the given search goal.
-
-- good_long_sum: Sum of durations across trials with trial duration
- at least equal to the goal final trial duration and with a trial loss ratio
- not higher than the goal loss ratio.
-
-- bad_long_sum: Sum of durations across trials with trial duration
- at least equal to the goal final trial duration and with a trial loss ratio
- higher than the goal loss ratio.
-
-- long_trials: An iterable of all trial results from trials with trial duration
- at least equal to the goal final trial duration,
- sorted by increasing the trial loss ratio.
- A trial result is a composite with the following two attributes available:
-
- - trial.loss_ratio: The trial loss ratio as measured for this trial.
-
- - trial.duration: The trial duration of this trial.
-
-The code works correctly only if there is at least one
-trial result measured at a given load.
-
-~~~ python
-# Find the trial loss ratio at the (1 - goal exceed ratio) quantile
-# of the duration-weighted distribution of full-length trial results,
-# padding with the goal duration sum if measured trials are missing.
-all_long_sum = max(goal_duration_sum, good_long_sum + bad_long_sum)
-remaining = all_long_sum * (1.0 - goal_exceed_ratio)
-quantile_loss_ratio = None
-for trial in long_trials:
-    if quantile_loss_ratio is None or remaining > 0.0:
-        quantile_loss_ratio = trial.loss_ratio
-        remaining -= trial.duration
-    else:
-        break
-else:
-    # The loop ended without break: every measured trial was consumed.
-    # If duration is still missing, treat the remainder as fully lossy.
-    if remaining > 0.0:
-        quantile_loss_ratio = 1.0
-conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
-~~~
-
---- back
diff --git a/docs/ietf/draft-ietf-bmwg-mlrsearch-07.md b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.md
new file mode 100644
index 0000000000..eb2a218bb8
--- /dev/null
+++ b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.md
@@ -0,0 +1,3123 @@
+---
+
+title: Multiple Loss Ratio Search
+abbrev: MLRsearch
+docname: draft-ietf-bmwg-mlrsearch-07
+date: 2024-07-18
+
+ipr: trust200902
+area: ops
+wg: Benchmarking Working Group
+kw: Internet-Draft
+cat: info
+
+coding: us-ascii
+pi: # can use array (if all yes) or hash here
+ toc: yes
+ sortrefs: # defaults to yes
+ symrefs: yes
+
+author:
+ -
+ ins: M. Konstantynowicz
+ name: Maciek Konstantynowicz
+ org: Cisco Systems
+ email: mkonstan@cisco.com
+ -
+ ins: V. Polak
+ name: Vratko Polak
+ org: Cisco Systems
+ email: vrpolak@cisco.com
+
+normative:
+ RFC1242:
+ RFC2285:
+ RFC2544:
+ RFC8219:
+ RFC9004:
+
+informative:
+ TST009:
+ target: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf
+ title: "TST 009"
+ FDio-CSIT-MLRsearch:
+ target: https://csit.fd.io/cdocs/methodology/measurements/data_plane_throughput/mlr_search/
+ title: "FD.io CSIT Test Methodology - MLRsearch"
+ date: 2023-10
+ PyPI-MLRsearch:
+ target: https://pypi.org/project/MLRsearch/1.2.1/
+ title: "MLRsearch 1.2.1, Python Package Index"
+ date: 2023-10
+
+--- abstract
+
+This document proposes extensions to [RFC2544] throughput search by
+defining a new methodology called Multiple Loss Ratio search
+(MLRsearch). MLRsearch aims to minimize search duration,
+support multiple loss ratio searches,
+and enhance result repeatability and comparability.
+
+The primary reason for extending [RFC2544] is to address the challenges
+and requirements presented by the evaluation and testing
+of software-based networking systems' data planes.
+
+To give users more freedom, MLRsearch provides additional configuration options
+such as allowing multiple short trials per load instead of one large trial,
+tolerating a certain percentage of trial results with higher loss,
+and supporting the search for multiple goals with varying loss ratios.
+
+--- middle
+
+{::comment}
+
+ As we use Kramdown to convert from Markdown,
+ we use this way of marking comments not to be visible in the rendered draft.
+ https://stackoverflow.com/a/42323390
+ If another engine is used, convert to this way:
+ https://stackoverflow.com/a/20885980
+
+ [toc]
+
+{:/comment}
+
+
+# Purpose and Scope
+
+The purpose of this document is to describe Multiple Loss Ratio search
+(MLRsearch), a data plane throughput search methodology optimized for software
+networking DUTs.
+
+Applying vanilla [RFC2544] throughput bisection to software DUTs
+results in several problems:
+
+- Binary search takes too long as most trials are done far from the
+ eventually found throughput.
+- The required final trial duration and pauses between trials
+ prolong the overall search duration.
+- Software DUTs show noisy trial results,
+ leading to a big spread of possible discovered throughput values.
+- Throughput requires a loss of exactly zero frames, but the industry
+ frequently allows for small but non-zero losses.
+- The definition of throughput is not clear when trial results are inconsistent.
+
+To address the problems mentioned above,
+the MLRsearch test methodology specification employs the following enhancements:
+
+- Allow multiple short trials instead of one big trial per load.
+ - Optionally, tolerate a percentage of trial results with higher loss.
+- Allow searching for multiple Search Goals, with differing loss ratios.
+ - Any trial result can affect each Search Goal in principle.
+- Insert multiple coarse targets for each Search Goal, earlier ones need
+ to spend less time on trials.
+ - Earlier targets also aim for lesser precision.
+ - Use Forwarding Rate (FR) at maximum offered load
+ [RFC2285] (section 3.6.2) to initialize the initial targets.
+- Take care when dealing with inconsistent trial results.
+ - Reported throughput is smaller than the smallest load with high loss.
+ - Smaller load candidates are measured first.
+- Apply several load selection heuristics to save even more time
+ by trying hard to avoid unnecessarily narrow bounds.
+
+Some of these enhancements are formalized as MLRsearch specification,
+the remaining enhancements are treated as implementation details,
+thus achieving high comparability without limiting future improvements.
+
+MLRsearch configuration options are flexible enough to
+support both conservative settings and aggressive settings.
+The conservative settings lead to results
+unconditionally compliant with [RFC2544],
+but longer search duration and worse repeatability.
+Conversely, aggressive settings lead to shorter search duration
+and better repeatability, but the results are not compliant with [RFC2544].
+
+No part of [RFC2544] is intended to be obsoleted by this document.
+
+# Identified Problems
+
+This chapter describes the problems affecting usability
+of various performance testing methodologies,
+mainly a binary search for [RFC2544] unconditionally compliant throughput.
+
+## Long Search Duration
+
+{::comment}
+ [Low priority]
+
+ <mark>MKP2 [VP] TODO: Look for mentions of search duration in existing RFCs.</mark>
+
+ <mark>MKP2 [VP] TODO: If not found, define right after defining "the search".</mark>
+
+{:/comment}
+
+The emergence of software DUTs, with frequent software updates and a
+number of different frame processing modes and configurations,
+has increased both the number of performance tests
+required to verify the DUT update and the frequency of running those tests.
+This makes the overall test execution time even more important than before.
+
+The current [RFC2544] throughput definition restricts the potential
+for time-efficiency improvements.
+A more generalized throughput concept could enable further enhancements
+while maintaining the precision of simpler methods.
+
+The bisection method, when unconditionally compliant with [RFC2544],
+is excessively slow.
+This is because a significant amount of time is spent on trials
+with loads that, in retrospect, are far from the final determined throughput.
+
+[RFC2544] does not specify any stopping condition for throughput search,
+so users already have an access to a limited trade-off
+between search duration and achieved precision.
+However, each full 60-second trials doubles the precision,
+so not many trials can be removed without a substantial loss of precision.
+
+## DUT in SUT
+
+[RFC2285] defines:
+- DUT as
+ - The network forwarding device to which stimulus is offered and
+ response measured [RFC2285] (section 3.1.1).
+- SUT as
+ - The collective set of network devices to which stimulus is offered
+ as a single entity and response measured [RFC2285] (section 3.1.2).
+
+[RFC2544] specifies a test setup with an external tester stimulating the
+networking system, treating it either as a single DUT, or as a system
+of devices, an SUT.
+
+In the case of software networking, the SUT consists of not only the DUT
+as a software program processing frames, but also of
+server hardware and operating system functions,
+with that server hardware resources shared across all programs including
+the operating system.
+
+Given that the SUT is a shared multi-tenant environment
+encompassing the DUT and other components, the DUT might inadvertently
+experience interference from the operating system
+or other software operating on the same server.
+
+Some of this interference can be mitigated.
+For instance,
+pinning DUT program threads to specific CPU cores
+and isolating those cores can prevent context switching.
+
+Despite taking all feasible precautions, some adverse effects may still impact
+the DUT's network performance.
+In this document, these effects are collectively
+referred to as SUT noise, even if the effects are not as unpredictable
+as what other engineering disciplines call noise.
+
+DUT can also exhibit fluctuating performance itself, for reasons
+not related to the rest of SUT, for example due to pauses in execution
+needed for internal stateful processing.
+In many cases this
+may be an expected, by-design behavior, as it would be observable even
+in a hypothetical scenario where all sources of SUT noise are eliminated.
+Such behavior affects trial results in a way similar to SUT noise.
+As the two phenomena are hard to distinguish,
+in this document the term 'noise' is used to encompass
+both the internal performance fluctuations of the DUT
+and the genuine noise of the SUT.
+
+A simple model of SUT performance consists of an idealized noiseless performance,
+and additional noise effects.
+For a specific SUT, the noiseless performance is assumed to be constant,
+with all observed performance variations being attributed to noise.
+The impact of the noise can vary in time, sometimes wildly,
+even within a single trial.
+The noise can sometimes be negligible, but frequently
+it lowers the observed SUT performance as observed in trial results.
+
+In this model, SUT does not have a single performance value, it has a spectrum.
+One end of the spectrum is the idealized noiseless performance value,
+the other end can be called a noiseful performance.
+In practice, a trial result
+close to the noiseful end of the spectrum happens only rarely.
+The worse the performance value is, the more rarely it is seen in a trial.
+Therefore, the extreme noiseful end of the SUT spectrum is not observable
+among trial results.
+Also, the extreme noiseless end of the SUT spectrum
+is unlikely to be observable, this time because some small noise effects
+are likely to occur multiple times during a trial.
+
+Unless specified otherwise, this document's focus is
+on the potentially observable ends of the SUT performance spectrum,
+as opposed to the extreme ones.
+
+When focusing on the DUT, the benchmarking effort should ideally aim
+to eliminate only the SUT noise from SUT measurements.
+However,
+this is currently not feasible in practice, as there are no realistic enough
+models available to distinguish SUT noise from DUT fluctuations,
+based on authors' experience and available literature.
+
+Assuming a well-constructed SUT, the DUT is likely its
+primary performance bottleneck.
+In this case, we can define the DUT's
+ideal noiseless performance as the noiseless end of the SUT performance spectrum,
+especially for throughput.
+However, other performance metrics, such as latency,
+may require additional considerations.
+
+Note that by this definition, DUT noiseless performance
+also minimizes the impact of DUT fluctuations, as much as realistically possible
+for a given trial duration.
+
+MLRsearch methodology aims to solve the DUT in SUT problem
+by estimating the noiseless end of the SUT performance spectrum
+using a limited number of trial results.
+
+Any improvements to the throughput search algorithm, aimed at better
+dealing with software networking SUT and DUT setup, should employ
+strategies recognizing the presence of SUT noise, allowing the discovery of
+(proxies for) DUT noiseless performance
+at different levels of sensitivity to SUT noise.
+
+## Repeatability and Comparability
+
+[RFC2544] does not suggest repeating the throughput search.
+And from just one
+discovered throughput value, it cannot be determined how repeatable that value is.
+Poor repeatability then leads to poor comparability,
+as different benchmarking teams may obtain varying throughput values
+for the same SUT, exceeding the expected differences from search precision.
+
+[RFC2544] throughput requirements (60 seconds trial and
+no tolerance of a single frame loss) affect the throughput results
+in the following way.
+The SUT behavior close to the noiseful end of its performance spectrum
+consists of rare occasions of significantly low performance,
+but the long trial duration makes those occasions not so rare on the trial level.
+Therefore, the binary search results tend to wander away from the noiseless end
+of SUT performance spectrum, more frequently and more widely than short
+trials would, thus causing poor throughput repeatability.
+
+The repeatability problem can be addressed by defining a search procedure
+that identifies a consistent level of performance,
+even if it does not meet the strict definition of throughput in [RFC2544].
+
+According to the SUT performance spectrum model, better repeatability
+will be at the noiseless end of the spectrum.
+Therefore, solutions to the DUT in SUT problem
+will help also with the repeatability problem.
+
+Conversely, any alteration to [RFC2544] throughput search
+that improves repeatability should be considered
+as less dependent on the SUT noise.
+
+An alternative option is to simply run a search multiple times, and report some
+statistics (e.g. average and standard deviation).
+This can be used
+for a subset of tests deemed more important,
+but it makes the search duration problem even more pronounced.
+
+## Throughput with Non-Zero Loss
+
+[RFC1242] (section 3.17 Throughput) defines throughput as:
+ The maximum rate at which none of the offered frames
+ are dropped by the device.
+
+Then, it says:
+ Since even the loss of one frame in a
+ data stream can cause significant delays while
+ waiting for the higher level protocols to time out,
+ it is useful to know the actual maximum data
+ rate that the device can support.
+
+However, many benchmarking teams accept a small,
+non-zero loss ratio as the goal for their load search.
+
+Motivations are many:
+
+- Modern protocols tolerate frame loss better,
+ compared to the time when [RFC1242] and [RFC2544] were specified.
+
+- Trials nowadays send way more frames within the same duration,
+ increasing the chance of a small SUT performance fluctuation
+ being enough to cause frame loss.
+
+- Small bursts of frame loss caused by noise have otherwise smaller impact
+ on the average frame loss ratio observed in the trial,
+ as during other parts of the same trial the SUT may work more closely
+ to its noiseless performance, thus perhaps lowering the Trial Loss Ratio
+ below the Goal Loss Ratio value.
+
+- If an approximation of the SUT noise impact on the Trial Loss Ratio is known,
+ it can be set as the Goal Loss Ratio.
+
+Regardless of the validity of all similar motivations,
+support for non-zero loss goals makes any search algorithm more user-friendly.
+[RFC2544] throughput is not user-friendly in this regard.
+
+Furthermore, allowing users to specify multiple loss ratio values,
+and enabling a single search to find all relevant bounds,
+significantly enhances the usefulness of the search algorithm.
+
+Searching for multiple Search Goals also helps to describe the SUT performance
+spectrum better than the result of a single Search Goal.
+For example, the repeated wide gap between zero and non-zero loss loads
+indicates the noise has a large impact on the observed performance,
+which is not evident from a single goal load search procedure result.
+
+It is easy to modify the vanilla bisection to find a lower bound
+for the intended load that satisfies a non-zero Goal Loss Ratio.
+But it is not that obvious how to search for multiple goals at once,
+hence the support for multiple Search Goals remains a problem.
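+
+As an informal illustration, the modified bisection could look like the
+sketch below; the measure() helper, the load bounds, and the width
+parameter are assumptions of this sketch, not part of any specification.
+
+~~~ python
+def bisect_lower_bound(measure, min_load, max_load, goal_loss_ratio, width):
+    """Return a lower bound load whose single trial satisfies the loss goal.
+
+    measure(load) is assumed to run one trial and return its Trial Loss Ratio.
+    min_load is assumed to already satisfy the goal.
+    """
+    lower, upper = min_load, max_load
+    while upper - lower > width:
+        mid = (lower + upper) / 2.0
+        if measure(mid) <= goal_loss_ratio:
+            lower = mid  # low-loss trial: mid becomes the new lower bound
+        else:
+            upper = mid  # lossy trial: mid becomes the new upper bound
+    return lower
+~~~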
+
+## Inconsistent Trial Results
+
+While performing throughput search by executing a sequence of
+measurement trials, there is a risk of encountering inconsistencies
+between trial results.
+
+The plain bisection never encounters inconsistent trials.
+But [RFC2544] hints about the possibility of inconsistent trial results,
+in two places in its text.
+The first place is section 24, where full trial durations are required,
+presumably because they can be inconsistent with the results
+from short trial durations.
+The second place is section 26.3, where two successive zero-loss trials
+are recommended, presumably because after one zero-loss trial
+there can be a subsequent inconsistent non-zero-loss trial.
+
+Examples include:
+
+- A trial at the same load (same or different trial duration) results
+ in a different Trial Loss Ratio.
+- A trial at a higher load (same or different trial duration) results
+ in a smaller Trial Loss Ratio.
+
+Any robust throughput search algorithm needs to decide how to continue
+the search in the presence of such inconsistencies.
+Definitions of throughput in [RFC1242] and [RFC2544] are not specific enough
+to imply a unique way of handling such inconsistencies.
+
+Ideally, there will be a definition of a new quantity which both generalizes
+throughput for non-zero-loss (and other possible repeatability enhancements),
+while being precise enough to force a specific way to resolve trial result
+inconsistencies.
+But until such a definition is agreed upon, the correct way to handle
+inconsistent trial results remains an open problem.
+
+# MLRsearch Specification
+
+This section describes MLRsearch specification including all technical
+definitions needed for evaluating whether a particular test procedure
+complies with MLRsearch specification.
+
+{::comment}
+ [Good idea for 08, maybe ask BMWG first?]
+
+ <mark>TODO VP: Separate Requirements and Recommendations/Suggestions
+ paragraphs? (currently requirements are in discussion subsections -
+ discussion should only clarify things without adding new
+ requirements)</mark>
+
+{:/comment}
+
+## Overview
+
+MLRsearch specification describes a set of abstract system components,
+acting as functions with specified inputs and outputs.
+
+A test procedure is said to comply with MLRsearch specification
+if it can be conceptually divided into analogous components,
+each satisfying requirements for the corresponding MLRsearch component.
+
+The Measurer component is tasked to perform trials,
+the Controller component is tasked to select trial loads and durations,
+the Manager component is tasked to pre-configure everything
+and to produce the test report.
+The test report explicitly states Search Goals (as the Controller Inputs)
+and corresponding Goal Results (Controller Outputs).
+
+{::comment}
+ [Low priority]
+
+ <mark>MKP2 TODO: Find a good reference for the test report, seems only implicit in RFC2544.</mark>
+
+{:/comment}
+
+The Manager calls the Controller once,
+the Controller keeps calling the Measurer
+until all stopping conditions are met.
+
+The part where the Controller calls the Measurer is called the search.
+Any activity done by the Manager before it calls the Controller
+(or after Controller returns) is not considered to be part of the search.
+
+MLRsearch specification prescribes regular search results and recommends
+their stopping conditions. Irregular search results are also allowed;
+they may have different requirements and stopping conditions.
+
+Search results are based on load classification.
+When measured enough, any chosen load either achieves or fails each search goal,
+thus becoming a lower or an upper bound for that goal.
+When the relevant bounds are at loads that are close enough
+(according to goal precision), the regular result is found.
+Search stops when all regular results are found
+(or if some goals are proven to have only irregular results).
+
+## Measurement Quantities
+
+MLRsearch specification uses a number of measurement quantities.
+
+In general, MLRsearch specification does not require particular units to be used,
+but it is REQUIRED for the test report to state all the units.
+For example, ratio quantities can be dimensionless numbers between zero and one,
+but may be expressed as percentages instead.
+
+For convenience, a group of quantities can be treated as a composite quantity.
+One constituent of a composite quantity is called an attribute,
+and a group of attribute values is called an instance of that composite quantity.
+
+Some attributes are not independent of others,
+and they can be calculated from other attributes.
+Such quantities are called derived quantities.
+
+## Existing Terms
+
+RFC 1242 "Benchmarking Terminology for Network Interconnect Devices"
+contains basic definitions, and
+RFC 2544 "Benchmarking Methodology for Network Interconnect Devices"
+contains discussions of a number of terms and additional methodology requirements.
+RFC 2285 adds more terms and discussions, describing some known situations
+in a more precise way.
+
+All three documents should be consulted
+before attempting to make use of this document.
+
+Definitions of some central terms are copied and discussed in subsections.
+
+{::comment}
+ [Good idea for 08, but needs more work. Ask BMWG?]
+
+ Alternatively, quick list of all (existing and new here) terms,
+ with links (external or internal respectively) to definitions.
+
+ <mark>MKP3 [VP] TODO: Even if the following list will not be in final draft,
+ it is useful to keep it around (maybe commented-out) while editing.</mark>
+
+ <mark>MKP3 VP note: rough list of all RFC references:
+ - [RFC1242] (section 3.17 Throughput) ... definition
+ - [RFC2544] (section 26.1 Throughput) ... methodology
+ - [RFC2544] (section 24. Trial duration):
+ - full trial durations (implies short trials)
+ - Also 60s for unconditional compliance is here.
+ - Also "the search" (without quotes) appears there.
+ - Also "binary search" (with quotes) appears there.
+ - [RFC2544] (section 26.3 Frame loss rate):
+ - two successive zero-loss trials are recommended (hints about loss inversion)
+ - un/conditionally compliant with [RFC2544]
+ - [RFC2544] (section 26. Benchmarking tests:)
+ - all its "dot sections" have "Reporting format:" paragraphs
+ - (implies test report)
+ - [RFC2544] (section 26.1 Throughput) wants graph, frame size on X axis.
+ - [RFC2544] (section 23. Trial description) trial
+ - general description of trial
+ - wait times specifically, maybe also learning frames?
+ - Constant Load of [RFC1242] (section 3.4 Constant Load)
+ - Data Rate of [RFC2544] (section 14. Bidirectional traffic)
+ - seems equal to input frame rate [RFC2544] (23. Trial description).
+ - [RFC2544] (section 21. Bursty traffic) suggests non-constant loads?
+ - Intended Load of [RFC2285] (section 3.5.1 Intended load (Iload))
+ - [RFC2285] (Section 3.5.2 Offered load (Oload))
+ - Frame Loss Rate of [RFC1242] (section 3.6 Frame Loss Rate)
+ - Forwarding Rate as defined in [RFC2285] (section 3.6.1 Forwarding rate (FR))
+ - [RFC2544] (section 20. Maximum frame rate)
+ - [RFC2285] (3.5.3 Maximum offered load (MOL))
+ - reordered frames [RFC2544] (section 10. Verifying received frames)
+ - For example, [RFC2544] (Appendix C) lists frame formats and protocol addresses,
+ as recommended from [RFC2544] (section 8. Frame formats)
+ and [RFC2544] (section 12. Protocol addresses).
+ - [RFC8219] (section 5.3. Traffic Setup) introduces traffic setups consisting of a mix of IPv4 and IPv6 traffic
+ - [RFC2544] (section 9. Frame sizes)
+ - [RFC1242] (section 3.5 Data link frame size)
+ - [RFC2285] (section 3.6.2) FRMOL
+ - [RFC2285] (section 3.1.1) DUT
+ - [RFC2285] (section 3.1.2) SUT
+ - [RFC2544] (section 6. Test set up) test setup with (an external) tester
+ - [RFC9004] B2B
+ - [RFC8219] (section 5.3. Traffic Setup) for an example of ip4+ip6 mixed traffic
+ </mark>
+
+ <mark>MKP3 [VP] TODO: Do not mention those that do not need discussion here.</mark>
+
+{:/comment}
+
+
+{::comment}
+ [Low priority]
+
+ <mark>MKP3 [VP] TODO: Do we even need RFC9004?</mark>
+
+{:/comment}
+
+{::comment}
+ [I do not understand what I meant. Typos? Probably not important overall.]
+
+ <mark>MKP2 [VP] TODO: Even terms that are discussed in this memo,
+ they perhaps do not need a separate list (just free paragraphs),
+ in a chapter after MLRsearch specification.</mark>
+
+{:/comment}
+
+{::comment}
+ [Important, just not enough time in 07.]
+
+ <mark>MKP3 [VP] TODO: Verify that MLRsearch specification does not discuss
+ meaning of existing terms without quoting their original definition.</mark>
+
+{:/comment}
+
+### SUT
+
+Defined in [RFC2285] (section 3.1.2 System Under Test (SUT)) as follows.
+
+Definition:
+
+The collective set of network devices to which stimulus is offered
+as a single entity and response measured.
+
+Discussion:
+
+An SUT consisting of a single network device is also allowed.
+
+### DUT
+
+Defined in [RFC2285] (section 3.1.1 Device Under Test (DUT)) as follows.
+
+Definition:
+
+The network forwarding device to which stimulus is offered and
+response measured.
+
+Discussion:
+
+DUT, as a sub-component of SUT, is only indirectly mentioned
+in MLRsearch specification, but is of key relevance for its motivation.
+
+{::comment}
+ [Could be useful, but not high priority.]
+
+ ### Tester
+
+ <mark>MKP3 TODO: Add Definition and Discusion paragraphs</mark>
+
+ <mark>MKP3 MK note: Bizarre ... i can't find tester definition in
+ rfc1242, rfc2288 or rfc2544, but will keep looking. If there isn't one,
+ we need to define one :).</mark>
+
+ <mark>[VP] TODO: There were some documents distinguishing TG and TA.</mark>
+
+{:/comment}
+
+### Trial
+
+A trial is the part of the test described in [RFC2544] (section 23. Trial description).
+
+Definition:
+
+ A particular test consists of multiple trials. Each trial returns
+ one piece of information, for example the loss rate at a particular
+ input frame rate. Each trial consists of a number of phases:
+
+ a) If the DUT is a router, send the routing update to the "input"
+ port and pause two seconds to be sure that the routing has settled.
+
+ b) Send the "learning frames" to the "output" port and wait 2
+ seconds to be sure that the learning has settled. Bridge learning
+ frames are frames with source addresses that are the same as the
+ destination addresses used by the test frames. Learning frames for
+ other protocols are used to prime the address resolution tables in
+ the DUT. The formats of the learning frame that should be used are
+ shown in the Test Frame Formats document.
+
+ c) Run the test trial.
+
+ d) Wait for two seconds for any residual frames to be received.
+
+ e) Wait for at least five seconds for the DUT to restabilize.
+
+Discussion:
+
+The definition describes some traits; it is not clear whether all of them
+are REQUIRED, or whether some of them are only RECOMMENDED.
+
+{::comment}
+ [Useful if possible.]
+
+ <mark>MKP2 [VP] TODO: Search RFCs for better description of "Run the test trial".</mark>
+
+{:/comment}
+
+For the purposes of the MLRsearch specification,
+it is ALLOWED for the test procedure to deviate from the [RFC2544] description,
+but any such deviation MUST be made explicit in the test report.
+
+Trials are the only stimuli the SUT is expected to experience
+during the search.
+
+In some discussion paragraphs, it is useful to consider the traffic
+as sent and received by a tester, as implicitly defined
+in [RFC2544] (section 6. Test set up).
+
+An example of deviation from [RFC2544] is using shorter wait times.
+
+## Trial Terms
+
+This section defines new and redefines existing terms for quantities
+relevant as inputs or outputs of a trial, as used by the Measurer component.
+
+### Trial Duration
+
+Definition:
+
+Trial duration is the intended duration of the traffic for a trial.
+
+Discussion:
+
+In general, this quantity does not include any preparation nor waiting
+described in section 23 of [RFC2544] (section 23. Trial description).
+
+While any positive real value may be provided, some Measurer implementations
+MAY limit possible values, e.g. by rounding down to the nearest integer in seconds.
+In that case, it is RECOMMENDED to give such inputs to the Controller
+so the Controller only proposes the accepted values.
+Alternatively, the test report MUST present the rounded values
+as Search Goal attributes.
+
+### Trial Load
+
+Definition:
+
+The trial load is the intended load for a trial.
+
+Discussion:
+
+For test report purposes, it is assumed that this is a constant load by default.
+This MAY be only an average load, e.g. when the traffic is intended to be bursty,
+as suggested in [RFC2544] (section 21. Bursty traffic),
+but the test report MUST explicitly mention how non-constant the traffic is.
+
+Trial load is the quantity defined as Constant Load of [RFC1242]
+(section 3.4 Constant Load), Data Rate of [RFC2544]
+(section 14. Bidirectional traffic)
+and Intended Load of [RFC2285] (section 3.5.1 Intended load (Iload)).
+All three definitions specify
+that this value applies to one (input or output) interface.
+
+{::comment}
+ [Not important.]
+
+ <mark>MKP2 [VP] TODO: Also mention input frame rate [RFC2544] (23. Trial description).</mark>
+
+{:/comment}
+
+For test report purposes, multi-interface aggregate load MAY be reported,
+this is understood as the same quantity expressed using different units.
+From the report it MUST be clear whether a particular trial load value
+is per one interface, or an aggregate over all interfaces.
+
+Similarly to trial duration, some Measurers may limit the possible values
+of trial load. Contrary to trial duration, the test report is NOT REQUIRED
+to document such behavior.
+
+{::comment}
+ [Can of worms. Be aware, but probably do not let spill into draft.]
+
+ <mark>MKP2 [VP] TODO: Why? In practice the difference is small, but what if it is big?
+ Do we need Trial Effective Load for bounds an conditional throughput purposes?
+ Should the Controller be recommended to chose load values that are exactly accepted?
+ </mark>
+
+{:/comment}
+
+It is ALLOWED to combine trial load and trial duration in a way
+that would not be possible to achieve using any integer number of data frames.
+
+{::comment}
+ [I feel this is important, to be discussed separately (not in-scope).]
+
+ <mark>MKP2 [VP] TODO: Explain why are we not using Oload.
+ 1. MLRsearch implementations cannot react correctly to big differences
+ between Iload and Oload.
+ 2. The media between the tested and the DUT are thus considered to be part of SUT.
+ If DUT causes congestion control, it is not expected to handle Iload.
+ </mark>
+
+ See further discussion in [Trial Forwarding Ratio] (#Trial-Forwarding-Ratio)
+ and in [Measurer] (#Measurer) sections for other related issues.
+
+ <mark>MKP2 [VP] TODO: Create a separate subsection for Oload discussion,
+ or clearly separate which aspects are discussed under which term.</mark>
+
+ <mark>MKP2 [VP] TODO: New idea. Compare the tester to an ordinary router
+ in some datacenter. The Intended Load is not jst some abstract input.
+ It is the real traffic coming from routers next hop farther.
+ It does not matter that DUT has forwarded each frame it received,
+ if the tester was unable to sent all the traffic in time.
+ Endpoint see packet loss, they do not care about [RFC2285]
+ half-duplex, spanning trees, nor congestion control mechanisms.
+ Formally speaking, I consider even the sending interface of the sender
+ to be the part of SUT.
+ Reading [RFC2285] (section 3.5.3 Maximum offered load (MOL))
+ "This will be the case when an external source lacks the resources
+ to transmit frames at the minimum legal inter-frame gap"
+ that means TRex workers are also part of SUT. If they do not have
+ enough CPU power to generate frames are required, those frames are lost.
+ </mark>
+
+ <mark>MKP2 [VP] TODO: That new idea warants some discussion in "DUT within SUT",
+ as it is just another case of ther rest of SUT ruining
+ otherwise good DUT performance.</mark>
+
+{:/comment}
+
+### Trial Input
+
+Definition:
+
+Trial Input is a composite quantity, consisting of two attributes:
+trial duration and trial load.
+
+Discussion:
+
+When talking about multiple trials, it is common to say "Trial Inputs"
+to denote all corresponding Trial Input instances.
+
+A Trial Input instance acts as the input for one call of the Measurer component.
+
+Contrary to other composite quantities, MLRsearch implementations
+are NOT ALLOWED to add optional attributes here.
+This improves interoperability between various implementations of
+the Controller and the Measurer.
+
+### Traffic Profile
+
+Definition:
+
+Traffic profile is a composite quantity
+containing attributes other than trial load and trial duration,
+needed for unique determination of the trial to be performed.
+
+Discussion:
+
+All its attributes are assumed to be constant during the search,
+and the composite is configured on the Measurer by the Manager
+before the search starts.
+This is why the traffic profile is not part of the Trial Input.
+
+As a consequence, implementations of the Manager and the Measurer
+must be aware of their common set of capabilities, so that the traffic profile
+uniquely defines the traffic during the search.
+The important fact is that none of those capabilities
+have to be known by the Controller implementations.
+
+The traffic profile SHOULD contain some specific quantities,
+for example [RFC2544] (section 9. Frame sizes) governs
+data link frame size as defined in [RFC1242] (section 3.5 Data link frame size).
+
+Several more specific quantities may be RECOMMENDED, depending on media type.
+For example, [RFC2544] (Appendix C) lists frame formats and protocol addresses,
+as recommended from [RFC2544] (section 8. Frame formats)
+and [RFC2544] (section 12. Protocol addresses).
+
+Depending on SUT configuration, e.g. when testing specific protocols,
+additional attributes MUST be included in the traffic profile
+and in the test report.
+
+Example: [RFC8219] (section 5.3. Traffic Setup) introduces traffic setups
+consisting of a mix of IPv4 and IPv6 traffic - the implied traffic profile
+therefore must include an attribute for their percentage.
+
+Other traffic properties that need to be somehow specified
+in Traffic Profile include:
+[RFC2544] (section 14. Bidirectional traffic),
+[RFC2285] (section 3.3.3 Fully meshed traffic),
+and [RFC2544] (section 11. Modifiers).
+
+### Trial Forwarding Ratio
+
+Definition:
+
+The trial forwarding ratio is a dimensionless floating point value.
+It MUST range between 0.0 and 1.0, both inclusive.
+It is calculated by dividing the number of frames
+successfully forwarded by the SUT
+by the total number of frames expected to be forwarded during the trial.
+
+Discussion:
+
+For most traffic profiles, "expected to be forwarded" means
+"intended to get transmitted from Tester towards SUT".
+
+Trial forwarding ratio MAY be expressed in other units
+(e.g. as a percentage) in the test report.
+
+Note that, contrary to loads, frame counts used to compute
+trial forwarding ratio are aggregates over all SUT output interfaces.
+
+Questions around what is the correct number of frames
+that should have been forwarded
+are generally outside of the scope of this document.
+
+{::comment}
+ [Part two of iload/oload discussion.]
+
+ See discussion in [Measurer] (#Measurer) section
+ for more details about calibrating test equipment.
+
+ <mark>MKP2 [VP] TODO: Define unsent frames?</mark>
+
+ <mark>MKP2 [VP] TODO: If Oload is fairly below Iload, the unsent frames
+ should be counted as lost, otherwise search outputs are misleading.
+ But what is "fairly"? CSIT tolerates 10 microseconds worth of unsent frames.</mark>
+
+{:/comment}
+
+{::comment}
+ [Low priority, but maybe useful for somebody?]
+
+ <mark>MKP2 [VP] TODO: Mention traffic profiles with uneven frame counts?
+ E.g. when SUT is expected to perform IP packet fragmentation or reassembly.
+ </mark>
+
+{:/comment}
+
+### Trial Loss Ratio
+
+Definition:
+
+The Trial Loss Ratio is equal to one minus the trial forwarding ratio.
+
+Discussion:
+
+100% minus the trial forwarding ratio, when expressed as a percentage.
+
+This is almost identical to Frame Loss Rate of [RFC1242]
+(section 3.6 Frame Loss Rate),
+the only minor difference is that Trial Loss Ratio
+does not need to be expressed as a percentage.
+
+### Trial Forwarding Rate
+
+Definition:
+
+The trial forwarding rate is a derived quantity, calculated by
+multiplying the trial load by the trial forwarding ratio.
+
+Discussion:
+
+It is important to note that while similar, this quantity is not identical
+to the Forwarding Rate as defined in [RFC2285]
+(section 3.6.1 Forwarding rate (FR)).
+The latter is specific to one output interface only,
+whereas the trial forwarding ratio is based
+on frame counts aggregated over all SUT output interfaces.
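+
+As an informal illustration, the three derived trial quantities above can
+be computed as in the following sketch; all numeric values are made up
+for the example only.
+
+~~~ python
+frames_expected = 1_000_000   # aggregate over all SUT output interfaces
+frames_forwarded = 999_000    # aggregate over all SUT output interfaces
+trial_load = 2_000_000.0      # frames per second, per one interface
+
+trial_forwarding_ratio = frames_forwarded / frames_expected  # 0.999
+trial_loss_ratio = 1.0 - trial_forwarding_ratio              # 0.001
+trial_forwarding_rate = trial_load * trial_forwarding_ratio  # 1998000.0
+~~~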
+
+{::comment}
+ [Part 3 of iload/oload discussion.]
+
+ <mark>MKP2 [VP] TODO: If some unsent frames were tolerated (not counted as lost),
+ this value is actually higher than the real fps output of the SUT.
+ Should we use the real FR as the basis for Conditional Throughput
+ (instead of this TFR)? That would require additional Trial Output attribute.
+ </mark>
+
+ <mark>MKP2 [VP] TODO: What about duration stretching?
+ This also causes difference between Iload and Oload,
+ but in an invisible way.</mark>
+
+ <mark>MKP2 [VP] TODO: Recommend start+sleep+stop?
+ How long wait for late frames? RFC2544 2s is too much even at 30s trial.</mark>
+
+{:/comment}
+
+### Trial Effective Duration
+
+Definition:
+
+Trial effective duration is a time quantity related to the trial,
+by default equal to the trial duration.
+
+Discussion:
+
+This is an optional feature.
+If the Measurer does not return any trial effective duration value,
+the Controller MUST use the trial duration value instead.
+
+Trial effective duration may be any time quantity chosen by the Measurer
+to be used for time-based decisions in the Controller.
+
+The test report MUST explain how the Measurer computes the returned
+trial effective duration values, if they are not always
+equal to the trial duration.
+
+This feature can be beneficial for users
+who wish to manage the overall search duration,
+rather than solely the traffic portion of it.
+Simply measure the duration of the whole trial (waits including)
+and use that as the trial effective duration.
+
+Also, this is a way for the Measurer to inform the Controller about
+its surprising behavior, for example when rounding the trial duration value.
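+
+A hypothetical sketch of a Measurer reporting the whole-trial variant is
+shown below; run_trial() stands for whatever performs the traffic and the
+surrounding waits, and is an assumption of this sketch.
+
+~~~ python
+import time
+
+def measure(trial_load, trial_duration):
+    started = time.monotonic()
+    loss_ratio, forwarding_rate = run_trial(trial_load, trial_duration)
+    # Whole wall-clock time, waits included, reported as effective duration.
+    effective_duration = time.monotonic() - started
+    return loss_ratio, effective_duration, forwarding_rate
+~~~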
+
+{::comment}
+ [Not very important, but easy and nice recommendation.]
+
+ <mark>MKP2 [VP] TODO: Recommend for Measurer to return all trials at relevant bounds,
+ as that may better inform users when surprisingly small amount of trials
+ was performed, just because the the trial effective duration values were big.</mark>
+
+ <mark>MKP2 [VP] TODO: Repeat that this is not here to deal with duration stretching.</mark>
+
+{:/comment}
+
+### Trial Output
+
+Definition:
+
+Trial Output is a composite quantity. The REQUIRED attributes are
+Trial Loss Ratio, trial effective duration and trial forwarding rate.
+
+Discussion:
+
+When talking about multiple trials, it is common to say "Trial Outputs"
+to denote all corresponding Trial Output instances.
+
+Implementations may provide additional (optional) attributes.
+The Controller implementations MUST ignore values of any optional attribute
+they are not familiar with,
+except when passing Trial Output instance to the Manager.
+
+Example of an optional attribute:
+The aggregate number of frames expected to be forwarded during the trial,
+especially if it is not just a (rounded-up) value
+implied by trial load and trial duration.
+
+While [RFC2285] (Section 3.5.2 Offered load (Oload))
+requires the offered load value to be reported for forwarding rate measurements,
+it is NOT REQUIRED in MLRsearch specification.
+
+{::comment}
+ [Side tangent from iload/oload discussion. Stilll recommendation is not obvious.]
+
+ <mark>MKP2 TODO: Why? Just because bound trial results are optional in Controller Output?</mark>
+
+ <mark>MKP2 mk edit note: we need to more explicitly address
+ the relevance or irrelevance of [RFC2285] (Section 3.5.2 Offered load (Oload)).
+ Current text in [Trial Load] (#Trial-Load) is ambiguous - quoted below.</mark>
+
+ <mark>MKP2 "Questions around what is the correct number of frames that should
+ have been forwarded is generally outside of the scope of this document.
+ See discussion in [Measurer] (#Measurer) section for more details about
+ calibrating test equipment."</mark>
+
+{:/comment}
+
+### Trial Result
+
+Definition:
+
+Trial result is a composite quantity,
+consisting of the Trial Input and the Trial Output.
+
+Discussion:
+
+When talking about multiple trials, it is common to say "trial results"
+to denote all corresponding trial result instances.
+
+While implementations SHOULD NOT include additional attributes
+with independent values, they MAY include derived quantities.
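+
+As an informal illustration (not a required data model), the trial-level
+composite quantities defined in this section could be represented as follows:
+
+~~~ python
+from dataclasses import dataclass
+
+@dataclass(frozen=True)
+class TrialInput:
+    trial_duration: float  # seconds
+    trial_load: float      # e.g. frames per second, per one interface
+
+@dataclass(frozen=True)
+class TrialOutput:
+    trial_loss_ratio: float          # dimensionless, 0.0 to 1.0
+    trial_effective_duration: float  # seconds
+    trial_forwarding_rate: float     # e.g. frames per second
+
+@dataclass(frozen=True)
+class TrialResult:
+    trial_input: TrialInput
+    trial_output: TrialOutput
+~~~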
+
+## Goal Terms
+
+This section defines new and redefines existing terms for quantities
+indirectly relevant for inputs or outputs of the Controller component.
+
+Several goal attributes are defined before introducing
+the main component quantity: the Search Goal.
+
+### Goal Final Trial Duration
+
+Definition:
+
+A threshold value for trial durations.
+
+Discussion:
+
+This attribute value MUST be positive.
+
+A trial with Trial Duration at least as long as the Goal Final Trial Duration
+is called a full-length trial (with respect to the given Search Goal).
+
+A trial that is not full-length is called a short trial.
+
+Informally, while MLRsearch is allowed to perform short trials,
+the results from such short trials have only limited impact on search results.
+
+One trial may be full-length for some Search Goals, but not for others.
+
+The full relation of this goal to Controller Output is defined later in
+this document in subsections of [Goal Result] (#Goal-Result).
+For example, the Conditional Throughput for this goal is computed only from
+full-length trial results.
+
+### Goal Duration Sum
+
+Definition:
+
+A threshold value for a particular sum of trial effective durations.
+
+Discussion:
+
+This attribute value MUST be positive.
+
+Informally, even when looking only at full-length trials,
+MLRsearch may spend up to this time measuring the same load value.
+
+If the Goal Duration Sum is larger than the Goal Final Trial Duration,
+multiple full-length trials may need to be performed at the same load.
+
+See [TST009 Example] (#TST009-Example) for an example where possibility
+of multiple full-length trials at the same load is intended.
+
+A Goal Duration Sum value lower than the Goal Final Trial Duration
+(of the same goal) could save some search time, but is NOT RECOMMENDED.
+See [Relevant Upper Bound] (#Relevant-Upper-Bound) for partial explanation.
+
+### Goal Loss Ratio
+
+Definition:
+
+A threshold value for Trial Loss Ratios.
+
+Discussion:
+
+Attribute value MUST be non-negative and smaller than one.
+
+A trial with Trial Loss Ratio larger than a Goal Loss Ratio value
+is called a lossy trial, with respect to the given Search Goal.
+
+Informally, if a load causes too many lossy trials,
+the Relevant Lower Bound for this goal will be smaller than that load.
+
+If a trial is not lossy, it is called a low-loss trial,
+or (specifically for zero Goal Loss Ratio value) zero-loss trial.
+
+### Goal Exceed Ratio
+
+Definition:
+
+A threshold value for a particular ratio of sums of Trial Effective Durations.
+
+Discussion:
+
+Attribute value MUST be non-negative and smaller than one.
+
+See later sections for details on which sums.
+Specifically, the direct usage is only in
+[Appendix A: Load Classification] (#Appendix-A\:-Load-Classification)
+and [Appendix B: Conditional Throughput] (#Appendix-B\:-Conditional-Throughput).
+The impact of that usage is discussed in subsections leading to
+[Goal Result] (#Goal-Result).
+
+Informally, the impact of lossy trials is controlled by this value.
+Effectively, Goal Exceed Ratio is a percentage of full-length trials
+that may be lossy without the load being classified
+as the [Relevant Upper Bound] (#Relevant-Upper-Bound).
+
+### Goal Width
+
+Definition:
+
+A value used as a threshold for deciding
+whether two trial load values are close enough.
+
+Discussion:
+
+If present, the value MUST be positive.
+
+Informally, this acts as a stopping condition,
+controlling the precision of the search.
+The search stops if every goal has reached its precision.
+
+Implementations without this attribute
+MUST give the Controller other ways to control the search stopping conditions.
+
+Absolute load difference and relative load difference are two popular choices,
+but implementations may choose a different way to specify width.
+
+The test report MUST make it clear what specific quantity is used as Goal Width.
+
+It is RECOMMENDED to set the Goal Width (as relative difference) value
+to a value no smaller than the Goal Loss Ratio.
+(The reason is not obvious, see [Throughput] (#Throughput) if interested.)
+
+### Search Goal
+
+Definition:
+
+The Search Goal is a composite quantity consisting of several attributes,
+some of which are required.
+
+Required attributes:
+- Goal Final Trial Duration
+- Goal Duration Sum
+- Goal Loss Ratio
+- Goal Exceed Ratio
+
+Optional attribute:
+- Goal Width
+
+Discussion:
+
+Implementations MAY add their own attributes.
+Those additional attributes may be required by the implementation
+even if they are not required by MLRsearch specification.
+But it is RECOMMENDED for those implementations
+to support missing values by computing reasonable defaults.
+
+The meaning of listed attributes is formally given only by their indirect effect
+on the search results.
+
+Informally, later sections provide additional intuitions and examples
+of the Search Goal attribute values.
+
+An example of additional attributes required by some implementations
+is Goal Initial Trial Duration, together with another attribute
+that controls possible intermediate Trial Duration values.
+The reasonable default in this case is using the Goal Final Trial Duration
+and no intermediate values.
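+
+As an informal illustration only (not part of the MLRsearch specification;
+the class and attribute names below are hypothetical),
+the required and optional Search Goal attributes
+could be held in a Python structure such as:
+
+~~~
+from dataclasses import dataclass
+from typing import Optional
+
+@dataclass
+class SearchGoal:
+    """Illustrative container for Search Goal attributes.
+
+    Names, types and units are assumptions of this sketch,
+    not requirements of the MLRsearch specification."""
+    final_trial_duration: float  # Goal Final Trial Duration [s]
+    duration_sum: float          # Goal Duration Sum [s]
+    loss_ratio: float            # Goal Loss Ratio, 0 <= value < 1
+    exceed_ratio: float          # Goal Exceed Ratio, 0 <= value < 1
+    width: Optional[float] = None  # Goal Width (optional)
+~~~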
+
+### Controller Input
+
+Definition:
+
+Controller Input is a composite quantity
+required as an input for the Controller.
+The only REQUIRED attribute is a list of Search Goal instances.
+
+Discussion:
+
+MLRsearch implementations MAY use additional attributes.
+Those additional attributes may be required by the implementation
+even if they are not required by MLRsearch specification.
+
+Formally, the Manager does not apply any Controller configuration
+apart from one Controller Input instance.
+
+For example, Traffic Profile is configured on the Measurer by the Manager
+(without explicit assistance of the Controller).
+
+The order of Search Goal instances in the list SHOULD NOT
+have a big impact on the Controller Output
+(see section [Controller Output] (#Controller-Output)),
+but MLRsearch implementations MAY base their behavior on the order
+of Search Goal instances in the list.
+
+An example of an optional attribute (outside the list of Search Goals)
+required by some implementations is Max Load.
+While this is a frequently used configuration parameter,
+already governed by [RFC2544] (section 20. Maximum frame rate)
+and [RFC2285] (3.5.3 Maximum offered load (MOL)),
+some implementations may detect or discover it instead.
+
+{::comment}
+ [Not important directly, may matter for iload/oload.]
+
+ <mark>MKP2 [VP] TODO: 2544 and 2285 care about half-duplex media. Should we?</mark>
+
+{:/comment}
+
+{::comment}
+ [Maybe obvious but I think useful. RFC2544 talks about header compression in WANs.]
+
+ <mark>MKP2 [VP] TODO: Mention that Max Load should care about all media within SUT,
+ including DUT-DUT links. Important when that link carries encapsulated traffic,
+ as bandwidth limit there implies lower max rate
+ (than implied by tester-SUT links).</mark>
+
+{:/comment}
+
+In the MLRsearch specification, the [Relevant Upper Bound] (#Relevant-Upper-Bound)
+is added as a required attribute precisely because it makes the search result
+independent of the Max Load value.
+
+{::comment}
+ [User recommendation, we should have separate section summarizing those.]
+
+ <mark>[VP] TODO for MK: The rest of this subsection is new, review?</mark>
+
+ It is RECOMMENDED to use the same Goal Final Trial Duration value across all goals.
+ Otherwise, some goals may be measured at Trial Durations longer than needed,
+ with possibly unexpected impacts on repeatability and comparability.
+
+ For example when Goal Loss Ratio is zero, any increase in Trial Duration
+ also increases the likelihood of the trial to become lossy,
+ similar to usage of lower Goal Exceed Ratio or larger Goal Duration Sum,
+ both of which tend to lower the search results, towards noisy end
+ of performance spectrum.
+
+ Also, it is recommended to avoid "incomparable" goals, e.g. one with
+ lower loss ratio but higher exceed ratio, and other with higher loss ratio
+ but lower loss ratio. In worst case, this can make the search to last too long.
+ Implementations are RECOMMENDED to sort the goals and start with
+ stricter ones first, as bounds for those will not get invalidated
+ byt measureing for less trict goal later in the search.
+
+{:/comment}
+
+## Search Goal Examples
+
+### RFC2544 Goal
+
+The following set of values makes the search result unconditionally compliant
+with [RFC2544] (section 24 Trial duration):
+
+- Goal Final Trial Duration = 60 seconds
+- Goal Duration Sum = 60 seconds
+- Goal Loss Ratio = 0%
+- Goal Exceed Ratio = 0%
+
+The latter two attributes are enough to make the search goal
+conditionally compliant; adding the first attribute
+makes it unconditionally compliant.
+
+The second attribute (Goal Duration Sum) only prevents MLRsearch
+from repeating zero-loss full-length trials.
+
+A non-zero exceed ratio could prolong the search and allow loss inversion
+between a lower-load lossy short trial and a higher-load full-length zero-loss trial.
+From [RFC2544] alone, it is not clear whether that higher load
+could be considered as compliant throughput.
+
+### TST009 Goal
+
+One of the alternatives to RFC2544 is described in
+[TST009] (section 12.3.3 Binary search with loss verification).
+The idea there is to repeat lossy trials, hoping for zero loss on the second try,
+so the results are closer to the noiseless end of the performance spectrum,
+and are more repeatable and comparable.
+
+Only the variant with "z = infinity" is achievable with MLRsearch.
+
+{::comment}
+ [Low priority, unless a short sentence is found.]
+
+ <mark>MKP2 MK note: Shouldn't we add a note about how MLRsearch goes about
+ addressing the TST009 point related to z, that is "z is threshold of
+ Lord(r) to override Loss Verification when the count of lost frames is
+ very high and unnecessary verification trials."? i.e. by have Goal Loss
+ Ratio. Thoughts?</mark>
+
+{:/comment}
+
+For example, for the "r = 2" variant, the following search goal should be used:
+
+- Goal Final Trial Duration = 60 seconds
+- Goal Duration Sum = 120 seconds
+- Goal Loss Ratio = 0%
+- Goal Exceed Ratio = 50%
+
+If the first 60s trial has zero loss, it is enough for MLRsearch to stop
+measuring at that load, as even a second lossy trial
+would still fit within the exceed ratio.
+
+But if the first trial is lossy, MLRsearch also needs to perform
+the second trial to classify that load.
+As the Goal Duration Sum is twice as long as the Goal Final Trial Duration,
+a third full-length trial is never needed.
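+
+As an illustration only (the key names are hypothetical and merely mirror
+the goal attribute names used in this document), the RFC2544 Goal
+and this TST009 Goal could be written down in Python as:
+
+~~~
+# Illustrative only; keys mirror the goal attribute names above.
+RFC2544_GOAL = dict(
+    final_trial_duration=60.0,  # seconds
+    duration_sum=60.0,          # seconds
+    loss_ratio=0.0,             # 0%
+    exceed_ratio=0.0,           # 0%
+)
+TST009_GOAL = dict(
+    final_trial_duration=60.0,  # seconds
+    duration_sum=120.0,         # seconds, two full-length trials
+    loss_ratio=0.0,             # 0%
+    exceed_ratio=0.5,           # 50%, one lossy trial tolerated
+)
+~~~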
+
+## Result Terms
+
+Before defining the output of the Controller,
+it is useful to define what the Goal Result is.
+
+The Goal Result is a composite quantity.
+
+The following subsections define its attributes first, before describing the Goal Result quantity itself.
+
+There is a correspondence between Search Goals and Goal Results.
+Most of the following subsections refer to a given Search Goal,
+when defining attributes of the Goal Result.
+Conversely, at the end of the search, each Search Goal
+has its corresponding Goal Result.
+
+Conceptually, the search can be seen as a process of load classification,
+where the Controller attempts to classify some loads as an Upper Bound
+or a Lower Bound with respect to some Search Goal.
+
+Before defining real attributes of the goal result,
+it is useful to define bounds in general.
+
+### Relevant Upper Bound
+
+Definition:
+
+The Relevant Upper Bound is the smallest trial load value that is classified
+at the end of the search as an upper bound
+(see [Appendix A: Load Classification] (#Appendix-A\:-Load-Classification))
+for the given Search Goal.
+
+Discussion:
+
+One search goal can have many different loads classified as an upper bound.
+At the end of the search, one of those loads will be the smallest,
+becoming the relevant upper bound for that goal.
+
+In more detail, the set of all trial outputs (both short and full-length,
+enough of them according to Goal Duration Sum)
+performed at that smallest load failed to uphold all the requirements
+of the given Search Goal, mainly the Goal Loss Ratio
+in combination with the Goal Exceed Ratio.
+
+{::comment}
+ [Recheck. Move to end?]
+
+ <mark>[VP] TODO: Is the above a good summary of Appendix A inputs?</mark>
+
+{:/comment}
+
+If Max Load does not cause enough lossy trials,
+the Relevant Upper Bound does not exist.
+Conversely, if Relevant Upper Bound exists,
+it is not affected by Max Load value.
+
+{::comment}
+ [Medium priority, depends on how many user recommendations we have.]
+
+ With non-zero exceed ratio values, a lossy short trial may not be enough
+ to classify a load as the relevant upper bound.
+ Users MAY apply Goal Duration Sum value lower than Goal Final Trial Duration
+ to force such classification in hope to save time,
+ but it is RECOMMENDED not to do so, as in practice
+ it hurts comparability and repeatability.
+
+{:/comment}
+
+{::comment}
+ [Probably too technical, unless relation to repeatability is found.]
+
+ In general, a load starts as as undecided, then maybe flips to become
+ an upper bound. MLRsearch stops measuring at that load for this goal,
+ but it may be forced to measure more for some other search goals,
+ in which case the load may flip to a lower bound (and back and forth).
+
+ <mark>[VP] TODO: Confirm the load can never flip back to being undecided.</mark>
+
+ Even though the load classification may change during the search,
+ the goal results are established at the end of the search.
+
+ If the exceed ratio is zero, an upper bound can never flip;
+ one lossy trial (even short) is enough to pin the classification.
+
+{:/comment}
+
+### Relevant Lower Bound
+
+Definition:
+
+The Relevant Lower Bound is the largest trial load value
+among those smaller than the Relevant Upper Bound,
+that got classified at the end of the search as a lower bound (see
+[Appendix A: Load Classification] (#Appendix-A\:-Load-Classification))
+for the given Search Goal.
+
+Discussion:
+
+Only among loads smaller than the Relevant Upper Bound,
+the largest load becomes the Relevant Lower Bound.
+With loss inversion, the stricter upper bound matters.
+
+In more detail, the set of all trial outputs (both short and full-length,
+enough of them according to Goal Duration Sum)
+performed at that largest load managed to uphold all the requirements
+of the given Search Goal, mainly the Goal Loss Ratio
+in combination with the Goal Exceed Ratio.
+
+If no load had enough low-loss trials, the Relevant Lower Bound
+may not exist.
+
+{::comment}
+ [Min Load us useful for detecting broken SUTs (and latency).]
+
+ <mark>[VP] TODO: Mention min load here?</mark>
+
+ <mark>[VP] TODO: Allow zero as implicit lower bound that needs no trials?
+ If yes, then probably way earlier than here.</mark>
+
+{:/comment}
+
+Strictly speaking, if the Relevant Upper Bound does not exist,
+the Relevant Lower Bound also does not exist.
+In that case, Max Load is classified as a lower bound,
+but it is not clear whether a higher lower bound
+would be found if the search used a higher Max Load value.
+
+For a regular Goal Result, the distance between the Relevant Lower Bound
+and the Relevant Upper Bound MUST NOT be larger than the Goal Width,
+if the implementation offers width as a goal attribute.
+
+{::comment}
+ [True but no time to fix properly.]
+
+ <mark>mk note: Seemingly broken grammar,
+ "managed to uphold all requirements", should be followed
+ by stating what it means.</mark>
+
+{:/comment}
+
+Searching for another search goal may cause a loss inversion phenomenon,
+where a lower load is classified as an upper bound,
+but also a higher load is classified as a lower bound for the same search goal.
+The definition of the Relevant Lower Bound ignores such high lower bounds.
+
+{::comment}
+ [Compare to similar block in upper bound.]
+
+ In general, a load starts as as undecided, then maybe flips to become
+ a lower bound. MLRsearch stops measuring at that load for this goal,
+ but it may be forced to measure more for some other search goals,
+ in which case the load may flip to an upper bound (and back and forth).
+
+ <mark>[VP] TODO: Confirm the load can never flip back to being undecided.</mark>
+
+ Even though the load classification may change during the search,
+ the goal results are established at the end of the search.
+
+ No valid exceed ratio value pins the classification as a lower bound.
+
+{:/comment}
+
+### Conditional Throughput
+
+Definition:
+
+The Conditional Throughput (see section [Appendix B: Conditional Throughput] (#Appendix-B\:-Conditional-Throughput))
+as evaluated at the Relevant Lower Bound of the given Search Goal
+at the end of the search.
+
+Discussion:
+
+Informally, this is a typical trial forwarding rate, expected to be seen
+at the Relevant Lower Bound of the given Search Goal.
+
+But frequently it is only a conservative estimate thereof,
+as MLRsearch implementations tend to stop gathering more data
+as soon as they confirm the value cannot get worse than this estimate
+within the Goal Duration Sum.
+
+This value is RECOMMENDED to be used when evaluating repeatability
+and comparability of different MLRsearch implementations.
+
+{::comment}
+ [Low priority but useful for comparabuility.]
+
+ <mark>[VP] TODO: Add subsection for Trial Results At Relevant Bounds
+ as an optional attribute of Goal Result.</mark>
+
+{:/comment}
+
+### Goal Result
+
+Definition:
+
+The Goal Result is a composite quantity consisting of several attributes.
+Relevant Upper Bound and Relevant Lower Bound are REQUIRED attributes,
+Conditional Throughput is a RECOMMENDED attribute.
+
+Discussion:
+
+Depending on SUT behavior, it is possible that one or both relevant bounds
+do not exist. A Goal Result instance where all required attribute values exist
+is informally called a Regular Goal Result instance;
+otherwise, it is an Irregular Goal Result instance.
+
+{::comment}
+ [Probably delete after last edits re irregular results.]
+
+ <mark>MKP2 [VP] TODO: Additional attributes should not be required by the Manager?
+ Explicitly mention that irregular goal result may support different attributes.
+ </mark>
+
+ <mark>MKP2 Implementations are free to define their own specific subtypes
+ of irregular Goal Results, but the test report MUST mark them clearly
+ as irregular according to this section.</mark>
+
+{:/comment}
+
+A typical Irregular Goal Result is when all trials at the Max Load
+have zero loss, as the Relevant Upper Bound does not exist in that case.
+
+It is RECOMMENDED that the test report displays such results appropriately,
+although the MLRsearch specification does not prescribe how.
+
+{::comment}
+ [Useful.]
+
+ <mark>MKP2 [VP] TODO: Also allways-fail. Link to bounds to avoid duplication.</mark>
+
+{:/comment}
+
+Anything else regarding Irregular Goal Results,
+including their role in stopping conditions of the search,
+is outside the scope of this document.
+
+### Search Result
+
+Definition:
+
+The Search Result is a single composite object
+that maps each Search Goal instance to a corresponding Goal Result instance.
+
+Discussion:
+
+Alternatively, the Search Result can be implemented as an ordered list
+of the Goal Result instances, matching the order of Search Goal instances.
+
+{::comment}
+ [Low priority, as there is no obvious harm.]
+
+ <mark>MKP1 [VP] TODO: Disallow any additional attributes?</mark>
+
+{:/comment}
+
+The Search Result (as a mapping)
+MUST map from all the Search Goal instances present in the Controller Input.
+
+{::comment}
+ [Not important.]
+
+ <mark>[VP] Postponed: API independence, modularity.</mark>
+
+{:/comment}
+
+{::comment}
+ [Not needed?]
+
+ <mark>MKP1 [VP] TODO: Short sentence on what to do on irregular goal result.</mark>
+
+{:/comment}
+
+### Controller Output
+
+Definition:
+
+The Controller Output is a composite quantity returned from the Controller
+to the Manager at the end of the search.
+The Search Result instance is its only REQUIRED attribute.
+
+Discussion:
+
+MLRsearch implementation MAY return additional data in the Controller Output.
+
+{::comment}
+ [Not needed?]
+
+ <mark>MKP1 [VP] TODO: Short sentence on what to do on irregular goal result.</mark>
+
+ <mark>MKP1 [VP] TODO: Irregular output, e.g. with "max search time exceeded" flag?</mark>
+
+{:/comment}
+
+## MLRsearch Architecture
+
+{::comment}
+ [Meta and irrelevant. Delete after verifying other text is good.]
+
+ <mark>MKP2 [VP] TODO: Review the folowing:
+ This section is about division into components,
+ so it fits this definition:
+ "The software architecture of a system represents the design decisions
+ related to overall system structure and behavior."
+ Saying "MLRsearch Design" does not make it clear if it is
+ Vratko designing the MLRsearch specification,
+ or some other person designing a new MLRsearch implementation using that spec.
+ </mark>
+
+{:/comment}
+
+MLRsearch architecture consists of three main system components:
+the Manager, the Controller, and the Measurer.
+
+The architecture also implies the presence of other components,
+such as the SUT and the Tester (as a sub-component of the Measurer).
+
+Protocols of communication between components are generally left unspecified.
+For example, when MLRsearch specification mentions "Controller calls Measurer",
+it is possible that the Controller notifies the Manager
+to call the Measurer indirectly instead. This way the Measurer implementations
+can be fully independent from the Controller implementations,
+e.g. programmed in different programming languages.
+
+### Measurer
+
+Definition:
+
+The Measurer is an abstract system component
+that when called with a [Trial Input] (#Trial-Input) instance,
+performs one [Trial] (#Trial),
+and returns a [Trial Output] (#Trial-Output) instance.
+
+Discussion:
+
+This definition assumes the Measurer is already initialized.
+In practice, there may be additional steps before the search,
+e.g. when the Manager configures the traffic profile
+(either on the Measurer or on its tester sub-component directly)
+and performs a warmup (if the tester requires one).
+
+It is the responsibility of the Measurer implementation to uphold
+any requirements and assumptions present in MLRsearch specification,
+e.g. trial forwarding ratio not being larger than one.
+
+Implementers have some freedom.
+For example [RFC2544] (section 10. Verifying received frames)
+gives some suggestions (but not requirements) related to
+duplicated or reordered frames.
+Implementations are RECOMMENDED to document their behavior
+related to such freedoms in as detailed a way as possible.
+
+It is RECOMMENDED to benchmark the test equipment first,
+e.g. connect sender and receiver directly (without any SUT in the path),
+find a load value that guarantees the offered load is not too far
+from the intended load, and use that value as the Max Load value.
+When testing the real SUT, it is RECOMMENDED to turn any big difference
+between the intended load and the offered load into increased Trial Loss Ratio.
+
+Neither of the two recommendations is made into a requirement,
+because it is not easy to tell when the difference is big enough,
+in a way that would be disentangled from other Measurer freedoms.
+
+### Controller
+
+Definition:
+
+The Controller is an abstract system component
+that when called with a Controller Input instance
+repeatedly computes Trial Input instance for the Measurer,
+obtains corresponding Trial Output instances,
+and eventually returns a Controller Output instance.
+
+Discussion:
+
+Informally, the Controller has considerable freedom in the selection of Trial Inputs,
+and implementations aim to achieve the Search Goals
+in the shortest expected time.
+
+The Controller's role in optimizing the overall search time
+distinguishes MLRsearch algorithms from simpler search procedures.
+
+Informally, each implementation can have different stopping conditions.
+Goal Width is only one example.
+In practice, implementation details do not matter,
+as long as Goal Results are regular.
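+
+As an architectural sketch only (method and type names are hypothetical,
+and the composite quantities are left as opaque placeholders),
+the Measurer and the Controller can be thought of
+as the following Python interfaces:
+
+~~~
+from typing import Protocol
+
+class Measurer(Protocol):
+    def measure(self, trial_input: "TrialInput") -> "TrialOutput":
+        """Perform one Trial and return its Trial Output."""
+        ...
+
+class Controller(Protocol):
+    def search(
+        self, controller_input: "ControllerInput", measurer: Measurer
+    ) -> "ControllerOutput":
+        """Repeatedly compute Trial Inputs, call the Measurer,
+        and finally return the Controller Output."""
+        ...
+~~~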
+
+### Manager
+
+Definition:
+
+The Manager is an abstract system component that is responsible for
+configuring other components, calling the Controller component once,
+and for creating the test report following the reporting format as
+defined in [RFC2544] (section 26. Benchmarking tests).
+
+Discussion:
+
+The Manager initializes the SUT, the Measurer (and the Tester if independent)
+with their intended configurations before calling the Controller.
+
+The Manager does not need to be able to tweak any Search Goal attributes,
+but it MUST report all applied attribute values even if not tweaked.
+
+{::comment}
+ [Not very important but also should be easy to add.]
+
+ <mark>MKP2 [VP] TODO: Is saying "RFC2544" indirectly reporting RFC2544 Goal values?</mark>
+
+{:/comment}
+
+In principle, there should be a "user" (human or CI)
+that "starts" or "calls" the Manager and receives the report.
+The Manager MAY support being called more than once this way.
+
+{::comment}
+ [Not important, unless anybody else asks.]
+
+ <mark>MKP2 The Manager may use the Measurer or other system components
+ to perform other tests, e.g. back-to-back frames,
+ as the Controller is only replacing the search from
+ [RFC2544] (section 26.1 Throughput).</mark>
+
+{:/comment}
+
+## Implementation Compliance
+
+Any networking measurement setup where system components can be logically delineated,
+and where there are components satisfying the requirements for the Measurer,
+the Controller and the Manager, is considered to be compliant with the MLRsearch design.
+
+These components can be seen as abstractions present in any testing procedure.
+For example, there can be a single component acting both
+as the Manager and the Controller, but as long as values of required attributes
+of Search Goals and Goal Results are visible in the test report,
+the Controller Input instance and output instance are implied.
+
+For example, any setup for conditionally (or unconditionally)
+compliant [RFC2544] throughput testing
+can be understood as an MLRsearch architecture,
+assuming there is enough data to reconstruct the Relevant Upper Bound.
+
+See [RFC2544 Goal] (#RFC2544-Goal) subsection for equivalent Search Goal.
+
+Any test procedure that can be understood as (one call to the Manager of)
+MLRsearch architecture is said to be compliant with MLRsearch specification.
+
+# Additional Considerations
+
+This section focuses on additional considerations, intuitions and motivations
+pertaining to MLRsearch methodology.
+
+{::comment}
+ [Meta, redundant.]
+
+ <mark>MKP2 [VP] TODO: Review the following:
+ If MLRsearch specification is a product design specification
+ for MLRsearch implementation, then this chapter talks about
+ my goals and early attempts at designing the MLRsearch specification.
+ </mark>
+
+{:/comment}
+
+## MLRsearch Versions
+
+The MLRsearch algorithm has been developed in a code-first approach:
+a Python library has been created, debugged, used in production,
+and published on PyPI before the first descriptions
+(even informal ones) were published.
+
+But the code (and hence the description) was evolving over time.
+Multiple versions of the library were used over the past several years,
+and later code was usually not compatible with earlier descriptions.
+
+The code in (some version of) MLRsearch library fully determines
+the search process (for a given set of configuration parameters),
+leaving no space for deviations.
+
+{::comment}
+ [Different type of external link, should be in 08.]
+
+ <mark>MKP2 mk3 note: any references to library
+ should have specific reference link.
+ We have FDio-CSIT-MLRsearch in informative: at the start. Link it.
+ </mark>
+
+{:/comment}
+
+{::comment}
+ [Lesson learned is important, but maybe does not need version history?]
+
+ <mark>MKP2 mk edit note: Suggest to remove crossed-out text, as it is
+ distracting, doesn't bring any value, and recalls multiple versions of
+ MLRsearch library, without any references. A much more appropriate
+ approach would be to provide a pointer to MLRsearch code versions in
+ FD.io that evolved over the years, as an example implementation. But I
+ would question the value of referring to old previous versions in this
+ document. It's okay for the blog, but not for IETF specification,
+ unless there are specific lessons learned that need to be highlighted
+ to support the specification.</mark>
+
+{:/comment}
+
+This historic meaning of MLRsearch, as a family
+of search algorithm implementations,
+leaves plenty of space for future improvements, at the cost
+of poor comparability of results across search algorithm implementations.
+
+{::comment}
+ [Reckeck after clarifying library/algorithm/implementation/specification mess.]
+
+ <mark>mk edit note: If the aim of this sentence is to state that there
+ could be possibly other approaches to address this problem space, then
+ I think we are already addressing it in the opening sections discussing
+ problems, and referring to ETSi TST.009 and opnfv work. If the aim is
+ to define "MLRsearch" as a completely new class of algorithms for
+ software network benchmarking, of which this spec is just one example,
+ then i have a problem with it. This specification is very prescriptive
+ in the main functional areas to address the problem identified, but
+ still leaving space for further exploration and innovation as noted
+ elsewhere in this document. It is not a new class of algorithms. It is
+ a newly defined methodology to amend RFC2544, to specifically address
+ identified problems.</mark>
+
+{:/comment}
+
+There are two competing needs.
+There is the need for standardization in areas critical to comparability.
+There is also the need to allow flexibility for implementations
+to innovate and improve in other areas.
+This document defines MLRsearch as a new specification
+in a manner that aims to fairly balance both needs.
+
+## Stopping Conditions
+
+[RFC2544] prescribes that after performing one trial at a specific offered load,
+the next offered load should be larger or smaller, based on frame loss.
+
+The usual implementation uses binary search.
+Here, a lossy trial becomes
+a new upper bound, and a lossless trial becomes a new lower bound.
+The span of values between the tightest lower bound
+and the tightest upper bound (including both values) forms an interval of possible results,
+and after each trial the width of that interval halves.
+
+Usually the binary search implementation tracks only the two tightest bounds,
+simply calling them bounds.
+But the old values still remain valid bounds,
+just not as tight as the new ones.
+
+After some number of trials, the tightest lower bound becomes the throughput.
+[RFC2544] does not specify when, if ever, the search should stop.
+
+MLRsearch introduces a concept of [Goal Width] (#Goal-Width).
+
+The search stops
+when the distance between the tightest upper bound and the tightest lower bound
+is smaller than a user-configured value, called Goal Width from now on.
+In other words, the interval width at the end of the search
+has to be no larger than the Goal Width.
+
+This Goal Width value therefore determines the precision of the result.
+Because the MLRsearch specification requires a particular
+structure of the result (see the [Trial Result] (#Trial-Result) section),
+the result itself contains enough information to determine its
+precision, so reporting the Goal Width value is not required.
+
+This allows MLRsearch implementations to use stopping conditions
+different from Goal Width.
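+
+As a small illustration of a width-based stopping condition
+(a sketch only, assuming Goal Width is expressed
+as a relative difference of loads;
+an absolute difference would drop the division):
+
+~~~
+def narrow_enough(lower_bound, upper_bound, goal_width):
+    """Return True if the interval between the relevant bounds
+    is no wider than the Goal Width (relative difference here)."""
+    return (upper_bound - lower_bound) / upper_bound <= goal_width
+
+# Example: bounds of 9.9 and 10.0 Mpps with a 2% Goal Width -> stop.
+print(narrow_enough(9.9e6, 10.0e6, 0.02))  # True
+~~~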
+
+## Load Classification
+
+MLRsearch keeps the basic logic of binary search (tracking tightest bounds,
+measuring at the middle), perhaps with minor technical differences.
+
+The MLRsearch algorithm chooses an intended load (as opposed to the offered load),
+the interval between bounds does not need to be split
+exactly into two equal halves,
+and the final reported structure specifies both bounds.
+
+The biggest difference is that to classify a load
+as an upper or lower bound, MLRsearch may need more than one trial
+(depending on configuration options) to be performed at the same intended load.
+
+In consequence, even if a load already has a few trial results,
+it may still be classified as undecided: neither a lower bound nor an upper bound.
+
+An explanation of the classification logic is given in the next section [Logic of Load Classification] (#Logic-of-Load-Classification),
+as it heavily relies on other subsections of this section.
+
+For repeatability and comparability reasons, it is important that
+given a set of trial results, all implementations of MLRsearch
+classify the load equivalently.
+
+## Loss Ratios
+
+Another difference between MLRsearch and [RFC2544] binary search is in the goals of the search.
+[RFC2544] has a single goal,
+based on classifying full-length trials as either lossless or lossy.
+
+MLRsearch, as the name suggests, can search for multiple goals,
+differing in their loss ratios.
+The precise definition of the Goal Loss Ratio will be given later.
+The [RFC2544] throughput goal then simply becomes a zero Goal Loss Ratio.
+Different goals also may have different Goal Widths.
+
+A set of trial results for one specific intended load value
+can classify the load as an upper bound for some goals, but a lower bound
+for some other goals, and undecided for the rest of the goals.
+
+Therefore, the load classification depends not only on trial results,
+but also on the goal.
+The overall search procedure becomes more complicated, when
+compared to binary search with a single goal,
+but most of the complications do not affect the final result,
+except for one phenomenon, loss inversion.
+
+## Loss Inversion
+
+In [RFC2544] throughput search using bisection, any load with a lossy trial
+becomes a hard upper bound, meaning every subsequent trial has a smaller
+intended load.
+
+But in MLRsearch, a load that is classified as an upper bound for one goal
+may still be a lower bound for another goal, and due to the other goal
+MLRsearch will probably perform trials at even higher loads.
+What to do when all such higher load trials happen to have zero loss?
+Does it mean the earlier upper bound was not real?
+Does it mean the later lossless trials are not considered a lower bound?
+Surely we do not want to have an upper bound at a load smaller than a lower bound.
+
+MLRsearch is conservative in these situations.
+The upper bound is considered real, and the lossless trials at higher loads
+are considered to be a coincidence, at least when computing the final result.
+
+This is formalized using new notions, the [Relevant Upper Bound] (#Relevant-Upper-Bound) and
+the [Relevant Lower Bound] (#Relevant-Lower-Bound).
+Load classification is still based just on the set of trial results
+at a given intended load (trials at other loads are ignored),
+making it possible to have a lower load classified as an upper bound,
+and a higher load classified as a lower bound (for the same goal).
+The Relevant Upper Bound (for a goal) is the smallest load classified
+as an upper bound.
+But the Relevant Lower Bound is not simply
+the largest among lower bounds.
+It is the largest load among loads
+that are lower bounds while also being smaller than the Relevant Upper Bound.
+
+With these definitions, the Relevant Lower Bound is always smaller
+than the Relevant Upper Bound (if both exist), and the two relevant bounds
+are used analogously as the two tightest bounds in the binary search.
+When they are less than the Goal Width apart,
+the relevant bounds are used in the output.
+
+One consequence is that every trial result can have an impact on the search result.
+That means if your SUT (or your traffic generator) needs a warmup,
+be sure to warm it up before starting the search.
+
+## Exceed Ratio
+
+The idea of performing multiple trials at the same load comes from
+a model where some trial results (those with high loss) are affected
+by infrequent effects, causing poor repeatability of [RFC2544] throughput results.
+See the discussion about noiseful and noiseless ends
+of the SUT performance spectrum in section [DUT in SUT] (#DUT-in-SUT).
+Stable results are closer to the noiseless end of the SUT performance spectrum,
+so MLRsearch may need to allow some frequency of high-loss trials
+to ignore the rare but big effects near the noiseful end.
+
+MLRsearch can do such trial result filtering, but it needs
+a configuration option to tell it how frequent the infrequent big loss can be.
+This option is called the exceed ratio.
+It tells MLRsearch what ratio of trials
+(more exactly what ratio of trial seconds) can have a [Trial Loss Ratio] (#Trial-Loss-Ratio)
+larger than the Goal Loss Ratio and still be classified as a lower bound.
+Zero exceed ratio means all trials have to have a Trial Loss Ratio
+equal to or smaller than the Goal Loss Ratio.
+
+For explainability reasons, the RECOMMENDED value for exceed ratio is 0.5,
+as it simplifies some later concepts by relating them to the concept of median.
+
+## Duration Sum
+
+When more than one trial is intended to classify a load,
+MLRsearch also needs something that controls the number of trials needed.
+Therefore, each goal also has an attribute called duration sum.
+
+The meaning of a [Goal Duration Sum] (#Goal-Duration-Sum) is that
+when a load has (full-length) trials
+whose trial durations when summed up give a value at least as big
+as the Goal Duration Sum value,
+the load is guaranteed to be classified either as an upper bound
+or a lower bound for that goal.
+
+Because the duration sum has a big impact
+on the overall search duration, and [RFC2544] prescribes
+wait intervals around trial traffic,
+the MLRsearch algorithm is allowed to sum durations that are different
+from the actual trial traffic durations.
+
+In the MLRsearch specification, the different duration values are called
+[Trial Effective Duration] (#Trial-Effective-Duration).
+
+## Short Trials
+
+MLRsearch requires each goal to specify its final trial duration.
+A trial whose intended trial duration is equal to (or longer than)
+the goal final trial duration is called a full-length trial.
+
+Section 24 of [RFC2544] already anticipates possible time savings
+when short trials (shorter than full-length trials) are used.
+Full-length trials are the opposite of short trials,
+so they may also be called long trials.
+
+Any MLRsearch implementation may include its own configuration options
+which control when and how MLRsearch chooses to use short trial durations.
+
+For explainability reasons, when an exceed ratio of 0.5 is used,
+it is recommended for the Goal Duration Sum to be an odd multiple
+of the full-length trial duration, so the Conditional Throughput becomes identical to
+a median of a particular set of trial forwarding rates.
+
+The presence of short trial results complicates the load classification logic.
+
+Full details are given later in section [Logic of Load Classification] (#Logic-of-Load-Classification).
+In a nutshell, results from short trials
+may cause a load to be classified as an upper bound.
+This may cause loss inversion, and thus lower the Relevant Lower Bound
+below what the classification would yield when considering full-length trials only.
+
+{::comment}
+ [I still think this is important, revisit after explanations re quantiles.]
+
+ <mark>For explainability reasons, it is RECOMMENDED users use such configurations
+ that guarantee all trials have the same length.</mark>
+
+ <mark>mk edit note: Using RFC2119 keyword here does not seem to be
+ appropriate. Moreover, I do not get the meaning nor the logic behind
+ this statement. It seems to say that in order for users to understand
+ the workings of MLRsearch, they should use simplified configuration,
+ otherwise they won't get it. Illogical it seems to me. Suggest to
+ remove it.</mark>
+
+{:/comment}
+
+{::comment}
+ [Important. Keeping compatibility slows search considerably.]
+
+ <mark>Alas, such configurations are usually not compliant with [RFC2544] requirements,
+ or not time-saving enough.</mark>
+
+ <mark>mk edit note: This statement does not make sense to me. Suggest to remove it.</mark>
+
+{:/comment}
+
+## Throughput
+
+{::comment}
+ [Important, we need better title.]
+
+ <mark>[VP] TODO: Was named Conditional Troughput, but spec chapter already has one.</mark>
+
+{:/comment}
+
+Because testing equipment takes the intended load as an input parameter
+for a trial measurement, any load search algorithm needs to deal
+with intended load values internally.
+
+But in the presence of goals with a non-zero loss ratio, the intended load
+usually does not match the user's intuition of what a throughput is.
+The forwarding rate (as defined in [RFC2285] section 3.6.1) is better,
+but it is not obvious how to generalize it
+for loads with multiple trial results and a non-zero
+[Goal Loss Ratio] (#Goal-Loss-Ratio).
+
+The best example is also the main motivation: hard limit performance.
+Even if the medium allows higher performance,
+the SUT interfaces may have their own additional limitations,
+e.g. a specific fps limit on the NIC (a very common occurrence).
+
+Ideally, those should be known and used when computing Max Load.
+But if Max Load is higher than what the interface can receive or transmit,
+there will be a "hard limit" observed in trial results.
+Imagine the hard limit is at 100 Mfps, Max Load is higher,
+and the goal loss ratio is 0.5%. If the DUT has no additional losses,
+a 0.5% loss ratio will be achieved at 100.5025 Mfps (the relevant lower bound).
+But it is not intuitive to report SUT performance as a value that is
+larger than the known hard limit.
+We need a generalization of RFC2544 throughput,
+different from just the relevant lower bound.
+
+MLRsearch defines one such generalization, called the Conditional Throughput.
+It is the trial forwarding rate from one of the trials
+performed at the load in question.
+Determining which trial exactly is defined in
+[MLRsearch Specification] (#MLRsearch-Specification),
+and in [Appendix B: Conditional Throughput] (#Appendix-B\:-Conditional-Throughput).
+
+In the hard limit example, the 100.5 Mfps load will still have
+only a 100.0 Mfps forwarding rate, nicely confirming the known limitation.
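+
+The arithmetic behind this hard limit example can be sketched as follows
+(values taken from the paragraphs above; an illustration only,
+not part of the MLRsearch specification):
+
+~~~
+hard_limit = 100.0e6     # SUT cannot forward more than 100 Mfps.
+goal_loss_ratio = 0.005  # 0.5%
+
+# Smallest load at which the loss ratio still meets the goal,
+# i.e. the relevant lower bound:
+relevant_lower_bound = hard_limit / (1.0 - goal_loss_ratio)
+print(relevant_lower_bound / 1e6)  # ~100.5025 Mfps
+
+# The trial forwarding rate at that load stays at the hard limit,
+# so the Conditional Throughput confirms the known limitation:
+conditional_throughput = relevant_lower_bound * (1.0 - goal_loss_ratio)
+print(conditional_throughput / 1e6)  # 100.0 Mfps
+~~~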
+
+Conditional Throughput is partially related to load classification.
+If a load is classified as a lower bound for a goal,
+the Conditional Throughput can be calculated from trial results,
+and is guaranteed to show a loss ratio
+no larger than the Goal Loss Ratio.
+
+{::comment}
+ [Revisit after other edits, may be addressed elsewhere.]
+
+ <mark>While the Conditional Throughput gives more intuitive-looking
+ values than the Relevant Lower Bound (for non-zero Goal Loss Ratio
+ values), the actual definition is more complicated than the definition
+ of the Relevant Lower Bound.</mark>
+
+ <mark>mk edit note: Looking at this again, and per improved text, I
+ don't think it is that complicated. (BTW saying it is more complicated
+ and not addressing it, and leaving it open ended is not
+ good.) "Conditional throughput" intuitively is really throughput under
+ certain conditions, these being offered load determined by Relevant
+ Lower Bound and actual loss. For comparability, and taking into account
+ multiple trial samples, per MLRsearch definition, this is
+ mathematically expressed as `conditional_throughput = intended_load *
+ (1.0 - quantile_loss_ratio)`.</mark>
+
+ <mark>DONE VP to MK: Hmm. Frequently, Conditional Throughput comes
+ from the worst among low-loss full-length trials.
+ But if two disparate goals are interested at the same load,
+ things get complicated (does not happen in CSIT production,
+ but I found few bugs when testing in simulator).
+ Computation in load classification is also not trivial,
+ but at least it only needs two "duration sum" values,
+ no need to sort all trial results.</mark>
+
+ <mark>MKP2 [VP] TODO: Still not sure what to do with this subsection.
+ Possibly a bigger rewrite once VP and MK agree on what is (or is not)
+ complicated. :)</mark>
+
+{:/comment}
+
+{::comment}
+ [Important only for "design principles" chapter we may never have.]
+
+ <mark>In the future, other intuitive values may become popular,
+ but they are unlikely to supersede the definition of the Relevant Lower Bound
+ as the most fitting value for comparability purposes,
+ therefore the Relevant Lower Bound remains a required attribute
+ of the Goal Result structure, while the Conditional Throughput is only optional.</mark>
+
+ <mark>mk edit note: This paragraph adds to the confusion. I would remove
+ this paragraph, as with the new text above it doesn't seem to add any
+ value.</mark>
+
+ <mark>[VP] TODO: This is an example of MLRsearch design principles.</mark>
+
+{:/comment}
+
+{::comment}
+ [Useful.]
+
+ <mark>[VP] TODO: Mention somewhere that trending is a specific case
+ of repeatability/comparability.</mark>
+
+{:/comment}
+
+Note that when comparing the best case (all trials with zero loss) and the worst case
+(all trials with loss just below the Goal Loss Ratio), the same Relevant Lower Bound value
+may result in Conditional Throughput values differing by up to the Goal Loss Ratio.
+
+Therefore, it is rarely needed to set the Goal Width (if expressed
+as the relative difference of loads) below the Goal Loss Ratio.
+In other words, setting the Goal Width below the Goal Loss Ratio
+may cause the Conditional Throughput for a goal with a larger Goal Loss Ratio
+to become smaller than the Conditional Throughput for a goal
+with a smaller Goal Loss Ratio,
+which is counter-intuitive, considering they come from the same search.
+Thus, it is RECOMMENDED to set the Goal Width to a value no smaller
+than the Goal Loss Ratio.
+
+Overall, the Conditional Throughput behaves well for comparability purposes.
+
+## Search Time
+
+MLRsearch was primarily developed to reduce the time
+required to determine a throughput, either the [RFC2544] compliant one,
+or some generalization thereof.
+The art of achieving short search times
+is mainly in the smart selection of intended loads (and intended durations)
+for the next trial to perform.
+
+While there is an indirect impact of the load selection on the reported values,
+in practice such impact tends to be small,
+even for SUTs with quite a broad performance spectrum.
+
+A typical example of two approaches to load selection leading to different
+Relevant Lower Bounds is when the interval is split in a very uneven way.
+Any implementation choosing loads very close to the current Relevant Lower Bound
+is quite likely to eventually stumble upon a trial result
+with poor performance (due to SUT noise).
+For an implementation choosing loads very close
+to the current Relevant Upper Bound, this is unlikely,
+as it mostly examines loads that exhibit performance
+close to the noiseless end of the SUT performance spectrum.
+
+However, as even splits optimize search duration at a given precision,
+MLRsearch implementations that prioritize minimizing search time
+are unlikely to suffer from any such bias.
+
+Therefore, this document remains quite vague on load selection
+and other optimization details, and configuration attributes related to them.
+Assuming users prefer libraries that achieve short overall search time,
+the definition of the Relevant Lower Bound
+should be strict enough to ensure result repeatability
+and comparability between different implementations,
+while not restricting future implementations much.
+
+{::comment}
+ [Important for BMWG. Configurability is bad for comparability.]
+
+ <mark>MKP2 Sadly, different implementations may exhibit their sweet spot of</mark>
+ <mark>the best repeatability for a given search duration</mark>
+ <mark>at different goals attribute values, especially concerning</mark>
+ <mark>any optional goal attributes such as the initial trial duration.</mark>
+ <mark>Thus, this document does not comment much on which configurations</mark>
+ <mark>are good for comparability between different implementations.</mark>
+ <mark>For comparability between different SUTs using the same implementation,</mark>
+ <mark>refer to configurations recommended by that particular implementation.</mark>
+
+ <mark>MKP2 mk edit note: Isn't this going off on a tangent, hypothesising and
+ second guessing about different possible implementations. What is the
+ value of this content to this document? Suggest to remove it.</mark>
+
+{:/comment}
+
+## [RFC2544] Compliance
+
+Some Search Goal instances lead to results compliant with RFC2544.
+See [RFC2544 Goal] (#RFC2544-Goal) for more details
+regarding both conditional and unconditional compliance.
+
+The presence of other Search Goals does not affect the compliance
+of this Goal Result.
+The Relevant Lower Bound and the Conditional Throughput are in this case
+equal to each other, and the value is the [RFC2544] throughput.
+
+# Logic of Load Classification
+
+## Introductory Remarks
+
+This chapter continues with explanations,
+but this time more precise definitions are needed
+for readers to follow the explanations.
+
+Descriptions in this section are wordy and implementers should read
+[MLRsearch Specification] (#MLRsearch-Specification) section
+and Appendices for more concise definitions.
+
+The two areas of focus here are load classification
+and the Conditional Throughput.
+
+To start with, the [Performance Spectrum] (#Performance-Spectrum)
+subsection contains definitions needed to gain insight
+into what the Conditional Throughput means.
+The remaining subsections discuss load classification.
+
+For load classification, it is useful to define **good trials** and **bad trials**:
+
+- **Bad trial**: Trial is called bad (according to a goal)
+ if its [Trial Loss Ratio] (#Trial-Loss-Ratio)
+ is larger than the [Goal Loss Ratio] (#Goal-Loss-Ratio).
+
+- **Good trial**: Trial that is not bad is called good.
+
+## Performance Spectrum
+
+### Description
+
+There are several equivalent ways to explain the Conditional Throughput
+computation. One of the ways relies on the performance spectrum.
+
+Take an intended load value, a trial duration value, and a finite set
+of trial results, with all trials measured at that load value and duration value.
+
+The performance spectrum is the function that maps
+any non-negative real number to the sum of trial durations of all trials
+in the set whose trial forwarding rate equals that number;
+the number maps to zero if no trial has that particular forwarding rate.
+
+A related function, defined if there is at least one trial in the set,
+is the performance spectrum divided by the sum of the durations
+of all trials in the set.
+
+That function is called the performance probability function, as it satisfies
+all the requirements for probability mass function
+of a discrete probability distribution,
+the one-dimensional random variable being the trial forwarding rate.
+
+These functions are related to the SUT performance spectrum,
+as sampled by the trials in the set.
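+
+A minimal Python sketch of these two functions follows
+(illustrative only; here each trial is represented
+as a (forwarding_rate, duration) pair):
+
+~~~
+from collections import defaultdict
+
+def performance_spectrum(trials):
+    """Map each observed trial forwarding rate to the sum of durations
+    of the trials that produced it; unobserved rates map to zero."""
+    spectrum = defaultdict(float)
+    for forwarding_rate, duration in trials:
+        spectrum[forwarding_rate] += duration
+    return dict(spectrum)
+
+def performance_probability(trials):
+    """Normalize the spectrum by the total duration, giving a
+    probability mass function over trial forwarding rates."""
+    spectrum = performance_spectrum(trials)
+    total = sum(spectrum.values())
+    return {rate: dur / total for rate, dur in spectrum.items()}
+
+# Example: two 1 s trials at 10 Mfps and one 1 s trial at 8 Mfps.
+print(performance_probability([(10e6, 1.0), (10e6, 1.0), (8e6, 1.0)]))
+~~~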
+
+{::comment}
+ [Middle of rewrite?]
+
+ <mark>MKP1 The performance spectrum is the function that maps
+ any non-negative real number into a sum of trial durations among all trials
+ in the set, that has that number, as their trial forwarding rate,
+ e.g. map to zero if no trial has that particular forwarding rate.</mark>
+
+ <mark>MKP1 A related function, defined if there is at least one trial in the set,
+ is the performance spectrum divided by the sum of the durations
+ of all trials in the set.</mark>
+
+ <mark>MKP1 That function is called the performance probability function, as it satisfies
+ all the requirements for probability mass function
+ of a discrete probability distribution,
+ the one-dimensional random variable being the trial forwarding rate.</mark>
+
+ <mark>MKP1 These functions are related to the SUT performance spectrum,
+ as sampled by the trials in the set.</mark>
+
+ <mark>MKP1 [VP] TODO: Introduce quantiles properly by incorporating the below.</mark>
+
+ <mark>MKP1 [VP] TODO: "q-quantile" is plainly wrong. I meant the "p" in "p-quantile".
+
+ - wikipedia: The 100-quantiles are called percentiles
+ - also wiki: If, instead of using integers k and q, the "p-quantile" is based on a real number p with 0 < p < 1 then...
+ - https://en.wikipedia.org/wiki/Quantile_function
+ - exceed ratio is an input to a quantile function: percentage?
+ </mark>
+
+ <mark>MKP1 mk2 TODO for VP: Above is not making it clearer at all. Can't we really not explain the spectrum and exceed ratio with just percentiles and quantiles?</mark>
+
+ As for any other probability function, we can talk about percentiles
+ of the performance probability function, including the median.
+ The Conditional Throughput will be one such quantile value
+ for a specifically chosen set of trials.
+
+ <mark>MKP2 As for any other probability function, we can talk about percentiles
+ of the performance probability function, including the median.
+ The Conditional Throughput will be one such quantile value
+ for a specifically chosen set of trials.</mark>
+
+{:/comment}
+
+Take a set of all full-length trials performed at the Relevant Lower Bound,
+sorted by decreasing trial forwarding rate.
+The sum of the durations of those trials
+may be less than the Goal Duration Sum, or not.
+If it is less, add an imaginary trial result with zero trial forwarding rate,
+such that the new sum of durations is equal to the Goal Duration Sum.
+This is the set of trials to use.
+
+If the quantile falls exactly on the boundary between two trials,
+
+{::comment}
+ [Clarity.]
+
+ <mark>mk edit note: What does it mean "quantile touches two trials"?
+ Do you mean two trials are within specific quantile or percentile?</mark>
+
+{:/comment}
+
+the larger trial forwarding rate (from the trial result sorted earlier) is used.
+
+{::comment}
+ [Oh, unspecified exceed ratio?]
+
+ <mark>the larger trial forwarding rate (from the trial result sorted earlier) is used.</mark>
+
+ <mark>mk edit note: Why is that? Is it because you silently assumed that
+ quantile here is median or 50th percentile?</mark>
+
+{:/comment}
+
+The resulting quantity is the Conditional Throughput of the goal in question.
+
+{::comment}
+ [Motivation has lead to code. Now code is definition, this should be equivalent.]
+
+ <mark>The resulting quantity is the Conditional Throughput of the goal in question.</mark>
+
+ <mark>mk edit note: Is this is supposed to be another definition of
+ Conditional Throughput? If so, how does this relate to Performance
+ Spectrum? I suggest to either remove these unclear paragraphs above and
+ rely on examples below that are clear, or rework above so it fits the
+ flow. Cause right now it's confusion. Even more so, that
+ [Conditional Throughput] (#Conditional-Throughput) has been already
+ defined elsewhere in the document.</mark>
+
+{:/comment}
+
+A set of examples follows.
+
+### First Example
+
+- [Goal Exceed Ratio] (#Goal-Exceed-Ratio) = 0 and [Goal Duration Sum] (#Goal-Duration-Sum) has been reached.
+- Conditional Throughput is the smallest trial forwarding rate among the trials.
+
+### Second Example
+
+- Goal Exceed Ratio = 0 and Goal Duration Sum has not been reached yet.
+- Due to the missing duration sum, the worst case may still happen, so the Conditional Throughput is zero.
+- This is not reported to the user, as this load cannot become the Relevant Lower Bound yet.
+
+### Third Example
+
+- Goal Exceed Ratio = 50% and Goal Duration Sum is two seconds.
+- One trial is present with the duration of one second and zero loss.
+- The imaginary trial is added with the duration of one second and zero trial forwarding rate.
+- The median would touch both trials, so the Conditional Throughput is the trial forwarding rate of the one non-imaginary trial.
+- As that had zero loss, the value is equal to the offered load.
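+
+The following Python sketch (illustrative only;
+Appendix B gives the authoritative definition)
+computes the Conditional Throughput as described above,
+and reproduces the First and Third Examples:
+
+~~~
+def conditional_throughput(trials, goal_duration_sum, goal_exceed_ratio):
+    """Compute the Conditional Throughput from full-length trials
+    measured at the Relevant Lower Bound.
+
+    trials: list of (forwarding_rate, duration) pairs."""
+    trials = sorted(trials, key=lambda t: t[0], reverse=True)
+    measured_sum = sum(duration for _, duration in trials)
+    whole_sum = max(measured_sum, goal_duration_sum)
+    if measured_sum < whole_sum:
+        # Imaginary trial with zero forwarding rate pads the sum.
+        trials.append((0.0, whole_sum - measured_sum))
+    threshold = (1.0 - goal_exceed_ratio) * whole_sum
+    cumulative = 0.0
+    for forwarding_rate, duration in trials:
+        cumulative += duration
+        # On an exact boundary (quantile touching two trials),
+        # the earlier (larger) forwarding rate is returned.
+        if cumulative >= threshold:
+            return forwarding_rate
+    return 0.0
+
+# First Example: exceed ratio 0, duration sum reached -> smallest rate.
+print(conditional_throughput([(10e6, 60.0), (9e6, 60.0)], 120.0, 0.0))
+# Third Example: 1 s zero-loss trial, 2 s sum, 50% exceed ratio.
+print(conditional_throughput([(10e6, 1.0)], 2.0, 0.5))
+~~~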
+
+{::comment}
+ [Middle of rewrite?]
+
+ <mark>MKP2 mk edit note: how is the median "touching" both trials?
+ Isn't median of even set of data samples
+ the average of the two middle data points,
+ in this case the non-imaginary trial and the imaginary one?</mark>
+
+ <mark>MKP2 Note that Appendix B does not take into account short trial results.</mark>
+
+ <mark>MKP2 mk edit note: Whis is this relevant here? Appendix B has not been mentioned in this section.</mark>
+
+{:/comment}
+
+### Summary
+
+While the Conditional Throughput is a generalization of the trial forwarding rate,
+its definition is not an obvious one.
+
+Other than the trial forwarding rate, the other source of intuition
+is the quantile in general, and the median in the recommended case.
+
+{::comment}
+ [Next version of MLRsearch library may invent new quantity that is more stable.]
+
+ <mark>In future, different quantities may prove more useful,
+ especially when applying to specific problems,
+ but currently the Conditional Throughput is the recommended compromise,
+ especially for repeatability and comparability reasons.</mark>
+
+ <mark>MKP2 mk edit note: This is future looking and hand wavy without
+ specifics. What are the "specific problems" that are referred here?
+ Networking, else?Some specific behaviours, if so, what sort? If
+ something is classified as future work, it needs to be better defined.
+ The same applies to any out of scope statements.</mark>
+
+{:/comment}
+
+## Trials with Single Duration
+
+{::comment}
+ [Clarity.]
+
+ <mark>MKP2 mk edit note: Need to improve explanations in this subsection.</mark>
+
+{:/comment}
+
+When goal attributes are chosen in such a way that every trial has the same
+intended duration, the load classification is simpler.
+
+The following description follows the motivation
+of Goal Loss Ratio, Goal Exceed Ratio, and Goal Duration Sum.
+
+If the sum of the durations of all trials (at the given load)
+is less than the Goal Duration Sum, imagine two scenarios:
+
+- **best case scenario**: all subsequent trials having zero loss, and
+- **worst case scenario**: all subsequent trials having 100% loss.
+
+Here we assume there are as many subsequent trials as needed
+to make the sum of all trials equal to the Goal Duration Sum.
+
+The exceed ratio is defined using sums of durations
+(the number of trials does not matter), so it does not matter whether
+the "subsequent trials" consist of an integer number of full-length trials.
+
+In either of the two scenarios, best case and worst case, we can compute the load exceed ratio,
+as the duration sum of bad trials divided by the duration sum of all trials,
+in both cases including the assumed trials.
+
+If even the best case scenario results in a load exceed ratio larger
+than the Goal Exceed Ratio, the load is an upper bound.
+
+If even the worst case scenario results in a load exceed ratio not larger
+than the Goal Exceed Ratio, the load is a lower bound.
+
+{::comment}
+ [Middle of rewrite?]
+
+ <mark>Even if</mark>, in the best case scenario, the load exceed ratio is larger
+ than the Goal Exceed Ratio, the load is an upper bound.
+
+ <mark>MKP2 Even if</mark>, in the worst case scenario, the load exceed ratio is not larger
+ than the Goal Exceed Ratio, the load is a lower bound.
+
+ <mark>MKP2 mk edit note: I am confused by "Even if" prefixing
+ each of the above statements. And even more so by your version
+ with "If even".</mark>
+
+ <mark>mk edit note: I do not get how this statements are true, as they
+ are counter-intuitive. For the best case scenario, if load exceed ratio
+ is larger than the goal exceed ratio, I expect the load to be lower
+ bound. Need more examples.</mark>
+
+{:/comment}
+
+More specifically (see also the sketch after this list):
+
+- Take all trials measured at a given load.
+- The sum of the durations of all bad full-length trials is called the bad sum.
+- The sum of the durations of all good full-length trials is called the good sum.
+- The sum of the bad sum and the good sum is called the measured sum.
+- The larger of the measured sum and the Goal Duration Sum is called the whole sum.
+- The whole sum minus the measured sum is called the missing sum.
+- The optimistic exceed ratio is the bad sum divided by the whole sum
+  (this assumes all of the missing sum would come from good trials).
+- The pessimistic exceed ratio is the bad sum plus the missing sum, divided by the whole sum
+  (this assumes all of the missing sum would come from bad trials).
+- If the optimistic exceed ratio is larger than the Goal Exceed Ratio, the load is classified as an upper bound.
+- If the pessimistic exceed ratio is not larger than the Goal Exceed Ratio, the load is classified as a lower bound.
+- Else, the load is classified as undecided.
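+
+The following non-normative sketch in Python restates the list above;
+the function and variable names are illustrative only, and duration sums
+are assumed to be in seconds:
+
+~~~ python
+def classify(good_sum, bad_sum, goal_duration_sum, goal_exceed_ratio):
+    """Classify one load, given duration sums of full-length trials."""
+    measured_sum = good_sum + bad_sum
+    whole_sum = max(measured_sum, goal_duration_sum)
+    missing_sum = whole_sum - measured_sum
+    optimistic_exceed_ratio = bad_sum / whole_sum
+    pessimistic_exceed_ratio = (bad_sum + missing_sum) / whole_sum
+    if optimistic_exceed_ratio > goal_exceed_ratio:
+        return "upper bound"
+    if pessimistic_exceed_ratio <= goal_exceed_ratio:
+        return "lower bound"
+    return "undecided"
+
+# Example: 9 seconds of good trials, 2 seconds of bad trials,
+# Goal Duration Sum of 20 seconds, Goal Exceed Ratio of 0.1.
+print(classify(9.0, 2.0, 20.0, 0.1))  # "undecided"
+~~~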
+
+The definition of pessimistic exceed ratio is compatible with the logic in
+the Conditional Throughput computation, so in this single trial duration case,
+a load is a lower bound if and only if the Conditional Throughput
+loss ratio is not larger than the Goal Loss Ratio.
+
+{::comment}
+ [Useful (depends on the whole chapter).]
+
+ <mark>MKP2 mk edit note: I do not get the defintion of optimistic and
+ pessmistic exceed ratios. Please define or describe what they
+ are.</mark>
+
+{:/comment}
+
+If the Conditional Throughput loss ratio is larger than the Goal Loss Ratio,
+the load is either an upper bound or undecided.
+
+## Trials with Short Duration
+
+### Scenarios
+
+Trials with intended duration smaller than the goal final trial duration
+are called short trials.
+The motivation for load classification logic in the presence of short trials
+is based around a counter-factual case: What would the trial result be
+if a short trial has been measured as a full-length trial instead?
+
+There are three main scenarios where human intuition guides
+the intended behavior of load classification.
+
+#### False Good Scenario
+
+The user had their reason for not configuring a shorter goal
+final trial duration.
+Perhaps the SUT has buffers that may get full at longer
+trial durations.
+Perhaps the SUT shows periodic decreases in performance
+that the user does not want to be treated as noise.
+
+In any case, many good short trials may become bad full-length trials
+in the counter-factual case.
+
+In extreme cases, there are plenty of good short trials and no bad short trials.
+
+In this scenario, we want the load classification NOT to classify the load
+as a lower bound, despite the abundance of good short trials,
+because those good short trials might still turn into bad full-length trials.
+
+{::comment}
+ [I agree.]
+
+ <mark>MKP2 mk edit note: It may be worth adding why that is. i.e. because
+ there is a risk that at longer trial this could turn into a bad
+ trial.</mark>
+
+{:/comment}
+
+Effectively, we want the good short trials to be ignored, so they
+do not contribute to comparisons with the Goal Duration Sum.
+
+#### True Bad Scenario
+
+When there is a frame loss in a short trial,
+the counter-factual full-length trial is expected to lose at least as many
+frames.
+
+In practice, bad short trials rarely turn into
+good full-length trials.
+
+In extreme cases, there are no good short trials.
+
+In this scenario, we want the load classification
+to classify the load as an upper bound just based on the abundance
+of short bad trials.
+
+Effectively, we want the bad short trials
+to contribute to comparisons with the Goal Duration Sum,
+so the load can be classified sooner.
+
+#### Balanced Scenario
+
+Some SUTs are quite indifferent to trial duration.
+The performance probability function constructed from short trial results
+is likely to be similar to the one constructed
+from full-length trial results (perhaps with larger dispersion,
+but without a big impact on the median quantiles overall).
+
+{::comment}
+ [Recheck after edits earlier.]
+
+ <mark>MKP1 mk edit note: "Performance probability function" is this function
+ defined anywhere? Mention in [Performance Spectrum] (#Performance Spectrum)
+ is not a complete definition.</mark>
+
+{:/comment}
+
+For a moderate Goal Exceed Ratio value, this may mean there are both
+good short trials and bad short trials.
+
+This scenario is there just to invalidate a simple heuristic
+of always ignoring good short trials and never ignoring bad short trials,
+as that simple heuristic would be too biased.
+
+Yes, the short bad trials
+are likely to turn into full-length bad trials in the counter-factual case,
+but there is no information on what the good short trials would turn into.
+
+The only way to decide safely is to do more trials at full length,
+the same as in False Good Scenario.
+
+### Classification Logic
+
+MLRsearch picks a particular logic for load classification
+in the presence of short trials, but it is still RECOMMENDED
+to use configurations that imply no short trials,
+so the possible inefficiencies in that logic
+do not affect the result, and the result has better explainability.
+
+With that said, the logic differs from the single trial duration case
+only in a different definition of the bad sum.
+The good sum is still the sum across all good full-length trials.
+
+A few more notions are needed for defining the new bad sum (see also the sketch after this list):
+
+- The sum of durations of all bad full-length trials is called the bad long sum.
+- The sum of durations of all bad short trials is called the bad short sum.
+- The sum of durations of all good short trials is called the good short sum.
+- One minus the Goal Exceed Ratio is called the subceed ratio.
+- The Goal Exceed Ratio divided by the subceed ratio is called the exceed coefficient.
+- The good short sum multiplied by the exceed coefficient is called the balancing sum.
+- The bad short sum minus the balancing sum is called the excess sum.
+- If the excess sum is negative, the bad sum is equal to the bad long sum.
+- Otherwise, the bad sum is equal to the bad long sum plus the excess sum.
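+
+A non-normative sketch of this computation in Python follows; the function
+and variable names are illustrative, and duration sums are assumed
+to be in seconds:
+
+~~~ python
+def bad_sum_with_short_trials(
+        bad_long_sum, bad_short_sum, good_short_sum, goal_exceed_ratio):
+    """Compute the bad sum used when short trial results are present."""
+    subceed_ratio = 1.0 - goal_exceed_ratio
+    exceed_coefficient = goal_exceed_ratio / subceed_ratio
+    balancing_sum = good_short_sum * exceed_coefficient
+    excess_sum = bad_short_sum - balancing_sum
+    if excess_sum < 0.0:
+        return bad_long_sum
+    return bad_long_sum + excess_sum
+
+# With Goal Exceed Ratio 0.5, the exceed coefficient is 1.0, so each
+# second of good short trials cancels out one second of bad short trials.
+print(bad_sum_with_short_trials(2.0, 3.0, 1.0, 0.5))  # 4.0
+~~~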
+
+Here is how the new definition of the bad sum fares in the three scenarios,
+where the load is close to what the relevant bounds would be
+if only full-length trials were used for the search.
+
+#### False Good Scenario
+
+If the trial duration is too short, we expect to see a higher frequency
+of good short trials.
+This could lead to a negative excess sum,
+which has no impact, so the load classification is decided just by
+full-length trials.
+Thus, MLRsearch using too-short trials has no detrimental effect
+on result comparability in this scenario.
+But using short trials also does not help with the overall search duration,
+and probably makes it worse.
+
+#### True Bad Scenario
+
+Settings with a small exceed ratio
+have a small exceed coefficient, so the impact of the good short sum is small,
+and the bad short sum is almost wholly converted into the excess sum.
+As a result, bad short trials have almost as big an impact as full-length bad trials.
+The same conclusion applies to moderate exceed ratio values
+when the good short sum is small.
+Thus, short trials can cause a load to get classified as an upper bound earlier,
+bringing time savings (while not affecting comparability).
+
+#### Balanced Scenario
+
+Here, the excess sum is small in absolute value, as the balancing sum
+is expected to be similar to the bad short sum.
+Once again, full-length trials are needed for final load classification;
+but usage of short trials probably means MLRsearch needed
+a shorter overall search time before selecting this load for measurement,
+thus bringing time savings (while not affecting comparability).
+
+Note that in the presence of short trial results,
+the compatibility between the load classification
+and the Conditional Throughput is only partial.
+The Conditional Throughput still comes from a good full-length trial,
+but a load higher than the Relevant Lower Bound may also compute to a good value.
+
+## Trials with Longer Duration
+
+If there are trial results with an intended duration larger
+than the goal final trial duration, the precise definitions
+in Appendix A and Appendix B treat them in exactly the same way
+as trials with duration equal to the goal final trial duration.
+
+But in configurations with moderate (including 0.5) or small
+Goal Exceed Ratio and small Goal Loss Ratio (especially zero),
+bad trials with longer than goal durations may bias the search
+towards the lower load values, as the noiseful end of the spectrum
+gets a larger probability of causing the loss within the longer trials.
+
+{::comment}
+ [Use single goal when testing externaly, deviate freely in internal tests.]
+
+ <mark>For some users, this is an acceptable price</mark>
+ <mark>for increased configuration flexibility</mark>
+ <mark>(perhaps saving time for the related goals),</mark>
+ <mark>so implementations SHOULD allow such configurations.</mark>
+ <mark>Still, users are encouraged to avoid such configurations</mark>
+ <mark>by making all goals use the same final trial duration,</mark>
+ <mark>so their results remain comparable across implementations.</mark>
+
+ <mark>MKP2 mk edit note: This paragraph has no value in my view.
+ Statements like "For some users, this is an acceptable price
+ for increased configuration flexibility" do not make sense.
+ Configuration flexibility for flexibility sake is not a valid argument
+ in the specification that aims at standardising benchmarking methodologies.
+ If one wants to test with longer durations,
+ then one should configure these as Goal Final Trial Duration.
+ Simple, no? Or am I reading this point wrong?</mark>
+
+{:/comment}
+
+{::comment}
+ [MKP4 Out of scope here, subject for future work]
+
+ # Current practices?
+
+ <mark>MKP2 [VP] TODO: Even if not mentioned in spec (not even recommended),
+ some tricks from CSIT code may be worth mentioning? Not sure.</mark>
+
+ <mark>MKP2 [VP] TODO: Tricks with big impact on search time
+ can be mentioned so that Addressed Problems : Long Test Duration
+ has something specific to refer to.</mark>
+
+ <mark>MKP2 [VP] TODO: It is important to mention trick that have impact
+ on repeatability and comparability.</mark>
+
+ <mark>MKP2 [VP] TODO: CSIT computes a discrete "grid" of load values to use.</mark>
+
+ <mark>MKP2 [VP] TODO:
+ If all Goal Widths are aligned, there is one common coarse grid.
+ In that case, NDR (and even PDR conditional throughput
+ for tests with zer-or-big losses) values are identical in trending,
+ hiding the real performance variance, and causing fake anomaly
+ when the performance shifts just one gridpoint.
+ </mark>
+
+ <mark>MKP2 [VP] TODO: Conversely, when Goal Width do not match well,
+ CSIT needs to compute a fine-grained grid to match them all.
+ In this case, similar performances can be "rounded differently",
+ mostly based on specific loss that happened at Max Load,
+ where SUT may be less stable than around PDR.
+ This way trending sees higher variance (still within corresponding Goal Width),
+ but at least there are no fake anomalies.
+ </mark>
+
+ <mark>MKP2 [VP] TODO: In general, do not trust stdev if not larged than width.</mark>
+
+ <mark>MKP2 [VP] TODO: De we have a chapter section fosucing on design principles?
+ - Make Controller API independent from Measurer API.
+ - The "allowed if makes worse" principle:
+ - RFC1242 specmanship happens when testing own DUTs.
+ - Shortening trial wait times only risks making goal results lower.
+ - So it is fine to save time aggressively when testing own DUTs.
+ </mark>
+
+{:/comment}
+
+
+{::comment}
+ [Will be nice if made substantial.]
+
+ # Addressed Problems
+
+ <mark>MKP1 all of this section requires updating based on the updated content.
+ And it is for information only anyways. In fact not sure it's needed.
+ Maybe in appendix for posterity.</mark>
+
+ Now when MLRsearch is clearly specified and explained,
+ it is possible to summarize how does MLRsearch specification help with problems.
+
+ Here, "multiple trials" is a shorthand for having the goal final trial duration
+ significantly smaller than the Goal Duration Sum.
+ This results in MLRsearch performing multiple trials at the same load,
+ which may not be the case with other configurations.
+
+ ## Long Test Duration
+
+ As shortening the overall search duration is the main motivation
+ of MLRsearch library development, the library implements
+ multiple improvements on this front, both big and small.
+
+ Most of implementation details are not constrained by MLRsearch specification,
+ so that future implementations may keep shortening the search duration even more.
+
+ One exception is the impact of short trial results on the Relevant Lower Bound.
+ While motivated by human intuition, the logic is not straightforward.
+ In practice, configurations with only one common trial duration value
+ are capable of achieving good overal search time and result repeatability
+ without the need to consider short trials.
+
+ ### Impact of goal attribute values
+
+ From the required goal attributes, the Goal Duration Sum
+ remains the best way to get even shorter searches.
+
+ Usage of multiple trials can also save time,
+ depending on wait times around trial traffic.
+
+ The farther the Goal Exceed Ratio is from 0.5 (towards zero or one),
+ the less predictable the overal search duration becomes in practice.
+
+ Width parameter does not change search duration much in practice
+ (compared to other, mainly optional goal attributes).
+
+ ## DUT in SUT
+
+ In practice, using multiple trials and moderate exceed ratios
+ often improves result repeatability without increasing the overall search time,
+ depending on the specific SUT and DUT characteristics.
+ Benefits for separating SUT noise are less clear though,
+ as it is not easy to distinguish SUT noise from DUT instability in general.
+
+ Conditional Throughput has an intuitive meaning when described
+ using the performance spectrum, so this is an improvement
+ over existing simple (less configurable) search procedures.
+
+ Multiple trials can save time also when the noisy end of
+ the preformance spectrum needs to be examined, e.g. for [RFC9004].
+
+ Under some circumstances, testing the same DUT and SUT setup with different
+ DUT configurations can give some hints on what part of noise is SUT noise
+ and what part is DUT performance fluctuations.
+ In practice, both types of noise tend to be too complicated for that analysis.
+
+ MLRsearch enables users to search for multiple goals,
+ potentially providing more insight at the cost of a longer overall search time.
+ However, for a thorough and reliable examination of DUT-SUT interactions,
+ it is necessary to employ additional methods beyond black-box benchmarking,
+ such as collecting and analyzing DUT and SUT telemetry.
+
+ ## Repeatability and Comparability
+
+ Multiple trials improve repeatability, depending on exceed ratio.
+
+ In practice, one-second goal final trial duration with exceed ratio 0.5
+ is good enough for modern SUTs.
+ However, unless smaller wait times around the traffic part of the trial
+ are allowed, too much of overal search time would be wasted on waiting.
+
+ It is not clear whether exceed ratios higher than 0.5 are better
+ for repeatability.
+ The 0.5 value is still preferred due to explainability using median.
+
+ It is possible that the Conditional Throughput values (with non-zero goal
+ loss ratio) are better for repeatability than the Relevant Lower Bound values.
+ This is especially for implementations
+ which pick load from a small set of discrete values,
+ as that hides small variances in Relevant Lower Bound values
+ other implementations may find.
+
+ Implementations focusing on shortening the overall search time
+ are automatically forced to avoid comparability issues due to load selection,
+ as they must prefer even splits wherever possible.
+ But this conclusion only holds when the same goals are used.
+ Larger adoption is needed before any further claims on comparability
+ between MLRsearch implementations can be made.
+
+ ## Throughput with Non-Zero Loss
+
+ Trivially suported by the Goal Loss Ratio attribute.
+
+ In practice, usage of non-zero loss ratio values
+ improves the result repeatability
+ (exactly as expected based on results from simpler search methods).
+
+ ## Inconsistent Trial Results
+
+ MLRsearch is conservative wherever possible.
+ This is built into the definition of Conditional Throughput,
+ and into the treatment of short trial results for load classification.
+
+ This is consistent with [RFC2544] zero loss tolerance motivation.
+
+ If the noiseless part of the SUT performance spectrum is of interest,
+ it should be enough to set small value for the goal final trial duration,
+ and perhaps also a large value for the Goal Exceed Ratio.
+
+ Implementations may offer other (optional) configuration attributes
+ to become less conservative, but currently it is not clear
+ what impact would that have on repeatability.
+
+{:/comment}
+
+# IANA Considerations
+
+No requests of IANA.
+
+# Security Considerations
+
+Benchmarking activities as described in this memo are limited to
+technology characterization of a DUT/SUT using controlled stimuli in a
+laboratory environment, with dedicated address space and the constraints
+specified in the sections above.
+
+The benchmarking network topology will be an independent test setup and
+MUST NOT be connected to devices that may forward the test traffic into
+a production network or misroute traffic to the test management network.
+
+Further, benchmarking is performed on a "black-box" basis, relying
+solely on measurements observable external to the DUT/SUT.
+
+Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+benchmarking purposes. Any implications for network security arising
+from the DUT/SUT SHOULD be identical in the lab and in production
+networks.
+
+# Acknowledgements
+
+Some phrases and statements in this document were created
+with the help of Mistral AI (mistral.ai).
+
+Many thanks to Alec Hothan of the OPNFV NFVbench project for thorough
+review and numerous useful comments and suggestions in the earlier versions of this document.
+
+Special wholehearted gratitude and thanks to the late Al Morton for his
+thorough reviews filled with very specific feedback and constructive
+guidelines. Thank you Al for the close collaboration over the years,
+for your continuous unwavering encouragement full of empathy and
+positive attitude. Al, you are dearly missed.
+
+# Appendix A: Load Classification
+
+This section specifies how to perform the load classification.
+
+Any intended load value can be classified according to a given [Search Goal] (#Search-Goal).
+
+The algorithm uses (some subsets of) the set of all available trial results
+from trials measured at a given intended load at the end of the search.
+All durations are those returned by the Measurer.
+
+The block at the end of this appendix holds pseudocode
+which computes two values, stored in variables named
+`optimistic` and `pessimistic`.
+
+{::comment}
+ [We have other section re optimistic. Not going to talk about variable naming here.]
+
+ <mark>MKP2 mk edit note: Need to add the description of what
+ the `optimistic` and `pessimistic` variables represent.
+ Or a reference to where this is described
+ e.g. in [Single Trial Duration] (#Single-Trial-Duration) section.</mark>
+
+{:/comment}
+
+The pseudocode happens to be valid Python code.
+
+If values of both variables are computed to be true, the load in question
+is classified as a lower bound according to the given Search Goal.
+If values of both variables are false, the load is classified as an upper bound.
+Otherwise, the load is classified as undecided.
+
+The pseudocode expects the following variables to hold values as follows:
+
+- `goal_duration_sum`: The duration sum value of the given Search Goal.
+
+- `goal_exceed_ratio`: The exceed ratio value of the given Search Goal.
+
+- `good_long_sum`: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial Loss Ratio
+ not higher than the Goal Loss Ratio.
+
+- `bad_long_sum`: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial Loss Ratio
+ higher than the Goal Loss Ratio.
+
+- `good_short_sum`: Sum of durations across trials with trial duration
+ shorter than the goal final trial duration and with a Trial Loss Ratio
+ not higher than the Goal Loss Ratio.
+
+- `bad_short_sum`: Sum of durations across trials with trial duration
+ shorter than the goal final trial duration and with a Trial Loss Ratio
+ higher than the Goal Loss Ratio.
+
+The code works correctly also when there are no trial results at a given load.
+
+~~~ python
+balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
+effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
+quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
+optimistic = effective_bad_sum <= quantile_duration_sum
+pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
+~~~
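+
+For illustration, the following non-normative example applies the pseudocode
+above to one hypothetical set of input values (durations in seconds):
+
+~~~ python
+goal_duration_sum = 21.0
+goal_exceed_ratio = 0.5
+good_long_sum = 10.0
+bad_long_sum = 2.0
+good_short_sum = 1.0
+bad_short_sum = 3.0
+
+balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
+effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
+quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
+optimistic = effective_bad_sum <= quantile_duration_sum
+pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
+# balancing_sum == 1.0, effective_bad_sum == 4.0, effective_whole_sum == 21.0,
+# quantile_duration_sum == 10.5, optimistic == True, pessimistic == False,
+# so this load is still classified as undecided.
+~~~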
+
+# Appendix B: Conditional Throughput
+
+This section specifies how to compute Conditional Throughput, as referred to in section [Conditional Throughput] (#Conditional-Throughput).
+
+Any intended load value can be used as the basis for the following computation,
+but only the Relevant Lower Bound (at the end of the search)
+leads to the value called the Conditional Throughput for a given Search Goal.
+
+The algorithm uses (some subsets of) the set of all available trial results
+from trials measured at a given intended load at the end of the search.
+All durations are those returned by the Measurer.
+
+The block at the end of this appendix holds pseudocode
+which computes a value stored as variable `conditional_throughput`.
+
+{::comment}
+ [CT is CT. But text could make more obvious.]
+
+ <mark>MKP2 mk edit note: Need to add the description of what does
+ the `conditional_throughput` variable represent.
+ Or a reference to where this is described
+ e.g. in [Conditional Throughput] (#Conditional-Throughput) section.</mark>
+
+{:/comment}
+
+The pseudocode happens to be valid Python code.
+
+The pseudocode expects the following variables to hold values as follows:
+
+- `goal_duration_sum`: The duration sum value of the given Search Goal.
+
+- `goal_exceed_ratio`: The exceed ratio value of the given Search Goal.
+
+- `good_long_sum`: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial Loss Ratio
+ not higher than the Goal Loss Ratio.
+
+- `bad_long_sum`: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial Loss Ratio
+ higher than the Goal Loss Ratio.
+
+- `long_trials`: An iterable of all trial results from trials with trial duration
+ at least equal to the goal final trial duration,
+  sorted by increasing Trial Loss Ratio.
+ A trial result is a composite with the following two attributes available:
+
+ - `trial.loss_ratio`: The Trial Loss Ratio as measured for this trial.
+
+ - `trial.duration`: The trial duration of this trial.
+
+The code works correctly only if there is at least one
+trial result measured at a given load.
+
+~~~ python
+all_long_sum = max(goal_duration_sum, good_long_sum + bad_long_sum)
+remaining = all_long_sum * (1.0 - goal_exceed_ratio)
+quantile_loss_ratio = None
+for trial in long_trials:
+ if quantile_loss_ratio is None or remaining > 0.0:
+ quantile_loss_ratio = trial.loss_ratio
+ remaining -= trial.duration
+ else:
+ break
+else:
+ if remaining > 0.0:
+ quantile_loss_ratio = 1.0
+conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
+~~~
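+
+For illustration, the following non-normative example lists one hypothetical
+set of input values (durations in seconds, load in frames per second)
+and the values the pseudocode above computes from them:
+
+~~~ python
+from collections import namedtuple
+
+Trial = namedtuple("Trial", ["loss_ratio", "duration"])
+
+goal_duration_sum = 3.0
+goal_exceed_ratio = 0.5
+intended_load = 1_000_000.0
+good_long_sum = 1.0   # one good full-length trial
+bad_long_sum = 2.0    # two bad full-length trials
+# Already sorted by increasing Trial Loss Ratio.
+long_trials = [Trial(0.0, 1.0), Trial(0.02, 1.0), Trial(0.05, 1.0)]
+
+# Running the pseudocode above on these inputs gives:
+#   all_long_sum == 3.0, initial remaining == 1.5,
+#   quantile_loss_ratio == 0.02 (the second trial is the median one),
+#   conditional_throughput == 980000.0 frames per second.
+~~~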
+
+--- back
+
+{::comment}
+ [Final checklist.]
+
+ <mark>[VP] Final Checks. Only mark as done when there are no active todos above.</mark>
+
+ <mark>[VP] Rename chapter/sub-/section to better match their content.</mark>
+
+ <mark>MKP3 [VP] TODO: Recheck the definition dependencies go bottom-up.</mark>
+
+ <mark>[VP] TODO: Unify external reference style (brackets, spaces, section numbers and names).</mark>
+
+ <mark>[VP] TODO: Add internal links wherever Captialized Term is mentioned.</mark>
+
+ <mark>MKP2 [VP] TODO: Capitalization of New Terms: useful when editing and reviewing,
+ but I still vote to remove capitalization before final submit,
+ because all other RFCs I see only capitalize due to being section title.</mark>
+
+ <mark>[VP] TODO: If time permits, keep improving formal style (e.g. using AI).</mark>
+
+{:/comment}
diff --git a/docs/ietf/draft-ietf-bmwg-mlrsearch-07.txt b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.txt
new file mode 100644
index 0000000000..c5c94410a8
--- /dev/null
+++ b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.txt
@@ -0,0 +1,2800 @@
+
+
+
+
+Benchmarking Working Group M. Konstantynowicz
+Internet-Draft V. Polak
+Intended status: Informational Cisco Systems
+Expires: 18 January 2025 18 July 2024
+
+
+ Multiple Loss Ratio Search
+ draft-ietf-bmwg-mlrsearch-07
+
+Abstract
+
+ This document proposes extensions to [RFC2544] throughput search by
+ defining a new methodology called Multiple Loss Ratio search
+ (MLRsearch). MLRsearch aims to minimize search duration, support
+ multiple loss ratio searches, and enhance result repeatability and
+ comparability.
+
+ The primary reason for extending [RFC2544] is to address the
+ challenges and requirements presented by the evaluation and testing
+ of software-based networking systems' data planes.
+
+ To give users more freedom, MLRsearch provides additional
+ configuration options such as allowing multiple short trials per load
+ instead of one large trial, tolerating a certain percentage of trial
+ results with higher loss, and supporting the search for multiple
+ goals with varying loss ratios.
+
+Status of This Memo
+
+ This Internet-Draft is submitted in full conformance with the
+ provisions of BCP 78 and BCP 79.
+
+ Internet-Drafts are working documents of the Internet Engineering
+ Task Force (IETF). Note that other groups may also distribute
+ working documents as Internet-Drafts. The list of current Internet-
+ Drafts is at https://datatracker.ietf.org/drafts/current/.
+
+ Internet-Drafts are draft documents valid for a maximum of six months
+ and may be updated, replaced, or obsoleted by other documents at any
+ time. It is inappropriate to use Internet-Drafts as reference
+ material or to cite them other than as "work in progress."
+
+ This Internet-Draft will expire on 18 January 2025.
+
+Copyright Notice
+
+ Copyright (c) 2024 IETF Trust and the persons identified as the
+ document authors. All rights reserved.
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 1]
+
+Internet-Draft MLRsearch July 2024
+
+
+ This document is subject to BCP 78 and the IETF Trust's Legal
+ Provisions Relating to IETF Documents (https://trustee.ietf.org/
+ license-info) in effect on the date of publication of this document.
+ Please review these documents carefully, as they describe your rights
+ and restrictions with respect to this document. Code Components
+ extracted from this document must include Revised BSD License text as
+ described in Section 4.e of the Trust Legal Provisions and are
+ provided without warranty as described in the Revised BSD License.
+
+Table of Contents
+
+ 1. Purpose and Scope . . . . . . . . . . . . . . . . . . . . . . 4
+ 2. Identified Problems . . . . . . . . . . . . . . . . . . . . . 5
+ 2.1. Long Search Duration . . . . . . . . . . . . . . . . . . 5
+ 2.2. DUT in SUT . . . . . . . . . . . . . . . . . . . . . . . 6
+ 2.3. Repeatability and Comparability . . . . . . . . . . . . . 8
+ 2.4. Throughput with Non-Zero Loss . . . . . . . . . . . . . . 8
+ 2.5. Inconsistent Trial Results . . . . . . . . . . . . . . . 9
+ 3. MLRsearch Specification . . . . . . . . . . . . . . . . . . . 10
+ 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 10
+ 3.2. Measurement Quantities . . . . . . . . . . . . . . . . . 11
+ 3.3. Existing Terms . . . . . . . . . . . . . . . . . . . . . 12
+ 3.3.1. SUT . . . . . . . . . . . . . . . . . . . . . . . . . 12
+ 3.3.2. DUT . . . . . . . . . . . . . . . . . . . . . . . . . 12
+ 3.3.3. Trial . . . . . . . . . . . . . . . . . . . . . . . . 12
+ 3.4. Trial Terms . . . . . . . . . . . . . . . . . . . . . . . 13
+ 3.4.1. Trial Duration . . . . . . . . . . . . . . . . . . . 14
+ 3.4.2. Trial Load . . . . . . . . . . . . . . . . . . . . . 14
+ 3.4.3. Trial Input . . . . . . . . . . . . . . . . . . . . . 15
+ 3.4.4. Traffic Profile . . . . . . . . . . . . . . . . . . . 15
+ 3.4.5. Trial Forwarding Ratio . . . . . . . . . . . . . . . 16
+ 3.4.6. Trial Loss Ratio . . . . . . . . . . . . . . . . . . 16
+ 3.4.7. Trial Forwarding Rate . . . . . . . . . . . . . . . . 17
+ 3.4.8. Trial Effective Duration . . . . . . . . . . . . . . 17
+ 3.4.9. Trial Output . . . . . . . . . . . . . . . . . . . . 18
+ 3.4.10. Trial Result . . . . . . . . . . . . . . . . . . . . 18
+ 3.5. Goal Terms . . . . . . . . . . . . . . . . . . . . . . . 19
+ 3.5.1. Goal Final Trial Duration . . . . . . . . . . . . . . 19
+ 3.5.2. Goal Duration Sum . . . . . . . . . . . . . . . . . . 19
+ 3.5.3. Goal Loss Ratio . . . . . . . . . . . . . . . . . . . 20
+ 3.5.4. Goal Exceed Ratio . . . . . . . . . . . . . . . . . . 20
+ 3.5.5. Goal Width . . . . . . . . . . . . . . . . . . . . . 21
+ 3.5.6. Search Goal . . . . . . . . . . . . . . . . . . . . . 21
+ 3.5.7. Controller Input . . . . . . . . . . . . . . . . . . 22
+ 3.6. Search Goal Examples . . . . . . . . . . . . . . . . . . 23
+ 3.6.1. RFC2544 Goal . . . . . . . . . . . . . . . . . . . . 23
+ 3.6.2. TST009 Goal . . . . . . . . . . . . . . . . . . . . . 24
+ 3.7. Result Terms . . . . . . . . . . . . . . . . . . . . . . 24
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 2]
+
+Internet-Draft MLRsearch July 2024
+
+
+ 3.7.1. Relevant Upper Bound . . . . . . . . . . . . . . . . 25
+ 3.7.2. Relevant Lower Bound . . . . . . . . . . . . . . . . 25
+ 3.7.3. Conditional Throughput . . . . . . . . . . . . . . . 26
+ 3.7.4. Goal Result . . . . . . . . . . . . . . . . . . . . . 26
+ 3.7.5. Search Result . . . . . . . . . . . . . . . . . . . . 27
+ 3.7.6. Controller Output . . . . . . . . . . . . . . . . . . 27
+ 3.8. MLRsearch Architecture . . . . . . . . . . . . . . . . . 28
+ 3.8.1. Measurer . . . . . . . . . . . . . . . . . . . . . . 28
+ 3.8.2. Controller . . . . . . . . . . . . . . . . . . . . . 29
+ 3.8.3. Manager . . . . . . . . . . . . . . . . . . . . . . . 29
+ 3.9. Implementation Compliance . . . . . . . . . . . . . . . . 30
+ 4. Additional Considerations . . . . . . . . . . . . . . . . . . 30
+ 4.1. MLRsearch Versions . . . . . . . . . . . . . . . . . . . 31
+ 4.2. Stopping Conditions . . . . . . . . . . . . . . . . . . . 31
+ 4.3. Load Classification . . . . . . . . . . . . . . . . . . . 32
+ 4.4. Loss Ratios . . . . . . . . . . . . . . . . . . . . . . . 32
+ 4.5. Loss Inversion . . . . . . . . . . . . . . . . . . . . . 33
+ 4.6. Exceed Ratio . . . . . . . . . . . . . . . . . . . . . . 34
+ 4.7. Duration Sum . . . . . . . . . . . . . . . . . . . . . . 34
+ 4.8. Short Trials . . . . . . . . . . . . . . . . . . . . . . 35
+ 4.9. Throughput . . . . . . . . . . . . . . . . . . . . . . . 35
+ 4.10. Search Time . . . . . . . . . . . . . . . . . . . . . . . 37
+ 4.11. RFC2544 Compliance . . . . . . . . . . . . . . . . . . . 38
+ 5. Logic of Load Classification . . . . . . . . . . . . . . . . 38
+ 5.1. Introductory Remarks . . . . . . . . . . . . . . . . . . 38
+ 5.2. Performance Spectrum . . . . . . . . . . . . . . . . . . 38
+ 5.2.1. First Example . . . . . . . . . . . . . . . . . . . . 39
+ 5.2.2. Second Example . . . . . . . . . . . . . . . . . . . 40
+ 5.2.3. Third Example . . . . . . . . . . . . . . . . . . . . 40
+ 5.2.4. Summary . . . . . . . . . . . . . . . . . . . . . . . 40
+ 5.3. Trials with Single Duration . . . . . . . . . . . . . . . 40
+ 5.4. Trials with Short Duration . . . . . . . . . . . . . . . 42
+ 5.4.1. Scenarios . . . . . . . . . . . . . . . . . . . . . . 42
+ 5.4.2. Classification Logic . . . . . . . . . . . . . . . . 43
+ 5.5. Trials with Longer Duration . . . . . . . . . . . . . . . 45
+ 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 45
+ 7. Security Considerations . . . . . . . . . . . . . . . . . . . 45
+ 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 46
+ 9. Appendix A: Load Classification . . . . . . . . . . . . . . . 46
+ 10. Appendix B: Conditional Throughput . . . . . . . . . . . . . 47
+ 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 49
+ 11.1. Normative References . . . . . . . . . . . . . . . . . . 49
+ 11.2. Informative References . . . . . . . . . . . . . . . . . 49
+ Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 49
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 3]
+
+Internet-Draft MLRsearch July 2024
+
+
+1. Purpose and Scope
+
+ The purpose of this document is to describe Multiple Loss Ratio
+ search (MLRsearch), a data plane throughput search methodology
+ optimized for software networking DUTs.
+
+ Applying vanilla [RFC2544] throughput bisection to software DUTs
+ results in several problems:
+
+ * Binary search takes too long as most trials are done far from the
+ eventually found throughput.
+
+ * The required final trial duration and pauses between trials
+ prolong the overall search duration.
+
+ * Software DUTs show noisy trial results, leading to a big spread of
+ possible discovered throughput values.
+
+ * Throughput requires a loss of exactly zero frames, but the
+ industry frequently allows for small but non-zero losses.
+
+ * The definition of throughput is not clear when trial results are
+ inconsistent.
+
+ To address the problems mentioned above, the MLRsearch test
+ methodology specification employs the following enhancements:
+
+ * Allow multiple short trials instead of one big trial per load.
+
+ - Optionally, tolerate a percentage of trial results with higher
+ loss.
+
+ * Allow searching for multiple Search Goals, with differing loss
+ ratios.
+
+ - Any trial result can affect each Search Goal in principle.
+
+ * Insert multiple coarse targets for each Search Goal, earlier ones
+ need to spend less time on trials.
+
+ - Earlier targets also aim for lesser precision.
+
+ - Use Forwarding Rate (FR) at maximum offered load [RFC2285]
+ (section 3.6.2) to set the initial targets.
+
+ * Take care when dealing with inconsistent trial results.
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 4]
+
+Internet-Draft MLRsearch July 2024
+
+
+ - Reported throughput is smaller than the smallest load with high
+ loss.
+
+ - Smaller load candidates are measured first.
+
+ * Apply several load selection heuristics to save even more time by
+ trying hard to avoid unnecessarily narrow bounds.
+
+ Some of these enhancements are formalized as the MLRsearch
+ specification, while the remaining enhancements are treated as
+ implementation details,
+ thus achieving high comparability without limiting future
+ improvements.
+
+ MLRsearch configuration options are flexible enough to support both
+ conservative settings and aggressive settings. The conservative
+ settings lead to results unconditionally compliant with [RFC2544],
+ but longer search duration and worse repeatability. Conversely,
+ aggressive settings lead to shorter search duration and better
+ repeatability, but the results are not compliant with [RFC2544].
+
+ No part of [RFC2544] is intended to be obsoleted by this document.
+
+2. Identified Problems
+
+ This chapter describes the problems affecting usability of various
+ performance testing methodologies, mainly a binary search for
+ [RFC2544] unconditionally compliant throughput.
+
+2.1. Long Search Duration
+
+ The emergence of software DUTs, with frequent software updates and a
+ number of different frame processing modes and configurations, has
+ increased both the number of performance tests required to verify the
+ DUT update and the frequency of running those tests. This makes the
+ overall test execution time even more important than before.
+
+ The current [RFC2544] throughput definition restricts the potential
+ for time-efficiency improvements. A more generalized throughput
+ concept could enable further enhancements while maintaining the
+ precision of simpler methods.
+
+ The bisection method, when unconditionally compliant with [RFC2544],
+ is excessively slow. This is because a significant amount of time is
+ spent on trials with loads that, in retrospect, are far from the
+ final determined throughput.
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 5]
+
+Internet-Draft MLRsearch July 2024
+
+
+ [RFC2544] does not specify any stopping condition for throughput
+ search, so users already have access to a limited trade-off between
+ search duration and achieved precision. However, each full 60-second
+ trial doubles the precision, so not many trials can be removed without
+ a substantial loss of precision.
+
+2.2. DUT in SUT
+
+ [RFC2285] defines DUT as "The network forwarding device to which
+ stimulus is offered and response measured" (section 3.1.1), and SUT as
+ "The collective set of network devices to which stimulus is offered as
+ a single entity and response measured" (section 3.1.2).
+
+ [RFC2544] specifies a test setup with an external tester stimulating
+ the networking system, treating it either as a single DUT, or as a
+ system of devices, an SUT.
+
+ In the case of software networking, the SUT consists of not only the
+ DUT as a software program processing frames, but also of server
+ hardware and operating system functions, with that server hardware
+ resources shared across all programs including the operating system.
+
+ Given that the SUT is a shared multi-tenant environment encompassing
+ the DUT and other components, the DUT might inadvertently experience
+ interference from the operating system or other software operating on
+ the same server.
+
+ Some of this interference can be mitigated. For instance, pinning
+ DUT program threads to specific CPU cores and isolating those cores
+ can prevent context switching.
+
+ Despite taking all feasible precautions, some adverse effects may
+ still impact the DUT's network performance. In this document, these
+ effects are collectively referred to as SUT noise, even if the
+ effects are not as unpredictable as what other engineering
+ disciplines call noise.
+
+ The DUT can also exhibit fluctuating performance itself, for reasons
+ not related to the rest of the SUT, for example due to pauses in
+ execution as needed for internal stateful processing. In many cases
+ this may be an expected per-design behavior, as it would be observable
+ even in a hypothetical scenario where all sources of SUT noise are
+ eliminated. Such behavior affects trial results in a way similar to
+ SUT noise. As the two phenomena are hard to distinguish, in this
+ document the term 'noise' is used to encompass both the internal
+ performance fluctuations of the DUT and the genuine noise of the SUT.
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 6]
+
+Internet-Draft MLRsearch July 2024
+
+
+ A simple model of SUT performance consists of an idealized noiseless
+ performance, and additional noise effects. For a specific SUT, the
+ noiseless performance is assumed to be constant, with all observed
+ performance variations being attributed to noise. The impact of the
+ noise can vary in time, sometimes wildly, even within a single trial.
+ The noise can sometimes be negligible, but frequently it lowers the
+ SUT performance as observed in trial results.
+
+ In this model, the SUT does not have a single performance value; it
+ has a spectrum. One end of the spectrum is the idealized noiseless
+ performance value, while the other end can be called a noiseful
+ performance. In practice, a trial result close to the noiseful end of
+ the spectrum happens only rarely. The worse the performance value is,
+ the more rarely it is seen in a trial. Therefore, the extreme noiseful
+ end of the SUT spectrum is not observable among trial results. Also,
+ the extreme noiseless end of the SUT spectrum is unlikely to be
+ observable, this time because some small noise effects are likely to
+ occur multiple times during a trial.
+
+ Unless specified otherwise, this document's focus is on the
+ potentially observable ends of the SUT performance spectrum, as
+ opposed to the extreme ones.
+
+ When focusing on the DUT, the benchmarking effort should ideally aim
+ to eliminate only the SUT noise from SUT measurements. However, this
+ is currently not feasible in practice, as there are no realistic
+ enough models available to distinguish SUT noise from DUT
+ fluctuations, based on authors' experience and available literature.
+
+ Assuming a well-constructed SUT, the DUT is likely its primary
+ performance bottleneck. In this case, we can define the DUT's ideal
+ noiseless performance as the noiseless end of the SUT performance
+ spectrum, especially for throughput. However, other performance
+ metrics, such as latency, may require additional considerations.
+
+ Note that by this definition, DUT noiseless performance also
+ minimizes the impact of DUT fluctuations, as much as realistically
+ possible for a given trial duration.
+
+ MLRsearch methodology aims to solve the DUT in SUT problem by
+ estimating the noiseless end of the SUT performance spectrum using a
+ limited number of trial results.
+
+ Any improvements to the throughput search algorithm, aimed at better
+ dealing with software networking SUT and DUT setup, should employ
+ strategies recognizing the presence of SUT noise, allowing the
+ discovery of (proxies for) DUT noiseless performance at different
+ levels of sensitivity to SUT noise.
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 7]
+
+Internet-Draft MLRsearch July 2024
+
+
+2.3. Repeatability and Comparability
+
+ [RFC2544] does not suggest repeating the throughput search. From
+ just one discovered throughput value, it cannot be determined how
+ repeatable that value is. Poor repeatability then leads to poor
+ comparability, as different benchmarking teams may obtain varying
+ throughput values for the same SUT, exceeding the expected
+ differences from search precision.
+
+ [RFC2544] throughput requirements (a 60-second trial and no tolerance
+ of a single frame loss) affect the throughput results in the following
+ way. The SUT behavior close to the noiseful end of its
+ performance spectrum consists of rare occasions of significantly low
+ performance, but the long trial duration makes those occasions not so
+ rare on the trial level. Therefore, the binary search results tend
+ to wander away from the noiseless end of SUT performance spectrum,
+ more frequently and more widely than short trials would, thus causing
+ poor throughput repeatability.
+
+ The repeatability problem can be addressed by defining a search
+ procedure that identifies a consistent level of performance, even if
+ it does not meet the strict definition of throughput in [RFC2544].
+
+ According to the SUT performance spectrum model, better repeatability
+ will be at the noiseless end of the spectrum. Therefore, solutions
+ to the DUT in SUT problem will help also with the repeatability
+ problem.
+
+ Conversely, any alteration to [RFC2544] throughput search that
+ improves repeatability should be considered as less dependent on the
+ SUT noise.
+
+ An alternative option is to simply run a search multiple times, and
+ report some statistics (e.g. average and standard deviation). This
+ can be used for a subset of tests deemed more important, but it makes
+ the search duration problem even more pronounced.
+
+2.4. Throughput with Non-Zero Loss
+
+ [RFC1242] (section 3.17 Throughput) defines throughput as: The
+ maximum rate at which none of the offered frames are dropped by the
+ device.
+
+ Then, it says: Since even the loss of one frame in a data stream can
+ cause significant delays while waiting for the higher level protocols
+ to time out, it is useful to know the actual maximum data rate that
+ the device can support.
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 8]
+
+Internet-Draft MLRsearch July 2024
+
+
+ However, many benchmarking teams accept a small, non-zero loss ratio
+ as the goal for their load search.
+
+ Motivations are many:
+
+ * Modern protocols tolerate frame loss better, compared to the time
+ when [RFC1242] and [RFC2544] were specified.
+
+ * Trials nowadays send way more frames within the same duration,
+ increasing the chance of a small SUT performance fluctuation being
+ enough to cause frame loss.
+
+ * Small bursts of frame loss caused by noise have otherwise smaller
+ impact on the average frame loss ratio observed in the trial, as
+ during other parts of the same trial the SUT may work more closely
+ to its noiseless performance, thus perhaps lowering the Trial Loss
+ Ratio below the Goal Loss Ratio value.
+
+ * If an approximation of the SUT noise impact on the Trial Loss
+ Ratio is known, it can be set as the Goal Loss Ratio.
+
+ Regardless of the validity of all similar motivations, support for
+ non-zero loss goals makes any search algorithm more user-friendly.
+ [RFC2544] throughput is not user-friendly in this regard.
+
+ Furthermore, allowing users to specify multiple loss ratio values,
+ and enabling a single search to find all relevant bounds,
+ significantly enhances the usefulness of the search algorithm.
+
+ Searching for multiple Search Goals also helps to describe the SUT
+ performance spectrum better than the result of a single Search Goal.
+ For example, the repeated wide gap between zero and non-zero loss
+ loads indicates the noise has a large impact on the observed
+ performance, which is not evident from a single goal load search
+ procedure result.
+
+ It is easy to modify the vanilla bisection to find a lower bound for
+ the intended load that satisfies a non-zero Goal Loss Ratio. But it
+ is not that obvious how to search for multiple goals at once, hence
+ the support for multiple Search Goals remains a problem.
+
+2.5. Inconsistent Trial Results
+
+ While performing throughput search by executing a sequence of
+ measurement trials, there is a risk of encountering inconsistencies
+ between trial results.
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 9]
+
+Internet-Draft MLRsearch July 2024
+
+
+ The plain bisection never encounters inconsistent trials. But
+ [RFC2544] hints about the possibility of inconsistent trial results,
+ in two places in its text. The first place is section 24, where full
+ trial durations are required, presumably because they can be
+ inconsistent with the results from short trial durations. The second
+ place is section 26.3, where two successive zero-loss trials are
+ recommended, presumably because after one zero-loss trial there can
+ be a subsequent inconsistent non-zero-loss trial.
+
+ Examples include:
+
+ * A trial at the same load (same or different trial duration)
+ results in a different Trial Loss Ratio.
+
+ * A trial at a higher load (same or different trial duration)
+ results in a smaller Trial Loss Ratio.
+
+ Any robust throughput search algorithm needs to decide how to
+ continue the search in the presence of such inconsistencies.
+ Definitions of throughput in [RFC1242] and [RFC2544] are not specific
+ enough to imply a unique way of handling such inconsistencies.
+
+ Ideally, there will be a definition of a new quantity which both
+ generalizes throughput for non-zero-loss (and other possible
+ repeatability enhancements), while being precise enough to force a
+ specific way to resolve trial result inconsistencies. But until such
+ a definition is agreed upon, the correct way to handle inconsistent
+ trial results remains an open problem.
+
+3. MLRsearch Specification
+
+ This section describes MLRsearch specification including all
+ technical definitions needed for evaluating whether a particular test
+ procedure complies with MLRsearch specification.
+
+3.1. Overview
+
+ MLRsearch specification describes a set of abstract system
+ components, acting as functions with specified inputs and outputs.
+
+ A test procedure is said to comply with MLRsearch specification if it
+ can be conceptually divided into analogous components, each
+ satisfying requirements for the corresponding MLRsearch component.
+
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 10]
+
+Internet-Draft MLRsearch July 2024
+
+
+ The Measurer component is tasked to perform trials, the Controller
+ component is tasked to select trial loads and durations, the Manager
+ component is tasked to pre-configure everything and to produce the
+ test report. The test report explicitly states Search Goals (as the
+ Controller Inputs) and corresponding Goal Results (Controller
+ Outputs).
+
+ The Manager calls the Controller once, the Controller keeps calling
+ the Measurer until all stopping conditions are met.
+
+ The part where Controller calls the Measurer is called the search.
+ Any activity done by the Manager before it calls the Controller (or
+ after Controller returns) is not considered to be part of the search.
+
+ MLRsearch specification prescribes regular search results and
+ recommends their stopping conditions. Irregular search results are
+ also allowed, they may have different requirements and stopping
+ conditions.
+
+ Search results are based on load classification. When measured
+ enough, any chosen load either achieves or fails each search goal,
+ thus becoming a lower or an upper bound for that goal. When the
+ relevant bounds are at loads that are close enough (according to goal
+ precision), the regular result is found. Search stops when all
+ regular results are found (or if some goals are proven to have only
+ irregular results).
+
+3.2. Measurement Quantities
+
+ MLRsearch specification uses a number of measurement quantities.
+
+ In general, MLRsearch specification does not require particular units
+ to be used, but it is REQUIRED for the test report to state all the
+ units. For example, ratio quantities can be dimensionless numbers
+ between zero and one, but may be expressed as percentages instead.
+
+ For convenience, a group of quantities can be treated as a composite
+ quantity. One constituent of a composite quantity is called an
+ attribute, and a group of attribute values is called an instance of
+ that composite quantity.
+
+ Some attributes are not independent from others, and they can be
+ calculated from other attributes. Such quantities are called derived
+ quantities.
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 11]
+
+Internet-Draft MLRsearch July 2024
+
+
+3.3. Existing Terms
+
+ RFC 1242 "Benchmarking Terminology for Network Interconnect Devices"
+ contains basic definitions, and RFC 2544 "Benchmarking Methodology
+ for Network Interconnect Devices" contains discussions of a number of
+ terms and additional methodology requirements. RFC 2285 adds more
+ terms and discussions, describing some known situations in a more
+ precise way.
+
+ All three documents should be consulted before attempting to make use
+ of this document.
+
+ Definitions of some central terms are copied and discussed in
+ subsections.
+
+3.3.1. SUT
+
+ Defined in [RFC2285] (section 3.1.2 System Under Test (SUT)) as
+ follows.
+
+ Definition:
+
+ The collective set of network devices to which stimulus is offered as
+ a single entity and response measured.
+
+ Discussion:
+
+ An SUT consisting of a single network device is also allowed.
+
+3.3.2. DUT
+
+ Defined in [RFC2285] (section 3.1.1 Device Under Test (DUT)) as
+ follows.
+
+ Definition:
+
+ The network forwarding device to which stimulus is offered and
+ response measured.
+
+ Discussion:
+
+ DUT, as a sub-component of SUT, is only indirectly mentioned in
+ MLRsearch specification, but is of key relevance for its motivation.
+
+3.3.3. Trial
+
+ A trial is the part of the test described in [RFC2544] (section 23.
+ Trial description).
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 12]
+
+Internet-Draft MLRsearch July 2024
+
+
+ Definition:
+
+ A particular test consists of multiple trials. Each trial returns
+ one piece of information, for example the loss rate at a particular
+ input frame rate. Each trial consists of a number of phases:
+
+ a) If the DUT is a router, send the routing update to the "input"
+ port and pause two seconds to be sure that the routing has settled.
+
+ b) Send the "learning frames" to the "output" port and wait 2 seconds
+ to be sure that the learning has settled. Bridge learning frames are
+ frames with source addresses that are the same as the destination
+ addresses used by the test frames. Learning frames for other
+ protocols are used to prime the address resolution tables in the DUT.
+ The formats of the learning frame that should be used are shown in
+ the Test Frame Formats document.
+
+ c) Run the test trial.
+
+ d) Wait for two seconds for any residual frames to be received.
+
+ e) Wait for at least five seconds for the DUT to restabilize.
+
+ Discussion:
+
+ The definition describes some traits, but it is not clear whether all
+ of them are REQUIRED, or whether some of them are only RECOMMENDED.
+
+ For the purposes of the MLRsearch specification, it is ALLOWED for
+ the test procedure to deviate from the [RFC2544] description, but any
+ such deviation MUST be made explicit in the test report.
+
+ Trials are the only stimuli the SUT is expected to experience during
+ the search.
+
+ In some discussion paragraphs, it is useful to consider the traffic
+ as sent and received by a tester, as implicitly defined in [RFC2544]
+ (section 6. Test set up).
+
+ An example of deviation from [RFC2544] is using shorter wait times.
+
+3.4. Trial Terms
+
+ This section defines new terms and redefines existing terms for
+ quantities relevant as inputs or outputs of a trial, as used by the
+ Measurer component.
+
+
+
+
+
+Konstantynowicz & Polak Expires 18 January 2025 [Page 13]
+
+Internet-Draft MLRsearch July 2024
+
+
+3.4.1. Trial Duration
+
+ Definition:
+
+ Trial duration is the intended duration of the traffic for a trial.
+
+ Discussion:
+
+ In general, this quantity does not include any preparation or waiting
+ described in [RFC2544] (section 23. Trial description).
+
+ While any positive real value may be provided, some Measurer
+ implementations MAY limit possible values, e.g. by rounding down to
+ the nearest integer in seconds. In that case, it is RECOMMENDED to give
+ such inputs to the Controller so the Controller only proposes the
+ accepted values. Alternatively, the test report MUST present the
+ rounded values as Search Goal attributes.
+
+3.4.2. Trial Load
+
+ Definition:
+
+ The trial load is the intended load for a trial.
+
+ Discussion:
+
+ For test report purposes, it is assumed that this is a constant load
+ by default. This MAY be only an average load, e.g. when the traffic
+ is intended to be bursty, as suggested in [RFC2544] (section 21.
+ Bursty traffic), but the test report MUST explicitly mention how non-
+ constant the traffic is.
+
+ Trial load is the quantity defined as Constant Load of [RFC1242]
+ (section 3.4 Constant Load), Data Rate of [RFC2544] (section 14.
+ Bidirectional traffic) and Intended Load of [RFC2285] (section 3.5.1
+ Intended load (Iload)). All three definitions specify that this
+ value applies to one (input or output) interface.
+
+ For test report purposes, multi-interface aggregate load MAY be
+   reported; this is understood as the same quantity expressed using
+ different units. From the report it MUST be clear whether a
+ particular trial load value is per one interface, or an aggregate
+ over all interfaces.
+
+ Similarly to trial duration, some Measurers may limit the possible
+ values of trial load. Contrary to trial duration, the test report is
+ NOT REQUIRED to document such behavior.
+
+
+
+ It is ALLOWED to combine trial load and trial duration in a way that
+ would not be possible to achieve using any integer number of data
+ frames.
+
+3.4.3. Trial Input
+
+ Definition:
+
+ Trial Input is a composite quantity, consisting of two attributes:
+ trial duration and trial load.
+
+ Discussion:
+
+ When talking about multiple trials, it is common to say "Trial
+ Inputs" to denote all corresponding Trial Input instances.
+
+ A Trial Input instance acts as the input for one call of the Measurer
+ component.
+
+ Contrary to other composite quantities, MLRsearch implementations are
+ NOT ALLOWED to add optional attributes here. This improves
+ interoperability between various implementations of the Controller
+ and the Measurer.
+
+3.4.4. Traffic Profile
+
+ Definition:
+
+ Traffic profile is a composite quantity containing attributes other
+ than trial load and trial duration, needed for unique determination
+ of the trial to be performed.
+
+ Discussion:
+
+ All its attributes are assumed to be constant during the search, and
+ the composite is configured on the Measurer by the Manager before the
+ search starts. This is why the traffic profile is not part of the
+ Trial Input.
+
+ As a consequence, implementations of the Manager and the Measurer
+ must be aware of their common set of capabilities, so that the
+ traffic profile uniquely defines the traffic during the search. The
+ important fact is that none of those capabilities have to be known by
+ the Controller implementations.
+
+ The traffic profile SHOULD contain some specific quantities, for
+ example [RFC2544] (section 9. Frame sizes) governs data link frame
+ size as defined in [RFC1242] (section 3.5 Data link frame size).
+
+
+
+ Several more specific quantities may be RECOMMENDED, depending on
+ media type. For example, [RFC2544] (Appendix C) lists frame formats
+ and protocol addresses, as recommended from [RFC2544] (section 8.
+ Frame formats) and [RFC2544] (section 12. Protocol addresses).
+
+ Depending on SUT configuration, e.g. when testing specific protocols,
+ additional attributes MUST be included in the traffic profile and in
+ the test report.
+
+ Example: [RFC8219] (section 5.3. Traffic Setup) introduces traffic
+ setups consisting of a mix of IPv4 and IPv6 traffic - the implied
+ traffic profile therefore must include an attribute for their
+ percentage.
+
+ Other traffic properties that need to be somehow specified in Traffic
+ Profile include: [RFC2544] (section 14. Bidirectional traffic),
+ [RFC2285] (section 3.3.3 Fully meshed traffic), and [RFC2544]
+ (section 11. Modifiers).
+
+3.4.5. Trial Forwarding Ratio
+
+ Definition:
+
+ The trial forwarding ratio is a dimensionless floating point value.
+ It MUST range between 0.0 and 1.0, both inclusive. It is calculated
+   by dividing the number of frames successfully forwarded by the SUT
+   by the total number of frames expected to be forwarded during the
+   trial.
+
+ Discussion:
+
+ For most traffic profiles, "expected to be forwarded" means "intended
+ to get transmitted from Tester towards SUT".
+
+ Trial forwarding ratio MAY be expressed in other units (e.g. as a
+ percentage) in the test report.
+
+ Note that, contrary to loads, frame counts used to compute trial
+ forwarding ratio are aggregates over all SUT output interfaces.
+
+ Questions around what is the correct number of frames that should
+   have been forwarded are generally outside of the scope of this
+ document.
+
+3.4.6. Trial Loss Ratio
+
+ Definition:
+
+
+
+
+
+ The Trial Loss Ratio is equal to one minus the trial forwarding
+ ratio.
+
+ Discussion:
+
+ 100% minus the trial forwarding ratio, when expressed as a
+ percentage.
+
+ This is almost identical to Frame Loss Rate of [RFC1242] (section 3.6
+ Frame Loss Rate), the only minor difference is that Trial Loss Ratio
+ does not need to be expressed as a percentage.
+
+3.4.7. Trial Forwarding Rate
+
+ Definition:
+
+ The trial forwarding rate is a derived quantity, calculated by
+ multiplying the trial load by the trial forwarding ratio.
+
+ Discussion:
+
+ It is important to note that while similar, this quantity is not
+ identical to the Forwarding Rate as defined in [RFC2285] (section
+ 3.6.1 Forwarding rate (FR)). The latter is specific to one output
+   interface only, whereas the trial forwarding rate is based on frame
+ counts aggregated over all SUT output interfaces.
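+
+   As an illustration only (not part of the MLRsearch specification),
+   the following Python sketch shows how the three trial quantities
+   defined above relate to each other. All names are hypothetical,
+   and the frame counts are assumed to be aggregated over all SUT
+   output interfaces.
+
+      def trial_forwarding_ratio(frames_forwarded, frames_expected):
+          # Dimensionless value within [0.0, 1.0].
+          return frames_forwarded / frames_expected
+
+      def trial_loss_ratio(forwarding_ratio):
+          # One minus the trial forwarding ratio.
+          return 1.0 - forwarding_ratio
+
+      def trial_forwarding_rate(trial_load, forwarding_ratio):
+          # Trial load multiplied by the trial forwarding ratio.
+          return trial_load * forwarding_ratio
+
+      # Example: 9990000 of 10000000 frames forwarded at 1 Mfps load.
+      ratio = trial_forwarding_ratio(9990000, 10000000)  # 0.999
+      loss = trial_loss_ratio(ratio)                      # 0.001
+      rate = trial_forwarding_rate(1.0e6, ratio)          # 999000.0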
+
+3.4.8. Trial Effective Duration
+
+ Definition:
+
+ Trial effective duration is a time quantity related to the trial, by
+ default equal to the trial duration.
+
+ Discussion:
+
+ This is an optional feature. If the Measurer does not return any
+ trial effective duration value, the Controller MUST use the trial
+ duration value instead.
+
+ Trial effective duration may be any time quantity chosen by the
+ Measurer to be used for time-based decisions in the Controller.
+
+ The test report MUST explain how the Measurer computes the returned
+ trial effective duration values, if they are not always equal to the
+ trial duration.
+
+
+
+
+
+ This feature can be beneficial for users who wish to manage the
+ overall search duration, rather than solely the traffic portion of
+   it. Simply measure the duration of the whole trial (including waits)
+ and use that as the trial effective duration.
+
+ Also, this is a way for the Measurer to inform the Controller about
+ its surprising behavior, for example when rounding the trial duration
+ value.
+
+3.4.9. Trial Output
+
+ Definition:
+
+ Trial Output is a composite quantity. The REQUIRED attributes are
+ Trial Loss Ratio, trial effective duration and trial forwarding rate.
+
+ Discussion:
+
+ When talking about multiple trials, it is common to say "Trial
+ Outputs" to denote all corresponding Trial Output instances.
+
+ Implementations may provide additional (optional) attributes. The
+ Controller implementations MUST ignore values of any optional
+ attribute they are not familiar with, except when passing Trial
+ Output instance to the Manager.
+
+ Example of an optional attribute: The aggregate number of frames
+ expected to be forwarded during the trial, especially if it is not
+ just (a rounded-up value) implied by trial load and trial duration.
+
+ While [RFC2285] (Section 3.5.2 Offered load (Oload)) requires the
+ offered load value to be reported for forwarding rate measurements,
+ it is NOT REQUIRED in MLRsearch specification.
+
+3.4.10. Trial Result
+
+ Definition:
+
+ Trial result is a composite quantity, consisting of the Trial Input
+ and the Trial Output.
+
+ Discussion:
+
+ When talking about multiple trials, it is common to say "trial
+ results" to denote all corresponding trial result instances.
+
+ While implementations SHOULD NOT include additional attributes with
+ independent values, they MAY include derived quantities.
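+
+   As an illustration only, the composite trial quantities defined
+   above could be represented in Python as follows. This is a sketch,
+   not a prescribed data model; implementations are free to use any
+   representation carrying equivalent information.
+
+      from dataclasses import dataclass
+
+      @dataclass(frozen=True)
+      class TrialInput:
+          # The only attributes allowed by the specification.
+          trial_duration: float  # seconds
+          trial_load: float      # frames per second, per interface
+
+      @dataclass(frozen=True)
+      class TrialOutput:
+          # REQUIRED attributes; optional ones may be added.
+          trial_loss_ratio: float
+          trial_effective_duration: float  # seconds
+          trial_forwarding_rate: float     # fps, aggregate
+
+      @dataclass(frozen=True)
+      class TrialResult:
+          trial_input: TrialInput
+          trial_output: TrialOutput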
+
+
+
+3.5. Goal Terms
+
+   This section defines new terms and redefines existing ones for
+   quantities indirectly relevant for inputs or outputs of the
+   Controller component.
+
+ Several goal attributes are defined before introducing the main
+   composite quantity: the Search Goal.
+
+3.5.1. Goal Final Trial Duration
+
+ Definition:
+
+ A threshold value for trial durations.
+
+ Discussion:
+
+ This attribute value MUST be positive.
+
+ A trial with Trial Duration at least as long as the Goal Final Trial
+ Duration is called a full-length trial (with respect to the given
+ Search Goal).
+
+ A trial that is not full-length is called a short trial.
+
+ Informally, while MLRsearch is allowed to perform short trials, the
+ results from such short trials have only limited impact on search
+ results.
+
+ One trial may be full-length for some Search Goals, but not for
+ others.
+
+ The full relation of this goal to Controller Output is defined later
+ in this document in subsections of [Goal Result] (#Goal-Result). For
+ example, the Conditional Throughput for this goal is computed only
+ from full-length trial results.
+
+3.5.2. Goal Duration Sum
+
+ Definition:
+
+ A threshold value for a particular sum of trial effective durations.
+
+ Discussion:
+
+ This attribute value MUST be positive.
+
+
+
+
+
+ Informally, even when looking only at full-length trials, MLRsearch
+ may spend up to this time measuring the same load value.
+
+ If the Goal Duration Sum is larger than the Goal Final Trial
+ Duration, multiple full-length trials may need to be performed at the
+ same load.
+
+ See [TST009 Example] (#TST009-Example) for an example where
+ possibility of multiple full-length trials at the same load is
+ intended.
+
+ A Goal Duration Sum value lower than the Goal Final Trial Duration
+ (of the same goal) could save some search time, but is NOT
+ RECOMMENDED. See [Relevant Upper Bound] (#Relevant-Upper-Bound) for
+ partial explanation.
+
+3.5.3. Goal Loss Ratio
+
+ Definition:
+
+ A threshold value for Trial Loss Ratios.
+
+ Discussion:
+
+ Attribute value MUST be non-negative and smaller than one.
+
+ A trial with Trial Loss Ratio larger than a Goal Loss Ratio value is
+ called a lossy trial, with respect to given Search Goal.
+
+ Informally, if a load causes too many lossy trials, the Relevant
+ Lower Bound for this goal will be smaller than that load.
+
+ If a trial is not lossy, it is called a low-loss trial, or
+ (specifically for zero Goal Loss Ratio value) zero-loss trial.
+
+3.5.4. Goal Exceed Ratio
+
+ Definition:
+
+ A threshold value for a particular ratio of sums of Trial Effective
+ Durations.
+
+ Discussion:
+
+ Attribute value MUST be non-negative and smaller than one.
+
+
+
+
+
+
+ See later sections for details on which sums. Specifically, the
+ direct usage is only in [Appendix A: Load Classification] (#Appendix-
+ A:-Load-Classification) and [Appendix B: Conditional Throughput]
+ (#Appendix-B:-Conditional-Throughput). The impact of that usage is
+ discussed in subsections leading to [Goal Result] (#Goal-Result).
+
+ Informally, the impact of lossy trials is controlled by this value.
+ Effectively, Goal Exceed Ratio is a percentage of full-length trials
+ that may be lossy without the load being classified as the [Relevant
+ Upper Bound] (#Relevant-Upper-Bound).
+
+3.5.5. Goal Width
+
+ Definition:
+
+ A value used as a threshold for deciding whether two trial load
+ values are close enough.
+
+ Discussion:
+
+ If present, the value MUST be positive.
+
+ Informally, this acts as a stopping condition, controlling the
+ precision of the search. The search stops if every goal has reached
+ its precision.
+
+ Implementations without this attribute MUST give the Controller other
+ ways to control the search stopping conditions.
+
+ Absolute load difference and relative load difference are two popular
+ choices, but implementations may choose a different way to specify
+ width.
+
+ The test report MUST make it clear what specific quantity is used as
+ Goal Width.
+
+ It is RECOMMENDED to set the Goal Width (as relative difference)
+ value to a value no smaller than the Goal Loss Ratio. (The reason is
+ not obvious, see [Throughput] (#Throughput) if interested.)
+
+3.5.6. Search Goal
+
+ Definition:
+
+ The Search Goal is a composite quantity consisting of several
+ attributes, some of them are required.
+
+
+
+
+
+   Required attributes:
+
+   *  Goal Final Trial Duration
+
+   *  Goal Duration Sum
+
+   *  Goal Loss Ratio
+
+   *  Goal Exceed Ratio
+
+   Optional attribute:
+
+   *  Goal Width
+
+ Discussion:
+
+ Implementations MAY add their own attributes. Those additional
+ attributes may be required by the implementation even if they are not
+ required by MLRsearch specification. But it is RECOMMENDED for those
+ implementations to support missing values by computing reasonable
+ defaults.
+
+ The meaning of listed attributes is formally given only by their
+ indirect effect on the search results.
+
+ Informally, later sections provide additional intuitions and examples
+ of the Search Goal attribute values.
+
+ An example of additional attributes required by some implementations
+ is Goal Initial Trial Duration, together with another attribute that
+ controls possible intermediate Trial Duration values. The reasonable
+ default in this case is using the Goal Final Trial Duration and no
+ intermediate values.
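+
+   As an illustration only, a Search Goal could be represented by the
+   following Python sketch. The class and attribute names are
+   hypothetical; only the attribute semantics come from this
+   specification.
+
+      from dataclasses import dataclass
+      from typing import Optional
+
+      @dataclass(frozen=True)
+      class SearchGoal:
+          goal_final_trial_duration: float  # seconds, positive
+          goal_duration_sum: float          # seconds, positive
+          goal_loss_ratio: float            # >= 0.0 and < 1.0
+          goal_exceed_ratio: float          # >= 0.0 and < 1.0
+          goal_width: Optional[float] = None  # e.g. relative width
+
+          def __post_init__(self):
+              # Sanity checks mirroring the constraints above.
+              assert self.goal_final_trial_duration > 0.0
+              assert self.goal_duration_sum > 0.0
+              assert 0.0 <= self.goal_loss_ratio < 1.0
+              assert 0.0 <= self.goal_exceed_ratio < 1.0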
+
+3.5.7. Controller Input
+
+ Definition:
+
+ Controller Input is a composite quantity required as an input for the
+ Controller. The only REQUIRED attribute is a list of Search Goal
+ instances.
+
+ Discussion:
+
+ MLRsearch implementations MAY use additional attributes. Those
+ additional attributes may be required by the implementation even if
+ they are not required by MLRsearch specification.
+
+ Formally, the Manager does not apply any Controller configuration
+ apart from one Controller Input instance.
+
+ For example, Traffic Profile is configured on the Measurer by the
+ Manager (without explicit assistance of the Controller).
+
+
+
+
+
+
+
+ The order of Search Goal instances in a list SHOULD NOT have a big
+ impact on Controller Output (see section [Controller Output]
+   (#Controller-Output)), but MLRsearch implementations MAY base their
+ behavior on the order of Search Goal instances in a list.
+
+ An example of an optional attribute (outside the list of Search
+ Goals) required by some implementations is Max Load. While this is a
+ frequently used configuration parameter, already governed by
+ [RFC2544] (section 20. Maximum frame rate) and [RFC2285] (3.5.3
+ Maximum offered load (MOL)), some implementations may detect or
+ discover it instead.
+
+ In MLRsearch specification, the [Relevant Upper Bound] (#Relevant-
+ Upper-Bound) is added as a required attribute precisely because it
+ makes the search result independent of Max Load value.
+
+3.6. Search Goal Examples
+
+3.6.1. RFC2544 Goal
+
+ The following set of values makes the search result unconditionally
+   compliant with [RFC2544] (section 24. Trial duration):
+
+ * Goal Final Trial Duration = 60 seconds
+
+ * Goal Duration Sum = 60 seconds
+
+ * Goal Loss Ratio = 0%
+
+ * Goal Exceed Ratio = 0%
+
+ The latter two attributes are enough to make the search goal
+   conditionally compliant; adding the first attribute makes it
+ unconditionally compliant.
+
+ The second attribute (Goal Duration Sum) only prevents MLRsearch from
+ repeating zero-loss full-length trials.
+
+   A non-zero exceed ratio could prolong the search and allow loss
+   inversion between a lossy short trial at a lower load and a
+   zero-loss full-length trial at a higher load. From [RFC2544] alone,
+   it is not clear whether that higher load could be considered a
+   compliant throughput.
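+
+   Using the hypothetical SearchGoal sketch shown earlier, this goal
+   could be expressed as follows (illustration only):
+
+      rfc2544_goal = SearchGoal(
+          goal_final_trial_duration=60.0,  # seconds
+          goal_duration_sum=60.0,  # one full-length trial suffices
+          goal_loss_ratio=0.0,     # zero frame loss
+          goal_exceed_ratio=0.0,   # no lossy full-length trial
+      )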
+
+
+
+
+
+
+
+
+
+3.6.2. TST009 Goal
+
+ One of the alternatives to RFC2544 is described in [TST009] (section
+ 12.3.3 Binary search with loss verification). The idea there is to
+   repeat lossy trials, hoping for zero loss on the second try, so the
+   results are closer to the noiseless end of the performance spectrum,
+   and more repeatable and comparable.
+
+ Only the variant with "z = infinity" is achievable with MLRsearch.
+
+ For example, for "r = 2" variant, the following search goal should be
+ used:
+
+ * Goal Final Trial Duration = 60 seconds
+
+ * Goal Duration Sum = 120 seconds
+
+ * Goal Loss Ratio = 0%
+
+ * Goal Exceed Ratio = 50%
+
+ If the first 60s trial has zero loss, it is enough for MLRsearch to
+ stop measuring at that load, as even a second lossy trial would still
+ fit within the exceed ratio.
+
+   But if the first trial is lossy, MLRsearch also needs to perform the
+   second trial to classify that load. As the Goal Duration Sum is
+   twice the Goal Final Trial Duration, a third full-length trial is
+   never needed.
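+
+   Using the same hypothetical SearchGoal sketch, the "r = 2" variant
+   could be expressed as follows (illustration only):
+
+      tst009_r2_goal = SearchGoal(
+          goal_final_trial_duration=60.0,
+          goal_duration_sum=120.0,  # up to two full-length trials
+          goal_loss_ratio=0.0,
+          goal_exceed_ratio=0.5,    # one of the two may be lossy
+      )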
+
+3.7. Result Terms
+
+ Before defining the output of the Controller, it is useful to define
+ what the Goal Result is.
+
+ The Goal Result is a composite quantity.
+
+   The following subsections define its attributes first, before
+   describing the Goal Result quantity itself.
+
+ There is a correspondence between Search Goals and Goal Results.
+ Most of the following subsections refer to a given Search Goal, when
+ defining attributes of the Goal Result. Conversely, at the end of
+ the search, each Search Goal has its corresponding Goal Result.
+
+ Conceptually, the search can be seen as a process of load
+ classification, where the Controller attempts to classify some loads
+ as an Upper Bound or a Lower Bound with respect to some Search Goal.
+
+
+
+ Before defining real attributes of the goal result, it is useful to
+ define bounds in general.
+
+3.7.1. Relevant Upper Bound
+
+ Definition:
+
+ The Relevant Upper Bound is the smallest trial load value that is
+ classified at the end of the search as an upper bound (see
+ [Appendix A: Load Classification] (#Appendix-A:-Load-Classification))
+ for the given Search Goal.
+
+ Discussion:
+
+   One search goal can have many different loads classified as an upper
+ bound. At the end of the search, one of those loads will be the
+ smallest, becoming the relevant upper bound for that goal.
+
+ In more detail, the set of all trial outputs (both short and full-
+ length, enough of them according to Goal Duration Sum) performed at
+ that smallest load failed to uphold all the requirements of the given
+ Search Goal, mainly the Goal Loss Ratio in combination with the Goal
+ Exceed Ratio.
+
+ If Max Load does not cause enough lossy trials, the Relevant Upper
+ Bound does not exist. Conversely, if Relevant Upper Bound exists, it
+ is not affected by Max Load value.
+
+3.7.2. Relevant Lower Bound
+
+ Definition:
+
+ The Relevant Lower Bound is the largest trial load value among those
+ smaller than the Relevant Upper Bound, that got classified at the end
+ of the search as a lower bound (see [Appendix A: Load Classification]
+ (#Appendix-A:-Load-Classification)) for the given Search Goal.
+
+ Discussion:
+
+   Only loads smaller than the Relevant Upper Bound are considered;
+   among them, the largest load classified as a lower bound becomes the
+   Relevant Lower Bound. With loss inversion, the stricter upper bound
+   matters.
+
+ In more detail, the set of all trial outputs (both short and full-
+ length, enough of them according to Goal Duration Sum) performed at
+ that largest load managed to uphold all the requirements of the given
+ Search Goal, mainly the Goal Loss Ratio in combination with the Goal
+ Exceed Ratio.
+
+
+
+   If no load had enough low-loss trials, the relevant lower bound MAY
+ not exist.
+
+ Strictly speaking, if the Relevant Upper Bound does not exist, the
+ Relevant Lower Bound also does not exist. In that case, Max Load is
+ classified as a lower bound, but it is not clear whether a higher
+ lower bound would be found if the search used a higher Max Load
+ value.
+
+ For a regular Goal Result, the distance between the Relevant Lower
+ Bound and the Relevant Upper Bound MUST NOT be larger than the Goal
+ Width, if the implementation offers width as a goal attribute.
+
+   Searching for another search goal may cause a loss inversion
+ phenomenon, where a lower load is classified as an upper bound, but
+ also a higher load is classified as a lower bound for the same search
+ goal. The definition of the Relevant Lower Bound ignores such high
+ lower bounds.
+
+3.7.3. Conditional Throughput
+
+ Definition:
+
+ The Conditional Throughput (see section [Appendix B: Conditional
+ Throughput] (#Appendix-B:-Conditional-Throughput)) as evaluated at
+ the Relevant Lower Bound of the given Search Goal at the end of the
+ search.
+
+ Discussion:
+
+ Informally, this is a typical trial forwarding rate, expected to be
+ seen at the Relevant Lower Bound of the given Search Goal.
+
+ But frequently it is only a conservative estimate thereof, as
+ MLRsearch implementations tend to stop gathering more data as soon as
+ they confirm the value cannot get worse than this estimate within the
+ Goal Duration Sum.
+
+ This value is RECOMMENDED to be used when evaluating repeatability
+   and comparability of different MLRsearch implementations.
+
+3.7.4. Goal Result
+
+ Definition:
+
+
+
+
+
+
+
+ The Goal Result is a composite quantity consisting of several
+ attributes. Relevant Upper Bound and Relevant Lower Bound are
+ REQUIRED attributes, Conditional Throughput is a RECOMMENDED
+ attribute.
+
+ Discussion:
+
+ Depending on SUT behavior, it is possible that one or both relevant
+   bounds do not exist. A Goal Result instance where all the required
+   attribute values exist is informally called a Regular Goal Result
+   instance; otherwise, it is an Irregular Goal Result instance.
+
+ A typical Irregular Goal Result is when all trials at the Max Load
+ have zero loss, as the Relevant Upper Bound does not exist in that
+ case.
+
+   It is RECOMMENDED that the test report display such results
+   appropriately, although the MLRsearch specification does not
+   prescribe how.
+
+   Anything else regarding Irregular Goal Results, including their role
+   in the stopping conditions of the search, is outside the scope of
+   this document.
+
+3.7.5. Search Result
+
+ Definition:
+
+ The Search Result is a single composite object that maps each Search
+ Goal instance to a corresponding Goal Result instance.
+
+ Discussion:
+
+ Alternatively, the Search Result can be implemented as an ordered
+ list of the Goal Result instances, matching the order of Search Goal
+ instances.
+
+ The Search Result (as a mapping) MUST map from all the Search Goal
+ instances present in the Controller Input.
+
+3.7.6. Controller Output
+
+ Definition:
+
+ The Controller Output is a composite quantity returned from the
+ Controller to the Manager at the end of the search. The Search
+ Result instance is its only REQUIRED attribute.
+
+
+
+
+ Discussion:
+
+ MLRsearch implementation MAY return additional data in the Controller
+ Output.
+
+3.8. MLRsearch Architecture
+
+ MLRsearch architecture consists of three main system components: the
+ Manager, the Controller, and the Measurer.
+
+ The architecture also implies the presence of other components, such
+ as the SUT and the Tester (as a sub-component of the Measurer).
+
+ Protocols of communication between components are generally left
+ unspecified. For example, when MLRsearch specification mentions
+ "Controller calls Measurer", it is possible that the Controller
+ notifies the Manager to call the Measurer indirectly instead. This
+ way the Measurer implementations can be fully independent from the
+ Controller implementations, e.g. programmed in different programming
+ languages.
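+
+   As an illustration of the component boundaries only, the abstract
+   interfaces could look as follows in Python. The method names and
+   signatures are hypothetical and not prescribed by this
+   specification.
+
+      from abc import ABC, abstractmethod
+
+      class AbstractMeasurer(ABC):
+          """Performs one trial per call."""
+          @abstractmethod
+          def measure(self, trial_input):
+              """Return a Trial Output instance."""
+
+      class AbstractController(ABC):
+          """Drives the search by calling the Measurer repeatedly."""
+          @abstractmethod
+          def search(self, controller_input, measurer):
+              """Return a Controller Output instance."""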
+
+3.8.1. Measurer
+
+ Definition:
+
+ The Measurer is an abstract system component that when called with a
+ [Trial Input] (#Trial-Input) instance, performs one [Trial] (#Trial),
+ and returns a [Trial Output] (#Trial-Output) instance.
+
+ Discussion:
+
+ This definition assumes the Measurer is already initialized. In
+ practice, there may be additional steps before the search, e.g. when
+ the Manager configures the traffic profile (either on the Measurer or
+ on its tester sub-component directly) and performs a warmup (if the
+ tester requires one).
+
+ It is the responsibility of the Measurer implementation to uphold any
+ requirements and assumptions present in MLRsearch specification, e.g.
+ trial forwarding ratio not being larger than one.
+
+ Implementers have some freedom. For example [RFC2544] (section 10.
+ Verifying received frames) gives some suggestions (but not
+ requirements) related to duplicated or reordered frames.
+ Implementations are RECOMMENDED to document their behavior related to
+ such freedoms in as detailed a way as possible.
+
+
+
+
+
+ It is RECOMMENDED to benchmark the test equipment first, e.g. connect
+ sender and receiver directly (without any SUT in the path), find a
+ load value that guarantees the offered load is not too far from the
+ intended load, and use that value as the Max Load value. When
+ testing the real SUT, it is RECOMMENDED to turn any big difference
+ between the intended load and the offered load into increased Trial
+ Loss Ratio.
+
+   Neither of the two recommendations is made into a requirement,
+   because it is not easy to tell when the difference is big enough in
+   a way that would be disentangled from other Measurer freedoms.
+
+3.8.2. Controller
+
+ Definition:
+
+ The Controller is an abstract system component that when called with
+ a Controller Input instance repeatedly computes Trial Input instance
+ for the Measurer, obtains corresponding Trial Output instances, and
+ eventually returns a Controller Output instance.
+
+ Discussion:
+
+   Informally, the Controller has considerable freedom in the selection
+   of Trial Inputs, and implementations aim to achieve the Search Goals
+   in the shortest expected time.
+
+ The Controller's role in optimizing the overall search time
+ distinguishes MLRsearch algorithms from simpler search procedures.
+
+ Informally, each implementation can have different stopping
+ conditions. Goal Width is only one example. In practice,
+ implementation details do not matter, as long as Goal Results are
+ regular.
+
+3.8.3. Manager
+
+ Definition:
+
+   The Manager is an abstract system component that is responsible for
+ configuring other components, calling the Controller component once,
+ and for creating the test report following the reporting format as
+ defined in [RFC2544] (section 26. Benchmarking tests).
+
+ Discussion:
+
+
+
+
+
+
+ The Manager initializes the SUT, the Measurer (and the Tester if
+ independent) with their intended configurations before calling the
+ Controller.
+
+ The Manager does not need to be able to tweak any Search Goal
+ attributes, but it MUST report all applied attribute values even if
+ not tweaked.
+
+ In principle, there should be a "user" (human or CI) that "starts" or
+ "calls" the Manager and receives the report. The Manager MAY be able
+   to be called more than once this way.
+
+3.9. Implementation Compliance
+
+ Any networking measurement setup where there can be logically
+ delineated system components and there are components satisfying
+ requirements for the Measurer, the Controller and the Manager, is
+ considered to be compliant with MLRsearch design.
+
+ These components can be seen as abstractions present in any testing
+ procedure. For example, there can be a single component acting both
+ as the Manager and the Controller, but as long as values of required
+ attributes of Search Goals and Goal Results are visible in the test
+ report, the Controller Input instance and output instance are
+ implied.
+
+ For example, any setup for conditionally (or unconditionally)
+ compliant [RFC2544] throughput testing can be understood as a
+ MLRsearch architecture, assuming there is enough data to reconstruct
+ the Relevant Upper Bound.
+
+ See [RFC2544 Goal] (#RFC2544-Goal) subsection for equivalent Search
+ Goal.
+
+ Any test procedure that can be understood as (one call to the Manager
+ of) MLRsearch architecture is said to be compliant with MLRsearch
+ specification.
+
+4. Additional Considerations
+
+ This section focuses on additional considerations, intuitions and
+ motivations pertaining to MLRsearch methodology.
+
+
+
+
+
+
+
+
+
+4.1. MLRsearch Versions
+
+   The MLRsearch algorithm has been developed in a code-first approach:
+ a Python library has been created, debugged, used in production and
+ published in PyPI before the first descriptions (even informal) were
+ published.
+
+ But the code (and hence the description) was evolving over time.
+ Multiple versions of the library were used over past several years,
+ and later code was usually not compatible with earlier descriptions.
+
+ The code in (some version of) MLRsearch library fully determines the
+ search process (for a given set of configuration parameters), leaving
+ no space for deviations.
+
+ This historic meaning of MLRsearch, as a family of search algorithm
+ implementations, leaves plenty of space for future improvements, at
+   the cost of poor comparability of results across search algorithm
+   implementations.
+
+ There are two competing needs. There is the need for standardization
+ in areas critical to comparability. There is also the need to allow
+ flexibility for implementations to innovate and improve in other
+ areas. This document defines MLRsearch as a new specification in a
+ manner that aims to fairly balance both needs.
+
+4.2. Stopping Conditions
+
+ [RFC2544] prescribes that after performing one trial at a specific
+ offered load, the next offered load should be larger or smaller,
+ based on frame loss.
+
+ The usual implementation uses binary search. Here a lossy trial
+ becomes a new upper bound, a lossless trial becomes a new lower
+ bound. The span of values between the tightest lower bound and the
+ tightest upper bound (including both values) forms an interval of
+ possible results, and after each trial the width of that interval
+ halves.
+
+ Usually the binary search implementation tracks only the two tightest
+ bounds, simply calling them bounds. But the old values still remain
+ valid bounds, just not as tight as the new ones.
+
+ After some number of trials, the tightest lower bound becomes the
+   throughput. [RFC2544] does not specify when, if ever, the search
+   should stop.
+
+ MLRsearch introduces a concept of [Goal Width] (#Goal-Width).
+
+
+
+ The search stops when the distance between the tightest upper bound
+ and the tightest lower bound is smaller than a user-configured value,
+ called Goal Width from now on. In other words, the interval width at
+ the end of the search has to be no larger than the Goal Width.
+
+ This Goal Width value therefore determines the precision of the
+ result. Due to the fact that MLRsearch specification requires a
+ particular structure of the result (see [Trial Result] (#Trial-
+ Result) section), the result itself does contain enough information
+ to determine its precision, thus it is not required to report the
+ Goal Width value.
+
+ This allows MLRsearch implementations to use stopping conditions
+ different from Goal Width.
+
+4.3. Load Classification
+
+ MLRsearch keeps the basic logic of binary search (tracking tightest
+ bounds, measuring at the middle), perhaps with minor technical
+ differences.
+
+ MLRsearch algorithm chooses an intended load (as opposed to the
+ offered load), the interval between bounds does not need to be split
+ exactly into two equal halves, and the final reported structure
+ specifies both bounds.
+
+ The biggest difference is that to classify a load as an upper or
+ lower bound, MLRsearch may need more than one trial (depending on
+ configuration options) to be performed at the same intended load.
+
+   In consequence, even if a load already has a few trial results, it
+   may still be classified as undecided, neither a lower bound nor an
+ upper bound.
+
+ An explanation of the classification logic is given in the next
+ section [Logic of Load Classification] (#Logic-of-Load-
+ Classification), as it heavily relies on other subsections of this
+ section.
+
+ For repeatability and comparability reasons, it is important that
+ given a set of trial results, all implementations of MLRsearch
+ classify the load equivalently.
+
+4.4. Loss Ratios
+
+ Another difference between MLRsearch and [RFC2544] binary search is
+ in the goals of the search. [RFC2544] has a single goal, based on
+ classifying full-length trials as either lossless or lossy.
+
+
+
+ MLRsearch, as the name suggests, can search for multiple goals,
+ differing in their loss ratios. The precise definition of the Goal
+ Loss Ratio will be given later. The [RFC2544] throughput goal then
+ simply becomes a zero Goal Loss Ratio. Different goals also may have
+ different Goal Widths.
+
+ A set of trial results for one specific intended load value can
+ classify the load as an upper bound for some goals, but a lower bound
+ for some other goals, and undecided for the rest of the goals.
+
+ Therefore, the load classification depends not only on trial results,
+ but also on the goal. The overall search procedure becomes more
+ complicated, when compared to binary search with a single goal, but
+ most of the complications do not affect the final result, except for
+ one phenomenon, loss inversion.
+
+4.5. Loss Inversion
+
+ In [RFC2544] throughput search using bisection, any load with a lossy
+ trial becomes a hard upper bound, meaning every subsequent trial has
+ a smaller intended load.
+
+ But in MLRsearch, a load that is classified as an upper bound for one
+ goal may still be a lower bound for another goal, and due to the
+ other goal MLRsearch will probably perform trials at even higher
+ loads. What to do when all such higher load trials happen to have
+ zero loss? Does it mean the earlier upper bound was not real? Does
+ it mean the later lossless trials are not considered a lower bound?
+ Surely we do not want to have an upper bound at a load smaller than a
+ lower bound.
+
+ MLRsearch is conservative in these situations. The upper bound is
+ considered real, and the lossless trials at higher loads are
+ considered to be a coincidence, at least when computing the final
+ result.
+
+ This is formalized using new notions, the [Relevant Upper Bound]
+ (#Relevant-Upper-Bound) and the [Relevant Lower Bound] (#Relevant-
+ Lower-Bound). Load classification is still based just on the set of
+ trial results at a given intended load (trials at other loads are
+ ignored), making it possible to have a lower load classified as an
+ upper bound, and a higher load classified as a lower bound (for the
+ same goal). The Relevant Upper Bound (for a goal) is the smallest
+ load classified as an upper bound. But the Relevant Lower Bound is
+ not simply the largest among lower bounds. It is the largest load
+ among loads that are lower bounds while also being smaller than the
+ Relevant Upper Bound.
+
+
+
+
+ With these definitions, the Relevant Lower Bound is always smaller
+ than the Relevant Upper Bound (if both exist), and the two relevant
+ bounds are used analogously as the two tightest bounds in the binary
+ search. When they are less than the Goal Width apart, the relevant
+ bounds are used in the output.
+
+ One consequence is that every trial result can have an impact on the
+ search result. That means if your SUT (or your traffic generator)
+ needs a warmup, be sure to warm it up before starting the search.
+
+4.6. Exceed Ratio
+
+ The idea of performing multiple trials at the same load comes from a
+ model where some trial results (those with high loss) are affected by
+ infrequent effects, causing poor repeatability of [RFC2544]
+ throughput results. See the discussion about noiseful and noiseless
+ ends of the SUT performance spectrum in section [DUT in SUT] (#DUT-
+ in-SUT). Stable results are closer to the noiseless end of the SUT
+ performance spectrum, so MLRsearch may need to allow some frequency
+ of high-loss trials to ignore the rare but big effects near the
+ noiseful end.
+
+ MLRsearch can do such trial result filtering, but it needs a
+   configuration option to tell it how frequent the infrequent big loss
+   can be. This option is called the exceed ratio. It tells MLRsearch
+ what ratio of trials (more exactly what ratio of trial seconds) can
+ have a [Trial Loss Ratio] (#Trial-Loss-Ratio) larger than the Goal
+ Loss Ratio and still be classified as a lower bound. Zero exceed
+ ratio means all trials have to have a Trial Loss Ratio equal to or
+ smaller than the Goal Loss Ratio.
+
+ For explainability reasons, the RECOMMENDED value for exceed ratio is
+ 0.5, as it simplifies some later concepts by relating them to the
+ concept of median.
+
+4.7. Duration Sum
+
+ When more than one trial is intended to classify a load, MLRsearch
+ also needs something that controls the number of trials needed.
+ Therefore, each goal also has an attribute called duration sum.
+
+ The meaning of a [Goal Duration Sum] (#Goal-Duration-Sum) is that
+ when a load has (full-length) trials whose trial durations when
+ summed up give a value at least as big as the Goal Duration Sum
+ value, the load is guaranteed to be classified either as an upper
+ bound or a lower bound for that goal.
+
+
+
+
+
+ Due to the fact that the duration sum has a big impact on the overall
+ search duration, and [RFC2544] prescribes wait intervals around trial
+ traffic, the MLRsearch algorithm is allowed to sum durations that are
+ different from the actual trial traffic durations.
+
+ In the MLRsearch specification, the different duration values are
+ called [Trial Effective Duration] (#Trial-Effective-Duration).
+
+4.8. Short Trials
+
+ MLRsearch requires each goal to specify its final trial duration.
+ Full-length trial is a shorter name for a trial whose intended trial
+ duration is equal to (or longer than) the goal final trial duration.
+
+ Section 24 of [RFC2544] already anticipates possible time savings
+ when short trials (shorter than full-length trials) are used. Full-
+ length trials are the opposite of short trials, so they may also be
+ called long trials.
+
+ Any MLRsearch implementation may include its own configuration
+ options which control when and how MLRsearch chooses to use short
+ trial durations.
+
+ For explainability reasons, when exceed ratio of 0.5 is used, it is
+ recommended for the Goal Duration Sum to be an odd multiple of the
+ full trial durations, so Conditional Throughput becomes identical to
+ a median of a particular set of trial forwarding rates.
+
+ The presence of short trial results complicates the load
+ classification logic.
+
+ Full details are given later in section [Logic of Load
+ Classification] (#Logic-of-Load-Classification). In a nutshell,
+ results from short trials may cause a load to be classified as an
+ upper bound. This may cause loss inversion, and thus lower the
+   Relevant Lower Bound below what the classification would be when
+   considering full-length trials only.
+
+4.9. Throughput
+
+ Due to the fact that testing equipment takes the intended load as an
+ input parameter for a trial measurement, any load search algorithm
+ needs to deal with intended load values internally.
+
+
+
+
+
+
+
+
+ But in the presence of goals with a non-zero loss ratio, the intended
+ load usually does not match the user's intuition of what a throughput
+ is. The forwarding rate (as defined in [RFC2285] section 3.6.1) is
+ better, but it is not obvious how to generalize it for loads with
+ multiple trial results and a non-zero [Goal Loss Ratio] (#Goal-Loss-
+ Ratio).
+
+ The best example is also the main motivation: hard limit performance.
+ Even if the medium allows higher performance, the SUT interfaces may
+   have their own additional limitations, e.g. a specific fps limit on
+   the NIC (a very common occurrence).
+
+ Ideally, those should be known and used when computing Max Load. But
+   if Max Load is higher than what the interface can receive or
+   transmit, there will be a "hard limit" observed in trial results.
+   Imagine the
+ hard limit is at 100 Mfps, Max Load is higher, and the goal loss
+ ratio is 0.5%. If DUT has no additional losses, 0.5% loss ratio will
+ be achieved at 100.5025 Mfps (the relevant lower bound). But it is
+ not intuitive to report SUT performance as a value that is larger
+ than known hard limit. We need a generalization of RFC2544
+ throughput, different from just the relevant lower bound.
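+
+   A worked computation of the numbers used in this example
+   (illustration only, assuming the DUT loses no frames below the
+   hard limit):
+
+      hard_limit = 100.0e6     # fps the SUT interface can forward
+      goal_loss_ratio = 0.005  # 0.5%
+      # Smallest load at which the loss ratio reaches 0.5%:
+      relevant_lower_bound = hard_limit / (1.0 - goal_loss_ratio)
+      # relevant_lower_bound is approximately 100.5025e6 fps,
+      # while the forwarding rate stays at 100.0e6 fps.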
+
+ MLRsearch defines one such generalization, called the Conditional
+ Throughput. It is the trial forwarding rate from one of the trials
+ performed at the load in question. Determining which trial exactly
+ is defined in [MLRsearch Specification] (#MLRsearch-Specification),
+ and in [Appendix B: Conditional Throughput] (#Appendix-B:-
+ Conditional-Throughput).
+
+ In the hard limit example, 100.5 Mfps load will still have only 100.0
+ Mfps forwarding rate, nicely confirming the known limitation.
+
+ Conditional Throughput is partially related to load classification.
+ If a load is classified as a lower bound for a goal, the Conditional
+ Throughput can be calculated from trial results, and guaranteed to
+   show a loss ratio no larger than the Goal Loss Ratio.
+
+ Note that when comparing the best (all zero loss) and worst case (all
+ loss just below Goal Loss Ratio), the same Relevant Lower Bound value
+ may result in the Conditional Throughput differing up to the Goal
+ Loss Ratio.
+
+
+
+
+
+
+
+
+
+
+ Therefore it is rarely needed to set the Goal Width (if expressed as
+ the relative difference of loads) below the Goal Loss Ratio. In
+ other words, setting the Goal Width below the Goal Loss Ratio may
+ cause the Conditional Throughput for a larger loss ratio to become
+ smaller than a Conditional Throughput for a goal with a smaller Goal
+ Loss Ratio, which is counter-intuitive, considering they come from
+ the same search. Therefore it is RECOMMENDED to set the Goal Width
+ to a value no smaller than the Goal Loss Ratio.
+
+ Overall, this Conditional Throughput does behave well for
+ comparability purposes.
+
+4.10. Search Time
+
+ MLRsearch was primarily developed to reduce the time required to
+ determine a throughput, either the [RFC2544] compliant one, or some
+ generalization thereof. The art of achieving short search times is
+ mainly in the smart selection of intended loads (and intended
+ durations) for the next trial to perform.
+
+ While there is an indirect impact of the load selection on the
+ reported values, in practice such impact tends to be small, even for
+ SUTs with quite a broad performance spectrum.
+
+ A typical example of two approaches to load selection leading to
+ different Relevant Lower Bounds is when the interval is split in a
+ very uneven way. Any implementation choosing loads very close to the
+ current Relevant Lower Bound is quite likely to eventually stumble
+ upon a trial result with poor performance (due to SUT noise). For an
+ implementation choosing loads very close to the current Relevant
+ Upper Bound, this is unlikely, as it examines more loads that can see
+ a performance close to the noiseless end of the SUT performance
+ spectrum.
+
+   However, as even splits optimize search duration at a given
+   precision, MLRsearch implementations that prioritize minimizing
+   search time are
+ unlikely to suffer from any such bias.
+
+ Therefore, this document remains quite vague on load selection and
+ other optimization details, and configuration attributes related to
+ them. Assuming users prefer libraries that achieve short overall
+ search time, the definition of the Relevant Lower Bound should be
+ strict enough to ensure result repeatability and comparability
+ between different implementations, while not restricting future
+ implementations much.
+
+
+
+
+
+
+4.11. [RFC2544] Compliance
+
+ Some Search Goal instances lead to results compliant with RFC2544.
+ See [RFC2544 Goal] (#RFC2544-Goal) for more details regarding both
+ conditional and unconditional compliance.
+
+ The presence of other Search Goals does not affect the compliance of
+ this Goal Result. The Relevant Lower Bound and the Conditional
+ Throughput are in this case equal to each other, and the value is the
+ [RFC2544] throughput.
+
+5. Logic of Load Classification
+
+5.1. Introductory Remarks
+
+ This chapter continues with explanations, but this time more precise
+ definitions are needed for readers to follow the explanations.
+
+ Descriptions in this section are wordy and implementers should read
+ [MLRsearch Specification] (#MLRsearch-Specification) section and
+ Appendices for more concise definitions.
+
+ The two areas of focus here are load classification and the
+ Conditional Throughput.
+
+ To start with [Performance Spectrum] (#Performance-Spectrum)
+ subsection contains definitions needed to gain insight into what
+ Conditional Throughput means. Remaining subsections discuss load
+ classification.
+
+ For load classification, it is useful to define *good trials* and
+ *bad trials*:
+
+ * *Bad trial*: Trial is called bad (according to a goal) if its
+ [Trial Loss Ratio] (#Trial-Loss-Ratio) is larger than the [Goal
+ Loss Ratio] (#Goal-Loss-Ratio).
+
+ * *Good trial*: Trial that is not bad is called good.
+
+5.2. Performance Spectrum
+
+   Description:
+
+ There are several equivalent ways to explain the Conditional
+ Throughput computation. One of the ways relies on performance
+ spectrum.
+
+
+
+
+
+ Take an intended load value, a trial duration value, and a finite set
+ of trial results, with all trials measured at that load value and
+ duration value.
+
+   The performance spectrum is the function that maps any non-negative
+   real number to the sum of durations of those trials in the set
+   whose trial forwarding rate equals that number (mapping to zero if
+   no trial has that particular forwarding rate).
+
+ A related function, defined if there is at least one trial in the
+ set, is the performance spectrum divided by the sum of the durations
+ of all trials in the set.
+
+ That function is called the performance probability function, as it
+ satisfies all the requirements for probability mass function of a
+ discrete probability distribution, the one-dimensional random
+ variable being the trial forwarding rate.
+
+ These functions are related to the SUT performance spectrum, as
+ sampled by the trials in the set.
+
+ Take a set of all full-length trials performed at the Relevant Lower
+ Bound, sorted by decreasing trial forwarding rate. The sum of the
+ durations of those trials may be less than the Goal Duration Sum, or
+ not. If it is less, add an imaginary trial result with zero trial
+ forwarding rate, such that the new sum of durations is equal to the
+ Goal Duration Sum. This is the set of trials to use.
+
+   Consider the performance probability function constructed from this
+   set, and find its quantile corresponding to the Goal Exceed Ratio,
+   counting the probability mass from the lowest trial forwarding
+   rates upward. If the quantile touches two trials, the larger trial
+   forwarding rate (from the trial result sorted earlier) is used.
+
+ The resulting quantity is the Conditional Throughput of the goal in
+ question.
+
+ A set of examples follows.
+
+5.2.1. First Example
+
+ * [Goal Exceed Ratio] (#Goal-Exceed-Ratio) = 0 and [Goal Duration
+ Sum] (#Goal-Duration-Sum) has been reached.
+
+ * Conditional Throughput is the smallest trial forwarding rate among
+ the trials.
+
+
+
+
+
+
+5.2.2. Second Example
+
+ * Goal Exceed Ratio = 0 and Goal Duration Sum has not been reached
+ yet.
+
+ * Due to the missing duration sum, the worst case may still happen,
+ so the Conditional Throughput is zero.
+
+ * This is not reported to the user, as this load cannot become the
+ Relevant Lower Bound yet.
+
+5.2.3. Third Example
+
+ * Goal Exceed Ratio = 50% and Goal Duration Sum is two seconds.
+
+ * One trial is present with the duration of one second and zero
+ loss.
+
+ * The imaginary trial is added with the duration of one second and
+ zero trial forwarding rate.
+
+ * The median would touch both trials, so the Conditional Throughput
+ is the trial forwarding rate of the one non-imaginary trial.
+
+ * As that had zero loss, the value is equal to the offered load.
+
+5.2.4. Summary
+
+ While the Conditional Throughput is a generalization of the trial
+ forwarding rate, its definition is not an obvious one.
+
+ Other than the trial forwarding rate, the other source of intuition
+   is the quantile in general, and the median in the recommended case.
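+
+   The following Python sketch (illustration only, reusing the
+   hypothetical SearchGoal attributes shown earlier) implements the
+   quantile computation described above; Appendix B remains the
+   authoritative definition.
+
+      def conditional_throughput(full_length_trials, goal):
+          # full_length_trials: list of (forwarding_rate, duration)
+          # pairs measured at the Relevant Lower Bound.
+          measured_sum = sum(d for _, d in full_length_trials)
+          whole_sum = max(measured_sum, goal.goal_duration_sum)
+          trials = sorted(full_length_trials, reverse=True)
+          if measured_sum < whole_sum:
+              # Imaginary zero-rate trial fills the missing time.
+              trials.append((0.0, whole_sum - measured_sum))
+          # Quantile at the Goal Exceed Ratio, counting duration
+          # from the lowest forwarding rates upward.
+          quantile_sum = (1.0 - goal.goal_exceed_ratio) * whole_sum
+          accumulated = 0.0
+          for rate, duration in trials:
+              accumulated += duration
+              if accumulated >= quantile_sum:
+                  # A quantile touching two trials returns the
+                  # larger forwarding rate (reached first).
+                  return rate
+          return trials[-1][0]
+
+   Applied to the Third Example above, this sketch returns the trial
+   forwarding rate of the single non-imaginary trial, as expected.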
+
+5.3. Trials with Single Duration
+
+ When goal attributes are chosen in such a way that every trial has
+ the same intended duration, the load classification is simpler.
+
+ The following description follows the motivation of Goal Loss Ratio,
+ Goal Exceed Ratio, and Goal Duration Sum.
+
+ If the sum of the durations of all trials (at the given load) is less
+ than the Goal Duration Sum, imagine two scenarios:
+
+ * *best case scenario*: all subsequent trials having zero loss, and
+
+ * *worst case scenario*: all subsequent trials having 100% loss.
+
+
+
+ Here we assume there are as many subsequent trials as needed to make
+ the sum of all trials equal to the Goal Duration Sum.
+
+ The exceed ratio is defined using sums of durations (and number of
+ trials does not matter), so it does not matter whether the
+ "subsequent trials" can consist of an integer number of full-length
+ trials.
+
+ In any of the two scenarios, best case and worst case, we can compute
+ the load exceed ratio, as the duration sum of good trials divided by
+ the duration sum of all trials, in both cases including the assumed
+ trials.
+
+   If, even in the best case scenario, the load exceed ratio is larger
+   than the Goal Exceed Ratio, the load is an upper bound.
+
+   If, even in the worst case scenario, the load exceed ratio is not
+   larger than the Goal Exceed Ratio, the load is a lower bound.
+
+ More specifically:
+
+ * Take all trials measured at a given load.
+
+ * The sum of the durations of all bad full-length trials is called
+ the bad sum.
+
+ * The sum of the durations of all good full-length trials is called
+ the good sum.
+
+ * The result of adding the bad sum plus the good sum is called the
+ measured sum.
+
+ * The larger of the measured sum and the Goal Duration Sum is called
+ the whole sum.
+
+ * The whole sum minus the measured sum is called the missing sum.
+
+ * The optimistic exceed ratio is the bad sum divided by the whole
+ sum.
+
+ * The pessimistic exceed ratio is the bad sum plus the missing sum,
+ that divided by the whole sum.
+
+ * If the optimistic exceed ratio is larger than the Goal Exceed
+ Ratio, the load is classified as an upper bound.
+
+ * If the pessimistic exceed ratio is not larger than the Goal Exceed
+ Ratio, the load is classified as a lower bound.
+
+
+
+ * Else, the load is classified as undecided.
+
+ The definition of pessimistic exceed ratio is compatible with the
+ logic in the Conditional Throughput computation, so in this single
+ trial duration case, a load is a lower bound if and only if the
+ Conditional Throughput loss ratio is not larger than the Goal Loss
+ Ratio.
+
+ If it is larger, the load is either an upper bound or undecided.
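+
+   The classification rules above can be summarized by the following
+   Python sketch (illustration only; names are hypothetical and
+   Appendix A holds the normative definition):
+
+      def classify_load(full_length_trials, goal):
+          # full_length_trials: list of (loss_ratio, duration) pairs
+          # measured at one load, all at the full trial duration.
+          bad_sum = sum(d for lr, d in full_length_trials
+                        if lr > goal.goal_loss_ratio)
+          good_sum = sum(d for lr, d in full_length_trials
+                         if lr <= goal.goal_loss_ratio)
+          measured_sum = bad_sum + good_sum
+          whole_sum = max(measured_sum, goal.goal_duration_sum)
+          missing_sum = whole_sum - measured_sum
+          optimistic = bad_sum / whole_sum
+          pessimistic = (bad_sum + missing_sum) / whole_sum
+          if optimistic > goal.goal_exceed_ratio:
+              return "upper bound"
+          if pessimistic <= goal.goal_exceed_ratio:
+              return "lower bound"
+          return "undecided"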
+
+5.4. Trials with Short Duration
+
+5.4.1. Scenarios
+
+ Trials with intended duration smaller than the goal final trial
+ duration are called short trials. The motivation for load
+ classification logic in the presence of short trials is based around
+ a counter-factual case: What would the trial result be if a short
+ trial has been measured as a full-length trial instead?
+
+ There are three main scenarios where human intuition guides the
+ intended behavior of load classification.
+
+5.4.1.1. False Good Scenario
+
+ The user had their reason for not configuring a shorter goal final
+ trial duration. Perhaps SUT has buffers that may get full at longer
+ trial durations. Perhaps SUT shows periodic decreases in performance
+ the user does not want to be treated as noise.
+
+ In any case, many good short trials may become bad full-length trials
+ in the counter-factual case.
+
+ In extreme cases, there are plenty of good short trials and no bad
+ short trials.
+
+ In this scenario, we want the load classification NOT to classify the
+ load as a lower bound, despite the abundance of good short trials.
+
+ Effectively, we want the good short trials to be ignored, so they do
+ not contribute to comparisons with the Goal Duration Sum.
+
+5.4.1.2. True Bad Scenario
+
+ When there is a frame loss in a short trial, the counter-factual
+ full-length trial is expected to lose at least as many frames.
+
+
+
+
+
+ In practice, bad short trials are rarely turning into good full-
+ length trials.
+
+ In extreme cases, there are no good short trials.
+
+ In this scenario, we want the load classification to classify the
+ load as an upper bound just based on the abundance of short bad
+ trials.
+
+ Effectively, we want the bad short trials to contribute to
+ comparisons with the Goal Duration Sum, so the load can be classified
+ sooner.
+
+5.4.1.3. Balanced Scenario
+
+ Some SUTs are quite indifferent to trial duration. Performance
+ probability function constructed from short trial results is likely
+ to be similar to the performance probability function constructed
+ from full-length trial results (perhaps with larger dispersion, but
+ without a big impact on the median quantiles overall).
+
+ For a moderate Goal Exceed Ratio value, this may mean there are both
+ good short trials and bad short trials.
+
+ This scenario is there just to invalidate a simple heuristic of
+ always ignoring good short trials and never ignoring bad short
+ trials, as that simple heuristic would be too biased.
+
+ Yes, the short bad trials are likely to turn into full-length bad
+ trials in the counter-factual case, but there is no information on
+   what the good short trials would turn into.
+
+ The only way to decide safely is to do more trials at full length,
+ the same as in False Good Scenario.
+
+5.4.2. Classification Logic
+
+ MLRsearch picks a particular logic for load classification in the
+ presence of short trials, but it is still RECOMMENDED to use
+ configurations that imply no short trials, so the possible
+ inefficiencies in that logic do not affect the result, and the result
+ has better explainability.
+
+ With that said, the logic differs from the single trial duration case
+ only in different definition of the bad sum. The good sum is still
+ the sum across all good full-length trials.
+
+   A few more notions are needed for defining the new bad sum (a short
+   code sketch follows the list):
+
+
+
+ * The sum of durations of all bad full-length trials is called the
+ bad long sum.
+
+ * The sum of durations of all bad short trials is called the bad
+ short sum.
+
+ * The sum of durations of all good short trials is called the good
+ short sum.
+
+ * One minus the Goal Exceed Ratio is called the subceed ratio.
+
+ * The Goal Exceed Ratio divided by the subceed ratio is called the
+ exceed coefficient.
+
+ * The good short sum multiplied by the exceed coefficient is called
+ the balancing sum.
+
+ * The bad short sum minus the balancing sum is called the excess
+ sum.
+
+ * If the excess sum is negative, the bad sum is equal to the bad
+ long sum.
+
+ * Otherwise, the bad sum is equal to the bad long sum plus the
+ excess sum.
+
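+ The following is a minimal pseudocode sketch of the bad sum
+ computation described above. It is for illustration only (Appendix A
+ remains the precise definition); it reuses the variable names from
+ Appendix A, and the intermediate variable names mirror the notions
+ defined in the list above:
+
+   subceed_ratio = 1.0 - goal_exceed_ratio
+   exceed_coefficient = goal_exceed_ratio / subceed_ratio
+   balancing_sum = good_short_sum * exceed_coefficient
+   excess_sum = bad_short_sum - balancing_sum
+   bad_sum = bad_long_sum + max(0.0, excess_sum)
+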
+ Here is how the new definition of the bad sum fares in the three
+ scenarios, where the load is close to what the relevant bounds would
+ be if only full-length trials were used for the search.
+
+5.4.2.1. False Good Scenario
+
+ If the duration is too short, we expect to see a higher frequency of
+ good short trials. This could lead to a negative excess sum, which
+ has no impact, hence the load classification is given just by full-
+ length trials. Thus, MLRsearch using too-short trials has no
+ detrimental effect on result comparability in this scenario. However,
+ using short trials also does not help with the overall search
+ duration, and probably makes it worse.
+
+5.4.2.2. True Bad Scenario
+
+ Settings with a small exceed ratio have a small exceed coefficient,
+ so the impact of the good short sum is small, and the bad short sum
+ is almost wholly converted into excess sum, thus bad short trials
+ have almost as big an impact as full-length bad trials. The same
+ conclusion applies to moderate exceed ratio values when the good
+ short sum is small. Thus, short trials can cause a load to get
+ classified as an upper bound earlier, bringing time savings (while
+ not affecting comparability).
+
+5.4.2.3. Balanced Scenario
+
+ Here excess sum is small in absolute value, as the balancing sum is
+ expected to be similar to the bad short sum. Once again, full-length
+ trials are needed for final load classification; but usage of short
+ trials probably means MLRsearch needed a shorter overall search time
+ before selecting this load for measurement, thus bringing time
+ savings (while not affecting comparability).
+
+ Note that in the presence of short trial results, the comparability
+ between the load classification and the Conditional Throughput is
+ only partial. The Conditional Throughput still comes from a good
+ long trial, but a load higher than the Relevant Lower Bound may also
+ compute to a good value.
+
+5.5. Trials with Longer Duration
+
+ If there are trial results with an intended duration larger than the
+ goal trial duration, the precise definitions in Appendix A and
+ Appendix B treat them in exactly the same way as trials with duration
+ equal to the goal trial duration.
+
+ But in configurations with moderate (including 0.5) or small Goal
+ Exceed Ratio and small Goal Loss Ratio (especially zero), bad trials
+ with longer than goal durations may bias the search towards the lower
+ load values, as the noiseful end of the spectrum gets a larger
+ probability of causing the loss within the longer trials.
+
+6. IANA Considerations
+
+ This document makes no requests of IANA.
+
+7. Security Considerations
+
+ Benchmarking activities as described in this memo are limited to
+ technology characterization of a DUT/SUT using controlled stimuli in
+ a laboratory environment, with dedicated address space and the
+ constraints specified in the sections above.
+
+ The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test
+ traffic into a production network or misroute traffic to the test
+ management network.
+
+
+ Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.
+
+ Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.
+
+8. Acknowledgements
+
+ Some phrases and statements in this document were created with the
+ help of Mistral AI (mistral.ai).
+
+ Many thanks to Alec Hothan of the OPNFV NFVbench project for his
+ thorough review and numerous useful comments and suggestions on the
+ earlier versions of this document.
+
+ Special wholehearted gratitude and thanks to the late Al Morton for
+ his thorough reviews filled with very specific feedback and
+ constructive guidelines. Thank you Al for the close collaboration
+ over the years, for your continuous unwavering encouragement full of
+ empathy and positive attitude. Al, you are dearly missed.
+
+9. Appendix A: Load Classification
+
+ This section specifies how to perform the load classification.
+
+ Any intended load value can be classified according to a given
+ Search Goal.
+
+ The algorithm uses (some subsets of) the set of all available trial
+ results from trials measured at a given intended load at the end of
+ the search. All durations are those returned by the Measurer.
+
+ The block at the end of this appendix holds pseudocode which computes
+ two values, stored in variables named optimistic and pessimistic.
+
+ The pseudocode happens to be valid Python code.
+
+ If values of both variables are computed to be true, the load in
+ question is classified as a lower bound according to the given Search
+ Goal. If values of both variables are false, the load is classified
+ as an upper bound. Otherwise, the load is classified as undecided.
+
+ The pseudocode expects the following variables to hold values as
+ follows:
+
+
+ * goal_duration_sum: The duration sum value of the given Search
+ Goal.
+
+ * goal_exceed_ratio: The exceed ratio value of the given Search
+ Goal.
+
+ * good_long_sum: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial
+ Loss Ratio not higher than the Goal Loss Ratio.
+
+ * bad_long_sum: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial
+ Loss Ratio higher than the Goal Loss Ratio.
+
+ * good_short_sum: Sum of durations across trials with trial duration
+ shorter than the goal final trial duration and with a Trial Loss
+ Ratio not higher than the Goal Loss Ratio.
+
+ * bad_short_sum: Sum of durations across trials with trial duration
+ shorter than the goal final trial duration and with a Trial Loss
+ Ratio higher than the Goal Loss Ratio.
+
+ The code also works correctly when there are no trial results at a
+ given load.
+
+ balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+ effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
+ effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
+ quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
+ optimistic = effective_bad_sum <= quantile_duration_sum
+ pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
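+
+ For illustration only (the values are hypothetical, not mandated by
+ this document): with good_long_sum = 50.0, bad_long_sum = 10.0,
+ good_short_sum = 0.0, bad_short_sum = 0.0, goal_duration_sum = 60.0
+ and goal_exceed_ratio = 0.5, the pseudocode computes
+ effective_bad_sum = 10.0, effective_whole_sum = 60.0 and
+ quantile_duration_sum = 30.0, so both optimistic and pessimistic are
+ true and the load is classified as a lower bound.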
+
+10. Appendix B: Conditional Throughput
+
+ This section specifies how to compute Conditional Throughput, as
+ referred to in the Conditional Throughput section.
+
+ Any intended load value can be used as the basis for the following
+ computation, but only the Relevant Lower Bound (at the end of the
+ search) leads to the value called the Conditional Throughput for a
+ given Search Goal.
+
+ The algorithm uses (some subsets of) the set of all available trial
+ results from trials measured at a given intended load at the end of
+ the search. All durations are those returned by the Measurer.
+
+
+ The block at the end of this appendix holds pseudocode which computes
+ a value stored as variable conditional_throughput.
+
+ The pseudocode happens to be valid Python code.
+
+ The pseudocode expects the following variables to hold values as
+ follows:
+
+ * goal_duration_sum: The duration sum value of the given Search
+ Goal.
+
+ * goal_exceed_ratio: The exceed ratio value of the given Search
+ Goal.
+
+ * good_long_sum: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial
+ Loss Ratio not higher than the Goal Loss Ratio.
+
+ * bad_long_sum: Sum of durations across trials with trial duration
+ at least equal to the goal final trial duration and with a Trial
+ Loss Ratio higher than the Goal Loss Ratio.
+
+ * long_trials: An iterable of all trial results from trials with
+ trial duration at least equal to the goal final trial duration,
+ sorted by increasing Trial Loss Ratio. A trial result is a
+ composite with the following two attributes available:
+
+ - trial.loss_ratio: The Trial Loss Ratio as measured for this
+ trial.
+
+ - trial.duration: The trial duration of this trial.
+
+ The code works correctly only if there is at least one
+ trial result measured at a given load.
+
+ all_long_sum = max(goal_duration_sum, good_long_sum + bad_long_sum)
+ remaining = all_long_sum * (1.0 - goal_exceed_ratio)
+ quantile_loss_ratio = None
+ for trial in long_trials:
+ if quantile_loss_ratio is None or remaining > 0.0:
+ quantile_loss_ratio = trial.loss_ratio
+ remaining -= trial.duration
+ else:
+ break
+ else:
+ if remaining > 0.0:
+ quantile_loss_ratio = 1.0
+ conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
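+
+ For illustration only (the values are hypothetical, not mandated by
+ this document): with goal_duration_sum = 60.0, goal_exceed_ratio = 0.5
+ and two full-length trials, the first with loss ratio 0.0 and duration
+ 20.0, the second with loss ratio 0.05 and duration 40.0 (so
+ good_long_sum + bad_long_sum = 60.0 for a zero Goal Loss Ratio), the
+ pseudocode sets remaining to 30.0, consumes the first trial (remaining
+ becomes 10.0), then the second trial sets quantile_loss_ratio to 0.05,
+ so the Conditional Throughput is 95% of the intended load.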
+
+11. References
+
+11.1. Normative References
+
+ [RFC1242] Bradner, S., "Benchmarking Terminology for Network
+ Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
+ July 1991, <https://www.rfc-editor.org/info/rfc1242>.
+
+ [RFC2285] Mandeville, R., "Benchmarking Terminology for LAN
+ Switching Devices", RFC 2285, DOI 10.17487/RFC2285,
+ February 1998, <https://www.rfc-editor.org/info/rfc2285>.
+
+ [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
+ Network Interconnect Devices", RFC 2544,
+ DOI 10.17487/RFC2544, March 1999,
+ <https://www.rfc-editor.org/info/rfc2544>.
+
+ [RFC8219] Georgescu, M., Pislaru, L., and G. Lencse, "Benchmarking
+ Methodology for IPv6 Transition Technologies", RFC 8219,
+ DOI 10.17487/RFC8219, August 2017,
+ <https://www.rfc-editor.org/info/rfc8219>.
+
+ [RFC9004] Morton, A., "Updates for the Back-to-Back Frame Benchmark
+ in RFC 2544", RFC 9004, DOI 10.17487/RFC9004, May 2021,
+ <https://www.rfc-editor.org/info/rfc9004>.
+
+11.2. Informative References
+
+ [FDio-CSIT-MLRsearch]
+ "FD.io CSIT Test Methodology - MLRsearch", October 2023,
+ <https://csit.fd.io/cdocs/methodology/measurements/
+ data_plane_throughput/mlr_search/>.
+
+ [PyPI-MLRsearch]
+ "MLRsearch 1.2.1, Python Package Index", October 2023,
+ <https://pypi.org/project/MLRsearch/1.2.1/>.
+
+ [TST009] "TST 009", n.d., <https://www.etsi.org/deliver/etsi_gs/
+ NFV-TST/001_099/009/03.04.01_60/gs_NFV-
+ TST009v030401p.pdf>.
+
+Authors' Addresses
+
+ Maciek Konstantynowicz
+ Cisco Systems
+ Email: mkonstan@cisco.com
+
+ Vratko Polak
+ Cisco Systems
+ Email: vrpolak@cisco.com
+
diff --git a/docs/ietf/draft-ietf-bmwg-mlrsearch-07.xml b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.xml
new file mode 100644
index 0000000000..c3aede3d3b
--- /dev/null
+++ b/docs/ietf/draft-ietf-bmwg-mlrsearch-07.xml
@@ -0,0 +1,3136 @@
+<?xml version="1.0" encoding="us-ascii"?>
+ <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
+ <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.18 (Ruby 3.1.2) -->
+
+
+<!DOCTYPE rfc [
+ <!ENTITY nbsp "&#160;">
+ <!ENTITY zwsp "&#8203;">
+ <!ENTITY nbhy "&#8209;">
+ <!ENTITY wj "&#8288;">
+
+<!ENTITY RFC1242 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.1242.xml">
+<!ENTITY RFC2285 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2285.xml">
+<!ENTITY RFC2544 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2544.xml">
+<!ENTITY RFC8219 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8219.xml">
+<!ENTITY RFC9004 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9004.xml">
+]>
+
+
+<rfc ipr="trust200902" docName="draft-ietf-bmwg-mlrsearch-07" category="info" tocInclude="true" sortRefs="true" symRefs="true">
+ <front>
+ <title abbrev="MLRsearch">Multiple Loss Ratio Search</title>
+
+ <author initials="M." surname="Konstantynowicz" fullname="Maciek Konstantynowicz">
+ <organization>Cisco Systems</organization>
+ <address>
+ <email>mkonstan@cisco.com</email>
+ </address>
+ </author>
+ <author initials="V." surname="Polak" fullname="Vratko Polak">
+ <organization>Cisco Systems</organization>
+ <address>
+ <email>vrpolak@cisco.com</email>
+ </address>
+ </author>
+
+ <date year="2024" month="July" day="18"/>
+
+ <area>ops</area>
+ <workgroup>Benchmarking Working Group</workgroup>
+ <keyword>Internet-Draft</keyword>
+
+ <abstract>
+
+
+<?line 52?>
+
+<t>This document proposes extensions to <xref target="RFC2544"></xref> throughput search by
+defining a new methodology called Multiple Loss Ratio search
+(MLRsearch). MLRsearch aims to minimize search duration,
+support multiple loss ratio searches,
+and enhance result repeatability and comparability.</t>
+
+<t>The primary reason for extending <xref target="RFC2544"></xref> is to address the challenges
+and requirements presented by the evaluation and testing
+of software-based networking systems&#39; data planes.</t>
+
+<t>To give users more freedom, MLRsearch provides additional configuration options
+such as allowing multiple short trials per load instead of one large trial,
+tolerating a certain percentage of trial results with higher loss,
+and supporting the search for multiple goals with varying loss ratios.</t>
+
+
+
+ </abstract>
+
+
+
+ </front>
+
+ <middle>
+
+
+<?line 69?>
+
+
+<section anchor="purpose-and-scope"><name>Purpose and Scope</name>
+
+<t>The purpose of this document is to describe Multiple Loss Ratio search
+(MLRsearch), a data plane throughput search methodology optimized for software
+networking DUTs.</t>
+
+<t>Applying vanilla <xref target="RFC2544"></xref> throughput bisection to software DUTs
+results in several problems:</t>
+
+<t><list style="symbols">
+ <t>Binary search takes too long as most trials are done far from the
+eventually found throughput.</t>
+ <t>The required final trial duration and pauses between trials
+prolong the overall search duration.</t>
+ <t>Software DUTs show noisy trial results,
+leading to a big spread of possible discovered throughput values.</t>
+ <t>Throughput requires a loss of exactly zero frames, but the industry
+frequently allows for small but non-zero losses.</t>
+ <t>The definition of throughput is not clear when trial results are inconsistent.</t>
+</list></t>
+
+<t>To address the problems mentioned above,
+the MLRsearch test methodology specification employs the following enhancements:</t>
+
+<t><list style="symbols">
+ <t>Allow multiple short trials instead of one big trial per load.
+ <list style="symbols">
+ <t>Optionally, tolerate a percentage of trial results with higher loss.</t>
+ </list></t>
+ <t>Allow searching for multiple Search Goals, with differing loss ratios.
+ <list style="symbols">
+ <t>Any trial result can affect each Search Goal in principle.</t>
+ </list></t>
+ <t>Insert multiple coarse targets for each Search Goal; earlier targets
+need to spend less time on trials.
+ <list style="symbols">
+ <t>Earlier targets also aim for lesser precision.</t>
+ <t>Use Forwarding Rate (FR) at maximum offered load
+<xref target="RFC2285"></xref> (section 3.6.2) to initialize the earliest targets.</t>
+ </list></t>
+ <t>Take care when dealing with inconsistent trial results.
+ <list style="symbols">
+ <t>Reported throughput is smaller than the smallest load with high loss.</t>
+ <t>Smaller load candidates are measured first.</t>
+ </list></t>
+ <t>Apply several load selection heuristics to save even more time
+by trying hard to avoid unnecessarily narrow bounds.</t>
+</list></t>
+
+<t>Some of these enhancements are formalized as MLRsearch specification,
+the remaining enhancements are treated as implementation details,
+thus achieving high comparability without limiting future improvements.</t>
+
+<t>MLRsearch configuration options are flexible enough to
+support both conservative settings and aggressive settings.
+The conservative settings lead to results
+unconditionally compliant with <xref target="RFC2544"></xref>,
+but longer search duration and worse repeatability.
+Conversely, aggressive settings lead to shorter search duration
+and better repeatability, but the results are not compliant with <xref target="RFC2544"></xref>.</t>
+
+<t>No part of <xref target="RFC2544"></xref> is intended to be obsoleted by this document.</t>
+
+</section>
+<section anchor="identified-problems"><name>Identified Problems</name>
+
+<t>This chapter describes the problems affecting usability
+of various performance testing methodologies,
+mainly a binary search for <xref target="RFC2544"></xref> unconditionally compliant throughput.</t>
+
+<section anchor="long-search-duration"><name>Long Search Duration</name>
+
+
+<t>The emergence of software DUTs, with frequent software updates and a
+number of different frame processing modes and configurations,
+has increased both the number of performance tests
+required to verify the DUT update and the frequency of running those tests.
+This makes the overall test execution time even more important than before.</t>
+
+<t>The current <xref target="RFC2544"></xref> throughput definition restricts the potential
+for time-efficiency improvements.
+A more generalized throughput concept could enable further enhancements
+while maintaining the precision of simpler methods.</t>
+
+<t>The bisection method, when unconditionally compliant with <xref target="RFC2544"></xref>,
+is excessively slow.
+This is because a significant amount of time is spent on trials
+with loads that, in retrospect, are far from the final determined throughput.</t>
+
+<t><xref target="RFC2544"></xref> does not specify any stopping condition for throughput search,
+so users already have access to a limited trade-off
+between search duration and achieved precision.
+However, each full 60-second trial doubles the precision,
+so not many trials can be removed without a substantial loss of precision.</t>
+
+</section>
+<section anchor="dut-in-sut"><name>DUT in SUT</name>
+
+<t><xref target="RFC2285"></xref> defines:</t>
+
+<t><list style="symbols">
+  <t>DUT as: The network forwarding device to which stimulus is offered and
+response measured <xref target="RFC2285"></xref> (section 3.1.1).</t>
+  <t>SUT as: The collective set of network devices to which stimulus is offered
+as a single entity and response measured <xref target="RFC2285"></xref> (section 3.1.2).</t>
+</list></t>
+
+<t><xref target="RFC2544"></xref> specifies a test setup with an external tester stimulating the
+networking system, treating it either as a single DUT, or as a system
+of devices, an SUT.</t>
+
+<t>In the case of software networking, the SUT consists of not only the DUT
+as a software program processing frames, but also of
+server hardware and operating system functions,
+with the server hardware resources shared across all programs, including
+the operating system.</t>
+
+<t>Given that the SUT is a shared multi-tenant environment
+encompassing the DUT and other components, the DUT might inadvertently
+experience interference from the operating system
+or other software operating on the same server.</t>
+
+<t>Some of this interference can be mitigated.
+For instance,
+pinning DUT program threads to specific CPU cores
+and isolating those cores can prevent context switching.</t>
+
+<t>Despite taking all feasible precautions, some adverse effects may still impact
+the DUT&#39;s network performance.
+In this document, these effects are collectively
+referred to as SUT noise, even if the effects are not as unpredictable
+as what other engineering disciplines call noise.</t>
+
+<t>The DUT can also exhibit fluctuating performance itself, for reasons
+not related to the rest of the SUT, for example due to pauses in execution
+needed for internal stateful processing.
+In many cases this
+may be an expected per-design behavior, as it would be observable even
+in a hypothetical scenario where all sources of SUT noise are eliminated.
+Such behavior affects trial results in a way similar to SUT noise.
+As the two phenomena are hard to distinguish,
+in this document the term &#39;noise&#39; is used to encompass
+both the internal performance fluctuations of the DUT
+and the genuine noise of the SUT.</t>
+
+<t>A simple model of SUT performance consists of an idealized noiseless performance,
+and additional noise effects.
+For a specific SUT, the noiseless performance is assumed to be constant,
+with all observed performance variations being attributed to noise.
+The impact of the noise can vary in time, sometimes wildly,
+even within a single trial.
+The noise can sometimes be negligible, but frequently
+it lowers the observed SUT performance as observed in trial results.</t>
+
+<t>In this model, SUT does not have a single performance value, it has a spectrum.
+One end of the spectrum is the idealized noiseless performance value,
+the other end can be called a noiseful performance.
+In practice, a trial result
+close to the noiseful end of the spectrum happens only rarely.
+The worse the performance value is, the more rarely it is seen in a trial.
+Therefore, the extreme noiseful end of the SUT spectrum is not observable
+among trial results.
+Also, the extreme noiseless end of the SUT spectrum
+is unlikely to be observable, this time because some small noise effects
+are likely to occur multiple times during a trial.</t>
+
+<t>Unless specified otherwise, this document&#39;s focus is
+on the potentially observable ends of the SUT performance spectrum,
+as opposed to the extreme ones.</t>
+
+<t>When focusing on the DUT, the benchmarking effort should ideally aim
+to eliminate only the SUT noise from SUT measurements.
+However,
+this is currently not feasible in practice, as there are no realistic enough
+models available to distinguish SUT noise from DUT fluctuations,
+based on authors&#39; experience and available literature.</t>
+
+<t>Assuming a well-constructed SUT, the DUT is likely its
+primary performance bottleneck.
+In this case, we can define the DUT&#39;s
+ideal noiseless performance as the noiseless end of the SUT performance spectrum,
+especially for throughput.
+However, other performance metrics, such as latency,
+may require additional considerations.</t>
+
+<t>Note that by this definition, DUT noiseless performance
+also minimizes the impact of DUT fluctuations, as much as realistically possible
+for a given trial duration.</t>
+
+<t>MLRsearch methodology aims to solve the DUT in SUT problem
+by estimating the noiseless end of the SUT performance spectrum
+using a limited number of trial results.</t>
+
+<t>Any improvements to the throughput search algorithm, aimed at better
+dealing with software networking SUT and DUT setup, should employ
+strategies recognizing the presence of SUT noise, allowing the discovery of
+(proxies for) DUT noiseless performance
+at different levels of sensitivity to SUT noise.</t>
+
+</section>
+<section anchor="repeatability-and-comparability"><name>Repeatability and Comparability</name>
+
+<t><xref target="RFC2544"></xref> does not suggest to repeat throughput search.
+And from just one
+discovered throughput value, it cannot be determined how repeatable that value is.
+Poor repeatability then leads to poor comparability,
+as different benchmarking teams may obtain varying throughput values
+for the same SUT, exceeding the expected differences from search precision.</t>
+
+<t><xref target="RFC2544"></xref> throughput requirements (a 60-second trial and
+no tolerance of even a single frame loss) affect the throughput results
+in the following way.
+The SUT behavior close to the noiseful end of its performance spectrum
+consists of rare occasions of significantly low performance,
+but the long trial duration makes those occasions not so rare on the trial level.
+Therefore, the binary search results tend to wander away from the noiseless end
+of SUT performance spectrum, more frequently and more widely than short
+trials would, thus causing poor throughput repeatability.</t>
+
+<t>The repeatability problem can be addressed by defining a search procedure
+that identifies a consistent level of performance,
+even if it does not meet the strict definition of throughput in <xref target="RFC2544"></xref>.</t>
+
+<t>According to the SUT performance spectrum model, better repeatability
+will be at the noiseless end of the spectrum.
+Therefore, solutions to the DUT in SUT problem
+will help also with the repeatability problem.</t>
+
+<t>Conversely, any alteration to <xref target="RFC2544"></xref> throughput search
+that improves repeatability should be considered
+as less dependent on the SUT noise.</t>
+
+<t>An alternative option is to simply run a search multiple times, and report some
+statistics (e.g. average and standard deviation).
+This can be used
+for a subset of tests deemed more important,
+but it makes the search duration problem even more pronounced.</t>
+
+</section>
+<section anchor="throughput-with-non-zero-loss"><name>Throughput with Non-Zero Loss</name>
+
+<t><xref target="RFC1242"></xref> (section 3.17 Throughput) defines throughput as:
+ The maximum rate at which none of the offered frames
+ are dropped by the device.</t>
+
+<t>Then, it says:
+ Since even the loss of one frame in a
+ data stream can cause significant delays while
+ waiting for the higher level protocols to time out,
+ it is useful to know the actual maximum data
+ rate that the device can support.</t>
+
+<t>However, many benchmarking teams accept a small,
+non-zero loss ratio as the goal for their load search.</t>
+
+<t>Motivations are many:</t>
+
+<t><list style="symbols">
+ <t>Modern protocols tolerate frame loss better,
+compared to the time when <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> were specified.</t>
+ <t>Trials nowadays send way more frames within the same duration,
+increasing the chance of a small SUT performance fluctuation
+being enough to cause frame loss.</t>
+ <t>Small bursts of frame loss caused by noise have otherwise smaller impact
+on the average frame loss ratio observed in the trial,
+as during other parts of the same trial the SUT may work more closely
+to its noiseless performance, thus perhaps lowering the Trial Loss Ratio
+below the Goal Loss Ratio value.</t>
+ <t>If an approximation of the SUT noise impact on the Trial Loss Ratio is known,
+it can be set as the Goal Loss Ratio.</t>
+</list></t>
+
+<t>Regardless of the validity of all similar motivations,
+support for non-zero loss goals makes any search algorithm more user-friendly.
+<xref target="RFC2544"></xref> throughput is not user-friendly in this regard.</t>
+
+<t>Furthermore, allowing users to specify multiple loss ratio values,
+and enabling a single search to find all relevant bounds,
+significantly enhances the usefulness of the search algorithm.</t>
+
+<t>Searching for multiple Search Goals also helps to describe the SUT performance
+spectrum better than the result of a single Search Goal.
+For example, the repeated wide gap between zero and non-zero loss loads
+indicates the noise has a large impact on the observed performance,
+which is not evident from a single goal load search procedure result.</t>
+
+<t>It is easy to modify the vanilla bisection to find a lower bound
+for the intended load that satisfies a non-zero Goal Loss Ratio.
+But it is not that obvious how to search for multiple goals at once,
+hence the support for multiple Search Goals remains a problem.</t>
+
+</section>
+<section anchor="inconsistent-trial-results"><name>Inconsistent Trial Results</name>
+
+<t>While performing throughput search by executing a sequence of
+measurement trials, there is a risk of encountering inconsistencies
+between trial results.</t>
+
+<t>The plain bisection never encounters inconsistent trials.
+But <xref target="RFC2544"></xref> hints about the possibility of inconsistent trial results,
+in two places in its text.
+The first place is section 24, where full trial durations are required,
+presumably because they can be inconsistent with the results
+from short trial durations.
+The second place is section 26.3, where two successive zero-loss trials
+are recommended, presumably because after one zero-loss trial
+there can be a subsequent inconsistent non-zero-loss trial.</t>
+
+<t>Examples include:</t>
+
+<t><list style="symbols">
+ <t>A trial at the same load (same or different trial duration) results
+in a different Trial Loss Ratio.</t>
+ <t>A trial at a higher load (same or different trial duration) results
+in a smaller Trial Loss Ratio.</t>
+</list></t>
+
+<t>Any robust throughput search algorithm needs to decide how to continue
+the search in the presence of such inconsistencies.
+Definitions of throughput in <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> are not specific enough
+to imply a unique way of handling such inconsistencies.</t>
+
+<t>Ideally, there will be a definition of a new quantity which both generalizes
+throughput for non-zero-loss (and other possible repeatability enhancements),
+while being precise enough to force a specific way to resolve trial result
+inconsistencies.
+But until such a definition is agreed upon, the correct way to handle
+inconsistent trial results remains an open problem.</t>
+
+</section>
+</section>
+<section anchor="mlrsearch-specification"><name>MLRsearch Specification</name>
+
+<t>This section describes the MLRsearch specification, including all technical
+definitions needed for evaluating whether a particular test procedure
+complies with the MLRsearch specification.</t>
+
+
+<section anchor="overview"><name>Overview</name>
+
+<t>MLRsearch specification describes a set of abstract system components,
+acting as functions with specified inputs and outputs.</t>
+
+<t>A test procedure is said to comply with MLRsearch specification
+if it can be conceptually divided into analogous components,
+each satisfying requirements for the corresponding MLRsearch component.</t>
+
+<t>The Measurer component is tasked to perform trials,
+the Controller component is tasked to select trial loads and durations,
+the Manager component is tasked to pre-configure everything
+and to produce the test report.
+The test report explicitly states Search Goals (as the Controller Inputs)
+and corresponding Goal Results (Controller Outputs).</t>
+
+
+<t>The Manager calls the Controller once,
+the Controller keeps calling the Measurer
+until all stopping conditions are met.</t>
+
+<t>The part where Controller calls the Measurer is called the search.
+Any activity done by the Manager before it calls the Controller
+(or after Controller returns) is not considered to be part of the search.</t>
+
+<t>MLRsearch specification prescribes regular search results and recommends
+their stopping conditions. Irregular search results are also allowed,
+they may have different requirements and stopping conditions.</t>
+
+<t>Search results are based on load classification.
+When measured enough, any chosen load either achieves or fails each search goal,
+thus becoming a lower or an upper bound for that goal.
+When the relevant bounds are at loads that are close enough
+(according to goal precision), the regular result is found.
+Search stops when all regular results are found
+(or if some goals are proven to have only irregular results).</t>
+
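+<t>The following sketch is for illustration only and is not part of this
+specification. It shows one hypothetical way the abstract components could
+be expressed as Python interfaces; the class and method names are
+assumptions of this sketch, not requirements:</t>
+
+<figure><artwork><![CDATA[
+class Measurer:
+    def measure(self, trial_input):
+        """Perform one trial and return a Trial Output instance."""
+        raise NotImplementedError
+
+class Controller:
+    def search(self, controller_input, measurer):
+        """Keep calling measurer.measure() until all stopping
+        conditions are met, then return the Goal Results."""
+        raise NotImplementedError
+
+class Manager:
+    def run(self, controller_input, controller, measurer):
+        """Pre-configure everything, call controller.search() once,
+        and produce the test report."""
+        raise NotImplementedError
+]]></artwork></figure>
+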
+</section>
+<section anchor="measurement-quantities"><name>Measurement Quantities</name>
+
+<t>MLRsearch specification uses a number of measurement quantities.</t>
+
+<t>In general, MLRsearch specification does not require particular units to be used,
+but it is REQUIRED for the test report to state all the units.
+For example, ratio quantities can be dimensionless numbers between zero and one,
+but may be expressed as percentages instead.</t>
+
+<t>For convenience, a group of quantities can be treated as a composite quantity.
+One constituent of a composite quantity is called an attribute,
+and a group of attribute values is called an instance of that composite quantity.</t>
+
+<t>Some attributes are not independent of others,
+and they can be calculated from other attributes.
+Such quantities are called derived quantities.</t>
+
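+<t>For example (using terms defined later in this document): Trial Input
+is a composite quantity whose attributes are trial duration and trial load,
+and the trial forwarding rate is a derived quantity, as it can be calculated
+from the trial load and the trial forwarding ratio.</t>
+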
+</section>
+<section anchor="existing-terms"><name>Existing Terms</name>
+
+<t>RFC 1242 &quot;Benchmarking Terminology for Network Interconnect Devices&quot;
+contains basic definitions, and
+RFC 2544 &quot;Benchmarking Methodology for Network Interconnect Devices&quot;
+contains discussions of a number of terms and additional methodology requirements.
+RFC 2285 adds more terms and discussions, describing some known situations
+in a more precise way.</t>
+
+<t>All three documents should be consulted
+before attempting to make use of this document.</t>
+
+<t>Definitions of some central terms are copied and discussed in subsections.</t>
+
+
+
+
+
+<section anchor="sut"><name>SUT</name>
+
+<t>Defined in <xref target="RFC2285"></xref> (section 3.1.2 System Under Test (SUT)) as follows.</t>
+
+<t>Definition:</t>
+
+<t>The collective set of network devices to which stimulus is offered
+as a single entity and response measured.</t>
+
+<t>Discussion:</t>
+
+<t>An SUT consisting of a single network device is also allowed.</t>
+
+</section>
+<section anchor="dut"><name>DUT</name>
+
+<t>Defined in <xref target="RFC2285"></xref> (section 3.1.1 Device Under Test (DUT)) as follows.</t>
+
+<t>Definition:</t>
+
+<t>The network forwarding device to which stimulus is offered and
+response measured.</t>
+
+<t>Discussion:</t>
+
+<t>DUT, as a sub-component of SUT, is only indirectly mentioned
+in MLRsearch specification, but is of key relevance for its motivation.</t>
+
+
+</section>
+<section anchor="trial"><name>Trial</name>
+
+<t>A trial is the part of the test described in <xref target="RFC2544"></xref> (section 23. Trial description).</t>
+
+<t>Definition:</t>
+
+<t>A particular test consists of multiple trials. Each trial returns
+ one piece of information, for example the loss rate at a particular
+ input frame rate. Each trial consists of a number of phases:</t>
+
+<t>a) If the DUT is a router, send the routing update to the &quot;input&quot;
+ port and pause two seconds to be sure that the routing has settled.</t>
+
+<t>b) Send the &quot;learning frames&quot; to the &quot;output&quot; port and wait 2
+ seconds to be sure that the learning has settled. Bridge learning
+ frames are frames with source addresses that are the same as the
+ destination addresses used by the test frames. Learning frames for
+ other protocols are used to prime the address resolution tables in
+ the DUT. The formats of the learning frame that should be used are
+ shown in the Test Frame Formats document.</t>
+
+<t>c) Run the test trial.</t>
+
+<t>d) Wait for two seconds for any residual frames to be received.</t>
+
+<t>e) Wait for at least five seconds for the DUT to restabilize.</t>
+
+<t>Discussion:</t>
+
+<t>The definition describes some traits; it is not clear whether all of them
+are REQUIRED, or whether some of them are only RECOMMENDED.</t>
+
+
+<t>For the purposes of the MLRsearch specification,
+it is ALLOWED for the test procedure to deviate from the <xref target="RFC2544"></xref> description,
+but any such deviation MUST be made explicit in the test report.</t>
+
+<t>Trials are the only stimuli the SUT is expected to experience
+during the search.</t>
+
+<t>In some discussion paragraphs, it is useful to consider the traffic
+as sent and received by a tester, as implicitly defined
+in <xref target="RFC2544"></xref> (section 6. Test set up).</t>
+
+<t>An example of deviation from <xref target="RFC2544"></xref> is using shorter wait times.</t>
+
+</section>
+</section>
+<section anchor="trial-terms"><name>Trial Terms</name>
+
+<t>This section defines new terms and redefines existing terms for quantities
+relevant as inputs or outputs of a trial, as used by the Measurer component.</t>
+
+<section anchor="trial-duration"><name>Trial Duration</name>
+
+<t>Definition:</t>
+
+<t>Trial duration is the intended duration of the traffic for a trial.</t>
+
+<t>Discussion:</t>
+
+<t>In general, this quantity does not include any preparation or waiting
+described in <xref target="RFC2544"></xref> (section 23. Trial description).</t>
+
+<t>While any positive real value may be provided, some Measurer implementations
+MAY limit possible values, e.g. by rounding down to the nearest integer number of seconds.
+In that case, it is RECOMMENDED to give such inputs to the Controller
+so the Controller only proposes the accepted values.
+Alternatively, the test report MUST present the rounded values
+as Search Goal attributes.</t>
+
+</section>
+<section anchor="trial-load"><name>Trial Load</name>
+
+<t>Definition:</t>
+
+<t>The trial load is the intended load for a trial.</t>
+
+<t>Discussion:</t>
+
+<t>For test report purposes, it is assumed that this is a constant load by default.
+This MAY be only an average load, e.g. when the traffic is intended to be busty,
+e.g. as suggested in <xref target="RFC2544"></xref> (section 21. Bursty traffic),
+but the test report MUST explicitly mention how non-constant the traffic is.</t>
+
+<t>Trial load is the quantity defined as Constant Load of <xref target="RFC1242"></xref>
+(section 3.4 Constant Load), Data Rate of <xref target="RFC2544"></xref>
+(section 14. Bidirectional traffic)
+and Intended Load of <xref target="RFC2285"></xref> (section 3.5.1 Intended load (Iload)).
+All three definitions specify
+that this value applies to one (input or output) interface.</t>
+
+
+<t>For test report purposes, multi-interface aggregate load MAY be reported;
+this is understood as the same quantity expressed using different units.
+From the report it MUST be clear whether a particular trial load value
+is per one interface, or an aggregate over all interfaces.</t>
+
+<t>Similarly to trial duration, some Measurers may limit the possible values
+of trial load. Contrary to trial duration, the test report is NOT REQUIRED
+to document such behavior.</t>
+
+
+<t>It is ALLOWED to combine trial load and trial duration in a way
+that would not be possible to achieve using any integer number of data frames.</t>
+
+
+</section>
+<section anchor="trial-input"><name>Trial Input</name>
+
+<t>Definition:</t>
+
+<t>Trial Input is a composite quantity, consisting of two attributes:
+trial duration and trial load.</t>
+
+<t>Discussion:</t>
+
+<t>When talking about multiple trials, it is common to say &quot;Trial Inputs&quot;
+to denote all corresponding Trial Input instances.</t>
+
+<t>A Trial Input instance acts as the input for one call of the Measurer component.</t>
+
+<t>Contrary to other composite quantities, MLRsearch implementations
+are NOT ALLOWED to add optional attributes here.
+This improves interoperability between various implementations of
+the Controller and the Measurer.</t>
+
+</section>
+<section anchor="traffic-profile"><name>Traffic Profile</name>
+
+<t>Definition:</t>
+
+<t>Traffic profile is a composite quantity
+containing attributes other than trial load and trial duration,
+needed for unique determination of the trial to be performed.</t>
+
+<t>Discussion:</t>
+
+<t>All its attributes are assumed to be constant during the search,
+and the composite is configured on the Measurer by the Manager
+before the search starts.
+This is why the traffic profile is not part of the Trial Input.</t>
+
+<t>As a consequence, implementations of the Manager and the Measurer
+must be aware of their common set of capabilities, so that the traffic profile
+uniquely defines the traffic during the search.
+The important fact is that none of those capabilities
+have to be known by the Controller implementations.</t>
+
+<t>The traffic profile SHOULD contain some specific quantities,
+for example <xref target="RFC2544"></xref> (section 9. Frame sizes) governs
+data link frame size as defined in <xref target="RFC1242"></xref> (section 3.5 Data link frame size).</t>
+
+<t>Several more specific quantities may be RECOMMENDED, depending on media type.
+For example, <xref target="RFC2544"></xref> (Appendix C) lists frame formats and protocol addresses,
+as recommended from <xref target="RFC2544"></xref> (section 8. Frame formats)
+and <xref target="RFC2544"></xref> (section 12. Protocol addresses).</t>
+
+<t>Depending on SUT configuration, e.g. when testing specific protocols,
+additional attributes MUST be included in the traffic profile
+and in the test report.</t>
+
+<t>Example: <xref target="RFC8219"></xref> (section 5.3. Traffic Setup) introduces traffic setups
+consisting of a mix of IPv4 and IPv6 traffic - the implied traffic profile
+therefore must include an attribute for their percentage.</t>
+
+<t>Other traffic properties that need to be somehow specified
+in Traffic Profile include:
+<xref target="RFC2544"></xref> (section 14. Bidirectional traffic),
+<xref target="RFC2285"></xref> (section 3.3.3 Fully meshed traffic),
+and <xref target="RFC2544"></xref> (section 11. Modifiers).</t>
+
+</section>
+<section anchor="trial-forwarding-ratio"><name>Trial Forwarding Ratio</name>
+
+<t>Definition:</t>
+
+<t>The trial forwarding ratio is a dimensionless floating point value.
+It MUST range between 0.0 and 1.0, both inclusive.
+It is calculated by dividing the number of frames
+successfully forwarded by the SUT
+by the total number of frames expected to be forwarded during the trial.</t>
+
+<t>Discussion:</t>
+
+<t>For most traffic profiles, &quot;expected to be forwarded&quot; means
+&quot;intended to get transmitted from Tester towards SUT&quot;.</t>
+
+<t>Trial forwarding ratio MAY be expressed in other units
+(e.g. as a percentage) in the test report.</t>
+
+<t>Note that, contrary to loads, frame counts used to compute
+trial forwarding ratio are aggregates over all SUT output interfaces.</t>
+
+<t>The question of what is the correct number of frames
+that should have been forwarded
+is generally outside of the scope of this document.</t>
+
+
+
+</section>
+<section anchor="trial-loss-ratio"><name>Trial Loss Ratio</name>
+
+<t>Definition:</t>
+
+<t>The Trial Loss Ratio is equal to one minus the trial forwarding ratio.</t>
+
+<t>Discussion:</t>
+
+<t>100% minus the trial forwarding ratio, when expressed as a percentage.</t>
+
+<t>This is almost identical to Frame Loss Rate of <xref target="RFC1242"></xref>
+(section 3.6 Frame Loss Rate),
+the only minor difference is that Trial Loss Ratio
+does not need to be expressed as a percentage.</t>
+
+</section>
+<section anchor="trial-forwarding-rate"><name>Trial Forwarding Rate</name>
+
+<t>Definition:</t>
+
+<t>The trial forwarding rate is a derived quantity, calculated by
+multiplying the trial load by the trial forwarding ratio.</t>
+
+<t>Discussion:</t>
+
+<t>It is important to note that while similar, this quantity is not identical
+to the Forwarding Rate as defined in <xref target="RFC2285"></xref>
+(section 3.6.1 Forwarding rate (FR)).
+The latter is specific to one output interface only,
+whereas the trial forwarding ratio is based
+on frame counts aggregated over all SUT output interfaces.</t>
+
+
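+<t>A worked example with illustrative values only: if a trial with a trial
+load of 2,000,000 frames per second and a trial duration of 60 seconds
+expects 120,000,000 frames to be forwarded, and the SUT forwards 119,400,000
+of them, then the trial forwarding ratio is 0.995, the Trial Loss Ratio
+is 0.005, and the trial forwarding rate is 1,990,000 frames per second.</t>
+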
+</section>
+<section anchor="trial-effective-duration"><name>Trial Effective Duration</name>
+
+<t>Definition:</t>
+
+<t>Trial effective duration is a time quantity related to the trial,
+by default equal to the trial duration.</t>
+
+<t>Discussion:</t>
+
+<t>This is an optional feature.
+If the Measurer does not return any trial effective duration value,
+the Controller MUST use the trial duration value instead.</t>
+
+<t>Trial effective duration may be any time quantity chosen by the Measurer
+to be used for time-based decisions in the Controller.</t>
+
+<t>The test report MUST explain how the Measurer computes the returned
+trial effective duration values, if they are not always
+equal to the trial duration.</t>
+
+<t>This feature can be beneficial for users
+who wish to manage the overall search duration,
+rather than solely the traffic portion of it.
+Simply measure the duration of the whole trial (including waits)
+and use that as the trial effective duration.</t>
+
+<t>Also, this is a way for the Measurer to inform the Controller about
+its surprising behavior, for example when rounding the trial duration value.</t>
+
+
+</section>
+<section anchor="trial-output"><name>Trial Output</name>
+
+<t>Definition:</t>
+
+<t>Trial Output is a composite quantity. The REQUIRED attributes are
+Trial Loss Ratio, trial effective duration and trial forwarding rate.</t>
+
+<t>Discussion:</t>
+
+<t>When talking about multiple trials, it is common to say &quot;Trial Outputs&quot;
+to denote all corresponding Trial Output instances.</t>
+
+<t>Implementations may provide additional (optional) attributes.
+The Controller implementations MUST ignore values of any optional attribute
+they are not familiar with,
+except when passing Trial Output instance to the Manager.</t>
+
+<t>Example of an optional attribute:
+The aggregate number of frames expected to be forwarded during the trial,
+especially if it is not just a (rounded-up) value
+implied by trial load and trial duration.</t>
+
+<t>While <xref target="RFC2285"></xref> (Section 3.5.2 Offered load (Oload))
+requires the offered load value to be reported for forwarding rate measurements,
+it is NOT REQUIRED in MLRsearch specification.</t>
+
+
+</section>
+<section anchor="trial-result"><name>Trial Result</name>
+
+<t>Definition:</t>
+
+<t>Trial result is a composite quantity,
+consisting of the Trial Input and the Trial Output.</t>
+
+<t>Discussion:</t>
+
+<t>When talking about multiple trials, it is common to say &quot;trial results&quot;
+to denote all corresponding trial result instances.</t>
+
+<t>While implementations SHOULD NOT include additional attributes
+with independent values, they MAY include derived quantities.</t>
+
+</section>
+</section>
+<section anchor="goal-terms"><name>Goal Terms</name>
+
+<t>This section defines new terms and redefines existing terms for quantities
+indirectly relevant for inputs or outputs of the Controller component.</t>
+
+<t>Several goal attributes are defined before introducing
+the main component quantity: the Search Goal.</t>
+
+<section anchor="goal-final-trial-duration"><name>Goal Final Trial Duration</name>
+
+<t>Definition:</t>
+
+<t>A threshold value for trial durations.</t>
+
+<t>Discussion:</t>
+
+<t>This attribute value MUST be positive.</t>
+
+<t>A trial with Trial Duration at least as long as the Goal Final Trial Duration
+is called a full-length trial (with respect to the given Search Goal).</t>
+
+<t>A trial that is not full-length is called a short trial.</t>
+
+<t>Informally, while MLRsearch is allowed to perform short trials,
+the results from such short trials have only limited impact on search results.</t>
+
+<t>One trial may be full-length for some Search Goals, but not for others.</t>
+
+<t>The full relation of this goal to Controller Output is defined later in
+this document in subsections of the Goal Result section.
+For example, the Conditional Throughput for this goal is computed only from
+full-length trial results.</t>
+
+</section>
+<section anchor="goal-duration-sum"><name>Goal Duration Sum</name>
+
+<t>Definition:</t>
+
+<t>A threshold value for a particular sum of trial effective durations.</t>
+
+<t>Discussion:</t>
+
+<t>This attribute value MUST be positive.</t>
+
+<t>Informally, even when looking only at full-length trials,
+MLRsearch may spend up to this time measuring the same load value.</t>
+
+<t>If the Goal Duration Sum is larger than the Goal Final Trial Duration,
+multiple full-length trials may need to be performed at the same load.</t>
+
+<t>See the TST009 Goal section for an example where the possibility
+of multiple full-length trials at the same load is intended.</t>
+
+<t>A Goal Duration Sum value lower than the Goal Final Trial Duration
+(of the same goal) could save some search time, but is NOT RECOMMENDED.
+See the Relevant Upper Bound section for a partial explanation.</t>
+
+</section>
+<section anchor="goal-loss-ratio"><name>Goal Loss Ratio</name>
+
+<t>Definition:</t>
+
+<t>A threshold value for Trial Loss Ratios.</t>
+
+<t>Discussion:</t>
+
+<t>Attribute value MUST be non-negative and smaller than one.</t>
+
+<t>A trial with Trial Loss Ratio larger than a Goal Loss Ratio value
+is called a lossy trial, with respect to given Search Goal.</t>
+
+<t>Informally, if a load causes too many lossy trials,
+the Relevant Lower Bound for this goal will be smaller than that load.</t>
+
+<t>If a trial is not lossy, it is called a low-loss trial,
+or (specifically for zero Goal Loss Ratio value) zero-loss trial.</t>
+
+</section>
+<section anchor="goal-exceed-ratio"><name>Goal Exceed Ratio</name>
+
+<t>Definition:</t>
+
+<t>A threshold value for a particular ratio of sums of Trial Effective Durations.</t>
+
+<t>Discussion:</t>
+
+<t>Attribute value MUST be non-negative and smaller than one.</t>
+
+<t>See later sections for details on which sums.
+Specifically, the direct usage is only in
+Appendix A: Load Classification
+and Appendix B: Conditional Throughput.
+The impact of that usage is discussed in subsections leading to
+the Goal Result section.</t>
+
+<t>Informally, the impact of lossy trials is controlled by this value.
+Effectively, Goal Exceed Ratio is a percentage of full-length trials
+that may be lossy without the load being classified
+as the Relevant Upper Bound.</t>
+
+</section>
+<section anchor="goal-width"><name>Goal Width</name>
+
+<t>Definition:</t>
+
+<t>A value used as a threshold for deciding
+whether two trial load values are close enough.</t>
+
+<t>Discussion:</t>
+
+<t>If present, the value MUST be positive.</t>
+
+<t>Informally, this acts as a stopping condition,
+controlling the precision of the search.
+The search stops if every goal has reached its precision.</t>
+
+<t>Implementations without this attribute
+MUST give the Controller other ways to control the search stopping conditions.</t>
+
+<t>Absolute load difference and relative load difference are two popular choices,
+but implementations may choose a different way to specify width.</t>
+
+<t>The test report MUST make it clear what specific quantity is used as Goal Width.</t>
+
+<t>It is RECOMMENDED to set the Goal Width (as relative difference) value
+to a value no smaller than the Goal Loss Ratio.
+(The reason is not obvious; see the Throughput section if interested.)</t>
+
+</section>
+<section anchor="search-goal"><name>Search Goal</name>
+
+<t>Definition:</t>
+
+<t>The Search Goal is a composite quantity consisting of several attributes,
+some of them are required.</t>
+
+<t>Required attributes:</t>
+
+<t><list style="symbols">
+  <t>Goal Final Trial Duration</t>
+  <t>Goal Duration Sum</t>
+  <t>Goal Loss Ratio</t>
+  <t>Goal Exceed Ratio</t>
+</list></t>
+
+<t>Optional attribute:</t>
+
+<t><list style="symbols">
+  <t>Goal Width</t>
+</list></t>
+
+<t>Discussion:</t>
+
+<t>Implementations MAY add their own attributes.
+Those additional attributes may be required by the implementation
+even if they are not required by MLRsearch specification.
+But it is RECOMMENDED for those implementations
+to support missing values by computing reasonable defaults.</t>
+
+<t>The meaning of listed attributes is formally given only by their indirect effect
+on the search results.</t>
+
+<t>Informally, later sections provide additional intuitions and examples
+of the Search Goal attribute values.</t>
+
+<t>An example of additional attributes required by some implementations
+is Goal Initial Trial Duration, together with another attribute
+that controls possible intermediate Trial Duration values.
+The reasonable default in this case is using the Goal Final Trial Duration
+and no intermediate values.</t>
+
+</section>
+<section anchor="controller-input"><name>Controller Input</name>
+
+<t>Definition:</t>
+
+<t>Controller Input is a composite quantity
+required as an input for the Controller.
+The only REQUIRED attribute is a list of Search Goal instances.</t>
+
+<t>Discussion:</t>
+
+<t>MLRsearch implementations MAY use additional attributes.
+Those additional attributes may be required by the implementation
+even if they are not required by MLRsearch specification.</t>
+
+<t>Formally, the Manager does not apply any Controller configuration
+apart from one Controller Input instance.</t>
+
+<t>For example, Traffic Profile is configured on the Measurer by the Manager
+(without explicit assistance of the Controller).</t>
+
+<t>The order of Search Goal instances in a list SHOULD NOT
+have a big impact on Controller Output (see the Controller Output section),
+but MLRsearch implementations MAY base their behavior on the order
+of Search Goal instances in a list.</t>
+
+<t>An example of an optional attribute (outside the list of Search Goals)
+required by some implementations is Max Load.
+While this is a frequently used configuration parameter,
+already governed by <xref target="RFC2544"></xref> (section 20. Maximum frame rate)
+and <xref target="RFC2285"></xref> (3.5.3 Maximum offered load (MOL)),
+some implementations may detect or discover it instead.</t>
+
+
+
+<t>In the MLRsearch specification, the Relevant Upper Bound
+is added as a required attribute precisely because it makes the search result
+independent of the Max Load value.</t>
+
+
+</section>
+</section>
+<section anchor="search-goal-examples"><name>Search Goal Examples</name>
+
+<section anchor="rfc2544-goal"><name>RFC2544 Goal</name>
+
+<t>The following set of values makes the search result unconditionally compliant
+with <xref target="RFC2544"></xref> (section 24. Trial duration).</t>
+
+<t><list style="symbols">
+ <t>Goal Final Trial Duration = 60 seconds</t>
+ <t>Goal Duration Sum = 60 seconds</t>
+ <t>Goal Loss Ratio = 0%</t>
+ <t>Goal Exceed Ratio = 0%</t>
+</list></t>
+
+<t>The latter two attributes are enough to make the search goal
+conditionally compliant; adding the first attribute
+makes it unconditionally compliant.</t>
+
+<t>The second attribute (Goal Duration Sum) only prevents MLRsearch
+from repeating zero-loss full-length trials.</t>
+
+<t>Non-zero exceed ratio could prolong the search and allow loss inversion
+between lower-load lossy short trial and higher-load full-length zero-loss trial.
+From <xref target="RFC2544"></xref> alone, it is not clear whether that higher load
+could be considered as compliant throughput.</t>
+
+</section>
+<section anchor="tst009-goal"><name>TST009 Goal</name>
+
+<t>One of the alternatives to RFC2544 is described in
+<xref target="TST009"></xref> (section 12.3.3 Binary search with loss verification).
+The idea there is to repeat lossy trials, hoping for zero loss on the second try,
+so the results are closer to the noiseless end of the performance spectrum,
+and more repeatable and comparable.</t>
+
+<t>Only the variant with &quot;z = infinity&quot; is achievable with MLRsearch.</t>
+
+
+<t>For example, for &quot;r = 2&quot; variant, the following search goal should be used:</t>
+
+<t><list style="symbols">
+ <t>Goal Final Trial Duration = 60 seconds</t>
+ <t>Goal Duration Sum = 120 seconds</t>
+ <t>Goal Loss Ratio = 0%</t>
+ <t>Goal Exceed Ratio = 50%</t>
+</list></t>
+
+<t>If the first 60s trial has zero loss, it is enough for MLRsearch to stop
+measuring at that load, as even a second lossy trial
+would still fit within the exceed ratio.</t>
+
+<t>But if the first trial is lossy, MLRsearch also needs to perform
+the second trial to classify that load.
+As the Goal Duration Sum is twice as long as the Goal Final Trial Duration,
+a third full-length trial is never needed.</t>
+
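+<t>For illustration only (not part of this specification), the two example
+goals above could be handed to a Controller implementation together, e.g.
+as a single Controller Input expressed in Python; the attribute spellings
+below are assumptions of this sketch, not requirements:</t>
+
+<figure><artwork><![CDATA[
+controller_input = {
+    "search_goals": [
+        {   # RFC2544 Goal
+            "goal_final_trial_duration": 60.0,
+            "goal_duration_sum": 60.0,
+            "goal_loss_ratio": 0.0,
+            "goal_exceed_ratio": 0.0,
+        },
+        {   # TST009 Goal, "r = 2" variant
+            "goal_final_trial_duration": 60.0,
+            "goal_duration_sum": 120.0,
+            "goal_loss_ratio": 0.0,
+            "goal_exceed_ratio": 0.5,
+        },
+    ],
+}
+]]></artwork></figure>
+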
+</section>
+</section>
+<section anchor="result-terms"><name>Result Terms</name>
+
+<t>Before defining the output of the Controller,
+it is useful to define what the Goal Result is.</t>
+
+<t>The Goal Result is a composite quantity.</t>
+
+<t>Following subsections define its attribute first, before describing the Goal Result quantity.</t>
+
+<t>There is a correspondence between Search Goals and Goal Results.
+Most of the following subsections refer to a given Search Goal,
+when defining attributes of the Goal Result.
+Conversely, at the end of the search, each Search Goal
+has its corresponding Goal Result.</t>
+
+<t>Conceptually, the search can be seen as a process of load classification,
+where the Controller attempts to classify some loads as an Upper Bound
+or a Lower Bound with respect to some Search Goal.</t>
+
+<t>Before defining real attributes of the goal result,
+it is useful to define bounds in general.</t>
+
+<section anchor="relevant-upper-bound"><name>Relevant Upper Bound</name>
+
+<t>Definition:</t>
+
+<t>The Relevant Upper Bound is the smallest trial load value that is classified
+at the end of the search as an upper bound
+(see [Appendix A: Load Classification] (#Appendix-A:-Load-Classification))
+for the given Search Goal.</t>
+
+<t>Discussion:</t>
+
+<t>One search goal can have many different loads classified as upper bounds.
+At the end of the search, the smallest of those loads
+becomes the Relevant Upper Bound for that goal.</t>
+
+<t>In more detail, the set of all trial outputs (both short and full-length,
+enough of them according to Goal Duration Sum)
+performed at that smallest load failed to uphold all the requirements
+of the given Search Goal, mainly the Goal Loss Ratio
+in combination with the Goal Exceed Ratio.</t>
+
+
+<t>If Max Load does not cause enough lossy trials,
+the Relevant Upper Bound does not exist.
+Conversely, if Relevant Upper Bound exists,
+it is not affected by Max Load value.</t>
+
+
+
+</section>
+<section anchor="relevant-lower-bound"><name>Relevant Lower Bound</name>
+
+<t>Definition:</t>
+
+<t>The Relevant Lower Bound is the largest trial load value
+among those smaller than the Relevant Upper Bound,
+that got classified at the end of the search as a lower bound (see
+[Appendix A: Load Classification] (#Appendix-A:-Load-Classification))
+for the given Search Goal.</t>
+
+<t>Discussion:</t>
+
+<t>Only loads smaller than the Relevant Upper Bound are considered;
+the largest of them becomes the Relevant Lower Bound.
+With loss inversion, the stricter upper bound matters.</t>
+
+<t>In more detail, the set of all trial outputs (both short and full-length,
+enough of them according to Goal Duration Sum)
+performed at that largest load managed to uphold all the requirements
+of the given Search Goal, mainly the Goal Loss Ratio
+in combination with the Goal Exceed Ratio.</t>
+
+<t>If no load had enough low-loss trials, the Relevant Lower Bound
+MAY not exist.</t>
+
+
+<t>Strictly speaking, if the Relevant Upper Bound does not exist,
+the Relevant Lower Bound also does not exist.
+In that case, Max Load is classified as a lower bound,
+but it is not clear whether a higher lower bound
+would be found if the search used a higher Max Load value.</t>
+
+<t>For a regular Goal Result, the distance between the Relevant Lower Bound
+and the Relevant Upper Bound MUST NOT be larger than the Goal Width,
+if the implementation offers width as a goal attribute.</t>
+
+
+<t>Searching for another search goal may cause a loss inversion phenomenon,
+where a lower load is classified as an upper bound,
+but also a higher load is classified as a lower bound for the same search goal.
+The definition of the Relevant Lower Bound ignores such high lower bounds.</t>
+
+
+</section>
+<section anchor="conditional-throughput"><name>Conditional Throughput</name>
+
+<t>Definition:</t>
+
+<t>The Conditional Throughput (see section [Appendix B: Conditional Throughput] (#Appendix-B:-Conditional-Throughput))
+as evaluated at the Relevant Lower Bound of the given Search Goal
+at the end of the search.</t>
+
+<t>Discussion:</t>
+
+<t>Informally, this is a typical trial forwarding rate, expected to be seen
+at the Relevant Lower Bound of the given Search Goal.</t>
+
+<t>But frequently it is only a conservative estimate thereof,
+as MLRsearch implementations tend to stop gathering more data
+as soon as they confirm the value cannot get worse than this estimate
+within the Goal Duration Sum.</t>
+
+<t>This value is RECOMMENDED to be used when evaluating repeatability
+and comparability of different MLRsearch implementations.</t>
+
+
+</section>
+<section anchor="goal-result"><name>Goal Result</name>
+
+<t>Definition:</t>
+
+<t>The Goal Result is a composite quantity consisting of several attributes.
+Relevant Upper Bound and Relevant Lower Bound are REQUIRED attributes,
+Conditional Throughput is a RECOMMENDED attribute.</t>
+
+<t>Discussion:</t>
+
+<t>Depending on SUT behavior, it is possible that one or both relevant bounds
+do not exist. A Goal Result instance where both required attribute values exist
+is informally called a Regular Goal Result instance,
+so we can say some goals reached Irregular Goal Results.</t>
+
+
+<t>A typical Irregular Goal Result is when all trials at the Max Load
+have zero loss, as the Relevant Upper Bound does not exist in that case.</t>
+
+<t>It is RECOMMENDED that the test report displays such results appropriately,
+although MLRsearch specification does not prescribe how.</t>
+
+
+<t>Anything else regarding Irregular Goal Results,
+including their role in stopping conditions of the search
+is outside the scope of this document.</t>
+
+</section>
+<section anchor="search-result"><name>Search Result</name>
+
+<t>Definition:</t>
+
+<t>The Search Result is a single composite object
+that maps each Search Goal instance to a corresponding Goal Result instance.</t>
+
+<t>Discussion:</t>
+
+<t>Alternatively, the Search Result can be implemented as an ordered list
+of the Goal Result instances, matching the order of Search Goal instances.</t>
+
+
+<t>The Search Result (as a mapping)
+MUST map from all the Search Goal instances present in the Controller Input.</t>
+
+
+
+</section>
+<section anchor="controller-output"><name>Controller Output</name>
+
+<t>Definition:</t>
+
+<t>The Controller Output is a composite quantity returned from the Controller
+to the Manager at the end of the search.
+The Search Result instance is its only REQUIRED attribute.</t>
+
+<t>Discussion:</t>
+
+<t>MLRsearch implementation MAY return additional data in the Controller Output.</t>
+
+
+</section>
+</section>
+<section anchor="mlrsearch-architecture"><name>MLRsearch Architecture</name>
+
+
+<t>MLRsearch architecture consists of three main system components:
+the Manager, the Controller, and the Measurer.</t>
+
+<t>The architecture also implies the presence of other components,
+such as the SUT and the Tester (as a sub-component of the Measurer).</t>
+
+<t>Protocols of communication between components are generally left unspecified.
+For example, when MLRsearch specification mentions &quot;Controller calls Measurer&quot;,
+it is possible that the Controller notifies the Manager
+to call the Measurer indirectly instead. This way the Measurer implementations
+can be fully independent from the Controller implementations,
+e.g. programmed in different programming languages.</t>
+
+<section anchor="measurer"><name>Measurer</name>
+
+<t>Definition:</t>
+
+<t>The Measurer is an abstract system component
+that when called with a [Trial Input] (#Trial-Input) instance,
+performs one [Trial] (#Trial),
+and returns a [Trial Output] (#Trial-Output) instance.</t>
+
+<t>Discussion:</t>
+
+<t>This definition assumes the Measurer is already initialized.
+In practice, there may be additional steps before the search,
+e.g. when the Manager configures the traffic profile
+(either on the Measurer or on its tester sub-component directly)
+and performs a warmup (if the tester requires one).</t>
+
+<t>It is the responsibility of the Measurer implementation to uphold
+any requirements and assumptions present in MLRsearch specification,
+e.g. trial forwarding ratio not being larger than one.</t>
+
+<t>Implementers have some freedom.
+For example <xref target="RFC2544"></xref> (section 10. Verifying received frames)
+gives some suggestions (but not requirements) related to
+duplicated or reordered frames.
+Implementations are RECOMMENDED to document their behavior
+related to such freedoms in as detailed a way as possible.</t>
+
+<t>It is RECOMMENDED to benchmark the test equipment first,
+e.g. connect sender and receiver directly (without any SUT in the path),
+find a load value that guarantees the offered load is not too far
+from the intended load, and use that value as the Max Load value.
+When testing the real SUT, it is RECOMMENDED to turn any big difference
+between the intended load and the offered load into increased Trial Loss Ratio.</t>
+
+<t>Neither of the two recommendations is made into a requirement,
+because it is not easy to tell when the difference is big enough,
+in a way that would be disentangled from other Measurer freedoms.</t>
+
+</section>
+<section anchor="controller"><name>Controller</name>
+
+<t>Definition:</t>
+
+<t>The Controller is an abstract system component
+that when called with a Controller Input instance
+repeatedly computes Trial Input instance for the Measurer,
+obtains corresponding Trial Output instances,
+and eventually returns a Controller Output instance.</t>
+
+<t>Discussion:</t>
+
+<t>Informally, the Controller has big freedom in selection of Trial Inputs,
+and the implementations want to achieve the Search Goals
+in the shortest expected time.</t>
+
+<t>The Controller&#39;s role in optimizing the overall search time
+distinguishes MLRsearch algorithms from simpler search procedures.</t>
+
+<t>Informally, each implementation can have different stopping conditions.
+Goal Width is only one example.
+In practice, implementation details do not matter,
+as long as Goal Results are regular.</t>
+
+</section>
+<section anchor="manager"><name>Manager</name>
+
+<t>Definition:</t>
+
+<t>The Manager is an abstract system component that is responsible for
+configuring other components, calling the Controller component once,
+and for creating the test report following the reporting format as
+defined in <xref target="RFC2544"></xref> (section 26. Benchmarking tests).</t>
+
+<t>Discussion:</t>
+
+<t>The Manager initializes the SUT, the Measurer (and the Tester if independent)
+with their intended configurations before calling the Controller.</t>
+
+<t>The Manager does not need to be able to tweak any Search Goal attributes,
+but it MUST report all applied attribute values even if not tweaked.</t>
+
+
+<t>In principle, there should be a &quot;user&quot; (human or CI)
+that &quot;starts&quot; or &quot;calls&quot; the Manager and receives the report.
+The Manager MAY be able to be called more than once this way.</t>
+
+
+</section>
+</section>
+<section anchor="implementation-compliance"><name>Implementation Compliance</name>
+
+<t>Any networking measurement setup where there can be logically delineated system components
+and there are components satisfying requirements for the Measurer,
+the Controller and the Manager, is considered to be compliant with MLRsearch design.</t>
+
+<t>These components can be seen as abstractions present in any testing procedure.
+For example, there can be a single component acting both
+as the Manager and the Controller, but as long as values of required attributes
+of Search Goals and Goal Results are visible in the test report,
+the Controller Input and Controller Output instances are implied.</t>
+
+<t>For example, any setup for conditionally (or unconditionally)
+compliant <xref target="RFC2544"></xref> throughput testing
+can be understood as a MLRsearch architecture,
+assuming there is enough data to reconstruct the Relevant Upper Bound.</t>
+
+<t>See [RFC2544 Goal] (#RFC2544-Goal) subsection for equivalent Search Goal.</t>
+
+<t>Any test procedure that can be understood as (one call to the Manager of)
+MLRsearch architecture is said to be compliant with MLRsearch specification.</t>
+
+</section>
+</section>
+<section anchor="additional-considerations"><name>Additional Considerations</name>
+
+<t>This section focuses on additional considerations, intuitions and motivations
+pertaining to MLRsearch methodology.</t>
+
+
+<section anchor="mlrsearch-versions"><name>MLRsearch Versions</name>
+
+<t>The MLRsearch algorithm has been developed in a code-first approach,
+a Python library has been created, debugged, used in production
+and published in PyPI before the first descriptions
+(even informal) were published.</t>
+
+<t>But the code (and hence the description) was evolving over time.
+Multiple versions of the library were used over past several years,
+and later code was usually not compatible with earlier descriptions.</t>
+
+<t>The code in (some version of) MLRsearch library fully determines
+the search process (for a given set of configuration parameters),
+leaving no space for deviations.</t>
+
+
+
+<t>This historic meaning of MLRsearch, as a family
+of search algorithm implementations,
+leaves plenty of space for future improvements, at the cost
+of poor comparability of results of search algorithm implementations.</t>
+
+
+<t>There are two competing needs.
+There is the need for standardization in areas critical to comparability.
+There is also the need to allow flexibility for implementations
+to innovate and improve in other areas.
+This document defines MLRsearch as a new specification
+in a manner that aims to fairly balance both needs.</t>
+
+</section>
+<section anchor="stopping-conditions"><name>Stopping Conditions</name>
+
+<t><xref target="RFC2544"></xref> prescribes that after performing one trial at a specific offered load,
+the next offered load should be larger or smaller, based on frame loss.</t>
+
+<t>The usual implementation uses binary search.
+Here a lossy trial becomes
+a new upper bound, a lossless trial becomes a new lower bound.
+The span of values between the tightest lower bound
+and the tightest upper bound (including both values) forms an interval of possible results,
+and after each trial the width of that interval halves.</t>
+
+<t>Usually the binary search implementation tracks only the two tightest bounds,
+simply calling them bounds.
+But the old values still remain valid bounds,
+just not as tight as the new ones.</t>
+
+<t>After some number of trials, the tightest lower bound becomes the throughput.
+<xref target="RFC2544"></xref> does not specify when, if ever, should the search stop.</t>
+
+<t>MLRsearch introduces a concept of [Goal Width] (#Goal-Width).</t>
+
+<t>The search stops
+when the distance between the tightest upper bound and the tightest lower bound
+is smaller than a user-configured value, called Goal Width from now on.
+In other words, the interval width at the end of the search
+has to be no larger than the Goal Width.</t>
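+
+<t>As a minimal sketch (assuming, for illustration only, that Goal Width
+is expressed as a relative difference of loads, which is just one possible
+convention), the stopping check can look as follows.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Illustrative stopping check; names and the relative-width convention
+# are assumptions of this sketch, not prescribed by the specification.
+def width_reached(lower_bound, upper_bound, goal_width):
+    return (upper_bound - lower_bound) / upper_bound <= goal_width
+
+# A 1% wide interval satisfies a 1% Goal Width:
+print(width_reached(9.9e6, 10.0e6, 0.01))  # True
+]]></sourcecode></figure>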
+
+<t>This Goal Width value therefore determines the precision of the result.
+Due to the fact that MLRsearch specification requires a particular
+structure of the result (see [Trial Result] (#Trial-Result) section),
+the result itself does contain enough information to determine its
+precision, thus it is not required to report the Goal Width value.</t>
+
+<t>This allows MLRsearch implementations to use stopping conditions
+different from Goal Width.</t>
+
+</section>
+<section anchor="load-classification"><name>Load Classification</name>
+
+<t>MLRsearch keeps the basic logic of binary search (tracking tightest bounds,
+measuring at the middle), perhaps with minor technical differences.</t>
+
+<t>MLRsearch algorithm chooses an intended load (as opposed to the offered load),
+the interval between bounds does not need to be split
+exactly into two equal halves,
+and the final reported structure specifies both bounds.</t>
+
+<t>The biggest difference is that to classify a load
+as an upper or lower bound, MLRsearch may need more than one trial
+(depending on configuration options) to be performed at the same intended load.</t>
+
+<t>In consequence, even if a load already has a few trial results,
+it may still be classified as undecided, neither a lower bound nor an upper bound.</t>
+
+<t>An explanation of the classification logic is given in the next section [Logic of Load Classification] (#Logic-of-Load-Classification),
+as it heavily relies on other subsections of this section.</t>
+
+<t>For repeatability and comparability reasons, it is important that
+given a set of trial results, all implementations of MLRsearch
+classify the load equivalently.</t>
+
+</section>
+<section anchor="loss-ratios"><name>Loss Ratios</name>
+
+<t>Another difference between MLRsearch and <xref target="RFC2544"></xref> binary search is in the goals of the search.
+<xref target="RFC2544"></xref> has a single goal,
+based on classifying full-length trials as either lossless or lossy.</t>
+
+<t>MLRsearch, as the name suggests, can search for multiple goals,
+differing in their loss ratios.
+The precise definition of the Goal Loss Ratio will be given later.
+The <xref target="RFC2544"></xref> throughput goal then simply becomes a zero Goal Loss Ratio.
+Different goals also may have different Goal Widths.</t>
+
+<t>A set of trial results for one specific intended load value
+can classify the load as an upper bound for some goals, but a lower bound
+for some other goals, and undecided for the rest of the goals.</t>
+
+<t>Therefore, the load classification depends not only on trial results,
+but also on the goal.
+The overall search procedure becomes more complicated, when
+compared to binary search with a single goal,
+but most of the complications do not affect the final result,
+except for one phenomenon, loss inversion.</t>
+
+</section>
+<section anchor="loss-inversion"><name>Loss Inversion</name>
+
+<t>In <xref target="RFC2544"></xref> throughput search using bisection, any load with a lossy trial
+becomes a hard upper bound, meaning every subsequent trial has a smaller
+intended load.</t>
+
+<t>But in MLRsearch, a load that is classified as an upper bound for one goal
+may still be a lower bound for another goal, and due to the other goal
+MLRsearch will probably perform trials at even higher loads.
+What to do when all such higher load trials happen to have zero loss?
+Does it mean the earlier upper bound was not real?
+Does it mean the later lossless trials are not considered a lower bound?
+Surely we do not want to have an upper bound at a load smaller than a lower bound.</t>
+
+<t>MLRsearch is conservative in these situations.
+The upper bound is considered real, and the lossless trials at higher loads
+are considered to be a coincidence, at least when computing the final result.</t>
+
+<t>This is formalized using new notions, the [Relevant Upper Bound] (#Relevant-Upper-Bound) and
+the [Relevant Lower Bound] (#Relevant-Lower-Bound).
+Load classification is still based just on the set of trial results
+at a given intended load (trials at other loads are ignored),
+making it possible to have a lower load classified as an upper bound,
+and a higher load classified as a lower bound (for the same goal).
+The Relevant Upper Bound (for a goal) is the smallest load classified
+as an upper bound.
+But the Relevant Lower Bound is not simply
+the largest among lower bounds.
+It is the largest load among loads
+that are lower bounds while also being smaller than the Relevant Upper Bound.</t>
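+
+<t>A minimal sketch of that selection follows, assuming each load
+has already been classified for the given goal;
+the loads and classifications are illustrative only.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Illustrative selection of relevant bounds under loss inversion.
+# classification maps intended load (fps) to "lower" or "upper" for one goal.
+classification = {9.5e6: "lower", 9.8e6: "upper", 10.0e6: "lower"}
+
+upper_loads = [load for load, c in classification.items() if c == "upper"]
+relevant_upper_bound = min(upper_loads)                      # 9.8 Mfps
+lower_loads = [load for load, c in classification.items()
+               if c == "lower" and load < relevant_upper_bound]
+relevant_lower_bound = max(lower_loads)                      # 9.5 Mfps
+# The 10.0 Mfps lower bound is ignored, as it lies above the upper bound.
+]]></sourcecode></figure>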
+
+<t>With these definitions, the Relevant Lower Bound is always smaller
+than the Relevant Upper Bound (if both exist), and the two relevant bounds
+are used analogously to the two tightest bounds in the binary search.
+When they are less than the Goal Width apart,
+the relevant bounds are used in the output.</t>
+
+<t>One consequence is that every trial result can have an impact on the search result.
+That means if your SUT (or your traffic generator) needs a warmup,
+be sure to warm it up before starting the search.</t>
+
+</section>
+<section anchor="exceed-ratio"><name>Exceed Ratio</name>
+
+<t>The idea of performing multiple trials at the same load comes from
+a model where some trial results (those with high loss) are affected
+by infrequent effects, causing poor repeatability of <xref target="RFC2544"></xref> throughput results.
+See the discussion about noiseful and noiseless ends
+of the SUT performance spectrum in section [DUT in SUT] (#DUT-in-SUT).
+Stable results are closer to the noiseless end of the SUT performance spectrum,
+so MLRsearch may need to allow some frequency of high-loss trials
+to ignore the rare but big effects near the noiseful end.</t>
+
+<t>MLRsearch can do such trial result filtering, but it needs
+a configuration option to tell it how frequent the infrequent big loss can be.
+This option is called the exceed ratio.
+It tells MLRsearch what ratio of trials
+(more exactly what ratio of trial seconds) can have a [Trial Loss Ratio] (#Trial-Loss-Ratio)
+larger than the Goal Loss Ratio and still be classified as a lower bound.
+Zero exceed ratio means all trials have to have a Trial Loss Ratio
+equal to or smaller than the Goal Loss Ratio.</t>
+
+<t>For explainability reasons, the RECOMMENDED value for exceed ratio is 0.5,
+as it simplifies some later concepts by relating them to the concept of median.</t>
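+
+<t>A small arithmetic sketch of the exceed ratio idea follows;
+the values are illustrative only.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# With Goal Exceed Ratio = 0.5 and Goal Duration Sum = 120 s,
+# up to half of the trial seconds may have a Trial Loss Ratio
+# above the Goal Loss Ratio while the load still qualifies as a lower bound.
+goal_duration_sum = 120.0   # seconds
+goal_exceed_ratio = 0.5
+allowed_bad_seconds = goal_duration_sum * goal_exceed_ratio
+print(allowed_bad_seconds)  # 60.0
+]]></sourcecode></figure>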
+
+</section>
+<section anchor="duration-sum"><name>Duration Sum</name>
+
+<t>When more than one trial is intended to classify a load,
+MLRsearch also needs something that controls the number of trials needed.
+Therefore, each goal also has an attribute called duration sum.</t>
+
+<t>The meaning of a [Goal Duration Sum] (#Goal-Duration-Sum) is that
+when a load has (full-length) trials
+whose trial durations when summed up give a value at least as big
+as the Goal Duration Sum value,
+the load is guaranteed to be classified either as an upper bound
+or a lower bound for that goal.</t>
+
+<t>Due to the fact that the duration sum has a big impact
+on the overall search duration, and <xref target="RFC2544"></xref> prescribes
+wait intervals around trial traffic,
+the MLRsearch algorithm is allowed to sum durations that are different
+from the actual trial traffic durations.</t>
+
+<t>In the MLRsearch specification, the different duration values are called
+[Trial Effective Duration] (#Trial-Effective-Duration).</t>
+
+</section>
+<section anchor="short-trials"><name>Short Trials</name>
+
+<t>MLRsearch requires each goal to specify its final trial duration.
+Full-length trial is a shorter name for a trial whose intended trial duration
+is equal to (or longer than) the goal final trial duration.</t>
+
+<t>Section 24 of <xref target="RFC2544"></xref> already anticipates possible time savings
+when short trials (shorter than full-length trials) are used.
+Full-length trials are the opposite of short trials,
+so they may also be called long trials.</t>
+
+<t>Any MLRsearch implementation may include its own configuration options
+which control when and how MLRsearch chooses to use short trial durations.</t>
+
+<t>For explainability reasons, when an exceed ratio of 0.5 is used,
+it is recommended for the Goal Duration Sum to be an odd multiple
+of the full-length trial duration, so the Conditional Throughput becomes
+identical to a median of a particular set of trial forwarding rates.</t>
+
+<t>The presence of short trial results complicates the load classification logic.</t>
+
+<t>Full details are given later in section [Logic of Load Classification] (#Logic-of-Load-Classification).
+In a nutshell, results from short trials
+may cause a load to be classified as an upper bound.
+This may cause loss inversion, and thus lower the Relevant Lower Bound,
+below what would classification say when considering full-length trials only.</t>
+
+
+
+</section>
+<section anchor="throughput"><name>Throughput</name>
+
+
+<t>Due to the fact that testing equipment takes the intended load as an input parameter
+for a trial measurement, any load search algorithm needs to deal
+with intended load values internally.</t>
+
+<t>But in the presence of goals with a non-zero loss ratio, the intended load
+usually does not match the user&#39;s intuition of what a throughput is.
+The forwarding rate (as defined in <xref target="RFC2285"></xref> section 3.6.1) is better,
+but it is not obvious how to generalize it
+for loads with multiple trial results and a non-zero
+[Goal Loss Ratio] (#Goal-Loss-Ratio).</t>
+
+<t>The best example is also the main motivation: hard limit performance.
+Even if the medium allows higher performance,
+the SUT interfaces may have their additional own limitations,
+e.g. a specific fps limit on the NIC (a very common occurrence).</t>
+
+<t>Ideally, those should be known and used when computing Max Load.
+But if Max Load is higher than what the interface can receive or transmit,
+there will be a &quot;hard limit&quot; observed in trial results.
+Imagine the hard limit is at 100 Mfps, Max Load is higher,
+and the goal loss ratio is 0.5%. If the DUT has no additional losses,
+a 0.5% loss ratio will be achieved at 100.5025 Mfps (the relevant lower bound).
+But it is not intuitive to report SUT performance as a value that is
+larger than the known hard limit.
+We need a generalization of RFC2544 throughput,
+different from just the relevant lower bound.</t>
+
+<t>MLRsearch defines one such generalization, called the Conditional Throughput.
+It is the trial forwarding rate from one of the trials
+performed at the load in question.
+Determining which trial exactly is defined in
+[MLRsearch Specification] (#MLRsearch-Specification),
+and in [Appendix B: Conditional Throughput] (#Appendix-B:-Conditional-Throughput).</t>
+
+<t>In the hard limit example, 100.5 Mfps load will still have
+only 100.0 Mfps forwarding rate, nicely confirming the known limitation.</t>
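+
+<t>The arithmetic behind this example can be sketched as follows;
+the hard limit and the loss ratio are the illustrative values from the text.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Hard-limit example: a NIC forwarding at most 100 Mfps,
+# searched with a 0.5% Goal Loss Ratio and a higher Max Load.
+hard_limit = 100.0e6        # frames per second
+goal_loss_ratio = 0.005
+
+# Assuming the DUT adds no other losses, the load whose loss ratio
+# equals the goal satisfies (load - hard_limit) / load == goal_loss_ratio.
+relevant_lower_bound = hard_limit / (1.0 - goal_loss_ratio)
+print(round(relevant_lower_bound / 1e6, 4))    # 100.5025 Mfps
+
+# The trial forwarding rate at that load stays at the hard limit,
+# so the Conditional Throughput matches the intuitive SUT performance.
+conditional_throughput = relevant_lower_bound * (1.0 - goal_loss_ratio)
+print(round(conditional_throughput / 1e6, 1))  # 100.0 Mfps
+]]></sourcecode></figure>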
+
+<t>Conditional Throughput is partially related to load classification.
+If a load is classified as a lower bound for a goal,
+the Conditional Throughput can be calculated from trial results,
+and is guaranteed to show a loss ratio
+no larger than the Goal Loss Ratio.</t>
+
+
+
+
+<t>Note that when comparing the best (all zero loss) and worst case (all loss
+just below Goal Loss Ratio), the same Relevant Lower Bound value
+may result in the Conditional Throughput differing up to the Goal Loss Ratio.</t>
+
+<t>Therefore it is rarely needed to set the Goal Width (if expressed
+as the relative difference of loads) below the Goal Loss Ratio.
+In other words, setting the Goal Width below the Goal Loss Ratio
+may cause the Conditional Throughput for a larger loss ratio to become smaller
+than a Conditional Throughput for a goal with a smaller Goal Loss Ratio,
+which is counter-intuitive, considering they come from the same search.
+Therefore it is RECOMMENDED to set the Goal Width to a value no smaller
+than the Goal Loss Ratio.</t>
+
+<t>Overall, this Conditional Throughput does behave well for comparability purposes.</t>
+
+</section>
+<section anchor="search-time"><name>Search Time</name>
+
+<t>MLRsearch was primarily developed to reduce the time
+required to determine a throughput, either the <xref target="RFC2544"></xref> compliant one,
+or some generalization thereof.
+The art of achieving short search times
+is mainly in the smart selection of intended loads (and intended durations)
+for the next trial to perform.</t>
+
+<t>While there is an indirect impact of the load selection on the reported values,
+in practice such impact tends to be small,
+even for SUTs with quite a broad performance spectrum.</t>
+
+<t>A typical example of two approaches to load selection leading to different
+Relevant Lower Bounds is when the interval is split in a very uneven way.
+Any implementation choosing loads very close to the current Relevant Lower Bound
+is quite likely to eventually stumble upon a trial result
+with poor performance (due to SUT noise).
+For an implementation choosing loads very close
+to the current Relevant Upper Bound, this is unlikely,
+as it examines more loads that can see a performance
+close to the noiseless end of the SUT performance spectrum.</t>
+
+<t>However, as even splits optimize search duration at a given precision,
+MLRsearch implementations that prioritize minimizing search time
+are unlikely to suffer from any such bias.</t>
+
+<t>Therefore, this document remains quite vague on load selection
+and other optimization details, and configuration attributes related to them.
+Assuming users prefer libraries that achieve short overall search time,
+the definition of the Relevant Lower Bound
+should be strict enough to ensure result repeatability
+and comparability between different implementations,
+while not restricting future implementations much.</t>
+
+
+</section>
+<section anchor="rfc2544-compliance"><name><xref target="RFC2544"></xref> Compliance</name>
+
+<t>Some Search Goal instances lead to results compliant with RFC2544.
+See [RFC2544 Goal] (#RFC2544-Goal) for more details
+regarding both conditional and unconditional compliance.</t>
+
+<t>The presence of other Search Goals does not affect the compliance
+of this Goal Result.
+The Relevant Lower Bound and the Conditional Throughput are in this case
+equal to each other, and the value is the <xref target="RFC2544"></xref> throughput.</t>
+
+</section>
+</section>
+<section anchor="logic-of-load-classification"><name>Logic of Load Classification</name>
+
+<section anchor="introductory-remarks"><name>Introductory Remarks</name>
+
+<t>This chapter continues with explanations,
+but this time more precise definitions are needed
+for readers to follow the explanations.</t>
+
+<t>Descriptions in this section are wordy and implementers should read
+[MLRsearch Specification] (#MLRsearch-Specification) section
+and Appendices for more concise definitions.</t>
+
+<t>The two areas of focus here are load classification
+and the Conditional Throughput.</t>
+
+<t>To start with [Performance Spectrum] (#Performance-Spectrum)
+subsection contains definitions needed to gain insight
+into what Conditional Throughput means.
+Remaining subsections discuss load classification.</t>
+
+<t>For load classification, it is useful to define <strong>good trials</strong> and <strong>bad trials</strong>:</t>
+
+<t><list style="symbols">
+ <t><strong>Bad trial</strong>: Trial is called bad (according to a goal)
+if its [Trial Loss Ratio] (#Trial-Loss-Ratio)
+is larger than the [Goal Loss Ratio] (#Goal-Loss-Ratio).</t>
+ <t><strong>Good trial</strong>: Trial that is not bad is called good.</t>
+</list></t>
+
+</section>
+<section anchor="performance-spectrum"><name>Performance Spectrum</name>
+
+<t>There are several equivalent ways to explain the Conditional Throughput
+computation. One of the ways relies on performance
+spectrum.</t>
+
+<t>Take an intended load value, a trial duration value, and a finite set
+of trial results, with all trials measured at that load value and duration value.</t>
+
+<t>The performance spectrum is the function that maps
+any non-negative real number to the sum of durations of those trials
+in the set whose trial forwarding rate equals that number
+(mapping to zero if no trial has that particular forwarding rate).</t>
+
+<t>A related function, defined if there is at least one trial in the set,
+is the performance spectrum divided by the sum of the durations
+of all trials in the set.</t>
+
+<t>That function is called the performance probability function, as it satisfies
+all the requirements for probability mass function
+of a discrete probability distribution,
+the one-dimensional random variable being the trial forwarding rate.</t>
+
+<t>These functions are related to the SUT performance spectrum,
+as sampled by the trials in the set.</t>
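+
+<t>A minimal sketch of both functions, for a small illustrative set
+of trial results, follows.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Performance spectrum and performance probability function for a
+# finite set of (trial forwarding rate, trial duration) pairs.
+from collections import defaultdict
+
+trials = [(9.9e6, 60.0), (10.0e6, 60.0), (10.0e6, 60.0)]  # illustrative
+
+spectrum = defaultdict(float)
+for rate, duration in trials:
+    spectrum[rate] += duration      # sum of durations per forwarding rate
+
+total_duration = sum(duration for _, duration in trials)
+probability = {rate: dur / total_duration for rate, dur in spectrum.items()}
+# probability[10.0e6] == 2/3 and probability[9.9e6] == 1/3
+]]></sourcecode></figure>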
+
+
+<t>Take a set of all full-length trials performed at the Relevant Lower Bound,
+sorted by decreasing trial forwarding rate.
+The sum of the durations of those trials
+may be less than the Goal Duration Sum, or not.
+If it is less, add an imaginary trial result with zero trial forwarding rate,
+such that the new sum of durations is equal to the Goal Duration Sum.
+This is the set of trials to use.</t>
+
+<t>From that set, a quantile of the trial forwarding rates is taken,
+weighted by trial durations and determined by the Goal Exceed Ratio
+(for the RECOMMENDED exceed ratio of 50% this is the median).
+If the quantile touches two trials,
+the larger trial forwarding rate (from the trial result sorted earlier) is used.</t>
+
+<t>The resulting quantity is the Conditional Throughput of the goal in question.</t>
+
+
+<t>A set of examples follows.</t>
+
+<section anchor="first-example"><name>First Example</name>
+
+<t><list style="symbols">
+ <t>[Goal Exceed Ratio] (#Goal-Exceed-Ratio) = 0 and [Goal Duration Sum] (#Goal-Duration-Sum) has been reached.</t>
+ <t>Conditional Throughput is the smallest trial forwarding rate among the trials.</t>
+</list></t>
+
+</section>
+<section anchor="second-example"><name>Second Example</name>
+
+<t><list style="symbols">
+ <t>Goal Exceed Ratio = 0 and Goal Duration Sum has not been reached yet.</t>
+ <t>Due to the missing duration sum, the worst case may still happen, so the Conditional Throughput is zero.</t>
+ <t>This is not reported to the user, as this load cannot become the Relevant Lower Bound yet.</t>
+</list></t>
+
+</section>
+<section anchor="third-example"><name>Third Example</name>
+
+<t><list style="symbols">
+ <t>Goal Exceed Ratio = 50% and Goal Duration Sum is two seconds.</t>
+ <t>One trial is present with the duration of one second and zero loss.</t>
+ <t>The imaginary trial is added with the duration of one second and zero trial forwarding rate.</t>
+ <t>The median would touch both trials, so the Conditional Throughput is the trial forwarding rate of the one non-imaginary trial.</t>
+ <t>As that had zero loss, the value is equal to the offered load.</t>
+</list></t>
+
+
+</section>
+<section anchor="summary"><name>Summary</name>
+
+<t>While the Conditional Throughput is a generalization of the trial forwarding rate,
+its definition is not an obvious one.</t>
+
+<t>Other than the trial forwarding rate, the other source of intuition
+is the quantile in general, and the median in the recommended case.</t>
+
+
+</section>
+</section>
+<section anchor="trials-with-single-duration"><name>Trials with Single Duration</name>
+
+
+<t>When goal attributes are chosen in such a way that every trial has the same
+intended duration, the load classification is simpler.</t>
+
+<t>The following description follows the motivation
+of Goal Loss Ratio, Goal Exceed Ratio, and Goal Duration Sum.</t>
+
+<t>If the sum of the durations of all trials (at the given load)
+is less than the Goal Duration Sum, imagine two scenarios:</t>
+
+<t><list style="symbols">
+ <t><strong>best case scenario</strong>: all subsequent trials having zero loss, and</t>
+ <t><strong>worst case scenario</strong>: all subsequent trials having 100% loss.</t>
+</list></t>
+
+<t>Here we assume there are as many subsequent trials as needed
+to make the sum of all trials equal to the Goal Duration Sum.</t>
+
+<t>The exceed ratio is defined using sums of durations
+(the number of trials does not matter), so it does not matter whether
+the &quot;subsequent trials&quot; consist of an integer number of full-length trials.</t>
+
+<t>In any of the two scenarios, best case and worst case, we can compute the load exceed ratio,
+as the duration sum of good trials divided by the duration sum of all trials,
+in both cases including the assumed trials.</t>
+
+<t>If, even in the best case scenario, the load exceed ratio would be larger
+than the Goal Exceed Ratio, the load is an upper bound.</t>
+
+<t>If, even in the worst case scenario, the load exceed ratio would not be larger
+than the Goal Exceed Ratio, the load is a lower bound.</t>
+
+
+<t>More specifically:</t>
+
+<t><list style="symbols">
+ <t>Take all trials measured at a given load.</t>
+ <t>The sum of the durations of all bad full-length trials is called the bad sum.</t>
+ <t>The sum of the durations of all good full-length trials is called the good sum.</t>
+ <t>The result of adding the bad sum plus the good sum is called the measured sum.</t>
+ <t>The larger of the measured sum and the Goal Duration Sum is called the whole sum.</t>
+ <t>The whole sum minus the measured sum is called the missing sum.</t>
+ <t>The optimistic exceed ratio is the bad sum divided by the whole sum.</t>
+ <t>The pessimistic exceed ratio is the bad sum plus the missing sum, that divided by the whole sum.</t>
+ <t>If the optimistic exceed ratio is larger than the Goal Exceed Ratio, the load is classified as an upper bound.</t>
+ <t>If the pessimistic exceed ratio is not larger than the Goal Exceed Ratio, the load is classified as a lower bound.</t>
+ <t>Else, the load is classified as undecided.</t>
+</list></t>
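+
+<t>A minimal sketch of these steps follows; the function and variable names
+are chosen for this sketch only.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Single-trial-duration load classification, following the steps above.
+# All sums are durations in seconds.
+def classify(good_sum, bad_sum, goal_duration_sum, goal_exceed_ratio):
+    measured_sum = good_sum + bad_sum
+    whole_sum = max(measured_sum, goal_duration_sum)
+    missing_sum = whole_sum - measured_sum
+    optimistic_exceed_ratio = bad_sum / whole_sum
+    pessimistic_exceed_ratio = (bad_sum + missing_sum) / whole_sum
+    if optimistic_exceed_ratio > goal_exceed_ratio:
+        return "upper bound"
+    if pessimistic_exceed_ratio <= goal_exceed_ratio:
+        return "lower bound"
+    return "undecided"
+
+# With Goal Duration Sum = 120 s and Goal Exceed Ratio = 0.5:
+print(classify(60.0, 0.0, 120.0, 0.5))   # lower bound (one good 60 s trial)
+print(classify(0.0, 60.0, 120.0, 0.5))   # undecided (one bad 60 s trial)
+print(classify(60.0, 60.0, 120.0, 0.5))  # lower bound (one good, one bad)
+]]></sourcecode></figure>
+
+<t>Note that the first two calls match the behavior described
+for the TST009 Goal example earlier.</t>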
+
+<t>The definition of pessimistic exceed ratio is compatible with the logic in
+the Conditional Throughput computation, so in this single trial duration case,
+a load is a lower bound if and only if the Conditional Throughput
+loss ratio is not larger than the Goal Loss Ratio.</t>
+
+
+<t>If it is larger, the load is either an upper bound or undecided.</t>
+
+</section>
+<section anchor="trials-with-short-duration"><name>Trials with Short Duration</name>
+
+<section anchor="scenarios"><name>Scenarios</name>
+
+<t>Trials with intended duration smaller than the goal final trial duration
+are called short trials.
+The motivation for load classification logic in the presence of short trials
+is based around a counter-factual case: What would the trial result be
+if a short trial has been measured as a full-length trial instead?</t>
+
+<t>There are three main scenarios where human intuition guides
+the intended behavior of load classification.</t>
+
+<section anchor="false-good-scenario"><name>False Good Scenario</name>
+
+<t>The user had their reason for not configuring a shorter goal
+final trial duration.
+Perhaps the SUT has buffers that may get full at longer
+trial durations.
+Perhaps the SUT shows periodic decreases in performance
+that the user does not want to be treated as noise.</t>
+
+<t>In any case, many good short trials may become bad full-length trials
+in the counter-factual case.</t>
+
+<t>In extreme cases, there are plenty of good short trials and no bad short trials.</t>
+
+<t>In this scenario, we want the load classification NOT to classify the load
+as a lower bound, despite the abundance of good short trials.</t>
+
+
+<t>Effectively, we want the good short trials to be ignored, so they
+do not contribute to comparisons with the Goal Duration Sum.</t>
+
+</section>
+<section anchor="true-bad-scenario"><name>True Bad Scenario</name>
+
+<t>When there is a frame loss in a short trial,
+the counter-factual full-length trial is expected to lose at least as many
+frames.</t>
+
+<t>In practice, bad short trials are rarely turning into
+good full-length trials.</t>
+
+<t>In extreme cases, there are no good short trials.</t>
+
+<t>In this scenario, we want the load classification
+to classify the load as an upper bound just based on the abundance
+of short bad trials.</t>
+
+<t>Effectively, we want the bad short trials
+to contribute to comparisons with the Goal Duration Sum,
+so the load can be classified sooner.</t>
+
+</section>
+<section anchor="balanced-scenario"><name>Balanced Scenario</name>
+
+<t>Some SUTs are quite indifferent to trial duration.
+Performance probability function constructed from short trial results
+is likely to be similar to the performance probability function constructed
+from full-length trial results (perhaps with larger dispersion,
+but without a big impact on the median quantiles overall).</t>
+
+
+<t>For a moderate Goal Exceed Ratio value, this may mean there are both
+good short trials and bad short trials.</t>
+
+<t>This scenario is there just to invalidate a simple heuristic
+of always ignoring good short trials and never ignoring bad short trials,
+as that simple heuristic would be too biased.</t>
+
+<t>Yes, the short bad trials
+are likely to turn into full-length bad trials in the counter-factual case,
+but there is no information on what the good short trials would turn into.</t>
+
+<t>The only way to decide safely is to do more trials at full length,
+the same as in False Good Scenario.</t>
+
+</section>
+</section>
+<section anchor="classification-logic"><name>Classification Logic</name>
+
+<t>MLRsearch picks a particular logic for load classification
+in the presence of short trials, but it is still RECOMMENDED
+to use configurations that imply no short trials,
+so the possible inefficiencies in that logic
+do not affect the result, and the result has better explainability.</t>
+
+<t>With that said, the logic differs from the single trial duration case
+only in different definition of the bad sum.
+The good sum is still the sum across all good full-length trials.</t>
+
+<t>Few more notions are needed for defining the new bad sum:</t>
+
+<t><list style="symbols">
+ <t>The sum of durations of all bad full-length trials is called the bad long sum.</t>
+ <t>The sum of durations of all bad short trials is called the bad short sum.</t>
+ <t>The sum of durations of all good short trials is called the good short sum.</t>
+ <t>One minus the Goal Exceed Ratio is called the subceed ratio.</t>
+ <t>The Goal Exceed Ratio divided by the subceed ratio is called the exceed coefficient.</t>
+ <t>The good short sum multiplied by the exceed coefficient is called the balancing sum.</t>
+ <t>The bad short sum minus the balancing sum is called the excess sum.</t>
+ <t>If the excess sum is negative, the bad sum is equal to the bad long sum.</t>
+ <t>Otherwise, the bad sum is equal to the bad long sum plus the excess sum.</t>
+</list></t>
+
+<t>Here is how the new definition of the bad sum fares in the three scenarios,
+where the load is close to what the relevant bounds would be
+if only full-length trials were used for the search.</t>
+
+<section anchor="false-good-scenario-1"><name>False Good Scenario</name>
+
+<t>If the duration is too short, we expect to see a higher frequency
+of good short trials.
+This could lead to a negative excess sum,
+which has no impact, hence the load classification is given just by
+full-length trials.
+Thus, MLRsearch using too short trials has no detrimental effect
+on result comparability in this scenario.
+But also using short trials does not help with overall search duration,
+probably making it worse.</t>
+
+</section>
+<section anchor="true-bad-scenario-1"><name>True Bad Scenario</name>
+
+<t>Settings with a small exceed ratio
+have a small exceed coefficient, so the impact of the good short sum is small,
+and the bad short sum is almost wholly converted into excess sum,
+thus bad short trials have almost as big an impact as full-length bad trials.
+The same conclusion applies to moderate exceed ratio values
+when the good short sum is small.
+Thus, short trials can cause a load to get classified as an upper bound earlier,
+bringing time savings (while not affecting comparability).</t>
+
+</section>
+<section anchor="balanced-scenario-1"><name>Balanced Scenario</name>
+
+<t>Here excess sum is small in absolute value, as the balancing sum
+is expected to be similar to the bad short sum.
+Once again, full-length trials are needed for final load classification;
+but usage of short trials probably means MLRsearch needed
+a shorter overall search time before selecting this load for measurement,
+thus bringing time savings (while not affecting comparability).</t>
+
+<t>Note that in the presence of short trial results,
+the comparability between the load classification
+and the Conditional Throughput is only partial.
+The Conditional Throughput still comes from a good long trial,
+but a load higher than the Relevant Lower Bound may also compute to a good value.</t>
+
+</section>
+</section>
+</section>
+<section anchor="trials-with-longer-duration"><name>Trials with Longer Duration</name>
+
+<t>If there are trial results with an intended duration larger
+than the goal final trial duration, the precise definitions
+in Appendix A and Appendix B treat them in exactly the same way
+as trials with duration equal to the goal final trial duration.</t>
+
+<t>But in configurations with moderate (including 0.5) or small
+Goal Exceed Ratio and small Goal Loss Ratio (especially zero),
+bad trials with longer than goal durations may bias the search
+towards the lower load values, as the noiseful end of the spectrum
+gets a larger probability of causing the loss within the longer trials.</t>
+
+
+
+
+</section>
+</section>
+<section anchor="iana-considerations"><name>IANA Considerations</name>
+
+<t>No requests of IANA.</t>
+
+</section>
+<section anchor="security-considerations"><name>Security Considerations</name>
+
+<t>Benchmarking activities as described in this memo are limited to
+technology characterization of a DUT/SUT using controlled stimuli in a
+laboratory environment, with dedicated address space and the constraints
+specified in the sections above.</t>
+
+<t>The benchmarking network topology will be an independent test setup and
+MUST NOT be connected to devices that may forward the test traffic into
+a production network or misroute traffic to the test management network.</t>
+
+<t>Further, benchmarking is performed on a &quot;black-box&quot; basis, relying
+solely on measurements observable external to the DUT/SUT.</t>
+
+<t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+benchmarking purposes. Any implications for network security arising
+from the DUT/SUT SHOULD be identical in the lab and in production
+networks.</t>
+
+</section>
+<section anchor="acknowledgements"><name>Acknowledgements</name>
+
+<t>Some phrases and statements in this document were created
+with help of Mistral AI (mistral.ai).</t>
+
+<t>Many thanks to Alec Hothan of the OPNFV NFVbench project for thorough
+review and numerous useful comments and suggestions in the earlier versions of this document.</t>
+
+<t>Special wholehearted gratitude and thanks to the late Al Morton for his
+thorough reviews filled with very specific feedback and constructive
+guidelines. Thank you Al for the close collaboration over the years,
+for your continuous unwavering encouragement full of empathy and
+positive attitude. Al, you are dearly missed.</t>
+
+</section>
+<section anchor="appendix-a-load-classification"><name>Appendix A: Load Classification</name>
+
+<t>This section specifies how to perform the load classification.</t>
+
+<t>Any intended load value can be classified, according to a given [Search Goal] (#Search-Goal).</t>
+
+<t>The algorithm uses (some subsets of) the set of all available trial results
+from trials measured at a given intended load at the end of the search.
+All durations are those returned by the Measurer.</t>
+
+<t>The block at the end of this appendix holds pseudocode
+which computes two values, stored in variables named
+<spanx style="verb">optimistic</spanx> and <spanx style="verb">pessimistic</spanx>.</t>
+
+
+<t>The pseudocode happens to be valid Python code.</t>
+
+<t>If values of both variables are computed to be true, the load in question
+is classified as a lower bound according to the given Search Goal.
+If values of both variables are false, the load is classified as an upper bound.
+Otherwise, the load is classified as undecided.</t>
+
+<t>The pseudocode expects the following variables to hold values as follows:</t>
+
+<t><list style="symbols">
+ <t><spanx style="verb">goal_duration_sum</spanx>: The duration sum value of the given Search Goal.</t>
+ <t><spanx style="verb">goal_exceed_ratio</spanx>: The exceed ratio value of the given Search Goal.</t>
+ <t><spanx style="verb">good_long_sum</spanx>: Sum of durations across trials with trial duration
+at least equal to the goal final trial duration and with a Trial Loss Ratio
+not higher than the Goal Loss Ratio.</t>
+ <t><spanx style="verb">bad_long_sum</spanx>: Sum of durations across trials with trial duration
+at least equal to the goal final trial duration and with a Trial Loss Ratio
+higher than the Goal Loss Ratio.</t>
+ <t><spanx style="verb">good_short_sum</spanx>: Sum of durations across trials with trial duration
+shorter than the goal final trial duration and with a Trial Loss Ratio
+not higher than the Goal Loss Ratio.</t>
+ <t><spanx style="verb">bad_short_sum</spanx>: Sum of durations across trials with trial duration
+shorter than the goal final trial duration and with a Trial Loss Ratio
+higher than the Goal Loss Ratio.</t>
+</list></t>
+
+<t>The code works correctly also when there are no trial results at a given load.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
+effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
+quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
+optimistic = effective_bad_sum <= quantile_duration_sum
+pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
+]]></sourcecode></figure>
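+
+<t>As a usage illustration only: with one good and one bad 60 second
+full-length trial, measured against a 120 second Goal Duration Sum
+and a 50% Goal Exceed Ratio, the pseudocode classifies the load
+as a lower bound.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Illustrative inputs for the pseudocode above (durations in seconds).
+goal_duration_sum = 120.0
+goal_exceed_ratio = 0.5
+good_long_sum = 60.0
+bad_long_sum = 60.0
+good_short_sum = 0.0
+bad_short_sum = 0.0
+
+balancing_sum = good_short_sum * goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+effective_bad_sum = bad_long_sum + max(0.0, bad_short_sum - balancing_sum)
+effective_whole_sum = max(good_long_sum + effective_bad_sum, goal_duration_sum)
+quantile_duration_sum = effective_whole_sum * goal_exceed_ratio
+optimistic = effective_bad_sum <= quantile_duration_sum         # True
+pessimistic = (effective_whole_sum - good_long_sum) <= quantile_duration_sum
+# Both values are true, so this load is classified as a lower bound.
+]]></sourcecode></figure>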
+
+</section>
+<section anchor="appendix-b-conditional-throughput"><name>Appendix B: Conditional Throughput</name>
+
+<t>This section specifies how to compute Conditional Throughput, as referred to in section [Conditional Throughput] (#Conditional-Throughput).</t>
+
+<t>Any intended load value can be used as the basis for the following computation,
+but only the Relevant Lower Bound (at the end of the search)
+leads to the value called the Conditional Throughput for a given Search Goal.</t>
+
+<t>The algorithm uses (some subsets of) the set of all available trial results
+from trials measured at a given intended load at the end of the search.
+All durations are those returned by the Measurer.</t>
+
+<t>The block at the end of this appendix holds pseudocode
+which computes a value stored as variable <spanx style="verb">conditional_throughput</spanx>.</t>
+
+
+<t>The pseudocode happens to be valid Python code.</t>
+
+<t>The pseudocode expects the following variables to hold values as follows:</t>
+
+<t><list style="symbols">
+ <t><spanx style="verb">goal_duration_sum</spanx>: The duration sum value of the given Search Goal.</t>
+ <t><spanx style="verb">goal_exceed_ratio</spanx>: The exceed ratio value of the given Search Goal.</t>
+ <t><spanx style="verb">good_long_sum</spanx>: Sum of durations across trials with trial duration
+at least equal to the goal final trial duration and with a Trial Loss Ratio
+not higher than the Goal Loss Ratio.</t>
+ <t><spanx style="verb">bad_long_sum</spanx>: Sum of durations across trials with trial duration
+at least equal to the goal final trial duration and with a Trial Loss Ratio
+higher than the Goal Loss Ratio.</t>
+ <t><spanx style="verb">long_trials</spanx>: An iterable of all trial results from trials with trial duration
+at least equal to the goal final trial duration,
+sorted by increasing the Trial Loss Ratio.
+A trial result is a composite with the following two attributes available: <list style="symbols">
+ <t><spanx style="verb">trial.loss_ratio</spanx>: The Trial Loss Ratio as measured for this trial.</t>
+ <t><spanx style="verb">trial.duration</spanx>: The trial duration of this trial.</t>
+ </list></t>
+ <t><spanx style="verb">intended_load</spanx>: The intended load of the trials measured at the given load.</t>
+</list></t>
+
+<t>The code works correctly only if there is at least one
+trial result measured at a given load.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+all_long_sum = max(goal_duration_sum, good_long_sum + bad_long_sum)
+remaining = all_long_sum * (1.0 - goal_exceed_ratio)
+quantile_loss_ratio = None
+for trial in long_trials:
+ if quantile_loss_ratio is None or remaining > 0.0:
+ quantile_loss_ratio = trial.loss_ratio
+ remaining -= trial.duration
+ else:
+ break
+else:
+ if remaining > 0.0:
+ quantile_loss_ratio = 1.0
+conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
+]]></sourcecode></figure>
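+
+<t>As a usage illustration only: with two full-length 60 second trials
+at an intended load of 1.0 Mfps, one with zero loss and one with 10% loss,
+against a 120 second Goal Duration Sum and a 50% Goal Exceed Ratio,
+the pseudocode yields a Conditional Throughput equal to the intended load.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+# Illustrative inputs for the pseudocode above.
+from types import SimpleNamespace
+
+goal_duration_sum = 120.0
+goal_exceed_ratio = 0.5
+good_long_sum = 60.0
+bad_long_sum = 60.0
+intended_load = 1.0e6   # frames per second
+long_trials = [         # sorted by increasing Trial Loss Ratio
+    SimpleNamespace(loss_ratio=0.0, duration=60.0),
+    SimpleNamespace(loss_ratio=0.1, duration=60.0),
+]
+
+all_long_sum = max(goal_duration_sum, good_long_sum + bad_long_sum)
+remaining = all_long_sum * (1.0 - goal_exceed_ratio)
+quantile_loss_ratio = None
+for trial in long_trials:
+    if quantile_loss_ratio is None or remaining > 0.0:
+        quantile_loss_ratio = trial.loss_ratio
+        remaining -= trial.duration
+    else:
+        break
+else:
+    if remaining > 0.0:
+        quantile_loss_ratio = 1.0
+conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
+# quantile_loss_ratio == 0.0, so conditional_throughput == 1.0e6 fps.
+]]></sourcecode></figure>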
+
+</section>
+
+
+ </middle>
+
+ <back>
+
+
+<references title='References' anchor="sec-combined-references">
+
+ <references title='Normative References' anchor="sec-normative-references">
+
+&RFC1242;
+&RFC2285;
+&RFC2544;
+&RFC8219;
+&RFC9004;
+
+
+ </references>
+
+ <references title='Informative References' anchor="sec-informative-references">
+
+<reference anchor="TST009" target="https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf">
+ <front>
+ <title>TST 009</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="FDio-CSIT-MLRsearch" target="https://csit.fd.io/cdocs/methodology/measurements/data_plane_throughput/mlr_search/">
+ <front>
+ <title>FD.io CSIT Test Methodology - MLRsearch</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="2023" month="October"/>
+ </front>
+</reference>
+<reference anchor="PyPI-MLRsearch" target="https://pypi.org/project/MLRsearch/1.2.1/">
+ <front>
+ <title>MLRsearch 1.2.1, Python Package Index</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="2023" month="October"/>
+ </front>
+</reference>
+
+
+ </references>
+
+</references>
+
+
+
+
+
+
+ </back>
+
+<!-- ##markdown-source:
+
+-->
+
+</rfc>
+
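The ##markdown-source comment closing the generated XML above is how kdrfc keeps the draft self-contained: by kramdown-rfc convention its payload is the original markdown source, gzip-compressed and then base64-encoded. Assuming that encoding, the markdown can be recovered from the committed .xml with a pipeline along these lines (a sketch only; the sed expressions and the output name recovered-07.md are illustrative, not part of this change):

# strip the two comment-delimiter lines, then undo base64 and gzip
$ sed -n '/##markdown-source:/,/^-->/p' draft-ietf-bmwg-mlrsearch-07.xml |
    sed '1d;$d' | base64 -d | gunzip > recovered-07.md

This presumably exists so the markdown can be regenerated from the posted XML alone, without the original .md file.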
diff --git a/docs/ietf/process.txt b/docs/ietf/process.txt
index 128c31bff1..6492861163 100644
--- a/docs/ietf/process.txt
+++ b/docs/ietf/process.txt
@@ -22,7 +22,7 @@ $ sudo gem install kramdown-rfc2629
 $ kdrfc --version
 Main:
-$ kdrfc draft-ietf-bmwg-mlrsearch-06.md
+$ kdrfc draft-ietf-bmwg-mlrsearch-07.md
 If that complains, do it manually at https://author-tools.ietf.org/
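With the filename bumped, the conversion step above targets the new revision directly. As a quick sanity check, the single command below should reproduce the two generated files accompanying this revision (a sketch; the .txt output assumes kdrfc's usual behaviour of handing the generated XML to xml2rfc):

$ kdrfc draft-ietf-bmwg-mlrsearch-07.md
# expected outputs: draft-ietf-bmwg-mlrsearch-07.xml and draft-ietf-bmwg-mlrsearch-07.txt
# if this step fails locally, convert the .md manually at https://author-tools.ietf.org/ as noted above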