author    Vratko Polak <vrpolak@cisco.com>  2019-05-10 13:33:22 +0200
committer Vratko Polak <vrpolak@cisco.com>  2019-05-10 12:21:24 +0000
commit    03363a60e2b38976aba91b110cd2040e1fda2401 (patch)
tree      2fb67b590b9aecce01658fcd4fd9b33a1f6e0236 /docs/report/introduction/methodology_data_plane_throughput
parent    4ba1fd8d55a6fefa3349fcf221aece3a345e3ca2 (diff)
Update PLRsearch methodology to latest code
Deleted much of terms and generalized descriptions,
added more details on the current implementation.

- Still with old graphs

Change-Id: I9ba6b40d1d9d36d4f10382f60c756f5bc4708e4d
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Diffstat (limited to 'docs/report/introduction/methodology_data_plane_throughput')
-rw-r--r--  docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst  391
1 file changed, 105 insertions, 286 deletions
diff --git a/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst b/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
index fdd03472c3..1745edfb5e 100644
--- a/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
+++ b/docs/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.rst
@@ -3,158 +3,55 @@
PLRsearch
^^^^^^^^^
-Abstract algorithm
-~~~~~~~~~~~~~~~~~~
+Motivation for PLRsearch
+~~~~~~~~~~~~~~~~~~~~~~~~
-.. TODO: Refer to packet forwarding terminology, such as "offered load" and
- "loss ratio".
+Network providers are interested in the throughput a system can sustain.
-Eventually, a better description of the abstract search algorithm
-will appear at this IETF standard: `plrsearch draft`_.
+`RFC 2544`_ assumes the loss ratio is given by a deterministic function of
+offered load. But NFV software systems are not deterministic enough.
+This causes deterministic algorithms (such as `binary search`_ per RFC 2544
+and MLRsearch with a single trial) to return results which,
+when repeated, show relatively high standard deviation,
+making it harder to tell what "the throughput" actually is.
-Motivation
-``````````
+We need another algorithm, which takes this indeterminism into account.
-Network providers are interested in throughput a device can sustain.
+Generic algorithm
+~~~~~~~~~~~~~~~~~
-`RFC 2544`_ assumes loss ratio is given by a deterministic function of
-offered load. But NFV software devices are not deterministic (enough).
-This leads for deterministic algorithms (such as MLRsearch with single
-trial) to return results, which when repeated show relatively high
-standard deviation, thus making it harder to tell what "the throughput"
-actually is.
+A detailed description of the PLRsearch algorithm is included in the IETF
+draft `draft-vpolak-bmwg-plrsearch-01`_, which is in the process
+of being standardized in the Benchmarking Methodology Working Group (BMWG).
-We need another algorithm, which takes this indeterminism into account.
+Terms
+`````
+
+The rest of this page assumes the reader is familiar with the following terms
+defined in the IETF draft:
+
++ Target Loss Ratio
++ Critical Load
++ Offered Load regions
+
+ + Zero Loss Region
+ + Non-Deterministic Region
+ + Guaranteed Loss Region
+
++ Fitting Function
-Model
-`````
-
-Each algorithm searches for an answer to a precisely formulated
-question. When the question involves indeterministic systems, it has to
-specify probabilities (or prior distributions) which are tied to a
-specific probabilistic model. Different models will have different
-number (and meaning) of parameters. Complicated (but more realistic)
-models have many parameters, and the math involved can be very
-convoluted. It is better to start with simpler probabilistic model, and
-only change it when the output of the simpler algorithm is not stable or
-useful enough.
-
-This document is focused on algorithms related to packet loss count
-only. No latency (or other information) is taken into account. For
-simplicity, only one type of measurement is considered: dynamically
-computed offered load, constant within trial measurement of
-predetermined trial duration.
-
-The main idea of the search apgorithm is to iterate trial measurements,
-using `Bayesian inference`_ to compute both the current estimate
-of "the throughput" and the next offered load to measure at.
-The computations are done in parallel with the trial measurements.
-
-The following algorithm makes an assumption that packet traffic
-generator detects duplicate packets on receive detection, and reports
-this as an error.
-
-Poisson distribution
-````````````````````
-
-For given offered load, number of packets lost during trial measurement
-is assumed to come from `Poisson distribution`_,
-each trial is assumed to be independent, and the (unknown) Poisson parameter
-(average number of packets lost per second) is assumed to be
-constant across trials.
-
-When comparing different offered loads, the average loss per second is
-assumed to increase, but the (deterministic) function from offered load
-into average loss rate is otherwise unknown. This is called "loss function".
-
-Given a target loss ratio (configurable), there is an unknown offered load
-when the average is exactly that. We call that the "critical load".
-If critical load seems higher than maximum offerable load, we should use
-the maximum offerable load to make search output more conservative.
-
-Side note: `Binomial distribution`_ is a better fit compared to Poisson
-distribution (acknowledging that the number of packets lost cannot be
-higher than the number of packets offered), but the difference tends to
-be relevant in loads far above the critical region, so using Poisson
-distribution helps the algorithm focus on critical region better.
-
-Of course, there are great many increasing functions (as candidates
-for loss function). The offered load has to be chosen for each trial,
-and the computed posterior distribution of critical load
-changes with each trial result.
-
-To make the space of possible functions more tractable, some other
-simplifying assumptions are needed. As the algorithm will be examining
-(also) loads close to the critical load, linear approximation to the
-loss function in the critical region is important.
-But as the search algorithm needs to evaluate the function also far
-away from the critical region, the approximate function has to be
-well-behaved for every positive offered load, specifically it cannot predict
-non-positive packet loss rate.
-
-Within this document, "fitting function" is the name for such a well-behaved
-function, which approximates the unknown loss function in the critical region.
-
-Results from trials far from the critical region are likely to affect
-the critical rate estimate negatively, as the fitting function does not
-need to be a good approximation there. Discarding some results,
-or "suppressing" their impact with ad-hoc methods (other than
-using Poisson distribution instead of binomial) is not used, as such
-methods tend to make the overall search unstable. We rely on most of
-measurements being done (eventually) within the critical region, and
-overweighting far-off measurements (eventually) for well-behaved fitting
-functions.
-
-Speaking about new trials, each next trial will be done at offered load
-equal to the current average of the critical load.
-Alternative methods for selecting offered load might be used,
-in an attempt to speed up convergence, but such methods tend to be
-scpecific for a particular system under tests.
-
-Fitting function coefficients distribution
-``````````````````````````````````````````
-
-To accomodate systems with different behaviours, the fitting function is
-expected to have few numeric parameters affecting its shape (mainly
-affecting the linear approximation in the critical region).
-
-The general search algorithm can use whatever increasing fitting
-function, some specific functions can described later.
-
-It is up to implementer to chose a fitting function and prior
-distribution of its parameters. The rest of this document assumes each
-parameter is independently and uniformly distributed over a common
-interval. Implementers are to add non-linear transformations into their
-fitting functions if their prior is different.
-
-Exit condition for the search is either critical load stdev
-becoming small enough, or overal search time becoming long enough.
-
-The algorithm should report both avg and stdev for critical load. If the
-reported averages follow a trend (without reaching equilibrium), avg and
-stdev should refer to the equilibrium estimates based on the trend, not
-to immediate posterior values.
-
-Integration
-```````````
-
-The posterior distributions for fitting function parameters will not be
-integrable in general.
-
-The search algorithm utilises the fact that trial measurement takes some
-time, so this time can be used for numeric integration (using suitable
-method, such as Monte Carlo) to achieve sufficient precision.
-
-Optimizations
-`````````````
-
-After enough trials, the posterior distribution will be concentrated in
-a narrow area of parameter space. The integration method should take
-advantage of that.
-
-Even in the concentrated area, the likelihood can be quite small, so the
-integration algorithm should track the logarithm of the likelihood, and
-also avoid underflow errors by other means.
+ + Stretch Function
+ + Erf Function
+
++ Bayesian Inference
+
+ + Prior Distribution
+ + Posterior Distribution
+
++ Numeric Integration
+
+ + Monte Carlo
+ + Importance Sampling
FD.io CSIT Implementation Specifics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -187,8 +84,8 @@ of the CPU the algorithm is able to use.
All those timing-related effects are addressed by arithmetically increasing
trial durations with configurable coefficients
-(currently 10.2 seconds for the first trial,
-each subsequent trial being 0.2 second longer).
+(currently 5.1 seconds for the first trial,
+each subsequent trial being 0.1 second longer).
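
As a minimal sketch of that schedule (the function name and signature are
illustrative, not the actual CSIT code):

.. code-block:: python

    def trial_duration(trial_index, first_duration=5.1, increment=0.1):
        """Return the duration (in seconds) of the zero-based trial_index-th trial."""
        return first_duration + increment * trial_index
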
Rounding errors and underflows
``````````````````````````````
@@ -197,9 +94,8 @@ In order to avoid them, the current implementation tracks natural logarithm
(instead of the original quantity) for any quantity which is never negative.
Logarithm of zero is minus infinity (not supported by Python),
so special value "None" is used instead.
-Specific functions for frequent operations
-(such as "logarithm of sum of exponentials")
-are defined to handle None correctly.
+Specific functions for frequent operations (such as "logarithm
+of sum of exponentials") are defined to handle None correctly.
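
A sketch of one such helper, assuming None stands for the logarithm of zero
(an illustration, not the exact CSIT utility):

.. code-block:: python

    import math

    def log_plus(first, second):
        """Return log(exp(first) + exp(second)), with None meaning log(0)."""
        if first is None:
            return second
        if second is None:
            return first
        if second > first:
            # Factor out the larger argument so exp() cannot overflow.
            first, second = second, first
        return first + math.log1p(math.exp(second - first))
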
Fitting functions
`````````````````
@@ -211,37 +107,9 @@ on top of randomness error reported by integrator.
Otherwise the reported stdev of critical rate estimate
is unrealistically low.
-Both functions are not only increasing, but convex
+Both functions are not only increasing, but also convex
(meaning the rate of increase is also increasing).
-As `primitive function`_ to any positive function is an increasing function,
-and primitive function to any increasing function is convex function;
-both fitting functions were constructed as double primitive function
-to a positive function (even though the intermediate increasing function
-is easier to describe).
-
-As not any function is integrable, some more realistic functions
-(especially with respect to behavior at very small offered loads)
-are not easily available.
-
-Both fitting functions have a "central point" and a "spread",
-varied by simply shifting and scaling (in x-axis, the offered load direction)
-the function to be doubly integrated.
-Scaling in y-axis (the loss rate direction) is fixed by the requirement of
-transfer rate staying nearly constant in very high offered loads.
-
-In both fitting functions (as they are a double primitive function
-to a symmetric function), the "central point" turns out
-to be equal to the aforementioned limiting transfer rate,
-so the fitting function parameter is named "mrr",
-the same quantity our Maximum Receive Rate tests are designed to measure.
-
-Both fitting functions return logarithm of loss rate,
-to avoid rounding errors and underflows.
-Parameters and offered load are not given as logarithms,
-as they are not expected to be extreme,
-and the formulas are simpler that way.
-
Both fitting functions have several mathematically equivalent formulas,
each can lead to an overflow or underflow in different places.
Overflows can be eliminated by using different exact formulas
@@ -252,24 +120,6 @@ At the end, both fitting function implementations
contain multiple "if" branches, discontinuities are a possibility
at range boundaries.
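
The same principle can be shown on a simpler quantity than the fitting
functions themselves. This hypothetical softplus helper (not taken from the
CSIT code) picks one of two mathematically equivalent formulas, each safe
in a different argument range:

.. code-block:: python

    import math

    def log_one_plus_exp(argument):
        """Return log(1 + exp(argument)) without overflowing exp()."""
        if argument > 0.0:
            # exp(argument) could overflow here, so factor the argument out first.
            return argument + math.log1p(math.exp(-argument))
        # Here exp(argument) <= 1, so no overflow is possible.
        return math.log1p(math.exp(argument))
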
-Offered load for next trial measurement is the average of critical rate estimate.
-During each measurement, two estimates are computed,
-even though only one (in alternating order) is used for next offered load.
-
-Stretch function
-----------------
-
-The original function (before applying logarithm) is primitive function
-to `logistic function`_.
-The name "stretch" is used for related function
-in context of neural networks with sigmoid activation function.
-
-Erf function
-------------
-
-The original function is double primitive function to `Gaussian function`_.
-The name "erf" comes from error function, the first primitive to Gaussian.
-
Prior distributions
```````````````````
@@ -298,37 +148,69 @@ with deliberately larger variance.
If the generated sample falls outside the (-1, 1) interval,
another sample is generated.
-The center and the variance for the biased distribution has three sources.
-First is a prior information. After enough samples are generated,
-the biased distribution is constructed from a mixture of two sources.
-Top 12 most weight samples, and all samples (the mix ratio is computed
-from the relative weights of the two populations).
-When integration (run along a particular measurement) is finished,
-the mixture bias distribution is used as the prior information
-for the next integration.
+The center and the covariance matrix for the biased distribution
+are based on the first and second moments of samples seen so far
+(within the computation), with the following additional features
+designed to avoid hyper-focused distributions.
+
+Each computation starts with the biased distribution inherited
+from the previous computation (a zero center point and a unit covariance
+matrix are used in the first computation), but the overall weight of the data
+is set to the weight of the first sample of the computation.
+Also, the center is set to the first sample point.
+When additional samples come, their weight (including the importance correction)
+is compared to the weight of the data seen so far (within the computation).
+If the new sample is more than one e-fold more impactful, both weight values
+(for the data so far and for the new sample) are set to the (geometric)
+average of the two weights. Finally, the actual sample generator uses
+a covariance matrix scaled up by a configurable factor (8.0 by default).
This combination showed the best behavior, as the integrator usually follows
-two phases. First phase (where the top 12 samples are dominating)
-is mainly important for locating the new area the posterior distribution
-is concentrated at. The second phase (dominated by whole sample population)
+two phases. The first phase (where the inherited biased distribution
+or single big samples dominate) is mainly important
+for locating the new area where the posterior distribution is concentrated.
+The second phase (dominated by the whole sample population)
is actually relevant for the critical rate estimation.
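
A schematic sketch of the weight dampening and the scaled-covariance sampling
described above (illustrative names, log-space weights, and numpy sampling;
the real implementation tracks the moments and weights incrementally inside
the integrator):

.. code-block:: python

    import numpy as np

    def dampen_weights(data_log_weight, sample_log_weight):
        """If the new sample is more than one e-fold more impactful than all
        data seen so far, replace both weights by their geometric average
        (arithmetic average in log space)."""
        if sample_log_weight > data_log_weight + 1.0:
            average = (data_log_weight + sample_log_weight) / 2.0
            return average, average
        return data_log_weight, sample_log_weight

    def sample_biased(center, covariance, scale=8.0, rng=None):
        """Draw a parameter sample from the biased bivariate Gaussian,
        with covariance scaled up and samples outside (-1, 1) rejected."""
        rng = np.random.default_rng() if rng is None else rng
        while True:
            sample = rng.multivariate_normal(center, scale * np.asarray(covariance))
            if np.all(np.abs(sample) < 1.0):
                return sample
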
+Offered load selection
+``````````````````````
+
+The first two measurements are hardcoded to happen at the middle of the rate
+interval and at max_rate. The next two measurements follow MRR-like logic:
+the offered load is decreased so that it would reach the target loss ratio
+if the decrease in offered load led to an equal decrease in loss rate.
+
+The rest of the measurements alternate between the erf and stretch estimate
+averages. One workaround is implemented, aimed at reducing the number of
+consecutive zero loss measurements (per fitting function). The workaround
+stores every measurement result whose loss ratio was the target loss ratio
+or higher. A sorted list (called lossy loads) of such results is maintained.
+
+When a sequence of one or more zero loss measurement results is encountered,
+the smallest of the lossy loads is drained from the list.
+If the estimate average is smaller than the drained value,
+a weighted average of this estimate and the drained value is used
+as the next offered load. The weight of the estimate decreases exponentially
+with the length of the run of consecutive zero loss results.
+
+This behavior helps the algorithm with convergence speed,
+as it does not need as many zero loss results to get near the critical region.
+Using the smallest (not yet drained) of the lossy loads makes it unlikely
+that the new offered load results in a large loss.
+Draining even when the estimate is large enough helps to discard
+early measurements where loss happened at too low an offered load.
+The current implementation adds 4 copies of each lossy load and drains 3
+of them, which leads to fairly stable behavior even for somewhat
+inconsistent SUTs.
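
A simplified sketch of this bookkeeping (illustrative class and names; the
decay base and the single-item drain step are assumptions, whereas the real
code adds 4 copies of each lossy load and drains 3 of them):

.. code-block:: python

    import bisect

    class LossyLoads:
        """Sorted list of offered loads that reached the target loss ratio."""

        def __init__(self, copies_added=4):
            self._loads = []
            self.copies_added = copies_added

        def add(self, load):
            for _ in range(self.copies_added):
                bisect.insort(self._loads, load)

        def drain(self):
            """Remove and return the smallest stored load, or None if empty."""
            return self._loads.pop(0) if self._loads else None

    def next_offered_load(estimate_average, lossy_loads, zero_loss_streak):
        """Blend the estimate with a drained lossy load; the estimate weight
        decays exponentially with the run of zero loss results."""
        drained = lossy_loads.drain()
        if drained is None or estimate_average >= drained:
            return estimate_average
        weight = 2.0 ** -zero_loss_streak  # decay base is an assumption
        return weight * estimate_average + (1.0 - weight) * drained
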
+
Caveats
```````
-Current implementation does not constrict the critical rate
-(as computed for every sample) to the min_rate, max_rate interval.
-
-Earlier implementations were targeting loss rate (as opposed to loss ratio).
-The chosen fitting functions do allow arbitrarily low loss ratios,
-but may suffer from rounding errors in corresponding parameter regions.
-Internal loss rate target is computed from given loss ratio
-using the current trial offered load, which increases search instability,
-especially if measurements with surprisingly high loss count appear.
-
As high loss count measurements add many bits of information,
they need a large amount of small loss count measurements to balance them,
-making the algorithm converge quite slowly.
+making the algorithm converge quite slowly. Typically, this happens
+when a few initial measurements suggest a much bigger spread
+than the later measurements do.
+The workaround in offered load selection helps,
+but more intelligent workarounds could achieve faster convergence still.
Some systems evidently do not follow the assumption of repeated measurements
having the same average loss rate (when offered load is the same).
@@ -338,78 +220,15 @@ as the observed trends have varied characteristics.
Probably, using more realistic fitting functions
will give better estimates than trend analysis.
-Graphical examples
-``````````````````
-
-The following pictures show the upper and lower bound (one sigma)
-on estimated critical rate, as computed by PLRsearch, after each trial measurement
-within the 30 minute duration of a test run.
-
-Both graphs are focusing on later estimates. Estimates computed from
-few initial measurements are wildly off the y-axis range shown.
-
-L2 patch
---------
-
-This test case shows quite narrow critical region. Both fitting functions
-give similar estimates, the graph shows the randomness of measurements,
-and a trend. Both fitting functions seem to be somewhat overestimating
-the critical rate. The final estimated interval is too narrow,
-a longer run would report estimates somewhat bellow the current lower bound.
-
-.. only:: latex
-
- .. raw:: latex
-
- \begin{figure}[H]
- \centering
- \graphicspath{{../_tmp/src/introduction/}}
- \includegraphics[width=0.90\textwidth]{PLR_patch}
- \label{fig:PLR_patch}
- \end{figure}
-
-.. only:: html
-
- .. figure:: PLR_patch.svg
- :alt: PLR_patch
- :align: center
-
-Vhost
------
-
-This test case shows quite broad critical region. Fitting functions give
-fairly differing estimates. One overestimates, the other underestimates.
-The graph mostly shows later measurements slowly bringing the estimates
-towards each other. The final estimated interval is too broad,
-a longer run would return a smaller interval within the current one.
-
-.. only:: latex
-
- .. raw:: latex
-
- \begin{figure}[H]
- \centering
- \graphicspath{{../_tmp/src/introduction/}}
- \includegraphics[width=0.90\textwidth]{PLR_vhost}
- \label{fig:PLR_vhost}
- \end{figure}
-
-.. only:: html
-
- .. figure:: PLR_vhost.svg
- :alt: PLR_vhost
- :align: center
+..
+ TODO: Add graphical examples created from 1904 data.
+.. _draft-vpolak-bmwg-plrsearch-01: https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-01
.. _plrsearch draft: https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-00
.. _RFC 2544: https://tools.ietf.org/html/rfc2544
-.. _Bayesian inference: https://en.wikipedia.org/wiki/Bayesian_statistics
-.. _Poisson distribution: https://en.wikipedia.org/wiki/Poisson_distribution
-.. _Binomial distribution: https://en.wikipedia.org/wiki/Binomial_distribution
-.. _primitive function: https://en.wikipedia.org/wiki/Antiderivative
-.. _logistic function: https://en.wikipedia.org/wiki/Logistic_function
-.. _Gaussian function: https://en.wikipedia.org/wiki/Gaussian_function
.. _Lomax distribution: https://en.wikipedia.org/wiki/Lomax_distribution
.. _reciprocal distribution: https://en.wikipedia.org/wiki/Reciprocal_distribution
.. _Monte Carlo: https://en.wikipedia.org/wiki/Monte_Carlo_integration
.. _importance sampling: https://en.wikipedia.org/wiki/Importance_sampling
.. _bivariate Gaussian: https://en.wikipedia.org/wiki/Multivariate_normal_distribution
+.. _binary search: https://en.wikipedia.org/wiki/Binary_search_algorithm