.. _trending_methodology:

Trending Methodology
====================

Overview
--------

// Below paragraph needs to be updated.
This document describes a high-level design of a system for continuous
performance measuring, trending and change detection for the FD.io VPP SW
data plane. It builds upon the existing FD.io CSIT framework with
extensions to its throughput testing methodology, the CSIT data analytics
engine (PAL – Presentation-and-Analytics-Layer) and the associated Jenkins
job definitions.

// Below paragraph needs to be updated.
The proposed design replaces the existing CSIT performance trending jobs
and tests with a new Performance Trending (PT) CSIT module and a separate
Performance Analysis (PA) module that ingests results from PT and
analyses, detects and reports any performance anomalies using
historical trending data and statistical metrics. PA also produces a
trending dashboard and graphs with summary and drill-down views across
all specified tests that can be reviewed and inspected regularly by the
FD.io developers and users community.

Performance Tests
-----------------

Performance trending relies on Maximum Receive Rate
(MRR) tests. MRR tests measure the packet forwarding rate under the
maximum load offered by the traffic generator over a set trial duration,
regardless of packet loss. The maximum load for a specified Ethernet frame
size is set to the bi-directional link rate.

Current parameters for performance trending MRR tests:

- **Ethernet frame sizes**: 64B (78B for IPv6 tests) for all tests, IMIX for
  selected tests (vhost, memif); all quoted sizes include the frame CRC, but
  exclude the per-frame transmission overhead of 20B (preamble, inter-frame
  gap).
- **Maximum load offered**: 10GE and 40GE link (sub-)rates depending on NIC
  tested, with the actual packet rate depending on frame size,
  transmission overhead and traffic generator NIC forwarding capacity.

  - For 10GE NICs the maximum packet rate load is 2 x 14.88 Mpps for 64B,
    a 10GE bi-directional link rate.
  - For 40GE NICs the maximum packet rate load is 2 x 18.75 Mpps for 64B,
    a 40GE bi-directional link sub-rate limited by the packet forwarding
    capacity of the 2-port 40GE NIC model (XL710) used on the T-Rex Traffic
    Generator.

- **Trial duration**: 1 sec.
- **Number of trials per test**: 10.
- **Test execution frequency**: twice a day, every 12 hrs (02:00,
  14:00 UTC).

Note: MRR tests should report the bi-directional link rate (or NIC
rate, if lower) whenever the tested VPP configuration can handle a packet
rate higher than the bi-directional link rate, e.g. in large packet tests
and/or multi-core tests. In other words, MRR = min(VPP rate, bi-dir link
rate, NIC rate).
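
As a rough illustration of how these limits combine, the following sketch
(not part of the CSIT code; the VPP rate value is a placeholder) derives
the per-direction packet rate from the link rate and frame size and applies
the min() relation above:

.. code-block:: python

   def max_pps(link_rate_bps, frame_size_bytes, overhead_bytes=20):
       """Maximum packets per second one direction of a link can carry.

       The 20B overhead covers the preamble and inter-frame gap noted in
       the frame size description above.
       """
       bits_on_wire = (frame_size_bytes + overhead_bytes) * 8
       return link_rate_bps / bits_on_wire

   # 64B frames on 10GE: ~14.88 Mpps per direction, 2 x 14.88 Mpps bi-directionally.
   link_limit = 2 * max_pps(10e9, 64)

   # Reported MRR is bounded by VPP, the link and the TG NIC
   # (the VPP rate here is a placeholder, not a measurement).
   vpp_rate = 12.0e6
   nic_rate = 2 * 18.75e6
   mrr = min(vpp_rate, link_limit, nic_rate)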

Trend Analysis
--------------

All measured performance trend data is treated as time-series data that
can be modelled as a concatenation of groups, each group modelled
using a normal distribution. While sometimes the samples within a group
are far from being normally distributed, currently we do not have a
better tractable model.

The group boundaries are selected based on `Minimum Description Length`_.

Minimum Description Length
--------------------------

`Minimum Description Length`_ (MDL) is a particular formalization
of `Occam's razor`_ principle.

The general formulation mandates evaluating a large set of models,
but for anomaly detection purposes it is useful to consider
a smaller set of models, so that scoring and comparing them is easier.

For each candidate model, the data should be compressed losslessly;
the compressed message includes the model definition, the encoded model
parameters, and the raw data encoded based on probabilities computed by
the model. The model resulting in the shortest compressed message is
deemed "the" correct model.

For our model set (groups of normally distributed samples),
we need to encode group length (which penalizes too many groups),
group average (more on that later), group stdev and then all the samples.

Luckily, the "all the samples" part turns out to be quite easy to compute.
If the sample values are considered as coordinates in (multi-dimensional)
Euclidean space, fixing the stdev means the point with allowed coordinates
lies on a sphere. Fixing the average intersects the sphere with a
(hyper-)plane, and the Gaussian probability density on the resulting sphere
is constant. So the only contribution is the "area" of the sphere, which
depends only on the number of samples and the stdev.
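
A minimal sketch of this per-group sample cost, assuming the standard
Gaussian negative log-likelihood with a quantization step equal to the
measurement precision (an illustration only, not the exact CSIT PAL
encoding, and not covering the group size, average and stdev parts):

.. code-block:: python

   import math

   def group_data_bits(samples, precision=1.0):
       """Bits needed to encode the samples of one group, assuming they
       are normally distributed around the group average."""
       count = len(samples)
       avg = sum(samples) / count
       var = sum((value - avg) ** 2 for value in samples) / count
       stdev = math.sqrt(var) or precision  # avoid log(0) for constant groups
       # -log2 of the Gaussian density integrated over bins of width
       # `precision`; depends only on the sample count, stdev and
       # precision, matching the "area of the sphere" argument above.
       nll_bits = count * math.log2(stdev * math.sqrt(2 * math.pi * math.e))
       return nll_bits - count * math.log2(precision)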

A somewhat ambiguous part is choosing which encoding
is used for the group size, average and stdev.
Different encodings introduce different biases towards large or small values.
In our implementation we have chosen a probability density
corresponding to a uniform distribution (from zero to the maximal sample value)
for the stdev and for the average of the first group,
but for the averages of subsequent groups we have chosen a distribution
which discourages delimiting groups with averages close together.

// Below paragraph needs to be updated.
One part of our implementation which is not precise enough
is the handling of measurement precision.
The minimal difference in MRR values is currently 0.1 pps
(the difference of one packet over a 10 second trial),
but the code assumes the precision is 1.0.
Also, all the calculations assume 1.0 is totally negligible
compared to the stdev value.

The group selection algorithm currently has no parameters;
all the aforementioned encodings and the handling of precision are hardcoded.
In principle, every group selection is examined, and the one encodable
with the least amount of bits is selected.
As the bit amount for a selection is just the sum of the bits for every group,
finding the best selection requires a number of group evaluations
growing quadratically with the size of the data,
the overall time complexity being roughly cubic.
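
A minimal sketch of this search, assuming a ``group_bits()`` helper
(a placeholder here for the full per-group encoding of size, average,
stdev and samples), can use dynamic programming over prefix boundaries;
the quadratic number of ``group_bits()`` calls, each linear in the group
size, gives the roughly cubic cost mentioned above:

.. code-block:: python

   def best_grouping(samples, group_bits):
       """Return the grouping of samples with the minimal total bit count."""
       count = len(samples)
       # best_cost[i] is the minimal bit count for encoding samples[:i].
       best_cost = [0.0] + [float("inf")] * count
       best_split = [0] * (count + 1)
       for end in range(1, count + 1):
           for start in range(end):
               cost = best_cost[start] + group_bits(samples[start:end])
               if cost < best_cost[end]:
                   best_cost[end] = cost
                   best_split[end] = start
       # Reconstruct the group boundaries from the recorded split points.
       groups, end = [], count
       while end > 0:
           start = best_split[end]
           groups.insert(0, samples[start:end])
           end = start
       return groups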

The resulting group distribution looks good
if samples are distributed normally enough within a group.
But for obviously different distributions (for example `bimodal distribution`_)
the groups tend to focus on less relevant factors (such as "outlier" density).

Anomaly Detection
`````````````````

Once the trend data is divided into groups, each group has its population average.
The start of the following group is marked as a regression (or progression)
if the new group's average is lower (higher) than the previous group's.
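
Expressed as a sketch (illustrative only, not the PAL code itself), the
classification is a direct comparison of consecutive group averages:

.. code-block:: python

   def classify_group_starts(groups):
       """Label the start of each group relative to the previous group."""
       labels = ["normal"]  # the first group has nothing to compare against
       for previous, current in zip(groups, groups[1:]):
           previous_avg = sum(previous) / len(previous)
           current_avg = sum(current) / len(current)
           if current_avg < previous_avg:
               labels.append("regression")
           elif current_avg > previous_avg:
               labels.append("progression")
           else:
               labels.append("normal")
       return labels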

Trend Compliance
````````````````

Trend compliance metrics are targeted to provide an indication of trend
changes over the short term (i.e. weekly) and the long term (i.e.
quarterly), comparing the last group average, AVG[last], to the one from
a week ago, AVG[last - 1week], and to the maximum of trend values over
the last quarter excluding the last week,
max(AVG[last - 3mths]..AVG[last - 1week]), respectively. This results in
the following trend compliance calculations:

// Below table needs to be updated.
+-------------------------+---------------------------------+-----------+-------------------------------------------+
| Trend Compliance Metric | Trend Change Formula            | Value     | Reference                                 |
+=========================+=================================+===========+===========================================+
| Short-Term Change       | (Value - Reference) / Reference | AVG[last] | AVG[last - 1week]                         |
+-------------------------+---------------------------------+-----------+-------------------------------------------+
| Long-Term Change        | (Value - Reference) / Reference | AVG[last] | max(AVG[last - 3mths]..AVG[last - 1week]) |
+-------------------------+---------------------------------+-----------+-------------------------------------------+
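
A small sketch of these calculations, with hypothetical averages used
purely for illustration:

.. code-block:: python

   def relative_change(value, reference):
       """(Value - Reference) / Reference, as in the table above."""
       return (value - reference) / reference

   # Hypothetical group averages in Mpps, for illustration only.
   avg_last = 14.2
   avg_last_week = 14.8
   quarter_avgs_excl_last_week = [13.9, 14.8, 15.1]

   short_term_change = relative_change(avg_last, avg_last_week)
   long_term_change = relative_change(avg_last, max(quarter_avgs_excl_last_week))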

Trend Presentation
------------------

Performance Dashboard
`````````````````````

Dashboard tables list a summary of per test case VPP MRR performance
trend and trend compliance metrics, and the number of detected anomalies.

Separate tables are generated for the tested VPP worker-thread-core
combinations (1t1c, 2t2c, 4t4c). Test case names are linked to the
respective trending graphs for ease of navigation through the test data.

Trendline Graphs
````````````````

Trendline graphs show the measured MRR throughput values per test case with
the associated group averages. The graphs are constructed as follows:

- X-axis represents performance trend job build Id (csit-vpp-perf-mrr-
  daily-master-build).
- Y-axis represents MRR throughput in Mpps.
- Markers to indicate anomaly classification:

  - Regression - red circle.
  - Progression - green circle.

- The line shows the average of each group.

In addition, the graphs show dynamic labels while hovering over graph
data points, representing (trend job build Id, MRR value) and the actual
VPP build number (b<XXX>) tested.


Jenkins Jobs
------------

Performance Trending (PT)
`````````````````````````

CSIT PT runs regular performance test jobs measuring and collecting MRR
data per test case. PT is designed as follows:

1. PT job triggers:

   a) Periodic, e.g. daily.
   b) On-demand gerrit triggered.

2. Measurements and data calculations per test case:

   a) Maximum Receive Rate (MRR) - send packets at link rate over a trial
      period, count the total received packets, divide by the trial period
      (a sketch of this calculation follows the list below).

3. Archive MRR per test case.
4. Archive all counters collected at MRR.
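
A sketch of the per-trial MRR calculation referenced in step 2a (the
packet count below is a placeholder, not a measurement):

.. code-block:: python

   def trial_rate_mpps(total_received_packets, trial_duration_s):
       """Receive rate of one trial, in Mpps."""
       return total_received_packets / trial_duration_s / 1e6

   # One 1-second trial at the maximum offered load.
   rate = trial_rate_mpps(9100000, 1.0)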

Performance Analysis (PA)
`````````````````````````

CSIT PA runs performance analysis, including trendline calculation, trend
compliance and anomaly detection, using the specified trend analysis
metrics over a rolling window of the last <N> sets of historical
measurement data. PA is defined as follows:

1. PA job triggers:

   a) By PT job at its completion.
   b) On-demand gerrit triggered.

2. Download and parse archived historical data and the new data:

   a) Download RF output.xml files from latest PT job and compressed
      archived data.
   b) Parse out the data filtering test cases listed in PA specification
      (part of CSIT PAL specification file).

3. Re-calculate new groups and their averages.

4. Evaluate new test data:

   a) If the existing group is prolonged => Result = Pass,
      Reason = Normal (to be updated based on the final Jenkins code).
   b) If a new group is detected with a lower average => Result = Fail,
      Reason = Regression.
   c) If a new group is detected with a higher average => Result = Pass,
      Reason = Progression.

5. Generate and publish results:

   a) Relay the evaluation result to the job result (to be updated based
      on the final Jenkins code).
   b) Generate a new set of trend summary dashboard and graphs.
   c) Publish trend dashboard and graphs in html format on
      https://docs.fd.io/.

Testbed HW configuration
------------------------

The testbed HW configuration is described on
`this FD.io wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed#FD.IO_CSIT_testbed_-_Server_HW_Configuration>`_.

.. _Minimum Description Length: https://en.wikipedia.org/wiki/Minimum_description_length
.. _Occam's razor: https://en.wikipedia.org/wiki/Occam%27s_razor
.. _bimodal distribution: https://en.wikipedia.org/wiki/Bimodal_distribution