author | Vratko Polak <vrpolak@cisco.com> | 2022-01-28 11:06:00 +0100 |
---|---|---|
committer | Vratko Polak <vrpolak@cisco.com> | 2022-01-28 11:06:00 +0100 |
commit | 13306993f1f70284e80439981106c16bfad3f036 (patch) | |
tree | a37daad2e2cbc1126711f156dd79e53f7411b9e8 /docs | |
parent | c33444749e1a5ced42234437d92641f1a97c8375 (diff) | |
Doc: Add discussion about common anomaly patterns
Change-Id: Ic70c653cb660663abe48c2a0ae3e496a4e8c85f3
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
Diffstat (limited to 'docs')
-rw-r--r-- | docs/cpta/methodology/trend_analysis.rst | 141 |
1 file changed, 136 insertions, 5 deletions
diff --git a/docs/cpta/methodology/trend_analysis.rst b/docs/cpta/methodology/trend_analysis.rst
index 5b9ebd352d..5a48136c9b 100644
--- a/docs/cpta/methodology/trend_analysis.rst
+++ b/docs/cpta/methodology/trend_analysis.rst
@@ -2,7 +2,7 @@ Trend Analysis
 ^^^^^^^^^^^^^^
 
 All measured performance trend data is treated as time-series data
-that is modelled as a concatenation of groups,
+that is modeled as a concatenation of groups,
 within each group the samples come (independently) from
 the same normal distribution (with some center and standard deviation).
@@ -17,7 +17,7 @@ Trend Compliance
 .. _Trend_Compliance:
 
 Trend compliance metrics are targeted to provide an indication of trend
-changes, and hint at their reliability.
+changes, and hint at their reliability (see Common Patterns below).
 There is a difference between compliance metric names used in this document,
 and column names used in :ref:`Dashboard` tables and Alerting emails.
@@ -58,7 +58,7 @@ Caveats
 Obviously, if the result history is too short, the true Trend[t] value
 may not be available. We use the earliest Trend available instead.
 
-The current implementaton does not track time of the samples,
+The current implementation does not track time of the samples,
 it counts runs instead.
 For "- 1week" we use "10 runs ago, 5 runs for topo-arch with 1 TB",
 for "- 3mths" we use "180 days or 180 runs ago, whatever comes first".
@@ -126,7 +126,7 @@ In our implementation we have chosen probability density corresponding
 to uniform distribution (from zero to maximal sample value)
 for stdev and average of the first group,
 but for averages of subsequent groups we have chosen a distribution
-which disourages delimiting groups with averages close together.
+which discourages delimiting groups with averages close together.
 
 Our implementation assumes that measurement precision is 1.0 pps.
 Thus it is slightly wrong for trial durations other than 1.0 seconds.
@@ -134,7 +134,7 @@ Also, all the calculations assume 1.0 pps is totally negligible,
 compared to stdev value.
 
 The group selection algorithm currently has no parameters,
-all the aforementioned encodings and handling of precision is hardcoded.
+all the aforementioned encodings and handling of precision is hard-coded.
 In principle, every group selection is examined,
 and the one encodable with least amount of bits is selected.
 As the bit amount for a selection is just sum of bits for every group,
@@ -147,6 +147,137 @@ if samples are distributed normally enough within a group.
 But for obviously different distributions (for example `bimodal distribution`_)
 the groups tend to focus on less relevant factors (such as "outlier" density).
 
+Common Patterns
+~~~~~~~~~~~~~~~
+
+When an anomaly is detected, it frequently falls into one of a few known
+patterns, each having its typical behavior over time.
+
+We are going to describe these behaviors,
+as they motivate our choice of trend compliance metrics.
+
+Sample time and analysis time
+-----------------------------
+
+But first we need to distinguish the two roles time plays in the analysis,
+so it is clear which role we are referring to.
+
+Sample time is the more obvious one.
+It is the time the sample is generated.
+It is the start time or the end time of the Jenkins job run;
+it does not really matter which (parallel runs are disabled,
+and the length of the gap between samples does not affect the metrics).
+
+Analysis time is the time the current analysis is computed.
+Again, the exact time does not usually matter;
+what matters is how many later (and how many fewer earlier) samples
+were considered in the computation.
+
+For some patterns, it is usual for a previously reported
+anomaly to "vanish", or for a previously unseen anomaly to "appear late",
+as later samples change which partition into groups is more probable.
+
+The dashboard and graphs always show the latest analysis time;
+the compliance metrics use an earlier sample time
+with the same latest analysis time.
+
+Alerting e-mails use the latest analysis time at the time of sending,
+so the values reported there are likely to differ
+from the later analysis time results shown in the dashboard and graphs.
+
+Ordinary regression
+-------------------
+
+The real performance changes from a previously stable value
+to a new stable value.
+
+For a change of medium to high magnitude, one run
+is enough for anomaly detection to mark this regression.
+
+Ordinary progressions are detected in the same way.
+
+Small regression
+----------------
+
+The real performance changes from a previously stable value
+to a new stable value, but the difference is small.
+
+For the anomaly detection algorithm, this change is harder to detect,
+depending on the standard deviation of the previous group.
+
+If the new performance value stays stable, the detection algorithm
+is eventually able to detect this anomaly,
+once there are enough samples around the new value.
+
+If the difference is too small, it may remain undetected
+(as a new performance change happens, or the full history of samples
+is still not enough for detection).
+
+Small progressions show the same behavior.
+
+Reverted regression
+-------------------
+
+This pattern can have two different causes.
+We would like to distinguish them, but that is usually
+not possible just by looking at the measured values (without telemetry).
+
+Under one cause, the real DUT performance has changed,
+but got restored immediately.
+Under the other cause, no real performance change happened;
+some temporary infrastructure issue
+has just caused a wrongly low value to be measured.
+
+For small measured changes, this pattern may remain undetected.
+For medium and big measured changes, this is detected when the regression
+happens on just the last sample.
+
+For big changes, the revert is also immediately detected
+as a subsequent progression. The trend is usually different
+from the previously stable trend (as the two population averages
+are not likely to be exactly equal), but the difference
+between the two trends is relatively small.
+
+For medium changes, the detection algorithm may need several new samples
+to detect a progression (as it dislikes single sample groups),
+in the meantime reporting regressions (with the difference decreasing
+as analysis time progresses), until it stabilizes the same way
+as for big changes (regression followed by progression, small difference
+between the old stable trend and the last trend).
+
+As it is very hard for faulty code or an infrastructure issue
+to increase performance, the opposite pattern (a temporary progression)
+almost never happens.
+
+Summary
+-------
+
+There is a trade-off between detecting small regressions
+and not reporting the same old regressions for a long time.
+
+For people reading e-mails, a suddenly reported regression with a big
+number of samples in the last group means this regression
+was hard for the algorithm to detect.
+
+If there is a big regression with just one run in the last group,
+we are not sure whether it is real or just a temporary issue.
+It is useful to wait some time before starting an investigation.
+
+With a decreasing (absolute value of) difference, the expected number
+of runs needed increases. If there are not enough runs, we still cannot
+distinguish a real regression from a temporary one just from the current
+metrics (although humans frequently can tell by looking at the graph).
+
+When there is a regression or progression with just a small difference,
+it is probably an artifact of a temporary regression.
+It is not worth examining, unless temporary regressions happen
+somewhat frequently.
+
+It is not easy for the metrics to locate the previous stable value,
+especially if multiple anomalies happened in the last few weeks.
+It is good to compare the last trend with the long-term trend maximum,
+as it highlights the difference between "now" and "what could be".
+It is good to exclude the last week from the trend maximum,
+as including the last week would hide all real progressions.
+
 .. _Minimum Description Length: https://en.wikipedia.org/wiki/Minimum_description_length
 .. _Occam's razor: https://en.wikipedia.org/wiki/Occam%27s_razor
 .. _bimodal distribution: https://en.wikipedia.org/wiki/Bimodal_distribution
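The minimum-description-length group selection that the diff's context lines describe ("every group selection is examined, and the one encodable with least amount of bits is selected") can be illustrated with a toy sketch. This is not the project's actual implementation: the costs below (a flat per-group parameter cost plus a quantized Gaussian code length, with the documented 1.0 pps precision) are simplified stand-ins for the priors the documentation describes, and all names are invented for illustration.

```python
import math
from itertools import combinations
from typing import List, Sequence, Tuple

PRECISION = 1.0  # measurement precision assumed by the documentation (1.0 pps)


def group_bits(samples: Sequence[float], value_range: float) -> float:
    """Toy code length (in bits) for one group: parameter cost plus data cost."""
    n = len(samples)
    avg = sum(samples) / n
    var = sum((x - avg) ** 2 for x in samples) / n
    stdev = max(math.sqrt(var), PRECISION)  # keep the density finite
    # Cost of encoding the group's average and stdev at the given precision.
    bits = 2.0 * math.log2(max(value_range / PRECISION, 2.0))
    # Cost of encoding each sample as a quantized draw from N(avg, stdev).
    for x in samples:
        density = math.exp(-((x - avg) ** 2) / (2.0 * stdev ** 2)) / (
            stdev * math.sqrt(2.0 * math.pi)
        )
        bits -= math.log2(max(density * PRECISION, 1e-12))
    return bits


def best_partition(
    samples: List[float], max_groups: int = 4
) -> Tuple[float, List[List[float]]]:
    """Examine every partition into consecutive groups, return the cheapest."""
    n = len(samples)
    value_range = max(samples) or PRECISION
    best_bits, best_groups = float("inf"), [list(samples)]
    for k in range(1, max_groups + 1):
        for cuts in combinations(range(1, n), k - 1):
            bounds = (0, *cuts, n)
            groups = [samples[a:b] for a, b in zip(bounds, bounds[1:])]
            bits = sum(group_bits(g, value_range) for g in groups)
            if bits < best_bits:
                best_bits, best_groups = bits, groups
    return best_bits, best_groups


if __name__ == "__main__":
    # A clear jump from ~100 to ~90 is cheapest to encode as two groups;
    # the per-group parameter cost keeps the toy from overfitting noise.
    history = [100.0, 101.0, 99.0, 100.0, 90.0, 91.0, 89.0, 90.0]
    bits, groups = best_partition(history)
    print(f"{bits:.1f} bits: {groups}")
```

The per-group parameter cost is what makes this an Occam's razor: splitting off a group only pays when the tighter fit saves more data bits than the extra parameters cost.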
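The compliance metrics that the new Summary section motivates compare the last trend against earlier trends, counting runs rather than days. Here is a minimal sketch of such a comparison under the caveats quoted above (10 runs for "- 1week", 180 runs for "- 3mths", ignoring the per-topology variation); the function and field names are invented and do not match the exact Dashboard column definitions.

```python
from typing import Dict, List


def compliance_sketch(
    trends: List[float], week_runs: int = 10, long_runs: int = 180
) -> Dict[str, float]:
    """Compare the last trend against earlier trends, counting runs, not days.

    trends: per-run trend averages, oldest first, as produced by group selection.
    """
    last = trends[-1]
    # "- 1week": the trend as it was 10 runs ago
    # (the earliest trend if history is too short).
    week_ago = trends[-week_runs - 1] if len(trends) > week_runs else trends[0]
    # Long-term window, excluding the last week, so the maximum highlights
    # the difference between "now" and "what could be" without recent
    # progressions hiding themselves.
    window = trends[-long_runs:-week_runs] or trends[:1]
    long_max = max(window)
    return {
        "diff_vs_1week": (last - week_ago) / week_ago,
        "diff_vs_longterm_max": (last - long_max) / long_max,
    }


# Example: a small regression six runs ago shows up against both references.
print(compliance_sketch([10.0] * 20 + [9.5] * 6))
```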