Diffstat (limited to 'resources')

 resources/tools/presentation/doc/pal_lld.rst     | 116
 resources/tools/presentation/generator_tables.py | 175
 resources/tools/presentation/requirements.txt    |   1
 resources/tools/presentation/specification.yaml  |  81
 4 files changed, 354 insertions(+), 19 deletions(-)
diff --git a/resources/tools/presentation/doc/pal_lld.rst b/resources/tools/presentation/doc/pal_lld.rst
index 027d6b326a..64bde3e5fe 100644
--- a/resources/tools/presentation/doc/pal_lld.rst
+++ b/resources/tools/presentation/doc/pal_lld.rst
@@ -1109,8 +1109,9 @@ For example, the element which specification includes:
filter:
- "'64B' and 'BASE' and 'NDRDISC' and '1T1C' and ('L2BDMACSTAT' or 'L2BDMACLRN' or 'L2XCFWD') and not 'VHOST'"
-will be constructed using data from the job "csit-vpp-perf-1707-all", for all listed
-builds and the tests with the list of tags matching the filter conditions.
+will be constructed using data from the job "csit-vpp-perf-1707-all", for all
+listed builds and the tests with the list of tags matching the filter
+conditions.
The output data structure for filtered test data is:
@@ -1189,6 +1190,83 @@ Subset of existing performance tests is covered by TSA graphs.
"plot-throughput-speedup-analysis"
+Comparison of results from two sets of the same test executions
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+This algorithm enables comparison of results coming from two sets of
+executions of the same tests. It is used to quantify performance changes
+across all tests after test environment changes, e.g. operating system
+upgrades/patches or hardware changes.
+
+It is assumed that each set of test executions includes multiple runs
+of the same tests, 10 or more, to verify the repeatability of test
+results and to yield statistically meaningful result data.
+
+Comparison results are presented in a table listing a specified number
+of the best and the worst relative changes between the two sets. The
+following table columns are defined:
+
+  - name of the test;
+  - mean value of the throughput of the reference set;
+  - standard deviation of the throughput of the reference set;
+  - mean value of the throughput of the set to compare;
+  - standard deviation of the throughput of the set to compare;
+  - relative change of the mean values.
+
+**The model**
+
+The model specifies:
+
+  - type: "table" - means this section defines a table.
+  - title: Title of the table.
+  - algorithm: Algorithm which is used to generate the table. The other
+    parameters in this section must provide all information needed by the
+    used algorithm.
+  - output-file-ext: Extension of the output file.
+  - output-file: File which the table will be written to.
+  - reference: The builds used as the reference for the comparison.
+  - compare: The builds which are compared to the reference.
+  - data: Specify the sources, jobs and builds, providing data for
+    generating the table.
+  - filter: Filter applied to the input data, based on tags; if a
+    "template" is used, filtering is based on the template.
+  - parameters: Only these parameters will be put into the output data
+    structure.
+  - nr-of-tests-shown: Number of the best and the worst tests presented
+    in the table. Use 0 (zero) to present all tests.
+
+*Example:*
+
+::
+
+ -
+ type: "table"
+ title: "Performance comparison"
+ algorithm: "table_performance_comparison"
+ output-file-ext: ".csv"
+ output-file: "{DIR[DTR,PERF,VPP,IMPRV]}/vpp_performance_comparison"
+ reference:
+ title: "csit-vpp-perf-1801-all - 1"
+ data:
+ csit-vpp-perf-1801-all:
+ - 1
+ - 2
+ compare:
+ title: "csit-vpp-perf-1801-all - 2"
+ data:
+ csit-vpp-perf-1801-all:
+ - 1
+ - 2
+ data:
+ "vpp-perf-comparison"
+ filter: "all"
+ parameters:
+ - "name"
+ - "parent"
+ - "throughput"
+ nr-of-tests-shown: 20
+
+
Advanced data analytics
```````````````````````
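
The mean, standard deviation and relative change columns described above can
be reproduced with a few lines of Python. Below is a minimal illustrative
sketch (not part of this patch); the helper names mirror the mean, stdev and
relative_change utilities the generator code is assumed to use, and the
sample values are hypothetical::

    import math

    def mean(items):
        """Arithmetic mean of a list of numbers."""
        return float(sum(items)) / len(items)

    def stdev(items):
        """Population standard deviation of a list of numbers."""
        avg = mean(items)
        return math.sqrt(sum((i - avg) ** 2 for i in items) / len(items))

    def relative_change(nr1, nr2):
        """Relative change of nr2 against nr1, in percent."""
        return float(nr2 - nr1) / nr1 * 100

    # Hypothetical throughput samples [pps] from repeated runs:
    ref_data = [12.1e6, 12.3e6, 11.9e6]  # reference set
    cmp_data = [11.2e6, 11.4e6, 11.0e6]  # set to compare

    print("ref: {0} +- {1} Mpps".format(round(mean(ref_data) / 1e6, 2),
                                        round(stdev(ref_data) / 1e6, 2)))
    print("change: {0} %".format(int(relative_change(mean(ref_data),
                                                     mean(cmp_data)))))
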
@@ -1216,7 +1294,8 @@ Tables
- tables are generated by algorithms implemented in PAL, the model includes the
algorithm and all necessary information.
- output format: csv
- - generated tables are stored in specified directories and linked to .rst files.
+ - generated tables are stored in specified directories and linked to .rst
+ files.
Plots
@@ -1232,8 +1311,8 @@ Report generation
-----------------
Report is generated using Sphinx and Read_the_Docs template. PAL generates html
-and pdf formats. It is possible to define the content of the report by specifying
-the version (TODO: define the names and content of versions).
+and pdf formats. It is possible to define the content of the report by
+specifying the version (TODO: define the names and content of versions).
The process
@@ -1251,12 +1330,13 @@ The process
5. Generate the report.
6. Store the report (Nexus).
-The process is model driven. The elements’ models (tables, plots, files and
-report itself) are defined in the specification file. Script reads the elements’
-models from specification file and generates the elements.
+The process is model driven. The elements' models (tables, plots, files
+and the report itself) are defined in the specification file. The script
+reads these models from the specification file and generates the elements.
-It is easy to add elements to be generated, if a new kind of element is
-required, only a new algorithm is implemented and integrated.
+It is easy to add elements to be generated in the report. If a new type
+of element is required, only a new algorithm needs to be implemented and
+integrated.
API
@@ -1396,12 +1476,12 @@ PAL functional diagram
How to add an element
`````````````````````
-Element can be added by adding its model to the specification file. If the
-element will be generated by an existing algorithm, only its parameters must be
-set.
+An element can be added by adding its model to the specification file.
+If the element is to be generated by an existing algorithm, only its
+parameters must be set.
-If a brand new type of element will be added, also the algorithm must be
-implemented.
-The algorithms are implemented in the files which names start with "generator".
-The name of the function implementing the algorithm and the name of algorithm in
-the specification file had to be the same.
+If a brand new type of element needs to be added, the algorithm must be
+implemented as well. Element generation algorithms are implemented in
+files whose names start with the "generator" prefix. The name of the
+function implementing the algorithm and the name of the algorithm in the
+specification file have to be the same.
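
The requirement above (the algorithm name in the specification file equals
the name of the function implementing it) allows the generator to be resolved
dynamically. Below is a hedged sketch of such a dispatch with a hypothetical
registry; the real PAL wiring may differ::

    import logging

    def table_performance_comparison(table, input_data):
        """Hypothetical generator; real ones live in generator_*.py files."""
        logging.info("Generating table: {0}".format(table["title"]))

    # Registry keyed by the "algorithm" string from the specification file:
    GENERATORS = {
        "table_performance_comparison": table_performance_comparison,
    }

    def generate_element(model, input_data):
        """Look up the element's generator by name and run it."""
        try:
            GENERATORS[model["algorithm"]](model, input_data)
        except KeyError:
            logging.error("Unknown algorithm: {0}".format(model["algorithm"]))
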
diff --git a/resources/tools/presentation/generator_tables.py b/resources/tools/presentation/generator_tables.py
index 3bb30b5ab8..71ec431e6a 100644
--- a/resources/tools/presentation/generator_tables.py
+++ b/resources/tools/presentation/generator_tables.py
@@ -16,6 +16,11 @@
 import logging
+import csv
+import prettytable
+
+from utils import mean, stdev, relative_change
+
from string import replace
from errors import PresentationError
@@ -64,7 +69,6 @@ def table_details(table, input_data):
# Generate the data for the table according to the model in the table
# specification
-
job = table["data"].keys()[0]
build = str(table["data"][job][0])
try:
@@ -331,3 +335,155 @@ def _read_csv_template(file_name):
return tmpl_data
except IOError as err:
raise PresentationError(str(err), level="ERROR")
+
+
+def table_performance_comparison(table, input_data):
+    """Generate the table(s) with algorithm: table_performance_comparison
+ specified in the specification file.
+
+ :param table: Table to generate.
+ :param input_data: Data to process.
+ :type table: pandas.Series
+ :type input_data: InputData
+ """
+
+ # Transform the data
+ data = input_data.filter_data(table)
+
+ # Prepare the header of the tables
+ try:
+ header = ["Test case",
+ "{0} Throughput [Mpps]".format(table["reference"]["title"]),
+ "{0} stdev [Mpps]".format(table["reference"]["title"]),
+ "{0} Throughput [Mpps]".format(table["compare"]["title"]),
+ "{0} stdev [Mpps]".format(table["compare"]["title"]),
+ "Change [%]"]
+ header_str = ",".join(header) + "\n"
+ except (AttributeError, KeyError) as err:
+ logging.error("The model is invalid, missing parameter: {0}".
+ format(err))
+ return
+
+ # Prepare data to the table:
+ tbl_dict = dict()
+ for job, builds in table["reference"]["data"].items():
+ for build in builds:
+ for tst_name, tst_data in data[job][str(build)].iteritems():
+ if tbl_dict.get(tst_name, None) is None:
+ name = "{0}-{1}".format(tst_data["parent"].split("-")[0],
+ "-".join(tst_data["name"].
+ split("-")[1:]))
+ tbl_dict[tst_name] = {"name": name,
+ "ref-data": list(),
+ "cmp-data": list()}
+ tbl_dict[tst_name]["ref-data"].\
+ append(tst_data["throughput"]["value"])
+
+ for job, builds in table["compare"]["data"].items():
+ for build in builds:
+ for tst_name, tst_data in data[job][str(build)].iteritems():
+ tbl_dict[tst_name]["cmp-data"].\
+ append(tst_data["throughput"]["value"])
+
+ tbl_lst = list()
+ for tst_name in tbl_dict.keys():
+ item = [tbl_dict[tst_name]["name"], ]
+ if tbl_dict[tst_name]["ref-data"]:
+ item.append(round(mean(tbl_dict[tst_name]["ref-data"]) / 1000000,
+ 2))
+ item.append(round(stdev(tbl_dict[tst_name]["ref-data"]) / 1000000,
+ 2))
+ else:
+ item.extend([None, None])
+ if tbl_dict[tst_name]["cmp-data"]:
+ item.append(round(mean(tbl_dict[tst_name]["cmp-data"]) / 1000000,
+ 2))
+ item.append(round(stdev(tbl_dict[tst_name]["cmp-data"]) / 1000000,
+ 2))
+ else:
+ item.extend([None, None])
+ if item[1] is not None and item[3] is not None:
+ item.append(int(relative_change(float(item[1]), float(item[3]))))
+ if len(item) == 6:
+ tbl_lst.append(item)
+
+ # Sort the table according to the relative change
+ tbl_lst.sort(key=lambda rel: rel[-1], reverse=True)
+
+ # Generate tables:
+ # All tests in csv:
+ tbl_names = ["{0}-ndr-1t1c-full{1}".format(table["output-file"],
+ table["output-file-ext"]),
+ "{0}-ndr-2t2c-full{1}".format(table["output-file"],
+ table["output-file-ext"]),
+ "{0}-ndr-4t4c-full{1}".format(table["output-file"],
+ table["output-file-ext"]),
+ "{0}-pdr-1t1c-full{1}".format(table["output-file"],
+ table["output-file-ext"]),
+ "{0}-pdr-2t2c-full{1}".format(table["output-file"],
+ table["output-file-ext"]),
+ "{0}-pdr-4t4c-full{1}".format(table["output-file"],
+ table["output-file-ext"])
+ ]
+ for file_name in tbl_names:
+ with open(file_name, "w") as file_handler:
+ file_handler.write(header_str)
+ for test in tbl_lst:
+ if (file_name.split("-")[-3] in test[0] and # NDR vs PDR
+ file_name.split("-")[-2] in test[0]): # cores
+ test[0] = "-".join(test[0].split("-")[:-1])
+ file_handler.write(",".join([str(item) for item in test]) +
+ "\n")
+
+ # All tests in txt:
+ tbl_names_txt = ["{0}-ndr-1t1c-full.txt".format(table["output-file"]),
+ "{0}-ndr-2t2c-full.txt".format(table["output-file"]),
+ "{0}-ndr-4t4c-full.txt".format(table["output-file"]),
+ "{0}-pdr-1t1c-full.txt".format(table["output-file"]),
+ "{0}-pdr-2t2c-full.txt".format(table["output-file"]),
+ "{0}-pdr-4t4c-full.txt".format(table["output-file"])
+ ]
+
+ for i, txt_name in enumerate(tbl_names_txt):
+ txt_table = None
+ with open(tbl_names[i], 'rb') as csv_file:
+ csv_content = csv.reader(csv_file, delimiter=',', quotechar='"')
+ for row in csv_content:
+ if txt_table is None:
+ txt_table = prettytable.PrettyTable(row)
+ else:
+ txt_table.add_row(row)
+ with open(txt_name, "w") as txt_file:
+ txt_file.write(str(txt_table))
+
+    # Selected tests in csv (top/bottom N by relative change):
+    for tests_type in ("ndr", "pdr"):
+        input_file = "{0}-{1}-1t1c-full{2}".format(table["output-file"],
+                                                   tests_type,
+                                                   table["output-file-ext"])
+        with open(input_file, "r") as in_file:
+            lines = in_file.readlines()
+
+        # The best results (the top of the sorted full table):
+        output_file = "{0}-{1}-1t1c-top{2}".format(table["output-file"],
+                                                   tests_type,
+                                                   table["output-file-ext"])
+        with open(output_file, "w") as out_file:
+            out_file.write(header_str)
+            for i, line in enumerate(lines[1:]):
+                if i == table["nr-of-tests-shown"] != 0:
+                    break
+                out_file.write(line)
+
+        # The worst results (the bottom of the sorted full table):
+        output_file = "{0}-{1}-1t1c-bottom{2}".format(table["output-file"],
+                                                      tests_type,
+                                                      table["output-file-ext"])
+        with open(output_file, "w") as out_file:
+            out_file.write(header_str)
+            for i, line in enumerate(lines[-1:0:-1]):
+                if i == table["nr-of-tests-shown"] != 0:
+                    break
+                out_file.write(line)
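
A worked example of the test-name assembly used in table_performance_comparison
above: the displayed name is the first component of the test's "parent" joined
with all but the first component of its "name". The values below are
hypothetical, for shape only::

    parent = "10ge2p1x520-ethip4-ip4base-ndrpdrdisc"
    name = "tc01-64b-1t1c-ethip4-ip4base-ndrdisc"
    shown = "{0}-{1}".format(parent.split("-")[0],
                             "-".join(name.split("-")[1:]))
    print(shown)  # 10ge2p1x520-64b-1t1c-ethip4-ip4base-ndrdisc
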
diff --git a/resources/tools/presentation/requirements.txt b/resources/tools/presentation/requirements.txt
index 5765dc4ef0..a33848d681 100644
--- a/resources/tools/presentation/requirements.txt
+++ b/resources/tools/presentation/requirements.txt
@@ -8,3 +8,4 @@ python-dateutil
numpy
pandas
plotly
+PTable
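
The new PTable requirement provides the prettytable module imported by
generator_tables.py above to render the csv tables as plain text. A minimal,
self-contained usage example (row values are hypothetical)::

    import prettytable

    # Column headers mirror the csv header written by the generator:
    tbl = prettytable.PrettyTable(["Test case", "Change [%]"])
    tbl.add_row(["ethip4-ip4base-ndrdisc", -7])
    tbl.add_row(["l2xcbase-ndrdisc", -2])
    print(tbl)
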
diff --git a/resources/tools/presentation/specification.yaml b/resources/tools/presentation/specification.yaml
index 8a105fe974..d6e0b0ea78 100644
--- a/resources/tools/presentation/specification.yaml
+++ b/resources/tools/presentation/specification.yaml
@@ -44,6 +44,8 @@
DIR[DTR,FUNC,HC]: "{DIR[DTR]}/honeycomb_functional_results"
DIR[DTR,FUNC,NSHSFC]: "{DIR[DTR]}/nshsfc_functional_results"
DIR[DTR,PERF,VPP,IMPRV]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_improvements"
+ DIR[DTR,PERF,VPP,IMPACT,SPECTRE]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_impact_spectre"
+  DIR[DTR,PERF,VPP,IMPACT,MELTDOWN]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_impact_meltdown"
# Detailed test configurations
DIR[DTC]: "{DIR[WORKING,SRC]}/test_configuration"
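
The "{DIR[...]}" tokens in the values above are placeholders which PAL expands
to configured paths before the specification is used. A hedged sketch of such
an expansion (illustrative only; the actual implementation may differ)::

    # Hypothetical placeholder mapping:
    paths = {"DIR[WORKING,SRC]": "_build/src"}

    raw = "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_impact_spectre"
    for token, path in paths.items():
        raw = raw.replace("{" + token + "}", path)
    print(raw)  # _build/src/vpp_performance_tests/performance_impact_spectre
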
@@ -87,6 +89,16 @@
-
type: "configuration"
data-sets:
+ vpp-meltdown-impact:
+# TODO: specify data sources
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
+ vpp-spectre-impact:
+# TODO: specify data sources
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
plot-throughput-speedup-analysis:
# TODO: Add the data sources
# csit-vpp-perf-1801-all:
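
Named data sets such as "vpp-meltdown-impact" defined above are referenced
from the table models below through their "data" key. A hedged sketch of how
such a reference could be resolved, assuming PyYAML and the list-of-elements
layout of specification.yaml::

    import yaml

    with open("specification.yaml") as spec_file:
        elements = yaml.safe_load(spec_file)

    # Collect the named data sets from the configuration element:
    data_sets = {}
    for element in elements:
        if isinstance(element, dict) and \
                element.get("type") == "configuration":
            data_sets = element.get("data-sets", {})

    for element in elements:
        if isinstance(element, dict) and \
                element.get("type") == "table" and "data" in element:
            # The "data" key may name a data set; resolve it to jobs/builds:
            jobs_and_builds = data_sets.get(element["data"], element["data"])
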
@@ -431,6 +443,70 @@
-
type: "table"
+ title: "Performance Impact of Meltdown Patches"
+  algorithm: "table_performance_comparison"
+ output-file-ext: ".csv"
+# TODO: specify dir
+ output-file: "{DIR[DTR,PERF,VPP,IMPACT,MELTDOWN]}/meltdown-impact"
+ reference:
+ title: "No Meltdown"
+# TODO: specify data sources
+# data:
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
+ compare:
+ title: "Meltdown Patches Applied"
+# TODO: specify data sources
+# data:
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
+ data:
+ "vpp-meltdown-impact"
+ filter: "all"
+ parameters:
+ - "name"
+ - "parent"
+ - "throughput"
+ # Number of the best and the worst tests presented in the table. Use 0 (zero)
+ # to present all tests.
+ nr-of-tests-shown: 20
+
+-
+ type: "table"
+ title: "Performance Impact of Spectre Patches"
+  algorithm: "table_performance_comparison"
+ output-file-ext: ".csv"
+# TODO: specify dir
+ output-file: "{DIR[DTR,PERF,VPP,IMPACT,SPECTRE]}/spectre-impact"
+ reference:
+ title: "No Spectre"
+# TODO: specify data sources
+# data:
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
+ compare:
+ title: "Spectre Patches Applied"
+# TODO: specify data sources
+# data:
+# csit-vpp-perf-1801-all:
+# - 1
+# - 2
+ data:
+ "vpp-spectre-impact"
+ filter: "all"
+ parameters:
+ - "name"
+ - "parent"
+ - "throughput"
+ # Number of the best and the worst tests presented in the table. Use 0 (zero)
+ # to present all tests.
+ nr-of-tests-shown: 20
+
+-
+ type: "table"
title: "Performance improvements"
algorithm: "table_performance_improvements"
template: "{DIR[DTR,PERF,VPP,IMPRV]}/tmpl_performance_improvements.csv"