path: root/tests
Age  Commit message  (Author, files changed, lines -deleted/+added)
2020-06-10  NAT44 EI tests  (Maros Mullner, 16 files, -0/+2504)
Signed-off-by: Maros Mullner <maros.mullner@pantheon.tech> Change-Id: Ib5f58f60a1409ed139e2846793bf52fdc02a6571
2020-06-09  Remove leading tc[nn] from test names  (Juraj Linkeš, 560 files, -5657/+5657)
The test names are unique without it and the information doesn't add anything extra. Change-Id: Idc7d6d1d21c8c05691e1757227a0a3787406d370 Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
2020-05-25  FIX: vts perf tests  (Jan Gelety, 3 files, -12/+12)
Change-Id: Ie144c22575a7976da0c77e787e450355b73b0006 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-05-15  Performance: Tests with virtio driver in VM  (Peter Mikus, 69 files, -67/+834)
Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: I20e01dfe83a961dc8202d33783a678d38e71cff2
2020-05-07  perf: refactor 'setup suite topology interfaces'  (Dave Wallace, 8 files, -9/+8)
- and 'setup suite topology interfaces no tg' to use a common keyword to create suite variables using the required topology information. Change-Id: I46894948bc86eb7ce72d036e5b84f09c5c1385db Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-05-04  perf: remove hoststack wrk cps/rps test suites.  (Dave Wallace, 2 files, -196/+0)
- The VSAP project will be adding hoststack connections-per-second and requests-per-second tests of the hoststack using the Apache 'ab' test tools. - WRK infra in /resources/libraries will be cleaned up separately as a part of the hoststack + LDP + nginx test suite. Signed-off-by: Dave Wallace <dwallacelf@gmail.com> Change-Id: Ic1d2f4299e8b6ae6be84283f22f6e28dd05bd80f
2020-05-04  VPP-DEV API Coverages: SRv6  (Jan Gelety, 12 files, -15/+694)
Jira: CSIT-1698 Change-Id: I6d9154284990df8877850e4014716510016e485b Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-04-28  CSIT-1597 API cleanup: lisp  (Jan Gelety, 9 files, -9/+13)
- cover API changes in VPP: https://gerrit.fd.io/r/c/vpp/+/24663 - update vpp stable to version 20.05-rc0~637 - remove unused L1 and L2 lisp KWs Change-Id: I2672b6a375ad70c82f331dcc991c145e868108b9 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-04-23  FIX: Mellanox jumbo frames  (Peter Mikus, 4 files, -0/+4)
Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: I84b3c07e22a313e96ac59fc7818960c502507651
2020-04-23  Performance: DPDK refactor  (Peter Mikus, 12 files, -520/+41)
+ Rework BASH scripts (more code to python). + Move BASH into libraries. + Allows RDMA usage. + Fix 9000B tests. + Rename confusing l2fwd -> testpmd. + Fix suite setup. + Fix PCI whitelist to not accidentally pickup wrong interface. + Fix deprecated DPDK cli arguments. - MLX5 jumbo are disabled on NIC (i will increase separately). https://jenkins.fd.io/job/csit-dpdk-perf-verify-master-2n-clx/6/console (l3fwd still broken) - MLX5 IMIX seems to be some TRex issue with IMIX for mlx5 (i will handle separately) Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: I31d1b67305fa247cb5e1f57e739d3ef30dc1a04b
2020-04-20  FIX: VTS tests  (Peter Mikus, 3 files, -6/+6)
Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ib815175360ca565ce147f802e5dea0908b7507ee
2020-04-17  VPP-DEV API Coverages: IPSEC interface  (Jan Gelety, 37 files, -106/+307)
+ some pylint fixes Change-Id: I650ce16282ae953a1a5ee96e810702c01f71efd6 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-04-06  Improve pf layer  (Peter Mikus, 545 files, -995/+2087)
+ Merge single/double link + Introduce _pf{n}[0] variables so we can access physical function same way as virtual function + Cleanup code by moving complex logic to python + Prepare code for multiple vf functions Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ic2e74a38bfa146441357de8f0916aeb638941c49
2020-03-19  perf: Fix broken hoststack tests  (Dave Wallace, 2 files, -2/+2)
- Rename NSIM attribute names as changed in b9f4ba11 Change-Id: I6bc232c9954cfd9004d1d0cf22446957e78a641a Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-03-12  rls2001 perf: fix hoststack test packet sizes  (Dave Wallace, 8 files, -24/+25)
- TCP packet size is 1460B not 9000B - QUIC packet size is 1280B not 9000B Change-Id: I6604a74fa533db4ac782782c85ea54038688627a Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-03-10  Make RXQs/TXQs configurable  (Peter Mikus, 551 files, -60/+1705)
Change-Id: Ib30bc4697fcba93a6723ee492a59a0523425f623 Signed-off-by: Peter Mikus <pmikus@cisco.com>
2020-03-09  Fix on 2n suite  (Vratko Polak, 1 file, -1/+1)
Wrong tag said it is 3. Change-Id: I3ea6ffbec3e38a11b721318266bcb2ea451d2d5a Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-03-03  perf: Clean up Hoststack tests  (Dave Wallace, 10 files, -46/+347)
- Update test names with clients/streams - Convert test results to JSON output * iperf3 results include bits_per_second * vpp_echo results include both client and server output which includes time in seconds and rx_data/tx_data in bytes which can be used to calculate the average bits per second. Tx and Rx data will always be the same: BPS = (client tx_data * 8) / ((client time + server time) / 2) - Fix WRK test results data formatting errors Change-Id: Ie2aeb665e3cc0739b16f97ba2628eebe6e041d22 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
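The average-bits-per-second formula in this commit message can be checked with a quick shell calculation; the byte and time values below are hypothetical, not real test output:

```shell
# Average goodput from vpp_echo client/server stats (hypothetical numbers).
# BPS = (client tx_data * 8) / ((client time + server time) / 2)
client_tx_data=125000000   # bytes sent by the client
client_time=10             # seconds, measured on the client side
server_time=10             # seconds, measured on the server side
bps=$(awk -v d="${client_tx_data}" -v c="${client_time}" -v s="${server_time}" \
    'BEGIN { printf "%d", (d * 8) / ((c + s) / 2) }')
echo "${bps}"   # 100000000, i.e. 100 Mbps
```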
2020-02-25  FIX: check if t-rex is running at test setup of all perf tests  (Jan Gelety, 498 files, -703/+703)
Change-Id: I9af632035a1415666b2470c62a41d1b6acbf33c8 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-02-10  FIX: Detection if l2fwd/l3fwd is up/down  (Jan Gelety, 3 files, -12/+8)
Change-Id: Ide5de222e8314a0ea0be59f9a478f8d59147f722 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-01-22  FIX: policer  (Jan Gelety, 3 files, -6/+15)
Change-Id: I23910b74c8720245b43067ac8c68880291b4b1c2 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-02-07  FIX: Detection if testpmd/l3fwd is up  (Peter Mikus, 3 files, -102/+122)
Change-Id: Ibd2e038332fe2bdf0e5bd69bf7376a2a7357e901 Signed-off-by: Peter Mikus <pmikus@cisco.com>
2020-02-04  Ipsec: Use new plugin name in reconf tests  (Vratko Polak, 40 files, -40/+40)
This edit is not suitable for rls2001. Change-Id: I18ea22346d5996e78034f35d74a87c125010a146 Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-02-04  Add more reconf tests, for IPsec  (Vratko Polak, 41 files, -1/+6101)
- Not adding nf_density tests. - Not adding hardware ipsec tests. - Not adding -policy- tests. - Using old crypto_ia32_plugin.so plugin name. + Suitable for cherry-picking to rls2001. Change-Id: Ibf44d6d91e2afa2320637ecd9eb69d5d5dc364aa Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-31  T-Rex: CPU pinning  (Peter Mikus, 3 files, -10/+9)
+ Detect NUMA + Pin based on numa location Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ife350f8c70e5437ac7c1413c7753f2a2f62777d9
2020-01-30  perf: Add hoststack NSIM+LDPRELOAD+IPERF3 test suite  (Dave Wallace, 1 file, -0/+66)
Change-Id: Ia7a876b1aa240676e1f2d23618c1d4e09ead14f0 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-01-29  perf: QUIC transport hoststack test suite  (Dave Wallace, 1 file, -0/+74)
Change-Id: I73f4be7ea315c7a5dcce46e1bd3034bcb0a97ee2 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-01-29  IPSEC: Change plugin naming  (Peter Mikus, 104 files, -104/+104)
https://gerrit.fd.io/r/c/vpp/+/24574 Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ic2b7c925ceba1e16de87c64003ecceeea69c681c
2020-01-28  Ipsec: Unify first line of Local Template doc  (Vratko Polak, 62 files, -94/+90)
Seeing differences when diff-ing between suites is distracting. + Bump copyright year, even for files with no change. Change-Id: Iaca79647821dd8233bdbe6b0ac8b14fdb04060a8 Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-28  perf: hoststack iperf3 test tuning  (Dave Wallace, 1 file, -18/+11)
Change-Id: I53425f57fe9ecef9cff2c94642cc7cb24537a961 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-01-28  Fix two auth_alg values  (Vratko Polak, 2 files, -2/+2)
Change-Id: I0e85fc958779df3d5dbacf1ad1e3898268a832ec Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-28  Update overheads for IPsec CBC tests  (Vratko Polak, 32 files, -64/+64)
Updated to the values as seen in packet trace. Even if VPP creates wrongly sized packets (compared to RFCs), the overhead should correspond to the actual packet size present, in order to correctly prevent linerate on DUT-DUT link. The new overhead values are 62 (256SHA) and 78 (512SHA). The GCM value is already correct, at 54 bytes, so density tests are ok. - The lispgpe test is not updated, as it currently fails. We will update overhead there in (or after) 24578. Change-Id: I5cc6920205f37ddc80e76804fabd90b67174addf Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-28  FIX: dot1qip4vxlan tests  (Peter Mikus, 24 files, -24/+24)
Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ieb6e6010108987f99f55730149e3e4b7f1a7fc21
2020-01-18  FIX: VTS tests  (Jan Gelety, 3 files, -6/+6)
Change-Id: I594d248c58dcdcdeceea57af2dd25e2b2e08247f Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-01-23  FIX: nfv_density  (Jan Gelety, 185 files, -370/+370)
- use correct osi_layer=L2 (so L2 spoofing check is switched off in case of avf driver) - add pci address information to eth interface in topology file - nfv_density chain_ipsec tests work only with DPDK in current implementation Change-Id: I233c6e5634a14581960c7459b87f11fcee8365bd Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-01-17  perf: add TCP Iperf3+LDPRELOAD test suite  (Dave Wallace, 1 file, -0/+71)
Change-Id: Icff49fb31cce342a2a4ae799e844ec91f9e5e366 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-01-15  Remove everything related to TLDK  (Vratko Polak, 18 files, -409/+0)
- Leftovers from kubernetes found, but not removed here. Change-Id: If8cb9269d0f3e69f642d7fe02c59122e17925a4d Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-10  Trending: new daily set  (Jan Gelety, 42 files, -353/+777)
New daily sets are prepared based on information in https://gerrit.fd.io/r/c/csit/+/24073/1/docs/job_specs/perf_tests_job_specs.md and previous test set definitions in docs/job_specs/test_select_list_[2n|3n]_[clx|skx|hsw|tsh|dnv].md files. - mrr-daily-2n-clx: 510 TCs (incl. nfv_density), expected exec. time 8:50h - mrr-daily-2n-skx: 525 TCs (incl. nfv_density), expected exec. time 7:55h - mrr-daily-3n-skx: 393 TCs (incl. nfv_density), expected exec. time 11:00h - mrr-daily-3n-hsw: 177 TCs (incl. nfv_density), expected exec. time 7:10h - mrr-daily-3n-tsh: 204 TCs, expected exec. time 21:00h - mrr-daily-2n-dnv: 84 TCs, expected exec. time 2:25h - mrr-daily-3n-dnv: 144 TCs, expected exec. time 6:35h + add some missing test suites + add trex-sl-2n-ethip4udp-1000u15p.py T-Rex traffic profile + correction of TS and TC names and tags in directory tests/vpp/perf/nfv_density/chain_ipsec Change-Id: Icfc86e9af97ed8dd8ccd2a34355c99aad69a28c0 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-01-10  Support suite tags in autogen  (Vratko Polak, 498 files, -1/+499)
+ Include a script to add suite tags to many suites at once. + Add suite tags also to device tests (not covered by autogen). Change-Id: I514ee6178e22999b43460028fe2696738b012f04 Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-10  Autogen: Generate also NIC drivers.  (Vratko Polak, 198 files, -3908/+185)
+ Disallowed -avf- (or -rdma-) as "template" suites. + GBP suite switched to DPDK driver in repo. + Each NIC has its own list of supported drivers, in Constants. + Updated tag expressions for daily jobs: + Feature, ipsec, memif, scale, srv6, tunnels, vhost and vts are tested only with vfio-pci. + Other (base, dot1q, dot1ad) tested with all drivers. + Setup actions currently depend on driver, generated. - The performance_rdma action is trivial for now. - Several tests fail, to be fixed later, e.g. by performance_rdma. + Reconf tests are also supported. + Added DRV_VFIO_PCI tags missing, mainly in density tests. - Vhost suites (density, reconf) are failing, but suites look good. - TCP suites do not support NIC drivers yet. - DPDK obviously not supported. + Use Python 3 in regenerate scripts. + Fix typos binded => bound. + File open modes set either u"rt" or u"wt" everywhere. + Remove a trailing space in an environment variable name. Change-Id: I290470675dc5c9e88b2eaa5ab6285ecd9ed7827a Signed-off-by: Vratko Polak <vrpolak@cisco.com>
2020-01-15  Hoststack perf infrastructure refactoring  (Dave Wallace, 3 files, -6/+10)
- DUT only topology (hoststack test apps are co-located with vpp) - Make vpp app specific keywords generic where applicable - Add IP4 Prefix to topology file - Support running wrk in linux namespace - Refactor namespace cleanup - Remove redundant namespace creation code - Refactor test/keyword dirs: tcp -> hoststack - Add hoststack utility keywords - Refactor wrk suite setup/teardown - Update tests with recent perf infra changes Change-Id: Ia1cf07978d579393eef94923819a87c8c1f36f34 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2020-01-09  DMM: Remove  (Tibor Frank, 2 files, -66/+0)
Change-Id: Ibbfbed79e473c804390802ae1ecd737b50c06aa3 Signed-off-by: Tibor Frank <tifrank@cisco.com>
2020-01-07  Update of VPP_STABLE_VER files  (Jan Gelety, 1 file, -1/+1)
- use new vpp ref build - ubuntu 18.04: 20.01-rc0~983-g78565f38e - use new vpp ref build - centos7: 20.01-rc0~983_g78565f3~b8651 + remove EXPECTED_FAILING tag from tc01-64B-ethip4-l2patch-dev test Change-Id: Iab47a66003926024f87e028b1b1d9136b8fb4ec4 Signed-off-by: Jan Gelety <jgelety@cisco.com>
2020-01-02  Revert "L2Patch: Remove EXPECTED_FAILING"  (Peter Mikus, 1 file, -1/+1)
This reverts commit 09c5a6b8e1c6efed8826ef34aa64809226e80edb. Reason for revert: CSIT has not yet the latest VPP version Change-Id: Ibaa2c00c639bacef1561898daf9485c3a68efec4 Signed-off-by: Peter Mikus <pmikus@cisco.com>
2020-01-02  L2Patch: Remove EXPECTED_FAILING  (Peter Mikus, 1 file, -1/+1)
Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ie950370c47403872597f8857edd651df2552ccb2
2019-12-20  VTS: Unify the tests  (Peter Mikus, 3 files, -58/+36)
- Converting to 2n as they were always 2n (with l2xc on 3rd node) - Removing KW and converting to layered approach Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: Ie349c50f72eb362815e7c5ede076d421ab386e76
2019-12-16  FIX: NAT44 bug  (Peter Mikus, 1 file, -1/+2)
+ From trending ... Signed-off-by: Peter Mikus <pmikus@cisco.com> Change-Id: I0ebb5429f0ad731d42aa43855cd99ae73b1461fc
2019-12-13  Python3: refactor ':FOR' statements  (Dave Wallace, 2 files, -2/+4)
Signed-off-by: Dave Wallace <dwallacelf@gmail.com> Change-Id: I76835e3d3acf6955e328f30427f9dd0098947e41
2019-12-01  CSIT-VPP-DEV: move tap tests back to critical pool  (Jan Gelety, 3 files, -3/+1)
Change-Id: Ic99e828588d561a07c169502dff8ca19ac98400f Signed-off-by: Jan Gelety <jgelety@cisco.com>
2019-12-11  Introduce VPP-IPsec container tests.  (Ludovit Mikula, 36 files, -0/+5268)
Change-Id: Ie64d662e81879bd52785e0188450d998bf056bda Signed-off-by: Ludovit Mikula <ludovit.mikula@pantheon.tech>
# Copyright (c) 2022 Cisco and/or its affiliates.
# Copyright (c) 2022 PANTHEON.tech and/or its affiliates.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -exuo pipefail

# This library defines functions used by multiple entry scripts.
# Keep functions ordered alphabetically, please.

# TODO: Add a link to bash style guide.
# TODO: Consider putting every die into a {} block,
#   the code might become more readable (but longer).


function activate_docker_topology () {

    # Create virtual vpp-device topology. Output of the function is topology
    # file describing created environment saved to a file.
    #
    # Variables read:
    # - BASH_FUNCTION_DIR - Path to existing directory this file is located in.
    # - TOPOLOGIES - Available topologies.
    # - NODENESS - Node multiplicity of desired testbed.
    # - FLAVOR - Node flavor string, usually describing the processor.
    # - IMAGE_VER_FILE - Name of file that contains the image version.
    # - CSIT_DIR - Directory where ${IMAGE_VER_FILE} is located.
    # Variables set:
    # - WORKING_TOPOLOGY - Path to topology file.

    set -exuo pipefail

    source "${BASH_FUNCTION_DIR}/device.sh" || {
        die "Source failed!"
    }
    device_image="$(< "${CSIT_DIR}/${IMAGE_VER_FILE}")"
    case_text="${NODENESS}_${FLAVOR}"
    case "${case_text}" in
        "1n_skx" | "1n_tx2")
            # We execute reservation over csit-shim-dcr (ssh) which runs sourced
            # script's functions. Env variables are read from ssh output
            # back to localhost for further processing.
            # Shim and Jenkins executor are in the same network on the same host
            # Connect to docker's default gateway IP and shim's exposed port
            ssh="ssh root@172.17.0.1 -p 6022"
            run="activate_wrapper ${NODENESS} ${FLAVOR} ${device_image}"
            # The "declare -f" output is long and boring.
            set +x
            # Backticks are used to avoid https://midnight-commander.org/ticket/2142
            env_vars=`${ssh} "$(declare -f); ${run}"` || {
                die "Topology reservation via shim-dcr failed!"
            }
            set -x
            set -a
            source <(echo "$env_vars" | grep -v /usr/bin/docker) || {
                die "Source failed!"
            }
            set +a
            ;;
        "1n_vbox")
            # We execute reservation on localhost. Sourced script automatically
            # sets environment variables for further processing.
            activate_wrapper "${NODENESS}" "${FLAVOR}" "${device_image}" || die
            ;;
        *)
            die "Unknown specification: ${case_text}!"
    esac

    trap 'deactivate_docker_topology' EXIT || {
        die "Trap attempt failed, please cleanup manually. Aborting!"
    }

    parse_env_variables || die "Parse of environment variables failed!"

    # Replace all variables in template with those in environment.
    source <(echo 'cat <<EOF >topo.yml'; cat ${TOPOLOGIES[0]}; echo EOF;) || {
        die "Topology file create failed!"
    }

    WORKING_TOPOLOGY="${CSIT_DIR}/topologies/available/vpp_device.yaml"
    mv topo.yml "${WORKING_TOPOLOGY}" || {
        die "Topology move failed!"
    }
    grep -v password "${WORKING_TOPOLOGY}" || {
        die "Topology read failed!"
    }
}
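The `source <(echo 'cat <<EOF ...')` line above is a here-document templating trick: sourcing the generated script makes bash expand every `${...}` reference in the template file from the current environment. A minimal sketch of the same mechanism (the file and variable names here are illustrative, not the real topology files):

```shell
# Work in a throwaway directory so nothing in the repo is touched.
workdir="$(mktemp -d)" || exit 1
cd "${workdir}"

# A tiny "topology template" containing a shell variable reference.
printf 'host: ${TEST_HOST}\n' > template.yml

# Expand the template against the current environment, as the library does:
# the sourced snippet is a 'cat <<EOF' whose body is the template itself.
export TEST_HOST="10.0.0.1"
source <(echo 'cat <<EOF >topo.yml'; cat template.yml; echo EOF;)

cat topo.yml   # host: 10.0.0.1
```

Because the here-document delimiter is unquoted, bash performs parameter expansion on the template body; any `${VAR}` not set in the environment would abort under `set -u`, which is what makes the trick safe in this library.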


function activate_virtualenv () {

    # Update virtualenv pip package, delete and create virtualenv directory,
    # activate the virtualenv, install requirements, set PYTHONPATH.

    # Arguments:
    # - ${1} - Path to existing directory for creating virtualenv in.
    #          If missing or empty, ${CSIT_DIR} is used.
    # - ${2} - Path to requirements file, ${CSIT_DIR}/requirements.txt if empty.
    # Variables read:
    # - CSIT_DIR - Path to existing root of local CSIT git repository.
    # Variables exported:
    # - PYTHONPATH - CSIT_DIR, as CSIT Python scripts usually need this.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    root_path="${1-$CSIT_DIR}"
    env_dir="${root_path}/env"
    req_path=${2-$CSIT_DIR/requirements.txt}
    rm -rf "${env_dir}" || die "Failed to clean previous virtualenv."
    pip3 install virtualenv==20.15.1 || {
        die "Virtualenv package install failed."
    }
    virtualenv --no-download --python=$(which python3) "${env_dir}" || {
        die "Virtualenv creation for $(which python3) failed."
    }
    set +u
    source "${env_dir}/bin/activate" || die "Virtualenv activation failed."
    set -u
    pip3 install -r "${req_path}" || {
        die "Requirements installation failed."
    }
    # Most CSIT Python scripts assume PYTHONPATH is set and exported.
    export PYTHONPATH="${CSIT_DIR}" || die "Export failed."
}


function archive_tests () {

    # Create .tar.gz of generated/tests for archiving.
    # To be run after generate_tests, kept separate to offer more flexibility.

    # Directory read:
    # - ${GENERATED_DIR}/tests - Tree of executed suites to archive.
    # File rewritten:
    # - ${ARCHIVE_DIR}/generated_tests.tar.gz - Archive of generated tests.

    set -exuo pipefail

    pushd "${ARCHIVE_DIR}" || die
    tar czf "generated_tests.tar.gz" "${GENERATED_DIR}/tests" || true
    popd || die
}


function check_download_dir () {

    # Fail if there are no files visible in ${DOWNLOAD_DIR}.
    #
    # Variables read:
    # - DOWNLOAD_DIR - Path to directory pybot takes the build to test from.
    # Directories read:
    # - ${DOWNLOAD_DIR} - Has to be non-empty to proceed.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    if [[ ! "$(ls -A "${DOWNLOAD_DIR}")" ]]; then
        die "No artifacts downloaded!"
    fi
}


function check_prerequisites () {

    # Fail if prerequisites are not met.
    #
    # Functions called:
    # - installed - Check if application is installed/present in system.
    # - die - Print to stderr and exit.

    set -exuo pipefail

    if ! installed sshpass; then
        die "Please install sshpass before continuing!"
    fi
}
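The `installed` helper is defined elsewhere in the library; a typical implementation (this sketch is an assumption, not the exact CSIT code) just checks whether the name resolves via PATH:

```shell
function installed () {
    # Check if the given utility can be resolved via PATH.
    # Returns 0 (success) when found, non-zero otherwise.
    command -v "${1}" > /dev/null 2>&1
}

installed ls && echo "ls is present"
```

Using `command -v` rather than `which` keeps the check a shell builtin and portable across distributions.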


function common_dirs () {

    # Set global variables, create some directories (without touching content).

    # Variables set:
    # - BASH_FUNCTION_DIR - Path to existing directory this file is located in.
    # - CSIT_DIR - Path to existing root of local CSIT git repository.
    # - TOPOLOGIES_DIR - Path to existing directory with available topologies.
    # - JOB_SPECS_DIR - Path to existing directory with job test specifications.
    # - RESOURCES_DIR - Path to existing CSIT subdirectory "resources".
    # - TOOLS_DIR - Path to existing resources subdirectory "tools".
    # - PYTHON_SCRIPTS_DIR - Path to existing tools subdirectory "scripts".
    # - ARCHIVE_DIR - Path to created CSIT subdirectory "archives".
    #   The name is chosen to match what ci-management expects.
    # - DOWNLOAD_DIR - Path to created CSIT subdirectory "download_dir".
    # - GENERATED_DIR - Path to created CSIT subdirectory "generated".
    # Directories created if not present:
    # ARCHIVE_DIR, DOWNLOAD_DIR, GENERATED_DIR.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    this_file=$(readlink -e "${BASH_SOURCE[0]}") || {
        die "Failed to locate this source file."
    }
    BASH_FUNCTION_DIR=$(dirname "${this_file}") || {
        die "Dirname call failed."
    }
    # Current working directory could be in a different repo, e.g. VPP.
    pushd "${BASH_FUNCTION_DIR}" || die "Pushd failed"
    relative_csit_dir=$(git rev-parse --show-toplevel) || {
        die "Git rev-parse failed."
    }
    CSIT_DIR=$(readlink -e "${relative_csit_dir}") || die "Readlink failed."
    popd || die "Popd failed."
    TOPOLOGIES_DIR=$(readlink -e "${CSIT_DIR}/topologies/available") || {
        die "Readlink failed."
    }
    JOB_SPECS_DIR=$(readlink -e "${CSIT_DIR}/docs/job_specs") || {
        die "Readlink failed."
    }
    RESOURCES_DIR=$(readlink -e "${CSIT_DIR}/resources") || {
        die "Readlink failed."
    }
    TOOLS_DIR=$(readlink -e "${RESOURCES_DIR}/tools") || {
        die "Readlink failed."
    }
    DOC_GEN_DIR=$(readlink -e "${TOOLS_DIR}/doc_gen") || {
        die "Readlink failed."
    }
    PYTHON_SCRIPTS_DIR=$(readlink -e "${TOOLS_DIR}/scripts") || {
        die "Readlink failed."
    }

    ARCHIVE_DIR=$(readlink -f "${CSIT_DIR}/archives") || {
        die "Readlink failed."
    }
    mkdir -p "${ARCHIVE_DIR}" || die "Mkdir failed."
    DOWNLOAD_DIR=$(readlink -f "${CSIT_DIR}/download_dir") || {
        die "Readlink failed."
    }
    mkdir -p "${DOWNLOAD_DIR}" || die "Mkdir failed."
    GENERATED_DIR=$(readlink -f "${CSIT_DIR}/generated") || {
        die "Readlink failed."
    }
    mkdir -p "${GENERATED_DIR}" || die "Mkdir failed."
}
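Note the deliberate split above between `readlink -e` (canonicalize and require that the path already exists, used for directories that must be present) and `readlink -f` (canonicalize even when the final component does not exist yet, used for directories created immediately afterwards with `mkdir -p`). A minimal demonstration of the difference, using a throwaway temp directory:

```shell
tmp="$(mktemp -d)" || exit 1

# -f canonicalizes even a path that does not exist yet.
readlink -f "${tmp}/not_created_yet" && echo "-f succeeded"

# -e insists the path exists, so it fails here.
readlink -e "${tmp}/not_created_yet" || echo "-e failed as expected"
```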


function compose_pybot_arguments () {

    # Variables read:
    # - WORKING_TOPOLOGY - Path to topology yaml file of the reserved testbed.
    # - DUT - CSIT test/ subdirectory, set while processing tags.
    # - TAGS - Array variable holding selected tag boolean expressions.
    # - TOPOLOGIES_TAGS - Tag boolean expression filtering tests for topology.
    # - TEST_CODE - The test selection string from environment or argument.
    # - SELECTION_MODE - Selection criteria [test, suite, include, exclude].
    # Variables set:
    # - PYBOT_ARGS - String holding part of all arguments for pybot.
    # - EXPANDED_TAGS - Array of strings pybot arguments compiled from tags.

    set -exuo pipefail

    # No explicit check needed with "set -u".
    PYBOT_ARGS=("--loglevel" "TRACE")
    PYBOT_ARGS+=("--variable" "TOPOLOGY_PATH:${WORKING_TOPOLOGY}")

    case "${TEST_CODE}" in
        *"device"*)
            PYBOT_ARGS+=("--suite" "tests.${DUT}.device")
            ;;
        *"perf"*)
            PYBOT_ARGS+=("--suite" "tests.${DUT}.perf")
            ;;
        *)
            die "Unknown specification: ${TEST_CODE}"
    esac

    EXPANDED_TAGS=()
    for tag in "${TAGS[@]}"; do
        if [[ ${tag} == "!"* ]]; then
            EXPANDED_TAGS+=("--exclude" "${tag#!}")
        else
            if [[ ${SELECTION_MODE} == "--test" ]]; then
                EXPANDED_TAGS+=("--test" "${tag}")
            else
                EXPANDED_TAGS+=("--include" "${TOPOLOGIES_TAGS}AND${tag}")
            fi
        fi
    done

    if [[ ${SELECTION_MODE} == "--test" ]]; then
        EXPANDED_TAGS+=("--include" "${TOPOLOGIES_TAGS}")
    fi
}
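The loop above turns each tag expression into Robot Framework selection options: tags prefixed with `!` become `--exclude`, and the rest are and-ed with the topology tag expression under `--include`. A standalone sketch of the non-`--test` branch (the tag values below are made up for illustration):

```shell
TAGS=("ndrpdr" "!hw_env")                    # sample tag expressions
TOPOLOGIES_TAGS="2_node_single_link_topo"    # sample topology filter

EXPANDED_TAGS=()
for tag in "${TAGS[@]}"; do
    if [[ ${tag} == "!"* ]]; then
        # Negated tag: exclude it outright.
        EXPANDED_TAGS+=("--exclude" "${tag#!}")
    else
        # Plain tag: and it with the topology filter.
        EXPANDED_TAGS+=("--include" "${TOPOLOGIES_TAGS}AND${tag}")
    fi
done

echo "${EXPANDED_TAGS[@]}"
# --include 2_node_single_link_topoANDndrpdr --exclude hw_env
```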


function deactivate_docker_topology () {

    # Deactivate virtual vpp-device topology by removing containers.
    #
    # Variables read:
    # - NODENESS - Node multiplicity of desired testbed.
    # - FLAVOR - Node flavor string, usually describing the processor.

    set -exuo pipefail

    case_text="${NODENESS}_${FLAVOR}"
    case "${case_text}" in
        "1n_skx" | "1n_tx2")
            ssh="ssh root@172.17.0.1 -p 6022"
            env_vars=$(env | grep CSIT_ | tr '\n' ' ' ) || die
            # The "declare -f" output is long and boring.
            set +x
            ${ssh} "$(declare -f); deactivate_wrapper ${env_vars}" || {
                die "Topology cleanup via shim-dcr failed!"
            }
            set -x
            ;;
        "1n_vbox")
            enter_mutex || die
            clean_environment || {
                die "Topology cleanup locally failed!"
            }
            exit_mutex || die
            ;;
        *)
            die "Unknown specification: ${case_text}!"
    esac
}


function die () {

    # Print the message to standard error and exit with the error code
    # specified by the second argument.
    #
    # Hardcoded values:
    # - The default error message.
    # Arguments:
    # - ${1} - The whole error message, be sure to quote. Optional.
    # - ${2} - The code to exit with. Optional, default: 1.

    set -x
    set +eu
    warn "${1:-Unspecified run-time error occurred!}"
    exit "${2:-1}"
}
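`die` relies on the `${1:-default}` form of parameter expansion, which substitutes the default when the argument is unset *or* empty, whereas `activate_virtualenv` above uses `${1-default}`, which substitutes only when the argument is truly unset. The difference in one sketch:

```shell
show () {
    # ${1-X} substitutes X only if $1 is unset; ${1:-X} also if $1 is empty.
    echo "dash=${1-unset_default} colon=${1:-empty_default}"
}

show            # dash=unset_default colon=empty_default
show ""         # dash= colon=empty_default
show "value"    # dash=value colon=value
```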


function die_on_pybot_error () {

    # Source this fragment if you want to abort on any failed test case.
    #
    # Variables read:
    # - PYBOT_EXIT_STATUS - Set by a pybot running fragment.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    if [[ "${PYBOT_EXIT_STATUS}" != "0" ]]; then
        die "Test failures are present!" "${PYBOT_EXIT_STATUS}"
    fi
}


function generate_tests () {

    # Populate ${GENERATED_DIR}/tests based on ${CSIT_DIR}/tests/.
    # Any previously existing content of ${GENERATED_DIR}/tests is wiped first.
    # The generation is done by executing any *.py executable
    # within any subdirectory after copying.

    # This is a separate function, because this code is called
    # both by the autogen checker and by entries calling run_pybot.

    # Directories read:
    # - ${CSIT_DIR}/tests - Used as templates for the generated tests.
    # Directories replaced:
    # - ${GENERATED_DIR}/tests - Overwritten by the generated tests.

    set -exuo pipefail

    rm -rf "${GENERATED_DIR}/tests" || die
    cp -r "${CSIT_DIR}/tests" "${GENERATED_DIR}/tests" || die
    cmd_line=("find" "${GENERATED_DIR}/tests" "-type" "f")
    cmd_line+=("-executable" "-name" "*.py")
    # We sort the file list, so log output can be compared between runs.
    file_list=$("${cmd_line[@]}" | sort) || die

    for gen in ${file_list}; do
        directory="$(dirname "${gen}")" || die
        filename="$(basename "${gen}")" || die
        pushd "${directory}" || die
        ./"${filename}" || die
        popd || die
    done
}
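The generator discovery above can be sketched standalone; the directory tree below is a throwaway temporary one, not a real CSIT layout:

```shell
# Create a throwaway directory tree with one executable "generator".
demo_dir=$(mktemp -d)
mkdir -p "${demo_dir}/suites"
printf '#!/usr/bin/env python3\n' > "${demo_dir}/suites/regenerate.py"
chmod +x "${demo_dir}/suites/regenerate.py"
# Same find invocation as above: executable *.py files, sorted for stable logs.
cmd_line=("find" "${demo_dir}" "-type" "f" "-executable" "-name" "*.py")
file_list=$("${cmd_line[@]}" | sort)
echo "${file_list}"
```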


function get_test_code () {

    # Arguments:
    # - ${1} - Optional, argument of entry script (empty is treated as unset).
    #   Test code value to override job name from environment.
    # Variables read:
    # - JOB_NAME - String affecting test selection, used when no argument is given.
    # Variables set:
    # - TEST_CODE - The test selection string from environment or argument.
    # - NODENESS - Node multiplicity of desired testbed.
    # - FLAVOR - Node flavor string, usually describing the processor.

    set -exuo pipefail

    TEST_CODE="${1-}" || die "Reading optional argument failed, somehow."
    if [[ -z "${TEST_CODE}" ]]; then
        TEST_CODE="${JOB_NAME-}" || die "Reading job name failed, somehow."
    fi

    case "${TEST_CODE}" in
        *"1n-vbox"*)
            NODENESS="1n"
            FLAVOR="vbox"
            ;;
        *"1n-skx"*)
            NODENESS="1n"
            FLAVOR="skx"
            ;;
        *"1n-tx2"*)
            NODENESS="1n"
            FLAVOR="tx2"
            ;;
        *"1n-aws"*)
            NODENESS="1n"
            FLAVOR="aws"
            ;;
        *"2n-aws"*)
            NODENESS="2n"
            FLAVOR="aws"
            ;;
        *"3n-aws"*)
            NODENESS="3n"
            FLAVOR="aws"
            ;;
        *"2n-zn2"*)
            NODENESS="2n"
            FLAVOR="zn2"
            ;;
        *"2n-clx"*)
            NODENESS="2n"
            FLAVOR="clx"
            ;;
        *"2n-icx"*)
            NODENESS="2n"
            FLAVOR="icx"
            ;;
        *"3n-icx"*)
            NODENESS="3n"
            FLAVOR="icx"
            ;;
        *"2n-dnv"*)
            NODENESS="2n"
            FLAVOR="dnv"
            ;;
        *"3n-dnv"*)
            NODENESS="3n"
            FLAVOR="dnv"
            ;;
        *"3n-snr"*)
            NODENESS="3n"
            FLAVOR="snr"
            ;;
        *"2n-tx2"*)
            NODENESS="2n"
            FLAVOR="tx2"
            ;;
        *"3n-tsh"*)
            NODENESS="3n"
            FLAVOR="tsh"
            ;;
        *"3n-alt"*)
            NODENESS="3n"
            FLAVOR="alt"
            ;;
    esac
}
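A minimal sketch of the mapping above, with a hypothetical job name and only the one matching case branch reproduced:

```shell
# Hypothetical jenkins job name; only the matching branch is shown here.
TEST_CODE="csit-vpp-perf-mrr-daily-master-2n-clx"
case "${TEST_CODE}" in
    *"2n-clx"*)
        NODENESS="2n"
        FLAVOR="clx"
        ;;
esac
echo "${NODENESS}_${FLAVOR}"
```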


function get_test_tag_string () {

    # Variables read:
    # - GERRIT_EVENT_TYPE - Event type set by gerrit, can be unset.
    # - GERRIT_EVENT_COMMENT_TEXT - Comment text, read for "comment-added" type.
    # - TEST_CODE - The test selection string from environment or argument.
    # Variables set:
    # - TEST_TAG_STRING - The string following the trigger word in the gerrit
    #   comment. May be empty, or even unset on event types not adding a comment.

    # TODO: ci-management scripts no longer need to perform this.

    set -exuo pipefail

    if [[ "${GERRIT_EVENT_TYPE-}" == "comment-added" ]]; then
        case "${TEST_CODE}" in
            *"device"*)
                trigger="devicetest"
                ;;
            *"perf"*)
                trigger="perftest"
                ;;
            *)
                die "Unknown specification: ${TEST_CODE}"
        esac
        # Ignore lines not containing the trigger word.
        comment=$(fgrep "${trigger}" <<< "${GERRIT_EVENT_COMMENT_TEXT}" || true)
        # The vpp-csit triggers are followed by text we are not interested in.
        # Remove it and the trigger word: https://unix.stackexchange.com/a/13472
        # (we rely on \s for whitespace, \S for non-whitespace, and . for both).
        # The last string is concatenated; only the middle part is expanded.
        cmd=("grep" "-oP" '\S*'"${trigger}"'\S*\s\K.+$') || die "Unset trigger?"
        # On parsing error, TEST_TAG_STRING probably stays empty.
        TEST_TAG_STRING=$("${cmd[@]}" <<< "${comment}" || true)
        if [[ -z "${TEST_TAG_STRING-}" ]]; then
            # Probably we got a base64 encoded comment.
            comment="${GERRIT_EVENT_COMMENT_TEXT}"
            comment=$(base64 --decode <<< "${comment}" || true)
            comment=$(fgrep "${trigger}" <<< "${comment}" || true)
            TEST_TAG_STRING=$("${cmd[@]}" <<< "${comment}" || true)
        fi
        if [[ -n "${TEST_TAG_STRING-}" ]]; then
            test_tag_array=(${TEST_TAG_STRING})
            if [[ "${test_tag_array[0]}" == "icl" ]]; then
                export GRAPH_NODE_VARIANT="icl"
                TEST_TAG_STRING="${test_tag_array[@]:1}" || true
            elif [[ "${test_tag_array[0]}" == "skx" ]]; then
                export GRAPH_NODE_VARIANT="skx"
                TEST_TAG_STRING="${test_tag_array[@]:1}" || true
            fi
        fi
    fi
}
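The trigger-word extraction above can be exercised in isolation; the comment text below is a made-up example, not a real gerrit event:

```shell
# Made-up gerrit comment containing the "perftest" trigger word.
comment='csit-2n-clx perftest mrrANDnic_intel-xxv710AND1c'
trigger="perftest"
# Same extraction as above: \K discards everything up to and including the
# trigger word and the following whitespace, keeping the rest of the line.
TEST_TAG_STRING=$(grep -oP '\S*'"${trigger}"'\S*\s\K.+$' <<< "${comment}" || true)
echo "${TEST_TAG_STRING}"
```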


function installed () {

    # Check if the given utility is installed. Fail if not installed.
    #
    # Duplicate of common.sh function, as this file is also used standalone.
    #
    # Arguments:
    # - ${1} - Utility to check.
    # Returns:
    # - 0 - If command is installed.
    # - 1 - If command is not installed.

    set -exuo pipefail

    command -v "${1}"
}


function move_archives () {

    # Move archive directory to top of workspace, if not already there.
    #
    # ARCHIVE_DIR is positioned relative to CSIT_DIR,
    # but in some jobs CSIT_DIR is not the same as WORKSPACE
    # (e.g. under VPP_DIR). To simplify ci-management settings,
    # we want to move the data to the top. We do not want a simple copy,
    # as ci-management is eager with recursive search.
    #
    # As some scripts may call this function multiple times,
    # the actual implementation uses copying and deletion,
    # so the workspace gets a "union" of contents (except overwrites on conflict).
    # The consequence is an empty ARCHIVE_DIR remaining after this call.
    #
    # As the source directory is emptied,
    # the check for dirs being different is essential.
    #
    # Variables read:
    # - WORKSPACE - Jenkins workspace, move only if the value is not empty.
    #   Can be unset, then it speeds up manual testing.
    # - ARCHIVE_DIR - Path to directory with content to be moved.
    # Directories updated:
    # - ${WORKSPACE}/archives/ - Created if it does not exist.
    #   Content of ${ARCHIVE_DIR}/ is moved.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    if [[ -n "${WORKSPACE-}" ]]; then
        target=$(readlink -f "${WORKSPACE}/archives")
        if [[ "${target}" != "${ARCHIVE_DIR}" ]]; then
            mkdir -p "${target}" || die "Archives dir create failed."
            cp -rf "${ARCHIVE_DIR}"/* "${target}" || die "Copy failed."
            rm -rf "${ARCHIVE_DIR}"/* || die "Delete failed."
        fi
    fi
}


function post_process_robot_outputs () {

    # Generate INFO level output_info.xml by rebot.
    # Archive UTI raw json outputs.
    #
    # Variables read:
    # - ARCHIVE_DIR - Path to post-processed files.

    set -exuo pipefail

    # Compress raw json outputs, as they will never be post-processed.
    pushd "${ARCHIVE_DIR}" || die
    if [ -d "tests" ]; then
        # Use deterministic order.
        options=("--sort=name")
        # We are keeping info outputs where they are.
        # Assuming we want to move anything but info files (and dirs).
        options+=("--exclude=*.info.json")
        tar czf "generated_output_raw.tar.gz" "${options[@]}" "tests" || true
        # Tar can remove when archiving, but chokes (not deterministically)
        # on attempting to remove dirs (not empty as info files are there).
        # So we need to delete the raw files manually.
        find "tests" -type f -name "*.raw.json" -delete || true
    fi
    popd || die

    # Generate INFO level output_info.xml for post-processing.
    all_options=("--loglevel" "INFO")
    all_options+=("--log" "none")
    all_options+=("--report" "none")
    all_options+=("--output" "${ARCHIVE_DIR}/output_info.xml")
    all_options+=("${ARCHIVE_DIR}/output.xml")
    rebot "${all_options[@]}" || true
}


function prepare_topology () {

    # Prepare virtual testbed topology if needed based on flavor.

    # Variables read:
    # - TEST_CODE - String affecting test selection, usually jenkins job name.
    # - NODENESS - Node multiplicity of testbed, either "2n" or "3n".
    # - FLAVOR - Node flavor string, e.g. "clx" or "skx".
    # Functions called:
    # - die - Print to stderr and exit.
    # - terraform_init - Terraform init topology.
    # - terraform_apply - Terraform apply topology.

    set -exuo pipefail

    case_text="${NODENESS}_${FLAVOR}"
    case "${case_text}" in
        "1n_aws" | "2n_aws" | "3n_aws")
            export TF_VAR_testbed_name="${TEST_CODE}"
            terraform_init || die "Failed to call terraform init."
            terraform_apply || die "Failed to call terraform apply."
            ;;
    esac
}


function reserve_and_cleanup_testbed () {

    # Reserve physical testbed, perform cleanup, register trap to unreserve.
    # When cleanup fails, remove that topology from the list and keep retrying
    # with the rest until the list is empty.
    #
    # Variables read:
    # - TOPOLOGIES - Array of paths to topology yaml to attempt reservation on.
    # - PYTHON_SCRIPTS_DIR - Path to directory holding the reservation script.
    # - BUILD_TAG - Any string suitable as filename, identifying
    #   test run executing this function. May be unset.
    # Variables set:
    # - TOPOLOGIES - Array of paths to topologies, with failed cleanups removed.
    # - WORKING_TOPOLOGY - Path to topology yaml file of the reserved testbed.
    # Functions called:
    # - die - Print to stderr and exit.
    # - ansible_playbook - Perform an action using ansible, see ansible.sh
    # Traps registered:
    # - EXIT - Calls cancel_all for ${WORKING_TOPOLOGY}.

    set -exuo pipefail

    while true; do
        for topo in "${TOPOLOGIES[@]}"; do
            set +e
            scrpt="${PYTHON_SCRIPTS_DIR}/topo_reservation.py"
            opts=("-t" "${topo}" "-r" "${BUILD_TAG:-Unknown}")
            python3 "${scrpt}" "${opts[@]}"
            result="$?"
            set -e
            if [[ "${result}" == "0" ]]; then
                # Trap unreservation before cleanup check,
                # so multiple jobs showing failed cleanup improve the chances
                # of humans noticing and fixing it.
                WORKING_TOPOLOGY="${topo}"
                echo "Reserved: ${WORKING_TOPOLOGY}"
                trap "untrap_and_unreserve_testbed" EXIT || {
                    message="TRAP ATTEMPT AND UNRESERVE FAILED, FIX MANUALLY."
                    untrap_and_unreserve_testbed "${message}" || {
                        die "Teardown should have died, not failed."
                    }
                    die "Trap attempt failed, unreserve succeeded. Aborting."
                }
                # Cleanup + calibration checks
                set +e
                ansible_playbook "cleanup, calibration"
                result="$?"
                set -e
                if [[ "${result}" == "0" ]]; then
                    break
                fi
                warn "Testbed cleanup failed: ${topo}"
                untrap_and_unreserve_testbed "Fail of unreserve after cleanup."
            fi
            # Else testbed is accessible but currently reserved, moving on.
        done

        if [[ -n "${WORKING_TOPOLOGY-}" ]]; then
            # Exit the infinite while loop if we made a reservation.
            warn "Reservation and cleanup successful."
            break
        fi

        if [[ "${#TOPOLOGIES[@]}" == "0" ]]; then
            die "Ran out of operational testbeds!"
        fi

        # Wait ~3 minutes before the next try.
        sleep_time="$(( ( RANDOM % 20 ) + 180 ))s" || {
            die "Sleep time calculation failed."
        }
        echo "Sleeping ${sleep_time}"
        sleep "${sleep_time}" || die "Sleep failed."
    done
}
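The retry backoff above picks a random duration from a small fixed window; a standalone sketch using the `$(( ))` arithmetic syntax:

```shell
# RANDOM % 20 yields 0..19, so the sleep duration is 180..199 seconds.
sleep_time="$(( ( RANDOM % 20 ) + 180 ))s"
# Strip the trailing "s" unit to get the bare number of seconds.
seconds="${sleep_time%s}"
echo "${seconds}"
```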


function run_pybot () {

    # Run pybot with options based on input variables.
    # Generate INFO level output_info.xml by rebot.
    # Archive UTI raw json outputs.
    #
    # Variables read:
    # - CSIT_DIR - Path to existing root of local CSIT git repository.
    # - ARCHIVE_DIR - Path to store robot result files in.
    # - PYBOT_ARGS, EXPANDED_TAGS - See compose_pybot_arguments.sh
    # - GENERATED_DIR - Tests are assumed to be generated under there.
    # Variables set:
    # - PYBOT_EXIT_STATUS - Exit status of most recent pybot invocation.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    all_options=("--outputdir" "${ARCHIVE_DIR}" "${PYBOT_ARGS[@]}")
    all_options+=("${EXPANDED_TAGS[@]}")

    pushd "${CSIT_DIR}" || die "Change directory operation failed."
    set +e
    robot "${all_options[@]}" "${GENERATED_DIR}/tests/"
    PYBOT_EXIT_STATUS="$?"
    set -e

    post_process_robot_outputs || die

    popd || die "Change directory operation failed."
}


function select_arch_os () {

    # Set variables affected by local CPU architecture and operating system.
    #
    # Variables set:
    # - VPP_VER_FILE - Name of file in CSIT dir containing vpp stable version.
    # - IMAGE_VER_FILE - Name of file in CSIT dir containing the image name.
    # - PKG_SUFFIX - Suffix of OS package file name, "rpm" or "deb".

    set -exuo pipefail

    source /etc/os-release || die "Get OS release failed."

    case "${ID}" in
        "ubuntu"*)
            case "${VERSION}" in
                *"LTS (Focal Fossa)"*)
                    IMAGE_VER_FILE="VPP_DEVICE_IMAGE_UBUNTU"
                    VPP_VER_FILE="VPP_STABLE_VER_UBUNTU_FOCAL"
                    PKG_SUFFIX="deb"
                    ;;
                *)
                    die "Unsupported Ubuntu version!"
                    ;;
            esac
            ;;
        *)
            die "Unsupported distro or OS!"
            ;;
    esac

    arch=$(uname -m) || {
        die "Get CPU architecture failed."
    }

    case "${arch}" in
        "aarch64")
            IMAGE_VER_FILE="${IMAGE_VER_FILE}_ARM"
            ;;
        *)
            ;;
    esac
}


function select_tags () {

    # Variables read:
    # - WORKING_TOPOLOGY - Path to topology yaml file of the reserved testbed.
    # - TEST_CODE - String affecting test selection, usually jenkins job name.
    # - DUT - CSIT test/ subdirectory, set while processing tags.
    # - TEST_TAG_STRING - String selecting tags, from gerrit comment.
    #   Can be unset.
    # - TOPOLOGIES_DIR - Path to existing directory with available topologies.
    # - BASH_FUNCTION_DIR - Directory with input files to process.
    # Variables set:
    # - TAGS - Array of processed tag boolean expressions.
    # - SELECTION_MODE - Selection criteria [test, suite, include, exclude].

    set -exuo pipefail

    # NIC SELECTION
    case "${TEST_CODE}" in
        *"1n-aws"*)
            start_pattern='^  SUT:'
            ;;
        *)
            start_pattern='^  TG:'
            ;;
    esac
    end_pattern='^ \? \?[A-Za-z0-9]\+:'
    # Remove the sections from the topology file.
    sed_command="/${start_pattern}/,/${end_pattern}/d"
    # NICs present in all topologies.
    available=$(sed "${sed_command}" "${TOPOLOGIES_DIR}"/* \
                | grep -hoP "model: \K.*" | sort -u)
    # NICs present in the selected topology.
    reserved=$(sed "${sed_command}" "${WORKING_TOPOLOGY}" \
               | grep -hoP "model: \K.*" | sort -u)
    # NICs present in all topologies, minus those in the selected topology.
    exclude_nics=($(comm -13 <(echo "${reserved}") <(echo "${available}"))) || {
        die "Computation of excluded NICs failed."
    }

    # Select default NIC tag.
    case "${TEST_CODE}" in
        *"3n-dnv"* | *"2n-dnv"*)
            default_nic="nic_intel-x553"
            ;;
        *"3n-snr"*)
            default_nic="nic_intel-e822cq"
            ;;
        *"3n-tsh"*)
            default_nic="nic_intel-x520-da2"
            ;;
        *"3n-icx"* | *"2n-icx"*)
            default_nic="nic_intel-xxv710"
            ;;
        *"2n-clx"* | *"2n-zn2"*)
            default_nic="nic_intel-xxv710"
            ;;
        *"2n-tx2"* | *"3n-alt"* | *"mrr-daily-master")
            default_nic="nic_intel-xl710"
            ;;
        *"1n-aws"* | *"2n-aws"* | *"3n-aws"*)
            default_nic="nic_amazon-nitro-50g"
            ;;
        *)
            default_nic="nic_intel-x710"
            ;;
    esac

    sed_nic_sub_cmd="sed s/\${default_nic}/${default_nic}/"
    awk_nics_sub_cmd=""
    awk_nics_sub_cmd+='gsub("xxv710","25ge2p1xxv710");'
    awk_nics_sub_cmd+='gsub("x710","10ge2p1x710");'
    awk_nics_sub_cmd+='gsub("xl710","40ge2p1xl710");'
    awk_nics_sub_cmd+='gsub("x520-da2","10ge2p1x520");'
    awk_nics_sub_cmd+='gsub("x553","10ge2p1x553");'
    awk_nics_sub_cmd+='gsub("cx556a","100ge2p1cx556a");'
    awk_nics_sub_cmd+='gsub("e810cq","100ge2p1e810cq");'
    awk_nics_sub_cmd+='gsub("vic1227","10ge2p1vic1227");'
    awk_nics_sub_cmd+='gsub("vic1385","40ge2p1vic1385");'
    awk_nics_sub_cmd+='gsub("nitro-50g","50ge1p1ENA");'
    awk_nics_sub_cmd+='if ($9 =="drv_avf") drv="avf-";'
    awk_nics_sub_cmd+='else if ($9 =="drv_rdma_core") drv ="rdma-";'
    awk_nics_sub_cmd+='else if ($9 =="drv_af_xdp") drv ="af-xdp-";'
    awk_nics_sub_cmd+='else drv="";'
    awk_nics_sub_cmd+='if ($1 =="-") cores="";'
    awk_nics_sub_cmd+='else cores=$1;'
    awk_nics_sub_cmd+='print "*"$7"-" drv $11"-"$5"."$3"-" cores "-" drv $11"-"$5'

    # Tag file directory shorthand.
    tfd="${JOB_SPECS_DIR}"
    case "${TEST_CODE}" in
        # Select specific performance tests based on jenkins job type variable.
        *"device"* )
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/vpp_device/${DUT}-${NODENESS}-${FLAVOR}.md |
                awk {"$awk_nics_sub_cmd"} || echo "devicetest") || die
            SELECTION_MODE="--test"
            ;;
        *"ndrpdr-weekly"* )
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/mlr_weekly/${DUT}-${NODENESS}-${FLAVOR}.md |
                awk {"$awk_nics_sub_cmd"} || echo "perftest") || die
            SELECTION_MODE="--test"
            ;;
        *"mrr-daily"* )
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/mrr_daily/${DUT}-${NODENESS}-${FLAVOR}.md |
                awk {"$awk_nics_sub_cmd"} || echo "perftest") || die
            SELECTION_MODE="--test"
            ;;
        *"mrr-weekly"* )
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/mrr_weekly/${DUT}-${NODENESS}-${FLAVOR}.md |
                awk {"$awk_nics_sub_cmd"} || echo "perftest") || die
            SELECTION_MODE="--test"
            ;;
        *"report-iterative"* )
            test_sets=(${TEST_TAG_STRING//:/ })
            # Run only one test set per run
            report_file=${test_sets[0]}.md
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/report_iterative/${NODENESS}-${FLAVOR}/${report_file} |
                awk {"$awk_nics_sub_cmd"} || echo "perftest") || die
            SELECTION_MODE="--test"
            ;;
        *"report-coverage"* )
            test_sets=(${TEST_TAG_STRING//:/ })
            # Run only one test set per run
            report_file=${test_sets[0]}.md
            readarray -t test_tag_array <<< $(grep -v "#" \
                ${tfd}/report_coverage/${NODENESS}-${FLAVOR}/${report_file} |
                awk {"$awk_nics_sub_cmd"} || echo "perftest") || die
            SELECTION_MODE="--test"
            ;;
        * )
            if [[ -z "${TEST_TAG_STRING-}" ]]; then
                # If nothing is specified, we will run the pre-selected tests
                # matching the following tags.
                test_tag_array=("mrrAND${default_nic}AND1cAND64bANDethip4-ip4base"
                                "mrrAND${default_nic}AND1cAND78bANDethip6-ip6base"
                                "mrrAND${default_nic}AND1cAND64bANDeth-l2bdbasemaclrn"
                                "mrrAND${default_nic}AND1cAND64bANDeth-l2xcbase"
                                "!drv_af_xdp" "!drv_avf")
            else
                # If trigger contains tags, split them into array.
                test_tag_array=(${TEST_TAG_STRING//:/ })
            fi
            SELECTION_MODE="--include"
            ;;
    esac

    # Blacklisting certain tags per topology.
    #
    # Reasons for blacklisting:
    # - ipsechw - Blacklisted on testbeds without crypto hardware accelerator.
    case "${TEST_CODE}" in
        *"1n-vbox"*)
            test_tag_array+=("!avf")
            test_tag_array+=("!vhost")
            test_tag_array+=("!flow")
            ;;
        *"1n-tx2"*)
            test_tag_array+=("!flow")
            ;;
        *"2n-clx"*)
            test_tag_array+=("!ipsechw")
            ;;
        *"2n-icx"*)
            test_tag_array+=("!ipsechw")
            ;;
        *"3n-icx"*)
            test_tag_array+=("!ipsechw")
            # Not enough nic_intel-xxv710 to support double link tests.
            test_tag_array+=("!3_node_double_link_topoANDnic_intel-xxv710")
            ;;
        *"2n-zn2"*)
            test_tag_array+=("!ipsechw")
            ;;
        *"2n-dnv"*)
            test_tag_array+=("!memif")
            test_tag_array+=("!srv6_proxy")
            test_tag_array+=("!vhost")
            test_tag_array+=("!vts")
            test_tag_array+=("!drv_avf")
            ;;
        *"2n-tx2"* | *"3n-alt"*)
            test_tag_array+=("!ipsechw")
            ;;
        *"3n-dnv"*)
            test_tag_array+=("!memif")
            test_tag_array+=("!srv6_proxy")
            test_tag_array+=("!vhost")
            test_tag_array+=("!vts")
            test_tag_array+=("!drv_avf")
            ;;
        *"3n-snr"*)
            ;;
        *"3n-tsh"*)
            # 3n-tsh only has x520 NICs, which do not work with AVF.
            test_tag_array+=("!drv_avf")
            test_tag_array+=("!ipsechw")
            ;;
        *"1n-aws"* | *"2n-aws"* | *"3n-aws"*)
            test_tag_array+=("!ipsechw")
            ;;
    esac

    # Add the excluded NICs as negated tags.
    test_tag_array+=("${exclude_nics[@]/#/!NIC_}")

    TAGS=()
    prefix=""

    set +x
    if [[ "${TEST_CODE}" == "vpp-"* ]]; then
        if [[ "${TEST_CODE}" != *"device"* ]]; then
            # Automatic prefixing for VPP perf jobs to limit the NIC used
            # and to limit traffic evaluation to MRR.
            if [[ "${TEST_TAG_STRING-}" == *"nic_"* ]]; then
                prefix="${prefix}mrrAND"
            else
                prefix="${prefix}mrrAND${default_nic}AND"
            fi
        fi
    fi
    for tag in "${test_tag_array[@]}"; do
        if [[ "${tag}" == "!"* ]]; then
            # Exclude tags are not prefixed.
            TAGS+=("${tag}")
        elif [[ "${tag}" == " "* || "${tag}" == *"perftest"* ]]; then
            # Badly formed tag expressions can trigger way too many tests.
            set -x
            warn "The following tag expression hints at bad trigger: ${tag}"
            warn "Possible cause: Multiple triggers in a single comment."
            die "Aborting to avoid triggering too many tests."
        elif [[ "${tag}" == *"OR"* ]]; then
            # If OR had higher precedence than AND, it would be useful here.
            # Some people think it does, thus triggering way too many tests.
            set -x
            warn "The following tag expression hints at bad trigger: ${tag}"
            warn "Operator OR has lower precedence than AND. Use space instead."
            die "Aborting to avoid triggering too many tests."
        elif [[ "${tag}" != "" && "${tag}" != "#"* ]]; then
            # Empty and comment lines are skipped.
            # Other lines are normal tags, they are to be prefixed.
            TAGS+=("${prefix}${tag}")
        fi
    done
    set -x
}
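Two shell idioms above are easy to misread: `comm -13` computes "available minus reserved", and `${array[@]/#/!NIC_}` prefixes every element. A self-contained sketch with made-up NIC names (real values are model strings from the topology yaml files):

```shell
# Made-up NIC model lists, already sorted as comm requires.
f_reserved=$(mktemp)
f_available=$(mktemp)
printf 'nic_b\n' > "${f_reserved}"
printf 'nic_a\nnic_b\nnic_c\n' > "${f_available}"
# comm -13 suppresses lines unique to file1 and lines common to both,
# leaving lines unique to file2: the NICs to exclude.
exclude_nics=($(comm -13 "${f_reserved}" "${f_available}"))
# Prefix each excluded NIC with "!NIC_" to form negated tag expressions.
tags=("${exclude_nics[@]/#/!NIC_}")
result="${tags[*]}"
echo "${result}"
rm -f "${f_reserved}" "${f_available}"
```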


function select_topology () {

    # Variables read:
    # - NODENESS - Node multiplicity of testbed, either "2n" or "3n".
    # - FLAVOR - Node flavor string, e.g. "clx" or "skx".
    # - CSIT_DIR - Path to existing root of local CSIT git repository.
    # - TOPOLOGIES_DIR - Path to existing directory with available topologies.
    # Variables set:
    # - TOPOLOGIES - Array of paths to suitable topology yaml files.
    # - TOPOLOGIES_TAGS - Tag expression selecting tests for the topology.
    # Functions called:
    # - die - Print to stderr and exit.

    set -exuo pipefail

    case_text="${NODENESS}_${FLAVOR}"
    case "${case_text}" in
        "1n_vbox")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*vpp_device*.template )
            TOPOLOGIES_TAGS="2_node_single_link_topo"
            ;;
        "1n_skx" | "1n_tx2")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*vpp_device*.template )
            TOPOLOGIES_TAGS="2_node_single_link_topo"
            ;;
        "2n_skx")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_skx*.yaml )
            TOPOLOGIES_TAGS="2_node_*_link_topo"
            ;;
        "2n_zn2")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_zn2*.yaml )
            TOPOLOGIES_TAGS="2_node_*_link_topo"
            ;;
        "3n_skx")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_skx*.yaml )
            TOPOLOGIES_TAGS="3_node_*_link_topo"
            ;;
        "3n_icx")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_icx*.yaml )
            TOPOLOGIES_TAGS="3_node_*_link_topo"
            ;;
        "2n_clx")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_clx*.yaml )
            TOPOLOGIES_TAGS="2_node_*_link_topo"
            ;;
        "2n_icx")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_icx*.yaml )
            TOPOLOGIES_TAGS="2_node_*_link_topo"
            ;;
        "2n_dnv")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_dnv*.yaml )
            TOPOLOGIES_TAGS="2_node_single_link_topo"
            ;;
        "3n_dnv")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_dnv*.yaml )
            TOPOLOGIES_TAGS="3_node_single_link_topo"
            ;;
        "3n_snr")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_snr*.yaml )
            TOPOLOGIES_TAGS="3_node_single_link_topo"
            ;;
        "3n_tsh")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_tsh*.yaml )
            TOPOLOGIES_TAGS="3_node_single_link_topo"
            ;;
        "2n_tx2")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n_tx2*.yaml )
            TOPOLOGIES_TAGS="2_node_single_link_topo"
            ;;
        "3n_alt")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n_alt*.yaml )
            TOPOLOGIES_TAGS="3_node_single_link_topo"
            ;;
        "1n_aws")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*1n-aws*.yaml )
            TOPOLOGIES_TAGS="1_node_single_link_topo"
            ;;
        "2n_aws")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*2n-aws*.yaml )
            TOPOLOGIES_TAGS="2_node_single_link_topo"
            ;;
        "3n_aws")
            TOPOLOGIES=( "${TOPOLOGIES_DIR}"/*3n-aws*.yaml )
            TOPOLOGIES_TAGS="3_node_single_link_topo"
            ;;
        *)
            # No falling back to a default; that should have been done
            # by the function which has set NODENESS and FLAVOR.
            die "Unknown specification: ${case_text}"
    esac

    if [[ -z "${TOPOLOGIES-}" ]]; then
        die "No applicable topology found!"
    fi
}


function set_environment_variables () {

    # Depending on testbed topology, overwrite defaults set in the
    # resources/libraries/python/Constants.py file.
    #
    # Variables read:
    # - TEST_CODE - String affecting test selection, usually jenkins job name.
    # Variables set:
    # See specific cases

    set -exuo pipefail

    case "${TEST_CODE}" in
        *"1n-aws"* | *"2n-aws"* | *"3n-aws"*)
            # T-Rex 2.88+ workaround for ENA NICs.
            export TREX_RX_DESCRIPTORS_COUNT=1024
            export TREX_EXTRA_CMDLINE="--mbuf-factor 19"
            export TREX_CORE_COUNT=6
            # Settings to prevent duration stretching.
            export PERF_TRIAL_STL_DELAY=0.1
            ;;
        *"2n-zn2"*)
            # Maciek's workaround for Zen2 with a lower number of cores.
            export TREX_CORE_COUNT=14
    esac
}


function untrap_and_unreserve_testbed () {

    # Use this as a trap function to ensure the testbed does not remain reserved.
    # Perhaps call directly before script exit, to free the testbed for other jobs.
    # This function avoids repeated unreservations, so it is safe to call twice.
    # Topology cleanup is executed (as a best practice); its failures are ignored.
    #
    # Hardcoded values:
    # - default message to die with if testbed might remain reserved.
    # Arguments:
    # - ${1} - Message to die with if unreservation fails. Default hardcoded.
    # Variables read (by inner function):
    # - WORKING_TOPOLOGY - Path to topology yaml file of the reserved testbed.
    # - PYTHON_SCRIPTS_DIR - Path to directory holding Python scripts.
    # - TEST_CODE - String affecting test selection, usually jenkins job name.
    # Variables written:
    # - WORKING_TOPOLOGY - Set to empty string on successful unreservation.
    # Trap unregistered:
    # - EXIT - Failure to untrap is reported, but ignored otherwise.
    # Functions called:
    # - die - Print to stderr and exit.
    # - ansible_playbook - Perform an action using ansible, see ansible.sh

    set -xo pipefail
    set +eu  # We do not want to exit early in a "teardown" function.
    trap - EXIT || echo "Trap deactivation failed, continuing anyway."
    wt="${WORKING_TOPOLOGY}"  # Just to avoid too long lines.
    if [[ -z "${wt-}" ]]; then
        set -eu
        warn "Testbed looks unreserved already. Trap removal failed before?"
    else
        ansible_playbook "cleanup" || true
        python3 "${PYTHON_SCRIPTS_DIR}/topo_reservation.py" -c -t "${wt}" || {
            die "${1:-FAILED TO UNRESERVE, FIX MANUALLY.}" 2
        }
        case "${TEST_CODE}" in
            *"1n-aws"* | *"2n-aws"* | *"3n-aws"*)
                terraform_destroy || die "Failed to call terraform destroy."
                ;;
            *)
                ;;
        esac
        WORKING_TOPOLOGY=""
        set -eu
    fi
}


function warn () {

    # Print the message to standard error.
    #
    # Arguments:
    # - ${@} - The text of the message.

    set -exuo pipefail

    echo "$@" >&2
}