author | Luca Muscariello <lumuscar@cisco.com> | 2022-03-30 22:29:28 +0200 |
---|---|---|
committer | Mauro Sardara <msardara@cisco.com> | 2022-03-31 19:51:47 +0200 |
commit | c46e5df56b67bb8ea7a068d39324c640084ead2b (patch) | |
tree | eddeb17785938e09bc42eec98ee09b8a28846de6 /tests/resources | |
parent | 18fa668f25d3cc5463417ce7df6637e31578e898 (diff) |
feat: bootstrap hicn 22.02
This patch provides several new features, improvements, and bug
fixes, as well as complete rewrites of entire components.
- lib
The hicn packet parser has been improved with a new packet
format fully based on UDP. The TCP header is still temporarily
supported, but the UDP header will completely replace it in the
new hicn packet format. Improvements have been made to ensure
that every packet parsing operation goes through this library.
The new hicn header can be carried either between the UDP
header and the payload, or as a trailer in the UDP surplus
area, to be tested once UDP options come into use.
- hicn-light
The portable packet forwarder has been completely rewritten
from scratch, with the twofold objective of improving
performance and reducing code size, and of dropping
dependencies such as libparc, which has now been removed from
the implementation.
- hicn control
The control library is the agent used to program the packet
forwarders via their binary API. This component has benefited
from significant improvements to its interaction model, which
is now event-driven and more robust to failures.
- VPP plugin has been updated to support VPP 22.02
- transport
Major improvements have been made to the RTC protocol, to the
support of IO modules, and to the security subsystem. Signed
manifests are the default framework for data authenticity and
integrity. Confidentiality can be enabled by sharing the
encryption key with the prod/cons layer. The library has been
tested with group-key-based applications such as
broadcast/multicast and real-time online meetings with trusted
server keys or MLS.
- testing
Unit testing has been introduced using GoogleTest. One third of
the code base is covered by unit tests, with priority given to
critical features. Functional testing has also been introduced
using Docker, Linux bridging, and Robot Framework to define
tests with Less Code techniques, facilitating the extension of
the coverage.
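The two placements described under lib above (the hicn header carried between the UDP header and the payload, or as a trailer in the UDP surplus area) can be sketched as follows. This is an illustrative assumption, not the actual hicn wire format: the 8-byte placeholder header, the port number, and the helper names are hypothetical. The key idea is that the UDP length field covers only header plus payload, so bytes appended beyond that length sit in the surplus area.

```python
import struct

UDP_HLEN = 8  # fixed UDP header size (RFC 768)

def build_udp(sport, dport, payload, trailer=b""):
    """Build a UDP datagram (checksum omitted for brevity).

    The UDP length field covers only header + payload, so any
    `trailer` bytes appended afterwards sit in the surplus area:
    carried in the IP payload, but beyond the UDP length.
    """
    length = UDP_HLEN + len(payload)
    header = struct.pack("!HHHH", sport, dport, length, 0)
    return header + payload + trailer

def surplus(datagram):
    """Return the surplus-area bytes of a received UDP datagram."""
    (length,) = struct.unpack("!H", datagram[4:6])
    return datagram[length:]

# Hypothetical 8-byte hicn header blob; port 9695 is illustrative.
hicn_hdr = b"\x01" * 8

# Placement 1: hicn header between the UDP header and the payload.
dgram_inline = build_udp(9695, 9695, hicn_hdr + b"data")

# Placement 2: hicn header as a trailer in the UDP surplus area.
dgram_surplus = build_udp(9695, 9695, b"data", trailer=hicn_hdr)
assert surplus(dgram_surplus) == hicn_hdr
```

A middlebox that only honors the UDP length field forwards both datagrams as ordinary UDP, which is what makes the surplus-area placement attractive once UDP options are in use.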
Co-authored-by: Mauro Sardara <msardara@cisco.com>
Co-authored-by: Jordan Augé <jordan.auge+fdio@cisco.com>
Co-authored-by: Michele Papalini <micpapal@cisco.com>
Co-authored-by: Angelo Mantellini <manangel@cisco.com>
Co-authored-by: Jacques Samain <jsamain@cisco.com>
Co-authored-by: Olivier Roques <oroques+fdio@cisco.com>
Co-authored-by: Enrico Loparco <eloparco@cisco.com>
Co-authored-by: Giulio Grassi <gigrassi@cisco.com>
Change-Id: I75d0ef70f86d921e3ef503c99271216ff583c215
Signed-off-by: Luca Muscariello <muscariello@ieee.org>
Signed-off-by: Mauro Sardara <msardara@cisco.com>
Diffstat (limited to 'tests/resources')
-rw-r--r-- | tests/resources/libraries/robot/common.robot | 23 | ||||
-rw-r--r-- | tests/resources/libraries/robot/runtest.robot | 92 |
2 files changed, 115 insertions, 0 deletions
diff --git a/tests/resources/libraries/robot/common.robot b/tests/resources/libraries/robot/common.robot
new file mode 100644
index 000000000..c1e3f20a4
--- /dev/null
+++ b/tests/resources/libraries/robot/common.robot
@@ -0,0 +1,23 @@
+*** Settings ***
+Library    OperatingSystem
+Library    Process
+Library    String
+
+*** Variables ***
+
+*** Keywords ***
+
+Build Topology
+    [Arguments]    ${TEST_TOPOLOGY}=${NONE}    ${TEST_CONFIGURATION}=${NONE}
+    Log to console    Building topology ${TEST_TOPOLOGY} ${TEST_CONFIGURATION}
+    ${result_setup} =    Run Process    ${EXECDIR}/config.sh    build    setup    ${TEST_TOPOLOGY}    ${TEST_CONFIGURATION}
+    Log to console    Done
+    Log Many    stdout: ${result_setup.stdout}    stderr: ${result_setup.stderr}
+
+Check Environment
+    ${result} =    Run Process    docker    ps
+    Log Many    stdout: ${result.stdout}    stderr: ${result.stderr}
+
+Destroy Topology
+    ${result_teardown} =    Run Process    ${EXECDIR}/config.sh    stopall
+    Log Many    stdout: ${result_teardown.stdout}    stderr: ${result_teardown.stderr}
diff --git a/tests/resources/libraries/robot/runtest.robot b/tests/resources/libraries/robot/runtest.robot
new file mode 100644
index 000000000..d5201d765
--- /dev/null
+++ b/tests/resources/libraries/robot/runtest.robot
@@ -0,0 +1,92 @@
+*** Settings ***
+Library    OperatingSystem
+Library    Process
+Library    String
+
+*** Variables ***
+
+*** Keywords ***
+
+Infra ${VALUE}
+    Run Process    ${EXECDIR}/config.sh    ${VALUE}
+
+Run Test
+    [Arguments]    ${TEST_SETUP}=${NONE}    ${TESTID}=${NONE}    ${EXPECTED_MIN}=${NONE}    ${EXPECTED_MAX}=${NONE}    ${EXPECTED_AVG}=${NONE}
+    ${result_test} =    Run Process    ${EXECDIR}/config.sh    start    ${TEST_SETUP}    ${TESTID}    stdout=${TEMPDIR}/stdout.txt    stderr=${TEMPDIR}/stderr.txt
+    Log Many    stdout: ${result_test.stdout}    stderr: ${result_test.stderr}
+    @{min_max_avg} =    Split String    ${result_test.stdout.strip()}
+    Log To Console    Min Max Average Array: @{min_max_avg}
+    IF    '${TESTID}' == 'rtc'
+        Should Be True    ${min_max_avg}[0] == ${EXPECTED_MIN}    msg="Min does not match (${min_max_avg}[0] != ${EXPECTED_MIN})"
+        Should Be True    ${min_max_avg}[1] == ${EXPECTED_MAX}    msg="Max does not match (${min_max_avg}[1] != ${EXPECTED_MAX})"
+        Should Be True    ${min_max_avg}[2] == ${EXPECTED_AVG}    msg="Avg does not match (${min_max_avg}[2] != ${EXPECTED_AVG})"
+    ELSE IF    '${TESTID}' == 'requin'
+        Should Be True    ${min_max_avg}[0] >= ${EXPECTED_MIN}    msg="Min does not match (${min_max_avg}[0] < ${EXPECTED_MIN})"
+        Should Be True    ${min_max_avg}[1] >= ${EXPECTED_MAX}    msg="Max does not match (${min_max_avg}[1] < ${EXPECTED_MAX})"
+        Should Be True    ${min_max_avg}[2] >= ${EXPECTED_AVG}    msg="Avg does not match (${min_max_avg}[2] < ${EXPECTED_AVG})"
+    ELSE IF    '${TESTID}' == 'latency'
+        Should Be True    ${min_max_avg}[0] <= ${EXPECTED_MIN}    msg="Min does not match (${min_max_avg}[0] > ${EXPECTED_MIN})"
+        Should Be True    ${min_max_avg}[1] <= ${EXPECTED_MAX}    msg="Max does not match (${min_max_avg}[1] > ${EXPECTED_MAX})"
+        Should Be True    ${min_max_avg}[2] <= ${EXPECTED_AVG}    msg="Avg does not match (${min_max_avg}[2] > ${EXPECTED_AVG})"
+    ELSE IF    '${TESTID}' == 'cbr'
+        Should Be True    ${min_max_avg}[0] >= ${EXPECTED_MIN}    msg="Min does not match (${min_max_avg}[0] < ${EXPECTED_MIN})"
+        Should Be True    ${min_max_avg}[1] >= ${EXPECTED_MAX}    msg="Max does not match (${min_max_avg}[1] < ${EXPECTED_MAX})"
+        Should Be True    ${min_max_avg}[2] >= ${EXPECTED_AVG}    msg="Avg does not match (${min_max_avg}[2] < ${EXPECTED_AVG})"
+    ELSE
+        Fail    "Provided Test ID does not exist"
+    END
+
+Set Link
+    [Documentation]    Configure link rate/delay/jitter/loss
+    ...    Arguments:
+    ...    ${RATE}      Rate of the link
+    ...    ${DELAY}     Delay of the link
+    ...    ${JITTER}    Jitter of the link
+    ...    ${LOSS}      Loss of the link
+    [Arguments]    ${TEST_SETUP}=${NONE}
+    ...    ${RATE}=${NONE}
+    ...    ${DELAY}=${NONE}
+    ...    ${JITTER}=${NONE}
+    ...    ${LOSS}=${NONE}
+    ${result_link} =    Run Process    ${EXECDIR}/config.sh    setchannel    ${TEST_SETUP}    server    ${RATE}-${DELAY}-${JITTER}-${LOSS}
+    Log Many    stdout: ${result_link.stdout}    stderr: ${result_link.stderr}
+
+Run Latency Test
+    [Documentation]    Run hicn-ping on the ${TEST_SETUP} topology and measure latency.
+    ...    Arguments:
+    ...    ${TEST_SETUP}      The setup of the test.
+    ...    ${EXPECTED_MIN}    The expected min latency
+    ...    ${EXPECTED_MAX}    The expected max latency
+    ...    ${EXPECTED_AVG}    The expected avg latency
+    [Arguments]    ${TEST_SETUP}=${NONE}    ${EXPECTED_MIN}=${NONE}    ${EXPECTED_MAX}=${NONE}    ${EXPECTED_AVG}=${NONE}
+    Run Test    ${TEST_SETUP}    latency    ${EXPECTED_MIN}    ${EXPECTED_MAX}    ${EXPECTED_AVG}
+
+Run Throughput Test Raaqm
+    [Documentation]    Run hiperf on the ${TEST_SETUP} topology and measure throughput.
+    ...    Arguments:
+    ...    ${TEST_SETUP}      The setup of the test.
+    ...    ${EXPECTED_MIN}    The expected min throughput
+    ...    ${EXPECTED_MAX}    The expected max throughput
+    ...    ${EXPECTED_AVG}    The expected avg throughput
+    [Arguments]    ${TEST_SETUP}=${NONE}    ${EXPECTED_MIN}=${NONE}    ${EXPECTED_MAX}=${NONE}    ${EXPECTED_AVG}=${NONE}
+    Run Test    ${TEST_SETUP}    requin    ${EXPECTED_MIN}    ${EXPECTED_MAX}    ${EXPECTED_AVG}
+
+Run Throughput Test CBR
+    [Documentation]    Run hiperf on the ${TEST_SETUP} topology and measure throughput.
+    ...    Arguments:
+    ...    ${TEST_SETUP}      The setup of the test.
+    ...    ${EXPECTED_MIN}    The expected min throughput
+    ...    ${EXPECTED_MAX}    The expected max throughput
+    ...    ${EXPECTED_AVG}    The expected avg throughput
+    [Arguments]    ${TEST_SETUP}=${NONE}    ${EXPECTED_MIN}=${NONE}    ${EXPECTED_MAX}=${NONE}    ${EXPECTED_AVG}=${NONE}
+    Run Test    ${TEST_SETUP}    cbr    ${EXPECTED_MIN}    ${EXPECTED_MAX}    ${EXPECTED_AVG}
+
+Run RTC Test
+    [Documentation]    Run hiperf RTC on the ${TEST_SETUP} topology and check consumer syncs to producer bitrate.
+    ...    Arguments:
+    ...    ${TEST_SETUP}      The setup of the test.
+    ...    ${EXPECTED_MIN}    The expected min bitrate
+    ...    ${EXPECTED_MAX}    The expected max bitrate
+    ...    ${EXPECTED_AVG}    The expected avg bitrate
+    [Arguments]    ${TEST_SETUP}=${NONE}    ${EXPECTED_MIN}=${NONE}    ${EXPECTED_MAX}=${NONE}    ${EXPECTED_AVG}=${NONE}
+    Run Test    ${TEST_SETUP}    rtc    ${EXPECTED_MIN}    ${EXPECTED_MAX}    ${EXPECTED_AVG}
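The pass/fail logic of the `Run Test` keyword above — split the tool's stdout into min/max/avg and compare against the expected values, with exact equality for `rtc`, lower bounds for the throughput tests (`requin`, `cbr`), and upper bounds for `latency` — can be sketched in plain Python. The `check_result` helper is hypothetical, written only to mirror the Robot keyword, not part of the patch:

```python
def check_result(testid, stdout, expected_min, expected_max, expected_avg):
    """Mirror the Run Test keyword: parse 'min max avg' from stdout
    and apply the per-test comparison rule."""
    vmin, vmax, vavg = (float(x) for x in stdout.split())
    if testid == "rtc":
        # RTC: consumer bitrate must match the producer bitrate exactly
        ok = (vmin, vmax, vavg) == (expected_min, expected_max, expected_avg)
    elif testid in ("requin", "cbr"):
        # Throughput: measured values must reach at least the expected ones
        ok = vmin >= expected_min and vmax >= expected_max and vavg >= expected_avg
    elif testid == "latency":
        # Latency: measured values must stay at or below the expected ones
        ok = vmin <= expected_min and vmax <= expected_max and vavg <= expected_avg
    else:
        raise ValueError("Provided Test ID does not exist")
    return ok

# e.g. a latency run whose min/max/avg all stay under the expected bounds
assert check_result("latency", "1.2 3.4 2.0", 2.0, 5.0, 3.0)
```

Note the asymmetry the Robot code encodes: throughput failures report `measured < expected`, latency failures report `measured > expected`, so the error messages in the keyword name the direction of the violation.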