author     Tibor Frank <tifrank@cisco.com>    2017-02-14 14:09:11 +0100
committer  Tibor Frank <tifrank@cisco.com>    2017-02-27 09:11:25 +0100
commit     21702fb09c470511cc3ed7b34ca5ab0f8d30b68f (patch)
tree       fd36f6f589dfe826ef4199f102c249fa7c248d6c /resources/libraries/robot/default.robot
parent     f8eca5691c63479e5b4c59b5d65456553d4999dd (diff)
CSIT-339: Add Keywords for SMT
- modify keywords in CpuUtils.py
- add RF keywords
Change-Id: I57230b3948254e8f149b2563a8e24e948bc2ec27
Signed-off-by: Tibor Frank <tifrank@cisco.com>
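The new keywords in the diff below pass `skip_cnt`, `cpu_cnt`, and `smt_used` to the `Cpu list per node str` keyword backed by CpuUtils.py. As a rough illustration of what an SMT-aware CPU-list helper does, here is a minimal Python sketch; the function body and the sibling-offset assumption are hypothetical, not the actual CpuUtils.py implementation:

```python
def cpu_list_per_node_str(cpu_list, skip_cnt=0, cpu_cnt=1, smt_used=False):
    """Pick cpu_cnt CPUs after skipping skip_cnt from a NUMA node's CPU list.

    With SMT, also take the sibling hyper-threads. This sketch ASSUMES
    siblings sit at a fixed offset of half the list length, which is one
    common Linux enumeration; real topologies can differ.
    """
    cpus = cpu_list[skip_cnt:skip_cnt + cpu_cnt]
    if smt_used:
        offset = len(cpu_list) // 2
        cpus = cpus + [c + offset for c in cpus]
    # Return the comma-separated string form used in VPP startup config.
    return ",".join(str(c) for c in cpus)

# Example: NUMA node with physical cores 0-7 and SMT siblings 8-15;
# skip core 0 (OS) and core 1 (VPP main), take 2 worker cores.
print(cpu_list_per_node_str(list(range(16)), skip_cnt=2, cpu_cnt=2,
                            smt_used=True))  # → 2,3,10,11
```

This mirrors the pattern in the keywords: `skip_cnt=${1}` reserves a core for the VPP main thread, `skip_cnt=${2}` starts the worker range after it.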
Diffstat (limited to 'resources/libraries/robot/default.robot')
-rw-r--r--  resources/libraries/robot/default.robot  149
1 file changed, 108 insertions(+), 41 deletions(-)
diff --git a/resources/libraries/robot/default.robot b/resources/libraries/robot/default.robot
index f8dda1721e..e3e93098a9 100644
--- a/resources/libraries/robot/default.robot
+++ b/resources/libraries/robot/default.robot
@@ -27,64 +27,72 @@ *** Keywords ***
 | Setup all DUTs before test
 | | [Documentation] | Setup all DUTs in topology before test execution.
+| | ...
 | | Setup All DUTs | ${nodes}
 
 | Setup all TGs before traffic script
 | | [Documentation] | Prepare all TGs before traffic scripts execution.
+| | ...
 | | All TGs Set Interface Default Driver | ${nodes}
 
 | Show Vpp Version On All DUTs
 | | [Documentation] | Show VPP version verbose on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp show version verbose | ${nodes['${dut}']}
 
 | Show Vpp Errors On All DUTs
 | | [Documentation] | Show VPP errors verbose on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp Show Errors | ${nodes['${dut}']}
 
 | Show Vpp Trace Dump On All DUTs
 | | [Documentation] | Save API trace and dump output on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp api trace save | ${nodes['${dut}']}
 | | | Vpp api trace dump | ${nodes['${dut}']}
 
 | Show Vpp Vhost On All DUTs
-| | [Documentation] | Show Vhost User on all DUTs
+| | [Documentation] | Show Vhost User on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp Show Vhost | ${nodes['${dut}']}
 
 | Setup Scheduler Policy for Vpp On All DUTs
 | | [Documentation] | Set realtime scheduling policy (SCHED_RR) with priority 1
-| | ... | on all VPP worker threads on all DUTs.
+| | ... | on all VPP worker threads on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Set VPP Scheduling rr | ${nodes['${dut}']}
 
 | Add '${m}' worker threads and rxqueues '${n}' in 3-node single-link topo
-| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup
-| | ... | configuration on all DUTs in 3-node single-link topology.
+| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup\
+| | ... | configuration on all DUTs in 3-node single-link topology.
+| | ...
 | | ${m_int}= | Convert To Integer | ${m}
 | | ${dut1_numa}= | Get interfaces numa node | ${dut1}
-| | ... | ${dut1_if1} | ${dut1_if2}
+| | ... | ${dut1_if1} | ${dut1_if2}
 | | ${dut2_numa}= | Get interfaces numa node | ${dut2}
-| | ... | ${dut2_if1} | ${dut2_if2}
+| | ... | ${dut2_if1} | ${dut2_if2}
 | | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
-| | ... | cpu_cnt=${1}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
 | | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
-| | ... | skip_cnt=${1} | cpu_cnt=${m_int}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
 | | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
-| | ... | cpu_cnt=${1}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
 | | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
-| | ... | skip_cnt=${1} | cpu_cnt=${m_int}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
 | | ${dut1_cpu}= | Catenate | main-core | ${dut1_cpu_main}
-| | ... | corelist-workers | ${dut1_cpu_w}
+| | ... | corelist-workers | ${dut1_cpu_w}
 | | ${dut2_cpu}= | Catenate | main-core | ${dut2_cpu_main}
-| | ... | corelist-workers | ${dut2_cpu_w}
+| | ... | corelist-workers | ${dut2_cpu_w}
 | | ${rxqueues}= | Catenate | num-rx-queues | ${n}
 | | Add CPU config | ${dut1} | ${dut1_cpu}
 | | Add CPU config | ${dut2} | ${dut2_cpu}
@@ -92,13 +100,14 @@
 | | Add rxqueues config | ${dut2} | ${rxqueues}
 
 | Add '${m}' worker threads and rxqueues '${n}' in 2-node single-link topo
-| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup
+| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup\
 | | ... | configuration on all DUTs in 2-node single-link topology.
+| | ...
 | | ${m_int}= | Convert To Integer | ${m}
 | | ${dut1_numa}= | Get interfaces numa node | ${dut1}
 | | ... | ${dut1_if1} | ${dut1_if2}
 | | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
-| | ... | cpu_cnt=${1} | skip_cnt=${1}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
 | | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
 | | ... | skip_cnt=${2} | cpu_cnt=${m_int}
 | | ${dut1_cpu}= | Catenate | main-core | ${dut1_cpu_main}
@@ -107,9 +116,53 @@
 | | Add CPU config | ${dut1} | ${dut1_cpu}
 | | Add rxqueues config | ${dut1} | ${rxqueues}
 
+| Add '${m}' worker threads using SMT and rxqueues '${n}' in 3-node single-link topo
+| | [Documentation] | Setup M worker threads using SMT and N rxqueues in vpp\
+| | ... | startup configuration on all DUTs in 3-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
+| | ... | ${dut2_if1} | ${dut2_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | ${dut1_cpu}= | Catenate | main-core | ${dut1_cpu_main}
+| | ... | corelist-workers | ${dut1_cpu_w}
+| | ${dut2_cpu}= | Catenate | main-core | ${dut2_cpu_main}
+| | ... | corelist-workers | ${dut2_cpu_w}
+| | ${rxqueues}= | Catenate | num-rx-queues | ${n}
+| | Add CPU config | ${dut1} | ${dut1_cpu}
+| | Add CPU config | ${dut2} | ${dut2_cpu}
+| | Add rxqueues config | ${dut1} | ${rxqueues}
+| | Add rxqueues config | ${dut2} | ${rxqueues}
+
+| Add '${m}' worker threads using SMT and rxqueues '${n}' in 2-node single-link topo
+| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup\
+| | ... | configuration on all DUTs in 2-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | ${dut1_cpu}= | Catenate | main-core | ${dut1_cpu_main}
+| | ... | corelist-workers | ${dut1_cpu_w}
+| | ${rxqueues}= | Catenate | num-rx-queues | ${n}
+| | Add CPU config | ${dut1} | ${dut1_cpu}
+| | Add rxqueues config | ${dut1} | ${rxqueues}
+
 | Add worker threads and rxqueues to all DUTs
-| | [Documentation] | Setup worker threads and rxqueues in VPP startup
-| | ... | configuration to all DUTs
+| | [Documentation] | Setup worker threads and rxqueues in VPP startup\
+| | ... | configuration to all DUTs.
 | | ...
 | | ... | *Arguments:*
 | | ... | - ${cpu} - CPU configuration. Type: string
@@ -118,25 +171,26 @@
 | | ... | *Example:*
 | | ...
 | | ... | \| Add worker threads and rxqueues to all DUTs \| main-core 0 \
-| | ... | \| rxqueues 2
+| | ... | \| rxqueues 2 \|
+| | ...
 | | [Arguments] | ${cpu} | ${rxqueues}
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add CPU config | ${nodes['${dut}']}
-| | | ... | ${cpu}
-| | | Add rxqueues config | ${nodes['${dut}']}
-| | | ... | ${rxqueues}
+| | | Add CPU config | ${nodes['${dut}']} | ${cpu}
+| | | Add rxqueues config | ${nodes['${dut}']} | ${rxqueues}
 
 | Add all PCI devices to all DUTs
-| | [Documentation] | Add all available PCI devices from topology file to VPP
-| | ... | startup configuration to all DUTs
+| | [Documentation] | Add all available PCI devices from topology file to VPP\
+| | ... | startup configuration to all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Add PCI all devices | ${nodes['${dut}']}
 
 | Add PCI device to DUT
 | | [Documentation] | Add PCI device to VPP startup configuration
-| | ... | to DUT specified as argument
+| | ... | to DUT specified as argument.
 | | ...
 | | ... | *Arguments:*
 | | ... | - ${node} - DUT node. Type: dictionary
@@ -144,37 +198,47 @@
 | | ...
 | | ... | *Example:*
 | | ...
-| | ... | \| Add PCI device to DUT \| ${nodes['DUT1']} \
-| | ... | \| 0000:00:00.0
+| | ... | \| Add PCI device to DUT \| ${nodes['DUT1']} \| 0000:00:00.0 \|
+| | ...
 | | [Arguments] | ${node} | ${pci_address}
+| | ...
 | | Add PCI device | ${node} | ${pci_address}
 
 | Add Heapsize Config to all DUTs
-| | [Documentation] | Add Add Heapsize Config to VPP startup configuration
-| | ... | to all DUTs
+| | [Documentation] | Add Add Heapsize Config to VPP startup configuration\
+| | ... | to all DUTs.
+| | ...
 | | ... | *Arguments:*
 | | ... | - ${heapsize} - Heapsize string (5G, 200M, ...)
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Add Heapsize Config to all DUTs \| 200M \|
+| | ...
 | | [Arguments] | ${heapsize}
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Add Heapsize Config | ${nodes['${dut}']} | ${heapsize}
 
 | Add No Multi Seg to all DUTs
-| | [Documentation] | Add No Multi Seg to VPP startup configuration to all
-| | ... | DUTs
+| | [Documentation] | Add No Multi Seg to VPP startup configuration to all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Add No Multi Seg Config | ${nodes['${dut}']}
 
 | Add Enable Vhost User to all DUTs
-| | [Documentation] | Add Enable Vhost User to VPP startup configuration to all
-| | ... | DUTs
+| | [Documentation] | Add Enable Vhost User to VPP startup configuration to all\
+| | ... | DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Add Enable Vhost User Config | ${nodes['${dut}']}
 
 | Remove startup configuration of VPP from all DUTs
-| | [Documentation] | Remove VPP startup configuration from all DUTs
+| | [Documentation] | Remove VPP startup configuration from all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Remove All PCI Devices | ${nodes['${dut}']}
@@ -186,38 +250,41 @@
 | | | Remove Enable Vhost User Config | ${nodes['${dut}']}
 
 | Setup default startup configuration of VPP on all DUTs
-| | [Documentation] | Setup default startup configuration of VPP to all DUTs
+| | [Documentation] | Setup default startup configuration of VPP to all DUTs.
+| | ...
 | | Remove startup configuration of VPP from all DUTs
 | | Add '1' worker threads and rxqueues '1' in 3-node single-link topo
 | | Add all PCI devices to all DUTs
 | | Apply startup configuration on all VPP DUTs
 
 | Setup 2-node startup configuration of VPP on all DUTs
-| | [Documentation] | Setup default startup configuration of VPP to all DUTs
+| | [Documentation] | Setup default startup configuration of VPP to all DUTs.
+| | ...
 | | Remove startup configuration of VPP from all DUTs
 | | Add '1' worker threads and rxqueues '1' in 2-node single-link topo
 | | Add all PCI devices to all DUTs
 | | Apply startup configuration on all VPP DUTs
 
 | Apply startup configuration on all VPP DUTs
-| | [Documentation] | Apply startup configuration of VPP and restart VPP on all
-| | ... | DUTs
+| | [Documentation] | Apply startup configuration of VPP and restart VPP on all\
+| | ... | DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Apply Config | ${nodes['${dut}']}
 | | Update All Interface Data On All Nodes | ${nodes} | skip_tg=${TRUE}
 
 | Save VPP PIDs
-| | [Documentation] | Get PIDs of VPP processes from all DUTs in topology and
-| | ... | set it as a test variable. The PIDs are stored as dictionary items
+| | [Documentation] | Get PIDs of VPP processes from all DUTs in topology and\
+| | ... | set it as a test variable. The PIDs are stored as dictionary items\
 | | ... | where the key is the host and the value is the PID.
 | | ...
 | | ${setup_vpp_pids}= | Get VPP PIDs | ${nodes}
 | | Set Test Variable | ${setup_vpp_pids}
 
 | Check VPP PID in Teardown
-| | [Documentation] | Check if the VPP PIDs on all DUTs are the same at the end
-| | ... | of test as they were at the begining. If they are not, only a message
+| | [Documentation] | Check if the VPP PIDs on all DUTs are the same at the end\
+| | ... | of test as they were at the begining. If they are not, only a message\
 | | ... | is printed on console and to log. The test will not fail.
 | | ...
 | | ${teardown_vpp_pids}= | Get VPP PIDs | ${nodes}
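The keywords above assemble the computed CPU lists into VPP startup-config fragments with `Catenate` (which joins its arguments with single spaces). A minimal Python sketch of the same string assembly, with illustrative function names that do not exist in the repo:

```python
def make_cpu_config(main_core, workers):
    # Mirrors: Catenate | main-core | ${cpu_main} | corelist-workers | ${cpu_w}
    return "main-core {} corelist-workers {}".format(main_core, workers)

def make_rxqueues_config(n):
    # Mirrors: Catenate | num-rx-queues | ${n}
    return "num-rx-queues {}".format(n)

# Example: main thread on core 1, SMT worker cores 2,3 with siblings 10,11.
print(make_cpu_config(1, "2,3,10,11"))  # → main-core 1 corelist-workers 2,3,10,11
print(make_rxqueues_config(2))          # → num-rx-queues 2
```

The resulting strings are what `Add CPU config` and `Add rxqueues config` write into each DUT's VPP startup configuration before `Apply Config` restarts VPP.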