path: root/tests/vpp/perf/ip4_tunnels/10ge2p1x710-ethip4lispip6-ip4base-ndrpdr.robot
blob: 1dd5604552b36d679950b86c2598189b65018c41 (plain)
# Copyright (c) 2019 Cisco and/or its affiliates.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

*** Settings ***
| Resource | resources/libraries/robot/shared/default.robot
| Resource | resources/libraries/robot/overlay/lisp_static_adjacency.robot
| Variables | resources/test_data/lisp/performance/lisp_static_adjacency.py
|
| Force Tags | 3_NODE_SINGLE_LINK_TOPO | PERFTEST | HW_ENV | NDRPDR
| ... | NIC_Intel-X710 | IP4FWD | ENCAP | LISP | IP6UNRLAY | IP4OVRLAY
| ... | DRV_VFIO_PCI
|
| Suite Setup | Setup suite single link | performance
| Suite Teardown | Tear down suite | performance
| Test Setup | Setup test
| Test Teardown | Tear down test | performance
|
| Test Template | Local Template
|
| Documentation | *RFC2544: Pkt throughput Lisp test cases*
|
| ... | *[Top] Network Topologies:* TG-DUT1-DUT2-TG 3-node circular topology\
| ... | with single links between nodes.
| ... | *[Enc] Packet Encapsulations:* Eth-IPv4-LISP-IPv6 on DUT1-DUT2,\
| ... | Eth-IPv4 on TG-DUTn for IPv4 routing over LISPoIPv6 tunnel.
| ... | *[Cfg] DUT configuration:* DUT1 and DUT2 are configured with IPv4\
| ... | routing and static routes. LISPoIPv6 tunnel is configured between DUT1\
| ... | and DUT2. DUT1 and DUT2 tested with ${nic_name}.\
| ... | *[Ver] TG verification:* TG finds and reports throughput NDR (Non Drop\
| ... | Rate) with zero packet loss tolerance and throughput PDR (Partial Drop\
| ... | Rate) with non-zero packet loss tolerance (LT) expressed in percentage\
| ... | of packets transmitted. NDR and PDR are discovered for different\
| ... | Ethernet L2 frame sizes using MLRsearch library.\
| ... | Test packets are generated by TG on links to DUTs. TG traffic profile
| ... | contains two L3 flow-groups (flow-group per direction, 253 flows per
| ... | flow-group) with all packets containing Ethernet header, IPv4 header
| ... | with IP protocol=61 and static payload. MAC addresses are matching MAC
| ... | addresses of the TG node interfaces.
| ... | *[Ref] Applicable standard specifications:* RFC6830.
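# The NDR/PDR discovery described in the suite documentation above can be
# illustrated with a simplified rate-search sketch. This is NOT the MLRsearch
# library itself, only a hypothetical interval-halving stand-in; the names
# `find_rate` and `drop_ratio` and the toy device model are assumptions for
# illustration:

```python
def find_rate(measure, max_rate, loss_tolerance, precision=1.0):
    """Binary-search the highest offered load (pps) whose measured
    drop ratio stays within loss_tolerance (0.0 for NDR)."""
    lo, hi = 0.0, max_rate
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if measure(mid) <= loss_tolerance:
            lo = mid   # rate is sustainable, search higher
        else:
            hi = mid   # too many drops, search lower
    return lo

# Toy device model (assumed): lossless up to 4.5 Mpps, drops grow linearly above.
def drop_ratio(rate):
    if rate <= 4_500_000:
        return 0.0
    return min(1.0, (rate - 4_500_000) / 50_000_000)

ndr = find_rate(drop_ratio, 10_000_000, 0.0)    # zero packet loss tolerance
pdr = find_rate(drop_ratio, 10_000_000, 0.005)  # 0.5% packet loss tolerance
```

# The real MLRsearch algorithm additionally varies trial duration and reuses
# results between intervals, but the NDR <= PDR relationship above carries over.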

*** Variables ***
| @{plugins_to_enable}= | dpdk_plugin.so
| ${crypto_type}= | ${None}
| ${nic_name}= | Intel-X710
| ${nic_driver}= | vfio-pci
| ${osi_layer}= | L3
| ${overhead}= | ${48}
# Traffic profile:
| ${traffic_profile}= | trex-sl-3n-ethip4-ip4src253

*** Keywords ***
| Local Template
| | [Documentation]
| | ... | [Cfg] DUT runs IPv6 LISP remote static mappings and whitelist filters\
| | ... | config.
| | ... | Each DUT uses ${phy_cores} physical core(s) for worker threads.
| | ... | [Ver] Measure NDR and PDR values using MLRsearch algorithm.\
| |
| | ... | *Arguments:*
| | ... | - frame_size - Framesize in Bytes in integer or string (IMIX_v4_1).
| | ... | Type: integer, string
| | ... | - phy_cores - Number of physical cores. Type: integer
| | ... | - rxq - Number of RX queues, default value: ${None}. Type: integer
| |
| | [Arguments] | ${frame_size} | ${phy_cores} | ${rxq}=${None}
| |
| | Set Test Variable | \${frame_size}
| |
| | Given Set Max Rate And Jumbo
| | And Add worker threads to all DUTs | ${phy_cores} | ${rxq}
| | And Pre-initialize layer driver | ${nic_driver}
| | And Apply startup configuration on all VPP DUTs
| | When Initialize layer driver | ${nic_driver}
| | And Initialize layer interface
| | And Initialize LISP IPv4 over IPv6 forwarding in 3-node circular topology
| | ... | ${dut1_to_dut2_ip4o6} | ${dut1_to_tg_ip4o6} | ${dut2_to_dut1_ip4o6}
| | ... | ${dut2_to_tg_ip4o6} | ${tg_prefix4o6} | ${dut_prefix4o6}
| | And Configure LISP topology in 3-node circular topology
| | ... | ${dut1} | ${dut1_if2} | ${NONE}
| | ... | ${dut2} | ${dut2_if1} | ${NONE}
| | ... | ${duts_locator_set} | ${dut1_ip4o6_eid} | ${dut2_ip4o6_eid}
| | ... | ${dut1_ip4o6_static_adjacency} | ${dut2_ip4o6_static_adjacency}
| | Then Find NDR and PDR intervals using optimized search

*** Test Cases ***
| tc01-64B-1c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 64B | 1C
| | frame_size=${64} | phy_cores=${1}

| tc02-64B-2c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 64B | 2C
| | frame_size=${64} | phy_cores=${2}

| tc03-64B-4c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 64B | 4C
| | frame_size=${64} | phy_cores=${4}

| tc04-1518B-1c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 1518B | 1C
| | frame_size=${1518} | phy_cores=${1}

| tc05-1518B-2c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 1518B | 2C
| | frame_size=${1518} | phy_cores=${2}

| tc06-1518B-4c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 1518B | 4C
| | frame_size=${1518} | phy_cores=${4}

| tc07-9000B-1c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 9000B | 1C
| | frame_size=${9000} | phy_cores=${1}

| tc08-9000B-2c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 9000B | 2C
| | frame_size=${9000} | phy_cores=${2}

| tc09-9000B-4c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | 9000B | 4C
| | frame_size=${9000} | phy_cores=${4}

| tc10-IMIX-1c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | IMIX | 1C
| | frame_size=IMIX_v4_1 | phy_cores=${1}

| tc11-IMIX-2c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | IMIX | 2C
| | frame_size=IMIX_v4_1 | phy_cores=${2}

| tc12-IMIX-4c-ethip4lispip6-ip4base-ndrpdr
| | [Tags] | IMIX | 4C
| | frame_size=IMIX_v4_1 | phy_cores=${4}
        self.crc = crc
        self.type_pairs = type_pairs
        self.depends = [t for t, _ in self.type_pairs]

    def __str__(self):
        return "Union(%s, [%s])" % (
            self.name,
            "], [".join(["%s %s" % (i, j) for i, j in self.type_pairs])
        )

    def has_vla(self):
        return False


class Message(object):
    def __init__(self, logger, definition, json_parser):
        struct_type_class = json_parser.struct_type_class
        field_class = json_parser.field_class
        self.request = None
        self.logger = logger
        m = definition
        logger.debug("Parsing message definition `%s'" % m)
        name = m[0]
        self.name = name
        logger.debug("Message name is `%s'" % name)
        ignore = True
        self.header = None
        self.is_reply = json_parser.is_reply(self.name)
        self.is_event = json_parser.is_event(self.name)
        fields = []
        for header in get_msg_header_defs(struct_type_class, field_class,
                                          json_parser, logger):
            logger.debug("Probing header `%s'" % header.name)
            if header.is_part_of_def(m[1:]):
                self.header = header
                logger.debug("Found header `%s'" % header.name)
                fields.append(field_class(field_name='header',
                                          field_type=self.header))
                ignore = False
                break
        if ignore and not self.is_event and not self.is_reply:
            raise ParseError("While parsing message `%s': could not find all "
                             "common header fields" % name)
        for field in m[1:]:
            if len(field) == 1 and 'crc' in field:
                self.crc = field['crc']
                logger.debug("Found CRC `%s'" % self.crc)
                continue
            else:
                field_type = json_parser.lookup_type_like_id(field[0])
                logger.debug("Parsing message field `%s'" % field)
                l = len(field)
                if any(type(n) is dict for n in field):
                    l -= 1
                if l == 2:
                    if self.header is not None and\
                            self.header.has_field(field[1]):
                        continue
                    p = field_class(field_name=field[1],
                                    field_type=field_type)
                elif l == 3:
                    if field[2] == 0:
                        raise ParseError(
                            "While parsing message `%s': variable length "
                            "array `%s' doesn't have reference to member "
                            "containing the actual length" % (
                                name, field[1]))
                    p = field_class(
                        field_name=field[1],
                        field_type=field_type,
                        array_len=field[2])
                elif l == 4:
                    nelem_field = None
                    for f in fields:
                        if f.name == field[3]:
                            nelem_field = f
                    if nelem_field is None:
                        raise ParseError(
                            "While parsing message `%s': couldn't find "
                            "variable length array `%s' member containing "
                            "the actual length `%s'" % (
                                name, field[1], field[3]))
                    p = field_class(
                        field_name=field[1],
                        field_type=field_type,
                        array_len=field[2],
                        nelem_field=nelem_field)
                else:
                    raise Exception("Don't know how to parse message "
                                    "definition for message `%s': `%s'" %
                                    (m, m[1:]))
                logger.debug("Parsed field `%s'" % p)
                fields.append(p)
        self.fields = fields
        self.depends = [f.type for f in self.fields]
        logger.debug("Parsed message: %s" % self)

    def __str__(self):
        return "Message(%s, [%s], {crc: %s}" % \
            (self.name,
             "], [".join([str(f) for f in self.fields]),
             self.crc)


class StructType (Type, Struct):
    def __init__(self, definition, json_parser, field_class, logger):
        t = definition
        logger.debug("Parsing struct definition `%s'" % t)
        name = t[0]
        fields = []
        for field in t[1:]:
            if len(field) == 1 and 'crc' in field:
                self.crc = field['crc']
                continue
            field_type = json_parser.lookup_type_like_id(field[0])
            logger.debug("Parsing type field `%s'" % field)
            if len(field) == 2:
                p = field_class(field_name=field[1], field_type=field_type)
            elif len(field) == 3:
                if field[2] == 0:
                    raise ParseError("While parsing type `%s': array `%s' has "
                                     "variable length" % (name, field[1]))
                p = field_class(field_name=field[1], field_type=field_type,
                                array_len=field[2])
            elif len(field) == 4:
                nelem_field = None
                for f in fields:
                    if f.name == field[3]:
                        nelem_field = f
                if nelem_field is None:
                    raise ParseError(
                        "While parsing message `%s': couldn't find "
                        "variable length array `%s' member containing "
                        "the actual length `%s'" % (
                            name, field[1], field[3]))
                p = field_class(field_name=field[1], field_type=field_type,
                                array_len=field[2], nelem_field=nelem_field)
            else:
                raise ParseError(
                    "Don't know how to parse field `%s' of type definition "
                    "for type `%s'" % (field, t))
            fields.append(p)
        Type.__init__(self, name)
        Struct.__init__(self, name, fields)

    def __str__(self):
        return "StructType(%s, %s)" % (Type.__str__(self),
                                       Struct.__str__(self))

    def has_field(self, name):
        return name in self.field_names

    def is_part_of_def(self, definition):
        for idx in range(len(self.fields)):
            field = definition[idx]
            p = self.fields[idx]
            if field[1] != p.name:
                return False
            if field[0] != p.type.name:
                raise ParseError(
                    "Unexpected field type `%s' (should be `%s'), "
                    "while parsing msg/def/field `%s/%s/%s'" %
                    (field[0], p.type, p.name, definition, field))
        return True


class JsonParser(object):
    def __init__(self, logger, files, simple_type_class=SimpleType,
                 enum_class=Enum, union_class=Union,
                 struct_type_class=StructType, field_class=Field,
                 message_class=Message, alias_class=Alias):
        self.services = {}
        self.messages = {}
        self.enums = {}
        self.unions = {}
        self.aliases = {}
        self.types = {
            x: simple_type_class(x) for x in [
                'i8', 'i16', 'i32', 'i64',
                'u8', 'u16', 'u32', 'u64',
                'f64', 'bool'
            ]
        }
        self.types['string'] = simple_type_class('vl_api_string_t')
        self.replies = set()
        self.events = set()
        self.simple_type_class = simple_type_class
        self.enum_class = enum_class
        self.union_class = union_class
        self.struct_type_class = struct_type_class
        self.field_class = field_class
        self.alias_class = alias_class
        self.message_class = message_class

        self.exceptions = []
        self.json_files = []
        self.types_by_json = {}
        self.enums_by_json = {}
        self.unions_by_json = {}
        self.aliases_by_json = {}
        self.messages_by_json = {}
        self.logger = logger
        for f in files:
            self.parse_json_file(f)
        self.finalize_parsing()

    def parse_json_file(self, path):
        self.logger.info("Parsing json api file: `%s'" % path)
        self.json_files.append(path)
        self.types_by_json[path] = []
        self.enums_by_json[path] = []
        self.unions_by_json[path] = []
        self.aliases_by_json[path] = []
        self.messages_by_json[path] = {}
        with open(path) as f:
            j = json.load(f)
            for k in j['services']:
                if k in self.services:
                    raise ParseError("Duplicate service `%s'" % k)
                self.services[k] = j['services'][k]
                self.replies.add(self.services[k]["reply"])
                if "events" in self.services[k]:
                    for x in self.services[k]["events"]:
                        self.events.add(x)
            for e in j['enums']:
                name = e[0]
                value_pairs = e[1:-1]
                enumtype = self.types[e[-1]["enumtype"]]
                enum = self.enum_class(name, value_pairs, enumtype)
                self.enums[enum.name] = enum
                self.logger.debug("Parsed enum: %s" % enum)
                self.enums_by_json[path].append(enum)
            exceptions = []
            progress = 0
            last_progress = 0
            while True:
                for u in j['unions']:
                    name = u[0]
                    if name in self.unions:
                        progress = progress + 1
                        continue
                    try:
                        type_pairs = [[self.lookup_type_like_id(t), n]
                                      for t, n in u[1:]]
                        union = self.union_class(name, type_pairs, 0)
                        progress = progress + 1
                    except ParseError as e:
                        exceptions.append(e)
                        continue
                    self.unions[union.name] = union
                    self.logger.debug("Parsed union: %s" % union)
                    self.unions_by_json[path].append(union)
                for name, body in j['aliases'].iteritems():
                    if name in self.aliases:
                        progress = progress + 1
                        continue
                    if 'length' in body:
                        array_len = body['length']
                    else:
                        array_len = None
                    t = self.types[body['type']]
                    alias = self.alias_class(name, t, array_len)
                    self.aliases[name] = alias
                    self.logger.debug("Parsed alias: %s" % alias)
                    self.aliases_by_json[path].append(alias)
                for t in j['types']:
                    if t[0] in self.types:
                        progress = progress + 1
                        continue
                    try:
                        type_ = self.struct_type_class(t, self,
                                                       self.field_class,
                                                       self.logger)
                        if type_.name in self.types:
                            raise ParseError(
                                "Duplicate type `%s'" % type_.name)
                        progress = progress + 1
                    except ParseError as e:
                        exceptions.append(e)
                        continue
                    self.types[type_.name] = type_
                    self.types_by_json[path].append(type_)
                    self.logger.debug("Parsed type: %s" % type_)
                if not exceptions:
                    # finished parsing
                    break
                if progress <= last_progress:
                    # cannot make forward progress
                    self.exceptions.extend(exceptions)
                    break
                exceptions = []
                last_progress = progress
                progress = 0
            prev_length = len(self.messages)
            processed = []
            while True:
                exceptions = []
                for m in j['messages']:
                    if m in processed:
                        continue
                    try:
                        msg = self.message_class(self.logger, m, self)
                        if msg.name in self.messages:
                            raise ParseError(
                                "Duplicate message `%s'" % msg.name)
                    except ParseError as e:
                        exceptions.append(e)
                        continue
                    self.messages[msg.name] = msg
                    self.messages_by_json[path][msg.name] = msg
                    processed.append(m)
                if prev_length == len(self.messages):
                    # cannot make forward progress ...
                    self.exceptions.extend(exceptions)
                    break
                prev_length = len(self.messages)

    def lookup_type_like_id(self, name):
        mundane_name = remove_magic(name)
        if name in self.types:
            return self.types[name]
        elif name in self.enums:
            return self.enums[name]
        elif name in self.unions:
            return self.unions[name]
        elif name in self.aliases:
            return self.aliases[name]
        elif mundane_name in self.types:
            return self.types[mundane_name]
        elif mundane_name in self.enums:
            return self.enums[mundane_name]
        elif mundane_name in self.unions:
            return self.unions[mundane_name]
        elif mundane_name in self.aliases:
            return self.aliases[mundane_name]
        raise ParseError(
            "Could not find type, enum or union by magic name `%s' nor by "
            "mundane name `%s'" % (name, mundane_name))

    def is_reply(self, message):
        return message in self.replies

    def is_event(self, message):
        return message in self.events

    def get_reply(self, message):
        return self.messages[self.services[message]['reply']]

    def finalize_parsing(self):
        if len(self.messages) == 0:
            for e in self.exceptions:
                self.logger.warning(e)
        for jn, j in self.messages_by_json.items():
            remove = []
            for n, m in j.items():
                try:
                    if not m.is_reply and not m.is_event:
                        try:
                            m.reply = self.get_reply(n)
                            if "stream" in self.services[m.name]:
                                m.reply_is_stream = \
                                    self.services[m.name]["stream"]
                            else:
                                m.reply_is_stream = False
                            m.reply.request = m
                        except:
                            raise ParseError(
                                "Cannot find reply to message `%s'" % n)
                except ParseError as e:
                    self.exceptions.append(e)
                    remove.append(n)
            self.messages_by_json[jn] = {
                k: v for k, v in j.items() if k not in remove}
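# The union/type parsing loop above resolves forward references by retrying
# failed definitions until a full pass makes no progress. That retry pattern,
# isolated into a self-contained sketch (names `resolve_all`, `done`, `stuck`
# and the sample dependency map are assumptions, not VPP code):

```python
def resolve_all(defs):
    """defs: mapping name -> list of dependency names.
    Repeatedly 'parse' whatever has all dependencies resolved; stop when
    a full pass makes no forward progress (the rest are unresolvable)."""
    resolved = {}
    pending = dict(defs)
    while pending:
        progress = False
        for name, deps in list(pending.items()):
            if all(d in resolved for d in deps):
                resolved[name] = deps   # parse succeeds this pass
                del pending[name]
                progress = True
        if not progress:
            break                       # cycle or missing dependency
    return resolved, pending

done, stuck = resolve_all({
    "u32": [],
    "ip4_address": ["u32"],
    "fib_path": ["ip4_address"],
    "bad": ["missing"],
})
```

# Like the parser, this trades a worst-case quadratic number of passes for
# not having to topologically sort the definitions up front.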