.. _tested_physical_topologies:

Physical Testbeds
=================

All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` performance tests listed in this report are
executed on physical testbeds built with bare-metal servers hosted by
the :abbr:`LF (Linux Foundation)` FD.io project. Two testbed topologies
are used:

- **3-Node Topology**: Two servers acting as SUTs (Systems Under Test)
  and one server acting as TG (Traffic Generator), all connected in a
  ring topology. Used for executing all of the data plane tests,
  including overlay tunnel and IPSec tests.
- **2-Node Topology**: One server acting as SUT (System Under Test) and
  one server acting as TG (Traffic Generator), both connected in a ring
  topology. Used for executing tests without any overlay tunnel
  encapsulations. Added in CSIT rls18.07.
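
As an illustrative sketch only (this is not CSIT's actual topology
definition format, and the node and link names are hypothetical), the
two ring topologies can be modeled and sanity-checked as follows:

```python
# Hypothetical model of the CSIT ring topologies; names are
# illustrative, not taken from CSIT topology files.

def ring_links(nodes):
    """Return the links of a ring over the given nodes."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)])
            for i in range(len(nodes))]

# 3-Node Topology: TG - SUT1 - SUT2 - TG.
three_node = ring_links(["TG", "SUT1", "SUT2"])

# 2-Node Topology: TG - SUT, with a second link closing the ring.
two_node = ring_links(["TG", "SUT"])

# In a ring, every node terminates exactly two link endpoints.
for topo in (three_node, two_node):
    for node in {n for link in topo for n in link}:
        assert sum(n == node for link in topo for n in link) == 2
```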

Current FD.io production testbeds are built with servers based on two
processor generations of Intel Xeons: Haswell-SP (E5-2699v3) and Skylake
(Platinum 8180). Testbeds built with servers based on Arm processors are
in the process of being added to FD.io production.

SUT and DUT performance depends on the server and processor type; hence
results for testbeds based on different servers must be reported
separately, and compared only where appropriate.

Complete technical specifications of compute servers used in CSIT
physical testbeds are maintained in FD.io CSIT repository:
`FD.io CSIT testbeds - Xeon Skylake, Arm, Atom`_ and
`FD.io CSIT Testbeds - Xeon Haswell`_.

The following sections describe the existing production testbed types.

3-Node Xeon Haswell (3n-hsw)
----------------------------

The 3n-hsw testbed is based on three Cisco UCS-c240m3 servers, each
equipped with two Intel Xeon Haswell-SP E5-2699v3 2.3 GHz 18-core
processors. The physical testbed topology is depicted in the figure
below.

.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/introduction/}}
                \includegraphics[width=0.90\textwidth]{testbed-3n-hsw}
                \label{fig:testbed-3n-hsw}
        \end{figure}

.. only:: html

    .. figure:: testbed-3n-hsw.svg
        :alt: testbed-3n-hsw
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: VIC 1385 2p40GE Cisco.
#. NIC-2: x520 2p10GE Intel.
#. NIC-3: empty.
#. NIC-4: xl710-QDA2 2p40GE Intel.
#. NIC-5: x710-DA2 2p10GE Intel.
#. NIC-6: QAT 8950 50G (Walnut Hill) Intel.

TG servers run the T-Rex traffic generator application and are populated
with the following NIC models:

#. NIC-1: xl710-QDA2 2p40GE Intel.
#. NIC-2: x710-DA2 2p10GE Intel.
#. NIC-3: empty.
#. NIC-4: xl710-QDA2 2p40GE Intel.
#. NIC-5: x710-DA2 2p10GE Intel.
#. NIC-6: x710-DA2 2p10GE Intel. (For self-tests.)

All Intel Xeon Haswell servers run with Intel Hyper-Threading disabled,
so the number of logical cores exposed to Linux matches the number of
physical cores: 18 per processor socket.
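
The resulting core counts can be sanity-checked with simple arithmetic,
using the figures from the paragraph above:

```python
# Haswell testbed CPU arithmetic: Hyper-Threading disabled means
# one hardware thread per physical core.
sockets = 2
cores_per_socket = 18
threads_per_core = 1  # Hyper-Threading disabled

logical_cores = sockets * cores_per_socket * threads_per_core
assert logical_cores == 36  # equals the physical core count, 2 x 18
```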

A total of three 3n-hsw testbeds are in operation in FD.io labs.

3-Node Xeon Skylake (3n-skx)
----------------------------

The 3n-skx testbed is based on three SuperMicro SYS-7049GP-TRT servers,
each equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core
processors. The physical testbed topology is depicted in the figure
below.

.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/introduction/}}
                \includegraphics[width=0.90\textwidth]{testbed-3n-skx}
                \label{fig:testbed-3n-skx}
        \end{figure}

.. only:: html

    .. figure:: testbed-3n-skx.svg
        :alt: testbed-3n-skx
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run the T-Rex traffic generator application and are populated
with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux: 56 logical cores
for the 28 physical cores per processor socket.
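
As a simple sanity check on the figures above:

```python
# Skylake testbed CPU arithmetic: Hyper-Threading enabled means
# two hardware threads per physical core.
sockets = 2
cores_per_socket = 28
threads_per_core = 2  # Hyper-Threading enabled

logical_per_socket = cores_per_socket * threads_per_core
assert logical_per_socket == 56
assert sockets * logical_per_socket == 112  # logical cores per server
```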

A total of two 3n-skx testbeds are in operation in FD.io labs.

2-Node Xeon Skylake (2n-skx)
----------------------------

The 2n-skx testbed is based on two SuperMicro SYS-7049GP-TRT servers,
each equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core
processors. The physical testbed topology is depicted in the figure
below.

.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/introduction/}}
                \includegraphics[width=0.90\textwidth]{testbed-2n-skx}
                \label{fig:testbed-2n-skx}
        \end{figure}

.. only:: html

    .. figure:: testbed-2n-skx.svg
        :alt: testbed-2n-skx
        :align: center

SUT servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: mcx556a-edat ConnectX5 2p100GE Mellanox. (Not used yet.)
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run the T-Rex traffic generator application and are populated
with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: mcx556a-edat ConnectX5 2p100GE Mellanox. (Not used yet.)
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux: 56 logical cores
for the 28 physical cores per processor socket.

A total of four 2n-skx testbeds are in operation in FD.io labs.

2-Node Atom Denverton (2n-dnv)
------------------------------

The 2n-dnv testbed is based on one Intel S2600WFT server equipped with
two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core processors and one
SuperMicro SYS-E300-9A server equipped with one Intel Atom C3858
2.00 GHz 12-core processor. The physical testbed topology is depicted
in the figure below.

.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/introduction/}}
                \includegraphics[width=0.90\textwidth]{testbed-2n-dnv}
                \label{fig:testbed-2n-dnv}
        \end{figure}

.. only:: html

    .. figure:: testbed-2n-dnv.svg
        :alt: testbed-2n-dnv
        :align: center

The SUT server has four internal 10GE NIC ports:

#. P-1: x553 copper port.
#. P-2: x553 copper port.
#. P-3: x553 fiber port.
#. P-4: x553 fiber port.

The TG server runs the T-Rex software traffic generator and is populated
with the following NIC models:

#. NIC-1: x550-T2 2p10GE Intel.
#. NIC-2: x550-T2 2p10GE Intel.
#. NIC-3: x520-DA2 2p10GE Intel.
#. NIC-4: x520-DA2 2p10GE Intel.

The 2n-dnv testbed is in operation in Intel SH labs.