author    Peter Mikus <pmikus@cisco.com>    2019-02-26 12:53:41 +0000
committer Tibor Frank <tifrank@cisco.com>   2019-02-26 13:28:33 +0000
commit    935bc3285b5a2174ba2846f4ab3f3e9e14e1cf17 (patch)
tree      2341e5703f430afc2ddde622fd08f0b1beea5b60 /docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md
parent    2fa35251cfc73e148bb4b21ba3928fac3a1a2b4e (diff)
FIX: Hardware sections - report
Change-Id: I4a64f87fe43eb94841c55d63a7befff0d2c2a477
Signed-off-by: Peter Mikus <pmikus@cisco.com>
Diffstat (limited to 'docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md')
-rw-r--r--  docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md  190
1 file changed, 157 insertions(+), 33 deletions(-)
diff --git a/docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md b/docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md
index 1875c31b4a..3227ee09d4 100644
--- a/docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md
+++ b/docs/lab/Testbeds_Xeon_Skx_Arm_Atom.md
@@ -1,8 +1,12 @@
## FD.io CSIT testbeds - Xeon Skylake, Arm, Atom
-This is a low-level design implemented as an extensions of FD.io CSIT lab to accommodate the new Intel Xeon Skylake, Arm AArch64 and Atom devices. Content has been copied from [FD.io CSIT testbeds wiki page: Xeon_skx, Arm, Atom](https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom).
+
+This is a low-level design implemented as an extension of the FD.io CSIT lab
+to accommodate the new Intel Xeon Skylake, Arm AArch64 and Atom devices.
## Testbeds Overview
+
### Testbeds Type Breakdown
+
```
#. CSIT_tb. Purpose. SUT. TG. #tb. #SUTs. #TGs. #skx_node.
1. 1-node Xeon. func. skx. n/a. 2. 2. 0. 2.
@@ -17,31 +21,57 @@ This is a low-level design implemented as an extensions of FD.io CSIT lab to acc
```
### 1-Node Xeon Testbeds
-One 1-node Xeon testbed for VPP_Device tests is built using one SUT (Type-6 server), with NIC ports connected back-to-back.
+
+One 1-node Xeon testbed for VPP_Device tests is built using one SUT (Type-6
+server), with NIC ports connected back-to-back.
### 2-Node Xeon Testbeds
-Four 2-node Xeon testbeds (are expected to be built|are built), with each testbed using one SUTs (Type-1 server) and one TG (Type-2 server) connected back-to-back. NIC cards placement into slots and NIC ports connectivity is following the testbed specification included in next sections.
+
+Four 2-node Xeon testbeds (are expected to be built|are built), with each
+testbed using one SUT (Type-1 server) and one TG (Type-2 server) connected
+back-to-back. NIC card placement into slots and NIC port connectivity follow
+the testbed specification included in the next sections.
### 3-Node Xeon Testbeds
-Two 3-node Xeon testbeds (are expected to be built|are built), with each testbed using two SUTs (Type-1 server) and one TG (Type-2 server) connected in full-mesh triangle. NIC cards placement into slots and NIC ports connectivity is following the testbed specification included in next sections.
+
+Two 3-node Xeon testbeds (are expected to be built|are built), with each
+testbed using two SUTs (Type-1 servers) and one TG (Type-2 server) connected
+in a full-mesh triangle. NIC card placement into slots and NIC port
+connectivity follow the testbed specification included in the next sections.
### 1-Node Arm Marvell ThunderX2 Testbed
-One 1-node ThunderX2 testbed for VPP_Device tests is expected to be built using one SUT (Type-9 server), with NIC ports connected back-to-back.
+
+One 1-node ThunderX2 testbed for VPP_Device tests is expected to be built using
+one SUT (Type-9 server), with NIC ports connected back-to-back.
### 3-Node Arm Huawei TaiShan Testbed
-One 3-node TaiShan testbed is built, with each testbed using two SUTs (Type-3 server) and one TG (Type-2 server) connected in full-mesh triangle. NIC cards placement into slots and NIC ports connectivity is following the testbed specification included in next sections.
+
+One 3-node TaiShan testbed is built, using two SUTs (Type-3 servers) and one
+TG (Type-2 server) connected in a full-mesh triangle. NIC card placement into
+slots and NIC port connectivity follow the testbed specification included in
+the next sections.
### 3-Node Arm Marvell MACCHIATObin Testbed
-One 3-node MACCHIATObin testbeds is built, with each testbed using two SUTs (Type-4 server) and one TG (Type-2 server) connected in full-mesh triangle. Built-in NIC ports connectivity is following the testbed specification included in next sections.
+
+One 3-node MACCHIATObin testbed is built, using two SUTs (Type-4 servers) and
+one TG (Type-2 server) connected in a full-mesh triangle. Built-in NIC port
+connectivity follows the testbed specification included in the next sections.
### TCP/IP and L47 Testbeds
+
One 2-node Ixia PS One and Xeon server testbed is used for TCP/IP host stack tests.
### Atom Testbeds
-One 3-node Atom (Netgate based) testbed is built consisting of three SUTs (Type-5 Netgate device.) NIC cards placement into slots and NIC ports connectivity is following the testbed specification included in the next section.
+One 3-node Atom (Netgate-based) testbed is built, consisting of three SUTs
+(Type-5 Netgate devices). NIC card placement into slots and NIC port
+connectivity follow the testbed specification included in the next section.
## Inventory
+
### Appliances
+
```
1. Ixia PerfectStorm One Appliance
- 1 * PS10GE4NG
@@ -56,6 +86,7 @@ One 3-node Atom (Netgate based) testbed is built consisting of three SUTs (Type-
```
### Arm Servers
+
```
1. Arm Cortex A-72 servers
- 1 * ThunderX2
@@ -95,6 +126,7 @@ Huawei TaiShan 2280 | CSIT Performance | Up, Manual perf experiments | s17-t33-s
| | VPP dev debug | Up, Manual VPP Device experiments, Full Skx TG too much for it - suggest to use LXC/DRC TG(!) | fdio-mcbin3 | 10.30.51.43 | 10.30.51.49, then connect to /dev/ttyUSB2 | 4 | 16GB | 2x10GbE SFP+ | Ubuntu 16.04.5
### Xeon and Atom Servers
+
```
1. Intel Xeon servers:
- 20 * SuperMicro SYS-7049GP-TRT with Xeon Skylake processors.
@@ -112,6 +144,7 @@ Huawei TaiShan 2280 | CSIT Performance | Up, Manual perf experiments | s17-t33-s
```
### Network Interface Cards
+
```
1. 10GE NICs
- 14 * Intel® Ethernet Converged Network Adapter X710-DA4
@@ -126,7 +159,9 @@ Huawei TaiShan 2280 | CSIT Performance | Up, Manual perf experiments | s17-t33-s
```
### Pluggables and Cables
+
Pluggables:
+
```
1. 10GE SFP+
- 16 * Intel E10GSFPSR Ethernet SFP+ SR Optics
@@ -140,6 +175,7 @@ Pluggables:
```
Standalone cables:
+
```
1. 10GE
- None
@@ -152,25 +188,36 @@ Standalone cables:
```
### Other Network Cards
+
Any QATs?
## Installation Status
-Lab installation status is tracked by LF IT team in [FD.io Server Status](https://docs.google.com/document/d/16TdvGC73wuNQjkP355MTXRckv7yqnaxJkWwX7G7izEo/edit?ts=5b10411b#heading=h.dprb64shku8u).
+
+Lab installation status is tracked by the LF IT team in
+[FD.io Server Status](https://docs.google.com/document/d/16TdvGC73wuNQjkP355MTXRckv7yqnaxJkWwX7G7izEo/edit?ts=5b10411b#heading=h.dprb64shku8u).
## Server/Device Management and Naming
+
### Server Management Requirements
+
A total of 20 SM SYS-7049GP-TRT servers are made available for the FD.io CSIT testbed.
-For management purposes, each server must have following two ports connected to the management network:
+For management purposes, each server must have the following two ports
+connected to the management network (an IPMI reachability check is sketched
+after the list):
+
```
- 1GE IPMI port
- IPMI - Intelligent Platform Management Interface.
- - Required for access to embedded server management with WebUI, CLI, SNMPv3, IPMIv2.0, for firmware (BIOS) and OS updates.
+ - Required for access to embedded server management with WebUI, CLI, SNMPv3,
+ IPMIv2.0, for firmware (BIOS) and OS updates.
- 1GE/10GE management port
- hostOS management port for general system management.
```
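
As a minimal sketch (assuming `ipmitool` is installed on a VPN-connected host
and valid IPMI credentials are at hand; the address is the s1-t11-sut1 LOM IP
from the firmware inventory at the end of this page), IPMI reachability can be
verified over the LOM VLAN:

```
# Query power/chassis state via IPMI-over-LAN (IPMIv2.0 lanplus interface).
ipmitool -I lanplus -H 10.30.50.47 -U <user> -P <password> chassis status

# Attach to the Serial-over-LAN console, e.g. to watch BIOS/OS boot.
ipmitool -I lanplus -H 10.30.50.47 -U <user> -P <password> sol activate
```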
### Server and Port Naming Convention
-Following naming convention is used within this page to specify physical connectivity and wiring across defined CSIT testbeds:
+
+The following naming convention is used within this page to specify physical
+connectivity and wiring across the defined CSIT testbeds (a validation sketch
+follows the list):
+
```
- testbedname: testbedN.
- hostname:
@@ -190,7 +237,10 @@ Following naming convention is used within this page to specify physical connect
```
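
A minimal bash sketch (the sample name is taken from the wiring lists below)
of validating that a port identifier matches the convention, i.e. server (sN),
testbed (tN), role (sutN/tgN), NIC card (cN) and port (pN):

```
# Validate a CSIT port name such as "s27-t13-sut1-c6/p2". The optional
# leading sN and doubled tN segments cover the shared-TG form "t33t34-tg1".
port="s27-t13-sut1-c6/p2"
if [[ "${port}" =~ ^(s[0-9]+-)?t[0-9]+(t[0-9]+)?-(sut|tg)[0-9]+-c[0-9]+/p[0-9]+$ ]]; then
    echo "valid: ${port}"
else
    echo "invalid: ${port}"
fi
```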
### Server Management - Addressing
-Each server has a LOM (Lights-Out-Management e.g. SM IPMI) and a Management port, which are connected to two different VLANs.
+
+Each server has a LOM (Lights-Out Management, e.g. SM IPMI) port and a
+Management port, which are connected to two different VLANs.
+
```
1. LOM (IPMI) VLAN:
- Subnet: 10.30.50.0/24
@@ -205,6 +255,7 @@ Each server has a LOM (Lights-Out-Management e.g. SM IPMI) and a Management port
- DNS1: 199.204.44.24
- DNS2: 199.204.47.54
```
+
To access these hosts, a VPN connection is required.
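
A hedged example (shell; the interface name, host address and gateway are
hypothetical, the 10.30.51.0/24 subnet is inferred from management addresses
elsewhere on this page, and the DNS servers come from the listing above) of
statically configuring a Management VLAN address:

```
# Hypothetical static address for a host on the Management VLAN.
ip addr add 10.30.51.47/24 dev eno1
ip route add default via 10.30.51.1

# DNS servers from the listing above.
echo "nameserver 199.204.44.24" >> /etc/resolv.conf
echo "nameserver 199.204.47.54" >> /etc/resolv.conf
```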
### LOM (IPMI) VLAN IP Addresses
@@ -214,9 +265,14 @@ To access these hosts, an VPN connection is required.
..
## Testbeds Specification - Target Build
+
### Server/Ports Naming, NIC Placement
+
#### 1-Node Xeon
-Each server in 1-node Xeon topology has its NIC cards placed, and NIC cards and ports indexed per following specification:
+
+Each server in the 1-node Xeon topology has its NIC cards placed, and NIC
+cards and ports indexed, per the following specification (a placement check
+is sketched after the list):
+
```
- Server1 [Type-6]:
- testbedname: testbed11.
@@ -249,7 +305,10 @@ Each server in 1-node Xeon topology has its NIC cards placed, and NIC cards and
```
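
A minimal sketch (shell; exact devices vary per server, and 86:00.0 is just
one of the PCIe addresses used in the NICs configuration listings later on
this page) of verifying NIC placement from the host OS:

```
# List Ethernet NICs with their PCIe addresses to confirm slot placement.
lspci -nn | grep -i ethernet

# Inspect one expected device, e.g. a NIC at PCIe address 86:00.0.
lspci -vs 86:00.0
```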
#### 2-Node Xeon
-Each server in 2-node Xeon topology has its NIC cards placed, and NIC cards and ports indexed per following specification:
+
+Each server in the 2-node Xeon topology has its NIC cards placed, and NIC
+cards and ports indexed, per the following specification:
+
```
- Server3 [Type-1]:
- testbedname: testbed21.
@@ -366,7 +425,10 @@ Each server in 2-node Xeon topology has its NIC cards placed, and NIC cards and
```
#### 3-Node Xeon
-Each server in 3-node Xeon topology has its NIC cards placed, and NIC cards and ports indexed per following specification:
+
+Each server in the 3-node Xeon topology has its NIC cards placed, and NIC
+cards and ports indexed, per the following specification:
+
```
- Server11 [Type-1]:
- testbedname: testbed31.
@@ -455,6 +517,7 @@ Each server in 3-node Xeon topology has its NIC cards placed, and NIC cards and
```
#### 1-Node Arm
+
```
- Server17 [Type-3]:
- testbedname: testbed13.
@@ -471,7 +534,9 @@ Each server in 3-node Xeon topology has its NIC cards placed, and NIC cards and
```
#### 3-Node Arm
+
Note: Server19 (TG) is shared between testbed33 & testbed34.
+
```
- Server17 [Type-3]:
- testbedname: testbed33.
@@ -528,7 +593,10 @@ Note: Server19 (TG) is shared between testbed33 & testbed34
```
#### TCP/IP and L47
-Each server (appliance) in 2-node TCP/IP topology has its NIC cards placed, and NIC cards and ports indexed per following specification:
+
+Each server (appliance) in the 2-node TCP/IP topology has its NIC cards
+placed, and NIC cards and ports indexed, per the following specification:
+
```
- Server25 [Type-8]:
- testbedname: testbed25.
@@ -553,7 +621,9 @@ Each server (appliance) in 2-node TCP/IP topology has its NIC cards placed, and
```
#### 3-Node Atom
+
Note: There is no IPMI. Serial console is accessible via VIRL2 and VIRL3 USB.
+
```
- Server22 [Type-5]:
- testbedname: testbed35.
@@ -585,8 +655,12 @@ Note: There is no IPMI. Serial console is accessible via VIRL2 and VIRL3 USB.
```
### Physical Connectivity within Testbeds
+
#### 1-Node Xeon
-Two 1-Node testbeds are constructed by connecting 2 Xeon servers using below specification:
+
+Two 1-Node testbeds are constructed by connecting 2 Xeon servers per the
+specification below:
+
```
- testbed11:
- ring1 10GE-ports x710-4p10GE:
@@ -609,7 +683,10 @@ Two 1-Node testbeds are constructed by connecting 2 Xeon servers using below spe
```
#### 2-Node Xeon
-Four 2-Node testbeds are constructed by connecting 8 Xeon servers using below specification:
+
+Four 2-Node testbeds are constructed by connecting 8 Xeon servers per the
+specification below (a cabling check is sketched after the list):
+
```
- testbed21:
- ring1 10GE-ports x710-4p10GE on SUT:
@@ -678,7 +755,10 @@ Four 2-Node testbeds are constructed by connecting 8 Xeon servers using below sp
```
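
A hedged sketch (shell; the interface name is hypothetical, and `lldpcli`
assumes lldpd is installed and running on both link ends) for verifying that
the physical cabling matches the rings above:

```
# Blink a port LED for 10 seconds to physically identify a cabled port.
ethtool -p enp134s0f0 10

# Cross-check the peer system/port on each link via LLDP.
lldpcli show neighbors
```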
#### 3-Node Xeon
-Two 3-Node testbeds are constructed by connecting 6 Xeon servers using below specification:
+
+Two 3-Node testbeds are constructed by connecting 6 Xeon servers per the
+specification below:
+
```
- testbed31:
- ring1 10GE-ports x710-4p10GE on SUTs:
@@ -724,7 +804,10 @@ Two 3-Node testbeds are constructed by connecting 6 Xeon servers using below spe
#### 1-Node Arm
-One 1-Node testbed is constructed by connecting 1 Arm server using below specification:
+
+One 1-Node testbed is constructed by connecting 1 Arm server per the
+specification below:
+
```
- testbed13:
- ring1 40GE-ports XL710-QDA2-2p40GE on SUTs:
@@ -735,9 +818,11 @@ One 1-Node testbed is constructed by connecting 1 Arm server using below specifi
- s27-t13-sut1-c6/p2 - s27-t13-sut1-c1/p1.
```
-
#### 3-Node Arm
-One 3-Node testbed is constructed by connecting 2 TaiShan servers and one Xeon server using below specification:
+
+One 3-Node testbed is constructed by connecting 2 TaiShan servers and one
+Xeon server per the specification below:
+
```
- testbed33:
- ring1 10GE-ports 82599-2p10GE on SUTs:
@@ -749,7 +834,10 @@ One 3-Node testbed is constructed by connecting 2 TaiShan servers and one Xeon s
- t33-sut1-c4/p1 - t33-sut2-c4/p2.
- t33-sut2-c4/p1 - t33t34-tg1-c4/p1.
```
-One 3-Node testbed is constructed by connecting 2 MACCHIATObin servers and one Xeon server using below specification:
+
+One 3-Node testbed is constructed by connecting 2 MACCHIATObin servers and
+one Xeon server per the specification below:
+
```
- testbed34:
- ring1 10GE-ports Marvell on SUTs:
@@ -759,7 +847,10 @@ One 3-Node testbed is constructed by connecting 2 MACCHIATObin servers and one X
```
#### TCP/IP and L47
-One 2-Node TCP/IP testbed is constructed by connecting Ixia PSOne and 1 Xeon server using below specification:
+
+One 2-Node TCP/IP testbed is constructed by connecting the Ixia PSOne and 1
+Xeon server per the specification below:
+
```
- testbed25:
- link1 10GE-port x710-4p10GE on SUT:
@@ -773,13 +864,19 @@ One 2-Node TCP/IP testbed is constructed by connecting Ixia PSOne and 1 Xeon ser
```
#### 3-Node Atom
+
..
## Server Specification
+
### Hardware Configuration
-The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware server configurations:
+
+The new FD.io CSIT-CPL lab (is expected to contain|contains) the following
+hardware server configurations:
+
```
-1. Type-1: Purpose - (Intel Xeon Processor) SUT for SW Data Plane Workload i.e. VPP, testpmd.
+1. Type-1: Purpose - (Intel Xeon Processor) SUT for SW Data Plane Workload i.e.
+ VPP, testpmd.
- Quantity: TBD based on testbed allocation.
- Physical connectivity:
- IPMI and host management ports.
@@ -819,7 +916,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- PCIe Slot6 86:00.xx: empty.
- PCIe Slot8 af:00.xx: empty.
- PCIe Slot10 d8:00.xx: x710-4p10GE Intel.
-3. Type-3: Purpose - (Arm hip07-d05 Processor) SUT for SW Data Plane Workload i.e. VPP, testpmd.
+3. Type-3: Purpose - (Arm hip07-d05 Processor) SUT for SW Data Plane Workload
+ i.e. VPP, testpmd.
- Quantity: 2
- Physical connectivity:
- IPMI(?) and host management ports.
@@ -832,7 +930,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- NICs configuration:
- PCIe Slot4 e9:00.xx: connectx4-2p25GE Mellanox.
- PCIe Slot6 11:00.xx: 82599-2p10GE Intel.
-4. Type-4: Purpose - (Arm Armada 8040 Processor) SUT for SW Data Plane Workload i.e. VPP, testpmd.
+4. Type-4: Purpose - (Arm Armada 8040 Processor) SUT for SW Data Plane Workload
+ i.e. VPP, testpmd.
- Quantity: 3
- Physical connectivity:
- Host management ports.
@@ -844,7 +943,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- Disks: 1 * 128GB(?) SATA SSD
- NICs configuration:
- pp2-2p10GE Marvell (on-chip Ethernet ports; marvell plugin in VPP)
-5. Type-5: Purpose - (Intel Atom Processor) SUT for SW Data Plane Workload i.e. VPP, testpmd.
+5. Type-5: Purpose - (Intel Atom Processor) SUT for SW Data Plane Workload i.e.
+ VPP, testpmd.
- Quantity: TBD based on testbed allocation.
- Physical connectivity:
- Management: serial port (USB) for console
@@ -877,7 +977,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- PCIe Slot6 86:00.xx: empty.
- PCIe Slot8 af:00.xx: empty.
- PCIe Slot10 d8:00.xx: empty.
-7. Type-7: Purpose - Ixia PerfectStorm One Appliance TG for TCP/IP performance tests.
+7. Type-7: Purpose - Ixia PerfectStorm One Appliance TG for TCP/IP performance
+ tests.
- Quantity: 1.
- Physical connectivity:
- Host management interface: 10/100/1000-BaseT.
@@ -916,7 +1017,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- PCIe Slot6 86:00.xx: empty.
- PCIe Slot8 af:00.xx: empty.
- PCIe Slot10 d8:00.xx: empty.
-9. Type-9: Purpose - (Marvell ThunderX2 Processor) SUT for VPP_Device functional tests.
+9. Type-9: Purpose - (Marvell ThunderX2 Processor) SUT for VPP_Device functional
+ tests.
- Quantity: 1
- Physical connectivity:
- IPMI and host management ports.
@@ -933,7 +1035,8 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
- PCIe Slot3 08:00.xx: XL710-QDA2.
- Numa1:
- PCIe Slot6 85:00.xx: XL710-QDA2.
-10. Type-10: Purpose - (Intel Atom C3000 Processor) SUT for SW Data Plane Workload i.e. VPP, testpmd.
+10. Type-10: Purpose - (Intel Atom C3000 Processor) SUT for SW Data Plane
+ Workload i.e. VPP, testpmd.
- Quantity: 4
- Physical connectivity:
- IPMI and host management ports.
@@ -950,7 +1053,9 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
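
A minimal sketch (shell; the expected values depend on which server type is
being checked) of confirming that a host matches its type specification:

```
# CPU model, sockets, cores and threads.
lscpu

# Installed memory size and DIMM population.
free -h
dmidecode -t memory | grep -E "Size|Locator"

# NIC models and PCIe addresses, to compare against the NICs configuration.
lspci -nn | grep -i ethernet
```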
### Xeon Skx Server BIOS Configuration
+
#### Boot Feature
+
```
| Quiet Boot [Enabled] |Boot option |
| | |
@@ -970,6 +1075,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
#### CPU Configuration
+
```
| Processor Configuration |Enables Hyper Threading |
| -------------------------------------------------- |(Software Method to |
@@ -1004,6 +1110,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
##### Advanced Power Management Configuration
+
```
| Advanced Power Management Configuration |Switch CPU Power Management |
| -------------------------------------------------- |profile |
@@ -1018,6 +1125,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### CPU P State Control
+
```
| CPU P State Control |Enable/Disable EIST |
| |(P-States) |
@@ -1026,6 +1134,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
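
A hedged check (shell, assuming the intel_pstate driver is in use on these
Xeon hosts) of how the EIST/P-state BIOS setting is reflected in Linux:

```
# Active frequency-scaling driver.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Turbo state under intel_pstate (1 = turbo disabled).
cat /sys/devices/system/cpu/intel_pstate/no_turbo
```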
###### Hardware PM State Control
+
```
| Hardware PM State Control |Disable: Hardware chooses a |
| |P-state based on OS Request |
@@ -1039,6 +1148,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### CPU C State Control
+
```
| CPU C State Control |Autonomous Core C-State |
| |Control |
@@ -1048,6 +1158,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
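
A quick way (shell; sketch only) to see which C-states the BIOS exposes to
the OS; with C-states disabled, typically only POLL and C1 remain:

```
# Per-CPU idle states as seen by the cpuidle subsystem.
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
```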
###### Package C State Control
+
```
| Package C State Control |Package C State limit |
| | |
@@ -1055,6 +1166,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### CPU T State Control
+
```
| CPU T State Control |Enable/Disable Software |
| |Controlled T-States |
@@ -1062,6 +1174,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
##### Chipset Configuration
+
```
| WARNING: Setting wrong values in below sections may cause |North Bridge Parameters |
| system to malfunction. | |
@@ -1070,6 +1183,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### North Bridge
+
```
|> UPI Configuration |Displays and provides |
|> Memory Configuration |option to change the UPI |
@@ -1077,6 +1191,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### UPI Configuration
+
```
| UPI Configuration |Choose Topology Precedence |
| -------------------------------------------------- |to degrade features if |
@@ -1101,6 +1216,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### Memory Configuration
+
```
| |POR - Enforces Plan Of |
| -------------------------------------------------- |Record restrictions for |
@@ -1121,6 +1237,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### IIO Configuration
+
```
| IIO Configuration |Expose IIO DFX devices and |
| -------------------------------------------------- |other CPU devices like PMON |
@@ -1138,6 +1255,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### CPU1 Configuration
+
```
| IOU0 (IIO PCIe Br1) [Auto] |Selects PCIe port |
| IOU1 (IIO PCIe Br2) [Auto] |Bifurcation for selected |
@@ -1148,6 +1266,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
###### CPU2 Configuration
+
```
| IOU0 (IIO PCIe Br1) [Auto] |Selects PCIe port |
| IOU1 (IIO PCIe Br2) [Auto] |Bifurcation for selected |
@@ -1158,6 +1277,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
##### South Bridge
+
```
| |Enables Legacy USB support. |
| USB Module Version 17 |AUTO option disables legacy |
@@ -1177,6 +1297,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
#### PCIe/PCI/PnP Configuration
+
```
| PCI Bus Driver Version A5.01.12 |Enables or Disables 64bit |
| |capable Devices to be |
@@ -1204,6 +1325,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
#### ACPI Settings
+
```
| ACPI Settings |Enable or Disable Non |
| |uniform Memory Access |
@@ -1214,6 +1336,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
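
With NUMA enabled in the ACPI settings, both sockets should be visible as
NUMA nodes from the OS; a minimal check (shell, `numactl` may need
installing):

```
# NUMA topology as seen by the OS (two nodes expected on 2-socket servers).
numactl --hardware
lscpu | grep -i numa
```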
#### DMIDECODE
+
```
# dmidecode 3.1
Getting SMBIOS data from sysfs.
@@ -1414,6 +1537,7 @@ The new FD.io CSIT-CPL lab (is expected to contain|contains) following hardware
```
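
The listing above was produced by `dmidecode`; a typical invocation (run as
root) to pull the hardware inventory tables from SMBIOS:

```
# Collect system, baseboard, processor and memory inventory from SMBIOS.
dmidecode -t system -t baseboard -t processor -t memory
```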
### Xeon Skx Server Firmware Inventory
+
```
Host. IPMI IP. BIOS. CPLD. Aptio SU. CPU Microcode. PCI Bus. ME Operation FW. X710 Firmware. XXV710 Firmware. i40e.
s1-t11-sut1. 10.30.50.47. 2.1. 03.B1.03. 2.19.1268. 02000043. A5.01.12. 4.0.4.294. 6.01 0x80003554 1.1747.0. 6.01 0x80003554 1.1747.0. 2.1.14-k.