author     pmikus <peter.mikus@protonmail.ch>    2023-03-23 12:25:34 +0000
committer  Peter Mikus <peter.mikus@protonmail.ch>    2023-03-23 13:10:14 +0000
commit     de72298ead9d9564f7c2b9647226f48c47e9e0bf (patch)
tree       b6c6e5bc2a0ca428bb325b145e12e1517dc38978 /docs/content/infrastructure
parent     7df3ef0a7b23614ef3329061be184cf25f8fa21f (diff)
feat(docs): Add missing infra sections
Signed-off-by: pmikus <peter.mikus@protonmail.ch>
Change-Id: I46008fa035418fb4bc0445b43403847a1e6a288e
Diffstat (limited to 'docs/content/infrastructure')
-rw-r--r--  docs/content/infrastructure/fdio_csit_logical_topologies.md              138
-rw-r--r--  docs/content/infrastructure/fdio_csit_testbed_specifications.md           20
-rw-r--r--  docs/content/infrastructure/fdio_csit_testbed_versioning.md               121
-rw-r--r--  docs/content/infrastructure/fdio_dc_vexxhost_inventory.md                 3
-rw-r--r--  docs/content/infrastructure/testbed_configuration/_index.md               1
-rw-r--r--  docs/content/infrastructure/testbed_configuration/ami_alt_hw_bios_cfg.md          12
-rw-r--r--  docs/content/infrastructure/testbed_configuration/gigabyte_tx2_hw_bios_cfg.md     12
-rw-r--r--  docs/content/infrastructure/testbed_configuration/huawei_tsh_hw_bios_cfg.md       12
-rw-r--r--  docs/content/infrastructure/testbed_configuration/sm_clx_hw_bios_cfg.md           6
-rw-r--r--  docs/content/infrastructure/testbed_configuration/sm_icx_hw_bios_cfg.md           7
-rw-r--r--  docs/content/infrastructure/testbed_configuration/sm_spr_hw_bios_cfg.md           6
-rw-r--r--  docs/content/infrastructure/testbed_configuration/sm_zn2_hw_bios_cfg.md           6
12 files changed, 339 insertions, 5 deletions
diff --git a/docs/content/infrastructure/fdio_csit_logical_topologies.md b/docs/content/infrastructure/fdio_csit_logical_topologies.md
new file mode 100644
index 0000000000..5dd323d30c
--- /dev/null
+++ b/docs/content/infrastructure/fdio_csit_logical_topologies.md
@@ -0,0 +1,138 @@
+---
+title: "FD.io CSIT Logical Topologies"
+weight: 4
+---
+
+# FD.io CSIT Logical Topologies
+
+CSIT VPP performance tests are executed on physical testbeds. Based on the
+packet path through server SUTs, three distinct logical topology types are used
+for VPP DUT data plane testing:
+
+1. NIC-to-NIC switching topologies.
+2. VM service switching topologies.
+3. Container service switching topologies.
+
+## NIC-to-NIC Switching
+
+The simplest logical topology for a software data plane application like
+VPP is NIC-to-NIC switching. Tested topologies for 2-Node and 3-Node
+testbeds are shown in the figures below.
+
+{{< figure src="/cdocs/logical-2n-nic2nic.svg" >}}
+
+{{< figure src="/cdocs/logical-3n-nic2nic.svg" >}}
+
+Server Systems Under Test (SUT) run the VPP application in Linux user-mode
+as the Device Under Test (DUT). The server Traffic Generator (TG) runs the
+T-Rex application. Physical connectivity between SUTs and TG is provided
+using different drivers and NIC models that need to be tested for performance
+(packet/bandwidth throughput and latency).
+
+From SUT and DUT perspectives, all performance tests involve forwarding
+packets between two (or more) physical Ethernet ports (10GE, 25GE, 40GE,
+100GE). In most cases both physical ports on the SUT are located on the same
+NIC. The only exceptions are link bonding and 100GE tests. In the latter
+case only one port per NIC can be driven at linerate due to PCIe Gen3
+x16 slot bandwidth limitations. 100GE NICs are not supported in PCIe Gen3
+x8 slots.
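+
+As a rough illustration of the x16 slot limit (a back-of-envelope sketch
+using nominal PCIe Gen3 figures, not a CSIT measurement):
+
+```python
+# Assumption: nominal PCIe Gen3 parameters; TLP/DLLP protocol overhead,
+# which lowers usable bandwidth further, is ignored.
+GT_PER_LANE = 8e9      # PCIe Gen3 raw transfer rate per lane [transfers/s]
+ENCODING = 128 / 130   # 128b/130b line encoding efficiency
+LANES = 16
+
+usable_gbps = GT_PER_LANE * ENCODING * LANES / 1e9
+print(f"PCIe Gen3 x16 usable per direction: ~{usable_gbps:.0f} Gbps")
+print("Two 100GE ports at linerate need: 200 Gbps")
+# ~126 Gbps < 200 Gbps, hence only one 100GE port per NIC at linerate.
+```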
+
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than the ones used in the FD.io lab are
+likely to yield different results. A good rule of thumb that can be
+applied to estimate VPP packet throughput for the NIC-to-NIC switching
+topology is to expect the forwarding performance to be proportional to
+processor core frequency for the same processor architecture, assuming the
+processor is the only limiting factor and all other SUT parameters are
+equivalent to the FD.io CSIT environment.
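+
+As a hedged illustration of that rule of thumb (a sketch only; the reference
+numbers below are placeholders, not published CSIT results):
+
+```python
+def estimate_pps(reference_pps: float, reference_ghz: float,
+                 target_ghz: float) -> float:
+    """Scale a measured packet rate linearly with core frequency (same uarch)."""
+    return reference_pps * (target_ghz / reference_ghz)
+
+# Example: a hypothetical 20 Mpps result at 2.3 GHz extrapolated to 3.0 GHz.
+print(f"{estimate_pps(20e6, 2.3, 3.0) / 1e6:.1f} Mpps")  # ~26.1 Mpps
+```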
+
+## VM Service Switching
+
+VM service switching topology test cases require VPP DUT to communicate
+with Virtual Machines (VMs) over vhost-user virtual interfaces.
+
+Two types of VM service topologies are tested:
+
+1. "Parallel" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to VM, back to VPP DUT, then out thru NIC(s).
+2. "Chained" topology (a.k.a. "Snake") with packets flowing within SUT
+ from NIC(s) via VPP DUT to VM, back to VPP DUT, then to the next VM,
+ back to VPP DUT and so on and so forth until the last VM in a chain,
+ then back to VPP DUT and out thru NIC(s).
+
+For each of the above topologies, VPP DUT is tested in a range of L2
+or IPv4/IPv6 configurations depending on the test suite. Sample VPP DUT
+"Chained" VM service topologies for 2-Node and 3-Node testbeds with each
+SUT running N VM instances are shown in the figures below.
+
+{{< figure src="/cdocs/logical-2n-vm-vhost.svg" >}}
+
+{{< figure src="/cdocs/logical-3n-vm-vhost.svg" >}}
+
+In "Chained" VM topologies, packets are switched by VPP DUT multiple
+times: twice for a single VM, three times for two VMs, N+1 times for N
+VMs. Hence the external throughput rates measured by TG and listed in
+this report must be multiplied by N+1 to represent the actual VPP DUT
+aggregate packet forwarding rate.
+
+For "Parallel" service topology packets are always switched twice by VPP
+DUT per service chain.
+
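+The corresponding arithmetic, shown as a minimal sketch (the TG rate and VM
+count below are illustrative, not measured values):
+
+```python
+def vpp_aggregate_rate(tg_rate_pps: float, n_vms: int, topology: str) -> float:
+    """Convert the external TG rate to the VPP DUT aggregate forwarding rate."""
+    if topology == "chained":
+        return tg_rate_pps * (n_vms + 1)  # VPP switches each packet N+1 times
+    if topology == "parallel":
+        return tg_rate_pps * 2            # always two passes through VPP
+    raise ValueError(f"unknown topology: {topology}")
+
+# Example: TG measures 10 Mpps with 4 chained VMs -> 50 Mpps handled by VPP DUT.
+print(vpp_aggregate_rate(10e6, 4, "chained"))
+```
+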
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than the ones used in the FD.io lab are
+likely to yield different results. As with the NIC-to-NIC switching
+topology, one can expect the forwarding performance to be
+proportional to processor core frequency for the same processor
+architecture, assuming the processor is the only limiting factor. However,
+due to the much higher dependency on intensive memory operations in VM
+service chained topologies and the sensitivity to Linux scheduler settings
+and behaviour, this estimate may not always be sufficiently accurate.
+
+## Container Service Switching
+
+Container service switching topology test cases require VPP DUT to
+communicate with Containers (Ctrs) over memif virtual interfaces.
+
+Three types of Container service topologies are tested:
+
+1. "Parallel" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to Container, back to VPP DUT, then out thru NIC(s).
+2. "Chained" topology (a.k.a. "Snake") with packets flowing within SUT
+ from NIC(s) via VPP DUT to Container, back to VPP DUT, then to the
+ next Container, back to VPP DUT and so on and so forth until the
+ last Container in a chain, then back to VPP DUT and out thru NIC(s).
+3. "Horizontal" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to Container, then via "horizontal" memif to the next
+ Container, and so on and so forth until the last Container, then
+ back to VPP DUT and out thru NIC(s).
+
+For each of the above topologies, VPP DUT is tested in a range of L2
+or IPv4/IPv6 configurations depending on the test suite. Sample VPP DUT
+"Chained" Container service topologies for 2-Node and 3-Node testbeds
+with each SUT running N Container instances are shown in the figures
+below.
+
+{{< figure src="/cdocs/logical-2n-container-memif.svg" >}}
+
+{{< figure src="/cdocs/logical-3n-container-memif.svg" >}}
+
+In "Chained" Container topologies, packets are switched by VPP DUT
+multiple times: twice for a single Container, three times for two
+Containers, N+1 times for N Containers. Hence the external throughput
+rates measured by TG and listed in this report must be multiplied by N+1
+to represent the actual VPP DUT aggregate packet forwarding rate.
+
+For a "Parallel" and "Horizontal" service topologies packets are always
+switched by VPP DUT twice per service chain.
+
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than the ones used in the FD.io lab are
+likely to yield different results. As with the NIC-to-NIC switching
+topology, one can expect the forwarding performance to be
+proportional to processor core frequency for the same processor
+architecture, assuming the processor is the only limiting factor. However,
+due to the much higher dependency on intensive memory operations in
+Container service chained topologies and the sensitivity to Linux scheduler
+settings and behaviour, this estimate may not always be sufficiently
+accurate.
diff --git a/docs/content/infrastructure/fdio_csit_testbed_specifications.md b/docs/content/infrastructure/fdio_csit_testbed_specifications.md
index 20bec11f9f..49b6d72fa9 100644
--- a/docs/content/infrastructure/fdio_csit_testbed_specifications.md
+++ b/docs/content/infrastructure/fdio_csit_testbed_specifications.md
@@ -801,6 +801,8 @@ connectivity and wiring across defined CSIT testbeds:
#### 2-Node-Cascadelake Servers (2n-clx) PROD
+{{< figure src="/cdocs/testbed-2n-clx.svg" >}}
+
```
- SUT [Server-Type-C2]:
- testbedname: testbed27.
@@ -906,6 +908,8 @@ connectivity and wiring across defined CSIT testbeds:
#### 2-Node-Zen2 Servers (2n-zn2) PROD
+{{< figure src="/cdocs/testbed-2n-zn2.svg" >}}
+
```
- SUT [Server-Type-D1]:
- testbedname: testbed210.
@@ -939,7 +943,7 @@ connectivity and wiring across defined CSIT testbeds:
#### 2-Node-ThunderX2 Servers (2x-tx2) PROD
-Note: Server19 (TG) is shared between testbed33 & testbed211
+{{< figure src="/cdocs/testbed-2n-tx2.svg" >}}
```
- SUT [Server-Type-E22]:
@@ -972,6 +976,8 @@ Note: Server19 (TG) is shared between testbed33 & testbed211
#### 2-Node-Icelake Servers (2n-icx) PROD
+{{< figure src="/cdocs/testbed-2n-icx.svg" >}}
+
```
- SUT [Server-Type-F1]:
- testbedname: testbed212.
@@ -1131,7 +1137,7 @@ Note: There is no IPMI. Serial console is accessible via VIRL2 and VIRL3 USB.
#### 3-Node-Taishan Servers (3n-tsh) PROD
-Note: Server19 (TG) is shared between testbed33 & testbed211
+{{< figure src="/cdocs/testbed-3n-tsh.svg" >}}
```
- SUT [Server-Type-E21]:
@@ -1176,6 +1182,8 @@ Note: Server19 (TG) is shared between testbed33 & testbed211
#### 3-Node-Altra Servers (3n-alt) PROD
+{{< figure src="/cdocs/testbed-3n-alt.svg" >}}
+
```
- SUT [Server-Type-E23]:
- testbedname: testbed34.
@@ -1213,6 +1221,8 @@ Note: Server19 (TG) is shared between testbed33 & testbed211
#### 3-Node-Icelake Servers (3n-icx) PROD
+{{< figure src="/cdocs/testbed-3n-icx.svg" >}}
+
```
- ServerF1 [Server-Type-F1]:
- testbedname: testbed37.
@@ -1302,6 +1312,8 @@ Note: Server19 (TG) is shared between testbed33 & testbed211
#### 3-Node-SnowRidge Servers (3n-snr) PROD
+{{< figure src="/cdocs/testbed-3n-snr.svg" >}}
+
```
- ServerG1 [Server-Type-G1]:
- testbedname: testbed39.
@@ -1337,6 +1349,8 @@ Note: Server19 (TG) is shared between testbed33 & testbed211
#### 2-Node-SapphireRapids Servers (2n-spr) PROD
+{{< figure src="/cdocs/testbed-2n-spr.svg" >}}
+
```
- SUT [Server-Type-H1]:
- testbedname: testbed21.
@@ -1827,4 +1841,4 @@ To be completed.
- s59-t24-tg1-c7/p1 to s59-t24-tg1-c7/p2.
- ring5 100GE-ports e810-2CQDA2-2p100GE:
- s59-t24-tg1-c9/p1 to s59-t24-tg1-c9/p2.
-```
\ No newline at end of file
+```
diff --git a/docs/content/infrastructure/fdio_csit_testbed_versioning.md b/docs/content/infrastructure/fdio_csit_testbed_versioning.md
new file mode 100644
index 0000000000..5185c787f7
--- /dev/null
+++ b/docs/content/infrastructure/fdio_csit_testbed_versioning.md
@@ -0,0 +1,121 @@
+---
+bookToc: true
+title: "FD.io CSIT Testbed Versioning"
+weight: 3
+---
+
+# FD.io CSIT Testbed Versioning
+
+CSIT test environment versioning has been introduced to track modifications of
+the test environment.
+
+Any benchmark anomalies (progressions, regressions) between releases of a DUT
+application (e.g. VPP, DPDK) are determined by testing it in the same test
+environment, to avoid test environment changes clouding the picture.
+To better distinguish the impact of test environment changes, we also execute
+tests without any SUT (just with the TRex TG sending packets over a link
+looping back to TG).
+
+A mirror approach is used to determine benchmarking anomalies due to test
+environment changes. This is achieved by testing the same DUT application
+version between releases of the CSIT test system. This works under the
+assumption that the behaviour of the DUT is deterministic under the test
+conditions.
+
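+A minimal sketch of that mirror comparison (illustrative only; this is not
+the statistical method used by the CSIT analytics pipeline):
+
+```python
+def relative_change(old_mean_pps: float, new_mean_pps: float) -> float:
+    """Relative throughput change of the same DUT build across two environments."""
+    return (new_mean_pps - old_mean_pps) / old_mean_pps
+
+# Example: the same VPP build measured under env ver. 10 and ver. 11
+# (numbers are made up for illustration).
+delta = relative_change(18.2e6, 17.6e6)
+print(f"{delta:+.1%}")  # -3.3% -> possible environment-induced anomaly
+```
+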
+The CSIT test environment versioning scheme ensures the integrity of all test
+system components, including their HW revisions, compiled SW code versions and
+SW source code, within a specific CSIT version. The components covered by CSIT
+environment versioning are:
+
+- **HW** Server hardware firmware and BIOS (motherboard, processor,
+ NIC(s), accelerator card(s)), tracked in CSIT branch.
+- **Linux** Server Linux OS version and configuration, tracked in CSIT
+ Reports.
+- **TRex** TRex Traffic Generator version, drivers and configuration
+ tracked in TG Settings.
+- **CSIT** CSIT framework code tracked in CSIT release branches.
+
+Following is the list of CSIT versions to date:
+
+- Ver. 1 associated with CSIT rls1908 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls1908),
+ [Linux](https://docs.fd.io/csit/rls1908/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://docs.fd.io/csit/rls1908/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls1908)
+ ).
+- Ver. 2 associated with CSIT rls2001 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2001),
+ [Linux](https://docs.fd.io/csit/rls2001/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://docs.fd.io/csit/rls2001/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2001)
+ ).
+- Ver. 4 associated with CSIT rls2005 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2005),
+ [Linux](https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2005)
+ ).
+- Ver. 5 associated with CSIT rls2009 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2009),
+ [Linux](https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://docs.fd.io/csit/rls2009/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2009)
+ ).
+ - The main change is TRex data-plane core resource adjustments:
+ [increase from 7 to 8 cores and pinning cores to interfaces](https://gerrit.fd.io/r/c/csit/+/28184)
+ for better TRex performance with symmetric traffic profiles.
+- Ver. 6 associated with CSIT rls2101 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2101),
+ [Linux](https://docs.fd.io/csit/rls2101/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://docs.fd.io/csit/rls2101/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2101)
+ ).
+ - The main change is TRex version upgrade: increase from 2.82 to 2.86.
+- Ver. 7 associated with CSIT rls2106 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2106),
+ [Linux](https://s3-docs.fd.io/csit/rls2106/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://s3-docs.fd.io/csit/rls2106/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2106)
+ ).
+ - TRex version upgrade: increase from 2.86 to 2.88.
+ - Ubuntu upgrade from 18.04 LTS to 20.04.2 LTS.
+- Ver. 8 associated with CSIT rls2110 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2110),
+ [Linux](https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2110)
+ ).
+ - Intel NIC 700/800 series firmware upgrade based on DPDK compatibility
+ matrix.
+- Ver. 9 associated with CSIT rls2202 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2202),
+ [Linux](https://s3-docs.fd.io/csit/rls2202/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://s3-docs.fd.io/csit/rls2202/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2202)
+ ).
+ - Intel NIC 700/800 series firmware upgrade based on DPDK compatibility
+ matrix.
+- Ver. 10 associated with CSIT rls2206 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2206),
+ [Linux](https://s3-docs.fd.io/csit/rls2206/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://s3-docs.fd.io/csit/rls2206/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2206)
+ ).
+ - Intel NIC 700/800 series firmware upgrade based on DPDK compatibility
+ matrix.
+ - Mellanox 556A series firmware upgrade based on DPDK compatibility
+ matrix.
+ - Intel IceLake all-core turbo frequency turned off. The current base
+ frequency is 2.6 GHz.
+ - TRex version upgrade: increase from 2.88 to 2.97.
+- Ver. 11 associated with CSIT rls2210 branch (
+ [HW](https://git.fd.io/csit/tree/docs/lab?h=rls2210),
+ [Linux](https://s3-docs.fd.io/csit/rls2210/report/vpp_performance_tests/test_environment.html#sut-settings-linux),
+ [TRex](https://s3-docs.fd.io/csit/rls2210/report/vpp_performance_tests/test_environment.html#tg-settings-trex),
+ [CSIT](https://git.fd.io/csit/tree/?h=rls2210)
+ ).
+ - Intel NIC 700/800 series firmware upgrade based on DPDK compatibility
+ matrix.
+ - Mellanox 556A series firmware upgrade based on DPDK compatibility
+ matrix.
+ - Ubuntu upgrade from 20.04.2 LTS to 22.04.1 LTS.
+ - TRex version upgrade: increase from 2.97 to 3.00.
\ No newline at end of file
diff --git a/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md b/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md
index b1494f6f72..3bdca72dae 100644
--- a/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md
+++ b/docs/content/infrastructure/fdio_dc_vexxhost_inventory.md
@@ -5,10 +5,9 @@ weight: 1
# FD.io DC Vexxhost Inventory
-- for each DC location, per rack .csv table with server inventory
- captured inventory data: name,oper-status,testbed-id,role,model,s/n,rackid,rackunit,mgmt-ip4,ipmi-ip4,new-rackid,new-rackunit,new-mgmt-ip4,new-ipmi-ip4
- name: CSIT functional server name as tracked in [CSIT testbed specification](https://git.fd.io/csit/tree/docs/lab/testbed_specifications.md), followed by "/" and the actual configured hostname, unless it is the same as CSIT name.
- - oper-status: operational status (up|down|ipmi).
+ - oper-status: operational status (up|down).
- testbed-id: CSIT testbed identifier.
- role: 2n/3n-xxx performance testbed, nomad-client, nomad-server.
- role exceptions: decommission, repurpose, spare.
diff --git a/docs/content/infrastructure/testbed_configuration/_index.md b/docs/content/infrastructure/testbed_configuration/_index.md
index ce023237c7..d0716003c5 100644
--- a/docs/content/infrastructure/testbed_configuration/_index.md
+++ b/docs/content/infrastructure/testbed_configuration/_index.md
@@ -1,4 +1,5 @@
---
+bookCollapseSection: true
bookFlatSection: false
title: "FD.io CSIT Testbed Configuration"
weight: 3
diff --git a/docs/content/infrastructure/testbed_configuration/ami_alt_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/ami_alt_hw_bios_cfg.md
new file mode 100644
index 0000000000..e02c642e6e
--- /dev/null
+++ b/docs/content/infrastructure/testbed_configuration/ami_alt_hw_bios_cfg.md
@@ -0,0 +1,12 @@
+---
+bookToc: true
+title: "MegaRac Altra"
+---
+
+# MegaRac Altra
+
+## Linux cmdline
+
+```
+BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic root=UUID=7d1d0e77-4df0-43df-9619-a99db29ffb83 ro audit=0 default_hugepagesz=2M hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=32768 iommu.passthrough=1 isolcpus=1-10,29-38 nmi_watchdog=0 nohz_full=1-10,29-38 nosoftlockup processor.max_cstate=1 rcu_nocbs=1-10,29-38 console=ttyAMA0,115200n8 quiet
+```
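+
+A small helper sketch (not part of CSIT) that reads such a cmdline and totals
+the pre-allocated hugepage memory; each `hugepages=` count applies to the
+preceding `hugepagesz=` size:
+
+```python
+import re
+
+def hugepage_total_bytes(cmdline: str) -> int:
+    """Sum hugepage memory requested on the kernel cmdline."""
+    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
+    total, size = 0, None
+    for token in cmdline.split():
+        match = re.fullmatch(r"hugepagesz=(\d+)([KMG])", token)
+        if match:
+            size = int(match.group(1)) * units[match.group(2)]
+        elif token.startswith("hugepages=") and size is not None:
+            total += size * int(token.split("=")[1])
+    return total
+
+with open("/proc/cmdline") as f:
+    print(hugepage_total_bytes(f.read()) / (1 << 30), "GiB")
+# For the cmdline above: 32 x 1G + 32768 x 2M = 96 GiB.
+```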
diff --git a/docs/content/infrastructure/testbed_configuration/gigabyte_tx2_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/gigabyte_tx2_hw_bios_cfg.md
new file mode 100644
index 0000000000..eb188d3bf9
--- /dev/null
+++ b/docs/content/infrastructure/testbed_configuration/gigabyte_tx2_hw_bios_cfg.md
@@ -0,0 +1,12 @@
+---
+bookToc: true
+title: "GigaByte ThunderX2"
+---
+
+# GigaByte ThunderX2
+
+## Linux cmdline
+
+```
+BOOT_IMAGE=/boot/vmlinuz-5.4.0-65-generic root=UUID=7d1d0e77-4df0-43df-9619-a99db29ffb83 ro audit=0 intel_iommu=on isolcpus=1-27,29-55 nmi_watchdog=0 nohz_full=1-27,29-55 nosoftlockup processor.max_cstate=1 rcu_nocbs=1-27,29-55 console=ttyAMA0,115200n8 quiet
+```
diff --git a/docs/content/infrastructure/testbed_configuration/huawei_tsh_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/huawei_tsh_hw_bios_cfg.md
new file mode 100644
index 0000000000..d9fd71b080
--- /dev/null
+++ b/docs/content/infrastructure/testbed_configuration/huawei_tsh_hw_bios_cfg.md
@@ -0,0 +1,12 @@
+---
+bookToc: true
+title: "Huawei Taishan"
+---
+
+# Huawei Taishan
+
+## Linux cmdline
+
+```
+BOOT_IMAGE=/boot/vmlinuz-5.4.0-65-generic root=UUID=7d1d0e77-4df0-43df-9619-a99db29ffb83 ro audit=0 intel_iommu=on isolcpus=1-27,29-55 nmi_watchdog=0 nohz_full=1-27,29-55 nosoftlockup processor.max_cstate=1 rcu_nocbs=1-27,29-55 console=ttyAMA0,115200n8 quiet
+```
diff --git a/docs/content/infrastructure/testbed_configuration/sm_clx_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/sm_clx_hw_bios_cfg.md
index b2c859b11f..f4d9fbe475 100644
--- a/docs/content/infrastructure/testbed_configuration/sm_clx_hw_bios_cfg.md
+++ b/docs/content/infrastructure/testbed_configuration/sm_clx_hw_bios_cfg.md
@@ -1405,6 +1405,12 @@ pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
| High Precision Event Timer [Enabled] | |
```
+## Linux cmdline
+
+```
+$ cat /proc/cmdline
+BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic root=UUID=2d6f4d44-76b1-4343-bc73-c066a3e95b32 ro audit=0 default_hugepagesz=2M hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=32768 hpet=disable intel_idle.max_cstate=1 intel_iommu=on intel_pstate=disable iommu=pt isolcpus=1-23,25-47,49-71,73-95 mce=off nmi_watchdog=0 nohz_full=1-23,25-47,49-71,73-95 nosoftlockup numa_balancing=disable processor.max_cstate=1 rcu_nocbs=1-23,25-47,49-71,73-95 tsc=reliable console=ttyS0,115200n8 quiet
+```
## Xeon Clx Server Firmware Inventory
diff --git a/docs/content/infrastructure/testbed_configuration/sm_icx_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/sm_icx_hw_bios_cfg.md
index 854d3d0418..97c7874d85 100644
--- a/docs/content/infrastructure/testbed_configuration/sm_icx_hw_bios_cfg.md
+++ b/docs/content/infrastructure/testbed_configuration/sm_icx_hw_bios_cfg.md
@@ -1092,6 +1092,13 @@ Memory Device
|> Network Stack Configuration | |
```
+## Linux cmdline
+
+```
+$ cat /proc/cmdline
+BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic root=UUID=6ff26c8a-8c65-4025-a6e7-d97dee6025d0 ro audit=0 default_hugepagesz=2M hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=32768 hpet=disable intel_idle.max_cstate=1 intel_iommu=on intel_pstate=disable iommu=pt isolcpus=1-31,33-63,65-95,97-127 mce=off nmi_watchdog=0 nohz_full=1-31,33-63,65-95,97-127 nosoftlockup numa_balancing=disable processor.max_cstate=1 rcu_nocbs=1-31,33-63,65-95,97-127 tsc=reliable console=ttyS0,115200n8 quiet
+```
+
## Xeon ICX Server Firmware Inventory
```
diff --git a/docs/content/infrastructure/testbed_configuration/sm_spr_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/sm_spr_hw_bios_cfg.md
index c2bf8fb795..a91fcfffb1 100644
--- a/docs/content/infrastructure/testbed_configuration/sm_spr_hw_bios_cfg.md
+++ b/docs/content/infrastructure/testbed_configuration/sm_spr_hw_bios_cfg.md
@@ -830,6 +830,12 @@ Memory Device
Logical Size: None
```
+## Linux cmdline
+
+```
+BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic root=UUID=b99a7749-d0ee-4afe-88a0-0be6c5873645 ro audit=0 default_hugepagesz=2M hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=32768 hpet=disable intel_idle.max_cstate=1 intel_iommu=on intel_pstate=disable iommu=pt isolcpus=1-31,33-63,65-95,97-127 mce=off nmi_watchdog=0 nohz_full=1-31,33-63,65-95,97-127 nosoftlockup numa_balancing=disable processor.max_cstate=1 rcu_nocbs=1-31,33-63,65-95,97-127 tsc=reliable
+```
+
## Xeon ICX Server Firmware Inventory
```
diff --git a/docs/content/infrastructure/testbed_configuration/sm_zn2_hw_bios_cfg.md b/docs/content/infrastructure/testbed_configuration/sm_zn2_hw_bios_cfg.md
index 31335d5cc7..537fc9f42a 100644
--- a/docs/content/infrastructure/testbed_configuration/sm_zn2_hw_bios_cfg.md
+++ b/docs/content/infrastructure/testbed_configuration/sm_zn2_hw_bios_cfg.md
@@ -604,6 +604,12 @@ Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cm
| | |
```
+## Linux cmdline
+
+```
+$ cat /proc/cmdline
+BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic root=UUID=cac1254f-9426-4ea6-a8db-2554f075db99 ro amd_iommu=on audit=0 default_hugepagesz=2M hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=32768 hpet=disable iommu=pt isolcpus=1-15,17-31,33-47,49-63 nmi_watchdog=0 nohz_full=off nosoftlockup numa_balancing=disable processor.max_cstate=0 rcu_nocbs=1-15,17-31,33-47,49-63 tsc=reliable console=ttyS0,115200n8 quiet
+```
## EPYC zn2 Server Firmware Inventory