Testbed Setup
=============

Introduction
------------

This directory contains the *high-level* process to set up a hardware machine
as a CSIT testbed, either for use as a physical performance testbed host or as
a vpp_device host.

Code in this directory is NOT executed as part of a regular CSIT test case;
it is stored here for ad-hoc hardware installation, archiving and
documentation purposes.

Setting up a hardware host
--------------------------

Documentation below is a step-by-step tutorial and assumes an understanding
of PXE boot, Ansible, and managing physical hardware via CIMC or IPMI.

This process is not specific to the LF lab, but the associated files and code
are based on the assumption that they run in the LF environment. If run
elsewhere, changes will be required in the following files:

#. Inventory directory: `ansible/inventories/sample_inventory/`
#. Inventory files: `ansible/inventories/sample_inventory/hosts`
#. Kickseed file: `pxe/ks.cfg`
#. DHCPD file: `pxe/dhcpd.conf`
#. Bootscreen file: `boot-screens_txt.cfg`
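
For example, outside of the LF lab one might start from the sample inventory
(the inventory name below is illustrative):

.. code-block:: bash

   cd csit/resources/tools/testbed-setup/ansible
   # Copy the sample inventory as a starting point for a local environment,
   # then edit hosts, group_vars/all.yaml and host_vars/<ip>.yaml to match it.
   cp -r inventories/sample_inventory inventories/my_inventory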

The process below assumes that there is a host used for bootstrapping (referred
to as "PXE bootstrap server" below).

Prepare the PXE bootstrap server when there is no HTTP server (AMD64)
``````````````````````````````````````````````````````````````````````

#. Clone the csit repo:

   .. code-block:: bash

      git clone https://gerrit.fd.io/r/csit
      cd csit/resources/tools/testbed-setup/pxe

#. Install the prerequisites (isc-dhcp-server, tftpd-hpa, nginx-light and
   ansible):

   .. code-block:: bash

      sudo apt-get install isc-dhcp-server tftpd-hpa nginx-light ansible
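
   Optionally (assuming a systemd-based Ubuntu), make sure the services also
   start on boot:

   .. code-block:: bash

      sudo systemctl enable --now isc-dhcp-server tftpd-hpa nginx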

#. Edit dhcpd.cfg, then install it and prepare the CD-ROM mount point:

   .. code-block:: bash

      sudo cp dhcpd.cfg /etc/dhcp/
      sudo service isc-dhcp-server restart
      sudo mkdir /mnt/cdrom
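
   For illustration, this is the kind of PXE-enabling subnet stanza one would
   add while editing `dhcpd.cfg` (all addresses below are hypothetical
   placeholders, not LF values):

   .. code-block:: bash

      # Hypothetical subnet declaration; adjust every address to your lab.
      cat >> dhcpd.cfg <<'EOF'
      subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.200;
        option routers 192.168.0.1;
        next-server 192.168.0.2;   # the PXE bootstrap server (TFTP)
        filename "pxelinux.0";
      }
      EOF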

#. Download Ubuntu 18.04 LTS - X86_64:

   .. code-block:: bash

      wget http://cdimage.ubuntu.com/ubuntu/releases/18.04/release/ubuntu-18.04-server-amd64.iso
      sudo mount -o loop ubuntu-18.04-server-amd64.iso /mnt/cdrom/
      sudo cp -r /mnt/cdrom/install/netboot/* /var/lib/tftpboot/

      # Figure out root folder for NGINX webserver. The configuration is in one
      # of the files in /etc/nginx/conf.d/, /etc/nginx/sites-enabled/ or in
      # /etc/nginx/nginx.conf under section server/root. Save the path to
      # variable WWW_ROOT.
      sudo mkdir -p ${WWW_ROOT}/download/ubuntu
      sudo cp -r /mnt/cdrom/* ${WWW_ROOT}/download/ubuntu/
      sudo cp /mnt/cdrom/ubuntu/isolinux/ldlinux.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/libcom32.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/libutil.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/chain.c32 /var/lib/tftpboot
      sudo umount /mnt/cdrom
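
   A minimal sketch for discovering the NGINX web root automatically
   (assumption: a single stock `root` directive in the enabled site
   configuration):

   .. code-block:: bash

      # Extract the first "root" directive from the enabled NGINX sites.
      WWW_ROOT=$(grep -hR '^\s*root' /etc/nginx/sites-enabled/ \
                 | head -1 | awk '{print $2}' | tr -d ';')
      echo "WWW_ROOT is ${WWW_ROOT}"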

#. Edit ks.cfg, replacing the IP address of the PXE bootstrap server and the
   subdir in `/var/www` (in this case `/var/www/download`):

   .. code-block:: bash

      sudo cp ks.cfg ${WWW_ROOT}/download/ks.cfg
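
   A hypothetical way to do the replacement non-interactively (the
   placeholders stand for the address currently in ks.cfg and the address of
   this PXE bootstrap server):

   .. code-block:: bash

      sudo sed -i 's/<old_server_ip>/<pxe_server_ip>/g' ${WWW_ROOT}/download/ks.cfg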

#. Edit boot-screens_txt.cfg, replacing the IP address of the PXE bootstrap
   server and the subdir in `/var/www` (in this case `/var/www/download`):

   .. code-block:: bash

      sudo cp boot-screens_txt.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg
      sudo cp syslinux.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/syslinux.cfg

New testbed host - manual preparation
`````````````````````````````````````

Set the CIMC/IPMI address, username, password and hostname in the BIOS.

Bootstrap the host
``````````````````

A convenient way to re-stage the host is via the script:

.. code-block:: bash

   sudo ./bootstrap_setup_testbed.sh <linux_ip> <mgmt_ip> <username> <pass>
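
For example (the IP addresses below are illustrative placeholders; the
username and password match the `testuser` account expected by Ansible):

.. code-block:: bash

   sudo ./bootstrap_setup_testbed.sh 192.168.0.10 192.168.0.20 testuser Csit1234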

Optional: CIMC - From PXE bootstrap server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Initialize the host: power off, reset BIOS defaults, enable console
   redirection, and get the LOM MAC address:

   .. code-block:: bash

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -i

#. Adjust BIOS settings:

   .. code-block:: bash

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d \
          -s '<biosVfIntelHyperThreadingTech rn="Intel-HyperThreading-Tech" vpIntelHyperThreadingTech="disabled" />' \
          -s '<biosVfEnhancedIntelSpeedStepTech rn="Enhanced-Intel-SpeedStep-Tech" vpEnhancedIntelSpeedStepTech="disabled" />' \
          -s '<biosVfIntelTurboBoostTech rn="Intel-Turbo-Boost-Tech" vpIntelTurboBoostTech="disabled" />'

#. If a RAID array has not been created in CIMC, create it and reboot:

   .. code-block:: bash

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d --wipe
      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -r -rl 1 -rs <disk size> -rd '[1,2]'

#. Reboot the server to boot from PXE (restarts immediately):

   .. code-block:: bash

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -pxe

#. Set the next boot from HDD (without restarting). Execute this while the
   Ubuntu install is running:

   .. code-block:: bash

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -hdd

Optional: IPMI - From PXE bootstrap server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Get the MAC address of LAN0:

   .. code-block:: bash

      ipmitool -U ADMIN -H $HOST_ADDRESS raw 0x30 0x21 | tail -c 18

#. Reboot into PXE for the next boot only:

   .. code-block:: bash

      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN chassis bootdev pxe
      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN power reset

#. To watch the SOL (Serial-over-LAN) console live:

   .. code-block:: bash

      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol activate
      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol deactivate

Ansible machine
~~~~~~~~~~~~~~~

Prerequisites for running Ansible
..................................

- Ansible can run on any machine that has direct SSH connectivity to the
  target machines that will be provisioned (it does not need to be the PXE
  server).
- A user `testuser` with password `Csit1234` is created, with the home folder
  initialized, on all target machines that will be provisioned (see the
  sketch after this list).
- An inventory directory is created in the `inventories/` directory with the
  same or similar content as `inventories/lf_inventory` (`sample_inventory`
  can be used).
- Group variables in `ansible/inventories/<inventory>/group_vars/all.yaml` are
  adjusted per environment. Pay special attention to the `proxy_env` variable.
- Host variables in `ansible/inventories/<inventory>/host_vars/x.x.x.x.yaml`
  are defined.
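
A minimal sketch for creating the `testuser` account on a target machine
(assuming a Debian/Ubuntu target with sudo available):

.. code-block:: bash

   # Create testuser with an initialized home folder and set the password
   # expected by the playbooks.
   sudo useradd -m -s /bin/bash testuser
   echo 'testuser:Csit1234' | sudo chpasswd
   # Optional assumption: grant passwordless sudo so Ansible can escalate.
   echo 'testuser ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/testuser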

Ansible structure
.................

Ansible defines the roles `TG` (Traffic Generator), `SUT` (System Under
Test), `VPP_DEVICE` (vpp_device host for functional testing) and `COMMON`
(applicable to all servers in the inventory).

Each host has a corresponding Ansible role mapped to it; the role is applied
only if a host with that role is present in the inventory file. As an
optimization, the role `common` contains Ansible tasks applied to all hosts.
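
For illustration, a minimal hypothetical `hosts` inventory (all IP addresses
are placeholders) mapping hosts to roles could look like this:

.. code-block:: bash

   # Hypothetical sample inventory written from the shell; adjust the IPs.
   cat > inventories/sample_inventory/hosts <<'EOF'
   [tg]
   10.0.0.1

   [sut]
   10.0.0.2
   10.0.0.3

   [vpp_device]
   10.0.0.4
   EOF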

.. note::

   You may see `[WARNING]: Could not match supplied host pattern, ignoring:
   <role>` in case you have not defined hosts for that particular role.

Ansible structure is described below:

.. code-block:: bash

   .
   ├── inventories                     # Contains all inventories.
   │   ├── sample_inventory            # Sample, free for edits outside of LF.
   │   │   ├── group_vars              # Variables applied for all hosts.
   │   │   │   └── all.yaml
   │   │   ├── hosts                   # Inventory list with sample hosts.
   │   │   └── host_vars               # Variables applied for single host only.
   │   │       └── 1.1.1.1.yaml        # Sample host with IP 1.1.1.1
   │   └── lf_inventory                # Linux Foundation inventory.
   │       ├── group_vars
   │       │   └── all.yaml
   │       ├── hosts
   │       └── host_vars
   ├── roles                           # CSIT roles.
   │   ├── common                      # Role applied for all hosts.
   │   ├── sut                         # Role applied for all SUTs only.
   │   ├── tg                          # Role applied for all TGs only.
   │   ├── tg_sut                      # Role applied for TGs and SUTs only.
   │   └── vpp_device                  # Role applied for vpp_device only.
   ├── site.yaml                       # Main playbook.
   ├── sut.yaml                        # SUT playbook.
   ├── tg.yaml                         # TG playbook.
   ├── vault_pass                      # Main password for vault.
   ├── vault.yml                       # Ansible vault storage.
   └── vpp_device.yaml                 # vpp_device playbook.

Tagging
.......

Every task, handler, role and playbook is tagged with self-explanatory tags
that can be used to limit which objects are applied to target systems.

You can see which tags are applied to tasks, roles, and static imports by
running `ansible-playbook` with the `--list-tasks` option. You can display all
tags applied to the tasks with the `--list-tags` option.
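
For example (run from the ansible directory; the inventory path below is
illustrative):

.. code-block:: bash

   # List all tasks that would run, together with their tags.
   ansible-playbook --list-tasks --inventory inventories/sample_inventory/hosts site.yaml
   # List all tags defined across the playbook.
   ansible-playbook --list-tags --inventory inventories/sample_inventory/hosts site.yaml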

Running Ansible
...............

#. Go to the ansible directory: `cd csit/resources/tools/testbed-setup/ansible`
#. Run Ansible on the selected hosts:
   `ansible-playbook --vault-password-file=vault_pass --extra-vars '@vault.yml'
   --inventory <inventory_file> site.yaml --limit x.x.x.x`

.. note::

   In case you want to provision only a particular role, you can use the tags
   `tg`, `sut`, or `vpp_device`, as shown below.
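
For example, to provision only the SUT role (the host address is a
placeholder):

.. code-block:: bash

   ansible-playbook --vault-password-file=vault_pass --extra-vars '@vault.yml' \
       --inventory <inventory_file> site.yaml --limit x.x.x.x --tags sut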

Reboot hosts
------------

Manually reboot the hosts after Ansible provisioning has succeeded.