author    Peter Mikus <pmikus@cisco.com>  2019-02-23 16:27:07 +0000
committer Peter Mikus <pmikus@cisco.com>  2019-05-22 09:30:11 +0000
commit    04ea580e111ddf5be6101be1fbfe9fde56f1a214 (patch)
tree      09247ed50f1da5e09b79dcf41a05b38afeaa4ee2 /resources/tools/testbed-setup/README.rst
parent    c6cd03e08d9429168b0e183b8dcbce991112f279 (diff)
Ansible: Add CIMC/IPMI/COBBLER
- added tasks and handlers for CIMC, IPMI, COBBLER - allows provisioning of servers via COBBLER Change-Id: I64080069260dabb8a6e3b648aeff12f109d3f7c2 Signed-off-by: Peter Mikus <pmikus@cisco.com>
Diffstat (limited to 'resources/tools/testbed-setup/README.rst')
-rw-r--r--  resources/tools/testbed-setup/README.rst  226
1 file changed, 64 insertions(+), 162 deletions(-)
diff --git a/resources/tools/testbed-setup/README.rst b/resources/tools/testbed-setup/README.rst
index 01be10b5d3..9059e28500 100644
--- a/resources/tools/testbed-setup/README.rst
+++ b/resources/tools/testbed-setup/README.rst
@@ -12,179 +12,60 @@ Code in this directory is NOT executed as part of a regular CSIT test case
but is stored here for ad-hoc installation of HW, archiving and documentation
purposes.
-Setting up a hardware host
---------------------------
-
The documentation below is a step-by-step tutorial and assumes an understanding of PXE
-boot and Ansible and managing physical hardware via CIMC or IPMI.
+boot and `Ansible <https://www.ansible.com/>`_ and managing physical hardware
+via CIMC or IPMI.
-This process is not specific for LF lab, but associated files and code, is based
-on the assumption that it runs in LF environment. If run elsewhere, changes
-will be required in following files:
+This process is not specific to the Linux Foundation lab, but the associated
+files and code are based on the assumption that they run in the Linux
+Foundation environment. If run elsewhere, changes will be required in the
+following files:
#. Inventory directory: `ansible/inventories/sample_inventory/`
#. Inventory files: `ansible/inventories/sample_inventory/hosts`
-#. Kickseed file: `pxe/ks.cfg`
-#. DHCPD file: `pxe/dhcpd.conf`
-#. Bootscreen file: `boot-screens_txt.cfg`
The process below assumes that there is a host used for bootstrapping (referred
-to as "PXE bootstrap server" below).
-
-Prepare the PXE bootstrap server when there is no http server AMD64
-```````````````````````````````````````````````````````````````````
-
-#. Clone the csit repo:
-
- .. code-block:: bash
-
- git clone https://gerrit.fd.io/r/csit
- cd csit/resources/tools/testbed-setup/pxe
-
-#. Setup prerequisities (isc-dhcp-server tftpd-hpa nginx-light ansible):
-
- .. code-block:: bash
-
- sudo apt-get install isc-dhcp-server tftpd-hpa nginx-light ansible
-
-#. Edit dhcpd.cfg:
-
- .. code-block:: bash
-
- sudo cp dhcpd.cfg /etc/dhcp/
- sudo service isc-dhcp-server restart
- sudo mkdir /mnt/cdrom
-
-#. Download Ubuntu 18.04 LTS - X86_64:
-
- .. code-block:: bash
-
- wget http://cdimage.ubuntu.com/ubuntu/releases/18.04/release/ubuntu-18.04-server-amd64.iso
- sudo mount -o loop ubuntu-18.04-server-amd64.iso /mnt/cdrom/
- sudo cp -r /mnt/cdrom/install/netboot/* /var/lib/tftpboot/
-
- # Figure out root folder for NGINX webserver. The configuration is in one
- # of the files in /etc/nginx/conf.d/, /etc/nginx/sites-enabled/ or in
- # /etc/nginx/nginx.conf under section server/root. Save the path to
- # variable WWW_ROOT.
- sudo mkdir -p ${WWW_ROOT}/download/ubuntu
- sudo cp -r /mnt/cdrom/* ${WWW_ROOT}/download/ubuntu/
- sudo cp /mnt/cdrom/ubuntu/isolinux/ldlinux.c32 /var/lib/tftpboot
- sudo cp /mnt/cdrom/ubuntu/isolinux/libcom32.c32 /var/lib/tftpboot
- sudo cp /mnt/cdrom/ubuntu/isolinux/libutil.c32 /var/lib/tftpboot
- sudo cp /mnt/cdrom/ubuntu/isolinux/chain.c32 /var/lib/tftpboot
- sudo umount /mnt/cdrom
-
-#. Edit ks.cfg and replace IP address of PXE bootstrap server and subdir in
- `/var/www` (in this case `/var/www/download`):
-
- .. code-block:: bash
-
- sudo cp ks.cfg ${WWW_ROOT}/download/ks.cfg
-
-#. Edit boot-screens_txt.cfg and replace IP address of PXE bootstrap server and
- subdir in `/var/www` (in this case `/var/www/download`):
-
- .. code-block:: bash
-
- sudo cp boot-screens_txt.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg
- sudo cp syslinux.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/syslinux.cfg
-
-New testbed host - manual preparation
-`````````````````````````````````````
-
-Set CIMC/IPMI address, username, password and hostname an BIOS.
-
-Bootstrap the host
-``````````````````
-
-Optional: CIMC - From PXE boostrap server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Initialize args.ip: Power-Off, reset BIOS defaults, Enable console redir, get
- LOM MAC addr:
-
- .. code-block:: bash
+to as a "Cobbler provision host" below), with reachable DHCP service.
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -i
-
-#. Adjust BIOS settings:
-
- .. code-block:: bash
-
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -s '<biosVfIntelHyperThreadingTech rn="Intel-HyperThreading-Tech" vpIntelHyperThreadingTech="disabled" />' -s '<biosVfEnhancedIntelSpeedStepTech rn="Enhanced-Intel-SpeedStep-Tech" vpEnhancedIntelSpeedStepTech="disabled" />' -s '<biosVfIntelTurboBoostTech rn="Intel-Turbo-Boost-Tech" vpIntelTurboBoostTech="disabled" />'
-
-#. If RAID is not created in CIMC. Create RAID array. Reboot:
-
- .. code-block:: bash
-
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d --wipe
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -r -rl 1 -rs <disk size> -rd '[1,2]'
-
-#. Reboot server with boot from PXE (restart immediately):
-
- .. code-block:: bash
-
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -pxe
-
-#. Set the next boot from HDD (without restart). Execute while Ubuntu install
- is running:
-
- .. code-block:: bash
-
- ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -hdd
-
-Optional: IPMI - From PXE boostrap server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Get MAC address of LAN0:
-
- .. code-block:: bash
-
- ipmitool -U ADMIN -H $HOST_ADDRESS raw 0x30 0x21 | tail -c 18
-
-#. Reboot into PXE for next boot only:
-
- .. code-block:: bash
-
- ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN chassis bootdev pxe
- ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN power reset
-
-#. For live watching SOL (Serial-over-LAN console):
+Ansible host
+------------
- .. code-block:: bash
+Prerequisites for running Ansible
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol activate
- ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol deactivate
+- CIMC/IPMI address, username and password are set in the BIOS.
+- Ansible can be invoked on any host that has direct SSH connectivity to
+  the remote hosts that will be provisioned (it does not need to be the
+  Cobbler provision host). This may require installing SSH keys on the remote
+  hosts via `ssh-copy-id`, or disabling StrictHostKeyChecking on the host
+  running Ansible:
-Ansible machine
-~~~~~~~~~~~~~~~
+ ::
-Prerequisities for running Ansible
-..................................
+ Host <host_ip or host subnet_ip>
+ StrictHostKeyChecking no
+ UserKnownHostsFile=/dev/null
-- Ansible can run on any machine that has direct SSH connectivity to target
- machines that will be provisioned (does not need to be PXE server).
+- Ansible version 2.7+ is installed via pip or via the standard package
+  manager (apt, yum, dnf).
- User `testuser` with password `Csit1234` is created with home folder
- initialized on all target machines that will be provisioned.
+ initialized on all remote machines that will be provisioned.
- Inventory directory is created with same or similar content as
`inventories/lf_inventory` in `inventories/` directory (`sample_inventory`
can be used).
- Group variables in `ansible/inventories/<inventory>/group_vars/all.yaml` are
- adjusted per environment. Special attention to `proxy_env` variable.
+ adjusted per environment with special attention to `proxy_env` variable.
- Host variables in `ansible/inventories/<inventory>/host_vars/x.x.x.x.yaml` are
defined.
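The Ansible installation prerequisite above can be sketched as follows (pip
shown; the version pin mirrors the 2.7+ requirement, and a distribution
package via apt/yum/dnf works equally well):

```shell
# Install Ansible 2.7+ for the current user via pip.
pip install --user 'ansible>=2.7'

# Confirm the installed version meets the 2.7+ requirement.
ansible --version
```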
Ansible structure
-.................
+~~~~~~~~~~~~~~~~~
-Ansible is defining roles `TG` (Traffic Generator), `SUT` (System Under Test),
-`VPP_DEVICE` (vpp_device host for functional testing). `COMMON` (Applicable
-for all servers in inventory).
+Ansible defines the roles `tg` (Traffic Generator), `sut` (System Under Test),
+`vpp_device` (vpp_device host for functional device testing), `common`
+(applicable to all hosts in inventory) and `cobbler` (Cobbler provision host).
-Each Host has corresponding Ansible role mapped and is applied only if Host
+Each host has a corresponding Ansible role mapped, and the role is applied only if a host
with that role is present in inventory file. As a part of optimization the role
-`common` contains Ansible tasks applied for all Hosts.
+`common` contains Ansible tasks applied to all hosts.
.. note::
@@ -209,6 +90,7 @@ Ansible structure is described below:
│   ├── hosts
│   └── host_vars
├── roles # CSIT roles.
+ │   ├── cobbler # Role applied for Cobbler host only.
│   ├── common # Role applied for all hosts.
│   ├── sut # Role applied for all SUTs only.
│   ├── tg # Role applied for all TGs only.
@@ -217,34 +99,54 @@ Ansible structure is described below:
├── site.yaml # Main playbook.
├── sut.yaml # SUT playbook.
├── tg.yaml # TG playbook.
- ├── vault_pass # Main password for vualt.
- ├── vault.yml # Ansible vualt storage.
+ ├── vault_pass # Main password for vault.
+ ├── vault.yml # Ansible vault storage.
└── vpp_device.yaml # vpp_device playbook.
Tagging
-.......
+~~~~~~~
-Every task, handler, role, playbook is tagged with self-explanatory tags that
-could be used to limit which objects are applied to target systems.
+Every task, handler, role or playbook is tagged with self-explanatory tag(s)
+that can be used to limit which Ansible objects are applied to target systems.
-You can see which tags are applied to tasks, roles, and static imports by
+You can see what tags are applied to tasks, roles, and static imports by
running `ansible-playbook` with the `--list-tasks` option. You can display all
tags applied to the tasks with the `--list-tags` option.
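As a sketch, both inspection modes can be run without applying anything to the
target systems (the sample inventory path from this document is used as an
example):

```shell
cd csit/resources/tools/testbed-setup/ansible

# List the tasks that would run, together with their tags, without
# changing anything on the targets.
ansible-playbook --inventory inventories/sample_inventory/hosts \
  site.yaml --list-tasks

# List all tags available for use with --tags / --skip-tags.
ansible-playbook --inventory inventories/sample_inventory/hosts \
  site.yaml --list-tags
```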
Running Ansible
-...............
+~~~~~~~~~~~~~~~
-#. Go to ansible directory: `cd csit/resources/tools/testbed-setup/ansible`
+#. Go to ansible directory: `$ cd csit/resources/tools/testbed-setup/ansible`
#. Run ansible on selected hosts:
- `ansible-playbook --vault-password-file=vault_pass --extra-vars '@vault.yml'
- --inventory <inventory_file> site.yaml --limit x.x.x.x`
+ `$ ansible-playbook --vault-password-file=vault_pass --extra-vars
+ '@vault.yml' --inventory <inventory_file> site.yaml --limit <host_ip>`
+#. (Optional) Run ansible on selected hosts with selected tags:
+ `$ ansible-playbook --vault-password-file=vault_pass --extra-vars
+ '@vault.yml' --inventory <inventory_file> site.yaml --limit <host_ip>
+ --tags 'copy-90-csit'`
.. note::
In case you want to provision only a particular role, you can use the tags
`tg`, `sut`, `vpp_device`.
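Combining a role tag with the invocation shown above gives, as a sketch
(`<inventory_file>` and `<host_ip>` are placeholders, as elsewhere in this
document):

```shell
# Apply only the SUT role tasks to a single host.
ansible-playbook --vault-password-file=vault_pass --extra-vars '@vault.yml' \
  --inventory <inventory_file> site.yaml --limit <host_ip> --tags 'sut'
```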
-Reboot hosts
-------------
-
-Manually reboot hosts after Ansible provisioning succeeded.
+Baremetal provisioning of host via Ansible Cobbler module
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Baremetal provisioning of the host with Ansible is done via `Cobbler
+<https://cobbler.github.io/>`_. Ansible contains a role `cobbler` that includes
+a set of tasks for deploying Cobbler in a container on a dedicated host.
+The container is built during the Ansible run of the `cobbler` role and
+provides DHCPD, TFTPD, HTTP and Cobbler services.
+
+There is a special set of tasks and handlers in the `common` role that adds
+a system to Cobbler and reboots the provisioned host.
+
+#. Go to Ansible directory: `$ cd csit/resources/tools/testbed-setup/ansible`
+#. Prepare Cobbler provision host via Ansible on dedicated hosts:
+ `$ ansible-playbook --vault-password-file=vault_pass --extra-vars
+ '@vault.yml' --inventory <inventory_file> site.yaml --limit <cobbler_ip>`
+#. Run Ansible on selected hosts with selected tags:
+ `$ ansible-playbook --vault-password-file=vault_pass --extra-vars
+ '@vault.yml' --inventory <inventory_file> site.yaml --limit <host_ip>
+ --tags 'provision'`