author    Nathan Skrzypczak <nathan.skrzypczak@gmail.com>  2021-08-19 11:38:06 +0200
committer Dave Wallace <dwallacelf@gmail.com>              2021-10-13 23:22:32 +0000
commit    9ad39c026c8a3c945a7003c4aa4f5cb1d4c80160 (patch)
tree      3cca19635417e28ae381d67ae31c75df2925032d /docs/usecases/containers
parent    f47122e07e1ecd0151902a3cabe46c60a99bee8e (diff)
docs: better docs, mv doxygen to sphinx
This patch refactors the VPP Sphinx docs in order to make them easier to consume for external readers as well as VPP developers. It also makes Sphinx the single source of documentation, which simplifies maintenance and operation.

Most important updates are:

- reformat the existing documentation as rst
- split RELEASE.md and move it into separate rst files
- remove section 'events'
- remove section 'archive'
- remove section 'related projects'
- remove section 'feature by release'
- remove section 'Various links'
- make (Configuration reference, CLI docs, developer docs) top level items in the list
- move 'Use Cases' as part of 'About VPP'
- move 'Troubleshooting' as part of 'Getting Started'
- move test framework docs into 'Developer Documentation'
- add a 'Contributing' section for gerrit, docs and other contributor related infos
- deprecate doxygen and test-docs targets
- redirect the "make doxygen" target to "make docs"

Type: refactor
Change-Id: I552a5645d5b7964d547f99b1336e2ac24e7c209f
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Diffstat (limited to 'docs/usecases/containers')
-rw-r--r--  docs/usecases/containers/Routing.rst            266
-rw-r--r--  docs/usecases/containers/containerCreation.rst  125
-rw-r--r--  docs/usecases/containers/containerSetup.rst      49
-rw-r--r--  docs/usecases/containers/index.rst               13
4 files changed, 453 insertions, 0 deletions
diff --git a/docs/usecases/containers/Routing.rst b/docs/usecases/containers/Routing.rst
new file mode 100644
index 00000000000..b9d3bc97638
--- /dev/null
+++ b/docs/usecases/containers/Routing.rst
@@ -0,0 +1,266 @@
+.. _Routing:
+
+.. toctree::
+
+Connecting the two Containers
+_____________________________
+
+Now we will connect the two Linux containers to VPP and ping between them.
+
+Enter container *cone*, and check the current network configuration:
+
+.. code-block:: console
+
+ root@cone:/# ip -o a
+ 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
+ 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
+ 30: veth0 inet 10.0.3.157/24 brd 10.0.3.255 scope global veth0\ valid_lft forever preferred_lft forever
+ 30: veth0 inet6 fe80::216:3eff:fee2:d0ba/64 scope link \ valid_lft forever preferred_lft forever
+ 32: veth_link1 inet6 fe80::2c9d:83ff:fe33:37e/64 scope link \ valid_lft forever preferred_lft forever
+
+You can see that there are three network interfaces: *lo*, *veth0*, and *veth_link1*.
+
+Notice that *veth_link1* has no assigned IPv4 address.
+
+Check if the interfaces are down or up:
+
+.. code-block:: console
+
+ root@cone:/# ip link
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ 30: veth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
+ link/ether 00:16:3e:e2:d0:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
+ 32: veth_link1@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
+ link/ether 2e:9d:83:33:03:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
+
+.. _networkNote:
+
+.. note::
+
+   Take note of the interface index for **veth_link1**. In our case it is 32, and its peer index (on the host machine, not in the container) is 33, shown by **veth_link1@if33**. Yours will most likely be different, but **please take note of both indices**.
+
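+If you would rather not read the indices off the **ip link** output, they can
+also be pulled from sysfs. This is a minimal alternative, assuming your second
+interface is named *veth_link1* as above; *iflink* holds the interface index
+of the veth peer on the host side:
+
+.. code-block:: console
+
+    root@cone:/# cat /sys/class/net/veth_link1/ifindex
+    32
+    root@cone:/# cat /sys/class/net/veth_link1/iflink
+    33
+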
+Make sure your loopback interface is up, then assign an IP address and default gateway to *veth_link1*:
+
+.. code-block:: console
+
+ root@cone:/# ip link set dev lo up
+ root@cone:/# ip addr add 172.16.1.2/24 dev veth_link1
+ root@cone:/# ip link set dev veth_link1 up
+ root@cone:/# dhclient -r
+ root@cone:/# ip route add default via 172.16.1.1 dev veth_link1
+
+Here, the IP is 172.16.1.2/24 and the gateway is 172.16.1.1.
+
+Run some commands to verify the changes:
+
+.. code-block:: console
+
+    root@cone:/# ip -o a
+    1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
+    1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
+    30: veth0 inet6 fe80::216:3eff:fee2:d0ba/64 scope link \ valid_lft forever preferred_lft forever
+    32: veth_link1 inet 172.16.1.2/24 scope global veth_link1\ valid_lft forever preferred_lft forever
+    32: veth_link1 inet6 fe80::2c9d:83ff:fe33:37e/64 scope link \ valid_lft forever preferred_lft forever
+
+    root@cone:/# route
+    Kernel IP routing table
+    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+    default         172.16.1.1      0.0.0.0         UG    0      0        0 veth_link1
+    172.16.1.0      *               255.255.255.0   U     0      0        0 veth_link1
+
+
+We see that the IP has been assigned, as well as our default gateway.
+
+Now exit this container and repeat this process with container *ctwo*, except with IP 172.16.2.2/24 and gateway 172.16.2.1.
+
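+For reference, the equivalent commands inside *ctwo* would look like this (a
+sketch assuming its second interface is also named *veth_link1*):
+
+.. code-block:: console
+
+    root@ctwo:/# ip link set dev lo up
+    root@ctwo:/# ip addr add 172.16.2.2/24 dev veth_link1
+    root@ctwo:/# ip link set dev veth_link1 up
+    root@ctwo:/# dhclient -r
+    root@ctwo:/# ip route add default via 172.16.2.1 dev veth_link1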
+
+After that's done for *both* containers, exit from the container if you're in one:
+
+.. code-block:: console
+
+ root@ctwo:/# exit
+ exit
+ root@localhost:~#
+
+On the machine running the containers, run **ip link** to see the host *veth* network interfaces and their links to their respective container *veths*.
+
+.. code-block:: console
+
+ root@localhost:~# ip link
+ 1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+ link/ether 08:00:27:33:82:8a brd ff:ff:ff:ff:ff:ff
+ 3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+ link/ether 08:00:27:d9:9f:ac brd ff:ff:ff:ff:ff:ff
+ 4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+ link/ether 08:00:27:78:84:9d brd ff:ff:ff:ff:ff:ff
+ 5: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
+ link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
+ 19: veth0C2FL7@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP mode DEFAULT group default qlen 1000
+ link/ether fe:0d:da:90:c1:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
+ 21: veth8NA72P@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
+ link/ether fe:1c:9e:01:9f:82 brd ff:ff:ff:ff:ff:ff link-netnsid 1
+ 31: vethXQMY4C@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP mode DEFAULT group default qlen 1000
+ link/ether fe:9a:d9:29:40:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
+ 33: vethQL7K0C@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
+ link/ether fe:ed:89:54:47:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+
+
+Remember interface index 32 in *cone* from this :ref:`note <networkNote>`? At the bottom of the output we can see its host-side peer, index 33, named **vethQL7K0C@if32**. Take note of this interface name for the veth connected to *cone* (e.g. vethQL7K0C), and of the corresponding interface name for *ctwo*.
+
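+If the list is long, one way to find the host-side peer name by its index is a
+quick one-liner like this (a sketch, assuming the peer index 33 noted earlier):
+
+.. code-block:: console
+
+    root@localhost:~# ip -o link | awk -F': ' '$1 == 33 {print $2}'
+    vethQL7K0C@if32
+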
+With VPP running on the host machine, show the current VPP interfaces:
+
+.. code-block:: console
+
+    root@localhost:~# vppctl show inter
+                  Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
+    local0                            0     down          0/0/0/0
+
+Which should only output local0.
+
+Based on the network interface names discussed previously (which are specific to this system, so substitute your own), we can create the VPP host-interfaces:
+
+.. code-block:: console
+
+ root@localhost:~# vppctl create host-interface name vethQL7K0C
+ root@localhost:~# vppctl create host-interface name veth8NA72P
+
+Verify they have been set up properly:
+
+.. code-block:: console
+
+    root@localhost:~# vppctl show inter
+                  Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
+    host-vethQL7K0C                   1     down         9000/0/0/0
+    host-veth8NA72P                   2     down         9000/0/0/0
+    local0                            0     down          0/0/0/0
+
+Which should output *three network interfaces*: local0, and the two host network interfaces linked to the container veths.
+
+
+Set their state to up:
+
+.. code-block:: console
+
+ root@localhost:~# vppctl set interface state host-vethQL7K0C up
+ root@localhost:~# vppctl set interface state host-veth8NA72P up
+
+Verify they are now up:
+
+.. code-block:: console
+
+    root@localhost:~# vppctl show inter
+                  Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
+    host-vethQL7K0C                   1      up          9000/0/0/0
+    host-veth8NA72P                   2      up          9000/0/0/0
+    local0                            0     down          0/0/0/0
+
+
+Add IP addresses for the other end of each veth link:
+
+.. code-block:: console
+
+ root@localhost:~# vppctl set interface ip address host-vethQL7K0C 172.16.1.1/24
+ root@localhost:~# vppctl set interface ip address host-veth8NA72P 172.16.2.1/24
+
+
+Verify the addresses are set properly by looking at the L3 table:
+
+.. code-block:: console
+
+    root@localhost:~# vppctl show inter addr
+    host-vethQL7K0C (up):
+      L3 172.16.1.1/24
+    host-veth8NA72P (up):
+      L3 172.16.2.1/24
+    local0 (dn):
+
+Or by looking at the FIB:
+
+.. code-block:: console
+
+    root@localhost:~# vppctl show ip fib
+    ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:default-route:1, ]
+    0.0.0.0/0
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    0.0.0.0/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    172.16.1.0/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:9 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    172.16.1.0/24
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:8 to:[0:0]]
+        [0] [@4]: ipv4-glean: host-vethQL7K0C: mtu:9000 ffffffffffff02fec953f98c0806
+    172.16.1.1/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:13 to:[0:0]]
+        [0] [@2]: dpo-receive: 172.16.1.1 on host-vethQL7K0C
+    172.16.1.255/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:11 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    172.16.2.0/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:15 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    172.16.2.0/24
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:14 to:[0:0]]
+        [0] [@4]: ipv4-glean: host-veth8NA72P: mtu:9000 ffffffffffff02fe305400e80806
+    172.16.2.1/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:19 to:[0:0]]
+        [0] [@2]: dpo-receive: 172.16.2.1 on host-veth8NA72P
+    172.16.2.255/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:17 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    224.0.0.0/4
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    240.0.0.0/4
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+    255.255.255.255/32
+      unicast-ip4-chain
+      [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
+        [0] [@0]: dpo-drop ip4
+
+At long last you probably want to see some pings:
+
+.. code-block:: console
+
+ root@localhost:~# lxc-attach -n cone -- ping -c3 172.16.2.2
+ PING 172.16.2.2 (172.16.2.2) 56(84) bytes of data.
+ 64 bytes from 172.16.2.2: icmp_seq=1 ttl=63 time=0.102 ms
+ 64 bytes from 172.16.2.2: icmp_seq=2 ttl=63 time=0.189 ms
+ 64 bytes from 172.16.2.2: icmp_seq=3 ttl=63 time=0.150 ms
+
+ --- 172.16.2.2 ping statistics ---
+ 3 packets transmitted, 3 received, 0% packet loss, time 1999ms
+ rtt min/avg/max/mdev = 0.102/0.147/0.189/0.035 ms
+
+ root@localhost:~# lxc-attach -n ctwo -- ping -c3 172.16.1.2
+ PING 172.16.1.2 (172.16.1.2) 56(84) bytes of data.
+ 64 bytes from 172.16.1.2: icmp_seq=1 ttl=63 time=0.111 ms
+ 64 bytes from 172.16.1.2: icmp_seq=2 ttl=63 time=0.089 ms
+ 64 bytes from 172.16.1.2: icmp_seq=3 ttl=63 time=0.096 ms
+
+ --- 172.16.1.2 ping statistics ---
+ 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
+ rtt min/avg/max/mdev = 0.089/0.098/0.111/0.014 ms
+
+
+Which should send/receive three packets for each command.
+
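+If the pings do not go through, VPP's packet tracer can show what is happening
+to the traffic. A sketch of one way to use it (the **af-packet-input** node is
+where packets from host-interfaces enter VPP's graph):
+
+.. code-block:: console
+
+    root@localhost:~# vppctl trace add af-packet-input 10
+    root@localhost:~# lxc-attach -n cone -- ping -c1 172.16.2.2
+    root@localhost:~# vppctl show trace
+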
+This is the end of this guide. Great work!
diff --git a/docs/usecases/containers/containerCreation.rst b/docs/usecases/containers/containerCreation.rst
new file mode 100644
index 00000000000..bb116883e7d
--- /dev/null
+++ b/docs/usecases/containers/containerCreation.rst
@@ -0,0 +1,125 @@
+.. _containerCreation:
+
+.. toctree::
+
+Creating Containers
+___________________
+
+Make sure you have gone through :ref:`installingVPP` on the system you want to create containers on.
+
+After VPP is installed, get root privileges with:
+
+.. code-block:: console
+
+ $ sudo bash
+
+Then install the packages needed for containers, *bridge-utils* and *lxc*:
+
+.. code-block:: console
+
+ # apt-get install bridge-utils lxc
+
+As quoted from the `lxc.conf manpage <https://linuxcontainers.org/it/lxc/manpages/man5/lxc.conf.5.html>`_, "container configuration is held in the config stored in the container's directory.
+A basic configuration is generated at container creation time with the default's recommended for the chosen template as well as extra default keys coming from the default.conf file."
+
+"That *default.conf* file is either located at /etc/lxc/default.conf or for unprivileged containers at ~/.config/lxc/default.conf."
+
+Since we want to ping between two containers, we'll need to **add to this file**.
+
+Look at the contents of *default.conf*, which should initially look like this:
+
+.. code-block:: console
+
+ # cat /etc/lxc/default.conf
+ lxc.network.type = veth
+ lxc.network.link = lxcbr0
+ lxc.network.flags = up
+ lxc.network.hwaddr = 00:16:3e:xx:xx:xx
+
+As you can see, by default there is one veth interface.
+
+Now you will *append to this file*, so that each container you create will have an interface attached to the Linux bridge, as well as a second, unattached interface that VPP will consume later.
+
+You can do this by piping *echo* output into *tee*, where each line is separated with a newline character *\\n* as shown below. Alternatively, you can manually add to this file with a text editor such as **vi**, but make sure you have root privileges.
+
+.. code-block:: console
+
+ # echo -e "lxc.network.name = veth0\nlxc.network.type = veth\nlxc.network.name = veth_link1" | sudo tee -a /etc/lxc/default.conf
+
+Inspect the contents again to verify the file was indeed modified:
+
+.. code-block:: console
+
+ # cat /etc/lxc/default.conf
+ lxc.network.type = veth
+ lxc.network.link = lxcbr0
+ lxc.network.flags = up
+ lxc.network.hwaddr = 00:16:3e:xx:xx:xx
+ lxc.network.name = veth0
+ lxc.network.type = veth
+ lxc.network.name = veth_link1
+
+
+After this, we're ready to create the containers.
+
+Create an Ubuntu Xenial container named *cone*:
+
+.. code-block:: console
+
+ # lxc-create -t download -n cone -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
+
+
+If successful, you'll see output similar to this:
+
+.. code-block:: console
+
+ You just created an Ubuntu xenial amd64 (20180625_07:42) container.
+
+ To enable SSH, run: apt install openssh-server
+ No default root or user password are set by LXC.
+
+
+Make another container named *ctwo*:
+
+.. code-block:: console
+
+ # lxc-create -t download -n ctwo -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
+
+
+List your containers to verify they exist:
+
+
+.. code-block:: console
+
+ # lxc-ls
+ cone ctwo
+
+
+Start the first container:
+
+.. code-block:: console
+
+ # lxc-start --name cone
+
+And verify it's running:
+
+.. code-block:: console
+
+    # lxc-ls --fancy
+    NAME STATE   AUTOSTART GROUPS IPV4 IPV6
+    cone RUNNING 0         -      -    -
+    ctwo STOPPED 0         -      -    -
+
+
+.. note::
+
+ Here are some `lxc container commands <https://help.ubuntu.com/lts/serverguide/lxc.html.en-GB#lxc-basic-usage>`_ you may find useful:
+
+
+ .. code-block:: console
+
+ $ sudo lxc-ls --fancy
+ $ sudo lxc-start --name u1 --daemon
+ $ sudo lxc-info --name u1
+ $ sudo lxc-stop --name u1
+ $ sudo lxc-destroy --name u1
diff --git a/docs/usecases/containers/containerSetup.rst b/docs/usecases/containers/containerSetup.rst
new file mode 100644
index 00000000000..8c458f77cfd
--- /dev/null
+++ b/docs/usecases/containers/containerSetup.rst
@@ -0,0 +1,49 @@
+.. _containerSetup:
+
+.. toctree::
+
+Container packages
+==================
+
+Now we can go into container *cone*, install VPP and its prerequisites, and run a few additional setup commands.
+
+To enter our container via the shell, type:
+
+.. code-block:: console
+
+ # lxc-attach -n cone
+ root@cone:/#
+
+Run the Linux DHCP setup and install VPP:
+
+.. code-block:: console
+
+    root@cone:/# resolvconf -d eth0
+    root@cone:/# dhclient
+    root@cone:/# apt-get install -y wget
+    root@cone:/# echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+    root@cone:/# apt-get update
+    root@cone:/# apt-get install -y --force-yes vpp
+    root@cone:/# sh -c 'printf "\ndpdk {\n no-pci\n}\n" >> /etc/vpp/startup.conf'
+
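+The appended stanza at the end of */etc/vpp/startup.conf* should look like the
+following; *no-pci* keeps VPP's DPDK plugin from probing for PCI devices,
+which the container does not have:
+
+.. code-block:: console
+
+    dpdk {
+     no-pci
+    }
+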
+After this is done, start VPP in this container:
+
+.. code-block:: console
+
+ root@cone:/# service vpp start
+
+Exit this container with the **exit** command (you *may* need to run **exit** twice):
+
+.. code-block:: console
+
+ root@cone:/# exit
+ exit
+ root@cone:/# exit
+ exit
+ root@localhost:~#
+
+Repeat the container setup on this page for the second container **ctwo**. Go to the end of the previous page if you forgot how to start a container.
+
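+For convenience, starting and entering the second container looks like this (mirroring the *cone* steps above):
+
+.. code-block:: console
+
+    root@localhost:~# lxc-start --name ctwo
+    root@localhost:~# lxc-attach -n ctwo
+    root@ctwo:/#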
+
+
+
diff --git a/docs/usecases/containers/index.rst b/docs/usecases/containers/index.rst
new file mode 100644
index 00000000000..65bf2aee5de
--- /dev/null
+++ b/docs/usecases/containers/index.rst
@@ -0,0 +1,13 @@
+.. _containers:
+
+VPP with Containers
+====================
+
+This section will cover connecting two Linux containers with VPP. A container is a lighter-weight, faster alternative to a VM: it shares the host kernel rather than simulating separate hardware and running its own kernel. You can read more about `Linux containers here <https://linuxcontainers.org/>`_.
+
+
+.. toctree::
+
+ containerCreation
+ containerSetup
+ Routing