author | John DeNisco <jdenisco@cisco.com> | 2018-08-01 10:38:23 -0400
---|---|---
committer | Damjan Marion <dmarion@me.com> | 2018-08-01 20:21:23 +0000
commit | a14c16674023bd6672ca49e3551c707702711050 (patch) |
tree | 8f6fcd2a22356a0d16dce12f3b2bbdefbb37d7e7 |
parent | e126cc53317fcc38970d244bea2ddaf11e47702f (diff) |
docs: change code blocks from "shell" to "console"
Change-Id: I136fccfc06e07fb68d11df686c59687362fb8827
Signed-off-by: John DeNisco <jdenisco@cisco.com>
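
The edit is mechanical: each Sphinx ``code-block`` directive in the files listed below switches its highlighting language from ``shell`` to ``console``, the Pygments lexer for interactive sessions (prompt plus output) rather than for script text. A representative hunk, taken from the first file in the diff, looks like this:

```diff
-.. code-block:: shell
+.. code-block:: console

     $ vagrant ssh <id>
```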
-rw-r--r-- | docs/reference/vppvagrant/VagrantVMSetup.rst | 12
-rw-r--r-- | docs/reference/vppvagrant/boxSetup.rst | 8
-rw-r--r-- | docs/reference/vppvagrant/installingVboxVagrant.rst | 6
-rw-r--r-- | docs/reference/vppvagrant/settingENV.rst | 2
-rw-r--r-- | docs/usecases/Routing.rst | 30
-rw-r--r-- | docs/usecases/containerCreation.rst | 28
-rw-r--r-- | docs/usecases/containerSetup.rst | 8
7 files changed, 47 insertions, 47 deletions
diff --git a/docs/reference/vppvagrant/VagrantVMSetup.rst b/docs/reference/vppvagrant/VagrantVMSetup.rst
index 769c6186170..f9f4304ed94 100644
--- a/docs/reference/vppvagrant/VagrantVMSetup.rst
+++ b/docs/reference/vppvagrant/VagrantVMSetup.rst
@@ -6,7 +6,7 @@ Accessing your VM
 ^^^^^^^^^^^^^^^^^

 ssh into the newly created box:

-.. code-block:: shell
+.. code-block:: console

     $ vagrant ssh <id>
@@ -28,7 +28,7 @@ Sample output looks like:

 Become the root with:

-.. code-block:: shell
+.. code-block:: console

     $ sudo bash
@@ -38,19 +38,19 @@ When you ssh into your Vagrant box you will be placed in the directory */home/va

 For Ubuntu systems:

-.. code-block:: shell
+.. code-block:: console

     # dpkg -i *.deb

 For CentOS systems:

-.. code-block:: shell
+.. code-block:: console

     # rpm -Uvh *.rpm

 Since VPP is now installed, you can start running VPP with:

-.. code-block:: shell
+.. code-block:: console

-    # service vpp start
\ No newline at end of file
+    # service vpp start
diff --git a/docs/reference/vppvagrant/boxSetup.rst b/docs/reference/vppvagrant/boxSetup.rst
index d23033da856..374ba349458 100644
--- a/docs/reference/vppvagrant/boxSetup.rst
+++ b/docs/reference/vppvagrant/boxSetup.rst
@@ -82,7 +82,7 @@ __________

 Once you're satisfied with your *Vagrantfile*, boot the box with:

-.. code-block:: shell
+.. code-block:: console

     $ vagrant up
@@ -106,19 +106,19 @@ To confirm it is up, show the status and information of Vagrant boxes with:

 To poweroff your VM, type:

-   .. code-block:: shell
+   .. code-block:: console

       $ vagrant halt <id>

 To resume your VM, type:

-   .. code-block:: shell
+   .. code-block:: console

       $ vagrant resume <id>

 To destroy your VM, type:

-   .. code-block:: shell
+   .. code-block:: console

       $ vagrant destroy <id>
diff --git a/docs/reference/vppvagrant/installingVboxVagrant.rst b/docs/reference/vppvagrant/installingVboxVagrant.rst
index 1bd4ba076d7..018ce6cfb53 100644
--- a/docs/reference/vppvagrant/installingVboxVagrant.rst
+++ b/docs/reference/vppvagrant/installingVboxVagrant.rst
@@ -15,7 +15,7 @@ If you're on CentOS, follow the `steps here <https://wiki.centos.org/HowTos/Virt

 If you're on Ubuntu, perform:

-.. code-block:: shell
+.. code-block:: console

     $ sudo apt-get install virtualbox
@@ -24,13 +24,13 @@ __________________

 Here we are on a 64-bit version of CentOS, downloading and installing Vagrant 2.1.2:

-.. code-block:: shell
+.. code-block:: console

     $ yum -y install https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.rpm

 This is a similar command, but on a 64-bit version of Debian:

-.. code-block:: shell
+.. code-block:: console

     $ sudo apt-get install https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.deb
diff --git a/docs/reference/vppvagrant/settingENV.rst b/docs/reference/vppvagrant/settingENV.rst
index 269b36bda84..8bd7847d36c 100644
--- a/docs/reference/vppvagrant/settingENV.rst
+++ b/docs/reference/vppvagrant/settingENV.rst
@@ -24,6 +24,6 @@ Adding your own ENV variables is easy. For example, if you wanted to setup proxi

 Once you're finished with *env.sh* script, and you are in the directory containing *env.sh*, run the script to set the ENV variables with:

-.. code-block:: shell
+.. code-block:: console

     $ source ./env.sh
diff --git a/docs/usecases/Routing.rst b/docs/usecases/Routing.rst
index 0c5908fd57e..cecc2637108 100644
--- a/docs/usecases/Routing.rst
+++ b/docs/usecases/Routing.rst
@@ -9,7 +9,7 @@ Now for connecting these two linux containers to VPP and pinging between them.

 Enter container *cone*, and check the current network configuration:

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# ip -o a
     1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
@@ -24,7 +24,7 @@ Notice that *veth_link1* has no assigned IP.

 Check if the interfaces are down or up:

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# ip link
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
@@ -42,7 +42,7 @@ Check if the interfaces are down or up:

 Make sure your loopback interface is up, and assign an IP and gateway to veth_link1.

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# ip link set dev lo up
     root@cone:/# ip addr add 172.16.1.2/24 dev veth_link1
@@ -54,7 +54,7 @@ Here, the IP is 172.16.1.2/24 and the gateway is 172.16.1.1.

 Run some commands to verify the changes:

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# ip -o a
     1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
@@ -77,7 +77,7 @@ Now exit this container and repeat this process with container *ctwo*, except wi

 After thats done for *both* containers, exit from the container if you're in one:

-.. code-block:: shell
+.. code-block:: console

     root@ctwo:/# exit
     exit
@@ -85,7 +85,7 @@ After thats done for *both* containers, exit from the container if you're in one

 In the machine running the containers, run **ip link** to see the host *veth* network interfaces, and their link with their respective *container veth's*.

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# ip link
     1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1
@@ -112,7 +112,7 @@ Remember our network interface index 32 in *cone* from this :ref:`note <networkN

 With VPP in the host machine, show current VPP interfaces:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl show inter
               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
@@ -122,14 +122,14 @@ Which should only output local0.

 Based on the names of the network interfaces discussed previously, which are specific to my systems, we can create VPP host-interfaces:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl create host-interface name vethQL7K0C
     root@localhost:~# vppctl create host-interface name veth8NA72P

 Verify they have been set up properly:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl show inter
               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
@@ -142,14 +142,14 @@ Which should output *three network interfaces*, local0, and the other two host n

 Set their state to up:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl set interface state host-vethQL7K0C up
     root@localhost:~# vppctl set interface state host-veth8NA72P up

 Verify they are now up:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl show inter
               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
@@ -160,7 +160,7 @@ Verify they are now up:

 Add IP addresses for the other end of each veth link:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl set interface ip address host-vethQL7K0C 172.16.1.1/24
     root@localhost:~# vppctl set interface ip address host-veth8NA72P 172.16.2.1/24
@@ -168,7 +168,7 @@ Add IP addresses for the other end of each veth link:

 Verify the addresses are set properly by looking at the L3 table:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl show inter addr
     host-vethQL7K0C (up):
@@ -179,7 +179,7 @@ Verify the addresses are set properly by looking at the L3 table:

 Or looking at the FIB by doing:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# vppctl show ip fib
     ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:default-route:1, ]
@@ -238,7 +238,7 @@ Or looking at the FIB by doing:

 At long last you probably want to see some pings:

-.. code-block:: shell
+.. code-block:: console

     root@localhost:~# lxc-attach -n cone -- ping -c3 172.16.2.2
     PING 172.16.2.2 (172.16.2.2) 56(84) bytes of data.
diff --git a/docs/usecases/containerCreation.rst b/docs/usecases/containerCreation.rst
index b9344f35ce5..fb38b3ed135 100644
--- a/docs/usecases/containerCreation.rst
+++ b/docs/usecases/containerCreation.rst
@@ -7,13 +7,13 @@ ___________________

 First you should have root privileges:

-.. code-block:: shell
+.. code-block:: console

-    $ sudo bash
+    ~$ sudo bash

 Then install packages for containers such as lxc:

-.. code-block:: shell
+.. code-block:: console

     # apt-get install bridge-utils lxc
@@ -26,9 +26,9 @@ Since we want to ping between two containers, we'll need to **add to this file**

 Look at the contents of *default.conf*, which should initially look like this:

-.. code-block:: shell
+.. code-block:: console

-    # cat /etc/lxc/default.conf
+    # cat /etc/lxc/default.conf
     lxc.network.type = veth
     lxc.network.link = lxcbr0
     lxc.network.flags = up
@@ -40,15 +40,15 @@ Now you will *append to this file* so that each container you create will have a

 You can do this by piping *echo* output into *tee*, where each line is separated with a newline character *\\n* as shown below. Alternatively, you can manually add to this file with a text editor such as **vi**, but make sure you have root privileges.

-.. code-block:: shell
+.. code-block:: console

     # echo -e "lxc.network.name = veth0\nlxc.network.type = veth\nlxc.network.name = veth_link1" | sudo tee -a /etc/lxc/default.conf

 Inspect the contents again to verify the file was indeed modified:

-.. code-block:: shell
+.. code-block:: console

-    # cat /etc/lxc/default.conf
+    # cat /etc/lxc/default.conf
     lxc.network.type = veth
     lxc.network.link = lxcbr0
     lxc.network.flags = up
@@ -62,7 +62,7 @@ After this, we're ready to create the containers.

 Creates an Ubuntu Xenial container named "cone".

-.. code-block:: shell
+.. code-block:: console

     # lxc-create -t download -n cone -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
@@ -79,7 +79,7 @@ If successful, you'll get an output similar to this:

 Make another container "ctwo".

-.. code-block:: shell
+.. code-block:: console

     # lxc-create -t download -n ctwo -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80
@@ -87,7 +87,7 @@ Make another container "ctwo".

 List your containers to verify they exist:

-.. code-block:: shell
+.. code-block:: console

     # lxc-ls
     cone ctwo
@@ -95,13 +95,13 @@ List your containers to verify they exist:

 Start the first container:

-.. code-block:: shell
+.. code-block:: console

     # lxc-start --name cone

 And verify its running:

-.. code-block:: shell
+.. code-block:: console

     # lxc-ls --fancy
     NAME STATE   AUTOSTART GROUPS IPV4 IPV6
@@ -114,7 +114,7 @@ And verify its running:

 Here are some `lxc container commands <https://help.ubuntu.com/lts/serverguide/lxc.html.en-GB#lxc-basic-usage>`_ you may find useful:

-   .. code-block:: shell
+   .. code-block:: console

      sudo lxc-ls --fancy
      sudo lxc-start --name u1 --daemon
diff --git a/docs/usecases/containerSetup.rst b/docs/usecases/containerSetup.rst
index e0fd81eebc3..d1c230daf24 100644
--- a/docs/usecases/containerSetup.rst
+++ b/docs/usecases/containerSetup.rst
@@ -9,14 +9,14 @@ Now we can go into container *cone* and install prerequisites such as VPP, and p

 To enter our container via the shell, type:

-.. code-block:: shell
+.. code-block:: console

     # lxc-attach -n cone
     root@cone:/#

 Run the linux DHCP setup and install VPP:

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# resolvconf -d eth0
     root@cone:/# dhclient
@@ -28,13 +28,13 @@ Run the linux DHCP setup and install VPP:

 After this is done, start VPP in this container:

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# service vpp start

 Exit this container with the **exit** command (you *may* need to run **exit** twice):

-.. code-block:: shell
+.. code-block:: console

     root@cone:/# exit
     exit
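
Not part of the commit, but a quick sanity check after applying it is a recursive grep for any remaining ``shell`` directives under the two documentation trees named in the diffstat (other files in those directories may still legitimately match, since this change only touches the seven files listed above):

```console
$ grep -rn "code-block:: shell" docs/reference/vppvagrant docs/usecases
```

If the command prints nothing for the converted files, every code block in them now uses the ``console`` lexer.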