..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

Live Migration of VM with Virtio on host running vhost_user
===========================================================

Overview
--------

This guide covers live migration of a VM with the DPDK Virtio PMD on a host which is
running the Vhost sample application (vhost-switch) and using the DPDK PMD (ixgbe or i40e).

The Vhost sample application uses VMDQ, so SRIOV must be disabled on the NICs.

The following sections show an example of how to do this migration.

Test Setup
----------

To test live migration, two servers with identical operating systems installed are used.
KVM and QEMU are also required on both servers.

QEMU 2.5 is required for Live Migration of a VM with vhost_user running on the hosts.
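
Before starting, it is worth confirming that the QEMU version on each host meets
this minimum. The sketch below is one way to do so, assuming ``sort -V``
(GNU coreutils) is available; the ``sed`` pattern shown in the comment for
extracting the live version string is illustrative.

.. code-block:: sh

   #!/bin/sh
   # version_ok REQUIRED INSTALLED: succeeds if INSTALLED >= REQUIRED.
   # sort -V orders version strings; if REQUIRED sorts first (or equal),
   # the installed version is new enough.
   version_ok() {
       [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
   }

   # On a real host the installed version could be extracted with e.g.:
   #   qemu-system-x86_64 --version | sed -n 's/.*version \([0-9.]*\).*/\1/p'
   version_ok 2.5.0 2.5.1 && echo "2.5.1 is new enough"
   version_ok 2.5.0 2.4.1 || echo "2.4.1 is too old"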

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.

.. _figure_lm_vhost_user:

.. figure:: img/lm_vhost_user.*

Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:02:00.0

For the Niantic NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:09:00.0

On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the Fortville and Niantic NICs, reset SRIOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh

On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu)

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Set up DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_virtio_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all

Virtio traffic is seen at P1 and P2.

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:03:00.0

For the Niantic NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:06:00.0

On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the Fortville and Niantic NICs, reset SRIOV and run
the vhost_user sample application (vhost-switch) on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2. The VM starts paused in the
``inmigrate`` state because it was launched with the ``-incoming`` option
and is waiting for the incoming migration stream.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
   (qemu)

On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   skipped: 0 pages
   normal: 94073 pages
   normal bytes: 376292 kbytes
   dirty sync count: 2
   (qemu) quit
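
The reported throughput can be cross-checked from the other fields:
``transferred ram`` divided by ``total time``. A small awk sketch, assuming the
``kbytes`` figures are KiB; the result is close to, though not exactly, the
value QEMU prints, since QEMU measures over its own sampling window.

.. code-block:: sh

   #!/bin/sh
   # Cross-check the "throughput" figure reported by 'info migrate'
   # from "transferred ram" (kbytes) and "total time" (milliseconds).
   awk 'BEGIN {
       transferred_kib = 379699   # "transferred ram"
       total_ms        = 11619    # "total time"
       mbps = transferred_kib * 1024 * 8 / (total_ms / 1000) / 1e6
       printf "throughput ~ %.1f mbps\n", mbps   # ~ 267.7 mbps
   }'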

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Press the Enter key to bring up the testpmd prompt.

.. code-block:: console

   testpmd>

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2**

.. code-block:: console

   (qemu) info status
   VM status: running

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.


.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------

reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
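
The two reset scripts in this section differ only in the PCI BDFs they touch,
so the common pattern can be factored into a function. The sketch below
(``reset_vfs`` is an illustrative helper, not part of DPDK) is demonstrated
against a scratch file so it can be tried without real hardware.

.. code-block:: sh

   #!/bin/sh
   # Show, zero, and re-read an SR-IOV max_vfs attribute. On a real
   # host the argument would be a sysfs path such as
   #   /sys/bus/pci/devices/0000:02:00.0/max_vfs
   reset_vfs() {
       cat "$1"        # current VF count
       echo 0 > "$1"   # disable SR-IOV
       cat "$1"        # confirm it is now 0
   }

   # Demonstration against a temporary file standing in for sysfs:
   tmp=$(mktemp)
   echo 4 > "$tmp"
   reset_vfs "$tmp"    # prints 4, then 0
   rm -f "$tmp"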

vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with vhost_user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait
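
Each vhost-user port in the command above is a triplet: a ``chardev`` on the
unix socket, a ``vhost-user`` netdev bound to that chardev, and a
``virtio-net-pci`` device bound to that netdev. As a sketch, the same
arguments can be built in a loop (the MAC addresses are the illustrative
ones from the script):

.. code-block:: sh

   #!/bin/sh
   # Build the per-port chardev/netdev/device triplets used above.
   # Both ports share the same vhost-user socket, as in the script.
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"
   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   NET_ARGS=""
   i=0
   for MAC in CC:BB:BB:BB:BB:BB DD:BB:BB:BB:BB:BB; do
       NET_ARGS="$NET_ARGS -chardev socket,id=chr$i,path=$SOCKET_PATH"
       NET_ARGS="$NET_ARGS -netdev type=vhost-user,id=net$((i + 1)),chardev=chr$i,vhostforce"
       NET_ARGS="$NET_ARGS -device virtio-net-pci,netdev=net$((i + 1)),mac=$MAC,$VIRTIO_OPTIONS"
       i=$((i + 1))
   done
   echo "$NET_ARGS"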

connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   telnet 0 3333

reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs

vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with vhost user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait

.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_virtio_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vhost_user script.
   # The virtio ports are 0000:00:03.0 and 0000:00:04.0

   cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   ifconfig -a
   /root/dpdk/usertools/dpdk-devbind.py --status

   rmmod virtio-pci

   modprobe uio
   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status
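
The ``nr_hugepages`` write above requests 1024 hugepages of 2048 kB each;
the total requested is pages multiplied by page size. The script re-reads the
file afterwards because the kernel only grants as many pages as it can
actually allocate from free memory.

.. code-block:: sh

   #!/bin/sh
   # Memory requested by the hugepage reservation above.
   PAGES=1024
   PAGE_KB=2048
   echo "$((PAGES * PAGE_KB / 1024)) MB requested"   # prints 2048 MB requested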

run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with vhost_user sample app.
   # test system has 8 cpus (0-7), use cpus 2-7 for VM

   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
   -l 0-5 -n 4 --socket-mem 350 -- --burst=64 -i
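
In the EAL arguments above, ``-l 0-5`` selects six guest lcores, ``-n 4`` sets
the number of memory channels, and ``--socket-mem 350`` pre-allocates 350 MB of
hugepage memory. A small helper (illustrative, not part of DPDK) to count the
lcores selected by a ``-l`` style core list:

.. code-block:: sh

   #!/bin/sh
   # Count the lcores in an EAL core list such as "0-5" or "0,2,4-7":
   # split on commas, then add the span of each range (or 1 for a
   # single core).
   count_lcores() {
       echo "$1" | tr ',' '\n' | awk -F- '{ n += (NF == 2) ? $2 - $1 + 1 : 1 } END { print n }'
   }

   count_lcores 0-5       # prints 6
   count_lcores 0,2,4-7   # prints 6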