Diffstat (limited to 'doc/guides/prog_guide')
-rw-r--r--  doc/guides/prog_guide/dev_kit_build_system.rst             |  59
-rw-r--r--  doc/guides/prog_guide/img/ivshmem.png                      | bin 44920 -> 0 bytes
-rw-r--r--  doc/guides/prog_guide/index.rst                            |   1
-rw-r--r--  doc/guides/prog_guide/ivshmem_lib.rst                      | 160
-rw-r--r--  doc/guides/prog_guide/kernel_nic_interface.rst             |   3
-rw-r--r--  doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst   |  14
-rw-r--r--  doc/guides/prog_guide/multi_proc_support.rst               |   2
-rw-r--r--  doc/guides/prog_guide/overview.rst                         |   2
-rw-r--r--  doc/guides/prog_guide/port_hotplug_framework.rst           |   2
-rw-r--r--  doc/guides/prog_guide/profile_app.rst                      |  64
-rw-r--r--  doc/guides/prog_guide/source_org.rst                       |   1
-rw-r--r--  doc/guides/prog_guide/vhost_lib.rst                        |  94
12 files changed, 117 insertions, 285 deletions
diff --git a/doc/guides/prog_guide/dev_kit_build_system.rst b/doc/guides/prog_guide/dev_kit_build_system.rst
index fa2411f7..19de1563 100644
--- a/doc/guides/prog_guide/dev_kit_build_system.rst
+++ b/doc/guides/prog_guide/dev_kit_build_system.rst
@@ -53,62 +53,7 @@ Build Directory Concept
~~~~~~~~~~~~~~~~~~~~~~~
After installation, a build directory structure is created.
-Each build directory contains include files, libraries, and applications:
-
-.. code-block:: console
-
- ~/DPDK$ ls
- app MAINTAINERS
- config Makefile
- COPYRIGHT mk
- doc scripts
- examples lib
- tools x86_64-native-linuxapp-gcc
- x86_64-native-linuxapp-icc i686-native-linuxapp-gcc
- i686-native-linuxapp-icc
-
- ...
- ~/DEV/DPDK$ ls i686-native-linuxapp-gcc
-
- app build buildtools include kmod lib Makefile
-
-
- ~/DEV/DPDK$ ls i686-native-linuxapp-gcc/app/
- cmdline_test dump_cfg test testpmd
- cmdline_test.map dump_cfg.map test.map
- testpmd.map
-
-
- ~/DEV/DPDK$ ls i686-native-linuxapp-gcc/lib/
-
- libethdev.a librte_hash.a librte_mbuf.a librte_pmd_ixgbe.a
-
- librte_cmdline.a librte_lpm.a librte_mempool.a librte_ring.a
-
- librte_eal.a librte_pmd_e1000.a librte_timer.a
-
-
- ~/DEV/DPDK$ ls i686-native-linuxapp-gcc/include/
- arch rte_cpuflags.h rte_memcpy.h
- cmdline_cirbuf.h rte_cycles.h rte_memory.h
- cmdline.h rte_debug.h rte_mempool.h
- cmdline_parse_etheraddr.h rte_eal.h rte_memzone.h
- cmdline_parse.h rte_errno.h rte_pci_dev_ids.h
- cmdline_parse_ipaddr.h rte_ethdev.h rte_pci.h
- cmdline_parse_num.h rte_ether.h rte_per_lcore.h
- cmdline_parse_portlist.h rte_fbk_hash.h rte_prefetch.h
- cmdline_parse_string.h rte_hash_crc.h rte_random.h
- cmdline_rdline.h rte_hash.h rte_ring.h
- cmdline_socket.h rte_interrupts.h rte_rwlock.h
- cmdline_vt100.h rte_ip.h rte_sctp.h
- exec-env rte_jhash.h rte_spinlock.h
- rte_alarm.h rte_launch.h rte_string_fns.h
- rte_atomic.h rte_lcore.h rte_tailq.h
- rte_branch_prediction.h rte_log.h rte_tcp.h
- rte_byteorder.h rte_lpm.h rte_timer.h
- rte_common.h rte_malloc.h rte_udp.h
- rte_config.h rte_mbuf.h
-
+Each build directory contains include files, libraries, and applications.
A build directory is specific to a configuration that includes architecture + execution environment + toolchain.
It is possible to have several build directories sharing the same sources with different configurations.
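
For example, two build directories with different toolchains can be generated
from the same source tree (a sketch; the output directory names are arbitrary):

.. code-block:: console

    make config T=x86_64-native-linuxapp-gcc O=build-gcc
    make O=build-gcc

    make config T=x86_64-native-linuxapp-clang O=build-clang
    make O=build-clang
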
@@ -319,7 +264,7 @@ instance the macro:
.. code-block:: c
- PMD_REGISTER_DRIVER(drv, name)
+ RTE_PMD_REGISTER_PCI(name, drv)
Creates the following symbol:
diff --git a/doc/guides/prog_guide/img/ivshmem.png b/doc/guides/prog_guide/img/ivshmem.png
deleted file mode 100644
index 2b34a2cf..00000000
--- a/doc/guides/prog_guide/img/ivshmem.png
+++ /dev/null
Binary files differ
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 07a4d354..e5a50a88 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,7 +43,6 @@ Programmer's Guide
mbuf_lib
poll_mode_drv
cryptodev_lib
- ivshmem_lib
link_bonding_poll_mode_drv_lib
timer_lib
hash_lib
diff --git a/doc/guides/prog_guide/ivshmem_lib.rst b/doc/guides/prog_guide/ivshmem_lib.rst
deleted file mode 100644
index b8a32e4c..00000000
--- a/doc/guides/prog_guide/ivshmem_lib.rst
+++ /dev/null
@@ -1,160 +0,0 @@
-.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-IVSHMEM Library
-===============
-
-The DPDK IVSHMEM library facilitates fast zero-copy data sharing among virtual machines
-(host-to-guest or guest-to-guest) by means of QEMU's IVSHMEM mechanism.
-
-The library works by providing a command line for QEMU to map several hugepages into a single IVSHMEM device.
-For the guest to know what is inside any given IVSHMEM device
-(and to distinguish between DPDK and non-DPDK IVSHMEM devices),
-a metadata file is also mapped into the IVSHMEM segment.
-No work needs to be done by the guest application to map IVSHMEM devices into memory;
-they are automatically recognized by the DPDK Environment Abstraction Layer (EAL).
-
-A typical DPDK IVSHMEM use case looks like the following.
-
-
-.. figure:: img/ivshmem.*
-
- Typical Ivshmem use case
-
-
-The same could work with several virtual machines, providing host-to-VM or VM-to-VM communication.
-The maximum number of metadata files is 32 (by default) and each metadata file can contain different (or even the same) hugepages.
-The only constraint is that each VM has to have access to the memory it is sharing with other entities (be it host or another VM).
-For example, if the user wants to share the same memzone across two VMs, each VM must have that memzone in its metadata file.
-
-IVHSHMEM Library API Overview
------------------------------
-
-The following is a simple guide to using the IVSHMEM Library API:
-
-* Call rte_ivshmem_metadata_create() to create a new metadata file.
- The metadata name is used to distinguish between multiple metadata files.
-
-* Populate each metadata file with DPDK data structures.
- This can be done using the following API calls:
-
- * rte_ivhshmem_metadata_add_memzone() to add rte_memzone to metadata file
-
- * rte_ivshmem_metadata_add_ring() to add rte_ring to metadata file
-
- * rte_ivshmem_metadata_add_mempool() to add rte_mempool to metadata file
-
-* Finally, call rte_ivshmem_metadata_cmdline_generate() to generate the command line for QEMU.
- Multiple metadata files (and thus multiple command lines) can be supplied to a single VM.
-
-.. note::
-
- Only data structures fully residing in DPDK hugepage memory work correctly.
- Supported data structures created by malloc(), mmap()
- or otherwise using non-DPDK memory cause undefined behavior and even a segmentation fault.
- Specifically, because the memzone field in an rte_ring refers to a memzone structure residing in local memory,
- accessing the memzone field in a shared rte_ring will cause an immediate segmentation fault.
-
-IVSHMEM Environment Configuration
----------------------------------
-
-The steps needed to successfully run IVSHMEM applications are the following:
-
-* Compile a special version of QEMU from sources.
-
- The source code can be found on the QEMU website (currently, version 1.4.x is supported, but version 1.5.x is known to work also),
- however, the source code will need to be patched to support using regular files as the IVSHMEM memory backend.
- The patch is not included in the DPDK package,
- but is available on the `Intel®DPDK-vswitch project webpage <https://01.org/packet-processing/intel%C2%AE-ovdk>`_
- (either separately or in a DPDK vSwitch package).
-
-* Enable IVSHMEM library in the DPDK build configuration.
-
- In the default configuration, IVSHMEM library is not compiled. To compile the IVSHMEM library,
- one has to either use one of the provided IVSHMEM targets
- (for example, x86_64-ivshmem-linuxapp-gcc),
- or set CONFIG_RTE_LIBRTE_IVSHMEM to "y" in the build configuration.
-
-* Set up hugepage memory on the virtual machine.
-
- The guest applications run as regular DPDK (primary) processes and thus need their own hugepage memory set up inside the VM.
- The process is identical to the one described in the *DPDK Getting Started Guide*.
-
-Best Practices for Writing IVSHMEM Applications
------------------------------------------------
-
-When considering the use of IVSHMEM for sharing memory, security implications need to be carefully evaluated.
-IVSHMEM is not suitable for untrusted guests, as IVSHMEM is essentially a window into the host process memory.
-This also has implications for the multiple VM scenarios.
-While the IVSHMEM library tries to share as little memory as possible,
-it is quite probable that data designated for one VM might also be present in an IVSMHMEM device designated for another VM.
-Consequently, any shared memory corruption will affect both host and all VMs sharing that particular memory.
-
-IVSHMEM applications essentially behave like multi-process applications,
-so it is important to implement access serialization to data and thread safety.
-DPDK ring structures are already thread-safe, however,
-any custom data structures that the user might need would have to be thread-safe also.
-
-Similar to regular DPDK multi-process applications,
-it is not recommended to use function pointers as functions might have different memory addresses in different processes.
-
-It is best to avoid freeing the rte_mbuf structure on a different machine from where it was allocated,
-that is, if the mbuf was allocated on the host, the host should free it.
-Consequently, any packet transmission and reception should also happen on the same machine (whether virtual or physical).
-Failing to do so may lead to data corruption in the mempool cache.
-
-Despite the IVSHMEM mechanism being zero-copy and having good performance,
-it is still desirable to do processing in batches and follow other procedures described in
-:ref:`Performance Optimization <Performance_Optimization>`.
-
-Best Practices for Running IVSHMEM Applications
------------------------------------------------
-
-For performance reasons,
-it is best to pin host processes and QEMU processes to different cores so that they do not interfere with each other.
-If NUMA support is enabled, it is also desirable to keep host process' hugepage memory and QEMU process on the same NUMA node.
-
-For the best performance across all NUMA nodes, each QEMU core should be pinned to host CPU core on the appropriate NUMA node.
-QEMU's virtual NUMA nodes should also be set up to correspond to physical NUMA nodes.
-More on how to set up DPDK and QEMU NUMA support can be found in *DPDK Getting Started Guide* and
-`QEMU documentation <http://qemu.weilnetz.de/qemu-doc.html>`_ respectively.
-A script called cpu_layout.py is provided with the DPDK package (in the tools directory)
-that can be used to identify which CPU cores correspond to which NUMA node.
-
-The QEMU IVSHMEM command line creation should be considered the last step before starting the virtual machine.
-Currently, there is no hot plug support for QEMU IVSHMEM devices,
-so one cannot add additional memory to an IVSHMEM device once it has been created.
-Therefore, the correct sequence to run an IVSHMEM application is to run host application first,
-obtain the command lines for each IVSHMEM device and then run all QEMU instances with guest applications afterwards.
-
-It is important to note that once QEMU is started, it holds on to the hugepages it uses for IVSHMEM devices.
-As a result, if the user wishes to shut down or restart the IVSHMEM host application,
-it is not enough to simply shut the application down.
-The virtual machine must also be shut down (if not, it will hold onto outdated host data).
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index fac1960b..eb16e2e3 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -102,6 +102,9 @@ Refer to rte_kni_common.h in the DPDK source code for more details.
The physical addresses will be re-mapped into the kernel address space and stored in separate KNI contexts.
+The affinity of the kernel RX thread (in both single and multi-threaded modes) is
+controlled by the force_bind and core_id configuration parameters.
+
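+A minimal sketch of how these parameters might be set, assuming the
+``struct rte_kni_conf`` fields and the ``rte_kni_alloc()`` call from
+``rte_kni.h`` (the interface name and core ID below are arbitrary examples):
+
+.. code-block:: c
+
+    struct rte_kni_conf conf;
+    struct rte_kni *kni;
+
+    memset(&conf, 0, sizeof(conf));
+    snprintf(conf.name, sizeof(conf.name), "vEth0");
+    conf.core_id = 4;     /* lcore on which the kernel RX thread should run */
+    conf.force_bind = 1;  /* pin the kernel thread to core_id */
+
+    /* pktmbuf_pool and ops are assumed to be set up elsewhere */
+    kni = rte_kni_alloc(pktmbuf_pool, &conf, &ops);
+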
The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, all those KNI interfaces not deleted will be deleted on the release operation
of the miscellaneous device (when the DPDK application is closed).
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 01ddcb91..65813c9e 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -356,7 +356,7 @@ Using Link Bonding Devices from the EAL Command Line
Link bonding devices can be created at application startup time using the
``--vdev`` EAL command line option. The device name must start with the
-eth_bond prefix followed by numbers or letters. The name must be unique for
+net_bond prefix followed by numbers or letters. The name must be unique for
each device. Each device can have multiple options arranged in a comma
separated list. Multiple devices definitions can be arranged by calling the
``--vdev`` option multiple times.
@@ -365,7 +365,7 @@ Device names and bonding options must be separated by commas as shown below:
.. code-block:: console
- $RTE_TARGET/app/testpmd -c f -n 4 --vdev 'eth_bond0,bond_opt0=..,bond opt1=..'--vdev 'eth_bond1,bond _opt0=..,bond_opt1=..'
+   $RTE_TARGET/app/testpmd -c f -n 4 --vdev 'net_bond0,bond_opt0=..,bond_opt1=..' --vdev 'net_bond1,bond_opt0=..,bond_opt1=..'
Link Bonding EAL Options
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -373,7 +373,7 @@ Link Bonding EAL Options
There are multiple ways of definitions that can be assessed and combined as
long as the following two rules are respected:
-* A unique device name, in the format of eth_bondX is provided,
+* A unique device name, in the format of net_bondX is provided,
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
@@ -465,22 +465,22 @@ Create a bonded device in round robin mode with two slaves specified by their PC
.. code-block:: console
- $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00' -- --port-topology=chained
Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
.. code-block:: console
- $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
.. code-block:: console
- $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=1, slave=0000:00a:00.01,slave=0000:004:00.00,primary=0000:00a:00.01' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=1, slave=0000:00a:00.01,slave=0000:004:00.00,primary=0000:00a:00.01' -- --port-topology=chained
Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=2, slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=2, slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index badd102e..2a996ae8 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -35,7 +35,7 @@ Multi-process Support
In the DPDK, multi-process support is designed to allow a group of DPDK processes
to work together in a simple transparent manner to perform packet processing,
-or other workloads, on Intel® architecture hardware.
+or other workloads.
To support this functionality,
a number of additions have been made to the core DPDK Environment Abstraction Layer (EAL).
diff --git a/doc/guides/prog_guide/overview.rst b/doc/guides/prog_guide/overview.rst
index 68cc75cc..9986e3cb 100644
--- a/doc/guides/prog_guide/overview.rst
+++ b/doc/guides/prog_guide/overview.rst
@@ -157,7 +157,7 @@ The mbuf library provides the facility to create and destroy buffers
that may be used by the DPDK application to store message buffers.
The message buffers are created at startup time and stored in a mempool, using the DPDK mempool library.
-This library provide an API to allocate/free mbufs, manipulate control message buffers (ctrlmbuf) which are generic message buffers,
+This library provides an API to allocate/free mbufs, manipulate control message buffers (ctrlmbuf) which are generic message buffers,
and packet buffers (pktmbuf) which are used to carry network packets.
Network Packet Buffer Management is described in :ref:`Mbuf Library <Mbuf_Library>`.
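
A minimal sketch of the pktmbuf allocate/use/free cycle, assuming a pool created
with ``rte_pktmbuf_pool_create()`` (the pool name and sizes are arbitrary
examples):

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    struct rte_mempool *mp;
    struct rte_mbuf *m;

    /* 8191 mbufs, per-lcore cache of 256, default data room size */
    mp = rte_pktmbuf_pool_create("pkt_pool", 8191, 256, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    m = rte_pktmbuf_alloc(mp);                  /* take a buffer from the pool */
    if (m != NULL) {
        char *data = rte_pktmbuf_append(m, 64); /* reserve 64 bytes of payload */
        /* ... fill data, hand the mbuf to a TX queue, ... */
        rte_pktmbuf_free(m);                    /* return it to the pool */
    }
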
diff --git a/doc/guides/prog_guide/port_hotplug_framework.rst b/doc/guides/prog_guide/port_hotplug_framework.rst
index fe6d72a6..6e4436e5 100644
--- a/doc/guides/prog_guide/port_hotplug_framework.rst
+++ b/doc/guides/prog_guide/port_hotplug_framework.rst
@@ -80,7 +80,7 @@ Port Hotplug API overview
returns the attached port number. Before calling the API, the device
should be recognized by an userspace driver I/O framework. The API
receives a pci address like "0000:01:00.0" or a virtual device name
- like "eth_pcap0,iface=eth0". In the case of virtual device name, the
+  like "net_pcap0,iface=eth0". In the case of a virtual device name, the
format is the same as the general "--vdev" option of DPDK.
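
  A minimal sketch of the attach call described above (the devargs string and
  error handling are only examples):

  .. code-block:: c

     uint8_t port_id;

     /* attach a virtual device; a PCI address string such as "0000:01:00.0"
      * can be passed instead */
     if (rte_eth_dev_attach("net_pcap0,iface=eth0", &port_id) < 0)
         return -1;   /* attach failed */
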
* Detaching a port
diff --git a/doc/guides/prog_guide/profile_app.rst b/doc/guides/prog_guide/profile_app.rst
index 32261875..54b546ac 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -31,8 +31,15 @@
Profile Your Application
========================
+The following sections describe methods of profiling DPDK applications on
+different architectures.
+
+
+Profiling on x86
+----------------
+
Intel processors provide performance counters to monitor events.
-Some tools provided by Intel can be used to profile and benchmark an application.
+Some tools provided by Intel, such as VTune, can be used to profile and benchmark an application.
See the *VTune Performance Analyzer Essentials* publication from Intel Press for more information.
For a DPDK application, this can be done in a Linux* application environment only.
@@ -50,3 +57,58 @@ The main situations that should be monitored through event counters are:
Refer to the
`Intel Performance Analysis Guide <http://software.intel.com/sites/products/collateral/hpc/vtune/performance_analysis_guide.pdf>`_
for details about application profiling.
+
+
+Profiling on ARM64
+------------------
+
+Using Linux perf
+~~~~~~~~~~~~~~~~
+
+The ARM64 architecture provides performance counters to monitor events. The
+Linux ``perf`` tool can be used to profile and benchmark an application. In
+addition to the standard events, ``perf`` can be used to profile arm64
+specific PMU (Performance Monitor Unit) events through raw events (``-e``
+``-rXX``).
+
+For more details, refer to the
+`ARM64 specific PMU events enumeration <http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100095_0002_04_en/way1382543438508.html>`_.
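+
+A minimal sketch of both usages, assuming ``testpmd`` was built under
+``build/app`` and that raw event ``r11`` (CPU_CYCLES on many ARMv8 PMUs; check
+the PMU reference for your core) is the event of interest:
+
+.. code-block:: console
+
+   # standard (symbolic) events
+   perf stat -e cycles,instructions,cache-misses ./build/app/testpmd -c 0x3 -n 4 -- -i
+
+   # ARM64 PMU raw event, selected by event number
+   perf stat -e r11 ./build/app/testpmd -c 0x3 -n 4 -- -i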
+
+
+High-resolution cycle counter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default ``cntvct_el0`` based ``rte_rdtsc()`` provides a portable means to
+get a wall clock counter in user space. Typically it runs at <= 100MHz.
+
+The alternative method to enable ``rte_rdtsc()`` for a high resolution wall
+clock counter is through the armv8 PMU subsystem. The PMU cycle counter runs
+at CPU frequency. However, access to the PMU cycle counter from user space is
+not enabled by default in the ARM64 Linux kernel. It is possible to enable the
+cycle counter for user space access by configuring the PMU from privileged
+mode (kernel space).
+
+By default the ``rte_rdtsc()`` implementation uses a portable ``cntvct_el0``
+scheme. Applications can choose the PMU-based implementation with
+``CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU``.
+
+The example below shows the steps to configure the PMU based cycle counter on
+an armv8 machine.
+
+.. code-block:: console
+
+ git clone https://github.com/jerinjacobk/armv8_pmu_cycle_counter_el0
+ cd armv8_pmu_cycle_counter_el0
+ make
+ sudo insmod pmu_el0_cycle_counter.ko
+ cd $DPDK_DIR
+ make config T=arm64-armv8a-linuxapp-gcc
+ echo "CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU=y" >> build/.config
+ make
+
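+Whichever scheme is selected, the counter is read the same way from application
+code. A minimal sketch (``do_work()`` is a placeholder for the code being
+measured):
+
+.. code-block:: c
+
+   #include <rte_cycles.h>
+
+   uint64_t start, cycles;
+
+   start = rte_rdtsc();
+   do_work();                        /* placeholder: code under measurement */
+   cycles = rte_rdtsc() - start;
+
+   /* convert cycles to seconds using the counter frequency */
+   double seconds = (double)cycles / rte_get_tsc_hz();
+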
+.. warning::
+
+ The PMU based scheme is useful for high accuracy performance profiling with
+   ``rte_rdtsc()``. However, this method cannot be used in conjunction with
+   Linux userspace profiling tools like ``perf``, as this scheme alters the PMU
+   register state.
diff --git a/doc/guides/prog_guide/source_org.rst b/doc/guides/prog_guide/source_org.rst
index 0c06d47b..d9c140f7 100644
--- a/doc/guides/prog_guide/source_org.rst
+++ b/doc/guides/prog_guide/source_org.rst
@@ -70,7 +70,6 @@ The lib directory contains::
+-- librte_ether # Generic interface to poll mode driver
+-- librte_hash # Hash library
+-- librte_ip_frag # IP fragmentation library
- +-- librte_ivshmem # QEMU IVSHMEM library
+-- librte_kni # Kernel NIC interface
+-- librte_kvargs # Argument parsing library
+-- librte_lpm # Longest prefix match library
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 6b0c6b26..4f997d47 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -46,26 +46,8 @@ vhost library should be able to:
* Know all the necessary information about the vring:
Information such as where the available ring is stored. Vhost defines some
- messages to tell the backend all the information it needs to know how to
- manipulate the vring.
-
-Currently, there are two ways to pass these messages and as a result there are
-two Vhost implementations in DPDK: *vhost-cuse* (where the character devices
-are in user space) and *vhost-user*.
-
-Vhost-cuse creates a user space character device and hook to a function ioctl,
-so that all ioctl commands that are sent from the frontend (QEMU) will be
-captured and handled.
-
-Vhost-user creates a Unix domain socket file through which messages are
-passed.
-
-.. Note::
-
- Since DPDK v2.2, the majority of the development effort has gone into
- enhancing vhost-user, such as multiple queue, live migration, and
- reconnect. Thus, it is strongly advised to use vhost-user instead of
- vhost-cuse.
+ messages (passed through a Unix domain socket file) to tell the backend all
+ the information it needs to know how to manipulate the vring.
Vhost API Overview
@@ -75,11 +57,10 @@ The following is an overview of the Vhost API functions:
* ``rte_vhost_driver_register(path, flags)``
- This function registers a vhost driver into the system. For vhost-cuse, a
- ``/dev/path`` character device file will be created. For vhost-user server
- mode, a Unix domain socket file ``path`` will be created.
+ This function registers a vhost driver into the system. ``path`` specifies
+ the Unix domain socket file path.
- Currently two flags are supported (these are valid for vhost-user only):
+ Currently supported flags are:
- ``RTE_VHOST_USER_CLIENT``
@@ -97,6 +78,38 @@ The following is an overview of the Vhost API functions:
This reconnect option is enabled by default. However, it can be turned off
by setting this flag.
+ - ``RTE_VHOST_USER_DEQUEUE_ZERO_COPY``
+
+ Dequeue zero copy will be enabled when this flag is set. It is disabled by
+ default.
+
+    There are some things (including limitations) you might want to know when
+    setting this flag:
+
+ * zero copy is not good for small packets (typically for packet size below
+      512 bytes).
+
+    * zero copy is really good for the VM2VM case. For iperf between two VMs, the
+      boost could be above 70% (when TSO is enabled).
+
+    * for the VM2NIC case, the ``nb_tx_desc`` has to be small enough: <= 64 if the
+      virtio indirect feature is not enabled and <= 128 if it is enabled.
+
+      This is because when dequeue zero copy is enabled, the guest Tx used vring
+      will be updated only when the corresponding mbuf is freed. Thus, the
+      nb_tx_desc has to be small enough that the PMD driver runs out of available
+      Tx descriptors and frees mbufs in a timely manner. Otherwise, the guest Tx
+      vring would be starved.
+
+    * Guest memory should be backed by huge pages to achieve better
+      performance. Using a 1 GB page size is best.
+
+      When dequeue zero copy is enabled, a mapping between guest physical
+      addresses and host physical addresses has to be established. Using non-huge
+      pages means far more page segments. To keep it simple, DPDK vhost does a
+      linear search of those segments, so the fewer the segments, the quicker the
+      mapping is found. NOTE: this may be sped up by using tree searching in the
+      future.
+
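+  Putting the register call and the flags above together, a minimal sketch
+  (the socket path and flag combination are only examples, and the header name
+  assumes the librte_vhost header of this release):
+
+  .. code-block:: c
+
+     #include <rte_virtio_net.h>
+
+     uint64_t flags = RTE_VHOST_USER_CLIENT | RTE_VHOST_USER_DEQUEUE_ZERO_COPY;
+
+     if (rte_vhost_driver_register("/tmp/vhost-user0.sock", flags) < 0)
+         return -1;   /* registration failed */
+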
* ``rte_vhost_driver_session_start()``
This function starts the vhost session loop to handle vhost messages. It
@@ -139,35 +152,8 @@ The following is an overview of the Vhost API functions:
default.
-Vhost Implementations
----------------------
-
-Vhost-cuse implementation
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When vSwitch registers the vhost driver, it will register a cuse device driver
-into the system and creates a character device file. This cuse driver will
-receive vhost open/release/IOCTL messages from the QEMU simulator.
-
-When the open call is received, the vhost driver will create a vhost device
-for the virtio device in the guest.
-
-When the ``VHOST_SET_MEM_TABLE`` ioctl is received, vhost searches the memory
-region to find the starting user space virtual address that maps the memory of
-the guest virtual machine. Through this virtual address and the QEMU pid,
-vhost can find the file QEMU uses to map the guest memory. Vhost maps this
-file into its address space, in this way vhost can fully access the guest
-physical memory, which means vhost could access the shared virtio ring and the
-guest physical address specified in the entry of the ring.
-
-The guest virtual machine tells the vhost whether the virtio device is ready
-for processing or is de-activated through the ``VHOST_NET_SET_BACKEND``
-message. The registered callback from vSwitch will be called.
-
-When the release call is made, vhost will destroy the device.
-
-Vhost-user implementation
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Vhost-user Implementations
+--------------------------
Vhost-user uses Unix domain sockets for passing messages. This means the DPDK
vhost-user implementation has two options:
@@ -214,8 +200,6 @@ For ``VHOST_SET_MEM_TABLE`` message, QEMU will send information for each
memory region and its file descriptor in the ancillary data of the message.
The file descriptor is used to map that region.
-There is no ``VHOST_NET_SET_BACKEND`` message as in vhost-cuse to signal
-whether the virtio device is ready or stopped. Instead,
``VHOST_SET_VRING_KICK`` is used as the signal to put the vhost device into
the data plane, and ``VHOST_GET_VRING_BASE`` is used as the signal to remove
the vhost device from the data plane.