Diffstat (limited to 'doc/guides/sample_app_ug')
-rw-r--r--  doc/guides/sample_app_ug/flow_filtering.rst        |   2
-rw-r--r--  doc/guides/sample_app_ug/index.rst                  |   1
-rw-r--r--  doc/guides/sample_app_ug/ip_pipeline.rst            |  23
-rw-r--r--  doc/guides/sample_app_ug/ipsec_secgw.rst            |   3
-rw-r--r--  doc/guides/sample_app_ug/kernel_nic_interface.rst   | 262
-rw-r--r--  doc/guides/sample_app_ug/l3_forward_power_man.rst   |  69
-rw-r--r--  doc/guides/sample_app_ug/link_status_intr.rst       |   1
-rw-r--r--  doc/guides/sample_app_ug/vdpa.rst                   | 120
-rw-r--r--  doc/guides/sample_app_ug/vhost.rst                  |   2
-rw-r--r--  doc/guides/sample_app_ug/vhost_crypto.rst           |  26
-rw-r--r--  doc/guides/sample_app_ug/vm_power_management.rst    | 300
11 files changed, 693 insertions, 116 deletions
diff --git a/doc/guides/sample_app_ug/flow_filtering.rst b/doc/guides/sample_app_ug/flow_filtering.rst
index bd0ae1e2..0d6fe2bb 100644
--- a/doc/guides/sample_app_ug/flow_filtering.rst
+++ b/doc/guides/sample_app_ug/flow_filtering.rst
@@ -139,7 +139,6 @@ application is shown below:
struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
.offloads =
@@ -215,7 +214,6 @@ The Ethernet port is configured with default settings using the
struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
.offloads =
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 5bedf4f6..74b12af8 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -45,6 +45,7 @@ Sample Applications User Guides
vhost
vhost_scsi
vhost_crypto
+ vdpa
netmap_compatibility
ip_pipeline
test_pipeline
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index b75509a0..447a544d 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -304,6 +304,15 @@ Kni
[thread <thread_id>]
+Cryptodev
+~~~~~~~~~
+
+ Create cryptodev port ::
+
+ cryptodev <cryptodev_name>
+ dev <DPDK Cryptodev PMD name>
+ queue <n_queues> <queue_size>
+
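+  For illustration, a cryptodev port could be created on top of a software
+  crypto PMD as follows (the device name ``crypto_aesni_mb0`` and the queue
+  sizing are assumptions, not taken from a shipped configuration) ::
+
+   cryptodev CRYPTO0 dev crypto_aesni_mb0 queue 1 1024
+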
Action profile
~~~~~~~~~~~~~~
@@ -330,6 +339,8 @@ Action profile
[ttl drop | fwd
stats none | pkts]
[stats pkts | bytes | both]
+ [sym_crypto cryptodev <cryptodev_name>
+ mempool_create <mempool_name> mempool_init <mempool_name>]
[time]
@@ -471,6 +482,18 @@ Add rule to table for specific pipeline instance ::
[ttl dec | keep]
[stats]
[time]
+ [sym_crypto
+ encrypt | decrypt
+ type
+ | cipher
+ cipher_algo <algo> cipher_key <key> cipher_iv <iv>
+ | cipher_auth
+ cipher_algo <algo> cipher_key <key> cipher_iv <iv>
+ auth_algo <algo> auth_key <key> digest_size <size>
+ | aead
+ aead_algo <algo> aead_key <key> aead_iv <iv> aead_aad <aad>
+ digest_size <size>
+ data_offset <data_offset>]
where:
<pa> ::= g | y | r | drop
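+
+  As an illustration only, the new ``sym_crypto`` action fragment could be
+  filled in for single-cipher encryption as follows (the algorithm, key and
+  IV values are placeholders, not validated against any particular crypto
+  PMD) ::
+
+   sym_crypto encrypt type cipher
+    cipher_algo aes-cbc cipher_key 000102030405060708090a0b0c0d0e0f
+    cipher_iv 000102030405060708090a0b0c0d0e0f
+    data_offset 10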
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 46696f2a..4869a011 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -67,7 +67,7 @@ Constraints
* No IPv6 options headers.
* No AH mode.
-* Supported algorithms: AES-CBC, AES-CTR, AES-GCM, HMAC-SHA1 and NULL.
+* Supported algorithms: AES-CBC, AES-CTR, AES-GCM, 3DES-CBC, HMAC-SHA1 and NULL.
* Each SA must be handle by a unique lcore (*1 RX queue per port*).
* No chained mbufs.
@@ -397,6 +397,7 @@ where each options means:
* *aes-128-cbc*: AES-CBC 128-bit algorithm
* *aes-256-cbc*: AES-CBC 256-bit algorithm
* *aes-128-ctr*: AES-CTR 128-bit algorithm
+ * *3des-cbc*: 3DES-CBC 192-bit algorithm
* Syntax: *cipher_algo <your algorithm>*
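+
+   For illustration, an SA rule in the configuration file could use the
+   newly added *3des-cbc* algorithm as sketched below (the key bytes and
+   tunnel addresses are placeholders only):
+
+   .. code-block:: console
+
+      sa out 5 cipher_algo 3des-cbc \
+      cipher_key 11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88 \
+      auth_algo sha1-hmac \
+      auth_key 11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44 \
+      mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5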
diff --git a/doc/guides/sample_app_ug/kernel_nic_interface.rst b/doc/guides/sample_app_ug/kernel_nic_interface.rst
index 1b3ee9a5..6acdf0ff 100644
--- a/doc/guides/sample_app_ug/kernel_nic_interface.rst
+++ b/doc/guides/sample_app_ug/kernel_nic_interface.rst
@@ -31,18 +31,27 @@ This is done by creating one or more kernel net devices for each of the DPDK por
The application allows the use of standard Linux tools (ethtool, ifconfig, tcpdump) with the DPDK ports and
also the exchange of packets between the DPDK application and the Linux* kernel.
+The Kernel NIC Interface sample application requires that the
+KNI kernel module ``rte_kni`` be loaded into the kernel. See
+:doc:`../prog_guide/kernel_nic_interface` for more information on loading
+the ``rte_kni`` kernel module.
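+
+For example, assuming the module has been built into the ``kmod``
+sub-directory of the DPDK target directory, loading it with its default
+parameters looks like this:
+
+.. code-block:: console
+
+   # insmod kmod/rte_kni.ko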
+
Overview
--------
-The Kernel NIC Interface sample application uses two threads in user space for each physical NIC port being used,
-and allocates one or more KNI device for each physical NIC port with kernel module's support.
-For a physical NIC port, one thread reads from the port and writes to KNI devices,
-and another thread reads from KNI devices and writes the data unmodified to the physical NIC port.
-It is recommended to configure one KNI device for each physical NIC port.
-If configured with more than one KNI devices for a physical NIC port,
-it is just for performance testing, or it can work together with VMDq support in future.
+The Kernel NIC Interface sample application ``kni`` allocates one or more
+KNI interfaces for each physical NIC port. For each physical NIC port,
+``kni`` uses two DPDK threads in user space; one thread reads from the port and
+writes to the corresponding KNI interfaces and the other thread reads from
+the KNI interfaces and writes the data unmodified to the physical NIC port.
+
+It is recommended to configure one KNI interface for each physical NIC port.
+The application can be configured with more than one KNI interface per
+physical NIC port for performance testing, or to work together with
+VMDq support in the future.
-The packet flow through the Kernel NIC Interface application is as shown in the following figure.
+The packet flow through the Kernel NIC Interface application is as shown
+in the following figure.
.. _figure_kernel_nic:
@@ -50,145 +59,221 @@ The packet flow through the Kernel NIC Interface application is as shown in the
Kernel NIC Application Packet Flow
+If link monitoring is enabled with the ``-m`` command line flag, one
+additional pthread is launched which will check the link status of each
+physical NIC port and will update the carrier status of the corresponding
+KNI interface(s) to match the physical NIC port's state. This means that
+the KNI interface(s) will be disabled automatically when the Ethernet link
+goes down and enabled when the Ethernet link goes up.
+
+If link monitoring is enabled, the ``rte_kni`` kernel module should be loaded
+such that the :ref:`default carrier state <kni_default_carrier_state>` is
+set to *off*. This ensures that the KNI interface is only enabled *after*
+the Ethernet link of the corresponding NIC port has reached the linkup state.
+
+If link monitoring is not enabled, the ``rte_kni`` kernel module should be
+loaded with the :ref:`default carrier state <kni_default_carrier_state>`
+set to *on*. This sets the carrier state of the KNI interfaces to *on*
+when the KNI interfaces are enabled without regard to the actual link state
+of the corresponding NIC port. This is useful for testing in loopback
+mode where the NIC port may not be physically connected to anything.
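+
+As a sketch, assuming the module parameter is ``carrier`` (the same parameter
+used in the loopback example later in this guide), the two cases can be
+loaded as follows:
+
+.. code-block:: console
+
+   # When link monitoring will be enabled (kni started with -m)
+   # insmod kmod/rte_kni.ko carrier=off
+
+   # When link monitoring will not be enabled
+   # insmod kmod/rte_kni.ko carrier=on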
+
Compiling the Application
-------------------------
To compile the sample application see :doc:`compiling`.
-The application is located in the ``kni`` sub-directory.
+The application is located in the ``examples/kni`` sub-directory.
.. note::
This application is intended as a linuxapp only.
-Loading the Kernel Module
--------------------------
+Running the kni Example Application
+-----------------------------------
-Loading the KNI kernel module without any parameter is the typical way a DPDK application
-gets packets into and out of the kernel net stack.
-This way, only one kernel thread is created for all KNI devices for packet receiving in kernel side:
+The ``kni`` example application requires a number of command line options:
.. code-block:: console
- #insmod rte_kni.ko
+ kni [EAL options] -- -p PORTMASK --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,(port,lcore_rx,lcore_tx[,lcore_kthread,...])]" [-P] [-m]
-Pinning the kernel thread to a specific core can be done using a taskset command such as following:
+Where:
-.. code-block:: console
+* ``-p PORTMASK``:
- #taskset -p 100000 `pgrep --fl kni_thread | awk '{print $1}'`
+ Hexadecimal bitmask of ports to configure.
-This command line tries to pin the specific kni_thread on the 20th lcore (lcore numbering starts at 0),
-which means it needs to check if that lcore is available on the board.
-This command must be sent after the application has been launched, as insmod does not start the kni thread.
+* ``--config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,(port,lcore_rx,lcore_tx[,lcore_kthread,...])]"``:
-For optimum performance,
-the lcore in the mask must be selected to be on the same socket as the lcores used in the KNI application.
+ Determines which lcores the Rx and Tx DPDK tasks, and (optionally)
+ the KNI kernel thread(s) are bound to for each physical port.
-To provide flexibility of performance, the kernel module of the KNI,
-located in the kmod sub-directory of the DPDK target directory,
-can be loaded with parameter of kthread_mode as follows:
+* ``-P``:
-* #insmod rte_kni.ko kthread_mode=single
+ Optional flag to set all ports to promiscuous mode so that packets are
+ accepted regardless of the packet's Ethernet MAC destination address.
+ Without this option, only packets with the Ethernet MAC destination
+ address set to the Ethernet address of the port are accepted.
- This mode will create only one kernel thread for all KNI devices for packet receiving in kernel side.
- By default, it is in this single kernel thread mode.
- It can set core affinity for this kernel thread by using Linux command taskset.
+* ``-m``:
-* #insmod rte_kni.ko kthread_mode =multiple
+ Optional flag to enable monitoring and updating of the Ethernet
+ carrier state. With this option set, a thread will be started which
+ will periodically check the Ethernet link status of the physical
+ Ethernet ports and set the carrier state of the corresponding KNI
+ network interface to match it. This means that the KNI interface will
+ be disabled automatically when the Ethernet link goes down and enabled
+ when the Ethernet link goes up.
- This mode will create a kernel thread for each KNI device for packet receiving in kernel side.
- The core affinity of each kernel thread is set when creating the KNI device.
- The lcore ID for each kernel thread is provided in the command line of launching the application.
- Multiple kernel thread mode can provide scalable higher performance.
+Refer to *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
-To measure the throughput in a loopback mode, the kernel module of the KNI,
-located in the kmod sub-directory of the DPDK target directory,
-can be loaded with parameters as follows:
+The ``-c coremask`` or ``-l corelist`` parameter of the EAL options must
+include the lcores specified by ``lcore_rx`` and ``lcore_tx`` for each port,
+but does not need to include lcores specified by ``lcore_kthread`` as those
+cores are used to pin the kernel threads in the ``rte_kni`` kernel module.
-* #insmod rte_kni.ko lo_mode=lo_mode_fifo
+The ``--config`` parameter must include a set of
+``(port,lcore_rx,lcore_tx,[lcore_kthread,...])`` values for each physical
+port specified in the ``-p PORTMASK`` parameter.
- This loopback mode will involve ring enqueue/dequeue operations in kernel space.
+The optional ``lcore_kthread`` lcore ID parameter in ``--config`` can be
+specified zero, one or more times for each physical port.
-* #insmod rte_kni.ko lo_mode=lo_mode_fifo_skb
+If no lcore ID is specified for ``lcore_kthread``, one KNI interface will
+be created for the physical port ``port`` and the KNI kernel thread(s)
+will have no specific core affinity.
- This loopback mode will involve ring enqueue/dequeue operations and sk buffer copies in kernel space.
+If one or more lcore IDs are specified for ``lcore_kthread``, a KNI interface
+will be created for each lcore ID specified, bound to the physical port
+``port``. If the ``rte_kni`` kernel module is loaded in :ref:`multiple
+kernel thread <kni_kernel_thread_mode>` mode, a kernel thread will be created
+for each KNI interface and bound to the specified core. If the ``rte_kni``
+kernel module is loaded in :ref:`single kernel thread <kni_kernel_thread_mode>`
+mode, only one kernel thread is started for all KNI interfaces. The kernel
+thread will be bound to the first ``lcore_kthread`` lcore ID specified.
-Running the Application
------------------------
+Example Configurations
+~~~~~~~~~~~~~~~~~~~~~~~
-The application requires a number of command line options:
+The following commands will first load the ``rte_kni`` kernel module in
+:ref:`multiple kernel thread <kni_kernel_thread_mode>` mode. The ``kni``
+application is then started using two ports; Port 0 uses lcore 4 for the
+Rx task, lcore 6 for the Tx task, and will create a single KNI interface
+``vEth0_0`` with the kernel thread bound to lcore 8. Port 1 uses lcore
+5 for the Rx task, lcore 7 for the Tx task, and will create a single KNI
+interface ``vEth1_0`` with the kernel thread bound to lcore 9.
.. code-block:: console
- kni [EAL options] -- -P -p PORTMASK --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,port,lcore_rx,lcore_tx[,lcore_kthread,...]]"
-
-Where:
+ # rmmod rte_kni
+ # insmod kmod/rte_kni.ko kthread_mode=multiple
+ # ./build/kni -l 4-7 -n 4 -- -P -p 0x3 -m --config="(0,4,6,8),(1,5,7,9)"
-* -P: Set all ports to promiscuous mode so that packets are accepted regardless of the packet's Ethernet MAC destination address.
- Without this option, only packets with the Ethernet MAC destination address set to the Ethernet address of the port are accepted.
+The following example is identical, except an additional ``lcore_kthread``
+core is specified per physical port. In this case, ``kni`` will create
+four KNI interfaces: ``vEth0_0``/``vEth0_1`` bound to physical port 0 and
+``vEth1_0``/``vEth1_1`` bound to physical port 1.
-* -p PORTMASK: Hexadecimal bitmask of ports to configure.
+The kernel thread for each interface will be bound as follows:
-* --config="(port,lcore_rx, lcore_tx[,lcore_kthread, ...]) [, port,lcore_rx, lcore_tx[,lcore_kthread, ...]]":
- Determines which lcores of RX, TX, kernel thread are mapped to which ports.
+ * ``vEth0_0`` - bound to lcore 8.
+ * ``vEth0_1`` - bound to lcore 10.
+ * ``vEth1_0`` - bound to lcore 9.
+ * ``vEth1_1`` - bound to lcore 11.
-Refer to *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options.
+.. code-block:: console
-The -c coremask or -l corelist parameter of the EAL options should include the lcores indicated by the lcore_rx and lcore_tx,
-but does not need to include lcores indicated by lcore_kthread as they are used to pin the kernel thread on.
-The -p PORTMASK parameter should include the ports indicated by the port in --config, neither more nor less.
+ # rmmod rte_kni
+ # insmod kmod/rte_kni.ko kthread_mode=multiple
+ # ./build/kni -l 4-7 -n 4 -- -P -p 0x3 -m --config="(0,4,6,8,10),(1,5,7,9,11)"
-The lcore_kthread in --config can be configured none, one or more lcore IDs.
-In multiple kernel thread mode, if configured none, a KNI device will be allocated for each port,
-while no specific lcore affinity will be set for its kernel thread.
-If configured one or more lcore IDs, one or more KNI devices will be allocated for each port,
-while specific lcore affinity will be set for its kernel thread.
-In single kernel thread mode, if configured none, a KNI device will be allocated for each port.
-If configured one or more lcore IDs,
-one or more KNI devices will be allocated for each port while
-no lcore affinity will be set as there is only one kernel thread for all KNI devices.
+The following example can be used to test the interface between the ``kni``
+test application and the ``rte_kni`` kernel module. In this example,
+the ``rte_kni`` kernel module is loaded in :ref:`single kernel thread
+mode <kni_kernel_thread_mode>`, :ref:`loopback mode <kni_loopback_mode>`
+enabled, and the :ref:`default carrier state <kni_default_carrier_state>`
+is set to *on* so that the corresponding physical NIC port does not have
+to be connected in order to use the KNI interface. One KNI interface
+``vEth0_0`` is created for port 0 and one KNI interface ``vEth1_0`` is
+created for port 1. Since ``rte_kni`` is loaded in "single kernel thread"
+mode, the one kernel thread is bound to lcore 8.
-For example, to run the application with two ports served by six lcores, one lcore of RX, one lcore of TX,
-and one lcore of kernel thread for each port:
+Since the physical NIC ports are not being used, link monitoring can be
+disabled by **not** specifying the ``-m`` flag to ``kni``:
.. code-block:: console
- ./build/kni -l 4-7 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
+ # rmmod rte_kni
+ # insmod kmod/rte_kni.ko lo_mode=lo_mode_fifo carrier=on
+ # ./build/kni -l 4-7 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
KNI Operations
--------------
-Once the KNI application is started, one can use different Linux* commands to manage the net interfaces.
-If more than one KNI devices configured for a physical port,
-only the first KNI device will be paired to the physical device.
-Operations on other KNI devices will not affect the physical port handled in user space application.
+Once the ``kni`` application is started, the user can use the normal
+Linux commands to manage the KNI interfaces as if they were any other
+Linux network interface.
-Assigning an IP address:
+Enable KNI interface and assign an IP address:
.. code-block:: console
- #ifconfig vEth0_0 192.168.0.1
+ # ifconfig vEth0_0 192.168.0.1
-Displaying the NIC registers:
+Show KNI interface configuration and statistics:
.. code-block:: console
- #ethtool -d vEth0_0
+ # ifconfig vEth0_0
+
+The user can also check and reset the packet statistics inside the ``kni``
+application by sending the app the USR1 and USR2 signals:
+
+.. code-block:: console
+
+ # Print statistics
+ # kill -SIGUSR1 `pidof kni`
+
+ # Zero statistics
+ # kill -SIGUSR2 `pidof kni`
-Dumping the network traffic:
+Dump network traffic:
.. code-block:: console
- #tcpdump -i vEth0_0
+ # tcpdump -i vEth0_0
+
+The normal Linux commands can also be used to change the MAC address and
+MTU size used by the physical NIC which corresponds to the KNI interface.
+However, if more than one KNI interface is configured for a physical port,
+these commands will only work on the first KNI interface for that port.
Change the MAC address:
.. code-block:: console
- #ifconfig vEth0_0 hw ether 0C:01:02:03:04:08
+ # ifconfig vEth0_0 hw ether 0C:01:02:03:04:08
+
+Change the MTU size:
+
+.. code-block:: console
+
+ # ifconfig vEth0_0 mtu 1450
+
+If DPDK is compiled with ``CONFIG_RTE_KNI_KMOD_ETHTOOL=y`` and an Intel
+NIC is used, the user can use ``ethtool`` on the KNI interface as if it
+were a normal Linux kernel interface.
+
+Displaying the NIC registers:
+
+.. code-block:: console
+
+ # ethtool -d vEth0_0
-When the DPDK userspace application is closed, all the KNI devices are deleted from Linux*.
+When the ``kni`` application is closed, all the KNI interfaces are deleted
+from the Linux kernel.
Explanation
-----------
@@ -227,7 +312,7 @@ to see if this lcore is reading from or writing to kernel NIC interfaces.
For the case that reads from a NIC port and writes to the kernel NIC interfaces (``kni_ingress``),
the packet reception is the same as in L2 Forwarding sample application
(see :ref:`l2_fwd_app_rx_tx_packets`).
-The packet transmission is done by sending mbufs into the kernel NIC interfaces by rte_kni_tx_burst().
+The packet transmission is done by sending mbufs into the kernel NIC interfaces by ``rte_kni_tx_burst()``.
The KNI library automatically frees the mbufs after the kernel successfully copied the mbufs.
For the other case that reads from kernel NIC interfaces
@@ -235,16 +320,3 @@ and writes to a physical NIC port (``kni_egress``),
packets are retrieved by reading mbufs from kernel NIC interfaces by ``rte_kni_rx_burst()``.
The packet transmission is the same as in the L2 Forwarding sample application
(see :ref:`l2_fwd_app_rx_tx_packets`).
-
-Callbacks for Kernel Requests
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To execute specific PMD operations in user space requested by some Linux* commands,
-callbacks must be implemented and filled in the struct rte_kni_ops structure.
-Currently, setting a new MTU, change in MAC address, configuring promiscusous mode and
-configuring the network interface(up/down) re supported.
-Default implementation for following is available in rte_kni library.
-Application may choose to not implement following callbacks:
-
-- ``config_mac_address``
-- ``config_promiscusity``
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 795a570b..e44a11b2 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -105,6 +105,8 @@ where,
* --no-numa: optional, disables numa awareness
+* --empty-poll: Traffic-aware power management. See below for details.
+
See :doc:`l3_forward` for details.
The L3fwd-power example reuses the L3fwd command line options.
@@ -362,3 +364,70 @@ The algorithm has the following sleeping behavior depending on the idle counter:
If a thread polls multiple Rx queues and different queue returns different sleep duration values,
the algorithm controls the sleep time in a conservative manner by sleeping for the least possible time
in order to avoid a potential performance impact.
+
+Empty Poll Mode
+-------------------------
+Additionally, there is a traffic-aware mode of operation called "Empty
+Poll" where the number of empty polls can be monitored to keep track
+of how busy the application is. Empty poll mode can be enabled by the
+command line option ``--empty-poll``.
+
+See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Programmer's Guide for empty poll mode details.
+
+.. code-block:: console
+
+ ./l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+
+Where,
+
+--empty-poll: Enable the empty poll mode instead of the original algorithm
+
+--empty-poll="training_flag, med_threshold, high_threshold"
+
+* ``training_flag`` : optional, enable/disable training mode. Default value is 0. If the training_flag is set to 1 (true), then the application will start in training mode and print out the trained threshold values. If the training_flag is set to 0 (false), the application will start in normal mode, and will use either the default thresholds or those supplied on the command line. The trained threshold values are specific to the user’s system and may give a better power profile when compared to the default threshold values.
+
+* ``med_threshold`` : optional, sets the empty poll threshold of a modestly busy system state. If this is not supplied, the application will apply the default value of 350000.
+
+* ``high_threshold`` : optional, sets the empty poll threshold of a busy system state. If this is not supplied, the application will apply the default value of 580000.
+
+* -l : optional, set up the LOW power state frequency index
+
+* -m : optional, set up the MED power state frequency index
+
+* -h : optional, set up the HIGH power state frequency index
+
+Empty Poll Mode Example Usage
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+To initially obtain the ideal thresholds for the system, the training
+mode should be run first. This is achieved by running the l3fwd-power
+app with the training flag set to “1”, and the other parameters set to
+0.
+
+.. code-block:: console
+
+   ./examples/l3fwd-power/build/l3fwd-power -l 1-3 -- -p 0x0f --config="(0,0,2),(0,1,3)" --empty-poll "1,0,0" -P
+
+This will run the training algorithm for x seconds on each core (cores 2
+and 3), and then print out the recommended threshold values for those
+cores. The thresholds should be very similar for each core.
+
+.. code-block:: console
+
+ POWER: Bring up the Timer
+ POWER: set the power freq to MED
+ POWER: Low threshold is 230277
+ POWER: MED threshold is 335071
+ POWER: HIGH threshold is 523769
+ POWER: Training is Complete for 2
+ POWER: set the power freq to MED
+ POWER: Low threshold is 236814
+ POWER: MED threshold is 344567
+ POWER: HIGH threshold is 538580
+ POWER: Training is Complete for 3
+
+Once the values have been measured for a particular system, the app can
+then be started without the training mode so traffic can start immediately.
+
+.. code-block:: console
+
+   ./examples/l3fwd-power/build/l3fwd-power -l 1-3 -- -p 0x0f --config="(0,0,2),(0,1,3)" --empty-poll "0,340000,540000" -P
diff --git a/doc/guides/sample_app_ug/link_status_intr.rst b/doc/guides/sample_app_ug/link_status_intr.rst
index c7665fe5..695c088e 100644
--- a/doc/guides/sample_app_ug/link_status_intr.rst
+++ b/doc/guides/sample_app_ug/link_status_intr.rst
@@ -137,7 +137,6 @@ The global configuration is stored in a static structure:
static const struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {},
.intr_conf = {
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
new file mode 100644
index 00000000..745f196c
--- /dev/null
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -0,0 +1,120 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+Vdpa Sample Application
+=======================
+
+The vdpa sample application creates vhost-user sockets by using the
+vDPA backend. vDPA stands for vhost Data Path Acceleration, which utilizes
+virtio ring compatible devices to serve the virtio driver directly and so
+enable datapath acceleration. As the vDPA driver sets up the vhost datapath,
+this application does not need to launch dedicated worker threads for vhost
+enqueue/dequeue operations.
+
+Testing steps
+-------------
+
+This section shows the steps of how to start VMs with vDPA vhost-user
+backend and verify network connection & live migration.
+
+Build
+~~~~~
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``vdpa`` sub-directory.
+
+Start the vdpa example
+~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ ./vdpa [EAL options] -- [--client] [--interactive|-i] or [--iface SOCKET_PATH]
+
+where
+
+* --client means running the vdpa app in client mode; in client mode, QEMU needs
+  to run in server mode and take charge of socket file creation.
+* --iface specifies the path prefix of the UNIX domain socket file, e.g.
+  /tmp/vhost-user-, then the socket files will be named /tmp/vhost-user-<n>
+  (n starts from 0).
+* --interactive means running the vdpa sample in interactive mode; currently four
+  internal commands are supported:
+
+ 1. help: show help message
+ 2. list: list all available vdpa devices
+ 3. create: create a new vdpa port with socket file and vdpa device address
+ 4. quit: unregister vhost driver and exit the application
+
+Take the IFCVF driver as an example:
+
+.. code-block:: console
+
+ ./vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
+ -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -- --interactive
+
+.. note::
+ Here 0000:06:00.3 and 0000:06:00.4 refer to virtio ring compatible devices,
+  and we need to bind vfio-pci to them before running the vdpa sample.
+
+ * modprobe vfio-pci
+ * ./usertools/dpdk-devbind.py -b vfio-pci 06:00.3 06:00.4
+
+Then we can create two vdpa ports from the interactive command line.
+
+.. code-block:: console
+
+ vdpa> list
+ device id device address queue num supported features
+ 0 0000:06:00.3 1 0x14c238020
+ 1 0000:06:00.4 1 0x14c238020
+ 2 0000:06:00.5 1 0x14c238020
+
+ vdpa> create /tmp/vdpa-socket0 0000:06:00.3
+ vdpa> create /tmp/vdpa-socket1 0000:06:00.4
+
+.. _vdpa_app_run_vm:
+
+Start the VMs
+~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ qemu-system-x86_64 -cpu host -enable-kvm \
+ <snip>
+ -mem-prealloc \
+ -chardev socket,id=char0,path=<socket_file created in above steps> \
+ -netdev type=vhost-user,id=vdpa,chardev=char0 \
+ -device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee,page-per-vq=on \
+
+After the VMs launch, we can log in to the VMs, configure the IP addresses,
+and verify the network connection via ping or netperf.
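+
+For example, inside one of the VMs (the interface name and addresses below
+are placeholders for whatever subnet is used in the test setup):
+
+.. code-block:: console
+
+   # ifconfig eth0 192.168.100.2 netmask 255.255.255.0 up
+   # ping 192.168.100.3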
+
+.. note::
+   It is suggested to use QEMU 3.0.0, which extends vhost-user for vDPA.
+
+Live Migration
+~~~~~~~~~~~~~~
+vDPA supports cross-backend live migration; a user can migrate a SW vhost
+backend VM to a vDPA backend VM and vice versa. Here are the detailed steps.
+Assume A is the source host with a SW vhost VM and B is the destination host
+with vDPA.
+
+1. Start the vdpa sample on B and launch a VM with the exact same parameters
+   as the VM on A, in migration-listen mode:
+
+.. code-block:: console
+
+   B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT)
+
+2. Start the migration (on source host):
+
+.. code-block:: console
+
+ A: (qemu) migrate -d tcp:<B ip>:4444 (or other PORT)
+
+3. Check the status (on source host):
+
+.. code-block:: console
+
+ A: (qemu) info migrate
diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index fd42cb3f..df4d6f9a 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -78,7 +78,7 @@ could be done by:
.. code-block:: console
modprobe uio_pci_generic
- $RTE_SDK/usertools/dpdk-devbind.py -b=uio_pci_generic 0000:00:04.0
+ $RTE_SDK/usertools/dpdk-devbind.py -b uio_pci_generic 0000:00:04.0
Then start testpmd for packet forwarding testing.
diff --git a/doc/guides/sample_app_ug/vhost_crypto.rst b/doc/guides/sample_app_ug/vhost_crypto.rst
index 65c86a53..3db57eab 100644
--- a/doc/guides/sample_app_ug/vhost_crypto.rst
+++ b/doc/guides/sample_app_ug/vhost_crypto.rst
@@ -28,24 +28,22 @@ Start the vhost_crypto example
.. code-block:: console
- ./vhost_crypto [EAL options] -- [--socket-file PATH]
- [--cdev-id ID] [--cdev-queue-id ID] [--zero-copy] [--guest-polling]
+ ./vhost_crypto [EAL options] --
+ --config (lcore,cdev-id,queue-id)[,(lcore,cdev-id,queue-id)]
+   --socket-file lcore,PATH
+ [--zero-copy]
+ [--guest-polling]
where,
-* socket-file PATH: the path of UNIX socket file to be created, multiple
- instances of this config item is supported. Upon absence of this item,
- the default socket-file `/tmp/vhost_crypto1.socket` is used.
+* config (lcore,cdev-id,queue-id): builds the connection between an lcore and
+  a cryptodev queue. Once specified, that lcore will only work with the
+  specified cryptodev queue.
-* cdev-id ID: the target DPDK Cryptodev's ID to process the actual crypto
- workload. Upon absence of this item the default value of `0` will be used.
- For details of DPDK Cryptodev, please refer to DPDK Cryptodev Library
- Programmers' Guide.
-
-* cdev-queue-id ID: the target DPDK Cryptodev's queue ID to process the
- actual crypto workload. Upon absence of this item the default value of `0`
- will be used. For details of DPDK Cryptodev, please refer to DPDK Cryptodev
- Library Programmers' Guide.
+* socket-file lcore,PATH: the path of the UNIX socket file to be created and
+  the lcore id that will handle all workloads of the socket. Multiple
+  instances of this config item are supported and one lcore can process
+  multiple sockets.
* zero-copy: the presence of this item means the ZERO-COPY feature will be
enabled. Otherwise it is disabled. PLEASE NOTE the ZERO-COPY feature is still
diff --git a/doc/guides/sample_app_ug/vm_power_management.rst b/doc/guides/sample_app_ug/vm_power_management.rst
index 855570d6..1ad4f149 100644
--- a/doc/guides/sample_app_ug/vm_power_management.rst
+++ b/doc/guides/sample_app_ug/vm_power_management.rst
@@ -199,7 +199,7 @@ see :doc:`compiling`.
The application is located in the ``vm_power_manager`` sub-directory.
-To build just the ``vm_power_manager`` application:
+To build just the ``vm_power_manager`` application using ``make``:
.. code-block:: console
@@ -208,6 +208,22 @@ To build just the ``vm_power_manager`` application:
cd ${RTE_SDK}/examples/vm_power_manager/
make
+The resulting binary will be ${RTE_SDK}/build/examples/vm_power_manager
+
+To build just the ``vm_power_manager`` application using ``meson/ninja``:
+
+.. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}
+ meson build
+ cd build
+ ninja
+ meson configure -Dexamples=vm_power_manager
+ ninja
+
+The resulting binary will be ${RTE_SDK}/build/examples/dpdk-vm_power_manager
+
Running
~~~~~~~
@@ -337,6 +353,270 @@ monitoring of branch ratio on cores doing busy polling via PMDs.
and will need to be adjusted for different workloads.
+
+JSON API
+~~~~~~~~
+
+In addition to the command line interface for host commands and a virtio-serial
+interface for VM power policies, there is also a JSON interface through which
+power commands and policies can be sent. This functionality adds a dependency
+on the Jansson library, and the Jansson development package must be installed
+on the system before the JSON parsing functionality is included in the app.
+This is achieved by:
+
+ .. code-block:: console
+
+ apt-get install libjansson-dev
+
+The command and package name may be different depending on your operating
+system. It's worth noting that the app will successfully build without this
+package present, but a warning is shown during compilation, and the JSON
+parsing functionality will not be present in the app.
+
+Sending a command or policy to the power manager application is achieved by
+simply opening a fifo file, writing a JSON string to that fifo, and closing
+the file.
+
+The fifo is at ``/tmp/powermonitor/fifo``.
+
+The JSON string can be a policy or instruction, and takes the following
+format:
+
+ .. code-block:: javascript
+
+ {"packet_type": {
+ "pair_1": value,
+ "pair_2": value
+ }}
+
+The 'packet_type' header can contain one of two values, depending on
+whether a policy or power command is being sent. The two possible values are
+"policy" and "instruction", and the expected name-value pairs is different
+depending on which type is being sent.
+
+The pairs are in the format of standard JSON name-value pairs. The value type
+varies between the different name/value pairs, and may be integers, strings,
+arrays, etc. Examples of policies follow later in this document. The allowed
+names and value types are as follows:
+
+
+:Pair Name: "name"
+:Description: Name of the VM or Host. Allows the parser to associate the
+ policy with the relevant VM or Host OS.
+:Type: string
+:Values: any valid string
+:Required: yes
+:Example:
+
+ .. code-block:: javascript
+
+ "name", "ubuntu2"
+
+
+:Pair Name: "command"
+:Description: The type of packet we're sending to the power manager. We can be
+ creating or destroying a policy, or sending a direct command to adjust
+ the frequency of a core, similar to the command line interface.
+:Type: string
+:Values:
+
+ :CREATE: used when creating a new policy,
+ :DESTROY: used when removing a policy,
+ :POWER: used when sending an immediate command, max, min, etc.
+:Required: yes
+:Example:
+
+ .. code-block:: javascript
+
+ "command", "CREATE"
+
+
+:Pair Name: "policy_type"
+:Description: Type of policy to apply. Please see vm_power_manager documentation
+ for more information on the types of policies that may be used.
+:Type: string
+:Values:
+
+ :TIME: Time-of-day policy. Frequencies of the relevant cores are
+ scaled up/down depending on busy and quiet hours.
+ :TRAFFIC: This policy takes statistics from the NIC and scales up
+ and down accordingly.
+ :WORKLOAD: This policy looks at how heavily loaded the cores are,
+ and scales up and down accordingly.
+ :BRANCH_RATIO: This out-of-band policy can look at the ratio between
+ branch hits and misses on a core, and is useful for detecting
+ how much packet processing a core is doing.
+:Required: only for CREATE/DESTROY command
+:Example:
+
+ .. code-block:: javascript
+
+ "policy_type", "TIME"
+
+:Pair Name: "busy_hours"
+:Description: The hours of the day in which we scale up the cores for busy
+ times.
+:Type: array of integers
+:Values: array with list of hour numbers, (0-23)
+:Required: only for TIME policy
+:Example:
+
+ .. code-block:: javascript
+
+ "busy_hours":[ 17, 18, 19, 20, 21, 22, 23 ]
+
+:Pair Name: "quiet_hours"
+:Description: The hours of the day in which we scale down the cores for quiet
+ times.
+:Type: array of integers
+:Values: array with list of hour numbers, (0-23)
+:Required: only for TIME policy
+:Example:
+
+ .. code-block:: javascript
+
+ "quiet_hours":[ 2, 3, 4, 5, 6 ]
+
+:Pair Name: "avg_packet_thresh"
+:Description: Threshold below which the frequency will be set to min for
+ the TRAFFIC policy. If the traffic rate is above this and below max, the
+ frequency will be set to medium.
+:Type: integer
+:Values: The number of packets below which the TRAFFIC policy applies the
+ minimum frequency, or medium frequency if between avg and max thresholds.
+:Required: only for TRAFFIC policy
+:Example:
+
+ .. code-block:: javascript
+
+ "avg_packet_thresh": 100000
+
+:Pair Name: "max_packet_thresh"
+:Description: Threshold above which the frequency will be set to max for
+ the TRAFFIC policy
+:Type: integer
+:Values: The number of packets per interval above which the TRAFFIC policy
+ applies the maximum frequency
+:Required: only for TRAFFIC policy
+:Example:
+
+ .. code-block:: javascript
+
+ "max_packet_thresh": 500000
+
+:Pair Name: "core_list"
+:Description: The cores to which to apply the policy.
+:Type: array of integers
+:Values: array with list of virtual CPUs.
+:Required: only for CREATE/DESTROY command
+:Example:
+
+ .. code-block:: javascript
+
+ "core_list":[ 10, 11 ]
+
+:Pair Name: "workload"
+:Description: When our policy is of type WORKLOAD, we need to specify how
+ heavy our workload is.
+:Type: string
+:Values:
+
+ :HIGH: For cores running workloads that require high frequencies
+ :MEDIUM: For cores running workloads that require medium frequencies
+ :LOW: For cores running workloads that require low frequencies
+:Required: only for WORKLOAD policy types
+:Example:
+
+ .. code-block:: javascript
+
+ "workload", "MEDIUM"
+
+:Pair Name: "mac_list"
+:Description: When our policy is of type TRAFFIC, we need to specify the
+ MAC addresses that the host needs to monitor
+:Type: array of strings
+:Values: array with a list of mac address strings.
+:Required: only for TRAFFIC policy types
+:Example:
+
+ .. code-block:: javascript
+
+ "mac_list":[ "de:ad:be:ef:01:01", "de:ad:be:ef:01:02" ]
+
+:Pair Name: "unit"
+:Description: the type of power operation to apply in the command
+:Type: string
+:Values:
+
+ :SCALE_MAX: Scale frequency of this core to maximum
+ :SCALE_MIN: Scale frequency of this core to minimum
+ :SCALE_UP: Scale up frequency of this core
+ :SCALE_DOWN: Scale down frequency of this core
+ :ENABLE_TURBO: Enable Turbo Boost for this core
+ :DISABLE_TURBO: Disable Turbo Boost for this core
+:Required: only for POWER instruction
+:Example:
+
+ .. code-block:: javascript
+
+ "unit", "SCALE_MAX"
+
+:Pair Name: "resource_id"
+:Description: The core to which to apply the power command.
+:Type: integer
+:Values: valid core id for VM or host OS.
+:Required: only for POWER instruction
+:Example:
+
+ .. code-block:: javascript
+
+ "resource_id": 10
+
+JSON API Examples
+~~~~~~~~~~~~~~~~~
+
+Profile create example:
+
+ .. code-block:: javascript
+
+ {"policy": {
+ "name": "ubuntu",
+ "command": "create",
+ "policy_type": "TIME",
+ "busy_hours":[ 17, 18, 19, 20, 21, 22, 23 ],
+ "quiet_hours":[ 2, 3, 4, 5, 6 ],
+ "core_list":[ 11 ]
+ }}
+
+Profile destroy example:
+
+ .. code-block:: javascript
+
+    {"policy": {
+      "name": "ubuntu",
+      "command": "destroy"
+    }}
+
+Power command example:
+
+ .. code-block:: javascript
+
+    {"instruction": {
+      "name": "ubuntu",
+      "command": "power",
+      "unit": "SCALE_MAX",
+      "resource_id": 10
+    }}
+
+To send a JSON string to the Power Manager application, simply paste the
+example JSON string into a text file and cat it into the fifo:
+
+ .. code-block:: console
+
+ cat file.json >/tmp/powermonitor/fifo
+
+The console of the Power Manager application should indicate the command that
+was just received via the fifo.
+
Compiling and Running the Guest Applications
--------------------------------------------
@@ -366,7 +646,7 @@ For compiling and running l3fwd-power, see :doc:`l3_forward_power_man`.
The application is located in the ``guest_cli`` sub-directory under ``vm_power_manager``.
-To build just the ``guest_vm_power_manager`` application:
+To build just the ``guest_vm_power_manager`` application using ``make``:
.. code-block:: console
@@ -375,6 +655,22 @@ To build just the ``guest_vm_power_manager`` application:
cd ${RTE_SDK}/examples/vm_power_manager/guest_cli/
make
+The resulting binary will be ${RTE_SDK}/build/examples/guest_cli
+
+To build just the ``guest_vm_power_manager`` application using ``meson/ninja``:
+
+.. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}
+ meson build
+ cd build
+ ninja
+ meson configure -Dexamples=vm_power_manager/guest_cli
+ ninja
+
+The resulting binary will be ${RTE_SDK}/build/examples/guest_cli
+
Running
~~~~~~~