Diffstat (limited to 'doc/guides/prog_guide')
-rw-r--r--  doc/guides/prog_guide/bbdev.rst                            4
-rw-r--r--  doc/guides/prog_guide/compressdev.rst                      6
-rw-r--r--  doc/guides/prog_guide/cryptodev_lib.rst                   12
-rw-r--r--  doc/guides/prog_guide/dev_kit_build_system.rst             4
-rw-r--r--  doc/guides/prog_guide/efd_lib.rst                          2
-rw-r--r--  doc/guides/prog_guide/env_abstraction_layer.rst           32
-rw-r--r--  doc/guides/prog_guide/event_ethernet_rx_adapter.rst        6
-rw-r--r--  doc/guides/prog_guide/eventdev.rst                         6
-rw-r--r--  doc/guides/prog_guide/kernel_nic_interface.rst             2
-rw-r--r--  doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst   8
-rw-r--r--  doc/guides/prog_guide/lpm_lib.rst                          2
-rw-r--r--  doc/guides/prog_guide/metrics_lib.rst                      2
-rw-r--r--  doc/guides/prog_guide/multi_proc_support.rst              22
-rw-r--r--  doc/guides/prog_guide/poll_mode_drv.rst                    6
-rw-r--r--  doc/guides/prog_guide/profile_app.rst                      4
-rw-r--r--  doc/guides/prog_guide/rte_flow.rst                         8
-rw-r--r--  doc/guides/prog_guide/rte_security.rst                    20
-rw-r--r--  doc/guides/prog_guide/traffic_management.rst               2
-rw-r--r--  doc/guides/prog_guide/vhost_lib.rst                        2
19 files changed, 95 insertions, 55 deletions
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 9de14443..658ffd40 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -78,7 +78,7 @@ From the application point of view, each instance of a bbdev device consists of
one or more queues identified by queue IDs. While different devices may have
different capabilities (e.g. support different operation types), all queues on
a device support identical configuration possibilities. A queue is configured
-for only one type of operation and is configured at initializations time.
+for only one type of operation and is configured at initialization time.
When an operation is enqueued to a specific queue ID, the result is dequeued
from the same queue ID.
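
A minimal sketch of such per-operation-type queue configuration at initialization time is given below; the ``rte_bbdev_setup_queues()``/``rte_bbdev_queue_configure()`` calls and the ``rte_bbdev_queue_conf`` field names are assumptions to be checked against the bbdev headers.

.. code-block:: c

    /* Configure queue 0 of a bbdev device for Turbo encode operations
     * only, at initialization time. */
    struct rte_bbdev_queue_conf qconf = {
        .socket = rte_socket_id(),
        .queue_size = 128,
        .priority = 0,
        .op_type = RTE_BBDEV_OP_TURBO_ENC,
    };

    if (rte_bbdev_setup_queues(dev_id, 1, rte_socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Cannot set up bbdev queues\n");
    if (rte_bbdev_queue_configure(dev_id, 0, &qconf) < 0)
        rte_exit(EXIT_FAILURE, "Cannot configure bbdev queue 0\n");
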
@@ -678,7 +678,7 @@ bbdev framework, by giving a sample code performing a loop-back operation with a
baseband processor capable of transceiving data packets.
The following sample C-like pseudo-code shows the basic steps to encode several
-buffers using (**sw_trubo**) bbdev PMD.
+buffers using (**sw_turbo**) bbdev PMD.
.. code-block:: c
diff --git a/doc/guides/prog_guide/compressdev.rst b/doc/guides/prog_guide/compressdev.rst
index 87e26490..3ba4238c 100644
--- a/doc/guides/prog_guide/compressdev.rst
+++ b/doc/guides/prog_guide/compressdev.rst
@@ -17,7 +17,7 @@ Device Creation
Physical compression devices are discovered during the bus probe of the EAL function
which is executed at DPDK initialization, based on their unique device identifier.
-For eg. PCI devices can be identified using PCI BDF (bus/bridge, device, function).
+For example, PCI devices can be identified using PCI BDF (bus/bridge, device, function).
Specific physical compression devices, like other physical devices in DPDK can be
white-listed or black-listed using the EAL command line options.
@@ -379,7 +379,7 @@ using priv_xform would look like:
setup op->m_src and op->m_dst;
}
num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, comp_ops, NUM_OPS);
- /* wait for this to complete before enqueing next*/
+ /* wait for this to complete before enqueuing next*/
do {
num_deque = rte_compressdev_dequeue_burst(cdev_id, 0 , &processed_ops, NUM_OPS);
} while (num_dqud < num_enqd);
@@ -526,7 +526,7 @@ An example pseudocode to set up and process a stream having NUM_CHUNKS with each
op->src.length = CHUNK_LEN;
op->input_chksum = 0;
num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, &op[i], 1);
- /* wait for this to complete before enqueing next*/
+ /* wait for this to complete before enqueuing next*/
do {
num_deqd = rte_compressdev_dequeue_burst(cdev_id, 0 , &processed_ops, 1);
} while (num_deqd < num_enqd);
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 8ee33c87..23770ffd 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -14,7 +14,7 @@ and AEAD symmetric and asymmetric Crypto operations.
Design Principles
-----------------
-The cryptodev library follows the same basic principles as those used in DPDKs
+The cryptodev library follows the same basic principles as those used in DPDK's
Ethernet Device framework. The Crypto framework provides a generic Crypto device
framework which supports both physical (hardware) and virtual (software) Crypto
devices as well as a generic Crypto API which allows Crypto devices to be
@@ -48,7 +48,7 @@ From the command line using the --vdev EAL option
* If DPDK application requires multiple software crypto PMD devices then required
number of ``--vdev`` with appropriate libraries are to be added.
- * An Application with crypto PMD instaces sharing the same library requires unique ID.
+ * An application with crypto PMD instances sharing the same library requires a unique ID.
Example: ``--vdev 'crypto_aesni_mb0' --vdev 'crypto_aesni_mb1'``
@@ -382,7 +382,7 @@ Operation Management and Allocation
The cryptodev library provides an API set for managing Crypto operations which
utilize the Mempool Library to allocate operation buffers. Therefore, it ensures
-that the crytpo operation is interleaved optimally across the channels and
+that the crypto operation is interleaved optimally across the channels and
ranks for optimal processing.
A ``rte_crypto_op`` contains a field indicating the pool that it originated from.
When calling ``rte_crypto_op_free(op)``, the operation returns to its original pool.
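
As a hedged illustration of this pool-based lifecycle (the pool size, cache size and private data size below are arbitrary example values), an operation can be allocated from and returned to its pool as follows:

.. code-block:: c

    /* Create a mempool of symmetric crypto operations, allocate one
     * operation from it and return it to the same pool when done. */
    struct rte_mempool *op_pool = rte_crypto_op_pool_create("crypto_op_pool",
            RTE_CRYPTO_OP_TYPE_SYMMETRIC, 8192, 128,
            16 /* per-op private data, e.g. for the IV */, rte_socket_id());

    struct rte_crypto_op *op = rte_crypto_op_alloc(op_pool,
            RTE_CRYPTO_OP_TYPE_SYMMETRIC);
    if (op != NULL) {
        /* attach a session, set op->sym->m_src / m_dst, enqueue/dequeue */
        rte_crypto_op_free(op);  /* the op goes back to crypto_op_pool */
    }
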
@@ -586,7 +586,7 @@ Sample code
There are various sample applications that show how to use the cryptodev library,
such as the L2fwd with Crypto sample application (L2fwd-crypto) and
-the IPSec Security Gateway application (ipsec-secgw).
+the IPsec Security Gateway application (ipsec-secgw).
While these applications demonstrate how an application can be created to perform
generic crypto operation, the required complexity hides the basic steps of
@@ -767,7 +767,7 @@ using one of the crypto PMDs available in DPDK.
/*
* Dequeue the crypto operations until all the operations
- * are proccessed in the crypto device.
+ * are processed in the crypto device.
*/
uint16_t num_dequeued_ops, total_num_dequeued_ops = 0;
do {
@@ -846,7 +846,7 @@ the order in which the transforms are passed indicates the order of the chaining
Not all asymmetric crypto xforms are supported for chaining. Currently supported
asymmetric crypto chaining is Diffie-Hellman private key generation followed by
public generation. Also, currently API does not support chaining of symmetric and
-asymmetric crypto xfroms.
+asymmetric crypto xforms.
Each xform defines specific asymmetric crypto algo. Currently supported are:
* RSA
diff --git a/doc/guides/prog_guide/dev_kit_build_system.rst b/doc/guides/prog_guide/dev_kit_build_system.rst
index da83a31e..a851a811 100644
--- a/doc/guides/prog_guide/dev_kit_build_system.rst
+++ b/doc/guides/prog_guide/dev_kit_build_system.rst
@@ -216,8 +216,6 @@ Objects
Misc
^^^^
-* rte.doc.mk: Documentation in the development kit framework
-
* rte.gnuconfigure.mk: Build an application that is configure-based.
* rte.subdir.mk: Build several directories in the development kit framework.
@@ -249,7 +247,7 @@ Creates the following symbol:
Which ``dpdk-pmdinfogen`` scans for. Using this information other relevant
bits of data can be exported from the object file and used to produce a
hardware support description, that ``dpdk-pmdinfogen`` then encodes into a
-json formatted string in the following format:
+JSON formatted string in the following format:
.. code-block:: c
diff --git a/doc/guides/prog_guide/efd_lib.rst b/doc/guides/prog_guide/efd_lib.rst
index cb1a1df8..2b355ff2 100644
--- a/doc/guides/prog_guide/efd_lib.rst
+++ b/doc/guides/prog_guide/efd_lib.rst
@@ -423,6 +423,6 @@ References
1- EFD is based on collaborative research work between Intel and
Carnegie Mellon University (CMU), interested readers can refer to the paper
-“Scaling Up Clustered Network Appliances with ScaleBricks;” Dong Zhou et al.
+"Scaling Up Clustered Network Appliances with ScaleBricks" Dong Zhou et al.
at SIGCOMM 2015 (`http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p241.pdf`)
for more information.
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 426acfc2..2bb77b01 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -147,6 +147,14 @@ A default validator callback is provided by EAL, which can be enabled with a
``--socket-limit`` command-line option, for a simple way to limit maximum amount
of memory that can be used by DPDK application.
+.. warning::
+ Memory subsystem uses DPDK IPC internally, so memory allocations/callbacks
+ and IPC must not be mixed: it is not safe to allocate/free memory inside
+ memory-related or IPC callbacks, and it is not safe to use IPC inside
+ memory-related callbacks. See chapter
+ :ref:`Multi-process Support <Multi-process_Support>` for more details about
+ DPDK IPC.
+
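
A hedged sketch of a custom allocation validator follows; the ``rte_mem_alloc_validator_register()`` prototype and callback semantics should be confirmed against ``rte_memory.h``, and, per the warning above, the callback itself must not allocate or free DPDK memory nor use DPDK IPC.

.. code-block:: c

    /* Deny any allocation that would take socket 0 above the registered
     * limit (1 GB in this example). */
    static int
    socket0_validator(int socket_id, size_t cur_limit, size_t new_len)
    {
        if (socket_id == 0 && new_len > cur_limit)
            return -1;  /* reject the allocation */
        return 0;       /* allow it */
    }

    /* the limit given here is passed back to the callback as cur_limit */
    rte_mem_alloc_validator_register("socket0-limit", socket0_validator,
            0, 1ULL << 30);
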
+ Legacy memory mode
This mode is enabled by specifying ``--legacy-mem`` command-line switch to the
@@ -441,6 +449,28 @@ Those TLS include *_cpuset* and *_socket_id*:
* *_socket_id* stores the NUMA node of the CPU set. If the CPUs in CPU set belong to different NUMA node, the *_socket_id* will be set to SOCKET_ID_ANY.
+Control Thread API
+~~~~~~~~~~~~~~~~~~
+
+It is possible to create Control Threads using the public API
+``rte_ctrl_thread_create()``.
+Those threads can be used for management/infrastructure tasks and are used
+internally by DPDK for multi process support and interrupt handling.
+
+Those threads will be scheduled on CPUs part of the original process CPU
+affinity from which the dataplane and service lcores are excluded.
+
+For example, on an 8 CPU system, when a DPDK application is started with
+-l 2,3 (dataplane cores), Control Thread placement depends on the affinity
+configuration, controlled with tools like taskset (Linux) or cpuset (FreeBSD):
+
+- with no affinity configuration, the Control Threads will end up on
+ 0-1,4-7 CPUs.
+- with affinity restricted to 2-4, the Control Threads will end up on
+ CPU 4.
+- with affinity restricted to 2-3, the Control Threads will end up on
+ CPU 2 (master lcore, which is the default when no CPU is available).
+
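
A minimal sketch of creating such a thread is given below; ``mgmt_loop``, ``force_quit`` and ``do_housekeeping()`` are hypothetical application symbols.

.. code-block:: c

    static void *
    mgmt_loop(void *arg)
    {
        (void)arg;                 /* unused */
        while (!force_quit)        /* application-defined stop flag */
            do_housekeeping();     /* application-defined work */
        return NULL;
    }

    pthread_t ctrl_tid;
    int ret = rte_ctrl_thread_create(&ctrl_tid, "app-mgmt", NULL,
            mgmt_loop, NULL);
    if (ret != 0)
        rte_exit(EXIT_FAILURE, "Cannot create control thread\n");
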
.. _known_issue_label:
Known Issues
@@ -626,7 +656,7 @@ The most important fields in the structure and how they are used are described b
Malloc heap is a doubly-linked list, where each element keeps track of its
previous and next elements. Due to the fact that hugepage memory can come and
-go, neighbouring malloc elements may not necessarily be adjacent in memory.
+go, neighboring malloc elements may not necessarily be adjacent in memory.
Also, since a malloc element may span multiple pages, its contents may not
necessarily be IOVA-contiguous either - each malloc element is only guaranteed
to be virtually contiguous.
diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index 0166bb45..7bc559c8 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -157,7 +157,7 @@ The servicing_weight member of struct rte_event_eth_rx_adapter_queue_conf
is applicable when the adapter uses a service core function. The application
has to enable Rx queue interrupts when configuring the ethernet device
using the ``rte_eth_dev_configure()`` function and then use a servicing_weight
-of zero when addding the Rx queue to the adapter.
+of zero when adding the Rx queue to the adapter.
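
A hedged sketch of this interrupt-driven setup follows; ``eth_port_id``, ``adapter_id`` and ``ev_queue_id`` are hypothetical identifiers and only the fields relevant here are set.

.. code-block:: c

    /* Enable Rx queue interrupts on the port, then add the Rx queue to
     * the adapter with a servicing weight of zero to select interrupt
     * mode rather than polling. */
    struct rte_eth_conf port_conf = { 0 };
    port_conf.intr_conf.rxq = 1;
    rte_eth_dev_configure(eth_port_id, 1, 1, &port_conf);

    struct rte_event_eth_rx_adapter_queue_conf queue_conf = { 0 };
    queue_conf.servicing_weight = 0;          /* interrupt driven */
    queue_conf.ev.queue_id = ev_queue_id;
    queue_conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;

    rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port_id,
            0 /* Rx queue id */, &queue_conf);
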
The adapter creates a thread blocked on the interrupt, on an interrupt this
thread enqueues the port id and the queue id to a ring buffer. The adapter
@@ -175,9 +175,9 @@ Rx Callback for SW Rx Adapter
For SW based packet transfers, i.e., when the
``RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT`` is not set in the adapter's
capabilities flags for a particular ethernet device, the service function
-temporarily enqueues mbufs to an event buffer before batch enqueueing these
+temporarily enqueues mbufs to an event buffer before batch enqueuing these
to the event device. If the buffer fills up, the service function stops
-dequeueing packets from the ethernet device. The application may want to
+dequeuing packets from the ethernet device. The application may want to
monitor the buffer fill level and instruct the service function to selectively
enqueue packets to the event device. The application may also use some other
criteria to decide which packets should enter the event device even when
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 8fcae546..27c51d25 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -42,7 +42,7 @@ The rte_event structure contains the following metadata fields, which the
application fills in to have the event scheduled as required:
* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
-* ``event_type`` - The source of this event, eg RTE_EVENT_TYPE_ETHDEV or CPU.
+* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or CPU.
* ``sub_event_type`` - Distinguishes events inside the application, that have
the same event_type (see above)
* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
@@ -265,7 +265,7 @@ Linking Queues and Ports
The final step is to "wire up" the ports to the queues. After this, the
eventdev is capable of scheduling events, and when cores request work to do,
the correct events are provided to that core. Note that the RX core takes input
-from eg: a NIC so it is not linked to any eventdev queues.
+from e.g. a NIC so it is not linked to any eventdev queues.
Linking all workers to atomic queues, and the TX core to the single-link queue
can be achieved like this:
@@ -276,7 +276,7 @@ can be achieved like this:
uint8_t tx_port_id = 5;
uint8_t atomic_qs[] = {0, 1};
uint8_t single_link_q = 2;
- uin8t_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
for(int worker_port_id = 1; worker_port_id <= 4; worker_port_id++) {
int links_made = rte_event_port_link(dev_id, worker_port_id, atomic_qs, NULL, 2);
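
The linking of the TX core to the single-link queue, mentioned above, can be sketched as follows (a minimal sketch, assuming the worker loop has completed):

.. code-block:: c

    /* Link the TX port to the single-link queue at normal priority. */
    int tx_links = rte_event_port_link(dev_id, tx_port_id,
            &single_link_q, &priority, 1);
    if (tx_links != 1)
        rte_exit(EXIT_FAILURE, "Failed to link TX port to queue\n");
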
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 33ea980e..7b8a481a 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -225,7 +225,7 @@ application functions:
``config_promiscusity``:
- Called when the user changes the promiscusity state of the KNI
+ Called when the user changes the promiscuity state of the KNI
interface. For example, when the user runs ``ip link set promisc
[on|off] dev <ifaceX>``. If the user sets this callback function to
NULL, but sets the ``port_id`` field to a value other than -1, a default
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 56abee54..2459fd24 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -477,22 +477,22 @@ Create a bonded device in round robin mode with two slaves specified by their PC
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0, slave=0000:00a:00.01,slave=0000:004:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1, slave=0000:00a:00.01,slave=0000:004:00.00,primary=0000:00a:00.01' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2, slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained
+ $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
diff --git a/doc/guides/prog_guide/lpm_lib.rst b/doc/guides/prog_guide/lpm_lib.rst
index 99563a4a..1609a57d 100644
--- a/doc/guides/prog_guide/lpm_lib.rst
+++ b/doc/guides/prog_guide/lpm_lib.rst
@@ -195,4 +195,4 @@ References
`http://www.ietf.org/rfc/rfc1519 <http://www.ietf.org/rfc/rfc1519>`_
* Pankaj Gupta, Algorithms for Routing Lookups and Packet Classification, PhD Thesis, Stanford University,
- 2000 (`http://klamath.stanford.edu/~pankaj/thesis/ thesis_1sided.pdf <http://klamath.stanford.edu/~pankaj/thesis/%20thesis_1sided.pdf>`_ )
+ 2000 (`http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf <http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf>`_ )
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index e68e4e74..89bc7d68 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -25,7 +25,7 @@ individual device. Since the metrics library is self-contained, the only
restriction on port numbers is that they are less than ``RTE_MAX_ETHPORTS``
- there is no requirement for the ports to actually exist.
-Initialising the library
+Initializing the library
------------------------
Before the library can be used, it has to be initialized by calling
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index 1384fe33..a84083b9 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -176,7 +176,7 @@ Some of these are documented below:
* The use of function pointers between multiple processes running based of different compiled binaries is not supported,
since the location of a given function in one process may be different to its location in a second.
- This prevents the librte_hash library from behaving properly as in a multi-threaded instance,
+ This prevents the librte_hash library from behaving properly in a multi-process instance,
since it uses a pointer to the hash function internally.
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling
@@ -263,9 +263,9 @@ To send a request, a message descriptor ``rte_mp_msg`` must be populated.
Additionally, a ``timespec`` value must be specified as a timeout, after which
IPC will stop waiting and return.
-For synchronous synchronous requests, the ``rte_mp_reply`` descriptor must also
-be created. This is where the responses will be stored. The list of fields that
-will be populated by IPC are as follows:
+For synchronous requests, the ``rte_mp_reply`` descriptor must also be created.
+This is where the responses will be stored.
+The list of fields that will be populated by IPC are as follows:
* ``nb_sent`` - number indicating how many requests were sent (i.e. how many
peer processes were active at the time of the request).
@@ -273,7 +273,7 @@ will be populated by IPC are as follows:
those peer processes that were active at the time of request, how many have
replied)
* ``msgs`` - pointer to where all of the responses are stored. The order in
- which responses appear is undefined. Whendoing sycnrhonous requests, this
+ which responses appear is undefined. When doing synchronous requests, this
memory must be freed by the requestor after request completes!
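
A hedged sketch of issuing a synchronous request with these descriptors is shown below; ``my_app_action`` and ``handle_response()`` are hypothetical names.

.. code-block:: c

    struct rte_mp_msg req;
    struct rte_mp_reply reply;
    struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

    memset(&req, 0, sizeof(req));
    snprintf(req.name, sizeof(req.name), "%s", "my_app_action");
    req.len_param = 0;   /* no payload in this example */
    req.num_fds = 0;

    if (rte_mp_request_sync(&req, &reply, &ts) == 0) {
        /* nb_sent/nb_received are filled in as described above */
        for (int i = 0; i < reply.nb_received; i++)
            handle_response(&reply.msgs[i]);  /* hypothetical helper */
        free(reply.msgs);  /* responses must be freed by the requestor */
    }
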
For asynchronous requests, a function pointer to the callback function must be
@@ -309,6 +309,13 @@ If a response is required, a new ``rte_mp_msg`` message descriptor must be
constructed and sent via ``rte_mp_reply()`` function, along with ``peer``
pointer. The resulting response will then be delivered to the correct requestor.
+.. warning::
+ Simply returning a value when processing a request callback will not send a
+ response to the request - it must always be explicitly sent even in case
+ of errors. Error signalling is left to the application; there is no built-in
+ way to indicate success or failure of a request. Failing to send a response
+ will cause the requestor to time out while waiting for it.
+
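
A hedged sketch of a callback that always replies explicitly, even on failure, is shown below; ``do_the_work()`` and the ``status`` payload layout are illustrative assumptions.

.. code-block:: c

    static int
    my_app_handler(const struct rte_mp_msg *msg, const void *peer)
    {
        struct rte_mp_msg resp;
        int status = do_the_work(msg);  /* hypothetical helper */

        memset(&resp, 0, sizeof(resp));
        snprintf(resp.name, sizeof(resp.name), "%s", msg->name);
        memcpy(resp.param, &status, sizeof(status));
        resp.len_param = sizeof(status);

        /* an explicit reply is required even on error, otherwise the
         * requestor times out */
        return rte_mp_reply(&resp, peer);
    }

    /* registered once, during application initialization */
    rte_mp_action_register("my_app_action", my_app_handler);
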
Misc considerations
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -318,6 +325,11 @@ supported. However, since sending messages (not requests) does not involve an
IPC thread, sending messages while processing another message or request is
supported.
+Since the memory subsystem uses IPC internally, memory allocations and IPC must
+not be mixed: it is not safe to use IPC inside a memory-related callback, nor is
+it safe to allocate/free memory inside IPC callbacks. Attempting to do so may
+lead to a deadlock.
+
Asynchronous request callbacks may be triggered either from IPC thread or from
interrupt thread, depending on whether the request has timed out. It is
therefore suggested to avoid waiting for interrupt-based events (such as alarms)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index b2cf4835..6fae39f9 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w BDBF,representor=0
- -w BDBF,representor=[0,4,6,9]
- -w BDBF,representor=[0-31]
+ -w DBDF,representor=0
+ -w DBDF,representor=[0,4,6,9]
+ -w DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/profile_app.rst b/doc/guides/prog_guide/profile_app.rst
index 02f05614..c1b29f93 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -64,7 +64,7 @@ The default ``cntvct_el0`` based ``rte_rdtsc()`` provides a portable means to
get a wall clock counter in user space. Typically it runs at <= 100MHz.
The alternative method to enable ``rte_rdtsc()`` for a high resolution wall
-clock counter is through the armv8 PMU subsystem. The PMU cycle counter runs
+clock counter is through the ARMv8 PMU subsystem. The PMU cycle counter runs
at CPU frequency. However, access to the PMU cycle counter from user space is
not enabled by default in the arm64 linux kernel. It is possible to enable
cycle counter for user space access by configuring the PMU from the privileged
@@ -75,7 +75,7 @@ scheme. Application can choose the PMU based implementation with
``CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU``.
The example below shows the steps to configure the PMU based cycle counter on
-an armv8 machine.
+an ARMv8 machine.
.. code-block:: console
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index dbf4999a..4e9b4440 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2131,7 +2131,7 @@ as defined in the ``rte_flow_action_raw_decap``
This action modifies the payload of matched flows. The data supplied must
be a valid header, either holding layer 2 data in case of removing layer 2
-before eincapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
+before encapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
tunnel definition starting from layer 2 and moving to the tunnel item itself.
When applied to the original packet the resulting packet must be a
valid packet.
@@ -2281,7 +2281,7 @@ Action: ``DEC_TTL``
Decrease TTL value.
If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6
-in pattern, Some PMDs will reject rule because behaviour will be undefined.
+in pattern, some PMDs will reject the rule because the behavior will be undefined.
.. _table_rte_flow_action_dec_ttl:
@@ -2299,7 +2299,7 @@ Action: ``SET_TTL``
Assigns a new TTL value.
If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6
-in pattern, Some PMDs will reject rule because behaviour will be undefined.
+in pattern, some PMDs will reject the rule because the behavior will be undefined.
.. _table_rte_flow_action_set_ttl:
@@ -2725,7 +2725,7 @@ Caveats
- API operations are synchronous and blocking (``EAGAIN`` cannot be
returned).
-- There is no provision for reentrancy/multi-thread safety, although nothing
+- There is no provision for re-entrancy/multi-thread safety, although nothing
should prevent different devices from being configured at the same
time. PMDs may protect their control path functions accordingly.
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index cb70caa7..7d0734a3 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -40,7 +40,7 @@ Inline Crypto
~~~~~~~~~~~~~
RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
-The crypto processing for security protocol (e.g. IPSec) is processed
+The crypto processing for security protocol (e.g. IPsec) is processed
inline during receive and transmission on NIC port. The flow based
security action should be configured on the port.
@@ -48,7 +48,7 @@ Ingress Data path - The packet is decrypted in RX path and relevant
crypto status is set in Rx descriptors. After the successful inline
crypto processing the packet is presented to host as a regular Rx packet
however all security protocol related headers are still attached to the
-packet. e.g. In case of IPSec, the IPSec tunnel headers (if any),
+packet. e.g. In case of IPsec, the IPsec tunnel headers (if any),
ESP/AH headers will remain in the packet but the received packet
contains the decrypted data where the encrypted data was when the packet
arrived. The driver Rx path check the descriptors and and based on the
@@ -111,7 +111,7 @@ Inline protocol offload
~~~~~~~~~~~~~~~~~~~~~~~
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
-The crypto and protocol processing for security protocol (e.g. IPSec)
+The crypto and protocol processing for security protocol (e.g. IPsec)
is processed inline during receive and transmission. The flow based
security action should be configured on the port.
@@ -119,7 +119,7 @@ Ingress Data path - The packet is decrypted in the RX path and relevant
crypto status is set in the Rx descriptors. After the successful inline
crypto processing the packet is presented to the host as a regular Rx packet
but all security protocol related headers are optionally removed from the
-packet. e.g. in the case of IPSec, the IPSec tunnel headers (if any),
+packet. e.g. in the case of IPsec, the IPsec tunnel headers (if any),
ESP/AH headers will be removed from the packet and the received packet
will contains the decrypted packet only. The driver Rx path checks the
descriptors and based on the crypto status sets additional flags in
@@ -132,7 +132,7 @@ to identify the security processing done on the packet.
The underlying device in this case is stateful. It is expected that
the device shall support crypto processing for all kind of packets matching
to a given flow, this includes fragmented packets (post reassembly).
- E.g. in case of IPSec the device may internally manage anti-replay etc.
+ E.g. in case of IPsec the device may internally manage anti-replay etc.
It will provide a configuration option for anti-replay behavior i.e. to drop
the packets or pass them to driver with error flags set in the descriptor.
@@ -150,7 +150,7 @@ to cross the MTU size.
.. note::
The underlying device will manage state information required for egress
- processing. E.g. in case of IPSec, the seq number will be added to the
+ processing. E.g. in case of IPsec, the seq number will be added to the
packet, however the device shall provide indication when the sequence number
is about to overflow. The underlying device may support post encryption TSO.
@@ -199,13 +199,13 @@ crypto device.
Decryption: The packet is sent to the crypto device for security
protocol processing. The device will decrypt the packet and it will also
optionally remove additional security headers from the packet.
-E.g. in case of IPSec, IPSec tunnel headers (if any), ESP/AH headers
+E.g. in case of IPsec, IPsec tunnel headers (if any), ESP/AH headers
will be removed from the packet and the decrypted packet may contain
plain data only.
.. note::
- In case of IPSec the device may internally manage anti-replay etc.
+ In case of IPsec the device may internally manage anti-replay etc.
It will provide a configuration option for anti-replay behavior i.e. to drop
the packets or pass them to driver with error flags set in descriptor.
@@ -217,7 +217,7 @@ for any protocol header addition.
.. note::
- In the case of IPSec, the seq number will be added to the packet,
+ In the case of IPsec, the seq number will be added to the packet.
It shall provide an indication when the sequence number is about to
overflow.
@@ -549,7 +549,7 @@ IPsec related configuration parameters are defined in ``rte_security_ipsec_xform
struct rte_security_ipsec_sa_options options;
/**< various SA options */
enum rte_security_ipsec_sa_direction direction;
- /**< IPSec SA Direction - Egress/Ingress */
+ /**< IPsec SA Direction - Egress/Ingress */
enum rte_security_ipsec_sa_protocol proto;
/**< IPsec SA Protocol - AH/ESP */
enum rte_security_ipsec_sa_mode mode;
diff --git a/doc/guides/prog_guide/traffic_management.rst b/doc/guides/prog_guide/traffic_management.rst
index 98ac4310..05b34d93 100644
--- a/doc/guides/prog_guide/traffic_management.rst
+++ b/doc/guides/prog_guide/traffic_management.rst
@@ -39,7 +39,7 @@ whether a specific implementation does meet the needs to the user application.
At the TM level, users can get high level idea with the help of various
parameters such as maximum number of nodes, maximum number of hierarchical
levels, maximum number of shapers, maximum number of private shapers, type of
-scheduling algorithm (Strict Priority, Weighted Fair Queueing , etc.), etc.,
+scheduling algorithm (Strict Priority, Weighted Fair Queuing, etc.), etc.,
supported by the implementation.
Likewise, users can query the capability of the TM at the hierarchical level to
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index c77df338..27c3af8f 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -63,7 +63,7 @@ The following is an overview of some key Vhost API functions:
512).
* zero copy is really good for VM2VM case. For iperf between two VMs, the
- boost could be above 70% (when TSO is enableld).
+ boost could be above 70% (when TSO is enabled).
* For zero copy in VM2NIC case, guest Tx used vring may be starved if the
PMD driver consume the mbuf but not release them timely.