Diffstat (limited to 'docs/overview')
-rw-r--r--  docs/overview/features/l2.rst                                  2
-rw-r--r--  docs/overview/performance/index.rst                            2
-rw-r--r--  docs/overview/whatisvpp/dataplane.rst                          4
-rw-r--r--  docs/overview/whatisvpp/developer.rst                          2
-rw-r--r--  docs/overview/whatisvpp/extensible.rst                         2
-rw-r--r--  docs/overview/whatisvpp/what-is-vector-packet-processing.rst  12
6 files changed, 12 insertions, 12 deletions
diff --git a/docs/overview/features/l2.rst b/docs/overview/features/l2.rst
index 56c12053ab8..939afb7e8be 100644
--- a/docs/overview/features/l2.rst
+++ b/docs/overview/features/l2.rst
@@ -12,7 +12,7 @@ MAC Layer
Discovery
---------
-* Cisco Discovery Protocol
+* Cisco Discovery Protocol v2 (CDP)
* Link Layer Discovery Protocol (LLDP)
Link Layer Control Protocol
diff --git a/docs/overview/performance/index.rst b/docs/overview/performance/index.rst
index 1c250206fcf..25e3897ff37 100644
--- a/docs/overview/performance/index.rst
+++ b/docs/overview/performance/index.rst
@@ -27,7 +27,7 @@ These features have been designed to take full advantage of common micro-process
* Reducing cache and TLB misses by processing packets in vectors
* Realizing `IPC <https://en.wikipedia.org/wiki/Instructions_per_cycle>`_ gains with vector instructions such as: SSE, AVX and NEON
* Eliminating mode switching, context switches and blocking, to always be doing useful work
-* Cache-lined aliged buffers for cache and memory efficiency
+* Cache-lined aligned buffers for cache and memory efficiency
Packet Throughput Graphs
diff --git a/docs/overview/whatisvpp/dataplane.rst b/docs/overview/whatisvpp/dataplane.rst
index 256165f4f8c..daf2124158d 100644
--- a/docs/overview/whatisvpp/dataplane.rst
+++ b/docs/overview/whatisvpp/dataplane.rst
@@ -16,10 +16,10 @@ This section identifies different components of packet processing and describes
* Wide support for standard Operating System Interfaces such as AF_Packet, Tun/Tap & Netmap.
-* Wide network and cryptograhic hardware support with `DPDK <https://www.dpdk.org/>`_.
+* Wide network and cryptographic hardware support with `DPDK <https://www.dpdk.org/>`_.
* Container and Virtualization support
- * Para-virtualized intefaces; Vhost and Virtio
+ * Para-virtualized interfaces; Vhost and Virtio
* Network Adapters over PCI passthrough
* Native container interfaces; MemIF
diff --git a/docs/overview/whatisvpp/developer.rst b/docs/overview/whatisvpp/developer.rst
index 57000d45880..040762b01ba 100644
--- a/docs/overview/whatisvpp/developer.rst
+++ b/docs/overview/whatisvpp/developer.rst
@@ -14,7 +14,7 @@ This section describes the different ways VPP is friendly to developers:
* Runs as a standard user-space process for fault tolerance; software crashes seldom require more than a process restart.
* Improved fault-tolerance and upgradability when compared to running similar packet processing in the kernel; software updates never require system reboots.
- * Development expierence is easier compared to similar kernel code
+ * Development experience is easier compared to similar kernel code
* Hardware isolation and protection (`iommu <https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit>`_)
* Built for security
diff --git a/docs/overview/whatisvpp/extensible.rst b/docs/overview/whatisvpp/extensible.rst
index c271dad7d14..e7762d71312 100644
--- a/docs/overview/whatisvpp/extensible.rst
+++ b/docs/overview/whatisvpp/extensible.rst
@@ -13,7 +13,7 @@ Extensible and Modular Design
The FD.io VPP packet processing pipeline is decomposed into a ‘packet processing
graph’. This modular approach means that anyone can ‘plug in’ new graph
-nodes. This makes VPP easily exensible and means that plugins can be
+nodes. This makes VPP easily extensible and means that plugins can be
customized for specific purposes. VPP is also configurable through its
Low-Level API.
diff --git a/docs/overview/whatisvpp/what-is-vector-packet-processing.rst b/docs/overview/whatisvpp/what-is-vector-packet-processing.rst
index 994318e81c5..50a5bab8af1 100644
--- a/docs/overview/whatisvpp/what-is-vector-packet-processing.rst
+++ b/docs/overview/whatisvpp/what-is-vector-packet-processing.rst
@@ -13,7 +13,7 @@ Vector packet processing is a common approach among high performance `Userspace
<https://en.wikipedia.org/wiki/User_space>`_ packet processing applications such
as developed with FD.io VPP and `DPDK
<https://en.wikipedia.org/wiki/Data_Plane_Development_Kit>`_. The scalar based
-aproach tends to be favoured by Operating System `Kernel
+approach tends to be favoured by Operating System `Kernel
<https://en.wikipedia.org/wiki/Kernel_(operating_system)>`_ Network Stacks and
Userspace stacks that don't have strict performance requirements.
@@ -21,7 +21,7 @@ Userspace stacks that don't have strict performance requirements.
A scalar packet processing network stack typically processes one packet at a
time: an interrupt handling function takes a single packet from a Network
-Inteface, and processes it through a set of functions: fooA calls fooB calls
+Interface, and processes it through a set of functions: fooA calls fooB calls
fooC and so on.
.. code-block:: none
@@ -32,7 +32,7 @@ fooC and so on.
+---> fooA(packet3) +---> fooB(packet3) +---> fooC(packet3)
-Scalar packet processing is simple, but inefficent in these ways:
+Scalar packet processing is simple, but inefficient in these ways:
* When the code path length exceeds the size of the Microprocessor's instruction
cache (I-cache), `thrashing
@@ -46,7 +46,7 @@ Scalar packet processing is simple, but inefficent in these ways:
In contrast, a vector packet processing network stack processes multiple packets
at a time, called 'vectors of packets' or simply a 'vector'. An interrupt
-handling function takes the vector of packets from a Network Inteface, and
+handling function takes the vector of packets from a Network Interface, and
processes the vector through a set of functions: fooA calls fooB calls fooC and
so on.
@@ -59,10 +59,10 @@ so on.
This approach fixes:
-* The I-cache thrashing problem described above, by ammoritizing the cost of
+* The I-cache thrashing problem described above, by amortizing the cost of
I-cache loads across multiple packets.
-* The ineffeciences associated with the deep call stack by recieving vectors
+* The inefficiencies associated with the deep call stack by receiving vectors
  of up to 256 packets at a time from the Network Interface, and processing them
  using a directed graph of nodes. The graph scheduler invokes one node dispatch
  function at a time, restricting stack depth to a few stack frames.
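The scalar-versus-vector contrast described in the patched text can be sketched with a toy model. This is illustrative only: `fooA`/`fooB`/`fooC` are the placeholder names used in the document, and the loops below model dispatch order, not VPP's actual graph-node API.

```python
# Toy model of scalar vs. vector packet dispatch (illustrative only,
# not VPP's real API). Each "node" is a plain function that transforms
# a packet; here packets are strings and nodes append a tag.

def fooA(pkt): return pkt + "A"
def fooB(pkt): return pkt + "B"
def fooC(pkt): return pkt + "C"

NODES = [fooA, fooB, fooC]

def scalar_process(packets):
    # Scalar: each packet traverses the whole call chain
    # fooA -> fooB -> fooC before the next packet starts, so every
    # node's instructions are re-entered once per packet.
    out = []
    for pkt in packets:
        for node in NODES:
            pkt = node(pkt)
        out.append(pkt)
    return out

def vector_process(packets):
    # Vector: the whole vector passes through one node before the
    # next node runs, amortizing each node's I-cache load across
    # the entire vector and keeping the call stack shallow.
    vec = list(packets)
    for node in NODES:
        vec = [node(pkt) for pkt in vec]
    return vec

packets = ["packet1", "packet2", "packet3"]
# Both orders produce the same result; only the traversal
# (and thus the cache behavior) differs.
assert scalar_process(packets) == vector_process(packets)
```

Both functions yield identical output; the difference the document describes is purely in traversal order, which is what lets the vector approach amortize instruction-cache loads across up to 256 packets per node dispatch.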