author    John DeNisco <jdenisco@cisco.com>    2018-07-26 12:45:10 -0400
committer Dave Barach <openvpp@barachs.net>    2018-07-26 18:34:47 +0000
commit    06dcd45ff81e06bc8cf40ed487c0b2652d346a5a (patch)
tree      71403f9d422c4e532b2871a66ab909bd6066b10b /docs/overview/whatisvpp/what-is-vector-packet-processing.rst
parent    1d65279ffecd0f540288187b94cb1a6b84a7a0c6 (diff)
Initial commit of Sphinx docs

Change-Id: I9fca8fb98502dffc2555f9de7f507b6f006e0e77
Signed-off-by: John DeNisco <jdenisco@cisco.com>

Diffstat (limited to 'docs/overview/whatisvpp/what-is-vector-packet-processing.rst')
-rw-r--r--  docs/overview/whatisvpp/what-is-vector-packet-processing.rst  73
1 file changed, 73 insertions, 0 deletions
diff --git a/docs/overview/whatisvpp/what-is-vector-packet-processing.rst b/docs/overview/whatisvpp/what-is-vector-packet-processing.rst
new file mode 100644
index 00000000000..994318e81c5
--- /dev/null
+++ b/docs/overview/whatisvpp/what-is-vector-packet-processing.rst
@@ -0,0 +1,73 @@
+:orphan:
+
+.. _what-is-vector-packet-processing:
+
+=================================
+What is vector packet processing?
+=================================
+
+FD.io VPP is developed using vector packet processing concepts, as opposed to
+scalar packet processing. These concepts are explained in the following sections.
+
+Vector packet processing is a common approach among high performance `Userspace
+<https://en.wikipedia.org/wiki/User_space>`_ packet processing applications such
+as those developed with FD.io VPP and `DPDK
+<https://en.wikipedia.org/wiki/Data_Plane_Development_Kit>`_. The scalar-based
+approach tends to be favoured by Operating System `Kernel
+<https://en.wikipedia.org/wiki/Kernel_(operating_system)>`_ Network Stacks and
+Userspace stacks that don't have strict performance requirements.
+
+**Scalar Packet Processing**
+
+A scalar packet processing network stack typically processes one packet at a
+time: an interrupt handling function takes a single packet from a Network
+Interface, and processes it through a set of functions: fooA calls fooB calls
+fooC and so on.
+
+.. code-block:: none
+
+ +---> fooA(packet1) +---> fooB(packet1) +---> fooC(packet1)
+   +---> fooA(packet2) +---> fooB(packet2) +---> fooC(packet2)
+   +---> fooA(packet3) +---> fooB(packet3) +---> fooC(packet3)
+   ...
+
+
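+The sketch below shows what scalar dispatch might look like in C. The
+function names fooA/fooB/fooC and the packet type come from the diagram
+above and are illustrative placeholders, not VPP APIs.
+
+.. code-block:: c
+
+   struct packet { unsigned char *data; unsigned int len; };
+
+   static void fooC (struct packet *p) { /* final processing step */ }
+   static void fooB (struct packet *p) { fooC (p); }
+   static void fooA (struct packet *p) { fooB (p); }
+
+   /* Interrupt handler: each packet traverses the entire call chain
+    * before the next packet is touched, so the same instructions are
+    * re-fetched into the I-cache for every packet. */
+   void rx_handler (struct packet *p)
+   {
+     fooA (p);
+   }
+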
+Scalar packet processing is simple, but inefficient in these ways:
+
+* When the code path length exceeds the size of the Microprocessor's instruction
+ cache (I-cache), `thrashing
+ <https://en.wikipedia.org/wiki/Thrashing_(computer_science)>`_ occurs as the
+ Microprocessor is continually loading new instructions. In this model, each
+ packet incurs an identical set of I-cache misses.
+* The associated deep call stack will also add load-store-unit pressure as
+ stack-locals fall out of the Microprocessor's Layer 1 Data Cache (D-cache).
+
+**Vector Packet Processing**
+
+In contrast, a vector packet processing network stack processes multiple packets
+at a time, called a 'vector of packets' or simply a 'vector'. An interrupt
+handling function takes the vector of packets from a Network Interface, and
+processes the vector through a set of functions: fooA calls fooB calls fooC and
+so on.
+
+.. code-block:: none
+
+ +---> fooA([packet1, +---> fooB([packet1, +---> fooC([packet1, +--->
+ packet2, packet2, packet2,
+ ... ... ...
+ packet256]) packet256]) packet256])
+
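+A corresponding C sketch of vector dispatch might look like the following;
+again the names are illustrative, not VPP APIs. Each stage loops over the
+whole vector before handing it to the next stage, so its instructions are
+fetched into the I-cache once per vector rather than once per packet.
+
+.. code-block:: c
+
+   struct packet { unsigned char *data; unsigned int len; };
+
+   static void fooB (struct packet *v, int n_packets)
+   {
+     for (int i = 0; i < n_packets; i++)
+       ; /* per-packet work for stage B, then call fooC, and so on */
+   }
+
+   static void fooA (struct packet *v, int n_packets)
+   {
+     for (int i = 0; i < n_packets; i++)
+       ; /* per-packet work for stage A */
+     fooB (v, n_packets);
+   }
+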
+This approach fixes:
+
+* The I-cache thrashing problem described above, by amortizing the cost of
+  I-cache loads across multiple packets.
+
+* The inefficiencies associated with the deep call stack, by receiving vectors
+  of up to 256 packets at a time from the Network Interface and processing them
+  using a directed graph of nodes. The graph scheduler invokes one node dispatch
+  function at a time, restricting stack depth to a few stack frames (see the
+  sketch after this list).
+
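+The shape of that scheduling loop can be sketched as below. This is a
+simplification with hypothetical names; VPP's real graph nodes carry
+considerably more state. The key point survives, though: the scheduler
+calls each node's dispatch function in turn, so nodes never call each
+other and the stack stays only a few frames deep.
+
+.. code-block:: c
+
+   struct packet { unsigned char *data; unsigned int len; };
+
+   typedef void (*node_fn_t) (struct packet *vector, int n_packets);
+
+   struct node { node_fn_t dispatch; struct node *next; };
+
+   /* Hypothetical graph scheduler: one dispatch call per node, each
+    * processing the full vector, instead of a deep per-packet chain. */
+   void run_graph (struct node *head, struct packet *v, int n_packets)
+   {
+     for (struct node *node = head; node; node = node->next)
+       node->dispatch (v, n_packets);
+   }
+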
+Further optimizations that this approach enables are pipelining and
+prefetching, which minimize read latency on table data and parallelize the
+packet loads needed to process packets.
+
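+As an illustration of the prefetching idea, a dispatch loop might prefetch
+the next packet's data while the current packet is being processed, so the
+memory read is in flight before it is needed. This is a minimal sketch
+using the GCC/Clang ``__builtin_prefetch`` builtin; the surrounding names
+are hypothetical.
+
+.. code-block:: c
+
+   struct packet { unsigned char *data; unsigned int len; };
+
+   void process_vector (struct packet *v, int n_packets)
+   {
+     for (int i = 0; i < n_packets; i++)
+       {
+         /* Start loading packet i+1's data while packet i is processed,
+          * hiding the memory read latency behind useful work. */
+         if (i + 1 < n_packets)
+           __builtin_prefetch (v[i + 1].data);
+
+         /* ... per-packet work on v[i] ... */
+       }
+   }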