author    Tibor Frank <tifrank@cisco.com>  2023-05-03 13:53:27 +0000
committer Tibor Frank <tifrank@cisco.com>  2023-05-09 05:56:22 +0000
commit    374954b9d648f503f6783325a1266457953a998d (patch)
tree      5514dee6af2a2e069189efe39d4e929dd25721f7 /docs/content/methodology/test
parent    46eac7bb697e8261dba5b439a15f5a6125f31760 (diff)
C-Docs: New structure
Change-Id: I73d107f94b28b138f3350a9e1eedb0555583a9ca
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Diffstat (limited to 'docs/content/methodology/test')
-rw-r--r-- docs/content/methodology/test/_index.md                          |   6
-rw-r--r-- docs/content/methodology/test/access_control_lists.md            |  66
-rw-r--r-- docs/content/methodology/test/generic_segmentation_offload.md    | 117
-rw-r--r-- docs/content/methodology/test/hoststack/_index.md                |   6
-rw-r--r-- docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md |  48
-rw-r--r-- docs/content/methodology/test/hoststack/tcpip_with_iperf3.md     |  52
-rw-r--r-- docs/content/methodology/test/hoststack/udpip_with_iperf3.md     |  44
-rw-r--r-- docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md    |  39
-rw-r--r-- docs/content/methodology/test/internet_protocol_security.md      |  73
-rw-r--r-- docs/content/methodology/test/network_address_translation.md     | 445
-rw-r--r-- docs/content/methodology/test/packet_flow_ordering.md            |  42
-rw-r--r-- docs/content/methodology/test/reconfiguration.md                 |  68
-rw-r--r-- docs/content/methodology/test/tunnel_encapsulations.md           |  87
-rw-r--r-- docs/content/methodology/test/vpp_device.md                      |  15
14 files changed, 1108 insertions, 0 deletions
diff --git a/docs/content/methodology/test/_index.md b/docs/content/methodology/test/_index.md
new file mode 100644
index 0000000000..857cc7b168
--- /dev/null
+++ b/docs/content/methodology/test/_index.md
@@ -0,0 +1,6 @@
+---
+bookCollapseSection: true
+bookFlatSection: false
+title: "Test"
+weight: 3
+---
diff --git a/docs/content/methodology/test/access_control_lists.md b/docs/content/methodology/test/access_control_lists.md
new file mode 100644
index 0000000000..354e6b72bb
--- /dev/null
+++ b/docs/content/methodology/test/access_control_lists.md
@@ -0,0 +1,66 @@
+---
+title: "Access Control Lists"
+weight: 5
+---
+
+# Access Control Lists
+
+VPP is tested in a number of data plane feature configurations across
+different forwarding modes. The following sections list the tested features.
+
+## ACL Security-Groups
+
+Both stateless and stateful access control lists (ACL), also known as
+security-groups, are supported by VPP.
+
+The following ACL configurations are tested for MAC switching with L2
+bridge-domains:
+
+- *l2bdbasemaclrn-iacl{E}sl-{F}flows*: Input stateless ACL, with {E}
+ entries and {F} flows.
+- *l2bdbasemaclrn-oacl{E}sl-{F}flows*: Output stateless ACL, with {E}
+ entries and {F} flows.
+- *l2bdbasemaclrn-iacl{E}sf-{F}flows*: Input stateful ACL, with {E}
+ entries and {F} flows.
+- *l2bdbasemaclrn-oacl{E}sf-{F}flows*: Output stateful ACL, with {E}
+ entries and {F} flows.
+
+The following ACL configurations are tested with IPv4 routing:
+
+- *ip4base-iacl{E}sl-{F}flows*: Input stateless ACL, with {E} entries
+ and {F} flows.
+- *ip4base-oacl{E}sl-{F}flows*: Output stateless ACL, with {E} entries
+ and {F} flows.
+- *ip4base-iacl{E}sf-{F}flows*: Input stateful ACL, with {E} entries and
+ {F} flows.
+- *ip4base-oacl{E}sf-{F}flows*: Output stateful ACL, with {E} entries
+ and {F} flows.
+
+ACL tests are executed with the following combinations of ACL entries
+and number of flows:
+
+- ACL entry definitions
+ - flow non-matching deny entry: (src-ip4, dst-ip4, src-port, dst-port).
+ - flow matching permit ACL entry: (src-ip4, dst-ip4).
+- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50].
+- {F} - number of UDP flows with different tuple (src-ip4, dst-ip4,
+ src-port, dst-port), {F} = [100, 10k, 100k].
+- All {E}x{F} combinations are tested per ACL type, for a total of 9 test cases.
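+
+The resulting set of test-case names per ACL type can be enumerated with a
+short shell loop. The sketch below is illustrative only; it expands the input
+stateless (`iacl{E}sl`) variant, one of the four ACL types listed above.
+
+    $ for e in 1 10 50; do for f in 100 10k 100k; do \
+        echo "l2bdbasemaclrn-iacl${e}sl-${f}flows"; done; done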
+
+## ACL MAC-IP
+
+MAC-IP binding ACLs are tested for MAC switching with L2 bridge-domains:
+
+- *l2bdbasemaclrn-macip-iacl{E}sl-{F}flows*: Input stateless ACL, with
+ {E} entries and {F} flows.
+
+MAC-IP ACL tests are executed with the following combinations of ACL
+entries and number of flows:
+
+- ACL entry definitions
+ - flow non-matching deny entry: (dst-ip4, dst-mac, bit-mask)
+ - flow matching permit ACL entry: (dst-ip4, dst-mac, bit-mask)
+- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50]
+- {F} - number of UDP flows with different tuple (dst-ip4, dst-mac),
+ {F} = [100, 10k, 100k]
+- All {E}x{F} combinations are tested per ACL type, for a total of 9 test cases.
diff --git a/docs/content/methodology/test/generic_segmentation_offload.md b/docs/content/methodology/test/generic_segmentation_offload.md
new file mode 100644
index 0000000000..0032d203de
--- /dev/null
+++ b/docs/content/methodology/test/generic_segmentation_offload.md
@@ -0,0 +1,117 @@
+---
+title: "Generic Segmentation Offload"
+weight: 7
+---
+
+# Generic Segmentation Offload
+
+## Overview
+
+Generic Segmentation Offload (GSO) reduces per-packet processing
+overhead by enabling applications to pass a multi-packet buffer to
+the (v)NIC and process a smaller number of large packets (e.g. frame
+size of 64 KB), instead of a higher number of small packets (e.g. frame
+size of 1500 B).
+
+GSO is tested on VPP vhostuser and tapv2 interfaces. All test cases use the
+iPerf3 client and server applications running TCP/IP as a traffic generator.
+For performance comparison the same tests are run without GSO enabled.
+
+## GSO Test Topologies
+
+Two VPP GSO test topologies are implemented:
+
+1. iPerfC_GSOvirtio_LinuxVM --- GSOvhost_VPP_GSOvhost --- iPerfS_GSOvirtio_LinuxVM
+ - Tests VPP GSO on vhostuser interfaces and interaction with Linux
+ virtio with GSO enabled.
+2. iPerfC_GSOtap_LinuxNspace --- GSOtapv2_VPP_GSOtapv2 --- iPerfS_GSOtap_LinuxNspace
+ - Tests VPP GSO on tapv2 interfaces and interaction with Linux tap
+ with GSO enabled.
+
+Common configuration:
+
+- iPerfC (client) and iPerfS (server) run in TCP/IP mode without upper
+ bandwidth limit.
+- Trial duration is set to 30 sec.
+- iPerfC, iPerfS and VPP run on the same (single) SUT node.
+
+
+## VPP GSOtap Topology
+
+### VPP Configuration
+
+VPP GSOtap tests are executed without using hyperthreading. The VPP worker runs
+on a single core. Multi-core tests are not executed. Each interface belongs to
+a separate namespace. The following core pinning scheme is used:
+
+- 1t1c (rxq=1, rx_qsz=4096, tx_qsz=4096)
+ - system isolated: 0,28,56,84
+ - vpp mt: 1
+ - vpp wt: 2
+ - vhost: 3-5
+ - iperf-s: 6
+ - iperf-c: 7
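+
+As a sanity check of the GSOtap setup, GSO can be confirmed on the Linux side
+of the tap interface with `ethtool`; the interface name `tap1` is an
+assumption, while the namespace name matches the iPerf3 commands below.
+
+    $ sudo ip netns exec tap1_namespace ethtool -k tap1 | grep -i segmentation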
+
+### iPerf3 Server Configuration
+
+iPerf3 version used: 3.7
+
+ $ sudo -E -S ip netns exec tap1_namespace iperf3 \
+ --server --daemon --pidfile /tmp/iperf3_server.pid \
+ --logfile /tmp/iperf3.log --port 5201 --affinity <X>
+
+For the full iPerf3 reference please see
+[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
+
+
+### iPerf3 Client Configuration
+
+iPerf3 version used: 3.7
+
+ $ sudo -E -S ip netns exec tap1_namespace iperf3 \
+ --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
+ --time 30.0 --affinity <X> --zerocopy
+
+For the full iPerf3 reference please see
+[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
+
+
+## VPP GSOvhost Topology
+
+### VPP Configuration
+
+VPP GSOvhost tests are executed without using hyperthreading. The VPP worker
+runs on a single core. Multi-core tests are not executed. The following core
+pinning scheme is used:
+
+- 1t1c (rxq=1, rx_qsz=1024, tx_qsz=1024)
+ - system isolated: 0,28,56,84
+ - vpp mt: 1
+ - vpp wt: 2
+ - vm-iperf-s: 3,4,5,6,7
+ - vm-iperf-c: 8,9,10,11,12
+ - iperf-s: 1
+ - iperf-c: 1
+
+### iPerf3 Server Configuration
+
+iPerf3 version used: 3.7
+
+ $ sudo iperf3 \
+ --server --daemon --pidfile /tmp/iperf3_server.pid \
+ --logfile /tmp/iperf3.log --port 5201 --affinity X
+
+For the full iPerf3 reference please see
+[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
+
+
+### iPerf3 Client Configuration
+
+iPerf3 version used: 3.7
+
+ $ sudo iperf3 \
+ --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
+ --time 30.0 --affinity X --zerocopy
+
+For the full iPerf3 reference please see
+[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
diff --git a/docs/content/methodology/test/hoststack/_index.md b/docs/content/methodology/test/hoststack/_index.md
new file mode 100644
index 0000000000..2ae872c54e
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/_index.md
@@ -0,0 +1,6 @@
+---
+bookCollapseSection: true
+bookFlatSection: false
+title: "Hoststack"
+weight: 6
+---
diff --git a/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md
new file mode 100644
index 0000000000..c7d57a51b3
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/quicudpip_with_vppecho.md
@@ -0,0 +1,48 @@
+---
+title: "QUIC/UDP/IP with vpp_echo"
+weight: 1
+---
+
+# QUIC/UDP/IP with vpp_echo
+
+[vpp_echo performance testing tool](https://wiki.fd.io/view/VPP/HostStack#External_Echo_Server.2FClient_.28vpp_echo.29)
+is a bespoke performance test application which utilizes the 'native
+HostStack APIs' to verify performance and correct handling of
+connection/stream events with uni-directional and bi-directional
+streams of data.
+
+Because iperf3 does not support the QUIC transport protocol, vpp_echo
+is used for measuring the maximum attainable goodput of the VPP Host
+Stack connection utilizing the QUIC transport protocol across two
+instances of VPP running on separate DUT nodes. The QUIC transport
+protocol supports multiple streams per connection and test cases
+utilize different combinations of QUIC connections and number of
+streams per connection.
+
+The test configuration is as follows:
+
+ DUT1 Network DUT2
+ [ vpp_echo-client -> VPP1 ]=======[ VPP2 -> vpp_echo-server]
+ N-streams/connection
+
+where,
+
+1. The vpp_echo server attaches to VPP2 and LISTENs on VPP2 port 1234.
+2. The vpp_echo client creates one or more connections to VPP1 and opens
+   one or more streams per connection to VPP2 port 1234.
+3. vpp_echo client transmits a uni-directional stream as fast as the
+ VPP Host Stack allows to the vpp_echo server for the test duration.
+4. At the end of the test the vpp_echo client emits the goodput
+ measurements for all streams and the sum of all streams.
+
+Test cases include
+
+1. 1 QUIC Connection with 1 Stream
+2. 1 QUIC connection with 10 Streams
+3. 10 QUIC connections with 1 Stream
+4. 10 QUIC connections with 10 Streams
+
+with stream sizes to provide reasonable test durations. The VPP Host
+Stack QUIC transport is configured to utilize the picotls encryption
+library. In the future, tests utilizing additional encryption
+algorithms will be added.
diff --git a/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md
new file mode 100644
index 0000000000..7baa88ab50
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/tcpip_with_iperf3.md
@@ -0,0 +1,52 @@
+---
+title: "TCP/IP with iperf3"
+weight: 2
+---
+
+# TCP/IP with iperf3
+
+[iperf3 goodput measurement tool](https://github.com/esnet/iperf)
+is used for measuring the maximum attainable goodput of the VPP Host
+Stack connection across two instances of VPP running on separate DUT
+nodes. iperf3 is a popular open source tool for active measurements
+of the maximum achievable goodput on IP networks.
+
+Because iperf3 utilizes the POSIX socket interface APIs, the current
+test configuration utilizes the LD_PRELOAD mechanism of the Linux
+dynamic linker to connect iperf3 to the VPP Host Stack using the VPP
+Communications Library (VCL) LD_PRELOAD library (libvcl_ldpreload.so).
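+
+A minimal sketch of such an LD_PRELOAD invocation is shown below; the library
+path and the location of the VCL configuration file (pointed to by the
+VCL_CONFIG environment variable) are assumptions and differ per installation.
+
+    $ sudo sh -c "VCL_CONFIG=/etc/vpp/vcl.conf \
+      LD_PRELOAD=/usr/lib/libvcl_ldpreload.so \
+      iperf3 --server --port 5201"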
+
+In the future, a forked version of iperf3 which has been modified to
+directly use the VCL application APIs may be added to determine the
+difference in performance of 'VCL Native' applications versus utilizing
+LD_PRELOAD, which inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+ DUT1 Network DUT2
+ [ iperf3-client -> VPP1 ]=======[ VPP2 -> iperf3-server]
+
+where,
+
+1. iperf3 server attaches to VPP2 and LISTENs on VPP2:TCP port 5201.
+2. iperf3 client attaches to VPP1 and opens one or more stream
+ connections to VPP2:TCP port 5201.
+3. iperf3 client transmits a uni-directional stream as fast as the
+ VPP Host Stack allows to the iperf3 server for the test duration.
+4. At the end of the test the iperf3 client emits the goodput
+ measurements for all streams and the sum of all streams.
+
+Test cases include 1 and 10 Streams with a 20 second test duration
+with the VPP Host Stack configured to utilize the Cubic TCP
+congestion algorithm.
+
+Note: iperf3 is single threaded, so it is expected that the 10 stream
+test shows little or no performance improvement due to
+multi-thread/multi-core execution.
+
+There are also variations of these test cases which use the VPP Network
+Simulator (NSIM) plugin to test the VPP Hoststack goodput with 1 percent
+of the traffic being dropped at the output interface of VPP1 thereby
+simulating a lossy network. The NSIM tests are experimental and the
+test results are not currently representative of typical results in a
+lossy network.
diff --git a/docs/content/methodology/test/hoststack/udpip_with_iperf3.md b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md
new file mode 100644
index 0000000000..01ddf61269
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/udpip_with_iperf3.md
@@ -0,0 +1,44 @@
+---
+title: "UDP/IP with iperf3"
+weight: 3
+---
+
+# UDP/IP with iperf3
+
+[iperf3 goodput measurement tool](https://github.com/esnet/iperf)
+is used for measuring the maximum attainable goodput of the VPP Host
+Stack connection across two instances of VPP running on separate DUT
+nodes. iperf3 is a popular open source tool for active measurements
+of the maximum achievable goodput on IP networks.
+
+Because iperf3 utilizes the POSIX socket interface APIs, the current
+test configuration utilizes the LD_PRELOAD mechanism of the Linux
+dynamic linker to connect iperf3 to the VPP Host Stack using the VPP
+Communications Library (VCL) LD_PRELOAD library (libvcl_ldpreload.so).
+
+In the future, a forked version of iperf3 which has been modified to
+directly use the VCL application APIs may be added to determine the
+difference in performance of 'VCL Native' applications versus utilizing
+LD_PRELOAD, which inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+ DUT1 Network DUT2
+ [ iperf3-client -> VPP1 ]=======[ VPP2 -> iperf3-server]
+
+where,
+
+1. iperf3 server attaches to VPP2 and LISTENs on VPP2:UDP port 5201.
+2. iperf3 client attaches to VPP1 and transmits one or more streams
+ of packets to VPP2:UDP port 5201.
+3. iperf3 client transmits a uni-directional stream as fast as the
+ VPP Host Stack allows to the iperf3 server for the test duration.
+4. At the end of the test the iperf3 client emits the goodput
+ measurements for all streams and the sum of all streams.
+
+Test cases include 1 and 10 Streams with a 20 second test duration
+with the VPP Host Stack using the UDP transport layer.
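+
+For reference, a hedged sketch of how the iperf3 client might be invoked in
+UDP mode is shown below; the addresses are illustrative, `-b 0` removes the
+bitrate limit, and the VCL LD_PRELOAD wrapper described above is omitted for
+brevity.
+
+    $ iperf3 \
+      --client 2.2.2.2 --port 5201 --udp -b 0 \
+      --parallel <Y> --time 20.0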
+
+Note: iperf3 is single threaded, so it is expected that the 10 stream
+test shows little or no performance improvement due to
+multi-thread/multi-core execution.
diff --git a/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md
new file mode 100644
index 0000000000..2dc4d2b7f9
--- /dev/null
+++ b/docs/content/methodology/test/hoststack/vsap_ab_with_nginx.md
@@ -0,0 +1,39 @@
+---
+title: "VSAP ab with nginx"
+weight: 4
+---
+
+# VSAP ab with nginx
+
+[VSAP (VPP Stack Acceleration Project)](https://wiki.fd.io/view/VSAP)
+aims to establish an industry user space application ecosystem based on
+the VPP hoststack. As a prerequisite to adapting open source applications
+using the VPP Communications Library to accelerate performance, the VSAP team
+has introduced baseline tests utilizing the LD_PRELOAD mechanism to capture
+baseline performance data.
+
+[AB (Apache HTTP server benchmarking tool)](https://httpd.apache.org/docs/2.4/programs/ab.html)
+is used for measuring the maximum connections-per-second and requests-per-second.
+
+[NGINX](https://www.nginx.com) is a popular open source HTTP server
+application. Because NGINX utilizes the POSIX socket interface APIs, the test
+configuration uses the LD_PRELOAD mechanism to connect NGINX to the VPP
+Hoststack using the VPP Communications Library (VCL) LD_PRELOAD library
+(libvcl_ldpreload.so).
+
+In the future, a version of NGINX which has been modified to
+directly use the VCL application APIs will be added to determine the
+difference in performance of 'VCL Native' applications versus utilizing
+LD_PRELOAD, which inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+ TG Network DUT
+ [ AB ]=============[ VPP -> nginx ]
+
+where,
+
+1. nginx attaches to VPP and listens on TCP port 80.
+2. ab runs CPS and RPS tests with packets flowing from the Test Generator node,
+ across 100G NICs, through VPP hoststack to NGINX.
+3. At the end of the tests, the results are reported by AB.
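+
+A hedged sketch of the two ab modes is shown below; the request count,
+concurrency and requested resource are illustrative only, and using the
+keep-alive flag (`-k`) to distinguish RPS runs (reusing connections) from CPS
+runs (a new connection per request) is an assumption about the setup.
+
+    $ # connections-per-second (CPS)
+    $ ab -n 1000000 -c 100 http://192.168.10.2:80/0B.json
+    $ # requests-per-second (RPS)
+    $ ab -k -n 1000000 -c 100 http://192.168.10.2:80/0B.json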
diff --git a/docs/content/methodology/test/internet_protocol_security.md b/docs/content/methodology/test/internet_protocol_security.md
new file mode 100644
index 0000000000..1a02c43a0a
--- /dev/null
+++ b/docs/content/methodology/test/internet_protocol_security.md
@@ -0,0 +1,73 @@
+---
+title: "Internet Protocol Security"
+weight: 4
+---
+
+# Internet Protocol Security
+
+VPP Internet Protocol Security (IPsec) performance tests are executed for the
+following crypto plugins:
+
+- `crypto_native`, used for software-based crypto leveraging CPU
+  platform optimizations, e.g. the Intel AES-NI instruction set.
+- `crypto_ipsecmb`, used for hardware-based crypto with Intel QAT PCIe cards.
+
+## IPsec with VPP Native SW Crypto
+
+CSIT implements the following IPsec test cases relying on VPP native crypto
+(`crypto_native` plugin):
+
+ **VPP Crypto Engine** | **ESP Encryption** | **ESP Integrity** | **Scale Tested**
+----------------------:|-------------------:|------------------:|-----------------:
+ crypto_native | AES[128\|256]-GCM | GCM | 1 to 60k tunnels
+ crypto_native | AES128-CBC | SHA[256\|512] | 1 to 60k tunnels
+
+VPP IPsec tests with SW crypto are executed in both tunnel and policy modes,
+running on 3-node testbeds: 3n-icx, 3n-tsh.
+
+## IPsec with Intel QAT HW
+
+CSIT implements the following IPsec test cases relying on the ipsecmb library
+(`crypto_ipsecmb` plugin) and Intel QAT 8950 (50G HW crypto card):
+
+ **VPP Crypto Engine** | **VPP Crypto Workers** | **ESP Encryption** | **ESP Integrity** | **Scale Tested**
+----------------------:|-----------------------:|-------------------:|------------------:|-----------------:
+ crypto_ipsecmb | sync/all workers | AES[128\|256]-GCM | GCM | 1, 1k tunnels
+ crypto_ipsecmb | sync/all workers | AES[128]-CBC | SHA[256\|512] | 1, 1k tunnels
+ crypto_ipsecmb | async/crypto worker | AES[128\|256]-GCM | GCM | 1, 4, 1k tunnels
+ crypto_ipsecmb | async/crypto worker | AES[128]-CBC | SHA[256\|512] | 1, 4, 1k tunnels
+
+## IPsec with Async Crypto Feature Workers
+
+*TODO Description to be added*
+
+## IPsec Uni-Directional Tests with VPP Native SW Crypto
+
+CSIT implements the following IPsec uni-directional test cases relying on VPP
+native
+crypto (`crypto_native` plugin) in tunnel mode:
+
+ **VPP Crypto Engine** | **ESP Encryption** | **ESP Integrity** | **Scale Tested**
+----------------------:|-------------------:|------------------:|-------------------:
+ crypto_native | AES[128\|256]-GCM | GCM | 4, 1k, 10k tunnels
+ crypto_native | AES128-CBC | SHA[512] | 4, 1k, 10k tunnels
+
+In policy mode:
+
+ **VPP Crypto Engine** | **ESP Encryption** | **ESP Integrity** | **Scale Tested**
+----------------------:|-------------------:|------------------:|------------------:
+ crypto_native | AES[256]-GCM | GCM | 1, 40, 1k tunnels
+
+The tests run on 2-node testbeds (2n-tx2). The uni-directional tests
+partially address a weakness in 2-node testbed setups with T-Rex as
+the traffic generator: with just one DUT node, we can either encrypt or decrypt
+traffic in each direction.
+
+The test cases only do encryption: packets are encrypted on the DUT and
+then arrive at the TG, where no additional packet processing is needed (just
+counting packets).
+
+Decryption would require the traffic generator to generate encrypted packets
+which the DUT would then decrypt. However, T-Rex does not have the capability
+to encrypt packets.
diff --git a/docs/content/methodology/test/network_address_translation.md b/docs/content/methodology/test/network_address_translation.md
new file mode 100644
index 0000000000..f443eabc5f
--- /dev/null
+++ b/docs/content/methodology/test/network_address_translation.md
@@ -0,0 +1,445 @@
+---
+title: "Network Address Translation"
+weight: 1
+---
+
+# Network Address Translation
+
+## NAT44 Prefix Bindings
+
+NAT44 prefix bindings should be representative of target applications,
+where a number of private IPv4 addresses from the range defined by
+RFC1918 are mapped to a smaller set of public IPv4 addresses from the
+public range.
+
+The following quantities are used to describe inside-to-outside IP address
+and port binding scenarios:
+
+- Inside-addresses, number of inside source addresses
+ (representing inside hosts).
+- Ports-per-inside-address, number of TCP/UDP source
+ ports per inside source address.
+- Outside-addresses, number of outside (public) source addresses
+ allocated to NAT44.
+- Ports-per-outside-address, number of TCP/UDP source
+ ports per outside source address. The maximal number of
+ ports-per-outside-address usable for NAT is 64 512
+ (in non-reserved port range 1024-65535, RFC4787).
+- Sharing-ratio, equal to inside-addresses divided by outside-addresses.
+
+CSIT NAT44 tests are designed to take into account the maximum number of
+ports (sessions) required per inside host (inside-address) and at the
+same time to maximize the use of the outside-address range by using all
+available outside ports. With this in mind, the following scheme of
+NAT44 sharing ratios has been devised for use in CSIT:
+
+ **ports-per-inside-address** | **sharing-ratio**
+-----------------------------:|------------------:
+ 63 | 1024
+ 126 | 512
+ 252 | 256
+ 504 | 128
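+
+The sharing ratios above follow from fully using the 64 512 usable outside
+ports per outside address; a quick shell cross-check (illustrative only):
+
+    $ for ppi in 63 126 252 504; do \
+        echo "ports-per-inside-address ${ppi} -> sharing-ratio $(( 64512 / ppi ))"; done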
+
+Initial CSIT NAT44 tests, including associated TG/TRex traffic profiles,
+are based on ports-per-inside-address set to 63 and the sharing ratio of
+1024. This approach is currently used for all NAT44 tests including
+NAT44det (NAT44 deterministic used for Carrier Grade NAT applications)
+and NAT44ed (Endpoint Dependent).
+
+Private address ranges to be used in tests:
+
+- 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
+
+  - A total of 2^16 (65 536) usable IPv4 addresses.
+ - Used in tests for up to 65 536 inside addresses (inside hosts).
+
+- 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
+
+  - A total of 2^20 (1 048 576) usable IPv4 addresses.
+ - Used in tests for up to 1 048 576 inside addresses (inside hosts).
+
+### NAT44 Session Scale
+
+The NAT44 session scale tested is governed by the following logic:
+
+- Number of inside-addresses (hosts) H[i] = H[i-1] x 2^2, with H[0] = 1 024,
+  i = 1, 2, 3, ...
+
+ - H[i] = 1 024, 4 096, 16 384, 65 536, 262 144, ...
+
+- Number of sessions S[i] = H[i] * ports-per-inside-address
+
+ - ports-per-inside-address = 63
+
+ **i** | **hosts** | **sessions**
+------:|----------:|-------------:
+ 0 | 1 024 | 64 512
+ 1 | 4 096 | 258 048
+ 2 | 16 384 | 1 032 192
+ 3 | 65 536 | 4 128 768
+ 4 | 262 144 | 16 515 072
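+
+The table can be reproduced with a short shell loop, shown here only as a
+cross-check of the formulas above:
+
+    $ h=1024; for i in 0 1 2 3 4; do \
+        echo "i=${i} hosts=${h} sessions=$(( h * 63 ))"; h=$(( h * 4 )); done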
+
+### NAT44 Deterministic
+
+NAT44det performance tests are using TRex STL (Stateless) API and traffic
+profiles, similar to all other stateless packet forwarding tests like
+ip4, ip6 and l2, sending UDP packets in both directions
+inside-to-outside and outside-to-inside.
+
+The inside-to-outside traffic uses a single destination address (20.0.0.0)
+and port (1024).
+The inside-to-outside traffic covers the whole inside address and port range,
+the outside-to-inside traffic covers the whole outside address and port range.
+
+NAT44det translation entries are created during the ramp-up phase,
+followed by verification that all entries are present,
+before proceeding to the main measurements of the test.
+This ensures session setup does not impact the forwarding performance test.
+
+Associated CSIT test cases use the following naming scheme to indicate
+NAT44det scenario tested:
+
+- ethip4udp-nat44det-h{H}-p{P}-s{S}-[mrr|ndrpdr|soak]
+
+ - {H}, number of inside hosts, H = 1024, 4096, 16384, 65536, 262144.
+ - {P}, number of ports per inside host, P = 63.
+ - {S}, number of sessions, S = 64512, 258048, 1032192, 4128768,
+ 16515072.
+ - [mrr|ndrpdr|soak], MRR, NDRPDR or SOAK test.
+
+### NAT44 Endpoint-Dependent
+
+In order to exercise the NAT44ed ability to translate based on both
+source and destination address and port, the inside-to-outside traffic
+also varies the destination address and port. The destination port is the same
+as the source port, and the destination address has the same offset as the
+source address, but applied to a different subnet (starting with 20.0.0.0).
+
+As the mapping is not deterministic (for security reasons),
+we cannot easily use stateless bidirectional traffic profiles.
+Inside address and port range is fully covered,
+but we do not know which outside-to-inside source address and port to use
+to hit an open session.
+
+Therefore, NAT44ed is benchmarked using the following methodologies:
+
+- Unidirectional throughput using *stateless* traffic profile.
+- Connections-per-second (CPS) using *stateful* traffic profile.
+- Bidirectional throughput (TPUT, see below) using *stateful* traffic profile.
+
+Unidirectional NAT44ed throughput tests are using TRex STL (Stateless)
+APIs and traffic profiles, but with packets sent only in
+inside-to-outside direction.
+Similarly to NAT44det, NAT44ed unidirectional throughput tests include
+a ramp-up phase to establish and verify the presence of required NAT44ed
+binding entries. As the sessions have finite duration, the test code
+keeps inserting ramp-up trials during the search, if it detects a risk
+of sessions timing out. Any zero loss trial visits all sessions,
+so it acts also as a ramp-up.
+
+Stateful NAT44ed tests are using TRex ASTF (Advanced Stateful) APIs and
+traffic profiles, with packets sent in both directions. Tests are run
+with both UDP and TCP sessions.
+As NAT44ed CPS (connections-per-second) stateful tests
+measure (also) session opening performance,
+they use state reset instead of ramp-up trial.
+NAT44ed TPUT (bidirectional throughput) tests prepend ramp-up trials
+as in the unidirectional tests,
+so the test results describe performance without translation entry
+creation overhead.
+
+Associated CSIT test cases use the following naming scheme to indicate
+the NAT44ed case tested:
+
+- Stateless: ethip4udp-nat44ed-h{H}-p{P}-s{S}-udir-[mrr|ndrpdr|soak]
+
+ - {H}, number of inside hosts, H = 1024, 4096, 16384, 65536, 262144.
+ - {P}, number of ports per inside host, P = 63.
+ - {S}, number of sessions, S = 64512, 258048, 1032192, 4128768,
+ 16515072.
+ - udir-[mrr|ndrpdr|soak], unidirectional stateless tests MRR, NDRPDR
+ or SOAK.
+
+- Stateful: ethip4[udp|tcp]-nat44ed-h{H}-p{P}-s{S}-[cps|tput]-[mrr|ndrpdr|soak]
+
+ - [udp|tcp], UDP or TCP sessions
+ - {H}, number of inside hosts, H = 1024, 4096, 16384, 65536, 262144.
+ - {P}, number of ports per inside host, P = 63.
+ - {S}, number of sessions, S = 64512, 258048, 1032192, 4128768,
+ 16515072.
+  - [cps|tput], connections-per-second session establishment rate, or
+    packets-per-second average rate measured without session
+    establishment overhead.
+ - [mrr|ndrpdr|soak], bidirectional stateful tests MRR, NDRPDR, or SOAK.
+
+## Stateful Traffic Profiles
+
+There are several important details which distinguish ASTF profiles
+from stateless profiles.
+
+### General considerations
+
+#### Protocols
+
+ASTF profiles are limited to either UDP or TCP protocol.
+
+#### Programs
+
+Each template in the profile defines two "programs", one for the client side
+and one for the server side.
+
+Each program specifies when that side has to wait until enough data is received
+(counted in packets for UDP and in bytes for TCP)
+and when to send additional data. Together, the two programs
+define a single transaction. Due to packet loss, a transaction may take longer,
+use more packets (retransmissions), or never finish in its entirety.
+
+#### Instances
+
+A client instance is created according to the TPS (transactions per second)
+parameter for the trial, and sends the first packet of the transaction
+(in some cases more packets).
+Each client instance uses a different source address (see sequencing below)
+and some source port. The destination address also comes from a range,
+but the destination port has to be constant for a given program.
+
+TRex uses an opaque way to choose source ports, but as session counting shows,
+the next client with the same source address uses a different source port.
+
+A server instance is created when the first packet arrives at the server side.
+The source address and port of the first packet are used as the destination
+address and port for the server responses. This is the ability we need
+when the outside surface is not predictable.
+
+When a program reaches its end, the instance is deleted.
+This creates possible issues with server instances. If the server instance
+does not read all the data the client has sent, late data packets
+can cause a second copy of the server instance to be created,
+which breaks assumptions on how many packets a transaction should have.
+
+The need for server instances to read all the data reduces the overall
+bandwidth TRex is able to create in ASTF mode.
+
+Note that client instances are not created in response to received packets,
+so it is safe to end the client program without reading all server data
+(unless the definition of transaction success requires that).
+
+#### Sequencing
+
+ASTF profiles offer two modes for choosing source and destination IP addresses
+for client programs: sequential and pseudorandom.
+In the current tests we are using sequential addressing only (if the destination
+address varies at all).
+
+For client destination UDP/TCP port, we use a single constant value.
+(TRex can support multiple program pairs in the same traffic profile,
+distinguished by the port number.)
+
+#### Transaction overlap
+
+If a transaction takes longer to finish, compared to the period implied by TPS,
+TRex will have multiple client or server instances active at a time.
+
+During calibration testing we have found this increases CPU utilization,
+and for high TPS it can lead to TRex's Rx or Tx buffers becoming full.
+This generally leads to duration stretching, and/or packet loss on TRex.
+
+Currently used transactions were chosen to be short, so the risk of bad behavior
+is decreased. But in MRR tests, where the load is computed based on NIC ability,
+not TRex ability, anomalous behavior is still possible
+(e.g. MRR values being way lower than NDR).
+
+#### Delays
+
+TRex supports adding constant delays to ASTF programs.
+This can be useful, for example if we want to separate connection establishment
+from data transfer.
+
+But as TRex tracks delayed instances as active, this still results
+in higher CPU utilization and reduced performance
+(as with other overlapping transactions). So the current tests do not use any delays.
+
+#### Keepalives
+
+Both UDP and TCP protocol implementations in TRex programs support a keepalive
+duration. That means there is a configurable period of keepalive time,
+and TRex sends keepalive packets automatically (outside the program)
+for the time the program is active (started, not ended yet)
+but not sending any packets.
+
+For TCP this is generally not a big deal, as the other side usually
+retransmits faster. But for UDP it means a packet loss may leave
+the receiving program running.
+
+In order to avoid keepalive packets, the keepalive value is set to a high number.
+Here, "high number" means that even at maximum scale and minimum TPS,
+there are still no keepalive packets sent within the corresponding
+(computed) trial duration. This number is kept the same also for
+smaller scale traffic profiles, to simplify maintenance.
+
+#### Transaction success
+
+The transaction is considered successful at Layer-7 (L7) level
+when both program instances close. At this point, various L7 counters
+(unofficial name) are updated on TRex.
+
+We found that proper close and L7 counter update can be CPU intensive,
+whereas lower-level counters (ipackets, opackets) called L2 counters
+can keep up with higher loads.
+
+For some tests, we do not need to confirm the whole transaction was successful.
+CPS (connections per second) tests are a typical example.
+We care only for NAT44ed creating a session (needs one packet
+in inside-to-outside direction per session) and being able to use it
+(needs one packet in outside-to-inside direction).
+
+Similarly, in TPUT tests (packet throughput, counting both control
+and data packets), we care about the NAT44ed ability to forward packets;
+we do not care whether the applications (TRex) can fully process them at that rate.
+
+Therefore each type of test has its own formula (usually just one counter
+already provided by TRex) to count "successful enough" transactions
+and attempted transactions. Currently, all tests relying on L7 counters
+use size-limited profiles, so they know what the count of attempted
+transactions should be, but due to duration stretching
+TRex might have been unable to send that many packets.
+For search purposes, unattempted transactions are treated the same
+as attempted but failed transactions.
+
+Sometimes even the number of transactions as tracked by search algorithm
+does not match the transactions as defined by ASTF programs.
+See TCP TPUT profile below.
+
+### UDP CPS
+
+This profile uses a minimalistic transaction to verify that a NAT44ed session
+has been created and that it allows outside-to-inside traffic.
+
+Client instance sends one packet and ends.
+Server instance sends one packet upon creation and ends.
+
+In principle, packet size is configurable,
+but currently used tests apply only one value (100 bytes frame).
+
+Transaction counts as attempted when opackets counter increases on client side.
+Transaction counts as successful when ipackets counter increases on client side.
+
+### TCP CPS
+
+This profile uses a minimalistic transaction to verify that a NAT44ed session
+has been created and that it allows outside-to-inside traffic.
+
+Client initiates TCP connection. Client waits until connection is confirmed
+(by reading zero data bytes). Client ends.
+Server accepts the connection. Server waits for indirect confirmation
+from client (by waiting for client to initiate close). Server ends.
+
+Without packet loss, the whole transaction takes 7 packets to finish
+(4 and 3 per direction).
+From NAT44ed point of view, only the first two are needed to verify
+the session got created.
+
+Packet size is not configurable, but currently used tests report
+frame size as 64 bytes.
+
+Transaction counts as attempted when tcps_connattempt counter increases
+on client side.
+Transaction counts as successful when tcps_connects counter increases
+on client side.
+
+### UDP TPUT
+
+This profile uses a small transaction of "request-response" type,
+with several packets simulating data payload.
+
+Client sends 5 packets and closes immediately.
+Server reads all 5 packets (needed to avoid late packets creating new
+server instances), then sends 5 packets and closes.
+The value 5 was chosen to mirror what TCP TPUT (see below) chooses.
+
+Packet size is configurable, currently we have tests for 100,
+1518 and 9000 bytes frame (to match size of TCP TPUT data frames, see below).
+
+As this is a packet-oriented test, we do not track the whole
+10-packet transaction. Similarly to stateless tests, we treat each packet
+as a "transaction" for search algorithm packet loss ratio purposes.
+Therefore a "transaction" is attempted when the opackets counter on the client
+or server side is increased. A transaction is successful if the ipackets counter
+on the client or server side is increased.
+
+If one of the 5 client packets is lost, the server instance will get stuck
+in the reading phase. This probably decreases TRex performance,
+but it leads to more stable results than the alternatives.
+
+### TCP TPUT
+
+This profile uses a small transaction of "request-response" type,
+with some data amount to be transferred both ways.
+
+In CSIT release 22.06, TRex behavior changed, so we needed to edit
+the traffic profile. Let us describe the pre-22.06 profile first.
+
+Client connects, sends 5 data packets worth of data,
+receives 5 data packets worth of data and closes its side of the connection.
+Server accepts connection, reads 5 data packets worth of data,
+sends 5 data packets worth of data and closes its side of the connection.
+As usual in TCP, sending side waits for ACK from the receiving side
+before proceeding with next step of its program.
+
+The server read is needed to avoid a premature close and a second server instance.
+The client read is not strictly needed, but ACKs allow TRex to close
+the server instance quickly, thus saving CPU and improving performance.
+
+The number 5 of data packets was chosen so TRex is able to send them
+in a single burst, even with 9000 byte frame size (TRex has a hard limit
+on initial window size).
+That leads to 16 packets (9 of them in the c2s direction) being exchanged
+if no loss occurs.
+The size of data packets is controlled by the traffic profile setting
+the appropriate maximum segment size. Due to TRex restrictions,
+the minimal size for IPv4 data frame achievable by this method is 70 bytes,
+which is more than our usual minimum of 64 bytes.
+For that reason, the data frame sizes available for testing are 100 bytes
+(that allows room for eventually adding IPv6 ASTF tests),
+1518 bytes and 9000 bytes. There is no control over control packet sizes.
+
+Exactly as in UDP TPUT, ipackets and opackets counters are used for counting
+"transactions" (in fact packets).
+
+If packet loss occurs, there can be large transaction overlap, even if most
+ASTF programs finish eventually. This can lead to big duration stretching
+and a somewhat uneven rate of packets sent. This makes it hard to interpret
+MRR results (frequently MRR is below NDR for this reason),
+but NDR and PDR results tend to be stable enough.
+
+In 22.06, the "ACK from the receiving side" behavior changed,
+the receiving side started sending ACK sometimes
+also before receiving the full set of 5 data packets.
+If the previous profile is understood as a "single challenge, single response"
+where challenge (and also response) is sent as a burst of 5 data packets,
+the new profile uses "bursts" of 1 packet instead, but issues
+the challenge-response part 5 times sequentially
+(waiting for receiving the response before sending next challenge).
+This new profile happens to have the same overall packet count
+(when no re-transmissions are needed).
+Although it is possibly more taxing for TRex CPU,
+the results are comparable to the old traffic profile.
+
+## Ip4base Tests
+
+Contrary to stateless traffic profiles, we do not have a simple limit
+that would guarantee TRex is able to send traffic at the specified load.
+For that reason, we have added tests where "nat44ed" is replaced by "ip4base".
+Instead of NAT44ed processing, the tests set up minimalistic IPv4 routes,
+so that packets are forwarded in both inside-to-outside and outside-to-inside
+directions.
+
+The packets arrive at the server end of TRex with a different source address
+and port than in NAT44ed tests (no translation to outside values is done with
+ip4base), but those are not specified in the stateful traffic profiles.
+The server end (as always) uses the received address and port as the destination
+for outside-to-inside traffic. Therefore the same stateful traffic profile
+works for both NAT44ed and ip4base tests (of the same scale).
+
+The NAT44ed results are displayed together with the corresponding ip4base results.
+If they are similar, TRex is probably the bottleneck.
+If the NAT44ed result is visibly smaller, it describes the real VPP performance.
diff --git a/docs/content/methodology/test/packet_flow_ordering.md b/docs/content/methodology/test/packet_flow_ordering.md
new file mode 100644
index 0000000000..c2c87038d4
--- /dev/null
+++ b/docs/content/methodology/test/packet_flow_ordering.md
@@ -0,0 +1,42 @@
+---
+title: "Packet Flow Ordering"
+weight: 2
+---
+
+# Packet Flow Ordering
+
+The TRex Traffic Generator (TG) supports two main ways to cover
+the address space (within allowed ranges) in scale tests.
+
+In most cases only one field value (e.g. IPv4 destination address) is
+altered, in some cases two fields (e.g. IPv4 destination address and UDP
+destination port) are altered.
+
+## Incremental Ordering
+
+This case is simpler to implement and offers greater control.
+
+When changing two fields, they can be incremented synchronously, or one
+after another. In the latter case we can specify which one is
+incremented each iteration and which is incremented by "carrying over"
+only when the other "wraps around". This way also visits all
+combinations once before the "carry" field also wraps around.
+
+It is possible to use increments other than 1.
+
+## Randomized Ordering
+
+This case chooses each field value at random (from the allowed range).
+In the case of two fields, they are treated independently.
+TRex allows setting a random seed to get deterministic numbers.
+We use a different seed for each field and traffic direction.
+The seed has to be a non-zero number; we use 1, 2, 3, and so on.
+
+The seeded random mode in TRex requires a "limit" value,
+which acts as a cycle length limit (after this many iterations,
+the seed resets to its initial value).
+We use the maximal allowed limit value (computed as 2^24 - 1).
+
+Randomized profiles do not avoid duplicated values,
+and do not guarantee each possible value is visited,
+so this mode is not very useful for stateful tests.
diff --git a/docs/content/methodology/test/reconfiguration.md b/docs/content/methodology/test/reconfiguration.md
new file mode 100644
index 0000000000..6dec4d918b
--- /dev/null
+++ b/docs/content/methodology/test/reconfiguration.md
@@ -0,0 +1,68 @@
+---
+title: "Reconfiguration"
+weight: 8
+---
+
+# Reconfiguration
+
+## Overview
+
+Reconf tests are designed to measure the impact of VPP re-configuration
+on data plane traffic.
+While VPP takes some measures against the traffic being
+entirely stopped for a prolonged time,
+the immediate forwarding rate varies during the re-configuration,
+as some configurations steps need the active dataplane worker threads
+to be stopped temporarily.
+
+As the usual methods of measuring throughput need multiple trial measurements
+with somewhat long durations, and the re-configuration process can also be long,
+finding an offered load which would result in zero loss
+during the re-configuration process would be time-consuming.
+
+Instead, reconf tests first find a throughput value (lower bound for NDR)
+without re-configuration, and then maintain that offered load
+during re-configuration. The measured loss count is then assumed to be caused
+by the re-configuration process. The result published by reconf tests
+is the effective blocked time, that is
+the loss count divided by the offered load.
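+
+For example, with an offered load of 4 Mpps and 200 000 packets lost during
+re-configuration, the effective blocked time would be 0.05 s (the numbers are
+illustrative only):
+
+    $ echo "scale=6; 200000 / 4000000" | bc
+    .050000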
+
+## Current Implementation
+
+Each reconf suite is based on a similar MLRsearch performance suite.
+
+MLRsearch parameters are changed to speed up the throughput discovery.
+For example, PDR is not searched for, and the final trial duration is shorter.
+
+The MLRsearch suite has to contain a configuration parameter
+that can be scaled up, e.g. number of tunnels or number of service chains.
+Currently, only increasing the scale is supported
+as the re-configuration operation. In future, scale decrease
+or other operations can be implemented.
+
+The traffic profile is not changed, so the traffic present is processed
+only by the smaller scale configuration. The added tunnels / chains
+are not targeted by the traffic.
+
+For the re-configuration, the same Robot Framework and Python libraries
+are used, as were used in the initial configuration, with the exception
+of the final calls that do not interact with VPP (e.g. starting
+virtual machines) being skipped to reduce the test overall duration.
+
+## Discussion
+
+Robot Framework introduces a certain overhead, which may affect timing
+of individual VPP API calls, which in turn may affect
+the number of packets lost.
+
+The exact calls executed may contain unnecessary info dumps, repeated commands,
+or commands which change a value that does not need to be changed (e.g. MTU).
+Thus, implementation details are affecting the results, even if their effect
+on the corresponding MLRsearch suite is negligible.
+
+The lower bound for NDR is the only value that is safe to use when zero packet
+loss is expected without re-configuration. But different suites show different
+"jitter" in that value. For some suites, the lower bound is not tight,
+allowing full NIC buffers to drain quickly between worker pauses.
+For other suites, the lower bound for NDR still has quite a large probability
+of non-zero packet loss even without re-configuration.
diff --git a/docs/content/methodology/test/tunnel_encapsulations.md b/docs/content/methodology/test/tunnel_encapsulations.md
new file mode 100644
index 0000000000..c047c43dfa
--- /dev/null
+++ b/docs/content/methodology/test/tunnel_encapsulations.md
@@ -0,0 +1,87 @@
+---
+title: "Tunnel Encapsulations"
+weight: 3
+---
+
+# Tunnel Encapsulations
+
+Tunnel encapsulations testing is grouped based on the type of outer
+header: IPv4 or IPv6.
+
+## IPv4 Tunnels
+
+VPP is tested in the following IPv4 tunnel baseline configurations:
+
+- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain MAC
+ switching.
+- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect.
+- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
+- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.
+- *ip4gtpusw-ip4base*: GTPU over IPv4 tunnels with IPv4 routing.
+
+In all cases listed above, a low number of MAC, IPv4, IPv6 flows (253 or 254 per
+direction) is switched or routed by VPP.
+
+In addition, selected IPv4 tunnels are tested at scale:
+
+- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2 bridge-
+ domain MAC switching, with scaled up dot1q VLANs (10, 100, 1k),
+ mapped to scaled up L2 bridge-domains (10, 100, 1k), that are in turn
+ mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are transmitted per
+ direction.
+
+## IPv6 Tunnels
+
+VPP is tested in the following IPv6 tunnel baseline configurations:
+
+- *ip6lispip4-ip4base*: LISP over IPv6 tunnels with IPv4 routing.
+- *ip6lispip6-ip6base*: LISP over IPv6 tunnels with IPv6 routing.
+
+In all cases listed above, a low number of IPv4, IPv6 flows (253 or 254 per
+direction) is routed by VPP.
+
+## GENEVE
+
+### GENEVE Prefix Bindings
+
+GENEVE prefix bindings should be representative of target applications, where
+packet flows from a particular set of IPv4 addresses (the L3 underlay network)
+are routed via a dedicated GENEVE interface by building an L2 overlay.
+
+Private address ranges to be used in tests:
+
+- East hosts IP address range: 10.0.1.0 - 10.127.255.255 (10.0/9 prefix)
+  - A total of 2^23 - 256 (8 388 352) usable IPv4 addresses
+  - Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks)
+- West hosts IP address range: 10.128.1.0 - 10.255.255.255 (10.128/9 prefix)
+  - A total of 2^23 - 256 (8 388 352) usable IPv4 addresses
+ - Usable in tests for up to 32 767 GENEVE tunnels (IPv4 underlay networks)
+
+### GENEVE Tunnel Scale
+
+If N is the number of GENEVE tunnels (and IPv4 underlay networks), then the TG
+sends 256 packet flows in each of N different sets:
+
+- i = 1,2,3, ... N - GENEVE tunnel index
+- East-West direction: GENEVE encapsulated packets
+ - Outer IP header:
+ - src ip: 1.1.1.1
+ - dst ip: 1.1.1.2
+ - GENEVE header:
+ - vni: i
+ - Inner IP header:
+    - src_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i, 255)).(0-to-255)
+    - dst_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i, 255)).(0-to-255)
+- West-East direction: non-encapsulated packets
+ - IP header:
+    - src_ip_range(i) = 10.(128 + rounddown(i/255)).(modulo(i, 255)).(0-to-255)
+    - dst_ip_range(i) = 10.(0 + rounddown(i/255)).(modulo(i, 255)).(0-to-255)
+
+ **geneve-tunnels** | **total-flows**
+-------------------:|----------------:
+ 1 | 256
+ 4 | 1 024
+ 16 | 4 096
+ 64 | 16 384
+ 256 | 65 536
+ 1 024 | 262 144
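+
+The address formulas above can be expanded for a particular tunnel index with
+simple shell arithmetic; the index value below is illustrative only.
+
+    $ i=300
+    $ echo "inner src subnet: 10.$(( i / 255 )).$(( i % 255 )).0/24"
+    inner src subnet: 10.1.45.0/24
+    $ echo "inner dst subnet: 10.$(( 128 + i / 255 )).$(( i % 255 )).0/24"
+    inner dst subnet: 10.129.45.0/24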
diff --git a/docs/content/methodology/test/vpp_device.md b/docs/content/methodology/test/vpp_device.md
new file mode 100644
index 0000000000..0a5ee90308
--- /dev/null
+++ b/docs/content/methodology/test/vpp_device.md
@@ -0,0 +1,15 @@
+---
+title: "VPP Device"
+weight: 9
+---
+
+# VPP Device
+
+Includes the VPP_Device test environment for functional VPP
+device tests integrated into the LFN CI/CD infrastructure. VPP_Device tests
+run on 1-Node testbeds (1n-skx, 1n-arm) and rely on Linux SRIOV Virtual
+Function (VF), dot1q VLAN tagging and external loopback cables to
+facilitate packet passing over external physical links. The initial focus is
+on a few baseline tests. New device tests can be added by small edits
+to an existing CSIT Performance (2-node) test. The RF (Robot Framework) test
+definition code stays unchanged, with the exception of traffic generator
+related L2 keywords (KWs).