author    Peter Mikus <pmikus@cisco.com>    2019-04-29 09:34:59 +0000
committer Peter Mikus <pmikus@cisco.com>    2019-04-30 05:35:52 +0000
commit    a611ee796ea6b224018953c276e67900b212dc44 (patch)
tree      96980e5e79fa25ab0e10b71d364649ed03151977 /docs/report/introduction
parent    ad4f18fdb7b46c3c2b95813c1fdbe428295ef61b (diff)
Update report static content
Change-Id: I8aef71fb2cc09d2c7862884f7ca68c9786ad5165
Signed-off-by: Peter Mikus <pmikus@cisco.com>
Diffstat (limited to 'docs/report/introduction')
-rw-r--r--  docs/report/introduction/methodology_kvm_vms_vhost_user.rst      5
-rw-r--r--  docs/report/introduction/methodology_nfv_service_density.rst    26
-rw-r--r--  docs/report/introduction/methodology_vpp_startup_settings.rst   13
3 files changed, 23 insertions, 21 deletions
diff --git a/docs/report/introduction/methodology_kvm_vms_vhost_user.rst b/docs/report/introduction/methodology_kvm_vms_vhost_user.rst
index 0a465cf0fb..299f708827 100644
--- a/docs/report/introduction/methodology_kvm_vms_vhost_user.rst
+++ b/docs/report/introduction/methodology_kvm_vms_vhost_user.rst
@@ -3,10 +3,7 @@ KVM VMs vhost-user
QEMU is used as the VPP-VM testing environment. By default, the standard QEMU version
preinstalled from OS repositories is used for VIRL/vpp_device functional testing
-(qemu-2.11.x for Ubuntu 18.04, qemu-2.5.0 for Ubuntu 16.04). For perfomance
-testing QEMU is downloaded from `project homepage <qemu.org>`_ and compiled
-during testing. This allows framework to easily inject QEMU patches in case of
-need. In QEMU version <2.8 we used it for increasing QEMU virtion queue size.
+(qemu-2.11.x for Ubuntu 18.04, qemu-2.5.0 for Ubuntu 16.04).
In the CSIT setup, DUTs have a small VM image `/var/lib/vm/vhost-nested.img`. The
QEMU binary can be adjusted in global settings. The VM image must have installed
at least qemu-guest-agent, sshd, bridge-utils, VirtIO support and Testpmd/L3fwd
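
Attaching a VM to a VPP vhost-user interface requires QEMU to back the guest
memory with a shared hugepage file and to point a vhost-user netdev at the
socket exposed by the vswitch. The following is only a minimal illustrative
sketch; the socket path, memory size, queue count and SMP layout are
placeholders, not the values generated by CSIT:

    qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 2048 \
      -drive file=/var/lib/vm/vhost-nested.img \
      -object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=chr0,path=/run/vpp/sock-1-1 \
      -netdev type=vhost-user,id=net0,chardev=chr0,queues=1 \
      -device virtio-net-pci,netdev=net0
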
diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst
index 51e56e294d..23bdac9d7c 100644
--- a/docs/report/introduction/methodology_nfv_service_density.rst
+++ b/docs/report/introduction/methodology_nfv_service_density.rst
@@ -22,14 +22,12 @@ different service density setups by varying two parameters:
The initial implementation of NFV service density tests in
|csit-release| uses two NF applications:
-- VNF: DPDK L3fwd running in KVM VM, configured with /8 IPv4 prefix
- routing. L3fwd got chosen as a lightweight fast IPv4 VNF application,
- and follows CSIT approach of using DPDK sample applications in VMs for
- performance testing.
-- CNF: VPP running in Docker Container, configured with /24 IPv4 prefix
- routing. VPP got chosen as a fast IPv4 NF application that supports
- required memif interface (L3fwd does not). This is similar to all
- other Container tests in CSIT that use VPP.
+- VNF: VPP of the same version as the vswitch, running in a KVM VM, configured
+  with /8 IPv4 prefix routing.
+- CNF: VPP of the same version as the vswitch, running in a Docker Container,
+  configured with /8 IPv4 prefix routing. VPP was chosen as a fast IPv4 NF
+  application that supports the required memif interface (L3fwd does not).
+  This is similar to all other Container tests in CSIT that use VPP.
Tests are designed such that in all tested cases the VPP vswitch is the most
stressed application, as for each flow the vswitch is processing each packet
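
To make the CNF bullet above concrete: each CNF is a VPP instance terminating
memif interfaces towards the vswitch and holding /8 IPv4 routes. A rough VPP
CLI sketch of that per-CNF configuration follows; the socket filename,
interface ids, roles and IP addresses are placeholders, not the values
generated by CSIT:

    create memif socket id 1 filename /run/vpp/memif-cnf1.sock
    create interface memif id 1 socket-id 1 slave rx-queues 1 tx-queues 1
    set interface state memif1/1 up
    set interface ip address memif1/1 10.10.10.2/24
    ip route add 10.0.0.0/8 via 10.10.10.1 memif1/1
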
@@ -84,22 +82,30 @@ physical core mapping ratios:
- Data-plane on single core
+ - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core.
- (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core.
- - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core.
- Data-plane on two cores
- - (data:core) = (1:2) => 4dt2c - 4 Data-plane Threads on 2 Cores.
- (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core.
+ - (data:core) = (1:2) => 4dt2c - 4 Data-plane Threads on 2 Cores.
- VNF and CNF
- Data-plane on single core
+ - (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread
+ per NF, core shared between two NFs.
- (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core per
NF.
+
+ - Data-plane on single logical core (two NFs per physical core)
+
- (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread
per NF, core shared between two NFs.
+ - (data:core) = (2:1) => 1dt1c - 1 Data-plane Thread on 1 Core per
+ NF.
+
Maximum tested service densities are limited by the number of physical
cores per NUMA. |csit-release| allocates cores within NUMA0. Support for
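
As one illustrative data point derived only from the ratios listed above
(ignoring vswitch cores and NUMA placement): with NF data-plane on a single
core, a 2x2 service matrix (4 NFs) consumes 4 physical cores for NF data-plane
threads plus 2 cores shared pairwise by NF main threads; the "two NFs per
physical core" variant halves the data-plane requirement to 2 cores.
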
diff --git a/docs/report/introduction/methodology_vpp_startup_settings.rst b/docs/report/introduction/methodology_vpp_startup_settings.rst
index 2b0f485cc0..81c5e4e73e 100644
--- a/docs/report/introduction/methodology_vpp_startup_settings.rst
+++ b/docs/report/introduction/methodology_vpp_startup_settings.rst
@@ -18,8 +18,12 @@ List of vpp startup.conf settings applied to all tests:
#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in DPDK.
Typically needed to use faster vector PMDs (together with
no-multi-seg).
-#. socket-mem <value>,<value> - memory per numa. (Not required anymore
- due to VPP code changes, will be removed in CSIT-19.04.)
+#. buffers-per-numa <value> - increases the number of buffers allocated, needed
+   in scenarios with a large number of interfaces and worker threads.
+   Value is per CPU socket. Default is 16384. CSIT statically sets
+   107520 buffers per CPU thread (215040 if HTT is enabled). This value is also
+   the maximum possible amount, limited by the number of memory mappings in
+   DPDK libraries for the 2MB Hugepages used in CSIT.
Per Test Settings
~~~~~~~~~~~~~~~~~
@@ -31,11 +35,6 @@ List of vpp startup.conf settings applied dynamically per test:
test configuration.
#. num-rx-queues <value> - depends on the number of VPP threads and NIC
interfaces.
-#. num-rx-desc/num-tx-desc - number of rx/tx descriptors for specific
- NICs, incl. xl710, x710, xxv710.
-#. num-mbufs <value> - increases number of buffers allocated, needed
- only in scenarios with large number of interfaces and worker threads.
- Value is per CPU socket. Default is 16384.
#. no-multi-seg - disables multi-segment buffers in DPDK, improves
packet throughput, but disables Jumbo MTU support. Multi-segment buffers are
disabled for all tests apart from the ones that require Jumbo 9000B frame support.
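
Taken together, the settings listed in this file correspond to stanzas of the
VPP startup.conf. The snippet below is only an indicative sketch: the core
lists, queue count and PCI addresses are placeholders chosen for illustration,
while buffers-per-numa uses the static CSIT value quoted above.

    unix {
      nodaemon
    }
    cpu {
      main-core 1
      corelist-workers 2-3
    }
    buffers {
      buffers-per-numa 107520
    }
    dpdk {
      no-tx-checksum-offload
      no-multi-seg
      dev default {
        num-rx-queues 2
      }
      dev 0000:18:00.0
      dev 0000:18:00.1
    }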