author:    Maciek Konstantynowicz <mkonstan@cisco.com> 2019-05-02 19:20:38 +0100
committer: Maciek Konstantynowicz <mkonstan@cisco.com> 2019-05-03 16:59:03 +0000
commit:    a8c3f441fa595311f60bcc634a2720f204ced733 (patch)
tree:      90b7a81ec7e24c9a3d0f404798434b8999ef03e6 /docs/report/introduction
parent:    11173c99a4416de8970fe67cef4fff2d9119ef14 (diff)
report 1904: updated methodology (nfv density, startup.conf) and vpp perf rls notes
Change-Id: If6d24511d5d29dcc25c21437364e0da15af3cca1
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
Diffstat (limited to 'docs/report/introduction')
-rw-r--r-- docs/report/introduction/methodology_nfv_service_density.rst  | 13
-rw-r--r-- docs/report/introduction/methodology_vpp_startup_settings.rst | 31
2 files changed, 22 insertions, 22 deletions
diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst
index 23bdac9d7c..b09c1be629 100644
--- a/docs/report/introduction/methodology_nfv_service_density.rst
+++ b/docs/report/introduction/methodology_nfv_service_density.rst
@@ -19,15 +19,13 @@ different service density setups by varying two parameters:
- Number of service instances (e.g. 1,2,4..10).
- Number of NFs per service instance (e.g. 1,2,4..10).
-The initial implementation of NFV service density tests in
-|csit-release| is using two NF applications:
+Implementation of NFV service density tests in |csit-release| uses two NF
+applications:
- VNF: VPP of the same version as vswitch running in KVM VM, configured with /8
IPv4 prefix routing.
- CNF: VPP of the same version as vswitch running in Docker Container,
- configured with /8 IPv4 prefix routing. VPP got chosen as a fast IPv4 NF
- application that supports required memif interface (L3fwd does not). This is
- similar to all other Container tests in CSIT that use VPP.
+ configured with /8 IPv4 prefix routing.
Tests are designed such that in all tested cases VPP vswitch is the most
stressed application, as for each flow vswitch is processing each packet
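
As an aside from the diff, a minimal sketch (not CSIT code) of the density
matrix spanned by the two parameters above. Only the explicitly listed
values 1, 2, 4 are used; the "..10" continuation in the text is not
spelled out, so it is not guessed here::

    from itertools import product

    # Sketch: service density combinations from the two parameters above.
    service_instances = [1, 2, 4]   # "Number of service instances"
    nfs_per_instance = [1, 2, 4]    # "Number of NFs per service instance"

    for inst, nfs in product(service_instances, nfs_per_instance):
        print(f"{inst} instance(s) x {nfs} NF(s) = {inst * nfs} NFs total")
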
@@ -103,9 +101,8 @@ physical core mapping ratios:
- (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread
per NF, core shared between two NFs.
- - (data:core) = (2:1) => 1dt1c - 1 Data-plane Threads on 1 Core per
- NF.
-
+ - (data:core) = (2:1) => 2dt1c - 2 Data-plane Threads on 1 Core, 1
+ Thread per NF, core shared between two NFs.
Maximum tested service densities are limited by the number of physical
cores per NUMA. |csit-release| allocates cores within NUMA0. Support for
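
As a reading aid, a minimal sketch (not CSIT code) of how the (main:core)
and (data:core) packing ratios above translate into physical core counts.
The one-main-thread-plus-one-data-plane-thread-per-NF assumption and the
helper name are illustrative only::

    import math

    # Sketch: physical cores consumed by nf_count NFs under the (2:1)
    # packing ratios above. Assumes each NF runs one main thread and one
    # data-plane thread; a (2:1) ratio packs two threads onto one core.
    def nf_cores(nf_count, main_per_core=2, data_per_core=2):
        main_cores = math.ceil(nf_count / main_per_core)  # 2mt1c packing
        data_cores = math.ceil(nf_count / data_per_core)  # 2dt1c packing
        return main_cores + data_cores

    # Example: 4 service instances x 4 NFs each = 16 NFs -> 8 + 8 cores.
    print(nf_cores(16))  # 16
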
diff --git a/docs/report/introduction/methodology_vpp_startup_settings.rst b/docs/report/introduction/methodology_vpp_startup_settings.rst
index 81c5e4e73e..e3e8d29b23 100644
--- a/docs/report/introduction/methodology_vpp_startup_settings.rst
+++ b/docs/report/introduction/methodology_vpp_startup_settings.rst
@@ -1,29 +1,32 @@
VPP Startup Settings
--------------------
-CSIT code manipulates a number of VPP settings in startup.conf for optimized
-performance. List of common settings applied to all tests and test
-dependent settings follows.
+CSIT code manipulates a number of VPP settings in startup.conf for
+optimized performance. A list of common settings applied to all tests
+and test-dependent settings follows.
-See `VPP startup.conf`_
-for a complete set and description of listed settings.
+See `VPP startup.conf`_ for a complete set and description of listed
+settings.
Common Settings
~~~~~~~~~~~~~~~
-List of vpp startup.conf settings applied to all tests:
+List of VPP startup.conf settings applied to all tests:
#. heap-size <value> - set separately for ip4, ip6, stats, main
depending on scale tested.
-#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in DPDK.
- Typically needed for use faster vector PMDs (together with
+#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in
+ DPDK. Typically needed when using faster vector PMDs (together with
no-multi-seg).
-#. buffers-per-numa <value> - increases number of buffers allocated, needed
- in scenarios with large number of interfaces and worker threads.
- Value is per CPU socket. Default is 16384. CSIT is setting statically
- 107520 buffers per CPU thread (215040 if HTT is enabled). This value is also
- maximum possible amount limited by number of memory mappings in DPDK
- libraries for 2MB Hugepages used in CSIT.
+#. buffers-per-numa <value> - sets the number of memory buffers allocated
+ to VPP per CPU socket. VPP default is 16384. Needs to be increased for
+ scenarios with a large number of interfaces and worker threads. To
+ accommodate scale tests, CSIT sets it to the maximum possible value
+ corresponding to the limit of DPDK memory mappings (currently 256).
+ For Xeon Skylake platforms configured with 2MB hugepages and VPP
+ data-size and buffer-size defaults (2048B and 2496B respectively),
+ this results in a value of 215040 (840 buffers of 2496B fit in a 2MB
+ hugepage; 256 * 840 = 215040). For Xeon Haswell nodes, a value of
+ 107520 is used.
Per Test Settings
~~~~~~~~~~~~~~~~~
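
For reference, a minimal sketch (not CSIT code) of the buffers-per-numa
arithmetic described in the hunk above, assuming the 2MB hugepage size,
2496B default buffer size and 256-mapping DPDK limit quoted there::

    # Sketch of the buffers-per-numa arithmetic quoted above; the
    # constants are the values from the text, not read from any CSIT
    # or VPP source.
    HUGEPAGE_BYTES = 2 * 1024 * 1024   # 2MB hugepages used in CSIT
    BUFFER_BYTES = 2496                # VPP buffer-size default
    DPDK_MEMORY_MAPPINGS = 256         # DPDK memory mapping limit

    buffers_per_hugepage = HUGEPAGE_BYTES // BUFFER_BYTES          # 840
    buffers_per_numa = DPDK_MEMORY_MAPPINGS * buffers_per_hugepage

    print(buffers_per_numa)  # 215040, the Xeon Skylake value above
    # A hypothetical startup.conf rendering of the result:
    # buffers { buffers-per-numa 215040 }
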