Diffstat (limited to 'docs/report/introduction')
-rw-r--r--   docs/report/introduction/methodology_nfv_service_density.rst  | 13
-rw-r--r--   docs/report/introduction/methodology_vpp_startup_settings.rst | 31
2 files changed, 22 insertions, 22 deletions
diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst
index 23bdac9d7c..b09c1be629 100644
--- a/docs/report/introduction/methodology_nfv_service_density.rst
+++ b/docs/report/introduction/methodology_nfv_service_density.rst
@@ -19,15 +19,13 @@ different service density setups by varying two parameters:
 - Number of service instances (e.g. 1,2,4..10).
 - Number of NFs per service instance (e.g. 1,2,4..10).
 
-The initial implementation of NFV service density tests in
-|csit-release| is using two NF applications:
+Implementation of NFV service density tests in |csit-release| is using two NF
+applications:
 
 - VNF: VPP of the same version as vswitch running in KVM VM, configured with
   /8 IPv4 prefix routing.
 - CNF: VPP of the same version as vswitch running in Docker Container,
-  configured with /8 IPv4 prefix routing. VPP got chosen as a fast IPv4 NF
-  application that supports required memif interface (L3fwd does not). This is
-  similar to all other Container tests in CSIT that use VPP.
+  configured with /8 IPv4 prefix routing.
 
 Tests are designed such that in all tested cases VPP vswitch is the most
 stressed application, as for each flow vswitch is processing each packet
@@ -103,9 +101,8 @@ physical core mapping ratios:
 
   - (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread
     per NF, core shared between two NFs.
-  - (data:core) = (2:1) => 1dt1c - 1 Data-plane Threads on 1 Core per
-    NF.
-
+  - (data:core) = (2:1) => 2dt1c - 2 Data-plane Threads on 1 Core, 1
+    Thread per NF, core shared between two NFs.
 
 Maximum tested service densities are limited by a number of physical
 cores per NUMA. |csit-release| allocates cores within NUMA0. Support for
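As an illustration of the (main:core) and (data:core) mapping ratios touched by the
hunk above, the following minimal Python sketch (not CSIT code; function and parameter
names are invented for illustration) estimates how many physical cores the NF threads
consume within a NUMA node, assuming 1 main thread and 1 data-plane thread per NF and
leaving vswitch cores aside::

    from math import ceil

    def nf_cores_needed(service_instances, nfs_per_instance,
                        main_threads_per_core=2, data_threads_per_core=2):
        # Assumes 1 main thread and 1 data-plane thread per NF, matching the
        # 2mt1c / 2dt1c mappings above; names and defaults are illustrative.
        nfs = service_instances * nfs_per_instance
        main_cores = ceil(nfs / main_threads_per_core)  # 2mt1c: 2 NFs share one core
        data_cores = ceil(nfs / data_threads_per_core)  # 2dt1c: 2 NFs share one core
        return main_cores + data_cores

    # Example: 4 service instances with 2 NFs each need 4 + 4 = 8 physical
    # cores for NF threads alone (vswitch cores are allocated separately).
    print(nf_cores_needed(4, 2))  # 8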
diff --git a/docs/report/introduction/methodology_vpp_startup_settings.rst b/docs/report/introduction/methodology_vpp_startup_settings.rst
index 81c5e4e73e..e3e8d29b23 100644
--- a/docs/report/introduction/methodology_vpp_startup_settings.rst
+++ b/docs/report/introduction/methodology_vpp_startup_settings.rst
@@ -1,29 +1,32 @@
 VPP Startup Settings
 --------------------
 
-CSIT code manipulates a number of VPP settings in startup.conf for optimized
-performance. List of common settings applied to all tests and test
-dependent settings follows.
+CSIT code manipulates a number of VPP settings in startup.conf for
+optimized performance. List of common settings applied to all tests and
+test dependent settings follows.
 
-See `VPP startup.conf`_
-for a complete set and description of listed settings.
+See `VPP startup.conf`_ for a complete set and description of listed
+settings.
 
 Common Settings
 ~~~~~~~~~~~~~~~
 
-List of vpp startup.conf settings applied to all tests:
+List of VPP startup.conf settings applied to all tests:
 
 #. heap-size <value> - set separately for ip4, ip6, stats, main
    depending on scale tested.
-#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in DPDK.
-   Typically needed for use faster vector PMDs (together with
+#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in
+   DPDK. Typically needed for use faster vector PMDs (together with
    no-multi-seg).
-#. buffers-per-numa <value> - increases number of buffers allocated, needed
-   in scenarios with large number of interfaces and worker threads.
-   Value is per CPU socket. Default is 16384. CSIT is setting statically
-   107520 buffers per CPU thread (215040 if HTT is enabled). This value is also
-   maximum possible amount limited by number of memory mappings in DPDK
-   libraries for 2MB Hugepages used in CSIT.
+#. buffers-per-numa <value> - sets a number of memory buffers allocated
+   to VPP per CPU socket. VPP default is 16384. Needs to be increased for
+   scenarios with large number of interfaces and worker threads. To
+   accommodate for scale tests, CSIT is setting it to the maximum possible
+   value corresponding to the limit of DPDK memory mappings (currently
+   256). For Xeon Skylake platforms configured with 2MB hugepages and VPP
+   data-size and buffer-size defaults (2048B and 2496B respectively), this
+   results in value of 215040 (256 * 840 = 215040, 840 * 2496B buffers fit
+   in 2MB hugepage). For Xeon Haswell nodes value of 107520 is used.
 
 Per Test Settings
 ~~~~~~~~~~~~~~~~~
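The 215040 buffers-per-numa value introduced by the hunk above can be reproduced with
the short arithmetic sketch below (Python; constant names are illustrative and simply
restate the 2MB hugepage size, 2496B buffer size and 256 DPDK memory-mapping limit
quoted in the new text)::

    HUGEPAGE_SIZE = 2 * 1024 * 1024   # 2MB hugepages used in CSIT
    BUFFER_SIZE = 2496                # VPP buffer-size default (data-size 2048B plus overhead)
    DPDK_MEM_MAPPINGS = 256           # current DPDK memory mapping limit

    buffers_per_hugepage = HUGEPAGE_SIZE // BUFFER_SIZE          # 840 buffers fit in one 2MB hugepage
    buffers_per_numa = DPDK_MEM_MAPPINGS * buffers_per_hugepage  # 256 * 840 = 215040

    print(buffers_per_numa)  # 215040, the value used on Xeon Skylake testbeds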