| author    | Maciek Konstantynowicz <mkonstan@cisco.com>           | 2018-08-18 09:17:16 +0100 |
|-----------|-------------------------------------------------------|---------------------------|
| committer | Maciek Konstantynowicz <mkonstan@cisco.com>           | 2018-08-18 09:17:16 +0100 |
| commit    | 5a1f4570778a7511415d94e58cc2299d02b871cd (patch)      |                           |
| tree      | c63fb41b077cb5d4f5f0c53fd3645070339d3c4a /docs/report |                           |
| parent    | ec497a5a8a47544147f130187f178de40b80e5c0 (diff)       |                           |
report 18.07: final final editorial nit picking in v0.9.
Formatting and removing excessive white space in static content.
Change-Id: I7400f4ba6386b85b59b667db026558685ec0d1a1
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
Diffstat (limited to 'docs/report')
-rw-r--r-- | docs/report/dpdk_performance_tests/overview.rst            |  7 |
-rw-r--r-- | docs/report/dpdk_performance_tests/throughput_trending.rst | 12 |
-rw-r--r-- | docs/report/introduction/introduction.rst                  |  5 |
-rw-r--r-- | docs/report/introduction/methodology.rst                   |  5 |
-rw-r--r-- | docs/report/introduction/physical_testbeds.rst             |  5 |
-rw-r--r-- | docs/report/introduction/test_environment_intro.rst        | 11 |
-rw-r--r-- | docs/report/introduction/test_scenarios_overview.rst       | 13 |
-rw-r--r-- | docs/report/nsh_sfc_functional_tests/overview.rst          | 12 |
-rw-r--r-- | docs/report/vpp_functional_tests/overview.rst              |  7 |
-rw-r--r-- | docs/report/vpp_performance_tests/overview.rst             |  3 |
-rw-r--r-- | docs/report/vpp_performance_tests/throughput_trending.rst  | 12 |
11 files changed, 43 insertions, 49 deletions
diff --git a/docs/report/dpdk_performance_tests/overview.rst b/docs/report/dpdk_performance_tests/overview.rst
index 5f3d6b0a2b..212e202c13 100644
--- a/docs/report/dpdk_performance_tests/overview.rst
+++ b/docs/report/dpdk_performance_tests/overview.rst
@@ -1,8 +1,11 @@
 Overview
 ========
 
-For description of physical testbeds used for DPDK performance tests
-please refer to :ref:`tested_physical_topologies`.
+DPDK performance test results are reported for all three physical
+testbed types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw),
+3-Node Xeon Skylake (3n-skx), 2-Node Xeon Skylake (2n-skx) and installed
+NIC models. For description of physical testbeds used for DPDK
+performance tests please refer to :ref:`tested_physical_topologies`.
 
 Logical Topologies
 ------------------
diff --git a/docs/report/dpdk_performance_tests/throughput_trending.rst b/docs/report/dpdk_performance_tests/throughput_trending.rst
index 961d3ea7a1..81d4d6db60 100644
--- a/docs/report/dpdk_performance_tests/throughput_trending.rst
+++ b/docs/report/dpdk_performance_tests/throughput_trending.rst
@@ -4,14 +4,14 @@ Throughput Trending
 In addition to reporting throughput comparison between DPDK releases,
 CSIT provides regular performance trending for DPDK release branches:
 
-#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
-   - per DPDK test case throughput trend, trend compliance and summary of
+#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_:
+   per DPDK test case throughput trend, trend compliance and summary of
    detected anomalies.
-#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
-   - throughput test metrics, trend calculations and anomaly
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_:
+   throughput test metrics, trend calculations and anomaly
    classification (progression, regression).
-#. `DPDK Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/dpdk.html>`_
-   - weekly DPDK Testpmd and L3fwd MRR throughput measurements against
+#. `DPDK Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/dpdk.html>`_:
+   weekly DPDK Testpmd and L3fwd MRR throughput measurements against
    the trendline with anomaly highlights and associated CSIT test jobs.
\ No newline at end of file
diff --git a/docs/report/introduction/introduction.rst b/docs/report/introduction/introduction.rst
index 01108f1119..24c5970725 100644
--- a/docs/report/introduction/introduction.rst
+++ b/docs/report/introduction/introduction.rst
@@ -2,9 +2,8 @@ Introduction
 ============
 
 FD.io |csit-release| report contains system performance and functional
-testing data of |vpp-release|.
-
-`PDF version of this report`_ is also available for download.
+testing data of |vpp-release|. `PDF version of this report`_ is
+available for download.
 
 |csit-release| report is structured as follows:
diff --git a/docs/report/introduction/methodology.rst b/docs/report/introduction/methodology.rst
index 28bcf68257..1a1c349ba2 100644
--- a/docs/report/introduction/methodology.rst
+++ b/docs/report/introduction/methodology.rst
@@ -15,15 +15,14 @@ different Packet Loss Ratio (PLR) values.
 Following MLRsearch values are measured across a range of L2 frame
 sizes and reported:
 
-- **Non Drop Rate (NDR)**: packet and bandwidth throughput at PLR=0%.
+- NON DROP RATE (NDR): packet and bandwidth throughput at PLR=0%.
 
   - **Aggregate packet rate**: NDR_LOWER <bi-directional packet rate>
     pps.
   - **Aggregate bandwidth rate**: NDR_LOWER <bi-directional bandwidth
     rate> Gbps.
 
-- **Partial Drop Rate (PDR)**: packet and bandwidth throughput at
-  PLR=0.5%.
+- PARTIAL DROP RATE (PDR): packet and bandwidth throughput at PLR=0.5%.
 
   - **Aggregate packet rate**: PDR_LOWER <bi-directional packet rate>
     pps.
diff --git a/docs/report/introduction/physical_testbeds.rst b/docs/report/introduction/physical_testbeds.rst
index ba6cd39389..f6c559c8e8 100644
--- a/docs/report/introduction/physical_testbeds.rst
+++ b/docs/report/introduction/physical_testbeds.rst
@@ -6,9 +6,8 @@ Physical Testbeds
 All :abbr:`FD.io (Fast Data Input/Ouput)` :abbr:`CSIT (Continuous System
 Integration and Testing)` performance testing listed in this report are
 executed on physical testbeds built with bare-metal servers hosted by
-:abbr:`LF (Linux Foundation)` FD.io project.
-
-Two testbed topologies are used:
+:abbr:`LF (Linux Foundation)` FD.io project. Two testbed topologies are
+used:
 
 - **3-Node Topology**: Consisting of two servers acting as SUTs (Systems
   Under Test) and one server as TG (Traffic Generator), all
diff --git a/docs/report/introduction/test_environment_intro.rst b/docs/report/introduction/test_environment_intro.rst
index 19dac90b96..7763e9df8b 100644
--- a/docs/report/introduction/test_environment_intro.rst
+++ b/docs/report/introduction/test_environment_intro.rst
@@ -7,9 +7,8 @@ Physical Testbeds
 -----------------
 
 FD.io CSIT performance tests are executed in physical testbeds hosted by
-:abbr:`LF (Linux Foundation)` for FD.io project.
-
-Two physical testbed topology types are used:
+:abbr:`LF (Linux Foundation)` for FD.io project. Two physical testbed
+topology types are used:
 
 - **3-Node Topology**: Consisting of two servers acting as SUTs (Systems
   Under Test) and one server as TG (Traffic Generator), all
@@ -20,10 +19,8 @@ Two physical testbed topology types are used:
 Tested SUT servers are based on a range of processors including Intel
 Xeon Haswell-SP, Intel Xeon Skylake-SP, Arm, Intel Atom. More detailed
 description is provided in
-:ref:`tested_physical_topologies`.
-
-Tested logical topologies are described in
-:ref:`tested_logical_topologies`.
+:ref:`tested_physical_topologies`. Tested logical topologies are
+described in :ref:`tested_logical_topologies`.
 
 Server Specifications
 ---------------------
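
As context for the methodology.rst hunk above: the NDR and PDR definitions reduce to throughput found at fixed Packet Loss Ratio targets, PLR=0% for NDR and PLR=0.5% for PDR. A minimal Python sketch of that loss-ratio check follows; it is illustrative only, not CSIT code, and all names are invented for this example:

    # Illustrative only, not CSIT code: maps a single measured trial onto
    # the NDR (PLR = 0%) and PDR (PLR = 0.5%) loss criteria described above.

    def packet_loss_ratio(packets_sent, packets_received):
        """Return packet loss ratio as a fraction of offered packets."""
        return (packets_sent - packets_received) / float(packets_sent)

    def classify_trial(packets_sent, packets_received):
        """Label one trial against the NDR and PDR loss thresholds."""
        plr = packet_loss_ratio(packets_sent, packets_received)
        return {
            "plr": plr,
            "meets_ndr": plr == 0.0,    # Non Drop Rate: zero packet loss
            "meets_pdr": plr <= 0.005,  # Partial Drop Rate: at most 0.5% loss
        }

    # Example: 10M packets offered, 4000 lost -> PLR = 0.04%, PDR met, NDR not.
    print(classify_trial(10000000, 9996000))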
diff --git a/docs/report/introduction/test_scenarios_overview.rst b/docs/report/introduction/test_scenarios_overview.rst
index aebb0b1900..ccd5e820d7 100644
--- a/docs/report/introduction/test_scenarios_overview.rst
+++ b/docs/report/introduction/test_scenarios_overview.rst
@@ -7,8 +7,7 @@ covers baseline tests of DPDK sample applications. Tests are executed
 in physical (performance tests) and virtual environments (functional
 tests).
 
-Following list provides a brief overview of test scenarios covered in
-this report:
+Brief overview of test scenarios covered in this report:
 
 #. **VPP Performance**: VPP performance tests are executed in physical
    FD.io testbeds, focusing on VPP network data plane performance in
@@ -70,13 +69,11 @@ this report:
    client (DUT2) scenario using DMM framework and Linux kernel TCP/IP
    stack.
 
-All CSIT test results listed in this report are sourced and auto-
+All CSIT test data included in this report is auto-
 generated from :abbr:`RF (Robot Framework)` :file:`output.xml` files
-resulting from :abbr:`LF (Linux Foundation)` FD.io Jenkins jobs executed
-against |vpp-release| release artifacts. References are provided to the
-original FD.io Jenkins job results. Additional references are provided
-to the :abbr:`RF (Robot Framework)` result files that got archived in
-FD.io Nexus online storage system.
+produced by :abbr:`LF (Linux Foundation)` FD.io Jenkins jobs executed
+against |vpp-release| artifacts. References are provided to the
+original FD.io Jenkins job results and all archived source files.
 
 FD.io CSIT system is developed using two main coding platforms: :abbr:`RF
 (Robot Framework)` and Python2.7. |csit-release| source code for the executed test
diff --git a/docs/report/nsh_sfc_functional_tests/overview.rst b/docs/report/nsh_sfc_functional_tests/overview.rst
index 39b9dc6eac..1cbfd9a040 100644
--- a/docs/report/nsh_sfc_functional_tests/overview.rst
+++ b/docs/report/nsh_sfc_functional_tests/overview.rst
@@ -4,13 +4,11 @@ Overview
 Virtual Topologies
 ------------------
 
-CSIT NSH_SFC functional tests are executed in VM-based virtual topologies
-created on demand using :abbr:`VIRL (Virtual Internet Routing Lab)`
-simulation platform contributed by Cisco. VIRL runs on physical
-baremetal servers hosted by LF FD.io project.
-
-All tests are executed in three-node virtual test topology shown in the
-figure below.
+CSIT NSH_SFC functional tests are executed in VM-based virtual
+topologies created on demand using :abbr:`VIRL (Virtual Internet Routing
+Lab)` simulation platform contributed by Cisco. VIRL runs on physical
+baremetal servers hosted by LF FD.io project. All tests are executed in
+three-node virtual test topology shown in the figure below.
 
 .. only:: latex
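
As context for the test_scenarios_overview.rst hunk above, which states that report data is auto-generated from Robot Framework output.xml files: a rough Python sketch of pulling per-test status out of such a file follows. It is illustrative only, not the CSIT report-generation code; element and attribute names follow the common Robot Framework output schema and may vary between RF versions:

    # Illustrative sketch, not CSIT code: list test names and PASS/FAIL status
    # from a Robot Framework output.xml file.
    import xml.etree.ElementTree as ET

    def test_statuses(output_xml_path):
        """Yield (test name, status) pairs from a Robot Framework output.xml."""
        root = ET.parse(output_xml_path).getroot()
        for test in root.iter("test"):
            status = test.find("status")
            if status is not None:
                yield test.get("name"), status.get("status")

    for name, status in test_statuses("output.xml"):
        print("%-4s %s" % (status, name))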
diff --git a/docs/report/vpp_functional_tests/overview.rst b/docs/report/vpp_functional_tests/overview.rst
index 8ce516cf3d..a4635e7f85 100644
--- a/docs/report/vpp_functional_tests/overview.rst
+++ b/docs/report/vpp_functional_tests/overview.rst
@@ -7,10 +7,9 @@ Virtual Topologies
 CSIT VPP functional tests are executed in VM-based virtual topologies
 created on demand using :abbr:`VIRL (Virtual Internet Routing Lab)`
 simulation platform contributed by Cisco. VIRL runs on physical
-baremetal servers hosted by LF FD.io project.
-
-Based on the packet path thru SUT VMs, two distinct logical topology
-types are used for VPP DUT data plane testing:
+baremetal servers hosted by LF FD.io project. Based on the packet path
+thru SUT VMs, two distinct logical topology types are used for VPP DUT
+data plane testing:
 
 #. vNIC-to-vNIC switching topologies.
 #. Nested-VM service switching topologies.
diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 8ddcaec798..3c95919c55 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -1,6 +1,9 @@
 Overview
 ========
 
+VPP performance test results are reported for all three physical testbed
+types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw), 3-Node Xeon
+Skylake (3n-skx), 2-Node Xeon Skylake (2n-skx) and installed NIC models.
 For description of physical testbeds used for VPP performance tests
 please refer to :ref:`tested_physical_topologies`.
diff --git a/docs/report/vpp_performance_tests/throughput_trending.rst b/docs/report/vpp_performance_tests/throughput_trending.rst
index 6607423e0f..01d99e2c99 100644
--- a/docs/report/vpp_performance_tests/throughput_trending.rst
+++ b/docs/report/vpp_performance_tests/throughput_trending.rst
@@ -4,14 +4,14 @@ Throughput Trending
 In addition to reporting throughput comparison between VPP releases,
 CSIT provides continuous performance trending for VPP master branch:
 
-#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
-   - per VPP test case throughput trend, trend compliance and summary of
+#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_:
+   per VPP test case throughput trend, trend compliance and summary of
    detected anomalies.
-#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
-   - throughput test metrics, trend calculations and anomaly
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_:
+   throughput test metrics, trend calculations and anomaly
    classification (progression, regression).
-#. `VPP Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_
-   - per VPP build MRR throughput measurements against the trendline
+#. `VPP Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_:
+   per VPP build MRR throughput measurements against the trendline
    with anomaly highlights and associated CSIT test jobs.
\ No newline at end of file
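
As context for the throughput trending entries in this commit (trend calculations and anomaly classification into progression and regression): a simplified stand-in is sketched below. CSIT's actual anomaly detection is more elaborate; the trailing-median trendline and the 5% tolerance used here are illustrative assumptions only:

    # Simplified stand-in for trend-based anomaly classification; not the CSIT
    # trending code. Each MRR sample is compared to a trailing median of the
    # previous samples and labelled progression, regression or normal.

    def classify_samples(mrr_samples, window=5, tolerance=0.05):
        """Label each sample against a trailing-median trendline."""
        labels = []
        for i, sample in enumerate(mrr_samples):
            history = sorted(mrr_samples[max(0, i - window):i])
            if not history:
                labels.append("normal")
                continue
            trend = history[len(history) // 2]  # (upper) median of the window
            if sample < trend * (1.0 - tolerance):
                labels.append("regression")     # dropped below the trendline
            elif sample > trend * (1.0 + tolerance):
                labels.append("progression")    # rose above the trendline
            else:
                labels.append("normal")
        return labels

    # Example: the 8.9 Mpps sample is flagged as a regression against ~10 Mpps.
    print(classify_samples([9.9, 10.0, 10.1, 10.0, 8.9, 9.0, 10.2]))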