Diffstat (limited to 'docs/report')
-rw-r--r--  docs/report/detailed_test_results/honeycomb_functional_results/index.rst  4
-rw-r--r--  docs/report/honeycomb_functional_tests/csit_release_notes.rst  39
-rw-r--r--  docs/report/honeycomb_functional_tests/index.rst  2
-rw-r--r--  docs/report/honeycomb_functional_tests/overview.rst  18
-rw-r--r--  docs/report/honeycomb_performance_tests/csit_release_notes.rst  19
-rw-r--r--  docs/report/honeycomb_performance_tests/documentation.rst  5
-rw-r--r--  docs/report/honeycomb_performance_tests/index.rst  11
-rw-r--r--  docs/report/honeycomb_performance_tests/overview.rst  124
-rw-r--r--  docs/report/honeycomb_performance_tests/test_environment.rst  22
-rw-r--r--  docs/report/honeycomb_performance_tests/test_result_data.rst  101
-rw-r--r--  docs/report/index.rst  1
-rw-r--r--  docs/report/introduction/csit_design.rst  2
-rw-r--r--  docs/report/introduction/csit_test_naming.rst  10
-rw-r--r--  docs/report/introduction/general_notes.rst  129
-rw-r--r--  docs/report/introduction/overview.rst  13
15 files changed, 400 insertions, 100 deletions
diff --git a/docs/report/detailed_test_results/honeycomb_functional_results/index.rst b/docs/report/detailed_test_results/honeycomb_functional_results/index.rst
index b054749213..7357af6065 100644
--- a/docs/report/detailed_test_results/honeycomb_functional_results/index.rst
+++ b/docs/report/detailed_test_results/honeycomb_functional_results/index.rst
@@ -1,10 +1,10 @@
-HoneyComb Functional Results
+Honeycomb Functional Results
============================
.. note::
Data sources for reported test results: i) FD.io test executor jobs
- `FD.io test executor HoneyComb functional jobs`_
+ `FD.io test executor Honeycomb functional jobs`_
, ii) archived FD.io jobs test result `output files
<../../_static/archive/>`_.
diff --git a/docs/report/honeycomb_functional_tests/csit_release_notes.rst b/docs/report/honeycomb_functional_tests/csit_release_notes.rst
index a398fa41cf..38ec001418 100644
--- a/docs/report/honeycomb_functional_tests/csit_release_notes.rst
+++ b/docs/report/honeycomb_functional_tests/csit_release_notes.rst
@@ -4,16 +4,15 @@ CSIT Release Notes
Changes in CSIT |release|
-------------------------
-#. Added Honeycomb functional tests
+#. Added Honeycomb functional tests for the following features:
- - ACL plugin
- - Routing
- - SLAAC
- - Proxy ARP
- - DHCP Relay
- - Neighbor Discovery Proxy
+ - Policer
-#. Changed execution environment from Ubuntu14.04 to Ubuntu16.04
+#. Improved test coverage for the following features:
+
+ - Interface Management
+ - Vlan
+ - Port Mirroring
Known Issues
------------
@@ -25,18 +24,20 @@ tests in VIRL:
| # | Issue | Jira ID | Description |
+---+--------------------------------------------+------------+----------------------------------------------------------------------------+
| 1 | IP address subnet validation | VPP-649 | When configuring two IP addresses from the same subnet on an interface, |
-| | | | VPP refuses the configuration but returns OK. This can cause desync |
-| | | | between Honeycomb's config and operational data. |
+| | | | VPP refuses the configuration but returns code 200:OK. This can cause |
+| | | | desync between Honeycomb's config and operational data. |
+---+--------------------------------------------+------------+----------------------------------------------------------------------------+
-| 2 | Persistence of VxLAN tunnel naming context | HC2VPP-47 | When VPP restarts with Honeycomb running and a VxLan interface configured, |
-| | | | the interface is sometimes renamed to "vxlan_tunnel0". |
-| | | | It is otherwise configured correctly. |
+| 2 | Removal of ACL-plugin interface assignment | HC2VPP-173 | Attempting to remove all ACLs from an interface responds with OK but does |
+| | | | not remove the assignments. |
+---+--------------------------------------------+------------+----------------------------------------------------------------------------+
-| 3 | Classifier plugin for IPv6 cases | VPP-687 | Classifier ignores IPv6 packets with less than 8 bytes after last header. |
-| | | | Fixed in VPP 17.07. |
+| 3 | VxLAN GPE configuration crashes VPP | VPP-875 | Specific VxLAN GPE configurations cause VPP to crash and restart. |
+---+--------------------------------------------+------------+----------------------------------------------------------------------------+
-| 4 | Batch disable Lisp features | HC2VPP-131 | When removing complex Lisp configurations in a single request, |
-| | | | the operation fails due to a write ordering issue. |
+| 4 | Policer traffic test failure | CSIT- | Traffic test has begun to fail, likely due to VPP changes. No more |
+| | | | information is available yet. |
++---+--------------------------------------------+------------+----------------------------------------------------------------------------+
+| 5 | SPAN traffic test failure | CSIT- | Traffic test has begun to fail, likely due to VPP changes. No more |
+| | | | information is available yet. |
++---+--------------------------------------------+------------+----------------------------------------------------------------------------+
+| 6 | Unnumbered interface VIRL issue | CSIT- | CRUD for unnumbered interface appears to fail in VIRL, but not in local |
+| | | | test runs. Investigation pending. |
+---+--------------------------------------------+------------+----------------------------------------------------------------------------+
-
-
diff --git a/docs/report/honeycomb_functional_tests/index.rst b/docs/report/honeycomb_functional_tests/index.rst
index d5d1b6180b..424ad3640f 100644
--- a/docs/report/honeycomb_functional_tests/index.rst
+++ b/docs/report/honeycomb_functional_tests/index.rst
@@ -1,4 +1,4 @@
-HoneyComb Functional Tests
+Honeycomb Functional Tests
==========================
.. toctree::
diff --git a/docs/report/honeycomb_functional_tests/overview.rst b/docs/report/honeycomb_functional_tests/overview.rst
index c73e9706f8..1c992a5c2d 100644
--- a/docs/report/honeycomb_functional_tests/overview.rst
+++ b/docs/report/honeycomb_functional_tests/overview.rst
@@ -53,7 +53,7 @@ with results listed in this report:
- **Basic interface management** - CRUD for interface state,
- ipv4/ipv6 address, ipv4 neighbor, MTU value.
- - Test case count: 7
+ - Test case count: 14
- **L2BD** - CRUD for L2 Bridge-Domain, interface assignment.
- Create up to two bridge domains with all implemented functions turned on.
- (flooding, unknown-unicast flooding, forwarding, learning, arp-termination)
@@ -86,7 +86,7 @@ with results listed in this report:
- Toggle interface state separately for super-interface and sub-interface.
- Configure IP address and bridge domain assignment on sub-interface.
- Configure VLAN tag rewrite on sub-interface.
- - Test case count: 17
+ - Test case count: 24
- **ACL** - CRD for low-level classifiers: table and session management,
- interface assignment.
- Configure up to 2 classify tables.
@@ -96,7 +96,7 @@ with results listed in this report:
- Test case count: 9
- **PBB** - CRD for provider backbone bridge sub-interface.
- Configure, modify and remove a PBB sub-interface over a physical interface.
- - Test case count: 9
+ - Test case count: 8
- **NSH_SFC** - CRD for NSH maps and entries, using NSH_SFC plugin.
- Configure up to 2 NSH entries.
- Configure up to 2 NSH maps.
@@ -107,7 +107,7 @@ with results listed in this report:
- Configure and delete Lisp mapping as local and remote.
- Configure and delete Lisp adjacency mapping
- Configure and delete Lisp map resolver, proxy ITR.
- - Test case count: 11
+ - Test case count: 16
- **NAT** - CRD for NAT entries, interface assignment.
- Configure and delete up to two NAT entries.
- Assign NAT entries to a physical interface.
@@ -116,7 +116,7 @@ with results listed in this report:
- Configure SPAN port mirroring on a physical interface, mirroring
- up to 2 interfaces.
- Remove SPAN configuration from interfaces.
- - Test case count: 3
+ - Test case count: 14
- **ACL-PLUGIN** - CRD for high-level classifier
- MAC + IP address classification.
- IPv4, IPv6 address classification.
@@ -144,11 +144,15 @@ with results listed in this report:
- Configure blackhole route.
- IPv4 and IPv6 variants.
- Test case count: 6
+- **Policer** - CRD for traffic policing feature.
+ - Configure Policing rules.
+ - Assign to interface.
+ - Test case count: 6
- **Honeycomb Infrastructure** - configuration persistence,
- Netconf notifications for interface events,
- Netconf negative tests aimed at specific issues
-Total 158 Honeycomb tests in the CSIT |release|.
+Total 173 Honeycomb functional tests in the CSIT |release|.
Operational data in Honeycomb should mirror configuration data at all times.
Because of this, test cases follow this general pattern:
@@ -158,7 +162,7 @@ Because of this, test cases follow this general pattern:
#. modify configuration of the feature using restconf.
#. verify changes to operational data using restconf.
#. verify changes using VPP API dump, OR
-#. send a packet to VPP node and observe behaviour to verify configuration
+#. send a packet to VPP node and observe behaviour to verify configuration.
Test cases involving network interfaces utilize the first two interfaces on
the DUT node.
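The configure-then-verify pattern above rests on one invariant: Honeycomb's operational data must mirror its configuration data. A minimal sketch of such a mirroring check (hypothetical helper and data, not actual CSIT library code):

```python
def oper_mirrors_config(config, oper):
    """Return True if every key/value in the Restconf config data also
    appears in the operational data. Hypothetical helper illustrating
    the mirroring invariant, not actual CSIT library code."""
    if isinstance(config, dict):
        return all(
            key in oper and oper_mirrors_config(value, oper[key])
            for key, value in config.items()
        )
    return config == oper


# Operational data may carry extra state (e.g. oper-status) on top of
# the mirrored configuration; that is still a valid mirror.
config = {"interface": {"name": "GigabitEthernet0/8/0", "enabled": True}}
oper = {"interface": {"name": "GigabitEthernet0/8/0", "enabled": True,
                      "oper-status": "up"}}
print(oper_mirrors_config(config, oper))  # → True
```

A desync such as the VPP-649 issue above (VPP returns OK but rejects the config) is exactly the case where this check would fail.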
diff --git a/docs/report/honeycomb_performance_tests/csit_release_notes.rst b/docs/report/honeycomb_performance_tests/csit_release_notes.rst
new file mode 100644
index 0000000000..51b62a7a6a
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/csit_release_notes.rst
@@ -0,0 +1,19 @@
+CSIT Release Notes
+==================
+
+Changes in CSIT |release|
+-------------------------
+
+#. First release with Honeycomb performance testing
+
+Known Issues
+------------
+
+Here is the list of known issues in CSIT |release| for Honeycomb performance
+tests in VIRL:
+
++---+--------------------------------------------+------------+----------------------------------------------------------------------------+
+| # | Issue | Jira ID | Description |
++---+--------------------------------------------+------------+----------------------------------------------------------------------------+
+| 1 | | | |
++---+--------------------------------------------+------------+----------------------------------------------------------------------------+
diff --git a/docs/report/honeycomb_performance_tests/documentation.rst b/docs/report/honeycomb_performance_tests/documentation.rst
new file mode 100644
index 0000000000..6b15bde6ee
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/documentation.rst
@@ -0,0 +1,5 @@
+Documentation
+=============
+
+`CSIT Honeycomb Performance Tests Documentation`_ contains detailed
+functional description and input parameters for each test case.
diff --git a/docs/report/honeycomb_performance_tests/index.rst b/docs/report/honeycomb_performance_tests/index.rst
new file mode 100644
index 0000000000..3177494395
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/index.rst
@@ -0,0 +1,11 @@
+Honeycomb Performance Tests
+===========================
+
+.. toctree::
+
+ overview
+ csit_release_notes
+ test_environment
+ documentation
+ test_result_data
+
diff --git a/docs/report/honeycomb_performance_tests/overview.rst b/docs/report/honeycomb_performance_tests/overview.rst
new file mode 100644
index 0000000000..0b2e3c41b8
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/overview.rst
@@ -0,0 +1,124 @@
+Overview
+========
+
+Tested Physical Topologies
+--------------------------
+
+CSIT VPP performance tests are executed on physical baremetal servers hosted by
+LF FD.io project. Testbed physical topology is shown in the figure below.
+
+::
+
+ +------------------------+ +------------------------+
+ | | | |
+ | +------------------+ | | +------------------+ |
+ | | | | | | | |
+ | | <-----------------> | |
+ | | DUT1 | | | | DUT2 | |
+ | +--^---------------+ | | +---------------^--+ |
+ | | | | | |
+ | | SUT1 | | SUT2 | |
+ +------------------------+ +------------------^-----+
+ | |
+ | |
+ | +-----------+ |
+ | | | |
+ +------------------> TG <------------------+
+ | |
+ +-----------+
+
+SUT1 runs the VPP SW application in Linux user-mode as a
+Device Under Test (DUT), and a python script to generate traffic. SUT2 and TG
+are unused.
+Physical connectivity between the SUTs and the TG is provided using
+different NIC models.
+
+Performance tests involve sending Netconf requests over localhost to the
+Honeycomb listener port, and measuring response time.
+
+Note that reported performance results are specific to the SUTs tested.
+Current LF FD.io SUTs are based on Intel XEON E5-2699v3 2.3GHz CPUs. SUTs with
+other CPUs are likely to yield different results.
+
+For detailed LF FD.io test bed specification and physical topology please refer
+to `LF FDio CSIT testbed wiki page
+<https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
+
+Performance Tests Coverage
+--------------------------
+
+Currently there is only a single Honeycomb performance test. It measures
+response time for a simple read operation, performed synchronously and using
+single (not batched) requests.
+
+Currently the tests do not trigger automatically, but can be run on-demand from
+the hc2vpp project.
+
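The single test described above, synchronous single-request reads with response-time measurement, can be sketched as follows (`send_request` is a stand-in callable; this is an illustrative sketch, not the hc2vpp test code):

```python
import time


def timed_sync_reads(send_request, n_requests):
    """Issue n_requests one at a time (synchronously, no batching) and
    return the average response time in seconds. `send_request` stands
    in for the real Netconf read sent over localhost to the Honeycomb
    listener port; hypothetical helper, not the hc2vpp test code."""
    start = time.perf_counter()
    for _ in range(n_requests):
        send_request()
    elapsed = time.perf_counter() - start
    return elapsed / n_requests


# With a no-op stand-in the average is near zero but never negative.
avg = timed_sync_reads(lambda: None, 1000)
print(avg >= 0.0)  # → True
```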
+Performance Tests Naming
+------------------------
+
+CSIT |release| follows a common structured naming convention for all
+performance and system functional tests, introduced in CSIT |release-1|.
+
+The naming should be intuitive for the majority of the tests. Complete
+description of CSIT test naming convention is provided on `CSIT test naming wiki
+<https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
+
+Here are a few illustrative examples of the new naming usage for performance test
+suites:
+
+#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**
+
+ - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
+ - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
+ Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
+ with MAC learning, NDR throughput discovery.
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
+ on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
+ switching with MAC learning, NDR throughput discovery.
+ - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
+ x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
+ - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
+ Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
+ discovery.
+
+#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
+ P2V2P, NIC2VMchain2NIC, P2V2V2P**
+
+ - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
+ VirtPortConfig-VMconfig-TestType*
+ - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
+ of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
+ switching to/from two vhost interfaces and one VM, NDR throughput
+ discovery.
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from two vhost interfaces and one VM, NDR throughput
+ discovery.
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
+ switching to/from four vhost interfaces and two VMs, NDR throughput
+ discovery.
+
+Methodology: Multi-Core
+-----------------------
+
+**Multi-core Test** - CSIT |release| multi-core tests are executed in the
+following thread and core configurations:
+
+#. 1t - 1 Honeycomb Netconf thread on 1 CPU physical core.
+#. 8t - 8 Honeycomb Netconf threads on 8 CPU physical cores.
+#. 16t - 16 Honeycomb Netconf threads on 16 CPU physical cores.
+
+The traffic generator also uses multiple threads/cores to simulate multiple
+Netconf clients accessing the Honeycomb server.
+
+Methodology: Performance measurement
+------------------------------------
+
+The following values are measured and reported in tests:
+
+- Average request rate. Averaged over the entire test duration, over all client
+ threads. Negative replies (if any) are not counted and are reported separately.
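The average-rate computation described above can be sketched as follows (hypothetical helper; the actual measurement lives in the hc2vpp test code):

```python
def average_request_rate(replies, duration_s):
    """Average successful-request rate over the whole test duration,
    across all client threads. Negative (error) replies are excluded
    from the rate and reported separately. Hypothetical helper, not
    the actual measurement code."""
    ok = sum(1 for status_ok in replies if status_ok)
    errors = len(replies) - ok
    return ok / duration_s, errors


# 100 replies collected over half a second, 2 of them errors.
rate, errors = average_request_rate([True] * 98 + [False] * 2, duration_s=0.5)
print(rate, errors)  # → 196.0 2
```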
diff --git a/docs/report/honeycomb_performance_tests/test_environment.rst b/docs/report/honeycomb_performance_tests/test_environment.rst
new file mode 100644
index 0000000000..1cafe26aa4
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/test_environment.rst
@@ -0,0 +1,22 @@
+Test Environment
+================
+
+To execute performance tests, there are three identical testbeds; each testbed
+consists of two SUTs and one TG.
+
+Server HW Configuration
+-----------------------
+
+See `Performance HW Configuration <../vpp_performance_tests/test_environment.html>`_
+
+Additionally, configuration for the Honeycomb client:
+
+
+**Honeycomb Startup Command**
+
+Use the server mode JIT compiler, increase the default memory size,
+metaspace size, and enable NUMA optimizations for the JVM.
+
+::
+
+ $ java -server -Xms128m -Xmx512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m -XX:+UseNUMA -XX:+UseParallelGC
diff --git a/docs/report/honeycomb_performance_tests/test_result_data.rst b/docs/report/honeycomb_performance_tests/test_result_data.rst
new file mode 100644
index 0000000000..563e93ea5f
--- /dev/null
+++ b/docs/report/honeycomb_performance_tests/test_result_data.rst
@@ -0,0 +1,101 @@
+Test Result Data
+================
+
+This section includes a summary of Netconf read operation performance.
+Performance is reported for Honeycomb running in multiple configurations of
+netconf thread(s) and their physical CPU core(s) placement, and for different
+read operation targets.
+
+.. note::
+
+ Test results have been generated by
+ `FD.io test executor honeycomb performance jobs`_ with Robot Framework
+ result files csit-vpp-perf-\*.zip `archived here <../../_static/archive/>`_.
+
+Honeycomb + Netconf
+===================
+
+Performs read operations from Honeycomb's operational data store. Honeycomb
+does not contact VPP to obtain the most up-to-date value. Operations are
+performed synchronously per client, varying the number of clients from 1 to 16.
+
+netconf-netty-threads: 16
+
++----------------+----------------------------------+----------------+
+| # clients | TCP performance (reads/sec) | Total requests |
++================+==================================+================+
+| 1 | 6630 | 100K |
++----------------+----------------------------------+----------------+
+| 2 | 14598 | 100K |
++----------------+----------------------------------+----------------+
+| 4 | 28309 | 100K |
++----------------+----------------------------------+----------------+
+| 8 | 46715 | 100K |
++----------------+----------------------------------+----------------+
+| 16 | 458141 | 100K |
++----------------+----------------------------------+----------------+
+
+netconf-netty-threads: 1
+
++----------------+----------------------------------+----------------+
+| # clients | TCP performance (reads/sec) | Total requests |
++================+==================================+================+
+| 1 | 6563 | 100K |
++----------------+----------------------------------+----------------+
+| 2 | 7601 | 100K |
++----------------+----------------------------------+----------------+
+| 4 | 8212 | 100K |
++----------------+----------------------------------+----------------+
+| 8 | 8595 | 100K |
++----------------+----------------------------------+----------------+
+| 16 | 8699 | 100K |
++----------------+----------------------------------+----------------+
+
+Data source:
+https://jenkins.fd.io/view/hc2vpp/job/hc2vpp-csit-perf-master-ubuntu1604/4/
+
+Note: At 46K/s we are likely hitting the limits of the Netconf interface,
+according to https://wiki.opendaylight.org/view/NETCONF:Testing#Results_3
+
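As a quick sanity check on the two tables above, the 1-to-8-client speedup can be computed per thread configuration; the single netty thread saturates almost immediately, while 16 threads scale close to linearly (rates copied from the tables):

```python
# Reads/sec copied from the 1-client and 8-client rows of the tables above.
rate_1c_16t = 6630    # 16 netty threads, 1 client
rate_8c_16t = 46715   # 16 netty threads, 8 clients
rate_1c_1t = 6563     # 1 netty thread, 1 client
rate_8c_1t = 8595     # 1 netty thread, 8 clients

# Speedup from 1 to 8 concurrent clients in each configuration.
print(round(rate_8c_16t / rate_1c_16t, 2))  # → 7.05
print(round(rate_8c_1t / rate_1c_1t, 2))    # → 1.31
```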
+Honeycomb + Netconf + VPP
+=========================
+
+Performs read operations from Honeycomb's operational data store. Honeycomb
+uses VPP API to obtain the value from VPP before responding. Operations are
+performed synchronously per client, varying the number of clients from 1 to 16.
+
+netconf-netty-threads: 16
+# Results pending
+
++----------------+----------------------------------+----------------+
+| # clients | TCP performance (reads/sec) | Total requests |
++================+==================================+================+
+| 1 | | 100K |
++----------------+----------------------------------+----------------+
+| 2 | | 100K |
++----------------+----------------------------------+----------------+
+| 4 | | 100K |
++----------------+----------------------------------+----------------+
+| 8 | | 100K |
++----------------+----------------------------------+----------------+
+| 16 | | 100K |
++----------------+----------------------------------+----------------+
+
+netconf-netty-threads: 1
+
++----------------+----------------------------------+----------------+
+| # clients | TCP performance (reads/sec) | Total requests |
++================+==================================+================+
+| 1 | | 100K |
++----------------+----------------------------------+----------------+
+| 2 | | 100K |
++----------------+----------------------------------+----------------+
+| 4 | | 100K |
++----------------+----------------------------------+----------------+
+| 8 | | 100K |
++----------------+----------------------------------+----------------+
+| 16 | | 100K |
++----------------+----------------------------------+----------------+
+
+Data source:
+# TODO
diff --git a/docs/report/index.rst b/docs/report/index.rst
index be7129cb79..8b4bb7955c 100644
--- a/docs/report/index.rst
+++ b/docs/report/index.rst
@@ -7,6 +7,7 @@ CSIT 17.07
introduction/index
vpp_performance_tests/index
dpdk_performance_tests/index
+ honeycomb_performance_tests/index
vpp_functional_tests/index
honeycomb_functional_tests/index
vpp_unit_tests/index
diff --git a/docs/report/introduction/csit_design.rst b/docs/report/introduction/csit_design.rst
index 374c9cdace..baba58f904 100644
--- a/docs/report/introduction/csit_design.rst
+++ b/docs/report/introduction/csit_design.rst
@@ -85,7 +85,7 @@ A brief bottom-up description is provided here:
- Functional tests using VIRL environment:
- VPP;
- - HoneyComb;
+ - Honeycomb;
- NSH_SFC;
- Performance tests using physical testbed environment:
diff --git a/docs/report/introduction/csit_test_naming.rst b/docs/report/introduction/csit_test_naming.rst
index 13eab06df5..c88ec493a3 100644
--- a/docs/report/introduction/csit_test_naming.rst
+++ b/docs/report/introduction/csit_test_naming.rst
@@ -11,7 +11,7 @@ The naming should be intuitive for majority of the tests. Complete
description of CSIT test naming convention is provided on
`CSIT test naming wiki page <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
Below few illustrative examples of the naming usage for test suites across CSIT
-performance, functional and HoneyComb management test areas.
+performance, functional and Honeycomb management test areas.
Naming Convention
-----------------
@@ -106,13 +106,13 @@ topologies:
* *mgmt-cfg-lisp-apivat-func* => configuration of LISP with VAT API calls,
functional tests.
* *mgmt-cfg-l2bd-apihc-apivat-func* => configuration of L2 Bridge-Domain with
- HoneyComb API and VAT API calls, functional tests.
+ Honeycomb API and VAT API calls, functional tests.
* *mgmt-oper-int-apihcnc-func* => reading status and operational data of
- interface with HoneyComb NetConf API calls, functional tests.
+ interface with Honeycomb NetConf API calls, functional tests.
* *mgmt-cfg-int-tap-apihcnc-func* => configuration of tap interfaces with
- HoneyComb NetConf API calls, functional tests.
+ Honeycomb NetConf API calls, functional tests.
* *mgmt-notif-int-subint-apihcnc-func* => notifications of interface and
- sub-interface events with HoneyComb NetConf Notifications, functional tests.
+ sub-interface events with Honeycomb NetConf Notifications, functional tests.
For complete description of CSIT test naming convention please refer to `CSIT
test naming wiki page <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
diff --git a/docs/report/introduction/general_notes.rst b/docs/report/introduction/general_notes.rst
index 64e6231443..380f109764 100644
--- a/docs/report/introduction/general_notes.rst
+++ b/docs/report/introduction/general_notes.rst
@@ -1,62 +1,67 @@
-General Notes
-=============
-
-All CSIT test results listed in this report are sourced and auto-generated
-from output.xml Robot Framework (RF) files resulting from LF FD.io Jenkins
-jobs execution against |vpp-release| release artifacts. References are
-provided to the original LF FD.io Jenkins job results. However, as LF FD.io
-Jenkins infrastructure does not automatically archive all jobs (history record
-is provided for the last 30 days or 40 jobs only), additional references are
-provided to the RF result files that got archived in FD.io nexus online
-storage system.
-
-FD.io CSIT project currently covers multiple FD.io system and sub-system
-testing areas and this is reflected in this report, where each testing area
-is listed separately, as follows:
-
-#. **VPP Performance Tests** - VPP performance tests are executed in physical
- FD.io testbeds, focusing on VPP network data plane performance at this stage,
- both for Phy-to-Phy (NIC-to-NIC) and Phy-to-VM-to-Phy (NIC-to-VM-to-NIC)
- forwarding topologies. Tested across a range of NICs, 10GE and 40GE
- interfaces, range of multi-thread and multi-core configurations. VPP
- application runs in host user- mode. TRex is used as a traffic generator.
-
-#. **DPDK Performance Tests** - VPP is using DPDK code to control and drive
- the NICs and physical interfaces. Testpmd tests are used as a baseline to
- profile the DPDK sub-system of VPP. DPDK performance tests executed in
- physical FD.io testbeds, focusing on Testpmd/L3FWD data plane performance for
- Phy-to-Phy (NIC-to-NIC). Tests cover a range of NICs, 10GE and 40GE
- interfaces, range of multi-thread and multi-core configurations.
- Testpmd/L3FWD application runs in host user-mode. TRex is used as a traffic
- generator.
-
-#. **VPP Functional Tests** - VPP functional tests are executed in virtual
- FD.io testbeds focusing on VPP packet processing functionality, including
- network data plane and in -line control plane. Tests cover vNIC-to-vNIC
- vNIC-to-VM-to-vNIC forwarding topologies. Scapy is used as a traffic
- generator.
-
-#. **HoneyComb Functional Tests** - HoneyComb functional tests are executed in
- virtual FD.io testbeds focusing on HoneyComb management and programming
- functionality of VPP. Tests cover a range of CRUD operations executed
- against VPP.
-
-#. **NSH_SFC Functional Tests** - NSH_SFC functional tests are executed in
- virtual FD.io testbeds focusing on NSH_SFC of VPP. Tests cover a range of
- CRUD operations executed against VPP.
-
-In addition to above, CSIT |release| report does also include VPP unit test
-results. VPP unit tests are developed within the FD.io VPP project and as they
-complement CSIT system functional tests, they are provided mainly as a reference
-and to provide a more complete view of automated testing executed against
-|vpp-release|.
-
-FD.io CSIT system is developed using two main coding platforms: Robot
-Framework (RF) and Python. CSIT |release| source code for the executed test
-suites is available in CSIT branch |release| in the directory
-"./tests/<name_of_the_test_suite>". A local copy of CSIT source code can be
-obtained by cloning CSIT git repository - "git clone
-https://gerrit.fd.io/r/csit". The CSIT testing virtual environment can be run
-on a local computer workstation (laptop, server) using Vagrant by following
-the instructions in `CSIT tutorials
-<https://wiki.fd.io/view/CSIT#Tutorials>`_.
+General Notes
+=============
+
+All CSIT test results listed in this report are sourced and auto-generated
+from output.xml Robot Framework (RF) files resulting from LF FD.io Jenkins
+jobs execution against |vpp-release| release artifacts. References are
+provided to the original LF FD.io Jenkins job results. However, as LF FD.io
+Jenkins infrastructure does not automatically archive all jobs (history record
+is provided for the last 30 days or 40 jobs only), additional references are
+provided to the RF result files that got archived in FD.io nexus online
+storage system.
+
+FD.io CSIT project currently covers multiple FD.io system and sub-system
+testing areas and this is reflected in this report, where each testing area
+is listed separately, as follows:
+
+#. **VPP Performance Tests** - VPP performance tests are executed in physical
+ FD.io testbeds, focusing on VPP network data plane performance at this stage,
+ both for Phy-to-Phy (NIC-to-NIC) and Phy-to-VM-to-Phy (NIC-to-VM-to-NIC)
+ forwarding topologies. Tested across a range of NICs, 10GE and 40GE
+ interfaces, range of multi-thread and multi-core configurations. VPP
+ application runs in host user-mode. TRex is used as a traffic generator.
+
+#. **DPDK Performance Tests** - VPP is using DPDK code to control and drive
+ the NICs and physical interfaces. Testpmd tests are used as a baseline to
+ profile the DPDK sub-system of VPP. DPDK performance tests executed in
+ physical FD.io testbeds, focusing on Testpmd/L3FWD data plane performance for
+ Phy-to-Phy (NIC-to-NIC). Tests cover a range of NICs, 10GE and 40GE
+ interfaces, range of multi-thread and multi-core configurations.
+ Testpmd/L3FWD application runs in host user-mode. TRex is used as a traffic
+ generator.
+
+#. **VPP Functional Tests** - VPP functional tests are executed in virtual
+ FD.io testbeds focusing on VPP packet processing functionality, including
+ network data plane and in-line control plane. Tests cover vNIC-to-vNIC and
+ vNIC-to-VM-to-vNIC forwarding topologies. Scapy is used as a traffic
+ generator.
+
+#. **Honeycomb Functional Tests** - Honeycomb functional tests are executed in
+ virtual FD.io testbeds, focusing on Honeycomb management and programming
+ functionality of VPP. Tests cover a range of CRUD operations executed
+ against VPP.
+
+#. **Honeycomb Performance Tests** - Honeycomb performance tests are executed in
+ physical FD.io testbeds, focusing on the performance of Honeycomb management and programming
+ functionality of VPP. Tests cover a range of CRUD operations executed
+ against VPP.
+
+#. **NSH_SFC Functional Tests** - NSH_SFC functional tests are executed in
+ virtual FD.io testbeds focusing on NSH_SFC of VPP. Tests cover a range of
+ CRUD operations executed against VPP.
+
+In addition to the above, the CSIT |release| report also includes VPP unit test
+results. VPP unit tests are developed within the FD.io VPP project and as they
+complement CSIT system functional tests, they are provided mainly as a reference
+and to provide a more complete view of automated testing executed against
+|vpp-release|.
+
+FD.io CSIT system is developed using two main coding platforms: Robot
+Framework (RF) and Python. CSIT |release| source code for the executed test
+suites is available in CSIT branch |release| in the directory
+"./tests/<name_of_the_test_suite>". A local copy of CSIT source code can be
+obtained by cloning CSIT git repository - "git clone
+https://gerrit.fd.io/r/csit". The CSIT testing virtual environment can be run
+on a local computer workstation (laptop, server) using Vagrant by following
+the instructions in `CSIT tutorials
+<https://wiki.fd.io/view/CSIT#Tutorials>`_.
diff --git a/docs/report/introduction/overview.rst b/docs/report/introduction/overview.rst
index 4aac352d0c..57b4cd4897 100644
--- a/docs/report/introduction/overview.rst
+++ b/docs/report/introduction/overview.rst
@@ -10,7 +10,7 @@ continuous execution delivered in CSIT |release|. A high-level overview is
provided for each CSIT test environment running in Linux Foundation (LF) FD.io
Continuous Performance Labs. This is followed by summary of all executed tests
against the |vpp-release| and associated FD.io projects and sub-systems
-(HoneyComb, DPDK, NSH_SFC), CSIT |release| release notes, result highlights and
+(Honeycomb, DPDK, NSH_SFC), CSIT |release| release notes, result highlights and
known issues discovered in CSIT. More detailed description of each environment,
pointers to CSIT test code documentation and detailed test results with links to
the source data files are also provided.
@@ -48,13 +48,20 @@ CSIT |release| report contains following main sections and sub-sections:
added; *Test Environment* - environment description ; *Documentation* -
source code documentation for VPP functional tests.
-#. **HoneyComb Functional Tests** - HoneyComb functional tests executed in
+#. **Honeycomb Functional Tests** - Honeycomb functional tests executed in
virtual FD.io testbeds; *Overview* - tested virtual topologies, test
coverage and naming specifics; *CSIT Release Notes* - changes in CSIT
|release|, added tests, environment or methodology changes, known CSIT issues;
*Test Environment* - environment description ;
*Documentation* - source code documentation for Honeycomb functional tests.
+#. **Honeycomb Performance Tests** - Honeycomb performance tests executed in
+ physical FD.io testbeds; *Overview* - tested topologies, test
+ coverage and naming specifics; *CSIT Release Notes* - changes in CSIT
+ |release|, added tests, environment or methodology changes, known CSIT issues;
+ *Test Environment* - environment description ;
+ *Documentation* - source code documentation for Honeycomb performance tests.
+
#. **VPP Unit Tests** - refers to VPP functional unit tests executed as
part of vpp make test verify option within the FD.io VPP project; listed in
this report to give a more complete view about executed VPP functional tests;
@@ -71,7 +78,7 @@ CSIT |release| report contains following main sections and sub-sections:
#. **Detailed Test Results** - auto-generated results from CSIT jobs
executions using CSIT Robot Framework output files as source data; *VPP
Performance Results*, *DPDK Performance Results*, *VPP Functional
- Results*, *HoneyComb Functional Results*, *VPPtest Functional Results*.
+ Results*, *Honeycomb Functional Results*, *VPPtest Functional Results*.
#. **Test Configuration** - auto-generated DUT configuration data from CSIT jobs
executions using CSIT Robot Framework output files as source data; *VPP