---
title: "Generic Segmentation Offload"
weight: 7
---

# Generic Segmentation Offload

## Overview

Generic Segmentation Offload (GSO) reduces per-packet processing
overhead by enabling applications to pass a multi-packet buffer to the
(v)NIC, so that a small number of large packets (e.g. frame size of
64 KB) is processed instead of a large number of small packets (e.g.
frame size of 1500 B).
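
On the Linux side, the GSO state of a network interface can be inspected and
toggled with ethtool. The commands below are only an illustration and not part
of the test methodology; the interface name is an assumption:

    $ # Inspect the current offload settings (interface name is illustrative).
    $ ethtool -k eth0 | grep generic-segmentation-offload
    generic-segmentation-offload: on
    $ # Toggle GSO if needed.
    $ sudo ethtool -K eth0 gso on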

GSO is tested for VPP vhostuser and tapv2 interfaces. All test cases use
iPerf3 client and server applications running TCP/IP as a traffic generator.
For performance comparison, the same tests are run with GSO disabled.

## GSO Test Topologies

Two VPP GSO test topologies are implemented:

1. iPerfC_GSOvirtio_LinuxVM --- GSOvhost_VPP_GSOvhost --- iPerfS_GSOvirtio_LinuxVM
   - Tests VPP GSO on vhostuser interfaces and interaction with Linux
     virtio with GSO enabled.
2. iPerfC_GSOtap_LinuxNspace --- GSOtapv2_VPP_GSOtapv2 --- iPerfS_GSOtap_LinuxNspace
   - Tests VPP GSO on tapv2 interfaces and interaction with Linux tap
     with GSO enabled.

Common configuration:

- iPerfC (client) and iPerfS (server) run in TCP/IP mode without an upper
  bandwidth limit.
- Trial duration is set to 30 sec.
- iPerfC, iPerfS and VPP all run within a single SUT node (a quick
  co-location check is sketched below).
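
As a quick co-location sanity check (not part of the methodology itself), the
presence and CPU affinity of the co-resident processes can be listed on the
SUT node; the commands below are an illustrative sketch only:

    $ # List iPerf3 and VPP processes running on the SUT node.
    $ pgrep -a iperf3
    $ pgrep -a vpp
    $ # Show the CPU affinity of the VPP process.
    $ pgrep vpp | xargs -r -n1 taskset -pc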


## VPP GSOtap Topology

### VPP Configuration

VPP GSOtap tests are executed without using hyperthreading. The VPP worker runs
on a single core. Multi-core tests are not executed. Each interface belongs to
a separate Linux network namespace. The following core pinning scheme is used
(a minimal Linux-side setup and placement check is sketched after the list):

- 1t1c (rxq=1, rx_qsz=4096, tx_qsz=4096)
  - system isolated: 0,28,56,84
  - vpp mt:  1
  - vpp wt:  2
  - vhost:   3-5
  - iperf-s: 6
  - iperf-c: 7
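
A minimal sketch of the per-interface namespace setup and of verifying the
placement is shown below. The tap interface name and the single namespace and
address shown are illustrative assumptions (the second tap interface is
configured analogously); the actual provisioning is done by the test framework:

    $ # Move the tap interface into its own namespace and assign the address
    $ # used by the iPerf3 commands below (interface name is assumed).
    $ sudo ip netns add tap1_namespace
    $ sudo ip link set tap1 netns tap1_namespace
    $ sudo ip netns exec tap1_namespace ip addr add 1.1.1.1/24 dev tap1
    $ sudo ip netns exec tap1_namespace ip link set dev tap1 up
    $ # Verify VPP main/worker thread placement and host CPU isolation.
    $ sudo vppctl show threads
    $ cat /sys/devices/system/cpu/isolated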

### iPerf3 Server Configuration

iPerf3 version used: 3.7

    $ sudo -E -S ip netns exec tap1_namespace iperf3 \
        --server --daemon --pidfile /tmp/iperf3_server.pid \
        --logfile /tmp/iperf3.log --port 5201 --affinity <X>

For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
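
Because the server runs as a daemon, its output goes to the log file and its
PID to the pidfile shown above; these can be used to inspect results and to
stop the server between trials, for example:

    $ tail /tmp/iperf3.log
    $ sudo kill "$(cat /tmp/iperf3_server.pid)"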


### iPerf3 Client Configuration

iPerf3 version used: 3.7

    $ sudo -E -S ip netns exec tap1_namespace iperf3 \
        --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
        --time 30.0 --affinity <X> --zerocopy

For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).
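
For automated result collection, iPerf3 can also emit machine-readable output
via --json. Below is a minimal sketch of extracting the received throughput,
assuming jq is available; this is an illustration, not part of the documented
methodology:

    $ sudo -E -S ip netns exec tap1_namespace iperf3 \
        --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
        --time 30.0 --affinity <X> --zerocopy --json \
        | jq '.end.sum_received.bits_per_second'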


## VPP GSOvhost Topology

### VPP Configuration

VPP GSOvhost tests are executed without using hyperthreading. The VPP worker
runs on a single core. Multi-core tests are not executed. The following core
pinning scheme is used (an illustrative affinity check is sketched after the
list):

- 1t1c (rxq=1, rx_qsz=1024, tx_qsz=1024)
  - system isolated: 0,28,56,84
  - vpp mt:  1
  - vpp wt:  2
  - vm-iperf-s: 3,4,5,6,7
  - vm-iperf-c: 8,9,10,11,12
  - iperf-s: 1
  - iperf-c: 1
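
The VM vCPU placement above is handled by the test framework. As an
illustrative check only (the hypervisor process name is an assumption), the
affinity of the running VMs and of VPP can be inspected from the host:

    $ # Show CPU affinity of all threads of each VM process (name assumed).
    $ pgrep -f qemu | xargs -r -n1 taskset -a -pc
    $ # Show VPP main/worker thread placement.
    $ sudo vppctl show threads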

### iPerf3 Server Configuration

iPerf3 version used: 3.7

    $ sudo iperf3 \
        --server --daemon --pidfile /tmp/iperf3_server.pid \
        --logfile /tmp/iperf3.log --port 5201 --affinity <X>

For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).


### iPerf3 Client Configuration

iPerf3 version used: 3.7

    $ sudo iperf3 \
        --client 2.2.2.2 --bind 1.1.1.1 --port 5201 --parallel <Y> \
        --time 30.0 --affinity <X> --zerocopy

For the full iPerf3 reference please see
[iPerf3 docs](https://github.com/esnet/iperf/blob/master/docs/invoking.rst).