.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2017 NXP

DPAA Poll Mode Driver
=====================
The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.
More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
NXP DPAA (Data Path Acceleration Architecture - Gen 1)
------------------------------------------------------
This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK.
Contents summary
- DPAA overview
- DPAA driver architecture overview
.. _dpaa_overview:
DPAA Overview
~~~~~~~~~~~~~
Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and the accelerators
themselves.
DPAA includes:
- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate flow of packets between the components above
Infrastructure components are:
- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
It allows CPUs and other accelerators connected to the SoC datapath to
enqueue and dequeue Ethernet frames, thus providing the infrastructure for
data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.
Hardware accelerators are:
- SEC - Cryptographic accelerator
- PME - Pattern matching engine
The Network and packet I/O component:
- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
DPAA infrastructure (QMan and BMan). FMan is responsible for packet
distribution and policing. Each frame can be parsed and classified, and the
results may be attached to the frame. This metadata can be used to select
the particular QMan queue to which the packet is forwarded.
DPAA DPDK - Poll Mode Driver Overview
-------------------------------------
This section provides an overview of the drivers for DPAA:
* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).
A brief description of each driver is provided in the layout below as well as
in the following sections.
.. code-block:: console
+------------+
| DPDK DPAA |
| PMD |
+-----+------+
|
+-----+------+ +---------------+
: Ethernet :.......| DPDK DPAA |
. . . . . . . . . : (FMAN) : | Mempool driver|
. +---+---+----+ | (BMAN) |
. ^ | +-----+---------+
. | |<enqueue, .
. | | dequeue> .
. | | .
. +---+---V----+ .
. . . . . . . . . . .: Portal drv : .
. . : : .
. . +-----+------+ .
. . : QMAN : .
. . : Driver : .
+----+------+-------+ +-----+------+ .
| DPDK DPAA Bus | | .
| driver |....................|.....................
| /bus/dpaa | |
+-------------------+ |
|
========================== HARDWARE =====|========================
PHY
=========================================|========================
In the above representation, solid lines represent components which interface
with the DPDK RTE Framework and dotted lines represent DPAA internal components.
DPAA Bus driver
~~~~~~~~~~~~~~~
The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus
for DPAA devices.
Key functions include:
- Scanning and parsing the various objects and adding them to their respective
device list.
- Performing probe for available drivers against each scanned device
- Creating the necessary Ethernet instance before passing control to the PMD
DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~
The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and the DPAA internal components/drivers.
- Once devices have been identified by DPAA Bus, each device is associated
with the PMD
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower-level QMan and FMan blocks.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
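Because each FMAN port is exposed as a standard DPDK port, an application
drives it through the usual ethdev burst APIs. The following is a minimal,
illustrative polling sketch; the port id, queue id and burst size are
assumptions, not DPAA specifics:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mbuf.h>

   /* Illustrative forwarding loop: poll Rx queue 0 of a DPAA port and echo
    * the frames back out of the same port. */
   static void
   poll_port(uint16_t port_id)
   {
           struct rte_mbuf *pkts[32];
           uint16_t nb_rx, nb_tx, i;

           for (;;) {
                   nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
                   if (nb_rx == 0)
                           continue;

                   nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

                   /* Free any frames the Tx queue could not accept. */
                   for (i = nb_tx; i < nb_rx; i++)
                           rte_pktmbuf_free(pkts[i]);
           }
   }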
Features
^^^^^^^^
Features of the DPAA PMD are:
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode
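For instance, multiple Rx queues with RSS can be requested through the
standard ethdev configuration API. The sketch below is an outline only: the
queue counts, descriptor counts and RSS hash fields are illustrative
assumptions, using the ``ETH_``-prefixed macros of this DPDK generation.

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mempool.h>

   /* Illustrative port bring-up: 4 Rx queues with RSS on IP fields, 1 Tx queue. */
   static int
   dpaa_port_init(uint16_t port_id, struct rte_mempool *mbuf_pool)
   {
           struct rte_eth_conf conf = { 0 };
           const uint16_t nb_rxq = 4, nb_txq = 1;
           uint16_t q;
           int ret;

           conf.rxmode.mq_mode = ETH_MQ_RX_RSS;           /* spread flows across Rx queues */
           conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP; /* hash on IP addresses */

           ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
           if (ret < 0)
                   return ret;

           for (q = 0; q < nb_rxq; q++) {
                   ret = rte_eth_rx_queue_setup(port_id, q, 128,
                                   rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
                   if (ret < 0)
                           return ret;
           }

           ret = rte_eth_tx_queue_setup(port_id, 0, 128,
                           rte_eth_dev_socket_id(port_id), NULL);
           if (ret < 0)
                   return ret;

           return rte_eth_dev_start(port_id);
   }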
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
Manager.
- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service each mempool creation, deletion, buffer allocation and
  deallocation request.
- Each FMAN instance has a BMan pool attached to it during initialization.
Each Tx frame can be automatically released by hardware, if allocated from
this pool.
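As a rough illustration, the application needs no DPAA specific calls to use
this driver: with ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` set to ``dpaa`` (see
the configuration section below), a standard packet mempool is transparently
backed by BMan. The pool name and sizes below are assumptions:

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mbuf.h>
   #include <rte_mempool.h>

   /* Illustrative init-path snippet: with the default mempool ops set to
    * "dpaa", the buffers of this pool are managed by a BMan hardware pool. */
   static struct rte_mempool *
   create_pktmbuf_pool(void)
   {
           return rte_pktmbuf_pool_create("pktmbuf_pool",
                           8192,                      /* number of mbufs */
                           256,                       /* per-lcore cache size */
                           0,                         /* application private area */
                           RTE_MBUF_DEFAULT_BUF_SIZE, /* data room per mbuf */
                           rte_socket_id());
   }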
Whitelisting & Blacklisting
---------------------------
To blacklist a DPAA device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"
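For example, to hide the ``fm1-mac4`` interface from a testpmd run (the binary
path and EAL arguments are taken from the sample run further below and are
only illustrative):

.. code-block:: console

   ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 -b "dpaa_bus:fm1-mac4" -- -i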
Supported DPAA SoCs
-------------------
- LS1043A/LS1023A
- LS1046A/LS1026A
Prerequisites
-------------
See :doc:`../platform/dpaa` for setup information.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to set up the basic DPDK environment.
.. note::

   Some parts of the dpaa bus code (qbman and fman library routines) are
   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in
   userspace.
Pre-Installation Configuration
------------------------------
Config File Options
~~~~~~~~~~~~~~~~~~~
The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)

  Toggle compilation of the ``librte_bus_dpaa`` driver. By default it is
  enabled only in the defconfig_arm64-dpaa-* configs.

- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)

  Toggle compilation of the ``librte_pmd_dpaa`` driver. By default it is
  enabled only in the defconfig_arm64-dpaa-* configs.

- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)

  Toggles display of bus configurations and enables a debugging queue to
  fetch error (Rx/Tx) packets to the driver. By default, packets with errors
  (like a wrong checksum) are dropped by the hardware.

- ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)

  Enables debugging of the Queue and Buffer Manager layer which interacts
  with the DPAA hardware.

- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)

  This is not a DPAA specific configuration - it is a generic RTE config.
  For optimal performance and hardware utilization, it is expected that the
  DPAA mempool driver is used for mempools. For that, this configuration
  needs to be enabled.
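These options are already enabled in the DPAA default configs. Assuming the
legacy make-based build system of this DPDK version, a DPAA-enabled build can
therefore be produced with something like:

.. code-block:: console

   make install T=arm64-dpaa-linuxapp-gcc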
Environment Variables
~~~~~~~~~~~~~~~~~~~~~
The DPAA drivers use the following environment variables to configure their
state during application initialization:
- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. The hardware distributes received packets across this many queues.
  If the application is configured to use fewer queues than this value, it
  might result in packet loss (because of the distribution).
- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high performance queues to be used for ethdev
  Rx. These queues use one private HW portal per configured queue, so they
  are limited in the system. The first configured ethdev queues are
  automatically assigned from these high performance PUSH queues; any queue
  configuration beyond that uses standard Rx queues. The application can
  choose to change their number if HW portals are limited. The valid values
  are from '0' to '4'. The value shall be set to '0' if the application
  wants to use eventdev with the DPAA device.
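As an illustration, the variables are simply exported in the shell before
starting the application. The testpmd options below are assumptions chosen to
match four Rx queues per port, with PUSH queues disabled as would be required
for eventdev usage:

.. code-block:: console

   export DPAA_NUM_RX_QUEUES=4
   export DPAA_PUSH_QUEUES_NUMBER=0
   ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 -- -i --rxq=4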
Driver compilation and testing
------------------------------
Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
#. Running testpmd:
Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd.
Example output:
.. code-block:: console
./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
-- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
.....
EAL: Registered [pci] bus.
EAL: Registered [dpaa] bus.
EAL: Detected 4 lcore(s)
.....
EAL: dpaa: Bus scan completed
.....
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:01
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:02
.....
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
Limitations
-----------
Platform Requirement
~~~~~~~~~~~~~~~~~~~~
DPAA drivers for DPDK can only work on NXP SoCs as listed in the
``Supported DPAA SoCs``.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo
frames). This value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.
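For reference, jumbo frames are requested through the generic ethdev
configuration. The following is a minimal sketch assuming the ``rxmode``
layout of this DPDK generation; the 1/1 queue counts are arbitrary:

.. code-block:: c

   #include <rte_ethdev.h>

   /* Illustrative: request jumbo frames up to the fixed DPAA limit. */
   static int
   enable_jumbo(uint16_t port_id)
   {
           struct rte_eth_conf conf = { 0 };

           conf.rxmode.jumbo_frame = 1;        /* accept frames above the default size */
           conf.rxmode.max_rx_pkt_len = 10240; /* DPAA hardware maximum */

           return rte_eth_dev_configure(port_id, 1, 1, &conf);
   }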
Multiprocess Support
~~~~~~~~~~~~~~~~~~~~
The current version of the DPAA driver doesn't support multi-process
applications where I/O is performed using secondary processes. This feature
will be implemented in subsequent versions.