Prior to this change, the dpdk plugin assumed xd->device_index
was used both as an index into the internal dpdk_main->devices
array and as the DPDK port index passed to DPDK APIs.
However, when running on top of Failsafe PMDs, the DPDK port
index range may no longer be contiguous (see
http://dpdk.org/ml/archives/dev/2018-March/092375.html
for the related changes in DPDK). Because of this, the dpdk
plugin can no longer iterate through all available DPDK ports
with a 0..rte_eth_dev_count() for loop, and the assumption
about device_index no longer holds.
This is part of the initial effort to enable vpp to run over
dpdk on the Failsafe PMD in Microsoft Azure (3/4).
Change-Id: I416fd80f2d40e12e139f8f3492814da98343eae7
Signed-off-by: Rui Cai <rucai@microsoft.com>
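A minimal sketch of the iteration problem this commit describes,
assuming a DPDK release that provides rte_eth_dev_count() and the
RTE_ETH_FOREACH_DEV iterator; probe_port() is a hypothetical helper
standing in for plugin code, not the actual VPP change:

    #include <rte_ethdev.h>

    /* hypothetical per-port setup hook, stands in for plugin code */
    static void probe_port (uint16_t port_id) { (void) port_id; }

    /* Old pattern: assumes port ids 0..count-1 are all valid.  This
     * breaks once the id space becomes sparse (e.g. Failsafe PMD). */
    static void
    scan_ports_old (void)
    {
      uint16_t i;
      for (i = 0; i < rte_eth_dev_count (); i++)
        probe_port (i);           /* may hit a hole in the id space */
    }

    /* New pattern: walk only the port ids DPDK reports as valid. */
    static void
    scan_ports_new (void)
    {
      uint16_t port_id;
      RTE_ETH_FOREACH_DEV (port_id)
        probe_port (port_id);
    }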
port_id be used for dpdk port_id
Change-Id: Ia7d8cdc5dec2ad658c11f9c0f3ef8005a470ac3c
Signed-off-by: Damjan Marion <damarion@cisco.com>
when flows are enabled on the device
Change-Id: I971764988d5a9e7078468f627205b3fa60736263
Signed-off-by: Eyal Bari <ebari@cisco.com>
Change-Id: If174d189de40e6f9ffae99997bba93a2519d9fda
Signed-off-by: Damjan Marion <damarion@cisco.com>
- use of vlib_log for non-dataplane logging
- redirect of dpdk logs through a unix pipe into vlib_log
- "show dpdk physmem" CLI
Change-Id: I5da70f9c130273072a8cc80d169df31fc216b2c2
Signed-off-by: Damjan Marion <damarion@cisco.com>
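A minimal sketch of the log-redirect idea, assuming DPDK's
rte_openlog_stream() and a unix pipe whose read end is drained by
the plugin; the function and variable names are illustrative, not
the actual VPP code:

    #include <stdio.h>
    #include <unistd.h>
    #include <rte_log.h>

    static int dpdk_log_fds[2];   /* [0] = read end, [1] = write end */

    /* Point DPDK's logger at the write end of a unix pipe. */
    static int
    dpdk_log_pipe_init (void)
    {
      FILE *f;
      if (pipe (dpdk_log_fds) < 0)
        return -1;
      f = fdopen (dpdk_log_fds[1], "w");
      if (f == 0)
        return -1;
      setvbuf (f, 0, _IOLBF, 0);  /* line-buffered: one line per write */
      return rte_openlog_stream (f);  /* DPDK logs now go to the pipe */
    }

    /* Drain the read end (in VPP, from an input node or process) and
     * re-emit each line through the plugin's own logger. */
    static void
    dpdk_log_drain (void)
    {
      char line[1024];
      ssize_t n = read (dpdk_log_fds[0], line, sizeof (line) - 1);
      if (n > 0)
        {
          line[n] = 0;
          /* in VPP this would be a vlib_log (...) call */
          fprintf (stderr, "dpdk: %s", line);
        }
    }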
xd->flags is set incorrectly when a slave link is down in bonded
interface mode. This can result in a VPP crash when data traffic
flows to the interface.
Change-Id: Ideb9f5231db1211e8452c52fde646d681310c951
Signed-off-by: Steve Shin <jonshin@cisco.com>
Change-Id: I3713b4c460a3cd414b560e16aac054aee2e1181b
Signed-off-by: Damjan Marion <damarion@cisco.com>
Also remove DPDK 17.05 support.
Change-Id: I4f96cb3f002cd90b12d800d6904f2364d7c4e270
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: Icaf7d7ad47284aea7a56e8006b69f45874d64202
Signed-off-by: Damjan Marion <damarion@cisco.com>
For a bonded interface in Active/Backup mode (mode 1), we need to
send a GARP/NA packet, if an IP address is present, on slave link
state change to up or down to help with route convergence. The
callback from DPDK happens in a separate thread, so we need to
make sure an RPC call is used to signal the send_garp_na process
in the main thread. We also need to fix DPDK polling so that the
slave links are not polled.
Change-Id: If5fd8ea2d28c54dd28726ac403ad366386ce9651
Signed-off-by: John Lo <loj@cisco.com>
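A minimal sketch of the thread-safety pattern described above,
assuming VPP's vl_api_rpc_call_main_thread() helper; the argument
struct, callback, and event handoff are illustrative, not the
actual change:

    #include <vlibmemory/api.h>

    /* Argument block copied across the RPC boundary. */
    typedef struct
    {
      u32 sw_if_index;   /* interface whose slave link changed */
    } garp_na_rpc_args_t;

    /* Runs in the main thread: safe to signal the send_garp_na
     * process, e.g. via vlib_process_signal_event (...). */
    static void
    garp_na_rpc_callback (garp_na_rpc_args_t * a)
    {
      (void) a;
    }

    /* Called from the DPDK link-state callback, which runs in a
     * non-VPP thread and must not touch vlib state directly. */
    static void
    slave_link_state_changed (u32 sw_if_index)
    {
      garp_na_rpc_args_t args = { .sw_if_index = sw_if_index };
      vl_api_rpc_call_main_thread ((void *) garp_na_rpc_callback,
                                   (u8 *) &args, sizeof (args));
    }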
DPDK 17.08 breaks the ethdev and cryptodev APIs.
Address those changes while keeping backward compatibility with
DPDK 17.02 and 17.05.
Change-Id: Idd6ac264d0d047fe586c41d4c4ca74e8fc778a54
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
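A minimal sketch of the compatibility pattern such a change
typically uses, gating code paths on DPDK's RTE_VERSION macros;
the wrapped function names below are placeholders, not the actual
APIs that changed in 17.08:

    #include <rte_version.h>

    /* Version-gated compatibility shim: build the same source
     * against 17.02, 17.05 and 17.08. */
    #if RTE_VERSION >= RTE_VERSION_NUM(17, 8, 0, 0)
    #define compat_do_something(dev)  new_api_17_08 (dev)
    #else
    #define compat_do_something(dev)  old_api_17_05 (dev)
    #endif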
If a bonded interface is in active-backup mode and configured with
IPv4 and/or IPv6 addresses, then on slave interface link up/down,
send a GARP packet if configured with an IPv4 address and an
unsolicited NA if configured with an IPv6 address. These packets
can help with faster route convergence in the next-hop
router/switch.
Change-Id: I68ccb11a4a40cda414704fa08ee0171c952befa2
Signed-off-by: John Lo <loj@cisco.com>
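For reference, a gratuitous ARP is just an ARP request whose sender
and target protocol addresses are both the interface's own IPv4
address, sent to the broadcast MAC so neighbors refresh their ARP
caches; a minimal sketch of filling such a header with plain C
structs (not VPP's packet infrastructure):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    typedef struct
    {
      uint16_t htype;        /* 1 = Ethernet */
      uint16_t ptype;        /* 0x0800 = IPv4 */
      uint8_t  hlen;         /* 6 */
      uint8_t  plen;         /* 4 */
      uint16_t oper;         /* 1 = request */
      uint8_t  sha[6];       /* sender MAC */
      uint8_t  spa[4];       /* sender IPv4 */
      uint8_t  tha[6];       /* target MAC (zero in a request) */
      uint8_t  tpa[4];       /* target IPv4 */
    } __attribute__ ((packed)) arp_hdr_t;

    /* Fill a gratuitous ARP: sender IP == target IP == our own
     * address, so neighbors update their mappings for that IP. */
    static void
    build_garp (arp_hdr_t * arp, const uint8_t mac[6],
                const uint8_t ip4[4])
    {
      memset (arp, 0, sizeof (*arp));
      arp->htype = htons (1);
      arp->ptype = htons (0x0800);
      arp->hlen = 6;
      arp->plen = 4;
      arp->oper = htons (1);              /* ARP request */
      memcpy (arp->sha, mac, 6);
      memcpy (arp->spa, ip4, 4);
      memcpy (arp->tpa, ip4, 4);          /* tpa == spa: gratuitous */
    }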
Change-Id: Ib390164abb07ca0d38fd49e7e2e6b4e9ea856405
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I0fec86914ec027383ff511b7092beac2363f55f7
Signed-off-by: Damjan Marion <damarion@cisco.com>