In case of tx success after multiple retries, the last buffers to be
enqueued would be both enqueued for tx and freed.
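The corrected loop must exclude anything already handed to the device
from the final free. A minimal sketch, assuming a hypothetical helper
try_tx_enqueue() that returns the number of buffers the device accepted:
  /* hypothetical sketch, not the actual plugin code */
  u32 n_left = n_buffers;
  while (n_left && retries--)
    {
      u16 n_sent = try_tx_enqueue (dev, buffers, n_left);
      buffers += n_sent;  /* never free what was just enqueued */
      n_left -= n_sent;
    }
  if (n_left)  /* free only buffers the device never accepted */
    vlib_buffer_free (vm, buffers, n_left);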
Type: fix
Fixes: 211ef2eb24
Change-Id: I57d218cff58b74c1f3d6dc5722624327f0821758
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
Multiple API message handlers call vnet_get_sup_hw_interface(...)
without checking the inbound sw_if_index. This can cause a
pool_elt_at_index ASSERT in a debug image, and major disorder in a
production image.
Given that a number of places are coded as follows, add an
"api_visible_or_null" variant of vnet_get_sup_hw_interface, which
returns NULL given an invalid sw_if_index, or a hidden sw interface:
- hw = vnet_get_sup_hw_interface (vnm, sw_if_index);
+ hw = vnet_get_sup_hw_interface_api_visible_or_null (vnm, sw_if_index);
if (hw == NULL || memif_device_class.index != hw->dev_class_index)
return clib_error_return (0, "not a memif interface");
Rename two existing xxx_safe functions -> xxx_or_null to make it
obvious what they return.
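A sketch of the guarded lookup (simplified; details may differ from the
tree):
  static inline vnet_hw_interface_t *
  vnet_get_sup_hw_interface_api_visible_or_null (vnet_main_t * vnm,
                                                 u32 sw_if_index)
  {
    vnet_sw_interface_t *si;
    /* reject indices that do not map to a live pool entry */
    if (pool_is_free_index (vnm->interface_main.sw_interfaces, sw_if_index))
      return NULL;
    si = vnet_get_sup_sw_interface (vnm, sw_if_index);
    /* hidden interfaces are not API-visible */
    if (si->flags & VNET_SW_INTERFACE_FLAG_HIDDEN)
      return NULL;
    return vnet_get_hw_interface (vnm, si->hw_if_index);
  }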
Type: fix
Change-Id: I29996e8d0768fd9e0c5495bd91ff8bedcf2c5697
Signed-off-by: Dave Barach <dave@barachs.net>
|
The fast path almost always has to deal with the real
pointers. Deriving the frame pointer from a frame_index requires a
load of the 32-bit frame_index from memory, another 64-bit load of the
heap base pointer and some arithmetic.
Let's store the full pointer instead and do a single 64-bit load only.
This also avoids problems when the heap is grown and frames are
allocated below vm->heap_aligned_base.
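Illustrated with hypothetical field names:
  /* before: 32-bit index load, heap base load, shift and add */
  f = (vlib_frame_t *)
    (vm->heap_aligned_base + ((u64) nf->frame_index << frame_align_shift));
  /* after: a single 64-bit load of the stored pointer */
  f = nf->frame;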
Type: refactor
Change-Id: Ifa6e6e984aafe1e2755bff80f0a4dfcddee3623c
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
rdma interfaces filter packets by MAC address by default, so that the
physical interface can be shared between multiple users (e.g. VPP and
Linux).
When configured in promiscuous mode, all packets go to this interface
regardless of their MAC address, and the other interfaces sharing the
port stop receiving packets for as long as promiscuous mode is on.
Promiscuous mode is needed (and automatically turned on) for the L2
path (l2patch, xconnect, bridge...).
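In ibverbs terms, per-MAC filtering is a flow steering rule; a rough
sketch (assumed shape, not the plugin's exact code) of the unicast rule
that promiscuous mode replaces with a catch-all:
  /* assumed sketch: steer frames matching the interface MAC to our qp */
  struct raw_flow
  {
    struct ibv_flow_attr attr;
    struct ibv_flow_spec_eth eth;
  } fr = {
    .attr = { .type = IBV_FLOW_ATTR_NORMAL,
              .size = sizeof (fr),
              .num_of_specs = 1,
              .port = 1 },
    .eth = { .type = IBV_FLOW_SPEC_ETH,
             .size = sizeof (struct ibv_flow_spec_eth),
             .val.dst_mac = { 0 },  /* interface MAC goes here */
             .mask.dst_mac = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } },
  };
  struct ibv_flow *flow = ibv_create_flow (qp, &fr.attr);
  /* promiscuous: install a rule with an all-zero dst_mac mask instead,
     so every frame matches */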
Change-Id: I4c0eb4421f51d116e635e7828d00f202f4a97ded
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
Change-Id: Ida681d299fd57eba66338444b99d2476bdb3c695
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
- Make plugin descriptions more consistent
so the output of "show plugin" can be
used in the wiki.
Change-Id: I4c6feb11e7dcc5a4cf0848eed37f1d3b035c7dda
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
|
Add multiqueue support for RDMA devices.
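A condensed sketch of the per-queue verbs objects (assumed shape): one
completion queue and one receive work queue per rx queue, with an RSS
indirection table spreading flows across them.
  /* assumed sketch, error handling elided */
  for (int i = 0; i < n_rx_queues; i++)
    {
      cq[i] = ibv_create_cq (ctx, rxq_size, NULL, NULL, 0);
      struct ibv_wq_init_attr wqia = { .wq_type = IBV_WQT_RQ,
                                       .max_wr = rxq_size,
                                       .max_sge = 1,
                                       .pd = pd,
                                       .cq = cq[i] };
      wq[i] = ibv_create_wq (ctx, &wqia);
    }
  /* hash incoming flows across the receive work queues */
  struct ibv_rwq_ind_table_init_attr rita = {
    .log_ind_tbl_size = log2_n_rx_queues,  /* table size must be 2^k */
    .ind_tbl = wq,
  };
  struct ibv_rwq_ind_table *rit = ibv_create_rwq_ind_table (ctx, &rita);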
Change-Id: I78a2481cec6747494c670cc776475828be3af55b
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
rx: add batching for work completion (WC) processing and release
tx: improve batching for WC submission and processing
rdma-core: compile in release mode to remove assert()
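Batched completion processing means polling many work completions per
call; a minimal sketch with assumed queue names:
  /* drain up to VLIB_FRAME_SIZE work completions in one call */
  struct ibv_wc wc[VLIB_FRAME_SIZE];
  int n = ibv_poll_cq (rxq->cq, VLIB_FRAME_SIZE, wc);
  for (int i = 0; i < n; i++)
    {
      if (PREDICT_FALSE (wc[i].status != IBV_WC_SUCCESS))
        continue;  /* error handling elided */
      /* wc[i].wr_id identifies the buffer, wc[i].byte_len its length */
    }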
Change-Id: I5fb8736db36b50f8b758cd688100477b67e72d80
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
Tx stats are no longer counted twice.
Submit tx packets as a single batch per vector instead of per packet.
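Batching here means chaining the work requests and posting them with
one ibv_post_send() call; a sketch with assumed variable names:
  /* one ibv_post_send for the whole vector */
  struct ibv_send_wr wr[VLIB_FRAME_SIZE], *bad_wr;
  struct ibv_sge sge[VLIB_FRAME_SIZE];
  for (int i = 0; i < n_tx; i++)
    {
      sge[i] = (struct ibv_sge) { .addr = pointer_to_uword (data[i]),
                                  .length = len[i],
                                  .lkey = mr->lkey };
      wr[i] = (struct ibv_send_wr) { .wr_id = bi[i],
                                     .next = i + 1 < n_tx ? &wr[i + 1] : NULL,
                                     .sg_list = &sge[i],
                                     .num_sge = 1,
                                     .opcode = IBV_WR_SEND };
    }
  ibv_post_send (txq->qp, wr, &bad_wr);
  /* increment tx counters once here, for the whole batch */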
Change-Id: I26820b21f23842b3a67ace0b939095f3550d3856
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
Change-Id: Ic6244511b88bdd42756f74e3163a70b8014e8547
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
Change-Id: I0b996460e05c40e74766563fb2a94c62a65063ce
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
RDMA ibverbs is a userspace API to efficiently rx/tx packets. This is an
initial, unoptimized driver targeting Mellanox cards.
Next steps should include batching, multiqueue and support for
additional cards.
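For reference, the bare-bones verbs bring-up looks roughly like this
(simplified sketch, error handling elided):
  #include <infiniband/verbs.h>

  int num;
  struct ibv_device **devs = ibv_get_device_list (&num);
  struct ibv_context *ctx = ibv_open_device (devs[0]); /* select by name in practice */
  struct ibv_pd *pd = ibv_alloc_pd (ctx);
  /* register buffer memory so the NIC can DMA to/from it */
  struct ibv_mr *mr = ibv_reg_mr (pd, base, size, IBV_ACCESS_LOCAL_WRITE);
  ibv_free_device_list (devs);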
Change-Id: I0309c7a543f75f2f9317eaf63ca502ac7a093ef9
Signed-off-by: Benoît Ganne <bganne@cisco.com>