QUEUE_SELECT and QUEUE_NOTIFY_OFF registers are shared between all
workers operating on the same device, and operations on them are not atomic.
Type: fix
Change-Id: Ie017b1bfc7e3b6b4e59029f45db78eeffd9f3aeb
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
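A minimal standalone sketch of the serialization this implies, using a
plain pthread mutex and a modeled device struct (fake_virtio_dev_t,
reg_lock and the two fields are assumptions for illustration; the VPP
driver would use its own device state and a clib_spinlock_t). The point
is that select-then-read must not interleave across workers:
  #include <pthread.h>
  #include <stdint.h>

  typedef struct
  {
    pthread_mutex_t reg_lock;           /* assumed per-device lock */
    volatile uint16_t queue_select;     /* models the QUEUE_SELECT register */
    volatile uint16_t queue_notify_off; /* models the QUEUE_NOTIFY_OFF register */
  } fake_virtio_dev_t;

  static uint16_t
  get_queue_notify_off (fake_virtio_dev_t * d, uint16_t qid)
  {
    uint16_t off;
    pthread_mutex_lock (&d->reg_lock);
    d->queue_select = qid;         /* step 1: select the queue */
    off = d->queue_notify_off;     /* step 2: read its notify offset */
    pthread_mutex_unlock (&d->reg_lock);
    return off;
  }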

Type: feature
This patch adds packet buffering on tx for slow backends
which have some jitter/delays in freeing the vrings.
There are some limitations to the current design:
1) It only works in poll mode.
2) At least 1 rx queue of an interface (with buffering
enabled) should be placed on each worker and the main thread.
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Change-Id: Ib93c350298b228e80426e58ac77f3bbc93b8be27
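A rough, self-contained sketch of the buffering idea (illustrative only;
txq_buffering_t, tx_send_or_buffer and TX_BUFFERING_DEPTH are made-up
names, not the actual implementation): packets that do not fit on the
vring because the backend has not yet freed used descriptors are parked
per queue and retried on the next poll, which is why poll mode is
required.
  #include <stdint.h>

  #define TX_BUFFERING_DEPTH 256

  typedef struct
  {
    uint32_t buffered[TX_BUFFERING_DEPTH]; /* parked buffer indices */
    uint32_t n_buffered;
  } txq_buffering_t;

  /* returns how many of the n input buffers were either sent or parked */
  static uint32_t
  tx_send_or_buffer (txq_buffering_t * bq, uint32_t * buffers, uint32_t n,
                     uint32_t ring_space)
  {
    uint32_t n_tx = n < ring_space ? n : ring_space;
    /* ... enqueue the first n_tx buffers on the vring here ... */
    uint32_t i = 0;
    while (n_tx + i < n && bq->n_buffered < TX_BUFFERING_DEPTH)
      bq->buffered[bq->n_buffered++] = buffers[n_tx + i++];
    return n_tx + i;              /* anything beyond this would be dropped */
  }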

Type: fix
Change-Id: Ic07d0ae313b32e420ba93693cb75960a86f752a9
Signed-off-by: Benoît Ganne <bganne@cisco.com>

Type: refactor
tap, virtio and vhost use the virtio/vhost header files from the
linux kernel. Different features are supported on different kernel
versions, making it difficult to use those headers in VPP. This patch
replaces the kernel virtio/vhost header dependencies with local header
files.
Change-Id: I064a8adb5cd9753c986b6f224bb075200b3856af
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
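The shape of the change, illustrated with a hypothetical local header
(the file name and selection of constants are assumptions): instead of
including <linux/virtio_net.h>, whose contents vary by kernel version,
the needed definitions are carried in-tree. The bit values shown are the
standard virtio feature bits.
  /* virtio_std.h (illustrative name) */
  #ifndef __VIRTIO_STD_H__
  #define __VIRTIO_STD_H__

  /* feature bits previously pulled in from <linux/virtio_net.h> */
  #define VIRTIO_NET_F_CSUM        0  /* host handles partial checksums */
  #define VIRTIO_NET_F_GUEST_CSUM  1  /* guest handles partial checksums */
  #define VIRTIO_NET_F_MTU         3  /* device gives initial MTU advice */
  #define VIRTIO_NET_F_MRG_RXBUF  15  /* host can merge receive buffers */

  #endif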

Type: feature
Change-Id: I205f7c146a213d603d9d1e46fcf5195a876608dc
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: refactor
Change-Id: I7342178f9ab9adb99b91a4f984bc22bef2ce8021
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: feature
Change-Id: I5868dd267aa26aa97aec5fd70e70c5956ac52277
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: fix
Change-Id: Ie0cff37b474f8d85a3ae376e0f547a347fb1ad8a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: fix
Change-Id: I3bcc8ff1cf0a828ce3ba112694d38e3287d38d8d
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: improvement
Change-Id: I220ea6ab609e3c1628f5210be441d0d5e825a32c
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: refactor
This patch refactors the existing flags and also adds a new
flag for packet coalescing.
Change-Id: Ic826e4c81313f26d87c475cdf666b06cbed60a3a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
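The shape of that refactor, with purely illustrative names and bit
positions (the real flag names and values live in the tap/virtio API
headers, not here):
  #define EXAMPLE_TAP_FLAG_GSO              (1 << 0)
  #define EXAMPLE_TAP_FLAG_CSUM_OFFLOAD     (1 << 1)
  #define EXAMPLE_TAP_FLAG_PACKET_COALESCE  (1 << 2)  /* new flag added here */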

This matches vhost queues to linux netdev queues and avoids random
packet shuffling across vhost queues on rx.
Change-Id: I9901689d361e440fb0b91c9fbaf8124ce525b316
Type: fix
Signed-off-by: Aloys Augustin <aloaugus@cisco.com>

Type: feature
Change-Id: I699a01ac925fe5c475a36032edb7018618bb4dd4
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: fix
Some vhost backends return used descriptors out of
order. This patch fixes the native virtio driver to
handle out-of-order descriptors.
Change-Id: I57323303349f6a385e412ee22772ab979ae8edbf
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
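A sketch of what handling out-of-order completions means in practice,
assuming the standard vring used-ring layout (used_elem mirrors
vring_used_elem; the desc_to_buffer map and function name are
illustrative): per-descriptor state must be looked up by the completed
id, not by the position in the ring.
  #include <stdint.h>

  struct used_elem { uint32_t id; uint32_t len; }; /* mirrors vring_used_elem */

  static void
  process_used_ring (struct used_elem * ring, uint16_t ring_size,
                     uint16_t * last_used, uint16_t used_idx,
                     uint32_t * desc_to_buffer)
  {
    while (*last_used != used_idx)
      {
        struct used_elem *e = &ring[*last_used & (ring_size - 1)];
        uint32_t bi = desc_to_buffer[e->id]; /* keyed by descriptor id */
        /* ... free/recycle buffer bi and return descriptor e->id to the
           driver's free list, regardless of completion order ... */
        (void) bi;
        (*last_used)++;
      }
  }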

Type: refactor
Change-Id: I897e36bd5db593b417c2bac9f739bc51cf45bc08
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Type: feature
Change-Id: I7dcc8c6911d02729b3bda1b3a21a211c82c3b949
Signed-off-by: Damjan Marion <damarion@cisco.com>

Type: refactor
Change-Id: I34306c1206b2bf5f521be6c6b78074ccf9259a08
Signed-off-by: Damjan Marion <damarion@cisco.com>

Type: docs
Change-Id: I039ba9ad5385452b202366fba0b367506a21ea4f
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>

Type: fix
Ticket: VPP-1766
revert e4ac48e792f4eebfce296cfde844ee73b1abd62f
Change-Id: I03feea4008a47859d570ad8d1d08ff3f30d139ef
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 623a1b7053424b539a51faf866ab839d3da3f45b)

This patch adds support for configuring the host MTU size using the
API, CLI or startup.conf.
Type: feature
Change-Id: I8ab087d82dbe7dedc498825c1a3ea3fcb2cce030
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
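For example, the debug CLI form looks roughly like this (illustrative;
the exact keyword spelling may differ between releases, and equivalent
knobs exist in the API and startup.conf per the message above):
  create tap id 0 host-if-name tap0 host-mtu-size 9000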

In the tap tx routine, virtio_interface_tx_inline, there used to be an
interface spinlock to ensure packets are processed in an orderly fashion:
clib_spinlock_lock_if_init (&vif->lockp);
When the virtio code was introduced in 19.04, that line was changed to
clib_spinlock_lock_if_init (&vring->lockp);
to accommodate multiple queues.
Unfortunately, although the spinlock exists in the vring, it was never
initialized for tap, only for virtio. As a result, many nasty things can
happen when running a tap interface in a multi-thread environment. A crash
is inevitable.
The fix is to initialize vring->lockp for tap and remove vif->lockp as it
is no longer used.
Change-Id: I82b15d3e9b0fb6add9b9ac49bf602a538946634a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit c2c89782d34df0dc7197b18b042b4c2464a101ef)
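A minimal sketch of the fix described above, assuming a per-vring
clib_spinlock_t field named lockp (as referenced in the message); the
helper name and vring type are placeholders, since the real
initialization sits in the tap creation path:
  /* the important line is the clib_spinlock_init call, so that the later
     clib_spinlock_lock_if_init (&vring->lockp) in the tx path actually
     takes a lock on multi-worker setups */
  static void
  example_vring_init_tx_lock (example_vring_t * vring)
  {
    clib_spinlock_init (&vring->lockp);
  }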

Indirect buffers are used to store indirect descriptors
for transmitting large packets.
This patch moves the indirect buffer allocation from
interface creation to the device node. It now allocates
and deallocates buffers during tx for chained buffers.
Change-Id: I55cec208a2a7432e12fe9254a7f8ef84a9302bd5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 55203e745f5e3f1f6c4dbe99d6eab8dee4d13ea6)
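An illustrative outline of the tx-time allocation for a chained packet;
the struct name and the numbered steps are placeholders, not the actual
device-node code:
  #include <stdint.h>

  struct vring_desc_sketch       /* mirrors struct vring_desc */
  {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;              /* e.g. VRING_DESC_F_INDIRECT (0x4) */
    uint16_t next;
  };

  /* at tx time, for a buffer chain of n_segments:
       1. allocate one spare buffer to hold n_segments descriptor entries
          (vlib_buffer_alloc in VPP);
       2. fill that table, one entry per segment in the chain;
       3. point the ring slot at the table and set VRING_DESC_F_INDIRECT;
       4. free the spare buffer once the device reports the slot used. */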

When a container with an attached tap interface is deleted,
Linux also deletes the tap interface, leaving behind the VPP
side of the tap. This patch cleans up that leftover VPP side
of the tap interface.
To reproduce the behavior:
In VPP:
create tap
On linux:
sudo ip netns add ns1
sudo ip link set dev tap0 netns ns1
sudo ip netns del ns1
Change-Id: Iaed1700073a9dc64e626c1d0c449f466c143f3ae
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: Id71ffa77e977651f219ac09d1feef334851209e1
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

This commit adds a "gso" parameter to the existing "create tap..." CLI,
and a "no-gso" parameter for compatibility with the future,
when/if the defaults change.
It makes use of the lowest bit of the "tap_flags" field in the API call
in order to allow creation of GSO interfaces via API as well.
It does the necessary syscalls to enable the GSO
and checksum offload support on the kernel side and sets two flags
on the interface: virtio-specific virtio_if_t.gso_enabled,
and vnet_hw_interface_t.flags & VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO.
The first one, if enabled, triggers the marking of the GSO-encapsulated
packets on ingress with VNET_BUFFER_F_GSO flag, and
setting vnet_buffer2(b)->gso_size to the desired L4 payload size.
VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO determines the egress packet
processing in interface-output for such packets:
When the flag is set, they are sent out almost as usual (just taking
care to set the vnet header for virtio).
When the flag is not enabled (the case for most interfaces),
the egress path performs the re-segmentation such that
the L4 payload of the transmitted packets equals gso_size.
The operations in the datapath are enabled only when there is at least
one GSO-compatible interface in the system - this is done by tracking
the count in interface_main.gso_interface_count. This way the impact
of conditional checks for the setups that do not use GSO is minimized.
"show tap" CLI shows the state of the GSO flag on the interface, and
the total count of GSO-enabled interfaces (which is used to enable
the GSO-related processing in the packet path).
This commit lacks IPv6 extension header traversal support of any kind -
the L4 payload is assumed to follow the IPv6 header. Also it performs
the offloads only for TCP (TSO - TCP segmentation offload).
The UDP fragmentation offload (UFO) is not part of it.
For debug purposes it also adds the debug CLI:
"set tap gso {<interface> | sw_if_index <sw_idx>} <enable|disable>"
Change-Id: Ifd562db89adcc2208094b3d1032cee8c307aaef9
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
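The kernel-side part ("does the necessary syscalls") is done through the
tun/tap offload ioctl; a minimal standalone sketch, assuming the tap fd
is already open and with error handling elided (the TUN_F_* flags and
TUNSETOFFLOAD come from <linux/if_tun.h>):
  #include <sys/ioctl.h>
  #include <linux/if_tun.h>

  static int
  enable_tap_gso (int tap_fd)
  {
    /* checksum offload plus TCP segmentation offload for IPv4 and IPv6 */
    unsigned int offload = TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6;
    return ioctl (tap_fd, TUNSETOFFLOAD, offload);  /* 0 on success */
  }
UFO (TUN_F_UFO) is deliberately not requested, matching the note above
that UDP fragmentation offload is not part of this commit.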

Change-Id: Idd560f3afde1dd03bc3d6fbb2070096146865f50
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: Ifc98373371b967c49a75989eac415ddda1dcf15f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: Ieadf0a97379ed8b17241e454895c4e5e195dc52f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: Id7fccf2f805e578fb05032aeb2b649a74c3c0e56
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: I25b2a28513821bc5eab9ac6890a3964d412b0399
Signed-off-by: Damjan Marion <damarion@cisco.com>

* u32/u64/uword mismatches
* pointer-to-int fixes
* printf formatting issues
* issues with incorrect "ULL" and related suffixes
* structure alignment and padding issues
Change-Id: I70b989007758755fe8211c074f651150680f60b4
Signed-off-by: David Johnson <davijoh3@cisco.com>
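Typical of the class of fixes listed above (an illustrative example, not
a specific hunk from this change): width-correct printf conversions and
a uintptr_t cast instead of truncating a pointer to 32 bits.
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  void
  print_counter (uint64_t n_packets, void *p)
  {
    /* wrong: "%d" expects int, and "(uint32_t) p" truncates on 64-bit:
         printf ("packets %d at %u\n", n_packets, (uint32_t) p);       */
    /* fixed: */
    printf ("packets %" PRIu64 " at 0x%" PRIxPTR "\n",
            n_packets, (uintptr_t) p);
  }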

Change-Id: I373f429c53c6f66ad38322addcfaccddb7761392
Signed-off-by: Damjan Marion <damarion@cisco.com>

This patch teaches worker threads to sleep and to be woken up by the
kernel if there is activity on file descriptors assigned to that thread.
It also adds counters to epoll file descriptors and a new
debug cli 'show unix file'.
Change-Id: Iaf67869f4aa88ff5b0a08982e1c08474013107c4
Signed-off-by: Damjan Marion <damarion@cisco.com>
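In outline, using the plain epoll API (a standalone sketch; the real
worker loop ties this into VPP's file bookkeeping and the 'show unix
file' counters mentioned above, and the function name is illustrative):
  #include <sys/epoll.h>

  static void
  wait_for_work (int epoll_fd, int idle)
  {
    struct epoll_event events[64];
    /* 0 = just poll and return; -1 = sleep until a registered fd is ready */
    int timeout_ms = idle ? -1 : 0;
    int n = epoll_wait (epoll_fd, events, 64, timeout_ms);
    (void) n;  /* dispatch the n ready file descriptors here */
  }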

Buffers may be allocated for indirect descriptors by the tx thread, and
they are freed on the next invocation of the tx thread.
This is to give the recipient (the kernel) a chance to process
them. But if the tap interface is deleted, the tx thread may not yet
have been called to clean up the indirect descriptors' buffers. In that
case, we need to remove them without waiting for the tx thread to be
called. Failure to do so may leak buffers when the tap interface is
deleted.
For the RX ring, leakage also exists for vring->buffers when the interface
is removed.
Change-Id: I3df313a0e60334776b19daf51a9f5bf20dfdc489
Signed-off-by: Steven <sluong@cisco.com>
(cherry picked from commit d8a998e74b815dd3725dfcd80080e4e540940236)
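The shape of the cleanup, as a hedged sketch (the helper name and the
idea that the pending indices sit in a VPP vector are assumptions): on
interface delete, whatever buffer indices are still parked on the tx or
rx vring are handed straight to vlib_buffer_free.
  static void
  vring_cleanup_pending_buffers (vlib_main_t * vm, u32 * pending_buffers)
  {
    /* free everything still referenced by the vring instead of waiting
       for a tx-node invocation that will never come */
    if (vec_len (pending_buffers))
      vlib_buffer_free (vm, pending_buffers, vec_len (pending_buffers));
  }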

Change-Id: I097a738b96a304621520f1842dcac7dbf61a8e3f
Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>

- change interface naming scheme
- rework netlink code
- add option to set link address, namespace
Change-Id: Icf667babb3077a07617b0b87c45c957e345cb4d1
Signed-off-by: Damjan Marion <damarion@cisco.com>

Change-Id: Ided667356d5c6fb9648eb34685aabd6b16a598b7
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Steven Luong <sluong@cisco.com>