In a rare event, after the vhost protocol message exchange has finished and
the interface has been brought up successfully, the driver may still change
its mind about the memory regions by sending new memory maps via
SET_MEM_TABLE. Upon processing SET_MEM_TABLE, VPP invalidates the old memory
regions and the descriptor tables, but it does not re-compute the new
descriptor tables based on the new memory maps. Since VPP no longer has
valid descriptor tables, it does not read packets from the vring.
In the normal working case, after SET_MEM_TABLE, the driver follows up with
SET_VRING_ADDRESS, from which VPP computes the descriptor tables.
The fix is to stash away the descriptor table addresses from
SET_VRING_ADDRESS and re-compute the new descriptor tables when processing
SET_MEM_TABLE, if the descriptor table addresses are known.
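A minimal sketch of the idea, assuming a stashed desc_user_addr per vring
and the map_guest_mem helper (field and variable names here are
illustrative, not the exact patch):
  /* SET_VRING_ADDRESS: stash the guest address of the descriptor table */
  vui->vrings[idx].desc_user_addr = msg.addr.desc_user_addr;
  /* SET_MEM_TABLE: after installing the new regions, re-map any
     descriptor table whose guest address is already known */
  for (q = 0; q < num_qid; q++)
    {
      vhost_user_vring_t *vq = &vui->vrings[q];
      if (vq->desc_user_addr)
        vq->desc = map_guest_mem (vui, vq->desc_user_addr, &map_hint);
    }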
Type: fix
Ticket: VPP-1784
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I3361f14c3a0372b8d07943eb6aa4b3a3f10708f9
(cherry picked from commit 61b8ba69f7a9540ed00576504528ce439f0286f5)
|
|
Type: feature
Change-Id: I913f08383ee1c24d610c3d2aac07cef402570e2c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
Ticket: VPP-1766
revert e4ac48e792f4eebfce296cfde844ee73b1abd62f
Change-Id: I03feea4008a47859d570ad8d1d08ff3f30d139ef
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 623a1b7053424b539a51faf866ab839d3da3f45b)
|
|
vlib_increment_combined_counter takes a sw_if_index, not a hw_if_index. Using
the hw_if_index may appear to work as long as no subinterface has been created
to cause the hw_if_index and sw_if_index to differ.
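For illustration, the corrected call follows the usual pattern (the
surrounding variables are assumed):
  vlib_increment_combined_counter
    (vnm->interface_main.combined_sw_if_counters + VNET_INTERFACE_COUNTER_RX,
     thread_index, sw_if_index /* not hw_if_index */, n_packets, n_bytes);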
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I6db042186eeeacf32250f7ef261af8cd6f5ce56e
|
|
Set VNET_HW_INTERFACE_FLAG_SUPPORTS_TX_L4_CKSUM_OFFLOAD on the interface
to skip checksum calculation when the guest supports checksum offload.
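A sketch of the change, with a hypothetical predicate standing in for the
negotiated guest feature check:
  if (guest_supports_csum (vui))   /* hypothetical feature check */
    {
      vnet_hw_interface_t *hw = vnet_get_hw_interface (vnm, vui->hw_if_index);
      hw->flags |= VNET_HW_INTERFACE_FLAG_SUPPORTS_TX_L4_CKSUM_OFFLOAD;
    }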
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: Ie933c3462394f07580ef7f2bec1d2eb3b075bd0c
|
|
Previously, PG and virtio interfaces calculated the wrong l3 and l4
header offsets. This patch fixes the issue.
Type: fix
Ticket: VPP-1739
Change-Id: I5ba978e464babeb65e0711e1027320d46b3b9932
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
After a trace is collected, if the interface is then deleted, show trace
may crash in a debug image. This is due to the additional check in
pool_elt_at_index() which makes sure that the pool element is not free.
The fix is to do the check in the vhost trace format function and return
"interface deleted" instead.
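A minimal sketch of the guard (the pool and trace-record names are
assumptions):
  u8 *
  format_vhost_trace (u8 * s, va_list * va)
  {
    ...
    if (pool_is_free_index (vum->vhost_user_interfaces, t->device_index))
      return format (s, "interface deleted");
    vui = pool_elt_at_index (vum->vhost_user_interfaces, t->device_index);
    ...
  }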
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I0744f913ba6146609663443f408d784067880f93
|
|
Type: fix
Ticket: VPP-1727
Change-Id: Icfee35c5ab5e1c65079d1ca7bb514162319113e5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
map_guest_mem may return NULL. Coverity complains about calls that do not
check its return value. Simple stuff.
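The fix is just the obvious guard at each call site (variable names
assumed):
  desc = map_guest_mem (vui, addr, &map_hint);
  if (PREDICT_FALSE (desc == 0))
    {
      /* count the error and bail out instead of dereferencing NULL */
      return;
    }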
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I0626115f4951a88f23d9792f0232fb57c132fbc2
|
|
Type: fix
1. Add option '[gso-enabled]' in cli 'create interface virtio'
2. Add gso information in virtio_show()
Change-Id: I4eb58f4421325ef54a6a68c8341b3a6d3d68136a
Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
|
|
Add a gso option to the create vhost interface command to support gso and
checksum offload.
Tested with the following startup options in qemu:
csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ufo=on,
host_tso4=on,host_tso6=on,host_ufo=on
Type: feature
Change-Id: I9ba1ee33677a694c4a0dfe66e745b098995902b8
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
Multiple API message handlers call vnet_get_sup_hw_interface(...)
without checking the inbound sw_if_index. This can cause a
pool_elt_at_index ASSERT in a debug image, and major disorder in a
production image.
Given that a number of places are coded as follows, add an
"api_visible_or_null" variant of vnet_get_sup_hw_interface, which
returns NULL given an invalid sw_if_index, or a hidden sw interface:
  - hw = vnet_get_sup_hw_interface (vnm, sw_if_index);
  + hw = vnet_get_sup_hw_interface_api_visible_or_null (vnm, sw_if_index);
    if (hw == NULL || memif_device_class.index != hw->dev_class_index)
      return clib_error_return (0, "not a memif interface");
Rename two existing xxx_safe functions -> xxx_or_null to make it
obvious what they return.
Type: fix
Change-Id: I29996e8d0768fd9e0c5495bd91ff8bedcf2c5697
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Some combinations of new qemu (2.11) and old dpdk (16.10) may
send VHOST_USER_SET_FEATURES at the end of the protocol exchange,
by which point the vhost interface has already been declared up and ready.
Unfortunately, processing VHOST_USER_SET_FEATURES causes
the interface to go down. Not sure if that is correct or needed.
Because there are no additional messages thereafter, the hardware
interface stays down.
The fix is to check the interface again at the end of processing
VHOST_USER_SET_FEATURES. If it is up and ready, we bring back
the hardware interface.
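A sketch of the re-check at the end of the handler (the readiness test is
a hypothetical stand-in):
  case VHOST_USER_SET_FEATURES:
    ...
    /* qemu may send SET_FEATURES after the interface is already up;
       re-evaluate and restore the link if everything is still ready */
    if (vhost_user_intf_ready (vui))   /* hypothetical readiness check */
      vnet_hw_interface_set_flags (vnm, vui->hw_if_index,
                                   VNET_HW_INTERFACE_FLAG_LINK_UP);
    break;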
Type: fix
Change-Id: I490cd03820deacbd8b44d8f2cb38c26349dbe3b2
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
The fast path almost always has to deal with the real
pointers. Deriving the frame pointer from a frame_index requires a
load of the 32-bit frame_index from memory, another 64-bit load of the
heap base pointer, and some calculations.
Let's store the full pointer instead and do a single 64-bit load only.
This also helps avoid problems when the heap is grown and frames are
allocated below vm->heap_aligned_base.
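Schematically (illustrative, not the exact patch):
  /* before: two dependent loads plus address arithmetic */
  vlib_frame_t *f =
    (vlib_frame_t *) (heap_base + ((uword) nf->frame_index << shift));
  /* after: a single 64-bit load of the stored pointer */
  vlib_frame_t *f = nf->frame;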
Type: refactor
Change-Id: Ifa6e6e984aafe1e2755bff80f0a4dfcddee3623c
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Memory is dirt cheap. But there is no need to throw it away.
Type: fix
Change-Id: I155130ab3c435b1c04d7c0e9f54795b8de9383d9
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
Type: fix
Fixes: 8389fb9
Change-Id: I31076db78507736631609146d4cca28597aca704
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
This patch adds support for configuring the host MTU size using the
API, CLI, or startup.conf.
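For example, a plausible CLI invocation (the exact parameter name,
host-mtu-size, is an assumption):
  create tap id 0 host-mtu-size 9000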
Type: feature
Change-Id: I8ab087d82dbe7dedc498825c1a3ea3fcb2cce030
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
In the tap tx routine, virtio_interface_tx_inline, there used to be an
interface spinlock to ensure packets are processed in an orderly fashion:
  clib_spinlock_lock_if_init (&vif->lockp);
When virtio code was introduced in 19.04, that line was changed to
  clib_spinlock_lock_if_init (&vring->lockp);
to accommodate multi-queues.
Unfortunately, although the spinlock exists in the vring, it was never
initialized for tap, only for virtio. As a result, many nasty things can
happen when running a tap interface in a multi-thread environment. A crash
is inevitable.
The fix is to initialize vring->lockp for tap and to remove vif->lockp, as
it is not used anymore.
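The missing piece is a one-liner in the tap vring setup path (the exact
location is an assumption):
  /* the tx path locks vring->lockp, so it must be initialized */
  clib_spinlock_init (&vring->lockp);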
Change-Id: I82b15d3e9b0fb6add9b9ac49bf602a538946634a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit c2c89782d34df0dc7197b18b042b4c2464a101ef)
|
|
Change-Id: I7b735f5a540e8c278bac88245acb3f8c041c49c0
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Indirect buffers are used to store indirect descriptors
to transmit big packets.
This patch moves the indirect buffer allocation from
interface creation to the device node. It now allocates
or deallocates buffers during tx for chained buffers.
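A sketch of the per-tx allocation for a chained packet (the error path and
descriptor filling are elided; names are illustrative):
  if (b->flags & VLIB_BUFFER_NEXT_PRESENT)   /* chained packet */
    {
      u32 indirect_bi;
      if (PREDICT_FALSE (vlib_buffer_alloc (vm, &indirect_bi, 1) == 0))
        goto drop;   /* hypothetical error path */
      vlib_buffer_t *ib = vlib_get_buffer (vm, indirect_bi);
      /* fill ib->data with one vring descriptor per chain segment ... */
    }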
Change-Id: I55cec208a2a7432e12fe9254a7f8ef84a9302bd5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 55203e745f5e3f1f6c4dbe99d6eab8dee4d13ea6)
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
  static clib_error_t *init_runs_first (vlib_main_t *vm)
  {
    clib_error_t *error;

    ... do some stuff...

    if ((error = vlib_call_init_function (init_runs_next)))
      return error;
    ...
  }
  VLIB_INIT_FUNCTION (init_runs_first);
and
  static clib_error_t *init_runs_first (vlib_main_t *vm)
  {
    ... do some stuff...
  }
  VLIB_INIT_FUNCTION (init_runs_first) =
  {
    .runs_before = VLIB_INITS("init_runs_next"),
  };
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ifb16351f39e5eb2cd154e70a1c96243e4842e80d
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I0ffb468aef56f5fd223218a83425771595863666
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
A native virtio device using the legacy driver cannot support a configurable queue size.
Change-Id: I76c446a071bef8a469873010325d830586aa84bd
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I73f76c25754f6fb14a49ae47b6404f3cbabbeeb5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
When a container that has a tap interface attached is deleted, Linux
also deletes the tap interface, leaving the VPP side of the tap behind.
This patch does a clean-up job to remove that VPP side of the tap
interface.
To reproduce the behavior:
In VPP:
  create tap
On Linux:
  sudo ip netns add ns1
  sudo ip link set dev tap0 netns ns1
  sudo ip netns del ns1
Change-Id: Iaed1700073a9dc64e626c1d0c449f466c143f3ae
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Some API action handlers called vl_msg_api_send_shmem()
directly. That breaks the Unix domain socket API transport.
A couple (bond / vhost) also tried to send a sw_interface_event
directly, but did not send the message to all clients that had
registered interest. That scheme never worked correctly.
Refactored and improved the interface event code.
Change-Id: Idb90edfd8703c6ae593b36b4eeb4d3ed7da5c808
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Change-Id: I215e1e0208a073db80ec6f87695d734cf40fabe3
Signed-off-by: Jim Thompson <jim@netgate.com>
|
|
Change-Id: I7c6e4bf2abf08193e54a736510c07eeacd6aebe7
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Id71ffa77e977651f219ac09d1feef334851209e1
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
This reverts commit e63325e3ca03c847963863446345e6c80a2c0cfd.
Allow time for CSIT to accommodate.
Change-Id: I59435e4ab5e05e36a2796c3bf44889b5d4823cc2
Signed-off-by: ot@cisco.com
|
|
Use of consistent API types for interface.api
Change-Id: Ieb54cebb4ac96b432a3f0b41596718aa2f34885b
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
Fix a typo in vhost_user_rx_discard_packet which may cause
txvq->last_avail_idx to go wild.
Change-Id: Ifaeb58835dff9b7ea82c061442722f1dcaa5d9a4
Signed-off-by: Steven Luong <sluong@cisco.com>
(cherry picked from commit 39382976701926c1f34191c1311829c15a53cb01)
|
|
Change-Id: I8819bcb9e228e7a432f4a7b67b6107f984927cd4
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I911fb3f1c6351b37580c5dbde6939a549431a92d
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
map_guest_mem may be called from a worker thread in the dataplane. It has
a call to vlib_log and may crash inside vlib_log's ASSERT statement:
  /* make sure we are running on the main thread to avoid use in dataplane
     code, for dataplane logging consider use of event-logger */
  ASSERT (vlib_get_thread_index () == 0);
The fix is to convert the vlib_log call in map_guest_mem to use the event
logger.
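A sketch of the worker-safe replacement using the vlib event-logger macros
(the message text and logged data are illustrative):
  ELOG_TYPE_DECLARE (el) =
    {
      .format = "map_guest_mem: map failed, region %d",
      .format_args = "i4",
    };
  struct { u32 region; } *ed;
  ed = ELOG_DATA (&vlib_global_main.elog_main, el);
  ed->region = i;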
Change-Id: Iaaf6d86782aa8a18d25e0209f22dc31f04668d56
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
Change-Id: I01c4f5755d579282773ac227b0bc24f8ddbb2bd1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Symptom
-------
With NDR traffic blasting at VPP, bringing up a new VM with vhost
connection to VPP causes packet drops. I am able to recreate this
problem easily using a simple setup like this.
TREX-------------- switch ---- VPP
|---------------| |-------|
Cause
-----
The reason for the packet drops is that vhost holds the worker
barrier lock for too long in vhost_user_socket_read(). There are quite a
few system calls inside the routine. At the end of the routine, it
unconditionally calls vhost_user_update_iface_state() for all message
types. vhost_user_update_iface_state() also unconditionally calls
vhost_user_rx_thread_placement() and vhost_user_tx_thread_placement().
vhost_user_rx_thread_placement() scraps all existing cpu/queue mappings
for the interface and creates brand new cpu/queue mappings for the
interface. This process is very disruptive and very expensive. In my
opinion, this area of code needs a makeover.
Fixes
-----
* vhost_user_socket_read() is rewritten so that it does not hold
onto the worker barrier lock across system calls, or at least minimizes
the need for doing so.
* Remove the call to vhost_user_update_iface_state() as a default route at
the end of vhost_user_socket_read(). Only a couple of message types
really need to call vhost_user_update_iface_state(); the call is added
for just those message types, as sketched below.
* Remove vhost_user_rx_thread_placement() and
vhost_user_tx_thread_placement() from vhost_user_update_iface_state().
There is no need to repeatedly change the cpu/queue mappings.
* vhost_user_rx_thread_placement() is actually quite expensive. It should
be called only once per queue for the interface. There is no need to
scrap the existing cpu/queue mappings and create new cpu/queue mappings
when additional queues become active/enabled.
* Create the cpu/queue mappings for the first RX queue when the
interface is created. Don't remove the cpu/queue mapping when the
interface is disconnected. Remove the cpu/queue mapping only when the
interface is deleted.
The create vhost user interface CLI also has some very expensive system
calls if the command is entered with the optional keyword "server".
As a bonus, this patch makes the create vhost user interface binary API
and CLI thread safe, protecting the small amount of code that is
thread unsafe.
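A sketch of restricting the state update to the message types that need it
(the exact set of messages is an assumption):
  switch (msg.request)
    {
    ...
    case VHOST_USER_SET_VRING_KICK:
    case VHOST_USER_SET_MEM_TABLE:
      ...
      need_update = 1;   /* only these paths change interface readiness */
      break;
    ...
    }
  if (need_update)
    vhost_user_update_iface_state (vui);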
Change-Id: I4a19cbf7e9cc37ea01286169882e5603e6d7eb77
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
This commit adds a "gso" parameter to the existing "create tap..." CLI,
and a "no-gso" parameter for compatibility with the future,
when/if the defaults change.
It makes use of the lowest bit of the "tap_flags" field in the API call
in order to allow creation of GSO interfaces via API as well.
It does the necessary syscalls to enable the GSO
and checksum offload support on the kernel side and sets two flags
on the interface: virtio-specific virtio_if_t.gso_enabled,
and vnet_hw_interface_t.flags & VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO.
The first one, if enabled, triggers the marking of the GSO-encapsulated
packets on ingress with VNET_BUFFER_F_GSO flag, and
setting vnet_buffer2(b)->gso_size to the desired L4 payload size.
VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO determines the egress packet
processing in interface-output for such packets:
When the flag is set, they are sent out almost as usual (just taking
care to set the vnet header for virtio).
When the flag is not enabled (the case for most interfaces),
the egress path performs the re-segmentation such that
the L4 payload of the transmitted packets equals gso_size.
The operations in the datapath are enabled only when there is at least
one GSO-compatible interface in the system - this is done by tracking
the count in interface_main.gso_interface_count. This way the impact
of conditional checks for the setups that do not use GSO is minimized.
"show tap" CLI shows the state of the GSO flag on the interface, and
the total count of GSO-enabled interfaces (which is used to enable
the GSO-related processing in the packet path).
This commit lacks IPv6 extension header traversal support of any kind -
the L4 payload is assumed to follow the IPv6 header. Also it performs
the offloads only for TCP (TSO - TCP segmentation offload).
The UDP fragmentation offload (UFO) is not part of it.
For debug purposes it also adds the debug CLI:
"set tap gso {<interface> | sw_if_index <sw_idx>} <enable|disable>"
Change-Id: Ifd562db89adcc2208094b3d1032cee8c307aaef9
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
-fno-common makes sure we do not have multiple tentative definitions of the
same global symbol across compilation units. It helps debug nasty linkage
bugs by guaranteeing that all references to a global symbol use the same
underlying object.
It also helps avoid benign mistakes such as declaring enums as global
objects instead of types in headers (hence the minor fixes scattered
across the source).
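For example, a header containing
  enum my_flags { MY_FLAG_A = 1 };   /* fine: declares a type */
  int my_counter;                    /* tentative definition, duplicated in
                                        every .c that includes the header */
links silently with -fcommon but fails with -fno-common, forcing the
proper 'extern int my_counter;' declaration in the header.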
Change-Id: I55c16406dc54ff8a6860238b90ca990fa6b179f1
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Change-Id: I4e836244409c98739a13092ee252542a2c5fe259
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Example:
  buffers {
    default data-size 1536
  }
Change-Id: I5b4436850ca18025c9fdcfc7ed648c2c2732d660
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Idd560f3afde1dd03bc3d6fbb2070096146865f50
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Ifc98373371b967c49a75989eac415ddda1dcf15f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I60f88d50f062b004e6dea487bd627d303d0a5e75
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Ib1316482dd7b1ae3c27c7eeb55839ed8af9ca162
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I2e5fd45abcd07e9eda6184587889bdcd9613a159
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Ieadf0a97379ed8b17241e454895c4e5e195dc52f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Id7fccf2f805e578fb05032aeb2b649a74c3c0e56
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I25b2a28513821bc5eab9ac6890a3964d412b0399
Signed-off-by: Damjan Marion <damarion@cisco.com>
|