The per-interface next-hop graph node can be customized with the
vnet_hw_interface_rx_redirect_to_node function, but this does not work
for af-packet interfaces. In the current implementation, invoking
af_packet_set_interface_next_node sets the next-hop graph node index in
apif->per_interface_next_index, but that value is not propagated to
next0 during packet processing in af_packet_device_input_fn.
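A minimal sketch of the intended behavior in the input node (enum and
field names assumed from the surrounding af_packet code):
  /* honor the per-interface redirect when it has been set */
  if (PREDICT_FALSE (apif->per_interface_next_index != ~0))
    next0 = apif->per_interface_next_index;
  else
    next0 = AF_PACKET_INPUT_NEXT_ETHERNET_INPUT;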
Type: fix
Signed-off-by: Michael Yu <michael.a.yu@nokia-sbell.com>
Change-Id: I8e132ddd1c3c01b6f476de78546d4a9389b3ff87
Signed-off-by: Michael Yu <michael.a.yu@nokia-sbell.com>
(cherry picked from commit 90b34ed67a516c4391ad353ba431f8419b582d50)
|
|
The VPP code tries to set all userspace memory in the table via the
VHOST_SET_MEM_TABLE ioctl. But on aarch64, the userspace address range
is larger (48 bits) than on x86 (47 bits). Below is a segment from
/proc/[vpp]/maps:
fffb41200000-fffb43a00000 rw-s 00000000 00:0e 532232
/anon_hugepage (deleted)
Instead of handing the entire userspace memory space to vhost-net, set
only the address space reserved by the pmalloc module during
initialization.
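As a rough illustration (not the actual patch; vhost_fd, base and size
stand in for the vhost-net fd and the pmalloc reservation), registering
a single pre-reserved region looks like:
  #include <linux/vhost.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>

  struct vhost_memory *mem;
  mem = calloc (1, sizeof (*mem) + sizeof (struct vhost_memory_region));
  mem->nregions = 1;
  mem->regions[0].guest_phys_addr = base; /* pmalloc base address */
  mem->regions[0].userspace_addr = base;  /* identity mapping in this sketch */
  mem->regions[0].memory_size = size;     /* pmalloc reserved size */
  ioctl (vhost_fd, VHOST_SET_MEM_TABLE, mem);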
Type: fix
Change-Id: I91cb35e990869b42094cf2cd0512593733d33677
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Steve Capper <Steve.Capper@arm.com>
(cherry picked from commit ba0da570f264785f6b50eff7829f6653c0924069)
|
|
If the interface is deleted after a trace has been collected, show
trace may crash in a debug image. This is due to the additional check
in pool_elt_at_index() which asserts that the pool element is not free.
The fix is to do the check in the vhost format-trace function and print
"interface deleted" instead.
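A minimal sketch of the guard, assuming the usual vhost-user main and
trace field names:
  if (pool_is_free_index (vum->vhost_user_interfaces, t->device_index))
    return format (s, "vhost-user interface is deleted");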
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I0744f913ba6146609663443f408d784067880f93
(cherry picked from commit 5cd987dda679fe50b9cd7a834bb9162db39ade78)
|
|
Type: fix
Ticket: VPP-1766
revert e4ac48e792f4eebfce296cfde844ee73b1abd62f
Change-Id: I03feea4008a47859d570ad8d1d08ff3f30d139ef
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
vlib_increment_combined_counter takes a sw_if_index, not a hw_if_index.
Using hw_if_index happens to work only as long as no subinterface has
been created that causes hw_if_index and sw_if_index to differ.
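A hedged sketch of the corrected call (thread and counter variables
assumed from the surrounding device code):
  vlib_increment_combined_counter (
    vnm->interface_main.combined_sw_if_counters + VNET_INTERFACE_COUNTER_RX,
    thread_index, sw_if_index /* not hw_if_index */, n_packets, n_bytes);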
Type: fix
Ticket: VPP-1759
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I6db042186eeeacf32250f7ef261af8cd6f5ce56e
(cherry picked from commit efa119db3910e77f79eb005c67f8c01b473b40a1)
|
|
Set VNET_HW_INTERFACE_FLAG_SUPPORTS_TX_L4_CKSUM_OFFLOAD for the interface
to skip checksum calculation if the guest supports checksum offload.
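The gist, as a sketch (the guest-capability predicate is hypothetical):
  if (guest_supports_csum) /* negotiated checksum offload, assumed */
    hw->flags |= VNET_HW_INTERFACE_FLAG_SUPPORTS_TX_L4_CKSUM_OFFLOAD;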
Type: fix
Ticket: VPP-1750
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: Ie933c3462394f07580ef7f2bec1d2eb3b075bd0c
(cherry picked from commit a75ad876401a700127ebf234fc422e76fcd57b4c)
|
|
Previously, PG and virtio interfaces calculated the wrong l3 and l4
header offsets. This patch fixes that.
Type: fix
Ticket: VPP-1739
Change-Id: I5ba978e464babeb65e0711e1027320d46b3b9932
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 14bea1bb6505c0134dd5d2a18bcc436ce72cd149)
|
|
Type: fix
Ticket: VPP-1727
Change-Id: Icfee35c5ab5e1c65079d1ca7bb514162319113e5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 7dfcf7f1f504f5e8283c54a428805cc3a4aa8da9)
|
|
map_guest_mem may return NULL. Coverity complains about calls that do
not check its return value. Simple stuff.
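The pattern of the fix, sketched (surrounding variables assumed):
  void *ptr = map_guest_mem (vui, desc_addr, &map_hint);
  if (PREDICT_FALSE (ptr == 0))
    goto done; /* bail out instead of dereferencing NULL */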
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I0626115f4951a88f23d9792f0232fb57c132fbc2
|
|
Type: fix
1. Add the option '[gso-enabled]' to the CLI 'create interface virtio'
2. Add gso information to virtio_show()
Change-Id: I4eb58f4421325ef54a6a68c8341b3a6d3d68136a
Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
|
|
Add gso option in create vhost interface to support gso and checksum
offload.
Tested with the following startup options in qemu:
csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ufo=on,
host_tso4=on,host_tso6=on,host_ufo=on
Type: feature
Change-Id: I9ba1ee33677a694c4a0dfe66e745b098995902b8
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
Type: fix
Fixes: c30d87e6139c64eceade54972715b402c625763d
Change-Id: I86b606b18ff6a30709b7aff089fd5dd00103bd7f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: feature
Change-Id: If11f00574322c35c1780c31d5f7b47d30e083e35
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Multiple API message handlers call vnet_get_sup_hw_interface(...)
without checking the inbound sw_if_index. This can cause a
pool_elt_at_index ASSERT in a debug image, and major disorder in a
production image.
Given that a number of places are coded as follows, add an
"api_visible_or_null" variant of vnet_get_sup_hw_interface, which
returns NULL given an invalid sw_if_index, or a hidden sw interface:
- hw = vnet_get_sup_hw_interface (vnm, sw_if_index);
+ hw = vnet_get_sup_hw_interface_api_visible_or_null (vnm, sw_if_index);
if (hw == NULL || memif_device_class.index != hw->dev_class_index)
return clib_error_return (0, "not a memif interface");
Rename two existing xxx_safe functions -> xxx_or_null to make it
obvious what they return.
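A sketch of what the new variant could look like (field names follow
the usual vnet structures; the actual helper may differ):
  static inline vnet_hw_interface_t *
  vnet_get_sup_hw_interface_api_visible_or_null (vnet_main_t * vnm,
                                                 u32 sw_if_index)
  {
    vnet_sw_interface_t *si;
    if (pool_is_free_index (vnm->interface_main.sw_interfaces, sw_if_index))
      return NULL;
    si = vnet_get_sup_sw_interface (vnm, sw_if_index);
    if (si->flags & VNET_SW_INTERFACE_FLAG_HIDDEN)
      return NULL;
    return vnet_get_hw_interface (vnm, si->hw_if_index);
  }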
Type: fix
Change-Id: I29996e8d0768fd9e0c5495bd91ff8bedcf2c5697
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Some combinations of new qemu (2.11) and old dpdk (16.10) may send
VHOST_USER_SET_FEATURES at the end of the protocol exchange, when the
vhost interface is already declared up and ready. Unfortunately,
processing VHOST_USER_SET_FEATURES causes the interface to go down. It
is not clear whether that is correct or needed. Because there are no
additional messages thereafter, the hardware interface stays down.
The fix is to check the interface again at the end of processing
VHOST_USER_SET_FEATURES. If it is up and ready, bring the hardware
interface back up.
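A hedged sketch of the recheck (the readiness predicate is assumed):
  /* after handling VHOST_USER_SET_FEATURES */
  if (vui->is_ready)
    vnet_hw_interface_set_flags (vnm, vui->hw_if_index,
                                 VNET_HW_INTERFACE_FLAG_LINK_UP);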
Type: fix
Change-Id: I490cd03820deacbd8b44d8f2cb38c26349dbe3b2
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
The CLI allocates vectors which are consumed by tap_create_if(), whereas
the API passes null-terminated C-strings allocated on the API segment.
Do not try to be too clever here, and just allocate our own private
copies.
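For instance (a sketch; the message field name is illustrative):
  /* take a private, NUL-terminated vector copy instead of aliasing
     caller-owned memory */
  args.host_if_name = format (0, "%s%c", mp->host_if_name, 0);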
Type: fix
Fixes: 8d879e1a6bac47240a232893e914815f781fd4bf
Ticket: VPP-1724
Change-Id: I3ccdb8e0fcd4cb9be414af9f38cf6c33931a1db7
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
The fast path almost always has to deal with the real pointers. Deriving
the frame pointer from a frame_index requires a load of the 32-bit
frame_index from memory, another 64-bit load of the heap base pointer,
and some calculations.
Let's store the full pointer instead and do a single 64-bit load only.
This also helps avoid problems when the heap is grown and frames are
allocated below vm->heap_aligned_base.
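Schematically (a sketch, not the exact code):
  /* before: load 32-bit frame_index, load heap base, compute address */
  f = vlib_get_frame (vm, nf->frame_index);
  /* after: a single 64-bit load of the stored pointer */
  f = nf->frame;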
Type: refactor
Change-Id: Ifa6e6e984aafe1e2755bff80f0a4dfcddee3623c
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Memory is dirt cheap. But there is no need to throw it away.
Type: fix
Change-Id: I155130ab3c435b1c04d7c0e9f54795b8de9383d9
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
If the host interface name is not specified at creation, host_if_name
was wrongly set to a stack-allocated variable. Make sure it always
points to a heap allocated vector.
At deletion time, we must free all allocated vectors.
Type: fix
Change-Id: I17751f38e95097998d51225fdccbf3ce3c365593
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Fixes: 8389fb9
Change-Id: I31076db78507736631609146d4cca28597aca704
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
This patch adds support for configuring the host MTU size via the API,
CLI, or startup.conf.
Type: feature
Change-Id: I8ab087d82dbe7dedc498825c1a3ea3fcb2cce030
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
In the tap tx routine, virtio_interface_tx_inline, there used to be an
interface spinlock to ensure packets are processed in an orderly fashion:
  clib_spinlock_lock_if_init (&vif->lockp);
When the virtio code was introduced in 19.04, that line was changed to
  clib_spinlock_lock_if_init (&vring->lockp);
to accommodate multi-queues.
Unfortunately, although the spinlock exists in the vring, it was never
initialized for tap, only for virtio. As a result, many nasty things can
happen when running a tap interface in a multi-thread environment. A
crash is inevitable.
The fix is to initialize vring->lockp for tap and remove vif->lockp, as
it is not used anymore.
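The essence of the fix, as a sketch (placed where tap vrings are set up):
  clib_spinlock_init (&vring->lockp); /* was missing on the tap path */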
Change-Id: I82b15d3e9b0fb6add9b9ac49bf602a538946634a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit c2c89782d34df0dc7197b18b042b4c2464a101ef)
|
|
Change-Id: I7b735f5a540e8c278bac88245acb3f8c041c49c0
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Indirect buffers are used to store indirect descriptors to xmit big
packets.
This patch moves the indirect buffer allocation from interface creation
to the device node. It now allocates or deallocates buffers during tx
for chained buffers.
Change-Id: I55cec208a2a7432e12fe9254a7f8ef84a9302bd5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 55203e745f5e3f1f6c4dbe99d6eab8dee4d13ea6)
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;

  ... do some stuff...

  if ((error = vlib_call_init_function (init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);

and

static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
1. Fix af_packet memory leak;
2. Fix closing the socket twice;
3. Adjust debug log for syscall;
4. Adjust dhcp client output log;
Change-Id: I96bfaef16c4fad80c5da0d9ac602f911fee1670d
Signed-off-by: jackiechen1985 <xiaobo.chen@tieto.com>
|
|
Change-Id: Ifb16351f39e5eb2cd154e70a1c96243e4842e80d
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I0ffb468aef56f5fd223218a83425771595863666
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
A native virtio device through the legacy driver cannot support a
configurable queue size.
Change-Id: I76c446a071bef8a469873010325d830586aa84bd
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I73f76c25754f6fb14a49ae47b6404f3cbabbeeb5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
When a container with a tap interface attached is deleted, Linux also
deletes the tap interface, leaving the VPP side of the tap behind. This
patch does the clean-up job to remove that VPP side of the tap
interface.
To reproduce the behavior:
In VPP:
  create tap
On Linux:
  sudo ip netns add ns1
  sudo ip link set dev tap0 netns ns1
  sudo ip netns del ns1
Change-Id: Iaed1700073a9dc64e626c1d0c449f466c143f3ae
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
A crash happens when someone tries to set up a tap interface in the host
namespace without providing a custom name for the host side of the tap
interface. This patch fixes the problem by using the default name in
that case.
Change-Id: Ic1eaea5abd01bc6c766d0e0fcacae29ab7a7ec45
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Some API action handlers called vl_msg_api_send_shmem() directly. That
breaks the Unix domain socket API transport.
A couple (bond / vhost) also tried to send a sw_interface_event
directly, but did not send the message to everyone that had registered
interest. That scheme never worked correctly.
Refactored and improved the interface event code.
Change-Id: Idb90edfd8703c6ae593b36b4eeb4d3ed7da5c808
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Change-Id: I215e1e0208a073db80ec6f87695d734cf40fabe3
Signed-off-by: Jim Thompson <jim@netgate.com>
|
|
Change-Id: I7c6e4bf2abf08193e54a736510c07eeacd6aebe7
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: I9282a838738d0ba54255bef347abf4735be29820
Signed-off-by: Jim Thompson <jim@netgate.com>
|
|
Change-Id: I550313a36ae02eb3faa2f1a5e3614f55275a00cf
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Id71ffa77e977651f219ac09d1feef334851209e1
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
RDMA ibverbs is a userspace API to efficiently rx/tx packets. This is an
initial, unoptimized driver targeting Mellanox cards.
Next steps should include batching, multiqueue and additional cards.
Change-Id: I0309c7a543f75f2f9317eaf63ca502ac7a093ef9
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
This reverts commit e63325e3ca03c847963863446345e6c80a2c0cfd.
Allow time for CSIT to accommodate.
Change-Id: I59435e4ab5e05e36a2796c3bf44889b5d4823cc2
Signed-off-by: ot@cisco.com
|
|
Use of consistent API types for interface.api
Change-Id: Ieb54cebb4ac96b432a3f0b41596718aa2f34885b
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
Fix a typo in vhost_user_rx_discard_packet which may cause
txvq->last_avail_idx to go wild.
Change-Id: Ifaeb58835dff9b7ea82c061442722f1dcaa5d9a4
Signed-off-by: Steven Luong <sluong@cisco.com>
(cherry picked from commit 39382976701926c1f34191c1311829c15a53cb01)
|
|
Change-Id: I8819bcb9e228e7a432f4a7b67b6107f984927cd4
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I911fb3f1c6351b37580c5dbde6939a549431a92d
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
map_guest_mem may be called from a worker thread in the dataplane. It
calls vlib_log and may crash on vlib_log's ASSERT statement:
  /* make sure we are running on the main thread to avoid use in dataplane
     code, for dataplane logging consider use of event-logger */
  ASSERT (vlib_get_thread_index () == 0);
The fix is to convert the vlib_log call in map_guest_mem to the event
logger.
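A sketch of the event-logger idiom that replaces vlib_log here (the
format string is illustrative):
  ELOG_TYPE_DECLARE (el) = {
    .format = "map_guest_mem: map failed for addr %x",
    .format_args = "i8",
  };
  struct { u64 addr; } *ed;
  ed = ELOG_DATA (&vlib_global_main.elog_main, el);
  ed->addr = addr;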
Change-Id: Iaaf6d86782aa8a18d25e0209f22dc31f04668d56
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
While https://gerrit.fd.io/r/#/c/16590/ fixed the leaked fd which
Coverity reported at that time, a new Coverity run reports a similar
leaked fd in a different goto punt path. It would be nice if Coverity
had reported both of them at the same time. Or perhaps it did and I just
missed it. Anyway, the new fix is to put the close (fd) statement prior
to the return of the tap_create_if routine, which should catch all
gotos.
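Schematically (a sketch of the single exit path; labels illustrative):
error:
  /* undo any partially created state */
done:
  if (fd != -1)
    close (fd); /* one close covers every goto path */
  return;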
Change-Id: I0a51ed3710e32d5d74c9cd9b5066a667153e2f9d
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
Change-Id: I01c4f5755d579282773ac227b0bc24f8ddbb2bd1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Symptom
-------
With NDR traffic blasting at VPP, bringing up a new VM with vhost
connection to VPP causes packet drops. I am able to recreate this
problem easily using a simple setup like this.
TREX-------------- switch ---- VPP
|---------------| |-------|
Cause
-----
The reason for the packet drops is vhost holding onto the worker barrier
lock for too long in vhost_user_socket_read(). There are quite a few
system calls inside the routine. At the end of the routine, it
unconditionally calls vhost_user_update_iface_state() for all message
types. vhost_user_update_iface_state() also unconditionally calls
vhost_user_rx_thread_placement() and vhost_user_tx_thread_placement().
vhost_user_rx_thread_placement() scraps all existing cpu/queue mappings
for the interface and creates brand new cpu/queue mappings. This process
is very disruptive and very expensive. In my opinion, this area of code
needs a makeover.
Fixes
-----
* vhost_user_socket_read() is rewritten so that it does not hold onto
the worker barrier lock across system calls, or at least minimizes the
need for doing so.
* Remove the call to vhost_user_update_iface_state() as a default route
at the end of vhost_user_socket_read(). There are only a couple of
message types which really need to call vhost_user_update_iface_state();
add the call only to those message types.
* Remove vhost_user_rx_thread_placement() and
vhost_user_tx_thread_placement() from vhost_user_update_iface_state().
There is no need to repeatedly change the cpu/queue mappings.
* vhost_user_rx_thread_placement() is actually quite expensive. It
should be called only once per queue for the interface. There is no need
to scrap the existing cpu/queue mappings and create new ones when
additional queues become active/enabled.
* Create the cpu/queue mappings for the first RX queue when the
interface is created. Don't remove the cpu/queue mapping when the
interface is disconnected. Remove the cpu/queue mapping only when the
interface is deleted.
The create vhost user interface CLI also makes some very expensive
system calls if the command is entered with the optional keyword
"server".
As a bonus, this patch makes the create vhost user interface binary API
and CLI thread safe, protecting the small amount of code which is
thread unsafe, as sketched below.
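A sketch of the barrier protection pattern for the thread-unsafe section:
  vlib_worker_thread_barrier_sync (vm);
  /* only the small thread-unsafe section runs under the barrier */
  vlib_worker_thread_barrier_release (vm);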
Change-Id: I4a19cbf7e9cc37ea01286169882e5603e6d7eb77
Signed-off-by: Steven Luong <sluong@cisco.com>
|
|
This commit adds a "gso" parameter to the existing "create tap..." CLI,
and a "no-gso" parameter for compatibility with the future, when/if
defaults change.
It makes use of the lowest bit of the "tap_flags" field in the API call
in order to allow creation of GSO interfaces via API as well.
It does the necessary syscalls to enable the GSO
and checksum offload support on the kernel side and sets two flags
on the interface: virtio-specific virtio_if_t.gso_enabled,
and vnet_hw_interface_t.flags & VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO.
The first one, if enabled, triggers the marking of the GSO-encapsulated
packets on ingress with VNET_BUFFER_F_GSO flag, and
setting vnet_buffer2(b)->gso_size to the desired L4 payload size.
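As a sketch of the ingress marking just described (surrounding variables
assumed; hdr is the virtio-net header of the received packet):
  /* on rx: mark GSO packets and record the desired L4 payload size */
  if (vif->gso_enabled && hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE)
    {
      b->flags |= VNET_BUFFER_F_GSO;
      vnet_buffer2 (b)->gso_size = hdr->gso_size;
    }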
VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO determines the egress packet
processing in interface-output for such packets:
When the flag is set, they are sent out almost as usual (just taking
care to set the vnet header for virtio).
When the flag is not enabled (the case for most interfaces),
the egress path performs the re-segmentation such that
the L4 payload of the transmitted packets equals gso_size.
The operations in the datapath are enabled only when there is at least
one GSO-compatible interface in the system - this is done by tracking
the count in interface_main.gso_interface_count. This way the impact
of conditional checks for the setups that do not use GSO is minimized.
"show tap" CLI shows the state of the GSO flag on the interface, and
the total count of GSO-enabled interfaces (which is used to enable
the GSO-related processing in the packet path).
This commit lacks IPv6 extension header traversal support of any kind -
the L4 payload is assumed to follow the IPv6 header. Also it performs
the offloads only for TCP (TSO - TCP segmentation offload).
The UDP fragmentation offload (UFO) is not part of it.
For debug purposes it also adds the debug CLI:
"set tap gso {<interface> | sw_if_index <sw_idx>} <enable|disable>"
Change-Id: Ifd562db89adcc2208094b3d1032cee8c307aaef9
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|