author:    Steven Luong <sluong@cisco.com>    2019-10-02 07:33:48 -0700
committer: Damjan Marion <dmarion@me.com>     2019-10-07 20:02:17 +0000
commit:    4442f7cb2ebca129170a559d846712c2b65d5051
tree:      0c1fa7b628e3e7e1b1ab774fb69e0d8b359ab3d9 /src/vnet/devices/virtio/vhost_user.c
parent:    0dd97d473bc0c958d9fcea508e1f5122a137b23f
devices: vhost not reading packets from vring
In rare cases, after the vhost protocol message exchange has finished and
the interface has been brought up successfully, the driver may still change
its mind about the memory regions by sending new memory maps via
SET_MEM_TABLE. Upon processing SET_MEM_TABLE, VPP invalidates the old memory
regions and the descriptor tables, but it does not re-compute the new
descriptor tables from the new memory maps. Without valid descriptor tables,
VPP cannot read packets from the vring.
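Concretely, the failing message sequence looks roughly like this (a
paraphrase of the description above, not a captured trace):

```c
/*
 * driver -> VPP: SET_MEM_TABLE       VPP maps the memory regions
 * driver -> VPP: SET_VRING_ADDRESS   VPP computes desc/used/avail
 *                ... interface up, packets flow ...
 * driver -> VPP: SET_MEM_TABLE       VPP re-maps the regions and
 *                                    invalidates desc/used/avail,
 *                                    but does not re-compute them
 * (no further SET_VRING_ADDRESS)  => VPP stops reading the vring
 */
```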
In the normal working case, the driver follows SET_MEM_TABLE with
SET_VRING_ADDRESS, from which VPP computes the descriptor tables.
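"Computes the descriptor tables" here means translating the driver-supplied
addresses through the current memory map; the diff below does this with
map_user_mem. A minimal sketch of that lookup, with simplified stand-in
types (mem_region_t and its fields are illustrative, not VPP's exact
definitions):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for VPP's per-interface region bookkeeping;
 * the names are illustrative, not VPP's exact definitions. */
typedef struct
{
  uint64_t userspace_addr;	/* driver-side virtual address of the region */
  uint64_t memory_size;		/* region length in bytes */
  void *mmap_addr;		/* where VPP mmap()ed the region locally */
} mem_region_t;

/* Translate a driver-supplied address (e.g. desc_user_addr) into a
 * pointer valid in VPP's address space by scanning the current memory
 * map.  Returns NULL when no region covers the address, which is why
 * pointers derived from an old map are useless after SET_MEM_TABLE. */
static void *
map_user_mem (mem_region_t * regions, int nregions, uint64_t addr)
{
  for (int i = 0; i < nregions; i++)
    if (regions[i].userspace_addr <= addr &&
	addr < regions[i].userspace_addr + regions[i].memory_size)
      return (uint8_t *) regions[i].mmap_addr +
	(addr - regions[i].userspace_addr);
  return NULL;
}
```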
The fix is to stash the descriptor table addresses received in
SET_VRING_ADDRESS and to re-compute the descriptor tables when processing
SET_MEM_TABLE, whenever those addresses are known.
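Condensed into a runnable shape, the fix amounts to two steps. This sketch
reuses mem_region_t and map_user_mem from the block above; vring_t and the
helper names are likewise illustrative stand-ins for VPP's real structures,
not the patch's literal code:

```c
typedef struct
{
  /* Raw driver-side addresses, stashed from SET_VRING_ADDRESS. */
  uint64_t desc_user_addr, used_user_addr, avail_user_addr;
  /* Mapped pointers actually used on the data path. */
  void *desc, *used, *avail;
} vring_t;

/* Step 1, in the SET_VRING_ADDRESS handler: keep the raw addresses
 * so they survive a later change of the memory map. */
static void
stash_vring_addrs (vring_t * vr, uint64_t desc, uint64_t used,
		   uint64_t avail)
{
  vr->desc_user_addr = desc;
  vr->used_user_addr = used;
  vr->avail_user_addr = avail;
}

/* Step 2, in the SET_MEM_TABLE handler, after installing the new map:
 * re-derive the pointers for every vring whose addresses are known. */
static void
recompute_vrings (mem_region_t * regions, int nregions,
		  vring_t * vrings, int nvrings)
{
  for (int q = 0; q < nvrings; q++)
    if (vrings[q].desc_user_addr && vrings[q].used_user_addr &&
	vrings[q].avail_user_addr)
      {
	vrings[q].desc =
	  map_user_mem (regions, nregions, vrings[q].desc_user_addr);
	vrings[q].used =
	  map_user_mem (regions, nregions, vrings[q].used_user_addr);
	vrings[q].avail =
	  map_user_mem (regions, nregions, vrings[q].avail_user_addr);
      }
}
```

In the actual patch the recompute loop runs inside a worker-thread barrier
section (note the vlib_worker_thread_barrier_release immediately after it
in the diff), since data-plane threads dereference these pointers.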
Type: fix
Ticket: VPP-1784
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I3361f14c3a0372b8d07943eb6aa4b3a3f10708f9
(cherry picked from commit 61b8ba69f7a9540ed00576504528ce439f0286f5)
Diffstat (limited to 'src/vnet/devices/virtio/vhost_user.c')
-rw-r--r-- | src/vnet/devices/virtio/vhost_user.c | 22
1 file changed, 22 insertions, 0 deletions
```diff
diff --git a/src/vnet/devices/virtio/vhost_user.c b/src/vnet/devices/virtio/vhost_user.c
index c06f78ce1bb..7094a00fb33 100644
--- a/src/vnet/devices/virtio/vhost_user.c
+++ b/src/vnet/devices/virtio/vhost_user.c
@@ -571,6 +571,24 @@ vhost_user_socket_read (clib_file_t * uf)
           vui->nregions++;
         }
+
+      /*
+       * Re-compute desc, used, and avail descriptor table if vring address
+       * is set.
+       */
+      for (q = 0; q < VHOST_VRING_MAX_N; q++)
+        {
+          if (vui->vrings[q].desc_user_addr &&
+              vui->vrings[q].used_user_addr && vui->vrings[q].avail_user_addr)
+            {
+              vui->vrings[q].desc =
+                map_user_mem (vui, vui->vrings[q].desc_user_addr);
+              vui->vrings[q].used =
+                map_user_mem (vui, vui->vrings[q].used_user_addr);
+              vui->vrings[q].avail =
+                map_user_mem (vui, vui->vrings[q].avail_user_addr);
+            }
+        }
       vlib_worker_thread_barrier_release (vm);
       break;
@@ -614,6 +632,10 @@ vhost_user_socket_read (clib_file_t * uf)
           goto close_socket;
         }
+      vui->vrings[msg.state.index].desc_user_addr = msg.addr.desc_user_addr;
+      vui->vrings[msg.state.index].used_user_addr = msg.addr.used_user_addr;
+      vui->vrings[msg.state.index].avail_user_addr = msg.addr.avail_user_addr;
+
       vlib_worker_thread_barrier_sync (vm);
       vui->vrings[msg.state.index].desc = desc;
       vui->vrings[msg.state.index].used = used;
```