Age | Commit message | Author | Files | Lines |
|
Currently, the libc epfd and the vls epfd are independent and each must be polled separately without a timeout, so an application calling epoll_wait busy-polls and occupies a lot of CPU. This change nests vcl_mq_epfd into the libc epfd when eventfd signaling is used with VPP, so a single epoll_wait on the libc epfd with the specified timeout is enough.
Type: feature
Change-Id: I6b6e0f501c769e186714bfbc187cfaed2533b4c2
Signed-off-by: hanlin <hanlin_wang@163.com>
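
As a rough illustration of the nesting described above, here is a stand-alone C sketch in which one epoll fd (standing in for vcl_mq_epfd) is registered inside another (standing in for the libc epfd), so a single epoll_wait with a timeout covers both; the fd names and the eventfd usage are assumptions for the example, not the actual LDP/VCL code.

/* Nested epoll sketch: an "inner" epoll fd (standing in for vcl_mq_epfd) is
 * added to an "outer" epoll fd (standing in for the libc epfd), so one
 * epoll_wait with a timeout covers both event sources. */
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

int main(void)
{
    int outer_epfd = epoll_create1(0);      /* plays the role of the libc epfd */
    int inner_epfd = epoll_create1(0);      /* plays the role of vcl_mq_epfd */
    int mq_efd = eventfd(0, EFD_NONBLOCK);  /* message-queue style notification fd */

    struct epoll_event ev = { 0 };

    /* Register the eventfd with the inner epoll instance. */
    ev.events = EPOLLIN;
    ev.data.fd = mq_efd;
    epoll_ctl(inner_epfd, EPOLL_CTL_ADD, mq_efd, &ev);

    /* Nest the inner epoll fd inside the outer one: when the eventfd fires,
     * the inner epfd becomes readable and wakes the outer epoll_wait. */
    ev.events = EPOLLIN;
    ev.data.fd = inner_epfd;
    epoll_ctl(outer_epfd, EPOLL_CTL_ADD, inner_epfd, &ev);

    /* One blocking wait with a real timeout instead of two busy polls. */
    struct epoll_event events[8];
    int n = epoll_wait(outer_epfd, events, 8, 1000 /* ms */);
    printf("outer epoll_wait returned %d\n", n);

    close(mq_efd);
    close(inner_epfd);
    close(outer_epfd);
    return 0;
}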
|
|
Type: fix
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I35fba8f17bdd6e2f5612358608ff6c13f4b431fe
|
|
Type: refactor
- per vls worker private pool of sessions
- deep copy of vls worker data structures on fork
- maintain a global, i.e., heap-allocated and lock-protected, pool of
  elements that track sessions shared between workers (due to forking).
Credit for uncovering the issue goes to Intel team contributing code to
VSAP (Ping, Yuwei, Shujun, Guoao).
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id7d8bb06ecd7b03e4134f1cae23e740cf4634649
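
As a rough sketch of the lock-protected shared-session tracking described above, the pool below records how many workers reference a session and frees the entry when the last reference goes away. All names (shared_session_t, shared_session_add_ref, ...) are illustrative placeholders, not the actual VLS structures; a real cross-process version would also need the pool and mutex placed in shared memory with a process-shared mutex.

/* Illustrative lock-protected pool of shared-session descriptors: each
 * entry tracks how many app workers reference a given session.  This
 * single-process sketch only shows the bookkeeping. */
#include <pthread.h>
#include <stdint.h>

typedef struct {
    uint32_t session_index;   /* session being shared */
    uint32_t ref_count;       /* number of workers referencing it */
    uint8_t  in_use;
} shared_session_t;

#define MAX_SHARED 1024

static shared_session_t shared_pool[MAX_SHARED];
static pthread_mutex_t shared_pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called when a worker (e.g. a forked child) starts sharing a session;
 * returns the pool slot, or -1 if the pool is full. */
static int shared_session_add_ref(uint32_t session_index)
{
    int i, slot = -1;

    pthread_mutex_lock(&shared_pool_lock);
    for (i = 0; i < MAX_SHARED; i++) {
        if (shared_pool[i].in_use && shared_pool[i].session_index == session_index) {
            shared_pool[i].ref_count++;
            slot = i;
            goto done;
        }
        if (!shared_pool[i].in_use && slot < 0)
            slot = i;   /* remember the first free entry */
    }
    if (slot >= 0) {
        shared_pool[slot].in_use = 1;
        shared_pool[slot].session_index = session_index;
        shared_pool[slot].ref_count = 1;
    }
done:
    pthread_mutex_unlock(&shared_pool_lock);
    return slot;
}

/* Called when a worker stops using a shared session; the entry is freed
 * once the last reference is gone. */
static void shared_session_del_ref(int slot)
{
    pthread_mutex_lock(&shared_pool_lock);
    if (slot >= 0 && shared_pool[slot].in_use && --shared_pool[slot].ref_count == 0)
        shared_pool[slot].in_use = 0;
    pthread_mutex_unlock(&shared_pool_lock);
}

int main(void)
{
    int slot = shared_session_add_ref(7);   /* first worker shares session 7 */
    shared_session_add_ref(7);              /* e.g. a forked worker */
    shared_session_del_ref(slot);
    shared_session_del_ref(slot);           /* last reference frees the entry */
    return 0;
}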
|
|
Type: refactor
Moves connect, disconnect, bind, unbind and app detach from the binary
api to the message queue. This simplifies app/vcl interaction with the
session layer, since all session control messages are now handled over
the mq.
Add/del segment messages require internal C api changes that affect all
builtin applications. They'll be moved in a separate patch and might
not be backportable to 19.08.
Change-Id: I93f6d18e551b024effa75d47f5ff25f23ba8aff5
Signed-off-by: Florin Coras <fcoras@cisco.com>
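
A hedged sketch of the general pattern this refactor moves to: session control operations carried as messages on a single queue and handled at one dispatch point rather than via separate binary-api handlers. The enum and functions below are illustrative placeholders, not the actual VPP session-layer message definitions.

/* Illustrative control-message dispatch: connect/disconnect/bind/unbind and
 * app-detach requests arrive as messages on one app<->session-layer queue
 * and are handled in a single place. */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    CTRL_MSG_CONNECT,
    CTRL_MSG_DISCONNECT,
    CTRL_MSG_BIND,
    CTRL_MSG_UNBIND,
    CTRL_MSG_APP_DETACH,
} ctrl_msg_type_t;

typedef struct {
    ctrl_msg_type_t type;
    uint32_t session_index;
    uint8_t data[64];         /* type-specific payload, elided here */
} ctrl_msg_t;

/* Single dispatch point for everything read off the message queue. */
static void ctrl_msg_dispatch(const ctrl_msg_t *msg)
{
    switch (msg->type) {
    case CTRL_MSG_CONNECT:
        printf("connect request, session %u\n", msg->session_index);
        break;
    case CTRL_MSG_DISCONNECT:
        printf("disconnect, session %u\n", msg->session_index);
        break;
    case CTRL_MSG_BIND:
        printf("bind/listen, session %u\n", msg->session_index);
        break;
    case CTRL_MSG_UNBIND:
        printf("unbind, session %u\n", msg->session_index);
        break;
    case CTRL_MSG_APP_DETACH:
        printf("application detach\n");
        break;
    }
}

int main(void)
{
    ctrl_msg_t msg = { .type = CTRL_MSG_BIND, .session_index = 1 };
    ctrl_msg_dispatch(&msg);
    return 0;
}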
|
|
Change-Id: I08cb7180364a5ef8444c9895c6d4f4842661b2a7
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: I455d108dfe52d45d040167fecb37b33e9d630c3c
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: Idc7dfe743399dd8dee0f6b3ec83f194f3fca580b
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: I9b0e6d65255e516cf5bf18757d4769176ef76e92
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
If an application worker calls listen on a session, vpp registers the
worker with the listener's load-balance group and, as new connections
are accepted, may push accept notifications to it.
Some applications, however, like nginx, may on some workers never
accept new connections on a session they've started listening on.
To avoid accumulating accept events on such workers, this patch adds
support for passive listeners, i.e., workers that have started
listening on a session but never call accept or epoll/select on
that listener.
Change-Id: I007e6dcb54fc88a0e3aab3c6e2a3d1ef135cbd58
Signed-off-by: Florin Coras <fcoras@cisco.com>
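
A minimal sketch of the gating idea behind passive listeners, under the assumption that the session layer simply skips queueing accept notifications to a worker marked passive; the struct, flag and function names are placeholders, not VPP's actual listener state.

/* Illustrative accept-notification gating: a worker that listens but is
 * marked passive (it never calls accept/epoll/select on the listener) is
 * not fed accept events, so they cannot pile up in its queue. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t worker_index;
    uint8_t  is_passive;        /* listens, but never accepts */
    uint32_t pending_accepts;   /* stand-in for the worker's event queue */
} listener_worker_t;

/* Called when a new connection shows up on the listener. */
static void maybe_notify_accept(listener_worker_t *wrk)
{
    if (wrk->is_passive) {
        /* Skip the notification; in practice the connection would go to an
         * active worker in the listener's load-balance group. */
        printf("worker %u is passive, not queueing accept\n", wrk->worker_index);
        return;
    }
    wrk->pending_accepts++;
    printf("queued accept for worker %u (pending %u)\n",
           wrk->worker_index, wrk->pending_accepts);
}

int main(void)
{
    listener_worker_t active = { .worker_index = 0, .is_passive = 0 };
    listener_worker_t passive = { .worker_index = 1, .is_passive = 1 };

    maybe_notify_accept(&active);
    maybe_notify_accept(&passive);
    return 0;
}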
|
|
- More fine tuning for multi-process applications.
- Experimental support for multi-threaded apps. This is meant for apps
  whose threads are not vcl workers and whose sessions are shared between
  them.
Change-Id: Ie07651da5f2cdcf39f5dead5431f50ad39cf3f74
Signed-off-by: Florin Coras <fcoras@cisco.com>
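
A rough sketch of the multi-thread usage pattern mentioned above, assuming one registered worker whose session handle is shared by ordinary pthreads under a mutex; session_write and the handle are illustrative stand-ins, not the vppcom API.

/* Illustrative pattern: application threads that are NOT vcl workers share
 * one session handle, serializing their use of it with a mutex. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a session handle owned by the single registered worker. */
static uint32_t shared_session_handle = 42;
static pthread_mutex_t session_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for a write on the shared session (not a real vppcom call). */
static void session_write(uint32_t handle, const char *buf)
{
    printf("thread %lu writes '%s' on session %u\n",
           (unsigned long) pthread_self(), buf, handle);
}

/* Plain pthreads, not vcl workers: they take the lock and use the single
 * worker's session instead of registering their own worker state. */
static void *app_thread(void *arg)
{
    pthread_mutex_lock(&session_lock);
    session_write(shared_session_handle, (const char *) arg);
    pthread_mutex_unlock(&session_lock);
    return 0;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, 0, app_thread, "hello");
    pthread_create(&t2, 0, app_thread, "world");
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return 0;
}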
|
|
Change-Id: I721542aca139d7908a4f917629856f82cae79962
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Moves LDP logic that allows sharing of sessions between multi-process
app workers into a separate VCL shim layer. Also refactors LDP to use
the new layer.
Change-Id: I8198b51eae7d099a8c486e36b29e3a0cb8cee8e9
Signed-off-by: Florin Coras <fcoras@cisco.com>
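
A minimal sketch of the layering this refactor introduces: the app-facing LDP call becomes a thin wrapper that delegates sharing/locking to a VLS-like shim before reaching the underlying VCL call. ldp_write, vls_write and vcl_session_write are illustrative placeholders, not the real VPP symbols.

/* Illustrative layering after the refactor:
 *   app -> LDP intercept -> VLS shim (sharing/locking) -> VCL session call */
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t vls_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Bottom layer: stand-in for the actual VCL session operation. */
static int vcl_session_write(uint32_t session_index, const char *buf)
{
    printf("vcl: write '%s' on session %u\n", buf, session_index);
    return 0;
}

/* Middle layer: VLS-like shim owning the multi-process/worker sharing
 * logic that used to live directly in LDP. */
static int vls_write(uint32_t vls_handle, const char *buf)
{
    int rv;

    pthread_mutex_lock(&vls_table_lock);   /* serialize shared access */
    rv = vcl_session_write(vls_handle, buf);
    pthread_mutex_unlock(&vls_table_lock);
    return rv;
}

/* Top layer: LDP intercept of a libc-style call, now a thin wrapper. */
static int ldp_write(int fd, const char *buf)
{
    uint32_t vls_handle = (uint32_t) fd;   /* fd -> vls handle mapping elided */
    return vls_write(vls_handle, buf);
}

int main(void)
{
    return ldp_write(3, "hello");
}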
|