vlib_main_t *vlib_get_main_not_inline(void)
vlib_thread_main_t *vlib_get_thread_main_not_inline(void)
elog_main_t *vlib_get_elog_main_not_inline(void)
Type: refactor
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6de306d567283ad28ef34c9be0cf27452aecbf6c
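A minimal sketch of what such out-of-line wrappers look like, assuming
they simply call the existing inline accessors:

    #include <vlib/vlib.h>

    /* Out-of-line versions of the always-inline accessors, callable
       from contexts where the inline versions are unavailable. */
    vlib_main_t *
    vlib_get_main_not_inline (void)
    {
      return vlib_get_main ();
    }

    vlib_thread_main_t *
    vlib_get_thread_main_not_inline (void)
    {
      return vlib_get_thread_main ();
    }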

Type: fix
Adding routes should be MP safe. When new prefixes with different
prefix lengths are added, adjust the sorted list in an MP-safe way.
Change-Id: Ib73a3c84d01eb86d17f8e79ea2bd2505dd9afb3d
Signed-off-by: Neale Ranns <nranns@cisco.com>
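One common way to make such a sorted-list adjustment MP safe (a hedged
sketch, not necessarily this exact change): build an updated copy and
publish it with a single atomic pointer store, so readers always see
either the old or the new fully-sorted list:

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical sorted prefix-length table; all names illustrative. */
    typedef struct { int *lens; size_t n; } plen_table_t;
    static _Atomic (plen_table_t *) cur_table;  /* assumed initialized */

    static int cmp_int (const void *a, const void *b)
    { return *(const int *) a - *(const int *) b; }

    void
    plen_table_add (int new_len)
    {
      plen_table_t *old = atomic_load (&cur_table);
      plen_table_t *new = malloc (sizeof (*new));
      new->n = old->n + 1;
      new->lens = malloc (new->n * sizeof (int));
      memcpy (new->lens, old->lens, old->n * sizeof (int));
      new->lens[old->n] = new_len;
      qsort (new->lens, new->n, sizeof (int), cmp_int);
      /* Readers never see a half-sorted list: */
      atomic_store (&cur_table, new);
      /* Freeing 'old' safely needs deferred reclamation, omitted here. */
    }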

Callbacks for monitoring and performance measurement:
- Add new callback list type, with context
- Add callbacks for API, CLI, and barrier sync
- Modify node dispatch callback to pass plugin-specific context
- Modify perfmon plugin to keep PMC samples local to the plugin
- Include process nodes in dispatch callback
- Pass dispatch function return value to callback
Type: refactor
Signed-off-by: Tom Seidenberg <tseidenb@cisco.com>
Change-Id: I28b06c58490611e08d76ff5b01b2347ba2109b22
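A sketch of a callback list type with per-callback context (names
illustrative; the actual types in this change may differ):

    #include <vppinfra/vec.h>

    typedef void (*perf_cb_fn_t) (void *ctx, int dispatch_rv);

    typedef struct
    {
      perf_cb_fn_t fn;
      void *ctx;                  /* plugin-specific context */
    } perf_cb_t;

    typedef struct
    {
      perf_cb_t *curr;            /* vector of registered callbacks */
    } perf_cb_list_t;

    static inline void
    perf_cb_call (perf_cb_list_t * l, int dispatch_rv)
    {
      perf_cb_t *cb;
      /* pass the dispatch function's return value to each callback */
      vec_foreach (cb, l->curr)
        cb->fn (cb->ctx, dispatch_rv);
    }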

Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7e7d95a089dd849c1f01ecea84529d8dbf239f21

When CLIB_DEBUG is enabled, the vlib_foreach_main macro asserts that
the vlib_main it is currently looking at is safely parked in the
barrier, by checking that vlib_main->parked_at_barrier is not 0.
Unfortunately, the check is racy: workers first increment the atomic
counter to indicate that they have reached the barrier and _then_ set
this_main->parked_at_barrier to 1. For the last worker to suspend,
this opens the race: the main thread is free to execute and assert
immediately after the atomic counter has been incremented, before the
worker gets to write its own parked_at_barrier. Fix this by simply
swapping the order of the two operations.
Type: fix
Signed-off-by: Alexander Kabaev <kan@FreeBSD.org>
Change-Id: Iae47abd6ca0be1c5413f5ecaefabc64cd7eac2ed
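In outline, the corrected worker-side ordering looks like this (a
sketch; names follow the description above, not necessarily the exact
code):

    /* Publish parked_at_barrier FIRST, so the main thread can never
       observe the counter bumped while the flag is still 0 ... */
    this_main->parked_at_barrier = 1;

    /* ... and only then announce arrival at the barrier, which may
       let the main thread proceed past its own wait loop. */
    clib_atomic_fetch_add (&workers_at_barrier, 1);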

Clean up a few instances of side-bet elog_string hash table usage;
elog_string handles that problem itself.
Add CLI commands to vat to initialize, enable/disable, and save an
event log.
Event logging at the same time in both vpp and vat yields a pair
of event logs which can be merged by the "test_elog" tool.
Type: refactor
Change-Id: I8d6a72206f2309c967ea1630077fba31aef47f93
Signed-off-by: Dave Barach <dave@barachs.net>

Otherwise, all N worker threads try to sort the list at the same time:
a good way to have a bad day.
This approach performs *far* better than maintaining order by adding a
spin-lock. By direct measurement with elog + g2: 11 threads execute the
per-thread init function list in 22us, vs. 50ms with a
CLIB_PAUSE()-enabled spin-lock.
Change-Id: I1745f2a213c0561260139a60114dcb981e0c64e5
Signed-off-by: Dave Barach <dave@barachs.net>
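The shape of the approach, schematically (names and the sort helper
are hypothetical): sort the list once, on one thread, before the
workers traverse it, so workers only ever read it:

    /* Hypothetical registration list; names illustrative. */
    typedef struct init_elt
    {
      struct init_elt *next;
      void (*f) (void *vm);
    } init_elt_t;

    static init_elt_t *per_thread_init_list;

    /* Done exactly once, before worker threads start: */
    per_thread_init_list = sort_init_list (per_thread_init_list);

    /* Each worker then walks the already-sorted list read-only,
       so no spin-lock (and no CLIB_PAUSE) is needed: */
    for (init_elt_t * e = per_thread_init_list; e; e = e->next)
      e->f (vm);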

Change-Id: I6819dd9dbfc15c17740bdb98b51bdd639ef8c4d2
Signed-off-by: Damjan Marion <damarion@cisco.com>

The main thread squirrels away vlib_time_now (&vlib_global_main);
worker threads use it to calculate an offset in f64 seconds from their
own vlib_time_now (vm) value. We use that offset until the next
barrier sync.
Thanks to Damjan for the suggestion.
Change-Id: If56cdfe68e5ad8ac3b0d0fc885dc3ba556cd1215
Signed-off-by: Dave Barach <dave@barachs.net>
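A sketch of the offset arithmetic (variable names illustrative):

    /* At barrier sync, the main thread records its own clock: */
    f64 main_now = vlib_time_now (&vlib_global_main);

    /* Each worker derives how far its clock differs from main's: */
    f64 offset = main_now - vlib_time_now (worker_vm);

    /* Until the next barrier sync, the worker reports: */
    f64 now = vlib_time_now (worker_vm) + offset;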

This is the first part of the addition of atomic macros, with only
macros for the __sync builtins.
- Based on an earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- a clib_atomic_release macro is added and used in the absence of any
  memory barrier.
- clib_atomic_bool_cmp_and_swap is added.
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
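Roughly, such wrappers map straight onto the GCC __sync builtins, e.g.
(a sketch; the clib_atomic_release mapping shown is an assumption for
illustration):

    #define clib_atomic_fetch_add(a, b)  __sync_fetch_and_add (a, b)

    /* returns 1 iff *a was 'old' and has been replaced by 'new' */
    #define clib_atomic_bool_cmp_and_swap(a, old, new) \
      __sync_bool_compare_and_swap (a, old, new)

    /* release-store of 0, for code paths that previously had no
       explicit memory barrier at all (assumed mapping) */
    #define clib_atomic_release(a)  __sync_lock_release (a)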

Add an "elog trace [api][cli][barrier]" debug CLI command. Remove the
barrier elog test command. Remove unused reliable multicast code.
Change-Id: Ib3ecde901b7c49fe92b313d0087cd7e776adcdce
Signed-off-by: Dave Barach <dave@barachs.net>
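Presumably invoked as, e.g.:

    vpp# elog trace api cli barrier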

Change-Id: I3124238ab4d43bcef5590bad33a4ff0b5d8b7d15
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>

Change-Id: I2d3b8d5a7192ff68bee443a99346ecb807b2d833
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>

Change-Id: I6201a044e70ab6a58db8212960c57edc77c41f96
Signed-off-by: Florin Coras <fcoras@cisco.com>

This prevents a deadlock in the case where worker A sends to worker B
while worker B sends to A.
Change-Id: Id9436960f932c58325fe4f5ef8ec67b50031aeda
Signed-off-by: Damjan Marion <damarion@cisco.com>
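The deadlock: both workers spin waiting for room in each other's full
handoff queue while their own queues stay full. One way out (a hedged
sketch with hypothetical helpers; the actual change may differ in
detail) is a non-blocking enqueue, so a sender can go back to draining
its own queue:

    int
    try_handoff_enqueue (queue_t * peer_q, msg_t * m)
    {
      if (queue_full (peer_q))
        return -1;          /* retry later, after draining own queue */
      queue_push (peer_q, m);
      return 0;
    }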

Change-Id: Id9436960f932c58325fe4f5ef8ec67b50031aeda
Signed-off-by: Damjan Marion <damarion@cisco.com>

Change-Id: Id3ccfcfa2a88cf7aa106f1202af7cd677de32575
Signed-off-by: Damjan Marion <damarion@cisco.com>

Update the ping code to use the new function.
Change-Id: Ieb753b23f8402cbe5667c22747896784c8ece937
Signed-off-by: Florin Coras <fcoras@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>

Support logging to both syslog and elog.
Also include DaveB's is_mp_safe fix, which had been lost.
Change-Id: If82f7969e2f43c63c3fed5b1a0c7434c90c1f380
Signed-off-by: Colin Tregenza Dancer <ctd@metaswitch.com>

Change the rebuilding of worker-thread clone data structures to run
in parallel on the workers, instead of serially on main.
Change-Id: Ib76bcfbef1e51f2399972090f4057be7aaa84e08
Signed-off-by: Colin Tregenza Dancer <ctd@metaswitch.com>
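Schematically (illustrative; rebuild_clones is a hypothetical helper,
vlib_mains is the per-thread vector of vlib_main pointers):

    /* Before: the main thread rebuilt every worker's clones serially: */
    int i;
    for (i = 1; i < vec_len (vlib_mains); i++)
      rebuild_clones (vlib_mains[i]);

    /* After: each worker, while still parked in the barrier, rebuilds
       only its own clones, so the rebuilds run in parallel: */
    rebuild_clones (vm);            /* vm == this worker's vlib_main */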

- refactor existing congestion control code (RFC 6582/5681). Handling of ack
feedback now consists of: ack parsing, cc event detection, event handling,
congestion control update
- extend sack scoreboard to support sack-based retransmissions
- basic implementation of the Eifel detection algorithm (RFC 3522) for
  detecting spurious retransmissions (see the sketch after this list)
- actually initialize the per-thread frame freelist hash tables
- increase worker stack size to 2 MB
- fix session queue node out-of-buffer handling
- ensure that the local buffer cache vec_len matches reality
- avoid 2x spurious event requeues when short of buffers
- count out-of-buffer events
- make the builtin server thread-safe
- fix bihash template threading issue: need to paint -1 across uninitialized
working_copy_length vector elements (via rebase from master)
Change-Id: I646cb9f1add9a67d08f4a87badbcb117980ebfc4
Signed-off-by: Florin Coras <fcoras@cisco.com>
Signed-off-by: Dave Barach <dbarach@cisco.com>
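For the Eifel bullet above: RFC 3522 compares the timestamp echoed by
the first ACK that covers a retransmitted segment against the
timestamp the retransmission itself carried. A sketch with
illustrative names:

    /* If the ACK echoes a timestamp older than the retransmission's,
       the ACK was triggered by the ORIGINAL transmission, so the
       retransmit was spurious and the cwnd reduction can be undone. */
    int
    retransmit_was_spurious (u32 tsecr, u32 rxt_tsval)
    {
      return ((i32) (tsecr - rxt_tsval)) < 0;  /* serial-number compare */
    }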

Change-Id: I82c663bc0866c6c68ba354104b0bb059387f4b9d
Signed-off-by: Damjan Marion <damarion@cisco.com>

This patch deprecates stack-based thread identification and also
removes the requirement that thread stacks be adjacent. Finally,
possibly annoying for some folks, it renames all occurrences of
cpu_index and cpu_number to thread index. The word "cpu" is misleading
here, as a thread can be migrated to a different CPU, and the value is
not related to the Linux CPU index.
Change-Id: I68cdaf661e701d2336fc953dcb9978d10a70f7c1
Signed-off-by: Damjan Marion <damarion@cisco.com>

Change-Id: I6ff7b65a400734a47bc0a7d03faf86ef1cf4f8c8
Signed-off-by: Damjan Marion <damarion@cisco.com>

The worker main loop is now shared code with the main thread's main
loop, so there is no need to export these functions anymore.
Change-Id: I99ee2eee981c1b88ca31d20eabeb6c21d030a34d
Signed-off-by: Damjan Marion <damarion@cisco.com>

Change-Id: Id18d59c9442602633a6310b2001a95bce8b6b232
Signed-off-by: Damjan Marion <damarion@cisco.com>

Clean up spurious binary API client link dependency on libvlib.so,
which managed to hide behind vlib_mains == 0 checks reached by
VLIB_xxx_FUNCTION macros.
Change-Id: I5df1f8ab07dca1944250e643ccf06e60a8462325
Signed-off-by: Dave Barach <dave@barachs.net>

Change-Id: I8e2e8f94a884ab2f9909d0c83ba00edd38cdab77
Signed-off-by: Damjan Marion <damarion@cisco.com>

Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>