Change-Id: I78c0a6da5d8fc63c1ced43589c42abc15ab12b16
Signed-off-by: Damjan Marion <damarion@cisco.com>
Add a bucket-level lock bit. Use a spinlock only when actually
allocating, freeing, or splitting a bucket. Should improve
multi-thread add/del performance.
Change-Id: I3e40e2a8371685457f340d6584dea14e3207f2b0
Signed-off-by: Dave Barach <dave@barachs.net>
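A minimal sketch of the locking idea, with invented names and an invented bucket layout (not the actual bihash code): one bit of the bucket word acts as the lock, and writers spin on it only around structural changes such as allocating, freeing, or splitting a bucket.

#include <stdint.h>

/* Illustrative bucket word: one bit serves as the lock, the remaining
 * bits hold bucket metadata.  The real bihash bucket layout differs;
 * this only shows the "lock bit, spin only on structural change" idea. */
#define BUCKET_LOCK_BIT (1ULL << 63)

typedef struct
{
  volatile uint64_t as_u64;
} bucket_t;

static inline void
bucket_lock (bucket_t * b)
{
  /* Spin until this thread is the one that set the lock bit. */
  while (__atomic_fetch_or (&b->as_u64, BUCKET_LOCK_BIT, __ATOMIC_ACQUIRE)
         & BUCKET_LOCK_BIT)
    ;
}

static inline void
bucket_unlock (bucket_t * b)
{
  __atomic_fetch_and (&b->as_u64, ~BUCKET_LOCK_BIT, __ATOMIC_RELEASE);
}

/* Writers take the per-bucket lock only around structural changes. */
static void
bucket_split (bucket_t * b)
{
  bucket_lock (b);
  /* ... allocate a larger backing array and rehash the entries ... */
  bucket_unlock (b);
}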
Change-Id: Ic636297df4c03303fdcb176669f0268d80e22123
Signed-off-by: Damjan Marion <damarion@cisco.com>
Directly allocate and carve cache-line-aligned chunks of virtual
memory. To a first approximation, bihash wasn't using
clib_mem_free(...).
We eliminate mheap object headers/trailers, which improves space
efficiency. We also eliminate the 4 GB bihash table size limit. An 8_8
bihash with 100 million random entries uses 3.8 GB.
Change-Id: Icf925fdf99bce7d6ac407ac4edd30560b8f04808
Signed-off-by: Dave Barach <dave@barachs.net>
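A hedged sketch of the allocation strategy, with invented names (the real bihash allocator differs in detail): map a large region of virtual memory once, then carve cache-line-aligned chunks from it with a bump pointer; there are no per-object headers or trailers, and individual chunks are never freed.

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

#define CACHE_LINE_BYTES 64	/* illustrative; VPP uses CLIB_CACHE_LINE_BYTES */

typedef struct
{
  uint8_t *base;   /* start of the directly mapped arena */
  size_t size;     /* arena size in bytes */
  size_t next;     /* bump-pointer offset of the next free byte */
} arena_t;

static int
arena_init (arena_t * a, size_t size)
{
  a->base = mmap (0, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (a->base == MAP_FAILED)
    return -1;
  a->size = size;
  a->next = 0;
  return 0;
}

/* Carve a cache-line-aligned chunk; there is no per-chunk header or
 * trailer, and chunks are never individually freed. */
static void *
arena_alloc (arena_t * a, size_t nbytes)
{
  size_t offset =
    (a->next + CACHE_LINE_BYTES - 1) & ~(size_t) (CACHE_LINE_BYTES - 1);
  if (offset + nbytes > a->size)
    return 0;
  a->next = offset + nbytes;
  return a->base + offset;
}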
Reference-count the number of entries in each bucket. If the reference
count goes to zero, free the backing store.
Add long-term churn-testing to test_bihash_template.c, thanks to
Andrew Yourtchenko for the initial implementation.
Change-Id: I4fbd9229cacfaba8027a85cbf87b74afdead6e39
Signed-off-by: Dave Barach <dave@barachs.net>
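A hedged sketch of the reference-counting scheme, with invented field names: each bucket counts its live entries, allocates backing store when the first entry arrives, and frees that store when the count returns to zero.

#include <stdlib.h>
#include <assert.h>

/* Illustrative bucket: a backing store for its key/value pairs plus a
 * count of live entries.  Names are invented for this sketch. */
typedef struct
{
  void *kvp_backing;   /* backing store for this bucket's entries */
  int refcnt;          /* number of live entries in the bucket */
} bucket_t;

static void
bucket_add_entry (bucket_t * b)
{
  if (b->refcnt == 0)
    b->kvp_backing = malloc (64 /* illustrative size */ );
  b->refcnt++;
  /* ... store the key/value pair in kvp_backing ... */
}

static void
bucket_del_entry (bucket_t * b)
{
  assert (b->refcnt > 0);
  if (--b->refcnt == 0)
    {
      /* Last entry gone: free the backing store. */
      free (b->kvp_backing);
      b->kvp_backing = 0;
    }
}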
when the verbose option is used
Change-Id: Ib63ead4525332f897b8a1d8a4cf5a0eb1da1e7f3
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
writer_lock must be initialized before it is used.
Change-Id: Ib258aa09b3bccc4de6edba0eb75a7eec20f1a61f
Signed-off-by: JingLiuZTE <liu.jing5@zte.com.cn>
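A generic illustration of the point, using a POSIX spinlock rather than bihash's actual writer_lock: the lock must be initialized in the table's init path before any add/del path tries to take it.

#include <pthread.h>

typedef struct
{
  pthread_spinlock_t writer_lock;   /* illustrative stand-in for bihash's lock */
  /* ... other table state ... */
} table_t;

void
table_init (table_t * t)
{
  /* Initialize the lock before any add/del path can take it. */
  pthread_spin_init (&t->writer_lock, PTHREAD_PROCESS_PRIVATE);
}

void
table_add (table_t * t)
{
  pthread_spin_lock (&t->writer_lock);
  /* ... mutate the table ... */
  pthread_spin_unlock (&t->writer_lock);
}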
Change-Id: I4b1f27b95d67d48b7a13750ff8754c344ed7afa7
Signed-off-by: Chris Luke <chrisy@flirble.org>
Setting the bucket-level LRU cache size to zero removes the
bucket-level LRU cache code.
Change-Id: Idf2e63d0d508675e957366515863766f79a3479c
Signed-off-by: Dave Barach <dbarach@cisco.com>
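A hedged sketch of how a compile-time cache size of zero removes the cache code entirely; the macro and field names below are invented, not the template's actual ones.

#include <stdint.h>

typedef uint16_t u16;
typedef struct { uint64_t key, value; } kv_t;   /* illustrative KVP */

/* Illustrative knob; the real template uses its own per-template macro
 * for the bucket-level cache size. */
#define BUCKET_LRU_CACHE_SIZE 0   /* 0 compiles the cache out entirely */

typedef struct
{
  void *backing;                        /* key/value backing store */
#if BUCKET_LRU_CACHE_SIZE > 0
  kv_t cache[BUCKET_LRU_CACHE_SIZE];    /* bucket-level LRU cache */
  u16 cache_lru;                        /* LRU bookkeeping */
#endif
} bucket_t;

static inline kv_t *
bucket_cache_lookup (bucket_t * b, uint64_t key)
{
#if BUCKET_LRU_CACHE_SIZE > 0
  /* ... probe the cache before searching the backing store ... */
#endif
  (void) b;
  (void) key;
  return 0;   /* cache disabled (or miss): caller searches the backing store */
}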
Change-Id: I84908b9ad30d7555024e98b69ed37b111f31c27a
Signed-off-by: Dave Barach <dbarach@cisco.com>
According to Maciek, the easiest way to leverage the csit "performance
trend" job is to actually merge the patch once verified. Manual
testing indicates that the patch improves l2 path performance. Other
use-cases are TBD. It's possible that we'll need to back out the patch
depending on what happens.
Change-Id: Ic0a0363de35ef9be953ad7709c57c3936b73fd5a
Signed-off-by: Dave Barach <dave@barachs.net>
-Wmaybe-uninitialized is the new error enabled with -Wall.
This patch addresses the warning and fixes it.
Change-Id: I8fdf9ff2d236c46b717024a14874fbbbad8af303
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
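A generic example of this class of fix (not the actual change in this patch): give the variable a defined initial value so the compiler no longer sees a path on which it might be read uninitialized.

/* The compiler cannot prove that the loop below assigns rv on every
 * path, so -Wall's -Wmaybe-uninitialized fires unless rv gets an
 * initial value at its declaration. */
int
lookup_times_two (int key)
{
  int rv = 0;   /* the fix: initialize at declaration */
  int i;

  for (i = 0; i < 4; i++)
    if (i == key)
      rv = i * 2;
  return rv;
}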
A VPP crash is observed when MAC aging is enabled in multi-threaded mode.
If a thread other than thread 0 expands the working_copies vector,
working_copy_lengths should be initialized with vec_validate_init_empty(..., -1)
so that -1 is filled into the lower-numbered working_copy_lengths vector elements.
Change-Id: I60959fc6511306b33acae323df9c6898fc6c50ce
Signed-off-by: Steve Shin <jonshin@cisco.com>
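A hedged sketch of the fix, using vppinfra's vec_validate_init_empty() macro; the surrounding struct and function names are invented for illustration.

#include <vppinfra/vec.h>

/* Illustrative per-table state; the real bihash field layout differs. */
typedef struct
{
  void **working_copies;        /* one working copy per thread */
  int *working_copy_lengths;    /* -1 means "no working copy yet" */
} table_t;

static void
ensure_working_copy_slot (table_t * t, u32 thread_index)
{
  vec_validate (t->working_copies, thread_index);

  /* If a thread other than thread 0 grows this vector first, the
   * lower-numbered slots must read as -1 rather than 0, hence
   * vec_validate_init_empty instead of plain vec_validate. */
  vec_validate_init_empty (t->working_copy_lengths, thread_index, -1);
}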
Particularly in the DCLIB_VEC64=1 case, using vectors vs. raw
clib_mem_alloc'ed memory causes abysmal memory allocator performance.
Change-Id: I07a4dec0cd69ca357445385e2671cdf23c59b95d
Signed-off-by: Dave Barach <dave@barachs.net>
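A hedged before/after sketch of this kind of change, with invented function names (the actual bihash change differs in detail): replace a clib vector with a single raw clib_mem_alloc'ed, cache-line-aligned block.

#include <string.h>
#include <vppinfra/vec.h>
#include <vppinfra/mem.h>
#include <vppinfra/cache.h>

/* Before (illustrative): build the copy as a clib vector, paying the
 * vector header and resize machinery on every copy. */
static u64 *
copy_with_vector (u64 * src, u32 n)
{
  u64 *copy = 0;
  vec_validate (copy, n - 1);
  memcpy (copy, src, n * sizeof (u64));
  return copy;   /* released later with vec_free() */
}

/* After (illustrative): one raw, cache-line-aligned allocation. */
static u64 *
copy_with_mem_alloc (u64 * src, u32 n)
{
  u64 *copy = clib_mem_alloc_aligned (n * sizeof (u64),
                                      CLIB_CACHE_LINE_BYTES);
  memcpy (copy, src, n * sizeof (u64));
  return copy;   /* released later with clib_mem_free() */
}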
Change-Id: I82c663bc0866c6c68ba354104b0bb059387f4b9d
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: Ib0144ba3a9a09971d3946c932e8fed6d5c1ad278
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I55dad7b5cfb3d38c22b1105f7d2d61e7449410ea
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>