|
Type: refactor
Change-Id: I5235bf3e9aff58af6ba2c14e8c6529c4fc9ec86c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
In a case where one pounds on a single kvp in a KVP_AT_BUCKET_LEVEL
table, the code would sporadically return a transitional value (junk)
from a half-deleted kvp. At most 64 bits' worth of the kvp will be
written atomically, so using memset(...) to smear 0xFFs across a kvp
to free it left a lot to be desired.
Performance impact: very mildly positive; thanks to FC for running a
multi-thread host-stack perf/scale test.
Added an ASSERT to catch attempts to add a (key,value) pair which
contains the magic "free kvp" value.
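A minimal standalone sketch of the difference (not the actual vppinfra
code): freeing a kvp with one atomic 64-bit store of the magic "free"
key, rather than memset()ing 0xFFs across the whole kvp:

  #include <stdint.h>
  #include <string.h>

  typedef struct { uint64_t key, value; } kvp8_8_t;

  #define KVP_FREE_KEY (~0ULL)

  static inline void kvp_free_racy (kvp8_8_t *kvp)
  {
    /* Not atomic: a concurrent reader can observe a half-smeared
       kvp and return junk. */
    memset (kvp, 0xFF, sizeof (*kvp));
  }

  static inline void kvp_free_atomic (kvp8_8_t *kvp)
  {
    /* One atomic 64-bit store of the free marker into the key; a
       reader sees either the old key or the marker, never a mix. */
    __atomic_store_n (&kvp->key, KVP_FREE_KEY, __ATOMIC_RELEASE);
  }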
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6a1aa8a2c30bc70bec4b696ce7b17c2839927065
|
|
This adds two new exported functions
for the clib_bihash:
* clib_bihash_add_with_overwrite_cb, which allows
passing a callback to be called on overwriting
a key, with the bucket lock held.
* clib_bihash_add_del_with_hash, which does an add_del
with a precomputed hash.
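A usage sketch of the two entry points on an 8_8 table; the exact
prototypes live in bihash_template.h, so the callback and argument
shapes below are assumptions:

  #include <vppinfra/bihash_8_8.h>

  static void
  my_overwrite_cb (clib_bihash_kv_8_8_t *old_kv, void *arg)
  {
    /* Runs with the bucket lock held when an existing key is
       overwritten, e.g. to release resources owned by old_kv->value. */
  }

  static void
  example (clib_bihash_8_8_t *h, clib_bihash_kv_8_8_t *kv)
  {
    /* Add, invoking the callback if the key already exists. */
    clib_bihash_add_with_overwrite_cb_8_8 (h, kv, my_overwrite_cb, 0);

    /* Add with a hash computed up front (e.g. for prefetching). */
    u64 hash = clib_bihash_hash_8_8 (kv);
    clib_bihash_add_del_with_hash_8_8 (h, kv, hash, 1 /* is_add */);
  }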
Type: feature
Change-Id: I1590c933fa7cf21e6a8ada89b3456a60c4988244
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Type: fix
When freeing an uninstantiated bihash
created with dont_add_to_all_bihash_list = 1,
we get a warning. This removes the
warning and the search for the bihash on
cleanup.
Change-Id: Iac50ce7e30b97925768f7ad3cb1d30af14686e21
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
For portability reasons it is better to have everything wrapped in clib code,
i.e. instead of using getcpu() we have clib_get_current_numa_node() and
clib_get_current_cpu_id().
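A standalone sketch of the wrapper idea (the my_* names are
hypothetical; the real helpers are clib_get_current_cpu_id() and
clib_get_current_numa_node()): hide the Linux-specific calls behind
small portable functions so callers never touch OS APIs directly:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  static inline int my_get_current_cpu_id (void)
  {
  #ifdef __linux__
    return sched_getcpu ();
  #else
    return -1; /* plug in the native query for other platforms */
  #endif
  }

  static inline int my_get_current_numa_node (void)
  {
  #ifdef __linux__
    unsigned cpu, node;
    if (syscall (SYS_getcpu, &cpu, &node, 0) == 0)
      return (int) node;
  #endif
    return -1;
  }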
Type: refactor
Change-Id: I29b52d7f29bc7f93873402c4070561f564b71c63
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
When calling the bihash_add_del... functions, some callers add a comment
beside the value to indicate that it is the is_add param. Make the code
easier to read by adding defines for add and delete that callers
can use instead of a bare 0 or 1.
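An illustration of the change; the define names below are assumptions
based on this description, not verified against the tree:

  #include <vppinfra/bihash_8_8.h>

  #define BIHASH_ADD 1
  #define BIHASH_DEL 0

  static void example (clib_bihash_8_8_t *h, clib_bihash_kv_8_8_t *kv)
  {
    /* Before: the bare flag needs a comment at every call site. */
    clib_bihash_add_del_8_8 (h, kv, 1 /* is_add */);

    /* After: the intent is in the name. */
    clib_bihash_add_del_8_8 (h, kv, BIHASH_ADD);
    clib_bihash_add_del_8_8 (h, kv, BIHASH_DEL);
  }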
Type: improvement
Signed-off-by: Paul Atkins <patkins@graphiant.com>
Change-Id: Iab5f7c8e8df12ac62fc7e726ca1798622dcdb42c
|
|
Type: improvement
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ic31f7721f326ca9d78d645abcea63ce58df5bd5b
|
|
Type: improvement
Change-Id: I6d00ba840d2168af0658f97c45a42d39be7cbbad
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: Ifb0fa114414aa2fdc244f964612ca3ac3e29b5e1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
We were hitting "code not reachable" when setting
BIHASH_KVP_AT_BUCKET_LEVEL = 1.
Change-Id: I24d539df67ae7650a3b1969f5709a6f7366d786b
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
There can be a race condition when
a thread tries to do a bihash_search while
another instantiates the bihash.
Type: fix
Change-Id: Ic61b590763beb409e112957c43a5a66cd10afb28
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Icce7eab4510785e15bdcf97e4d1881b0f46f6899
|
|
* Avoid doing expensive bit extraction for the most likely case, where
bucket .log2_page_size == 0 and .linear_search == 0; saves 3-5 cycles
for lookup, data_prefetch and add operations
* use the bextr instruction when available (x86 BMI instruction set)
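A standalone sketch of the extraction itself (the surrounding
fast-path test is elided): when BMI is available, bextr extracts a bit
field in a single instruction; otherwise fall back to shift-and-mask:

  #include <stdint.h>
  #if defined(__BMI__) && defined(__x86_64__)
  #include <immintrin.h>
  #endif

  static inline uint64_t
  extract_bits (uint64_t x, unsigned start, unsigned len)
  {
  #if defined(__BMI__) && defined(__x86_64__)
    return _bextr_u64 (x, start, len);  /* single-instruction extract */
  #else
    return (x >> start) & ((1ULL << len) - 1);  /* portable fallback */
  #endif
  }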
Type: improvement
Change-Id: I163df36a29287482c5f133be8b21d62a2f7440de
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: Ia04bd26ecd689894753e036e52920316de611910
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Measured improvement is from 439 to 167 clocks for the add operation
in the 16_8 case...
Type: improvement
Change-Id: I975ff46ff30b983a3ec80a5cde25ccb68d7fa03b
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Template instances can allocate BIHASH_KVP_PER_PAGE data records
adjacent to the bucket, to remove a dependent read / prefetch.
Template instances can ask for immediate memory allocation, to avoid
several branches in the lookup path.
Clean up the l2 fib and gbp plugin code to use clib_bihash_get_bucket(...).
Use hugepages for bihash allocation arenas.
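A sketch of how a template instance might opt in, modeled on the
per-variant headers (e.g. bihash_16_8.h); BIHASH_LAZY_INSTANTIATE is an
assumed companion knob for the "immediate memory allocation" behavior:

  #undef BIHASH_TYPE
  #define BIHASH_TYPE _16_8
  #define BIHASH_KVP_PER_PAGE 4

  /* Store BIHASH_KVP_PER_PAGE kvps inline with the bucket, removing
     a dependent read / prefetch on lookup. */
  #define BIHASH_KVP_AT_BUCKET_LEVEL 1

  /* Allocate backing memory up front instead of on first add,
     avoiding "is it instantiated yet?" branches in the lookup path. */
  #define BIHASH_LAZY_INSTANTIATE 0

  #include <vppinfra/bihash_template.h>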
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I92fc11bc58e48d84e2d61f44580916dd1c56361c
|
|
Type: improvement
Change-Id: I073bb7bea2a55eabbb6c253b003966f0a821e4a3
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: feature
Change-Id: I28f7a658be3f3beec9ea32635b60d1d3a10d9b06
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Add controls to list / not list a specific bihash in clib_all_bihashes,
and to immediately initialize a bihash.
clib_bihash_init2 is now the primary API. It takes a typical args_t
structure. clib_bihash_init becomes a compatibility widget: it
fabricates an args_t and calls init2...
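A sketch of the init2 flow; the args_t field names here
(instantiate_immediately, dont_add_to_all_bihash_list) are inferred
from this and neighboring commit messages:

  #include <vppinfra/bihash_8_8.h>

  static void example_init (clib_bihash_8_8_t *h)
  {
    clib_bihash_init2_args_8_8_t _a = { 0 }, *a = &_a;

    a->h = h;
    a->name = "example";
    a->nbuckets = 64 << 10;
    a->memory_size = 1ULL << 30;
    a->instantiate_immediately = 1;     /* initialize now, not on first add */
    a->dont_add_to_all_bihash_list = 1; /* keep out of clib_all_bihashes */

    clib_bihash_init2_8_8 (a);
  }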
Type: refactor
Ticket: VPP-1758
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ib3e1304884997cf7025af20bdc67a7dda290f15b
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ie37ff66faba79e3b8f46c7a704137f9ef2acc773
|
|
Reduces the vpp image virtual size by multiple gigabytes.
Add a "show bihash" command which displays configured and current
virtual space in use by bihash tables.
Modify the .py test framework to call "show bihash" on test teardown.
Type: refactor
Change-Id: Ifc1b7e2c43d29bbef645f6802fa29ff8ef09940c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Example / unit-test in .../src/plugins/unittest/bihash_test.c
Change-Id: I23fd0ba742d65291667a755965aee1a3d3477ca2
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Commit ffb14b9554afa1e58c3657e0c91dda3135008274 changed the semantics
of alloc_arena_next to be an offset from alloc_arena, but the
available-memory check in BV (alloc_aligned) still treats it as a
virtual address. As a result the check always succeeds, so over a
prolonged period the bihash arena allocator can potentially overwrite
whatever follows the arena.
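A sketch of the fix (field names approximate the template's
alloc_arena accessors): once alloc_arena_next is an offset, the bounds
check must compare it against the arena size, not a virtual address:

  #include <stdint.h>

  typedef struct
  {
    uint64_t alloc_arena;      /* base VA of the arena */
    uint64_t alloc_arena_next; /* now an OFFSET from alloc_arena */
    uint64_t alloc_arena_size; /* total bytes in the arena */
  } bihash_sketch_t;

  static void *alloc_aligned_sketch (bihash_sketch_t *h, uint64_t nbytes)
  {
    /* Broken (pre-fix): the offset was compared against a VA-based
       limit, so the test always passed and the arena overflowed:
       if (h->alloc_arena_next + nbytes >
           h->alloc_arena + h->alloc_arena_size) */

    /* Fixed: offset vs. size. */
    if (h->alloc_arena_next + nbytes > h->alloc_arena_size)
      return 0; /* out of arena */

    void *rv = (void *) (h->alloc_arena + h->alloc_arena_next);
    h->alloc_arena_next += nbytes;
    return rv;
  }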
Change-Id: I18882c5f340ca767a389e15cca2696a0a97ef015
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
This patch makes 32/64 bit interoperable shared memory bihash tables
work regardless of where they're mapped.
Change-Id: If5b4a37ccdaa75410eba755c7d7195633de1b30b
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Move the binary api segment above 4gb
Change-Id: I40e8aa7a97722a32397f5a538b5ff8344c50d408
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I1f0aae16e4ace850d7d79b9c2c644a3e0d002636
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Applications such as NAT that dynamically create entries require these
entries to expire after some time. A bihash user can now lazily delete
expired entries: when inserting into a full bucket, an expired entry is
overwritten.
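A sketch of the lazy-expiry pattern; the entry point name
(clib_bihash_add_or_overwrite_stale) and callback shape are
assumptions approximated from the description:

  #include <vppinfra/bihash_8_8.h>

  /* Return nonzero if the existing entry has expired and may be
     reused. Here the value is assumed to hold an expiry timestamp. */
  static int is_stale_cb (clib_bihash_kv_8_8_t *kv, void *arg)
  {
    u64 now = *(u64 *) arg;
    return now > kv->value;
  }

  static void example (clib_bihash_8_8_t *h, clib_bihash_kv_8_8_t *kv,
                       u64 now)
  {
    /* Like add, but if the bucket is full an entry for which
       is_stale_cb returns true is overwritten instead. */
    clib_bihash_add_or_overwrite_stale_8_8 (h, kv, is_stale_cb, &now);
  }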
Change-Id: I6852305df399b546159407f1729c856afde5a634
Signed-off-by: Matus Fabian <matfabia@cisco.com>
|
|
Change-Id: I78c0a6da5d8fc63c1ced43589c42abc15ab12b16
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Add a bucket-level lock bit. Use a spinlock only when actually
allocating, freeing, or splitting a bucket. Should improve
multi-thread add/del performance.
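A standalone sketch of the lock-bit idea (not the vppinfra code): one
bit in the bucket word doubles as a spinlock, taken only around
structural changes such as allocating, freeing, or splitting:

  #include <stdint.h>

  #define BUCKET_LOCK_BIT (1ULL << 63)

  static inline void bucket_lock (uint64_t *bucket)
  {
    /* Spin until this thread is the one that set the lock bit. */
    while (__atomic_fetch_or (bucket, BUCKET_LOCK_BIT, __ATOMIC_ACQUIRE)
           & BUCKET_LOCK_BIT)
      ;
  }

  static inline void bucket_unlock (uint64_t *bucket)
  {
    __atomic_and_fetch (bucket, ~BUCKET_LOCK_BIT, __ATOMIC_RELEASE);
  }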
Change-Id: I3e40e2a8371685457f340d6584dea14e3207f2b0
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ic636297df4c03303fdcb176669f0268d80e22123
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Use a similar approach to that in clib_bihash_search_inline_with_hash,
to be able to do the hash calculation and lookup separately.
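A sketch of the split pattern this enables, using the 8_8 variant's
spellings of the template helpers:

  #include <vppinfra/bihash_8_8.h>

  static int example_lookup (clib_bihash_8_8_t *h,
                             clib_bihash_kv_8_8_t *kv)
  {
    /* 1: compute the hash by itself, e.g. for a batch of keys. */
    u64 hash = clib_bihash_hash_8_8 (kv);

    /* 2: prefetch the bucket while doing unrelated work. */
    clib_bihash_prefetch_bucket_8_8 (h, hash);

    /* 3: search with the precomputed hash. */
    return clib_bihash_search_inline_with_hash_8_8 (h, hash, kv);
  }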
Change-Id: Ief79aa0f9f1e42b0af88be4807ca01fac30a80d7
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Change-Id: I2e9d01ccba5288e89b886464436097d3cb7d2d18
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Directly allocate and carve cache-line-aligned chunks of virtual
memory. To a first approximation, bihash wasn't using
clib_mem_free(...).
We eliminate mheap object header/trailers, which improves space
efficiency. We also eliminate the 4gb bihash table size limit. An 8_8
bihash w/ 100 million random entries uses 3.8 Gbytes.
Change-Id: Icf925fdf99bce7d6ac407ac4edd30560b8f04808
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Reference-count the number of entries in each bucket. If the reference
count goes to zero, free the backing store.
Add long-term churn-testing to test_bihash_template.c, thanks to
Andrew Yourtchenko for the initial implementation.
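A sketch of the refcount idea (names hypothetical): count live kvps
per bucket and release the bucket's backing store when the count
reaches zero:

  #include <stdint.h>
  #include <stdlib.h>

  typedef struct
  {
    void *pages;     /* backing store for this bucket's kvps */
    uint32_t refcnt; /* number of live entries in the bucket */
  } bucket_sketch_t;

  static void bucket_del_entry (bucket_sketch_t *b)
  {
    if (--b->refcnt == 0)
      {
        /* Last entry gone: free the backing store so long-term
           churn does not leak pages. */
        free (b->pages);
        b->pages = 0;
      }
  }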
Change-Id: I4fbd9229cacfaba8027a85cbf87b74afdead6e39
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
when verbose option is used
Change-Id: Ib63ead4525332f897b8a1d8a4cf5a0eb1da1e7f3
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
|
|
Setting the bucket-level LRU cache size to zero removes the
bucket-level LRU cache code.
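A sketch of the compile-time pattern; BIHASH_KVP_CACHE_SIZE is assumed
to be the cache-size knob of that era, so setting it to zero compiles
the LRU code out entirely:

  #define BIHASH_KVP_CACHE_SIZE 0

  #if BIHASH_KVP_CACHE_SIZE > 0
    /* bucket-level LRU cache fields and cache-maintenance code
       live here, and vanish when the size is zero */
  #endif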
Change-Id: Idf2e63d0d508675e957366515863766f79a3479c
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Change-Id: I84908b9ad30d7555024e98b69ed37b111f31c27a
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
According to Maciek, the easiest way to leverage the CSIT "performance
trend" job is to actually merge the patch once verified. Manual
testing indicates that the patch improves l2 path performance. Other
use-cases are TBD. It's possible that we'll need to back out the patch
depending on what happens.
Change-Id: Ic0a0363de35ef9be953ad7709c57c3936b73fd5a
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Particularly in the -DCLIB_VEC64=1 case, using vectors vs. raw
clib_mem_alloc'ed memory causes abysmal memory allocator performance.
Change-Id: I07a4dec0cd69ca357445385e2671cdf23c59b95d
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I55dad7b5cfb3d38c22b1105f7d2d61e7449410ea
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|