In a case where one pounds on a single kvp in a KVP_AT_BUCKET_LEVEL
table, the code would sporadically return a transitional (junk) value
from a half-deleted kvp. At most 64 bits of a kvp are written
atomically, so using memset(...) to smear 0xFF across a kvp to free it
left a lot to be desired.
Performance impact: mildly positive. Thanks to FC for running a
multi-thread host-stack perf/scale test.
Added an ASSERT to catch attempts to add a (key,value) pair which
contains the magic "free kvp" value.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6a1aa8a2c30bc70bec4b696ce7b17c2839927065
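
A minimal sketch of the idea behind the fix, assuming an 8-byte-key /
8-byte-value kvp layout and an all-ones "free kvp" sentinel (both are
illustrative assumptions, not the exact upstream change): free a slot
with one atomic store to the word readers check first, rather than
memset'ing across the whole record.

#include <assert.h>
#include <stdint.h>

typedef struct
{
  uint64_t key;                 /* readers test this word first */
  uint64_t value;
} kvp_8_8_t;

#define KVP_FREE_SENTINEL (~0ULL)  /* hypothetical "free kvp" magic value */

static inline void
kvp_mark_free (kvp_8_8_t * kvp)
{
  /* A single 64-bit release store is atomic, so a concurrent reader
   * sees either the old key or the free sentinel, never the
   * half-cleared record that memset (kvp, 0xFF, sizeof (*kvp))
   * could expose. */
  __atomic_store_n (&kvp->key, KVP_FREE_SENTINEL, __ATOMIC_RELEASE);
}

static inline void
kvp_add (kvp_8_8_t * kvp, uint64_t key, uint64_t value)
{
  /* Mirrors the new ASSERT: refuse to add a pair whose key collides
   * with the free-slot sentinel. */
  assert (key != KVP_FREE_SENTINEL);
  kvp->value = value;
  __atomic_store_n (&kvp->key, key, __ATOMIC_RELEASE);
}
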
Type: improvement
Change-Id: Ifb0fa114414aa2fdc244f964612ca3ac3e29b5e1
Signed-off-by: Damjan Marion <damarion@cisco.com>
Template instances can allocate BIHASH_KVP_PER_PAGE data records
adjacent to the bucket, removing a dependent read / prefetch.
Template instances can ask for immediate memory allocation, avoiding
several branches in the lookup path.
Clean up the l2 fib and gbp plugin code: use clib_bihash_get_bucket(...).
Use hugepages for bihash allocation arenas.
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I92fc11bc58e48d84e2d61f44580916dd1c56361c
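
A hedged sketch of how a template instance might opt in to these
features. The macro names BIHASH_KVP_AT_BUCKET_LEVEL and
BIHASH_LAZY_INSTANTIATE below are assumptions inferred from the commit
text; check the vppinfra/bihash_*.h headers for the exact spellings and
defaults.

/* Hypothetical bihash_8_8-style instance definition. */
#undef BIHASH_TYPE
#undef BIHASH_KVP_PER_PAGE
#undef BIHASH_KVP_AT_BUCKET_LEVEL
#undef BIHASH_LAZY_INSTANTIATE

#define BIHASH_TYPE _8_8
#define BIHASH_KVP_PER_PAGE 4

/* Keep the first BIHASH_KVP_PER_PAGE kvps directly in the bucket,
 * removing a dependent read / prefetch on lookup (assumed knob name). */
#define BIHASH_KVP_AT_BUCKET_LEVEL 1

/* Allocate the backing arena at init time rather than on first add,
 * avoiding several branches in the lookup path (assumed knob name). */
#define BIHASH_LAZY_INSTANTIATE 0

#include <vppinfra/bihash_template.h>

Callers that previously computed bucket addresses by hand would switch
to clib_bihash_get_bucket(...), so the same code works whether or not
the kvps live at bucket level.
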
Example / unit-test in .../src/plugins/unittest/bihash_test.c
Change-Id: I23fd0ba742d65291667a755965aee1a3d3477ca2
Signed-off-by: Dave Barach <dave@barachs.net>
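
A small usage sketch in the spirit of that unit test; the sizing
numbers are illustrative, and the template body is assumed to be
instantiated by including <vppinfra/bihash_template.c> in the same
compilation unit.

#include <vppinfra/bihash_8_8.h>
#include <vppinfra/bihash_template.c>

static int
bihash_smoke_test (void)
{
  clib_bihash_8_8_t h;
  clib_bihash_kv_8_8_t kv, result;

  clib_bihash_init_8_8 (&h, "smoke test", 32 /* nbuckets */,
                        1ULL << 20 /* memory size */);

  kv.key = 42;
  kv.value = 0x1234;
  clib_bihash_add_del_8_8 (&h, &kv, 1 /* is_add */);

  /* search returns 0 on a hit, < 0 on a miss */
  if (clib_bihash_search_8_8 (&h, &kv, &result) < 0)
    return -1;

  clib_bihash_free_8_8 (&h);
  return (result.value == 0x1234) ? 0 : -1;
}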