path: root/src/vppinfra/test_bihash_template.c
Commit history (newest first); each entry shows age, commit message, author, files changed, and lines changed.
2024-01-19  vppinfra: fix test_bihash  (Dmitry Valter; 1 file, -2/+2)

Correctly wrap data indices in test_bihash.

Type: fix
Signed-off-by: Dmitry Valter <d-valter@yandex-team.com>
Change-Id: I740fa1cf9f8c382c12f01f607095c5995be6845f
2023-03-18  vppinfra: fix corner-cases in bihash lookup  (Dave Barach; 1 file, -0/+65)

In a case where one pounds on a single kvp in a KVP_AT_BUCKET_LEVEL table,
the code would sporadically return a transitional value (junk) from a
half-deleted kvp. At most, 64-bits worth of the kvp will be written
atomically, so using memset(...) to smear 0xFF's across a kvp to free it
left a lot to be desired.

Performance impact: very mild positive, thanks to FC for doing a
multi-thread host stack perf/scale test.

Added an ASSERT to catch attempts to add a (key,value) pair which contains
the magic "free kvp" value.

Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6a1aa8a2c30bc70bec4b696ce7b17c2839927065
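As an illustration of the fix described above, here is a hedged sketch (not the actual patch): with an 8_8 key/value pair, the free marker can be published with a single atomic 64-bit store instead of a memset, and an ASSERT can refuse keys that collide with the magic free value. The all-ones marker and the helper names are assumptions.

#include <vppinfra/bihash_8_8.h>
#include <vppinfra/error.h>

/* assumed free-kvp marker: the all-ones pattern a memset of 0xFF produces */
#define DEMO_FREE_KEY (~0ULL)

static void
demo_kvp_free_atomically (clib_bihash_kv_8_8_t *kvp)
{
  /* one atomic store of the key word: concurrent readers see either the
     old key or the free marker, never a half-deleted mixture */
  __atomic_store_n (&kvp->key, DEMO_FREE_KEY, __ATOMIC_RELEASE);
}

static void
demo_kvp_add_check (clib_bihash_kv_8_8_t *kvp)
{
  /* mirror the ASSERT mentioned in the commit: never add the magic value */
  ASSERT (kvp->key != DEMO_FREE_KEY);
}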
2022-04-04  vppinfra: make _vec_len() read-only  (Damjan Marion; 1 file, -2/+1)

Use of _vec_len() to set the vector length breaks the address sanitizer.
Users should use vec_set_len(), vec_inc_len(), or vec_dec_len() instead.

Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
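A hedged sketch of the replacement pattern the commit names; the helper functions themselves are hypothetical.

#include <vppinfra/vec.h>

static void
demo_trim_to (u32 *v, u32 new_len)
{
  /* before this change: _vec_len (v) = new_len;   (now read-only) */
  vec_set_len (v, new_len);   /* set an absolute length           */
}

static void
demo_pop_one (u32 *v)
{
  /* vec_inc_len (v, n) grows within the existing allocation;
     vec_dec_len (v, n) shrinks it */
  vec_dec_len (v, 1);
}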
2020-08-06  vppinfra: harmonize function names  (Dave Barach; 1 file, -12/+12)

Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Icce7eab4510785e15bdcf97e4d1881b0f46f6899
2020-04-21  vppinfra: bihash improvements  (Dave Barach; 1 file, -2/+24)

Template instances can allocate BIHASH_KVP_PER_PAGE data records tangent to
the bucket, to remove a dependent read / prefetch.

Template instances can ask for immediate memory allocation, to avoid several
branches in the lookup path.

Clean up the l2 fib and gbp plugin code: use clib_bihash_get_bucket(...).

Use hugepages for bihash allocation arenas.

Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I92fc11bc58e48d84e2d61f44580916dd1c56361c
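A hedged sketch of the lookup-path idea above, assuming an 8_8 template instantiation (the _8_8 function names follow from that assumption): fetch the bucket up front with clib_bihash_get_bucket and prefetch it, so that on KVP_AT_BUCKET_LEVEL instances the adjacent data records are already warm when the search runs.

#include <vppinfra/bihash_8_8.h>
#include <vppinfra/cache.h>

static int
demo_prefetch_then_search (clib_bihash_8_8_t *h, clib_bihash_kv_8_8_t *kv)
{
  u64 hash = clib_bihash_hash_8_8 (kv);
  clib_bihash_bucket_8_8_t *b = clib_bihash_get_bucket_8_8 (h, hash);

  /* warm the bucket (and, for KVP_AT_BUCKET_LEVEL tables, the data
     records tangent to it) before the dependent read */
  CLIB_PREFETCH (b, CLIB_CACHE_LINE_BYTES, LOAD);

  /* ...interleave other work here, then search... */
  return clib_bihash_search_inline_with_hash_8_8 (h, hash, kv);
}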
2018-08-28  32/64 shmem bihash interoperability  (Dave Barach; 1 file, -1/+28)

Move the binary api segment above 4gb.

Change-Id: I40e8aa7a97722a32397f5a538b5ff8344c50d408
Signed-off-by: Dave Barach <dave@barachs.net>
2018-08-22  bihash: add support for reuse of expired entry when bucket is full (VPP-1272)  (Matus Fabian; 1 file, -0/+45)

Applications such as NAT that dynamically create entries require these
entries to expire after some time. A bihash user can now lazily delete
expired entries: when an insert finds the bucket full, an expired entry is
overwritten instead.

Change-Id: I6852305df399b546159407f1729c856afde5a634
Signed-off-by: Matus Fabian <matfabia@cisco.com>
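A hedged sketch of the lazy-expiry usage described above, assuming the 8_8 instantiation and the clib_bihash_add_or_overwrite_stale entry point added for this feature; the timestamp-in-value encoding and the helper names are invented for illustration.

#include <vppinfra/bihash_8_8.h>

typedef struct { f64 now; } demo_expiry_arg_t;

static int
demo_is_stale_cb (clib_bihash_kv_8_8_t *kv, void *arg)
{
  demo_expiry_arg_t *a = arg;
  /* hypothetical encoding: the value carries the entry's expiry time */
  return ((f64) kv->value) < a->now;
}

static int
demo_add_with_reuse (clib_bihash_8_8_t *h, clib_bihash_kv_8_8_t *kv, f64 now)
{
  demo_expiry_arg_t a = { .now = now };
  /* when the bucket is full, an entry the callback reports as stale
     is overwritten instead of forcing a bucket split */
  return clib_bihash_add_or_overwrite_stale_8_8 (h, kv, demo_is_stale_cb, &a);
}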
2018-07-20  Fine-grained add / delete locking  (Dave Barach; 1 file, -36/+106)

Add a bucket-level lock bit. Use a spinlock only when actually allocating,
freeing, or splitting a bucket. Should improve multi-thread add/del
performance.

Change-Id: I3e40e2a8371685457f340d6584dea14e3207f2b0
Signed-off-by: Dave Barach <dave@barachs.net>
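A generic, hedged sketch of the bucket-level lock-bit technique (hypothetical names and layout, not the real bihash bucket): one bit in the bucket word is taken with an atomic test-and-set, so only contended callers, and the rarer allocate/free/split paths, ever spin.

#include <stdint.h>

#define DEMO_BUCKET_LOCK_BIT (1ULL << 63)

typedef struct { uint64_t as_u64; } demo_bucket_t;

static void
demo_bucket_lock (demo_bucket_t *b)
{
  /* spin until this thread is the one that flips the bit from 0 to 1 */
  while (__atomic_fetch_or (&b->as_u64, DEMO_BUCKET_LOCK_BIT,
                            __ATOMIC_ACQUIRE) & DEMO_BUCKET_LOCK_BIT)
    ;
}

static void
demo_bucket_unlock (demo_bucket_t *b)
{
  __atomic_fetch_and (&b->as_u64, ~DEMO_BUCKET_LOCK_BIT, __ATOMIC_RELEASE);
}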
2018-02-22  bihash table size perf/scale improvements  (Dave Barach; 1 file, -7/+17)

Directly allocate and carve cache-line-aligned chunks of virtual memory. To
a first approximation, bihash wasn't using clib_mem_free(...). We eliminate
mheap object header/trailers, which improves space efficiency. We also
eliminate the 4gb bihash table size limit. An 8_8 bihash with 100 million
random entries uses 3.8 Gbytes.

Change-Id: Icf925fdf99bce7d6ac407ac4edd30560b8f04808
Signed-off-by: Dave Barach <dave@barachs.net>
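A generic, hedged sketch of the "directly allocate and carve cache-line-aligned chunks" approach (names invented, not the bihash implementation): one large anonymous mapping plus a bump pointer replaces per-object mheap headers and trailers.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define DEMO_CACHE_LINE 64

typedef struct { uint8_t *base; size_t used, size; } demo_arena_t;

static int
demo_arena_init (demo_arena_t *a, size_t size)
{
  a->base = mmap (0, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  a->used = 0;
  a->size = size;
  return a->base == MAP_FAILED ? -1 : 0;
}

static void *
demo_arena_carve (demo_arena_t *a, size_t nbytes)
{
  /* round up so every chunk stays cache-line aligned; no per-object
     header or trailer is spent on bookkeeping */
  nbytes = (nbytes + DEMO_CACHE_LINE - 1) & ~(size_t) (DEMO_CACHE_LINE - 1);
  if (a->used + nbytes > a->size)
    return 0;
  void *rv = a->base + a->used;
  a->used += nbytes;
  return rv;
}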
2018-02-08  Minimize bihash memory consumption  (Dave Barach; 1 file, -93/+145)

Reference-count the number of entries in each bucket. If the reference
count goes to zero, free the backing store.

Add long-term churn-testing to test_bihash_template.c, thanks to Andrew
Yourtchenko for the initial implementation.

Change-Id: I4fbd9229cacfaba8027a85cbf87b74afdead6e39
Signed-off-by: Dave Barach <dave@barachs.net>
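A hedged, schematic sketch of the refcount rule above (hypothetical names, not the real bucket layout): the bucket tracks its live entries and releases the backing store when the last one is deleted.

#include <stdint.h>
#include <stdlib.h>

typedef struct
{
  void *kvp_backing;   /* storage for this bucket's key/value pages */
  uint32_t refcnt;     /* number of live entries in the bucket      */
} demo_rc_bucket_t;

static void
demo_bucket_entry_deleted (demo_rc_bucket_t *b)
{
  if (--b->refcnt == 0)
    {
      /* last entry gone: give the backing store back */
      free (b->kvp_backing);
      b->kvp_backing = 0;
    }
}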
2017-07-19  Add a bihash prefetchable bucket-level cache  (Dave Barach; 1 file, -6/+55)

According to Maciek, the easiest way to leverage the csit "performance
trend" job is to actually merge the patch once verified. Manual testing
indicates that the patch improves l2 path performance. Other use-cases are
TBD. It's possible that we'll need to back out the patch depending on what
happens.

Change-Id: Ic0a0363de35ef9be953ad7709c57c3936b73fd5a
Signed-off-by: Dave Barach <dave@barachs.net>
2017-05-18  VPP-847: improve bihash template memory allocator performance  (Dave Barach; 1 file, -1/+44)

Particularly in the -DCLIB_VEC64=1 case, using vectors vs. raw
clib_mem_alloc'ed memory causes abysmal memory allocator performance.

Change-Id: I07a4dec0cd69ca357445385e2671cdf23c59b95d
Signed-off-by: Dave Barach <dave@barachs.net>
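A hedged sketch of the direction the commit describes, replacing vector-backed working storage with raw clib_mem_alloc'ed memory; the wrapper names are invented.

#include <vppinfra/mem.h>
#include <vppinfra/cache.h>

static void *
demo_grab_working_storage (uword nbytes)
{
  /* raw, cache-line-aligned allocation: no vector header bookkeeping,
     and far fewer trips through the allocator than repeated vector
     growth (especially with -DCLIB_VEC64=1) */
  return clib_mem_alloc_aligned (nbytes, CLIB_CACHE_LINE_BYTES);
}

static void
demo_release_working_storage (void *p)
{
  clib_mem_free (p);
}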
2017-01-02  Handle excessive hash collisions, VPP-555  (Dave Barach; 1 file, -26/+6)

Change-Id: I55dad7b5cfb3d38c22b1105f7d2d61e7449410ea
Signed-off-by: Dave Barach <dave@barachs.net>
2016-12-28  Reorganize source tree to use single autotools instance  (Damjan Marion; 1 file, -0/+297)

Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>