path: root/src/vppinfra/hash.c
Age | Commit message | Author | Files | Lines
2021-10-07 | vppinfra: asan: improve overflow semantic | Benoît Ganne | 1 | -12/+16

    Type: improvement
    Change-Id: Ia63899b82e34f179f9efa921e4630b598f2a86cb
    Signed-off-by: Benoît Ganne <bganne@cisco.com>
2021-05-06 | vppinfra: fix tests | Damjan Marion | 1 | -5/+5

    Type: fix
    Change-Id: If59a66aae658dd35dbcb4987ab00c306b3c6e2e2
    Signed-off-by: Damjan Marion <damarion@cisco.com>
2021-02-15 | vppinfra: fix memcpy undefined behaviour | Benoît Ganne | 1 | -9/+9

    Calling mem{cpy,move} with NULL pointers results in undefined
    behaviour. This in turn is exploited by GCC. For example, the
    sequence:

        memcpy (dst, src, n);
        if (!src)
          return;
        src[0] = 0xcafe;

    will be optimized as:

        memcpy (dst, src, n);
        src[0] = 0xcafe;

    IOW the test for NULL is gone.

    vec_*() functions sometimes call memcpy with NULL pointers and 0
    length, triggering this optimization. For example, the sequence:

        vec_append (v1, v2);
        len = vec_len (v2);

    will crash if v2 is NULL, because the test for the NULL pointer in
    vec_len() has been optimized out.

    This commit fixes occurrences of such undefined behaviour, and also
    introduces a memcpy wrapper to catch those in debug mode.

    Type: fix
    Change-Id: I175e2dd726a883f97cf7de3b15f66d4b237ddefd
    Signed-off-by: Benoît Ganne <bganne@cisco.com>
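    A minimal standalone sketch of the guard this commit describes; the
    wrapper name safe_memcpy is hypothetical, not the one added to
    vppinfra. The point is that once a reached memcpy() call is seen,
    the compiler may assume its pointers are non-NULL, so the check has
    to happen before the call is ever made:

        #include <assert.h>
        #include <stddef.h>
        #include <string.h>

        /* Hypothetical guarded wrapper: skip the call entirely for
           n == 0, and assert in debug builds so offending call sites
           are caught instead of silently invoking UB. */
        static inline void *
        safe_memcpy (void *dst, const void *src, size_t n)
        {
          assert (n == 0 || (dst != NULL && src != NULL));
          if (n > 0)
            memcpy (dst, src, n);
          return dst;
        }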
2020-10-17 | vppinfra: explicitly export symbols | Damjan Marion | 1 | -16/+16

    Type: improvement
    Change-Id: I57a9f85f7df1fc48656b72592349f4c544302f77
    Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-11-27 | misc: add address sanitizer heap instrumentation | Benoît Ganne | 1 | -4/+12

    Introduce AddressSanitizer support: https://github.com/google/sanitizers/
    This starts with heap instrumentation. vlib_buffer, bihash and stack
    instrumentation should follow.

    Type: feature
    Change-Id: I7f20e235b2f79db72efd0e756f22c75f717a9884
    Signed-off-by: Benoît Ganne <bganne@cisco.com>
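    Heap instrumentation of a custom allocator rests on ASan's public
    manual-poisoning interface. A small sketch of that mechanism: the
    allocator usage is illustrative, while ASAN_POISON_MEMORY_REGION
    and its inverse are the real API from <sanitizer/asan_interface.h>:

        /* Build with: cc -fsanitize=address demo.c */
        #include <sanitizer/asan_interface.h>
        #include <stdlib.h>

        int
        main (void)
        {
          char *buf = malloc (64);

          /* Mark the second half off-limits, as an allocator would
             for redzones or freed regions. */
          ASAN_POISON_MEMORY_REGION (buf + 32, 32);

          buf[0] = 1;      /* ok: unpoisoned */
          /* buf[40] = 1;     would abort with an ASan report */

          ASAN_UNPOISON_MEMORY_REGION (buf + 32, 32);
          buf[40] = 1;     /* ok again */

          free (buf);
          return 0;
        }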
2019-10-11 | vppinfra: fix page boundary crossing bug in hash_memory64 | Dave Barach | 1 | -4/+47

    Fix a day-1 bug, possibly dating back as far as 2002. The zap64()
    game involves fetching 8-byte chunks, and clearing octets not to be
    included in the key. That's fine *unless* the 8-byte fetch happens
    to cross a page boundary into unmapped or no-access space.

    Type: fix
    Signed-off-by: Dave Barach <dave@barachs.net>
    Change-Id: I4607e9840032257c96ba7387f86c931c0921749d
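    A sketch of the masking trick and of the hazard, assuming a
    little-endian machine and 4 KiB pages (names are illustrative, not
    the exact VPP code):

        #include <stdint.h>
        #include <string.h>

        /* zap64-style masking: load 8 bytes, keep only the first
           n_bytes (1 <= n_bytes <= 8). */
        static inline uint64_t
        zap64_sketch (uint64_t x, unsigned n_bytes)
        {
          return x & (~(uint64_t) 0 >> (8 * (8 - n_bytes)));
        }

        /* The bug: an 8-byte fetch of a trailing partial word can
           spill past the end of the key into the next page and fault.
           The fix: only do the wide load when it stays in one page. */
        static inline uint64_t
        load_partial_u64 (const void *p, unsigned n_bytes)
        {
          uint64_t x = 0;
          uintptr_t a = (uintptr_t) p;

          if ((a & 4095) + 8 <= 4096)   /* wide load stays in-page */
            {
              memcpy (&x, p, 8);
              return zap64_sketch (x, n_bytes);
            }
          memcpy (&x, p, n_bytes);      /* touch only bytes we own */
          return x;
        }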
2018-11-14 | Remove c-11 memcpy checks from perf-critical code | Dave Barach | 1 | -10/+10

    Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
    Signed-off-by: Dave Barach <dave@barachs.net>
2018-10-23 | c11 safe string handling support | Dave Barach | 1 | -3/+3

    Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
    Signed-off-by: Dave Barach <dave@barachs.net>
2018-06-05 | VPP API: Memory trace | Ole Troan | 1 | -0/+8

    If you plan to put a hash into shared memory, the key sum and key
    equal functions MUST be set to constants such as KEY_FUNC_STRING,
    KEY_FUNC_MEM, etc. -lvppinfra is PIC, which means that the process
    which set up the hash won't have the same idea where the key sum
    and key compare functions live in other processes.

    Change-Id: Ib3b5963a0d2fb467b91e1f16274df66ac74009e9
    Signed-off-by: Ole Troan <ot@cisco.com>
    Signed-off-by: Dave Barach <dave@barachs.net>
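    Why the constants work where function pointers don't: with PIC and
    ASLR, each process maps libvppinfra at a different base address, so
    a code pointer written into shared memory by one process is
    meaningless in another, while a small integer tag is identical
    everywhere and each process dispatches on it to its own copy of the
    code. A sketch under those assumptions — the struct, the dispatch
    and the FNV-1a hash are all illustrative, only the KEY_FUNC_* names
    come from the commit message:

        #include <stdint.h>
        #include <string.h>

        #define KEY_FUNC_STRING 1   /* same value in every process */
        #define KEY_FUNC_MEM    2

        typedef struct
        {
          uintptr_t key_sum_kind;   /* a tag, NOT a function pointer */
          /* ... rest of the shared hash header ... */
        } shared_hash_header_t;

        static uint64_t
        fnv1a (const uint8_t *p, size_t len)
        {
          uint64_t h = 0xcbf29ce484222325ULL;
          while (len--)
            h = (h ^ *p++) * 0x100000001b3ULL;
          return h;
        }

        static uint64_t
        key_sum (const shared_hash_header_t *h, const void *key,
                 size_t mem_len)
        {
          /* Each process resolves the tag to its own code. */
          switch (h->key_sum_kind)
            {
            case KEY_FUNC_STRING:
              return fnv1a (key, strlen (key));
            case KEY_FUNC_MEM:
              return fnv1a (key, mem_len);
            default:
              return 0;   /* a raw code pointer would be useless here */
            }
        }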
2018-05-05 | autodetect alignment during _vec_resize | Damjan Marion | 1 | -1/+1

    Change-Id: I2896dbde78b5d58dc706756f4c76632c303557ae
    Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-03 | silence clib_mem_unaligned() invalid read found by address-sanitizer | Gabriel Ganne | 1 | -19/+28

    clib_mem_unaligned + zap64 casts its input to u64, computes a mask
    according to the input length, and returns the cast, masked value.
    Therefore all 8 bytes of the u64 are systematically read, and the
    invalid ones are discarded. Since they are discarded correctly,
    this invalid read can safely be ignored.

    Revert "fix clib_mem_unaligned() invalid read"
    This reverts commit 0ed3d81a5fa274283ae69b69a405c385189897d3.

    Change-Id: I5cc33ad36063c414085636debe93707d9a75157a
    Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
2017-11-01 | fix clib_mem_unaligned() invalid read | Gabriel Ganne | 1 | -17/+18

    clib_mem_unaligned + zap64 casts its input to u64, computes a mask
    according to the input length, and returns the cast, masked value.
    Therefore all 8 bytes of the u64 are systematically read, and the
    invalid ones are discarded. For example, for a 5-byte string, we do
    an invalid read of size 3, even though those 3 bytes are never
    used. This patch proposes to read only what we have, at the cost of
    no longer reading the whole u64 in one access, but that way we do
    not trigger an invalid read error.

    Change-Id: I3e0b31c4113d9c8e53aa5fa3d3d396ec80f06a27
    Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
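    The two entries above describe the same trade-off from opposite
    sides. An illustrative little-endian contrast (neither function is
    the exact VPP code):

        #include <stdint.h>
        #include <string.h>

        /* Before: read 8 bytes unconditionally, then mask. Fast, but
           for a 5-byte key the trailing 3 bytes are an out-of-bounds
           read that sanitizers flag. */
        static inline uint64_t
        load_u64_overread (const void *p, unsigned n_bytes)
        {
          uint64_t x;
          memcpy (&x, p, 8);
          return x & (~(uint64_t) 0 >> (8 * (8 - n_bytes)));
        }

        /* After: read only the bytes that exist; same value on
           little-endian machines, with no over-read. */
        static inline uint64_t
        load_u64_bounded (const void *p, unsigned n_bytes)
        {
          uint64_t x = 0;
          memcpy (&x, p, n_bytes);
          return x;
        }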
2016-12-28 | Reorganize source tree to use single autotools instance | Damjan Marion | 1 | -0/+1095

    Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
    Signed-off-by: Damjan Marion <damarion@cisco.com>