|
Type: refactor
Change-Id: I5235bf3e9aff58af6ba2c14e8c6529c4fc9ec86c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I37c187af80c21b8fb1ab15af112527a837e0df9e
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Use of _vec_len() to set the vector length breaks the address sanitizer.
Users should use vec_set_len(), vec_inc_len() or vec_dec_len() instead.
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
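A minimal sketch of the replacement pattern, using the helpers named above
(illustrative only):

  #include <vppinfra/vec.h>

  u32 *v = 0;
  vec_validate (v, 15);   /* vector now has 16 elements */

  /* old style, breaks ASAN bookkeeping: _vec_len (v) = 8; */

  vec_set_len (v, 8);     /* shrink to 8 elements */
  vec_inc_len (v, 2);     /* grow by 2 within the already-allocated space */
  vec_dec_len (v, 1);     /* drop the last element */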
|
|
- support for in-place growth of vectors (if there is space available next to
  the existing allocation)
- drop the need for alloc_aligned_at_offset in the memory allocator, which
  allows an easier swap to a different memory allocator and reduces malloc
  overhead
- rework pool and vec macros into inline functions to improve debuggability
- fix alignment - in many cases macros were not using the native alignment
  of the particular datatype. Explicitly setting alignment with the
  XXX_aligned() versions of the macros is no longer needed in > 99% of cases
  (see the sketch after this entry)
- fix ASAN usage
- avoid use of vectors of void; this was the root cause of several bugs
  found in vec_* and pool_* functions where sizeof() was used on void
  instead of the real vector data type
- introduce a minimal alignment, currently 8 bytes; vectors will always be
  aligned to at least that value (the underlying allocator actually always
  provides 16-byte aligned allocations)
Type: improvement
Change-Id: I20f4b081bb13bbf7bc0ace85cc4e301787f12fdf
Signed-off-by: Damjan Marion <damarion@cisco.com>
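A sketch of what the alignment change means for callers (illustrative; based
on the description above):

  #include <vppinfra/vec.h>

  /* an element type with 16-byte natural alignment */
  typedef struct { u64 a, b; } __attribute__ ((aligned (16))) elt_t;

  elt_t *v = 0, *e;

  /* previously: vec_add2_aligned (v, e, 1, 16); */
  vec_add2 (v, e, 1);     /* alignment now follows the element type,
                             and is at least 8 bytes in any case */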
|
|
Change-Id: Ic81aafd5f378de06e5ea8cdd6a59e07ff1a7afca
Type: improvement
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I17778e89674da0e8204713302e2293377bdabcbc
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: refactor
Change-Id: Ibd29a717eaf12d795b3bceb31835d6fc655268b1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Rename vec_capacity to vec_mem_size as it returned the size of the
underlying memory allocation, not the number of bytes that can be used
for vector elements.
Add a new vec_max_elts macro that returns the number of elements that can
fit into a generic vector.
Type: fix
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I2e53a2bfa6e56a89af62d6ddc073ead58b8c49bb
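A short sketch of the distinction (illustrative):

  #include <vppinfra/vec.h>

  u32 *v = 0;
  vec_validate (v, 63);

  uword bytes = vec_mem_size (v);   /* size of the underlying allocation,
                                       including header and unused tail */
  uword elts = vec_max_elts (v);    /* number of u32 elements that fit
                                       without reallocating */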
|
|
Type: fix
hash_resize is declared in the hash.h file, but its definition in hash.c is missing __clib_export.
Signed-off-by: Leung Lai Yung <benkerbuild@gmail.com>
Change-Id: Ibb741b532cd1080ec5d8314aae8dbbca87f42502
|
|
Type: improvement
Change-Id: Ia63899b82e34f179f9efa921e4630b598f2a86cb
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Change-Id: If59a66aae658dd35dbcb4987ab00c306b3c6e2e2
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Calling mem{cpy,move} with NULL pointers results in undefined behaviour.
This in turn is exploited by GCC. For example, the sequence:
memcpy (dst, src, n);
if (!src)
return;
src[0] = 0xcafe;
will be optimized as
memcpy (dst, src, n);
src[0] = 0xcafe;
IOW the test for NULL is gone.
vec_*() functions sometimes call memcpy with NULL pointers and 0 length,
triggering this optimization. For example, the sequence:
vec_append(v1, v2);
len = vec_len(v2);
will crash if v2 is NULL, because the test for NULL pointer in vec_len()
has been optimized out.
This commit fixes occurrences of such undefined behaviour, and also
introduces a memcpy wrapper to catch those in debug mode.
Type: fix
Change-Id: I175e2dd726a883f97cf7de3b15f66d4b237ddefd
Signed-off-by: Benoît Ganne <bganne@cisco.com>
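A sketch of the debug-mode wrapper idea described above (hypothetical name;
the wrapper actually added by this change may differ):

  #include <assert.h>
  #include <string.h>

  static inline void *
  dbg_memcpy (void *dst, const void *src, size_t n)
  {
    /* passing NULL to memcpy is undefined behaviour even when n == 0,
       and it lets the compiler delete later NULL checks on dst/src */
    assert (dst != NULL && src != NULL);
    return memcpy (dst, src, n);
  }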
|
|
Type: improvement
Change-Id: I57a9f85f7df1fc48656b72592349f4c544302f77
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Introduce AddressSanitizer support: https://github.com/google/sanitizers/
This starts with heap instrumentation. vlib_buffer, bihash and stack
instrumentation should follow.
Type: feature
Change-Id: I7f20e235b2f79db72efd0e756f22c75f717a9884
Signed-off-by: Benoît Ganne <bganne@cisco.com>
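Heap instrumentation of a custom allocator typically uses the manual
poisoning interface from <sanitizer/asan_interface.h>; a minimal sketch of
that mechanism (illustrative, not the VPP code itself):

  #include <stddef.h>
  #include <sanitizer/asan_interface.h>

  /* mark memory the allocator owns but has not handed out as off-limits;
     any access will be reported by ASAN */
  static void
  mark_free (void *p, size_t n)
  {
    ASAN_POISON_MEMORY_REGION (p, n);
  }

  /* mark a chunk handed to the user as accessible again */
  static void
  mark_in_use (void *p, size_t n)
  {
    ASAN_UNPOISON_MEMORY_REGION (p, n);
  }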
|
|
Fix a day-1 bug, possibly dating back as far as 2002. The zap64() game
involves fetching 8-byte chunks and clearing octets not to be
included in the key.
That's fine *unless* the 8-byte fetch happens to cross a page boundary
into unmapped or no-access space.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I4607e9840032257c96ba7387f86c931c0921749d
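An illustrative little-endian sketch of the zap64() idea and the hazard
(not the exact code in hash.c):

  #include <stdint.h>

  /* keep the low n bytes of an 8-byte chunk, clear the rest */
  static inline uint64_t
  zap64_sketch (uint64_t x, int n)
  {
    return n >= 8 ? x : (x & (((uint64_t) 1 << (8 * n)) - 1));
  }

  /* the hazard: the final, partial chunk of the key is still fetched
     with a full 8-byte load; if the key ends just before an unmapped
     page, that load faults even though zap64() would have discarded
     the extra bytes */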
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
If you plan to put a hash into shared memory, the key sum and key
equal functions MUST be set to constants such as KEY_FUNC_STRING,
KEY_FUNC_MEM, etc. -lvppinfra is PIC, which means that the process
which set up the hash and any other process will not agree on where
the key sum and key compare functions live.
Change-Id: Ib3b5963a0d2fb467b91e1f16274df66ac74009e9
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Ole Troan <ot@cisco.com>
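A self-contained sketch of the principle (not the vppinfra API): the
shared-memory header stores a small integer id, and each process resolves it
to its own local function pointer at run time.

  #include <stdint.h>

  typedef uint64_t (key_sum_fn) (const void *key);

  enum { KEY_SUM_STRING = 1 };        /* stands in for KEY_FUNC_STRING */

  static uint64_t
  string_key_sum (const void *key)
  {
    /* trivial stand-in hash over a NUL-terminated string */
    uint64_t h = 0;
    for (const char *s = key; *s; s++)
      h = h * 31 + (uint8_t) *s;
    return h;
  }

  /* only the id is stored in shared memory; it is valid in every process */
  static key_sum_fn *
  resolve_key_sum (uint64_t id)
  {
    return id == KEY_SUM_STRING ? string_key_sum : 0;
  }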
|
|
Change-Id: I2896dbde78b5d58dc706756f4c76632c303557ae
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
clib_mem_unaligned + zap64 casts its input to u64, computes a mask
according to the input length, and returns the casted, masked value.
Therefore all 8 bytes of the u64 are systematically read, and
the invalid ones are discarded.
Since they are discarded correctly, this invalid read can safely be
ignored.
Revert "fix clib_mem_unaligned() invalid read"
This reverts commit 0ed3d81a5fa274283ae69b69a405c385189897d3.
Change-Id: I5cc33ad36063c414085636debe93707d9a75157a
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
clib_mem_unaligned + zap64 casts its input to u64, computes a mask
according to the input length, and returns the casted, masked value.
Therefore all 8 bytes of the u64 are systematically read, and
the invalid ones are discarded.
For example, for a 5-byte string, we do an invalid read of size 3,
even though those 3 bytes are never used.
This patch proposes to read only the bytes we actually have, at the cost
of no longer reading the u64 in a single access; that way, we do not
trigger an invalid read error.
Change-Id: I3e0b31c4113d9c8e53aa5fa3d3d396ec80f06a27
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
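A sketch of the fixed read pattern described here (illustrative, not the
exact patch):

  #include <stdint.h>
  #include <string.h>

  /* read only the n valid bytes (n <= 8) into a u64; on little-endian
     this yields the same value the old full 8-byte load produced after
     zap64() masking, without touching bytes past the end of the key */
  static inline uint64_t
  load_exact (const void *p, int n)
  {
    uint64_t x = 0;
    memcpy (&x, p, n);
    return x;
  }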
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|