Type: fix
Change-Id: Ie292ee56dd5265a56ef472554aaf086e61da7089
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: improvement
Change-Id: I37c187af80c21b8fb1ab15af112527a837e0df9e
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: improvement
Change-Id: Iab3d65b6276829ad1e522e66380d1797e37579b8
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: improvement
Change-Id: Ic675ad4edbf27b7230fc2a77f00c90c46d6350c3
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
- support for in-place growth of vectors (when there is available space next
  to the existing allocation)
- drops the need for alloc_aligned_at_offset in the memory allocator, which
  allows an easier swap to a different memory allocator and reduces malloc
  overhead
- reworks pool and vec macros into inline functions to improve debuggability
- fixes alignment - in many cases the macros were not using the native
  alignment of the particular datatype. Explicitly setting alignment with the
  XXX_aligned() versions of the macros is no longer needed in > 99% of cases
- fixes ASAN usage
- avoids vectors of void, which were the root cause of several bugs found in
  vec_* and pool_* functions where sizeof() was applied to void instead of to
  the real vector element type (see the sketch after this list)
- introduces a minimal alignment, currently 8 bytes; vectors will always be
  aligned to at least that value (the underlying allocator actually always
  provides 16-byte-aligned allocations)
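For instance, a minimal illustration of the vector-of-void hazard (a
hypothetical, self-contained snippet relying on GNU C semantics, not code
from this patch):

  #include <stdio.h>

  int
  main (void)
  {
    unsigned int *v = 0;
    void *erased = v; /* type information lost, as with a void vector */

    /* With the real type, element-size arithmetic is correct... */
    printf ("typed elt size:  %zu\n", sizeof (v[0])); /* prints 4 */

    /* ...but GNU C defines sizeof applied to void as 1, so the same
       arithmetic done through the type-erased pointer silently uses 1. */
    printf ("erased elt size: %zu\n", sizeof (*erased)); /* prints 1 */
    return 0;
  }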
Type: improvement
Change-Id: I20f4b081bb13bbf7bc0ace85cc4e301787f12fdf
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Use of clib_mem_is_heap_object is not reliable enough for production use,
as it relies on just a few bytes of the memory allocator chunk header.
Type: improvement
Change-Id: I48c8adde8b6348b15477e3a015ba515eb7ee7ec2
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
More generic vector heap code coming in another patch...
Type: refactor
Change-Id: I2327128fb3aba9d5d330f46a35afec32e1e3942e
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: refactor
Change-Id: I3625eacf9e04542ca8778df5d46075a8654642c7
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
vec_free() does the work
Type: refactor
Change-Id: I8a97607c3b2f58d116863642b32b55525dc15d88
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: refactor
Change-Id: Iaa1e43c87c5725ab33ea8489bff2a7bda18b9c79
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Calling mem{cpy,move} with NULL pointers results in undefined behaviour.
This, in turn, is exploited by GCC. For example, the sequence:
  memcpy (dst, src, n);
  if (!src)
    return;
  src[0] = 0xcafe;
will be optimized to
  memcpy (dst, src, n);
  src[0] = 0xcafe;
IOW, the test for NULL is gone.
vec_*() functions sometimes call memcpy with NULL pointers and 0 length,
triggering this optimization. For example, the sequence:
  vec_append (v1, v2);
  len = vec_len (v2);
will crash if v2 is NULL, because the test for a NULL pointer in vec_len()
has been optimized out.
This commit fixes occurrences of such undefined behaviour and also
introduces a memcpy wrapper to catch those in debug mode.
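A minimal sketch of such a debug-mode wrapper (the name and exact shape
here are illustrative, not necessarily the code in this patch):

  #include <assert.h>
  #include <string.h>

  static inline void *
  dbg_memcpy (void *dst, const void *src, size_t n)
  {
    /* memcpy with a NULL pointer is undefined even when n == 0,
       so flag NULL unconditionally in debug builds. */
    assert (dst != NULL && src != NULL);
    return memcpy (dst, src, n);
  }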
Type: fix
Change-Id: I175e2dd726a883f97cf7de3b15f66d4b237ddefd
Signed-off-by: Benoît Ganne <bganne@cisco.com>
---
vlib_validate_combined_counter_will_expand() was calling
_vec_resize_will_expand() with wrong arguments, which resulted in a false
return value. Apart from the initial call, it never indicated a vector
resize.
Because of this wrong prediction, callers relying on this function did not
perform a barrier sync even when the vector was extended by a subsequent
vlib_validate_combined_counter() call.
The fix introduces a new, simplified macro that is easier to call:
vec_resize_will_expand() accepts the same arguments as vec_resize().
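A hypothetical usage sketch (the surrounding names, e.g. cm->counters and
counter_index, are illustrative):

  /* Only stop the worker threads when the counter vector will actually
     be reallocated; vec_resize_will_expand() takes the same arguments
     as vec_resize(). */
  if (vec_resize_will_expand (cm->counters, n_new_counters))
    {
      vlib_worker_thread_barrier_sync (vm);
      vlib_validate_combined_counter (cm, counter_index);
      vlib_worker_thread_barrier_release (vm);
    }
  else
    vlib_validate_combined_counter (cm, counter_index);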
Type: fix
Signed-off-by: Miklos Tirpak <miklos.tirpak@gmail.com>
Change-Id: Ib2c2c8afd3e665e0e3d6ae62ff5cfa287acf670f
---
'type' is a keyword in golang, so s/type/event_type/ in elog.h and
elsewhere.
Add vec_len_not_inline(...), elog_write_file_not_inline(...) and
elog_read_file_not_inline(...), since the inline forms aren't usable from
Go. More such tweaks may follow.
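The shape of these wrappers is simple; roughly (a sketch assuming
vppinfra's u32 type and vec_len macro are in scope, not the verbatim
definition):

  /* Out-of-line version of an inline helper, giving foreign-function
     bindings (e.g. Go's cgo) a real symbol to call. */
  u32
  vec_len_not_inline (void *v)
  {
    return vec_len (v);
  }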
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I9a80a6afa635f5cdedee554ee9abe400fafc1cb6
---
Type: fix
Change-Id: If8dbbcb46193fd057fe8d704058609a3a8787d6c
Signed-off-by: Benoît Ganne <bganne@cisco.com>
---
Some vector functions (e.g. vec_new) pass the vector pointer through
vec_resize. This turns the pointer from a real type into a void pointer.
Explicitly cast the pointer back to its original type to catch type
mismatches.
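A simplified, self-contained illustration of the pattern (the helper and
macro names are hypothetical):

  #include <stdlib.h>

  /* Stand-in for the inner resize helper, which returns void *. */
  static void *
  resize_helper (void *v, size_t n_elts, size_t elt_size)
  {
    return realloc (v, n_elts * elt_size);
  }

  /* The cast makes the expression carry V's real pointer type instead
     of void *, so a mismatched use now draws a compiler warning. */
  #define my_vec_resize(V, N) \
    ((V) = (__typeof__ (V)) resize_helper ((V), (N), sizeof ((V)[0])))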
Type: improvement
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
Change-Id: Id13e52d4f038af2cee28e5abf1aca720f8909378
---
Minor change to vec_sort_with_function(...): don't depend on the qsort
implementation to deal with NULL, zero-length, or one-element vectors.
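A sketch of the guard (an illustrative macro assuming vppinfra's vec_len,
not the exact change):

  /* Skip qsort entirely for NULL, empty, or one-element vectors,
     rather than relying on the qsort implementation to handle them. */
  #define my_vec_sort_with_function(vec, f)                 \
    do                                                      \
      {                                                     \
        if (vec_len (vec) > 1)                              \
          qsort (vec, vec_len (vec), sizeof ((vec)[0]),     \
                 (void *) (f));                             \
      }                                                     \
    while (0)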
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7bd7b0421673d2a025363089562aa7c6266fba66
---
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7e7d95a089dd849c1f01ecea84529d8dbf239f21
---
Introduce AddressSanitizer support: https://github.com/google/sanitizers/
This starts with heap instrumentation. vlib_buffer, bihash and stack
instrumentation should follow.
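Heap instrumentation builds on ASan's manual poisoning API; a minimal
illustration (hypothetical helper names, not the VPP wrappers themselves):

  #include <stddef.h>
  #include <sanitizer/asan_interface.h>

  /* Mark bytes outside the in-use region as off-limits: any read or
     write of them is then reported by AddressSanitizer. */
  static void
  poison_unused_tail (void *start, size_t len)
  {
    ASAN_POISON_MEMORY_REGION (start, len);
  }

  /* Re-allow access before handing the region back to the user. */
  static void
  unpoison_region (void *start, size_t len)
  {
    ASAN_UNPOISON_MEMORY_REGION (start, len);
  }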
Type: feature
Change-Id: I7f20e235b2f79db72efd0e756f22c75f717a9884
Signed-off-by: Benoît Ganne <bganne@cisco.com>
---
Change-Id: I2294982e6df41a13e61783e18f947da0bdd4b499
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
---
Change-Id: I6da153779010263e6fc4b51c64b01444aaadca17
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
---
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
---
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
---
- Enable/disable an interface for IGMP
- Improve logging
- Refactor common code
- No orphaned timers
- IGMP state changes in main thread only
- Large groups split over multiple state-change reports
- SSM range configuration API
- More tests
Change-Id: If5674f1044e7e97274a711f47807c9ba689d7b9a
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: I2896dbde78b5d58dc706756f4c76632c303557ae
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Object sizes must evenly divide alignment requests, or vice versa;
otherwise, only the first object will be aligned as requested.
Three choices: add CLIB_CACHE_LINE_ALIGN_MARK(align_me) at the end of the
structure, manually pad to an even divisor or multiple of the alignment
request, or use plain vectors/pools. A static assert enforces this.
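A hypothetical example of the first option (assuming vppinfra's
CLIB_CACHE_LINE_ALIGN_MARK macro and u32 type are in scope):

  /* Without the trailing mark, sizeof (counter_t) would be 12, which
     neither divides nor is a multiple of a 64-byte cache-line request,
     so only the first pool element would be aligned as requested. */
  typedef struct
  {
    u32 hits;
    u32 misses;
    u32 drops;
    CLIB_CACHE_LINE_ALIGN_MARK (pad); /* rounds sizeof up to a cache line */
  } counter_t;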
Change-Id: I41aa6ff1a58267301d32aaf4b9cd24678ac1c147
Signed-off-by: Dave Barach <dbarach@cisco.com>
---
Change-Id: Ibcc20c24f6feb2b91245b0d88830a6c730d704e6
Signed-off-by: Florin Coras <fcoras@cisco.com>
---
definition
Change-Id: I488d7c2b864c0e3661c8abf0363e4b97984d4974
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
---
Change-Id: Ieafd00c7d03fe5c090808c7af4aa2f86974a092e
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Change-Id: Iefffcf7843dc11803d69a875a72704a2543911a1
Signed-off-by: Dave Barach <dave@barachs.net>
---
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>