- support for in-place growth of vectors (when there is free space next to
  the existing allocation)
- drops the need for alloc_aligned_at_offset in the memory allocator, which
  makes it easier to swap in a different memory allocator and reduces
  malloc overhead
- rework of pool and vec macros into inline functions to improve debuggability
- fix alignment - in many cases the macros were not using the native alignment
  of the particular datatype. Explicitly setting alignment with the XXX_aligned()
  versions of the macros is no longer needed in > 99% of cases
- fix ASAN usage
- avoid use of vectors of void; this was the root cause of several bugs
  found in vec_* and pool_* functions where sizeof() was applied to void
  instead of the real vector data type
- introduce a minimal alignment, currently 8 bytes; vectors will always be
  aligned to at least that value (the underlying allocator actually always
  provides 16-byte aligned allocations)
(a short usage sketch follows this entry)
Type: improvement
Change-Id: I20f4b081bb13bbf7bc0ace85cc4e301787f12fdf
Signed-off-by: Damjan Marion <damarion@cisco.com>
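A minimal usage sketch for the reworked vector macros (editor's illustration,
not part of the commit; the element type, function name, and the ASSERT are
assumptions, while vec_add1()/vec_free() are the standard vppinfra API):

    /* A vector of a concrete element type gets its natural alignment
     * automatically, so the _aligned() macro variants are rarely needed. */
    #include <vppinfra/vec.h>

    typedef struct { u64 a; u64 b; } my_elt_t;  /* hypothetical element type */

    static void
    vec_example (void)
    {
      my_elt_t *v = 0;   /* vectors start out as null pointers */
      my_elt_t e = { .a = 1, .b = 2 };

      vec_add1 (v, e);   /* may grow in place when free space follows the alloc */

      /* at least the 8-byte minimal alignment described above */
      ASSERT (((uword) v & 7) == 0);

      vec_free (v);
    }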
Type: feature
Change-Id: Ie020fd7e2618284a63efbeb9895068f27c0fb9ab
Signed-off-by: Dave Barach <dave@barachs.net>
* u32/u64/uword mismatches
* pointer-to-int fixes
* printf formatting issues
* issues with incorrect "ULL" and related suffixes
* structure alignment and padding issues
Change-Id: I70b989007758755fe8211c074f651150680f60b4
Signed-off-by: David Johnson <davijoh3@cisco.com>
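An editor's sketch of the kind of 32-/64-bit cleanups listed in the entry
above; the identifiers and values are illustrative, not taken from the commit,
and pointer_to_uword() is assumed to be the usual vppinfra helper:

    #include <vppinfra/clib.h>  /* u32, u64, uword, pointer_to_uword */
    #include <inttypes.h>
    #include <stdio.h>

    static void
    width_example (void *p, u64 count)
    {
      /* pointer-to-int: go through uword, never u32, so 64-bit pointers survive */
      uword bits = pointer_to_uword (p);

      /* printf formatting: use width-correct conversions and matching casts */
      printf ("count = %" PRIu64 ", ptr bits = 0x%" PRIxPTR "\n",
              (uint64_t) count, (uintptr_t) bits);

      /* suffix issues: a shifted 64-bit constant needs ULL (or a u64 cast) */
      u64 mask = 1ULL << 40;
      (void) mask;
    }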
If the heap does not have enough space to satisfy an allocation
request, the allocator calls sys_alloc(). There, if the request
is bigger than mparams.mmap_threshold, mmap_alloc() is called
to allocate memory via a direct mmap call.
The resulting allocated memory is properly recognized by
clib_mem_is_heap_object() only for the first such request.
Subsequent requests overwrite the tracking data, so previously
"valid" addresses become invalid as seen by
clib_mem_is_heap_object(). The result is misleading behavior
which masks other issues.
This is a temporary change to avoid the affected codepath
until there is a proper fix to track the directly mmap-allocated
memory. (A sketch of the symptom follows this entry.)
Change-Id: I4137f91b5196d4503c40cf8ecc2f71554bc8f858
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
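An editor's sketch of the symptom described above, assuming the vppinfra
clib_mem_alloc()/clib_mem_is_heap_object() API; the sizes are illustrative
and simply need to exceed mparams.mmap_threshold:

    #include <vppinfra/mem.h>

    static void
    mmap_tracking_symptom (void)
    {
      /* two requests large enough to go through mmap_alloc() */
      void *a = clib_mem_alloc (64 << 20);
      void *b = clib_mem_alloc (64 << 20);

      /* before this workaround, only the most recent direct-mmap allocation
       * was tracked, so the first check below could spuriously fail */
      ASSERT (clib_mem_is_heap_object (a));
      ASSERT (clib_mem_is_heap_object (b));

      clib_mem_free (a);
      clib_mem_free (b);
    }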
Also: don't #include /usr/include/malloc.h in dlmalloc.h
Change-Id: Ic73ff8862cc8aba371488b912255e28dd96374ff
Signed-off-by: Dave Barach <dave@barachs.net>
Avoids crashes on restart if the svm root region backing file was not
cleaned up.
Change-Id: I608cf5711aa8c3f9620900473bdf76bde8b918de
Signed-off-by: Florin Coras <fcoras@cisco.com>
Also: call os_panic() on a heap botch, attempt to generate a
post-mortem API dump, etc.
Add an "ugly" test case to vec_test.c to cause a configurable block
overrun (see the sketch after this entry).
Change-Id: I7b29a7645277f9e485e06ff83335306fedc24b71
Signed-off-by: Dave Barach <dave@barachs.net>
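A sketch (editor's illustration, not the actual vec_test.c code) of a
deliberate block overrun of the kind the test case exercises; the function
name and overrun length are hypothetical:

    #include <vppinfra/vec.h>
    #include <vppinfra/string.h>

    static void
    overrun_example (u32 n_overrun_bytes)
    {
      u8 *v = 0;
      vec_validate (v, 63);  /* 64-byte vector */

      /* deliberately write past the end of the vector data; with a large
       * enough overrun the allocator's consistency checks should trip and,
       * with the change above, end in os_panic() */
      clib_memset (v + vec_len (v), 0xaa, n_overrun_bytes);

      vec_free (v);
    }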
Configure w/ --enable-dlmalloc; see .../build-data/platforms/vpp.mk.
src/vppinfra/dlmalloc.[ch] are slightly modified versions of the
well-known Doug Lea malloc. Main advantage: dlmalloc mspaces have no
inherent size limit (a short mspace usage sketch follows this entry).
Change-Id: I19b3f43f3c65bcfb82c1a265a97922d01912446e
Signed-off-by: Dave Barach <dave@barachs.net>
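A short usage sketch of the mspace API this commit brings in (editor's
illustration; the include path and sizes are assumptions, while
create_mspace()/mspace_malloc()/mspace_free()/destroy_mspace() are the
standard Doug Lea malloc mspace entry points):

    #include <vppinfra/dlmalloc.h>

    static void
    mspace_example (void)
    {
      /* a private heap; mspaces have no inherent upper size limit */
      mspace m = create_mspace (1 << 20 /* initial capacity */, 1 /* locked */);

      void *p = mspace_malloc (m, 1024);
      mspace_free (m, p);

      destroy_mspace (m);
    }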