- support for in-place growth of vectors (when there is free space next to
  the existing allocation) - see the sketch after this list
- drops the need for alloc_aligned_at_offset in the memory allocator, which
  allows an easier swap to a different memory allocator and reduces malloc
  overhead
- rework of the pool and vec macros into inline functions to improve
  debuggability
- fix alignment - in many cases the macros were not using the natural
  alignment of the particular datatype; explicitly setting alignment with
  the XXX_aligned() versions of the macros is no longer needed in > 99% of
  cases
- fix ASAN usage
- avoid use of vectors of void, which was the root cause of several bugs
  found in vec_* and pool_* functions where sizeof() was applied to void
  instead of to the real vector data type
- introduce a minimum alignment, currently 8 bytes; vectors will always be
  aligned to at least that value (the underlying allocator actually always
  provides 16-byte-aligned allocations)
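A minimal sketch of the reworked usage, assuming the vppinfra vec API
(vec_add1, vec_len, vec_free and ASSERT are the real names; the example
itself is illustrative):

  #include <vppinfra/vec.h>

  static void
  vec_example (void)
  {
    u32 *v = 0;       /* typed vector, never void *, so sizeof (v[0]) is right */

    vec_add1 (v, 42); /* may grow in place if free space follows the alloc */
    vec_add1 (v, 43);

    /* natural alignment of u32 is applied automatically; the vec_*_aligned()
       variants are no longer needed, and alignment is at least 8 bytes */
    ASSERT (vec_len (v) == 2);
    vec_free (v);
  }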
Type: improvement
Change-Id: I20f4b081bb13bbf7bc0ace85cc4e301787f12fdf
Signed-off-by: Damjan Marion <damarion@cisco.com>
More generic vector heap code is coming in another patch...
Type: refactor
Change-Id: I2327128fb3aba9d5d330f46a35afec32e1e3942e
Signed-off-by: Damjan Marion <damarion@cisco.com>
The immediate benefit is the ability to use hugepage-backed memory.
Type: improvement
Change-Id: Ibcae961aa09ea92d3e931a40bedbc6346a4b2039
Signed-off-by: Damjan Marion <damarion@cisco.com>
Type: fix
Change-Id: I298d2a5067f7949002e6c010f892553f1eb9f477
Signed-off-by: Damjan Marion <damarion@cisco.com>
_pool_init_fixed uses mmap to initialize a fixed-size, preallocated pool
whose size is the sum of vector_size and free_index_size, aligned to
CLIB_CACHE_LINE_BYTES and to the page size. Here vector_size equals
pool_header_t + vec_header_t + elt_size * max_elts, so moving to the end
of the pool space should be the pool_header_t pointer + vector_size, not
the vec_header_t pointer + vector_size.
Simple code to reproduce this error:

  u64 *pool;
  pool_init_fixed(pool, 2042);

Also improve the unit test to cover this case.
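A sketch of the layout and pointer arithmetic described above (names follow
the commit text; the diagram is illustrative):

  /* base -> pool_header_t | vec_header_t | elt[0] ... elt[max_elts - 1]
   *
   * vector_size = sizeof (pool_header_t) + sizeof (vec_header_t)
   *               + elt_size * max_elts
   *
   * end of pool space:
   *   (u8 *) pool_header + vector_size   -- correct
   *   (u8 *) vec_header + vector_size    -- overshoots by sizeof (pool_header_t)
   */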
Type: fix
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Reviewed-by: Lijian Zhang <lijian.zhang@arm.com>
Reviewed-by: Tianyu Li <tianyu.li@arm.com>
Change-Id: If088ef89b3dcb2d874ee837ae9da60983b14615c
Signed-off-by: Dave Barach <dave@barachs.net>
Type: improvement
Change-Id: I57a9f85f7df1fc48656b72592349f4c544302f77
Signed-off-by: Damjan Marion <damarion@cisco.com>
Remove a duplicate space allocation for the pool header. Not significant
with CLIB_CACHE_LINE_BYTES >= 64, since the code rounds the size of the
pool header up to an even multiple of the cache line size.
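For reference, a one-line sketch of the rounding involved (round_pow2 is
the real vppinfra helper; its use here is illustrative):

  /* round the header size up to a cache-line multiple; with
     CLIB_CACHE_LINE_BYTES >= 64 the rounded size already absorbs
     the duplicated pool_header_t bytes */
  uword header_bytes = round_pow2 (sizeof (pool_header_t),
                                   CLIB_CACHE_LINE_BYTES);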
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I923f2a60e7565cf2dfbc18d78264bf82ff30c926
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
Simply call pool_init_fixed(...) before using the pool. Note that fixed,
preallocated pools live in individually mmap'ed address segments, except
for the free element bitmap. A large fixed pool can exceed 4 GB.
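A minimal usage sketch, assuming the vppinfra pool API (pool_init_fixed,
pool_get and pool_put are the real names; the element type and count are
illustrative):

  #include <vppinfra/pool.h>

  u64 *pool;
  pool_init_fixed (pool, 2048); /* mmap a fixed pool of 2048 elements */

  u64 *e;
  pool_get (pool, e);           /* take one element from the pool */
  *e = 0;
  pool_put (pool, e);           /* return it to the free list */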
Also fix a tcp buffer allocator leak and remove a broken assert.
Change-Id: I4421082e12a77c41c6e20f7747f3150dcd01fc26
Signed-off-by: Dave Barach <dave@barachs.net>