|
Type: improvement
Change-Id: Ic675ad4edbf27b7230fc2a77f00c90c46d6350c3
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- support in-place growth of vectors (when there is free space next to the
existing allocation)
- drop the need for alloc_aligned_at_offset in the memory allocator,
which allows easier swapping to a different memory allocator and reduces
malloc overhead
- rework pool and vec macros into inline functions to improve debuggability
- fix alignment - in many cases the macros were not using the native alignment
of the particular datatype. Explicitly setting alignment with the XXX_aligned()
versions of the macros is not needed anymore in > 99% of cases
- fix ASAN usage
- avoid use of vectors of void; this was the root cause of several bugs
found in vec_* and pool_* functions where sizeof() was applied to void
instead of the real vector element type
- introduce a minimal alignment, currently 8 bytes; vectors will always be
aligned to at least that value (the underlying allocator actually always
provides 16-byte aligned allocations)
Type: improvement
Change-Id: I20f4b081bb13bbf7bc0ace85cc4e301787f12fdf
Signed-off-by: Damjan Marion <damarion@cisco.com>
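As a rough sketch of the reworked API surface (not part of the original commit, and assuming the standard vppinfra vec_*/pool_* macros plus a main heap set up with clib_mem_init()):

#include <vppinfra/mem.h>
#include <vppinfra/vec.h>
#include <vppinfra/pool.h>

int
main (int argc, char *argv[])
{
  u32 *v = 0;           /* vectors always start as a null pointer */
  u32 *pool = 0, *elt;

  clib_mem_init (0, 64 << 20);  /* 64 MB main heap */

  /* alignment now defaults to the element type's natural alignment,
     so the _aligned() macro variants are rarely needed */
  for (u32 i = 0; i < 1024; i++)
    vec_add1 (v, i);

  /* growth can happen in place when there is free space
     next to the existing allocation */
  vec_validate (v, 4095);

  pool_get (pool, elt);
  *elt = 42;
  pool_put (pool, elt);

  vec_free (v);
  pool_free (pool);
  return 0;
}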
|
|
e.g.
memory {
default-hugepage-size 1G
}
Type: improvement
Change-Id: I822afb51712ae92f4e4992b8ffa33dcb15ccaef1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Small fixed-size object memory allocator.
Type: improvement
Change-Id: I727705d9d4292b6b38d41e239871103b15aa9038
Signed-off-by: Damjan Marion <damarion@cisco.com>
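The log entry does not show the interface, so the following is only a generic, hypothetical sketch of a small fixed-size object allocator built on a single slab and a free list; the fsa_* names are illustrative and are not the vppinfra API:

#include <stdlib.h>

/* hypothetical fixed-size object allocator: one slab, free objects
   linked through their first word (obj_size must be >= sizeof (void *));
   the slab itself is never freed in this sketch */
typedef struct
{
  void *free_list;
  size_t obj_size;
} fsa_t;

static void
fsa_init (fsa_t *a, size_t obj_size, size_t n_objs)
{
  char *slab = malloc (obj_size * n_objs);
  a->obj_size = obj_size;
  a->free_list = 0;
  for (size_t i = 0; i < n_objs; i++)
    {
      void *obj = slab + i * obj_size;
      *(void **) obj = a->free_list;   /* push onto the free list */
      a->free_list = obj;
    }
}

static void *
fsa_alloc (fsa_t *a)
{
  void *obj = a->free_list;
  if (obj)
    a->free_list = *(void **) obj;     /* pop */
  return obj;
}

static void
fsa_free (fsa_t *a, void *obj)
{
  *(void **) obj = a->free_list;       /* push back */
  a->free_list = obj;
}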
|
|
Type: improvement
Change-Id: I1ab1b100000b4d7212c58e10312e16e7527bd333
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I654747d618cc4fe99b7774827303769fe43392ed
|
|
This patch adds a small header in front of the dlmalloc space which stores
some additional information about the heap.
The immediate benefit of this patch is that we know the underlying page size,
so we can display heap page statistics / real memory usage.
Type: improvement
Change-Id: Ibd6989cc2f2f64630ab08734c9552e15029c5f3f
Signed-off-by: Damjan Marion <damarion@cisco.com>
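A hedged sketch of the idea (the field names below are illustrative, not the actual vppinfra layout): a small header placed just before the dlmalloc space records properties of the heap such as its page size:

/* illustrative only - not the real clib heap header layout */
typedef struct
{
  void *base;                 /* start of the mapped region     */
  unsigned long size;         /* total heap size in bytes       */
  unsigned int log2_page_sz;  /* underlying page size, as log2  */
  unsigned int flags;         /* e.g. locked, hugepage-backed   */
  /* the dlmalloc mspace follows this header in memory */
} heap_header_t;

Knowing the page size up front is what makes it possible to report per-page statistics and real memory usage for the heap.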
|
|
Type: improvement
Change-Id: Ie04302c576869bc7bfaa9f13ed2ea8a403a393d4
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I33c6a5d1686cc32a6cde149083256d6cf0770fc5
|
|
- it is confusing from the end-consumer perspective that the same thing
is called a heap in some places and an mspace in others
- this is the basis for additional work where the heap pointer is not the
same thing as the mspace
Type: improvement
Change-Id: I644d5a0de17690d65d164d8cec3c5654571629ef
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I81a7fb71c8ce0c0d22e326a4ddd01bc1c1aea5f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I9973bce20a0a2a8a7e227cf96518de5b79374425
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I04752c011e4ca58f56aa53f6ae27bae93a5c4590
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I381fc3dec8580208d0e24637d791af69011aa83b
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: I298aadfdf17d98dfb1ada1ec4f87e0821e6aeb7f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
To hold more data later...
Type: improvement
Change-Id: I4006d22dcacd788988c4907f2c263fd4e4a9d398
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Change-Id: Ie44dbf9396cfed19dba153810b7bd76ce5377cd4
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Also add clib_mem_destroy() to destroy the current mspace.
Handy when an application wants to make a memory allocation arena
disappear.
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I020db902fbe2473545506fecbc230c2b048992f8
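A minimal usage sketch, assuming the clib_mem_init()/clib_mem_destroy() pair described above (error handling omitted):

#include <vppinfra/mem.h>
#include <vppinfra/vec.h>

static void
scratch_work (void)
{
  u8 *v = 0;

  /* create a main heap for this phase of work */
  clib_mem_init (0, 16 << 20);

  vec_validate (v, 1023);
  /* ... use the arena ... */

  /* make the whole allocation arena disappear */
  clib_mem_destroy ();
}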
|
|
It is a bad idea to poison memory after munmap because the address space
can be reused (e.g. for the global data of a dlopen()ed object) and the
ASan model allows access by default.
Moreover, access to a stale address space will fault anyway.
Type: fix
Change-Id: I356de422f255447d9d50a3a71fb0c2eaa790d731
Signed-off-by: Benoît Ganne <bganne@cisco.com>
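A hedged illustration using the public sanitizer interface (the actual VPP wrappers are not shown in this log): leave the range addressable rather than poisoning it once it has been unmapped, since the kernel may hand the same addresses to an unrelated mapping later:

#include <sys/mman.h>
#include <sanitizer/asan_interface.h>

static void
unmap_region (void *base, size_t size)
{
  /* wrong: poisoning after munmap marks address space we no longer own;
     if it is reused (e.g. by dlopen), legitimate accesses get flagged,
     and if it is not reused, the access faults anyway */
  /* munmap (base, size);
     __asan_poison_memory_region (base, size); */

  /* better: leave the range unpoisoned, then unmap */
  __asan_unpoison_memory_region (base, size);
  munmap (base, size);
}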
|
|
Template instances can allocate BIHASH_KVP_PER_PAGE data records
tangent to the bucket, to remove a dependent read / prefetch.
Template instances can ask for immediate memory allocation, to avoid
several branches in the lookup path.
Clean up l2 fib and gbp plugin code: use clib_bihash_get_bucket(...)
Use hugepages for bihash allocation arenas
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I92fc11bc58e48d84e2d61f44580916dd1c56361c
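A hedged sketch of what a template instance header can look like with these options; the macro names (BIHASH_KVP_PER_PAGE, BIHASH_KVP_AT_BUCKET_LEVEL, BIHASH_LAZY_INSTANTIATE) are taken from the bihash template as assumptions, and the kv struct plus hash/compare helpers a real instance needs are omitted:

/* illustrative bihash instance, loosely modeled on the 16_8 flavor */
#undef BIHASH_TYPE
#define BIHASH_TYPE _16_8
#define BIHASH_KVP_PER_PAGE 4

/* store kvp data tangent to the bucket: removes a dependent read */
#define BIHASH_KVP_AT_BUCKET_LEVEL 1

/* allocate memory immediately instead of on first add:
   avoids several branches in the lookup path */
#define BIHASH_LAZY_INSTANTIATE 0

/* ... clib_bihash_kv_16_8_t, hash and compare helpers go here ... */

#include <vppinfra/bihash_template.h>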
|
|
The mheap allocator has been turned off for several releases. This
commit removes the cmake config parameter, parallel support for
dlmalloc and mheap, and the mheap allocator itself.
Type: refactor
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I104f88a1f06e47e90e5f7fb3e11cd1ca66467903
|
|
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6b1ea13fc83460bf4ee75cb9249d83dddaa64ded
|
|
Type: feature
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I999836a7893a89aac5243b111eac35fddd03e2a6
|
|
Type: refactor
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I13b239cd572ae6dfaec07019d3d9b7c0ed3edcfa
|
|
Type: fix
Change-Id: Ifd61c0683c85fe7340965c225ed23e46ec88e01a
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7e7d95a089dd849c1f01ecea84529d8dbf239f21
|
|
Introduce AddressSanitizer support: https://github.com/google/sanitizers/
This starts with heap instrumentation. vlib_buffer, bihash and stack
instrumentation should follow.
Type: feature
Change-Id: I7f20e235b2f79db72efd0e756f22c75f717a9884
Signed-off-by: Benoît Ganne <bganne@cisco.com>
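As a generic illustration (not VPP-specific build instructions), heap instrumentation of this kind combines compiling with -fsanitize=address and manually poisoning allocator-owned bytes the application must not touch:

/* compile with: cc -fsanitize=address -g demo.c */
#include <stdlib.h>
#include <sanitizer/asan_interface.h>

void *
alloc_with_redzone (size_t n)
{
  /* over-allocate and poison a trailing redzone so off-by-one
     accesses are reported by ASan; real allocators unpoison the
     redzone again before reusing or freeing the memory */
  char *p = malloc (n + 64);
  if (p)
    __asan_poison_memory_region (p + n, 64);
  return p;
}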
|
|
Valgrind never really worked well with VPP. Remove the partial support.
Type: refactor
Change-Id: Ic09773fd85f904fdd2240bc161e23a4c2b196cf6
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
IPsec zeroes all allocated key memory, including memory over-allocated by
the allocator.
Move this to its own function in the clib mem infra to make it easier to
instrument.
Type: refactor
Change-Id: Icd1c44d18b741e723864abce75ac93e2eff74b61
Signed-off-by: Benoît Ganne <bganne@cisco.com>
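A generic sketch of such a helper (the actual clib function name is not shown in this log, so memzero_free() below is hypothetical); it wipes the entire usable size of the allocation, not just the requested size, before freeing:

#include <malloc.h>   /* malloc_usable_size (glibc) */
#include <string.h>
#include <stdlib.h>

/* hypothetical helper: wipe key material, including bytes the
   allocator handed out beyond the requested size */
static void
memzero_free (void *p)
{
  if (!p)
    return;
  /* a hardened version would use explicit_bzero () so the
     store cannot be optimized away */
  memset (p, 0, malloc_usable_size (p));
  free (p);
}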
|
|
Type: feature
This is needed when creating pthreads in client applications:
they need a way to set __os_thread_index per thread
that does not conflict with the binary API thread index.
If __os_thread_index is left at 0 in two client pthreads and
they call vl_msg_api_alloc and vec_resize at the same time,
the calls can fail because the threads share (and push/pop) the same
clib_per_cpu_mheaps slot.
Change-Id: I85d4248a39b641a4d3ad5a1c1bd6e0db5875fab6
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
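A hedged sketch of what a client application can do with this, assuming __os_thread_index is the thread-local index declared by vppinfra (the exact setter added by this change is not shown in the log):

#include <pthread.h>
#include <vppinfra/os.h>   /* declares __thread uword __os_thread_index */

static void *
client_thread_fn (void *arg)
{
  /* give each client pthread its own slot so it does not collide
     with the binary API thread index in slot 0 */
  __os_thread_index = (uword) arg;

  /* ... now safe to call vl_msg_api_alloc / vec_resize here ... */
  return 0;
}

/* usage sketch: pthread_create (&t, 0, client_thread_fn, (void *) 1); */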
|
|
Type: feature
Change-Id: Ie020fd7e2618284a63efbeb9895068f27c0fb9ab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
leak-check { <any-debug-cli-command-and-args> }
Hint: "set term history off" or you'll have to sort through a bunch of
bogus leaks related to the debug cli history mechanism.
Cleaned up a set of reported leaks in the "show interface" command. At
some point, we thought about making a per-thread vlib_mains vector,
but we never did that. Several interface-related CLIs maintained
local static cache vectors. Not a bad idea, but not useful as things
shook out. Removed the static vectors.
Change-Id: I756bf2721a0d91993ecfded34c79da406f30a548
Signed-off-by: Dave Barach <dave@barachs.net>
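A short usage example based on the description above, from the debug CLI:

set term history off
leak-check { show interface }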
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Iecceffe06a92660976ebb58cd3cbec4be8931db0
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I5ff713ad0b254c74c5622e3b9425cca365b5ee97
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Starting with kernel 4.14 a hugepage fd can be retrieved with the
memfd_create() system call.
Change-Id: I0f3bd6d0a7757ffe7b98e83763502013ac763ecb
Signed-off-by: Damjan Marion <damarion@cisco.com>
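A hedged sketch of retrieving a hugepage-backed fd this way (assumes Linux >= 4.14 and a glibc that provides the memfd_create() wrapper; size must be a multiple of the hugepage size):

#define _GNU_SOURCE
#include <sys/mman.h>   /* memfd_create, MFD_HUGETLB, mmap */
#include <unistd.h>

static void *
map_hugepage (size_t size)
{
  /* anonymous hugepage-backed fd, no hugetlbfs mount point needed */
  int fd = memfd_create ("hugepage", MFD_HUGETLB);
  if (fd < 0)
    return 0;
  if (ftruncate (fd, size) < 0)
    return 0;
  return mmap (0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}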
|
|
Change-Id: I8691a10493d159a97574550c111f07722960a7cd
Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
|
|
Make sure that vpp_get_stats main heap does not address-collide with
the stats segment, which lands "somewhere" in the vpp address space.
Add missing MAP_ANONYMOUS flag in clib_mem_vm_map
Change-Id: I8a671d174eefd8dd24771ad2ed9f1250e2c7a9f8
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Ole Troan <ot@cisco.com>
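For reference, a minimal anonymous mapping of the kind clib_mem_vm_map() performs (the exact flags VPP uses may differ); without MAP_ANONYMOUS the fd/offset arguments would be treated as a file mapping:

#include <sys/mman.h>

static void *
map_anonymous (void *base_hint, size_t size)
{
  /* MAP_ANONYMOUS: no backing file, contents zero-initialized */
  return mmap (base_hint, size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}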
|
|
Configure w/ --enable-dlmalloc, see .../build-data/platforms/vpp.mk
src/vppinfra/dlmalloc.[ch] are slightly modified versions of the
well-known Doug Lea malloc. Main advantage: dlmalloc mspaces have no
inherent size limit.
Change-Id: I19b3f43f3c65bcfb82c1a265a97922d01912446e
Signed-off-by: Dave Barach <dave@barachs.net>
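A minimal sketch of the mspace interface this brings in (create_mspace / mspace_malloc / mspace_free / destroy_mspace are standard Doug Lea malloc entry points; how VPP wraps them is not shown here):

#include <vppinfra/dlmalloc.h>

static void
mspace_demo (void)
{
  /* a growable arena with its own locking */
  mspace msp = create_mspace (/* capacity */ 64 << 20, /* locked */ 1);

  void *p = mspace_malloc (msp, 1024);
  mspace_free (msp, p);

  /* no inherent size limit: the mspace grows by mapping more memory */
  destroy_mspace (msp);
}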
|
|
Change-Id: I6d049c0875b91f67f008dc04ae7efe2f8ddc276e
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- update segment manager and session api to work with both flavors of
ssvm segments
- add generic ssvm slave/master init and del functions
- cleanup/refactor tcp_echo
- fix uses of svm fifo pool as vector
Change-Id: Ieee8b163faa407da6e77e657a2322de213a9d2a0
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: I67648dbed3c7ed291b3e1ce617d83a776d3623bb
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: Iff33694fc42cc3bcc73cf1372339053a6365039c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I82c663bc0866c6c68ba354104b0bb059387f4b9d
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|