path: root/src/vlibmemory/memory_api.c
Age | Commit message | Author | Files | Lines

2019-06-14 | api: add mp-safe/barrier-sync indication to elogs | Dave Barach | 1 | -0/+1

Costs nothing, and leaves nothing to the imagination.

Type: fix
Change-Id: I7c9f9fb9325475c268eca062da7bbbf014438cfc
Signed-off-by: Dave Barach <dave@barachs.net>

2019-05-16 | init / exit function ordering | Dave Barach | 1 | -4/+4

The vlib init function subsystem now supports a mix of procedural and formally-specified ordering constraints. We should eliminate procedural knowledge wherever possible.

The following schemes are *roughly* equivalent:

    static clib_error_t *init_runs_first (vlib_main_t *vm)
    {
      clib_error_t *error;

      ... do some stuff...

      if ((error = vlib_call_init_function (init_runs_next)))
        return error;
      ...
    }
    VLIB_INIT_FUNCTION (init_runs_first);

and

    static clib_error_t *init_runs_first (vlib_main_t *vm)
    {
      ... do some stuff...
    }
    VLIB_INIT_FUNCTION (init_runs_first) =
    {
      .runs_before = VLIB_INITS("init_runs_next"),
    };

The first form will [most likely] call "init_runs_next" on the spot. The second form means that "init_runs_first" runs before "init_runs_next," possibly much earlier in the sequence.

Please DO NOT construct sets of init functions where A before B actually means A *right before* B. It's not necessary - simply combine A and B - and it leads to hugely annoying debugging exercises when trying to switch from ad-hoc procedural ordering constraints to formal ordering constraints.

Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>

2019-02-26 | VPP-1574: minimize RPC barrier sync calls | Dave Barach | 1 | -3/+20

Grab the thread barrier across a set of RPCs, to greatly increase efficiency. Avoids running afoul of the barrier sync holddown timer.

Change-Id: I782dfdb1bed398b290169c83266681c9edd57a3f
Signed-off-by: Dave Barach <dave@barachs.net>
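
The pattern being introduced, informally: take the worker barrier once around a whole batch of queued RPCs instead of once per call, so only one sync/release (and one holddown interval) is paid per batch. A minimal sketch under stated assumptions; handle_one_rpc() and the shape of the pending vector are hypothetical stand-ins, not code from this patch:

    /* Sketch only: one barrier sync/release around a batch of RPCs.
     * handle_one_rpc() and the pending vector layout are hypothetical. */
    static void
    drain_pending_rpcs (vlib_main_t * vm, vl_api_rpc_call_t ** pending)
    {
      int i;

      vlib_worker_thread_barrier_sync (vm);   /* one sync for the whole batch */
      for (i = 0; i < (int) vec_len (pending); i++)
        handle_one_rpc (vm, pending[i]);
      vlib_worker_thread_barrier_release (vm); /* one release, one holddown */
      vec_reset_length (pending);
    }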

2019-01-02 | Fixes for building for 32-bit targets | David Johnson | 1 | -4/+4

* u32/u64/uword mismatches
* pointer-to-int fixes
* printf formatting issues
* issues with incorrect "ULL" and related suffixes
* structure alignment and padding issues

Change-Id: I70b989007758755fe8211c074f651150680f60b4
Signed-off-by: David Johnson <davijoh3@cisco.com>
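
For context, the flavor of change each of these bullets implies, shown as a generic (non-VPP-specific) sketch; none of this is quoted from the patch:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: the kinds of edits a 32-bit port needs. */
    static void
    example_32bit_fixes (void *p, uint64_t big)
    {
      /* pointer-to-int: go through uintptr_t (uword in vppinfra),
         never a fixed-width u32/u64 cast */
      uintptr_t as_int = (uintptr_t) p;

      /* printf formatting: PRIuPTR / PRIu64 keep format strings correct
         on both 32-bit and 64-bit targets */
      printf ("ptr bits %" PRIuPTR ", counter %" PRIu64 "\n", as_int, big);

      /* suffixes: 1ULL << 40 works everywhere; 1 << 40 overflows int */
      uint64_t mask = 1ULL << 40;
      (void) mask;
    }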

2018-12-05 | bapi: add options to have vpp cleanup client registration | Florin Coras | 1 | -16/+22

A client can send a memclnt delete message and ask vpp to cleanup the shared memory queue. Obviously, in this case no delete reply is sent back to the client.

Change-Id: I9c8375093f8607680ad498a6bed0690ba02a7c3b
Signed-off-by: Florin Coras <fcoras@cisco.com>
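
A rough client-side sketch of what asking vpp to do the cleanup might look like; the do_cleanup field name and the surrounding details are assumptions drawn from this description, not quoted from the patch:

    /* Sketch only: field names (notably do_cleanup) are assumptions. */
    static void
    disconnect_and_let_vpp_cleanup (api_main_t * am)
    {
      vl_api_memclnt_delete_t *mp;

      mp = vl_msg_api_alloc (sizeof (*mp));
      clib_memset (mp, 0, sizeof (*mp));
      mp->_vl_msg_id = ntohs (VL_API_MEMCLNT_DELETE);
      mp->index = am->my_client_index;
      mp->do_cleanup = 1;   /* ask vpp to free the queue; no reply comes back */

      vl_msg_api_send_shmem (am->shmem_hdr->vl_input_queue, (u8 *) & mp);
    }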

2018-12-01 | Delete shared memory segment files when vpp starts | Dave Barach | 1 | -0/+22

Should have been done this way years ago. My bad.

Change-Id: Ic7bf937fb6c4dc5c1b6ae64f2ecf8608b62e7039
Signed-off-by: Dave Barach <dave@barachs.net>
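
The mechanism, roughly: before mapping its API segments, vpp unlinks any segment files a previous instance may have left behind in /dev/shm, so clients cannot attach to a dead segment. A sketch of the idea only; the file names below are assumptions for illustration:

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch, not the patch: unlink stale shared-memory segment files at
     * startup. The names below are assumptions for illustration. */
    static void
    remove_stale_api_segments (void)
    {
      static const char *stale[] = {
        "/dev/shm/vpe-api",     /* assumed binary API segment */
        "/dev/shm/global_vm",   /* assumed global heap segment */
      };
      unsigned i;

      for (i = 0; i < sizeof (stale) / sizeof (stale[0]); i++)
        if (unlink (stale[i]) == 0)
          fprintf (stderr, "removed stale segment %s\n", stale[i]);
    }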

2018-11-01 | Move RPC calls off the binary API input queue | Dave Barach | 1 | -0/+34

Change-Id: I2476e3e916a42b41d1e66bfc1ec4f8c4264c1720
Signed-off-by: Dave Barach <dbarach@cisco.com>

2018-10-25 | Revert "Keep RPC traffic off the shared-memory API queue" | Florin Coras | 1 | -20/+0

This reverts commit 71615399e194847d7833b744caedab9b841733e5. There seems to be an issue with ARPs when running with multiple workers.

Change-Id: Iaa68081512362945a9caf24dcb8d70fc7c5b75df
Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-10-24 | Keep RPC traffic off the shared-memory API queue | Dave Barach | 1 | -0/+20

Change-Id: Ib5c346641463768cf33eaf8cb5fab5b63171398d
Signed-off-by: Dave Barach <dave@barachs.net>

2018-10-23 | c11 safe string handling support | Dave Barach | 1 | -8/+8

Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>

2018-08-07 | api: compute msg table for private registrations | Florin Coras | 1 | -2/+7

Change-Id: Ibaa236c09c2eeea72ee8a8cc603d407217b4af23
Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-06-14 | Use __attribute__((weak)) references where necessary | Dave Barach | 1 | -5/+1

It should be possible to use vlib without the vlibmemory library, etc.

Change-Id: Ic2316b93d7dbb728fb4ff42a3ca8b0d747c9425e
Signed-off-by: Dave Barach <dave@barachs.net>
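
The pattern, shown with a hypothetical hook name rather than one taken from this patch: vlib ships a weak, do-nothing default, and when vlibmemory is linked in, its strong definition of the same symbol wins at link time, so vlib has no hard dependency on the library:

    /* In the vlib side of the interface: weak default, used only when no
     * strong definition is linked in. example_api_hook is hypothetical. */
    void __attribute__ ((weak))
    example_api_hook (void)
    {
      /* binary API support not linked in: nothing to do */
    }

    /* The vlibmemory side provides the strong definition:
     *   void example_api_hook (void) { ... real work ... }
     * and the linker picks it over the weak stub. */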

2018-06-08 | export counters in a memfd segment | Dave Barach | 1 | -0/+16

Also export per-node error counters.
Directory entries implement object types.

Change-Id: I8ce8e0a754e1be9de895c44ed9be6533b4ecef0f
Signed-off-by: Dave Barach <dave@barachs.net>

2018-06-05 | VPP API: Memory trace | Ole Troan | 1 | -0/+1

If you plan to put a hash into shared memory, the key sum and key equal functions MUST be set to constants such as KEY_FUNC_STRING, KEY_FUNC_MEM, etc. -lvppinfra is PIC, which means that the process which set up the hash won't have the same idea where the key sum and key compare functions live in other processes.

Change-Id: Ib3b5963a0d2fb467b91e1f16274df66ac74009e9
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Ole Troan <ot@cisco.com>

2018-01-25 | session: add support for memfd segments | Florin Coras | 1 | -46/+58

- update segment manager and session api to work with both flavors of ssvm segments
- added generic ssvm slave/master init and del functions
- cleanup/refactor tcp_echo
- fixed uses of svm fifo pool as vector

Change-Id: Ieee8b163faa407da6e77e657a2322de213a9d2a0
Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-01-10 | svm: calc base address on AArch64 based on autodetected VA space size | Damjan Marion | 1 | -1/+1

Change-Id: I7487eb74b8deebff849d662b55a6708566ccd9ef
Signed-off-by: Damjan Marion <damarion@cisco.com>

2018-01-09 | api: refactor vlibmemory | Florin Coras | 1 | -0/+905

- separate client/server code for both memory and socket apis
- separate memory api code from generic vlib api code
- move unix_shared_memory_fifo to svm and rename to svm_fifo_t
- overall declutter

Change-Id: I90cdd98ff74d0787d58825b914b0f1eafcfa4dc2
Signed-off-by: Florin Coras <fcoras@cisco.com>