path: root/src/vppinfra
Age  Commit message  Author  Files  Lines
2018-12-02  vppinfra: c11 safe string functions  (Steven, 2 files, -0/+1082)
Add memcmp_s, strcmp_s, strncmp_s, strcpy_s, strncpy_s, strcat_s, strncat_s, strtok_s, strnlen_s, and strstr_s to the C11 safe string API. To help migrate the extant unsafe API, also add a corresponding macro version of each safe API: clib_memcmp, clib_strcmp, etc.
In general, the benefits of the safe string APIs are null pointer checks, an additional argument giving the length of the passed string rather than relying on the null-terminating character, and src/dest overlap checking for the string copy operations.
The macro version of each API takes the same number of arguments as the unsafe API to provide easy migration. However, it does not usually provide the full set of aforementioned benefits. In some cases it is necessary to move to the safe API rather than the macro in order to avoid unpredictable problems, such as accessing memory beyond what was intended, due to the missing string-length argument.
dbarach: add a "make test" vector, and a doxygen file header cookie.
Change-Id: I5cd79b8928dcf76a79bf3f0b8cbc1a8f24942f4c
Signed-off-by: Steven <sluong@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
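A minimal sketch of the intended usage, assuming the C11 Annex K style prototypes (the exact vppinfra signatures live in src/vppinfra/string.h and may differ in types); copy_name and its arguments are purely illustrative:

    #include <vppinfra/string.h>

    static int
    copy_name (char *dst, size_t dst_size, const char *src)
    {
      /* Safe API: the destination size travels with the call, so a copy
       * that would overflow dst (or overlap it) returns an error code
       * instead of corrupting memory. */
      if (strcpy_s (dst, dst_size, src) != 0)
        return -1;

      /* Macro form: same argument list as the unsafe strcmp (), so existing
       * call sites migrate without change, at the cost of weaker checking. */
      return clib_strcmp (dst, src);
    }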
2018-11-29  do not optimize graph node functions in debug builds  (Damjan Marion, 1 file, -1/+1)
Change-Id: I5b4cd419d317381a06e7e6d703373959f4bbd97b Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-11-29  vcl: basic support for apps that fork  (Florin Coras, 1 file, -1/+1)
- intercept fork and register a new worker with vpp
- share sessions between parent and forked child
- keep binary api state per worker
Change-Id: Ib177517d661724fa042bd2d98d18e777056352a2
Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-11-29  vppinfra: add pool_dup macro  (Florin Coras, 1 file, -0/+35)
Change-Id: I192e340bd072d27bf6ddc382347ad5c3ca411bad Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-11-28  Use acquire/release ordering when accessing svm_fifo shared variable cursize  (Sirshak Das, 1 file, -0/+4)
Improves TCP iperf3 performance by ~3% on AArch64.
Change-Id: I1e51bd8403ba45ec6af4c2f96b95e884c1ae0d67
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
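A rough sketch of the producer/consumer pattern this implies, using the GCC atomic builtins directly (the field and helper names here are illustrative, not the actual svm_fifo code):

    #include <stdint.h>

    static inline uint32_t
    fifo_cursize_load (volatile uint32_t *cursize)
    {
      /* Consumer side: acquire load, so reads of the fifo contents cannot
       * be reordered before observing the updated size. */
      return __atomic_load_n (cursize, __ATOMIC_ACQUIRE);
    }

    static inline void
    fifo_cursize_store (volatile uint32_t *cursize, uint32_t new_size)
    {
      /* Producer side: release store, so the enqueued data is visible
       * before the new size becomes visible. */
      __atomic_store_n (cursize, new_size, __ATOMIC_RELEASE);
    }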
2018-11-28  cmake: display warning and continue if dpdk not present  (Damjan Marion, 1 file, -4/+4)
Change-Id: I5cb2619444507a159c42ac8401800e90b6541a20 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-11-27  pmalloc: correct format_pmalloc_map u32 index overrun bug  (Kingwel Xie, 2 files, -5/+3)
Change-Id: I95ba4eab6e2154ef33a479450b997c8317db3a92 Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
2018-11-26  vppinfra: prevent dlmalloc from allocating memory via mmap_alloc()  (Andrew Yourtchenko, 1 file, -1/+22)
If the heap does not have enough space to satisfy an allocation request, the allocator calls sys_alloc(). There, if the request is bigger than mparams.mmap_threshold, mmap_alloc() is called to allocate memory via a direct mmap call. The resulting allocation is properly recognized by clib_mem_is_heap_object() only for the first such request. Subsequent requests overwrite the tracking data, so previously "valid" addresses become invalid as seen by clib_mem_is_heap_object(). The result is misleading behavior which masks other issues.
This is a temporary change to avoid the affected codepath until there is a proper fix to track the directly mmap-allocated memory.
Change-Id: I4137f91b5196d4503c40cf8ecc2f71554bc8f858
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-11-20  vppinfra: add 128 and 256 bit vector scatter/gather inlines  (Damjan Marion, 2 files, -0/+102)
Change-Id: If6c65f16c6fba8beb90e189c1443c3d7d67ee02c Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-11-17  pcap-based dispatch tracer  (Dave Barach, 4 files, -0/+581)
To facilitate dispatch trajectory tracing, vlib_buffer_t decoding, etc. through Wireshark Change-Id: I31356b9fa1f40cba8830aaf10a86a9fbb7546438 Signed-off-by: Dave Barach <dave@barachs.net>
2018-11-15  VPP-1474: fix 2x coverity warnings  (Dave Barach, 2 files, -3/+3)
Change-Id: I441beaf3d7f57886580d7cce35ef592aa0fcca5f Signed-off-by: Dave Barach <dave@barachs.net>
2018-11-14  Remove c-11 memcpy checks from perf-critical code  (Dave Barach, 16 files, -58/+70)
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1 Signed-off-by: Dave Barach <dave@barachs.net>
2018-11-10  pmalloc: u32 pp->index leads to va address overrun  (Kingwel Xie, 1 file, -1/+2)
When the page size is 1G, pm->base + (pp->index << pm->def_log2_page_sz) overruns very quickly when creating multiple mempools, because the shift is done in 32-bit arithmetic; add a (uword) cast to pp->index.
Change-Id: If769b99d344cc3f547418a242a7497d044071615
Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
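A small sketch of the overflow and the cast, with the field names from the commit message wrapped in a standalone helper for illustration; with a 1G page, def_log2_page_sz is 30, so a u32 shift wraps as soon as the index grows:

    #include <stdint.h>

    typedef uintptr_t uword;

    static void *
    pool_page_va (uint8_t *base, uint32_t index, uint32_t def_log2_page_sz)
    {
      /* Buggy: 32-bit shift, wraps for large indices / big pages. */
      /* return base + (index << def_log2_page_sz); */

      /* Fixed: promote the index to a pointer-sized integer first. */
      return base + ((uword) index << def_log2_page_sz);
    }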
2018-11-08  Calculate clock rounding constant  (Dave Barach, 2 files, -3/+20)
Compute the first power of ten which is greater than 0.1% of the clock rate. Save the result, and use it to round future results. The previous constant value - 1e7 - didn't work properly on aarch64. Change-Id: Ic021e3eb1b90c0d4a7d9f1b6425123f0c8b48b0b Signed-off-by: Dave Barach <dave@barachs.net>
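A sketch of that rounding scheme, as standalone C rather than the actual vppinfra code:

    #include <math.h>

    /* First power of ten greater than 0.1% of the clock rate. */
    static double
    clock_round_unit (double clocks_per_second)
    {
      double threshold = clocks_per_second * 0.001;
      double unit = 1.0;
      while (unit <= threshold)
        unit *= 10.0;
      return unit;
    }

    /* Round a measured clock rate to that granularity. */
    static double
    round_clock_rate (double clocks_per_second, double unit)
    {
      return round (clocks_per_second / unit) * unit;
    }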
2018-11-08  physmem: Add physmem map support  (Mohsin Kazmi, 2 files, -0/+22)
This patch adds support for mapping the virtual address to physical address and size of memory allocated. Change-Id: I7659a1881308e89b215c486fecd7c973076d0773 Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-11-07  pmalloc: fix shared mappings  (Damjan Marion, 1 file, -2/+4)
Change-Id: I6782544d5ee0a66b1a027874b23574416093ca92 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-11-07  Optimize xxx_zero_byte_mask NEON function  (Lijian Zhang, 1 file, -44/+7)
Optimize the zero byte mask NEON functions below with fewer intrinsics, and make their outputs consistent with the corresponding functions in vector_sse42.h:
always_inline u32 u64x2_zero_byte_mask (u64x2 input)
always_inline u32 u32x4_zero_byte_mask (u32x4 input)
always_inline u32 u16x8_zero_byte_mask (u16x8 input)
always_inline u32 u8x16_zero_byte_mask (u8x16 input)
always_inline u32 i64x2_zero_byte_mask (i64x2 input)
always_inline u32 i32x4_zero_byte_mask (i32x4 input)
always_inline u32 i16x8_zero_byte_mask (i16x8 input)
always_inline u32 i8x16_zero_byte_mask (i8x16 input)
Change-Id: I7f485915baeb37fa2dd484699b8769e0136f6574
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
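For reference, a scalar sketch of the u8x16 variant's contract as described for vector_sse42.h (this is not the NEON implementation, only the behaviour the vector versions are expected to match): bit i of the result is set when byte i of the 16-byte input is zero.

    #include <stdint.h>

    static uint32_t
    zero_byte_mask_ref (const uint8_t in[16])
    {
      uint32_t mask = 0;
      for (int i = 0; i < 16; i++)
        if (in[i] == 0)
          mask |= 1u << i;
      return mask;
    }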
2018-11-05  Enable atomic swap and store macro with acquire and release ordering  (Sirshak Das, 2 files, -4/+3)
Add atomic swap and store macros with acquire and release ordering respectively. The variable in question is interupt_pending, used as a guard variable by input nodes to process the device queue. The atomic swap uses acquire ordering, as writes or reads following it in program order should not be reordered before the swap. The atomic store uses release ordering, as after the store the node is added to the pending list.
Change-Id: I1be49e91a15c58d0bf21ff5ba1bd37d5d7d12f7a
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
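A sketch of the guard-variable pattern using the GCC builtins the clib macros wrap (the helper names are illustrative, not the actual node code):

    #include <stdint.h>

    static inline uint32_t
    claim_interrupt (volatile uint32_t *interrupt_pending)
    {
      /* Swap with acquire ordering: queue reads that follow in program
       * order cannot be hoisted above the point where the flag is taken. */
      return __atomic_exchange_n (interrupt_pending, 0, __ATOMIC_ACQUIRE);
    }

    static inline void
    post_interrupt (volatile uint32_t *interrupt_pending)
    {
      /* Store with release ordering: the queued work written before this
       * store is visible to whoever later observes the flag. */
      __atomic_store_n (interrupt_pending, 1, __ATOMIC_RELEASE);
    }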
2018-10-31  Add and enable msb mask vector intrinsic for aarch64.  (Lijian Zhang, 1 file, -10/+28)
This patch enables the use of this function for enqueuing frames to the next graph node. Change-Id: I4003110db59870f7106e0d13942d6ff7bc54b46d Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com> Reviewed-by: Sirshak Das <Sirshak.Das@arm.com> Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> Reviewed-by: Steve Capper <Steve.Capper@arm.com>
2018-10-30  vppinfra: fix bug in default_socket_sendmsg  (Damjan Marion, 1 file, -1/+1)
Change-Id: Ia9b74761ce511d218bb5319c7c9b5e58be3e2e8a Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-28  physmem: coverity issues  (Damjan Marion, 1 file, -3/+4)
Change-Id: I34cc55d8292a69fb451ed0031484994f51d3537a Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-25  pmalloc: don't lock 4K pages if we don't have access to pagemap  (Damjan Marion, 2 files, -6/+30)
Without pagemap access, the only way to do DMA to physmem is through the IOMMU. In that case VFIO takes care of preventing such memory from being paged out, so we don't need to lock it here.
Change-Id: Ica9c20659fba3ea3c96202eb5f7d29c43b313fa9
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-25  pmalloc: support for 4K pages  (Damjan Marion, 5 files, -80/+250)
Change-Id: Iecceffe06a92660976ebb58cd3cbec4be8931db0 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-24  vppinfra: autodetect default hugepage size  (Damjan Marion, 6 files, -78/+59)
Change-Id: I5ff713ad0b254c74c5622e3b9425cca365b5ee97 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-23  physmem coverity issues  (Damjan Marion, 1 file, -2/+3)
Change-Id: Ie9ff9b751190632dfc4576e5cbb1987a4142af5e Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-23  Numa-aware, growable physical memory allocator (pmalloc)  (Damjan Marion, 5 files, -0/+871)
Change-Id: Ic4c46bc733afae8bf0d8146623ed15633928de30 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-23  c11 safe string handling support  (Dave Barach, 65 files, -144/+272)
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab Signed-off-by: Dave Barach <dave@barachs.net>
2018-10-22  vppinfra: use log2 page size in hugepage functions  (Damjan Marion, 3 files, -19/+27)
Change-Id: Ibec32c6df32f4cd9889d378e244f170c93ad295b Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-22  X86_64 perf counter plugin  (Dave Barach, 2 files, -0/+47)
Change-Id: Ie5a00c15ee9536cc61afab57f6cadc1aa1972f3c Signed-off-by: Dave Barach <dave@barachs.net>
2018-10-22  Fix dereferencing null string in dpdk_early_init  (Juraj Sloboda, 1 file, -0/+2)
Change-Id: Iffba7ebe5af8fadc0251f3a10022739d45f394ce Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-10-19  vppinfra: use memfd_create for hugepage mounts if supported  (Damjan Marion, 2 files, -25/+52)
Starting with kernel 4.14, a hugepage fd can be obtained with the memfd_create() system call.
Change-Id: I0f3bd6d0a7757ffe7b98e83763502013ac763ecb
Signed-off-by: Damjan Marion <damarion@cisco.com>
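A sketch of how such a fd can be obtained (glibc 2.27+ exposes memfd_create() via <sys/mman.h>; older systems need the raw syscall; the helper name and error handling here are illustrative):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    static int
    hugepage_fd (size_t size)
    {
      /* MFD_HUGETLB (Linux >= 4.14) backs the fd with default-size huge pages. */
      int fd = memfd_create ("hugepage-pool", MFD_HUGETLB);
      if (fd < 0)
        return -1;

      if (ftruncate (fd, size) < 0)
        {
          close (fd);
          return -1;
        }
      return fd; /* mmap () this fd to get hugepage-backed memory */
    }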
2018-10-19  vppinfra: add atomic macros for __sync builtins  (Sirshak Das, 9 files, -11/+56)
This is the first part of the addition of atomic macros, covering only the macros for the __sync builtins.
- Based on an earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- clib_atomic_release macro added and used in the absence of any memory barrier.
- clib_atomic_bool_cmp_and_swap added.
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
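A sketch of what wrapping the __sync builtins behind clib_atomic_* names looks like; the real definitions live in the vppinfra atomics header, and the exact macro list and spellings there should be treated as authoritative:

    /* Fetch-and-add, returning the old value. */
    #define clib_atomic_fetch_add(a, b) __sync_fetch_and_add (a, b)

    /* Compare-and-swap, returning 1 on success. */
    #define clib_atomic_bool_cmp_and_swap(a, old, new_) \
      __sync_bool_compare_and_swap (a, old, new_)

    /* Plain release store, for sites that previously had no barrier at all. */
    #define clib_atomic_release(a) __atomic_store_n (a, 0, __ATOMIC_RELEASE)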
2018-10-19  Add pool_get_zero, pool_get_aligned_zero macros  (Dave Barach, 1 file, -1/+13)
Shorthand for the pattern:
  pool_get (<pool>, ep);
  memset (ep, 0, sizeof (*ep));
Should have done this years ago.
Change-Id: Ideeb27a79ff4ca3e9a077c973b297671d1fa2d26
Signed-off-by: Dave Barach <dave@barachs.net>
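Roughly, the new macro just folds that pattern into one statement; a sketch (the real definition is in src/vppinfra/pool.h and also has an aligned variant):

    /* requires <vppinfra/pool.h> for pool_get () and <string.h> for memset () */
    #define pool_get_zero(P, E)              \
      do                                     \
        {                                    \
          pool_get (P, E);                   \
          memset (E, 0, sizeof ((E)[0]));    \
        }                                    \
      while (0)

    /* usage: my_elt_t *ep; pool_get_zero (my_pool, ep); */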
2018-10-17  bond: tx optimizations  (Damjan Marion, 1 file, -0/+12)
Break up bond tx function into multiple small workloads:
1. parse the packet header and hash it based on the configured algorithm
2. optionally, trace the packet
3. convert the hash value from (1) to the slave port
4. update the buffers with the slave sw_if_index
5. Add the buffers to the queues
6. Create and send the frames

old numbers
-----------
Time 5.3, average vectors/node 223.74, last 128 main loops 40.00 per node 222.61
vector rates in 3.3627e6, out 6.6574e6, drop 3.3964e4, punt 0.0000e0
Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   68998    17662979   0         1.89e1  255.99
BondEthernet0-tx                 active   68998    17662979   0         2.60e1  255.99
TenGigabitEthernet3/0/1-output   active   68998    8797416    0         1.03e1  127.50
TenGigabitEthernet3/0/1-tx       active   68998    8797416    0         7.85e1  127.50
TenGigabitEthernet7/0/1-output   active   68996    8865563    0         1.02e1  128.49
TenGigabitEthernet7/0/1-tx       active   68996    8865563    0         7.65e1  128.49

new numbers
-----------
BondEthernet0-output             active   304064   77840384   0         2.29e1  256.00
BondEthernet0-tx                 active   304064   77840384   0         2.47e1  256.00
TenGigabitEthernet3/0/1-output   active   304064   38765525   0         1.03e1  127.49
TenGigabitEthernet3/0/1-tx       active   304064   38765525   0         7.66e1  127.49
TenGigabitEthernet7/0/1-output   active   304064   39074859   0         1.01e1  128.51

Change-Id: I3ef9a52bfe235559dae09d055c03c5612c08a0f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-16  Fix coverity issue for potential overflow of page size  (Haiyang Tan, 1 file, -2/+3)
Change-Id: I2779626d745badb63386efcf729da7a094a4f297 Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
2018-10-10  vppinfra: introduce clib_mem_vm_ext_free() to avoid fd leaks  (Haiyang Tan, 2 files, -0/+12)
Change-Id: I8691a10493d159a97574550c111f07722960a7cd Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
2018-10-10  Integer underflow and out-of-bounds read (VPP-1442)  (Neale Ranns, 1 file, -4/+4)
Change-Id: Ife2a83b9d7f733f36e0e786ef79edcd394d7c0f9 Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-10-09  vppinfra: Fix extendto_high aarch64 NEON api.  [tag: v19.01-rc0]  (Sirshak Das, 1 file, -1/+1)
This fixes the l2BD and ip4 test case failures. Fixes VPP-1432, VPP-1428, VPP-1430 Change-Id: I48b5c961bab60cc3b39fcd6db47e098c81579480 Signed-off-by: Sirshak Das <sirshak.das@arm.com>
2018-10-04  clib_count_equal_*: don't read off the end of a small array and init data only if used (VPP-1429)  (Neale Ranns, 1 file, -8/+28)
Change-Id: I8afa57ecca590698d3430746968aa0a5b0070469
Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-10-02  VPP-1440: clean up coverity warnings  (Dave Barach, 1 file, -3/+10)
Change-Id: Ic6823fb617ecae547a5f0e28b1e037848e40f682 Signed-off-by: Dave Barach <dave@barachs.net>
2018-10-01  Support dynamic dual/quad loop selection on aarch64  (Lijian Zhang, 2 files, -0/+95)
Currently, there are three variants available on aarch64: qdf24xx, thunderx2t99, and cortex-a72. -DCLIB_N_PREFETCHES is passed to the source code to select the dual/quad implementation. In addition, different compiler options are applied to these critical functions. gcc-7.3.0 reports an ICE (internal compiler error) with -mtune=thunderx2t99, so -mtune=thunderx2t99 is enabled only when the gcc version is greater than 7.3.0.
Cavium ThunderX2, Implementer 0x43, Part 0x0af: -march=armv8-a+crc+crypto -mtune=thunderx2t99
Qualcomm Centriq 2400, Implementer 0x51, Part 0xc00: -march=armv8.1-a+crc+crypto -mtune=qdf24xx
Cortex-A72, Implementer 0x41, Part 0xd08: -march=armv8-a+crc+crypto -mtune=cortex-a72
Change-Id: Id5649c6325c1e642d0fd42535e3908793b13e02a
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
2018-09-27  Trivial: Cleanup some typos.  (Paul Vinciguerra, 20 files, -37/+37)
This is a new commit for code under a different maintainer. Change-Id: I79fa403fec6a312238a9a4b18b35dbcafaa05439 Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
2018-09-24  svm: march svm_fifo take 2  (Florin Coras, 1 file, -0/+29)
Change-Id: Ifa4fceef7edbe43d444790a624957db0817064de Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-09-20  bihash template: avoid memory leak upon rehash  (Andrew Yourtchenko, 1 file, -0/+3)
Call BV (value_free) once we have performed the rehash and thus no longer need the memory that the old value for the bucket refers to.
Change-Id: Ibb82174fc8002aeb3e1a6c8d1f90293d73bc45d8
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-09-19  bihash template: reinstate the check for the available memory in the arena  (Andrew Yourtchenko, 2 files, -3/+3)
Commit ffb14b9554afa1e58c3657e0c91dda3135008274 changed the semantics of alloc_arena_next to be an offset from alloc_arena, but the available-memory check in BV (alloc_aligned) still treats it as a virtual address. As a result the check always succeeds, so over a prolonged period the bihash arena allocator could overwrite whatever follows the arena.
Change-Id: I18882c5f340ca767a389e15cca2696a0a97ef015
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
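In simplified, standalone form, the bug is a bounds check whose operands changed meaning (names follow the commit message; this is an illustration, not the bihash template code):

    #include <stdint.h>

    typedef uintptr_t uword;

    /* Buggy: alloc_arena_next is now a small offset, so comparing it
     * against an absolute end address virtually always succeeds. */
    static int
    has_room_buggy (uword alloc_arena, uword alloc_arena_next,
                    uword alloc_arena_size, uword nbytes)
    {
      return alloc_arena_next + nbytes <= alloc_arena + alloc_arena_size;
    }

    /* Reinstated check: compare the offset plus the request against the
     * arena size itself. */
    static int
    has_room (uword alloc_arena_next, uword alloc_arena_size, uword nbytes)
    {
      return alloc_arena_next + nbytes <= alloc_arena_size;
    }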
2018-09-19  socket api: do not delay sending of messages  (Florin Coras, 1 file, -0/+7)
Instead of relying on the main epoll loop to send messages, try to send them as soon as possible.
Change-Id: I27c0b4076f3599ad6e968df4746881a6717d4299
Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-09-13  vppinfra: optimize clib_count_equal functions  (Damjan Marion, 1 file, -60/+136)
Change-Id: Ia4c79d560bfa1118d4683a89a1209a08c5f546b3 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-12  Add and enable u32x4_extend_to_u64x2_high for aarch64 NEON intrinsics.  (Sirshak Das, 1 file, -0/+6)
This is the high version of extendto. This function accomplishes the same task as both shuffling and extending done by SSE intrinsics. This enables the NEON version for buffer indexes to buffer pointer translation. Change-Id: I52d7bbf3d76ba69c9acb0e518ff4bc6abf3bbbd4 Signed-off-by: Sirshak Das <sirshak.das@arm.com> Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Yi He <yi.he@arm.com> Verified-by: Lijian Zhang <lijian.zhang@arm.com>
2018-09-11  bihash 32/64 bit shared memory interop  (Dave Barach, 2 files, -39/+39)
This patch makes 32/64 bit interoperable shared memory bihash tables work regardless of where they're mapped. Change-Id: If5b4a37ccdaa75410eba755c7d7195633de1b30b Signed-off-by: Dave Barach <dave@barachs.net>
2018-09-11  Replacing vtbl NEON intrinsic with rev NEON intrinsic for byte_swap.  (Sirshak Das, 1 file, -5/+1)
Use the rev16 vector intrinsic to reverse the byte order within each 16-bit element independently.
Change-Id: I071c40780baffe0bda614ec5d9dd92858f574b0d
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Brian Brooks <brian.brooks@arm.com>
Reviewed-by: Yi He <yi.he@arm.com>
Verified-by: Lijian Zhang <lijian.zhang@arm.com>
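A NEON sketch of the idea, written against the standard arm_neon.h intrinsics (the actual vppinfra helper and its vector types may be named differently): vrev16q_u8 reverses the two bytes inside every 16-bit lane, which is exactly a per-u16 byte swap.

    #include <arm_neon.h>

    static inline uint16x8_t
    u16x8_byte_swap_sketch (uint16x8_t v)
    {
      /* Reinterpret as bytes, swap each byte pair, reinterpret back. */
      return vreinterpretq_u16_u8 (vrev16q_u8 (vreinterpretq_u8_u16 (v)));
    }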