path: root/src/vppinfra
Age  Commit message  (Author, files changed, lines -/+)
2023-03-15  vppinfra: widen the scope of test_vector_funcs  (Damjan Marion, 12 files, -23/+23)
  Location changed and binary renamed to test_infra. It is also built by default.
  Type: improvement
  Change-Id: I27cd97f274501ceb7a01213e2bc9676cea00f39c
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2023-03-15  crypto-native: 256-bit AES CBC support  (Damjan Marion, 1 file, -0/+16)
  Used on Intel client CPUs which support the VAES instruction set without AVX512.
  Type: improvement
  Change-Id: I5f816a1ea9f89a8d298d2c0f38d8d7c06f414ba0
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2023-03-15  build: add support for intel alderlake and sapphirerapids, part 2  (Damjan Marion, 1 file, -1/+3)
  Type: improvement
  Change-Id: I64ca5bd3a959190111f61c5311a908d242c10bad
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2023-03-14  vlib: fix clib_crc32c on odd lengths and clib_crc32c_u8  (Andrew Yourtchenko, 1 file, -1/+1)
  Fix the typo in the intrinsic name, which caused the incorrect intrinsic to be used.
  Type: fix
  Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
  Change-Id: Ib7fde14d12897e4d1bfb5a01f6d65025473e4f8e
2023-03-14  build: add support for intel alderlake and sapphirerapids  (Damjan Marion, 1 file, -1/+18)
  Disabled by default.
  Type: improvement
  Change-Id: I36176c009e0873c048874ae38a7ea0a91449235c
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2023-03-13  avf: 512-bit SIMD version of avf_tx_prepare  (Leyi Rong, 1 file, -0/+3)
  Use AVX-512 operations in avf_tx_prepare().
  Type: improvement
  Signed-off-by: Leyi Rong <leyi.rong@intel.com>
  Change-Id: I01e0b4a2e2d440659b4298668a868d983f5091c3
2023-03-10  vlib: 512-bit SIMD version of vlib_buffer_free  (Leyi Rong, 1 file, -1/+4)
  Process 8 packets per batch in vlib_buffer_free_inline() when CLIB_HAVE_VEC512 is enabled.
  Type: improvement
  Signed-off-by: Leyi Rong <leyi.rong@intel.com>
  Change-Id: I78b8a525bce25ee355c9bf0e0f651698a8c45bda
2023-03-06  vppinfra: fix memory traces  (Benoît Ganne, 1 file, -49/+82)
  - allocate the memory trace spinlock independently from the main heap
  - disable tracing on a per-thread basis
  - make sure we hold the memory trace spinlock when changing tracing
  Type: fix
  Change-Id: I7d84f22132abdc895343d447cd3a2c574786f58d
  Signed-off-by: Benoît Ganne <bganne@cisco.com>
2023-03-06  vppinfra: adding support for socket mounting paths  (Mohsin Kazmi, 1 file, -1/+5)
  Type: improvement
  Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
  Change-Id: If894b2b741d0d417a6fc458dda83ca1d8192385d
2023-03-06  vppinfra: fix clib_bitmap_will_expand() result inversion  (Vladislav Grishenko, 1 file, -1/+1)
  The pool's pool_put_will_expand() calls clib_bitmap_will_expand(), so every put except the ones that lead to a free_bitmap reallocation gets a false-positive result, and vice versa. Unfortunately there is no related test, and the existing bitmap tests fail silently with a false-positive result as well. Fortunately, neither clib_bitmap_will_expand() nor pool_put_will_expand() is used by the current vpp codebase.
  Type: fix
  Signed-off-by: Vladislav Grishenko <themiron@yandex-team.ru>
  Change-Id: Id5bb900cf6a1b1002d37670f5c415c74165b5421
2023-03-06  vppinfra: display only the 1st 50 memory traces by default  (Benoît Ganne, 1 file, -2/+4)
  When using memory traces it can take a long time to display all traces bigger than 1k if there are lots of them, especially as we need to resolve symbols. It is better to display only the 1st 50 by default, unless verbose is used. Also fix the help string.
  Type: improvement
  Change-Id: I1e5e30209f10d2b05c561dbf856cb126e0cf513d
  Signed-off-by: Benoît Ganne <bganne@cisco.com>
2023-02-06  vppinfra: refactor clib_socket_init, add linux netns support  (Damjan Marion, 3 files, -208/+414)
  Type: improvement
  Change-Id: Ida2d044bccf0bc8914b4fe7d383f827400fa6a52
  Signed-off-by: Damjan Marion <dmarion@me.com>
2023-01-30  vppinfra: keep AddressSanitizer happy  (Benoît Ganne, 1 file, -2/+3)
  The vector size must be increased before setting the element so that AddressSanitizer can keep track of the accessible memory.
  Type: fix
  Change-Id: I7b13ce98ff29d98e643f399ec1ecb4681d3cec92
  Signed-off-by: Benoît Ganne <bganne@cisco.com>
2023-01-22  vppinfra: fix random buffer OOB crash with ASAN  (Dmitry Valter, 1 file, -1/+9)
  Don't truncate the buffer with vec_set_len before its bytes can be used. When built with ASAN, these bytes are poisoned and trigger SIGSEGV when read.
  Type: fix
  Signed-off-by: Dmitry Valter <d-valter@yandex-team.ru>
  Change-Id: I912dbbd83822b884f214b3ddcde02e3527848592
2023-01-20  vppinfra: clib_bitmap fix  (Maxime Peim, 1 file, -5/+5)
  In clib_bitmap_set_region and clib_bitmap_set_multiple the index of the last bit to set was off by 1. If this index was pointing to the last bit of the bitmap, another uword would have been allocated even though it was unnecessary. Moreover, in clib_bitmap_set_region, bits in the last word were not properly set: the n_bits_left value was wrong, since n_bits was not decreased by the number of already-set bits.
  Type: fix
  Signed-off-by: Maxime Peim <mpeim@cisco.com>
  Change-Id: I8d7ef6f47abb9f1f64f38297da2c59509d74dd72
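  The fix above is pure index arithmetic; here is a minimal standalone sketch of the off-by-one (illustrative only, not the clib code, assuming 64-bit uwords):

    #include <stdint.h>

    int
    main (void)
    {
      uint64_t i = 0, n_bits = 64;             /* set 64 bits starting at bit 0 */
      uint64_t last_bit = i + n_bits - 1;      /* 63: index of the last bit actually touched */
      uint64_t needed = last_bit / 64;         /* word 0: fits in the existing uword */
      uint64_t off_by_one = (i + n_bits) / 64; /* word 1: would allocate a needless extra uword */
      return (needed == 0 && off_by_one == 1) ? 0 : 1;
    }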
2023-01-18  vppinfra: fix pcap write large file (> 0x80000000) error  (aihua2013, 1 file, -1/+1)
  Type: improvement
  Signed-off-by: aihua2013 <51931196@qq.com>
  Change-Id: I22670f49abfb5d1fd728686fc7d65fb40ea6bda2
2023-01-14  vppinfra: add const to char* params of several funcs  (Sergey Nikiforov, 3 files, -11/+9)
  These functions do not need modifiable strings. It helps with linker sections as well as C++ compatibility, and it is good style to use const where appropriate.
  Type: refactor
  Signed-off-by: void234@gmail.com
  Change-Id: I8d1e922197b3594122296e8c1af57e0a8ec0bf3d
2023-01-13  vppinfra: fix else if check in _vec_set_len()  (Liangxing Wang, 1 file, -1/+1)
  Type: fix
  Signed-off-by: Liangxing Wang <liangxing.wang@arm.com>
  Change-Id: I1f757abccd228b9e73f25c96754738c8e6bff259
2023-01-12  vppinfra: fix longstanding corner case bug in serialize_get()  (Dave Barach, 2 files, -0/+47)
  serialize_get() -> serialize_write_not_inline(...) was losing track of the current buffer index when it managed to empty the overflow vector but had to turn around and use it again. Test case added to test_serialize.c. This issue dates from 2010.
  Type: fix
  Signed-off-by: Dave Barach <dave@barachs.net>
  Change-Id: I024a03f7a50fd6df543ddbc7c45d85def4f1981d
2023-01-12  misc: use right include for fcntl.h and poll.h  (Guillaume Solignac, 2 files, -2/+2)
  Musl is stricter than glibc and warns that including fcntl.h and poll.h should be preferred over their sys/ counterparts, which breaks -Wall setups.
  Type: fix
  Signed-off-by: Guillaume Solignac <gsoligna@cisco.com>
  Change-Id: Id101e999371951b0927cc8c4109f8f1536de1bc2
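  A minimal illustration of the preferred spelling, which works on both glibc and musl:

    /* musl's <sys/fcntl.h> and <sys/poll.h> emit a redirection warning, which
     * breaks builds that escalate warnings; the top-level headers are the
     * portable form. */
    #include <fcntl.h> /* rather than <sys/fcntl.h> */
    #include <poll.h>  /* rather than <sys/poll.h>  */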
2022-12-26  vppinfra: fix function prototypes  (Dave Barach, 2 files, -4/+4)
  Type: fix
  Signed-off-by: Dave Barach <dave@barachs.net>
  Change-Id: Idbdfdf2d3fdbb64366f50d5a7458c4073a4f2746
2022-12-02  vlib: clib_panic if sysconf() can't determine page size on startup  (Andrew Yourtchenko, 1 file, -1/+7)
  Account for the possibility of sysconf() returning -1 if it cannot get the page size, and make it a fatal error.
  Coverity: 277313
  Type: fix
  Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
  Change-Id: I8cae6a35ec2f745c37f1fe6557e5fa66720b4628
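  A minimal sketch of the check described above, assuming vppinfra's clib_panic macro; the function name and message text are illustrative, not the actual patch:

    #include <unistd.h>
    #include <vppinfra/error.h>

    static long
    get_page_size (void)
    {
      long page_size = sysconf (_SC_PAGESIZE);
      if (page_size == -1)
        /* previously a -1 here could go unnoticed; now it is treated as fatal */
        clib_panic ("sysconf (_SC_PAGESIZE) failed");
      return page_size;
    }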
2022-11-14  crypto-ipsecmb: fix plugin crash in VirtualBox  (Maros Ondrejicka, 1 file, -0/+1)
  The plugin checks only for the AVX2 instruction set, while v1.3 of the IPsec Multi-Buffer library checks for both AVX2 and BMI2 during init. A VirtualBox VM doesn't expose BMI2 to the guest operating system by default. The result is that the VPP plugin decides to use the AVX2 initialization while the library does not. Since flush_job remains empty, the self-check fails and with that the whole of VPP crashes on start-up.
  Type: fix
  Signed-off-by: Maros Ondrejicka <maros.ondrejicka@pantheon.tech>
  Change-Id: I6b661f2b9bbe6dd03b499c55c38a9b814e6d718a
2022-10-25  hash: add local variable  (Gabriel Oginski, 1 file, -2/+3)
  The current implementation of the hash table is not thread-safe. This design leads to a segfault when VPP handles a lot of tunnels for Wireguard, where one thread modifies the hash table while other threads start a lookup at the same time. The fix adds a local variable to store how many bits are used by a user object.
  Type: fix
  Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
  Change-Id: Iecf6b3ef9f308b61015c66277cc459a6d019c9c1
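  The general pattern behind such a fix is to read the shared sizing field exactly once into a local, so a concurrent resize cannot change it between uses. The sketch below is hypothetical: the struct, field name, and mask computation are illustrative, not the actual hash.c code.

    #include <vppinfra/clib.h>

    typedef struct
    {
      u32 log2_nbuckets; /* hypothetical shared sizing field */
      /* ... */
    } demo_hash_t;

    static inline u32
    bucket_for_key (demo_hash_t *h, u32 key_hash)
    {
      u32 log2_nbuckets = h->log2_nbuckets;        /* single read of shared state */
      return key_hash & pow2_mask (log2_nbuckets); /* mask derived from that same read */
    }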
2022-10-24  vppinfra: fix incorrect sizeof() argument due to typo  (Andrew Yourtchenko, 1 file, -1/+1)
  Fixes coverity 282527.
  Type: fix
  Fixes: fecb2524ab
  Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
  Change-Id: I9ac72c3802e66369a8f24c92451e33f22c058f24
2022-10-18  vppinfra: send minimal needed mask to the set_mempolicy syscall  (Damjan Marion, 1 file, -11/+14)
  Type: fix
  Fixes: 561ae5d
  Change-Id: I0d98f5b43bc9ab5d31463b285177a11a10b864d2
  Signed-off-by: Damjan Marion <dmarion@me.com>
2022-10-17  cnat: Add sctp support  (Nathan Skrzypczak, 1 file, -8/+12)
  This patch adds SCTP support to the CNat translation primitives. It also exposes a clib_crc32c_with_init function that lets the caller set the initial value for the CRC32 computation instead of starting from 0.
  Type: feature
  Change-Id: I86add4cfcac08f2a5a34d1e1841122fafd349fe7
  Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
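  A hedged usage sketch of the new helper: the log does not show the prototype, so the argument order below is an assumption; the idea is simply that the result of one call seeds the next, which plain clib_crc32c (which always starts at 0) cannot do.

    #include <vppinfra/crc32.h>

    /* assumed shape: u32 clib_crc32c_with_init (u8 *s, int len, u32 init) */
    static u32
    crc_over_two_buffers (u8 *hdr, int hdr_len, u8 *payload, int payload_len)
    {
      u32 crc = clib_crc32c (hdr, hdr_len);                     /* implicitly starts from 0 */
      return clib_crc32c_with_init (payload, payload_len, crc); /* continues from crc */
    }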
2022-10-12  misc: fix issues reported by clang-15  (Damjan Marion, 1 file, -1/+1)
  Type: improvement
  Change-Id: I3fbbda0378b72843ecd39a7e8592dedc9757793a
  Signed-off-by: Damjan Marion <dmarion@me.com>
2022-10-11  vppinfra: fix AddressSanitizer  (Benoît Ganne, 1 file, -0/+1)
  When checking for CLIB_SANITIZE_ADDR to enable AddressSanitizer-specific behavior, we must have vppinfra/clib.h included, as that is where it is defined.
  Type: fix
  Change-Id: I9060c3c29c1289d28596c215a1d1709b2ea7c84e
  Signed-off-by: Benoît Ganne <bganne@cisco.com>
2022-09-09  vppinfra: add clib_array_mask_set_u32()  (Damjan Marion, 1 file, -0/+33)
  Type: improvement
  Change-Id: Idf1fb054d5ff495d772d01a79cbc6cd1b409d377
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-08-23  vppinfra: fix coverity 249217  (Andrew Yourtchenko, 1 file, -1/+1)
  Zero-initialize the temporary struct.
  Type: fix
  Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
  Change-Id: I8d73feae427a17470c47d1551ba7078213b589fc
2022-08-18  vppinfra: correct clib_bitmap_set() return comment  (Jon Loeliger, 1 file, -1/+1)
  Fix a copy-and-paste issue that left clib_bitmap_set()'s return type documentation incorrect. Change it to indicate that the function returns a new pointer for the bitmap, which may differ from the old one due to a possible reallocation.
  Type: docs
  Signed-off-by: Jon Loeliger <jdl@netgate.com>
  Change-Id: Ia193c4673c0e4d1760e91cd7f80ebe1868a3c9b5
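  A short usage reminder matching the corrected comment (the wrapper function here is illustrative): because the bitmap may be reallocated, the caller must keep the returned pointer.

    #include <vppinfra/bitmap.h>

    static void
    mark_index (uword **bmp, uword index)
    {
      /* clib_bitmap_set () may reallocate the underlying vector, so the
       * result must be assigned back, as the corrected comment states */
      *bmp = clib_bitmap_set (*bmp, index, 1);
      ASSERT (clib_bitmap_get (*bmp, index) == 1);
    }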
2022-07-26  vppinfra: fix formatting of format_base10  (Pim van Pelt, 1 file, -1/+1)
  format_base10 reads 64 bits but is fed 32-bit values at the call site; change to u64 consistently. The function has only one call site, in interface/monitor.c, which has a few additional bugs (a spurious character, and an ambiguous 'bits' versus 'bytes' in the output).
  Type: improvement
  Signed-off-by: Pim van Pelt <pim@ipng.nl>
  Change-Id: I360f0d439cc13c09bd3f53db8184bd12ad4bc2e9
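  A hedged call-site sketch of what the u64 change implies: narrower values should be widened where they are passed (the wrapper and variable names are illustrative, not the monitor.c code).

    #include <vppinfra/format.h>

    static u8 *
    show_counter (u8 *s, u32 rx_bytes) /* illustrative 32-bit counter */
    {
      /* format_base10 now consumes a u64 via va_arg, so widen at the call site */
      return format (s, "%U", format_base10, (u64) rx_bytes);
    }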
2022-07-12  perfmon: enable perfmon plugin for Arm  (Zachary Leaf, 2 files, -2/+5)
  This patch enables statistics from the Arm PMUv3 through the perfmon plugin. In comparison to using the Linux "perf" tool, it allows obtaining direct, per-node statistics (rather than per-thread). By accessing the PMU counter registers directly from userspace, we can avoid the overhead of a read() system call and get more accurate and fine-grained statistics about the running of individual nodes.
  A demo of perfmon on Arm can be found at: https://asciinema.org/a/egVNN1OF7JEKHYmfl5bpDYxfF
  *Important Note* Perfmon on Arm depends on, and works only with, Linux kernel versions v5.17+, as this is when userspace access to Arm perf counters was added.
  On most Arm systems, a maximum of 7 PMU events can be configured at once (6x PMU events + 1x CPU_CYCLE counter). If some perf counters are in use elsewhere by other applications and there are insufficient counters remaining to open the bundle, the perf_event_open call will fail (provided the events are grouped with the group_fd param, which perfmon currently utilises).
  See arm/events.h for a list of available PMUv3 events, although it is implementation defined whether most events are implemented or not. Only a small set of 7 events is required to be implemented in Armv8.0, with some additional events required in later versions. As such, depending on the implementation, some statistics may not be available. See the Arm Architecture Reference Manual for Armv8-A, D7.10.2 "The PMU event number space and common events" for more information.
  arm/events.c:arm_init() gets information from sysfs about which events are implemented on a particular CPU at runtime. Arm's implementation of the perfmon source callback .bundle_support uses this information to disable unsupported events in a bundle, or, where no events are supported, to disable the entire bundle. Where a particular event in a bundle is not implemented, the statistic for that event is shown as '-' in the 'show perfmon statistics' cli output, by disabling the column. There is additional code in perfmon.c to only open events which are marked as implemented. Since we are only opening and reading events that are implemented, some extra logic is required in cli.c to re-align either perfmon_node_stats_t or perfmon_reading_t with the column headings configured in each bundle, taking into account disabled columns.
  Userspace access to perf counters is disabled by default and needs to be enabled with 'sudo sysctl kernel/perf_user_access=1'. The Arm event source init function (arm/events.c:arm_init) checks that userspace reading of perf counters is enabled in the /proc/sys/kernel/perf_user_access file. If that file does not exist, the kernel version is unsupported. Users without a supported kernel will see a warning message, and no Arm bundles will be registered for use in perfmon.
  Enabling/using the plugin:
  - include the following in startup.conf: plugins { plugin perfmon_plugin.so { enable } }
  - 'show perfmon bundle [verbose]' - show available statistics bundles
  - 'perfmon start bundle <bundle-name>' - enable and start logging
  - 'perfmon stop' - stop logging
  - 'show perfmon statistics' - show output
  For a general guide on using and understanding Arm PMUv3 events, see https://community.arm.com/arm-community-blogs/b/tools-software-ides-blog/posts/arm-neoverse-n1-performance-analysis-methodology
  Type: feature
  Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
  Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
  Change-Id: I0620fe5b1bbe78842dfb1d0b6a060bb99e777651
2022-07-06  vppinfra: fix memory leak in sparse_vec_free()  (Sergey Matov, 1 file, -1/+4)
  Type: fix
  Signed-off-by: Ivan Shvedunov <ivan4th@gmail.com>
  Signed-off-by: Sergey Matov <sergey.matov@travelping.com>
  Change-Id: I4ec1a68b7266f05ab7c543cd8207afb29e740743
2022-06-10  vppinfra: fix bihash_8_16 entry format function  (Benoît Ganne, 1 file, -2/+1)
  Type: fix
  Change-Id: I1e8655baaf09b455f7f0052452402a372f738d0f
  Signed-off-by: Benoît Ganne <bganne@cisco.com>
2022-06-09  vppinfra: missing __clib_export for clib_pmalloc_alloc_aligned  (Damjan Marion, 1 file, -2/+2)
  Type: improvement
  Change-Id: I7489327d8b9c5f69b4ceb2159456f00f8a3612df
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-05-24  vppinfra: fix memory trace  (Leung Lai Yung, 1 file, -0/+5)
  Reset the memory trace if mem trace is turned on.
  Type: fix
  Signed-off-by: Leung Lai Yung <benkerbuild@gmail.com>
  Change-Id: Ib99355b9ed42ff66c720bbea5cbbf03c65820d12
2022-05-24  vlib: implement aux data handoff  (Mohammed Hawari, 1 file, -0/+3)
  Type: improvement
  Change-Id: I20b41537a249a55f01004e45392b34adaa8fd792
  Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
2022-05-23  ip: reassembly - fixing stepping index in a better way  (Vijayabhaskar Katamreddy, 1 file, -10/+5)
  The pool_is_free_index() check is performed only for the first element.
  Type: fix
  Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
  Change-Id: Icadc715a9b54761ec69805a134a69a262137536d
2022-05-19  ip: reassembly - pacing reassembly timeouts  (Vijayabhaskar Katamreddy, 1 file, -5/+16)
  Pace the main thread activity for reassembly timeouts, to avoid barrier syncs.
  Type: fix
  Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
  Change-Id: If8c62a05c7d28bfa6ac530c2cd5124834b4e8a70
2022-05-18  vppinfra: fix non-vector build on x86_64  (Damjan Marion, 1 file, -1/+3)
  Type: fix
  Fixes: 56f54af
  Change-Id: Id03185953eb16da3a3276d2f21d64499784bbf17
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-05-06  vppinfra: free vector against its heap  (Damjan Marion, 1 file, -1/+1)
  Type: fix
  Change-Id: Ie292ee56dd5265a56ef472554aaf086e61da7089
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-04-29  vppinfra: fix clib_mem_destroy  (Damjan Marion, 1 file, -2/+1)
  The wrong pointer was being passed to clib_mem_vm_unmap.
  Type: fix
  Change-Id: I1f695d77bc45d9a6de3a4a3da1fbe6faebdad15e
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-04-13  vppinfra: fix GCC 7.3 build error with asm inline  (Guillaume Solignac, 1 file, -4/+4)
  GCC only added "asm inline" in 8.3, so change "asm inline" to "asm volatile".
  Type: fix
  Fixes: d5045e68a782 ("vppinfra: introduce clib_perfmom")
  Signed-off-by: Guillaume Solignac <gsoligna@cisco.com>
  Change-Id: I9f7781ba9de66211404348ff477a17059b408a78
2022-04-13  vppinfra: fix clang-10 build error with asm inline  (Tianyu Li, 1 file, -1/+1)
  Clang only started to parse "asm inline" in clang-11, so use "asm volatile" instead.
  Type: fix
  Fixes: d5045e68a782 ("vppinfra: introduce clib_perfmom")
  Signed-off-by: Tianyu Li <tianyu.li@arm.com>
  Change-Id: I00e5e19856caaed94e22f8fa6cf4f918483976a4
2022-04-12  vppinfra: vector perf improvements  (Damjan Marion, 14 files, -136/+324)
  Type: improvement
  Change-Id: I37c187af80c21b8fb1ab15af112527a837e0df9e
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-04-08  vppinfra: introduce clib_perfmom  (Damjan Marion, 11 files, -216/+530)
  Type: improvement
  Change-Id: I85a90774eb313020435c9bc2297c1bdf23d52efc
  Signed-off-by: Damjan Marion <damarion@cisco.com>
2022-04-08  vppinfra: clib_interrupt_get_next reading unallocated memory  (Paul Atkins, 2 files, -1/+82)
  The clib interrupt structure has a couple of fields at the start of the cacheline, and then in the next cacheline it has a bitmap, which is followed by an atomic bitmap. The size of the bitmaps is based on the number of interrupts, and when the memory is allocated, the number of interrupts needed is used to size the overall block of memory.
  The interrupts typically map to pool entries, so if we want to store 512 entries then we store them in indices 0..511. This would take 8 64-bit words, so each bitmap is this size when the struct is allocated.
  It is possible to walk over the end of the allocated data with certain sizes, one of which is 512. The reason this happens with 512 is that the check to see when to exit the loop only returns when the offset is greater than the value needed to fit all the values, in this case 512 >> 6 = 8. If there had only been 511 entries, the size would have been 511 >> 6 = 7, and so it would have fitted in the space.
  Therefore modify the check to also verify that we are not looking into memory beyond what we have allocated, for the case where the number of interrupts is one of the boundary values like 512. Also add a similar check the first time round the loop, as it is possible we could hit the same problem there too.
  Add a new test file to verify the new code works. The old version of the code made this test fail when run with the address sanitizer. Without the sanitizer it tended to pass, because the following memory was typically set to 0 even though it was uninitialised.
  Type: fix
  Signed-off-by: Paul Atkins <patkins@graphiant.com>
  Change-Id: I2ec4afae43d296a5c30299bd7694c072ca76b9a4
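  A minimal standalone sketch of the boundary condition described above (simplified, not the actual clib_interrupt code; it also ignores bits below 'start' inside the first word): with 512 interrupts the bitmap owns exactly 512 >> 6 = 8 uwords, so the walk must stop at the last owned word rather than allowing one extra iteration past the allocation.

    #include <stdint.h>

    typedef uint64_t uword;

    /* returns the first set bit at or after 'start', or -1; 'n_ints' bounds the walk */
    static int
    next_set_bit (const uword *bitmap, uword n_ints, uword start)
    {
      uword n_words = (n_ints + 63) / 64;          /* 512 interrupts -> words 0..7, nothing beyond */
      for (uword w = start / 64; w < n_words; w++) /* '<' keeps the walk inside the allocation */
        if (bitmap[w])
          return (int) (w * 64 + (uword) __builtin_ctzll (bitmap[w]));
      return -1;
    }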
2022-04-08  vppinfra: add bright colors to format_table  (Damjan Marion, 2 files, -2/+20)
  Type: improvement
  Change-Id: I21de21af6dea9e39df5e912e20e56d878a40659f
  Signed-off-by: Damjan Marion <damarion@cisco.com>