Optimize the zero-byte-mask NEON functions below to use fewer intrinsics,
and make their outputs consistent with the corresponding functions in
vector_sse42.h (a sketch of the u8x16 variant follows the sign-offs below).
always_inline u32 u64x2_zero_byte_mask (u64x2 input)
always_inline u32 u32x4_zero_byte_mask (u32x4 input)
always_inline u32 u16x8_zero_byte_mask (u16x8 input)
always_inline u32 u8x16_zero_byte_mask (u8x16 input)
always_inline u32 i64x2_zero_byte_mask (i64x2 input)
always_inline u32 i32x4_zero_byte_mask (i32x4 input)
always_inline u32 i16x8_zero_byte_mask (i16x8 input)
always_inline u32 i8x16_zero_byte_mask (i8x16 input)
Change-Id: I7f485915baeb37fa2dd484699b8769e0136f6574
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
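A minimal sketch of the u8x16 case, assuming AArch64 and following the SSE
movemask convention (bit i set when byte i is zero); the helper name and body
are illustrative, not the actual vector_neon.h implementation:

#include <arm_neon.h>
#include <stdint.h>

static inline uint32_t
u8x16_zero_byte_mask_sketch (uint8x16_t x)
{
  const uint8_t weights[16] = { 1, 2, 4, 8, 16, 32, 64, 128,
                                1, 2, 4, 8, 16, 32, 64, 128 };
  uint8x16_t eq0 = vceqq_u8 (x, vdupq_n_u8 (0));      /* 0xff where byte == 0 */
  uint8x16_t m = vandq_u8 (eq0, vld1q_u8 (weights));  /* keep each zero byte's bit */
  /* collapse each half into an 8-bit mask: bit i set when byte i is zero */
  return (uint32_t) vaddv_u8 (vget_low_u8 (m)) |
         ((uint32_t) vaddv_u8 (vget_high_u8 (m)) << 8);
}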
|
|
Add atomic swap and store macros with acquire and release ordering,
respectively. The variable in question is interrupt_pending, which input
nodes use as a guard variable when processing the device queue.
The atomic swap uses acquire ordering, because reads and writes that follow
it in program order must not be reordered before the swap.
The atomic store uses release ordering, because the node is added to the
pending list after the store.
Change-Id: I1be49e91a15c58d0bf21ff5ba1bd37d5d7d12f7a
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
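A minimal sketch of such macros using the GCC __atomic builtins; the macro
names below are assumptions for illustration, not quoted from the patch:

#define clib_atomic_swap_acq_n(a, b)  __atomic_exchange_n ((a), (b), __ATOMIC_ACQUIRE)
#define clib_atomic_store_rel_n(a, b) __atomic_store_n ((a), (b), __ATOMIC_RELEASE)

/* Input node: consume the guard with acquire ordering so queue reads that
 * follow cannot be reordered before the swap:
 *   if (clib_atomic_swap_acq_n (&dq->interrupt_pending, 0)) { ...process queue... }
 *
 * Device: publish with release ordering before the node is added to the
 * pending list:
 *   clib_atomic_store_rel_n (&dq->interrupt_pending, 1);
 */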
|
|
This patch enables the use of this function for enqueuing frames to the next graph node.
Change-Id: I4003110db59870f7106e0d13942d6ff7bc54b46d
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Steve Capper <Steve.Capper@arm.com>
|
|
Change-Id: Ia9b74761ce511d218bb5319c7c9b5e58be3e2e8a
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I34cc55d8292a69fb451ed0031484994f51d3537a
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Without pagemap access, the only way to do DMA to physmem is through the
IOMMU. In that case VFIO takes care of preventing such memory from being
paged out, so we don't need to lock it here.
Change-Id: Ica9c20659fba3ea3c96202eb5f7d29c43b313fa9
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Iecceffe06a92660976ebb58cd3cbec4be8931db0
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I5ff713ad0b254c74c5622e3b9425cca365b5ee97
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ie9ff9b751190632dfc4576e5cbb1987a4142af5e
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ic4c46bc733afae8bf0d8146623ed15633928de30
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ibec32c6df32f4cd9889d378e244f170c93ad295b
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ie5a00c15ee9536cc61afab57f6cadc1aa1972f3c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Iffba7ebe5af8fadc0251f3a10022739d45f394ce
Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
|
|
Starting with kernel 4.14, a hugepage fd can be retrieved with the
memfd_create system call.
Change-Id: I0f3bd6d0a7757ffe7b98e83763502013ac763ecb
Signed-off-by: Damjan Marion <damarion@cisco.com>
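A small standalone sketch, assuming Linux >= 4.14 and glibc >= 2.27, showing
how a hugepage-backed fd can be obtained with memfd_create and MFD_HUGETLB
(illustrative only, not the VPP code path):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>

int
main (void)
{
  /* MFD_HUGETLB asks for hugetlbfs backing; it requires kernel >= 4.14 */
  int fd = memfd_create ("hugepage", MFD_HUGETLB);
  if (fd < 0)
    {
      perror ("memfd_create");
      return 1;
    }
  /* the fd can now be ftruncate()d and mmap()ed like a hugetlbfs file */
  close (fd);
  return 0;
}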
|
|
This is the first part of the addition of atomic macros, containing only
macros for the __sync builtins.
- Based on an earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- a clib_atomic_release macro is added and used where no other memory
  barrier is present.
- clib_atomic_bool_cmp_and_swap is added.
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
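A hedged sketch of what such wrappers around the builtins can look like; the
bodies below are assumptions based on the names in this message, not the
exact vppinfra definitions:

#define clib_atomic_bool_cmp_and_swap(addr, old, new_) \
  __sync_bool_compare_and_swap ((addr), (old), (new_))

/* assumed form: clear a guard/lock variable with release ordering where no
 * other barrier is present */
#define clib_atomic_release(a) __atomic_store_n ((a), 0, __ATOMIC_RELEASE)

/* usage: spin until the CAS wins, then release when done
 *   while (!clib_atomic_bool_cmp_and_swap (&lock, 0, 1)) ;
 *   ... critical section ...
 *   clib_atomic_release (&lock);
 */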
|
|
Shorthand for the pattern:
pool_get (<pool>, ep);
memset (ep, 0, sizeof(*ep));
Should have done this years ago.
Change-Id: Ideeb27a79ff4ca3e9a077c973b297671d1fa2d26
Signed-off-by: Dave Barach <dave@barachs.net>
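A sketch of the shorthand described above; the macro name pool_get_zero is
an assumption based on the pattern, and it presumes vppinfra's pool_get and
<string.h> memset are in scope:

#define pool_get_zero(P, E)                     \
  do                                            \
    {                                           \
      pool_get (P, E);                          \
      memset ((E), 0, sizeof ((E)[0]));         \
    }                                           \
  while (0)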
|
|
Break up the bond tx function into multiple small workloads:
1. parse the packet header and hash it based on the configured algorithm
2. optionally, trace the packet
3. convert the hash value from (1) to the slave port
4. update the buffers with the slave sw_if_index
5. add the buffers to the queues
6. create and send the frames
(A simplified sketch of steps 1, 3 and 4 follows the numbers below.)
old numbers
-----------
Time 5.3, average vectors/node 223.74, last 128 main loops 40.00 per node 222.61
vector rates in 3.3627e6, out 6.6574e6, drop 3.3964e4, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
BondEthernet0-output active 68998 17662979 0 1.89e1 255.99
BondEthernet0-tx active 68998 17662979 0 2.60e1 255.99
TenGigabitEthernet3/0/1-output active 68998 8797416 0 1.03e1 127.50
TenGigabitEthernet3/0/1-tx active 68998 8797416 0 7.85e1 127.50
TenGigabitEthernet7/0/1-output active 68996 8865563 0 1.02e1 128.49
TenGigabitEthernet7/0/1-tx active 68996 8865563 0 7.65e1 128.49
new numbers
-----------
BondEthernet0-output active 304064 77840384 0 2.29e1 256.00
BondEthernet0-tx active 304064 77840384 0 2.47e1 256.00
TenGigabitEthernet3/0/1-output active 304064 38765525 0 1.03e1 127.49
TenGigabitEthernet3/0/1-tx active 304064 38765525 0 7.66e1 127.49
TenGigabitEthernet7/0/1-output active 304064 39074859 0 1.01e1 128.51
Change-Id: I3ef9a52bfe235559dae09d055c03c5612c08a0f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
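A self-contained, simplified sketch of steps 1, 3 and 4 above; the types,
the FNV-1a hash and the function names are stand-ins for illustration, not
VPP's actual bond code:

#include <stdint.h>
#include <stddef.h>

typedef struct
{
  uint32_t sw_if_index_tx;   /* egress interface chosen for this packet */
  uint8_t header[14];        /* simplified L2 header */
} buf_t;

static uint32_t
hash_l2 (const uint8_t *h, size_t len)
{
  uint32_t x = 2166136261u;            /* FNV-1a over the header bytes */
  for (size_t i = 0; i < len; i++)
    x = (x ^ h[i]) * 16777619u;
  return x;
}

static void
bond_tx_assign (buf_t **bufs, uint32_t n_bufs,
                const uint32_t *slave_sw_if_index, uint32_t n_slaves)
{
  for (uint32_t i = 0; i < n_bufs; i++)
    {
      /* step 1: hash the header; step 3: map the hash to a slave port */
      uint32_t h = hash_l2 (bufs[i]->header, sizeof (bufs[i]->header));
      /* step 4: stamp the chosen slave's sw_if_index into the buffer */
      bufs[i]->sw_if_index_tx = slave_sw_if_index[h % n_slaves];
    }
}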
|
|
Change-Id: I2779626d745badb63386efcf729da7a094a4f297
Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
|
|
Change-Id: I8691a10493d159a97574550c111f07722960a7cd
Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
|
|
Change-Id: Ife2a83b9d7f733f36e0e786ef79edcd394d7c0f9
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
This fixes the L2BD and IP4 test case failures.
Fixes VPP-1432, VPP-1428, VPP-1430
Change-Id: I48b5c961bab60cc3b39fcd6db47e098c81579480
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
|
|
only if used (VPP-1429)
Change-Id: I8afa57ecca590698d3430746968aa0a5b0070469
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ic6823fb617ecae547a5f0e28b1e037848e40f682
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Currently, three variants are available on aarch64: qdf24xx, thunderx2t99,
and cortex-a72.
-DCLIB_N_PREFETCHES is passed to the source code to select the dual- or
quad-loop implementation. In addition, different compiler options are
applied to these critical functions.
gcc-7.3.0 reports an ICE (internal compiler error) with -mtune=thunderx2t99,
so -mtune=thunderx2t99 is enabled only when the gcc version is greater than
7.3.0.
Cavium ThunderX2, Implementer 0x43, Part 0x0af
-march=armv8-a+crc+crypto -mtune=thunderx2t99
Qualcomm Centriq 2400, Implementer 0x51, Part 0xc00
-march=armv8.1-a+crc+crypto -mtune=qdf24xx
Cortex-A72, Implementer 0x41, Part 0xd08
-march=armv8-a+crc+crypto -mtune=cortex-a72
Change-Id: Id5649c6325c1e642d0fd42535e3908793b13e02a
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
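An illustrative sketch of how a compile-time -DCLIB_N_PREFETCHES=<n> value
can select between dual- and quad-loop variants; the threshold and code
structure here are assumptions, not the actual VPP sources:

#ifndef CLIB_N_PREFETCHES
#define CLIB_N_PREFETCHES 4
#endif

static void
process_packets (int *pkts, int n)
{
  int i = 0;
#if CLIB_N_PREFETCHES >= 8
  for (; i + 4 <= n; i += 4)      /* quad-loop: 4 packets per iteration */
    pkts[i] = pkts[i + 1] = pkts[i + 2] = pkts[i + 3] = 0;
#else
  for (; i + 2 <= n; i += 2)      /* dual-loop: 2 packets per iteration */
    pkts[i] = pkts[i + 1] = 0;
#endif
  for (; i < n; i++)              /* remainder */
    pkts[i] = 0;
}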
|
|
This is a new commit for code under a different maintainer.
Change-Id: I79fa403fec6a312238a9a4b18b35dbcafaa05439
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Change-Id: Ifa4fceef7edbe43d444790a624957db0817064de
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Call BV (value_free) once the rehash has been performed and the memory
that the bucket's old value refers to is no longer needed.
Change-Id: Ibb82174fc8002aeb3e1a6c8d1f90293d73bc45d8
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Commit ffb14b9554afa1e58c3657e0c91dda3135008274 changed the semantics of
alloc_arena_next to be an offset from alloc_arena, but the available-memory
check in BV (alloc_aligned) still treats it as a virtual address. As a
result the check always succeeds, so over a prolonged period the bihash
arena allocator can overwrite whatever follows the arena.
Change-Id: I18882c5f340ca767a389e15cca2696a0a97ef015
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
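A hedged sketch of the corrected availability check: since alloc_arena_next
is an offset from alloc_arena, it is compared against the arena size rather
than against a virtual address. The field names come from this message; the
surrounding structure is illustrative, not the bihash template code:

#include <stdint.h>

static void *
alloc_aligned_sketch (uint64_t alloc_arena, uint64_t *alloc_arena_next,
                      uint64_t alloc_arena_size, uint64_t nbytes,
                      uint64_t align)
{
  /* round the current offset up to the requested alignment */
  uint64_t off = (*alloc_arena_next + align - 1) & ~(align - 1);
  /* compare the offset against the arena size, not against an address */
  if (off + nbytes > alloc_arena_size)
    return 0;                       /* out of arena memory */
  *alloc_arena_next = off + nbytes;
  return (void *) (uintptr_t) (alloc_arena + off);
}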
|
|
Instead of relying on the main epoll loop to send messages, try to send
them as soon as possible.
Change-Id: I27c0b4076f3599ad6e968df4746881a6717d4299
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: Ia4c79d560bfa1118d4683a89a1209a08c5f546b3
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This is the high version of extendto. It accomplishes the same task as the
combined shuffle-and-extend done with SSE intrinsics, and it enables the
NEON version of buffer-index to buffer-pointer translation.
Change-Id: I52d7bbf3d76ba69c9acb0e518ff4bc6abf3bbbd4
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Yi He <yi.he@arm.com>
Verified-by: Lijian Zhang <lijian.zhang@arm.com>
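A minimal AArch64 sketch of a "high" widen of this kind; the name and exact
form are assumptions, not the vector_neon.h definition:

#include <arm_neon.h>

/* widen the upper two u32 lanes (lanes 2 and 3) to u64 */
static inline uint64x2_t
u32x4_extend_to_u64x2_high_sketch (uint32x4_t v)
{
  return vmovl_high_u32 (v);
}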
|
|
This patch makes 32/64 bit interoperable shared memory bihash tables
work regardless of where they're mapped.
Change-Id: If5b4a37ccdaa75410eba755c7d7195633de1b30b
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Use the rev16 vector intrinsic to reverse the byte order of each word
independently.
Change-Id: I071c40780baffe0bda614ec5d9dd92858f574b0d
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Brian Brooks <brian.brooks@arm.com>
Reviewed-by: Yi He <yi.he@arm.com>
Verified-by: Lijian Zhang <lijian.zhang@arm.com>
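A minimal NEON sketch of a per-word byte swap built on rev16; the helper
name is illustrative and may differ from the VPP one:

#include <arm_neon.h>

/* swap the two bytes of every 16-bit element */
static inline uint16x8_t
u16x8_byte_swap_sketch (uint16x8_t v)
{
  return vreinterpretq_u16_u8 (vrev16q_u8 (vreinterpretq_u8_u16 (v)));
}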
|
|
This is used in vlib_get_buffers_with_offset.
Change-Id: If4ff776bc97d21a22e870300b164eeb6a5ec3638
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Brian Brooks <brian.brooks@arm.com>
Reviewed-by: Yi He <yi.he@arm.com>
Verified-by: Lijian Zhang <lijian.zhang@arm.com>
|
|
Add the NEON equivalent of u32x4_hadd for CLIB_HAVE_VEC128.
Change-Id: I210f96f7ecb9b80b4753311a68e5e09ccda7e95b
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Brian Brooks <brian.brooks@arm.com>
Reviewed-by: Yi He <yi.he@arm.com>
Verified-by: Lijian Zhang <lijian.zhang@arm.com>
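A minimal AArch64 sketch of such a horizontal add; the name is illustrative,
not the VPP definition:

#include <arm_neon.h>

/* pairwise add: { a0+a1, a2+a3, b0+b1, b2+b3 }, the same lane order as
 * SSE3 _mm_hadd_epi32 */
static inline uint32x4_t
u32x4_hadd_sketch (uint32x4_t a, uint32x4_t b)
{
  return vpaddq_u32 (a, b);
}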
|
|
clib_time_verify_frequency(...) rejects clock frequency changes
greater than 1%.
vlib_worker_thread_barrier_sync_int (...) continuously checks that the
barrier hold-down timer is not unreasonably far in the future.
Change-Id: I00ecb4c20e44de5d6a9c1499fa933e3dd834d11a
Signed-off-by: Dave Barach <dbarach@cisco.com>
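A tiny sketch of a 1% frequency-change tolerance check of the kind described
above; illustrative only, not the clib_time code:

/* reject clock-frequency changes greater than 1% of the previous value */
static inline int
frequency_change_ok (double old_hz, double new_hz)
{
  double delta = new_hz > old_hz ? new_hz - old_hz : old_hz - new_hz;
  return delta <= 0.01 * old_hz;
}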
|
|
Change-Id: Ifcc2717efd242ae2016563d6f3e5cd57fe161e00
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Add missing calls to clib_mem_init to the vppinfra test code.
Change-Id: I53ffc6fc287d1a378065bb86c18b6e995ecdb775
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I9a0df8d15deefdf31cfead56c96433cd7220b802
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Make sure that the vpp_get_stats main heap does not address-collide with
the stats segment, which lands "somewhere" in the vpp address space.
Add the missing MAP_ANONYMOUS flag in clib_mem_vm_map.
Change-Id: I8a671d174eefd8dd24771ad2ed9f1250e2c7a9f8
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Change-Id: I40332c2348c4aab873d726532f2ac3c4abde7ec9
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Move the binary API segment above 4 GB.
Change-Id: I40e8aa7a97722a32397f5a538b5ff8344c50d408
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I1382021a6f616571b4b3243ba8c8999239d10815
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Also: don't #include /usr/include/malloc.h in dlmalloc.h
Change-Id: Ic73ff8862cc8aba371488b912255e28dd96374ff
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ib8dc37b1a39c92a0c7b22cebdf985c6afa8229d9
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I1f0aae16e4ace850d7d79b9c2c644a3e0d002636
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Preferably without mistaking -pie (address randomized) segment
addresses for heap objects.
Change-Id: Idca6b966f14b1caf6b4637843fe407dbc5017535
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Applications such as NAT that dynamically create entries require those
entries to expire after some time. The bihash user can now lazily delete
expired entries: when inserting into a full bucket, an expired entry is
overwritten.
Change-Id: I6852305df399b546159407f1729c856afde5a634
Signed-off-by: Matus Fabian <matfabia@cisco.com>
|
|
The flowhash user can now rely on the table being initialized to zero and
can tell when an entry is cleaned up by the garbage collector.
This is useful for storing state in flowhash entries without needing
callbacks when an entry times out.
Change-Id: Ieece6b7277241f84ea3f2473d0771c6ee8ce460c
Signed-off-by: Pierre Pfister <ppfister@cisco.com>
|