|
Introduce AddressSanitizer support: https://github.com/google/sanitizers/
This starts with heap instrumentation. vlib_buffer, bihash and stack
instrumentation should follow.
Type: feature
Change-Id: I7f20e235b2f79db72efd0e756f22c75f717a9884
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Valgrind never really worked well with VPP. Remove the partial support.
Type: refactor
Change-Id: Ic09773fd85f904fdd2240bc161e23a4c2b196cf6
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Set the address to MAP_FAILED if mmap() has failed, so that we do not
attempt to free it in the error-handling path.
Change-Id: I6e6b51a365fb68086dc20aa40a676a36af59a3ba
Type: fix
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Type: feature
Change-Id: I5bbf37969c9c51e40a013d1fc3ab966838eeb80d
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: refactor
Change-Id: Ic1c3e1f7987702cd88972acc34849dc1f585d5fe
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Coverity gets confused by ASSERT((l) <= vec_max_len(v)) when l is 0.
Type: fix
Change-Id: I247d7015b148233d8f195bcf41e9a047b7a21309
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
IPsec zeroes all allocated key memory, including memory over-allocated by
the allocator.
Move this into its own function in the clib mem infrastructure to make it
easier to instrument.
Type: refactor
Change-Id: Icd1c44d18b741e723864abce75ac93e2eff74b61
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
The l2-flood and bier nodes reset the vector length without updating it to
its effective size. Introduce a helper to do this, which allows ASAN to
keep track of the new vector size.
Type: refactor
Change-Id: I2d652550c440f0553a2b49c3ee3d37b49ebc16c3
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Fix a day-1 bug, possibly dating back as far as 2002. The zap64() game
involves fetching 8-byte chunks and clearing octets not to be included in
the key.
That's fine *unless* the 8-byte fetch happens to cross a page boundary
into unmapped or no-access space.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I4607e9840032257c96ba7387f86c931c0921749d
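A minimal sketch of the masking idea (illustrative only, not the actual
zap64() code), assuming little-endian byte order:
  #include <stdint.h>

  /* Keep only the low n_valid bytes of a little-endian 64-bit word. */
  static inline uint64_t
  zap64_sketch (uint64_t x, int n_valid) /* 1 <= n_valid <= 8 */
  {
    return x & (~0ULL >> (8 * (8 - n_valid)));
  }

  /* The bug: the unconditional 8-byte load performed *before* masking can
   * cross a page boundary into unmapped memory, even though the masked-off
   * bytes are never used. */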
|
|
Type: feature
Signed-off-by: MathiasRaoul <mathias.raoul@gmail.com>
Change-Id: I8d71078a9ed42326e19453ea10008c6bb6992c52
|
|
Define CLIB_PAUSE () to generate the "yield" instruction. No significant
performance changes were observed for clib_spinlock_t and clib_rwlock_t.
Type: feature
Change-Id: I59eb996e61c7a16007517e57e6996567302c1657
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
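A sketch of what such a pause/yield hint can look like; the actual
CLIB_PAUSE () definition in vppinfra may differ:
  #if defined (__aarch64__)
  #define CLIB_PAUSE() __asm__ __volatile__ ("yield")
  #elif defined (__x86_64__)
  #include <x86intrin.h>                /* _mm_pause */
  #define CLIB_PAUSE() _mm_pause ()
  #else
  #define CLIB_PAUSE()
  #endif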
|
|
Type: feature
Change-Id: I8f5f4841965beb13ebc8c2a37ce0dc331c920109
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
In some cases a buffer may be allocated by DPDK and freed by VPP native
code. In such cases the DPDK metadata is not reset, so we need to do that
during mempool dequeue. A template approach is taken to reduce the cost of
that operation.
Type: fix
Fixes: 910d369
Change-Id: Ic239007cfc8fbceb965021c56963cda9d53f63be
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Add controls to list or not list a specific bihash in clib_all_bihashes,
and to immediately initialize a bihash.
clib_bihash_init2 is now the primary API; it takes a typical args_t
structure. clib_bihash_init becomes a compatibility widget: it fabricates
an args_t and calls init2.
Type: refactor
Ticket: VPP-1758
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ib3e1304884997cf7025af20bdc67a7dda290f15b
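The compatibility-widget pattern in a generic sketch (field and function
names here are illustrative, not the exact clib_bihash args_t layout):
  typedef struct
  {
    void *h;                        /* table to initialize */
    char *name;
    unsigned int nbuckets;
    unsigned long memory_size;
    int instantiate_immediately;    /* illustrative flag names */
    int dont_add_to_all_bihash_list;
  } example_init_args_t;

  void example_init2 (example_init_args_t *a); /* the primary API */

  /* The compatibility widget: fabricate an args_t and call init2. */
  void
  example_init (void *h, char *name, unsigned int nbuckets,
                unsigned long memory_size)
  {
    example_init_args_t a = { .h = h, .name = name,
                              .nbuckets = nbuckets,
                              .memory_size = memory_size };
    example_init2 (&a);
  }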
|
|
When using the pcap CLI to capture packets into two files, rx01.pcap and
rx02.pcap:
the first time:
1) pcap rx trace on max 100 intfc any file rx01.pcap
2) ... capture data into the buffer ...
3) pcap rx trace off
the second time:
4) pcap rx trace on max 100 intfc any file rx02.pcap
5) ... capture data into the buffer ...
6) pcap rx trace off
The bug in pcap_write() is in these two lines:
pm->n_packets_captured = 0;
if (pm->n_packets_captured >= pm->n_packets_to_capture) // decides whether to call pcap_close()
It results in both CLI runs writing their packets into rx01.pcap and
nothing into rx02.pcap; the rx02.pcap file is not even created.
Solution: separate pcap_close() out of pcap_write().
Change-Id: Iedeb46f9cf0a4cb12449fd75a4014f95f3bb3fa8
Signed-off-by: Jack Xu <jack.c.xu@ericsson.com>
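A sketch of the resulting control flow (illustrative only; pm is the
pcap_main_t in question and error handling is omitted):
  /* capture / periodic flush path: write buffered packets, keep the
   * current file open */
  pcap_write (pm);

  /* "pcap rx trace off" path: explicitly close the current file so a
   * later "pcap rx trace on ... file rx02.pcap" can create a new one */
  pcap_close (pm);
  pm->n_packets_captured = 0;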
|
|
- Allow "Microarch model(family)" row to show PASS
revison as either string (like A0, B0) or number (like
1.0, 2.0).
- Fix part number for Marvell CN96XX
Change-Id: Ie01a3960c4e5e481be354dc8bb60f744e5c65737
Signed-off-by: Nitin Saxena <nsaxena@marvell.com>
|
|
Type: feature
This is needed when creating pthreads in client applications: they need a
way to set __os_thread_index per thread that does not conflict with the
binary API thread index.
If __os_thread_index is left at 0 in two client pthreads and they call
vl_msg_api_alloc and vec_resize at the same time, this can fail because
they share (and push/pop) the same clib_per_cpu_mheaps slot.
Change-Id: I85d4248a39b641a4d3ad5a1c1bd6e0db5875fab6
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
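A sketch of how a client pthread might claim its own index; this is a
hedged illustration assuming __os_thread_index is reachable via
<vppinfra/os.h>, and the surrounding registration logic is omitted:
  #include <pthread.h>
  #include <vppinfra/os.h>   /* declares __thread uword __os_thread_index */

  static void *
  client_thread_fn (void *arg)
  {
    /* give this pthread a unique, non-zero index so it does not share
     * (and push/pop) the same clib_per_cpu_mheaps slot as other client
     * pthreads */
    __os_thread_index = (uword) arg;
    /* ... vl_msg_api_alloc () / vec_resize () are now safe here ... */
    return 0;
  }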
|
|
Provide a default packet_to_capture value. Display the interface name
correctly for "pcap tx/rx trace status".
Type: fix
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: I7064d0dbea236a9aff68bba7fbaf2c4a73b16c6f
|
|
Type: fix
Change-Id: I67b72b5ad03b972198c27bc0d927867f41b0e20b
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
The previous implementation of clib_rwlock_t used two spinlocks: one
writer lock, and one to guard the counter for the number of readers.
This implementation uses a single counter, rw_cnt, with the following
properties:
- if a writer holds the rwlock, rw_cnt = -1
- if the rwlock is free, rw_cnt = 0
- otherwise, rw_cnt > 0 and equals the number of readers
rw_cnt never drops below -1.
Benchmarking:
The results below are the cycle counts from test_rwlock.c, configured so
that for 10000 iterations, 6 reader and 6 writer threads on separate cores
are spawned such that each writer thread increments a global counter
10000 times in each iteration. For Taishan, 4 reader and 4 writer
threads are spawned in each test.
x86 Xeon old rwlock: 12.473e8, 11.655e8, 13.201e8, 11.347e8, 13.182e8
x86 Xeon new rwlock: 5.881e8, 5.796e8, 6.536e8, 5.540e8, 5.890e8
Aarch64 ThX2* old rwlock: 9.263e7, 8.933e7, 9.074e7, 8.979e7, 9.378e7
Aarch64 ThX2* new rwlock: 7.221e7, 8.107e7, 7.515e7, 7.672e7, 7.386e7
A72 old rwlock: 3.268e6, 3.200e6, 3.086e6, 3.176e6, 3.170e6
A72 new rwlock: 1.261e6, 1.288e6, 1.251e6, 1.229e6, 1.234e6
*ThunderX2 used additional gcc options "-march=armv8.1-a+crc+crypto+lse"
Type: refactor
Change-Id: I7c347d3037b36205ab532cbcb52a374c846eb275
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
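A hedged sketch of the single-counter scheme described above, using GCC
atomic builtins; not the actual clib_rwlock_t code:
  #include <stdint.h>

  typedef struct { volatile int32_t rw_cnt; } rw_sketch_t;

  static inline void
  rw_reader_lock (rw_sketch_t *rw)
  {
    for (;;)
      {
        int32_t cnt = __atomic_load_n (&rw->rw_cnt, __ATOMIC_RELAXED);
        if (cnt >= 0 && /* no writer present */
            __atomic_compare_exchange_n (&rw->rw_cnt, &cnt, cnt + 1, 0,
                                         __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
          return;
      }
  }

  static inline void
  rw_reader_unlock (rw_sketch_t *rw)
  {
    __atomic_fetch_sub (&rw->rw_cnt, 1, __ATOMIC_RELEASE);
  }

  static inline void
  rw_writer_lock (rw_sketch_t *rw)
  {
    int32_t expected = 0;
    while (!__atomic_compare_exchange_n (&rw->rw_cnt, &expected, -1, 0,
                                         __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
      expected = 0; /* busy: readers (> 0) or another writer (-1) */
  }

  static inline void
  rw_writer_unlock (rw_sketch_t *rw)
  {
    __atomic_store_n (&rw->rw_cnt, 0, __ATOMIC_RELEASE);
  }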
|
|
"timer.[ch]" used a signal handler to deliver timer callbacks. Without
indulging in a set of sigprocmask(...) system calls, it would be
unsafe to use the mechanism.
Rather than wait for another developer to accidentally open this
particular can of worms, best to remove the code. It's nothing more
than an attractive nuisance at this point.
Type: refactor
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ia3e7b00a389c302b466605dff0c1bf3566b8dbbd
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ie37ff66faba79e3b8f46c7a704137f9ef2acc773
|
|
Tested the performance of a CAS implementation (using
__atomic_compare_exchange) against a TAS implementation (using
__atomic_exchange) with test_spinlock.c and found some performance
improvement.
The generated assembly shows that TAS always performs both the load and
the store, while CAS places a branch between them so that the store is
only attempted when the load sees the lock free; contended spinning
therefore performs only loads.
Benchmarking:
The results below are the cycle counts from test_spinlock.c, configured so
that for 10000 iterations, 12 threads on separate cores are spawned, each of
which increments a global counter 10000 times in each iteration. For
A72, 8 threads are spawned in each test.
x86 Xeon TAS: 7.333e8, 7.605e8, 7.535e8, 7.485e8, 7.321e8
x86 Xeon CAS: 5.842e8, 5.433e8, 5.389e8, 5.983e8, 5.552e8
Aarch64 ThX2* TAS: 9.852e7, 10.209e7, 9.190e7, 9.600e7, 9.224e7
Aarch64 ThX2* CAS: 7.640e7, 7.486e7, 7.425e7, 7.269e7, 7.534e7
A72 TAS: 7.289e6, 6.963e6, 7.208e6, 6.976e6, 7.200e6
A72 CAS: 1.695e6, 1.608e6, 1.600e6, 1.634e6, 1.746e6
*ThunderX2 used additional gcc options "-march=armv8.1-a+crc+crypto+lse"
Type: refactor
Change-Id: Ic5cd97991804f6b012707fad1a5d1a6edb96cd3d
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
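A hedged sketch of the two acquire loops being compared (GCC builtins; not
the actual clib_spinlock_t code):
  #include <stdint.h>

  static inline void
  lock_tas (volatile uint32_t *lock)
  {
    /* test-and-set: every spin iteration performs a store, even while
     * the lock is held by someone else */
    while (__atomic_exchange_n (lock, 1, __ATOMIC_ACQUIRE))
      ; /* CLIB_PAUSE () would go here, per the CLIB_PAUSE () commit above */
  }

  static inline void
  lock_cas (volatile uint32_t *lock)
  {
    /* compare-and-swap: the store is only attempted when the load sees
     * the lock free, so contended spinning is mostly loads */
    uint32_t expected = 0;
    while (!__atomic_compare_exchange_n (lock, &expected, 1, 0,
                                         __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
      expected = 0;
  }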
|
|
Spawns a uniform number of writer and reader threads across a number of
cores where each writer thread increments a global variable a specified
number of times, and the reader threads continually poll the global's
value until the writers complete.
Type: test
Change-Id: I979c3734c6d03139d0802bff1846875d226f6fbb
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
Spinlock performance improved when implemented with compare_and_exchange
instead of test_and_set. All instances of test_and_set locks were
refactored to use clib_spinlock_t where possible. Some locks, e.g. in
ssvm, synchronize between processes rather than threads, so they cannot
directly use clib_spinlock_t.
Type: refactor
Change-Id: Ia16b5d4cd49209b2b57b8df6c94615c28b11bb60
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
Spawns a uniform number of threads across a number of cores where each
thread increments a global variable a specified number of times.
Type: test
Change-Id: I12b3a37708a199c297d022348d99dbb0e8349a9f
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
All instances of test_and_set locks used the following sequence
to release the locks:
CLIB_MEMORY_BARRIER ();
p->lock = 0; // p is a generic struct with a TAS lock
Use clib_atomic_release to generate more efficient assembly code.
Type: refactor
Change-Id: Idca3a38b1cf43578108bdd1afe83b6ebc17a4c68
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
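In other words (hedged sketch; clib_atomic_release is expected to boil
down to a release store, and p->lock is the TAS lock word as above):
  /* before: full barrier followed by a plain store */
  CLIB_MEMORY_BARRIER ();
  p->lock = 0;

  /* after: a single store with release ordering */
  clib_atomic_release (&p->lock); // e.g. __atomic_store_n (&p->lock, 0, __ATOMIC_RELEASE)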
|
|
Modified test-and-set spin locks to call CLIB_PAUSE () when spinning, for
code consistency. This decreases the memory bandwidth consumed.
Type: fix
Change-Id: I1cca4f87f44f23f257c7a35466cd2e7767072f51
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
Type: feature
Change-Id: I5e030b23943c012d8191ff657165055d33ec87a2
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Type: fix
Ticket: VPP-1649
Change-Id: Ief77ec8d5f06bfcc63af6454c4cd9979cf0ab49d
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Type: feature
Change-Id: Ic720d56a6f8901efde2a58519bc9aa553205a9a6
Signed-off-by: Gary Boon <gboon@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
The OOM check must consider the end of the allocated arena, not the start,
when checking for overflow.
Type: fix
Change-Id: Ie83e653d0894199d2fa433a604a0fe0cee142338
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
|
|
Type: refactor
Change-Id: I24159e0a848f552b4e27acfb5fe6f2cd91b50a19
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
The elog string hashtable uses strlen() to determine the string length for
hashing, so strings must be NUL-terminated for both inserts and lookups.
Type: fix
Fixes: 9c8ca8dd3197e40dfcb8bcecd95c10eeb56239ed
Change-Id: I0680d39a9b89411055fd6adc89c9f253adfae32c
Signed-off-by: Benoît Ganne <bganne@cisco.com>
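A small illustration of the constraint, as a hedged fragment with the
vppinfra hash API (string_hash and value stand in for the real elog
fields):
  /* a vppinfra vector is not NUL-terminated by default, so append the
   * terminator before using it as a key in a strlen()-hashed table */
  u8 *key = format (0, "%v%c", vec_string, 0);
  hash_set_mem (string_hash, key, value);

  /* C string literals already carry the terminator */
  uword *p = hash_get_mem (string_hash, "event-name");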
|
|
Type: feature
Change-Id: I21511c1abea703da67f1a491e73342496275c498
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
If is_add=2, fail with return value -2 if the key exists, instead of
overwriting the (key,value) pair.
Type: feature
Change-Id: I00a3c194a381c68090369c31d6c6f9870cfe0a62
Signed-off-by: Dave Barach <dave@barachs.net>
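A usage sketch with the 8_8 bihash template (hedged fragment; table, key
and value are placeholders, and the template name varies with key size):
  clib_bihash_kv_8_8_t kv = { .key = key, .value = value };

  /* is_add = 2: add, but refuse to overwrite an existing entry */
  int rv = clib_bihash_add_del_8_8 (&table, &kv, 2);
  if (rv == -2)
    ; /* key already present, existing (key,value) pair left intact */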
|
|
Reduces the VPP image virtual size by multiple gigabytes.
Add a "show bihash" command which displays the configured and current
virtual space in use by bihash tables.
Modify the Python test framework to call "show bihash" on test tear-down.
Type: refactor
Change-Id: Ifc1b7e2c43d29bbef645f6802fa29ff8ef09940c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
New Coverity toolset, new set of squawks to fix.
Ticket: VPP-1649
Type: fix
Change-Id: I2a7e4c42b101c6c79c01b150b2523ce3d5d62354
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Makes life easier for binary API language bindings
Type: fix
Change-Id: Ib459274fda05153d01cbb7bc328a8407e3ee5027
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Add u64x2_scatter/u32x4_scatter to vector_neon.h. These functions scatter
data from a SIMD register to scattered memory locations.
Type: feature
Change-Id: I298d5478c7ba6935ab7402a6d467c7ee00f17e9f
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
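A hedged sketch of the u64x2 variant using standard Neon intrinsics (the
vector_neon.h helpers may be coded differently):
  #include <arm_neon.h>
  #include <stdint.h>

  static inline void
  u64x2_scatter_sketch (uint64x2_t r, void *p0, void *p1)
  {
    /* write each lane to its own, possibly unrelated, address */
    *(uint64_t *) p0 = vgetq_lane_u64 (r, 0);
    *(uint64_t *) p1 = vgetq_lane_u64 (r, 1);
  }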
|
|
When only the fast wheel is in use, the next expiring timer has to be
within the fast_slot_bitmap.
With multiple wheels, the next expiring timer could be in the slow wheel.
Timers on the slow wheel are only moved into the fast wheel when the fast
wheel index reaches TW_SLOTS_PER_RING. When calculating the next expiring
timer, we therefore need to consider the timers on the slow wheel as well.
When there are no more timers before reaching TW_SLOTS_PER_RING, instead
of scanning the slow wheel, return the number of ticks until
TW_SLOTS_PER_RING is reached.
Type: fix
Change-Id: I847031f8efc015c888d082f0b0c1bd500aa65704
Signed-off-by: Andreas Schultz <andreas.schultz@travelping.com>
Signed-off-by: Dave Barach <dave@barachs.net>
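Roughly, the described behaviour amounts to the following fragment
(illustrative only, not the actual tw_timer template code; tw and
fast_wheel_index are placeholders):
  /* look for the next occupied fast-wheel slot before the wrap point */
  u32 slot;
  for (slot = fast_wheel_index; slot < TW_SLOTS_PER_RING; slot++)
    if (clib_bitmap_get (tw->fast_slot_bitmap, slot))
      return slot - fast_wheel_index;   /* ticks until the next fast timer */

  /* nothing left before the wrap: pending timers, if any, sit in the
   * slow wheel and are only demuxed into the fast wheel at the wrap,
   * so report the distance to TW_SLOTS_PER_RING instead of scanning */
  return TW_SLOTS_PER_RING - fast_wheel_index;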
|
|
Add u64x2_gather/u32x4_gather to vector_neon.h. These functions gather
data from scattered memory locations into a SIMD register.
Type: feature
Change-Id: I1dd27e38af28b9bed85143014c86197ee549fede
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
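Correspondingly, a hedged sketch of the u64x2 gather (again, not
necessarily how vector_neon.h implements it):
  #include <arm_neon.h>
  #include <stdint.h>

  static inline uint64x2_t
  u64x2_gather_sketch (void *p0, void *p1)
  {
    /* load lane 0 from p0 and lane 1 from p1 */
    uint64x2_t r = vdupq_n_u64 (*(uint64_t *) p0);
    return vsetq_lane_u64 (*(uint64_t *) p1, r, 1);
  }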
|
|
Type: fix
Make sure tnil color is black and that the right node colors are
updated.
Change-Id: Ibd9d7ea9438df4dab977202955957824723a865d
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Type: feature
Add support for insert/search/del with custom compare function.
Change-Id: Ibb740afc224d8adc29d3e1b51b46cdd738d1bd93
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
This fixes two leaks in registering errors in the stats segment:
- The error name created by vlib_register_errors() was not freed.
- Duplicate error names (when an interface was re-added) were added to the
  vector.
This fix also adds memory usage statistics for the statistics segment as
/mem/statseg/{used, total}.
Change-Id: Ife98d5fc5baef5bdae426a5a1eef428af2b9ab8a
Type: fix
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Type: feature
Change-Id: I53e1f05b2b048925fca3b2f6b0499ff9c3e6ee12
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: feature
Change-Id: Ie020fd7e2618284a63efbeb9895068f27c0fb9ab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Add a string hash to make sure that strings in the string table are
unique. This optimization has been coded piecemeal in multiple places; we
should have made the underlying function do the work years ago.
Type: fix
Change-Id: I5010fd4926b9b80ce3a168748f6de64e333ef498
Signed-off-by: Dave Barach <dave@barachs.net>
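The underlying pattern is plain string interning; a hedged fragment with
the vppinfra hash API (add_string_to_table below is a hypothetical helper):
  uword *p = hash_get_mem (string_hash, name);
  if (p)
    return p[0];                          /* reuse the existing offset */

  offset = add_string_to_table (name);    /* hypothetical helper */
  hash_set_mem (string_hash, name, offset);
  return offset;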
|
|
__sync_lock_release switched to __atomic_store for code consistency,
although both generate the same instructions with current compilers.
Change-Id: I37d320509e43a4c2b8a49af6346dc4a43ca2f535
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
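That is (hedged fragment; p->lock is a TAS lock word):
  /* before */
  __sync_lock_release (&p->lock);

  /* after: same generated code today, but consistent with the rest of
   * the __atomic-based helpers */
  __atomic_store_n (&p->lock, 0, __ATOMIC_RELEASE);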
|
|
__sync_lock_test_and_set uses full memory barriers on AArch64, whereas
__atomic_exchange with ACQUIRE ordering uses a load-acquire.
Change-Id: Ifdf2481db3b9dde6c5842d75671402862adb6d81
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
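That is (hedged fragment of the lock-acquire path; p->lock is a TAS lock
word):
  /* before: full barrier semantics on AArch64 */
  while (__sync_lock_test_and_set (&p->lock, 1))
    ;

  /* after: acquire ordering, allowing a load-acquire exchange on AArch64 */
  while (__atomic_exchange_n (&p->lock, 1, __ATOMIC_ACQUIRE))
    ;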
|