path: root/src/vppinfra/memcpy_sse3.h
2021-10-12  vppinfra: use unaligned non-vector load/stores in x86 memcpy  (Damjan Marion, 1 file, -7/+7)
Type: fix
Change-Id: I54ef23a52f05cc95210a736f84b927dd69b8a6f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
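The idea, roughly: on x86, unaligned scalar loads/stores cost the same as aligned ones, so small copies can go through plain integer moves instead of vector registers. A minimal sketch of that technique (helper names here are hypothetical, not VPP's actual code):

    #include <stdint.h>

    /* Unaligned 8-byte load/store done through __builtin_memcpy; the
     * compiler lowers each call to a single mov on x86, with no
     * alignment requirement and no vector registers involved. */
    static inline uint64_t
    load_u64u (const void *p)
    {
      uint64_t v;
      __builtin_memcpy (&v, p, sizeof (v));
      return v;
    }

    static inline void
    store_u64u (void *p, uint64_t v)
    {
      __builtin_memcpy (p, &v, sizeof (v));
    }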
2021-02-15  vppinfra: fix memcpy undefined behaviour  (Benoît Ganne, 1 file, -1/+1)
Calling mem{cpy,move} with NULL pointers results in undefined behaviour.
This in turn is exploited by GCC. For example, the sequence:

    memcpy (dst, src, n);
    if (!src)
      return;
    src[0] = 0xcafe;

will be optimized as:

    memcpy (dst, src, n);
    src[0] = 0xcafe;

IOW the test for NULL is gone. vec_*() functions sometimes call memcpy
with NULL pointers and 0 length, triggering this optimization. For
example, the sequence:

    vec_append(v1, v2);
    len = vec_len(v2);

will crash if v2 is NULL, because the test for the NULL pointer in
vec_len() has been optimized out. This commit fixes occurrences of such
undefined behaviour and also introduces a memcpy wrapper to catch those
in debug mode.
Type: fix
Change-Id: I175e2dd726a883f97cf7de3b15f66d4b237ddefd
Signed-off-by: Benoît Ganne <bganne@cisco.com>
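A minimal sketch of such a debug-mode wrapper (the name and exact checks are illustrative, not VPP's actual clib code):

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Illustrative debug wrapper: trap NULL pointers before the call,
     * since after memcpy (dst, src, n) the compiler may assume both
     * pointers are non-NULL and delete later NULL checks. */
    static inline void *
    debug_memcpy (void *dst, const void *src, size_t n)
    {
      assert (n == 0 || (dst != NULL && src != NULL));
      if (n == 0)
        return dst;   /* never pass NULL to memcpy, even with length 0 */
      return memcpy (dst, src, n);
    }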
2020-04-27  vppinfra: selectively disable false-positive GCC-10 warnings  (Benoît Ganne, 1 file, -0/+8)
GCC-10 increases overflow-related warnings but is confused by SIMD
operations.
Type: fix
Change-Id: Iafde754c2fbec60e2d0a328f295b1f5c156d8234
Signed-off-by: Benoît Ganne <bganne@cisco.com>
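Selective disabling is typically done with diagnostic pragmas around just the offending region; a sketch, assuming "-Wstringop-overflow" is the false positive being silenced (the specific warning here is an illustrative choice):

    /* Silence one specific warning for one region only, and only on
     * GCC >= 10, leaving it enabled everywhere else. */
    #if defined(__GNUC__) && __GNUC__ >= 10
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wstringop-overflow"
    #endif

    /* ... SIMD copy code that triggers the false positive ... */

    #if defined(__GNUC__) && __GNUC__ >= 10
    #pragma GCC diagnostic pop
    #endif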
2018-11-14  Remove c-11 memcpy checks from perf-critical code  (Dave Barach, 1 file, -1/+1)
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
2018-10-23  c11 safe string handling support  (Dave Barach, 1 file, -1/+1)
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
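The C11 (Annex K) flavour adds pointer and bounds checks before copying; a simplified sketch of memcpy_s()-style semantics (return codes and overlap handling abbreviated, function name hypothetical):

    #include <stddef.h>
    #include <string.h>

    /* Simplified memcpy_s()-style copy: reject NULL pointers and
     * copies larger than the destination before touching memory. */
    static inline int
    memcpy_s_sketch (void *dst, size_t dmax, const void *src, size_t n)
    {
      if (dst == NULL || src == NULL || n > dmax)
        return -1;          /* constraint violation */
      memcpy (dst, src, n); /* real memcpy_s also rejects overlap */
      return 0;
    }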
2017-12-14  vppinfra: add AVX512 variant of clib_memcpy  (Damjan Marion, 1 file, -29/+33)
Taken from DPDK; the AVX2 variant is also updated to be in sync with the
DPDK version.
Change-Id: I8a42e4141a5a1a8cfbee328b07bd0c9b38a9eb05
Signed-off-by: Damjan Marion <damarion@cisco.com>
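The AVX512 building block is a 64-byte copy per load/store pair; an illustrative sketch (function name hypothetical; requires an AVX512F-capable compiler and target, e.g. -mavx512f):

    #include <immintrin.h>
    #include <stdint.h>

    /* Copy 64 bytes with one unaligned AVX512 load/store pair. */
    static inline void
    copy64 (uint8_t *dst, const uint8_t *src)
    {
      __m512i r = _mm512_loadu_si512 ((const void *) src);
      _mm512_storeu_si512 ((void *) dst, r);
    }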
2017-07-06  vppinfra: revert clib_memcpy optimization  (Damjan Marion, 1 file, -5/+6)
It looks like some compiler versions produce wrong code when copying
9-16 bytes, so revert back to the original code.
Change-Id: I74b5fa54a3b01f6288648f1cb0926030edd3b26f
Signed-off-by: Damjan Marion <damarion@cisco.com>
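For reference, the usual way a 9-16 byte copy is optimized is with two overlapping 8-byte accesses; a sketch of that general pattern (not necessarily the exact code that was reverted):

    #include <stdint.h>

    /* Two overlapping 8-byte copies cover any n in [9, 16]: the first
     * handles bytes [0, 8), the second handles bytes [n-8, n). */
    static inline void
    copy9to16 (uint8_t *dst, const uint8_t *src, size_t n)
    {
      uint64_t head, tail;
      __builtin_memcpy (&head, src, 8);
      __builtin_memcpy (&tail, src + n - 8, 8);
      __builtin_memcpy (dst, &head, 8);
      __builtin_memcpy (dst + n - 8, &tail, 8);
    }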
2017-04-21  vppinfra: clib_memcpy improvement  (Ray Kinsella, 1 file, -5/+0)
In the case where n is a constant 16 bytes, the second load/store is
ignored by the load/store unit: it has negligible/zero cost. In the case
where n is variable and greater than 512 bytes, the extra if (n == 16)
branch has a very small performance impact.
Change-Id: I04b313cf022c18fee31b1d9bcf6a128414659a99
Signed-off-by: Ray Kinsella <ray.kinsella@intel.com>
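The pattern under discussion: a 16-32 byte copy done with two overlapping 16-byte SSE accesses, where for n == 16 both ranges coincide and the second store simply rewrites the same bytes. A sketch (SSE2 intrinsics; function name hypothetical):

    #include <emmintrin.h>
    #include <stdint.h>

    /* Two overlapping 16-byte copies cover any n in [16, 32]; when
     * n == 16 the second load/store duplicates the first one. */
    static inline void
    copy16to32 (uint8_t *dst, const uint8_t *src, size_t n)
    {
      __m128i head = _mm_loadu_si128 ((const __m128i *) src);
      __m128i tail = _mm_loadu_si128 ((const __m128i *) (src + n - 16));
      _mm_storeu_si128 ((__m128i *) dst, head);
      _mm_storeu_si128 ((__m128i *) (dst + n - 16), tail);
    }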
2017-03-01  vppinfra: fix issue when copying 16 bytes with clib_memcpy  (Damjan Marion, 1 file, -0/+5)
The current code was copying the same data twice when the length is 16.
Change-Id: I8d935b32f61672aaea9789c097a5083ae8f78cdd
Signed-off-by: Damjan Marion <damarion@cisco.com>
2016-12-28  Reorganize source tree to use single autotools instance  (Damjan Marion, 1 file, -0/+355)
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>