This is the first part of adding atomic macros, covering only macros for
the __sync builtins.
- Based on an earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- A clib_atomic_release macro is added and used where no memory barrier
  was previously present.
- clib_atomic_bool_cmp_and_swap is added.
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
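As a rough sketch of the pattern (the macro bodies here are assumptions,
not the exact vppinfra definitions, and clib_atomic_fetch_add /
clib_atomic_fetch_sub are illustrative names beyond the two in the
message), such macros typically wrap the GCC builtins one-to-one:

    /* Hedged sketch of __sync-wrapping atomic macros; the real
       definitions live in vppinfra and may differ. */
    #define clib_atomic_fetch_add(a, b) __sync_fetch_and_add (a, b)
    #define clib_atomic_fetch_sub(a, b) __sync_fetch_and_sub (a, b)

    /* Compare-and-swap returning a boolean success flag. */
    #define clib_atomic_bool_cmp_and_swap(addr, old, new) \
      __sync_bool_compare_and_swap (addr, old, new)

    /* Release-ordered store of 0, usable where a lock release
       previously had no explicit barrier (body is an assumption). */
    #define clib_atomic_release(a) __atomic_store_n (a, 0, __ATOMIC_RELEASE)

For example, clib_atomic_fetch_add (&counter, 1) would replace a bare
counter++ on a counter shared between threads.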
Object sizes must evenly divide alignment requests, or vice versa;
otherwise, only the first object will be aligned as requested.
There are three choices: add CLIB_CACHE_LINE_ALIGN_MARK(align_me) at
the end of the structure, manually pad the structure to an even divisor
or multiple of the alignment request, or use plain (unaligned)
vectors/pools.
A static assert enforces this.
Change-Id: I41aa6ff1a58267301d32aaf4b9cd24678ac1c147
Signed-off-by: Dave Barach <dbarach@cisco.com>
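To illustrate the first choice, here is a hedged sketch of the
end-of-structure mark; the macro body mirrors the usual vppinfra style
but is an assumption, and example_t is a made-up type:

    typedef unsigned char u8;
    typedef unsigned long long u64;

    /* Assumed definition, in the style of vppinfra's cache.h. */
    #define CLIB_CACHE_LINE_BYTES 64
    #define CLIB_CACHE_LINE_ALIGN_MARK(mark) \
      u8 mark[0] __attribute__ ((aligned (CLIB_CACHE_LINE_BYTES)))

    typedef struct
    {
      u64 counter;
      /* Placed last: rounds sizeof (example_t) up to a cache-line
         multiple, so every element in an aligned pool/vector keeps
         the requested alignment, not just the first. */
      CLIB_CACHE_LINE_ALIGN_MARK (align_me);
    } example_t;

    /* Enforcement in the spirit of the commit's static assert. */
    _Static_assert (sizeof (example_t) % CLIB_CACHE_LINE_BYTES == 0,
                    "object size must be a multiple of the alignment");

Without the mark, a 24-byte object placed in a 64-byte-aligned pool
would leave every element after the first misaligned.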
Split structure definitions into two files to share code with the NSH
plugin (iOAM).
Change-Id: I0192551f71678e4f814bc6a7d25200a1580f3033
Signed-off-by: Vengada <venggovi@cisco.com>
Fixes for the following Coverity IDs: 163911, 163910, 163909, 163908, 163905, 163904, 163896, 161957, 161955
Change-Id: Ida822fa45c6936240f61282e2280541d7e6427b3
Signed-off-by: AkshayaNadahalli <anadahal@cisco.com>
Change-Id: Icf0ddf76ba1c8b588c79387284cd0349ebc6e45f
Signed-off-by: AkshayaNadahalli <anadahal@cisco.com>
Change-Id: I6d2afda991d771fb4a89fc3f6544f8e940a9b9f0
Signed-off-by: Shwetha Bhandari <shwethab@cisco.com>
Change-Id: I0963760a7da95612d5cab19596919b369a4d0f8e
Signed-off-by: Shwetha Bhandari <shwethab@cisco.com>
Refer to the JIRA ticket for more details.
Change-Id: I6facb9ef8553a21464f9a2e612706f152badbb68
Signed-off-by: AkshayaNadahalli <anadahal@cisco.com>