|
The main heap can be hugepage backed, so it is more efficient to use the
main heap instead of allocating a special heap just for the mtrie ...
Type: improvement
Change-Id: I210912ab8567c043205ddfc10fdcfde9a0fa7757
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
No change in default behavior. To use htlb pages for the ip4 mtrie,
use the "ip" command-line option "mtrie-hugetlb".
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I5497e426a47200edff2c7e15563ed6a42af12e7f
|
|
Identified and removed the executable bit from source files in the tree:
find . -perm 755 -name '*.[ch]' -exec chmod a-x {} \;
Type: improvement
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Change-Id: I00710d59fcc46ce5be5233109af4c8077daff74b
|
|
The mheap allocator has been turned off for several releases. This
commit removes the cmake config parameter, parallel support for
dlmalloc and mheap, and the mheap allocator itself.
Type: refactor
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I104f88a1f06e47e90e5f7fb3e11cd1ca66467903
|
|
ip4_mtrie used a full-memory-barrier compare-and-swap in set_leaf () and
set_root_leaf () even though only one thread updates the trie. Replaced
such instances of compare-and-swap with an atomic store-release.
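A minimal sketch of the pattern, using the raw GCC/Clang builtin rather than
the actual set_leaf () code; the type and function names here are
illustrative only:
  #include <stdint.h>

  typedef uint32_t ip4_mtrie_leaf_t;   /* stand-in for the real leaf type */

  static inline void
  publish_leaf (ip4_mtrie_leaf_t * slot, ip4_mtrie_leaf_t new_leaf)
  {
    /* Single-writer update: a release store is enough to make the fully
       initialised leaf visible to lookup threads before they can observe
       the new value; no full-barrier compare-and-swap is required. */
    __atomic_store_n (slot, new_leaf, __ATOMIC_RELEASE);
  }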
Type: refactor
Change-Id: Ic6e3c84480697915541acd16dcc630d1c436137d
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
|
|
ip4_mtrie set leaves atomically in set_leaf () and set_root_leaf () but
deleted leaves using regular stores in unset_leaf () and unset_root_leaf ().
Changed leaf deletion to update the mtrie using an atomic store-release.
A slight performance improvement was observed in benchmarking on Qualcomm
and Xeon machines. Benchmarking involved running 'ip route add' and
'ip route del' on vpp instances. Below are the routes/second for adding
and deleting 100k routes before and after the store-release changes:
Xeon Add Routes Before: 1.140e6, 1.139e6, 1.148e6, 1.158e6, 1.155e6
Xeon Add Routes After: 1.167e6, 1.170e6, 1.174e6, 1.173e6, 1.169e6
Xeon Del Routes Before: 7.287e7, 8.089e7, 6.048e7, 7.171e7, 7.821e7
Xeon Del Routes After: 8.729e7, 7.353e7, 7.856e7, 8.209e7, 7.787e7
Qualcomm Add Routes Before: 3.709e5, 3.954e5, 3.739e5, 3.759e5, 3.671e5
Qualcomm Add Routes After: 3.879e5, 3.967e5, 3.936e5, 3.764e5, 3.817e5
Qualcomm Del Routes Before: 1.286e7, 1.379e7, 1.353e7, 1.230e7, 1.331e7
Qualcomm Del Routes After: 1.411e7, 1.355e7, 1.373e7, 1.394e7, 1.314e7
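The delete-side counterpart, again sketched with an illustrative name and the
raw builtin; the point is only that unset now uses the same release ordering
as set:
  #include <stdint.h>

  typedef uint32_t ip4_mtrie_leaf_t;   /* stand-in for the real leaf type */

  static inline void
  restore_cover_leaf (ip4_mtrie_leaf_t * slot, ip4_mtrie_leaf_t cover_leaf)
  {
    /* Previously a plain store; now a release store, matching set_leaf (). */
    __atomic_store_n (slot, cover_leaf, __ATOMIC_RELEASE);
  }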
Type: refactor
Change-Id: If3acd25a2fb87addd0eb13d82d3c8f46579e8060
Signed-off-by: Jason Zhang <jason.zhang2@arm.com>
Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
|
|
Type: fix
Change-Id: I1af0e4a9bc23a3b6b6d3a74df093801ab6cae1f8
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
This is the first part of adding atomic macros, covering only the macros
for the __sync builtins.
- Based on an earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- the clib_atomic_release macro is added and used in the absence of any
  memory barrier.
- clib_atomic_bool_cmp_and_swap is added.
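A rough sketch of the wrapper shape (the real definitions live in
src/vppinfra/atomics.h and may differ in detail; clib_atomic_fetch_add is
shown only as a representative __sync wrapper):
  #define clib_atomic_fetch_add(a, b) __sync_fetch_and_add ((a), (b))
  #define clib_atomic_bool_cmp_and_swap(a, old, new) __sync_bool_compare_and_swap ((a), (old), (new))

  /* Release a flag without a full barrier (illustrative definition): */
  #define clib_atomic_release(a) __atomic_store_n ((a), 0, __ATOMIC_RELEASE)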
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b
Original-patch-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
|
|
Change-Id: I4ba0aeb65219596475345e42b8cd34019f5594c6
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
|
|
Change-Id: Idfed8243643780d3f52dfe6e6ec621c440daa6ae
Signed-off-by: mu.duojiao <mu.duojiao@zte.com.cn>
|
|
Configure with --enable-dlmalloc; see .../build-data/platforms/vpp.mk.
src/vppinfra/dlmalloc.[ch] are slightly modified versions of the
well-known Doug Lea malloc. Main advantage: dlmalloc mspaces have no
inherent size limit.
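A sketch of the stock mspace API behind that advantage; the prototypes follow
the standard Doug Lea malloc built with mspace support, while the function
example_alloc () is purely illustrative (vppinfra wraps these calls behind
its own memory layer):
  #include <stddef.h>

  typedef void *mspace;                                /* as in dlmalloc.h */
  mspace create_mspace (size_t capacity, int locked);
  void *mspace_malloc (mspace msp, size_t bytes);
  void mspace_free (mspace msp, void *mem);

  static void *
  example_alloc (size_t nbytes)
  {
    /* capacity 0 requests a default-sized space that grows on demand,
       i.e. no hard cap fixed at creation time.  Link against dlmalloc.c
       to run this. */
    mspace heap = create_mspace (0, 1 /* use locks */);
    return mspace_malloc (heap, nbytes);
  }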
Change-Id: I19b3f43f3c65bcfb82c1a265a97922d01912446e
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I994649761fe2e66e12ae0e49a84fb1d0a966ddfb
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I497e9f6489dd35219bcf2b51ac992467aac4c8eb
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
DBGvpp# sh fib mem
FIB memory
 Tables:
             SAFI    Number      Bytes
     IPv4 unicast         2     673066
     IPv6 unicast         2    1054608
             MPLS         1    4194312
   IPv4 multicast         2       2322
   IPv6 multicast         2        ???
 Nodes:
  Name                   Size  in-use /allocated     totals
  Entry                    96      20 / 20        1920/1920
  Entry Source             32       0 / 0               0/0
  Entry Path-Extensions    60       0 / 0               0/0
  multicast-Entry         192      12 / 12        2304/2304
  Path-list                40      28 / 28        1120/1120
  uRPF-list                16      20 / 20          320/320
  Path                     72      28 / 28        2016/2016
  Node-list elements       20      28 / 28          560/560
  Node-list heads           8      30 / 30          240/240
Change-Id: I8c8f6f1c87502a40265bf4f302d0daef111a4a4e
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I62a6ee78ee9ad73fd58a46fbdca54fd964fec113
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
- always use 'va_args' as a pointer in all format_* functions
- u32 for all 'indent' params, as its declaration was inconsistent
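A sketch of the resulting convention; the type example_thing_t and the
function itself are hypothetical, while format (), format_white_space () and
the u8/u32/va_list-pointer idioms are the existing vppinfra ones:
  #include <vppinfra/format.h>

  typedef struct { int value; } example_thing_t;   /* hypothetical type */

  static u8 *
  format_example_thing (u8 * s, va_list * args)
  {
    /* va_list is always passed by pointer; indent params are u32. */
    u32 indent = va_arg (*args, u32);
    example_thing_t *t = va_arg (*args, example_thing_t *);
    return format (s, "%Uvalue %d", format_white_space, indent, t->value);
  }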
Change-Id: Ic5799309a6b104c9b50fec309cba789c8da99e79
Signed-off-by: Christophe Fontaine <christophe.fontaine@enea.com>
|
|
Change-Id: I586a758a8b4b0ea5ca030b2df2796f5acb49e154
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: If98355bebe823f45b11b0908a8d7700ab273a6db
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
1) 16-8-8 stride: reduces the depth of the trie walk, traded against increased memory in the top PLY (see the lookup sketch below).
2) separate the vector of protocol-independent (PI) fib_table_t from the vector of protocol-dependent (PD) FIBs. PD FIBs are large structures; we don't want to burn the memory for each PD type.
3) go straight to the PD FIB in the data-path, thus avoiding an indirection through, e.g., a PLY pool.
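For point 1, a lookup sketch of the 16-8-8 walk; every name and the
leaf/ply encoding below are illustrative only, not the real ip4_mtrie
definitions:
  #include <stdint.h>

  typedef uint32_t leaf_t;                              /* LB index or ply ref */
  typedef struct { leaf_t leaves[1 << 16]; } ply_16_t;  /* top ply: 16 bits    */
  typedef struct { leaf_t leaves[1 << 8]; } ply_8_t;    /* lower plies: 8 bits */

  extern ply_8_t *ply_pool;                             /* assumed ply storage */

  static inline int leaf_is_ply (leaf_t l) { return l & 1; }  /* assumed encoding */
  static inline ply_8_t *leaf_get_ply (leaf_t l) { return ply_pool + (l >> 1); }

  /* At most three dependent loads per IPv4 lookup; the first resolves the
     top 16 address bits in a single step. */
  static inline leaf_t
  mtrie_16_8_8_lookup (const ply_16_t * root, uint32_t dst /* host order */)
  {
    leaf_t l = root->leaves[dst >> 16];
    if (leaf_is_ply (l))
      l = leaf_get_ply (l)->leaves[(dst >> 8) & 0xff];
    if (leaf_is_ply (l))
      l = leaf_get_ply (l)->leaves[dst & 0xff];
    return l;
  }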
Change-Id: I800d1ed0b2049040d5da95213f3ed6b12bdd78b7
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
1 - make the default route non-special, i.e. like any other less-specific route. Consequently, all buckets have a valid index of either a leaf or a ply, so checks for special indices in the data-path can be removed.
2 - since all leaves are now 'real', i.e. they represent a real load-balance object, telling whether a ply slot is 'empty' requires checking that the prefix length of the leaf occupying the slot is less than the minimum value for that ply (see the sketch below).
3 - when removing a leaf, find the cover first, then recurse down the ply and replace the old leaf with the cover. This saves us a ply walk.
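For point 2, a sketch of the 'empty slot' test; the struct layout and names
are illustrative, not the real ply definition:
  #include <stdint.h>

  typedef struct
  {
    uint32_t leaves[256];
    uint8_t dst_address_bits_of_leaves[256]; /* prefix length behind each slot */
    int32_t dst_address_bits_base;           /* minimum prefix length for this ply */
  } ply_8_sketch_t;

  static inline int
  ply_slot_is_empty (const ply_8_sketch_t * p, uint32_t slot)
  {
    /* With the default route stored like any other leaf, a slot is "empty"
       when the leaf in it was installed by a shorter, covering prefix. */
    return p->dst_address_bits_of_leaves[slot] < p->dst_address_bits_base;
  }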
Change-Id: Idd523019e8bb1b6ef527b1f5279a5e24bcf18332
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|