|
Type: improvement
Change-Id: Ie063ee0a0c59a9ad632200ce2b23703bc0d936e6
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ibad744788e200ce012ad88ff59c2c34920742454
|
|
adj_alloc (...) is not thread safe when the adj pool or combined
counter vectors expand.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I55710de6ecc083b7434e11798659cca9250c9131
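One heavy-handed way to make such an allocation safe is to hold worker threads at the barrier across anything that may move the pool; a sketch only, not the actual patch:
  ip_adjacency_t *adj;
  vlib_main_t *vm = vlib_get_main ();

  /* workers are held at the barrier while the pool may be reallocated */
  vlib_worker_thread_barrier_sync (vm);
  pool_get_aligned (adj_pool, adj, CLIB_CACHE_LINE_BYTES);
  vlib_worker_thread_barrier_release (vm);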
|
|
Type: fix
The hash walk does not give the same guarantees as the bihash, so walk
it in a safe manner.
Change-Id: Idfe48c3a84ab3a341d887f7d196bc81ba34ae8b0
Signed-off-by: Neale Ranns <nranns@cisco.com>
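A minimal sketch of the usual safe-walk pattern for a clib hash (the table and predicate names are illustrative, not the actual patch): collect keys during the walk, mutate afterwards.
  uword key, value, *keys = NULL, *k;

  /* pass 1: the walk only reads the table */
  hash_foreach (key, value, table_to_clean,
  ({
    if (entry_is_stale (key, value))      /* hypothetical predicate */
      vec_add1 (keys, key);
  }));

  /* pass 2: mutate only after the walk has finished */
  vec_foreach (k, keys)
    hash_unset (table_to_clean, *k);
  vec_free (keys);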
|
|
Type: improvement
A per-interface bihash used too much memory.
Change-Id: I447bb66c0907e1632fa5d886a3600e518663c39e
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
It is possible for a user to change the end node of a feature arc, but
this change should apply only to that 'instance' of the arc, not to all
arcs. For example, if a tunnel has its ipx-output end node changed to
adj-midchain-tx, this shouldn't affect all ipx-output arcs. Obviously...
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I41daea7ba6907963e42140307d065c8bcfdcb585
|
|
Type: improvement
- inline some common encap fixup functions into the midchain rewrite
  node so we don't incur the cost of the virtual function call
- change the copy 'guess' from ethernet_header (which will never happen)
  to an ip4 header
- add adj-midchain-tx to multiarch sources
- don't run adj-midchain-tx as a feature; instead put this node as the
  adj's next and at the end of the feature arc
- cache the feature arc config index (to save the cache miss going to
  fetch it)
- don't check if features are enabled when taking the arc (since we know
  they are)
The last two changes will also benefit normal adjacencies taking the arc
(e.g. for NAT, ACLs, etc.).
For IPsec:
- don't run esp_encrypt as a feature; instead, when required, insert this
  node into the adj's next and into the end of the feature arc. This
  implies that encrypt is always 'the last feature' run, which is
  symmetric with decrypt always being the first.
- esp_encrypt for tunnels has adj-midchain-tx as its next node
Change-Id: Ida0af56a704302cf2d7797ded5f118a781e8acb7
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
Signed-off-by: ShivaShankarK <shivaashankar1204@gmail.com>
Change-Id: I193023705003e664c50487fdfaa42b813604a078
|
|
Type: feature
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Iaba2ab11bfaa1c8db4023434e3043ac39500f938
|
|
Type: fix
Change-Id: I57f8bfbce4feed9d2775875cb8b1b729a47900a4
Signed-off-by: Neale Ranns <nranns@cisco.com>
(cherry picked from commit 24064d02aa9810ebc64c16dc778a179bb0ef5483)
|
|
Type: fix
Coverity found invalid logic.
Change-Id: Ic9144ac805a4e5a18aa299794fedda044dcb65fe
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
fib_walk_sync may call adj_alloc, which may cause adj_pool to expand. When
that happens, any previous frame which still uses the old adj pointer needs
to refresh it. Failure to do so may unintentionally access or update the old
adj memory and crash mysteriously.
Type: fix
Ticket: VPPSUPP-54
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I173dec4c5ce81c6e26c4fe011b894a7345901b24
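The refresh pattern, roughly (illustrative sketch, not the actual patch): hold the adjacency index rather than the pointer across any call that may expand the pool, and re-fetch afterwards.
  ip_adjacency_t *adj = adj_get (ai);

  ... use adj ...

  /* this walk may call adj_alloc and reallocate adj_pool */
  fib_walk_sync (FIB_NODE_TYPE_ADJ, ai, &bw_ctx);

  /* re-fetch from the index; the old pointer may now be stale */
  adj = adj_get (ai);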
|
|
Cleaned up some trivial typos while reading through adj.h.
Type: docs
Change-Id: I1b6cd815dc10ed3da8db2024b3e015e076235d50
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: feature
plus fixes for gre
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I0eca5f94b8b8ea0fcfb058162cafea4491708db6
|
|
Type: fix
Fixes: 418b225931634f6d113d2971cb9550837d69929d
Change-Id: Ia5f4ea24188c4f3de87e06a7fd07b40bcb47cfc1
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
Change-Id: I0e826284c50713d322ee7943d87fd3363cfbdfbc
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: docs
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: I46856db81d42c3f10c03a7bf9a245cc998cd8a01
|
|
Type: docs
Change-Id: I6cdfbae5a0eab8a69dfa2ae054945c510a3c63f6
Signed-off-by: Neale Ranns <nranns@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: feature
- ip-neighbour: generic neighbour handling; APIs, DBs, event handling,
  aging
- arp: ARP protocol implementation
- ip6-nd: IPv6 neighbor discovery implementation; separate ND, MLD, RA
- ip6-link: manage link-local addresses
- l2-arp-term: events separated from IP neighbours, since they are not
  the same
vnet retains just enough education to perform ND/ARP packet
construction.
arp and ip6-nd are to be moved to plugins soon.
Change-Id: I88dedd0006b299344f4c7024a0aa5baa6b9a8bbe
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
Change-Id: Id3a1950e49d5eb1883af06a14df97e98f55162a8
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: feature
Change-Id: I28f7a658be3f3beec9ea32635b60d1d3a10d9b06
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: feature
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I3feddfe44dee528b9ca05aa0150e9423306ae49d
|
|
Type: refactor
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I3aad20b35d89fc541fdf185096d71ca12b09a6e2
|
|
Type: refactor
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I1d8b88fe1eefc850865297b4f025b97e6373a6bd
|
|
This is a preparation step for migrating NAT to use SVR (shallow virtual
reassembly) to conserve space in vnet_buffer. Since the max rewrite length
is currently the pre-data size (128), a u8 is sufficient to hold that value.
Type: refactor
Change-Id: I5374bb396e178245b870cb0bbf1370d2a54230bc
Signed-off-by: Klement Sekera <ksekera@cisco.com>
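A compile-time check captures that reasoning; a sketch, assuming the usual VLIB_BUFFER_PRE_DATA_SIZE and STATIC_ASSERT definitions:
  /* the rewrite must fit in the buffer pre-data area (128 bytes today),
     so a u8 length field cannot overflow */
  STATIC_ASSERT (VLIB_BUFFER_PRE_DATA_SIZE <= 255,
                 "max rewrite length must fit in a u8");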
|
|
Type: fix
Change-Id: I82308e368d14d84f5970dad229bdcf2de7d1839d
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Change-Id: I1af0e4a9bc23a3b6b6d3a74df093801ab6cae1f8
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
|
|
In some cases, we can refer to no-longer-existing adjacencies (e.g. in
traces). Do not dump them in this case, as they are probably incorrect
(the memory can be reused).
Type: fix
Change-Id: Ib653ba066bb6595ec6ec37d313a3124bce0eeed3
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: feature
Change-Id: I1632ff23b1bf6d91aa3406c95ebd6ef0aa595f35
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: style
Change-Id: I2a21076fffaeb5726be80356aaffc9fea3d95850
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Multiple API message handlers call vnet_get_sup_hw_interface(...)
without checking the inbound sw_if_index. This can cause a
pool_elt_at_index ASSERT in a debug image, and major disorder in a
production image.
Given that a number of places are coded as follows, add an
"api_visible_or_null" variant of vnet_get_sup_hw_interface, which
returns NULL given an invalid sw_if_index, or a hidden sw interface:
- hw = vnet_get_sup_hw_interface (vnm, sw_if_index);
+ hw = vnet_get_sup_hw_interface_api_visible_or_null (vnm, sw_if_index);
if (hw == NULL || memif_device_class.index != hw->dev_class_index)
return clib_error_return (0, "not a memif interface");
Rename two existing xxx_safe functions -> xxx_or_null to make it
obvious what they return.
Type: fix
Change-Id: I29996e8d0768fd9e0c5495bd91ff8bedcf2c5697
Signed-off-by: Dave Barach <dave@barachs.net>
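A hedged sketch of what such a helper might look like; the name and exact checks below are illustrative, not the VPP source:
  static inline vnet_hw_interface_t *
  sup_hw_api_visible_or_null (vnet_main_t * vnm, u32 sw_if_index)
  {
    vnet_sw_interface_t *sw;

    /* reject indices that do not reference a live pool element */
    if (pool_is_free_index (vnm->interface_main.sw_interfaces, sw_if_index))
      return NULL;

    sw = vnet_get_sup_sw_interface (vnm, sw_if_index);

    /* hidden interfaces are not visible to the API */
    if (sw->flags & VNET_SW_INTERFACE_FLAG_HIDDEN)
      return NULL;

    return vnet_get_hw_interface (vnm, sw->hw_if_index);
  }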
|
|
Instead of all clients directly RR-sourcing the entry they are tracking,
use a dedicated 'tracker' object. This tracker object is an entry
delegate and a child of the entry. The clients are then children of the
tracker.
The benefit of this approach is that each time a new client tracks the
entry it doesn't RR-source it. When an entry is sourced, all its children
are updated, so without the tracker the k-th new client triggers an update
of the k-1 existing children and adding n clients costs O(n^2) child
updates. With the tracker as indirection, the entry is sourced only once.
Type: feature
Change-Id: I5b80bdda6c02057152e5f721e580e786cd840a3b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;

  ... do some stuff...

  if ((error = vlib_call_init_function (vm, init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);
and
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ib34ca76fcc5e85cb3cc646ffc7be208b8e757cba
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I26279c19b879e59c68fda31426fe42dae62a858d
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I215e1e0208a073db80ec6f87695d734cf40fabe3
Signed-off-by: Jim Thompson <jim@netgate.com>
|
|
Change-Id: I6527e3fd8bbbca2d5f728621fc66b3856b39d505
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
This can be used by e.g. tunnels, so it doesn't need to be
implemented for each tunnel type.
Change-Id: I0790f89aa49f83421612b35108cce67693285999
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Use memcpy instead of complex specific copy logic. This simplifies
the implementation and also improves performance slightly.
Also move the adjacency data from the tail to the head of the buffer,
which improves cache locality (the header and data share the same
cacheline).
Finally, fix VxLAN, which used to work around the vnet_rewrite logic.
Change-Id: I770ddad9846f7ee505aa99ad417e6a61d5cbbefa
Signed-off-by: Benoît Ganne <bganne@cisco.com>
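With the adjacency data at the head, the prepend reduces to roughly the following ('rewrite' and 'rw_len' are illustrative stand-ins for the precomputed rewrite string and its length):
  /* make room in front of the current data, then copy the encap in */
  vlib_buffer_advance (b, -(word) rw_len);
  clib_memcpy_fast (vlib_buffer_get_current (b), rewrite, rw_len);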
|
|
Avoid the cache miss consequences of spraying [functionally harmless]
junk into un-prefetched rewrite space. As things stand, several tunnel
encap rewrites set rewrite data_bytes = 0, and take a performance hit
due to unwanted speculative copying.
Should be performance-neutral in speed-path cases, which won't execute
the added check.
Change-Id: Id83c0325e58c0f31631b4bae5a06457dfc7ed567
Signed-off-by: Dave Barach <dave@barachs.net>
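The added check is essentially the following ('rw' is an illustrative pointer to the adjacency's rewrite header, whose data_bytes field is named above):
  /* nothing to prepend: skip the speculative copy entirely */
  if (PREDICT_FALSE (rw->data_bytes == 0))
    return;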
|
|
Change-Id: Iac92a6d15e1feef4d97b8db09fc60901dc9f7880
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I642823bdc3c7006a0b719ec1e3a9cd75b2b37253
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I7a48890c075826fbd8c75436dfdc5ffff230a693
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
If a tunnel's destination address is reachable through the tunnel
(see example config below), then search for and detect the recursion
loop and don't stack the adjacency. Otherwise this results in a
nasty surprise.
DBGvpp# loop cre
DBGvpp# set int state loop0 up
DBGvpp# set int ip addr loop0 10.0.0.1/24
DBGvpp# create gre tunnel src 10.0.0.1 dst 1.1.1.1
DBGvpp# set int state gre0 up
DBGvpp# set int unnum gre0 use loop0
DBGvpp# ip route 1.1.1.1/32 via gre0
DBGvpp# sh ip fib 1.1.1.1
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:default-route:1, ]
1.1.1.1/32 fib:0 index:11 locks:4 <<< this is entry #11
src:CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
path-list:[14] locks:2 flags:shared,looped, uPRF-list:12 len:1 itfs:[2, ]
path:[14] pl-index:14 ip4 weight=1 pref=0 attached-nexthop: oper-flags:recursive-loop,resolved, cfg-flags:attached,
1.1.1.1 gre0 (p2p)
[@0]: ipv4 via 0.0.0.0 gre0: mtu:9000 4500000000000000fe2fb0cc0a0000010101010100000800
stacked-on entry:11: <<<< and the midchain forwards via entry #11
[@2]: dpo-drop ip4
src:recursive-resolution refs:1 src-flags:added, cover:-1
forwarding: unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:12 to:[0:0]]
[0] [@6]: ipv4 via 0.0.0.0 gre0: mtu:9000 4500000000000000fe2fb0cc0a0000010101010100000800
stacked-on entry:11:
[@2]: dpo-drop ip4
DBGvpp# sh adj 1
[@1] ipv4 via 0.0.0.0 gre0: mtu:9000 4500000000000000fe2fb0cc0a0000010101010100000800
stacked-on entry:11:
[@2]: dpo-drop ip4
flags:midchain-ip-stack midchain-looped <<<<< this is a loop
counts:[0:0]
locks:4
delegates:
children:
{path:14}
Change-Id: I39b82bd1ea439be4611c88b130d40289fa0c1b59
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I463b153de93cfec29a9c15e8e84e41f6003d4c5f
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Iaecf8c060e1337d8c362ad9a9be2bb9701664397
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I6b59df939c9daf40e261d73d19f500bd90abe6ff
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
|