|
Type: improvement
Signed-off-by: Neale Ranns <nranns@cisco.com>
Signed-off-by: Benoît Ganne <bganne@cisco.com>
Change-Id: I2f30a4f04fd9a8635ce2d259b5fd5b0c85cee8c3
|
|
When an mfib entry was created with both paths and entry_flags, the
entry flags were being ignored. If there were no paths, the flags were
passed into mfib_table_entry_update; but when the entry did not exist
and both paths and flags were supplied, the entry was created within
mfib_table_entry_paths_update(), which used a default of
MFIB_ENTRY_FLAG_NONE.
Pass the flags through into the mfib_table_entry_paths_update fn. All
existing callers other than the create case will now pass in
MFIB_ENTRY_FLAG_NONE.
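A minimal sketch of the create-time call after this change; the
parameter order of mfib_table_entry_paths_update() and the flag chosen
are assumptions for illustration, not taken from the patch:
  #include <vnet/mfib/mfib_table.h>

  /* Hedged sketch: parameter order is assumed; see mfib_table.h for the
   * authoritative prototype. */
  static fib_node_index_t
  example_mroute_create (u32 fib_index, const mfib_prefix_t * pfx,
                         const fib_route_path_t * rpaths)
  {
    /* create case: the entry flags are now passed through instead of
     * being replaced with a default of MFIB_ENTRY_FLAG_NONE */
    return mfib_table_entry_paths_update (fib_index, pfx, MFIB_SOURCE_API,
                                          MFIB_ENTRY_FLAG_ACCEPT_ALL_ITF,
                                          rpaths);
  }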
Type: fix
Signed-off-by: Paul Atkins <patkins@graphiant.com>
Change-Id: I256375ba2fa863a62a88474ce1ea6bf2accdd456
|
|
Type: fix
Change-Id: I41455e1cdc62e7c0baa148630b0701b042f3b156
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Change-Id: I20e48a5ac8068eccb8d998346d35227c4802bb68
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
Type: feature
|
|
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Allow clients that allocate punt reasons to pass a callback function
that is invoked when the first/last client registers to use/listen on
that punt reason. This allows the client to perform necessary
configuration that might not otherwise be enabled.
IPSec uses this callback to register the ESP protocol and UDP handling
nodes, which would not otherwise be enabled unless a tunnel were present.
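A hedged sketch of how a client might use this; the callback signature
and the extra arguments to vlib_punt_reason_alloc() shown here are
assumptions for illustration, not the exact interface:
  #include <vlib/punt.h>

  /* Assumed callback shape: called when the first listener registers
   * (enable != 0) and when the last one deregisters (enable == 0). */
  static void
  example_punt_interest_change (void *data, int enable)
  {
    if (enable)
      ;  /* e.g. IPSec enables its ESP/UDP handling nodes here */
    else
      ;  /* tear the configuration back down */
  }

  static void
  example_alloc_reason (vlib_punt_hdl_t hdl, vlib_punt_reason_t * reason)
  {
    /* hedged: the callback/data arguments shown are assumptions */
    vlib_punt_reason_alloc (hdl, "example-reason",
                            example_punt_interest_change, 0 /* data */,
                            reason);
  }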
Change-Id: I9759349903f21ffeeb253d4271e619e6bf46054b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I402b1b06db736b2a7a242ce70ffd409c7c0a4fc2
|
|
Add an ALWAYS_ASSERT (...) macro, to (a) shut up coverity, and (b)
check the indicated condition in production images.
As in:
p = hash_get(...);
ALWAYS_ASSERT(p); /* was ASSERT(p) */
elt = pool_elt_at_index(pool, p[0]);
This may not be the best way to handle a specific case, but failure to
check return values at all followed by e.g. a pointer dereference
isn't ok.
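A sketch of one way such an always-on assert could be written; this is
illustrative only, not the actual vppinfra definition:
  /* Illustrative only: check the condition even when CLIB_ASSERT_ENABLE
   * is off, so production images still catch the failure. */
  #define ALWAYS_ASSERT(truth)                                     \
  do {                                                             \
    if (PREDICT_FALSE (!(truth)))                                  \
      clib_panic ("%s:%d (%s) assertion `%s' fails",               \
                  __FILE__, __LINE__, __FUNCTION__, #truth);       \
  } while (0)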
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ia97c641cefcfb7ea7d77ea5a55ed4afea0345acb
|
|
Type: fix
Change-Id: Id53eb6ed15f270d747b9831a7b585cbafe515dd2
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Instead of all clients directly RR sourcing the entry they are tracking,
use a dedicated 'tracker' object. This tracker object is an entry
delegate and a child of the entry. The clients are then children of the
tracker.
The benefit of this approach is that each time a new client tracks the
entry it doesn't RR source it. When an entry is sourced, all its children
are updated, so having each new client re-source the entry it tracks is
O(n^2). With the tracker as indirection, the entry is sourced only once.
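A hedged sketch of the tracking pattern; the function name and arguments
follow the description above but are assumptions, not the exact interface:
  #include <vnet/fib/fib_entry_track.h>

  static u32
  example_track_prefix (u32 fib_index, const fib_prefix_t * pfx,
                        fib_node_type_t child_type, index_t child_index,
                        fib_node_index_t * fei)
  {
    /* The first call for a prefix RR-sources the entry and attaches the
     * tracker delegate; later clients become children of that tracker, so
     * the entry is sourced once regardless of how many clients track it. */
    return fib_entry_track (fib_index, pfx, child_type, child_index, fei);
  }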
Type: feature
Change-Id: I5b80bdda6c02057152e5f721e580e786cd840a3b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Enhance the route add/del APIs to take a set of paths rather than just one.
Most unicast routing protocols calculate all the available paths in one
run of the algorithm, so updating all the paths at once is beneficial for the client.
Two knobs control the behaviour:
is_multipath - if set, the set of paths passed will be added to those
that already exist, otherwise the set will replace them.
is_add - add or remove the set
is_add=0, is_multipath=1 and an empty set results in deleting the route.
A sketch of an API request using these knobs follows the timings below.
It is also considerably faster to add multiple paths at once than one at a time:
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.11
100000 routes in .572240 secs, 174751.80 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.12
100000 routes in .528383 secs, 189256.54 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.13
100000 routes in .757131 secs, 132077.52 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.14
100000 routes in .878317 secs, 113854.12 routes/sec
vat# ip_route_add_del 1.1.1.1/32 count 100000 multipath via 10.10.10.11 via 10.10.10.12 via 10.10.10.13 via 10.10.10.14
100000 routes in .900212 secs, 111084.93 routes/sec
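A hedged sketch of an API client filling in such a request; the field and
helper names are assumptions based on the description above and may not
match the generated message type exactly:
  #include <vnet/ip/ip.api_types.h>  /* assumed location of the generated types */

  static void
  example_fill_multipath_route (vl_api_ip_route_add_del_t * mp)
  {
    mp->is_add = 1;
    mp->is_multipath = 1;          /* add these paths to any already present */
    mp->route.n_paths = 2;
    /* fill mp->route.paths[0] with a path via 10.10.10.11 and
     * mp->route.paths[1] with a path via 10.10.10.12, then send */
  }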
Change-Id: I416b93f7684745099c1adb0b33edac58c9339c1a
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;
  ... do some stuff...
  if ((error = vlib_call_init_function (init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);

and

static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
A punt/exception path that provides:
1) registration for clients that use the infra
2) clients can create punt reasons
3) clients can register to receive packets that are punted
for a given reason, to be sent to the desired node (as sketched below).
4) nodes which punt packets fill in the {reason,protocol} of the
buffer (in the meta-data) and send to the new node "punt-dispatch"
5) punt-dispatch sends packets to the registered nodes or drops
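A hedged sketch of a client registering for a punt reason (step 3); the
prototypes are assumptions based on the description above, not the exact
interface:
  #include <vlib/vlib.h>
  #include <vlib/punt.h>

  static void
  example_register_for_punt (vlib_punt_reason_t reason)
  {
    vlib_punt_hdl_t hdl;

    /* become a client of the punt infra */
    hdl = vlib_punt_client_register ("example-client");

    /* ask that packets punted for 'reason' be sent to our node;
     * punt-dispatch then forwards matching packets to it */
    vlib_punt_register (hdl, reason, "example-punt-handler");
  }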
Change-Id: Ia4f144337f1387cbe585b4f375d0842aefffcde5
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
for multicast
Change-Id: I17caf3c5a2060de497c44655b66a15a2007f716b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Learning GBP endpoints over vxlan-gbp tunnels
Change-Id: I1db9fda5a16802d9ad8b4efd4e475614f3b21502
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
|
|
Change-Id: I3a47c71ad3e35df47d11fed6db95019a45f3015f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Change-Id: I11ec0d7048d36c30a97d437e5b0abd05f06ab0eb
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
This patch implements vxlan with an extension for group-based
policy support.
Change-Id: I70405bf7332c02867286da8958d9652837edd3c2
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|