Type: improvement
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I822ead1495edb96ee62e53dc5920aa6c565e3621
---
Use of _vec_len() to set the vector length breaks the address sanitizer.
Users should use vec_set_len(), vec_inc_len() or vec_dec_len() instead.
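A minimal before/after sketch (an editor's illustration, assuming only the
vec.h accessors named above):

  #include <vppinfra/vec.h>

  void
  drop_last_element (u32 * v)
  {
    /* Before: poking the length field directly left the address
     * sanitizer's shadow state out of sync with the vector:
     *   _vec_len (v) = vec_len (v) - 1;
     */

    /* After: the accessor adjusts the length and the ASan annotations. */
    vec_dec_len (v, 1);
  }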
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Fixes the case when a packet to a link-local address is received over
a GRE/MPLS or other non-ethernet interface and the ip6-ll FIB for it
is undefined.
If by chance the ip6-ll FIB index happens to be valid, the packet will
be passed to some ip6 FIB, with the possibility of being sent out over
an unrelated interface or being looped through the ip6-link-local DPO
until OOM and crash.
Type: fix
Signed-off-by: Vladislav Grishenko <themiron@yandex-team.ru>
Change-Id: Ie985f0373ea45e2926db7fb0a1ff951eca0e38f6
---
Type: refactor
Change-Id: I3625eacf9e04542ca8778df5d46075a8654642c7
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: improvement
This adds a new ip[46]-receive node, a sibling of ip[46]-local.
Its goal is to set vnet_buffer (b)->ip.rx_sw_if_index to the
sw_if_index of the local interface. Dependent nodes further down
the line (e.g. the hoststack) then set sw_if_idx[rx] to this value,
so that we know which local interface received the packet.
The TCP issue this fixes: on accept, we were setting tc->sw_if_index
to the source sw_if_index. We should use the destination sw_if_index
instead, so that packets coming back on this connection have the
right source sw_if_index, and also set it in the transmitted packet.
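A minimal sketch of the metadata update described above (the helper name is
illustrative; ip.rx_sw_if_index is the opaque field named in the message):

  #include <vnet/buffer.h>

  /* Record the local (destination) interface on the buffer so that
   * downstream consumers such as the hoststack can recover which local
   * interface received the packet, independent of the ingress interface. */
  static_always_inline void
  ip_receive_set_rx_sw_if_index (vlib_buffer_t * b, u32 local_sw_if_index)
  {
    vnet_buffer (b)->ip.rx_sw_if_index = local_sw_if_index;
  }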
Change-Id: I569ed673e15c21e71f365c3ad45439b05bd14a9f
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
---
Type: improvement
Subsequent features in the data-path can thus easily find the l3 header
without parsing the label stack.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I26f7d4bbe9186aeb8654706579c72424e8ecca2c
---
Type: refactor
as opposed to writing out the mtrie steps one by one each time.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I1248861350f9189f9a67ac6e68940813af279e03
---
Type: refactor
Change-Id: Id10cbf52e8f2dd809080a228d8fa282308be84ac
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Put plugin init order inside plugin instead of in vnet
Type: improvement
Signed-off-by: Bin Zhou (bzhou2) <bzhou2@cisco.com>
Change-Id: Icbacdb3f1cb4ac9d74e3f78458e8bc333793b4d6
---
Trajectory trace has been broken for a while because we used to save the
buffer trajectory in a vector pointed to in opaque2. This does not work
well when opaque2 is copied (e.g. because of a clone), as two buffers end
up sharing the same vector.
This dedicates a full cacheline in the buffer metadata instead when
trajectory tracing is compiled in. No dynamic allocation, no sharing, no
tears.
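A sketch of the shape of that change (hypothetical field names; the point
is a fixed-size record embedded in the buffer rather than a heap vector
reachable from opaque2):

  #include <vppinfra/types.h>

  /* One dedicated cacheline of per-buffer trajectory state: clones that
   * copy opaque2 can never end up sharing (or double-freeing) a vector. */
  typedef struct
  {
    u16 nb;          /* number of nodes visited so far */
    u16 trace[31];   /* visited node indices; 2 + 62 = 64 bytes total */
  } buffer_trajectory_t;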
Type: refactor
Change-Id: I6a028ca1b48d38f393a36979e5e452c2dd48ad3f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
---
Type: feature
Support setting the MTU for a peer on an interface. The minimum of the
path MTU and the interface MTU is used at forwarding time.
The path MTU is specified for a given peer, by address and table-ID.
In the forwarding plane the MTU is enforced in one of two ways:
1 - if the peer is attached, the MTU is set on the peer's
adjacency
2 - if the peer is not attached, i.e. it is remote, a DPO is added to
the peer's FIB entry to perform the necessary fragmentation.
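The forwarding-time rule as a tiny sketch (hypothetical helper; in the real
data path the result lives on the adjacency or in the fragmentation DPO
rather than being computed per packet):

  #include <vppinfra/clib.h>

  /* The MTU enforced at forwarding time is the minimum of the
   * configured per-peer path MTU and the interface MTU. */
  static inline u16
  effective_mtu (u16 path_mtu, u16 interface_mtu)
  {
    return clib_min (path_mtu, interface_mtu);
  }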
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I8b9ea6a07868b50e97e2561f18d9335407dea7ae
---
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Type: improvement
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I487e698555545fce85d02d55deaaf7bb0007e388
---
Type: improvement
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: Iee04af801814b6360b045cf7dc8bcad6f517229e
---
Type: refactor
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: Id1801519638a9b97175847d7ed58824fb83433d6
---
Type: improvement
- cache the values from the BD on the input config, to avoid loading the BD
- avoid the short write/long read on the sequence number
- use vlib_buffer_enqueue_to_next (sketched below)
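A minimal node-function sketch of the vlib_buffer_enqueue_to_next pattern
(node and next names are illustrative):

  #include <vlib/vlib.h>
  #include <vlib/buffer_funcs.h>

  static uword
  example_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
                   vlib_frame_t * frame)
  {
    u32 *from = vlib_frame_vector_args (frame);
    u16 nexts[VLIB_FRAME_SIZE];
    u32 i;

    /* Pick a next index per packet; here everything goes to next 0. */
    for (i = 0; i < frame->n_vectors; i++)
      nexts[i] = 0;

    /* One batched enqueue replaces per-packet next-frame bookkeeping. */
    vlib_buffer_enqueue_to_next (vm, node, from, nexts, frame->n_vectors);
    return frame->n_vectors;
  }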
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I33442b9104b457e4c638d26e9ad3bc965687a0bc
---
Add dpo_pool_barrier_sync/release and use them to clean up
thread-unsafe pool expansion cases.
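A usage sketch (the element type, pool, and helper signatures here are
assumptions based on the message, not verified API):

  #include <vlib/vlib.h>
  #include <vppinfra/pool.h>

  typedef struct { u32 value; } my_dpo_t;  /* illustrative element type */
  static my_dpo_t *my_dpo_pool;            /* illustrative pool */

  static my_dpo_t *
  my_dpo_alloc (vlib_main_t * vm)
  {
    my_dpo_t *dpo;

    /* Hold workers at the barrier across the pool_get, so an expanding
     * (reallocating) pool can never move out from under a worker. */
    dpo_pool_barrier_sync (vm, my_dpo_pool, 1 /* need sync */);
    pool_get_aligned (my_dpo_pool, dpo, CLIB_CACHE_LINE_BYTES);
    dpo_pool_barrier_release (vm, 1 /* need sync */);

    return dpo;
  }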
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I09299124a25f8d541e3bb4b75375568990e9b911
---
load_balance_alloc_i(...) is not thread safe when the
load_balance_pool or combined counter vectors expand.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I7f295ed77350d1df0434d5ff461eedafe79131de
---
Type: docs
Change-Id: I9b5e5137eb4c1e89f6e8d7a278cd11a0fd496471
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
---
Type: fix
Signed-off-by: ShivaShankarK <shivaashankar1204@gmail.com>
Change-Id: Iee88a2101ce42d7f1cdb65df532c349d14829e4c
---
Add an ALWAYS_ASSERT (...) macro, to (a) shut up coverity, and (b)
check the indicated condition in production images.
As in:
  p = hash_get (...);
  ALWAYS_ASSERT (p); /* was ASSERT (p) */
  elt = pool_elt_at_index (pool, p[0]);
This may not be the best way to handle a specific case, but failure to
check return values at all followed by e.g. a pointer dereference
isn't ok.
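A sketch of what such a macro can look like (illustrative definition, not
the actual vppinfra code):

  #include <vppinfra/clib.h>
  #include <vppinfra/os.h>

  /* Unlike ASSERT, which compiles away in production images, this check
   * always runs; use it where skipping the check would mean e.g.
   * dereferencing an unvalidated pointer. */
  #define ALWAYS_ASSERT(truth)                  \
    do                                          \
      {                                         \
        if (PREDICT_FALSE (!(truth)))           \
          os_panic ();                          \
      }                                         \
    while (0)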
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ia97c641cefcfb7ea7d77ea5a55ed4afea0345acb
---
Type: docs
Signed-off-by: John DeNisco <jdenisco@cisco.com>
Change-Id: I7280e5c5ad10a66c0787a5282291a2ef000bff5f
---
Type: docs
Change-Id: I9c4727db8d498d0b513157b19ad306b7aaacc222
Signed-off-by: Neale Ranns <nranns@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
---
Type: fix
Signed-off-by: Rajesh Goel <rajegoel@cisco.com>
Change-Id: Ie4372c5cf58ab215cdec5ce56f8a994daaba2844
---
Type: fix
Change-Id: I1af0e4a9bc23a3b6b6d3a74df093801ab6cae1f8
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
---
Type: refactor
Signed-off-by: Simon Zhang <yuwei1.zhang@intel.com>
Change-Id: I8674ff5f6f6336b256b7df8187afbb36ddef71fb
---
Since vlib_buffer_copy() and vlib_buffer_clone() both preserve the
VLIB_BUFFER_IS_TRACED bit in the flags field, they should also copy
trace_handle, which adds minimal overhead. Callers of these functions
then do not have to call vlib_buffer_copy_trace_flags() to copy
trace_handle.
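A minimal sketch of that propagation, using the buffer fields named in the
message (the helper name is illustrative):

  #include <vlib/vlib.h>

  /* If the source buffer is traced, carry the trace handle to the
   * copy/clone along with the preserved VLIB_BUFFER_IS_TRACED flag. */
  static_always_inline void
  copy_trace_state (vlib_buffer_t * dst, const vlib_buffer_t * src)
  {
    if (src->flags & VLIB_BUFFER_IS_TRACED)
      dst->trace_handle = src->trace_handle;
  }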
Type: refactor
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: Iff6a3f81660dd62b36a2966033eb380305340310
---
Type: feature
Change-Id: Ib24547a7c4c73ceb5383d1ca8f14ec40e6a90f01
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Type: fix
Fixes: 59fa121f
Change-Id: I9eb4fe1612734e54932228527c37bf33b705dbdb
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Enhance the route add/del APIs to take a set of paths rather than just one.
Most unicast routing protocols calculate all the available paths in one
run of the algorithm, so updating all the paths at once is beneficial for
the client.
Two knobs control the behaviour:
is_multipath - if set, the set of paths passed will be added to those
that already exist, otherwise the set will replace them.
is_add - add or remove the set
is_add=0, is_multipath=1 and an empty set results in deleting the route.
It is also considerably faster to add multiple paths at once than one at
a time:
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.11
100000 routes in .572240 secs, 174751.80 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.12
100000 routes in .528383 secs, 189256.54 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.13
100000 routes in .757131 secs, 132077.52 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.14
100000 routes in .878317 secs, 113854.12 routes/sec
vat# ip_route_add_del 1.1.1.1/32 count 100000 multipath via 10.10.10.11 via 10.10.10.12 via 10.10.10.13 via 10.10.10.14
100000 routes in .900212 secs, 111084.93 routes/sec
Change-Id: I416b93f7684745099c1adb0b33edac58c9339c1a
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
---
Change-Id: Ib4cdbe8a6a1d10a643941c13aa0acbed410f876c
Type: feature
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *
init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;

  ... do some stuff...

  if ((error = vlib_call_init_function (init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);

and

static clib_error_t *
init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS ("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
---
Change-Id: I215e1e0208a073db80ec6f87695d734cf40fabe3
Signed-off-by: Jim Thompson <jim@netgate.com>
---
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
---
Change-Id: I3043112c3e7584f61e64dc6d20d57604ebceb76a
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
---
-fno-common makes sure we do not have multiple definitions of the same
global symbol across compilation units. It helps debug nasty linkage
bugs by guaranteeing that all references to a global symbol use the same
underlying object.
It also helps avoid benign mistakes such as declaring enums as global
objects instead of types in headers (hence the minor fixes scattered
across the source).
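A sketch of that header mistake in miniature:

  /* In a header, this declares a type AND tentatively defines a global
   * object named 'color' in every compilation unit that includes it;
   * with -fno-common the duplicate definitions become link errors. */
  enum color_e { RED, GREEN } color;

  /* What was intended: declare only the type. */
  typedef enum { RED2, GREEN2 } color_t;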
Change-Id: I55c16406dc54ff8a6860238b90ca990fa6b179f1
Signed-off-by: Benoît Ganne <bganne@cisco.com>
---
Change-Id: I8dc261e40b8398c5c8ab6bb69ecebbd0176055d9
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: I7a48890c075826fbd8c75436dfdc5ffff230a693
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
---
Change-Id: I9b5f7b264f9978e3dd97b2d1eb103b7d10ac3170
Signed-off-by: Damjan Marion <damarion@cisco.com>
---
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
---
Change-Id: I9052202b8cbcf656e61d635253d515f0f3a8d145
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Keep the number of buckets in the load-balance fixed. If a
path goes down, fill its buckets with those from the next
available up path.
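A sketch of the refill rule (hypothetical representation of the bucket
table):

  #include <vppinfra/types.h>

  /* The bucket count never changes, so flows hashed to surviving paths
   * keep their buckets; only the failed path's buckets are remapped. */
  static void
  refill_buckets (u32 * bucket_to_path, u32 n_buckets,
                  u32 failed_path, u32 next_up_path)
  {
    u32 i;
    for (i = 0; i < n_buckets; i++)
      if (bucket_to_path[i] == failed_path)
        bucket_to_path[i] = next_up_path;
  }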
Change-Id: I15603ccb899fa9b77556b898c99136379cf32eae
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: I785ecadbf30812a500629870aa717e64f4cf0cdd
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: I798e4fb6470ae9e763f8de1c290ff0fc3c0b7f9e
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
The route ADD API changed to return the stats segment index to use when reading the counters.
Change-Id: I2ef41e01eaa2f9cfaa49d9c88968897793825925
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: Ibda19d786070c942c75016ab568c8361de2f24af
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
- the FIB path takes a vector of type fib_mpls_label_t, not u32, so the
type-unsafe vec_add did not work (sketched below)
- write some SR-MPLS tests
- allow an MPLS tunnel to resolve through an SR BSID
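A sketch of the type mismatch behind the first point (fml_value is the
value field of fib_mpls_label_t; treat the rest as illustrative):

  #include <vnet/fib/fib_types.h>

  static fib_mpls_label_t *
  append_label (fib_mpls_label_t * labels, u32 raw_label)
  {
    /* Wrong: vec_add copies raw elements, so appending a u32 into a
     * vector of the larger fib_mpls_label_t silently corrupts it:
     *   vec_add (labels, &raw_label, 1);
     */

    /* Right: build a properly typed element, then append it. */
    fib_mpls_label_t ml = {.fml_value = raw_label };
    vec_add1 (labels, ml);
    return labels;
  }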
Change-Id: I2a18b9a9bf43584100ac269c4ebc286c9e3b3ea5
Signed-off-by: Neale Ranns <nranns@cisco.com>
---
Change-Id: I4907f48e6c4a4e91343fd0d4fface00f09e5fa2b
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
---
Change-Id: I59235d11baac18785a4c90cdaf14e8f3ddf06dab
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>