|
Type: fix
Several tunnel encapsulations use UDP as the outer header, with the UDP src
port set from the inner header flow hash, e.g. gtpu, geneve, vxlan, vxlan-gbp.
Since the flow hash of the inner header has already been calculated, keeping
it in vnet_buffer(b)->ip.flow_hash saves the load-balance node the work of
recomputing it when selecting ECMP uplinks.
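A minimal sketch of the idea (vnet_buffer(b)->ip.flow_hash is the buffer
metadata field named above; the compute_inner_flow_hash helper and the
port-folding scheme are illustrative assumptions, not the actual encap code):
    static_always_inline u16
    outer_udp_src_port (vlib_buffer_t * b)
    {
      /* Reuse the flow hash already cached in the buffer metadata ... */
      u32 hash = vnet_buffer (b)->ip.flow_hash;
      if (PREDICT_FALSE (hash == 0))
        hash = compute_inner_flow_hash (b);   /* placeholder helper */
      /* ... and fold it into a UDP source port in the ephemeral range. */
      return clib_host_to_net_u16 (0xC000 | (hash & 0x3FFF));
    }
The load-balance node can then reuse the same cached value instead of
re-hashing the packet when picking an ECMP uplink.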
Change-Id: I0e4e2b27178f4fcc5785e221d6d1f3e8747d0d59
Signed-off-by: Shawn Ji <xiaji@tethrnet.com>
|
|
Type: fix
Change-Id: Id53eb6ed15f270d747b9831a7b585cbafe515dd2
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I2272521d6e69edcd385ef684af6dd4eea5eaa953
|
|
This is a preparation step for migrating NAT to use SVR (shallow virtual
reassembly) to conserve space in vnet_buffer. Since the max rewrite length
is currently the pre-data size (128), a u8 is sufficient to hold that value.
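A minimal illustration of why a u8 is wide enough, assuming the stock
VLIB_BUFFER_PRE_DATA_SIZE of 128:
    #include <vlib/vlib.h>

    /* The rewrite is built in the buffer's pre-data area, so its length can
       never exceed VLIB_BUFFER_PRE_DATA_SIZE (128), which fits in a u8. */
    STATIC_ASSERT (VLIB_BUFFER_PRE_DATA_SIZE <= 255,
                   "rewrite length must fit in a u8");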
Type: refactor
Change-Id: I5374bb396e178245b870cb0bbf1370d2a54230bc
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|
|
Fix UDP over IP6 checksum offload setup for VXLAN and VXLAN-GBP.
Type: fix
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: If110467a68234d8eed941869a2a03735f339dc33
|
|
Instead of all clients directly RR-sourcing the entry they are tracking,
use a dedicated 'tracker' object. This tracker object is an entry
delegate and a child of the entry. The clients are then children of the
tracker.
The benefit of this approach is that a new client tracking the entry does
not RR-source it again. Since sourcing an entry updates all of its existing
children, n clients each re-sourcing the entry is O(n^2) work in total. With
the tracker as an indirection, the entry is sourced only once.
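A rough sketch of the resulting parent/child relationship (type and field
names below are illustrative, not the actual fib code):
    /* Illustrative only: one tracker per tracked entry, many clients per tracker. */
    typedef struct entry_tracker_t_
    {
      u32 entry_index;     /* the entry this tracker is a delegate of             */
      u32 sibling_index;   /* the tracker's single slot in the entry's child list */
      u32 *clients;        /* vec of clients; they are children of the tracker,
                              so adding one does not re-source the entry          */
    } entry_tracker_t;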
Type: feature
Change-Id: I5b80bdda6c02057152e5f721e580e786cd840a3b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
from the draft:
3. Backward Compatibility
VXLAN [RFC7348] requires reserved fields to be set to zero on
transmit and ignored on receive.
Change-Id: I98544907894f1a6eba9595a37c3c88322905630e
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
In the case of an error, it is uninitialized.
Type: fix
Change-Id: Ib88fb997e5eef410c1cd970674d9385575f30366
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Return VNET_API_ERROR_INVALID_VALUE instead of 1.
Type: fix
Change-Id: Ie5465cad9ca07b9147306a808e8b13d0c4867913
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Enhance the route add/del APIs to take a set of paths rather than just one.
Most unicast routing protocols calculate all the available paths in one
run of the algorithm, so updating all the paths at once is beneficial for the client.
Two knobs control the behaviour:
is_multipath - if set, the set of paths passed will be added to those
that already exist; otherwise the set will replace them.
is_add - add or remove the set
is_add=0, is_multipath=1 and an empty set results in deleting the route.
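A minimal sketch of the semantics of the two knobs (the types and helpers
below are illustrative placeholders, not the actual FIB code):
    static void
    route_update (route_t * r, path_t * new_paths, int is_add, int is_multipath)
    {
      /* Illustrative only.  Empty set + is_add=0 + is_multipath=1 deletes. */
      if (!is_add && is_multipath && vec_len (new_paths) == 0)
        {
          route_delete (r);                                /* placeholder */
          return;
        }
      if (is_add)
        r->paths = is_multipath ?
          paths_union (r->paths, new_paths) :              /* append to existing */
          new_paths;                                       /* replace the set    */
      else
        r->paths = paths_remove (r->paths, new_paths);     /* remove the set     */
    }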
It is also considerably faster to add multiple paths at once than one at a time:
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.11
100000 routes in .572240 secs, 174751.80 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.12
100000 routes in .528383 secs, 189256.54 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.13
100000 routes in .757131 secs, 132077.52 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.14
100000 routes in .878317 secs, 113854.12 routes/sec
vat# ip_route_add_del 1.1.1.1/32 count 100000 multipath via 10.10.10.11 via 10.10.10.12 via 10.10.10.13 via 10.10.10.14
100000 routes in .900212 secs, 111084.93 routes/sec
Change-Id: I416b93f7684745099c1adb0b33edac58c9339c1a
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
This patch saves 4.1 clocks/pkt (from 62.9 to 58.8 clocks/pkt) on Skylake.
Change-Id: I749a88a8fa6c78243441a89d6afcd04f106af3da
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;
  ... do some stuff...
  if ((error = vlib_call_init_function (init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);
and
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Packets should not egress on an iVXLAN tunnel if they arrived on one.
Change-Id: I9adca30252364b4878f99e254aebc73b70a5d4d6
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
A punt/exception path that provides:
1) clients that use the infra
2) clients can create punt reasons
3) clients can register to receive packets that are punted
for a given reason and have them sent to the desired node.
4) nodes which punt packets fill in the {reason, protocol} of the
buffer (in the meta-data) and send to the new node "punt-dispatch"
5) punt-dispatch sends packets to the registered nodes or drops them
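A rough sketch of how a client might use the infra (punt_reason_alloc,
punt_register and the punt.reason field are placeholder names for
illustration, not the actual API):
    static u32 my_reason;

    static clib_error_t *
    my_proto_punt_init (vlib_main_t * vm)
    {
      /* 2: create a punt reason ... */
      my_reason = punt_reason_alloc ("my-proto-unknown-port");  /* placeholder */
      /* 3: ... and register the node that should receive packets punted for it */
      punt_register (my_reason, "my-proto-input");              /* placeholder */
      return 0;
    }

    /* 4/5: a node that punts fills vnet_buffer(b)->punt.reason (placeholder
       field) and sends the buffer to "punt-dispatch", which forwards it to
       the node registered for that reason, or drops it if none is registered. */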
Change-Id: Ia4f144337f1387cbe585b4f375d0842aefffcde5
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I561fd187b4865345f3bff86b3d6e67b0f0e97557
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Change-Id: I4d73b712da911588d511a8401b73cdc3c66346fe
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Use memcpy instead of complex specific copy logic. This simplifies
the implementation and also improves performance slightly.
Also move the adjacency data from the tail to the head of the buffer, which
improves cache locality (header and data share the same cacheline).
Finally, fix VXLAN, which used to work around the vnet_rewrite logic.
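A minimal sketch of the memcpy-based prepend (vlib_buffer_advance,
vlib_buffer_get_current and clib_memcpy_fast are the usual vlib/clib helpers;
the rewrite/rewrite_len parameters stand in for wherever the adjacency keeps
its rewrite string):
    static_always_inline void
    prepend_rewrite (vlib_buffer_t * b, const u8 * rewrite, u8 rewrite_len)
    {
      /* Make room in front of the current data for the encap header ... */
      vlib_buffer_advance (b, -(word) rewrite_len);
      /* ... and copy the pre-built rewrite string with a plain memcpy. */
      clib_memcpy_fast (vlib_buffer_get_current (b), rewrite, rewrite_len);
    }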
Change-Id: I770ddad9846f7ee505aa99ad417e6a61d5cbbefa
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Change-Id: Ide23bb3d82024118214902850821a8184fe65dfc
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I8af7bca566ec7c9bd2b72529d49e04c6e649b44a
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
for multicast
Change-Id: I17caf3c5a2060de497c44655b66a15a2007f716b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ica88268fd6a6ee01da7e9219bb4e81f22ed2fd4b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ic65f686aaccaf8450732d88d7471b587faccaa9d
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I3722a1850f7a72e4382e351120c1514d7a1759b8
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Learning GBP endpoints over vxlan-gbp tunnels
Change-Id: I1db9fda5a16802d9ad8b4efd4e475614f3b21502
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
|
|
Change-Id: I3a47c71ad3e35df47d11fed6db95019a45f3015f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Change-Id: I11ec0d7048d36c30a97d437e5b0abd05f06ab0eb
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
This patch implements VXLAN with an extension for group-based policy support.
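A rough sketch of the header layout the extension uses (per
draft-smith-vxlan-group-policy; the struct below is illustrative, not
necessarily the exact vpp definition):
    /* Illustrative only: GBP reuses reserved VXLAN header bits for policy. */
    typedef CLIB_PACKED (struct
    {
      u8 flag;              /* G|R|R|R|I|R|R|R: G = group policy ID present  */
      u8 gpflags;           /* R|D|R|R|A|R|R|R: D = don't learn, A = applied */
      u16 group_policy_id;  /* 16-bit group based policy ID, network order   */
      u32 vni_reserved;     /* 24-bit VNI + 8 reserved bits, network order   */
    }) vxlan_gbp_header_sketch_t;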
Change-Id: I70405bf7332c02867286da8958d9652837edd3c2
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|