|
Configuring a vxlan tunnel with the CLI below and then assigning an IP address to
the vxlan tunnel causes VPP to crash immediately:
create vxlan tunnel src x.x.x.x dst y.y.y.y vni 1000 decap-next node ethernet-input l3
set interface ip address vxlan_tunnel0 z.z.z.z/24
It looks like when l3 mode is configured, the code calls the wrong function
to register the interface.
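For context, a minimal sketch of what the registration choice looks like; vnet_register_interface() and ethernet_register_interface() are real VPP APIs, but the surrounding structure and the variable names (t, vxm, hw_addr) are illustrative assumptions, not the actual patch:
  u32 hw_if_index;
  if (t->is_l3)
    /* plain L3 interface: register against the vxlan hw class */
    hw_if_index = vnet_register_interface (vnm, vxlan_device_class.index,
                                           t - vxm->tunnels /* dev instance */,
                                           vxlan_hw_class.index,
                                           t - vxm->tunnels /* hw instance */);
  else
    {
      /* L2 mode: the interface must look like an ethernet, so the
         ethernet registration helper is the one to use */
      clib_error_t *err =
        ethernet_register_interface (vnm, vxlan_device_class.index,
                                     t - vxm->tunnels, hw_addr /* MAC */,
                                     &hw_if_index, 0 /* flag change cb */);
      ASSERT (err == 0);
    }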
Type: fix
Fixes: 3e38422ab905d26ab1625c74268e30c94327ea54
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: Ie1a08efc028f37fb528a7dfd7048ff6836bb8ddc
|
|
Fix the multiarch versions of vxlan, geneve and friends. Ensure that the
main struct is correctly sized for all multiarch permutations.
Type: fix
Fixes: 290526e3c
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Change-Id: I7c4c435763a5dcb0c3b429cd4f361d373d480c03
|
|
When an mfib entry was created with both paths and entry_flags, the
entry flags were being ignored. If there were no paths then the
flags were passed into mfib_table_entry_update, but in the case where
the entry didn't exist and there were both paths and flags, the entry was
created within mfib_table_entry_paths_update(), which used a default
of MFIB_ENTRY_FLAG_NONE.
Pass the flags through into the mfib_table_entry_paths_update fn. All
existing callers other than the create case now pass in
MFIB_ENTRY_FLAG_NONE.
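A hedged sketch of the resulting call-site shape; the exact mfib_table_entry_paths_update() signature and the argument names used here are assumptions:
  mfib_table_entry_paths_update (fib_index, &pfx, MFIB_SOURCE_API,
                                 entry_flags, /* was implicitly
                                                 MFIB_ENTRY_FLAG_NONE */
                                 rpaths);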
Type: fix
Signed-off-by: Paul Atkins <patkins@graphiant.com>
Change-Id: I256375ba2fa863a62a88474ce1ea6bf2accdd456
|
|
Type: fix
Change-Id: I41455e1cdc62e7c0baa148630b0701b042f3b156
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Type: improvement
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Change-Id: Ic0fa4f83048a280a7d1b04198c0f903798562d2d
|
|
Type: refactor
Change-Id: Id10cbf52e8f2dd809080a228d8fa282308be84ac
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
use autogenerated code
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I163eefa86f3248260481181818d70fa1b6eaa220
|
|
- Generate copyright year and version
instead of using hard-coded data
Type: refactor
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I6058f5025323b3aa483f5df4a2c4371e27b5914e
|
|
Move info from vpp_papi_provider to .api/vpp_objects
Change-Id: Iaf46483fda2840dfec8d37e0b9262e1c9912be59
Type: test
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: fix
Partially revert the fix for the SEGV reported in VPP-1962
[commit a4b0541f64eef02fa0d003d8f831cfdeb45d3668]
This adds an is_l3 option to choose between L2 & L3
mode at tunnel creation time.
Change-Id: Ia2c91a1099074b7d23fc031b78ed0f68628eeabe
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
- Refactor make test code to be co-located with
the vpp feature source code
Type: test
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I66379dfe671628e39dfc9685c4fd70fa0e566f6f
|
|
Type: improvement
Signed-off-by: Drenfong Wong <drenfong.wang@intel.com>
Change-Id: Ia81aaa86fe071cbbed028cc85c5f3fa0f1940a0f
|
|
Type: refactor
This patch refactors the offload flags in vlib_buffer_t.
There are two main reasons behind this refactoring.
First, the offload flags are insufficient to represent outer
and inner header offloads. Second, room for these flags
in the first cacheline of vlib_buffer_t is also limited.
This patch introduces a generic offload flag in the first
cacheline, and detailed offload flags in the second cacheline
of the structure for performance optimization.
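A standalone illustration of the layout idea, using hypothetical names rather than the real vlib_buffer_t fields: the hot first cacheline keeps a single generic "offload requested" bit, and the detailed per-protocol bits live in the second cacheline, consulted only when that bit is set (assuming 64-byte cachelines).
  #include <stdint.h>

  #define BUF_F_OFFLOAD            (1u << 0)  /* generic flag, cacheline 0 */

  #define OFFLOAD_F_IP_CKSUM       (1u << 0)  /* detailed flags, cacheline 1 */
  #define OFFLOAD_F_UDP_CKSUM      (1u << 1)
  #define OFFLOAD_F_TCP_CKSUM      (1u << 2)
  #define OFFLOAD_F_OUTER_IP_CKSUM (1u << 3)

  typedef struct
  {
    uint32_t flags;       /* first cacheline: cheap generic check only */
    uint8_t pad0[60];     /* ...rest of the first cacheline... */
    uint32_t oflags;      /* second cacheline: detailed offload requests */
  } demo_buffer_t;

  static inline int
  buffer_needs_offload (const demo_buffer_t *b)
  {
    return (b->flags & BUF_F_OFFLOAD) != 0;
  }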
Change-Id: Icc363a142fb9208ec7113ab5bbfc8230181f6004
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: improvement
Signed-off-by: Artem Glazychev <artem.glazychev@xored.com>
Change-Id: Ie30d51ab4df5599b52f7335f863b930cd69dbdc1
|
|
A previous commit broke the naming of vxlan interfaces.
Type: fix
Fixes: a4b0541f6
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Change-Id: I5e304821be73547b4e47c35ad9632283f153830f
|
|
Type: fix
Replace vnet_register_interface with ethernet_register_interface
Fixes https://jira.fd.io/browse/VPP-1962
Signed-off-by: Ed Warnicke <hagbard@gmail.com>
Change-Id: I5f578fc416605429fe1e2b510ad49eb754451d40
Signed-off-by: Ed Warnicke <hagbard@gmail.com>
|
|
Type: fix
If a tunnel's source is not local, then post-encap VPP will attempt to
receive (via ip4-local) that packet, and things go wrong from there.
The fix is, when stacking the encap forwarding, to not accept a receive
DPO. This approach is taken, rather than rejecting bad tunnels, because
the 'local-ness' of the tunnel's source can change and we can't reject
tunnels that were once correctly configured but are no longer.
The user will quickly discover their mistake as traffic won't pass.
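A hedged sketch of the guard; dpo_copy(), drop_dpo_get(), DPO_RECEIVE and the dpoi_type field are real FIB/DPO APIs, but the surrounding restack helper is illustrative:
  static void
  encap_restack (dpo_id_t * encap_dpo, const dpo_id_t * fwd, dpo_proto_t proto)
  {
    if (DPO_RECEIVE == fwd->dpoi_type)
      /* the tunnel source is (currently) a local address: don't loop the
         encapsulated packet back into ip4-local, stack on a drop instead */
      dpo_copy (encap_dpo, drop_dpo_get (proto));
    else
      dpo_copy (encap_dpo, fwd);
  }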
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I46198422e321606e8baba003112e978a526b4c2f
|
|
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id13f33843b230a1d169560742c4f7b2dc17d8718
|
|
Type: style
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I26a19e42076e031ec5399d5ca05cb49fd6fbe1cd
|
|
This patch aims to improve decap performance by reducing expensive
hash_get calls as much as possible, using AVX512 on XEON,
e.g. for vxlan, vxlan_gpe, geneve, gtpu.
In the existing code, if the vtep4 of the current packet matches the last
vtep4_key_t, the expensive hash computation can be avoided and the
code returns directly.
This patch greatly improves the multiple-flows tunnel decap case by
leveraging a 512-bit vector register on XEON accommodating 8 vtep4_keys.
It increases the likelihood of avoiding unnecessary hash computation:
the lookup is skipped whenever the hash key of the current packet hits
any one of the 8 entries in the 512-bit cache.
The oldest element in vtep4_cache_t is updated in round-robin order.
vlib_get_buffers is also leveraged in the meanwhile.
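A standalone scalar illustration of the caching idea; the real code performs the 8-way compare with a single 512-bit vector instruction, and the actual vtep4_key_t/vtep4_cache_t types differ:
  #include <stdint.h>

  typedef struct
  {
    uint32_t keys[8];  /* the last 8 vtep4 keys seen */
    uint32_t next;     /* round-robin replacement cursor */
  } vtep4_cache_demo_t;

  /* returns 1 on a cache hit (hash lookup can be skipped), 0 on a miss */
  static int
  vtep4_cache_check (vtep4_cache_demo_t * c, uint32_t key)
  {
    for (int i = 0; i < 8; i++)  /* one vector compare in the real code */
      if (c->keys[i] == key)
        return 1;

    /* miss: the caller does the expensive hash_get (); remember the key,
       evicting the oldest slot in round-robin order */
    c->keys[c->next] = key;
    c->next = (c->next + 1) & 7;
    return 0;
  }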
Type: improvement
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Signed-off-by: Junfeng Wang <drenfong.wang@intel.com>
Change-Id: I313103202bd76f2dd638cd942554721b37ddad60
|
|
This is a code refactor of the vnet/flow infra and the dpdk_plugin flow
implementation. The main changes in the refactor are:
1. Added two base flow types: VNET_FLOW_TYPE_IP4 and VNET_FLOW_TYPE_IP6
2. All the other flows are derived from these base flow types
   (see the sketch after this list)
3. Removed some flow types that are not currently supported by
   the hardware, and that VPP won't leverage either:
   IP4_GTPU_IP4, IP4_GTPU_IP6, IP6_GTPC, IP6_GTPU,
   IP6_GTPU_IP4, IP6_GTPU_IP6
4. Re-implemented the vnet/flow CLI as well as the dpdk_plugin
   implementation
5. Refined the CLI prompt
6. Refined the display info in the "show flow entry" command
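A hypothetical illustration of what "derived from a base flow type" means here; the real definitions in vnet/flow differ in detail:
  typedef struct
  {
    uint32_t src_addr, dst_addr;  /* IPv4 addresses */
    uint8_t protocol;
  } demo_flow_ip4_t;              /* base flow type */

  typedef struct
  {
    demo_flow_ip4_t ip4;          /* base type embedded first */
    uint16_t src_port, dst_port;  /* derived type adds the L4 tuple */
  } demo_flow_ip4_n_tuple_t;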
Type: refactor
Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Change-Id: Ica5e61c5881adc73b28335fd83e36ec1cb420c96
|
|
If ((A | B) == false), then both A and B are false, so in
the following code
if (PREDICT_FALSE (!good_udp1))
  {
    if ((flags1 & VNET_BUFFER_F_L4_CHECKSUM_COMPUTED) == 0)
      {
        ...
      }
  }
the inner check if ((flags1 & VNET_BUFFER_F_L4_CHECKSUM_COMPUTED) == 0) is
always true whenever it is reached. Remove it.
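A standalone illustration of the reasoning, not the actual decap code: assuming good_udp is derived from an OR that includes the COMPUTED flag, then inside "if (!good_udp)" that flag is already known to be clear, so the inner test is always true.
  #include <stdio.h>

  #define F_L4_CKSUM_COMPUTED (1u << 0)
  #define F_L4_CKSUM_CORRECT  (1u << 1)

  int
  main (void)
  {
    unsigned flags = 0;  /* neither flag set */
    int good_udp = (flags & (F_L4_CKSUM_COMPUTED | F_L4_CKSUM_CORRECT)) != 0;

    if (!good_udp)
      {
        /* redundant: !good_udp already implies the COMPUTED bit is clear */
        if ((flags & F_L4_CKSUM_COMPUTED) == 0)
          printf ("recompute the checksum\n");
      }
    return 0;
  }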
Type: improvement
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Change-Id: I6bd1e9340c7a00089fc1c9ae49773add832d309e
|
|
Type: fix
Change-Id: I1da4ace9f3e548af4b5b3373d695e4214c5df2ff
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: style
Change-Id: Id363ccd0e51c61388fb45ef10685929f629cccbd
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
vlib_get_buffers can save about 1.2 clocks per packet for the vxlan encap
graph node on Skylake.
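A sketch of the pattern, assuming VPP's vlib API (vlib_get_buffers(), vlib_frame_vector_args() and VLIB_FRAME_SIZE are real); the node context is illustrative:
  u32 *from = vlib_frame_vector_args (frame);
  u32 n_left_from = frame->n_vectors;
  vlib_buffer_t *bufs[VLIB_FRAME_SIZE], **b = bufs;

  /* translate all buffer indices to pointers in one call, instead of a
     per-packet vlib_get_buffer () inside the loop */
  vlib_get_buffers (vm, from, bufs, n_left_from);

  while (n_left_from > 0)
    {
      vlib_buffer_t *b0 = b[0];
      /* ... encap work on b0 ... */
      b += 1;
      n_left_from -= 1;
    }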
Type: improvement
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Change-Id: I9cad3211883de117c1b84324e8dfad38879de2d2
|
|
Use consistent API types.
Type: fix
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
Change-Id: I7f6f37ec6eed780322e2488d6eb0f5681945ba09
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
The bypass node MUST NOT intercept a packet if the destination IP doesn't
match a local address. However, IP address interpretation depends on the
VRF, hence the bypass node must take that into account.
This patch also factors out common VTEP management and checking code.
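A standalone sketch of the idea, with hypothetical names (the factored-out helpers in the real patch differ): a VTEP address is only "local" within a given VRF, so the check must be keyed by (fib_index, address), not by the address alone.
  #include <stdint.h>

  typedef struct
  {
    uint32_t fib_index;  /* the VRF the packet is being looked up in */
    uint32_t ip4_addr;   /* a configured local VTEP address */
  } vtep4_demo_key_t;

  /* bypass only when (fib_index, dst_addr) matches a known local VTEP */
  static int
  is_local_vtep (const vtep4_demo_key_t * known, int n_known,
                 uint32_t fib_index, uint32_t dst_addr)
  {
    for (int i = 0; i < n_known; i++)
      if (known[i].fib_index == fib_index && known[i].ip4_addr == dst_addr)
        return 1;
    return 0;
  }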
Type: improvement
Signed-off-by: Nick Zavaritsky <nick.zavaritsky@emnify.com>
Change-Id: I5665d94882bbf45d15f8da140c7ada528ec7fa94
|
|
Add an ALWAYS_ASSERT (...) macro, to (a) shut up coverity, and (b)
check the indicated condition in production images.
As in:
p = hash_get(...);
ALWAYS_ASSERT(p); /* was ASSERT(p) */
elt = pool_elt_at_index(pool, p[0]);
This may not be the best way to handle a specific case, but failure to
check return values at all followed by e.g. a pointer dereference
isn't ok.
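A simplified standalone sketch of such a macro, not the vppinfra definition: unlike ASSERT, it is never compiled out, so the condition is still checked in production images.
  #include <stdio.h>
  #include <stdlib.h>

  #define ALWAYS_ASSERT_DEMO(truth)                            \
    do                                                         \
      {                                                        \
        if (!(truth))                                          \
          {                                                    \
            fprintf (stderr, "%s:%d assertion `%s' failed\n",  \
                     __FILE__, __LINE__, #truth);              \
            abort ();                                          \
          }                                                    \
      }                                                        \
    while (0)

  int
  main (void)
  {
    int *p = NULL;                    /* stands in for a failed hash_get () */
    ALWAYS_ASSERT_DEMO (p != NULL);   /* aborts instead of dereferencing NULL */
    return 0;
  }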
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ia97c641cefcfb7ea7d77ea5a55ed4afea0345acb
|
|
Type: docs
Signed-off-by: John DeNisco <jdenisco@cisco.com>
Change-Id: I7280e5c5ad10a66c0787a5282291a2ef000bff5f
|
|
Type: docs
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: I4372195121e05af671a3f48b1c2796cd0132b279
|
|
Type: fix
Several tunnel encapsulations use UDP as the outer header, with the UDP
src port set from the inner header flow hash, e.g. gtpu, geneve, vxlan,
vxlan-gbp.
Since the flow hash of the inner header has already been calculated,
keeping it in vnet_buffer(b)->ip.flow_hash saves the load-balance node
the work of selecting ECMP uplinks.
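A hedged sketch of the encap-side change; vnet_buffer()->ip.flow_hash and vnet_l2_compute_flow_hash() are real VPP APIs, but the surrounding variables are assumed and this is not the exact patch:
  u32 flow_hash0 = vnet_l2_compute_flow_hash (b0); /* already computed to
                                                      spread the outer UDP
                                                      src port */
  vnet_buffer (b0)->ip.flow_hash = flow_hash0;     /* new: keep it so the
                                                      load-balance node can
                                                      reuse it for ECMP */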
Change-Id: I0e4e2b27178f4fcc5785e221d6d1f3e8747d0d59
Signed-off-by: Shawn Ji <xiaji@tethrnet.com>
|
|
Type: fix
Change-Id: Id53eb6ed15f270d747b9831a7b585cbafe515dd2
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I2272521d6e69edcd385ef684af6dd4eea5eaa953
|
|
Type: fix
Since VXLAN hw offload bypasses the ethernet-input node, the data offset
needs to be adjusted accordingly.
In the original code, current_data is 0 on arrival at the vxlan-flow-input
node (because there is no graph node before it except dpdk-input), so this
code block cannot find the correct vxlan header:
enum
{ payload_offset = sizeof (ip4_vxlan_header_t) };
vlib_buffer_advance (b0, payload_offset);
See the code in src/vnet/vxlan/decap.c, function vxlan4_flow_input_node.
This patch fixes the issue.
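A hedged sketch of one plausible shape of the adjustment; ethernet_header_t, ip4_vxlan_header_t and vlib_buffer_advance() are real, but whether the actual patch expresses it exactly this way is an assumption:
  /* ethernet-input was bypassed, so current_data still points at the
     Ethernet header; the advance must cover it as well */
  enum
  { payload_offset = sizeof (ethernet_header_t) + sizeof (ip4_vxlan_header_t) };
  vlib_buffer_advance (b0, payload_offset);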
Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Change-Id: Iab4af7a7dc3b69a117a4c9ea1c59662669a6438c
|
|
This is a preparation step for migrating NAT to use SVR (shallow virtual
reassembly) to conserve space in vnet_buffer. Since max rewrite length
is currently pre-data size (128), u8 is sufficient to hold that value.
Type: refactor
Change-Id: I5374bb396e178245b870cb0bbf1370d2a54230bc
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|
|
Fix UDP over IP6 checksum offload setup for VXLAN and VXLAN-GBP.
Type: fix
Signed-off-by: John Lo <loj@cisco.com>
Change-Id: If110467a68234d8eed941869a2a03735f339dc33
|
|
Instead of all clients directly RR-sourcing the entry they are tracking,
use a dedicated 'tracker' object. This tracker object is an entry
delegate and a child of the entry. The clients are then children of the
tracker.
The benefit of this approach is that a new client tracking the entry no
longer RR-sources it. When an entry is sourced, all its children are
updated, so without the tracker, adding new clients is O(n^2). With the
tracker as indirection, the entry is sourced only once.
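A conceptual standalone sketch of the indirection, using hypothetical types rather than the real fib entry-delegate implementation:
  #include <stdint.h>

  typedef struct
  {
    uint32_t fib_entry_index;  /* the tracked entry: RR-sourced exactly once,
                                  by the tracker, not by each client */
    uint32_t n_clients;        /* clients are children of the tracker */
  } demo_entry_tracker_t;

  /* a new client attaches to the tracker; the entry itself is untouched */
  static uint32_t
  demo_entry_track (demo_entry_tracker_t * t)
  {
    return t->n_clients++;     /* returned index identifies the client */
  }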
Type: feature
Change-Id: I5b80bdda6c02057152e5f721e580e786cd840a3b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Enhance the route add/del APIs to take a set of paths rather than just one.
Most unicast routing protocols calculate all the available paths in one
run of the algorithm, so updating all the paths at once is beneficial for
the client.
Two knobs control the behaviour:
is_multipath - if set, the set of paths passed will be added to those
that already exist, otherwise the set will replace them.
is_add - add or remove the set
is_add=0, is_multipath=1 and an empty set results in deleting the route.
It is also considerably faster to add multiple paths at once, than one at a time:
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.11
100000 routes in .572240 secs, 174751.80 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.12
100000 routes in .528383 secs, 189256.54 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.13
100000 routes in .757131 secs, 132077.52 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.14
100000 routes in .878317 secs, 113854.12 routes/sec
vat# ip_route_add_del 1.1.1.1/32 count 100000 multipath via 10.10.10.11 via 10.10.10.12 via 10.10.10.13 via 10.10.10.14
100000 routes in .900212 secs, 111084.93 routes/sec
Change-Id: I416b93f7684745099c1adb0b33edac58c9339c1a
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
plugins:
- ipfixcollector
vnet:
- geneve
- vxlan_gpe
- vxlan
Change-Id: I69a8b4017ee6990f2b4874fe3e94c4520bde7101
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Use memcpy instead of complex specific copy logic. This simplifies
the implementation and also improves performance slightly.
Also move the adjacency data from the tail to the head of the buffer,
which improves cache locality (header and data share the same cacheline).
Finally, fix VXLAN, which used to work around the vnet_rewrite logic.
Change-Id: I770ddad9846f7ee505aa99ad417e6a61d5cbbefa
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Change-Id: Ide23bb3d82024118214902850821a8184fe65dfc
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I68cc509b594b09751ff5e0e09bbca187a4a88edd
Signed-off-by: Jon Loeliger <jdl@netgate.com>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
For vxlan_encap, the code touches the memory area before the field "data"
in struct vlib_buffer_t, but so far it is not yet prefetched into cache
for this graph node.
After applying the patch, 2~3 cycles per pkt for vxlan4_encap can be
saved on Haswell. It will bring a lot of benefits on the DVN platform too.
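A sketch of the prefetch pattern, assuming VPP's helpers (vlib_prefetch_buffer_header(), CLIB_PREFETCH() and CLIB_CACHE_LINE_BYTES are real macros); the surrounding quad-loop context is illustrative:
  /* while packets 0/1 are being encapsulated, warm up the buffer metadata
     and the pre-data area just before b->data for packets 2/3 */
  vlib_prefetch_buffer_header (b2, LOAD);
  CLIB_PREFETCH (b2->data - CLIB_CACHE_LINE_BYTES, CLIB_CACHE_LINE_BYTES, STORE);
  vlib_prefetch_buffer_header (b3, LOAD);
  CLIB_PREFETCH (b3->data - CLIB_CACHE_LINE_BYTES, CLIB_CACHE_LINE_BYTES, STORE);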
Change-Id: I26d8c57fb3d2415726be5367117d73eb715e35ad
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I3ef0725684bcb8ea526abe0ce62562b35a0070f5
Signed-off-by: Eyal Bari <ebari@cisco.com>
(cherry picked from commit 0d87894bf279a4678cfca6cc438583090b166f85)
|
|
Change-Id: I70fb7394f85b26f7e632d74fc31ef83597efdd16
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Store the local/remote addresses + vrf + vni in the hash key, and store
the complete decap info in the hash value (sw_if_index + next_index +
error).
This removes the need to access the tunnel object when matching both
unicast and mcast.
However, for mcast handling it requires 3 hash lookups:
* one failed unicast lookup (by src+dst addrs)
* lookup by mcast (dst) addr
* unicast lookup (tunnel local ip as dst + pkt's src addr)
where previously it needed 2:
* lookup by src to find the unicast tunnel + compare dst to the local addr
(failing for mcast)
* lookup by mcast to find the mcast tunnel
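A standalone sketch of the key/value split described above; the real implementation packs these fields into bihash key/value words and the names here are illustrative:
  #include <stdint.h>

  typedef struct
  {
    uint32_t src_addr;    /* tunnel remote (packet source) */
    uint32_t dst_addr;    /* tunnel local or mcast group (packet dest) */
    uint32_t fib_index;   /* vrf */
    uint32_t vni;
  } demo_vxlan4_key_t;

  typedef struct
  {
    uint32_t sw_if_index; /* tunnel interface to attribute the packet to */
    uint16_t next_index;  /* decap next node */
    uint16_t error;       /* pre-computed error/counter index */
  } demo_vxlan4_value_t;  /* everything decap needs, no tunnel dereference */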
Change-Id: I7a3485d130a54194b8f7e2df0431258db36eceeb
Signed-off-by: Eyal Bari <ebari@cisco.com>
|