__unused is a clang keyword, so a struct member with this name trips the build
when using clang. Instead, call the unused padding 'pad', which should make the
purpose clear even if the usage is not.
Type: improvement
Change-Id: I0abae34841651be1ef6b7d94864f0dc8185f0733
Signed-off-by: Tom Jones <thj@freebsd.org>
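A minimal sketch of the rename, with illustrative stand-in types (not the actual
vnet metadata layout): on clang/FreeBSD builds the headers expand __unused as an
attribute macro, so a member of that name fails to parse, while a neutral name
such as 'pad' does not.
#include <stdint.h>

typedef struct
{
  uint32_t sw_if_index;
  uint8_t pad[4]; /* was '__unused', which clang/FreeBSD headers treat as a macro */
} opaque_member_sketch_t;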
The classify hash used to be stored as a u64 in the buffer metadata; use 32 bits
instead:
- on almost all of our supported architectures (x86 and arm64) we use crc32c
  intrinsics to compute the final hash, so we really get a 32-bit hash
- the hash itself is only used to compute a 32-bit bucket index by masking the
  upper bits, so the higher 32 bits are always discarded (see the sketch below)
- this allows the l2 classify buffer metadata padding to grow so that it no
  longer overlaps with the ip fib_index metadata. That overlap is an issue when
  using the 'set metadata' action in the ip ACL node, which updates both fields
Type: fix
Change-Id: I5d35bdae97b96c3cae534e859b63950fb500ff50
Signed-off-by: Benoît Ganne <bganne@cisco.com>
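A minimal sketch of the masking argument above, assuming a power-of-two bucket
count; names are illustrative and this is not the classifier code itself:
#include <stdint.h>
#include <stdio.h>

/* Only the low bits of the hash select the bucket, so a 32-bit hash
 * carries all the information a u64 hash ever contributed. */
static uint32_t
bucket_from_hash (uint32_t hash, uint32_t n_buckets) /* n_buckets: power of 2 */
{
  return hash & (n_buckets - 1);
}

int
main (void)
{
  uint32_t hash = 0x9e3779b9u; /* e.g. a crc32c result */
  printf ("bucket = %u\n", (unsigned) bucket_from_hash (hash, 1024));
  return 0;
}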
Type: refactor
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ibb6d19de053c306e9758dbfa827ab7bcab5de856
Type: refactor
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I92496501360ee073795206bde87f4731a5ce074c
Check if L4 headers are truncated and if so, set a flag for (future)
consumers instead of reading/writing garbage data.
Type: fix
Fixes: de34c35fc73226943538149fae9dbc5cfbdc6e75
Change-Id: I0b656ec103a11c356b98a6f36cad98536a78d1dc
Signed-off-by: Klement Sekera <ksekera@cisco.com>
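A sketch of the kind of check described above, with simplified stand-in types
rather than the actual vlib_buffer_t fields:
#include <stddef.h>
#include <stdint.h>

#define F_L4_HDR_TRUNCATED (1u << 0) /* stand-in for the new flag */

typedef struct
{
  uint32_t current_length; /* bytes actually present in this buffer */
  uint32_t flags;
} buf_len_sketch_t;

static void
check_l4_header (buf_len_sketch_t *b, size_t l4_offset, size_t l4_hdr_len)
{
  /* If the header does not fit, flag it instead of touching garbage. */
  if (l4_offset + l4_hdr_len > b->current_length)
    b->flags |= F_L4_HDR_TRUNCATED;
}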
There is no reason to enforce the vnet rewrite size to be equal to pre_data.
Moreover, since the vnet rewrite size is now saved as a u8, this limits
pre_data to 192 bytes.
Type: fix
Fixes: 7dbf9a1a4fff5c3b20ad972289e49e3f88e82f2d
Change-Id: I3f848aa905ea4a794f3b4aa62c929a481261a3f1
Signed-off-by: Benoît Ganne <bganne@cisco.com>
Type: improvement
This adds a new ip[46]-receive node, sibling of ip[46]-local. Its goal is to
set vnet_buffer (b)->ip.rx_sw_if_index to the sw_if_index of the local
interface. Dependent nodes further down the line (e.g. the hoststack) then set
sw_if_idx[rx] to this value, so that we know which local interface received
the packet.
The TCP issue this fixes: on accept, we were setting tc->sw_if_index to the
source sw_if_index. We should use the dest sw_if_index instead, so that
packets coming back on this connection have the right source sw_if_index, and
also set it in the tx-ed packet.
Change-Id: I569ed673e15c21e71f365c3ad45439b05bd14a9f
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
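An illustrative sketch of the metadata flow described above, using simplified
stand-in types (not the real vnet_buffer macros): the receive node records the
local interface, and a downstream consumer such as the host stack copies it
into the RX slot.
#include <stdint.h>

typedef struct
{
  uint32_t sw_if_index[2]; /* [0] = RX, [1] = TX */
  struct
  {
    uint32_t rx_sw_if_index; /* set by the ip[46]-receive node */
  } ip;
} buffer_md_sketch_t;

static void
ip_receive_node (buffer_md_sketch_t *b, uint32_t local_sw_if_index)
{
  b->ip.rx_sw_if_index = local_sw_if_index;
}

static void
hoststack_accept (buffer_md_sketch_t *b)
{
  /* Use the destination (local) interface, not the source one. */
  b->sw_if_index[0] = b->ip.rx_sw_if_index;
}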
- add format_vnet_buffer and format_vnet_buffer_no_chain to mirror
format_vlib_buffer and format_vlib_buffer_no_chain
- format_vnet_buffer used to be the "no chain" version, replace all of
its current use with the corresponding format_vnet_buffer_no_chain
- add a function to dump vnet buffer details from gdb
Type: improvement
Change-Id: I143ce845f80e7ef937ea33a557b6e3b5988c5b8f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
Type: improvement
Change-Id: Iaad50b2044702c46eff287708dfcb24e61022104
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Type: fix
Change-Id: I433fe3799975fe3ba00fa30226f6e8dae34e88fc
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Type: improvement
Some tests, e.g. ipsec, see a performance regression when the offload flags are
moved to the 2nd cacheline. This patch moves them back to the 1st cacheline.
Change-Id: I6ead45ff6d2c467b0d248f409e27c2ba31758741
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Trajectory trace has been broken for a while because we used to save the buffer
trajectory in a vector pointed to from opaque2. This does not work well when
opaque2 is copied (e.g. because of a clone), as the 2 buffers end up sharing
the same vector.
Instead, dedicate a full cacheline in the buffer metadata when trajectory
tracing is compiled in. No dynamic allocation, no sharing, no tears.
Type: refactor
Change-Id: I6a028ca1b48d38f393a36979e5e452c2dd48ad3f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
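A sketch of the approach, with illustrative names and sizes: a fixed,
cacheline-sized array inside the metadata replaces the heap-allocated vector,
so cloning a buffer copies the trace instead of sharing it.
#include <stdint.h>

#define TRAJECTORY_MAX 31 /* 2 + 31 * 2 bytes = one 64-byte cacheline */

typedef struct
{
  uint16_t trajectory_nb;                    /* nodes recorded so far */
  uint16_t trajectory_trace[TRAJECTORY_MAX]; /* node indices, stored inline */
} trajectory_sketch_t;

static void
trajectory_push (trajectory_sketch_t *t, uint16_t node_index)
{
  if (t->trajectory_nb < TRAJECTORY_MAX)
    t->trajectory_trace[t->trajectory_nb++] = node_index;
}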
Make sure the packet lands on the right thread in the dst NAT case.
Type: fix
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I0ec4e4c2bb3fa80ff73fac588c36d36420ba68fa
Type: refactor
Change-Id: I02a527f57853ebff797f0d85761b71127916d6ce
Signed-off-by: Damjan Marion <damarion@cisco.com>
When a buffer is freed and re-allocated for a new packet, opaque2 is
not reset, so the offload flags can be set to a stale value.
Make sure the offload flags are reset to the current value on 1st set.
Type: fix
Fixes: 6809538e646bf86c000dc1faba60b0a4157ad898
Change-Id: I4048febedf25b9995dbd080a11495ee7dbe59153
Signed-off-by: Benoît Ganne <bganne@cisco.com>
Type: refactor
This patch refactors the offload flags in vlib_buffer_t.
There are two main reasons behind this refactoring. First, the offload flags
are insufficient to represent outer and inner header offloads. Second, room
for these flags in the first cacheline of vlib_buffer_t is limited.
This patch introduces a generic offload flag in the first cacheline, and
detailed offload flags in the 2nd cacheline of the structure for performance.
Change-Id: Icc363a142fb9208ec7113ab5bbfc8230181f6004
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
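A minimal sketch of the two-level scheme, with invented flag names rather than
the actual vlib_buffer_t definitions: a single generic bit stays with the hot
first-cacheline flags, and the detailed bits live in second-cacheline metadata
that is only read when the generic bit is set.
#include <stdint.h>

#define F_OFFLOAD_REQUESTED (1u << 0) /* generic flag, 1st cacheline */

#define OFLAG_IP_CKSUM  (1u << 0) /* detailed flags, 2nd cacheline */
#define OFLAG_TCP_CKSUM (1u << 1)
#define OFLAG_UDP_CKSUM (1u << 2)

typedef struct
{
  uint32_t flags;  /* hot: checked for every packet */
  uint32_t oflags; /* cold: consulted only when F_OFFLOAD_REQUESTED is set */
} buf_offload_sketch_t;

static void
request_tcp_cksum_offload (buf_offload_sketch_t *b)
{
  b->flags |= F_OFFLOAD_REQUESTED;
  b->oflags |= OFLAG_TCP_CKSUM;
}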
Type: improvement
Negates the need to load the SA in the handoff node.
Don't prefetch the packet data; it's not needed.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I340472dc437f050cc1c3c11dfeb47ab09c609624
Avoid doing inter-thread reads without locks by doing a handoff before the
destination address rewrite. The destination address is read from a session
which is possibly owned by a different thread. By splitting the work into two
parts with a handoff in the middle, we can do both in a thread-safe way.
Type: improvement
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I1c50d188393a610f5564fa230c75771a8065f273
This change introduces a flow concept to endpoint-dependent NAT. Instead of
having a session and a plethora of special cases in the code for e.g.
hairpinning, twice-nat and others, figure all of this out once and store it in
the flow logic. Every flow has a match part and a rewrite part. This unifies
all the NAT packet processing cases into one: match a flow and rewrite the
packet based on that flow. It also provides a cure for the hairpinning
dilemma, where one part of the flow is handled on one worker and the other
part on a different one. These cases are also sped up because a destination
address lookup is no longer required every single time just to rewrite the
source NAT; that is now part of the flow rewrite logic.
Type: improvement
Change-Id: Ib60c992e16792ea4d4129bc10202ebb99a73b5be
Signed-off-by: Klement Sekera <ksekera@cisco.com>
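An illustrative sketch of the "match + rewrite" idea; the fields and names are
invented for the example and are not the NAT plugin's actual structures:
#include <stdint.h>

typedef struct /* match part: identifies the flow */
{
  uint32_t saddr, daddr;
  uint16_t sport, dport;
  uint8_t proto;
} flow_match_sketch_t;

typedef struct /* rewrite part: what to apply on a hit */
{
  uint32_t new_saddr, new_daddr;
  uint16_t new_sport, new_dport;
} flow_rewrite_sketch_t;

typedef struct
{
  flow_match_sketch_t match;     /* looked up once per packet */
  flow_rewrite_sketch_t rewrite; /* applied unconditionally on a hit */
} nat_flow_sketch_t;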
A special case: when an out2in packet needs to be handed off to another worker
thread, we are not able to determine which thread it belongs to in the first
pass through the nat handoff node. These packets need to go through the out2in
slowpath before we are able to tell where to hand them off.
Type: fix
Ticket: VPP-1941
Change-Id: I7173bda970ce6a91d81f48fc72aa2457586a076f
Signed-off-by: Filip Varga <fivarga@cisco.com>
By storing the thread and session index in the hash table we are able to skip
multiple hash lookups in the multi-worker scenario, which were previously used
for handoff. Also, by storing the session index in vnet_buffer2, we can avoid
repeating the lookup after handoff.
Type: improvement
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I406fb12f4e2dd8f4a5ca5d83d59dbc37e1af9abf
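A minimal sketch of the packing trick, assuming the hash table stores a single
64-bit value per entry; the helper names are illustrative:
#include <stdint.h>

static inline uint64_t
kv_value_make (uint32_t thread_index, uint32_t session_index)
{
  return ((uint64_t) thread_index << 32) | session_index;
}

static inline uint32_t
kv_value_thread_index (uint64_t value)
{
  return (uint32_t) (value >> 32);
}

static inline uint32_t
kv_value_session_index (uint64_t value)
{
  return (uint32_t) value;
}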
Type: fix
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I50df248afb3f6b46c49e6695b3f124cfd584f016
By aligning vnet_buffer_opaque.ip.save_rewrite_length and
vnet_buffer_opaque.ip.reass.save_rewrite_length we prevent the shallow virtual
reassembly code from overwriting save_rewrite_length, allowing other features
down the pipe to rely on this value.
A static assert is added to guard this alignment.
Type: fix
Fixes: f126e746fc01c75bc99329d10ce9127b26b23814
Change-Id: Ie7c7f3abc2a221bbcf2830c0f006a4368088b342
Signed-off-by: Klement Sekera <ksekera@cisco.com>
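A hedged sketch of the guard, with simplified stand-in fields (not the real
vnet_buffer_opaque_t): the two views of save_rewrite_length overlap in a union,
and a compile-time assert keeps their offsets identical so one feature cannot
clobber the other's copy.
#include <stddef.h>
#include <stdint.h>

typedef union
{
  struct
  {
    uint32_t fib_index;
    uint8_t save_rewrite_length;
  } ip;
  struct
  {
    uint32_t pad_to_match; /* keeps the offsets lined up */
    uint8_t save_rewrite_length;
    uint16_t l4_src_port; /* written by shallow virtual reassembly */
  } reass;
} opaque_sketch_t;

_Static_assert (offsetof (opaque_sketch_t, ip.save_rewrite_length) ==
                  offsetof (opaque_sketch_t, reass.save_rewrite_length),
                "save_rewrite_length views must overlap exactly");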
Remove NAT's implementation of shallow virtual reassembly with
corresponding CLIs, APIs & tests. Replace with standalone shallow
virtual reassembly provided by ipX-sv-reass* nodes.
Type: refactor
Change-Id: I7e6c7487a5a500d591f6871474a359e0993e59b6
Signed-off-by: Klement Sekera <ksekera@cisco.com>
This is a preparation step for migrating NAT to use SVR (shallow virtual
reassembly) to conserve space in vnet_buffer. Since max rewrite length
is currently pre-data size (128), u8 is sufficient to hold that value.
Type: refactor
Change-Id: I5374bb396e178245b870cb0bbf1370d2a54230bc
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Avoid a corrupt next_index in vnet_buffer by moving the input and output
variables into separate memory locations instead of sharing a common space.
Type: fix
Fixes: de34c35fc73226943538149fae9dbc5cfbdc6e75
Change-Id: I34471fc6d0c8487535fac21349e688f398934f6d
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Type: fix
Signed-off-by: Rajesh Goel <rajegoel@cisco.com>
Change-Id: Ie4372c5cf58ab215cdec5ce56f8a994daaba2844
Type: feature
Change-Id: Ibc8334e26c7e6f6120696c3e313b6e11d73dab99
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Type: fix
Ticket: VPP-1721
Change-Id: I7a5d4f1440048ddc9f599ac11d06e5a7df20440e
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Removed the sctp buffer metadata from vnet/buffer.h and added it to the plugin.
Added registration APIs for plugin-based vlib_buffer_opaque / opaque2 decoders,
used by "pcap dispatch trace ..." for display in the wireshark dissector.
Type: refactor
Not actively maintained.
Change-Id: Ie4cb6ba66f68b3b3a7d7d2c63c917fdccf994371
Signed-off-by: Florin Coras <fcoras@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
Type: feature
Change-Id: I0b72954a6ae6a05abe0761cb4f227072863f127b
Signed-off-by: Florin Coras <fcoras@cisco.com>
please consult the new tunnel proposal at:
https://wiki.fd.io/view/VPP/IPSec
Type: feature
Change-Id: I52857fc92ae068b85f59be08bdbea1bd5932e291
Signed-off-by: Neale Ranns <nranns@cisco.com>
This change is made to fix a crash: the is_feature flag semantics turn out to
be different from the "custom app code" semantics. Introduce a flag which
custom plugins/apps can use instead of tying that code to the is_feature flag.
Change-Id: Ief5898711e68529f9306cfac54c4dc9b3650f9e3
Ticket: N/A
Type: fix
Fixes: 21aa8f1022590b8b5caf819b4bbd485de0f1dfe5
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: Ib4cdbe8a6a1d10a643941c13aa0acbed410f876c
Type: Feature
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ib9f98fba5a724480ca95f11a762002c53e08df70
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: Icdcbac7453baa837a9c0c4a2401dff4a6aa6cba0
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I154e18f22ec7708127b8ade98e80546ab1dcd05b
Signed-off-by: Neale Ranns <nranns@cisco.com>
In the following two scenarios:
1. when fragments arrive on multiple interfaces and end up in different threads
2. when fragments arrive on the same interface but in different queues, because
   interface RSS does not have the ability to place fragments in the right queues
Change-Id: I9f9a8a4085692055ef6823d634c8e19ff3daea05
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
This commit adds a "gso" parameter to the existing "create tap..." CLI, and a
"no-gso" parameter for compatibility with the future, when/if the defaults
change.
It makes use of the lowest bit of the "tap_flags" field in the API call
in order to allow creation of GSO interfaces via API as well.
It does the necessary syscalls to enable the GSO
and checksum offload support on the kernel side and sets two flags
on the interface: virtio-specific virtio_if_t.gso_enabled,
and vnet_hw_interface_t.flags & VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO.
The first one, if enabled, triggers the marking of the GSO-encapsulated
packets on ingress with VNET_BUFFER_F_GSO flag, and
setting vnet_buffer2(b)->gso_size to the desired L4 payload size.
VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO determines the egress packet
processing in interface-output for such packets:
When the flag is set, they are sent out almost as usual (just taking
care to set the vnet header for virtio).
When the flag is not enabled (the case for most interfaces),
the egress path performs the re-segmentation such that
the L4 payload of the transmitted packets equals gso_size.
The operations in the datapath are enabled only when there is at least
one GSO-compatible interface in the system - this is done by tracking
the count in interface_main.gso_interface_count. This way the impact
of conditional checks for the setups that do not use GSO is minimized.
"show tap" CLI shows the state of the GSO flag on the interface, and
the total count of GSO-enabled interfaces (which is used to enable
the GSO-related processing in the packet path).
This commit lacks IPv6 extension header traversal support of any kind -
the L4 payload is assumed to follow the IPv6 header. Also it performs
the offloads only for TCP (TSO - TCP segmentation offload).
The UDP fragmentation offload (UFO) is not part of it.
For debug purposes it also adds the debug CLI:
"set tap gso {<interface> | sw_if_index <sw_idx>} <enable|disable>"
Change-Id: Ifd562db89adcc2208094b3d1032cee8c307aaef9
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
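A simplified sketch of the ingress marking described above (stand-in names, not
the actual virtio input node): the buffer is flagged as GSO and the desired L4
payload size is recorded so interface-output can either pass the packet through
or re-segment it to gso_size.
#include <stdint.h>

#define BUF_F_GSO (1u << 0) /* stand-in for VNET_BUFFER_F_GSO */

typedef struct
{
  uint32_t flags;
  uint16_t gso_size; /* stand-in for vnet_buffer2(b)->gso_size */
} gso_buf_sketch_t;

static void
mark_gso_packet (gso_buf_sketch_t *b, uint16_t l4_payload_size)
{
  b->flags |= BUF_F_GSO;
  b->gso_size = l4_payload_size;
}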
Change-Id: Ica88268fd6a6ee01da7e9219bb4e81f22ed2fd4b
Signed-off-by: Neale Ranns <nranns@cisco.com>
This reverts commit be16020c5034bc69df25a8ecd7081aec9898d93c.
The arm verify job actually failed but the result was overwritten by an x86 ubuntu retry.
Change-Id: Idcae7691fc575053563b8ff8bcad661c15891668
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I99c0737dfeeec2db267773625ddc9b55324fd237
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Add a check to make sure that the vlib and vnet buffer flag bit
definitions do not overlap.
The VNET_BUFFER_F_AVAIL1...8 definitions allow out-of-tree code to:
#define VNET_BUFFER_F_MY_USECASE VNET_BUFFER_F_AVAIL1
and so on. This avoids introducing irrelevant and/or proprietary bit
definitions into vnet/buffer.h, and hopefully minimizes merge pain for
everyone involved.
Change-Id: I5be4f61dceb81b5bfca005f6d609ade074af205b
Signed-off-by: Dave Barach <dave@barachs.net>
VPP graph dispatch trace record description:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Major Version | Minor Version |   NStrings    |   ProtoHint   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   Buffer index (big endian)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ VPP graph node name ...                    ... |  NULL octet   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Buffer Metadata ...                        ... |  NULL octet   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Buffer Opaque ...                          ... |  NULL octet   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Buffer Opaque 2 ...                        ... |  NULL octet   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| VPP ASCII packet trace (if NStrings > 4)      |  NULL octet   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Packet data (up to 16K)                                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Graph dispatch records comprise a version stamp, an indication of how
many NULL-terminated strings will follow the record header, and a
protocol hint.
The buffer index allows downstream consumers of these data to easily
filter/track single packets as they traverse the forwarding
graph. FWIW, the 32-bit buffer index is stored in big endian format.
As of this writing, major version = 1, minor version = 0. NStrings will be
either 4 or 5.
Here is the current set of protocol hints:
typedef enum
{
  VLIB_NODE_PROTO_HINT_NONE = 0,
  VLIB_NODE_PROTO_HINT_ETHERNET,
  VLIB_NODE_PROTO_HINT_IP4,
  VLIB_NODE_PROTO_HINT_IP6,
  VLIB_NODE_PROTO_HINT_TCP,
  VLIB_NODE_PROTO_HINT_UDP,
  VLIB_NODE_N_PROTO_HINTS,
} vlib_node_proto_hint_t;
Example: VLIB_NODE_PROTO_HINT_IP6 means that the first octet of packet
data SHOULD be 0x60, and should begin an ipv6 packet header.
Change-Id: Idf310bad80cc0e4207394c80f18db5f77c378741
Signed-off-by: Dave Barach <dave@barachs.net>
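A minimal reader sketch for the record header laid out above, assuming
sequential file access; this is illustrative and is neither VPP's writer code
nor the wireshark dissector:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h> /* ntohl, for the big-endian buffer index */

typedef struct
{
  uint8_t major_version; /* 1 as of this writing */
  uint8_t minor_version; /* 0 as of this writing */
  uint8_t n_strings;     /* 4 or 5 */
  uint8_t proto_hint;    /* vlib_node_proto_hint_t */
  uint32_t buffer_index; /* big endian on the wire */
} dispatch_record_hdr_t;

static int
read_record_header (FILE *f, dispatch_record_hdr_t *h)
{
  uint8_t raw[8];
  if (fread (raw, 1, sizeof (raw), f) != sizeof (raw))
    return -1;
  h->major_version = raw[0];
  h->minor_version = raw[1];
  h->n_strings = raw[2];
  h->proto_hint = raw[3];
  memcpy (&h->buffer_index, raw + 4, 4);
  h->buffer_index = ntohl (h->buffer_index);
  return 0;
}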
- Error where the ICMPv6 error code doesn't reset VLIB_TX to -1, leading to a
  crash for ICMP generated on tunnelled packets.
- Missed setting VNET_BUFFER_F_LOCALLY_ORIGINATED, so IP in IPv6 packets never
  got fragmented.
- Add support for fragmentation of buffer chains.
- Remove support for inner fragmentation in the frag code itself.
Change-Id: If9a97301b7e35ca97ffa5c0fada2b9e7e7dbfb27
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
This patch implements vxlan with an extension for group-based policy support.
Change-Id: I70405bf7332c02867286da8958d9652837edd3c2
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Change-Id: I59235d11baac18785a4c90cdaf14e8f3ddf06dab
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
- removed handoff-dispatch node
- removed some unused buffer metadata fields
- enqueue to thread logic moved to inline function
Change-Id: I7361e1d88f8cce74cd4fcec90d172eade1855cbd
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: Ib988d87e6758ffa31862096391f9f286b0797f2b
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>