|
The async frames pool may be resized once drained. This causes two problems: the original pool pointer is invalidated and the pool size changes. Both confuse crypto infra user graph nodes (like IPsec and Wireguard) and crypto engines if they expect the pool pointer to always stay valid and the pool size to never change (for performance reasons).
This patch introduces a fixed-size async frames pool. This removes the surprise for the components above and avoids a segmentation fault when the pool would otherwise be resized. In addition, a crypto engine may take advantage of the fixed size to keep its own pool/vector in sync with the crypto infra.
Type: improvement
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I2a71783b90149fa376848b9c4f84ce8c6c034bef
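A minimal sketch of the hazard, assuming a vppinfra-style pool (frame_t and pool_resize_hazard are illustrative stand-ins, not the actual crypto infra code):

  #include <vppinfra/pool.h>

  typedef struct { u32 state; } frame_t;   /* stand-in element type */

  /* A cached element pointer goes stale when a growable pool is
     reallocated by a later pool_get (). */
  static void
  pool_resize_hazard (frame_t **pool)
  {
    frame_t *cached, *f;

    pool_get (*pool, cached);   /* first element                     */
    pool_get (*pool, f);        /* may realloc and move the pool     */
    cached->state = 1;          /* dangling access if the pool moved */
  }

With a fixed-size pool (e.g. preallocated with pool_init_fixed ()), the base address and capacity stay constant, so cached pointers and size-derived state remain valid.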
|
|
Otherwise, we will get an error, since the program could remain from the previous run.
Type: fix
Signed-off-by: Artem Glazychev <artem.glazychev@xored.com>
Change-Id: I68e4072bd3b327592013804d67ccab7eb0ed3a0e
|
|
Type: style
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Change-Id: I6b3ecf0bdb6cfdf260cf4ccae89b6bc2335ff54c
|
|
Linux uses NLM_F_REPLACE in the netlink message to signal a FIB update.
The code invariably does a FIB update for IPv4 but an addition for IPv6.
Without this fix, the following:
ip route add 2001:db8::/48 via 2001:db8::1
ip route replace 2001:db8::/48 via 2001:db8::2
ends up as two separate FIB entries in VPP. With the fix, there will be one FIB entry (the second one with nexthop ::2).
Type: fix
Change-Id: I8f98d6ded52ae0c60bfddaa7fc39acbbaa19d34a
Signed-off-by: Pim van Pelt <pim@ipng.nl>
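A minimal sketch of the flag check, assuming a handler that sees the raw netlink header (the helper name is illustrative, not the actual linux-cp code):

  #include <linux/netlink.h>
  #include <stdbool.h>

  /* NLM_F_REPLACE means "replace the existing route", so the IPv6
     path should update the existing FIB entry instead of adding a
     second one, as is already done for IPv4. */
  static bool
  route_msg_is_replace (const struct nlmsghdr *h)
  {
    return (h->nlmsg_flags & NLM_F_REPLACE) != 0;
  }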
|
|
Type: feature
With this change, packets larger than what fits in a single buffer can
be sent and received over a Wireguard tunnel. Also, cover this with
tests.
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
Change-Id: Ifaf7325676d728580097bc389b51a9be39e44d88
|
|
List of changed messages:
- nat44_add_del_static_mapping
- nat44_user_session_dump
- nat44_user_session_details
- nat44_user_session_v2_dump
- nat44_user_session_v2_details
This change is part of the VPP API cleanup initiative.
Type: fix
Signed-off-by: Ondrej Fabry <ofabry@cisco.com>
Change-Id: I317ae93a0e763c3759a8c24fd550e1c97f6f4987
|
|
This patch makes the crypto dispatch node adaptively switch between
polling and interrupt mode, improving overall VPP performance.
Type: improvement
Signed-off-by: Xiaoming Jiang <jiangxiaoming@outlook.com>
Change-Id: I845ed1d29ba9f3c507ea95a337f6dca7f8d6e24e
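A conceptual sketch of such adaptive switching, assuming a hypothetical empty-poll counter and threshold (this is not the actual dispatch node code; vlib_node_set_state is the standard vlib call):

  #include <vlib/vlib.h>

  static void
  maybe_switch_dispatch_mode (vlib_main_t *vm, u32 node_index,
                              u32 n_dequeued, u32 *n_empty_polls)
  {
    if (n_dequeued == 0)
      {
        /* nothing to do for a while: stop burning a core */
        if (++(*n_empty_polls) > 100)   /* threshold is illustrative */
          vlib_node_set_state (vm, node_index, VLIB_NODE_STATE_INTERRUPT);
      }
    else
      {
        /* work showed up: go back to polling for low latency */
        *n_empty_polls = 0;
        vlib_node_set_state (vm, node_index, VLIB_NODE_STATE_POLLING);
      }
  }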
|
|
In some cases, the trace dump v2 dump function iterates over the client
cache even though it could be empty.
Type: fix
Change-Id: Ice5cefa25bb93dabe86fe565347cdc32faa674ac
Signed-off-by: Maxime Peim <mpeim@cisco.com>
|
|
The plugin creates and manages adjacencies for the physical interface in
each interface pair (they are part of the x-connect feature). When a
link update notification is received from the host system, the MAC
address of the corresponding physical interface is updated (as needed),
as well as the previously created adjacencies for it (because a new
rewrite string needs to be generated).
Subinterfaces inherit the MAC address from the parent interface. When
the MAC address of the parent interface changes, it also implies a MAC
address change for its subinterfaces. The problem is that this is
currently not considered in the plugin. After a MAC address update on
the parent interface, packets sent from subinterfaces might have the
wrong source MAC address. For example, IPv6 Neighbor Solicitation
messages will be sent with the wrong (previous) MAC address and neighbor
discovery will fail.
With this fix, when the plugin updates adjacencies for a physical
interface, it will also update adjacencies for its subinterfaces that
have an existing interface pair.
Type: fix
Change-Id: Ia5f617197e33cb79b9b025c02c2c126c31a551ec
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
|
|
When dumping packets from multiple threads using the API, all packets
from thread 0 are dumped first, then all the ones from thread 1, and so
on, until the limit specified by the API call is reached. As a result,
we could never get packet traces from threads with higher ids.
The tracedump CLI, however, dumps up to a maximum number of packets from
every thread, which is what we would expect the API to do as well.
We also add a trace_clear_cache API so that a client gets a reply when
it only wants to clear its packet cache.
Type: improvement
Change-Id: I0d4df8f6210a298ac3f22cd651eb4d8f445e1034
Signed-off-by: Maxime Peim <mpeim@cisco.com>
|
|
Type: fix
Signed-off-by: Xiaoming Jiang <jiangxiaoming@outlook.com>
Change-Id: I7f8c975fae5d71ce1226a8e19761fc75134e61e2
|
|
Type: feature
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: Ia81f1d8e706dbce9e57319d993bff595e6ba6f03
|
|
load-balance and replicate DPOs both store their number of buckets as a
u16, which can overflow if too many paths are configured. For
load-balance it can happen quite quickly because of weight
normalization.
Type: fix
Change-Id: I0c78c39fc3d40626dfc58b49e7d99d71f9852b50
Signed-off-by: Benoît Ganne <bganne@cisco.com>
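A self-contained illustration of the wrap-around (plain C, not the DPO code):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint32_t wanted_buckets = 70000;   /* e.g. after weight normalization */
    uint16_t n_buckets = (uint16_t) wanted_buckets;   /* silently truncates */
    printf ("%u -> %u\n", wanted_buckets, n_buckets); /* 70000 -> 4464 */
    return 0;
  }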
|
|
Signed-off-by: Andrew Ying <hi@andrewying.com>
Type: fix
Change-Id: I3c428c90146387ad9ce291c7f646d74f06952b40
|
|
If the OpenSSL TLS server handshake fails, track the fact that the
context does not have an app session.
Type: fix
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I5f493059a3610067b59caffbbe441ce9e0868252
|
|
When I set up VPP with the netvsc driver, the following crash occurs:
(format_dpdk_device_name) assertion `(i) < vec_len (dm->devices)' fails
vnet[100166]: #6 0x00007f434d651f6a _clib_error + 0x2da
vnet[100166]: #7 0x00007f430b4bef64 format_dpdk_device_name + 0xf4
vnet[100166]: #8 0x00007f434d6555f3 do_percent + 0xee3
vnet[100166]: #9 0x00007f434d654359 va_format + 0xb9
vnet[100166]: #10 0x00007f434d7ac16e vlib_log + 0x3ce
vnet[100166]: #11 0x00007f430b49ebe3 dpdk_device_start + 0x193
vnet[100166]: #12 0x00007f430b4aa233 dpdk_interface_admin_up_down + 0x163
vnet[100166]: #13 0x00007f434d988fc8 vnet_sw_interface_set_flags_helper + 0x378
vnet[100166]: #14 0x00007f434d989338 vnet_sw_interface_set_flags + 0x48
This patch fixes it by using device_index as the index into the devices
vector instead of the DPDK port_id.
Type: fix
Change-Id: I84c46616d06117c9ae3b2c7d0473050f1b8ded5f
Signed-off-by: Daniel Ding <danieldin95@163.com>
|
|
Type: fix
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I9e6fd29c0e09406e48215f06977b2d4678650669
|
|
Type: fix
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: Idba74f880a251dbeec2205ee41e16b40d4799b06
|
|
This feature enables the use of the classifier and ip-in-out-acl nodes
to redirect matching sessions via arbitrary fib paths instead of relying
on additional VRFs.
Type: feature
Change-Id: Ia59d35481c2555aec96c806b62bf29671abb295a
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Signed-off-by: Xiaoming Jiang <jiangxiaoming@outlook.com>
Change-Id: I9971e69135e0652a36e4b4754774a43ea1d92e8b
|
|
Type: fix
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: Ie3f390be16df81f6824344034377f9a6f4fa9f92
|
|
format_hexdump currently requires the length parameter to be an uword
(64 bits), hence all callers must make sure to cast the length to uword.
Use u32 instead to benefit from C automatic integer promotion: any
length of a type smaller than or equal to u32 will be promoted to an int
fitting in a u32. Only callers using a u64 length need to downcast.
It also makes it similar to the other variants.
Type: fix
Change-Id: I09b52fdde3970cec0be4150a29126ff63106c75b
Signed-off-by: Benoît Ganne <bganne@cisco.com>
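A usage sketch after the change (illustrative caller, using the standard %U / format_hexdump calling convention):

  #include <vppinfra/format.h>

  static u8 *
  dump_buffer (u8 *s, u8 *data, u16 small_len, u64 big_len)
  {
    /* u8/u16/u32 lengths are promoted to fit a u32 automatically */
    s = format (s, "%U", format_hexdump, data, small_len);
    /* only a u64 length still needs an explicit downcast */
    s = format (s, "%U", format_hexdump, data, (u32) big_len);
    return s;
  }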
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I8d3a1939870601297ecccf4cda6767510c2abfa5
|
|
Prior to dpdk-22.11, VPP could count on rte_eth_dev_socket_id to return
NUMA node 0 if the device didn't set it. Ever since the patch below was
committed in DPDK
https://patchwork.dpdk.org/project/dpdk/patch/20220929120512.480-1-olivier.matz@6wind.com/#152498
that assumption no longer holds. If the device doesn't set the NUMA
node, VPP gets -1 from that API call, which causes VPP to crash.
This fix sets the NUMA node to 0 if the API returns -1 (SOCKET_ID_ANY).
Type: fix
Change-Id: I2fde2870e5a3eb98473fe8d119fef594bfba9a8d
Signed-off-by: Steven Luong <sluong@cisco.com>
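A sketch of the guard, assuming a DPDK port id (illustrative helper, not the exact dpdk-plugin code):

  #include <rte_ethdev.h>
  #include <rte_memory.h>   /* SOCKET_ID_ANY */

  static int
  port_numa_node (uint16_t port_id)
  {
    int numa = rte_eth_dev_socket_id (port_id);
    if (numa == SOCKET_ID_ANY)   /* -1: the device did not set it */
      numa = 0;
    return numa;
  }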
|
|
Move the GRE folder from vnet to the plugins folder, and update the
#include <header> paths accordingly.
Add a plugin.c file to register the plugin.
JIRA: VPP-2044
Type: improvement
Change-Id: I7f64cecd97538a7492e56a41558dab58281a9fa5
Signed-off-by: Chuhao Tang <nicotang@cisco.com>
|
|
This change is part of the VPP API cleanup initiative.
Type: refactor
Signed-off-by: Ondrej Fabry <ofabry@cisco.com>
Change-Id: I9f0f786b50aa77383b16e0f844c85f236f7aa8d0
|
|
Type: fix
Currently sw_scheduler runs interchangeably over queues of one selected
type, either ENCRYPT or DECRYPT, and then switches the type for the next
run. This works fine in polling mode, as missed frames get processed on
the next run. In interrupt mode, if all of the workers miss a frame on
the first run, the interrupt flag is lowered, so the frame remains
pending in the queues waiting for another crypto event to raise the
interrupt.
With this fix, sw_scheduler in interrupt mode is forced to check the
second half of the queues if the first pass returned no results. This
guarantees that a pending frame gets processed before the interrupt is
reset.
Change-Id: I7e91d125702336eba72c6a3abaeabcae010d396a
Signed-off-by: Alexander Skorichenko <askorichenko@netgate.com>
|
|
When trying to start perfmon with a bundle that has a unique type while
specifying that type as an argument, the command fails
(e.g. perfmon start bundle branch-mispred type node).
This error occurs because the returned value of
unformat_perfmon_active_type is actually a perfmon_bundle_type_t, but
it was treated as a perfmon_bundle_type_flag_t by a test in the CLI
function.
However, this test is useless and thus can just be removed.
Type: fix
Signed-off-by: Maxime Peim <mpeim@cisco.com>
Change-Id: I5d8b9815871621e8ee7b935586f4cedbc0e7a53d
|
|
Introduce an async model into memif by utilizing the new DMA API. The
original process is broken down into a submission stage and a completion
stage. As multiple submissions may be in flight simultaneously,
per-thread data is no longer safe, so thread data is now kept in each
DMA data structure instead.
As the slave side already supports zero-copy mode, the DMA option is
only added on the master side.
Type: feature
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: I084f253866f5127cdc73b9a08c8ce73b091488f3
|
|
This patch prepares the code for bumping the DPDK version to 22.11, but
the DPDK version used by this patch stays at 22.07 for compatibility.
The "no-dsa" parameter in the DPDK configuration is removed; the
"blacklist" parameter can be used to block the related DSA devices.
Type: feature
Signed-off-by: Xinyao Cai <xinyao.cai@intel.com>
Change-Id: I08787c6584bba66383fc0a784963f33171196910
|
|
Set the mask for calculating the next CQE index based on the
corresponding CQ size instead of the rxq size.
Type: fix
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I67494f029967af64051f51452eba1fd699984cd9
|
|
Type: style
Change-Id: I969bc72185d3675a35cf227c60bedca20e09fdf5
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
|
|
Previously, .src_ip_sticky may have been left uninitialized.
Type: fix
Fixes: 613e6dc0bf928def5d337312d522e1a15df87b00
Change-Id: Ifd866d6322fe9ff723f92b7ab3fd77e720a3cfa4
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
|
|
RTA_VIA allows routes to have a next-hop in a different address family.
This commit makes linux-cp import those types of routes correctly,
instead of importing the routes without a gateway.
This uses rtnl_route_nh_get_gateway, which is available since libnl
3.4.0 (Oct. 9, 2017). Even Debian Stretch has it via backports.
Type: fix
Change-Id: I06297c700461ba7874eb8baf9355bd40990b3121
Signed-off-by: Adrian Pistol <vifino@posteo.net>
|
|
NAT in2out sessions are distributed among workers by client address. In
case there are multiple client VRFs with very similar client addresses
(usually from RFC 1918 space), session distribution/load can be unfair
just due to similar hashes.
Let's take the dynamic client fib_index into account. It affects the
external port range only; outside address picking has its own
address-based hash and is therefore not affected.
Type: improvement
Change-Id: I56ab2e1ce8dd27f2b1f9e7f22839ccf7774bfb82
Signed-off-by: Vladislav Grishenko <themiron@yandex-team.ru>
|
|
The unformat type for "%d" should be u32 or int.
Type: fix
Signed-off-by: Ted Chen <znscnchen@gmail.com>
Change-Id: I2483df6259ed8d3c7648c8db6345e5063ac8b57e
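A minimal example of the expected usage (illustrative parser, assuming a standard vppinfra unformat input):

  #include <vppinfra/format.h>

  /* "%d" writes a 32-bit value, so the destination must be u32/int,
     not u8 or u16. */
  static int
  parse_limit (unformat_input_t *input, u32 *limit)
  {
    return unformat (input, "limit %d", limit);
  }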
|
|
Add the API nat44_ed_vrf_tables_v2_dump, which may replace
nat44_ed_vrf_tables_dump in the future.
- fix endianness
Type: improvement
Signed-off-by: Daniel Béreš <daniel.beres@pantheon.tech>
Change-Id: I40d09ea3252589bdcb61db9f1629dacd87f69978
|
|
Creation of an lcp tap for non-Ethernet interfaces can potentially lead to a crash, so avoid it.
Type: fix
Change-Id: I76ded8a08ea38a2c31d0215804af023207d4d3e1
Signed-off-by: Stanislav Zaikin <stanislav.zaikin@46labs.com>
|
|
Previously we encountered the issue of failing to create completion
queues on some Arm platforms because DPDK may set MLX5_CQE_SIZE to 128
if DPDK MLX PMDs are built and the DPDK plugin is loaded, which does not
satisfy the 64-byte CQE size required by the RDMA plugin.
We fixed this issue in 844a0e8b0 ("always use 64 byte CQEs for MLX5"),
but some CSIT test cases failed due to that code change. It turns out
that we don't need to specify compressed CQE mode for the txq CQ because
RDMA tx doesn't have the code logic to handle compressed CQEs, which
might cause unexpected behavior if it is enabled.
Type: fix
Fixes: 844a0e8b0 ("always use 64 byte CQEs for MLX5")
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I7909a6d44b15bcf39c15dfac9377b65520a0cbfb
|
|
Change the enums used in REPLY_MACRO() to the appropriate ones for the
following handlers:
- vl_api_nat44_ed_add_del_vrf_table_t_handler
- vl_api_nat44_ed_add_del_vrf_route_t_handler
Type: fix
Change-Id: I58e97817b1678da7c025c0d03a8b938a4e0f7b6c
Signed-off-by: Daniel Béreš <daniel.beres@pantheon.tech>
|
|
Originally, the name for each session pool is prepared incorrectly: it
doesn't have the right length and is not null-terminated.
The fix corrects the name formatting for each session pool.
Type: fix
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Change-Id: I67da3d64702ccb27a5907825528f8c95d91040bb
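A typical vppinfra idiom for building a properly null-terminated name vector (illustrative only; pool_index is a hypothetical variable, not the exact code from this patch):

  u8 *name = format (0, "session pool %u%c", pool_index, 0);

The trailing "%c", 0 appends the NUL byte, so the vector contents form a valid C string of the right length.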
|
|
Originally, the name for each session pool can be prepared incorrectly.
The fix changes the name formatting for each session pool.
Type: fix
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Change-Id: I42e0752f9f46c5a42524ec7b863a7c9dd3c23110
|
|
- crypto code moved to vppinfra for better testing and reuse
- added 256-bit VAES support (Intel Client CPUs)
- added AES_GMAC functions
Change-Id: I960c8e14ca0a0126703e8f1589d86f32e2a98361
Type: improvement
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
Change-Id: I141e5779aab7eee3068b702dd2f93765420fb920
Signed-off-by: Stanislav Zaikin <stanislav.zaikin@46labs.com>
|
|
Type: fix
API clients can register for peer events (e.g. to be notified when
connection is established). In a multi-worker setup, peer events might
be triggered from a worker thread. In order to send a peer event to the
clients, an API message needs to be allocated and populated.
API message allocation is only allowed from the main thread. Currently,
the code does not handle the case where a peer event needs to be sent
from a worker thread. In debug builds, when this happens, it causes
SIGABRT in vl_msg_api_alloc_internal() because assertion "pool == 0 ||
vlib_get_thread_index () == 0" fails. In production builds, when this
happens, it might cause unexplained behavior.
There is a test that is supposed to catch this but all multi-worker
Wireguard tests are currently disabled. This problem is likely to be one
of the reasons they were disabled.
With this fix, when a peer event is triggered from a worker thread, the
corresponding API message is allocated and sent from the main thread
using an RPC.
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
Change-Id: Ib3fe19f8070563b35732afd16c017411c089437e
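A sketch of the RPC pattern using the binary-API helper vl_api_rpc_call_main_thread (); the callback and argument struct names are illustrative, not the actual Wireguard code:

  #include <vlibmemory/api.h>

  typedef struct
  {
    u32 peer_index;
    u8 flags;
  } peer_event_rpc_args_t;

  /* Runs on the main thread, where allocating and sending the API
     event message is safe. */
  static void
  peer_event_rpc_cb (void *rpc_args)
  {
    peer_event_rpc_args_t *a = rpc_args;
    (void) a;   /* allocate and send the API event message here */
  }

  /* Called from a worker thread: defer the work to the main thread.
     The RPC infrastructure copies the argument bytes. */
  static void
  notify_peer_event (u32 peer_index, u8 flags)
  {
    peer_event_rpc_args_t args = { .peer_index = peer_index, .flags = flags };
    vl_api_rpc_call_main_thread (peer_event_rpc_cb, (u8 *) &args,
                                 sizeof (args));
  }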
|
|
In a case where one pounds on a single kvp in a KVP_AT_BUCKET_LEVEL
table, the code would sporadically return a transitional value (junk)
from a half-deleted kvp. At most, 64-bits worth of the kvp will be
written atomically, so using memset(...) to smear 0xFF's across a kvp
to free it left a lot to be desired.
Performance impact: very mild positive, thanks to FC for doing a
multi-thread host stack perf/scale test.
Added an ASSERT to catch attempts to add a (key,value) pair which
contains the magic "free kvp" value.
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I6a1aa8a2c30bc70bec4b696ce7b17c2839927065
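A conceptual sketch of the difference (not the actual bihash code; the struct is a stand-in for a 16-byte kvp):

  #include <stdint.h>
  #include <string.h>

  typedef struct { uint64_t key; uint64_t value; } kvp_t;

  /* Non-atomic: concurrent readers can observe a half-smeared kvp. */
  static void
  kvp_free_racy (kvp_t *kvp)
  {
    memset (kvp, 0xFF, sizeof (*kvp));
  }

  /* Single atomic 64-bit store of the magic "free" key: readers see
     either the old key or the free marker, never a torn value. */
  static void
  kvp_free_atomic (kvp_t *kvp)
  {
    __atomic_store_n (&kvp->key, ~0ULL, __ATOMIC_RELEASE);
  }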
|
|
Used on Intel client CPUs which support the VAES instruction set without
AVX512.
Type: improvement
Change-Id: I5f816a1ea9f89a8d298d2c0f38d8d7c06f414ba0
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
The DMA batch status is set by hardware, so its value may differ between
two CPU accesses. Saving the status value fixes this.
Type: fix
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: Ibc9337239555744a571685b486c986991c3e9b18
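An illustrative pattern for the fix (the struct is hypothetical): read the hardware-written status once into a local and reuse that snapshot, rather than re-reading a field the device may change between accesses:

  #include <stdint.h>

  typedef struct { volatile uint8_t status; } hw_batch_t;   /* hypothetical */

  static uint8_t
  batch_status_snapshot (hw_batch_t *b)
  {
    uint8_t status = b->status;   /* single read of the DMA-written field */
    return status;                /* callers reuse this saved value */
  }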
|
|
Recognize and drive Google Virtual Ethernet (gve) devices in Google Cloud.
Type: feature
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: Ia559615ac059cabbca5d10bcd4049e87beaad638
|
|
Allocate and initialize the DMA batch structures when adding a DMA
config. The number of required DMA batches is set by the max_batches
parameter, so DMA batches are not allocated dynamically in worker
threads.
Applications need to check the return value of vlib_dma_batch_new.
Type: improvement
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Change-Id: I5d05a67b59634cf2862a377d5ab77cb1040343ce
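A usage sketch of the required check (illustrative caller; vlib_dma_batch_add and vlib_dma_batch_submit are assumed companion calls of the vlib DMA API):

  #include <vlib/vlib.h>
  #include <vlib/dma/dma.h>

  static int
  start_dma_copy (vlib_main_t *vm, u32 config_index)
  {
    vlib_dma_batch_t *b = vlib_dma_batch_new (vm, config_index);
    if (b == 0)
      return 0;   /* no free batch: fall back to a CPU copy or retry */
    /* ... add copies to the batch (e.g. vlib_dma_batch_add) ... */
    vlib_dma_batch_submit (vm, b);
    return 1;
  }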
|