|
module: id SFP/SFP+/SFP28, compatibility: 40g_active_cable
vendor: Amphenol, part NDCCGF-I202
revision: C, serial: APF1711202351C, date code: 170318
cable length: 2m
Change-Id: Ife35607b4f078f7b56737fe066ad4cbd247a7504
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
A warning message is displayed in the system log:
vpp# show log
1970/ 1/ 1 01:00:01:198 warn dpdk unsupported rx offloads requested on port 0: scatter
Change-Id: I40021066daf2d37ca5233e3adce55e412f0d3932
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
The scatter/gather rxmode flag was set for ENA when building
against DPDK >= 18.08. ENA does not support this, so disable
it. It looks like enabling it was a copy/paste error.
Also, after offloads are adjusted based on whether "no-multi-seg"
is set, those configurations are overwritten by copying
port_conf_template over the port config. That should only happen
for versions of DPDK older than 18.08 because 18.08 and newer
make changes directly on the port config instead of making changes
to the template. Make the clib_memcpy() conditional on the DPDK
version being less than 18.08. After doing so, compiler
errors complain about port_conf_template being declared but not
used, so make its declaration conditional.
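A minimal sketch of that version gating (the symbol usage here is illustrative, not the plugin's exact code):
#include <string.h>
#include <rte_version.h>
#include <rte_ethdev.h>

/* Pre-18.08 DPDK starts from the template config; 18.08+ adjusts the
 * port config in place, so copying the template over it would discard
 * the offload adjustments made earlier. */
#if RTE_VERSION < RTE_VERSION_NUM(18, 8, 0, 0)
static struct rte_eth_conf port_conf_template;  /* only declared pre-18.08 */
#endif

static void
apply_port_conf (struct rte_eth_conf *port_conf)
{
#if RTE_VERSION < RTE_VERSION_NUM(18, 8, 0, 0)
  /* stand-in for the clib_memcpy() mentioned above */
  memcpy (port_conf, &port_conf_template, sizeof (*port_conf));
#else
  (void) port_conf;  /* 18.08+: port_conf was already modified directly */
#endif
}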
Change-Id: If81980d71c379a565b51dd700b953f8c811a8703
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Change-Id: If74acb0168bed2201d2a8b47bf3f860540d1574b
Signed-off-by: Khers <s3m2e1.6star@gmail.com>
|
|
Change-Id: Ie68814c8afc6cd67eb75da0b95dffa7b404cb7ba
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
supporting TCP/UDP
The DPDK plugin sets all of the offload flags, which may cause an initialization failure
on NICs that do not support SCTP offload. The VPP code does not currently deal with SCTP
offload at all, so after discussing with Damjan, we agreed
the best approach to fix the issue is to not request the SCTP offload.
The output of "show hardware" for the NIC in question before this patch:
Name Idx Link Hardware
GigabitEthernet1/0/0 1 down GigabitEthernet1/0/0
Ethernet address 00:e0:67:09:90:4b
Intel 82540EM (e1000)
carrier down
flags: pmd pmd-init-fail maybe-multiseg tx-offload intel-phdr-cksum
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
cpu socket 0
Errors:
rte_eth_dev_configure[port:0, errno:-22]: Unknown error -22
And the excerpt from "show log":
1970/ 1/ 1 00:00:00:739 notice dpdk Ethdev port_id=0 requested Tx offloads 0x1c doesn't match Tx offloads capabilities 0xf in rte_eth_dev_configure()
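A hedged sketch of the resulting offload selection, using DPDK 18.x flag names; the helper below is illustrative, not the plugin's actual function:
#include <rte_ethdev.h>

/* Request only the Tx checksum offloads VPP uses, and never SCTP, so
 * rte_eth_dev_configure() does not fail on NICs (like the 82540EM above)
 * whose capabilities lack DEV_TX_OFFLOAD_SCTP_CKSUM. */
static uint64_t
wanted_tx_offloads (const struct rte_eth_dev_info *di)
{
  uint64_t want = DEV_TX_OFFLOAD_IPV4_CKSUM
                | DEV_TX_OFFLOAD_TCP_CKSUM
                | DEV_TX_OFFLOAD_UDP_CKSUM;   /* no SCTP requested */
  return want & di->tx_offload_capa;          /* never exceed capabilities */
}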
Change-Id: I159d65c02fc3f044441972205f1f0ac08e52050c
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
also some moving of l2 headers to reduce dependencies
Change-Id: I7a700a411a91451ef13fd65f9c90de2432b793bb
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Some counters (bytes, pkts) are formatted as signed instead of unsigned
in "show hardware-interfaces" and "show lb".
These stats counters are declared as u64.
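The underlying issue, illustrated in plain C rather than the VPP format() API: a u64 printed with a signed conversion goes negative once the top bit is set.
#include <inttypes.h>
#include <stdio.h>

int
main (void)
{
  uint64_t bytes = 0xffffffffffff0000ULL;               /* large byte counter */
  printf ("signed:   %" PRId64 "\n", (int64_t) bytes);  /* prints -65536 */
  printf ("unsigned: %" PRIu64 "\n", bytes);            /* prints the real value */
  return 0;
}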
Change-Id: Id1b588188bff4e36402beb8d07f779e9a5193956
Signed-off-by: Naoyuki Mori <naoyuki.mori@intel.com>
|
|
Change-Id: I1f54b994425c58776e1445c8d9fe142e7a644d3d
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This adds a `num-mem-channels` option for DPDK, enabling the number of
memory channels used to be chosen.
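DPDK takes the memory channel count via its EAL "-n" argument; a hedged sketch of passing a configured value through (the wrapper below is illustrative, not the plugin's option plumbing):
#include <stdio.h>
#include <rte_eal.h>

/* Translate a configured channel count into the EAL "-n" argument before
 * rte_eal_init(); argv handling is simplified for illustration. */
static int
eal_init_with_channels (const char *prog, int num_mem_channels)
{
  char n_arg[16];
  snprintf (n_arg, sizeof (n_arg), "%d", num_mem_channels);
  char *argv[] = { (char *) prog, (char *) "-n", n_arg };
  return rte_eal_init (3, argv);
}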
Change-Id: I1663dd866ac60592b6dd02261af66d87c64acdb1
Signed-off-by: Jessica Tallon <tsyesika@igalia.com>
|
|
DPDK 18.08 verifies that all set bits are supported and fails if not....
Change-Id: Ia87242fcda11147dff3bebe2e2bef32f0a8891fb
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This significantly reduces the need for
...
in multiarch code. Simple constructor macros will just create a static unused
entry if CLIB_MARCH_VARIANT is defined, and that will be optimized out by the
compiler.
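A sketch of the pattern with hypothetical names (DEMO_REGISTER, register_entry); the real CLIB constructor macros differ, but the idea is the same:
typedef struct { const char *name; } demo_registration_t;

/* stub registry hook, just for this sketch */
static void __attribute__ ((unused))
register_entry (demo_registration_t *r) { (void) r; }

#ifndef CLIB_MARCH_VARIANT
/* default variant: emit a constructor that registers the entry at startup */
#define DEMO_REGISTER(x)                                              \
  static demo_registration_t demo_reg_##x = { #x };                   \
  static void __attribute__ ((constructor)) demo_reg_ctor_##x (void)  \
  {                                                                    \
    register_entry (&demo_reg_##x);                                    \
  }                                                                    \
  static demo_registration_t *demo_reg_ptr_##x                         \
    __attribute__ ((unused)) = &demo_reg_##x
#else
/* march variant: only an unused static, which the compiler optimizes out,
 * so no per-site #ifndef CLIB_MARCH_VARIANT guard is needed */
#define DEMO_REGISTER(x)                                              \
  static demo_registration_t demo_reg_##x __attribute__ ((unused))
#endif

DEMO_REGISTER (my_node);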
Change-Id: I17d1c4ac0c903adcfadaa4a07de1b854c7ab14ac
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: If1b93341c222160b9a08f127620c024620e55c37
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I39f87ca161c891fb22462a23188982fef7c3243f
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
The dpdk raw item match string changed from a flexible array member to a
pointer member
Change-Id: I930f05112ce04b0cdb3feb985d755e730b102084
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Needed a spinlock to protect the data vector. Cleaned up debug cli so
the output makes sense, and so that various parameters exist in one
place. Removed a nonsense memset-to-zero which led to ultra-confusing
results.
Change-Id: I91cd14ce7fe84fd2eceab86e016b5ee001993be4
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Change-Id: I63c36644c9d93f2c3ec6606ca0205b407499de4e
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Fix a typo from previous patch. Change 0x104 to 0x1004.
Change-Id: I82230a8a0ec01567eb1d4bc12ac02062c2a98347
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
If a PMD writes too many log messages using rte_vlog(), the
pipe for logging can fill and then rte_vlog() will block
on fflush() while it waits for something to read from the other
side of the pipe. That will never happen since the process node
that would read the other side of the pipe runs in the
same thread.
Set the write fd to non-blocking before the call to
rte_openlog_stream(). If the pipe is full, calls to write() or
fflush() will fail but execution will continue.
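A hedged sketch of that fix: mark the write end of the logging pipe O_NONBLOCK before handing a stream built on it to rte_openlog_stream(); the pipe setup below is illustrative.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <rte_log.h>

/* Create the log pipe, make its write end non-blocking so a chatty PMD
 * cannot deadlock the thread on fflush() when the pipe is full, then
 * point DPDK's logger at it. */
static int
setup_dpdk_log_pipe (int pipe_fds[2])
{
  if (pipe (pipe_fds) < 0)
    return -1;

  int flags = fcntl (pipe_fds[1], F_GETFL, 0);
  if (flags < 0 || fcntl (pipe_fds[1], F_SETFL, flags | O_NONBLOCK) < 0)
    return -1;

  FILE *f = fdopen (pipe_fds[1], "a");
  if (f == NULL)
    return -1;

  /* writes that would block now fail instead of stalling the thread */
  return rte_openlog_stream (f);
}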
Change-Id: I0e5d710629633acda5617ff29897d6582c255d57
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Recognize the PF and VF device IDs for the Mellanox adapter
used on Azure.
Change-Id: Ic7b36b37ac93db2b696354ffe6fa2b6d62ee3801
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
This patch addresses the coverity scan warnings reported for the DPDK
plugin.
Change-Id: Ie7ac7ffcf4a6c63245eae0f9910a193ab1e318a8
Signed-off-by: Marco Varlese <marco.varlese@suse.de>
|
|
The driver implements support for Cavium QLogic FastLinQ QL4xxxx 10G/25G/40G/50G/100G
Intelligent Ethernet Adapters (IEA) and Converged Network Adapters (CNA)
(doc/guides/nics/qede.rst)
Change-Id: If17e8cb572eb8c0585085be1c7cfdfa159eb6e68
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
|
|
Change-Id: If4da80c7eefe55905594eaaba0946d75f0892da5
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
|
|
Change-Id: I6fa4c6bf9c4e96ba4502a06907bdecc654ace665
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I7c611d3fa7fabe82294fc22a61d5a3927a2da39d
Signed-off-by: Jessica Tallon <tsyesika@igalia.com>
|
|
Workaround for lack of driver interrupt support. Also quite handy for
home gateways, laptop/vagrant setups, and other use-cases not requiring
maximum vectors/second for proper operation.
Change-Id: Ifc4b98112450664beef67b89ab8a6940a3bf24b5
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I23caebf602e3e6ff45fdec106a0da88f6de7a284
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
|
|
Change-Id: I737dad64bf6dd0743d36500d5cfa1cb1a6594b98
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
ip4 vxlan cli/api (using flow infra) to create flows and enable them on
different hardware (currently tested with i40e)
to offload a vxlan tunnel onto hw:
set flow-offload vxlan hw TwentyFiveGigabitEthernet3/0/0 rx vxlan_tunnel1
to remove offload:
set flow-offload vxlan hw TwentyFiveGigabitEthernet3/0/0 rx vxlan_tunnel1 del
TODO: ipv6 handling
Change-Id: I70e61f792ef8e3f007d03d7df70e97ea4725b101
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: Iee8de25ab3c68ae3698c79852195dc336050914c
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
This patch separates setting of hardware interface and software
interface MTU. Software MTU is L2 payload MTU (i.e. not including L2
header). Per-protocol MTU for IPv4, IPv6 and MPLS can also be set.
Currently only IP4, IP6 are enabled in adjacency / rewrite code.
Documentation in src/vnet/MTU.md
Change-Id: Iee2fd6f0bbc8210748dd8e073ab9fab87d323690
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
... introduced with dpdk 18.05 support patch
Change-Id: Idf2283888f81d7652599651c0d65476e451f9343
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Added code to initialize the failsafe PMD.
This is part of the initial effort to enable vpp running over
dpdk on the failsafe PMD in Microsoft Azure (4/4).
Change-Id: Ia2469c7087ca4b5c7881dfb11ec5c4fcebaa1d04
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
Change-Id: I205932bc727c990011bbbe1dc6c0cf5349d19806
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- Modify the API send_ip6_na and send_ip4_garp to take sw_if_index instead
of vnet_hw_interface_t and add call to build_ethernet_rewrite to support
subinterface/vlan
- Add code to bonding driver to send an event to bond_process when the first
interface becomes active or when the active interface is down
- Create a bond_process to walk the interface and the corresponding
subinterfaces to send garp/ip6_na when an event is received.
- Minor cleanup in bonding/node.c
Note: the dpdk bonding driver does not send garp/ip6_na for subinterfaces. There is
no attempt to fix that here, but the infra is now in place and it should be easy
to add the support.
Change-Id: If3ecc4cd0fb3051330f7fa11ca0dab3e18557ce1
Signed-off-by: Steven <sluong@cisco.com>
|
|
Added configure argument "--with-log2-cache-line-bytes=5|6|7|auto",
AKA 32, 64, or 128 bytes, or the value inferred from the build host.
Produces build-xxx/vpp/vppinfra/config.h, which .../src/vppinfra/cache.h includes.
Kernels which implement the following pseudo-file (aka x86_64) are
easy: /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size
Otherwise, extract the cpuid from /proc/cpuinfo and map it to the
cache line size.
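A small sketch of the easy path: read the kernel's coherency_line_size pseudo-file and fall back to a default when it is absent (this mirrors the idea, not the configure script itself).
#include <stdio.h>

/* Return the cache line size in bytes, preferring the kernel pseudo-file;
 * fall back to 64 (log2 = 6) when it is missing.  Non-x86 hosts would
 * instead map the cpuid from /proc/cpuinfo to a line size. */
static int
cache_line_bytes (void)
{
  const char *path =
    "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size";
  int bytes = 64;
  FILE *f = fopen (path, "r");
  if (f)
    {
      if (fscanf (f, "%d", &bytes) != 1)
        bytes = 64;
      fclose (f);
    }
  return bytes;
}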
Change-Id: I7ff861e042faf82c3901fa1db98864fbdea95b74
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Nitin Saxena <nitin.saxena@cavium.com>
|
|
Prior to this change, the dpdk plugin assumed xd->device_index was
used both as the index into the internal dpdk_main->devices array
and as the DPDK port index passed to DPDK APIs.
However, when running on top of failsafe PMDs,
the DPDK port index range may no longer be contiguous (as noted in
http://dpdk.org/ml/archives/dev/2018-March/092375.html
for related changes in DPDK). Because of this, the dpdk plugin can
no longer iterate through all available DPDK ports
with a for 0->rte_eth_dev_count() loop, and the assumption about
device_index no longer holds.
This is part of the initial effort to enable vpp running over
dpdk on the failsafe PMD in Microsoft Azure (3/4).
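The iteration change in DPDK terms (a sketch, not the plugin code): with failsafe, valid port ids can be sparse, so walks must use DPDK's iterator rather than assuming ids 0..N-1.
#include <stdint.h>
#include <rte_ethdev.h>

/* Iterate only over valid port ids instead of a plain 0..count loop. */
static void
walk_dpdk_ports (void (*fn) (uint16_t port_id))
{
  uint16_t port_id;

  RTE_ETH_FOREACH_DEV (port_id)
    fn (port_id);
}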
Change-Id: I416fd80f2d40e12e139f8f3492814da98343eae7
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
This fixes some compilation warnings with clang on AArch64.
Change-Id: Idb941944e3f199f483c80e143a9e5163a031c4aa
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
port_id is now used for the dpdk port_id
Change-Id: Ia7d8cdc5dec2ad658c11f9c0f3ef8005a470ac3c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ibab5e27277f618ceb2d543b9d6a1a5f191e7d1db
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
1. Add a PMD type to support the Cavium LiquidIO II CN23XX NIC.
2. Our company is using VPP + DPDK + Cavium LiquidIO II CN23XX NICs.
Unfortunately, the latest VPP code does not support the
Cavium LiquidIO II CN23XX PCI device,
so I added the PMD type to support the LiquidIO NIC,
and it runs normally. Most of our subsequent projects are
based on the VPP + DPDK + Cavium LiquidIO II CN23XX NIC model,
so I hope the VPP team can accept this change, thanks a lot.
Change-Id: I604ae444d69b37c2e26962bfe4ccdfe983b75041
Signed-off-by: chuhong yao <ych@panath.cn>
|
|
The option no-multi-seg doesn't take effect for RX since an MTU
which is too large is passed to the DPDK lib, which causes PMDs
to run their XXX_scattered_rx function. The patch fixes the issue.
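A hedged sketch of the idea behind the fix, with field names from DPDK 18.x and illustrative buffer arithmetic: when no-multi-seg is set, cap the maximum Rx packet length to what a single mbuf data buffer can hold so the PMD does not pick its scattered Rx path.
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* With no-multi-seg, a frame must fit in one mbuf, so clamp
 * max_rx_pkt_len; otherwise the PMD enables scattered Rx. */
static void
clamp_rx_pkt_len (struct rte_eth_conf *conf, int no_multi_seg,
                  uint16_t mbuf_data_room)
{
  uint32_t single_buf_max = mbuf_data_room - RTE_PKTMBUF_HEADROOM;

  if (no_multi_seg && conf->rxmode.max_rx_pkt_len > single_buf_max)
    conf->rxmode.max_rx_pkt_len = single_buf_max;
}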
Change-Id: I91a6fb23fd118e872c8a52a6c35c36a86cb2c02b
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Should cost at most 1 clock per frame when not enabled.
Add a "pcap rx trace..." debug CLI, and refactor the "pcap tx trace" debug CLI
to avoid duplicating code.
Change-Id: I19ac75d1cf94a6a24c98facbf0753381d37963ea
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Change-Id: Ic9f98c022e32715af395c9ed618589434eb0e526
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
when flows are enabled on the device
Change-Id: I971764988d5a9e7078468f627205b3fa60736263
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: I1042c0fe179b57a00ce99c8d62cb1bdbe24d9184
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ib3fcc3ceb7f315389bcdecbb7d9632540a5dd6ba
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I4b6577b496c56f27f07dd0066fcfdfd0cebb6f1a
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
During dpdk_lib_init, the MRU and MTU are calculated and rte_eth_dev_set_mtu
is later called with the calculated MTU value. However, dpdk_device_setup
calls rte_eth_dev_set_mtu with hi->max_packet_bytes, which was set to the
MRU value in dpdk_lib_init earlier.
Most of the time, MRU != MTU in dpdk_lib_init, and
hi->max_packet_bytes is treated as the MTU in other parts of the vpp codebase.
Therefore, dpdk_lib_init should be consistent and use the MTU instead of the MRU
for hi->max_packet_bytes.
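In DPDK terms (a sketch using pre-19.08 DPDK macro names, with illustrative L2-overhead arithmetic, not the plugin's code): rte_eth_dev_set_mtu() expects an MTU, i.e. the frame length minus L2 overhead, so max_packet_bytes should carry the MTU and the receive frame length should be derived from it.
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Keep the interface value as an MTU and derive the on-wire receive
 * size (MRU / frame length) from it; overhead shown is Ethernet header
 * plus CRC, ignoring VLAN tags for simplicity. */
static int
apply_interface_mtu (uint16_t port_id, uint16_t mtu)
{
  uint16_t l2_overhead = ETHER_HDR_LEN + ETHER_CRC_LEN;
  uint16_t max_rx_frame = mtu + l2_overhead;

  (void) max_rx_frame;  /* would feed rxmode.max_rx_pkt_len in a real setup */
  return rte_eth_dev_set_mtu (port_id, mtu);  /* takes the MTU, not the MRU */
}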
Change-Id: I23ff2a6cd45d6bc819b6f64d5f0fc0490b8a44de
Signed-off-by: Rui Cai <rucai@microsoft.com>
|