Change-Id: I6fa4c6bf9c4e96ba4502a06907bdecc654ace665
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- Some crypto devices rely on the rte_cryptodev_start() API being called by
the application to enable a pre-configured H/W crypto device (see the sketch
below).
- NXP dpaa2 is one such example.
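A minimal sketch of the call this relies on, assuming the device has already
been configured and its queue pairs set up (the device id is a placeholder):

  #include <rte_cryptodev.h>

  /* Illustrative only: start a pre-configured H/W crypto device.
     dev_id 0 is an assumption for the example. */
  static int
  start_crypto_device (uint8_t dev_id)
  {
    int rv = rte_cryptodev_start (dev_id);  /* enable the device */
    if (rv < 0)
      return rv;                            /* device could not be started */
    return 0;
  }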
Change-Id: I2ad8ca0060604fb4e0541161e91bdebc6642f4da
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
|
|
Change-Id: I7c611d3fa7fabe82294fc22a61d5a3927a2da39d
Signed-off-by: Jessica Tallon <tsyesika@igalia.com>
|
|
Workaround for the lack of driver interrupt support. Also quite handy for
home gateways, laptops/vagrant, and other use-cases that do not require
maximum vectors/second for proper operation.
Change-Id: Ifc4b98112450664beef67b89ab8a6940a3bf24b5
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I23caebf602e3e6ff45fdec106a0da88f6de7a284
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
|
|
Change-Id: I024c1d398fcb51e5a20f9049d16a87b3b1ba0c20
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
|
|
Change-Id: I737dad64bf6dd0743d36500d5cfa1cb1a6594b98
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Add an IPv4 VXLAN CLI/API (using the flow infra) to create flows and enable
them on different hardware (currently tested with i40e).
To offload a VXLAN tunnel onto hw:
set flow-offload vxlan hw TwentyFiveGigabitEthernet3/0/0 rx vxlan_tunnel1
To remove the offload:
set flow-offload vxlan hw TwentyFiveGigabitEthernet3/0/0 rx vxlan_tunnel1 del
TODO: IPv6 handling
Change-Id: I70e61f792ef8e3f007d03d7df70e97ea4725b101
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: Iee8de25ab3c68ae3698c79852195dc336050914c
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
This patch separates setting of the hardware interface MTU and the software
interface MTU. The software MTU is the L2 payload MTU (i.e. not including
the L2 header). Per-protocol MTUs for IPv4, IPv6 and MPLS can also be set.
Currently only IP4 and IP6 are enabled in the adjacency / rewrite code.
Documentation is in src/vnet/MTU.md.
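The split, assuming a plain untagged Ethernet L2 header (14 bytes; add 4 per
VLAN tag), can be illustrated as:

  #include <stdint.h>

  /* Illustrative only: the hardware interface MTU includes the L2 header,
     while the software MTU is just the L2 payload. */
  #define EXAMPLE_ETH_HDR_BYTES 14

  static inline uint32_t
  example_hw_mtu_from_sw_mtu (uint32_t sw_mtu)
  {
    return sw_mtu + EXAMPLE_ETH_HDR_BYTES;  /* e.g. 1500 -> 1514 */
  }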
Change-Id: Iee2fd6f0bbc8210748dd8e073ab9fab87d323690
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
... introduced with dpdk 18.05 support patch
Change-Id: Idf2283888f81d7652599651c0d65476e451f9343
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Added code to initialize the failsafe PMD.
This is part of the initial effort to enable VPP running over
DPDK on the failsafe PMD in Microsoft Azure (4/4).
Change-Id: Ia2469c7087ca4b5c7881dfb11ec5c4fcebaa1d04
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
Change-Id: I205932bc727c990011bbbe1dc6c0cf5349d19806
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- Modify the APIs send_ip6_na and send_ip4_garp to take a sw_if_index instead
of a vnet_hw_interface_t, and add a call to build_ethernet_rewrite to support
subinterfaces/VLANs.
- Add code to the bonding driver to send an event to bond_process when the
first interface becomes active or when the active interface goes down.
- Create a bond_process to walk the interface and the corresponding
subinterfaces and send a GARP/IPv6 NA when an event is received (see the
process sketch below).
- Minor cleanup in bonding/node.c.
Note: the dpdk bonding driver does not send GARP/IPv6 NA for subinterfaces.
There is no attempt to fix that here, but the infra is now in place and adding
the support should be easy.
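A minimal sketch of the event-driven process pattern used for bond_process
(everything except the vlib calls is hypothetical):

  #include <vlib/vlib.h>

  /* Illustrative only: a process node that sleeps until the data plane
     signals it, then handles whatever the event carries (e.g. walking
     subinterfaces and sending GARP/IPv6 NA). */
  static uword
  example_bond_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
                        vlib_frame_t * f)
  {
    uword event_type, *event_data = 0;

    while (1)
      {
        vlib_process_wait_for_event (vm);             /* block until signalled */
        event_type = vlib_process_get_events (vm, &event_data);
        (void) event_type;
        /* walk interfaces / send announcements for this event here */
        vec_reset_length (event_data);                /* reuse the event vector */
      }
    return 0;
  }

  VLIB_REGISTER_NODE (example_bond_process_node) = {
    .function = example_bond_process,
    .type = VLIB_NODE_TYPE_PROCESS,
    .name = "example-bond-process",
  };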
Change-Id: If3ecc4cd0fb3051330f7fa11ca0dab3e18557ce1
Signed-off-by: Steven <sluong@cisco.com>
|
|
Added the configure argument "--with-log2-cache-line-bytes=5|6|7|auto",
i.e. 32, 64, or 128 bytes, or use the value inferred from the build host.
This produces build-xxx/vpp/vppinfra/config.h, which
.../src/vppinfra/cache.h includes.
Kernels which implement the following pseudo-file (e.g. x86_64) are
easy: /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size
Otherwise, extract the cpuid from /proc/cpuinfo and map it to the
cache line size.
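A minimal sketch of the "easy" sysfs case described above:

  #include <stdio.h>

  /* Illustrative only: read the cache line size from the sysfs pseudo-file;
     returns 0 if it is not available. */
  static unsigned
  cache_line_bytes_from_sysfs (void)
  {
    unsigned bytes = 0;
    FILE *fp = fopen ("/sys/devices/system/cpu/cpu0/cache/index0/"
                      "coherency_line_size", "r");
    if (fp)
      {
        if (fscanf (fp, "%u", &bytes) != 1)
          bytes = 0;
        fclose (fp);
      }
    return bytes;   /* e.g. 64 on most x86_64 hosts */
  }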
Change-Id: I7ff861e042faf82c3901fa1db98864fbdea95b74
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Nitin Saxena <nitin.saxena@cavium.com>
|
|
~5 clocks/packet improvement...
Change-Id: I1a78fa24dcd1b3ab7f45e10b9ded50f79517114a
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This is a ~50% improvement in buffer alloc performance.
For a 256-buffer allocation, it was ~10 clocks/buffer; now it is < 5 clocks.
Change-Id: I97590e240a79a42bcab5eb26587fc2d11e6eb163
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Prior to this change, the dpdk plugin assumed xd->device_index was
used both as the index into the internal dpdk_main->devices array
and as the DPDK port index for calls into DPDK APIs.
However, when running on top of failsafe PMDs, the
DPDK port index range may no longer be contiguous (as noted in
http://dpdk.org/ml/archives/dev/2018-March/092375.html
for the related changes in DPDK). Because of this, the dpdk plugin can
no longer iterate through all available DPDK ports
with a for 0..rte_eth_dev_count() loop, and the device_index
assumption no longer holds.
This is part of the initial effort to enable VPP running over
DPDK on the failsafe PMD in Microsoft Azure (3/4).
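A hedged sketch of the iteration this implies, using DPDK's own port iterator
(RTE_ETH_FOREACH_DEV) rather than assuming a contiguous 0..N-1 id space:

  #include <rte_ethdev.h>

  /* Illustrative only: visit every valid DPDK port even when port ids are
     sparse (e.g. under the failsafe PMD); do not assume that the port id
     equals the plugin's device_index. */
  static void
  example_walk_dpdk_ports (void)
  {
    uint16_t port_id;

    RTE_ETH_FOREACH_DEV (port_id)
      {
        /* look up the plugin's per-device state via a separate mapping here */
      }
  }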
Change-Id: I416fd80f2d40e12e139f8f3492814da98343eae7
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
This fixes some compilation warnings with clang on AArch64.
Change-Id: Idb941944e3f199f483c80e143a9e5163a031c4aa
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
port_id is now used for the dpdk port_id.
Change-Id: Ia7d8cdc5dec2ad658c11f9c0f3ef8005a470ac3c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This breaks VFIO operation.
This reverts commit d3b3baa4f8e9e4d95264aff16fe85434ef8061bd.
Change-Id: I2482e0da2d1ebfc365d13668c4b992b040f561b4
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ibab5e27277f618ceb2d543b9d6a1a5f191e7d1db
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
1. Add a PMD type to support the Cavium LiquidIO II CN23XX NIC.
2. Our company is using VPP + DPDK + Cavium LiquidIO II CN23XX NICs.
Unfortunately, the latest VPP code does not support the
Cavium LiquidIO II CN23XX PCI device,
so I added the PMD type to support the LiquidIO NIC,
and it now runs normally. Most of our subsequent projects are
based on the VPP + DPDK + Cavium LiquidIO II CN23XX NIC model,
so I hope the VPP team can adopt this change. Thanks a lot.
Change-Id: I604ae444d69b37c2e26962bfe4ccdfe983b75041
Signed-off-by: chuhong yao <ych@panath.cn>
|
|
- Currently the mempool priv size is initialized after buffers have been
released to the pool. This causes a mismatch between the expected and real
metadata size values, and buffers are released with the wrong offset
(when the metadata offset is in use for a given platform). See the sketch
below.
- Since the private data size is initially 0, the metadata size does not
include space for VLIB_BUFFER_HDR.
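The ordering requirement can be illustrated with the plain DPDK helper, where
the private area (which holds the VLIB buffer header) is declared at pool
creation time, i.e. before any buffer is released to the pool (all names and
sizes below are placeholders):

  #include <rte_mbuf.h>

  /* Illustrative only: priv_size is fixed before the pool is populated, so
     every buffer is laid out with the correct metadata offset. */
  static struct rte_mempool *
  example_create_pool (uint16_t priv_size)
  {
    return rte_pktmbuf_pool_create ("example-pool",
                                    8192,        /* number of mbufs */
                                    256,         /* per-core cache size */
                                    priv_size,   /* metadata, incl. VLIB hdr */
                                    2048,        /* data room per mbuf */
                                    SOCKET_ID_ANY);
  }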
Change-Id: I780c4d518104631a3dcf192185bacf58b3598e65
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
|
|
- Fix the issue where the EAL IOVA mode is Virtual Address (RTE_IOVA_VA) but
the DMA IOVA address was always being set to the physical address value.
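A minimal sketch of the intended selection (vaddr/paddr stand in for the
buffer's virtual and physical addresses):

  #include <stdint.h>
  #include <rte_eal.h>
  #include <rte_memory.h>

  /* Illustrative only: choose the IOVA according to the EAL IOVA mode
     instead of unconditionally using the physical address. */
  static rte_iova_t
  example_select_iova (void *vaddr, rte_iova_t paddr)
  {
    if (rte_eal_iova_mode () == RTE_IOVA_VA)
      return (rte_iova_t) (uintptr_t) vaddr;   /* IOVA == virtual address */
    return paddr;                              /* IOVA == physical address */
  }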
Change-Id: Ib1e9c1596d95885c7eff11723338121627203e61
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
|
|
The option no-multi-seg doesn't take effect for RX, since an MTU which
is too large is passed to the DPDK lib, which causes PMDs
to run their XXX_scattered_rx functions. The patch fixes the issue.
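A hedged sketch of the interaction (field names as in the DPDK releases of
this era; the single-segment data room size is an assumption):

  #include <rte_ethdev.h>

  /* Illustrative only: when multi-segment RX is disabled, keep the maximum
     RX frame length small enough to fit in one mbuf, so the PMD does not
     fall back to its scattered-RX function. */
  static void
  example_limit_rx_frame_len (struct rte_eth_conf *conf, int no_multi_seg,
                              uint32_t wanted_max_frame)
  {
    uint32_t one_seg_max = 2048;   /* mbuf data room, example value */

    if (no_multi_seg && wanted_max_frame > one_seg_max)
      wanted_max_frame = one_seg_max;

    conf->rxmode.max_rx_pkt_len = wanted_max_frame;
  }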
Change-Id: I91a6fb23fd118e872c8a52a6c35c36a86cb2c02b
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Should cost at most 1 clock per frame when not enabled.
Add a "pcap rx trace..." debug CLI and refactor the "pcap tx trace" debug
CLI to avoid duplicating code.
Change-Id: I19ac75d1cf94a6a24c98facbf0753381d37963ea
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Change-Id: Ic9f98c022e32715af395c9ed618589434eb0e526
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Replace hash with a vector to improve performance.
Plus other minor performance improvements.
Change-Id: I3f0ebd909782ce3727f6360ce5ff5ddd131f8574
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
|
|
when flows are enabled on the device
Change-Id: I971764988d5a9e7078468f627205b3fa60736263
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: I1042c0fe179b57a00ce99c8d62cb1bdbe24d9184
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ib3fcc3ceb7f315389bcdecbb7d9632540a5dd6ba
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I4b6577b496c56f27f07dd0066fcfdfd0cebb6f1a
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
During dpdk_lib_init, it calculates the MRU and MTU and later calls
rte_eth_dev_set_mtu with the calculated MTU value. However, dpdk_device_setup
calls rte_eth_dev_set_mtu with hi->max_packet_bytes, which was set to the
MRU value in dpdk_lib_init earlier.
Most of the time MRU != MTU in dpdk_lib_init, and it looks like
hi->max_packet_bytes is treated as the MTU in other parts of the VPP codebase.
Therefore, dpdk_lib_init should be consistent and use the MTU instead of the
MRU for hi->max_packet_bytes.
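A minimal sketch of the consistent behavior (the 14-byte L2 overhead assumes
untagged Ethernet and is only for illustration):

  #include <rte_ethdev.h>

  /* Illustrative only: program the ethdev with the MTU (L2 payload) and keep
     hi->max_packet_bytes, represented here by *max_packet_bytes, set to the
     same MTU value rather than the MRU. */
  static int
  example_set_port_mtu (uint16_t port_id, uint16_t mtu,
                        uint32_t *max_packet_bytes)
  {
    uint32_t mru = (uint32_t) mtu + 14;   /* MRU = MTU + L2 header */

    (void) mru;                           /* only used for RX buffer sizing */
    *max_packet_bytes = mtu;              /* consistent with the MTU */
    return rte_eth_dev_set_mtu (port_id, mtu);
  }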
Change-Id: I23ff2a6cd45d6bc819b6f64d5f0fc0490b8a44de
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
Add the name, enum constants and formatting code
for the failsafe PMD.
This is part of the initial effort to enable VPP running over
DPDK on the failsafe PMD in Microsoft Azure (2/4).
Change-Id: I4eb0093db9f666e2635f7ddff451e3c9064bd0c4
Signed-off-by: Rui Cai <rucai@microsoft.com>
|
|
When port_type_from_speed_capa() is called before the port link update has
completed, xd->port_type becomes VNET_DPDK_PORT_TYPE_UNKNOWN. This happens
with Mellanox NICs without the LSC interrupt. Calling rte_eth_link_get before
getting dev_info ensures the link state is up to date.
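A minimal sketch of the ordering (both calls are standard DPDK ethdev APIs):

  #include <rte_ethdev.h>

  /* Illustrative only: refresh the link state (this call waits for the link
     update to complete) before reading device info, so speed_capa-based
     decisions see an up-to-date link. */
  static void
  example_refresh_link_then_dev_info (uint16_t port_id)
  {
    struct rte_eth_link link;
    struct rte_eth_dev_info dev_info;

    rte_eth_link_get (port_id, &link);
    rte_eth_dev_info_get (port_id, &dev_info);
  }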
Change-Id: I83a59654778eb4bf0c65a4a4e225a326227b9641
Signed-off-by: Steve Shin <jonshin@cisco.com>
|
|
It is much cheaper to use ctzll than to do shift, subtract and mask
in the likely case when we are looking for the first set bit in the uword.
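A minimal sketch contrasting the two approaches (the slower variant is shown
only to illustrate what ctzll replaces):

  #include <stdint.h>

  /* Illustrative only: index of the first (lowest) set bit; the caller must
     guarantee x != 0. */
  static inline unsigned
  first_set_bit_ctz (uint64_t x)
  {
    return (unsigned) __builtin_ctzll (x);   /* single count-trailing-zeros */
  }

  /* The more expensive isolate-then-log2 pattern this replaces in the
     common case. */
  static inline unsigned
  first_set_bit_slow (uint64_t x)
  {
    uint64_t lowest = x & (~x + 1);          /* isolate the lowest set bit */
    return (unsigned) (63 - __builtin_clzll (lowest));
  }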
Change-Id: I31954081571978878c7098bafad0c85a91755fa2
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ibea4a96bdec5e368301a03d8b11a0712fa0265e0
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
The pointer to the IP header was derived from l3_hdr_offset,
which would be OK if l3_hdr_offset were valid. But it does not
have to be, so it was a bad solution. Now the previous nodes
mark whether the packet type is IPv4 or IPv6, and in esp_decrypt
we can get the IP header pointer by subtracting the size
of the IP header from the pointer to the ESP header (the IP header
lies directly in front of the ESP header).
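A minimal sketch of the pointer arithmetic (the is_ip6 flag stands in for
however the previous node marked the packet type):

  #include <vnet/ip/ip4_packet.h>
  #include <vnet/ip/ip6_packet.h>

  /* Illustrative only: recover the IP header sitting immediately in front of
     the ESP header by subtracting the per-address-family header size. */
  static inline void *
  example_ip_header_before_esp (void *esp, int is_ip6)
  {
    if (is_ip6)
      return (u8 *) esp - sizeof (ip6_header_t);
    return (u8 *) esp - sizeof (ip4_header_t);
  }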
Change-Id: I6d425b90931053711e8ce9126811b77ae6002a16
Signed-off-by: Szymon Sliwa <szs@semihalf.com>
|
|
Change-Id: I921465ea64b59d42674cc8f19069ed04e3b25026
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: I3669068f694614f8555b33bf0b703c41e45363ef
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: Ifea9c772e8784642433b92091f5769eb9ec06890
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I387b22427b3f322969bcf32fcfc189123c8ed6ae
Signed-off-by: Eyal Bari <ebari@cisco.com>
|
|
Change-Id: I16557189aa4a763ec496cb4a45f6e12f2d46971f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ic1e189c22e3d344d165e0eab05ccb667eef088a9
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
|
|
Object sizes must evenly divide alignment requests, or vice
versa; otherwise, only the first object will be aligned as
requested.
Three choices: add CLIB_CACHE_LINE_ALIGN_MARK(align_me) at
the end of the structure, manually pad to an even divisor or multiple of
the alignment request, or use plain vectors/pools.
A static assert enforces this.
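A minimal sketch of the first choice and of the property being enforced (the
struct and its fields are hypothetical):

  #include <vppinfra/clib.h>
  #include <vppinfra/cache.h>

  /* Illustrative only: the trailing mark pads/aligns the element so that
     every element of a cache-line-aligned pool or vector stays aligned,
     not just the first one. */
  typedef struct
  {
    u64 counter;
    u8 flags;
    CLIB_CACHE_LINE_ALIGN_MARK (align_me);
  } example_elt_t;

  /* The property the static assert enforces: the object size is an even
     multiple of the alignment request. */
  _Static_assert (sizeof (example_elt_t) % CLIB_CACHE_LINE_BYTES == 0,
                  "example_elt_t size must be a multiple of a cache line");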
Change-Id: I41aa6ff1a58267301d32aaf4b9cd24678ac1c147
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Change-Id: If33854f9c32736edf571fb66cdfa759db1c9de25
Signed-off-by: Szymon Sliwa <szs@semihalf.com>
|
|
Change-Id: I2794384557c6272fe217269b14a9db09eda19220
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: If174d189de40e6f9ffae99997bba93a2519d9fda
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I927c9358915e03187cf7d3098c00b85b5ea2f92d
Signed-off-by: Florin Coras <fcoras@cisco.com>
|