path: root/src/vnet/bonding/device.c
Age  Commit message  Author  Files  Lines
2019-02-21  Revert "bond: problem switching from l2 to l3"  Peter Mikus  1  -11/+0
During CSIT testing we discovered that LACP tests were failing and producing coredumps. Reverting this patch fixes the problem with VPP crashing.

This reverts commit f23890138e02d4218c828c427f687f8ecdb0e165.

Change-Id: Icf97053ce1473350add885cbebe591f7f3efcbea
Signed-off-by: Peter Mikus <pmikus@cisco.com>
2019-01-13  bonding: support custom interface IDs  Alexander Chernavin  1  -1/+1
Change-Id: I78fe58144fa3ba2e1c7135897a13a2541f235c91
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
2018-11-14  Remove c-11 memcpy checks from perf-critical code  Dave Barach  1  -2/+2
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
2018-11-13  vlib rename vlib_frame_args(...) to vlib_frame_scalar_args(...)  Damjan Marion  1  -2/+2
Typically we have scalar_size == 0, so it doesn't matter, but vlib_frame_args was providing a pointer to the scalar frame data, not the vector data. To avoid future confusion the function is renamed to vlib_frame_scalar_args(...).

Change-Id: I48b75523b46d487feea24f3f3cb10c528dde516f
Signed-off-by: Damjan Marion <damarion@cisco.com>
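For context, a minimal sketch of how the two accessors are typically used inside a node function after this rename; the scalar type my_scalar_t is hypothetical, the accessors are the real vlib ones:

    /* Buffer indexes live in the vector part of the frame... */
    u32 *from = vlib_frame_vector_args (frame);

    /* ...while per-frame scalar data (meaningful only when scalar_size != 0)
     * is reached through the newly named accessor. */
    my_scalar_t *sc = vlib_frame_scalar_args (frame);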
2018-10-29  bond: problem switching from l2 to l3  Steven  1  -0/+11
When the last interface is removed from l2 in the bonding group, we should invoke ethernet_set_rx_redirect to allow ip packets to go directly to ip4-input.

Change-Id: I43b3cd64e2c119762edd0c295bb9348732adab45
Signed-off-by: Steven <sluong@cisco.com>
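To illustrate the intended behaviour (a sketch only, not the reverted patch itself; the slave-counting condition is hypothetical, while vnet_get_sup_hw_interface and ethernet_set_rx_redirect are the standard vnet helpers):

    /* When the last slave leaves L2 mode, stop redirecting rx to
     * ethernet-input so IP packets can go straight to ip4-input. */
    if (n_slaves_in_l2 == 0)   /* hypothetical bookkeeping */
      {
        vnet_hw_interface_t *hw = vnet_get_sup_hw_interface (vnm, sif->sw_if_index);
        ethernet_set_rx_redirect (vnm, hw, 0 /* disable redirect */);
      }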
2018-10-17  bond: tx optimizations  Damjan Marion  1  -305/+328
Break up bond tx function into multiple small workloads:

1. parse the packet header and hash it based on the configured algorithm
2. optionally, trace the packet
3. convert the hash value from (1) to the slave port
4. update the buffers with the slave sw_if_index
5. Add the buffers to the queues
6. Create and send the frames

old numbers
-----------
Time 5.3, average vectors/node 223.74, last 128 main loops 40.00 per node 222.61
vector rates in 3.3627e6, out 6.6574e6, drop 3.3964e4, punt 0.0000e0

Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   68998    17662979   0         1.89e1  255.99
BondEthernet0-tx                 active   68998    17662979   0         2.60e1  255.99
TenGigabitEthernet3/0/1-output   active   68998    8797416    0         1.03e1  127.50
TenGigabitEthernet3/0/1-tx       active   68998    8797416    0         7.85e1  127.50
TenGigabitEthernet7/0/1-output   active   68996    8865563    0         1.02e1  128.49
TenGigabitEthernet7/0/1-tx       active   68996    8865563    0         7.65e1  128.49

new numbers
-----------
BondEthernet0-output             active   304064   77840384   0         2.29e1  256.00
BondEthernet0-tx                 active   304064   77840384   0         2.47e1  256.00
TenGigabitEthernet3/0/1-output   active   304064   38765525   0         1.03e1  127.49
TenGigabitEthernet3/0/1-tx       active   304064   38765525   0         7.66e1  127.49
TenGigabitEthernet7/0/1-output   active   304064   39074859   0         1.01e1  128.51

Change-Id: I3ef9a52bfe235559dae09d055c03c5612c08a0f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
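A compressed sketch of the staged shape described above; the helper names, arrays and queue layout are illustrative, not the actual device.c code:

    /* (1) hash each packet, (3) map the hash to a slave port,
     * (4) retag the buffer, (5) queue it for that slave. */
    for (i = 0; i < n_left; i++)
      {
        hash = bond_lb_hash (bufs[i]);                 /* hypothetical helper */
        port = hash % n_slaves;                        /* or a power-of-two mask */
        vnet_buffer (bufs[i])->sw_if_index[VLIB_TX] = slave_sw_if_index[port];
        queues[port][n_queued[port]++] = from[i];      /* buffer index */
      }
    /* (2) tracing is handled in its own pass, only when enabled;
     * (6) finally one frame per slave is created and sent. */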
2018-10-05  bond: tx perf improvement, part trois  Damjan Marion  1  -65/+164
Introduce bond_tx_inline which takes lb as a constant for gcc to do the optimization. The numbers appear a tad better for 256-byte frames.

with the patch
--------------
Thread 2 vpp_wk_1 (lcore 3)
Time 4.3, average vectors/node 224.00, last 128 main loops 40.00 per node 222.61
vector rates in 8.4836e6, out 1.6967e7, drop 0.0000e0, punt 0.0000e0

Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   141054   36109824   0         2.51e1  256.00
BondEthernet0-tx                 active   141054   36109824   0         2.55e1  256.00
TenGigabitEthernet6/0/0-output   active   141054   18055469   0         9.43e0  128.00
TenGigabitEthernet6/0/0-tx       active   141054   18055469   0         6.97e1  128.00
TenGigabitEthernet6/0/1-output   active   141054   18054355   0         9.54e0  127.99
TenGigabitEthernet6/0/1-tx       active   141054   18054355   0         7.05e1  127.99
bond-input                       active   141054   36109824   0         1.76e1  256.00
dpdk-input                       polling  70527    36109824   0         5.03e1  512.00
ethernet-input                   active   141054   36109824   0         6.12e1  256.00
ip4-input                        active   141054   36109824   0         3.26e1  256.00
ip4-lookup                       active   141054   36109824   0         2.94e1  256.00
ip4-rewrite                      active   141054   36109824   0         3.27e1  256.00

without the patch
-----------------
Thread 2 vpp_wk_1 (lcore 3)
Time 4.3, average vectors/node 224.00, last 128 main loops 40.00 per node 222.61
vector rates in 8.4443e6, out 1.6889e7, drop 0.0000e0, punt 0.0000e0

Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   142744   36542464   0         2.51e1  256.00
BondEthernet0-tx                 active   142744   36542464   0         2.67e1  256.00
TenGigabitEthernet6/0/0-output   active   142744   18270813   0         9.19e0  127.99
TenGigabitEthernet6/0/0-tx       active   142744   18270813   0         6.98e1  127.99
TenGigabitEthernet6/0/1-output   active   142744   18271651   0         9.43e0  128.00
TenGigabitEthernet6/0/1-tx       active   142744   18271651   0         7.02e1  128.00
bond-input                       active   142744   36542464   0         1.76e1  256.00
dpdk-input                       polling  71372    36542464   0         5.08e1  512.00
ethernet-input                   active   142744   36542464   0         6.15e1  256.00
ip4-input                        active   142744   36542464   0         3.23e1  256.00
ip4-lookup                       active   142744   36542464   0         2.96e1  256.00
ip4-rewrite                      active   142744   36542464   0         3.28e1  256.00

Change-Id: I9fd43eda3c735cbff680ac6d2f01ecdae81f0eda
Signed-off-by: Damjan Marion <damarion@cisco.com>
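The optimization is the usual specialize-by-constant pattern in C; a sketch assuming the bonding mode enum values (BOND_LB_L23, BOND_LB_L34) and a bif->lb field, which may not match the code exactly:

    /* 'lb' is a literal at every call site, so the compiler folds the mode
     * checks and emits one straight-line tx path per load-balance mode. */
    static_always_inline uword
    bond_tx_inline (vlib_main_t *vm, vlib_frame_t *frame, bond_if_t *bif, int lb)
    {
      /* per-packet loop; 'if (lb == BOND_LB_L34) ...' disappears after folding */
      return frame->n_vectors;
    }

    /* dispatch on the configured mode with constant arguments */
    if (bif->lb == BOND_LB_L34)
      n_tx = bond_tx_inline (vm, frame, bif, BOND_LB_L34);
    else if (bif->lb == BOND_LB_L23)
      n_tx = bond_tx_inline (vm, frame, bif, BOND_LB_L23);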
2018-09-28  bond: tx performance enhancement part deux  Steven  1  -110/+76
- Reduce per-packet cost by buffering the output packet buffer indexes in the queue and processing the queue outside the packet processing loop.
- Move unnecessary variable initialization outside of the while loop.
- There is no need to save the old interface if tracing is not enabled.

Test results for 256-byte packets are compared below. Other packet sizes show similar improvement.

With the patch
--------------
Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   52836    13526016   0         1.71e1  256.00
BondEthernet0-tx                 active   52836    13526016   0         2.68e1  256.00
TenGigabitEthernet6/0/0-output   active   52836    6762896    0         9.17e0  127.99
TenGigabitEthernet6/0/0-tx       active   52836    6762896    0         6.97e1  127.99
TenGigabitEthernet6/0/1-output   active   52836    6763120    0         9.40e0  128.00
TenGigabitEthernet6/0/1-tx       active   52836    6763120    0         7.00e1  128.00
bond-input                       active   52836    13526016   0         1.76e1  256.00

Without the patch
-----------------
Name                             State    Calls    Vectors    Suspends  Clocks  Vectors/Call
BondEthernet0-output             active   60858    15579648   0         1.73e1  256.00
BondEthernet0-tx                 active   60858    15579648   0         2.94e1  256.00
TenGigabitEthernet6/0/0-output   active   60858    7789626    0         9.29e0  127.99
TenGigabitEthernet6/0/0-tx       active   60858    7789626    0         7.01e1  127.99
TenGigabitEthernet6/0/1-output   active   60858    7790022    0         9.31e0  128.00
TenGigabitEthernet6/0/1-tx       active   60858    7790022    0         7.10e1  128.00
bond-input                       active   60858    15579648   0         1.77e1  256.00

Change-Id: Ib6d73a63ceeaa2f1397ceaf4c5391c57fd865b04
Signed-off-by: Steven <sluong@cisco.com>
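The last two bullets are generic hot-loop hygiene; a small hedged sketch (variable names are illustrative, vlib_get_trace_count is the standard trace accessor):

    /* hoist loop-invariant work out of the per-packet while loop... */
    u32 n_trace = vlib_get_trace_count (vm, node);
    while (n_left > 0)
      {
        /* ...and only pay for trace bookkeeping when tracing is on */
        if (PREDICT_FALSE (n_trace > 0))
          {
            /* save the old sw_if_index for the trace record */
          }
        n_left--;
      }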
2018-09-26  bond: tx perf improvements  Damjan Marion  1  -216/+172
Change-Id: I0c3f2add35ad9fc11308b7a2a2c69ffd8472dd2e
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-07-11  avoid using thread local storage for thread index  Damjan Marion  1  -2/+2
It is cheaper to get thread index from vlib_main_t if available...

Change-Id: I4582e160d06d9d7fccdc54271912f0635da79b50
Signed-off-by: Damjan Marion <damarion@cisco.com>
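Concretely, the swap looks like this (a sketch; both accessors exist in vlib, the surrounding node function is implied):

    /* before: read the thread index from thread-local storage */
    u32 thread_index = vlib_get_thread_index ();

    /* after: the node function already has vlib_main_t *vm in hand,
     * so the index can be read from it directly */
    u32 thread_index = vm->thread_index;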
2018-06-05  bond: send gratuitous arp when the active slave went down in active-backup mode  Steven  1  -0/+48
- Modify the APIs send_ip6_na and send_ip4_garp to take an sw_if_index instead of a vnet_hw_interface_t, and add a call to build_ethernet_rewrite to support subinterfaces/VLANs
- Add code to the bonding driver to send an event to bond_process when the first interface becomes active or when the active interface goes down
- Create a bond_process to walk the interface and the corresponding subinterfaces to send garp/ip6_na when an event is received
- Minor cleanup in bonding/node.c

Note: the dpdk bonding driver does not send garp/ip6_na for subinterfaces. There is no attempt to fix that here, but the infra is now in place and it should be easy to add the support.

Change-Id: If3ecc4cd0fb3051330f7fa11ca0dab3e18557ce1
Signed-off-by: Steven <sluong@cisco.com>
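A condensed sketch of the event flow described above; the event code BOND_SEND_GARP_NA and the subinterface walk are paraphrased, while the vlib process primitives and the post-change send_ip4_garp/send_ip6_na signatures follow the description in the message:

    static uword
    bond_process (vlib_main_t *vm, vlib_node_runtime_t *rt, vlib_frame_t *f)
    {
      uword event_type, *event_data = 0;
      while (1)
        {
          vlib_process_wait_for_event (vm);
          event_type = vlib_process_get_events (vm, &event_data);
          if (event_type == BOND_SEND_GARP_NA)    /* hypothetical event code */
            {
              u32 sw_if_index = event_data[0];    /* bond (sub)interface */
              send_ip4_garp (vm, sw_if_index);
              send_ip6_na (vm, sw_if_index);
              /* ...repeat for each subinterface of the bond... */
            }
          vec_reset_length (event_data);
        }
      return 0;
    }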
2018-05-25  bond: performance harvesting  Steven  1  -109/+146
- hash is great. But it is a bit too slow for the DP. Use direct array indexing to quickly retrieve the slave interface.
- the algorithm used by flow hash is great. But it is a bit too slow for the DP. Use l2_hash_hash() extracted from lb_hash.h, which ECMP is using. It makes use of the intrinsic crc32 instruction set.
- shortcut the modulo arithmetic when the operand is 2**x (where x is up to 4) to avoid a division instruction.
- special case for link count == 1 in bond_tx_fn()
- use clib_mem_unaligned to access data in the packet to avoid alignment errors
- Fix some typos in packet tracing.

Change-Id: I8eae3ad497061c5473aa675ba894ee0211120d25
Signed-off-by: Steven <sluong@cisco.com>
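The modulo shortcut in the third bullet is plain C and easy to show in isolation (a self-contained sketch of the idea, not the device.c code):

    #include <stdint.h>

    /* When the slave count (>= 1) is a power of two, hash % n_slaves can be
     * replaced by a mask, avoiding an integer division in the per-packet path. */
    static inline uint32_t
    slave_index (uint32_t hash, uint32_t n_slaves)
    {
      if ((n_slaves & (n_slaves - 1)) == 0)   /* power of two, e.g. 1, 2 or 4 */
        return hash & (n_slaves - 1);
      return hash % n_slaves;
    }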
2018-04-24  lacp: deleting the bond subinterface may cause lacp to lose the partner [VPP-1251]  Steven  1  -12/+0
Problem:
When the bond subinterface is removed, it was observed that we lost the lacp partner. Show hardware shows the rx counter going up, but show interface does not for the slave interfaces.

Cause:
We reset the interface promiscuous mode when the bond subinterface is deleted. This causes dpdk not to accept any packets. Leaving the interface in promiscuous mode fixes the problem.

Other fixes:
There are a few places where we use hw_if_index as if it were sw_if_index, but they don't necessarily have the same value. As soon as a subinterface is created, they start to diverge. The fix is to use the correct API for the hw_if_index and sw_if_index.

Change-Id: I1e6b8bca0a4aae396d217a141271cbf968500c91
Signed-off-by: Steven <sluong@cisco.com>
(cherry picked from commit 42c6599bf3057a7e8f4f00f5b6a9dd72af48d283)
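The hw_if_index / sw_if_index point deserves a concrete illustration (a sketch using the standard vnet accessors; the variable names are arbitrary):

    /* An sw_if_index names a software interface (including subinterfaces);
     * an hw_if_index names the underlying hardware interface. The two values
     * only happen to match until the first subinterface is created. */
    vnet_sw_interface_t *sw = vnet_get_sw_interface (vnm, sw_if_index);
    vnet_hw_interface_t *hw = vnet_get_hw_interface (vnm, hw_if_index);

    /* to get from a (sub)interface to its hardware interface, resolve the
     * supervising hw interface rather than reusing the sw_if_index value */
    vnet_hw_interface_t *sup_hw = vnet_get_sup_hw_interface (vnm, sw_if_index);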
2018-04-13  bond: ping fails between l2 BD [VPP-1238]  Steven  1  -0/+43
In dpdk-based bonding, when the bond interface is configured for l2, it automatically sets the bond interface to promiscuous mode and sets rx redirect to ethernet-input. This allows traffic to be bridged to the non-compute-node-facing interface when it is received from the compute node interface.

For native vpp bonding, we need to do similar things. When the bond interface is configured for l2, we set the slave interfaces to promiscuous mode and set rx redirect to ethernet-input because dpdk does not know anything about the bond interface. Likewise, when a new interface is enslaved, we also need to do the same thing if the bond interface has already been configured for l2.

Change-Id: I7e168008e8a4221be74929b2a20e6db0ce8f3110
Signed-off-by: Steven <sluong@cisco.com>
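A sketch of the per-slave action this describes, using the ethernet helpers as I understand them (the surrounding enslave hook is paraphrased, not the actual device.c code):

    /* When the bond interface is (or becomes) L2, each slave is put into
     * promiscuous mode and its rx path is redirected to ethernet-input,
     * since dpdk only knows about the physical slave, not the bond. */
    vnet_hw_interface_t *hw = vnet_get_sup_hw_interface (vnm, slave_sw_if_index);
    ethernet_set_flags (vnm, hw->hw_if_index, ETHERNET_INTERFACE_FLAG_ACCEPT_ALL);
    ethernet_set_rx_redirect (vnm, hw, 1 /* enable redirect */);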
2018-04-12  bond: 1 packet/frame == bad performance [VPP-1236]  Steven  1  -6/+8
While https://gerrit.fd.io/r/#/c/11316/ took care of 1 packet/frame for most of the bonding modes, it missed the broadcast mode. This patch is to fix the 1 packet/frame for the broadcast mode.

Change-Id: Iac48a2977c7f702f341479cc712a6448090dbc60
Signed-off-by: Steven <sluong@cisco.com>
2018-03-27  bond: coverity woes  Steven  1  -26/+31
coverity complains about statements in function A

  function A { x % vec_len (y) }

because vec_len (y) is a macro and may return 0 if the pointer y is null. But coverity fails to realize that the same vec_len (y) was already invoked and checked in the caller of function A, which punts if vec_len (y) is 0. We can fix the coverity warning and shave off a few cpu cycles by caching the result of vec_len (y) and passing it around to avoid calling vec_len (y) again in multiple places.

Change-Id: I095166373abd3af3859646f860ee97c52f12fb50
Signed-off-by: Steven <sluong@cisco.com>
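A minimal sketch of the caching pattern (the field and helper names are illustrative; vec_len and PREDICT_FALSE are the real vppinfra macros):

    /* the caller checks the slave count once and passes it down, so callees
     * never compute 'x % vec_len (y)' against a possibly-zero length */
    u32 n_slaves = vec_len (bif->active_slaves);   /* field name illustrative */
    if (PREDICT_FALSE (n_slaves == 0))
      return 0;                                    /* nothing to transmit on */
    port = hash % n_slaves;                        /* safe: n_slaves != 0 */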
2018-03-22  bond: performance enhancement  Steven  1  -155/+195
We were only putting one packet per frame to the output node. Change to buffer multiple packets per frame. Performance is now on top of dpdk-based bonding.

Put a spinlock in the tx thread in case the rug is pulled out from under us.

Change-Id: Ifda5af086a984a7301972cd6c8e428217f676a95
Signed-off-by: Steven <sluong@cisco.com>
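A sketch of the batching pattern this refers to, using the generic vlib frame API (the slave output node index and the gathered buffer list are placeholders):

    /* Instead of sending one buffer per frame, fill a frame with up to
     * VLIB_FRAME_SIZE buffer indexes and hand it to the output node once. */
    vlib_frame_t *f = vlib_get_frame_to_node (vm, slave_output_node_index);
    u32 *to_next = vlib_frame_vector_args (f);
    for (i = 0; i < n_buffers; i++)      /* n_buffers <= VLIB_FRAME_SIZE */
      to_next[i] = buffer_indexes[i];
    f->n_vectors = n_buffers;
    vlib_put_frame_to_node (vm, slave_output_node_index, f);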
2018-03-21  bond: Add bonding driver and LACP protocol  Steven  1  -0/+610
Add a bonding driver to support creation of a bond interface which is composed of multiple slave interfaces. The slave interfaces could be physical interfaces, or just any virtual interfaces, for example memif interfaces.

The syntax to create a bond interface is

  create bond mode <lacp | xor | active-backup | broadcast | round-robin>

To enslave an interface to the bond interface,

  enslave interface TenGigabitEthernet6/0/0 to BondEthernet0

Please see src/plugins/lacp/lacp_doc.md for more examples and additional options.

LACP is a control plane protocol which manages and monitors the status of the slave interfaces. The protocol is part of the 802.3ad standard. This patch implements LACPv1. LACPv2 is not supported. To enable LACP on the bond interface, specify "mode lacp" when the bond interface is created. The syntax to enslave a slave interface is the same as for other bonding modes.

Change-Id: I06581d3b87635972f9f0e1ec50b67560fc13e26c
Signed-off-by: Steven <sluong@cisco.com>
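For reference, a short example session assembled from the syntax above (the interface names are placeholders; the 'set interface state' step is the usual VPP CLI, not part of this commit message):

    vpp# create bond mode lacp
    vpp# enslave interface TenGigabitEthernet6/0/0 to BondEthernet0
    vpp# enslave interface TenGigabitEthernet6/0/1 to BondEthernet0
    vpp# set interface state BondEthernet0 up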