Age | Commit message | Author | Files | Lines |
|
Type: fix
Change-Id: Iea7d73a304236b525b95bdad3bfdb41e711f8cdb
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
In a rare event, we may skip processing lacp pdu's when it is not in
steady state.
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I3595d22dbff8a97dce9fb4d4452d2051bcf6f523
|
|
Copy the sw_if_index value instead of using pointers into the original
bif->slaves content, which could be overridden by e.g. vec_del1().
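A minimal sketch of the safer pattern, assuming bif->slaves is a vector of
slave sw_if_index values and bond_detach_slave() is a hypothetical helper
that may call vec_del1() on that vector:

  /* Sketch only: take a private copy of the values first and iterate over
   * the copy, so element moves inside bif->slaves (e.g. by vec_del1()) can
   * never invalidate what we are dereferencing. */
  u32 *copy = vec_dup (bif->slaves);
  u32 *sw_if_index;

  vec_foreach (sw_if_index, copy)
    bond_detach_slave (bm, *sw_if_index);  /* hypothetical helper */

  vec_free (copy);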
Type: feature
Change-Id: I37e458effd6b2367479574f7bd3facd4e93bada4
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
vnet_feature_enable_disable takes sw_if_index, not hw_if_index. If a
subinterface is created before the slave interface is created, sw_if_index
and hw_if_index start to diverge and the problem will happen.
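A hedged sketch of the corrected call, assuming the slave is identified by
its hw_if_index and that the feature being toggled is bond-input on the
device-input arc (the arc and node names are assumptions):

  /* Resolve the slave's sw_if_index from its hw_if_index before touching
   * the feature arc; the two indices diverge once any subinterface exists. */
  vnet_main_t *vnm = vnet_get_main ();
  vnet_hw_interface_t *hw = vnet_get_hw_interface (vnm, hw_if_index);

  vnet_feature_enable_disable ("device-input", "bond-input",
                               hw->sw_if_index /* not hw_if_index */,
                               1 /* enable */, 0, 0);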
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I11e1f099378832f83b748526c6cbeb56960fad3c
|
|
Type: fix
Change-Id: Ibb7ba878b049b8b18e890c43fdd6324cb88d63b8
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Not all interfaces within the bonding group have the same characteristics.
For active-backup mode, we should do our best to select the slave that
performs best as the primary slave. We already did that by preferring the
slave that is on the local numa node. Sometimes this is not enough. For
example, when all slaves are on the local numa node, the selection is
arbitrary. Some slave interfaces may have higher speed or better QoS than
the others, but this is hard to infer.
One rule does not fit all, so we let the operator optionally specify the
weight for each slave interface. Our primary slave selection rule is now
1. biggest weight
2. is local numa
3. current primary slave (to avoid churn)
4. lowest sw_if_index (for deterministic behavior)
This selection rule only applies to active-backup mode, in which only one
slave is used for forwarding traffic until it becomes unreachable. At that
time, the next "best" slave candidate is automatically promoted. The slaves
are sorted according to the preference rule when they come up, so there is
no need to find the next best candidate when the primary slave goes down.
Another good thing about this rule is that when the down slave comes back
up, it is selected as the primary slave again unless a "better" slave was
indeed added during that period.
To set the weight for a slave interface, do this after the interface is
enslaved:
set interface bond <interface-name> weight <value>
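A standalone sketch of that ordering as a qsort-style comparator; the struct
and its fields are illustrative, not the actual bond plugin data structures:

  #include <stdint.h>

  /* Illustrative per-slave record; field names are assumptions. */
  typedef struct
  {
    uint32_t sw_if_index;
    uint32_t weight;        /* operator-assigned weight              */
    uint8_t is_local_numa;  /* 1 if the slave is on the local numa   */
    uint8_t is_primary;     /* 1 if it is the current primary slave  */
  } slave_rank_t;

  /* Sort so the best primary-slave candidate lands at index 0. */
  static int
  slave_rank_compare (const void *p1, const void *p2)
  {
    const slave_rank_t *a = p1, *b = p2;

    if (a->weight != b->weight)                /* 1. biggest weight        */
      return (a->weight > b->weight) ? -1 : 1;
    if (a->is_local_numa != b->is_local_numa)  /* 2. is local numa         */
      return (a->is_local_numa > b->is_local_numa) ? -1 : 1;
    if (a->is_primary != b->is_primary)        /* 3. current primary slave */
      return (a->is_primary > b->is_primary) ? -1 : 1;
    return (a->sw_if_index < b->sw_if_index) ? -1 : 1;  /* 4. lowest index */
  }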
Type: feature
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I59ced6d20ce1dec532e667dbe1afd1b4243e04f9
|
|
Add /if/lacp/<bond-sw_if_index>/<slave-sw_if_index>/state
<bond-sw_if_index> is a vector of the bond sw_if_index
<slave-sw_if_index> is a vector of the slave sw_if_index
Content is the integer value of the lacp actor state. The state is actually
a bitfield as described in the lacp protocol spec.
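For reference, a hedged sketch that decodes the bitfield using the
actor-state bit positions from 802.3ad (the macro and function names are
illustrative; the bit layout is the standard one):

  #include <stdint.h>

  /* Actor state bits per IEEE 802.3ad / 802.1AX. */
  #define LACP_STATE_ACTIVITY        (1 << 0) /* active vs passive LACP      */
  #define LACP_STATE_TIMEOUT         (1 << 1) /* short vs long timeout       */
  #define LACP_STATE_AGGREGATION     (1 << 2) /* link is aggregatable        */
  #define LACP_STATE_SYNCHRONIZATION (1 << 3) /* in sync with the aggregator */
  #define LACP_STATE_COLLECTING      (1 << 4) /* collecting incoming frames  */
  #define LACP_STATE_DISTRIBUTING    (1 << 5) /* distributing frames         */
  #define LACP_STATE_DEFAULTED       (1 << 6) /* using defaulted partner info*/
  #define LACP_STATE_EXPIRED         (1 << 7) /* partner info has expired    */

  /* A slave is carrying traffic when it is in sync, collecting and
   * distributing. */
  static inline int
  lacp_actor_forwarding (uint32_t state)
  {
    uint32_t mask = LACP_STATE_SYNCHRONIZATION | LACP_STATE_COLLECTING
                    | LACP_STATE_DISTRIBUTING;
    return (state & mask) == mask;
  }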
Type: feature
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: Ic6eca8ce2a1acd2d858e4e50b7eac1d000ea08e5
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Virtual interfaces may be part of a bond just like physical interfaces. The
difference is that virtual interfaces may disappear dynamically. As an example,
the following CLI sequence may crash the debug image
create vhost-user socket /tmp/sock1
create bond mode lacp
bond add BondEthernet0 VirtualEthernet0/0/0
delete vhost-user VirtualEthernet0/0/0
Notice the virtual interface is deleted without first doing a bond delete.
The proper order is to first remove the slave interface from the bond prior
to deleting the virtual interface, as shown below. But we should handle it
anyway.
create vhost-user socket /tmp/sock1
create bond mode lacp
bond add BondEthernet0 VirtualEthernet0/0/0
bond del VirtualEthernet0/0/0 <-----
delete vhost-user VirtualEthernet0/0/0
The fix is to register for VNET_SW_INTERFACE_ADD_DEL_FUNCTION and remove
the slave interface from the bond if the to-be-deleted interface is part of
the bond. We also check that the interface is actually up before we send
an lacp pdu; up means both the hw carrier and the sw admin state are up.
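A sketch of that registration; the VNET_SW_INTERFACE_ADD_DEL_FUNCTION macro
and the callback signature are real VPP, while bond_slave_lookup() and
bond_detach_slave() are hypothetical helpers:

  static clib_error_t *
  bond_sw_interface_add_del (vnet_main_t * vnm, u32 sw_if_index, u32 is_add)
  {
    if (is_add)
      return 0;

    /* The interface is about to be deleted: if it is currently enslaved,
     * detach it from its bond first so the bond never holds a stale
     * sw_if_index. */
    if (bond_slave_lookup (sw_if_index))
      bond_detach_slave (vnm, sw_if_index);

    return 0;
  }

  VNET_SW_INTERFACE_ADD_DEL_FUNCTION (bond_sw_interface_add_del);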
Type: fix
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: If4d2da074338b16aab0df54e00d719e55c45221a
|
|
Type: feature
Change-Id: Icd718c98ba2fa900cafaf1a59dfb100ee9914ec9
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
1. "numa-only" is optional and is disabled by default for lacp mode.
2. update lacp doc.
Type: fix
Change-Id: I6a3a8423ef31ad9980353a796957693cd6205d73
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
If numa-only is set, only slaves on the local numa node transmit packets,
provided there is at least one such slave; otherwise the bond interface
works as usual.
CLI change:
create bond mode lacp [load-balance { l2 | l23 | l34 } {numa-only}]
[hw-addr <mac-address>] [id <if-id>]
The new member "u8 numa_only;" is also added to bond_create_if_args_t.
Type: feature
Change-Id: Icdccedafb0738d8c9d4a5acce909ce562428c071
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
This patch enables bonding numa awareness on multi-socket servers working
in active-backup mode.
VPP adds the capability to automatically prefer the slave on the local numa
node in order to reduce the load on the QPI bus and improve overall system
performance in multi-socket use cases. Users do not need to perform any
extra operation.
Change-Id: Iec267375fc399a9a0c0a7dca649fadb994d36671
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
We register callbacks for VNET_HW_INTERFACE_LINK_UP_DOWN_FUNCTION and
VNET_SW_INTERFACE_ADMIN_UP_DOWN_FUNCTION to add and remove the slave
interface from the bond interface accordingly. For static bonding without
lacp, one would think that it is good enough to put the slave interface into
the active slave set as soon as it is configured. Wrong: sometimes the slave
interface is configured to be part of the bond without ever bringing up the
hardware carrier or setting the admin state to up. In that case, we send
traffic to the "dead" slave interface.
The fix is to make sure both the carrier and admin state are up before we put
the slave into the active set for forwarding traffic.
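A sketch of that check using real vnet accessors; the helper itself and
where it would be called from are assumptions:

  /* A slave goes into the active (forwarding) set only when both the
   * hardware carrier and the software admin state are up. */
  static int
  bond_slave_usable (vnet_main_t * vnm, u32 sw_if_index)
  {
    vnet_sw_interface_t *sw = vnet_get_sw_interface (vnm, sw_if_index);
    vnet_hw_interface_t *hw = vnet_get_sup_hw_interface (vnm, sw_if_index);

    int admin_up = (sw->flags & VNET_SW_INTERFACE_FLAG_ADMIN_UP) != 0;
    int carrier_up = (hw->flags & VNET_HW_INTERFACE_FLAG_LINK_UP) != 0;

    return admin_up && carrier_up;
  }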
Change-Id: I93b1c36d5481ca76cc8b87e8ca1b375ca3bd453b
Signed-off-by: Steven <sluong@cisco.com>
|
|
Change-Id: I78fe58144fa3ba2e1c7135897a13a2541f235c91
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Break up bond tx function into multiple small workloads:
1. parse the packet header and hash it based on the configured algorithm
2. optionally, trace the packet
3. convert the hash value from (1) to the slave port
4. update the buffers with the slave sw_if_index
5. add the buffers to the queues
6. create and send the frames
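A condensed sketch of stages 1 and 3-5 above; bond_lb_hash() and the
per-slave queues are illustrative, only vnet_buffer()/VLIB_TX and the vec
macros reflect real VPP:

  /* For each packet: hash it, map the hash to a slave port, stamp the buffer
   * with that slave's sw_if_index and queue the buffer index per slave.
   * Stage 6 (one frame per slave) is done by the caller afterwards. */
  static void
  bond_tx_stage (vlib_buffer_t ** b, u32 * bi, u32 n_packets,
                 u32 * slave_sw_if_index, u32 n_slaves, u32 ** per_slave_queue)
  {
    for (u32 i = 0; i < n_packets; i++)
      {
        u32 hash = bond_lb_hash (b[i]);  /* stage 1: parse + hash (assumed) */
        u32 port = hash % n_slaves;      /* stage 3: hash -> slave port     */

        /* stage 4: direct the buffer at the selected slave */
        vnet_buffer (b[i])->sw_if_index[VLIB_TX] = slave_sw_if_index[port];

        /* stage 5: queue the buffer index for that slave */
        vec_add1 (per_slave_queue[port], bi[i]);
      }
  }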
old numbers
-----------
Time 5.3, average vectors/node 223.74, last 128 main loops 40.00 per node 222.61
vector rates in 3.3627e6, out 6.6574e6, drop 3.3964e4, punt 0.0000e0
Name State Calls Vectors Suspends Clocks Vectors/Call
BondEthernet0-output active 68998 17662979 0 1.89e1 255.99
BondEthernet0-tx active 68998 17662979 0 2.60e1 255.99
TenGigabitEthernet3/0/1-output active 68998 8797416 0 1.03e1 127.50
TenGigabitEthernet3/0/1-tx active 68998 8797416 0 7.85e1 127.50
TenGigabitEthernet7/0/1-output active 68996 8865563 0 1.02e1 128.49
TenGigabitEthernet7/0/1-tx active 68996 8865563 0 7.65e1 128.49
new numbers
-----------
Name State Calls Vectors Suspends Clocks Vectors/Call
BondEthernet0-output active 304064 77840384 0 2.29e1 256.00
BondEthernet0-tx active 304064 77840384 0 2.47e1 256.00
TenGigabitEthernet3/0/1-output active 304064 38765525 0 1.03e1 127.49
TenGigabitEthernet3/0/1-tx active 304064 38765525 0 7.66e1 127.49
TenGigabitEthernet7/0/1-output active 304064 39074859 0 1.01e1 128.51
Change-Id: I3ef9a52bfe235559dae09d055c03c5612c08a0f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
active-backup mode is using the l2 load-balance algorithm. It should be using
active-backup. Also notice that the output is missing a character.
vpp# create bond mode active-backup
create bond mode active-backup
vpp# sh bond
sh bond
interface name sw_if_index mode load balance active slaves slaves
BondEthernet0 6 xor l34 2 2
BondEthernet1 9 xor l34 1 1
BondEthernet2 10 active-backu l2 0 0
vpp#
Change-Id: If5ed0cc6c25f6c2ddabec15ff6188b34923d38e3
Signed-off-by: Steven <sluong@cisco.com>
|
|
- Reduce per-packet cost by buffering the output packet buffer indexes in the queue and
processing the queue outside the packet processing loop.
- Move unnecessary variable initialization outside of the while loop.
- There is no need to save the old interface if tracing is not enabled.
Test results are for a 256-byte packet comparison. Other packet sizes show similar improvement.
With the patch
--------------
BondEthernet0-output active 52836 13526016 0 1.71e1 256.00
BondEthernet0-tx active 52836 13526016 0 2.68e1 256.00
TenGigabitEthernet6/0/0-output active 52836 6762896 0 9.17e0 127.99
TenGigabitEthernet6/0/0-tx active 52836 6762896 0 6.97e1 127.99
TenGigabitEthernet6/0/1-output active 52836 6763120 0 9.40e0 128.00
TenGigabitEthernet6/0/1-tx active 52836 6763120 0 7.00e1 128.00
bond-input active 52836 13526016 0 1.76e1 256.00
Without the patch
-----------------
BondEthernet0-output active 60858 15579648 0 1.73e1 256.00
BondEthernet0-tx active 60858 15579648 0 2.94e1 256.00
TenGigabitEthernet6/0/0-output active 60858 7789626 0 9.29e0 127.99
TenGigabitEthernet6/0/0-tx active 60858 7789626 0 7.01e1 127.99
TenGigabitEthernet6/0/1-output active 60858 7790022 0 9.31e0 128.00
TenGigabitEthernet6/0/1-tx active 60858 7790022 0 7.10e1 128.00
bond-input active 60858 15579648 0 1.77e1 256.00
Change-Id: Ib6d73a63ceeaa2f1397ceaf4c5391c57fd865b04
Signed-off-by: Steven <sluong@cisco.com>
|
|
Change-Id: I0c3f2add35ad9fc11308b7a2a2c69ffd8472dd2e
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
After the slave interface is removed from the bond, the bond input node still
receives traffic for the slave interface.
We have to disable the feature arc for the corresponding slave interface.
Change-Id: I44e7001e6685e290b032c48147d02911a55d547b
Signed-off-by: Steven <sluong@cisco.com>
|
|
- Modify the APIs send_ip6_na and send_ip4_garp to take sw_if_index instead
  of vnet_hw_interface_t, and add a call to build_ethernet_rewrite to support
  subinterfaces/vlans
- Add code to bonding driver to send an event to bond_process when the first
interface becomes active or when the active interface is down
- Create a bond_process to walk the interface and the corresponding
subinterfaces to send garp/ip6_na when an event is received.
- Minor cleanup in bonding/node.c
Note: the dpdk bonding driver does not send garp/ip6_na for subinterfaces. There is
no attempt to fix that here. But the infra is now in place and it should be easy
to add the support.
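A sketch of such a process node; vlib_process_wait_for_event() and
vlib_process_get_events() are real, while the event payload layout and the
post-change send_ip4_garp()/send_ip6_na() signatures used here are
assumptions (the real code also walks each subinterface):

  static uword
  bond_process (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
  {
    uword event_type, *event_data = 0;

    while (1)
      {
        vlib_process_wait_for_event (vm);
        event_type = vlib_process_get_events (vm, &event_data);
        (void) event_type; /* a single event type in this sketch */

        /* assume each event datum is the bond sw_if_index that changed */
        for (u32 i = 0; i < vec_len (event_data); i++)
          {
            u32 sw_if_index = event_data[i];
            send_ip4_garp (vm, sw_if_index); /* assumed signature */
            send_ip6_na (vm, sw_if_index);   /* assumed signature */
          }
        vec_reset_length (event_data);
      }
    return 0;
  }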
Change-Id: If3ecc4cd0fb3051330f7fa11ca0dab3e18557ce1
Signed-off-by: Steven <sluong@cisco.com>
|
|
- hash is great, but it is a bit too slow for the DP. Use direct array indexing
to quickly retrieve the slave interface.
- the algorithm used by flow hash is great, but it is a bit too slow for the DP.
Use l2_hash_hash() extracted from lb_hash.h, which ECMP is using. It makes use
of the intrinsic crc32 instruction set.
- shortcut modulo arithmetic when the operand is 2**x (where x is up to 4) to
avoid a division instruction.
- special-case link count == 1 in bond_tx_fn()
- use clib_mem_unaligned to access packet data to avoid alignment errors
- fix some typos in packet tracing.
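A sketch of the power-of-two shortcut; is_pow2() is a real vppinfra helper,
the wrapper function is illustrative:

  /* When the number of slaves is 2**x, hash % n can be replaced by a mask,
   * avoiding a division instruction in the data path. */
  static inline u32
  bond_lb_slave_index (u32 hash, u32 n_slaves)
  {
    if (is_pow2 (n_slaves))
      return hash & (n_slaves - 1);
    return hash % n_slaves;
  }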
Change-Id: I8eae3ad497061c5473aa675ba894ee0211120d25
Signed-off-by: Steven <sluong@cisco.com>
|
|
[VPP-1251]
Problem:
When the bond subinterface is removed, it was observed that we lost the lacp
partner. For the slave interfaces, show hardware shows the rx counter going
up, but show interface does not.
Cause:
We reset the interface promiscuous mode when the bond subinterface is deleted.
This causes dpdk not to accept any packets. Leaving the interface in promiscuous
mode fixes the problem.
Other fixes:
There are a few places where we use hw_if_index as if it were sw_if_index, but
they don't necessarily have the same value. As soon as a subinterface is
created, they start to diverge. The fix is to use the correct API for
hw_if_index and sw_if_index.
Change-Id: I1e6b8bca0a4aae396d217a141271cbf968500c91
Signed-off-by: Steven <sluong@cisco.com>
(cherry picked from commit 42c6599bf3057a7e8f4f00f5b6a9dd72af48d283)
|
|
In dpdk-based bonding, when the bond interface is configured for l2,
it automatically sets the bond interface to promiscuous mode and sets rx
redirect to ethernet-input. This allows traffic to be bridged to the
non-compute-node-facing interface when it is received from the compute-node
interface.
For native vpp bonding, we need to do similar things. When the bond interface
is configured for l2, we set the slave interfaces to promiscuous mode
and set rx redirect to ethernet-input because dpdk does not know anything
about the bond interface. Likewise, when a new interface is enslaved, we also
need to do the same thing if the bond interface has already been configured
for l2.
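A sketch of the two per-slave operations described above; both calls are
real vnet APIs, though this is a summary of the behavior rather than a
verbatim excerpt:

  /* When the bond interface is (or becomes) part of L2, each slave must
   * accept all frames and hand them to ethernet-input. */
  vnet_main_t *vnm = vnet_get_main ();

  ethernet_set_flags (vnm, slave_hw_if_index,
                      ETHERNET_INTERFACE_FLAG_ACCEPT_ALL);
  vnet_hw_interface_rx_redirect_to_node (vnm, slave_hw_if_index,
                                         ethernet_input_node.index);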
Change-Id: I7e168008e8a4221be74929b2a20e6db0ce8f3110
Signed-off-by: Steven <sluong@cisco.com>
|
|
For the debug image, if the interface is removed and the trace was
collected prior to the interface delete, show trace may cause a crash.
This is because vnet_get_sw_interface_name and vnet_get_sup_hw_interface
are not safe if the interface is deleted.
The fix is to use format_vnet_sw_if_index_name when all we need is to
display the interface name in the trace. It would show "DELETED",
which is better than a crash.
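A sketch of the safer trace formatter call; format_vnet_sw_if_index_name is
the real formatter, the surrounding trace field is illustrative:

  /* Format by sw_if_index: if the interface has since been deleted this
   * prints "DELETED" instead of dereferencing freed interface state. */
  s = format (s, "src %U", format_vnet_sw_if_index_name,
              vnet_get_main (), t->sw_if_index);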
Change-Id: I912402d3e71592ece9f49d36c8a6b7af97f3b69e
Signed-off-by: Steven <sluong@cisco.com>
|
|
rename "enslave interface <slave> to <BondEthernetx>" to
"bond add <BondEthernetx> <slave>
"detach interface <slave>" to
"bond del <slave>"
Change-Id: I1bf8f017517b1f8a823127c7efedd3766e45cd5b
Signed-off-by: Steven <sluong@cisco.com>
|
|
We were only putting one packet per frame to the output node. Change to
buffer multiple packets per frame. Performance is now on top of dpdk-based
bonding.
Put a spinlock in the tx thread in case the rug is pulled out from under us.
Change-Id: Ifda5af086a984a7301972cd6c8e428217f676a95
Signed-off-by: Steven <sluong@cisco.com>
|
|
Add a bonding driver to support creation of a bond interface which is composed
of multiple slave interfaces. The slave interfaces could be physical interfaces
or any virtual interfaces, for example memif interfaces.
The syntax to create a bond interface is
create bond mode <lacp | xor | active-backup | broadcast | round-robin>
To enslave an interface to the bond interface,
enslave interface TenGigabitEthernet6/0/0 to BondEthernet0
Please see src/plugins/lacp/lacp_doc.md for more examples and additional
options.
LACP is a control plane protocol which manages and monitors the status of
the slave interfaces. The protocol is part of the 802.3ad standard. This patch
implements LACPv1; LACPv2 is not supported.
To enable LACP on the bond interface, specify "mode lacp" when the bond
interface is created. The syntax to enslave a slave interface is the same as
other bonding modes.
Change-Id: I06581d3b87635972f9f0e1ec50b67560fc13e26c
Signed-off-by: Steven <sluong@cisco.com>
|