before and after:
ip4-load-balance 1.54e1
ip4-load-balance 1.36e1
P.S. Quad loops were not beneficial.
Change-Id: I7bc01fc26288f0490af74db2b1b7993526c3d982
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
and document scaling
Change-Id: I65d8999e65616d77e525963c770d91e9b0d5e593
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I07681d94301e19389dda0caacd5a93b21d9aff1f
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Use explicit types vl_api_address/prefix in ipip.api.
Change-Id: Ib3133cebdbe4437742924efd49cde4009c4cc31b
Type: refactor
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Change-Id: Iab27b405fb3ca7aed94ae974d57c286c41298c3a
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
For tunnel mode, after decryption the buffer length was being adjusted
by adding (iv length + esp header size). Subtract it instead.
Required for BFD to work on an IPsec tunnel interface. BFD verifies
that the amount of received data is the expected size. It drops the
packet if the buffer metadata says that the packet buffer contains
more data than the packet headers say it should.
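A minimal sketch of the corrected adjustment (variable names are illustrative, not the actual vnet/ipsec diff):
  /* tunnel mode, after decryption: the ESP header and IV have been
   * consumed, so the payload is shorter than what was received */
  u16 esp_overhead = sizeof (esp_header_t) + iv_len;  /* iv_len: per-SA IV size */
  b->current_length -= esp_overhead;                  /* the bug: this used += */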
Change-Id: I3146d5c3cbf1cceccc9989eefbc9a59e604e9975
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Change-Id: I363a4444f4d296f04371acd65c702b1a1ce70913
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: Ib9297b712ff7d08bf085fb0b6c9e6ffd83c5fa57
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Type: fix
Fixes: 47feb11
Change-Id: I6b3b97cd361eef19c910c14fd06edb001a4c191b
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
This patch helps save 4.1 clocks/pkt from 62.9 to 58.8
clocks/pkt on Skylake.
Change-Id: I749a88a8fa6c78243441a89d6afcd04f106af3da
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I2e48eb772dc44912192d0684b8ee631d8d975e9e
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
call crypto backend only once per node call
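Conceptually the change looks like this (a sketch only; ptd and its crypto_ops vector are assumptions, vnet_crypto_process_ops is the batch API):
  /* collect one vnet_crypto_op_t per packet while walking the frame,
   * then hand the whole batch to the backend once per node dispatch */
  vnet_crypto_process_ops (vm, ptd->crypto_ops, vec_len (ptd->crypto_ops));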
Change-Id: I0faab89f603424f6c6ac0db28cc1a2b2c025093e
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
- add to the Punt API to allow different descriptions of the desired packets: UDP or exceptions
- move the punt nodes into punt_node.c
- improve tests (test that the correct packets are punted to the registered socket)
Change-Id: I1a133dec88106874993cba1f5a439cd26b2fef72
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Add a minimal ip6 hbh header processing test.
ioam plugin: use ip6_local_hop_by_hop_register_protocol() in
udp_ping_init().
Please test the ioam plugin udp_ping path AYEC, so I can
publish the patch.
Change-Id: I74e35276d6c38c31022026cfd238fad5e4a54485
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Id5c0f420e32e0504cea660fed2013f3ad28088aa
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
In the tap tx routine, virtio_interface_tx_inline, there used to be an
interface-level spinlock to ensure packets are processed in an orderly fashion:
clib_spinlock_lock_if_init (&vif->lockp);
When the virtio code was introduced in 19.04, that line was changed to
clib_spinlock_lock_if_init (&vring->lockp);
to accommodate multiple queues.
Unfortunately, although the spinlock exists in the vring, it was never
initialized for tap, only for virtio. As a result, many nasty things can
happen when running a tap interface in a multi-threaded environment; a crash
is inevitable.
The fix is to initialize vring->lockp for tap and remove vif->lockp, as it
is no longer used.
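Roughly, the fix amounts to the following (simplified sketch, not the exact diff):
  /* at tap create time: initialize the per-vring TX lock that the
   * tx routine takes, so multiple workers can transmit safely */
  clib_spinlock_init (&vring->lockp);

  /* tx path, unchanged: serialize on the vring, not on the interface */
  clib_spinlock_lock_if_init (&vring->lockp);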
Change-Id: I82b15d3e9b0fb6add9b9ac49bf602a538946634a
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit c2c89782d34df0dc7197b18b042b4c2464a101ef)
|
|
Change-Id: Ia0cadebab8b800e34e9574601cdebee5ca90cc6a
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I3906adf9aa20b4221eeb7a8b5b353c6f0cb32d04
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I527b7e43dfba05eab12591e193f07f5036e33f56
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: Ida6a8f96bd858246e993250087bed45e7084ede1
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I7b735f5a540e8c278bac88245acb3f8c041c49c0
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
This patch can help save 2.7 clocks/pkt from 51.5 to 48.5
clocks/pkt on Skylake server.
Change-Id: I10173c8a147a0e54f925c7841c26f133eb75cbed
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
1. Use vlib_get_buffers to replace the original logic.
2. Simplify some of the implementation.
Change-Id: I46cd3487c1d3289074d9dff22aa384688be326dd
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I005f96480e81f3e750c18261e78d0e401da7528e
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: Ie7a6c7b92a6beeb356f01384216a4982fb3d420e
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
This patch improves performance by prefetching the encap header area
and taking full advantage of the optimized function vlib_get_buffers.
After applying the patch, the function vxlan_gpe_encap saves
4.1 clocks/pkt, from 41.7 to 37.6 clocks/pkt, on Skylake.
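For illustration, the pattern looks roughly like this (HDR_SZ and the surrounding loop context are assumptions):
  /* translate all buffer indices for the frame in one call */
  vlib_get_buffers (vm, from, b, n_left_from);
  ...
  /* prefetch (for store) the headroom where the encap header is written */
  CLIB_PREFETCH (vlib_buffer_get_current (b[1]) - HDR_SZ,
                 CLIB_CACHE_LINE_BYTES, STORE);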
Change-Id: I85d486b21a2524d64f2e246dfb4183539ec2532d
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
When multi-chained fragments come into reassembly, linearizing a buffer or
dropping it for other reasons in between disturbs the multi-chained mbuf linking.
When the packet is then transmitted and its buffers are freed, this results in
a double free and packet corruption.
Change-Id: Ib5711d54e61fdd6a67deb30dad0b2a14afb9c2da
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
|
|
Indirect buffers are used to store indirect descriptors
for transmitting big packets.
This patch moves the indirect buffer allocation from
interface creation to the device node. Buffers are now allocated
and deallocated during tx for chained buffers.
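In outline, the tx-time allocation looks something like this (a sketch; error handling and descriptor setup are omitted):
  /* chained packet: grab a buffer on demand to hold the indirect
   * descriptor table, free it when the descriptors are reclaimed */
  u32 indirect_bi;
  if (vlib_buffer_alloc (vm, &indirect_bi, 1) != 1)
    goto drop;  /* no buffer available: drop rather than stall the ring */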
Change-Id: I55cec208a2a7432e12fe9254a7f8ef84a9302bd5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
(cherry picked from commit 55203e745f5e3f1f6c4dbe99d6eab8dee4d13ea6)
|
|
Add a registration overwritten warning to ip4_icmp_register_type(...)
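The added check is along these lines (a sketch; the table name is illustrative):
  /* warn instead of silently clobbering a previous handler for this type */
  if (registered_node_by_type[type] != ~0
      && registered_node_by_type[type] != node_index)
    clib_warning ("WARNING: changing icmp type %d handler", type);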
Change-Id: I6c2aabdb979b54ec49e827225acc74559ac4caab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
We must lock the FIB when the VRF id is ~0; otherwise it crashes while unlocking the FIB.
Change-Id: Iec9754ccd67634a132bc5384a4f796d4a65943ae
Signed-off-by: jackiechen1985 <xiaobo.chen@tieto.com>
|
|
Replace the enqueue code with the macro vlib_validate_buffer_enqueue_x1.
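For reference, the usual single-buffer pattern the macro provides looks like this (bi0/next0 come from the surrounding loop):
  /* enqueue bi0 to next0, fixing up the speculative enqueue if next0
   * differs from the speculated next_index */
  vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                   to_next, n_left_to_next, bi0, next0);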
Change-Id: I4b454b1d73fa5adbaf5f40cf45dc8975878ac93b
Signed-off-by: jackiechen1985 <xiaobo.chen@tieto.com>
|
|
Type: fix
Change-Id: Ib3a2777317f8c57e91ce43820ad7ca5d10ac8677
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
|
|
when try_resplit
Signed-off-by: dongjuan <dong.juan1@zte.com.cn>
Change-Id: I3ebbe7d2d11453700503df7f3be549781d8b73a7
|
|
Change-Id: Id95fd604ed181a2f70c24e2c8cc4321755b7ba7f
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
The current code only allowed access to the main-thread error counters.
That is not very useful for a multi-worker instance.
Now return a vector of counter_t values indexed by thread.
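The shape of the change is roughly as follows (thread_error_counter and error_index are made up for illustration):
  /* return one counter_t per thread instead of only thread 0's value */
  counter_t *counters = 0;
  u32 i;
  vec_validate (counters, vec_len (vlib_mains) - 1);
  for (i = 0; i < vec_len (vlib_mains); i++)
    counters[i] = thread_error_counter (vlib_mains[i], error_index);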
Type: fix
Change-Id: Ie322c8889c0c8175e1116e71de04a2cf453b9ed7
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
vlib_get_buffers helps save 1.4 clocks/pkt from 34.6 to 33.2
clocks/pkt on Skylake.
Change-Id: I741d10d20373f12d30ec8b04ad8c7444ffb42246
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I3a4883426b558476040af5b89bb7ccc8f151c5cc
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
limit max # of fragments to 3 per packet by default
add API option to configure the limit at runtime
Change-Id: Ie4b9507bf5c6095b9a5925972b37fe0032f4f9e8
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|
|
1. Remove the unnecessary cast of a void * pointer.
2. Remove the unused input parameter.
Change-Id: Ic0324364fc0c772200d30fb18a0ba959ed4f7ea4
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Out-of-tree plugins can refer to IP types in their API. The .api and
associated headers must be exported.
Change-Id: I75004343b040defd9eebac6a8a95c2ecf3c8079a
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
leak-check { <any-debug-cli-command-and-args> }
Hint: "set term history off" or you'll have to sort through a bunch of
bogus leaks related to the debug cli history mechanism.
Cleaned up a set of reported leaks in the "show interface" command. At
some point, we thought about making a per-thread vlib_mains vector,
but we never did that. Several interface-related CLIs maintained
local static cache vectors. Not a bad idea, but not useful as things
shook out. Removed the static vectors.
Change-Id: I756bf2721a0d91993ecfded34c79da406f30a548
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
* Configure tests to raise exception if cli_inband fails.
* Fix failing tests.
* Add filename detail to pcap.stat clib_error_return for debugging.
Note: this change identifies spurious issues with packet-generator such as:
CliFailedCommandError: packet-generator capture: pcap file
'/tmp/vpp-unittest-Test6RD-v09RPA/pg0_out.pcap' does not exist.
These issues resolve themselves on remaining test passes.
Change-Id: Iecbd09daee954d892306d11baff3864a43c5b603
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
- if the port is unregistered, then write ~0 into the sparse vec; this allows the DP to send the packets to ICMP (see the sketch below)
- remove the v6 arcs from the v4 node and vice versa (since they're never taken)
- I have tests for this in a pending change for the punt socket
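A sketch of the first point (names are illustrative; the ~0 value is from the description above):
  /* on unregister: mark the port as having no listener so the data plane
   * falls through to the ICMP (port unreachable) path */
  u16 *n = sparse_vec_validate (next_by_dst_port, clib_host_to_net_u16 (port));
  n[0] = (u16) ~0;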
Change-Id: Icbd97de2c2fc38490c16afc2e0b414d8436593c4
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Needed by QUIC to distinguish Q/S sessions
Change-Id: Idcc9e46f86f54a7d06ce6d870edec1766e95c82d
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *
init_runs_first (vlib_main_t * vm)
{
  clib_error_t *error;

  ... do some stuff...

  if ((error = vlib_call_init_function (vm, init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);

and

static clib_error_t *
init_runs_first (vlib_main_t * vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS ("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I72ec95d4a3009a55b0f1fa7e45f9c53f31ef5fc1
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
The unused function nat44_ha_resync() was the only function that
used the error code VNET_API_ERROR_IN_PROGRESS. It was the only
error code with a positive value, and it didn't really play well
with the other error codes.
Change-Id: I7d03c2ee915094b635f6efdca7427f71e4d19f2b
Signed-off-by: Jon Loeliger <jdl@netgate.com>
|
|
* Add support for multiple threads
* Replace quicly buffers with fifos
* Fix cleanup of sessions
* Update quicly release version
Change-Id: I551f936bbec05a15703f043ee85c8e1ba0ab9723
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
- track fifo segment free and chunk freelist memory
- improve fifo alloc. If there are enough chunks to satisfy a fifo
allocation request but not enough free memory, allocate a multi-chunk
fifo
- add apis to preallocate chunks and fifo headers
- more tests
Change-Id: If18dba7ab856272c9f565d36ac36365139793e0b
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: I753fbce091c0ba1004690be5ddeb04f463cf95a3
Signed-off-by: Neale Ranns <nranns@cisco.com>
|