Change-Id: I7a54cdfa26652c04971999ad1f8144566e13c7bf
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I463b153de93cfec29a9c15e8e84e41f6003d4c5f
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I441beaf3d7f57886580d7cce35ef592aa0fcca5f
Signed-off-by: Dave Barach <dave@barachs.net>
The assignment was redundant with one just a dozen lines above
in the case where the loaded ACL is non-empty, so its only
apparent purpose in life was to make Coverity unhappy...
Thus fix by deletion.
Change-Id: I573308cb9c212bdfdca2551aa381720dbbcb006e
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
There are two reasons to modify the existing ip4_input_inline code.
1. For many tunnel decap cases (e.g. vxlan-gpe), the inner IP header, or part
of it, may sit in the second cacheline rather than only in the first cacheline
after the field "data", which causes a data cache miss whenever the second
cacheline has to be accessed.
2. In most cases, "data" is the starting address of the Ethernet header,
not the IP header, so the existing code is misleading from a code
readability perspective.
Change-Id: I43e119b899dbde95803bccbac54259729fd2cddf
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Signed-off-by: Yuwei Zhang <yuwei1.zhang@intel.com>
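A minimal C sketch of the distinction point 2 refers to, using a simplified, hypothetical buffer_t rather than VPP's real vlib_buffer_t: "data" is the start of the packet (usually the Ethernet header), while the current-data offset points at the header the node should actually parse, e.g. the possibly inner IP header after tunnel decap.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for a vlib-style packet buffer. */
typedef struct
{
  int16_t current_data;   /* offset of the current header within data[] */
  uint8_t data[2048];     /* packet bytes; data[0] is usually Ethernet */
} buffer_t;

/* Misleading: assumes the IP header sits at the very start of the buffer. */
static inline uint8_t *
ip_header_naive (buffer_t * b)
{
  return b->data;
}

/* Correct: honor the current-data offset, which may already point past the
 * Ethernet header or past an outer tunnel header (e.g. vxlan-gpe), possibly
 * into the packet's second cacheline. */
static inline uint8_t *
ip_header_current (buffer_t * b)
{
  return b->data + b->current_data;
}

int
main (void)
{
  buffer_t b = {.current_data = 14 };   /* 14-byte Ethernet header */
  printf ("naive=%p current=%p\n", (void *) ip_header_naive (&b),
          (void *) ip_header_current (&b));
  return 0;
}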
Change-Id: Ifa6d8391b1b2413a88b7720fc434e0bc849a149a
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: I0f2266c4727a96b6410a3084dc079bae7bc649ab
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I3722a1850f7a72e4382e351120c1514d7a1759b8
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I8eb5546ff8634d5498d8ce5bbc9407bceb9ae3ef
Signed-off-by: Florin Coras <fcoras@cisco.com>
- avoid excessive cwnd increments on threshold changes
- fix K computation when fast convergence is on
Change-Id: I99c36abc879e63aecc0617f7aed5a2f68430ba71
Signed-off-by: Florin Coras <fcoras@cisco.com>
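For reference, the textbook computation the second bullet is about, as specified in RFC 8312; this is a hedged sketch of the standard algorithm, not a copy of the patch:

#include <math.h>

#define CUBIC_C    0.4          /* RFC 8312 constant */
#define CUBIC_BETA 0.7          /* multiplicative decrease factor */

typedef struct
{
  double w_max;                 /* cwnd just before the last reduction */
  double w_last_max;            /* w_max remembered from the previous epoch */
  double K;                     /* time (s) to grow back to w_max */
} cubic_state_t;

/* On loss: apply fast convergence (RFC 8312 section 4.6), then derive K
 * from the w_max that fast convergence produced, not from the stale
 * value, which is the kind of mistake a K-computation fix addresses. */
static void
cubic_on_loss (cubic_state_t * cs, double cwnd)
{
  cs->w_max = cwnd;
  if (cs->w_max < cs->w_last_max)
    {
      cs->w_last_max = cs->w_max;
      cs->w_max = cs->w_max * (1.0 + CUBIC_BETA) / 2.0;
    }
  else
    cs->w_last_max = cs->w_max;

  cs->K = cbrt (cs->w_max * (1.0 - CUBIC_BETA) / CUBIC_C);
}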
Change-Id: I3a15960fe346763faf13e8728ce36c2f3bf7b05a
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I88c3d3e516401bb1c84991515cd701c156ae19dd
Signed-off-by: Eyal Bari <ebari@cisco.com>
Change-Id: I04c59bbe1780e7289cb27a0a912803812fdc297e
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: If88fc3acdba1f73b3e8be94d8014556c5239596c
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I15ff191ee8724a3354c074db590472db05e0652e
Signed-off-by: Neale Ranns <nranns@cisco.com>
Typically we have scalar_size == 0, so it doesn't matter, but
vlib_frame_args was providing a pointer to the scalar frame
data, not the vector data. To avoid future confusion the function
is renamed to vlib_frame_scalar_args(...)
Change-Id: I48b75523b46d487feea24f3f3cb10c528dde516f
Signed-off-by: Damjan Marion <damarion@cisco.com>
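A simplified sketch of the frame layout behind the rename; the struct and helper names below are illustrative only, not VLIB's actual definitions: the scalar data sits right after the frame header, and the vector of buffer indices follows it.

#include <stdint.h>

/* Hypothetical, simplified frame layout: header, then scalar_size bytes
 * of per-frame scalar data, then the vector of buffer indices. */
typedef struct
{
  uint16_t n_vectors;
  uint8_t scalar_size;          /* usually 0 */
  uint8_t vector_size;          /* usually sizeof (u32) */
  /* scalar data, then vector data, follow in memory */
} frame_t;

/* What the old, confusingly named vlib_frame_args() actually returned:
 * a pointer to the per-frame scalar data. */
static inline void *
frame_scalar_args (frame_t * f)
{
  return (void *) (f + 1);
}

/* What callers iterating over buffer indices really want. */
static inline void *
frame_vector_args (frame_t * f)
{
  return (uint8_t *) (f + 1) + f->scalar_size;
}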
Make sure that we notify the app of the data enqueued in the burst
before notifying of disconnect.
Change-Id: I7747a5cbb4c6bc9132007f849c24ce04b7841273
Signed-off-by: Florin Coras <fcoras@cisco.com>
Remove old nonfunctional code for setting link-local addresses.
Use common API for setting all IPv6 addresses.
Change-Id: I562329df86341f81ef2441510a9eefbbf710f6e0
Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
Signed-off-by: Matus Fabian <matfabia@cisco.com>
Change-Id: Ib55684dd3f1d39f5436d6feb2fb105583027493c
Signed-off-by: Florin Coras <fcoras@cisco.com>
When the page size is 1G, pm->base + (pp->index << pm->def_log2_page_sz)
overflows very quickly when creating multiple mempools.
Fix by adding a (uword) cast to the shift.
Change-Id: If769b99d344cc3f547418a242a7497d044071615
Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
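A self-contained illustration of the overflow: with 1G pages (log2 = 30) and a 32-bit page index, the shift wraps in 32 bits before it is ever added to the 64-bit base, so the index must be widened to uword first. The variable names are illustrative, not the actual physmem fields.

#include <stdint.h>
#include <stdio.h>

typedef uintptr_t uword;        /* 64-bit on the platforms of interest */

int
main (void)
{
  uint32_t index = 5;           /* page index, a 32-bit quantity */
  uint32_t log2_page_sz = 30;   /* 1G pages */
  uword base = 0x100000000ULL;  /* pretend mapping base address */

  /* Broken: the shift is evaluated in 32 bits and wraps for index >= 4. */
  uword bad = base + (index << log2_page_sz);

  /* Fixed: widen to uword first so the shift happens in 64 bits. */
  uword good = base + ((uword) index << log2_page_sz);

  printf ("bad=0x%lx good=0x%lx\n", (unsigned long) bad, (unsigned long) good);
  return 0;
}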
Change-Id: I3daf8d473aa37b4597d130d19913b782cf7b8511
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I36c2fa33cdc1db9a6af9b48c99e281abd8af1b6e
Signed-off-by: Neale Ranns <nranns@cisco.com>
Because the code is not optimized, newreno is still the default
congestion control algorithm.
Change-Id: I7061cc80c5a75fa8e8265901fae4ea2888e35173
Signed-off-by: Florin Coras <fcoras@cisco.com>
It is not used and just confuses people...
Change-Id: Ic731432a785731271531f183b448e4591a1d2a8b
Signed-off-by: Damjan Marion <damarion@cisco.com>
Compute the first power of ten which is greater than 0.1% of the clock
rate. Save the result, and use it to round future results. The
previous constant value - 1e7 - didn't work properly on aarch64.
Change-Id: Ic021e3eb1b90c0d4a7d9f1b6425123f0c8b48b0b
Signed-off-by: Dave Barach <dave@barachs.net>
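A small sketch of the described rounding; the variable names are invented, and only the "first power of ten greater than 0.1% of the clock rate" rule comes from the commit message.

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  double clock_rate = 2.2e9;    /* e.g. a 2.2 GHz box */
  double round_to = 1.0;

  /* First power of ten greater than 0.1% of the clock rate
   * (1e7 for the example above, matching the old constant). */
  while (round_to < 1e-3 * clock_rate)
    round_to *= 10.0;

  /* Round a measured clock rate to that granularity. */
  double measured = 2199998712.0;
  double rounded = round_to * (double) (uint64_t) (measured / round_to + 0.5);

  printf ("round_to=%.0f rounded=%.0f\n", round_to, rounded);
  return 0;
}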
Change-Id: Idd4471a3adf7023e48e85717f00c786b1dde0cca
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: Idbce0393fc9e6e8dbb2765ed164ba7f90d1ffccc
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I755525d953605561477eeb2252ef38c60000c70a
Signed-off-by: Damjan Marion <damarion@cisco.com>
This patch adds support for mapping the virtual address of allocated
memory to its physical address and size.
Change-Id: I7659a1881308e89b215c486fecd7c973076d0773
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
Change-Id: I294e4f93e925c58765d4692337208fcee7d12886
Signed-off-by: Klement Sekera <ksekera@cisco.com>
- update pacer once per burst
- better estimate initial rtt
- compute smoothed average for higher precision rtt estimate
Change-Id: I06d41a98784cdf861bedfbee2e7d0afc0d0154ef
Signed-off-by: Florin Coras <fcoras@cisco.com>
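For the last bullet, the classic exponentially weighted moving average (in the spirit of RFC 6298) is sketched below; the patch's actual smoothing parameters may differ.

/* Smoothed RTT as an exponentially weighted moving average; alpha = 1/8
 * is the RFC 6298 value, used here purely for illustration. */
static inline double
rtt_update_srtt (double srtt, double rtt_sample)
{
  const double alpha = 1.0 / 8.0;

  if (srtt == 0.0)              /* first sample initializes the estimate */
    return rtt_sample;
  return (1.0 - alpha) * srtt + alpha * rtt_sample;
}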
Avoids crash if suspended_process_frames grows.
Change-Id: Id26ef0dd0dd001b997c531c4dec004e7e7989670
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id69678adb578b323ae18034d1b1fddb7417bcc08
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I6782544d5ee0a66b1a027874b23574416093ca92
Signed-off-by: Damjan Marion <damarion@cisco.com>
Change-Id: I20192f3a8f4f01f47e775746f6fde7c685f185ee
Signed-off-by: Neale Ranns <nranns@cisco.com>
Instead of reusing buffers for acking, consume all buffers and program
output for (dup)ack generation. This implicitly fixes the drop counters
that were artificially inflated by both data and feedback traffic.
Moreover, the patch also significantly reduces the ack traffic as we now
only generate an ack per frame, unless duplicate acks need to be sent.
Because of the reduced feedback traffic, a sender's rx path and a
receiver's tx path are now significantly less loaded. In particular, a
sender can overwhelm a 40Gbps NIC and generate tx drop bursts for low
rtts. Consequently, tx pacing is now enforced by default.
Change-Id: I619c29a8945bf26c093f8f9e197e3c6d5d43868e
Signed-off-by: Florin Coras <fcoras@cisco.com>
Optimize the zero byte mask NEON functions below with fewer intrinsics,
and make their outputs consistent with the corresponding functions in
vector_sse42.h:
always_inline u32 u64x2_zero_byte_mask (u64x2 input)
always_inline u32 u32x4_zero_byte_mask (u32x4 input)
always_inline u32 u16x8_zero_byte_mask (u16x8 input)
always_inline u32 u8x16_zero_byte_mask (u8x16 input)
always_inline u32 i64x2_zero_byte_mask (i64x2 input)
always_inline u32 i32x4_zero_byte_mask (i32x4 input)
always_inline u32 i16x8_zero_byte_mask (i16x8 input)
always_inline u32 i8x16_zero_byte_mask (i8x16 input)
Change-Id: I7f485915baeb37fa2dd484699b8769e0136f6574
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
Reviewed-by: Sirshak Das <Sirshak.Das@arm.com>
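A scalar reference for the u8x16 variant's intended result; the semantics are assumed here from the sse42 counterparts rather than copied from the patch: bit i of the returned mask is set iff byte i of the input is zero, and the wider-lane variants perform the same comparison per lane.

#include <stdint.h>

/* Scalar reference model, not the NEON implementation itself. */
static inline uint32_t
u8x16_zero_byte_mask_ref (const uint8_t v[16])
{
  uint32_t mask = 0;
  int i;

  for (i = 0; i < 16; i++)
    if (v[i] == 0)
      mask |= 1u << i;          /* one mask bit per zero byte */
  return mask;
}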
Change-Id: I30487bd736407378fb5a6d313e4eef12bbb262b8
Signed-off-by: Damjan Marion <damarion@cisco.com>
Learning GBP endpoints over vxlan-gbp tunnels
Change-Id: I1db9fda5a16802d9ad8b4efd4e475614f3b21502
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
pool (VPP-1485)
Change-Id: Iaa404361eac2a6612dcdaba3f73bae41a35c5446
Signed-off-by: Matus Fabian <matfabia@cisco.com>
Change-Id: I6d6a73ac62f24928fb51e89948b92a1cb9134c40
Signed-off-by: Neale Ranns <nranns@cisco.com>
In output.c, we buffer the descriptors and call vmxnet3_reg_write_inline
once outside the loop. This change improves performance dramatically.
When refilling the ring, there is no need to inform the device unless
explicitly requested by the device (ctrl.update_prod == 1).
Change-Id: I7031d58bff0d249e913d14236d416c91eb6ab94a
Signed-off-by: Steven <sluong@cisco.com>
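A rough, self-contained sketch of the batching idea; the ring type, enqueue_desc() and doorbell_write() below are hypothetical stand-ins for the driver's descriptor ring and its MMIO doorbell (vmxnet3_reg_write_inline), not its real API. The point is that the expensive register write happens once per burst instead of once per packet.

#include <stdio.h>

typedef struct { unsigned produce; } tx_ring_t;

static int
enqueue_desc (tx_ring_t * ring, int pkt)
{
  ring->produce++;              /* pretend we filled one descriptor */
  (void) pkt;
  return 1;
}

static void
doorbell_write (tx_ring_t * ring)
{
  /* In the real driver this is an uncached PCI register write,
   * which is exactly why batching it matters. */
  printf ("doorbell: produce=%u\n", ring->produce);
}

static void
tx_burst (tx_ring_t * ring, const int *pkts, int n_pkts)
{
  int n_enqueued = 0, i;

  for (i = 0; i < n_pkts; i++)
    n_enqueued += enqueue_desc (ring, pkts[i]);

  if (n_enqueued)               /* single doorbell for the whole burst */
    doorbell_write (ring);
}

int
main (void)
{
  tx_ring_t ring = { 0 };
  int pkts[4] = { 0, 1, 2, 3 };
  tx_burst (&ring, pkts, 4);
  return 0;
}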
Change-Id: Ifb841312d4a382547153b24903230b407f649e73
Signed-off-by: Damjan Marion <damarion@cisco.com>
If no PCI address is specified in the dpdk config, the default is to
automatically put all PCI devices in the whitelist.
For vmxnet3 PCI devices, we want to change this default and exclude them,
that is, put them in the blacklist instead of the whitelist.
Change-Id: I2b7061d6437910eb0e1b16df19a770cab968c602
Signed-off-by: Steven <sluong@cisco.com>
Change-Id: I29f20dbaf2c2d735faff297cee552ed648f6f61b
Signed-off-by: Neale Ranns <nranns@cisco.com>
(gdb) bt
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) frame 5
293       if (PREDICT_FALSE (rxvq->last_avail_idx == rxvq->avail->idx))
(gdb) p *rxvq
$3 = {cacheline0 = 0x7f290bcadd80 "\377\003", qsz_mask = 1023, last_avail_idx = 0, last_used_idx = 0, n_since_last_int = 0, desc = 0x0, avail = 0x0, used = 0x0, int_deadline = 0, started = 1 '\001', enabled = 1 '\001', log_used = 0 '\000', cacheline1 = 0x7f290bcaddc0 "\377\377\377\377\016", errfd = -1, callfd_idx = 14, kickfd_idx = 19, log_guest_addr = 5151049792, mode = 0}
The crash happens because we access the null pointer rxvq->avail,
which is supposed to be derived from the mmap region the driver informs us about.
We fixed a similar issue before in
https://gerrit.fd.io/r/#/c/14545/
The reason was that the driver unmaps the memory without doing the disconnect in an
SR-IOV environment. That fix was applied to the RX path. Now the same crash happens in the
TX path. We just need to apply the same check in the TX path.
Change-Id: I7b1dfc96797cb5b52845bc6cec09a8c5d4325280
Signed-off-by: Steven <sluong@cisco.com>
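The essence of the check, shown on a minimal hypothetical struct rather than the real vhost queue type: before the TX path dereferences the shared rings (as in the rxvq->avail->idx access in the backtrace), verify they are still mapped.

#include <stddef.h>

#define PREDICT_FALSE(x) __builtin_expect ((x), 0)

/* Minimal, hypothetical slice of a vhost queue; only the fields the
 * guard needs are shown. */
typedef struct
{
  void *desc;
  void *avail;
  void *used;
} vring_t;

/* Return 0 if the rings are not (or no longer) mapped, e.g. because the
 * driver unmapped the memory without a disconnect; callers should then
 * skip the queue instead of dereferencing NULL. */
static inline int
vring_is_mapped (const vring_t * vq)
{
  if (PREDICT_FALSE (vq->desc == NULL || vq->avail == NULL || vq->used == NULL))
    return 0;
  return 1;
}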
Change-Id: I48a92035b58d83420eb3eed3f05a75ba283543c2
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I9375bca5f5136c84d801dbd635929bb1c37d75b4
Signed-off-by: Filip Varga <filip.varga@pantheon.tech>
Change-Id: I49a5029d256df8f749ee30d19ff7473147b6516f
Signed-off-by: Damjan Marion <damarion@cisco.com>