|
Trajectory trace has been broken for a while because we used to save the
buffer trajectory in a vector pointed to from opaque2. This does not work
well when opaque2 is copied (e.g. because of a clone), as two buffers end up
sharing the same vector.
Instead, this dedicates a full cacheline in the buffer metadata when
trajectory trace is compiled in. No dynamic allocation, no sharing, no tears.
Type: refactor
Change-Id: I6a028ca1b48d38f393a36979e5e452c2dd48ad3f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Add descriptions to clib_file_t template structures so that
sockets can be identified via the 'show unix file' cli command.
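For illustration, a hedged sketch of how such a template might be filled in
(the callback name and description text are made up; .description is what
'show unix file' displays):
static void
example_register_socket (clib_file_main_t *fm, int fd)
{
  clib_file_t template = { 0 };
  template.read_function = example_socket_read_ready; /* hypothetical callback */
  template.file_descriptor = fd;
  template.description = format (0, "example listener, fd %d", fd);
  clib_file_add (fm, &template);
}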
Type: fix
Change-Id: Ibf82d55aa6c7b1126bd252b76d0dc8b7076f5046
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
When using the classifier to filter traces, not all packets will be traced.
In that case, we should only count the packets that are actually traced.
Type: fix
Change-Id: I87d1e217b580ebff8c6ade7860eb43950420ae78
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: feature
Change-Id: I913f08383ee1c24d610c3d2aac07cef402570e2c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
The vlib init function subsystem now supports a mix of procedural and
formally-specified ordering constraints. We should eliminate procedural
knowledge wherever possible.
The following schemes are *roughly* equivalent:
static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  clib_error_t *error;

  ... do some stuff...

  if ((error = vlib_call_init_function (init_runs_next)))
    return error;
  ...
}
VLIB_INIT_FUNCTION (init_runs_first);

and

static clib_error_t *init_runs_first (vlib_main_t *vm)
{
  ... do some stuff...
}
VLIB_INIT_FUNCTION (init_runs_first) =
{
  .runs_before = VLIB_INITS("init_runs_next"),
};
The first form will [most likely] call "init_runs_next" on the
spot. The second form means that "init_runs_first" runs before
"init_runs_next," possibly much earlier in the sequence.
Please DO NOT construct sets of init functions where A before B
actually means A *right before* B. It's not necessary - simply combine
A and B - and it leads to hugely annoying debugging exercises when
trying to switch from ad-hoc procedural ordering constraints to formal
ordering constraints.
Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I4e836244409c98739a13092ee252542a2c5fe259
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Example:
buffers {
  default data-size 1536
}
Change-Id: I5b4436850ca18025c9fdcfc7ed648c2c2732d660
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Typically we have scalar_size == 0, so it doesn't matter much, but
vlib_frame_args was providing a pointer to the scalar frame data, not
the vector data. To avoid future confusion, the function is renamed to
vlib_frame_scalar_args(...).
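To make the distinction concrete, a hedged sketch of a node function using
both accessors (the node itself is hypothetical; only the accessor names come
from this change):
static uword
example_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                 vlib_frame_t *frame)
{
  u32 *buffer_indices = vlib_frame_vector_args (frame); /* per-packet vector data */
  void *scalar_args = vlib_frame_scalar_args (frame);   /* per-frame scalar data, usually empty */
  /* ... process buffer_indices[0 .. frame->n_vectors - 1] ... */
  return frame->n_vectors;
}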
Change-Id: I48b75523b46d487feea24f3f3cb10c528dde516f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
It is cheaper to get the thread index from vlib_main_t if one is available...
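A minimal sketch of the idea (the helper name is made up; vm->thread_index is
the member being read):
static_always_inline u32
example_get_thread_index (vlib_main_t *vm)
{
  /* a plain struct load, instead of resolving thread-local state
     via vlib_get_thread_index () */
  return vm->thread_index;
}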
Change-Id: I4582e160d06d9d7fccdc54271912f0635da79b50
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ibac5a4588e66f6d3ad42dd2583e1e84b7d2314c4
Signed-off-by: sharath reddy <sharathkumarboyanapally@gmail.com>
|
|
https://gerrit.fd.io/r/#/c/8551/ decoupled the global variable,
namely tm->iovecs, from TX and RX. However, to support multiple threads,
we have to replace this global variable with a per-thread variable. I
noticed that rx_buffers must also be a per-thread variable. So, we
introduce a per-thread struct to contain rx_buffers and iovecs.
Each thread will find its per-thread struct via thread_index.
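A hedged sketch of that grouping (type and field names are illustrative, not
the exact ones in the driver):
typedef struct
{
  u32 *rx_buffers;      /* per-thread RX buffer cache */
  struct iovec *iovecs; /* per-thread scatter/gather list */
} example_per_thread_t;

/* one entry per thread, looked up by thread_index */
example_per_thread_t *per_thread_data;

static inline example_per_thread_t *
example_get_per_thread (u32 thread_index)
{
  return vec_elt_at_index (per_thread_data, thread_index);
}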
Change-Id: I61abf2fdace8d722525a382ac72f0d04a173b9ce
Signed-off-by: Steven <sluong@cisco.com>
|
|
It was observed that under heavy traffic, VPP accidentally sent traffic
with the wrong source and destination to the tun/tap interface; traffic
appears to be sent in the wrong direction. This problem is only seen
when a worker thread is configured.
When a worker thread is used, TX and RX may reside on different cores.
Yet both the TX and RX threads share the same global variable, namely
iovecs, without any mutex or memory barrier protection. This creates a
race condition when heavy traffic is blasted at VPP, e.g. 1000 pps.
We could add a mutex or memory barrier to ensure atomic memory access.
But why bother? It is a lot cheaper to just decouple the iovecs such
that TX and RX have their own.
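A hedged sketch of the decoupling (names are illustrative):
typedef struct
{
  /* instead of a single shared 'iovecs' vector, keep one per direction
     so the TX and RX threads never touch the same memory */
  struct iovec *tx_iovecs; /* used only by the TX path */
  struct iovec *rx_iovecs; /* used only by the RX path */
} example_tuntap_main_t;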
Change-Id: I86a5a19bd8de54d54f32e1f0845bae6a81bbf686
Signed-off-by: Steven <sluong@cisco.com>
|
|
This will allow us to use this code in client libraries without vlib.
Change-Id: I8557b752496841ba588aa36b6082cbe2cd1867fe
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
happen for one table
Change-Id: I99d3e9227c33ee42b90e4842080960fcc6c03913
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
This patch deprecates stack-based thread identification and also
removes the requirement that thread stacks are adjacent.
Finally, possibly annoying for some folks, it renames all
occurrences of cpu_index and cpu_number to thread index. Using the
word "cpu" is misleading here, as a thread can be migrated to a
different CPU, and it is also not related to the Linux CPU index.
Change-Id: I68cdaf661e701d2336fc953dcb9978d10a70f7c1
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ifac7d9134d03d79164ce6f06ae9413279bbaadb3
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|