Age | Commit message | Author | Files | Lines |
|
Add lookup/get/set API calls to manage both PCAP and Trace
filtering Classifier tables.
The "lookup" call may be used to identify a Classifier table
within a chain of tables that matches a particular mask vector.
For efficiency, this call should be used to determine to which
table a match vector should be added.
The "get" calls return the first table within a chain (either a
PCAP chain or the Trace chain of tables). The "set" call may be
used to add a new table to one such chain. If the "sort_masks"
flag is set, the tables within the chain are ordered such that
the most-specific mask is first and the least-specific mask is
last. A call that "sets" a chain to ~0 will delete and free
all the tables within the chain.
The PCAP filters are per-interface, with "local0" (that is,
sw_if_index == 0) holding the system-wide PCAP filter.
The Classifier used a reference-counted "set" for each PCAP
or trace filter that it stored. The ref counts were not used,
and the vector of tables was only used temporarily to establish
a sorted order for tables based on masks. None of that
complexity was actually warranted, and where it was used,
the same could be achieved more simply.
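As a rough sketch of how the new calls fit together when installing a
filter table (the typedefs, prototypes, and helper below are assumptions
drawn from this description, not the actual vnet_classify.h declarations):
  typedef unsigned int u32;
  typedef struct vnet_classify_main vnet_classify_main_t;

  /* Assumed shapes of the new calls described above. */
  u32 classify_get_pcap_chain (vnet_classify_main_t *cm, u32 sw_if_index);
  void classify_set_pcap_chain (vnet_classify_main_t *cm, u32 sw_if_index,
                                u32 table_index);
  u32 classify_lookup_chain (u32 table_index, unsigned char *mask,
                             u32 n_skip, u32 n_match);

  /* Sketch: pick the table that should receive a match vector with this mask. */
  static u32
  pick_table_for_mask (vnet_classify_main_t *cm, u32 sw_if_index,
                       unsigned char *mask, u32 n_skip, u32 n_match,
                       u32 new_table_index)
  {
    /* Head of the per-interface PCAP chain; sw_if_index 0 ("local0")
       holds the system-wide filter. */
    u32 head = classify_get_pcap_chain (cm, sw_if_index);

    /* Prefer "lookup": if a table with this mask already exists in the
       chain, add the match vector there rather than creating a table. */
    u32 existing = classify_lookup_chain (head, mask, n_skip, n_match);
    if (existing != ~0U)
      return existing;

    /* Otherwise link the new table into the chain; with "sort_masks" set
       the chain is reordered from most- to least-specific mask. Setting
       a chain to ~0 instead deletes and frees all of its tables. */
    classify_set_pcap_chain (cm, sw_if_index, new_table_index);
    return new_table_index;
  }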
Type: refactor
Signed-off-by: Jon Loeliger <jdl@netgate.com>
Change-Id: Icc56116cca91b91c631ca0628e814fb53f3677d2
|
|
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Inspired by a real-life conundrum: scenario X involves a vpp crash in
ip4-load-balance because vnet_buffer(b)->ip.adj_index[VLIB_TX] is
(still) set to ~0.
The problem takes most of a day to occur, and we need to see the
broken packet's graph trajectory, metadata, etc. to understand the
problem.
Fix a signed/unsigned ASSERT bug in vlib_get_trace_count().
Rename elog_post_mortem_dump() -> vlib_post_mortem_dump(), add
dispatch trace post-mortem dump.
Add FILTER_FLAG_POST_MORTEM so we can (putatively) capture a ludicrous
number of buffer traces, without actually using more than one dispatch
cycle's worth of memory.
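(For context, a minimal illustration of the signed/unsigned comparison
pitfall behind this class of ASSERT bug; the names and values are invented
for the example, not taken from the vlib code:)
  #include <assert.h>

  int
  main (void)
  {
    unsigned int trace_count = 10; /* e.g. a count returned by a trace API */
    int wanted = -1;               /* signed "unlimited" sentinel */

    /* In 'wanted <= trace_count' the signed operand would be converted to
       unsigned, so -1 becomes a huge value and the assertion fires even
       though -1 <= 10 is what the author meant.  Keeping both operands
       signed restores the intended comparison. */
    assert (wanted <= (int) trace_count);
    return 0;
  }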
Type: improvement
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: If093202ef071df46e290370bd9b33bf6560d30e6
|
|
Also spiffed up the vpp_api_test plugin loader so it executes
VLIB_INIT_FUNCTIONs and VLIB_API_INIT_FUNCTIONs.
Type: feature
Change-Id: Id9a4f455d73738c41bcfea220df2112bb9679681
Signed-off-by: Jon Loeliger <jdl@netgate.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Type: fix
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ibad744788e200ce012ad88ff59c2c34920742454
|
|
Packet tracing performance doesn't justify inlining
vlib_add_trace(...) over 500 times.
It makes a 15% text-segment size difference in a representative use-case:
Inline:
  $ size .../vnet_skx.dir/ipsec/ipsec_input.c.o
     text    data     bss     dec     hex filename
     6831      80       0    6911    1aff .../vnet_skx.dir/ipsec/ipsec_input.c.o
Not inline:
  $ size .../vnet_skx.dir/ipsec/ipsec_input.c.o
     text    data     bss     dec     hex filename
     5776      80       0    5856    16e0 .../vnet_skx.dir/ipsec/ipsec_input.c.o
Retain the original code as vlib_add_trace_inline, instantiate once as
vlib_add_trace.
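In outline, the refactor follows the pattern sketched below; the type and
function names are simplified stand-ins, not the actual vlib declarations:
  #include <stddef.h>

  /* Hypothetical stand-ins so the sketch is self-contained. */
  typedef struct { int dummy; } trace_main_t;
  typedef struct { int dummy; } buffer_t;

  /* Header side: the original body remains available as an _inline
     variant for any hot path that genuinely wants it expanded. */
  static inline void *
  add_trace_inline (trace_main_t *tm, buffer_t *b, size_t n_data_bytes)
  {
    (void) tm; (void) b; (void) n_data_bytes;
    /* ... original trace-record allocation logic ... */
    return NULL;
  }

  /* One .c file: the single out-of-line instantiation that ordinary call
     sites now use, so the body lands in the text segment once instead of
     hundreds of times. */
  void *
  add_trace (trace_main_t *tm, buffer_t *b, size_t n_data_bytes)
  {
    return add_trace_inline (tm, b, n_data_bytes);
  }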
Type: refactor
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Iaf431dbf00c4aad03663d86f9dd1322e84d03962
|
|
Configure n-tuple classifier filters which apply to the vpp packet
tracer.
Update the documentation to reflect the new feature.
Add a test vector.
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Iefa911716c670fc12e4825b937b62044433fec36
|
|
Type: feature
Change-Id: I913f08383ee1c24d610c3d2aac07cef402570e2c
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I56f25d653b71a25c70e6c5c1a93dd9c5158f2079
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
There are multiple trace enablement schemes. It's easy to end up in
vlib_add_trace with tracing disabled insofar as the packet tracer is
concerned. When that happens, return the address of a per-system
dummy trace record.
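A minimal sketch of that guard, using invented names rather than the real
vlib ones:
  #include <stddef.h>

  /* One per-system scratch record shared by every "tracing is off" caller;
     anything written into it is simply discarded. */
  static unsigned char dummy_trace_record[256];

  /* Stand-in for the real allocation path, elided in this sketch. */
  static void *
  alloc_real_trace_record (size_t n_data_bytes)
  {
    (void) n_data_bytes;
    return dummy_trace_record; /* placeholder */
  }

  void *
  add_trace_guarded (int tracing_enabled, size_t n_data_bytes)
  {
    /* Callers always get writable memory back, so none of them has to
       special-case "tracing disabled for this packet". */
    if (!tracing_enabled || n_data_bytes > sizeof (dummy_trace_record))
      return dummy_trace_record;

    return alloc_real_trace_record (n_data_bytes);
  }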
Change-Id: I929391b8be4ed57e26e291afdc509a15f09a3160
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
In the CLI parsing code, the following is a common pattern:
  /* Get a line of input. */
  if (!unformat_user (input, unformat_line_input, line_input))
    return 0;
  while (unformat_check_input (line_input) != UNFORMAT_END_OF_INPUT)
    {
      if (unformat (line_input, "x"))
        x = 1;
      :
      else
        return clib_error_return (0, "unknown input `%U'",
                                  format_unformat_error, line_input);
    }
  unformat_free (line_input);
The 'else' branch returns when an unknown string is encountered, which
leaks memory because 'unformat_free (line_input)' is never called. There
are a large number of instances of this pattern.
Replaced the previous pattern with:
  /* Get a line of input. */
  if (!unformat_user (input, unformat_line_input, line_input))
    return 0;
  while (unformat_check_input (line_input) != UNFORMAT_END_OF_INPUT)
    {
      if (unformat (line_input, "x"))
        x = 1;
      :
      else
        {
          error = clib_error_return (0, "unknown input `%U'",
                                     format_unformat_error, line_input);
          goto done;
        }
    }
  /* ...Remaining code... */
done:
  unformat_free (line_input);
  return error;
}
In multiple files, 'unformat_free (line_input);' was never called, so
there was a memory leak whether an invalid string was entered or not.
Also, there were multiple instances where:
  error = clib_error_return (0, "unknown input `%U'",
                             format_unformat_error, input);
used 'input' as the last parameter instead of 'line_input'. As a result,
the error message contained an empty string instead of the offending
input. Fixed all of those as well.
There are a lot of files and this is very mind-numbing work, so I tried
to keep to a single pattern to avoid mistakes.
Change-Id: I8902f0c32a47dd7fb3bb3471a89818571702f1d2
Signed-off-by: Billy McFall <bmcfall@redhat.com>
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I7b51f88292e057c6443b12224486f2d0c9f8ae23
Signed-off-by: Damjan Marion <damarion@cisco.com>
|