From 27d978c9136e903244113a7ab57acea4b496898e Mon Sep 17 00:00:00 2001
From: Dave Barach
Date: Tue, 3 Nov 2020 09:59:06 -0500
Subject: vlib: add postmortem pcap dispatch trace

Inspired by a real-life conundrum: scenario X involves a vpp crash in
ip4-load-balance because vnet_buffer(b)->ip.adj_index[VLIB_TX] is
(still) set to ~0. The problem takes most of a day to occur, and we
need to see the broken packet's graph trajectory, metadata, etc. to
understand the problem.

Fix a signed/unsigned ASSERT bug in vlib_get_trace_count().

Rename elog_post_mortem_dump() -> vlib_post_mortem_dump(), add
dispatch trace post-mortem dump.

Add FILTER_FLAG_POST_MORTEM so we can (putatively) capture a
ludicrous number of buffer traces, without actually using more than
one dispatch cycle's worth of memory.

Type: improvement

Signed-off-by: Dave Barach
Change-Id: If093202ef071df46e290370bd9b33bf6560d30e6
---
 src/vlib/trace.c | 9 +++++++++
 1 file changed, 9 insertions(+)

(limited to 'src/vlib/trace.c')

diff --git a/src/vlib/trace.c b/src/vlib/trace.c
index abd116622c7..f90f275fa87 100644
--- a/src/vlib/trace.c
+++ b/src/vlib/trace.c
@@ -201,6 +201,15 @@ filter_accept (vlib_trace_main_t * tm, vlib_trace_header_t * h)
   if (tm->filter_flag == 0)
     return 1;
 
+  /*
+   * When capturing a post-mortem dispatch trace,
+   * toss all existing traces once per dispatch cycle.
+   * So we can trace 4 billion pkts without running out of
+   * memory...
+   */
+  if (tm->filter_flag == FILTER_FLAG_POST_MORTEM)
+    return 0;
+
   if (tm->filter_flag == FILTER_FLAG_INCLUDE)
     {
       while (h < e)
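
For context, the standalone sketch below mirrors the decision order that
filter_accept() follows after this patch: no filter keeps every trace,
post-mortem mode rejects every trace at filter time so the per-cycle trace
buffers are tossed and recycled once per dispatch cycle, and the existing
include/exclude modes keep or drop a trace depending on whether the filtered
node appears in it. This is an illustrative reconstruction, not vlib code:
the enum values, the flat array of node indices, and filter_accept_sketch()
are hypothetical stand-ins for vlib_trace_main_t, vlib_trace_header_t and
the real filter state.

/*
 * Illustrative sketch only (not vlib code): the flag values, the flat
 * array of node indices and filter_accept_sketch() are hypothetical
 * stand-ins for vlib_trace_main_t / vlib_trace_header_t.
 */
#include <stdio.h>

enum
{
  FILTER_FLAG_NONE = 0,
  FILTER_FLAG_INCLUDE = 1,
  FILTER_FLAG_EXCLUDE = 2,
  FILTER_FLAG_POST_MORTEM = 3,
};

/* Decide whether a freshly captured trace should be retained. */
static int
filter_accept_sketch (int filter_flag, unsigned int filter_node_index,
                      const unsigned int *trace_node_indices, int n_records)
{
  int i, matched = 0;

  if (filter_flag == FILTER_FLAG_NONE)
    return 1;                   /* no filter configured: keep everything */

  /*
   * Post-mortem mode: reject every trace at filter time, so trace
   * buffers are recycled once per dispatch cycle.  Memory use stays
   * bounded to one cycle's worth of traces, and the most recent cycle
   * is still available for the crash-time dump.
   */
  if (filter_flag == FILTER_FLAG_POST_MORTEM)
    return 0;

  /* Include/exclude: did the filtered node appear in this trace? */
  for (i = 0; i < n_records; i++)
    if (trace_node_indices[i] == filter_node_index)
      matched = 1;

  if (filter_flag == FILTER_FLAG_INCLUDE)
    return matched;
  return !matched;              /* FILTER_FLAG_EXCLUDE */
}

int
main (void)
{
  unsigned int path[] = { 10, 42, 7 };  /* hypothetical node indices */

  printf ("post-mortem keeps trace?    %d\n",
          filter_accept_sketch (FILTER_FLAG_POST_MORTEM, 42, path, 3));
  printf ("include-filter keeps trace? %d\n",
          filter_accept_sketch (FILTER_FLAG_INCLUDE, 42, path, 3));
  printf ("exclude-filter keeps trace? %d\n",
          filter_accept_sketch (FILTER_FLAG_EXCLUDE, 42, path, 3));
  return 0;
}

The design point the sketch illustrates: by rejecting traces at filter time
rather than capping how many are captured, post-mortem mode lets the
configured trace count be effectively unlimited while only ever holding the
most recent dispatch cycle's traces in memory.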