authorVengada <venggovi@cisco.com>2017-04-12 04:38:56 -0700
committerDave Barach <openvpp@barachs.net>2017-04-15 12:37:48 +0000
commit68df5a51f6325f635e9abaea0c86dcbf44df1da3 (patch)
tree7b1284e128d7cf5420a312f1f7c49fe15111c05b
parent2594216a9a3673bbf301f9ae630aaa452a27ce4b (diff)
VPP-341: IOAM documentation
Change-Id: I24139082c795ccdfe19d398637a287523ec7a4cc Signed-off-by: Vengada <venggovi@cisco.com> Signed-off-by: Shwetha <shwethab@cisco.com> Signed-off-by: AkshayaNadahalli <anadahal@cisco.com>
-rw-r--r--src/plugins/ioam/ioam_analyser_doc.md101
-rw-r--r--src/plugins/ioam/ioam_ipv6_doc.md530
-rw-r--r--src/plugins/ioam/ioam_manycast_doc.md106
-rw-r--r--src/plugins/ioam/ioam_plugin_doc.md483
-rw-r--r--src/plugins/ioam/ioam_udppinger_doc.md191
-rw-r--r--src/plugins/ioam/ioam_vxlan_gpe_plugin_doc.md243
6 files changed, 1225 insertions, 429 deletions
diff --git a/src/plugins/ioam/ioam_analyser_doc.md b/src/plugins/ioam/ioam_analyser_doc.md
new file mode 100644
index 00000000000..1ee24f61791
--- /dev/null
+++ b/src/plugins/ioam/ioam_analyser_doc.md
@@ -0,0 +1,101 @@
+## IOAM Analyser for IPv6 {#ioam_analyser_doc}
+
+IOAM Analyser for IPv6 does the following:
+- Analyses IOAM records and aggregates statistics
+- Exports the aggregated statistics over IPFIX to an external collector
+
+The following statistics are collected and exported per IOAM flow:
+- All the paths available for the flow : Collected using IOAM Trace.
+- Delay
+- POT data: Number of packets In Policy and Out of Policy.
+- Packet loss count
+- Reordered packet count
+- Duplicate packet count
+
+This feature can work on an IOAM decapsulating node or as a standalone external analyser.
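The per-flow statistics above (packet and byte counters, delay minimum/maximum/mean) could be aggregated along these lines. This is a minimal sketch with hypothetical names, not the plugin's internal data structures:

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    """Per-flow IOAM statistics aggregator (illustrative only)."""
    pkt_counter: int = 0
    bytes_counter: int = 0
    min_delay: float = float("inf")
    max_delay: float = 0.0
    _delay_sum: float = 0.0

    def record(self, pkt_bytes: int, delay: float) -> None:
        # update counters and running delay statistics for one packet
        self.pkt_counter += 1
        self.bytes_counter += pkt_bytes
        self.min_delay = min(self.min_delay, delay)
        self.max_delay = max(self.max_delay, delay)
        self._delay_sum += delay

    @property
    def mean_delay(self) -> float:
        return self._delay_sum / self.pkt_counter if self.pkt_counter else 0.0

stats = FlowStats()
for size, delay in [(1147, 10), (1147, 50), (1147, 15)]:
    stats.record(size, delay)
```

The analyser keeps one such record per classified flow and path, which is what the `min_delay`/`max_delay`/`mean_delay` fields in the operational output below correspond to.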
+
+## Configuration
+
+The following command can be used to configure a VPP node as an IOAM analyser:
+
+ set ioam analyse [export-ipfix-collector] [disable] [listen-ipfix]
+
+- export-ipfix-collector : This keyword instructs VPP to export the IOAM
+analysis data to an external collector via IPFIX. Note
+that the IPFIX collector information has to be configured using the
+following command:
+
+ set ipfix exporter collector <Remote IP Address> src <Local IP address>
+
+- listen-ipfix : This keyword instructs the VPP node to listen on IPFIX port
+4739 to receive raw IOAM records exported by the IOAM Export plugin and
+to analyse those IOAM records.
+
+- disable : This keyword instructs VPP to stop analysing IOAM data.
+
+Example 1 : To use VPP as an IOAM analyser on an IOAM decapsulating node and export:
+
+ set ioam analyse export-ipfix-collector
+ set ipfix exporter collector 172.16.1.254 src 172.16.1.229
+
+ The above commands, configured on an IOAM decapsulating node, will analyse
+ all the IOAM data before decap, aggregate statistics and export them to the
+ node with IP address 172.16.1.254 via IPFIX.
+
+Example 2 : To use VPP as a standalone IOAM analyser and export:
+
+ set ioam analyse export-ipfix-collector listen-ipfix
+ set ipfix exporter collector 172.16.1.254 src 172.16.1.229
+
+ The above commands, configured on a VPP node, will listen on IPFIX
+ port 4739 for IPFIX records containing IOAM raw data, aggregate the
+ statistics and export them to the node with IP address 172.16.1.254 via IPFIX.
+
+## Operational data
+To check the operational data of the VPP IOAM analyser, use the following command:
+
+ show ioam analyse
+
+Example:
+
+ vpp# show ioam analyse
+ iOAM Analyse Information:
+ Flow Number: 1
+ pkt_sent : 400
+ pkt_counter : 400
+ bytes_counter : 458700
+ Trace data:
+ pkt_sent : 400
+ pkt_counter : 100
+ bytes_counter : 458700
+ Trace data:
+ path_map:
+
+ node_id: 0x1, ingress_if: 1, egress_if: 2, state:UP
+ node_id: 0x2, ingress_if: 0, egress_if: 2, state:UP
+ node_id: 0x3, ingress_if: 3, egress_if: 0, state:UP
+ pkt_counter: 200
+ bytes_counter: 229350
+ min_delay: 10
+ max_delay: 50
+ mean_delay: 15
+
+ node_id: 0x1, ingress_if: 1, egress_if: 2, state:UP
+ node_id: 0x4, ingress_if: 10, egress_if: 12, state:UP
+ node_id: 0x3, ingress_if: 3, egress_if: 0, state:UP
+ pkt_counter: 200
+ bytes_counter: 229350
+ min_delay: 19
+ max_delay: 100
+ mean_delay: 35
+
+ POT data:
+ sfc_validated_count : 200
+ sfc_invalidated_count : 200
+
+ Seqno Data:
+ RX Packets : 400
+ Lost Packets : 0
+ Duplicate Packets : 0
+ Reordered Packets : 0
+
diff --git a/src/plugins/ioam/ioam_ipv6_doc.md b/src/plugins/ioam/ioam_ipv6_doc.md
new file mode 100644
index 00000000000..305375fa239
--- /dev/null
+++ b/src/plugins/ioam/ioam_ipv6_doc.md
@@ -0,0 +1,530 @@
+## IOAM for IPv6 {#ioam_ipv6_doc}
+IOAM data is transported as options in an IPv6 hop-by-hop extension header.
+
+### Export of IOAM data
+IPv6 IOAM hop-by-hop data collected is exported as IPFIX records.
+#### Export data format and flow
+- Capturing sections of selected packets: The first few cachelines of the packet, starting at the IPv6 header, are copied and appended into an MTU-sized buffer.
+- Encoding of the data captured: IPFIX is used with a new template ID to indicate raw binary data export, that is, the first 192 octets starting at the IPv6 header, padded if the total length of the packet is less than 192 octets. No formatting of the data is required - one could call it "bulk transport of telemetry data over IPFIX".
+- Sections of captured packets are appended to the buffer. This buffer contains a homogeneous IPFIX data set, so no additional formatting is needed. It is then up to the IPFIX collector to parse, filter, and transform the data into the desired format for a particular deployment or use case.
+- Preparing the data for export: prepending the IP, UDP and IPFIX headers onto the captured buffer.
+- Sending the data: When the buffers are full they are flushed out and replaced. When the packet rate is low, IPFIX packets are periodically flushed from the per-thread buffers.
+[https://gerrit.fd.io/r/#/c/2267] brings in this support.
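The capture-and-pad step described above can be sketched as follows. The constants and helper names here are assumptions for illustration, not the plugin's actual code:

```python
# Sketch of the "bulk telemetry over IPFIX" export described above:
# copy up to 192 octets starting at the IPv6 header, pad shorter packets
# to a fixed record size, and append records into an MTU-sized buffer.
RECORD_LEN = 192
MTU = 1450  # assumed payload budget after IP/UDP/IPFIX headers

def capture_record(ipv6_packet: bytes) -> bytes:
    """Copy the first 192 octets starting at the IPv6 header, zero-padded."""
    rec = ipv6_packet[:RECORD_LEN]
    return rec + b"\x00" * (RECORD_LEN - len(rec))

def append_record(buf: bytearray, ipv6_packet: bytes) -> bool:
    """Append one record; return True when the buffer should be flushed."""
    buf += capture_record(ipv6_packet)
    return len(buf) + RECORD_LEN > MTU  # no room for another record

buf = bytearray()
full = False
while not full:
    # 100-byte sample packet (shorter than 192, so it gets padded)
    full = append_record(buf, b"\x60" + b"\x00" * 99)
```

Because every record is padded to the same 192-octet length, the buffer forms a homogeneous IPFIX data set that can be flushed as-is once the IP, UDP and IPFIX headers are prepended.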
+
+Performance on a node where IOAM data is exported and removed:
+
+ Name State Calls Vectors Suspends Clocks Vectors/Call
+ ip6-export active 1145933 272387308 0 1.12e2 237.69
+ ip6-hop-by-hop active 1145933 272387308 0 4.25e1 237.69
+ ip6-pop-hop-by-hop active 1145933 272387308 0 4.41e1 237.69
+
+## Configuration
+Configuring IOAM involves:
+- Selecting the packets for which IOAM data must be inserted, updated or removed
+ - Selection of packets for IOAM data insertion on IOAM encapsulating node.
+ Selection of packets is done by 5-tuple based classification
+ - Selection of packets for updating IOAM data is implicitly done on the
+ presence of IOAM options in the packet
+ - Selection of packets for removing the IOAM data is done on 5-tuple
+ based classification
+- The kind of data to be collected
+ - Tracing data
+ - Proof of transit
+ - Edge-to-Edge sequence numbers
+- Additional details for processing IOAM data to be collected
+ - For trace data - trace type, number of nodes to be recorded in the trace,
+ time stamp precision, etc.
+ - For POT data - configuration of POT profile required to process the POT data
+ - Enabling sequence numbers per classified flows
+
+The CLI for configuring IOAM is explained here, followed by detailed steps
+and examples to deploy IOAM on VPP as an encapsulating, transit or
+decapsulating IOAM node in the subsequent sub-sections.
+
+VPP IOAM configuration for creating trace profile and enabling trace:
+
+ # set ioam-trace profile trace-type <0x1f|0x7|0x9|0x11|0x19>
+ trace-elts <number of trace elements> trace-tsp <0|1|2|3>
+ node-id <node ID in hex> app-data <application data in hex>
+
+
+A description of each of the options of the CLI follows:
+- trace-type : An entry in the "Node data List" array of the trace option
+can have different formats, following the needs of a deployment.
+For example: Some deployments might only be interested
+in recording the node identifiers, whereas others might be interested
+in recording node identifier and timestamp.
+The following types are currently supported:
+ - 0x1f : Node data to include hop limit (8 bits), node ID (24 bits),
+ ingress and egress interface IDs (16 bits each), timestamp (32 bits),
+ application data (32 bits)
+ - 0x7 : Node data to include hop limit (8 bits), node ID (24 bits),
+ ingress and egress interface IDs (16 bits each)
+ - 0x9 : Node data to include hop limit (8 bits), node ID (24 bits),
+ timestamp (32 bits)
+ - 0x11: Node data to include hop limit (8 bits), node ID (24 bits),
+ application data (32 bits)
+ - 0x19: Node data to include hop limit (8 bits), node ID (24 bits),
+ timestamp (32 bits), application data (32 bits)
+- trace-elts : Defines the length of the node data array in the trace option.
+- trace-tsp : Defines the timestamp precision to use with the enumerated value
+ for precision as follows:
+ - 0 : 32-bit timestamp in seconds
+ - 1 : 32-bit timestamp in milliseconds
+ - 2 : 32-bit timestamp in microseconds
+ - 3 : 32-bit timestamp in nanoseconds
+- node-id : Unique identifier for the node, included in the node ID
+ field of the node data in trace option.
+- app-data : The value configured here is included as-is in the
+application data field of the node data in the trace option.
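As an illustration of the node data layouts listed above, the following sketch packs one trace-type 0x1f entry with the field widths from the list (hop limit 8 bits, node ID 24 bits, interface IDs 16 bits each, timestamp and application data 32 bits each). This is illustrative only, not the plugin's code:

```python
import struct

def pack_node_data_0x1f(hop_limit, node_id, ingress_if, egress_if, ts, app_data):
    """Pack one trace-type 0x1f node data entry (16 octets)."""
    # hop limit (8 bits) and node ID (24 bits) share the first 32-bit word
    word0 = ((hop_limit & 0xFF) << 24) | (node_id & 0xFFFFFF)
    # big-endian: word0, ingress (16), egress (16), timestamp (32), app data (32)
    return struct.pack(">IHHII", word0, ingress_if, egress_if, ts, app_data)

# values taken from the trace example later in this document
entry = pack_node_data_0x1f(0x40, 0x1, 5, 6, 0xB68C2200, 0x1234)
```

Each hop in the trace option appends (or fills in) one such entry, so a `trace-elts 4` profile reserves 4 x 16 octets of node data for trace-type 0x1f.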
+
+Enabling trace is done by setting "trace" in the following command:
+
+
+ # set ioam [trace] [pot] [seqno] [analyse]
+
+### Trace configuration
+
+#### On IOAM encapsulating node
+ - **Configure classifier and apply ACL** to select packets for
+ IOAM data insertion
+ - Example to enable IOAM data insertion for all the packets
+ towards IPv6 address db06::06:
+
+ vpp# classify table acl-miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node ip6-add-hop-by-hop
+ table-index 0 match l3 ip6 dst db06::06 ioam-encap test-encap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+ - **Enable tracing** : Specify node ID, maximum number of nodes for which
+ trace data should be recorded, type of data to be included for recording,
+ optionally application data to be included
+ - Example to enable tracing with a maximum of 4 nodes recorded
+ and the data to be recorded to include - hop limit, node id,
+ ingress and egress interface IDs, timestamp (millisecond precision),
+ application data (0x1234):
+
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4 trace-tsp 1
+ node-id 0x1 app-data 0x1234
+ vpp# set ioam rewrite trace
+
+
+#### On IOAM transit node
+- The transit node requires the trace type, timestamp precision, node ID and
+optionally application data to be configured
+in order to update its node data in the trace option.
+
+Example:
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4 trace-tsp 1
+ node-id 0x2 app-data 0x1234
+
+#### On the IOAM decapsulating node
+- The decapsulating node, like the encapsulating node, requires
+**classification** of the packets from which to remove IOAM data.
+ - Example to decapsulate IOAM data for packets towards
+ db06::06, configure classifier and enable it as an ACL as follows:
+
+
+ vpp# classify table acl-miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node ip6-lookup table-index 0
+ match l3 ip6 dst db06::06 ioam-decap test-decap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+
+- The decapsulating node requires the trace type, timestamp precision,
+node ID and optionally application data to be configured
+in order to update its node data in the trace option before it is decapsulated.
+
+Example:
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4
+ trace-tsp 1 node-id 0x3 app-data 0x1234
+
+
+### Proof of Transit configuration
+
+For details on proof-of-transit,
+see the IETF draft [IOAM-ietf-proof-of-transit].
+To enable Proof of Transit, all the nodes that participate
+(and hence are verified for transit) need a proof of transit profile.
+A script to generate a proof of transit profile as per the mechanism
+described in [IOAM-ietf-proof-of-transit] will be available at [IOAM-Devnet].
+
+The Proof of transit mechanism implemented here is based on
+Shamir's Secret Sharing algorithm.
+The overall algorithm uses two polynomials,
+POLY-1 and POLY-2. The degree of the polynomials depends on the number of nodes
+to be verified for transit.
+POLY-1 is secret and constant. Each node gets a point on POLY-1
+at setup-time and keeps it secret.
+POLY-2 is public, random and per packet.
+Each node is assigned a point on POLY-1 and POLY-2 with the same x index.
+Each node derives its point on POLY-2 each time a packet arrives at it.
+A node then contributes its points on POLY-1 and POLY-2 to construct
+POLY-3 (POLY-3 = POLY-1 + POLY-2) using Lagrange interpolation and
+forwards it towards the verifier by updating the POT data in the packet.
+The verifier constructs POLY-3 from the accumulated value from all the nodes
+and its own points on POLY-1 and POLY-2 and verifies whether
+POLY-3 = POLY-1 + POLY-2. Only the verifier knows POLY-1.
+The solution leverages finite field arithmetic in a field of size "prime number"
+for reasons explained in description of Shamir's secret sharing algorithm.
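The arithmetic above can be illustrated with a toy example. The prime, polynomial coefficients and node points below are made up for illustration; real profiles use the generated 64-bit values from the [IOAM-Devnet] script:

```python
# Toy sketch of the POT mechanism described above (not the plugin's code).
PRIME = 7919  # small toy prime field; real profiles use 64-bit primes

def poly_eval(coeffs, x, p=PRIME):
    # coeffs[0] is the constant term
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def lpc(xs, i, p=PRIME):
    """Lagrange basis polynomial for point i, evaluated at x = 0."""
    num = den = 1
    for j, xj in enumerate(xs):
        if j != i:
            num = num * (-xj) % p
            den = den * (xs[i] - xj) % p
    return num * pow(den, -1, p) % p

# Setup: POLY-1 is secret and constant; nodes 1..3 each hold one point on it.
poly1 = [1234, 17, 29]          # POLY-1(0) = 1234 is the secret
xs = [1, 2, 3]
shares = [poly_eval(poly1, x) for x in xs]

# Per packet: POLY-2 is public, with a fresh random constant term.
rnd = 4242
poly2 = [rnd, 5, 11]

# Each node adds lpc_i * (share_i + POLY-2(x_i)); the packet carries the sum.
cumulative = 0
for i, x in enumerate(xs):
    cumulative = (cumulative + lpc(xs, i) * (shares[i] + poly_eval(poly2, x))) % PRIME

# Verifier: the cumulative value reconstructs POLY-3(0) = POLY-1(0) + POLY-2(0).
assert cumulative == (1234 + rnd) % PRIME
```

Only a party holding a point on the secret POLY-1 (the verifier) can check the reconstructed constant term, which is what makes the scheme a proof that every configured node touched the packet.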
+
+Here is an explanation of POT profile list and profile configuration CLI to
+realize the above mechanism.
+It is best to use the script provided at [IOAM-Devnet] to generate
+this configuration.
+- **Create POT profile** : set pot profile name <string> id [0-1]
+[validator-key 0xu64] prime-number 0xu64 secret_share 0xu64
+lpc 0xu64 polynomial2 0xu64 bits-in-random [0-64]
+ - name : Profile list name.
+ - id : Profile id, it can be 0 or 1.
+ A maximum of two profiles can be configured per profile list.
+ - validator-key : Secret key configured only on the
+ verifier/decapsulating node used to compare and verify proof of transit.
+ - prime-number : Prime number for finite field arithmetic as required by the
+ proof of transit mechanism.
+ - secret_share : Unique point for each node on the secret polynomial POLY-1.
+ - lpc : Lagrange Polynomial Constant (LPC) calculated per node based on
+ its point (the x value used for evaluating the points on the polynomial)
+ on the polynomial, used in Lagrange interpolation
+ for reconstructing the polynomial (POLY-3).
+ - polynomial2 : The pre-evaluated value of the node's point on the
+ 2nd polynomial (POLY-2). This is unique for each node.
+ It is pre-evaluated for all the coefficients of POLY-2 except
+ for the constant part of the polynomial that changes per packet
+ and is received as part of the POT data in the packet.
+ - bits-in-random : Controls the size of the random number to be
+ generated. This number has to match the other numbers generated and used
+ in the profile as per the algorithm.
+
+- **Set a configured profile as active/in-use** :
+set pot profile-active name <string> ID [0-1]
+ - name : Name of the profile list to be used for computing
+ POT data per packet.
+ - ID : Identifier of the profile within the list to be used.
+
+- **Enable POT**:
+set ioam rewrite pot
+
+#### On IOAM encapsulating node
+ - Configure the classifier and apply ACL to select packets for IOAM data insertion.
+ - Example to enable IOAM data insertion for all packets towards
+ IPv6 address db06::06:
+
+ vpp# classify table miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node
+ ip6-add-hop-by-hop table-index 0 match l3 ip6 dst db06::06 ioam-encap test-encap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+
+ - Configure the proof of transit profile list with profiles.
+Each profile list referred to by a name can contain 2 profiles,
+only one is in use for updating proof of transit data at any time.
+ - Example profile list with a profile generated from the
+ script to verify transit through 3 nodes:
+
+
+ vpp# set pot profile name example id 0 prime-number 0x7fff0000fa884685
+ secret_share 0x6c22eff0f45ec56d lpc 0x7fff0000fa884682
+ polynomial2 0xffb543d4a9c bits-in-random 63
+
+ - Enable one of the profiles from the configured profile list as active
+ so that it will be used for calculating proof of transit.
+
+Example enable profile ID 0 from profile list example configured above:
+
+
+ vpp# set pot profile-active name example ID 0
+
+
+ - Enable POT option to be inserted
+
+
+ vpp# set ioam rewrite pot
+
+
+#### On IOAM transit node
+ - Configure the proof of transit profile list with profiles for transit node.
+Example:
+
+
+ vpp# set pot profile name example id 0 prime-number 0x7fff0000fa884685
+ secret_share 0x564cdbdec4eb625d lpc 0x1
+ polynomial2 0x23f3a227186a bits-in-random 63
+
+#### On IOAM decapsulating node / verifier
+- The decapsulating node, similar to the encapsulating node, requires
+classification of the packets from which to remove IOAM data.
+ - Example to decapsulate IOAM data for packets towards db06::06,
+ configure the classifier and enable it as an ACL as follows:
+
+
+ vpp# classify table miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node ip6-lookup table-index 0
+ match l3 ip6 dst db06::06 ioam-decap test-decap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+- To update and verify the proof of transit, the POT profile list should be configured.
+ - Example POT profile list configuration:
+
+ vpp# set pot profile name example id 0 validator-key 0x7fff0000fa88465d
+ prime-number 0x7fff0000fa884685 secret_share 0x7a08fbfc5b93116d lpc 0x3
+ polynomial2 0x3ff738597ce bits-in-random 63
+
+### Configuration to enable sequence numbers
+#### On IOAM encapsulating node
+ - Configure the classifier and apply ACL to select packets for IOAM data insertion.
+ - Example to enable IOAM data insertion for all packets towards
+ IPv6 address db06::06:
+
+ vpp# classify table miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node
+ ip6-add-hop-by-hop table-index 0 match l3 ip6 dst db06::06 ioam-encap test-encap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+To enable insertion of sequence numbers on the IOAM encapsulating node, do the following:
+
+ set ioam rewrite seqno
+
+This will create and insert a unique sequence number for every flow matching a classify session.
+
+On the IOAM decapsulating node, do the following to analyse the sequence numbers:
+
+- Configure the classifier and apply ACL to select packets for IOAM data removal.
+ - Example to enable IOAM data removal for all packets towards
+ IPv6 address db06::06:
+
+ vpp# classify table miss-next ip6-node ip6-lookup mask l3 ip6 dst
+
+ vpp# classify session acl-hit-next ip6-node
+ ip6-lookup table-index 0 match l3 ip6 dst db06::06 ioam-decap test-decap
+
+ vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
+
+The following command will analyse the sequence numbers received:
+
+ set ioam rewrite analyse
+
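Conceptually, the analysis classifies each received sequence number as new, duplicate or reordered, and derives losses from gaps below the highest number seen. A sketch of this idea (illustrative only, not the plugin's algorithm):

```python
def analyse_seqnos(seqnos):
    """Classify received sequence numbers into lost/duplicate/reordered.

    Illustrative sketch: lost packets are gaps below the highest sequence
    number seen; a number arriving below the current highest is reordered;
    a number seen before is a duplicate.
    """
    seen = set()
    highest = 0
    dup = reordered = 0
    for s in seqnos:
        if s in seen:
            dup += 1
            continue
        if s < highest:
            reordered += 1
        seen.add(s)
        highest = max(highest, s)
    lost = highest - len(seen)
    return {"rx": len(seqnos), "lost": lost,
            "duplicate": dup, "reordered": reordered}

# 5 is missing, 3 arrives late and once more as a duplicate
stats = analyse_seqnos([1, 2, 4, 3, 3, 6])
```

These are the same four counters (received, lost, reordered, duplicate) reported by "show ioam e2e" in the operational data below.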
+### Export configuration
+On the IOAM decapsulating node where trace decap is configured, add the following:
+
+ set ioam export ipfix collector <ip4-address> src <ip4-address>
+
+## Operational data
+
+The following CLIs are available to check IOAM operation:
+- To check the IOAM configuration in effect, use "show ioam summary"
+
+Example:
+
+ vpp# show ioam summary
+ REWRITE FLOW CONFIGS - Not configured
+ HOP BY HOP OPTIONS - TRACE CONFIG -
+ Trace Type : 0x1f (31)
+ Trace timestamp precision : 1 (Milliseconds)
+ Num of trace nodes : 4
+ Node-id : 0x2 (2)
+ App Data : 0x1234 (4660)
+ POT OPTION - 1 (Enabled)
+ Try 'show IOAM pot and show pot profile' for more information
+
+- To find statistics about packets for which IOAM options were
+added (encapsulating node) and removed (decapsulating node) execute
+**show errors**
+
+Example on encapsulating node:
+
+
+ vpp# show error
+ Count Node Reason
+ 1208804706 ip6-inacl input ACL hits
+ 1208804706 ip6-add-hop-by-hop Pkts w/ added ip6 hop-by-hop options
+
+Example on decapsulating node:
+
+ vpp# show error
+ Count Node Reason
+ 69508569 ip6-inacl input ACL hits
+ 69508569 ip6-pop-hop-by-hop Pkts w/ removed ip6 hop-by-hop options
+
+- To check the POT profiles use "show pot profile"
+
+Example:
+
+ vpp# show pot profile
+ Profile list in use : example
+ POT Profile at index: 0
+ ID : 0
+ Validator : False (0)
+ Secret share : 0x564cdbdec4eb625d (6218586935324795485)
+ Prime number : 0x7fff0000fa884685 (9223090566081300101)
+ 2nd polynomial(eval) : 0x23f3a227186a (39529304496234)
+ LPC : 0x1 (1)
+ Bit mask : 0x7fffffffffffffff (9223372036854775807)
+ Profile index in use: 0
+ Pkts passed : 0x36 (54)
+
+- To get statistics of POT for packets, use "show ioam pot"
+
+Example at encapsulating or transit node:
+
+ vpp# show ioam pot
+ Pkts with ip6 hop-by-hop POT options - 54
+ Pkts with ip6 hop-by-hop POT options but no profile set - 0
+ Pkts with POT in Policy - 0
+ Pkts with POT out of Policy - 0
+
+
+Example at decapsulating/verification node:
+
+
+ vpp# show ioam pot
+ Pkts with ip6 hop-by-hop POT options - 54
+ Pkts with ip6 hop-by-hop POT options but no profile set - 0
+ Pkts with POT in Policy - 54
+ Pkts with POT out of Policy - 0
+
+
+- For statistics on number of packets processed for trace execute **show ioam trace**
+
+
+ vpp# show ioam trace
+ Pkts with ip6 hop-by-hop trace options - 0
+ Pkts with ip6 hop-by-hop trace options but no profile set - 0
+ Pkts with trace updated - 0
+ Pkts with trace options but no space - 0
+
+- For statistics and information on sequence number processing, execute **show ioam e2e**.
+On the IOAM encapsulating node:
+
+
+ vpp# show ioam e2e
+ IOAM E2E information:
+ Flow name: test-encap
+ SeqNo Data:
+ Current Seq. Number : 156790
+
+On IOAM decapsulating node:
+
+
+ vpp# show ioam e2e
+ IOAM E2E information:
+ Flow name: test-decap
+ SeqNo Data:
+ Current Seq. Number : 156789
+ Highest Seq. Number : 156789
+ Packets received : 156789
+ Lost packets : 0
+ Reordered packets : 0
+ Duplicate packets : 0
+
+ Flow name: test-decap2
+ SeqNo Data:
+ Current Seq. Number : 0
+ Highest Seq. Number : 0
+ Packets received : 0
+ Lost packets : 0
+ Reordered packets : 0
+ Duplicate packets : 0
+
+- Tracing - enable packet tracing of IPv6 packets to view the data inserted and
+collected.
+
+Example when the nodes are receiving data over a DPDK interface:
+enable tracing using "trace add dpdk-input 20"
+(or another input node, e.g. af-packet-input, tapcli-rx, etc.) and
+execute "show trace" to view the IOAM data collected:
+
+
+ vpp# trace add dpdk-input 20
+
+ vpp# show trace
+
+ ------------------- Start of thread 0 vpp_main -------------------
+
+ Packet 1
+
+ 00:00:19:294697: dpdk-input
+ GigabitEthernetb/0/0 rx queue 0
+ buffer 0x10e6b: current data 0, length 214, free-list 0, totlen-nifb 0, trace 0x0
+ PKT MBUF: port 0, nb_segs 1, pkt_len 214
+ buf_len 2176, data_len 214, ol_flags 0x0, data_off 128, phys_addr 0xe9a35a00
+ packet_type 0x0
+ IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294737: ethernet-input
+ IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
+ 00:00:19:294753: ip6-input
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294757: ip6-lookup
+ fib 0 adj-idx 15 : indirect via db05::2 flow hash: 0x00000000
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294802: ip6-hop-by-hop
+ IP6_HOP_BY_HOP: next index 5 len 96 traced 96 Trace Type 0x1f , 1 elts left
+ [0] ttl 0x0 node ID 0x0 ingress 0x0 egress 0x0 ts 0x0
+ app 0x0
+ [1] ttl 0x3e node ID 0x3 ingress 0x1 egress 0x2 ts 0xb68c2213
+ app 0x1234
+ buffer 0x10e6b: current data 0, length 214, free-list 0, totlen-nifb 0, trace 0x0
+ PKT MBUF: port 0, nb_segs 1, pkt_len 214
+ buf_len 2176, data_len 214, ol_flags 0x0, data_off 128, phys_addr 0xe9a35a00
+ packet_type 0x0
+ IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294737: ethernet-input
+ IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
+ 00:00:19:294753: ip6-input
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294757: ip6-lookup
+ fib 0 adj-idx 15 : indirect via db05::2 flow hash: 0x00000000
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 63, payload length 160
+ 00:00:19:294802: ip6-hop-by-hop
+ IP6_HOP_BY_HOP: next index 5 len 96 traced 96 Trace Type 0x1f , 1 elts left
+ [0] ttl 0x0 node ID 0x0 ingress 0x0 egress 0x0 ts 0x0
+ app 0x0
+ [1] ttl 0x3e node ID 0x3 ingress 0x1 egress 0x2 ts 0xb68c2213
+ app 0x1234
+ [2] ttl 0x3f node ID 0x2 ingress 0x1 egress 0x2 ts 0xb68c2204
+ app 0x1234
+ [3] ttl 0x40 node ID 0x1 ingress 0x5 egress 0x6 ts 0xb68c2200
+ app 0x1234
+ POT opt present
+ random = 0x577a916946071950, Cumulative = 0x10b46e78a35a392d, Index = 0x0
+ 00:00:19:294810: ip6-rewrite
+ tx_sw_if_index 1 adj-idx 14 : GigabitEthernetb/0/0
+ IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72 flow hash: 0x00000000
+ IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 62, payload length 160
+ 00:00:19:294814: GigabitEthernetb/0/0-output
+ GigabitEthernetb/0/0
+ IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 62, payload length 160
+ 00:00:19:294820: GigabitEthernetb/0/0-tx
+ GigabitEthernetb/0/0 tx queue 0
+ buffer 0x10e6b: current data 0, length 214, free-list 0, totlen-nifb 0, trace 0x0
+ IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
+ IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
+ tos 0x00, flow label 0x0, hop limit 62, payload length 160
+
diff --git a/src/plugins/ioam/ioam_manycast_doc.md b/src/plugins/ioam/ioam_manycast_doc.md
new file mode 100644
index 00000000000..60321c58c31
--- /dev/null
+++ b/src/plugins/ioam/ioam_manycast_doc.md
@@ -0,0 +1,106 @@
+## IOAM and SRv6 for M-Anycast service selection {#ioam_manycast_doc}
+
+Anycast is often used to have a client choose one out of multiple servers.
+This might be due to performance, scale, or availability reasons.
+If a client initiates a TCP connection in an anycast scenario,
+the TCP session is usually established with the server which answers the quickest.
+
+There are cases where it is desirable to:
+- allow choosing the destination server not based on "fastest response time",
+but based on the delay between server and client (e.g. for a streaming application).
+- allow choosing the destination server based on other parameters,
+such as server load information.
+- ensure that all TCP connections of a particular client are hooked up to the same
+server, i.e. that all TCP sessions following the first one are connected to the same server as the first session.
+
+M-anycast combines IOAM and Segment Routing for IPv6 (SRv6) to provide a solution:
+- IOAM information is added to the initial TCP SYN packet to understand the transmit delay, as well as to the SYN-ACK packet to understand the return delay.
+- SRv6 is used to steer traffic to the set of servers, rather than relying on anycast procedures.
+Clients and servers can be left unchanged;
+SRv6 and IOAM information is added and removed "in transit".
+
+The M-Anycast Server is introduced as a solution component that leverages
+Segment Routing to steer traffic and IOAM for optimized service selection.
+The M-Anycast Server:
+- Hosts the Anycast address of the services
+- Intercepts the TCP SYN, replicates it and sends it to a selected subset of all services using an SRv6 spray policy
+- Chooses an appropriate SYN-ACK using the embedded in-band OAM data and forwards that SYN-ACK to the client with the SRv6 header intact. The SRv6 header in the SYN-ACK received by the client is used to reach the selected server for the subsequent packet exchange.
+
+VPP can function as an M-Anycast server. VPP can also be used as an IOAM and SRv6 decapsulating node at the application server edge. This allows for caching of IOAM data and reattaching it to correlate path performance across the request-response (SYN/SYN-ACK) forwarding paths.
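The SYN-ACK selection described above can be sketched as follows. The dictionary fields and the "receive time minus encap timestamp" delay estimate are assumptions for illustration, not VPP's actual data model:

```python
def select_synack(synacks, wait_for_responses=3):
    """Pick the SYN-ACK with the smallest one-way (server-to-client) delay.

    Sketch of the "oneway" selection mode: wait for up to
    wait_for_responses SYN-ACKs, then choose the one whose IOAM trace
    shows the smallest delay from encapsulation to reception.
    """
    candidates = synacks[:wait_for_responses]
    # one-way delay = local receive time minus the encap timestamp
    # carried in the SYN-ACK's IOAM trace option
    return min(candidates, key=lambda s: s["rx_ts"] - s["encap_ts"])

synacks = [
    {"server": "db07::1", "encap_ts": 100, "rx_ts": 140},  # delay 40
    {"server": "db08::1", "encap_ts": 102, "rx_ts": 120},  # delay 18
    {"server": "db09::1", "encap_ts": 101, "rx_ts": 160},  # delay 59
]
best = select_synack(synacks)
```

The "rtt" mode would instead compare round-trip times measured from the replicated SYN to each SYN-ACK; only the winning SYN-ACK is forwarded to the client.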
+
+## Configuration
+Example: highly redundant video caches deployed as micro-services hosted in multiple public clouds. All services have an IPv6 address allocated from an anycast IPv6 prefix (db06::/64).
+The configuration to enable VPP as an M-Anycast server and as an IOAM caching/reattach node is provided below.
+
+### M-Anycast Server
+- Enable M-Anycast service selection using:
+
+
+ set ioam ip6 sr-tunnel-select [disable] [oneway|rtt] [wait_for_responses <n|default 3>] sr_localsid <IPv6 address>
+
+Example:
+
+
+ set ioam ip6 sr-tunnel-select oneway sr_localsid db0a::2
+
+- Enable IOAM tracing and override the node for selected traffic processing.
+Example:
+To enable M-Anycast service selection with IOAM tracing enabled for the db06::/64 prefix, and on the return path to process service selection for SRv6 localsid db0a::2:
+
+
+ classify table acl-miss-next ip6-node ip6-lookup mask hex 000000000000ffffffffffffffff0000 buckets 2 skip 2 match 1
+
+ classify session acl-hit-next ip6-node ip6-add-syn-hop-by-hop table-index 0 match hex 0000000000000000000000000000000000000000000000000000000000000000000000000000db060000000000000000 ioam-encap anycast
+
+ classify session acl-hit-next ip6-node ip6-lookup table-index 0 match hex 0000000000000000000000000000000000000000000000000000000000000000000000000000db0a0000000000000000 ioam-decap anycast
+
+ set int input acl intfc GigabitEthernet0/4/0 ip6-table 0
+ set int input acl intfc GigabitEthernet0/5/0 ip6-table 0
+ set ioam-trace profile trace-type 0x09 trace-elts 3 trace-tsp 1 node-id 0x1
+ set ioam rewrite trace
+
+
+- Enable the SRv6 spray policy for steering traffic towards the M-Anycast prefix.
+Example:
+To steer the anycast prefix db06::/64 towards servers with addresses db07::1, db08::1 and db09::1:
+
+
+ sr policy add bsid db11::2 next db07::1 insert spray
+ sr policy mod add sl bsid db11::2 next db08::1
+ sr policy mod add sl bsid db11::2 next db09::1
+ sr steer l3 db06::/64 via sr policy bsid db11::2
+ sr localsid address db0a::2 behavior end
+
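The classify table/session commands above match on a hex mask. A sketch of how such masked matching works conceptually (illustrative only, not VPP's vector classifier): "skip 2" skips two 16-byte vectors before the 16-byte mask window, which for an Ethernet+IPv6 packet places the 0xffff bytes over the first 8 octets of the destination address:

```python
# Sketch of masked classification: a session hits when the masked window
# of the packet equals the masked match value. Illustrative only.
SKIP = 2 * 16  # "skip 2" = two 16-byte vectors skipped before the mask

def classify_hit(pkt: bytes, mask: bytes, match: bytes) -> bool:
    window = pkt[SKIP:SKIP + len(mask)]
    masked_pkt = bytes(b & m for b, m in zip(window, mask))
    masked_val = bytes(b & m for b, m in zip(match[SKIP:SKIP + len(mask)], mask))
    return masked_pkt == masked_val

# mask and match from the CLI above: 8 bytes of the IPv6 destination
# (offset 38 = 14-byte Ethernet header + 24 bytes into IPv6), prefix db06::/64
mask = bytes.fromhex("000000000000ffffffffffffffff0000")
match = bytes(38) + bytes.fromhex("db06") + bytes(8)

# an Ethernet+IPv6 packet whose destination is db06::1
pkt = bytes(38) + bytes.fromhex("db060000000000000001") + bytes(14)
```

Packets hitting the session are redirected to the configured ip6-node (e.g. ip6-add-syn-hop-by-hop for encap, ip6-lookup with ioam-decap for decap) instead of the table's miss-next node.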
+
+### IOAM Caching/reattach at application service edge
+- Enable IOAM data caching
+
+
+ set ioam ip6 cache sr_localsid <ip6 address> [disable]
+
+Example:
+
+
+ set ioam ip6 cache sr_localsid db07::1
+
+- Enable IOAM decap.
+Example: To decap IOAM and cache the data towards db06::/64, and to reinsert the data towards db04::/64:
+
+
+ classify table acl-miss-next ip6-node ip6-lookup mask hex 000000000000ffffffffffffffff0000 buckets 2 skip 2 match 1
+
+ classify session acl-hit-next ip6-node ip6-lookup table-index 0 match hex 0000000000000000000000000000000000000000000000000000000000000000000000000000db060000000000000000 ioam-decap anycast
+
+ classify session acl-hit-next ip6-node ip6-lookup table-index 0 match hex 0000000000000000000000000000000000000000000000000000000000000000000000000000db070000000000000000 ioam-decap anycast
+
+ classify session acl-hit-next ip6-node ip6-add-from-cache-hop-by-hop table-index 0 match hex 0000000000000000000000000000000000000000000000000000000000000000000000000000db040000000000000000 ioam-encap anycast-response
+
+ set int input acl intfc GigabitEthernet0/4/0 ip6-table 0
+
+ set ioam-trace profile trace-type 0x1f trace-elts 4 trace-tsp 1 node-id 0x3 app-data 0x1234
+
+- Enable SRv6 localsid processing to strip the SRv6 header before forwarding towards the application service.
+
+
+ sr localsid address db07::1 behavior end psp
+
diff --git a/src/plugins/ioam/ioam_plugin_doc.md b/src/plugins/ioam/ioam_plugin_doc.md
index 343abcf73d8..6b15e498b13 100644
--- a/src/plugins/ioam/ioam_plugin_doc.md
+++ b/src/plugins/ioam/ioam_plugin_doc.md
@@ -1,48 +1,59 @@
-## VPP Inband OAM (iOAM) {#ioam_plugin_doc}
+## VPP Inband OAM (IOAM) {#ioam_plugin_doc}
-In-band OAM (iOAM) is an implementation study to record operational
+Inband OAM (IOAM) is an implementation study to record operational
information in the packet while the packet traverses a path between
two points in the network.
-Overview of iOAM can be found in [iOAM-Devnet] page.
+An overview of IOAM can be found on the [IOAM-Devnet] page.
The following IETF drafts detail the motivation and mechanism for
recording operational information:
- - [iOAM-ietf-requirements] - Describes motivation and usecases for iOAM
- - [iOAM-ietf-data] - Describes data records that can be collected using iOAM
- - [iOAM-ietf-transport] - Lists out the transport protocols
- and mechanism to carry iOAM data records
- - [iOAM-ietf-proof-of-transit] - Describes the idea of Proof of Transit (POT)
+ - [IOAM-ietf-requirements] - Describes motivation and usecases for IOAM
+ - [IOAM-ietf-data] - Describes data records that can be collected using IOAM
+ - [IOAM-ietf-transport] - Lists out the transport protocols
+ and mechanism to carry IOAM data records
+ - [IOAM-ietf-proof-of-transit] - Describes the idea of Proof of Transit (POT)
and mechanisms to operationalize the idea
## Terminology
-In-band OAM is expected to be deployed in a specific domain rather
-than on the overall Internet. The part of the network which employs in-band OAM
-is referred to as **"in-band OAM-domain"**.
-
-In-band OAM data is added to a packet on entering the in-band OAM-domain
+IOAM is expected to be deployed in a specific domain rather
+than on the overall Internet. The part of the network which employs IOAM
+is referred to as **"IOAM-domain"**.
+
+IOAM data is added to a packet on entering the IOAM-domain
and is removed from the packet when exiting the domain.
-Within the in-band OAM-domain, network nodes that the packet traverses
-may update the in-band OAM data records.
+Within the IOAM-domain, network nodes that the packet traverses
+may update the IOAM data records.
-- The node which adds in-band OAM data to the packet is called the
-**"in-band OAM encapsulating node"**.
+- The node which adds IOAM data to the packet is called the
+**"IOAM encapsulating node"**.
-- The node which removes the in-band OAM data is referred to as the
-**"in-band OAM decapsulating node"**.
+- The node which removes the IOAM data is referred to as the
+**"IOAM decapsulating node"**.
-- Nodes within the domain which are aware of in-band OAM data and read
-and/or write or process the in-band OAM data are called
-**"in-band OAM transit nodes"**.
+- Nodes within the domain which are aware of IOAM data and read
+and/or write or process the IOAM data are called
+**"IOAM transit nodes"**.
## Features supported in the current release
-VPP can function as in-band OAM encapsulating, transit and decapsulating node.
-In this version of VPP in-band OAM data is transported as options in an
-IPv6 hop-by-hop extension header. Hence in-band OAM can be enabled
-for IPv6 traffic.
+
+- VPP can function as IOAM encapsulating, transit and decapsulating node and collect:
+ - IOAM Tracing information at each hop the packet traverses
+ - Sequence number using IOAM Edge-to-Edge option to detect packet loss, duplicate, reordering
+ - Proof of transit - to prove packet flow through a set of checkpoint nodes in the IOAM domain
+- VPP can transport IOAM metadata for native IPv6 and VXLAN-GPE encapsulated packets
+- At the IOAM decapsulating node the data captured can be exported as IPFIX records
+- At the IOAM decapsulating node the collected data can be analysed and a summary reported via IPFIX
+
+Using the above IOAM features in VPP, the following solutions are available:
+- IOAM-based UDP pinger; a detailed description can be found at @subpage ioam_udppinger_doc
+- IOAM IPFIX Analyser; a detailed description can be found at @subpage ioam_analyser_doc
+- M-Anycast server using IOAM and SRv6; a detailed description can be
+ found at @subpage ioam_manycast_doc
-The following iOAM features are supported:
-- **In-band OAM Tracing** : In-band OAM supports multiple data records to be
+The following IOAM options are supported:
+
+- **IOAM Tracing** : IOAM supports multiple data records to be
recorded in the packet as the packet traverses the network.
These data records offer insights into the operational behavior of the network.
The following information can be collected in the tracing
@@ -52,413 +63,27 @@ data from the nodes a packet traverses:
- Egress interface ID
- Timestamp
- Pre-configured application data
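For illustration, one node-data entry carrying all of the fields above (for trace-type 0x1f: an 8-bit hop limit packed with a 24-bit node ID, 16-bit ingress/egress interface IDs, a 32-bit timestamp and 32-bit app data) could be packed as follows; the helper is a sketch, not the VPP implementation:

```python
import struct

def pack_node_data(hop_limit, node_id, ingress, egress, ts, app_data):
    """Pack one trace entry: hop-limit(8)|node-id(24), if-ids, ts, app-data."""
    word0 = ((hop_limit & 0xFF) << 24) | (node_id & 0xFFFFFF)
    return struct.pack(">IHHII", word0, ingress, egress, ts, app_data)

rec = pack_node_data(0x40, 0x1, 5, 6, 0xB68C2200, 0x1234)
print(len(rec))  # 16 bytes of node data per hop for trace-type 0x1f
```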
-
-- **In-band OAM Proof of Transit (POT)**: Proof of transit iOAM data is
+- **IOAM Proof of Transit (POT)**: Proof of transit IOAM data is
added to every packet for verifying that a packet traverses a specific
set of nodes.
-In-band OAM data is updated at every node that is enabled with iOAM
+IOAM data is updated at every node that is enabled with IOAM
proof of transit and is used to verify whether a packet traversed
all the specified nodes. When the verifier receives each packet,
it can validate whether the packet traversed the specified nodes.
+- **IOAM sequence number**: The IOAM-defined Edge-to-Edge (E2E) option carries data
+ that is added by the IOAM encapsulating node and interpreted by the IOAM
+ decapsulating node. Currently only sequence numbers use the IOAM Edge-to-Edge
+ option. In order to detect packet loss, packet reordering, or packet
+ duplication in an IOAM-domain, sequence numbers can be added
+ to packets.
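The loss/reordering/duplication accounting at the decapsulating node can be sketched as follows (a simplified model with assumed state handling, not the VPP implementation):

```python
def classify(seq, state):
    """Update per-flow counters given the next received sequence number."""
    if seq in state["seen"]:
        state["duplicate"] += 1
    elif seq < state["highest"]:
        # a late arrival was provisionally counted as lost; reclassify it
        state["reordered"] += 1
        state["lost"] -= 1
        state["seen"].add(seq)
    else:
        # any gap between highest+1 and seq is provisionally counted as lost
        state["lost"] += seq - state["highest"] - 1
        state["highest"] = seq
        state["seen"].add(seq)

state = {"highest": 0, "seen": {0}, "lost": 0, "reordered": 0, "duplicate": 0}
for s in [1, 2, 4, 3, 3, 7]:
    classify(s, state)
print(state["lost"], state["reordered"], state["duplicate"])  # 2 1 1
```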
+Configuration for deploying IOAM for IPv6 is explained in @subpage ioam_ipv6_doc
+Configuration for deploying IOAM for VXLAN-GPE is explained in @subpage ioam_vxlan_gpe_plugin_doc
-## Configuration
-Configuring iOAM involves:
-- Selecting the packets for which iOAM data must be inserted, updated or removed
- - Selection of packets for iOAM data insertion on iOAM encapsulating node.
- Selection of packets is done by 5-tuple based classification
- - Selection of packets for updating iOAM data is implicitly done on the
- presence of iOAM options in the packet
- - Selection of packets for removing the iOAM data is done on 5-tuple
- based classification
-- The kind of data to be collected
- - Tracing data
- - Proof of transit
-- Additional details for processing iOAM data to be collected
- - For trace data - trace type, number of nodes to be recorded in the trace,
- time stamp precision, etc.
- - For POT data - configuration of POT profile required to process the POT data
-
-The CLI for configuring iOAM is explained here followed by detailed steps
-and examples to deploy iOAM on VPP as an encapsulating, transit or
-decapsulating iOAM node in the subsequent sub-sections.
-
-VPP iOAM configuration for enabling trace and POT is as follows:
-
- set ioam rewrite trace-type <0x1f|0x7|0x9|0x11|0x19>
- trace-elts <number of trace elements> trace-tsp <0|1|2|3>
- node-id <node ID in hex> app-data <application data in hex> [pot]
-
-A description of each of the options of the CLI follows:
-- trace-type : An entry in the "Node data List" array of the trace option
-can have different formats, following the needs of the a deployment.
-For example: Some deployments might only be interested
-in recording the node identifiers, whereas others might be interested
-in recording node identifier and timestamp.
-The following types are currently supported:
- - 0x1f : Node data to include hop limit (8 bits), node ID (24 bits),
- ingress and egress interface IDs (16 bits each), timestamp (32 bits),
- application data (32 bits)
- - 0x7 : Node data to include hop limit (8 bits), node ID (24 bits),
- ingress and egress interface IDs (16 bits each)
- - 0x9 : Node data to include hop limit (8 bits), node ID (24 bits),
- timestamp (32 bits)
- - 0x11: Node data to include hop limit (8 bits), node ID (24 bits),
- application data (32 bits)
- - 0x19: Node data to include hop limit (8 bits), node ID (24 bits),
- timestamp (32 bits), application data (32 bits)
-- trace-elts : Defines the length of the node data array in the trace option.
-- trace-tsp : Defines the timestamp precision to use with the enumerated value
- for precision as follows:
- - 0 : 32bits timestamp in seconds
- - 1 : 32bits timestamp in milliseconds
- - 2 : 32bits timestamp in microseconds
- - 3 : 32bits timestamp in nanoseconds
-- node-id : Unique identifier for the node, included in the node ID
- field of the node data in trace option.
-- app-data : The value configured here is included as is in
-application data field of node data in trace option.
-- pot : Enables POT option to be included in the iOAM options.
-
-### Trace configuration
-
-#### On in-band OAM encapsulating node
- - **Configure classifier and apply ACL** to select packets for
- iOAM data insertion
- - Example to enable iOAM data insertion for all the packets
- towards IPv6 address db06::06:
-
- vpp# classify table miss-next node ip6-lookup mask l3 ip6 dst
-
- vpp# classify session acl-hit-next node ip6-add-hop-by-hop
- table-index 0 match l3 ip6 dst db06::06
-
- vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
-
- - **Enable tracing** : Specify node ID, maximum number of nodes for which
- trace data should be recorded, type of data to be included for recording,
- optionally application data to be included
- - Example to enable tracing with a maximum of 4 nodes recorded
- and the data to be recorded to include - hop limit, node id,
- ingress and egress interface IDs, timestamp (millisecond precision),
- application data (0x1234):
-
-
- vpp# set ioam rewrite trace-type 0x1f trace-elts 4 trace-tsp 1
- node-id 0x1 app-data 0x1234
-
-
-
-#### On in-band OAM transit node
-- The transit node requires trace type, timestamp precision, node ID and
-optionally application data to be configured,
-to update its node data in the trace option.
-
-Example:
-
- vpp# set ioam rewrite trace-type 0x1f trace-elts 4 trace-tsp 1
- node-id 0x2 app-data 0x1234
-
-#### On the In-band OAM decapsulating node
-- The decapsulating node similar to encapsulating node requires
-**classification** of the packets to remove iOAM data from.
- - Example to decapsulate iOAM data for packets towards
- db06::06, configure classifier and enable it as an ACL as follows:
-
-
- vpp# classify table miss-next node ip6-lookup mask l3 ip6 dst
-
- vpp# classify session acl-hit-next node ip6-lookup table-index 0
- match l3 ip6 dst db06::06 opaque-index 100
-
- vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
-
-
-- Decapsulating node requires trace type, timestamp precision,
-node ID and optionally application data to be configured,
-to update its node data in the trace option before it is decapsulated.
-
-Example:
-
- vpp# set ioam rewrite trace-type 0x1f trace-elts 4
- trace-tsp 1 node-id 0x3 app-data 0x1234
-
-
-### Proof of Transit configuration
-
-For details on proof-of-transit,
-see the IETF draft [iOAM-ietf-proof-of-transit].
-To enable Proof of Transit all the nodes that participate
-and hence are verified for transit need a proof of transit profile.
-A script to generate a proof of transit profile as per the mechanism
-described in [iOAM-ietf-proof-of-transit] will be available at [iOAM-Devnet].
-
-The Proof of transit mechanism implemented here is based on
-Shamir's Secret Sharing algorithm.
-The overall algorithm uses two polynomials
-POLY-1 and POLY-2. The degree of polynomials depends on number of nodes
-to be verified for transit.
-POLY-1 is secret and constant. Each node gets a point on POLY-1
-at setup-time and keeps it secret.
-POLY-2 is public, random and per packet.
-Each node is assigned a point on POLY-1 and POLY-2 with the same x index.
-Each node derives its point on POLY-2 each time a packet arrives at it.
-A node then contributes its points on POLY-1 and POLY-2 to construct
-POLY-3 (POLY-3 = POLY-1 + POLY-2) using lagrange extrapolation and
-forwards it towards the verifier by updating POT data in the packet.
-The verifier constructs POLY-3 from the accumulated value from all the nodes
-and its own points on POLY-1 and POLY-2 and verifies whether
-POLY-3 = POLY-1 + POLY-2. Only the verifier knows POLY-1.
-The solution leverages finite field arithmetic in a field of size "prime number"
-for reasons explained in description of Shamir's secret sharing algorithm.
-
-Here is an explanation of POT profile list and profile configuration CLI to
-realize the above mechanism.
-It is best to use the script provided at [iOAM-Devnet] to generate
-this configuration.
-- **Create POT profile** : set pot profile name <string> id [0-1]
-[validator-key 0xu64] prime-number 0xu64 secret_share 0xu64
-lpc 0xu64 polynomial2 0xu64 bits-in-random [0-64]
- - name : Profile list name.
- - id : Profile id, it can be 0 or 1.
- A maximum of two profiles can be configured per profile list.
- - validator-key : Secret key configured only on the
- verifier/decapsulating node used to compare and verify proof of transit.
- - prime-number : Prime number for finite field arithmetic as required by the
- proof of transit mechanism.
- - secret_share : Unique point for each node on the secret polynomial POLY-1.
- - lpc : Lagrange Polynomial Constant(LPC) calculated per node based on
- its point (x value used for evaluating the points on the polynomial)
- on the polynomial used in lagrange extrapolation
- for reconstructing polynomial (POLY-3).
- - polynomial2 : Is the pre-evaluated value of the point on
- 2nd polynomial(POLY-2). This is unique for each node.
- It is pre-evaluated for all the coefficients of POLY-2 except
- for the constant part of the polynomial that changes per packet
- and is received as part of the POT data in the packet.
- - bits-in-random : To control the size of the random number to be
- generated. This number has to match the other numbers generated and used
- in the profile as per the algorithm.
-
-- **Set a configured profile as active/in-use** :
-set pot profile-active name <string> ID [0-1]
- - name : Name of the profile list to be used for computing
- POT data per packet.
- - ID : Identifier of the profile within the list to be used.
-
-#### On In-band OAM encapsulating node
- - Configure the classifier and apply ACL to select packets for iOAM data insertion.
- - Example to enable iOAM data insertion for all the packet towards
- IPv6 address db06::06 -
-
-
- vpp# classify table miss-next node ip6-lookup mask l3 ip6 dst
-
- vpp# classify session acl-hit-next node
- ip6-add-hop-by-hop table-index 0 match l3 ip6 dst db06::06
-
- vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
-
-
- - Configure the proof of transit profile list with profiles.
-Each profile list referred to by a name can contain 2 profiles,
-only one is in use for updating proof of transit data at any time.
- - Example profile list example with a profile generated from the
- script to verify transit through 3 nodes is:
-
-
- vpp# set pot profile name example id 0 prime-number 0x7fff0000fa884685
- secret_share 0x6c22eff0f45ec56d lpc 0x7fff0000fa884682
- polynomial2 0xffb543d4a9c bits-in-random 63
-
- - Enable one of the profiles from the configured profile list as active
- so that is will be used for calculating proof of transit
-
-Example enable profile ID 0 from profile list example configured above:
-
-
- vpp# set pot profile-active name example ID 0
-
-
- - Enable POT option to be inserted
-
-
- vpp# set ioam rewrite pot
-
-
-#### On in-band OAM transit node
- - Configure the proof of transit profile list with profiles for transit node.
-Example:
-
-
- vpp# set pot profile name example id 0 prime-number 0x7fff0000fa884685
- secret_share 0x564cdbdec4eb625d lpc 0x1
- polynomial2 0x23f3a227186a bits-in-random 63
-
-#### On in-band OAM decapsulating node / verifier
-- The decapsulating node, similar to the encapsulating node requires
-classification of the packets to remove iOAM data from.
- - Example to decapsulate iOAM data for packets towards db06::06
- configure classifier and enable it as an ACL as follows:
-
-
- vpp# classify table miss-next node ip6-lookup mask l3 ip6 dst
-
- vpp# classify session acl-hit-next node ip6-lookup table-index 0
- match l3 ip6 dst db06::06 opaque-index 100
-
- vpp# set int input acl intfc GigabitEthernet0/0/0 ip6-table 0
-
-- To update and verify the proof of transit, POT profile list should be configured.
- - Example POT profile list configured as follows:
-
- vpp# set pot profile name example id 0 validate-key 0x7fff0000fa88465d
- prime-number 0x7fff0000fa884685 secret_share 0x7a08fbfc5b93116d lpc 0x3
- polynomial2 0x3ff738597ce bits-in-random 63
-
-## Operational data
-
-Following CLIs are available to check iOAM operation:
-- To check iOAM configuration that are effective use "show ioam summary"
-
-Example:
-
- vpp# show ioam summary
- REWRITE FLOW CONFIGS - Not configured
- HOP BY HOP OPTIONS - TRACE CONFIG -
- Trace Type : 0x1f (31)
- Trace timestamp precision : 1 (Milliseconds)
- Num of trace nodes : 4
- Node-id : 0x2 (2)
- App Data : 0x1234 (4660)
- POT OPTION - 1 (Enabled)
- Try 'show ioam pot and show pot profile' for more information
-
-- To find statistics about packets for which iOAM options were
-added (encapsulating node) and removed (decapsulating node) execute
-*show errors*
-
-Example on encapsulating node:
-
-
- vpp# show error
- Count Node Reason
- 1208804706 ip6-inacl input ACL hits
- 1208804706 ip6-add-hop-by-hop Pkts w/ added ip6 hop-by-hop options
-
-Example on decapsulating node:
-
- vpp# show error
- Count Node Reason
- 69508569 ip6-inacl input ACL hits
- 69508569 ip6-pop-hop-by-hop Pkts w/ removed ip6 hop-by-hop options
-
-- To check the POT profiles use "show pot profile"
-
-Example:
-
- vpp# show pot profile
- Profile list in use : example
- POT Profile at index: 0
- ID : 0
- Validator : False (0)
- Secret share : 0x564cdbdec4eb625d (6218586935324795485)
- Prime number : 0x7fff0000fa884685 (9223090566081300101)
- 2nd polynomial(eval) : 0x23f3a227186a (39529304496234)
- LPC : 0x1 (1)
- Bit mask : 0x7fffffffffffffff (9223372036854775807)
- Profile index in use: 0
- Pkts passed : 0x36 (54)
-
-- To get statistics of POT for packets use "show ioam pot"
-
-Example at encapsulating or transit node:
-
- vpp# show ioam pot
- Pkts with ip6 hop-by-hop POT options - 54
- Pkts with ip6 hop-by-hop POT options but no profile set - 0
- Pkts with POT in Policy - 0
- Pkts with POT out of Policy - 0
-
-
-Example at decapsulating/verification node:
-
-
- vpp# show ioam pot
- Pkts with ip6 hop-by-hop POT options - 54
- Pkts with ip6 hop-by-hop POT options but no profile set - 0
- Pkts with POT in Policy - 54
- Pkts with POT out of Policy - 0
-
-- Tracing - enable trace of IPv6 packets to view the data inserted and
-collected.
-
-Example when the nodes are receiving data over a DPDK interface:
-Enable tracing using "trace add dpdk-input 20" and
-execute "show trace" to view the iOAM data collected:
-
-
- vpp# trace add dpdk-input 20
-
- vpp# show trace
-
- ------------------- Start of thread 0 vpp_main -------------------
-
- Packet 1
-
- 00:00:19:294697: dpdk-input
- GigabitEthernetb/0/0 rx queue 0
- buffer 0x10e6b: current data 0, length 214, free-list 0, totlen-nifb 0, trace 0x0
- PKT MBUF: port 0, nb_segs 1, pkt_len 214
- buf_len 2176, data_len 214, ol_flags 0x0, data_off 128, phys_addr 0xe9a35a00
- packet_type 0x0
- IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
- tos 0x00, flow label 0x0, hop limit 63, payload length 160
- 00:00:19:294737: ethernet-input
- IP6: 00:50:56:9c:df:72 -> 00:50:56:9c:be:55
- 00:00:19:294753: ip6-input
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
- tos 0x00, flow label 0x0, hop limit 63, payload length 160
- 00:00:19:294757: ip6-lookup
- fib 0 adj-idx 15 : indirect via db05::2 flow hash: 0x00000000
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
- tos 0x00, flow label 0x0, hop limit 63, payload length 160
- 00:00:19:294802: ip6-hop-by-hop
- IP6_HOP_BY_HOP: next index 5 len 96 traced 96 Trace Type 0x1f , 1 elts left
- [0] ttl 0x0 node ID 0x0 ingress 0x0 egress 0x0 ts 0x0
- app 0x0
- [1] ttl 0x3e node ID 0x3 ingress 0x1 egress 0x2 ts 0xb68c2213
- app 0x1234
- [2] ttl 0x3f node ID 0x2 ingress 0x1 egress 0x2 ts 0xb68c2204
- app 0x1234
- [3] ttl 0x40 node ID 0x1 ingress 0x5 egress 0x6 ts 0xb68c2200
- app 0x1234
- POT opt present
- random = 0x577a916946071950, Cumulative = 0x10b46e78a35a392d, Index = 0x0
- 00:00:19:294810: ip6-rewrite
- tx_sw_if_index 1 adj-idx 14 : GigabitEthernetb/0/0
- IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72 flow hash: 0x00000000
- IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
- tos 0x00, flow label 0x0, hop limit 62, payload length 160
- 00:00:19:294814: GigabitEthernetb/0/0-output
- GigabitEthernetb/0/0
- IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
- tos 0x00, flow label 0x0, hop limit 62, payload length 160
- 00:00:19:294820: GigabitEthernetb/0/0-tx
- GigabitEthernetb/0/0 tx queue 0
- buffer 0x10e6b: current data 0, length 214, free-list 0, totlen-nifb 0, trace 0x0
- IP6: 00:50:56:9c:be:55 -> 00:50:56:9c:df:72
-
- IP6_HOP_BY_HOP_OPTIONS: db05::2 -> db06::6
-
- tos 0x00, flow label 0x0, hop limit 62, payload length 160
-[iOAM-Devnet]: <https://github.com/ciscodevnet/iOAM>
-[iOAM-ietf-requirements]:<https://tools.ietf.org/html/draft-brockners-inband-oam-requirements-01>
-[iOAM-ietf-transport]:<https://tools.ietf.org/html/draft-brockners-inband-oam-transport-01>
-[iOAM-ietf-data]:<https://tools.ietf.org/html/draft-brockners-inband-oam-data-01>
-[iOAM-ietf-proof-of-transit]:<https://tools.ietf.org/html/draft-brockners-proof-of-transit-01>
+[IOAM-Devnet]: <https://github.com/ciscodevnet/IOAM>
+[IOAM-ietf-requirements]:<https://tools.ietf.org/html/draft-brockners-inband-oam-requirements-03>
+[IOAM-ietf-transport]:<https://tools.ietf.org/html/draft-brockners-inband-oam-transport-03>
+[IOAM-ietf-data]:<https://tools.ietf.org/html/draft-brockners-inband-oam-data-04>
+[IOAM-ietf-proof-of-transit]:<https://tools.ietf.org/html/draft-brockners-proof-of-transit-03>
diff --git a/src/plugins/ioam/ioam_udppinger_doc.md b/src/plugins/ioam/ioam_udppinger_doc.md
new file mode 100644
index 00000000000..3bb42b7f290
--- /dev/null
+++ b/src/plugins/ioam/ioam_udppinger_doc.md
@@ -0,0 +1,191 @@
+## UDP-Pinger for IPv6 with IOAM {#ioam_udppinger_doc}
+
+Traditionally, ping and traceroute are used to detect and isolate network
+faults. But in a complex network with a large number of U/E-CMP paths
+available, it is difficult to detect and isolate faults, and detecting
+loss/reordering/duplication of packets becomes much harder.
+[draft-lapukhov-dataplane-probe] uses active probes to solve the
+above-mentioned problems. UDP-Pinger with IOAM combines
+[draft-lapukhov-dataplane-probe] with [IOAM-ietf-data] and
+[IOAM-ietf-transport] to provide a more sophisticated way of
+detecting and isolating network faults and to enable network telemetry.
+
+UDP-Pinger for IPv6 does the following:
+- Crafts and sends Probe packets from the source node to the destination.
+- A Probe packet is an IPv6 packet with a hop-by-hop header to collect IOAM
+data, and a UDP header followed by a payload.
+- UDP source and destination ports are varied to cover all possible
+paths in the network as well as to simulate real application traffic.
+- IOAM Trace option is used to record the path Probe packets take
+and also for measuring latency.
+- IOAM E2E option is used to measure packet loss, reordering and
+duplicate packets.
+- The UDP payload follows the packet format in [draft-lapukhov-dataplane-probe]
+and is used on the source/destination nodes to identify Probe/Reply packets.
+- The destination node, on receiving a Probe packet, sends a reply back to the
+ source. The Reply packet is formed by swapping the source and destination IP
+addresses and changing the packet type in the UDP payload.
+- The source node, on receiving a Reply packet, can trace the packet path and
+measure latency, packet loss, reordering and duplication.
+- On detecting a fault in the network, Probe packets are sent with the loopback
+flag set. On seeing the loopback flag, each device in the network, along with
+forwarding the packet, also sends a copy back to the source. With this, the
+source node can correlate and detect the faulty node/link.
+
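The probe/reply mechanics above can be sketched as follows (function names are illustrative assumptions, not VPP internals): sweeping the port ranges makes flows hash onto different ECMP paths, and a Reply simply swaps the probe's addressing:

```python
from itertools import product

def probe_flows(src_ports, dst_ports):
    """One flow per (src-port, dst-port) pair, to exercise distinct ECMP paths."""
    return list(product(src_ports, dst_ports))

def make_reply(probe):
    """A Reply swaps the probe's source/destination addressing."""
    src, dst, sport, dport = probe
    return (dst, src, dport, sport)

flows = probe_flows(range(5000, 5003), range(6000, 6003))
print(len(flows))  # 9 flows for a 3x3 port range
print(make_reply(("db00::1", "db02::1", 5000, 6000)))
```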
+## Configuration
+The following section describes how to enable UDP-Pinger on a VPP node.
+Please note that IOAM configurations as mentioned in @subpage ioam_ipv6_doc
+have to be made prior to starting any of the below configurations.
+
+### UDP-Pinger Enable/Disable
+For configuring UDP-Pinger between two end-points, the following parameters
+need to be provided using the CLI:
+
+ set udp-ping src <local IPv6 address> src-port-range <local port range>
+ dst <remote IPv6 address> dst-port-range <destination port range>
+ interval <time interval in sec> [fault-detect] [disable]
+
+- src : IPv6 address of local node.
+- src-port-range : Port range for source port in UDP header.
+ Syntax is <start_port>:<end_port>
+- dst : IPv6 address of the destination node.
+- dst-port-range : Port range for destination port in UDP header.
+ Syntax is <start_port>:<end_port>
+- interval : Time interval in seconds at which Probe packets are sent out.
+- fault-detect : Enables IOAM loopback functionality on detecting a failure,
+ in order to identify the faulty node/link.
+- disable : Used for deleting a UDP-Ping flow.
+
+Example:
+
+ To create a UDP-Pinger session:
+ set udp-ping src db00::1 src-port-range 5000:5002 dst db02::1 dst-port-range 6000:6002 interval 1 fault-detect
+
+ To delete a UDP-Pinger session:
+ set udp-ping src db00::1 src-port-range 5000:5002 dst db02::1 dst-port-range 6000:6002 interval 1 fault-detect disable
+
+### UDP-Pinger Data Export
+For exporting network telemetry data extracted from UDP-Pinger sessions,
+below command is used. Data is exported as IP-Fix records.
+
+ set udp-ping export-ipfix [disable]
+
+On enabling udp-ping export, UDP-Pinger data is exported as an
+IP-Fix record to the IP-Fix collector address configured in
+IP-Fix using the command:
+
+ set ipfix exporter collector <Remote IP Address> src <Local IP address>
+
+The following data is exported from UDP-Pinger:
+- IOAM Trace information for each of UDP-Pinger flow
+- Roundtrip Delay
+- Packet loss count
+- Reordered Packet count
+- Duplicate Packet count
+
+Example:
+
+ To enable export:
+ set ipfix exporter collector 172.16.1.254 src 172.16.1.229
+ set udp-ping export-ipfix
+
+ To disable export:
+ set udp-ping export-ipfix disable
+
+## Operational data
+For checking the state of the UDP-Pinger sessions, below command can be used:
+
+ show udp-ping summary
+
+The command displays the following for each UDP-Pinger session:
+- IOAM Trace information for each of UDP-Pinger flow
+- Roundtrip Delay
+- Packet loss count
+- Reordered Packet count
+- Duplicate Packet count
+
+Example:
+
+ vpp#show udp-ping summary
+ UDP-Ping data:
+ Src: db00::1, Dst: db02::1
+ Start src port: 5000, End src port: 5002
+ Start dst port: 6000, End dst port: 6002
+ Interval: 1
+
+ Src Port - 5000, Dst Port - 6000, Flow CTX - 0
+ Path State - Up
+ Path Data:
+ pkt_sent : 400
+ pkt_counter : 400
+ bytes_counter : 458700
+ Trace data:
+ pkt_sent : 400
+ pkt_counter : 400
+ bytes_counter : 45870
+ Trace data:
+ path_map:
+
+ node_id: 0x1, ingress_if: 1, egress_if: 2, state:UP
+ node_id: 0x2, ingress_if: 0, egress_if: 2, state:UP
+ node_id: 0x3, ingress_if: 3, egress_if: 0, state:UP
+ node_id: 0x2, ingress_if: 4, egress_if: 9, state:UP
+ node_id: 0x1, ingress_if: 10, egress_if: 0, state:UP
+ pkt_counter: 400
+ bytes_counter: 45870
+ min_delay: 10
+ max_delay: 50
+ mean_delay: 15
+
+ POT data:
+ sfc_validated_count : 0
+ sfc_invalidated_count : 0
+
+ Seqno Data:
+ RX Packets : 400
+ Lost Packets : 0
+ Duplicate Packets : 0
+ Reordered Packets : 0
+
+ Src Port - 5000, Dst Port - 6001, Flow CTX - 1
+ Path State - Down
+ Path Data:
+ pkt_sent : 500
+ pkt_counter : 400
+ bytes_counter : 45870
+ Trace data:
+ pkt_sent : 500
+ pkt_counter : 400
+ bytes_counter : 45870
+ Trace data:
+ path_map:
+
+ node_id: 0x1, ingress_if: 1, egress_if: 2, state:UP
+ node_id: 0x2, ingress_if: 0, egress_if: 2, state:UP
+ node_id: 0x3, ingress_if: 3, egress_if: 0, state:Down
+ node_id: 0x2, ingress_if: 4, egress_if: 9, state:Down
+ node_id: 0x1, ingress_if: 10, egress_if: 0, state:Down
+ pkt_counter: 500
+ bytes_counter: 45870
+ min_delay: 50
+ max_delay: 500
+ mean_delay: 100
+
+ POT data:
+ sfc_validated_count : 0
+ sfc_invalidated_count : 0
+
+ Seqno Data:
+ RX Packets : 400
+ Lost Packets : 100
+ Duplicate Packets : 20
+ Reordered Packets : 5
+
+ <So on for other source/destination port combinations>
+
+
+[draft-lapukhov-dataplane-probe]:<https://tools.ietf.org/html/draft-lapukhov-dataplane-probe-01>
+[IOAM-ietf-data]:<https://tools.ietf.org/html/draft-brockners-inband-oam-data-04>
+[IOAM-ietf-transport]:<https://tools.ietf.org/html/draft-brockners-inband-oam-transport-03>
+
diff --git a/src/plugins/ioam/ioam_vxlan_gpe_plugin_doc.md b/src/plugins/ioam/ioam_vxlan_gpe_plugin_doc.md
new file mode 100644
index 00000000000..c37df3ef05f
--- /dev/null
+++ b/src/plugins/ioam/ioam_vxlan_gpe_plugin_doc.md
@@ -0,0 +1,243 @@
+## IOAM over VxLAN-GPE {#ioam_vxlan_gpe_plugin_doc}
+
+This document describes how to configure and monitor IOAM over VxLAN-GPE.
+The packet formats used by the implementation are specified
+in the IETF draft below:
+ - [IOAM-ietf-transport] - Lists out the transport protocols
+ and mechanism to carry IOAM data records
+
+## Features supported in the current release
+VPP can function as IOAM encapsulating, transit and decapsulating node.
+This section describes IOAM data transported as options in a
+VxLAN-GPE extension header.
+
+The following IOAM features are supported:
+
+- **IOAM Tracing** : IOAM supports multiple data records to be
+recorded in the packet as the packet traverses the network.
+These data records offer insights into the operational behavior of the network.
+The following information can be collected in the tracing
+data from the nodes a packet traverses:
+ - Node ID
+ - Ingress interface ID
+ - Egress interface ID
+ - Timestamp
+ - Pre-configured application data
+
+## Configuration
+Configuring IOAM over VxLAN-GPE involves:
+- Selecting the VxLAN-GPE tunnel for which IOAM data must be inserted, updated or removed
+ - For flows transported over VxLAN-GPE, selection of packets is done based
+ on the tuple of <VtepSrcIP, VtepDstIp, VNID>
+ - Selection of packets for updating IOAM data is implicitly done on the
+ presence of IOAM options in the packet
+ - Selection of packets for removing the IOAM data is done when the VxLAN-GPE tunnel is terminated.
+- The kind of data to be collected
+ - Tracing data
+- Additional details for processing IOAM data to be collected
+ - For trace data - trace type, number of nodes to be recorded in the trace,
+ time stamp precision, etc.
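The tuple-based tunnel selection described above can be sketched as follows (helper names and structures are assumptions for illustration, not VPP internals):

```python
# IOAM-enabled VxLAN-GPE tunnels, keyed by <VtepSrcIP, VtepDstIp, VNID>.
ioam_enabled = set()

def set_vxlan_gpe_ioam(src, dst, vni, disable=False):
    """Mirror of the CLI: enable/disable IOAM for one tunnel tuple."""
    key = (src, dst, vni)
    if disable:
        ioam_enabled.discard(key)
    else:
        ioam_enabled.add(key)

def needs_ioam(pkt):
    """Packets are selected when their tunnel tuple is enabled."""
    return (pkt["vtep_src"], pkt["vtep_dst"], pkt["vni"]) in ioam_enabled

set_vxlan_gpe_ioam("10.1.1.1", "10.1.1.2", 13)
print(needs_ioam({"vtep_src": "10.1.1.1", "vtep_dst": "10.1.1.2", "vni": 13}))
```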
+
+The CLI for configuring IOAM is explained here followed by detailed steps
+and examples to deploy IOAM for VxLAN-GPE on VPP as an encapsulating, transit or
+decapsulating IOAM node in the subsequent sub-sections.
+
+### Trace configuration
+
+#### On IOAM encapsulating node
+ - Configure VxLAN tunnel parameters to select packets for IOAM data insertion
+ - Example to enable IOAM data insertion for all the packets
+ from src VTEP 10.1.1.1 dest VTEP 10.1.1.2 VNI 13
+
+ vpp# set vxlan-gpe-ioam vxlan <src-ip> <dst_ip> <vnid> [disable]
+ - Note the disable switch is used to disable the selection of packets for IOAM data insertion.
+
+ - **Enable tracing** : Specify node ID, maximum number of nodes for which
+ trace data should be recorded, type of data to be included for recording,
+ optionally application data to be included
+ - Example to enable tracing with a maximum of 4 nodes recorded
+ and the data to be recorded to include - hop limit, node id,
+ ingress and egress interface IDs, timestamp (millisecond precision),
+ application data (0x1234):
+
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4 trace-tsp 1
+ node-id 0x1 app-data 0x1234
+ vpp# set vxlan-gpe-ioam trace
+
+
+
+#### On IOAM transit node
+- The transit node requires the outer Destination IP to be configured.
+- Additionally the transit node requires trace type, timestamp precision, node ID and
+optionally application data to be configured, to update its node data in the trace option.
+
+Example:
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4 trace-tsp 1
+ node-id 0x2 app-data 0x1234
+ vpp# set vxlan-gpe-ioam-transit dst-ip <dst_ip> [outer-fib-index <outer_fib_index>] [disable]
+ - Note: the disable switch is used to disable the selection of packets for IOAM data update.
+
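The `trace-tsp` parameter sets the timestamp precision. The examples above state that `trace-tsp 1` means millisecond precision; the other scale factors in the sketch below are assumptions following the same pattern, shown only to illustrate how a raw trace timestamp would be converted back to seconds.

```python
# trace-tsp value -> units per second. Value 1 (milliseconds) is stated
# in the examples above; the other entries are ASSUMPTIONS on the same scale.
TSP_UNITS_PER_SECOND = {0: 1, 1: 10**3, 2: 10**6, 3: 10**9}

def ts_to_seconds(raw_ts, tsp):
    """Convert a raw trace timestamp to seconds for the given trace-tsp."""
    return raw_ts / TSP_UNITS_PER_SECOND[tsp]

print(ts_to_seconds(1500, 1))  # 1.5 (1500 ms)
```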
+#### On the IOAM decapsulating node
+- The decapsulating node, similar to the encapsulating node, requires
+configuration of the VxLAN-GPE tunnels to identify the packets from which IOAM data is to be removed.
+ - Example to decapsulate IOAM data for packets
+ from src VTEP 10.1.1.1 dest VTEP 10.1.1.2 VNI 13
+
+ vpp# set vxlan-gpe-ioam vxlan <src-ip> <dst_ip> <vnid> [disable]
+ - Note: the disable switch is used to disable the selection of packets for IOAM data removal.
+
+- The decapsulating node requires trace type, timestamp precision,
+node ID, and optionally application data to be configured,
+so that it can update its node data in the trace option before decapsulation.
+
+Example:
+
+ vpp# set ioam-trace profile trace-type 0x1f trace-elts 4
+ trace-tsp 1 node-id 0x3 app-data 0x1234
+ vpp# set vxlan-gpe-ioam trace
+
+## Export of IOAM records upon decapsulation
+
+IOAM data records extracted from the VxLAN-GPE header can be exported as IPFIX records.
+These IPFIX records can then be analysed using offline scripts or standard IPFIX collector modules.
+
+Example:
+
+    vpp# set vxlan-gpe-ioam export ipfix collector <ip4-address> src <ip4-address>
+
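The exported records are standard IPFIX messages, so any RFC 7011 collector can receive them. As a minimal illustration of the receiving side (not part of VPP), the sketch below unpacks the fixed 16-byte IPFIX message header defined in RFC 7011; the sample bytes are fabricated for the example.

```python
import struct

def parse_ipfix_header(data):
    """Unpack the fixed 16-byte IPFIX message header (RFC 7011, section 3.1)."""
    version, length, export_time, seq, domain = struct.unpack("!HHIII", data[:16])
    if version != 10:
        raise ValueError("not an IPFIX message (version must be 10)")
    return {"version": version, "length": length, "export_time": export_time,
            "sequence": seq, "observation_domain": domain}

# Fabricated sample: version 10, total length 64, export time, seq 1, domain 0.
sample = struct.pack("!HHIII", 10, 64, 1472574008, 1, 0)
print(parse_ipfix_header(sample)["length"])  # 64
```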
+
+## Operational data
+
+The following CLIs are available to check IOAM operation:
+- To check the IOAM configuration currently in effect, use "show ioam summary"
+
+
+- Tracing - enable tracing of VxLAN-GPE packets to view the IOAM data
+inserted and collected.
+
+Example when the nodes are receiving data over a DPDK interface:
+enable tracing using "trace add dpdk-input 20" and
+execute "show trace" to view the IOAM data collected:
+
+
+ vpp# trace add dpdk-input 20
+
+ vpp# show trace
+
+ ------------------- Start of thread 0 vpp_main -------------------
+ Packet 1
+
+ 00:41:58:236271: af-packet-input
+ af_packet: hw_if_index 1 next-index 1
+ tpacket2_hdr:
+ status 0x20000001 len 114 snaplen 114 mac 66 net 80
+ sec 0x57c5b238 nsec 0x1bae439a vlan 0
+ 00:41:58:236281: ethernet-input
+ IP4: fa:16:3e:1b:3b:df -> fa:16:3e:a5:df:a7
+ 00:41:58:236289: l2-input
+ l2-input: sw_if_index 1 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df
+ 00:41:58:236292: l2-learn
+ l2-learn: sw_if_index 1 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df bd_index 1
+ 00:41:58:236297: l2-fwd
+ l2-fwd: sw_if_index 1 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df bd_index 1
+ 00:41:58:236299: l2-flood
+ l2-flood: sw_if_index 1 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df bd_index 1
+ 00:41:58:236304: l2-output
+ l2-output: sw_if_index 4 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df
+ 00:41:58:236306: vxlan-gpe-encap
+ VXLAN-GPE-ENCAP: tunnel 0
+ 00:41:58:236309: vxlan-gpe-encap-ioam-v4
+ VXLAN_GPE_IOAM_HOP_BY_HOP: next_index 0 len 40 traced 40 Trace Type 0x1f , 1 elts left
+ [0] ttl 0x0 node id 0x0 ingress 0x0 egress 0x0 ts 0x0
+ app 0x0
+ [1] ttl 0xff node id 0x323200 ingress 0x4 egress 0x4 ts 0x57c5b238
+ app 0xa5a55e5e
+ VXLAN-GPE-ENCAP: tunnel 0
+ VXLAN_GPE_IOAM_HOP_BY_HOP: next_index 0 len 8 traced 0VXLAN-GPE-ENCAP: tunnel 0
+ 00:41:58:236314: ip4-lookup
+ fib 0 adj-idx 13 : via 10.0.0.10 flow hash: 0x00000000
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 254, length 190, checksum 0xaf19
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 00:41:58:236318: ip4-indirect
+ fib 0 adj-idx 10 : host-eth2
+ IP4: 02:fe:3c:85:ec:72 -> 02:fe:64:28:83:90 flow hash: 0x00000000
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 254, length 190, checksum 0xaf19
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 00:41:58:236320: ip4-rewrite-transit
+ tx_sw_if_index 2 adj-idx 10 : host-eth2
+ IP4: 02:fe:3c:85:ec:72 -> 02:fe:64:28:83:90 flow hash: 0x00000000
+ IP4: 02:fe:3c:85:ec:72 -> 02:fe:64:28:83:90
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 253, length 190, checksum 0xb019
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 00:41:58:236322: host-eth2-output
+ host-eth2
+ IP4: 02:fe:3c:85:ec:72 -> 02:fe:64:28:83:90
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 253, length 190, checksum 0xb019
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 00:41:58:236512: l2-flood
+ l2-flood: sw_if_index 1 dst fa:16:3e:a5:df:a7 src fa:16:3e:1b:3b:df bd_index 1
+ 00:41:58:236514: error-drop
+ l2-flood: BVI L3 mac mismatch
+
+Example trace on the receiving (decapsulating) node:
+
+    vpp# trace add dpdk-input 20
+
+ vpp# show trace
+
+ ------------------- Start of thread 0 vpp_main -------------------
+ Packet 1
+
+ 17:26:12:929645: af-packet-input
+ af_packet: hw_if_index 1 next-index 1
+ tpacket2_hdr:
+ status 0x20000001 len 204 snaplen 204 mac 66 net 80
+ sec 0x57c670fd nsec 0x74e39a2 vlan 0
+ 17:26:12:929656: ethernet-input
+ IP4: 02:fe:c0:42:3c:a9 -> 02:fe:50:ec:fa:0a
+ 17:26:12:929662: ip4-input
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 252, length 190, checksum 0xb119
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 17:26:12:929666: ip4-lookup
+ fib 0 adj-idx 12 : 7.0.0.11/16 flow hash: 0x00000000
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 252, length 190, checksum 0xb119
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 17:26:12:929670: ip4-local
+ UDP: 6.0.0.11 -> 7.0.0.11
+ tos 0x00, ttl 252, length 190, checksum 0xb119
+ fragment id 0x0000
+ UDP: 4790 -> 4790
+ length 170, checksum 0x0000
+ 17:26:12:929672: ip4-udp-lookup
+ UDP: src-port 4790 dst-port 4790
+ 17:26:12:929680: vxlan4-gpe-input
+ VXLAN-GPE: tunnel 0 next 3 error 0IP6_HOP_BY_HOP: next index 3 len 40 traced 40 Trace Type 0x1f , 1 elts left
+ [0] ttl 0x0 node id 0x0 ingress 0x0 egress 0x0 ts 0x0
+ app 0x0
+ [1] ttl 0xff node id 0x323200 ingress 0x4 egress 0x4 ts 0x57c670fc
+ app 0xa5a55e5e
+
+ 17:26:12:929687: ethernet-input
+ IP4: fa:16:3e:1b:3b:df -> fa:16:3e:a5:df:a7
+
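In the captures above, the recorded `ts` value in node record [1] (e.g. 0x57c5b238) equals the packet arrival seconds reported by af-packet (`sec 0x57c5b238`), so in these captures it can be read as a Unix epoch time. A small sketch, under that assumption, to make such a value human-readable:

```python
from datetime import datetime, timezone

def decode_trace_ts(ts_hex):
    """Interpret a trace ts field as Unix epoch seconds (as in the capture above)."""
    return datetime.fromtimestamp(int(ts_hex, 16), tz=timezone.utc)

print(decode_trace_ts("0x57c5b238").isoformat())  # a timestamp in 2016
```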
+
+[IOAM-vpp]: <#ioam_plugin_doc>
+[IOAM-ietf-transport]: <https://tools.ietf.org/html/draft-brockners-inband-oam-transport>
+