path: root/src/vnet/ip
Age  Commit message  Author  Files  Lines
2019-06-04  punt: fix the set_punt API/CLI which was rejecting valid ports  (Neale Ranns; 1 file, -11/+11)
add a UT for the API Change-Id: I93fb6ec2c5f74b991bf7f229250a30c0395b8e24 Signed-off-by: Neale Ranns <nranns@cisco.com>
2019-06-04  Punt: specify packets by IP protocol Type  (Neale Ranns; 11 files, -38/+282)
Change-Id: I0c2d6fccd95146e52bb88ca4a6e84554d5d6b2ed Signed-off-by: Neale Ranns <nranns@cisco.com>
2019-06-03  ARP: add feature arc  (Neale Ranns; 3 files, -17/+25)
- arp-input: registered with the ethernet protocol dispatcher, performs basic checks and starts the arc
- arp-reply: first feature on the arc, replies to requests and learns from responses (no functional change)
- arp-proxy: checks against the proxy DB
arp-reply and arp-proxy are enabled when the interface is appropriately configured. Change-Id: I7d1bbabdb8c8b8187cac75e663daa4a5a7ce382a Signed-off-by: Neale Ranns <nranns@cisco.com>
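A rough illustration of the feature-arc idea described above, as plain C: packets enter a start node (arp-input) and are then steered through whichever features the interface has enabled, in order. The handler names and the bitmask mechanism here are hypothetical; this is not the VPP feature-arc machinery.

  /* Illustrative only: a feature "arc" modeled as an ordered list of
     per-packet handlers, of which each interface enables a subset. */
  #include <stdio.h>

  typedef struct { const unsigned char *data; unsigned len; } pkt_t;
  typedef int (*feature_fn) (pkt_t *p);   /* 0 = continue, < 0 = drop */

  static int arp_reply (pkt_t *p) { (void) p; return 0; /* reply to requests, learn */ }
  static int arp_proxy (pkt_t *p) { (void) p; return 0; /* consult the proxy DB */ }

  /* The arc: features run in this order when enabled on the interface. */
  static feature_fn arc[] = { arp_reply, arp_proxy };

  /* "arp-input": basic checks, then start the arc for this interface. */
  static void
  arp_input (pkt_t *p, unsigned enabled_mask)
  {
    if (p->len < 28)            /* minimal ARP length check */
      return;
    for (unsigned i = 0; i < sizeof (arc) / sizeof (arc[0]); i++)
      if ((enabled_mask & (1u << i)) && arc[i] (p) < 0)
        return;                 /* feature dropped the packet */
  }

  int
  main (void)
  {
    unsigned char buf[28] = { 0 };
    pkt_t p = { buf, sizeof (buf) };
    arp_input (&p, 0x1);        /* only arp-reply enabled */
    printf ("done\n");
    return 0;
  }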
2019-05-31  VPP-1692: move NULL pointer check  (Dave Barach; 1 file, -5/+5)
TBH, this looks like merge damage or some such. Perfectly fine NULL pointer check, about three lines after it was needed. Change-Id: I52831062e30533a59fb76b644ee5ae389676d2ae Signed-off-by: Dave Barach <dave@barachs.net>
2019-05-30  IP load-balance; perf improvement using the usual recipe  (Neale Ranns; 2 files, -305/+233)
Before and after: ip4-load-balance 1.54e1 → 1.36e1. P.S. Quad loops were not beneficial. Change-Id: I7bc01fc26288f0490af74db2b1b7993526c3d982 Signed-off-by: Neale Ranns <nranns@cisco.com>
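The "usual recipe" referred to here is the standard node optimization pattern: handle packets two (or four) at a time while prefetching the ones that follow. A generic, stand-alone sketch of the pattern, not the actual ip4-load-balance code:

  /* Generic dual-loop-with-prefetch sketch; not VPP code. */
  #include <stdint.h>

  void
  process_two_at_a_time (uint8_t **pkts, uint32_t n_pkts,
                         void (*do_one) (uint8_t *))
  {
    uint32_t i = 0;

    /* Dual loop: work on pkts[i] / pkts[i+1] while prefetching i+2 / i+3. */
    while (i + 3 < n_pkts)
      {
        __builtin_prefetch (pkts[i + 2]);
        __builtin_prefetch (pkts[i + 3]);
        do_one (pkts[i]);
        do_one (pkts[i + 1]);
        i += 2;
      }

    /* Single loop for the remainder. */
    while (i < n_pkts)
      do_one (pkts[i++]);
  }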
2019-05-28  Punt: socket register for exception dispatched/punted packets based on reason  (Neale Ranns; 11 files, -796/+1474)
- add to the Punt API to allow different descriptions of the desired packets: UDP or exceptions
- move the punt nodes into punt_node.c
- improve tests (test that the correct packets are punted to the registered socket)
Change-Id: I1a133dec88106874993cba1f5a439cd26b2fef72 Signed-off-by: Neale Ranns <nranns@cisco.com>
2019-05-28  Add an ip6 local hop-by-hop protocol demux table  (Dave Barach; 3 files, -22/+341)
Add a minimal ip6 hbh header processing test. ioam plugin: use ip6_local_hop_by_hop_register_protocol() in udp_ping_init(). Please test the ioam plugin udp_ping path AYEC, so I can publish the patch. Change-Id: I74e35276d6c38c31022026cfd238fad5e4a54485 Signed-off-by: Dave Barach <dave@barachs.net>
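The demux table itself is essentially an array keyed by protocol number that records which node should receive the packet. A self-contained sketch of that idea follows; the names are hypothetical and this is not the actual ip6_local_hop_by_hop_register_protocol() implementation:

  /* Hypothetical sketch of a protocol demux/registration table. */
  #include <stdint.h>

  #define DEMUX_TABLE_SIZE    256
  #define NODE_INDEX_INVALID  0xffffffffu

  static uint32_t hbh_demux_table[DEMUX_TABLE_SIZE];

  void
  demux_table_init (void)
  {
    for (int i = 0; i < DEMUX_TABLE_SIZE; i++)
      hbh_demux_table[i] = NODE_INDEX_INVALID;
  }

  /* A user (e.g. the udp_ping path) registers the node for its protocol. */
  void
  demux_register_protocol (uint8_t protocol, uint32_t node_index)
  {
    hbh_demux_table[protocol] = node_index;
  }

  /* The hop-by-hop input path looks up where to send the packet next. */
  uint32_t
  demux_lookup (uint8_t protocol)
  {
    return hbh_demux_table[protocol];   /* NODE_INDEX_INVALID => punt/drop */
  }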
2019-05-24  ip4/6-reassembly fixes  (Vijayabhaskar Katamreddy; 2 files, -2/+11)
When multi-chained fragments come into reassembly and the buffers are then linearized, or dropped for other reasons in between, the multi-chained mbuf linking is disturbed. When the packet is later transmitted and the buffers freed, this would result in double frees and packet corruption. Change-Id: Ib5711d54e61fdd6a67deb30dad0b2a14afb9c2da Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
2019-05-24  Remove historical ip4 icmp OAM code  (Dave Barach; 1 file, -0/+7)
Add a registration overwritten warning to ip4_icmp_register_type(...) Change-Id: I6c2aabdb979b54ec49e827225acc74559ac4caab Signed-off-by: Dave Barach <dave@barachs.net>
2019-05-23  Optimize code  (jackiechen1985; 1 file, -9/+4)
Replace enqueue code with the macro vlib_validate_buffer_enqueue_x1. Change-Id: I4b454b1d73fa5adbaf5f40cf45dc8975878ac93b Signed-off-by: jackiechen1985 <xiaobo.chen@tieto.com>
2019-05-22  stats: support multiple workers for error counters  (Ole Troan; 1 file, -24/+0)
The current code only allowed access to the main thread's error counters. That is not very useful for a multi-worker instance. Now return a vector of counter_t values indexed by thread. Type: fix Change-Id: Ie322c8889c0c8175e1116e71de04a2cf453b9ed7 Signed-off-by: Ole Troan <ot@cisco.com>
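With per-worker counters, a consumer obtains the system-wide value by summing the per-thread entries. A generic illustration of that layout, with hypothetical names rather than the real stats segment code:

  /* Generic sketch: error counters kept per thread, summed on read. */
  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t counter_t;

  #define N_THREADS 4           /* main thread + 3 workers, for illustration */
  #define N_ERRORS  8

  /* counters[thread][error_index] */
  static counter_t counters[N_THREADS][N_ERRORS];

  static counter_t
  error_counter_total (uint32_t error_index)
  {
    counter_t total = 0;
    for (uint32_t t = 0; t < N_THREADS; t++)
      total += counters[t][error_index];
    return total;
  }

  int
  main (void)
  {
    counters[0][3] = 10;        /* main thread */
    counters[2][3] = 32;        /* worker 2 */
    printf ("error 3 total: %llu\n",
            (unsigned long long) error_counter_total (3));
    return 0;
  }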
2019-05-20  reassembly: prevent long chain attack  (Klement Sekera; 8 files, -13/+86)
Limit the max number of fragments to 3 per packet by default; add an API option to configure the limit at runtime. Change-Id: Ie4b9507bf5c6095b9a5925972b37fe0032f4f9e8 Signed-off-by: Klement Sekera <ksekera@cisco.com>
2019-05-16  init / exit function ordering  (Dave Barach; 1 file, -41/+19)
The vlib init function subsystem now supports a mix of procedural and formally-specified ordering constraints. We should eliminate procedural knowledge wherever possible.

The following schemes are *roughly* equivalent:

  static clib_error_t *init_runs_first (vlib_main_t *vm)
  {
    clib_error_t *error;
    ... do some stuff...
    if ((error = vlib_call_init_function (init_runs_next)))
      return error;
    ...
  }
  VLIB_INIT_FUNCTION (init_runs_first);

and

  static clib_error_t *init_runs_first (vlib_main_t *vm)
  {
    ... do some stuff...
  }
  VLIB_INIT_FUNCTION (init_runs_first) =
  {
    .runs_before = VLIB_INITS("init_runs_next"),
  };

The first form will [most likely] call "init_runs_next" on the spot. The second form means that "init_runs_first" runs before "init_runs_next," possibly much earlier in the sequence.

Please DO NOT construct sets of init functions where A before B actually means A *right before* B. It's not necessary - simply combine A and B - and it leads to hugely annoying debugging exercises when trying to switch from ad-hoc procedural ordering constraints to formal ordering constraints.

Change-Id: I5e4353503bf43b4acb11a45fb33c79a5ade8426c Signed-off-by: Dave Barach <dave@barachs.net>
2019-05-10  Update ping cli .short_help.  (Paul Vinciguerra; 1 file, -2/+2)
Change-Id: I5c414a158a8a6b243128127c608ab0fbb5a9405b Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
2019-05-06  ip4_load_balance: leverage vlib_get_buffers  (Zhiyong Yang; 1 file, -13/+11)
vlib_get_buffers can save 1.2 clocks/pkt from 16.1 to 14.9 clocks/pkt on Skylake. Change-Id: I79d8b58b192280af5e5a5f73562b6301e1821cec Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
2019-04-30  reassembly: avoid race-conditions  (Klement Sekera; 1 file, -12/+26)
Change-Id: Ibf5c283217a985e43a562f1969573eeb26ee6017 Signed-off-by: Klement Sekera <ksekera@cisco.com>
2019-04-24  ip4_lookup_inline: leverage vlib_get_buffers to improve perf  (Zhiyong Yang; 1 file, -32/+27)
vlib_get_buffers can save at least 1.2 clocks/pkt for ip4_lookup_inline on Haswell. Change-Id: I730fc346cec4d2eb5ca364308e45268bda4d5f89 Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
2019-04-14  ip4: don't format upper layer if it's a fragment  (faicker.mo; 1 file, -0/+3)
Parsing the IPv4 upper layer is not meaningful for a fragment packet other than the first. Change-Id: I442fb7ec01244fde8c4f7656a8ba633d0aa0f97e Signed-off-by: Faicker Mo <faicker.mo@ucloud.cn>
2019-04-10  API: Fix shared memory only action handlers.  (Ole Troan; 1 file, -5/+4)
Some API action handlers called vl_msg_api_send_shmem() directly. That breaks the Unix domain socket API transport. A couple (bond / vhost) also tried to send a sw_interface_event directly, but did not send the message to all clients that had registered interest. That scheme never worked correctly. Refactored and improved the interface event code. Change-Id: Idb90edfd8703c6ae593b36b4eeb4d3ed7da5c808 Signed-off-by: Ole Troan <ot@cisco.com>
2019-04-10  Make tcp/udp/icmp compute checksum safer for buffer-chain case  (John Lo; 2 files, -2/+2)
Change-Id: I046e481a67fbeffdaa8504c8d77d232b986a61ee Signed-off-by: John Lo <loj@cisco.com>
2019-04-08  fixing typos  (Jim Thompson; 4 files, -7/+7)
Change-Id: I215e1e0208a073db80ec6f87695d734cf40fabe3 Signed-off-by: Jim Thompson <jim@netgate.com>
2019-03-28  Punt Infra  (Neale Ranns; 1 file, -2/+2)
A punt/exception path that provides:
 1) clients that use the infra
 2) clients can create punt reasons
 3) clients can register to receive packets that are punted for a given reason, to be sent to the desired node
 4) nodes which punt packets fill in the {reason, protocol} of the buffer (in the meta-data) and send them to the new node "punt-dispatch"
 5) punt-dispatch sends packets to the registered nodes or drops them
Change-Id: Ia4f144337f1387cbe585b4f375d0842aefffcde5 Signed-off-by: Neale Ranns <nranns@cisco.com>
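The dispatch in item 5 amounts to a lookup from punt reason to the node registered for that reason. A much-simplified, hypothetical sketch (one registered node per reason; not the actual punt-dispatch code):

  /* Simplified sketch of reason-based punt dispatch; not the VPP punt node. */
  #include <stdint.h>

  #define MAX_PUNT_REASONS 64
  #define NEXT_DROP        0u

  /* One registered next node per reason, for simplicity. */
  static uint32_t punt_reg[MAX_PUNT_REASONS];

  /* A client allocates/uses a reason and registers the node to receive it. */
  void
  punt_register (uint32_t reason, uint32_t next_node)
  {
    if (reason < MAX_PUNT_REASONS)
      punt_reg[reason] = next_node;
  }

  /* Punting nodes fill in the reason in the packet meta-data ... */
  typedef struct { uint32_t punt_reason; /* ... */ } buffer_md_t;

  /* ... and punt-dispatch sends to the registered node or drops. */
  uint32_t
  punt_dispatch_next (const buffer_md_t *md)
  {
    if (md->punt_reason < MAX_PUNT_REASONS && punt_reg[md->punt_reason])
      return punt_reg[md->punt_reason];
    return NEXT_DROP;
  }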
2019-03-28  IPSEC: run encrypt as a feature on the tunnel  (Neale Ranns; 2 files, -14/+42)
Change-Id: I6527e3fd8bbbca2d5f728621fc66b3856b39d505 Signed-off-by: Neale Ranns <nranns@cisco.com>
2019-03-28  Typos. A bunch of typos I've been collecting.  (Paul Vinciguerra; 2 files, -2/+2)
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
2019-03-26  ip6-rewrite: bug fix buffer->error in dual loop  (Kingwel Xie; 2 files, -5/+24)
The error should be recorded in the buffer so that process-error-punt can handle it correctly. Per Damjan's comments, move the counter under the else clause of the last error0==NONE check. Both v4 and v6 are changed. Change-Id: I707c7877ccb12589337155173fc4a5200b42ee93 Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
2019-03-22  ipv6: vectorized ext header check  (Damjan Marion; 2 files, -10/+35)
Change-Id: I454bb01153d1d0536c4a6fe36103e7721aad8cd1 Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-03-21  icmp: bug fix of buffer->error  (Kingwel Xie; 2 files, -2/+18)
Recent changes in icmp4/6 choose to free the original buffer and make a copy for sending the icmp reply back. However, buffer->error will be ignored when the buffer is freed unconditionally. A quick fix could be moving the counter increment code to icmp, but I prefer to enqueue all buffers to 'error-drop' so that they can be handled in a batch, using vlib_buffer_enqueue_to_single_next. Change-Id: I9f3028b55f1d5f634763e2410cd91e17f368195e Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
2019-03-15  Revert "API: Cleanup APIs interface.api"  (Ole Trøan; 1 file, -1/+0)
This reverts commit e63325e3ca03c847963863446345e6c80a2c0cfd. Allow time for CSIT to accommodate. Change-Id: I59435e4ab5e05e36a2796c3bf44889b5d4823cc2 Signed-off-by: ot@cisco.com
2019-03-15  API: Cleanup APIs interface.api  (Jakub Grajciar; 1 file, -0/+1)
Use of consistent API types for interface.api Change-Id: Ieb54cebb4ac96b432a3f0b41596718aa2f34885b Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
2019-03-14  IGMP: typo and doc fix (no behaviour change)  (Neale Ranns; 1 file, -1/+1)
Change-Id: I1c870f90a8e0d14b972593e72242b430c13d3bf2 Signed-off-by: Neale Ranns <nranns@cisco.com>
2019-03-12  ip: migrate old MULTIARCH macros to VLIB_NODE_FN  (Filip Tehlar; 9 files, -174/+166)
Change-Id: Id55ec87724e421d5b722314f9302c6ade7545306 Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
2019-03-12  ICMP46 error: Clone first buffer instead of "truncating" original buffer  (Ole Troan; 2 files, -52/+36)
The previous code walked the buffer chain, effectively trying to "truncate" the chain: it reset the length of the first buffer and reused it as the ICMP error message. That could cause issues in cases where there were other users of the buffer chain. Update to clone the first buffer in the chain and use that for the ICMP error message instead. Change-Id: Ibc1a0bf2d854dae41874808c8297028ed93dd69d Signed-off-by: Ole Troan <ot@cisco.com>
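The change is from "truncate the existing chain in place" to "copy the first segment into a fresh buffer and build the ICMP error there", which leaves the original chain untouched for any other users. A generic sketch of that approach with hypothetical types, not the VPP buffer API:

  /* Generic sketch: build an error message from a copy of the first segment
     instead of truncating a possibly-shared chain.  Hypothetical types. */
  #include <stdlib.h>
  #include <string.h>

  typedef struct icmp_seg {
    unsigned char   *data;
    unsigned int     len;
    struct icmp_seg *next;      /* rest of the chain, left untouched */
  } icmp_seg_t;

  /* Clone only the first segment; the caller builds the ICMP error on it. */
  static icmp_seg_t *
  clone_first_segment (const icmp_seg_t *chain)
  {
    icmp_seg_t *copy = malloc (sizeof (*copy));
    if (!copy)
      return NULL;
    copy->data = malloc (chain->len);
    if (!copy->data)
      {
        free (copy);
        return NULL;
      }
    memcpy (copy->data, chain->data, chain->len);
    copy->len = chain->len;
    copy->next = NULL;          /* the error message is a single segment */
    return copy;
  }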
2019-03-07  API: Add python2.7 support for enum flags via aenum  (Ole Troan; 1 file, -1/+0)
Change-Id: I77a43bfb37d827727c331cd65eee77536cc15953 Signed-off-by: Ole Troan <ot@cisco.com>
2019-03-06  ip: coverity woes  (Steven Luong; 1 file, -4/+4)
Coverity complains about logically dead code for the statement if (error), because error was assigned 0 prior to the check. I believe error was meant to receive the return status of the call to vnet_punt_socket_add. Change-Id: I794167493f63cb898d3618c2c28817823f46b765 Signed-off-by: Steven Luong <sluong@cisco.com>
2019-03-06  punt.c -- coverity woes  (Steven Luong; 1 file, -9/+0)
Coverity complains that identical code is executed in both the if and else branches. Clean them up by removing the useless code. Change-Id: Ie53f1dff055440ab2c3c3d2ea91edb1e50204b38 Signed-off-by: Steven Luong <sluong@cisco.com>
2019-03-04  Hash and handoff reassembly fragments  (Vijayabhaskar Katamreddy; 2 files, -61/+448)
Hash and hand off reassembly fragments in the following two scenarios:
 1. When fragments arrive on multiple interfaces and end up in different threads
 2. When fragments arrive on the same interface but in different queues, because interface RSS does not have the ability to place fragments in the right queues
Change-Id: I9f9a8a4085692055ef6823d634c8e19ff3daea05 Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
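For the handoff to work, every fragment of a packet must hash to the same thread, so the key can only use fields present in all fragments (addresses, protocol, fragment ID) and never the L4 ports. A hypothetical sketch of such a hash, not the VPP implementation:

  /* Sketch: pick a worker thread from fields shared by all fragments of a
     packet, so every fragment lands on the same reassembly thread. */
  #include <stdint.h>

  typedef struct {
    uint32_t src, dst;          /* IPv4 addresses */
    uint8_t  protocol;
    uint16_t fragment_id;       /* same for all fragments of one packet */
  } frag_key_t;

  static uint32_t
  frag_hash (const frag_key_t *k)
  {
    /* FNV-1a over the key fields */
    uint32_t h = 2166136261u;
    const uint8_t fields[] = {
      (uint8_t) k->src, (uint8_t) (k->src >> 8),
      (uint8_t) (k->src >> 16), (uint8_t) (k->src >> 24),
      (uint8_t) k->dst, (uint8_t) (k->dst >> 8),
      (uint8_t) (k->dst >> 16), (uint8_t) (k->dst >> 24),
      k->protocol,
      (uint8_t) k->fragment_id, (uint8_t) (k->fragment_id >> 8),
    };
    for (unsigned i = 0; i < sizeof (fields); i++)
      h = (h ^ fields[i]) * 16777619u;
    return h;
  }

  static uint32_t
  frag_pick_thread (const frag_key_t *k, uint32_t n_workers)
  {
    return frag_hash (k) % n_workers;
  }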
2019-02-26  VPP-1574: minimize RPC barrier sync calls  (Dave Barach; 1 file, -5/+25)
Grab the thread barrier across a set of RPCs, to greatly increase efficiency. Avoids running afoul of the barrier sync holddown timer. Change-Id: I782dfdb1bed398b290169c83266681c9edd57a3f Signed-off-by: Dave Barach <dave@barachs.net>
2019-02-25  buffer chain linearization  (Klement Sekera; 3 files, -336/+82)
Rewrite the vlib_buffer_chain_linearize function so that it works as intended. Linearize buffer chains coming out of reassembly to work around some dpdk-tx issues. Note that this is not a complete workaround, as a sufficiently large packet will still cause the resulting chain to be too long. Drop features from the reassembly code which rely on knowing which and how many buffers were freed during linearization, along with buffer counts and tracing capabilities for these cases. Change-Id: Ic65de53ecb5c78cd96b178033f6a576ab4060ed1 Signed-off-by: Klement Sekera <ksekera@cisco.com>
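Linearization here means collapsing a multi-buffer chain into contiguous data so that drivers which cannot handle long chains (the dpdk-tx issue mentioned) still work. A generic, self-contained sketch of the operation on a simple list of segments, not vlib_buffer_chain_linearize itself:

  /* Generic sketch of chain linearization: copy a segment chain into one
     contiguous buffer. */
  #include <stdlib.h>
  #include <string.h>

  typedef struct lin_seg {
    unsigned char  *data;
    unsigned int    len;
    struct lin_seg *next;
  } lin_seg_t;

  /* Returns a newly allocated flat copy of the chain, or NULL on failure.
     total_len_out receives the linearized length. */
  static unsigned char *
  linearize_chain (const lin_seg_t *chain, unsigned int *total_len_out)
  {
    unsigned int total = 0;
    for (const lin_seg_t *s = chain; s; s = s->next)
      total += s->len;

    unsigned char *flat = malloc (total ? total : 1);
    if (!flat)
      return NULL;

    unsigned int off = 0;
    for (const lin_seg_t *s = chain; s; s = s->next)
      {
        memcpy (flat + off, s->data, s->len);
        off += s->len;
      }

    *total_len_out = total;
    return flat;
  }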
2019-02-19  reassembly: handle ip6 atomic fragments  (Klement Sekera; 1 file, -6/+3)
Change-Id: Ide3425f144fb17201dcde7ba89f39e460048100d Signed-off-by: Klement Sekera <ksekera@cisco.com>
2019-02-19  reassembly: fix buffer usage counter  (Klement Sekera; 1 file, -5/+7)
Change-Id: I713904f8eb2f724cb08dba494c160c14cc8b24a1 Signed-off-by: Klement Sekera <ksekera@cisco.com>
2019-02-19  tap gso: experimental support  (Andrew Yourtchenko; 2 files, -17/+81)
This commit adds a "gso" parameter to the existing "create tap..." CLI, and a "no-gso" parameter for compatibility with the future, when/if defaults change.

It makes use of the lowest bit of the "tap_flags" field in the API call in order to allow creation of GSO interfaces via API as well.

It does the necessary syscalls to enable the GSO and checksum offload support on the kernel side and sets two flags on the interface: the virtio-specific virtio_if_t.gso_enabled, and vnet_hw_interface_t.flags & VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO. The first one, if enabled, triggers the marking of the GSO-encapsulated packets on ingress with the VNET_BUFFER_F_GSO flag, and setting vnet_buffer2(b)->gso_size to the desired L4 payload size.

VNET_HW_INTERFACE_FLAG_SUPPORTS_GSO determines the egress packet processing in interface-output for such packets: when the flag is set, they are sent out almost as usual (just taking care to set the vnet header for virtio). When the flag is not enabled (the case for most interfaces), the egress path performs the re-segmentation such that the L4 payload of the transmitted packets equals gso_size.

The operations in the datapath are enabled only when there is at least one GSO-compatible interface in the system - this is done by tracking the count in interface_main.gso_interface_count. This way the impact of conditional checks for the setups that do not use GSO is minimized.

"show tap" CLI shows the state of the GSO flag on the interface, and the total count of GSO-enabled interfaces (which is used to enable the GSO-related processing in the packet path).

This commit lacks IPv6 extension header traversal support of any kind - the L4 payload is assumed to follow the IPv6 header. Also it performs the offloads only for TCP (TSO - TCP segmentation offload). The UDP fragmentation offload (UFO) is not part of it.

For debug purposes it also adds the debug CLI: "set tap gso {<interface> | sw_if_index <sw_idx>} <enable|disable>"

Change-Id: Ifd562db89adcc2208094b3d1032cee8c307aaef9 Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
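In short: ingress marks GSO packets and records gso_size; on egress, an interface that advertises GSO support sends the large packet through, while all other interfaces get the packet re-segmented so each transmitted packet carries at most gso_size bytes of L4 payload. A much-simplified sketch of that egress decision, with hypothetical types rather than the VPP interface-output code:

  /* Simplified sketch of the GSO egress decision; hypothetical types. */
  #include <stdint.h>

  #define IF_FLAG_SUPPORTS_GSO  (1 << 0)
  #define PKT_FLAG_GSO          (1 << 0)

  typedef struct {
    uint32_t flags;
    uint32_t gso_size;          /* desired L4 payload size per segment */
    uint32_t l4_payload_len;
  } gso_pkt_t;

  /* Returns the number of packets that will actually hit the wire. */
  static uint32_t
  egress_segments_needed (const gso_pkt_t *p, uint32_t if_flags)
  {
    if (!(p->flags & PKT_FLAG_GSO))
      return 1;                 /* normal packet */

    if (if_flags & IF_FLAG_SUPPORTS_GSO)
      return 1;                 /* the device/virtio backend does the work */

    if (p->gso_size == 0)
      return 1;                 /* defensive: nothing to segment against */

    /* Re-segment in software: ceil(payload / gso_size) packets. */
    return (p->l4_payload_len + p->gso_size - 1) / p->gso_size;
  }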
2019-02-19  VPP-1573 fix crash in ip6 reassembly  (Klement Sekera; 1 file, -1/+1)
Change-Id: I3a3076c7d87446b5ec2a02e70d3b6d05f1875875 Signed-off-by: Klement Sekera <ksekera@cisco.com>
2019-02-19  ip6-local: fix uninitialized variable error  (Damjan Marion; 1 file, -1/+1)
Change-Id: I245a8cc8f237242efadcf10d47b76222a6497e89 Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-02-18  Explicit dual-loop in ip6-local  (Benoît Ganne; 1 file, -72/+134)
Makes ip6-local node dual-loop explicit. This is only a style change. Change-Id: Ic8e7cecb3f51e98b8a069b501f5c338156934a6d Signed-off-by: Benoît Ganne <bganne@cisco.com>
2019-02-15  Optimize ip6-local  (Benoît Ganne; 1 file, -246/+187)
Optimize IPv6 ip6-local node by rewriting the dual/single loop with prefetch and simpler unrolling. My local, unrepresentative tests for GRE4 termination over IPv6 show a performance improvement of ~40% for ip6-local node alone and ~5% globally. Change-Id: I11e1e86d3838dd3c081aa6be5e25dae16ed6e2d8 Signed-off-by: Benoît Ganne <bganne@cisco.com>
2019-02-14  Add -fno-common compile option  (Benoît Ganne; 2 files, -1/+5)
-fno-common makes sure we do not have multiple definitions of the same global symbol across compilation units. It helps debug nasty linkage bugs by guaranteeing that all references to a global symbol use the same underlying object. It also helps avoid benign mistakes such as declaring enums as global objects instead of types in headers (hence the minor fixes scattered across the source). Change-Id: I55c16406dc54ff8a6860238b90ca990fa6b179f1 Signed-off-by: Benoît Ganne <bganne@cisco.com>
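A tiny example of the class of bug -fno-common catches: two translation units that both end up defining the same global (for example because a header defined a variable instead of merely declaring it). With the traditional common-symbol model the linker silently merges the two objects; with -fno-common the link fails with a multiple-definition error, forcing the duplicate to be fixed. The file names below are illustrative.

  /* a.c */
  int debug_level;              /* tentative definition */
  int main (void) { return debug_level; }

  /* b.c */
  int debug_level;              /* same symbol defined again */

  /* "cc a.c b.c" links silently (the common symbols are merged);
     "cc -fno-common a.c b.c" fails with a multiple-definition error. */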
2019-02-13  ip6: convert code to new multiarch  (Damjan Marion; 2 files, -96/+82)
Change-Id: Idd09b5d0597336e4f2028113cae76c94fd1c5427 Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-02-09  buffers: fix typo  (Damjan Marion; 3 files, -5/+5)
Change-Id: I4e836244409c98739a13092ee252542a2c5fe259 Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-02-06  buffers: make buffer data size configurable from startup config  (Damjan Marion; 3 files, -4/+6)
Example: buffers { default data-size 1536 } Change-Id: I5b4436850ca18025c9fdcfc7ed648c2c2732d660 Signed-off-by: Damjan Marion <damarion@cisco.com>
2019-02-02  Deprecate old multiarch code, phase 1  (Damjan Marion; 1 file, -22/+1)
It is causing compilation slowness with gcc-7, so it is being removed earlier than originally planned. For now the macros are left in the tree so we can know which nodes to convert to new multiarch code. Change-Id: Idb14622ca61fdce1eba59723b20d98715b7971e6 Signed-off-by: Damjan Marion <damarion@cisco.com>