Age | Commit message | Author | Files | Lines |
|
Use of _vec_len() to set vector length breaks address sanitizer.
Users should use vec_set_len(), vec_inc_len() or vec_dec_len() instead.
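A minimal sketch of the replacement pattern (the surrounding vector code
here is illustrative, not taken from the patch):
/* before: writing the length field directly bypasses the ASan annotations */
_vec_len (v) = new_len;
/* after: let the vec API adjust the length (and the poisoning) */
vec_set_len (v, new_len);  /* set the length to an absolute value */
vec_inc_len (v, 1);        /* grow the length by n elements */
vec_dec_len (v, 1);        /* shrink the length by n elements */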
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
When computing the inner packet checksum, the code wrongly
assumes that the IP version of the inner packet is the
same as that of the outer one. On the contrary, it is perfectly
possible to encapsulate v6 packets into v4 and vice versa,
so we need to check the IP version of the inner header before
calling vnet_calc_checksums_inline.
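A rough sketch of the idea (variable names and inner_offset are
illustrative, and the exact vnet_calc_checksums_inline argument list
may differ between VPP versions):
/* look at the version nibble of the *inner* header instead of
   assuming it matches the outer one */
ip4_header_t *inner =
  (ip4_header_t *) ((u8 *) vlib_buffer_get_current (b) + inner_offset);
int inner_is_ip4 = (inner->ip_version_and_header_length >> 4) == 4;
vnet_calc_checksums_inline (vm, b,
                            inner_is_ip4,   /* is_ip4 */
                            !inner_is_ip4); /* is_ip6 */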
Ticket: VPP-2020
Type: fix
Signed-off-by: Mauro Sardara <msardara@cisco.com>
Change-Id: Ia4515563c164f6dd5096832c831a48cb0a29b3ad
|
|
Fixing the multiarch versions of vxlan, geneve and friends. Ensures that
the main struct is correctly sized for all multiarch permutations.
Type: fix
Fixes: 290526e3c
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Change-Id: I7c4c435763a5dcb0c3b429cd4f361d373d480c03
|
|
When an mfib entry was created with both paths and entry_flags, the
entry flags were being ignored. If there were no paths, the flags were
passed into mfib_table_entry_update, but in the case where the entry
didn't exist and there were both paths and flags, the entry was
created within mfib_table_entry_paths_update(), which used a default
of MFIB_ENTRY_FLAG_NONE.
Pass the flags through into the mfib_table_entry_paths_update fn. All
existing callers other than the create case will now pass in
MFIB_ENTRY_FLAG_NONE.
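Roughly, the create path now looks like this (a sketch; the real
parameter order of mfib_table_entry_paths_update may differ):
/* callers that only update paths keep passing MFIB_ENTRY_FLAG_NONE;
   the create path now forwards the flags supplied with the request */
mfib_table_entry_paths_update (fib_index, prefix, source,
                               entry_flags, /* new parameter */
                               rpaths);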
Type: fix
Signed-off-by: Paul Atkins <patkins@graphiant.com>
Change-Id: I256375ba2fa863a62a88474ce1ea6bf2accdd456
|
|
Similar behavior as in commit 839dcc0fb7313638d9b8f52a9db81350dddfe461.
Type: improvement
Signed-off-by: Artem Glazychev <artem.glazychev@xored.com>
Change-Id: I1b0a8f8f3dab48839e27df7065cf5f786cf0b5e9
|
|
Type: fix
Change-Id: I41455e1cdc62e7c0baa148630b0701b042f3b156
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Use autogenerated code.
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I163eefa86f3248260481181818d70fa1b6eaa220
|
|
Type: improvement
Signed-off-by: Drenfong Wong <drenfong.wang@intel.com>
Change-Id: Ia81aaa86fe071cbbed028cc85c5f3fa0f1940a0f
|
|
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id13f33843b230a1d169560742c4f7b2dc17d8718
|
|
Type: style
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I26a19e42076e031ec5399d5ca05cb49fd6fbe1cd
|
|
This patch aims to improve decap performance by avoiding expensive
hash_get calls wherever possible, using AVX512 on XEON,
e.g. for vxlan, vxlan_gpe, geneve and gtpu.
In the existing code, if the vtep4 of the current packet matches the
last vtep4_key_t, the expensive hash computation can be avoided and
the code returns directly.
This patch greatly improves the multiple-flow tunnel decap case by
leveraging a 512-bit vector register on XEON accommodating 8
vtep4_keys. It increases the possibility of avoiding unnecessary hash
computing when the hash key of the current packet hits any one of the
8 keys in the 512-bit cache.
The oldest element in vtep4_cache_t is updated in round-robin order.
vlib_get_buffers is also leveraged in the meantime.
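A simplified sketch of the mechanism, assuming the AVX512F intrinsics
from <immintrin.h> (type and helper names follow the description above;
this is not the literal patch):
typedef struct
{
  vtep4_key_t vtep4_cache[8]; /* 8 x 8-byte keys fill one 512-bit register */
  u8 idx;                     /* round-robin replacement position */
} vtep4_cache_t;

static_always_inline int
vtep4_cache_check (vtep4_cache_t * c, vtep4_key_t key)
{
  __m512i cache = _mm512_loadu_si512 (c->vtep4_cache);
  __m512i needle = _mm512_set1_epi64 (key.as_u64);
  /* one compare tests the current key against all 8 cached keys */
  return _mm512_cmpeq_epi64_mask (cache, needle) != 0;
}

static_always_inline void
vtep4_cache_add (vtep4_cache_t * c, vtep4_key_t key)
{
  /* called only after hash_get () confirmed the key is a real vtep */
  c->vtep4_cache[c->idx] = key;
  c->idx = (c->idx + 1) & 7;
}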
Type: improvement
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Signed-off-by: Junfeng Wang <drenfong.wang@intel.com>
Change-Id: I313103202bd76f2dd638cd942554721b37ddad60
|
|
Use consistent API types.
Type: fix
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
Change-Id: Ic428e35141724b47a944211b4d95c3e41796c81e
|
|
The bypass node MUST NOT intercept a packet if the destination IP does
not match a local address. However, IP address interpretation depends
on the VRF, hence the bypass node must take that into account.
This patch also factors out common VTEP management and checking code.
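Sketched, the check becomes a per-VRF lookup instead of a global one
(the fib_index helper and table name shown here are hypothetical; the
key layout follows the vtep4_key_t described elsewhere in this log):
/* a VTEP is only "local" within the VRF the packet arrived in */
vtep4_key_t key = {
  .addr = ip4->dst_address,
  .fib_index = fib_index_from_buffer (b0), /* hypothetical helper */
};
if (!hash_get (vtep_table, key.as_u64))
  goto no_bypass; /* not a local VTEP address in this VRF */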
Type: improvement
Signed-off-by: Nick Zavaritsky <nick.zavaritsky@emnify.com>
Change-Id: I5665d94882bbf45d15f8da140c7ada528ec7fa94
|
|
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I402b1b06db736b2a7a242ce70ffd409c7c0a4fc2
|
|
Add an ALWAYS_ASSERT (...) macro, to (a) shut up coverity, and (b)
check the indicated condition in production images.
As in:
p = hash_get(...);
ALWAYS_ASSERT(p); /* was ASSERT(p) */
elt = pool_elt_at_index(pool, p[0]);
This may not be the best way to handle a specific case, but failure to
check return values at all followed by e.g. a pointer dereference
isn't ok.
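A minimal sketch of what such a macro can look like (the real
definition reuses vppinfra's error machinery; this only shows the intent):
/* like ASSERT(), but never compiled out, so release images also check */
#define ALWAYS_ASSERT(truth)        \
do {                                \
  if (PREDICT_FALSE (!(truth)))     \
    os_panic ();                    \
} while (0)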
Type: fix
Ticket: VPP-1837
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: Ia97c641cefcfb7ea7d77ea5a55ed4afea0345acb
|
|
Type: docs
Change-Id: Ia3949954423eb7691c02e99444767a9f01a14adf
Signed-off-by: Hongjun Ni <hongjun.ni@intel.com>
|
|
Type: fix
Change-Id: Id53eb6ed15f270d747b9831a7b585cbafe515dd2
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: feature
Signed-off-by: Dave Barach <dave@barachs.net>
Change-Id: I2272521d6e69edcd385ef684af6dd4eea5eaa953
|
|
Instead of all clients directly RR sourcing the entry they are tracking,
use a dedicated 'tracker' object. This tracker object is an entry
delegate and a child of the entry. The clients are then children of the
tracker.
The benefit of this approach is that each time a new client tracks the
entry it doesn't RR source it again. When an entry is sourced all its
children are updated, so without the tracker, adding new clients to an
entry is O(n^2) work. With the tracker as indirection, the entry is
sourced only once.
Type: feature
Change-Id: I5b80bdda6c02057152e5f721e580e786cd840a3b
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: style
Change-Id: Ia50867a853388d9f69571815ddcdaadfc47206bc
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Enhance the route add/del APIs to take a set of paths rather than just one.
Most unicast routing protocols calculate all the available paths in one
run of the algorithm, so updating all the paths at once is beneficial for
the client.
Two knobs control the behaviour:
is_multipath - if set, the set of paths passed will be added to those
that already exist, otherwise the set will replace them.
is_add - add or remove the set
is_add=0, is_multipath=1 and an empty set results in deleting the route.
It is also considerably faster to add multiple paths at once than one at a time:
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.11
100000 routes in .572240 secs, 174751.80 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.12
100000 routes in .528383 secs, 189256.54 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.13
100000 routes in .757131 secs, 132077.52 routes/sec
vat# ip_add_del_route 1.1.1.1/32 count 100000 multipath via 10.10.10.14
100000 routes in .878317 secs, 113854.12 routes/sec
vat# ip_route_add_del 1.1.1.1/32 count 100000 multipath via 10.10.10.11 via 10.10.10.12 via 10.10.10.13 via 10.10.10.14
100000 routes in .900212 secs, 111084.93 routes/sec
Change-Id: I416b93f7684745099c1adb0b33edac58c9339c1a
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Signed-off-by: Ole Troan <ot@cisco.com>
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
plugins:
- ipfixcollector
vnet:
- geneve
- vxlan_gpe
- vxlan
Change-Id: I69a8b4017ee6990f2b4874fe3e94c4520bde7101
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
|
|
This patch improves performance by prefetching the encap header area
and taking full advantage of the optimized function vlib_get_buffers.
After applying the patch, the function vxlan_gpe_encap saves
4.1 clocks/pkt, from 41.7 down to 37.6 clocks/pkt, on Skylake.
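In outline, the pattern is (a sketch, not the literal diff):
vlib_buffer_t *bufs[VLIB_FRAME_SIZE], **b = bufs;
/* translate all buffer indices of the frame in one optimized call */
vlib_get_buffers (vm, from, bufs, n_left_from);
while (n_left_from >= 4)
  {
    /* prefetch two packets ahead: buffer headers plus the area in
       front of the payload where the encap header will be written */
    vlib_prefetch_buffer_header (b[2], LOAD);
    vlib_prefetch_buffer_header (b[3], LOAD);
    CLIB_PREFETCH (b[2]->data - CLIB_CACHE_LINE_BYTES,
                   2 * CLIB_CACHE_LINE_BYTES, STORE);
    CLIB_PREFETCH (b[3]->data - CLIB_CACHE_LINE_BYTES,
                   2 * CLIB_CACHE_LINE_BYTES, STORE);
    /* ... encapsulate b[0] and b[1] ... */
    b += 2;
    n_left_from -= 2;
  }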
Change-Id: I85d486b21a2524d64f2e246dfb4183539ec2532d
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: Id95fd604ed181a2f70c24e2c8cc4321755b7ba7f
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I53ab8d17914e6563110354e4052109ac02bf8f3b
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Change-Id: Ide23bb3d82024118214902850821a8184fe65dfc
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: Id4f37f5d4a03160572954a416efa1ef9b3d79ad1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I21ad6b04c19c8735d057174b1f260a59f2812241
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
The patch saves 13 cycles per packet in the graph node
vxlan-gpe-encap and increases vxlan_gpe encap throughput by roughly 5%
on the Haswell platform in the best case (all packets have the same
sw_if_index).
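The core of it is caching the tunnel lookup across packets that share
sw_if_index (a sketch; the lookup helper named here is hypothetical):
u32 last_sw_if_index = ~0;
vxlan_gpe_tunnel_t *t = 0;
/* per packet */
u32 sw_if_index = vnet_buffer (b0)->sw_if_index[VLIB_TX];
if (PREDICT_FALSE (sw_if_index != last_sw_if_index))
  {
    /* only pay for the interface-to-tunnel lookup when the flow changes */
    t = tunnel_from_sw_if_index (sw_if_index); /* hypothetical helper */
    last_sw_if_index = sw_if_index;
  }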
Change-Id: I9c70fd3e0f2f0a9d922cf64970d0b0d51b772024
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: I294be184764b45777d6e5e44f5d742b2c8732de4
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
|
|
Change-Id: Ieb8b53977fc8484c19780941e232ee072b667de3
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
It is cheaper to get the thread index from vlib_main_t if available...
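Roughly (both forms exist; the first avoids re-deriving the index when
vm is already in hand):
u32 thread_index;
/* cheaper: the index is cached in the vlib_main_t */
thread_index = vm->thread_index;
/* more expensive: re-derives it via a per-thread lookup */
thread_index = vlib_get_thread_index ();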
Change-Id: I4582e160d06d9d7fccdc54271912f0635da79b50
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Object sizes must evenly divide alignment requests, or vice
versa. Otherwise, only the first object will be aligned as
requested.
Three choices: add CLIB_CACHE_LINE_ALIGN_MARK(align_me) at
the end of structures, manually pad to an even divisor or multiple of
the alignment request, or use plain vectors/pools.
Add a static assert for enforcement.
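For instance, a cache-line aligned pool element might be declared like
this (the struct itself is a made-up example):
typedef struct
{
  u64 a, b, c;
  /* pads the struct size up to a multiple of the alignment request,
     so every element in a vector/pool stays aligned, not just the first */
  CLIB_CACHE_LINE_ALIGN_MARK (align_me);
} my_elt_t;

STATIC_ASSERT (sizeof (my_elt_t) % CLIB_CACHE_LINE_BYTES == 0,
               "my_elt_t size must be a multiple of the cache line size");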
Change-Id: I41aa6ff1a58267301d32aaf4b9cd24678ac1c147
Signed-off-by: Dave Barach <dbarach@cisco.com>
|
|
Rather than having multiple copies of the same function
scattered around, promote the function into the FIB
PROTOCOL definitions in fib_types.h.
Change-Id: I11c4d85931167d3a5f3dc1278afecc8845b23cd7
Signed-off-by: Jon Loeliger <jdl@netgate.com>
|
|
Modify interface creation to allow creation of tunnel interfaces
without dedicated per-tunnel output and tx nodes, which are not
used for most tunnel types. Also changed the interface-output node
function vnet_per_buffer_interface_output() so it does not rely
on hw_if_index as the next node index, which is neither flexible nor
efficient for large-scale tunnel interfaces.
The improvements are done for VXLAN, VXLAN-GPE, GENEVE and GTPU
tunnels. The GRE tunnel is still using per-tunnel output nodes, which
will be changed in a separate patch with other GRE enhancements.
Change-Id: I4123c01c0d2ead814417a867adb8c8a407e4df55
Signed-off-by: John Lo <loj@cisco.com>
|
|
This is a version of the VPP API generator written in Python using PLY.
It supports the existing language and has a plugin architecture for
generators. Currently C and JSON are supported.
Changes:
- vl_api_version to option version = "major.minor.patch"
- enum support
- Added error checking and reporting
- import support (removed the C pre-processor)
- services (tying request/reply together)
Version:
option version = "1.0.0";
Enum:
enum colours {
RED,
BLUE = 50,
};
define foo {
vl_api_colours_t colours;
};
Services:
service {
rpc foo returns foo_reply;
rpc foo_dump returns stream foo_details;
rpc want_stats returns want_stats_reply
events ip4_counters, ip6_counters;
};
Future planned features:
- unions
- bool, text
- array support (including length)
- proto3 output plugin
- Refactor C/C++ generator as a plugin
- Refactor Java generator as a plugin
Change-Id: Ifa289966c790e1b1a8e2938a91e69331e3a58bdf
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Improve "show xxx tunnel" output functions format_xxx_tunnel() for
GRE, VXLAN, VXLAN-GPE, GENEVE and GTPU tunnels to make their output
more consistent and provide better information.
Improved the output of "show int addr" to make its info more
consistent with tunnels and provide fib-index info.
Change-Id: Icd4b5b85a5bec417f8ee19afea336c770ad3b4c5
Signed-off-by: John Lo <loj@cisco.com>
|
|
This does not update api client code. In other words, if the client
assumes the transport is shmem based, this patch does not change that.
Furthermore, code that checks queue size, for tail dropping, is not
updated.
Done for the following apis:
Plugins
- acl
- gtpu
- memif
- nat
- pppoe
VNET
- bfd
- bier
- tapv2
- vhost user
- dhcp
- flow
- geneve
- ip
- punt
- ipsec/ipsec-gre
- l2
- l2tp
- lisp-cp/one-cp
- lisp-gpe
- map
- mpls
- policer
- session
- span
- udp
- tap
- vxlan/vxlan-gpe
- interface
VPP
- api/api.c
OAM
- oam_api.c
Stats
- stats.c
Change-Id: I0e33ecefb2bdab0295698c0add948068a5a83345
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
- separate client/server code for both memory and socket apis
- separate memory api code from generic vlib api code
- move unix_shared_memory_fifo to svm and rename to svm_fifo_t
- overall declutter
Change-Id: I90cdd98ff74d0787d58825b914b0f1eafcfa4dc2
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Move the functions hash_set_key_copy() and hash_unset_key_free(),
which are duplicated in various tunnel support code modules, to
hash.h as hash_set_mem_alloc() and hash_unset_mem_free(), to be
used by all.
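Usage at the call sites keeps the same shape (sketch; the hash table
and key variables here are illustrative):
/* key memory is copied/allocated and freed by the helpers themselves */
hash_set_mem_alloc (&gm->tunnel_by_key, &key, tunnel_index);
hash_unset_mem_free (&gm->tunnel_by_key, &key);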
Change-Id: I40723cabe29072ab7feb1804c221f28606d8e4fe
Signed-off-by: John Lo <loj@cisco.com>
|
|
The "test-all" target is still never called as part of any continuous
test (as it probably should) but at least it can now be expected to
succeed.
VXLAN-GPE:
* decapsulate Ethernet to "l2-input" instead of "ethernet-input",
otherwise the inner mac address gets checked against the interface one
(external) and the packet gets dropped (mac mismatch)
* set packet input sw_if_index to unicast vxlan tunnel for learning
TEST:
* VXLAN:
* reduce the number of shared tunnels:
=> reduce test duration by half
=> no functional change
* VXLAN-GPE:
* fix test TearDown() cli: command is "show vxlan-gpe" only
* remove vxlan-gpe specific tests as they were duplicates of the
BridgeDomain ones and already inherited.
* disable test_mcast_rcv() and test_mcast_flood() tests
* P2PEthernetAPI:
* remove test: "create 100k of p2p subifs"
there is already a "create 1k p2p subifs" test, so this one is a load
test and not a unit test.
See: lists.fd.io/pipermail/vpp-dev/2017-November/007280.html
Change-Id: Icafb83769eb560cbdeb3dc6d1f1d3c23c0901cd9
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Change-Id: I3255cd1be4ae4ec8d09574183c96f59028374a5e
Signed-off-by: Swarup Nayak <swarupnpvt@gmail.com>
|
|
Change-Id: Ifabb8d22d20bc1031664d5f004e74cd363759ab6
Signed-off-by: sharath reddy <sharathkumarboyanapally@gmail.com>
|
|
Saves memory at no appreciable performance cost.
before:
DBGvpp# sh fib mem
FIB memory
  Name     Size   in-use / allocated   totals
  Entry      80      7   /    150      560/12000
after:
DBGvpp# sh fib mem
FIB memory
  Name     Size   in-use / allocated   totals
  Entry      72      7   /      7      504/504
Change-Id: Ic5d3920ceb57b54260dc9af2078c26484335fef1
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
- Global variables declared in header files without
the use of the 'extern' keyword will result in multiple
instances of the variable being created by the compiler
-- one for each source file in which the
header file is included. This results in wasted
memory allocated in the BSS segments as well as
potentially introducing bugs in the application.
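The fix pattern is the usual one (foo_main is a hypothetical example):
/* foo.h -- declaration only: every includer refers to one object */
extern foo_main_t foo_main;

/* foo.c -- exactly one definition, allocated once */
foo_main_t foo_main;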
Change-Id: I6ef1790b60a0bd9dd3994f8510723decf258b0cc
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
|
|
Add one of these statements to foo.api:
vl_api_version 1.2.3
to generate a version tuple stanza in foo.api.h:
/****** Version tuple *****/
vl_api_version_tuple(foo, 1, 2, 3)
Change-Id: Ic514439e4677999daa8463a94f948f76b132ff15
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
[support for VPWS/VPLS]
- switch to using dpo_proto_t rather than fib_protocol_t in fib_paths so that we can describe L2 paths
- VLIB nodes to handle pop/push of MPLS labels to L2
Change-Id: Id050d06a11fd2c9c1c81ce5a0654e6c5ae6afa6e
Signed-off-by: Neale Ranns <nranns@cisco.com>
|