Age | Commit message | Author | Files | Lines |
|
Change-Id: I6511110d0472203498a4f8741781eeeeb4f90844
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Also it removes the ethernet_frame_is_any_tagged implementation,
which seems to be just as costly as two
invocations of ethernet_frame_is_tagged.
Change-Id: If1c95f8267cd34b807ec07e0d675cbd0db2fdf9f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I9710be2e722d716e22d989b3417fb49d2db0848a
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I3908cc112b40d4bb52da18e7c3ac5ae0af455f87
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Group Based Policy (GBP) defines:
- endpoints: typically a VM or container that is connected to the
virtual switch/router (i.e. to VPP)
- endpoint-group (EPG): a collection of endpoints
- policy: rules determining which traffic can pass between EPGs, a.k.a.
a 'contract'
Here, policy is implemented via an ACL.
EPG classification for transit packets is determined by:
- source EPG: from the packet's input interface
- destination EPG: from the packet's destination IP address.
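A hypothetical sketch of that classification flow follows; the types,
tables, and helper names are illustrative only, not the gbp plugin's
actual API.

#include <stdint.h>

typedef uint32_t epg_id_t;

/* Made-up lookup tables: source EPG keyed by input interface,
 * destination EPG keyed by the IP lookup result, and the contract
 * (an ACL index) keyed by the (src, dst) EPG pair. */
typedef struct
{
  epg_id_t *epg_by_sw_if_index;
  epg_id_t *epg_by_ip_lookup;
  uint32_t *acl_by_epg_pair;
  uint32_t n_epgs;
} gbp_sketch_t;

static inline uint32_t
gbp_sketch_classify (gbp_sketch_t *g, uint32_t rx_sw_if_index,
                     uint32_t ip_lookup_result)
{
  epg_id_t src = g->epg_by_sw_if_index[rx_sw_if_index];
  epg_id_t dst = g->epg_by_ip_lookup[ip_lookup_result];
  /* the returned ACL index is what actually enforces the contract */
  return g->acl_by_epg_pair[src * g->n_epgs + dst];
}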
Change-Id: I7b983844826b5fc3d49e21353ebda9df9b224e25
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
|
|
Enable CLIB_HAVE_VEC128 if both aarch64 and __ARM_NEON are defined,
i.e. armv8 only, not armv7.
Add more NEON compare intrinsics wrappers (a minimal sketch of the
kind of wrapper meant here is shown below).
I only add simple intrinsics wrappers. More complex ones can be added
later as they are needed, with performance tests on the corresponding
feature to back them up.
Remove wrongly added 128-bit definitions that were defined on both
armv7 and armv8 without regard for the presence of NEON instructions.
Notable corresponding code activations:
* MHEAP_FLAG_SMALL_OBJECT_CACHE in mheap.c
* ip4 fib mtrie leaves access
* enable ixge plugin compilation for aarch64
(conf still disables it by default)
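The sketch below shows the kind of wrapper meant above, assuming
armv8/NEON; the wrapper name is illustrative and not necessarily one
of the names added by this patch.

#include <arm_neon.h>

#if defined(__aarch64__) && defined(__ARM_NEON)
/* Per-lane 32-bit compare: a lane becomes all-ones on a match,
 * zero otherwise. */
static inline uint32x4_t
u32x4_is_equal (uint32x4_t a, uint32x4_t b)
{
  return vceqq_u32 (a, b);
}
#endif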
Change-Id: I99953823627bdff6f222d232c78aa7b655aaf77a
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Reference-count the number of entries in each bucket. If the reference
count goes to zero, free the backing store.
Add long-term churn-testing to test_bihash_template.c, thanks to
Andrew Yourtchenko for the initial implementation.
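A conceptual sketch of the reference-counting idea, with made-up
types; the real logic lives in the bihash template and uses its own
bucket layout and allocator.

#include <stdint.h>
#include <stdlib.h>

typedef struct
{
  void *backing_store;   /* key/value storage for this bucket */
  uint32_t refcnt;       /* number of live entries            */
} bucket_sketch_t;

static inline void
bucket_sketch_del_entry (bucket_sketch_t *b)
{
  if (--b->refcnt == 0)
    {
      /* last entry gone: release the backing store */
      free (b->backing_store);
      b->backing_store = NULL;
    }
}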
Change-Id: I4fbd9229cacfaba8027a85cbf87b74afdead6e39
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
This patch teaches worker threads to sleep and to be woken up by the
kernel when there is activity on file descriptors assigned to that thread.
It also adds counters to epoll file descriptors and a new
debug CLI command, 'show unix file'.
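A minimal sketch of the mechanism, assuming plain epoll; the real code
is integrated into the vlib main loop and its input-node scheduling.

#include <sys/epoll.h>

static void
worker_sleep_until_activity (int epoll_fd)
{
  struct epoll_event events[32];
  int i, n;

  /* block until the kernel reports activity on one of the file
   * descriptors assigned to this worker */
  n = epoll_wait (epoll_fd, events, 32, -1 /* no timeout */);
  for (i = 0; i < n; i++)
    {
      /* events[i].data identifies the file; dispatch its read/write
       * handler here and bump its per-fd counters */
    }
}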
Change-Id: Iaf67869f4aa88ff5b0a08982e1c08474013107c4
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
For some files, such as hugepage files, ftruncate() fails with the error
"Invalid argument" if the 'length' parameter is not on a page boundary.
Change-Id: I42a9cde98707da15e3c5d1653046e2277fc7a424
Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
|
|
- use valloc as a 'central' segment baseva manager
- use per segment manager segment pools and use rwlocks to guard them
- add session test that exercises segment creation
- embed segment manager properties into application since they're shared
- fix rw locks
Change-Id: I761164c147275d9e8a926f1eda395e090d231f9a
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
This hash table intends to provide an alternative to the widely
used bihash table in places where either:
- Hash entry timeout is required
- The hash table data does not fit in CPU cache
Although the bihash table is very fast, each lookup requires
accessing two cache lines in a serialized fashion. It works fine
when the hash table is in cache, but hits a wall when it does not.
The 'flowhash' table uses a simplified design (at the cost of
less effective bucket auto-scaling) where each access only requires
a single memory lookup (in the absence of collisions). The hash
table also uses a reduced number of registers.
In practice, a VPP node implementing a stateful feature would
typically:
- prefetch buffer metadata (in-cache)
- prefetch packet header (in-cache)
- compute hash & prefetch hash bucket (possibly in RAM)
- read/write key and value from bucket
Using this hash table, it is possible to pipeline accesses in a way
that does not exhaust the CPU's line fill buffers, even when the
requested value is located in RAM (i.e. not in cache); see the
sketch below.
Measurements showed it was possible to scale to tens of millions
of flows (with a full 5-tuple matching and 32B value, i.e. 1
cache line per flow) with no performance degradation when
the hash table grows to the point it doesn't fit in cache anymore.
I have used this table in a couple of non-open-sourced projects,
but think it might be useful to lb, nat, and possibly other VPP
subsystems.
More information in the .h file.
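A generic illustration of that pipelined access pattern, not the
flowhash API itself; the bucket layout and names are made up.

#include <stdint.h>

typedef struct
{
  uint64_t key[2];     /* e.g. a compressed 5-tuple */
  uint8_t value[32];   /* 32B of per-flow state     */
} flow_bucket_sketch_t;

/* Step 1: compute the hash and start the (possibly RAM) fetch early. */
static inline flow_bucket_sketch_t *
flow_bucket_prefetch (flow_bucket_sketch_t *buckets, uint32_t mask,
                      uint32_t hash)
{
  flow_bucket_sketch_t *b = &buckets[hash & mask];
  __builtin_prefetch (b, 1 /* rw */, 3 /* keep in all cache levels */);
  return b;
}

/* Step 2, issued later so other per-packet work overlaps the memory
 * access: touch the bucket exactly once to match the key, then
 * read/write the value in place. */
static inline int
flow_bucket_match (flow_bucket_sketch_t *b, const uint64_t key[2])
{
  return b->key[0] == key[0] && b->key[1] == key[1];
}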
Change-Id: I2b13dde0eabd868b75da1cedbfca0bf74d705102
Signed-off-by: Pierre Pfister <ppfister@cisco.com>
|
|
Change-Id: Ibc252d9ed595be955790ec1c97d8730e43ad89b2
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Add some description and clean up code that uses the Arm system counter.
Change-Id: Ie1fe00e3e4b5d98867617b7b0184ac526e333c53
Signed-off-by: Brian Brooks <brian.brooks@arm.com>
|
|
Change-Id: I75e6c7d1a6ff1fcebc81ec10bd86b79f2bf3dc22
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I606fd89c410369cbd9ce9dcaaaa9dc58796e7c0e
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
- update segment manager and session api to work with both flavors of
ssvm segments
- added generic ssvm slave/master init and del functions
- cleanup/refactor tcp_echo
- fixed uses of svm fifo pool as vector
Change-Id: Ieee8b163faa407da6e77e657a2322de213a9d2a0
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
when the verbose option is used
Change-Id: Ib63ead4525332f897b8a1d8a4cf5a0eb1da1e7f3
Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
|
|
On deleting sub-interfaces, functions vnet_delete_sub_interface()
and vnet_delete_hw_interface() are not cleaning up sub-interface
related hash tables and memory properly.
Change-Id: I17c7c4b2078c062c77bfe48889beb677610035ca
Signed-off-by: John Lo <loj@cisco.com>
(cherry picked from commit 7f5bec647c9dc743c015d461d040e63a77fd0a08)
|
|
Change-Id: I67648dbed3c7ed291b3e1ce617d83a776d3623bb
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: I056598a1818a39c2da73e252600c14585e5aae83
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- separate client/server code for both memory and socket apis
- separate memory api code from generic vlib api code
- move unix_shared_memory_fifo to svm and rename to svm_fifo_t
- overall declutter
Change-Id: I90cdd98ff74d0787d58825b914b0f1eafcfa4dc2
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Combine implementer, part, variant, and revision into one CPU
description line.
For example: ARM (Cortex-A57 PASS 1.2)
* get info from /proc/cpuinfo (a rough parsing sketch is shown below)
* only recognize armv8 processors
* add all given Cavium processors
* Cavium starts counting variants from 1 instead of 0
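The sketch below shows the parsing side only, assuming the field names
the Linux kernel prints for armv8 ('CPU implementer', 'CPU part', ...);
the real code also maps the numeric IDs to vendor and core names.

#include <stdio.h>

static void
print_arm_cpu_description (void)
{
  char line[256];
  unsigned implementer = 0, part = 0, variant = 0, revision = 0;
  FILE *f = fopen ("/proc/cpuinfo", "r");

  if (!f)
    return;
  while (fgets (line, sizeof (line), f))
    {
      /* the fields repeat once per CPU; the last occurrence wins */
      sscanf (line, "CPU implementer : 0x%x", &implementer);
      sscanf (line, "CPU part : 0x%x", &part);
      sscanf (line, "CPU variant : 0x%x", &variant);
      sscanf (line, "CPU revision : %u", &revision);
    }
  fclose (f);
  printf ("implementer 0x%x part 0x%x PASS %u.%u\n",
          implementer, part, variant, revision);
}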
Change-Id: I4f3820fb13a6bd2a0dc59e28fbe6f48a5b0ceb25
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
- add function to sock client that bootstraps shm api
- allow sock clients to request custom shm ring configs
Change-Id: Iabc1dd4f0dc8bbf8ba24de37f4966339fcf86107
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Move the functions hash_set_key_copy() and hash_unset_key_free(),
which are duplicated in various tunnel support code modules, to
hash.h as hash_set_mem_alloc() and hash_unset_mem_free() to be
used by all.
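A hedged usage sketch; the exact prototypes live in hash.h, and the
key type here is made up. The point is that the _alloc/_free variants
copy the key on insert and free that copy on delete, which each tunnel
module previously did by hand.

#include <vppinfra/hash.h>

typedef struct { u32 src, dst, vni; } demo_key_t;

static void
demo_tunnel_hash_usage (uword **hash_by_key, demo_key_t *key, uword value)
{
  /* insert: the key is copied into freshly allocated memory */
  hash_set_mem_alloc (hash_by_key, key, value);

  /* delete: the stored key copy is found and freed */
  hash_unset_mem_free (hash_by_key, key);
}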
Change-Id: I40723cabe29072ab7feb1804c221f28606d8e4fe
Signed-off-by: John Lo <loj@cisco.com>
|
|
This allows arm platforms to also take advantage of crc32 hardware
acceleration.
* add a wrapper for crc32_u64; it's the only one really used. Using it
instead of a call to clib_crc32c() eases building symmetrical hash
functions. (A sketch of the wrapper idea is shown below.)
* replace #ifdef on SSE4 with a test on clib_crc32c_uses_intrinsics.
Note: keep the test on i386
* fix typo in lb test log
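The sketch below shows the wrapper idea with illustrative names, not
the exact patch: use the ARMv8 CRC32C instruction when available,
otherwise SSE4.2; everything else would fall back to the software
clib_crc32c() path.

#include <stdint.h>

#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
#include <arm_acle.h>
static inline uint32_t
crc32c_u64_sketch (uint32_t last, uint64_t data)
{
  return __crc32cd (last, data);
}
#elif defined(__SSE4_2__)
#include <x86intrin.h>
static inline uint32_t
crc32c_u64_sketch (uint32_t last, uint64_t data)
{
  return (uint32_t) _mm_crc32_u64 (last, data);
}
#endif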
Change-Id: I03a0897b70f6c1717e6901d93cf0fe024d5facb5
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Change-Id: Ic9c1c70e06b953538ed43fc91ed26b6be82ce812
Signed-off-by: Kevin Wang <kevin.wang@arm.com>
|
|
Taken from DPDK; the AVX2 variant is also updated to be in sync with
the DPDK version.
Change-Id: I8a42e4141a5a1a8cfbee328b07bd0c9b38a9eb05
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
"This time, for sure..."
Change-Id: Ie981003842d37c5eb6a0b2fe3abe974a93b86df8
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ic551af286aa84293deb260560c12def430449598
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I0a0f8c9aad1530d18c70c962e729e84948a074ee
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
It looks like different compiler versions produce different results for
expressions like "(cast) ptr + inc".
Use parentheses to avoid such issues.
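An illustration of the change with made-up identifiers: the patch adds
explicit parentheses so every compiler groups the cast with the pointer
before the addition is applied.

#include <stdint.h>

static inline void *
advance_sketch (void *ptr, uintptr_t inc)
{
  return ((uint8_t *) ptr) + inc;   /* was: (uint8_t *) ptr + inc */
}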
Change-Id: I93a9883bf5fc05ae462df5b004817775f0739405
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
|
|
Change-Id: I63d720378b92813993525f80fee90fc79df27fba
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Move elog_sample.c to src/examples/vlib
Change-Id: I7d32c83c424b9ca4a057372c7fc6a6e2b7dab034
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
clib_maplog_process(...): handle logs which weren't closed properly.
It will happen.
Change-Id: Ibcf9c9ea7a09991e6294050e7d2979a0d3f965cf
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Use getauxval(AT_HWCAP) to get the processor capabilities.
The result should be the same as calling
cat /proc/cpuinfo | grep Feature | head -n1
All of the features but one (aes) have a different name.
Handle aes by adding an arch prefix to it, which is skipped when
printing, and by adding a custom clib_cpu_supports_aes() function
(see the sketch below).
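The sketch below shows the mechanism only, assuming
clib_cpu_supports_aes() reduces to a HWCAP bit test of this kind
(an assumption, not the verified macro expansion).

#include <sys/auxv.h>

#if defined(__linux__) && (defined(__aarch64__) || defined(__arm__))
#include <asm/hwcap.h>   /* HWCAP_* bit definitions */
#endif

static inline int
cpu_supports_aes_sketch (void)
{
#ifdef HWCAP_AES
  /* kernel-provided capability bits replace /proc/cpuinfo parsing */
  return (getauxval (AT_HWCAP) & HWCAP_AES) != 0;
#else
  return 0;
#endif
}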
Change-Id: If9830bd5a17bac1bd1b5337dacbb0ddbb8ed6b18
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
this follows commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df
which removed many unused functions from smp.h
Change-Id: I3aa0954a5e2319cc526fa68dda113f3cbe063960
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Change-Id: I0545018ec02f3706ad6a2da6fc13537db5c31a2d
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I0946c0a124f3fc9a0aa87499a35edfeaabaec932
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: I760b482b9de457bbb17de817db7079b57d3f5ec1
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
While my original attempt was to write this function to be portable
and work on non-x86 systems, it seems that gcc-5 doesn't respect the
alignment attribute and issues aligned vector instructions, which
causes a crash.
Change-Id: If165c8d482ac96f2b71959d326f9772b48097b48
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I2de52725f40380422ca5019405df36cc05681603
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Use this macro to arrange init function ordering between friend
plugins. Fails in the usual manner if the plugin doesn't exist, or if
the init function symbol is AWOL.
clib_error_t *
thisplug_init (vlib_main_t *vm)
{
  clib_error_t *error = 0;
  if ((error = vlib_plugin_init_function ("otherplug.so", otherplug_init)))
    return error;
  <etc>
  return error;
}
VLIB_INIT_FUNCTION(thisplug_init);
Change-Id: Ideecaf46bc0b1546e85096e54be8ddef87946565
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Iba2d20c0a3d4f07457d108d014a6fa4522cb8e2c
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Change-Id: Ia66ac0a2fa23a3d29370b54e2014900838a8d3ac
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Change-Id: If581feca0d51d0420c971801aecdf9250c671b36
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Ibee8973270366c38dced6eb3e8ca41784549183a
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Add a way to toggle a warning on and off for a specific section of code.
This supports clang and gcc, and has no effect for any other compilers.
This follows commit bfc29ba442dbb65599f29fe5aa44c6219ed0d3a8 and
provides a generic way to handle warnings in such corner cases.
To disable a warning enabled by "-Wsome-warning" for a specific piece
of code:
WARN_OFF(some-warning) // disable compiler warning
; /* some code */
WARN_ON(some-warning) // enable the warning again
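For reference, the equivalent effect spelled out with raw pragmas (GCC
syntax shown; clang is analogous). It is assumed here that WARN_OFF()
and WARN_ON() expand to something along these lines via _Pragma(); see
the patch for the actual macro bodies.

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
static int unused_for_demo;   /* would normally trigger -Wunused-variable */
#pragma GCC diagnostic pop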
Change-Id: I0101caa0aa775e2b905c7b3b5fef3bbdce281673
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|
|
Change-Id: I245c034684ba8585c8f5bb5353027aba13f8a53e
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
writer_lock must be initialized before it is used.
Change-Id: Ib258aa09b3bccc4de6edba0eb75a7eec20f1a61f
Signed-off-by: JingLiuZTE <liu.jing5@zte.com.cn>
|
|
clib_mem_unaligned + zap64 casts its input as u64, computes a mask
according to the input length, and returns the cast, masked value.
Therefore all 8 bytes of the u64 are systematically read, and
the invalid ones are discarded.
Since they are discarded correctly, this invalid read can safely be
ignored.
Revert "fix clib_mem_unaligned() invalid read"
This reverts commit 0ed3d81a5fa274283ae69b69a405c385189897d3.
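A conceptual sketch of the masking step (little-endian layout assumed,
names made up; the real zap64 lives in vppinfra and also handles
big-endian):

#include <stdint.h>

static inline uint64_t
zap64_sketch (uint64_t x, uint32_t n_valid_bytes)
{
  /* keep only the n valid low-order bytes of the 8-byte read, so the
   * over-read bytes can never influence the result */
  if (n_valid_bytes >= 8)
    return x;
  return x & ((1ULL << (8 * n_valid_bytes)) - 1);
}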
Change-Id: I5cc33ad36063c414085636debe93707d9a75157a
Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
|