Age | Commit message | Author | Files | Lines |
|
VPP-1352
If the number of rules within a given partition exceeds the limit,
split_partition() may get called, in which case we calculate
the relaxed mask, create a new partition with that mask and
attempt to reallocate some entries from the overcrowded partition.
The non-TM code was pre-expanding the vector of rules by
the number of rules in the new ACL being applied - which
caused split_partition() to iterate over entries
filled with zeroes. Most of the time this is benign, but
if the newly created relaxed partition is such that these
zero entries can be "relocated", then the code attempts to
do so, which does not end well.
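A minimal sketch of the failure mode, with made-up names (rule_t,
relocate_rule) standing in for the plugin's types: pre-growing a vppinfra
vector past the number of populated entries makes any loop bounded by
vec_len() also walk the zero-filled tail.

    #include <vppinfra/vec.h>

    typedef struct { u32 mask_type_index; } rule_t;

    static void relocate_rule (rule_t * r) { (void) r; /* ... */ }

    static void
    walk_rules (rule_t * rules, u32 n_populated, u32 n_being_applied)
    {
      /* what the non-TM path effectively did: pre-expand for the new ACL */
      vec_validate (rules, n_populated + n_being_applied - 1);

      /* buggy: also visits n_being_applied all-zero entries */
      for (u32 i = 0; i < vec_len (rules); i++)
        relocate_rule (&rules[i]);

      /* safer: bound the walk by the count of populated rules only */
      for (u32 i = 0; i < n_populated; i++)
        relocate_rule (&rules[i]);
    }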
Change-Id: I2dbf3ccd29ff97277b21cdb11c4424ff0915c3b7
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
applied ACLs
The partition_split() did not increment the refcount when using a mask type index,
so subsequent modifications could result in double frees - in the best case
an immediate crash, in the worst case a delayed crash elsewhere.
Introduce lock_mask_type_index() and call it, and move the mask type index
related functions closer to the top of the file.
Make the assignment of the new mask type indices
in the tuplemerge case use assign_mask_type_index().
Keep some debugs in case we need to investigate this further at some point.
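A sketch of the refcounting discipline being introduced; the entry type and
the release counterpart here are illustrative, only lock_mask_type_index()
and assign_mask_type_index() are named by this commit. Every user of a mask
type index takes a reference, and only dropping the last reference frees the
entry, so using an index without locking it leads to the premature-free and
double-free symptoms described above.

    #include <vppinfra/types.h>

    typedef struct
    {
      u64 mask[6];
      u32 refcount;
    } mask_type_entry_t;

    static void
    lock_mask_type_index (mask_type_entry_t * entries, u32 mti)
    {
      entries[mti].refcount++;
    }

    static void
    release_mask_type_index (mask_type_entry_t * entries, u32 mti)
    {
      if (--entries[mti].refcount == 0)
        {
          /* last user gone: only now return the entry to the free pool */
        }
    }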
Change-Id: Iae370f5cd92e1fe1442480db34656a8a3442dbc0
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 1edc406da3d4f6e63de2f278360b5753f55c00df)
|
|
Change-Id: I634971f6376a7ea49de718ade9139e67eeed48e5
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit d039281e11cfc4580fe140e72390c1c48688c722)
|
|
Commit 1c7bf5d41737984907e8bad1dc832eb6cb1d6288 added the poisoning
of newly freed memory in debug builds, exposing a logic
error in the mask assignment code - it passed a pointer to
data within a pool to a function which may expand the pool.
This resulted in a test failure in the debug version.
Fix that by making a local copy of the value before passing
a pointer to it.
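A minimal illustration of the bug pattern (made-up names, not the actual
plugin code): a pointer into a pool handed to a callee that may pool_get()
becomes dangling if the pool is reallocated, and with freed-memory poisoning
the stale bytes no longer even look like the old value. Copying the value
onto the stack first avoids the problem.

    #include <string.h>
    #include <vppinfra/pool.h>

    typedef struct { u64 mask[6]; } mask_entry_t;

    /* stand-in for a callee that may expand (and therefore move) the pool */
    static u32
    add_mask_entry (mask_entry_t ** per_pool, u64 * mask)
    {
      mask_entry_t *e;
      pool_get (*per_pool, e);
      memcpy (e->mask, mask, sizeof (e->mask));
      return e - *per_pool;
    }

    static u32
    clone_mask_entry (mask_entry_t ** per_pool, u32 parent_index)
    {
      /* BAD:  add_mask_entry (per_pool, (*per_pool)[parent_index].mask);
         the argument points into the very pool the callee may move */
      u64 local_mask[6];
      memcpy (local_mask, (*per_pool)[parent_index].mask, sizeof (local_mask));
      return add_mask_entry (per_pool, local_mask);   /* GOOD: local copy */
    }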
Change-Id: I73f3670672c3d86778aad0f944d052d0480cc593
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Configure w/ --enable-dlmalloc, see .../build-data/platforms/vpp.mk
src/vppinfra/dlmalloc.[ch] are slightly modified versions of the
well-known Doug Lea malloc. Main advantage: dlmalloc mspaces have no
inherent size limit.
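A tiny illustration of the mspace API this switches to (a sketch, assuming
the standard mspace entry points are exposed via vppinfra/dlmalloc.h): an
mspace is created once and then grows on demand instead of hitting a preset
ceiling.

    #include <vppinfra/dlmalloc.h>

    static int
    mspace_demo (void)
    {
      /* capacity 0: start at the default granularity and grow as needed */
      mspace ms = create_mspace (0, 1 /* locked, i.e. thread-safe */);
      void *p = mspace_malloc (ms, 1 << 20);
      mspace_free (ms, p);
      return p != 0;
    }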
Change-Id: I19b3f43f3c65bcfb82c1a265a97922d01912446e
Signed-off-by: Dave Barach <dave@barachs.net>
|
|
Change-Id: Ie899ccbaae4df7cce4ebbba47ed6c3cce5269bdb
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
Slightly refactored from the initial implementation of the TupleMerge [1]
algorithm by Valerio Bruschi (valerio.bruschi@telecom-paristech.fr)
[1] James Daly, Eric Torng "TupleMerge: Building Online Packet Classifiers
by Omitting Bits", In Proc. IEEE ICCCN 2017, pp. 1-10
Also add startup parameters: one to turn the algorithm on/off ("use tuple merge 1/0"),
and one to tweak the split threshold
("tuple merge split threshold N"). The default value of the split threshold
is 39 as per the paper, but some more tuning might be necessary to find the best
value.
This change, alongside the optimizations which avoid extra lookups,
significantly reduces the slowdown on the ClassBench-generated ACLs, which
are supposed to resemble realistic ACLs seen in use in the field.
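A very rough sketch of the two knobs described above (illustrative code, not
the plugin's): relaxing a mask means omitting bits from it so that more rules
can share one partition, and the split threshold bounds how many colliding
rules a partition may collect before it gets split.

    #include <vppinfra/types.h>

    /* "omitting bits": clear them from the mask so that more rules hash
       identically and land in the same partition */
    static u64
    relax_mask_word (u64 mask_word, u64 bits_to_omit)
    {
      return mask_word & ~bits_to_omit;
    }

    /* "tuple merge split threshold N" (default 39 per the paper): once a
       partition holds more colliding rules than this, split it */
    static int
    partition_needs_split (u32 n_colliding_rules, u32 split_threshold)
    {
      return n_colliding_rules > split_threshold;
    }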
Change-Id: I9713e4673970e9a62d4d9e9718365293375fab7b
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
- instantiate the per-use mask type entry for a given hash ACE;
this prepares for adding tuplemerge, where the applied ACE may
have a different mask type due to the relaxing of the tuples
- store a vector of the colliding rules for linear lookups
rather than traversing the linked list
- store the lowest rule index for a given mask type inside
the structure. This allows skipping the lookups in the later
mask types if we have already matched an entry that is in front
of the very first entry in the next candidate mask type,
thus saving a worthless hash table lookup (see the sketch
after this list)
- use a vector of mask type indices rather than a bitmap,
sorted (by construction) in the order of ascending
lowest rule index - this allows terminating the lookups
early
- adapt the debug cli outputs accordingly to show the data
- propagate the is_ip6 into the inner calls
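A sketch of the early-termination point from the list above (types and names
are illustrative): the mask types are visited in ascending order of their
lowest rule index, so once a match with a smaller rule index is in hand, no
later mask type can produce a better one.

    #include <vppinfra/types.h>

    typedef struct
    {
      u32 mask_type_index;
      u32 lowest_rule_index;    /* smallest ACE index using this mask type */
    } candidate_mask_t;

    static u32
    best_match_sketch (candidate_mask_t * candidates /* sorted vector */,
                       u32 n_candidates,
                       u32 (*lookup_one) (u32 mask_type_index))
    {
      u32 best = ~0, i;
      for (i = 0; i < n_candidates; i++)
        {
          if (best < candidates[i].lowest_rule_index)
            break;              /* nothing later can beat the current match */
          u32 found = lookup_one (candidates[i].mask_type_index);
          if (found < best)
            best = found;
        }
      return best;
    }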
Change-Id: I7a67b271e66785c6eab738b632b432d5886a0a8a
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Trying to accommodate fragments as first-class citizens
has proven to be more trouble than it's worth. So
fall back to linear ACL search in case the packet
is a fragment. Delete the corresponding code from the hash
matching.
Change-Id: Ic9ecc7c800d575615addb33dcaa89621462e9c7b
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Add a new kv_16_8 field into the 5tuple union, rename
the existing kv to kv_40_8 for clarity, and
add the compile-time alignment constraints.
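A small illustration of the compile-time constraints this refers to
(standalone, using plain C11 _Static_assert rather than the plugin's own
macros): the renamed kv_40_8 view must cover the whole 48-byte 5tuple, and
the new kv_16_8 view occupies exactly its last 24 bytes (see the IPv4 layout
change further down this log).

    #include <vppinfra/bihash_16_8.h>
    #include <vppinfra/bihash_40_8.h>

    /* 40-byte key + 8-byte value */
    _Static_assert (sizeof (clib_bihash_kv_40_8_t) == 48,
                    "40_8 kv must be 48 bytes");
    /* 16-byte key + 8-byte value */
    _Static_assert (sizeof (clib_bihash_kv_16_8_t) == 24,
                    "16_8 kv must be 24 bytes");
    /* hence the 16_8 view of an IPv4 5tuple starts 24 bytes in, which also
       keeps its u64 key words naturally aligned */
    _Static_assert (sizeof (clib_bihash_kv_40_8_t)
                    - sizeof (clib_bihash_kv_16_8_t) == 24,
                    "16_8 view must sit at a 24-byte offset");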
Change-Id: I9bfca91f34850a5c89cba590fbfe9b865e63ef94
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
contiguous with L4 data
Using ip46_address_t was convenient from an operational point of view but created
some difficulties dealing with IPv4 addresses - the extra 3x u32 of padding
is costly, and the "holes" mean we can not use the smaller key-value
data structures for the lookup.
This commit changes the 5tuple layout for the IPv4 case, such that
the src/dst addresses directly precede the L4 information.
That allows treating the same data within the 40x8 key-value
structure as a 16x8 key-value structure starting at a 24-byte offset.
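An illustration of the reinterpretation this layout enables (simplified, not
the plugin's actual code): with the IPv4 src/dst placed directly in front of
the L4 data, the last 24 bytes of the 40_8 key-value are themselves a valid
16_8 key-value.

    #include <vppinfra/bihash_16_8.h>
    #include <vppinfra/bihash_40_8.h>

    static clib_bihash_kv_16_8_t *
    ip4_view_of (clib_bihash_kv_40_8_t * kv_40_8)
    {
      _Static_assert (sizeof (clib_bihash_kv_40_8_t) ==
                      24 + sizeof (clib_bihash_kv_16_8_t),
                      "the 16_8 view must start exactly 24 bytes in");
      /* IPv4 src/dst + L4 info form the 16-byte key, followed by the value */
      return (clib_bihash_kv_16_8_t *) ((u8 *) kv_40_8 + 24);
    }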
Change-Id: Ifea8d266ca0b9c931d44440bf6dc62446c1a83ec
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
It is a relatively rarely used low-level command for code that didn't change,
but due to infra changes it did not survive. Having it working may be very
useful for corner-case debugging. So, fix it to work with
the acl-as-a-service infra.
Change-Id: I11b60e0c78591cc340b043ec240f0311ea1eb2f9
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 18bde8a579960aa46f43ffbe5c2905774bd81a35)
|
|
- autosize the ACL plugin heap based on the number of workers
- for manual heap size setting, use the proper types (uword)
and the proper format/unformat functions (unformat_memory_size);
see the sketch below
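An illustrative fragment of the startup-config parsing this refers to (the
"heap-size" keyword is only an example, not necessarily the plugin's actual
stanza): unformat_memory_size accepts human-friendly values such as 512M or
2G into a uword.

    #include <vppinfra/format.h>

    static uword
    parse_heap_size (unformat_input_t * input)
    {
      uword heap_size = 0;
      if (unformat (input, "heap-size %U", unformat_memory_size, &heap_size))
        return heap_size;
      return 0;
    }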
Change-Id: I7c46134e949862a0abc9087d7232402fc5a95ad8
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
other plugins
This code implements the functionality required for other plugins wishing
to perform ACL lookups in the contexts of their choice, rather than only
in the context of the interface in/out.
The lookups are stateless ACL lookups - there is no concept of "direction"
within a context, hence no concept of a "connection" either.
The plugins need to include the
The file acl_lookup_context.md has more info.
Change-Id: I91ba97428cc92b24d1517e808dc2fd8e56ea2f8d
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
- Teach vpp_api_test to send/receive API messages over sockets
- Add memfd-based shared memory
- Add api messages to create memfd-based shared memory segments
- vpp_api_test supports both socket and shared memory segment connections
- vpp_api_test pivot from socket to shared memory API messaging
- add socket client support to libvlibclient.so
- dead client reaper sends ping messages, container-friendly
- dead client reaper falls back to kill (<pid>, 0) live checking
if e.g. a python app goes silent for tens of seconds
- handle ping messages in python client support code
- teach show api ring about pairwise shared-memory segments
- fix ip probing of already resolved destinations (VPP-998)
We'll need this work to implement proper host-stack client isolation
Change-Id: Ic23b65f75c854d0393d9a2e9d6b122a9551be769
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
|
This adds the ability to tweak the memory allocation parameters of the ACL plugin
from the startup config. It may be useful in cases involving a higher connection
limit than the default 1M, or a high number of cores.
Change-Id: I2b6fb3f61126ff3ee998424b762b6aefe8fb1b8e
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Add a counter incremented upon the ACL check,
so it is easier to see which kind of traffic
is being checked by the policy, and add the corresponding
output to the debug CLI "show acl-plugin tables" command.
Change-Id: Id811dddf204e63eeceabfcc509e3e9c5aae1dbc8
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
portrange matches on the same hash key (VPP-937)
Multiple portranges that land on the same hash key always report a match
on the first portrange - even when it is one of the subsequent portranges that matched.
This was a test escape, so make a corresponding test case and fix the code so it passes.
(the commit on stable/1707 erroneously mentioned the VPP-938 jira ticket)
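A sketch of the corrected matching behaviour (names are illustrative): when
several portrange ACEs collide on one hash key, every entry in the collision
list has to be checked against the packet's port instead of blindly reporting
the first list member.

    #include <vppinfra/types.h>

    typedef struct
    {
      u16 port_first, port_last;
      u32 ace_index;
    } colliding_portrange_t;

    static u32
    match_colliding_portranges (colliding_portrange_t * entries,
                                u32 n_entries, u16 packet_port)
    {
      u32 i;
      for (i = 0; i < n_entries; i++)
        if (packet_port >= entries[i].port_first
            && packet_port <= entries[i].port_last)
          return entries[i].ace_index;  /* the entry that really matches */
      return ~0;                        /* no portrange matched */
    }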
Change-Id: Idbeb8a122252ead2468f5f9dbaf72cf0e8bb78f1
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit fb088f0a201270e949469c915c529d75ad13353e)
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
(complete the fix for VPP-935)
The fix for VPP-935 missed the case that hash_acl_add() and hash_acl_delete() may be called
during the replacement of an existing applied ACL; as a result, the "applied" logic needs
to be replicated separately for the hash ACLs, since they are a lower layer.
Change-Id: I7dcb2b120fcbdceb5e59acb5029f9eb77bd0f240
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit ce9714032d36d18abe72981552219dff871ff392)
|
|
interface (VPP-935)
The logic in the hash ACL bitmask update was using the vector
of ACLs applied to the interface to rebuild the hash lookup mask.
However, in transient cases (like doing group manipulation with
hash ACLs), that does not hold true. Thus, keep
a local copy of which ACL indices hash_acl_apply
was previously called for, and maintain that copy locally
within the hash_lookup.c file logic.
Change-Id: I30187d68febce8bba2ab6ffbb1eee13b5c96a44b
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 1de7d7044434196610190011ebb431f054701259)
|
|
traffic (VPP-910/VPP-929)
The commit fixing VPP-910 and separating the memory operations
into separate heaps missed setting the MHEAP_FLAG_THREAD_SAFE flag,
which quite obviously caused issues in the multithreaded setup.
Fix that.
Also, add the debug CLIs
"set acl-plugin heap {main|hash} {validate|trace} {1|0}"
to toggle the memory instrumentation, in case we ever need it
in the future.
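A simplified illustration of the missing piece (using the mheap API as it
existed at the time; not the plugin's exact code): a private heap that
multiple worker threads allocate from has to carry MHEAP_FLAG_THREAD_SAFE.

    #include <vppinfra/mheap.h>

    static void *
    make_worker_safe_heap (uword size)
    {
      void *heap = mheap_alloc (0 /* let it allocate the VM */, size);
      mheap_t *h = mheap_header (heap);
      h->flags |= MHEAP_FLAG_THREAD_SAFE;
      return heap;
    }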
Change-Id: I8bd4f7978613f5ea75a030cfb90674dac34ae7bf
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit e6423bef32ca2ffcfcd7a092eb4673badd53ea4c)
|
|
(VPP-910)
The further prolonged testing on the testbed that reported VPP-910
has uncovered a couple of deeper issues with the optimization from
7384, and the usage of subscripts rather than vec_elt_at_index()
allowed a couple of further errors in the code to stay hidden.
Also, the current acl-plugin behavior of using the global
heap for its dynamic data is problematic - it makes
the troubleshooting much harder by potentially spreading
the problem around.
Based on this experience, this commit makes a few changes to fix
the issues seen, also improving the serviceability of the acl-plugin
code for the future:
- Use one separate mheap for any ACL-related control plane
operations and another for the hash lookup data structures,
to compartmentalize any memory-related issues for the ACL plugin.
- Ensure vec_elt_at_index() usage throughout the hash_lookup.c file
(see the sketch below).
- Use vectors rather than raw memory for storing the "ordinary" ACL rules.
- Rework the optimization from 7384 to use a separate tail pointer
rather than overloading the "prev" field.
- Make get_session_ptr() more conservative and adjust is_valid_session_ptr
accordingly.
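An illustration of the vec_elt_at_index() point from the list above
(standalone sketch, not plugin code): in debug images the macro asserts that
the index is within vec_len(), while a raw subscript silently reads past the
end of the vector.

    #include <vppinfra/vec.h>

    static u32
    get_nth (u32 * entries, u32 index)
    {
      /* checked access: ASSERTs on an out-of-range index in debug images */
      return *vec_elt_at_index (entries, index);
      /* vs. the unchecked equivalent: entries[index] */
    }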
Change-Id: Ifda85193f361de5ed3782a4acd39622bd33c5830
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit bd9c5ffe39e9ce61db95d74d150e07d738f24da1)
|
|
applied as part of many (VPP-910)
Change 7385 added code which makes the first ACE's "prev" entry within the linked list of
shadowed ACEs point to the last ACE, in order to avoid frequent linear list traversals.
That change was not complete and did not update this "prev" entry whenever the last ACE was deleted.
As a result, changes within the applied ACLs which caused calls to hash_acl_unapply/hash_acl_apply
could hit the assert which does the sanity check. The solution is to add the missing update logic.
Change-Id: I9cbe9a7c68b92fa3a22a8efd11b679667d38f186
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 45fe7399152f5ca511ba0b03fee3d5a3dffd1897)
|
|
When applying ACEs in the new hash-based scheme, for each ACE
a lookup in the hash table is done, and either that ACE is added
to the end of the existing list if there is a match,
or a new list is created if there is no match.
Usually ACEs do not overlap, so this operation is fast; however,
in the case of a large number of ACLs the fragment-permit entries
create a huge list which needs to be traversed for every other
ACE being added, slowing down the process dramatically.
The solution is to add an explicit flag to denote the first
element of the chain, and use the "prev" index of that
element to point to the tail element. The "next" field
of the last element is still ~0, and if we touch that
one, we do a linear search to find the first one,
but that is a relatively infrequent operation.
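A simplified sketch of the chaining trick described above (field and type
names are illustrative, not the plugin's): the head element is flagged, its
"prev" holds the index of the tail, and the tail's "next" stays ~0, so
appending no longer needs to walk the list.

    #include <vppinfra/types.h>

    typedef struct
    {
      u32 next;
      u32 prev;                 /* on the head element: index of the tail */
      u8 is_head;
    } chain_elt_t;

    static void
    chain_append (chain_elt_t * elts, u32 head_index, u32 new_index)
    {
      u32 tail_index = elts[head_index].prev;   /* O(1) instead of a walk */
      elts[tail_index].next = new_index;
      elts[new_index].prev = tail_index;
      elts[new_index].next = ~0;
      elts[new_index].is_head = 0;
      elts[head_index].prev = new_index;        /* keep the tail pointer fresh */
    }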
Change-Id: I352a3becd7854cf39aae65f0950afad7d18a70aa
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
(cherry picked from commit 204cf74aed51ca07933df7c606754abb4b26fd82)
|
|
Add a bihash-based ACL lookup mechanism and make it the new default.
This changes the time required to look up a 5-tuple match
from O(total_N_entries) to O(total_N_mask_types), where
a "mask type" is an overall mask on the 5-tuple required
to represent an ACE.
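A rough sketch of the lookup shape this describes (simplified; the real code
in hash_lookup.c also has to pick the best match among the mask types): for
each mask type, AND the packet's 5-tuple with that mask and do one bihash
search, so the cost scales with the number of distinct masks rather than the
number of ACEs.

    #include <vppinfra/bihash_48_8.h>

    static u32
    hash_lookup_sketch (clib_bihash_48_8_t * h,
                        u64 * tuple_words /* 6 x u64 of 5-tuple data */,
                        u64 (*masks)[6], u32 n_mask_types)
    {
      u32 mt;
      for (mt = 0; mt < n_mask_types; mt++)
        {
          clib_bihash_kv_48_8_t kv, result;
          int i;
          for (i = 0; i < 6; i++)
            kv.key[i] = tuple_words[i] & masks[mt][i];
          if (clib_bihash_search_48_8 (h, &kv, &result) == 0)
            return (u32) result.value;  /* info about the matching ACE(s) */
        }
      return ~0;                        /* no mask type produced a match */
    }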
For testing/comparison there is a temporary debug CLI
"set acl-plugin use-hash-acl-matching {0|1}", which,
when set to 0, makes the plugin use the "old" linear lookup,
and when set to 1, makes it use the hash-based lookup.
Based on the discussions on the vpp-dev mailing list,
prevent assigning an ACL index to an interface
when the ACL with that index is not defined,
and prevent deleting an ACL if that ACL is applied.
Also, for easier debugging of the state, there are
new debug CLI commands to see the ACL plugin state at
several layers:
"show acl-plugin acl [index N]" - show a high-level
ACL representation, used for the linear lookup and
as a base for building the hashtable-based lookup.
Also shows if a given ACL is applied somewhere.
"show acl-plugin interface [sw_if_index N]" - show
which interfaces have which ACL(s) applied.
"show acl-plugin tables" - a lower-level debug command
used to see the state of all of the related data structures
at once. There are specifiers possible, which make
for a more focused and maybe augmented output:
"show acl-plugin tables acl [index N]"
show the "bitmask-ready" representations of the ACLs,
as well as the mask types and their associated indices.
"show acl-plugin tables mask"
show the derived mask types and their indices only.
"show acl-plugin tables applied [sw_if_index N]"
show the table of all of the ACEs applied for a given
sw_if_index or all interfaces.
"show acl-plugin tables hash [verbose N]"
show the 48x8 bihash used for the ACL lookup.
Change-Id: I89fff051424cb44bcb189e3cee04c1b8f76efc28
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|