The async frames pool may be resized once drained. This causes two problems: the original pool pointer is invalidated and the pool size changes. Both of these confuse the crypto infra users, i.e. graph nodes (such as IPsec and Wireguard) and crypto engines, if they expect the pool pointer to always remain valid and the pool size to never change (for performance reasons).
This patch introduces a fixed-size async frames pool. This removes any surprise for the components mentioned above and avoids segmentation faults when the pool would otherwise be resized. In addition, a crypto engine may take advantage of the feature to keep its own pool/vector in sync with the crypto infra.
Type: improvement
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I2a71783b90149fa376848b9c4f84ce8c6c034bef
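A minimal sketch of the idea, using hypothetical names (frame_pool_t, frame_pool_init) rather than the actual VPP API: the pool is allocated once at a fixed size, so users and engines can safely cache its base pointer and size.
#include <stdlib.h>

typedef struct { int n_elts; /* ... per-frame state ... */ } frame_t;

typedef struct
{
  frame_t *frames; /* base pointer, never reallocated after init */
  unsigned size;   /* fixed at init time, never changes */
} frame_pool_t;

static int
frame_pool_init (frame_pool_t *fp, unsigned size)
{
  fp->frames = calloc (size, sizeof (frame_t));
  if (!fp->frames)
    return -1;
  fp->size = size; /* consumers may cache fp->frames and fp->size safely */
  return 0;
}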
|
|
This patch makes the crypto dispatch node switch adaptively
between polling and interrupt mode, improving overall VPP
performance.
Type: improvement
Signed-off-by: Xiaoming Jiang <jiangxiaoming@outlook.com>
Change-Id: I845ed1d29ba9f3c507ea95a337f6dca7f8d6e24e
|
|
commit failed
Type: fix
Signed-off-by: Xiaoming Jiang <jiangxiaoming@outlook.com>
Change-Id: Ib4c61906a9cbb3eea1214394d164ecffb38fd36d
|
|
Using pre-shared keys is usually a bad idea; one should use e.g. IKEv2
instead, but one does not always have the choice.
For AES-CBC, the IV must be unpredictable (see NIST SP800-38a Appendix
C) whereas for AES-CTR or AES-GCM, the IV should never be reused with
the same key material (see NIST SP800-38a Appendix B and NIST SP800-38d
section 8).
If one uses pre-shared keys and VPP is restarted, the IV counter
restarts at 0 and the same IVs are generated with the same pre-shared
key material.
To fix those issues we follow the recommendation from NIST SP800-38a
and NIST SP800-38d:
- we use a PRNG (not cryptographically secure) to generate IVs, to
avoid generating the same IV sequence between VPP restarts. The PRNG is
chosen so that there is a low chance of generating the same sequence
- for AES-CBC, the generated IV is encrypted as part of the message.
This makes the (predictable) PRNG-generated IV unpredictable as it is
encrypted with the secret key
- for AES-CTR and GCM, we use the IV as-is, since predictable IVs are fine
Most of the changes in this patch are caused by the need to shoehorn an
additional 2 x u64 of PRNG state into the first cacheline of the SA
object.
Type: improvement
Change-Id: I2af89c21ae4b2c4c33dd21aeffcfb79c13c9d84c
Signed-off-by: Benoît Ganne <bganne@cisco.com>
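A minimal sketch of the approach, assuming a hypothetical xorshift128+-style generator holding 2 x u64 of state in the SA; this is only an illustration, not the exact PRNG used by the patch.
#include <stdint.h>

typedef struct { uint64_t s[2]; } iv_prng_t; /* 2 x u64 of state in the SA */

static uint64_t
iv_prng_next (iv_prng_t *p)
{
  uint64_t x = p->s[0], y = p->s[1];
  p->s[0] = y;
  x ^= x << 23;
  x ^= x >> 17;
  x ^= y ^ (y >> 26);
  p->s[1] = x;
  /* AES-CTR/GCM: use the value as-is; AES-CBC: encrypt it with the SA key
     as part of the message so the (predictable) IV becomes unpredictable */
  return x + y;
}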
|
|
Error counters are added on a per-node basis. In IPsec, it is
useful to also track the errors that occurred per SA.
Type: feature
Change-Id: Iabcdcb439f67ad3c6c202b36ffc44ab39abac1bc
Signed-off-by: Arthur de Kerhor <arthurdekerhor@gmail.com>
|
|
Previously, even if the SA defined traffic selectors, the ESP packet's
src and dst addresses were used for fast path inbound SPD matching.
This patch provides a fix for that issue.
Type: fix
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ibd3ca224b155cc9e0c6aedd0f36aff489b7af5b8
|
|
For AES-CBC, the IV must be unpredictable (see NIST SP800-38a Appendix
C). Chaining IVs, as done by the ipsecmb and native backends for
VNET_CRYPTO_OP_FLAG_INIT_IV, is fully predictable.
Encrypt a counter as part of the message, making the (predictable)
counter-generated IV unpredictable.
Fixes: VPP-2037
Type: fix
Change-Id: If4f192d62bf97dda553e7573331c75efa11822ae
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
Change-Id: I7bd2696541c8b3824837e187de096fdde19b2c44
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
path implementation
In the fast path implementation of SPD policy lookup, the opposite
convention to the original implementation was applied: the local IP range
was interchanged with the remote IP range. This fix addresses that issue.
Type: fix
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I0b6cccc80bf52b34524e98cfd1f1d542008bb7d0
|
|
Useful to update the tunnel parameters and UDP ports (NAT-T) of an SA
without having to rekey. This could be done by deleting and re-adding the
SA, but that would not preserve the anti-replay window if there is one.
Use case: a NAT update/reboot between the two endpoints of the tunnel.
Type: feature
Change-Id: Icf5c0aac218603e8aa9a008ed6f614e4a6db59a0
Signed-off-by: Arthur de Kerhor <arthurdekerhor@gmail.com>
|
|
Type: fix
Fixes: 815c6a4fbcbb636ce3b4dc98446ad205a30670a6
Ticket: VPP-2068
Change-Id: I42d678b0e28ac4d0b524dfc2dbd01bbad020cf24
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
|
|
The fast path SPD was explicitly storing an array of policy id vectors.
This information was redundant, as it is already stored
in the bihash table. This additional array was affecting performance
when adding and removing fast path policies.
The other place that needed refactoring after removing this array was
the CLI command showing fast path policies.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I78d45653f71539e7ba90ff5d2834451f83ead4be
|
|
Type: improvement
Signed-off-by: jiangxiaoming <jiangxiaoming@outlook.com>
Change-Id: I91ba1ff4c1085f4aca60ca111cbbaf14a3b4d761
|
|
the batch
Type: fix
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Icd1e43a5764496784c355c93066273435f16dd35
|
|
Type: improvement
Change-Id: I3fbbda0378b72843ecd39a7e8592dedc9757793a
Signed-off-by: Damjan Marion <dmarion@me.com>
|
|
This patch introduces fast path matching for inbound IPv6 traffic.
The fast path uses bihash tables to find the matching policy.
Adding and removing policies in the fast path is much faster than in the
current implementation. This is still a new feature, and further work
can and needs to be done to improve its performance.
Type: feature
Change-Id: Iaef6638033666ad6eb028ffe0c8a4f4374451753
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
|
|
Type: feature
Signed-off-by: Vladimir Ratnikov <vratnikov@netgate.com>
Change-Id: I4e03f60f34acd7809ddc5a743650bedbb95b2e98
|
|
This patch introduces fast path matching for inbound IPv4 traffic.
The fast path uses bihash tables to find the matching policy. Adding
and removing policies in the fast path is much faster than in the
current implementation. This is still a new feature, and further work
can and needs to be done to improve its performance.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ifbd5bfecc21b76ddf8363f5dc089d77595196675
|
|
zero-initialize the variables
Type: fix
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: I51c3856865eab037f646a0d184e82ecb3b5b3216
|
|
Zero-initialize the temporary struct, else coverity complains about a bunch of uninitialized fields.
Type: fix
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: I45dc42134f06917a7459d615804f978a175bec0f
|
|
Type: improvement
If an SA protecting an IPv6 tunnel interface has UDP encapsulation
enabled, the code in esp_encrypt_inline() inserts a UDP header but does
not set the next protocol or the UDP payload length, so the peer that
receives the packet drops it. Set the next protocol field and the UDP
payload length correctly.
The port(s) for UDP encapsulation of IPsec was not registered for IPv6.
Add this registration for IPv6 SAs when UDP encapsulation is enabled.
Add punt handling for IPv6 IKE on NAT-T port.
Add registration of linux-cp for the new punt reason.
Add unit tests of IPv6 ESP w/ UDP encapsulation on tun protect
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Change-Id: Ibb28e423ab8c7bcea2c1964782a788a0f4da5268
|
|
Type: improvement
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ica7de5a493389c6f53b7cf04e06939473a63d2b9
|
|
This patch fixes the following coverity issues:
CID 274739 Out-of-bounds read
CID 274746 Out-of-bounds access
CID 274748 Out-of-bounds read
Type: fix
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I9bb6741f100a9414a5a15278ffa49b31ccd7994f
|
|
With this patch, the fast path for IPv6 policy lookup is enabled.
This implementation scales well and outperforms the original
implementation when the number of defined flows is higher than 100k.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I9364b5b8db4fc708790d48c538add272c7cea400
|
|
This patch updates the "show ipsec spd" CLI to display
policies maintained by the fast path bihash table.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I58b9f92f3132dc9809b50786dc912e09c4b84d81
|
|
The startup.conf parser is extended so that the fast path can be
enabled and disabled from the startup.conf file.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ifab83ddcb75bc44c8165e7fa87a1a56d047732a1
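A hypothetical startup.conf fragment illustrating the intent; the exact option names here are assumptions, not taken from the patch.
ipsec {
  ipv4-outbound-spd-fast-path on
  ipv6-outbound-spd-fast-path on
}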
|
|
This patch adds SPD fast path policy matching. Fast path
matching has been introduced for outbound traffic only.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I03d5edf7d7fbc03bf3e6edbe33cb15bc965f9d4e
|
|
This patch introduces the ipsec_output.h file and moves the matching
implementation there. The reason is to make the matching mechanism
unit-testable. The functions of interest therefore need to live in that
header, and since they are inline, their implementation needs to be
moved to the header file as well.
Type: improvement
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Id7c605375d1f3be146abf96ef70d336a5d156444
|
|
This patch introduces functions to add and delete fast path
policies.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I3f1f1323148080c9dac531fbe9fa33bad4efe814
|
|
This patch introduces basic types supporting fast path lookup.
The fast path performs policy matching using hash lookups
(specifically, bihash tries are used for that purpose). Fast path
lookup addresses the situation where a huge number of policies is
created (~100k or more). In such a scenario, adding/removing a policy
and policy matching are inefficient and scale poorly (for example,
adding 500k policies takes a few hours, and lookup time increases
significantly). With the fast path, adding and matching up to
1M flows scales linearly (adding 1M policies takes about 150s
on the test machine vs many hours with the original implementation,
and matching time is significantly improved). The fast path will not
deal well with a huge number of policies that span large
ip/port ranges: a large range is masked out almost entirely, leaving
only a few bits for calculating the hash key. Such keys tend to
gather many more policies than other keys, and the hash will match
most of the packets, annihilating the advantages of hashing. Having
said that, we do not think this is a real-life scenario.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I600dae5111a37768ed4b23aa18426e66bbf7b529
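A minimal sketch of the key-masking idea with hypothetical field names (not the actual VPP structures): the packet 5-tuple is ANDed with the policy mask before being used as a hash key, which is why very wide ranges leave only a few significant bits.
#include <stdint.h>
#include <string.h>

typedef struct
{
  uint32_t laddr, raddr;
  uint16_t lport, rport;
  uint8_t proto;
} tuple_t; /* hypothetical 5-tuple layout */

static void
make_fp_key (const tuple_t *pkt, const tuple_t *mask, uint8_t key[16])
{
  tuple_t masked;
  memset (&masked, 0, sizeof (masked));
  masked.laddr = pkt->laddr & mask->laddr; /* a very wide range masks out */
  masked.raddr = pkt->raddr & mask->raddr; /* nearly all address bits     */
  masked.lport = pkt->lport & mask->lport;
  masked.rport = pkt->rport & mask->rport;
  masked.proto = pkt->proto & mask->proto;
  memset (key, 0, 16);
  memcpy (key, &masked, sizeof (masked));
}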
|
|
Currently, 0 has been used as the wildcard representing ANY protocol type.
However, 0 is a valid IP protocol value (HOPOPT) and therefore
it should not be used as a wildcard. Instead, 255 is used, which is
guaranteed by IANA to be reserved and not used as a protocol id.
Type: improvement
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I2320bae6fe380cb999dc5a9187beb68fda2d31eb
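A small sketch of the convention; the macro name is an assumption, not necessarily the one used in the patch.
#include <stdint.h>

#define POLICY_PROTOCOL_ANY 255 /* IANA-reserved, safe to use as a wildcard */

static inline int
policy_proto_match (uint8_t policy_proto, uint8_t pkt_proto)
{
  /* 0 (HOPOPT) is now a real, matchable protocol value */
  return policy_proto == POLICY_PROTOCOL_ANY || policy_proto == pkt_proto;
}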
|
|
This fixes a long-standing annoyance that CLIs with optional args cannot
be executed from a file, as they cannot distinguish between valid optional
args and the next line in the file.
Multiline statements can be provided simply by using a backslash before \n.
Comments are also supported: everything after # is ignored up to the
end of the line.
Example:
# multiline cli using backslash
show version \
verbose # end of line comment
packet-generator new { \
name x \
limit 5 \
# comment inside multiline cli \
size 128-128 \
interface local0 \
node null-node \
data { \
incrementing 30 \
} \
}
Type: fix
Change-Id: Ia6d588169bae14e6e3f18effe94820d05ace1dbf
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: feature
Change-Id: I940b6c9d206e407f3e17d66c97233cd658984e61
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Adding flow cache support to improve inbound IPv4/IPSec Security Policy
Database (SPD) lookup performance. By enabling the flow cache in startup
conf, this replaces a linear O(N) SPD search with an O(1) hash table
search.
This patch is the ipsec4_input_node counterpart to
https://gerrit.fd.io/r/c/vpp/+/31694, and shares much of the same code,
theory and mechanism of action.
Details about the flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using a hash table without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, the old entry
will be overwritten by the new entry. Worst case is when all the
256 packets in a batch result in collision, falling back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows,
but is configurable by a startup.conf option.
4. Whenever a SPD rule is added/deleted by the control plane, all
current flow cache entries will be invalidated. As the SPD API is
not mp-safe, the data plane will wait for the control plane
operation to complete.
Cache invalidation is via an epoch counter that is incremented on
policy add/del and stored with each entry in the flow cache. If the
epoch counter in the flow cache does not match the current count,
the entry is considered stale, and we fall back to linear search.
The following configurable options are available through startup
conf under the ipsec{} entry:
1. ipv4-inbound-spd-flow-cache on/off - enable SPD flow cache
(default off)
2. ipv4-inbound-spd-hash-buckets %d - set number of hash buckets
(default 4,194,304: ~1 million flows with 25% load factor)
Performance with 1 core, 1 ESP Tunnel, null-decrypt then bypass,
94B (null encrypted packet) for different SPD policy matching indices:
SPD Policy index : 2 10 100 1000
Throughput : Mbps/Mbps Mbps/Mbps Mbps/Mbps Mbps/Mbps
(Baseline/Optimized)
ARM TX2 : 300/290 230/290 70/290 8.5/290
Type: improvement
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I8be2ad4715accbb335c38cd933904119db75827b
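A hedged sketch of the epoch-checked lookup described above; the entry layout and the linear-search helper are illustrative, not the actual ipsec4_input code.
#include <stdint.h>

typedef struct
{
  uint64_t hash;         /* hash of the packet 5-tuple */
  uint32_t policy_index;
  uint32_t epoch;        /* epoch counter value at insertion time */
} flow_cache_entry_t;

/* assumed helper standing in for the linear SPD search */
extern uint32_t spd_linear_search (uint64_t flow_hash);

static uint32_t
flow_cache_lookup (flow_cache_entry_t *cache, uint32_t n_buckets,
                   uint32_t current_epoch, uint64_t flow_hash)
{
  flow_cache_entry_t *e = &cache[flow_hash & (n_buckets - 1)];

  if (e->hash == flow_hash && e->epoch == current_epoch)
    return e->policy_index; /* O(1) hit */

  /* miss, collision or stale epoch: fall back to linear search, then
     overwrite the slot (no collision chaining, no aging) */
  uint32_t pi = spd_linear_search (flow_hash);
  e->hash = flow_hash;
  e->policy_index = pi;
  e->epoch = current_epoch;
  return pi;
}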
|
|
Use of _vec_len() to set vector length breaks address sanitizer.
Users should use vec_set_len(), vec_inc_len(), vec_dec_len () instead.
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
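For illustration, a minimal example of the recommended accessors, assuming a vppinfra build environment.
#include <vppinfra/vec.h>

static void
drop_last_element (u32 *v)
{
  /* previously: _vec_len (v) = vec_len (v) - 1;  -- breaks ASan */
  if (vec_len (v) > 0)
    vec_set_len (v, vec_len (v) - 1); /* or: vec_dec_len (v, 1) */
}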
|
|
Type: refactor
Change-Id: I0a40e22e1439e13ffdbcbd6fd7cad40c8178418c
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
As per IPsec RFC 4301 [1], any non-matching packets should be dropped by
default. This is handled correctly in ipsec_output.c; however, in
ipsec_input.c non-matching packets are allowed to pass as if they had
matched a BYPASS rule.
For full details, see:
https://lists.fd.io/g/vpp-dev/topic/ipsec_input_output_default/84943480
It appears the ipsec6_input_node only matches PROTECT policies. Until
this is extended to handle BYPASS + DISCARD, we may wish to not drop
by default here, since all IPv6 traffic not matching a PROTECT policy
will be dropped.
[1]: https://datatracker.ietf.org/doc/html/rfc4301
Type: fix
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: Iddbfd008dbe082486d1928f6a10ffbd83d859a20
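A minimal sketch of the default-drop behaviour applied in ipsec_input.c; names and the lookup helper are illustrative only.
typedef enum { POLICY_PROTECT, POLICY_BYPASS, POLICY_DISCARD } policy_action_t;

/* assumed lookup helper: returns 0 when no policy matches */
extern int spd_match (const void *pkt, policy_action_t *action);

static int
ipsec_input_action (const void *pkt)
{
  policy_action_t action;
  if (!spd_match (pkt, &action))
    return -1; /* RFC 4301: no matching policy -> drop by default */
  return action == POLICY_DISCARD ? -1 : 0; /* BYPASS/PROTECT continue */
}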
|
|
Originally, after removing a policy entry in the SPD, the macro "vec_del1"
could change the location of the last entry in the vector, so that the
entry list was no longer sorted.
This patch fixes the issue by using the macro "vec_delete"
instead of "vec_del1".
Type: fix
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Change-Id: I396591cbbe17646e1d243aedb4cdc272ed4d5e25
|
|
Type: improvement
Ethernet frames on the wire are a minimum of 64 bytes, so use the length in the UDP header to determine whether the ESP payload is one byte of the special SPI, rather than the buffer's size (which will include the ethernet header's padding).
In the case of a drop, advance the packet back to the IP header so the ipx-drop node sees a sane packet.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ic3b75487919f0c77507d6f725bd11202bc5afee8
|
|
Type: improvement
When an IPSec interface is first constructed, the end node of the feature arc is not changed, which means it is interface-output.
This means that traffic directed into adjacencies on the link that do not have protection (with an SA) drops like this:
...
00:00:01:111710: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:6 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111829: local0-output
ipsec0
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 5858585858585858585858585858585858585858585858585858585858585858
00000040: 58585858585858585858585858585858585858585858585858585858c2cf08c0
00000060: 2a2c103cd0126bd8b03c4ec20ce2bd02dd77b3e3a4f49664
00:00:01:112017: error-drop
rx:pg1
00:00:01:112034: drop
local0-output: interface is down
Although that's a drop, no packets should go to local0, and we want all IPvX packets to go through ipX-drop.
This change sets the interface's end-arc node to the appropriate drop node when the interface is created, and when the last protection is removed.
The resulting drop is:
...
00:00:01:111504: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:0 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111533: ip4-drop
ICMP: 172.16.2.2 -> 1.1.1.1
tos 0x00, ttl 63, length 92, checksum 0xcb8c dscp CS0 ecn NON_ECN
fragment id 0x0001
ICMP echo_request checksum 0xecf4 id 0
00:00:01:111620: error-drop
rx:pg1
00:00:01:111640: drop
null-node: blackholed packets
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I7e7de23c541d9f1210a05e6984a688f1f821a155
|
|
When a message is received, verify that it is sufficiently large to
accommodate any VLAs within the message. To do that, we need a way to
calculate the message size including any VLAs. This patch adds such
functionality to vppapigen and the necessary C code to use it to validate
the message size on receipt. Malformed messages are dropped.
Type: improvement
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I2903aa21dee84be6822b064795ba314de46c18f4
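A hedged sketch of the check described above, using an illustrative message layout rather than generated vppapigen code.
#include <stdint.h>
#include <arpa/inet.h>

typedef struct
{
  uint16_t _vl_msg_id;
  uint32_t client_index;
  uint32_t count;  /* number of VLA elements, network byte order */
  uint8_t data[0]; /* VLA of 'count' bytes */
} my_msg_t;

static int
msg_size_ok (const void *msg, uint32_t rx_len)
{
  const my_msg_t *m = msg;
  if (rx_len < sizeof (my_msg_t))
    return 0; /* fixed part truncated: drop */
  uint64_t need = sizeof (my_msg_t) + (uint64_t) ntohl (m->count);
  return rx_len >= need; /* drop when the declared VLA overruns the buffer */
}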
|
|
Type: fix
Fixes: f16e9a5507
If an attempt to submit an async crypto frame fails, the buffers that
were added to the frame are supposed to be dropped. This was not
happening and they are leaking, resulting in buffer exhaustion.
There are two issues:
1. The return value of esp_async_recycle_failed_submit() is used to
figure out how many buffers should be dropped. That function calls
vnet_crypto_async_reset_frame() and then returns f->n_elts. Resetting
the frame sets n_elts to 0. So esp_async_recycle_failed_submit() always
returns 0. It is safe to remove the call to reset the frame because
esp_async_recycle_failed_submit() is called in 2 places and a call to
reset the frame is made immediately afterwards in both cases - so it
is currently unnecessary anyway.
2. An array and an index are passed to esp_async_recycle_failed_submit().
The index should indicate the position in the array where indices of the
buffers contained in the frame should be written. Across multiple calls,
the same index value (n_sync) is passed. This means each call may overwrite
the same entries in the array with the buffer indices in the frame rather
than appending them to the entries which were written earlier. Pass n_noop
as the index instead of n_sync.
Change-Id: I525ab3c466965446f6c116f4c8c5ebb678a66d84
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Refactor and improve boundary checking on IPv6 extension header handling.
Limit parsing of IPv6 extension headers to a maximum of 4 headers and a
depth of 256 bytes.
Type: fix
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: Ide40aaa2b482ceef7e92f02fa0caeadb3b8f7556
Signed-off-by: Ole Troan <ot@cisco.com>
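A hedged sketch of a bounded extension-header walk with the limits mentioned above (4 headers, 256 bytes); illustrative only, not the actual ip6 parser.
#include <stdint.h>

#define MAX_EXT_HDRS  4
#define MAX_EXT_BYTES 256

/* returns the upper-layer protocol in *ulp, or -1 on malformed input */
static int
ip6_find_ulp (const uint8_t *ext, uint32_t len, uint8_t first_nh, uint8_t *ulp)
{
  uint32_t off = 0;
  uint8_t nh = first_nh;

  for (int i = 0; i < MAX_EXT_HDRS; i++)
    {
      if (nh != 0 && nh != 43 && nh != 60) /* hop-by-hop, routing, dest-opts */
        {
          *ulp = nh;
          return 0;
        }
      if (off + 2 > len || off + 2 > MAX_EXT_BYTES)
        return -1; /* extension header does not fit */
      nh = ext[off];
      off += ((uint32_t) ext[off + 1] + 1) * 8;
      if (off > len || off > MAX_EXT_BYTES)
        return -1; /* chain goes too deep */
    }
  return -1; /* too many extension headers */
}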
|
|
Type: fix
Using the adjacency to modify the interface's feature arc doesn't work, since there is potentially more than one adj per interface.
Instead have the interface, when it is created, register what the end node of the feature arc is. This end node is then also used as the interface's tx node (i.e. it is used as the adjacency's next-node).
Rename adj-midchain-tx to 'tunnel-output'; that's a bit more intuitive.
There's also a fix in config string handling to:
1- prevent false sharing of strings when the end node of the arc is different.
2- call registered listeners when the end node is changed.
For IPsec the consequence is that one cannot provide per-adjacency behaviour using different end nodes; this was previously done for the no-SA case and for an SA with no protection. These cases are now handled in the esp-encrypt node.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: If3a83d03a3000f28820d9a9cb4101d244803d084
|
|
Type: fix
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I93c819cdd802f0980a981d1fc5561d65b35d3382
|
|
Type: fix
This reverts commit 5ecda99d673298e5bf3c906e9bf6682fdcb57d83.
Change-Id: I393c7d8a6b32aa4f178d6b6dac025038bbf10fe6
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: Ib3fe4f306f23541a01246b74ad0f1a7074fa03bb
|
|
Adding flow cache support to improve outbound IPv4/IPSec SPD lookup
performance. Details about flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using bihash without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, the old entry will
be overwritten by the new entry. Worst case is when all the 256
packets in a batch result in collision and fall back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows.
This can be made as a configurable option as a next step.
4. Whenever a SPD rule is added/deleted by the control plane, the
flow cache entries will be completely deleted (reset) in the
control plane. The assumption here is that SPD rule add/del is not
a frequent operation from control plane. Flow cache reset is done,
by putting the data plane in fall back mode, to bypass flow cache
and do linear search till the SPD rule add/delete operation is
complete. Once the rule is successfully added/deleted, the data
plane will be allowed to make use of the flow cache. The flow
cache will be reset only after flushing out the inflight packets
from all the worker cores using
vlib_worker_wait_one_loop().
Details about bihash usage:
1. A new bihash template (16_8) is added to support IPv4 5 tuple.
BIHASH_KVP_PER_PAGE and BIHASH_KVP_AT_BUCKET_LEVEL are set
to 1 in the new template. It means only one KVP is supported
per bucket.
2. Collision handling is avoided by calling
BV (clib_bihash_add_or_overwrite_stale) function.
Through the stale callback function pointer, the KVP entry
will be overwritten during collision.
3. Flow cache reset is done using
BV (clib_bihash_foreach_key_value_pair) function.
Through the callback function pointer, the KVP value is reset
to ~0ULL.
MRR performance numbers with 1 core, 1 ESP Tunnel, null-encrypt,
64B for different SPD policy matching indices:
SPD Policy index : 1 10 100 1000
Throughput : MPPS/MPPS MPPS/MPPS MPPS/MPPS KPPS/MPPS
(Baseline/Optimized)
ARM Neoverse N1 : 5.2/4.84 4.55/4.84 2.11/4.84 329.5/4.84
ARM TX2 : 2.81/2.6 2.51/2.6 1.27/2.6 176.62/2.6
INTEL SKX : 4.93/4.48 4.29/4.46 2.05/4.48 336.79/4.47
Next Steps:
Following can be made as a configurable option through startup
conf at IPSec level:
1. Enable/Disable Flow cache.
2. Bihash configuration like number of buckets and memory size.
3. Dual/Quad loop unroll can be applied around bihash to further
improve the performance.
4. The same flow cache logic can be applied for IPv6 as well as in
IPSec inbound direction. A deeper and wider flow cache using
bihash_40_8 can replace existing bihash_16_8, to make it
common for both IPv4 and IPv6 in both outbound and
inbound directions.
Following changes are made based on the review comments:
1. ON/OFF flow cache through startup conf. Default: OFF
2. Flow cache stale entry detection using epoch counter.
3. Avoid host order endianness conversion during flow cache
lookup.
4. Move IPSec startup conf to a common file.
5. Added SPD flow cache unit test case
6. Replaced bihash with vectors to implement flow cache.
7. ipsec_add_del_policy API is not mpsafe. Cleaned up
inflight packets check in control plane.
Type: improvement
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I62b4d6625fbc6caf292427a5d2046aa5672b2006
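For reference, a hedged sketch of the bihash_16_8 usage the design notes above describe (note that review change 6 replaced bihash with vectors in the final revision); the BV()/BVT() calls follow the standard bihash template API.
#include <vppinfra/bihash_16_8.h>
#include <vppinfra/bihash_template.c> /* instantiate the 16_8 template here */

static int
always_stale (BVT (clib_bihash_kv) * kv, void *ctx)
{
  return 1; /* any colliding entry may be overwritten */
}

static void
flow_cache_add (BVT (clib_bihash) * h, u64 key0, u64 key1, u32 policy_index)
{
  BVT (clib_bihash_kv) kv;
  kv.key[0] = key0; /* masked IPv4 5-tuple packed into 16 bytes */
  kv.key[1] = key1;
  kv.value = policy_index;
  BV (clib_bihash_add_or_overwrite_stale) (h, &kv, always_stale, 0);
}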
|
|
Type: improvement
Change-Id: Iec585880085b12b08594a0640822cd831455d594
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
If logging is on, it will try to print the address nh. Make sure it is
not NULL.
Type: fix
Change-Id: I81c0295865901406d86e0d822a103b4d5adffe47
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|