Age | Commit message | Author | Files | Lines |
|
Zero-initialize the temporary struct, else Coverity complains about a bunch of uninitialized fields.
Type: fix
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
Change-Id: I45dc42134f06917a7459d615804f978a175bec0f
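As an illustrative sketch only (the struct and field names below are hypothetical, not the ones touched by the patch), the fix pattern is:
  #include <stdint.h>

  typedef struct
  {
    uint32_t sw_if_index;   /* illustrative fields only */
    uint32_t spi;
    uint8_t flags;
  } example_tmp_t;

  /* '= { 0 }' zero-initializes every member, so no field is read
   * uninitialized when the struct is later copied or compared. */
  example_tmp_t tmp = { 0 };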
|
|
Type: improvement
If an SA protecting an IPv6 tunnel interface has UDP encapsulation
enabled, the code in esp_encrypt_inline() inserts a UDP header but does
not set the next protocol or the UDP payload length, so the peer that
receives the packet drops it. Set the next protocol field and the UDP
payload length correctly.
The port(s) for UDP encapsulation of IPsec were not registered for IPv6.
Add this registration for IPv6 SAs when UDP encapsulation is enabled.
Add punt handling for IPv6 IKE on NAT-T port.
Add registration of linux-cp for the new punt reason.
Add unit tests of IPv6 ESP w/ UDP encapsulation on tun protect
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Change-Id: Ibb28e423ab8c7bcea2c1964782a788a0f4da5268
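A rough sketch of the kind of fix described (not the exact patch; the function and variable names are assumptions, the header layouts follow VPP's ip6_header_t/udp_header_t):
  /* called after the UDP header has been inserted ahead of the ESP data */
  static void
  example_fix_udp_encap (ip6_header_t *ip6, udp_header_t *udp, u16 esp_len)
  {
    /* outer IPv6 header must say the next protocol is UDP, and the UDP
     * length must cover the UDP header plus the ESP payload */
    ip6->protocol = IP_PROTOCOL_UDP;
    udp->length = clib_host_to_net_u16 (sizeof (udp_header_t) + esp_len);
  }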
|
|
Type: improvement
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ica7de5a493389c6f53b7cf04e06939473a63d2b9
|
|
This patch fixes the following Coverity issues:
CID 274739 Out-of-bounds read
CID 274746 Out-of-bounds access
CID 274748 Out-of-bounds read
Type: fix
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I9bb6741f100a9414a5a15278ffa49b31ccd7994f
|
|
With this patch, fast path for IPv6 policy lookup is enabled.
This implementation scales and outperforms the original implementation
when the number of defined flows is higher than 100k.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I9364b5b8db4fc708790d48c538add272c7cea400
|
|
This patch updates the "show ipsec spd" CLI to display
policies maintained by the fast path bihash table.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I58b9f92f3132dc9809b50786dc912e09c4b84d81
|
|
The parser can be configured from the startup.conf file:
fast path can be enabled or disabled.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ifab83ddcb75bc44c8165e7fa87a1a56d047732a1
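A hedged sketch of how such a startup.conf hook typically looks in VPP (the option string and the flag variable are assumptions, not necessarily what this patch adds; the real registration for "ipsec" already exists elsewhere):
  static int fast_path_enabled;   /* hypothetical flag */

  static clib_error_t *
  example_ipsec_config (vlib_main_t *vm, unformat_input_t *input)
  {
    while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
      {
        if (unformat (input, "ipv6-outbound-spd-fast-path on"))
          fast_path_enabled = 1;
        else if (unformat (input, "ipv6-outbound-spd-fast-path off"))
          fast_path_enabled = 0;
        else
          return clib_error_return (0, "unknown input '%U'",
                                    format_unformat_error, input);
      }
    return 0;
  }

  VLIB_CONFIG_FUNCTION (example_ipsec_config, "ipsec");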
|
|
This patch adds SPD fast path policy matching.
Fast path matching has been introduced
for outbound traffic only.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I03d5edf7d7fbc03bf3e6edbe33cb15bc965f9d4e
|
|
This patch introduces the ipsec_output.h file; the matching implementation
is moved there. The reason is to make the matching mechanism
unit-testable: the functions of interest are inline, so their
implementations need to live in the header file as well.
Type: improvement
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Id7c605375d1f3be146abf96ef70d336a5d156444
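To illustrate the point about inline functions (a made-up helper, not the real matching code): a static inline definition must be visible in the header so both the data-plane node and the unit tests can compile against it.
  #include <stdint.h>

  /* ipsec_output.h-style sketch: defined in the header, usable everywhere */
  static inline int
  example_port_in_range (uint16_t port, uint16_t lo, uint16_t hi)
  {
    return port >= lo && port <= hi;
  }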
|
|
This patch introduces functions to add and delete fast path
policies.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I3f1f1323148080c9dac531fbe9fa33bad4efe814
|
|
This patch introduces basic types supporting fast path lookup.
Fast path performs policy matching using a hash lookup
(in particular, bihash has been used for that purpose). Fast path
lookup addresses the situation where a huge number of policies is created
(~100k or more). In such a scenario, adding/removing a policy
and policy matching are inefficient and scale poorly (for example,
adding 500k policies takes a few hours, and lookup time
increases significantly). With fast path, adding and matching up to
1M flows scales linearly (adding 1M policies takes about 150s
on the test machine vs many hours with the original implementation,
and matching time is significantly improved). Fast path will not
deal well with a huge number of policies that span large
IP/port ranges. A large range is masked out almost entirely, leaving
only a few bits for calculating the hash key. Such keys tend to
gather many more policies than other keys, and the hash will match most of
the packets, annihilating the advantages of hashing. Having said that,
we do not consider this a real-life scenario.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I600dae5111a37768ed4b23aa18426e66bbf7b529
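A self-contained sketch of the masked-key idea described above (types and field names are illustrative, not the actual fast path structures):
  #include <stdint.h>

  typedef struct
  {
    uint32_t laddr, raddr;
    uint16_t lport, rport;
    uint8_t proto;
  } tuple_t;

  /* apply the policy's mask to the packet's 5-tuple before hashing; a policy
   * spanning a wide range masks away most bits, so many flows collapse onto
   * one key - exactly the degenerate case the commit message warns about */
  static inline void
  make_masked_key (tuple_t *key, const tuple_t *pkt, const tuple_t *mask)
  {
    key->laddr = pkt->laddr & mask->laddr;
    key->raddr = pkt->raddr & mask->raddr;
    key->lport = pkt->lport & mask->lport;
    key->rport = pkt->rport & mask->rport;
    key->proto = pkt->proto & mask->proto;
  }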
|
|
Currently 0 has been used as the wildcard representing ANY protocol.
However, 0 is a valid IP protocol value (HOPOPT) and therefore
should not be used as a wildcard. Instead, 255 is used, which is
guaranteed by IANA to be reserved and never assigned as a protocol id.
Type: improvement
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I2320bae6fe380cb999dc5a9187beb68fda2d31eb
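A minimal sketch of the new wildcard convention (the macro and function names are assumptions):
  #include <stdint.h>

  /* 255 is IANA-reserved, so it cannot collide with a real protocol number,
   * unlike 0, which is HOPOPT (IPv6 Hop-by-Hop Options) */
  #define EXAMPLE_IPSEC_POLICY_PROTOCOL_ANY 255

  static inline int
  example_proto_match (uint8_t policy_proto, uint8_t pkt_proto)
  {
    return policy_proto == EXAMPLE_IPSEC_POLICY_PROTOCOL_ANY
           || policy_proto == pkt_proto;
  }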
|
|
This fixes a long-standing annoyance that CLIs with optional args cannot
be executed from a file, as they cannot distinguish between valid optional
args and the next line in the file.
Multiline statements can be provided simply by using a backslash before \n.
Comments are also supported - everything after # is ignored up to the
end of the line.
Example:
# multiline cli using backslash
show version \
verbose # end of line comment
packet-generator new { \
name x \
limit 5 \
# comment inside multiline cli \
size 128-128 \
interface local0 \
node null-node \
data { \
incrementing 30 \
} \
}
Type: fix
Change-Id: Ia6d588169bae14e6e3f18effe94820d05ace1dbf
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: feature
Change-Id: I940b6c9d206e407f3e17d66c97233cd658984e61
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Adding flow cache support to improve inbound IPv4/IPSec Security Policy
Database (SPD) lookup performance. By enabling the flow cache in startup
conf, this replaces a linear O(N) SPD search with an O(1) hash table
search.
This patch is the ipsec4_input_node counterpart to
https://gerrit.fd.io/r/c/vpp/+/31694, and shares much of the same code,
theory and mechanism of action.
Details about the flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using a hash table without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, the old entry
will be overwritten by the new entry. Worst case is when all the
256 packets in a batch result in collision, falling back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows,
but is configurable by a startup.conf option.
4. Whenever a SPD rule is added/deleted by the control plane, all
current flow cache entries will be invalidated. As the SPD API is
not mp-safe, the data plane will wait for the control plane
operation to complete.
Cache invalidation is via an epoch counter that is incremented on
policy add/del and stored with each entry in the flow cache. If the
epoch counter in the flow cache does not match the current count,
the entry is considered stale, and we fall back to linear search.
The following configurable options are available through startup
conf under the ipsec{} entry:
1. ipv4-inbound-spd-flow-cache on/off - enable SPD flow cache
(default off)
2. ipv4-inbound-spd-hash-buckets %d - set number of hash buckets
(default 4,194,304: ~1 million flows with 25% load factor)
Performance with 1 core, 1 ESP Tunnel, null-decrypt then bypass,
94B (null encrypted packet) for different SPD policy matching indices:
SPD Policy index : 2 10 100 1000
Throughput : Mbps/Mbps Mbps/Mbps Mbps/Mbps Mbps/Mbps
(Baseline/Optimized)
ARM TX2 : 300/290 230/290 70/290 8.5/290
Type: improvement
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I8be2ad4715accbb335c38cd933904119db75827b
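A simplified sketch of the epoch-based invalidation described above (structure and field names are illustrative, not the actual implementation):
  #include <stdint.h>

  typedef struct
  {
    uint64_t hashed_tuple;   /* hash of the packet's 5-tuple */
    uint32_t policy_index;   /* cached SPD match             */
    uint32_t epoch;          /* epoch counter at insert time */
  } flow_cache_entry_t;

  /* a hit is only trusted if the entry was written in the current epoch;
   * policy add/del bumps the epoch, so stale entries silently fall back
   * to the linear SPD search */
  static inline int
  flow_cache_hit (const flow_cache_entry_t *e, uint64_t hashed_tuple,
                  uint32_t current_epoch)
  {
    return e->hashed_tuple == hashed_tuple && e->epoch == current_epoch;
  }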
|
|
Use of _vec_len() to set vector length breaks address sanitizer.
Users should use vec_set_len(), vec_inc_len(), vec_dec_len () instead.
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
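A small usage sketch (assuming vppinfra's vec.h is available; the old pattern shown in the comment is what this change replaces):
  u32 *v = 0;
  vec_validate (v, 15);  /* vector of 16 elements, all zeroed            */
  vec_set_len (v, 8);    /* shrink to 8 elements; previously written as
                            '_vec_len (v) = 8', which ASAN cannot track  */
  vec_free (v);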
|
|
Type: refactor
Change-Id: I0a40e22e1439e13ffdbcbd6fd7cad40c8178418c
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
As per IPSec RFC4301 [1], any non-matching packets should be dropped by
default. This is handled correctly in ipsec_output.c; however, in
ipsec_input.c non-matching packets are allowed to pass as though a
BYPASS rule had matched.
For full details, see:
https://lists.fd.io/g/vpp-dev/topic/ipsec_input_output_default/84943480
It appears the ipsec6_input_node only matches PROTECT policies. Until
this is extended to handle BYPASS + DISCARD, we may wish to not drop
by default here, since all IPv6 traffic not matching a PROTECT policy
will be dropped.
[1]: https://datatracker.ietf.org/doc/html/rfc4301
Type: fix
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: Iddbfd008dbe082486d1928f6a10ffbd83d859a20
|
|
Originally, when removing a policy entry from the SPD, the macro "vec_del1"
could move the last entry in the vector into the freed slot, leaving the
entry list unsorted.
This patch fixes the issue by using the macro "vec_delete"
instead of "vec_del1".
Type: fix
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Change-Id: I396591cbbe17646e1d243aedb4cdc272ed4d5e25
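To illustrate the difference (using vppinfra's vec macros; 'policies' and 'i' are stand-in names):
  /* vec_del1 fills the hole at index i with the *last* element - O(1),
   * but it destroys the sorted order of the list */
  vec_del1 (policies, i);

  /* vec_delete removes 1 element at index i and shifts the tail down,
   * preserving the existing order - what the sorted SPD entry list needs */
  vec_delete (policies, 1, i);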
|
|
Type: improvement
Ethernet frames on the wire are a minimum of 64 bytes, so use the length in the UDP header to determine whether the ESP payload is one byte of the special SPI, rather than the buffer's size (which will include the Ethernet padding).
In the case of a drop, advance the packet back to the IP header so the ipx-drop node sees a sane packet.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ic3b75487919f0c77507d6f725bd11202bc5afee8
|
|
Type: improvement
When an IPSec interface is first constructed, the end node of the feature arc is not changed, which means it is interface-output.
This means that traffic directed into adjacencies on the link that do not have protection (with an SA) drops like this:
...
00:00:01:111710: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:6 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111829: local0-output
ipsec0
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 5858585858585858585858585858585858585858585858585858585858585858
00000040: 58585858585858585858585858585858585858585858585858585858c2cf08c0
00000060: 2a2c103cd0126bd8b03c4ec20ce2bd02dd77b3e3a4f49664
00:00:01:112017: error-drop
rx:pg1
00:00:01:112034: drop
local0-output: interface is down
Although that's a drop, no packets should go to local0; we want all IPvX packets to go through ipX-drop.
This change sets the interface's end-arc node to the appropriate drop node when the interface is created, and when the last protection is removed.
The resulting drop is:
...
00:00:01:111504: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:0 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111533: ip4-drop
ICMP: 172.16.2.2 -> 1.1.1.1
tos 0x00, ttl 63, length 92, checksum 0xcb8c dscp CS0 ecn NON_ECN
fragment id 0x0001
ICMP echo_request checksum 0xecf4 id 0
00:00:01:111620: error-drop
rx:pg1
00:00:01:111640: drop
null-node: blackholed packets
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I7e7de23c541d9f1210a05e6984a688f1f821a155
|
|
When a message is received, verify that it's sufficiently large to
accommodate any VLAs within the message. To do that, we need a way to
calculate the message size including any VLAs. This patch adds such
functionality to vppapigen and the necessary C code to use it to validate
message size on receipt. Malformed messages are dropped.
Type: improvement
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I2903aa21dee84be6822b064795ba314de46c18f4
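A self-contained sketch of the kind of check the generated code performs (names are illustrative, not the generated API):
  #include <stdint.h>

  /* a message with one variable-length array is only acceptable if the
   * received length covers the fixed part plus 'count' array elements */
  static inline int
  example_msg_size_ok (uint32_t recv_len, uint32_t fixed_size,
                       uint32_t elt_size, uint32_t count)
  {
    uint64_t needed = (uint64_t) fixed_size + (uint64_t) elt_size * count;
    return (uint64_t) recv_len >= needed;   /* if not: drop the message */
  }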
|
|
Type: fix
Fixes: f16e9a5507
If an attempt to submit an async crypto frame fails, the buffers that
were added to the frame are supposed to be dropped. This was not
happening and they are leaking, resulting in buffer exhaustion.
There are two issues:
1. The return value of esp_async_recycle_failed_submit() is used to
figure out how many buffers should be dropped. That function calls
vnet_crypto_async_reset_frame() and then returns f->n_elts. Resetting
the frame sets n_elts to 0. So esp_async_recycle_failed_submit() always
returns 0. It is safe to remove the call to reset the frame because
esp_async_recycle_failed_submit() is called in 2 places and a call to
reset the frame is made immediately afterwards in both cases - so it
is currently unnecessary anyway.
2. An array and an index are passed to esp_async_recycle_failed_submit().
The index should indicate the position in the array where indices of the
buffers contained in the frame should be written. Across multiple calls,
the same index value (n_sync) is passed. This means each call may overwrite
the same entries in the array with the buffer indices in the frame rather
than appending them to the entries which were written earlier. Pass n_noop
as the index instead of n_sync.
Change-Id: I525ab3c466965446f6c116f4c8c5ebb678a66d84
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Refactor and improve boundary checking on IPv6 extension header handling.
Limit parsing of IPv6 extension headers to a maximum of 4 headers and a
depth of 256 bytes.
Type: fix
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: Ide40aaa2b482ceef7e92f02fa0caeadb3b8f7556
Signed-off-by: Ole Troan <ot@cisco.com>
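A simplified, self-contained sketch of a bounded extension-header walk with the limits quoted above (the struct mirrors the generic extension-header layout; the names are assumptions, not the patched code):
  #include <stdint.h>

  #define MAX_EXT_HDRS  4
  #define MAX_EXT_BYTES 256

  typedef struct
  {
    uint8_t next_hdr;
    uint8_t n_data_u64s;   /* length in 8-byte units, excluding first 8 bytes */
  } ext_hdr_t;

  static int
  is_ext_hdr (uint8_t proto)
  {
    /* hop-by-hop (0), routing (43), fragment (44), destination options (60) */
    return proto == 0 || proto == 43 || proto == 44 || proto == 60;
  }

  static int
  count_ext_headers (const uint8_t *payload, uint8_t first_next_hdr)
  {
    int n_hdrs = 0, offset = 0;
    uint8_t next = first_next_hdr;
    while (is_ext_hdr (next) && n_hdrs < MAX_EXT_HDRS && offset < MAX_EXT_BYTES)
      {
        const ext_hdr_t *eh = (const ext_hdr_t *) (payload + offset);
        next = eh->next_hdr;
        offset += (eh->n_data_u64s + 1) * 8;
        n_hdrs++;
      }
    return n_hdrs;
  }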
|
|
Type: fix
Using the adjacency to modify the interface's feature arc doesn't work, since there is potentially more than one adjacency per interface.
Instead have the interface, when it is created, register what the end node of the feature arc is. This end node is then also used as the interface's tx node (i.e. it is used as the adjacency's next-node).
Rename adj-midchain-tx to 'tunnel-output'; that's a bit more intuitive.
There's also a fix in config string handling to:
1- prevent false sharing of strings when the end node of the arc is different.
2- call registered listeners when the end node is changed
For IPSec the consequences are that one cannot provide per-adjacency behaviour using different end-nodes - this was previously done for the no-SA case and for an SA with no protection. These cases are now handled in the esp-encrypt node.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: If3a83d03a3000f28820d9a9cb4101d244803d084
|
|
Type: fix
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I93c819cdd802f0980a981d1fc5561d65b35d3382
|
|
Type: fix
This reverts commit 5ecda99d673298e5bf3c906e9bf6682fdcb57d83.
Change-Id: I393c7d8a6b32aa4f178d6b6dac025038bbf10fe6
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: Ib3fe4f306f23541a01246b74ad0f1a7074fa03bb
|
|
Adding flow cache support to improve outbound IPv4/IPSec SPD lookup
performance. Details about flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using bihash without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, old entry will
be overwritten by the new entry. Worst case is when all the 256
packets in a batch result in collision and fall back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows.
This can be made as a configurable option as a next step.
4. Whenever a SPD rule is added/deleted by the control plane, the
flow cache entries will be completely deleted (reset) in the
control plane. The assumption here is that SPD rule add/del is not
a frequent operation from control plane. Flow cache reset is done,
by putting the data plane in fall back mode, to bypass flow cache
and do linear search till the SPD rule add/delete operation is
complete. Once the rule is successfully added/deleted, the data
plane will be allowed to make use of the flow cache. The flow
cache will be reset only after flushing out the inflight packets
from all the worker cores using
vlib_worker_wait_one_loop().
Details about bihash usage:
1. A new bihash template (16_8) is added to support IPv4 5 tuple.
BIHASH_KVP_PER_PAGE and BIHASH_KVP_AT_BUCKET_LEVEL are set
to 1 in the new template. It means only one KVP is supported
per bucket.
2. Collision handling is avoided by calling
BV (clib_bihash_add_or_overwrite_stale) function.
Through the stale callback function pointer, the KVP entry
will be overwritten during collision.
3. Flow cache reset is done using
BV (clib_bihash_foreach_key_value_pair) function.
Through the callback function pointer, the KVP value is reset
to ~0ULL.
MRR performance numbers with 1 core, 1 ESP Tunnel, null-encrypt,
64B for different SPD policy matching indices:
SPD Policy index : 1 10 100 1000
Throughput : MPPS/MPPS MPPS/MPPS MPPS/MPPS KPPS/MPPS
(Baseline/Optimized)
ARM Neoverse N1 : 5.2/4.84 4.55/4.84 2.11/4.84 329.5/4.84
ARM TX2 : 2.81/2.6 2.51/2.6 1.27/2.6 176.62/2.6
INTEL SKX : 4.93/4.48 4.29/4.46 2.05/4.48 336.79/4.47
Next Steps:
Following can be made as a configurable option through startup
conf at IPSec level:
1. Enable/Disable Flow cache.
2. Bihash configuration like number of buckets and memory size.
3. Dual/Quad loop unroll can be applied around bihash to further
improve the performance.
4. The same flow cache logic can be applied for IPv6 as well as in
IPSec inbound direction. A deeper and wider flow cache using
bihash_40_8 can replace existing bihash_16_8, to make it
common for both IPv4 and IPv6 in both outbound and
inbound directions.
Following changes are made based on the review comments:
1. ON/OFF flow cache through startup conf. Default: OFF
2. Flow cache stale entry detection using epoch counter.
3. Avoid host order endianness conversion during flow cache
lookup.
4. Move IPSec startup conf to a common file.
5. Added SPD flow cache unit test case
6. Replaced bihash with vectors to implement flow cache.
7. ipsec_add_del_policy API is not mpsafe. Cleaned up
inflight packets check in control plane.
Type: improvement
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I62b4d6625fbc6caf292427a5d2046aa5672b2006
|
|
Type: improvement
Change-Id: Iec585880085b12b08594a0640822cd831455d594
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
If logging is on, it will try to print the address nh. Make sure it is
not NULL.
Type: fix
Change-Id: I81c0295865901406d86e0d822a103b4d5adffe47
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: improvement
Change-Id: Iac01d7830b53819ace8f199554be10ab89ecdb97
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Type: feature
Gaps in the sequence numbers received on an SA indicate packets that were lost.
Gaps are identified using the anti-replay window that records the sequences seen.
Publish the number of lost packets in the stats segment at /net/ipsec/sa/lost
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I8af1c09b7b25a705e18bf82e1623b3ce19e5a74d
|
|
The ipsec startup.conf config currently exists in ipsec_tun.c. This is
because currently the only ipsec{...} options are tunnel related.
This patch moves the ipsec config to a common file (ipsec.c) for future
extensibility/addition of non-tunnel related config options.
Type: refactor
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: I1569dd7948334fd2cc28523ccc6791a22dea8d32
|
|
Type: refactor
Change-Id: Id10cbf52e8f2dd809080a228d8fa282308be84ac
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
These files are no longer needed.
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I34f8e0b7e17d9e8c06dcd6c5ffe51aa273cdec07
|
|
Type: docs
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ica576e13953a3c720a7c093af649d1dd380cc2c0
|
|
Type: improvement
There's no need for the user to set the TUNNEL_V6 flag; it can be
derived from the tunnel's address type.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I073073dc970b8a3f2b2645bc697fc00db1adbb47
|
|
Type: fix
Two problems:
1 - Just because anti-replay is not enabled doesn't mean the high sequence
number should not be used.
- fix: there needs to be some means to detect a wrapped packet, so we
use a window size of 2^30.
2 - The SA object was used as a scratch pad for the high-sequence
number used during decryption. That means that once the batch has been
processed the high-sequence number used is lost. This means it is not
possible to distinguish this case:
if (seq < IPSEC_SA_ANTI_REPLAY_WINDOW_LOWER_BOUND (tl))
  {
    ...
    if (post_decrypt)
      {
        if (hi_seq_used == sa->seq_hi)
          /* the high sequence number used to successfully decrypt this
           * packet is the same as the last-sequence number of the SA.
           * that means this packet did not cause a wrap.
           * this packet is thus out of window and should be dropped */
          return 1;
        else
          /* The packet decrypted with a different high sequence number
           * to the SA, that means it is the wrap packet and should be
           * accepted */
          return 0;
      }
- fix: don't use the SA as a scratch pad, use the 'packet_data' - the
same place that is used as the scratch pad for the low sequence number.
other consequences:
- An SA doesn't have seq and last_seq, it has only seq: the sequence
number of the last packet tx'd or rx'd.
- there's 64bits of space available on the SA's first cache line. move
the AES CTR mode IV there.
- test the ESN/AR combinations to catch the bugs this fixes. This
doubles the number of tests, but without AR on they only run for 2
seconds. In the AR tests, the time taken to wait for packets that won't
arrive is dropped from 1 to 0.2 seconds, thus reducing the runtime of
these tests from 10-15 to about 5 seconds.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Iaac78905289a272dc01930d70decd8109cf5e7a5
|
|
In ipsec4_input_node, ipsec_spd_policy_counters are incremented only for matched inbound
PROTECT actions (:273 and :370). BYPASS + DISCARD actions also have
SPD policy counters that should be incremented on match.
This fix increments the counters for inbound BYPASS and DISCARD actions.
Type: fix
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: Iac3c6d344be25ba5326e1ed45115ca299dee5f49
|
|
Type: improvement
The rationale is that the del only requires the SA's ID, so it's a
bit mean to require the client to fill out all the other information as
well.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ibbc20405e74d6a0e1a3797465ead5271f15888e4
|
|
Use autogenerated code.
Does not change API definitions.
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I0db7343e907524af5adb2f4771b45712927d5833
|
|
Length check must also take current_data into account.
Type: fix
Change-Id: I7a1b1752868892d40f59490d05452ef24565cca6
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: test
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Iec69d8624b15766ed65e7d09777819d2242dee17
|
|
Type: fix
If an async crypto frame is allocated during ESP encrypt/decrypt but
a buffer/op is not subsequently added to the frame, the frame leaks. It
is not submitted if the count of async ops is zero nor is it
returned to the frame pool. This happens frequently if >= 2 worker
threads are configured and a vector of buffers all have to be handed
off to other threads.
Wait until it is almost certain that the buffer will be added to the
frame before allocating the frame to make it more unlikely that an
allocated frame will not have any operations added to it.
For encrypt this is sufficient to resolve the leak. For decrypt there
is still a chance that the buffer will fail to be added to the frame, so
remove the counter of async ops and ensure that all frames that were
allocated get either submitted or freed at the end.
Change-Id: I4778c3265359b192d8a88ab9f8c53519d46285a2
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
When both chained and non-chained buffers are processed in the same
vector, make sure the non-chained buffers are processed as non-chained
crypto ops.
Type: fix
Change-Id: I19fc02c25a0d5e2e8a1342e2b88bbae3fe92862f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
The same value is used for other tunnel types.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I6593001918993d65f127cc9f716c95e932239842
|
|
Mechanical change for patch following this one...
Type: improvement
Change-Id: Iee12f3a8851f35569e6c039494a94fc36e83d20f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
Change-Id: I433fe3799975fe3ba00fa30226f6e8dae34e88fc
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: improvement
Some tests, e.g. ipsec, see a performance regression when offload flags
are moved to the 2nd cacheline. This patch moves them back to the 1st cacheline.
Change-Id: I6ead45ff6d2c467b0d248f409e27c2ba31758741
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|