|
This patch introduces fast path matching for IPv4 inbound traffic.
The fast path uses bihash tables to find a matching policy. Adding
and removing policies in the fast path is much faster than in the
current implementation. It is still a new feature, and further work
can and needs to be done to improve performance.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: Ifbd5bfecc21b76ddf8363f5dc089d77595196675
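As a rough illustration of the fast path idea (not the code from this
patch; all names below are hypothetical): the packet's 5-tuple is packed
into a bihash-style key, the table is consulted first, and the existing
linear SPD walk is used only on a miss. The real fast path also has to
handle policy ranges/masks, which this sketch ignores.

    #include <stdint.h>

    /* Illustrative 16-byte key/value pair, in the spirit of a
     * bihash_16_8 entry: two 64-bit key words and a 64-bit value. */
    typedef struct
    {
      uint64_t key[2]; /* packed IPv4 5-tuple */
      uint64_t value;  /* index of the matching policy */
    } fp_kv_t;

    static inline void
    fp_mk_key (fp_kv_t *kv, uint32_t saddr, uint32_t daddr,
               uint16_t sport, uint16_t dport, uint8_t proto)
    {
      kv->key[0] = ((uint64_t) saddr << 32) | daddr;
      kv->key[1] = ((uint64_t) sport << 48) | ((uint64_t) dport << 32) | proto;
    }

    /* Fast path first, linear SPD search only on a miss. */
    static uint32_t
    fp_policy_lookup (void *table, fp_kv_t *kv,
                      int (*table_search) (void *, fp_kv_t *),
                      uint32_t (*linear_search) (const fp_kv_t *))
    {
      if (table_search (table, kv) == 0)
        return (uint32_t) kv->value;
      return linear_search (kv);
    }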
|
|
Type: improvement
If an SA protecting an IPv6 tunnel interface has UDP encapsulation
enabled, the code in esp_encrypt_inline() inserts a UDP header but does
not set the next protocol or the UDP payload length, so the peer that
receives the packet drops it. Set the next protocol field and the UDP
payload length correctly.
The port(s) for UDP encapsulation of IPsec was not registered for IPv6.
Add this registration for IPv6 SAs when UDP encapsulation is enabled.
Add punt handling for IPv6 IKE on NAT-T port.
Add registration of linux-cp for the new punt reason.
Add unit tests of IPv6 ESP w/ UDP encapsulation on tun protect
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Change-Id: Ibb28e423ab8c7bcea2c1964782a788a0f4da5268
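A minimal sketch of the two fields the encrypt fix has to set once the
UDP header has been inserted ahead of ESP; the struct layouts below are
illustrative stand-ins, not VPP's ip6_header_t/udp_header_t:

    #include <stdint.h>
    #include <arpa/inet.h> /* htons */

    typedef struct
    {
      uint32_t ver_tc_flow;
      uint16_t payload_length; /* network order */
      uint8_t next_header;
      uint8_t hop_limit;
      uint8_t src[16], dst[16];
    } ip6_hdr_t;

    typedef struct
    {
      uint16_t src_port, dst_port;
      uint16_t length; /* network order: UDP header + payload */
      uint16_t checksum;
    } udp_hdr_t;

    /* esp_len: ESP header + ciphertext + ICV, in bytes. */
    static void
    fixup_ipv6_udp_encap (ip6_hdr_t *ip6, udp_hdr_t *udp, uint16_t esp_len)
    {
      /* next protocol: UDP (17), so the peer parses the UDP header */
      ip6->next_header = 17;
      /* UDP payload length: UDP header plus the ESP payload */
      udp->length = htons ((uint16_t) (sizeof (udp_hdr_t) + esp_len));
    }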
|
|
With this patch, the fast path for IPv6 policy lookup is enabled.
This implementation scales and outperforms the original implementation
when the number of defined flows is higher than 100k.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I9364b5b8db4fc708790d48c538add272c7cea400
|
|
This patch introduces functions to add and delete fast path
policies.
Type: feature
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Change-Id: I3f1f1323148080c9dac531fbe9fa33bad4efe814
|
|
Type: feature
Change-Id: I940b6c9d206e407f3e17d66c97233cd658984e61
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Adding flow cache support to improve inbound IPv4/IPSec Security Policy
Database (SPD) lookup performance. By enabling the flow cache in startup
conf, this replaces a linear O(N) SPD search with an O(1) hash table
search.
This patch is the ipsec4_input_node counterpart to
https://gerrit.fd.io/r/c/vpp/+/31694, and shares much of the same code,
theory and mechanism of action.
Details about the flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using a hash table without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, the old entry
will be overwritten by the new entry. Worst case is when all the
256 packets in a batch result in collision, falling back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows,
but is configurable by a startup.conf option.
4. Whenever an SPD rule is added/deleted by the control plane, all
current flow cache entries will be invalidated. As the SPD API is
not mp-safe, the data plane will wait for the control plane
operation to complete.
Cache invalidation is via an epoch counter that is incremented on
policy add/del and stored with each entry in the flow cache. If the
epoch counter in the flow cache does not match the current count,
the entry is considered stale, and we fall back to linear search.
The following configurable options are available through startup
conf under the ipsec{} entry:
1. ipv4-inbound-spd-flow-cache on/off - enable SPD flow cache
(default off)
2. ipv4-inbound-spd-hash-buckets %d - set number of hash buckets
(default 4,194,304: ~1 million flows with 25% load factor)
Performance with 1 core, 1 ESP Tunnel, null-decrypt then bypass,
94B (null encrypted packet) for different SPD policy matching indices:
SPD Policy index :         2         10        100       1000
Throughput       : Mbps/Mbps  Mbps/Mbps  Mbps/Mbps  Mbps/Mbps
                   (Baseline/Optimized)
ARM TX2          :   300/290    230/290     70/290    8.5/290
Type: improvement
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I8be2ad4715accbb335c38cd933904119db75827b
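A minimal sketch of the epoch-counter scheme described above, assuming a
direct-mapped cache with no collision handling; the names are
illustrative, not the identifiers used in the patch:

    #include <stdint.h>

    typedef struct
    {
      uint64_t key[2];      /* packed IPv4 5-tuple */
      uint32_t policy_index;
      uint32_t epoch;       /* epoch at the time the entry was written */
    } flow_entry_t;

    typedef struct
    {
      flow_entry_t *entries; /* power-of-two sized, no collision chains */
      uint32_t mask;
      uint32_t epoch;        /* bumped on every policy add/del */
    } flow_cache_t;

    /* Control plane: invalidate all entries in O(1) by bumping the epoch. */
    static inline void
    flow_cache_invalidate (flow_cache_t *fc)
    {
      fc->epoch++;
    }

    /* Data plane: a hit requires key AND epoch to match; otherwise the
     * caller falls back to the linear SPD search and refills the slot. */
    static inline int
    flow_cache_lookup (flow_cache_t *fc, uint64_t k0, uint64_t k1,
                       uint32_t hash, uint32_t *policy_index)
    {
      flow_entry_t *e = &fc->entries[hash & fc->mask];
      if (e->key[0] == k0 && e->key[1] == k1 && e->epoch == fc->epoch)
        {
          *policy_index = e->policy_index;
          return 1; /* fast hit */
        }
      return 0; /* miss or stale entry */
    }

With the options listed above, the cache would be enabled via the ipsec{}
stanza in startup conf, e.g. ipsec { ipv4-inbound-spd-flow-cache on }.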
|
|
Type: fix
Using the adjacency to modify the interface's feature arc doesn't work,
since there is potentially more than one adj per interface.
Instead have the interface, when it is created, register what the end
node of the feature arc is. This end node is then also used as the
interface's tx node (i.e. it is used as the adjacency's next node).
Rename adj-midchain-tx to 'tunnel-output'; that's a bit more intuitive.
There's also a fix in config string handling to:
1- prevent false sharing of strings when the end node of the arc is
different.
2- call registered listeners when the end node is changed.
For IPSec the consequence is that one cannot provide per-adjacency
behaviour using different end nodes - this was previously done for the
no-SA case and for an SA with no protection. These cases are now handled
in the esp-encrypt node.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: If3a83d03a3000f28820d9a9cb4101d244803d084
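A rough sketch (generic C, not VPP's graph or adjacency API) of the
registration idea: the interface records the arc's end node once at
creation time, and adjacencies use that same node as their next node:

    #include <stdint.h>

    #define N_INTERFACES 1024

    /* Hypothetical per-interface registry of the feature arc's end node. */
    static uint32_t if_arc_end_node[N_INTERFACES];

    /* Called once when the interface is created, e.g. registering the
     * node now named 'tunnel-output'. */
    void
    interface_register_arc_end (uint32_t sw_if_index, uint32_t end_node)
    {
      if_arc_end_node[sw_if_index] = end_node;
    }

    /* Any adjacency on this interface uses the registered end node as its
     * next node, so per-adjacency end nodes are no longer needed. */
    uint32_t
    adjacency_next_node (uint32_t sw_if_index)
    {
      return if_arc_end_node[sw_if_index];
    }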
|
|
Adding flow cache support to improve outbound IPv4/IPSec SPD lookup
performance. Details about flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using bihash without collision
handling. This will avoid the logic to age out or recycle the old
flows in flow cache. Whenever a collision occurs, old entry will
be overwritten by the new entry. Worst case is when all the 256
packets in a batch result in collision and fall back to linear
search. Average and best case will be O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows.
This can be made as a configurable option as a next step.
4. Whenever an SPD rule is added/deleted by the control plane, the
flow cache entries will be completely deleted (reset) in the
control plane. The assumption here is that SPD rule add/del is not
a frequent operation from the control plane. Flow cache reset is done
by putting the data plane in fallback mode, bypassing the flow cache
and doing a linear search until the SPD rule add/delete operation is
complete. Once the rule is successfully added/deleted, the data
plane will be allowed to make use of the flow cache. The flow
cache will be reset only after flushing out the inflight packets
from all the worker cores using
vlib_worker_wait_one_loop().
Details about bihash usage:
1. A new bihash template (16_8) is added to support IPv4 5 tuple.
BIHASH_KVP_PER_PAGE and BIHASH_KVP_AT_BUCKET_LEVEL are set
to 1 in the new template. It means only one KVP is supported
per bucket.
2. Collision handling is avoided by calling
BV (clib_bihash_add_or_overwrite_stale) function.
Through the stale callback function pointer, the KVP entry
will be overwritten during collision.
3. Flow cache reset is done using
BV (clib_bihash_foreach_key_value_pair) function.
Through the callback function pointer, the KVP value is reset
to ~0ULL.
MRR performance numbers with 1 core, 1 ESP Tunnel, null-encrypt,
64B for different SPD policy matching indices:
SPD Policy index :         1        10       100        1000
Throughput       : MPPS/MPPS MPPS/MPPS MPPS/MPPS   KPPS/MPPS
                   (Baseline/Optimized)
ARM Neoverse N1  :  5.2/4.84 4.55/4.84 2.11/4.84  329.5/4.84
ARM TX2          :  2.81/2.6  2.51/2.6  1.27/2.6  176.62/2.6
INTEL SKX        : 4.93/4.48 4.29/4.46 2.05/4.48  336.79/4.47
Next Steps:
The following can be made configurable options through startup
conf at the IPSec level:
1. Enable/Disable Flow cache.
2. Bihash configuration like number of buckets and memory size.
3. Dual/Quad loop unroll can be applied around bihash to further
improve the performance.
4. The same flow cache logic can be applied for IPv6 as well as in
IPSec inbound direction. A deeper and wider flow cache using
bihash_40_8 can replace existing bihash_16_8, to make it
common for both IPv4 and IPv6 in both outbound and
inbound directions.
The following changes were made based on the review comments:
1. ON/OFF flow cache through startup conf. Default: OFF
2. Flow cache stale entry detection using epoch counter.
3. Avoid host order endianness conversion during flow cache
lookup.
4. Move IPSec startup conf to a common file.
5. Added SPD flow cache unit test case
6. Replaced bihash with vectors to implement flow cache.
7. ipsec_add_del_policy API is not mpsafe. Cleaned up
inflight packets check in control plane.
Type: improvement
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I62b4d6625fbc6caf292427a5d2046aa5672b2006
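A simplified plain-C stand-in for the behaviour described above (the
real code goes through the bihash_16_8 template and its stale callback):
with one KVP per bucket a colliding insert simply overwrites the
resident entry, a reset marks every value empty, and a lookup miss falls
back to the linear SPD walk:

    #include <stdint.h>

    typedef struct
    {
      uint64_t key[2]; /* IPv4 5-tuple */
      uint64_t value;  /* policy index, ~0ULL when empty */
    } kv_t;

    typedef struct
    {
      kv_t *buckets;  /* one KVP per bucket, as configured in the patch */
      uint32_t mask;  /* bucket count is a power of two */
    } flow_cache_t;

    /* Insert after a linear-search miss.  This mirrors the bihash
     * 'add_or_overwrite_stale' pattern; the flow cache's stale callback
     * always allows the overwrite, so a collision evicts the old flow. */
    static void
    cache_add_or_overwrite (flow_cache_t *c, const kv_t *add, uint32_t hash)
    {
      c->buckets[hash & c->mask] = *add;
    }

    /* Reset on SPD add/del: every value becomes the 'empty' sentinel,
     * the equivalent of the foreach_key_value_pair reset to ~0ULL. */
    static void
    cache_reset (flow_cache_t *c)
    {
      for (uint32_t i = 0; i <= c->mask; i++)
        c->buckets[i].value = ~0ULL;
    }

    /* Lookup: an empty or mismatched bucket means fall back to the O(N)
     * SPD walk; worst case a whole 256-packet vector misses. */
    static int
    cache_lookup (const flow_cache_t *c, const kv_t *search, uint32_t hash,
                  uint64_t *value)
    {
      const kv_t *b = &c->buckets[hash & c->mask];
      if (b->value != ~0ULL && b->key[0] == search->key[0]
          && b->key[1] == search->key[1])
        {
          *value = b->value;
          return 1;
        }
      return 0;
    }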
|
|
Use autogenerated code.
Does not change API definitions.
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I0db7343e907524af5adb2f4771b45712927d5833
|
|
Type: improvement
In the current scheme, an async frame is submitted each time the crypto
op changes. This happens each time a different SA is used, and thus
potentially many times per node. This can lead to the submission of many
partially filled frames.
Change the scheme to construct as many full frames as possible in the
node and submit them all at the end. The frame ownership is passed to
the user so that there can be more than one open frame per op at any
given time.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ic2305581d7b5aa26133f52115e0cd28ba956ed55
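A hedged sketch of the batching change in generic terms (placeholder
types and a stubbed submit, not the VPP async crypto API): keep one open
frame per crypto op, fill frames as packets are processed, and submit
everything once at the end of the node dispatch:

    #include <stdint.h>
    #include <stdio.h>

    #define N_OPS 16
    #define FRAME_SIZE 64

    typedef struct
    {
      uint32_t buffer_indices[FRAME_SIZE];
      uint32_t n_elts;
    } crypto_frame_t;

    /* One open frame per crypto op; the node owns it until submission. */
    static crypto_frame_t open_frame[N_OPS];

    /* Stub standing in for the real async submit call. */
    static void
    frame_submit (crypto_frame_t *f, uint32_t op)
    {
      printf ("submit op %u, %u elements\n", op, f->n_elts);
      f->n_elts = 0;
    }

    /* Per packet: append to the op's frame; submit only when full. */
    static void
    enqueue_packet (uint32_t op, uint32_t buffer_index)
    {
      crypto_frame_t *f = &open_frame[op];
      f->buffer_indices[f->n_elts++] = buffer_index;
      if (f->n_elts == FRAME_SIZE)
        frame_submit (f, op);
    }

    /* End of the node: partially filled frames are submitted exactly
     * once, rather than every time the op changed mid-vector. */
    static void
    flush_open_frames (void)
    {
      for (uint32_t op = 0; op < N_OPS; op++)
        if (open_frame[op].n_elts)
          frame_submit (&open_frame[op], op);
    }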
|
|
Type: refactor
This allows the ipsec_sa_get function to be moved from ipsec.h to
ipsec_sa.h, where it belongs.
Also use ipsec_sa_get throughout the code base.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I2dce726c4f7052b5507dd8dcfead0ed5604357df
|
|
Type: refactor
- remove the extern declaration of the nodes; keep their use confined
to the files that declare them
- remove duplicate declaration of ipsec_set_async_mode
- remove unused ipsec_add_feature
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I6ce7bb4517b508a8f02b11f3bc819e1c5d539c02
|
|
Type: feature
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I89dc3815eabfee135cd5b3c910dea5e2e2ef1333
|
|
Type: improvement
This saves the cache miss on the protect structure.
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I867d5e49df5edfd6b368f17a34747f32840080e4
|
|
Type: improvement
Change-Id: I0c82722dfce990345fe6eeecdb335678543367e0
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Not all ESP crypto algorithms require padding/alignment to be the same
as the AES block/IV size. CCM, CTR and GCM all have no padding/alignment
requirements, and the RFCs indicate that no padding (beyond ESP's
4-octet alignment requirement) should be used unless TFC (traffic flow
confidentiality) has been requested.
CTR: https://tools.ietf.org/html/rfc3686#section-3.2
GCM: https://tools.ietf.org/html/rfc4106#section-3.2
CCM: https://tools.ietf.org/html/rfc4309#section-3.2
- VPP is incorrectly using the IV/AES block size to pad CTR and GCM.
These modes do not require padding (beyond ESP's 4-octet requirement);
as a result packets will have unnecessary padding, which wastes
bandwidth at least and, at worst, may fail in network configurations
that have finely tuned MTUs.
Fix this as well as changing the field names from ".*block_size" to
".*block_align" to better represent their actual (and only) use. Rename
"block_sz" in esp_encrypt to "esp_align" and set it correctly as well.
test: ipsec: Add unit-test to test for RFC correct padding/alignment
test: patch scapy to not incorrectly pad ccm, ctr, gcm modes as well
- Scapy is also incorrectly using the AES block size of 16 to pad CCM,
CTR, and GCM cipher modes. A bug report has been opened with, and
acknowledged by, the upstream scapy project:
https://github.com/secdev/scapy/issues/2322
Ticket: VPP-1928
Type: fix
Signed-off-by: Christian Hopps <chopps@labn.net>
Change-Id: Iaa4d6a325a2e99fdcb2c375a3395bcfe7947770e
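To make the block-size vs block-align distinction concrete, here is a
hedged sketch of the trailer padding computation: CBC-style ciphers
still align to the cipher block size, while CTR, GCM and CCM need only
ESP's 4-octet alignment of the payload plus the 2-byte
pad-length/next-header trailer:

    #include <stdint.h>

    /* Padding bytes so that (payload + pad + 2-byte ESP trailer) is a
     * multiple of esp_align.  esp_align is the cipher block size for CBC
     * (e.g. 16 for AES-CBC) but just 4 for CTR, GCM and CCM. */
    static inline uint16_t
    esp_pad_bytes (uint16_t payload_len, uint16_t esp_align)
    {
      uint16_t total = payload_len + 2; /* + pad_length + next_header */
      return (uint16_t) ((esp_align - (total % esp_align)) % esp_align);
    }

    /* Example, 31-byte payload:
     *   AES-CBC (align 16): 31 + 2 = 33 -> 15 bytes of padding (total 48)
     *   AES-GCM (align 4) : 31 + 2 = 33 ->  3 bytes of padding (total 36) */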
|
|
Type: feature
Thus allowing NAT traversal.
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ie8650ceeb5074f98c68d2d90f6adc2f18afeba08
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: improvement
- inline some common encap fixup functions into the midchain
rewrite node so we don't incur the cost of the virtual function call
- change the copy 'guess' from ethernet_header (which will never happen) to an ip4 header
- add adj-midchain-tx to multiarch sources
- don't run adj-midchain-tx as a feature, instead put this node as the
adj's next and at the end of the feature arc.
- cache the feature arc config index (to save the cache miss going to fetch it)
- don't check if features are enabled when taking the arc (since we know they are)
The last two changes will also benefit normal adjacencies taking the
arc (i.e. for NAT, ACLs, etc.).
For IPSec:
- don't run esp_encrypt as a feature; instead, when required, insert this
node into the adj's next and into the end of the feature arc. This
implies that encrypt is always 'the last feature' run, which is
symmetric with decrypt always being the first.
- esp_encrypt for tunnels has adj-midchain-tx as its next node
Change-Id: Ida0af56a704302cf2d7797ded5f118a781e8acb7
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: feature
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Dariusz Kazimierski <dariuszx.kazimierski@intel.com>
Signed-off-by: Piotr Kleski <piotrx.kleski@intel.com>
Change-Id: I4c3fcccf55c36842b7b48aed260fef2802b5c54b
|
|
Type: feature
Change-Id: Ie072a7c2bbb1e4a77f7001754f01897efd30fc53
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Type: fix
Change-Id: Iff9b1960b122f7d326efc37770b4ae3e81eb3122
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Type: fix
1 - big packets; chained buffers and those without enough space to add
the ESP header
2 - IPv6 extension headers in packets that are encrypted/decrypted
3 - Interface protection with SAs that have null algorithms
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ie330861fb06a9b248d9dcd5c730e21326ac8e973
|
|
The sequence number increment and the anti-replay window
checks must be atomic. Given the vector nature of VPP we
can't simply use atomic increments for sequence numbers,
since a vector on thread 1 with lower sequence numbers could
be 'overtaken' by packets on thread 2 with higher sequence
numbers.
The anti-replay logic requires a critical section, not just
atomics, and we don't want that.
So when the SA sees its first packet it is bound to that worker;
all subsequent packets that arrive on a different worker
are subject to a handoff.
Type: feature
Change-Id: Ia20a8645fb50622ea6235ab015a537f033d531a4
Signed-off-by: Neale Ranns <nranns@cisco.com>
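A simplified sketch of the bind-then-handoff decision described above;
the field and function names are illustrative, and a real implementation
would bind atomically (e.g. with a compare-and-swap) rather than with a
plain store:

    #include <stdint.h>

    #define THREAD_UNBOUND ((uint16_t) ~0)

    typedef struct
    {
      uint16_t thread_index; /* worker owning this SA's seq/anti-replay state */
      /* ... sequence number, anti-replay window, keys ... */
    } sa_t;

    /* Returns the worker that must process this packet: the first packet
     * binds the SA to the current worker; packets arriving on any other
     * worker afterwards are handed off to the owner. */
    static inline uint16_t
    sa_owner_thread (sa_t *sa, uint16_t this_thread)
    {
      if (sa->thread_index == THREAD_UNBOUND)
        sa->thread_index = this_thread; /* bind on first packet */
      return sa->thread_index;          /* != this_thread => handoff */
    }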
|
|
Type: fix
Signed-off-by: Prashant Maheshwari <pmahesh2@cisco.com>
Change-Id: I81b937fc8cfec36f8fb5de711ffbb02f23f3664e
Signed-off-by: Prashant Maheshwari <pmahesh2@cisco.com>
|
|
APIs for dedicated IPSec tunnels will remain in this release and are
used to programme the IPIP tunnel protect. APIs will be removed in a
future release.
see:
https://wiki.fd.io/view/VPP/IPSec
Type: feature
Change-Id: I0f01f597946fdd15dfa5cae3643104d5a9c83089
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
If the insecure flag is specified, keys are shown; otherwise they are redacted. This change sets this flag
in the existing CLI code (thus maintaining the old behavior). The use
case for not specifying the insecure flag (and thus redacting the keys
from the show output) is for log messages.
Type: feature
Signed-off-by: Christian E. Hopps <chopps@chopps.org>
Change-Id: I8c0ab6a9a8aba7c687a2559fa1a23fac9d0aa111
|
|
Type: fix
If a tunnel interface has the crypto alg set on the outbound SA to
IPSEC_CRYPTO_ALG_NONE and packets are sent out that interface,
the attempt to write an ESP trailer on the packet occurs at the
wrong offset and the vnet buffer opaque data is corrupted, which
can result in a SEGV when a subsequent node attempts to use that
data.
When an outbound SA is set on a tunnel interface which has no crypto
alg set, add a node to the ip{4,6}-output feature arcs which drops all
packets leaving that interface instead of adding the node which would
try to encrypt the packets.
Change-Id: Ie0ac8d8fdc8a035ab8bb83b72b6a94161bebaa48
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
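A sketch of the node selection the fix describes, using hypothetical
node names: when the outbound SA's crypto algorithm is none
(IPSEC_CRYPTO_ALG_NONE in the commit), the interface's output feature
arc gets a drop node rather than the encrypt node:

    #include <stdint.h>

    typedef enum
    {
      CRYPTO_ALG_NONE = 0,
      /* ... real algorithms ... */
    } crypto_alg_t;

    typedef struct
    {
      crypto_alg_t crypto_alg;
      /* ... */
    } outbound_sa_t;

    /* Hypothetical node indices for the ip4/ip6-output feature arc. */
    enum
    {
      NODE_ESP_ENCRYPT_TUN = 1,
      NODE_DROP_ALL = 2,
    };

    /* Pick the feature node to enable for this tunnel interface. */
    static inline uint32_t
    outbound_feature_node (const outbound_sa_t *sa)
    {
      if (sa->crypto_alg == CRYPTO_ALG_NONE)
        return NODE_DROP_ALL; /* drop cleanly instead of writing the ESP
                                 trailer at the wrong offset */
      return NODE_ESP_ENCRYPT_TUN;
    }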
|
|
Please consult the new tunnel proposal at:
https://wiki.fd.io/view/VPP/IPSec
Type: feature
Change-Id: I52857fc92ae068b85f59be08bdbea1bd5932e291
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ide2a9df18db371c8428855d7f12f246006d7c04c
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: If96f661d507305da4b96cac7b1a8f14ba90676ad
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Id2ddb77b4ec3dd543d6e638bc882923f2bac011d
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Iff6f81a49b9cff5522fbb4914d47472423eac5db
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: Idb661261c2191adda963a7815822fd7a27a9e7a0
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I48a4b0a16f71cbab04dd0955d3ec4001074b57ed
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I6527e3fd8bbbca2d5f728621fc66b3856b39d505
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ibe7f806b9d600994e83c9f1be526fdb0a1ef1833
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Change-Id: I6a76907dc7bed2a81282b63669bea2219d6903c9
Signed-off-by: Kingwel Xie <kingwel.xie@ericsson.com>
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
|
|
Change-Id: Ibf320b3e7b054b686f3af9a55afd5d5bda9b1048
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
|
|
Change-Id: I1e431aa36a282ca7565c6618a940d591674b8cd2
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
baseline:
  ipsec0-tx       1.27e1
  ipsec-if-input  8.19e1
this change:
  ipsec0-tx       6.17e0
  ipsec-if-input  6.39e1
This also fixes the double tunnel TX counts by removing the duplicate
count from the TX node.
Change-Id: Ie4608acda08dc653b6fb9e2c85185d83625efd40
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Also fix the stats to include all the data in the tunnel,
and don't load the SA.
Change-Id: I7cd2e8d879f19683175fd0de78a606a2836e6da2
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: I7d48a4e236c6e7b11b0c9750a30fb68e829d64a5
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
Change-Id: Ib55deb620f4f58cac07da7cb69418a3a30ff3136
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
- return the stats_index of each SPD in the create API call
- no ip_any in the API as this creates 2 SPD entries; the client must add both v4 and v6 explicitly
- only one pool of SPD entries (rather than one per-SPD) to support this
- no packets/bytes in the dump API. Polling the stats segment is much more efficient
(if the SA lifetime is based on packets/bytes)
- emit the policy index in the packet trace and CLI commands.
Change-Id: I7eaf52c9d0495fa24450facf55229941279b8569
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
No functional change; only breaking the monster ipsec.[hc]
into smaller constituent parts.
Change-Id: I3fd4d2d041673db5865d46a4002f6bd383f378af
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
- use enums to enumerate the algorithms and protocols that are supported
- use address_t types to simplify encode/decode
- use typedefs of entry objects to get consistency between the add/del API and dump
Change-Id: I7e7c58c06a150e2439633ba9dca58bc1049677ee
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
This patch adds a configuration parameter to IPSec tunnels, enabling
custom FIB selection for encapsulated packets.
Although this option could also be used for policy-based IPSec,
this change only enables it for virtual-tunnel-interface mode.
Note that this patch does change the API's default behavior regarding
TX FIB selection for encapsulated packets.
The previous behavior was to use the same FIB before and after encap.
The new default behavior is to use FIB 0.
Change-Id: I5c212af909940a8cf6c7e3971bdc7623a2296452
Signed-off-by: Pierre Pfister <ppfister@cisco.com>
|
|
Change-Id: Ia3dcd98edb6188deb96a3a99d831e71b2ffa0060
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|
|
Change-Id: Ifa6d8391b1b2413a88b7720fc434e0bc849a149a
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
|
|
Change-Id: Ic6b27659f1fe9e8df39e80a0441305e4e952195a
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|