|
Use of _vec_len() to set vector length breaks address sanitizer.
Users should use vec_set_len(), vec_inc_len(), vec_dec_len () instead.
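A minimal sketch of the new accessors, assuming the usual vppinfra argument order (vector first, then the length or delta) and that a clib heap has been initialised first:

#include <vppinfra/mem.h>
#include <vppinfra/vec.h>

int
main (void)
{
  clib_mem_init (0, 64 << 20);	/* vppinfra vectors allocate from the clib heap */

  u32 *v = 0;
  vec_validate (v, 9);		/* vector of 10 elements */

  /* old pattern, breaks the ASAN annotations: _vec_len (v) = 5; */
  vec_set_len (v, 5);		/* shrink the vector in place */
  vec_dec_len (v, 1);		/* length is now 4 */
  vec_inc_len (v, 2);		/* length is now 6, still within the allocation */

  vec_free (v);
  return 0;
}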
Type: improvement
Change-Id: I441ae948771eb21c23a61f3ff9163bdad74a2cb8
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: refactor
Change-Id: I0a40e22e1439e13ffdbcbd6fd7cad40c8178418c
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
As per IPSec RFC4301 [1], any non-matching packets should be dropped by
default. This is handled correctly in ipsec_output.c; however, in
ipsec_input.c non-matching packets are allowed to pass as though they
had matched a BYPASS rule.
For full details, see:
https://lists.fd.io/g/vpp-dev/topic/ipsec_input_output_default/84943480
It appears the ipsec6_input_node only matches PROTECT policies. Until
this is extended to handle BYPASS + DISCARD, we may wish to not drop
by default here, since all IPv6 traffic not matching a PROTECT policy
will be dropped.
[1]: https://datatracker.ietf.org/doc/html/rfc4301
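A minimal, self-contained sketch of the default-drop rule described above (plain C with hypothetical types, not the actual ipsec_input.c code): the action of the matching policy decides the packet's fate, and a packet matching no policy is dropped.

#include <stdio.h>
#include <stddef.h>

typedef enum
{
  POLICY_ACTION_BYPASS,
  POLICY_ACTION_DISCARD,
  POLICY_ACTION_PROTECT,
} policy_action_t;

typedef struct
{
  policy_action_t action;	/* selectors (addrs, ports, proto) omitted */
} policy_t;

/* disposition of an SPD lookup result: <0 drop, 0 bypass, >0 protect */
int
ipsec_inbound_disposition (const policy_t * match)
{
  if (match == NULL)
    return -1;			/* RFC 4301: no matching policy => drop */

  switch (match->action)
    {
    case POLICY_ACTION_BYPASS:
      return 0;			/* forward in the clear */
    case POLICY_ACTION_PROTECT:
      return 1;			/* hand off to ESP/AH processing */
    case POLICY_ACTION_DISCARD:
    default:
      return -1;
    }
}

int
main (void)
{
  policy_t bypass = { .action = POLICY_ACTION_BYPASS };
  printf ("no match -> %d, bypass -> %d\n",
          ipsec_inbound_disposition (NULL),
          ipsec_inbound_disposition (&bypass));
  return 0;
}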
Type: fix
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: Iddbfd008dbe082486d1928f6a10ffbd83d859a20
|
|
Previously, after a policy entry was removed from the SPD, the
"vec_del1" macro could move the last entry in the vector into the freed
slot, leaving the entry list unsorted.
This patch fixes the issue by using the "vec_delete" macro instead of
"vec_del1", which preserves the order of the remaining entries.
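A minimal sketch of the difference between the two vppinfra macros, assuming the usual argument order (vec_del1 (v, index) and vec_delete (v, n_del, start_index)) and that a clib heap has been initialised:

#include <vppinfra/mem.h>
#include <vppinfra/vec.h>

int
main (void)
{
  clib_mem_init (0, 64 << 20);

  u32 *a = 0, *b = 0;
  for (u32 i = 0; i < 5; i++)	/* both vectors hold 0 1 2 3 4 */
    {
      vec_add1 (a, i);
      vec_add1 (b, i);
    }

  vec_del1 (a, 1);	/* a becomes 0 4 2 3 - the last entry fills the hole */
  vec_delete (b, 1, 1);	/* b becomes 0 2 3 4 - remaining entries keep order */

  vec_free (a);
  vec_free (b);
  return 0;
}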
Type: fix
Signed-off-by: Gabriel Oginski <gabrielx.oginski@intel.com>
Change-Id: I396591cbbe17646e1d243aedb4cdc272ed4d5e25
|
|
Type: improvement
Ethernet frames on the wire are a minimum of 64 bytes, so use the length in the UDP header to determine whether the ESP payload is one byte of the special SPI, rather than the buffer's size (which will include the Ethernet frame's padding).
In the drop case, advance the packet back to the IP header so the ipX-drop node sees a sane packet.
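A minimal sketch of the length calculation, with a locally defined UDP header and a hypothetical function name: the ESP payload length comes from the UDP header's length field, so minimum-frame padding still present in the buffer is not counted.

#include <stdint.h>
#include <arpa/inet.h>

typedef struct
{
  uint16_t src_port, dst_port;
  uint16_t length;		/* UDP header + payload, network order */
  uint16_t checksum;
} udp_header_t;

/* Number of ESP payload bytes following the UDP header.  buffer_len
 * (bytes left in the buffer from the UDP header onwards) may be larger,
 * because short Ethernet frames are padded up to 64 bytes on the wire. */
uint32_t
udp_encap_esp_payload_len (const udp_header_t * udp, uint32_t buffer_len)
{
  uint32_t udp_len = ntohs (udp->length);

  if (udp_len < sizeof (udp_header_t) || udp_len > buffer_len)
    return 0;			/* malformed - let the caller drop it */

  return udp_len - sizeof (udp_header_t);
}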
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ic3b75487919f0c77507d6f725bd11202bc5afee8
|
|
Type: improvement
When an IPSec interface is first constructed, the end node of the feature arc is not changed, which means it is interface-output.
This means that traffic directed into adjacencies on the link that do not have protection (i.e. no SA) drops like this:
...
00:00:01:111710: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:6 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111829: local0-output
ipsec0
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 5858585858585858585858585858585858585858585858585858585858585858
00000040: 58585858585858585858585858585858585858585858585858585858c2cf08c0
00000060: 2a2c103cd0126bd8b03c4ec20ce2bd02dd77b3e3a4f49664
00:00:01:112017: error-drop
rx:pg1
00:00:01:112034: drop
local0-output: interface is down
Although that's a drop, no packets should go to local0, and we want all IPvX packets to go through ipX-drop.
This change sets the interface's end-arc node to the appropriate drop node when the interface is created, and when the last protection is removed.
The resulting drop is:
...
00:00:01:111504: ip4-midchain
tx_sw_if_index 4 dpo-idx 24 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 next:0 flags:[]
stacked-on:
[@1]: dpo-drop ip4 flow hash: 0x00000000
00000000: 4500005c000100003f01cb8cac100202010101010800ecf40000000058585858
00000020: 58585858585858585858585858585858585858585858585858585858
00:00:01:111533: ip4-drop
ICMP: 172.16.2.2 -> 1.1.1.1
tos 0x00, ttl 63, length 92, checksum 0xcb8c dscp CS0 ecn NON_ECN
fragment id 0x0001
ICMP echo_request checksum 0xecf4 id 0
00:00:01:111620: error-drop
rx:pg1
00:00:01:111640: drop
null-node: blackholed packets
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I7e7de23c541d9f1210a05e6984a688f1f821a155
|
|
When a message is received, verify that it is sufficiently large to
accommodate any VLAs within the message. To do that, we need a way to
calculate the message size including any VLAs. This patch adds such
functionality to vppapigen and the necessary C code to use it to validate
the message size on receipt. Drop messages which are malformed.
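A minimal sketch of the kind of check this enables, using a hypothetical message layout (not generated vppapigen code): the expected size is the fixed part plus the VLA element count carried in the message, and anything shorter is rejected.

#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>

/* hypothetical API message ending in a variable-length array */
typedef struct
{
  uint16_t msg_id;
  uint32_t client_index;
  uint32_t count;		/* number of 8-byte VLA entries, network order */
  uint8_t data[];
} example_msg_t;

/* returns 1 if the received byte count can hold the message plus its VLA */
int
example_msg_len_is_valid (const example_msg_t * msg, size_t received_len)
{
  if (received_len < sizeof (example_msg_t))
    return 0;			/* too short for even the fixed part */

  uint64_t expected = sizeof (example_msg_t)
    + (uint64_t) ntohl (msg->count) * 8;

  return received_len >= expected;	/* shorter => malformed, drop it */
}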
Type: improvement
Signed-off-by: Klement Sekera <ksekera@cisco.com>
Change-Id: I2903aa21dee84be6822b064795ba314de46c18f4
|
|
Type: fix
Fixes: f16e9a5507
If an attempt to submit an async crypto frame fails, the buffers that
were added to the frame are supposed to be dropped. This was not
happening and they are leaking, resulting in buffer exhaustion.
There are two issues:
1. The return value of esp_async_recycle_failed_submit() is used to
figure out how many buffers should be dropped. That function calls
vnet_crypto_async_reset_frame() and then returns f->n_elts. Resetting
the frame sets n_elts to 0. So esp_async_recycle_failed_submit() always
returns 0. It is safe to remove the call to reset the frame because
esp_async_recycle_failed_submit() is called in 2 places and a call to
reset the frame is made immediately afterwards in both cases - so it
is currently unnecessary anyway.
2. An array and an index are passed to esp_async_recycle_failed_submit().
The index should indicate the position in the array where indices of the
buffers contained in the frame should be written. Across multiple calls,
the same index value (n_sync) is passed. This means each call may overwrite
the same entries in the array with the buffer indices in the frame rather
than appending them to the entries which were written earlier. Pass n_noop
as the index instead of n_sync.
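A minimal sketch of the two fixes, with hypothetical types and names modelled on the description above: the element count is read before any reset of the frame, and the caller's running no-op count is used as the write position so successive calls append instead of overwriting.

#include <stdint.h>
#include <string.h>

typedef struct
{
  uint32_t n_elts;
  uint32_t buffer_indices[64];
} async_frame_t;

/* Copy the frame's buffer indices into to_drop[] starting at 'at' and
 * return how many were copied.  n_elts is captured before the frame is
 * reset or reused - resetting first would zero n_elts and the caller
 * would drop nothing (the leak described above).  Passing the running
 * no-op count as 'at' means repeated failures append to the list. */
uint32_t
recycle_failed_frame (const async_frame_t * f, uint32_t * to_drop, uint32_t at)
{
  uint32_t n = f->n_elts;

  memcpy (to_drop + at, f->buffer_indices, n * sizeof (to_drop[0]));
  return n;
}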
Change-Id: I525ab3c466965446f6c116f4c8c5ebb678a66d84
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
Refactor and improve boundary checking on IPv6 extension header handling.
Limit parsing of IPv6 extension headers to a maximum of 4 headers and a
depth of 256 bytes.
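A simplified, self-contained sketch of such a bounded walk (not the actual VPP parser; it only recognises hop-by-hop, routing and destination-options headers): at most 4 extension headers and 256 bytes of headers are examined, and every header length is checked against the bytes available.

#include <stdint.h>
#include <stddef.h>

#define MAX_EXT_HDRS  4
#define MAX_EXT_BYTES 256

/* Walk the extension header chain starting at 'p' (first byte after the
 * fixed IPv6 header), with 'len' bytes available and 'proto' the Next
 * Header value from the fixed header.  Returns the upper-layer protocol
 * or -1 if the chain is malformed or exceeds the parsing limits. */
int
ip6_ext_header_walk (const uint8_t * p, size_t len, uint8_t proto)
{
  size_t off = 0;

  for (int i = 0; i < MAX_EXT_HDRS; i++)
    {
      /* 0 = hop-by-hop, 43 = routing, 60 = destination options */
      if (proto != 0 && proto != 43 && proto != 60)
        return proto;		/* reached the upper-layer header */

      if (off + 2 > len || off + 2 > MAX_EXT_BYTES)
        return -1;		/* truncated or too deep */

      uint8_t next = p[off];
      size_t hdr_len = 8 + 8 * (size_t) p[off + 1];	/* RFC 8200 Hdr Ext Len */

      if (off + hdr_len > len || off + hdr_len > MAX_EXT_BYTES)
        return -1;

      proto = next;
      off += hdr_len;
    }

  /* MAX_EXT_HDRS headers parsed: accept only if the chain has ended */
  return (proto != 0 && proto != 43 && proto != 60) ? proto : -1;
}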
Type: fix
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: Ide40aaa2b482ceef7e92f02fa0caeadb3b8f7556
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Type: fix
Using the adjacency to modify the interface's feature arc doesn't work, since there is potentially more than one adjacency per interface.
Instead have the interface, when it is created, register what the end node of the feature arc is. This end node is then also used as the interface's tx node (i.e. it is used as the adjacency's next-node).
Rename adj-midchain-tx to 'tunnel-output'; that's a bit more intuitive.
There's also a fix in config string handling to:
1- prevent false sharing of strings when the end node of the arc is different.
2- call registered listeners when the end node is changed
For IPSec the consequences are that one cannot provide per-adjacency behaviour using different end nodes - this was previously done for the no-SA case and an SA with no protection. These cases are now handled in the esp-encrypt node.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: If3a83d03a3000f28820d9a9cb4101d244803d084
|
|
Type: fix
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I93c819cdd802f0980a981d1fc5561d65b35d3382
|
|
Type: fix
This reverts commit 5ecda99d673298e5bf3c906e9bf6682fdcb57d83.
Change-Id: I393c7d8a6b32aa4f178d6b6dac025038bbf10fe6
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: Ib3fe4f306f23541a01246b74ad0f1a7074fa03bb
|
|
Adding flow cache support to improve outbound IPv4/IPSec SPD lookup
performance. Details about flow cache:
Mechanism:
1. First packet of a flow will undergo linear search in SPD
table. Once a policy match is found, a new entry will be added
into the flow cache. From 2nd packet onwards, the policy lookup
will happen in flow cache.
2. The flow cache is implemented using bihash without collision
handling. This avoids the logic to age out or recycle old flows in
the flow cache. Whenever a collision occurs, the old entry is
overwritten by the new entry (see the sketch after this list). The
worst case is when all 256 packets in a batch collide and fall back
to linear search; the average and best case are O(1).
3. The size of flow cache is fixed and decided based on the number
of flows to be supported. The default is set to 1 million flows.
This can be made as a configurable option as a next step.
4. Whenever a SPD rule is added/deleted by the control plane, the
flow cache entries will be completely deleted (reset) in the
control plane. The assumption here is that SPD rule add/del is not
a frequent operation from control plane. Flow cache reset is done,
by putting the data plane in fall back mode, to bypass flow cache
and do linear search till the SPD rule add/delete operation is
complete. Once the rule is successfully added/deleted, the data
plane will be allowed to make use of the flow cache. The flow
cache will be reset only after flushing out the inflight packets
from all the worker cores using
vlib_worker_wait_one_loop().
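A minimal, self-contained sketch of points 2 and 4 above (plain C, not the actual VPP code): a fixed-size, direct-mapped table where a colliding insert simply overwrites the old flow, plus an epoch counter that invalidates every cached entry when the control plane changes the SPD. The names, hash mix and 16-byte key packing are illustrative assumptions, and the synchronisation with in-flight packets (vlib_worker_wait_one_loop) is omitted.

#include <stdint.h>
#include <string.h>

#define FLOW_CACHE_SIZE (1u << 20)	/* ~1M flows, fixed at init time */

typedef struct
{
  uint64_t key[2];		/* IPv4 5-tuple packed into 16 bytes */
  uint32_t policy_index;	/* cached SPD lookup result */
  uint32_t epoch;		/* bumped on SPD add/del to invalidate */
} flow_entry_t;

static flow_entry_t flow_cache[FLOW_CACHE_SIZE];
static uint32_t flow_epoch = 1;

static inline uint32_t
flow_hash (const uint64_t key[2])
{
  /* placeholder mix; the real code uses the bihash hash function */
  uint64_t h = key[0] * 0x9e3779b97f4a7c15ULL ^ key[1];
  return (uint32_t) (h >> 40) & (FLOW_CACHE_SIZE - 1);
}

/* lookup: returns the cached policy index, or ~0u on miss/stale entry */
uint32_t
flow_cache_lookup (const uint64_t key[2])
{
  flow_entry_t *e = &flow_cache[flow_hash (key)];
  if (e->epoch == flow_epoch && !memcmp (e->key, key, sizeof (e->key)))
    return e->policy_index;
  return ~0u;
}

/* add after a linear SPD search; a collision just overwrites the old flow */
void
flow_cache_add (const uint64_t key[2], uint32_t policy_index)
{
  flow_entry_t *e = &flow_cache[flow_hash (key)];
  memcpy (e->key, key, sizeof (e->key));
  e->policy_index = policy_index;
  e->epoch = flow_epoch;
}

/* control plane: an SPD rule add/del invalidates every cached flow at once */
void
flow_cache_reset (void)
{
  flow_epoch++;
}

On a miss or a stale epoch the caller falls back to the linear SPD search and refreshes the entry with flow_cache_add().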
Details about bihash usage:
1. A new bihash template (16_8) is added to support IPv4 5 tuple.
BIHASH_KVP_PER_PAGE and BIHASH_KVP_AT_BUCKET_LEVEL are set
to 1 in the new template. It means only one KVP is supported
per bucket.
2. Collision handling is avoided by calling
BV (clib_bihash_add_or_overwrite_stale) function.
Through the stale callback function pointer, the KVP entry
will be overwritten during collision.
3. Flow cache reset is done using
BV (clib_bihash_foreach_key_value_pair) function.
Through the callback function pointer, the KVP value is reset
to ~0ULL.
MRR performance numbers with 1 core, 1 ESP Tunnel, null-encrypt,
64B for different SPD policy matching indices:
SPD Policy index     :      1          10         100        1000
Throughput           :  MPPS/MPPS  MPPS/MPPS  MPPS/MPPS  KPPS/MPPS
(Baseline/Optimized)
ARM Neoverse N1      :  5.2/4.84   4.55/4.84  2.11/4.84  329.5/4.84
ARM TX2              :  2.81/2.6   2.51/2.6   1.27/2.6   176.62/2.6
INTEL SKX            :  4.93/4.48  4.29/4.46  2.05/4.48  336.79/4.47
Next Steps:
Following can be made as a configurable option through startup
conf at IPSec level:
1. Enable/Disable Flow cache.
2. Bihash configuration like number of buckets and memory size.
3. Dual/Quad loop unroll can be applied around bihash to further
improve the performance.
4. The same flow cache logic can be applied for IPv6 as well as in
IPSec inbound direction. A deeper and wider flow cache using
bihash_40_8 can replace existing bihash_16_8, to make it
common for both IPv4 and IPv6 in both outbound and
inbound directions.
Following changes are made based on the review comments:
1. ON/OFF flow cache through startup conf. Default: OFF
2. Flow cache stale entry detection using epoch counter.
3. Avoid host order endianness conversion during flow cache
lookup.
4. Move IPSec startup conf to a common file.
5. Added SPD flow cache unit test case
6. Replaced bihash with vectors to implement flow cache.
7. ipsec_add_del_policy API is not mpsafe. Cleaned up
inflight packets check in control plane.
Type: improvement
Signed-off-by: mgovind <govindarajan.Mohandoss@arm.com>
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Tested-by: Jieqiang Wang <jieqiang.wang@arm.com>
Change-Id: I62b4d6625fbc6caf292427a5d2046aa5672b2006
|
|
Type: improvement
Change-Id: Iec585880085b12b08594a0640822cd831455d594
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
If logging is on, it will try to print the address nh. Make sure it is
not NULL.
Type: fix
Change-Id: I81c0295865901406d86e0d822a103b4d5adffe47
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: improvement
Change-Id: Iac01d7830b53819ace8f199554be10ab89ecdb97
Signed-off-by: Nathan Skrzypczak <nathan.skrzypczak@gmail.com>
|
|
Type: feature
Gaps in the sequence numbers received on an SA indicate packets that were lost.
Gaps are identified using the anti-replay window that records the sequences seen.
Publish the number of lost packets in the stats segment at /net/ipsec/sa/lost
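A simplified sketch of how such gaps can be counted, using a classic 64-bit anti-replay window (a hypothetical helper, much narrower than VPP's real window; start-of-session and ESN wrap handling are omitted): whenever the window slides forward, positions that fall off without ever having been marked as seen are added to the lost counter.

#include <stdint.h>

typedef struct
{
  uint64_t window;	/* bit i set => sequence (highest_seq - i) was seen */
  uint32_t highest_seq;	/* highest sequence number received so far */
  uint64_t lost;	/* what gets published at /net/ipsec/sa/lost */
} replay_state_t;

/* account for a newly received, integrity-verified sequence number */
void
replay_advance (replay_state_t * rs, uint32_t seq)
{
  if (seq > rs->highest_seq)
    {
      uint32_t shift = seq - rs->highest_seq;

      if (shift >= 64)
        {
          /* the whole old window falls off; anything in it never seen is
           * lost, as is the gap that skipped past the window entirely */
          rs->lost += 64 - (uint32_t) __builtin_popcountll (rs->window);
          rs->lost += shift - 64;
          rs->window = 0;
        }
      else
        {
          /* bits shifted out of the window without being set are lost */
          uint64_t gone = rs->window >> (64 - shift);
          rs->lost += shift - (uint32_t) __builtin_popcountll (gone);
          rs->window <<= shift;
        }
      rs->window |= 1;		/* mark 'seq' itself as seen */
      rs->highest_seq = seq;
    }
  else if (rs->highest_seq - seq < 64)
    rs->window |= 1ULL << (rs->highest_seq - seq);
}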
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I8af1c09b7b25a705e18bf82e1623b3ce19e5a74d
|
|
The ipsec startup.conf config currently exists in ipsec_tun.c. This is
because currently the only ipsec{...} options are tunnel related.
This patch moves the ipsec config to a common file (ipsec.c) for future
extensibility/addition of non-tunnel related config options.
Type: refactor
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: I1569dd7948334fd2cc28523ccc6791a22dea8d32
|
|
Type: refactor
Change-Id: Id10cbf52e8f2dd809080a228d8fa282308be84ac
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
These file are no longer needed
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I34f8e0b7e17d9e8c06dcd6c5ffe51aa273cdec07
|
|
Type: docs
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: Ica576e13953a3c720a7c093af649d1dd380cc2c0
|
|
Type: improvement
There's no need for the user to set the TUNNEL_V6 flag, it can be
derived from the tunnel's address type.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I073073dc970b8a3f2b2645bc697fc00db1adbb47
|
|
Type: fix
two problems:
1 - just because anti-replay is not enabled doesn't mean the high sequence
number should not be used.
- fix: there needs to be some means to detect a wrapped packet, so we
use a window size of 2^30 (see the sketch at the end of this message).
2 - The SA object was used as a scratch pad for the high-sequence
number used during decryption. That means that once the batch has been
processed the high-sequence number used is lost. This means it is not
possible to distinguish this case:
if (seq < IPSEC_SA_ANTI_REPLAY_WINDOW_LOWER_BOUND (tl))
  {
    ...
    if (post_decrypt)
      {
        if (hi_seq_used == sa->seq_hi)
          /* the high sequence number used to successfully decrypt this
           * packet is the same as the last sequence number of the SA.
           * that means this packet did not cause a wrap.
           * this packet is thus out of window and should be dropped */
          return 1;
        else
          /* The packet decrypted with a different high sequence number
           * to the SA, that means it is the wrap packet and should be
           * accepted */
          return 0;
      }
- fix: don't use the SA as a scratch pad, use the 'packet_data' - the
same place that is used as the scratch pad for the low sequence number.
other consequences:
- An SA doesn't have seq and last_seq, it has only seq; the sequence
number of the last packet tx'd or rx'd.
- there's 64bits of space available on the SA's first cache line. move
the AES CTR mode IV there.
- test the ESN/AR combinations to catch the bugs this fixes. This
doubles the number of tests, but without AR on they only run for 2
seconds. In the AR tests, the time taken to wait for packets that won't
arrive is dropped from 1 to 0.2 seconds, thus reducing the runtime of
these tests from 10-15 to about 5 seconds.
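A simplified sketch of the 2^30 heuristic from point 1, with hypothetical names: with no anti-replay window to consult, a received low-order sequence number that is far behind the SA's current one is assumed to belong to the next 2^32 epoch, and the guess is kept in per-packet scratch data rather than written back to the SA.

#include <stdint.h>

#define ESN_WINDOW (1u << 30)

/* Guess the full 64-bit ESN for a received 32-bit sequence number, given
 * the SA's current low/high halves.  The result is stored per packet
 * (e.g. in the packet's scratch data), not on the SA, so the packets in
 * a batch cannot corrupt each other's view of the SA. */
uint64_t
esn_guess (uint32_t seq, uint32_t sa_seq, uint32_t sa_seq_hi)
{
  uint32_t hi = sa_seq_hi;

  /* 'seq' far below the SA's current sequence number: assume the low
   * 32 bits wrapped and this packet belongs to the next epoch */
  if (seq < sa_seq && sa_seq - seq > ESN_WINDOW)
    hi = sa_seq_hi + 1;

  return ((uint64_t) hi << 32) | seq;
}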
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Iaac78905289a272dc01930d70decd8109cf5e7a5
|
|
ipsec4_input_node
ipsec_spd_policy_counters are incremented only for matched inbound
PROTECT actions (:273 and :370). BYPASS + DISCARD actions also have
SPD policy counters that should be incremented on match.
This fix increments the counters for inbound BYPASS and DISCARD actions.
Type: fix
Signed-off-by: Zachary Leaf <zachary.leaf@arm.com>
Change-Id: Iac3c6d344be25ba5326e1ed45115ca299dee5f49
|
|
Type: improvement
the rationale being that the del only requires the SA's ID, so it's a
bit mean to require the client to fill out all the other information as
well.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ibbc20405e74d6a0e1a3797465ead5271f15888e4
|
|
Use autogenerated code.
Does not change API definitions.
Type: improvement
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Change-Id: I0db7343e907524af5adb2f4771b45712927d5833
|
|
Length check must also take current_data into account.
Type: fix
Change-Id: I7a1b1752868892d40f59490d05452ef24565cca6
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: test
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Iec69d8624b15766ed65e7d09777819d2242dee17
|
|
Type: fix
If an async crypto frame is allocated during ESP encrypt/decrypt but
a buffer/op is not subsequently added to the frame, the frame leaks. It
is not submitted if the count of async ops is zero nor is it
returned to the frame pool. This happens frequently if >= 2 worker
threads are configured and a vector of buffers all have to be handed
off to other threads.
Wait until it is almost certain that the buffer will be added to the
frame before allocating the frame, making it less likely that an
allocated frame ends up with no operations added to it.
For encrypt this is sufficient to resolve the leak. For decrypt there
is still a chance that the buffer will fail to be added to the frame, so
remove the counter of async ops and ensure that all frames that were
allocated get either submitted or freed at the end.
Change-Id: I4778c3265359b192d8a88ab9f8c53519d46285a2
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|
|
When both chained and non-chained buffers are processed in the same
vector, make sure the non-chained buffers are processed as non-chained
crypto ops.
Type: fix
Change-Id: I19fc02c25a0d5e2e8a1342e2b88bbae3fe92862f
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: fix
The same value is used for other tunnel types.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I6593001918993d65f127cc9f716c95e932239842
|
|
Mechanical change for patch following this one...
Type: improvement
Change-Id: Iee12f3a8851f35569e6c039494a94fc36e83d20f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: fix
Change-Id: I433fe3799975fe3ba00fa30226f6e8dae34e88fc
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: improvement
Some tests, i.e. ipsec, see a performance regression when offload flags
are moved to the 2nd cacheline. This patch moves them back to the 1st cacheline.
Change-Id: I6ead45ff6d2c467b0d248f409e27c2ba31758741
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
We don't use libssl anymore... At least not directly.
Type: improvement
Change-Id: I9a0fab6e3c576d945498ce46f030bd26c1a14d15
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
The crypto op flags must be reset to the frame flags minus invalid values
depending on the operation, instead of being forced to specific values.
Type: fix
Change-Id: Ib02c2a738bbca6962394b3c03088d516d0da56a0
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
This module implement Python access to the VPP statistics segment. It
accesses the data structures directly in shared memory.
VPP uses optimistic locking, so data structures may change underneath
us while we are reading. Data is copied out and it's important to
spend as little time as possible "holding the lock".
Counters are stored in VPP as a two-dimensional array, indexed by
thread and by counter index (typically sw_if_index).
Simple counters count only packets; combined counters count packets
and octets.
Counters can be accessed in either dimension.
stat['/if/rx'] - returns 2D lists
stat['/if/rx'][0] - returns counters for all interfaces for thread 0
stat['/if/rx'][0][1] - returns counter for interface 1 on thread 0
stat['/if/rx'][0][1]['packets'] - returns the packet counter
for interface 1 on thread 0
stat['/if/rx'][:, 1] - returns the counters for interface 1 on all threads
stat['/if/rx'][:, 1].packets() - returns the packet counters for
interface 1 on all threads
stat['/if/rx'][:, 1].sum_packets() - returns the sum of packet counters for
interface 1 on all threads
stat['/if/rx-miss'][:, 1].sum() - returns the sum of packet counters for
interface 1 on all threads for simple counters
Type: refactor
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: I1fe7f7c7d11378d06be8276db5e1900ecdb8f515
Signed-off-by: Ole Troan <ot@cisco.com>
|
|
Change-Id: Ia304488900bd9236ab4e7cc6f17ae029ee6f2c00
Type: fix
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
|
|
Change-Id: I20e48a5ac8068eccb8d998346d35227c4802bb68
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
Type: feature
|
|
Type: feature
This feature only applies to ESP, not AH, SAs.
As well as the global switch for async mode, allow individual SAs to be
async.
If global async is on, all SAs are async. If global async mode is off,
then an SA can be individually set to async. This preserves the
global switch behaviour.
The strategy in the esp encrypt/decrypt nodes is to separate the frame
into: 1) sync buffers, 2) async buffers and 3) no-op buffers.
Sync buffers will undergo a crypto/auth operation; no-op buffers will
not, they are dropped or handed off.
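A minimal sketch of this three-way split, with hypothetical types (not the actual esp node code): each buffer is classified once, and the per-SA flag can enable async even when the global switch is off.

#include <stdint.h>

enum buf_class { BUF_SYNC, BUF_ASYNC, BUF_NOOP };

typedef struct
{
  int is_async;			/* per-SA async flag */
  int has_protection;		/* no usable SA/keys => no-op */
} sa_t;

/* classify one buffer for the esp encrypt/decrypt nodes */
enum buf_class
classify (const sa_t * sa, int global_async)
{
  if (!sa->has_protection)
    return BUF_NOOP;		/* dropped or handed off, no crypto/auth op */
  if (global_async || sa->is_async)
    return BUF_ASYNC;		/* queued on an async crypto frame */
  return BUF_SYNC;		/* processed inline with sync crypto ops */
}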
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ifc15b10b870b19413ad030ce7f92ed56275d6791
|
|
Type: improvement
In the current scheme an async frame is submitted each time the crypto
op changes. This happens each time a different SA is used and thus
potentially many times per node. This can lead to the submission of many
partially filled frames.
Change the scheme to construct as many full frames as possible in the
node and submit them all at the end. The frame ownership is passed to
the user so that there can be more than one open frame per op at any
given time.
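A minimal sketch of the new scheme, with hypothetical frame_get/frame_submit helpers: the node keeps at most one open frame per crypto op, submits a frame as soon as it is full, and submits whatever it still owns at the end of the node instead of every time the op changes.

#include <stdint.h>
#include <stddef.h>

#define FRAME_SIZE 64
#define N_OPS      8		/* e.g. one op per (crypto alg, integ alg) */

typedef struct
{
  uint32_t n_elts;
  uint32_t buffers[FRAME_SIZE];
} frame_t;

frame_t *frame_get (int op);		/* allocate an empty frame (stub) */
void frame_submit (frame_t * f, int op);	/* hand to the crypto engine (stub) */

void
node_process (const uint32_t * buffers, const int *ops, uint32_t n)
{
  frame_t *open[N_OPS] = { 0 };	/* at most one open frame per op */

  for (uint32_t i = 0; i < n; i++)
    {
      int op = ops[i];

      if (open[op] == NULL)
        open[op] = frame_get (op);

      open[op]->buffers[open[op]->n_elts++] = buffers[i];

      if (open[op]->n_elts == FRAME_SIZE)	/* full frame: submit now */
        {
          frame_submit (open[op], op);
          open[op] = NULL;
        }
    }

  /* end of the node: submit the partially filled frames we still own */
  for (int op = 0; op < N_OPS; op++)
    if (open[op] && open[op]->n_elts)
      frame_submit (open[op], op);
}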
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ic2305581d7b5aa26133f52115e0cd28ba956ed55
|
|
Type: refactor
This allows the ipsec_sa_get function to be moved from ipsec.h to
ipsec_sa.h where it belongs.
Also use ipsec_sa_get throughout the code base.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I2dce726c4f7052b5507dd8dcfead0ed5604357df
|
|
Type: test
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Iabc8f2b09ee10a82aacebd36acfe8648cf69b7d7
|
|
Type: refactor
- remove the extern declarations of the nodes; keep their use in the
files that declare them
- remove duplicate declaration of ipsec_set_async_mode
- remove unused ipsec_add_feature
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I6ce7bb4517b508a8f02b11f3bc819e1c5d539c02
|
|
Make the ipsec[46]-tun-input nodes siblings of device-input so that
input features can be enabled on them. Register ipsec-tun for feature
updates. When a feature is enabled on the device-input arc and the
ifindex is an IPSec tunnel, change the end node of the arc for that
ifindex to be the appropriate ESP decrypt node. Set a flag on the
tunnel to indicate that the feature arc should be started for packets
input on the tunnel.
Test input policing on ESP IPSec tunnels.
Type: improvement
Signed-off-by: Brian Russell <brian@graphiant.com>
Change-Id: I3b9f047e5e737f3ea4c58fc82cd3c15700b6f9f7
|
|
Type: refactor
This patch refactors the offload flags in vlib_buffer_t.
There are two main reasons behind this refactoring.
First, the offload flags are insufficient to represent outer
and inner header offloads. Second, room for these flags
in the first cacheline of vlib_buffer_t is also limited.
This patch introduces a generic offload flag in the first
cacheline, and detailed offload flags in the 2nd cacheline
of the structure for performance optimization.
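A minimal sketch of the layout idea (not the real vlib_buffer_t; field and flag names are illustrative): a single generic "offload requested" flag stays on the hot first cacheline, while the detailed per-protocol offload bits live on the second cacheline and are only read when the generic flag is set.

#include <stdint.h>

#define HAS_OFFLOAD (1u << 0)	/* generic flag, lives on the 1st cacheline */

/* detailed flags, live on the 2nd cacheline */
#define OFFLOAD_IP_CKSUM  (1u << 0)
#define OFFLOAD_TCP_CKSUM (1u << 1)
#define OFFLOAD_UDP_CKSUM (1u << 2)

typedef struct
{
  /* --- 1st cacheline: fields touched by every node --- */
  _Alignas (64) uint32_t flags;		/* includes HAS_OFFLOAD */
  uint16_t current_length;
  /* ... */

  /* --- 2nd cacheline: read only when HAS_OFFLOAD is set --- */
  _Alignas (64) uint32_t oflags;	/* detailed offload requests */
  /* ... */
} buffer_t;

static inline int
needs_l4_checksum_offload (const buffer_t * b)
{
  /* cheap test on the hot cacheline first, details only if needed */
  return (b->flags & HAS_OFFLOAD)
    && (b->oflags & (OFFLOAD_TCP_CKSUM | OFFLOAD_UDP_CKSUM));
}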
Change-Id: Icc363a142fb9208ec7113ab5bbfc8230181f6004
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: improvement
Negates the need to load the SA in the handoff node.
Don't prefetch the packet data; it's not needed.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I340472dc437f050cc1c3c11dfeb47ab09c609624
|
|
support
Type: feature
Attempt 2. This includes changes in ah_encrypt that don't use
uninitialised memory when doing tunnel mode fixups.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: Ie3cb776f5c415c93b8a5ee22f22586fd0181110d
|
|
This reverts commit c7eaa711f3e25580687df0618e9ca80d3dc85e5f.
Reason for revert: The jenkins job named 'vpp-merge-master-ubuntu1804-x86_64' had 2 IPv6 AH tests fail after the change was merged. Those 2 tests also failed the next time that job ran after an unrelated change was merged.
Change-Id: I0e2c3ee895114029066c82624e79807af575b6c0
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
|