path: root/dpdk/Makefile
Age  Commit message  Author  Files  Lines
2018-09-20  rename vpp-dpdk-dev to vpp-ext-deps  Damjan Marion  1  -485/+0
We need to have new tenants in the development package. This is the first of a series of patches which will allow us to have multiple external libs and tools packaged for the developer's convenience.
Change-Id: I884bd75fba96005bbf8cea92774682b2228e0e22
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-12  Add patch for ixgbe x550 SFP+ to DPDK 18.08  Matthew Smith  1  -1/+1
Add the ixgbe patch which has been used with DPDK 18.02 and 18.05. If the link flaps before the link status has been successfully collected, the MAC will be reset and the PMD will not wait long enough for it to come back up before giving up, which will continue happening every time an attempt is made to check the link status. This patch was submitted to upstream DPDK in July 2018 but has not been included in a release yet.
Change-Id: Ib2100b33d2a986f3cf74e42fc5538412f76f42c7
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-08-29  patch mlx PMDs for VPP w/ DPDK 18.05 or newer  Matthew Smith  1  -1/+1
Memory allocation changed in DPDK 18.05. The mlx4 and mlx5 PMDs did not support using externally allocated memory. The patch for mlx5 was generated by Mellanox. That patch was modified to apply to the mlx4 PMD and tested on Microsoft Azure.
Change-Id: I92116b1d71a3896d5bf7b1f10c40c898d72540d6
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-08-21  dpdk: bump DPDK version to 18.08  Damjan Marion  1  -2/+2
Change-Id: Ia1b188492138a0ca0e95daf6eb8e761e8db081ef
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-21  NASM: update to latest stable 2.13.03  Marco Varlese  1  -1/+1
Change-Id: I04cdd296bc3de0460308351d0bbb00d7cbbf023e
Signed-off-by: Marco Varlese <marco.varlese@suse.de>
2018-08-17  fix compiling warnings with GCC  Lijian Zhang  1  -0/+15
GCC 7 with Ubuntu 16.04's default ccache 3.2.4 reports warnings on switch-case fall-through code, which is commonly seen in DPDK code, and this ultimately causes the image build to fail. A newer ccache version is required to support -Wimplicit-fallthrough. To suppress the warning, if GCC is version 7 or higher and the ccache version is old, the fall-through check is disabled:

    dpdk-18.05/drivers/net/ark/ark_ethdev_rx.h:39:0,
    from dpdk-18.05/drivers/net/ark/ark_ethdev_rx.c:36:
    dpdk-18.05/arm64-armv8a-linuxapp-gcc/include/rte_mbuf.h: In function ‘rte_pktmbuf_alloc_bulk’:
    dpdk-18.05/arm64-armv8a-linuxapp-gcc/include/rte_mbuf.h:1292:7: error: this statement may fall through [-Werror=implicit-fallthrough=]
       idx++;
       ~~~^~

Change-Id: I4d12492471fadef9d349ba9e853a6799512f76f5
Signed-off-by: Lijian Zhang <Lijian.Zhang@arm.com>
2018-08-13  dpdk: support for DPDK 18.08  Damjan Marion  1  -1/+2
Change-Id: If1b93341c222160b9a08f127620c024620e55c37
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-07-08  ixgbe link update patch for DPDK 18.05  Matthew Smith  1  -1/+1
Add the patch for DPDK 18.05 that was previously applied to DPDK 18.02.1. The issue with ixgbe on x550 SFP+ still exists. Bug report submitted to DPDK: https://bugs.dpdk.org/show_bug.cgi?id=69
Change-Id: I9b005709ddf2a72192b1288ba8b4bac85bf12685
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-07-02  dpdk: bump default DPDK version to 18.05  Damjan Marion  1  -1/+1
Change-Id: I739d3e6c25efe8d32b2f4a60557c644edfe958e0
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-26  Update to latest stable DPDK release (18.02.2)  Marco Varlese  1  -3/+3
Change-Id: I00b0e4d7f7b597760a898c895b1a80bfac3a47fb
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-06-17  ixgbe patch for link status updates  Matthew Smith  1  -1/+1
An x550 with SFP+ interfaces attached to some switches can have problems bringing the port up. After configuring the link, there is a 500 ms wait for the link to come up. Some switches don't bring their ports up that quickly, so the link is never observed to come up and is reconfigured again the next time dpdk_update_link_state() is called. Subsequent attempts also time out, indefinitely. Instead of waiting through 5 iterations of a 100 ms delay, wait through 10 iterations. The i40e PMD does this when updating link status. This issue & patch will be reported to Intel so this or some better solution can be applied upstream in the future.
Change-Id: I16d706a2790e51d695edc43c0ca17f1eff1dcf5e
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
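The retry change amounts to the following sketch (conceptual only; wait_for_link_up and check_link_up are hypothetical names, not the actual ixgbe PMD code):

    import time

    def wait_for_link_up(check_link_up, retries=10, delay_s=0.1):
        # Poll link status up to `retries` times, 100 ms apart.
        # The patch doubles the ixgbe retry count from 5 to 10, matching
        # the i40e PMD, so slow switch ports get a full second to come up.
        for _ in range(retries):
            if check_link_up():
                return True
            time.sleep(delay_s)
        return False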
2018-06-07  Add support for DPDK 18.05  Damjan Marion  1  -7/+17
Change-Id: I205932bc727c990011bbbe1dc6c0cf5349d19806
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-04  Configure or deduce CLIB_LOG2_CACHE_LINE_BYTES (VPP-1064)  Dave Barach  1  -0/+1
Added the configure argument "--with-log2-cache-line-bytes=5|6|7|auto", AKA 32, 64, or 128 bytes, or use the value inferred from the build host. This produces build-xxx/vpp/vppinfra/config.h, which .../src/vppinfra/cache.h includes. Kernels which implement the following pseudo-file (aka x86_64) are easy: /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size. Otherwise, extract the cpuid from /proc/cpuinfo and map it to the cache line size.
Change-Id: I7ff861e042faf82c3901fa1db98864fbdea95b74
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Nitin Saxena <nitin.saxena@cavium.com>
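A minimal sketch of that detection order (the sysfs path comes from the commit message; the /proc/cpuinfo mapping table below is an illustrative stand-in, not the configure script's actual table):

    def detect_log2_cache_line_bytes():
        # Easy path: kernels (e.g. x86_64) that expose the line size directly.
        try:
            with open('/sys/devices/system/cpu/cpu0/cache/index0/'
                      'coherency_line_size') as f:
                return int(f.read()).bit_length() - 1  # log2 of a power of 2
        except IOError:
            pass
        # Fallback: map the CPU id from /proc/cpuinfo to a known line size.
        part_to_bytes = {'0xd08': 64}  # hypothetical example entry
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('CPU part'):
                    part = line.split(':')[1].strip()
                    return part_to_bytes.get(part, 64).bit_length() - 1
        return 6  # assume 64-byte cache lines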
2018-05-30  Fix clang compilation on aarch64: replace -pie with -fPIE for dpdk compilation.  Sirshak Das  1  -0/+4
Fixes the clang error "argument unused during compilation: '-pie'" by replacing -pie with -fPIE.
Change-Id: Ic89a5e325e019d4d794d35556a07ebcf0b718dd3
Signed-off-by: Sirshak Das <sirshak.das@arm.com>
Reviewed-by: Brian Brooks <brian.brooks@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
2018-05-25  Fix VPP DPDK build failure with Mellanox NIC on aarch64  Bin Huang  1  -1/+2
This compile issue was first reported by Sirshak Das in the following thread: https://lists.fd.io/g/vpp-dev/message/8384. The issue was caused by the auto-config shell script auto-config-h.sh treating the empty quotation marks "" as the $CROSS prefix for $CC when CROSS is empty.
Change-Id: Ied535c6d18c4dffacbddabc3ad2087dffe19438d
Signed-off-by: Bin Huang <huangbin.mails@gmail.com>
2018-05-12  dpdk: Add build related keywords for failsafe PMD  Rui Cai  1  -1/+4
Added build-related keywords for the TAP and FAILSAFE PMDs and also added some missing keywords for the mlx4 PMD. This is part of the initial effort to enable vpp running over dpdk on the failsafe PMD in Microsoft Azure (1/4).
Change-Id: I2aebf209fbc6db030185f41971b51a593a003a3a
Signed-off-by: Rui Cai <rucai@microsoft.com>
2018-04-24  DPDK: CVE-2018-1059  Marco Varlese  1  -3/+3
For further details, please see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1059
Change-Id: Icd207f129e5fdcc3d9d8ad56ba5a368926f2804d
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-03-05  Set DPDK_MLX4_PMD and DPDK_MLX5_PMD compile with default dlopen links  Amir Zeidner  1  -0/+2
dlopen linkage allows more transparent use of Mellanox NICs. The Mellanox shared library librte_pmd_mlx5/4_glue.so* is placed in LD_LIBRARY_PATH. At run time, the Mellanox code is loaded only when Mellanox NICs are explicitly used, i.e. if VPP is used with another vendor's NICs, the Mellanox code is not loaded.
Change-Id: Ib05bdbfc4cbb6e447c67186c98361f9c5b447140
Signed-off-by: Amir Zeidner <amirzei@mellanox.com>
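The lazy-loading idea, as a sketch (the exact glue soname carries a version suffix, so the name below is an assumption):

    import ctypes

    def load_mlx5_glue_if_needed(have_mlx_nic):
        # Only pull in the Mellanox glue code when a Mellanox NIC is
        # actually in use; the .so is resolved from LD_LIBRARY_PATH.
        if not have_mlx_nic:
            return None  # other vendors: Mellanox code never loaded
        return ctypes.CDLL('librte_pmd_mlx5_glue.so')  # assumed soname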
2018-02-24  dpdk: Add option to specify cache line size for dpdk build  Damjan Marion  1  -0/+2
Change-Id: Ib3361eded05babfc17ead28af7d252e7503ce141
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-02-23  DPDK: disabling DPAA since broken for 18.02  Marco Varlese  1  -0/+6
Change-Id: I1e0cea8e7ea6d8a777ca38abb61f4c093f29c722
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2018-02-21  dpdk: fix building dpdk debug images with dpdk 18.02  Damjan Marion  1  -1/+1
This looks like a bug in the ipsec-mb library when DEBUG=yes is passed, so we simply stop passing it.
Change-Id: Ifedd6d8a2aecf5af902ab4fa80ef197aebd5f829
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-02-19  dpdk: bump to 18.02  Damjan Marion  1  -1/+1
Change-Id: I3764f57a4b8df96d6bd20753b86fc0119d833bd9
Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-02-15  dpdk: add support for DPDK 18.02, deprecate 17.08  Damjan Marion  1  -48/+5
17.11 is still the default.
Change-Id: I524d232579db8a59c717c5d760398b6b7f811d03
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-12-18  Provide useful output when installed vpp-dpdk version is incorrect  Ed Warnicke  1  -0/+5
Change-Id: Icb931de82cb5969fa4976611629e2f882c720a99
Signed-off-by: Ed Warnicke <eaw@cisco.com>
2017-11-30  dpdk: bump to 17.11  Damjan Marion  1  -2/+2
Change-Id: I84fafa369c6f16295e1c24d9712de2229bf78a91
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-29  Revert "Dpdk: 17.08 tarball updated 11/27"  Damjan Marion  1  -4/+3
The DPDK tarball was changed by mistake and is now reverted. This reverts commit 3d786efcb087533320e89f80077127fc507cfd99.
Change-Id: I1a07b96fbc3f4fe13bb4a5c401036cae4ac5d346
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-29  Dpdk: 17.08 tarball updated 11/27  Ed Kern  1  -3/+4
The dpdk package posted on static.dpdk.org for 17.08 was updated on 11/27. This updates the checksum that is statically included in the makefile. It looks like they also changed the dir structure to add -stable. fast.dpdk.org has issues keeping its mirrors in sync, so change to static.dpdk.org for now.
Change-Id: Id81e328b07873700ae3f76e1ca819f94f26f38c8
Signed-off-by: Ed Kern <ejk@cisco.com>
2017-11-20  dpdk: add support for DPDK 17.11  Damjan Marion  1  -1/+6
Also remove DPDK 17.05 support.
Change-Id: I4f96cb3f002cd90b12d800d6904f2364d7c4e270
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-13  Reduce number of parallel builds  Damjan Marion  1  -1/+1
With the recent introduction of C++ code, the memory required for each compiler instance has increased significantly, causing build issues. Currently the build system spins up 2 compiler instances per logical CPU core. As a CPU can hardly execute more than one thread at a time, it should be pretty safe to change that formula so it doesn't multiply the number of CPU cores by 2; such a change will significantly reduce the amount of memory needed.
Change-Id: Ic829fff6e45f4caf98a6d9c1c98c53ed003039ef
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-05  dpdk: build nasm from source  Sergio Gonzalez Monroy  1  -1/+21
As not all distros have the minimum required nasm version (2.12.01) available, build nasm from source when building the Intel IPsec MB library.
Change-Id: Iaa9da87f612c0f84da5704162c3bf430b3351076
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-10-10  dpdk: patch to support bonded interface for MLX NIC  Steve Shin  1  -1/+1
At present, creating bonding devices using --vdev is broken for PMDs like mlx5, as they are neither UIO nor VFIO based and hence the PMD driver is unknown to find_port_id_by_pci_addr(). This DPDK patch fixes parsing of the PCI ID from the bonding device params by verifying it on the RTE PCI bus, rather than checking dev->kdrv.
Change-Id: If575f63ef31733102566610d769ddd212d74736a
Signed-off-by: Steve Shin <jonshin@cisco.com>
2017-10-05  dpdk/ipsec: rework plus improved cli commands  Sergio Gonzalez Monroy  1  -2/+2
This patch reworks the DPDK ipsec implementation, including the cryptodev management, and replaces the cli commands with new ones for better usability.
For the data path:
- The dpdk-esp-encrypt-post node is no longer necessary.
- IPv4 packets in the decrypt path are sent to ip4-input-no-checksum instead of ip4-input.
The DPDK cryptodev cli commands are replaced by the following new commands:
- show dpdk crypto devices
- show dpdk crypto placement [verbose]
- set dpdk crypto placement (<device> <thread> | auto)
- clear dpdk crypto placement <device> [<thread>]
- show dpdk crypto pools
Change-Id: I47324517ede82d3e6e0e9f9c71c1a3433714b27b
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-09-12  Add option to build without multi-buffer crypto.  Thomas F Herbert  1  -2/+3
JIRA VPP-498
This patch also allows RPMs to be built without multi-buffer crypto for some RPM-based downstream distros that don't have a sufficiently new nasm or don't have a USA export license for multi-buffer crypto. The default is to build WITH multi-buffer crypto for x86-64; this patch allows optional building without it. To build without multi-buffer crypto, set the AESNI environment variable to n. To build rpm packages without multi-buffer crypto, build the rpms with the option turned off:
    make build AESNI=n
or
    make pkg-rpm --without aesni
How to test this patch on a CentOS build: build as above and verify that nasm isn't executed during the build process. vpp may be installed and the dpdk plugin inspected to verify that the multi-buffer code isn't present.
Change-Id: I8c5cfd4cdd9eb2b96772a687eaa54560806e001b
Signed-off-by: Thomas F Herbert <therbert@redhat.com>
2017-09-11  Improved arm64 chip detection  Brian Brooks  1  -11/+46
Use the ARMv8 Main ID Register (exposed through /proc/cpuinfo) to identify the CPU implementer and part number. For further details, see the ARMv8 ARM, D7.2.66.
Change-Id: I2b0d0b165cda4ab9fc57c645af90e9e354b73f44
Signed-off-by: Brian Brooks <brian.brooks@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Song Zhu <song.zhu@arm.com>
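On arm64 the kernel prints the MIDR fields as "CPU implementer" and "CPU part" lines in /proc/cpuinfo; a sketch of reading them (how the Makefile then maps them to a DPDK target is not shown here):

    def arm_cpu_id():
        # Returns e.g. ('0x41', '0xd08') for an ARM Cortex-A72 core.
        info = {}
        with open('/proc/cpuinfo') as f:
            for line in f:
                key, _, value = line.partition(':')
                if key.strip() in ('CPU implementer', 'CPU part'):
                    info[key.strip()] = value.strip()
        return info.get('CPU implementer'), info.get('CPU part')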
2017-08-31  Native arm64 build: dpdk/Makefile change  Brian Brooks  1  -2/+11
With this change, the status of `make build':
  Huawei D02,      Linux 4.4.0,  gcc 5.4.1 - success
  AMD Seattle,     Linux 4.4.6,  gcc 5.3.1 - compiler ICEs
  Cavium ThunderX, Linux 4.4.49, gcc 5.4.0 - success
Before:
  Huawei D02,      Linux 4.4.0,  gcc 5.4.1 - fail
  AMD Seattle,     Linux 4.4.6,  gcc 5.3.1 - fail
  Cavium ThunderX, Linux 4.4.49, gcc 5.4.0 - success
Change-Id: I49db34a33f9ca0725c7511d4f796706892b5b2da
Signed-off-by: Brian Brooks <brian.brooks@arm.com>
2017-08-25  dpdk: bump to dpdk 17.08, remove support for dpdk 17.02  Damjan Marion  1  -3/+2
Change-Id: I674fb1212e48693939045523df085326a4dd1809
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-08-25  dpdk: required changes for 17.08  Sergio Gonzalez Monroy  1  -14/+33
DPDK 17.08 breaks the ethdev and cryptodev APIs. Address those changes while keeping backwards compatibility for DPDK 17.02 and 17.05.
Change-Id: Idd6ac264d0d047fe586c41d4c4ca74e8fc778a54
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-08-22  dpdk: define MACHINE before it is used  Damjan Marion  1  -2/+1
This fixes the build on non-x86 platforms like arm64.
Change-Id: I7ff5df92f89e34c27889d82f35924dc28cde8c39
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-08-21  dpdk: disable tun/tap PMD  Damjan Marion  1  -0/+1
Besides the fact that we don't need it, it fails to build on ARM64.
Change-Id: Iefae8bf234b588d8005df5e053b9152b6611929c
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-08-15  Previous version was still downloading, unpacking and building IPSEC / AES  Marco Varlese  1  -6/+12
The previous version was still downloading, unpacking and building the IPSEC / AES libraries. This patch addresses the misbehaviour.
Change-Id: I41f1ece3ca21c5a8f2c95533ed3d77a535233ea6
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2017-08-14  dpdk: force libdir for isa-l crypto library  Sergio Gonzalez Monroy  1  -1/+2
Depending on the OS, the default libdir might change:
  RHEL/Ubuntu: libdir={exec_prefix}/lib
  OpenSUSE:    libdir={exec_prefix}/lib64
Change-Id: I5f1672e5815ad821e6ac5fff95de5232ab735b67
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-08-14  Added MD5SUM for DPDK 17.08 tarball as a first step towards migration  Marco Varlese  1  -0/+1
Change-Id: Ic73b857c4e3d5a3f695e93924de5a5bed0af5019
Signed-off-by: Marco Varlese <marco.varlese@suse.com>
2017-08-09  dpdk: only build SW crypto for x86_64 platforms  Sergio Gonzalez Monroy  1  -3/+10
Change-Id: If559747ad59c82c81d15734f27e15548eca0962b
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-07-14  dpdk: update build  Sergio Gonzalez Monroy  1  -28/+31
Current optional DPDK PMDs are:
- AESNI MB PMD (SW crypto)
- AESNI GCM PMD (SW crypto)
- MLX4 PMD
- MLX5 PMD
This change will always build the DPDK SW crypto PMDs and required SW crypto libraries, while the MLX PMDs are still optional and the user has to build the required libraries. Now the configure script detects whether any of the optional DPDK PMDs were built and links against their required libraries/dependencies.
Change-Id: I1560bebd71035d6486483f22da90042ec2ce40a1
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-06-24  make: Fix parallel building with some container platforms (VPP-880)  Chris Luke  1  -1/+7
With some Linux container platforms /proc/cpuinfo reads as an empty file. (Aside: stat on /proc/cpuinfo always indicates a length of zero bytes, regardless of its content.) This has the effect that the make '-j' parameter is passed the unhelpful value of '0' in both build-root/Makefile and dpdk/Makefile. Make complains with the error:
    make: the '-j' option requires a positive integer argument
This patch checks for '0' and replaces it with '2' as a reasonable number of jobs to run in parallel when the CPU count isn't known (and is assumed to be one). It also makes the value determination consistent between VPP and DPDK (2*ncpu).
Change-Id: I78b89420114a825fab4d339e4f9291d486b7b9c8
Signed-off-by: Chris Luke <chrisy@flirble.org>
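The resulting job-count logic, as a sketch (the function name is illustrative; the real logic lives in shell inside the Makefiles):

    def parallel_jobs(cpuinfo_path='/proc/cpuinfo'):
        # 2 jobs per detected CPU, with a floor of 2 for containers
        # where /proc/cpuinfo reads as empty and ncpu comes out as 0.
        try:
            with open(cpuinfo_path) as f:
                ncpu = sum(1 for line in f if line.startswith('processor'))
        except IOError:
            ncpu = 0
        return 2 * ncpu if ncpu > 0 else 2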
2017-05-31  Revert "dpdk: build sw cryptodev support with make verify"  Peter Mikus  1  -9/+4
This reverts commit 0e2e10b77d63196bfb93ae5be1251bbc1a1b561a.
Change-Id: I3c1737f391b6ed127f92416f06449216e79859bb
Signed-off-by: Peter Mikus <pmikus@cisco.com>
2017-05-30  dpdk: build sw cryptodev support with make verify  Sergio Gonzalez Monroy  1  -4/+9
Change-Id: Ica95b5d3d44563c93c89b2a3233171c3aa1f048d
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-05-17  dpdk: disable 16-bit descriptors for X710/XL710  Damjan Marion  1  -2/+1
This fixes an issue with RX packet drops on a VF.
Change-Id: I8c1a35213013f8856b71e7204496f463319cbe28
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-05-15  dpdk: revert dpdk 17.05 change which causes virtio issues  Damjan Marion  1  -1/+1
The reverted patch was causing DPDK to provide a bad MAC address for legacy virtio interfaces.
Change-Id: I526cd35a38164ede80a8ab6decb9e0d1ebfad723
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-05-11  dpdk: bump to dpdk 17.05  Damjan Marion  1  -4/+5
Change-Id: I19744387859129c6b8dc104041af158bf5f1d988
Signed-off-by: Damjan Marion <damarion@cisco.com>
"o">.remote_ip4, dst=cls.dst_if.remote_ip4) / UDP(sport=1234, dport=5678) / Raw(payload)) size = packet_sizes[(i // 2) % len(packet_sizes)] cls.extend_packet(p, size, cls.padding) info.data = p @classmethod def create_fragments(cls): infos = cls._packet_infos cls.pkt_infos = [] for index, info in infos.iteritems(): p = info.data # cls.logger.debug(ppp("Packet:", p.__class__(str(p)))) fragments_400 = fragment_rfc791(p, 400) fragments_300 = fragment_rfc791(p, 300) fragments_200 = [ x for f in fragments_400 for x in fragment_rfc791(f, 200)] cls.pkt_infos.append( (index, fragments_400, fragments_300, fragments_200)) cls.fragments_400 = [ x for (_, frags, _, _) in cls.pkt_infos for x in frags] cls.fragments_300 = [ x for (_, _, frags, _) in cls.pkt_infos for x in frags] cls.fragments_200 = [ x for (_, _, _, frags) in cls.pkt_infos for x in frags] cls.logger.debug("Fragmented %s packets into %s 400-byte fragments, " "%s 300-byte fragments and %s 200-byte fragments" % (len(infos), len(cls.fragments_400), len(cls.fragments_300), len(cls.fragments_200))) def verify_capture(self, capture, dropped_packet_indexes=[]): """Verify captured packet stream. :param list capture: Captured packet stream. """ info = None seen = set() for packet in capture: try: self.logger.debug(ppp("Got packet:", packet)) ip = packet[IP] udp = packet[UDP] payload_info = self.payload_to_info(str(packet[Raw])) packet_index = payload_info.index self.assertTrue( packet_index not in dropped_packet_indexes, ppp("Packet received, but should be dropped:", packet)) if packet_index in seen: raise Exception(ppp("Duplicate packet received", packet)) seen.add(packet_index) self.assertEqual(payload_info.dst, self.src_if.sw_if_index) info = self._packet_infos[packet_index] self.assertTrue(info is not None) self.assertEqual(packet_index, info.index) saved_packet = info.data self.assertEqual(ip.src, saved_packet[IP].src) self.assertEqual(ip.dst, saved_packet[IP].dst) self.assertEqual(udp.payload, saved_packet[UDP].payload) except Exception: self.logger.error(ppp("Unexpected or invalid packet:", packet)) raise for index in self._packet_infos: self.assertTrue(index in seen or index in dropped_packet_indexes, "Packet with packet_index %d not received" % index) def test_reassembly(self): """ basic reassembly """ self.pg_enable_capture() self.src_if.add_stream(self.fragments_200) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(self.fragments_200) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_reversed(self): """ reverse order reassembly """ fragments = list(self.fragments_200) fragments.reverse() self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.packet_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.packet_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_random(self): """ random order reassembly """ fragments = list(self.fragments_200) shuffle(fragments) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.packet_infos)) 
self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.packet_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_duplicates(self): """ duplicate fragments """ fragments = [ x for (_, frags, _, _) in self.pkt_infos for x in frags for _ in range(0, min(2, len(frags))) ] self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_overlap1(self): """ overlapping fragments case #1 """ fragments = [] for _, _, frags_300, frags_200 in self.pkt_infos: if len(frags_300) == 1: fragments.extend(frags_300) else: for i, j in zip(frags_200, frags_300): fragments.extend(i) fragments.extend(j) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_overlap2(self): """ overlapping fragments case #2 """ fragments = [] for _, _, frags_300, frags_200 in self.pkt_infos: if len(frags_300) == 1: fragments.extend(frags_300) else: # care must be taken here so that there are no fragments # received by vpp after reassembly is finished, otherwise # new reassemblies will be started and packet generator will # freak out when it detects unfreed buffers zipped = zip(frags_300, frags_200) for i, j in zipped[:-1]: fragments.extend(i) fragments.extend(j) fragments.append(zipped[-1][0]) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_timeout_inline(self): """ timeout (inline) """ dropped_packet_indexes = set( index for (index, frags, _, _) in self.pkt_infos if len(frags) > 1 ) self.vapi.ip_reassembly_set(timeout_ms=0, max_reassemblies=1000, expire_walk_interval_ms=10000) self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) self.src_if.assert_nothing_captured() def test_timeout_cleanup(self): """ timeout (cleanup) """ # whole packets + fragmented packets sans last fragment fragments = [ x for (_, frags_400, _, _) in self.pkt_infos for x in frags_400[:-1 if len(frags_400) > 1 else None] ] # last fragments for fragmented packets fragments2 = [frags_400[-1] for (_, frags_400, _, _) in self.pkt_infos if len(frags_400) > 1] dropped_packet_indexes = set( index for (index, frags_400, _, _) in self.pkt_infos if len(frags_400) > 1) self.vapi.ip_reassembly_set(timeout_ms=100, max_reassemblies=1000, expire_walk_interval_ms=50) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() self.sleep(.25, "wait before sending rest of fragments") 
self.src_if.add_stream(fragments2) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) self.src_if.assert_nothing_captured() def test_disabled(self): """ reassembly disabled """ dropped_packet_indexes = set( index for (index, frags_400, _, _) in self.pkt_infos if len(frags_400) > 1) self.vapi.ip_reassembly_set(timeout_ms=1000, max_reassemblies=0, expire_walk_interval_ms=10000) self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) self.src_if.assert_nothing_captured() class TestIPv6Reassembly(VppTestCase): """ IPv6 Reassembly """ @classmethod def setUpClass(cls): super(TestIPv6Reassembly, cls).setUpClass() cls.create_pg_interfaces([0, 1]) cls.src_if = cls.pg0 cls.dst_if = cls.pg1 # setup all interfaces for i in cls.pg_interfaces: i.admin_up() i.config_ip6() i.resolve_ndp() # packet sizes cls.packet_sizes = [64, 512, 1518, 9018] cls.padding = " abcdefghijklmn" cls.create_stream(cls.packet_sizes) cls.create_fragments() def setUp(self): """ Test setup - force timeout on existing reassemblies """ super(TestIPv6Reassembly, self).setUp() self.vapi.ip_reassembly_enable_disable( sw_if_index=self.src_if.sw_if_index, enable_ip6=True) self.vapi.ip_reassembly_set(timeout_ms=0, max_reassemblies=1000, expire_walk_interval_ms=10, is_ip6=1) self.sleep(.25) self.vapi.ip_reassembly_set(timeout_ms=1000000, max_reassemblies=1000, expire_walk_interval_ms=10000, is_ip6=1) self.logger.debug(self.vapi.ppcli("show ip6-reassembly details")) def tearDown(self): super(TestIPv6Reassembly, self).tearDown() self.logger.debug(self.vapi.ppcli("show ip6-reassembly details")) @classmethod def create_stream(cls, packet_sizes, packet_count=test_packet_count): """Create input packet stream for defined interface. :param list packet_sizes: Required packet sizes. """ for i in range(0, packet_count): info = cls.create_packet_info(cls.src_if, cls.src_if) payload = cls.info_to_payload(info) p = (Ether(dst=cls.src_if.local_mac, src=cls.src_if.remote_mac) / IPv6(src=cls.src_if.remote_ip6, dst=cls.dst_if.remote_ip6) / UDP(sport=1234, dport=5678) / Raw(payload)) size = packet_sizes[(i // 2) % len(packet_sizes)] cls.extend_packet(p, size, cls.padding) info.data = p @classmethod def create_fragments(cls): infos = cls._packet_infos cls.pkt_infos = [] for index, info in infos.iteritems(): p = info.data # cls.logger.debug(ppp("Packet:", p.__class__(str(p)))) fragments_400 = fragment_rfc8200(p, info.index, 400) fragments_300 = fragment_rfc8200(p, info.index, 300) cls.pkt_infos.append((index, fragments_400, fragments_300)) cls.fragments_400 = [ x for _, frags, _ in cls.pkt_infos for x in frags] cls.fragments_300 = [ x for _, _, frags in cls.pkt_infos for x in frags] cls.logger.debug("Fragmented %s packets into %s 400-byte fragments, " "and %s 300-byte fragments" % (len(infos), len(cls.fragments_400), len(cls.fragments_300))) def verify_capture(self, capture, dropped_packet_indexes=[]): """Verify captured packet strea . :param list capture: Captured packet stream. 
""" info = None seen = set() for packet in capture: try: self.logger.debug(ppp("Got packet:", packet)) ip = packet[IPv6] udp = packet[UDP] payload_info = self.payload_to_info(str(packet[Raw])) packet_index = payload_info.index self.assertTrue( packet_index not in dropped_packet_indexes, ppp("Packet received, but should be dropped:", packet)) if packet_index in seen: raise Exception(ppp("Duplicate packet received", packet)) seen.add(packet_index) self.assertEqual(payload_info.dst, self.src_if.sw_if_index) info = self._packet_infos[packet_index] self.assertTrue(info is not None) self.assertEqual(packet_index, info.index) saved_packet = info.data self.assertEqual(ip.src, saved_packet[IPv6].src) self.assertEqual(ip.dst, saved_packet[IPv6].dst) self.assertEqual(udp.payload, saved_packet[UDP].payload) except Exception: self.logger.error(ppp("Unexpected or invalid packet:", packet)) raise for index in self._packet_infos: self.assertTrue(index in seen or index in dropped_packet_indexes, "Packet with packet_index %d not received" % index) def test_reassembly(self): """ basic reassembly """ self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_reversed(self): """ reverse order reassembly """ fragments = list(self.fragments_400) fragments.reverse() self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_random(self): """ random order reassembly """ fragments = list(self.fragments_400) shuffle(fragments) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() # run it all again to verify correctness self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_duplicates(self): """ duplicate fragments """ fragments = [ x for (_, frags, _) in self.pkt_infos for x in frags for _ in range(0, min(2, len(frags))) ] self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture(len(self.pkt_infos)) self.verify_capture(packets) self.src_if.assert_nothing_captured() def test_overlap1(self): """ overlapping fragments case #1 """ fragments = [] for _, frags_400, frags_300 in self.pkt_infos: if len(frags_300) == 1: fragments.extend(frags_400) else: for i, j in zip(frags_300, frags_400): fragments.extend(i) fragments.extend(j) dropped_packet_indexes = set( index for (index, _, frags) in self.pkt_infos if len(frags) > 1 ) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, 
dropped_packet_indexes) self.src_if.assert_nothing_captured() def test_overlap2(self): """ overlapping fragments case #2 """ fragments = [] for _, frags_400, frags_300 in self.pkt_infos: if len(frags_400) == 1: fragments.extend(frags_400) else: # care must be taken here so that there are no fragments # received by vpp after reassembly is finished, otherwise # new reassemblies will be started and packet generator will # freak out when it detects unfreed buffers zipped = zip(frags_400, frags_300) for i, j in zipped[:-1]: fragments.extend(i) fragments.extend(j) fragments.append(zipped[-1][0]) dropped_packet_indexes = set( index for (index, _, frags) in self.pkt_infos if len(frags) > 1 ) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) self.src_if.assert_nothing_captured() def test_timeout_inline(self): """ timeout (inline) """ dropped_packet_indexes = set( index for (index, frags, _) in self.pkt_infos if len(frags) > 1 ) self.vapi.ip_reassembly_set(timeout_ms=0, max_reassemblies=1000, expire_walk_interval_ms=10000, is_ip6=1) self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) pkts = self.src_if.get_capture( expected_count=len(dropped_packet_indexes)) for icmp in pkts: self.assertIn(ICMPv6TimeExceeded, icmp) self.assertIn(IPv6ExtHdrFragment, icmp) self.assertIn(icmp[IPv6ExtHdrFragment].id, dropped_packet_indexes) dropped_packet_indexes.remove(icmp[IPv6ExtHdrFragment].id) def test_timeout_cleanup(self): """ timeout (cleanup) """ # whole packets + fragmented packets sans last fragment fragments = [ x for (_, frags_400, _) in self.pkt_infos for x in frags_400[:-1 if len(frags_400) > 1 else None] ] # last fragments for fragmented packets fragments2 = [frags_400[-1] for (_, frags_400, _) in self.pkt_infos if len(frags_400) > 1] dropped_packet_indexes = set( index for (index, frags_400, _) in self.pkt_infos if len(frags_400) > 1) self.vapi.ip_reassembly_set(timeout_ms=100, max_reassemblies=1000, expire_walk_interval_ms=50) self.vapi.ip_reassembly_set(timeout_ms=100, max_reassemblies=1000, expire_walk_interval_ms=50, is_ip6=1) self.pg_enable_capture() self.src_if.add_stream(fragments) self.pg_start() self.sleep(.25, "wait before sending rest of fragments") self.src_if.add_stream(fragments2) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) pkts = self.src_if.get_capture( expected_count=len(dropped_packet_indexes)) for icmp in pkts: self.assertIn(ICMPv6TimeExceeded, icmp) self.assertIn(IPv6ExtHdrFragment, icmp) self.assertIn(icmp[IPv6ExtHdrFragment].id, dropped_packet_indexes) dropped_packet_indexes.remove(icmp[IPv6ExtHdrFragment].id) def test_disabled(self): """ reassembly disabled """ dropped_packet_indexes = set( index for (index, frags_400, _) in self.pkt_infos if len(frags_400) > 1) self.vapi.ip_reassembly_set(timeout_ms=1000, max_reassemblies=0, expire_walk_interval_ms=10000, is_ip6=1) self.pg_enable_capture() self.src_if.add_stream(self.fragments_400) self.pg_start() packets = self.dst_if.get_capture( len(self.pkt_infos) - len(dropped_packet_indexes)) self.verify_capture(packets, dropped_packet_indexes) self.src_if.assert_nothing_captured() def 
test_missing_upper(self): """ missing upper layer """ p = (Ether(dst=self.src_if.local_mac, src=self.src_if.remote_mac) / IPv6(src=self.src_if.remote_ip6, dst=self.src_if.local_ip6) / UDP(sport=1234, dport=5678) / Raw()) self.extend_packet(p, 1000, self.padding) fragments = fragment_rfc8200(p, 1, 500) bad_fragment = p.__class__(str(fragments[1])) bad_fragment[IPv6ExtHdrFragment].nh = 59 bad_fragment[IPv6ExtHdrFragment].offset = 0 self.pg_enable_capture() self.src_if.add_stream([bad_fragment]) self.pg_start() pkts = self.src_if.get_capture(expected_count=1) icmp = pkts[0] self.assertIn(ICMPv6ParamProblem, icmp) self.assert_equal(icmp[ICMPv6ParamProblem].code, 3, "ICMP code") def test_invalid_frag_size(self): """ fragment size not a multiple of 8 """ p = (Ether(dst=self.src_if.local_mac, src=self.src_if.remote_mac) / IPv6(src=self.src_if.remote_ip6, dst=self.src_if.local_ip6) / UDP(sport=1234, dport=5678) / Raw()) self.extend_packet(p, 1000, self.padding) fragments = fragment_rfc8200(p, 1, 500) bad_fragment = fragments[0] self.extend_packet(bad_fragment, len(bad_fragment) + 5) self.pg_enable_capture() self.src_if.add_stream([bad_fragment]) self.pg_start() pkts = self.src_if.get_capture(expected_count=1) icmp = pkts[0] self.assertIn(ICMPv6ParamProblem, icmp) self.assert_equal(icmp[ICMPv6ParamProblem].code, 0, "ICMP code") def test_invalid_packet_size(self): """ total packet size > 65535 """ p = (Ether(dst=self.src_if.local_mac, src=self.src_if.remote_mac) / IPv6(src=self.src_if.remote_ip6, dst=self.src_if.local_ip6) / UDP(sport=1234, dport=5678) / Raw()) self.extend_packet(p, 1000, self.padding) fragments = fragment_rfc8200(p, 1, 500) bad_fragment = fragments[1] bad_fragment[IPv6ExtHdrFragment].offset = 65500 self.pg_enable_capture() self.src_if.add_stream([bad_fragment]) self.pg_start() pkts = self.src_if.get_capture(expected_count=1) icmp = pkts[0] self.assertIn(ICMPv6ParamProblem, icmp) self.assert_equal(icmp[ICMPv6ParamProblem].code, 0, "ICMP code") class TestFIFReassembly(VppTestCase): """ Fragments in fragments reassembly """ @classmethod def setUpClass(cls): super(TestFIFReassembly, cls).setUpClass() cls.create_pg_interfaces([0, 1]) cls.src_if = cls.pg0 cls.dst_if = cls.pg1 for i in cls.pg_interfaces: i.admin_up() i.config_ip4() i.resolve_arp() i.config_ip6() i.resolve_ndp() cls.packet_sizes = [64, 512, 1518, 9018] cls.padding = " abcdefghijklmn" def setUp(self): """ Test setup - force timeout on existing reassemblies """ super(TestFIFReassembly, self).setUp() self.vapi.ip_reassembly_enable_disable( sw_if_index=self.src_if.sw_if_index, enable_ip4=True, enable_ip6=True) self.vapi.ip_reassembly_enable_disable( sw_if_index=self.dst_if.sw_if_index, enable_ip4=True, enable_ip6=True) self.vapi.ip_reassembly_set(timeout_ms=0, max_reassemblies=1000, expire_walk_interval_ms=10) self.vapi.ip_reassembly_set(timeout_ms=0, max_reassemblies=1000, expire_walk_interval_ms=10, is_ip6=1) self.sleep(.25) self.vapi.ip_reassembly_set(timeout_ms=1000000, max_reassemblies=1000, expire_walk_interval_ms=10000) self.vapi.ip_reassembly_set(timeout_ms=1000000, max_reassemblies=1000, expire_walk_interval_ms=10000, is_ip6=1) def tearDown(self): self.logger.debug(self.vapi.ppcli("show ip4-reassembly details")) self.logger.debug(self.vapi.ppcli("show ip6-reassembly details")) super(TestFIFReassembly, self).tearDown() def verify_capture(self, capture, ip_class, dropped_packet_indexes=[]): """Verify captured packet stream. :param list capture: Captured packet stream. 
""" info = None seen = set() for packet in capture: try: self.logger.debug(ppp("Got packet:", packet)) ip = packet[ip_class] udp = packet[UDP] payload_info = self.payload_to_info(str(packet[Raw])) packet_index = payload_info.index self.assertTrue( packet_index not in dropped_packet_indexes, ppp("Packet received, but should be dropped:", packet)) if packet_index in seen: raise Exception(ppp("Duplicate packet received", packet)) seen.add(packet_index) self.assertEqual(payload_info.dst, self.dst_if.sw_if_index) info = self._packet_infos[packet_index] self.assertTrue(info is not None) self.assertEqual(packet_index, info.index) saved_packet = info.data self.assertEqual(ip.src, saved_packet[ip_class].src) self.assertEqual(ip.dst, saved_packet[ip_class].dst) self.assertEqual(udp.payload, saved_packet[UDP].payload) except Exception: self.logger.error(ppp("Unexpected or invalid packet:", packet)) raise for index in self._packet_infos: self.assertTrue(index in seen or index in dropped_packet_indexes, "Packet with packet_index %d not received" % index) def test_fif4(self): """ Fragments in fragments (4o4) """ # TODO this should be ideally in setUpClass, but then we hit a bug # with VppIpRoute incorrectly reporting it's present when it's not # so we need to manually remove the vpp config, thus we cannot have # it shared for multiple test cases self.tun_ip4 = "1.1.1.2" self.gre4 = VppGreInterface(self, self.src_if.local_ip4, self.tun_ip4) self.gre4.add_vpp_config() self.gre4.admin_up() self.gre4.config_ip4() self.vapi.ip_reassembly_enable_disable( sw_if_index=self.gre4.sw_if_index, enable_ip4=True) self.route4 = VppIpRoute(self, self.tun_ip4, 32, [VppRoutePath(self.src_if.remote_ip4, self.src_if.sw_if_index)]) self.route4.add_vpp_config() self.reset_packet_infos() for i in range(test_packet_count): info = self.create_packet_info(self.src_if, self.dst_if) payload = self.info_to_payload(info) # Ethernet header here is only for size calculation, thus it # doesn't matter how it's initialized. 
This is to ensure that # reassembled packet is not > 9000 bytes, so that it's not dropped p = (Ether() / IP(id=i, src=self.src_if.remote_ip4, dst=self.dst_if.remote_ip4) / UDP(sport=1234, dport=5678) / Raw(payload)) size = self.packet_sizes[(i // 2) % len(self.packet_sizes)] self.extend_packet(p, size, self.padding) info.data = p[IP] # use only IP part, without ethernet header fragments = [x for _, p in self._packet_infos.iteritems() for x in fragment_rfc791(p.data, 400)] encapped_fragments = \ [Ether(dst=self.src_if.local_mac, src=self.src_if.remote_mac) / IP(src=self.tun_ip4, dst=self.src_if.local_ip4) / GRE() / p for p in fragments] fragmented_encapped_fragments = \ [x for p in encapped_fragments for x in fragment_rfc791(p, 200)] self.src_if.add_stream(fragmented_encapped_fragments) self.pg_enable_capture(self.pg_interfaces) self.pg_start() self.src_if.assert_nothing_captured() packets = self.dst_if.get_capture(len(self._packet_infos)) self.verify_capture(packets, IP) # TODO remove gre vpp config by hand until VppIpRoute gets fixed # so that it's query_vpp_config() works as it should self.gre4.remove_vpp_config() self.logger.debug(self.vapi.ppcli("show interface")) def test_fif6(self): """ Fragments in fragments (6o6) """ # TODO this should be ideally in setUpClass, but then we hit a bug # with VppIpRoute incorrectly reporting it's present when it's not # so we need to manually remove the vpp config, thus we cannot have # it shared for multiple test cases self.tun_ip6 = "1002::1" self.gre6 = VppGre6Interface(self, self.src_if.local_ip6, self.tun_ip6) self.gre6.add_vpp_config() self.gre6.admin_up() self.gre6.config_ip6() self.vapi.ip_reassembly_enable_disable( sw_if_index=self.gre6.sw_if_index, enable_ip6=True) self.route6 = VppIpRoute(self, self.tun_ip6, 128, [VppRoutePath(self.src_if.remote_ip6, self.src_if.sw_if_index, proto=DpoProto.DPO_PROTO_IP6)], is_ip6=1) self.route6.add_vpp_config() self.reset_packet_infos() for i in range(test_packet_count): info = self.create_packet_info(self.src_if, self.dst_if) payload = self.info_to_payload(info) # Ethernet header here is only for size calculation, thus it # doesn't matter how it's initialized. This is to ensure that # reassembled packet is not > 9000 bytes, so that it's not dropped p = (Ether() / IPv6(src=self.src_if.remote_ip6, dst=self.dst_if.remote_ip6) / UDP(sport=1234, dport=5678) / Raw(payload)) size = self.packet_sizes[(i // 2) % len(self.packet_sizes)] self.extend_packet(p, size, self.padding) info.data = p[IPv6] # use only IPv6 part, without ethernet header fragments = [x for _, i in self._packet_infos.iteritems() for x in fragment_rfc8200( i.data, i.index, 400)] encapped_fragments = \ [Ether(dst=self.src_if.local_mac, src=self.src_if.remote_mac) / IPv6(src=self.tun_ip6, dst=self.src_if.local_ip6) / GRE() / p for p in fragments] fragmented_encapped_fragments = \ [x for p in encapped_fragments for x in ( fragment_rfc8200( p, 2 * len(self._packet_infos) + p[IPv6ExtHdrFragment].id, 200) if IPv6ExtHdrFragment in p else [p] ) ] self.src_if.add_stream(fragmented_encapped_fragments) self.pg_enable_capture(self.pg_interfaces) self.pg_start() self.src_if.assert_nothing_captured() packets = self.dst_if.get_capture(len(self._packet_infos)) self.verify_capture(packets, IPv6) # TODO remove gre vpp config by hand until VppIpRoute gets fixed # so that it's query_vpp_config() works as it should self.gre6.remove_vpp_config() if __name__ == '__main__': unittest.main(testRunner=VppTestRunner)