path: root/src/plugins/dpdk
2018-10-22  vppinfra: use log2 page size in hugepage functions  (Damjan Marion, 1 file, -1/+1)
Change-Id: Ibec32c6df32f4cd9889d378e244f170c93ad295b Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-10-22  Fix dereferencing null string in dpdk_early_init  (Juraj Sloboda, 1 file, -0/+1)
Change-Id: Iffba7ebe5af8fadc0251f3a10022739d45f394ce Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-10-22  ipsec: split ipsec nodes into ip4/ip6 nodes  (Klement Sekera, 3 files, -71/+254)
Change-Id: Ic6b27659f1fe9e8df39e80a0441305e4e952195a Signed-off-by: Klement Sekera <ksekera@cisco.com>
2018-10-22  dpdk: use rte_vfio_dma_map API instead of digging for vfio fd  (Damjan Marion, 1 file, -63/+17)
This is a new API introduced in DPDK 18.08. Change-Id: I66d49fe54a6abf2af621b597e8d4535e29cecec4 Signed-off-by: Damjan Marion <damarion@cisco.com>
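A minimal sketch of the direction of this change, assuming the DPDK 18.08 prototype int rte_vfio_dma_map(uint64_t vaddr, uint64_t iova, uint64_t len); the surrounding variable names are illustrative, not the actual VPP code:

    /* map a region for device DMA through the public API instead of
       locating the VFIO container fd and issuing ioctls by hand */
    if (rte_vfio_dma_map ((uint64_t) (uintptr_t) va, iova, size) < 0)
      clib_warning ("rte_vfio_dma_map failed for va %p, size %lu", va, size);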
2018-10-20  dpdk: add netvsc PMD  (Stephen Hemminger, 3 files, -1/+40)
Teach DPDK plugin about the netvsc Poll Mode Driver. The speed of the Netvsc device matches the speed of the external port on the underlying vswitch. Therefore 1G, 10G, 25G, 56G and even 100G are possible. Change-Id: I14ab6907b7d8d350b63a083409d45fb9c348a364 Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2018-10-20  dpdk: use rx/tx_offload_names to format  (Stephen Hemminger, 1 file, -38/+53)
Rather than keeping our own list of offload capabilities, use the function in 18.08 or later to decode the value. Also, introduce a formatter to convert to lower case because the DPDK API returns upper-case names. Change-Id: I87546fa2bec67f8a8b44288f5994514114cb6faf Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
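A minimal sketch of the decoding described above, assuming the rte_eth_dev_rx_offload_name() helper available since DPDK 18.05; the lower-casing below is illustrative, the actual patch adds a VPP format_*() helper:

    #include <ctype.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_rx_offloads (uint64_t offloads)
    {
      for (uint64_t bit = 1; bit; bit <<= 1)
        if (offloads & bit)
          {
            /* DPDK returns upper-case names such as "IPV4_CKSUM" */
            const char *name = rte_eth_dev_rx_offload_name (bit);
            for (const char *p = name; *p; p++)
              putchar (tolower ((unsigned char) *p));
            putchar (' ');
          }
    }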
2018-10-19  vppinfra: add atomic macros for __sync builtins  (Sirshak Das, 1 file, -2/+2)
This is the first part of the addition of atomic macros, covering only the macros for __sync builtins.
- Based on earlier patch by Damjan (https://gerrit.fd.io/r/#/c/10729/)
Additionally:
- clib_atomic_release macro added and used in the absence of any memory barrier.
- clib_atomic_bool_cmp_and_swap added
Change-Id: Ie4e48c1e184a652018d1d0d87c4be80ddd180a3b Original-patch-by: Damjan Marion <damarion@cisco.com> Signed-off-by: Sirshak Das <sirshak.das@arm.com> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com> Reviewed-by: Steve Capper <steve.capper@arm.com>
2018-10-16  Fix coverity issue for potential overflow of page size  (Haiyang Tan, 1 file, -3/+3)
Change-Id: I2779626d745badb63386efcf729da7a094a4f297 Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
2018-10-15  dpdk: only look at PCI information on PCI devices  (Stephen Hemminger, 5 files, -8/+26)
The rte_device is used as a base type for all DPDK devices. It is not valid to use container_of to find PCI information unless the bus of the rte_device is pci. Otherwise, the pointer is looking at some other data, which may or may not be zero. This change introduces a helper function to get the rte_pci_device pointer. If the device is on a PCI bus it returns a pointer to the rte_pci_device info; otherwise it returns NULL. Change-Id: Ia7446006bb93a7a54844969f3b3dd3b918890dfd Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
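A minimal sketch of such a helper, assuming DPDK 18.08 headers (RTE_DEV_TO_PCI from rte_bus_pci.h); the function name is illustrative:

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_bus_pci.h>

    static struct rte_pci_device *
    dpdk_get_pci_device (const struct rte_eth_dev_info *info)
    {
      /* only container_of() into rte_pci_device when the bus really is PCI */
      if (info->device && info->device->bus &&
          !strcmp (info->device->bus->name, "pci"))
        return RTE_DEV_TO_PCI (info->device);
      return NULL;
    }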
2018-10-12  dpdk: support passing log-level  (Stephen Hemminger, 1 file, -1/+2)
Since DPDK 17.05, DPDK logging supports per subsystem dynamic logging. Allow passing this as log-level on EAL command line. dpdk { log-level pmd.net.virtio.*:debug ... } Change-Id: If9576c11aba390a5cd2740fc1c9da5768689bd74 Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2018-10-10  vppinfra: introduce clib_mem_vm_ext_free() to avoid fd leaks  (Haiyang Tan, 1 file, -1/+1)
Change-Id: I8691a10493d159a97574550c111f07722960a7cd Signed-off-by: Haiyang Tan <haiyangtan@tencent.com>
2018-10-01  thread: Add show threads api  (Mohsin Kazmi, 2 files, -2/+2)
Change-Id: I3124238ab4d43bcef5590bad33a4ff0b5d8b7d15 Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-09-27  dpdk_plugin: fix mlx5 build and runtime issues  (Sirshak Das, 1 file, -2/+10)
There are issues with VPP finding and linking the mlx5 shared glue library, which was built by default if mlx5 was enabled. Runtime errors this patch fixes:
    net_mlx5: cannot load glue library: librte_pmd_mlx5_glue.so.18.05.0: cannot open shared object file: No such file or directory
    net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
This patch introduces an additional config parameter to disable glue library building and instead statically link the ibverbs and mlx5 libraries to the PMD and dpdk_plugin. Change-Id: I0b2f67652a57854c778e991780903fb15706ace8 Signed-off-by: Sirshak Das <sirshak.das@arm.com> Reviewed-by: Lijian Zhang <Lijian.Zhang@arm.com>
2018-09-26  dpdk: fix QSFP+ module info  (Damjan Marion, 1 file, -1/+2)
Change-Id: I89c7df778e66a5d2147190dc99445405d81964e5 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-25  dpdk: show pluggable info in 'show hardware'  (Damjan Marion, 1 file, -0/+28)
Example output:
    module: id SFP/SFP+/SFP28, compatibility: 40g_active_cable
    vendor: Amphenol, part NDCCGF-I202
    revision: C, serial: APF1711202351C, date code: 170318
    cable length: 2m
Change-Id: Ife35607b4f078f7b56737fe066ad4cbd247a7504 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-24  Trivial: Clean up some typos.  (Paul Vinciguerra, 8 files, -21/+21)
Change-Id: I085615fde1f966490f30ed5d32017b8b088cfd59 Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
2018-09-19  dpdk: mask and warn if rx/tx offload is not available  (Damjan Marion, 4 files, -94/+115)
A warning message is displayed in the system log:
    vpp# show log
    1970/ 1/ 1 01:00:01:198 warn dpdk unsupported rx offloads requested on port 0: scatter
Change-Id: I40021066daf2d37ca5233e3adce55e412f0d3932 Signed-off-by: Damjan Marion <damarion@cisco.com>
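The general shape of the mask-and-warn logic, sketched under the assumption of DPDK 18.08 field names (rxmode.offloads, rx_offload_capa); the log macro stands in for the plugin's own logging:

    uint64_t requested = conf.rxmode.offloads;
    uint64_t unsupported = requested & ~dev_info.rx_offload_capa;

    if (unsupported)
      {
        /* warn, then mask out the bits the device cannot do instead of
           letting rte_eth_dev_configure() fail */
        dpdk_log_warn ("unsupported rx offloads requested on port %u: 0x%lx",
                       port_id, unsupported);
        conf.rxmode.offloads = requested & dev_info.rx_offload_capa;
      }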
2018-09-18  disable scatter/gather for ENA with DPDK 18.08  (Matthew Smith, 1 file, -1/+5)
The scatter/gather rxmode flag was set for ENA when building against DPDK >= 18.08. ENA does not support this, so disable it. It looks like enabling it was a copy/paste error. Also, after offloads are adjusted based on whether "no-multi-seg" is set, those configurations are overwritten by copying port_conf_template over the port config. That should only happen for versions of DPDK older than 18.08, because 18.08 and newer make changes directly on the port config instead of making changes to the template. Make the clib_memcpy() conditional on the DPDK version being less than 18.08. After doing so, compiler errors complain about port_conf_template being declared but not used, so make its declaration conditional. Change-Id: If81980d71c379a565b51dd700b953f8c811a8703 Signed-off-by: Matthew Smith <mgsmith@netgate.com>
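A sketch of the version guard described above, using RTE_VERSION_NUM from rte_version.h; xd->port_conf and port_conf_template follow the names used in the message and are otherwise illustrative:

    #include <rte_version.h>

    #if RTE_VERSION < RTE_VERSION_NUM(18, 8, 0, 0)
      /* pre-18.08 code built a template and copied it over the port config;
         on 18.08+ the port config is adjusted directly, so the copy (and the
         template declaration) are compiled out */
      clib_memcpy (&xd->port_conf, &port_conf_template,
                   sizeof (port_conf_template));
    #endif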
2018-09-14  dpdk: add detection of mellanox PMDs  (Damjan Marion, 1 file, -0/+10)
Change-Id: I523fc489f5e73ba726ab0711eab3fdde53dc35e8 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-12  Always use 'lib' instead of 'lib64'  (Damjan Marion, 1 file, -3/+2)
It is the packaging's responsibility to put libs in the right place. Use of lib64 resulted in a huge number of files with hardcoded lib64. This patch simplifies things... Change-Id: Iab0dea0583e480907732c5d2379eb951a00fa9e6 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-12  device flags will be set in dpdk_update_link_state  (Khers, 1 file, -2/+0)
Change-Id: If74acb0168bed2201d2a8b47bf3f860540d1574b Signed-off-by: Khers <s3m2e1.6star@gmail.com>
2018-09-10  dpdk: clean interface link information on admin down / stop  (Damjan Marion, 1 file, -0/+1)
Change-Id: Ie68814c8afc6cd67eb75da0b95dffa7b404cb7ba Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-09-10  dpdk-plugin: do not request SCTP offload, some cards do not support it while supporting TCP/UDP  (Andrew Yourtchenko, 1 file, -2/+2)
The DPDK plugin sets all of the offload flags, which may cause an initialization failure on the NICs that do not support SCTP offload. The VPP code does not deal with the SCTP offload at the moment at all, so after discussing with Damjan, we agreed the best approach to fix the issue is to not request the SCTP offload. The output of "show hardware" for the NIC in question before this patch:
    Name                  Idx   Link  Hardware
    GigabitEthernet1/0/0  1     down  GigabitEthernet1/0/0
      Ethernet address 00:e0:67:09:90:4b
      Intel 82540EM (e1000)
        carrier down
        flags: pmd pmd-init-fail maybe-multiseg tx-offload intel-phdr-cksum
        rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
        cpu socket 0
      Errors:
        rte_eth_dev_configure[port:0, errno:-22]: Unknown error -22
And the excerpt from "show log":
    1970/ 1/ 1 00:00:00:739 notice dpdk Ethdev port_id=0 requested Tx offloads 0x1c doesn't match Tx offloads capabilities 0xf in rte_eth_dev_configure()
Change-Id: I159d65c02fc3f044441972205f1f0ac08e52050c Signed-off-by: Andrew Yourtchenko <ayourtch@gmail.com>
2018-09-08  L2 BVI/FIB: Update L2 FIB table when BVI's MAC changes  (Neale Ranns, 1 file, -1/+2)
also some moving of l2 headers to reduce dependencies Change-Id: I7a700a411a91451ef13fd65f9c90de2432b793bb Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-09-04  Fixed showing negative count in stats show CLI  (Naoyuki Mori, 1 file, -2/+2)
Some counters (bytes, pkts) are formatted as signed instead of unsigned in "show hardware-interfaces" and "show lb". These stats counters are declared as u64. Change-Id: Id1b588188bff4e36402beb8d07f779e9a5193956 Signed-off-by: Naoyuki Mori <naoyuki.mori@intel.com>
2018-09-03  Deprecate old buffer replication scheme  (Damjan Marion, 4 files, -137/+3)
Change-Id: I1f54b994425c58776e1445c8d9fe142e7a644d3d Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-30  cmake: a bit of packaging work  (Damjan Marion, 1 file, -0/+3)
Change-Id: I40332c2348c4aab873d726532f2ac3c4abde7ec9 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-27  cmake: Fix plugins .h includes  (Mohsin Kazmi, 1 file, -0/+5)
Change-Id: I90600d000afb02e8969f3c01bcf9e4b5c10a7d39 Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-08-27  cmake: add missing vat plugins  (Damjan Marion, 1 file, -0/+3)
Change-Id: Ib61f0299c17c0f021408ab0a44c5b54f55f8a8ec Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-25  cmake: improve add_vpp_plugin macro  (Damjan Marion, 1 file, -15/+19)
Change-Id: Iffd5c45ab242a919592a1f686f7f880936b68a1a Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-21  VPP-1268: Add option for memory channels DPDK uses  (Jessica Tallon, 1 file, -0/+2)
This adds a `num-mem-channels` option for DPDK enabling support for choosing the number of memory channels used. Change-Id: I1663dd866ac60592b6dd02261af66d87c64acdb1 Signed-off-by: Jessica Tallon <tsyesika@igalia.com>
2018-08-18  cmake: highlight warning and error messages  (Damjan Marion, 1 file, -3/+3)
Change-Id: Id4b73368382b5e78c138987fe092429af5cb0afd Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-18  cmake: DPDK rte_config.h parsing  (Damjan Marion, 1 file, -12/+36)
Change-Id: I53cad8e7787a132a5d6bacd5fda3fe67b7d59b44 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-17  CMake as an alternative to autotools (experimental)  (Damjan Marion, 1 file, -0/+100)
Change-Id: Ibc59323e849810531dd0963e85493efad3b86857 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-16  dpdk: fix rss hash function handling  (Damjan Marion, 3 files, -6/+23)
DPDK 18.08 verifies if all set bits are supported and fails if not.... Change-Id: Ia87242fcda11147dff3bebe2e2bef32f0a8891fb Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-13  Multiarch handling in different constructor macros  (Damjan Marion, 4 files, -18/+5)
This significantly reduces the need for ... in multiarch code. Constructor macros will simply create a static unused entry if CLIB_MARCH_VARIANT is defined, and that will be optimized out by the compiler. Change-Id: I17d1c4ac0c903adcfadaa4a07de1b854c7ab14ac Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-13  dpdk: support for DPDK 18.08  (Damjan Marion, 5 files, -66/+75)
Change-Id: If1b93341c222160b9a08f127620c024620e55c37 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-08-11  Multiversioning: Device (tx) function constructor  (Mohsin Kazmi, 1 file, -17/+3)
Change-Id: I39f87ca161c891fb22462a23188982fef7c3243f Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-08-06  dpdk-flow: fix raw item init  (Eyal Bari, 1 file, -1/+3)
The dpdk raw item match string changed from a flexible array member to a pointer member. Change-Id: I930f05112ce04b0cdb3feb985d755e730b102084 Signed-off-by: Eyal Bari <ebari@cisco.com>
2018-07-26  Clean up dpdk plugin rx/tx pcap tracing  (Dave Barach, 2 files, -32/+26)
Needed a spinlock to protect the data vector. Cleaned up debug cli so the output makes sense, and so that various parameters exist in one place. Removed a nonsense memset-to-zero which led to ultra-confusing results. Change-Id: I91cd14ce7fe84fd2eceab86e016b5ee001993be4 Signed-off-by: Dave Barach <dbarach@cisco.com>
2018-07-23  fix vector index range checks  (Eyal Bari, 1 file, -1/+1)
Change-Id: I63c36644c9d93f2c3ec6606ca0205b407499de4e Signed-off-by: Eyal Bari <ebari@cisco.com>
2018-07-19  dpdk: fix Mellanox Connect-X3 VF ID  (Matthew Smith, 1 file, -1/+1)
Fix a typo from previous patch. Change 0x104 to 0x1004. Change-Id: I82230a8a0ec01567eb1d4bc12ac02062c2a98347 Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-07-19  dpdk: set log write fd to non-blocking  (Matthew Smith, 1 file, -7/+15)
If a PMD writes too many log messages using rte_vlog(), the pipe for logging can fill and then rte_vlog() will block on fflush() while it waits for something to read from the other side of the pipe. That will never happen since the process node that would read the other side of the pipe runs in the same thread. Set the write fd to non-blocking before the call to rte_openlog_stream(). If the pipe is full, calls to write() or fflush() will fail but execution will continue. Change-Id: I0e5d710629633acda5617ff29897d6582c255d57 Signed-off-by: Matthew Smith <mgsmith@netgate.com>
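The core of the fix, sketched with standard fcntl(); the pipe fd variable name is illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <rte_log.h>

    /* mark the write end of the log pipe non-blocking before DPDK gets it,
       so a full pipe makes writes fail instead of blocking the thread */
    int flags = fcntl (log_pipe_wr_fd, F_GETFL, 0);
    if (flags >= 0)
      fcntl (log_pipe_wr_fd, F_SETFL, flags | O_NONBLOCK);

    FILE *f = fdopen (log_pipe_wr_fd, "a");
    if (f)
      rte_openlog_stream (f);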
2018-07-19  dpdk: add device IDs for Mellanox ConnectX-3  (Matthew Smith, 1 file, -1/+6)
Recognize the PF and VF device IDs for the Mellanox adapter used on Azure. Change-Id: Ic7b36b37ac93db2b696354ffe6fa2b6d62ee3801 Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-07-18  DPDK: coverity scan warnings  (Marco Varlese, 2 files, -7/+4)
This patch addresses the coverity scan warnings reported for the DPDK plugin. Change-Id: Ie7ac7ffcf4a6c63245eae0f9910a193ab1e318a8 Signed-off-by: Marco Varlese <marco.varlese@suse.de>
2018-07-18  Add config option to use dlmalloc instead of mheap  (Dave Barach, 1 file, -2/+2)
Configure w/ --enable-dlmalloc, see .../build-data/platforms/vpp.mk. src/vppinfra/dlmalloc.[ch] are slightly modified versions of the well-known Doug Lea malloc. Main advantage: dlmalloc mspaces have no inherent size limit. Change-Id: I19b3f43f3c65bcfb82c1a265a97922d01912446e Signed-off-by: Dave Barach <dave@barachs.net>
2018-07-13  Add QEDE poll mode driver (librte_pmd_qede)  (Igor Mikhailov (imichail), 3 files, -4/+10)
The driver supports Cavium QLogic FastLinQ QL4xxxx 10G/25G/40G/50G/100G Intelligent Ethernet Adapters (IEA) and Converged Network Adapters (CNA); see doc/guides/nics/qede.rst. Change-Id: If17e8cb572eb8c0585085be1c7cfdfa159eb6e68 Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
2018-07-10  Remove unused variables  (Igor Mikhailov (imichail), 1 file, -4/+0)
Change-Id: If4da80c7eefe55905594eaaba0946d75f0892da5 Signed-off-by: Igor Mikhailov (imichail) <imichail@cisco.com>
2018-06-26  dpdk: display rx/tx burst function name in "show hardware detail"  (Damjan Marion, 1 file, -0/+20)
Change-Id: I6fa4c6bf9c4e96ba4502a06907bdecc654ace665 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-25  dpdk: Enhancement to call crypto start api at initialization  (Sachin Saxena, 1 file, -0/+4)
- Some crypto devices rely on the rte_cryptodev_start() API being called by the application to enable a pre-configured H/W crypto device. - NXP dpaa2 is one such example. Change-Id: I2ad8ca0060604fb4e0541161e91bdebc6642f4da Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
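Roughly what the added call amounts to (rte_cryptodev_start() is the DPDK API; the error handling here is illustrative):

    /* enable the pre-configured crypto device before enqueueing any ops */
    if (rte_cryptodev_start (dev_id) < 0)
      clib_warning ("rte_cryptodev_start failed for crypto dev %u", dev_id);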
#!/usr/bin/env python

import unittest
import socket

from framework import VppTestCase, VppTestRunner
from vpp_ip import DpoProto
from vpp_ip_route import VppIpRoute, VppRoutePath, VppMplsRoute, \
    VppMplsIpBind, VppIpMRoute, VppMRoutePath, \
    MRouteItfFlags, MRouteEntryFlags, VppIpTable, VppMplsTable, \
    VppMplsLabel, MplsLspMode, find_mpls_route, \
    FibPathProto, FibPathType, FibPathFlags
from vpp_mpls_tunnel_interface import VppMPLSTunnelInterface

import scapy.compat
from scapy.packet import Raw
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP, ICMP
from scapy.layers.inet6 import IPv6, ICMPv6TimeExceeded
from scapy.contrib.mpls import MPLS

NUM_PKTS = 67


def verify_filter(capture, sent):
    if not len(capture) == len(sent):
        # filter out any IPv6 RAs from the capture
        for p in capture:
            if p.haslayer(IPv6):
                capture.remove(p)
    return capture


def verify_mpls_stack(tst, rx, mpls_labels):
    # the rx'd packet should still be MPLS: check the ethertype
    eth = rx[Ether]
    tst.assertEqual(eth.type, 0x8847)

    rx_mpls = rx[MPLS]

    for ii in range(len(mpls_labels)):
        tst.assertEqual(rx_mpls.label, mpls_labels[ii].value)
        tst.assertEqual(rx_mpls.cos, mpls_labels[ii].exp)
        tst.assertEqual(rx_mpls.ttl, mpls_labels[ii].ttl)

        if ii == len(mpls_labels) - 1:
            tst.assertEqual(rx_mpls.s, 1)
        else:
            # not end of stack
            tst.assertEqual(rx_mpls.s, 0)
            # pop the label to expose the next
            rx_mpls = rx_mpls[MPLS].payload


class TestMPLS(VppTestCase):
    """ MPLS Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestMPLS, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(TestMPLS, cls).tearDownClass()

    def setUp(self):
        super(TestMPLS, self).setUp()

        # create 2 pg interfaces
        self.create_pg_interfaces(range(4))

        # setup both interfaces
        # assign them different tables.
        table_id = 0
        self.tables = []

        tbl = VppMplsTable(self, 0)
        tbl.add_vpp_config()
        self.tables.append(tbl)

        for i in self.pg_interfaces:
            i.admin_up()

            if table_id != 0:
                tbl = VppIpTable(self, table_id)
                tbl.add_vpp_config()
                self.tables.append(tbl)
                tbl = VppIpTable(self, table_id, is_ip6=1)
                tbl.add_vpp_config()
                self.tables.append(tbl)

            i.set_table_ip4(table_id)
            i.set_table_ip6(table_id)
            i.config_ip4()
            i.resolve_arp()
            i.config_ip6()
            i.resolve_ndp()
            i.enable_mpls()
            table_id += 1

    def tearDown(self):
        for i in self.pg_interfaces:
            i.unconfig_ip4()
            i.unconfig_ip6()
            i.ip6_disable()
            i.set_table_ip4(0)
            i.set_table_ip6(0)
            i.disable_mpls()
            i.admin_down()
        super(TestMPLS, self).tearDown()

    # the default of 64 matches the IP packet TTL default
    def create_stream_labelled_ip4(
            self,
            src_if,
            mpls_labels,
            ping=0,
            ip_itf=None,
            dst_ip=None,
            chksum=None,
            ip_ttl=64,
            n=257):
        self.reset_packet_infos()
        pkts = []
        for i in range(0, n):
            info = self.create_packet_info(src_if, src_if)
            payload = self.info_to_payload(info)
            p = Ether(dst=src_if.local_mac, src=src_if.remote_mac)

            for ii in range(len(mpls_labels)):
                p = p / MPLS(label=mpls_labels[ii].value,
                             ttl=mpls_labels[ii].ttl,
                             cos=mpls_labels[ii].exp)
            if not ping:
                if not dst_ip:
                    p = (p / IP(src=src_if.local_ip4,
                                dst=src_if.remote_ip4,
                                ttl=ip_ttl) /
                         UDP(sport=1234, dport=1234) /
                         Raw(payload))
                else:
                    p = (p / IP(src=src_if.local_ip4, dst=dst_ip, ttl=ip_ttl) /
                         UDP(sport=1234, dport=1234) /
                         Raw(payload))
            else:
                p = (p / IP(src=ip_itf.remote_ip4,
                            dst=ip_itf.local_ip4,
                            ttl=ip_ttl) /
                     ICMP())

            if chksum:
                p[IP].chksum = chksum
            info.data = p.copy()
            pkts.append(p)
        return pkts

    def create_stream_ip4(self, src_if, dst_ip, ip_ttl=64, ip_dscp=0):
        self.reset_packet_infos()
        pkts = []
        for i in range(0, 257):
            info = self.create_packet_info(src_if, src_if)
            payload = self.info_to_payload(info)
            p = (Ether(dst=src_if.local_mac, src=src_if.remote_mac) /
                 IP(src=src_if.remote_ip4, dst=dst_ip,
                    ttl=ip_ttl, tos=ip_dscp) /
                 UDP(sport=1234, dport=1234) /
                 Raw(payload))
            info.data = p.copy()
            pkts.append(p)
        return pkts

    def create_stream_ip6(self, src_if, dst_ip, ip_ttl=64, ip_dscp=0):
        self.reset_packet_infos()
        pkts = []
        for i in range(0, 257):
            info = self.create_packet_info(src_if, src_if)
            payload = self.info_to_payload(info)
            p = (Ether(dst=src_if.local_mac, src=src_if.remote_mac) /
                 IPv6(src=src_if.remote_ip6, dst=dst_ip,
                      hlim=ip_ttl, tc=ip_dscp) /
                 UDP(sport=1234, dport=1234) /
                 Raw(payload))
            info.data = p.copy()
            pkts.append(p)
        return pkts

    def create_stream_labelled_ip6(self, src_if, mpls_labels,
                                   hlim=64, dst_ip=None):
        if dst_ip is None:
            dst_ip = src_if.remote_ip6
        self.reset_packet_infos()
        pkts = []
        for i in range(0, 257):
            info = self.create_packet_info(src_if, src_if)
            payload = self.info_to_payload(info)
            p = Ether(dst=src_if.local_mac, src=src_if.remote_mac)
            for l in mpls_labels:
                p = p / MPLS(label=l.value, ttl=l.ttl, cos=l.exp)

            p = p / (IPv6(src=src_if.remote_ip6, dst=dst_ip, hlim=hlim) /
                     UDP(sport=1234, dport=1234) /
                     Raw(payload))
            info.data = p.copy()
            pkts.append(p)
        return pkts

    def verify_capture_ip4(self, src_if, capture, sent, ping_resp=0,
                           ip_ttl=None, ip_dscp=0):
        try:
            capture = verify_filter(capture, sent)

            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]

                # the rx'd packet has the MPLS label popped
                eth = rx[Ether]
                self.assertEqual(eth.type, 0x800)

                tx_ip = tx[IP]
                rx_ip = rx[IP]

                if not ping_resp:
                    self.assertEqual(rx_ip.src, tx_ip.src)
                    self.assertEqual(rx_ip.dst, tx_ip.dst)
                    self.assertEqual(rx_ip.tos, ip_dscp)
                    if not ip_ttl:
                        # IP processing post pop has decremented the TTL
                        self.assertEqual(rx_ip.ttl + 1, tx_ip.ttl)
                    else:
                        self.assertEqual(rx_ip.ttl, ip_ttl)
                else:
                    self.assertEqual(rx_ip.src, tx_ip.dst)
                    self.assertEqual(rx_ip.dst, tx_ip.src)

        except:
            raise

    def verify_capture_labelled_ip4(self, src_if, capture, sent,
                                    mpls_labels, ip_ttl=None):
        try:
            capture = verify_filter(capture, sent)

            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]
                tx_ip = tx[IP]
                rx_ip = rx[IP]

                verify_mpls_stack(self, rx, mpls_labels)

                self.assertEqual(rx_ip.src, tx_ip.src)
                self.assertEqual(rx_ip.dst, tx_ip.dst)
                if not ip_ttl:
                    # IP processing post pop has decremented the TTL
                    self.assertEqual(rx_ip.ttl + 1, tx_ip.ttl)
                else:
                    self.assertEqual(rx_ip.ttl, ip_ttl)

        except:
            raise

    def verify_capture_labelled_ip6(self, src_if, capture, sent,
                                    mpls_labels, ip_ttl=None):
        try:
            capture = verify_filter(capture, sent)

            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]
                tx_ip = tx[IPv6]
                rx_ip = rx[IPv6]

                verify_mpls_stack(self, rx, mpls_labels)

                self.assertEqual(rx_ip.src, tx_ip.src)
                self.assertEqual(rx_ip.dst, tx_ip.dst)
                if not ip_ttl:
                    # IP processing post pop has decremented the TTL
                    self.assertEqual(rx_ip.hlim + 1, tx_ip.hlim)
                else:
                    self.assertEqual(rx_ip.hlim, ip_ttl)

        except:
            raise

    def verify_capture_tunneled_ip4(self, src_if, capture, sent, mpls_labels):
        try:
            capture = verify_filter(capture, sent)

            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]
                tx_ip = tx[IP]
                rx_ip = rx[IP]

                verify_mpls_stack(self, rx, mpls_labels)

                self.assertEqual(rx_ip.src, tx_ip.src)
                self.assertEqual(rx_ip.dst, tx_ip.dst)
                # IP processing post pop has decremented the TTL
                self.assertEqual(rx_ip.ttl + 1, tx_ip.ttl)

        except:
            raise

    def verify_capture_labelled(self, src_if, capture, sent,
                                mpls_labels):
        try:
            capture = verify_filter(capture, sent)

            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                rx = capture[i]
                verify_mpls_stack(self, rx, mpls_labels)
        except:
            raise

    def verify_capture_ip6(self, src_if, capture, sent,
                           ip_hlim=None, ip_dscp=0):
        try:
            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]

                # the rx'd packet has the MPLS label popped
                eth = rx[Ether]
                self.assertEqual(eth.type, 0x86DD)

                tx_ip = tx[IPv6]
                rx_ip = rx[IPv6]

                self.assertEqual(rx_ip.src, tx_ip.src)
                self.assertEqual(rx_ip.dst, tx_ip.dst)
                self.assertEqual(rx_ip.tc,  ip_dscp)
                # IP processing post pop has decremented the TTL
                if not ip_hlim:
                    self.assertEqual(rx_ip.hlim + 1, tx_ip.hlim)
                else:
                    self.assertEqual(rx_ip.hlim, ip_hlim)

        except:
            raise

    def verify_capture_ip6_icmp(self, src_if, capture, sent):
        try:
            self.assertEqual(len(capture), len(sent))

            for i in range(len(capture)):
                tx = sent[i]
                rx = capture[i]

                # the rx'd packet has the MPLS label popped
                eth = rx[Ether]
                self.assertEqual(eth.type, 0x86DD)

                tx_ip = tx[IPv6]
                rx_ip = rx[IPv6]

                self.assertEqual(rx_ip.dst, tx_ip.src)
                # ICMP sourced from the interface's address
                self.assertEqual(rx_ip.src, src_if.local_ip6)
                # hop-limit reset to 255 for the ICMP packet
                self.assertEqual(rx_ip.hlim, 255)

                icmp = rx[ICMPv6TimeExceeded]

        except:
            raise

    def test_swap(self):
        """ MPLS label swap tests """

        #
        # A simple MPLS xconnect - eos label in label out
        #
        route_32_eos = VppMplsRoute(self, 32, 1,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[VppMplsLabel(33)])])
        route_32_eos.add_vpp_config()

        self.assertTrue(
            find_mpls_route(self, 0, 32, 1,
                            [VppRoutePath(self.pg0.remote_ip4,
                                          self.pg0.sw_if_index,
                                          labels=[VppMplsLabel(33)])]))

        #
        # a stream that matches the route for 10.0.0.1
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(32, ttl=32, exp=1)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(33, ttl=31, exp=1)])

        self.assertEqual(route_32_eos.get_stats_to()['packets'], 257)

        #
        # A simple MPLS xconnect - non-eos label in label out
        #
        route_32_neos = VppMplsRoute(self, 32, 0,
                                     [VppRoutePath(self.pg0.remote_ip4,
                                                   self.pg0.sw_if_index,
                                                   labels=[VppMplsLabel(33)])])
        route_32_neos.add_vpp_config()

        #
        # a stream that matches the route for 10.0.0.1
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(32, ttl=21, exp=7),
                                              VppMplsLabel(99)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(33, ttl=20, exp=7),
                                      VppMplsLabel(99)])
        self.assertEqual(route_32_neos.get_stats_to()['packets'], 257)

        #
        # A simple MPLS xconnect - non-eos label in label out, uniform mode
        #
        route_42_neos = VppMplsRoute(
            self, 42, 0,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(43, MplsLspMode.UNIFORM)])])
        route_42_neos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(42, ttl=21, exp=7),
                                              VppMplsLabel(99)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(43, ttl=20, exp=7),
                                      VppMplsLabel(99)])

        #
        # An MPLS xconnect - EOS label in IP out
        #
        route_33_eos = VppMplsRoute(self, 33, 1,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[])])
        route_33_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(33)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip4(self.pg0, rx, tx)

        #
        # disposed packets with an invalid IPv4 checksum are dropped
        #
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(33)],
                                             dst_ip=self.pg0.remote_ip4,
                                             n=65,
                                             chksum=1)
        self.send_and_assert_no_replies(self.pg0, tx, "Invalid Checksum")

        #
        # An MPLS xconnect - EOS label in IP out, uniform mode
        #
        route_3333_eos = VppMplsRoute(
            self, 3333, 1,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(3, MplsLspMode.UNIFORM)])])
        route_3333_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(
            self.pg0,
            [VppMplsLabel(3333, ttl=55, exp=3)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip4(self.pg0, rx, tx, ip_ttl=54, ip_dscp=0x60)
        tx = self.create_stream_labelled_ip4(
            self.pg0,
            [VppMplsLabel(3333, ttl=66, exp=4)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip4(self.pg0, rx, tx, ip_ttl=65, ip_dscp=0x80)

        #
        # An MPLS xconnect - EOS label in IPv6 out
        #
        route_333_eos = VppMplsRoute(
            self, 333, 1,
            [VppRoutePath(self.pg0.remote_ip6,
                          self.pg0.sw_if_index,
                          labels=[])],
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_IP6)
        route_333_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip6(self.pg0, [VppMplsLabel(333)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6(self.pg0, rx, tx)

        #
        # disposed packets have an expired TTL
        #
        tx = self.create_stream_labelled_ip6(self.pg0,
                                             [VppMplsLabel(333, ttl=64)],
                                             dst_ip=self.pg1.remote_ip6,
                                             hlim=1)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6_icmp(self.pg0, rx, tx)

        #
        # An MPLS xconnect - EOS label in IPv6 out w imp-null
        #
        route_334_eos = VppMplsRoute(
            self, 334, 1,
            [VppRoutePath(self.pg0.remote_ip6,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(3)])],
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_IP6)
        route_334_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip6(self.pg0,
                                             [VppMplsLabel(334, ttl=64)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6(self.pg0, rx, tx)

        #
        # An MPLS xconnect - EOS label in IPv6 out w imp-null in uniform mode
        #
        route_335_eos = VppMplsRoute(
            self, 335, 1,
            [VppRoutePath(self.pg0.remote_ip6,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(3, MplsLspMode.UNIFORM)])],
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_IP6)
        route_335_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip6(
            self.pg0,
            [VppMplsLabel(335, ttl=27, exp=4)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6(self.pg0, rx, tx, ip_hlim=26, ip_dscp=0x80)

        #
        # disposed packets have an expired TTL
        #
        tx = self.create_stream_labelled_ip6(self.pg0, [VppMplsLabel(334)],
                                             dst_ip=self.pg1.remote_ip6,
                                             hlim=0)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6_icmp(self.pg0, rx, tx)

        #
        # An MPLS xconnect - non-EOS label in IP out - an invalid configuration
        # so this traffic should be dropped.
        #
        route_33_neos = VppMplsRoute(self, 33, 0,
                                     [VppRoutePath(self.pg0.remote_ip4,
                                                   self.pg0.sw_if_index,
                                                   labels=[])])
        route_33_neos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(33),
                                              VppMplsLabel(99)])
        self.send_and_assert_no_replies(
            self.pg0, tx,
            "MPLS non-EOS packets popped and forwarded")

        #
        # A recursive EOS x-connect, which resolves through another x-connect
        # in pipe mode
        #
        route_34_eos = VppMplsRoute(self, 34, 1,
                                    [VppRoutePath("0.0.0.0",
                                                  0xffffffff,
                                                  nh_via_label=32,
                                                  labels=[VppMplsLabel(44),
                                                          VppMplsLabel(45)])])
        route_34_eos.add_vpp_config()
        self.logger.info(self.vapi.cli("sh mpls fib 34"))

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(34, ttl=3)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(33),
                                      VppMplsLabel(44),
                                      VppMplsLabel(45, ttl=2)])

        self.assertEqual(route_34_eos.get_stats_to()['packets'], 257)
        self.assertEqual(route_32_neos.get_stats_via()['packets'], 257)

        #
        # A recursive EOS x-connect, which resolves through another x-connect
        # in uniform mode
        #
        route_35_eos = VppMplsRoute(
            self, 35, 1,
            [VppRoutePath("0.0.0.0",
                          0xffffffff,
                          nh_via_label=42,
                          labels=[VppMplsLabel(44)])])
        route_35_eos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(35, ttl=3)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(43, ttl=2),
                                      VppMplsLabel(44, ttl=2)])

        #
        # A recursive non-EOS x-connect, which resolves through another
        # x-connect
        #
        route_34_neos = VppMplsRoute(self, 34, 0,
                                     [VppRoutePath("0.0.0.0",
                                                   0xffffffff,
                                                   nh_via_label=32,
                                                   labels=[VppMplsLabel(44),
                                                           VppMplsLabel(46)])])
        route_34_neos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(34, ttl=45),
                                              VppMplsLabel(99)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        # it's the 2nd (counting from 0) label in the stack that is swapped
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(33),
                                      VppMplsLabel(44),
                                      VppMplsLabel(46, ttl=44),
                                      VppMplsLabel(99)])

        #
        # a recursive IP route that resolves through the recursive non-eos
        # x-connect
        #
        ip_10_0_0_1 = VppIpRoute(self, "10.0.0.1", 32,
                                 [VppRoutePath("0.0.0.0",
                                               0xffffffff,
                                               nh_via_label=34,
                                               labels=[VppMplsLabel(55)])])
        ip_10_0_0_1.add_vpp_config()

        tx = self.create_stream_ip4(self.pg0, "10.0.0.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(33),
                                          VppMplsLabel(44),
                                          VppMplsLabel(46),
                                          VppMplsLabel(55)])
        self.assertEqual(ip_10_0_0_1.get_stats_to()['packets'], 257)

        ip_10_0_0_1.remove_vpp_config()
        route_34_neos.remove_vpp_config()
        route_34_eos.remove_vpp_config()
        route_33_neos.remove_vpp_config()
        route_33_eos.remove_vpp_config()
        route_32_neos.remove_vpp_config()
        route_32_eos.remove_vpp_config()

    def test_bind(self):
        """ MPLS Local Label Binding test """

        #
        # Add a non-recursive route with a single out label
        #
        route_10_0_0_1 = VppIpRoute(self, "10.0.0.1", 32,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[VppMplsLabel(45)])])
        route_10_0_0_1.add_vpp_config()

        # bind a local label to the route
        binding = VppMplsIpBind(self, 44, "10.0.0.1", 32)
        binding.add_vpp_config()

        # non-EOS stream
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(44),
                                              VppMplsLabel(99)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(45, ttl=63),
                                      VppMplsLabel(99)])

        # EOS stream
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(44)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled(self.pg0, rx, tx,
                                     [VppMplsLabel(45, ttl=63)])

        # IP stream
        tx = self.create_stream_ip4(self.pg0, "10.0.0.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx, [VppMplsLabel(45)])

        #
        # cleanup
        #
        binding.remove_vpp_config()
        route_10_0_0_1.remove_vpp_config()

    def test_imposition(self):
        """ MPLS label imposition test """

        #
        # Add a non-recursive route with a single out label
        #
        route_10_0_0_1 = VppIpRoute(self, "10.0.0.1", 32,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[VppMplsLabel(32)])])
        route_10_0_0_1.add_vpp_config()

        #
        # a stream that matches the route for 10.0.0.1
        # PG0 is in the default table
        #
        tx = self.create_stream_ip4(self.pg0, "10.0.0.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx, [VppMplsLabel(32)])

        #
        # Add a non-recursive route with 3 out labels
        #
        route_10_0_0_2 = VppIpRoute(self, "10.0.0.2", 32,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[VppMplsLabel(32),
                                                          VppMplsLabel(33),
                                                          VppMplsLabel(34)])])
        route_10_0_0_2.add_vpp_config()

        tx = self.create_stream_ip4(self.pg0, "10.0.0.2",
                                    ip_ttl=44, ip_dscp=0xff)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(32),
                                          VppMplsLabel(33),
                                          VppMplsLabel(34)],
                                         ip_ttl=43)

        #
        # Add a non-recursive route with a single out label in uniform mode
        #
        route_10_0_0_3 = VppIpRoute(
            self, "10.0.0.3", 32,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(32,
                                               mode=MplsLspMode.UNIFORM)])])
        route_10_0_0_3.add_vpp_config()

        tx = self.create_stream_ip4(self.pg0, "10.0.0.3",
                                    ip_ttl=54, ip_dscp=0xbe)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(32, ttl=53, exp=5)])

        #
        # Add an IPv6 non-recursive route with a single out label in
        # uniform mode
        #
        route_2001_3 = VppIpRoute(
            self, "2001::3", 128,
            [VppRoutePath(self.pg0.remote_ip6,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(32,
                                               mode=MplsLspMode.UNIFORM)])])
        route_2001_3.add_vpp_config()

        tx = self.create_stream_ip6(self.pg0, "2001::3",
                                    ip_ttl=54, ip_dscp=0xbe)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip6(self.pg0, rx, tx,
                                         [VppMplsLabel(32, ttl=53, exp=5)])

        #
        # add a recursive path, with output label, via the 1 label route
        #
        route_11_0_0_1 = VppIpRoute(self, "11.0.0.1", 32,
                                    [VppRoutePath("10.0.0.1",
                                                  0xffffffff,
                                                  labels=[VppMplsLabel(44)])])
        route_11_0_0_1.add_vpp_config()

        #
        # a stream that matches the route for 11.0.0.1, should pick up
        # the label stack for 11.0.0.1 and 10.0.0.1
        #
        tx = self.create_stream_ip4(self.pg0, "11.0.0.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(32),
                                          VppMplsLabel(44)])

        self.assertEqual(route_11_0_0_1.get_stats_to()['packets'], 257)

        #
        # add a recursive path, with 2 labels, via the 3 label route
        #
        route_11_0_0_2 = VppIpRoute(self, "11.0.0.2", 32,
                                    [VppRoutePath("10.0.0.2",
                                                  0xffffffff,
                                                  labels=[VppMplsLabel(44),
                                                          VppMplsLabel(45)])])
        route_11_0_0_2.add_vpp_config()

        #
        # a stream that matches the route for 11.0.0.2, should pick up
        # the label stack for 11.0.0.2 and 10.0.0.2
        #
        tx = self.create_stream_ip4(self.pg0, "11.0.0.2")
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(32),
                                          VppMplsLabel(33),
                                          VppMplsLabel(34),
                                          VppMplsLabel(44),
                                          VppMplsLabel(45)])

        self.assertEqual(route_11_0_0_2.get_stats_to()['packets'], 257)

        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_labelled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(32),
                                          VppMplsLabel(33),
                                          VppMplsLabel(34),
                                          VppMplsLabel(44),
                                          VppMplsLabel(45)])

        self.assertEqual(route_11_0_0_2.get_stats_to()['packets'], 514)

        #
        # cleanup
        #
        route_11_0_0_2.remove_vpp_config()
        route_11_0_0_1.remove_vpp_config()
        route_10_0_0_2.remove_vpp_config()
        route_10_0_0_1.remove_vpp_config()

    def test_tunnel_pipe(self):
        """ MPLS Tunnel Tests - Pipe """

        #
        # Create a tunnel with two out labels
        #
        mpls_tun = VppMPLSTunnelInterface(
            self,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(44),
                                  VppMplsLabel(46)])])
        mpls_tun.add_vpp_config()
        mpls_tun.admin_up()

        #
        # add an unlabelled route through the new tunnel
        #
        route_10_0_0_3 = VppIpRoute(self, "10.0.0.3", 32,
                                    [VppRoutePath("0.0.0.0",
                                                  mpls_tun._sw_if_index)])
        route_10_0_0_3.add_vpp_config()

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "10.0.0.3")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg0.get_capture()
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(44),
                                          VppMplsLabel(46)])

        #
        # add a labelled route through the new tunnel
        #
        route_10_0_0_4 = VppIpRoute(self, "10.0.0.4", 32,
                                    [VppRoutePath("0.0.0.0",
                                                  mpls_tun._sw_if_index,
                                                  labels=[33])])
        route_10_0_0_4.add_vpp_config()

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "10.0.0.4")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg0.get_capture()
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(44),
                                          VppMplsLabel(46),
                                          VppMplsLabel(33, ttl=255)])

    def test_tunnel_uniform(self):
        """ MPLS Tunnel Tests - Uniform """

        #
        # Create a tunnel with a two-label stack
        # The label stack is specified here from outer to inner
        #
        mpls_tun = VppMPLSTunnelInterface(
            self,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(44, ttl=32),
                                  VppMplsLabel(46, MplsLspMode.UNIFORM)])])
        mpls_tun.add_vpp_config()
        mpls_tun.admin_up()

        #
        # add an unlabelled route through the new tunnel
        #
        route_10_0_0_3 = VppIpRoute(self, "10.0.0.3", 32,
                                    [VppRoutePath("0.0.0.0",
                                                  mpls_tun._sw_if_index)])
        route_10_0_0_3.add_vpp_config()

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "10.0.0.3", ip_ttl=24)
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg0.get_capture()
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(44, ttl=32),
                                          VppMplsLabel(46, ttl=23)])

        #
        # add a labelled route through the new tunnel
        #
        route_10_0_0_4 = VppIpRoute(
            self, "10.0.0.4", 32,
            [VppRoutePath("0.0.0.0",
                          mpls_tun._sw_if_index,
                          labels=[VppMplsLabel(33, ttl=47)])])
        route_10_0_0_4.add_vpp_config()

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "10.0.0.4")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg0.get_capture()
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx,
                                         [VppMplsLabel(44, ttl=32),
                                          VppMplsLabel(46, ttl=47),
                                          VppMplsLabel(33, ttl=47)])

    def test_mpls_tunnel_many(self):
        """ MPLS Multiple Tunnels """

        for ii in range(10):
            mpls_tun = VppMPLSTunnelInterface(
                self,
                [VppRoutePath(self.pg0.remote_ip4,
                              self.pg0.sw_if_index,
                              labels=[VppMplsLabel(44, ttl=32),
                                      VppMplsLabel(46, MplsLspMode.UNIFORM)])])
            mpls_tun.add_vpp_config()
            mpls_tun.admin_up()

    def test_v4_exp_null(self):
        """ MPLS V4 Explicit NULL test """

        #
        # The first test case has an MPLS TTL of 0
        # all packets should be dropped
        #
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(0, ttl=0)])
        self.send_and_assert_no_replies(self.pg0, tx,
                                        "MPLS TTL=0 packets forwarded")

        #
        # a stream with a non-zero MPLS TTL
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(0)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip4(self.pg0, rx, tx)

        #
        # a stream with a non-zero MPLS TTL
        # PG1 is in table 1
        # we are ensuring the post-pop lookup occurs in the VRF table
        #
        tx = self.create_stream_labelled_ip4(self.pg1, [VppMplsLabel(0)])
        rx = self.send_and_expect(self.pg1, tx, self.pg1)
        self.verify_capture_ip4(self.pg1, rx, tx)

    def test_v6_exp_null(self):
        """ MPLS V6 Explicit NULL test """

        #
        # a stream with a non-zero MPLS TTL
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip6(self.pg0, [VppMplsLabel(2)])
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6(self.pg0, rx, tx)

        #
        # a stream with a non-zero MPLS TTL
        # PG1 is in table 1
        # we are ensuring the post-pop lookup occurs in the VRF table
        #
        tx = self.create_stream_labelled_ip6(self.pg1, [VppMplsLabel(2)])
        rx = self.send_and_expect(self.pg1, tx, self.pg1)
        self.verify_capture_ip6(self.pg1, rx, tx)

    def test_deag(self):
        """ MPLS Deagg """

        #
        # A de-agg route - next-hop lookup in default table
        #
        route_34_eos = VppMplsRoute(self, 34, 1,
                                    [VppRoutePath("0.0.0.0",
                                                  0xffffffff,
                                                  nh_table_id=0)])
        route_34_eos.add_vpp_config()
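        # with no next-hop interface (0xffffffff) the label is popped and the
        # exposed IP packet is looked up in nh_table_id - a de-aggregation entry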

        #
        # ping an interface in the default table
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(34)],
                                             ping=1,
                                             ip_itf=self.pg0)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip4(self.pg0, rx, tx, ping_resp=1)

        #
        # A de-agg route - next-hop lookup in non-default table
        #
        route_35_eos = VppMplsRoute(self, 35, 1,
                                    [VppRoutePath("0.0.0.0",
                                                  0xffffffff,
                                                  nh_table_id=1)])
        route_35_eos.add_vpp_config()

        #
        # ping an interface in the non-default table
        # PG0 is in the default table. Packets arrive labelled in the
        # default table and egress unlabelled in the non-default table
        #
        tx = self.create_stream_labelled_ip4(
            self.pg0, [VppMplsLabel(35)], ping=1, ip_itf=self.pg1)
        rx = self.send_and_expect(self.pg0, tx, self.pg1)
        self.verify_capture_ip4(self.pg1, rx, tx, ping_resp=1)

        #
        # Double pop
        #
        route_36_neos = VppMplsRoute(self, 36, 0,
                                     [VppRoutePath("0.0.0.0",
                                                   0xffffffff)])
        route_36_neos.add_vpp_config()

        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(36),
                                              VppMplsLabel(35)],
                                             ping=1, ip_itf=self.pg1)
        rx = self.send_and_expect(self.pg0, tx, self.pg1)
        self.verify_capture_ip4(self.pg1, rx, tx, ping_resp=1)

        route_36_neos.remove_vpp_config()
        route_35_eos.remove_vpp_config()
        route_34_eos.remove_vpp_config()

    def test_interface_rx(self):
        """ MPLS Interface Receive """

        #
        # Add a non-recursive route that will forward the traffic
        # post-interface-rx
        #
        route_10_0_0_1 = VppIpRoute(self, "10.0.0.1", 32,
                                    table_id=1,
                                    paths=[VppRoutePath(self.pg1.remote_ip4,
                                                        self.pg1.sw_if_index)])
        route_10_0_0_1.add_vpp_config()

        #
        # An interface-receive label that maps traffic to RX on interface pg1.
        # The packet is injected on pg0 (table 0); the interface-rx makes it
        # appear as if it were received on pg1 and so matched against the
        # route in table 1. If the packet egresses, the switch to pg1 worked.
        #
        route_34_eos = VppMplsRoute(
            self, 34, 1,
            [VppRoutePath("0.0.0.0",
                          self.pg1.sw_if_index,
                          type=FibPathType.FIB_PATH_TYPE_INTERFACE_RX)])
        route_34_eos.add_vpp_config()

        #
        # send a labelled stream to 10.0.0.1
        # PG0 is in the default table
        #
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(34)],
                                             dst_ip="10.0.0.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg1)
        self.verify_capture_ip4(self.pg1, rx, tx)

    def test_mcast_mid_point(self):
        """ MPLS Multicast Mid Point """

        #
        # Add a non-recursive route that will forward the traffic
        # post-interface-rx
        #
        route_10_0_0_1 = VppIpRoute(self, "10.0.0.1", 32,
                                    table_id=1,
                                    paths=[VppRoutePath(self.pg1.remote_ip4,
                                                        self.pg1.sw_if_index)])
        route_10_0_0_1.add_vpp_config()

        #
        # Add a mcast entry that replicate to pg2 and pg3
        # and replicate to an interface-rx (like a bud node would)
        #
        route_3400_eos = VppMplsRoute(
            self, 3400, 1,
            [VppRoutePath(self.pg2.remote_ip4,
                          self.pg2.sw_if_index,
                          labels=[VppMplsLabel(3401)]),
             VppRoutePath(self.pg3.remote_ip4,
                          self.pg3.sw_if_index,
                          labels=[VppMplsLabel(3402)]),
             VppRoutePath("0.0.0.0",
                          self.pg1.sw_if_index,
                          type=FibPathType.FIB_PATH_TYPE_INTERFACE_RX)],
            is_multicast=1)
        route_3400_eos.add_vpp_config()
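        # the first two paths are label swaps (3400 -> 3401 out pg2 and
        # 3400 -> 3402 out pg3); the interface-rx path acts as the bud node,
        # disposing a copy into pg1's table where the 10.0.0.1 route forwards it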

        #
        # send a labelled multicast stream to 10.0.0.1
        # PG0 is in the default table
        #
        self.vapi.cli("clear trace")
        tx = self.create_stream_labelled_ip4(self.pg0,
                                             [VppMplsLabel(3400, ttl=64)],
                                             n=257,
                                             dst_ip="10.0.0.1")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg1.get_capture(257)
        self.verify_capture_ip4(self.pg1, rx, tx)

        rx = self.pg2.get_capture(257)
        self.verify_capture_labelled(self.pg2, rx, tx,
                                     [VppMplsLabel(3401, ttl=63)])
        rx = self.pg3.get_capture(257)
        self.verify_capture_labelled(self.pg3, rx, tx,
                                     [VppMplsLabel(3402, ttl=63)])

    def test_mcast_head(self):
        """ MPLS Multicast Head-end """

        #
        # Create a multicast tunnel with two replications
        #
        mpls_tun = VppMPLSTunnelInterface(
            self,
            [VppRoutePath(self.pg2.remote_ip4,
                          self.pg2.sw_if_index,
                          labels=[VppMplsLabel(42)]),
             VppRoutePath(self.pg3.remote_ip4,
                          self.pg3.sw_if_index,
                          labels=[VppMplsLabel(43)])],
            is_multicast=1)
        mpls_tun.add_vpp_config()
        mpls_tun.admin_up()
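        # the multicast tunnel replicates each packet onto every path, so the
        # stream is expected once on pg2 (label 42) and once on pg3 (label 43)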

        #
        # add an unlabelled route through the new tunnel
        #
        route_10_0_0_3 = VppIpRoute(self, "10.0.0.3", 32,
                                    [VppRoutePath("0.0.0.0",
                                                  mpls_tun._sw_if_index)])
        route_10_0_0_3.add_vpp_config()

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "10.0.0.3")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg2.get_capture(257)
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx, [VppMplsLabel(42)])
        rx = self.pg3.get_capture(257)
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx, [VppMplsLabel(43)])

        #
        # Add an IP multicast route via the tunnel
        # A (*,G) entry:
        # one accepting interface, pg0, one forwarding interface via the tunnel
        #
        route_232_1_1_1 = VppIpMRoute(
            self,
            "0.0.0.0",
            "232.1.1.1", 32,
            MRouteEntryFlags.MFIB_ENTRY_FLAG_NONE,
            [VppMRoutePath(self.pg0.sw_if_index,
                           MRouteItfFlags.MFIB_ITF_FLAG_ACCEPT),
             VppMRoutePath(mpls_tun._sw_if_index,
                           MRouteItfFlags.MFIB_ITF_FLAG_FORWARD)])
        route_232_1_1_1.add_vpp_config()
        self.logger.info(self.vapi.cli("sh ip mfib index 0"))

        self.vapi.cli("clear trace")
        tx = self.create_stream_ip4(self.pg0, "232.1.1.1")
        self.pg0.add_stream(tx)

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg2.get_capture(257)
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx, [VppMplsLabel(42)])
        rx = self.pg3.get_capture(257)
        self.verify_capture_tunneled_ip4(self.pg0, rx, tx, [VppMplsLabel(43)])

    def test_mcast_ip4_tail(self):
        """ MPLS IPv4 Multicast Tail """

        #
        # Add a multicast route that will forward the traffic
        # post-disposition
        #
        route_232_1_1_1 = VppIpMRoute(
            self,
            "0.0.0.0",
            "232.1.1.1", 32,
            MRouteEntryFlags.MFIB_ENTRY_FLAG_NONE,
            table_id=1,
            paths=[VppMRoutePath(self.pg1.sw_if_index,
                                 MRouteItfFlags.MFIB_ITF_FLAG_FORWARD)])
        route_232_1_1_1.add_vpp_config()

        #
        # A multicast disposition label: the packet is injected on pg0
        # (table 0), the label is popped and the payload looked up in
        # table 1 with RPF-ID 55 attached for the mfib RPF check.
        # If the packet egresses, we must have matched the route in table 1
        #
        route_34_eos = VppMplsRoute(
            self, 34, 1,
            [VppRoutePath("0.0.0.0",
                          0xffffffff,
                          nh_table_id=1,
                          rpf_id=55)],
            is_multicast=1,
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_IP4)

        route_34_eos.add_vpp_config()
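        # the disposition path stamps packets with RPF-ID 55; the (*,G) entry in
        # table 1 only accepts them once its own RPF-ID is updated to match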

        #
        # Drop due to RPF check failure - the mfib entry's RPF-ID is not yet set
        #
        self.vapi.cli("clear trace")
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="232.1.1.1", n=1)
        self.send_and_assert_no_replies(self.pg0, tx, "RPF-ID drop none")

        #
        # set the RPF-ID of the entry to match the input packet's
        #
        route_232_1_1_1.update_rpf_id(55)
        self.logger.info(self.vapi.cli("sh ip mfib index 1 232.1.1.1"))

        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="232.1.1.1")
        rx = self.send_and_expect(self.pg0, tx, self.pg1)
        self.verify_capture_ip4(self.pg1, rx, tx)

        #
        # disposed packets with an invalid IPv4 checksum are dropped
        #
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="232.1.1.1", n=65,
                                             chksum=1)
        self.send_and_assert_no_replies(self.pg0, tx, "Invalid Checksum")

        #
        # set the RPF-ID of the entry to not match the input packet's
        #
        route_232_1_1_1.update_rpf_id(56)
        tx = self.create_stream_labelled_ip4(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="232.1.1.1")
        self.send_and_assert_no_replies(self.pg0, tx, "RPF-ID drop 56")

    def test_mcast_ip6_tail(self):
        """ MPLS IPv6 Multicast Tail """

        #
        # Add a multicast route that will forward the traffic
        # post-disposition
        #
        route_ff = VppIpMRoute(
            self,
            "::",
            "ff01::1", 32,
            MRouteEntryFlags.MFIB_ENTRY_FLAG_NONE,
            table_id=1,
            paths=[VppMRoutePath(self.pg1.sw_if_index,
                                 MRouteItfFlags.MFIB_ITF_FLAG_FORWARD,
                                 proto=FibPathProto.FIB_PATH_NH_PROTO_IP6)])
        route_ff.add_vpp_config()

        #
        # A multicast disposition label: the packet is injected on pg0
        # (table 0), the label is popped and the payload looked up in
        # table 1 with RPF-ID 55 attached for the mfib RPF check.
        # If the packet egresses, we must have matched the route in table 1
        #
        route_34_eos = VppMplsRoute(
            self, 34, 1,
            [VppRoutePath("::",
                          0xffffffff,
                          nh_table_id=1,
                          rpf_id=55)],
            is_multicast=1,
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_IP6)

        route_34_eos.add_vpp_config()

        #
        # Drop due to RPF check failure - the mfib entry's RPF-ID is not yet set
        #
        tx = self.create_stream_labelled_ip6(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="ff01::1")
        self.send_and_assert_no_replies(self.pg0, tx, "RPF Miss")

        #
        # set the RPF-ID of the entry to match the input packet's
        #
        route_ff.update_rpf_id(55)

        tx = self.create_stream_labelled_ip6(self.pg0, [VppMplsLabel(34)],
                                             dst_ip="ff01::1")
        rx = self.send_and_expect(self.pg0, tx, self.pg1)
        self.verify_capture_ip6(self.pg1, rx, tx)

        #
        # disposed packets with hop-limit = 1 elicit an ICMPv6 time-exceeded reply
        #
        tx = self.create_stream_labelled_ip6(self.pg0,
                                             [VppMplsLabel(34)],
                                             dst_ip="ff01::1",
                                             hlim=1)
        rx = self.send_and_expect(self.pg0, tx, self.pg0)
        self.verify_capture_ip6_icmp(self.pg0, rx, tx)

        #
        # set the RPF-ID of the entry to not match the input packet's
        #
        route_ff.update_rpf_id(56)
        tx = self.create_stream_labelled_ip6(self.pg0,
                                             [VppMplsLabel(34)],
                                             dst_ip="ff01::1")
        self.send_and_assert_no_replies(self.pg0, tx, "RPF-ID drop 56")


class TestMPLSDisabled(VppTestCase):
    """ MPLS disabled """

    @classmethod
    def setUpClass(cls):
        super(TestMPLSDisabled, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(TestMPLSDisabled, cls).tearDownClass()

    def setUp(self):
        super(TestMPLSDisabled, self).setUp()

        # create 2 pg interfaces
        self.create_pg_interfaces(range(2))

        self.tbl = VppMplsTable(self, 0)
        self.tbl.add_vpp_config()

        # PG0 is MPLS enabled
        self.pg0.admin_up()
        self.pg0.config_ip4()
        self.pg0.resolve_arp()
        self.pg0.enable_mpls()

        # PG1 is not MPLS enabled
        self.pg1.admin_up()

    def tearDown(self):
        for i in self.pg_interfaces:
            i.unconfig_ip4()
            i.admin_down()

        self.pg0.disable_mpls()
        super(TestMPLSDisabled, self).tearDown()

    def test_mpls_disabled(self):
        """ MPLS Disabled """

        tx = (Ether(src=self.pg1.remote_mac,
                    dst=self.pg1.local_mac) /
              MPLS(label=32, ttl=64) /
              IPv6(src="2001::1", dst=self.pg0.remote_ip6) /
              UDP(sport=1234, dport=1234) /
              Raw('\xa5' * 100))

        #
        # A simple MPLS xconnect - eos label in label out
        #
        route_32_eos = VppMplsRoute(self, 32, 1,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[33])])
        route_32_eos.add_vpp_config()
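        # although the label is programmed, MPLS is not enabled on pg1, so
        # labelled packets received there are dropped rather than looked up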

        #
        # PG1 does not forward IP traffic
        #
        self.send_and_assert_no_replies(self.pg1, tx, "MPLS disabled")

        #
        # MPLS enable PG1
        #
        self.pg1.enable_mpls()

        #
        # Now we get packets through
        #
        self.pg1.add_stream(tx)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx = self.pg0.get_capture(1)

        #
        # Disable PG1
        #
        self.pg1.disable_mpls()

        #
        # PG1 no longer forwards the labelled traffic
        #
        self.send_and_assert_no_replies(self.pg1, tx, "MPLS disabled")


class TestMPLSPIC(VppTestCase):
    """ MPLS Prefix-Independent Convergence (PIC) edge convergence """

    @classmethod
    def setUpClass(cls):
        super(TestMPLSPIC, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(TestMPLSPIC, cls).tearDownClass()

    def setUp(self):
        super(TestMPLSPIC, self).setUp()

        # create 4 pg interfaces
        self.create_pg_interfaces(range(4))

        mpls_tbl = VppMplsTable(self, 0)
        mpls_tbl.add_vpp_config()
        tbl4 = VppIpTable(self, 1)
        tbl4.add_vpp_config()
        tbl6 = VppIpTable(self, 1, is_ip6=1)
        tbl6.add_vpp_config()

        # core links
        self.pg0.admin_up()
        self.pg0.config_ip4()
        self.pg0.resolve_arp()
        self.pg0.enable_mpls()

        self.pg1.admin_up()
        self.pg1.config_ip4()
        self.pg1.resolve_arp()
        self.pg1.enable_mpls()

        # VRF (customer facing) link
        self.pg2.admin_up()
        self.pg2.set_table_ip4(1)
        self.pg2.config_ip4()
        self.pg2.resolve_arp()
        self.pg2.set_table_ip6(1)
        self.pg2.config_ip6()
        self.pg2.resolve_ndp()

        self.pg3.admin_up()
        self.pg3.set_table_ip4(1)
        self.pg3.config_ip4()
        self.pg3.resolve_arp()
        self.pg3.set_table_ip6(1)
        self.pg3.config_ip6()
        self.pg3.resolve_ndp()

    def tearDown(self):
        self.pg0.disable_mpls()
        self.pg1.disable_mpls()
        for i in self.pg_interfaces:
            i.unconfig_ip4()
            i.unconfig_ip6()
            i.set_table_ip4(0)
            i.set_table_ip6(0)
            i.admin_down()
        super(TestMPLSPIC, self).tearDown()

    def test_mpls_ibgp_pic(self):
        """ MPLS iBGP Prefix-Independent Convergence (PIC) edge convergence

        1) set up many iBGP VPN routes via a pair of iBGP peers.
        2) Check ECMP forwarding to these peers
        3) withdraw the IGP route to one of these peers.
        4) check forwarding continues to the remaining peer
        """

        #
        # IGP+LDP core routes
        #
        core_10_0_0_45 = VppIpRoute(self, "10.0.0.45", 32,
                                    [VppRoutePath(self.pg0.remote_ip4,
                                                  self.pg0.sw_if_index,
                                                  labels=[45])])
        core_10_0_0_45.add_vpp_config()

        core_10_0_0_46 = VppIpRoute(self, "10.0.0.46", 32,
                                    [VppRoutePath(self.pg1.remote_ip4,
                                                  self.pg1.sw_if_index,
                                                  labels=[46])])
        core_10_0_0_46.add_vpp_config()
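        # the VPN routes below recurse via these two /32s; each core route
        # pushes its IGP label (45 or 46) towards the respective core peer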

        #
        # Lots of VPN routes. We need more than 64 so VPP will build
        # the fast convergence indirection
        #
        vpn_routes = []
        pkts = []
        for ii in range(NUM_PKTS):
            dst = "192.168.1.%d" % ii
            vpn_routes.append(VppIpRoute(
                self, dst, 32,
                [VppRoutePath(
                    "10.0.0.45",
                    0xffffffff,
                    labels=[145],
                    flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_HOST),
                 VppRoutePath(
                     "10.0.0.46",
                     0xffffffff,
                     labels=[146],
                     flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_HOST)],
                table_id=1))
            vpn_routes[ii].add_vpp_config()

            pkts.append(Ether(dst=self.pg2.local_mac,
                              src=self.pg2.remote_mac) /
                        IP(src=self.pg2.remote_ip4, dst=dst) /
                        UDP(sport=1234, dport=1234) /
                        Raw('\xa5' * 100))

        #
        # Send the packet stream (one pkt to each VPN route)
        #  - expect a 50-50 split of the traffic
        #
        self.pg2.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg0._get_capture(NUM_PKTS)
        rx1 = self.pg1._get_capture(NUM_PKTS)

        # not testing the LB hashing algorithm so we're not concerned
        # with the split ratio, just as long as neither is 0
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))

        #
        # use a test CLI command to stop the FIB walk process, this
        # will prevent the FIB converging the VPN routes and thus allow
        # us to probe the interim (post-fail, pre-converge) state
        #
        self.vapi.ppcli("test fib-walk-process disable")

        #
        # Withdraw one of the IGP routes
        #
        core_10_0_0_46.remove_vpp_config()

        #
        # now all packets should be forwarded through the remaining peer
        #
        self.vapi.ppcli("clear trace")
        self.pg2.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg0.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # enable the FIB walk process to converge the FIB
        #
        self.vapi.ppcli("test fib-walk-process enable")

        #
        # packets should still be forwarded through the remaining peer
        #
        self.pg2.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg0.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # Add the IGP route back and we return to load-balancing
        #
        core_10_0_0_46.add_vpp_config()

        self.pg2.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg0._get_capture(NUM_PKTS)
        rx1 = self.pg1._get_capture(NUM_PKTS)
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))

    def test_mpls_ebgp_pic(self):
        """ MPLS eBGP Prefix-Independent Convergence (PIC) edge convergence

        1) set up many eBGP VPN routes via a pair of eBGP peers.
        2) Check ECMP forwarding to these peers
        3) withdraw one eBGP path - expect LB across remaining eBGP
        """

        #
        # Lots of VPN routes. We need more than 64 so VPP will build
        # the fast convergence indirection
        #
        vpn_routes = []
        vpn_bindings = []
        pkts = []
        for ii in range(NUM_PKTS):
            dst = "192.168.1.%d" % ii
            local_label = 1600 + ii
            vpn_routes.append(VppIpRoute(
                self, dst, 32,
                [VppRoutePath(
                    self.pg2.remote_ip4,
                    0xffffffff,
                    nh_table_id=1,
                    flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_ATTACHED),
                 VppRoutePath(
                     self.pg3.remote_ip4,
                     0xffffffff,
                     nh_table_id=1,
                     flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_ATTACHED)],
                table_id=1))
            vpn_routes[ii].add_vpp_config()

            vpn_bindings.append(VppMplsIpBind(self, local_label, dst, 32,
                                              ip_table_id=1))
            vpn_bindings[ii].add_vpp_config()
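            # the binding makes this router the egress PE for the prefix:
            # packets arriving with local_label are un-labelled and forwarded
            # using the VRF 1 (table 1) route for dst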

            pkts.append(Ether(dst=self.pg0.local_mac,
                              src=self.pg0.remote_mac) /
                        MPLS(label=local_label, ttl=64) /
                        IP(src=self.pg0.remote_ip4, dst=dst) /
                        UDP(sport=1234, dport=1234) /
                        Raw('\xa5' * 100))

        #
        # Send the packet stream (one pkt to each VPN route)
        #  - expect a 50-50 split of the traffic
        #
        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg2._get_capture(NUM_PKTS)
        rx1 = self.pg3._get_capture(NUM_PKTS)

        # not testing the LB hashing algorithm so we're not concerned
        # with the split ratio, just as long as neither is 0
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))

        #
        # use a test CLI command to stop the FIB walk process, this
        # will prevent the FIB converging the VPN routes and thus allow
        # us to probe the interim (post-fail, pre-converge) state
        #
        self.vapi.ppcli("test fib-walk-process disable")

        #
        # withdraw the connected prefix on the interface.
        #
        self.pg2.unconfig_ip4()

        #
        # now all packets should be forwarded through the remaining peer
        #
        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg3.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # enable the FIB walk process to converge the FIB
        #
        self.vapi.ppcli("test fib-walk-process enable")

        #
        # packets should still be forwarded through the remaining peer
        #
        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg3.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # put the connected routes back
        #
        self.pg2.config_ip4()
        self.pg2.resolve_arp()

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg2._get_capture(NUM_PKTS)
        rx1 = self.pg3._get_capture(NUM_PKTS)
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))

    def test_mpls_v6_ebgp_pic(self):
        """ MPLSv6 eBGP Prefix-Independent Convergence (PIC) edge convergence

        1) set up many eBGP VPNv6 routes via a pair of eBGP peers
        2) Check ECMP forwarding to these peers
        3) withdraw one eBGP path - expect LB across remaining eBGP
        """

        #
        # Lots of VPN routes. We need more than 64 so VPP will build
        # the fast convergence indirection
        #
        vpn_routes = []
        vpn_bindings = []
        pkts = []
        for ii in range(NUM_PKTS):
            dst = "3000::%d" % ii
            local_label = 1600 + ii
            vpn_routes.append(VppIpRoute(
                self, dst, 128,
                [VppRoutePath(
                    self.pg2.remote_ip6,
                    0xffffffff,
                    nh_table_id=1,
                    flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_ATTACHED),
                 VppRoutePath(
                     self.pg3.remote_ip6,
                     0xffffffff,
                     nh_table_id=1,
                     flags=FibPathFlags.FIB_PATH_FLAG_RESOLVE_VIA_ATTACHED)],
                table_id=1))
            vpn_routes[ii].add_vpp_config()

            vpn_bindings.append(VppMplsIpBind(self, local_label, dst, 128,
                                              ip_table_id=1))
            vpn_bindings[ii].add_vpp_config()

            pkts.append(Ether(dst=self.pg0.local_mac,
                              src=self.pg0.remote_mac) /
                        MPLS(label=local_label, ttl=64) /
                        IPv6(src=self.pg0.remote_ip6, dst=dst) /
                        UDP(sport=1234, dport=1234) /
                        Raw('\xa5' * 100))
            self.logger.info(self.vapi.cli("sh ip6 fib %s" % dst))

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg2._get_capture(NUM_PKTS)
        rx1 = self.pg3._get_capture(NUM_PKTS)
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))

        #
        # use a test CLI command to stop the FIB walk process, this
        # will prevent the FIB converging the VPN routes and thus allow
        # us to probe the interim (post-fail, pre-converge) state
        #
        self.vapi.ppcli("test fib-walk-process disable")

        #
        # withdraw the connected prefix on the interface
        # and shut down the interface so the ND cache is flushed
        #
        self.pg2.unconfig_ip6()
        self.pg2.admin_down()

        #
        # now all packets should be forwarded through the remaining peer
        #
        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg3.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # enable the FIB walk process to converge the FIB
        #
        self.vapi.ppcli("test fib-walk-process enable")
        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg3.get_capture(NUM_PKTS)
        self.assertEqual(len(pkts), len(rx0),
                         "Expected all (%s) packets across single path. "
                         "rx0: %s." % (len(pkts), len(rx0)))

        #
        # put the connected routes back
        #
        self.pg2.admin_up()
        self.pg2.config_ip6()
        self.pg2.resolve_ndp()

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg2._get_capture(NUM_PKTS)
        rx1 = self.pg3._get_capture(NUM_PKTS)
        self.assertNotEqual(0, len(rx0))
        self.assertNotEqual(0, len(rx1))
        self.assertEqual(len(pkts), len(rx0) + len(rx1),
                         "Expected all (%s) packets across both ECMP paths. "
                         "rx0: %s rx1: %s." % (len(pkts), len(rx0), len(rx1)))


class TestMPLSL2(VppTestCase):
    """ MPLS-L2 """

    @classmethod
    def setUpClass(cls):
        super(TestMPLSL2, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(TestMPLSL2, cls).tearDownClass()

    def setUp(self):
        super(TestMPLSL2, self).setUp()

        # create 2 pg interfaces
        self.create_pg_interfaces(range(2))

        # create the default MPLS table
        self.tables = []
        tbl = VppMplsTable(self, 0)
        tbl.add_vpp_config()
        self.tables.append(tbl)

        # use pg0 as the core facing interface
        self.pg0.admin_up()
        self.pg0.config_ip4()
        self.pg0.resolve_arp()
        self.pg0.enable_mpls()

        # use the other interface for the customer facing L2 link
        for i in self.pg_interfaces[1:]:
            i.admin_up()

    def tearDown(self):
        for i in self.pg_interfaces[1:]:
            i.admin_down()

        self.pg0.disable_mpls()
        self.pg0.unconfig_ip4()
        self.pg0.admin_down()
        super(TestMPLSL2, self).tearDown()

    def verify_capture_tunneled_ethernet(self, capture, sent, mpls_labels):
        capture = verify_filter(capture, sent)

        self.assertEqual(len(capture), len(sent))

        for i in range(len(capture)):
            tx = sent[i]
            rx = capture[i]

            # the MPLS TTL is 255 since it enters a new tunnel
            verify_mpls_stack(self, rx, mpls_labels)

            tx_eth = tx[Ether]
            rx_eth = Ether(scapy.compat.raw(rx[MPLS].payload))

            self.assertEqual(rx_eth.src, tx_eth.src)
            self.assertEqual(rx_eth.dst, tx_eth.dst)

    def test_vpws(self):
        """ Virtual Private Wire Service """

        #
        # Create an MPLS tunnel that pushes 1 label
        # For Ethernet over MPLS the uniform mode is irrelevant since ttl/cos
        # information is not in the packet, but we test it works anyway
        #
        mpls_tun_1 = VppMPLSTunnelInterface(
            self,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(42, MplsLspMode.UNIFORM)])],
            is_l2=1)
        mpls_tun_1.add_vpp_config()
        mpls_tun_1.admin_up()

        #
        # Create a label entry for 55 that does L2 input to the tunnel
        #
        route_55_eos = VppMplsRoute(
            self, 55, 1,
            [VppRoutePath("0.0.0.0",
                          mpls_tun_1.sw_if_index,
                          type=FibPathType.FIB_PATH_TYPE_INTERFACE_RX,
                          proto=FibPathProto.FIB_PATH_NH_PROTO_ETHERNET)],
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_ETHERNET)
        route_55_eos.add_vpp_config()
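        # packets arriving from the core with label 55 are now treated as
        # ethernet frames and handed to the tunnel interface's L2 input path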

        #
        # Cross-connect the tunnel with one of the customer's L2 interfaces
        #
        self.vapi.sw_interface_set_l2_xconnect(self.pg1.sw_if_index,
                                               mpls_tun_1.sw_if_index,
                                               enable=1)
        self.vapi.sw_interface_set_l2_xconnect(mpls_tun_1.sw_if_index,
                                               self.pg1.sw_if_index,
                                               enable=1)
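        # an L2 xconnect is programmed per RX interface, so both directions are
        # configured to build a bidirectional virtual wire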

        #
        # inject a packet from the core
        #
        pcore = (Ether(dst=self.pg0.local_mac,
                       src=self.pg0.remote_mac) /
                 MPLS(label=55, ttl=64) /
                 Ether(dst="00:00:de:ad:ba:be",
                       src="00:00:de:ad:be:ef") /
                 IP(src="10.10.10.10", dst="11.11.11.11") /
                 UDP(sport=1234, dport=1234) /
                 Raw('\xa5' * 100))

        tx0 = pcore * NUM_PKTS
        rx0 = self.send_and_expect(self.pg0, tx0, self.pg1)
        payload = pcore[MPLS].payload

        self.assertEqual(rx0[0][Ether].dst, payload[Ether].dst)
        self.assertEqual(rx0[0][Ether].src, payload[Ether].src)

        #
        # Inject a packet from the customer/L2 side
        #
        tx1 = pcore[MPLS].payload * NUM_PKTS
        rx1 = self.send_and_expect(self.pg1, tx1, self.pg0)

        self.verify_capture_tunneled_ethernet(rx1, tx1, [VppMplsLabel(42)])

    def test_vpls(self):
        """ Virtual Private LAN Service """
        #
        # Create an L2 MPLS tunnel
        #
        mpls_tun = VppMPLSTunnelInterface(
            self,
            [VppRoutePath(self.pg0.remote_ip4,
                          self.pg0.sw_if_index,
                          labels=[VppMplsLabel(42)])],
            is_l2=1)
        mpls_tun.add_vpp_config()
        mpls_tun.admin_up()

        #
        # Create a label entry for 55 that does L2 input to the tunnel
        #
        route_55_eos = VppMplsRoute(
            self, 55, 1,
            [VppRoutePath("0.0.0.0",
                          mpls_tun.sw_if_index,
                          type=FibPathType.FIB_PATH_TYPE_INTERFACE_RX,
                          proto=FibPathProto.FIB_PATH_NH_PROTO_ETHERNET)],
            eos_proto=FibPathProto.FIB_PATH_NH_PROTO_ETHERNET)
        route_55_eos.add_vpp_config()

        #
        # add the tunnel to the customer's bridge-domain
        #
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=mpls_tun.sw_if_index, bd_id=1)
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg1.sw_if_index, bd_id=1)
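        # the tunnel and pg1 are now both ports in bridge-domain 1, which learns
        # the source MACs of the frames below and then forwards, rather than
        # floods, between the two ports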

        #
        # Packet from the customer interface and from the core
        #
        p_cust = (Ether(dst="00:00:de:ad:ba:be",
                        src="00:00:de:ad:be:ef") /
                  IP(src="10.10.10.10", dst="11.11.11.11") /
                  UDP(sport=1234, dport=1234) /
                  Raw('\xa5' * 100))
        p_core = (Ether(src="00:00:de:ad:ba:be",
                        dst="00:00:de:ad:be:ef") /
                  IP(dst="10.10.10.10", src="11.11.11.11") /
                  UDP(sport=1234, dport=1234) /
                  Raw('\xa5' * 100))

        #
        # The BD is learning, so send in one of each packet to learn
        #
        p_core_encap = (Ether(dst=self.pg0.local_mac,
                              src=self.pg0.remote_mac) /
                        MPLS(label=55, ttl=64) /
                        p_core)

        self.pg1.add_stream(p_cust)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()
        self.pg0.add_stream(p_core_encap)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        # we've learnt this MAC, so expect the frame to be forwarded
        rx0 = self.pg1.get_capture(1)

        self.assertEqual(rx0[0][Ether].dst, p_core[Ether].dst)
        self.assertEqual(rx0[0][Ether].src, p_core[Ether].src)

        #
        # now a stream in each direction
        #
        self.pg1.add_stream(p_cust * NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg0.get_capture(NUM_PKTS)

        self.verify_capture_tunneled_ethernet(rx0, p_cust*NUM_PKTS,
                                              [VppMplsLabel(42)])

        #
        # remove interfaces from customers bridge-domain
        #
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=mpls_tun.sw_if_index, bd_id=1, enable=0)
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg1.sw_if_index, bd_id=1, enable=0)


if __name__ == '__main__':
    unittest.main(testRunner=VppTestRunner)