path: root/src/vlibapi
Age  Commit message  Author  Files  Lines
2018-06-21  Implement DHCPv6 IA NA client (VPP-1094)  Juraj Sloboda  1  -1/+2
Change-Id: I682a47d6cf9975aca6136188d28ee93eaadf4fe3 Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-06-14  Fix SEGV in generic event sub reaper  Matthew Smith  1  -1/+1
When a client subscribed to receive events disconnected from the API, the code deleting its subscription performed a hash lookup against a pointer that did not refer to a hash, resulting in a SEGV. Perform the hash lookup against the correct hash. Change-Id: I011d7479e2c3b9ee50721cf7499385c3ff7f704a Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2018-06-13  Stat segment / client: "show run" works now  Dave Barach  2  -2/+3
Seems to have minimal-to-zero performance consequences. Data appears accurate: results match the debug CLI output. Checked at low rates, and at 27 MPPS sprayed across two worker threads. Change-Id: I09ede5150b88a91547feeee448a2854997613004 Signed-off-by: Dave Barach <dave@barachs.net>
2018-06-08  Add reaper functions to want events APIs (VPP-1304)  Neale Ranns  1  -1/+19
Change-Id: Iaeb52d94cb6da63ee93af7c1cf2dade6046cba1d Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-06-08  Implement DHCPv6 PD client (VPP-718, VPP-1050)  Juraj Sloboda  1  -1/+2
Change-Id: I72a1ccdfdd5573335ef78fc01d5268934c73bd31 Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-06-05  VPP API: Memory trace  Ole Troan  2  -0/+19
If you plan to put a hash into shared memory, the key sum and key equal functions MUST be set to constants such as KEY_FUNC_STRING, KEY_FUNC_MEM, etc. -lvppinfra is PIC, which means that the process which set up the hash won't have the same idea of where the key sum and key compare functions live in other processes. Change-Id: Ib3b5963a0d2fb467b91e1f16274df66ac74009e9 Signed-off-by: Ole Troan <ot@cisco.com> Signed-off-by: Dave Barach <dave@barachs.net>
2018-03-16  IPv6 ND Router discovery data plane (VPP-1095)  Juraj Sloboda  1  -1/+2
Add API call to send Router Solicitation messages. Save info from incoming Router Advertisement messages and notify listeners. Change-Id: Ie518b5492231e03291bd4c4280be4727bfecab46 Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-01-25  session: add support for memfd segments  Florin Coras  1  -0/+1
- update segment manager and session api to work with both flavors of ssvm segments - added generic ssvm slave/master init and del functions - cleanup/refactor tcp_echo - fixed uses of svm fifo pool as vector Change-Id: Ieee8b163faa407da6e77e657a2322de213a9d2a0 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-01-22  svm: queue sub: Add conditional timed wait  Mohsin Kazmi  1  -1/+1
On the receive side, svm queue only permits blocking and non-blocking calls. This patch adds timed-wait blocking functionality, which returns either on signal/event or on a given timeout. It also preserves the original behavior, so it will not hurt client applications which are using svm queue. Change-Id: Ic10632170330a80afb8bc781d4ccddfe4da2c69a Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-01-09  api: refactor vlibmemory  Florin Coras  6  -303/+38
- separate client/server code for both memory and socket apis - separate memory api code from generic vlib api code - move unix_shared_memory_fifo to svm and rename to svm_fifo_t - overall declutter Change-Id: I90cdd98ff74d0787d58825b914b0f1eafcfa4dc2 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-01-05  sock api: add infra for bootstrapping shm clients  Florin Coras  1  -44/+20
- add function to sock client that bootstraps shm api - allow sock clients to request custom shm ring configs Change-Id: Iabc1dd4f0dc8bbf8ba24de37f4966339fcf86107 Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-10-13  VPP-1027: DNS name resolver  Dave Barach  1  -1/+16
This patch is a plausible first-cut, suitable for initial testing by vcl (host stack client library). Main features: - recursive name resolution - multiple ip4/ip6 name servers - cache size limit enforcement - currently limited to 65K - ttl / aging - static mapping support - show / clear / debug CLI commands Binary APIs provided for the following: - add/delete name servers - enable/disable the name cache - resolve a name To Do list: - Respond to ip4/ip6 client DNS requests (vs. binary API requests) - Perf / scale tuning - map pending transaction ids to pool indices, so the cache can (greatly) exceed 65K entries - Security improvements - Use unpredictable dns transaction IDs, related to previous item - Make sure that response-packet src ip addresses match the server - Add binary APIs - deliver raw response data to clients - control recursive name resolution - Documentation Change-Id: I48c373d5c05d7108ccd814d4055caf8c75ca10b7 Signed-off-by: Dave Barach <dave@barachs.net>
2017-10-10  API versioning: Fix coverity errors from strncpy()  Ole Troan  1  -1/+1
Change-Id: Ife87f9b00f918ff1bb8c91c6f13ebe53a3555a12 Signed-off-by: Ole Troan <ot@cisco.com>
2017-10-09  vppapigen: support per-file (major,minor,patch) version stamps  Dave Barach  3  -0/+22
Add one of these statements to foo.api: vl_api_version 1.2.3 to generate a version tuple stanza in foo.api.h: /****** Version tuple *****/ vl_api_version_tuple(foo, 1, 2, 3) Change-Id: Ic514439e4677999daa8463a94f948f76b132ff15 Signed-off-by: Dave Barach <dave@barachs.net> Signed-off-by: Ole Troan <ot@cisco.com>
2017-10-06  Coverity fixes for API socket  Chris Luke  1  -5/+7
- Coverity whines about a zero-length field not being initialized. Change the struct setup to an initializer which will implicitly zero all unused fields, and add the coverity notation that should stop it whining. One or both of these should shut it up! - Fix some incorrect use of ntohl that was tainting values; in these cases htonl should have been used, and avoid a double-swap. Change-Id: I00493a77eb23a0b8feb647165ee349e1e9d5cfdb Signed-off-by: Chris Luke <chrisy@flirble.org>
2017-10-05  Clean up "show api ring" debug CLI  Dave Barach  1  -1/+4
Add a primary svm_region_t pointer to the api_main_t so we can always find the primary region, even when processing an API message from a memfd segment. Change-Id: I07fffe2ac1088ce44de10a34bc771ddc93af967d Signed-off-by: Dave Barach <dave@barachs.net>
2017-10-03  Repair vlib API socket server  Dave Barach  3  -98/+167
- Teach vpp_api_test to send/receive API messages over sockets - Add memfd-based shared memory - Add api messages to create memfd-based shared memory segments - vpp_api_test supports both socket and shared memory segment connections - vpp_api_test pivot from socket to shared memory API messaging - add socket client support to libvlibclient.so - dead client reaper sends ping messages, container-friendly - dead client reaper falls back to kill (<pid>, 0) live checking if e.g. a python app goes silent for tens of seconds - handle ping messages in python client support code - teach show api ring about pairwise shared-memory segments - fix ip probing of already resolved destinations (VPP-998) We'll need this work to implement proper host-stack client isolation Change-Id: Ic23b65f75c854d0393d9a2e9d6b122a9551be769 Signed-off-by: Dave Barach <dave@barachs.net> Signed-off-by: Dave Wallace <dwallacelf@gmail.com> Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-09-28  General documentation updates  Chris Luke  1  -37/+40
- We now have several developer-focused docs, so create an index page for them. - Rework several docs to fit into the index structure. - Experiment with code highlighting; tweak the CSS slightly to make it slightly nicer to look at. Change-Id: I4185a18f84fa0764745ca7a3148276064a3155c6 Signed-off-by: Chris Luke <chrisy@flirble.org>
2017-09-27  VPP-990 remove registered handler if control ping fails (tag: v18.01-rc0)  Matej Perina  2  -0/+13
Change-Id: I5ca5763f0dc0a73cc6f014b855426b7ac180f356 Signed-off-by: Matej Perina <mperina@cisco.com>
2017-09-25  Add binary API documentation  Dave Barach  2  -70/+471
Change-Id: Id1a5da12b13d87bacfa81094f471b95db40c39be Signed-off-by: Dave Barach <dave@barachs.net>
2017-09-22  IP-MAC, ND: wildcard events, fix sending multiple events  Eyal Bari  1  -0/+1
The wildcard ND events publisher was sending the last event multiple times. Change-Id: I6c30f2de03fa825e79df9005a3cfaaf68ff7ea2f Signed-off-by: Eyal Bari <ebari@cisco.com>
2017-09-20  Improve API message handler re-registration check  Dave Barach  1  -3/+5
Change-Id: Iedcea2fb45052852666b91a21eed011f5593313d Signed-off-by: Dave Barach <dave@barachs.net>
2017-09-18  L2BD, ARP-TERM: fix ARP query report mechanism + test  Eyal Bari  1  -1/+2
The previous mechanism emitted duplicates of the last event when handling multiple ARP queries. Tests: * ARP events sent for GARPs * duplicate suppression * verify no events when disabled Change-Id: I84adc23980d43b819261eccf02ec056b5cec61df Signed-off-by: Eyal Bari <ebari@cisco.com>
2017-09-13  API message table inspection utilities  Dave Barach  1  -0/+3
Add doxygen tags for show/clear commands Change-Id: Ic939c561b15b0b720a8db1ecacc17e3d74419e1d Signed-off-by: Dave Barach <dave@barachs.net>
2017-09-11  Recombine diags and minimum barrier open time changes (VPP-968)  Colin Tregenza Dancer  2  -2/+14
Support logging to both syslog and elog Also include DaveB is_mp_safe fix, which had been lost Change-Id: If82f7969e2f43c63c3fed5b1a0c7434c90c1f380 Signed-off-by: Colin Tregenza Dancer <ctd@metaswitch.com>
2017-09-09  move unix_file_* code to vppinfra  Damjan Marion  1  -1/+1
This will allow us to use this code in client libraries without vlib. Change-Id: I8557b752496841ba588aa36b6082cbe2cd1867fe Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-09-07  Allow individual stats API and introduce stats.api  Keith Burns (alagalah)  1  -1/+0
- want_interface_simple_stats - want_interface_combined_stats - want_ip4|6_fib|nbr_stats Change-Id: I4e97461def508958b3e429c3fe8859b36fef2d18 Signed-off-by: Keith Burns (alagalah) <alagalah@gmail.com>
2017-07-01  Refactor API message handling code  Klement Sekera  2  -255/+293
This is preparation for new C API. Moving common stuff to separate headers reduces dependency issues. Change-Id: Ie7adb23398de72448e5eba6c1c1da4e1bc678725 Signed-off-by: Klement Sekera <ksekera@cisco.com>
2017-06-01  Improve fifo allocator performance  Dave Barach  1  -0/+3
- add option to preallocate fifos in a segment - track active fifos with doubly linked list instead of vector - update udp redirect test code to read fifo pointers from API call instead of digging them up from fifo segment header - input-node based active-open session generator Change-Id: I804b81e99d95f8690d17e12660c6645995e28a9a Signed-off-by: Dave Barach <dave@barachs.net> Signed-off-by: Florin Coras <fcoras@cisco.com> Signed-off-by: Dave Barach <dbarach@cisco.com>
2017-05-19  Enforce Bridge Domain ID range to match 24-bit VNI range  John Lo  1  -0/+14
Enforce bridge domain ID range to allow a maximum value of 16M which matches the range of 24-bit VNI used for virtual overlay network ID. Fix "show bridge-domain" output to allow full 16M BD ID range to be displayed using 8-digit spaces. Change-Id: I80d9c76ea7c001bcccd3c19df1f3e55d2970f01c Signed-off-by: John Lo <loj@cisco.com>
2017-05-17  Add vl_msg_api_get_message_length[_inline]  Dave Barach  2  -0/+17
Change-Id: I6d86cf7966d51ec7a507bbb59c586adbfb45be05 Signed-off-by: Dave Barach <dave@barachs.net>
2017-05-03  A sprinkling of const in vlibmemory/api.h and friends  Neale Ranns  2  -7/+8
Change-Id: I953ebb37eeec7de0c4a6b00258c3c67a83cbc020 Signed-off-by: Neale Ranns <nranns@cisco.com>
2017-03-30  Clean up more Debian packaging symbol warnings  Dave Barach  1  -0/+227
Change-Id: I6081a38af3817f0957a2faf0e3e41afa4a74f3a4 Signed-off-by: Dave Barach <dave@barachs.net>
2017-03-16  API: replaced all REPLY_MACROs with api_helper_macros.h  Eyal Bari  1  -4/+4
Change-Id: I08ab1fd0abdd1db4aff11a38c9c0134b01368e11 Signed-off-by: Eyal Bari <ebari@cisco.com>
2017-03-15  API: define optional base_id for REPLY_MACROs  Eyal Bari  1  -4/+8
this enables sharing the api_helper_macros.h implementation Change-Id: Ie3fc89f3b4b5a47fcfd4b5776db90e249c55dbc3 Signed-off-by: Eyal Bari <ebari@cisco.com>
2017-03-14  API: support hidden sw interfaces  Eyal Bari  1  -9/+9
validate interfaces: added check for hidden interfaces; interface dump: don't send hidden interfaces; set_unnumbered: test for hidden; vl_api_create_vlan_subif_t_handler, vl_api_create_subif_t_handler: fixed potential memory leak; some other minor refactors to make the code clearer and shorter. Change-Id: Icce6b724336b7d1536fbd07a74bf7abe4916d2c0 Signed-off-by: Eyal Bari <ebari@cisco.com>
2017-03-14  Clean up dead API client reaper callback scheme  Dave Barach  1  -0/+44
Change-Id: Iec3df234ca9f717d87787cefc76b73ed9ad42332 Signed-off-by: Dave Barach <dave@barachs.net>
2017-03-09  vlib_mains == 0 special cases be gone  Dave Barach  3  -520/+29
Clean up spurious binary API client link dependency on libvlib.so, which managed to hide behind vlib_mains == 0 checks reached by VLIB_xxx_FUNCTION macros. Change-Id: I5df1f8ab07dca1944250e643ccf06e60a8462325 Signed-off-by: Dave Barach <dave@barachs.net>
2017-03-03  Improve api trace replay consistency checking  Dave Barach  2  -0/+9
Change-Id: I2c4b9646d53e4c008ccbe6d09c6a683c776c1f60 Signed-off-by: Dave Barach <dave@barachs.net>
2017-03-02  Clean up binary api message handler registration issues  Dave Barach  2  -0/+19
Removed a fair number of "BUG" message handlers, due to conflicts with actual message handlers in api_format.c. Vpp itself had no business receiving certain messages, up to the point where we started building in relevant code from vpp_api_test. Eliminated all but one duplicate registration complaint. That one needs attention from the vxlan team since the duplicated handlers have diverged. Change-Id: Iafce5429d2f906270643b4ea5f0130e20beb4d1d Signed-off-by: Dave Barach <dave@barachs.net>
2017-02-14  Fix typo in API warning message.  Jon Loeliger  1  -1/+1
Change-Id: I51488620a7eeaf7a0edba71437d2b49ae3cf0bf5 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-02-07  Fix M(), M2() macros in VAT  Filip Tehlar  1  -2/+4
Change-Id: I76593632cde97f7cb80bbc395735404f39f3bd3f Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
2017-02-02  Refactor fragile msg macros W and W2 to not bury return control flow.  Jon Loeliger  1  -6/+6
Instead, have them accept and assign a return parameter, leaving the return control flow up to the caller. Clean up otherwise misleading returns present even after "NOT REACHED" comments. Change-Id: I0861921f73ab65d55b95eabd27514f0129152723 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-02-02  Localize the timeout variable within the W message macro.  Jon Loeliger  1  -2/+2
Rather than rely on an unbound variable, explicitly introduce the timeout variable within the 'do { ... } while (0)' construct as a block-local variable. Change-Id: I6e78635290f9b5ab3f56b7f116c5fa762c88c9e9 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-02-02  Convert message macro S to accept a message pointer parameter  Jon Loeliger  1  -1/+1
Rather than blindly assuming an unbound, fixed message parameter, explicitly pass it as a parameter to the S() macro. Change-Id: Ieea1f1815cadd2eec7d9240408d69acdc3caa49a Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-02-02  Convert M() and M2() macros to honor their second, mp, parameter.  Jon Loeliger  1  -2/+2
Now that all the M() and M2() uses properly supply a message pointer as second parameter, fix the macros to use it. Change-Id: I0b8f4848416c3fa2e06755ad6ea7171b7c546124 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-01-25  Repair plugin binary API message numbering  Dave Barach  1  -10/+8
Change-Id: I422a3f168bd483e011cfaf54af022cb79b78db02 Signed-off-by: Dave Barach <dave@barachs.net>
2017-01-23  binary-api debug CLI works with plugins  Dave Barach  2  -1/+76
Change-Id: I81f33f5153d5afac94b66b5a8cb91da77463af79 Signed-off-by: Dave Barach <dave@barachs.net>
2017-01-09  Self-service garbage collection for the API message allocator  Dave Barach  1  -1/+2
Change-Id: Iadc08eede15fa5978e4010bbece0232aab8b0fee Signed-off-by: Dave Barach <dave@barachs.net>
2017-01-05  Fix uninitialized stack local, VPP-581  Dave Barach  1  -0/+2
Sporadically messes up the client message allocation ring, by setting c->message_bounce[msg_id] non-zero. A day-1 bug, made blatantly obvious by the python API language binding for no particular reason. Manually cherry-picked from stable/1701 due to the recent tree reorganization. Change-Id: Ifa03c5487436cbe50a6204db48fd9ce4938e32bb Signed-off-by: Dave Barach <dave@barachs.net>
/*
 * Copyright (c) 2016 Cisco and/or its affiliates.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
/**
 * @file
 * @brief NAT44 inside to outside network translation
 */

#include <vlib/vlib.h>
#include <vnet/vnet.h>

#include <vnet/ip/ip.h>
#include <vnet/ethernet/ethernet.h>
#include <vnet/fib/ip4_fib.h>
#include <vnet/udp/udp_local.h>
#include <nat/nat.h>
#include <nat/lib/ipfix_logging.h>
#include <nat/nat_inlines.h>
#include <nat/nat44/inlines.h>
#include <nat/nat_syslog.h>
#include <nat/nat_ha.h>

#include <vppinfra/hash.h>
#include <vppinfra/error.h>
#include <vppinfra/elog.h>
#include <nat/lib/nat_inlines.h>

typedef struct
{
  u32 sw_if_index;
  u32 next_index;
  u32 session_index;
  u32 is_slow_path;
  u32 is_hairpinning;
} snat_in2out_trace_t;

/* packet trace format function */
static u8 *
format_snat_in2out_trace (u8 * s, va_list * args)
{
  CLIB_UNUSED (vlib_main_t * vm) = va_arg (*args, vlib_main_t *);
  CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
  snat_in2out_trace_t *t = va_arg (*args, snat_in2out_trace_t *);
  char *tag;

  tag = t->is_slow_path ? "NAT44_IN2OUT_SLOW_PATH" : "NAT44_IN2OUT_FAST_PATH";

  s = format (s, "%s: sw_if_index %d, next index %d, session %d", tag,
	      t->sw_if_index, t->next_index, t->session_index);
  if (t->is_hairpinning)
    {
      s = format (s, ", with-hairpinning");
    }

  return s;
}

static u8 *
format_snat_in2out_fast_trace (u8 * s, va_list * args)
{
  CLIB_UNUSED (vlib_main_t * vm) = va_arg (*args, vlib_main_t *);
  CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
  snat_in2out_trace_t *t = va_arg (*args, snat_in2out_trace_t *);

  s = format (s, "NAT44_IN2OUT_FAST: sw_if_index %d, next index %d",
	      t->sw_if_index, t->next_index);

  return s;
}

#define foreach_snat_in2out_error                       \
_(UNSUPPORTED_PROTOCOL, "unsupported protocol")         \
_(OUT_OF_PORTS, "out of ports")                         \
_(BAD_OUTSIDE_FIB, "outside VRF ID not found")          \
_(BAD_ICMP_TYPE, "unsupported ICMP type")               \
_(NO_TRANSLATION, "no translation")                     \
_(MAX_SESSIONS_EXCEEDED, "maximum sessions exceeded")   \
_(CANNOT_CREATE_USER, "cannot create NAT user")

typedef enum
{
#define _(sym,str) SNAT_IN2OUT_ERROR_##sym,
  foreach_snat_in2out_error
#undef _
    SNAT_IN2OUT_N_ERROR,
} snat_in2out_error_t;

static char *snat_in2out_error_strings[] = {
#define _(sym,string) string,
  foreach_snat_in2out_error
#undef _
};
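
/*
 * Illustrative note (not part of the original source): the
 * foreach_snat_in2out_error X-macro above is expanded twice, once to
 * build the error enum and once to build the parallel string table,
 * which keeps the two in sync by construction. For example, the first
 * list entry expands to:
 *
 *   SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL,   // in snat_in2out_error_t
 *   "unsupported protocol",                   // in snat_in2out_error_strings[]
 *
 * so snat_in2out_error_strings[SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL]
 * always yields the matching counter description.
 */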

typedef enum
{
  SNAT_IN2OUT_NEXT_LOOKUP,
  SNAT_IN2OUT_NEXT_DROP,
  SNAT_IN2OUT_NEXT_ICMP_ERROR,
  SNAT_IN2OUT_NEXT_SLOW_PATH,
  SNAT_IN2OUT_N_NEXT,
} snat_in2out_next_t;

static inline int
snat_not_translate (snat_main_t * sm, vlib_node_runtime_t * node,
		    u32 sw_if_index0, ip4_header_t * ip0, u32 proto0,
		    u32 rx_fib_index0, u32 thread_index)
{
  udp_header_t *udp0 = ip4_next_header (ip0);
  clib_bihash_kv_8_8_t kv0, value0;

  init_nat_k (&kv0, ip0->dst_address, udp0->dst_port, sm->outside_fib_index,
	      proto0);

  /* NAT packet aimed at external address, if it has active sessions */
  if (clib_bihash_search_8_8 (&sm->per_thread_data[thread_index].out2in, &kv0,
			      &value0))
    {
      /* ... or a static mapping */
      ip4_address_t placeholder_addr;
      u16 placeholder_port;
      u32 placeholder_fib_index;
      if (!snat_static_mapping_match
	  (sm, ip0->dst_address, udp0->dst_port, sm->outside_fib_index,
	   proto0, &placeholder_addr, &placeholder_port,
	   &placeholder_fib_index, 1, 0, 0, 0, 0, 0, 0))
	return 0;
    }
  else
    return 0;

  if (sm->forwarding_enabled)
    return 1;

  return snat_not_translate_fast (sm, node, sw_if_index0, ip0, proto0,
				  rx_fib_index0);
}

static inline int
nat_not_translate_output_feature (snat_main_t * sm, ip4_header_t * ip0,
				  u32 proto0, u16 src_port, u16 dst_port,
				  u32 thread_index, u32 sw_if_index)
{
  clib_bihash_kv_8_8_t kv0, value0;
  snat_interface_t *i;

  /* src NAT check */
  init_nat_k (&kv0, ip0->src_address, src_port,
	      ip4_fib_table_get_index_for_sw_if_index (sw_if_index), proto0);

  if (!clib_bihash_search_8_8
      (&sm->per_thread_data[thread_index].out2in, &kv0, &value0))
    return 1;

  /* dst NAT check */
  init_nat_k (&kv0, ip0->dst_address, dst_port,
	      ip4_fib_table_get_index_for_sw_if_index (sw_if_index), proto0);
  if (!clib_bihash_search_8_8
      (&sm->per_thread_data[thread_index].in2out, &kv0, &value0))
    {
      /* hairpinning */
    /* *INDENT-OFF* */
    pool_foreach (i, sm->output_feature_interfaces,
    ({
      if ((nat_interface_is_inside(i)) && (sw_if_index == i->sw_if_index))
        return 0;
    }));
    /* *INDENT-ON* */
      return 1;
    }

  return 0;
}

#ifndef CLIB_MARCH_VARIANT
int
nat44_i2o_is_idle_session_cb (clib_bihash_kv_8_8_t * kv, void *arg)
{
  snat_main_t *sm = &snat_main;
  nat44_is_idle_session_ctx_t *ctx = arg;
  snat_session_t *s;
  u64 sess_timeout_time;
  snat_main_per_thread_data_t *tsm = vec_elt_at_index (sm->per_thread_data,
						       ctx->thread_index);
  clib_bihash_kv_8_8_t s_kv;

  s = pool_elt_at_index (tsm->sessions, kv->value);
  sess_timeout_time = s->last_heard + (f64) nat44_session_get_timeout (sm, s);
  if (ctx->now >= sess_timeout_time)
    {
      init_nat_o2i_k (&s_kv, s);
      if (clib_bihash_add_del_8_8 (&tsm->out2in, &s_kv, 0))
	nat_elog_warn ("out2in key del failed");

      nat_ipfix_logging_nat44_ses_delete (ctx->thread_index,
					  s->in2out.addr.as_u32,
					  s->out2in.addr.as_u32,
					  s->nat_proto,
					  s->in2out.port,
					  s->out2in.port,
					  s->in2out.fib_index);

      nat_syslog_nat44_apmdel (s->user_index, s->in2out.fib_index,
			       &s->in2out.addr, s->in2out.port,
			       &s->out2in.addr, s->out2in.port, s->nat_proto);

      nat_ha_sdel (&s->out2in.addr, s->out2in.port, &s->ext_host_addr,
		   s->ext_host_port, s->nat_proto, s->out2in.fib_index,
		   ctx->thread_index);

      if (!snat_is_session_static (s))
	snat_free_outside_address_and_port (sm->addresses, ctx->thread_index,
					    &s->out2in.addr,
					    s->out2in.port, s->nat_proto);

      nat44_delete_session (sm, s, ctx->thread_index);
      return 1;
    }

  return 0;
}
#endif

static u32
slow_path (snat_main_t * sm, vlib_buffer_t * b0,
	   ip4_header_t * ip0,
	   ip4_address_t i2o_addr,
	   u16 i2o_port,
	   u32 rx_fib_index0,
	   nat_protocol_t nat_proto,
	   snat_session_t ** sessionp,
	   vlib_node_runtime_t * node, u32 next0, u32 thread_index, f64 now)
{
  snat_user_t *u;
  snat_session_t *s = 0;
  clib_bihash_kv_8_8_t kv0;
  u8 is_sm = 0;
  nat_outside_fib_t *outside_fib;
  fib_node_index_t fei = FIB_NODE_INDEX_INVALID;
  u8 identity_nat;
  fib_prefix_t pfx = {
    .fp_proto = FIB_PROTOCOL_IP4,
    .fp_len = 32,
    .fp_addr = {
		.ip4.as_u32 = ip0->dst_address.as_u32,
		},
  };
  nat44_is_idle_session_ctx_t ctx0;
  ip4_address_t sm_addr;
  u16 sm_port;
  u32 sm_fib_index;

  if (PREDICT_FALSE (nat44_maximum_sessions_exceeded (sm, thread_index)))
    {
      b0->error = node->errors[SNAT_IN2OUT_ERROR_MAX_SESSIONS_EXCEEDED];
      nat_ipfix_logging_max_sessions (thread_index,
				      sm->max_translations_per_thread);
      nat_elog_notice ("maximum sessions exceeded");
      return SNAT_IN2OUT_NEXT_DROP;
    }

  /* First try to match static mapping by local address and port */
  if (snat_static_mapping_match
      (sm, i2o_addr, i2o_port, rx_fib_index0, nat_proto, &sm_addr,
       &sm_port, &sm_fib_index, 0, 0, 0, 0, 0, &identity_nat, 0))
    {
      /* Try to create dynamic translation */
      if (snat_alloc_outside_address_and_port (sm->addresses, rx_fib_index0,
					       thread_index,
					       nat_proto,
					       &sm_addr, &sm_port,
					       sm->port_per_thread,
					       sm->per_thread_data
					       [thread_index].snat_thread_index))
	{
	  b0->error = node->errors[SNAT_IN2OUT_ERROR_OUT_OF_PORTS];
	  return SNAT_IN2OUT_NEXT_DROP;
	}
    }
  else
    {
      if (PREDICT_FALSE (identity_nat))
	{
	  *sessionp = s;
	  return next0;
	}

      is_sm = 1;
    }

  u = nat_user_get_or_create (sm, &ip0->src_address, rx_fib_index0,
			      thread_index);
  if (!u)
    {
      b0->error = node->errors[SNAT_IN2OUT_ERROR_CANNOT_CREATE_USER];
      return SNAT_IN2OUT_NEXT_DROP;
    }

  s = nat_session_alloc_or_recycle (sm, u, thread_index, now);
  if (!s)
    {
      nat44_delete_user_with_no_session (sm, u, thread_index);
      nat_elog_warn ("create NAT session failed");
      return SNAT_IN2OUT_NEXT_DROP;
    }

  if (is_sm)
    s->flags |= SNAT_SESSION_FLAG_STATIC_MAPPING;
  user_session_increment (sm, u, is_sm);
  s->in2out.addr = i2o_addr;
  s->in2out.port = i2o_port;
  s->in2out.fib_index = rx_fib_index0;
  s->nat_proto = nat_proto;
  s->out2in.addr = sm_addr;
  s->out2in.port = sm_port;
  s->out2in.fib_index = sm->outside_fib_index;
  switch (vec_len (sm->outside_fibs))
    {
    case 0:
      s->out2in.fib_index = sm->outside_fib_index;
      break;
    case 1:
      s->out2in.fib_index = sm->outside_fibs[0].fib_index;
      break;
    default:
      /* *INDENT-OFF* */
      vec_foreach (outside_fib, sm->outside_fibs)
        {
          fei = fib_table_lookup (outside_fib->fib_index, &pfx);
          if (FIB_NODE_INDEX_INVALID != fei)
            {
              if (fib_entry_get_resolving_interface (fei) != ~0)
                {
                  s->out2in.fib_index = outside_fib->fib_index;
                  break;
                }
            }
        }
      /* *INDENT-ON* */
      break;
    }
  s->ext_host_addr.as_u32 = ip0->dst_address.as_u32;
  s->ext_host_port = vnet_buffer (b0)->ip.reass.l4_dst_port;
  *sessionp = s;

  /* Add to translation hashes */
  ctx0.now = now;
  ctx0.thread_index = thread_index;
  init_nat_i2o_kv (&kv0, s, s - sm->per_thread_data[thread_index].sessions);
  if (clib_bihash_add_or_overwrite_stale_8_8
      (&sm->per_thread_data[thread_index].in2out, &kv0,
       nat44_i2o_is_idle_session_cb, &ctx0))
    nat_elog_notice ("in2out key add failed");

  init_nat_o2i_kv (&kv0, s, s - sm->per_thread_data[thread_index].sessions);
  if (clib_bihash_add_or_overwrite_stale_8_8
      (&sm->per_thread_data[thread_index].out2in, &kv0,
       nat44_o2i_is_idle_session_cb, &ctx0))
    nat_elog_notice ("out2in key add failed");

  /* log NAT event */
  nat_ipfix_logging_nat44_ses_create (thread_index,
				      s->in2out.addr.as_u32,
				      s->out2in.addr.as_u32,
				      s->nat_proto,
				      s->in2out.port,
				      s->out2in.port, s->in2out.fib_index);

  nat_syslog_nat44_apmadd (s->user_index, s->in2out.fib_index,
			   &s->in2out.addr, s->in2out.port, &s->out2in.addr,
			   s->out2in.port, s->nat_proto);

  nat_ha_sadd (&s->in2out.addr, s->in2out.port, &s->out2in.addr,
	       s->out2in.port, &s->ext_host_addr, s->ext_host_port,
	       &s->ext_host_nat_addr, s->ext_host_nat_port,
	       s->nat_proto, s->in2out.fib_index, s->flags, thread_index, 0);

  return next0;
}

#ifndef CLIB_MARCH_VARIANT
static_always_inline snat_in2out_error_t
icmp_get_key (vlib_buffer_t * b, ip4_header_t * ip0,
	      ip4_address_t * addr, u16 * port, nat_protocol_t * nat_proto)
{
  icmp46_header_t *icmp0;
  icmp_echo_header_t *echo0, *inner_echo0 = 0;
  ip4_header_t *inner_ip0 = 0;
  void *l4_header = 0;
  icmp46_header_t *inner_icmp0;

  icmp0 = (icmp46_header_t *) ip4_next_header (ip0);
  echo0 = (icmp_echo_header_t *) (icmp0 + 1);

  if (!icmp_type_is_error_message
      (vnet_buffer (b)->ip.reass.icmp_type_or_tcp_flags))
    {
      *nat_proto = NAT_PROTOCOL_ICMP;
      *addr = ip0->src_address;
      *port = vnet_buffer (b)->ip.reass.l4_src_port;
    }
  else
    {
      inner_ip0 = (ip4_header_t *) (echo0 + 1);
      l4_header = ip4_next_header (inner_ip0);
      *nat_proto = ip_proto_to_nat_proto (inner_ip0->protocol);
      *addr = inner_ip0->dst_address;
      switch (*nat_proto)
	{
	case NAT_PROTOCOL_ICMP:
	  inner_icmp0 = (icmp46_header_t *) l4_header;
	  inner_echo0 = (icmp_echo_header_t *) (inner_icmp0 + 1);
	  *port = inner_echo0->identifier;
	  break;
	case NAT_PROTOCOL_UDP:
	case NAT_PROTOCOL_TCP:
	  *port = ((tcp_udp_header_t *) l4_header)->dst_port;
	  break;
	default:
	  return SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL;
	}
    }
  return -1;			/* success */
}

/**
 * Get address and port values to be used for ICMP packet translation
 * and create session if needed
 *
 * @param[in,out] sm             NAT main
 * @param[in,out] node           NAT node runtime
 * @param[in] thread_index       thread index
 * @param[in,out] b0             buffer containing packet to be translated
 * @param[in,out] ip0            ip header
 * @param[out] addr              address to be used for translation
 * @param[out] port              port to be used for translation
 * @param[out] fib_index         fib index to be used for translation
 * @param[out] proto             protocol used for matching
 * @param[out] d                 optional output for the matched session
 *                               (snat_session_t **)
 * @param e                      unused
 * @param[out] dont_translate    set to 1 if packet should not be translated
 *
 * @returns SNAT_IN2OUT_NEXT_DROP on error, otherwise ~0 to keep the
 *          caller's next index
 */
u32
icmp_match_in2out_slow (snat_main_t * sm, vlib_node_runtime_t * node,
			u32 thread_index, vlib_buffer_t * b0,
			ip4_header_t * ip0, ip4_address_t * addr, u16 * port,
			u32 * fib_index, nat_protocol_t * proto, void *d,
			void *e, u8 * dont_translate)
{
  snat_main_per_thread_data_t *tsm = &sm->per_thread_data[thread_index];
  u32 sw_if_index0;
  snat_session_t *s0 = 0;
  clib_bihash_kv_8_8_t kv0, value0;
  u32 next0 = ~0;
  int err;
  vlib_main_t *vm = vlib_get_main ();
  *dont_translate = 0;

  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
  *fib_index = ip4_fib_table_get_index_for_sw_if_index (sw_if_index0);

  err = icmp_get_key (b0, ip0, addr, port, proto);
  if (err != -1)
    {
      b0->error = node->errors[err];
      next0 = SNAT_IN2OUT_NEXT_DROP;
      goto out;
    }

  init_nat_k (&kv0, *addr, *port, *fib_index, *proto);
  if (clib_bihash_search_8_8 (&tsm->in2out, &kv0, &value0))
    {
      if (vnet_buffer (b0)->sw_if_index[VLIB_TX] != ~0)
	{
	  if (PREDICT_FALSE
	      (nat_not_translate_output_feature
	       (sm, ip0, *proto, *port, *port, thread_index, sw_if_index0)))
	    {
	      *dont_translate = 1;
	      goto out;
	    }
	}
      else
	{
	  if (PREDICT_FALSE (snat_not_translate (sm, node, sw_if_index0,
						 ip0, NAT_PROTOCOL_ICMP,
						 *fib_index, thread_index)))
	    {
	      *dont_translate = 1;
	      goto out;
	    }
	}

      if (PREDICT_FALSE
	  (icmp_type_is_error_message
	   (vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags)))
	{
	  b0->error = node->errors[SNAT_IN2OUT_ERROR_BAD_ICMP_TYPE];
	  next0 = SNAT_IN2OUT_NEXT_DROP;
	  goto out;
	}

      next0 =
	slow_path (sm, b0, ip0, *addr, *port, *fib_index, *proto, &s0, node,
		   next0, thread_index, vlib_time_now (vm));

      if (PREDICT_FALSE (next0 == SNAT_IN2OUT_NEXT_DROP))
	goto out;

      if (!s0)
	{
	  *dont_translate = 1;
	  goto out;
	}
    }
  else
    {
      if (PREDICT_FALSE
	  (vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags !=
	   ICMP4_echo_request
	   && vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags !=
	   ICMP4_echo_reply
	   && !icmp_type_is_error_message (vnet_buffer (b0)->ip.
					   reass.icmp_type_or_tcp_flags)))
	{
	  b0->error = node->errors[SNAT_IN2OUT_ERROR_BAD_ICMP_TYPE];
	  next0 = SNAT_IN2OUT_NEXT_DROP;
	  goto out;
	}

      s0 = pool_elt_at_index (tsm->sessions, value0.value);
    }

out:
  if (s0)
    {
      *addr = s0->out2in.addr;
      *port = s0->out2in.port;
      *fib_index = s0->out2in.fib_index;
    }
  if (d)
    *(snat_session_t **) (d) = s0;
  return next0;
}
#endif

#ifndef CLIB_MARCH_VARIANT
/**
 * Get address and port values to be used for ICMP packet translation
 *
 * @param[in] sm                 NAT main
 * @param[in,out] node           NAT node runtime
 * @param[in] thread_index       thread index
 * @param[in,out] b0             buffer containing packet to be translated
 * @param[in,out] ip0            ip header
 * @param[out] addr              address to be used for translation
 * @param[out] port              port to be used for translation
 * @param[out] fib_index         fib index to be used for translation
 * @param[out] proto             protocol used for matching
 * @param d                      unused
 * @param e                      unused
 * @param[out] dont_translate    set to 1 if packet should not be translated
 *
 * @returns SNAT_IN2OUT_NEXT_DROP on error, otherwise ~0 to keep the
 *          caller's next index
 */
u32
icmp_match_in2out_fast (snat_main_t * sm, vlib_node_runtime_t * node,
			u32 thread_index, vlib_buffer_t * b0,
			ip4_header_t * ip0, ip4_address_t * addr, u16 * port,
			u32 * fib_index, nat_protocol_t * proto, void *d,
			void *e, u8 * dont_translate)
{
  u32 sw_if_index0;
  u8 is_addr_only;
  u32 next0 = ~0;
  int err;
  *dont_translate = 0;

  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
  *fib_index = ip4_fib_table_get_index_for_sw_if_index (sw_if_index0);

  err = icmp_get_key (b0, ip0, addr, port, proto);
  if (err != -1)
    {
      b0->error = node->errors[err];
      next0 = SNAT_IN2OUT_NEXT_DROP;
      goto out;
    }

  ip4_address_t sm_addr;
  u16 sm_port;
  u32 sm_fib_index;

  if (snat_static_mapping_match
      (sm, *addr, *port, *fib_index, *proto, &sm_addr, &sm_port,
       &sm_fib_index, 0, &is_addr_only, 0, 0, 0, 0, 0))
    {
      if (PREDICT_FALSE (snat_not_translate_fast (sm, node, sw_if_index0, ip0,
						  IP_PROTOCOL_ICMP,
						  *fib_index)))
	{
	  *dont_translate = 1;
	  goto out;
	}

      if (icmp_type_is_error_message
	  (vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags))
	{
	  next0 = SNAT_IN2OUT_NEXT_DROP;
	  goto out;
	}

      b0->error = node->errors[SNAT_IN2OUT_ERROR_NO_TRANSLATION];
      next0 = SNAT_IN2OUT_NEXT_DROP;
      goto out;
    }

  if (PREDICT_FALSE
      (vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags != ICMP4_echo_request
       && (vnet_buffer (b0)->ip.reass.icmp_type_or_tcp_flags !=
	   ICMP4_echo_reply || !is_addr_only)
       && !icmp_type_is_error_message (vnet_buffer (b0)->ip.
				       reass.icmp_type_or_tcp_flags)))
    {
      b0->error = node->errors[SNAT_IN2OUT_ERROR_BAD_ICMP_TYPE];
      next0 = SNAT_IN2OUT_NEXT_DROP;
      goto out;
    }

out:
  return next0;
}
#endif

#ifndef CLIB_MARCH_VARIANT
u32
icmp_in2out (snat_main_t * sm,
	     vlib_buffer_t * b0,
	     ip4_header_t * ip0,
	     icmp46_header_t * icmp0,
	     u32 sw_if_index0,
	     u32 rx_fib_index0,
	     vlib_node_runtime_t * node,
	     u32 next0, u32 thread_index, void *d, void *e)
{
  vlib_main_t *vm = vlib_get_main ();
  ip4_address_t addr;
  u16 port;
  u32 fib_index;
  nat_protocol_t protocol;
  icmp_echo_header_t *echo0, *inner_echo0 = 0;
  ip4_header_t *inner_ip0;
  void *l4_header = 0;
  icmp46_header_t *inner_icmp0;
  u8 dont_translate;
  u32 new_addr0, old_addr0;
  u16 old_id0, new_id0;
  u16 old_checksum0, new_checksum0;
  ip_csum_t sum0;
  u16 checksum0;
  u32 next0_tmp;

  echo0 = (icmp_echo_header_t *) (icmp0 + 1);

  next0_tmp =
    sm->icmp_match_in2out_cb (sm, node, thread_index, b0, ip0, &addr, &port,
			      &fib_index, &protocol, d, e, &dont_translate);
  if (next0_tmp != ~0)
    next0 = next0_tmp;
  if (next0 == SNAT_IN2OUT_NEXT_DROP || dont_translate)
    goto out;

  if (PREDICT_TRUE (!ip4_is_fragment (ip0)))
    {
      sum0 =
	ip_incremental_checksum_buffer (vm, b0,
					(u8 *) icmp0 -
					(u8 *) vlib_buffer_get_current (b0),
					ntohs (ip0->length) -
					ip4_header_bytes (ip0), 0);
      checksum0 = ~ip_csum_fold (sum0);
      if (PREDICT_FALSE (checksum0 != 0 && checksum0 != 0xffff))
	{
	  next0 = SNAT_IN2OUT_NEXT_DROP;
	  goto out;
	}
    }

  old_addr0 = ip0->src_address.as_u32;
  new_addr0 = ip0->src_address.as_u32 = addr.as_u32;

  sum0 = ip0->checksum;
  sum0 = ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
			 src_address /* changed member */ );
  ip0->checksum = ip_csum_fold (sum0);

  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
    {
      if (icmp0->checksum == 0)
	icmp0->checksum = 0xffff;

      if (!icmp_type_is_error_message (icmp0->type))
	{
	  new_id0 = port;
	  if (PREDICT_FALSE (new_id0 != echo0->identifier))
	    {
	      old_id0 = echo0->identifier;
	      echo0->identifier = new_id0;

	      sum0 = icmp0->checksum;
	      sum0 =
		ip_csum_update (sum0, old_id0, new_id0, icmp_echo_header_t,
				identifier);
	      icmp0->checksum = ip_csum_fold (sum0);
	    }
	}
      else
	{
	  inner_ip0 = (ip4_header_t *) (echo0 + 1);
	  l4_header = ip4_next_header (inner_ip0);

	  if (!ip4_header_checksum_is_valid (inner_ip0))
	    {
	      next0 = SNAT_IN2OUT_NEXT_DROP;
	      goto out;
	    }

	  /* update inner destination IP address */
	  old_addr0 = inner_ip0->dst_address.as_u32;
	  inner_ip0->dst_address = addr;
	  new_addr0 = inner_ip0->dst_address.as_u32;
	  sum0 = icmp0->checksum;
	  sum0 = ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
				 dst_address /* changed member */ );
	  icmp0->checksum = ip_csum_fold (sum0);

	  /* update inner IP header checksum */
	  old_checksum0 = inner_ip0->checksum;
	  sum0 = inner_ip0->checksum;
	  sum0 = ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
				 dst_address /* changed member */ );
	  inner_ip0->checksum = ip_csum_fold (sum0);
	  new_checksum0 = inner_ip0->checksum;
	  sum0 = icmp0->checksum;
	  sum0 =
	    ip_csum_update (sum0, old_checksum0, new_checksum0, ip4_header_t,
			    checksum);
	  icmp0->checksum = ip_csum_fold (sum0);

	  switch (protocol)
	    {
	    case NAT_PROTOCOL_ICMP:
	      inner_icmp0 = (icmp46_header_t *) l4_header;
	      inner_echo0 = (icmp_echo_header_t *) (inner_icmp0 + 1);

	      old_id0 = inner_echo0->identifier;
	      new_id0 = port;
	      inner_echo0->identifier = new_id0;

	      sum0 = icmp0->checksum;
	      sum0 =
		ip_csum_update (sum0, old_id0, new_id0, icmp_echo_header_t,
				identifier);
	      icmp0->checksum = ip_csum_fold (sum0);
	      break;
	    case NAT_PROTOCOL_UDP:
	    case NAT_PROTOCOL_TCP:
	      old_id0 = ((tcp_udp_header_t *) l4_header)->dst_port;
	      new_id0 = port;
	      ((tcp_udp_header_t *) l4_header)->dst_port = new_id0;

	      sum0 = icmp0->checksum;
	      sum0 = ip_csum_update (sum0, old_id0, new_id0, tcp_udp_header_t,
				     dst_port);
	      icmp0->checksum = ip_csum_fold (sum0);
	      break;
	    default:
	      ASSERT (0);
	    }
	}
    }

  if (vnet_buffer (b0)->sw_if_index[VLIB_TX] == ~0)
    {
      if (0 != snat_icmp_hairpinning (sm, b0, ip0, icmp0,
				      sm->endpoint_dependent))
	vnet_buffer (b0)->sw_if_index[VLIB_TX] = fib_index;
    }

out:
  return next0;
}
#endif

/**
 * ICMP in2out translation plus session accounting and per-user LRU list
 * maintenance; used by the slow path node.
 */
static inline u32
icmp_in2out_slow_path (snat_main_t * sm,
		       vlib_buffer_t * b0,
		       ip4_header_t * ip0,
		       icmp46_header_t * icmp0,
		       u32 sw_if_index0,
		       u32 rx_fib_index0,
		       vlib_node_runtime_t * node,
		       u32 next0,
		       f64 now, u32 thread_index, snat_session_t ** p_s0)
{
  vlib_main_t *vm = vlib_get_main ();

  next0 = icmp_in2out (sm, b0, ip0, icmp0, sw_if_index0, rx_fib_index0, node,
		       next0, thread_index, p_s0, 0);
  snat_session_t *s0 = *p_s0;
  if (PREDICT_TRUE (next0 != SNAT_IN2OUT_NEXT_DROP && s0))
    {
      /* Accounting */
      nat44_session_update_counters (s0, now,
				     vlib_buffer_length_in_chain
				     (vm, b0), thread_index);
      /* Per-user LRU list maintenance */
      nat44_session_update_lru (sm, s0, thread_index);
    }
  return next0;
}

/**
 * Translate packets of unknown protocol in2out using static mappings only.
 *
 * @returns 0 on success, non-zero if no matching static mapping exists
 */
static int
nat_in2out_sm_unknown_proto (snat_main_t * sm,
			     vlib_buffer_t * b,
			     ip4_header_t * ip, u32 rx_fib_index)
{
  clib_bihash_kv_8_8_t kv, value;
  snat_static_mapping_t *m;
  u32 old_addr, new_addr;
  ip_csum_t sum;

  init_nat_k (&kv, ip->src_address, 0, rx_fib_index, 0);
  if (clib_bihash_search_8_8 (&sm->static_mapping_by_local, &kv, &value))
    return 1;

  m = pool_elt_at_index (sm->static_mappings, value.value);

  old_addr = ip->src_address.as_u32;
  new_addr = ip->src_address.as_u32 = m->external_addr.as_u32;
  sum = ip->checksum;
  sum = ip_csum_update (sum, old_addr, new_addr, ip4_header_t, src_address);
  ip->checksum = ip_csum_fold (sum);

  /* Hairpinning */
  if (vnet_buffer (b)->sw_if_index[VLIB_TX] == ~0)
    {
      vnet_buffer (b)->sw_if_index[VLIB_TX] = m->fib_index;
      nat_hairpinning_sm_unknown_proto (sm, b, ip);
    }

  return 0;
}

/**
 * Shared in2out node function; is_slow_path and is_output_feature select
 * the node variant at compile time.
 */
static inline uword
snat_in2out_node_fn_inline (vlib_main_t * vm,
			    vlib_node_runtime_t * node,
			    vlib_frame_t * frame, int is_slow_path,
			    int is_output_feature)
{
  u32 n_left_from, *from;
  snat_main_t *sm = &snat_main;
  f64 now = vlib_time_now (vm);
  u32 thread_index = vm->thread_index;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;

  vlib_buffer_t *bufs[VLIB_FRAME_SIZE], **b = bufs;
  u16 nexts[VLIB_FRAME_SIZE], *next = nexts;
  vlib_get_buffers (vm, from, b, n_left_from);

  while (n_left_from >= 2)
    {
      vlib_buffer_t *b0, *b1;
      u32 next0, next1;
      u32 sw_if_index0, sw_if_index1;
      ip4_header_t *ip0, *ip1;
      ip_csum_t sum0, sum1;
      u32 new_addr0, old_addr0, new_addr1, old_addr1;
      u16 old_port0, new_port0, old_port1, new_port1;
      udp_header_t *udp0, *udp1;
      tcp_header_t *tcp0, *tcp1;
      icmp46_header_t *icmp0, *icmp1;
      u32 rx_fib_index0, rx_fib_index1;
      u32 proto0, proto1;
      snat_session_t *s0 = 0, *s1 = 0;
      clib_bihash_kv_8_8_t kv0, value0, kv1, value1;
      u32 iph_offset0 = 0, iph_offset1 = 0;

      b0 = *b;
      b++;
      b1 = *b;
      b++;

      /* Prefetch next iteration. */
      if (PREDICT_TRUE (n_left_from >= 4))
	{
	  vlib_buffer_t *p2, *p3;

	  p2 = *b;
	  p3 = *(b + 1);

	  vlib_prefetch_buffer_header (p2, LOAD);
	  vlib_prefetch_buffer_header (p3, LOAD);

	  CLIB_PREFETCH (p2->data, CLIB_CACHE_LINE_BYTES, LOAD);
	  CLIB_PREFETCH (p3->data, CLIB_CACHE_LINE_BYTES, LOAD);
	}

      if (is_output_feature)
	iph_offset0 = vnet_buffer (b0)->ip.reass.save_rewrite_length;

      ip0 = (ip4_header_t *) ((u8 *) vlib_buffer_get_current (b0) +
			      iph_offset0);

      udp0 = ip4_next_header (ip0);
      tcp0 = (tcp_header_t *) udp0;
      icmp0 = (icmp46_header_t *) udp0;

      sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
      rx_fib_index0 = vec_elt (sm->ip4_main->fib_index_by_sw_if_index,
			       sw_if_index0);

      next0 = next1 = SNAT_IN2OUT_NEXT_LOOKUP;

      if (PREDICT_FALSE (ip0->ttl == 1))
	{
	  vnet_buffer (b0)->sw_if_index[VLIB_TX] = (u32) ~ 0;
	  icmp4_error_set_vnet_buffer (b0, ICMP4_time_exceeded,
				       ICMP4_time_exceeded_ttl_exceeded_in_transit,
				       0);
	  next0 = SNAT_IN2OUT_NEXT_ICMP_ERROR;
	  goto trace00;
	}

      proto0 = ip_proto_to_nat_proto (ip0->protocol);

      /* Next configured feature, probably ip4-lookup */
      if (is_slow_path)
	{
	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_OTHER))
	    {
	      if (nat_in2out_sm_unknown_proto (sm, b0, ip0, rx_fib_index0))
		{
		  next0 = SNAT_IN2OUT_NEXT_DROP;
		  b0->error =
		    node->errors[SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL];
		}
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     other : &sm->counters.fastpath.
					     in2out.other, thread_index,
					     sw_if_index0, 1);
	      goto trace00;
	    }

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_ICMP))
	    {
	      next0 = icmp_in2out_slow_path
		(sm, b0, ip0, icmp0, sw_if_index0, rx_fib_index0,
		 node, next0, now, thread_index, &s0);
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     icmp : &sm->counters.fastpath.
					     in2out.icmp, thread_index,
					     sw_if_index0, 1);
	      goto trace00;
	    }
	}
      else
	{
	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_OTHER))
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace00;
	    }

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_ICMP))
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace00;
	    }
	}

      init_nat_k (&kv0, ip0->src_address,
		  vnet_buffer (b0)->ip.reass.l4_src_port, rx_fib_index0,
		  proto0);
      if (PREDICT_FALSE
	  (clib_bihash_search_8_8
	   (&sm->per_thread_data[thread_index].in2out, &kv0, &value0) != 0))
	{
	  if (is_slow_path)
	    {
	      if (is_output_feature)
		{
		  if (PREDICT_FALSE
		      (nat_not_translate_output_feature
		       (sm, ip0, proto0,
			vnet_buffer (b0)->ip.reass.l4_src_port,
			vnet_buffer (b0)->ip.reass.l4_dst_port,
			thread_index, sw_if_index0)))
		    goto trace00;

		  /*
		   * Send DHCP packets to the IPv4 stack; otherwise the
		   * DHCP client on the outside interface cannot be used.
		   */
		  if (PREDICT_FALSE
		      (proto0 == NAT_PROTOCOL_UDP
		       && (vnet_buffer (b0)->ip.reass.l4_dst_port ==
			   clib_host_to_net_u16
			   (UDP_DST_PORT_dhcp_to_server))
		       && ip0->dst_address.as_u32 == 0xffffffff))
		    goto trace00;
		}
	      else
		{
		  if (PREDICT_FALSE
		      (snat_not_translate
		       (sm, node, sw_if_index0, ip0, proto0,
			rx_fib_index0, thread_index)))
		    goto trace00;
		}

	      next0 = slow_path (sm, b0, ip0,
				 ip0->src_address,
				 vnet_buffer (b0)->ip.reass.l4_src_port,
				 rx_fib_index0,
				 proto0, &s0, node, next0, thread_index, now);
	      if (PREDICT_FALSE (next0 == SNAT_IN2OUT_NEXT_DROP))
		goto trace00;

	      if (PREDICT_FALSE (!s0))
		goto trace00;
	    }
	  else
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace00;
	    }
	}
      else
	s0 =
	  pool_elt_at_index (sm->per_thread_data[thread_index].sessions,
			     value0.value);

      b0->flags |= VNET_BUFFER_F_IS_NATED;

      old_addr0 = ip0->src_address.as_u32;
      ip0->src_address = s0->out2in.addr;
      new_addr0 = ip0->src_address.as_u32;
      if (!is_output_feature)
	vnet_buffer (b0)->sw_if_index[VLIB_TX] = s0->out2in.fib_index;

      sum0 = ip0->checksum;
      sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
			     ip4_header_t, src_address /* changed member */ );
      ip0->checksum = ip_csum_fold (sum0);

      if (PREDICT_TRUE (proto0 == NAT_PROTOCOL_TCP))
	{
	  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
	    {
	      old_port0 = vnet_buffer (b0)->ip.reass.l4_src_port;
	      new_port0 = udp0->src_port = s0->out2in.port;
	      sum0 = tcp0->checksum;
	      sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
				     ip4_header_t,
				     dst_address /* changed member */ );
	      sum0 = ip_csum_update (sum0, old_port0, new_port0,
				     ip4_header_t /* cheat */ ,
				     length /* changed member */ );
	      mss_clamping (sm->mss_clamping, tcp0, &sum0);
	      tcp0->checksum = ip_csum_fold (sum0);
	    }
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.tcp : &sm->
					 counters.fastpath.in2out.tcp,
					 thread_index, sw_if_index0, 1);
	}
      else
	{
	  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
	    {
	      udp0->src_port = s0->out2in.port;
	      if (PREDICT_FALSE (udp0->checksum))
		{
		  old_port0 = vnet_buffer (b0)->ip.reass.l4_src_port;
		  new_port0 = udp0->src_port;
		  sum0 = udp0->checksum;
		  sum0 =
		    ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
				    dst_address /* changed member */ );
		  sum0 =
		    ip_csum_update (sum0, old_port0, new_port0,
				    ip4_header_t /* cheat */ ,
				    length /* changed member */ );
		  udp0->checksum = ip_csum_fold (sum0);
		}
	    }
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.udp : &sm->
					 counters.fastpath.in2out.udp,
					 thread_index, sw_if_index0, 1);
	}

      /* Accounting */
      nat44_session_update_counters (s0, now,
				     vlib_buffer_length_in_chain (vm, b0),
				     thread_index);
      /* Per-user LRU list maintenance */
      nat44_session_update_lru (sm, s0, thread_index);
    trace00:

      if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			 && (b0->flags & VLIB_BUFFER_IS_TRACED)))
	{
	  snat_in2out_trace_t *t = vlib_add_trace (vm, node, b0, sizeof (*t));
	  t->is_slow_path = is_slow_path;
	  t->sw_if_index = sw_if_index0;
	  t->next_index = next0;
	  t->session_index = ~0;
	  if (s0)
	    t->session_index =
	      s0 - sm->per_thread_data[thread_index].sessions;
	}

      if (next0 == SNAT_IN2OUT_NEXT_DROP)
	{
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.
					 drops : &sm->counters.fastpath.
					 in2out.drops, thread_index,
					 sw_if_index0, 1);
	}

      if (is_output_feature)
	iph_offset1 = vnet_buffer (b1)->ip.reass.save_rewrite_length;

      ip1 = (ip4_header_t *) ((u8 *) vlib_buffer_get_current (b1) +
			      iph_offset1);

      udp1 = ip4_next_header (ip1);
      tcp1 = (tcp_header_t *) udp1;
      icmp1 = (icmp46_header_t *) udp1;

      sw_if_index1 = vnet_buffer (b1)->sw_if_index[VLIB_RX];
      rx_fib_index1 = vec_elt (sm->ip4_main->fib_index_by_sw_if_index,
			       sw_if_index1);

      if (PREDICT_FALSE (ip1->ttl == 1))
	{
	  vnet_buffer (b1)->sw_if_index[VLIB_TX] = (u32) ~ 0;
	  icmp4_error_set_vnet_buffer (b1, ICMP4_time_exceeded,
				       ICMP4_time_exceeded_ttl_exceeded_in_transit,
				       0);
	  next1 = SNAT_IN2OUT_NEXT_ICMP_ERROR;
	  goto trace01;
	}

      proto1 = ip_proto_to_nat_proto (ip1->protocol);

      /* Next configured feature, probably ip4-lookup */
      if (is_slow_path)
	{
	  if (PREDICT_FALSE (proto1 == NAT_PROTOCOL_OTHER))
	    {
	      if (nat_in2out_sm_unknown_proto (sm, b1, ip1, rx_fib_index1))
		{
		  next1 = SNAT_IN2OUT_NEXT_DROP;
		  b1->error =
		    node->errors[SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL];
		}
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     other : &sm->counters.fastpath.
					     in2out.other, thread_index,
					     sw_if_index1, 1);
	      goto trace01;
	    }

	  if (PREDICT_FALSE (proto1 == NAT_PROTOCOL_ICMP))
	    {
	      next1 = icmp_in2out_slow_path
		(sm, b1, ip1, icmp1, sw_if_index1, rx_fib_index1, node,
		 next1, now, thread_index, &s1);
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     icmp : &sm->counters.fastpath.
					     in2out.icmp, thread_index,
					     sw_if_index1, 1);
	      goto trace01;
	    }
	}
      else
	{
	  if (PREDICT_FALSE (proto1 == NAT_PROTOCOL_OTHER))
	    {
	      next1 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace01;
	    }

	  if (PREDICT_FALSE (proto1 == NAT_PROTOCOL_ICMP))
	    {
	      next1 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace01;
	    }
	}

      init_nat_k (&kv1, ip1->src_address,
		  vnet_buffer (b1)->ip.reass.l4_src_port, rx_fib_index1,
		  proto1);
      if (PREDICT_FALSE
	  (clib_bihash_search_8_8
	   (&sm->per_thread_data[thread_index].in2out, &kv1, &value1) != 0))
	{
	  if (is_slow_path)
	    {
	      if (is_output_feature)
		{
		  if (PREDICT_FALSE
		      (nat_not_translate_output_feature
		       (sm, ip1, proto1,
			vnet_buffer (b1)->ip.reass.l4_src_port,
			vnet_buffer (b1)->ip.reass.l4_dst_port,
			thread_index, sw_if_index1)))
		    goto trace01;

		  /*
		   * Send DHCP packets to the IPv4 stack; otherwise the
		   * DHCP client on the outside interface cannot be used.
		   */
		  if (PREDICT_FALSE
		      (proto1 == NAT_PROTOCOL_UDP
		       && (vnet_buffer (b1)->ip.reass.l4_dst_port ==
			   clib_host_to_net_u16
			   (UDP_DST_PORT_dhcp_to_server))
		       && ip1->dst_address.as_u32 == 0xffffffff))
		    goto trace01;
		}
	      else
		{
		  if (PREDICT_FALSE
		      (snat_not_translate
		       (sm, node, sw_if_index1, ip1, proto1,
			rx_fib_index1, thread_index)))
		    goto trace01;
		}

	      next1 =
		slow_path (sm, b1, ip1, ip1->src_address,
			   vnet_buffer (b1)->ip.reass.l4_src_port,
			   rx_fib_index1, proto1, &s1, node, next1,
			   thread_index, now);
	      if (PREDICT_FALSE (next1 == SNAT_IN2OUT_NEXT_DROP))
		goto trace01;

	      if (PREDICT_FALSE (!s1))
		goto trace01;
	    }
	  else
	    {
	      next1 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace01;
	    }
	}
      else
	s1 =
	  pool_elt_at_index (sm->per_thread_data[thread_index].sessions,
			     value1.value);

      b1->flags |= VNET_BUFFER_F_IS_NATED;

      old_addr1 = ip1->src_address.as_u32;
      ip1->src_address = s1->out2in.addr;
      new_addr1 = ip1->src_address.as_u32;
      if (!is_output_feature)
	vnet_buffer (b1)->sw_if_index[VLIB_TX] = s1->out2in.fib_index;

      sum1 = ip1->checksum;
      sum1 = ip_csum_update (sum1, old_addr1, new_addr1,
			     ip4_header_t, src_address /* changed member */ );
      ip1->checksum = ip_csum_fold (sum1);

      if (PREDICT_TRUE (proto1 == NAT_PROTOCOL_TCP))
	{
	  if (!vnet_buffer (b1)->ip.reass.is_non_first_fragment)
	    {
	      old_port1 = vnet_buffer (b1)->ip.reass.l4_src_port;
	      new_port1 = udp1->src_port = s1->out2in.port;
	      sum1 = tcp1->checksum;
	      sum1 = ip_csum_update (sum1, old_addr1, new_addr1,
				     ip4_header_t,
				     dst_address /* changed member */ );
	      sum1 = ip_csum_update (sum1, old_port1, new_port1,
				     ip4_header_t /* cheat */ ,
				     length /* changed member */ );
	      mss_clamping (sm->mss_clamping, tcp1, &sum1);
	      tcp1->checksum = ip_csum_fold (sum1);
	    }
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.tcp : &sm->
					 counters.fastpath.in2out.tcp,
					 thread_index, sw_if_index1, 1);
	}
      else
	{
	  if (!vnet_buffer (b1)->ip.reass.is_non_first_fragment)
	    {
	      udp1->src_port = s1->out2in.port;
	      if (PREDICT_FALSE (udp1->checksum))
		{
		  old_port1 = vnet_buffer (b1)->ip.reass.l4_src_port;
		  new_port1 = udp1->src_port;
		  sum1 = udp1->checksum;
		  sum1 =
		    ip_csum_update (sum1, old_addr1, new_addr1, ip4_header_t,
				    dst_address /* changed member */ );
		  sum1 =
		    ip_csum_update (sum1, old_port1, new_port1,
				    ip4_header_t /* cheat */ ,
				    length /* changed member */ );
		  udp1->checksum = ip_csum_fold (sum1);
		}
	    }
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.udp : &sm->
					 counters.fastpath.in2out.udp,
					 thread_index, sw_if_index1, 1);
	}

      /* Accounting */
      nat44_session_update_counters (s1, now,
				     vlib_buffer_length_in_chain (vm, b1),
				     thread_index);
      /* Per-user LRU list maintenance */
      nat44_session_update_lru (sm, s1, thread_index);
    trace01:

      if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			 && (b1->flags & VLIB_BUFFER_IS_TRACED)))
	{
	  snat_in2out_trace_t *t = vlib_add_trace (vm, node, b1, sizeof (*t));
	  t->is_slow_path = is_slow_path;
	  t->sw_if_index = sw_if_index1;
	  t->next_index = next1;
	  t->session_index = ~0;
	  if (s1)
	    t->session_index =
	      s1 - sm->per_thread_data[thread_index].sessions;
	}

      if (next1 == SNAT_IN2OUT_NEXT_DROP)
	{
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.
					 drops : &sm->counters.fastpath.
					 in2out.drops, thread_index,
					 sw_if_index1, 1);
	}

      n_left_from -= 2;
      next[0] = next0;
      next[1] = next1;
      next += 2;
    }

  while (n_left_from > 0)
    {
      vlib_buffer_t *b0;
      u32 next0;
      u32 sw_if_index0;
      ip4_header_t *ip0;
      ip_csum_t sum0;
      u32 new_addr0, old_addr0;
      u16 old_port0, new_port0;
      udp_header_t *udp0;
      tcp_header_t *tcp0;
      icmp46_header_t *icmp0;
      u32 rx_fib_index0;
      u32 proto0;
      snat_session_t *s0 = 0;
      clib_bihash_kv_8_8_t kv0, value0;
      u32 iph_offset0 = 0;

      b0 = *b;
      b++;
      next0 = SNAT_IN2OUT_NEXT_LOOKUP;

      if (is_output_feature)
	iph_offset0 = vnet_buffer (b0)->ip.reass.save_rewrite_length;

      ip0 = (ip4_header_t *) ((u8 *) vlib_buffer_get_current (b0) +
			      iph_offset0);

      udp0 = ip4_next_header (ip0);
      tcp0 = (tcp_header_t *) udp0;
      icmp0 = (icmp46_header_t *) udp0;

      sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
      rx_fib_index0 = vec_elt (sm->ip4_main->fib_index_by_sw_if_index,
			       sw_if_index0);

      if (PREDICT_FALSE (ip0->ttl == 1))
	{
	  vnet_buffer (b0)->sw_if_index[VLIB_TX] = (u32) ~ 0;
	  icmp4_error_set_vnet_buffer (b0, ICMP4_time_exceeded,
				       ICMP4_time_exceeded_ttl_exceeded_in_transit,
				       0);
	  next0 = SNAT_IN2OUT_NEXT_ICMP_ERROR;
	  goto trace0;
	}

      proto0 = ip_proto_to_nat_proto (ip0->protocol);

      /* Next configured feature, probably ip4-lookup */
      if (is_slow_path)
	{
	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_OTHER))
	    {
	      if (nat_in2out_sm_unknown_proto (sm, b0, ip0, rx_fib_index0))
		{
		  next0 = SNAT_IN2OUT_NEXT_DROP;
		  b0->error =
		    node->errors[SNAT_IN2OUT_ERROR_UNSUPPORTED_PROTOCOL];
		}
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     other : &sm->counters.fastpath.
					     in2out.other, thread_index,
					     sw_if_index0, 1);
	      goto trace0;
	    }

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_ICMP))
	    {
	      next0 = icmp_in2out_slow_path
		(sm, b0, ip0, icmp0, sw_if_index0, rx_fib_index0, node,
		 next0, now, thread_index, &s0);
	      vlib_increment_simple_counter (is_slow_path ? &sm->
					     counters.slowpath.in2out.
					     icmp : &sm->counters.fastpath.
					     in2out.icmp, thread_index,
					     sw_if_index0, 1);
	      goto trace0;
	    }
	}
      else
	{
	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_OTHER))
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace0;
	    }

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_ICMP))
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace0;
	    }
	}

      init_nat_k (&kv0, ip0->src_address,
		  vnet_buffer (b0)->ip.reass.l4_src_port, rx_fib_index0,
		  proto0);

      if (clib_bihash_search_8_8
	  (&sm->per_thread_data[thread_index].in2out, &kv0, &value0))
	{
	  if (is_slow_path)
	    {
	      if (is_output_feature)
		{
		  if (PREDICT_FALSE
		      (nat_not_translate_output_feature
		       (sm, ip0, proto0,
			vnet_buffer (b0)->ip.reass.l4_src_port,
			vnet_buffer (b0)->ip.reass.l4_dst_port,
			thread_index, sw_if_index0)))
		    goto trace0;

		  /*
		   * Send DHCP packets to the IPv4 stack; otherwise the
		   * DHCP client on the outside interface cannot be used.
		   */
		  if (PREDICT_FALSE
		      (proto0 == NAT_PROTOCOL_UDP
		       && (vnet_buffer (b0)->ip.reass.l4_dst_port ==
			   clib_host_to_net_u16
			   (UDP_DST_PORT_dhcp_to_server))
		       && ip0->dst_address.as_u32 == 0xffffffff))
		    goto trace0;
		}
	      else
		{
		  if (PREDICT_FALSE
		      (snat_not_translate
		       (sm, node, sw_if_index0, ip0, proto0, rx_fib_index0,
			thread_index)))
		    goto trace0;
		}

	      next0 =
		slow_path (sm, b0, ip0, ip0->src_address,
			   vnet_buffer (b0)->ip.reass.l4_src_port,
			   rx_fib_index0, proto0, &s0, node, next0,
			   thread_index, now);

	      if (PREDICT_FALSE (next0 == SNAT_IN2OUT_NEXT_DROP))
		goto trace0;

	      if (PREDICT_FALSE (!s0))
		goto trace0;
	    }
	  else
	    {
	      next0 = SNAT_IN2OUT_NEXT_SLOW_PATH;
	      goto trace0;
	    }
	}
      else
	s0 =
	  pool_elt_at_index (sm->per_thread_data[thread_index].sessions,
			     value0.value);

      b0->flags |= VNET_BUFFER_F_IS_NATED;

      old_addr0 = ip0->src_address.as_u32;
      ip0->src_address = s0->out2in.addr;
      new_addr0 = ip0->src_address.as_u32;
      if (!is_output_feature)
	vnet_buffer (b0)->sw_if_index[VLIB_TX] = s0->out2in.fib_index;

      sum0 = ip0->checksum;
      sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
			     ip4_header_t, src_address /* changed member */ );
      ip0->checksum = ip_csum_fold (sum0);

      if (PREDICT_TRUE (proto0 == NAT_PROTOCOL_TCP))
	{
	  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
	    {
	      old_port0 = vnet_buffer (b0)->ip.reass.l4_src_port;
	      new_port0 = udp0->src_port = s0->out2in.port;
	      sum0 = tcp0->checksum;
	      sum0 =
		ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
				dst_address /* changed member */ );
	      sum0 =
		ip_csum_update (sum0, old_port0, new_port0,
				ip4_header_t /* cheat */ ,
				length /* changed member */ );
	      mss_clamping (sm->mss_clamping, tcp0, &sum0);
	      tcp0->checksum = ip_csum_fold (sum0);
	    }
	  vlib_increment_simple_counter (is_slow_path ? &sm->
					 counters.slowpath.in2out.tcp : &sm->
					 counters.fastpath.in2out.tcp,
					 thread_index, sw_if_index0, 1);
	}
      else
	{
	  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
	    {
	      udp0->src_port = s0->out2in.port;
	      if (PREDICT_FALSE (udp0->checksum))
		{
		  old_port0 = vnet_buffer (b0)->ip.reass.l4_src_port;
		  new_port0 = udp0->src_port;
		  sum0 = udp0->checksum;
		  sum0 =
		    ip_csum_update (sum0, old_addr0, new_addr0, ip4_header_t,
				    dst_address /* changed member */ );
		  sum0 =
		    ip_csum_update (sum0, old_port0, new_port0,
				    ip4_header_t /* cheat */ ,
				    length /* changed member */ );
		  udp0->checksum = ip_csum_fold (sum0);
		}
	    }
	  vlib_increment_simple_counter (is_slow_path ?
					 &sm->counters.slowpath.in2out.udp :
					 &sm->counters.fastpath.in2out.udp,
					 thread_index, sw_if_index0, 1);
	}

      /* Accounting */
      nat44_session_update_counters (s0, now,
				     vlib_buffer_length_in_chain (vm, b0),
				     thread_index);
      /* Per-user LRU list maintenance */
      nat44_session_update_lru (sm, s0, thread_index);

    trace0:
      if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			 && (b0->flags & VLIB_BUFFER_IS_TRACED)))
	{
	  snat_in2out_trace_t *t = vlib_add_trace (vm, node, b0, sizeof (*t));
	  t->is_slow_path = is_slow_path;
	  t->sw_if_index = sw_if_index0;
	  t->next_index = next0;
	  t->session_index = ~0;
	  if (s0)
	    t->session_index =
	      s0 - sm->per_thread_data[thread_index].sessions;
	}

      if (next0 == SNAT_IN2OUT_NEXT_DROP)
	{
	  vlib_increment_simple_counter (is_slow_path ?
					 &sm->counters.slowpath.in2out.drops :
					 &sm->counters.fastpath.in2out.drops,
					 thread_index, sw_if_index0, 1);
	}

      n_left_from--;
      next[0] = next0;
      next++;
    }

  vlib_buffer_enqueue_to_next (vm, node, from, (u16 *) nexts,
			       frame->n_vectors);
  return frame->n_vectors;
}

VLIB_NODE_FN (snat_in2out_node) (vlib_main_t * vm,
				 vlib_node_runtime_t * node,
				 vlib_frame_t * frame)
{
  return snat_in2out_node_fn_inline (vm, node, frame, 0 /* is_slow_path */ ,
				     0);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (snat_in2out_node) = {
  .name = "nat44-in2out",
  .vector_size = sizeof (u32),
  .format_trace = format_snat_in2out_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,

  .n_errors = ARRAY_LEN(snat_in2out_error_strings),
  .error_strings = snat_in2out_error_strings,

  .runtime_data_bytes = sizeof (snat_runtime_t),

  .n_next_nodes = SNAT_IN2OUT_N_NEXT,

  /* edit / add dispositions here */
  .next_nodes = {
    [SNAT_IN2OUT_NEXT_DROP] = "error-drop",
    [SNAT_IN2OUT_NEXT_LOOKUP] = "ip4-lookup",
    [SNAT_IN2OUT_NEXT_SLOW_PATH] = "nat44-in2out-slowpath",
    [SNAT_IN2OUT_NEXT_ICMP_ERROR] = "ip4-icmp-error",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (snat_in2out_output_node) (vlib_main_t * vm,
					vlib_node_runtime_t * node,
					vlib_frame_t * frame)
{
  return snat_in2out_node_fn_inline (vm, node, frame, 0 /* is_slow_path */ ,
				     1);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (snat_in2out_output_node) = {
  .name = "nat44-in2out-output",
  .vector_size = sizeof (u32),
  .format_trace = format_snat_in2out_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,

  .n_errors = ARRAY_LEN(snat_in2out_error_strings),
  .error_strings = snat_in2out_error_strings,

  .runtime_data_bytes = sizeof (snat_runtime_t),

  .n_next_nodes = SNAT_IN2OUT_N_NEXT,

  /* edit / add dispositions here */
  .next_nodes = {
    [SNAT_IN2OUT_NEXT_DROP] = "error-drop",
    [SNAT_IN2OUT_NEXT_LOOKUP] = "interface-output",
    [SNAT_IN2OUT_NEXT_SLOW_PATH] = "nat44-in2out-output-slowpath",
    [SNAT_IN2OUT_NEXT_ICMP_ERROR] = "ip4-icmp-error",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (snat_in2out_slowpath_node) (vlib_main_t * vm,
					  vlib_node_runtime_t * node,
					  vlib_frame_t * frame)
{
  return snat_in2out_node_fn_inline (vm, node, frame, 1 /* is_slow_path */ ,
				     0);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (snat_in2out_slowpath_node) = {
  .name = "nat44-in2out-slowpath",
  .vector_size = sizeof (u32),
  .format_trace = format_snat_in2out_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,

  .n_errors = ARRAY_LEN(snat_in2out_error_strings),
  .error_strings = snat_in2out_error_strings,

  .runtime_data_bytes = sizeof (snat_runtime_t),

  .n_next_nodes = SNAT_IN2OUT_N_NEXT,

  /* edit / add dispositions here */
  .next_nodes = {
    [SNAT_IN2OUT_NEXT_DROP] = "error-drop",
    [SNAT_IN2OUT_NEXT_LOOKUP] = "ip4-lookup",
    [SNAT_IN2OUT_NEXT_SLOW_PATH] = "nat44-in2out-slowpath",
    [SNAT_IN2OUT_NEXT_ICMP_ERROR] = "ip4-icmp-error",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (snat_in2out_output_slowpath_node) (vlib_main_t * vm,
						 vlib_node_runtime_t * node,
						 vlib_frame_t * frame)
{
  return snat_in2out_node_fn_inline (vm, node, frame, 1 /* is_slow_path */ ,
				     1);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (snat_in2out_output_slowpath_node) = {
  .name = "nat44-in2out-output-slowpath",
  .vector_size = sizeof (u32),
  .format_trace = format_snat_in2out_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,

  .n_errors = ARRAY_LEN(snat_in2out_error_strings),
  .error_strings = snat_in2out_error_strings,

  .runtime_data_bytes = sizeof (snat_runtime_t),

  .n_next_nodes = SNAT_IN2OUT_N_NEXT,

  /* edit / add dispositions here */
  .next_nodes = {
    [SNAT_IN2OUT_NEXT_DROP] = "error-drop",
    [SNAT_IN2OUT_NEXT_LOOKUP] = "interface-output",
    [SNAT_IN2OUT_NEXT_SLOW_PATH] = "nat44-in2out-output-slowpath",
    [SNAT_IN2OUT_NEXT_ICMP_ERROR] = "ip4-icmp-error",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (snat_in2out_fast_node) (vlib_main_t * vm,
				      vlib_node_runtime_t * node,
				      vlib_frame_t * frame)
{
  u32 n_left_from, *from, *to_next;
  snat_in2out_next_t next_index;
  snat_main_t *sm = &snat_main;
  int is_hairpinning = 0;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;
  next_index = node->cached_next_index;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0;
	  vlib_buffer_t *b0;
	  u32 next0;
	  u32 sw_if_index0;
	  ip4_header_t *ip0;
	  ip_csum_t sum0;
	  u32 new_addr0, old_addr0;
	  u16 old_port0, new_port0;
	  udp_header_t *udp0;
	  tcp_header_t *tcp0;
	  icmp46_header_t *icmp0;
	  u32 proto0;
	  u32 rx_fib_index0;
	  ip4_address_t sm0_addr;
	  u16 sm0_port;
	  u32 sm0_fib_index;
	  /* speculatively enqueue b0 to the current next frame */
	  bi0 = from[0];
	  to_next[0] = bi0;
	  from += 1;
	  to_next += 1;
	  n_left_from -= 1;
	  n_left_to_next -= 1;

	  b0 = vlib_get_buffer (vm, bi0);
	  next0 = SNAT_IN2OUT_NEXT_LOOKUP;

	  ip0 = vlib_buffer_get_current (b0);
	  /* TCP, UDP and ICMP headers all start at the same L4 offset, and
	     the src/dst port layout is identical for TCP and UDP, so one
	     pointer serves all three cases */
	  udp0 = ip4_next_header (ip0);
	  tcp0 = (tcp_header_t *) udp0;
	  icmp0 = (icmp46_header_t *) udp0;

	  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
	  rx_fib_index0 =
	    ip4_fib_table_get_index_for_sw_if_index (sw_if_index0);

	  if (PREDICT_FALSE (ip0->ttl == 1))
	    {
	      vnet_buffer (b0)->sw_if_index[VLIB_TX] = (u32) ~ 0;
	      icmp4_error_set_vnet_buffer (b0, ICMP4_time_exceeded,
					   ICMP4_time_exceeded_ttl_exceeded_in_transit,
					   0);
	      next0 = SNAT_IN2OUT_NEXT_ICMP_ERROR;
	      goto trace0;
	    }

	  proto0 = ip_proto_to_nat_proto (ip0->protocol);

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_OTHER))
	    goto trace0;

	  if (PREDICT_FALSE (proto0 == NAT_PROTOCOL_ICMP))
	    {
	      next0 = icmp_in2out (sm, b0, ip0, icmp0, sw_if_index0,
				   rx_fib_index0, node, next0, ~0, 0, 0);
	      goto trace0;
	    }

	  if (snat_static_mapping_match
	      (sm, ip0->src_address, udp0->src_port, rx_fib_index0, proto0,
	       &sm0_addr, &sm0_port, &sm0_fib_index, 0, 0, 0, 0, 0, 0, 0))
	    {
	      b0->error = node->errors[SNAT_IN2OUT_ERROR_NO_TRANSLATION];
	      next0 = SNAT_IN2OUT_NEXT_DROP;
	      goto trace0;
	    }

	  new_addr0 = sm0_addr.as_u32;
	  new_port0 = sm0_port;
	  vnet_buffer (b0)->sw_if_index[VLIB_TX] = sm0_fib_index;
	  old_addr0 = ip0->src_address.as_u32;
	  ip0->src_address.as_u32 = new_addr0;

	  sum0 = ip0->checksum;
	  sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
				 ip4_header_t,
				 src_address /* changed member */ );
	  ip0->checksum = ip_csum_fold (sum0);

	  if (PREDICT_FALSE (new_port0 != udp0->src_port))
	    {
	      old_port0 = udp0->src_port;
	      udp0->src_port = new_port0;

	      if (PREDICT_TRUE (proto0 == NAT_PROTOCOL_TCP))
		{
		  sum0 = tcp0->checksum;
		  sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
					 ip4_header_t,
					 dst_address /* changed member */ );
		  sum0 = ip_csum_update (sum0, old_port0, new_port0,
					 ip4_header_t /* cheat */ ,
					 length /* changed member */ );
		  mss_clamping (sm->mss_clamping, tcp0, &sum0);
		  tcp0->checksum = ip_csum_fold (sum0);
		}
	      else if (udp0->checksum)
		{
		  sum0 = udp0->checksum;
		  sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
					 ip4_header_t,
					 dst_address /* changed member */ );
		  sum0 = ip_csum_update (sum0, old_port0, new_port0,
					 ip4_header_t /* cheat */ ,
					 length /* changed member */ );
		  udp0->checksum = ip_csum_fold (sum0);
		}
	    }
	  else
	    {
	      if (PREDICT_TRUE (proto0 == NAT_PROTOCOL_TCP))
		{
		  sum0 = tcp0->checksum;
		  sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
					 ip4_header_t,
					 dst_address /* changed member */ );
		  mss_clamping (sm->mss_clamping, tcp0, &sum0);
		  tcp0->checksum = ip_csum_fold (sum0);
		}
	      else if (udp0->checksum)
		{
		  sum0 = udp0->checksum;
		  sum0 = ip_csum_update (sum0, old_addr0, new_addr0,
					 ip4_header_t,
					 dst_address /* changed member */ );
		  udp0->checksum = ip_csum_fold (sum0);
		}
	    }

	  /* Hairpinning */
	  is_hairpinning = snat_hairpinning (vm, node, sm, b0, ip0, udp0,
					     tcp0, proto0, 0,
					     0 /* do_trace */ );

	trace0:
	  if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			     && (b0->flags & VLIB_BUFFER_IS_TRACED)))
	    {
	      snat_in2out_trace_t *t =
		vlib_add_trace (vm, node, b0, sizeof (*t));
	      t->sw_if_index = sw_if_index0;
	      t->next_index = next0;
	      t->is_hairpinning = is_hairpinning;
	    }

	  if (next0 != SNAT_IN2OUT_NEXT_DROP)
	    {
	      vlib_increment_simple_counter (&sm->counters.fastpath.
					     in2out.other, vm->thread_index,
					     sw_if_index0, 1);
	    }

	  /* verify speculative enqueue, maybe switch current next frame */
	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
					   to_next, n_left_to_next,
					   bi0, next0);
	}

      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }

  return frame->n_vectors;
}
/* *INDENT-OFF* */
VLIB_REGISTER_NODE (snat_in2out_fast_node) = {
  .name = "nat44-in2out-fast",
  .vector_size = sizeof (u32),
  .format_trace = format_snat_in2out_fast_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,

  .n_errors = ARRAY_LEN(snat_in2out_error_strings),
  .error_strings = snat_in2out_error_strings,

  .runtime_data_bytes = sizeof (snat_runtime_t),

  .n_next_nodes = SNAT_IN2OUT_N_NEXT,

  /* edit / add dispositions here */
  .next_nodes = {
    [SNAT_IN2OUT_NEXT_DROP] = "error-drop",
    [SNAT_IN2OUT_NEXT_LOOKUP] = "ip4-lookup",
    [SNAT_IN2OUT_NEXT_SLOW_PATH] = "nat44-in2out-slowpath",
    [SNAT_IN2OUT_NEXT_ICMP_ERROR] = "ip4-icmp-error",
  },
};
/* *INDENT-ON* */

/*
 * fd.io coding-style-patch-verification: ON
 *
 * Local Variables:
 * eval: (c-set-style "gnu")
 * End:
 */