path: root/src
Age | Commit message | Author | Files | Lines
2017-11-16 | dpdk/ipsec: use physmem when creating pools | Sergio Gonzalez Monroy | 3 | -84/+116
Change-Id: Ic4f797cea6fa21fb29d646256210357cf5267b38 Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
2017-11-16 | Add Support of DHCP VSS Type 0 where VPN-ID is ASCII | John Lo | 9 | -220/+294
Enhance support of DHCP VSS (Virtual Subnet Selection) to include VSS type 0, where the VSS info is an NVT (Network Virtual Terminal) ASCII VPN ID; the ASCII string MUST NOT be terminated with a zero byte. Existing code already supports VSS type 1, where the VSS information is an RFC 2685 VPN-ID of 7 bytes with a 3-byte OUI and a 4-byte VPN index, and VSS type 255, indicating the global VPN. Change-Id: I54edbc447c89a2aacd1cc9fc72bd5ba386037608 Signed-off-by: John Lo <loj@cisco.com>
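For illustration, a minimal sketch of how the three VSS types described above can be decoded; the helper, struct, and field names below are hypothetical (using VPP's u8/u32 typedefs), not the actual VPP parser:

  typedef struct
  {
    u32 oui;			/* type 1: 3-byte OUI */
    u32 vpn_index;		/* type 1: 4-byte VPN index */
    const u8 *ascii_vpn_id;	/* type 0: NVT ASCII VPN ID, NOT zero-terminated */
    u8 ascii_vpn_id_len;
  } vss_info_t;

  static int
  decode_vss (const u8 * data, u8 len, vss_info_t * vss)
  {
    switch (data[0])		/* first byte carries the VSS type */
      {
      case 0:			/* ASCII VPN ID */
	vss->ascii_vpn_id = data + 1;
	vss->ascii_vpn_id_len = len - 1;
	return 0;
      case 1:			/* RFC 2685 VPN-ID: 3-byte OUI + 4-byte VPN index */
	if (len < 8)
	  return -1;
	vss->oui = (data[1] << 16) | (data[2] << 8) | data[3];
	vss->vpn_index =
	  (data[4] << 24) | (data[5] << 16) | (data[6] << 8) | data[7];
	return 0;
      case 255:			/* global VPN, no VSS information follows */
	return 0;
      default:
	return -1;
      }
  }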
2017-11-15 | memif: fix uninitialized pointer read coverity error | Steven | 1 | -0/+2
Set the content of tmp.sock prior to calling memif_msg_send_disconnect. Also fix a problem where the socket was not closed in the same spot when an error was encountered. Change-Id: I8f54ebad2250d1944afcc52e71d2a59da05362af Signed-off-by: Steven <sluong@cisco.com>
2017-11-15 | BIER: coverity fixes | Neale Ranns | 6 | -11/+22
Change-Id: I657bade082f9f754b294cd5f23ecfad4f0f46265 Signed-off-by: Neale Ranns <nranns@cisco.com>
2017-11-15 | Punt DNS request/reply traffic when name resolution disabled | Dave Barach | 3 | -5/+27
Change-Id: Iaad22f25993783be57247aa1f050740f96d2566a Signed-off-by: Dave Barach <dave@barachs.net>
2017-11-15 | Revert "vnet: af_packet mark l3 offload cksum" | Jakub Grajciar | 1 | -2/+1
This reverts commit fa600c9169c0d7104af7a9be12a0471a8a8c8262. Change-Id: I873b53b2c025d7aba2211cab9b3e2d780af33b32 Signed-off-by: Jakub Grajciar <Jakub.Grajciar@pantheon.tech>
2017-11-15 | armv8 crc32 - fix macro name | Gabriel Ganne | 1 | -1/+1
Change-Id: Iba2d20c0a3d4f07457d108d014a6fa4522cb8e2c Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
2017-11-15 | Fix cosmetic issue in configure.ac | Damjan Marion | 1 | -1/+1
Change-Id: I0a6a58b4ed0a6609382cce8bc2c6668a681823fc Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-15 | VOM: interface's handle() retrieves from singular instance | Neale Ranns | 2 | -0/+11
Change-Id: I262f2113f5805c0f89b615a0383efa8520184dd1 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-15 | VOM: interface RD update reconfigures L3 bindings | Neale Ranns | 4 | -21/+82
Change-Id: I273e1ea28c3c146e4a88d031c790c1cc56dccf00 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-14 | VOM: bridge-domain learning mode and route help commands | Neale Ranns | 8 | -7/+125
Change-Id: I2fa219d6530f1e7a3b8ae32d35a0c60ba57c5129 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-14 | Ip6 dump not showing fib table names (VPP-1063) | Neale Ranns | 2 | -9/+9
Change-Id: Idc7e7c35f17d514589d1264f1d1be664192ee586 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-14 | Fix typos in configure.ac and dpdk/buffer.c | Damjan Marion | 2 | -10/+10
Change-Id: I74ff693315a3ffc7aa2640f25d906ca0d6da6bc5 Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-14 | vppinfra: fix pool_get_aligned_will_expand for fixed pools | Florin Coras | 1 | -3/+4
Change-Id: Ia66ac0a2fa23a3d29370b54e2014900838a8d3ac Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-11-14 | NULL-terminate load_balance_nsh_nodes[] | Gabriel Ganne | 1 | -0/+1
Change-Id: Ibc5528bea564f6c2b0ff34220405395bc78274fc Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
2017-11-14 | bier - fix node table declaration | Gabriel Ganne | 3 | -3/+6
These tables need to be NULL-terminated. Fix the declarations of: bier_disp_table_bier_nodes, bier_table_mpls_nodes, and bier_fmask_mpls_nodes. This was crashing during make test on the aarch64 platform: during the API call to bier_table_add_del, the crash happens in dpo_default_get_next_node(). Change-Id: I16207ba38fc9ab65bad787878c4608740c312257 Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
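As an illustration of the fix pattern (a sketch only; the array name and node name below are illustrative, not the exact patch contents): dpo_default_get_next_node() walks a node list until it hits a NULL entry, so each per-protocol array must end with one:

  const static char *const bier_fmask_mpls_feature_nodes[] = {
    "bier-output",
    NULL,			/* terminator the walker relies on */
  };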
2017-11-14 | Fix builtin http server static request free | Florin Coras | 1 | -0/+1
Change-Id: Ice61d4c6c281aa8c4e89447208e0ad047bcce639 Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-11-14 | vnet: af_packet_set_l4_cksum_offload device class check | Jakub Grajciar | 2 | -1/+5
Change-Id: Ie07b71977c46d2f1e030799a08cc5af0fdc397aa Signed-off-by: Jakub Grajciar <Jakub.Grajciar@pantheon.tech>
2017-11-14 | vnet: af_packet mark l3 offload cksum | Jakub Grajciar | 1 | -1/+2
Change-Id: I42ee5898e1f775692811eebab11bcfe458f1ec63 Signed-off-by: Jakub Grajciar <Jakub.Grajciar@pantheon.tech>
2017-11-14 | VCL-LDPRELOAD: add sendfile/sendfile64 implementation. | Dave Wallace | 10 | -53/+301
Change-Id: If0c399269238912456d670432d7e953c9d91b9fb Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2017-11-14 | l2-flood: fix restore vnet buffer's flags in the replication routine | Steve Shin | 2 | -1/+7
When BUM packets are flooded in the L2 domain, some data should be kept and restored for recycling in the replication routine. If the L2 bridge domain has a mix of normal and VLAN-tagged interfaces, the VLAN tag value of the vnet buffer can be changed while flooding the replicated packets. The change stores and restores the original VLAN tag in the replication logic. Change-Id: I399cf54cd2e74cb44a2be42241bdc4fba85032c5 Signed-off-by: Steve Shin <jonshin@cisco.com>
2017-11-13 | dpdk: introduce AVX512 variants of node functions | Damjan Marion | 10 | -69/+259
Change-Id: If581feca0d51d0420c971801aecdf9250c671b36 Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-13 | Instead of a min term size, use a default (VPP-1061) | Chris Luke | 1 | -16/+21
- In the bug report, Docker was sometimes giving shells a 0,0 terminal size. The minimum-term-size logic meant that VPP assumed the terminal had 1 row. The pager functioned, but of course overwrote the one line with its own prompt. - Instead of a minimum size, always use a default size when either terminal dimension is 0. Change-Id: Iee5a465f0e4cbb618ef2222b40a52994aefa54bf Signed-off-by: Chris Luke <chrisy@flirble.org>
2017-11-13 | NAT: Buffer overflow for memcpy() | Ole Troan | 1 | -3/+2
Change-Id: I11d1f9507d429ad8b25e9873272ede231623e622 Signed-off-by: Ole Troan <ot@cisco.com>
2017-11-12 | session: add handle to disconnect_session_reply api msg. | Dave Wallace | 1 | -1/+1
Change-Id: I40f80110f5224b676d60252f9721fd1bc8a10b58 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2017-11-11 | VCL: clean up disconnect_session debug output. | Dave Wallace | 1 | -14/+25
- Run VPP in xfce4-terminal in VCL unit tests. Change-Id: Iba6a870617a811261de0a54fa38cdb5109ae1d07 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2017-11-12 | VCL/LDPRELOAD: Fix out-of-bounds access and inequality comparison coverity errors | Steven | 2 | -56/+47
Fixed out-of-bounds access in vcom_socket.c by limiting the copy to the size of the address field that was passed. Truncation will occur if the address field is not big enough. Fixed inequality comparison in vppcom.c by using the predefined macro MAP_FAILED. Change-Id: I9517c29ae811d08058621bd548a352b4d4f05139 Signed-off-by: Steven <sluong@cisco.com>
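A minimal sketch of the two patterns involved (illustrative variable names, not the patch itself): bound the copy by the caller-supplied destination size, and compare mmap()'s return value against MAP_FAILED:

  /* bounded copy: truncate if the caller's address buffer is too small */
  u32 copy_len = clib_min (*addr_len, sizeof (peer_addr));
  clib_memcpy (addr, &peer_addr, copy_len);

  /* mmap() failure is signalled by MAP_FAILED, not by a negative pointer */
  void *mem = mmap (NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (mem == MAP_FAILED)
    return -1;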
2017-11-11 | ACL: Add coding-style-patch-verification and indent. | Jon Loeliger | 1 | -965/+1260
Change-Id: I2397ada9760d546423e031ad45535ef8801b05e7 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-11-11 | ACLs: Use better error return codes than "-1" everywhere. | Jon Loeliger | 2 | -15/+17
Added two new errors: ACL_IN_USE_INBOUND and ACL_IN_USE_OUTBOUND. Updated the ACL tests to expect the new, precise return values. Change-Id: I644861a18aa5b70cce5f451dd6655641160c7697 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-11-11 | MPLS disposition actions at the tail of unicast LSPs | Neale Ranns | 6 | -31/+107
Change-Id: I8c42e26152f2ed1246f91b789887bfc923418bdf Signed-off-by: Neale Ranns <nranns@cisco.com>
2017-11-11 | Update CPU list | Damjan Marion | 1 | -17/+35
Change-Id: Ibee8973270366c38dced6eb3e8ca41784549183a Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-11 | dpdk: optimize buffer alloc/free | Damjan Marion | 1 | -49/+118
This reverts commit 45a588fa3efaaf52360986360ab1f6827bae3164. Change-Id: I7e541545791f7743ee827bdec8b6fc46cbb0938f Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-11 | Handle CPU flags from autotools project | Damjan Marion | 5 | -4/+15
Change-Id: Id085c1e3cbc7bf03df02755f9e35896cdb57e9e3 Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-10 | VCL: Fix accept state machine, EPOLLET logic. | Dave Wallace | 2 | -178/+258
Change-Id: I909b717e5c62e91623483bdbb93d9fe4c14f0be7 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2017-11-10 | Map SVM regions at a sane offset on arm64 | Brian Brooks | 1 | -1/+7
Mapping shared virtual memory at 0x30000000, which appears to be derived from x86-32, turns out to be too close to the heap on arm64 systems. The symptoms of memory corruption were random and included crashes in the Python runtime and what appeared to be corruption of malloc's internal mutex. Thanks to Gabriel Ganne for pointing out that disabling ASLR seemed to mitigate the situation. This patch maps SVM regions at an offset from the arm64 kernel constant TASK_UNMAPPED_BASE and also assumes a 48-bit VA (for Ubuntu). Change-Id: I642e5fe83344ab9b5c66c93e0cf1575c17251f3b Signed-off-by: Brian Brooks <brian.brooks@arm.com>
2017-11-10 | VCL-LDPRELOAD: Fix epoll_pwait timeout. | Dave Wallace | 2 | -12/+17
Change-Id: I5712f45c35dbdf34141c42b9d864cad1f918e5e8 Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
2017-11-10 | Break up vpe.api | Neale Ranns | 29 | -2523/+2888
- makes the VAPI generated file more consumable. - VOM build times improve. Change-Id: I838488930bd23a0d3818adfdffdbca3eead382df Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-10 | add warning control macro set | Gabriel Ganne | 2 | -11/+106
Add a way to toggle a warning on and off for a specific section of code. This supports clang and gcc, and has no effect with other compilers. This follows commit bfc29ba442dbb65599f29fe5aa44c6219ed0d3a8 and provides a generic way to handle warnings in such corner cases. To disable a warning enabled by "-Wsome-warning" for a specific piece of code: WARN_OFF(some-warning) // disable compiler warning ; /* some code */ WARN_ON(some-warning) // enable the warning again Change-Id: I0101caa0aa775e2b905c7b3b5fef3bbdce281673 Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
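Under the hood such macros map onto the compilers' diagnostic pragmas; a hedged sketch of the equivalent raw-pragma form (not the actual macro bodies from the patch):

  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wsome-warning"
  /* code that would otherwise trigger -Wsome-warning */
  #pragma GCC diagnostic pop

clang also honours the GCC-prefixed diagnostic pragmas, which is why a single macro set can cover both compilers.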
2017-11-10 | Further fix to SHG handling for ARP/ICMPv6 from BVI in a BD | John Lo | 1 | -6/+25
For ARP/ICMPv6 packets received from a BVI in a BD, allow flooding to all remote VTEPs via VXLAN tunnels irrespective of the SHG check, for ARP request or ICMPv6 neighbor solicitation packets only. All other packet types will flood normally as per the SHG check. Change-Id: I17b1cef9015e363fb684c2b6506ed6c4efe70bba Signed-off-by: John Lo <loj@cisco.com> (cherry picked from commit 5b99133cff1ff0eb9043dd8bd3648b0b3aafa47e)
2017-11-10 | add classify session action set-sr-policy-index | Gabriel Ganne | 5 | -1/+41
This allows using the classifier to steer source-routing packets instead of using the "sr steer" command. This way we can steer on anything instead of only the dst ip address. test: * add an add_node_next function to the VppPapiProvider class. * add a simple test scenario using the classifier to steer packets with dest ip addr == a7::/8 to the source routing insert node. * use new interface indexes (3,4) instead of (0,1) to prevent a cleanup conflict with the other tests which attach a specific fib to the interface. The test creates interfaces separated from the other tests to prevent a conflict in the cleaning of the ip6 fib index 1, which causes vpp not to be able to find a default route on this table. Change-Id: Ibacb30fab3ce53f0dfe848ca6a8cdf0d111d8336 Signed-off-by: Gabriel Ganne <gabriel.ganne@enea.com>
2017-11-10 | Allow Openssl 1.1.0 | Marco Varlese | 5 | -12/+159
This patch addresses all the code changes required in VPP to support the openssl 1.1.0 API. All the changes have been made so that VPP can still be built against the current openssl API whilst being ready for version 1.1.0. Change-Id: I65e22c53c5decde7a15c7eb78a62951ee246b8dc Signed-off-by: Marco Varlese <marco.varlese@suse.com>
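A minimal sketch of the dual-version pattern such changes typically rely on (an illustrative HMAC-SHA1 helper, not code from the patch): OpenSSL 1.1.0 made HMAC_CTX opaque, so the context must be allocated with HMAC_CTX_new()/HMAC_CTX_free() instead of living on the stack:

  #include <openssl/hmac.h>

  static void
  compute_hmac_sha1 (const void *key, int key_len, const unsigned char *data,
		     size_t data_len, unsigned char *out, unsigned int *out_len)
  {
  #if OPENSSL_VERSION_NUMBER >= 0x10100000L
    HMAC_CTX *ctx = HMAC_CTX_new ();	/* 1.1.0: opaque, heap allocated */
    HMAC_Init_ex (ctx, key, key_len, EVP_sha1 (), NULL);
    HMAC_Update (ctx, data, data_len);
    HMAC_Final (ctx, out, out_len);
    HMAC_CTX_free (ctx);
  #else
    HMAC_CTX ctx;			/* pre-1.1.0: context on the stack */
    HMAC_CTX_init (&ctx);
    HMAC_Init_ex (&ctx, key, key_len, EVP_sha1 (), NULL);
    HMAC_Update (&ctx, data, data_len);
    HMAC_Final (&ctx, out, out_len);
    HMAC_CTX_cleanup (&ctx);
  #endif
  }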
2017-11-10 | vppinfra: add 512-bit vector definitions and types | Damjan Marion | 1 | -0/+25
Change-Id: I245c034684ba8585c8f5bb5353027aba13f8a53e Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-11-10 | Fix bug in key calculation for IPsec tunnel interface | Matthew Smith | 1 | -2/+2
When IPsec tunnel interface has the inbound SA updated, the key used to find the right interface for inbound packets was being generated using the destination address instead of the source. Change-Id: Id5a6fb1511637c912b329aad65188789646a5889 Signed-off-by: Matthew Smith <mgsmith@netgate.com>
2017-11-10 | session: add app ns index to ns create api | Florin Coras | 3 | -4/+66
Change-Id: I86bfe4e8b0a899cc54c9b37eeb5eec701d0baf3d Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-11-10 | Add sw_if_index to the ip_neighbor_details_t response. | Jon Loeliger | 2 | -4/+9
When a DUMP with sw_if_index == ~0 is used to get all Neighbor entries for all interfaces, it is unclear in the details to which interface the neighbor belongs. Clear that up by returning the associated sw_if_index as well. Change-Id: Ib584a57138f7faceffed64d7c1854f7af92e0e42 Signed-off-by: Jon Loeliger <jdl@netgate.com>
2017-11-10 | session: use listener logic for proxy rules | Florin Coras | 5 | -53/+133
This moves session proxy logic from session rules tables to table/logic used to manage session listeners in order to avoid overlap of semantically different rules. Change-Id: I463522cce91b92d942f6a2086fb14c3366b9f023 Signed-off-by: Florin Coras <fcoras@cisco.com>
2017-11-10 | VOM: enum_base - not constexpr to appease coverity | Neale Ranns | 1 | -23/+23
Change-Id: Id87e245882eab80a85a2883ffdb7a0f3b7f26a75 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-10 | VOM: memset DHCP hostname in VPP API | Neale Ranns | 1 | -0/+1
Change-Id: I74886c31f8ceba2561679513560cf5ae46757236 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2017-11-10 | BIER: replace uintXX_t with uXX | Neale Ranns | 4 | -12/+12
Change-Id: I0ba698da9739c11de3a368fe4cf3617167a8d854 Signed-off-by: Neale Ranns <nranns@cisco.com>
2017-11-10 | session: use pool for segment manager properties | Florin Coras | 5 | -37/+83
Change-Id: I280fea2610dcfc0b2da84973b9f567daec42f1f6 Signed-off-by: Florin Coras <fcoras@cisco.com>
/*
 * Copyright (c) 2017 SUSE LLC.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include <vnet/sctp/sctp.h>
#include <vnet/sctp/sctp_debug.h>
#include <vppinfra/random.h>
#include <openssl/hmac.h>

vlib_node_registration_t sctp4_output_node;
vlib_node_registration_t sctp6_output_node;

typedef enum _sctp_output_next
{
  SCTP_OUTPUT_NEXT_DROP,
  SCTP_OUTPUT_NEXT_IP_LOOKUP,
  SCTP_OUTPUT_N_NEXT
} sctp_output_next_t;

#define foreach_sctp4_output_next              	\
  _ (DROP, "error-drop")                        \
  _ (IP_LOOKUP, "ip4-lookup")

#define foreach_sctp6_output_next              	\
  _ (DROP, "error-drop")                        \
  _ (IP_LOOKUP, "ip6-lookup")

static char *sctp_error_strings[] = {
#define sctp_error(n,s) s,
#include <vnet/sctp/sctp_error.def>
#undef sctp_error
};
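
/*
 * The list above uses the common X-macro pattern: sctp_error.def contains
 * one sctp_error (SYMBOL, "description") entry per error counter, and
 * redefining sctp_error() before the #include selects which part of each
 * entry is emitted.  A hypothetical illustration (the real contents of
 * sctp_error.def are not reproduced here):
 *
 *   sctp_error (NONE, "no error")
 *   sctp_error (PKTS_SENT, "packets sent")
 *
 * expands, with the #define above, to:
 *
 *   "no error", "packets sent",
 *
 * i.e. an array of description strings indexed by the error enum.
 */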

typedef struct
{
  sctp_header_t sctp_header;
  sctp_connection_t sctp_connection;
} sctp_tx_trace_t;

/**
 * Flush tx frame populated by retransmits and timer pops
 */
void
sctp_flush_frame_to_output (vlib_main_t * vm, u8 thread_index, u8 is_ip4)
{
  if (sctp_main.tx_frames[!is_ip4][thread_index])
    {
      u32 next_index;
      next_index = is_ip4 ? sctp4_output_node.index : sctp6_output_node.index;
      vlib_put_frame_to_node (vm, next_index,
			      sctp_main.tx_frames[!is_ip4][thread_index]);
      sctp_main.tx_frames[!is_ip4][thread_index] = 0;
    }
}

/**
 * Flush ip lookup tx frames populated by timer pops
 */
always_inline void
sctp_flush_frame_to_ip_lookup (vlib_main_t * vm, u8 thread_index, u8 is_ip4)
{
  if (sctp_main.ip_lookup_tx_frames[!is_ip4][thread_index])
    {
      u32 next_index;
      next_index = is_ip4 ? ip4_lookup_node.index : ip6_lookup_node.index;
      vlib_put_frame_to_node (vm, next_index,
			      sctp_main.ip_lookup_tx_frames[!is_ip4]
			      [thread_index]);
      sctp_main.ip_lookup_tx_frames[!is_ip4][thread_index] = 0;
    }
}

/**
 * Flush v4 and v6 sctp and ip-lookup tx frames for thread index
 */
void
sctp_flush_frames_to_output (u8 thread_index)
{
  vlib_main_t *vm = vlib_get_main ();
  sctp_flush_frame_to_output (vm, thread_index, 1);
  sctp_flush_frame_to_output (vm, thread_index, 0);
  sctp_flush_frame_to_ip_lookup (vm, thread_index, 1);
  sctp_flush_frame_to_ip_lookup (vm, thread_index, 0);
}

u32
ip4_sctp_compute_checksum (vlib_main_t * vm, vlib_buffer_t * p0,
			   ip4_header_t * ip0)
{
  ip_csum_t checksum;
  u32 ip_header_length, payload_length_host_byte_order;
  u32 n_this_buffer, n_bytes_left, n_ip_bytes_this_buffer;
  void *data_this_buffer;

  /* Initialize checksum with ip header. */
  ip_header_length = ip4_header_bytes (ip0);
  payload_length_host_byte_order =
    clib_net_to_host_u16 (ip0->length) - ip_header_length;
  checksum =
    clib_host_to_net_u32 (payload_length_host_byte_order +
			  (ip0->protocol << 16));

  if (BITS (uword) == 32)
    {
      checksum =
	ip_csum_with_carry (checksum,
			    clib_mem_unaligned (&ip0->src_address, u32));
      checksum =
	ip_csum_with_carry (checksum,
			    clib_mem_unaligned (&ip0->dst_address, u32));
    }
  else
    checksum =
      ip_csum_with_carry (checksum,
			  clib_mem_unaligned (&ip0->src_address, u64));

  n_bytes_left = n_this_buffer = payload_length_host_byte_order;
  data_this_buffer = (void *) ip0 + ip_header_length;
  n_ip_bytes_this_buffer =
    p0->current_length - (((u8 *) ip0 - p0->data) - p0->current_data);
  if (n_this_buffer + ip_header_length > n_ip_bytes_this_buffer)
    {
      n_this_buffer = n_ip_bytes_this_buffer > ip_header_length ?
	n_ip_bytes_this_buffer - ip_header_length : 0;
    }
  while (1)
    {
      checksum =
	ip_incremental_checksum (checksum, data_this_buffer, n_this_buffer);
      n_bytes_left -= n_this_buffer;
      if (n_bytes_left == 0)
	break;

      ASSERT (p0->flags & VLIB_BUFFER_NEXT_PRESENT);
      p0 = vlib_get_buffer (vm, p0->next_buffer);
      data_this_buffer = vlib_buffer_get_current (p0);
      n_this_buffer = p0->current_length;
    }

  return checksum;
}

u32
ip6_sctp_compute_checksum (vlib_main_t * vm, vlib_buffer_t * p0,
			   ip6_header_t * ip0, int *bogus_lengthp)
{
  ip_csum_t checksum;
  u16 payload_length_host_byte_order;
  u32 i, n_this_buffer, n_bytes_left;
  u32 headers_size = sizeof (ip0[0]);
  void *data_this_buffer;

  ASSERT (bogus_lengthp);
  *bogus_lengthp = 0;

  /* Initialize checksum with ip header. */
  checksum = ip0->payload_length + clib_host_to_net_u16 (ip0->protocol);
  payload_length_host_byte_order = clib_net_to_host_u16 (ip0->payload_length);
  data_this_buffer = (void *) (ip0 + 1);

  for (i = 0; i < ARRAY_LEN (ip0->src_address.as_uword); i++)
    {
      checksum = ip_csum_with_carry (checksum,
				     clib_mem_unaligned (&ip0->
							 src_address.as_uword
							 [i], uword));
      checksum =
	ip_csum_with_carry (checksum,
			    clib_mem_unaligned (&ip0->dst_address.as_uword[i],
						uword));
    }

  /* Some packets (e.g., MLDv2 or UDP-Ping packets) may come with a
   * "router alert" hop-by-hop extension header; skip over it here */
  if (PREDICT_FALSE (ip0->protocol == IP_PROTOCOL_IP6_HOP_BY_HOP_OPTIONS))
    {
      u32 skip_bytes;
      ip6_hop_by_hop_ext_t *ext_hdr =
	(ip6_hop_by_hop_ext_t *) data_this_buffer;

      /* validate that the next header really is SCTP */
      ASSERT ((ext_hdr->next_hdr == IP_PROTOCOL_SCTP));

      skip_bytes = 8 * (1 + ext_hdr->n_data_u64s);
      data_this_buffer = (void *) ((u8 *) data_this_buffer + skip_bytes);

      payload_length_host_byte_order -= skip_bytes;
      headers_size += skip_bytes;
    }

  n_bytes_left = n_this_buffer = payload_length_host_byte_order;
  if (p0 && n_this_buffer + headers_size > p0->current_length)
    n_this_buffer =
      p0->current_length >
      headers_size ? p0->current_length - headers_size : 0;
  while (1)
    {
      checksum =
	ip_incremental_checksum (checksum, data_this_buffer, n_this_buffer);
      n_bytes_left -= n_this_buffer;
      if (n_bytes_left == 0)
	break;

      if (!(p0->flags & VLIB_BUFFER_NEXT_PRESENT))
	{
	  *bogus_lengthp = 1;
	  return 0xfefe;
	}
      p0 = vlib_get_buffer (vm, p0->next_buffer);
      data_this_buffer = vlib_buffer_get_current (p0);
      n_this_buffer = p0->current_length;
    }

  return checksum;
}

void
sctp_push_ip_hdr (sctp_main_t * tm, sctp_sub_connection_t * sctp_sub_conn,
		  vlib_buffer_t * b)
{
  sctp_header_t *th = vlib_buffer_get_current (b);
  vlib_main_t *vm = vlib_get_main ();
  if (sctp_sub_conn->c_is_ip4)
    {
      ip4_header_t *ih;
      ih = vlib_buffer_push_ip4 (vm, b, &sctp_sub_conn->c_lcl_ip4,
				 &sctp_sub_conn->c_rmt_ip4, IP_PROTOCOL_SCTP,
				 1);
      th->checksum = ip4_sctp_compute_checksum (vm, b, ih);
    }
  else
    {
      ip6_header_t *ih;
      int bogus = ~0;

      ih = vlib_buffer_push_ip6 (vm, b, &sctp_sub_conn->c_lcl_ip6,
				 &sctp_sub_conn->c_rmt_ip6, IP_PROTOCOL_SCTP);
      th->checksum = ip6_sctp_compute_checksum (vm, b, ih, &bogus);
      ASSERT (!bogus);
    }
}

always_inline void *
sctp_reuse_buffer (vlib_main_t * vm, vlib_buffer_t * b)
{
  if (b->flags & VLIB_BUFFER_NEXT_PRESENT)
    vlib_buffer_free_one (vm, b->next_buffer);
  /* Zero all flags but free list index and trace flag */
  b->flags &= VLIB_BUFFER_NEXT_PRESENT - 1;
  b->current_data = 0;
  b->current_length = 0;
  b->total_length_not_including_first_buffer = 0;
  vnet_buffer (b)->sctp.flags = 0;
  vnet_buffer (b)->sctp.subconn_idx = MAX_SCTP_CONNECTIONS;

  /* Leave enough space for headers */
  return vlib_buffer_make_headroom (b, MAX_HDRS_LEN);
}

always_inline void *
sctp_init_buffer (vlib_main_t * vm, vlib_buffer_t * b)
{
  ASSERT ((b->flags & VLIB_BUFFER_NEXT_PRESENT) == 0);
  b->flags &= VLIB_BUFFER_NON_DEFAULT_FREELIST;
  b->flags |= VNET_BUFFER_F_LOCALLY_ORIGINATED;
  b->total_length_not_including_first_buffer = 0;
  vnet_buffer (b)->sctp.flags = 0;
  vnet_buffer (b)->sctp.subconn_idx = MAX_SCTP_CONNECTIONS;
  VLIB_BUFFER_TRACE_TRAJECTORY_INIT (b);
  /* Leave enough space for headers */
  return vlib_buffer_make_headroom (b, MAX_HDRS_LEN);
}

always_inline int
sctp_alloc_tx_buffers (sctp_main_t * tm, u8 thread_index, u32 n_free_buffers)
{
  vlib_main_t *vm = vlib_get_main ();
  u32 current_length = vec_len (tm->tx_buffers[thread_index]);
  u32 n_allocated;

  vec_validate (tm->tx_buffers[thread_index],
		current_length + n_free_buffers - 1);
  n_allocated =
    vlib_buffer_alloc (vm, &tm->tx_buffers[thread_index][current_length],
		       n_free_buffers);
  _vec_len (tm->tx_buffers[thread_index]) = current_length + n_allocated;
  /* buffer shortage, report failure */
  if (vec_len (tm->tx_buffers[thread_index]) == 0)
    {
      clib_warning ("out of buffers");
      return -1;
    }
  return 0;
}

always_inline int
sctp_get_free_buffer_index (sctp_main_t * tm, u32 * bidx)
{
  u32 *my_tx_buffers;
  u32 thread_index = vlib_get_thread_index ();
  if (PREDICT_FALSE (vec_len (tm->tx_buffers[thread_index]) == 0))
    {
      if (sctp_alloc_tx_buffers (tm, thread_index, VLIB_FRAME_SIZE))
	return -1;
    }
  my_tx_buffers = tm->tx_buffers[thread_index];
  *bidx = my_tx_buffers[vec_len (my_tx_buffers) - 1];
  _vec_len (my_tx_buffers) -= 1;
  return 0;
}

always_inline void
sctp_enqueue_to_output_i (vlib_main_t * vm, vlib_buffer_t * b, u32 bi,
			  u8 is_ip4, u8 flush)
{
  sctp_main_t *tm = vnet_get_sctp_main ();
  u32 thread_index = vlib_get_thread_index ();
  u32 *to_next, next_index;
  vlib_frame_t *f;

  b->flags |= VNET_BUFFER_F_LOCALLY_ORIGINATED;
  b->error = 0;

  /* Decide where to send the packet */
  next_index = is_ip4 ? sctp4_output_node.index : sctp6_output_node.index;
  sctp_trajectory_add_start (b, 2);

  /* Get frame to v4/6 output node */
  f = tm->tx_frames[!is_ip4][thread_index];
  if (!f)
    {
      f = vlib_get_frame_to_node (vm, next_index);
      ASSERT (f);
      tm->tx_frames[!is_ip4][thread_index] = f;
    }
  to_next = vlib_frame_vector_args (f);
  to_next[f->n_vectors] = bi;
  f->n_vectors += 1;
  if (flush || f->n_vectors == VLIB_FRAME_SIZE)
    {
      vlib_put_frame_to_node (vm, next_index, f);
      tm->tx_frames[!is_ip4][thread_index] = 0;
    }
}

always_inline void
sctp_enqueue_to_output_now (vlib_main_t * vm, vlib_buffer_t * b, u32 bi,
			    u8 is_ip4)
{
  sctp_enqueue_to_output_i (vm, b, bi, is_ip4, 1);
}

always_inline void
sctp_enqueue_to_ip_lookup_i (vlib_main_t * vm, vlib_buffer_t * b, u32 bi,
			     u8 is_ip4, u32 fib_index, u8 flush)
{
  sctp_main_t *tm = vnet_get_sctp_main ();
  u32 thread_index = vlib_get_thread_index ();
  u32 *to_next, next_index;
  vlib_frame_t *f;

  b->flags |= VNET_BUFFER_F_LOCALLY_ORIGINATED;
  b->error = 0;

  vnet_buffer (b)->sw_if_index[VLIB_TX] = fib_index;
  vnet_buffer (b)->sw_if_index[VLIB_RX] = 0;

  /* Send to IP lookup */
  next_index = is_ip4 ? ip4_lookup_node.index : ip6_lookup_node.index;
  if (VLIB_BUFFER_TRACE_TRAJECTORY > 0)
    {
      b->pre_data[0] = 2;
      b->pre_data[1] = next_index;
    }

  f = tm->ip_lookup_tx_frames[!is_ip4][thread_index];
  if (!f)
    {
      f = vlib_get_frame_to_node (vm, next_index);
      ASSERT (f);
      tm->ip_lookup_tx_frames[!is_ip4][thread_index] = f;
    }

  to_next = vlib_frame_vector_args (f);
  to_next[f->n_vectors] = bi;
  f->n_vectors += 1;
  if (flush || f->n_vectors == VLIB_FRAME_SIZE)
    {
      vlib_put_frame_to_node (vm, next_index, f);
      tm->ip_lookup_tx_frames[!is_ip4][thread_index] = 0;
    }
}

always_inline void
sctp_enqueue_to_ip_lookup (vlib_main_t * vm, vlib_buffer_t * b, u32 bi,
			   u8 is_ip4, u32 fib_index)
{
  sctp_enqueue_to_ip_lookup_i (vm, b, bi, is_ip4, fib_index, 0);
  if (vm->thread_index == 0 && vlib_num_workers ())
    session_flush_frames_main_thread (vm);
}

/**
 * Convert buffer to INIT
 */
void
sctp_prepare_init_chunk (sctp_connection_t * sctp_conn, u8 idx,
			 vlib_buffer_t * b)
{
  u32 random_seed = random_default_seed ();
  u16 alloc_bytes = sizeof (sctp_init_chunk_t);
  sctp_sub_connection_t *sub_conn = &sctp_conn->sub_conn[idx];

  sctp_ipv4_addr_param_t *ip4_param = 0;
  sctp_ipv6_addr_param_t *ip6_param = 0;

  if (sub_conn->c_is_ip4)
    alloc_bytes += sizeof (sctp_ipv4_addr_param_t);
  else
    alloc_bytes += sizeof (sctp_ipv6_addr_param_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_init_chunk_t *init_chunk = vlib_buffer_push_uninit (b, alloc_bytes);

  u16 pointer_offset = sizeof (sctp_init_chunk_t);
  if (sub_conn->c_is_ip4)
    {
      ip4_param =
	(sctp_ipv4_addr_param_t *) ((char *) init_chunk + pointer_offset);
      ip4_param->address.as_u32 = sub_conn->c_lcl_ip.ip4.as_u32;

      pointer_offset += sizeof (sctp_ipv4_addr_param_t);
    }
  else
    {
      ip6_param =
	(sctp_ipv6_addr_param_t *) ((char *) init_chunk + pointer_offset);
      ip6_param->address.as_u64[0] = sub_conn->c_lcl_ip.ip6.as_u64[0];
      ip6_param->address.as_u64[1] = sub_conn->c_lcl_ip.ip6.as_u64[1];

      pointer_offset += sizeof (sctp_ipv6_addr_param_t);
    }

  init_chunk->sctp_hdr.src_port = sub_conn->c_lcl_port;	/* No need of host_to_net conversion, already in net-byte order */
  init_chunk->sctp_hdr.dst_port = sub_conn->c_rmt_port;	/* No need of host_to_net conversion, already in net-byte order */
  init_chunk->sctp_hdr.checksum = 0;
  /* The sender of an INIT must set the VERIFICATION_TAG to 0 as per RFC 4960 Section 8.5.1 */
  init_chunk->sctp_hdr.verification_tag = 0x0;

  vnet_sctp_set_chunk_type (&init_chunk->chunk_hdr, INIT);
  vnet_sctp_set_chunk_length (&init_chunk->chunk_hdr, chunk_len);
  vnet_sctp_common_hdr_params_host_to_net (&init_chunk->chunk_hdr);

  sctp_init_cwnd (sctp_conn);

  init_chunk->a_rwnd = clib_host_to_net_u32 (sctp_conn->sub_conn[idx].cwnd);
  init_chunk->initiate_tag = clib_host_to_net_u32 (random_u32 (&random_seed));
  init_chunk->inboud_streams_count =
    clib_host_to_net_u16 (INBOUND_STREAMS_COUNT);
  init_chunk->outbound_streams_count =
    clib_host_to_net_u16 (OUTBOUND_STREAMS_COUNT);

  init_chunk->initial_tsn =
    clib_host_to_net_u32 (sctp_conn->local_initial_tsn);
  SCTP_CONN_TRACKING_DBG ("sctp_conn->local_initial_tsn = %u",
			  sctp_conn->local_initial_tsn);

  sctp_conn->local_tag = init_chunk->initiate_tag;

  vnet_buffer (b)->sctp.connection_index = sub_conn->c_c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;

  SCTP_DBG_STATE_MACHINE ("CONN_INDEX = %u, CURR_CONN_STATE = %u (%s), "
			  "CHUNK_TYPE = %s, "
			  "SRC_PORT = %u, DST_PORT = %u",
			  sub_conn->connection.c_index,
			  sctp_conn->state,
			  sctp_state_to_string (sctp_conn->state),
			  sctp_chunk_to_string (INIT),
			  init_chunk->sctp_hdr.src_port,
			  init_chunk->sctp_hdr.dst_port);
}
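
/*
 * Sketch of the buffer layout produced by sctp_prepare_init_chunk () for an
 * IPv4 sub-connection (illustrative, not to scale):
 *
 *   | sctp_header_t | INIT chunk fixed fields | sctp_ipv4_addr_param_t | pad |
 *
 * chunk_len, as computed above, covers everything except the common SCTP
 * header and the trailing padding, in line with RFC 4960.
 */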

void
sctp_compute_mac (sctp_connection_t * sctp_conn,
		  sctp_state_cookie_param_t * state_cookie)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
  HMAC_CTX *ctx;
#else
  HMAC_CTX ctx;
#endif
  unsigned int len = 0;
  const EVP_MD *md = EVP_sha1 ();
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
  ctx = HMAC_CTX_new ();
  HMAC_Init_ex (ctx, &state_cookie->creation_time,
		sizeof (state_cookie->creation_time), md, NULL);
  HMAC_Update (ctx, (const unsigned char *) &sctp_conn, sizeof (sctp_conn));
  HMAC_Final (ctx, state_cookie->mac, &len);
  HMAC_CTX_free (ctx);
#else
  HMAC_CTX_init (&ctx);
  HMAC_Init_ex (&ctx, &state_cookie->creation_time,
		sizeof (state_cookie->creation_time), md, NULL);
  HMAC_Update (&ctx, (const unsigned char *) &sctp_conn, sizeof (sctp_conn));
  HMAC_Final (&ctx, state_cookie->mac, &len);
  HMAC_CTX_cleanup (&ctx);
#endif

  ENDIANESS_SWAP (state_cookie->mac);
}

void
sctp_prepare_cookie_ack_chunk (sctp_connection_t * sctp_conn, u8 idx,
			       vlib_buffer_t * b)
{
  vlib_main_t *vm = vlib_get_main ();

  sctp_reuse_buffer (vm, b);

  u16 alloc_bytes = sizeof (sctp_cookie_ack_chunk_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_cookie_ack_chunk_t *cookie_ack_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  cookie_ack_chunk->sctp_hdr.checksum = 0;
  cookie_ack_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  cookie_ack_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  cookie_ack_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  vnet_sctp_set_chunk_type (&cookie_ack_chunk->chunk_hdr, COOKIE_ACK);
  vnet_sctp_set_chunk_length (&cookie_ack_chunk->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

void
sctp_prepare_cookie_echo_chunk (sctp_connection_t * sctp_conn, u8 idx,
				vlib_buffer_t * b, u8 reuse_buffer)
{
  vlib_main_t *vm = vlib_get_main ();

  if (reuse_buffer)
    sctp_reuse_buffer (vm, b);

  /* The minimum size of the message is given by the sctp_cookie_echo_chunk_t */
  u16 alloc_bytes = sizeof (sctp_cookie_echo_chunk_t);
  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);
  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);
  sctp_cookie_echo_chunk_t *cookie_echo_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);
  cookie_echo_chunk->sctp_hdr.checksum = 0;
  cookie_echo_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  cookie_echo_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  cookie_echo_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  vnet_sctp_set_chunk_type (&cookie_echo_chunk->chunk_hdr, COOKIE_ECHO);
  vnet_sctp_set_chunk_length (&cookie_echo_chunk->chunk_hdr, chunk_len);
  clib_memcpy (&(cookie_echo_chunk->cookie), &sctp_conn->cookie_param,
	       sizeof (sctp_state_cookie_param_t));

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}


/*
 *  Send COOKIE_ECHO
 */
void
sctp_send_cookie_echo (sctp_connection_t * sctp_conn)
{
  vlib_buffer_t *b;
  u32 bi;
  sctp_main_t *tm = vnet_get_sctp_main ();
  vlib_main_t *vm = vlib_get_main ();

  if (PREDICT_FALSE (sctp_conn->init_retransmit_err > SCTP_MAX_INIT_RETRANS))
    {
      clib_warning ("Reached MAX_INIT_RETRANS times. Aborting connection.");

      session_stream_connect_notify (&sctp_conn->sub_conn
				     [SCTP_PRIMARY_PATH_IDX].connection, 1);

      sctp_connection_timers_reset (sctp_conn);

      sctp_connection_cleanup (sctp_conn);

      return;
    }

  if (PREDICT_FALSE (sctp_get_free_buffer_index (tm, &bi)))
    return;

  b = vlib_get_buffer (vm, bi);
  u8 idx = SCTP_PRIMARY_PATH_IDX;

  sctp_init_buffer (vm, b);
  sctp_prepare_cookie_echo_chunk (sctp_conn, idx, b, 0);
  sctp_enqueue_to_output_now (vm, b, bi, sctp_conn->sub_conn[idx].c_is_ip4);

  /* Start the T1_INIT timer */
  sctp_timer_set (sctp_conn, idx, SCTP_TIMER_T1_INIT,
		  sctp_conn->sub_conn[idx].RTO);

  /* Change state to COOKIE_WAIT */
  sctp_conn->state = SCTP_STATE_COOKIE_WAIT;

  /* Measure RTT with this */
  sctp_conn->sub_conn[idx].rtt_ts = sctp_time_now ();
}


/**
 * Convert buffer to ERROR
 */
void
sctp_prepare_operation_error (sctp_connection_t * sctp_conn, u8 idx,
			      vlib_buffer_t * b, u8 err_cause)
{
  vlib_main_t *vm = vlib_get_main ();

  sctp_reuse_buffer (vm, b);

  /* The minimum size of the message is given by the sctp_operation_error_t */
  u16 alloc_bytes =
    sizeof (sctp_operation_error_t) + sizeof (sctp_err_cause_param_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_operation_error_t *err_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  /* src_port & dst_port are already in network byte-order */
  err_chunk->sctp_hdr.checksum = 0;
  err_chunk->sctp_hdr.src_port = sctp_conn->sub_conn[idx].connection.lcl_port;
  err_chunk->sctp_hdr.dst_port = sctp_conn->sub_conn[idx].connection.rmt_port;
  /* As per RFC4960 Section 5.2.2: copy the INITIATE_TAG into the VERIFICATION_TAG of the ERROR chunk */
  err_chunk->sctp_hdr.verification_tag = sctp_conn->local_tag;

  err_chunk->err_causes[0].param_hdr.length =
    clib_host_to_net_u16 (sizeof (err_chunk->err_causes[0].param_hdr.type) +
			  sizeof (err_chunk->err_causes[0].param_hdr.length));
  err_chunk->err_causes[0].param_hdr.type = clib_host_to_net_u16 (err_cause);

  vnet_sctp_set_chunk_type (&err_chunk->chunk_hdr, OPERATION_ERROR);
  vnet_sctp_set_chunk_length (&err_chunk->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to ABORT
 */
void
sctp_prepare_abort_for_collision (sctp_connection_t * sctp_conn, u8 idx,
				  vlib_buffer_t * b, ip4_address_t * ip4_addr,
				  ip6_address_t * ip6_addr)
{
  vlib_main_t *vm = vlib_get_main ();

  sctp_reuse_buffer (vm, b);

  /* The minimum size of the message is given by the sctp_abort_chunk_t */
  u16 alloc_bytes = sizeof (sctp_abort_chunk_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_abort_chunk_t *abort_chunk = vlib_buffer_push_uninit (b, alloc_bytes);

  /* src_port & dst_port are already in network byte-order */
  abort_chunk->sctp_hdr.checksum = 0;
  abort_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  abort_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  /* As per RFC4960 Section 5.2.2: copy the INITIATE_TAG into the VERIFICATION_TAG of the ABORT chunk */
  abort_chunk->sctp_hdr.verification_tag = sctp_conn->local_tag;

  vnet_sctp_set_chunk_type (&abort_chunk->chunk_hdr, ABORT);
  vnet_sctp_set_chunk_length (&abort_chunk->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to INIT-ACK
 */
void
sctp_prepare_initack_chunk_for_collision (sctp_connection_t * sctp_conn,
					  u8 idx, vlib_buffer_t * b,
					  ip4_address_t * ip4_addr,
					  ip6_address_t * ip6_addr)
{
  vlib_main_t *vm = vlib_get_main ();
  sctp_ipv4_addr_param_t *ip4_param = 0;
  sctp_ipv6_addr_param_t *ip6_param = 0;

  sctp_reuse_buffer (vm, b);

  /* The minimum size of the message is given by the sctp_init_ack_chunk_t */
  u16 alloc_bytes =
    sizeof (sctp_init_ack_chunk_t) + sizeof (sctp_state_cookie_param_t);

  if (PREDICT_TRUE (ip4_addr != NULL))
    {
      /* Create room for variable-length fields in the INIT_ACK chunk */
      alloc_bytes += SCTP_IPV4_ADDRESS_TYPE_LENGTH;
    }
  if (PREDICT_TRUE (ip6_addr != NULL))
    {
      /* Create room for variable-length fields in the INIT_ACK chunk */
      alloc_bytes += SCTP_IPV6_ADDRESS_TYPE_LENGTH;
    }

  if (sctp_conn->sub_conn[idx].connection.is_ip4)
    alloc_bytes += sizeof (sctp_ipv4_addr_param_t);
  else
    alloc_bytes += sizeof (sctp_ipv6_addr_param_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_init_ack_chunk_t *init_ack_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  u16 pointer_offset = sizeof (sctp_init_ack_chunk_t);

  /* Create State Cookie parameter */
  sctp_state_cookie_param_t *state_cookie_param =
    (sctp_state_cookie_param_t *) ((char *) init_ack_chunk + pointer_offset);

  state_cookie_param->param_hdr.type =
    clib_host_to_net_u16 (SCTP_STATE_COOKIE_TYPE);
  state_cookie_param->param_hdr.length =
    clib_host_to_net_u16 (sizeof (sctp_state_cookie_param_t));
  state_cookie_param->creation_time = clib_host_to_net_u32 (sctp_time_now ());
  state_cookie_param->cookie_lifespan =
    clib_host_to_net_u32 (SCTP_VALID_COOKIE_LIFE);

  sctp_compute_mac (sctp_conn, state_cookie_param);

  pointer_offset += sizeof (sctp_state_cookie_param_t);

  if (PREDICT_TRUE (ip4_addr != NULL))
    {
      sctp_ipv4_addr_param_t *ipv4_addr =
	(sctp_ipv4_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);

      ipv4_addr->param_hdr.type =
	clib_host_to_net_u16 (SCTP_IPV4_ADDRESS_TYPE);
      ipv4_addr->param_hdr.length =
	clib_host_to_net_u16 (SCTP_IPV4_ADDRESS_TYPE_LENGTH);
      ipv4_addr->address.as_u32 = ip4_addr->as_u32;

      pointer_offset += SCTP_IPV4_ADDRESS_TYPE_LENGTH;
    }
  if (PREDICT_TRUE (ip6_addr != NULL))
    {
      sctp_ipv6_addr_param_t *ipv6_addr =
	(sctp_ipv6_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);

      ipv6_addr->param_hdr.type =
	clib_host_to_net_u16 (SCTP_IPV6_ADDRESS_TYPE);
      ipv6_addr->param_hdr.length =
	clib_host_to_net_u16 (SCTP_IPV6_ADDRESS_TYPE_LENGTH);
      ipv6_addr->address.as_u64[0] = ip6_addr->as_u64[0];
      ipv6_addr->address.as_u64[1] = ip6_addr->as_u64[1];

      pointer_offset += SCTP_IPV6_ADDRESS_TYPE_LENGTH;
    }

  if (sctp_conn->sub_conn[idx].connection.is_ip4)
    {
      ip4_param =
	(sctp_ipv4_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);
      ip4_param->address.as_u32 =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip4.as_u32;

      pointer_offset += sizeof (sctp_ipv4_addr_param_t);
    }
  else
    {
      ip6_param =
	(sctp_ipv6_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);
      ip6_param->address.as_u64[0] =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip6.as_u64[0];
      ip6_param->address.as_u64[1] =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip6.as_u64[1];

      pointer_offset += sizeof (sctp_ipv6_addr_param_t);
    }

  /* src_port & dst_port are already in network byte-order */
  init_ack_chunk->sctp_hdr.checksum = 0;
  init_ack_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  init_ack_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  /* the sctp_conn->verification_tag is already in network byte-order (being a copy of the init_tag coming with the INIT chunk) */
  init_ack_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  init_ack_chunk->initial_tsn =
    clib_host_to_net_u32 (sctp_conn->local_initial_tsn);
  SCTP_CONN_TRACKING_DBG ("init_ack_chunk->initial_tsn = %u",
			  init_ack_chunk->initial_tsn);

  vnet_sctp_set_chunk_type (&init_ack_chunk->chunk_hdr, INIT_ACK);
  vnet_sctp_set_chunk_length (&init_ack_chunk->chunk_hdr, chunk_len);

  init_ack_chunk->initiate_tag = sctp_conn->local_tag;

  init_ack_chunk->a_rwnd =
    clib_host_to_net_u32 (sctp_conn->sub_conn[idx].cwnd);
  init_ack_chunk->inboud_streams_count =
    clib_host_to_net_u16 (INBOUND_STREAMS_COUNT);
  init_ack_chunk->outbound_streams_count =
    clib_host_to_net_u16 (OUTBOUND_STREAMS_COUNT);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to INIT-ACK
 */
void
sctp_prepare_initack_chunk (sctp_connection_t * sctp_conn, u8 idx,
			    vlib_buffer_t * b, ip4_address_t * ip4_addr,
			    u8 add_ip4, ip6_address_t * ip6_addr, u8 add_ip6)
{
  vlib_main_t *vm = vlib_get_main ();
  sctp_ipv4_addr_param_t *ip4_param = 0;
  sctp_ipv6_addr_param_t *ip6_param = 0;
  u32 random_seed = random_default_seed ();

  sctp_reuse_buffer (vm, b);

  /* The minimum size of the message is given by the sctp_init_ack_chunk_t */
  u16 alloc_bytes =
    sizeof (sctp_init_ack_chunk_t) + sizeof (sctp_state_cookie_param_t);

  if (PREDICT_FALSE (add_ip4 == 1))
    {
      /* Create room for variable-length fields in the INIT_ACK chunk */
      alloc_bytes += SCTP_IPV4_ADDRESS_TYPE_LENGTH;
    }
  if (PREDICT_FALSE (add_ip6 == 1))
    {
      /* Create room for variable-length fields in the INIT_ACK chunk */
      alloc_bytes += SCTP_IPV6_ADDRESS_TYPE_LENGTH;
    }

  if (sctp_conn->sub_conn[idx].connection.is_ip4)
    alloc_bytes += sizeof (sctp_ipv4_addr_param_t);
  else
    alloc_bytes += sizeof (sctp_ipv6_addr_param_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_init_ack_chunk_t *init_ack_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  u16 pointer_offset = sizeof (sctp_init_ack_chunk_t);

  /* Create State Cookie parameter */
  sctp_state_cookie_param_t *state_cookie_param =
    (sctp_state_cookie_param_t *) ((char *) init_ack_chunk + pointer_offset);

  state_cookie_param->param_hdr.type =
    clib_host_to_net_u16 (SCTP_STATE_COOKIE_TYPE);
  state_cookie_param->param_hdr.length =
    clib_host_to_net_u16 (sizeof (sctp_state_cookie_param_t));
  state_cookie_param->creation_time = clib_host_to_net_u32 (sctp_time_now ());
  state_cookie_param->cookie_lifespan =
    clib_host_to_net_u32 (SCTP_VALID_COOKIE_LIFE);

  sctp_compute_mac (sctp_conn, state_cookie_param);

  pointer_offset += sizeof (sctp_state_cookie_param_t);

  if (PREDICT_TRUE (ip4_addr != NULL))
    {
      sctp_ipv4_addr_param_t *ipv4_addr =
	(sctp_ipv4_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);

      ipv4_addr->param_hdr.type =
	clib_host_to_net_u16 (SCTP_IPV4_ADDRESS_TYPE);
      ipv4_addr->param_hdr.length =
	clib_host_to_net_u16 (SCTP_IPV4_ADDRESS_TYPE_LENGTH);
      ipv4_addr->address.as_u32 = ip4_addr->as_u32;

      pointer_offset += SCTP_IPV4_ADDRESS_TYPE_LENGTH;
    }
  if (PREDICT_TRUE (ip6_addr != NULL))
    {
      sctp_ipv6_addr_param_t *ipv6_addr =
	(sctp_ipv6_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);

      ipv6_addr->param_hdr.type =
	clib_host_to_net_u16 (SCTP_IPV6_ADDRESS_TYPE);
      ipv6_addr->param_hdr.length =
	clib_host_to_net_u16 (SCTP_IPV6_ADDRESS_TYPE_LENGTH);
      ipv6_addr->address.as_u64[0] = ip6_addr->as_u64[0];
      ipv6_addr->address.as_u64[1] = ip6_addr->as_u64[1];

      pointer_offset += SCTP_IPV6_ADDRESS_TYPE_LENGTH;
    }

  if (sctp_conn->sub_conn[idx].connection.is_ip4)
    {
      ip4_param =
	(sctp_ipv4_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);
      ip4_param->address.as_u32 =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip4.as_u32;

      pointer_offset += sizeof (sctp_ipv4_addr_param_t);
    }
  else
    {
      ip6_param =
	(sctp_ipv6_addr_param_t *) ((char *) init_ack_chunk + pointer_offset);
      ip6_param->address.as_u64[0] =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip6.as_u64[0];
      ip6_param->address.as_u64[1] =
	sctp_conn->sub_conn[idx].connection.lcl_ip.ip6.as_u64[1];

      pointer_offset += sizeof (sctp_ipv6_addr_param_t);
    }

  /* src_port & dst_port are already in network byte-order */
  init_ack_chunk->sctp_hdr.checksum = 0;
  init_ack_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  init_ack_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  /* the sctp_conn->verification_tag is already in network byte-order (being a copy of the init_tag coming with the INIT chunk) */
  init_ack_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  init_ack_chunk->initial_tsn =
    clib_host_to_net_u32 (sctp_conn->local_initial_tsn);
  SCTP_CONN_TRACKING_DBG ("init_ack_chunk->initial_tsn = %u",
			  init_ack_chunk->initial_tsn);

  vnet_sctp_set_chunk_type (&init_ack_chunk->chunk_hdr, INIT_ACK);
  vnet_sctp_set_chunk_length (&init_ack_chunk->chunk_hdr, chunk_len);

  init_ack_chunk->initiate_tag =
    clib_host_to_net_u32 (random_u32 (&random_seed));

  init_ack_chunk->a_rwnd =
    clib_host_to_net_u32 (sctp_conn->sub_conn[idx].cwnd);
  init_ack_chunk->inboud_streams_count =
    clib_host_to_net_u16 (INBOUND_STREAMS_COUNT);
  init_ack_chunk->outbound_streams_count =
    clib_host_to_net_u16 (OUTBOUND_STREAMS_COUNT);

  sctp_conn->local_tag = init_ack_chunk->initiate_tag;

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to SHUTDOWN
 */
void
sctp_prepare_shutdown_chunk (sctp_connection_t * sctp_conn, u8 idx,
			     vlib_buffer_t * b)
{
  u16 alloc_bytes = sizeof (sctp_shutdown_association_chunk_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_shutdown_association_chunk_t *shutdown_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  shutdown_chunk->sctp_hdr.checksum = 0;
  /* No need of host_to_net conversion, already in net-byte order */
  shutdown_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  shutdown_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  shutdown_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  vnet_sctp_set_chunk_type (&shutdown_chunk->chunk_hdr, SHUTDOWN);
  vnet_sctp_set_chunk_length (&shutdown_chunk->chunk_hdr, chunk_len);

  shutdown_chunk->cumulative_tsn_ack = sctp_conn->last_rcvd_tsn;

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/*
 * Send SHUTDOWN
 */
void
sctp_send_shutdown (sctp_connection_t * sctp_conn)
{
  vlib_buffer_t *b;
  u32 bi;
  sctp_main_t *tm = vnet_get_sctp_main ();
  vlib_main_t *vm = vlib_get_main ();

  if (sctp_check_outstanding_data_chunks (sctp_conn) > 0)
    return;

  if (PREDICT_FALSE (sctp_get_free_buffer_index (tm, &bi)))
    return;

  u8 idx = SCTP_PRIMARY_PATH_IDX;

  b = vlib_get_buffer (vm, bi);
  sctp_init_buffer (vm, b);
  sctp_prepare_shutdown_chunk (sctp_conn, idx, b);

  sctp_enqueue_to_output_now (vm, b, bi,
			      sctp_conn->sub_conn[idx].connection.is_ip4);
}

/**
 * Convert buffer to SHUTDOWN_ACK
 */
void
sctp_prepare_shutdown_ack_chunk (sctp_connection_t * sctp_conn, u8 idx,
				 vlib_buffer_t * b)
{
  u16 alloc_bytes = sizeof (sctp_shutdown_association_chunk_t);
  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  sctp_shutdown_ack_chunk_t *shutdown_ack_chunk =
    vlib_buffer_push_uninit (b, alloc_bytes);

  shutdown_ack_chunk->sctp_hdr.checksum = 0;
  /* No need of host_to_net conversion, already in net-byte order */
  shutdown_ack_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  shutdown_ack_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  shutdown_ack_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;

  vnet_sctp_set_chunk_type (&shutdown_ack_chunk->chunk_hdr, SHUTDOWN_ACK);
  vnet_sctp_set_chunk_length (&shutdown_ack_chunk->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/*
 * Send SHUTDOWN_ACK
 */
void
sctp_send_shutdown_ack (sctp_connection_t * sctp_conn, u8 idx,
			vlib_buffer_t * b)
{
  vlib_main_t *vm = vlib_get_main ();

  if (sctp_check_outstanding_data_chunks (sctp_conn) > 0)
    return;

  sctp_reuse_buffer (vm, b);

  sctp_prepare_shutdown_ack_chunk (sctp_conn, idx, b);
}

/**
 * Convert buffer to SACK
 */
void
sctp_prepare_sack_chunk (sctp_connection_t * sctp_conn, u8 idx,
			 vlib_buffer_t * b)
{
  vlib_main_t *vm = vlib_get_main ();

  sctp_reuse_buffer (vm, b);

  u16 alloc_bytes = sizeof (sctp_selective_ack_chunk_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_selective_ack_chunk_t *sack = vlib_buffer_push_uninit (b, alloc_bytes);

  sack->sctp_hdr.checksum = 0;
  sack->sctp_hdr.src_port = sctp_conn->sub_conn[idx].connection.lcl_port;
  sack->sctp_hdr.dst_port = sctp_conn->sub_conn[idx].connection.rmt_port;
  sack->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  vnet_sctp_set_chunk_type (&sack->chunk_hdr, SACK);
  vnet_sctp_set_chunk_length (&sack->chunk_hdr, chunk_len);

  sack->cumulative_tsn_ack = sctp_conn->next_tsn_expected;

  sctp_conn->ack_state = 0;

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to HEARTBEAT_ACK
 */
void
sctp_prepare_heartbeat_ack_chunk (sctp_connection_t * sctp_conn, u8 idx,
				  vlib_buffer_t * b)
{
  vlib_main_t *vm = vlib_get_main ();

  u16 alloc_bytes = sizeof (sctp_hb_ack_chunk_t);

  sctp_reuse_buffer (vm, b);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_hb_ack_chunk_t *hb_ack = vlib_buffer_push_uninit (b, alloc_bytes);

  hb_ack->sctp_hdr.checksum = 0;
  /* No need of host_to_net conversion, already in net-byte order */
  hb_ack->sctp_hdr.src_port = sctp_conn->sub_conn[idx].connection.lcl_port;
  hb_ack->sctp_hdr.dst_port = sctp_conn->sub_conn[idx].connection.rmt_port;
  hb_ack->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  hb_ack->hb_info.param_hdr.type = clib_host_to_net_u16 (1);
  hb_ack->hb_info.param_hdr.length =
    clib_host_to_net_u16 (sizeof (hb_ack->hb_info.hb_info));

  vnet_sctp_set_chunk_type (&hb_ack->chunk_hdr, HEARTBEAT_ACK);
  vnet_sctp_set_chunk_length (&hb_ack->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

/**
 * Convert buffer to HEARTBEAT
 */
void
sctp_prepare_heartbeat_chunk (sctp_connection_t * sctp_conn, u8 idx,
			      vlib_buffer_t * b)
{
  u16 alloc_bytes = sizeof (sctp_hb_req_chunk_t);

  /* As per RFC 4960 the chunk_length value does NOT include
   * the size of the first header (see sctp_header_t) or any padding
   */
  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  sctp_hb_req_chunk_t *hb_req = vlib_buffer_push_uninit (b, alloc_bytes);

  hb_req->sctp_hdr.checksum = 0;
  /* No need of host_to_net conversion, already in net-byte order */
  hb_req->sctp_hdr.src_port = sctp_conn->sub_conn[idx].connection.lcl_port;
  hb_req->sctp_hdr.dst_port = sctp_conn->sub_conn[idx].connection.rmt_port;
  hb_req->sctp_hdr.verification_tag = sctp_conn->remote_tag;
  hb_req->hb_info.param_hdr.type = clib_host_to_net_u16 (1);
  hb_req->hb_info.param_hdr.length =
    clib_host_to_net_u16 (sizeof (hb_req->hb_info.hb_info));

  vnet_sctp_set_chunk_type (&hb_req->chunk_hdr, HEARTBEAT);
  vnet_sctp_set_chunk_length (&hb_req->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

void
sctp_send_heartbeat (sctp_connection_t * sctp_conn)
{
  vlib_buffer_t *b;
  u32 bi;
  sctp_main_t *tm = vnet_get_sctp_main ();
  vlib_main_t *vm = vlib_get_main ();

  u8 i;
  u32 now = sctp_time_now ();

  for (i = 0; i < MAX_SCTP_CONNECTIONS; i++)
    {
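      /* Skip paths that are down; send a HEARTBEAT on any path that has been
       * idle for longer than SCTP_HB_INTERVAL and count it as unacknowledged. */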
      if (sctp_conn->sub_conn[i].state == SCTP_SUBCONN_STATE_DOWN)
	continue;

      if (now > (sctp_conn->sub_conn[i].last_seen + SCTP_HB_INTERVAL))
	{
	  if (PREDICT_FALSE (sctp_get_free_buffer_index (tm, &bi)))
	    return;

	  b = vlib_get_buffer (vm, bi);
	  sctp_init_buffer (vm, b);
	  sctp_prepare_heartbeat_chunk (sctp_conn, i, b);

	  sctp_enqueue_to_output_now (vm, b, bi,
				      sctp_conn->sub_conn[i].
				      connection.is_ip4);

	  sctp_conn->sub_conn[i].unacknowledged_hb += 1;
	}
    }
}

/**
 * Convert buffer to SHUTDOWN_COMPLETE
 */
void
sctp_prepare_shutdown_complete_chunk (sctp_connection_t * sctp_conn, u8 idx,
				      vlib_buffer_t * b)
{
  u16 alloc_bytes = sizeof (sctp_shutdown_association_chunk_t);
  alloc_bytes += vnet_sctp_calculate_padding (alloc_bytes);

  u16 chunk_len = alloc_bytes - sizeof (sctp_header_t);

  sctp_shutdown_complete_chunk_t *shutdown_complete =
    vlib_buffer_push_uninit (b, alloc_bytes);

  shutdown_complete->sctp_hdr.checksum = 0;
  /* No need of host_to_net conversion, already in net-byte order */
  shutdown_complete->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  shutdown_complete->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  shutdown_complete->sctp_hdr.verification_tag = sctp_conn->remote_tag;

  vnet_sctp_set_chunk_type (&shutdown_complete->chunk_hdr, SHUTDOWN_COMPLETE);
  vnet_sctp_set_chunk_length (&shutdown_complete->chunk_hdr, chunk_len);

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;
  vnet_buffer (b)->sctp.subconn_idx = idx;
}

void
sctp_send_shutdown_complete (sctp_connection_t * sctp_conn, u8 idx,
			     vlib_buffer_t * b0)
{
  vlib_main_t *vm = vlib_get_main ();

  if (sctp_check_outstanding_data_chunks (sctp_conn) > 0)
    return;
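  /* Do not send SHUTDOWN COMPLETE while DATA chunks are still outstanding;
   * the early return above enforces this. */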

  sctp_reuse_buffer (vm, b0);

  sctp_prepare_shutdown_complete_chunk (sctp_conn, idx, b0);
}

/*
 *  Send INIT
 */
void
sctp_send_init (sctp_connection_t * sctp_conn)
{
  vlib_buffer_t *b;
  u32 bi;
  sctp_main_t *tm = vnet_get_sctp_main ();
  vlib_main_t *vm = vlib_get_main ();

  if (PREDICT_FALSE (sctp_conn->init_retransmit_err > SCTP_MAX_INIT_RETRANS))
    {
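      /* Too many INIT retransmissions: tell the session layer the connect
       * failed, stop all timers and tear the connection down. */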
      clib_warning ("Reached MAX_INIT_RETRANS times. Aborting connection.");

      session_stream_connect_notify (&sctp_conn->sub_conn
				     [SCTP_PRIMARY_PATH_IDX].connection, 1);

      sctp_connection_timers_reset (sctp_conn);

      sctp_connection_cleanup (sctp_conn);

      return;
    }

  if (PREDICT_FALSE (sctp_get_free_buffer_index (tm, &bi)))
    return;

  b = vlib_get_buffer (vm, bi);
  u8 idx = SCTP_PRIMARY_PATH_IDX;

  sctp_init_buffer (vm, b);
  sctp_prepare_init_chunk (sctp_conn, idx, b);

  sctp_push_ip_hdr (tm, &sctp_conn->sub_conn[idx], b);
  sctp_enqueue_to_ip_lookup (vm, b, bi, sctp_conn->sub_conn[idx].c_is_ip4,
			     sctp_conn->sub_conn[idx].c_fib_index);

  /* Start the T1_INIT timer */
  sctp_timer_set (sctp_conn, idx, SCTP_TIMER_T1_INIT,
		  sctp_conn->sub_conn[idx].RTO);

  /* Change state to COOKIE_WAIT */
  sctp_conn->state = SCTP_STATE_COOKIE_WAIT;

  /* Measure RTT with this */
  sctp_conn->sub_conn[idx].rtt_ts = sctp_time_now ();
}

/**
 * Push SCTP header and update connection variables
 */
static void
sctp_push_hdr_i (sctp_connection_t * sctp_conn, vlib_buffer_t * b,
		 sctp_state_t next_state)
{
  u16 data_len =
    b->current_length + b->total_length_not_including_first_buffer;
  ASSERT (!b->total_length_not_including_first_buffer
	  || (b->flags & VLIB_BUFFER_NEXT_PRESENT));

  SCTP_ADV_DBG_OUTPUT ("b->current_length = %u, "
		       "b->current_data = %p "
		       "data_len = %u",
		       b->current_length, b->current_data, data_len);

  u16 bytes_to_add = sizeof (sctp_payload_data_chunk_t);
  u16 chunk_length = data_len + bytes_to_add - sizeof (sctp_header_t);

  bytes_to_add += vnet_sctp_calculate_padding (bytes_to_add + data_len);

  sctp_payload_data_chunk_t *data_chunk =
    vlib_buffer_push_uninit (b, bytes_to_add);

  u8 idx = sctp_data_subconn_select (sctp_conn);
  SCTP_DBG_OUTPUT
    ("SCTP_CONN = %p, IDX = %u, S_INDEX = %u, C_INDEX = %u, sctp_conn->[...].LCL_PORT = %u, sctp_conn->[...].RMT_PORT = %u",
     sctp_conn, idx, sctp_conn->sub_conn[idx].connection.s_index,
     sctp_conn->sub_conn[idx].connection.c_index,
     sctp_conn->sub_conn[idx].connection.lcl_port,
     sctp_conn->sub_conn[idx].connection.rmt_port);
  data_chunk->sctp_hdr.checksum = 0;
  data_chunk->sctp_hdr.src_port =
    sctp_conn->sub_conn[idx].connection.lcl_port;
  data_chunk->sctp_hdr.dst_port =
    sctp_conn->sub_conn[idx].connection.rmt_port;
  data_chunk->sctp_hdr.verification_tag = sctp_conn->remote_tag;

  data_chunk->tsn = clib_host_to_net_u32 (sctp_conn->next_tsn);
  data_chunk->stream_id = clib_host_to_net_u16 (0);
  data_chunk->stream_seq = clib_host_to_net_u16 (0);
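  /* Only stream 0 is used and the stream sequence number is not advanced
   * here. */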

  vnet_sctp_set_chunk_type (&data_chunk->chunk_hdr, DATA);
  vnet_sctp_set_chunk_length (&data_chunk->chunk_hdr, chunk_length);

  vnet_sctp_set_bbit (&data_chunk->chunk_hdr);
  vnet_sctp_set_ebit (&data_chunk->chunk_hdr);
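  /* B and E bits both set: the user message is carried unfragmented in this
   * single DATA chunk (RFC 4960, section 3.3.1). */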

  SCTP_ADV_DBG_OUTPUT ("POINTER_WITH_DATA = %p, DATA_OFFSET = %u",
		       b->data, b->current_data);

  if (sctp_conn->sub_conn[idx].state != SCTP_SUBCONN_AWAITING_SACK)
    {
      sctp_conn->sub_conn[idx].state = SCTP_SUBCONN_AWAITING_SACK;
      sctp_conn->last_unacked_tsn = sctp_conn->next_tsn;
    }

  sctp_conn->next_tsn += data_len;

  u32 inflight = sctp_conn->next_tsn - sctp_conn->last_unacked_tsn;
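  /* Approximate number of bytes sent but not yet acknowledged on this
   * sub-connection. */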
  /* Section 7.2.2; point (3) */
  if (sctp_conn->sub_conn[idx].partially_acked_bytes >=
      sctp_conn->sub_conn[idx].cwnd
      && inflight >= sctp_conn->sub_conn[idx].cwnd)
    {
      sctp_conn->sub_conn[idx].cwnd += sctp_conn->sub_conn[idx].PMTU;
      sctp_conn->sub_conn[idx].partially_acked_bytes -=
	sctp_conn->sub_conn[idx].cwnd;
    }

  sctp_conn->sub_conn[idx].last_data_ts = sctp_time_now ();

  vnet_buffer (b)->sctp.connection_index =
    sctp_conn->sub_conn[idx].connection.c_index;

  vnet_buffer (b)->sctp.subconn_idx = idx;
}

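/**
 * Push the SCTP header for one outgoing segment.
 *
 * Presumably called by the session layer through the transport protocol's
 * virtual function table when dequeuing data for transmission.
 */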
u32
sctp_push_header (transport_connection_t * trans_conn, vlib_buffer_t * b)
{
  sctp_connection_t *sctp_conn =
    sctp_get_connection_from_transport (trans_conn);

  SCTP_DBG_OUTPUT ("TRANS_CONN = %p, SCTP_CONN = %p, "
		   "S_INDEX = %u, C_INDEX = %u,"
		   "trans_conn->LCL_PORT = %u, trans_conn->RMT_PORT = %u",
		   trans_conn,
		   sctp_conn,
		   trans_conn->s_index,
		   trans_conn->c_index,
		   trans_conn->lcl_port, trans_conn->rmt_port);

  sctp_push_hdr_i (sctp_conn, b, SCTP_STATE_ESTABLISHED);

  sctp_trajectory_add_start (b, 3);

  return 0;
}

u32
sctp_prepare_data_retransmit (sctp_connection_t * sctp_conn,
			      u8 idx,
			      u32 offset,
			      u32 max_deq_bytes, vlib_buffer_t ** b)
{
  sctp_main_t *tm = vnet_get_sctp_main ();
  vlib_main_t *vm = vlib_get_main ();
  int n_bytes = 0;
  u32 bi, available_bytes, seg_size;
  u8 *data;

  ASSERT (sctp_conn->state >= SCTP_STATE_ESTABLISHED);
  ASSERT (max_deq_bytes != 0);

  /*
   * Make sure we can retransmit something
   */
  available_bytes =
    session_tx_fifo_max_dequeue (&sctp_conn->sub_conn[idx].connection);
  ASSERT (available_bytes >= offset);
  available_bytes -= offset;
  if (!available_bytes)
    return 0;
  max_deq_bytes = clib_min (sctp_conn->sub_conn[idx].cwnd, max_deq_bytes);
  max_deq_bytes = clib_min (available_bytes, max_deq_bytes);

  seg_size = max_deq_bytes;

  /*
   * Allocate and fill in buffer(s)
   */

  if (PREDICT_FALSE (sctp_get_free_buffer_index (tm, &bi)))
    return 0;
  *b = vlib_get_buffer (vm, bi);
  data = sctp_init_buffer (vm, *b);

  /* Easy case: the whole segment fits into a single buffer */
  if (PREDICT_TRUE (seg_size <= tm->bytes_per_buffer))
    {
      n_bytes =
	stream_session_peek_bytes (&sctp_conn->sub_conn[idx].connection, data,
				   offset, max_deq_bytes);
      ASSERT (n_bytes == max_deq_bytes);
      b[0]->current_length = n_bytes;
      sctp_push_hdr_i (sctp_conn, *b, sctp_conn->state);
    }
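  /* Segments larger than a single buffer are not chunked here; in that case
   * n_bytes stays 0 and the caller skips the send. */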

  return n_bytes;
}

void
sctp_data_retransmit (sctp_connection_t * sctp_conn)
{
  vlib_main_t *vm = vlib_get_main ();
  vlib_buffer_t *b = 0;
  u32 bi, n_bytes = 0;

  u8 idx = sctp_data_subconn_select (sctp_conn);

  SCTP_DBG_OUTPUT
    ("SCTP_CONN = %p, IDX = %u, S_INDEX = %u, C_INDEX = %u, sctp_conn->[...].LCL_PORT = %u, sctp_conn->[...].RMT_PORT = %u",
     sctp_conn, idx, sctp_conn->sub_conn[idx].connection.s_index,
     sctp_conn->sub_conn[idx].connection.c_index,
     sctp_conn->sub_conn[idx].connection.lcl_port,
     sctp_conn->sub_conn[idx].connection.rmt_port);

  /* DATA chunks can only be retransmitted on an established association;
   * sctp_prepare_data_retransmit asserts the same condition. */
  if (sctp_conn->state < SCTP_STATE_ESTABLISHED)
    {
      return;
    }

  n_bytes =
    sctp_prepare_data_retransmit (sctp_conn, idx, 0,
				  sctp_conn->sub_conn[idx].cwnd, &b);
  /* Nothing was dequeued: no buffer to send */
  if (n_bytes == 0)
    return;

  SCTP_DBG_OUTPUT ("We have data (%u bytes) to retransmit", n_bytes);

  bi = vlib_get_buffer_index (vm, b);

  sctp_enqueue_to_output_now (vm, b, bi,
			      sctp_conn->sub_conn[idx].connection.is_ip4);

  return;
}

#if SCTP_DEBUG_STATE_MACHINE
always_inline u8
sctp_validate_output_state_machine (sctp_connection_t * sctp_conn,
				    u8 chunk_type)
{
  u8 result = 0;
  switch (sctp_conn->state)
    {
    case SCTP_STATE_CLOSED:
      if (chunk_type != INIT && chunk_type != INIT_ACK)
	result = 1;
      break;
    case SCTP_STATE_ESTABLISHED:
      if (chunk_type != DATA && chunk_type != HEARTBEAT &&
	  chunk_type != HEARTBEAT_ACK && chunk_type != SACK &&
	  chunk_type != COOKIE_ACK && chunk_type != SHUTDOWN)
	result = 1;
      break;
    case SCTP_STATE_COOKIE_WAIT:
      if (chunk_type != COOKIE_ECHO)
	result = 1;
      break;
    case SCTP_STATE_SHUTDOWN_SENT:
      if (chunk_type != SHUTDOWN_COMPLETE)
	result = 1;
      break;
    case SCTP_STATE_SHUTDOWN_RECEIVED:
      if (chunk_type != SHUTDOWN_ACK)
	result = 1;
      break;
    }
  return result;
}
#endif

always_inline u8
sctp_is_retransmitting (sctp_connection_t * sctp_conn, u8 idx)
{
  return sctp_conn->sub_conn[idx].is_retransmitting;
}

always_inline uword
sctp46_output_inline (vlib_main_t * vm,
		      vlib_node_runtime_t * node,
		      vlib_frame_t * from_frame, int is_ip4)
{
  u32 n_left_from, next_index, *from, *to_next;
  u32 my_thread_index = vm->thread_index;

  from = vlib_frame_vector_args (from_frame);
  n_left_from = from_frame->n_vectors;
  next_index = node->cached_next_index;
  sctp_set_time_now (my_thread_index);

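  /* Standard vlib double loop: for each buffer look up its connection,
   * push the IP header, compute the SCTP checksum and hand the packet
   * to ip-lookup (or error-drop on failure). */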
  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0;
	  vlib_buffer_t *b0;
	  sctp_header_t *sctp_hdr = 0;
	  sctp_connection_t *sctp_conn;
	  sctp_tx_trace_t *t0;
	  sctp_header_t *th0 = 0;
	  u32 error0 = SCTP_ERROR_PKTS_SENT, next0 =
	    SCTP_OUTPUT_NEXT_IP_LOOKUP;

#if SCTP_DEBUG_STATE_MACHINE
	  u16 packet_length = 0;
#endif

	  bi0 = from[0];
	  to_next[0] = bi0;
	  from += 1;
	  to_next += 1;
	  n_left_from -= 1;
	  n_left_to_next -= 1;

	  b0 = vlib_get_buffer (vm, bi0);

	  sctp_conn =
	    sctp_connection_get (vnet_buffer (b0)->sctp.connection_index,
				 my_thread_index);

	  if (PREDICT_FALSE (sctp_conn == 0))
	    {
	      error0 = SCTP_ERROR_INVALID_CONNECTION;
	      next0 = SCTP_OUTPUT_NEXT_DROP;
	      goto done;
	    }

	  u8 idx = vnet_buffer (b0)->sctp.subconn_idx;

	  th0 = vlib_buffer_get_current (b0);

	  if (is_ip4)
	    {
	      ip4_header_t *iph4 = vlib_buffer_push_ip4 (vm,
							 b0,
							 &sctp_conn->sub_conn
							 [idx].connection.
							 lcl_ip.ip4,
							 &sctp_conn->
							 sub_conn
							 [idx].connection.
							 rmt_ip.ip4,
							 IP_PROTOCOL_SCTP, 1);

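	      /* The SCTP checksum spans the whole SCTP packet, so it is
	       * computed only once the final packet layout is known. */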
	      u32 checksum = ip4_sctp_compute_checksum (vm, b0, iph4);

	      sctp_hdr = ip4_next_header (iph4);
	      sctp_hdr->checksum = checksum;

	      vnet_buffer (b0)->l4_hdr_offset = (u8 *) th0 - b0->data;

#if SCTP_DEBUG_STATE_MACHINE
	      packet_length = clib_net_to_host_u16 (iph4->length);
#endif
	    }
	  else
	    {
	      ip6_header_t *iph6 = vlib_buffer_push_ip6 (vm,
							 b0,
							 &sctp_conn->sub_conn
							 [idx].
							 connection.lcl_ip.
							 ip6,
							 &sctp_conn->sub_conn
							 [idx].
							 connection.rmt_ip.
							 ip6,
							 IP_PROTOCOL_SCTP);

	      int bogus = ~0;
	      u32 checksum = ip6_sctp_compute_checksum (vm, b0, iph6, &bogus);
	      ASSERT (!bogus);

	      sctp_hdr = ip6_next_header (iph6);
	      sctp_hdr->checksum = checksum;

	      vnet_buffer (b0)->l3_hdr_offset = (u8 *) iph6 - b0->data;
	      vnet_buffer (b0)->l4_hdr_offset = (u8 *) th0 - b0->data;

#if SCTP_DEBUG_STATE_MACHINE
	      packet_length = clib_net_to_host_u16 (iph6->payload_length);
#endif
	    }

	  sctp_full_hdr_t *full_hdr = (sctp_full_hdr_t *) sctp_hdr;
	  u8 chunk_type = vnet_sctp_get_chunk_type (&full_hdr->common_hdr);
	  if (chunk_type >= UNKNOWN)
	    {
	      clib_warning
		("Trying to send an unrecognized chunk... something is really bad.");
	      error0 = SCTP_ERROR_UNKNOWN_CHUNK;
	      next0 = SCTP_OUTPUT_NEXT_DROP;
	      goto done;
	    }

#if SCTP_DEBUG_STATE_MACHINE
	  u8 is_valid =
	    (sctp_conn->sub_conn[idx].connection.lcl_port ==
	     sctp_hdr->src_port
	     || sctp_conn->sub_conn[idx].connection.lcl_port ==
	     sctp_hdr->dst_port)
	    && (sctp_conn->sub_conn[idx].connection.rmt_port ==
		sctp_hdr->dst_port
		|| sctp_conn->sub_conn[idx].connection.rmt_port ==
		sctp_hdr->src_port);

	  if (!is_valid)
	    {
	      SCTP_DBG_STATE_MACHINE ("BUFFER IS INCORRECT: conn_index = %u, "
				      "packet_length = %u, "
				      "chunk_type = %u [%s], "
				      "connection.lcl_port = %u, sctp_hdr->src_port = %u, "
				      "connection.rmt_port = %u, sctp_hdr->dst_port = %u",
				      sctp_conn->sub_conn[idx].
				      connection.c_index, packet_length,
				      chunk_type,
				      sctp_chunk_to_string (chunk_type),
				      sctp_conn->sub_conn[idx].
				      connection.lcl_port, sctp_hdr->src_port,
				      sctp_conn->sub_conn[idx].
				      connection.rmt_port,
				      sctp_hdr->dst_port);

	      error0 = SCTP_ERROR_UNKNOWN_CHUNK;
	      next0 = SCTP_OUTPUT_NEXT_DROP;
	      goto done;
	    }
#endif
	  SCTP_DBG_STATE_MACHINE
	    ("SESSION_INDEX = %u, CONN_INDEX = %u, CURR_CONN_STATE = %u (%s), "
	     "CHUNK_TYPE = %s, " "SRC_PORT = %u, DST_PORT = %u",
	     sctp_conn->sub_conn[idx].connection.s_index,
	     sctp_conn->sub_conn[idx].connection.c_index,
	     sctp_conn->state, sctp_state_to_string (sctp_conn->state),
	     sctp_chunk_to_string (chunk_type), full_hdr->hdr.src_port,
	     full_hdr->hdr.dst_port);

	  /* Let's make sure the state-machine does not send anything crazy */
#if SCTP_DEBUG_STATE_MACHINE
	  if (sctp_validate_output_state_machine (sctp_conn, chunk_type) != 0)
	    {
	      SCTP_DBG_STATE_MACHINE
		("Sending the wrong chunk (%s) based on state-machine status (%s)",
		 sctp_chunk_to_string (chunk_type),
		 sctp_state_to_string (sctp_conn->state));

	      error0 = SCTP_ERROR_UNKNOWN_CHUNK;
	      next0 = SCTP_OUTPUT_NEXT_DROP;
	      goto done;

	    }
#endif

	  /* Karn's algorithm: RTT measurements MUST NOT be made using
	   * packets that were retransmitted
	   */
	  if (!sctp_is_retransmitting (sctp_conn, idx))
	    {
	      /* Measure RTT with this */
	      if (chunk_type == DATA
		  && sctp_conn->sub_conn[idx].RTO_pending == 0)
		sctp_conn->sub_conn[idx].RTO_pending = 1;

	      sctp_conn->sub_conn[idx].rtt_ts = sctp_time_now ();
	    }

	  /* Per-chunk timer handling and state transitions */
	  switch (chunk_type)
	    {
	    case COOKIE_ECHO:
	      {
		sctp_conn->state = SCTP_STATE_COOKIE_ECHOED;
		break;
	      }
	    case DATA:
	      {
		SCTP_ADV_DBG_OUTPUT ("PACKET_LENGTH = %u", packet_length);

		sctp_timer_update (sctp_conn, idx, SCTP_TIMER_T3_RXTX,
				   sctp_conn->sub_conn[idx].RTO);
		break;
	      }
	    case SHUTDOWN:
	      {
		/* Start the SCTP_TIMER_T2_SHUTDOWN timer */
		sctp_timer_set (sctp_conn, idx, SCTP_TIMER_T2_SHUTDOWN,
				sctp_conn->sub_conn[idx].RTO);
		sctp_conn->state = SCTP_STATE_SHUTDOWN_SENT;
		break;
	      }
	    case SHUTDOWN_ACK:
	      {
		/* Start the SCTP_TIMER_T2_SHUTDOWN timer */
		sctp_timer_set (sctp_conn, idx, SCTP_TIMER_T2_SHUTDOWN,
				sctp_conn->sub_conn[idx].RTO);
		sctp_conn->state = SCTP_STATE_SHUTDOWN_ACK_SENT;
		break;
	      }
	    case SHUTDOWN_COMPLETE:
	      {
		sctp_conn->state = SCTP_STATE_CLOSED;
		break;
	      }
	    }

	  vnet_buffer (b0)->sw_if_index[VLIB_RX] = 0;
	  vnet_buffer (b0)->sw_if_index[VLIB_TX] =
	    sctp_conn->sub_conn[idx].c_fib_index;

	  b0->flags |= VNET_BUFFER_F_LOCALLY_ORIGINATED;
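	  /* Locally generated packet: no RX interface, and the TX FIB is
	   * taken from the sub-connection. */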

	  SCTP_DBG_STATE_MACHINE
	    ("SESSION_INDEX = %u, CONNECTION_INDEX = %u, " "NEW_STATE = %s, "
	     "CHUNK_SENT = %s", sctp_conn->sub_conn[idx].connection.s_index,
	     sctp_conn->sub_conn[idx].connection.c_index,
	     sctp_state_to_string (sctp_conn->state),
	     sctp_chunk_to_string (chunk_type));

	  vnet_sctp_common_hdr_params_host_to_net (&full_hdr->common_hdr);
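	  /* Chunk common-header fields are presumably kept in host order up
	   * to this point; convert them to network order for the wire. */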

	done:
	  b0->error = node->errors[error0];
	  if (PREDICT_FALSE (b0->flags & VLIB_BUFFER_IS_TRACED))
	    {
	      t0 = vlib_add_trace (vm, node, b0, sizeof (*t0));
	      if (th0)
		{
		  clib_memcpy (&t0->sctp_header, th0,
			       sizeof (t0->sctp_header));
		}
	      else
		{
		  memset (&t0->sctp_header, 0, sizeof (t0->sctp_header));
		}
	      if (sctp_conn)
		clib_memcpy (&t0->sctp_connection, sctp_conn,
			     sizeof (t0->sctp_connection));
	      else
		memset (&t0->sctp_connection, 0,
			sizeof (t0->sctp_connection));
	    }

	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
					   n_left_to_next, bi0, next0);
	}

      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }

  return from_frame->n_vectors;
}

static uword
sctp4_output (vlib_main_t * vm, vlib_node_runtime_t * node,
	      vlib_frame_t * from_frame)
{
  return sctp46_output_inline (vm, node, from_frame, 1 /* is_ip4 */ );
}

static uword
sctp6_output (vlib_main_t * vm, vlib_node_runtime_t * node,
	      vlib_frame_t * from_frame)
{
  return sctp46_output_inline (vm, node, from_frame, 0 /* is_ip4 */ );
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (sctp4_output_node) =
{
  .function = sctp4_output,
  .name = "sctp4-output",
    /* Takes a vector of packets. */
  .vector_size = sizeof (u32),
  .n_errors = SCTP_N_ERROR,
  .error_strings = sctp_error_strings,
  .n_next_nodes = SCTP_OUTPUT_N_NEXT,
  .next_nodes = {
#define _(s,n) [SCTP_OUTPUT_NEXT_##s] = n,
    foreach_sctp4_output_next
#undef _
  },
  .format_buffer = format_sctp_header,
  .format_trace = format_sctp_tx_trace,
};
/* *INDENT-ON* */

VLIB_NODE_FUNCTION_MULTIARCH (sctp4_output_node, sctp4_output);

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (sctp6_output_node) =
{
  .function = sctp6_output,
  .name = "sctp6-output",
    /* Takes a vector of packets. */
  .vector_size = sizeof (u32),
  .n_errors = SCTP_N_ERROR,
  .error_strings = sctp_error_strings,
  .n_next_nodes = SCTP_OUTPUT_N_NEXT,
  .next_nodes = {
#define _(s,n) [SCTP_OUTPUT_NEXT_##s] = n,
    foreach_sctp6_output_next
#undef _
  },
  .format_buffer = format_sctp_header,
  .format_trace = format_sctp_tx_trace,
};
/* *INDENT-ON* */

VLIB_NODE_FUNCTION_MULTIARCH (sctp6_output_node, sctp6_output);

/*
 * fd.io coding-style-patch-verification: ON
 *
 * Local Variables:
 * eval: (c-set-style "gnu")
 * End:
 */