path: root/src/vnet/tcp/tcp_newreno.c
Commit history (age, commit message, author, files changed, lines -removed/+added):

2020-04-02  tcp: move features to separate files  (Florin Coras, 1 file, -0/+1)
    Type: refactor
    Signed-off-by: Florin Coras <fcoras@cisco.com>
    Change-Id: Ia477b8dba9266f47907967e363c11048e5cd95ab

2019-10-17  tcp: Init cwnd from ssthresh.  (Sergey Ivanushkin, 1 file, -1/+31)
    Set high ssthresh out of the box and make it configurable.
    Type: fix
    Signed-off-by: Sergey Ivanushkin <sergey.ivanushkin@enea.com>
    Change-Id: Iba1549b4ee55e51468ad0b28ef3d26a85fa9cae0

2019-09-25  tcp: use sacks for timer based recovery  (Florin Coras, 1 file, -6/+0)
    Type: feature
    If available, reuse the sack scoreboard in timer-triggered retransmit to
    minimize spurious retransmits. Additional changes/refactoring:
      - limited transmit updates
      - add sacked rxt count to scoreboard
      - prr pacing of fast retransmits
      - startup pacing updates
      - changed loss window to flight + mss
    Change-Id: I057de6a9d6401698bd1031d5cf5cfbb62f2bdf61
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2019-09-04  tcp: cc algos handle cwnd on congestion signal  (Florin Coras, 1 file, -0/+6)
    Type: refactor
    Change-Id: I15b10a22d0d0b83075a0eef5ef8c09cf76989866
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2019-07-05  tcp: add loss signal to cc algo  (Florin Coras, 1 file, -4/+12)
    Type: feature
    Change-Id: Ibe1a4c555b55fb929d55b02599aaf099ed522cdf
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2019-06-25  tcp: delivery rate estimator  (Florin Coras, 1 file, -2/+3)
    Type: feature
    First-cut implementation with limited testing. The feature is not enabled
    by default; the expectation is that cc algorithms will enable it on demand.
    Change-Id: I92b70cb4dabcff0e9ccd1d725952c4880af394da
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2019-04-08  host stack: update stale copyright  (Florin Coras, 1 file, -1/+1)
    Change-Id: I33cd6e44d126c73c1f4c16b2041ea607b4d7f39f
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-11-13  tcp: cubic fast convergence  (Florin Coras, 1 file, -0/+1)
    Change-Id: I3a15960fe346763faf13e8728ce36c2f3bf7b05a
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-11-09  tcp: basic cubic implementation  (Florin Coras, 1 file, -9/+1)
    Because the code is not optimized, newreno is still the default congestion
    control algorithm.
    Change-Id: I7061cc80c5a75fa8e8265901fae4ea2888e35173
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-05-26  tcp: loss recovery improvements/fixes  (Florin Coras, 1 file, -1/+1)
      - fix newreno cwnd computation
      - reset snd_una_max on entering recovery
      - accept acks beyond snd_nxt but less than snd_congestion when in recovery
      - avoid entering fast recovery multiple times when using sacks
      - avoid as much as possible sending small segments when doing fast retransmit
      - more event logging
    Change-Id: I19dd151d7704e39d4eae06de3a26f5e124875366
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-05-23  tcp: cc improvements and fixes  (Florin Coras, 1 file, -0/+2)
    Change-Id: I6615bb612bcc3f795b5f822ea55209bb30ef35b5
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2018-04-20  tcp: make newreno byte instead of acks dependent  (Florin Coras, 1 file, -2/+8)
    Should be more resilient to ack losses.
    Change-Id: Icec3b93c1d290dec437fcc4e6fe5171906c9ba8a
    Signed-off-by: Florin Coras <fcoras@cisco.com>
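
The 2018-04-20 entry above switches NewReno's congestion-avoidance growth from
counting ACKs to counting acknowledged bytes (appropriate byte counting, in the
spirit of RFC 3465), which tolerates ACK loss and stretch ACKs better. Below is
a minimal Python sketch of that byte-counting idea only; it is not the VPP C
implementation in tcp_newreno.c, and all names are made up for the example.

# Hedged illustration (assumed names, not VPP code): cwnd grows by one MSS
# once a full cwnd's worth of bytes has been acknowledged, regardless of how
# many ACKs carried those bytes.
def newreno_cong_avoid(cwnd, mss, bytes_acked, acc_bytes):
    """Return (new_cwnd, new_acc_bytes) after processing one ACK."""
    acc_bytes += bytes_acked
    if acc_bytes >= cwnd:
        acc_bytes -= cwnd   # a full window of data has been acknowledged
        cwnd += mss         # so open the window by one segment
    return cwnd, acc_bytes

An ACK-counted variant would instead add roughly mss*mss/cwnd on every ACK, so
lost or stretched ACKs directly slow window growth; counting bytes removes that
dependence.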
2017-07-11  Horizontal (nSessions) scaling draft  (Dave Barach, 1 file, -2/+2)
      - Data structure preallocation.
      - Input state machine fixes for mid-stream 3-way handshake retries.
      - Batch connections in the builtin_client
      - Multiple private fifo segment support
      - Fix elog simultaneous event type registration
      - Fix sacks when segment hole is added after highest sacked
      - Add "accepting" session state for sessions pending accept
      - Add ssvm non-recursive locking
      - Estimate RTT for syn-ack
      - Don't init fifo pointers. We're using relative offsets for ooo segments
      - CLI to dump individual session
    Change-Id: Ie0598563fd246537bafba4feed7985478ea1d415
    Signed-off-by: Dave Barach <dbarach@cisco.com>
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2017-06-19  Overall tcp performance improvements (VPP-846)  (Florin Coras, 1 file, -2/+2)
      - limit minimum rto per connection
      - cleanup sack scoreboard
      - switched svm fifo out-of-order data handling from absolute offsets to
        relative offsets
      - improve cwnd handling when using sacks
      - add cc event debug stats
      - improved uri tcp test client/server: bugfixes and added half-duplex mode
      - expanded builtin client/server
      - updated uri socket client/server code to work in half-duplex
      - ensure session node unsets fifo event for empty fifo
      - fix session detach
    Change-Id: Ia446972340e32a65e0694ee2844355167d0c170d
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2017-06-09  Implement sack based tcp loss recovery (RFC 6675)  (Florin Coras, 1 file, -3/+17)
      - refactor existing congestion control code (RFC 6582/5681). Handling of
        ack feedback now consists of: ack parsing, cc event detection, event
        handling, congestion control update
      - extend sack scoreboard to support sack based retransmissions
      - basic implementation of Eifel detection algorithm (RFC 3522) for
        detecting spurious retransmissions
      - actually initialize the per-thread frame freelist hash tables
      - increase worker stack size to 2mb
      - fix session queue node out-of-buffer handling
      - ensure that the local buffer cache vec_len matches reality
      - avoid 2x spurious event requeues when short of buffers
      - count out-of-buffer events
      - make the builtin server thread-safe
      - fix bihash template threading issue: need to paint -1 across
        uninitialized working_copy_length vector elements (via rebase from master)
    Change-Id: I646cb9f1add9a67d08f4a87badbcb117980ebfc4
    Signed-off-by: Florin Coras <fcoras@cisco.com>
    Signed-off-by: Dave Barach <dbarach@cisco.com>

2017-05-07  Fix TCP loss recovery, VPP-745  (Florin Coras, 1 file, -1/+1)
    Allows pure loss recovery retransmits only on timeout.
    Change-Id: I563cdbf9e7b890a6569350bdbda4f746ace0544e
    Signed-off-by: Florin Coras <fcoras@cisco.com>

2017-03-01  VPP-598: tcp stack initial commit  (Dave Barach, 1 file, -0/+93)
    Change-Id: I49e5ce0aae6e4ff634024387ceaf7dbc432a0351
    Signed-off-by: Dave Barach <dave@barachs.net>
    Signed-off-by: Florin Coras <fcoras@cisco.com>
#!/usr/bin/env python3

import unittest
import socket

from framework import VppTestCase, VppTestRunner
from vpp_ip_route import VppIpRoute, VppRoutePath
from vpp_l2 import L2_PORT_TYPE, BRIDGE_FLAGS

from scapy.packet import Raw
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP

NUM_PKTS = 67


class TestL2Flood(VppTestCase):
    """ L2-flood """

    @classmethod
    def setUpClass(cls):
        super(TestL2Flood, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(TestL2Flood, cls).tearDownClass()

    def setUp(self):
        super(TestL2Flood, self).setUp()

        # 12 L2 interfaces and one L3 interface (pg12)
        self.create_pg_interfaces(range(13))
        self.create_bvi_interfaces(1)

        for i in self.pg_interfaces:
            i.admin_up()
        for i in self.bvi_interfaces:
            i.admin_up()

        self.pg12.config_ip4()
        self.pg12.resolve_arp()
        self.bvi0.config_ip4()

    def tearDown(self):
        self.pg12.unconfig_ip4()
        self.bvi0.unconfig_ip4()

        for i in self.pg_interfaces:
            i.admin_down()
        for i in self.bvi_interfaces:
            i.admin_down()
        super(TestL2Flood, self).tearDown()

    def test_flood(self):
        """ L2 Flood Tests """

        #
        # Create a single bridge domain
        #
        self.vapi.bridge_domain_add_del(bd_id=1)

        #
        # add each interface to the BD, 4 interfaces per split horizon group
        #
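        # (split horizon: a frame received on a member of a non-zero SHG is
        # not flooded to other members of that same SHG; SHG 0 imposes no
        # such restriction)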
        for i in self.pg_interfaces[0:4]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=0)
        for i in self.pg_interfaces[4:8]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=1)
        for i in self.pg_interfaces[8:12]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=2)
        for i in self.bvi_interfaces:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=2,
                                                 port_type=L2_PORT_TYPE.BVI)

        p = (Ether(dst="ff:ff:ff:ff:ff:ff",
                   src="00:00:de:ad:be:ef") /
             IP(src="10.10.10.10", dst="1.1.1.1") /
             UDP(sport=1234, dport=1234) /
             Raw(b'\xa5' * 100))

        #
        # input on pg0, expect copies on pg1->11.
        # pg0 is in SHG=0 so the frame is flooded to all members except pg0,
        # the ingress link
        #
        self.pg0.add_stream(p*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:12]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        #
        # input on pg4 (SHG=1), expect copies on pg0->3 (SHG=0)
        # and pg8->11 (SHG=2), but not on the other SHG=1 members
        #
        self.pg4.add_stream(p*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)
        for i in self.pg_interfaces[8:12]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)
        for i in self.pg_interfaces[4:8]:
            i.assert_nothing_captured(remark="Different SH group")

        #
        # Add an IP route so that the packet which hits the BVI is routed out of pg12
        #
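        # (the BVI is the bridge domain's L3 interface: the flooded copy that
        # reaches it leaves the L2 path and is IP routed, here out of pg12)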
        ip_route = VppIpRoute(self, "1.1.1.1", 32,
                              [VppRoutePath(self.pg12.remote_ip4,
                                            self.pg12.sw_if_index)])
        ip_route.add_vpp_config()

        self.logger.info(self.vapi.cli("sh bridge 1 detail"))

        #
        # input on pg0, expect copies on pg1->12.
        # pg0 is in SHG=0 so the frame is flooded to all members except pg0,
        # the ingress link; the copy that hits the BVI is now routed out of pg12
        #
        self.pg0.add_stream(p*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        #
        # input on pg4 (SHG=1), expect copies on pg0->3 (SHG=0)
        # and pg8->12 (pg8->11 in SHG=2, plus the routed copy on pg12)
        #
        self.pg4.add_stream(p*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)
        for i in self.pg_interfaces[8:13]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)
        for i in self.pg_interfaces[4:8]:
            i.assert_nothing_captured(remark="Different SH group")

        #
        # cleanup
        #
        for i in self.pg_interfaces[:12]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, enable=0)
        for i in self.bvi_interfaces:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=2,
                                                 port_type=L2_PORT_TYPE.BVI,
                                                 enable=0)

        self.vapi.bridge_domain_add_del(bd_id=1, is_add=0)

    def test_flood_one(self):
        """ L2 no-Flood Test """

        #
        # Create a single bridge domain
        #
        self.vapi.bridge_domain_add_del(bd_id=1)

        #
        # add 2 interfaces to the BD. A flood from one member therefore goes
        # to only the one other member
        #
        for i in self.pg_interfaces[:2]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=0)

        p = (Ether(dst="ff:ff:ff:ff:ff:ff",
                   src="00:00:de:ad:be:ef") /
             IP(src="10.10.10.10", dst="1.1.1.1") /
             UDP(sport=1234, dport=1234) /
             Raw(b'\xa5' * 100))

        #
        # input on pg0, expect copies on pg1
        #
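        # send_and_expect is a convenience wrapper around the manual
        # add_stream / pg_enable_capture / pg_start / get_capture sequence
        # used in test_flood above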
        self.send_and_expect(self.pg0, p*NUM_PKTS, self.pg1)

        #
        # cleanup
        #
        for i in self.pg_interfaces[:2]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, enable=0)
        self.vapi.bridge_domain_add_del(bd_id=1, is_add=0)

    def test_uu_fwd(self):
        """ UU Flood """

        #
        # Create a single bridge domain
        #
        self.vapi.bridge_domain_add_del(bd_id=1, uu_flood=1)

        #
        # add the first 4 interfaces to the BD, all in split horizon group 0
        #
        for i in self.pg_interfaces[0:4]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, shg=0)

        #
        # an unknown-unicast packet and a broadcast packet
        #
        p_uu = (Ether(dst="00:00:00:c1:5c:00",
                      src="00:00:de:ad:be:ef") /
                IP(src="10.10.10.10", dst="1.1.1.1") /
                UDP(sport=1234, dport=1234) /
                Raw(b'\xa5' * 100))
        p_bm = (Ether(dst="ff:ff:ff:ff:ff:ff",
                      src="00:00:de:ad:be:ef") /
                IP(src="10.10.10.10", dst="1.1.1.1") /
                UDP(sport=1234, dport=1234) /
                Raw(b'\xa5' * 100))

        #
        # input on pg0, expect copies on pg1->3
        #
        self.pg0.add_stream(p_uu*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        self.pg0.add_stream(p_bm*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        #
        # use pg8 as the uu-fwd interface
        #
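        # (a uu-fwd port receives the BD's unknown-unicast traffic instead of
        # that traffic being flooded to every member)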
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg8.sw_if_index, bd_id=1, shg=0,
            port_type=L2_PORT_TYPE.UU_FWD)

        #
        # expect the UU packets only on the uu-fwd interface, not flooded
        # to the other BD members
        #
        self.pg0.add_stream(p_uu*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg8.get_capture(NUM_PKTS, timeout=1)

        for i in self.pg_interfaces[0:4]:
            i.assert_nothing_captured(remark="UU not flooded")

        self.pg0.add_stream(p_bm*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        #
        # remove the uu-fwd interface and expect UU to be flooded again
        #
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg8.sw_if_index, bd_id=1, shg=0,
            port_type=L2_PORT_TYPE.UU_FWD, enable=0)

        self.pg0.add_stream(p_uu*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        for i in self.pg_interfaces[1:4]:
            rx0 = i.get_capture(NUM_PKTS, timeout=1)

        #
        # change the BD config to not support UU-flood
        #
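        # with uu-flood disabled and no uu-fwd port configured, unknown
        # unicast traffic should now be dropped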
        self.vapi.bridge_flags(bd_id=1, is_set=0, flags=BRIDGE_FLAGS.UU_FLOOD)

        self.send_and_assert_no_replies(self.pg0, p_uu)

        #
        # re-add the uu-fwd interface
        #
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg8.sw_if_index, bd_id=1, shg=0,
            port_type=L2_PORT_TYPE.UU_FWD)
        self.logger.info(self.vapi.cli("sh bridge 1 detail"))

        self.pg0.add_stream(p_uu*NUM_PKTS)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        rx0 = self.pg8.get_capture(NUM_PKTS, timeout=1)

        for i in self.pg_interfaces[0:4]:
            i.assert_nothing_captured(remark="UU not flooded")

        #
        # remove the uu-fwd interface
        #
        self.vapi.sw_interface_set_l2_bridge(
            rx_sw_if_index=self.pg8.sw_if_index, bd_id=1, shg=0,
            port_type=L2_PORT_TYPE.UU_FWD, enable=0)
        self.send_and_assert_no_replies(self.pg0, p_uu)

        #
        # cleanup
        #
        for i in self.pg_interfaces[:4]:
            self.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=i.sw_if_index,
                                                 bd_id=1, enable=0)

        self.vapi.bridge_domain_add_del(bd_id=1, is_add=0)


if __name__ == '__main__':
    unittest.main(testRunner=VppTestRunner)