# Binary API support    {#api_doc}

VPP provides a binary API scheme that allows a wide variety of clients to
program data-plane tables. As of this writing, there are hundreds of binary
APIs.

Messages are defined in `*.api` files. Today, there are about 50 API files,
with more arriving as folks add programmable features. The API file compiler
sources reside in @ref src/tools/vppapigen.

From @ref src/vnet/interface.api, here's a typical request/response message
definition:

```{.c}
     autoreply define sw_interface_set_flags
     {
       u32 client_index;
       u32 context;
       u32 sw_if_index;
       /* 1 = up, 0 = down */
       u8 admin_up_down;
     };
```

To a first approximation, the API compiler renders this definition into
`build-root/.../vpp/include/vnet/interface.api.h` as follows:

```{.c}
    /****** Message ID / handler enum ******/
    #ifdef vl_msg_id
    vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS, vl_api_sw_interface_set_flags_t_handler)
    vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, vl_api_sw_interface_set_flags_reply_t_handler)
    #endif	

    /****** Message names ******/
    #ifdef vl_msg_name
    vl_msg_name(vl_api_sw_interface_set_flags_t, 1)
    vl_msg_name(vl_api_sw_interface_set_flags_reply_t, 1)
    #endif	

    /****** Message name, crc list ******/
    #ifdef vl_msg_name_crc_list
    #define foreach_vl_msg_name_crc_interface \
    _(VL_API_SW_INTERFACE_SET_FLAGS, sw_interface_set_flags, f890584a) \
    _(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply, dfbf3afa) \
    #endif	

    /****** Typedefs *****/
    #ifdef vl_typedefs
    typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags {
        u16 _vl_msg_id;
        u32 client_index;
        u32 context;
        u32 sw_if_index;
        u8 admin_up_down;
    }) vl_api_sw_interface_set_flags_t;

    typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags_reply {
        u16 _vl_msg_id;
        u32 context;
        i32 retval;
    }) vl_api_sw_interface_set_flags_reply_t;

    ...
    #endif /* vl_typedefs */
```

To change the admin state of an interface, a binary API client sends a
@ref vl_api_sw_interface_set_flags_t to VPP, which responds with a
@ref vl_api_sw_interface_set_flags_reply_t message.

Multiple layers of software, transport types, and shared libraries
implement a variety of features:

* API message allocation, tracing, pretty-printing, and replay.
* Message transport via global shared memory, pairwise/private shared
  memory, and sockets.
* Barrier synchronization of worker threads across thread-unsafe
  message handlers.
    
Correctly-coded message handlers know nothing about the transport used to
deliver messages to/from VPP. It's reasonably straightforward to use multiple
API message transport types simultaneously.

For historical reasons, binary API messages are (putatively) sent in network
byte order. As of this writing, we're seriously considering whether that
choice makes sense.
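
Since messages travel in network byte order, a client must swap multi-byte
fields before sending. A minimal stand-alone sketch of that step - the
`demo_*` names are hypothetical stand-ins for
`vl_api_sw_interface_set_flags_t` and the `clib_host_to_net_*` helpers,
using the standard `htons`/`htonl` instead:

```{.c}
    #include <arpa/inet.h>          /* htons/htonl */
    #include <stdint.h>

    /* Field layout mirrors vl_api_sw_interface_set_flags_t. */
    typedef struct
    {
      uint16_t _vl_msg_id;
      uint32_t client_index;
      uint32_t context;
      uint32_t sw_if_index;
      uint8_t admin_up_down;
    } demo_set_flags_t;

    static void
    demo_host_to_net (demo_set_flags_t * mp, uint16_t msg_id,
                      uint32_t sw_if_index)
    {
      mp->_vl_msg_id = htons (msg_id);       /* message IDs go out big-endian */
      mp->sw_if_index = htonl (sw_if_index); /* ...and so does the data */
      mp->admin_up_down = 1;                 /* single bytes need no swap */
    }
```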


## Message Allocation

Since binary API messages are always processed in order, we allocate messages
using a ring allocator whenever possible. This scheme is extremely fast when
compared with a traditional memory allocator, and doesn't cause heap
fragmentation. See
@ref src/vlibmemory/memory_shared.c @ref vl_msg_api_alloc_internal().
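
The ring idea can be illustrated stand-alone: because messages are freed in
roughly the order they were allocated, a head index and per-slot busy flags
suffice, with no locks and no fragmentation. Everything below (the `demo_*`
names, the fixed slot geometry) is hypothetical and far simpler than the real
allocator:

```{.c}
    #include <stdint.h>
    #include <stddef.h>

    #define RING_SLOTS 8
    #define SLOT_BYTES 256

    typedef struct
    {
      uint8_t slots[RING_SLOTS][SLOT_BYTES];
      uint8_t busy[RING_SLOTS];
      int head;                     /* next slot to hand out */
    } demo_ring_t;

    static void *
    demo_ring_alloc (demo_ring_t * r, size_t nbytes)
    {
      if (nbytes > SLOT_BYTES || r->busy[r->head])
        return 0;                   /* real code falls back to the heap here */
      void *p = r->slots[r->head];
      r->busy[r->head] = 1;
      r->head = (r->head + 1) % RING_SLOTS;
      return p;
    }

    static void
    demo_ring_free (demo_ring_t * r, void *p)
    {
      /* O(1): compute the slot index straight from the pointer */
      int slot = (int) (((uint8_t *) p - &r->slots[0][0]) / SLOT_BYTES);
      r->busy[slot] = 0;
    }
```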

Regardless of transport, binary API messages always follow a @ref msgbuf_t
header:

```{.c}
    typedef struct msgbuf_
    {
      unix_shared_memory_queue_t *q;
      u32 data_len;
      u32 gc_mark_timestamp;
      u8 data[0];
    } msgbuf_t;
```

This structure makes it easy to trace messages without having to
decode them - simply save `data_len` bytes - and allows
@ref vl_msg_api_free() to rapidly dispose of message buffers:

```{.c}
    void
    vl_msg_api_free (void *a)
    {
      msgbuf_t *rv;
      api_main_t *am = &api_main;

      rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

      /*
       * Here's the beauty of the scheme.  Only one proc/thread has
       * control of a given message buffer. To free a buffer, we just 
       * clear the queue field, and leave. No locks, no hits, no errors...
       */
      if (rv->q)
        {
          rv->q = 0;
          rv->gc_mark_timestamp = 0;
          return;
        }
      <snip>
    }
```
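
The header-recovery step - `a - offsetof (msgbuf_t, data)` - works because
the pointer handed to the application is always the `data` member, which sits
immediately after the header. A self-contained sketch, with hypothetical
`demo_*` names in place of the real types:

```{.c}
    #include <stddef.h>
    #include <stdint.h>

    /* Same shape as msgbuf_t: header fields, then the message body. */
    typedef struct demo_msgbuf_
    {
      void *q;
      uint32_t data_len;
      uint32_t gc_mark_timestamp;
      uint8_t data[0];              /* application sees only this */
    } demo_msgbuf_t;

    static demo_msgbuf_t *
    demo_header_of (void *a)
    {
      /* Step back from the body to the enclosing header. */
      return (demo_msgbuf_t *) (((uint8_t *) a)
                                - offsetof (demo_msgbuf_t, data));
    }
```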

## Message Tracing and Replay

It's extremely important that VPP can capture and replay sizeable binary API
traces. System-level issues involving hundreds of thousands of API
transactions can be re-run in a second or less. Partial replay allows one to
binary-search for the point where the wheels fall off. One can add scaffolding
to the data plane, to trigger when complex conditions obtain.

With binary API trace, print, and replay, system-level bug reports of the form
"after 300,000 API transactions, the VPP data-plane stopped forwarding
traffic, FIX IT!" can be solved offline.

More often than not, one discovers that a control-plane client
misprograms the data plane after a long time or under complex
circumstances. Without direct evidence, "it's a data-plane problem!"

See @ref src/vlibmemory/memory_vlib.c @ref vl_msg_api_process_file(),
and @ref src/vlibapi/api_shared.c. See also the debug CLI command "api trace".

## Client connection details

Establishing a binary API connection to VPP from a C-language client
is easy:

```{.c}
        int
        connect_to_vpe (char *client_name, int client_message_queue_length)
        {
          vat_main_t *vam = &vat_main;
          api_main_t *am = &api_main;

          if (vl_client_connect_to_vlib ("/vpe-api", client_name, 
                                    	client_message_queue_length) < 0)
            return -1;

          /* Memorize vpp's binary API message input queue address */
          vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
          /* And our client index */
          vam->my_client_index = am->my_client_index;
          return 0;
        }       
```

32 is a typical value for client_message_queue_length. VPP cannot
block when it needs to send an API message to a binary API client, and
the VPP-side binary API message handlers are very fast. When sending
asynchronous messages, make sure to scrape the binary API rx ring with
some enthusiasm.

### binary API message RX pthread

Calling @ref vl_client_connect_to_vlib spins up a binary API message RX
pthread:

```{.c}
        static void *
        rx_thread_fn (void *arg)
        {
          unix_shared_memory_queue_t *q;
          memory_client_main_t *mm = &memory_client_main;
          api_main_t *am = &api_main;

          q = am->vl_input_queue;

          /* So we can make the rx thread terminate cleanly */
          if (setjmp (mm->rx_thread_jmpbuf) == 0)
            {
              mm->rx_thread_jmpbuf_valid = 1;
              while (1)
        	{
        	  vl_msg_api_queue_handler (q);
        	}
            }
          pthread_exit (0);
        }       
```

To handle the binary API message queue yourself, use
@ref vl_client_connect_to_vlib_no_rx_pthread.

In turn, vl_msg_api_queue_handler(...) uses mutex/condvar signalling
to wake up, process VPP -> client traffic, then sleep. VPP supplies a
condvar broadcast when the VPP -> client API message queue transitions
from empty to nonempty.
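
That empty-to-nonempty wakeup protocol can be sketched stand-alone with a
plain pthread mutex/condvar pair. The `demo_*` queue below is a hypothetical
stand-in for `unix_shared_memory_queue_t`, which actually lives in shared
memory and carries variable-size messages:

```{.c}
    #include <pthread.h>

    typedef struct
    {
      pthread_mutex_t mutex;
      pthread_cond_t condvar;
      int depth;                    /* number of queued messages */
    } demo_queue_t;

    static void
    demo_enqueue (demo_queue_t * q)
    {
      pthread_mutex_lock (&q->mutex);
      q->depth++;
      if (q->depth == 1)            /* broadcast only on empty -> nonempty */
        pthread_cond_broadcast (&q->condvar);
      pthread_mutex_unlock (&q->mutex);
    }

    static void
    demo_dequeue (demo_queue_t * q)
    {
      pthread_mutex_lock (&q->mutex);
      while (q->depth == 0)         /* re-check: condvars can wake spuriously */
        pthread_cond_wait (&q->condvar, &q->mutex);
      q->depth--;
      pthread_mutex_unlock (&q->mutex);
    }
```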

VPP checks its own binary API input queue at a very high rate.  VPP
invokes message handlers in "process" context [aka cooperative
multitasking thread context] at a variable rate, depending on
data-plane packet processing requirements.

## Client disconnection details

To disconnect from VPP, call @ref vl_client_disconnect_from_vlib.
Please arrange to call this function if the client application
terminates abnormally. VPP makes every effort to hold a decent funeral
for dead clients, but VPP can't guarantee to free leaked memory in the
shared binary API segment.

## Sending binary API messages to VPP

The point of the exercise is to send binary API messages to VPP, and
to receive replies from VPP. Many VPP binary APIs comprise a client
request message, and a simple status reply. For example, to
set the admin status of an interface, one codes:

```{.c}
    vl_api_sw_interface_set_flags_t *mp;

    mp = vl_msg_api_alloc (sizeof (*mp));
    memset (mp, 0, sizeof (*mp));
    mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
    mp->client_index = api_main.my_client_index;
    mp->sw_if_index = clib_host_to_net_u32 (<interface-sw-if-index>);
    vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *)mp);
```

Key points:

* Use @ref vl_msg_api_alloc to allocate message buffers

* Allocated message buffers are not initialized, and must be presumed
  to contain trash.

* Don't forget to set the _vl_msg_id field!

* As of this writing, binary API message IDs and data are sent in
  network byte order

* The client-library global data structure @ref api_main keeps track
  of sufficient pointers and handles used to communicate with VPP

## Receiving binary API messages from VPP

Unless you've made other arrangements (see @ref
vl_client_connect_to_vlib_no_rx_pthread), *messages are received on a
separate rx pthread*. Synchronization with the client application main
thread is the responsibility of the application!

Set up message handlers about as follows:

```{.c}
    #define vl_typedefs		/* define message structures */
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_typedefs

    /* declare message handlers for each api */

    #define vl_endianfun		/* define endian conversion functions */
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_endianfun

    /* instantiate all the print functions we know about */
    #define vl_print(handle, ...)
    #define vl_printfun
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_printfun

    /* Define a list of all messages that the client handles */
    #define foreach_vpe_api_reply_msg                            \
       _(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

       static clib_error_t *
       my_api_hookup (vlib_main_t * vm)
       {
         api_main_t *am = &api_main;

       #define _(N,n)                                                  \
           vl_msg_api_set_handlers(VL_API_##N, #n,                     \
                                  vl_api_##n##_t_handler,              \
                                  vl_noop_handler,                     \
                                  vl_api_##n##_t_endian,               \
                                  vl_api_##n##_t_print,                \
                                  sizeof(vl_api_##n##_t), 1);
         foreach_vpe_api_reply_msg;
       #undef _

         return 0;
        }
```
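
The `foreach_.../_()` idiom above is a classic C X-macro: one list of
(ID, name) pairs, expanded several times under different definitions of
`_()`. A stand-alone sketch with hypothetical message names:

```{.c}
    #include <string.h>

    /* One authoritative list of (ID, name) pairs. */
    #define foreach_demo_msg                        \
    _(10, interface_up)                             \
    _(11, interface_down)

    /* Expansion 1: an enum of message IDs. */
    #define _(N,n) DEMO_API_##n = N,
    typedef enum { foreach_demo_msg } demo_msg_id_t;
    #undef _

    /* Expansion 2: an ID -> name mapping, much as
     * vl_msg_api_set_handlers populates its parallel vectors. */
    static const char *
    demo_msg_name (demo_msg_id_t id)
    {
    #define _(N,n) if (id == N) return #n;
      foreach_demo_msg
    #undef _
      return "unknown";
    }
```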

The key API used to establish message handlers is @ref
vl_msg_api_set_handlers, which sets values in multiple parallel
vectors in the @ref api_main_t structure. As of this writing, not all
vector element values can be set through the API. You'll see sporadic
API message registrations followed by minor adjustments of this form:

```{.c}
    /*
     * Thread-safe API messages
     */
    am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
    am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;
```