Establishing a binary API connection to VPP from a C-language client is easy:

.. code-block:: C

    int
    connect_to_vpe (char *client_name, int client_message_queue_length)
    {
      vat_main_t *vam = &vat_main;
      api_main_t *am = &api_main;

      if (vl_client_connect_to_vlib ("/vpe-api", client_name,
                                     client_message_queue_length) < 0)
        return -1;

      /* Memorize vpp's binary API message input queue address */
      vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
      /* And our client index */
      vam->my_client_index = am->my_client_index;
      return 0;
    }

32 is a typical value for client_message_queue_length. VPP *cannot* block
when it needs to send an API message to a binary API client. The VPP-side
binary API message handlers are very fast. So, when sending asynchronous
messages, make sure to scrape the binary API rx ring with some enthusiasm!

**Binary API message RX pthread**

Calling `vl_client_connect_to_vlib
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a6654b42c91be33bfb6a4b4bfd2327920>`_
spins up a binary API message RX pthread:

.. code-block:: C

    static void *
    rx_thread_fn (void *arg)
    {
      svm_queue_t *q;
      memory_client_main_t *mm = &memory_client_main;
      api_main_t *am = &api_main;
      int i;

      q = am->vl_input_queue;

      /* So we can make the rx thread terminate cleanly */
      if (setjmp (mm->rx_thread_jmpbuf) == 0)
        {
          mm->rx_thread_jmpbuf_valid = 1;
          /*
           * Find an unused slot in the per-cpu-mheaps array,
           * and grab it for this thread. We need to be able to
           * push/pop the thread heap without affecting other thread(s).
           */
          if (__os_thread_index == 0)
            {
              for (i = 0; i < ARRAY_LEN (clib_per_cpu_mheaps); i++)
                {
                  if (clib_per_cpu_mheaps[i] == 0)
                    {
                      /* Copy the main thread mheap pointer */
                      clib_per_cpu_mheaps[i] = clib_per_cpu_mheaps[0];
                      __os_thread_index = i;
                      break;
                    }
                }
              ASSERT (__os_thread_index > 0);
            }
          while (1)
            vl_msg_api_queue_handler (q);
        }
      pthread_exit (0);
    }

To handle the binary API message queue yourself, use
`vl_client_connect_to_vlib_no_rx_pthread
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a11b9577297106c57c0783b96ab190c36>`_.

**Queue non-empty signalling**

vl_msg_api_queue_handler(...) uses mutex/condvar signalling to wake up,
process VPP -> client traffic, then sleep. VPP supplies a condvar broadcast
when the VPP -> client API message queue transitions from empty to nonempty.

VPP checks its own binary API input queue at a very high rate. VPP invokes
message handlers in "process" context [aka cooperative multitasking thread
context] at a variable rate, depending on data-plane packet processing
requirements.

Client disconnection details
____________________________

To disconnect from VPP, call `vl_client_disconnect_from_vlib
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a82c9ba6e7ead8362ae2175eefcf2fd12>`_.
Please arrange to call this function even if the client application
terminates abnormally. VPP makes every effort to hold a decent funeral for
dead clients, but VPP can't guarantee to free leaked memory in the shared
binary API segment.
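Putting connection setup and teardown together, the sketch below shows a
minimal client lifecycle. It assumes the connect_to_vpe() wrapper shown
above is linked into the client; the client name, queue depth, and include
paths are illustrative and may vary by VPP version.

.. code-block:: C

    #include <stdio.h>
    #include <vat/vat.h>           /* vat_main_t */
    #include <vlibmemory/api.h>    /* memory-client API library */

    int
    main (int argc, char **argv)
    {
      /* Connect via the wrapper above; 32 is a typical queue depth */
      if (connect_to_vpe ("example-client", 32) < 0)
        {
          fprintf (stderr, "Couldn't connect to vpe, exiting...\n");
          return -1;
        }

      /* ... send binary API requests and consume replies here ... */

      /* Always disconnect, even on error paths, so VPP can reclaim
         the client's shared-memory API resources */
      vl_client_disconnect_from_vlib ();
      return 0;
    }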
Sending binary API messages to VPP
__________________________________

The point of the exercise is to send binary API messages to VPP, and to
receive replies from VPP. Many VPP binary APIs comprise a client request
message and a simple status reply. For example, to set the admin status of
an interface:

.. code-block:: C

    vl_api_sw_interface_set_flags_t *mp;

    mp = vl_msg_api_alloc (sizeof (*mp));
    memset (mp, 0, sizeof (*mp));
    mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
    mp->client_index = api_main.my_client_index;
    mp->sw_if_index = clib_host_to_net_u32 (<interface-sw-if-index>);
    vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *) mp);

Key points:

* Use `vl_msg_api_alloc
  <https://docs.fd.io/vpp/18.11/dc/d5a/memory__shared_8h.html#a109ff1e95ebb2c968d43c100c4a1c55a>`_
  to allocate message buffers
* Allocated message buffers are not initialized, and must be presumed to
  contain trash
* Don't forget to set the _vl_msg_id field!
* As of this writing, binary API message IDs and data are sent in network
  byte order
* The client-library global data structure `api_main
  <https://docs.fd.io/vpp/18.11/d6/dd1/api__shared_8c.html#af58e3e46b569573e9622b826b2f47a22>`_
  keeps track of sufficient pointers and handles used to communicate with
  VPP

Receiving binary API messages from VPP
______________________________________

Unless you've made other arrangements (see
`vl_client_connect_to_vlib_no_rx_pthread
<https://docs.fd.io/vpp/18.11/da/d25/memory__client_8h.html#a11b9577297106c57c0783b96ab190c36>`_),
*messages are received on a separate rx pthread*. Synchronization with the
client application main thread is the responsibility of the application!

Set up message handlers about as follows:

.. code-block:: C

    #define vl_typedefs         /* define message structures */
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_typedefs

    /* declare endian swappers for each api message */
    #define vl_endianfun
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_endianfun

    /* instantiate all the print functions we know about */
    #define vl_print(handle, ...)
    #define vl_printfun
    #include <vpp/api/vpe_all_api_h.h>
    #undef vl_printfun

    /* Define a list of all messages that the client handles */
    #define foreach_vpe_api_reply_msg                           \
      _(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

    static clib_error_t *
    my_api_hookup (vlib_main_t * vm)
    {
      api_main_t *am = &api_main;

    #define _(N,n)                                              \
      vl_msg_api_set_handlers(VL_API_##N, #n,                   \
                              vl_api_##n##_t_handler,           \
                              vl_noop_handler,                  \
                              vl_api_##n##_t_endian,            \
                              vl_api_##n##_t_print,             \
                              sizeof(vl_api_##n##_t), 1);
      foreach_vpe_api_reply_msg;
    #undef _

      return 0;
    }

The key API used to establish message handlers is `vl_msg_api_set_handlers
<https://docs.fd.io/vpp/18.11/d6/dd1/api__shared_8c.html#aa8a8e1f3876ec1a02f283c1862ecdb7a>`_,
which sets values in multiple parallel vectors in the `api_main_t
<https://docs.fd.io/vpp/18.11/dd/db2/structapi__main__t.html>`_ structure.
As of this writing, not all vector element values can be set through the
API. You'll see sporadic API message registrations followed by minor
adjustments of this form:

.. code-block:: C

    /*
     * Thread-safe API messages
     */
    am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
    am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;
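One such handler, matching the registration table above, might look like
the following sketch. The retval / result_ready bookkeeping on vat_main_t
follows the VPP API test client's convention and is shown only as one way
of handing results back to the main thread.

.. code-block:: C

    static void
    vl_api_sw_interface_set_flags_reply_t_handler
      (vl_api_sw_interface_set_flags_reply_t * mp)
    {
      vat_main_t *vam = &vat_main;

      /* Reply fields arrive in network byte order, like requests */
      i32 retval = clib_net_to_host_u32 (mp->retval);

      if (retval != 0)
        clib_warning ("sw_interface_set_flags failed: %d", retval);

      /* Remember: this runs on the rx pthread. Hand the result to the
         main thread using whatever synchronization the application
         provides; this sketch simply records it. */
      vam->retval = retval;
      vam->result_ready = 1;
    }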
API message numbering in plugins
--------------------------------

Binary API message numbering in plugins relies on VPP to issue a block of
message IDs for the plugin to use:

.. code-block:: C

    static clib_error_t *
    my_init (vlib_main_t * vm)
    {
      my_main_t *mm = &my_main;
      u8 *name;

      name = format (0, "myplugin_%08x%c", api_version, 0);

      /* Ask for a correctly-sized block of API message decode slots */
      mm->msg_id_base = vl_msg_api_get_msg_ids
        ((char *) name, VL_MSG_FIRST_AVAILABLE);

      return 0;
    }

Control-plane code uses the vl_client_get_first_plugin_msg_id (...) API to
recover the message-ID block base:

.. code-block:: C

    /* Ask the vpp engine for the first assigned message-id */
    name = format (0, "myplugin_%08x%c", api_version, 0);
    sm->msg_id_base = vl_client_get_first_plugin_msg_id ((char *) name);

It's a fairly common error to forget to add msg_id_base when registering
message handlers, or when sending messages. Using macros from
.../src/vlibapi/api_helper_macros.h can automate the process, but remember
to #define REPLY_MSG_ID_BASE before #including the file:

.. code-block:: C

    #define REPLY_MSG_ID_BASE mm->msg_id_base
    #include <vlibapi/api_helper_macros.h>
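To make the msg_id_base rule concrete, here is a purely illustrative sketch
of sending a plugin message. The message type
vl_api_my_plugin_enable_disable_t, its fields, and the my_main / mm names
are hypothetical; the point is adding msg_id_base to the plugin-relative
message ID.

.. code-block:: C

    vl_api_my_plugin_enable_disable_t *mp;   /* hypothetical plugin message */
    my_main_t *mm = &my_main;

    mp = vl_msg_api_alloc (sizeof (*mp));
    memset (mp, 0, sizeof (*mp));

    /* Plugin message IDs are relative to the block VPP assigned, so add
       msg_id_base; omitting it makes VPP dispatch the message to some
       other handler entirely */
    mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_MY_PLUGIN_ENABLE_DISABLE
                                           + mm->msg_id_base);
    mp->client_index = api_main.my_client_index;
    mp->enable_disable = 1;

    vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *) mp);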