# Binary API support {#api_doc}

VPP provides a binary API scheme to allow a wide variety of client
codes to program data-plane tables. As of this writing, there are
hundreds of binary APIs.

Messages are defined in `*.api` files. Today, there are about 50 api
files, with more arriving as folks add programmable features. The API
file compiler sources reside in @ref src/tools/vppapigen.

From @ref src/vnet/interface.api, here's a typical request/response
message definition:

```{.c}
autoreply define sw_interface_set_flags
{
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  /* 1 = up, 0 = down */
  u8 admin_up_down;
};
```

To a first approximation, the API compiler renders this definition
into `build-root/.../vpp/include/vnet/interface.api.h` as follows:

```{.c}
/****** Message ID / handler enum ******/

#ifdef vl_msg_id
vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS,
          vl_api_sw_interface_set_flags_t_handler)
vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS_REPLY,
          vl_api_sw_interface_set_flags_reply_t_handler)
#endif

/****** Message names ******/

#ifdef vl_msg_name
vl_msg_name(vl_api_sw_interface_set_flags_t, 1)
vl_msg_name(vl_api_sw_interface_set_flags_reply_t, 1)
#endif

/****** Message name, crc list ******/

#ifdef vl_msg_name_crc_list
#define foreach_vl_msg_name_crc_interface \
_(VL_API_SW_INTERFACE_SET_FLAGS, sw_interface_set_flags, f890584a) \
_(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply, dfbf3afa) \
#endif

/****** Typedefs *****/

#ifdef vl_typedefs
typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags {
    u16 _vl_msg_id;
    u32 client_index;
    u32 context;
    u32 sw_if_index;
    u8 admin_up_down;
}) vl_api_sw_interface_set_flags_t;

typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags_reply {
    u16 _vl_msg_id;
    u32 context;
    i32 retval;
}) vl_api_sw_interface_set_flags_reply_t;
...
#endif /* vl_typedefs */
```

To change the admin state of an interface, a binary API client sends
a @ref vl_api_sw_interface_set_flags_t to VPP, which will respond
with a @ref vl_api_sw_interface_set_flags_reply_t message.

Multiple layers of software, transport types, and shared libraries
implement a variety of features:

* API message allocation, tracing, pretty-printing, and replay.
* Message transport via global shared memory, pairwise/private shared
  memory, and sockets.
* Barrier synchronization of worker threads across thread-unsafe
  message handlers.

Correctly-coded message handlers know nothing about the transport
used to deliver messages to/from VPP. It's reasonably straightforward
to use multiple API message transport types simultaneously.

For historical reasons, binary API messages are (putatively) sent in
network byte order. As of this writing, we're seriously considering
whether that choice makes sense.

## Message Allocation

Since binary API messages are always processed in order, we allocate
messages using a ring allocator whenever possible. This scheme is
extremely fast when compared with a traditional memory allocator, and
doesn't cause heap fragmentation. See @ref
src/vlibmemory/memory_shared.c @ref vl_msg_api_alloc_internal().
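To make the allocation scheme and the network-byte-order convention
concrete, here's an illustrative sketch of how a client might
allocate, populate, and send the sw_interface_set_flags message shown
above. The helper function `send_sw_interface_set_flags` is
hypothetical; the sketch assumes a connected VAT-style client with the
`vam->vl_input_queue` and `vam->my_client_index` fields set up as in
the connection example under "Client connection details" below.

```{.c}
/*
 * Illustrative sketch only: send_sw_interface_set_flags is a
 * hypothetical helper, and a connected VAT-style client is assumed.
 */
static void
send_sw_interface_set_flags (vat_main_t * vam, u32 sw_if_index,
                             u8 admin_up_down)
{
  vl_api_sw_interface_set_flags_t *mp;

  /* Allocate from the (ring) message allocator */
  mp = vl_msg_api_alloc (sizeof (*mp));
  memset (mp, 0, sizeof (*mp));

  /* Multi-byte fields travel in network byte order */
  mp->_vl_msg_id = ntohs (VL_API_SW_INTERFACE_SET_FLAGS);
  mp->client_index = vam->my_client_index;
  mp->context = htonl (0xdeadbeef); /* echoed back in the reply */
  mp->sw_if_index = htonl (sw_if_index);
  mp->admin_up_down = admin_up_down;

  /* Enqueue the message on VPP's binary API input queue */
  vl_msg_api_send_shmem (vam->vl_input_queue, (u8 *) & mp);
}
```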
Regardless of transport, binary API messages always follow a @ref
msgbuf_t header:

```{.c}
typedef struct msgbuf_
{
  unix_shared_memory_queue_t *q;
  u32 data_len;
  u32 gc_mark_timestamp;
  u8 data[0];
} msgbuf_t;
```

This structure makes it easy to trace messages without having to
decode them - simply save data_len bytes - and allows @ref
vl_msg_api_free() to rapidly dispose of message buffers:

```{.c}
void
vl_msg_api_free (void *a)
{
  msgbuf_t *rv;
  api_main_t *am = &api_main;

  rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

  /*
   * Here's the beauty of the scheme. Only one proc/thread has
   * control of a given message buffer. To free a buffer, we just
   * clear the queue field, and leave. No locks, no hits, no errors...
   */
  if (rv->q)
    {
      rv->q = 0;
      rv->gc_mark_timestamp = 0;
      return;
    }
}
```

## Message Tracing and Replay

It's extremely important that VPP can capture and replay sizeable
binary API traces. System-level issues involving hundreds of
thousands of API transactions can be re-run in a second or less.
Partial replay allows one to binary-search for the point where the
wheels fall off. One can add scaffolding to the data plane to trigger
when complex conditions obtain.

With binary API trace, print, and replay, system-level bug reports of
the form "after 300,000 API transactions, the VPP data plane stopped
forwarding traffic, FIX IT!" can be solved offline.

More often than not, one discovers that a control-plane client
misprograms the data plane after a long time or under complex
circumstances. Without direct evidence: "it's a data-plane problem!"

See @ref src/vlibmemory/memory_vlib.c @ref vl_msg_api_process_file(),
and @ref src/vlibapi/api_shared.c. See also the debug CLI command
"api trace".

## Client connection details

Establishing a binary API connection to VPP from a C-language client
is easy:

```{.c}
int
connect_to_vpe (char *client_name, int client_message_queue_length)
{
  vat_main_t *vam = &vat_main;
  api_main_t *am = &api_main;

  if (vl_client_connect_to_vlib ("/vpe-api", client_name,
                                 client_message_queue_length) < 0)
    return -1;

  /* Memorize vpp's binary API message input queue address */
  vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
  /* And our client index */
  vam->my_client_index = am->my_client_index;
  return 0;
}
```

32 is a typical value for client_message_queue_length. VPP cannot
block when it needs to send an API message to a binary API client,
and the VPP-side binary API message handlers are very fast. When
sending asynchronous messages, make sure to scrape the binary API rx
ring with some enthusiasm.

### binary API message RX pthread
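On the client side, @ref vl_client_connect_to_vlib spins up a receive
pthread that dequeues messages from the client's input queue and
dispatches them to the registered message handlers. As an
illustrative sketch, again assuming the VAT-style conventions used
above (the `vam->retval` and `vam->result_ready` fields are that
client's own bookkeeping, not part of the API machinery), the reply
handler for the message shown earlier might look like this:

```{.c}
/*
 * Illustrative sketch only: a client-side reply handler, dispatched
 * from the client's RX pthread. VAT-style conventions assumed.
 */
static void vl_api_sw_interface_set_flags_reply_t_handler
  (vl_api_sw_interface_set_flags_reply_t * mp)
{
  vat_main_t *vam = &vat_main;

  /* retval arrives in network byte order */
  vam->retval = ntohl (mp->retval);
  /* Tell the main thread the reply has arrived */
  vam->result_ready = 1;
}
```

Handlers of this kind are registered with @ref
vl_msg_api_set_handlers(), typically via generated foreach macros.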