.. _api_doc:

Writing API handlers
====================

VPP provides a binary API scheme that allows a wide variety of clients
to program data-plane tables. As of this writing, there are hundreds of
binary APIs.

Messages are defined in ``*.api`` files. Today, there are about 50 api
files, with more arriving as folks add programmable features. The API
file compiler sources reside in @ref src/tools/vppapigen.

From @ref src/vnet/interface.api, here’s a typical request/response
message definition:

.. code:: c

        autoreply define sw_interface_set_flags
        {
          u32 client_index;
          u32 context;
          u32 sw_if_index;
          /* 1 = up, 0 = down */
          u8 admin_up_down;
        };

To a first approximation, the API compiler renders this definition into
``build-root/.../vpp/include/vnet/interface.api.h`` as follows:

.. code:: c

       /****** Message ID / handler enum ******/
       #ifdef vl_msg_id
       vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS, vl_api_sw_interface_set_flags_t_handler)
       vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, vl_api_sw_interface_set_flags_reply_t_handler)
       #endif

       /****** Message names ******/
       #ifdef vl_msg_name
       vl_msg_name(vl_api_sw_interface_set_flags_t, 1)
       vl_msg_name(vl_api_sw_interface_set_flags_reply_t, 1)
       #endif

       /****** Message name, crc list ******/
       #ifdef vl_msg_name_crc_list
       #define foreach_vl_msg_name_crc_interface \
       _(VL_API_SW_INTERFACE_SET_FLAGS, sw_interface_set_flags, f890584a) \
       _(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply, dfbf3afa) \
       #endif

       /****** Typedefs *****/
       #ifdef vl_typedefs
       typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags {
           u16 _vl_msg_id;
           u32 client_index;
           u32 context;
           u32 sw_if_index;
           u8 admin_up_down;
       }) vl_api_sw_interface_set_flags_t;

       typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags_reply {
           u16 _vl_msg_id;
           u32 context;
           i32 retval;
       }) vl_api_sw_interface_set_flags_reply_t;

       ...
       #endif /* vl_typedefs */

To change the admin state of an interface, a binary api client sends a
@ref vl_api_sw_interface_set_flags_t to VPP, which will respond with a
@ref vl_api_sw_interface_set_flags_reply_t message.

Multiple layers of software, transport types, and shared libraries
implement a variety of features:

-  API message allocation, tracing, pretty-printing, and replay.
-  Message transport via global shared memory, pairwise/private shared
   memory, and sockets.
-  Barrier synchronization of worker threads across thread-unsafe
   message handlers.

Correctly-coded message handlers know nothing about the transport used
to deliver messages to/from VPP. It’s reasonably straightforward to use
multiple API message transport types simultaneously.

For historical reasons, binary api messages are (putatively) sent in
network byte order. As of this writing, we’re seriously considering
whether that choice makes sense.

Message Allocation
------------------

Since binary API messages are always processed in order, we allocate
messages using a ring allocator whenever possible. This scheme is
extremely fast when compared with a traditional memory allocator, and
doesn’t cause heap fragmentation. See @ref
src/vlibmemory/memory_shared.c @ref vl_msg_api_alloc_internal().

Regardless of transport, binary api messages always follow a @ref
msgbuf_t header:

.. code:: c

       typedef struct msgbuf_
       {
         unix_shared_memory_queue_t *q;
         u32 data_len;
         u32 gc_mark_timestamp;
         u8 data[0];
       } msgbuf_t;

This structure makes it easy to trace messages without having to decode
them - simply save data_len bytes - and allows @ref vl_msg_api_free() to
rapidly dispose of message buffers:

.. code:: c

       void
       vl_msg_api_free (void *a)
       {
         msgbuf_t *rv;
         api_main_t *am = &api_main;

         rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

         /*
          * Here's the beauty of the scheme.  Only one proc/thread has
          * control of a given message buffer. To free a buffer, we just
          * clear the queue field, and leave. No locks, no hits, no errors...
          */
         if (rv->q)
           {
             rv->q = 0;
             rv->gc_mark_timestamp = 0;
             return;
           }
         <snip>
       }
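
Returning to the tracing point above: ``data_len`` lets a tracer copy a
message verbatim without decoding it. A minimal sketch
(``copy_msg_for_trace`` is a hypothetical helper, not code from the
tree), assuming ``data_len`` is stored in network byte order by the
allocator:

.. code:: c

       /* Hypothetical sketch: duplicate a binary API message using only
        * the msgbuf_t header, without decoding the message body. */
       static u8 *
       copy_msg_for_trace (void *msg)
       {
         msgbuf_t *header = (msgbuf_t *) (((u8 *) msg) - offsetof (msgbuf_t, data));
         u32 nbytes = clib_net_to_host_u32 (header->data_len); /* assumption: net order */
         u8 *copy = clib_mem_alloc (nbytes);

         clib_memcpy (copy, msg, nbytes);
         return copy;
       }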

Message Tracing and Replay
--------------------------

It’s extremely important that VPP can capture and replay sizeable binary
API traces. System-level issues involving hundreds of thousands of API
transactions can be re-run in a second or less. Partial replay allows
one to binary-search for the point where the wheels fall off. One can
add scaffolding to the data plane, to trigger when complex conditions
obtain.

With binary API trace, print, and replay, system-level bug reports of
the form “after 300,000 API transactions, the VPP data-plane stopped
forwarding traffic, FIX IT!” can be solved offline.

More often than not, one discovers that a control-plane client
misprograms the data plane after a long time or under complex
circumstances. Without direct evidence, “it’s a data-plane problem!”

See @ref src/vlibmemory/memory_vlib.c @ref vl_msg_api_process_file(),
and @ref src/vlibapi/api_shared.c. See also the debug CLI command “api
trace”.

Client connection details
-------------------------

Establishing a binary API connection to VPP from a C-language client is
easy:

.. code:: c

           int
           connect_to_vpe (char *client_name, int client_message_queue_length)
           {
             vat_main_t *vam = &vat_main;
             api_main_t *am = &api_main;

             if (vl_client_connect_to_vlib ("/vpe-api", client_name,
                                           client_message_queue_length) < 0)
               return -1;

             /* Memorize vpp's binary API message input queue address */
             vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
             /* And our client index */
             vam->my_client_index = am->my_client_index;
             return 0;
           }

32 is a typical value for client_message_queue_length. VPP cannot block
when it needs to send an API message to a binary API client, and the
VPP-side binary API message handlers are very fast. When sending
asynchronous messages, make sure to scrape the binary API rx ring with
some enthusiasm.

binary API message RX pthread
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Calling @ref vl_client_connect_to_vlib spins up a binary API message RX
pthread:

.. code:: c

           static void *
           rx_thread_fn (void *arg)
           {
             unix_shared_memory_queue_t *q;
             memory_client_main_t *mm = &memory_client_main;
             api_main_t *am = &api_main;

             q = am->vl_input_queue;

             /* So we can make the rx thread terminate cleanly */
             if (setjmp (mm->rx_thread_jmpbuf) == 0)
               {
                 mm->rx_thread_jmpbuf_valid = 1;
                 while (1)
               {
                 vl_msg_api_queue_handler (q);
               }
               }
             pthread_exit (0);
           }

To handle the binary API message queue yourself, use @ref
vl_client_connect_to_vlib_no_rx_pthread.

In turn, vl_msg_api_queue_handler(…) uses mutex/condvar signalling to
wake up, process VPP -> client traffic, then sleep. VPP supplies a
condvar broadcast when the VPP -> client API message queue transitions
from empty to nonempty.
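
If you opt out of the rx pthread, your application owns the dispatch
loop. A minimal sketch, reusing the same queue handler the library
thread would otherwise call (``my_dispatch_loop`` and ``app_is_running``
are hypothetical names for illustration):

.. code:: c

           /* Sketch: client-owned dispatch loop when no rx pthread exists. */
           static volatile int app_is_running = 1;

           static void
           my_dispatch_loop (void)
           {
             api_main_t *am = &api_main;
             unix_shared_memory_queue_t *q = am->vl_input_queue;

             while (app_is_running)
               vl_msg_api_queue_handler (q); /* sleeps, dispatches, returns */
           }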

VPP checks its own binary API input queue at a very high rate. VPP
invokes message handlers in “process” context [aka cooperative
multitasking thread context] at a variable rate, depending on data-plane
packet processing requirements.

Client disconnection details
----------------------------

To disconnect from VPP, call @ref vl_client_disconnect_from_vlib. Please
arrange to call this function even if the client application terminates
abnormally. VPP makes every effort to hold a decent funeral for dead
clients, but VPP can’t guarantee to free leaked memory in the shared
binary API segment.
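
For example, a client might register a cleanup handler so that an
orderly disconnect happens on most exit paths. A minimal sketch (the
``atexit``-based wiring is illustrative, not required by the library):

.. code:: c

           #include <stdlib.h>

           static void
           cleanup_api_connection (void)
           {
             /* Tell VPP we are going away so it can reclaim per-client state. */
             vl_client_disconnect_from_vlib ();
           }

           int
           main (int argc, char **argv)
           {
             if (connect_to_vpe ("my-client", 32) < 0)
               return 1;

             /* Runs on normal exit and on exit() from error paths; a SIGKILL
              * or crash still relies on VPP's best-effort dead-client cleanup. */
             atexit (cleanup_api_connection);

             /* ... application logic ... */
             return 0;
           }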

Sending binary API messages to VPP
----------------------------------

The point of the exercise is to send binary API messages to VPP, and to
receive replies from VPP. Many VPP binary APIs comprise a client request
message, and a simple status reply. For example, to set the admin status
of an interface, one codes:

.. code:: c

       vl_api_sw_interface_set_flags_t *mp;

       mp = vl_msg_api_alloc (sizeof (*mp));
       memset (mp, 0, sizeof (*mp));
       mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
       mp->client_index = api_main.my_client_index;
       mp->sw_if_index = clib_host_to_net_u32 (<interface-sw-if-index>);
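       mp->admin_up_down = 1;      /* 1 = up, 0 = down */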
       vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *)mp);

Key points:

-  Use @ref vl_msg_api_alloc to allocate message buffers

-  Allocated message buffers are not initialized, and must be presumed
   to contain trash.

-  Don’t forget to set the \_vl_msg_id field!

-  As of this writing, binary API message IDs and data are sent in
   network byte order

-  The client-library global data structure @ref api_main keeps track of
   sufficient pointers and handles used to communicate with VPP

Receiving binary API messages from VPP
--------------------------------------

Unless you’ve made other arrangements (see @ref
vl_client_connect_to_vlib_no_rx_pthread), *messages are received on a
separate rx pthread*. Synchronization with the client application main
thread is the responsibility of the application!

Set up message handlers about as follows:

.. code:: c

       #define vl_typedefs     /* define message structures */
       #include <vpp/api/vpe_all_api_h.h>
       #undef vl_typedefs

       /* declare message handlers for each api */

       #define vl_endianfun        /* define endian-swap functions */
       #include <vpp/api/vpe_all_api_h.h>
       #undef vl_endianfun

       /* instantiate all the print functions we know about */
       #define vl_print(handle, ...)
       #define vl_printfun
       #include <vpp/api/vpe_all_api_h.h>
       #undef vl_printfun

       /* Define a list of all messages that the client handles */
       #define foreach_vpe_api_reply_msg                            \
          _(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

          static clib_error_t *
          my_api_hookup (vlib_main_t * vm)
          {
            api_main_t *am = &api_main;

          #define _(N,n)                                                  \
              vl_msg_api_set_handlers(VL_API_##N, #n,                     \
                                     vl_api_##n##_t_handler,              \
                                     vl_noop_handler,                     \
                                     vl_api_##n##_t_endian,               \
                                     vl_api_##n##_t_print,                \
                                     sizeof(vl_api_##n##_t), 1);
         foreach_vpe_api_reply_msg;
          #undef _

            return 0;
           }
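
For completeness, a reply handler for the message registered above might
look roughly like this. This is a sketch in the style of the VAT client;
stashing the status in ``vam->retval`` / ``vam->result_ready`` is one
common convention, not a requirement:

.. code:: c

       static void
       vl_api_sw_interface_set_flags_reply_t_handler
         (vl_api_sw_interface_set_flags_reply_t * mp)
       {
         vat_main_t *vam = &vat_main;

         /* Reply fields arrive in network byte order */
         vam->retval = ntohl (mp->retval);
         vam->result_ready = 1;
       }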

The key API used to establish message handlers is @ref
vl_msg_api_set_handlers, which sets values in multiple parallel vectors
in the @ref api_main_t structure. As of this writing, not all vector
element values can be set through the API. You’ll see sporadic API
message registrations followed by minor adjustments of this form:

.. code:: c

       /*
        * Thread-safe API messages
        */
       am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
       am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;