Vector Packet Processing
========================

## Introduction

The VPP platform is an extensible framework that provides out-of-the-box
production quality switch/router functionality. It is the open source version
of Cisco's Vector Packet Processing (VPP) technology: a high performance,
packet-processing stack that can run on commodity CPUs.

The benefits of this implementation of VPP are its high performance, proven
technology, its modularity and flexibility, and rich feature set.

For more information on VPP and its features please visit the
[FD.io website](http://fd.io/) and
[What is VPP?](https://wiki.fd.io/view/VPP/What_is_VPP%3F) pages.


## Changes

Details of the changes leading up to this version of VPP can be found under
@ref release_notes.


## Directory layout

| Directory name         | Description                                 |
| ---------------------- | ------------------------------------------- |
|      build-data        | Build metadata                              |
|      build-root        | Build output directory                      |
|      doxygen           | Documentation generator configuration       |
|      dpdk              | DPDK patches and build infrastructure       |
| @ref extras/libmemif   | Client library for memif                    |
| @ref src/examples      | VPP example code                            |
| @ref src/plugins       | VPP bundled plugins directory               |
| @ref src/svm           | Shared virtual memory allocation library    |
|      src/tests         | Standalone tests (not part of test harness) |
|      src/vat           | VPP API test program                        |
| @ref src/vlib          | VPP application library                     |
| @ref src/vlibapi       | VPP API library                             |
| @ref src/vlibmemory    | VPP Memory management                       |
| @ref src/vnet          | VPP networking                              |
| @ref src/vpp           | VPP application                             |
| @ref src/vpp-api       | VPP application API bindings                |
| @ref src/vppinfra      | VPP core library                            |
| @ref src/vpp/api       | Not-yet-relocated API bindings              |
|      test              | Unit tests and Python test harness          |

## Getting started

In general anyone interested in building, developing or running VPP should
consult the [VPP wiki](https://wiki.fd.io/view/VPP) for more complete
documentation.

In particular, readers are recommended to take a look at [Pulling, Building,
Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code)
which provides extensive step-by-step coverage of the topic.

For the impatient, some salient information is distilled below.


### Quick-start: On an existing Linux host

To install system dependencies, build VPP and then install it, simply run the
build script. This should be performed by a non-privileged user with `sudo`
access from the project base directory:

    ./extras/vagrant/build.sh

If you want a more fine-grained approach because you intend to do some
development work, the `Makefile` in the root directory of the source tree
provides several convenience shortcuts as `make` targets that may be of
interest. To see the available targets run:

    make
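
For example, the target list printed by `make` typically includes entries
along the following lines (a non-exhaustive sketch; the output of `make`
itself is authoritative and may differ between releases):

    make install-dep    # install build dependencies
    make build          # build a debug binary
    make run            # run the debug binary
    make test           # run the test framework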


### Quick-start: Vagrant

The directory `extras/vagrant` contains a `Vagrantfile` and supporting
scripts to bootstrap a working VPP inside a Vagrant-managed virtual machine.
This VM can then be used to test concepts with VPP or as a development
platform to extend VPP. Some obvious caveats apply when using a VM for VPP
since its performance will never match that of bare metal; if your work is
timing or performance sensitive, consider using bare metal in addition to,
or instead of, the VM.

For this to work you will need a working installation of Vagrant. Instructions
for this can be found on the
[Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant).
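
With Vagrant installed, the usual flow is roughly the following (a sketch;
the wiki page above is the authoritative reference):

    cd extras/vagrant
    vagrant up
    vagrant ssh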


## More information

Several modules provide documentation; see @subpage user_doc for
end-user-oriented information and @subpage dev_doc for developer notes.

Visit the [VPP wiki](https://wiki.fd.io/view/VPP) for details on more
advanced building strategies and other development notes.


## Test Framework

There is PyDoc-generated documentation available for the VPP test framework.
See @ref test_framework_doc for details.
/*
 * Copyright (c) 2015-2019 Cisco and/or its affiliates.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include <svm/ssvm.h>
#include <svm/svm_common.h>

typedef int (*init_fn) (ssvm_private_t *);
typedef void (*delete_fn) (ssvm_private_t *);

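/* Per-segment-type constructor/destructor tables, indexed by
 * ssvm_segment_type_t (shm, memfd, private). */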
static init_fn master_init_fns[SSVM_N_SEGMENT_TYPES] =
  { ssvm_master_init_shm, ssvm_master_init_memfd, ssvm_master_init_private };
static init_fn slave_init_fns[SSVM_N_SEGMENT_TYPES] =
  { ssvm_slave_init_shm, ssvm_slave_init_memfd, ssvm_slave_init_private };
static delete_fn delete_fns[SSVM_N_SEGMENT_TYPES] =
  { ssvm_delete_shm, ssvm_delete_memfd, ssvm_delete_private };

int
ssvm_master_init_shm (ssvm_private_t * ssvm)
{
  int ssvm_fd;
#if USE_DLMALLOC == 0
  int mh_flags = MHEAP_FLAG_DISABLE_VM | MHEAP_FLAG_THREAD_SAFE;
#endif
  clib_mem_vm_map_t mapa = { 0 };
  u8 junk = 0, *ssvm_filename;
  ssvm_shared_header_t *sh;
  uword page_size, requested_va = 0;
  void *oldheap;

  if (ssvm->ssvm_size == 0)
    return SSVM_API_ERROR_NO_SIZE;

  if (CLIB_DEBUG > 1)
    clib_warning ("[%d] creating segment '%s'", getpid (), ssvm->name);

  ASSERT (vec_c_string_is_terminated (ssvm->name));
  ssvm_filename = format (0, "/dev/shm/%s%c", ssvm->name, 0);
  unlink ((char *) ssvm_filename);
  vec_free (ssvm_filename);

  ssvm_fd = shm_open ((char *) ssvm->name, O_RDWR | O_CREAT | O_EXCL, 0777);
  if (ssvm_fd < 0)
    {
      clib_unix_warning ("create segment '%s'", ssvm->name);
      return SSVM_API_ERROR_CREATE_FAILURE;
    }

  if (fchmod (ssvm_fd, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP) < 0)
    clib_unix_warning ("ssvm segment chmod");
  if (svm_get_root_rp ())
    {
      /* TODO: is this really needed? */
      svm_main_region_t *smr = svm_get_root_rp ()->data_base;
      if (fchown (ssvm_fd, smr->uid, smr->gid) < 0)
	clib_unix_warning ("ssvm segment chown");
    }

  if (lseek (ssvm_fd, ssvm->ssvm_size, SEEK_SET) < 0)
    {
      clib_unix_warning ("lseek");
      close (ssvm_fd);
      return SSVM_API_ERROR_SET_SIZE;
    }

  if (write (ssvm_fd, &junk, 1) != 1)
    {
      clib_unix_warning ("set ssvm size");
      close (ssvm_fd);
      return SSVM_API_ERROR_SET_SIZE;
    }

  page_size = clib_mem_get_fd_page_size (ssvm_fd);
  if (ssvm->requested_va)
    {
      requested_va = ssvm->requested_va;
      clib_mem_vm_randomize_va (&requested_va, min_log2 (page_size));
    }

  mapa.requested_va = requested_va;
  mapa.size = ssvm->ssvm_size;
  mapa.fd = ssvm_fd;
  if (clib_mem_vm_ext_map (&mapa))
    {
      clib_unix_warning ("mmap");
      close (ssvm_fd);
      return SSVM_API_ERROR_MMAP;
    }
  close (ssvm_fd);

  sh = mapa.addr;
  sh->master_pid = ssvm->my_pid;
  sh->ssvm_size = ssvm->ssvm_size;
  sh->ssvm_va = pointer_to_uword (sh);
  sh->type = SSVM_SEGMENT_SHM;
#if USE_DLMALLOC == 0
  sh->heap = mheap_alloc_with_flags (((u8 *) sh) + page_size,
				     ssvm->ssvm_size - page_size, mh_flags);
#else
  sh->heap = create_mspace_with_base (((u8 *) sh) + page_size,
				      ssvm->ssvm_size - page_size,
				      1 /* locked */ );
  mspace_disable_expand (sh->heap);
#endif

  oldheap = ssvm_push_heap (sh);
  sh->name = format (0, "%s", ssvm->name, 0);
  ssvm_pop_heap (oldheap);

  ssvm->sh = sh;
  ssvm->my_pid = getpid ();
  ssvm->i_am_master = 1;

  /* The application has to set sh->ready... */
  return 0;
}
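
/*
 * Example (illustrative sketch, not part of the library): a master
 * typically fills in a NUL-terminated name vector and a size, calls the
 * init routine, builds its data structures on the segment heap and only
 * then sets sh->ready so that slaves may attach. The segment name and
 * the 1 MB size below are arbitrary.
 */
#if 0
static int
example_shm_master (void)
{
  ssvm_private_t seg;

  clib_memset (&seg, 0, sizeof (seg));
  seg.name = format (0, "example_seg%c", 0);
  seg.ssvm_size = 1 << 20;
  seg.requested_va = 0;		/* let the kernel pick the address */

  if (ssvm_master_init_shm (&seg))
    return -1;

  /* ... allocate shared data structures on seg.sh->heap ... */

  seg.sh->ready = 1;		/* slaves wait for this, see below */
  return 0;
}
#endif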

int
ssvm_slave_init_shm (ssvm_private_t * ssvm)
{
  struct stat stat;
  int ssvm_fd = -1;
  ssvm_shared_header_t *sh;

  ASSERT (vec_c_string_is_terminated (ssvm->name));
  ssvm->i_am_master = 0;

  while (ssvm->attach_timeout-- > 0)
    {
      if (ssvm_fd < 0)
	ssvm_fd = shm_open ((char *) ssvm->name, O_RDWR, 0777);
      if (ssvm_fd < 0)
	{
	  sleep (1);
	  continue;
	}
      if (fstat (ssvm_fd, &stat) < 0)
	{
	  sleep (1);
	  continue;
	}

      if (stat.st_size > 0)
	goto map_it;
    }
  clib_warning ("slave timeout");
  return SSVM_API_ERROR_SLAVE_TIMEOUT;

map_it:
  sh = (void *) mmap (0, MMAP_PAGESIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		      ssvm_fd, 0);
  if (sh == MAP_FAILED)
    {
      clib_unix_warning ("slave research mmap");
      close (ssvm_fd);
      return SSVM_API_ERROR_MMAP;
    }

  while (ssvm->attach_timeout-- > 0)
    {
      if (sh->ready)
	goto re_map_it;
    }
  close (ssvm_fd);
  munmap (sh, MMAP_PAGESIZE);
  clib_warning ("slave timeout 2");
  return SSVM_API_ERROR_SLAVE_TIMEOUT;

re_map_it:
  ssvm->requested_va = sh->ssvm_va;
  ssvm->ssvm_size = sh->ssvm_size;
  munmap (sh, MMAP_PAGESIZE);

  sh = ssvm->sh = (void *) mmap ((void *) ssvm->requested_va, ssvm->ssvm_size,
				 PROT_READ | PROT_WRITE,
				 MAP_SHARED | MAP_FIXED, ssvm_fd, 0);

  if (sh == MAP_FAILED)
    {
      clib_unix_warning ("slave final mmap");
      close (ssvm_fd);
      return SSVM_API_ERROR_MMAP;
    }
  sh->slave_pid = getpid ();
  return 0;
}
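
/*
 * Example (illustrative sketch, not part of the library): a slave
 * attaches by name; attach_timeout bounds the retry loops above while
 * waiting for the master to create the segment and set sh->ready.
 */
#if 0
static int
example_shm_slave (void)
{
  ssvm_private_t seg;

  clib_memset (&seg, 0, sizeof (seg));
  seg.name = format (0, "example_seg%c", 0);
  seg.attach_timeout = 10;

  if (ssvm_slave_init_shm (&seg))
    return -1;

  /* seg.sh now points at the header mapped at the master's address */
  return 0;
}
#endif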

void
ssvm_delete_shm (ssvm_private_t * ssvm)
{
  u8 *fn;

  fn = format (0, "/dev/shm/%s%c", ssvm->name, 0);

  if (CLIB_DEBUG > 1)
    clib_warning ("[%d] unlinking ssvm (%s) backing file '%s'", getpid (),
		  ssvm->name, fn);

  /* Throw away the backing file */
  if (unlink ((char *) fn) < 0)
    clib_unix_warning ("unlink segment '%s'", ssvm->name);

  vec_free (fn);
  vec_free (ssvm->name);

  munmap ((void *) ssvm->sh, ssvm->ssvm_size);
}

/**
 * Initialize memfd segment master
 */
int
ssvm_master_init_memfd (ssvm_private_t * memfd)
{
  uword page_size;
  ssvm_shared_header_t *sh;
  void *oldheap;
  clib_mem_vm_alloc_t alloc = { 0 };
  clib_error_t *err;

  if (memfd->ssvm_size == 0)
    return SSVM_API_ERROR_NO_SIZE;

  ASSERT (vec_c_string_is_terminated (memfd->name));

  alloc.name = (char *) memfd->name;
  alloc.size = memfd->ssvm_size;
  alloc.flags = CLIB_MEM_VM_F_SHARED;
  alloc.requested_va = memfd->requested_va;
  if ((err = clib_mem_vm_ext_alloc (&alloc)))
    {
      clib_error_report (err);
      return SSVM_API_ERROR_CREATE_FAILURE;
    }

  memfd->fd = alloc.fd;
  memfd->sh = (ssvm_shared_header_t *) alloc.addr;
  memfd->my_pid = getpid ();
  memfd->i_am_master = 1;

  page_size = 1ull << alloc.log2_page_size;
  sh = memfd->sh;
  sh->master_pid = memfd->my_pid;
  sh->ssvm_size = memfd->ssvm_size;
  sh->ssvm_va = pointer_to_uword (sh);
  sh->type = SSVM_SEGMENT_MEMFD;

#if USE_DLMALLOC == 0
  uword flags = MHEAP_FLAG_DISABLE_VM | MHEAP_FLAG_THREAD_SAFE;

  sh->heap = mheap_alloc_with_flags (((u8 *) sh) + page_size,
				     memfd->ssvm_size - page_size, flags);
#else
  sh->heap = create_mspace_with_base (((u8 *) sh) + page_size,
				      memfd->ssvm_size - page_size,
				      1 /* locked */ );
  mspace_disable_expand (sh->heap);
#endif
  oldheap = ssvm_push_heap (sh);
  sh->name = format (0, "%s", memfd->name, 0);
  ssvm_pop_heap (oldheap);

  /* The application has to set sh->ready... */
  return 0;
}

/**
 * Initialize memfd segment slave
 *
 * Subtly different from svm_slave_init. The caller needs to acquire
 * a usable file descriptor for the memfd segment e.g. via
 * vppinfra/socket.c:default_socket_recvmsg
 */
int
ssvm_slave_init_memfd (ssvm_private_t * memfd)
{
  clib_mem_vm_map_t mapa = { 0 };
  ssvm_shared_header_t *sh;
  uword page_size;

  memfd->i_am_master = 0;

  page_size = clib_mem_get_fd_page_size (memfd->fd);
  if (!page_size)
    {
      clib_unix_warning ("page size unknown");
      return SSVM_API_ERROR_MMAP;
    }

  /*
   * Map the segment once, to look at the shared header
   */
  mapa.fd = memfd->fd;
  mapa.size = page_size;

  if (clib_mem_vm_ext_map (&mapa))
    {
      clib_unix_warning ("slave research mmap (fd %d)", mapa.fd);
      close (memfd->fd);
      return SSVM_API_ERROR_MMAP;
    }

  sh = mapa.addr;
  memfd->requested_va = sh->ssvm_va;
  memfd->ssvm_size = sh->ssvm_size;
  clib_mem_vm_free (sh, page_size);

  /*
   * Remap the segment at the 'right' address
   */
  mapa.requested_va = memfd->requested_va;
  mapa.size = memfd->ssvm_size;
  if (clib_mem_vm_ext_map (&mapa))
    {
      clib_unix_warning ("slave final mmap");
      close (memfd->fd);
      return SSVM_API_ERROR_MMAP;
    }

  sh = mapa.addr;
  sh->slave_pid = getpid ();
  memfd->sh = sh;
  return 0;
}
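
/*
 * Example (illustrative sketch, not part of the library): before calling
 * ssvm_slave_init_memfd() the slave must obtain the segment fd from the
 * master, typically over a unix-domain socket carrying SCM_RIGHTS
 * ancillary data. The recvmsg()-based helper below is a hypothetical
 * stand-in for the vppinfra socket helper mentioned above; it assumes
 * <sys/socket.h>.
 */
#if 0
#include <sys/socket.h>

static int
example_recv_segment_fd (int sock)
{
  char data[1], cbuf[CMSG_SPACE (sizeof (int))];
  struct iovec iov = {.iov_base = data,.iov_len = sizeof (data) };
  struct msghdr msg = { 0 };
  struct cmsghdr *cmsg;

  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = cbuf;
  msg.msg_controllen = sizeof (cbuf);

  if (recvmsg (sock, &msg, 0) < 0)
    return -1;

  cmsg = CMSG_FIRSTHDR (&msg);
  if (!cmsg || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
    return -1;

  return *(int *) CMSG_DATA (cmsg);	/* the memfd segment fd */
}

static int
example_memfd_slave (int sock)
{
  ssvm_private_t seg;

  clib_memset (&seg, 0, sizeof (seg));
  seg.fd = example_recv_segment_fd (sock);
  if (seg.fd < 0)
    return -1;

  return ssvm_slave_init_memfd (&seg);
}
#endif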

void
ssvm_delete_memfd (ssvm_private_t * memfd)
{
  vec_free (memfd->name);
  clib_mem_vm_free (memfd->sh, memfd->ssvm_size);
  close (memfd->fd);
}

/**
 * Initialize segment in a private heap
 */
int
ssvm_master_init_private (ssvm_private_t * ssvm)
{
  uword pagesize = clib_mem_get_page_size (), rnd_size = 0;
  ssvm_shared_header_t *sh;
  u8 *heap;

  rnd_size = clib_max (ssvm->ssvm_size + (pagesize - 1), ssvm->ssvm_size);
  rnd_size &= ~(pagesize - 1);

#if USE_DLMALLOC == 0
  {
    mheap_t *heap_header;

    heap = mheap_alloc (0, rnd_size);
    if (heap == 0)
      {
	clib_unix_warning ("mheap alloc");
	return -1;
      }
    heap_header = mheap_header (heap);
    heap_header->flags |= MHEAP_FLAG_THREAD_SAFE;
  }
#else
  heap = create_mspace (rnd_size, 1 /* locked */ );
  if (heap == 0)
    {
      clib_unix_warning ("mheap alloc");
      return -1;
    }

  mspace_disable_expand (heap);

  /* Find actual size because mspace size is rounded up by dlmalloc */
  struct dlmallinfo dlminfo;
  dlminfo = mspace_mallinfo (heap);
  rnd_size = dlminfo.fordblks;
#endif

  ssvm->ssvm_size = rnd_size;
  ssvm->i_am_master = 1;
  ssvm->my_pid = getpid ();
  ssvm->requested_va = ~0;

  /* Allocate a [sic] shared memory header, in process memory... */
  sh = clib_mem_alloc_aligned (sizeof (*sh), CLIB_CACHE_LINE_BYTES);
  ssvm->sh = sh;

  clib_memset (sh, 0, sizeof (*sh));
  sh->heap = heap;
  sh->ssvm_size = rnd_size;
  sh->ssvm_va = pointer_to_uword (heap);
  sh->type = SSVM_SEGMENT_PRIVATE;
  sh->name = ssvm->name;

  return 0;
}
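
/*
 * Example (illustrative sketch, not part of the library): a private
 * segment is just a process-local heap behind the same API, so objects
 * can be carved out of it with the usual push/pop heap pattern.
 */
#if 0
static void *
example_alloc_from_segment (ssvm_private_t * ssvm, uword nbytes)
{
  void *oldheap, *p;

  oldheap = ssvm_push_heap (ssvm->sh);	/* switch to the segment heap */
  p = clib_mem_alloc (nbytes);
  ssvm_pop_heap (oldheap);		/* restore the previous heap */

  return p;
}
#endif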

int
ssvm_slave_init_private (ssvm_private_t * ssvm)
{
  clib_warning ("BUG: this should not be called!");
  return -1;
}

void
ssvm_delete_private (ssvm_private_t * ssvm)
{
  vec_free (ssvm->name);
#if USE_DLMALLOC == 0
  mheap_free (ssvm->sh->heap);
#else
  destroy_mspace (ssvm->sh->heap);
#endif
  clib_mem_free (ssvm->sh);
}

int
ssvm_master_init (ssvm_private_t * ssvm, ssvm_segment_type_t type)
{
  return (master_init_fns[type]) (ssvm);
}

int
ssvm_slave_init (ssvm_private_t * ssvm, ssvm_segment_type_t type)
{
  return (slave_init_fns[type]) (ssvm);
}

void
ssvm_delete (ssvm_private_t * ssvm)
{
  delete_fns[ssvm->sh->type] (ssvm);
}
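
/*
 * Example (illustrative sketch, not part of the library): the
 * type-generic wrappers above let a caller select the backing store at
 * run time; here a memfd segment is created and later torn down.
 */
#if 0
static int
example_memfd_lifecycle (void)
{
  ssvm_private_t seg;

  clib_memset (&seg, 0, sizeof (seg));
  seg.name = format (0, "example_seg%c", 0);
  seg.ssvm_size = 1 << 20;

  if (ssvm_master_init (&seg, SSVM_SEGMENT_MEMFD))
    return -1;

  /* ... pass seg.fd to slaves, populate the heap, set seg.sh->ready ... */

  ssvm_delete (&seg);		/* dispatches on seg.sh->type */
  return 0;
}
#endif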

ssvm_segment_type_t
ssvm_type (const ssvm_private_t * ssvm)
{
  return ssvm->sh->type;
}

u8 *
ssvm_name (const ssvm_private_t * ssvm)
{
  return ssvm->sh->name;
}

/*
 * fd.io coding-style-patch-verification: ON
 *
 * Local Variables:
 * eval: (c-set-style "gnu")
 * End:
 */