Type: fix
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: Ifd1e1907740e55420dc040eb2afbbbf9887aea3c
|
Type: refactor
Signed-off-by: Ole Troan <ot@cisco.com>
Change-Id: I42705c508e88dae40a4c19ab20a1340623a57e16
|
Change-Id: Ied34720ca5a6e6e717eea4e86003e854031b6eab
Signed-off-by: Dave Barach <dave@barachs.net>
|
This does not update API client code. In other words, if a client
assumes the transport is shmem based, this patch does not change that.
Furthermore, code that checks queue size for tail dropping is not
updated.
Done for the following APIs:
Plugins
- acl
- gtpu
- memif
- nat
- pppoe
VNET
- bfd
- bier
- tapv2
- vhost user
- dhcp
- flow
- geneve
- ip
- punt
- ipsec/ipsec-gre
- l2
- l2tp
- lisp-cp/one-cp
- lisp-gpe
- map
- mpls
- policer
- session
- span
- udp
- tap
- vxlan/vxlan-gpe
- interface
VPP
- api/api.c
OAM
- oam_api.c
Stats
- stats.c
Change-Id: I0e33ecefb2bdab0295698c0add948068a5a83345
Signed-off-by: Florin Coras <fcoras@cisco.com>
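A minimal sketch of the transport-agnostic reply pattern these handlers move toward (illustration only, not part of the patch): look up a vl_api_registration_t from the client index and send through it, instead of assuming a shared-memory input queue. The request and reply message types (vl_api_example_enable_t, VL_API_EXAMPLE_ENABLE_REPLY) are hypothetical placeholders; vl_api_client_index_to_registration() and vl_api_send_msg() are the existing VPP helpers this assumes.

  #include <string.h>
  #include <vlibapi/api.h>
  #include <vlibmemory/api.h>

  /* Hypothetical handler; the request/reply message types are placeholders. */
  static void
  vl_api_example_enable_t_handler (vl_api_example_enable_t * mp)
  {
    vl_api_example_enable_reply_t *rmp;
    vl_api_registration_t *reg;
    int rv = 0;

    /* Resolves the client to a registration regardless of transport
       (shared memory or socket). */
    reg = vl_api_client_index_to_registration (mp->client_index);
    if (!reg)
      return;

    rmp = vl_msg_api_alloc (sizeof (*rmp));
    memset (rmp, 0, sizeof (*rmp));
    rmp->_vl_msg_id = ntohs (VL_API_EXAMPLE_ENABLE_REPLY);
    rmp->context = mp->context;
    rmp->retval = ntohl (rv);

    /* Transport-agnostic send, instead of vl_msg_api_send_shmem()
       on an assumed shmem queue. */
    vl_api_send_msg (reg, (u8 *) rmp);
  }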
|
- separate client/server code for both the memory and socket APIs
- separate memory API code from generic vlib API code
- move unix_shared_memory_fifo to svm and rename it to svm_fifo_t
- overall declutter
Change-Id: I90cdd98ff74d0787d58825b914b0f1eafcfa4dc2
Signed-off-by: Florin Coras <fcoras@cisco.com>
|
- Teach vpp_api_test to send/receive API messages over sockets
- Add memfd-based shared memory
- Add API messages to create memfd-based shared-memory segments
- vpp_api_test supports both socket and shared-memory segment connections
- vpp_api_test can pivot from socket to shared-memory API messaging
- Add socket client support to libvlibclient.so
- Dead-client reaper sends ping messages; container-friendly
- Dead-client reaper falls back to kill (<pid>, 0) liveness checking
  if e.g. a Python app goes silent for tens of seconds
- Handle ping messages in the Python client support code
- Teach "show api ring" about pairwise shared-memory segments
- Fix IP probing of already-resolved destinations (VPP-998)
We'll need this work to implement proper host-stack client isolation.
Change-Id: Ic23b65f75c854d0393d9a2e9d6b122a9551be769
Signed-off-by: Dave Barach <dave@barachs.net>
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Signed-off-by: Florin Coras <fcoras@cisco.com>
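A minimal sketch of the kill (<pid>, 0) liveness probe the reaper fallback describes (illustration only; the helper name client_process_alive is hypothetical): signal 0 delivers nothing but still performs the existence and permission checks, so a client that has merely gone silent can be told apart from one that has died.

  #include <errno.h>
  #include <signal.h>
  #include <stdbool.h>
  #include <sys/types.h>

  /* Returns true if the client process still exists. */
  static bool
  client_process_alive (pid_t pid)
  {
    if (kill (pid, 0) == 0)
      return true;              /* process exists and is signalable */
    if (errno == EPERM)
      return true;              /* process exists, owned by another user */
    return false;               /* ESRCH: no such process; reap the client */
  }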
|
Supports 64K PPPoE sessions.
This plugin adds three graph nodes:
1) pppoe-input for PPPoE decapsulation
2) pppoe-encap for PPPoE encapsulation
3) pppoe-tap-dispatch for control-plane processing
Below is the configuration to make the PPPoE control plane and data plane work.
Edit /etc/vpp/startup.conf:
  tuntap {
    enable
    ethernet
    name newtap
  }
Then, from the VPP CLI:
  create pppoe tap tap-if-index 1
  // Configure the session after a subscriber's PPPoE discovery and PPP link establishment succeed:
  create pppoe session client-ip 100.1.2.1 session-id 1 client-mac 00:11:01:00:00:01
  show pppoe fib
  show pppoe session
Change-Id: I73e724b6bf7c3e4181a9914c5752da1fa72d7e60
Signed-off-by: Hongjun Ni <hongjun.ni@intel.com>
|