path: root/src/plugins/memif
2017-06-16  memif: show memif CLI enhancement (Steven; 1 file changed, +115/-39)

Add optional keywords to 'show memif' to allow displaying a particular interface, plus an option to display the descriptor tables. The new syntax for the show memif command is:

    show memif [<interface>] [descriptors]

Change-Id: I20696bbea1142bdc152b6e351c6ece24b1cf5500
Signed-off-by: Steven <sluong@cisco.com>
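A usage sketch of the extended command; the interface name is illustrative and output is elided:

    DBGvpp# show memif                        <- all memif interfaces, as before
    DBGvpp# show memif memif0                 <- a single interface
    DBGvpp# show memif memif0 descriptors     <- also dump its descriptor tables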
2017-06-16  memif: jumbo frames support (Steven; 2 files changed, +272/-148)

The current memif interface supports frame sizes up to 2048 bytes. This patch enhances memif to support jumbo frames.

On tx (writing buffers to the ring), keep reading the next buffer in vlib while the VLIB_BUFFER_NEXT_PRESENT flag is set, and merge it into the same ring entry. Use descriptor chaining if the buffer is not big enough.

On rx (reading buffers from the ring), if the packet is greater than 2048 bytes, create multiple vlib buffers chained with VLIB_BUFFER_NEXT_PRESENT.

Testing: Because the ping command provided by VPP does not support jumbo frames, I have to use Linux ping. Here is the setup that I use for testing:

    VM1 --- vhost --- VPP1 --- memif --- VPP2 --- vhost --- VM2

Create vhost-user interfaces between VM1 and VPP1 and between VPP2 and VM2.

VM configuration: set the interface MTU on the VM (e.g. 9216) to support jumbo frames. Create a static route and static ARP entry from VM1 to VM2 and vice versa. Use iperf3 or 'ping -s 8000' from VM1 to VM2 or vice versa.

Sample run:

    sluong@ubuntu:~$ ping 131.1.1.1 -c1 -s 8000
    PING 131.1.1.1 (131.1.1.1) 8000(8028) bytes of data.
    8008 bytes from 131.1.1.1: icmp_seq=1 ttl=62 time=0.835 ms

    --- 131.1.1.1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.835/0.835/0.835/0.000 ms
    sluong@ubuntu:~$

    DBGvpp# sh interface memif0
        Name      Idx   State   Counter       Count
    memif0         1     up     rx packets        1
                                rx bytes       8042
                                tx packets        1
                                tx bytes       8042
                                ip4               1
    DBGvpp#

Change-Id: I469bece3d45a790dceaee1d6a8e976bd018feee2
Signed-off-by: Steven <sluong@cisco.com>
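A minimal C sketch of walking a chained vlib buffer, in the spirit of the tx path described above; the helper name is hypothetical and the memif descriptor handling is elided:

    #include <vlib/vlib.h>

    /* Total payload length of a (possibly chained) vlib buffer. */
    static u32
    chained_buffer_length (vlib_main_t * vm, vlib_buffer_t * b)
    {
      u32 len = 0;
      while (1)
        {
          len += b->current_length;	/* bytes in this segment */
          if (!(b->flags & VLIB_BUFFER_NEXT_PRESENT))
            break;			/* last segment of the chain */
          b = vlib_get_buffer (vm, b->next_buffer);
        }
      return len;
    }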
2017-06-13  memif: fix crash during interface delete (Damjan Marion; 1 file changed, +4/-0)

Change-Id: Ide6d26d6fcc81be6f26ac0abe2cd0d6a0838cfe6
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-06-12  memif: complete refactor of socket handling code (Damjan Marion; 10 files changed, +1795/-1146)

Change-Id: I4d41def83a23f13701f1ddcea722d481e4c85cbc
Signed-off-by: Damjan Marion <damarion@cisco.com>
2017-06-07  Add support for memif API to VAT. (Milan Lenco; 1 file changed, +360/-0)

Change-Id: I01dc439fc84f9213e55ba56982eff34474637115
Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>
2017-06-02  memif: fix coverity warnings (Steven; 1 file changed, +2/-0)

Check for a -1 return from read () prior to using the data.

Change-Id: Ibab7309244de488737ea7938b334fab495bf855d
Signed-off-by: Steven <sluong@cisco.com>
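A minimal sketch of the defensive pattern the fix describes, using plain POSIX read (); names are local to this example:

    #include <unistd.h>

    static ssize_t
    read_checked (int fd, void *buf, size_t len)
    {
      ssize_t n = read (fd, buf, len);
      if (n < 0)
        return -1;		/* error: caller must not consume buf */
      /* only the first n bytes of buf are valid */
      return n;
    }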
2017-05-31  memif: multi-queues support (Steven; 7 files changed, +167/-73)

- Add rx-queues and tx-queues options to the create memif CLI (see the usage sketch below).
- Add vlib_worker_thread_barrier_sync () to memif_conn_fd_read_ready (), as the latter function may disconnect the ring and clean up the shared memory.
- On transmit, write the rid (queue number) to the socket.
- On receive, read the rid and trigger the interrupt for the corresponding thread.

Change-Id: If1c7e26c7124174678f047909cbc33e931eaac8c
Signed-off-by: Steven <sluong@cisco.com>
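A hedged usage sketch of the new options; the queue counts are illustrative, other create memif arguments are omitted, and the exact option spelling should be checked against this patch's CLI help:

    DBGvpp# create memif master rx-queues 2 tx-queues 2
    DBGvpp# create memif slave rx-queues 2 tx-queues 2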
2017-05-29  memif: master instance crashes when typing quit on slave (Steven; 1 file changed, +29/-15)

When I type 'quit' on the slave instance, the master instance crashes on this line:

    0: /home/sluong/vpp-master/vpp/build-data/../src/vlib/unix/input.c:200 (linux_epoll_input) assertion `! pool_is_free (um->file_pool, _e)' fails
    Aborted (core dumped)

Below is the decode from gdb:

    line_number=0, fmt=0x7f57af6cc9a0 "%s:%d (%s) assertion `%s' fails")
      at /home/sluong/vpp-master/vpp/build-data/../src/vppinfra/error.c:143
    vm=0x7f57af8e2400 <vlib_global_main>, node=0x7f576d40ad80, frame=0x0)
      at /home/sluong/vpp-master/vpp/build-data/../src/vlib/unix/input.c:200
    vm=0x7f57af8e2400 <vlib_global_main>, node=0x7f576d40ad80, type=VLIB_NODE_TYPE_PRE_INPUT, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, last_time_stamp=1525665215050617)
      at /home/sluong/vpp-master/vpp/build-data/../src/vlib/main.c:1016
    vm=0x7f57af8e2400 <vlib_global_main>, is_main=1)
      at /home/sluong/vpp-master/vpp/build-data/../src/vlib/main.c:1500

I am able to reproduce the problem consistently with the procedure below:

1. Create 3 memif interfaces between the slave and master instances.
2. Type 'quit' on the slave. Neither crashes the first time.
3. Bring back the slave. Type 'quit' on the master. Neither crashes.
4. Bring back the master. Type 'quit' on the slave. The master crashes.

There are two places where the interrupt line is disconnected and the unix file is removed via the call unix_file_del ():

1. memif_int_fd_read_ready ()
2. memif_disconnect (), which is called from multiple places in memif

When the crash happens, the unix file was removed by memif_disconnect (), called via memif_conn_fd_read_ready () with a message size of 0 from recvmsg (). It is notable that when the unix file is removed from memif_int_fd_read_ready (), it never crashes. It is a race condition; however, if I follow the aforementioned procedure, the crash always happens.

The reason the crash happens when memif_disconnect () removes the unix file is that there may still be pending input in linux_epoll_input (). When linux_epoll_input () tries to access the unix file via line 200,

    unix_file_t *f = pool_elt_at_index (um->file_pool, i);

it crashes.

We could add code in linux_epoll_input () to avoid the crash when the index into the file_pool is already free. Or we could fix memif so that memif_conn_fd_read_ready () does not remove the unix file when recvmsg () returns 0 bytes, postponing the deletion until memif_int_fd_read_ready () runs, after linux_epoll_input () has had a chance to empty the input stream. I chose the latter approach.

I split the function memif_disconnect () into two parts. On the code path where memif_conn_fd_read_ready () calls memif_disconnect (), it does not remove the unix file. All other calls to memif_disconnect () continue to do what they used to do, to avoid regression. Please let me know if I should fix the problem another way.

Change-Id: I8efe2a3d24c6581609bc7b6fe82c2b59c22d8e4b
Signed-off-by: Steven <sluong@cisco.com>
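A minimal C sketch of the deferred-deletion split described above; the type and function names are hypothetical stand-ins for the plugin's real ones:

    /* Hypothetical stand-in for the plugin's per-interface state. */
    typedef struct { int conn_fd; } memif_if_sketch_t;

    static void
    memif_disconnect_sketch (memif_if_sketch_t * mif, int delete_unix_file)
    {
      /* common teardown: disconnect rings, free shared memory (elided) */
      mif->conn_fd = -1;
      if (delete_unix_file)
        {
          /* unix_file_del () would go here; safe only when
             linux_epoll_input () cannot still hold a pending event.
             Every caller except memif_conn_fd_read_ready () takes
             this path. */
        }
      /* otherwise deletion is deferred to memif_int_fd_read_ready (),
         after linux_epoll_input () has drained its pending input */
    }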
2017-05-15  memif: migrate memif to use vnet device infra APIs (Steven; 2 files changed, +40/-22)

Migrate memif to use the vnet device infra APIs. No new functionality is added.

Change-Id: I70e440d2ae1e673876365041f31fe78997aceecf
Signed-off-by: Steven <sluong@cisco.com>
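A hedged sketch of what hooking into the vnet device infrastructure looks like; the class name and tx stub are illustrative and the field set is abridged (the commit itself migrates existing memif code rather than adding anything new):

    #include <vnet/vnet.h>

    static uword
    sketch_interface_tx (vlib_main_t * vm, vlib_node_runtime_t * node,
                         vlib_frame_t * frame)
    {
      /* stub: a real driver would write frame's buffers to the device */
      return frame->n_vectors;
    }

    VNET_DEVICE_CLASS (sketch_device_class) = {
      .name = "sketch",
      .tx_function = sketch_interface_tx,
    };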
2017-04-25"autoreply" flag: autogenerate standard xxx_reply_t messagesDave Barach1-11/+1
Change-Id: I72298aaae7d172082ece3a8edea4217c11b28d79 Signed-off-by: Dave Barach <dave@barachs.net>
2017-04-20  Temporary workaround for the bug VPP-698. (Milan Lenco; 1 file changed, +2/-2)

Change-Id: I220b0b95449f24cc547206e38ab8e10019115ec0
Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>
2017-04-06  Use thread local storage for thread index (Damjan Marion; 1 file changed, +18/-17)

This patch deprecates stack-based thread identification and also removes the requirement that thread stacks be adjacent. Finally, possibly annoying for some folks, it renames all occurrences of cpu_index and cpu_number to thread index. Using the word "cpu" is misleading here, as a thread can be migrated to a different CPU, and it is not related to the Linux CPU index.

Change-Id: I68cdaf661e701d2336fc953dcb9978d10a70f7c1
Signed-off-by: Damjan Marion <damarion@cisco.com>
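A minimal C sketch of thread-local thread identification, as described above; the variable and accessor names are illustrative, not the ones the patch introduces:

    /* Set once when each worker thread starts. */
    static __thread unsigned thread_index_tls;

    static inline unsigned
    get_thread_index (void)
    {
      /* no stack-address arithmetic, no stack-adjacency requirement */
      return thread_index_tls;
    }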
2017-04-05  Fix two more memif coverity issues (Milan Lenco; 2 files changed, +32/-25)

Change-Id: I935620798d6fe82b99b6bd564749e20a189b4ae3
Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>
2017-04-03  Fix memif coverity issues (Milan Lenco; 3 files changed, +33/-10)

Change-Id: I844ec53b55ceaa1e00996f5cf8a018537ea8b481
Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>
2017-03-30  vppinfra: add spinlock inline functions (Damjan Marion; 3 files changed, +7/-30)

Change-Id: I86089e9bb604adfc260a111685001be1c897ce53
Signed-off-by: Damjan Marion <damarion@cisco.com>
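A minimal usage sketch of the clib_spinlock inline functions this commit introduces, assuming the vppinfra/lock.h header path:

    #include <vppinfra/lock.h>

    static clib_spinlock_t lock;	/* clib_spinlock_init (&lock) at setup */

    static void
    update_shared_state (void)
    {
      clib_spinlock_lock (&lock);
      /* ... mutate state shared between threads ... */
      clib_spinlock_unlock (&lock);
    }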
2017-03-22  Add memif - packet memory interface for intra-host communication (Damjan Marion; 9 files changed, +2731/-0)

Change-Id: I94c06b07a39f07ceba87bf3e7fcfc70e43231e8a
Signed-off-by: Damjan Marion <damarion@cisco.com>
Co-Authored-By: Milan Lenco <Milan.Lenco@pantheon.tech>
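A hedged sketch of bringing up a memif pair from the CLI; the socket path and option spelling are illustrative and should be checked against this version's CLI help:

    On the master instance:
        DBGvpp# create memif socket /run/vpp/memif.sock master
    On the slave instance:
        DBGvpp# create memif socket /run/vpp/memif.sock slave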