Age | Commit message | Author | Files | Lines
2018-06-12MTU assigning to itself (Coverity)Ole Troan1-4/+0
Change-Id: Iee8de25ab3c68ae3698c79852195dc336050914c Signed-off-by: Ole Troan <ot@cisco.com>
2018-06-11vom: Add support for af-packet dumpMohsin Kazmi5-9/+102
Change-Id: I0a1fc36ac29f6da70334ea3b5a5cf0e841faef76 Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
2018-06-11tcp: cleanup connection/session fixesFlorin Coras11-50/+81
- Cleanup session state after last ack and avoid using a cleanup timer. - Change session cleanup to free the session as opposed to waiting for delete notify. - When in close-wait, postpone sending the fin on close until all outstanding data has been sent. - Don't flush rx fifo unless in closed state Change-Id: Ic2a4f0d5568b65c83f4b55b6c469a7b24b947f39 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-11udp: fix for multiple workers and add testFlorin Coras6-7/+98
Since the main thread is not used for session polling anymore, when vpp is started with multiple workers, allocate connections on the first worker. Also add a simple udp make test. Change-Id: Id869f5d89e0fced51048f0384fa86a5022258b7c Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-11MTU: Software interface / Per-protocol MTU supportOle Troan33-166/+359
This patch separates setting of hardware interface and software interface MTU. Software MTU is the L2 payload MTU (i.e. not including the L2 header). Per-protocol MTU for IPv4, IPv6 and MPLS can also be set. Currently only IP4, IP6 are enabled in adjacency / rewrite code. Documentation in src/vnet/MTU.md Change-Id: Iee2fd6f0bbc8210748dd8e073ab9fab87d323690 Signed-off-by: Ole Troan <ot@cisco.com>
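To make the hardware/software split concrete, here is a tiny hypothetical sketch (not VPP code; the 1514-byte hardware MTU and 14-byte Ethernet header are assumptions for the example) of how an L2 payload MTU and a per-protocol MTU relate to the hardware MTU:

#include <stdio.h>

/* Hypothetical example: derive the default L2-payload (software) MTU
 * from a hardware MTU that counts the 14-byte Ethernet header, then
 * set a per-protocol MTU at or below it. Values are assumptions. */
int
main (void)
{
  unsigned hw_mtu = 1514;       /* hardware MTU including L2 header (assumed) */
  unsigned l2_hdr_bytes = 14;   /* untagged Ethernet header */
  unsigned sw_mtu = hw_mtu - l2_hdr_bytes;  /* software MTU = L2 payload */
  unsigned ip4_mtu = sw_mtu;    /* per-protocol MTU must not exceed sw_mtu */

  printf ("hw %u -> sw (L2 payload) %u, ip4 %u\n", hw_mtu, sw_mtu, ip4_mtu);
  return 0;
}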
2018-06-11Fix multiple NAT translation with interface address as externalAlexander Chernavin1-4/+4
Change-Id: Idd65c6d0489bf83984a2c34d22d3f94000fc7018 Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
2018-06-10IGMP: use simple u32 bit hash keyNeale Ranns3-18/+15
Some IGMP hashes use only a u32 key, which is not stored in the object, so don't use a memory-based hash. Change-Id: Iaa4eddf568ea0164bc2a812da4cc502f1811b93c Signed-off-by: Neale Ranns <nranns@cisco.com>
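For context, a minimal sketch of the distinction this change relies on, built against the vppinfra hash API (the group-key value is hypothetical): a key that fits in a machine word can be stored in the hash itself via hash_create(), so no memory-based hash (hash_create_mem) and no externally stored key are needed.

#include <vppinfra/mem.h>
#include <vppinfra/hash.h>

int
main (int argc, char *argv[])
{
  clib_mem_init (0, 64 << 20);

  /* Word-keyed hash: the u32 key is copied into the table itself,
   * so the object being indexed does not need to store it. */
  uword *h = hash_create (0 /* elts */, sizeof (uword) /* value bytes */);
  u32 group_key = 0xe0000016;   /* hypothetical IGMP group key */

  hash_set (h, group_key, 42);
  uword *p = hash_get (h, group_key);

  /* A memory-based hash (hash_create_mem) would instead expect the key
   * to live at a stable address for the lifetime of the entry. */
  return p ? 0 : 1;
}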
2018-06-10cli: Disable XON/XOFF in the ttyChris Luke1-0/+4
- CLI history forward-search is bound to ^S which is common, but that is also the tty's default control byte to pause output. So we disable XON/XOFF in the tty so that we can use ^S. Change-Id: I61717c77a11733d64eed7f8119677e7cd2e20029 Signed-off-by: Chris Luke <chrisy@flirble.org>
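A standalone sketch of the tty change described above, using standard POSIX termios rather than the VPP unix CLI code: clearing IXON/IXOFF stops the terminal from consuming ^S/^Q as flow control, so ^S can reach the line editor.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int
main (void)
{
  struct termios t;

  if (!isatty (STDIN_FILENO) || tcgetattr (STDIN_FILENO, &t) < 0)
    return 1;

  /* Disable XON/XOFF flow control so ^S and ^Q are delivered to the
   * application (e.g. bound to history search) instead of pausing
   * and resuming terminal output. */
  t.c_iflag &= ~(IXON | IXOFF);

  if (tcsetattr (STDIN_FILENO, TCSANOW, &t) < 0)
    return 1;

  printf ("^S/^Q now pass through to the application\n");
  return 0;
}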
2018-06-10cli: Fix reverse-line-wrap in terminals (VPP-1295)Chris Luke1-86/+195
- Terminals do not reverse-line-wrap when the cursor is at the left edge and \b tries to make it go left. - Instead, we have to track the cursor position if we need to emit \b's and if we are at the left edge emit an ANSI sequence to relocate the cursor. Previously we usually simply calculated the new cursor position after a bunch of output had completed. - Further trickiness is required since most xterm-like terminals also defer moving the cursor to the next line when at the right edge[1], and then if they receive a \b move the cursor back one character too many. - This requires intricate reworking of everywhere that \b is emitted by the CLI code during command line editing. [1] Bash counters this issue by tracking the cursor position as output is generated and forcing the cursor to the next line (by emitting a space followed by \r) if it gets to this phantom cursor position); here we effectively do that but only if the user tries to go left when in this state. Change-Id: I7c1d7c0e24c53111a5810cebb504ccfdac743086 Signed-off-by: Chris Luke <chrisy@flirble.org>
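A simplified, hypothetical illustration of the cursor-tracking idea (not the VPP CLI code): when the tracked cursor is already at column 0, emit an ANSI sequence that moves up one row and jumps to the last column instead of emitting a bare \b.

#include <stdio.h>

/* Move the cursor one position left, given the tracked current column
 * and the terminal width.  Terminals do not reverse-line-wrap on \b,
 * so at column 0 we move up one row (ESC [ A) and jump to the last
 * column (ESC [ <n> G, 1-based) explicitly. */
static int
cursor_left (int cur_col, int term_width)
{
  if (cur_col > 0)
    {
      fputs ("\b", stdout);
      return cur_col - 1;
    }
  printf ("\033[A\033[%dG", term_width);
  return term_width - 1;
}

int
main (void)
{
  int col = 0;
  col = cursor_left (col, 80);  /* wraps back to column 79 on an 80-col tty */
  printf ("\nnew column: %d\n", col);
  return 0;
}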
2018-06-10tcp: fix timer based recovery exit conditionFlorin Coras2-2/+2
Change-Id: I3f36e5760fd2935cc29d22601d4c0a1d2a22ba84 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-10cli: Fix off-by-one in the pagerChris Luke1-2/+2
- The last line in the pager buffer was sometimes missed when using space/pg-dn; simple off-by-one error. Change-Id: Id4e5f7cf0e5db4f719f87b9069d75427bc66d3f7 Signed-off-by: Chris Luke <chrisy@flirble.org>
2018-06-10Don't use foreach_vlib_main macro w/out barrier syncDave Barach1-4/+8
It should be OK to scrape dispatch stats without forcing a barrier sync. Scrape the stats manually. We'll see what happens. Change-Id: Ia20b51ea12ed81cce76e1801401bad0edd0645bb Signed-off-by: Dave Barach <dave@barachs.net>
2018-06-10add script for virtual function create/show/removeDamjan Marion1-0/+132
Change-Id: I151bc4269cb4d7e8572a6a676da20f69206d6c3f Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-09Fix bug in vlib_buffer_free_from_ring_no_nextDamjan Marion1-1/+1
Change-Id: I332bb4578d1a3c79770985bf1f315d2ed823a3e5 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-09session: cleanup queue node tracingFlorin Coras1-67/+67
Change-Id: Ib8e332174d96bf9cfa4bbaaa5b8d8bc9958424b1 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-09avf: properly configure RSS LUTDamjan Marion2-13/+40
Change-Id: I85cfab692ae0a72277ae561cdba7dcbc1f60aca3 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-09avf: add support for intel X722 NICsDamjan Marion1-0/+1
Change-Id: I3e07070eed4948e813ad1490963c7f8ef7f4262e Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-08Time range support for vppinfraDave Barach4-0/+787
Change-Id: I2356b1e05fd868b46b4d26ade760900a5739ca4d Signed-off-by: Dave Barach <dave@barachs.net>
2018-06-08Add reaper functions to want events APIs (VPP-1304)Neale Ranns6-51/+263
Change-Id: Iaeb52d94cb6da63ee93af7c1cf2dade6046cba1d Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-06-08Implement DHCPv6 PD client (VPP-718, VPP-1050)Juraj Sloboda16-45/+3562
Change-Id: I72a1ccdfdd5573335ef78fc01d5268934c73bd31 Signed-off-by: Juraj Sloboda <jsloboda@cisco.com>
2018-06-08http server: do not close connections after replyFlorin Coras1-22/+29
Change-Id: I7add46258fe44bc4d23d805ffc7eae75e37cab82 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-08export counters in a memfd segmentDave Barach18-32/+1221
Also export per-node error counters; directory entries implement object types. Change-Id: I8ce8e0a754e1be9de895c44ed9be6533b4ecef0f Signed-off-by: Dave Barach <dave@barachs.net>
2018-06-08Gratuitous ARP packet handlingNeale Ranns3-10/+137
only learn from a GARP packet if it is an update to an existing entry. Change-Id: I4c1b59cfedb911466e5e4c9756cf53a6676e1909 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2018-06-08Add pad to the vnet_buffer reasm struct so that adj_index is retainedVijayabhaskar Katamreddy1-0/+1
Change-Id: Ib756c4f3e8caba1f77ef48b62a2a5d7283fe5016 Signed-off-by: Vijayabhaskar Katamreddy <vkatamre@cisco.com>
2018-06-08LB: reply message id and table length are wrong.Hongjun Ni1-2/+2
Change-Id: Iea2c661cb3e0728bb2d10b06791ed84fed00f6a7 Signed-off-by: Hongjun Ni <hongjun.ni@intel.com>
2018-06-07Fix IP scan neighbor API/CLI handling of interval and staleJohn Lo1-2/+2
Change-Id: I77264c4398e6fad461bb4dc10867a1f9c3accec0 Signed-off-by: John Lo <loj@cisco.com>
2018-06-07dpdk: fix interface naming issueDamjan Marion1-2/+9
... introduced with dpdk 18.05 support patch Change-Id: Idf2283888f81d7652599651c0d65476e451f9343 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-07Revert "Allow arp-input node to learn IPv4 neighbors from GARP packets"John Lo1-5/+0
This reverts commit d018870d1b02109fc8b328446f15312fdd2fcd11. Change-Id: I700ade7a25ae5ed72cfed586e50b02492a4f11de Signed-off-by: John Lo <loj@cisco.com>
2018-06-07dpdk: failsafe PMD initialization codeRui Cai2-1/+26
Added code to initialize the failsafe PMD. This is part of an initial effort to enable vpp running over dpdk on the failsafe PMD in Microsoft Azure (4/4). Change-Id: Ia2469c7087ca4b5c7881dfb11ec5c4fcebaa1d04 Signed-off-by: Rui Cai <rucai@microsoft.com>
2018-06-07DHCP Client DumpNeale Ranns17-397/+888
- use types on the DHCP API so that the same data is sent in config messages and in dumps - add the DHCP client dump API - update VOM to reflect API changes - rename VOM class dhcp_config* to dhcp_client* - the VOM dhcp_client class maintains the lease data (which it reads on a dump) for clients to read Change-Id: I2a43463937cbd80c01d45798e74b21288d8b8ead Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
2018-06-07Add support for DPDK 18.05Damjan Marion20-1998/+168
Change-Id: I205932bc727c990011bbbe1dc6c0cf5349d19806 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-07Allow arp-input node to learn IPv4 neighbors from GARP packetsJohn Lo1-0/+5
Change-Id: I86019f4ff9b0c8c633638fa23341d8ce49099ba6 Signed-off-by: John Lo <loj@cisco.com>
2018-06-06Alter logging semantics for VPP PAPI objectIan Wells1-2/+11
Logging previously used a string name for the log level and changed the system-wide log level based on this string name. It now uses a logging-module provided constant for the log level and changes its own logger's level based on the name, and only if the level is provided. This allows the logging to be more compatible with Pythonic usage, where an external source may be used to dictate logging levels across the system on a per module basis and should not be overridden. Change-Id: Icf6896ff61a29b12c11d04374767322cdb330323 Signed-off-by: Ian Wells <iawells@cisco.com>
2018-06-05bond: send gratuitous arp when the active slave went down in active-backup modeSteven10-40/+111
- Modify the APIs send_ip6_na and send_ip4_garp to take a sw_if_index instead of a vnet_hw_interface_t and add a call to build_ethernet_rewrite to support subinterfaces/vlans - Add code to the bonding driver to send an event to bond_process when the first interface becomes active or when the active interface goes down - Create a bond_process to walk the interface and the corresponding subinterfaces and send garp/ip6_na when an event is received - Minor cleanup in bonding/node.c Note: the dpdk bonding driver does not send garp/ip6_na for subinterfaces. There is no attempt to fix that here, but the infra is now in place and it should be easy to add that support. Change-Id: If3ecc4cd0fb3051330f7fa11ca0dab3e18557ce1 Signed-off-by: Steven <sluong@cisco.com>
2018-06-05lb api: correct byte order of new_flows_table_length argumentAndrey "Zed" Zaikin1-1/+1
Change-Id: I3ac348a8cb1a515dfe1839eaa084c87719d282e1 Signed-off-by: Andrey "Zed" Zaikin <zed.0xff@gmail.com>
2018-06-05VPP API: Memory traceOle Troan12-35/+188
if you plan to put a hash into shared memory, the key sum and key equal functions MUST be set to constants such as KEY_FUNC_STRING, KEY_FUNC_MEM, etc. -lvppinfra is PIC, which means that the process which set up the hash won't have the same idea where the key sum and key compare functions live in other processes. Change-Id: Ib3b5963a0d2fb467b91e1f16274df66ac74009e9 Signed-off-by: Ole Troan <ot@cisco.com> Signed-off-by: Dave Barach <dave@barachs.net> Signed-off-by: Ole Troan <ot@cisco.com>
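A toy sketch of why constants rather than raw function pointers are required (generic C illustrating the idea; the constant names echo the commit message, but the surrounding helpers are hypothetical and not the vppinfra API): each process stores a small integer in shared memory and resolves it to its own local function.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* In a PIC library, the address of a key-sum function differs from
 * process to process, so a raw function pointer stored in shared
 * memory is meaningless to other readers.  Store a small constant
 * instead and resolve it locally in each process. */
enum { KEY_FUNC_STRING = 1, KEY_FUNC_MEM = 2 };

typedef uintptr_t (key_sum_fn) (const void *key, size_t len);

static uintptr_t
key_sum_string (const void *key, size_t len)
{
  uintptr_t h = 5381;
  const char *s = key;
  (void) len;                   /* NUL-terminated key, length unused */
  while (*s)
    h = h * 33 + (unsigned char) *s++;
  return h;
}

static uintptr_t
key_sum_mem (const void *key, size_t len)
{
  uintptr_t h = 0;
  const unsigned char *p = key;
  while (len--)
    h = h * 31 + *p++;
  return h;
}

static key_sum_fn *
resolve_key_sum (unsigned code)   /* per-process resolution of the constant */
{
  switch (code)
    {
    case KEY_FUNC_STRING:
      return key_sum_string;
    case KEY_FUNC_MEM:
      return key_sum_mem;
    default:
      return 0;
    }
}

int
main (void)
{
  unsigned shared_code = KEY_FUNC_STRING;  /* what lives in shared memory */
  key_sum_fn *f = resolve_key_sum (shared_code);

  if (!f)
    return 1;
  printf ("%lx\n", (unsigned long) f ("example-key", 0));
  return 0;
}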
2018-06-05VPP-1305: Add support for tagsJerome Tollet2-2/+18
Change-Id: I9e759037295fe675abe426e565a562b1ec1e7d33 Signed-off-by: Jerome Tollet <jtollet@cisco.com>
2018-06-05BIER CLI show commands; no crash on non-existent objects (VPP-1303)Neale Ranns3-6/+27
DBGvpp# sh bier disp entry
DBGvpp# sh bier disp entry 0
No such BIER disp entry: 0
DBGvpp# sh bier disp table
DBGvpp# sh bier disp table 0
No such BIER disp table: 0
DBGvpp# sh bier disp table 11
No such BIER disp table: 11
DBGvpp#
DBGvpp# sh bier bift
no BIFT entries
DBGvpp# sh bier bift set 0
no BIFT entries
DBGvpp# sh bier bift set 0 sd 0 bsl 0
no BIFT entries
DBGvpp#
DBGvpp# sh bier fib
No BIER tables
DBGvpp# sh bier fib 0
DBGvpp# sh bier fib 0 4
DBGvpp# sh bier fmask
DBGvpp# sh bier fmask 2
No BIER f-mask 2
DBGvpp# sh bier imp
DBGvpp# sh bier imp 0
No such BIER imposition: 0
Change-Id: Ibadac3441dd8a6d1b96bd9ee4358e28498875b95 Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-06-04Configure or deduce CLIB_LOG2_CACHE_LINE_BYTES (VPP-1064)Dave Barach5-1/+91
Added configure argument "--with-log2-cache-line-bytes=5|6|7|auto", AKA 32, 64, or 128 bytes, or use the value inferred from the build host. This produces build-xxx/vpp/vppinfra/config.h, which is included by .../src/vppinfra/cache.h. Kernels which implement the following pseudo-file (aka x86_64) are easy: /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size Otherwise, extract the cpuid from /proc/cpuinfo and map it to the cache line size. Change-Id: I7ff861e042faf82c3901fa1db98864fbdea95b74 Signed-off-by: Dave Barach <dave@barachs.net> Signed-off-by: Nitin Saxena <nitin.saxena@cavium.com>
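A small standalone sketch of the "auto" deduction path on kernels that expose the sysfs pseudo-file named above (an illustration, not the actual configure-time probe):

#include <stdio.h>

int
main (void)
{
  const char *path =
    "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size";
  FILE *f = fopen (path, "r");
  unsigned line_size = 0, log2_bytes = 0;

  if (!f)
    {
      fprintf (stderr, "cannot open %s; fall back to /proc/cpuinfo\n", path);
      return 1;
    }
  if (fscanf (f, "%u", &line_size) != 1 || line_size == 0)
    {
      fclose (f);
      return 1;
    }
  fclose (f);

  /* 32 -> 5, 64 -> 6, 128 -> 7 */
  while ((1u << log2_bytes) < line_size)
    log2_bytes++;

  printf ("CLIB_LOG2_CACHE_LINE_BYTES=%u (%u bytes)\n", log2_bytes, line_size);
  return 0;
}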
2018-06-04ip4: optimize ip4_localFlorin Coras2-253/+319
"sh run" says the number of clocks for my tcp based throughput test dropped from ~43 to ~23 Change-Id: I719439ba7fc079ad36be1432c5d7cf74e3b70d73 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-04Fix API trace dump for tapv2Milan Lenco1-3/+3
Change-Id: Ib092da61ba037ea30c6f38ea692ef9f1ca0cd8e7 Signed-off-by: Milan Lenco <milan.lenco@pantheon.tech>
2018-06-04Remove unused GRE buffer meta-dataNeale Ranns1-6/+0
Change-Id: Ia8ef019742c13b1149916d51796cad6f50687162 Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-06-04flow: add enabled hw format functionEyal Bari2-0/+25
Change-Id: Ide1f76e9207b6022d5258a119f8d59cca85651b5 Signed-off-by: Eyal Bari <ebari@cisco.com>
2018-06-04ip: save fib index for buffer in ip lookupFlorin Coras5-114/+65
Avoids recomputing the fib index in ip local for locally delivered packets and should incur no extra cost when forwarding packets. Change-Id: Id826ffa8206392087327f154337eabc8a801b4d7 Signed-off-by: Florin Coras <fcoras@cisco.com>
2018-06-04fix usage string missing argJerome Tollet1-1/+1
Change-Id: I9710e9ed6ceff6c0b2de0bcf77f355762df88b58 Signed-off-by: Jerome Tollet <jtollet@cisco.com>
2018-06-04Join the VAC read timeout threadNeale Ranns1-21/+25
Change-Id: I5bcbae276f8ac23718c5afc859da222508d07ad7 Signed-off-by: Neale Ranns <neale.ranns@cisco.com> Signed-off-by: Ole Troan <ot@cisco.com>
2018-06-04Enable Position Independent Executable for production VPPNeale Ranns1-1/+1
Change-Id: I0f81423b854be1dc456df696416e5f3747393208 Signed-off-by: Neale Ranns <nranns@cisco.com>
2018-06-03dpdk: buffer free optimizationsDamjan Marion1-76/+61
~5 clocks/packet improvement... Change-Id: I1a78fa24dcd1b3ab7f45e10b9ded50f79517114a Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-03dpdk: improve buffer alloc performanceDamjan Marion3-73/+56
This is a ~50% improvement in buffer alloc performance. For a 256 buffer allocation, it was ~10 clocks/buffer; now it is < 5 clocks. Change-Id: I97590e240a79a42bcab5eb26587fc2d11e6eb163 Signed-off-by: Damjan Marion <damarion@cisco.com>
2018-06-03Interface Tag: memset field in VOM, clear the tag in VPP on interface deleteNeale Ranns2-0/+2
Change-Id: Id97de732b5952d5d86202e7749c9e81cf8dbed87 Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
[Flattened source listing appended after the log, truncated at the top: the remainder of the vppinfra prefetching hash (pfhash) implementation (pfhash_init, pfhash_get, pfhash_set, pfhash_unset, pfhash_free).]