Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I7b2fd896dfa6df46916f46327975b95561809f00
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I1658a9c19d8eae4c9a42c0a111d4ad343b8eb8a4
|
|
Type: feature
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Ic373cd2c11272da539eb4b0db27227f36f2f9688
|
|
Type: refactor
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I63a44e11322f6fe27255820524e022f6d710b083
|
|
Add descriptions to clib_file_t template structures so that
sockets can be identified via the 'show unix file' cli command.
Type: fix
Change-Id: Ibf82d55aa6c7b1126bd252b76d0dc8b7076f5046
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
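A minimal sketch of the pattern, assuming the usual vppinfra clib_file_t
fields and clib_file_add() registration (the handler name is hypothetical):

    /* Give the registered file a description so 'show unix file'
       can identify the socket. */
    clib_file_t template = { 0 };

    template.read_function = my_socket_read_ready; /* hypothetical callback */
    template.file_descriptor = fd;
    template.description = format (0, "my-plugin listen socket (fd %d)", fd);

    clib_file_add (&file_main, &template);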
|
|
Type: fix
Shallow was the default, but probably by accident, as it depended on
module load order.
Full reassembly is the v4 behaviour.
Using proper types allows gdb to print enum names.
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: If157c5b83614c7adbd7a15a8227a68f8caf4e92c
Signed-off-by: Neale Ranns <nranns@cisco.com>
|
|
The current vhost multi-queue support has a hard limit of 8 queue pairs
due to the static vring array. This limit was raised in qemu, so VPP
should support more than 8 queue pairs as well.
Change the static vring allocation to dynamic. When the interface is
created, we allocate 8 queue pairs to begin with. We also keep track
of how many queue pairs the interface actually uses.
We reply to VHOST_USER_GET_QUEUE_NUM with 128 as our maximum number of
supported queue pairs. When qemu starts initializing queue pairs beyond
the first 8, we expand the vrings on demand.
Type: improvement
Signed-off-by: Steven Luong <sluong@cisco.com>
Change-Id: I4a02d987d52d1bbe601b00e71f650fe6ebfcc0d7
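A rough sketch of the on-demand growth described above, using VPP vec
macros; the constants and helper name are assumptions, not the actual
vhost-user code:

    #define VHOST_VRING_INIT_COUNT (2 * 8)    /* 8 queue pairs at create time */
    #define VHOST_VRING_MAX_COUNT  (2 * 128)  /* advertised via GET_QUEUE_NUM */

    static_always_inline void
    vhost_user_ensure_vring (vhost_user_intf_t *vui, u32 qid)
    {
      ASSERT (qid < VHOST_VRING_MAX_COUNT);
      /* grow the dynamically allocated vring vector (zero-filled)
         so that index 'qid' exists */
      if (qid >= vec_len (vui->vrings))
        vec_validate (vui->vrings, qid);
    }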
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Id8e77e8b2623be719fd43a95e181eaa5b7df2b6e
|
|
- Refactor make test code to be co-located with
the vpp feature source code
Type: test
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I17003925be06d1051f18f1c24ff081790a610c23
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I518e096fe13847759806ff62009e73fd8f7451b7
|
|
- Refactor make test code to be co-located with
the vpp feature source code.
Type: test
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I3ef69bc915d2217357a9e2b1afa1cfd6c363faa0
|
|
On the one hand, make sure options are terminated with NOPs to avoid
issues with clients that can't parse options that don't end on a u32
boundary. On the other, make sure the padding is RFC compliant: if
options end with EOL, the padding should be zeros. The current change
does not use EOL, so the padding is NOPs.
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I608056707ef9658ca90b9c095e84a0689d8000d7
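A minimal sketch of the padding rule, assuming a local helper (this is
not the tcp stack's actual option writer):

    #define TCP_OPT_KIND_NOP 1  /* option kind 1 = No-Operation */

    static inline u32
    tcp_options_pad_to_u32 (u8 *opts, u32 len)
    {
      /* pad with NOPs until the options end on a 4-byte boundary */
      while (len & 0x3)
        opts[len++] = TCP_OPT_KIND_NOP;
      return len;
    }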
|
|
The BIO interacts directly with the session, so it avoids an
intermediary memory BIO and, implicitly, the higher memory consumption
and extra memcpy that come with it.
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Ifb675cfd12df86396a7a738a6cd4d0882c69ad2f
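A sketch of how such a BIO is typically registered with the OpenSSL 1.1+
BIO_meth_* API; the vpp_session_bio_* callbacks are assumed names for
functions that read/write the session fifos directly:

    static BIO_METHOD *bio_vpp_session;

    static BIO *
    bio_new_vpp_session (void)
    {
      if (!bio_vpp_session)
        {
          bio_vpp_session =
            BIO_meth_new (BIO_get_new_index () | BIO_TYPE_SOURCE_SINK,
                          "vpp_session");
          BIO_meth_set_read (bio_vpp_session, vpp_session_bio_read);
          BIO_meth_set_write (bio_vpp_session, vpp_session_bio_write);
          BIO_meth_set_ctrl (bio_vpp_session, vpp_session_bio_ctrl);
        }
      return BIO_new (bio_vpp_session);
    }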
|
|
Type: fix
This change makes esp_move_icv() update pd->current_length if the first
buffer's length is updated.
When the ICV is split over two buffers, esp_move_icv() copies the ICV
to the last buffer and also updates the before-last buffer's
current_length. However, in esp_decrypt_post_crypto(), pd->current_length
is used to update the first buffer's length, but pd is not updated in
esp_move_icv(), so the total packet length ends up incorrect.
This only happens in tunnel mode when the ICV is split between the 1st
and 2nd buffers.
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Change-Id: Ic39d87454ec0d022c050775acb64c5c25ccf7f13
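Conceptually, the fix boils down to keeping the cached per-packet length
in sync after the ICV move; the variable names below are assumptions for
illustration only:

    /* ICV was copied to the last buffer and the first buffer trimmed;
       refresh pd->current_length, which esp_decrypt_post_crypto() later
       uses to restore the first buffer's length */
    if (pd->current_length != first_b->current_length)
      pd->current_length = first_b->current_length;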
|
|
Type: refactor
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I64527e9f5259e9984dc1e90023b367ee0fd8deeb
|
|
- Refactor make test code to be co-located with
the vpp feature source code
Type: test
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
Change-Id: I0529eb51b5a6bc2a5f1a49ee9d3320908ad1dba9
|
|
Add safeguards when tracing packets to avoid cases where clear trace
was issued while buffers were held in reassembly.
Type: fix
Change-Id: I1bdd1e629e8bc08ce63913fd3c4b2327e47dec04
Signed-off-by: Klement Sekera <ksekera@cisco.com>
|
|
Add lookup/get/set API calls to manage both PCAP and Trace
filtering Classifier tables.
The "lookup" call may be used to identify a Classifier table
within a chain of tables that matches a particular mask vector.
For efficiency, this call should be used to determine to which
table a match vector should be added.
The "get" calls return the first table within a chain (either
a PCAP set of tables or the Trace set). The "set" call may be
used to add a new table to one such chain. If the "sort_masks"
flag is set, the tables within the chain are ordered such that
the most-specific mask is first and the least-specific mask
is last. A call that "sets" a chain to ~0 will delete and free
all the tables within that chain.
The PCAP filters are per-interface, with "local0" (that is,
sw_if_index == 0) holding the system-wide PCAP filter.
The Classifier used a reference-counted "set" for each PCAP
or trace filter that it stored. The ref counts were not used,
and the vector of tables was only used temporarily to establish
a sorted order for tables based on masks. None of that
complexity was actually warranted, and where it was used,
the same could be achieved more simply.
Type: refactor
Signed-off-by: Jon Loeliger <jdl@netgate.com>
Change-Id: Icc56116cca91b91c631ca0628e814fb53f3677d2
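An illustrative (hypothetical) helper for the "sort_masks" ordering,
i.e. deciding which of two mask vectors is the more specific one:

    /* A mask with more bits set matches a narrower set of packets,
       so it should sit earlier in the table chain. */
    static int
    mask_is_more_specific (const u32 *mask_a, const u32 *mask_b, u32 n_u32)
    {
      u32 i, bits_a = 0, bits_b = 0;
      for (i = 0; i < n_u32; i++)
        {
          bits_a += count_set_bits (mask_a[i]);
          bits_b += count_set_bits (mask_b[i]);
        }
      return bits_a > bits_b;
    }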
|
|
Type: fix
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: Idb62154191e85651263be9ae116dd87b93e3a140
|
|
Type: refactor
Change-Id: I077110e1a422722e20aa546a6f3224c06ab0cde5
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: refactor
Change-Id: Ie67dc579e88132ddb1ee4a34cb69f96920101772
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
- reduces the number of generated instructions by 4x compared to the old code
- adds pool_foreach2, which is more friendly to clang-format
Type: improvement
Change-Id: I51e9c7fb09655c60d883987dadf5b2666c12b3f7
Signed-off-by: Damjan Marion <damarion@cisco.com>
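A usage sketch of the difference (the element and pool names are made
up, and the exact macro arguments are assumed from common VPP
conventions):

    /* classic form: the loop body is a macro argument */
    pool_foreach (s, sm->sessions, ({
      process_session (s);
    }));

    /* pool_foreach2: the body is an ordinary statement block,
       which clang-format can indent normally */
    pool_foreach2 (s, sm->sessions)
      {
        process_session (s);
      }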
|
|
Type: fix
Change-Id: I269214e3eae72e837f25ee61d714556d976d410f
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: improvement
ip4_rewrite_inline_with_gso() did vlib_prefetch_buffer_header() for all nodes.
However, it is not necessary for ip-rewrite; it is only needed by ip-midchain.
This patch makes ip4-rewrite prefetch fewer buffers to save cycles.
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Change-Id: Ib82dcb0eda4a2d1d7b8d664f2224d49b72aef50f
|
|
Type: fix
Change-Id: I7ca955882c0e263a9ace4b14021e51488564e411
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: style
Change-Id: Iee9463735c4d114a97e6167d717d1911c4477e70
Signed-off-by: Paul Vinciguerra <pvinci@vinciconsulting.com>
|
|
Type: refactor
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: Ic4ef53f49102d7b5061f1b6d3a1d0c8427b9d1f7
|
|
Type: refactor
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: Idf17c3c02fb77fcadf69a9164abd4da35289aaed
|
|
- vgb() (vlib_get_buffer)
- ph() (pool_header)
Type: feature
Signed-off-by: Christian Hopps <chopps@labn.net>
Change-Id: Ica954480a7809c918cf65b06a0333ebe246a6f3a
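These are plain C wrappers meant to be called from gdb; a sketch of the
kind of helper involved (illustrative, not necessarily the exact code):

    /* usable from gdb as: call vgb (buffer_index) */
    vlib_buffer_t *
    vgb (u32 buffer_index)
    {
      return vlib_get_buffer (vlib_get_main (), buffer_index);
    }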
|
|
Change-Id: I53011e089bfecb08483792029b534b09b9e33a10
Type: improvement
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
|
|
Type: feature
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I964afd9266645de5c87d49c58ce6b48c2c18f97f
|
|
Change-Id: I2bf6ba325975309183dba1e14e9519c944710752
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
Type: improvement
|
|
Change-Id: I162061f83a190723c3b4b5585717851c4b9ba255
Signed-off-by: Mohammed Hawari <mohammed@hawari.fr>
Type: fix
|
|
In ipfix_classify_table_add_del and ipfix_classify_table_details the
ip_version field has vl_api_address_family_t type. However, there is
no encode/decode for the field in the IPFIX API. Moreover, the IPFIX
code expects the field to contain raw 4 or 6 to indicate the IP
version.
With this change, encode/decode the ip_version field in the IPFIX
API. Also, stop converting transport_protocol between host and network
byte order because it's u8.
Type: fix
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
Change-Id: I4051756b8077b4367dd779cb555a34f74f6d7a9d
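A hypothetical sketch of the mapping the decode step needs to perform
(enum handling simplified; names other than ADDRESS_IP4/ADDRESS_IP6 are
assumptions):

    /* translate the API address-family enum into the raw 4/6 the
       ipfix code expects; transport_protocol is a u8 and needs no
       byte-order conversion at all */
    static u8
    ipfix_ip_version_decode (vl_api_address_family_t af)
    {
      return (af == ADDRESS_IP6) ? 6 : 4;
    }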
|
|
Type: feature
Use the FIB to provide SAS (source address selection), in so far as it
exists today:
- Use the glean adjacency as the record of the connected prefixes
  = there's a glean per-{interface, protocol, connected-prefix}
- Keep the glean up to date with whatever the receive host prefix is
  (since it can change)
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I0f3dd1edb1f3fc965af1c7c586709028eb9cdeac
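A usage sketch of the resulting source-address selection for IPv4 (the
helper name and signature are assumed):

    ip4_address_t src;

    /* pick a source address for reaching 'dst' out of 'sw_if_index',
       based on the connected/glean state kept by the FIB */
    if (fib_sas4_get (sw_if_index, &dst, &src))
      {
        /* use 'src' for the locally originated packet */
      }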
|
|
Type: fix
Change-Id: I2cc1cfd519e5b3502c59cf72e95e454f9122b8e5
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: fix
Changed vnet_crypto_async_reset_frame assert to expect also
ERROR state frames.
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Change-Id: I3abc29f3f9642027aee38a59a932e54c90da859d
|
|
Type: fix
To avoid race condition happening in async crypto engines,
async frame state and thread index set should happen before enqueue.
In addition as the enqueue handler already returns the enqueue status,
when an enqueue is failed, the async crypto engine shall not worry
about setting the async frame state but let the submit_open_frame function
to do just that.
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Reviewed-by: Fan Zhang <roy.fan.zhang@intel.com>
Change-Id: Ic1b0c94478b3cfd5fab98657218bbd70c46a220a
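A conceptual sketch of the required ordering (state and field names are
assumptions, not the exact vnet/crypto identifiers):

    /* publish the frame's state and owning thread *before* the engine
       sees it, so a worker picking it up observes consistent data */
    f->state = VNET_CRYPTO_FRAME_STATE_PENDING;
    f->enqueue_thread_index = vm->thread_index;

    if (enqueue_handler (vm, f) < 0)
      {
        /* enqueue failed: do not touch f->state here; submit_open_frame
           already has the return value and resets the frame */
      }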
|
|
Type: fix
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I1440c11fb9d962a05d877aebb4de364c86f9953e
|
|
Type: fix
Change-Id: I8ce1df5c97941fb645b33476db9cfc74f1395b15
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
Type: fix
The vector 'to_free', allocated on the heap, should be freed to avoid a memory leak.
Signed-off-by: barryxie <barryxie@tencent.com>
Change-Id: I539498b50a7f3e346c83b869fb400868961c233f
|
|
Type: fix
Change-Id: I1f1f0b6e8c5ef8bc9f2aca4bdc78e89fa951b841
Signed-off-by: Mohsin Kazmi <sykazmi@cisco.com>
|
|
A special case arises when an out2in packet needs to be handed off to
another worker thread. We are not able to determine which thread it
belongs to in the first pass through the nat handoff node. These
packets need to go through the out2in slowpath before we are able to
tell where to hand them off.
Type: fix
Ticket: VPP-1941
Change-Id: I7173bda970ce6a91d81f48fc72aa2457586a076f
Signed-off-by: Filip Varga <fivarga@cisco.com>
|
|
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I71b9d54311fcad808fcdaad0df2dca8c161d580e
|
|
Instead of enforcing a "strict" release of data, which relies on
frequent rescheduling of sessions, allow some pacer coalescing, i.e.,
short bursts that can minimize load on the scheduler/session layer and
potentially leverage TSO.
Type: improvement
Signed-off-by: Florin Coras <fcoras@cisco.com>
Change-Id: I67e38e5b8dc335bd214113b70c68c27ae92bd6da
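A toy sketch of the idea, not the session-layer pacer itself (all names
are made up): the bucket may accumulate up to a small burst so that one
scheduling pass can release several segments back to back.

    static u64
    pacer_bytes_allowed (pacer_t *p, u64 now)
    {
      u64 delta = now - p->last_update;

      /* cap the refill at 'burst_bytes' instead of releasing strictly
         what accrued since the last visit */
      p->bucket = clib_min (p->bucket + delta * p->bytes_per_tick,
                            p->burst_bytes);
      p->last_update = now;
      return p->bucket;
    }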
|
|
Type: improvement
This patch changes the branch prediction hint for the comparison
between the SA owner thread index and the current thread index.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Change-Id: I48de0bb2c57dbb09cfab63925bf8dc96613d8bcf
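A sketch of the kind of change this describes, assuming the common case
is that the current thread already owns the SA:

    /* handoff is the unlikely path, so hint the compiler accordingly */
    if (PREDICT_FALSE (sa->thread_index != vm->thread_index))
      {
        /* hand the packet off to the SA's owning thread */
      }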
|
|
Type: improvement
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I318424ffa569d9a09187066d6ba15576757c1cf6
|
|
Type: improvement
also clean up GRE includes across the code base.
Signed-off-by: Neale Ranns <nranns@cisco.com>
Change-Id: I90928b0da3927b7ca1a23683aa80d4b53bbf63fd
|
|
Type: improvement
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: I487e698555545fce85d02d55deaaf7bb0007e388
|
|
Type: improvement
Signed-off-by: Neale Ranns <neale.ranns@cisco.com>
Change-Id: Iee04af801814b6360b045cf7dc8bcad6f517229e
|