Type: improvement
Previously, the design of multiple sw crypto scheduler queues per
core caused an uneven frame processing rate across async op IDs:
the lower the op ID, the more likely its frames were processed
first. For example, when an RX core feeds both encryption and
decryption jobs of the same crypto algorithm to the queues at a
high rate while the crypto cores do not have enough cycles to
process them all, the jobs in the decryption queue are less
likely to be processed, causing packet drops.
To improve the situation, this patch makes every core own only
two queues: one for encrypt operations and one for decrypt
operations. The active queue is switched either after each core
has been checked or after a frame to process has been found.
Crypto jobs for all algorithms are pushed to these two queues
and are treated evenly.
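Illustrative sketch of the per-core two-queue scheme; the type and
function names below are hypothetical and simplified, not the
engine's actual API:

  #include <stdint.h>

  /* Hypothetical, simplified sketch: each core owns one encrypt queue
   * and one decrypt queue and alternates between them so neither
   * operation type is starved. */
  enum { QUEUE_ENCRYPT = 0, QUEUE_DECRYPT, QUEUE_N };

  typedef struct
  {
    void *jobs[256];          /* pending frames, any algorithm */
    uint32_t head, tail;      /* free-running ring counters */
  } crypto_queue_t;

  typedef struct
  {
    crypto_queue_t queues[QUEUE_N];
    uint32_t current;         /* queue to poll next: encrypt or decrypt */
  } crypto_core_ctx_t;

  /* Poll the current queue, then switch to the other one whether or
   * not a frame was found, so encrypt and decrypt jobs are drained at
   * an even rate. */
  static void *
  crypto_core_next_frame (crypto_core_ctx_t *c)
  {
    crypto_queue_t *q = &c->queues[c->current];
    void *frame = 0;

    if (q->head != q->tail)
      frame = q->jobs[q->head++ & 255];

    c->current ^= 1;
    return frame;
  }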
In addition, the crypto async infra now uses a unified dequeue
handler, one per engine. Only the active engine has its dequeue
handler registered in crypto main.
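The unified dequeue handler can be pictured roughly as below; the
handler type, the crypto_main structure and the registration
function are illustrative placeholders, not the real crypto-main
API:

  /* Hypothetical: a single dequeue handler slot in crypto main, filled
   * only by the currently active async engine. */
  typedef int (*dequeue_handler_t) (void);

  typedef struct
  {
    dequeue_handler_t dequeue_handler;  /* one unified handler per engine */
  } crypto_main_t;

  static crypto_main_t crypto_main;

  /* When an engine becomes active, its unified handler replaces the
   * previous one; inactive engines are never polled. */
  static void
  register_active_engine_dequeue (dequeue_handler_t h)
  {
    crypto_main.dequeue_handler = h;
  }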
Signed-off-by: DariuszX Kazimierski <dariuszx.kazimierski@intel.com>
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Jakub Wysocki <jakubx.wysocki@intel.com>
Change-Id: I517ee8e31633980de5e0dd4b05e1d5db5dea760e
|
|
vlib_buffer_enqueue_to_next() requires the 'buffers' and 'nexts'
arrays to allow an overflow of up to 63 elements.
- add a helper to compute the minimum array size (sketched below)
- fix occurrences in session and async crypto
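A minimal sketch of such a sizing helper, assuming the overflow
comes from processing in chunks of 64 elements; the macro name is
illustrative, not necessarily the helper added here:

  #include <stdint.h>

  /* Round an element count up to a multiple of 64 so the trailing
   * headroom vlib_buffer_enqueue_to_next() may touch is always there. */
  #define BUFFER_ENQUEUE_MIN_SIZE(n) (((n) + 63) & ~63u)

  /* Example: arrays dimensioned for a frame of up to 500 buffers. */
  #define MAX_FRAME 500
  static uint32_t buffers[BUFFER_ENQUEUE_MIN_SIZE (MAX_FRAME)];
  static uint16_t nexts[BUFFER_ENQUEUE_MIN_SIZE (MAX_FRAME)];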
Type: fix
Change-Id: If8d7eebc5bf9beba71ba194aec0f79b8eb6d5843
Signed-off-by: Benoît Ganne <bganne@cisco.com>
|
|
Type: improvement
Change-Id: If3da7d4338470912f37ff1794620418d928fb77f
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: improvement
Also:
- store state as an enum so my GDB life is easier
- fix typo: s/indice/indices/
Signed-off-by: Neale Ranns <neale@graphiant.com>
Change-Id: I3320f5ef1ccd7d042071ef336488a41adfad7463
|
|
Type: refactor
Change-Id: I077110e1a422722e20aa546a6f3224c06ab0cde5
Signed-off-by: Damjan Marion <damarion@cisco.com>
|
|
Type: feature
This patch adds a new sw_scheduler async crypto engine.
The engine transforms async frames into sync crypto ops and
delegates them to the active sync engines. With this patch it
is possible to increase single-worker crypto throughput by
offloading the crypto workload to multiple workers.
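To make the frame-to-ops conversion concrete, a deliberately
simplified, hypothetical sketch follows; the structures and
process_sync_ops() are illustrative, not the engine's real data
path:

  #include <stdint.h>

  /* Hypothetical async frame: a batch of elements sharing one op type. */
  typedef struct
  {
    uint8_t *src, *dst;
    uint32_t len;
    uint32_t key_index;
  } async_elt_t;

  typedef struct
  {
    async_elt_t elts[64];
    uint32_t n_elts;
  } async_frame_t;

  /* Hypothetical sync op handed to the active sync crypto engine. */
  typedef struct
  {
    uint8_t *src, *dst;
    uint32_t len;
    uint32_t key_index;
  } sync_op_t;

  /* Assumed to exist: runs the ops on whatever sync engine is active. */
  extern void process_sync_ops (sync_op_t *ops, uint32_t n_ops);

  /* Convert a dequeued async frame into sync ops and run them on the
   * current worker, mirroring the scheduler's delegation step. */
  static void
  sw_scheduler_convert_and_run (async_frame_t *f)
  {
    sync_op_t ops[64];

    for (uint32_t i = 0; i < f->n_elts; i++)
      {
        ops[i].src = f->elts[i].src;
        ops[i].dst = f->elts[i].dst;
        ops[i].len = f->elts[i].len;
        ops[i].key_index = f->elts[i].key_index;
      }

    process_sync_ops (ops, f->n_elts);
  }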
By default all workers in the system take part in processing the
crypto workload. However, a worker's available cycles are limited.
To free up cycles on a worker for other work (e.g. the worker core
that handles RX/TX and IPsec stack processing), a CLI command is
added to remove a worker from (or add it back to) the heavy crypto
workload, leaving the crypto processing to the other workers.
The command is:
- set sw_scheduler worker <idx> crypto <on|off>
The patch also adds a new interrupt mode to the async crypto
dispatch node. In this mode the node is signalled when new frames
are enqueued, as opposed to polling mode, which calls the dispatch
node continuously (a rough illustration follows the command list
below).
New cli commands:
- set crypto async dispatch [polling|interrupt]
- show crypto async status (displays mode and nodes' states)
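Rough, hypothetical illustration of the two dispatch modes; the
flag and function names are illustrative, and in VPP the
signalling actually goes through the graph node interrupt
mechanism:

  #include <stdbool.h>

  /* Hypothetical state shared by the enqueue path and the dispatch node. */
  static bool interrupt_mode;       /* set crypto async dispatch interrupt */
  static bool interrupt_pending;

  extern void crypto_dispatch_process_frames (void);  /* assumed to exist */

  /* Producer side: in interrupt mode, enqueueing a frame signals the node. */
  static void
  on_frame_enqueued (void)
  {
    if (interrupt_mode)
      interrupt_pending = true;
  }

  /* Dispatch side: in polling mode the node runs every main-loop
   * iteration; in interrupt mode only when an enqueue signalled it. */
  static void
  maybe_run_dispatch_node (void)
  {
    if (!interrupt_mode || interrupt_pending)
      {
        interrupt_pending = false;
        crypto_dispatch_process_frames ();
      }
  }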
Signed-off-by: PiotrX Kleski <piotrx.kleski@intel.com>
Signed-off-by: DariuszX Kazimierski <dariuszx.kazimierski@intel.com>
Reviewed-by: Fan Zhang <roy.fan.zhang@intel.com>
Change-Id: I332655f347bb9e3bc9c64166e86e393e911bdb39
|
|
Type: feature
Signed-off-by: Damjan Marion <damarion@cisco.com>
Signed-off-by: Filip Tehlar <ftehlar@cisco.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Dariusz Kazimierski <dariuszx.kazimierski@intel.com>
Signed-off-by: Piotr Kleski <piotrx.kleski@intel.com>
Change-Id: I4c3fcccf55c36842b7b48aed260fef2802b5c54b
|