| author | Pierre Pfister <ppfister@cisco.com> | 2018-01-12 09:41:16 +0100 |
|---|---|---|
| committer | Dave Barach <openvpp@barachs.net> | 2018-02-01 12:48:05 +0000 |
| commit | 953f551e3629a3b96c678a35f5e6f507ea67cd84 (patch) | |
| tree | e615a27746f273c5646d4e1a0065627a31889a44 | /src/plugins/ila.am |
| parent | be9b41ba3887452c864d1423ea03ed4ee2b9153c (diff) | |
Add flowhash hash table to vppinfra
This hash table is intended to provide an alternative to the
widely used bihash table in places where either:
- Hash entry timeout is required
- The hash table data does not fit in CPU cache
Although the bihash table is very fast, each lookup requires
accessing two cache lines in a serialized fashion. That works fine
while the hash table fits in cache, but hits a wall once it does not.
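To make the serialization concrete, here is a minimal hypothetical
sketch (illustrative names only, not the actual bihash API): the
bucket stores only a reference to a separate key-value page, so the
second load cannot be issued until the first one completes.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { uint64_t key; uint64_t value; } kv_t;
typedef struct { kv_t *kvp; } bucket_t; /* bucket -> separate kv page */

static inline int
two_level_lookup (bucket_t *buckets, uint32_t n_buckets,
                  uint64_t hash, uint64_t key, uint64_t *value)
{
  bucket_t *b = &buckets[hash & (n_buckets - 1)]; /* load #1: bucket line */
  kv_t *kv = b->kvp;
  if (kv != NULL && kv->key == key)               /* load #2: kv line,    */
    {                                             /* depends on load #1   */
      *value = kv->value;
      return 0;
    }
  return -1; /* miss (collision chains omitted) */
}
```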
The 'flowhash' table uses a simplified design (at the cost of
less effective bucket auto-scaling) where each access requires
only a single memory lookup (in the absence of collisions). The
hash table also uses a reduced number of registers.
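A minimal sketch of that idea, with illustrative names (the real
layout lives in the .h file mentioned below): key, timeout state and
value share a single cache-line-sized bucket, so an uncontended
lookup touches exactly one line.

```c
#include <stdint.h>

typedef struct
{
  uint64_t key[2];   /* e.g. a folded 5-tuple */
  uint32_t lifetime; /* supports per-entry timeout */
  uint8_t value[32]; /* 32B of user data */
  /* padding/alignment keeps the whole bucket on one 64B cache line */
} __attribute__ ((aligned (64))) flow_bucket_t;

static inline flow_bucket_t *
flow_lookup (flow_bucket_t *table, uint32_t n_buckets, uint64_t hash)
{
  /* single load: the line fetched here already contains both the
     key (for the final compare) and the value */
  return &table[hash & (n_buckets - 1)];
}
```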
In practice, a VPP node implementing a stateful feature would
typically:
- prefetch buffer metadata (in-cache)
- prefetch packet header (in-cache)
- compute hash & prefetch hash bucket (possibly in RAM)
- read/write key and value from bucket
Using this hash table, it is possible to pipeline accesses in a
way that does not exhaust the CPU's line fill buffers, even when
the requested value is located in RAM (i.e. not in cache).
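As a hedged sketch of that pipelining (illustrative names, not
actual VPP node code, and assuming the flow_bucket_t layout from the
previous sketch): the bucket prefetch for the next packet is issued
while the current packet is processed, so each iteration keeps at
most one bucket miss in flight.

```c
#include <stdint.h>

/* illustrative declarations only; packet_t, compute_flow_hash and
   process_flow stand in for real buffer/node code */
typedef struct packet packet_t;
uint64_t compute_flow_hash (packet_t *p);
void process_flow (packet_t *p, flow_bucket_t *bkt);

static void
run_node (packet_t **b, uint32_t n_left,
          flow_bucket_t *table, uint32_t n_buckets)
{
  uint64_t hash = 0;

  if (n_left > 0)
    {
      hash = compute_flow_hash (b[0]);
      __builtin_prefetch (&table[hash & (n_buckets - 1)]);
    }

  while (n_left > 0)
    {
      uint64_t next_hash = 0;
      if (n_left > 1)
        {
          /* start fetching the next packet's bucket now, so the
             (possibly RAM) access overlaps with the work below */
          next_hash = compute_flow_hash (b[1]);
          __builtin_prefetch (&table[next_hash & (n_buckets - 1)]);
        }

      /* this packet's bucket line was prefetched one iteration ago */
      process_flow (b[0], &table[hash & (n_buckets - 1)]);

      hash = next_hash;
      b += 1;
      n_left -= 1;
    }
}
```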
Measurements showed it was possible to scale to tens of millions
of flows (with full 5-tuple matching and a 32B value, i.e. one
cache line per flow) with no performance degradation as the hash
table grows past the point where it fits in cache.
I have used this table in a couple of non-open-sourced projects,
but I think it might be useful to lb, nat, and possibly other VPP
subsystems.
More information in the .h file.
Change-Id: I2b13dde0eabd868b75da1cedbfca0bf74d705102
Signed-off-by: Pierre Pfister <ppfister@cisco.com>