/*
 * Copyright (c) 2015 Cisco and/or its affiliates.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <stdint.h>

#include <vlib/vlib.h>
#include <vnet/vnet.h>
#include <vnet/policer/policer.h>
#include <vnet/policer/police_inlines.h>
#include <vnet/ip/ip.h>
#include <vnet/classify/policer_classify.h>
#include <vnet/classify/vnet_classify.h>
#include <vnet/l2/feat_bitmap.h>
#include <vnet/l2/l2_input.h>


/* Dispatch functions meant to be instantiated elsewhere */

typedef struct
{
  u32 next_index;
  u32 sw_if_index;
  u32 policer_index;
} vnet_policer_trace_t;

/* packet trace format function */
static u8 *
format_policer_trace (u8 * s, va_list * args)
{
  CLIB_UNUSED (vlib_main_t * vm) = va_arg (*args, vlib_main_t *);
  CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
  vnet_policer_trace_t *t = va_arg (*args, vnet_policer_trace_t *);

  s = format (s, "VNET_POLICER: sw_if_index %d policer_index %d next %d",
	      t->sw_if_index, t->policer_index, t->next_index);
  return s;
}

#define foreach_vnet_policer_error              \
_(TRANSMIT, "Packets Transmitted")              \
_(DROP, "Packets Dropped")

typedef enum
{
#define _(sym,str) VNET_POLICER_ERROR_##sym,
  foreach_vnet_policer_error
#undef _
    VNET_POLICER_N_ERROR,
} vnet_policer_error_t;

static char *vnet_policer_error_strings[] = {
#define _(sym,string) string,
  foreach_vnet_policer_error
#undef _
};

/*
 * Police packets against the policer configured on their RX interface.
 * Conforming (or mark-and-transmit) packets continue along the feature
 * arc; packets hitting a drop action are sent to error-drop.
 */
static inline uword
vnet_policer_inline (vlib_main_t *vm, vlib_node_runtime_t *node,
		     vlib_frame_t *frame)
{
  u32 n_left_from, *from, *to_next;
  vnet_policer_next_t next_index;
  vnet_policer_main_t *pm = &vnet_policer_main;
  u64 time_in_policer_periods;
  u32 transmitted = 0;

  time_in_policer_periods =
    clib_cpu_time_now () >> POLICER_TICKS_PER_PERIOD_SHIFT;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;
  next_index = node->cached_next_index;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from >= 4 && n_left_to_next >= 2)
	{
	  u32 bi0, bi1;
	  vlib_buffer_t *b0, *b1;
	  u32 next0, next1;
	  u32 sw_if_index0, sw_if_index1;
	  u32 pi0 = 0, pi1 = 0;
	  u8 act0, act1;

	  /* Prefetch next iteration. */
	  {
	    vlib_buffer_t *b2, *b3;

	    b2 = vlib_get_buffer (vm, from[2]);
	    b3 = vlib_get_buffer (vm, from[3]);

	    vlib_prefetch_buffer_header (b2, LOAD);
	    vlib_prefetch_buffer_header (b3, LOAD);
	  }

	  /* speculatively enqueue b0 and b1 to the current next frame */
	  to_next[0] = bi0 = from[0];
	  to_next[1] = bi1 = from[1];
	  from += 2;
	  to_next += 2;
	  n_left_from -= 2;
	  n_left_to_next -= 2;

	  b0 = vlib_get_buffer (vm, bi0);
	  b1 = vlib_get_buffer (vm, bi1);

	  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
	  sw_if_index1 = vnet_buffer (b1)->sw_if_index[VLIB_RX];

	  pi0 = pm->policer_index_by_sw_if_index[sw_if_index0];
	  pi1 = pm->policer_index_by_sw_if_index[sw_if_index1];

	  act0 = vnet_policer_police (vm, b0, pi0, time_in_policer_periods,
				      POLICE_CONFORM /* no chaining */ );

	  act1 = vnet_policer_police (vm, b1, pi1, time_in_policer_periods,
				      POLICE_CONFORM /* no chaining */ );

	  if (PREDICT_FALSE (act0 == QOS_ACTION_DROP)) /* drop action */
	    {
	      next0 = VNET_POLICER_NEXT_DROP;
	      b0->error = node->errors[VNET_POLICER_ERROR_DROP];
	    }
	  else			/* transmit or mark-and-transmit action */
	    {
	      transmitted++;
	      vnet_feature_next (&next0, b0);
	    }

	  if (PREDICT_FALSE (act1 == QOS_ACTION_DROP)) /* drop action */
	    {
	      next1 = VNET_POLICER_NEXT_DROP;
	      b1->error = node->errors[VNET_POLICER_ERROR_DROP];
	    }
	  else			/* transmit or mark-and-transmit action */
	    {
	      transmitted++;
	      vnet_feature_next (&next1, b1);
	    }

	  if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)))
	    {
	      if (b0->flags & VLIB_BUFFER_IS_TRACED)
		{
		  vnet_policer_trace_t *t =
		    vlib_add_trace (vm, node, b0, sizeof (*t));
		  t->sw_if_index = sw_if_index0;
		  t->next_index = next0;
		  t->policer_index = pi0;
		}
	      if (b1->flags & VLIB_BUFFER_IS_TRACED)
		{
		  vnet_policer_trace_t *t =
		    vlib_add_trace (vm, node, b1, sizeof (*t));
		  t->sw_if_index = sw_if_index1;
		  t->next_index = next1;
		  t->policer_index = pi1;
		}
	    }

	  /* verify speculative enqueues, maybe switch current next frame */
	  vlib_validate_buffer_enqueue_x2 (vm, node, next_index,
					   to_next, n_left_to_next,
					   bi0, bi1, next0, next1);
	}

      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0;
	  vlib_buffer_t *b0;
	  u32 next0;
	  u32 sw_if_index0;
	  u32 pi0 = 0;
	  u8 act0;

	  bi0 = from[0];
	  to_next[0] = bi0;
	  from += 1;
	  to_next += 1;
	  n_left_from -= 1;
	  n_left_to_next -= 1;

	  b0 = vlib_get_buffer (vm, bi0);

	  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];

	  pi0 = pm->policer_index_by_sw_if_index[sw_if_index0];

	  act0 = vnet_policer_police (vm, b0, pi0, time_in_policer_periods,
				      POLICE_CONFORM /* no chaining */ );

	  if (PREDICT_FALSE (act0 == QOS_ACTION_DROP)) /* drop action */
	    {
	      next0 = VNET_POLICER_NEXT_DROP;
	      b0->error = node->errors[VNET_POLICER_ERROR_DROP];
	    }
	  else			/* transmit or mark-and-transmit action */
	    {
	      transmitted++;
	      vnet_feature_next (&next0, b0);
	    }

	  if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			     && (b0->flags & VLIB_BUFFER_IS_TRACED)))
	    {
	      vnet_policer_trace_t *t =
		vlib_add_trace (vm, node, b0, sizeof (*t));
	      t->sw_if_index = sw_if_index0;
	      t->next_index = next0;
	      t->policer_index = pi0;
	    }

	  /* verify speculative enqueue, maybe switch current next frame */
	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
					   to_next, n_left_to_next,
					   bi0, next0);
	}

      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }

  vlib_node_increment_counter (vm, node->node_index,
			       VNET_POLICER_ERROR_TRANSMIT, transmitted);
  return frame->n_vectors;
}

VLIB_NODE_FN (policer_input_node)
(vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
{
  return vnet_policer_inline (vm, node, frame);
}

VLIB_REGISTER_NODE (policer_input_node) = {
  .name = "policer-input",
  .vector_size = sizeof (u32),
  .format_trace = format_policer_trace,
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_errors = ARRAY_LEN(vnet_policer_error_strings),
  .error_strings = vnet_policer_error_strings,
  .n_next_nodes = VNET_POLICER_N_NEXT,
  .next_nodes = {
		 [VNET_POLICER_NEXT_DROP] = "error-drop",
		 },
};

VNET_FEATURE_INIT (policer_input_node, static) = {
  .arc_name = "device-input",
  .node_name = "policer-input",
  .runs_before = VNET_FEATURES ("ethernet-input"),
};

typedef struct
{
  u32 sw_if_index;
  u32 next_index;
  u32 table_index;
  u32 offset;
  u32 policer_index;
} policer_classify_trace_t;

static u8 *
format_policer_classify_trace (u8 * s, va_list * args)
{
  CLIB_UNUSED (vlib_main_t * vm) = va_arg (*args, vlib_main_t *);
  CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
  policer_classify_trace_t *t = va_arg (*args, policer_classify_trace_t *);

  s = format (s, "POLICER_CLASSIFY: sw_if_index %d next %d table %d offset %d"
	      " policer_index %d",
	      t->sw_if_index, t->next_index, t->table_index, t->offset,
	      t->policer_index);
  return s;
}

#define foreach_policer_classify_error                 \
_(MISS, "Policer classify misses")                     \
_(HIT, "Policer classify hits")                        \
_(CHAIN_HIT, "Policer classify hits after chain walk") \
_(DROP, "Policer classify action drop")

typedef enum
{
#define _(sym,str) POLICER_CLASSIFY_ERROR_##sym,
  foreach_policer_classify_error
#undef _
    POLICER_CLASSIFY_N_ERROR,
} policer_classify_error_t;

static char *policer_classify_error_strings[] = {
#define _(sym,string) string,
  foreach_policer_classify_error
#undef _
};

/*
 * Classify packets against the per-interface classify table chain for
 * table type 'tid' (L2, IP4 or IP6), then police matching packets using
 * the policer index stored in the matching entry. Runs in two passes:
 * first precompute and prefetch session hashes, then look up entries,
 * walking chained tables on a miss.
 */
static inline uword
policer_classify_inline (vlib_main_t * vm,
			 vlib_node_runtime_t * node,
			 vlib_frame_t * frame,
			 policer_classify_table_id_t tid)
{
  u32 n_left_from, *from, *to_next;
  policer_classify_next_index_t next_index;
  policer_classify_main_t *pcm = &policer_classify_main;
  vnet_classify_main_t *vcm = pcm->vnet_classify_main;
  f64 now = vlib_time_now (vm);
  u32 hits = 0;
  u32 misses = 0;
  u32 chain_hits = 0;
  u32 n_next_nodes;
  u64 time_in_policer_periods;

  time_in_policer_periods =
    clib_cpu_time_now () >> POLICER_TICKS_PER_PERIOD_SHIFT;

  n_next_nodes = node->n_next_nodes;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;

  /* First pass: compute hashes */
  while (n_left_from > 2)
    {
      vlib_buffer_t *b0, *b1;
      u32 bi0, bi1;
      u8 *h0, *h1;
      u32 sw_if_index0, sw_if_index1;
      u32 table_index0, table_index1;
      vnet_classify_table_t *t0, *t1;

      /* Prefetch next iteration */
      {
	vlib_buffer_t *p1, *p2;

	p1 = vlib_get_buffer (vm, from[1]);
	p2 = vlib_get_buffer (vm, from[2]);

	vlib_prefetch_buffer_header (p1, STORE);
	CLIB_PREFETCH (p1->data, CLIB_CACHE_LINE_BYTES, STORE);
	vlib_prefetch_buffer_header (p2, STORE);
	CLIB_PREFETCH (p2->data, CLIB_CACHE_LINE_BYTES, STORE);
      }

      bi0 = from[0];
      b0 = vlib_get_buffer (vm, bi0);
      h0 = b0->data;

      bi1 = from[1];
      b1 = vlib_get_buffer (vm, bi1);
      h1 = b1->data;

      sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
      table_index0 =
	pcm->classify_table_index_by_sw_if_index[tid][sw_if_index0];

      sw_if_index1 = vnet_buffer (b1)->sw_if_index[VLIB_RX];
      table_index1 =
	pcm->classify_table_index_by_sw_if_index[tid][sw_if_index1];

      t0 = pool_elt_at_index (vcm->tables, table_index0);

      t1 = pool_elt_at_index (vcm->tables, table_index1);

      vnet_buffer (b0)->l2_classify.hash =
	vnet_classify_hash_packet (t0, (u8 *) h0);

      vnet_classify_prefetch_bucket (t0, vnet_buffer (b0)->l2_classify.hash);

      vnet_buffer (b1)->l2_classify.hash =
	vnet_classify_hash_packet (t1, (u8 *) h1);

      vnet_classify_prefetch_bucket (t1, vnet_buffer (b1)->l2_classify.hash);

      vnet_buffer (b0)->l2_classify.table_index = table_index0;

      vnet_buffer (b1)->l2_classify.table_index = table_index1;

      from += 2;
      n_left_from -= 2;
    }

  while (n_left_from > 0)
    {
      vlib_buffer_t *b0;
      u32 bi0;
      u8 *h0;
      u32 sw_if_index0;
      u32 table_index0;
      vnet_classify_table_t *t0;

      bi0 = from[0];
      b0 = vlib_get_buffer (vm, bi0);
      h0 = b0->data;

      sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
      table_index0 =
	pcm->classify_table_index_by_sw_if_index[tid][sw_if_index0];

      t0 = pool_elt_at_index (vcm->tables, table_index0);
      vnet_buffer (b0)->l2_classify.hash =
	vnet_classify_hash_packet (t0, (u8 *) h0);

      vnet_buffer (b0)->l2_classify.table_index = table_index0;
      vnet_classify_prefetch_bucket (t0, vnet_buffer (b0)->l2_classify.hash);

      from++;
      n_left_from--;
    }

  next_index = node->cached_next_index;
  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      /* Not enough load/store slots to dual loop... */
      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0;
	  vlib_buffer_t *b0;
	  u32 next0 = POLICER_CLASSIFY_NEXT_INDEX_DROP;
	  u32 table_index0;
	  vnet_classify_table_t *t0;
	  vnet_classify_entry_t *e0;
	  u64 hash0;
	  u8 *h0;
	  u8 act0;

	  /* Stride 3 seems to work best */
	  if (PREDICT_TRUE (n_left_from > 3))
	    {
	      vlib_buffer_t *p1 = vlib_get_buffer (vm, from[3]);
	      vnet_classify_table_t *tp1;
	      u32 table_index1;
	      u64 phash1;

	      table_index1 = vnet_buffer (p1)->l2_classify.table_index;

	      if (PREDICT_TRUE (table_index1 != ~0))
		{
		  tp1 = pool_elt_at_index (vcm->tables, table_index1);
		  phash1 = vnet_buffer (p1)->l2_classify.hash;
		  vnet_classify_prefetch_entry (tp1, phash1);
		}
	    }

	  /* Speculatively enqueue b0 to the current next frame */
	  bi0 = from[0];
	  to_next[0] = bi0;
	  from += 1;
	  to_next += 1;
	  n_left_from -= 1;
	  n_left_to_next -= 1;

	  b0 = vlib_get_buffer (vm, bi0);
	  h0 = b0->data;
	  table_index0 = vnet_buffer (b0)->l2_classify.table_index;
	  e0 = 0;
	  t0 = 0;

	  if (tid == POLICER_CLASSIFY_TABLE_L2)
	    {
	      /* Feature bitmap update and determine the next node */
	      next0 = vnet_l2_feature_next (b0, pcm->feat_next_node_index,
					    L2INPUT_FEAT_POLICER_CLAS);
	    }
	  else
	    vnet_get_config_data (pcm->vnet_config_main[tid],
				  &b0->current_config_index, &next0,
				  /* # bytes of config data */ 0);

	  vnet_buffer (b0)->l2_classify.opaque_index = ~0;

	  if (PREDICT_TRUE (table_index0 != ~0))
	    {
	      hash0 = vnet_buffer (b0)->l2_classify.hash;
	      t0 = pool_elt_at_index (vcm->tables, table_index0);
	      e0 = vnet_classify_find_entry (t0, (u8 *) h0, hash0, now);

	      if (e0)
		{
		  act0 = vnet_policer_police (vm,
					      b0,
					      e0->next_index,
					      time_in_policer_periods,
					      e0->opaque_index);
		  if (PREDICT_FALSE (act0 == QOS_ACTION_DROP))
		    {
		      next0 = POLICER_CLASSIFY_NEXT_INDEX_DROP;
		      b0->error = node->errors[POLICER_CLASSIFY_ERROR_DROP];
		    }
		  hits++;
		}
	      else
		{
		  while (1)
		    {
		      if (PREDICT_TRUE (t0->next_table_index != ~0))
			{
			  t0 = pool_elt_at_index (vcm->tables,
						  t0->next_table_index);
			}
		      else
			{
			  next0 = (t0->miss_next_index < n_next_nodes) ?
			    t0->miss_next_index : next0;
			  misses++;
			  break;
			}

		      hash0 = vnet_classify_hash_packet (t0, (u8 *) h0);
		      e0 =
			vnet_classify_find_entry (t0, (u8 *) h0, hash0, now);
		      if (e0)
			{
			  act0 = vnet_policer_police (vm,
						      b0,
						      e0->next_index,
						      time_in_policer_periods,
						      e0->opaque_index);
			  if (PREDICT_FALSE (act0 == QOS_ACTION_DROP))
			    {
			      next0 = POLICER_CLASSIFY_NEXT_INDEX_DROP;
			      b0->error =
				node->errors[POLICER_CLASSIFY_ERROR_DROP];
			    }
			  hits++;
			  chain_hits++;
			  break;
			}
		    }
		}
	    }
	  if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
			     && (b0->flags & VLIB_BUFFER_IS_TRACED)))
	    {
	      policer_classify_trace_t *t =
		vlib_add_trace (vm, node, b0, sizeof (*t));
	      t->sw_if_index = vnet_buffer (b0)->sw_if_index[VLIB_RX];
	      t->next_index = next0;
	      t->table_index = t0 ? t0 - vcm->tables : ~0;
	      t->offset = (e0 && t0) ? vnet_classify_get_offset (t0, e0) : ~0;
	      t->policer_index = e0 ? e0->next_index : ~0;
	    }

	  /* Verify speculative enqueue, maybe switch current next frame */
	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
					   n_left_to_next, bi0, next0);
	}

      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }

  vlib_node_increment_counter (vm, node->node_index,
			       POLICER_CLASSIFY_ERROR_MISS, misses);
  vlib_node_increment_counter (vm, node->node_index,
			       POLICER_CLASSIFY_ERROR_HIT, hits);
  vlib_node_increment_counter (vm, node->node_index,
			       POLICER_CLASSIFY_ERROR_CHAIN_HIT, chain_hits);

  return frame->n_vectors;
}

VLIB_NODE_FN (ip4_policer_classify_node) (vlib_main_t * vm,
					  vlib_node_runtime_t * node,
					  vlib_frame_t * frame)
{
  return policer_classify_inline (vm, node, frame,
				  POLICER_CLASSIFY_TABLE_IP4);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (ip4_policer_classify_node) = {
  .name = "ip4-policer-classify",
  .vector_size = sizeof (u32),
  .format_trace = format_policer_classify_trace,
  .n_errors = ARRAY_LEN(policer_classify_error_strings),
  .error_strings = policer_classify_error_strings,
  .n_next_nodes = POLICER_CLASSIFY_NEXT_INDEX_N_NEXT,
  .next_nodes = {
    [POLICER_CLASSIFY_NEXT_INDEX_DROP] = "error-drop",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (ip6_policer_classify_node) (vlib_main_t * vm,
					  vlib_node_runtime_t * node,
					  vlib_frame_t * frame)
{
  return policer_classify_inline (vm, node, frame,
				  POLICER_CLASSIFY_TABLE_IP6);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (ip6_policer_classify_node) = {
  .name = "ip6-policer-classify",
  .vector_size = sizeof (u32),
  .format_trace = format_policer_classify_trace,
  .n_errors = ARRAY_LEN(policer_classify_error_strings),
  .error_strings = policer_classify_error_strings,
  .n_next_nodes = POLICER_CLASSIFY_NEXT_INDEX_N_NEXT,
  .next_nodes = {
    [POLICER_CLASSIFY_NEXT_INDEX_DROP] = "error-drop",
  },
};
/* *INDENT-ON* */

VLIB_NODE_FN (l2_policer_classify_node) (vlib_main_t * vm,
					 vlib_node_runtime_t * node,
					 vlib_frame_t * frame)
{
  return policer_classify_inline (vm, node, frame, POLICER_CLASSIFY_TABLE_L2);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (l2_policer_classify_node) = {
  .name = "l2-policer-classify",
  .vector_size = sizeof (u32),
  .format_trace = format_policer_classify_trace,
  .n_errors = ARRAY_LEN (policer_classify_error_strings),
  .error_strings = policer_classify_error_strings,
  .n_next_nodes = POLICER_CLASSIFY_NEXT_INDEX_N_NEXT,
  .next_nodes = {
    [POLICER_CLASSIFY_NEXT_INDEX_DROP] = "error-drop",
  },
};
/* *INDENT-ON* */

#ifndef CLIB_MARCH_VARIANT
static clib_error_t *
policer_classify_init (vlib_main_t * vm)
{
  policer_classify_main_t *pcm = &policer_classify_main;

  pcm->vlib_main = vm;
  pcm->vnet_main = vnet_get_main ();
  pcm->vnet_classify_main = &vnet_classify_main;

  /* Initialize L2 feature next-node indexes */
  feat_bitmap_init_next_nodes (vm,
			       l2_policer_classify_node.index,
			       L2INPUT_N_FEAT,
			       l2input_get_feat_names (),
			       pcm->feat_next_node_index);

  return 0;
}

VLIB_INIT_FUNCTION (policer_classify_init);
#endif /* CLIB_MARCH_VARIANT */

/*
 * fd.io coding-style-patch-verification: ON
 *
 * Local Variables:
 * eval: (c-set-style "gnu")
 * End:
 */