/*-
 *   BSD LICENSE
 *
 *   Copyright(c) 2014-2016 Chelsio Communications.
 *   All rights reserved.
 *
 *   Redistribution and use in source and binary forms, with or without
 *   modification, are permitted provided that the following conditions
 *   are met:
 *
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in
 *       the documentation and/or other materials provided with the
 *       distribution.
 *     * Neither the name of Chelsio Communications nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
 *
 *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <sys/queue.h>
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
#include <inttypes.h>
#include <netinet/in.h>

#include <rte_byteorder.h>
#include <rte_common.h>
#include <rte_cycles.h>
#include <rte_interrupts.h>
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_tailq.h>
#include <rte_eal.h>
#include <rte_alarm.h>
#include <rte_ether.h>
#include <rte_ethdev.h>
#include <rte_atomic.h>
#include <rte_malloc.h>
#include <rte_random.h>
#include <rte_dev.h>

#include "cxgbe.h"

/*
 * Macros needed to support the PCI Device ID Table ...
 */
#define CH_PCI_DEVICE_ID_TABLE_DEFINE_BEGIN \
	static struct rte_pci_id cxgb4_pci_tbl[] = {
#define CH_PCI_DEVICE_ID_FUNCTION 0x4

#define PCI_VENDOR_ID_CHELSIO 0x1425

#define CH_PCI_ID_TABLE_ENTRY(devid) \
		{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CHELSIO, (devid)) }

#define CH_PCI_DEVICE_ID_TABLE_DEFINE_END \
		{ .vendor_id = 0, } \
	}

/*
 *... and the PCI ID Table itself ...
 */
#include "t4_pci_id_tbl.h"

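/*
 * Transmit burst handler, registered as eth_dev->tx_pkt_burst below.
 * Reclaims completed descriptors, then hands packets to t4_eth_xmit()
 * under the queue lock until all nb_pkts are queued or the ring refuses
 * more.  Returns the number of packets actually queued.
 */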
static uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
				uint16_t nb_pkts)
{
	struct sge_eth_txq *txq = (struct sge_eth_txq *)tx_queue;
	uint16_t pkts_sent, pkts_remain;
	uint16_t total_sent = 0;
	int ret = 0;

	CXGBE_DEBUG_TX(adapter, "%s: txq = %p; tx_pkts = %p; nb_pkts = %d\n",
		       __func__, txq, tx_pkts, nb_pkts);

	t4_os_lock(&txq->txq_lock);
	/* free up desc from already completed tx */
	reclaim_completed_tx(&txq->q);
	while (total_sent < nb_pkts) {
		pkts_remain = nb_pkts - total_sent;

		for (pkts_sent = 0; pkts_sent < pkts_remain; pkts_sent++) {
			ret = t4_eth_xmit(txq, tx_pkts[total_sent + pkts_sent]);
			if (ret < 0)
				break;
		}
		if (!pkts_sent)
			break;
		total_sent += pkts_sent;
		/* reclaim as much as possible */
		reclaim_completed_tx(&txq->q);
	}

	t4_os_unlock(&txq->txq_lock);
	return total_sent;
}

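/*
 * Receive burst handler, registered as eth_dev->rx_pkt_burst below.
 * Polls the queue's response ring for up to nb_pkts packets and returns
 * the number of packets actually received.
 */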
static uint16_t cxgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
				uint16_t nb_pkts)
{
	struct sge_eth_rxq *rxq = (struct sge_eth_rxq *)rx_queue;
	unsigned int work_done;

	CXGBE_DEBUG_RX(adapter, "%s: rxq->rspq.cntxt_id = %u; nb_pkts = %d\n",
		       __func__, rxq->rspq.cntxt_id, nb_pkts);

	if (cxgbe_poll(&rxq->rspq, rx_pkts, (unsigned int)nb_pkts, &work_done))
		dev_err(adapter, "error in cxgbe poll\n");

	CXGBE_DEBUG_RX(adapter, "%s: work_done = %u\n", __func__, work_done);
	return work_done;
}

static void cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
			       struct rte_eth_dev_info *device_info)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	int max_queues = adapter->sge.max_ethqsets / adapter->params.nports;

	static const struct rte_eth_desc_lim cxgbe_desc_lim = {
		.nb_max = CXGBE_MAX_RING_DESC_SIZE,
		.nb_min = CXGBE_MIN_RING_DESC_SIZE,
		.nb_align = 1,
	};

	device_info->min_rx_bufsize = CXGBE_MIN_RX_BUFSIZE;
	device_info->max_rx_pktlen = CXGBE_MAX_RX_PKTLEN;
	device_info->max_rx_queues = max_queues;
	device_info->max_tx_queues = max_queues;
	device_info->max_mac_addrs = 1;
	/* XXX: For now we support one MAC/port */
	device_info->max_vfs = adapter->params.arch.vfcount;
	device_info->max_vmdq_pools = 0; /* XXX: For now no support for VMDQ */

	device_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
				       DEV_RX_OFFLOAD_IPV4_CKSUM |
				       DEV_RX_OFFLOAD_UDP_CKSUM |
				       DEV_RX_OFFLOAD_TCP_CKSUM;

	device_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
				       DEV_TX_OFFLOAD_IPV4_CKSUM |
				       DEV_TX_OFFLOAD_UDP_CKSUM |
				       DEV_TX_OFFLOAD_TCP_CKSUM |
				       DEV_TX_OFFLOAD_TCP_TSO;

	device_info->reta_size = pi->rss_size;

	device_info->rx_desc_lim = cxgbe_desc_lim;
	device_info->tx_desc_lim = cxgbe_desc_lim;
	device_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G;
}

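/*
 * t4_set_rxmode() takes the MTU, promiscuous, all-multicast, broadcast
 * and VLAN-extraction settings in that order; passing -1 leaves the
 * corresponding setting unchanged.  Each handler below flips exactly
 * one knob.
 */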
static void cxgbe_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	t4_set_rxmode(adapter, adapter->mbox, pi->viid, -1,
		      1, -1, 1, -1, false);
}

static void cxgbe_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	t4_set_rxmode(adapter, adapter->mbox, pi->viid, -1,
		      0, -1, 1, -1, false);
}

static void cxgbe_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	/* TODO: address filters ?? */

	t4_set_rxmode(adapter, adapter->mbox, pi->viid, -1,
		      -1, 1, 1, -1, false);
}

static void cxgbe_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	/* TODO: address filters ?? */

	t4_set_rxmode(adapter, adapter->mbox, pi->viid, -1,
		      -1, 0, 1, -1, false);
}

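/*
 * Refresh the link state by draining a small budget of firmware events.
 * Returns 0 if the link state changed since the last query and -1 if it
 * is unchanged.
 */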
static int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
				 __rte_unused int wait_to_complete)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct sge *s = &adapter->sge;
	struct rte_eth_link *old_link = &eth_dev->data->dev_link;
	unsigned int work_done, budget = 4;

	cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
	if (old_link->link_status == pi->link_cfg.link_ok)
		return -1;  /* link not changed */

	eth_dev->data->dev_link.link_status = pi->link_cfg.link_ok;
	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
	eth_dev->data->dev_link.link_speed = pi->link_cfg.speed;

	/* link has changed */
	return 0;
}

static int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct rte_eth_dev_info dev_info;
	int err;
	uint16_t new_mtu = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

	cxgbe_dev_info_get(eth_dev, &dev_info);

	/* Must accommodate at least ETHER_MIN_MTU */
	if ((new_mtu < ETHER_MIN_MTU) || (new_mtu > dev_info.max_rx_pktlen))
		return -EINVAL;

	/* set to jumbo mode if needed */
	if (new_mtu > ETHER_MAX_LEN)
		eth_dev->data->dev_conf.rxmode.jumbo_frame = 1;
	else
		eth_dev->data->dev_conf.rxmode.jumbo_frame = 0;

	err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
			    -1, -1, true);
	if (!err)
		eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;

	return err;
}

static int cxgbe_dev_tx_queue_start(struct rte_eth_dev *eth_dev,
				    uint16_t tx_queue_id);
static int cxgbe_dev_rx_queue_start(struct rte_eth_dev *eth_dev,
				    uint16_t tx_queue_id);
static void cxgbe_dev_tx_queue_release(void *q);
static void cxgbe_dev_rx_queue_release(void *q);

/*
 * Close the device: stop the port and, once every other port on the
 * adapter has been stopped, release adapter-wide resources.
 */
static void cxgbe_dev_close(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	int i, dev_down = 0;

	CXGBE_FUNC_TRACE();

	if (!(adapter->flags & FULL_INIT_DONE))
		return;

	cxgbe_down(pi);

	/*
	 *  We clear queues only if both tx and rx path of the port
	 *  have been disabled
	 */
	t4_sge_eth_clear_queues(pi);

	/*  See if all ports are down */
	for_each_port(adapter, i) {
		pi = adap2pinfo(adapter, i);
		/*
		 * Skip first port of the adapter since it will be closed
		 * by DPDK
		 */
		if (i == 0)
			continue;
		dev_down += (pi->eth_dev->data->dev_started == 0) ? 1 : 0;
	}

	/* If rest of the ports are stopped, then free up resources */
	if (dev_down == (adapter->params.nports - 1))
		cxgbe_close(adapter);
}

/*
 * Start the device.
 * It returns 0 on success.
 */
static int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	int err = 0, i;

	CXGBE_FUNC_TRACE();

	/*
	 * If we don't have a connection to the firmware there's nothing we
	 * can do.
	 */
	if (!(adapter->flags & FW_OK)) {
		err = -ENXIO;
		goto out;
	}

	if (!(adapter->flags & FULL_INIT_DONE)) {
		err = cxgbe_up(adapter);
		if (err < 0)
			goto out;
	}

	err = setup_rss(pi);
	if (err)
		goto out;

	for (i = 0; i < pi->n_tx_qsets; i++) {
		err = cxgbe_dev_tx_queue_start(eth_dev, i);
		if (err)
			goto out;
	}

	for (i = 0; i < pi->n_rx_qsets; i++) {
		err = cxgbe_dev_rx_queue_start(eth_dev, i);
		if (err)
			goto out;
	}

	err = link_start(pi);
	if (err)
		goto out;

out:
	return err;
}

/*
 * Stop device: disable rx and tx functions to allow for reconfiguring.
 */
static void cxgbe_dev_stop(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	CXGBE_FUNC_TRACE();

	if (!(adapter->flags & FULL_INIT_DONE))
		return;

	cxgbe_down(pi);

	/*
	 *  We clear queues only if both tx and rx path of the port
	 *  have been disabled
	 */
	t4_sge_eth_clear_queues(pi);
}

static int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	int err;

	CXGBE_FUNC_TRACE();

	if (!(adapter->flags & FW_QUEUE_BOUND)) {
		err = setup_sge_fwevtq(adapter);
		if (err)
			return err;
		adapter->flags |= FW_QUEUE_BOUND;
	}

	err = cfg_queue_count(eth_dev);
	if (err)
		return err;

	return 0;
}

static int cxgbe_dev_tx_queue_start(struct rte_eth_dev *eth_dev,
				    uint16_t tx_queue_id)
{
	int ret;
	struct sge_eth_txq *txq = (struct sge_eth_txq *)
				  (eth_dev->data->tx_queues[tx_queue_id]);

	dev_debug(NULL, "%s: tx_queue_id = %d\n", __func__, tx_queue_id);

	ret = t4_sge_eth_txq_start(txq);
	if (ret == 0)
		eth_dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;

	return ret;
}

static int cxgbe_dev_tx_queue_stop(struct rte_eth_dev *eth_dev,
				   uint16_t tx_queue_id)
{
	int ret;
	struct sge_eth_txq *txq = (struct sge_eth_txq *)
				  (eth_dev->data->tx_queues[tx_queue_id]);

	dev_debug(NULL, "%s: tx_queue_id = %d\n", __func__, tx_queue_id);

	ret = t4_sge_eth_txq_stop(txq);
	if (ret == 0)
		eth_dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;

	return ret;
}

static int cxgbe_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
				    uint16_t queue_idx,	uint16_t nb_desc,
				    unsigned int socket_id,
				    const struct rte_eth_txconf *tx_conf)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct sge *s = &adapter->sge;
	struct sge_eth_txq *txq = &s->ethtxq[pi->first_qset + queue_idx];
	int err = 0;
	unsigned int temp_nb_desc;

	RTE_SET_USED(tx_conf);

	dev_debug(adapter, "%s: eth_dev->data->nb_tx_queues = %d; queue_idx = %d; nb_desc = %d; socket_id = %d; pi->first_qset = %u\n",
		  __func__, eth_dev->data->nb_tx_queues, queue_idx, nb_desc,
		  socket_id, pi->first_qset);

	/*  Free up the existing queue  */
	if (eth_dev->data->tx_queues[queue_idx]) {
		cxgbe_dev_tx_queue_release(eth_dev->data->tx_queues[queue_idx]);
		eth_dev->data->tx_queues[queue_idx] = NULL;
	}

	eth_dev->data->tx_queues[queue_idx] = (void *)txq;

	/* Sanity Checking
	 *
	 * nb_desc below CXGBE_MIN_RING_DESC_SIZE falls back to the default
	 * ring size; anything above CXGBE_MAX_RING_DESC_SIZE is rejected.
	 */
	temp_nb_desc = nb_desc;
	if (nb_desc < CXGBE_MIN_RING_DESC_SIZE) {
		dev_warn(adapter, "%s: number of descriptors must be >= %d. Using default [%d]\n",
			 __func__, CXGBE_MIN_RING_DESC_SIZE,
			 CXGBE_DEFAULT_TX_DESC_SIZE);
		temp_nb_desc = CXGBE_DEFAULT_TX_DESC_SIZE;
	} else if (nb_desc > CXGBE_MAX_RING_DESC_SIZE) {
		dev_err(adapter, "%s: number of descriptors must be between %d and %d inclusive. Default [%d]\n",
			__func__, CXGBE_MIN_RING_DESC_SIZE,
			CXGBE_MAX_RING_DESC_SIZE, CXGBE_DEFAULT_TX_DESC_SIZE);
		return -EINVAL;
	}

	txq->q.size = temp_nb_desc;

	err = t4_sge_alloc_eth_txq(adapter, txq, eth_dev, queue_idx,
				   s->fw_evtq.cntxt_id, socket_id);

	dev_debug(adapter, "%s: txq->q.cntxt_id= %d err = %d\n",
		  __func__, txq->q.cntxt_id, err);

	return err;
}

static void cxgbe_dev_tx_queue_release(void *q)
{
	struct sge_eth_txq *txq = (struct sge_eth_txq *)q;

	if (txq) {
		struct port_info *pi = (struct port_info *)
				       (txq->eth_dev->data->dev_private);
		struct adapter *adap = pi->adapter;

		dev_debug(adap, "%s: pi->port_id = %d; tx_queue_id = %d\n",
			  __func__, pi->port_id, txq->q.cntxt_id);

		t4_sge_eth_txq_release(adap, txq);
	}
}

static int cxgbe_dev_rx_queue_start(struct rte_eth_dev *eth_dev,
				    uint16_t rx_queue_id)
{
	int ret;
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adap = pi->adapter;
	struct sge_rspq *q;

	dev_debug(adap, "%s: pi->port_id = %d; rx_queue_id = %d\n",
		  __func__, pi->port_id, rx_queue_id);

	q = eth_dev->data->rx_queues[rx_queue_id];

	ret = t4_sge_eth_rxq_start(adap, q);
	if (ret == 0)
		eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;

	return ret;
}

static int cxgbe_dev_rx_queue_stop(struct rte_eth_dev *eth_dev,
				   uint16_t rx_queue_id)
{
	int ret;
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adap = pi->adapter;
	struct sge_rspq *q;

	dev_debug(adap, "%s: pi->port_id = %d; rx_queue_id = %d\n",
		  __func__, pi->port_id, rx_queue_id);

	q = eth_dev->data->rx_queues[rx_queue_id];
	ret = t4_sge_eth_rxq_stop(adap, q);
	if (ret == 0)
		eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;

	return ret;
}

static int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
				    uint16_t queue_idx,	uint16_t nb_desc,
				    unsigned int socket_id,
				    const struct rte_eth_rxconf *rx_conf,
				    struct rte_mempool *mp)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct sge *s = &adapter->sge;
	struct sge_eth_rxq *rxq = &s->ethrxq[pi->first_qset + queue_idx];
	int err = 0;
	int msi_idx = 0;
	unsigned int temp_nb_desc;
	struct rte_eth_dev_info dev_info;
	unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;

	RTE_SET_USED(rx_conf);

	dev_debug(adapter, "%s: eth_dev->data->nb_rx_queues = %d; queue_idx = %d; nb_desc = %d; socket_id = %d; mp = %p\n",
		  __func__, eth_dev->data->nb_rx_queues, queue_idx, nb_desc,
		  socket_id, mp);

	cxgbe_dev_info_get(eth_dev, &dev_info);

	/* Max packet length must lie within the supported device limits */
	if ((pkt_len < dev_info.min_rx_bufsize) ||
	    (pkt_len > dev_info.max_rx_pktlen)) {
		dev_err(adapter, "%s: max pkt len must be >= %d and <= %d\n",
			__func__, dev_info.min_rx_bufsize,
			dev_info.max_rx_pktlen);
		return -EINVAL;
	}

	/*  Free up the existing queue  */
	if (eth_dev->data->rx_queues[queue_idx]) {
		cxgbe_dev_rx_queue_release(eth_dev->data->rx_queues[queue_idx]);
		eth_dev->data->rx_queues[queue_idx] = NULL;
	}

	eth_dev->data->rx_queues[queue_idx] = (void *)rxq;

	/* Sanity Checking
	 *
	 * nb_desc below CXGBE_MIN_RING_DESC_SIZE falls back to the default
	 * ring size; anything above CXGBE_MAX_RING_DESC_SIZE is rejected.
	 */
	 */
	temp_nb_desc = nb_desc;
	if (nb_desc < CXGBE_MIN_RING_DESC_SIZE) {
		dev_warn(adapter, "%s: number of descriptors must be >= %d. Using default [%d]\n",
			 __func__, CXGBE_MIN_RING_DESC_SIZE,
			 CXGBE_DEFAULT_RX_DESC_SIZE);
		temp_nb_desc = CXGBE_DEFAULT_RX_DESC_SIZE;
	} else if (nb_desc > CXGBE_MAX_RING_DESC_SIZE) {
		dev_err(adapter, "%s: number of descriptors must be between %d and %d inclusive. Default [%d]\n",
			__func__, CXGBE_MIN_RING_DESC_SIZE,
			CXGBE_MAX_RING_DESC_SIZE, CXGBE_DEFAULT_RX_DESC_SIZE);
		return -EINVAL;
	}

	rxq->rspq.size = temp_nb_desc;
	/* The free list is an embedded member, so &rxq->fl can never be
	 * NULL; size it unconditionally.
	 */
	rxq->fl.size = temp_nb_desc;

	/* Set to jumbo mode if necessary */
	if (pkt_len > ETHER_MAX_LEN)
		eth_dev->data->dev_conf.rxmode.jumbo_frame = 1;
	else
		eth_dev->data->dev_conf.rxmode.jumbo_frame = 0;

	err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
			       &rxq->fl, t4_ethrx_handler,
			       t4_get_mps_bg_map(adapter, pi->tx_chan), mp,
			       queue_idx, socket_id);

	dev_debug(adapter, "%s: err = %d; port_id = %d; cntxt_id = %u\n",
		  __func__, err, pi->port_id, rxq->rspq.cntxt_id);
	return err;
}

static void cxgbe_dev_rx_queue_release(void *q)
{
	struct sge_eth_rxq *rxq = (struct sge_eth_rxq *)q;
	struct sge_rspq *rq = &rxq->rspq;

	if (rq) {
		struct port_info *pi = (struct port_info *)
				       (rq->eth_dev->data->dev_private);
		struct adapter *adap = pi->adapter;

		dev_debug(adap, "%s: pi->port_id = %d; rx_queue_id = %d\n",
			  __func__, pi->port_id, rxq->rspq.cntxt_id);

		t4_sge_eth_rxq_release(adap, rxq);
	}
}

/*
 * Get port statistics.
 */
static void cxgbe_dev_stats_get(struct rte_eth_dev *eth_dev,
				struct rte_eth_stats *eth_stats)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct sge *s = &adapter->sge;
	struct port_stats ps;
	unsigned int i;

	cxgbe_stats_get(pi, &ps);

	/* RX Stats */
	eth_stats->ipackets = ps.rx_frames;
	eth_stats->ibytes   = ps.rx_octets;
	eth_stats->imissed  = ps.rx_ovflow0 + ps.rx_ovflow1 +
			      ps.rx_ovflow2 + ps.rx_ovflow3 +
			      ps.rx_trunc0 + ps.rx_trunc1 +
			      ps.rx_trunc2 + ps.rx_trunc3;
	eth_stats->ierrors  = ps.rx_symbol_err + ps.rx_fcs_err +
			      ps.rx_jabber + ps.rx_too_long + ps.rx_runt +
			      ps.rx_len_err;

	/* TX Stats */
	eth_stats->opackets = ps.tx_frames;
	eth_stats->obytes   = ps.tx_octets;
	eth_stats->oerrors  = ps.tx_error_frames;

	for (i = 0; i < pi->n_rx_qsets; i++) {
		struct sge_eth_rxq *rxq =
			&s->ethrxq[pi->first_qset + i];

		eth_stats->q_ipackets[i] = rxq->stats.pkts;
		eth_stats->q_ibytes[i] = rxq->stats.rx_bytes;
	}

	for (i = 0; i < pi->n_tx_qsets; i++) {
		struct sge_eth_txq *txq =
			&s->ethtxq[pi->first_qset + i];

		eth_stats->q_opackets[i] = txq->stats.pkts;
		eth_stats->q_obytes[i] = txq->stats.tx_bytes;
		eth_stats->q_errors[i] = txq->stats.mapping_err;
	}
}

/*
 * Reset port statistics.
 */
static void cxgbe_dev_stats_reset(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct sge *s = &adapter->sge;
	unsigned int i;

	cxgbe_stats_reset(pi);
	for (i = 0; i < pi->n_rx_qsets; i++) {
		struct sge_eth_rxq *rxq =
			&s->ethrxq[pi->first_qset + i];

		rxq->stats.pkts = 0;
		rxq->stats.rx_bytes = 0;
	}
	for (i = 0; i < pi->n_tx_qsets; i++) {
		struct sge_eth_txq *txq =
			&s->ethtxq[pi->first_qset + i];

		txq->stats.pkts = 0;
		txq->stats.tx_bytes = 0;
		txq->stats.mapping_err = 0;
	}
}

static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
			       struct rte_eth_fc_conf *fc_conf)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct link_config *lc = &pi->link_cfg;
	int rx_pause, tx_pause;

	fc_conf->autoneg = lc->fc & PAUSE_AUTONEG;
	rx_pause = lc->fc & PAUSE_RX;
	tx_pause = lc->fc & PAUSE_TX;

	if (rx_pause && tx_pause)
		fc_conf->mode = RTE_FC_FULL;
	else if (rx_pause)
		fc_conf->mode = RTE_FC_RX_PAUSE;
	else if (tx_pause)
		fc_conf->mode = RTE_FC_TX_PAUSE;
	else
		fc_conf->mode = RTE_FC_NONE;
	return 0;
}

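/*
 * Apply the requested pause-frame settings.  In this DPDK API,
 * RTE_FC_FULL is the bitwise OR of RTE_FC_RX_PAUSE and RTE_FC_TX_PAUSE,
 * so the bit tests below enable RX and/or TX pause correctly for all
 * four flow-control modes.
 */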
static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
			       struct rte_eth_fc_conf *fc_conf)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	struct link_config *lc = &pi->link_cfg;

	if (lc->supported & FW_PORT_CAP_ANEG) {
		if (fc_conf->autoneg)
			lc->requested_fc |= PAUSE_AUTONEG;
		else
			lc->requested_fc &= ~PAUSE_AUTONEG;
	}

	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
	    (fc_conf->mode & RTE_FC_RX_PAUSE))
		lc->requested_fc |= PAUSE_RX;
	else
		lc->requested_fc &= ~PAUSE_RX;

	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
	    (fc_conf->mode & RTE_FC_TX_PAUSE))
		lc->requested_fc |= PAUSE_TX;
	else
		lc->requested_fc &= ~PAUSE_TX;

	return t4_link_l1cfg(adapter, adapter->mbox, pi->tx_chan,
			     &pi->link_cfg);
}

static const uint32_t *
cxgbe_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
{
	static const uint32_t ptypes[] = {
		RTE_PTYPE_L3_IPV4,
		RTE_PTYPE_L3_IPV6,
		RTE_PTYPE_UNKNOWN
	};

	if (eth_dev->rx_pkt_burst == cxgbe_recv_pkts)
		return ptypes;
	return NULL;
}

static int cxgbe_get_eeprom_length(struct rte_eth_dev *dev)
{
	RTE_SET_USED(dev);
	return EEPROMSIZE;
}

/**
 * eeprom_ptov - translate a physical EEPROM address to virtual
 * @phys_addr: the physical EEPROM address
 * @fn: the PCI function number
 * @sz: size of function-specific area
 *
 * Translate a physical EEPROM address to virtual.  The first 1K is
 * accessed through virtual addresses starting at 31K, the rest is
 * accessed through virtual addresses starting at 0.
 *
 * The mapping is as follows:
 * [0..1K) -> [31K..32K)
 * [1K..1K+A) -> [31K-A..31K)
 * [1K+A..ES) -> [0..ES-A-1K)
 *
 * where A = @fn * @sz, and ES = EEPROM size.
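 *
 * For example, with @fn = 0 (so A = 0), phys_addr 0 maps to virtual
 * address 31744 (31K) and phys_addr 1024 maps to virtual address 0.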
 */
static int eeprom_ptov(unsigned int phys_addr, unsigned int fn, unsigned int sz)
{
	fn *= sz;
	if (phys_addr < 1024)
		return phys_addr + (31 << 10);
	if (phys_addr < 1024 + fn)
		return fn + phys_addr - 1024;
	if (phys_addr < EEPROMSIZE)
		return phys_addr - 1024 - fn;
	if (phys_addr < EEPROMVSIZE)
		return phys_addr - 1024;
	return -EINVAL;
}

/*
 * The next two routines implement EEPROM read/write from physical
 * addresses.
 */
static int eeprom_rd_phys(struct adapter *adap, unsigned int phys_addr, u32 *v)
{
	int vaddr = eeprom_ptov(phys_addr, adap->pf, EEPROMPFSIZE);

	if (vaddr >= 0)
		vaddr = t4_seeprom_read(adap, vaddr, v);
	return vaddr < 0 ? vaddr : 0;
}

static int eeprom_wr_phys(struct adapter *adap, unsigned int phys_addr, u32 v)
{
	int vaddr = eeprom_ptov(phys_addr, adap->pf, EEPROMPFSIZE);

	if (vaddr >= 0)
		vaddr = t4_seeprom_write(adap, vaddr, v);
	return vaddr < 0 ? vaddr : 0;
}

#define EEPROM_MAGIC 0x38E2F10C

static int cxgbe_get_eeprom(struct rte_eth_dev *dev,
			    struct rte_dev_eeprom_info *e)
{
	struct port_info *pi = (struct port_info *)(dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	u32 i;
	int err = 0;
	u8 *buf = rte_zmalloc(NULL, EEPROMSIZE, 0);

	if (!buf)
		return -ENOMEM;

	e->magic = EEPROM_MAGIC;
	for (i = e->offset & ~3; !err && i < e->offset + e->length; i += 4)
		err = eeprom_rd_phys(adapter, i, (u32 *)&buf[i]);

	if (!err)
		rte_memcpy(e->data, buf + e->offset, e->length);
	rte_free(buf);
	return err;
}

static int cxgbe_set_eeprom(struct rte_eth_dev *dev,
			    struct rte_dev_eeprom_info *eeprom)
{
	struct port_info *pi = (struct port_info *)(dev->data->dev_private);
	struct adapter *adapter = pi->adapter;
	u8 *buf;
	int err = 0;
	u32 aligned_offset, aligned_len, *p;

	if (eeprom->magic != EEPROM_MAGIC)
		return -EINVAL;

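	/*
	 * The EEPROM is accessed one 32-bit word at a time, so widen the
	 * requested window to 4-byte alignment at both ends.
	 */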
	aligned_offset = eeprom->offset & ~3;
	aligned_len = (eeprom->length + (eeprom->offset & 3) + 3) & ~3;

	if (adapter->pf > 0) {
		u32 start = 1024 + adapter->pf * EEPROMPFSIZE;

		if (aligned_offset < start ||
		    aligned_offset + aligned_len > start + EEPROMPFSIZE)
			return -EPERM;
	}

	if (aligned_offset != eeprom->offset || aligned_len != eeprom->length) {
		/* RMW possibly needed for first or last words.
		 */
		buf = rte_zmalloc(NULL, aligned_len, 0);
		if (!buf)
			return -ENOMEM;
		err = eeprom_rd_phys(adapter, aligned_offset, (u32 *)buf);
		if (!err && aligned_len > 4)
			err = eeprom_rd_phys(adapter,
					     aligned_offset + aligned_len - 4,
					     (u32 *)&buf[aligned_len - 4]);
		if (err)
			goto out;
		rte_memcpy(buf + (eeprom->offset & 3), eeprom->data,
			   eeprom->length);
	} else {
		buf = eeprom->data;
	}

	err = t4_seeprom_wp(adapter, false);
	if (err)
		goto out;

	for (p = (u32 *)buf; !err && aligned_len; aligned_len -= 4, p++) {
		err = eeprom_wr_phys(adapter, aligned_offset, *p);
		aligned_offset += 4;
	}

	if (!err)
		err = t4_seeprom_wp(adapter, true);
out:
	if (buf != eeprom->data)
		rte_free(buf);
	return err;
}

static int cxgbe_get_regs_len(struct rte_eth_dev *eth_dev)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	return t4_get_regs_len(adapter) / sizeof(uint32_t);
}

static int cxgbe_get_regs(struct rte_eth_dev *eth_dev,
			  struct rte_dev_reg_info *regs)
{
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = pi->adapter;

	regs->version = CHELSIO_CHIP_VERSION(adapter->params.chip) |
		(CHELSIO_CHIP_RELEASE(adapter->params.chip) << 10) |
		(1 << 16);

	if (regs->data == NULL) {
		regs->length = cxgbe_get_regs_len(eth_dev);
		regs->width = sizeof(uint32_t);

		return 0;
	}

	t4_get_regs(adapter, regs->data, (regs->length * sizeof(uint32_t)));

	return 0;
}

static const struct eth_dev_ops cxgbe_eth_dev_ops = {
	.dev_start		= cxgbe_dev_start,
	.dev_stop		= cxgbe_dev_stop,
	.dev_close		= cxgbe_dev_close,
	.promiscuous_enable	= cxgbe_dev_promiscuous_enable,
	.promiscuous_disable	= cxgbe_dev_promiscuous_disable,
	.allmulticast_enable	= cxgbe_dev_allmulticast_enable,
	.allmulticast_disable	= cxgbe_dev_allmulticast_disable,
	.dev_configure		= cxgbe_dev_configure,
	.dev_infos_get		= cxgbe_dev_info_get,
	.dev_supported_ptypes_get = cxgbe_dev_supported_ptypes_get,
	.link_update		= cxgbe_dev_link_update,
	.mtu_set		= cxgbe_dev_mtu_set,
	.tx_queue_setup         = cxgbe_dev_tx_queue_setup,
	.tx_queue_start		= cxgbe_dev_tx_queue_start,
	.tx_queue_stop		= cxgbe_dev_tx_queue_stop,
	.tx_queue_release	= cxgbe_dev_tx_queue_release,
	.rx_queue_setup         = cxgbe_dev_rx_queue_setup,
	.rx_queue_start		= cxgbe_dev_rx_queue_start,
	.rx_queue_stop		= cxgbe_dev_rx_queue_stop,
	.rx_queue_release	= cxgbe_dev_rx_queue_release,
	.stats_get		= cxgbe_dev_stats_get,
	.stats_reset		= cxgbe_dev_stats_reset,
	.flow_ctrl_get		= cxgbe_flow_ctrl_get,
	.flow_ctrl_set		= cxgbe_flow_ctrl_set,
	.get_eeprom_length	= cxgbe_get_eeprom_length,
	.get_eeprom		= cxgbe_get_eeprom,
	.set_eeprom		= cxgbe_set_eeprom,
	.get_reg		= cxgbe_get_regs,
};
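
/*
 * With this ops table installed in eth_cxgbe_dev_init() below, generic
 * ethdev calls resolve to the handlers above; e.g. rte_eth_dev_start()
 * ends up in cxgbe_dev_start() and rte_eth_stats_get() in
 * cxgbe_dev_stats_get().
 */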

/*
 * Initialize the driver.
 * It returns 0 on success.
 */
static int eth_cxgbe_dev_init(struct rte_eth_dev *eth_dev)
{
	struct rte_pci_device *pci_dev;
	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
	struct adapter *adapter = NULL;
	char name[RTE_ETH_NAME_MAX_LEN];
	int err = 0;

	CXGBE_FUNC_TRACE();

	eth_dev->dev_ops = &cxgbe_eth_dev_ops;
	eth_dev->rx_pkt_burst = &cxgbe_recv_pkts;
	eth_dev->tx_pkt_burst = &cxgbe_xmit_pkts;

	/* For secondary processes, we don't initialise any further as the
	 * primary has already done this work.
	 */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
		return 0;

	pci_dev = eth_dev->pci_dev;

	snprintf(name, sizeof(name), "cxgbeadapter%d", eth_dev->data->port_id);
	adapter = rte_zmalloc(name, sizeof(*adapter), 0);
	if (!adapter)
		return -1;

	adapter->use_unpacked_mode = 1;
	adapter->regs = (void *)pci_dev->mem_resource[0].addr;
	if (!adapter->regs) {
		dev_err(adapter, "%s: cannot map device registers\n", __func__);
		err = -ENOMEM;
		goto out_free_adapter;
	}
	adapter->pdev = pci_dev;
	adapter->eth_dev = eth_dev;
	pi->adapter = adapter;

	err = cxgbe_probe(adapter);
	if (err) {
		dev_err(adapter, "%s: cxgbe probe failed with err %d\n",
			__func__, err);
		goto out_free_adapter;
	}

	return 0;

out_free_adapter:
	rte_free(adapter);
	return err;
}

static struct eth_driver rte_cxgbe_pmd = {
	.pci_drv = {
		.id_table = cxgb4_pci_tbl,
		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
		.probe = rte_eth_dev_pci_probe,
		.remove = rte_eth_dev_pci_remove,
	},
	.eth_dev_init = eth_cxgbe_dev_init,
	.dev_private_size = sizeof(struct port_info),
};

RTE_PMD_REGISTER_PCI(net_cxgbe, rte_cxgbe_pmd.pci_drv);
RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);