examples/ipsec-secgw: add IPsec sample application

Sample app implementing an IPsec Security Gateway.
The main goal of this app is to show the use of cryptodev framework
in a "real world" application.

Currently only static IPv4 ESP IPsec tunnels are supported, for the following
algorithms:
- Cipher: AES-CBC, NULL
- Authentication: HMAC-SHA1, NULL

Not supported:
- SA auto negotiation (No IKE implementation)
- chained mbufs

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Commit d299106e8e (parent ab8536d538)
Authored by Sergio Gonzalez Monroy on 2016-03-11 02:12:40 +0000; committed by Thomas Monjalon
15 changed files with 3711 additions and 0 deletions

@@ -566,6 +566,10 @@ M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
F: examples/helloworld/
F: doc/guides/sample_app_ug/hello_world.rst
M: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
F: examples/ipsec-secgw/
F: doc/guides/sample_app_ug/ipsec_secgw.rst
F: examples/ipv4_multicast/
F: doc/guides/sample_app_ug/ipv4_multicast.rst

@@ -148,6 +148,9 @@ Examples
vhost-switch often fails to allocate mbuf when dequeue from vring because it
wrongly calculates the number of mbufs needed.
* **examples/ipsec-secgw: ipsec security gateway**
New application implementing an IPsec Security Gateway.
Other
~~~~~

@@ -73,6 +73,7 @@ Sample Applications User Guide
proc_info
ptpclient
performance_thread
ipsec_secgw
**Figures**

@@ -0,0 +1,524 @@
.. BSD LICENSE
Copyright(c) 2016 Intel Corporation. All rights reserved.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Intel Corporation nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
IPsec Security Gateway Sample Application
=========================================
The IPsec Security Gateway application is an example of a "real world"
application using the DPDK cryptodev framework.
Overview
--------
The application demonstrates the implementation of a Security Gateway
(not IPsec compliant, see Constraints below) using DPDK, based on RFC4301,
RFC4303, RFC3602 and RFC2404.
Internet Key Exchange (IKE) is not implemented, so only manual setting of
Security Policies and Security Associations is supported.
The Security Policies (SP) are implemented as ACL rules, the Security
Associations (SA) are stored in a table and the Routing is implemented
using LPM.
The application classifies the ports as Protected or Unprotected.
Thus, traffic received on an Unprotected or Protected port is considered
Inbound or Outbound respectively.
Path for IPsec Inbound traffic:
* Read packets from the port
* Classify packets between IPv4 and ESP (see the sketch after these lists).
* Inbound SA lookup for ESP packets based on their SPI
* Verification/Decryption
* Removal of ESP and outer IP header
* Inbound SP check, using ACL, on the decrypted packets and any other IPv4
packets we read.
* Routing
* Write packet to port
Path for IPsec Outbound traffic:
* Read packets from the port
* Outbound SP check using ACL of all IPv4 traffic
* Outbound SA lookup for packets that need IPsec protection
* Add ESP and outer IP header
* Encryption/Digest
* Routing
* Write packet to port
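Both paths begin by separating plain IPv4 traffic from ESP traffic. Below is
a minimal sketch of that classification step, mirroring what
prepare_one_packet() in ipsec-secgw.c does after stripping the Ethernet
header; the helper name is_esp_pkt() is illustrative only:
.. code-block:: c

#include <netinet/ip.h>
#include <rte_mbuf.h>

/* Sketch: once the Ethernet header has been removed, inspect the
 * IPv4 protocol field; ESP packets take the IPsec path, everything
 * else stays on the plain IPv4 path. */
static inline int
is_esp_pkt(struct rte_mbuf *m)
{
	const struct ip *iph = rte_pktmbuf_mtod(m, const struct ip *);

	return iph->ip_v == IPVERSION && iph->ip_p == IPPROTO_ESP;
}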
Constraints
-----------
* IPv4 traffic
* ESP tunnel mode
* AES-CBC, HMAC-SHA1 and NULL
* Each SA must be handled by a unique lcore (1 RX queue per port)
* No chained mbufs
Compiling the Application
-------------------------
To compile the application:
#. Go to the sample application directory:
.. code-block:: console
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ipsec-secgw
#. Set the target (a default target is used if not specified). For example:
.. code-block:: console
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
#. Build the application:
.. code-block:: console
make
Running the Application
-----------------------
The application has a number of command line options:
.. code-block:: console
./build/ipsec-secgw [EAL options] -- -p PORTMASK -P -u PORTMASK --config
(port,queue,lcore)[,(port,queue,lcore)] --single-sa SAIDX --ep0|--ep1
where,
* -p PORTMASK: Hexadecimal bitmask of ports to configure
* -P: optional, sets all ports to promiscuous mode so that packets are
accepted regardless of the packet's Ethernet MAC destination address.
Without this option, only packets with the Ethernet MAC destination address
set to the Ethernet address of the port are accepted (the application
currently enables promiscuous mode by default).
* -u PORTMASK: hexadecimal bitmask of unprotected ports
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues
from which ports are mapped to which cores
* --single-sa SAIDX: use a single SA for outbound traffic, bypassing the SP
on both Inbound and Outbound. This option is meant for debugging/performance
purposes.
* --ep0: configure the app as Endpoint 0.
* --ep1: configure the app as Endpoint 1.
Either one of --ep0 or --ep1 *must* be specified.
The main purpose of these options is to easily configure two systems
back-to-back that would forward traffic through an IPsec tunnel.
The mapping of lcores to port/queues is similar to other l3fwd applications.
For example, given the following command line:
.. code-block:: console
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048
--vdev "cryptodev_null_pmd" -- -p 0xf -P -u 0x3
--config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" --ep0
where each option means:
* The -l option enables cores 20 and 21
* The -n option sets the number of memory channels to 4
* The --socket-mem option allocates 2GB on socket 1
* The --vdev "cryptodev_null_pmd" option creates a virtual NULL cryptodev PMD
* The -p option enables (detected) ports 0, 1, 2 and 3
* The -P option enables promiscuous mode
* The -u option sets ports 0 and 1 as unprotected, leaving 2 and 3 as protected
* The --config option enables one queue per port with the following mapping:
+----------+-----------+-----------+---------------------------------------+
| **Port** | **Queue** | **lcore** | **Description** |
| | | | |
+----------+-----------+-----------+---------------------------------------+
| 0 | 0 | 20 | Map queue 0 from port 0 to lcore 20. |
| | | | |
+----------+-----------+-----------+---------------------------------------+
| 1 | 0 | 20 | Map queue 0 from port 1 to lcore 20. |
| | | | |
+----------+-----------+-----------+---------------------------------------+
| 2 | 0 | 21 | Map queue 0 from port 2 to lcore 21. |
| | | | |
+----------+-----------+-----------+---------------------------------------+
| 3 | 0 | 21 | Map queue 0 from port 3 to lcore 21. |
| | | | |
+----------+-----------+-----------+---------------------------------------+
* The --ep0 option configures the app with a given set of SP, SA and Routing
entries, as explained below in more detail.
Refer to the *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.
The application does a best effort to "map" crypto devices to cores, with
hardware devices having priority.
This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
One way to force the use of virtual crypto devices is to whitelist only the
Ethernet devices needed, thereby implicitly blacklisting all hardware crypto
devices.
For example, something like the following command line:
.. code-block:: console
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048
-w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3
--vdev "cryptodev_aesni_mb_pmd" --vdev "cryptodev_null_pmd" --
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)"
--ep0
Configurations
--------------
The following sections provide some details on the default values used to
initialize the SP, SA and Routing tables.
Currently all the configuration is hard coded into the application.
Security Policy Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned in the overview, the Security Policies are implemented as ACL
rules.
The application defines two ACLs, one each for Inbound and Outbound, and
replicates them per socket in use.
Following are the default rules (a sketch of how a rule maps onto the ACL API
follows the tables):
Endpoint 0 Outbound Security Policies:
+---------+------------------+-----------+------------+
| **Src** | **Dst** | **proto** | **SA idx** |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.105.0/24 | Any | 5 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.106.0/24 | Any | 6 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.107.0/24 | Any | 7 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.108.0/24 | Any | 8 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.200.0/24 | Any | 9 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.250.0/24 | Any | BYPASS |
| | | | |
+---------+------------------+-----------+------------+
Endpoint 0 Inbound Security Policies:
+---------+------------------+-----------+------------+
| **Src** | **Dst** | **proto** | **SA idx** |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.115.0/24 | Any | 5 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.116.0/24 | Any | 6 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.117.0/24 | Any | 7 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.118.0/24 | Any | 8 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.210.0/24 | Any | 9 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.240.0/24 | Any | BYPASS |
| | | | |
+---------+------------------+-----------+------------+
Endpoint 1 Outbound Security Policies:
+---------+------------------+-----------+------------+
| **Src** | **Dst** | **proto** | **SA idx** |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.115.0/24 | Any | 5 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.116.0/24 | Any | 6 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.117.0/24 | Any | 7 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.118.0/24 | Any | 8 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.210.0/24 | Any | 9 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.240.0/24 | Any | BYPASS |
| | | | |
+---------+------------------+-----------+------------+
Endpoint 1 Inbound Security Policies:
+---------+------------------+-----------+------------+
| **Src** | **Dst** | **proto** | **SA idx** |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.105.0/24 | Any | 5 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.106.0/24 | Any | 6 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.107.0/24 | Any | 7 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.108.0/24 | Any | 8 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.200.0/24 | Any | 9 |
| | | | |
+---------+------------------+-----------+------------+
| Any | 192.168.250.0/24 | Any | BYPASS |
| | | | |
+---------+------------------+-----------+------------+
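Internally, each rule's ACL userdata carries the SA index (or the BYPASS and
DISCARD actions). As a rough sketch, the first Endpoint 0 outbound policy
could be expressed as the following rte_acl rule; the field layout
(ipv4_defs) and the destination-address field index are assumptions for
illustration, the real definitions live in sp.c:
.. code-block:: c

/* Sketch only: one ACL rule for "dst 192.168.105.0/24 -> SA index 5".
 * ipv4_defs and the dst-address field index (2) are assumed here. */
RTE_ACL_RULE_DEF(acl4_rules, RTE_DIM(ipv4_defs));

static struct acl4_rules ep0_out_rules[] = {
	{
		.data = { .userdata = 5,	/* SA index */
			  .category_mask = 1,
			  .priority = 1 },
		/* destination address: 192.168.105.0/24 */
		.field[2] = { .value.u32 = IPv4(192, 168, 105, 0),
			      .mask_range.u32 = 24 },
	},
};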
Security Association Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SAs are kept in an array table.
For Inbound, the SPI is used as index, modulo the table size.
This means that in a table of 100 SAs, SPIs 5 and 105 would use the same
index, which is not currently supported.
Note that this is not an issue for Outbound traffic, as we store the index,
not the SPI, in the Security Policy.
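A minimal sketch of that Inbound lookup (sa_table and IPSEC_SA_MAX_ENTRIES
are hypothetical names; the real code lives in sa.c):
.. code-block:: c

#define IPSEC_SA_MAX_ENTRIES 64	/* hypothetical table size */

/* Sketch: inbound SA lookup, using the SPI as index modulo the
 * table size. Colliding SPIs (e.g. 5 and 105 in a 100-entry table)
 * land on the same slot, hence such configurations are unsupported. */
static struct ipsec_sa *
inbound_sa_lookup(struct ipsec_sa *sa_table, uint32_t spi)
{
	struct ipsec_sa *sa = &sa_table[spi % IPSEC_SA_MAX_ENTRIES];

	return (sa->spi == spi) ? sa : NULL;
}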
All SAs configured with AES-CBC and HMAC-SHA1 share the same values for cipher
block size and key, and authentication digest size and key.
Following are the default values:
Endpoint 0 Outbound Security Associations:
+---------+------------+-----------+----------------+------------------+
| **SPI** | **Cipher** | **Auth** | **Tunnel src** | **Tunnel dst** |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 5 | AES-CBC | HMAC-SHA1 | 172.16.1.5 | 172.16.2.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 6 | AES-CBC | HMAC-SHA1 | 172.16.1.6 | 172.16.2.6 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 7 | AES-CBC | HMAC-SHA1 | 172.16.1.7 | 172.16.2.7 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 8 | AES-CBC | HMAC-SHA1 | 172.16.1.8 | 172.16.2.8 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 9 | NULL | NULL | 172.16.1.5 | 172.16.2.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
Endpoint 0 Inbound Security Associations:
+---------+------------+-----------+----------------+------------------+
| **SPI** | **Cipher** | **Auth** | **Tunnel src** | **Tunnel dst** |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 5 | AES-CBC | HMAC-SHA1 | 172.16.2.5 | 172.16.1.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 6 | AES-CBC | HMAC-SHA1 | 172.16.2.6 | 172.16.1.6 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 7 | AES-CBC | HMAC-SHA1 | 172.16.2.7 | 172.16.1.7 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 8 | AES-CBC | HMAC-SHA1 | 172.16.2.8 | 172.16.1.8 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 9 | NULL | NULL | 172.16.2.5 | 172.16.1.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
Endpoint 1 Outbound Security Associations:
+---------+------------+-----------+----------------+------------------+
| **SPI** | **Cipher** | **Auth** | **Tunnel src** | **Tunnel dst** |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 5 | AES-CBC | HMAC-SHA1 | 172.16.2.5 | 172.16.1.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 6 | AES-CBC | HMAC-SHA1 | 172.16.2.6 | 172.16.1.6 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 7 | AES-CBC | HMAC-SHA1 | 172.16.2.7 | 172.16.1.7 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 8 | AES-CBC | HMAC-SHA1 | 172.16.2.8 | 172.16.1.8 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 9 | NULL | NULL | 172.16.2.5 | 172.16.1.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
Endpoint 1 Inbound Security Associations:
+---------+------------+-----------+----------------+------------------+
| **SPI** | **Cipher** | **Auth** | **Tunnel src** | **Tunnel dst** |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 5 | AES-CBC | HMAC-SHA1 | 172.16.1.5 | 172.16.2.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 6 | AES-CBC | HMAC-SHA1 | 172.16.1.6 | 172.16.2.6 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 7 | AES-CBC | HMAC-SHA1 | 172.16.1.7 | 172.16.2.7 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 8 | AES-CBC | HMAC-SHA1 | 172.16.1.8 | 172.16.2.8 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
| 9 | NULL | NULL | 172.16.1.5 | 172.16.2.5 |
| | | | | |
+---------+------------+-----------+----------------+------------------+
Routing Initialization
~~~~~~~~~~~~~~~~~~~~~~
The Routing is implemented using an LPM table.
Following are the default values (a sketch of how entries are added with the
LPM API follows the tables):
Endpoint 0 Routing Table:
+------------------+----------+
| **Dst addr** | **Port** |
| | |
+------------------+----------+
| 172.16.2.5/32 | 0 |
| | |
+------------------+----------+
| 172.16.2.6/32 | 0 |
| | |
+------------------+----------+
| 172.16.2.7/32 | 1 |
| | |
+------------------+----------+
| 172.16.2.8/32 | 1 |
| | |
+------------------+----------+
| 192.168.115.0/24 | 2 |
| | |
+------------------+----------+
| 192.168.116.0/24 | 2 |
| | |
+------------------+----------+
| 192.168.117.0/24 | 3 |
| | |
+------------------+----------+
| 192.168.118.0/24 | 3 |
| | |
+------------------+----------+
| 192.168.210.0/24 | 2 |
| | |
+------------------+----------+
| 192.168.240.0/24 | 2 |
| | |
+------------------+----------+
| 192.168.250.0/24 | 0 |
| | |
+------------------+----------+
Endpoint 1 Routing Table:
+------------------+----------+
| **Dst addr** | **Port** |
| | |
+------------------+----------+
| 172.16.1.5/32 | 2 |
| | |
+------------------+----------+
| 172.16.1.6/32 | 2 |
| | |
+------------------+----------+
| 172.16.1.7/32 | 3 |
| | |
+------------------+----------+
| 172.16.1.8/32 | 3 |
| | |
+------------------+----------+
| 192.168.105.0/24 | 0 |
| | |
+------------------+----------+
| 192.168.106.0/24 | 0 |
| | |
+------------------+----------+
| 192.168.107.0/24 | 1 |
| | |
+------------------+----------+
| 192.168.108.0/24 | 1 |
| | |
+------------------+----------+
| 192.168.200.0/24 | 0 |
| | |
+------------------+----------+
| 192.168.240.0/24 | 2 |
| | |
+------------------+----------+
| 192.168.250.0/24 | 0 |
| | |
+------------------+----------+
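These entries are installed through the LPM library; a minimal sketch of how
the first and fifth Endpoint 0 routes would be added (table creation and
error handling omitted), with the egress port stored as the next hop:
.. code-block:: c

#include <rte_ip.h>
#include <rte_lpm.h>

/* Sketch: add two Endpoint 0 routes to an already-created LPM table;
 * the next-hop value is the egress port number. */
static void
add_ep0_routes(struct rte_lpm *lpm)
{
	rte_lpm_add(lpm, IPv4(172, 16, 2, 5), 32, 0);	/* -> port 0 */
	rte_lpm_add(lpm, IPv4(192, 168, 115, 0), 24, 2);	/* -> port 2 */
}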

@@ -82,5 +82,6 @@ DIRS-y += vmdq
DIRS-y += vmdq_dcb
DIRS-$(CONFIG_RTE_LIBRTE_POWER) += vm_power_manager
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += l2fwd-crypto
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += ipsec-secgw
include $(RTE_SDK)/mk/rte.extsubdir.mk

@@ -0,0 +1,58 @@
# BSD LICENSE
#
# Copyright(c) 2016 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Intel Corporation nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
APP = ipsec-secgw
CFLAGS += -O3 -gdwarf-2
CFLAGS += $(WERROR_FLAGS)
VPATH += $(SRCDIR)/librte_ipsec
#
# all sources are stored in SRCS-y
#
SRCS-y += ipsec.c
SRCS-y += esp.c
SRCS-y += sp.c
SRCS-y += sa.c
SRCS-y += rt.c
SRCS-y += ipsec-secgw.c
include $(RTE_SDK)/mk/rte.extapp.mk

examples/ipsec-secgw/esp.c (new file, 250 lines)

@@ -0,0 +1,250 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <stdint.h>
#include <stdlib.h>
#include <netinet/ip.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <rte_common.h>
#include <rte_memcpy.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_random.h>
#include "ipsec.h"
#include "esp.h"
#include "ipip.h"
#define IP_ESP_HDR_SZ (sizeof(struct ip) + sizeof(struct esp_hdr))
static inline void
random_iv_u64(uint64_t *buf, uint16_t n)
{
unsigned left = n & 0x7;
unsigned i;
IPSEC_ASSERT((n & 0x3) == 0);
for (i = 0; i < (n >> 3); i++)
buf[i] = rte_rand();
if (left)
*((uint32_t *)&buf[i]) = (uint32_t)lrand48();
}
/* IPv4 Tunnel */
int
esp4_tunnel_inbound_pre_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop)
{
int32_t payload_len;
struct rte_crypto_sym_op *sym_cop;
IPSEC_ASSERT(m != NULL);
IPSEC_ASSERT(sa != NULL);
IPSEC_ASSERT(cop != NULL);
payload_len = rte_pktmbuf_pkt_len(m) - IP_ESP_HDR_SZ - sa->iv_len -
sa->digest_len;
if ((payload_len & (sa->block_size - 1)) || (payload_len <= 0)) {
IPSEC_LOG(DEBUG, IPSEC_ESP, "payload %d not multiple of %u\n",
payload_len, sa->block_size);
return -EINVAL;
}
sym_cop = (struct rte_crypto_sym_op *)(cop + 1);
sym_cop->m_src = m;
sym_cop->cipher.data.offset = IP_ESP_HDR_SZ + sa->iv_len;
sym_cop->cipher.data.length = payload_len;
sym_cop->cipher.iv.data = rte_pktmbuf_mtod_offset(m, void*,
IP_ESP_HDR_SZ);
sym_cop->cipher.iv.phys_addr = rte_pktmbuf_mtophys_offset(m,
IP_ESP_HDR_SZ);
sym_cop->cipher.iv.length = sa->iv_len;
sym_cop->auth.data.offset = sizeof(struct ip);
if (sa->auth_algo == RTE_CRYPTO_AUTH_AES_GCM)
sym_cop->auth.data.length = sizeof(struct esp_hdr);
else
sym_cop->auth.data.length = sizeof(struct esp_hdr) +
sa->iv_len + payload_len;
sym_cop->auth.digest.data = rte_pktmbuf_mtod_offset(m, void*,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.length = sa->digest_len;
return 0;
}
int
esp4_tunnel_inbound_post_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop)
{
uint8_t *nexthdr, *pad_len;
uint8_t *padding;
uint16_t i;
IPSEC_ASSERT(m != NULL);
IPSEC_ASSERT(sa != NULL);
IPSEC_ASSERT(cop != NULL);
if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
IPSEC_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
return -1;
}
nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
pad_len = nexthdr - 1;
padding = pad_len - *pad_len;
for (i = 0; i < *pad_len; i++) {
if (padding[i] != i) {
IPSEC_LOG(ERR, IPSEC_ESP, "invalid pad_len field\n");
return -EINVAL;
}
}
if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
IPSEC_LOG(ERR, IPSEC_ESP,
"failed to remove pad_len + digest\n");
return -EINVAL;
}
return ip4ip_inbound(m, sizeof(struct esp_hdr) + sa->iv_len);
}
int
esp4_tunnel_outbound_pre_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop)
{
uint16_t pad_payload_len, pad_len;
struct ip *ip;
struct esp_hdr *esp;
int i;
char *padding;
struct rte_crypto_sym_op *sym_cop;
IPSEC_ASSERT(m != NULL);
IPSEC_ASSERT(sa != NULL);
IPSEC_ASSERT(cop != NULL);
/* Payload length */
pad_payload_len = RTE_ALIGN_CEIL(rte_pktmbuf_pkt_len(m) + 2,
sa->block_size);
pad_len = pad_payload_len - rte_pktmbuf_pkt_len(m);
rte_prefetch0(rte_pktmbuf_mtod_offset(m, void *,
rte_pktmbuf_pkt_len(m)));
/* Check maximum packet size */
if (unlikely(IP_ESP_HDR_SZ + sa->iv_len + pad_payload_len +
sa->digest_len > IP_MAXPACKET)) {
IPSEC_LOG(DEBUG, IPSEC_ESP, "ipsec packet is too big\n");
return -EINVAL;
}
padding = rte_pktmbuf_append(m, pad_len + sa->digest_len);
IPSEC_ASSERT(padding != NULL);
ip = ip4ip_outbound(m, sizeof(struct esp_hdr) + sa->iv_len,
sa->src, sa->dst);
esp = (struct esp_hdr *)(ip + 1);
esp->spi = sa->spi;
esp->seq = htonl(sa->seq++);
IPSEC_LOG(DEBUG, IPSEC_ESP, "pktlen %u\n", rte_pktmbuf_pkt_len(m));
/* Fill pad_len using default sequential scheme */
for (i = 0; i < pad_len - 2; i++)
padding[i] = i + 1;
padding[pad_len - 2] = pad_len - 2;
padding[pad_len - 1] = IPPROTO_IPIP;
sym_cop = (struct rte_crypto_sym_op *)(cop + 1);
sym_cop->m_src = m;
sym_cop->cipher.data.offset = IP_ESP_HDR_SZ + sa->iv_len;
sym_cop->cipher.data.length = pad_payload_len;
sym_cop->cipher.iv.data = rte_pktmbuf_mtod_offset(m, uint8_t *,
IP_ESP_HDR_SZ);
sym_cop->cipher.iv.phys_addr = rte_pktmbuf_mtophys_offset(m,
IP_ESP_HDR_SZ);
sym_cop->cipher.iv.length = sa->iv_len;
sym_cop->auth.data.offset = sizeof(struct ip);
sym_cop->auth.data.length = sizeof(struct esp_hdr) + sa->iv_len +
pad_payload_len;
sym_cop->auth.digest.data = rte_pktmbuf_mtod_offset(m, uint8_t *,
IP_ESP_HDR_SZ + sa->iv_len + pad_payload_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
IP_ESP_HDR_SZ + sa->iv_len + pad_payload_len);
sym_cop->auth.digest.length = sa->digest_len;
if (sa->cipher_algo == RTE_CRYPTO_CIPHER_AES_CBC)
random_iv_u64((uint64_t *)sym_cop->cipher.iv.data,
sym_cop->cipher.iv.length);
return 0;
}
int
esp4_tunnel_outbound_post_crypto(struct rte_mbuf *m __rte_unused,
struct ipsec_sa *sa __rte_unused,
struct rte_crypto_op *cop)
{
IPSEC_ASSERT(m != NULL);
IPSEC_ASSERT(sa != NULL);
IPSEC_ASSERT(cop != NULL);
if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
IPSEC_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
return -1;
}
return 0;
}

@@ -0,0 +1,66 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __RTE_IPSEC_XFORM_ESP_H__
#define __RTE_IPSEC_XFORM_ESP_H__
struct mbuf;
/* RFC4303 */
struct esp_hdr {
uint32_t spi;
uint32_t seq;
/* Payload */
/* Padding */
/* Pad Length */
/* Next Header */
/* Integrity Check Value - ICV */
};
/* IPv4 Tunnel */
int
esp4_tunnel_inbound_pre_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop);
int
esp4_tunnel_inbound_post_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop);
int
esp4_tunnel_outbound_pre_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop);
int
esp4_tunnel_outbound_post_crypto(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop);
#endif /* __RTE_IPSEC_XFORM_ESP_H__ */

examples/ipsec-secgw/ipip.h (new file, 103 lines)

@@ -0,0 +1,103 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __IPIP_H__
#define __IPIP_H__
#include <stdint.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <rte_mbuf.h>
#define IPV6_VERSION (6)
static inline struct ip *
ip4ip_outbound(struct rte_mbuf *m, uint32_t offset, uint32_t src, uint32_t dst)
{
struct ip *inip, *outip;
inip = rte_pktmbuf_mtod(m, struct ip*);
IPSEC_ASSERT(inip->ip_v == IPVERSION || inip->ip_v == IPV6_VERSION);
offset += sizeof(struct ip);
outip = (struct ip *)rte_pktmbuf_prepend(m, offset);
IPSEC_ASSERT(outip != NULL);
/* Per RFC4301 5.1.2.1 */
outip->ip_v = IPVERSION;
outip->ip_hl = 5;
outip->ip_tos = inip->ip_tos;
outip->ip_len = htons(rte_pktmbuf_data_len(m));
outip->ip_id = 0;
outip->ip_off = 0;
outip->ip_ttl = IPDEFTTL;
outip->ip_p = IPPROTO_ESP;
outip->ip_src.s_addr = src;
outip->ip_dst.s_addr = dst;
return outip;
}
static inline int
ip4ip_inbound(struct rte_mbuf *m, uint32_t offset)
{
struct ip *inip;
struct ip *outip;
outip = rte_pktmbuf_mtod(m, struct ip*);
IPSEC_ASSERT(outip->ip_v == IPVERSION);
offset += sizeof(struct ip);
inip = (struct ip *)rte_pktmbuf_adj(m, offset);
IPSEC_ASSERT(inip->ip_v == IPVERSION || inip->ip_v == IPV6_VERSION);
/* Check packet is still bigger than IP header (inner) */
IPSEC_ASSERT(rte_pktmbuf_pkt_len(m) > sizeof(struct ip));
/* RFC4301 5.1.2.1 Note 6 */
if ((inip->ip_tos & (IPTOS_ECN_ECT0 | IPTOS_ECN_ECT1)) &&
((outip->ip_tos & IPTOS_ECN_CE) == IPTOS_ECN_CE))
inip->ip_tos |= IPTOS_ECN_CE;
return 0;
}
#endif /* __IPIP_H__ */

@@ -0,0 +1,1360 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>
#include <sys/types.h>
#include <string.h>
#include <sys/queue.h>
#include <stdarg.h>
#include <errno.h>
#include <getopt.h>
#include <rte_common.h>
#include <rte_byteorder.h>
#include <rte_log.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_atomic.h>
#include <rte_cycles.h>
#include <rte_prefetch.h>
#include <rte_lcore.h>
#include <rte_per_lcore.h>
#include <rte_branch_prediction.h>
#include <rte_interrupts.h>
#include <rte_pci.h>
#include <rte_random.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_acl.h>
#include <rte_lpm.h>
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_cryptodev.h>
#include "ipsec.h"
#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
#define MAX_JUMBO_PKT_LEN 9600
#define MEMPOOL_CACHE_SIZE 256
#define NB_MBUF (32000)
#define CDEV_MAP_ENTRIES 1024
#define CDEV_MP_NB_OBJS 2048
#define CDEV_MP_CACHE_SZ 64
#define MAX_QUEUE_PAIRS 1
#define OPTION_CONFIG "config"
#define OPTION_SINGLE_SA "single-sa"
#define OPTION_EP0 "ep0"
#define OPTION_EP1 "ep1"
#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
#define NB_SOCKETS 4
/* Configure how many packets ahead to prefetch, when reading packets */
#define PREFETCH_OFFSET 3
#define MAX_RX_QUEUE_PER_LCORE 16
#define MAX_LCORE_PARAMS 1024
#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << (port)))
/*
* Configurable number of RX/TX ring descriptors
*/
#define IPSEC_SECGW_RX_DESC_DEFAULT 128
#define IPSEC_SECGW_TX_DESC_DEFAULT 512
static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT;
static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT;
#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN
#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \
(((uint64_t)((a) & 0xff) << 56) | \
((uint64_t)((b) & 0xff) << 48) | \
((uint64_t)((c) & 0xff) << 40) | \
((uint64_t)((d) & 0xff) << 32) | \
((uint64_t)((e) & 0xff) << 24) | \
((uint64_t)((f) & 0xff) << 16) | \
((uint64_t)((g) & 0xff) << 8) | \
((uint64_t)(h) & 0xff))
#else
#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \
(((uint64_t)((h) & 0xff) << 56) | \
((uint64_t)((g) & 0xff) << 48) | \
((uint64_t)((f) & 0xff) << 40) | \
((uint64_t)((e) & 0xff) << 32) | \
((uint64_t)((d) & 0xff) << 24) | \
((uint64_t)((c) & 0xff) << 16) | \
((uint64_t)((b) & 0xff) << 8) | \
((uint64_t)(a) & 0xff))
#endif
#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0))
#define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \
addr.addr_bytes[0], addr.addr_bytes[1], \
addr.addr_bytes[2], addr.addr_bytes[3], \
addr.addr_bytes[4], addr.addr_bytes[5], \
0, 0)
/* port/source ethernet addr and destination ethernet addr */
struct ethaddr_info {
uint64_t src, dst;
};
struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = {
{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) },
{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) },
{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) },
{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) }
};
/* mask of enabled ports */
static uint32_t enabled_port_mask;
static uint32_t unprotected_port_mask;
static int32_t promiscuous_on = 1;
static int32_t numa_on = 1; /**< NUMA is enabled by default. */
static int32_t ep = -1; /**< Endpoint configuration (0 or 1) */
static uint32_t nb_lcores;
static uint32_t single_sa;
static uint32_t single_sa_idx;
struct lcore_rx_queue {
uint8_t port_id;
uint8_t queue_id;
} __rte_cache_aligned;
struct lcore_params {
uint8_t port_id;
uint8_t queue_id;
uint8_t lcore_id;
} __rte_cache_aligned;
static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
static struct lcore_params *lcore_params;
static uint16_t nb_lcore_params;
static struct rte_hash *cdev_map_in;
static struct rte_hash *cdev_map_out;
struct buffer {
uint16_t len;
struct rte_mbuf *m_table[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
};
struct lcore_conf {
uint16_t nb_rx_queue;
struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE];
uint16_t tx_queue_id[RTE_MAX_ETHPORTS];
struct buffer tx_mbufs[RTE_MAX_ETHPORTS];
struct ipsec_ctx inbound;
struct ipsec_ctx outbound;
struct rt_ctx *rt_ctx;
} __rte_cache_aligned;
static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.header_split = 0, /**< Header Split disabled */
.hw_ip_checksum = 1, /**< IP checksum offload enabled */
.hw_vlan_filter = 0, /**< VLAN filtering disabled */
.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
.hw_strip_crc = 0, /**< CRC stripped by hardware */
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
ETH_RSS_TCP | ETH_RSS_SCTP,
},
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
},
};
static struct socket_ctx socket_ctx[NB_SOCKETS];
struct traffic_type {
const uint8_t *data[MAX_PKT_BURST * 2];
struct rte_mbuf *pkts[MAX_PKT_BURST * 2];
uint32_t res[MAX_PKT_BURST * 2];
uint32_t num;
};
struct ipsec_traffic {
struct traffic_type ipsec4;
struct traffic_type ipv4;
};
static inline void
prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
{
uint8_t *nlp;
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
rte_pktmbuf_adj(pkt, ETHER_HDR_LEN);
nlp = rte_pktmbuf_mtod_offset(pkt, uint8_t *,
offsetof(struct ip, ip_p));
if (*nlp == IPPROTO_ESP)
t->ipsec4.pkts[(t->ipsec4.num)++] = pkt;
else {
t->ipv4.data[t->ipv4.num] = nlp;
t->ipv4.pkts[(t->ipv4.num)++] = pkt;
}
} else {
/* Unknown/Unsupported type, drop the packet */
rte_pktmbuf_free(pkt);
}
}
static inline void
prepare_traffic(struct rte_mbuf **pkts, struct ipsec_traffic *t,
uint16_t nb_pkts)
{
int32_t i;
t->ipsec4.num = 0;
t->ipv4.num = 0;
for (i = 0; i < (nb_pkts - PREFETCH_OFFSET); i++) {
rte_prefetch0(rte_pktmbuf_mtod(pkts[i + PREFETCH_OFFSET],
void *));
prepare_one_packet(pkts[i], t);
}
/* Process left packets */
for (; i < nb_pkts; i++)
prepare_one_packet(pkts[i], t);
}
static inline void
prepare_tx_pkt(struct rte_mbuf *pkt, uint8_t port)
{
pkt->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_IPV4;
pkt->l3_len = sizeof(struct ip);
pkt->l2_len = ETHER_HDR_LEN;
struct ether_hdr *ethhdr = (struct ether_hdr *)rte_pktmbuf_prepend(pkt,
ETHER_HDR_LEN);
ethhdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
memcpy(&ethhdr->s_addr, &ethaddr_tbl[port].src,
sizeof(struct ether_addr));
memcpy(&ethhdr->d_addr, &ethaddr_tbl[port].dst,
sizeof(struct ether_addr));
}
static inline void
prepare_tx_burst(struct rte_mbuf *pkts[], uint16_t nb_pkts, uint8_t port)
{
int32_t i;
const int32_t prefetch_offset = 2;
for (i = 0; i < (nb_pkts - prefetch_offset); i++) {
rte_prefetch0(pkts[i + prefetch_offset]->cacheline1);
prepare_tx_pkt(pkts[i], port);
}
/* Process left packets */
for (; i < nb_pkts; i++)
prepare_tx_pkt(pkts[i], port);
}
/* Send burst of packets on an output interface */
static inline int32_t
send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
{
struct rte_mbuf **m_table;
int32_t ret;
uint16_t queueid;
queueid = qconf->tx_queue_id[port];
m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
prepare_tx_burst(m_table, n, port);
ret = rte_eth_tx_burst(port, queueid, m_table, n);
if (unlikely(ret < n)) {
do {
rte_pktmbuf_free(m_table[ret]);
} while (++ret < n);
}
return 0;
}
/* Enqueue a single packet, and send burst if queue is filled */
static inline int32_t
send_single_packet(struct rte_mbuf *m, uint8_t port)
{
uint32_t lcore_id;
uint16_t len;
struct lcore_conf *qconf;
lcore_id = rte_lcore_id();
qconf = &lcore_conf[lcore_id];
len = qconf->tx_mbufs[port].len;
qconf->tx_mbufs[port].m_table[len] = m;
len++;
/* enough pkts to be sent */
if (unlikely(len == MAX_PKT_BURST)) {
send_burst(qconf, MAX_PKT_BURST, port);
len = 0;
}
qconf->tx_mbufs[port].len = len;
return 0;
}
static inline void
process_pkts_inbound(struct ipsec_ctx *ipsec_ctx,
struct ipsec_traffic *traffic)
{
struct rte_mbuf *m;
uint16_t idx, nb_pkts_in, i, j;
uint32_t sa_idx, res;
nb_pkts_in = ipsec_inbound(ipsec_ctx, traffic->ipsec4.pkts,
traffic->ipsec4.num, MAX_PKT_BURST);
/* SP/ACL Inbound check ipsec and ipv4 */
for (i = 0; i < nb_pkts_in; i++) {
idx = traffic->ipv4.num++;
m = traffic->ipsec4.pkts[i];
traffic->ipv4.pkts[idx] = m;
traffic->ipv4.data[idx] = rte_pktmbuf_mtod_offset(m,
uint8_t *, offsetof(struct ip, ip_p));
}
rte_acl_classify((struct rte_acl_ctx *)ipsec_ctx->sp_ctx,
traffic->ipv4.data, traffic->ipv4.res,
traffic->ipv4.num, DEFAULT_MAX_CATEGORIES);
j = 0;
for (i = 0; i < traffic->ipv4.num - nb_pkts_in; i++) {
m = traffic->ipv4.pkts[i];
res = traffic->ipv4.res[i];
if (res & ~BYPASS) {
rte_pktmbuf_free(m);
continue;
}
traffic->ipv4.pkts[j++] = m;
}
/* Check return SA SPI matches pkt SPI */
for ( ; i < traffic->ipv4.num; i++) {
m = traffic->ipv4.pkts[i];
sa_idx = traffic->ipv4.res[i] & PROTECT_MASK;
if (sa_idx == 0 || !inbound_sa_check(ipsec_ctx->sa_ctx,
m, sa_idx)) {
rte_pktmbuf_free(m);
continue;
}
traffic->ipv4.pkts[j++] = m;
}
traffic->ipv4.num = j;
}
static inline void
process_pkts_outbound(struct ipsec_ctx *ipsec_ctx,
struct ipsec_traffic *traffic)
{
struct rte_mbuf *m;
uint16_t idx, nb_pkts_out, i, j;
uint32_t sa_idx, res;
rte_acl_classify((struct rte_acl_ctx *)ipsec_ctx->sp_ctx,
traffic->ipv4.data, traffic->ipv4.res,
traffic->ipv4.num, DEFAULT_MAX_CATEGORIES);
/* Drop any IPsec traffic from protected ports */
for (i = 0; i < traffic->ipsec4.num; i++)
rte_pktmbuf_free(traffic->ipsec4.pkts[i]);
traffic->ipsec4.num = 0;
j = 0;
for (i = 0; i < traffic->ipv4.num; i++) {
m = traffic->ipv4.pkts[i];
res = traffic->ipv4.res[i];
sa_idx = res & PROTECT_MASK;
if ((res == 0) || (res & DISCARD))
rte_pktmbuf_free(m);
else if (sa_idx != 0) {
traffic->ipsec4.res[traffic->ipsec4.num] = sa_idx;
traffic->ipsec4.pkts[traffic->ipsec4.num++] = m;
} else /* BYPASS */
traffic->ipv4.pkts[j++] = m;
}
traffic->ipv4.num = j;
nb_pkts_out = ipsec_outbound(ipsec_ctx, traffic->ipsec4.pkts,
traffic->ipsec4.res, traffic->ipsec4.num,
MAX_PKT_BURST);
for (i = 0; i < nb_pkts_out; i++) {
idx = traffic->ipv4.num++;
m = traffic->ipsec4.pkts[i];
traffic->ipv4.pkts[idx] = m;
}
}
static inline void
process_pkts_inbound_nosp(struct ipsec_ctx *ipsec_ctx,
struct ipsec_traffic *traffic)
{
uint16_t nb_pkts_in, i;
/* Drop any IPv4 traffic from unprotected ports */
for (i = 0; i < traffic->ipv4.num; i++)
rte_pktmbuf_free(traffic->ipv4.pkts[i]);
traffic->ipv4.num = 0;
nb_pkts_in = ipsec_inbound(ipsec_ctx, traffic->ipsec4.pkts,
traffic->ipsec4.num, MAX_PKT_BURST);
for (i = 0; i < nb_pkts_in; i++)
traffic->ipv4.pkts[i] = traffic->ipsec4.pkts[i];
traffic->ipv4.num = nb_pkts_in;
}
static inline void
process_pkts_outbound_nosp(struct ipsec_ctx *ipsec_ctx,
struct ipsec_traffic *traffic)
{
uint16_t nb_pkts_out, i;
/* Drop any IPsec traffic from protected ports */
for (i = 0; i < traffic->ipsec4.num; i++)
rte_pktmbuf_free(traffic->ipsec4.pkts[i]);
traffic->ipsec4.num = 0;
for (i = 0; i < traffic->ipv4.num; i++)
traffic->ipv4.res[i] = single_sa_idx;
nb_pkts_out = ipsec_outbound(ipsec_ctx, traffic->ipv4.pkts,
traffic->ipv4.res, traffic->ipv4.num,
MAX_PKT_BURST);
traffic->ipv4.num = nb_pkts_out;
}
static inline void
route_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
{
uint32_t hop[MAX_PKT_BURST * 2];
uint32_t dst_ip[MAX_PKT_BURST * 2];
uint16_t i, offset;
if (nb_pkts == 0)
return;
for (i = 0; i < nb_pkts; i++) {
offset = offsetof(struct ip, ip_dst);
dst_ip[i] = *rte_pktmbuf_mtod_offset(pkts[i],
uint32_t *, offset);
dst_ip[i] = rte_be_to_cpu_32(dst_ip[i]);
}
rte_lpm_lookup_bulk((struct rte_lpm *)rt_ctx, dst_ip, hop, nb_pkts);
for (i = 0; i < nb_pkts; i++) {
if ((hop[i] & RTE_LPM_LOOKUP_SUCCESS) == 0) {
rte_pktmbuf_free(pkts[i]);
continue;
}
send_single_packet(pkts[i], hop[i] & 0xff);
}
}
static inline void
process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts,
uint8_t nb_pkts, uint8_t portid)
{
struct ipsec_traffic traffic;
prepare_traffic(pkts, &traffic, nb_pkts);
if (single_sa) {
if (UNPROTECTED_PORT(portid))
process_pkts_inbound_nosp(&qconf->inbound, &traffic);
else
process_pkts_outbound_nosp(&qconf->outbound, &traffic);
} else {
if (UNPROTECTED_PORT(portid))
process_pkts_inbound(&qconf->inbound, &traffic);
else
process_pkts_outbound(&qconf->outbound, &traffic);
}
route_pkts(qconf->rt_ctx, traffic.ipv4.pkts, traffic.ipv4.num);
}
static inline void
drain_buffers(struct lcore_conf *qconf)
{
struct buffer *buf;
uint32_t portid;
for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
buf = &qconf->tx_mbufs[portid];
if (buf->len == 0)
continue;
send_burst(qconf, buf->len, portid);
buf->len = 0;
}
}
/* main processing loop */
static int32_t
main_loop(__attribute__((unused)) void *dummy)
{
struct rte_mbuf *pkts[MAX_PKT_BURST];
uint32_t lcore_id;
uint64_t prev_tsc, diff_tsc, cur_tsc;
int32_t i, nb_rx;
uint8_t portid, queueid;
struct lcore_conf *qconf;
int32_t socket_id;
const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
/ US_PER_S * BURST_TX_DRAIN_US;
struct lcore_rx_queue *rxql;
prev_tsc = 0;
lcore_id = rte_lcore_id();
qconf = &lcore_conf[lcore_id];
rxql = qconf->rx_queue_list;
socket_id = rte_lcore_to_socket_id(lcore_id);
qconf->rt_ctx = socket_ctx[socket_id].rt_ipv4;
qconf->inbound.sp_ctx = socket_ctx[socket_id].sp_ipv4_in;
qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_ipv4_in;
qconf->inbound.cdev_map = cdev_map_in;
qconf->outbound.sp_ctx = socket_ctx[socket_id].sp_ipv4_out;
qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_ipv4_out;
qconf->outbound.cdev_map = cdev_map_out;
if (qconf->nb_rx_queue == 0) {
RTE_LOG(INFO, IPSEC, "lcore %u has nothing to do\n", lcore_id);
return 0;
}
RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id);
for (i = 0; i < qconf->nb_rx_queue; i++) {
portid = rxql[i].port_id;
queueid = rxql[i].queue_id;
RTE_LOG(INFO, IPSEC,
" -- lcoreid=%u portid=%hhu rxqueueid=%hhu\n",
lcore_id, portid, queueid);
}
while (1) {
cur_tsc = rte_rdtsc();
/* TX queue buffer drain */
diff_tsc = cur_tsc - prev_tsc;
if (unlikely(diff_tsc > drain_tsc)) {
drain_buffers(qconf);
prev_tsc = cur_tsc;
}
/* Read packet from RX queues */
for (i = 0; i < qconf->nb_rx_queue; ++i) {
portid = rxql[i].port_id;
queueid = rxql[i].queue_id;
nb_rx = rte_eth_rx_burst(portid, queueid,
pkts, MAX_PKT_BURST);
if (nb_rx > 0)
process_pkts(qconf, pkts, nb_rx, portid);
}
}
}
static int32_t
check_params(void)
{
uint8_t lcore, portid, nb_ports;
uint16_t i;
int32_t socket_id;
if (lcore_params == NULL) {
printf("Error: No port/queue/core mappings\n");
return -1;
}
nb_ports = rte_eth_dev_count();
if (nb_ports > RTE_MAX_ETHPORTS)
nb_ports = RTE_MAX_ETHPORTS;
for (i = 0; i < nb_lcore_params; ++i) {
lcore = lcore_params[i].lcore_id;
if (!rte_lcore_is_enabled(lcore)) {
printf("error: lcore %hhu is not enabled in "
"lcore mask\n", lcore);
return -1;
}
socket_id = rte_lcore_to_socket_id(lcore);
if (socket_id != 0 && numa_on == 0) {
printf("warning: lcore %hhu is on socket %d "
"with numa off\n",
lcore, socket_id);
}
portid = lcore_params[i].port_id;
if ((enabled_port_mask & (1 << portid)) == 0) {
printf("port %u is not enabled in port mask\n", portid);
return -1;
}
if (portid >= nb_ports) {
printf("port %u is not present on the board\n", portid);
return -1;
}
}
return 0;
}
static uint8_t
get_port_nb_rx_queues(const uint8_t port)
{
int32_t queue = -1;
uint16_t i;
for (i = 0; i < nb_lcore_params; ++i) {
if (lcore_params[i].port_id == port &&
lcore_params[i].queue_id > queue)
queue = lcore_params[i].queue_id;
}
return (uint8_t)(++queue);
}
static int32_t
init_lcore_rx_queues(void)
{
uint16_t i, nb_rx_queue;
uint8_t lcore;
for (i = 0; i < nb_lcore_params; ++i) {
lcore = lcore_params[i].lcore_id;
nb_rx_queue = lcore_conf[lcore].nb_rx_queue;
if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) {
printf("error: too many queues (%u) for lcore: %u\n",
nb_rx_queue + 1, lcore);
return -1;
}
lcore_conf[lcore].rx_queue_list[nb_rx_queue].port_id =
lcore_params[i].port_id;
lcore_conf[lcore].rx_queue_list[nb_rx_queue].queue_id =
lcore_params[i].queue_id;
lcore_conf[lcore].nb_rx_queue++;
}
return 0;
}
/* display usage */
static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK -P -u PORTMASK"
" --"OPTION_CONFIG" (port,queue,lcore)[,(port,queue,lcore]"
" --single-sa SAIDX --ep0|--ep1\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
" -u PORTMASK: hexadecimal bitmask of unprotected ports\n"
" --"OPTION_CONFIG": (port,queue,lcore): "
"rx queues configuration\n"
" --single-sa SAIDX: use single SA index for outbound, "
"bypassing the SP\n"
" --ep0: Configure as Endpoint 0\n"
" --ep1: Configure as Endpoint 1\n", prgname);
}
static int32_t
parse_portmask(const char *portmask)
{
char *end = NULL;
unsigned long pm;
/* parse hexadecimal string */
pm = strtoul(portmask, &end, 16);
if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
return -1;
if ((pm == 0) && errno)
return -1;
return pm;
}
static int32_t
parse_decimal(const char *str)
{
char *end = NULL;
unsigned long num;
num = strtoul(str, &end, 10);
if ((str[0] == '\0') || (end == NULL) || (*end != '\0'))
return -1;
return num;
}
static int32_t
parse_config(const char *q_arg)
{
char s[256];
const char *p, *p0 = q_arg;
char *end;
enum fieldnames {
FLD_PORT = 0,
FLD_QUEUE,
FLD_LCORE,
_NUM_FLD
};
int long int_fld[_NUM_FLD];
char *str_fld[_NUM_FLD];
int32_t i;
uint32_t size;
nb_lcore_params = 0;
while ((p = strchr(p0, '(')) != NULL) {
++p;
p0 = strchr(p, ')');
if (p0 == NULL)
return -1;
size = p0 - p;
if (size >= sizeof(s))
return -1;
snprintf(s, sizeof(s), "%.*s", size, p);
if (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') !=
_NUM_FLD)
return -1;
for (i = 0; i < _NUM_FLD; i++) {
errno = 0;
int_fld[i] = strtoul(str_fld[i], &end, 0);
if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
return -1;
}
if (nb_lcore_params >= MAX_LCORE_PARAMS) {
printf("exceeded max number of lcore params: %hu\n",
nb_lcore_params);
return -1;
}
lcore_params_array[nb_lcore_params].port_id =
(uint8_t)int_fld[FLD_PORT];
lcore_params_array[nb_lcore_params].queue_id =
(uint8_t)int_fld[FLD_QUEUE];
lcore_params_array[nb_lcore_params].lcore_id =
(uint8_t)int_fld[FLD_LCORE];
++nb_lcore_params;
}
lcore_params = lcore_params_array;
return 0;
}
#define __STRNCMP(name, opt) (!strncmp(name, opt, sizeof(opt)))
static int32_t
parse_args_long_options(struct option *lgopts, int32_t option_index)
{
int32_t ret = -1;
const char *optname = lgopts[option_index].name;
if (__STRNCMP(optname, OPTION_CONFIG)) {
ret = parse_config(optarg);
if (ret)
printf("invalid config\n");
}
if (__STRNCMP(optname, OPTION_SINGLE_SA)) {
ret = parse_decimal(optarg);
if (ret != -1) {
single_sa = 1;
single_sa_idx = ret;
printf("Configured with single SA index %u\n",
single_sa_idx);
ret = 0;
}
}
if (__STRNCMP(optname, OPTION_EP0)) {
printf("endpoint 0\n");
ep = 0;
ret = 0;
}
if (__STRNCMP(optname, OPTION_EP1)) {
printf("endpoint 1\n");
ep = 1;
ret = 0;
}
return ret;
}
#undef __STRNCMP
static int32_t
parse_args(int32_t argc, char **argv)
{
int32_t opt, ret;
char **argvopt;
int32_t option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
{OPTION_CONFIG, 1, 0, 0},
{OPTION_SINGLE_SA, 1, 0, 0},
{OPTION_EP0, 0, 0, 0},
{OPTION_EP1, 0, 0, 0},
{NULL, 0, 0, 0}
};
argvopt = argv;
while ((opt = getopt_long(argc, argvopt, "p:Pu:",
lgopts, &option_index)) != EOF) {
switch (opt) {
case 'p':
enabled_port_mask = parse_portmask(optarg);
if (enabled_port_mask == 0) {
printf("invalid portmask\n");
print_usage(prgname);
return -1;
}
break;
case 'P':
printf("Promiscuous mode selected\n");
promiscuous_on = 1;
break;
case 'u':
unprotected_port_mask = parse_portmask(optarg);
if (unprotected_port_mask == 0) {
printf("invalid unprotected portmask\n");
print_usage(prgname);
return -1;
}
break;
case 0:
if (parse_args_long_options(lgopts, option_index)) {
print_usage(prgname);
return -1;
}
break;
default:
print_usage(prgname);
return -1;
}
}
if (optind >= 0)
argv[optind-1] = prgname;
ret = optind-1;
optind = 0; /* reset getopt lib */
return ret;
}
static void
print_ethaddr(const char *name, const struct ether_addr *eth_addr)
{
char buf[ETHER_ADDR_FMT_SIZE];
ether_format_addr(buf, ETHER_ADDR_FMT_SIZE, eth_addr);
printf("%s%s", name, buf);
}
/* Check the link status of all ports for up to 9s, printing the status at the end */
static void
check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
{
#define CHECK_INTERVAL 100 /* 100ms */
#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
uint8_t portid, count, all_ports_up, print_flag = 0;
struct rte_eth_link link;
printf("\nChecking link status");
fflush(stdout);
for (count = 0; count <= MAX_CHECK_TIME; count++) {
all_ports_up = 1;
for (portid = 0; portid < port_num; portid++) {
if ((port_mask & (1 << portid)) == 0)
continue;
memset(&link, 0, sizeof(link));
rte_eth_link_get_nowait(portid, &link);
/* print link status if flag set */
if (print_flag == 1) {
if (link.link_status)
printf("Port %d Link Up - speed %u "
"Mbps - %s\n", (uint8_t)portid,
(uint32_t)link.link_speed,
(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
printf("Port %d Link Down\n",
(uint8_t)portid);
continue;
}
/* clear all_ports_up flag if any link down */
if (link.link_status == 0) {
all_ports_up = 0;
break;
}
}
/* after finally printing all link status, get out */
if (print_flag == 1)
break;
if (all_ports_up == 0) {
printf(".");
fflush(stdout);
rte_delay_ms(CHECK_INTERVAL);
}
/* set the print_flag if all ports up or timeout */
if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
print_flag = 1;
printf("done\n");
}
}
}
static int32_t
add_mapping(struct rte_hash *map, const char *str, uint16_t cdev_id,
uint16_t qp, struct lcore_params *params,
struct ipsec_ctx *ipsec_ctx,
const struct rte_cryptodev_capabilities *cipher,
const struct rte_cryptodev_capabilities *auth)
{
int32_t ret = 0;
unsigned long i;
struct cdev_key key = { 0 };
key.lcore_id = params->lcore_id;
if (cipher)
key.cipher_algo = cipher->sym.cipher.algo;
if (auth)
key.auth_algo = auth->sym.auth.algo;
ret = rte_hash_lookup(map, &key);
if (ret != -ENOENT)
return 0;
for (i = 0; i < ipsec_ctx->nb_qps; i++)
if (ipsec_ctx->tbl[i].id == cdev_id)
break;
if (i == ipsec_ctx->nb_qps) {
if (ipsec_ctx->nb_qps == MAX_QP_PER_LCORE) {
printf("Maximum number of crypto devices assigned to "
"a core, increase MAX_QP_PER_LCORE value\n");
return 0;
}
ipsec_ctx->tbl[i].id = cdev_id;
ipsec_ctx->tbl[i].qp = qp;
ipsec_ctx->nb_qps++;
printf("%s cdev mapping: lcore %u using cdev %u qp %u "
"(cdev_id_qp %lu)\n", str, key.lcore_id,
cdev_id, qp, i);
}
ret = rte_hash_add_key_data(map, &key, (void *)i);
if (ret < 0) {
printf("Faled to insert cdev mapping for (lcore %u, "
"cdev %u, qp %u), errno %d\n",
key.lcore_id, ipsec_ctx->tbl[i].id,
ipsec_ctx->tbl[i].qp, ret);
return 0;
}
return 1;
}
static int32_t
add_cdev_mapping(struct rte_cryptodev_info *dev_info, uint16_t cdev_id,
uint16_t qp, struct lcore_params *params)
{
int32_t ret = 0;
const struct rte_cryptodev_capabilities *i, *j;
struct rte_hash *map;
struct lcore_conf *qconf;
struct ipsec_ctx *ipsec_ctx;
const char *str;
qconf = &lcore_conf[params->lcore_id];
if ((unprotected_port_mask & (1 << params->port_id)) == 0) {
map = cdev_map_out;
ipsec_ctx = &qconf->outbound;
str = "Outbound";
} else {
map = cdev_map_in;
ipsec_ctx = &qconf->inbound;
str = "Inbound";
}
/* Require cryptodevs to support symmetric operation chaining */
if (!(dev_info->feature_flags &
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING))
return ret;
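/* Map every (cipher, auth) capability pair this device advertises
* to the given queue pair for this lcore. */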
for (i = dev_info->capabilities;
i->op != RTE_CRYPTO_OP_TYPE_UNDEFINED; i++) {
if (i->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
continue;
if (i->sym.xform_type != RTE_CRYPTO_SYM_XFORM_CIPHER)
continue;
for (j = dev_info->capabilities;
j->op != RTE_CRYPTO_OP_TYPE_UNDEFINED; j++) {
if (j->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
continue;
if (j->sym.xform_type != RTE_CRYPTO_SYM_XFORM_AUTH)
continue;
ret |= add_mapping(map, str, cdev_id, qp, params,
ipsec_ctx, i, j);
}
}
return ret;
}
static int32_t
cryptodevs_init(void)
{
struct rte_cryptodev_config dev_conf;
struct rte_cryptodev_qp_conf qp_conf;
uint16_t idx, max_nb_qps, qp, i;
int16_t cdev_id;
struct rte_hash_parameters params = { 0 };
params.entries = CDEV_MAP_ENTRIES;
params.key_len = sizeof(struct cdev_key);
params.hash_func = rte_jhash;
params.hash_func_init_val = 0;
params.socket_id = rte_socket_id();
params.name = "cdev_map_in";
cdev_map_in = rte_hash_create(&params);
if (cdev_map_in == NULL)
rte_panic("Failed to create cdev_map hash table, errno = %d\n",
rte_errno);
params.name = "cdev_map_out";
cdev_map_out = rte_hash_create(&params);
if (cdev_map_out == NULL)
rte_panic("Failed to create cdev_map hash table, errno = %d\n",
rte_errno);
printf("lcore/cryptodev/qp mappings:\n");
idx = 0;
/* Start from last cdev id to give HW priority */
for (cdev_id = rte_cryptodev_count() - 1; cdev_id >= 0; cdev_id--) {
struct rte_cryptodev_info cdev_info;
rte_cryptodev_info_get(cdev_id, &cdev_info);
if (nb_lcore_params > cdev_info.max_nb_queue_pairs)
max_nb_qps = cdev_info.max_nb_queue_pairs;
else
max_nb_qps = nb_lcore_params;
qp = 0;
i = 0;
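/* Hand out this device's queue pairs round-robin over lcore_params;
* idx persists across devices so qps spread evenly over lcores. */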
while (qp < max_nb_qps && i < nb_lcore_params) {
if (add_cdev_mapping(&cdev_info, cdev_id, qp,
&lcore_params[idx]))
qp++;
idx++;
idx = idx % nb_lcore_params;
i++;
}
if (qp == 0)
continue;
dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id);
dev_conf.nb_queue_pairs = qp;
dev_conf.session_mp.nb_objs = CDEV_MP_NB_OBJS;
dev_conf.session_mp.cache_size = CDEV_MP_CACHE_SZ;
if (rte_cryptodev_configure(cdev_id, &dev_conf))
rte_panic("Failed to initialize crypodev %u\n",
cdev_id);
qp_conf.nb_descriptors = CDEV_MP_NB_OBJS;
for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
&qp_conf, dev_conf.socket_id))
rte_panic("Failed to setup queue %u for "
"cdev_id %u\n", 0, cdev_id);
}
printf("\n");
return 0;
}
static void
port_init(uint8_t portid)
{
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
uint16_t tx_queueid, rx_queueid, queue, lcore_id;
int32_t ret, socket_id;
struct lcore_conf *qconf;
struct ether_addr ethaddr;
rte_eth_dev_info_get(portid, &dev_info);
printf("Configuring device port %u:\n", portid);
rte_eth_macaddr_get(portid, &ethaddr);
ethaddr_tbl[portid].src = ETHADDR_TO_UINT64(ethaddr);
print_ethaddr("Address: ", &ethaddr);
printf("\n");
nb_rx_queue = get_port_nb_rx_queues(portid);
nb_tx_queue = nb_lcores;
if (nb_rx_queue > dev_info.max_rx_queues)
rte_exit(EXIT_FAILURE, "Error: queue %u not available "
"(max rx queue is %u)\n",
nb_rx_queue, dev_info.max_rx_queues);
if (nb_tx_queue > dev_info.max_tx_queues)
rte_exit(EXIT_FAILURE, "Error: queue %u not available "
"(max tx queue is %u)\n",
nb_tx_queue, dev_info.max_tx_queues);
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue,
&port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device: "
"err=%d, port=%d\n", ret, portid);
/* init one TX queue per lcore */
tx_queueid = 0;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
if (rte_lcore_is_enabled(lcore_id) == 0)
continue;
if (numa_on)
socket_id = (uint8_t)rte_lcore_to_socket_id(lcore_id);
else
socket_id = 0;
/* init TX queue */
printf("Setup txq=%u,%d,%d\n", lcore_id, tx_queueid, socket_id);
txconf = &dev_info.default_txconf;
txconf->txq_flags = 0;
ret = rte_eth_tx_queue_setup(portid, tx_queueid, nb_txd,
socket_id, txconf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup: "
"err=%d, port=%d\n", ret, portid);
qconf = &lcore_conf[lcore_id];
qconf->tx_queue_id[portid] = tx_queueid;
tx_queueid++;
/* init RX queues */
for (queue = 0; queue < qconf->nb_rx_queue; ++queue) {
if (portid != qconf->rx_queue_list[queue].port_id)
continue;
rx_queueid = qconf->rx_queue_list[queue].queue_id;
printf("Setup rxq=%d,%d,%d\n", portid, rx_queueid,
socket_id);
ret = rte_eth_rx_queue_setup(portid, rx_queueid,
nb_rxd, socket_id, NULL,
socket_ctx[socket_id].mbuf_pool);
if (ret < 0)
rte_exit(EXIT_FAILURE,
"rte_eth_rx_queue_setup: err=%d, "
"port=%d\n", ret, portid);
}
}
printf("\n");
}
static void
pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
{
char s[64];
snprintf(s, sizeof(s), "mbuf_pool_%d", socket_id);
ctx->mbuf_pool = rte_pktmbuf_pool_create(s, nb_mbuf,
MEMPOOL_CACHE_SIZE, ipsec_metadata_size(),
RTE_MBUF_DEFAULT_BUF_SIZE,
socket_id);
if (ctx->mbuf_pool == NULL)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n",
socket_id);
else
printf("Allocated mbuf pool on socket %d\n", socket_id);
}
int32_t
main(int32_t argc, char **argv)
{
int32_t ret;
uint32_t lcore_id, nb_ports;
uint8_t portid, socket_id;
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
argc -= ret;
argv += ret;
/* parse application arguments (after the EAL ones) */
ret = parse_args(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Invalid parameters\n");
if (ep < 0)
rte_exit(EXIT_FAILURE, "need to choose either EP0 or EP1\n");
if ((unprotected_port_mask & enabled_port_mask) !=
unprotected_port_mask)
rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n",
unprotected_port_mask);
nb_ports = rte_eth_dev_count();
if (nb_ports > RTE_MAX_ETHPORTS)
nb_ports = RTE_MAX_ETHPORTS;
if (check_params() < 0)
rte_exit(EXIT_FAILURE, "check_params failed\n");
ret = init_lcore_rx_queues();
if (ret < 0)
rte_exit(EXIT_FAILURE, "init_lcore_rx_queues failed\n");
nb_lcores = rte_lcore_count();
/* Replicate each context per socket */
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
if (rte_lcore_is_enabled(lcore_id) == 0)
continue;
if (numa_on)
socket_id = (uint8_t)rte_lcore_to_socket_id(lcore_id);
else
socket_id = 0;
if (socket_ctx[socket_id].mbuf_pool)
continue;
sa_init(&socket_ctx[socket_id], socket_id, ep);
sp_init(&socket_ctx[socket_id], socket_id, ep);
rt_init(&socket_ctx[socket_id], socket_id, ep);
pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF);
}
for (portid = 0; portid < nb_ports; portid++) {
if ((enabled_port_mask & (1 << portid)) == 0)
continue;
port_init(portid);
}
cryptodevs_init();
/* start ports */
for (portid = 0; portid < nb_ports; portid++) {
if ((enabled_port_mask & (1 << portid)) == 0)
continue;
/* Start device */
ret = rte_eth_dev_start(portid);
if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_start: "
"err=%d, port=%d\n", ret, portid);
/*
* If enabled, put device in promiscuous mode.
* This allows IO forwarding mode to forward packets
* to itself through 2 cross-connected ports of the
* target machine.
*/
if (promiscuous_on)
rte_eth_promiscuous_enable(portid);
}
check_all_ports_link_status((uint8_t)nb_ports, enabled_port_mask);
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER);
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
if (rte_eal_wait_lcore(lcore_id) < 0)
return -1;
}
return 0;
}

View file

@ -0,0 +1,203 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <netinet/in.h>
#include <netinet/ip.h>
#include <rte_branch_prediction.h>
#include <rte_log.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_mbuf.h>
#include <rte_hash.h>
#include "ipsec.h"
static inline int
create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
{
uint32_t cdev_id_qp;
void *data;
int32_t ret;
struct cdev_key key = { 0 };
key.lcore_id = (uint8_t)rte_lcore_id();
key.cipher_algo = (uint8_t)sa->cipher_algo;
key.auth_algo = (uint8_t)sa->auth_algo;
/* The qp index was stored as a pointer-sized value, so read it back
* through a void * rather than writing past a 32-bit variable. */
ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key, &data);
if (ret < 0) {
IPSEC_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
"auth_algo %u\n", key.lcore_id, key.cipher_algo,
key.auth_algo);
return -1;
}
cdev_id_qp = (uint32_t)(uintptr_t)data;
IPSEC_LOG(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
"%u qp %u\n", sa->spi, ipsec_ctx->tbl[cdev_id_qp].id,
ipsec_ctx->tbl[cdev_id_qp].qp);
sa->crypto_session = rte_cryptodev_sym_session_create(
ipsec_ctx->tbl[cdev_id_qp].id, sa->xforms);
sa->cdev_id_qp = cdev_id_qp;
return 0;
}
static inline void
enqueue_cop(struct cdev_qp *cqp, struct rte_crypto_op *cop)
{
int ret, i;
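/* Buffer ops until a full burst is ready; any op the device cannot
* accept is dropped together with its packet. */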
cqp->buf[cqp->len++] = cop;
if (cqp->len == MAX_PKT_BURST) {
ret = rte_cryptodev_enqueue_burst(cqp->id, cqp->qp,
cqp->buf, cqp->len);
if (ret < cqp->len) {
IPSEC_LOG(DEBUG, IPSEC, "Cryptodev %u queue %u:"
" enqueued %u crypto ops out of %u\n",
cqp->id, cqp->qp,
ret, cqp->len);
for (i = ret; i < cqp->len; i++)
rte_pktmbuf_free(cqp->buf[i]->sym->m_src);
}
cqp->in_flight += ret;
cqp->len = 0;
}
}
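/*
* Two-phase processing: attach a crypto op to each packet and enqueue
* it on the SA's queue pair, then dequeue completed ops (possibly
* from earlier bursts) and run each SA's post-crypto handler.
*/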
static inline uint16_t
ipsec_processing(struct ipsec_ctx *ipsec_ctx, struct rte_mbuf *pkts[],
struct ipsec_sa *sas[], uint16_t nb_pkts, uint16_t max_pkts)
{
int ret = 0, i, j, nb_cops;
struct ipsec_mbuf_metadata *priv;
struct rte_crypto_op *cops[max_pkts];
struct ipsec_sa *sa;
struct rte_mbuf *pkt;
for (i = 0; i < nb_pkts; i++) {
rte_prefetch0(sas[i]);
rte_prefetch0(pkts[i]);
priv = get_priv(pkts[i]);
sa = sas[i];
priv->sa = sa;
IPSEC_ASSERT(sa != NULL);
priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
rte_prefetch0(&priv->sym_cop);
priv->cop.sym = &priv->sym_cop;
if ((unlikely(sa->crypto_session == NULL)) &&
create_session(ipsec_ctx, sa)) {
rte_pktmbuf_free(pkts[i]);
continue;
}
rte_crypto_op_attach_sym_session(&priv->cop,
sa->crypto_session);
ret = sa->pre_crypto(pkts[i], sa, &priv->cop);
if (unlikely(ret)) {
rte_pktmbuf_free(pkts[i]);
continue;
}
IPSEC_ASSERT(sa->cdev_id_qp < ipsec_ctx->nb_qps);
enqueue_cop(&ipsec_ctx->tbl[sa->cdev_id_qp], &priv->cop);
}
nb_pkts = 0;
for (i = 0; i < ipsec_ctx->nb_qps && nb_pkts < max_pkts; i++) {
struct cdev_qp *cqp;
cqp = &ipsec_ctx->tbl[ipsec_ctx->last_qp++];
if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
if (cqp->in_flight == 0)
continue;
nb_cops = rte_cryptodev_dequeue_burst(cqp->id, cqp->qp,
cops, max_pkts - nb_pkts);
cqp->in_flight -= nb_cops;
for (j = 0; j < nb_cops; j++) {
pkt = cops[j]->sym->m_src;
rte_prefetch0(pkt);
priv = get_priv(pkt);
sa = priv->sa;
IPSEC_ASSERT(sa != NULL);
ret = sa->post_crypto(pkt, sa, cops[j]);
if (unlikely(ret))
rte_pktmbuf_free(pkt);
else
pkts[nb_pkts++] = pkt;
}
}
/* return packets */
return nb_pkts;
}
uint16_t
ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[],
uint16_t nb_pkts, uint16_t len)
{
struct ipsec_sa *sas[nb_pkts];
inbound_sa_lookup(ctx->sa_ctx, pkts, sas, nb_pkts);
return ipsec_processing(ctx, pkts, sas, nb_pkts, len);
}
uint16_t
ipsec_outbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[],
uint32_t sa_idx[], uint16_t nb_pkts, uint16_t len)
{
struct ipsec_sa *sas[nb_pkts];
outbound_sa_lookup(ctx->sa_ctx, sa_idx, sas, nb_pkts);
return ipsec_processing(ctx, pkts, sas, nb_pkts, len);
}

View file

@ -0,0 +1,192 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __IPSEC_H__
#define __IPSEC_H__
#include <stdint.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_crypto.h>
#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
#define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
#define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3
#define MAX_PKT_BURST 32
#define MAX_QP_PER_LCORE 256
#ifdef IPSEC_DEBUG
#define IPSEC_ASSERT(exp) do { \
if (!(exp)) \
rte_panic("line %d\tassert \"" #exp "\" failed\n", __LINE__); \
} while (0)
#define IPSEC_LOG RTE_LOG
#else
#define IPSEC_ASSERT(exp) do {} while (0)
#define IPSEC_LOG(...) do {} while (0)
#endif /* IPSEC_DEBUG */
#define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
#define uint32_t_to_char(ip, a, b, c, d) do {\
*a = (unsigned char)(ip >> 24 & 0xff);\
*b = (unsigned char)(ip >> 16 & 0xff);\
*c = (unsigned char)(ip >> 8 & 0xff);\
*d = (unsigned char)(ip & 0xff);\
} while (0)
#define DEFAULT_MAX_CATEGORIES 1
#define IPSEC_SA_MAX_ENTRIES (64) /* must be power of 2, max 2 power 30 */
#define SPI2IDX(spi) (spi & (IPSEC_SA_MAX_ENTRIES - 1))
#define INVALID_SPI (0)
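/* ACL userdata encoding: the top two bits select DISCARD or BYPASS,
* the remaining 30 bits carry the SA index for PROTECT'ed traffic. */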
#define DISCARD (0x80000000)
#define BYPASS (0x40000000)
#define PROTECT_MASK (0x3fffffff)
#define PROTECT(sa_idx) (SPI2IDX(sa_idx) & PROTECT_MASK) /* SA idx 30 bits */
#define IPSEC_XFORM_MAX 2
struct rte_crypto_xform;
struct ipsec_xform;
struct rte_cryptodev_session;
struct rte_mbuf;
struct ipsec_sa;
typedef int (*ipsec_xform_fn)(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop);
struct ipsec_sa {
uint32_t spi;
uint32_t cdev_id_qp;
uint32_t src;
uint32_t dst;
struct rte_cryptodev_sym_session *crypto_session;
struct rte_crypto_sym_xform *xforms;
ipsec_xform_fn pre_crypto;
ipsec_xform_fn post_crypto;
enum rte_crypto_cipher_algorithm cipher_algo;
enum rte_crypto_auth_algorithm auth_algo;
uint16_t digest_len;
uint16_t iv_len;
uint16_t block_size;
uint16_t flags;
uint32_t seq;
} __rte_cache_aligned;
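/* Per-packet crypto state, stored in the mbuf private area (see
* get_priv() below), so no separate crypto-op mempool is needed. */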
struct ipsec_mbuf_metadata {
struct ipsec_sa *sa;
struct rte_crypto_op cop;
struct rte_crypto_sym_op sym_cop;
};
struct cdev_qp {
uint16_t id;
uint16_t qp;
uint16_t in_flight;
uint16_t len;
struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
};
struct ipsec_ctx {
struct rte_hash *cdev_map;
struct sp_ctx *sp_ctx;
struct sa_ctx *sa_ctx;
uint16_t nb_qps;
uint16_t last_qp;
struct cdev_qp tbl[MAX_QP_PER_LCORE];
};
struct cdev_key {
uint16_t lcore_id;
uint8_t cipher_algo;
uint8_t auth_algo;
};
struct socket_ctx {
struct sa_ctx *sa_ipv4_in;
struct sa_ctx *sa_ipv4_out;
struct sp_ctx *sp_ipv4_in;
struct sp_ctx *sp_ipv4_out;
struct rt_ctx *rt_ipv4;
struct rte_mempool *mbuf_pool;
};
uint16_t
ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[],
uint16_t nb_pkts, uint16_t len);
uint16_t
ipsec_outbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[],
uint32_t sa_idx[], uint16_t nb_pkts, uint16_t len);
static inline uint16_t
ipsec_metadata_size(void)
{
return sizeof(struct ipsec_mbuf_metadata);
}
static inline struct ipsec_mbuf_metadata *
get_priv(struct rte_mbuf *m)
{
return RTE_PTR_ADD(m, sizeof(struct rte_mbuf));
}
int
inbound_sa_check(struct sa_ctx *sa_ctx, struct rte_mbuf *m, uint32_t sa_idx);
void
inbound_sa_lookup(struct sa_ctx *sa_ctx, struct rte_mbuf *pkts[],
struct ipsec_sa *sa[], uint16_t nb_pkts);
void
outbound_sa_lookup(struct sa_ctx *sa_ctx, uint32_t sa_idx[],
struct ipsec_sa *sa[], uint16_t nb_pkts);
void
sp_init(struct socket_ctx *ctx, int socket_id, unsigned ep);
void
sa_init(struct socket_ctx *ctx, int socket_id, unsigned ep);
void
rt_init(struct socket_ctx *ctx, int socket_id, unsigned ep);
#endif /* __IPSEC_H__ */

examples/ipsec-secgw/rt.c Normal file
View file

@ -0,0 +1,144 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
* Routing Table (RT)
*/
#include <rte_lpm.h>
#include <rte_errno.h>
#include "ipsec.h"
#define RT_IPV4_MAX_RULES 64
struct ipv4_route {
uint32_t ip;
uint8_t depth;
uint8_t if_out;
};
/* Default routing table for ep0: ports 0 and 1 are protected,
* ports 2 and 3 are unprotected.
*/
static struct ipv4_route rt_ipv4_ep0[] = {
{ IPv4(172, 16, 2, 5), 32, 0 },
{ IPv4(172, 16, 2, 6), 32, 0 },
{ IPv4(172, 16, 2, 7), 32, 1 },
{ IPv4(172, 16, 2, 8), 32, 1 },
{ IPv4(192, 168, 115, 0), 24, 2 },
{ IPv4(192, 168, 116, 0), 24, 2 },
{ IPv4(192, 168, 117, 0), 24, 3 },
{ IPv4(192, 168, 118, 0), 24, 3 },
{ IPv4(192, 168, 210, 0), 24, 2 },
{ IPv4(192, 168, 240, 0), 24, 2 },
{ IPv4(192, 168, 250, 0), 24, 0 }
};
/* Default routing table for ep1: ports 0 and 1 are protected,
* ports 2 and 3 are unprotected.
*/
static struct ipv4_route rt_ipv4_ep1[] = {
{ IPv4(172, 16, 1, 5), 32, 2 },
{ IPv4(172, 16, 1, 6), 32, 2 },
{ IPv4(172, 16, 1, 7), 32, 3 },
{ IPv4(172, 16, 1, 8), 32, 3 },
{ IPv4(192, 168, 105, 0), 24, 0 },
{ IPv4(192, 168, 106, 0), 24, 0 },
{ IPv4(192, 168, 107, 0), 24, 1 },
{ IPv4(192, 168, 108, 0), 24, 1 },
{ IPv4(192, 168, 200, 0), 24, 0 },
{ IPv4(192, 168, 240, 0), 24, 2 },
{ IPv4(192, 168, 250, 0), 24, 0 }
};
void
rt_init(struct socket_ctx *ctx, int socket_id, unsigned ep)
{
char name[PATH_MAX];
unsigned i;
int ret;
struct rte_lpm *lpm;
struct ipv4_route *rt;
unsigned char a, b, c, d;
unsigned nb_routes;
struct rte_lpm_config conf = { 0 };
if (ctx == NULL)
rte_exit(EXIT_FAILURE, "NULL context.\n");
if (ctx->rt_ipv4 != NULL)
rte_exit(EXIT_FAILURE, "Routing Table for socket %u already "
"initialized\n", socket_id);
printf("Creating Routing Table (RT) context with %u max routes\n",
RT_IPV4_MAX_RULES);
if (ep == 0) {
rt = rt_ipv4_ep0;
nb_routes = RTE_DIM(rt_ipv4_ep0);
} else if (ep == 1) {
rt = rt_ipv4_ep1;
nb_routes = RTE_DIM(rt_ipv4_ep1);
} else
rte_exit(EXIT_FAILURE, "Invalid EP value %u. Only 0 or 1 "
"supported.\n", ep);
/* create the LPM table */
snprintf(name, sizeof(name), "%s_%u", "rt_ipv4", socket_id);
conf.max_rules = RT_IPV4_MAX_RULES;
conf.number_tbl8s = RTE_LPM_TBL8_NUM_ENTRIES;
lpm = rte_lpm_create(name, socket_id, &conf);
if (lpm == NULL)
rte_exit(EXIT_FAILURE, "Unable to create LPM table "
"on socket %d\n", socket_id);
/* populate the LPM table */
for (i = 0; i < nb_routes; i++) {
ret = rte_lpm_add(lpm, rt[i].ip, rt[i].depth, rt[i].if_out);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Unable to add entry num %u to "
"LPM table on socket %d\n", i, socket_id);
uint32_t_to_char(rt[i].ip, &a, &b, &c, &d);
printf("LPM: Adding route %hhu.%hhu.%hhu.%hhu/%hhu (%hhu)\n",
a, b, c, d, rt[i].depth, rt[i].if_out);
}
ctx->rt_ipv4 = (struct rt_ctx *)lpm;
}

examples/ipsec-secgw/sa.c Normal file
View file

@ -0,0 +1,438 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
* Security Associations
*/
#include <netinet/ip.h>
#include <rte_memzone.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_byteorder.h>
#include <rte_errno.h>
#include "ipsec.h"
#include "esp.h"
/* SAs EP0 Outbound */
const struct ipsec_sa sa_ep0_out[] = {
{ 5, 0, IPv4(172, 16, 1, 5), IPv4(172, 16, 2, 5),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 6, 0, IPv4(172, 16, 1, 6), IPv4(172, 16, 2, 6),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 7, 0, IPv4(172, 16, 1, 7), IPv4(172, 16, 2, 7),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 8, 0, IPv4(172, 16, 1, 8), IPv4(172, 16, 2, 8),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 9, 0, IPv4(172, 16, 1, 5), IPv4(172, 16, 2, 5),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_NULL, RTE_CRYPTO_AUTH_NULL,
0, 0, 4,
0, 0 },
};
/* SAs EP0 Inbound */
const struct ipsec_sa sa_ep0_in[] = {
{ 5, 0, IPv4(172, 16, 2, 5), IPv4(172, 16, 1, 5),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 6, 0, IPv4(172, 16, 2, 6), IPv4(172, 16, 1, 6),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 7, 0, IPv4(172, 16, 2, 7), IPv4(172, 16, 1, 7),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 8, 0, IPv4(172, 16, 2, 8), IPv4(172, 16, 1, 8),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 9, 0, IPv4(172, 16, 2, 5), IPv4(172, 16, 1, 5),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_NULL, RTE_CRYPTO_AUTH_NULL,
0, 0, 4,
0, 0 },
};
/* SAs EP1 Outbound */
const struct ipsec_sa sa_ep1_out[] = {
{ 5, 0, IPv4(172, 16, 2, 5), IPv4(172, 16, 1, 5),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 6, 0, IPv4(172, 16, 2, 6), IPv4(172, 16, 1, 6),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 7, 0, IPv4(172, 16, 2, 7), IPv4(172, 16, 1, 7),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 8, 0, IPv4(172, 16, 2, 8), IPv4(172, 16, 1, 8),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 9, 0, IPv4(172, 16, 2, 5), IPv4(172, 16, 1, 5),
NULL, NULL,
esp4_tunnel_outbound_pre_crypto,
esp4_tunnel_outbound_post_crypto,
RTE_CRYPTO_CIPHER_NULL, RTE_CRYPTO_AUTH_NULL,
0, 0, 4,
0, 0 },
};
/* SAs EP1 Inbound */
const struct ipsec_sa sa_ep1_in[] = {
{ 5, 0, IPv4(172, 16, 1, 5), IPv4(172, 16, 2, 5),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 6, 0, IPv4(172, 16, 1, 6), IPv4(172, 16, 2, 6),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 7, 0, IPv4(172, 16, 1, 7), IPv4(172, 16, 2, 7),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 8, 0, IPv4(172, 16, 1, 8), IPv4(172, 16, 2, 8),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_AES_CBC, RTE_CRYPTO_AUTH_SHA1_HMAC,
12, 16, 16,
0, 0 },
{ 9, 0, IPv4(172, 16, 1, 5), IPv4(172, 16, 2, 5),
NULL, NULL,
esp4_tunnel_inbound_pre_crypto,
esp4_tunnel_inbound_post_crypto,
RTE_CRYPTO_CIPHER_NULL, RTE_CRYPTO_AUTH_NULL,
0, 0, 4,
0, 0 },
};
static uint8_t cipher_key[256] = "sixteenbytes key";
/* AES CBC xform */
const struct rte_crypto_sym_xform aescbc_enc_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_CIPHER,
.cipher = { RTE_CRYPTO_CIPHER_OP_ENCRYPT, RTE_CRYPTO_CIPHER_AES_CBC,
.key = { cipher_key, 16 } }
};
const struct rte_crypto_sym_xform aescbc_dec_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_CIPHER,
.cipher = { RTE_CRYPTO_CIPHER_OP_DECRYPT, RTE_CRYPTO_CIPHER_AES_CBC,
.key = { cipher_key, 16 } }
};
static uint8_t auth_key[256] = "twentybytes hash key";
/* SHA1 HMAC xform */
const struct rte_crypto_sym_xform sha1hmac_gen_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_AUTH,
.auth = { RTE_CRYPTO_AUTH_OP_GENERATE, RTE_CRYPTO_AUTH_SHA1_HMAC,
.key = { auth_key, 20 }, 12, 0 }
};
const struct rte_crypto_sym_xform sha1hmac_verify_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_AUTH,
.auth = { RTE_CRYPTO_AUTH_OP_VERIFY, RTE_CRYPTO_AUTH_SHA1_HMAC,
.key = { auth_key, 20 }, 12, 0 }
};
/* NULL cipher/auth xforms */
const struct rte_crypto_sym_xform null_cipher_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_CIPHER,
.cipher = { .algo = RTE_CRYPTO_CIPHER_NULL }
};
const struct rte_crypto_sym_xform null_auth_xf = {
NULL,
RTE_CRYPTO_SYM_XFORM_AUTH,
.auth = { .algo = RTE_CRYPTO_AUTH_NULL }
};
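/* Each SA slot owns a private two-element xform chain; sa_add_rules()
* orders it auth->cipher for inbound and cipher->auth for outbound. */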
struct sa_ctx {
struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES];
struct {
struct rte_crypto_sym_xform a;
struct rte_crypto_sym_xform b;
} xf[IPSEC_SA_MAX_ENTRIES];
};
static struct sa_ctx *
sa_ipv4_create(const char *name, int socket_id)
{
char s[PATH_MAX];
struct sa_ctx *sa_ctx;
unsigned mz_size;
const struct rte_memzone *mz;
snprintf(s, sizeof(s), "%s_%u", name, socket_id);
/* Create SA array table */
printf("Creating SA context with %u maximum entries\n",
IPSEC_SA_MAX_ENTRIES);
mz_size = sizeof(struct sa_ctx);
mz = rte_memzone_reserve(s, mz_size, socket_id,
RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY);
if (mz == NULL) {
printf("Failed to allocate SA DB memory\n");
rte_errno = ENOMEM;
return NULL;
}
sa_ctx = (struct sa_ctx *)mz->addr;
return sa_ctx;
}
static int
sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
unsigned nb_entries, unsigned inbound)
{
struct ipsec_sa *sa;
unsigned i, idx;
for (i = 0; i < nb_entries; i++) {
idx = SPI2IDX(entries[i].spi);
sa = &sa_ctx->sa[idx];
if (sa->spi != 0) {
printf("Index %u already in use by SPI %u\n",
idx, sa->spi);
return -EINVAL;
}
*sa = entries[i];
sa->src = rte_cpu_to_be_32(sa->src);
sa->dst = rte_cpu_to_be_32(sa->dst);
if (inbound) {
if (sa->cipher_algo == RTE_CRYPTO_CIPHER_NULL) {
sa_ctx->xf[idx].a = null_auth_xf;
sa_ctx->xf[idx].b = null_cipher_xf;
} else {
sa_ctx->xf[idx].a = sha1hmac_verify_xf;
sa_ctx->xf[idx].b = aescbc_dec_xf;
}
} else { /* outbound */
if (sa->cipher_algo == RTE_CRYPTO_CIPHER_NULL) {
sa_ctx->xf[idx].a = null_cipher_xf;
sa_ctx->xf[idx].b = null_auth_xf;
} else {
sa_ctx->xf[idx].a = aescbc_enc_xf;
sa_ctx->xf[idx].b = sha1hmac_gen_xf;
}
}
sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
sa_ctx->xf[idx].b.next = NULL;
sa->xforms = &sa_ctx->xf[idx].a;
}
return 0;
}
static inline int
sa_out_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
unsigned nb_entries)
{
return sa_add_rules(sa_ctx, entries, nb_entries, 0);
}
static inline int
sa_in_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
unsigned nb_entries)
{
return sa_add_rules(sa_ctx, entries, nb_entries, 1);
}
void
sa_init(struct socket_ctx *ctx, int socket_id, unsigned ep)
{
const struct ipsec_sa *sa_out_entries, *sa_in_entries;
unsigned nb_out_entries, nb_in_entries;
const char *name;
if (ctx == NULL)
rte_exit(EXIT_FAILURE, "NULL context.\n");
if (ctx->sa_ipv4_in != NULL)
rte_exit(EXIT_FAILURE, "Inbound SA DB for socket %u already "
"initialized\n", socket_id);
if (ctx->sa_ipv4_out != NULL)
rte_exit(EXIT_FAILURE, "Outbound SA DB for socket %u already "
"initialized\n", socket_id);
if (ep == 0) {
sa_out_entries = sa_ep0_out;
nb_out_entries = RTE_DIM(sa_ep0_out);
sa_in_entries = sa_ep0_in;
nb_in_entries = RTE_DIM(sa_ep0_in);
} else if (ep == 1) {
sa_out_entries = sa_ep1_out;
nb_out_entries = RTE_DIM(sa_ep1_out);
sa_in_entries = sa_ep1_in;
nb_in_entries = RTE_DIM(sa_ep1_in);
} else
rte_exit(EXIT_FAILURE, "Invalid EP value %u. "
"Only 0 or 1 supported.\n", ep);
name = "sa_ipv4_in";
ctx->sa_ipv4_in = sa_ipv4_create(name, socket_id);
if (ctx->sa_ipv4_in == NULL)
rte_exit(EXIT_FAILURE, "Error [%d] creating SA context %s "
"in socket %d\n", rte_errno, name, socket_id);
name = "sa_ipv4_out";
ctx->sa_ipv4_out = sa_ipv4_create(name, socket_id);
if (ctx->sa_ipv4_out == NULL)
rte_exit(EXIT_FAILURE, "Error [%d] creating SA context %s "
"in socket %d\n", rte_errno, name, socket_id);
sa_in_add_rules(ctx->sa_ipv4_in, sa_in_entries, nb_in_entries);
sa_out_add_rules(ctx->sa_ipv4_out, sa_out_entries, nb_out_entries);
}
int
inbound_sa_check(struct sa_ctx *sa_ctx, struct rte_mbuf *m, uint32_t sa_idx)
{
struct ipsec_mbuf_metadata *priv;
priv = RTE_PTR_ADD(m, sizeof(struct rte_mbuf));
return (sa_ctx->sa[sa_idx].spi == priv->sa->spi);
}
void
inbound_sa_lookup(struct sa_ctx *sa_ctx, struct rte_mbuf *pkts[],
struct ipsec_sa *sa[], uint16_t nb_pkts)
{
unsigned i;
uint32_t *src, spi;
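/* Direct lookup: the low bits of the SPI index the SA table; the
* full SPI and outer tunnel addresses are then verified. */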
for (i = 0; i < nb_pkts; i++) {
spi = rte_pktmbuf_mtod_offset(pkts[i], struct esp_hdr *,
sizeof(struct ip))->spi;
if (spi == INVALID_SPI)
continue;
sa[i] = &sa_ctx->sa[SPI2IDX(spi)];
if (spi != sa[i]->spi) {
sa[i] = NULL;
continue;
}
src = rte_pktmbuf_mtod_offset(pkts[i], uint32_t *,
offsetof(struct ip, ip_src));
if ((sa[i]->src != *src) || (sa[i]->dst != *(src + 1)))
sa[i] = NULL;
}
}
void
outbound_sa_lookup(struct sa_ctx *sa_ctx, uint32_t sa_idx[],
struct ipsec_sa *sa[], uint16_t nb_pkts)
{
unsigned i;
for (i = 0; i < nb_pkts; i++)
sa[i] = &sa_ctx->sa[sa_idx[i]];
}

examples/ipsec-secgw/sp.c Normal file
View file

@ -0,0 +1,364 @@
/*-
* BSD LICENSE
*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
* Security Policies
*/
#include <netinet/ip.h>
#include <rte_acl.h>
#include "ipsec.h"
#define MAX_ACL_RULE_NUM 1000
/*
* Rule and trace format definitions.
*/
enum {
PROTO_FIELD_IPV4,
SRC_FIELD_IPV4,
DST_FIELD_IPV4,
SRCP_FIELD_IPV4,
DSTP_FIELD_IPV4,
NUM_FIELDS_IPV4
};
/*
* This effectively defines the order of IPv4 classification inputs:
* - PROTO
* - SRC IP ADDRESS
* - DST IP ADDRESS
* - PORTS (SRC and DST)
*/
enum {
RTE_ACL_IPV4_PROTO,
RTE_ACL_IPV4_SRC,
RTE_ACL_IPV4_DST,
RTE_ACL_IPV4_PORTS,
RTE_ACL_IPV4_NUM
};
struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
{
.type = RTE_ACL_FIELD_TYPE_BITMASK,
.size = sizeof(uint8_t),
.field_index = PROTO_FIELD_IPV4,
.input_index = RTE_ACL_IPV4_PROTO,
.offset = 0,
},
{
.type = RTE_ACL_FIELD_TYPE_MASK,
.size = sizeof(uint32_t),
.field_index = SRC_FIELD_IPV4,
.input_index = RTE_ACL_IPV4_SRC,
.offset = offsetof(struct ip, ip_src) - offsetof(struct ip, ip_p)
},
{
.type = RTE_ACL_FIELD_TYPE_MASK,
.size = sizeof(uint32_t),
.field_index = DST_FIELD_IPV4,
.input_index = RTE_ACL_IPV4_DST,
.offset = offsetof(struct ip, ip_dst) - offsetof(struct ip, ip_p)
},
{
.type = RTE_ACL_FIELD_TYPE_RANGE,
.size = sizeof(uint16_t),
.field_index = SRCP_FIELD_IPV4,
.input_index = RTE_ACL_IPV4_PORTS,
.offset = sizeof(struct ip) - offsetof(struct ip, ip_p)
},
{
.type = RTE_ACL_FIELD_TYPE_RANGE,
.size = sizeof(uint16_t),
.field_index = DSTP_FIELD_IPV4,
.input_index = RTE_ACL_IPV4_PORTS,
.offset = sizeof(struct ip) - offsetof(struct ip, ip_p) +
sizeof(uint16_t)
},
};
RTE_ACL_RULE_DEF(acl4_rules, RTE_DIM(ipv4_defs));
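/* The userdata of each rule encodes the action: PROTECT(x) selects
* the SA indexed by SPI x, BYPASS forwards the packet in the clear. */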
const struct acl4_rules acl4_rules_in[] = {
{
.data = {.userdata = PROTECT(5), .category_mask = 1, .priority = 1},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 105, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(6), .category_mask = 1, .priority = 2},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 106, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(7), .category_mask = 1, .priority = 3},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 107, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(8), .category_mask = 1, .priority = 4},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 108, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(9), .category_mask = 1, .priority = 5},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 200, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = BYPASS, .category_mask = 1, .priority = 6},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 250, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
}
};
const struct acl4_rules acl4_rules_out[] = {
{
.data = {.userdata = PROTECT(5), .category_mask = 1, .priority = 1},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 115, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(6), .category_mask = 1, .priority = 2},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 116, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(7), .category_mask = 1, .priority = 3},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 117, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(8), .category_mask = 1, .priority = 4},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 118, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = PROTECT(9), .category_mask = 1, .priority = 5},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 210, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
},
{
.data = {.userdata = BYPASS, .category_mask = 1, .priority = 6},
/* destination IPv4 */
.field[2] = {.value.u32 = IPv4(192, 168, 240, 0),
.mask_range.u32 = 24,},
/* source port */
.field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
/* destination port */
.field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,}
}
};
static void
print_one_ipv4_rule(const struct acl4_rules *rule, int extra)
{
unsigned char a, b, c, d;
uint32_t_to_char(rule->field[SRC_FIELD_IPV4].value.u32,
&a, &b, &c, &d);
printf("%hhu.%hhu.%hhu.%hhu/%u ", a, b, c, d,
rule->field[SRC_FIELD_IPV4].mask_range.u32);
uint32_t_to_char(rule->field[DST_FIELD_IPV4].value.u32,
&a, &b, &c, &d);
printf("%hhu.%hhu.%hhu.%hhu/%u ", a, b, c, d,
rule->field[DST_FIELD_IPV4].mask_range.u32);
printf("%hu : %hu %hu : %hu 0x%hhx/0x%hhx ",
rule->field[SRCP_FIELD_IPV4].value.u16,
rule->field[SRCP_FIELD_IPV4].mask_range.u16,
rule->field[DSTP_FIELD_IPV4].value.u16,
rule->field[DSTP_FIELD_IPV4].mask_range.u16,
rule->field[PROTO_FIELD_IPV4].value.u8,
rule->field[PROTO_FIELD_IPV4].mask_range.u8);
if (extra)
printf("0x%x-0x%x-0x%x ",
rule->data.category_mask,
rule->data.priority,
rule->data.userdata);
}
static inline void
dump_ipv4_rules(const struct acl4_rules *rule, int num, int extra)
{
int i;
for (i = 0; i < num; i++, rule++) {
printf("\t%d:", i + 1);
print_one_ipv4_rule(rule, extra);
printf("\n");
}
}
static struct rte_acl_ctx *
acl4_init(const char *name, int socketid, const struct acl4_rules *rules,
unsigned rules_nb)
{
char s[PATH_MAX];
struct rte_acl_param acl_param;
struct rte_acl_config acl_build_param;
struct rte_acl_ctx *ctx;
printf("Creating SP context with %u max rules\n", MAX_ACL_RULE_NUM);
memset(&acl_param, 0, sizeof(acl_param));
/* Create ACL contexts */
snprintf(s, sizeof(s), "%s_%d", name, socketid);
printf("IPv4 %s entries [%u]:\n", s, rules_nb);
dump_ipv4_rules(rules, rules_nb, 1);
acl_param.name = s;
acl_param.socket_id = socketid;
acl_param.rule_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs));
acl_param.max_rule_num = MAX_ACL_RULE_NUM;
ctx = rte_acl_create(&acl_param);
if (ctx == NULL)
rte_exit(EXIT_FAILURE, "Failed to create ACL context\n");
if (rte_acl_add_rules(ctx, (const struct rte_acl_rule *)rules,
rules_nb) < 0)
rte_exit(EXIT_FAILURE, "add rules failed\n");
/* Perform builds */
memset(&acl_build_param, 0, sizeof(acl_build_param));
acl_build_param.num_categories = DEFAULT_MAX_CATEGORIES;
acl_build_param.num_fields = RTE_DIM(ipv4_defs);
memcpy(&acl_build_param.defs, ipv4_defs, sizeof(ipv4_defs));
if (rte_acl_build(ctx, &acl_build_param) != 0)
rte_exit(EXIT_FAILURE, "Failed to build ACL trie\n");
rte_acl_dump(ctx);
return ctx;
}
void
sp_init(struct socket_ctx *ctx, int socket_id, unsigned ep)
{
const char *name;
const struct acl4_rules *rules_out, *rules_in;
unsigned nb_out_rules, nb_in_rules;
if (ctx == NULL)
rte_exit(EXIT_FAILURE, "NULL context.\n");
if (ctx->sp_ipv4_in != NULL)
rte_exit(EXIT_FAILURE, "Inbound SP DB for socket %u already "
"initialized\n", socket_id);
if (ctx->sp_ipv4_out != NULL)
rte_exit(EXIT_FAILURE, "Outbound SP DB for socket %u already "
"initialized\n", socket_id);
if (ep == 0) {
rules_out = acl4_rules_in;
nb_out_rules = RTE_DIM(acl4_rules_in);
rules_in = acl4_rules_out;
nb_in_rules = RTE_DIM(acl4_rules_out);
} else if (ep == 1) {
rules_out = acl4_rules_out;
nb_out_rules = RTE_DIM(acl4_rules_out);
rules_in = acl4_rules_in;
nb_in_rules = RTE_DIM(acl4_rules_in);
} else
rte_exit(EXIT_FAILURE, "Invalid EP value %u. "
"Only 0 or 1 supported.\n", ep);
name = "sp_ipv4_in";
ctx->sp_ipv4_in = (struct sp_ctx *)acl4_init(name, socket_id,
rules_in, nb_in_rules);
name = "sp_ipv4_out";
ctx->sp_ipv4_out = (struct sp_ctx *)acl4_init(name, socket_id,
rules_out, nb_out_rules);
}