Intel release 0.27.1

This commit is contained in:
DataHoarder 2019-09-13 18:53:56 +00:00
parent 7cf35a4864
commit 118e827b0e
34 changed files with 1773 additions and 591 deletions

README

@ -3,7 +3,7 @@ README for Intel(R) Ethernet Switch Host Interface Driver
===============================================================================
November 26, 2018
===============================================================================
@ -17,26 +17,26 @@ Contents
- Known Issues
================================================================================
Important Notes
===============
Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, and to ensure isolation from
unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging
from the administrative interface on the PF. This configuration allows
unexpected, and potentially malicious, frames to be dropped. See "Configuring
VLAN Tagging on SR-IOV Enabled Adapter Ports" in this README for configuration
instructions.
Overview
========
This driver supports kernel versions 2.6.32 and newer.
Driver information can be obtained using ethtool, lspci, and iproute2 ip.
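For example, those three tools can be combined into a small helper (a sketch; `ethX` and the `fm10k_drv_info` name are illustrative, not part of the driver):

```shell
# Sketch: gather basic driver information for one interface.
# ethtool -i reports driver/version/firmware, lspci -k shows which
# kernel driver is bound to the PCI device, and ip shows link details.
fm10k_drv_info() {
    local dev=$1
    ethtool -i "$dev"
    lspci -k -s "$(ethtool -i "$dev" | sed -n 's/^bus-info: //p')"
    ip -d link show dev "$dev"
}
```

Run it as, e.g., `fm10k_drv_info eth0` on an interface bound to fm10k.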
@ -61,35 +61,59 @@ that is electrically bifurcated into two x8 ports. Without a BIOS that supports
bifurcation, only 1 port is available.
================================================================================
Building and Installation
=========================
To build a binary RPM package of this driver
--------------------------------------------
Note: RPM functionality has only been tested in Red Hat distributions.
1. Run the following command, where <x.x.x> is the version number for the
driver tar file.
# rpmbuild -tb fm10k-<x.x.x>.tar.gz
NOTE: For the build to work properly, the currently running kernel MUST
match the version and configuration of the installed kernel sources. If
you have just recompiled the kernel, reboot the system before building.
2. After building the RPM, the last few lines of the tool output contain the
location of the RPM file that was built. Install the RPM with one of the
following commands, where <RPM> is the location of the RPM file:
# rpm -Uvh <RPM>
or
# dnf/yum localinstall <RPM>
NOTES:
- To compile the driver on some kernel/arch combinations, you may need to
install a package with the development version of libelf (e.g. libelf-dev,
libelf-devel, elfutils-libelf-devel).
- When compiling an out-of-tree driver, details will vary by distribution.
However, you will usually need a kernel-devel RPM or some RPM that provides the
kernel headers at a minimum. To find the kernel-devel header sources for a
particular kernel, follow the symbolic link at /lib/modules/$(uname -r)/build.
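A quick pre-build check along these lines can save a failed compile (a sketch; the `check_kernel_headers` helper name is illustrative):

```shell
# Sketch: confirm kernel headers are installed for an out-of-tree build.
# The build expects /lib/modules/$(uname -r)/build to point at an
# installed kernel-devel / linux-headers source tree.
check_kernel_headers() {
    local kdir=${1:-/lib/modules/$(uname -r)/build}
    if [ -e "$kdir/Makefile" ]; then
        echo "headers found: $kdir"
    else
        echo "no kernel headers at $kdir (install kernel-devel or linux-headers)" >&2
        return 1
    fi
}
```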
To manually build the driver
----------------------------
1. Move the base driver tar file to the directory of your choice.
For example, use '/home/username/fm10k' or '/usr/local/src/fm10k'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the
driver tar file:
# tar zxf fm10k-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number
for the driver tar:
# cd fm10k-<x.x.x>/src/
4. Compile the driver module:
# make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/updates/drivers/net/ethernet/intel/fm10k/fm10k.ko
@ -97,51 +121,57 @@ VERSION>/updates/drivers/net/ethernet/intel/fm10k/fm10k.ko
The install location listed above is the default location. This may differ
for various Linux distributions.
5. Load the module using the modprobe command.
To check the version of the driver and then load it:
# modinfo fm10k
# modprobe fm10k [parameter=port1_value,port2_value]
Alternately, make sure that any older fm10k drivers are removed from the
kernel before loading the new module:
# rmmod fm10k; modprobe fm10k
6. Assign an IP address to the interface by entering the following,
where ethX is the interface name that was shown in dmesg after modprobe:
# ip address add <IP_address>/<netmask bits> dev ethX
NOTE: Before proceeding, ensure that netdev is enabled and that a
switch manager is running. To enable netdev, use one of the following
commands:
# ifconfig <netdev> up
or
# ip link set <netdev> up
7. Verify that the interface works. Enter the following, where IP_address
is the IP address for another machine on the same subnet as the interface
that is being tested:
# ping <IP_address>
Note: For certain distributions like (but not limited to) Red Hat Enterprise
Linux 7 and Ubuntu, once the driver is installed, the initrd/initramfs file may
need to be updated to prevent the OS loading old versions of the fm10k driver.
The dracut utility may be used on Red Hat distributions:
# dracut --force
For Ubuntu:
# update-initramfs -u
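The choice between the two can be automated by probing for whichever tool the distribution provides (a sketch; the `refresh_initramfs` name is illustrative):

```shell
# Sketch: rebuild the initramfs so boot picks up the newly installed
# fm10k module instead of a stale copy bundled in the old image.
refresh_initramfs() {
    if command -v dracut >/dev/null 2>&1; then
        dracut --force             # Red Hat family distributions
    elif command -v update-initramfs >/dev/null 2>&1; then
        update-initramfs -u        # Debian/Ubuntu
    else
        echo "no supported initramfs tool found" >&2
        return 1
    fi
}
```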
================================================================================
Command Line Parameters
=======================
If the driver is built as a module, the following optional parameters are used
by entering them on the command line with the modprobe command using this
syntax:
# modprobe fm10k [<option>=<VAL1>]
For example:
# modprobe fm10k max_vfs=7
The default value for each parameter is generally the recommended setting,
unless otherwise noted.
@ -159,26 +189,31 @@ max_vfs
-------
This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs worth of virtual functions.
Valid Range: 0-64
NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x
and above, use sysfs to enable VFs. Also, for Red Hat distributions, this
parameter is only used on version 6.6 and older. For version 6.7 and newer, use
sysfs.
For example, you can create 4 VFs as follows:
# echo 4 > /sys/class/net/<dev>/device/sriov_numvfs
To disable VFs, write 0 to the same file:
# echo 0 > /sys/class/net/<dev>/device/sriov_numvfs
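The two writes above can be wrapped with a couple of guards (a sketch; `SYSFS_ROOT` is an illustrative override so the logic can be exercised without hardware, the real path is always under /sys, and the kernel requires writing 0 before changing between two non-zero counts):

```shell
# Sketch: set the SR-IOV VF count for a device through sysfs.
# Moving directly between two non-zero counts is rejected by the
# kernel, so any existing VFs are disabled first.
set_numvfs() {
    local dev=$1 count=$2
    local f=${SYSFS_ROOT:-/sys}/class/net/$dev/device/sriov_numvfs
    [ -f "$f" ] || { echo "$dev does not expose sriov_numvfs" >&2; return 1; }
    if [ "$(cat "$f")" -ne 0 ] && [ "$count" -ne 0 ]; then
        echo 0 > "$f"
    fi
    echo "$count" > "$f"
}
```

For example, `set_numvfs eth0 4` mirrors the `echo 4` command shown above, and `set_numvfs eth0 0` disables the VFs again.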
The parameters for the driver are referenced by position. Thus, if you have a
dual port adapter, or more than one adapter in your system, and want N virtual
functions per port, you must specify a number for each port with each parameter
separated by a comma. For example:
# modprobe fm10k max_vfs=4
This will spawn 4 VFs on the first port.
# modprobe fm10k max_vfs=2,4
This will spawn 2 VFs on the first port and 4 VFs on the second port.
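Since the values are positional, a tiny helper can assemble the option string from one count per port (illustrative only; not part of the driver):

```shell
# Sketch: join per-port VF counts into the comma-separated, positional
# form that "modprobe fm10k max_vfs=..." expects.
build_max_vfs() {
    local IFS=,
    printf 'max_vfs=%s\n' "$*"
}
```

With it, `modprobe fm10k $(build_max_vfs 2 4)` would request 2 VFs on the first port and 4 on the second.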
@ -192,42 +227,45 @@ support it, you can check sysfs to find the mapping.
NOTE: When SR-IOV mode is enabled, hardware VLAN filtering and VLAN tag
stripping/insertion will remain enabled. Please remove the old VLAN filter
before the new VLAN filter is added. For example:
# ip link set eth0 vf 0 vlan 100 // set vlan 100 for VF 0
# ip link set eth0 vf 0 vlan 0 // Delete vlan 100
# ip link set eth0 vf 0 vlan 200 // set a new vlan 200 for VF 0
Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, and to ensure isolation from
unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging
from the administrative interface on the PF. This configuration allows
unexpected, and potentially malicious, frames to be dropped. See "Configuring
VLAN Tagging on SR-IOV Enabled Adapter Ports" in this README for configuration
instructions.
Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports
--------------------------------------------------------
To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the
following command. The VLAN configuration should be done before the VF driver
is loaded or the VM is booted. The VF is not aware of the VLAN tag being
inserted on transmit and removed on received frames (sometimes called "port
VLAN" mode).
# ip link set dev <PF netdev id> vf <id> vlan <vlan id>
For example, the following will configure PF eth0 and the first VF on VLAN 10:
# ip link set dev eth0 vf 0 vlan 10
================================================================================
Additional Features and Configurations
======================================
Configuring the Driver on Different Distributions
-------------------------------------------------
@ -246,7 +284,8 @@ Viewing Link Messages
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following:
# dmesg -n 8
NOTE: This setting is not saved across reboots.
@ -257,17 +296,20 @@ Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
Use the ifconfig command to increase the MTU size. For example, enter the
following where X is the interface number:
# ifconfig ethX mtu 9000 up
Alternatively, you can use the ip command as follows:
# ip link set mtu 9000 dev ethX
# ip link set up dev ethX
This setting is not saved across reboots. The setting change can be made
permanent by adding 'MTU=9000' to the following file:
/etc/sysconfig/network-scripts/ifcfg-ethX for RHEL
or
/etc/sysconfig/network/<config_file> for SLES
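On RHEL, the resulting ifcfg file might look like the following sketch (the interface name and addressing are placeholders; the MTU line is the relevant part):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (illustrative values)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
MTU=9000
```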
NOTE: The maximum MTU setting for Jumbo Frames is 15342. This value coincides
with the maximum Jumbo Frames size of 15364 bytes.
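Those limits can be checked before applying a value (a sketch; the helper name is illustrative, and 68 is used as a generic IPv4 lower bound rather than anything fm10k-specific):

```shell
# Sketch: validate an MTU against the jumbo-frame ceiling quoted above
# (15342) before handing it to ip/ifconfig.
valid_fm10k_mtu() {
    [ "$1" -ge 68 ] && [ "$1" -le 15342 ]
}
```

For example, `valid_fm10k_mtu 9000 && ip link set mtu 9000 dev ethX`.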
@ -276,13 +318,17 @@ NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.
NOTE: Packet loss may have a greater impact on throughput when you use jumbo
frames. If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.
ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://kernel.org/pub/software/network/ethtool/
Supported ethtool Commands and Options for Filtering
----------------------------------------------------
@ -319,42 +365,18 @@ The Intel(R) Ethernet Switch Host Interface Driver does not support Flow
Control. It will not send pause frames. This may result in dropped frames.
Tunnel/Overlay Stateless Offloads
---------------------------------
Supported tunnels and overlays include VXLAN, GENEVE, and others depending on
hardware and software configuration. Stateless offloads are enabled by default.
To view the current state of all offloads:
# ethtool -k ethX
================================================================================
Known Issues/Troubleshooting
============================
FUM_BAD_VF_QACCESS error on port reset
--------------------------------------
@ -404,26 +426,21 @@ driver_override entry in the /sys filesystem to prevent the bus driver from
binding to the PF host interface device in the first place.
================================================================================
Support
=======
For general information, go to the Intel support website at:
http://www.intel.com/support/
or the Intel Wired Networking project hosted by Sourceforge at:
http://sourceforge.net/projects/e1000
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-devel@lists.sf.net.
================================================================================
License
=======
This program is free software; you can redistribute it and/or modify it under
the terms and conditions of the GNU General Public License, version 2, as
published by the Free Software Foundation.
@ -439,14 +456,13 @@ St - Fifth Floor, Boston, MA 02110-1301 USA.
The full GNU General Public License is included in this distribution in the
file called "COPYING".
Copyright(c) 2015-2018 Intel Corporation.
================================================================================
Trademarks
==========
Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.
* Other names and brands may be claimed as the property of others.

SUMS

@ -1,34 +1,35 @@
19138 5 fm10k-0.27.1/src/fm10k_dcbnl.c
04654 19 fm10k-0.27.1/src/fm10k.h
18774 54 fm10k-0.27.1/src/fm10k_netdev.c
46131 6 fm10k-0.27.1/src/fm10k_uio.c
46696 76 fm10k-0.27.1/src/fm10k_pci.c
04219 58 fm10k-0.27.1/src/fm10k_main.c
31444 2 fm10k-0.27.1/src/fm10k_ies.c
05651 20 fm10k-0.27.1/src/fm10k_iov.c
61940 6 fm10k-0.27.1/src/fm10k_debugfs.c
28646 36 fm10k-0.27.1/src/fm10k_ethtool.c
43494 7 fm10k-0.27.1/src/fm10k_param.c
38807 2 fm10k-0.27.1/src/fm10k_osdep.h
13929 11 fm10k-0.27.1/src/fm10k_mbx.h
45842 1 fm10k-0.27.1/src/fm10k_common.h
06999 7 fm10k-0.27.1/src/fm10k_tlv.h
09292 25 fm10k-0.27.1/src/fm10k_type.h
27102 15 fm10k-0.27.1/src/fm10k_common.c
39664 4 fm10k-0.27.1/src/fm10k_pf.h
30297 24 fm10k-0.27.1/src/fm10k_tlv.c
55495 57 fm10k-0.27.1/src/fm10k_pf.c
57610 2 fm10k-0.27.1/src/fm10k_vf.h
45619 62 fm10k-0.27.1/src/fm10k_mbx.c
63411 16 fm10k-0.27.1/src/fm10k_vf.c
23395 187 fm10k-0.27.1/src/kcompat.h
19777 54 fm10k-0.27.1/src/kcompat.c
24889 10 fm10k-0.27.1/src/kcompat_overflow.h
12653 6 fm10k-0.27.1/src/Makefile
10410 11 fm10k-0.27.1/src/common.mk
44685 1 fm10k-0.27.1/src/Module.supported
32962 7 fm10k-0.27.1/scripts/set_irq_affinity
50228 1 fm10k-0.27.1/pci.updates
12529 18 fm10k-0.27.1/COPYING
09965 17 fm10k-0.27.1/README
48664 3 fm10k-0.27.1/fm10k.7
30174 10 fm10k-0.27.1/fm10k.spec

fm10k.7

@ -16,7 +16,7 @@ modprobe fm10k [<option>=<VAL1>,<VAL2>,...]
.SH DESCRIPTION
This driver is intended for \fB2.6.32\fR and newer kernels. A version of the driver may already be included by your distribution and/or the kernel.org kernel.
This driver includes support for any 64 bit Linux supported system, x86_64, PPC64, ARM, etc.
.LP
This driver is only supported as a loadable module at this time. Intel is not supplying patches against the kernel source to allow for static linking of the drivers.
@ -28,17 +28,21 @@ use with Linux.
.LP
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU) to a value larger than the default value of 1500.
Use the ifconfig command to increase the MTU size. For example, enter the following where X is the interface number:
# ifconfig ethX mtu 9000 up
Alternatively, you can use the ip command as follows:
# ip link set mtu 9000 dev ethX
# ip link set up dev ethX
.LP
NOTE: The maximum MTU setting for Jumbo Frames is 15342. This value coincides with the maximum Jumbo Frames size of 15364 bytes.
NOTE: This driver will attempt to use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets.
NOTE: Packet loss may have a greater impact on throughput when you use jumbo frames. If you observe a drop in performance after enabling jumbo frames, enabling flow control may mitigate the issue.
See the section "Jumbo Frames" in the Readme.
.LP
.B RSS


@ -1,6 +1,6 @@
Name: fm10k
Summary: Intel(R) Ethernet Switch Host Interface Driver
Version: 0.27.1
Release: 1
Source: %{name}-%{version}.tar.gz
Vendor: Intel Corporation
@ -19,6 +19,14 @@ BuildRoot: %{_tmppath}/%{name}-%{version}-root
%define pcitable %find %{_pcitable}
Requires: kernel, fileutils, findutils, gawk, bash
# Check for existence of %kernel_module_package_buildreqs ...
%if 0%{?!kernel_module_package_buildreqs:1}
# ... and provide a suitable definition if it is not defined
%define kernel_module_package_buildreqs kernel-devel
%endif
BuildRequires: %kernel_module_package_buildreqs
%description
This package contains the Intel(R) Ethernet Switch Host Interface Driver.
@ -42,6 +50,7 @@ find lib -name "fm10k.ko" \
rm -rf %{buildroot}
%files -f file.list
%defattr(-,root,root)
%{_mandir}/man7/fm10k.7.gz
%doc COPYING
@ -75,6 +84,8 @@ bash -s %{pciids} \
%{name} \
<<"END"
#! /bin/bash
# Copyright (C) 2017 Intel Corporation
# For licensing information, see the file 'LICENSE' in the root folder
# $1 = system pci.ids file to update
# $2 = system pcitable file to update
# $3 = file with new entries in pci.ids file format


@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
# Copyright(c) 2013 - 2019 Intel Corporation.
# updates for the system pci.ids file
#


@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright (c) 2015 - 2019, Intel Corporation
# For licensing information, see the file 'LICENSE' in the root folder
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
@ -36,8 +37,9 @@
usage()
{
echo
echo "Usage: $0 [-x] {all|local|remote|one|custom} [ethX] <[ethY]>"
echo "Usage: $0 [-x|-X] {all|local|remote|one|custom} [ethX] <[ethY]>"
echo " options: -x Configure XPS as well as smp_affinity"
echo " options: -X Disable XPS but set smp_affinity"
echo " options: {remote|one} can be followed by a specific node number"
echo " Ex: $0 local eth0"
echo " Ex: $0 remote 1 eth0"
@ -47,11 +49,37 @@ usage()
exit 1
}
usageX()
{
echo "options -x and -X cannot both be specified, pick one"
exit 1
}
if [ "$1" == "-x" ]; then
XPS_ENA=1
shift
fi
if [ "$1" == "-X" ]; then
if [ -n "$XPS_ENA" ]; then
usageX
fi
XPS_DIS=2
shift
fi
if [ "$1" == -x ]; then
usageX
fi
if [ -n "$XPS_ENA" ] && [ -n "$XPS_DIS" ]; then
usageX
fi
if [ -z "$XPS_ENA" ]; then
XPS_ENA=$XPS_DIS
fi
num='^[0-9]+$'
# Vars
AFF=$1
@ -106,10 +134,18 @@ set_affinity()
printf "%s" $MASK > /proc/irq/$IRQ/smp_affinity
printf "%s %d %s -> /proc/irq/$IRQ/smp_affinity\n" $IFACE $core $MASK
case "$XPS_ENA" in
1)
printf "%s %d %s -> /sys/class/net/%s/queues/tx-%d/xps_cpus\n" $IFACE $core $MASK $IFACE $((n-1))
printf "%s" $MASK > /sys/class/net/$IFACE/queues/tx-$((n-1))/xps_cpus
;;
2)
MASK=0
printf "%s %d %s -> /sys/class/net/%s/queues/tx-%d/xps_cpus\n" $IFACE $core $MASK $IFACE $((n-1))
printf "%s" $MASK > /sys/class/net/$IFACE/queues/tx-$((n-1))/xps_cpus
;;
*)
esac
}
# Allow usage of , or -


@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
# Copyright(c) 2013 - 2019 Intel Corporation.
ifneq ($(KERNELRELEASE),)
# kbuild part of makefile


@ -1,6 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
# Copyright(c) 2013 - 2019 Intel Corporation.
# SPDX-License-Identifier: GPL-2.0-only
# Copyright (C) 2015-2019 Intel Corporation
#
# common Makefile rules useful for out-of-tree Linux driver builds
#
# Usage: include common.mk
@ -111,7 +114,6 @@ VSP := $(foreach file, ${VSP}, ${test_file})
CSP := $(foreach file, ${CSP}, ${test_file})
MSP := $(foreach file, ${MSP}, ${test_file})
# and use the first valid entry in the Search Paths
ifeq (,${VERSION_FILE})
VERSION_FILE := $(firstword ${VSP})
@ -137,6 +139,10 @@ ifeq (,$(wildcard ${SYSTEM_MAP_FILE}))
$(warning Missing System.map file - depmod will not check for missing symbols)
endif
ifneq ($(words $(subst :, ,$(CURDIR))), 1)
$(error Sources directory '$(CURDIR)' cannot contain spaces nor colons. Rename directory or move sources to another path)
endif
#######################
# Linux Version Setup #
#######################


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_H_
#define _FM10K_H_
@ -551,9 +551,6 @@ void fm10k_update_stats(struct fm10k_intfc *interface);
void fm10k_service_event_schedule(struct fm10k_intfc *interface);
void fm10k_macvlan_schedule(struct fm10k_intfc *interface);
void fm10k_update_rx_drop_en(struct fm10k_intfc *interface);
#ifdef CONFIG_NET_POLL_CONTROLLER
void fm10k_netpoll(struct net_device *netdev);
#endif
/* Netdev */
#ifdef HAVE_ENCAP_CSUM_OFFLOAD
@ -619,6 +616,7 @@ void fm10k_iov_suspend(struct pci_dev *pdev);
int fm10k_iov_resume(struct pci_dev *pdev);
void fm10k_iov_disable(struct pci_dev *pdev);
int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs);
void fm10k_iov_update_stats(struct fm10k_intfc *interface);
s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid);
#ifdef IFLA_VF_MAX
int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac);
@ -637,7 +635,11 @@ int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int max_rate);
#endif
int fm10k_ndo_get_vf_config(struct net_device *netdev,
int vf_idx, struct ifla_vf_info *ivi);
#endif
#ifdef HAVE_VF_STATS
int fm10k_ndo_get_vf_stats(struct net_device *netdev,
int vf_idx, struct ifla_vf_stats *stats);
#endif /* HAVE_VF_STATS */
#endif /* IFLA_VF_MAX */
/* DebugFS */
#ifdef CONFIG_DEBUG_FS


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k_common.h"


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_COMMON_H_
#define _FM10K_COMMON_H_

View file

@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"
@ -37,7 +37,7 @@ static int fm10k_dcbnl_ieee_getets(struct net_device *dev, struct ieee_ets *ets)
static int fm10k_dcbnl_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
{
u8 num_tc = 0;
int i;
/* verify type and determine num_tcs needed */
for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
@ -58,7 +58,7 @@ static int fm10k_dcbnl_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
/* update TC hardware mapping if necessary */
if (num_tc != netdev_get_num_tc(dev)) {
int err = fm10k_setup_tc(dev, num_tc);
if (err)
return err;
}


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"
@ -160,8 +160,6 @@ void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector)
snprintf(name, sizeof(name), "q_vector.%03d", q_vector->v_idx);
q_vector->dbg_q_vector = debugfs_create_dir(name, interface->dbg_intfc);
if (!q_vector->dbg_q_vector)
return;
/* Generate a file for each rx ring in the q_vector */
for (i = 0; i < q_vector->tx.count; i++) {


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include <linux/vmalloc.h>
@ -24,7 +24,8 @@ struct fm10k_stats {
/* netdevice statistics */
#define FM10K_NETDEV_STAT(_net_stat) \
FM10K_STAT_FIELDS(struct net_device_stats, __stringify(_net_stat), \
_net_stat)
static const struct fm10k_stats fm10k_gstrings_net_stats[] = {
FM10K_NETDEV_STAT(tx_packets),
@ -223,7 +224,6 @@ static void __fm10k_add_ethtool_stats(u64 **data, void *pointer,
const unsigned int size)
{
unsigned int i;
if (!pointer) {
/* memory is not zero allocated so we have to clear it */
@ -233,7 +233,7 @@ static void __fm10k_add_ethtool_stats(u64 **data, void *pointer,
}
for (i = 0; i < size; i++) {
char *p = (char *)pointer + stats[i].stat_offset;
switch (stats[i].sizeof_stat) {
case sizeof(u64):
@ -559,7 +559,7 @@ static int fm10k_set_ringparam(struct net_device *netdev,
/* allocate temporary buffer to store rings in */
i = max_t(int, interface->num_tx_queues, interface->num_rx_queues);
temp_ring = vmalloc(array_size(i, sizeof(struct fm10k_ring)));
if (!temp_ring) {
err = -ENOMEM;
@ -652,7 +652,6 @@ static int fm10k_set_coalesce(struct net_device *dev,
struct ethtool_coalesce *ec)
{
struct fm10k_intfc *interface = netdev_priv(dev);
u16 tx_itr, rx_itr;
int i;
@ -678,7 +677,8 @@ static int fm10k_set_coalesce(struct net_device *dev,
/* update q_vectors */
for (i = 0; i < interface->num_q_vectors; i++) {
struct fm10k_q_vector *qv = interface->q_vector[i];
qv->tx.itr = tx_itr;
qv->rx.itr = rx_itr;
}


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"
#include "fm10k_vf.h"
@ -321,8 +321,6 @@ static void fm10k_mask_aer_comp_abort(struct pci_dev *pdev)
pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_MASK, &err_mask);
err_mask |= PCI_ERR_UNC_COMP_ABORT;
pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_MASK, err_mask);
mmiowb();
}
int fm10k_iov_resume(struct pci_dev *pdev)
@ -428,7 +426,7 @@ static s32 fm10k_iov_alloc_data(struct pci_dev *pdev, int num_vfs)
struct fm10k_iov_data *iov_data = interface->iov_data;
struct fm10k_hw *hw = &interface->hw;
size_t size;
int i;
/* return error if iov_data is already populated */
if (iov_data)
@ -454,6 +452,7 @@ static s32 fm10k_iov_alloc_data(struct pci_dev *pdev, int num_vfs)
/* loop through vf_info structures initializing each entry */
for (i = 0; i < num_vfs; i++) {
struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
int err;
/* Record VF VSI value */
vf_info->vsi = i + 1;
@ -521,6 +520,27 @@ int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs)
return num_vfs;
}
/**
* fm10k_iov_update_stats - Update stats for all VFs
* @interface: device private structure
*
* Updates the VF statistics for all enabled VFs. Expects to be called by
* fm10k_update_stats and assumes that locking via the __FM10K_UPDATING_STATS
* bit is already handled.
*/
void fm10k_iov_update_stats(struct fm10k_intfc *interface)
{
struct fm10k_iov_data *iov_data = interface->iov_data;
struct fm10k_hw *hw = &interface->hw;
int i;
if (!iov_data)
return;
for (i = 0; i < iov_data->num_vfs; i++)
hw->iov.ops.update_stats(hw, iov_data->vf_info[i].stats, i);
}
#ifdef IFLA_VF_MAX
static inline void fm10k_reset_vf_info(struct fm10k_intfc *interface,
struct fm10k_vf_info *vf_info)
@ -667,5 +687,35 @@ int fm10k_ndo_get_vf_config(struct net_device *netdev,
return 0;
}
#endif /* IFLA_VF_MAX */
#ifdef HAVE_VF_STATS
int fm10k_ndo_get_vf_stats(struct net_device *netdev,
int vf_idx, struct ifla_vf_stats *stats)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_iov_data *iov_data = interface->iov_data;
struct fm10k_hw *hw = &interface->hw;
struct fm10k_hw_stats_q *hw_stats;
u32 idx, qpp;
/* verify SR-IOV is active and that vf idx is valid */
if (!iov_data || vf_idx >= iov_data->num_vfs)
return -EINVAL;
qpp = fm10k_queues_per_pool(hw);
hw_stats = iov_data->vf_info[vf_idx].stats;
for (idx = 0; idx < qpp; idx++) {
stats->rx_packets += hw_stats[idx].rx_packets.count;
stats->tx_packets += hw_stats[idx].tx_packets.count;
stats->rx_bytes += hw_stats[idx].rx_bytes.count;
stats->tx_bytes += hw_stats[idx].tx_bytes.count;
#ifdef HAVE_VF_STATS_DROPPED
stats->rx_dropped += hw_stats[idx].rx_drops.count;
#endif
}
return 0;
}
#endif /* HAVE_VF_STATS */
#endif /* IFLA_VF_MAX */
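The new fm10k_ndo_get_vf_stats() above totals per-queue hardware counters into a single set of VF statistics. A simplified sketch of that accumulation, using stand-in types rather than the real driver structures:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the per-queue counters a VF owns; the driver keeps
 * one fm10k_hw_stats_q entry per queue in the pool. */
struct q_stats { uint64_t rx_packets, tx_packets; };

/* Sum per-queue counters into one VF total, as the ndo callback does. */
static void sum_vf_stats(const struct q_stats *q, unsigned int nq,
			 struct q_stats *out)
{
	unsigned int i;

	out->rx_packets = out->tx_packets = 0;
	for (i = 0; i < nq; i++) {
		out->rx_packets += q[i].rx_packets;
		out->tx_packets += q[i].tx_packets;
	}
}
```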


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include <linux/types.h>
#include <linux/module.h>
@ -13,17 +13,17 @@
#include "fm10k.h"
#define DRV_VERSION "0.26.1"
#define DRV_VERSION "0.27.1"
#define DRV_SUMMARY "Intel(R) Ethernet Switch Host Interface Driver"
const char fm10k_driver_version[] = DRV_VERSION;
char fm10k_driver_name[] = "fm10k";
static const char fm10k_driver_string[] = DRV_SUMMARY;
static const char fm10k_copyright[] =
"Copyright(c) 2013 - 2018 Intel Corporation.";
"Copyright(c) 2013 - 2019 Intel Corporation.";
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION(DRV_SUMMARY);
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
/* single workqueue for entire fm10k driver */
@ -43,6 +43,8 @@ static int __init fm10k_init_module(void)
/* create driver workqueue */
fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
fm10k_driver_name);
if (!fm10k_workqueue)
return -ENOMEM;
dev_add_pack(&ies_packet_type);
@ -287,7 +289,7 @@ static bool fm10k_add_rx_frag(struct fm10k_rx_buffer *rx_buffer,
/* we need the header to contain the greater of either ETH_HLEN or
* 60 bytes if the skb->len is less than 60 for skb_pad.
*/
pull_len = eth_get_headlen(va, FM10K_RX_HDR_LEN);
pull_len = eth_get_headlen(skb->dev, va, FM10K_RX_HDR_LEN);
/* align pull length to size of long to optimize memcpy performance */
memcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));
@ -322,7 +324,7 @@ static struct sk_buff *fm10k_fetch_rx_buffer(struct fm10k_ring *rx_ring,
/* prefetch first cache line of first page */
prefetch(page_addr);
#if L1_CACHE_BYTES < 128
prefetch(page_addr + L1_CACHE_BYTES);
prefetch((void *)((u8 *)page_addr + L1_CACHE_BYTES));
#endif
/* allocate a skb to store the frags */
@ -1040,7 +1042,7 @@ static void fm10k_tx_map(struct fm10k_ring *tx_ring,
struct sk_buff *skb = first->skb;
struct fm10k_tx_buffer *tx_buffer;
struct fm10k_tx_desc *tx_desc;
struct skb_frag_struct *frag;
skb_frag_t *frag;
unsigned char *data;
dma_addr_t dma;
unsigned int data_len, size;
@ -1137,18 +1139,22 @@ static void fm10k_tx_map(struct fm10k_ring *tx_ring,
fm10k_maybe_stop_tx(tx_ring, DESC_NEEDED);
/* notify HW of packet */
#ifdef HAVE_SKB_XMIT_MORE
if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
#endif /* HAVE_SKB_XMIT_MORE */
if (netif_xmit_stopped(txring_txq(tx_ring)) || !netdev_xmit_more()) {
writel(i, tx_ring->tail);
#ifndef SPIN_UNLOCK_IMPLIES_MMIOWB
/* we need this if more than one processor can write to our tail
* at a time, it synchronizes IO on IA64/Altix systems
/* The following mmiowb() is required on certain
* architectures (IA64/Altix in particular) in order to
* synchronize the I/O calls with respect to a spin lock. This
* is because the wmb() on those architectures does not
* guarantee anything for posted I/O writes.
*
* Note that the associated spin_unlock() is not within the
* driver code, but in the networking core stack.
*/
mmiowb();
#ifdef HAVE_SKB_XMIT_MORE
#endif /* SPIN_UNLOCK_IMPLIES_MMIOWB */
}
#endif /* HAVE_SKB_XMIT_MORE */
return;
dma_error:
@ -1183,8 +1189,11 @@ netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
* + 2 desc gap to keep tail from touching head
* otherwise try next time
*/
for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);
for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[f];
count += TXD_USE_COUNT(skb_frag_size(frag));
}
if (fm10k_maybe_stop_tx(tx_ring, count + 3)) {
tx_ring->tx_stats.tx_busy++;
@ -1573,11 +1582,11 @@ static int fm10k_poll(struct napi_struct *napi, int budget)
if (!clean_complete)
return budget;
/* all work done, exit the polling mode */
napi_complete_done(napi, work_done);
/* re-enable the q_vector */
fm10k_qv_enable(q_vector);
/* Exit the polling mode, but don't re-enable interrupts if stack might
* poll us due to busy-polling
*/
if (likely(napi_complete_done(napi, work_done)))
fm10k_qv_enable(q_vector);
return min(work_done, budget - 1);
}
@ -1711,14 +1720,12 @@ static int fm10k_alloc_q_vector(struct fm10k_intfc *interface,
{
struct fm10k_q_vector *q_vector;
struct fm10k_ring *ring;
int ring_count, size;
int ring_count;
ring_count = txr_count + rxr_count;
size = sizeof(struct fm10k_q_vector) +
(sizeof(struct fm10k_ring) * ring_count);
/* allocate q_vector and rings */
q_vector = kzalloc(size, GFP_KERNEL);
q_vector = kzalloc(struct_size(q_vector, ring, ring_count), GFP_KERNEL);
if (!q_vector)
return -ENOMEM;
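The struct_size() conversion above sizes a structure with a trailing flexible array member while checking for overflow, replacing the open-coded `sizeof(...) + n * sizeof(...)`. A hedged userspace sketch of the same idea (illustrative names, not the kernel macro):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for a struct ending in a flexible array member,
 * like fm10k_q_vector's trailing ring[] array. */
struct qv_sketch {
	int itr;
	long ring[];		/* flexible array member */
};

/* Compute sizeof(struct) + count * sizeof(element), saturating to
 * SIZE_MAX on overflow so the allocation fails instead of truncating. */
static size_t struct_size_sketch(size_t ring_count)
{
	size_t base = sizeof(struct qv_sketch);

	if (ring_count > (SIZE_MAX - base) / sizeof(long))
		return SIZE_MAX;	/* overflow: kzalloc() will fail */
	return base + ring_count * sizeof(long);
}
```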
@ -1984,7 +1991,7 @@ static int fm10k_init_msix_capability(struct fm10k_intfc *interface)
static bool fm10k_cache_ring_qos(struct fm10k_intfc *interface)
{
struct net_device *dev = interface->netdev;
int pc, offset, rss_i, i, q_idx;
int pc, offset, rss_i, i;
u16 pc_stride = interface->ring_feature[RING_F_QOS].mask + 1;
u8 num_pcs = netdev_get_num_tc(dev);
@ -1994,7 +2001,8 @@ static bool fm10k_cache_ring_qos(struct fm10k_intfc *interface)
rss_i = interface->ring_feature[RING_F_RSS].indices;
for (pc = 0, offset = 0; pc < num_pcs; pc++, offset += rss_i) {
q_idx = pc;
int q_idx = pc;
for (i = 0; i < rss_i; i++) {
interface->tx_ring[offset + i]->reg_idx = q_idx;
interface->tx_ring[offset + i]->qos_pc = pc;


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k_common.h"
@ -297,13 +297,14 @@ static u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len)
{
struct fm10k_mbx_fifo *fifo = &mbx->rx;
u16 total_len = 0, msg_len;
u32 *msg;
/* length should include previous amounts pushed */
len += mbx->pushed;
/* offset in message is based off of current message size */
do {
u32 *msg;
msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len);
msg_len = FM10K_TLV_DWORD_LEN(*msg);
total_len += msg_len;
@ -1920,7 +1921,6 @@ static void fm10k_sm_mbx_transmit(struct fm10k_hw *hw,
/* reduce length by 1 to convert to a mask */
u16 mbmem_len = mbx->mbmem_len - 1;
u16 tail_len, len = 0;
u32 *msg;
/* push head behind tail */
if (mbx->tail < head)
@ -1930,6 +1930,8 @@ static void fm10k_sm_mbx_transmit(struct fm10k_hw *hw,
/* determine msg aligned offset for end of buffer */
do {
u32 *msg;
msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len);
tail_len = len;
len += FM10K_TLV_DWORD_LEN(*msg);
@ -2132,11 +2134,10 @@ fifo_err:
* DWORDs, not bytes. Any invalid values will cause the mailbox to return
* error.
**/
s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
s32 fm10k_sm_mbx_init(struct fm10k_hw __always_unused *hw,
struct fm10k_mbx_info *mbx,
const struct fm10k_msg_data *msg_data)
{
UNREFERENCED_1PARAMETER(hw);
mbx->mbx_reg = FM10K_GMBX;
mbx->mbmem_reg = FM10K_MBMEM_PF(0);


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_MBX_H_
#define _FM10K_MBX_H_


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"
#include <linux/vmalloc.h>
@ -64,7 +64,7 @@ err:
**/
static int fm10k_setup_all_tx_resources(struct fm10k_intfc *interface)
{
int i, err = 0;
int i, err;
for (i = 0; i < interface->num_tx_queues; i++) {
err = fm10k_setup_tx_resources(interface->tx_ring[i]);
@ -131,7 +131,7 @@ err:
**/
static int fm10k_setup_all_rx_resources(struct fm10k_intfc *interface)
{
int i, err = 0;
int i, err;
for (i = 0; i < interface->num_rx_queues; i++) {
err = fm10k_setup_rx_resources(interface->rx_ring[i]);
@ -179,7 +179,6 @@ void fm10k_unmap_and_free_tx_resource(struct fm10k_ring *ring,
**/
static void fm10k_clean_tx_ring(struct fm10k_ring *tx_ring)
{
struct fm10k_tx_buffer *tx_buffer;
unsigned long size;
u16 i;
@ -189,7 +188,8 @@ static void fm10k_clean_tx_ring(struct fm10k_ring *tx_ring)
/* Free all the Tx ring sk_buffs */
for (i = 0; i < tx_ring->count; i++) {
tx_buffer = &tx_ring->tx_buffer[i];
struct fm10k_tx_buffer *tx_buffer = &tx_ring->tx_buffer[i];
fm10k_unmap_and_free_tx_resource(tx_ring, tx_buffer);
}
@ -263,8 +263,7 @@ static void fm10k_clean_rx_ring(struct fm10k_ring *rx_ring)
if (!rx_ring->rx_buffer)
return;
if (rx_ring->skb)
dev_kfree_skb(rx_ring->skb);
dev_kfree_skb(rx_ring->skb);
rx_ring->skb = NULL;
/* Free all the Rx ring sk_buffs */
@ -988,7 +987,7 @@ static int fm10k_uc_vlan_unsync(struct net_device *netdev,
u16 glort = interface->glort;
u16 vid = interface->vid;
bool set = !!(vid / VLAN_N_VID);
int err = -EHOSTDOWN;
int err;
/* drop any leading bits on the VLAN ID */
vid &= VLAN_N_VID - 1;
@ -1008,7 +1007,7 @@ static int fm10k_mc_vlan_unsync(struct net_device *netdev,
u16 glort = interface->glort;
u16 vid = interface->vid;
bool set = !!(vid / VLAN_N_VID);
int err = -EHOSTDOWN;
int err;
/* drop any leading bits on the VLAN ID */
vid &= VLAN_N_VID - 1;
@ -1700,11 +1699,11 @@ static int __fm10k_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
static void fm10k_assign_l2_accel(struct fm10k_intfc *interface,
struct fm10k_l2_accel *l2_accel)
{
struct fm10k_ring *ring;
int i;
for (i = 0; i < interface->num_rx_queues; i++) {
ring = interface->rx_ring[i];
struct fm10k_ring *ring = interface->rx_ring[i];
rcu_assign_pointer(ring->l2_accel, l2_accel);
}
@ -1719,7 +1718,7 @@ static void *fm10k_dfwd_add_station(struct net_device *dev,
struct fm10k_l2_accel *old_l2_accel = NULL;
struct fm10k_dglort_cfg dglort = { 0 };
struct fm10k_hw *hw = &interface->hw;
int size = 0, i;
int size, i;
u16 vid, glort;
/* The hardware supported by fm10k only filters on the destination MAC
@ -1931,7 +1930,10 @@ static const struct net_device_ops fm10k_netdev_ops = {
.ndo_set_vf_tx_rate = fm10k_ndo_set_vf_bw,
#endif
.ndo_get_vf_config = fm10k_ndo_get_vf_config,
#endif
#ifdef HAVE_VF_STATS
.ndo_get_vf_stats = fm10k_ndo_get_vf_stats,
#endif /* HAVE_VF_STATS */
#endif /* IFLA_VF_MAX */
#ifdef HAVE_FDB_OPS
#ifndef USE_DEFAULT_FDB_DEL_DUMP
.ndo_fdb_add = ndo_dflt_fdb_add,
@ -1974,9 +1976,6 @@ static const struct net_device_ops fm10k_netdev_ops = {
/* End of ops backported into RHEL7.x */
},
#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = fm10k_netpoll,
#endif
#ifdef HAVE_NDO_FEATURES_CHECK
.ndo_features_check = fm10k_features_check,
#endif


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
/* glue for the OS independent part of fm10k
* includes register access macros
@ -39,24 +39,4 @@ do { \
/* read ctrl register which has no clear on read fields as PCIe flush */
#define fm10k_write_flush(hw) fm10k_read_reg((hw), FM10K_CTRL)
/* used by shared code to declare unused parameters */
#define UNREFERENCED_XPARAMETER
#define UNREFERENCED_1PARAMETER(_p) \
uninitialized_var(_p)
#define UNREFERENCED_2PARAMETER(_p, _q) do { \
uninitialized_var(_p); \
uninitialized_var(_q); \
} while (0)
#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { \
uninitialized_var(_p); \
uninitialized_var(_q); \
uninitialized_var(_r); \
} while (0)
#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) do { \
uninitialized_var(_p); \
uninitialized_var(_q); \
uninitialized_var(_r); \
uninitialized_var(_s); \
} while (0)
#endif /* _FM10K_OSDEP_H_ */


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include <linux/types.h>
#include <linux/module.h>
@ -136,10 +136,10 @@ static int fm10k_validate_option(unsigned int *value,
break;
case list_option: {
int i;
const struct fm10k_opt_list *ent;
for (i = 0; i < opt->arg.l.nr; i++) {
ent = &opt->arg.l.p[i];
const struct fm10k_opt_list *ent = &opt->arg.l.p[i];
if (*value == ent->i) {
if (ent->str[0] != '\0')
pr_info("%s\n", ent->str);


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include <linux/module.h>
#include <linux/interrupt.h>
@ -362,7 +362,6 @@ static void fm10k_detach_subtask(struct fm10k_intfc *interface)
struct net_device *netdev = interface->netdev;
u32 __iomem *hw_addr;
u32 value;
int err;
/* do nothing if netdev is still present or hw_addr is set */
if (netif_device_present(netdev) || interface->hw.hw_addr)
@ -380,6 +379,8 @@ static void fm10k_detach_subtask(struct fm10k_intfc *interface)
hw_addr = READ_ONCE(interface->uc_addr);
value = readl(hw_addr);
if (~value) {
int err;
/* Make sure the reset was initiated because we detached,
* otherwise we might race with a different reset flow.
*/
@ -647,6 +648,9 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
net_stats->rx_errors = rx_errors;
net_stats->rx_dropped = interface->stats.nodesc_drop.count;
/* Update VF statistics */
fm10k_iov_update_stats(interface);
clear_bit(__FM10K_UPDATING_STATS, interface->state);
}
@ -715,8 +719,6 @@ static void fm10k_watchdog_subtask(struct fm10k_intfc *interface)
*/
static void fm10k_check_hang_subtask(struct fm10k_intfc *interface)
{
int i;
/* If we're down or resetting, just bail */
if (test_bit(__FM10K_DOWN, interface->state) ||
test_bit(__FM10K_RESETTING, interface->state))
@ -728,6 +730,8 @@ static void fm10k_check_hang_subtask(struct fm10k_intfc *interface)
interface->next_tx_hang_check = jiffies + (2 * HZ);
if (netif_carrier_ok(interface->netdev)) {
int i;
/* Force detection of hung controller */
for (i = 0; i < interface->num_tx_queues; i++)
set_check_for_tx_hang(interface->tx_ring[i]);
@ -1236,28 +1240,6 @@ static irqreturn_t fm10k_msix_mbx_vf(int __always_unused irq, void *data)
return IRQ_HANDLED;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
/**
* fm10k_netpoll - A Polling 'interrupt' handler
* @netdev: network interface device structure
*
* This is used by netconsole to send skbs without having to re-enable
* interrupts. It's not called while the normal interrupt routine is executing.
**/
void fm10k_netpoll(struct net_device *netdev)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
int i;
/* if interface is down do nothing */
if (test_bit(__FM10K_DOWN, interface->state))
return;
for (i = 0; i < interface->num_q_vectors; i++)
fm10k_msix_clean_rings(0, interface->q_vector[i]);
}
#endif
#define FM10K_ERR_MSG(type) case (type): error = #type; break
static void fm10k_handle_fault(struct fm10k_intfc *interface, int type,
struct fm10k_fault *fault)
@ -2430,7 +2412,7 @@ static int fm10k_handle_resume(struct fm10k_intfc *interface)
/* Restart the MAC/VLAN request queue in-case of outstanding events */
fm10k_macvlan_schedule(interface);
return err;
return 0;
}
/**
@ -2443,7 +2425,7 @@ static int fm10k_handle_resume(struct fm10k_intfc *interface)
**/
static int __maybe_unused fm10k_resume(struct device *dev)
{
struct fm10k_intfc *interface = pci_get_drvdata(to_pci_dev(dev));
struct fm10k_intfc *interface = dev_get_drvdata(dev);
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
int err;
@ -2470,7 +2452,7 @@ static int __maybe_unused fm10k_resume(struct device *dev)
**/
static int __maybe_unused fm10k_suspend(struct device *dev)
{
struct fm10k_intfc *interface = pci_get_drvdata(to_pci_dev(dev));
struct fm10k_intfc *interface = dev_get_drvdata(dev);
struct net_device *netdev = interface->netdev;
netif_device_detach(netdev);
@ -2598,7 +2580,9 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_RECOVERED;
}
#ifdef REQUIRE_PCI_CLEANUP_AER_ERROR_STATUS
pci_cleanup_aer_uncorrect_error_status(pdev);
#endif
return result;
}
@ -2617,6 +2601,7 @@ static void fm10k_io_resume(struct pci_dev *pdev)
int err;
err = fm10k_handle_resume(interface);
if (err)
dev_warn(&pdev->dev,
"%s failed: %d\n", __func__, err);
@ -2638,7 +2623,6 @@ static void fm10k_io_reset_prepare(struct pci_dev *pdev)
if (pci_num_vf(pdev))
dev_warn(&pdev->dev,
"PCIe FLR may cause issues for any active VF devices\n");
fm10k_prepare_suspend(pci_get_drvdata(pdev));
}
@ -2690,11 +2674,10 @@ static struct pci_error_handlers fm10k_err_handler = {
.error_detected = fm10k_io_error_detected,
.slot_reset = fm10k_io_slot_reset,
.resume = fm10k_io_resume,
#ifdef HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
#if defined(HAVE_PCI_ERROR_HANDLER_RESET_PREPARE)
.reset_prepare = fm10k_io_reset_prepare,
.reset_done = fm10k_io_reset_done,
#endif
#ifdef HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
#elif defined(HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY)
.reset_notify = fm10k_io_reset_notify,
#endif
};


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k_pf.h"
#include "fm10k_vf.h"
@ -900,7 +900,7 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
goto err_out;
}
udelay(100);
usleep_range(100, 200);
txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
}
@ -1152,13 +1152,12 @@ static void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
* assumption is that in this case it is acceptable to just directly
* hand off the message from the VF to the underlying shared code.
**/
s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 __always_unused **results,
struct fm10k_mbx_info *mbx)
{
struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
u8 vf_idx = vf_info->vf_idx;
UNREFERENCED_1PARAMETER(results);
return hw->iov.ops.assign_int_moderator(hw, vf_idx);
}
@ -1353,7 +1352,6 @@ s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
{
struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
u32 *result;
s32 err = 0;
u32 msg[2];
u8 mode = 0;
@ -1363,7 +1361,7 @@ s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results,
return FM10K_ERR_PARAM;
if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) {
result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE];
u32 *result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE];
/* XCAST mode update requested */
err = fm10k_tlv_attr_get_u8(result, &mode);
@ -1567,7 +1565,7 @@ static s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type,
/* read remaining fields */
fault->address = fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_HI);
fault->address <<= 32;
fault->address = fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_LO);
fault->address |= fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_LO);
fault->specinfo = fm10k_read_reg(hw, type + FM10K_FAULT_SPECINFO);
/* clear valid bit to allow for next error */
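The fix above replaces `=` with `|=` when reading the low half of the fault address: plain assignment discarded the high 32 bits that had just been shifted into place. A minimal sketch of the corrected composition of one 64-bit value from two 32-bit register reads:

```c
#include <assert.h>
#include <stdint.h>

/* Compose a 64-bit address from high and low 32-bit register values. */
static uint64_t compose64(uint32_t hi, uint32_t lo)
{
	uint64_t addr = hi;

	addr <<= 32;
	addr |= lo;	/* '|=', not '=', so the high word survives */
	return addr;
}
```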
@ -1643,13 +1641,12 @@ const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = {
* switch API.
**/
s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
struct fm10k_mbx_info __always_unused *mbx)
{
u16 glort, mask;
u32 dglort_map;
s32 err;
UNREFERENCED_1PARAMETER(mbx);
err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP],
&dglort_map);
if (err)
@ -1687,13 +1684,12 @@ const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = {
* This handler configures the default VLAN for the PF
**/
static s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
struct fm10k_mbx_info __always_unused *mbx)
{
u16 glort, pvid;
u32 pvid_update;
s32 err;
UNREFERENCED_1PARAMETER(mbx);
err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID],
&pvid_update);
if (err)
@ -1749,12 +1745,11 @@ const struct fm10k_tlv_attr fm10k_err_msg_attr[] = {
* messages that the PF has sent.
**/
s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
struct fm10k_mbx_info __always_unused *mbx)
{
struct fm10k_swapi_error err_msg;
s32 err;
UNREFERENCED_1PARAMETER(mbx);
/* extract structure from message */
err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR],
&err_msg, sizeof(err_msg));


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_PF_H_
#define _FM10K_PF_H_


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k_tlv.h"
@ -472,7 +472,7 @@ static s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
const struct fm10k_tlv_attr *tlv_attr)
{
u32 i, attr_id, offset = 0;
s32 err = 0;
s32 err;
u16 len;
/* verify pointers are not NULL */
@ -587,10 +587,10 @@ s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg,
* a minimum it just indicates that the message requested was
* unimplemented.
**/
s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
s32 fm10k_tlv_msg_error(struct fm10k_hw __always_unused *hw,
u32 __always_unused **results,
struct fm10k_mbx_info __always_unused *mbx)
{
UNREFERENCED_3PARAMETER(hw, results, mbx);
return FM10K_NOT_IMPLEMENTED;
}


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_TLV_H_
#define _FM10K_TLV_H_
@ -76,8 +76,8 @@ struct fm10k_tlv_attr {
#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 }
#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 }
#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len }
#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED }
#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR }
#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED, 0 }
#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR, 0, 0 }
struct fm10k_msg_data {
unsigned int id;


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_TYPE_H_
#define _FM10K_TYPE_H_
@ -582,6 +582,7 @@ struct fm10k_vf_info {
* at the same offset as the mailbox
*/
struct fm10k_mbx_info mbx; /* PF side of VF mailbox */
struct fm10k_hw_stats_q stats[FM10K_MAX_QUEUES_POOL];
int rate; /* Tx BW cap as defined by OS */
u16 glort; /* resource tag for this VF */
u16 sw_vid; /* Switch API assigned VLAN */


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k_vf.h"
@ -198,13 +198,12 @@ static s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
* This function should determine the MAC address for the VF
**/
s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
struct fm10k_mbx_info __always_unused *mbx)
{
u8 perm_addr[ETH_ALEN];
u16 vid;
s32 err;
UNREFERENCED_1PARAMETER(mbx);
/* record MAC address requested */
err = fm10k_tlv_attr_get_mac_vlan(
results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC],
@ -268,14 +267,14 @@ static s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw)
* This function is used to add or remove unicast MAC addresses for
* the VF.
**/
static s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort,
const u8 *mac, u16 vid, bool add, u8 flags)
static s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw,
u16 __always_unused glort,
const u8 *mac, u16 vid, bool add,
u8 __always_unused flags)
{
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 msg[7];
UNREFERENCED_2PARAMETER(glort, flags);
/* verify VLAN ID is valid */
if (vid >= FM10K_VLAN_TABLE_VID_MAX)
return FM10K_ERR_PARAM;
@ -312,14 +311,13 @@ static s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort,
* This function is used to add or remove multicast MAC addresses for
* the VF.
**/
static s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort,
static s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw,
u16 __always_unused glort,
const u8 *mac, u16 vid, bool add)
{
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 msg[7];
UNREFERENCED_1PARAMETER(glort);
/* verify VLAN ID is valid */
if (vid >= FM10K_VLAN_TABLE_VID_MAX)
return FM10K_ERR_PARAM;
@ -378,9 +376,8 @@ const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = {
* are ready to bring up the interface.
**/
s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
struct fm10k_mbx_info __always_unused *mbx)
{
UNREFERENCED_1PARAMETER(mbx);
hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ?
FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO;
@ -398,13 +395,13 @@ s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results,
* enabled we can add filters, if it is disabled all filters for this
* logical port are flushed.
**/
static s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort,
u16 count, bool enable)
static s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw,
u16 __always_unused glort,
u16 __always_unused count, bool enable)
{
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 msg[2];
UNREFERENCED_2PARAMETER(glort, count);
/* reset glort mask 0 as we have to wait to be enabled */
hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
@ -427,12 +424,12 @@ static s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort,
* so that it can enable either multicast, multicast promiscuous, or
* promiscuous mode of operation.
**/
static s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode)
static s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw,
u16 __always_unused glort, u8 mode)
{
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 msg[3];
UNREFERENCED_1PARAMETER(glort);
if (mode > FM10K_XCAST_MODE_NONE)
return FM10K_ERR_PARAM;
@ -483,10 +480,9 @@ static void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw,
* that information to then populate a DGLORTMAP/DEC entry and the queues
* to which it has been assigned.
**/
static s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw,
static s32 fm10k_configure_dglort_map_vf(struct fm10k_hw __always_unused *hw,
struct fm10k_dglort_cfg *dglort)
{
UNREFERENCED_1PARAMETER(hw);
/* verify the dglort pointer */
if (!dglort)
return FM10K_ERR_PARAM;


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _FM10K_VF_H_
#define _FM10K_VF_H_


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#include "fm10k.h"
#include "kcompat.h"
@ -1131,7 +1131,27 @@ int __kc_pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
return 0;
}
#endif
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(6,7))
int _kc_pci_wait_for_pending_transaction(struct pci_dev *dev)
{
int i;
u16 status;
/* Wait for Transaction Pending bit clean */
for (i = 0; i < 4; i++) {
if (i)
msleep((1 << (i - 1)) * 100);
pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status);
if (!(status & PCI_EXP_DEVSTA_TRPND))
return 1;
}
return 0;
}
#endif /* <RHEL6.7 */
#endif /* <3.12 */
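The compat helper above polls the Transaction Pending bit with exponential backoff: no delay before the first read, then 100, 200, and 400 ms between retries. The delay schedule it produces can be sketched and checked without sleeping:

```c
#include <assert.h>

/* Delay in milliseconds before retry number 'attempt' (0-based),
 * matching the msleep((1 << (i - 1)) * 100) pattern above. */
static int backoff_ms(int attempt)
{
	return attempt ? (1 << (attempt - 1)) * 100 : 0;
}
```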
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,13,0) )
int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask)
@ -1468,8 +1488,9 @@ void *__kc_devm_kmemdup(struct device *dev, const void *src, size_t len,
#endif /* 3.16.0 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,17,0) )
#endif /* 3.17.0 */
#if ((LINUX_VERSION_CODE < KERNEL_VERSION(3,17,0)) && \
(RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5)))
#endif /* <3.17.0 && RHEL_RELEASE_CODE < RHEL7.5 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,0) )
@ -1531,7 +1552,8 @@ void __kc_skb_complete_tx_timestamp(struct sk_buff *skb,
#include <linux/sctp.h>
#endif
unsigned int __kc_eth_get_headlen(unsigned char *data, unsigned int max_len)
u32 __kc_eth_get_headlen(const struct net_device __always_unused *dev,
unsigned char *data, unsigned int max_len)
{
union {
unsigned char *network;
@ -1971,3 +1993,127 @@ void _kc_pcie_print_link_status(struct pci_dev *dev) {
PCIE_SPEED2STR(speed_cap), width_cap);
}
#endif /* 4.17.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,1,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,1)))
#define HAVE_NDO_FDB_ADD_EXTACK
#else /* !RHEL || RHEL < 8.1 */
#ifdef HAVE_TC_SETUP_CLSFLOWER
#define FLOW_DISSECTOR_MATCH(__rule, __type, __out) \
const struct flow_match *__m = &(__rule)->match; \
struct flow_dissector *__d = (__m)->dissector; \
\
(__out)->key = skb_flow_dissector_target(__d, __type, (__m)->key); \
(__out)->mask = skb_flow_dissector_target(__d, __type, (__m)->mask); \
void flow_rule_match_basic(const struct flow_rule *rule,
struct flow_match_basic *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_BASIC, out);
}
void flow_rule_match_control(const struct flow_rule *rule,
struct flow_match_control *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CONTROL, out);
}
void flow_rule_match_eth_addrs(const struct flow_rule *rule,
struct flow_match_eth_addrs *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS, out);
}
#ifdef HAVE_TC_FLOWER_ENC
void flow_rule_match_enc_keyid(const struct flow_rule *rule,
struct flow_match_enc_keyid *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_KEYID, out);
}
void flow_rule_match_enc_ports(const struct flow_rule *rule,
struct flow_match_ports *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_PORTS, out);
}
void flow_rule_match_enc_control(const struct flow_rule *rule,
struct flow_match_control *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL, out);
}
void flow_rule_match_enc_ipv4_addrs(const struct flow_rule *rule,
struct flow_match_ipv4_addrs *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS, out);
}
void flow_rule_match_enc_ipv6_addrs(const struct flow_rule *rule,
struct flow_match_ipv6_addrs *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS, out);
}
#endif
#ifndef HAVE_TC_FLOWER_VLAN_IN_TAGS
void flow_rule_match_vlan(const struct flow_rule *rule,
struct flow_match_vlan *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_VLAN, out);
}
#endif
void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
struct flow_match_ipv4_addrs *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS, out);
}
void flow_rule_match_ipv6_addrs(const struct flow_rule *rule,
struct flow_match_ipv6_addrs *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS, out);
}
void flow_rule_match_ports(const struct flow_rule *rule,
struct flow_match_ports *out)
{
FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_PORTS, out);
}
#endif /* HAVE_TC_SETUP_CLSFLOWER */
#endif /* !RHEL || RHEL < 8.1 */
#endif /* 5.1.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,3,0))
#ifdef HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
int _kc_flow_block_cb_setup_simple(struct flow_block_offload *f,
struct list_head __always_unused *driver_list,
tc_setup_cb_t *cb,
void *cb_ident, void *cb_priv,
bool ingress_only)
{
if (ingress_only &&
f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
return -EOPNOTSUPP;
/* Note: Upstream has driver_block_list, but older kernels do not */
switch (f->command) {
case TC_BLOCK_BIND:
#ifdef HAVE_TCF_BLOCK_CB_REGISTER_EXTACK
return tcf_block_cb_register(f->block, cb, cb_ident, cb_priv,
f->extack);
#else
return tcf_block_cb_register(f->block, cb, cb_ident, cb_priv);
#endif
case TC_BLOCK_UNBIND:
tcf_block_cb_unregister(f->block, cb, cb_ident);
return 0;
default:
return -EOPNOTSUPP;
}
}
#endif /* HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO */
#endif /* 5.3.0 */



@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
#ifndef _KCOMPAT_H_
#define _KCOMPAT_H_
@ -9,30 +9,52 @@
#else
#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))
#endif
#include <linux/init.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/string.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/io.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/mii.h>
#include <linux/vmalloc.h>
#include <asm/io.h>
#include <linux/errno.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include <linux/in.h>
#include <linux/if_link.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/list.h>
#include <linux/mii.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/tcp.h>
#include <linux/types.h>
#include <linux/udp.h>
#include <linux/vmalloc.h>
#ifndef GCC_VERSION
#define GCC_VERSION (__GNUC__ * 10000 \
+ __GNUC_MINOR__ * 100 \
+ __GNUC_PATCHLEVEL__)
#endif /* GCC_VERSION */
/* Backport macros for controlling GCC diagnostics */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,18,0) )
/* Compilers before gcc-4.6 do not understand "#pragma GCC diagnostic push" */
#if GCC_VERSION >= 40600
#define __diag_str1(s) #s
#define __diag_str(s) __diag_str1(s)
#define __diag(s) _Pragma(__diag_str(GCC diagnostic s))
#else
#define __diag(s)
#endif /* GCC_VERSION >= 4.6 */
#define __diag_push() __diag(push)
#define __diag_pop() __diag(pop)
#endif /* LINUX_VERSION < 4.18.0 */
#ifndef NSEC_PER_MSEC
#define NSEC_PER_MSEC 1000000L
@ -149,14 +171,6 @@ struct msix_entry {
#define PCIE_LINK_STATE_L1 2
#endif
#ifndef mmiowb
#ifdef CONFIG_IA64
#define mmiowb() asm volatile ("mf.a" ::: "memory")
#else
#define mmiowb()
#endif
#endif
#ifndef SET_NETDEV_DEV
#define SET_NETDEV_DEV(net, pdev)
#endif
@ -191,11 +205,11 @@ struct msix_entry {
#endif
#ifndef NETIF_F_LRO
#define NETIF_F_LRO (1 << 15)
#define NETIF_F_LRO BIT(15)
#endif
#ifndef NETIF_F_NTUPLE
#define NETIF_F_NTUPLE (1 << 27)
#define NETIF_F_NTUPLE BIT(27)
#endif
#ifndef NETIF_F_ALL_FCOE
@ -423,8 +437,8 @@ struct ethtool_gstrings {
#ifndef ETHTOOL_TEST
#define ETHTOOL_TEST 0x1a
enum ethtool_test_flags {
ETH_TEST_FL_OFFLINE = (1 << 0),
ETH_TEST_FL_FAILED = (1 << 1),
ETH_TEST_FL_OFFLINE = BIT(0),
ETH_TEST_FL_FAILED = BIT(1),
};
struct ethtool_test {
u32 cmd;
@ -699,6 +713,10 @@ struct _kc_ethtool_pauseparam {
#define ETHTOOL_BUSINFO_LEN 32
#endif
#ifndef WAKE_FILTER
#define WAKE_FILTER BIT(7)
#endif
#ifndef SPEED_2500
#define SPEED_2500 2500
#endif
@ -872,10 +890,29 @@ struct _kc_ethtool_pauseparam {
* - 4.4.103-6.33.1, 4.4.103-6.38.1
* - 4.4.{114,120}-94.nn.y */
#define SLE_VERSION_CODE SLE_VERSION(12,3,0)
#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,12,14))
/* SLES15 Beta1 is 4.12.14-2.
* SLES12 SP4 will also use 4.12.14-nn.xx.y */
#elif (LINUX_VERSION_CODE == KERNEL_VERSION(4,12,14) && \
(SLE_LOCALVERSION_CODE == KERNEL_VERSION(94,41,0) || \
(SLE_LOCALVERSION_CODE >= KERNEL_VERSION(95,0,0) && \
SLE_LOCALVERSION_CODE < KERNEL_VERSION(96,0,0))))
/* SLES12 SP4 GM is 4.12.14-94.41 and update kernel is 4.12.14-95.x. */
#define SLE_VERSION_CODE SLE_VERSION(12,4,0)
#elif (LINUX_VERSION_CODE == KERNEL_VERSION(4,12,14) && \
(SLE_LOCALVERSION_CODE == KERNEL_VERSION(23,0,0) || \
SLE_LOCALVERSION_CODE == KERNEL_VERSION(2,0,0) || \
SLE_LOCALVERSION_CODE == KERNEL_VERSION(136,0,0) || \
(SLE_LOCALVERSION_CODE >= KERNEL_VERSION(25,0,0) && \
SLE_LOCALVERSION_CODE < KERNEL_VERSION(26,0,0)) || \
(SLE_LOCALVERSION_CODE >= KERNEL_VERSION(150,0,0) && \
SLE_LOCALVERSION_CODE < KERNEL_VERSION(151,0,0))))
/* SLES15 Beta1 is 4.12.14-2
* SLES15 GM is 4.12.14-23 and update kernel is 4.12.14-{25,136},
* and 4.12.14-150.14.
*/
#define SLE_VERSION_CODE SLE_VERSION(15,0,0)
#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,12,14) && \
SLE_LOCALVERSION_CODE >= KERNEL_VERSION(25,23,0))
/* SLES15 SP1 Beta1 is 4.12.14-25.23 */
#define SLE_VERSION_CODE SLE_VERSION(15,1,0)
/* new SLES kernels must be added here with >= based on kernel version;
 * the idea is to order from newest to oldest and just catch all
 * of them using the >=
@ -936,14 +973,10 @@ static inline int _kc_test_and_set_bit(int nr, volatile unsigned long *addr)
#ifdef CONFIG_DYNAMIC_DEBUG
#undef dev_dbg
#define dev_dbg(dev, format, arg...) dev_printk(KERN_DEBUG, dev, format, ##arg)
#undef pr_debug
#define pr_debug(format, arg...) printk(KERN_DEBUG format, ##arg)
#endif /* CONFIG_DYNAMIC_DEBUG */
#undef list_for_each_entry_safe
#define list_for_each_entry_safe(pos, n, head, member) \
for (n = NULL, pos = list_first_entry(head, typeof(*pos), member); \
&pos->member != (head); \
pos = list_next_entry(pos, member))
#undef hlist_for_each_entry_safe
#define hlist_for_each_entry_safe(pos, n, head, member) \
for (n = NULL, pos = hlist_entry_safe((head)->first, typeof(*(pos)), \
@ -957,6 +990,28 @@ static inline int _kc_test_and_set_bit(int nr, volatile unsigned long *addr)
#endif
#endif /* __KLOCWORK__ */
/* Older versions of GCC will trigger -Wformat-nonliteral warnings for const
* char * strings. Unfortunately, the implementation of do_trace_printk does
* this, in order to add a storage attribute to the memory. This was fixed in
* GCC 5.1, but we still use older distributions built with GCC 4.x.
*
* The string pointer is only passed as a const char * to the __trace_bprintk
* function. Since that function has the __printf attribute, it will trigger
* the warnings. We can't remove the attribute, so instead we'll use the
* __diag macro to disable -Wformat-nonliteral around the call to
* __trace_bprintk.
*/
#if GCC_VERSION < 50100
#define __trace_bprintk(ip, fmt, args...) ({ \
int err; \
__diag_push(); \
__diag(ignored "-Wformat-nonliteral"); \
err = __trace_bprintk(ip, fmt, ##args); \
__diag_pop(); \
err; \
})
#endif /* GCC_VERSION < 5.1.0 */
/*****************************************************************************/
/* 2.6.4 => 2.6.0 */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,4,25) || \
@ -1028,7 +1083,7 @@ static inline int _kc_pci_dma_mapping_error(dma_addr_t dma_addr)
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,4) )
extern int _kc_scnprintf(char * buf, size_t size, const char *fmt, ...);
int _kc_scnprintf(char * buf, size_t size, const char *fmt, ...);
#define scnprintf(buf, size, fmt, args...) _kc_scnprintf(buf, size, fmt, ##args)
#endif /* < 2.6.4 */
@ -1095,7 +1150,7 @@ static inline struct mii_ioctl_data *_kc_if_mii(struct ifreq *rq)
#ifndef kcalloc
#define kcalloc(n, size, flags) _kc_kzalloc(((n) * (size)), flags)
extern void *_kc_kzalloc(size_t size, int flags);
void *_kc_kzalloc(size_t size, int flags);
#endif
#define MSEC_PER_SEC 1000L
static inline unsigned int _kc_jiffies_to_msecs(const unsigned long j)
@ -1161,13 +1216,13 @@ static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb)
}
/* Wake-On-Lan options. */
#define WAKE_PHY (1 << 0)
#define WAKE_UCAST (1 << 1)
#define WAKE_MCAST (1 << 2)
#define WAKE_BCAST (1 << 3)
#define WAKE_ARP (1 << 4)
#define WAKE_MAGIC (1 << 5)
#define WAKE_MAGICSECURE (1 << 6) /* only meaningful if WAKE_MAGIC */
#define WAKE_PHY BIT(0)
#define WAKE_UCAST BIT(1)
#define WAKE_MCAST BIT(2)
#define WAKE_BCAST BIT(3)
#define WAKE_ARP BIT(4)
#define WAKE_MAGIC BIT(5)
#define WAKE_MAGICSECURE BIT(6) /* only meaningful if WAKE_MAGIC */
#define skb_header_pointer _kc_skb_header_pointer
static inline void *_kc_skb_header_pointer(const struct sk_buff *skb,
@ -1324,7 +1379,7 @@ static inline int _kc_is_multicast_ether_addr(const u8 *addr)
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,13) )
#ifndef kstrdup
#define kstrdup _kc_kstrdup
extern char *_kc_kstrdup(const char *s, unsigned int gfp);
char *_kc_kstrdup(const char *s, unsigned int gfp);
#endif
#endif /* < 2.6.13 */
@ -1333,7 +1388,7 @@ extern char *_kc_kstrdup(const char *s, unsigned int gfp);
#define pm_message_t u32
#ifndef kzalloc
#define kzalloc _kc_kzalloc
extern void *_kc_kzalloc(size_t size, int flags);
void *_kc_kzalloc(size_t size, int flags);
#endif
/* Generic MII registers. */
@ -1344,10 +1399,10 @@ extern void *_kc_kzalloc(size_t size, int flags);
#define ESTATUS_1000_TFULL 0x2000 /* Can do 1000BT Full */
#define ESTATUS_1000_THALF 0x1000 /* Can do 1000BT Half */
#define SUPPORTED_Pause (1 << 13)
#define SUPPORTED_Asym_Pause (1 << 14)
#define ADVERTISED_Pause (1 << 13)
#define ADVERTISED_Asym_Pause (1 << 14)
#define SUPPORTED_Pause BIT(13)
#define SUPPORTED_Asym_Pause BIT(14)
#define ADVERTISED_Pause BIT(13)
#define ADVERTISED_Asym_Pause BIT(14)
#if (!(RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(4,3)) && \
@ -1593,16 +1648,16 @@ static inline int __kc_skb_checksum_help(struct sk_buff *skb)
#define PCIE_LINK_STATUS 0x12
#define pci_config_space_ich8lan() do {} while(0)
#undef pci_save_state
extern int _kc_pci_save_state(struct pci_dev *);
int _kc_pci_save_state(struct pci_dev *);
#define pci_save_state(pdev) _kc_pci_save_state(pdev)
#undef pci_restore_state
extern void _kc_pci_restore_state(struct pci_dev *);
void _kc_pci_restore_state(struct pci_dev *);
#define pci_restore_state(pdev) _kc_pci_restore_state(pdev)
#endif /* !(RHEL_RELEASE_CODE >= RHEL 5.4) */
#ifdef HAVE_PCI_ERS
#undef free_netdev
extern void _kc_free_netdev(struct net_device *);
void _kc_free_netdev(struct net_device *);
#define free_netdev(netdev) _kc_free_netdev(netdev)
#endif
static inline int pci_enable_pcie_error_reporting(struct pci_dev __always_unused *dev)
@ -1612,7 +1667,7 @@ static inline int pci_enable_pcie_error_reporting(struct pci_dev __always_unused
#define pci_disable_pcie_error_reporting(dev) do {} while (0)
#define pci_cleanup_aer_uncorrect_error_status(dev) do {} while (0)
extern void *_kc_kmemdup(const void *src, size_t len, unsigned gfp);
void *_kc_kmemdup(const void *src, size_t len, unsigned gfp);
#define kmemdup(src, len, gfp) _kc_kmemdup(src, len, gfp)
#ifndef bool
#define bool _Bool
@ -1707,7 +1762,7 @@ static inline __wsum csum_unfold(__sum16 n)
#define __aligned(x) __attribute__((aligned(x)))
#endif
extern struct pci_dev *_kc_netdev_to_pdev(struct net_device *netdev);
struct pci_dev *_kc_netdev_to_pdev(struct net_device *netdev);
#define netdev_to_dev(netdev) \
pci_dev_to_dev(_kc_netdev_to_pdev(netdev))
#define devm_kzalloc(dev, size, flags) kzalloc(size, flags)
@ -1781,16 +1836,16 @@ enum {
#define hex_asc(x) "0123456789abcdef"[x]
#endif
#include <linux/ctype.h>
extern void _kc_print_hex_dump(const char *level, const char *prefix_str,
int prefix_type, int rowsize, int groupsize,
const void *buf, size_t len, bool ascii);
void _kc_print_hex_dump(const char *level, const char *prefix_str,
int prefix_type, int rowsize, int groupsize,
const void *buf, size_t len, bool ascii);
#define print_hex_dump(lvl, s, t, r, g, b, l, a) \
_kc_print_hex_dump(lvl, s, t, r, g, b, l, a)
#ifndef ADVERTISED_2500baseX_Full
#define ADVERTISED_2500baseX_Full (1 << 15)
#define ADVERTISED_2500baseX_Full BIT(15)
#endif
#ifndef SUPPORTED_2500baseX_Full
#define SUPPORTED_2500baseX_Full (1 << 15)
#define SUPPORTED_2500baseX_Full BIT(15)
#endif
#ifndef ETH_P_PAUSE
@ -1802,6 +1857,8 @@ static inline int compound_order(struct page *page)
return 0;
}
#define __must_be_array(a) 0
#ifndef SKB_WITH_OVERHEAD
#define SKB_WITH_OVERHEAD(X) \
((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
@ -1869,7 +1926,7 @@ struct napi_struct {
#endif
#ifdef NAPI
extern int __kc_adapter_clean(struct net_device *, int *);
int __kc_adapter_clean(struct net_device *, int *);
/* The following defines only provide limited support for NAPI calls and
* should only be used by drivers which are not multi-queue enabled.
*/
@ -2050,7 +2107,7 @@ static inline int _kc_strict_strtol(const char *buf, unsigned int base, long *re
#undef kzalloc_node
#define kzalloc_node(_size, _flags, _node) kzalloc(_size, _flags)
extern void _kc_pci_disable_link_state(struct pci_dev *dev, int state);
void _kc_pci_disable_link_state(struct pci_dev *dev, int state);
#define pci_disable_link_state(p, s) _kc_pci_disable_link_state(p, s)
#else /* < 2.6.26 */
#define NETDEV_CAN_SET_GSO_MAX_SIZE
@ -2119,9 +2176,9 @@ static inline __u32 _kc_ethtool_cmd_speed(struct ethtool_cmd *ep)
#endif
#ifdef HAVE_TX_MQ
extern void _kc_netif_tx_stop_all_queues(struct net_device *);
extern void _kc_netif_tx_wake_all_queues(struct net_device *);
extern void _kc_netif_tx_start_all_queues(struct net_device *);
void _kc_netif_tx_stop_all_queues(struct net_device *);
void _kc_netif_tx_wake_all_queues(struct net_device *);
void _kc_netif_tx_start_all_queues(struct net_device *);
#define netif_tx_stop_all_queues(a) _kc_netif_tx_stop_all_queues(a)
#define netif_tx_wake_all_queues(a) _kc_netif_tx_wake_all_queues(a)
#define netif_tx_start_all_queues(a) _kc_netif_tx_start_all_queues(a)
@ -2157,7 +2214,7 @@ extern void _kc_netif_tx_start_all_queues(struct net_device *);
#endif /* NETIF_F_MULTI_QUEUE */
#ifndef __WARN_printf
extern void __kc_warn_slowpath(const char *file, const int line,
void __kc_warn_slowpath(const char *file, const int line,
const char *fmt, ...) __attribute__((format(printf, 3, 4)));
#define __WARN_printf(arg...) __kc_warn_slowpath(__FILE__, __LINE__, arg)
#endif /* __WARN_printf */
@ -2194,8 +2251,8 @@ static inline void _kc_ethtool_cmd_speed_set(struct ethtool_cmd *ep,
pci_resource_len(pdev, bar))
#define pci_wake_from_d3 _kc_pci_wake_from_d3
#define pci_prepare_to_sleep _kc_pci_prepare_to_sleep
extern int _kc_pci_wake_from_d3(struct pci_dev *dev, bool enable);
extern int _kc_pci_prepare_to_sleep(struct pci_dev *dev);
int _kc_pci_wake_from_d3(struct pci_dev *dev, bool enable);
int _kc_pci_prepare_to_sleep(struct pci_dev *dev);
#define netdev_alloc_page(a) alloc_page(GFP_ATOMIC)
#ifndef __skb_queue_head_init
static inline void __kc_skb_queue_head_init(struct sk_buff_head *list)
@ -2239,7 +2296,7 @@ static inline void __kc_skb_queue_head_init(struct sk_buff_head *list)
#endif
#ifndef pci_clear_master
extern void _kc_pci_clear_master(struct pci_dev *dev);
void _kc_pci_clear_master(struct pci_dev *dev);
#define pci_clear_master(dev) _kc_pci_clear_master(dev)
#endif
@ -2336,23 +2393,23 @@ static inline bool pci_is_root_bus(struct pci_bus *pbus)
#endif
#ifndef SUPPORTED_1000baseKX_Full
#define SUPPORTED_1000baseKX_Full (1 << 17)
#define SUPPORTED_1000baseKX_Full BIT(17)
#endif
#ifndef SUPPORTED_10000baseKX4_Full
#define SUPPORTED_10000baseKX4_Full (1 << 18)
#define SUPPORTED_10000baseKX4_Full BIT(18)
#endif
#ifndef SUPPORTED_10000baseKR_Full
#define SUPPORTED_10000baseKR_Full (1 << 19)
#define SUPPORTED_10000baseKR_Full BIT(19)
#endif
#ifndef ADVERTISED_1000baseKX_Full
#define ADVERTISED_1000baseKX_Full (1 << 17)
#define ADVERTISED_1000baseKX_Full BIT(17)
#endif
#ifndef ADVERTISED_10000baseKX4_Full
#define ADVERTISED_10000baseKX4_Full (1 << 18)
#define ADVERTISED_10000baseKX4_Full BIT(18)
#endif
#ifndef ADVERTISED_10000baseKR_Full
#define ADVERTISED_10000baseKR_Full (1 << 19)
#define ADVERTISED_10000baseKR_Full BIT(19)
#endif
static inline unsigned long dev_trans_start(struct net_device *dev)
@ -2381,7 +2438,7 @@ static inline unsigned long dev_trans_start(struct net_device *dev)
#define netdev_tx_t int
#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
#ifndef NETIF_F_FCOE_MTU
#define NETIF_F_FCOE_MTU (1 << 26)
#define NETIF_F_FCOE_MTU BIT(26)
#endif
#endif /* CONFIG_FCOE || CONFIG_FCOE_MODULE */
@ -2551,7 +2608,7 @@ static inline bool pci_is_pcie(struct pci_dev *dev)
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(6,0))
#ifndef pci_num_vf
#define pci_num_vf(pdev) _kc_pci_num_vf(pdev)
extern int _kc_pci_num_vf(struct pci_dev *dev);
int _kc_pci_num_vf(struct pci_dev *dev);
#endif
#endif /* RHEL_RELEASE_CODE */
@ -2836,9 +2893,9 @@ struct device_node;
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,36) )
extern int _kc_ethtool_op_set_flags(struct net_device *, u32, u32);
int _kc_ethtool_op_set_flags(struct net_device *, u32, u32);
#define ethtool_op_set_flags _kc_ethtool_op_set_flags
extern u32 _kc_ethtool_op_get_flags(struct net_device *);
u32 _kc_ethtool_op_get_flags(struct net_device *);
#define ethtool_op_get_flags _kc_ethtool_op_get_flags
enum {
@ -2950,10 +3007,10 @@ static inline int __kc_netif_set_real_num_rx_queues(struct net_device __always_u
#define VLAN_N_VID VLAN_GROUP_ARRAY_LEN
#endif /* VLAN_N_VID */
#ifndef ETH_FLAG_TXVLAN
#define ETH_FLAG_TXVLAN (1 << 7)
#define ETH_FLAG_TXVLAN BIT(7)
#endif /* ETH_FLAG_TXVLAN */
#ifndef ETH_FLAG_RXVLAN
#define ETH_FLAG_RXVLAN (1 << 8)
#define ETH_FLAG_RXVLAN BIT(8)
#endif /* ETH_FLAG_RXVLAN */
#define WQ_MEM_RECLAIM WQ_RESCUER
@ -2998,8 +3055,8 @@ static inline __be16 vlan_get_protocol(const struct sk_buff *skb)
#endif /* !RHEL5.7+ || RHEL6.0 */
#ifdef HAVE_HW_TIME_STAMP
#define SKBTX_HW_TSTAMP (1 << 0)
#define SKBTX_IN_PROGRESS (1 << 2)
#define SKBTX_HW_TSTAMP BIT(0)
#define SKBTX_IN_PROGRESS BIT(2)
#define SKB_SHARED_TX_IS_UNION
#endif
@ -3056,7 +3113,7 @@ static inline int _kc_skb_checksum_start_offset(const struct sk_buff *skb)
#define TC_BITMASK 15
#endif
#ifndef NETIF_F_RXCSUM
#define NETIF_F_RXCSUM (1 << 29)
#define NETIF_F_RXCSUM BIT(29)
#endif
#ifndef skb_queue_reverse_walk_safe
#define skb_queue_reverse_walk_safe(queue, skb, tmp) \
@ -3080,15 +3137,15 @@ static inline int _kc_skb_checksum_start_offset(const struct sk_buff *skb)
#define kstrtou32(a, b, c) ((*(c)) = simple_strtoul((a), NULL, (b)), 0)
#endif /* !(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,4)) */
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(6,0)))
extern u16 ___kc_skb_tx_hash(struct net_device *, const struct sk_buff *, u16);
u16 ___kc_skb_tx_hash(struct net_device *, const struct sk_buff *, u16);
#define __skb_tx_hash(n, s, q) ___kc_skb_tx_hash((n), (s), (q))
extern u8 _kc_netdev_get_num_tc(struct net_device *dev);
u8 _kc_netdev_get_num_tc(struct net_device *dev);
#define netdev_get_num_tc(dev) _kc_netdev_get_num_tc(dev)
extern int _kc_netdev_set_num_tc(struct net_device *dev, u8 num_tc);
int _kc_netdev_set_num_tc(struct net_device *dev, u8 num_tc);
#define netdev_set_num_tc(dev, tc) _kc_netdev_set_num_tc((dev), (tc))
#define netdev_reset_tc(dev) _kc_netdev_set_num_tc((dev), 0)
#define netdev_set_tc_queue(dev, tc, cnt, off) do {} while (0)
extern u8 _kc_netdev_get_prio_tc_map(struct net_device *dev, u8 up);
u8 _kc_netdev_get_prio_tc_map(struct net_device *dev, u8 up);
#define netdev_get_prio_tc_map(dev, up) _kc_netdev_get_prio_tc_map(dev, up)
#define netdev_set_prio_tc_map(dev, up, tc) do {} while (0)
#else /* RHEL6.1 or greater */
@ -3227,11 +3284,14 @@ static inline int _kc_kstrtol_from_user(const char __user *s, size_t count,
}
#endif
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,0) || \
RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(5,7)))
/* 20000base_blah_full Supported and Advertised Registers */
#define SUPPORTED_20000baseMLD2_Full (1 << 21)
#define SUPPORTED_20000baseKR2_Full (1 << 22)
#define ADVERTISED_20000baseMLD2_Full (1 << 21)
#define ADVERTISED_20000baseKR2_Full (1 << 22)
#define SUPPORTED_20000baseMLD2_Full BIT(21)
#define SUPPORTED_20000baseKR2_Full BIT(22)
#define ADVERTISED_20000baseMLD2_Full BIT(21)
#define ADVERTISED_20000baseKR2_Full BIT(22)
#endif /* RHEL_RELEASE_CODE */
#endif /* < 3.0.0 */
/*****************************************************************************/
@ -3475,8 +3535,8 @@ int _kc_simple_open(struct inode *inode, struct file *file);
#ifndef skb_add_rx_frag
#define skb_add_rx_frag _kc_skb_add_rx_frag
extern void _kc_skb_add_rx_frag(struct sk_buff *, int, struct page *,
int, int, unsigned int);
void _kc_skb_add_rx_frag(struct sk_buff * skb, int i, struct page *page,
int off, int size, unsigned int truesize);
#endif
#ifdef NET_ADDR_RANDOM
#define eth_hw_addr_random(N) do { \
@ -3509,6 +3569,10 @@ extern void _kc_skb_add_rx_frag(struct sk_buff *, int, struct page *,
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,5,0) )
#ifndef SIZE_MAX
#define SIZE_MAX (~(size_t)0)
#endif
#ifndef BITS_PER_LONG_LONG
#define BITS_PER_LONG_LONG 64
#endif
@ -3599,14 +3663,14 @@ static inline void _kc_eth_random_addr(u8 *addr)
/* these defines were all added in one commit, so it should be safe
 * to trigger activation on one define
*/
#define SUPPORTED_40000baseKR4_Full (1 << 23)
#define SUPPORTED_40000baseCR4_Full (1 << 24)
#define SUPPORTED_40000baseSR4_Full (1 << 25)
#define SUPPORTED_40000baseLR4_Full (1 << 26)
#define ADVERTISED_40000baseKR4_Full (1 << 23)
#define ADVERTISED_40000baseCR4_Full (1 << 24)
#define ADVERTISED_40000baseSR4_Full (1 << 25)
#define ADVERTISED_40000baseLR4_Full (1 << 26)
#define SUPPORTED_40000baseKR4_Full BIT(23)
#define SUPPORTED_40000baseCR4_Full BIT(24)
#define SUPPORTED_40000baseSR4_Full BIT(25)
#define SUPPORTED_40000baseLR4_Full BIT(26)
#define ADVERTISED_40000baseKR4_Full BIT(23)
#define ADVERTISED_40000baseCR4_Full BIT(24)
#define ADVERTISED_40000baseSR4_Full BIT(25)
#define ADVERTISED_40000baseLR4_Full BIT(26)
#endif
#ifndef mmd_eee_cap_to_ethtool_sup_t
@ -3762,6 +3826,7 @@ int __kc_pcie_capability_clear_word(struct pci_dev *dev, int pos,
#if (SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(11,3,0))
#define USE_CONST_DEV_UC_CHAR
#define HAVE_NDO_FDB_ADD_NLATTR
#endif
#if !(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,8))
@ -3868,6 +3933,7 @@ static inline bool __mod_delayed_work(struct workqueue_struct *wq,
#include <linux/hashtable.h>
#define HAVE_CONST_STRUCT_PCI_ERROR_HANDLERS
#define USE_CONST_DEV_UC_CHAR
#define HAVE_NDO_FDB_ADD_NLATTR
#endif /* >= 3.7.0 */
/*****************************************************************************/
@ -4074,7 +4140,7 @@ static inline bool __kc_is_link_local_ether_addr(const u8 *addr)
&name[hash_min(key, HASH_BITS(name))], member)
#ifdef CONFIG_XPS
extern int __kc_netif_set_xps_queue(struct net_device *, const struct cpumask *, u16);
int __kc_netif_set_xps_queue(struct net_device *, const struct cpumask *, u16);
#define netif_set_xps_queue(_dev, _mask, _idx) __kc_netif_set_xps_queue((_dev), (_mask), (_idx))
#else /* CONFIG_XPS */
#define netif_set_xps_queue(_dev, _mask, _idx) do {} while (0)
@ -4082,7 +4148,7 @@ extern int __kc_netif_set_xps_queue(struct net_device *, const struct cpumask *,
#ifdef HAVE_NETDEV_SELECT_QUEUE
#define _kc_hashrnd 0xd631614b /* not so random hash salt */
extern u16 __kc_netdev_pick_tx(struct net_device *dev, struct sk_buff *skb);
u16 __kc_netdev_pick_tx(struct net_device *dev, struct sk_buff *skb);
#define __netdev_pick_tx __kc_netdev_pick_tx
#endif /* HAVE_NETDEV_SELECT_QUEUE */
#else
@ -4096,7 +4162,7 @@ extern u16 __kc_netdev_pick_tx(struct net_device *dev, struct sk_buff *skb);
#define NAPI_POLL_WEIGHT 64
#endif
#ifdef CONFIG_PCI_IOV
extern int __kc_pci_vfs_assigned(struct pci_dev *dev);
int __kc_pci_vfs_assigned(struct pci_dev *dev);
#else
static inline int __kc_pci_vfs_assigned(struct pci_dev __always_unused *dev)
{
@ -4125,24 +4191,28 @@ static inline struct sk_buff *__kc__vlan_hwaccel_put_tag(struct sk_buff *skb,
#endif
#ifdef HAVE_FDB_OPS
#ifdef USE_CONST_DEV_UC_CHAR
extern int __kc_ndo_dflt_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 flags);
#ifdef HAVE_FDB_DEL_NLATTR
extern int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr);
#if defined(HAVE_NDO_FDB_ADD_NLATTR)
int __kc_ndo_dflt_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr, u16 flags);
#elif defined(USE_CONST_DEV_UC_CHAR)
int __kc_ndo_dflt_fdb_add(struct ndmsg *ndm, struct net_device *dev,
const unsigned char *addr, u16 flags);
#else
extern int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
const unsigned char *addr);
#endif
int __kc_ndo_dflt_fdb_add(struct ndmsg *ndm, struct net_device *dev,
unsigned char *addr, u16 flags);
#endif /* HAVE_NDO_FDB_ADD_NLATTR */
#if defined(HAVE_FDB_DEL_NLATTR)
int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev,
const unsigned char *addr);
#elif defined(USE_CONST_DEV_UC_CHAR)
int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
const unsigned char *addr);
#else
extern int __kc_ndo_dflt_fdb_add(struct ndmsg *ndm, struct net_device *dev,
unsigned char *addr, u16 flags);
extern int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
unsigned char *addr);
#endif
int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
unsigned char *addr);
#endif /* HAVE_FDB_DEL_NLATTR */
#define ndo_dflt_fdb_add __kc_ndo_dflt_fdb_add
#define ndo_dflt_fdb_del __kc_ndo_dflt_fdb_del
#endif /* HAVE_FDB_OPS */
@ -4176,6 +4246,7 @@ of_get_mac_address(struct device_node __always_unused *np)
#define HAVE_ENCAP_TSO_OFFLOAD
#define USE_DEFAULT_FDB_DEL_DUMP
#define HAVE_SKB_INNER_NETWORK_HEADER
#if (RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,0)) && \
(RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8,0)))
@ -4187,6 +4258,7 @@ of_get_mac_address(struct device_node __always_unused *np)
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5))
#define HAVE_GENEVE_RX_OFFLOAD
#endif /* RHEL >=7.3 && RHEL < 7.5 */
#define HAVE_ETHTOOL_FLOW_UNION_IP6_SPEC
#define HAVE_RHEL7_NET_DEVICE_OPS_EXT
#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_GENEVE)
#define HAVE_UDP_ENC_TUNNEL
@ -4199,13 +4271,22 @@ of_get_mac_address(struct device_node __always_unused *np)
#define HAVE_RHEL7_NETDEV_OPS_EXT_NDO_UDP_TUNNEL
#define HAVE_UDP_ENC_RX_OFFLOAD
#endif /* RHEL >= 7.4 */
#endif /* RHEL >= 7.0 && RHEL < 8.0 */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,0))
#define HAVE_TCF_BLOCK_CB_REGISTER_EXTACK
#define NO_NETDEV_BPF_PROG_ATTACHED
#endif /* RHEL >= 8.0 */
#endif /* >= 3.10.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,11,0) )
#define netdev_notifier_info_to_dev(ptr) ptr
#ifndef time_in_range64
#define time_in_range64(a, b, c) \
(time_after_eq64(a, b) && \
time_before_eq64(a, c))
#endif /* time_in_range64 */
#if ((RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,6)) ||\
(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(11,4,0)))
#define HAVE_NDO_SET_VF_LINK_STATE
@ -4221,12 +4302,17 @@ of_get_mac_address(struct device_node __always_unused *np)
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,12,0) )
extern int __kc_pcie_get_minimum_link(struct pci_dev *dev,
enum pci_bus_speed *speed,
enum pcie_link_width *width);
int __kc_pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
enum pcie_link_width *width);
#ifndef pcie_get_minimum_link
#define pcie_get_minimum_link(_p, _s, _w) __kc_pcie_get_minimum_link(_p, _s, _w)
#endif
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(6,7))
int _kc_pci_wait_for_pending_transaction(struct pci_dev *dev);
#define pci_wait_for_pending_transaction _kc_pci_wait_for_pending_transaction
#endif /* <RHEL6.7 */
#else /* >= 3.12.0 */
#if ( SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,0,0))
#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
@ -4244,7 +4330,7 @@ extern int __kc_pcie_get_minimum_link(struct pci_dev *dev,
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,13,0) )
#define dma_set_mask_and_coherent(_p, _m) __kc_dma_set_mask_and_coherent(_p, _m)
extern int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask);
int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask);
#ifndef u64_stats_init
#define u64_stats_init(a) do { } while(0)
#endif
@ -4281,6 +4367,10 @@ static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
devm_kzalloc(dev, cnt * size, flags)
#endif /* > 2.6.20 */
#if (!(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)))
#define list_last_entry(ptr, type, member) list_entry((ptr)->prev, type, member)
#endif
#else /* >= 3.13.0 */
#define HAVE_VXLAN_CHECKS
#if (UBUNTU_VERSION_CODE && UBUNTU_VERSION_CODE >= UBUNTU_VERSION(3,13,0,24))
@ -4302,7 +4392,10 @@ static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
#define U32_MAX ((u32)~0U)
#endif
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)))
#define dev_consume_skb_any(x) dev_kfree_skb_any(x)
#define dev_consume_skb_irq(x) dev_kfree_skb_irq(x)
#endif
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,0)) && \
!(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,0,0)))
@ -4360,9 +4453,8 @@ static inline void __kc_skb_set_hash(struct sk_buff __maybe_unused *skb,
#endif
#ifndef pci_enable_msix_range
extern int __kc_pci_enable_msix_range(struct pci_dev *dev,
struct msix_entry *entries,
int minvec, int maxvec);
int __kc_pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec);
#define pci_enable_msix_range __kc_pci_enable_msix_range
#endif
@ -4395,6 +4487,23 @@ int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
#define OPTIMIZE_HIDE_VAR(var) barrier()
#endif
#endif
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,0)) && \
!(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(10,4,0)))
static inline __u32 skb_get_hash_raw(const struct sk_buff *skb)
{
#ifdef NETIF_F_RXHASH
return skb->rxhash;
#else
return 0;
#endif /* NETIF_F_RXHASH */
}
#endif /* !(RHEL > 7.0) && !(SLES >= 10.4) */
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5))
#define request_firmware_direct request_firmware
#endif /* !RHEL || RHEL < 7.5 */
#else /* >= 3.14.0 */
/* for ndo_dfwd_ ops add_station, del_station and _start_xmit */
@@ -4406,7 +4515,6 @@ int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,15,0) )
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,1)) && \
!(UBUNTU_VERSION_CODE && UBUNTU_VERSION_CODE >= UBUNTU_VERSION(3,13,0,30)))
#define u64_stats_fetch_begin_irq u64_stats_fetch_begin_bh
@@ -4520,18 +4628,15 @@ static inline void __kc_dev_mc_unsync(struct net_device __maybe_unused *dev,
#define NETIF_F_GSO_UDP_TUNNEL_CSUM 0
#define SKB_GSO_UDP_TUNNEL_CSUM 0
#endif
void *__kc_devm_kmemdup(struct device *dev, const void *src, size_t len,
gfp_t gfp);
#define devm_kmemdup __kc_devm_kmemdup
#else
#if ( ( LINUX_VERSION_CODE < KERNEL_VERSION(4,13,0) ) && \
    ! ( SLE_VERSION_CODE && ( SLE_VERSION_CODE >= SLE_VERSION(12,4,0)) ) )
#define HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,0,0)))
#undef HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
#endif /* SLES15 */
#endif /* >= 3.16.0 && < 4.13.0 && !(SLES >= 12sp4) */
#define HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
#endif /* 3.16.0 */
@@ -4561,14 +4666,49 @@ static inline struct timespec timespec64_to_timespec(const struct timespec64 ts6
#define timespec64_to_ns timespec_to_ns
#define ns_to_timespec64 ns_to_timespec
#define ktime_to_timespec64 ktime_to_timespec
#define ktime_get_ts64 ktime_get_ts
#define ktime_get_real_ts64 ktime_get_real_ts
#define timespec64_add_ns timespec_add_ns
#endif /* timespec64 */
#endif /* !(RHEL6.8<RHEL7.0) && !RHEL7.2+ */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,8) && \
RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0))
static inline void ktime_get_real_ts64(struct timespec64 *ts)
{
*ts = ktime_to_timespec64(ktime_get_real());
}
static inline void ktime_get_ts64(struct timespec64 *ts)
{
*ts = ktime_to_timespec64(ktime_get());
}
#endif
#if !(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define hlist_add_behind(_a, _b) hlist_add_after(_b, _a)
#endif
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,3))
static inline u64 ktime_get_ns(void)
{
return ktime_to_ns(ktime_get());
}
static inline u64 ktime_get_real_ns(void)
{
return ktime_to_ns(ktime_get_real());
}
static inline u64 ktime_get_boot_ns(void)
{
return ktime_to_ns(ktime_get_boottime());
}
#endif /* RHEL < 7.3 */
#else
#define HAVE_DCBNL_OPS_SETAPP_RETURN_INT
#include <linux/time64.h>
@@ -4578,13 +4718,14 @@ static inline struct timespec timespec64_to_timespec(const struct timespec64 ts6
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,0) )
#ifndef NO_PTP_SUPPORT
#include <linux/errqueue.h>
struct sk_buff *__kc_skb_clone_sk(struct sk_buff *skb);
void __kc_skb_complete_tx_timestamp(struct sk_buff *skb,
struct skb_shared_hwtstamps *hwtstamps);
#define skb_clone_sk __kc_skb_clone_sk
#define skb_complete_tx_timestamp __kc_skb_complete_tx_timestamp
#endif
u32 __kc_eth_get_headlen(const struct net_device *dev, unsigned char *data,
unsigned int max_len);
#define eth_get_headlen __kc_eth_get_headlen
#ifndef ETH_P_XDSA
#define ETH_P_XDSA 0x00F8
@@ -4634,11 +4775,14 @@ static inline void _kc_napi_complete_done(struct napi_struct *napi,
int __always_unused work_done) {
napi_complete(napi);
}
/* don't use our backport if the distro kernels already have it */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE < SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5)))
#define napi_complete_done _kc_napi_complete_done
#endif
int _kc_bitmap_print_to_pagebuf(bool list, char *buf,
const unsigned long *maskp, int nmaskbits);
#define bitmap_print_to_pagebuf _kc_bitmap_print_to_pagebuf
#ifndef NETDEV_RSS_KEY_LEN
@@ -4649,7 +4793,7 @@ extern int _kc_bitmap_print_to_pagebuf(bool list, char *buf,
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)))))
#define netdev_rss_key_fill(buffer, len) __kc_netdev_rss_key_fill(buffer, len)
#endif /* RHEL_RELEASE_CODE */
void __kc_netdev_rss_key_fill(void *buffer, size_t len);
#define SPEED_20000 20000
#define SPEED_40000 40000
#ifndef dma_rmb
@@ -4771,7 +4915,11 @@ static inline struct sk_buff *__kc_napi_alloc_skb(struct napi_struct *napi, unsi
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
static inline struct device_node *
pci_device_to_OF_node(const struct pci_dev __always_unused *pdev) { return NULL; }
#else /* !CONFIG_OF && RHEL < 7.3 */
#define HAVE_DDP_PROFILE_UPLOAD_SUPPORT
#endif /* !CONFIG_OF && RHEL < 7.3 */
#else /* < 4.0 */
#define HAVE_DDP_PROFILE_UPLOAD_SUPPORT
#endif /* < 4.0 */
/*****************************************************************************/
@@ -4805,7 +4953,7 @@ of_find_net_device_by_node(struct device_node __always_unused *np)
#if !((RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(6,8) && RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) && \
(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)) && \
(SLE_VERSION_CODE > SLE_VERSION(12,1,0)))
unsigned int _kc_cpumask_local_spread(unsigned int i, int node);
#define cpumask_local_spread _kc_cpumask_local_spread
#endif
#else /* >= 4,1,0 */
@@ -4858,8 +5006,25 @@ static inline __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_NDO_DFLT_BRIDGE_GETLINK_VLAN_SUPPORT
#endif
#if (LINUX_VERSION_CODE > KERNEL_VERSION(2,6,27))
#if (!((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,8) && \
RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) || \
RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)))
static inline bool pci_ari_enabled(struct pci_bus *bus)
{
return bus->self && bus->self->ari_enabled;
}
#endif /* !(RHEL6.8+ || RHEL7.2+) */
#else
static inline bool pci_ari_enabled(struct pci_bus *bus)
{
return false;
}
#endif /* 2.6.27 */
#else
#define HAVE_NDO_DFLT_BRIDGE_GETLINK_VLAN_SUPPORT
#define HAVE_VF_STATS
#endif /* 4.2.0 */
/*****************************************************************************/
@@ -4917,8 +5082,8 @@ static inline void writeq(__u64 val, volatile void __iomem *addr)
#endif /* NETIF_F_SCTP_CRC */
#if (!(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)))
#define eth_platform_get_mac_address _kc_eth_platform_get_mac_address
int _kc_eth_platform_get_mac_address(struct device *dev __maybe_unused,
u8 *mac_addr __maybe_unused);
#endif /* !(RHEL_RELEASE >= 7.3) */
#else /* 4.5.0 */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0) )
@@ -4974,29 +5139,54 @@ static inline void page_ref_inc(struct page *page)
#ifndef IPV4_USER_FLOW
#define IPV4_USER_FLOW 0x0d /* spec only (usr_ip4_spec) */
#endif
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_TC_SETUP_CLSFLOWER
#define HAVE_TC_FLOWER_ENC
#endif
#if ((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,7)) || \
(SLE_VERSION_CODE >= SLE_VERSION(12,2,0)))
#define HAVE_TC_SETUP_CLSU32
#endif
#if (SLE_VERSION_CODE >= SLE_VERSION(12,2,0))
#define HAVE_TC_SETUP_CLSFLOWER
#endif
#else /* >= 4.6.0 */
#define HAVE_PAGE_COUNT_BULK_UPDATE
#define HAVE_ETHTOOL_FLOW_UNION_IP6_SPEC
#define HAVE_PTP_CROSSTIMESTAMP
#define HAVE_TC_SETUP_CLSFLOWER
#define HAVE_TC_SETUP_CLSU32
#endif /* 4.6.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,7,0))
#if ((SLE_VERSION_CODE >= SLE_VERSION(12,3,0)) ||\
     (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_NETIF_TRANS_UPDATE
#endif /* SLES12sp3+ || RHEL7.4+ */
#if ((UBUNTU_VERSION_CODE >= UBUNTU_VERSION(4,4,0,21)) || \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)) || \
(SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_DEVLINK_SUPPORT
#endif /* UBUNTU 4,4,0,21, RHEL 7.4, SLES12 SP3 */
#if ((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)) ||\
(SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* RHEL7.3+ || SLES12sp3+ */
#else /* 4.7.0 */
#define HAVE_DEVLINK_SUPPORT
#define HAVE_NETIF_TRANS_UPDATE
#define HAVE_ETHTOOL_CONVERT_U32_AND_LINK_MODE
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#define HAVE_TCF_MIRRED_REDIRECT
#endif /* 4.7.0 */
/*****************************************************************************/
@@ -5017,6 +5207,10 @@ struct udp_tunnel_info {
#define HAVE_TCF_EXTS_TO_LIST
#endif
#if (UBUNTU_VERSION_CODE && UBUNTU_VERSION_CODE < UBUNTU_VERSION(4,8,0,0))
#define tc_no_actions(_exts) true
#define tc_for_each_action(_a, _exts) while (0)
#endif
#if !(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) &&\
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
static inline int
@@ -5055,90 +5249,78 @@ pci_release_mem_regions(struct pci_dev *pdev)
pci_select_bars(pdev, IORESOURCE_MEM));
}
#endif /* !SLE_VERSION(12,3,0) */
#if ((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)) ||\
(SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* RHEL7.4+ || SLES12sp3+ */
#else
#define HAVE_UDP_ENC_RX_OFFLOAD
#define HAVE_TCF_EXTS_TO_LIST
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* 4.8.0 */
/*****************************************************************************/
#ifdef ETHTOOL_GLINKSETTINGS
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,7,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)))
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* RHEL_RELEASE_VERSION(7,3) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* SLE_VERSION(12,3,0) */
#else
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* KERNEL_VERSION(4.7.0) */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* SLE_VERSION(12,3,0) */
#else
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* KERNEL_VERSION(4.8.0)*/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,9,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,4,0)))
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* SLE_VERSION(15,4,0) */
#else
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* KERNEL_VERSION(4.9.0) */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,10,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,4,0)))
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* SLE_VERSION(15,4,0) */
#else
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* KERNEL_VERSION(4.10.0) */
#endif /* ETHTOOL_GLINKSETTINGS */
/*****************************************************************************/
#ifdef NETIF_F_HW_TC
#endif /* NETIF_F_HW_TC */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,9,0))
#ifdef NETIF_F_HW_TC
#ifdef HAVE_TC_SETUP_CLSFLOWER
#if (!(RHEL_RELEASE_CODE) && !(SLE_VERSION_CODE) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE < SLE_VERSION(12,3,0))))
#define HAVE_TC_FLOWER_VLAN_IN_TAGS
#endif /* !RHEL_RELEASE_CODE && !SLE_VERSION_CODE || <SLE_VERSION(12,3,0) */
#endif /* HAVE_TC_SETUP_CLSFLOWER */
#endif /* NETIF_F_HW_TC */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* RHEL7.4+ */
#if (!(SLE_VERSION_CODE) && !(RHEL_RELEASE_CODE)) || \
SLE_VERSION_CODE && (SLE_VERSION_CODE <= SLE_VERSION(12,3,0)) || \
RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE <= RHEL_RELEASE_VERSION(7,5))
#define time_is_before_jiffies64(a) time_after64(get_jiffies_64(), a)
#endif /* !SLE_VERSION_CODE && !RHEL_RELEASE_CODE || (SLES <= 12.3.0) || (RHEL <= 7.5) */
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,4))
static inline void bitmap_from_u64(unsigned long *dst, u64 mask)
{
dst[0] = mask & ULONG_MAX;
if (sizeof(mask) > sizeof(unsigned long))
dst[1] = mask >> 32;
}
#endif /* <RHEL7.4 */
#else /* >=4.9 */
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* KERNEL_VERSION(4.9.0) */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,10,0))
/* SLES 12.3 and RHEL 7.5 backported this interface */
#if (!SLE_VERSION_CODE && !RHEL_RELEASE_CODE) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE < SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5)))
static inline bool _kc_napi_complete_done2(struct napi_struct *napi,
int __always_unused work_done)
{
/* it was really hard to get napi_complete_done to be safe to call
* recursively without running into our own kcompat, so just use
* napi_complete
*/
napi_complete(napi);
/* true means that the stack is telling the driver to go-ahead and
* re-enable interrupts
*/
return true;
}
#ifdef napi_complete_done
#undef napi_complete_done
#endif
#define napi_complete_done _kc_napi_complete_done2
#endif /* sles and rhel exclusion for < 4.10 */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_DEV_WALK_API
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* RHEL7.4+ */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE == SLE_VERSION(12,3,0)))
#define HAVE_STRUCT_DMA_ATTRS
#endif /* (SLES == 12.3.0) */
@@ -5210,6 +5392,14 @@ static inline void __page_frag_cache_drain(struct page *page,
#define HAVE_NETDEV_TC_RESETS_XPS
#define HAVE_XPS_QOS_SUPPORT
#define HAVE_DEV_WALK_API
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
/* from kernel 4.10 onwards, as part of the busy_poll rewrite, new states
 * were added to NAPI:state. If NAPI:state == NAPI_STATE_IN_BUSY_POLL,
 * it means napi_poll is invoked in busy_poll context
 */
#define HAVE_NAPI_STATE_IN_BUSY_POLL
#define HAVE_TCF_MIRRED_EGRESS_REDIRECT
#endif /* 4.10.0 */
/*****************************************************************************/
@@ -5221,6 +5411,47 @@ static inline void __page_frag_cache_drain(struct page *page,
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5))))
#define HAVE_VOID_NDO_GET_STATS64
#endif /* (SLES >= 12.3.0) || (RHEL >= 7.5) */
static inline void _kc_dev_kfree_skb_irq(struct sk_buff *skb)
{
if (!skb)
return;
dev_kfree_skb_irq(skb);
}
#undef dev_kfree_skb_irq
#define dev_kfree_skb_irq _kc_dev_kfree_skb_irq
static inline void _kc_dev_consume_skb_irq(struct sk_buff *skb)
{
if (!skb)
return;
dev_consume_skb_irq(skb);
}
#undef dev_consume_skb_irq
#define dev_consume_skb_irq _kc_dev_consume_skb_irq
static inline void _kc_dev_kfree_skb_any(struct sk_buff *skb)
{
if (!skb)
return;
dev_kfree_skb_any(skb);
}
#undef dev_kfree_skb_any
#define dev_kfree_skb_any _kc_dev_kfree_skb_any
static inline void _kc_dev_consume_skb_any(struct sk_buff *skb)
{
if (!skb)
return;
dev_consume_skb_any(skb);
}
#undef dev_consume_skb_any
#define dev_consume_skb_any _kc_dev_consume_skb_any
#else /* > 4.11 */
#define HAVE_VOID_NDO_GET_STATS64
#define HAVE_VM_OPS_FAULT_NO_VMA
@@ -5228,7 +5459,14 @@ static inline void __page_frag_cache_drain(struct page *page,
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,13,0))
#if ((SLE_VERSION_CODE && (SLE_VERSION_CODE > SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5)))
#define HAVE_TCF_EXTS_HAS_ACTION
#endif
#define PCI_EXP_LNKCAP_SLS_8_0GB 0x00000003 /* LNKCAP2 SLS Vector bit 2 */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,4,0)))
#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
#endif /* SLES >= 12sp4 */
#else /* > 4.13 */
#define HAVE_HWTSTAMP_FILTER_NTP_ALL
#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
@@ -5238,11 +5476,13 @@ static inline void __page_frag_cache_drain(struct page *page,
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0))
#ifdef ETHTOOL_GLINKSETTINGS
#ifndef ethtool_link_ksettings_del_link_mode
#define ethtool_link_ksettings_del_link_mode(ptr, name, mode) \
__clear_bit(ETHTOOL_LINK_MODE_ ## mode ## _BIT, (ptr)->link_modes.name)
#endif
#endif /* ETHTOOL_GLINKSETTINGS */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,4,0)))
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
#endif
@@ -5276,6 +5516,7 @@ struct _kc_bpf_prog {
#else /* > 4.14 */
#define HAVE_XDP_SUPPORT
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
#define HAVE_TCF_EXTS_HAS_ACTION
#endif /* 4.14.0 */
/*****************************************************************************/
@@ -5337,6 +5578,15 @@ struct ethtool_link_ksettings {
#define ethtool_link_ksettings_add_link_mode(ptr, name, mode)\
(*((ptr)->link_modes.name) |= (typeof(*((ptr)->link_modes.name)))ETHTOOL_LINK_CONVERT(name, mode))
/**
* ethtool_link_ksettings_del_link_mode
* @ptr: ptr to ksettings struct
* @name: supported or advertising
* @mode: link mode to delete
*/
#define ethtool_link_ksettings_del_link_mode(ptr, name, mode)\
(*((ptr)->link_modes.name) &= ~(typeof(*((ptr)->link_modes.name)))ETHTOOL_LINK_CONVERT(name, mode))
/**
* ethtool_link_ksettings_test_link_mode
* @ptr: ptr to ksettings struct
@@ -5345,6 +5595,30 @@ struct ethtool_link_ksettings {
*/
#define ethtool_link_ksettings_test_link_mode(ptr, name, mode)\
(!!(*((ptr)->link_modes.name) & ETHTOOL_LINK_CONVERT(name, mode)))
/**
* _kc_ethtool_ksettings_to_cmd - Convert ethtool_link_ksettings to ethtool_cmd
* @ks: ethtool_link_ksettings struct
* @cmd: ethtool_cmd struct
*
* Convert an ethtool_link_ksettings structure into the older ethtool_cmd
* structure. We provide this in kcompat.h so that drivers can easily
* implement the older .{get|set}_settings as wrappers around the new api.
* Hence, we keep it prefixed with _kc_ to make it clear this isn't actually
* a real function in the kernel.
*/
static inline void
_kc_ethtool_ksettings_to_cmd(struct ethtool_link_ksettings *ks,
struct ethtool_cmd *cmd)
{
cmd->supported = (u32)ks->link_modes.supported[0];
cmd->advertising = (u32)ks->link_modes.advertising[0];
ethtool_cmd_speed_set(cmd, ks->base.speed);
cmd->duplex = ks->base.duplex;
cmd->autoneg = ks->base.autoneg;
cmd->port = ks->base.port;
}
#endif /* !ETHTOOL_GLINKSETTINGS */
/*****************************************************************************/
@@ -5359,9 +5633,13 @@ const char *_kc_phy_speed_to_str(int speed);
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,15,0))
#if ((RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,6))) || \
     (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,1,0))))
#define HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
#define HAVE_TCF_BLOCK
#else /* RHEL >= 7.6 || SLES >= 15.1 */
#define TC_SETUP_QDISC_MQPRIO TC_SETUP_MQPRIO
#endif /* !(RHEL >= 7.6) && !(SLES >= 15.1) */
void _kc_ethtool_intersect_link_masks(struct ethtool_link_ksettings *dst,
struct ethtool_link_ksettings *src);
#define ethtool_intersect_link_masks _kc_ethtool_intersect_link_masks
@@ -5369,6 +5647,7 @@ void _kc_ethtool_intersect_link_masks(struct ethtool_link_ksettings *dst,
#define HAVE_NDO_BPF
#define HAVE_XDP_BUFF_DATA_META
#define HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
#define HAVE_TCF_BLOCK
#endif /* 4.15.0 */
/*****************************************************************************/
@@ -5409,9 +5688,29 @@ static inline unsigned long _kc_array_index_mask_nospec(unsigned long index,
(typeof(_i)) (_i & _mask); \
})
#endif /* array_index_nospec */
#if (!(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,6))) && \
!(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,1,0))))
#ifdef HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
#include <net/pkt_cls.h>
static inline bool
tc_cls_can_offload_and_chain0(const struct net_device *dev,
struct tc_cls_common_offload *common)
{
if (!tc_can_offload(dev))
return false;
if (common->chain_index)
return false;
return true;
}
#endif /* HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO */
#endif /* !(RHEL >= 7.6) && !(SLES >= 15.1) */
#else /* >= 4.16 */
#include <linux/nospec.h>
#define HAVE_XDP_BUFF_RXQ
#define HAVE_TC_FLOWER_OFFLOAD_COMMON_EXTACK
#define HAVE_TCF_MIRRED_DEV
#define HAVE_VF_STATS_DROPPED
#endif /* 4.16.0 */
/*****************************************************************************/
@@ -5424,6 +5723,8 @@ static inline unsigned long _kc_array_index_mask_nospec(unsigned long index,
#define PCI_EXP_LNKCAP2_SLS_16_0GB 0x00000010 /* Supported Speed 16GT/s */
void _kc_pcie_print_link_status(struct pci_dev *dev);
#define pcie_print_link_status _kc_pcie_print_link_status
#else /* >= 4.17.0 */
#define HAVE_XDP_BUFF_IN_XDP_H
#endif /* 4.17.0 */
/*****************************************************************************/
@@ -5442,6 +5743,7 @@ static inline bool _kc_macvlan_supports_dest_filter(struct net_device *dev)
}
#endif
#if (!SLE_VERSION_CODE || (SLE_VERSION_CODE < SLE_VERSION(15,1,0)))
#ifndef macvlan_accel_priv
#define macvlan_accel_priv _kc_macvlan_accel_priv
static inline void *_kc_macvlan_accel_priv(struct net_device *dev)
@@ -5462,15 +5764,46 @@ static inline int _kc_macvlan_release_l2fw_offload(struct net_device *dev)
return dev_uc_add(macvlan->lowerdev, dev->dev_addr);
}
#endif
#endif /* !SLES || SLES < 15.1 */
#endif /* NETIF_F_HW_L2FW_DOFFLOAD */
#include "kcompat_overflow.h"
#if (SLE_VERSION_CODE < SLE_VERSION(15,1,0))
#define firmware_request_nowarn request_firmware_direct
#endif /* !SLES || SLES < 15.1 */
#else
#include <linux/overflow.h>
#include <net/xdp_sock.h>
#define HAVE_XDP_FRAME_STRUCT
#define HAVE_XDP_SOCK
#define HAVE_NDO_XDP_XMIT_BULK_AND_FLAGS
#define NO_NDO_XDP_FLUSH
#define HAVE_AF_XDP_SUPPORT
#ifndef xdp_umem_get_data
static inline char *__kc_xdp_umem_get_data(struct xdp_umem *umem, u64 addr)
{
return umem->pages[addr >> PAGE_SHIFT].addr + (addr & (PAGE_SIZE - 1));
}
#define xdp_umem_get_data __kc_xdp_umem_get_data
#endif /* !xdp_umem_get_data */
#ifndef xdp_umem_get_dma
static inline dma_addr_t __kc_xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)
{
return umem->pages[addr >> PAGE_SHIFT].dma + (addr & (PAGE_SIZE - 1));
}
#define xdp_umem_get_dma __kc_xdp_umem_get_dma
#endif /* !xdp_umem_get_dma */
#endif /* 4.18.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,19,0))
#define bitmap_alloc(nbits, flags) \
kmalloc_array(BITS_TO_LONGS(nbits), sizeof(unsigned long), flags)
#define bitmap_zalloc(nbits, flags) bitmap_alloc(nbits, ((flags) | __GFP_ZERO))
#define bitmap_free(bitmap) kfree(bitmap)
#ifdef ETHTOOL_GLINKSETTINGS
#define ethtool_ks_clear(ptr, name) \
ethtool_link_ksettings_zero_link_mode(ptr, name)
@@ -5481,10 +5814,311 @@ static inline int _kc_macvlan_release_l2fw_offload(struct net_device *dev)
#define ethtool_ks_test(ptr, name, mode) \
ethtool_link_ksettings_test_link_mode(ptr, name, mode)
#endif /* ETHTOOL_GLINKSETTINGS */
#define HAVE_NETPOLL_CONTROLLER
#define REQUIRE_PCI_CLEANUP_AER_ERROR_STATUS
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,1,0)))
#define HAVE_TCF_MIRRED_DEV
#define HAVE_NDO_SELECT_QUEUE_SB_DEV
#define HAVE_TCF_BLOCK_CB_REGISTER_EXTACK
#endif
#if ((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,0)) ||\
(SLE_VERSION_CODE >= SLE_VERSION(15,1,0)))
#define HAVE_TCF_EXTS_FOR_EACH_ACTION
#undef HAVE_TCF_EXTS_TO_LIST
#endif /* RHEL8.0+ || SLES15.1+ */
#else /* >= 4.19.0 */
#define HAVE_TCF_BLOCK_CB_REGISTER_EXTACK
#define NO_NETDEV_BPF_PROG_ATTACHED
#define HAVE_NDO_SELECT_QUEUE_SB_DEV
#define HAVE_NETDEV_SB_DEV
#undef HAVE_TCF_EXTS_TO_LIST
#define HAVE_TCF_EXTS_FOR_EACH_ACTION
#endif /* 4.19.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,20,0))
#define HAVE_XDP_UMEM_PROPS
#ifdef HAVE_AF_XDP_SUPPORT
#ifndef napi_if_scheduled_mark_missed
static inline bool __kc_napi_if_scheduled_mark_missed(struct napi_struct *n)
{
unsigned long val, new;
do {
val = READ_ONCE(n->state);
if (val & NAPIF_STATE_DISABLE)
return true;
if (!(val & NAPIF_STATE_SCHED))
return false;
new = val | NAPIF_STATE_MISSED;
} while (cmpxchg(&n->state, val, new) != val);
return true;
}
#define napi_if_scheduled_mark_missed __kc_napi_if_scheduled_mark_missed
#endif /* !napi_if_scheduled_mark_missed */
#endif /* HAVE_AF_XDP_SUPPORT */
#else /* >= 4.20.0 */
#define HAVE_AF_XDP_ZC_SUPPORT
#define HAVE_VXLAN_TYPE
#endif /* 4.20.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,0,0))
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(8,0)))
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,12,0))
#define NETLINK_MAX_COOKIE_LEN 20
struct netlink_ext_ack {
const char *_msg;
const struct nlattr *bad_attr;
u8 cookie[NETLINK_MAX_COOKIE_LEN];
u8 cookie_len;
};
#endif /* < 4.12 */
static inline int _kc_dev_open(struct net_device *netdev,
struct netlink_ext_ack __always_unused *extack)
{
return dev_open(netdev);
}
#define dev_open _kc_dev_open
#endif /* !(RHEL_RELEASE_CODE && RHEL > RHEL(8,0)) */
#if (RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,7) && \
RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8,0)) || \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,1)))
#define HAVE_PTP_SYS_OFFSET_EXTENDED_IOCTL
#else /* RHEL >= 7.7 && RHEL < 8.0 || RHEL >= 8.1 */
struct ptp_system_timestamp {
struct timespec64 pre_ts;
struct timespec64 post_ts;
};
static inline void
ptp_read_system_prets(struct ptp_system_timestamp __always_unused *sts)
{
;
}
static inline void
ptp_read_system_postts(struct ptp_system_timestamp __always_unused *sts)
{
;
}
#endif /* !(RHEL >= 7.7 && RHEL != 8.0) */
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,1)))
#define HAVE_NDO_BRIDGE_SETLINK_EXTACK
#endif /* RHEL 8.1 */
#else /* >= 5.0.0 */
#define HAVE_PTP_SYS_OFFSET_EXTENDED_IOCTL
#define HAVE_NDO_BRIDGE_SETLINK_EXTACK
#define HAVE_DMA_ALLOC_COHERENT_ZEROES_MEM
#define HAVE_GENEVE_TYPE
#define HAVE_TC_INDIR_BLOCK
#endif /* 5.0.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,1,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(8,1)))
#define HAVE_NDO_FDB_ADD_EXTACK
#else /* RHEL < 8.1 */
#ifdef HAVE_TC_SETUP_CLSFLOWER
#include <net/pkt_cls.h>
struct flow_match {
struct flow_dissector *dissector;
void *mask;
void *key;
};
struct flow_match_basic {
struct flow_dissector_key_basic *key, *mask;
};
struct flow_match_control {
struct flow_dissector_key_control *key, *mask;
};
struct flow_match_eth_addrs {
struct flow_dissector_key_eth_addrs *key, *mask;
};
#ifdef HAVE_TC_FLOWER_ENC
struct flow_match_enc_keyid {
struct flow_dissector_key_keyid *key, *mask;
};
#endif
#ifndef HAVE_TC_FLOWER_VLAN_IN_TAGS
struct flow_match_vlan {
struct flow_dissector_key_vlan *key, *mask;
};
#endif
struct flow_match_ipv4_addrs {
struct flow_dissector_key_ipv4_addrs *key, *mask;
};
struct flow_match_ipv6_addrs {
struct flow_dissector_key_ipv6_addrs *key, *mask;
};
struct flow_match_ports {
struct flow_dissector_key_ports *key, *mask;
};
struct flow_rule {
struct flow_match match;
#if 0
/* In 5.1+ kernels, action is a member of struct flow_rule, but it is
 * not compatible with how our kcompat tc_cls_flower_offload_flow_rule
 * below works. By not declaring it here, any driver that attempts to
 * use action as an element of struct flow_rule will fail to compile
 * instead of silently trying to access memory that shouldn't be.
 */
struct flow_action action;
#endif
};
void flow_rule_match_basic(const struct flow_rule *rule,
struct flow_match_basic *out);
void flow_rule_match_control(const struct flow_rule *rule,
struct flow_match_control *out);
void flow_rule_match_eth_addrs(const struct flow_rule *rule,
struct flow_match_eth_addrs *out);
#ifndef HAVE_TC_FLOWER_VLAN_IN_TAGS
void flow_rule_match_vlan(const struct flow_rule *rule,
struct flow_match_vlan *out);
#endif
void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
struct flow_match_ipv4_addrs *out);
void flow_rule_match_ipv6_addrs(const struct flow_rule *rule,
struct flow_match_ipv6_addrs *out);
void flow_rule_match_ports(const struct flow_rule *rule,
struct flow_match_ports *out);
#ifdef HAVE_TC_FLOWER_ENC
void flow_rule_match_enc_ports(const struct flow_rule *rule,
struct flow_match_ports *out);
void flow_rule_match_enc_control(const struct flow_rule *rule,
struct flow_match_control *out);
void flow_rule_match_enc_ipv4_addrs(const struct flow_rule *rule,
struct flow_match_ipv4_addrs *out);
void flow_rule_match_enc_ipv6_addrs(const struct flow_rule *rule,
struct flow_match_ipv6_addrs *out);
void flow_rule_match_enc_keyid(const struct flow_rule *rule,
struct flow_match_enc_keyid *out);
#endif
static inline struct flow_rule *
tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
{
return (struct flow_rule *)&tc_flow_cmd->dissector;
}
static inline bool flow_rule_match_key(const struct flow_rule *rule,
enum flow_dissector_key_id key)
{
return dissector_uses_key(rule->match.dissector, key);
}
#endif /* HAVE_TC_SETUP_CLSFLOWER */
#endif /* RHEL < 8.1 */
#else /* >= 5.1.0 */
#define HAVE_NDO_FDB_ADD_EXTACK
#define NO_XDP_QUERY_XSK_UMEM
#define HAVE_TC_FLOW_RULE_INFRASTRUCTURE
#define HAVE_TC_FLOWER_ENC_IP
#endif /* 5.1.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,2,0))
#ifdef HAVE_SKB_XMIT_MORE
#define netdev_xmit_more() (skb->xmit_more)
#else
#define netdev_xmit_more() (0)
#endif
#ifndef eth_get_headlen
static inline u32
__kc_eth_get_headlen(const struct net_device __always_unused *dev, void *data,
		     unsigned int len)
{
	return eth_get_headlen(data, len);
}

#define eth_get_headlen(dev, data, len) __kc_eth_get_headlen(dev, data, len)
#endif /* !eth_get_headlen */
#ifndef mmiowb
#ifdef CONFIG_IA64
#define mmiowb() asm volatile ("mf.a" ::: "memory")
#else
#define mmiowb()
#endif
#endif /* mmiowb */
#else /* >= 5.2.0 */
#define HAVE_NDO_SELECT_QUEUE_FALLBACK_REMOVED
#define SPIN_UNLOCK_IMPLIES_MMIOWB
#endif /* 5.2.0 */
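The `__kc_eth_get_headlen()` wrapper above shows a recurring kcompat idiom: on old kernels, define an inline helper that takes the *new* API's signature but forwards to the old call, then rename the symbol with a macro so driver code can always use the new-style call. A minimal userspace sketch of the same idiom (all names here are illustrative, not from the driver):

```c
#include <assert.h>
#include <stddef.h>

/* Old-style API: takes only (data, len). */
static unsigned int old_get_headlen(const void *data, unsigned int len)
{
	(void)data;
	return len < 64 ? len : 64;	/* stand-in for real header parsing */
}

/* Inline shim with the new signature; the extra device argument is
 * accepted but ignored, exactly as __kc_eth_get_headlen() ignores dev. */
static inline unsigned int shim_get_headlen(const void *dev, const void *data,
					    unsigned int len)
{
	(void)dev;
	return old_get_headlen(data, len);
}

/* Redirect the new-style name to the shim. */
#define get_headlen(dev, data, len) shim_get_headlen(dev, data, len)
```

Callers compiled against either kernel generation can then use the three-argument form unconditionally.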
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,3,0))
#define flow_block_offload tc_block_offload
#define flow_block_command tc_block_command
#define flow_block_binder_type tcf_block_binder_type
#define flow_cls_offload tc_cls_flower_offload
#define flow_cls_common_offload tc_cls_common_offload
#define flow_cls_offload_flow_rule tc_cls_flower_offload_flow_rule
#define FLOW_CLS_REPLACE TC_CLSFLOWER_REPLACE
#define FLOW_CLS_DESTROY TC_CLSFLOWER_DESTROY
#define FLOW_CLS_STATS TC_CLSFLOWER_STATS
#define FLOW_CLS_TMPLT_CREATE TC_CLSFLOWER_TMPLT_CREATE
#define FLOW_CLS_TMPLT_DESTROY TC_CLSFLOWER_TMPLT_DESTROY
#define FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS \
	TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS
#define FLOW_BLOCK_BIND TC_BLOCK_BIND
#define FLOW_BLOCK_UNBIND TC_BLOCK_UNBIND
#ifdef HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
#include <net/pkt_cls.h>
int _kc_flow_block_cb_setup_simple(struct flow_block_offload *f,
				   struct list_head *driver_list,
				   tc_setup_cb_t *cb,
				   void *cb_ident, void *cb_priv,
				   bool ingress_only);

#define flow_block_cb_setup_simple(f, driver_list, cb, cb_ident, cb_priv, \
				   ingress_only) \
	_kc_flow_block_cb_setup_simple(f, driver_list, cb, cb_ident, cb_priv, \
				       ingress_only)
#endif /* HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO */
#else /* >= 5.3.0 */
#define XSK_UMEM_RETURNS_XDP_DESC
#define HAVE_FLOW_BLOCK_API
#endif /* 5.3.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5,4,0))
static inline unsigned int skb_frag_off(const skb_frag_t *frag)
{
	return frag->page_offset;
}

static inline void skb_frag_off_add(skb_frag_t *frag, int delta)
{
	frag->page_offset += delta;
}
#define __flow_indr_block_cb_register __tc_indr_block_cb_register
#define __flow_indr_block_cb_unregister __tc_indr_block_cb_unregister
#else /* >= 5.4.0 */
#define HAVE_NDO_XSK_WAKEUP
#endif /* 5.4.0 */
#endif /* _KCOMPAT_H_ */

src/kcompat_overflow.h (new file, 315 lines)
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2019 Intel Corporation. */
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
#ifndef __LINUX_OVERFLOW_H
#define __LINUX_OVERFLOW_H
#include <linux/compiler.h>
/*
* In the fallback code below, we need to compute the minimum and
* maximum values representable in a given type. These macros may also
* be useful elsewhere, so we provide them outside the
* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW block.
*
* It would seem more obvious to do something like
*
* #define type_min(T) (T)(is_signed_type(T) ? (T)1 << (8*sizeof(T)-1) : 0)
* #define type_max(T) (T)(is_signed_type(T) ? ((T)1 << (8*sizeof(T)-1)) - 1 : ~(T)0)
*
* Unfortunately, the middle expressions, strictly speaking, have
* undefined behaviour, and at least some versions of gcc warn about
* the type_max expression (but not if -fsanitize=undefined is in
* effect; in that case, the warning is deferred to runtime...).
*
* The slightly excessive casting in type_min is to make sure the
* macros also produce sensible values for the exotic type _Bool. [The
* overflow checkers only almost work for _Bool, but that's
* a-feature-not-a-bug, since people shouldn't be doing arithmetic on
* _Bools. Besides, the gcc builtins don't allow _Bool* as third
* argument.]
*
* Idea stolen from
* https://mail-index.netbsd.org/tech-misc/2007/02/05/0000.html -
* credit to Christian Biere.
*/
/* The is_signed_type macro is redefined in a few places in various kernel
* headers. If this header is included at the same time as one of those, we
* will generate compilation warnings. Since we can't fix every old kernel,
* rename is_signed_type for this file to _kc_is_signed_type. This prevents
* the macro name collision, and should be safe since our drivers do not
* directly call the macro.
*/
#define _kc_is_signed_type(type) (((type)(-1)) < (type)1)
#define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - _kc_is_signed_type(type)))
#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
#define type_min(T) ((T)((T)-type_max(T)-(T)1))
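These limit macros can be exercised directly in userspace; the block below copies the definitions verbatim to show the values they produce without overflow UB:

```c
#include <assert.h>
#include <stdint.h>

/* Same definitions as above: compute the type's half-max first so the
 * shift never touches the sign bit, then assemble max and min from it. */
#define _kc_is_signed_type(type)  (((type)(-1)) < (type)1)
#define __type_half_max(type) \
	((type)1 << (8*sizeof(type) - 1 - _kc_is_signed_type(type)))
#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
#define type_min(T) ((T)((T)-type_max(T)-(T)1))
```

For `int8_t`, `__type_half_max` is `1 << 6 = 64`, so `type_max` is `63 + 64 = 127` and `type_min` is `-127 - 1 = -128`; for unsigned types `_kc_is_signed_type` is 0 and the same arithmetic yields `255`/`0`.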
#ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
/*
* For simplicity and code hygiene, the fallback code below insists on
* a, b and *d having the same type (similar to the min() and max()
* macros), whereas gcc's type-generic overflow checkers accept
* different types. Hence we don't just make check_add_overflow an
* alias for __builtin_add_overflow, but add type checks similar to
* below.
*/
#define check_add_overflow(a, b, d) ({		\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	__builtin_add_overflow(__a, __b, __d);	\
})

#define check_sub_overflow(a, b, d) ({		\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	__builtin_sub_overflow(__a, __b, __d);	\
})

#define check_mul_overflow(a, b, d) ({		\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	__builtin_mul_overflow(__a, __b, __d);	\
})
#else
/* Checking for unsigned overflow is relatively easy without causing UB. */
#define __unsigned_add_overflow(a, b, d) ({	\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	*__d = __a + __b;			\
	*__d < __a;				\
})

#define __unsigned_sub_overflow(a, b, d) ({	\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	*__d = __a - __b;			\
	__a < __b;				\
})

/*
 * If one of a or b is a compile-time constant, this avoids a division.
 */
#define __unsigned_mul_overflow(a, b, d) ({			\
	typeof(a) __a = (a);					\
	typeof(b) __b = (b);					\
	typeof(d) __d = (d);					\
	(void) (&__a == &__b);					\
	(void) (&__a == __d);					\
	*__d = __a * __b;					\
	__builtin_constant_p(__b) ?				\
	  __b > 0 && __a > type_max(typeof(__a)) / __b :	\
	  __a > 0 && __b > type_max(typeof(__b)) / __a;		\
})
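As a sanity check of these unsigned fallbacks, the same wrap-around and division tricks can be written as plain functions (a userspace sketch; `u8_add_overflow`/`u8_mul_overflow` are illustrative names, fixed to `uint8_t` rather than type-generic):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Same idea as __unsigned_add_overflow: store the wrapped sum, then
 * report overflow because the result came out smaller than an operand. */
static bool u8_add_overflow(uint8_t a, uint8_t b, uint8_t *d)
{
	*d = (uint8_t)(a + b);
	return *d < a;
}

/* Same idea as __unsigned_mul_overflow's non-constant branch:
 * division-based check against the type's maximum. */
static bool u8_mul_overflow(uint8_t a, uint8_t b, uint8_t *d)
{
	*d = (uint8_t)(a * b);
	return a > 0 && b > UINT8_MAX / a;
}
```

For example, `200 + 55` fits exactly in a `uint8_t` (255), while `200 + 56` wraps to 0 and is flagged; `15 * 17 = 255` fits, while `16 * 16` wraps and is flagged.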
/*
* For signed types, detecting overflow is much harder, especially if
* we want to avoid UB. But the interface of these macros is such that
* we must provide a result in *d, and in fact we must produce the
* result promised by gcc's builtins, which is simply the possibly
* wrapped-around value. Fortunately, we can just formally do the
* operations in the widest relevant unsigned type (u64) and then
* truncate the result - gcc is smart enough to generate the same code
* with and without the (u64) casts.
*/
/*
* Adding two signed integers can overflow only if they have the same
* sign, and overflow has happened iff the result has the opposite
* sign.
*/
#define __signed_add_overflow(a, b, d) ({	\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	*__d = (u64)__a + (u64)__b;		\
	(((~(__a ^ __b)) & (*__d ^ __a))	\
		& type_min(typeof(__a))) != 0;	\
})
/*
* Subtraction is similar, except that overflow can now happen only
* when the signs are opposite. In this case, overflow has happened if
* the result has the opposite sign of a.
*/
#define __signed_sub_overflow(a, b, d) ({	\
	typeof(a) __a = (a);			\
	typeof(b) __b = (b);			\
	typeof(d) __d = (d);			\
	(void) (&__a == &__b);			\
	(void) (&__a == __d);			\
	*__d = (u64)__a - (u64)__b;		\
	((((__a ^ __b)) & (*__d ^ __a))		\
		& type_min(typeof(__a))) != 0;	\
})
/*
* Signed multiplication is rather hard. gcc always follows C99, so
* division is truncated towards 0. This means that we can write the
* overflow check like this:
*
* (a > 0 && (b > MAX/a || b < MIN/a)) ||
* (a < -1 && (b > MIN/a || b < MAX/a) ||
* (a == -1 && b == MIN)
*
* The redundant casts of -1 are to silence an annoying -Wtype-limits
* (included in -Wextra) warning: When the type is u8 or u16, the
* __b_c_e in check_mul_overflow obviously selects
* __unsigned_mul_overflow, but unfortunately gcc still parses this
* code and warns about the limited range of __b.
*/
#define __signed_mul_overflow(a, b, d) ({				\
	typeof(a) __a = (a);						\
	typeof(b) __b = (b);						\
	typeof(d) __d = (d);						\
	typeof(a) __tmax = type_max(typeof(a));				\
	typeof(a) __tmin = type_min(typeof(a));				\
	(void) (&__a == &__b);						\
	(void) (&__a == __d);						\
	*__d = (u64)__a * (u64)__b;					\
	(__b > 0   && (__a > __tmax/__b || __a < __tmin/__b)) ||	\
	(__b < (typeof(__b))-1  && (__a > __tmin/__b || __a < __tmax/__b)) || \
	(__b == (typeof(__b))-1 && __a == __tmin);			\
})
#define check_add_overflow(a, b, d)					\
	__builtin_choose_expr(_kc_is_signed_type(typeof(a)),		\
			__signed_add_overflow(a, b, d),			\
			__unsigned_add_overflow(a, b, d))

#define check_sub_overflow(a, b, d)					\
	__builtin_choose_expr(_kc_is_signed_type(typeof(a)),		\
			__signed_sub_overflow(a, b, d),			\
			__unsigned_sub_overflow(a, b, d))

#define check_mul_overflow(a, b, d)					\
	__builtin_choose_expr(_kc_is_signed_type(typeof(a)),		\
			__signed_mul_overflow(a, b, d),			\
			__unsigned_mul_overflow(a, b, d))
#endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
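When the compiler provides the type-generic builtins (gcc 5+ and recent clang), the `check_*_overflow()` macros above collapse to single `__builtin_*_overflow()` calls. A userspace sketch of that fast path (the wrapper names are illustrative, fixed to `int32_t` for demonstration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Thin wrappers over the gcc/clang builtins that the
 * COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW branch relies on; each returns
 * true on overflow and stores the wrapped result regardless. */
static bool add_i32(int32_t a, int32_t b, int32_t *d)
{
	return __builtin_add_overflow(a, b, d);
}

static bool mul_i32(int32_t a, int32_t b, int32_t *d)
{
	return __builtin_mul_overflow(a, b, d);
}
```

`INT32_MAX - 1 + 1` is fine, `INT32_MAX + 1` is flagged; `46340 * 46340` (just under 2^31) is fine, `65536 * 32768` (exactly 2^31) is flagged.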
/** check_shl_overflow() - Calculate a left-shifted value and check overflow
*
* @a: Value to be shifted
* @s: How many bits left to shift
* @d: Pointer to where to store the result
*
* Computes *@d = (@a << @s)
*
* Returns true if '*d' cannot hold the result or when 'a << s' doesn't
* make sense. Example conditions:
* - 'a << s' causes bits to be lost when stored in *d.
* - 's' is garbage (e.g. negative) or so large that the result of
* 'a << s' is guaranteed to be 0.
* - 'a' is negative.
* - 'a << s' sets the sign bit, if any, in '*d'.
*
* '*d' will hold the results of the attempted shift, but is not
* considered "safe for use" if false is returned.
*/
#define check_shl_overflow(a, s, d) ({					\
	typeof(a) _a = a;						\
	typeof(s) _s = s;						\
	typeof(d) _d = d;						\
	u64 _a_full = _a;						\
	unsigned int _to_shift =					\
		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
	*_d = (_a_full << _to_shift);					\
	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
	 (*_d >> _to_shift) != _a);					\
})
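The shift check can be exercised in userspace by fixing the destination type; this sketch mirrors the macro's logic for a `u32` destination (the function name is illustrative, and the negative-value checks drop out because everything is unsigned):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors check_shl_overflow for a uint32_t destination: clamp an
 * out-of-range shift count to 0, shift in a wide type, and verify the
 * stored result shifts back to the original value. */
static bool u32_shl_overflow(uint32_t a, int s, uint32_t *d)
{
	uint64_t full = a;
	unsigned int to_shift = (s >= 0 && s < 32) ? (unsigned int)s : 0;

	*d = (uint32_t)(full << to_shift);
	return to_shift != (unsigned int)s || (*d >> to_shift) != a;
}
```

`1 << 31` fits in 32 bits; `3 << 31` loses a bit and fails the round-trip check; shift counts of 32 or -1 are flagged by the clamped-shift comparison.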
/**
* array_size() - Calculate size of 2-dimensional array.
*
* @a: dimension one
* @b: dimension two
*
* Calculates size of 2-dimensional array: @a * @b.
*
* Returns: number of bytes needed to represent the array or SIZE_MAX on
* overflow.
*/
static inline __must_check size_t array_size(size_t a, size_t b)
{
	size_t bytes;

	if (check_mul_overflow(a, b, &bytes))
		return SIZE_MAX;

	return bytes;
}
/**
* array3_size() - Calculate size of 3-dimensional array.
*
* @a: dimension one
* @b: dimension two
* @c: dimension three
*
* Calculates size of 3-dimensional array: @a * @b * @c.
*
* Returns: number of bytes needed to represent the array or SIZE_MAX on
* overflow.
*/
static inline __must_check size_t array3_size(size_t a, size_t b, size_t c)
{
	size_t bytes;

	if (check_mul_overflow(a, b, &bytes))
		return SIZE_MAX;
	if (check_mul_overflow(bytes, c, &bytes))
		return SIZE_MAX;

	return bytes;
}
static inline __must_check size_t __ab_c_size(size_t n, size_t size, size_t c)
{
	size_t bytes;

	if (check_mul_overflow(n, size, &bytes))
		return SIZE_MAX;
	if (check_add_overflow(bytes, c, &bytes))
		return SIZE_MAX;

	return bytes;
}
/**
* struct_size() - Calculate size of structure with trailing array.
* @p: Pointer to the structure.
* @member: Name of the array member.
* @n: Number of elements in the array.
*
* Calculates size of memory needed for structure @p followed by an
* array of @n @member elements.
*
* Return: number of bytes needed or SIZE_MAX on overflow.
*/
#define struct_size(p, member, n)					\
	__ab_c_size(n,							\
		    sizeof(*(p)->member) + __must_be_array((p)->member),\
		    sizeof(*(p)))
#endif /* __LINUX_OVERFLOW_H */
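The allocation-size helpers translate directly to userspace; this sketch reimplements the `__ab_c_size()`/`struct_size()` pattern with the compiler builtins and a flexible array member (the `__must_be_array()` guard is omitted for brevity; `struct msg` and `my_struct_size` are illustrative names):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* n * size + c, saturating to SIZE_MAX instead of wrapping, exactly as
 * __ab_c_size() does with check_mul_overflow/check_add_overflow. */
static size_t ab_c_size(size_t n, size_t size, size_t c)
{
	size_t bytes;

	if (__builtin_mul_overflow(n, size, &bytes))
		return SIZE_MAX;
	if (__builtin_add_overflow(bytes, c, &bytes))
		return SIZE_MAX;

	return bytes;
}

/* Typical consumer: a header followed by a flexible array. */
struct msg {
	uint32_t len;
	uint8_t data[];
};

#define my_struct_size(p, member, n) \
	ab_c_size(n, sizeof(*(p)->member), sizeof(*(p)))
```

A saturated SIZE_MAX return makes the subsequent `kmalloc()` (or `malloc()`) fail cleanly rather than allocating a too-small buffer.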