Intel release 0.26.1

This commit is contained in:
Silicom Ltd 2018-09-05 20:45:41 +00:00 committed by Shoghi
parent 1e989c9e83
commit 7cf35a4864
33 changed files with 4601 additions and 2140 deletions

COPYING
"This software program is licensed subject to the GNU General Public License
(GPL). Version 2, June 1991, available at
<http://www.fsf.org/copyleft/gpl.html>"
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your freedom to share
and change it. By contrast, the GNU General Public License is intended to
guarantee your freedom to share and change free software--to make sure the
software is free for all its users. This General Public License applies to most
of the Free Software Foundation's software and to any other program whose
authors commit to using it. (Some other Free Software Foundation software is
covered by the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not price. Our
General Public Licenses are designed to make sure that you have the freedom to
distribute copies of free software (and charge for this service if you wish),
that you receive source code or can get it if you want it, that you can change
the software or use pieces of it in new free programs; and that you know you can
do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny
you these rights or to ask you to surrender the rights. These restrictions
translate to certain responsibilities for you if you distribute copies of the
software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a
fee, you must give the recipients all the rights that you have. You must make
sure that they, too, receive or can get the source code. And you must show them
these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer
you this license which gives you legal permission to copy, distribute and/or
modify the software.
Also, for each author's protection and ours, we want to make certain that
everyone understands that there is no warranty for this free software. If the
software is modified by someone else and passed on, we want its recipients to
know that what they have is not the original, so that any problems introduced by
others will not reflect on the original authors' reputations.
Finally, any free program is threatened constantly by software patents. We wish
to avoid the danger that redistributors of a free program will individually
obtain patent licenses, in effect making the program proprietary. To prevent
this, we have made it clear that any patent must be licensed for everyone's free
use or not licensed at all.
The precise terms and conditions for copying, distribution and modification
follow.
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains a notice
placed by the copyright holder saying it may be distributed under the terms of
this General Public License. The "Program", below, refers to any such program
or work, and a "work based on the Program" means either the Program or any
derivative work under copyright law: that is to say, a work containing the
Program or a portion of it, either verbatim or with modifications and/or
translated into another language. (Hereinafter, translation is included without
limitation in the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not covered by
this License; they are outside its scope. The act of running the Program is not
restricted, and the output from the Program is covered only if its contents
constitute a work based on the Program (independent of having been made by
running the Program). Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's source code as
you receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice and
disclaimer of warranty; keep intact all the notices that refer to this License
and to the absence of any warranty; and give any other recipients of the Program
a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at
your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus
forming a work based on the Program, and copy and distribute such
modifications or work under the terms of Section 1 above, provided that you
also meet all of these conditions:
a) You must cause the modified files to carry prominent notices stating that
you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or
in part contains or is derived from the Program or any part thereof, to be
licensed as a whole at no charge to all third parties under the terms of
this License.
c) If the modified program normally reads commands interactively when
run, you must cause it, when started running for such interactive use in the
most ordinary way, to print or display an announcement including an
appropriate copyright notice and a notice that there is no warranty (or
else, saying that you provide a warranty) and that users may redistribute
the program under these conditions, and telling the user how to view a copy
of this License. (Exception: if the Program itself is interactive but does
not normally print such an announcement, your work based on the Program is
not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable
sections of that work are not derived from the Program, and can be reasonably
considered independent and separate works in themselves, then this License,
and its terms, do not apply to those sections when you distribute them as
separate works. But when you distribute the same sections as part of a whole
which is a work based on the Program, the distribution of the whole must be on
the terms of this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your
rights to work written entirely by you; rather, the intent is to exercise the
right to control the distribution of derivative or collective works based on the
Program.
In addition, mere aggregation of another work not based on the Program with
the Program (or with a work based on the Program) on a volume of a storage or
distribution medium does not bring the other work under the scope of this
License.
3. You may copy and distribute the Program (or a work based on it, under
Section 2) in object code or executable form under the terms of Sections 1 and 2
above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source
code, which must be distributed under the terms of Sections 1 and 2 above
on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to
give any third party, for a charge no more than your cost of physically
performing source distribution, a complete machine-readable copy of the
corresponding source code, to be distributed under the terms of Sections 1
and 2 above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to
distribute corresponding source code. (This alternative is allowed only for
noncommercial distribution and only if you received the program in object
code or executable form with such an offer, in accord with Subsection b
above.)
The source code for a work means the preferred form of the work for making
modifications to it. For an executable work, complete source code means all the
source code for all modules it contains, plus any associated interface
definition files, plus the scripts used to control compilation and installation
of the executable. However, as a special exception, the source code distributed
need not include anything that is normally distributed (in either source or
binary form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component itself
accompanies the executable.
If distribution of executable or object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the source code
from the same place counts as distribution of the source code, even though third
parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as
expressly provided under this License. Any attempt otherwise to copy, modify,
sublicense or distribute the Program is void, and will automatically terminate
your rights under this License. However, parties who have received copies, or
rights, from you under this License will not have their licenses terminated so
long as such parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it.
However, nothing else grants you permission to modify or distribute the
Program or its derivative works. These actions are prohibited by law if you do
not accept this License. Therefore, by modifying or distributing the Program (or
any work based on the Program), you indicate your acceptance of this License to
do so, and all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program),
the recipient automatically receives a license from the original licensor to
copy, distribute or modify the Program subject to these terms and conditions.
You may not impose any further restrictions on the recipients' exercise of the
rights granted herein. You are not responsible for enforcing compliance by third
parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement
or for any other reason (not limited to patent issues), conditions are imposed
on you (whether by court order, agreement or otherwise) that contradict the
conditions of this License, they do not excuse you from the conditions of this
License. If you cannot distribute so as to satisfy simultaneously your
obligations under this License and any other pertinent obligations, then as a
consequence you may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by all those
who receive copies directly or indirectly through you, then the only way you
could satisfy both it and this License would be to refrain entirely from
distribution of the Program.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply and the
section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or
other property right claims or to contest validity of any such claims; this
section has the sole purpose of protecting the integrity of the free software
distribution system, which is implemented by public license practices. Many
people have made generous contributions to the wide range of software
distributed through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing to
distribute software through any other system and a licensee cannot impose that
choice.
This section is intended to make thoroughly clear what is believed to be a
consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain
countries either by patents or by copyrighted interfaces, the original copyright
holder who places the Program under this License may add an explicit
geographical distribution limitation excluding those countries, so that
distribution is permitted only in or among countries not thus excluded. In such
case, this License incorporates the limitation as if written in the body of this
License.
9. The Free Software Foundation may publish revised and/or new versions of the
General Public License from time to time. Such new versions will be similar in
spirit to the present version, but may differ in detail to address new problems
or concerns.
Each version is given a distinguishing version number. If the Program specifies
a version number of this License which applies to it and "any later version",
you have the option of following the terms and conditions either of that version
or of any later version published by the Free Software Foundation. If the
Program does not specify a version number of this License, you may choose any
version ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs
whose distribution conditions are different, write to the author to ask for
permission. For software which is copyrighted by the Free Software Foundation,
write to the Free Software Foundation; we sometimes make exceptions for this.
Our decision will be guided by the two goals of preserving the free status of
all derivatives of our free software and of promoting the sharing and reuse of
software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE
PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED
IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM
"AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL
ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE
PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL,
SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY
TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF
THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use
to the public, the best way to achieve this is to make it free software which
everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach
them to the start of each source file to most effectively convey the exclusion
of warranty; and each file should have at least the "copyright" line and a
pointer to where the full notice is found.
one line to give the program's name and an idea of what it does.
Copyright (C) yyyy name of author
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it
starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
type `show w'. This is free software, and you are welcome
to redistribute it under certain conditions; type `show c'
for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may be
called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your school,
if any, to sign a "copyright disclaimer" for the program, if necessary. Here is
a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright
interest in the program `Gnomovision'
(which makes passes at compilers) written
by James Hacker.
signature of Ty Coon, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider
it more useful to permit linking proprietary applications with the library. If
this is what you want to do, use the GNU Lesser General Public License instead
of this License.
244
README
@@ -3,7 +3,7 @@ README for Intel(R) Ethernet Switch Host Interface Driver
===============================================================================
February 23, 2017
===============================================================================
@@ -25,21 +25,18 @@ Important Notes
Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Server Adapters that support
SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, configure all SR-IOV enabled ports
for VLAN tagging. This configuration allows unexpected, and potentially
malicious, frames to be dropped.
Overview
--------
This driver supports kernel versions 2.6.32 and newer.
Driver information can be obtained using ethtool, lspci, and iproute2 ip.
@@ -69,16 +66,16 @@ bifurcation, only 1 port is available.
Building and Installation
-------------------------
To build a binary RPM* package of this driver, run 'rpmbuild -tb
fm10k-<x.x.x>.tar.gz', where <x.x.x> is the version number for the driver tar
file.
Note: For the build to work properly, the currently running kernel MUST match
the version and configuration of the installed kernel sources. If you have just
recompiled the kernel reboot the system before building.
Note: RPM functionality has only been tested in Red Hat distributions.
1. Move the base driver tar file to the directory of your choice. For
example, use '/home/username/fm10k' or '/usr/local/src/fm10k'.
@@ -94,7 +91,8 @@ NOTES:
4. Compile the driver module:
# make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/updates/drivers/net/ethernet/intel/fm10k/fm10k.ko
The install location listed above is the default location. This may differ
for various Linux distributions.
@@ -108,20 +106,25 @@ NOTES:
6. Assign an IP address to the interface by entering the following,
where ethX is the interface name that was shown in dmesg after modprobe:
ip address add <IP_address>/<netmask bits> dev ethX
NOTE: Before proceeding, ensure that netdev is enabled and that a
switch manager is running. To enable netdev, use one of the following
commands:
#ifconfig <netdev> up
or
#ip link set <netdev> up
7. Verify that the interface works. Enter the following, where IP_address
is the IP address for another machine on the same subnet as the interface
that is being tested:
ping <IP_address>
Note: For certain distributions like (but not limited to) RedHat Enterprise
Linux 7 and Ubuntu, once the driver is installed the initrd/initramfs file may
need to be updated to prevent the OS loading old versions of the fm10k driver.
The dracut utility may be used on RedHat distributions:
# dracut --force
For Ubuntu:
# update-initramfs -u
@@ -144,37 +147,52 @@ The default value for each parameter is generally the recommended setting,
unless otherwise noted.
RSS
---
Valid Range: 0-128
0 = Assign up to the lesser value of the number of CPUs or the number of queues
X = Assign X queues, where X is less than or equal to the maximum number of
queues (128 queues).
max_vfs
-------
This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs worth of virtual functions.
Valid Range: 0-64
NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x
and above, use sysfs to enable VFs. Also, for Red Hat distributions, this
parameter is only used on version 6.6 and older. For version 6.7 and newer, use
sysfs. For example:
#echo $num_vf_enabled > /sys/class/net/$dev/device/sriov_numvfs //enable VFs
#echo 0 > /sys/class/net/$dev/device/sriov_numvfs //disable VFs
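The two sysfs writes above can be sketched as a small helper. The `set_numvfs` function name and the eth0 argument are illustrative, not part of the driver; the helper writes only when the standard sriov_numvfs attribute is actually present and writable:

```shell
# Enable or disable VFs through sysfs (kernel 3.8.x and newer, or
# RHEL 6.7 and newer). set_numvfs and eth0 are illustrative names.
set_numvfs() {
    dev="$1"
    count="$2"
    path="/sys/class/net/${dev}/device/sriov_numvfs"
    if [ -w "$path" ]; then
        echo "$count" > "$path"
    else
        echo "cannot write ${path}; no such device or no SR-IOV support"
    fi
}

set_numvfs eth0 4   # enable 4 VFs
set_numvfs eth0 0   # disable VFs
```

On a host without the device the helper only reports the path it would have written, which makes it safe to dry-run.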
The parameters for the driver are referenced by position. Thus, if you have a
dual port adapter, or more than one adapter in your system, and want N virtual
functions per port, you must specify a number for each port with each parameter
separated by a comma. For example:
modprobe fm10k max_vfs=4
This will spawn 4 VFs on the first port.
modprobe fm10k max_vfs=2,4
This will spawn 2 VFs on the first port and 4 VFs on the second port.
NOTE: Caution must be used in loading the driver with these parameters.
Depending on your system configuration, number of slots, etc., it is impossible
to predict in all cases where the positions would be on the command line.
NOTE: Neither the device nor the driver control how VFs are mapped into config
space. Bus layout will vary by operating system. On operating systems that
support it, you can check sysfs to find the mapping.
NOTE: When SR-IOV mode is enabled, hardware VLAN filtering and VLAN tag
stripping/insertion will remain enabled. Please remove the old VLAN filter
before the new VLAN filter is added. For example,
ip link set eth0 vf 0 vlan 100 // set vlan 100 for VF 0
ip link set eth0 vf 0 vlan 0 // Delete vlan 100
ip link set eth0 vf 0 vlan 200 // set a new vlan 200 for VF 0
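The delete-then-add sequence above can be wrapped in a small helper. `replace_vf_vlan` and the ethX interface are illustrative names, and the ip commands run only when the interface actually exists:

```shell
# Replace a VF's VLAN filter: delete the old one (vlan 0) before adding
# the new one. replace_vf_vlan and ethX are illustrative names.
replace_vf_vlan() {
    pf="$1"; vf="$2"; vlan="$3"
    if [ -d "/sys/class/net/${pf}" ]; then
        ip link set "$pf" vf "$vf" vlan 0        # remove old VLAN filter
        ip link set "$pf" vf "$vf" vlan "$vlan"  # add the new VLAN filter
    else
        echo "${pf} not present; would move VF ${vf} to VLAN ${vlan}"
    fi
}

replace_vf_vlan ethX 0 200
```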
@@ -182,30 +200,28 @@ ip link set eth0 vf 0 vlan 200 // set a new vlan 200 for VF 0
Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Server Adapters that support
SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, configure all SR-IOV enabled ports
for VLAN tagging. This configuration allows unexpected, and potentially
malicious, frames to be dropped.
Configuring VLAN tagging on SR-IOV enabled adapter ports
--------------------------------------------------------
To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the
following command. The VLAN configuration should be done before the VF driver
is loaded or the VM is booted.
$ ip link set dev <PF netdev id> vf <id> vlan <vlan id>
For example, the following instructions will configure PF eth0 and the first VF
on VLAN 10.
$ ip link set dev eth0 vf 0 vlan 10
================================================================================
@@ -213,23 +229,20 @@ $ ip link set dev eth0 vf 0 vlan 10
Additional Features and Configurations
--------------------------------------
Configuring the Driver on Different Distributions
-------------------------------------------------
Configuring a network driver to load properly when the system is started is
distribution dependent. Typically, the configuration process involves adding an
alias line to /etc/modules.conf or /etc/modprobe.conf as well as editing other
system startup scripts and/or configuration files. Many popular Linux
distributions ship with tools to make these changes for you. To learn the
proper way to configure a network device for your system, refer to your
distribution documentation. If during this process you are asked for the driver
or module name, the name for the Base Driver is fm10k.
Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following:
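The exact command is elided by the diff above, but the idea can be sketched: the console log level lives in /proc/sys/kernel/printk, and level 8 shows all messages. Writing it requires root, so this snippet raises the level only when it can and otherwise just reports the current value:

```shell
# Show all kernel messages (level 8) on the console so driver link
# messages become visible. Requires root; falls back to reporting
# the current console log level (first field of printk).
PRINTK=/proc/sys/kernel/printk
if [ -w "$PRINTK" ]; then
    dmesg -n 8
fi
awk '{print "console log level:", $1}' "$PRINTK"
```

`dmesg -n 8` is equivalent to writing 8 into the first printk field.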
@@ -240,36 +253,39 @@ NOTE: This setting is not saved across reboots.
Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
Use the ifconfig command to increase the MTU size. For example, enter the
following where <x> is the interface number:
ifconfig eth<x> mtu 9000 up
Alternatively, you can use the ip command as follows:
ip link set mtu 9000 dev eth<x>
ip link set up dev eth<x>
This setting is not saved across reboots. The setting change can be made
permanent by adding 'MTU=9000' to the file:
/etc/sysconfig/network-scripts/ifcfg-eth<x> for RHEL or to the file
/etc/sysconfig/network/<config_file> for SLES.
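As a sketch, the persistent change on RHEL amounts to one extra line in the interface's ifcfg file. The snippet below writes a local example file rather than touching /etc/sysconfig, and eth0 is a placeholder interface name:

```shell
# Example ifcfg fragment carrying MTU=9000. Written to a local file so
# the real /etc/sysconfig/network-scripts/ifcfg-eth0 is not touched.
CFG=ifcfg-eth0.example
cat > "$CFG" <<'EOF'
DEVICE=eth0
ONBOOT=yes
MTU=9000
EOF
grep '^MTU=' "$CFG"   # prints MTU=9000
rm -f "$CFG"
```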
NOTE: The maximum MTU setting for Jumbo Frames is 15342. This value coincides
with the maximum Jumbo Frames size of 15364 bytes.
NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.
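The 22-byte gap between the two figures is the on-wire framing around the MTU payload; the breakdown below (Ethernet header + VLAN tag + FCS) is an inference, not stated in this README:

```shell
# 15342 (max MTU) + assumed L2 overhead = 15364 (max frame size).
MTU=15342
ETH_HDR=14   # destination MAC + source MAC + EtherType
VLAN_TAG=4   # optional 802.1Q tag
FCS=4        # frame check sequence
echo $((MTU + ETH_HDR + VLAN_TAG + FCS))   # prints 15364
```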
ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
http://ftp.kernel.org/pub/software/network/ethtool/
Supported ethtool Commands and Options for Filtering
----------------------------------------------------
-n --show-nfc
Retrieves the receive network flow classification configurations.
@@ -279,7 +295,8 @@ rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6
-N --config-nfc
Configures the receive network flow classification.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r...
Configures the hash options for the specified network traffic type.
udp4 UDP over IPv4
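For example, the following would hash UDP-over-IPv4 flows on source/destination IP addresses and L4 ports; ethX is a placeholder interface name and the command is skipped when ethtool or the interface is unavailable:

```shell
# Hash udp4 traffic on src IP (s), dst IP (d), src port (f) and
# dst port (n). ethX is a placeholder; replace with a real interface.
IFACE=ethX
if command -v ethtool >/dev/null 2>&1 && [ -d "/sys/class/net/$IFACE" ]; then
    ethtool -N "$IFACE" rx-flow-hash udp4 sdfn
else
    echo "skipping: ethtool or $IFACE not available"
fi
```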
@@ -298,7 +315,6 @@ https://www.linuxfoundation.org/collaborate/workgroups/networking/napi
Flow Control
------------
The Intel(R) Ethernet Switch Host Interface Driver does not support Flow
Control. It will not send pause frames. This may result in dropped frames.
@@ -307,27 +323,28 @@ VXLAN Overlay HW Offloading
---------------------------
Virtual Extensible LAN (VXLAN) allows you to extend an L2 network over an L3
network, which may be useful in a virtualized or cloud environment. Some
Intel(R) Ethernet Network devices perform VXLAN processing, offloading it from
the operating system. This reduces CPU utilization.
VXLAN offloading is controlled by the tx and rx checksum offload options
provided by ethtool. That is, if tx checksum offload is enabled, and the
adapter has the capability, VXLAN offloading is also enabled.
If rx checksum offload is enabled, then the VXLAN packets rx checksum will be
offloaded, unless the command #ethtool -K $INTERFACE_NAME rx off was used to
specifically disable the VXLAN rx offload.
VXLAN Overlay HW Offloading is enabled by default. To view and configure VXLAN
offload on a VXLAN-overlay offload enabled device, use the following command:
# ethtool -k ethX
(This command displays the offloads and their current state.)
For more information on configuring your network for overlay HW offloading
support, refer to the Intel Technical Brief, "Creating Overlay Networks Using
Intel Ethernet Converged Network Adapters" (Intel Networking Division, August
2013):
http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/
overlay-networks-using-converged-network-adapters-brief.pdf
@@ -339,6 +356,21 @@ overlay-networks-using-converged-network-adapters-brief.pdf
Known Issues/Troubleshooting
----------------------------
FUM_BAD_VF_QACCESS error on port reset
--------------------------------------
A FUM_BAD_VF_QACCESS error may be written to the message buffer when a command
or application triggers a reset on the port's physical function (PF). When the
PF is reset, any bound virtual functions (VFs) can no longer access their
queues. This behavior is expected. No user intervention is required. After the
PF reset is complete, the VFs will be able to access their queues normally.
Traffic and pings fail to pass after switch reset command issued
----------------------------------------------------------------
Resetting the switch may cause a failure to pass traffic and pings, with a
"init_hw failed: -4" message. To fix this, reload the VF driver or unbind
and rebind the VF device.
Packets dropped when issuing ping with very large payload
---------------------------------------------------------
@@ -355,20 +387,19 @@ To fix this, ensure the CPU does not enter a deep C-state by using one of
the following methods:
- Change your BIOS to disable the lowest C-states
- Change the CPU governor
- Use /dev/cpu_dma_latency (see the Linux Kernel's Documentation folder)
Driver cannot bind to PF when VM assigned to PF is started
----------------------------------------------------------
In a virtualization setup, if the fm10k driver is not installed on the host OS
and a VM that has been assigned one of the PCI PF host interface devices is
started, the fm10k driver can no longer bind to the PCI PF host interface
devices on the host OS.
To correct this, manually unbind the bus driver from the PF host interface
device (or stop the VM) and then manually bind the fm10k driver to the PF host
interface device. Alternatively, you can add "fm10k" to the device's
driver_override entry in the /sys filesystem to prevent the bus driver from
binding to the PF host interface device in the first place.
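The driver_override workaround can be sketched as below. The PCI address 0000:03:00.0 is purely illustrative (find yours with lspci), and the write is attempted only when the device is actually present:

```shell
# Point driver_override at fm10k so the bus driver cannot claim the PF
# host interface device. BDF is a hypothetical PCI address.
BDF="0000:03:00.0"
OVERRIDE="/sys/bus/pci/devices/${BDF}/driver_override"
if [ -w "$OVERRIDE" ]; then
    echo fm10k > "$OVERRIDE"
    echo "driver_override set for ${BDF}"
else
    echo "device ${BDF} not present; nothing changed"
fi
```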
@@ -379,13 +410,13 @@ binding to the PF host interface device in the first place.
Support
-------
For general information, go to the Intel support website at:
http://www.intel.com/support/
or the Intel Wired Networking project hosted by Sourceforge at:
http://sourceforge.net/projects/e1000
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-devel@lists.sf.net.
================================================================================
@@ -393,14 +424,13 @@ issue to e1000-devel@lists.sf.net.
License
-------
This program is free software; you can redistribute it and/or modify it under
the terms and conditions of the GNU General Public License, version 2, as
published by the Free Software Foundation.
This program is distributed in the hope it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 51 Franklin
@@ -409,16 +439,14 @@ St - Fifth Floor, Boston, MA 02110-1301 USA.
The full GNU General Public License is included in this distribution in the
file called "COPYING".
Copyright(c) 2015-2017 Intel Corporation.
================================================================================
Trademarks
----------
Intel and Itanium are trademarks or registered trademarks of Intel Corporation
or its subsidiaries in the United States and/or other countries.
* Other names and brands may be claimed as the property of others.
68
SUMS
@@ -1,34 +1,34 @@
12724 3 fm10k-0.26.1/fm10k.7
64801 9 fm10k-0.26.1/fm10k.spec
51187 1 fm10k-0.26.1/pci.updates
38058 6 fm10k-0.26.1/scripts/set_irq_affinity
03644 17 fm10k-0.26.1/README
12529 18 fm10k-0.26.1/COPYING
06320 36 fm10k-0.26.1/src/fm10k_ethtool.c
32480 11 fm10k-0.26.1/src/common.mk
32173 18 fm10k-0.26.1/src/fm10k_iov.c
54788 2 fm10k-0.26.1/src/fm10k_osdep.h
30476 54 fm10k-0.26.1/src/fm10k_netdev.c
52109 24 fm10k-0.26.1/src/fm10k_tlv.c
64411 19 fm10k-0.26.1/src/fm10k.h
21492 57 fm10k-0.26.1/src/fm10k_pf.c
58291 11 fm10k-0.26.1/src/fm10k_mbx.h
63501 4 fm10k-0.26.1/src/fm10k_pf.h
09022 16 fm10k-0.26.1/src/fm10k_vf.c
31576 25 fm10k-0.26.1/src/fm10k_type.h
46221 7 fm10k-0.26.1/src/fm10k_tlv.h
37658 57 fm10k-0.26.1/src/fm10k_main.c
64132 62 fm10k-0.26.1/src/fm10k_mbx.c
51022 6 fm10k-0.26.1/src/fm10k_uio.c
62868 5 fm10k-0.26.1/src/fm10k_dcbnl.c
44685 1 fm10k-0.26.1/src/Module.supported
62241 77 fm10k-0.26.1/src/fm10k_pci.c
08460 2 fm10k-0.26.1/src/fm10k_vf.h
13974 49 fm10k-0.26.1/src/kcompat.c
12716 6 fm10k-0.26.1/src/fm10k_debugfs.c
06386 168 fm10k-0.26.1/src/kcompat.h
43700 7 fm10k-0.26.1/src/fm10k_param.c
31284 2 fm10k-0.26.1/src/fm10k_ies.c
64779 15 fm10k-0.26.1/src/fm10k_common.c
37650 1 fm10k-0.26.1/src/fm10k_common.h
26994 6 fm10k-0.26.1/src/Makefile
41
fm10k.7
@@ -5,7 +5,7 @@
.\" * Other names and brands may be claimed as the property of others.
.\"
.
.TH fm10k 1 "February 23, 2017"
.SH NAME
fm10k \- This file describes the Intel(R) Ethernet Switch
Host Interface Driver
@@ -14,13 +14,11 @@ Host Interface Driver
modprobe fm10k [<option>=<VAL1>,<VAL2>,...]
.PD 1v
.SH DESCRIPTION
This driver is intended for \fB2.6.32\fR and newer kernels. A version of the driver may already be included by your distribution and/or the kernel.org kernel.
This driver includes support for any 64 bit Linux supported system, including Itanium(R)2, x86_64, PPC64, ARM, etc.
.LP
This driver is only supported as a loadable module at this time. Intel is not supplying patches against the kernel source to allow for static linking of the drivers.
For questions related to hardware requirements, refer to the documentation
@@ -28,21 +26,19 @@ supplied with your Intel adapter. All hardware requirements listed apply to
use with Linux.
.SH Jumbo Frames
.LP
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU) to a value larger than the default value of 1500.
Use the ifconfig command to increase the MTU size. For example, enter the following where <x> is the interface number:
ifconfig eth<x> mtu 9000 up
Alternatively, you can use the ip command as follows:
ip link set mtu 9000 dev eth<x>
ip link set up dev eth<x>
.LP
NOTE: The maximum MTU setting for Jumbo Frames is 15342. This value coincides with the maximum Jumbo Frames size of 15364 bytes.
NOTE: This driver will attempt to use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets.
See the section "Jumbo Frames" in the Readme.
.LP
.B RSS
@@ -52,19 +48,16 @@ See the section "Jumbo Frames" in the Readme.
0 = Assign up to the lesser value of the number of CPUs or the number of queues
.IP
X = Assign X queues, where X is less than or equal to the maximum number of
queues (128 queues).
.IP
.IP
.SH SUPPORT
.LP
For additional information regarding building and installation, see the
README
included with the driver.
For general information, go to the Intel support website at:
.B http://www.intel.com/support/
.LP
If an issue is identified with the released source code on a supported kernel with a supported adapter, email the specific information related to the issue to e1000-devel@lists.sf.net.
.LP
@@ -1,19 +1,16 @@
Name: fm10k
Summary: Intel(R) Ethernet Switch Host Interface Driver
Version: 0.26.1
Release: 1
Source: %{name}-%{version}.tar.gz
Vendor: Intel Corporation
License: GPL-2.0
ExclusiveOS: linux
Group: System Environment/Kernel
Provides: %{name}
URL: http://support.intel.com
BuildRoot: %{_tmppath}/%{name}-%{version}-root
# do not generate debugging packages by default - newer versions of rpmbuild
# may instead need:
#%define debug_package %{nil}
%global debug_package %{nil}
# macros for finding system files to update at install time (pci.ids, pcitable)
%define find() %(for f in %*; do if [ -e $f ]; then echo $f; break; fi; done)
%define _pciids /usr/share/pci.ids /usr/share/hwdata/pci.ids
@@ -33,13 +30,12 @@ make -C src clean
make -C src
%install
make -C src INSTALL_MOD_PATH=%{buildroot} MANDIR=%{_mandir} modules_install mandocs_install
# Remove modules files that we do not want to include
find %{buildroot}/lib/modules/ -name 'modules.*' -exec rm -f {} \;
cd %{buildroot}
find lib -name "fm10k.ko" \
-fprintf %{_builddir}/%{name}-%{version}/file.list "/%p\n"
%clean
@@ -54,61 +50,16 @@ rm -rf %{buildroot}
%doc pci.updates
%post
FL="%{_docdir}/%{name}-%{version}/file.list
%{_docdir}/%{name}/file.list"
FL=$(for d in $FL ; do if [ -e $d ]; then echo $d; break; fi; done)
if [ -d /usr/local/lib/%{name} ]; then
rm -rf /usr/local/lib/%{name}
fi
if [ -d /usr/local/share/%{name} ]; then
rm -rf /usr/local/share/%{name}
fi
# Save old drivers (aka .o and .o.gz)
mkdir /usr/local/share/%{name}
cp --parents %{pciids} /usr/local/share/%{name}/
echo "original pci.ids saved in /usr/local/share/%{name}";
if [ "%{pcitable}" != "/dev/null" ]; then
cp --parents %{pcitable} /usr/local/share/%{name}/
echo "original pcitable saved in /usr/local/share/%{name}";
fi
for k in $(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\-]*\).*/\1/' $FL) ;
do
d_drivers=/lib/modules/$k
d_usr=/usr/local/share/%{name}/$k
mkdir -p $d_usr
cd $d_drivers; find . -name %{name}.*o -exec cp --parents {} $d_usr \; -exec rm -f {} \;
cd $d_drivers; find . -name %{name}_*.*o -exec cp --parents {} $d_usr \; -exec rm -f {} \;
cd $d_drivers; find . -name %{name}.*o.gz -exec cp --parents {} $d_usr \; -exec rm -f {} \;
cd $d_drivers; find . -name %{name}_*.*o.gz -exec cp --parents {} $d_usr \; -exec rm -f {} \;
cp --parents %{pciids} /usr/local/share/%{name}/
if [ "%{pcitable}" != "/dev/null" ]; then
cp --parents %{pcitable} /usr/local/share/%{name}/
fi
done
# Add driver link
for f in $(sed 's/\.new$//' $FL) ; do
ln -f $f.new $f
done
# Check if kernel version rpm was built on IS the same as running kernel
BK_LIST=$(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\-]*\).*/\1/' $FL)
MATCH=no
for i in $BK_LIST
do
if [ $(uname -r) == $i ] ; then
MATCH=yes
break
fi
done
if [ $MATCH == no ] ; then
echo -n "WARNING: Running kernel is $(uname -r). "
echo -n "RPM supports kernels ( "
for i in $BK_LIST
do
echo -n "$i "
done
echo ")"
fi
LD="%{_docdir}/%{name}";
if [ -d %{_docdir}/%{name}-%{version} ]; then
@@ -370,31 +321,62 @@ END
mv -f $LD/pci.ids.new %{pciids}
if [ "%{pcitable}" != "/dev/null" ]; then
mv -f $LD/pcitable.new %{pcitable}
mv -f $LD/pcitable.new %{pcitable}
fi
uname -r | grep BOOT || /sbin/depmod -a > /dev/null 2>&1 || true
%preun
# If doing RPM un-install
if [ $1 -eq 0 ] ; then
FL="%{_docdir}/%{name}-%{version}/file.list
%{_docdir}/%{name}/file.list"
FL=$(for d in $FL ; do if [ -e $d ]; then echo $d; break; fi; done)
# Remove driver link
for f in $(sed 's/\.new$//' $FL) ; do
rm -f $f
done
# Restore old drivers
if [ -d /usr/local/share/%{name} ]; then
cd /usr/local/share/%{name}; find . -name '%{name}.*o*' -exec cp --parents {} /lib/modules/ \;
cd /usr/local/share/%{name}; find . -name '%{name}_*.*o*' -exec cp --parents {} /lib/modules/ \;
rm -rf /usr/local/share/%{name}
if which dracut >/dev/null 2>&1; then
echo "Updating initramfs with dracut..."
if dracut --force ; then
echo "Successfully updated initramfs."
else
echo "Failed to update initramfs."
echo "You must update your initramfs image for changes to take place."
exit -1
fi
elif which mkinitrd >/dev/null 2>&1; then
echo "Updating initrd with mkinitrd..."
if mkinitrd; then
echo "Successfully updated initrd."
else
echo "Failed to update initrd."
echo "You must update your initrd image for changes to take place."
exit -1
fi
else
echo "Unable to determine utility to update initrd image."
echo "You must update your initrd manually for changes to take place."
exit -1
fi
%preun
rm -rf /usr/local/share/%{name}
%postun
uname -r | grep BOOT || /sbin/depmod -a > /dev/null 2>&1 || true
if which dracut >/dev/null 2>&1; then
echo "Updating initramfs with dracut..."
if dracut --force ; then
echo "Successfully updated initramfs."
else
echo "Failed to update initramfs."
echo "You must update your initramfs image for changes to take place."
exit -1
fi
elif which mkinitrd >/dev/null 2>&1; then
echo "Updating initrd with mkinitrd..."
if mkinitrd; then
echo "Successfully updated initrd."
else
echo "Failed to update initrd."
echo "You must update your initrd image for changes to take place."
exit -1
fi
else
echo "Unable to determine utility to update initrd image."
echo "You must update your initrd manually for changes to take place."
exit -1
fi


@@ -1,25 +1,5 @@
################################################################################
#
# Intel(R) Ethernet Switch Host Interface Driver
# Copyright(c) 2013 - 2016 Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# The full GNU General Public License is included in this distribution in
# the file called "COPYING".
#
# Contact Information:
# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
#
################################################################################
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
# updates for the system pci.ids file
#
@@ -29,7 +9,9 @@
# (numerical order).
#
8086 Intel Corporation
15a4 Ethernet Switch FM10000 Host Interface
15a4 Ethernet Switch FM10000 Host Interface
15a5 Ethernet Switch FM10000 Host Virtual Interface
15d0 Ethernet SDI Adapter FM10420-100GbE-QDA2
15d0 Ethernet SDI Adapter
8086 0001 Ethernet SDI Adapter FM10420-100GbE-QDA2
8086 0002 Ethernet SDI Adapter FM10840-MTP2
15d5 Ethernet SDI Adapter FM10420-25GbE-DA2


@@ -1,25 +1,5 @@
################################################################################
#
# Intel(R) Ethernet Switch Host Interface Driver
# Copyright(c) 2013 - 2016 Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# The full GNU General Public License is included in this distribution in
# the file called "COPYING".
#
# Contact Information:
# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
#
################################################################################
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
ifneq ($(KERNELRELEASE),)
# kbuild part of makefile
@@ -54,41 +34,28 @@ else # ifneq($(KERNELRELEASE),)
DRIVER := fm10k
ifeq (,$(wildcard common.mk))
$(error Cannot find common.mk build rules)
else
include common.mk
endif
# If the user just wants to print the help output, don't include common.mk or
# perform any other checks. This ensures that running "make help" will always
# work even if kernel-devel is not installed, or if the common.mk fails under
# any other error condition.
ifneq ($(MAKECMDGOALS),help)
include common.mk
# fm10k does not support building on kernels older than 2.6.32
$(call minimum_kver_check,2,6,32)
endif
############################
# Module Install Directory #
############################
# Default to using updates/drivers/net/ethernet/intel/ path, since depmod since
# v3.1 defaults to checking updates folder first, and only checking kernels/
# and extra afterwards. We use updates instead of kernel/* due to desire to
# prevent over-writing built-in modules files.
INSTALL_MOD_DIR ?= updates/drivers/net/ethernet/intel/${DRIVER}
######################
# Kernel Build Macro #
######################
# kernel build function
# ${1} is the kernel build target
# ${2} may contain any extra rules to pass directly to the sub-make process
kernelbuild = ${MAKE} $(if ${GCC_I_SYS},CC:="${GCC_I_SYS}") \
$(if ${EXTRA_CFLAGS},ccflags-y:="${EXTRA_CFLAGS}") \
-C ${KSRC} \
$(if ${KOBJ},O:=${KOBJ}) \
CONFIG_${DRIVER_UPPERCASE}=m \
M:=$(call readlink,.) \
$(if ${INSTALL_MOD_PATH},INSTALL_MOD_PATH:=${INSTALL_MOD_PATH}) \
INSTALL_MOD_DIR:=${INSTALL_MOD_DIR} \
${2} ${1};
# Command to update initramfs or display a warning message
ifeq (${cmd_initrd},)
define cmd_initramfs
@echo "Unable to update initramfs. You may need to do this manaully."
endef
else
define cmd_initramfs
@echo "Updating initramfs..."
-@$(call cmd_initrd)
endef
endif
###############
# Build rules #
@@ -116,7 +83,7 @@ sparse: clean
# Run coccicheck static analyzer
ccc: clean
@+$(call kernelbuild,modules,coccicheck MODE=report))
@+$(call kernelbuild,modules,coccicheck MODE=report)
# Build manfiles
manfile:
@@ -127,44 +94,77 @@ clean:
@+$(call kernelbuild,clean)
@-rm -rf *.${MANSECTION}.gz *.ko
# Install the modules and manpage
install: default manfile
install -D -m 644 ${DRIVER}.${MANSECTION}.gz ${INSTALL_MOD_PATH}${MANDIR}/man${MANSECTION}/${DRIVER}.${MANSECTION}.gz
@$(call kernelbuild,modules_install)
$(call cmd_depmod)
mandocs_install: manfile
@echo "Copying manpages..."
@install -D -m 644 ${DRIVER}.${MANSECTION}.gz ${INSTALL_MOD_PATH}${MANDIR}/man${MANSECTION}/${DRIVER}.${MANSECTION}.gz
uninstall:
rm -f ${INSTALL_MOD_PATH}/lib/modules/${KVER}/${INSTALL_MOD_DIR}/${DRIVER}.ko;
# Install kernel module files. This target is called by the RPM specfile when
# generating binary RPMs, and is not expected to modify files outside of the
# build root. Thus, it must not update initramfs, or run depmod.
modules_install: default
@echo "Installing modules..."
@+$(call kernelbuild,modules_install)
# After installing all the files, perform necessary work to ensure the system
# will use the new modules. This includes running depmod to update module
# dependencies and updating the initramfs image in case the module is loaded
# during early boot.
install: modules_install mandocs_install
$(call cmd_depmod)
$(call cmd_initramfs)
mandocs_uninstall:
if [ -e ${INSTALL_MOD_PATH}${MANDIR}/man${MANSECTION}/${DRIVER}.${MANSECTION}.gz ] ; then \
rm -f ${INSTALL_MOD_PATH}${MANDIR}/man${MANSECTION}/${DRIVER}.${MANSECTION}.gz ; \
fi;
# Remove installed module files. This target is called by the RPM specfile when
# generating binary RPMs, and is not expected to modify files outside of the
# build root. Thus, it must not update the initramfs image or run depmod.
modules_uninstall:
rm -f ${INSTALL_MOD_PATH}/lib/modules/${KVER}/${INSTALL_MOD_DIR}/${DRIVER}.ko;
# After uninstalling all the files, perform necessary work to restore the
# system back to using the default kernel modules. This includes running depmod
# to update module dependencies and updating the initramfs image.
uninstall: modules_uninstall mandocs_uninstall
$(call cmd_depmod)
$(call cmd_initramfs)
########
# Help #
########
help:
@echo 'Cleaning targets:'
@echo ' clean - Clean files generated by kernel module build'
@echo 'Build targets:'
@echo ' default - Build module(s) with standard verbosity'
@echo ' noisy - Build module(s) with V=1 verbosity -- very noisy'
@echo ' silent - Build module(s), squelching all output'
@echo ''
@echo 'Static Analysis:'
@echo ' checkwarnings - Clean, then build module(s) with W=1 warnings enabled'
@echo ' sparse - Clean, then check module(s) using sparse'
@echo ' ccc - Clean, then check module(s) using coccicheck'
@echo ''
@echo 'Cleaning targets:'
@echo ' clean - Clean files generated by kernel module build'
@echo ''
@echo 'Other targets:'
@echo ' manfile - Generate a gzipped manpage'
@echo ' install - Build then install the module(s) and manpage'
@echo ' uninstall - Uninstall the module(s) and manpage'
@echo ' modules_install - install the module(s) only'
@echo ' mandocs_install - install the manpage only'
@echo ' install - Build then install the module(s) and manpage, and update initramfs'
@echo ' modules_uninstall - uninstall the module(s) only'
@echo ' mandocs_uninstall - uninstall the manpage only'
@echo ' uninstall - Uninstall the module(s) and manpage, and update initramfs'
@echo ' help - Display this help message'
@echo ''
@echo 'Variables:'
@echo ' LINUX_VERSION - Debug tool to force kernel LINUX_VERSION_CODE. Use at your own risk.'
@echo ' W=N - Kernel variable for setting warning levels'
@echo ' V=N - Kernel variable for setting output verbosity'
@echo ' INSTALL_MOD_PATH - Add prefix for the module and manpage installation path'
@echo ' INSTALL_MOD_DIR - Use module directory other than updates/drivers/net/ethernet/intel/${DRIVER}'
@echo ' KSRC - Specifies the full path to the kernel tree to build against'
@echo ' Other variables may be available for tuning make process, see'
@echo ' Kernel Kbuild documentation for more information'


@@ -1,27 +1,15 @@
################################################################################
#
# Intel(R) Ethernet Switch Host Interface Driver
# Copyright(c) 2013 - 2016 Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# The full GNU General Public License is included in this distribution in
# the file called "COPYING".
#
# Contact Information:
# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
#
################################################################################
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
# common Makefile rules useful for out-of-tree Linux driver builds
#
# Usage: include common.mk
#
# After including, you probably want to add a minimum_kver_check call
#
# Required Variables:
# DRIVER
# -- Set to the lowercase driver name
#####################
# Helpful functions #
@@ -45,14 +33,12 @@ cmd_depmod = /sbin/depmod $(if ${SYSTEM_MAP_FILE},-e -F ${SYSTEM_MAP_FILE}) \
-a ${KVER}
################
# initrd Macro #
# dracut Macro #
################
cmd_initrd := $(shell \
if which dracut > /dev/null 2>&1 ; then \
echo "dracut --force"; \
elif which mkinitrd > /dev/null 2>&1 ; then \
echo "mkinitrd"; \
elif which update-initramfs > /dev/null 2>&1 ; then \
echo "update-initramfs -u"; \
fi )
@@ -69,8 +55,8 @@ endif
# Kernel Search Path
# All the places we look for kernel source
KSP := /lib/modules/${BUILD_KERNEL}/build \
/lib/modules/${BUILD_KERNEL}/source \
KSP := /lib/modules/${BUILD_KERNEL}/source \
/lib/modules/${BUILD_KERNEL}/build \
/usr/src/linux-${BUILD_KERNEL} \
/usr/src/linux-$(${BUILD_KERNEL} | sed 's/-.*//') \
/usr/src/kernel-headers-${BUILD_KERNEL} \
@@ -174,6 +160,30 @@ ifneq (${LINUX_VERSION_CODE},)
EXTRA_CFLAGS += -DLINUX_VERSION_CODE=${LINUX_VERSION_CODE}
endif
# Determine SLE_LOCALVERSION_CODE for SuSE SLE >= 11 (needed by kcompat)
# This assumes SuSE will continue setting CONFIG_LOCALVERSION to the string
# appended to the stable kernel version on which their kernel is based with
# additional versioning information (up to 3 numbers), a possible abbreviated
# git SHA1 commit id and a kernel type, e.g. CONFIG_LOCALVERSION=-1.2.3-default
# or CONFIG_LOCALVERSION=-999.gdeadbee-default
ifeq (1,$(shell ${CC} -E -dM ${CONFIG_FILE} 2> /dev/null |\
grep -m 1 CONFIG_SUSE_KERNEL | awk '{ print $$3 }'))
ifneq (10,$(shell ${CC} -E -dM ${CONFIG_FILE} 2> /dev/null |\
grep -m 1 CONFIG_SLE_VERSION | awk '{ print $$3 }'))
LOCALVERSION := $(shell ${CC} -E -dM ${CONFIG_FILE} 2> /dev/null |\
grep -m 1 CONFIG_LOCALVERSION | awk '{ print $$3 }' |\
cut -d'-' -f2 | sed 's/\.g[[:xdigit:]]\{7\}//')
LOCALVER_A := $(shell echo ${LOCALVERSION} | cut -d'.' -f1)
LOCALVER_B := $(shell echo ${LOCALVERSION} | cut -s -d'.' -f2)
LOCALVER_C := $(shell echo ${LOCALVERSION} | cut -s -d'.' -f3)
SLE_LOCALVERSION_CODE := $(shell expr ${LOCALVER_A} \* 65536 + \
0${LOCALVER_B} \* 256 + 0${LOCALVER_C})
EXTRA_CFLAGS += -DSLE_LOCALVERSION_CODE=${SLE_LOCALVERSION_CODE}
endif
endif
EXTRA_CFLAGS += ${CFLAGS_EXTRA}
# get the kernel version - we use this to find the correct install path
@@ -230,3 +240,59 @@ ifeq (,${MANDIR})
# fallback to /usr/man
MANDIR := /usr/man
endif
####################
# CCFLAGS variable #
####################
# set correct CCFLAGS variable for kernels older than 2.6.24
ifeq (0,$(shell [ ${KVER_CODE} -lt $(call get_kvercode,2,6,24) ]; echo $$?))
CCFLAGS_VAR := EXTRA_CFLAGS
else
CCFLAGS_VAR := ccflags-y
endif
#################
# KBUILD_OUTPUT #
#################
# Only set KBUILD_OUTPUT if the real paths of KOBJ and KSRC differ
ifneq ($(call readlink,${KSRC}),$(call readlink,${KOBJ}))
export KBUILD_OUTPUT ?= ${KOBJ}
endif
############################
# Module Install Directory #
############################
# Default to using updates/drivers/net/ethernet/intel/ path, since depmod since
# v3.1 defaults to checking updates folder first, and only checking kernels/
# and extra afterwards. We use updates instead of kernel/* due to desire to
# prevent over-writing built-in modules files.
export INSTALL_MOD_DIR ?= updates/drivers/net/ethernet/intel/${DRIVER}
######################
# Kernel Build Macro #
######################
# kernel build function
# ${1} is the kernel build target
# ${2} may contain any extra rules to pass directly to the sub-make process
#
# This function is expected to be executed by
# @+$(call kernelbuild,<target>,<extra parameters>)
# from within a Makefile recipe.
#
# The following variables are expected to be defined for its use:
# GCC_I_SYS -- if set it will enable use of gcc-i-sys.sh wrapper to use -isystem
# CCFLAGS_VAR -- the CCFLAGS variable to set extra CFLAGS
# EXTRA_CFLAGS -- a set of extra CFLAGS to pass into the ccflags-y variable
# KSRC -- the location of the kernel source tree to build against
# DRIVER_UPPERCASE -- the uppercase name of the kernel module, set from DRIVER
#
kernelbuild = ${MAKE} $(if ${GCC_I_SYS},CC="${GCC_I_SYS}") \
${CCFLAGS_VAR}="${EXTRA_CFLAGS}" \
-C "${KSRC}" \
CONFIG_${DRIVER_UPPERCASE}=m \
M="${CURDIR}" \
${2} ${1}


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_H_
#define _FM10K_H_
@@ -71,14 +54,16 @@ enum fm10k_ring_state_t {
__FM10K_TX_DETECT_HANG,
__FM10K_HANG_CHECK_ARMED,
__FM10K_TX_XPS_INIT_DONE,
/* This must be last and is used to calculate BITMAP size */
__FM10K_TX_STATE_SIZE__,
};
#define check_for_tx_hang(ring) \
test_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
test_bit(__FM10K_TX_DETECT_HANG, (ring)->state)
#define set_check_for_tx_hang(ring) \
set_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
set_bit(__FM10K_TX_DETECT_HANG, (ring)->state)
#define clear_check_for_tx_hang(ring) \
clear_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
clear_bit(__FM10K_TX_DETECT_HANG, (ring)->state)
struct fm10k_tx_buffer {
struct fm10k_tx_desc *next_to_watch;
@@ -134,7 +119,7 @@ struct fm10k_ring {
struct fm10k_rx_buffer *rx_buffer;
};
u32 __iomem *tail;
unsigned long state;
DECLARE_BITMAP(state, __FM10K_TX_STATE_SIZE__);
dma_addr_t dma; /* phys. address of descriptor ring */
unsigned int size; /* length in bytes */
@@ -251,43 +236,93 @@ struct fm10k_iov_data {
struct fm10k_vf_info vf_info[0];
};
#define fm10k_vxlan_port_for_each(vp, intfc) \
list_for_each_entry(vp, &(intfc)->vxlan_port, list)
struct fm10k_vxlan_port {
struct fm10k_udp_port {
struct list_head list;
sa_family_t sa_family;
__be16 port;
};
enum fm10k_macvlan_request_type {
FM10K_UC_MAC_REQUEST,
FM10K_MC_MAC_REQUEST,
FM10K_VLAN_REQUEST
};
struct fm10k_macvlan_request {
enum fm10k_macvlan_request_type type;
struct list_head list;
union {
struct fm10k_mac_request {
u8 addr[ETH_ALEN];
u16 glort;
u16 vid;
} mac;
struct fm10k_vlan_request {
u32 vid;
u8 vsi;
} vlan;
};
bool set;
};
/* one work queue for entire driver */
extern struct workqueue_struct *fm10k_workqueue;
/* The following enumeration contains flags which indicate or enable modified
* driver behaviors. To avoid race conditions, the flags are stored in
* a BITMAP in the fm10k_intfc structure. The BITMAP should be accessed using
* atomic *_bit() operations.
*/
enum fm10k_flags_t {
FM10K_FLAG_RESET_REQUESTED,
FM10K_FLAG_RSS_FIELD_IPV4_UDP,
FM10K_FLAG_RSS_FIELD_IPV6_UDP,
FM10K_FLAG_SWPRI_CONFIG,
#ifndef IFF_RXFH_CONFIGURED
FM10K_FLAG_RXFH_CONFIGURED,
#endif
FM10K_FLAG_UIO_REGISTERED,
FM10K_FLAG_IES_MODE,
/* __FM10K_FLAGS_SIZE__ is used to calculate the size of
* interface->flags and must be the last value in this
* enumeration.
*/
__FM10K_FLAGS_SIZE__
};
enum fm10k_state_t {
__FM10K_RESETTING,
__FM10K_RESET_DETACHED,
__FM10K_RESET_SUSPENDED,
__FM10K_DOWN,
__FM10K_SERVICE_SCHED,
__FM10K_SERVICE_REQUEST,
__FM10K_SERVICE_DISABLE,
__FM10K_MACVLAN_SCHED,
__FM10K_MACVLAN_REQUEST,
__FM10K_MACVLAN_DISABLE,
__FM10K_LINK_DOWN,
__FM10K_UPDATING_STATS,
/* This value must be last and determines the BITMAP size */
__FM10K_STATE_SIZE__,
};
struct fm10k_intfc {
#ifdef HAVE_VLAN_RX_REGISTER
/* vlgrp must be first member of structure */
struct vlan_group *vlgrp;
#else
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
#endif
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
struct net_device *netdev;
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
struct fm10k_l2_accel *l2_accel; /* pointer to L2 acceleration list */
#endif
struct pci_dev *pdev;
unsigned long state;
DECLARE_BITMAP(state, __FM10K_STATE_SIZE__);
/* Access flag values using atomic *_bit() operations */
DECLARE_BITMAP(flags, __FM10K_FLAGS_SIZE__);
u32 flags;
#define FM10K_FLAG_RESET_REQUESTED (u32)(BIT(0))
#define FM10K_FLAG_RSS_FIELD_IPV4_UDP (u32)(BIT(1))
#define FM10K_FLAG_RSS_FIELD_IPV6_UDP (u32)(BIT(2))
#define FM10K_FLAG_RX_TS_ENABLED (u32)(BIT(3))
#define FM10K_FLAG_SWPRI_CONFIG (u32)(BIT(4))
#define FM10K_FLAG_DEBUG_STATS (u32)(BIT(5))
#ifndef IFF_RXFH_CONFIGURED
#define FM10K_FLAG_RXFH_CONFIGURED (u32)(BIT(29))
#endif
#define FM10K_UIO_REGISTERED (u32)(BIT(30))
#define FM10K_FLAG_IES_MODE (u32)(BIT(31))
int xcast_mode;
/* Tx fast path data */
@@ -342,6 +377,8 @@ struct fm10k_intfc {
struct fm10k_hw_stats stats;
struct fm10k_hw hw;
/* Mailbox lock */
spinlock_t mbx_lock;
u32 __iomem *uc_addr;
u32 __iomem *sw_addr;
u16 msg_enable;
@@ -359,8 +396,15 @@ struct fm10k_intfc {
u32 reta[FM10K_RETA_SIZE];
u32 rssrk[FM10K_RSSRK_SIZE];
/* VXLAN port tracking information */
/* UDP encapsulation port tracking information */
struct list_head vxlan_port;
struct list_head geneve_port;
/* MAC/VLAN update queue */
struct list_head macvlan_requests;
struct delayed_work macvlan_task;
/* MAC/VLAN update queue lock */
spinlock_t macvlan_lock;
/* UIO device capabilities structure */
struct uio_info uio;
@@ -390,34 +434,19 @@ struct fm10k_intfc {
u16 vid;
};
enum fm10k_state_t {
__FM10K_RESETTING,
__FM10K_DOWN,
__FM10K_SERVICE_SCHED,
__FM10K_SERVICE_DISABLE,
__FM10K_MBX_LOCK,
__FM10K_LINK_DOWN,
};
static inline void fm10k_mbx_lock(struct fm10k_intfc *interface)
{
/* busy loop if we cannot obtain the lock as some calls
* such as ndo_set_rx_mode may be made in atomic context
*/
while (test_and_set_bit(__FM10K_MBX_LOCK, &interface->state))
udelay(20);
spin_lock(&interface->mbx_lock);
}
static inline void fm10k_mbx_unlock(struct fm10k_intfc *interface)
{
/* flush memory to make sure state is correct */
smp_mb__before_atomic();
clear_bit(__FM10K_MBX_LOCK, &interface->state);
spin_unlock(&interface->mbx_lock);
}
static inline int fm10k_mbx_trylock(struct fm10k_intfc *interface)
{
return !test_and_set_bit(__FM10K_MBX_LOCK, &interface->state);
return spin_trylock(&interface->mbx_lock);
}
/* fm10k_test_staterr - test bits in Rx descriptor status and error fields */
@@ -441,7 +470,7 @@ static inline u16 fm10k_desc_unused(struct fm10k_ring *ring)
(&(((union fm10k_rx_desc *)((R)->desc))[i]))
#define FM10K_MAX_TXD_PWR 14
#define FM10K_MAX_DATA_PER_TXD BIT(FM10K_MAX_TXD_PWR)
#define FM10K_MAX_DATA_PER_TXD (1u << FM10K_MAX_TXD_PWR)
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), FM10K_MAX_DATA_PER_TXD)
@@ -505,6 +534,7 @@ __be16 fm10k_tx_encap_offload(struct sk_buff *skb);
netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
struct fm10k_ring *tx_ring);
void fm10k_tx_timeout_reset(struct fm10k_intfc *interface);
u64 fm10k_get_tx_pending(struct fm10k_ring *ring, bool in_sw);
bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring);
void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count);
@@ -519,6 +549,7 @@ void fm10k_up(struct fm10k_intfc *interface);
void fm10k_down(struct fm10k_intfc *interface);
void fm10k_update_stats(struct fm10k_intfc *interface);
void fm10k_service_event_schedule(struct fm10k_intfc *interface);
void fm10k_macvlan_schedule(struct fm10k_intfc *interface);
void fm10k_update_rx_drop_en(struct fm10k_intfc *interface);
#ifdef CONFIG_NET_POLL_CONTROLLER
void fm10k_netpoll(struct net_device *netdev);
@@ -543,6 +574,12 @@ void fm10k_reset_rx_state(struct fm10k_intfc *);
int fm10k_setup_tc(struct net_device *dev, u8 tc);
int fm10k_open(struct net_device *netdev);
int fm10k_close(struct net_device *netdev);
int fm10k_queue_vlan_request(struct fm10k_intfc *interface, u32 vid,
u8 vsi, bool set);
int fm10k_queue_mac_request(struct fm10k_intfc *interface, u16 glort,
const unsigned char *addr, u16 vid, bool set);
void fm10k_clear_macvlan_queue(struct fm10k_intfc *interface,
u16 glort, bool vlans);
/* UIO */
#if IS_ENABLED(CONFIG_UIO)
@@ -563,14 +600,13 @@ static inline bool fm10k_is_ies(struct net_device *dev)
{
struct fm10k_intfc *interface = netdev_priv(dev);
return !!(interface->flags & FM10K_FLAG_IES_MODE);
return test_bit(FM10K_FLAG_IES_MODE, interface->flags);
}
extern struct packet_type ies_packet_type;
/* Ethtool */
void fm10k_set_ethtool_ops(struct net_device *dev);
u32 fm10k_get_reta_size(struct net_device *netdev);
void fm10k_write_reta(struct fm10k_intfc *interface, const u32 *indir);
/* Param */
@@ -586,13 +622,18 @@ int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs);
s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid);
#ifdef IFLA_VF_MAX
int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac);
#ifdef IFLA_VF_VLAN_INFO_MAX
int fm10k_ndo_set_vf_vlan(struct net_device *netdev,
int vf_idx, u16 vid, u8 qos, __be16 vlan_proto);
#else
int fm10k_ndo_set_vf_vlan(struct net_device *netdev,
int vf_idx, u16 vid, u8 qos);
#endif
#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate,
int unused);
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx,
int __always_unused min_rate, int max_rate);
#else
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate);
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int max_rate);
#endif
int fm10k_ndo_get_vf_config(struct net_device *netdev,
int vf_idx, struct ifla_vf_info *ivi);


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k_common.h"
@@ -207,6 +190,9 @@ s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt)
/* clear tx_ready to prevent any false hits for reset */
hw->mac.tx_ready = false;
if (FM10K_REMOVED(hw->hw_addr))
return 0;
/* clear the enable bit for all rings */
for (i = 0; i < q_cnt; i++) {
reg = fm10k_read_reg(hw, FM10K_TXDCTL(i));
@@ -259,6 +245,7 @@ s32 fm10k_stop_hw_generic(struct fm10k_hw *hw)
* fm10k_read_hw_stats_32b - Reads value of 32-bit registers
* @hw: pointer to the hardware structure
* @addr: address of register containing a 32-bit value
* @stat: pointer to structure holding hw stat information
*
* Function reads the content of the register and returns the delta
* between the base and the current value.
@@ -278,6 +265,7 @@ u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr,
* fm10k_read_hw_stats_48b - Reads value of 48-bit registers
* @hw: pointer to the hardware structure
* @addr: address of register containing the lower 32-bit value
* @stat: pointer to structure holding hw stat information
*
* Function reads the content of 2 registers, combined to represent a 48-bit
* statistical value. Extra processing is required to handle overflowing.
@@ -458,7 +446,6 @@ void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q,
/**
* fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues
* @hw: pointer to the hardware structure
* @q: pointer to the ring of hardware statistics queue
* @idx: index pointing to the start of the ring iteration
* @count: number of queues to iterate over
@@ -503,7 +490,7 @@ s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready)
goto out;
/* if we somehow dropped the Tx enable we should reset */
if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) {
if (mac->tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) {
ret_val = FM10K_ERR_RESET_REQUESTED;
goto out;
}
@@ -514,13 +501,17 @@ s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready)
goto out;
}
/* verify Mailbox is still valid */
if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU))
/* verify Mailbox is still open */
if (mbx->state != FM10K_STATE_OPEN)
goto out;
/* interface cannot receive traffic without logical ports */
if (mac->dglort_map == FM10K_DGLORTMAP_NONE)
if (mac->dglort_map == FM10K_DGLORTMAP_NONE) {
if (mac->ops.request_lport_map)
ret_val = mac->ops.request_lport_map(hw);
goto out;
}
/* if we passed all the tests above then the switch is ready and we no
* longer need to check for link


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_COMMON_H_
#define _FM10K_COMMON_H_


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"

View file

@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"
@ -52,9 +35,9 @@ static void fm10k_dbg_desc_seq_stop(struct seq_file __always_unused *s,
static void fm10k_dbg_desc_break(struct seq_file *s, int i)
{
while (i--)
seq_puts(s, "-");
seq_putc(s, '-');
seq_puts(s, "\n");
seq_putc(s, '\n');
}
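The hunk above swaps `seq_puts(s, "-")` for `seq_putc(s, '-')`, since emitting a single character needs no string handling. A standalone sketch of the same separator-line output, using a plain buffer in place of the kernel's `seq_file` helpers (not driver code):

```c
#include <assert.h>
#include <string.h>

/* Build the "-----\n" separator the debugfs code prints: one '-' per
 * column, then a newline. buf must hold at least i + 2 bytes. */
static void desc_break(char *buf, int i)
{
	char *p = buf;

	while (i--)
		*p++ = '-';	/* per-character output, like seq_putc() */
	*p++ = '\n';
	*p = '\0';
}
```

For example, `desc_break(buf, 5)` yields `"-----\n"`.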
static int fm10k_dbg_tx_desc_seq_show(struct seq_file *s, void *v)

View file

@ -1,39 +1,31 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include <linux/vmalloc.h>
#include "fm10k.h"
struct fm10k_stats {
/* The stat_string is expected to be a format string formatted using
* vsnprintf by fm10k_add_stat_strings. Every member of a stats array
* should use the same format specifiers as they will be formatted
* using the same variadic arguments.
*/
char stat_string[ETH_GSTRING_LEN];
int sizeof_stat;
int stat_offset;
};
#define FM10K_NETDEV_STAT(_net_stat) { \
.stat_string = #_net_stat, \
.sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
.stat_offset = offsetof(struct net_device_stats, _net_stat) \
#define FM10K_STAT_FIELDS(_type, _name, _stat) { \
.stat_string = _name, \
.sizeof_stat = FIELD_SIZEOF(_type, _stat), \
.stat_offset = offsetof(_type, _stat) \
}
/* netdevice statistics */
#define FM10K_NETDEV_STAT(_net_stat) \
FM10K_STAT_FIELDS(struct net_device_stats, #_net_stat, _net_stat)
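The refactor above collapses the per-type stat macros into one `FM10K_STAT_FIELDS`, which records a member's name, size, and byte offset so the value can later be fetched generically from any containing struct. A minimal userspace version of the pattern (struct and macro names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct demo_stats {
	uint64_t tx_packets;
	uint64_t tx_bytes;
};

struct stat_desc {
	const char *name;
	int size;	/* sizeof the member */
	int offset;	/* byte offset inside the containing struct */
};

/* One macro serves every stat table, mirroring FM10K_STAT_FIELDS */
#define STAT_FIELDS(type, n, member) \
	{ .name = n, .size = sizeof(((type *)0)->member), \
	  .offset = offsetof(type, member) }

static const struct stat_desc demo_descs[] = {
	STAT_FIELDS(struct demo_stats, "tx_packets", tx_packets),
	STAT_FIELDS(struct demo_stats, "tx_bytes", tx_bytes),
};

/* Fetch a stat by walking to its recorded offset */
static uint64_t read_stat(const void *base, const struct stat_desc *d)
{
	return *(const uint64_t *)((const char *)base + d->offset);
}
```

The payoff is that one generic reader replaces a copy-pasted accessor per struct type.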
static const struct fm10k_stats fm10k_gstrings_net_stats[] = {
FM10K_NETDEV_STAT(tx_packets),
FM10K_NETDEV_STAT(tx_bytes),
@ -51,11 +43,9 @@ static const struct fm10k_stats fm10k_gstrings_net_stats[] = {
#define FM10K_NETDEV_STATS_LEN ARRAY_SIZE(fm10k_gstrings_net_stats)
#define FM10K_STAT(_name, _stat) { \
.stat_string = _name, \
.sizeof_stat = FIELD_SIZEOF(struct fm10k_intfc, _stat), \
.stat_offset = offsetof(struct fm10k_intfc, _stat) \
}
/* General interface statistics */
#define FM10K_STAT(_name, _stat) \
FM10K_STAT_FIELDS(struct fm10k_intfc, _name, _stat)
static const struct fm10k_stats fm10k_gstrings_global_stats[] = {
FM10K_STAT("tx_restart_queue", restart_queue),
@ -76,6 +66,8 @@ static const struct fm10k_stats fm10k_gstrings_global_stats[] = {
FM10K_STAT("mac_rules_used", hw.swapi.mac.used),
FM10K_STAT("mac_rules_avail", hw.swapi.mac.avail),
FM10K_STAT("reset_while_pending", hw.mac.reset_while_pending),
FM10K_STAT("tx_hang_count", tx_timeout_count),
};
@ -90,11 +82,9 @@ static const struct fm10k_stats fm10k_gstrings_pf_stats[] = {
FM10K_STAT("nodesc_drop", stats.nodesc_drop.count),
};
#define FM10K_MBX_STAT(_name, _stat) { \
.stat_string = _name, \
.sizeof_stat = FIELD_SIZEOF(struct fm10k_mbx_info, _stat), \
.stat_offset = offsetof(struct fm10k_mbx_info, _stat) \
}
/* mailbox statistics */
#define FM10K_MBX_STAT(_name, _stat) \
FM10K_STAT_FIELDS(struct fm10k_mbx_info, _name, _stat)
static const struct fm10k_stats fm10k_gstrings_mbx_stats[] = {
FM10K_MBX_STAT("mbx_tx_busy", tx_busy),
@ -108,15 +98,13 @@ static const struct fm10k_stats fm10k_gstrings_mbx_stats[] = {
FM10K_MBX_STAT("mbx_rx_mbmem_pushed", rx_mbmem_pushed),
};
#define FM10K_QUEUE_STAT(_name, _stat) { \
.stat_string = _name, \
.sizeof_stat = FIELD_SIZEOF(struct fm10k_ring, _stat), \
.stat_offset = offsetof(struct fm10k_ring, _stat) \
}
/* per-queue ring statistics */
#define FM10K_QUEUE_STAT(_name, _stat) \
FM10K_STAT_FIELDS(struct fm10k_ring, _name, _stat)
static const struct fm10k_stats fm10k_gstrings_queue_stats[] = {
FM10K_QUEUE_STAT("packets", stats.packets),
FM10K_QUEUE_STAT("bytes", stats.bytes),
FM10K_QUEUE_STAT("%s_queue_%u_packets", stats.packets),
FM10K_QUEUE_STAT("%s_queue_%u_bytes", stats.bytes),
};
#define FM10K_GLOBAL_STATS_LEN ARRAY_SIZE(fm10k_gstrings_global_stats)
@ -148,68 +136,60 @@ static const char fm10k_prv_flags[FM10K_PRV_FLAG_LEN][ETH_GSTRING_LEN] = {
"ies-tagging",
};
static void fm10k_add_stat_strings(char **p, const char *prefix,
const struct fm10k_stats stats[],
const unsigned int size)
static void __fm10k_add_stat_strings(u8 **p, const struct fm10k_stats stats[],
const unsigned int size, ...)
{
unsigned int i;
for (i = 0; i < size; i++) {
snprintf(*p, ETH_GSTRING_LEN, "%s%s",
prefix, stats[i].stat_string);
va_list args;
va_start(args, size);
vsnprintf(*p, ETH_GSTRING_LEN, stats[i].stat_string, args);
*p += ETH_GSTRING_LEN;
va_end(args);
}
}
#define fm10k_add_stat_strings(p, stats, ...) \
__fm10k_add_stat_strings(p, stats, ARRAY_SIZE(stats), ## __VA_ARGS__)
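`__fm10k_add_stat_strings()` now treats each `stat_string` as a printf-style format and fills it with `vsnprintf`, so a single queue-stats table serves both the `"tx_queue_%u_..."` and `"rx_queue_%u_..."` name sets, with `ARRAY_SIZE` supplying the count in the wrapper macro. A hedged sketch of the same trick (names and the 32-byte slot width are illustrative):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define GSTRING_LEN 32
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Format each entry's string into a fixed-width slot, advancing *p */
static void add_stat_strings(char **p, const char *const strs[],
			     unsigned int n, ...)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		va_list args;

		va_start(args, n);	/* restart the args for every entry */
		vsnprintf(*p, GSTRING_LEN, strs[i], args);
		*p += GSTRING_LEN;
		va_end(args);
	}
}

/* Let the table size come from ARRAY_SIZE, like the driver's wrapper */
#define ADD_STAT_STRINGS(p, strs, ...) \
	add_stat_strings(p, strs, ARRAY_SIZE(strs), ##__VA_ARGS__)

static const char *const queue_strs[] = {
	"%s_queue_%u_packets",
	"%s_queue_%u_bytes",
};
```

Every entry in one table must take the same variadic arguments, which is exactly the constraint the driver's comment on `stat_string` spells out.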
static void fm10k_get_stat_strings(struct net_device *dev, u8 *data)
{
struct fm10k_intfc *interface = netdev_priv(dev);
char *p = (char *)data;
unsigned int i;
fm10k_add_stat_strings(&p, "", fm10k_gstrings_net_stats,
FM10K_NETDEV_STATS_LEN);
fm10k_add_stat_strings(&data, fm10k_gstrings_net_stats);
fm10k_add_stat_strings(&p, "", fm10k_gstrings_global_stats,
FM10K_GLOBAL_STATS_LEN);
fm10k_add_stat_strings(&data, fm10k_gstrings_global_stats);
fm10k_add_stat_strings(&p, "", fm10k_gstrings_mbx_stats,
FM10K_MBX_STATS_LEN);
fm10k_add_stat_strings(&data, fm10k_gstrings_mbx_stats);
if (interface->hw.mac.type != fm10k_mac_vf)
fm10k_add_stat_strings(&p, "", fm10k_gstrings_pf_stats,
FM10K_PF_STATS_LEN);
fm10k_add_stat_strings(&data, fm10k_gstrings_pf_stats);
for (i = 0; i < interface->hw.mac.max_queues; i++) {
char prefix[ETH_GSTRING_LEN];
fm10k_add_stat_strings(&data, fm10k_gstrings_queue_stats,
"tx", i);
snprintf(prefix, ETH_GSTRING_LEN, "tx_queue_%u_", i);
fm10k_add_stat_strings(&p, prefix,
fm10k_gstrings_queue_stats,
FM10K_QUEUE_STATS_LEN);
snprintf(prefix, ETH_GSTRING_LEN, "rx_queue_%u_", i);
fm10k_add_stat_strings(&p, prefix,
fm10k_gstrings_queue_stats,
FM10K_QUEUE_STATS_LEN);
fm10k_add_stat_strings(&data, fm10k_gstrings_queue_stats,
"rx", i);
}
}
static void fm10k_get_strings(struct net_device *dev,
u32 stringset, u8 *data)
{
char *p = (char *)data;
switch (stringset) {
case ETH_SS_TEST:
memcpy(data, *fm10k_gstrings_test,
memcpy(data, fm10k_gstrings_test,
FM10K_TEST_LEN * ETH_GSTRING_LEN);
break;
case ETH_SS_STATS:
fm10k_get_stat_strings(dev, data);
break;
case ETH_SS_PRIV_FLAGS:
memcpy(p, fm10k_prv_flags,
memcpy(data, fm10k_prv_flags,
FM10K_PRV_FLAG_LEN * ETH_GSTRING_LEN);
break;
}
@ -238,9 +218,9 @@ static int fm10k_get_sset_count(struct net_device *dev, int sset)
}
}
static void fm10k_add_ethtool_stats(u64 **data, void *pointer,
const struct fm10k_stats stats[],
const unsigned int size)
static void __fm10k_add_ethtool_stats(u64 **data, void *pointer,
const struct fm10k_stats stats[],
const unsigned int size)
{
unsigned int i;
char *p;
@ -269,11 +249,16 @@ static void fm10k_add_ethtool_stats(u64 **data, void *pointer,
*((*data)++) = *(u8 *)p;
break;
default:
WARN_ONCE(1, "unexpected stat size for %s",
stats[i].stat_string);
*((*data)++) = 0;
}
}
}
#define fm10k_add_ethtool_stats(data, pointer, stats) \
__fm10k_add_ethtool_stats(data, pointer, stats, ARRAY_SIZE(stats))
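`__fm10k_add_ethtool_stats()` copies each counter into the `u64` output array by switching on the recorded member size, now warning and emitting zero on an unexpected size instead of skipping silently. A standalone version of that dispatch (sizes 8/4/2/1 assumed, as in the driver):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Widen a stat of size 1/2/4/8 bytes at p into a u64 slot */
static uint64_t read_sized(const void *p, int size)
{
	switch (size) {
	case 8: { uint64_t v; memcpy(&v, p, 8); return v; }
	case 4: { uint32_t v; memcpy(&v, p, 4); return v; }
	case 2: { uint16_t v; memcpy(&v, p, 2); return v; }
	case 1: { uint8_t  v; memcpy(&v, p, 1); return v; }
	default:
		return 0;	/* unexpected size: emit 0, as the driver does */
	}
}
```

`memcpy` is used here to sidestep alignment concerns that a direct pointer cast could raise in portable code; the kernel version dereferences directly.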
static void fm10k_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats __always_unused *stats,
u64 *data)
@ -284,20 +269,16 @@ static void fm10k_get_ethtool_stats(struct net_device *netdev,
fm10k_update_stats(interface);
fm10k_add_ethtool_stats(&data, net_stats, fm10k_gstrings_net_stats,
FM10K_NETDEV_STATS_LEN);
fm10k_add_ethtool_stats(&data, net_stats, fm10k_gstrings_net_stats);
fm10k_add_ethtool_stats(&data, interface, fm10k_gstrings_global_stats,
FM10K_GLOBAL_STATS_LEN);
fm10k_add_ethtool_stats(&data, interface, fm10k_gstrings_global_stats);
fm10k_add_ethtool_stats(&data, &interface->hw.mbx,
fm10k_gstrings_mbx_stats,
FM10K_MBX_STATS_LEN);
fm10k_gstrings_mbx_stats);
if (interface->hw.mac.type != fm10k_mac_vf) {
fm10k_add_ethtool_stats(&data, interface,
fm10k_gstrings_pf_stats,
FM10K_PF_STATS_LEN);
fm10k_gstrings_pf_stats);
}
for (i = 0; i < interface->hw.mac.max_queues; i++) {
@ -305,13 +286,11 @@ static void fm10k_get_ethtool_stats(struct net_device *netdev,
ring = interface->tx_ring[i];
fm10k_add_ethtool_stats(&data, ring,
fm10k_gstrings_queue_stats,
FM10K_QUEUE_STATS_LEN);
fm10k_gstrings_queue_stats);
ring = interface->rx_ring[i];
fm10k_add_ethtool_stats(&data, ring,
fm10k_gstrings_queue_stats,
FM10K_QUEUE_STATS_LEN);
fm10k_gstrings_queue_stats);
}
}
@ -352,27 +331,23 @@ static void fm10k_get_reg_q(struct fm10k_hw *hw, u32 *buff, int i)
buff[idx++] = fm10k_read_reg(hw, FM10K_TX_SGLORT(i));
buff[idx++] = fm10k_read_reg(hw, FM10K_PFVTCTL(i));
BUILD_BUG_ON(idx != FM10K_REGS_LEN_Q);
BUG_ON(idx != FM10K_REGS_LEN_Q);
}
/* If function below adds more registers this define needs to be updated */
#define FM10K_REGS_LEN_VSI (1 + FM10K_RSSRK_SIZE + FM10K_RETA_SIZE)
/* If function above adds more registers this define needs to be updated */
#define FM10K_REGS_LEN_VSI 43
static void fm10k_get_reg_vsi(struct fm10k_hw *hw, u32 *buff, int i)
{
int idx = 0, j;
buff[idx++] = fm10k_read_reg(hw, FM10K_MRQC(i));
for (j = 0; j < FM10K_RSSRK_SIZE; j++, idx++)
if (idx < FM10K_REGS_LEN_VSI)
buff[idx] = fm10k_read_reg(hw, FM10K_RSSRK(i, j));
for (j = 0; j < FM10K_RETA_SIZE; j++, idx++)
if (idx < FM10K_REGS_LEN_VSI)
buff[idx] = fm10k_read_reg(hw, FM10K_RETA(i, j));
for (j = 0; j < 10; j++)
buff[idx++] = fm10k_read_reg(hw, FM10K_RSSRK(i, j));
for (j = 0; j < 32; j++)
buff[idx++] = fm10k_read_reg(hw, FM10K_RETA(i, j));
WARN_ONCE(idx != FM10K_REGS_LEN_VSI,
"Incorrect value for FM10K_REGS_LEN_VSI (expected %d, got %d)\n",
idx, FM10K_REGS_LEN_VSI);
BUG_ON(idx != FM10K_REGS_LEN_VSI);
}
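The VSI register dump writes 1 MRQC word, 10 RSSRK words, and 32 RETA words, so the literal 43 in `FM10K_REGS_LEN_VSI` must stay in step with the loop bounds; the `BUG_ON`/`WARN_ONCE` guards exist to catch a mismatch when someone adds a register. A sketch of that invariant with dummy values standing in for `fm10k_read_reg()`:

```c
#include <assert.h>
#include <stdint.h>

#define REGS_LEN_VSI 43	/* 1 MRQC + 10 RSSRK + 32 RETA */

/* Fill buff exactly as the dump loop does and return the index used */
static int fill_vsi_regs(uint32_t *buff)
{
	int idx = 0, j;

	buff[idx++] = 0x1u;			/* MRQC */
	for (j = 0; j < 10; j++)
		buff[idx++] = 0x100u + j;	/* RSSRK[j] */
	for (j = 0; j < 32; j++)
		buff[idx++] = 0x200u + j;	/* RETA[j] */
	return idx;
}
```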
static void fm10k_get_regs(struct net_device *netdev,
@ -569,7 +544,7 @@ static int fm10k_set_ringparam(struct net_device *netdev,
return 0;
}
while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
while (test_and_set_bit(__FM10K_RESETTING, interface->state))
usleep_range(1000, 2000);
if (!netif_running(interface->netdev)) {
@ -655,7 +630,7 @@ err_setup:
fm10k_up(interface);
vfree(temp_ring);
clear_reset:
clear_bit(__FM10K_RESETTING, &interface->state);
clear_bit(__FM10K_RESETTING, interface->state);
return err;
}
@ -723,7 +698,8 @@ static int fm10k_get_rss_hash_opts(struct fm10k_intfc *interface,
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
/* fall through */
case UDP_V4_FLOW:
if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags))
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
/* fall through */
case SCTP_V4_FLOW:
@ -739,7 +715,8 @@ static int fm10k_get_rss_hash_opts(struct fm10k_intfc *interface,
cmd->data |= RXH_IP_SRC | RXH_IP_DST;
break;
case UDP_V6_FLOW:
if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags))
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
cmd->data |= RXH_IP_SRC | RXH_IP_DST;
break;
@ -776,12 +753,13 @@ static int fm10k_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
return ret;
}
#define UDP_RSS_FLAGS (FM10K_FLAG_RSS_FIELD_IPV4_UDP | \
FM10K_FLAG_RSS_FIELD_IPV6_UDP)
static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
struct ethtool_rxnfc *nfc)
{
u32 flags = interface->flags;
int rss_ipv4_udp = test_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags);
int rss_ipv6_udp = test_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags);
/* RSS does not support anything other than hashing
* to queues on src and dst IPs and ports
@ -805,10 +783,12 @@ static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
return -EINVAL;
switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
case 0:
flags &= ~FM10K_FLAG_RSS_FIELD_IPV4_UDP;
clear_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags);
break;
case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
flags |= FM10K_FLAG_RSS_FIELD_IPV4_UDP;
set_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags);
break;
default:
return -EINVAL;
@ -820,10 +800,12 @@ static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
return -EINVAL;
switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
case 0:
flags &= ~FM10K_FLAG_RSS_FIELD_IPV6_UDP;
clear_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags);
break;
case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
flags |= FM10K_FLAG_RSS_FIELD_IPV6_UDP;
set_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags);
break;
default:
return -EINVAL;
@ -847,28 +829,41 @@ static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
return -EINVAL;
}
/* if we changed something we need to update flags */
if (flags != interface->flags) {
/* If something changed we need to update the MRQC register. Note that
* test_bit() is guaranteed to return strictly 0 or 1, so testing for
* equality is safe.
*/
if ((rss_ipv4_udp != test_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags)) ||
(rss_ipv6_udp != test_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags))) {
struct fm10k_hw *hw = &interface->hw;
bool warn = false;
u32 mrqc;
if ((flags & UDP_RSS_FLAGS) &&
!(interface->flags & UDP_RSS_FLAGS))
netif_warn(interface, drv, interface->netdev,
"enabling UDP RSS: fragmented packets may arrive out of order to the stack above\n");
interface->flags = flags;
/* Perform hash on these packet types */
mrqc = FM10K_MRQC_IPV4 |
FM10K_MRQC_TCP_IPV4 |
FM10K_MRQC_IPV6 |
FM10K_MRQC_TCP_IPV6;
if (flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP,
interface->flags)) {
mrqc |= FM10K_MRQC_UDP_IPV4;
if (flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
warn = true;
}
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP,
interface->flags)) {
mrqc |= FM10K_MRQC_UDP_IPV6;
warn = true;
}
/* If we enable UDP RSS display a warning that this may cause
* fragmented UDP packets to arrive out of order.
*/
if (warn)
netif_warn(interface, drv, interface->netdev,
"enabling UDP RSS: fragmented packets may arrive out of order to the stack above\n");
fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
}
@ -951,7 +946,7 @@ static void fm10k_self_test(struct net_device *dev,
memset(data, 0, sizeof(*data) * FM10K_TEST_LEN);
if (FM10K_REMOVED(hw)) {
if (FM10K_REMOVED(hw->hw_addr)) {
netif_err(interface, drv, dev,
"Interface removed - test blocked\n");
eth_test->flags |= ETH_TEST_FL_FAILED;
@ -967,7 +962,7 @@ static u32 fm10k_get_priv_flags(struct net_device *netdev)
struct fm10k_intfc *interface = netdev_priv(netdev);
u32 priv_flags = 0;
if (interface->flags & FM10K_FLAG_IES_MODE)
if (test_bit(FM10K_FLAG_IES_MODE, interface->flags))
priv_flags |= BIT(FM10K_PRV_FLAG_IES);
return priv_flags;
@ -985,30 +980,47 @@ static int fm10k_set_priv_flags(struct net_device *netdev, u32 priv_flags)
if (interface->hw.mac.type == fm10k_mac_vf)
return -EINVAL;
interface->flags |= FM10K_FLAG_IES_MODE;
if (!test_and_set_bit(FM10K_FLAG_IES_MODE, interface->flags))
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
} else {
interface->flags &= ~FM10K_FLAG_IES_MODE;
if (test_and_clear_bit(FM10K_FLAG_IES_MODE, interface->flags))
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
}
return 0;
}
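A recurring theme in this change is that `interface->flags` moves from a plain `u32` manipulated with `&`/`|` masks to per-bit helpers (`test_bit`, `set_bit`, `clear_bit`, `test_and_set_bit`), which are atomic in the kernel and, as above, let `set_priv_flags` request a reset only when the bit actually transitioned. A userspace sketch of these helpers on an `unsigned long` bitmap (non-atomic here, unlike the kernel's):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LEN(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static int test_bit_(int nr, const unsigned long *map)
{
	return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL;
}

static void set_bit_(int nr, unsigned long *map)
{
	map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Returns the old value, so callers act only on a 0 -> 1 transition */
static int test_and_set_bit_(int nr, unsigned long *map)
{
	int old = test_bit_(nr, map);

	set_bit_(nr, map);
	return old;
}
```

Because `test_bit` returns strictly 0 or 1, two results can be compared for equality directly, which is the property the driver's comment in `fm10k_set_rss_hash_opt` relies on.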
u32 fm10k_get_reta_size(struct net_device __always_unused *netdev)
static u32 fm10k_get_reta_size(struct net_device __always_unused *netdev)
{
return FM10K_RETA_SIZE * FM10K_RETA_ENTRIES_PER_REG;
}
void fm10k_write_reta(struct fm10k_intfc *interface, const u32 *indir)
{
u16 rss_i = interface->ring_feature[RING_F_RSS].indices;
struct fm10k_hw *hw = &interface->hw;
int i;
u32 table[4];
int i, j;
/* record entries to reta table */
for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
u32 reta = indir[0] |
(indir[1] << 8) |
(indir[2] << 16) |
(indir[3] << 24);
for (i = 0; i < FM10K_RETA_SIZE; i++) {
u32 reta, n;
/* generate a new table if we weren't given one */
for (j = 0; j < 4; j++) {
if (indir)
n = indir[4 * i + j];
else
n = ethtool_rxfh_indir_default(4 * i + j,
rss_i);
table[j] = n;
}
reta = table[0] |
(table[1] << 8) |
(table[2] << 16) |
(table[3] << 24);
if (interface->reta[i] == reta)
continue;
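`fm10k_write_reta()` now builds each 32-bit RETA register from four 8-bit table entries, falling back to `ethtool_rxfh_indir_default()` when no indirection table is supplied. The packing step itself is plain byte assembly, entry 0 in the low byte:

```c
#include <assert.h>
#include <stdint.h>

/* Pack four queue indices (one byte each) into a RETA register word,
 * entry 0 in the low byte, matching the driver's ordering. */
static uint32_t pack_reta(const uint8_t table[4])
{
	return (uint32_t)table[0] |
	       ((uint32_t)table[1] << 8) |
	       ((uint32_t)table[2] << 16) |
	       ((uint32_t)table[3] << 24);
}
```

Comparing the packed word against the cached `interface->reta[i]` lets the driver skip redundant register writes.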
@ -1061,7 +1073,7 @@ static int fm10k_set_reta(struct net_device *netdev, const u32 *indir)
* to set the redirection table to default, so we just assume that it
* is an explicit request for a customized table.
*/
interface->flags |= FM10K_FLAG_RXFH_CONFIGURED;
set_bit(FM10K_FLAG_RXFH_CONFIGURED, interface->flags);
#endif
fm10k_write_reta(interface, indir);
@ -1320,6 +1332,7 @@ static const struct ethtool_ops_ext fm10k_ethtool_ops_ext = {
.get_channels = fm10k_get_channels,
.set_channels = fm10k_set_channels,
#endif
.get_ts_info = ethtool_op_get_ts_info,
};
void fm10k_set_ethtool_ops(struct net_device *dev)

View file

@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"

View file

@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"
#include "fm10k_vf.h"
@ -35,10 +18,133 @@ static s32 fm10k_iov_msg_error(struct fm10k_hw *hw, u32 **results,
return fm10k_tlv_msg_error(hw, results, mbx);
}
/**
* fm10k_iov_msg_queue_mac_vlan - Message handler for MAC/VLAN request from VF
* @hw: Pointer to hardware structure
* @results: Pointer array to message, results[0] is pointer to message
* @mbx: Pointer to mailbox information structure
*
* This function is a custom handler for MAC/VLAN requests from the VF. The
* assumption is that it is acceptable to directly hand off the message from
* the VF to the PF's switch manager. However, we use a MAC/VLAN message
* queue to avoid overloading the mailbox when a large number of requests
* come in.
**/
static s32 fm10k_iov_msg_queue_mac_vlan(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info *mbx)
{
struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
struct fm10k_intfc *interface = hw->back;
u8 mac[ETH_ALEN];
u32 *result;
int err = 0;
bool set;
u16 vlan;
u32 vid;
/* we shouldn't be updating rules on a disabled interface */
if (!FM10K_VF_FLAG_ENABLED(vf_info))
err = FM10K_ERR_PARAM;
if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) {
result = results[FM10K_MAC_VLAN_MSG_VLAN];
/* record VLAN id requested */
err = fm10k_tlv_attr_get_u32(result, &vid);
if (err)
return err;
set = !(vid & FM10K_VLAN_CLEAR);
vid &= ~FM10K_VLAN_CLEAR;
/* if the length field has been set, this is a multi-bit
* update request. For multi-bit requests, simply disallow
* them when the pf_vid has been set. In this case, the PF
* should have already cleared the VLAN_TABLE, and if we
* allowed them, it could allow a rogue VF to receive traffic
* on a VLAN it was not assigned. In the single-bit case, we
* need to modify requests for VLAN 0 to use the default PF or
* SW vid when assigned.
*/
if (vid >> 16) {
/* prevent multi-bit requests when PF has
* administratively set the VLAN for this VF
*/
if (vf_info->pf_vid)
return FM10K_ERR_PARAM;
} else {
err = fm10k_iov_select_vid(vf_info, (u16)vid);
if (err < 0)
return err;
vid = err;
}
/* update VSI info for VF in regards to VLAN table */
err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, set);
}
if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) {
result = results[FM10K_MAC_VLAN_MSG_MAC];
/* record unicast MAC address requested */
err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
if (err)
return err;
/* block attempts to set MAC for a locked device */
if (is_valid_ether_addr(vf_info->mac) &&
!ether_addr_equal(mac, vf_info->mac))
return FM10K_ERR_PARAM;
set = !(vlan & FM10K_VLAN_CLEAR);
vlan &= ~FM10K_VLAN_CLEAR;
err = fm10k_iov_select_vid(vf_info, vlan);
if (err < 0)
return err;
vlan = (u16)err;
/* Add this request to the MAC/VLAN queue */
err = fm10k_queue_mac_request(interface, vf_info->glort,
mac, vlan, set);
}
if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) {
result = results[FM10K_MAC_VLAN_MSG_MULTICAST];
/* record multicast MAC address requested */
err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
if (err)
return err;
/* verify that the VF is allowed to request multicast */
if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED))
return FM10K_ERR_PARAM;
set = !(vlan & FM10K_VLAN_CLEAR);
vlan &= ~FM10K_VLAN_CLEAR;
err = fm10k_iov_select_vid(vf_info, vlan);
if (err < 0)
return err;
vlan = (u16)err;
/* Add this request to the MAC/VLAN queue */
err = fm10k_queue_mac_request(interface, vf_info->glort,
mac, vlan, set);
}
return err;
}
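Each MAC/VLAN request above encodes add-versus-remove in a CLEAR flag carried alongside the VLAN id: the handler derives `set` from the flag's absence and masks it off before using the id. A sketch of that decode, assuming the flag sits in the top bit of a 16-bit field (the actual `FM10K_VLAN_CLEAR` value is defined in the driver headers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VLAN_CLEAR_FLAG 0x8000u	/* assumed position of the CLEAR bit */

struct vlan_req {
	bool set;	/* true = add rule, false = remove rule */
	uint16_t vid;	/* VLAN id with the flag bit stripped */
};

/* Split a raw request word into (set, vid), as the handler does */
static struct vlan_req decode_vlan(uint16_t raw)
{
	struct vlan_req r = {
		.set = !(raw & VLAN_CLEAR_FLAG),
		.vid = (uint16_t)(raw & ~VLAN_CLEAR_FLAG),
	};
	return r;
}
```

Folding direction into the same field keeps the mailbox message format compact: one attribute covers both add and remove.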
static const struct fm10k_msg_data iov_mbx_data[] = {
FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf),
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf),
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_queue_mac_vlan),
FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf),
FM10K_TLV_MSG_ERROR_HANDLER(fm10k_iov_msg_error),
};
@ -51,7 +157,7 @@ s32 fm10k_iov_event(struct fm10k_intfc *interface)
int i;
/* if there is no iov_data then there is no mailbox to process */
if (!ACCESS_ONCE(interface->iov_data))
if (!READ_ONCE(interface->iov_data))
return 0;
rcu_read_lock();
@ -66,25 +172,21 @@ s32 fm10k_iov_event(struct fm10k_intfc *interface)
goto read_unlock;
/* read VFLRE to determine if any VFs have been reset */
do {
vflre = fm10k_read_reg(hw, FM10K_PFVFLRE(0));
vflre <<= 32;
vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(1));
vflre = (vflre << 32) | (vflre >> 32);
vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(0));
vflre = fm10k_read_reg(hw, FM10K_PFVFLRE(1));
vflre <<= 32;
vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(0));
i = iov_data->num_vfs;
i = iov_data->num_vfs;
for (vflre <<= 64 - i; vflre && i--; vflre += vflre) {
struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
for (vflre <<= 64 - i; vflre && i--; vflre += vflre) {
struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
if (vflre >= 0)
continue;
if (vflre >= 0)
continue;
hw->iov.ops.reset_resources(hw, vf_info);
vf_info->mbx.ops.connect(hw, &vf_info->mbx);
}
} while (i != iov_data->num_vfs);
hw->iov.ops.reset_resources(hw, vf_info);
vf_info->mbx.ops.connect(hw, &vf_info->mbx);
}
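The VFLRE loop above scans a 64-bit reset bitmap from the highest-numbered VF downward by shifting the word left and testing the top bit each step (`vflre >= 0` on a signed value means the top bit is clear). A userspace analogue using an explicit unsigned top-bit test instead of the signed comparison (illustrative, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Count the VFs whose bit is set in a VFLRE-style bitmap, walking
 * from VF num_vfs-1 down to VF 0 via a left-shifting top-bit scan. */
static int count_reset_vfs(uint64_t vflre, int num_vfs)
{
	int i = num_vfs, count = 0;

	if (num_vfs <= 0 || num_vfs > 64)
		return 0;	/* guard: shift by 64+ would be undefined */

	for (vflre <<= 64 - i; vflre && i--; vflre <<= 1) {
		if (!(vflre >> 63))	/* top bit clear: VF i not flagged */
			continue;
		count++;	/* here the driver resets VF i's resources */
	}
	return count;
}
```

The `vflre &&` condition lets the loop exit early once no flagged VFs remain below the current index.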
read_unlock:
rcu_read_unlock();
@ -99,7 +201,7 @@ s32 fm10k_iov_mbx(struct fm10k_intfc *interface)
int i;
/* if there is no iov_data then there is no mailbox to process */
if (!ACCESS_ONCE(interface->iov_data))
if (!READ_ONCE(interface->iov_data))
return 0;
rcu_read_lock();
@ -126,9 +228,14 @@ process_mbx:
struct fm10k_mbx_info *mbx = &vf_info->mbx;
u16 glort = vf_info->glort;
/* process the SM mailbox first to drain outgoing messages */
hw->mbx.ops.process(hw, &hw->mbx);
/* verify port mapping is valid, if not reset port */
if (vf_info->vf_flags && !fm10k_glort_valid_pf(hw, glort))
if (vf_info->vf_flags && !fm10k_glort_valid_pf(hw, glort)) {
hw->iov.ops.reset_lport(hw, vf_info);
fm10k_clear_macvlan_queue(interface, glort, false);
}
/* reset VFs that have mailbox timed out */
if (!mbx->timeout) {
@ -137,9 +244,14 @@ process_mbx:
}
/* guarantee we have free space in the SM mailbox */
if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU)) {
if (hw->mbx.state == FM10K_STATE_OPEN &&
!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU)) {
/* keep track of how many times this occurs */
interface->hw_sm_mbx_full++;
/* make sure we try again momentarily */
fm10k_service_event_schedule(interface);
break;
}
@ -187,9 +299,32 @@ void fm10k_iov_suspend(struct pci_dev *pdev)
hw->iov.ops.reset_resources(hw, vf_info);
hw->iov.ops.reset_lport(hw, vf_info);
fm10k_clear_macvlan_queue(interface, vf_info->glort, false);
}
}
static void fm10k_mask_aer_comp_abort(struct pci_dev *pdev)
{
u32 err_mask;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
if (!pos)
return;
/* Mask the completion abort bit in the ERR_UNCOR_MASK register,
* preventing the device from reporting these errors to the upstream
* PCIe root device. This avoids bringing down platforms which upgrade
* non-fatal completer aborts into machine check exceptions. Completer
* aborts can occur whenever a VF reads a queue it doesn't own.
*/
pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_MASK, &err_mask);
err_mask |= PCI_ERR_UNC_COMP_ABORT;
pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_MASK, err_mask);
mmiowb();
}
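`fm10k_mask_aer_comp_abort()` performs a read-modify-write of the AER `ERR_UNCOR_MASK` dword so completer aborts are masked outright, where the removed `fm10k_disable_aer_comp_abort()` merely cleared the bit in `ERR_UNCOR_SEVER` to downgrade their severity. The bit manipulation on the register values looks like this (PCI config-space accessors omitted; `PCI_ERR_UNC_COMP_ABORT` is bit 15 per the PCIe AER layout):

```c
#include <assert.h>
#include <stdint.h>

#define ERR_COMP_ABORT 0x00008000u	/* PCI_ERR_UNC_COMP_ABORT */

/* Old approach: lower severity (clear the bit in ERR_UNCOR_SEVER) */
static uint32_t lower_severity(uint32_t err_sev)
{
	return err_sev & ~ERR_COMP_ABORT;
}

/* New approach: mask reporting entirely (set the bit in ERR_UNCOR_MASK) */
static uint32_t mask_error(uint32_t err_mask)
{
	return err_mask | ERR_COMP_ABORT;
}
```

Masking keeps the error from ever reaching the upstream root port, which matters on platforms that escalate even non-fatal completer aborts into machine check exceptions.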
int fm10k_iov_resume(struct pci_dev *pdev)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
@ -205,6 +340,12 @@ int fm10k_iov_resume(struct pci_dev *pdev)
if (!iov_data)
return -ENOMEM;
/* Lower severity of completer abort error reporting as
* the VFs can trigger this any time they read a queue
* that they don't own.
*/
fm10k_mask_aer_comp_abort(pdev);
/* allocate hardware resources for the VFs */
hw->iov.ops.assign_resources(hw, num_vfs, num_vfs);
@ -224,7 +365,7 @@ int fm10k_iov_resume(struct pci_dev *pdev)
struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
/* allocate all but the last GLORT to the VFs */
if (i == ((~hw->mac.dglort_map) >> FM10K_DGLORTMAP_MASK_SHIFT))
if (i == (~hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT))
break;
/* assign GLORT to VF, and restrict it to multicast */
@ -348,20 +489,6 @@ void fm10k_iov_disable(struct pci_dev *pdev)
fm10k_iov_free_data(pdev);
}
static void fm10k_disable_aer_comp_abort(struct pci_dev *pdev)
{
u32 err_sev;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
if (!pos)
return;
pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, &err_sev);
err_sev &= ~PCI_ERR_UNC_COMP_ABORT;
pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, err_sev);
}
int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs)
{
int current_vfs = pci_num_vf(pdev);
@ -382,13 +509,7 @@ int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs)
return err;
/* allocate VFs if not already allocated */
if (num_vfs && (num_vfs != current_vfs)) {
/* Disable completer abort error reporting as
* the VFs can trigger this any time they read a queue
* that they don't own.
*/
fm10k_disable_aer_comp_abort(pdev);
if (num_vfs && num_vfs != current_vfs) {
err = pci_enable_sriov(pdev, num_vfs);
if (err) {
dev_err(&pdev->dev,
@ -412,6 +533,8 @@ static inline void fm10k_reset_vf_info(struct fm10k_intfc *interface,
/* disable LPORT for this VF which clears switch rules */
hw->iov.ops.reset_lport(hw, vf_info);
fm10k_clear_macvlan_queue(interface, vf_info->glort, false);
/* assign new MAC+VLAN for this VF */
hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
@ -445,8 +568,13 @@ int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac)
return 0;
}
#ifdef IFLA_VF_VLAN_INFO_MAX
int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
u8 qos, __be16 vlan_proto)
#else
int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
u8 qos)
#endif
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_iov_data *iov_data = interface->iov_data;
@ -461,6 +589,12 @@ int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
if (qos || (vid > (VLAN_VID_MASK - 1)))
return -EINVAL;
#ifdef IFLA_VF_VLAN_INFO_MAX
/* VF VLAN Protocol part to default is unsupported */
if (vlan_proto != htons(ETH_P_8021Q))
return -EPROTONOSUPPORT;
#endif
vf_info = &iov_data->vf_info[vf_idx];
/* exit if there is nothing to do */
@ -480,9 +614,9 @@ int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx,
int __always_unused unused, int rate)
int __always_unused min_rate, int max_rate)
#else
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate)
int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int max_rate)
#endif
{
struct fm10k_intfc *interface = netdev_priv(netdev);
@ -494,14 +628,15 @@ int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate)
return -EINVAL;
/* rate limit cannot be less than 10Mbs or greater than link speed */
if (rate && ((rate < FM10K_VF_TC_MIN) || rate > FM10K_VF_TC_MAX))
if (max_rate &&
(max_rate < FM10K_VF_TC_MIN || max_rate > FM10K_VF_TC_MAX))
return -EINVAL;
/* store values */
iov_data->vf_info[vf_idx].rate = rate;
iov_data->vf_info[vf_idx].rate = max_rate;
/* update hardware configuration */
hw->iov.ops.configure_tc(hw, vf_idx, rate);
hw->iov.ops.configure_tc(hw, vf_idx, max_rate);
return 0;
}

View file

@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include <linux/types.h>
#include <linux/module.h>
@ -30,13 +13,13 @@
#include "fm10k.h"
#define DRV_VERSION "0.20.1"
#define DRV_VERSION "0.26.1"
#define DRV_SUMMARY "Intel(R) Ethernet Switch Host Interface Driver"
const char fm10k_driver_version[] = DRV_VERSION;
char fm10k_driver_name[] = "fm10k";
static const char fm10k_driver_string[] = DRV_SUMMARY;
static const char fm10k_copyright[] =
"Copyright(c) 2013 - 2016 Intel Corporation.";
"Copyright(c) 2013 - 2018 Intel Corporation.";
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION(DRV_SUMMARY);
@ -58,7 +41,8 @@ static int __init fm10k_init_module(void)
pr_info("%s\n", fm10k_copyright);
/* create driver workqueue */
fm10k_workqueue = create_workqueue("fm10k");
fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
fm10k_driver_name);
dev_add_pack(&ies_packet_type);
@ -83,7 +67,6 @@ static void __exit fm10k_exit_module(void)
dev_remove_pack(&ies_packet_type);
/* destroy driver workqueue */
flush_workqueue(fm10k_workqueue);
destroy_workqueue(fm10k_workqueue);
#ifdef HAVE_KFREE_RCU_BARRIER
rcu_barrier();
@ -252,7 +235,7 @@ static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
/* Even if we own the page, we are not allowed to use atomic_set()
* This would break get_page_unless_zero() users.
*/
atomic_inc(&page->_count);
page_ref_inc(page);
return true;
}
@ -260,6 +243,7 @@ static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
/**
* fm10k_add_rx_frag - Add contents of Rx buffer to sk_buff
* @rx_buffer: buffer containing page to add
* @size: packet size from rx_desc
* @rx_desc: descriptor containing length of buffer written by hardware
* @skb: sk_buff to place the data into
*
@ -272,16 +256,16 @@ static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
* true if the buffer can be reused by the interface.
**/
static bool fm10k_add_rx_frag(struct fm10k_rx_buffer *rx_buffer,
unsigned int size,
union fm10k_rx_desc *rx_desc,
struct sk_buff *skb)
{
struct page *page = rx_buffer->page;
unsigned char *va = page_address(page) + rx_buffer->page_offset;
unsigned int size = le16_to_cpu(rx_desc->w.length);
#if (PAGE_SIZE < 8192)
unsigned int truesize = FM10K_RX_BUFSZ;
#else
unsigned int truesize = SKB_DATA_ALIGN(size);
unsigned int truesize = ALIGN(size, 512);
#endif
unsigned int pull_len;
@ -323,6 +307,7 @@ static struct sk_buff *fm10k_fetch_rx_buffer(struct fm10k_ring *rx_ring,
union fm10k_rx_desc *rx_desc,
struct sk_buff *skb)
{
unsigned int size = le16_to_cpu(rx_desc->w.length);
struct fm10k_rx_buffer *rx_buffer;
struct page *page;
@ -359,11 +344,11 @@ static struct sk_buff *fm10k_fetch_rx_buffer(struct fm10k_ring *rx_ring,
dma_sync_single_range_for_cpu(rx_ring->dev,
rx_buffer->dma,
rx_buffer->page_offset,
FM10K_RX_BUFSZ,
size,
DMA_FROM_DEVICE);
/* pull page into skb */
if (fm10k_add_rx_frag(rx_buffer, rx_desc, skb)) {
if (fm10k_add_rx_frag(rx_buffer, size, rx_desc, skb)) {
/* hand second half of page back to the ring */
fm10k_reuse_rx_page(rx_ring, rx_buffer);
} else {
@ -399,13 +384,9 @@ static inline void fm10k_rx_checksum(struct fm10k_ring *ring,
}
/* It must be a TCP or UDP packet with a valid checksum */
#ifdef HAVE_VXLAN_RX_OFFLOAD
if (fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS2))
skb->encapsulation = true;
else if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS))
#else
if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS))
#endif
return;
skb->ip_summed = CHECKSUM_UNNECESSARY;
@ -459,6 +440,15 @@ static void fm10k_type_trans(struct fm10k_ring *rx_ring,
}
#endif
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
/* Record Rx queue, or update macvlan statistics */
if (!l2_accel)
skb_record_rx_queue(skb, rx_ring->queue_index);
else
macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, true,
false);
#endif
/* If we are not an IES interface or are and the packet is data-plane
* traffic (i.e. has a known DGLORT) then just use eth_type_trans
*/
@ -468,16 +458,6 @@ static void fm10k_type_trans(struct fm10k_ring *rx_ring,
skb->protocol = eth_type_trans(skb, dev);
else
skb->protocol = ies_type_trans(skb);
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
if (!l2_accel)
return;
/* update MACVLAN statistics */
macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, 1,
!!(rx_desc->w.hdr_info &
cpu_to_le16(FM10K_RXD_HDR_INFO_XC_MASK)));
#endif
}
/**
@ -500,9 +480,9 @@ static unsigned int fm10k_process_skb_fields(struct fm10k_ring *rx_ring,
fm10k_rx_checksum(rx_ring, rx_desc, skb);
FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
FM10K_CB(skb)->tstamp = rx_desc->q.timestamp;
skb_record_rx_queue(skb, rx_ring->queue_index);
FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
FM10K_CB(skb)->fi.d.glort = rx_desc->d.glort;
@ -710,11 +690,11 @@ static int fm10k_clean_rx_irq(struct fm10k_q_vector *q_vector,
static struct ethhdr *fm10k_port_is_vxlan(struct sk_buff *skb)
{
struct fm10k_intfc *interface = netdev_priv(skb->dev);
struct fm10k_vxlan_port *vxlan_port;
struct fm10k_udp_port *vxlan_port;
/* we can only offload a vxlan if we recognize it as such */
vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_vxlan_port, list);
struct fm10k_udp_port, list);
if (!vxlan_port)
return NULL;
@ -875,9 +855,10 @@ static int fm10k_tso(struct fm10k_ring *tx_ring,
return 1;
#ifdef HAVE_ENCAP_TSO_OFFLOAD
err_vxlan:
tx_ring->netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
if (!net_ratelimit())
if (net_ratelimit())
netdev_err(tx_ring->netdev,
"TSO requested for unsupported tunnel, disabling offload\n");
return -1;
@ -894,6 +875,10 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
struct ipv6hdr *ipv6;
u8 *raw;
} network_hdr;
#ifdef HAVE_ENCAP_TSO_OFFLOAD
u8 *transport_hdr;
__be16 frag_off;
#endif
__be16 protocol;
u8 l4_hdr = 0;
@ -912,10 +897,16 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
goto no_csum;
}
network_hdr.raw = skb_inner_network_header(skb);
#ifdef HAVE_ENCAP_TSO_OFFLOAD
transport_hdr = skb_inner_transport_header(skb);
#endif
} else {
#endif /* HAVE_ENCAP_CSUM_OFFLOAD */
protocol = vlan_get_protocol(skb);
network_hdr.raw = skb_network_header(skb);
#ifdef HAVE_ENCAP_TSO_OFFLOAD
transport_hdr = skb_transport_header(skb);
#endif
#ifdef HAVE_ENCAP_CSUM_OFFLOAD
}
#endif /* HAVE_ENCAP_CSUM_OFFLOAD */
@ -926,15 +917,19 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
break;
case htons(ETH_P_IPV6):
l4_hdr = network_hdr.ipv6->nexthdr;
#ifdef HAVE_ENCAP_TSO_OFFLOAD
if (likely((transport_hdr - network_hdr.raw) ==
sizeof(struct ipv6hdr)))
break;
ipv6_skip_exthdr(skb, network_hdr.raw - skb->data +
sizeof(struct ipv6hdr),
&l4_hdr, &frag_off);
if (unlikely(frag_off))
l4_hdr = NEXTHDR_FRAGMENT;
#endif
break;
default:
if (unlikely(net_ratelimit())) {
dev_warn(tx_ring->dev,
"partial checksum but ip version=%x!\n",
protocol);
}
tx_ring->tx_stats.csum_err++;
goto no_csum;
break;
}
switch (l4_hdr) {
@ -946,12 +941,22 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
if (skb->encapsulation)
break;
#endif
/* fall through */
default:
#ifdef HAVE_ENCAP_TSO_OFFLOAD
if (unlikely(net_ratelimit())) {
dev_warn(tx_ring->dev,
"partial checksum, version=%d l4 proto=%x\n",
protocol, l4_hdr);
}
skb_checksum_help(skb);
#else
if (unlikely(net_ratelimit())) {
dev_warn(tx_ring->dev,
"partial checksum but l4 proto=%x!\n",
l4_hdr);
}
#endif /* HAVE_ENCAP_TSO_OFFLOAD */
tx_ring->tx_stats.csum_err++;
goto no_csum;
}
@ -1218,11 +1223,24 @@ static u64 fm10k_get_tx_completed(struct fm10k_ring *ring)
return ring->stats.packets;
}
static u64 fm10k_get_tx_pending(struct fm10k_ring *ring)
/**
* fm10k_get_tx_pending - how many Tx descriptors not processed
* @ring: the ring structure
* @in_sw: is tx_pending being checked in SW or in HW?
*/
u64 fm10k_get_tx_pending(struct fm10k_ring *ring, bool in_sw)
{
/* use SW head and tail until we have real hardware */
u32 head = ring->next_to_clean;
u32 tail = ring->next_to_use;
struct fm10k_intfc *interface = ring->q_vector->interface;
struct fm10k_hw *hw = &interface->hw;
u32 head, tail;
if (likely(in_sw)) {
head = ring->next_to_clean;
tail = ring->next_to_use;
} else {
head = fm10k_read_reg(hw, FM10K_TDH(ring->reg_idx));
tail = fm10k_read_reg(hw, FM10K_TDT(ring->reg_idx));
}
return ((head <= tail) ? tail : tail + ring->count) - head;
}
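The wrap-aware subtraction above is worth spelling out: when the tail index has wrapped around the ring past the head, the raw difference goes negative, so the tail is first unwrapped by the ring size. A minimal userspace sketch (with a hypothetical `ring_sketch` type standing in for `struct fm10k_ring`):

```c
/* Sketch of the pending-descriptor count used by fm10k_get_tx_pending():
 * if head > tail, tail has wrapped, so add the ring size before subtracting.
 */
struct ring_sketch {
	unsigned int count;         /* total descriptors in the ring */
	unsigned int next_to_clean; /* SW head */
	unsigned int next_to_use;   /* SW tail */
};

static unsigned int pending(const struct ring_sketch *r)
{
	unsigned int head = r->next_to_clean;
	unsigned int tail = r->next_to_use;

	return ((head <= tail) ? tail : tail + r->count) - head;
}
```

The same expression works for the hardware TDH/TDT register values, since they index the same ring.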
@ -1231,7 +1249,7 @@ bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring)
{
u32 tx_done = fm10k_get_tx_completed(tx_ring);
u32 tx_done_old = tx_ring->tx_stats.tx_done_old;
u32 tx_pending = fm10k_get_tx_pending(tx_ring);
u32 tx_pending = fm10k_get_tx_pending(tx_ring, true);
clear_check_for_tx_hang(tx_ring);
@ -1247,13 +1265,13 @@ bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring)
/* update completed stats and continue */
tx_ring->tx_stats.tx_done_old = tx_done;
/* reset the countdown */
clear_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
clear_bit(__FM10K_HANG_CHECK_ARMED, tx_ring->state);
return false;
}
/* make sure it is true for two checks in a row */
return test_and_set_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
return test_and_set_bit(__FM10K_HANG_CHECK_ARMED, tx_ring->state);
}
/**
@ -1263,9 +1281,9 @@ bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring)
void fm10k_tx_timeout_reset(struct fm10k_intfc *interface)
{
/* Do the reset outside of interrupt context */
if (!test_bit(__FM10K_DOWN, &interface->state)) {
if (!test_bit(__FM10K_DOWN, interface->state)) {
interface->tx_timeout_count++;
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
fm10k_service_event_schedule(interface);
}
}
@ -1286,7 +1304,7 @@ static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
unsigned int budget = q_vector->tx.work_limit;
unsigned int i = tx_ring->next_to_clean;
if (test_bit(__FM10K_DOWN, &interface->state))
if (test_bit(__FM10K_DOWN, interface->state))
return true;
tx_buffer = &tx_ring->tx_buffer[i];
@ -1301,7 +1319,7 @@ static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
break;
/* prevent any other reads prior to eop_desc */
read_barrier_depends();
smp_rmb();
/* if DD is not set pending work has not been completed */
if (!(eop_desc->flags & FM10K_TXD_FLAG_DONE))
@ -1416,7 +1434,7 @@ static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
smp_mb();
if (__netif_subqueue_stopped(tx_ring->netdev,
tx_ring->queue_index) &&
!test_bit(__FM10K_DOWN, &interface->state)) {
!test_bit(__FM10K_DOWN, interface->state)) {
netif_wake_subqueue(tx_ring->netdev,
tx_ring->queue_index);
++tx_ring->tx_stats.restart_queue;
@ -1485,7 +1503,7 @@ static void fm10k_update_itr(struct fm10k_ring_container *ring_container)
* that the calculation will never get below a 1. The bit shift
* accounts for changes in the ITR due to PCIe link speed.
*/
itr_round = ACCESS_ONCE(ring_container->itr_scale) + 8;
itr_round = READ_ONCE(ring_container->itr_scale) + 8;
avg_wire_size += BIT(itr_round) - 1;
avg_wire_size >>= itr_round;
@ -1561,7 +1579,7 @@ static int fm10k_poll(struct napi_struct *napi, int budget)
/* re-enable the q_vector */
fm10k_qv_enable(q_vector);
return 0;
return min(work_done, budget - 1);
}
/**
@ -1948,7 +1966,7 @@ static int fm10k_init_msix_capability(struct fm10k_intfc *interface)
if (v_budget < 0) {
kfree(interface->msix_entries);
interface->msix_entries = NULL;
return -ENOMEM;
return v_budget;
}
/* record the number of queues available for q_vectors */
@ -2025,14 +2043,13 @@ static void fm10k_assign_rings(struct fm10k_intfc *interface)
static void fm10k_init_reta(struct fm10k_intfc *interface)
{
u16 i, rss_i = interface->ring_feature[RING_F_RSS].indices;
struct net_device *netdev = interface->netdev;
u32 reta, *indir;
u32 reta;
/* If the Rx flow indirection table has been configured manually, we
* need to maintain it when possible.
*/
#ifndef IFF_RXFH_CONFIGURED
if (interface->flags & FM10K_FLAG_RXFH_CONFIGURED) {
if (test_bit(FM10K_FLAG_RXFH_CONFIGURED, interface->flags)) {
#else
if (netif_is_rxfh_configured(interface->netdev)) {
#endif
@ -2045,7 +2062,8 @@ static void fm10k_init_reta(struct fm10k_intfc *interface)
continue;
#ifndef IFF_RXFH_CONFIGURED
interface->flags &= ~FM10K_FLAG_RXFH_CONFIGURED;
clear_bit(FM10K_FLAG_RXFH_CONFIGURED,
interface->flags);
#else
/* this should never happen */
#endif
@ -2059,16 +2077,7 @@ static void fm10k_init_reta(struct fm10k_intfc *interface)
}
repopulate_reta:
indir = kcalloc(fm10k_get_reta_size(netdev),
sizeof(indir[0]), GFP_KERNEL);
/* generate redirection table using the default kernel policy */
for (i = 0; i < fm10k_get_reta_size(netdev); i++)
indir[i] = ethtool_rxfh_indir_default(i, rss_i);
fm10k_write_reta(interface, indir);
kfree(indir);
fm10k_write_reta(interface, NULL);
}
/**


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k_common.h"
@ -1586,7 +1569,7 @@ s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0);
break;
}
/* fallthough */
/* fall through */
default:
return FM10K_MBX_ERR_NO_MBX;
}
@ -2011,9 +1994,10 @@ static void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw,
* function can also be used to respond to an error as the connection
* resetting would also be a means of dealing with errors.
**/
static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
struct fm10k_mbx_info *mbx)
static s32 fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
struct fm10k_mbx_info *mbx)
{
s32 err = 0;
const enum fm10k_mbx_state state = mbx->state;
switch (state) {
@ -2026,6 +2010,7 @@ static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
case FM10K_STATE_OPEN:
/* flush any incomplete work */
fm10k_sm_mbx_connect_reset(mbx);
err = FM10K_ERR_RESET_REQUESTED;
break;
case FM10K_STATE_CONNECT:
/* Update remote value to match local value */
@ -2035,6 +2020,8 @@ static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
}
fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail);
return err;
}
/**
@ -2115,7 +2102,7 @@ static s32 fm10k_sm_mbx_process(struct fm10k_hw *hw,
switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) {
case 0:
fm10k_sm_mbx_process_reset(hw, mbx);
err = fm10k_sm_mbx_process_reset(hw, mbx);
break;
case FM10K_SM_MBX_VERSION:
err = fm10k_sm_mbx_process_version_1(hw, mbx);


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_MBX_H_
#define _FM10K_MBX_H_
@ -41,6 +24,8 @@ struct fm10k_mbx_info;
#define FM10K_MBX_ACK_INTERRUPT 0x00000010
#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020
#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040
#define FM10K_MBX_GLOBAL_REQ_INTERRUPT 0x00000200
#define FM10K_MBX_GLOBAL_ACK_INTERRUPT 0x00000400
#define FM10K_MBICR(_n) ((_n) + 0x18840)
#define FM10K_GMBX 0x18842


@ -1,28 +1,20 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"
#include <linux/vmalloc.h>
#ifdef HAVE_VXLAN_CHECKS
#ifdef HAVE_VXLAN_RX_OFFLOAD
#include <net/vxlan.h>
#endif /* HAVE_VXLAN_CHECKS */
#endif /* HAVE_VXLAN_RX_OFFLOAD */
#ifdef HAVE_GENEVE_RX_OFFLOAD
#include <net/geneve.h>
#endif
#ifdef HAVE_UDP_ENC_RX_OFFLOAD
#include <net/udp_tunnel.h>
#endif /* HAVE_UDP_ENC_RX_OFFLOAD */
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
#include <linux/if_macvlan.h>
#endif /* NETIF_F_HW_L2FW_DOFFLOAD */
/**
* fm10k_setup_tx_resources - allocate Tx resources (Descriptors)
@ -386,129 +378,255 @@ static void fm10k_request_glort_range(struct fm10k_intfc *interface)
}
/**
* fm10k_del_vxlan_port_all
* fm10k_free_udp_port_info
* @interface: board private structure
*
* This function frees the entire vxlan_port list
* This function frees both geneve_port and vxlan_port structures
**/
static void fm10k_del_vxlan_port_all(struct fm10k_intfc *interface)
static void fm10k_free_udp_port_info(struct fm10k_intfc *interface)
{
struct fm10k_vxlan_port *vxlan_port;
struct fm10k_udp_port *port;
/* flush all entries from list */
vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_vxlan_port, list);
while (vxlan_port) {
list_del(&vxlan_port->list);
kfree(vxlan_port);
vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_vxlan_port,
list);
/* flush all entries from vxlan list */
port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_udp_port, list);
while (port) {
list_del(&port->list);
kfree(port);
port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_udp_port,
list);
}
/* flush all entries from geneve list */
port = list_first_entry_or_null(&interface->geneve_port,
struct fm10k_udp_port, list);
while (port) {
list_del(&port->list);
kfree(port);
port = list_first_entry_or_null(&interface->geneve_port,
struct fm10k_udp_port,
list);
}
}
/**
* fm10k_restore_vxlan_port
* fm10k_restore_udp_port_info
* @interface: board private structure
*
* This function restores the value in the tunnel_cfg register after reset
* This function restores the value in the tunnel_cfg register(s) after reset
**/
static void fm10k_restore_vxlan_port(struct fm10k_intfc *interface)
static void fm10k_restore_udp_port_info(struct fm10k_intfc *interface)
{
struct fm10k_hw *hw = &interface->hw;
struct fm10k_vxlan_port *vxlan_port;
struct fm10k_udp_port *port;
/* only the PF supports configuring tunnels */
if (hw->mac.type != fm10k_mac_pf)
return;
vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_vxlan_port, list);
port = list_first_entry_or_null(&interface->vxlan_port,
struct fm10k_udp_port, list);
/* restore tunnel configuration register */
fm10k_write_reg(hw, FM10K_TUNNEL_CFG,
(vxlan_port ? ntohs(vxlan_port->port) : 0) |
(port ? ntohs(port->port) : 0) |
(ETH_P_TEB << FM10K_TUNNEL_CFG_NVGRE_SHIFT));
port = list_first_entry_or_null(&interface->geneve_port,
struct fm10k_udp_port, list);
/* restore Geneve tunnel configuration register */
fm10k_write_reg(hw, FM10K_TUNNEL_CFG_GENEVE,
(port ? ntohs(port->port) : 0));
}
static struct fm10k_udp_port *
fm10k_remove_tunnel_port(struct list_head *ports,
struct udp_tunnel_info *ti)
{
struct fm10k_udp_port *port;
list_for_each_entry(port, ports, list) {
if ((port->port == ti->port) &&
(port->sa_family == ti->sa_family)) {
list_del(&port->list);
return port;
}
}
return NULL;
}
static void fm10k_insert_tunnel_port(struct list_head *ports,
struct udp_tunnel_info *ti)
{
struct fm10k_udp_port *port;
/* remove existing port entry from the list so that the newest items
* are always at the tail of the list.
*/
port = fm10k_remove_tunnel_port(ports, ti);
if (!port) {
port = kmalloc(sizeof(*port), GFP_ATOMIC);
if (!port)
return;
port->port = ti->port;
port->sa_family = ti->sa_family;
}
list_add_tail(&port->list, ports);
}
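The remove-then-append step in fm10k_insert_tunnel_port() keeps the oldest entry — the one actually programmed into hardware — at the head, while a re-added port moves to the tail instead of duplicating. A userspace sketch of that pattern, using a hypothetical singly-linked `port_node` type in place of the kernel's `list_head`:

```c
#include <stdlib.h>

/* Sketch: adds first remove any entry with the same key, then append at the
 * tail, so the head is always the oldest (i.e. currently offloaded) port.
 */
struct port_node {
	unsigned short port;
	struct port_node *next;
};

static struct port_node *remove_port(struct port_node **head, unsigned short port)
{
	struct port_node **pp;

	for (pp = head; *pp; pp = &(*pp)->next) {
		if ((*pp)->port == port) {
			struct port_node *found = *pp;

			*pp = found->next; /* unlink from the list */
			found->next = NULL;
			return found;
		}
	}
	return NULL;
}

static void add_port(struct port_node **head, unsigned short port)
{
	struct port_node *node = remove_port(head, port);
	struct port_node **pp = head;

	if (!node) {
		node = malloc(sizeof(*node));
		if (!node)
			return;
		node->port = port;
	}
	node->next = NULL;
	while (*pp) /* walk to the tail and append */
		pp = &(*pp)->next;
	*pp = node;
}
```

With this ordering, deleting the offloaded head naturally promotes the next-oldest port, which is what fm10k_restore_udp_port_info() then writes to the tunnel_cfg register.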
#ifdef HAVE_VXLAN_CHECKS
/**
* fm10k_add_vxlan_port
* @netdev: network interface device structure
* @sa_family: Address family of new port
* @port: port number used for VXLAN
* fm10k_udp_tunnel_add
* @dev: network interface device structure
* @ti: Tunnel endpoint information
*
* This function is called when a new VXLAN interface has added a new port
* number to the range that is currently in use for VXLAN. The new port
* number is always added to the tail so that the port number list should
* match the order in which the ports were allocated. The head of the list
* is always used as the VXLAN port number for offloads.
* This function is called when a new UDP tunnel port has been added.
* Due to hardware restrictions, only one port per type can be offloaded at
* once.
**/
static void fm10k_add_vxlan_port(struct net_device *dev,
sa_family_t sa_family, __be16 port) {
__maybe_unused
static void fm10k_udp_tunnel_add(struct net_device *dev,
struct udp_tunnel_info *ti)
{
struct fm10k_intfc *interface = netdev_priv(dev);
struct fm10k_vxlan_port *vxlan_port;
/* only the PF supports configuring tunnels */
if (interface->hw.mac.type != fm10k_mac_pf)
return;
/* existing ports are pulled out so our new entry is always last */
fm10k_vxlan_port_for_each(vxlan_port, interface) {
if ((vxlan_port->port == port) &&
(vxlan_port->sa_family == sa_family)) {
list_del(&vxlan_port->list);
goto insert_tail;
}
switch (ti->type) {
case UDP_TUNNEL_TYPE_VXLAN:
fm10k_insert_tunnel_port(&interface->vxlan_port, ti);
break;
case UDP_TUNNEL_TYPE_GENEVE:
fm10k_insert_tunnel_port(&interface->geneve_port, ti);
break;
default:
return;
}
/* allocate memory to track ports */
vxlan_port = kmalloc(sizeof(*vxlan_port), GFP_ATOMIC);
if (!vxlan_port)
return;
vxlan_port->port = port;
vxlan_port->sa_family = sa_family;
insert_tail:
/* add new port value to list */
list_add_tail(&vxlan_port->list, &interface->vxlan_port);
fm10k_restore_vxlan_port(interface);
fm10k_restore_udp_port_info(interface);
}
/**
* fm10k_del_vxlan_port
* @netdev: network interface device structure
* @sa_family: Address family of freed port
* @port: port number used for VXLAN
* fm10k_udp_tunnel_del
* @dev: network interface device structure
* @ti: Tunnel endpoint information
*
* This function is called when a new VXLAN interface has freed a port
* number from the range that is currently in use for VXLAN. The freed
* port is removed from the list and the new head is used to determine
* the port number for offloads.
* This function is called when a new UDP tunnel port is deleted. The freed
* port will be removed from the list, then we reprogram the offloaded port
* based on the head of the list.
**/
static void fm10k_del_vxlan_port(struct net_device *dev,
sa_family_t sa_family, __be16 port) {
__maybe_unused
static void fm10k_udp_tunnel_del(struct net_device *dev,
struct udp_tunnel_info *ti)
{
struct fm10k_intfc *interface = netdev_priv(dev);
struct fm10k_vxlan_port *vxlan_port;
struct fm10k_udp_port *port = NULL;
if (interface->hw.mac.type != fm10k_mac_pf)
return;
/* find the port in the list and free it */
fm10k_vxlan_port_for_each(vxlan_port, interface) {
if ((vxlan_port->port == port) &&
(vxlan_port->sa_family == sa_family)) {
list_del(&vxlan_port->list);
kfree(vxlan_port);
break;
}
switch (ti->type) {
case UDP_TUNNEL_TYPE_VXLAN:
port = fm10k_remove_tunnel_port(&interface->vxlan_port, ti);
break;
case UDP_TUNNEL_TYPE_GENEVE:
port = fm10k_remove_tunnel_port(&interface->geneve_port, ti);
break;
default:
return;
}
fm10k_restore_vxlan_port(interface);
/* if we did remove a port we need to free its memory */
kfree(port);
fm10k_restore_udp_port_info(interface);
}
#endif /* HAVE_VXLAN_CHECKS */
#ifdef HAVE_VXLAN_RX_OFFLOAD
/**
* fm10k_add_vxlan_port
* @dev: network interface device structure
* @sa_family: Address family of added port
* @port: Port number in use for VXLAN
*
**/
static void fm10k_add_vxlan_port(struct net_device *dev,
sa_family_t sa_family, __be16 port)
{
struct udp_tunnel_info ti = {
.type = UDP_TUNNEL_TYPE_VXLAN,
.sa_family = sa_family,
.port = port,
};
fm10k_udp_tunnel_add(dev, &ti);
}
/**
* fm10k_del_vxlan_port
* @dev: network interface device structure
* @sa_family: Address family of deleted port
* @port: Port number in use for VXLAN
*
**/
static void fm10k_del_vxlan_port(struct net_device *dev,
sa_family_t sa_family, __be16 port)
{
struct udp_tunnel_info ti = {
.type = UDP_TUNNEL_TYPE_VXLAN,
.sa_family = sa_family,
.port = port,
};
fm10k_udp_tunnel_del(dev, &ti);
}
#endif /* HAVE_VXLAN_RX_OFFLOAD */
#ifdef HAVE_GENEVE_RX_OFFLOAD
/**
* fm10k_add_geneve_port
* @dev: network interface device structure
* @sa_family: Address family of added port
* @port: Port number in use for GENEVE
*
**/
static void fm10k_add_geneve_port(struct net_device *dev,
sa_family_t sa_family, __be16 port)
{
struct udp_tunnel_info ti = {
.type = UDP_TUNNEL_TYPE_GENEVE,
.sa_family = sa_family,
.port = port,
};
fm10k_udp_tunnel_add(dev, &ti);
}
/**
* fm10k_del_geneve_port
* @dev: network interface device structure
* @sa_family: Address family of deleted port
* @port: Port number in use for GENEVE
*
**/
static void fm10k_del_geneve_port(struct net_device *dev,
sa_family_t sa_family, __be16 port)
{
struct udp_tunnel_info ti = {
.type = UDP_TUNNEL_TYPE_GENEVE,
.sa_family = sa_family,
.port = port,
};
fm10k_udp_tunnel_del(dev, &ti);
}
#endif /* HAVE_GENEVE_RX_OFFLOAD */
/**
* fm10k_open - Called when a network interface is made active
* @netdev: network interface device structure
@ -555,10 +673,16 @@ int fm10k_open(struct net_device *netdev)
if (err)
goto err_set_queues;
#ifdef HAVE_VXLAN_CHECKS
#if defined(HAVE_VXLAN_CHECKS) && !defined(HAVE_UDP_ENC_RX_OFFLOAD)
/* update VXLAN port configuration */
vxlan_get_rx_port(netdev);
#endif
#if defined(HAVE_GENEVE_RX_OFFLOAD) && !defined(HAVE_UDP_ENC_RX_OFFLOAD)
geneve_get_rx_port(netdev);
#endif
#ifdef HAVE_UDP_ENC_RX_OFFLOAD
udp_tunnel_get_rx_info(netdev);
#endif
fm10k_up(interface);
@ -593,7 +717,7 @@ int fm10k_close(struct net_device *netdev)
fm10k_qv_free_irq(interface);
fm10k_del_vxlan_port_all(interface);
fm10k_free_udp_port_info(interface);
fm10k_free_all_tx_resources(interface);
fm10k_free_all_rx_resources(interface);
@ -604,9 +728,13 @@ int fm10k_close(struct net_device *netdev)
static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
{
struct fm10k_intfc *interface = netdev_priv(dev);
int num_tx_queues = READ_ONCE(interface->num_tx_queues);
unsigned int r_idx = skb->queue_mapping;
int err;
if (!num_tx_queues)
return NETDEV_TX_BUSY;
if ((skb->protocol == htons(ETH_P_8021Q)) &&
!skb_vlan_tag_present(skb)) {
/* FM10K only supports hardware tagging, any tags in frame
@ -659,8 +787,8 @@ static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
__skb_put(skb, pad_len);
}
if (r_idx >= interface->num_tx_queues)
r_idx %= interface->num_tx_queues;
if (r_idx >= num_tx_queues)
r_idx %= num_tx_queues;
err = fm10k_xmit_frame_ring(skb, interface->tx_ring[r_idx]);
#ifndef HAVE_TRANS_START_IN_QUEUE
@ -671,6 +799,7 @@ static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
return err;
}
#ifndef HAVE_NETDEVICE_MIN_MAX_MTU
static int fm10k_change_mtu(struct net_device *dev, int new_mtu)
{
if (new_mtu < 68 || new_mtu > FM10K_MAX_JUMBO_FRAME_SIZE)
@ -680,6 +809,7 @@ static int fm10k_change_mtu(struct net_device *dev, int new_mtu)
return 0;
}
#endif
/**
* fm10k_tx_timeout - Respond to a Tx Hang
@ -712,20 +842,158 @@ static void fm10k_tx_timeout(struct net_device *netdev)
}
}
/**
* fm10k_host_mbx_ready - Check PF interface's mailbox readiness
* @interface: board private structure
*
* This function checks if the PF interface's mailbox is ready before queueing
* mailbox messages for transmission. This will prevent filling the TX mailbox
* queue when the receiver is not ready. VF interfaces are exempt from this
* check since it will block all PF-VF mailbox messages from being sent from
* the VF to the PF at initialization.
**/
static bool fm10k_host_mbx_ready(struct fm10k_intfc *interface)
{
struct fm10k_hw *hw = &interface->hw;
return (hw->mac.type == fm10k_mac_vf || interface->host_ready);
}
/**
* fm10k_queue_vlan_request - Queue a VLAN update request
* @interface: the fm10k interface structure
* @vid: the VLAN vid
* @vsi: VSI index number
* @set: whether to set or clear
*
* This function queues up a VLAN update. For VFs, this must be sent to the
* managing PF over the mailbox. For PFs, we'll use the same handling so that
* it's similar to the VF. This avoids storming the PF<->VF mailbox with too
* many VLAN updates during reset.
*/
int fm10k_queue_vlan_request(struct fm10k_intfc *interface,
u32 vid, u8 vsi, bool set)
{
struct fm10k_macvlan_request *request;
unsigned long flags;
/* This must be atomic since we may be called while the netdev
* addr_list_lock is held
*/
request = kzalloc(sizeof(*request), GFP_ATOMIC);
if (!request)
return -ENOMEM;
request->type = FM10K_VLAN_REQUEST;
request->vlan.vid = vid;
request->vlan.vsi = vsi;
request->set = set;
spin_lock_irqsave(&interface->macvlan_lock, flags);
list_add_tail(&request->list, &interface->macvlan_requests);
spin_unlock_irqrestore(&interface->macvlan_lock, flags);
fm10k_macvlan_schedule(interface);
return 0;
}
/**
* fm10k_queue_mac_request - Queue a MAC update request
* @interface: the fm10k interface structure
* @glort: the target glort for this update
* @addr: the address to update
* @vid: the vid to update
* @set: whether to add or remove
*
* This function queues up a MAC request for sending to the switch manager.
* A separate thread monitors the queue and sends updates to the switch
* manager. Return 0 on success, and negative error code on failure.
**/
int fm10k_queue_mac_request(struct fm10k_intfc *interface, u16 glort,
const unsigned char *addr, u16 vid, bool set)
{
struct fm10k_macvlan_request *request;
unsigned long flags;
/* This must be atomic since we may be called while the netdev
* addr_list_lock is held
*/
request = kzalloc(sizeof(*request), GFP_ATOMIC);
if (!request)
return -ENOMEM;
if (is_multicast_ether_addr(addr))
request->type = FM10K_MC_MAC_REQUEST;
else
request->type = FM10K_UC_MAC_REQUEST;
ether_addr_copy(request->mac.addr, addr);
request->mac.glort = glort;
request->mac.vid = vid;
request->set = set;
spin_lock_irqsave(&interface->macvlan_lock, flags);
list_add_tail(&request->list, &interface->macvlan_requests);
spin_unlock_irqrestore(&interface->macvlan_lock, flags);
fm10k_macvlan_schedule(interface);
return 0;
}
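Both queue helpers above share the same shape: allocate with GFP_ATOMIC (the caller may hold the netdev addr_list_lock), classify the request, append it under the macvlan spinlock, and kick the worker. A minimal userspace sketch of that flow follows; the types and names are illustrative stand-ins, not the driver's, and a plain list stands in for the spinlock-protected kernel list:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins for the driver's request types. */
enum req_type { UC_MAC_REQUEST, MC_MAC_REQUEST };

struct mac_request {
    enum req_type type;
    unsigned char addr[6];
    unsigned short glort;
    unsigned short vid;
    bool set;
    struct mac_request *next;
};

struct intf {
    struct mac_request *head, **tail;  /* FIFO, like macvlan_requests */
    int scheduled;                     /* stands in for the worker kick */
};

static void intf_init(struct intf *i)
{
    i->head = NULL;
    i->tail = &i->head;
    i->scheduled = 0;
}

/* Mirrors the shape of fm10k_queue_mac_request(): allocate a node,
 * classify the address, append it to the queue (the driver does this
 * under macvlan_lock), then kick the worker. */
static int queue_mac_request(struct intf *i, unsigned short glort,
                             const unsigned char *addr,
                             unsigned short vid, bool set)
{
    struct mac_request *r = calloc(1, sizeof(*r));

    if (!r)
        return -1;
    /* An Ethernet address is multicast when the low bit of its first
     * octet is set (what is_multicast_ether_addr() checks). */
    r->type = (addr[0] & 1) ? MC_MAC_REQUEST : UC_MAC_REQUEST;
    memcpy(r->addr, addr, 6);
    r->glort = glort;
    r->vid = vid;
    r->set = set;

    *i->tail = r;              /* list_add_tail() under the spinlock */
    i->tail = &r->next;

    i->scheduled = 1;          /* fm10k_macvlan_schedule() */
    return 0;
}
```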
/**
* fm10k_clear_macvlan_queue - Cancel pending updates for a given glort
* @interface: the fm10k interface structure
* @glort: the target glort to clear
* @vlans: true to clear VLAN messages, false to ignore them
*
* Cancel any outstanding MAC/VLAN requests for a given glort. This is
* expected to be called when a logical port goes down.
**/
void fm10k_clear_macvlan_queue(struct fm10k_intfc *interface,
u16 glort, bool vlans)
{
struct fm10k_macvlan_request *r, *tmp;
unsigned long flags;
spin_lock_irqsave(&interface->macvlan_lock, flags);
/* Free any outstanding MAC/VLAN requests for this interface */
list_for_each_entry_safe(r, tmp, &interface->macvlan_requests, list) {
switch (r->type) {
case FM10K_MC_MAC_REQUEST:
case FM10K_UC_MAC_REQUEST:
/* Don't free requests for other interfaces */
if (r->mac.glort != glort)
break;
/* fall through */
case FM10K_VLAN_REQUEST:
if (vlans) {
list_del(&r->list);
kfree(r);
}
break;
}
}
spin_unlock_irqrestore(&interface->macvlan_lock, flags);
}
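The switch above relies on a fall-through: a MAC request for a different glort is kept, while a matching MAC request falls into the VLAN case and is freed under the same @vlans check as VLAN requests. A standalone sketch of that filtering; the node layout is illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative request-list node. */
enum req_type { MAC_REQUEST, VLAN_REQUEST };

struct request {
    enum req_type type;
    unsigned short glort;   /* only meaningful for MAC_REQUEST */
    struct request *next;
};

/* Mirrors the fall-through logic in fm10k_clear_macvlan_queue(): a MAC
 * request for another glort is kept; a matching MAC request falls into
 * the VLAN case and, like a VLAN request, is freed only when @vlans is
 * true. Returns how many nodes were freed. */
static int clear_queue(struct request **head, unsigned short glort, bool vlans)
{
    struct request **pp = head;
    int freed = 0;

    while (*pp) {
        struct request *r = *pp;
        bool drop;

        if (r->type == MAC_REQUEST)
            drop = (r->glort == glort) && vlans;
        else
            drop = vlans;

        if (drop) {
            *pp = r->next;     /* list_del() + kfree() in the driver */
            free(r);
            freed++;
        } else {
            pp = &r->next;
        }
    }
    return freed;
}
```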
static int fm10k_uc_vlan_unsync(struct net_device *netdev,
const unsigned char *uc_addr)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_hw *hw = &interface->hw;
u16 glort = interface->glort;
u16 vid = interface->vid;
bool set = !!(vid / VLAN_N_VID);
int err;
int err = -EHOSTDOWN;
/* drop any leading bits on the VLAN ID */
vid &= VLAN_N_VID - 1;
err = hw->mac.ops.update_uc_addr(hw, glort, uc_addr, vid, set, 0);
err = fm10k_queue_mac_request(interface, glort, uc_addr, vid, set);
if (err)
return err;
@@ -737,16 +1005,15 @@ static int fm10k_mc_vlan_unsync(struct net_device *netdev,
const unsigned char *mc_addr)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_hw *hw = &interface->hw;
u16 glort = interface->glort;
u16 vid = interface->vid;
bool set = !!(vid / VLAN_N_VID);
int err;
int err = -EHOSTDOWN;
/* drop any leading bits on the VLAN ID */
vid &= VLAN_N_VID - 1;
err = hw->mac.ops.update_mc_addr(hw, glort, mc_addr, vid, set);
err = fm10k_queue_mac_request(interface, glort, mc_addr, vid, set);
if (err)
return err;
@@ -757,11 +1024,15 @@ static int fm10k_mc_vlan_unsync(struct net_device *netdev,
static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_hw *hw = &interface->hw;
s32 err;
#ifndef HAVE_VLAN_RX_REGISTER
int i;
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
struct fm10k_l2_accel *l2_accel = interface->l2_accel;
#endif
struct fm10k_hw *hw = &interface->hw;
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
u16 glort;
#endif
s32 err;
int i;
/* updates do not apply to VLAN 0 */
if (!vid)
@@ -770,11 +1041,14 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
if (vid >= VLAN_N_VID)
return -EINVAL;
/* Verify we have permission to add VLANs */
if (hw->mac.vlan_override)
/* Verify that we have permission to add VLANs. If this is a request
* to remove a VLAN, we still want to allow the user to remove the
* VLAN device. In that case, we need to clear the bit in the
* active_vlans bitmask.
*/
if (set && hw->mac.vlan_override)
return -EACCES;
#ifndef HAVE_VLAN_RX_REGISTER
/* update active_vlans bitmask */
set_bit(vid, interface->active_vlans);
if (!set)
@@ -790,7 +1064,12 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
else
rx_ring->vid &= ~FM10K_VLAN_CLEAR;
}
#endif
/* If our VLAN has been overridden, there is no reason to send VLAN
* removal requests as they will be silently ignored.
*/
if (hw->mac.vlan_override)
return 0;
/* Do not remove default VLAN ID related entries from VLAN and MAC
* tables
@@ -801,7 +1080,7 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
/* Do not throw an error if the interface is down. We will sync once
* we come up
*/
if (test_bit(__FM10K_DOWN, &interface->state))
if (test_bit(__FM10K_DOWN, interface->state))
return 0;
fm10k_mbx_lock(interface);
@@ -811,17 +1090,35 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
*/
if (!(netdev->flags & IFF_PROMISC || fm10k_is_ies(netdev)) ||
hw->mac.type == fm10k_mac_vf) {
err = hw->mac.ops.update_vlan(hw, vid, 0, set);
err = fm10k_queue_vlan_request(interface, vid, 0, set);
if (err)
goto err_out;
}
/* update our base MAC address */
err = hw->mac.ops.update_uc_addr(hw, interface->glort, hw->mac.addr,
vid, set, 0);
/* Update our base MAC address */
err = fm10k_queue_mac_request(interface, interface->glort,
hw->mac.addr, vid, set);
if (err)
goto err_out;
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
/* Update L2 accelerated macvlan addresses */
if (l2_accel) {
for (i = 0; i < l2_accel->size; i++) {
struct net_device *sdev = l2_accel->macvlan[i];
if (!sdev)
continue;
glort = l2_accel->dglort + 1 + i;
fm10k_queue_mac_request(interface, glort,
sdev->dev_addr,
vid, set);
}
}
#endif
/* set VLAN ID prior to syncing/unsyncing the VLAN */
interface->vid = vid + (set ? VLAN_N_VID : 0);
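The stored interface->vid packs the add/remove flag above the 12-bit VLAN ID: any value of VLAN_N_VID or more means "add". The uc/mc unsync callbacks earlier in this file decode it with `!!(vid / VLAN_N_VID)` and `vid &= VLAN_N_VID - 1`. A small sketch of the round trip (userspace, with VLAN_N_VID defined locally since the kernel header is not available here):

```c
#include <assert.h>
#include <stdbool.h>

#define VLAN_N_VID 4096  /* from <linux/if_vlan.h>; valid VIDs are 0..4095 */

/* Pack the set/clear flag above the 12-bit VLAN ID, as
 * fm10k_update_vid() does when storing interface->vid. */
static unsigned int encode_vid(unsigned int vid, bool set)
{
    return vid + (set ? VLAN_N_VID : 0);
}

/* Recover both fields, as the uc/mc unsync callbacks do. */
static void decode_vid(unsigned int packed, unsigned int *vid, bool *set)
{
    *set = !!(packed / VLAN_N_VID);     /* anything >= 4096 means "add" */
    *vid = packed & (VLAN_N_VID - 1);   /* drop any leading bits */
}
```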
@@ -907,7 +1204,6 @@ static u16 fm10k_find_next_vlan(struct fm10k_intfc *interface, u16 vid)
static void fm10k_clear_unused_vlans(struct fm10k_intfc *interface)
{
struct fm10k_hw *hw = &interface->hw;
u32 vid, prev_vid;
/* loop through and find any gaps in the table */
@@ -919,7 +1215,7 @@ static void fm10k_clear_unused_vlans(struct fm10k_intfc *interface)
/* send request to clear multiple bits at a time */
prev_vid += (vid - prev_vid - 1) << FM10K_VLAN_LENGTH_SHIFT;
hw->mac.ops.update_vlan(hw, prev_vid, 0, false);
fm10k_queue_vlan_request(interface, prev_vid, 0, false);
}
}
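fm10k_clear_unused_vlans() clears a whole run of unused VIDs with a single queued request by packing the count of additional VIDs above the starting VID. The sketch below shows the encoding; the shift value used here is an illustrative assumption (the driver's real FM10K_VLAN_LENGTH_SHIFT lives in its headers):

```c
#include <assert.h>

/* Illustrative stand-in; the driver defines the real constant. */
#define VLAN_LENGTH_SHIFT 16

/* Mirrors "prev_vid += (vid - prev_vid - 1) << SHIFT": the low bits
 * keep the first unused VID, the upper bits carry how many further
 * consecutive VIDs to clear before the next in-use VID. */
static unsigned int pack_vlan_range(unsigned int prev_vid, unsigned int vid)
{
    return prev_vid + ((vid - prev_vid - 1) << VLAN_LENGTH_SHIFT);
}

static unsigned int range_start(unsigned int packed)
{
    return packed & ((1u << VLAN_LENGTH_SHIFT) - 1);
}

static unsigned int range_extra(unsigned int packed)
{
    return packed >> VLAN_LENGTH_SHIFT;
}
```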
@@ -927,19 +1223,17 @@ static int __fm10k_uc_sync(struct net_device *dev,
const unsigned char *addr, bool sync)
{
struct fm10k_intfc *interface = netdev_priv(dev);
struct fm10k_hw *hw = &interface->hw;
u16 vid, glort = interface->glort;
s32 err;
if (!is_valid_ether_addr(addr))
return -EADDRNOTAVAIL;
/* update table with current entries */
for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
for (vid = fm10k_find_next_vlan(interface, 0);
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
err = hw->mac.ops.update_uc_addr(hw, glort, addr,
vid, sync, 0);
err = fm10k_queue_mac_request(interface, glort,
addr, vid, sync);
if (err)
return err;
}
@@ -996,14 +1290,19 @@ static int __fm10k_mc_sync(struct net_device *dev,
const unsigned char *addr, bool sync)
{
struct fm10k_intfc *interface = netdev_priv(dev);
struct fm10k_hw *hw = &interface->hw;
u16 vid, glort = interface->glort;
s32 err;
/* update table with current entries */
for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
if (!is_multicast_ether_addr(addr))
return -EADDRNOTAVAIL;
for (vid = fm10k_find_next_vlan(interface, 0);
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
hw->mac.ops.update_mc_addr(hw, glort, addr, vid, sync);
err = fm10k_queue_mac_request(interface, glort,
addr, vid, sync);
if (err)
return err;
}
return 0;
@@ -1041,15 +1340,25 @@ static void fm10k_set_rx_mode(struct net_device *dev)
/* update xcast mode first, but only if it changed */
if (interface->xcast_mode != xcast_mode) {
/* update VLAN table */
if (xcast_mode == FM10K_XCAST_MODE_PROMISC || fm10k_is_ies(dev))
hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0, true);
if (interface->xcast_mode == FM10K_XCAST_MODE_PROMISC ||
fm10k_is_ies(dev))
fm10k_clear_unused_vlans(interface);
/* update VLAN table for promiscuous related changes when
* ies-tagging is not enabled
*/
if (!fm10k_is_ies(dev)) {
/* update VLAN table when entering promiscuous mode */
if (xcast_mode == FM10K_XCAST_MODE_PROMISC)
fm10k_queue_vlan_request(interface,
FM10K_VLAN_ALL,
0, true);
/* update xcast mode */
hw->mac.ops.update_xcast_mode(hw, interface->glort, xcast_mode);
/* clear VLAN table when exiting promiscuous mode */
if (interface->xcast_mode == FM10K_XCAST_MODE_PROMISC)
fm10k_clear_unused_vlans(interface);
}
/* update xcast mode if host's mailbox is ready */
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_xcast_mode(hw, interface->glort,
xcast_mode);
/* record updated xcast mode state */
interface->xcast_mode = xcast_mode;
@@ -1064,9 +1373,16 @@ static void fm10k_set_rx_mode(struct net_device *dev)
void fm10k_restore_rx_state(struct fm10k_intfc *interface)
{
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
struct fm10k_l2_accel *l2_accel = interface->l2_accel;
#endif
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
int xcast_mode, i;
#else
int xcast_mode;
#endif
u16 vid, glort;
/* record glort for this interface */
@@ -1084,43 +1400,84 @@ void fm10k_restore_rx_state(struct fm10k_intfc *interface)
fm10k_mbx_lock(interface);
/* Enable logical port */
hw->mac.ops.update_lport_state(hw, glort, interface->glort_count, true);
/* Enable logical port if host's mailbox is ready */
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_lport_state(hw, glort,
interface->glort_count, true);
if (xcast_mode == FM10K_XCAST_MODE_PROMISC || fm10k_is_ies(netdev)) {
/* Set VLAN table */
hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0, true);
fm10k_queue_vlan_request(interface, FM10K_VLAN_ALL, 0, true);
} else {
/* Clear VLAN table */
hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0, false);
/* Add filter for VLAN 0 */
hw->mac.ops.update_vlan(hw, 0, 0, true);
fm10k_queue_vlan_request(interface, FM10K_VLAN_ALL, 0, false);
}
/* update table with current entries */
for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
for (vid = fm10k_find_next_vlan(interface, 0);
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
hw->mac.ops.update_vlan(hw, vid, 0, true);
hw->mac.ops.update_uc_addr(hw, glort, hw->mac.addr,
vid, true, 0);
fm10k_queue_vlan_request(interface, vid, 0, true);
fm10k_queue_mac_request(interface, glort,
hw->mac.addr, vid, true);
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
/* synchronize macvlan addresses */
if (l2_accel) {
for (i = 0; i < l2_accel->size; i++) {
struct net_device *sdev = l2_accel->macvlan[i];
if (!sdev)
continue;
glort = l2_accel->dglort + 1 + i;
fm10k_queue_mac_request(interface, glort,
sdev->dev_addr,
vid, true);
}
}
#endif
}
/* update xcast mode before synchronizing addresses */
hw->mac.ops.update_xcast_mode(hw, glort, xcast_mode);
/* update xcast mode before synchronizing addresses if host's mailbox
* is ready
*/
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_xcast_mode(hw, glort, xcast_mode);
/* synchronize all of the addresses */
__dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
__dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
/* synchronize macvlan addresses */
if (l2_accel) {
for (i = 0; i < l2_accel->size; i++) {
struct net_device *sdev = l2_accel->macvlan[i];
if (!sdev)
continue;
glort = l2_accel->dglort + 1 + i;
hw->mac.ops.update_xcast_mode(hw, glort,
FM10K_XCAST_MODE_NONE);
fm10k_queue_mac_request(interface, glort,
sdev->dev_addr,
hw->mac.default_vid, true);
}
}
#endif /* NETIF_F_HW_L2FW_DOFFLOAD */
fm10k_mbx_unlock(interface);
/* record updated xcast mode state */
interface->xcast_mode = xcast_mode;
/* Restore tunnel configuration */
fm10k_restore_vxlan_port(interface);
fm10k_restore_udp_port_info(interface);
}
void fm10k_reset_rx_state(struct fm10k_intfc *interface)
@@ -1128,11 +1485,21 @@ void fm10k_reset_rx_state(struct fm10k_intfc *interface)
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
/* Wait for MAC/VLAN work to finish */
while (test_bit(__FM10K_MACVLAN_SCHED, interface->state))
usleep_range(1000, 2000);
/* Cancel pending MAC/VLAN requests */
fm10k_clear_macvlan_queue(interface, interface->glort, true);
fm10k_mbx_lock(interface);
/* clear the logical port state on lower device */
hw->mac.ops.update_lport_state(hw, interface->glort,
interface->glort_count, false);
/* clear the logical port state on lower device if host's mailbox is
* ready
*/
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_lport_state(hw, interface->glort,
interface->glort_count, false);
fm10k_mbx_unlock(interface);
@@ -1150,11 +1517,16 @@ void fm10k_reset_rx_state(struct fm10k_intfc *interface)
* @netdev: network interface device structure
* @stats: storage space for 64bit statistics
*
* Returns 64bit statistics, for use in the ndo_get_stats64 callback. This
* function replaces fm10k_get_stats for kernels which support it.
* Obtain 64bit statistics in a way that is safe for both 32bit and 64bit
* architectures.
*/
static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
#ifdef HAVE_VOID_NDO_GET_STATS64
static void fm10k_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
#else
static struct rtnl_link_stats64 *
fm10k_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
#endif /* HAVE_VOID_NDO_GET_STATS64 */
{
struct fm10k_intfc *interface = netdev_priv(netdev);
struct fm10k_ring *ring;
@@ -1164,7 +1536,7 @@ static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
rcu_read_lock();
for (i = 0; i < interface->num_rx_queues; i++) {
ring = ACCESS_ONCE(interface->rx_ring[i]);
ring = READ_ONCE(interface->rx_ring[i]);
if (!ring)
continue;
@@ -1180,7 +1552,7 @@ static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
}
for (i = 0; i < interface->num_tx_queues; i++) {
ring = ACCESS_ONCE(interface->tx_ring[i]);
ring = READ_ONCE(interface->tx_ring[i]);
if (!ring)
continue;
@@ -1199,8 +1571,10 @@ static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
/* following stats updated by fm10k_service_task() */
stats->rx_missed_errors = netdev->stats.rx_missed_errors;
#ifndef HAVE_VOID_NDO_GET_STATS64
return stats;
#endif
}
#else
/**
@@ -1268,7 +1642,7 @@ int fm10k_setup_tc(struct net_device *dev, u8 tc)
goto err_open;
/* flag to indicate SWPRI has yet to be updated */
interface->flags |= FM10K_FLAG_SWPRI_CONFIG;
set_bit(FM10K_FLAG_SWPRI_CONFIG, interface->flags);
return 0;
err_open:
@@ -1284,13 +1658,41 @@ err_queueing_scheme:
}
#ifdef NETIF_F_HW_TC
#if defined(HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV)
static int __fm10k_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
#elif defined(HAVE_NDO_SETUP_TC_CHAIN_INDEX)
static int __fm10k_setup_tc(struct net_device *dev, u32 handle, u32 chain_index,
__be16 proto, struct tc_to_netdev *tc)
#else
static int __fm10k_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
struct tc_to_netdev *tc)
#endif
{
if (tc->type != TC_SETUP_MQPRIO)
return -EINVAL;
#ifdef HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
struct tc_mqprio_qopt *mqprio = type_data;
#else
#ifdef TC_MQPRIO_HW_OFFLOAD_MAX
struct tc_mqprio_qopt *mqprio = tc->mqprio;
#endif /* TC_MQPRIO_HW_OFFLOAD_MAX */
unsigned int type = tc->type;
#endif /* HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV */
if (type != TC_SETUP_QDISC_MQPRIO)
return -EOPNOTSUPP;
#ifdef TC_MQPRIO_HW_OFFLOAD_MAX
mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
return fm10k_setup_tc(dev, mqprio->num_tc);
#else /* TC_MQPRIO_HW_OFFLOAD_MAX */
#ifndef HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
return fm10k_setup_tc(dev, tc->tc);
#else /* !HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV */
WARN_ONCE(1, "Unable to determine number of traffic classes, likely due to a failed partial backport.");
return -EINVAL;
#endif /* HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV */
#endif /* !TC_MQPRIO_HW_OFFLOAD_MAX */
}
#endif
@@ -1318,7 +1720,14 @@ static void *fm10k_dfwd_add_station(struct net_device *dev,
struct fm10k_dglort_cfg dglort = { 0 };
struct fm10k_hw *hw = &interface->hw;
int size = 0, i;
u16 glort;
u16 vid, glort;
/* The hardware supported by fm10k only filters on the destination MAC
* address. In order to avoid issues we only support offloading modes
* where the hardware can actually provide the functionality.
*/
if (!macvlan_supports_dest_filter(sdev))
return ERR_PTR(-EMEDIUMTYPE);
/* allocate l2 accel structure if it is not available */
if (!l2_accel) {
@@ -1383,8 +1792,19 @@ static void *fm10k_dfwd_add_station(struct net_device *dev,
fm10k_mbx_lock(interface);
glort = l2_accel->dglort + 1 + i;
hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_MULTI);
hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, true, 0);
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_xcast_mode(hw, glort,
FM10K_XCAST_MODE_NONE);
fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
hw->mac.default_vid, true);
for (vid = fm10k_find_next_vlan(interface, 0);
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid))
fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
vid, true);
fm10k_mbx_unlock(interface);
@@ -1394,12 +1814,12 @@ static void *fm10k_dfwd_add_station(struct net_device *dev,
static void fm10k_dfwd_del_station(struct net_device *dev, void *priv)
{
struct fm10k_intfc *interface = netdev_priv(dev);
struct fm10k_l2_accel *l2_accel = ACCESS_ONCE(interface->l2_accel);
struct fm10k_l2_accel *l2_accel = READ_ONCE(interface->l2_accel);
struct fm10k_dglort_cfg dglort = { 0 };
struct fm10k_hw *hw = &interface->hw;
struct net_device *sdev = priv;
u16 vid, glort;
int i;
u16 glort;
if (!l2_accel)
return;
@@ -1418,8 +1838,19 @@ static void fm10k_dfwd_del_station(struct net_device *dev, void *priv)
fm10k_mbx_lock(interface);
glort = l2_accel->dglort + 1 + i;
hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_NONE);
hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, false, 0);
if (fm10k_host_mbx_ready(interface))
hw->mac.ops.update_xcast_mode(hw, glort,
FM10K_XCAST_MODE_NONE);
fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
hw->mac.default_vid, false);
for (vid = fm10k_find_next_vlan(interface, 0);
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid))
fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
vid, false);
fm10k_mbx_unlock(interface);
@@ -1462,7 +1893,9 @@ static const struct net_device_ops fm10k_netdev_ops = {
.ndo_validate_addr = eth_validate_addr,
.ndo_start_xmit = fm10k_xmit_frame,
.ndo_set_mac_address = fm10k_set_mac,
#ifndef HAVE_NETDEVICE_MIN_MAX_MTU
.ndo_change_mtu = fm10k_change_mtu,
#endif
.ndo_tx_timeout = fm10k_tx_timeout,
.ndo_vlan_rx_add_vid = fm10k_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = fm10k_vlan_rx_kill_vid,
@@ -1475,6 +1908,7 @@ static const struct net_device_ops fm10k_netdev_ops = {
#else
.ndo_get_stats = fm10k_get_stats,
#endif /* HAVE_NDO_GET_STATS64 */
#ifndef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SETUP_TC
#ifdef HAVE_SETUP_TC
#ifdef NETIF_F_HW_TC
.ndo_setup_tc = __fm10k_setup_tc,
@@ -1482,12 +1916,15 @@ static const struct net_device_ops fm10k_netdev_ops = {
.ndo_setup_tc = fm10k_setup_tc,
#endif
#endif
#endif /* !HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SETUP_TC */
#ifndef HAVE_MQPRIO
.ndo_select_queue = __netdev_pick_tx,
#endif
#ifdef IFLA_VF_MAX
.ndo_set_vf_mac = fm10k_ndo_set_vf_mac,
#ifndef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
.ndo_set_vf_vlan = fm10k_ndo_set_vf_vlan,
#endif
#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
.ndo_set_vf_rate = fm10k_ndo_set_vf_bw,
#else
@@ -1502,14 +1939,41 @@ static const struct net_device_ops fm10k_netdev_ops = {
.ndo_fdb_dump = ndo_dflt_fdb_dump,
#endif
#endif
#ifdef HAVE_VXLAN_CHECKS
#ifdef HAVE_VXLAN_RX_OFFLOAD
.ndo_add_vxlan_port = fm10k_add_vxlan_port,
.ndo_del_vxlan_port = fm10k_del_vxlan_port,
#endif
#ifdef HAVE_GENEVE_RX_OFFLOAD
.ndo_add_geneve_port = fm10k_add_geneve_port,
.ndo_del_geneve_port = fm10k_del_geneve_port,
#endif
#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
.ndo_size = sizeof(const struct net_device_ops),
/* All ops backported into RHEL7.x must go here. Do not place any ops
* which haven't been backported here, as they will otherwise fail to
* compile
*/
.extended = {
#endif
#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
.ndo_set_vf_vlan = fm10k_ndo_set_vf_vlan,
#endif /* HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN */
#ifdef HAVE_UDP_ENC_RX_OFFLOAD
.ndo_udp_tunnel_add = fm10k_udp_tunnel_add,
.ndo_udp_tunnel_del = fm10k_udp_tunnel_del,
#endif
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
.ndo_dfwd_add_station = fm10k_dfwd_add_station,
.ndo_dfwd_del_station = fm10k_dfwd_del_station,
#endif
#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SETUP_TC
.ndo_setup_tc_rh = __fm10k_setup_tc,
#endif
#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
/* End of ops backported into RHEL7.x */
},
#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = fm10k_netpoll,
#endif
@@ -1617,5 +2081,16 @@ struct net_device *fm10k_alloc_netdev(void)
#endif
#endif
#ifdef HAVE_NETDEVICE_MIN_MAX_MTU
/* MTU range: 68 - 15342 */
#ifdef HAVE_RHEL7_EXTENDED_MIN_MAX_MTU
dev->extended->min_mtu = ETH_MIN_MTU;
dev->extended->max_mtu = FM10K_MAX_JUMBO_FRAME_SIZE;
#else
dev->min_mtu = ETH_MIN_MTU;
dev->max_mtu = FM10K_MAX_JUMBO_FRAME_SIZE;
#endif /* HAVE_RHEL7_EXTENDED_MIN_MAX_MTU */
#endif /* HAVE_NETDEVICE_MIN_MAX_MTU */
return dev;
}


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
/* glue for the OS independent part of fm10k
* includes register access macros
@@ -36,22 +19,19 @@ u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg);
/* read operations, indexed using DWORDS */
u32 fm10k_read_reg(struct fm10k_hw *hw, int reg);
#define fm10k_read_reg_array(hw, reg, idx) fm10k_read_reg((hw), (reg) + (idx))
/* write operations, indexed using DWORDS */
#define fm10k_write_reg(hw, reg, val) \
do { \
u32 __iomem *hw_addr = ACCESS_ONCE((hw)->hw_addr); \
u32 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \
if (!FM10K_REMOVED(hw_addr)) \
writel((val), &hw_addr[(reg)]); \
} while (0)
#define fm10k_write_reg_array(hw, reg, idx, val) \
fm10k_write_reg((hw), (reg) + (idx), (val))
/* Switch register write operations, index using DWORDS */
#define fm10k_write_sw_reg(hw, reg, val) \
do { \
u32 __iomem *sw_addr = ACCESS_ONCE((hw)->sw_addr); \
u32 __iomem *sw_addr = READ_ONCE((hw)->sw_addr); \
if (!FM10K_REMOVED(sw_addr)) \
writel((val), &sw_addr[(reg)]); \
} while (0)


@@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include <linux/types.h>
#include <linux/module.h>
@@ -260,7 +243,7 @@ int fm10k_check_options(struct fm10k_intfc *interface)
}
if (ies)
interface->flags |= FM10K_FLAG_IES_MODE;
set_bit(FM10K_FLAG_IES_MODE, interface->flags);
}
ifc++;


@@ -1,24 +1,8 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include <linux/module.h>
#include <linux/interrupt.h>
#include "fm10k.h"
@@ -27,7 +11,7 @@ static const struct fm10k_info *fm10k_info_tbl[] = {
[fm10k_device_vf] = &fm10k_vf_info,
};
/**
/*
* fm10k_pci_tbl - PCI Device ID Table
*
* Wildcard entries (PCI_ANY_ID) should come last
@@ -63,7 +47,7 @@ u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg)
u32 fm10k_read_reg(struct fm10k_hw *hw, int reg)
{
u32 __iomem *hw_addr = ACCESS_ONCE(hw->hw_addr);
u32 __iomem *hw_addr = READ_ONCE(hw->hw_addr);
u32 value = 0;
if (FM10K_REMOVED(hw_addr))
@@ -91,29 +75,132 @@ static int fm10k_hw_ready(struct fm10k_intfc *interface)
return FM10K_REMOVED(hw->hw_addr) ? -ENODEV : 0;
}
/**
* fm10k_macvlan_schedule - Schedule MAC/VLAN queue task
* @interface: fm10k private interface structure
*
* Schedule the MAC/VLAN queue monitor task. If the MAC/VLAN task cannot be
* started immediately, request that it be restarted when possible.
*/
void fm10k_macvlan_schedule(struct fm10k_intfc *interface)
{
/* Avoid processing the MAC/VLAN queue when the service task is
* disabled, or when we're resetting the device.
*/
if (!test_bit(__FM10K_MACVLAN_DISABLE, interface->state) &&
!test_and_set_bit(__FM10K_MACVLAN_SCHED, interface->state)) {
clear_bit(__FM10K_MACVLAN_REQUEST, interface->state);
/* We delay the actual start of execution in order to allow
* multiple MAC/VLAN updates to accumulate before handling
* them, and to allow some time to let the mailbox drain
* between runs.
*/
queue_delayed_work(fm10k_workqueue,
&interface->macvlan_task, 10);
} else {
set_bit(__FM10K_MACVLAN_REQUEST, interface->state);
}
}
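The schedule-or-latch logic above can be summarized as a tiny state machine: queue the work only when the task is neither disabled nor already scheduled, and otherwise record a request so the task is restarted later. A single-threaded sketch with plain flags standing in for the driver's atomic state bits:

```c
#include <assert.h>
#include <stdbool.h>

/* Single-threaded stand-ins for the driver's atomic state bits. */
struct sched_state {
    bool disabled;   /* __FM10K_MACVLAN_DISABLE */
    bool scheduled;  /* __FM10K_MACVLAN_SCHED */
    bool requested;  /* __FM10K_MACVLAN_REQUEST */
    int queued;      /* number of times work was actually queued */
};

/* Mirrors fm10k_macvlan_schedule(): queue the work only when not
 * disabled and not already scheduled; otherwise latch a request so
 * fm10k_resume_macvlan_task() can restart it later. */
static void macvlan_schedule(struct sched_state *s)
{
    if (!s->disabled && !s->scheduled) {
        s->scheduled = true;     /* test_and_set_bit in the driver */
        s->requested = false;
        s->queued++;             /* queue_delayed_work in the driver */
    } else {
        s->requested = true;
    }
}
```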
/**
* fm10k_stop_macvlan_task - Stop the MAC/VLAN queue monitor
* @interface: fm10k private interface structure
*
* Wait until the MAC/VLAN queue task has stopped, and cancel any future
* requests.
*/
static void fm10k_stop_macvlan_task(struct fm10k_intfc *interface)
{
/* Disable the MAC/VLAN work item */
set_bit(__FM10K_MACVLAN_DISABLE, interface->state);
/* Make sure we waited until any current invocations have stopped */
cancel_delayed_work_sync(&interface->macvlan_task);
/* We set the __FM10K_MACVLAN_SCHED bit when we schedule the task.
* However, it may not be unset if the MAC/VLAN task never actually
* got a chance to run. Since we've canceled the task here, and it
* cannot be rescheduled right now, we need to ensure the scheduled bit
* gets unset.
*/
clear_bit(__FM10K_MACVLAN_SCHED, interface->state);
}
/**
* fm10k_resume_macvlan_task - Restart the MAC/VLAN queue monitor
* @interface: fm10k private interface structure
*
* Clear the __FM10K_MACVLAN_DISABLE bit and, if a request occurred, schedule
* the MAC/VLAN work monitor.
*/
static void fm10k_resume_macvlan_task(struct fm10k_intfc *interface)
{
/* Re-enable the MAC/VLAN work item */
clear_bit(__FM10K_MACVLAN_DISABLE, interface->state);
/* We might have received a MAC/VLAN request while disabled. If so,
* kick off the queue now.
*/
if (test_bit(__FM10K_MACVLAN_REQUEST, interface->state))
fm10k_macvlan_schedule(interface);
}
void fm10k_service_event_schedule(struct fm10k_intfc *interface)
{
if (!test_bit(__FM10K_SERVICE_DISABLE, &interface->state) &&
!test_and_set_bit(__FM10K_SERVICE_SCHED, &interface->state))
if (!test_bit(__FM10K_SERVICE_DISABLE, interface->state) &&
!test_and_set_bit(__FM10K_SERVICE_SCHED, interface->state)) {
clear_bit(__FM10K_SERVICE_REQUEST, interface->state);
queue_work(fm10k_workqueue, &interface->service_task);
} else {
set_bit(__FM10K_SERVICE_REQUEST, interface->state);
}
}
static void fm10k_service_event_complete(struct fm10k_intfc *interface)
{
WARN_ON(!test_bit(__FM10K_SERVICE_SCHED, &interface->state));
WARN_ON(!test_bit(__FM10K_SERVICE_SCHED, interface->state));
/* flush memory to make sure state is correct before next watchdog */
smp_mb__before_atomic();
clear_bit(__FM10K_SERVICE_SCHED, &interface->state);
clear_bit(__FM10K_SERVICE_SCHED, interface->state);
/* If a service event was requested since we started, immediately
* re-schedule now. This ensures we don't drop a request until the
* next timer event.
*/
if (test_bit(__FM10K_SERVICE_REQUEST, interface->state))
fm10k_service_event_schedule(interface);
}
static void fm10k_stop_service_event(struct fm10k_intfc *interface)
{
set_bit(__FM10K_SERVICE_DISABLE, interface->state);
cancel_work_sync(&interface->service_task);
/* It's possible that cancel_work_sync stopped the service task from
* running before it could actually start. In this case the
* __FM10K_SERVICE_SCHED bit will never be cleared. Since we know that
* the service task cannot be running at this point, we need to clear
* the scheduled bit, as otherwise the service task may never be
* restarted.
*/
clear_bit(__FM10K_SERVICE_SCHED, interface->state);
}
static void fm10k_start_service_event(struct fm10k_intfc *interface)
{
clear_bit(__FM10K_SERVICE_DISABLE, interface->state);
fm10k_service_event_schedule(interface);
}
/**
* fm10k_service_timer - Timer Call-back
* @data: pointer to interface cast into an unsigned long
* @t: pointer to timer data
**/
static void fm10k_service_timer(unsigned long data)
static void fm10k_service_timer(struct timer_list *t)
{
struct fm10k_intfc *interface = (struct fm10k_intfc *)data;
struct fm10k_intfc *interface = from_timer(interface, t,
service_timer);
/* Reset the timer */
mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
@@ -121,35 +208,36 @@ static void fm10k_service_timer(unsigned long data)
fm10k_service_event_schedule(interface);
}
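The converted timer callback recovers the fm10k_intfc from a pointer to the embedded timer via from_timer(), which is container_of() underneath. A userspace sketch of that recovery with illustrative types (not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the from_timer()/container_of() pattern used by
 * fm10k_service_timer(): the callback receives a pointer to the
 * embedded member and recovers the enclosing structure by subtracting
 * the member's offset. */
struct timer {
    int dummy;
};

struct intf {
    int id;
    struct timer service_timer;  /* embedded, as in struct fm10k_intfc */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static int timer_cb(struct timer *t)
{
    struct intf *i = container_of(t, struct intf, service_timer);
    return i->id;
}
```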
static void fm10k_detach_subtask(struct fm10k_intfc *interface)
/**
* fm10k_prepare_for_reset - Prepare the driver and device for a pending reset
* @interface: fm10k private data structure
*
* This function prepares for a device reset by shutting as much down as we
* can. It does nothing and returns false if __FM10K_RESETTING was already set
* prior to calling this function. It returns true if it actually did work.
*/
static bool fm10k_prepare_for_reset(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
/* do nothing if device is still present or hw_addr is set */
if (netif_device_present(netdev) || interface->hw.hw_addr)
return;
rtnl_lock();
if (netif_running(netdev))
dev_close(netdev);
rtnl_unlock();
}
static void fm10k_reinit(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
int err;
WARN_ON(in_interrupt());
/* put off any impending NetWatchDogTimeout */
#ifdef HAVE_NETIF_TRANS_UPDATE
netif_trans_update(netdev);
#else
netdev->trans_start = jiffies;
#endif
while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
usleep_range(1000, 2000);
/* Nothing to do if a reset is already in progress */
if (test_and_set_bit(__FM10K_RESETTING, interface->state))
return false;
/* As the MAC/VLAN task will be accessing registers it must not be
* running while we reset. Although the task will not be scheduled
* once we start resetting it may already be running
*/
fm10k_stop_macvlan_task(interface);
rtnl_lock();
@@ -167,6 +255,23 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
/* delay any future reset requests */
interface->last_reset = jiffies + (10 * HZ);
rtnl_unlock();
return true;
}
static int fm10k_handle_reset(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
int err;
WARN_ON(!test_bit(__FM10K_RESETTING, interface->state));
rtnl_lock();
pci_set_master(interface->pdev);
/* reset and initialize the hardware so it is in a known state */
err = hw->mac.ops.reset_hw(hw);
if (err) {
@ -187,7 +292,7 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
goto reinit_err;
}
/* reassociate interrupts */
/* re-associate interrupts */
err = fm10k_mbx_request_irq(interface);
if (err)
goto err_mbx_irq;
@ -231,9 +336,11 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
rtnl_unlock();
clear_bit(__FM10K_RESETTING, &interface->state);
fm10k_resume_macvlan_task(interface);
return;
clear_bit(__FM10K_RESETTING, interface->state);
return err;
err_open:
fm10k_uio_free_irq(interface);
err_uio_irq:
@ -245,19 +352,85 @@ reinit_err:
rtnl_unlock();
clear_bit(__FM10K_RESETTING, &interface->state);
clear_bit(__FM10K_RESETTING, interface->state);
return err;
}
static void fm10k_detach_subtask(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
u32 __iomem *hw_addr;
u32 value;
int err;
/* do nothing if netdev is still present or hw_addr is set */
if (netif_device_present(netdev) || interface->hw.hw_addr)
return;
/* We've lost the PCIe register space, and can no longer access the
* device. Shut everything except the detach subtask down and prepare
* to reset the device in case we recover. If we actually prepare for
* reset, indicate that we're detached.
*/
if (fm10k_prepare_for_reset(interface))
set_bit(__FM10K_RESET_DETACHED, interface->state);
/* check the real address space to see if we've recovered */
hw_addr = READ_ONCE(interface->uc_addr);
value = readl(hw_addr);
if (~value) {
/* Make sure the reset was initiated because we detached,
* otherwise we might race with a different reset flow.
*/
if (!test_and_clear_bit(__FM10K_RESET_DETACHED,
interface->state))
return;
/* Restore the hardware address */
interface->hw.hw_addr = interface->uc_addr;
/* PCIe link has been restored, and the device is active
* again. Restore everything and reset the device.
*/
err = fm10k_handle_reset(interface);
if (err) {
netdev_err(netdev, "Unable to reset device: %d\n", err);
interface->hw.hw_addr = NULL;
return;
}
/* Re-attach the netdev */
netif_device_attach(netdev);
netdev_warn(netdev, "PCIe link restored, device now attached\n");
return;
}
}
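The detach subtask above leans on a common PCIe driver heuristic: an MMIO read from a surprise-removed device returns all ones, so `if (~value)` on the result of `readl()` is true only when the register space is reachable again. A minimal userspace sketch of that test, with `fm10k_mmio_present()` as an illustrative helper (not a driver symbol):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A PCIe MMIO read from a device that has dropped off the bus returns
 * all ones, so ~value == 0 means the register space is gone; any other
 * bit pattern means the BAR is readable again and the driver can
 * attempt a reset and re-attach. Illustrative helper, not driver code. */
static bool fm10k_mmio_present(uint32_t value)
{
	return ~value != 0;
}
```

This is exactly the shape of the `value = readl(hw_addr); if (~value)` check in the subtask: a recovered read of any value other than `0xFFFFFFFF` triggers the reset-and-reattach path.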
static void fm10k_reset_subtask(struct fm10k_intfc *interface)
{
if (!(interface->flags & FM10K_FLAG_RESET_REQUESTED))
int err;
if (!test_and_clear_bit(FM10K_FLAG_RESET_REQUESTED,
interface->flags))
return;
interface->flags &= ~FM10K_FLAG_RESET_REQUESTED;
/* If another thread has already prepared to reset the device, we
* should not attempt to handle a reset here, since we'd race with
* that thread. This may happen if we suspend the device or if the
* PCIe link is lost. In this case, we'll just ignore the RESET
* request, as it will (eventually) be taken care of when the thread
* which actually started the reset is finished.
*/
if (!fm10k_prepare_for_reset(interface))
return;
netdev_err(interface->netdev, "Reset interface\n");
fm10k_reinit(interface);
err = fm10k_handle_reset(interface);
if (err)
dev_err(&interface->pdev->dev,
"fm10k_handle_reset failed: %d\n", err);
}
/**
@ -273,7 +446,7 @@ static void fm10k_configure_swpri_map(struct fm10k_intfc *interface)
int i;
/* clear flag indicating update is needed */
interface->flags &= ~FM10K_FLAG_SWPRI_CONFIG;
clear_bit(FM10K_FLAG_SWPRI_CONFIG, interface->flags);
/* these registers are only available on the PF */
if (hw->mac.type != fm10k_mac_pf)
@ -294,14 +467,14 @@ static void fm10k_watchdog_update_host_state(struct fm10k_intfc *interface)
struct fm10k_hw *hw = &interface->hw;
s32 err;
if (test_bit(__FM10K_LINK_DOWN, &interface->state)) {
if (test_bit(__FM10K_LINK_DOWN, interface->state)) {
interface->host_ready = false;
if (time_is_after_jiffies(interface->link_down_event))
return;
clear_bit(__FM10K_LINK_DOWN, &interface->state);
clear_bit(__FM10K_LINK_DOWN, interface->state);
}
if (interface->flags & FM10K_FLAG_SWPRI_CONFIG) {
if (test_bit(FM10K_FLAG_SWPRI_CONFIG, interface->flags)) {
if (rtnl_trylock()) {
fm10k_configure_swpri_map(interface);
rtnl_unlock();
@ -313,7 +486,7 @@ static void fm10k_watchdog_update_host_state(struct fm10k_intfc *interface)
err = hw->mac.ops.get_host_state(hw, &interface->host_ready);
if (err && time_is_before_jiffies(interface->last_reset))
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
/* free the lock */
fm10k_mbx_unlock(interface);
@ -327,6 +500,10 @@ static void fm10k_watchdog_update_host_state(struct fm10k_intfc *interface)
**/
static void fm10k_mbx_subtask(struct fm10k_intfc *interface)
{
/* If we're resetting, bail out */
if (test_bit(__FM10K_RESETTING, interface->state))
return;
/* process upstream mailbox and update device state */
fm10k_watchdog_update_host_state(interface);
@ -388,12 +565,19 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
u64 bytes, pkts;
int i;
/* ensure only one thread updates stats at a time */
if (test_and_set_bit(__FM10K_UPDATING_STATS, interface->state))
return;
/* do not allow stats update via service task for next second */
interface->next_stats_update = jiffies + HZ;
/* gather some stats to the interface struct that are per queue */
for (bytes = 0, pkts = 0, i = 0; i < interface->num_tx_queues; i++) {
struct fm10k_ring *tx_ring = interface->tx_ring[i];
struct fm10k_ring *tx_ring = READ_ONCE(interface->tx_ring[i]);
if (!tx_ring)
continue;
restart_queue += tx_ring->tx_stats.restart_queue;
tx_busy += tx_ring->tx_stats.tx_busy;
@ -412,7 +596,10 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
/* gather some stats to the interface struct that are per queue */
for (bytes = 0, pkts = 0, i = 0; i < interface->num_rx_queues; i++) {
struct fm10k_ring *rx_ring = interface->rx_ring[i];
struct fm10k_ring *rx_ring = READ_ONCE(interface->rx_ring[i]);
if (!rx_ring)
continue;
bytes += rx_ring->stats.bytes;
pkts += rx_ring->stats.packets;
@ -459,11 +646,13 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
/* Fill out the OS statistics structure */
net_stats->rx_errors = rx_errors;
net_stats->rx_dropped = interface->stats.nodesc_drop.count;
clear_bit(__FM10K_UPDATING_STATS, interface->state);
}
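The `__FM10K_UPDATING_STATS` guard above admits exactly one updater at a time: `test_and_set_bit()` returns the previous bit value, so a second caller sees the bit already set and backs off instead of blocking. A userspace sketch of the same pattern using C11 `atomic_flag` as a stand-in for the kernel bitops (names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* atomic_flag_test_and_set() plays the role of test_and_set_bit():
 * it atomically sets the flag and reports whether it was already set. */
static atomic_flag updating_stats = ATOMIC_FLAG_INIT;
static int stats_runs;

static bool try_update_stats(void)
{
	if (atomic_flag_test_and_set(&updating_stats))
		return false;		/* another thread owns the update */
	stats_runs++;			/* ... gather per-queue counters ... */
	atomic_flag_clear(&updating_stats);	/* clear_bit() equivalent */
	return true;
}
```

The non-blocking back-off matters here because `fm10k_update_stats()` can be reached both from the service task and from `.ndo_get_stats64`; skipping a concurrent update is harmless since the loser simply reads slightly stale counters.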
/**
* fm10k_watchdog_flush_tx - flush queues on host not ready
* @interface - pointer to the device interface structure
* @interface: pointer to the device interface structure
**/
static void fm10k_watchdog_flush_tx(struct fm10k_intfc *interface)
{
@ -488,18 +677,18 @@ static void fm10k_watchdog_flush_tx(struct fm10k_intfc *interface)
* controller to flush Tx.
*/
if (some_tx_pending)
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
}
/**
* fm10k_watchdog_subtask - check and bring link up
* @interface - pointer to the device interface structure
* @interface: pointer to the device interface structure
**/
static void fm10k_watchdog_subtask(struct fm10k_intfc *interface)
{
/* if interface is down do nothing */
if (test_bit(__FM10K_DOWN, &interface->state) ||
test_bit(__FM10K_RESETTING, &interface->state))
if (test_bit(__FM10K_DOWN, interface->state) ||
test_bit(__FM10K_RESETTING, interface->state))
return;
if (interface->host_ready)
@ -517,7 +706,7 @@ static void fm10k_watchdog_subtask(struct fm10k_intfc *interface)
/**
* fm10k_check_hang_subtask - check for hung queues and dropped interrupts
* @interface - pointer to the device interface structure
* @interface: pointer to the device interface structure
*
* This function serves two purposes. First it strobes the interrupt lines
* in order to make certain interrupts are occurring. Second, it sets the
@ -529,8 +718,8 @@ static void fm10k_check_hang_subtask(struct fm10k_intfc *interface)
int i;
/* If we're down or resetting, just bail */
if (test_bit(__FM10K_DOWN, &interface->state) ||
test_bit(__FM10K_RESETTING, &interface->state))
if (test_bit(__FM10K_DOWN, interface->state) ||
test_bit(__FM10K_RESETTING, interface->state))
return;
/* rate limit tx hang checks to only once every 2 seconds */
@ -564,9 +753,11 @@ static void fm10k_service_task(struct work_struct *work)
interface = container_of(work, struct fm10k_intfc, service_task);
/* Check whether we're detached first */
fm10k_detach_subtask(interface);
/* tasks run even when interface is down */
fm10k_mbx_subtask(interface);
fm10k_detach_subtask(interface);
fm10k_reset_subtask(interface);
/* tasks only run when interface is up */
@ -577,6 +768,112 @@ static void fm10k_service_task(struct work_struct *work)
fm10k_service_event_complete(interface);
}
/**
* fm10k_macvlan_task - send queued MAC/VLAN requests to switch manager
* @work: pointer to work_struct containing our data
*
* This work item handles sending MAC/VLAN updates to the switch manager. When
* the interface is up, it will attempt to queue mailbox messages to the
* switch manager requesting updates for MAC/VLAN pairs. If the Tx fifo of the
* mailbox is full, it will reschedule itself to try again in a short while.
* This ensures that the driver does not overload the switch mailbox with too
* many simultaneous requests, causing an unnecessary reset.
**/
static void fm10k_macvlan_task(struct work_struct *work)
{
struct fm10k_macvlan_request *item;
struct fm10k_intfc *interface;
struct delayed_work *dwork;
struct list_head *requests;
struct fm10k_hw *hw;
unsigned long flags;
dwork = to_delayed_work(work);
interface = container_of(dwork, struct fm10k_intfc, macvlan_task);
hw = &interface->hw;
requests = &interface->macvlan_requests;
do {
/* Pop the first item off the list */
spin_lock_irqsave(&interface->macvlan_lock, flags);
item = list_first_entry_or_null(requests,
struct fm10k_macvlan_request,
list);
if (item)
list_del_init(&item->list);
spin_unlock_irqrestore(&interface->macvlan_lock, flags);
/* We have no more items to process */
if (!item)
goto done;
fm10k_mbx_lock(interface);
/* Check that we have plenty of space to send the message. We
* want to ensure that the mailbox stays low enough to avoid a
* change in the host state, otherwise we may see spurious
* link up / link down notifications.
*/
if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU + 5)) {
hw->mbx.ops.process(hw, &hw->mbx);
set_bit(__FM10K_MACVLAN_REQUEST, interface->state);
fm10k_mbx_unlock(interface);
/* Put the request back on the list */
spin_lock_irqsave(&interface->macvlan_lock, flags);
list_add(&item->list, requests);
spin_unlock_irqrestore(&interface->macvlan_lock, flags);
break;
}
switch (item->type) {
case FM10K_MC_MAC_REQUEST:
hw->mac.ops.update_mc_addr(hw,
item->mac.glort,
item->mac.addr,
item->mac.vid,
item->set);
break;
case FM10K_UC_MAC_REQUEST:
hw->mac.ops.update_uc_addr(hw,
item->mac.glort,
item->mac.addr,
item->mac.vid,
item->set,
0);
break;
case FM10K_VLAN_REQUEST:
hw->mac.ops.update_vlan(hw,
item->vlan.vid,
item->vlan.vsi,
item->set);
break;
default:
break;
}
fm10k_mbx_unlock(interface);
/* Free the item now that we've sent the update */
kfree(item);
} while (true);
done:
WARN_ON(!test_bit(__FM10K_MACVLAN_SCHED, interface->state));
/* flush memory to make sure state is correct */
smp_mb__before_atomic();
clear_bit(__FM10K_MACVLAN_SCHED, interface->state);
/* If a MAC/VLAN request was scheduled since we started, we should
* re-schedule. However, there is no reason to re-schedule if there is
* no work to do.
*/
if (test_bit(__FM10K_MACVLAN_REQUEST, interface->state))
fm10k_macvlan_schedule(interface);
}
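The drain loop in `fm10k_macvlan_task()` follows a classic work-queue shape: pop one request under the spinlock, try to send it, and put it back at the head of the list when the mailbox reports backpressure so nothing is lost and ordering is preserved. A self-contained userspace sketch of that shape, with a `pthread_mutex_t` standing in for the spinlock and `tx_ready` simulating the mailbox check (all names are illustrative, not driver symbols):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct request { struct request *next; int id; };

static struct request *queue_head;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void push_front(struct request *r)
{
	pthread_mutex_lock(&queue_lock);
	r->next = queue_head;
	queue_head = r;
	pthread_mutex_unlock(&queue_lock);
}

static struct request *pop_front(void)
{
	pthread_mutex_lock(&queue_lock);
	struct request *r = queue_head;
	if (r)
		queue_head = r->next;
	pthread_mutex_unlock(&queue_lock);
	return r;
}

static void enqueue(int id)
{
	struct request *r = malloc(sizeof(*r));
	if (!r)
		return;
	r->id = id;
	push_front(r);
}

/* Drain until the queue is empty or the (simulated) mailbox is full;
 * on backpressure the request goes back on the list for the next pass. */
static int drain(bool (*tx_ready)(void))
{
	int sent = 0;
	struct request *r;

	while ((r = pop_front())) {
		if (!tx_ready()) {
			push_front(r);	/* keep it for the rescheduled work */
			break;
		}
		free(r);		/* "sent": drop it from the queue */
		sent++;
	}
	return sent;
}

static bool always_ready(void) { return true; }
static bool never_ready(void)  { return false; }
```

Re-queuing instead of dropping is the key design point: the work item can simply reschedule itself and resume from the exact request that hit the full mailbox, which is how the driver avoids flooding the switch manager and provoking a reset.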
/**
* fm10k_configure_tx_ring - Configure Tx ring after Reset
* @interface: board private structure
@ -629,7 +926,7 @@ static void fm10k_configure_tx_ring(struct fm10k_intfc *interface,
FM10K_PFVTCTL_FTAG_DESC_ENABLE);
/* Initialize XPS */
if (!test_and_set_bit(__FM10K_TX_XPS_INIT_DONE, &ring->state) &&
if (!test_and_set_bit(__FM10K_TX_XPS_INIT_DONE, ring->state) &&
ring->q_vector)
netif_set_xps_queue(ring->netdev,
&ring->q_vector->affinity_mask,
@ -700,15 +997,16 @@ static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
u64 rdba = ring->dma;
struct fm10k_hw *hw = &interface->hw;
u32 size = ring->count * sizeof(union fm10k_rx_desc);
u32 rxqctl = FM10K_RXQCTL_ENABLE | FM10K_RXQCTL_PF;
u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
u32 rxqctl, rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
u32 srrctl = FM10K_SRRCTL_BUFFER_CHAINING_EN;
u32 rxint = FM10K_INT_MAP_DISABLE;
u8 rx_pause = interface->rx_pause;
u8 reg_idx = ring->reg_idx;
/* disable queue to avoid issues while updating state */
fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), 0);
rxqctl = fm10k_read_reg(hw, FM10K_RXQCTL(reg_idx));
rxqctl &= ~FM10K_RXQCTL_ENABLE;
fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), rxqctl);
fm10k_write_flush(hw);
/* possible poll here to verify ring resources have been cleaned */
@ -749,14 +1047,12 @@ static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
#ifndef HAVE_VLAN_RX_REGISTER
/* assign default VLAN to queue */
ring->vid = hw->mac.default_vid;
/* if we have an active VLAN, disable default VLAN ID */
if (test_bit(hw->mac.default_vid, interface->active_vlans))
ring->vid |= FM10K_VLAN_CLEAR;
#endif
/* Map interrupt */
if (ring->q_vector) {
@ -767,6 +1063,8 @@ static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
fm10k_write_reg(hw, FM10K_RXINT(reg_idx), rxint);
/* enable queue */
rxqctl = fm10k_read_reg(hw, FM10K_RXQCTL(reg_idx));
rxqctl |= FM10K_RXQCTL_ENABLE;
fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), rxqctl);
/* place buffers on ring for receive data */
@ -833,9 +1131,9 @@ static void fm10k_configure_dglort(struct fm10k_intfc *interface)
FM10K_MRQC_IPV6 |
FM10K_MRQC_TCP_IPV6;
if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV4_UDP, interface->flags))
mrqc |= FM10K_MRQC_UDP_IPV4;
if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
if (test_bit(FM10K_FLAG_RSS_FIELD_IPV6_UDP, interface->flags))
mrqc |= FM10K_MRQC_UDP_IPV6;
fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
@ -952,7 +1250,7 @@ void fm10k_netpoll(struct net_device *netdev)
int i;
/* if interface is down do nothing */
if (test_bit(__FM10K_DOWN, &interface->state))
if (test_bit(__FM10K_DOWN, interface->state))
return;
for (i = 0; i < interface->num_q_vectors; i++)
@ -1116,6 +1414,7 @@ static irqreturn_t fm10k_msix_mbx_pf(int __always_unused irq, void *data)
struct fm10k_hw *hw = &interface->hw;
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 eicr;
s32 err = 0;
/* unmask any set bits related to this interrupt */
eicr = fm10k_read_reg(hw, FM10K_EICR);
@ -1131,17 +1430,20 @@ static irqreturn_t fm10k_msix_mbx_pf(int __always_unused irq, void *data)
/* service mailboxes */
if (fm10k_mbx_trylock(interface)) {
mbx->ops.process(hw, mbx);
err = mbx->ops.process(hw, mbx);
/* handle VFLRE events */
fm10k_iov_event(interface);
fm10k_mbx_unlock(interface);
}
if (err == FM10K_ERR_RESET_REQUESTED)
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
/* if switch toggled state we should reset GLORTs */
if (eicr & FM10K_EICR_SWITCHNOTREADY) {
/* force link down for at least 4 seconds */
interface->link_down_event = jiffies + (4 * HZ);
set_bit(__FM10K_LINK_DOWN, &interface->state);
set_bit(__FM10K_LINK_DOWN, interface->state);
/* reset dglort_map back to no config */
hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
@ -1214,12 +1516,12 @@ static s32 fm10k_mbx_mac_addr(struct fm10k_hw *hw, u32 **results,
/* MAC was changed so we need reset */
if (is_valid_ether_addr(hw->mac.perm_addr) &&
!ether_addr_equal(hw->mac.perm_addr, hw->mac.addr))
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
/* VLAN override was changed, or default VLAN changed */
if ((vlan_override != hw->mac.vlan_override) ||
(default_vid != hw->mac.default_vid))
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
return 0;
}
@ -1293,7 +1595,7 @@ static s32 fm10k_lport_map(struct fm10k_hw *hw, u32 **results,
if (!err && hw->swapi.status) {
/* force link down for a reasonable delay */
interface->link_down_event = jiffies + (2 * HZ);
set_bit(__FM10K_LINK_DOWN, &interface->state);
set_bit(__FM10K_LINK_DOWN, interface->state);
/* reset dglort_map back to no config */
hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
@ -1324,7 +1626,7 @@ static s32 fm10k_lport_map(struct fm10k_hw *hw, u32 **results,
/* we need to reset if port count was just updated */
if (dglort_map != hw->mac.dglort_map)
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
return 0;
}
@ -1363,7 +1665,7 @@ static s32 fm10k_update_pvid(struct fm10k_hw *hw, u32 **results,
/* we need to reset if default VLAN was just updated */
if (pvid != hw->mac.default_vid)
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
set_bit(FM10K_FLAG_RESET_REQUESTED, interface->flags);
hw->mac.default_vid = pvid;
@ -1502,7 +1804,7 @@ int fm10k_qv_request_irq(struct fm10k_intfc *interface)
struct net_device *dev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
struct msix_entry *entry;
int ri = 0, ti = 0;
unsigned int ri = 0, ti = 0;
int vector, err;
entry = &interface->msix_entries[NON_Q_VECTORS(hw)];
@ -1512,15 +1814,15 @@ int fm10k_qv_request_irq(struct fm10k_intfc *interface)
/* name the vector */
if (q_vector->tx.count && q_vector->rx.count) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-TxRx-%d", dev->name, ri++);
snprintf(q_vector->name, sizeof(q_vector->name),
"%s-TxRx-%u", dev->name, ri++);
ti++;
} else if (q_vector->rx.count) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-rx-%d", dev->name, ri++);
snprintf(q_vector->name, sizeof(q_vector->name),
"%s-rx-%u", dev->name, ri++);
} else if (q_vector->tx.count) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-tx-%d", dev->name, ti++);
snprintf(q_vector->name, sizeof(q_vector->name),
"%s-tx-%u", dev->name, ti++);
} else {
/* skip this unused q_vector */
continue;
@ -1596,8 +1898,11 @@ void fm10k_up(struct fm10k_intfc *interface)
/* configure interrupts */
hw->mac.ops.update_int_moderator(hw);
/* enable statistics capture again */
clear_bit(__FM10K_UPDATING_STATS, interface->state);
/* clear down bit to indicate we are ready to go */
clear_bit(__FM10K_DOWN, &interface->state);
clear_bit(__FM10K_DOWN, interface->state);
/* enable polling cleanups */
fm10k_napi_enable_all(interface);
@ -1628,10 +1933,11 @@ void fm10k_down(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
int err;
int err, i = 0, count = 0;
/* signal that we are down to the interrupt handler and service task */
set_bit(__FM10K_DOWN, &interface->state);
if (test_and_set_bit(__FM10K_DOWN, interface->state))
return;
/* call carrier off first to avoid false dev_watchdog timeouts */
netif_carrier_off(netdev);
@ -1643,18 +1949,57 @@ void fm10k_down(struct fm10k_intfc *interface)
/* reset Rx filters */
fm10k_reset_rx_state(interface);
/* allow 10ms for device to quiesce */
usleep_range(10000, 20000);
/* disable polling routines */
fm10k_napi_disable_all(interface);
/* capture stats one last time before stopping interface */
fm10k_update_stats(interface);
/* prevent updating statistics while we're down */
while (test_and_set_bit(__FM10K_UPDATING_STATS, interface->state))
usleep_range(1000, 2000);
/* skip waiting for TX DMA if we lost PCIe link */
if (FM10K_REMOVED(hw->hw_addr))
goto skip_tx_dma_drain;
/* In some rare circumstances it can take a while for Tx queues to
* quiesce and be fully disabled. Attempt to .stop_hw() first, and
* then if we get ERR_REQUESTS_PENDING, go ahead and wait in a loop
* until the Tx queues have emptied, or until the retry limit is reached. If
* we fail to clear within the retry loop, we will issue a warning
* indicating that Tx DMA is probably hung. Note this means we call
* .stop_hw() twice but this shouldn't cause any problems.
*/
err = hw->mac.ops.stop_hw(hw);
if (err != FM10K_ERR_REQUESTS_PENDING)
goto skip_tx_dma_drain;
#define TX_DMA_DRAIN_RETRIES 25
for (count = 0; count < TX_DMA_DRAIN_RETRIES; count++) {
usleep_range(10000, 20000);
/* start checking at the last ring to have pending Tx */
for (; i < interface->num_tx_queues; i++)
if (fm10k_get_tx_pending(interface->tx_ring[i], false))
break;
/* if all the queues are drained, we can break now */
if (i == interface->num_tx_queues)
break;
}
if (count >= TX_DMA_DRAIN_RETRIES)
dev_err(&interface->pdev->dev,
"Tx queues failed to drain after %d tries. Tx DMA is probably hung.\n",
count);
skip_tx_dma_drain:
/* Disable DMA engine for Tx/Rx */
err = hw->mac.ops.stop_hw(hw);
if (err)
if (err == FM10K_ERR_REQUESTS_PENDING)
dev_err(&interface->pdev->dev,
"due to pending requests hw was not shut down gracefully\n");
else if (err)
dev_err(&interface->pdev->dev, "stop_hw failed: %d\n", err);
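The retry loop above has one subtle detail worth noting: the queue index `i` is not reset between rounds, so each pass resumes polling at the last ring that still had pending Tx rather than rescanning from queue zero. A userspace sketch of that bounded-wait pattern, with `pending()` standing in for `fm10k_get_tx_pending()` (illustrative names, not driver symbols):

```c
#include <assert.h>
#include <stdbool.h>

#define TX_DMA_DRAIN_RETRIES 25

/* Poll all queues for up to TX_DMA_DRAIN_RETRIES rounds, resuming each
 * round at the last queue observed busy. Returns false if the queues
 * never drain, mirroring the "Tx DMA is probably hung" warning path. */
static bool wait_for_drain(int nqueues, bool (*pending)(int queue))
{
	int count, i = 0;

	for (count = 0; count < TX_DMA_DRAIN_RETRIES; count++) {
		/* the driver sleeps 10-20ms here via usleep_range() */
		for (; i < nqueues; i++)
			if (pending(i))
				break;	/* queue i still busy; retry it */
		if (i == nqueues)
			return true;	/* every queue drained */
	}
	return false;
}

static int busy_polls;
static bool busy_then_clear(int queue) { (void)queue; return busy_polls-- > 0; }
static bool never_pending(int queue)   { (void)queue; return false; }
```

Bounding the wait keeps `fm10k_down()` from stalling forever on hung hardware while still giving a healthy device several hundred milliseconds to quiesce.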
/* free any buffers still on the rings */
@ -1665,6 +2010,7 @@ void fm10k_down(struct fm10k_intfc *interface)
/**
* fm10k_sw_init - Initialize general software structures
* @interface: host interface private structure to initialize
* @ent: PCI device ID entry
*
* fm10k_sw_init initializes the interface private data structure.
* Fields are initialized based on PCI device information and
@ -1728,9 +2074,6 @@ static int fm10k_sw_init(struct fm10k_intfc *interface,
netdev->vlan_features |= NETIF_F_HIGHDMA;
}
/* delay any future reset requests */
interface->last_reset = jiffies + (10 * HZ);
/* reset and initialize the hardware so it is in a known state */
err = hw->mac.ops.reset_hw(hw);
if (err) {
@ -1790,103 +2133,27 @@ static int fm10k_sw_init(struct fm10k_intfc *interface,
interface->tx_itr = FM10K_TX_ITR_DEFAULT;
interface->rx_itr = FM10K_ITR_ADAPTIVE | FM10K_RX_ITR_DEFAULT;
/* initialize vxlan_port list */
/* initialize udp port lists */
INIT_LIST_HEAD(&interface->vxlan_port);
INIT_LIST_HEAD(&interface->geneve_port);
/* Initialize the MAC/VLAN queue */
INIT_LIST_HEAD(&interface->macvlan_requests);
netdev_rss_key_fill(rss_key, sizeof(rss_key));
memcpy(interface->rssrk, rss_key, sizeof(rss_key));
/* Initialize the mailbox lock */
spin_lock_init(&interface->mbx_lock);
spin_lock_init(&interface->macvlan_lock);
/* Start off interface as being down */
set_bit(__FM10K_DOWN, &interface->state);
set_bit(__FM10K_DOWN, interface->state);
set_bit(__FM10K_UPDATING_STATS, interface->state);
return 0;
}
static void fm10k_slot_warn(struct fm10k_intfc *interface)
{
enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
struct fm10k_hw *hw = &interface->hw;
int max_gts = 0, expected_gts = 0;
if (pcie_get_minimum_link(interface->pdev, &speed, &width) ||
speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) {
dev_warn(&interface->pdev->dev,
"Unable to determine PCI Express bandwidth.\n");
return;
}
switch (speed) {
case PCIE_SPEED_2_5GT:
/* 8b/10b encoding reduces max throughput by 20% */
max_gts = 2 * width;
break;
case PCIE_SPEED_5_0GT:
/* 8b/10b encoding reduces max throughput by 20% */
max_gts = 4 * width;
break;
case PCIE_SPEED_8_0GT:
/* 128b/130b encoding has less than 2% impact on throughput */
max_gts = 8 * width;
break;
default:
dev_warn(&interface->pdev->dev,
"Unable to determine PCI Express bandwidth.\n");
return;
}
dev_info(&interface->pdev->dev,
"PCI Express bandwidth of %dGT/s available\n",
max_gts);
dev_info(&interface->pdev->dev,
"(Speed:%s, Width: x%d, Encoding Loss:%s, Payload:%s)\n",
(speed == PCIE_SPEED_8_0GT ? "8.0GT/s" :
speed == PCIE_SPEED_5_0GT ? "5.0GT/s" :
speed == PCIE_SPEED_2_5GT ? "2.5GT/s" :
"Unknown"),
hw->bus.width,
(speed == PCIE_SPEED_2_5GT ? "20%" :
speed == PCIE_SPEED_5_0GT ? "20%" :
speed == PCIE_SPEED_8_0GT ? "<2%" :
"Unknown"),
(hw->bus.payload == fm10k_bus_payload_128 ? "128B" :
hw->bus.payload == fm10k_bus_payload_256 ? "256B" :
hw->bus.payload == fm10k_bus_payload_512 ? "512B" :
"Unknown"));
switch (hw->bus_caps.speed) {
case fm10k_bus_speed_2500:
/* 8b/10b encoding reduces max throughput by 20% */
expected_gts = 2 * hw->bus_caps.width;
break;
case fm10k_bus_speed_5000:
/* 8b/10b encoding reduces max throughput by 20% */
expected_gts = 4 * hw->bus_caps.width;
break;
case fm10k_bus_speed_8000:
/* 128b/130b encoding has less than 2% impact on throughput */
expected_gts = 8 * hw->bus_caps.width;
break;
default:
dev_warn(&interface->pdev->dev,
"Unable to determine expected PCI Express bandwidth.\n");
return;
}
if (max_gts >= expected_gts)
return;
dev_warn(&interface->pdev->dev,
"This device requires %dGT/s of bandwidth for optimal performance.\n",
expected_gts);
dev_warn(&interface->pdev->dev,
"A %sslot with x%d lanes is suggested.\n",
(hw->bus_caps.speed == fm10k_bus_speed_2500 ? "2.5GT/s " :
hw->bus_caps.speed == fm10k_bus_speed_5000 ? "5.0GT/s " :
hw->bus_caps.speed == fm10k_bus_speed_8000 ? "8.0GT/s " : ""),
hw->bus_caps.width);
}
/**
* fm10k_probe - Device Initialization Routine
* @pdev: PCI device information struct
@ -1909,9 +2176,18 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct fm10k_intfc *interface;
int err;
if (pdev->error_state != pci_channel_io_normal) {
dev_err(&pdev->dev,
"PCI device still in an error state. Unable to load...\n");
return -EIO;
}
err = pci_enable_device_mem(pdev);
if (err)
if (err) {
dev_err(&pdev->dev,
"PCI enable device failed: %d\n", err);
return err;
}
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
if (err)
@ -1922,10 +2198,7 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_dma;
}
err = pci_request_selected_regions(pdev,
pci_select_bars(pdev,
IORESOURCE_MEM),
fm10k_driver_name);
err = pci_request_mem_regions(pdev, fm10k_driver_name);
if (err) {
dev_err(&pdev->dev,
"pci_request_mem_regions failed: %d\n", err);
@ -1977,7 +2250,7 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* must ensure it is disabled since we haven't yet requested the timer
* or work item.
*/
set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
set_bit(__FM10K_SERVICE_DISABLE, interface->state);
err = fm10k_mbx_request_irq(interface);
if (err)
@ -2001,10 +2274,12 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Initialize service timer and service task late in order to avoid
* cleanup issues.
*/
setup_timer(&interface->service_timer, &fm10k_service_timer,
(unsigned long)interface);
timer_setup(&interface->service_timer, fm10k_service_timer, 0);
INIT_WORK(&interface->service_task, fm10k_service_task);
/* Setup the MAC/VLAN queue */
INIT_DELAYED_WORK(&interface->macvlan_task, fm10k_macvlan_task);
/* kick off service timer now, even when interface is down */
mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
@ -2014,7 +2289,7 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
dev_warn(&pdev->dev, "Failed to start UIO interface\n");
/* print warning for non-optimal configurations */
fm10k_slot_warn(interface);
pcie_print_link_status(interface->pdev);
/* report MAC address for logging */
dev_info(&pdev->dev, "%pM\n", netdev->dev_addr);
@ -2022,8 +2297,9 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* enable SR-IOV after registering netdev to enforce PF/VF ordering */
fm10k_iov_configure(pdev, interface->init_vfs);
/* clear the service task disable bit to allow service task to start */
clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
/* clear the service task disable bit and kick off service task */
clear_bit(__FM10K_SERVICE_DISABLE, interface->state);
fm10k_service_event_schedule(interface);
return 0;
@ -2038,8 +2314,7 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_netdev:
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
pci_disable_device(pdev);
@ -2066,8 +2341,11 @@ static void fm10k_remove(struct pci_dev *pdev)
del_timer_sync(&interface->service_timer);
set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
cancel_work_sync(&interface->service_task);
fm10k_stop_service_event(interface);
fm10k_stop_macvlan_task(interface);
/* Remove all pending MAC/VLAN requests */
fm10k_clear_macvlan_queue(interface, interface->glort, true);
/* free netdev, this may bounce the interrupts due to setup_tc */
if (netdev->reg_state == NETREG_REGISTERED)
@ -2094,29 +2372,127 @@ static void fm10k_remove(struct pci_dev *pdev)
free_netdev(netdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
}
#ifdef CONFIG_PM
/**
* fm10k_resume - Restore device to pre-sleep state
* @pdev: PCI device information struct
*
* fm10k_resume is called after the system has powered back up from a sleep
* state and is ready to resume operation. This function is meant to restore
* the device back to its pre-sleep state.
**/
static int fm10k_resume(struct pci_dev *pdev)
static void fm10k_prepare_suspend(struct fm10k_intfc *interface)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
/* the watchdog task reads from registers, which might appear like
* a surprise remove if the PCIe device is disabled while we're
* stopped. We stop the watchdog task until after we resume software
* activity.
*
* Note that the MAC/VLAN task will be stopped as part of preparing
* for reset so we don't need to handle it here.
*/
fm10k_stop_service_event(interface);
if (fm10k_prepare_for_reset(interface))
set_bit(__FM10K_RESET_SUSPENDED, interface->state);
}
static int fm10k_handle_resume(struct fm10k_intfc *interface)
{
struct fm10k_hw *hw = &interface->hw;
int err;
/* Even if we didn't properly prepare for reset in
* fm10k_prepare_suspend, we'll attempt to resume anyways.
*/
if (!test_and_clear_bit(__FM10K_RESET_SUSPENDED, interface->state))
dev_warn(&interface->pdev->dev,
"Device was shut down as part of suspend... Attempting to recover\n");
/* reset statistics starting values */
hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
err = fm10k_handle_reset(interface);
if (err)
return err;
/* assume host is not ready, to prevent race with watchdog in case we
* actually don't have connection to the switch
*/
interface->host_ready = false;
fm10k_watchdog_host_not_ready(interface);
/* force link to stay down for a second to prevent link flutter */
interface->link_down_event = jiffies + (HZ);
set_bit(__FM10K_LINK_DOWN, interface->state);
/* restart the service task */
fm10k_start_service_event(interface);
/* Restart the MAC/VLAN request queue in case of outstanding events */
fm10k_macvlan_schedule(interface);
return err;
}
/**
* fm10k_resume - Generic PM resume hook
* @dev: generic device structure
*
* Generic PM hook used when waking the device from a low power state after
* suspend or hibernation. This function does not need to handle lower PCIe
* device state as the stack takes care of that for us.
**/
static int __maybe_unused fm10k_resume(struct device *dev)
{
struct fm10k_intfc *interface = pci_get_drvdata(to_pci_dev(dev));
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
u32 err;
int err;
/* refresh hw_addr in case it was dropped */
hw->hw_addr = interface->uc_addr;
err = fm10k_handle_resume(interface);
if (err)
return err;
netif_device_attach(netdev);
return 0;
}
/**
* fm10k_suspend - Generic PM suspend hook
* @dev: generic device structure
*
* Generic PM hook used when setting the device into a low power state for
* system suspend or hibernation. This function does not need to handle lower
* PCIe device state as the stack takes care of that for us.
**/
static int __maybe_unused fm10k_suspend(struct device *dev)
{
struct fm10k_intfc *interface = pci_get_drvdata(to_pci_dev(dev));
struct net_device *netdev = interface->netdev;
netif_device_detach(netdev);
fm10k_prepare_suspend(interface);
return 0;
}
#ifdef USE_LEGACY_PM_SUPPORT
#ifdef CONFIG_PM
/**
* fm10k_legacy_resume - Restore device to pre-sleep state
* @pdev: PCI device information struct
*
* Legacy PCI PM hook for kernels without support for the newer generic power
* management hooks. This function is called to resume the device from a sleep
* state and must restore it to pre-sleep operation.
**/
static int fm10k_legacy_resume(struct pci_dev *pdev)
{
int err;
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
@ -2135,110 +2511,26 @@ static int fm10k_resume(struct pci_dev *pdev)
pci_wake_from_d3(pdev, false);
/* refresh hw_addr in case it was dropped */
hw->hw_addr = interface->uc_addr;
/* reset hardware to known state */
err = hw->mac.ops.init_hw(&interface->hw);
if (err) {
dev_err(&pdev->dev, "init_hw failed: %d\n", err);
return err;
}
/* reset statistics starting values */
hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
rtnl_lock();
err = fm10k_init_queueing_scheme(interface);
if (err)
goto err_queueing_scheme;
err = fm10k_mbx_request_irq(interface);
if (err)
goto err_mbx_irq;
err = fm10k_uio_request_irq(interface);
if (err)
goto err_uio_irq;
err = fm10k_hw_ready(interface);
if (err)
goto err_open;
err = netif_running(netdev) ? fm10k_open(netdev) : 0;
if (err)
goto err_open;
rtnl_unlock();
/* assume host is not ready, to prevent race with watchdog in case we
* actually don't have connection to the switch
*/
interface->host_ready = false;
fm10k_watchdog_host_not_ready(interface);
/* clear the service task disable bit to allow service task to start */
clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
fm10k_service_event_schedule(interface);
/* restore SR-IOV interface */
fm10k_iov_resume(pdev);
netif_device_attach(netdev);
return 0;
err_open:
fm10k_uio_free_irq(interface);
err_uio_irq:
fm10k_mbx_free_irq(interface);
err_mbx_irq:
fm10k_clear_queueing_scheme(interface);
err_queueing_scheme:
rtnl_unlock();
return err;
return fm10k_resume(&pdev->dev);
}
/**
* fm10k_suspend - Prepare the device for a system sleep state
* fm10k_legacy_suspend - Prepare the device for a system sleep state
* @pdev: PCI device information struct
* @state: device power state
*
* fm10k_suspend is meant to shutdown the device prior to the system entering
* a sleep state. The fm10k hardware does not support wake on lan so the
* driver simply needs to shut down the device so it is in a low power state.
* Legacy PM hook for kernels without support for the newer generic power
* management hooks. This function is called to suspend the device and shut
* down into a low power state.
**/
static int fm10k_suspend(struct pci_dev *pdev,
pm_message_t __always_unused state)
static int fm10k_legacy_suspend(struct pci_dev *pdev,
pm_message_t __always_unused state)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
struct net_device *netdev = interface->netdev;
int err = 0;
int err;
netif_device_detach(netdev);
fm10k_iov_suspend(pdev);
/* the watchdog tasks may read registers, which will appear like a
* surprise-remove event once the PCI device is disabled. This will
* cause us to close the netdevice, so we don't retain the open/closed
* state post-resume. Prevent this by disabling the service task while
* suspended, until we actually resume.
*/
set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
cancel_work_sync(&interface->service_task);
rtnl_lock();
if (netif_running(netdev))
fm10k_close(netdev);
fm10k_uio_free_irq(interface);
fm10k_mbx_free_irq(interface);
fm10k_clear_queueing_scheme(interface);
rtnl_unlock();
err = fm10k_suspend(&pdev->dev);
if (err)
return err;
err = pci_save_state(pdev);
if (err)
@ -2250,8 +2542,9 @@ static int fm10k_suspend(struct pci_dev *pdev,
return 0;
}
#endif /* CONFIG_PM */
#endif /* USE_LEGACY_PM_SUPPORT */
/**
* fm10k_io_error_detected - called when PCI error is detected
* @pdev: Pointer to PCI device
@ -2271,18 +2564,7 @@ static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
if (state == pci_channel_io_perm_failure)
return PCI_ERS_RESULT_DISCONNECT;
rtnl_lock();
if (netif_running(netdev))
fm10k_close(netdev);
fm10k_uio_free_irq(interface);
fm10k_mbx_free_irq(interface);
/* free interrupts */
fm10k_clear_queueing_scheme(interface);
rtnl_unlock();
fm10k_prepare_suspend(interface);
/* Request a slot reset. */
return PCI_ERS_RESULT_NEED_RESET;
@ -2296,10 +2578,9 @@ static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
*/
static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
pci_ers_result_t result;
if (pci_enable_device_mem(pdev)) {
if (pci_reenable_device(pdev)) {
dev_err(&pdev->dev,
"Cannot re-enable PCI device after reset.\n");
result = PCI_ERS_RESULT_DISCONNECT;
@ -2314,12 +2595,6 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
pci_wake_from_d3(pdev, false);
/* refresh hw_addr in case it was dropped */
interface->hw.hw_addr = interface->uc_addr;
interface->flags |= FM10K_FLAG_RESET_REQUESTED;
fm10k_service_event_schedule(interface);
result = PCI_ERS_RESULT_RECOVERED;
}
@ -2339,45 +2614,74 @@ static void fm10k_io_resume(struct pci_dev *pdev)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
int err = 0;
int err;
/* reset hardware to known state */
err = hw->mac.ops.init_hw(&interface->hw);
if (err) {
dev_err(&pdev->dev, "init_hw failed: %d\n", err);
return;
}
/* reset statistics starting values */
hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
rtnl_lock();
err = fm10k_init_queueing_scheme(interface);
if (err) {
dev_err(&interface->pdev->dev,
"init_queueing_scheme failed: %d\n", err);
goto unlock;
}
/* reassociate interrupts */
fm10k_mbx_request_irq(interface);
fm10k_uio_request_irq(interface);
if (netif_running(netdev))
err = fm10k_open(netdev);
/* final check of hardware state before registering the interface */
err = err ? : fm10k_hw_ready(interface);
if (!err)
err = fm10k_handle_resume(interface);
if (err)
dev_warn(&pdev->dev,
"%s failed: %d\n", __func__, err);
else
netif_device_attach(netdev);
unlock:
rtnl_unlock();
}
#if defined(HAVE_PCI_ERROR_HANDLER_RESET_PREPARE) || defined(HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY) || defined(HAVE_RHEL7_PCI_RESET_NOTIFY)
/**
* fm10k_io_reset_prepare - called when PCI function is about to be reset
* @pdev: Pointer to PCI device
*
* This callback is called when the PCI function is about to be reset,
* allowing the device driver to prepare for it.
*/
static void fm10k_io_reset_prepare(struct pci_dev *pdev)
{
/* warn in case we have any active VF devices */
if (pci_num_vf(pdev))
dev_warn(&pdev->dev,
"PCIe FLR may cause issues for any active VF devices\n");
fm10k_prepare_suspend(pci_get_drvdata(pdev));
}
/**
* fm10k_io_reset_done - called when PCI function has finished resetting
* @pdev: Pointer to PCI device
*
* This callback is called just after the PCI function is reset, such as via
* /sys/class/net/<enpX>/device/reset or similar.
*/
static void fm10k_io_reset_done(struct pci_dev *pdev)
{
struct fm10k_intfc *interface = pci_get_drvdata(pdev);
int err = fm10k_handle_resume(interface);
if (err) {
dev_warn(&pdev->dev,
"%s failed: %d\n", __func__, err);
netif_device_detach(interface->netdev);
}
}
#endif
#if defined(HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY) || defined(HAVE_RHEL7_PCI_RESET_NOTIFY)
/**
* fm10k_io_reset_notify - called when PCI function is reset
* @pdev: Pointer to PCI device
* @prepare: true if this call is preparing for a reset
*
* This callback is called when the PCI function is reset such as from
* /sys/class/net/<enpX>/device/reset or similar. When prepare is true, it
* means we should prepare for a function reset. If prepare is false, it means
* the function reset just occurred.
*/
static void fm10k_io_reset_notify(struct pci_dev *pdev, bool prepare)
{
if (prepare)
fm10k_io_reset_prepare(pdev);
else
fm10k_io_reset_done(pdev);
}
#endif
#ifdef HAVE_CONST_STRUCT_PCI_ERROR_HANDLERS
static const struct pci_error_handlers fm10k_err_handler = {
#else
@ -2386,19 +2690,53 @@ static struct pci_error_handlers fm10k_err_handler = {
.error_detected = fm10k_io_error_detected,
.slot_reset = fm10k_io_slot_reset,
.resume = fm10k_io_resume,
#ifdef HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
.reset_prepare = fm10k_io_reset_prepare,
.reset_done = fm10k_io_reset_done,
#endif
#ifdef HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
.reset_notify = fm10k_io_reset_notify,
#endif
};
#if defined(HAVE_RHEL6_SRIOV_CONFIGURE) || defined(HAVE_RHEL7_PCI_DRIVER_RH)
static struct pci_driver_rh fm10k_driver_rh = {
#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
.sriov_configure = fm10k_iov_configure,
#endif /* HAVE_RHEL6_SRIOV_CONFIGURE */
#ifdef HAVE_RHEL7_PCI_RESET_NOTIFY
.reset_notify = fm10k_io_reset_notify,
#endif /* HAVE_RHEL7_PCI_RESET_NOTIFY */
};
#endif /* HAVE_RHEL6_SRIOV_CONFIGURE || HAVE_RHEL7_PCI_DRIVER_RH */
#ifndef USE_LEGACY_PM_SUPPORT
static SIMPLE_DEV_PM_OPS(fm10k_pm_ops, fm10k_suspend, fm10k_resume);
#endif
static struct pci_driver fm10k_driver = {
.name = fm10k_driver_name,
.id_table = fm10k_pci_tbl,
.probe = fm10k_probe,
.remove = fm10k_remove,
#ifdef USE_LEGACY_PM_SUPPORT
#ifdef CONFIG_PM
.suspend = fm10k_suspend,
.resume = fm10k_resume,
#endif
.suspend = fm10k_legacy_suspend,
.resume = fm10k_legacy_resume,
#endif /* CONFIG_PM */
#else
.driver = {
.pm = &fm10k_pm_ops,
},
#endif /* USE_LEGACY_PM_SUPPORT */
#ifdef HAVE_SRIOV_CONFIGURE
.sriov_configure = fm10k_iov_configure,
#endif
#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
.rh_reserved = &fm10k_driver_rh,
#endif
#ifdef HAVE_RHEL7_PCI_DRIVER_RH
.pci_driver_rh = &fm10k_driver_rh,
#endif
.err_handler = &fm10k_err_handler
};
@ -2410,6 +2748,12 @@ static struct pci_driver fm10k_driver = {
**/
int fm10k_register_pci_driver(void)
{
#ifdef HAVE_RHEL7_PCI_DRIVER_RH
/* The size member must be initialized in the driver via a call to
* set_pci_driver_rh_size before pci_register_driver is called.
*/
set_pci_driver_rh_size(fm10k_driver_rh);
#endif
return pci_register_driver(&fm10k_driver);
}


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k_pf.h"
#include "fm10k_vf.h"
@ -51,21 +34,21 @@ static s32 fm10k_reset_hw_pf(struct fm10k_hw *hw)
/* shut down all rings */
err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES);
if (err)
if (err == FM10K_ERR_REQUESTS_PENDING) {
hw->mac.reset_while_pending++;
goto force_reset;
} else if (err) {
return err;
}
/* Verify that DMA is no longer active */
reg = fm10k_read_reg(hw, FM10K_DMA_CTRL);
if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE))
return FM10K_ERR_DMA_PENDING;
/* verify the switch is ready for reset */
reg = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY))
goto out;
force_reset:
/* Initiate data path reset */
reg |= FM10K_DMA_CTRL_DATAPATH_RESET;
reg = FM10K_DMA_CTRL_DATAPATH_RESET;
fm10k_write_reg(hw, FM10K_DMA_CTRL, reg);
/* Flush write and allow 100us for reset to complete */
@ -75,10 +58,9 @@ static s32 fm10k_reset_hw_pf(struct fm10k_hw *hw)
/* Verify we made it out of reset */
reg = fm10k_read_reg(hw, FM10K_IP);
if (!(reg & FM10K_IP_NOTINRESET))
err = FM10K_ERR_RESET_FAILED;
return FM10K_ERR_RESET_FAILED;
out:
return err;
return 0;
}
/**
@ -864,14 +846,10 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
vf_q_idx = fm10k_vf_queue_index(hw, vf_idx);
qmap_idx = qmap_stride * vf_idx;
/* MAP Tx queue back to 0 temporarily, and disable it */
fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
/* Determine correct default VLAN ID. The FM10K_VLAN_OVERRIDE bit is
* used here to indicate to the VF that it will not have privilege to
* write VLAN_TABLE. All policy is enforced on the PF but this allows
* the VF to correctly report errors to userspace rqeuests.
* the VF to correctly report errors to userspace requests.
*/
if (vf_info->pf_vid)
vf_vid = vf_info->pf_vid | FM10K_VLAN_OVERRIDE;
@ -883,9 +861,35 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC,
vf_info->mac, vf_vid);
/* load onto outgoing mailbox, ignore any errors on enqueue */
if (vf_info->mbx.ops.enqueue_tx)
vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
/* Configure Queue control register with new VLAN ID. The TXQCTL
* register is RO from the VF, so the PF must do this even in the
* case of notifying the VF of a new VID via the mailbox.
*/
txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) &
FM10K_TXQCTL_VID_MASK;
txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
FM10K_TXQCTL_VF | vf_idx;
for (i = 0; i < queues_per_pool; i++)
fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx + i), txqctl);
/* try loading a message onto outgoing mailbox first */
if (vf_info->mbx.ops.enqueue_tx) {
err = vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
if (err != FM10K_MBX_ERR_NO_MBX)
return err;
err = 0;
}
/* If we aren't connected to a mailbox, this is most likely because
* the VF driver is not running. It should thus be safe to re-map
* queues and use the registers to pass the MAC address so that the VF
* driver gets correct information during its initialization.
*/
/* MAP Tx queue back to 0 temporarily, and disable it */
fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
/* verify the ring has been disabled before modifying base address registers */
txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
@ -924,16 +928,6 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
FM10K_TDLEN_ITR_SCALE_SHIFT);
err_out:
/* configure Queue control register */
txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) &
FM10K_TXQCTL_VID_MASK;
txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
FM10K_TXQCTL_VF | vf_idx;
/* assign VLAN ID */
for (i = 0; i < queues_per_pool; i++)
fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx + i), txqctl);
/* restore the queue back to VF ownership */
fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
return err;
@ -1154,7 +1148,7 @@ static void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
* @results: Pointer array to message, results[0] is pointer to message
* @mbx: Pointer to mailbox information structure
*
* This function is a default handler for MSI-X requests from the VF. The
* This function is a default handler for MSI-X requests from the VF. The
* assumption is that in this case it is acceptable to just directly
* hand off the message from the VF to the underlying shared code.
**/
@ -1170,13 +1164,13 @@ s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
/**
* fm10k_iov_select_vid - Select correct default VLAN ID
* @hw: Pointer to hardware structure
* @vf_info: pointer to VF information structure
* @vid: VLAN ID to correct
*
* Will report an error if the VLAN ID is out of range. For VID = 0, it will
* return either the pf_vid or sw_vid depending on which one is set.
*/
static s32 fm10k_iov_select_vid(struct fm10k_vf_info *vf_info, u16 vid)
s32 fm10k_iov_select_vid(struct fm10k_vf_info *vf_info, u16 vid)
{
if (!vid)
return vf_info->pf_vid ? vf_info->pf_vid : vf_info->sw_vid;
@ -1324,19 +1318,19 @@ static u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info,
case FM10K_XCAST_MODE_PROMISC:
if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE)
return FM10K_XCAST_MODE_PROMISC;
/* fallthough */
/* fall through */
case FM10K_XCAST_MODE_ALLMULTI:
if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE)
return FM10K_XCAST_MODE_ALLMULTI;
/* fallthough */
/* fall through */
case FM10K_XCAST_MODE_MULTI:
if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE)
return FM10K_XCAST_MODE_MULTI;
/* fallthough */
/* fall through */
case FM10K_XCAST_MODE_NONE:
if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)
return FM10K_XCAST_MODE_NONE;
/* fallthough */
/* fall through */
default:
break;
}
@ -1620,25 +1614,15 @@ static s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw)
**/
static s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready)
{
s32 ret_val = 0;
u32 dma_ctrl2;
/* verify the switch is ready for interaction */
dma_ctrl2 = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY))
goto out;
return 0;
/* retrieve generic host state info */
ret_val = fm10k_get_host_state_generic(hw, switch_ready);
if (ret_val)
goto out;
/* interface cannot receive traffic without logical ports */
if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
ret_val = fm10k_request_lport_map_pf(hw);
out:
return ret_val;
return fm10k_get_host_state_generic(hw, switch_ready);
}
/* This structure defines the attributes to be parsed below */
@ -1817,6 +1801,7 @@ static const struct fm10k_mac_ops mac_ops_pf = {
.set_dma_mask = fm10k_set_dma_mask_pf,
.get_fault = fm10k_get_fault_pf,
.get_host_state = fm10k_get_host_state_pf,
.request_lport_map = fm10k_request_lport_map_pf,
};
static const struct fm10k_iov_ops iov_ops_pf = {


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_PF_H_
#define _FM10K_PF_H_
@ -117,6 +100,7 @@ extern const struct fm10k_tlv_attr fm10k_err_msg_attr[];
#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \
FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func)
s32 fm10k_iov_select_vid(struct fm10k_vf_info *vf_info, u16 vid);
s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **,
struct fm10k_mbx_info *);


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k_tlv.h"
@ -120,6 +103,7 @@ static s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string)
* @msg: Pointer to message block
* @attr_id: Attribute ID
* @mac_addr: MAC address to be stored
* @vlan: VLAN to be stored
*
* This function will reorder a MAC address to be CPU endian and store it
* in the attribute buffer. It will return success if provided with a
@ -155,8 +139,8 @@ s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id,
/**
* fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute
* @attr: Pointer to attribute
* @attr_id: Attribute ID
* @mac_addr: location of buffer to store MAC address
* @vlan: location of buffer to store VLAN
*
* This function pulls the MAC address back out of the attribute and will
* place it in the array pointed to by mac_addr. It will return success
@ -549,7 +533,7 @@ static s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
* @hw: Pointer to hardware structure
* @msg: Pointer to message
* @mbx: Pointer to mailbox information structure
* @func: Function array containing list of message handling functions
* @data: Pointer to message handler data structure
*
* This function should be the first function called upon receiving a
* message. The handler will identify the message type and call the correct


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_TLV_H_
#define _FM10K_TLV_H_


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_TYPE_H_
#define _FM10K_TYPE_H_
@ -153,6 +136,7 @@ struct fm10k_hw;
#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000
#define FM10K_TUNNEL_CFG 0x0040
#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16
#define FM10K_TUNNEL_CFG_GENEVE 0x0041
#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050)
#define FM10K_SWPRI_MAX 16
#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800)
@ -224,11 +208,6 @@ struct fm10k_hw;
#define FM10K_STATS_LOOPBACK_DROP 0x3806
#define FM10K_STATS_NODESC_DROP 0x3807
/* Timesync registers */
#define FM10K_SYSTIME 0x3814
#define FM10K_SYSTIME_CFG 0x3818
#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F
/* PCIe state registers */
#define FM10K_PHYADDR 0x381C
@ -362,12 +341,10 @@ struct fm10k_hw;
#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844)
#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846)
/* Defines for size of uncacheable and write-combining memories */
/* Defines for size of uncacheable memories */
#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */
#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */
#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */
#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START)
#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START)
#define FM10K_UC_ADDR_END 0x100000 /* end of standard regs */
#define FM10K_UC_ADDR_SIZE (FM10K_UC_ADDR_END - FM10K_UC_ADDR_START)
/* Define timeouts for resets and disables */
#define FM10K_QUEUE_DISABLE_TIMEOUT 100
@ -536,6 +513,7 @@ struct fm10k_mac_ops {
s32 (*stop_hw)(struct fm10k_hw *);
s32 (*get_bus_info)(struct fm10k_hw *);
s32 (*get_host_state)(struct fm10k_hw *, bool *);
s32 (*request_lport_map)(struct fm10k_hw *);
s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool);
s32 (*read_mac_addr)(struct fm10k_hw *);
s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *,
@ -572,6 +550,7 @@ struct fm10k_mac_info {
bool tx_ready;
u32 dglort_map;
u8 itr_scale;
u64 reset_while_pending;
};
struct fm10k_swapi_table_info {
@ -613,7 +592,6 @@ struct fm10k_vf_info {
u8 vf_flags; /* flags indicating what modes
* are supported for the port
*/
bool trusted; /* VF trust mode */
};
#define FM10K_VF_FLAG_ALLMULTI_CAPABLE (u8)(BIT(FM10K_XCAST_MODE_ALLMULTI))


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"
@ -40,6 +23,8 @@ static irqreturn_t fm10k_msix_uio(int __always_unused irq, void *data)
/**
* fm10k_uio_set_irq - enable or disable uio irq
* @interface: pointer to private device structure
* @on: boolean indicating whether to enable or disable the IRQ
**/
static void fm10k_uio_set_irq(struct fm10k_intfc *interface, bool on)
{
@ -68,7 +53,7 @@ static void fm10k_uio_irq_task(struct work_struct *work)
interface = container_of(work, struct fm10k_intfc, uio_task);
/* if the interface is resetting, just re-queue */
if (test_bit(__FM10K_RESETTING, &interface->state)) {
if (test_bit(__FM10K_RESETTING, interface->state)) {
queue_work(fm10k_workqueue, &interface->uio_task);
return;
}
@ -97,7 +82,7 @@ int fm10k_uio_request_irq(struct fm10k_intfc *interface)
struct fm10k_hw *hw = &interface->hw;
int err;
if (!(interface->flags & FM10K_UIO_REGISTERED))
if (!test_bit(FM10K_FLAG_UIO_REGISTERED, interface->flags))
return 0;
/* request the IRQ */
@ -124,7 +109,7 @@ void fm10k_uio_free_irq(struct fm10k_intfc *interface)
struct fm10k_hw *hw = &interface->hw;
struct msix_entry *entry;
if (!(interface->flags & FM10K_UIO_REGISTERED))
if (!test_bit(FM10K_FLAG_UIO_REGISTERED, interface->flags))
return;
/* no uio IRQ to free if MSI-X is not enabled */
@ -204,7 +189,7 @@ int fm10k_uio_probe(struct fm10k_intfc *interface)
/* Enable bits in EIMR register */
fm10k_write_reg(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(SWITCHINTERRUPT));
interface->flags |= FM10K_UIO_REGISTERED;
set_bit(FM10K_FLAG_UIO_REGISTERED, interface->flags);
return 0;
}
@ -213,10 +198,11 @@ void fm10k_uio_remove(struct fm10k_intfc *interface)
{
struct uio_info *uio = &interface->uio;
if (!(interface->flags & FM10K_UIO_REGISTERED))
fm10k_uio_free_irq(interface);
if (!test_and_clear_bit(FM10K_FLAG_UIO_REGISTERED, interface->flags))
return;
fm10k_uio_free_irq(interface);
uio_unregister_device(uio);
cancel_work_sync(&interface->uio_task);


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k_vf.h"
@ -34,7 +17,7 @@ static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
/* we need to disable the queues before taking further steps */
err = fm10k_stop_hw_generic(hw);
if (err)
if (err && err != FM10K_ERR_REQUESTS_PENDING)
return err;
/* If permanent address is set then we need to restore it */
@ -67,7 +50,7 @@ static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
fm10k_write_reg(hw, FM10K_TDLEN(i), tdlen);
}
return 0;
return err;
}
/**
@ -83,7 +66,9 @@ static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
/* shut down queues we own and reset DMA configuration */
err = fm10k_stop_hw_vf(hw);
if (err)
if (err == FM10K_ERR_REQUESTS_PENDING)
hw->mac.reset_while_pending++;
else if (err)
return err;
/* Initiate VF reset */
@ -96,9 +81,9 @@ static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
/* Clear reset bit and verify it was cleared */
fm10k_write_reg(hw, FM10K_VFCTRL, 0);
if (fm10k_read_reg(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST)
err = FM10K_ERR_RESET_FAILED;
return FM10K_ERR_RESET_FAILED;
return err;
return 0;
}
/**


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _FM10K_VF_H_
#define _FM10K_VF_H_


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "fm10k.h"
#include "kcompat.h"
@ -83,7 +66,7 @@ int _kc_skb_pad(struct sk_buff *skb, int pad)
ntail = skb->data_len + pad - (skb->end - skb->tail);
if (likely(skb_cloned(skb) || ntail > 0)) {
if (pskb_expand_head(skb, 0, ntail, GFP_ATOMIC));
if (pskb_expand_head(skb, 0, ntail, GFP_ATOMIC))
goto free_skb;
}
@ -168,8 +151,7 @@ void _kc_free_netdev(struct net_device *netdev)
{
struct adapter_struct *adapter = netdev_priv(netdev);
if (adapter->config_space != NULL)
kfree(adapter->config_space);
kfree(adapter->config_space);
#ifdef CONFIG_SYSFS
if (netdev->reg_state == NETREG_UNINITIALIZED) {
kfree((char *)netdev - netdev->padded);
@ -777,6 +759,34 @@ int __kc_pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val)
return 0;
}
int __kc_pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val)
{
int ret;
*val = 0;
if (pos & 3)
return -EINVAL;
if (__kc_pcie_capability_reg_implemented(dev, pos)) {
ret = pci_read_config_dword(dev, pci_pcie_cap(dev) + pos, val);
/*
 * Reset *val to 0 if pci_read_config_dword() fails; it may
 * have been set to 0xFFFFFFFF if a hardware error occurred
 * during the read.
 */
if (ret)
*val = 0;
return ret;
}
if (pci_is_pcie(dev) && pos == PCI_EXP_SLTSTA &&
pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) {
*val = PCI_EXP_SLTSTA_PDS;
}
return 0;
}
int __kc_pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val)
{
if (pos & 1)
@ -811,112 +821,6 @@ int __kc_pcie_capability_clear_word(struct pci_dev *dev, int pos,
}
#endif /* < 3.7.0 */
/******************************************************************************
* ripped from linux/net/ipv6/exthdrs_core.c, GPL2, no direct copyright,
* inferred copyright from kernel
*/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,8,0) )
int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
int target, unsigned short *fragoff, int *flags)
{
unsigned int start = skb_network_offset(skb) + sizeof(struct ipv6hdr);
u8 nexthdr = ipv6_hdr(skb)->nexthdr;
unsigned int len;
bool found;
#define __KC_IP6_FH_F_FRAG BIT(0)
#define __KC_IP6_FH_F_AUTH BIT(1)
#define __KC_IP6_FH_F_SKIP_RH BIT(2)
if (fragoff)
*fragoff = 0;
if (*offset) {
struct ipv6hdr _ip6, *ip6;
ip6 = skb_header_pointer(skb, *offset, sizeof(_ip6), &_ip6);
if (!ip6 || (ip6->version != 6)) {
printk(KERN_ERR "IPv6 header not found\n");
return -EBADMSG;
}
start = *offset + sizeof(struct ipv6hdr);
nexthdr = ip6->nexthdr;
}
len = skb->len - start;
do {
struct ipv6_opt_hdr _hdr, *hp;
unsigned int hdrlen;
found = (nexthdr == target);
if ((!ipv6_ext_hdr(nexthdr)) || nexthdr == NEXTHDR_NONE) {
if (target < 0 || found)
break;
return -ENOENT;
}
hp = skb_header_pointer(skb, start, sizeof(_hdr), &_hdr);
if (!hp)
return -EBADMSG;
if (nexthdr == NEXTHDR_ROUTING) {
struct ipv6_rt_hdr _rh, *rh;
rh = skb_header_pointer(skb, start, sizeof(_rh),
&_rh);
if (!rh)
return -EBADMSG;
if (flags && (*flags & __KC_IP6_FH_F_SKIP_RH) &&
rh->segments_left == 0)
found = false;
}
if (nexthdr == NEXTHDR_FRAGMENT) {
unsigned short _frag_off;
__be16 *fp;
if (flags) /* Indicate that this is a fragment */
*flags |= __KC_IP6_FH_F_FRAG;
fp = skb_header_pointer(skb,
start+offsetof(struct frag_hdr,
frag_off),
sizeof(_frag_off),
&_frag_off);
if (!fp)
return -EBADMSG;
_frag_off = ntohs(*fp) & ~0x7;
if (_frag_off) {
if (target < 0 &&
((!ipv6_ext_hdr(hp->nexthdr)) ||
hp->nexthdr == NEXTHDR_NONE)) {
if (fragoff)
*fragoff = _frag_off;
return hp->nexthdr;
}
return -ENOENT;
}
hdrlen = 8;
} else if (nexthdr == NEXTHDR_AUTH) {
if (flags && (*flags & __KC_IP6_FH_F_AUTH) && (target < 0))
break;
hdrlen = (hp->hdrlen + 2) << 2;
} else
hdrlen = ipv6_optlen(hp);
if (!found) {
nexthdr = hp->nexthdr;
len -= hdrlen;
start += hdrlen;
}
} while (!found);
*offset = start;
return nexthdr;
}
#endif /* < 3.8.0 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,9,0) )
#ifdef CONFIG_XPS
@ -940,7 +844,7 @@ struct _kc_netdev_queue_attribute {
#define to_kc_netdev_queue_attr(_attr) container_of(_attr, \
struct _kc_netdev_queue_attribute, attr)
int __kc_netif_set_xps_queue(struct net_device *dev, struct cpumask *mask,
int __kc_netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
u16 index)
{
struct netdev_queue *txq = netdev_get_tx_queue(dev, index);
@ -1173,14 +1077,12 @@ int __kc_pci_vfs_assigned(struct pci_dev __maybe_unused *dev)
#endif /* CONFIG_PCI_IOV */
#endif /* 3.10.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,12,0) )
const unsigned char pcie_link_speed[] = {
static const unsigned char __maybe_unused pcie_link_speed[] = {
PCI_SPEED_UNKNOWN, /* 0 */
PCIE_SPEED_2_5GT, /* 1 */
PCIE_SPEED_5_0GT, /* 2 */
PCIE_SPEED_8_0GT, /* 3 */
PCI_SPEED_UNKNOWN, /* 4 */
PCIE_SPEED_16_0GT, /* 4 */
PCI_SPEED_UNKNOWN, /* 5 */
PCI_SPEED_UNKNOWN, /* 6 */
PCI_SPEED_UNKNOWN, /* 7 */
@ -1194,6 +1096,8 @@ const unsigned char pcie_link_speed[] = {
PCI_SPEED_UNKNOWN /* F */
};
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,12,0) )
int __kc_pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
enum pcie_link_width *width)
{
@ -1243,24 +1147,113 @@ int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask)
err = dma_set_coherent_mask(dev, mask);
return err;
}
void __kc_netdev_rss_key_fill(void *buffer, size_t len)
{
/* Set of random keys generated using kernel random number generator */
static const u8 seed[NETDEV_RSS_KEY_LEN] = {0xE6, 0xFA, 0x35, 0x62,
0x95, 0x12, 0x3E, 0xA3, 0xFB, 0x46, 0xC1, 0x5F,
0xB1, 0x43, 0x82, 0x5B, 0x6A, 0x49, 0x50, 0x95,
0xCD, 0xAB, 0xD8, 0x11, 0x8F, 0xC5, 0xBD, 0xBC,
0x6A, 0x4A, 0xB2, 0xD4, 0x1F, 0xFE, 0xBC, 0x41,
0xBF, 0xAC, 0xB2, 0x9A, 0x8F, 0x70, 0xE9, 0x2A,
0xD7, 0xB2, 0x80, 0xB6, 0x5B, 0xAA, 0x9D, 0x20};
BUG_ON(len > NETDEV_RSS_KEY_LEN);
memcpy(buffer, seed, len);
}
#endif /* 3.13.0 */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,14,0) )
/******************************************************************************
* ripped from linux/net/ipv6/exthdrs_core.c, GPL2, no direct copyright,
* inferred copyright from kernel
*/
int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
int target, unsigned short *fragoff, int *flags)
{
unsigned int start = skb_network_offset(skb) + sizeof(struct ipv6hdr);
u8 nexthdr = ipv6_hdr(skb)->nexthdr;
unsigned int len;
bool found;
#define __KC_IP6_FH_F_FRAG BIT(0)
#define __KC_IP6_FH_F_AUTH BIT(1)
#define __KC_IP6_FH_F_SKIP_RH BIT(2)
if (fragoff)
*fragoff = 0;
if (*offset) {
struct ipv6hdr _ip6, *ip6;
ip6 = skb_header_pointer(skb, *offset, sizeof(_ip6), &_ip6);
if (!ip6 || (ip6->version != 6)) {
printk(KERN_ERR "IPv6 header not found\n");
return -EBADMSG;
}
start = *offset + sizeof(struct ipv6hdr);
nexthdr = ip6->nexthdr;
}
len = skb->len - start;
do {
struct ipv6_opt_hdr _hdr, *hp;
unsigned int hdrlen;
found = (nexthdr == target);
if ((!ipv6_ext_hdr(nexthdr)) || nexthdr == NEXTHDR_NONE) {
if (target < 0 || found)
break;
return -ENOENT;
}
hp = skb_header_pointer(skb, start, sizeof(_hdr), &_hdr);
if (!hp)
return -EBADMSG;
if (nexthdr == NEXTHDR_ROUTING) {
struct ipv6_rt_hdr _rh, *rh;
rh = skb_header_pointer(skb, start, sizeof(_rh),
&_rh);
if (!rh)
return -EBADMSG;
if (flags && (*flags & __KC_IP6_FH_F_SKIP_RH) &&
rh->segments_left == 0)
found = false;
}
if (nexthdr == NEXTHDR_FRAGMENT) {
unsigned short _frag_off;
__be16 *fp;
if (flags) /* Indicate that this is a fragment */
*flags |= __KC_IP6_FH_F_FRAG;
fp = skb_header_pointer(skb,
start+offsetof(struct frag_hdr,
frag_off),
sizeof(_frag_off),
&_frag_off);
if (!fp)
return -EBADMSG;
_frag_off = ntohs(*fp) & ~0x7;
if (_frag_off) {
if (target < 0 &&
((!ipv6_ext_hdr(hp->nexthdr)) ||
hp->nexthdr == NEXTHDR_NONE)) {
if (fragoff)
*fragoff = _frag_off;
return hp->nexthdr;
}
return -ENOENT;
}
hdrlen = 8;
} else if (nexthdr == NEXTHDR_AUTH) {
if (flags && (*flags & __KC_IP6_FH_F_AUTH) && (target < 0))
break;
hdrlen = (hp->hdrlen + 2) << 2;
} else
hdrlen = ipv6_optlen(hp);
if (!found) {
nexthdr = hp->nexthdr;
len -= hdrlen;
start += hdrlen;
}
} while (!found);
*offset = start;
return nexthdr;
}
int __kc_pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec)
{
@ -1285,6 +1278,38 @@ int __kc_pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
}
#endif /* 3.14.0 */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(3,15,0))
char *_kc_devm_kstrdup(struct device *dev, const char *s, gfp_t gfp)
{
size_t size;
char *buf;
if (!s)
return NULL;
size = strlen(s) + 1;
buf = devm_kzalloc(dev, size, gfp);
if (buf)
memcpy(buf, s, size);
return buf;
}
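The backport above is a plain allocate-and-copy. A minimal libc-only model of the same behavior (the name `kstrdup_model` is illustrative and not part of the driver; `devm_kzalloc` is replaced by `calloc` since there is no device-managed allocator in userspace):

```c
#include <stdlib.h>
#include <string.h>

/* Plain-libc model of _kc_devm_kstrdup(): allocate strlen+1 zeroed
 * bytes and copy the source, NUL terminator included; a NULL input
 * yields NULL, mirroring the kernel helper. */
static char *kstrdup_model(const char *s)
{
	size_t size;
	char *buf;

	if (!s)
		return NULL;
	size = strlen(s) + 1;
	buf = calloc(1, size);
	if (buf)
		memcpy(buf, s, size);
	return buf;
}
```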
void __kc_netdev_rss_key_fill(void *buffer, size_t len)
{
/* Set of random keys generated using kernel random number generator */
static const u8 seed[NETDEV_RSS_KEY_LEN] = {0xE6, 0xFA, 0x35, 0x62,
0x95, 0x12, 0x3E, 0xA3, 0xFB, 0x46, 0xC1, 0x5F,
0xB1, 0x43, 0x82, 0x5B, 0x6A, 0x49, 0x50, 0x95,
0xCD, 0xAB, 0xD8, 0x11, 0x8F, 0xC5, 0xBD, 0xBC,
0x6A, 0x4A, 0xB2, 0xD4, 0x1F, 0xFE, 0xBC, 0x41,
0xBF, 0xAC, 0xB2, 0x9A, 0x8F, 0x70, 0xE9, 0x2A,
0xD7, 0xB2, 0x80, 0xB6, 0x5B, 0xAA, 0x9D, 0x20};
BUG_ON(len > NETDEV_RSS_KEY_LEN);
memcpy(buffer, seed, len);
}
#endif /* 3.15.0 */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,16,0) )
#ifdef HAVE_SET_RX_MODE
#ifdef NETDEV_HW_ADDR_T_UNICAST
@ -1429,8 +1454,23 @@ void __kc_dev_addr_unsync_dev(struct dev_addr_list **list, int *count,
}
#endif /* NETDEV_HW_ADDR_T_MULTICAST */
#endif /* HAVE_SET_RX_MODE */
void *__kc_devm_kmemdup(struct device *dev, const void *src, size_t len,
gfp_t gfp)
{
void *p;
p = devm_kzalloc(dev, len, gfp);
if (p)
memcpy(p, src, len);
return p;
}
#endif /* 3.16.0 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,17,0) )
#endif /* 3.17.0 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,0) )
#ifndef NO_PTP_SUPPORT
@ -1610,4 +1650,324 @@ void __kc_netdev_rss_key_fill(void *buffer, size_t len)
memcpy(buffer, __kc_netdev_rss_key, len);
}
#endif
int _kc_bitmap_print_to_pagebuf(bool list, char *buf,
const unsigned long *maskp,
int nmaskbits)
{
ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf - 2;
int n = 0;
if (len > 1) {
n = list ? bitmap_scnlistprintf(buf, len, maskp, nmaskbits) :
bitmap_scnprintf(buf, len, maskp, nmaskbits);
buf[n++] = '\n';
buf[n] = '\0';
}
return n;
}
#endif
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,1,0) )
#if !((RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(6,8) && RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) && \
(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)) && \
(SLE_VERSION_CODE > SLE_VERSION(12,1,0)))
unsigned int _kc_cpumask_local_spread(unsigned int i, int node)
{
int cpu;
/* Wrap: we always want a cpu. */
i %= num_online_cpus();
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28) )
/* Kernels prior to 2.6.28 do not have for_each_cpu or
* cpumask_of_node, so just use for_each_online_cpu()
*/
for_each_online_cpu(cpu)
if (i-- == 0)
return cpu;
return 0;
#else
if (node == -1) {
for_each_cpu(cpu, cpu_online_mask)
if (i-- == 0)
return cpu;
} else {
/* NUMA first. */
for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
if (i-- == 0)
return cpu;
for_each_cpu(cpu, cpu_online_mask) {
/* Skip NUMA nodes, done above. */
if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
continue;
if (i-- == 0)
return cpu;
}
}
#endif /* KERNEL_VERSION >= 2.6.28 */
BUG();
}
#endif
#endif
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,5,0) )
#if (!(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)))
#ifdef CONFIG_SPARC
#include <asm/idprom.h>
#include <asm/prom.h>
#endif
int _kc_eth_platform_get_mac_address(struct device *dev __maybe_unused,
u8 *mac_addr __maybe_unused)
{
#if (((LINUX_VERSION_CODE < KERNEL_VERSION(3,1,0)) && defined(CONFIG_OF) && \
!defined(HAVE_STRUCT_DEVICE_OF_NODE) || !defined(CONFIG_OF)) && \
!defined(CONFIG_SPARC))
return -ENODEV;
#else
const unsigned char *addr;
struct device_node *dp;
if (dev_is_pci(dev))
dp = pci_device_to_OF_node(to_pci_dev(dev));
else
#if defined(HAVE_STRUCT_DEVICE_OF_NODE) && defined(CONFIG_OF)
dp = dev->of_node;
#else
dp = NULL;
#endif
addr = NULL;
if (dp)
addr = of_get_mac_address(dp);
#ifdef CONFIG_SPARC
/* Kernel hasn't implemented arch_get_platform_mac_address, but we
* should handle the SPARC case here since it was supported
* originally. This is replaced by arch_get_platform_mac_address()
* upstream.
*/
if (!addr)
addr = idprom->id_ethaddr;
#endif
if (!addr)
return -ENODEV;
ether_addr_copy(mac_addr, addr);
return 0;
#endif
}
#endif /* !(RHEL_RELEASE >= 7.3) */
#endif /* < 4.5.0 */
/*****************************************************************************/
#if ((LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0)) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE <= SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE <= RHEL_RELEASE_VERSION(7,5))))
const char *_kc_phy_speed_to_str(int speed)
{
switch (speed) {
case SPEED_10:
return "10Mbps";
case SPEED_100:
return "100Mbps";
case SPEED_1000:
return "1Gbps";
case SPEED_2500:
return "2.5Gbps";
case SPEED_5000:
return "5Gbps";
case SPEED_10000:
return "10Gbps";
case SPEED_14000:
return "14Gbps";
case SPEED_20000:
return "20Gbps";
case SPEED_25000:
return "25Gbps";
case SPEED_40000:
return "40Gbps";
case SPEED_50000:
return "50Gbps";
case SPEED_56000:
return "56Gbps";
#ifdef SPEED_100000
case SPEED_100000:
return "100Gbps";
#endif
case SPEED_UNKNOWN:
return "Unknown";
default:
return "Unsupported (update phy-core.c)";
}
}
#endif /* (LINUX < 4.14.0) || (SLES <= 12.3.0) || (RHEL <= 7.5) */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,15,0) )
void _kc_ethtool_intersect_link_masks(struct ethtool_link_ksettings *dst,
struct ethtool_link_ksettings *src)
{
unsigned int size = BITS_TO_LONGS(__ETHTOOL_LINK_MODE_MASK_NBITS);
unsigned int idx = 0;
for (; idx < size; idx++) {
dst->link_modes.supported[idx] &=
src->link_modes.supported[idx];
dst->link_modes.advertising[idx] &=
src->link_modes.advertising[idx];
}
}
#endif /* 4.15.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,17,0))
/* PCIe link information */
#define PCIE_SPEED2STR(speed) \
((speed) == PCIE_SPEED_16_0GT ? "16 GT/s" : \
(speed) == PCIE_SPEED_8_0GT ? "8 GT/s" : \
(speed) == PCIE_SPEED_5_0GT ? "5 GT/s" : \
(speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \
"Unknown speed")
/* PCIe speed to Mb/s reduced by encoding overhead */
#define PCIE_SPEED2MBS_ENC(speed) \
((speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
(speed) == PCIE_SPEED_8_0GT ? 8000*128/130 : \
(speed) == PCIE_SPEED_5_0GT ? 5000*8/10 : \
(speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
0)
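The divisors in `PCIE_SPEED2MBS_ENC` encode the PCIe line-code overhead: 2.5 GT/s and 5 GT/s links use 8b/10b encoding (20% overhead), while 8 GT/s and 16 GT/s links use 128b/130b (about 1.5%). A standalone sketch of the same integer arithmetic; the enum and function names here are hypothetical stand-ins, not driver symbols:

```c
/* Hypothetical stand-ins for the kernel's pci_bus_speed values. */
enum bus_speed { SPD_2_5GT, SPD_5_0GT, SPD_8_0GT, SPD_16_0GT };

/* Effective per-lane Mb/s after encoding overhead: 8/16 GT/s links
 * use 128b/130b encoding, 2.5/5 GT/s links use 8b/10b. */
static unsigned int speed2mbs_enc(enum bus_speed s)
{
	switch (s) {
	case SPD_16_0GT: return 16000 * 128 / 130;
	case SPD_8_0GT:  return 8000 * 128 / 130;
	case SPD_5_0GT:  return 5000 * 8 / 10;
	case SPD_2_5GT:  return 2500 * 8 / 10;
	default:         return 0;
	}
}
```

Integer division truncates, so a 16 GT/s lane comes out as 15753 Mb/s rather than a rounded 15754.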
static u32
_kc_pcie_bandwidth_available(struct pci_dev *dev,
struct pci_dev **limiting_dev,
enum pci_bus_speed *speed,
enum pcie_link_width *width)
{
u16 lnksta;
enum pci_bus_speed next_speed;
enum pcie_link_width next_width;
u32 bw, next_bw;
if (speed)
*speed = PCI_SPEED_UNKNOWN;
if (width)
*width = PCIE_LNK_WIDTH_UNKNOWN;
bw = 0;
while (dev) {
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS];
next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >>
PCI_EXP_LNKSTA_NLW_SHIFT;
next_bw = next_width * PCIE_SPEED2MBS_ENC(next_speed);
/* Check if current device limits the total bandwidth */
if (!bw || next_bw <= bw) {
bw = next_bw;
if (limiting_dev)
*limiting_dev = dev;
if (speed)
*speed = next_speed;
if (width)
*width = next_width;
}
dev = pci_upstream_bridge(dev);
}
return bw;
}
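The loop in `_kc_pcie_bandwidth_available()` walks from the endpoint toward the root, keeping the smallest width × per-lane-rate product it sees; that link is the bottleneck. A userspace sketch of the same min-walk over a hypothetical chain model (`struct link` and `min_bandwidth` are illustrative, not driver types):

```c
#include <stddef.h>

/* Simplified model of a PCIe device chain: each hop has a link width
 * (lane count) and an effective per-lane rate in Mb/s. */
struct link { unsigned int width, mbs_per_lane; };

/* Keep the smallest width * rate product along the chain, mirroring
 * the "!bw || next_bw <= bw" update in _kc_pcie_bandwidth_available();
 * the <= means an equal upstream link is reported as the limiter. */
static unsigned int min_bandwidth(const struct link *chain, size_t n,
				  size_t *limiting)
{
	unsigned int bw = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		unsigned int next_bw = chain[i].width * chain[i].mbs_per_lane;

		if (!bw || next_bw <= bw) {
			bw = next_bw;
			if (limiting)
				*limiting = i;
		}
	}
	return bw;
}
```

For example, an x8 endpoint behind an x4 upstream bridge at the same per-lane rate is limited by the x4 link.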
static enum pci_bus_speed _kc_pcie_get_speed_cap(struct pci_dev *dev)
{
u32 lnkcap2, lnkcap;
/*
* PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link
* Speeds Vector in Link Capabilities 2 when supported, falling
* back to Max Link Speed in Link Capabilities otherwise.
*/
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
if (lnkcap2) { /* PCIe r3.0-compliant */
if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_16_0GB)
return PCIE_SPEED_16_0GT;
else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
return PCIE_SPEED_8_0GT;
else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB)
return PCIE_SPEED_5_0GT;
else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB)
return PCIE_SPEED_2_5GT;
return PCI_SPEED_UNKNOWN;
}
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
if (lnkcap) {
if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB)
return PCIE_SPEED_16_0GT;
else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB)
return PCIE_SPEED_8_0GT;
else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB)
return PCIE_SPEED_5_0GT;
else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB)
return PCIE_SPEED_2_5GT;
}
return PCI_SPEED_UNKNOWN;
}
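The first branch above picks the highest speed advertised in the Supported Link Speeds Vector of Link Capabilities 2. A self-contained model of that highest-bit-wins selection; the bit values mirror the kernel's `PCI_EXP_LNKCAP2_SLS_*` constants, while the function name and tenths-of-GT/s return convention are assumptions for illustration:

```c
#include <stdint.h>

/* Bit positions from the LNKCAP2 Supported Link Speeds Vector,
 * matching PCI_EXP_LNKCAP2_SLS_*. */
#define SLS_2_5GB  0x02
#define SLS_5_0GB  0x04
#define SLS_8_0GB  0x08
#define SLS_16_0GB 0x10

/* Model of _kc_pcie_get_speed_cap()'s lnkcap2 branch: report the
 * highest advertised speed in tenths of GT/s, 0 when nothing is set. */
static unsigned int highest_speed_gt10(uint32_t lnkcap2)
{
	if (lnkcap2 & SLS_16_0GB)
		return 160;
	if (lnkcap2 & SLS_8_0GB)
		return 80;
	if (lnkcap2 & SLS_5_0GB)
		return 50;
	if (lnkcap2 & SLS_2_5GB)
		return 25;
	return 0;
}
```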
static enum pcie_link_width _kc_pcie_get_width_cap(struct pci_dev *dev)
{
u32 lnkcap;
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
if (lnkcap)
return (lnkcap & PCI_EXP_LNKCAP_MLW) >> 4;
return PCIE_LNK_WIDTH_UNKNOWN;
}
static u32
_kc_pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
enum pcie_link_width *width)
{
*speed = _kc_pcie_get_speed_cap(dev);
*width = _kc_pcie_get_width_cap(dev);
if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN)
return 0;
return *width * PCIE_SPEED2MBS_ENC(*speed);
}
void _kc_pcie_print_link_status(struct pci_dev *dev)
{

enum pcie_link_width width, width_cap;
enum pci_bus_speed speed, speed_cap;
struct pci_dev *limiting_dev = NULL;
u32 bw_avail, bw_cap;
bw_cap = _kc_pcie_bandwidth_capable(dev, &speed_cap, &width_cap);
bw_avail = _kc_pcie_bandwidth_available(dev, &limiting_dev, &speed,
&width);
if (bw_avail >= bw_cap)
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n",
bw_cap / 1000, bw_cap % 1000,
PCIE_SPEED2STR(speed_cap), width_cap);
else
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",
bw_avail / 1000, bw_avail % 1000,
PCIE_SPEED2STR(speed), width,
limiting_dev ? pci_name(limiting_dev) : "<unknown>",
bw_cap / 1000, bw_cap % 1000,
PCIE_SPEED2STR(speed_cap), width_cap);
}
#endif /* 4.17.0 */


@ -1,22 +1,5 @@
/* Intel(R) Ethernet Switch Host Interface Driver
* Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*/
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _KCOMPAT_H_
#define _KCOMPAT_H_
@ -31,6 +14,7 @@
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/string.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
@ -50,6 +34,9 @@
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#ifndef NSEC_PER_MSEC
#define NSEC_PER_MSEC 1000000L
#endif
#include <net/ipv6.h>
/* UTS_RELEASE is in a different header starting in kernel 2.6.18 */
#ifndef UTS_RELEASE
@ -342,6 +329,34 @@ struct _kc_vlan_hdr {
#define VLAN_PRIO_SHIFT 13
#endif
#ifndef PCI_EXP_LNKSTA_CLS_2_5GB
#define PCI_EXP_LNKSTA_CLS_2_5GB 0x0001
#endif
#ifndef PCI_EXP_LNKSTA_CLS_5_0GB
#define PCI_EXP_LNKSTA_CLS_5_0GB 0x0002
#endif
#ifndef PCI_EXP_LNKSTA_CLS_8_0GB
#define PCI_EXP_LNKSTA_CLS_8_0GB 0x0003
#endif
#ifndef PCI_EXP_LNKSTA_NLW_X1
#define PCI_EXP_LNKSTA_NLW_X1 0x0010
#endif
#ifndef PCI_EXP_LNKSTA_NLW_X2
#define PCI_EXP_LNKSTA_NLW_X2 0x0020
#endif
#ifndef PCI_EXP_LNKSTA_NLW_X4
#define PCI_EXP_LNKSTA_NLW_X4 0x0040
#endif
#ifndef PCI_EXP_LNKSTA_NLW_X8
#define PCI_EXP_LNKSTA_NLW_X8 0x0080
#endif
#ifndef __GFP_COLD
#define __GFP_COLD 0
#endif
@ -690,6 +705,21 @@ struct _kc_ethtool_pauseparam {
#ifndef SPEED_5000
#define SPEED_5000 5000
#endif
#ifndef SPEED_14000
#define SPEED_14000 14000
#endif
#ifndef SPEED_25000
#define SPEED_25000 25000
#endif
#ifndef SPEED_50000
#define SPEED_50000 50000
#endif
#ifndef SPEED_56000
#define SPEED_56000 56000
#endif
#ifndef SPEED_100000
#define SPEED_100000 100000
#endif
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(a,b) (((a) << 8) + (b))
@ -715,6 +745,16 @@ struct _kc_ethtool_pauseparam {
#define RHEL_RELEASE_CODE 0
#endif
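`RHEL_RELEASE_VERSION` packs the major release into the high byte and the minor into the low byte, so ordinary integer comparison orders releases correctly. A standalone sketch of the same packing:

```c
/* Same packing as kcompat's RHEL_RELEASE_VERSION(): major in the high
 * byte, minor in the low byte, so plain integer comparison orders
 * releases correctly (minor is assumed to fit in 8 bits). */
#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
```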
/* RHEL 7 didn't backport the parameter change in
* create_singlethread_workqueue.
* If/when RH corrects this we will want to tighten up the version check.
*/
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,0))
#undef create_singlethread_workqueue
#define create_singlethread_workqueue(name) \
alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)
#endif
/* Ubuntu Release ABI is the 4th digit of their kernel version. You can find
* it in /usr/src/linux/$(uname -r)/include/generated/utsrelease.h for new
* enough versions of Ubuntu. Otherwise you can simply see it in the output of
@ -744,7 +784,8 @@ struct _kc_ethtool_pauseparam {
* ABI value. Otherwise, it becomes impossible to correlate ABI to version for
* ordering checks.
*/
#define UBUNTU_VERSION_CODE (((LINUX_VERSION_CODE & ~0xFF) << 8) + (UTS_UBUNTU_RELEASE_ABI))
#define UBUNTU_VERSION_CODE (((~0xFF & LINUX_VERSION_CODE) << 8) + \
UTS_UBUNTU_RELEASE_ABI)
#if UTS_UBUNTU_RELEASE_ABI > 255
#error UTS_UBUNTU_RELEASE_ABI is too large...
@ -767,10 +808,11 @@ struct _kc_ethtool_pauseparam {
*/
#define UBUNTU_VERSION(a,b,c,d) ((KERNEL_VERSION(a,b,0) << 8) + (d))
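The two macros agree because both drop the patch byte before shifting: `UBUNTU_VERSION_CODE` masks it out of `LINUX_VERSION_CODE`, and `UBUNTU_VERSION` builds from `KERNEL_VERSION(a,b,0)`. A sketch of that equivalence; note the real `UBUNTU_VERSION_CODE` takes no arguments (it reads `LINUX_VERSION_CODE` and `UTS_UBUNTU_RELEASE_ABI` directly), so the parameterized form here is purely for illustration:

```c
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))

/* Parameterized model of kcompat's UBUNTU_VERSION_CODE: drop the patch
 * byte, shift left by 8, and put the release ABI (0-255) in the freed
 * low byte. */
#define UBUNTU_VERSION_CODE_MODEL(linux_code, abi) \
	((((linux_code) & ~0xFF) << 8) + (abi))

/* Comparison helper built the same way, from the patch-less version. */
#define UBUNTU_VERSION(a, b, c, d) ((KERNEL_VERSION(a, b, 0) << 8) + (d))
```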
/* SuSE version macro is the same as Linux kernel version */
/* SuSE version macros are the same as the Linux kernel version macros */
#ifndef SLE_VERSION
#define SLE_VERSION(a,b,c) KERNEL_VERSION(a,b,c)
#define SLE_VERSION(a,b,c) KERNEL_VERSION(a,b,c)
#endif
#define SLE_LOCALVERSION(a,b,c) KERNEL_VERSION(a,b,c)
#ifdef CONFIG_SUSE_KERNEL
#if ( LINUX_VERSION_CODE == KERNEL_VERSION(2,6,27) )
/* SLES11 GA is 2.6.27 based */
@ -779,35 +821,140 @@ struct _kc_ethtool_pauseparam {
/* SLES11 SP1 is 2.6.32 based */
#define SLE_VERSION_CODE SLE_VERSION(11,1,0)
#elif ( LINUX_VERSION_CODE == KERNEL_VERSION(3,0,13) )
/* SLES11 SP2 is 3.0.13 based */
/* SLES11 SP2 GA is 3.0.13-0.27 */
#define SLE_VERSION_CODE SLE_VERSION(11,2,0)
#elif ((LINUX_VERSION_CODE == KERNEL_VERSION(3,0,76)))
/* SLES11 SP3 is 3.0.76 based */
/* SLES11 SP3 GA is 3.0.76-0.11 */
#define SLE_VERSION_CODE SLE_VERSION(11,3,0)
#elif ((LINUX_VERSION_CODE == KERNEL_VERSION(3,0,101)))
/* SLES11 SP4 is 3.0.101 based */
#define SLE_VERSION_CODE SLE_VERSION(11,4,0)
#elif ((LINUX_VERSION_CODE == KERNEL_VERSION(3,12,28)))
/* SLES12 GA is 3.12.28 based */
#elif (LINUX_VERSION_CODE == KERNEL_VERSION(3,0,101))
#if (SLE_LOCALVERSION_CODE < SLE_LOCALVERSION(0,8,0))
/* some SLES11sp2 update kernels up to 3.0.101-0.7.x */
#define SLE_VERSION_CODE SLE_VERSION(11,2,0)
#elif (SLE_LOCALVERSION_CODE < SLE_LOCALVERSION(63,0,0))
/* most SLES11sp3 update kernels */
#define SLE_VERSION_CODE SLE_VERSION(11,3,0)
#else
/* SLES11 SP4 GA (3.0.101-63) and update kernels 3.0.101-63+ */
#define SLE_VERSION_CODE SLE_VERSION(11,4,0)
#endif
#elif (LINUX_VERSION_CODE == KERNEL_VERSION(3,12,28))
/* SLES12 GA is 3.12.28-4
* kernel updates 3.12.xx-<33 through 52>[.yy] */
#define SLE_VERSION_CODE SLE_VERSION(12,0,0)
#elif (LINUX_VERSION_CODE == KERNEL_VERSION(3,12,49))
/* SLES12 SP1 GA is 3.12.49-11
* updates 3.12.xx-60.yy where xx={51..} */
#define SLE_VERSION_CODE SLE_VERSION(12,1,0)
#elif ((LINUX_VERSION_CODE >= KERNEL_VERSION(4,4,21) && \
(LINUX_VERSION_CODE <= KERNEL_VERSION(4,4,59))) || \
(LINUX_VERSION_CODE >= KERNEL_VERSION(4,4,74) && \
LINUX_VERSION_CODE < KERNEL_VERSION(4,5,0) && \
SLE_LOCALVERSION_CODE >= KERNEL_VERSION(92,0,0) && \
SLE_LOCALVERSION_CODE < KERNEL_VERSION(93,0,0)))
/* SLES12 SP2 GA is 4.4.21-69.
* SLES12 SP2 updates before SLES12 SP3 are: 4.4.{21,38,49,59}
* SLES12 SP2 updates after SLES12 SP3 are: 4.4.{74,90,103,114,120}
* but they all use a SLE_LOCALVERSION_CODE matching 92.nn.y */
#define SLE_VERSION_CODE SLE_VERSION(12,2,0)
#elif ((LINUX_VERSION_CODE == KERNEL_VERSION(4,4,73) || \
LINUX_VERSION_CODE == KERNEL_VERSION(4,4,82) || \
LINUX_VERSION_CODE == KERNEL_VERSION(4,4,92)) || \
(LINUX_VERSION_CODE == KERNEL_VERSION(4,4,103) && \
(SLE_LOCALVERSION_CODE == KERNEL_VERSION(6,33,0) || \
SLE_LOCALVERSION_CODE == KERNEL_VERSION(6,38,0))) || \
(LINUX_VERSION_CODE >= KERNEL_VERSION(4,4,114) && \
LINUX_VERSION_CODE < KERNEL_VERSION(4,5,0) && \
SLE_LOCALVERSION_CODE >= KERNEL_VERSION(94,0,0) && \
SLE_LOCALVERSION_CODE < KERNEL_VERSION(95,0,0)) )
/* SLES12 SP3 GM is 4.4.73-5 and update kernels are 4.4.82-6.3.
* SLES12 SP3 updates not conflicting with SP2 are: 4.4.{82,92}
* SLES12 SP3 updates conflicting with SP2 are:
* - 4.4.103-6.33.1, 4.4.103-6.38.1
* - 4.4.{114,120}-94.nn.y */
#define SLE_VERSION_CODE SLE_VERSION(12,3,0)
#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,12,14))
/* SLES15 Beta1 is 4.12.14-2.
* SLES12 SP4 will also use 4.12.14-nn.xx.y */
#define SLE_VERSION_CODE SLE_VERSION(15,0,0)
/* New SLES kernels must be added here with >= checks based on the
 * kernel version; keep the entries ordered from newest to oldest so
 * the >= comparisons catch all of them.
 */
#elif ((LINUX_VERSION_CODE >= KERNEL_VERSION(3,12,47)))
/* SLES12 SP1 is 3.12.47-based */
#define SLE_VERSION_CODE SLE_VERSION(12,1,0)
#endif /* LINUX_VERSION_CODE == KERNEL_VERSION(x,y,z) */
#endif /* CONFIG_SUSE_KERNEL */
#ifndef SLE_VERSION_CODE
#define SLE_VERSION_CODE 0
#endif /* SLE_VERSION_CODE */
#ifndef SLE_LOCALVERSION_CODE
#define SLE_LOCALVERSION_CODE 0
#endif /* SLE_LOCALVERSION_CODE */
#ifdef __KLOCWORK__
/* The following are not compiled into the binary driver; they are here
 * only to tune Klocwork scans and work around false-positive issues.
 */
#ifdef ARRAY_SIZE
#undef ARRAY_SIZE
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
#endif
#define memcpy(dest, src, len) memcpy_s(dest, len, src, len)
#define memset(dest, ch, len) memset_s(dest, len, ch, len)
static inline int _kc_test_and_clear_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old;
unsigned long flags = 0;
_atomic_spin_lock_irqsave(p, flags);
old = *p;
*p = old & ~mask;
_atomic_spin_unlock_irqrestore(p, flags);
return (old & mask) != 0;
}
#define test_and_clear_bit(nr, addr) _kc_test_and_clear_bit(nr, addr)
static inline int _kc_test_and_set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old;
unsigned long flags = 0;
_atomic_spin_lock_irqsave(p, flags);
old = *p;
*p = old | mask;
_atomic_spin_unlock_irqrestore(p, flags);
return (old & mask) != 0;
}
#define test_and_set_bit(nr, addr) _kc_test_and_set_bit(nr, addr)
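The Klocwork replacements above locate the target bit with `BIT_WORD`/`BIT_MASK`, then read-modify-write that word. A non-atomic userspace model of the same index/mask arithmetic (the spinlock wrappers are dropped here, and the macro definitions mirror the kernel's):

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BIT_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)  ((nr) / BITS_PER_LONG)

/* Non-atomic model of _kc_test_and_set_bit(): find the word holding
 * bit nr, set it, and report whether it was already set. */
static int test_and_set_bit_model(int nr, unsigned long *addr)
{
	unsigned long mask = BIT_MASK(nr);
	unsigned long *p = addr + BIT_WORD(nr);
	unsigned long old = *p;

	*p = old | mask;
	return (old & mask) != 0;
}
```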
#ifdef CONFIG_DYNAMIC_DEBUG
#undef dev_dbg
#define dev_dbg(dev, format, arg...) dev_printk(KERN_DEBUG, dev, format, ##arg)
#endif /* CONFIG_DYNAMIC_DEBUG */
#undef list_for_each_entry_safe
#define list_for_each_entry_safe(pos, n, head, member) \
for (n = NULL, pos = list_first_entry(head, typeof(*pos), member); \
&pos->member != (head); \
pos = list_next_entry(pos, member))
#undef hlist_for_each_entry_safe
#define hlist_for_each_entry_safe(pos, n, head, member) \
for (n = NULL, pos = hlist_entry_safe((head)->first, typeof(*(pos)), \
member); \
pos; \
pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
#ifdef uninitialized_var
#undef uninitialized_var
#define uninitialized_var(x) x = *(&(x))
#endif
#endif /* __KLOCWORK__ */
/*****************************************************************************/
@ -1428,6 +1575,17 @@ static inline int _kc_request_irq(unsigned int irq, new_handler_t handler, unsig
#define request_irq(irq, handler, flags, devname, dev_id) _kc_request_irq((irq), (handler), (flags), (devname), (dev_id))
#define irq_handler_t new_handler_t
#if ( LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,11) )
#ifndef skb_checksum_help
static inline int __kc_skb_checksum_help(struct sk_buff *skb)
{
return skb_checksum_help(skb, 0);
}
#define skb_checksum_help(skb) __kc_skb_checksum_help((skb))
#endif
#endif /* < 2.6.19 && >= 2.6.11 */
/* pci_restore_state and pci_save_state handles MSI/PCIE from 2.6.19 */
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(5,4)))
#define PCIE_CONFIG_SPACE_LEN 256
@ -1463,8 +1621,9 @@ extern void *_kc_kmemdup(const void *src, size_t len, unsigned gfp);
#endif
#else /* 2.6.19 */
#include <linux/aer.h>
#include <linux/string.h>
#include <linux/pci_hotplug.h>
#define NEW_SKB_CSUM_HELP
#endif /* < 2.6.19 */
/*****************************************************************************/
@ -1551,7 +1710,9 @@ static inline __wsum csum_unfold(__sum16 n)
extern struct pci_dev *_kc_netdev_to_pdev(struct net_device *netdev);
#define netdev_to_dev(netdev) \
pci_dev_to_dev(_kc_netdev_to_pdev(netdev))
#else
#define devm_kzalloc(dev, size, flags) kzalloc(size, flags)
#define devm_kfree(dev, p) kfree(p)
#else /* 2.6.21 */
static inline struct device *netdev_to_dev(struct net_device *netdev)
{
return &netdev->dev;
@ -1636,6 +1797,15 @@ extern void _kc_print_hex_dump(const char *level, const char *prefix_str,
#define ETH_P_PAUSE 0x8808
#endif
static inline int compound_order(struct page *page)
{
return 0;
}
#ifndef SKB_WITH_OVERHEAD
#define SKB_WITH_OVERHEAD(X) \
((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
#endif
#else /* 2.6.22 */
#define ETH_TYPE_TRANS_SETS_DEV
#define HAVE_NETDEV_STATS_IN_NETDEV
@ -1944,6 +2114,10 @@ static inline __u32 _kc_ethtool_cmd_speed(struct ethtool_cmd *ep)
#endif
#define dma_mapping_error(dev, dma_addr) pci_dma_mapping_error(dma_addr)
#ifndef DMA_ATTR_WEAK_ORDERING
#define DMA_ATTR_WEAK_ORDERING 0
#endif
#ifdef HAVE_TX_MQ
extern void _kc_netif_tx_stop_all_queues(struct net_device *);
extern void _kc_netif_tx_wake_all_queues(struct net_device *);
@ -1999,6 +2173,13 @@ extern void __kc_warn_slowpath(const char *file, const int line,
#undef HAVE_IXGBE_DEBUG_FS
#undef HAVE_IGB_DEBUG_FS
#else /* < 2.6.27 */
#define ethtool_cmd_speed_set _kc_ethtool_cmd_speed_set
static inline void _kc_ethtool_cmd_speed_set(struct ethtool_cmd *ep,
__u32 speed)
{
ep->speed = (__u16)(speed & 0xFFFF);
ep->speed_hi = (__u16)(speed >> 16);
}
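`_kc_ethtool_cmd_speed_set()` splits a 32-bit speed across the two 16-bit fields of the older `struct ethtool_cmd`. A minimal model of the split and its inverse; `struct cmd_model` and the helper names are illustrative, not ethtool types:

```c
#include <stdint.h>

/* Minimal stand-in for the speed fields of struct ethtool_cmd. */
struct cmd_model { uint16_t speed, speed_hi; };

/* Same split as _kc_ethtool_cmd_speed_set(): low 16 bits in speed,
 * high 16 bits in speed_hi. */
static void speed_set(struct cmd_model *ep, uint32_t speed)
{
	ep->speed = (uint16_t)(speed & 0xFFFF);
	ep->speed_hi = (uint16_t)(speed >> 16);
}

/* Inverse: reassemble the 32-bit speed from the two halves. */
static uint32_t speed_get(const struct cmd_model *ep)
{
	return ((uint32_t)ep->speed_hi << 16) | ep->speed;
}

static uint32_t roundtrip(uint32_t s)
{
	struct cmd_model c;

	speed_set(&c, s);
	return speed_get(&c);
}
```

The split matters for speeds above 65535 Mb/s, e.g. SPEED_100000 (100000 = 0x186A0) needs the high half.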
#define HAVE_TX_MQ
#define HAVE_NETDEV_SELECT_QUEUE
#ifdef CONFIG_DEBUG_FS
@ -2028,6 +2209,9 @@ static inline void __kc_skb_queue_head_init(struct sk_buff_head *list)
#define PCI_EXP_DEVCAP2 36 /* Device Capabilities 2 */
#define PCI_EXP_DEVCTL2 40 /* Device Control 2 */
#define PCI_EXP_DEVCAP_FLR 0x10000000 /* Function Level Reset */
#define PCI_EXP_DEVCTL_BCR_FLR 0x8000 /* Bridge Configuration Retry / FLR */
#endif /* < 2.6.28 */
/*****************************************************************************/
@ -2062,6 +2246,11 @@ extern void _kc_pci_clear_master(struct pci_dev *dev);
#ifndef PCI_EXP_LNKCTL_ASPMC
#define PCI_EXP_LNKCTL_ASPMC 0x0003 /* ASPM Control */
#endif
#ifndef PCI_EXP_LNKCAP_MLW
#define PCI_EXP_LNKCAP_MLW 0x000003f0 /* Maximum Link Width */
#endif
#else /* < 2.6.29 */
#ifndef HAVE_NET_DEVICE_OPS
#define HAVE_NET_DEVICE_OPS
@ -2106,8 +2295,20 @@ static inline void _kc_synchronize_irq(unsigned int a)
#define nr_cpus_node(node) cpumask_weight(cpumask_of_node(node))
#endif
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(5,5))
#define HAVE_PCI_DEV_IS_VIRTFN_BIT
#endif /* RHEL >= 5.5 */
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(5,5)))
static inline bool pci_is_root_bus(struct pci_bus *pbus)
{
return !(pbus->parent);
}
#endif
#else /* < 2.6.30 */
#define HAVE_ASPM_QUIRKS
#define HAVE_PCI_DEV_IS_VIRTFN_BIT
#endif /* < 2.6.30 */
/*****************************************************************************/
@@ -2154,6 +2355,10 @@ static inline void _kc_synchronize_irq(unsigned int a)
#define ADVERTISED_10000baseKR_Full (1 << 19)
#endif
static inline unsigned long dev_trans_start(struct net_device *dev)
{
return dev->trans_start;
}
#else /* < 2.6.31 */
#ifndef HAVE_NETDEV_STORAGE_ADDRESS
#define HAVE_NETDEV_STORAGE_ADDRESS
@@ -2275,9 +2480,10 @@ static inline int _kc_pm_runtime_get_sync(struct device __always_unused *dev)
#ifndef __percpu
#define __percpu
#endif /* __percpu */
#ifndef PORT_DA
#define PORT_DA PORT_OTHER
#endif
#endif /* PORT_DA */
#ifndef PORT_NONE
#define PORT_NONE PORT_OTHER
#endif
@@ -2349,6 +2555,10 @@ extern int _kc_pci_num_vf(struct pci_dev *dev);
#endif
#endif /* RHEL_RELEASE_CODE */
#ifndef dev_is_pci
#define dev_is_pci(d) ((d)->bus == &pci_bus_type)
#endif
#ifndef ETH_FLAG_NTUPLE
#define ETH_FLAG_NTUPLE NETIF_F_NTUPLE
#endif
@@ -2564,6 +2774,19 @@ ssize_t _kc_simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
const void __user *from, size_t count);
#define simple_write_to_buffer _kc_simple_write_to_buffer
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,4)))
static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
{
#ifdef HAVE_PCI_DEV_IS_VIRTFN_BIT
#ifdef CONFIG_PCI_IOV
if (dev->is_virtfn)
dev = dev->physfn;
#endif /* CONFIG_PCI_IOV */
#endif /* HAVE_PCI_DEV_IS_VIRTFN_BIT */
return dev;
}
#endif /* ! RHEL >= 6.4 */
#ifndef PCI_EXP_LNKSTA_NLW_SHIFT
#define PCI_EXP_LNKSTA_NLW_SHIFT 4
#endif
@@ -2603,9 +2826,12 @@ static inline int _kc_netif_set_real_num_tx_queues(struct net_device __always_un
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,0))
#define HAVE_IRQ_AFFINITY_HINT
#endif
struct device_node;
#else /* < 2.6.35 */
#define HAVE_STRUCT_DEVICE_OF_NODE
#define HAVE_PM_QOS_REQUEST_LIST
#define HAVE_IRQ_AFFINITY_HINT
#include <linux/of.h>
#endif /* < 2.6.35 */
/*****************************************************************************/
@@ -2615,6 +2841,11 @@ extern int _kc_ethtool_op_set_flags(struct net_device *, u32, u32);
extern u32 _kc_ethtool_op_get_flags(struct net_device *);
#define ethtool_op_get_flags _kc_ethtool_op_get_flags
enum {
WQ_UNBOUND = 0,
WQ_RESCUER = 0,
};
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
#ifdef NET_IP_ALIGN
#undef NET_IP_ALIGN
@@ -2659,8 +2890,10 @@ do { \
netdev_##level(dev, fmt, ##args); \
} while (0)
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,3)))
#undef usleep_range
#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
#endif
#define u64_stats_update_begin(a) do { } while(0)
#define u64_stats_update_end(a) do { } while(0)
@@ -2690,6 +2923,7 @@ static inline void skb_tx_timestamp(struct sk_buff __always_unused *skb)
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,37) )
#define HAVE_NON_CONST_PCI_DRIVER_NAME
#ifndef netif_set_real_num_tx_queues
static inline int _kc_netif_set_real_num_tx_queues(struct net_device *dev,
unsigned int txq)
@@ -2722,6 +2956,8 @@ static inline int __kc_netif_set_real_num_rx_queues(struct net_device __always_u
#define ETH_FLAG_RXVLAN (1 << 8)
#endif /* ETH_FLAG_RXVLAN */
#define WQ_MEM_RECLAIM WQ_RESCUER
static inline void _kc_skb_checksum_none_assert(struct sk_buff *skb)
{
WARN_ON(skb->ip_summed != CHECKSUM_NONE);
@@ -2908,6 +3144,7 @@ static inline __wsum __kc_udp_csum(struct sk_buff *skb)
#ifndef HAVE_NDO_SET_FEATURES
#define HAVE_NDO_SET_FEATURES
#endif
#define HAVE_IRQ_AFFINITY_NOTIFY
#endif /* < 2.6.39 */
/*****************************************************************************/
@@ -3014,6 +3251,19 @@ static inline int _kc_kstrtol_from_user(const char __user *s, size_t count,
#ifndef ETH_P_8021AD
#define ETH_P_8021AD 0x88A8
#endif
/* Stub definition for !CONFIG_OF is introduced later */
#ifdef CONFIG_OF
static inline struct device_node *
pci_device_to_OF_node(struct pci_dev __maybe_unused *pdev)
{
#ifdef HAVE_STRUCT_DEVICE_OF_NODE
return pdev ? pdev->dev.of_node : NULL;
#else
return NULL;
#endif /* !HAVE_STRUCT_DEVICE_OF_NODE */
}
#endif /* CONFIG_OF */
#else /* < 3.1.0 */
#ifndef HAVE_DCBNL_IEEE_DELAPP
#define HAVE_DCBNL_IEEE_DELAPP
@@ -3123,6 +3373,46 @@ static inline void __kc_skb_frag_unref(skb_frag_t *frag)
#endif
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,3,0) )
/* NOTE: the order of parameters to _kc_alloc_workqueue() is different than
* alloc_workqueue() to avoid compiler warning from -Wvarargs
*/
static inline struct workqueue_struct * __attribute__ ((format(printf, 3, 4)))
_kc_alloc_workqueue(__maybe_unused int flags, __maybe_unused int max_active,
const char *fmt, ...)
{
struct workqueue_struct *wq;
va_list args, temp;
unsigned int len;
char *p;
va_start(args, fmt);
va_copy(temp, args);
len = vsnprintf(NULL, 0, fmt, temp);
va_end(temp);
p = kmalloc(len + 1, GFP_KERNEL);
if (!p) {
va_end(args);
return NULL;
}
vsnprintf(p, len + 1, fmt, args);
va_end(args);
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,36) )
wq = create_workqueue(p);
#else
wq = alloc_workqueue(p, flags, max_active);
#endif
kfree(p);
return wq;
}
#ifdef alloc_workqueue
#undef alloc_workqueue
#endif
#define alloc_workqueue(fmt, flags, max_active, args...) \
_kc_alloc_workqueue(flags, max_active, fmt, ##args)
#if !(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,5))
typedef u32 netdev_features_t;
#endif
@@ -3219,6 +3509,10 @@ extern void _kc_skb_add_rx_frag(struct sk_buff *, int, struct page *,
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,5,0) )
#ifndef BITS_PER_LONG_LONG
#define BITS_PER_LONG_LONG 64
#endif
#ifndef ether_addr_equal
static inline bool __kc_ether_addr_equal(const u8 *addr1, const u8 *addr2)
{
@@ -3227,7 +3521,21 @@ static inline bool __kc_ether_addr_equal(const u8 *addr1, const u8 *addr2)
#define ether_addr_equal(_addr1, _addr2) __kc_ether_addr_equal((_addr1),(_addr2))
#endif
/* Definitions for !CONFIG_OF_NET are introduced in 3.10 */
#ifdef CONFIG_OF_NET
static inline int of_get_phy_mode(struct device_node __always_unused *np)
{
return -ENODEV;
}
static inline const void *
of_get_mac_address(struct device_node __always_unused *np)
{
return NULL;
}
#endif
#else
#include <linux/of_net.h>
#define HAVE_FDB_OPS
#define HAVE_ETHTOOL_GET_TS_INFO
#endif /* < 3.5.0 */
@@ -3259,6 +3567,14 @@ static inline bool __kc_ether_addr_equal(const u8 *addr1, const u8 *addr2)
#define __GFP_MEMALLOC 0
#endif
#ifndef eth_broadcast_addr
#define eth_broadcast_addr _kc_eth_broadcast_addr
static inline void _kc_eth_broadcast_addr(u8 *addr)
{
memset(addr, 0xff, ETH_ALEN);
}
#endif
#ifndef eth_random_addr
#define eth_random_addr _kc_eth_random_addr
static inline void _kc_eth_random_addr(u8 *addr)
@@ -3268,12 +3584,17 @@ static inline void _kc_eth_random_addr(u8 *addr)
addr[0] |= 0x02; /* set local assignment */
}
#endif /* eth_random_addr */
#ifndef DMA_ATTR_SKIP_CPU_SYNC
#define DMA_ATTR_SKIP_CPU_SYNC 0
#endif
#else /* < 3.6.0 */
#define HAVE_STRUCT_PAGE_PFMEMALLOC
#endif /* < 3.6.0 */
/******************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,7,0) )
#include <linux/workqueue.h>
#ifndef ADVERTISED_40000baseKR4_Full
/* these defines were all added in one commit, so it should be safe
 * to trigger activation on one define
@@ -3324,7 +3645,7 @@ static inline u32 __kc_mmd_eee_cap_to_ethtool_sup_t(u16 eee_cap)
* mmd_eee_adv_to_ethtool_adv_t
* @eee_adv: value of the MMD EEE Advertisement/Link Partner Ability registers
*
* A small helper function that translates the MMD EEE Advertisment (7.60)
* A small helper function that translates the MMD EEE Advertisement (7.60)
* and MMD EEE Link Partner Ability (7.61) bits to ethtool advertisement
* settings.
*/
@@ -3391,8 +3712,7 @@ static inline u8 pci_pcie_type(struct pci_dev *pdev)
u16 reg16;
pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
if (!pos)
BUG();
BUG_ON(!pos);
pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
return (reg16 & PCI_EXP_FLAGS_TYPE) >> 4;
}
@@ -3412,6 +3732,11 @@ int __kc_pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val);
#define pcie_capability_read_word(d,p,v) __kc_pcie_capability_read_word(d,p,v)
#endif /* pcie_capability_read_word */
#ifndef pcie_capability_read_dword
int __kc_pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val);
#define pcie_capability_read_dword(d,p,v) __kc_pcie_capability_read_dword(d,p,v)
#endif
#ifndef pcie_capability_write_word
int __kc_pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val);
#define pcie_capability_write_word(d,p,v) __kc_pcie_capability_write_word(d,p,v)
@@ -3443,13 +3768,112 @@ int __kc_pcie_capability_clear_word(struct pci_dev *dev, int pos,
#define napi_gro_flush(_napi, _flush_old) napi_gro_flush(_napi)
#endif /* !RHEL6.8+ */
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,6))
#include <linux/hashtable.h>
#else
#define DEFINE_HASHTABLE(name, bits) \
struct hlist_head name[1 << (bits)] = \
{ [0 ... ((1 << (bits)) - 1)] = HLIST_HEAD_INIT }
#define DEFINE_READ_MOSTLY_HASHTABLE(name, bits) \
struct hlist_head name[1 << (bits)] __read_mostly = \
{ [0 ... ((1 << (bits)) - 1)] = HLIST_HEAD_INIT }
#define DECLARE_HASHTABLE(name, bits) \
struct hlist_head name[1 << (bits)]
#define HASH_SIZE(name) (ARRAY_SIZE(name))
#define HASH_BITS(name) ilog2(HASH_SIZE(name))
/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
#define hash_min(val, bits) \
(sizeof(val) <= 4 ? hash_32(val, bits) : hash_long(val, bits))
static inline void __hash_init(struct hlist_head *ht, unsigned int sz)
{
unsigned int i;
for (i = 0; i < sz; i++)
INIT_HLIST_HEAD(&ht[i]);
}
#define hash_init(hashtable) __hash_init(hashtable, HASH_SIZE(hashtable))
#define hash_add(hashtable, node, key) \
hlist_add_head(node, &hashtable[hash_min(key, HASH_BITS(hashtable))])
static inline bool hash_hashed(struct hlist_node *node)
{
return !hlist_unhashed(node);
}
static inline bool __hash_empty(struct hlist_head *ht, unsigned int sz)
{
unsigned int i;
for (i = 0; i < sz; i++)
if (!hlist_empty(&ht[i]))
return false;
return true;
}
#define hash_empty(hashtable) __hash_empty(hashtable, HASH_SIZE(hashtable))
static inline void hash_del(struct hlist_node *node)
{
hlist_del_init(node);
}
#endif /* RHEL >= 6.6 */
/* We don't have @flags support prior to 3.7, so we'll simply ignore the flags
* parameter on these older kernels.
*/
#define __setup_timer(_timer, _fn, _data, _flags) \
setup_timer((_timer), (_fn), (_data))
#if ( ! ( RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,7) ) ) && \
( ! ( SLE_VERSION_CODE >= SLE_VERSION(12,0,0) ) )
#ifndef mod_delayed_work
/**
* __mod_delayed_work - modify delay or queue delayed work
* @wq: workqueue to use
* @dwork: delayed work to queue
* @delay: number of jiffies to wait before queueing
*
* Return: %true if @dwork was pending and was rescheduled;
* %false if it wasn't pending
*
* Note: the dwork parameter was declared as a void *
* to avoid compatibility problems with early 2.6 kernels
* where struct delayed_work is not declared. Unlike the original
* implementation, flags are not preserved and it should not be
* used in interrupt context.
*/
static inline bool __mod_delayed_work(struct workqueue_struct *wq,
void *dwork,
unsigned long delay)
{
bool ret = cancel_delayed_work(dwork);
queue_delayed_work(wq, dwork, delay);
return ret;
}
#define mod_delayed_work(wq, dwork, delay) __mod_delayed_work(wq, dwork, delay)
#endif /* mod_delayed_work */
#endif /* !(RHEL >= 6.7) && !(SLE >= 12.0) */
#else /* >= 3.7.0 */
#include <linux/hashtable.h>
#define HAVE_CONST_STRUCT_PCI_ERROR_HANDLERS
#define USE_CONST_DEV_UC_CHAR
#endif /* >= 3.7.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,8,0) )
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,5)) && \
!(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(11,4,0)))
#ifndef pci_sriov_set_totalvfs
static inline int __kc_pci_sriov_set_totalvfs(struct pci_dev __always_unused *dev, u16 __always_unused numvfs)
{
@@ -3457,6 +3881,7 @@ static inline int __kc_pci_sriov_set_totalvfs(struct pci_dev __always_unused *de
}
#define pci_sriov_set_totalvfs(a, b) __kc_pci_sriov_set_totalvfs((a), (b))
#endif
#endif /* !(RHEL_RELEASE_CODE >= 6.5 && SLE_VERSION_CODE >= 11.4) */
#ifndef PCI_EXP_LNKCTL_ASPM_L0S
#define PCI_EXP_LNKCTL_ASPM_L0S 0x01 /* L0s Enable */
#endif
@@ -3479,14 +3904,32 @@ static inline bool __kc_is_link_local_ether_addr(const u8 *addr)
}
#define is_link_local_ether_addr(addr) __kc_is_link_local_ether_addr(addr)
#endif /* is_link_local_ether_addr */
int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
int target, unsigned short *fragoff, int *flags);
#define ipv6_find_hdr(a, b, c, d, e) __kc_ipv6_find_hdr((a), (b), (c), (d), (e))
#ifndef FLOW_MAC_EXT
#define FLOW_MAC_EXT 0x40000000
#endif /* FLOW_MAC_EXT */
#if (SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(11,4,0))
#define HAVE_SRIOV_CONFIGURE
#endif
#ifndef PCI_EXP_LNKCAP_SLS_2_5GB
#define PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
#endif
#ifndef PCI_EXP_LNKCAP_SLS_5_0GB
#define PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
#endif
#undef PCI_EXP_LNKCAP2_SLS_2_5GB
#define PCI_EXP_LNKCAP2_SLS_2_5GB 0x00000002 /* Supported Speed 2.5GT/s */
#undef PCI_EXP_LNKCAP2_SLS_5_0GB
#define PCI_EXP_LNKCAP2_SLS_5_0GB 0x00000004 /* Supported Speed 5GT/s */
#undef PCI_EXP_LNKCAP2_SLS_8_0GB
#define PCI_EXP_LNKCAP2_SLS_8_0GB 0x00000008 /* Supported Speed 8GT/s */
#else /* >= 3.8.0 */
#ifndef __devinit
#define __devinit
@@ -3582,7 +4025,9 @@ int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
#undef hlist_entry_safe
#define hlist_entry_safe(ptr, type, member) \
(ptr) ? hlist_entry(ptr, type, member) : NULL
({ typeof(ptr) ____ptr = (ptr); \
____ptr ? hlist_entry(____ptr, type, member) : NULL; \
})
#undef hlist_for_each_entry
#define hlist_for_each_entry(pos, head, member) \
@@ -3596,8 +4041,40 @@ int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
pos && ({ n = pos->member.next; 1; }); \
pos = hlist_entry_safe(n, typeof(*pos), member))
#undef hlist_for_each_entry_continue
#define hlist_for_each_entry_continue(pos, member) \
for (pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member);\
pos; \
pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
#undef hlist_for_each_entry_from
#define hlist_for_each_entry_from(pos, member) \
for (; pos; \
pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
#undef hash_for_each
#define hash_for_each(name, bkt, obj, member) \
for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < HASH_SIZE(name);\
(bkt)++)\
hlist_for_each_entry(obj, &name[bkt], member)
#undef hash_for_each_safe
#define hash_for_each_safe(name, bkt, tmp, obj, member) \
for ((bkt) = 0, obj = NULL; obj == NULL && (bkt) < HASH_SIZE(name);\
(bkt)++)\
hlist_for_each_entry_safe(obj, tmp, &name[bkt], member)
#undef hash_for_each_possible
#define hash_for_each_possible(name, obj, member, key) \
hlist_for_each_entry(obj, &name[hash_min(key, HASH_BITS(name))], member)
#undef hash_for_each_possible_safe
#define hash_for_each_possible_safe(name, obj, tmp, member, key) \
hlist_for_each_entry_safe(obj, tmp,\
&name[hash_min(key, HASH_BITS(name))], member)
#ifdef CONFIG_XPS
extern int __kc_netif_set_xps_queue(struct net_device *, struct cpumask *, u16);
extern int __kc_netif_set_xps_queue(struct net_device *, const struct cpumask *, u16);
#define netif_set_xps_queue(_dev, _mask, _idx) __kc_netif_set_xps_queue((_dev), (_mask), (_idx))
#else /* CONFIG_XPS */
#define netif_set_xps_queue(_dev, _mask, _idx) do {} while (0)
@@ -3673,10 +4150,57 @@ extern int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
#ifndef PCI_DEVID
#define PCI_DEVID(bus, devfn) ((((u16)(bus)) << 8) | (devfn))
#endif
/* The definitions for these functions when CONFIG_OF_NET is defined are
* pulled in from <linux/of_net.h>. For kernels older than 3.5 we already have
* backports for when CONFIG_OF_NET is true. These are separated and
* duplicated in order to cover all cases so that all kernels get either the
* real definitions (when CONFIG_OF_NET is defined) or the stub definitions
* (when CONFIG_OF_NET is not defined, or the kernel is too old to have real
* definitions).
*/
#ifndef CONFIG_OF_NET
static inline int of_get_phy_mode(struct device_node __always_unused *np)
{
return -ENODEV;
}
static inline const void *
of_get_mac_address(struct device_node __always_unused *np)
{
return NULL;
}
#endif
#else /* >= 3.10.0 */
#define HAVE_ENCAP_TSO_OFFLOAD
#define USE_DEFAULT_FDB_DEL_DUMP
#define HAVE_SKB_INNER_NETWORK_HEADER
#if (RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,0)) && \
(RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8,0)))
#define HAVE_RHEL7_PCI_DRIVER_RH
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2))
#define HAVE_RHEL7_PCI_RESET_NOTIFY
#endif /* RHEL >= 7.2 */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3))
#if (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,5))
#define HAVE_GENEVE_RX_OFFLOAD
#endif /* RHEL >=7.3 && RHEL < 7.5 */
#define HAVE_RHEL7_NET_DEVICE_OPS_EXT
#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_GENEVE)
#define HAVE_UDP_ENC_TUNNEL
#endif
#endif /* RHEL >= 7.3 */
/* new hooks added to net_device_ops_extended in RHEL7.4 */
#if (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
#define HAVE_RHEL7_NETDEV_OPS_EXT_NDO_UDP_TUNNEL
#define HAVE_UDP_ENC_RX_OFFLOAD
#endif /* RHEL >= 7.4 */
#endif /* RHEL >= 7.0 && RHEL < 8.0 */
#endif /* >= 3.10.0 */
/*****************************************************************************/
@@ -3686,6 +4210,9 @@ extern int __kc_ndo_dflt_fdb_del(struct ndmsg *ndm, struct net_device *dev,
(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(11,4,0)))
#define HAVE_NDO_SET_VF_LINK_STATE
#endif
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
#endif
#else /* >= 3.11.0 */
#define HAVE_NDO_SET_VF_LINK_STATE
#define HAVE_SKB_INNER_PROTOCOL
@@ -3704,8 +4231,14 @@ extern int __kc_pcie_get_minimum_link(struct pci_dev *dev,
#if ( SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,0,0))
#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
#endif
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0) )
#define HAVE_VXLAN_RX_OFFLOAD
#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_VXLAN)
#define HAVE_UDP_ENC_TUNNEL
#endif
#endif /* < 4.8.0 */
#define HAVE_NDO_GET_PHYS_PORT_ID
#define HAVE_NETIF_SET_XPS_QUEUE_CONST_MASK
#endif /* >= 3.12.0 */
/*****************************************************************************/
@@ -3715,14 +4248,38 @@ extern int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask);
#ifndef u64_stats_init
#define u64_stats_init(a) do { } while(0)
#endif
#ifndef BIT_ULL
#undef BIT_ULL
#define BIT_ULL(n) (1ULL << (n))
#if (!(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,0,0)) && \
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,0)))
static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
{
dev = pci_physfn(dev);
if (pci_is_root_bus(dev->bus))
return NULL;
return dev->bus->self;
}
#endif
#if (SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,1,0))
#undef HAVE_STRUCT_PAGE_PFMEMALLOC
#define HAVE_DCBNL_OPS_SETAPP_RETURN_INT
#endif
#ifndef list_next_entry
#define list_next_entry(pos, member) \
list_entry((pos)->member.next, typeof(*(pos)), member)
#endif
#ifndef list_prev_entry
#define list_prev_entry(pos, member) \
list_entry((pos)->member.prev, typeof(*(pos)), member)
#endif
#if ( LINUX_VERSION_CODE > KERNEL_VERSION(2,6,20) )
#define devm_kcalloc(dev, cnt, size, flags) \
devm_kzalloc(dev, cnt * size, flags)
#endif /* > 2.6.20 */
#else /* >= 3.13.0 */
#define HAVE_VXLAN_CHECKS
@@ -3731,12 +4288,16 @@ extern int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask);
#else
#define HAVE_NDO_SELECT_QUEUE_ACCEL
#endif
#define HAVE_NET_GET_RANDOM_ONCE
#define HAVE_HWMON_DEVICE_REGISTER_WITH_GROUPS
#endif
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,14,0) )
#ifndef U16_MAX
#define U16_MAX ((u16)~0U)
#endif
#ifndef U32_MAX
#define U32_MAX ((u32)~0U)
#endif
@@ -3754,6 +4315,14 @@ extern int __kc_dma_set_mask_and_coherent(struct device *dev, u64 mask);
#define PKT_HASH_TYPE_L3 2
#define PKT_HASH_TYPE_L4 3
enum _kc_pkt_hash_types {
_KC_PKT_HASH_TYPE_NONE = PKT_HASH_TYPE_NONE,
_KC_PKT_HASH_TYPE_L2 = PKT_HASH_TYPE_L2,
_KC_PKT_HASH_TYPE_L3 = PKT_HASH_TYPE_L3,
_KC_PKT_HASH_TYPE_L4 = PKT_HASH_TYPE_L4,
};
#define pkt_hash_types _kc_pkt_hash_types
#define skb_set_hash __kc_skb_set_hash
static inline void __kc_skb_set_hash(struct sk_buff __maybe_unused *skb,
u32 __maybe_unused hash,
@@ -3770,15 +4339,26 @@ static inline void __kc_skb_set_hash(struct sk_buff __maybe_unused *skb,
#else /* RHEL_RELEASE_CODE >= 7.0 || SLE_VERSION_CODE >= 12.0 */
#if (!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5)))
#ifndef HAVE_VXLAN_RX_OFFLOAD
#define HAVE_VXLAN_RX_OFFLOAD
#endif /* HAVE_VXLAN_RX_OFFLOAD */
#endif
#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_VXLAN)
#define HAVE_UDP_ENC_TUNNEL
#endif
#ifndef HAVE_VXLAN_CHECKS
#define HAVE_VXLAN_CHECKS
#endif /* HAVE_VXLAN_CHECKS */
#endif /* !(RHEL_RELEASE_CODE >= 7.0 && SLE_VERSION_CODE >= 12.0) */
#if ((RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)) ||\
(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,0,0)))
#define HAVE_NDO_DFWD_OPS
#endif
#ifndef pci_enable_msix_range
extern int __kc_pci_enable_msix_range(struct pci_dev *dev,
struct msix_entry *entries,
@@ -3803,7 +4383,18 @@ static inline void __kc_ether_addr_copy(u8 *dst, const u8 *src)
#endif
}
#endif /* ether_addr_copy */
int __kc_ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
int target, unsigned short *fragoff, int *flags);
#define ipv6_find_hdr(a, b, c, d, e) __kc_ipv6_find_hdr((a), (b), (c), (d), (e))
#ifndef OPTIMIZER_HIDE_VAR
#ifdef __GNUC__
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
#else
#include <linux/barrier.h>
#define OPTIMIZER_HIDE_VAR(var) barrier()
#endif
#endif
#else /* >= 3.14.0 */
/* for ndo_dfwd_ ops add_station, del_station and _start_xmit */
@@ -3822,7 +4413,11 @@ static inline void __kc_ether_addr_copy(u8 *dst, const u8 *src)
#define u64_stats_fetch_retry_irq u64_stats_fetch_retry_bh
#endif
char *_kc_devm_kstrdup(struct device *dev, const char *s, gfp_t gfp);
#define devm_kstrdup(dev, s, gfp) _kc_devm_kstrdup(dev, s, gfp)
#else
#define HAVE_NET_GET_RANDOM_ONCE
#define HAVE_PTP_1588_CLOCK_PINS
#define HAVE_NETDEV_PORT
#endif /* 3.15.0 */
@@ -3925,11 +4520,22 @@ static inline void __kc_dev_mc_unsync(struct net_device __maybe_unused *dev,
#define NETIF_F_GSO_UDP_TUNNEL_CSUM 0
#define SKB_GSO_UDP_TUNNEL_CSUM 0
#endif
extern void *__kc_devm_kmemdup(struct device *dev, const void *src, size_t len,
gfp_t gfp);
#define devm_kmemdup __kc_devm_kmemdup
#else
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,13,0) )
#define HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,0,0)))
#undef HAVE_PCI_ERROR_HANDLER_RESET_NOTIFY
#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
#endif /* SLES15 */
#endif /* >= 3.16.0 && < 4.13.0 */
#define HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
#endif /* 3.16.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,17,0) )
#if !(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,8) && \
RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) && \
@@ -3959,11 +4565,16 @@ static inline struct timespec timespec64_to_timespec(const struct timespec64 ts6
#endif /* timespec64 */
#endif /* !(RHEL6.8<RHEL7.0) && !RHEL7.2+ */
#if !(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define hlist_add_behind(_a, _b) hlist_add_after(_b, _a)
#endif
#else
#define HAVE_DCBNL_OPS_SETAPP_RETURN_INT
#include <linux/time64.h>
#endif /* 3.17.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,0) )
#ifndef NO_PTP_SUPPORT
#include <linux/errqueue.h>
@@ -3982,17 +4593,39 @@ extern unsigned int __kc_eth_get_headlen(unsigned char *data, unsigned int max_l
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,1))
#define HAVE_SKBUFF_CSUM_LEVEL
#endif /* >= RH 7.1 */
/* RHEL 7.3 backported xmit_more */
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3))
#define HAVE_SKB_XMIT_MORE
#endif /* >= RH 7.3 */
#undef GENMASK
#define GENMASK(h, l) \
(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
#undef GENMASK_ULL
#define GENMASK_ULL(h, l) \
(((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#else /* 3.18.0 */
#define HAVE_SKBUFF_CSUM_LEVEL
#define HAVE_SKB_XMIT_MORE
#define HAVE_SKB_INNER_PROTOCOL_TYPE
#endif /* 3.18.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,4) )
#else
#define HAVE_NDO_FEATURES_CHECK
#endif /* 3.18.4 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,18,13) )
#ifndef WRITE_ONCE
#define WRITE_ONCE(x, val) ({ ACCESS_ONCE(x) = (val); })
#endif
#endif /* 3.18.13 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,19,0) )
/* netdev_phys_port_id renamed to netdev_phys_item_id */
#define netdev_phys_item_id netdev_phys_port_id
@@ -4003,12 +4636,17 @@ static inline void _kc_napi_complete_done(struct napi_struct *napi,
}
#define napi_complete_done _kc_napi_complete_done
extern int _kc_bitmap_print_to_pagebuf(bool list, char *buf,
const unsigned long *maskp,
int nmaskbits);
#define bitmap_print_to_pagebuf _kc_bitmap_print_to_pagebuf
#ifndef NETDEV_RSS_KEY_LEN
#define NETDEV_RSS_KEY_LEN (13 * 4)
#endif
#if ( !(RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,7) && \
(RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)))) )
#if (!(RHEL_RELEASE_CODE && \
((RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6,7) && RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) || \
(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)))))
#define netdev_rss_key_fill(buffer, len) __kc_netdev_rss_key_fill(buffer, len)
#endif /* RHEL_RELEASE_CODE */
extern void __kc_netdev_rss_key_fill(void *buffer, size_t len);
@@ -4018,6 +4656,9 @@ extern void __kc_netdev_rss_key_fill(void *buffer, size_t len);
#define dma_rmb() rmb()
#endif
#ifndef dev_alloc_pages
#ifndef NUMA_NO_NODE
#define NUMA_NO_NODE -1
#endif
#define dev_alloc_pages(_order) alloc_pages_node(NUMA_NO_NODE, (GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_MEMALLOC), (_order))
#endif
#ifndef dev_alloc_page
@@ -4065,19 +4706,45 @@ static inline struct sk_buff *__kc_napi_alloc_skb(struct napi_struct *napi, unsi
#define __napi_alloc_skb(napi,len,mask) __kc_napi_alloc_skb(napi,len)
#endif /* SKB_ALLOC_NAPI */
#define HAVE_CONFIG_PM_RUNTIME
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,1))
#define NDO_DFLT_BRIDGE_GETLINK_HAS_BRFLAGS
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(6,7)) && \
(RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)))
#define HAVE_RXFH_HASHFUNC
#endif /* RHEL_RELEASE_CODE */
#endif /* 6.7 < RHEL < 7.0 */
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,1))
#define HAVE_RXFH_HASHFUNC
#define NDO_DFLT_BRIDGE_GETLINK_HAS_BRFLAGS
#endif /* RHEL > 7.1 */
#ifndef napi_schedule_irqoff
#define napi_schedule_irqoff napi_schedule
#endif
#ifndef READ_ONCE
#define READ_ONCE(_x) ACCESS_ONCE(_x)
#endif
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
#define HAVE_NDO_FDB_ADD_VID
#endif
#ifndef ETH_MODULE_SFF_8636
#define ETH_MODULE_SFF_8636 0x3
#endif
#ifndef ETH_MODULE_SFF_8636_LEN
#define ETH_MODULE_SFF_8636_LEN 256
#endif
#ifndef ETH_MODULE_SFF_8436
#define ETH_MODULE_SFF_8436 0x4
#endif
#ifndef ETH_MODULE_SFF_8436_LEN
#define ETH_MODULE_SFF_8436_LEN 256
#endif
#ifndef writel_relaxed
#define writel_relaxed writel
#endif
#else /* 3.19.0 */
#define HAVE_NDO_FDB_ADD_VID
#define HAVE_RXFH_HASHFUNC
#define NDO_DFLT_BRIDGE_GETLINK_HAS_BRFLAGS
#endif /* 3.19.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(3,20,0) )
/* vlan_tx_xx functions got renamed to skb_vlan */
#ifndef skb_vlan_tag_get
@@ -4089,11 +4756,25 @@ static inline struct sk_buff *__kc_napi_alloc_skb(struct napi_struct *napi, unsi
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,1))
#define HAVE_INCLUDE_LINUX_TIMECOUNTER_H
#endif
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
#define HAVE_NDO_BRIDGE_SET_DEL_LINK_FLAGS
#endif
#else
#define HAVE_INCLUDE_LINUX_TIMECOUNTER_H
#define HAVE_NDO_BRIDGE_SET_DEL_LINK_FLAGS
#endif /* 3.20.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,0,0) )
/* Definition for CONFIG_OF was introduced earlier */
#if !defined(CONFIG_OF) && \
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
static inline struct device_node *
pci_device_to_OF_node(const struct pci_dev __always_unused *pdev) { return NULL; }
#endif /* !CONFIG_OF && RHEL < 7.3 */
#endif /* < 4.0 */
/*****************************************************************************/
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,1,0) )
#ifndef NO_PTP_SUPPORT
#ifdef HAVE_INCLUDE_LINUX_TIMECOUNTER_H
@@ -4105,17 +4786,44 @@ static inline void __kc_timecounter_adjtime(struct timecounter *tc, s64 delta)
{
tc->nsec += delta;
}
static inline struct net_device *
of_find_net_device_by_node(struct device_node __always_unused *np)
{
return NULL;
}
#define timecounter_adjtime __kc_timecounter_adjtime
#endif
#else
#if ((RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2))) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,2,0))))
#define HAVE_NDO_SET_VF_RSS_QUERY_EN
#endif
#if RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))
#define HAVE_NDO_BRIDGE_GETLINK_NLFLAGS
#endif
#if !((RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(6,8) && RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7,0)) && \
(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)) && \
(SLE_VERSION_CODE > SLE_VERSION(12,1,0)))
extern unsigned int _kc_cpumask_local_spread(unsigned int i, int node);
#define cpumask_local_spread _kc_cpumask_local_spread
#endif
#else /* >= 4,1,0 */
#define HAVE_PTP_CLOCK_INFO_GETTIME64
#define HAVE_NDO_BRIDGE_GETLINK_NLFLAGS
#define HAVE_PASSTHRU_FEATURES_CHECK
#define HAVE_NDO_SET_VF_RSS_QUERY_EN
#define HAVE_NDO_SET_TX_MAXRATE
#endif /* 4,1,0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,1,9))
#if (!(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,1,0)))
#if (!(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)) && \
!((SLE_VERSION_CODE == SLE_VERSION(11,3,0)) && \
(SLE_LOCALVERSION_CODE >= SLE_LOCALVERSION(0,47,71))) && \
!((SLE_VERSION_CODE == SLE_VERSION(11,4,0)) && \
(SLE_LOCALVERSION_CODE >= SLE_LOCALVERSION(65,0,0))) && \
!(SLE_VERSION_CODE >= SLE_VERSION(12,1,0)))
static inline bool page_is_pfmemalloc(struct page __maybe_unused *page)
{
#ifdef HAVE_STRUCT_PAGE_PFMEMALLOC
@@ -4124,21 +4832,81 @@ static inline bool page_is_pfmemalloc(struct page __maybe_unused *page)
return false;
#endif
}
#endif /* !SLES12sp1 */
#endif /* !RHEL7.2+ && !SLES11sp3(3.0.101-0.47.71+ update) && !SLES11sp4(3.0.101-65+ update) & !SLES12sp1+ */
#else
#undef HAVE_STRUCT_PAGE_PFMEMALLOC
#endif /* 4.1.9 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,2,0))
#if (!(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,2)) && \
!(SLE_VERSION_CODE >= SLE_VERSION(12,1,0)))
#define ETHTOOL_RX_FLOW_SPEC_RING 0x00000000FFFFFFFFULL
#define ETHTOOL_RX_FLOW_SPEC_RING_VF 0x000000FF00000000ULL
#define ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF 32
static inline __u64 ethtool_get_flow_spec_ring(__u64 ring_cookie)
{
return ETHTOOL_RX_FLOW_SPEC_RING & ring_cookie;
}
static inline __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
{
return (ETHTOOL_RX_FLOW_SPEC_RING_VF & ring_cookie) >>
ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
}
#endif /* ! RHEL >= 7.2 && ! SLES >= 12.1 */
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_NDO_DFLT_BRIDGE_GETLINK_VLAN_SUPPORT
#endif
#else
#define HAVE_NDO_DFLT_BRIDGE_GETLINK_VLAN_SUPPORT
#endif /* 4.2.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,4,0))
#else
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3))
#define HAVE_NDO_SET_VF_TRUST
#endif /* (RHEL_RELEASE >= 7.3) */
#ifndef CONFIG_64BIT
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3,3,0))
#include <asm-generic/io-64-nonatomic-lo-hi.h> /* 32-bit readq/writeq */
#else /* < 3.3.0 */
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,26))
#include <asm-generic/int-ll64.h>
#endif /* 2.6.26 => 3.3.0 */
#ifndef readq
static inline __u64 readq(const volatile void __iomem *addr)
{
const volatile u32 __iomem *p = addr;
u32 low, high;
low = readl(p);
high = readl(p + 1);
return low + ((u64)high << 32);
}
#define readq readq
#endif
#ifndef writeq
static inline void writeq(__u64 val, volatile void __iomem *addr)
{
writel(val, addr);
writel(val >> 32, addr + 4);
}
#define writeq writeq
#endif
#endif /* < 3.3.0 */
#endif /* !CONFIG_64BIT */
#else /* < 4.4.0 */
#define HAVE_NDO_SET_VF_TRUST
#ifndef CONFIG_64BIT
#include <linux/io-64-nonatomic-lo-hi.h> /* 32-bit readq/writeq */
#endif /* !CONFIG_64BIT */
#endif /* 4.4.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,5,0))
/* protect against a likely backport */
#ifndef NETIF_F_CSUM_MASK
@@ -4147,17 +4915,576 @@ static inline bool page_is_pfmemalloc(struct page __maybe_unused *page)
#ifndef NETIF_F_SCTP_CRC
#define NETIF_F_SCTP_CRC NETIF_F_SCTP_CSUM
#endif /* NETIF_F_SCTP_CRC */
#else
#if (!(RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)))
#define eth_platform_get_mac_address _kc_eth_platform_get_mac_address
extern int _kc_eth_platform_get_mac_address(struct device *dev __maybe_unused,
u8 *mac_addr __maybe_unused);
#endif /* !(RHEL_RELEASE >= 7.3) */
#else /* 4.5.0 */
#if ( LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0) )
#define HAVE_GENEVE_RX_OFFLOAD
#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_GENEVE)
#define HAVE_UDP_ENC_TUNNEL
#endif
#endif /* < 4.8.0 */
#define HAVE_NETIF_NAPI_ADD_CALLS_NAPI_HASH_ADD
#endif /* 4.5.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,6,0))
#if !(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,3))
static inline unsigned char *skb_checksum_start(const struct sk_buff *skb)
{
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,22))
return skb->head + skb->csum_start;
#else /* < 2.6.22 */
return skb_transport_header(skb);
#endif
}
#endif
#if !(UBUNTU_VERSION_CODE && \
UBUNTU_VERSION_CODE >= UBUNTU_VERSION(4,4,0,21)) && \
!(RHEL_RELEASE_CODE && \
(RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))) && \
!(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
static inline void napi_consume_skb(struct sk_buff *skb,
int __always_unused budget)
{
dev_consume_skb_any(skb);
}
#endif /* UBUNTU 4,4,0,21, RHEL 7.2, SLES12 SP3 */
#if !(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) && \
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
static inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
{
	*sum = csum_fold(csum_add(diff, ~csum_unfold(*sum)));
}
#endif
#if !(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2))) && \
!(SLE_VERSION_CODE && (SLE_VERSION_CODE > SLE_VERSION(12,3,0)))
static inline void page_ref_inc(struct page *page)
{
get_page(page);
}
#else
#define HAVE_PAGE_COUNT_BULK_UPDATE
#endif
#ifndef IPV4_USER_FLOW
#define IPV4_USER_FLOW 0x0d /* spec only (usr_ip4_spec) */
#endif
#else /* >= 4.6.0 */
#define HAVE_PAGE_COUNT_BULK_UPDATE
#define HAVE_ETHTOOL_FLOW_UNION_IP6_SPEC
#define HAVE_PTP_CROSSTIMESTAMP
#endif /* 4.6.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,7,0))
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) ||\
(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
#define HAVE_NETIF_TRANS_UPDATE
#endif
#if (UBUNTU_VERSION_CODE && \
UBUNTU_VERSION_CODE >= UBUNTU_VERSION(4,4,0,21)) || \
(RHEL_RELEASE_CODE && \
RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)) || \
(SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12,3,0))
#define HAVE_DEVLINK_SUPPORT
#endif /* UBUNTU 4,4,0,21, RHEL 7.4, SLES12 SP3 */
#else /* 4.7.0 */
#define HAVE_DEVLINK_SUPPORT
#define HAVE_NETIF_TRANS_UPDATE
#define HAVE_ETHTOOL_CONVERT_U32_AND_LINK_MODE
#endif /* 4.7.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0))
#if !(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
enum udp_parsable_tunnel_type {
UDP_TUNNEL_TYPE_VXLAN,
UDP_TUNNEL_TYPE_GENEVE,
};
struct udp_tunnel_info {
unsigned short type;
sa_family_t sa_family;
__be16 port;
};
#endif
#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5))
#define HAVE_TCF_EXTS_TO_LIST
#endif
#if !(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) &&\
!(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4))
static inline int
#ifdef HAVE_NON_CONST_PCI_DRIVER_NAME
pci_request_io_regions(struct pci_dev *pdev, char *name)
#else
pci_request_io_regions(struct pci_dev *pdev, const char *name)
#endif
{
return pci_request_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_IO), name);
}
static inline void
pci_release_io_regions(struct pci_dev *pdev)
{
return pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_IO));
}
static inline int
#ifdef HAVE_NON_CONST_PCI_DRIVER_NAME
pci_request_mem_regions(struct pci_dev *pdev, char *name)
#else
pci_request_mem_regions(struct pci_dev *pdev, const char *name)
#endif
{
return pci_request_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM), name);
}
static inline void
pci_release_mem_regions(struct pci_dev *pdev)
{
return pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
}
#endif /* !(SLES >= 12.3.0) && !(RHEL >= 7.4) */
#else
#define HAVE_UDP_ENC_RX_OFFLOAD
#define HAVE_TCF_EXTS_TO_LIST
#endif /* 4.8.0 */
/*****************************************************************************/
#ifdef ETHTOOL_GLINKSETTINGS
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,7,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,3)))
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* RHEL_RELEASE_VERSION(7,3) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* SLE_VERSION(12,3,0) */
#else
#define HAVE_ETHTOOL_25G_BITS
#define HAVE_ETHTOOL_50G_BITS
#define HAVE_ETHTOOL_100G_BITS
#endif /* KERNEL_VERSION(4.7.0) */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,8,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* SLE_VERSION(12,3,0) */
#else
#define HAVE_ETHTOOL_NEW_50G_BITS
#endif /* KERNEL_VERSION(4.8.0)*/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,9,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,4,0)))
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* SLE_VERSION(15,4,0) */
#else
#define HAVE_ETHTOOL_NEW_1G_BITS
#define HAVE_ETHTOOL_NEW_10G_BITS
#endif /* KERNEL_VERSION(4.9.0) */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,10,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* RHEL_RELEASE_VERSION(7,4) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,4,0)))
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* SLE_VERSION(15,4,0) */
#else
#define HAVE_ETHTOOL_NEW_2500MB_BITS
#define HAVE_ETHTOOL_5G_BITS
#endif /* KERNEL_VERSION(4.10.0) */
#endif /* ETHTOOL_GLINKSETTINGS */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,9,0))
#ifdef NETIF_F_HW_TC
#if (!(RHEL_RELEASE_CODE) && !(SLE_VERSION_CODE) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE < SLE_VERSION(12,3,0))))
#define HAVE_TC_FLOWER_VLAN_IN_TAGS
#endif /* (!RHEL_RELEASE_CODE && !SLE_VERSION_CODE) || (SLES < 12.3.0) */
#endif /* NETIF_F_HW_TC */
#endif /* KERNEL_VERSION(4.9.0) */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,10,0))
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,4)))
#define HAVE_DEV_WALK_API
#endif
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE == SLE_VERSION(12,3,0)))
#define HAVE_STRUCT_DMA_ATTRS
#endif /* (SLES == 12.3.0) */
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0)))
#define HAVE_NETDEVICE_MIN_MAX_MTU
#endif /* (SLES >= 12.3.0) */
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5)))
#define HAVE_STRUCT_DMA_ATTRS
#define HAVE_RHEL7_EXTENDED_MIN_MAX_MTU
#define HAVE_NETDEVICE_MIN_MAX_MTU
#endif
#if (!(SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) && \
!(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5))))
#ifndef dma_map_page_attrs
#define dma_map_page_attrs __kc_dma_map_page_attrs
static inline dma_addr_t __kc_dma_map_page_attrs(struct device *dev,
struct page *page,
size_t offset, size_t size,
enum dma_data_direction dir,
unsigned long __always_unused attrs)
{
return dma_map_page(dev, page, offset, size, dir);
}
#endif
#ifndef dma_unmap_page_attrs
#define dma_unmap_page_attrs __kc_dma_unmap_page_attrs
static inline void __kc_dma_unmap_page_attrs(struct device *dev,
dma_addr_t addr, size_t size,
enum dma_data_direction dir,
unsigned long __always_unused attrs)
{
dma_unmap_page(dev, addr, size, dir);
}
#endif
static inline void __page_frag_cache_drain(struct page *page,
unsigned int count)
{
#ifdef HAVE_PAGE_COUNT_BULK_UPDATE
if (!page_ref_sub_and_test(page, count))
return;
init_page_count(page);
#else
BUG_ON(count > 1);
if (!count)
return;
#endif
__free_pages(page, compound_order(page));
}
#endif /* !SLE_VERSION(12,3,0) && !RHEL_VERSION(7,5) */
#if ((SLE_VERSION_CODE && (SLE_VERSION_CODE > SLE_VERSION(12,3,0))) ||\
(RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5)))
#define HAVE_SWIOTLB_SKIP_CPU_SYNC
#endif
#if ((SLE_VERSION_CODE && (SLE_VERSION_CODE < SLE_VERSION(15,0,0))) ||\
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE <= RHEL_RELEASE_VERSION(7,4))))
#define page_frag_free __free_page_frag
#endif
#ifndef ETH_MIN_MTU
#define ETH_MIN_MTU 68
#endif /* ETH_MIN_MTU */
#else /* >= 4.10 */
#define HAVE_TC_FLOWER_ENC
#define HAVE_NETDEVICE_MIN_MAX_MTU
#define HAVE_SWIOTLB_SKIP_CPU_SYNC
#define HAVE_NETDEV_TC_RESETS_XPS
#define HAVE_XPS_QOS_SUPPORT
#define HAVE_DEV_WALK_API
#endif /* 4.10.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,11,0))
#ifdef CONFIG_NET_RX_BUSY_POLL
#define HAVE_NDO_BUSY_POLL
#endif /* CONFIG_NET_RX_BUSY_POLL */
#if ((SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5))))
#define HAVE_VOID_NDO_GET_STATS64
#endif /* (SLES >= 12.3.0) || (RHEL >= 7.5) */
#else /* > 4.11 */
#define HAVE_VOID_NDO_GET_STATS64
#define HAVE_VM_OPS_FAULT_NO_VMA
#endif /* 4.11.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,13,0))
#define PCI_EXP_LNKCAP_SLS_8_0GB 0x00000003 /* LNKCAP2 SLS Vector bit 2 */
#else /* > 4.13 */
#define HAVE_HWTSTAMP_FILTER_NTP_ALL
#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
#define HAVE_PTP_CLOCK_DO_AUX_WORK
#endif /* 4.13.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0))
#ifndef ethtool_link_ksettings_del_link_mode
#define ethtool_link_ksettings_del_link_mode(ptr, name, mode) \
__clear_bit(ETHTOOL_LINK_MODE_ ## mode ## _BIT, (ptr)->link_modes.name)
#endif
#if (SLE_VERSION_CODE && (SLE_VERSION_CODE >= SLE_VERSION(15,0,0)))
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
#endif
#if (RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,5)))
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
#define HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SETUP_TC
#endif
#define TIMER_DATA_TYPE unsigned long
#define TIMER_FUNC_TYPE void (*)(TIMER_DATA_TYPE)
#define timer_setup(timer, callback, flags) \
__setup_timer((timer), (TIMER_FUNC_TYPE)(callback), \
(TIMER_DATA_TYPE)(timer), (flags))
#define from_timer(var, callback_timer, timer_fieldname) \
container_of(callback_timer, typeof(*var), timer_fieldname)
#ifndef xdp_do_flush_map
#define xdp_do_flush_map() do {} while (0)
#endif
struct _kc_xdp_buff {
void *data;
void *data_end;
void *data_hard_start;
};
#define xdp_buff _kc_xdp_buff
struct _kc_bpf_prog {
};
#define bpf_prog _kc_bpf_prog
#else /* > 4.14 */
#define HAVE_XDP_SUPPORT
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
#endif /* 4.14.0 */
/*****************************************************************************/
#ifndef ETHTOOL_GLINKSETTINGS
#define __ETHTOOL_LINK_MODE_MASK_NBITS 32
#define ETHTOOL_LINK_MASK_SIZE BITS_TO_LONGS(__ETHTOOL_LINK_MODE_MASK_NBITS)
/**
* struct ethtool_link_ksettings
* @link_modes: supported and advertising, single item arrays
* @link_modes.supported: bitmask of supported link speeds
* @link_modes.advertising: bitmask of currently advertised speeds
* @base: base link details
* @base.speed: current link speed
* @base.port: current port type
* @base.duplex: current duplex mode
* @base.autoneg: current autonegotiation settings
*
* This struct and the following macros provide a way to support the old
* ethtool get/set_settings API on older kernels, but in the style of the new
* GLINKSETTINGS API. In this way, the same code can be used to support both
 * APIs as seamlessly as possible.
*
* It should be noted the old API only has support up to the first 32 bits.
*/
struct ethtool_link_ksettings {
struct {
u32 speed;
u8 port;
u8 duplex;
u8 autoneg;
} base;
struct {
unsigned long supported[ETHTOOL_LINK_MASK_SIZE];
unsigned long advertising[ETHTOOL_LINK_MASK_SIZE];
} link_modes;
};
#define ETHTOOL_LINK_NAME_advertising(mode) ADVERTISED_ ## mode
#define ETHTOOL_LINK_NAME_supported(mode) SUPPORTED_ ## mode
#define ETHTOOL_LINK_NAME(name) ETHTOOL_LINK_NAME_ ## name
#define ETHTOOL_LINK_CONVERT(name, mode) ETHTOOL_LINK_NAME(name)(mode)
/**
* ethtool_link_ksettings_zero_link_mode
* @ptr: ptr to ksettings struct
* @name: supported or advertising
*/
#define ethtool_link_ksettings_zero_link_mode(ptr, name)\
(*((ptr)->link_modes.name) = 0x0)
/**
* ethtool_link_ksettings_add_link_mode
* @ptr: ptr to ksettings struct
* @name: supported or advertising
* @mode: link mode to add
*/
#define ethtool_link_ksettings_add_link_mode(ptr, name, mode)\
(*((ptr)->link_modes.name) |= (typeof(*((ptr)->link_modes.name)))ETHTOOL_LINK_CONVERT(name, mode))
/**
* ethtool_link_ksettings_test_link_mode
* @ptr: ptr to ksettings struct
* @name: supported or advertising
 * @mode: link mode to test
*/
#define ethtool_link_ksettings_test_link_mode(ptr, name, mode)\
(!!(*((ptr)->link_modes.name) & ETHTOOL_LINK_CONVERT(name, mode)))
#endif /* !ETHTOOL_GLINKSETTINGS */
/*****************************************************************************/
#if ((LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0)) || \
(SLE_VERSION_CODE && (SLE_VERSION_CODE <= SLE_VERSION(12,3,0))) || \
(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE <= RHEL_RELEASE_VERSION(7,5))))
#define phy_speed_to_str _kc_phy_speed_to_str
const char *_kc_phy_speed_to_str(int speed);
#else /* (LINUX >= 4.14.0) && (SLES > 12.3.0) && (RHEL > 7.5) */
#include <linux/phy.h>
#endif /* (LINUX < 4.14.0) || (SLES <= 12.3.0) || (RHEL <= 7.5) */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,15,0))
#if !(RHEL_RELEASE_CODE && (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7,6)))
#define TC_SETUP_QDISC_MQPRIO TC_SETUP_MQPRIO
#endif
void _kc_ethtool_intersect_link_masks(struct ethtool_link_ksettings *dst,
struct ethtool_link_ksettings *src);
#define ethtool_intersect_link_masks _kc_ethtool_intersect_link_masks
#else /* >= 4.15 */
#define HAVE_NDO_BPF
#define HAVE_XDP_BUFF_DATA_META
#define HAVE_TC_CB_AND_SETUP_QDISC_MQPRIO
#endif /* 4.15.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,16,0))
#define pci_printk(level, pdev, fmt, arg...) \
dev_printk(level, &(pdev)->dev, fmt, ##arg)
#define pci_emerg(pdev, fmt, arg...) dev_emerg(&(pdev)->dev, fmt, ##arg)
#define pci_alert(pdev, fmt, arg...) dev_alert(&(pdev)->dev, fmt, ##arg)
#define pci_crit(pdev, fmt, arg...) dev_crit(&(pdev)->dev, fmt, ##arg)
#define pci_err(pdev, fmt, arg...) dev_err(&(pdev)->dev, fmt, ##arg)
#define pci_warn(pdev, fmt, arg...) dev_warn(&(pdev)->dev, fmt, ##arg)
#define pci_notice(pdev, fmt, arg...) dev_notice(&(pdev)->dev, fmt, ##arg)
#define pci_info(pdev, fmt, arg...) dev_info(&(pdev)->dev, fmt, ##arg)
#define pci_dbg(pdev, fmt, arg...) dev_dbg(&(pdev)->dev, fmt, ##arg)
#ifndef array_index_nospec
static inline unsigned long _kc_array_index_mask_nospec(unsigned long index,
unsigned long size)
{
/*
* Always calculate and emit the mask even if the compiler
* thinks the mask is not needed. The compiler does not take
* into account the value of @index under speculation.
*/
OPTIMIZER_HIDE_VAR(index);
return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
}
#define array_index_nospec(index, size) \
({ \
typeof(index) _i = (index); \
typeof(size) _s = (size); \
unsigned long _mask = _kc_array_index_mask_nospec(_i, _s); \
\
BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
\
(typeof(_i)) (_i & _mask); \
})
#endif /* array_index_nospec */
#else /* >= 4.16 */
#include <linux/nospec.h>
#define HAVE_XDP_BUFF_RXQ
#endif /* 4.16.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,17,0))
#include <linux/pci_regs.h>
#include <linux/pci.h>
#define PCIE_SPEED_16_0GT 0x17
#define PCI_EXP_LNKCAP_SLS_16_0GB 0x00000004 /* LNKCAP2 SLS Vector bit 3 */
#define PCI_EXP_LNKSTA_CLS_16_0GB 0x0004 /* Current Link Speed 16.0GT/s */
#define PCI_EXP_LNKCAP2_SLS_16_0GB 0x00000010 /* Supported Speed 16GT/s */
void _kc_pcie_print_link_status(struct pci_dev *dev);
#define pcie_print_link_status _kc_pcie_print_link_status
#endif /* 4.17.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,18,0))
#ifdef NETIF_F_HW_L2FW_DOFFLOAD
#include <linux/if_macvlan.h>
#ifndef macvlan_supports_dest_filter
#define macvlan_supports_dest_filter _kc_macvlan_supports_dest_filter
static inline bool _kc_macvlan_supports_dest_filter(struct net_device *dev)
{
struct macvlan_dev *macvlan = netdev_priv(dev);
return macvlan->mode == MACVLAN_MODE_PRIVATE ||
macvlan->mode == MACVLAN_MODE_VEPA ||
macvlan->mode == MACVLAN_MODE_BRIDGE;
}
#endif
#ifndef macvlan_accel_priv
#define macvlan_accel_priv _kc_macvlan_accel_priv
static inline void *_kc_macvlan_accel_priv(struct net_device *dev)
{
struct macvlan_dev *macvlan = netdev_priv(dev);
return macvlan->fwd_priv;
}
#endif
#ifndef macvlan_release_l2fw_offload
#define macvlan_release_l2fw_offload _kc_macvlan_release_l2fw_offload
static inline int _kc_macvlan_release_l2fw_offload(struct net_device *dev)
{
struct macvlan_dev *macvlan = netdev_priv(dev);
macvlan->fwd_priv = NULL;
return dev_uc_add(macvlan->lowerdev, dev->dev_addr);
}
#endif
#endif /* NETIF_F_HW_L2FW_DOFFLOAD */
#else
#define HAVE_XDP_FRAME_STRUCT
#define HAVE_NDO_XDP_XMIT_BULK_AND_FLAGS
#define NO_NDO_XDP_FLUSH
#endif /* 4.18.0 */
/*****************************************************************************/
#if (LINUX_VERSION_CODE < KERNEL_VERSION(4,19,0))
#ifdef ETHTOOL_GLINKSETTINGS
#define ethtool_ks_clear(ptr, name) \
ethtool_link_ksettings_zero_link_mode(ptr, name)
#define ethtool_ks_add_mode(ptr, name, mode) \
ethtool_link_ksettings_add_link_mode(ptr, name, mode)
#define ethtool_ks_del_mode(ptr, name, mode) \
ethtool_link_ksettings_del_link_mode(ptr, name, mode)
#define ethtool_ks_test(ptr, name, mode) \
ethtool_link_ksettings_test_link_mode(ptr, name, mode)
#endif /* ETHTOOL_GLINKSETTINGS */
#else /* >= 4.19.0 */
#define HAVE_TCF_BLOCK_CB_REGISTER_EXTACK
#define NO_NETDEV_BPF_PROG_ATTACHED
#define HAVE_NDO_SELECT_QUEUE_SB_DEV
#endif /* 4.19.0 */
#endif /* _KCOMPAT_H_ */