Commit Graph

82594 Commits

Liad Kaufman
77ff2c6b49 mac80211: update HE IEs to D3.3
Update element names and new fields according to D3.3 of
the HE spec.

Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-02-22 13:46:55 +01:00
Li RongQing
1740771524 mac80211_hwsim: propagate genlmsg_reply return code
genlmsg_reply can fail, so propagate its return code

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-02-22 13:27:22 +01:00
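
A minimal sketch of the shape of that change, with the surrounding hwsim
callback abbreviated (variable placement is illustrative):

	/* Before, the reply status was silently dropped:
	 *	genlmsg_reply(skb, info);
	 *	return 0;
	 * After, hand the result back to the generic netlink core: */
	return genlmsg_reply(skb, info);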
Florian Fainelli
7a25c6c0aa rocker: Add missing break for PRE_BRIDGE_FLAGS
A break statement should have been added along with the support for
PRE_BRIDGE_FLAGS; without it, the new case falls through (a sketch follows
this entry).

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 93700458ff ("rocker: Check Handle PORT_PRE_BRIDGE_FLAGS")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 21:29:23 -08:00
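
For illustration, the general shape of such a fall-through bug in a
switchdev attr_set handler; the handler names below are placeholders, not
the exact rocker functions:

	switch (attr->id) {
	case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
		err = port_pre_bridge_flags_set(port, attr->u.brport_flags);
		break;	/* the missing break: without it we fall through */
	case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
		err = port_bridge_flags_set(port, attr->u.brport_flags);
		break;
	default:
		err = -EOPNOTSUPP;
		break;
	}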
Huazhong Tan
34f81f049e net: hns3: clear command queue's registers when unloading VF driver
According to the hardware's description, the driver should clear the
command queue's registers when unloading the VF driver. Otherwise, the
stale values left there may put the IMP into a wrong state.

Fixes: fedd0c15d2 ("net: hns3: Add HNS3 VF IMP(Integrated Management Proc) cmd interface")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
Huazhong Tan
232d0d55fc net: hns3: uninitialize command queue while unloading PF driver
According to the hardware's description, the driver should clear the
command queue's registers when unloading the PF driver. Otherwise, the
stale values left there may put the IMP into a wrong state.

This patch also adds hclge_cmd_uninit() to do the command queue
uninitialization, which includes clearing the registers and freeing the
memory.

Fixes: 68c0a5c706 ("net: hns3: Add HNS3 IMP(Integrated Mgmt Proc) Cmd Interface Support")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
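
A rough sketch of what an uninit helper of this shape can look like; the
register-clearing helper, lock, and field names below are placeholders
rather than the exact hns3 symbols:

	static void hclge_cmd_uninit(struct hclge_dev *hdev)
	{
		spin_lock_bh(&hdev->hw.cmq.csq.lock);
		/* zero the CSQ/CRQ base, depth, head and tail registers so
		 * the IMP no longer sees stale queue state */
		hclge_cmd_clear_regs(&hdev->hw);
		spin_unlock_bh(&hdev->hw.cmq.csq.lock);

		/* free the descriptor memory of both rings */
		hclge_free_cmd_desc(&hdev->hw.cmq.csq);
		hclge_free_cmd_desc(&hdev->hw.cmq.crq);
	}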
liuzhongzhu
c6075b1934 net: hns3: Record VF vlan tables
Record the VLAN table entries that the VF has sent to the chip. After a
VF exception, the PF actively clears the VF's configuration from the chip.

Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
liuzhongzhu
6dd86902f2 net: hns3: Record VF unicast and multicast tables
Record the unicast and multicast table entries that the VF has sent to the
chip. After a VF exception, the PF actively clears the VF's configuration
from the chip.

Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
Weihang Li
3aff0ac973 net: hns3: fix 6th bit of ppp mpf abnormal errors
This patch fixes the print message for the 6th bit of the PPP MPF abnormal
errors, which contained an extra letter 'e'.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
Weihang Li
d1f55d6bfc net: hns3: enable 8~11th bit of mac common msi-x error
These bits are now enabled and have been tested.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:05 -08:00
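
For readers unfamiliar with the notation, "8~11th bit" means bits 8
through 11 of the interrupt-enable mask; in kernel C such a range is
typically written with GENMASK. A hypothetical illustration (the real hns3
register and macro names differ):

	/* bits 8..11 -> 0x0f00 */
	#define MAC_COMMON_MSIX_ERR_EN_MASK	GENMASK(11, 8)

	en_mask |= MAC_COMMON_MSIX_ERR_EN_MASK;	/* enable the four bits */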
Weihang Li
747fc3f351 net: hns3: some bugfix of ppu(rcb) ras errors
The 3rd and 4th bits of PPU(RCB) PF Abnormal are RAS errors instead of
MSI-X like the other bits. This patch adds handling and logging for these
two bits. It also fixes the print messages for the 28th and 29th bits of
the PPU MPF Abnormal errors, keeping them consistent with the other errors.

Fixes: f69b10b317 ("net: hns3: handle hw errors of PPU(RCB)")
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
Weihang Li
3d69e59f42 net: hns3: modify print message of ssu common ecc errors
This patch adds the specific bit information to the log, consistent with
the other error types, so that we can tell in which SSU memory an ECC RAS
error has occurred.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
Jian Shen
f18635d52c net: hns3: fix port info query issue for copper port
In the original code, a copper port that is not connected to a PHY always
returns -EOPNOTSUPP when its port information is queried. This patch fixes
it by returning the port information of the MAC instead.

Fixes: 5f373b1585 ("net: hns3: Fix speed/duplex information loss problem when executing ethtool ethx cmd of VF")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
Jian Shen
db68ca0ef7 net: hns3: convert mac advertize and supported from u32 to link mode
The number of link mode bits has grown beyond 31 for some MACs and PHYs.
Convert to using a linkmode bitmap, which can represent all link modes.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
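
In rough terms the conversion swaps a fixed-width u32 advertising word for
the kernel's linkmode bitmap API; a hedged sketch, not the exact hns3
fields:

	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported);

	linkmode_zero(supported);
	/* modes whose bit index exceeds 31 cannot be stored in a u32 */
	linkmode_set_bit(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT, supported);
	linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, supported);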
Yonglong Liu
676131f7c5 net: hns3: Check variable is valid before assigning it to another
In hnae3_register_ae_dev(), ae_algo->ops is assigned to ae_dev->ops before
checking that ae_algo->ops is valid.

And hnae3_register_ae_algo() is missing a check for ae_algo->ops
altogether.

This patch fixes both.

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
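
The shape of the fix, roughly (simplified, not the exact hnae3 code):

	/* validate ops before publishing it to the ae_dev */
	if (!ae_algo->ops) {
		dev_err(&ae_dev->pdev->dev, "ae_algo ops are null\n");
		return;
	}
	ae_dev->ops = ae_algo->ops;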
Yonglong Liu
bdd59d6611 net: hns3: add pointer checking at the beginning of the exported functions.
These functions are exported; checking their pointer arguments at the
beginning makes them safer to call.

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 16:29:04 -08:00
Petr Machata
bb6c346cef mlxsw: spectrum_buffers: Reject overlarge headroom size requests
cap_max_headroom_size holds the maximum supported headroom size.
Overstepping that limit might, under certain conditions, lead to an ASIC
freeze.

Query and store the value, and add mlxsw_sp_sb_max_headroom_cells() for
obtaining the stored value. In __mlxsw_sp_port_headroom_set(), reject
requests where the total port buffer is larger than the advertised
maximum.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
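
Conceptually, the guard added to __mlxsw_sp_port_headroom_set() looks like
the following sketch; the accumulation of the requested size and the error
code are illustrative:

	u32 max_headroom_cells = mlxsw_sp_sb_max_headroom_cells(mlxsw_sp);

	/* total_cells: sum of all priority-group buffers requested for the
	 * port, converted from bytes to cells */
	if (total_cells > max_headroom_cells)
		return -ENOBUFS;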
Petr Machata
edf777f55a mlxsw: spectrum_buffers: Update port headroom configuration
The recommendation for headroom size for 100Gbps port and 100m cable is
101.6KB, reduced accordingly for split ports. The closest higher number
evenly divisible by cell size for both Spectrum-1 and Spectrum-2, and
such that the number of cells can be further divided by maximum split
factor of 4, is 102528 bytes, or 25632 bytes per lane.

Update mlxsw_sp_port_pb_init() to compute the headroom taking into
account this recommended per-lane value and number of lanes actually
dedicated to a given port.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
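
The arithmetic behind the chosen constant, spelled out; the 96-byte and
144-byte cell sizes are assumed here as the usual Spectrum-1 and Spectrum-2
values:

	102528 B / 96 B  = 1068 cells (Spectrum-1), 1068 / 4 = 267 cells per lane
	102528 B / 144 B =  712 cells (Spectrum-2),  712 / 4 = 178 cells per lane
	102528 B / 4 lanes = 25632 B per lane, and 102528 B > the recommended 101.6 KB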
Petr Machata
fe099bf682 mlxsw: spectrum_buffers: Add Spectrum-2 shared buffer configuration
Customize the tables related to shared buffer configuration to match the
current recommendation for Spectrum-2 systems.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
Petr Machata
13f35cc424 mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_mm in sb_vals
The SBMM register configures the shared buffer quota for MC packets
according to Switch-Priority. The default configuration depends on the
chip type. Therefore keep the table and length in struct
mlxsw_sp_sb_vals. Redirect the references from the global definitions to
the fields.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
Petr Machata
bb60a62e02 mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_cm in sb_vals
The SBCM register configures shared buffer quota according to
port-priority resp. port-TC. The default configuration depends on the
chip type. Therefore keep the tables and their lengths in struct
mlxsw_sp_sb_vals. Redirect the references from the global definitions to
the fields.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
Petr Machata
5d25232eb9 mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_prs in mlxsw_sp_sb_vals
The SBPR register configures shared buffer pools. The default
configuration depends on the chip type. Therefore keep it in struct
mlxsw_sp_sb_vals. Redirect the one reference from the global array to
the field.

Because the pool descriptor ID is implicit in the ordering of array
members, both this array and the pool descriptor array have the same
length. Therefore reuse mlxsw_sp_sb.pool_dess_len for the purpose of
determining the length of SBPR array.

Drop the now useless MLXSW_SP_SB_PRS_LEN.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
Petr Machata
cc1ce6ff34 mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_pms in mlxsw_sp_sb_vals
The SBPM register can be used to configure quotas for packets ingressing
from a certain pool to a certain port, and egressing from a certain pool
to a certain port. The default configuration depends on the chip type.
Therefore keep it in struct mlxsw_sp_sb_vals. Redirect the one reference
from the global array to the field.

Because the pool descriptor ID is implicit in the ordering of array
members, both this array and the pool descriptor array have the same
length. Therefore reuse mlxsw_sp_sb.pool_dess_len for the purpose of
determining the length of SBPM array.

Drop the now useless MLXSW_SP_SB_PMS_LEN.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:46 -08:00
Petr Machata
5d65f5f45e mlxsw: spectrum_buffers: Keep pool descriptors in mlxsw_sp_sb_vals
Keep the table of pool descriptors and its length in struct
mlxsw_sp_sb_vals so that it can be specialized per chip type. Redirect
all users from the global definitions to the mlxsw_sp_sb fields.

Give mlxsw_sp_pool_count() an extra mlxsw_sp parameter so that it can
access the descriptor table.

Drop the now unnecessary MLXSW_SP_SB_POOL_DESS_LEN.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:45 -08:00
Petr Machata
93d201f775 mlxsw: spectrum_buffers: Allocate prs & pms dynamically
Spectrum-2 will be configured with a different set of pools than
Spectrum-1. The size of prs and pms buffers will therefore depend on the
chip type of the device.

Therefore, instead of reserving an array directly in a structure
definition, allocate the buffer in mlxsw_sp_sb_port{,s}_init().

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:45 -08:00
Petr Machata
c39f3e0e4f mlxsw: spectrum: Add struct mlxsw_sp_sb_vals
Spectrum-2 will be configured with a different shared buffer
configuration than Spectrum-1. Therefore introduce a structure for
keeping the chip-specific default and immutable configuration.

Configuration mutable in runtime will still be kept in struct
mlxsw_sp_sb.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:57:45 -08:00
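
Putting the series together, the chip-specific container is roughly of the
following shape; the field names are approximations, not the verbatim
driver code:

	struct mlxsw_sp_sb_vals {
		unsigned int				pool_count;
		const struct mlxsw_sp_sb_pool_des	*pool_dess;
		const struct mlxsw_sp_sb_pm		*pms;
		const struct mlxsw_sp_sb_pr		*prs;
		const struct mlxsw_sp_sb_mm		*mms;
		const struct mlxsw_sp_sb_cm		*cms_ingress;
		const struct mlxsw_sp_sb_cm		*cms_egress;
		unsigned int				mms_count;
		unsigned int				cms_ingress_count;
		unsigned int				cms_egress_count;
	};

mlxsw_sp->sb_vals then points at either a Spectrum-1 or a Spectrum-2
instance of this structure, selected at init time.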
Jose Abreu
ae9f346dd3 net: stmmac: dwxgmac2: Also use TBU interrupt to clean TX path
The TBU interrupt is a normal interrupt and can be used to trigger cleaning
of the TX path. Let's check whether it is active in the DMA interrupt
handler.

While at it, refactor the function a bit:
	- Don't check whether RI is enabled, because at function exit we
	  only clear the interrupts that are enabled, so no event will
	  be missed.

In my tests with the XGMAC2 this increased performance.

Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:42:34 -08:00
Jose Abreu
1103d3a553 net: stmmac: dwmac4: Also use TBU interrupt to clean TX path
The TBU interrupt is a normal interrupt and can be used to trigger cleaning
of the TX path. Let's check whether it is active in the DMA interrupt
handler.

While at it, refactor the function a bit:
	- Don't check whether RI is enabled, because at function exit we
	  only clear the interrupts that are enabled, so no event will
	  be missed.

In my tests with the GMAC5 this increased performance.

Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:42:34 -08:00
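
The gist of the change in the DMA interrupt handler, roughly; the macro
and flag names follow the dwmac4 convention but should be treated as
illustrative:

	/* treat "transmit buffer unavailable" like "transmit done": both
	 * mean the TX path is worth cleaning */
	if (likely(intr_status & (DMA_CHAN_STATUS_TI | DMA_CHAN_STATUS_TBU)))
		ret |= handle_tx;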
Jose Abreu
4ccb45857c net: stmmac: Fix NAPI poll in TX path when in multi-queue
Commit 8fce333170 introduced the concept of per-channel NAPI and
independent cleaning of the TX path.

This currently hurts performance in some cases. The scenario occurs when
all packets are being received on queue 0 but TX is performed on a
queue != 0.

Fix this by using a separate NAPI instance for each TX and each RX queue,
as suggested by Florian.

Changes from v2:
	- Only force restart transmission if there are pending packets
Changes from v1:
	- Pass entire ring size to TX clean path (Florian)

Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 15:42:34 -08:00
Florian Fainelli
010c8f01aa net: Get rid of switchdev_port_attr_get()
With the bridge no longer calling switchdev_port_attr_get() to obtain
the supported bridge port flags from a driver but instead trying to set
the bridge port flags directly and relying on driver to reject
unsupported configurations, we can effectively get rid of
switchdev_port_attr_get() entirely since this was the only place where
it was called.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 14:55:14 -08:00
Florian Fainelli
cc0c207a5d net: Remove SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS_SUPPORT
Now that we have converted the bridge code and the drivers to check for
bridge port(s) flags at the time we try to set them, there is no need
for a get() -> set() sequence anymore and
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS_SUPPORT therefore becomes unused.

Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 14:55:14 -08:00
Florian Fainelli
93700458ff rocker: Check Handle PORT_PRE_BRIDGE_FLAGS
In preparation for getting rid of switchdev_port_attr_get(), have rocker
check for the bridge flags being set through switchdev_port_attr_set()
with the SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS attribute identifier.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 14:55:14 -08:00
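
The pre-set handler essentially validates that only offloadable flags are
requested before the bridge applies them; a sketch, with the supported-flag
mask illustrative rather than rocker's exact one:

	/* reject any flag the device cannot offload, before it is applied */
	if (flags & ~(BR_LEARNING | BR_LEARNING_SYNC))
		return -EINVAL;
	return 0;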
Florian Fainelli
c19c44f867 mlxsw: spectrum: Handle PORT_PRE_BRIDGE_FLAGS
In preparation for getting rid of switchdev_port_attr_get(), have mlxsw
check for the bridge flags being set through switchdev_port_attr_set()
when the SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS attribute identifier is
used.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 14:55:13 -08:00
Russell King
4f85901f00 net: dsa: mv88e6xxx: add support for bridge flags
Add support for the bridge flags to Marvell 88e6xxx bridges, allowing
the multicast and unicast flood properties to be controlled.  These
can be controlled on a per-port basis via commands such as:

	bridge link set dev lan1 flood on|off
	bridge link set dev lan1 mcast_flood on|off

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 14:53:07 -08:00
Michal Soltys
3c963a3306 bonding: fix PACKET_ORIGDEV regression
This patch fixes a subtle PACKET_ORIGDEV regression which was a side
effect of fixes introduced by:

6a9e461f6f bonding: pass link-local packets to bonding master also.

... to:

b89f04c61e bonding: deliver link-local packets with skb->dev set to link that packets arrived on

While 6a9e461f6f restored pre-b89f04c61efe presence of link-local
packets on bonding masters (which is required e.g. by linux bridges
participating in spanning tree or needed for lab-like setups created
with group_fwd_mask) it also caused the originating device
information to be lost due to cloning.

Maciej Żenczykowski proposed another solution that doesn't require
packet cloning and retains original device information - instead of
returning RX_HANDLER_PASS for all link-local packets it's now limited
only to packets from inactive slaves.

At the same time, packets passed to bonding masters retain correct
information about the originating device and PACKET_ORIGDEV can be used
to determine it.

This elegantly solves all issues so far:

- link-local packets that were removed from bonding masters
- LLDP daemons being forced to explicitly bind to slave interfaces
- PACKET_ORIGDEV having no effect on bond interfaces

Fixes: 6a9e461f6f (bonding: pass link-local packets to bonding master also.)
Reported-by: Vincent Bernat <vincent@bernat.ch>
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:20:08 -08:00
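
An approximate sketch of the resulting logic in bond_handle_frame(); the
exact helper used to detect an inactive slave is an assumption here:

	if (unlikely(is_link_local_ether_addr(eth_hdr(skb)->h_dest))) {
		/* only inactive slaves keep link-local packets to themselves */
		if (!bond_is_active_slave(slave))
			return RX_HANDLER_PASS;
		/* active slave: fall through, skb->dev becomes the bond while
		 * PACKET_ORIGDEV still reports the originating slave */
	}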
Hangbin Liu
ad49bc6361 net: vrf: remove MTU limits for vrf device
Similar to commit e94cd8113c ("net: remove MTU limits for dummy and
ifb device"), MTU is irrelevant for a VRF device. We initialize it to 64K,
so limiting it to [68, 1500] may confuse users.

Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:10:08 -08:00
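
The fix presumably mirrors the dummy/ifb change it references; a sketch of
the relevant lines in the device setup path (placement illustrative):

	/* min_mtu/max_mtu of 0 make the core skip the MTU range check
	 * entirely, so any MTU can be configured on the vrf device */
	dev->min_mtu = 0;
	dev->max_mtu = 0;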
Heiner Kallweit
6b4cb6cb13 net: phy: marvell10g: use genphy_c45_check_and_restart_aneg in mv3310_config_aneg
Use new function genphy_c45_check_and_restart_aneg() to reduce
boilerplate code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:03:06 -08:00
Heiner Kallweit
1af9f16840 net: phy: add genphy_c45_check_and_restart_aneg
This function will be used by config_aneg callback implementations of PHY
drivers and helps reduce boilerplate code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:03:06 -08:00
Heiner Kallweit
cc429d5291 net: phy: use genphy_config_eee_advert in genphy_c45_an_config_aneg
As in genphy_config_aneg() for Clause 22 PHYs, we should keep modes that
are known to be broken with EEE from being advertised.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:03:06 -08:00
Heiner Kallweit
cd34499cac net: phy: export genphy_config_eee_advert
We want to use this function in phy-c45.c too, therefore export it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 13:03:06 -08:00
Heiner Kallweit
51f9f234da net: phy: don't use 10BaseT/half as default in genphy_read_status
If the link partner and we can't agree on any mode, then it doesn't make
sense to pretend we agreed on 10/half. Therefore set a proper default.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 12:57:25 -08:00
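
Presumably the "proper default" is the explicit unknown state; a sketch of
what that looks like in a genphy-style read_status path (an assumption,
not quoted from the patch):

	/* a more honest default than 10/half when no common mode resolves */
	phydev->speed = SPEED_UNKNOWN;
	phydev->duplex = DUPLEX_UNKNOWN;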
Heiner Kallweit
40d5432cd5 net: phy: remove orphaned register read in genphy_read_status
After the recent changes to genphy_read_status(), this orphaned register
read remained as a leftover. Remove it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 12:57:25 -08:00
Jason Gunthorpe
815f748037 Merge branch 'mlx5-next' into rdma.git for-next
From
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

To resolve conflicts with net-next and pick up the first patch.

* branch 'mlx5-next':
  net/mlx5: Factor out HCA capabilities functions
  IB/mlx5: Add support for 50Gbps per lane link modes
  net/mlx5: Add support to ext_* fields introduced in Port Type and Speed register
  net/mlx5: Add new fields to Port Type and Speed register
  net/mlx5: Refactor queries to speed fields in Port Type and Speed register
  net/mlx5: E-Switch, Avoid magic numbers when initializing offloads mode
  net/mlx5: Relocate vport macros to the vport header file
  net/mlx5: E-Switch, Normalize the name of uplink vport number
  net/mlx5: Provide an alternative VF upper bound for ECPF
  net/mlx5: Add host params change event
  net/mlx5: Add query host params command
  net/mlx5: Update enable HCA dependency
  net/mlx5: Introduce Mellanox SmartNIC and modify page management logic
  IB/mlx5: Use unified register/load function for uplink and VF vports
  net/mlx5: Use consistent vport num argument type
  net/mlx5: Use void pointer as the type in address_of macro
  net/mlx5: Align ODP capability function with netdev coding style
  mlx5: use RCU lock in mlx5_eq_cq_get()

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-02-21 12:40:18 -07:00
Jan Sokolowski
c685c69fba ixgbe: don't do any AF_XDP zero-copy transmit if netif is not OK
An issue has been found while testing zero-copy XDP that causes a reset to
be triggered. Since it takes some time to turn the carrier on after
enabling zero-copy, and we already start trying to transmit packets, the
watchdog considers this an erroneous state and triggers a reset.

Don't do any work if netif carrier is not OK.

Fixes: 8221c5eba8 (ixgbe: add AF_XDP zero-copy Tx support)
Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-02-21 11:11:25 -08:00
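
A sketch of the guard; the exact function it lands in and the return value
are illustrative:

	/* skip AF_XDP zero-copy transmit work while the carrier is still
	 * coming up, so the watchdog has nothing to complain about */
	if (!netif_carrier_ok(xdp_ring->netdev))
		return true;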
Björn Töpel
59eb2a884f i40e: fix XDP_REDIRECT/XDP xmit ring cleanup race
When the driver clears the XDP xmit ring due to re-configuration or
teardown, in-progress ndo_xdp_xmit must be taken into consideration.

The ndo_xdp_xmit function is typically called from a NAPI context that
the driver does not control. Therefore, we must be careful not to clear
the XDP ring while the call is ongoing. This patch adds a
synchronize_rcu() to wait for NAPIs (preempt-disable regions and
softirqs) prior to clearing the queue. Further, the __I40E_CONFIG_BUSY
flag is checked in the ndo_xdp_xmit implementation to avoid touching the
XDP xmit queue during re-configuration.

Fixes: d9314c474d ("i40e: add support for XDP_REDIRECT")
Fixes: 123cecd427 ("i40e: added queue pair disable/enable functions")
Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-02-21 11:07:49 -08:00
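
Sketching the two halves of the fix; the exact error code returned while
busy is an assumption:

	/* teardown/reconfig path: wait out any in-flight ndo_xdp_xmit,
	 * which runs in softirq / preempt-disabled context */
	synchronize_rcu();

	/* i40e_xdp_xmit(): bail out while a reconfiguration owns the rings */
	if (test_bit(__I40E_CONFIG_BUSY, pf->state))
		return -ENXIO;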
Magnus Karlsson
4a9b32f30f ixgbe: fix potential RX buffer starvation for AF_XDP
When the RX rings are created they are also populated with buffers so
that packets can be received. Usually these are kernel buffers, but
for AF_XDP in zero-copy mode, these are user-space buffers and in this
case the application might not have sent down any buffers to the
driver at this point. And if no buffers are allocated at ring creation
time, no packets can be received and no interrupts will be generated so
the NAPI poll function that allocates buffers to the rings will never
get executed.

To rectify this, we kick the NAPI context of any queue with an attached
AF_XDP zero-copy socket in two places in the code: once after an XDP
program has been loaded, and once after the umem is registered. This takes
care of both cases: the XDP program gets loaded first and then the AF_XDP
socket is created, or the reverse, the AF_XDP socket is created first and
then the XDP program is loaded.

Fixes: d0bcacd0a1 ("ixgbe: add AF_XDP zero-copy Rx support")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-02-21 11:02:47 -08:00
Magnus Karlsson
14ffeb52f3 i40e: fix potential RX buffer starvation for AF_XDP
When the RX rings are created they are also populated with buffers
so that packets can be received. Usually these are kernel buffers,
but for AF_XDP in zero-copy mode, these are user-space buffers and
in this case the application might not have sent down any buffers
to the driver at this point. And if no buffers are allocated at ring
creation time, no packets can be received and no interrupts will be
generated so the NAPI poll function that allocates buffers to the
rings will never get executed.

To rectify this, we kick the NAPI context of any queue with an attached
AF_XDP zero-copy socket in two places in the code: once after an XDP
program has been loaded, and once after the umem is registered. This takes
care of both cases: the XDP program gets loaded first and then the AF_XDP
socket is created, or the reverse, the AF_XDP socket is created first and
then the XDP program is loaded.

Fixes: 0a714186d3 ("i40e: add AF_XDP zero-copy Rx support")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-02-21 10:56:17 -08:00
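
Both of the buffer-starvation fixes above boil down to the same kick; a
sketch, with the zero-copy check helper being a hypothetical name:

	/* run the queue's NAPI poll once so it can post the user-space
	 * buffers that are now available */
	if (ring_uses_af_xdp_zc(rx_ring))
		napi_schedule(&rx_ring->q_vector->napi);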
Sudarsana Reddy Kalluru
0ebcebbef1 qed: Read device port count from the shmem
Read the port count from the shared memory instead of having the driver
derive this value. This change simplifies the driver implementation and
also avoids any dependencies in finding the port count.

Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-21 10:51:08 -08:00
Jeff Kirsher
156a67a906 ixgbe: fix older devices that do not support IXGBE_MRQC_L3L4TXSWEN
Enabling L3/L4 filtering for transmit-switched packets on all devices
caused an unforeseen issue on older devices when trying to send UDP traffic
in an ordered sequence. This bit was originally intended for X550 devices,
which support this feature, so limit the scope of this bit to X550 devices
only.

Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
2019-02-21 10:49:00 -08:00
Grygorii Strashko
dba235fa70 net: ethernet: ti: cpsw: deprecate cpsw-phy-sel driver
Deprecate the cpsw-phy-sel driver, as it has been replaced by the new TI
phy-gmii-sel PHY driver.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2019-02-21 07:33:51 -08:00
Vishal Kulkarni
64ccfd2dbb cxgb4: Mask out interrupts that are not enabled.
There are rare cases where a PL_INT_CAUSE bit may end up getting
set when the corresponding PL_INT_ENABLE bit isn't set.

Signed-off-by: Vishal Kulkarni <vishal@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-20 20:26:17 -08:00