Add support for the Qualcomm Technologies, Inc. EMAC gigabit Ethernet
controller.
This driver supports the following features:
1) Checksum offload.
2) Interrupt coalescing support.
3) SGMII PHY.
4) phylib interface for an external PHY.
Based on original work by
Niranjana Vishwanathapura <nvishwan@codeaurora.org>
Gilad Avidov <gavidov@codeaurora.org>
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Access the priv member of the dsa_switch structure directly, instead of
having an unnecessary helper.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a PCI error is detected, some architectures (like PowerPC) perform a
slot reset - the driver's error handlers are in charge of disabling the
device before the reset and re-enabling it after a successful slot reset.
There are two cases, though, in which another code path is taken: if the
slot reset is not successful, or if too many errors have already happened
on the specific adapter (meaning that the device is possibly experiencing
a HW failure that a slot reset cannot solve), the core PCI error mechanism
(called EEH on PowerPC) will remove the adapter from the system, since it
considers this a permanent failure of the device. In this case, a path is
taken that leads to bnx2x_chip_cleanup() calling bnx2x_reset_hw(), which
then tries to perform a HW reset of the chip. This reset won't succeed
since the HW is in a fault state, which shows up as multiple messages in
the kernel log like the ones below:
bnx2x: [bnx2x_issue_dmae_with_comp:552(eth1)]DMAE timeout!
bnx2x: [bnx2x_write_dmae:600(eth1)]DMAE returned failure -1
After some time, the PCI error mechanism gives up waiting for the driver's
removal procedure and forcibly removes the adapter from the system. We can
see soft lockups while the core PCI error mechanism is waiting for the
driver to complete the removal process.
This patch adds a check to avoid the chip reset whenever the function is
in the PCI error state - since this case is only reached when a device is
being removed because of a permanent failure, the HW chip reset is neither
expected to work nor necessary.
Also, as a minor improvement in the error path, we avoid the MCP
information dump in case of a non-recoverable PCI error (when the adapter
is about to be removed), since it would certainly fail.
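For illustration, a hedged sketch of this kind of guard using the generic
pci_channel_offline() helper (the surrounding call site is assumed, not the
exact hunk from this patch):

	/* Illustrative only: skip the HW reset when the PCI channel is in
	 * an error state (the EEH/permanent-failure case described above).
	 * pci_channel_offline() is the generic PCI helper; bp->pdev is the
	 * adapter's struct pci_dev.
	 */
	if (pci_channel_offline(bp->pdev)) {
		/* The device is being removed after a permanent failure;
		 * a chip reset (and the MCP dump) would only time out.
		 */
		return;
	}
	/* ... otherwise proceed with the normal bnx2x_reset_hw() sequence ... */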
Reported-by: Harsha Thyagaraja <hathyaga@in.ibm.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Acked-By: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
fdb dumps spanning multiple skbs currently restart from the first
interface again for every skb. This results in unnecessary
iterations over the already-visited interfaces and their fdb
entries. In large-scale setups, we have seen this slow
down fdb dumps considerably. On a system with 30k MACs we
see fdb dumps spanning more than 300 skbs.
To fix the problem, this patch replaces the existing single fdb
marker with three markers: netdev hash entries, netdevs and fdb
index, to continue where we left off instead of restarting from the
first netdev. This is consistent with link dumps.
In the process of fixing the performance issue, this patch also
re-implements the fix done by
commit 472681d57a ("net: ndo_fdb_dump should report -EMSGSIZE to rtnl_fdb_dump")
(with an internal fix from Wilson Kok) in the following ways:
- change ndo_fdb_dump handlers to return an error code instead
of the last fdb index
- use cb->args strictly for dump frag markers and not error codes.
This is consistent with other dump functions (see the sketch below).
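For illustration, a hedged sketch of an ndo_fdb_dump handler following the
new convention (error code as the return value, *idx advanced by the
handler, cb->args owned by the core dump code); the my_* names are
hypothetical, not from this patch:

	static int my_ndo_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
				   struct net_device *dev,
				   struct net_device *filter_dev, int *idx)
	{
		struct my_fdb_entry *f;
		int err = 0;

		list_for_each_entry(f, &my_fdb_list(dev), list) {
			if (*idx < cb->args[2])		/* dumped in a previous skb */
				goto skip;
			err = my_fdb_fill_info(skb, f);	/* hypothetical fill helper */
			if (err == -EMSGSIZE)
				break;	/* core saves *idx and resumes from here */
	skip:
			(*idx)++;
		}
		return err;
	}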
The results below were taken on a system with 1000 netdevs
and 35085 fdb entries:
before patch:
$time bridge fdb show | wc -l
15065
real 1m11.791s
user 0m0.070s
sys 1m8.395s
(existing code does not return all macs)
after patch:
$time bridge fdb show | wc -l
35085
real 0m2.017s
user 0m0.113s
sys 0m1.942s
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Wilson Kok <wkok@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The workqueue "pegasus_workqueue" queues a single work item per pegasus
instance and hence it doesn't require execution ordering. Hence,
alloc_workqueue has been used to replace the deprecated
create_singlethread_workqueue instance.
The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
memory pressure since it's a network driver.
Since there are fixed number of work items, explicit concurrency
limit is unnecessary here.
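A minimal sketch of the conversion (the call site is illustrative;
alloc_workqueue() and WQ_MEM_RECLAIM are the real API):

	/* Before (deprecated): */
	pegasus_workqueue = create_singlethread_workqueue("pegasus");

	/* After: no ordering requirement, so a regular workqueue is enough.
	 * WQ_MEM_RECLAIM guarantees forward progress under memory pressure
	 * and max_active = 0 selects the default concurrency limit.
	 */
	pegasus_workqueue = alloc_workqueue("pegasus", WQ_MEM_RECLAIM, 0);
	if (!pegasus_workqueue)
		return -ENOMEM;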
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
alloc_ordered_workqueue() with WQ_MEM_RECLAIM set replaces the
deprecated create_singlethread_workqueue(). This is an identity
conversion.
The workqueue "wq" queues multiple work items, viz.
&bond->mcast_work, &nnw->work, &bond->mii_work, &bond->arp_work,
&bond->alb_work, &bond->ad_work and &bond->slave_arr_work,
which require strict execution ordering. Hence, an ordered dedicated
workqueue has been used.
Since it is a network driver, WQ_MEM_RECLAIM has been set to
ensure forward progress under memory pressure.
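A hedged sketch of the identity conversion (bond->wq and bond_dev->name
are bonding driver names; treat the exact call site as illustrative):

	/* Before (deprecated): */
	bond->wq = create_singlethread_workqueue(bond_dev->name);

	/* After: an ordered workqueue preserves the strict execution
	 * ordering of the work items listed above, and WQ_MEM_RECLAIM
	 * keeps the queue making progress under memory pressure.
	 */
	bond->wq = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, bond_dev->name);
	if (!bond->wq)
		return -ENOMEM;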
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
On ThunderX 88xx pass 2.x chips, when TSO is offloaded to HW,
the HW posts a CQE for every TSO segment transmitted. The current code
does handle this, but is prone to issues when segment sizes are
small, resulting in SW processing too many CQEs and at times
freeing an SKB which has not yet been transmitted.
This patch handles the errata in a different way and eliminates the issues
of the earlier approach: the TSO packet is submitted to HW with post_cqe=0,
so that no CQE is posted upon completion of transmission of the TSO packet,
but additional HDR + IMMEDIATE descriptors are added to the SQ, due to
which a CQE is posted and will carry the info required for
cleanup in napi. This way only one CQE is posted per TSO packet.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is an issue in HW wherein, while sending GSO-sized packets
as part of TSO, if the packet length falls below the configured minimum
packet size (i.e. 60), the NIC will zero-pad the packet and also update
the IP total length. Hence set this value to less than the minimum packet
size of MAC + IP + TCP headers; BGX will anyway do the padding to transmit
a 64-byte packet including FCS.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If DCB is configured on the link partner switch with an
unsupported traffic class configuration (e.g. non-contiguous TCs),
the driver flags DCB as disabled. But for subsequent DCB
LLDPDUs, the driver was checking whether the interface was DCB capable
instead of enabled. This was causing a kernel panic when LLDP
was enabled/disabled on the link partner switch.
This patch corrects the situation by having the LLDP event handler
check the correct flag in the pf structure. It also cleans up the
setting and clearing of the enabled flag for other checks.
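Illustratively, the kind of guard the event handler should apply (the flag
names are the i40e pf flags this fix is about; the surrounding code is a
sketch, not the driver hunk):

	/* Bail out when DCB has been flagged as disabled, rather than
	 * merely checking capability.
	 */
	if (!(pf->flags & I40E_FLAG_DCB_ENABLED))	/* was: I40E_FLAG_DCB_CAPABLE */
		return;		/* ignore LLDPDUs while DCB is disabled */
	/* ... otherwise process the DCBX configuration change ... */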
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kalle Valo says:
====================
wireless-drivers fixes for 4.8
ath9k
* fix regression in client mode beacon configuration
* fix a station pointer which resulted in spurious crashes
mwifiex
* fix large amsdu packets causing firmware hang
brcmfmac
* fix deadlock when removing interface
* fix use of mutex in atomic context
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the sky2 driver to pass the number of packets done to NAPI.
The driver was never updated when napi_complete_done() was added.
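For context, a hedged sketch of the napi_complete_done() pattern in a poll
routine (generic skeleton, not the sky2 code itself; example_clean_rx is a
hypothetical helper):

	static int example_poll(struct napi_struct *napi, int budget)
	{
		/* Process up to 'budget' packets and count them. */
		int work_done = example_clean_rx(napi, budget);

		if (work_done < budget)
			/* Report the work done instead of plain napi_complete(). */
			napi_complete_done(napi, work_done);

		return work_done;
	}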
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A bugfix for backward compatibility handling introduced undefined
behavior for the case where of_parse_phandle() does not return
a valid entry, as "gcc -Wmaybe-uninitialized" reports:
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_phy_connect':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_mdio_config':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
We can work around this by removing the check for zero "np", as
of_phy_connect() will correctly handle a NULL argument so we fall
back into the normal error handling case.
Note that I had previously fixed another bug that resulted in the
exact same warning, but this is a different problem that was
introduced after my original fix.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 03377e381b ("drivers: net: xgene: Fix backward compatibility")
Signed-off-by: David S. Miller <davem@davemloft.net>
The recent commit 087d7a8c91 ("tg3: Fix for diasllow rx coalescing
time to be 0") disallows setting the Rx coalescing time to 0, as this stops
generating interrupts for incoming packets. I found that a zero
Tx coalescing time stops generating interrupts for outgoing packets
as well and fires the Tx watchdog later. To avoid this, don't allow setting
the Tx coalescing time to 0 and also remove the subsequent checks that
become redundant.
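A hedged sketch of the added validation in an ethtool set_coalesce handler
(the exact bounds checking in tg3 is more elaborate; this only illustrates
the zero check):

	static int example_set_coalesce(struct net_device *dev,
					struct ethtool_coalesce *ec)
	{
		/* Zero would stop Rx/Tx interrupts entirely. */
		if (!ec->rx_coalesce_usecs || !ec->tx_coalesce_usecs)
			return -EINVAL;

		/* ... apply the remaining coalescing parameters ... */
		return 0;
	}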
Cc: satish.baddipadige@broadcom.com
Cc: siva.kallam@broadcom.com
Cc: michael.chan@broadcom.com
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Acked-by: Siva Reddy Kallam <siva.kallam@broadcom.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check the coding style with checkpatch.pl and fix the warnings and errors.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/net/ethernet/qlogic/qed/qed_dcbx.c:1230:13-20: WARNING: kzalloc should be used for dcbx_info, instead of kmalloc/memset
drivers/net/ethernet/qlogic/qed/qed_dcbx.c:1192:13-20: WARNING: kzalloc should be used for dcbx_info, instead of kmalloc/memset
Use kzalloc rather than kmalloc followed by memset with 0.
This considers some simple cases that are common and easy to validate.
Note in particular that there are no ...s in the rule, so all of the
matched code has to be contiguous.
Generated by: scripts/coccinelle/api/alloc/kzalloc-simple.cocci
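The transformation, shown generically on the dcbx_info allocation (treat
the snippet as a sketch of the pattern, not the exact driver hunk):

	/* Before: allocate, then clear in a second step. */
	dcbx_info = kmalloc(sizeof(*dcbx_info), GFP_KERNEL);
	if (!dcbx_info)
		return NULL;
	memset(dcbx_info, 0, sizeof(*dcbx_info));

	/* After: kzalloc() returns already-zeroed memory. */
	dcbx_info = kzalloc(sizeof(*dcbx_info), GFP_KERNEL);
	if (!dcbx_info)
		return NULL;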
CC: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a VLAN is added on a bridge port we should use the existing unicast
flood configuration of the port instead of assuming it's enabled.
Fixes: 0293038e0c ("mlxsw: spectrum: Add support for flood control")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit 14d39461b3 ("mlxsw: spectrum: Use per-FID struct for the
VLAN-aware bridge") I added a per-FID struct, which member ports can
take a reference on upon VLAN membership configuration.
However, sometimes only the VLAN flags (e.g. egress untagged) are
toggled without changing the VLAN membership. In these cases we
shouldn't take another reference on the FID.
Fixes: 14d39461b3 ("mlxsw: spectrum: Use per-FID struct for the VLAN-aware bridge")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently the notifier is registered for every ASIC instance, however the
same notifier block is registered each time. Fix this by moving the
registration to module init.
Fixes: c723c735fa ("mlxsw: spectrum_router: Periodically update the kernel's neigh table")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Originally, I expected that it would be necessary to call the update
operation in case the RALUE record action is changed. However, that is not
needed since the write operation takes care of that nicely. Remove the
prepared construct and always call the write operation.
Fixes: 61c503f976 ("mlxsw: spectrum_router: Implement fib4 add/del switchdev obj ops")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In mlxsw we squash tables 254 and 255 together in HW. The kernel adds/dels
a /32 IP to/from both 254 and 255. On the del path, that causes the same
prefix to be removed twice. Fix this by introducing reference counting
for private mlxsw fib entries. That required a bit of code reshuffling.
Also put the dev into the fib entry key so the same prefix can be
represented once per router interface.
Fixes: 61c503f976 ("mlxsw: spectrum_router: Implement fib4 add/del switchdev obj ops")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Grant Grundler <grundler@chromium.org>
mii_nway_restart() causes PHY link change activity and
ax88772_link_reset() will be called. link_reset() will set the
AX_CMD_WRITE_MEDIUM_MODE register correctly.
The asix_write_medium_mode() call in reset() fills the register with a
default value which may differ from the negotiation result. So do this
first.
Ignore the return value since it's ignored in the XXX_link_reset()
functions.
Signed-off-by: Grant Grundler <grundler@google.com>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Allan Chou <allan@asix.com.tw>
The change fixes AX88772x resume failures by
- restoring incorrect AX88772A PHY registers when resetting
- stopping MAC operation when suspending
- restarting MII when restoring the PHY
Signed-off-by: Allan Chou <allan@asix.com.tw>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Freddy Xin <freddy@asix.com.tw>
In order to read/write registers in the suspend/resume functions, in_pm
flags are added to some functions to determine whether the nopm version of
the usb functions should be called.
Save BMCR and ANAR PHY registers in suspend function and restore them
in resume function.
Reset HW in resume function to ensure the PHY works correctly.
Signed-off-by: Freddy Xin <freddy@asix.com.tw>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch takes care of clearing uninitialized buffers before using them.
1. The pfc pri-enable bitmap needs to be cleared before setting the
requested enable bits. Without this, the untouched values will be merged
with the requested values and sent to the MFW.
2. The data in the app-entry field needs to be cleared before using it.
3. Clear the output data buffer used in qed_dcbx_query_params().
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Management firmware requires the selection-field (SF) to be set for
configuring the application/protocol entry in IEEE mode. Without this
setting, the app entry will be configured incorrectly in MFW.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DCBX configuration is not supported for VF interfaces. Hence don't
populate the callbacks for VFs and also fail the DCBX query for VFs.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A lot of parts of the driver use devm_* APIs to gain the benefits of
device resource management, so devm_mdiobus_alloc is also used instead
of mdiobus_alloc for a more elegant code flow (see the sketch below).
Using the common code provided by devm_* helps to
1) simplify the code flow, as [1] says
2) decrease the risk of incorrect error handling
3) promote its use, since only a few drivers have adopted it since it was
introduced in Linux 3.16
Ref:
[1] https://patchwork.ozlabs.org/patch/344093/
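A minimal sketch of the difference (generic, not the exact driver hunk;
the eth->dev / eth->mii_bus fields are assumed):

	/* Managed allocation: the mii_bus is freed automatically when the
	 * device is unbound, so error and remove paths no longer need an
	 * explicit mdiobus_free().
	 */
	eth->mii_bus = devm_mdiobus_alloc(eth->dev);
	if (!eth->mii_bus)
		return -ENOMEM;
	/* ... set bus->name, bus->read, bus->write, then mdiobus_register() ... */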
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the missing of_node_put() after finishing the usage
of of_get_child_by_name().
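The generic pattern, for illustration (the child node name is assumed here,
not taken from the driver hunk):

	/* of_get_child_by_name() returns a node with an elevated refcount,
	 * so drop the reference once we are done with it.
	 */
	struct device_node *mii_np = of_get_child_by_name(dev->of_node, "mdio-bus");

	if (mii_np) {
		/* ... use mii_np, e.g. register an MDIO bus on it ... */
		of_node_put(mii_np);
	}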
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Acked-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
mtk_stop() must be called first on module removal, to free the DMA
resources acquired and to restore the state changed by mtk_open().
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The original mdio_cleanup call is not placed symmetrically to where
mdio_init is, so relocate mdio_cleanup to the right place.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Acked-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
These IRQs are not shared and are disabled when the ethernet device stops.
An IRQ requested with devm_request_irq is safe to be freed automatically
on driver detach.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Acked-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
1) If the return value of devm_clk_get is -EPROBE_DEFER, we should
defer probing the driver. The change is verified and works based
on 4.8-rc1 together with the latest clk-next code for MT7623.
2) Use a loop to work out whether all the required clocks are
available (see the sketch below).
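A hedged sketch of the loop in 2) with the probe deferral handling from 1)
(the clock-name array and the eth->clks storage are assumptions, not the
driver's actual fields):

	static const char * const example_clk_names[] = { "ethif", "esw", "gp1", "gp2" };

	static int example_get_clks(struct mtk_eth *eth)
	{
		int i;

		for (i = 0; i < ARRAY_SIZE(example_clk_names); i++) {
			struct clk *clk = devm_clk_get(eth->dev, example_clk_names[i]);

			if (IS_ERR(clk))
				/* -EPROBE_DEFER (and other errors) propagate
				 * back to probe().
				 */
				return PTR_ERR(clk);
			eth->clks[i] = clk;
		}
		return 0;
	}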
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Which net device the SKB is completed for depends on the forward port
in txd4 of the corresponding TX descriptor, but this information isn't
set up properly in the case of SKB fragments, which would lead to watchdog
timeouts from the upper layer, so fix it up.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Acked-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
In case of misconfiguration, a virtual PPP channel might send packets
back to their parent PPP interface. This typically happens in
misconfigured L2TP setups, where PPP's peer IP address is set with the
IP of the L2TP peer.
When that happens the system hangs due to PPP trying to recursively
lock its xmit path.
[ 243.332155] BUG: spinlock recursion on CPU#1, accel-pppd/926
[ 243.333272] lock: 0xffff880033d90f18, .magic: dead4ead, .owner: accel-pppd/926, .owner_cpu: 1
[ 243.334859] CPU: 1 PID: 926 Comm: accel-pppd Not tainted 4.8.0-rc2 #1
[ 243.336010] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 243.336018] ffff7fffffffffff ffff8800319a77a0 ffffffff8128de85 ffff880033d90f18
[ 243.336018] ffff880033ad8000 ffff8800319a77d8 ffffffff810ad7c0 ffffffff0000039e
[ 243.336018] ffff880033d90f18 ffff880033d90f60 ffff880033d90f18 ffff880033d90f28
[ 243.336018] Call Trace:
[ 243.336018] [<ffffffff8128de85>] dump_stack+0x4f/0x65
[ 243.336018] [<ffffffff810ad7c0>] spin_dump+0xe1/0xeb
[ 243.336018] [<ffffffff810ad7f0>] spin_bug+0x26/0x28
[ 243.336018] [<ffffffff810ad8b9>] do_raw_spin_lock+0x5c/0x160
[ 243.336018] [<ffffffff815522aa>] _raw_spin_lock_bh+0x35/0x3c
[ 243.336018] [<ffffffffa01a88e2>] ? ppp_push+0xa7/0x82d [ppp_generic]
[ 243.336018] [<ffffffffa01a88e2>] ppp_push+0xa7/0x82d [ppp_generic]
[ 243.336018] [<ffffffff810adada>] ? do_raw_spin_unlock+0xc2/0xcc
[ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7
[ 243.336018] [<ffffffff81552438>] ? _raw_spin_unlock_irqrestore+0x34/0x49
[ 243.336018] [<ffffffffa01ac657>] ppp_xmit_process+0x48/0x877 [ppp_generic]
[ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7
[ 243.336018] [<ffffffff81408cd3>] ? skb_queue_tail+0x71/0x7c
[ 243.336018] [<ffffffffa01ad1c5>] ppp_start_xmit+0x21b/0x22a [ppp_generic]
[ 243.336018] [<ffffffff81426af1>] dev_hard_start_xmit+0x15e/0x32c
[ 243.336018] [<ffffffff81454ed7>] sch_direct_xmit+0xd6/0x221
[ 243.336018] [<ffffffff814273a8>] __dev_queue_xmit+0x52a/0x820
[ 243.336018] [<ffffffff814276a9>] dev_queue_xmit+0xb/0xd
[ 243.336018] [<ffffffff81430a3c>] neigh_direct_output+0xc/0xe
[ 243.336018] [<ffffffff8146b5d7>] ip_finish_output2+0x4d2/0x548
[ 243.336018] [<ffffffff8146a8e6>] ? dst_mtu+0x29/0x2e
[ 243.336018] [<ffffffff8146d49c>] ip_finish_output+0x152/0x15e
[ 243.336018] [<ffffffff8146df84>] ? ip_output+0x74/0x96
[ 243.336018] [<ffffffff8146df9c>] ip_output+0x8c/0x96
[ 243.336018] [<ffffffff8146d55e>] ip_local_out+0x41/0x4a
[ 243.336018] [<ffffffff8146dd15>] ip_queue_xmit+0x531/0x5c5
[ 243.336018] [<ffffffff814a82cd>] ? udp_set_csum+0x207/0x21e
[ 243.336018] [<ffffffffa01f2f04>] l2tp_xmit_skb+0x582/0x5d7 [l2tp_core]
[ 243.336018] [<ffffffffa01ea458>] pppol2tp_xmit+0x1eb/0x257 [l2tp_ppp]
[ 243.336018] [<ffffffffa01acf17>] ppp_channel_push+0x91/0x102 [ppp_generic]
[ 243.336018] [<ffffffffa01ad2d8>] ppp_write+0x104/0x11c [ppp_generic]
[ 243.336018] [<ffffffff811a3c1e>] __vfs_write+0x56/0x120
[ 243.336018] [<ffffffff81239801>] ? fsnotify_perm+0x27/0x95
[ 243.336018] [<ffffffff8123ab01>] ? security_file_permission+0x4d/0x54
[ 243.336018] [<ffffffff811a4ca4>] vfs_write+0xbd/0x11b
[ 243.336018] [<ffffffff811a5a0a>] SyS_write+0x5e/0x96
[ 243.336018] [<ffffffff81552a1b>] entry_SYSCALL_64_fastpath+0x13/0x94
The main entry points for sending packets over a PPP unit are the
.write() and .ndo_start_xmit() callbacks (simplified view):
.write(unit fd) or .ndo_start_xmit()
\
CALL ppp_xmit_process()
\
LOCK unit's xmit path (ppp->wlock)
|
CALL ppp_push()
\
LOCK channel's xmit path (chan->downl)
|
CALL lower layer's .start_xmit() callback
\
... might recursively call .ndo_start_xmit() ...
/
RETURN from .start_xmit()
|
UNLOCK channel's xmit path
/
RETURN from ppp_push()
|
UNLOCK unit's xmit path
/
RETURN from ppp_xmit_process()
Packets can also be directly sent on channels (e.g. LCP packets):
.write(channel fd) or ppp_output_wakeup()
\
CALL ppp_channel_push()
\
LOCK channel's xmit path (chan->downl)
|
CALL lower layer's .start_xmit() callback
\
... might call .ndo_start_xmit() ...
/
RETURN from .start_xmit()
|
UNLOCK channel's xmit path
/
RETURN from ppp_channel_push()
Key points about the lower layer's .start_xmit() callback:
* It can be called directly by a channel fd .write() or by
ppp_output_wakeup() or indirectly by a unit fd .write() or by
.ndo_start_xmit().
* In any case, it's always called with chan->downl held.
* It might route the packet back to its parent unit using
.ndo_start_xmit() as entry point.
This patch detects and breaks recursion in ppp_xmit_process(). This
function is a good candidate for the task because it's called early
enough after .ndo_start_xmit(), it's always part of the recursion
loop and it's on the path of whatever entry point is used to send
a packet on a PPP unit.
Recursion detection is done using the per-cpu ppp_xmit_recursion
variable.
Since ppp_channel_push() also locks the channel's xmit path and calls
the lower layer's .start_xmit() callback, we need to increment
ppp_xmit_recursion there as well. However, there's no need to check for
recursion there, as it's outside the recursion loop.
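A hedged sketch of the per-cpu recursion guard pattern described above
(simplified; the real patch's function split and error handling may
differ):

	static DEFINE_PER_CPU(int, ppp_xmit_recursion);

	static void ppp_xmit_process(struct ppp *ppp)
	{
		local_bh_disable();	/* keep the per-cpu counter stable */

		if (unlikely(__this_cpu_read(ppp_xmit_recursion))) {
			local_bh_enable();
			/* Recursion detected: bail out rather than deadlock
			 * on the unit's xmit lock.
			 */
			return;
		}

		__this_cpu_inc(ppp_xmit_recursion);
		__ppp_xmit_process(ppp);	/* original xmit body */
		__this_cpu_dec(ppp_xmit_recursion);

		local_bh_enable();
	}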
Reported-by: Feng Gao <gfree.wind@gmail.com>
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Casting away const is bad practice. Since this is an ARM-specific driver
and I don't have the hardware, I haven't actually tested this.
Having getter functions for ops is really unnecessary code bloat, but I'm
not going to touch that.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the MDB operations. This consists of
loading/purging/dumping multicast addresses for a given port in the ATU.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The MDB support for the mv88e6xxx driver will be very similar to the FDB
support, since it consists of loading/purging/dumping addresses to/from
the Address Translation Unit (ATU).
Prepare the MDB support by making the FDB code that accesses the ATU
generic. The FDB operations now provide access to the unicast addresses,
while the MDB operations will provide access to the multicast addresses.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>