Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.
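The pattern being fixed looks roughly like this (a sketch only, not
the actual nfp code; names are illustrative):

    rings = kcalloc(num_rings, sizeof(*rings), GFP_KERNEL);
    if (!rings) {
            err = -ENOMEM;  /* previously this path left err at 0 */
            goto err_cleanup;
    }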
Fixes: 73725d9dfd ("nfp: allocate ring SW structs dynamically")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the .owner field when calls are used that set it automatically.
Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci
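For illustration (a generic driver, not the one changed here):
module_platform_driver() / platform_driver_register() fill in .owner,
so setting it by hand is redundant:

    static struct platform_driver foo_driver = {
            .probe  = foo_probe,
            .remove = foo_remove,
            .driver = {
                    .name = "foo",
                    /* no .owner = THIS_MODULE needed; the helper sets it */
            },
    };
    module_platform_driver(foo_driver);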
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver core clears the driver data to NULL after device_release
or on probe failure, so there is no need to clear the device driver
data to NULL manually.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes the following sparse warning:
drivers/net/dsa/bcm_sf2.c:963:19: warning:
symbol 'bcm_sf2_io_ops' was not declared. Should it be static?
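The fix is simply to give the symbol file-local linkage; schematically
(struct type assumed from the driver, initializer elided):

    static const struct b53_io_ops bcm_sf2_io_ops = {
            ...
    };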
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prevent race conditions within ioctl calls.
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an implementation for setting the MAC address and remove the dummy callback.
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The device table is required to load modules based on
modaliases. After adding MODULE_DEVICE_TABLE, an entry such as
the following will be added to modules.alias:
alias of:N*T*Cmediatek,mt7623-ethC* mtk_eth_soc
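A sketch of the addition (the match table already exists in the
driver; its name is shown here for illustration):

    static const struct of_device_id of_mtk_match[] = {
            { .compatible = "mediatek,mt7623-eth" },
            { /* sentinel */ },
    };
    MODULE_DEVICE_TABLE(of, of_mtk_match);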
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If an error occurs in mlx4_init_eq_table, the index used in the
err_out_unmap label is one too big, which results in a panic in
mlx4_free_eq. This patch fixes the index in the error path.
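Schematically, the off-by-one in the error path looks like this
(illustrative code, not the actual mlx4 source):

    for (i = 0; i < nent; i++) {
            err = create_eq(dev, i);
            if (err)
                    goto err_out_unmap;
    }
    ...
    err_out_unmap:
            /* eq[i] was never set up, so start freeing from i - 1 */
            while (--i >= 0)
                    mlx4_free_eq(dev, &priv->eq_table.eq[i]);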
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add protection against the race between the reset process and
hardware accesses happening in the related callbacks.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
struct mtk_eth already contains an ethsys regmap pointer covering the
address range of the internal circuit reset, so reuse it to reset more
internal blocks of the ethernet hardware, such as the packet processing
engine (PPE) and the frame engine (FE), instead of rstc, which deals
with the FE only.
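A sketch of the reuse (register and bit names follow the driver
series and are shown schematically):

    /* assert, then de-assert, the FE and PPE reset bits via ethsys */
    regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL,
                       RSTCTRL_FE | RSTCTRL_PPE,
                       RSTCTRL_FE | RSTCTRL_PPE);
    usleep_range(1000, 1100);
    regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL,
                       RSTCTRL_FE | RSTCTRL_PPE, 0);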
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1) The original driver only resets the DMA used by the descriptor
rings, which cannot guarantee recovery from all the various kinds of
fatal errors. This patch instead resets the underlying hardware
resources the ethernet needs on the MediaTek SoC from scratch,
including power, pin mux control, clocks and the internal circuits of
the ethernet block, in order to restore the initial state a rebooted
machine would provide.
2) Add a state variable inside struct mtk_eth to help distinguish
whether mtk_hw_init is called during boot-time initialization or
during re-initialization in the reset process.
3) Add a ge_mode variable inside struct mtk_mac to save the interface
mode of the current setup for the target MAC so it can be restored.
4) Remove the __init attribute from the mtk_hw_init definition.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce control of the power domain that the digital circuits of the
ethernet belong to into the hardware initialization and deinitialization
flow. This helps the entire ethernet hardware block restart cleanly and
completely, back to the initial state it has after a whole-machine
reboot.
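One way this can be wired up is through runtime PM on the ethernet
device; a sketch under that assumption (not necessarily the exact
calls used by this patch):

    /* power the domain up during hardware init ... */
    pm_runtime_enable(eth->dev);
    pm_runtime_get_sync(eth->dev);
    ...
    /* ... and release it again during deinit */
    pm_runtime_put_sync(eth->dev);
    pm_runtime_disable(eth->dev);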
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clean up the error path inside the mtk_hw_init call so that it can
exit appropriately when something fails, and refactor the mtk_cleanup
call so that part of its logic can be reused on the error path.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Group the deinitialization counterparts of what the mtk_hw_init call
does, so that they can be reused by the reset process and by the error
path handling.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The existing mtk_hw_init includes both hardware and software
initialization, which makes it slightly hard to reuse for the reset
recovery process. Split it so that only the hardware-related
initialization stays there, while the rest, such as IRQ registration
and MDIO initialization, which belongs to the core driver interface,
is moved to a more appropriate place: there is no need to register the
IRQ or re-initialize those structures again during the reset process.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As pointed out by smatch, checking the BAID for just >= INVALID
is a bad idea since only 32 (IWL_MAX_BAID) actually exist. Check
the range for that and print invalid ones in the warning.
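Schematically (not the exact driver code), the check becomes a range
check that also reports the offending value:

    if (WARN(baid >= IWL_MAX_BAID, "invalid BAID %d\n", baid))
            return;         /* ignore frames with an out-of-range BAID */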
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Inside the reorder timer expire function, there's no point in
disabling BHs since it is in BH context. Remove that.
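In other words (sketch; lock name illustrative), the timer callback
already runs in softirq context, so the _bh variants are unnecessary
there:

    spin_lock(&reorder_buf->lock);          /* was spin_lock_bh() */
    ...
    spin_unlock(&reorder_buf->lock);        /* was spin_unlock_bh() */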
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
If the firmware ever decides to send any new/more notifications
to the RSS queues, the driver would currently try to interpret
those as REPLY_RX_MPDU_CMD and, if the notification was small,
access invalid memory.
Prevent that by checking for REPLY_RX_MPDU_CMD explicitly, which
allows ignoring unexpected notifications.
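A sketch of the added guard in the RSS-queue RX path (packet header
layout follows the usual iwlwifi notification header; surrounding
code is illustrative):

    if (unlikely(pkt->hdr.cmd != REPLY_RX_MPDU_CMD))
            return; /* unexpected notification on an RSS queue: ignore */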
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In MSI-X mode the number of IRQs depends on the number of possible
CPUs on the host. This causes a bug when there are offline cores.
Take into account only the online CPUs instead, and save the value in
a temporary variable.
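The gist of the change, as a sketch (variable name illustrative):

    /* base the vector count on CPUs that are actually online */
    num_irqs = num_online_cpus();   /* was num_possible_cpus() */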
Fixes: 2e5d4a8f61 ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The function is deeply indented. Jump to the MSI section with a goto
when needed, to avoid the indentation and make the code more readable.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Add a new config struct for the new 8275 series and add
the first PCI ID for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Add a new config struct for the new 9560 series and add
the 4 new PCI IDs for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In order to utilize the host's CPUs in the most efficient way, bind
each RX interrupt vector to a CPU on the host. Each RX interrupt is
then handled only on its designated CPU rather than on any CPU.
Processor affinity takes advantage of the fact that remnants of a
process that ran on a given processor may remain in that processor's
state (for example, data in the CPU cache) after another process has
run on that CPU. Scheduling the process back onto the same processor
can make execution more efficient by reducing performance-degrading
situations such as cache misses.
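A minimal sketch of binding each RX vector to one CPU (illustrative;
assumes an msix_entries[] array and uses the generic
irq_set_affinity_hint() API):

    /* one persistent mask per vector; the hint keeps a pointer to it */
    static struct cpumask rx_masks[NUM_RX_VECS];

    for (i = 0; i < num_rx_vecs; i++) {
            cpumask_clear(&rx_masks[i]);
            cpumask_set_cpu(i % num_online_cpus(), &rx_masks[i]);
            irq_set_affinity_hint(msix_entries[i].vector, &rx_masks[i]);
    }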
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When a STA is removed in DQA mode, if no traffic went through
its reserved queue, the txq continues to be marked as
reserved and no STA can use it.
Make sure that in such a case the reserved queue is marked
as free when the STA is removed.
Fixes: 24afba7690 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In iwl_mvm_rx_tx_cmd_single(), when checking if a given TID is
aggregated, the driver doesn't check whether or not the queue
itself can be aggregated. For example, a management queue might
be marked as aggregated if TID 0 is aggregated on a (different)
data queue.
Make sure that mgmt frames are sent with TID IWL_TID_NON_QOS, thereby
ensuring that no mix-ups of this sort happen.
Fixes: 24afba7690 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In case the OS provides fewer interrupts than requested, different
causes will share the same interrupt vector as follows:
1. One interrupt less: non-RX causes shared with FBQ.
2. Two interrupts less: non-RX causes shared with FBQ and RSS.
3. More than two interrupts less: use fewer RSS queues.
Also make the request depend on the number of online CPUs
instead of possible CPUs.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Support the new format. The TX response will not be sent anymore, so
all the needed data is in the BA response.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The original intent was to have the general iwl_queue shared between
RX and TX queues, but that is not the actual situation. Since it is
not shared with any struct but iwl_txq, it adds unnecessary
complexity. Merge those structs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Since the TFD was enlarged to 256 bytes, fetching the TFD itself is
very expensive. To make the DRAM-to-SRAM transfer more efficient,
bits 12-13 will indicate the number of 64-byte chunks that should be
transferred to SRAM.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Previous patch introduced the new formats. This patch
allocates the new structures and adjusts code accordingly.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In future HW the byte count table address will be configured by the
ucode per queue. Add an API to expose the byte count table to the
opmode.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Add cxgb_mk_rx_data_ack() to remove duplicate
code to form CPL_RX_DATA_ACK hardware command.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_mk_abort_rpl() to remove duplicate
code to form CPL_ABORT_RPL hardware command.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_mk_abort_req() to remove duplicate code
to form CPL_ABORT_REQ hardware command.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_mk_close_con_req() to remove duplicate
code to form CPL_CLOSE_CON_REQ hardware command.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_mk_tid_release() to remove duplicate code
to form CPL_TID_RELEASE hardware command.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_compute_wscale() in libcxgb_cm.h to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
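The helper amounts to the standard TCP window-scale computation; a
sketch of what the shared inline could look like (exact limits follow
the driver):

    static inline unsigned int cxgb_compute_wscale(unsigned int win)
    {
            unsigned int wscale = 0;

            while (wscale < 14 && (65535 << wscale) < win)
                    wscale++;
            return wscale;
    }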
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_best_mtu() in libcxgb_cm.h to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_is_neg_adv() in libcxgb_cm.h to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_find_route6() in libcxgb_cm.c to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_find_route() in libcxgb_cm.c to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add cxgb_get_4tuple() in libcxgb_cm.c to remove
its duplicate definitions from cxgb4/cm.c and
cxgbit/cxgbit_cm.c.
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit 9ee7b683ea we moved the enablement of MSI interrupts earlier,
into alx_init_intr. If there is an error in alx_alloc_rings, __alx_open
returns with an error but MSI (or MSI-X) interrupts stay enabled. Add a
new error label to disable MSI (or MSI-X) interrupts.
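Schematically (label and helper names are illustrative, not
necessarily the actual alx symbols), the error path gains a label
that undoes what alx_init_intr() enabled:

    err = alx_alloc_rings(alx);
    if (err)
            goto out_disable_adv_intr;
    ...
    out_disable_adv_intr:
            alx_disable_advanced_intr(alx);         /* assumed helper */
            return err;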
Fixes: 9ee7b683ea ("alx: refactor msi enablement and disablement")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The checksum provided by the device doesn't include the L3 headers,
as IPv6 expects.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In a000 devices we have 15 fifos, so in the shared memory
config the number of tx fifos in the array was changed
accordingly.
As it is in the middle of the struct, the parsing code needs
to be duplicated.
To minimize the duplication, do not save variables we never
actually use.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Firmware may lock those registers for access. On 9000 devices this
results in a bus stall and an endless loop of 0x5a5a5a reads.
Don't dump those registers.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>