Introduce a mv88e6xxx_has_stu() helper to protect the access to the
GLOBAL_VTU_SID register, instead of checking switch families.
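A minimal sketch of the idea, assuming the usual pattern of hiding the
family checks behind a capability helper; the family list and the
caller below are illustrative, not the driver's actual code:

    static bool mv88e6xxx_has_stu(struct dsa_switch *ds)
    {
        /* Callers ask "does this switch have an STU?" instead of
         * open-coding family checks at every GLOBAL_VTU_SID access.
         */
        return mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
               mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds);
    }

    /* Hypothetical caller showing how the register access is gated. */
    static int mv88e6xxx_stu_read_sid(struct dsa_switch *ds)
    {
        if (!mv88e6xxx_has_stu(ds))
            return -EOPNOTSUPP;

        return _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_SID);
    }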
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This change makes it so that we can use the ethtool rx-vlan-filter flag
to toggle Rx VLAN filtering on and off. This is essentially an extension
of the existing VLAN promisc work, adding support for the additional
ethtool flag.
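As a rough sketch, the toggle ends up being honoured in ndo_set_features
along these lines; the promisc helpers are named after the existing VLAN
promisc work but are assumptions here, as is the wrapper function:

    static int ixgbe_set_features_sketch(struct net_device *netdev,
                                         netdev_features_t features)
    {
        struct ixgbe_adapter *adapter = netdev_priv(netdev);
        netdev_features_t changed = netdev->features ^ features;

        if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
            if (features & NETIF_F_HW_VLAN_CTAG_FILTER)
                ixgbe_vlan_promisc_disable(adapter);  /* filtering on  */
            else
                ixgbe_vlan_promisc_enable(adapter);   /* filtering off */
        }

        return 0;
    }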
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Added support to match on UDP fields in the transport layer.
Extended the core logic to support multiple headers.
Verified with the following filters:
handle 1: u32 divisor 1
u32 ht 800: order 1 link 1: \
offset at 0 mask 0f00 shift 6 plus 0 eat match ip protocol 6 ff
u32 ht 1: order 2 \
match tcp src 1024 ffff match tcp dst 23 ffff action drop
handle 2: u32 divisor 1
u32 ht 800: order 3 link 2: \
offset at 0 mask 0f00 shift 6 plus 0 eat match ip protocol 17 ff
u32 ht 2: order 4 \
match udp src 1025 ffff match udp dst 24 ffff action drop
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
It is possible on some HW that a system reset could occur while we are
holding the SWFW semaphore lock, so the next time the driver is loaded
it would incorrectly see the semaphore as locked. This patch recovers
from that state by attempting to acquire the semaphore and then,
regardless of whether or not it was acquired, immediately releasing it.
This forces us into a known good state.
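A minimal sketch of the recovery step, assuming it runs early during
init/probe; the wrapper name and the semaphore mask used are
assumptions, not the exact driver code:

    static void ixgbe_swfw_sem_recover(struct ixgbe_hw *hw)
    {
        /* The semaphore may look held because of an earlier reset. */
        hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);

        /* Release regardless of whether the acquire succeeded, which
         * forces the semaphore back into a known good state.
         */
        hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
    }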
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This commit adds a callback which allows adjusting the maximum
transmit bitrate the card can output. This makes it possible to get
smooth traffic instead of the default bursty behaviour when trying to
output e.g. a video stream.
Much of the logic needed to get a correct bcnrc_val was taken from the
ixgbe_set_vf_rate_limit() function.
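Roughly, the rate factor is the link speed divided by the requested
rate in the hardware's fixed-point format, as in
ixgbe_set_vf_rate_limit(). A hedged sketch follows; the wrapper name is
hypothetical and the queue-index selection via RTTDQSEL is omitted:

    static void ixgbe_set_queue_maxrate(struct ixgbe_adapter *adapter,
                                        u32 link_speed, u32 maxrate)
    {
        struct ixgbe_hw *hw = &adapter->hw;
        u32 bcnrc_val = 0;

        if (maxrate) {
            /* rate factor = link speed / desired rate, fixed point */
            bcnrc_val = link_speed;
            bcnrc_val <<= IXGBE_RTTBCNRC_RF_INT_SHIFT;
            bcnrc_val /= maxrate;
            bcnrc_val &= IXGBE_RTTBCNRC_RF_INT_MASK |
                         IXGBE_RTTBCNRC_RF_DEC_MASK;
            bcnrc_val |= IXGBE_RTTBCNRC_RS_ENA;
        }

        IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRC, bcnrc_val);
    }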
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The Xeon D KR backplane is different from other backplanes in that we
can't use auto-negotiation to determine the mode. Instead, use whatever
the user configured.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds support for generic Tx checksums to the ixgbevf driver. It
turns out this is actually pretty easy after going over the datasheet as we
were doing a number of steps we didn't need to.
In order to perform a Tx checksum for an L4 header we need to fill in the
following fields in the Tx descriptor:
MACLEN (maximum of 127), retrieved from:
skb_network_offset()
IPLEN (maximum of 511), retrieved from:
skb_checksum_start_offset() - skb_network_offset()
TUCMD.L4T indicates offset and if checksum or crc32c, based on:
skb->csum_offset
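As a hedged illustration of how those three fields are derived (macro
names follow the ixgbe/ixgbevf convention; the helper itself is not the
driver's actual function):

    static void tx_csum_fields_sketch(struct sk_buff *skb,
                                      u32 *vlan_macip_lens, u32 *type_tucmd)
    {
        /* MACLEN: L2 header length; IPLEN: bytes from the network
         * header to where the checksum calculation starts.
         */
        *vlan_macip_lens = skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
        *vlan_macip_lens |= skb_checksum_start_offset(skb) -
                            skb_network_offset(skb);

        /* TUCMD.L4T is chosen purely from where the stack says the
         * checksum field lives, no protocol parsing needed.
         */
        switch (skb->csum_offset) {
        case offsetof(struct tcphdr, check):
            *type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
            break;
        case offsetof(struct sctphdr, checksum):
            *type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_SCTP;  /* crc32c */
            break;
        default:
            *type_tucmd = 0;  /* plain checksum, e.g. UDP */
            break;
        }
    }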
The added advantage to doing this is that we can support inner checksum
offloads for tunnels and MPLS while still being able to transparently
insert VLAN tags.
I also took the opportunity to clean-up many of the feature flag
configuration bits to make them a bit more consistent between drivers. In
the case of the VF drivers this meant adding support for SCTP CRCs, and
inner checksum offloads for MPLS and various tunnel types.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds support for generic Tx checksums to the ixgbe driver. It
turns out this is actually pretty easy after going over the datasheet as we
were doing a number of steps we didn't need to.
In order to perform a Tx checksum for an L4 header we need to fill in the
following fields in the Tx descriptor:
MACLEN (maximum of 127), retrieved from:
skb_network_offset()
IPLEN (maximum of 511), retrieved from:
skb_checksum_start_offset() - skb_network_offset()
TUCMD.L4T indicates offset and if checksum or crc32c, based on:
skb->csum_offset
The added advantage to doing this is that we can support inner checksum
offloads for tunnels and MPLS while still being able to transparently
insert VLAN tags.
I also took the opportunity to clean-up many of the feature flag
configuration bits to make them a bit more consistent between drivers.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The sources for the ops structure contents are const, so mark the ops
structures const as well. Copy them in place with structure assignments
instead of memcpys. Access the mbx_ops by reference instead of making a
copy of the source structure. Update the copyright date on the touched
files.
Reported-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We were adding VLAN 0 twice each time we restored the VLAN configuration.
Instead of doing it twice we can just start working through the active
VLANs from ID 1 on and skip the double write.
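A minimal sketch of the restore loop (the surrounding function is
illustrative, assuming the usual ixgbe restore path): write VLAN 0
once, then walk active_vlans starting from ID 1:

    static void ixgbe_restore_vlan_sketch(struct ixgbe_adapter *adapter)
    {
        u16 vid = 1;

        /* VLAN 0 is written exactly once... */
        ixgbe_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), 0);

        /* ...and the bitmap walk starts at ID 1, skipping the double write. */
        for_each_set_bit_from(vid, adapter->active_vlans, VLAN_N_VID)
            ixgbe_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid);
    }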
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Remove the sh-irda driver as it appears to be unused since
c0bb9b3027 ("ARCH: ARM: shmobile: Remove ag5evm board support").
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit takes care of the coding style warnings
that are mostly due to a different comment style and
lines over 80 chars, as well as a dangling else.
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Moritz Fischer <moritz.fischer@ettus.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch supports the following interrupts.
- One interrupt for multiple (timestamp, error, gPTP)
- One interrupt for emac
- Four interrupts for dma queue (best effort rx/tx, network control rx/tx)
This patch improves the efficiency of interrupt handling by adding an
interrupt handler corresponding to each interrupt source described
above. Additionally, it reduces the number of accesses to the
EthernetAVB IF.
This patch also keeps the driver from depending on the whim of a boot
loader.
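A hedged sketch of the per-source hookup; the helper name and the way
the interrupt is looked up are illustrative, not the driver's actual
helper:

    static int ravb_hook_irq_sketch(struct net_device *ndev, const char *name,
                                    irq_handler_t handler)
    {
        struct platform_device *pdev = to_platform_device(ndev->dev.parent);
        int irq = platform_get_irq_byname(pdev, name);

        if (irq < 0)
            return irq;

        /* One dedicated handler per interrupt source, no IRQF_SHARED. */
        return request_irq(irq, handler, 0, ndev->name, ndev);
    }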
[ykaneko0929@gmail.com: define bit names of registers]
[ykaneko0929@gmail.com: add comment for gen3 only registers]
[ykaneko0929@gmail.com: fix coding style]
[ykaneko0929@gmail.com: update changelog]
[ykaneko0929@gmail.com: gen3: fix initialization of interrupts]
[ykaneko0929@gmail.com: gen3: fix clearing interrupts]
[ykaneko0929@gmail.com: gen3: add helper function for request_irq()]
[ykaneko0929@gmail.com: gen3: remove IRQF_SHARED flag for request_irq()]
[ykaneko0929@gmail.com: revert ravb_close() and ravb_ptp_stop()]
[ykaneko0929@gmail.com: avoid calling free_irq() to non-hooked interrupts]
[ykaneko0929@gmail.com: make NC/BE interrupt handler a function]
[ykaneko0929@gmail.com: make timestamp interrupt handler a function]
[ykaneko0929@gmail.com: timestamp interrupt is handled in multiple
interrupt handler instead of dma queue interrupt handler]
Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com>
Signed-off-by: Yoshihiro Kaneko <ykaneko0929@gmail.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While doing the work on igb I realized there were a few cases where we were
still adding VLANs to the VLVF entries for the PF when they were not
needed. This patch cleans that up so that the only time we add a PF entry
to the VLVF is either for VLAN 0 or if the PF has requested a VLAN that a VF
is already using.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When running certain routing protocols like VRRP, VF guests need the
ability to set the unicast address of the interface. Extend the new ndo
trust feature to let the hypervisor trust a guest to set/update its own
unicast address.
Signed-off-by: Chas Williams <3chas3@gmail.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently, SO_TIMESTAMPING can only be enabled using setsockopt.
This is very costly when users want to sample writes to gather
tx timestamps.
Add support for enabling SO_TIMESTAMPING via control messages by
using the tsflags added to `struct sockcm_cookie` (added in the
previous patches in this series) to set the tx_flags of the last skb
created in a sendmsg. With this patch, the timestamp recording bits in
the tx_flags of the skbuff are overridden if SO_TIMESTAMPING is passed
in a cmsg.
Please note that this is only effective for overriding the recording
timestamps flags. Users should enable timestamp reporting (e.g.,
SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_OPT_ID) using
socket options and then should ask for SOF_TIMESTAMPING_TX_*
using control messages per sendmsg to sample timestamps for each
write.
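A userspace sketch of that split: the reporting flags are enabled once
with setsockopt at socket setup, while the recording flags ride along
with an individual sendmsg in a control message (minimal example,
error handling omitted):

    #include <string.h>
    #include <sys/socket.h>
    #include <linux/net_tstamp.h>

    /* Assumes setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, ...) already
     * enabled SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_OPT_ID.
     */
    static ssize_t send_with_tx_tstamp(int fd, const void *buf, size_t len)
    {
        __u32 record = SOF_TIMESTAMPING_TX_SCHED | SOF_TIMESTAMPING_TX_SOFTWARE;
        char control[CMSG_SPACE(sizeof(record))] = {0};
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = control,
            .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        /* Recording flags apply to this write only. */
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SO_TIMESTAMPING;
        cmsg->cmsg_len = CMSG_LEN(sizeof(record));
        memcpy(CMSG_DATA(cmsg), &record, sizeof(record));

        return sendmsg(fd, &msg, 0);
    }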
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It seems to have been unintentionally changed to Tx in
commit adc810900a ("ixgbe: Refactor busy poll socket code to address
multiple issues").
The lock is taken in ixgbe_low_latency_recv(), and under this lock we
call ixgbe_clean_rx_irq(), so it looks wrong to increment the Tx
counter there.
Yield stats can be shown through ethtool:
ethtool -S enp129s0 | grep yield
Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There are many valid WMI commands with only a header and no additional
payload. Such WMI commands could not be sent using the debugfs wmi_send
facility. Fix the code to allow sending of such commands.
Signed-off-by: Lior David <qca_liord@qca.qualcomm.com>
Signed-off-by: Maya Erez <qca_merez@qca.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Check and parse the Rx MAC timestamp when firmware sets its flag in
the status variable.
At the moment, 10.4 firmware adds it only to Rx beacon frames.
Drivers and mac80211 may utilize it to detect clock drift or beacon
collisions and use the result for beacon collision avoidance.
Signed-off-by: Peter Oh <poh@qca.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Since tx completion and rx indication processing are moved out of the
txrx tasklet, and rx ring lock contention is also removed from txrx for
rx_ind messages, it is more efficient to combine the replenish and txrx
tasks. The refill threshold is adjusted for both AP135 and AP148 (low
and high end systems). With this adjustment, on AP135 TCP DL improves
from 603 Mbps to 620 Mbps and UDP DL improves from 758 Mbps to 803
Mbps. Also, no watchdogs are observed on UDP BiDi.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Whenever an htt rx indication, i.e. a target-to-host message, is
received on the rx copy engine (CE5), the message is freed after
processing the response. CE 5 is then refilled with new descriptors
at post-rx processing. These memory alloc and free operations can be
avoided by reusing the same descriptors.
During CE pipe allocation, the full ring is not initialized, i.e. n-1
entries are filled up. So for CE 5 the full ring should be filled up in
order to reuse descriptors. Moreover, the CE 5 write index is updated
in a single shot instead of with incremental accesses. This avoids
multiple pci_write and ce_ring accesses. From experiments, it improves
CPU usage by ~3% on the IPQ4019 platform.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
The physical address necessary to unmap DMA ('bufferp') is stored
in ath10k_skb_cb as 'paddr'. For diag register read and write
operations, 'paddr' is stored in the transfer context. ath10k doesn't
rely on the meta/transfer_id, so the unused output arguments {bufferp,
nbytesp and transfer_idp} are removed from the CE recv_next completion.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Except on qca61x4 family chips (qca6164, qca6174), copy engine 5 is
used for receiving target-to-host htt messages. In a follow-up patch,
CE5 descriptors will be reused. In that case, the same API cannot be
used as the HTC layer callback, where the response messages are freed
at the end. Hence, register a new API for the HTC layer that frees up
the received message, and keep the message handler common for both HTC
and HIF layers.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
In a follow-up patch, htt rx descriptors will be reused instead of
being deallocated and refilled. To achieve that, htt rx indication
messages should not be deferred and should be processed in the pci
tasklet itself. Also, only mpdu_count is used from the rx indication
message, so it is maintained as an atomic variable and all rx amsdu
handlers are processed from the txrx tasklet. This change gets rid of
the rx_compl_q usage.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Make the amsdu handlers (i.e. amsdu_pop and rx_h_handler) common to
both rx_ind and frag_ind htt events. It is sufficient to hold the
rx_ring lock for amsdu_pop alone; there is no need to hold it until the
packets are delivered to mac80211. This helps to reduce rx_lock
contention as well.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
The fw descriptor was never used and probably never will be. It makes
little sense to maintain support for it. Remove it and simplify rx
processing. This will make it easier to optimize rx processing later
as well.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
To optimize CPU usage, htt rx descriptors will be reused instead of
being refilled for the htt rx copy engine (CE5). To support that, all
htt rx indications should be processed in the same context. Instead of
queueing the actual indication message, queue a copied message for txrx
processing.
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
To optimize CPU usage, htt rx descriptors will be reused instead of
being refilled for the htt rx copy engine (CE5). To support that, all
htt rx indications should be processed in the same context. A FIFO
queue is used to maintain the tx completion status for each msdu. This
helps to retain the order of tx completions.
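A hedged sketch of the FIFO idea using the kernel kfifo API; the
structure, the size and the complete_msdu() consumer are illustrative,
not ath10k's actual code:

    #include <linux/kfifo.h>

    struct tx_done_sketch {
        u32 msdu_id;
        u8 status;
    };

    static DEFINE_KFIFO(txdone_fifo, struct tx_done_sketch, 512);

    /* Producer (pci tasklet): record completions in arrival order. */
    static void record_tx_done(u32 msdu_id, u8 status)
    {
        struct tx_done_sketch done = { .msdu_id = msdu_id, .status = status };

        kfifo_put(&txdone_fifo, done);
    }

    /* Consumer (txrx tasklet): drain in the same order, so the order
     * of tx completions is retained.
     */
    static void process_tx_done(void)
    {
        struct tx_done_sketch done;

        while (kfifo_get(&txdone_fifo, &done))
            complete_msdu(done.msdu_id, done.status);  /* hypothetical */
    }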
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Per-station Rx duration support is part of the extended peer stats.
Enable provision to parse it, and provide backward compatibility based
on the 'stats_id' event.
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Add API support for Extended Resource Configuration for 10.4. This
is useful to enable new features like Peer Stats, LTEU etc. if the
firmware advertises support for the service. This is also done to
provide backward compatibility with older firmware. Also, for clarity,
send the default host platform type as 'WMI_HOST_PLATFORM_HIGH_PERF',
though this should not make any difference in functionality.
Signed-off-by: Raja Mani <rmani@qti.qualcomm.com>
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
This patch just updates the driver to the version fully
tested on STi platforms. This version is Jan_2016.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds full GMAC4 support to the stmmac device driver, which
is now able to use the new HW and some new features, e.g. TSO.
Multi-queue and split header support are missing at this stage.
This patch also updates the driver version and stmmac.txt.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is to support the snps,dwmac-4.00 and snps,dwmac-4.10a
and related features on the platform driver.
See binding doc for further details.
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For gmac3, the MMC addr map is: 0x100 - 0x2fc
For gmac4, the MMC addr map is: 0x700 - 0x8fc
So instead of adding 0x600 to the IO address when setting up the MMC,
the RMON base address is saved inside the private structure and then
used to manage the counters.
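A hedged sketch of the selection at setup time; the offset macro names
and the has_gmac4 flag are assumptions here:

    /* Pick the RMON/MMC base once and keep it in the private structure,
     * instead of adding a fixed 0x600 to the IO base later on.
     */
    if (priv->plat->has_gmac4)
        priv->mmcaddr = priv->ioaddr + MMC_GMAC4_OFFSET;    /* 0x700 */
    else
        priv->mmcaddr = priv->ioaddr + MMC_GMAC3_X_OFFSET;  /* 0x100 */

    /* Counter helpers then take priv->mmcaddr directly. */
    dwmac_mmc_intr_all_mask(priv->mmcaddr);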
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is the initial support for GMAC4. It includes the main callbacks
to set up the core module: checksum, basic filtering, MAC address and
interrupts (MMC, MTL, PMT). No LPI support is added.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DMA behavior is linked to descriptor management:
-descriptor mechanism (Tx for example, but it is exactly the same for RX):
-useful registers:
-DMA_CH#_TxDesc_Ring_Len: length of transmit descriptor ring
-DMA_CH#_TxDesc_List_Address: start address of the ring
-DMA_CH#_TxDesc_Tail_Pointer: address of the last
descriptor to send + 1.
-DMA_CH#_TxDesc_Current_App_TxDesc: address of the current
descriptor
-The descriptor Tail Pointer register contains the pointer to the
descriptor address (N). The base address and the current
descriptor decide the address of the current descriptor that the
DMA can process. The descriptors up to one location less than the
one indicated by the descriptor tail pointer (N-1) are owned by
the DMA. The DMA continues to process the descriptors until the
following condition occurs:
"current descriptor pointer == Descriptor Tail pointer"
Then the DMA goes into suspend mode. The application must perform
a write to descriptor tail pointer register and update the tail
pointer to have the following condition and to start a new transfer:
"current descriptor pointer < Descriptor tail pointer"
The DMA automatically wraps around the base address when the end
of ring is reached.
Up to 8 DMA channels could be used, but currently we only use one
(channel 0).
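A hedged sketch of the kick/handshake described above; the register
macro is a placeholder, not the driver's actual define:

    static void dwmac4_tx_kick_sketch(void __iomem *ioaddr, u32 chan,
                                      dma_addr_t last_desc)
    {
        /* Tail pointer = address of the last queued descriptor + 1; as
         * soon as "current descriptor pointer < tail pointer" the DMA
         * leaves suspend mode and resumes processing.
         */
        writel(lower_32_bits(last_desc) + sizeof(struct dma_desc),
               ioaddr + DWMAC4_CH_TX_TAIL_PTR(chan));
    }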
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is the main header file to define all the
macro used for GMAC4 DMA and CORE parts.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
One of the main changes of the GMAC 4.xx IP is descriptor management.
-descriptors are only used in ring mode.
-A descriptor is composed of four 32-bit words (no more extended
descriptors)
-descriptor mechanism (Tx for example, but it is exactly the same for RX):
-useful registers:
-DMA_CH#_TxDesc_Ring_Len: length of transmit descriptor
ring
-DMA_CH#_TxDesc_List_Address: start address of the ring
-DMA_CH#_TxDesc_Tail_Pointer: address of the last
descriptor to send + 1.
-DMA_CH#_TxDesc_Current_App_TxDesc: address of the current
descriptor
-The descriptor Tail Pointer register contains the pointer to the
descriptor address (N). The base address and the current
descriptor decide the address of the current descriptor that the
DMA can process. The descriptors up to one location less than the
one indicated by the descriptor tail pointer (N-1) are owned by
the DMA. The DMA continues to process the descriptors until the
following condition occurs:
"current descriptor pointer == Descriptor Tail pointer"
Then the DMA goes into suspend mode. The application must perform
a write to descriptor tail pointer register and update the tail
pointer to have the following condition and to start a new
transfer:
"current descriptor pointer < Descriptor tail pointer"
The DMA automatically wraps around the base address when the end
of ring is reached.
-New features are available on IP:
-TSO (TCP Segmentation Offload) for TX only
-Split header: to have header and payload in 2 different buffers
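A hedged sketch of the 4 x 32-bit layout; the field roles noted in the
comments are indicative only and depend on the descriptor type (read
vs write-back format):

    struct dwmac4_desc_sketch {
        __le32 des0;    /* e.g. buffer 1 address low */
        __le32 des1;    /* e.g. buffer address high / reserved */
        __le32 des2;    /* e.g. buffer lengths, interrupt-on-completion */
        __le32 des3;    /* OWN bit, first/last descriptor flags, status */
    };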
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
synopsys_uid is only used once after setup, to get synopsys_id by
using a shift/mask operation. It is no longer used after that.
So, remove this temporary variable and directly compute synopsys_id
in the setup routine.
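A minimal sketch of the computation in the setup path, assuming the low
byte of the hardware version register carries the Synopsys release; the
helper signature is illustrative:

    static u32 stmmac_synopsys_id_sketch(u32 hwid)
    {
        /* Old cores have no version register and read back 0. */
        if (!hwid)
            return 0;

        /* Low byte holds the Synopsys release (3.x vs 4.x cores). */
        return hwid & 0xff;
    }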
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To avoid lots of checks in stmmac_main for display ring management,
and to support the GMAC4 chip, the display_ring function is moved into
a dedicated descriptor file.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On the next GMAC IP generation (4.xx), the way to get hw features is
not the same as on the previous 3.xx. As it is hardware dependent, the
way to get hw capabilities should be defined in the dma ops of each MAC
IP. This also avoids a huge computation of hw capabilities in
stmmac_main.
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The patch adds support for pause ctrl for HNS V2, a feature missing
from HNS V1:
1) service ports can disable rx pause frames,
2) debug ports can enable tx/rx pause frames.
This patch also updates the REGs for pause ctrl when the update status
function is called by the upper layer routine.
Signed-off-by: Lisheng <lisheng011@huawei.com>
Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>