Add support to process RX offload packets in packet capture mode.
To distinguish rx offload packets from normal rx packets, the
DP_PEER_METADATA_OFFLOAD bit is set in the peer metadata. Based on the
value of this bit, the rx packet is delivered either to the stack or to
the packet capture component.
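A minimal sketch of the intended dispatch, assuming a hypothetical mask
for the metadata bit (the real accessor macros in the driver may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical layout: one reserved bit of the peer metadata marks
     * frames that were already handled by the rx offload path. */
    #define DP_PEER_METADATA_OFFLOAD_MASK 0x80000000u

    static bool dp_rx_is_offload_pkt(uint32_t peer_metadata)
    {
        return (peer_metadata & DP_PEER_METADATA_OFFLOAD_MASK) != 0;
    }

    /* Caller: e.g. deliver to the packet capture component when this
     * returns true, otherwise hand the frame to the stack as usual. */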
Change-Id: Ice656a0bc14efd0382c4949d695daa8e926ce41e
CRs-Fixed: 2856792
Add previously freed nbuf and buffer start address info in the rx
descriptor. This helps in debugging use-after-free access of rx buffers.
Change-Id: I1c883bf049ce75dd0413b85946fe2982648d8004
CRs-Fixed: 2827151
Low memory profiles such as the 256M and 16M profiles support only NSS
Wi-Fi offload mode, and the HOST data path APIs are not used in NSS
offload mode.
Disable the HOST data path APIs that are not used in either NSS Wi-Fi
offload mode or HOST mode (when NSS offload mode is enabled).
CRs-Fixed: 2831478
Change-Id: I6895054a6c96bd446c2df7761ce65feef662a3cc
If the continuation bit is set in the msdu desc info but the reported
length can fit in a single buffer, avoid SG processing.
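A sketch of the check, with the continuation flag, msdu length and RX
buffer size treated as plain parameters (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* SG handling is only needed when the msdu really spills past one
     * RX buffer; a set continuation bit alone is not enough. */
    static bool dp_rx_needs_sg(bool msdu_continuation, uint32_t msdu_len,
                               uint32_t rx_buf_size)
    {
        return msdu_continuation && msdu_len > rx_buf_size;
    }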
Change-Id: I6e8c3e1e657c372d5d915450dda20ba26bac495f
Assert when an rx desc nbuf sanity check failure is detected, to get
more info on the RX refill buffer ring for the default version.
Change-Id: I8d0255e2f13e2b993f5651b788f895ea06187bf9
CRs-Fixed: 2800602
When multi page alloc is enabled, the spinlock for rx_desc_pool is
held for more than 2 seconds, resulting in a QDF_BUG. The major
proportion of that time is spent unmapping the nbufs.
To fix this, lock rx_desc_pool only to collect the nbufs from the rx
descriptors into a list, then unmap and free the nbufs after releasing
the lock.
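The pattern is roughly as below; pthread and malloc/free stand in for
the QDF lock and nbuf unmap/free helpers:

    #include <pthread.h>
    #include <stdlib.h>

    struct nbuf { struct nbuf *next; void *data; };
    struct rx_desc_pool { pthread_mutex_t lock; struct nbuf *nbuf_list; };

    /* Detach the whole nbuf list while holding the lock (cheap pointer
     * work), then do the expensive unmap/free after the lock is dropped. */
    static void rx_desc_pool_drain(struct rx_desc_pool *pool)
    {
        struct nbuf *list;

        pthread_mutex_lock(&pool->lock);
        list = pool->nbuf_list;
        pool->nbuf_list = NULL;
        pthread_mutex_unlock(&pool->lock);

        while (list) {
            struct nbuf *next = list->next;
            free(list->data);   /* stands in for dma unmap + nbuf free */
            free(list);
            list = next;
        }
    }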
Change-Id: Iff2078a0de56b51712e2f9a7c5ace7a959e2445d
CRs-Fixed: 2779498
We have come across scenarios where rxdma pushes a certain entry more
than once to the REO exception ring. In this scenario, when we try to
unmap a buffer, it can lead to multiple unmaps of the same buffer.
Handle this case by skipping the processing of the duplicate entry, if
it is already unmapped, and proceeding to the next entry.
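A sketch of the guard, assuming a per-descriptor flag tracks whether
its nbuf was already unmapped (field names are illustrative):

    #include <stdbool.h>

    struct rx_desc_sketch {
        void *nbuf;
        bool  unmapped;  /* set the first time the nbuf is dma-unmapped */
    };

    /* Return true when this exception-ring entry is a duplicate whose
     * buffer was already unmapped, so the caller moves to the next entry. */
    static bool rx_err_entry_is_duplicate(struct rx_desc_sketch *desc)
    {
        if (desc->unmapped)
            return true;
        desc->unmapped = true;  /* the real unmap would happen here */
        return false;
    }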
Change-Id: Iae66f27e432f795ba4730911029fa1d63a75cb06
CRs-Fixed: 2739176
Memory optimization of the monitor status ring by allocating buffers
during replenish using alloc_skb (Linux API).
It creates a buffer of the required size rather than the 4K size used
by dev_alloc_skb.
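In raw Linux skb terms the change looks roughly like this (the driver
allocates through QDF wrappers, so the actual call site differs):

    #include <linux/skbuff.h>
    #include <linux/gfp.h>

    /* Before: every monitor status buffer was a fixed-size allocation,
     *   skb = dev_alloc_skb(4096);
     * After: request only what the status ring entry needs. */
    static struct sk_buff *mon_status_buf_alloc(unsigned int required_size)
    {
        return alloc_skb(required_size, GFP_ATOMIC);
    }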
Change-Id: I3ae5e403de28c4570f8ac3b50d3ca878a9e4b2f9
CRs-Fixed: 2733931
The Rx desc pool has a flag to identify whether frag or nbuf
operations need to be performed for alloc, map, prep and free of the
monitor destination buffers.
This flag is set only for the mon destination desc pool, if the
RX_MON_MEM_FRAG feature is enabled.
In all other cases it is set to zero and the default nbuf operations
are used.
The flag is initialized at the time of pdev rx_desc_pool
initialization and reset during pdev deinit.
The mon destination buffer will support frags if the RX_MON_MEM_FRAG
flag is set.
Change-Id: I67c6c823ee4f114035b884c024a1a9054a40665b
CRs-Fixed: 2741757
The same link descriptor address/cookie is observed back to back in
the WBM idle link desc ring.
Add a duplicate link desc check when the host refills link descriptors
to WBM through the SW2WBM release ring, and also for the REO reinject
ring.
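A sketch of the duplicate check, comparing each refilled entry against
the one written just before it (struct and names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    struct link_desc_ref {
        uint64_t paddr;      /* link descriptor physical address */
        uint32_t sw_cookie;  /* link descriptor cookie */
    };

    /* Back-to-back duplicate: same address/cookie as the previous refill. */
    static bool wbm_idle_link_desc_is_dup(const struct link_desc_ref *prev,
                                          const struct link_desc_ref *cur)
    {
        return prev->paddr == cur->paddr &&
               prev->sw_cookie == cur->sw_cookie;
    }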
Change-Id: Iaf9defd87670776fa9488d7f650efa3c08fefa60
CRs-Fixed: 2739879
The nbuf sanity check can fail when HW posts the same buffer twice.
This case can be handled gracefully by just skipping the processing of
the corresponding rx descriptor.
Change-Id: I471bb9f364a51937e85249996e427f15872bda97
CRs-Fixed: 2738558
DP RX changes to support the RX buffer pool, a pre-allocated pool of
buffers which will be utilized during low memory conditions.
Change-Id: I8d89a865f989d4e88c10390861e9d4be72ae0299
CRs-Fixed: 2731517
Currently all the rx ring descriptor contents are left intact even
after these entries are processed. This can, at times, lead to stale
entries being processed if the head pointer of any ring is updated
before the updated contents of the ring descriptor get reflected in
memory. This can lead to scenarios where the host driver reads a stale
value of sw_cookie and frees/unmaps a currently in-use buffer, thereby
leading to the hardware accessing an unmapped memory region.
The sw_cookie is an integral part of all the rx ring processing. Hence
we always mark the sw_cookie as invalid after dequeuing an entry from
the REO2SW ring. Every time, we check the validity of the sw_cookie
before we try to process an entry from the REO2SW ring. If the invalid
bit in the sw_cookie is set, we just skip this entry and move on to the
next entry in the ring.
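A sketch of the cookie handling, assuming a spare bit in the sw_cookie
is reserved as the invalid marker (the real bit position lives in the
HAL definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define RX_SW_COOKIE_INVALID_BIT (1u << 19)  /* assumed spare bit */

    /* Mark the cookie invalid right after the entry is dequeued ... */
    static uint32_t rx_cookie_mark_invalid(uint32_t sw_cookie)
    {
        return sw_cookie | RX_SW_COOKIE_INVALID_BIT;
    }

    /* ... and refuse to process an entry whose cookie is already marked. */
    static bool rx_cookie_is_valid(uint32_t sw_cookie)
    {
        return !(sw_cookie & RX_SW_COOKIE_INVALID_BIT);
    }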
Change-Id: I0e78fa662b8ba33e64687a4dee4d1a5875ddb4bf
CRs-Fixed: 2730718
Add debug info support for rx descriptors to log the caller function
name and timestamp in the replenish and free scenarios.
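A sketch of the per-descriptor bookkeeping, with hypothetical field
names and libc time() standing in for the QDF timestamp helper:

    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define RX_DESC_DBG_FUNC_LEN 32

    struct rx_desc_dbg_info_sketch {
        char     func_name[RX_DESC_DBG_FUNC_LEN]; /* last caller */
        uint64_t timestamp;                       /* when it happened */
    };

    /* Called from both the replenish and the free paths with __func__. */
    static void rx_desc_update_dbg_info(struct rx_desc_dbg_info_sketch *dbg,
                                        const char *caller)
    {
        strncpy(dbg->func_name, caller, RX_DESC_DBG_FUNC_LEN - 1);
        dbg->func_name[RX_DESC_DBG_FUNC_LEN - 1] = '\0';
        dbg->timestamp = (uint64_t)time(NULL);
    }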
Change-Id: I1d9b855d14f705094f241bae653f33a94d0e39b7
CRs-Fixed: 2677288
Add a check for sg formation.
Only enable chfrag_cont and msdu_continuation if the reo error code is
HAL_RX_WBM_ERR_SRC_REO or the rxdma_err_code is
HAL_RXDMA_ERR_UNENCRYPTED.
Also chain all the nbufs, in the sg case, in a separate buffer and
finally loop through it. This is added because sometimes we don't get
the desc in sync with hw; to avoid such a mismatch, this buffer is
added.
We will process the nbufs only when all the msdus have been received.
Change-Id: I3b154a68955db61f3acaa0cb8d130c8918a3d450
CRs-Fixed: 2672126
Do a logical split of dp_soc_attach and dp_pdev_attach into
allocation and initialization routines, and of dp_soc_detach and
dp_pdev_detach into de-initialization and free routines.
Change-Id: I23bdca0ca86db42a4d0b2554cd60d99bb207a647
In the OOR error handling scenario, the msdu is spread across two
nbufs. Due to this, there is a mismatch between the msdu count fetched
from the MPDU desc details and the count fetched from the rx link
descriptor.
The fix is to create a frag list for the case where the msdu is spread
across multiple nbufs.
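In raw Linux skb terms, chaining the continuation buffer onto the
parent looks roughly like this (the driver goes through its own
sg/frag-list helpers):

    #include <linux/skbuff.h>

    /* Attach the second nbuf as a frag list entry of the first so the
     * msdu that spans two buffers is handed up as a single frame. */
    static void rx_oor_chain_msdu(struct sk_buff *head, struct sk_buff *cont)
    {
        skb_shinfo(head)->frag_list = cont;
        head->len      += cont->len;
        head->data_len += cont->len;
        head->truesize += cont->truesize;
    }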
Change-Id: I1d600a0988b373e68aad6ef815fb2d775763b7cb
CRs-Fixed: 2665963
Split dp_rx_pdev_attach into dp_rx_pdev_desc_pool_alloc,
dp_rx_pdev_desc_pool_init and dp_rx_pdev_buffers_alloc, and
dp_rx_pdev_detach into dp_rx_pdev_desc_pool_free,
dp_rx_pdev_desc_pool_deinit and dp_rx_pdev_buffers_free APIs.
This split is made because dp_pdev_init is introduced as part of this
FR, and these APIs will be called from dp_pdev_init/dp_pdev_deinit or
dp_pdev_attach/dp_pdev_detach accordingly, to maintain symmetry with
the DP init and deinit paths.
Change-Id: Ib543ddae90b90f4706004080b1f2b7d0e5cfbfbc
CRs-Fixed: 2663595
Split dp_mon_link_desc_pool_setup into alloc and init APIs, and
dp_mon_link_desc_pool_cleanup into deinit and free APIs.
This split is made because dp_pdev_init is introduced as part of this
FR, and these APIs will be called from dp_pdev_init/dp_pdev_deinit or
dp_pdev_attach/dp_pdev_detach accordingly, to maintain symmetry with
the DP init and deinit paths.
Change-Id: I36b2a98bd317010124916e0b2779938eba3883ea
CRs-Fixed: 2663595
The peer rx packets should be flushed and the peer state should be
reset to DISCONNECT when deleting the peer.
If the state of the peer is not set to DISCONNECT, then flushing the
rx packets for the peer being deleted will call the rx callback and
submit these packets to the stack, which can cause unwanted behaviour.
This way the UMAC does not need to specifically call clear peer before
deleting the peer.
Change-Id: I3b5a737126350a361d968f6349aef6291b2e3f56
CRs-Fixed: 2659629
Remove the lock used to access the REO destination rings, because the
4 rings are accessed on 4 individual cores.
Change-Id: Ia3f92cc5136dbdbeea1e9cda8d52b474356a3e1a
CRs-Fixed: 2660901
Support RX 2K jump/OOR frame handling from the REO2TCL ring.
(a) configure the REO error destination ring register to route 2K
jump/OOR frames to the REO2TCL ring.
(b) for 2K jump RX frames, only accept ARP frames and drop the others;
meanwhile, send a delba action frame to the remote peer once the first
2K jump data is received.
(c) for OOR RX frames, accept ARP/EAPOL/DHCP/IPV6_DHCP frames and drop
the others.
Change-Id: I7cb33279a8ba543686da4eba547e40f86813e057
CRs-Fixed: 2631949
Break up the 2MB descriptor bank memory allocation for the WBM idle
link ring. Use multiple page allocation and populate the WBM idle link
descriptor ring with the physical addresses of each DMA page allocated
in the descriptor bank.
This is to ensure that no requests for contiguous memory allocations
are made that might result in allocation failures.
For MCL, set the page size to 4KB; for WIN-specific code, leave it to
the max_alloc_size cfg ini param.
Change-Id: Iec30321044827c0174366cc02df25a42d38309e0
CRs-Fixed: 2565817
Do not process rxdma decrypt error frames; drop them. In case of a
decrypt error the decryption is not proper and the Rx OLE gets
corrupted bytes, so accessing these can lead to invalid buffers.
Change-Id: Idb3f942facf08fc26bde0fd9826db28955ca01d5
CRs-Fixed: 2613068
Rather than extracting the msdu end pkt tlv information on a per-field
basis in the fast data path, extract the msdu end pkt tlv information
at once and store it in a local structure.
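The idea, with illustrative field names rather than the exact hal_rx
layout:

    #include <stdbool.h>
    #include <stdint.h>

    /* All msdu_end fields needed by the fast path, filled in one pass
     * over the TLV instead of one parse per field. */
    struct msdu_end_info_sketch {
        uint16_t msdu_len;
        uint8_t  l3_hdr_pad;
        bool     first_msdu;
        bool     last_msdu;
        bool     da_is_valid;
    };

    /* A single extraction call populates the struct up front; every
     * later check in the hot loop then reads the cached local copy. */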
Change-Id: I0877ba4f824d480cc0851c72090f010852d0d203
Drop the packet if the msdu_done bit is not set while processing
rxdma err frames. This is not expected while reaping the WBM RX
release ring.
Change-Id: I8776d15ea88319d7d955fdae90958648484dbda0
CRs-Fixed: 2603791
In the function that processes rxdma err frames, add a null check
before calling the vdev callback function. If it is valid, deliver the
skb to the stack; otherwise free the skb.
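A sketch of the guard, with a generic callback type standing in for the
vdev osif rx callback:

    #include <stddef.h>

    struct sk_buff;
    typedef void (*rx_deliver_fn)(void *osif_ctx, struct sk_buff *skb);

    /* Deliver through the registered callback only if one exists;
     * otherwise free the frame instead of dereferencing NULL. */
    static void rx_err_deliver_or_free(rx_deliver_fn cb, void *osif_ctx,
                                       struct sk_buff *skb,
                                       void (*skb_free)(struct sk_buff *))
    {
        if (cb)
            cb(osif_ctx, skb);
        else
            skb_free(skb);
    }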
Change-Id: I7c481eb8f702d9109c4a9f79db7e050ece6c3689
CRs-Fixed: 2607658
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework the user can configure, at compile
time, a 2K SKB for data buffers, monitor status rings, monitor
descriptor rings and monitor destination rings, and a 4K SKB for
monitor buffers.
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
A long length msdu is received and this msdu appears to be spread
across multiple nbufs; there is no corresponding logic for this case.
qdf_set_pkt_len will invoke pskb_expand_head to renew the skb->head
buffer, but rx_tlv_hdr still points to the original skb->data buffer,
so invalid access will happen.
As a WAR, drop the nbufs related to this msdu after dp_rx_sg_create is
done.
Change-Id: Iceb09fd04e4d768325018a8ddd4261ab4f75991a
CRs-Fixed: 2597927
Crash scenario:
(1) frag data A is dropped and the related RX desc A is replenished
and reused, but pdev->free_list_tail still points to RX desc A.
(2) frag data B/C arrives and defrag fails, so pdev->free_list_head
points to the B-->C RX descs, but pdev->free_list_tail still points
to A.
(3) in the defrag failure case, the host only replenishes 1 RX buffer,
so RX desc B is replenished while C is freed back to the RX desc pool.
(4) dp_rx_add_desc_list_to_free_list sets RX desc A-->next =
free_list, so free_list points to C instead.
(5) when the buffer replenished for RX desc A in step (1) is indicated
to the host by the REO2Dst ring, RX desc A-->nbuf actually points to
another RX desc, and invalid skb access will happen.
Solution:
a. reset the tail pointer in dp_rx_add_desc_list_to_free_list at the
end.
b. reset the tail pointer to the head in dp_rx_add_to_free_desc_list
if head->next is NULL.
c. set the correct rx_bufs number for replenish when dp_rx_defrag
fails.
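A sketch of the tail-pointer invariant the fixes restore, with
illustrative names and a simplified signature rather than the exact
driver code:

    #include <stddef.h>

    struct rx_desc_elem { struct rx_desc_elem *next; };

    /* Prepend [head..tail] to the pool free list; if the free list was
     * empty before, the appended tail becomes the new pool tail, so the
     * tail can never keep pointing at a descriptor that was already
     * replenished and reused. */
    static void add_desc_list_to_free_list(struct rx_desc_elem **pool_head,
                                           struct rx_desc_elem **pool_tail,
                                           struct rx_desc_elem *head,
                                           struct rx_desc_elem *tail)
    {
        tail->next = *pool_head;
        *pool_head = head;
        if (!tail->next)            /* free list was empty before */
            *pool_tail = tail;
    }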
Change-Id: Ib297baea3605a09dd7d85d1f5ceb95db48a2e1f1
CRs-Fixed: 2603676
Change set 2 of ctrl_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle
Change-Id: I3f180c9c360d564f0b229b447074ad23b7c0a737
1. Move all LMAC rings to SOC from pDEV
2. Dynamically obtain lmac->pdev mapping while handling LMAC interrupts
Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
Check for vdev_id mismatch when delivering NBUFs to the stack, to
avoid holding the peer reference while giving the nbuf list to the
stack.
Change-Id: Ic475e00d5b1793ada7b26b7af3322ca2fa51836f
Link descriptors were getting freed using the pointer of the
previously freed link descriptor. This patch fixes the issue by
copying the address of the current descriptor into a local descriptor
info and using it to free the current one.
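A sketch of the idea: take a local copy of the current descriptor's
info before releasing it, so the release never relies on a previously
freed entry (names are illustrative):

    #include <stdint.h>

    struct link_desc_info {
        uint64_t paddr;
        uint32_t sw_cookie;
    };

    static void release_link_descs(const struct link_desc_info *descs,
                                   int count,
                                   void (*release)(const struct link_desc_info *))
    {
        for (int i = 0; i < count; i++) {
            /* local snapshot of the current entry */
            struct link_desc_info cur = descs[i];

            release(&cur);
        }
    }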
Change-Id: I95e137ba5b1f0ad21b0e6fb39f6671e1d5b65ba6
CRs-Fixed: 2577624
Currently defragmented packets use HAL_RX_BUF_RBM_SW1_RBM as the RBM
value for the defragmented packets which are re-injected into REO.
Thus, if REO encounters any error while handling these packets, they
would end up in WBM2SW1 ring (via WBM), which is managed by the FW. The
FW will eventually recycle these buffers back to RXDMA via its refill
process. As a part of defragmentation, the host does an 802.11 ->
802.3 header conversion. This results in an address which is not
4-byte aligned. Hence, when RXDMA tries to use these addresses (after
the FW recycles them), it may lead to issues.
Change the RBM value of the defragmented buffers which are
re-injected. Now, if REO ends up throwing an error for these packets,
they will end up in WBM2SW3, which is managed by the host. The host
can then drop these packets and replenish RXDMA with 4-byte aligned
buffers (via the FW).
Change-Id: I9d9c25385978d5be855699feb28d292c6f3fffdd
CRs-Fixed: 2572483
When a particular vdev is deleted, the corresponding rx
packets which have been queued to the rx thread are not
flushed. Hence when such packets are submitted to the
network stack, the dev for this skb will be invalid,
since we have already freed the adapter.
Flush out the packets in the rx thread queues, before
deleting the vdev.
CRs-Fixed: 2543392
Change-Id: I2490d0f5ce965f62152613a17a59232521ca058f
rx host ring backpressure:
Identify the rings causing rx backpressure after being notified by a
FW message. Add logs to be able to see what state the AP was in after
a backpressure event was triggered.
Add radio stats (261) as well as napi stats for a better state
description.
Change-Id: I395450be6faaf959f91729516a7b229c5b3396ce
Add device ID change and target type checks for Pine.
Also remove the memory WAR added for HK emulation.
Change-Id: Idf531a48a03202d4fb241a92a1d671ee2b94cfbd
CRs-fixed: 2453899
Tags are programmed using wlanconfig commands. Rx IPv4/v6 TCP/UDP
packets matching a 5-tuple are tagged using HawkeyeV2 hardware.
Tags are populated in the skb->cb in the REO/exception/monitor data
paths and sent to the upper stack.
CRs-Fixed: 2502311
Change-Id: I7c999e75fab43b6ecb6f9d9fd4b0351f0b9cfda8
1. Remove vlan tag in tx and enqueue to hardware.
2. Add vlan tag in rx after peer-vlan_id lookup.
Change-Id: I932202540ac03cabdd20ffd4849fe759ea8a7abb
Add code to replace the usage of void pointers in the HAL layer and
instead use appropriate opaque pointers.
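The usual pattern, shown with a hypothetical handle name (the driver's
actual typedefs live in the HAL headers):

    /* A forward-declared struct tag gives a type-checked opaque handle;
     * callers can pass it around but cannot peek inside. */
    struct hal_soc;                            /* defined only inside HAL */
    typedef struct hal_soc *hal_soc_handle_t;  /* replaces a bare void *  */

    /* Before: void hal_api(void *hal_soc);
     * After:  void hal_api(hal_soc_handle_t hal_soc); */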
Change-Id: Id950bd9130a99014305738937aed736cf0144aca
CRs-Fixed: 2487250