Add support to process RX offload packets in packet capture mode.
To distinguish rx offload packets from normal rx packets, the
DP_PEER_METADATA_OFFLOAD bit is set in the peer metadata. Based on
the value of this bit, the rx packet is delivered either to the
stack or to the packet capture component.
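A minimal sketch of the delivery decision, assuming a 32-bit peer
metadata word; the bit position, packet type and the two delivery
helpers below are hypothetical stand-ins for the driver's real
definitions:

#include <stdint.h>

/* Hypothetical bit position; the real value comes from the DP headers. */
#define DP_PEER_METADATA_OFFLOAD (1u << 24)

struct rx_pkt { void *data; };            /* placeholder packet handle */

void deliver_to_stack(struct rx_pkt *pkt)       { (void)pkt; /* netif path */ }
void deliver_to_pkt_capture(struct rx_pkt *pkt) { (void)pkt; /* capture path */ }

/* Route an rx packet based on the offload bit carried in the peer metadata. */
void rx_deliver(struct rx_pkt *pkt, uint32_t peer_metadata)
{
    if (peer_metadata & DP_PEER_METADATA_OFFLOAD)
        deliver_to_pkt_capture(pkt);      /* rx offload frame */
    else
        deliver_to_stack(pkt);            /* normal rx frame */
}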
Change-Id: Ice656a0bc14efd0382c4949d695daa8e926ce41e
CRs-Fixed: 2856792
DP references of the peer will be removed once we receive PEER
unmap message from the firmware. If the firmware has failed to
send the peer unmap message (due to an assert in FW), then the
peer entries will remain in the inactive peer list and will trigger
an explicit QDF_BUG() during driver unload.
Force clean the inactive peer/vdev list during driver unload in such
cases.
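A rough sketch of the force-clean idea, assuming a TAILQ-style
inactive peer list; the real driver walks its own lists with its own
locking, so the structure names and the free helper here are only
illustrative:

#include <stdlib.h>
#include <sys/queue.h>

struct dp_peer {
    TAILQ_ENTRY(dp_peer) inactive_list_elem;
};

struct dp_soc {
    TAILQ_HEAD(, dp_peer) inactive_peer_list;
};

static void dp_peer_free(struct dp_peer *peer)
{
    free(peer);
}

/*
 * Force-clean peers that stayed on the inactive list because the
 * firmware never sent the peer unmap message (e.g. after an FW
 * assert), instead of tripping QDF_BUG() during unload.
 */
void dp_soc_force_clean_inactive_peers(struct dp_soc *soc)
{
    struct dp_peer *peer;

    while ((peer = TAILQ_FIRST(&soc->inactive_peer_list)) != NULL) {
        TAILQ_REMOVE(&soc->inactive_peer_list, peer, inactive_list_elem);
        dp_peer_free(peer);
    }
}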
Change-Id: Ib8a5f8179ce51378a825034e411f89ff024017e1
CRs-Fixed: 2847742
In case of pdev attach failure, reset the pdev reference in
soc->pdev_list to NULL before freeing it to avoid use-after-free.
Also initialize the WDS aging timer only when the AST hash is
allocated.
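A small sketch of the error-path ordering, with an assumed
MAX_PDEV_CNT and a simplified pdev structure; in the driver the free
goes through qdf_mem_free():

#include <stdlib.h>

#define MAX_PDEV_CNT 3                    /* assumed maximum pdev count */

struct dp_pdev { int pdev_id; };

struct dp_soc {
    struct dp_pdev *pdev_list[MAX_PDEV_CNT];
};

/*
 * pdev attach error path: clear the soc->pdev_list reference first,
 * then free the pdev, so no other path can pick up a dangling pointer.
 */
void dp_pdev_attach_fail_cleanup(struct dp_soc *soc, int pdev_id)
{
    struct dp_pdev *pdev = soc->pdev_list[pdev_id];

    if (!pdev)
        return;

    soc->pdev_list[pdev_id] = NULL;       /* reset reference before free */
    free(pdev);
}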
Change-Id: I6a406bd94aa46a95d9e5bb970ae83b3dfde29d0a
Currently struct ipa_wdi_conn_in_params occupies 1588 bytes and putting
it on the stack is rather expensive, which could potentially lead to
stack corruption.
Fix is to reduce stack usage in dp_ipa_setup by dynamically allocating
struct ipa_wdi_conn_in_params on the heap.
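A sketch of the stack-to-heap change, with the structure reduced to a
padded placeholder and libc allocation standing in for
qdf_mem_malloc()/qdf_mem_free():

#include <stdlib.h>

/* Placeholder for the ~1588-byte IPA WDI connect-in parameters. */
struct ipa_wdi_conn_in_params {
    char payload[1588];
};

/*
 * Instead of declaring "struct ipa_wdi_conn_in_params in;" on the
 * stack, allocate it on the heap for the duration of the setup call.
 */
int dp_ipa_setup_sketch(void)
{
    struct ipa_wdi_conn_in_params *in;

    in = calloc(1, sizeof(*in));
    if (!in)
        return -1;                        /* out of memory */

    /* ... fill *in and pass it to the IPA WDI connect call ... */

    free(in);
    return 0;
}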
Change-Id: I8f71f44906a5c95f37627f7573b57b7825daaa7e
CRs-Fixed: 2852027
Issue1: Driver RTPM state is ON/NONE, kernel state is RESUMING.
cdp_runtime_resume is already complete,
hif_pm_runtime_get returns -EINPROGRESS,
dp_tx_hw_enqueue will set the flush event,
but cdp_runtime_resume is already done,
so this flush event will be handled only on the next pkt tx.
Issue2: Driver RTPM state is RESUMING.
hif_pm_runtime_get returns -EBUSY,
dp_tx_hw_enqueue is interrupted by an IRQ,
cdp_runtime_resume is completed,
dp_tx_hw_enqueue will set the flush event,
and this flush event will be handled only on the next pkt tx.
Fix:
Introduce a link_state_up atomic variable in hif to track the link state
change by pld_cb.
Set atomic variable link_state_up=1 in pmo_core_psoc_bus_runtime_resume
just after pld_cb. pld_cb brings the PCIe bus out of suspend state.
Set atomic variable link_state_up=0 in pmo_core_psoc_bus_runtime_suspend
just before pld_cb. pld_cb puts the PCIe bus into suspend state.
Introduce dp_runtime_get and dp_runtime_put.
dp_runtime_get takes a refcount by incrementing an atomic variable.
dp_runtime_put releases the refcount by decrementing this atomic
variable.
If hif_pm_runtime_get returns -EBUSY or -EINPROGRESS,
take the dp runtime refcount using dp_runtime_get,
check if the link state is up, write the TX ring HP,
then release the dp runtime refcount using dp_runtime_put.
cdp_runtime_suspend should reject the suspend if the dp runtime
refcount is non-zero.
cdp_runtime_resume should wait until the dp runtime refcount becomes
zero or a timeout occurs, then flush pending tx for runtime suspend.
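The sketch below illustrates the refcount and link-state idea with
C11 atomics; the field names, helper signatures and the tx-path flow
are approximations of the description above, not the driver's exact
code:

#include <stdatomic.h>
#include <stdbool.h>

struct dp_soc_pm {
    atomic_int dp_runtime_refcount;       /* held while a TX HP write may race */
    atomic_int link_state_up;             /* set/cleared around pld_cb */
};

static inline void dp_runtime_get(struct dp_soc_pm *soc)
{
    atomic_fetch_add(&soc->dp_runtime_refcount, 1);
}

static inline void dp_runtime_put(struct dp_soc_pm *soc)
{
    atomic_fetch_sub(&soc->dp_runtime_refcount, 1);
}

/* TX path when hif_pm_runtime_get() returned -EBUSY or -EINPROGRESS. */
void dp_tx_hw_enqueue_deferred_hp_write(struct dp_soc_pm *soc)
{
    dp_runtime_get(soc);

    if (atomic_load(&soc->link_state_up)) {
        /* PCIe link already out of suspend: safe to write the TX ring HP. */
    } else {
        /* Link still down: leave it to the resume-time flush. */
    }

    dp_runtime_put(soc);
}

/* cdp_runtime_suspend rejects suspend while any deferred HP write is in flight. */
bool dp_runtime_suspend_allowed(struct dp_soc_pm *soc)
{
    return atomic_load(&soc->dp_runtime_refcount) == 0;
}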
Change-Id: I5b97d50cba710082f117f3845f7830712b86cda7
CRs-Fixed: 2844888
During monitor vap creation, do not allocate monitor buffer
and link descriptor memory if it is already initialized.
Check if the pdev->pdev_mon_init flag is true and return.
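A minimal sketch of the guard, assuming a boolean pdev_mon_init flag
on the pdev as described above; the allocation body is elided:

#include <stdbool.h>

struct dp_pdev_mon {
    bool pdev_mon_init;                   /* true once buffers/descriptors exist */
};

/* Monitor vap creation: skip allocation when a previous init already ran. */
int dp_pdev_mon_buffers_alloc_sketch(struct dp_pdev_mon *pdev)
{
    if (pdev->pdev_mon_init)
        return 0;                         /* already initialized, nothing to do */

    /* ... allocate monitor buffers and link descriptor memory ... */

    pdev->pdev_mon_init = true;
    return 0;
}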
CRs-Fixed: 2852399
Change-Id: I87ceaf0af771abed1bff651ef6f2b9ca21e038b9
Some platforms only permit the device to allocate one MSI interrupt,
so the macro WLAN_ONE_MSI_VECTOR is added to support this
one-MSI-vector mode. There is only one MSI data count in this mode,
so it is valid for two or more MSI DP groups to share one MSI data.
Change-Id: Iaab2a2ba538ea3750a32454855144a0cb0776eca
CRs-Fixed: 2843378
Multipass init is called from vdev attach and
multipass deinit is called from tx vdev detach.
To maintain symmetry between init and deinit,
move multipass deinit to vdev detach.
CRs-Fixed: 2840338
Change-Id: I18657497b9e09ec5cc75d245765f6d0fd7d061fd
Currently, if the driver is in the runtime suspend/suspending
state, any packet transmission will request a resume
via hif_pm_runtime_get.
Unfortunately the logging in the above API consumes
extra time in the NET_TX softirq context, which can
delay the processing of other softirqs in the system.
Fix this by skipping the logging in the packet transmission
path.
Change-Id: Icc9f5b67794f7666243eb059f2e07a06a987002e
CRs-Fixed: 2844126
The rx_mpdu_missed statistic used by CNE should record instances
of BA holes, not instances of duplicated frames.
Fill this field with the correct value to avoid incorrect
VoLTE switching.
Change-Id: I815b9a4caeb6eedf36be66f8650ca98d00542c60
CRs-Fixed: 2848460
In security mode, allow only EAPOL frames in the receive path
when the peer is not authorized. This feature is enabled per VAP
based on a vdev flag and applies to all peers in that VAP.
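A sketch of the rx-path filter; the flag and field names below are
hypothetical, only the EAPOL ethertype and the allow/drop logic
follow the description above:

#include <stdbool.h>
#include <stdint.h>

#define ETHERTYPE_EAPOL 0x888e

struct dp_vdev_cfg  { bool limit_unauth_to_eapol; };  /* per-VAP vdev flag */
struct dp_peer_info { bool authorize; };              /* peer authorized state */

/* Return true if the rx frame must be dropped before delivery. */
bool dp_rx_drop_unauthorized(const struct dp_vdev_cfg *vdev,
                             const struct dp_peer_info *peer,
                             uint16_t ethertype)
{
    if (!vdev->limit_unauth_to_eapol)
        return false;                     /* feature disabled on this VAP */

    if (peer->authorize)
        return false;                     /* peer authorized: allow everything */

    /* Unauthorized peer: only EAPOL may pass. */
    return ethertype != ETHERTYPE_EAPOL;
}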
Change-Id: Ic5dea09c2083f31e8cd301a0cdc3565f247b735c
For lithium-based chips, the rssi_comb value fetched from the monitor
buffer TLV is already in dBm; if the noise_floor value is added to it
again in qdf_nbuf_update_radiotap(), the rssi shown in wireshark
will be incorrect.
Convert rssi_comb to dB so that qdf_nbuf_update_radiotap() stays
correct without impacting legacy chips.
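A hedged sketch of the conversion; the noise-floor constant and the
lithium check below are placeholders, the point is only that the dBm
value is pre-converted to dB so the later radiotap update does not
add the noise floor twice:

#include <stdint.h>
#include <stdbool.h>

#define ASSUMED_NOISE_FLOOR_DBM (-96)     /* placeholder noise floor */

/*
 * qdf_nbuf_update_radiotap() adds the noise floor to the RSSI it is
 * given. Lithium targets already report rssi_comb in dBm, so convert
 * it to dB (relative to the noise floor) first; legacy targets already
 * report dB and are passed through unchanged.
 */
int32_t dp_mon_rssi_in_db(int32_t rssi_comb, bool is_lithium_target)
{
    if (is_lithium_target)
        return rssi_comb - ASSUMED_NOISE_FLOOR_DBM;

    return rssi_comb;                     /* legacy chip: already in dB */
}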
Change-Id: I889a15f39ebc639386405fb0aae1909c0cce75e0
CRs-Fixed: 2844896
Reduce excessive data path logging to kmsg during roaming and
change the logging level for such prints to debug/info
high.
Change-Id: I1e6506de83e59a31304234905a3dd38f6d8b03f3
CRs-Fixed: 2848624
Currently the rx ring history is allocated dynamically
at load time. The memory required to save this history
is more than a page (order 5 to 6 allocations).
Such a big memory allocation can fail due to various
reasons, one of them being memory fragmentation.
Fix this by pre-allocating the rx ring history memory.
Also allocate the rx reinject history memory when the
HW accelerated path is used.
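One way to realize the pre-allocation, sketched with a static pool
standing in for the platform's pre-allocated memory; the real driver
uses its prealloc infrastructure rather than a local buffer:

#include <stdlib.h>
#include <stddef.h>

/* Static pool standing in for memory reserved early at init. */
static char rx_history_pool[1 << 16];
static int  rx_history_pool_used;

static void *prealloc_get(size_t size)
{
    if (rx_history_pool_used || size > sizeof(rx_history_pool))
        return NULL;

    rx_history_pool_used = 1;
    return rx_history_pool;
}

/*
 * Order 5-6 allocations for the rx ring history can fail on a
 * fragmented system, so prefer the pre-allocated block and fall back
 * to a dynamic allocation only if none is available.
 */
void *dp_rx_history_alloc(size_t size)
{
    void *mem = prealloc_get(size);

    if (mem)
        return mem;

    return calloc(1, size);               /* best-effort dynamic fallback */
}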
Change-Id: Id957cd5df91a2ca7f182dea691a0557b4e386f55
CRs-Fixed: 2844388
WLAN HW can still access the IPA tx doorbell address after the
pipes are disabled if there are any pending tx completions, which
could result in a NOC error.
Fix is to reset the WBM2SW ring HP addr to the shadow addr in
DDR before the pipes are disabled.
Change-Id: I52900eb34530388487923a887354ef8839d8c728
CRs-Fixed: 2846421
When runtime power management is enabled, we do not post any
messages to CE until the system is fully resumed; instead they are
queued in HTC. Once the device is fully resumed, all HTT messages in
the queues are downloaded to CE at once. In some corner cases, it is
found that too many FSE cache flush messages get queued during the
runtime resume; once the device is fully resumed, all of these would
be downloaded to the FW/HW at once, causing unintended crashes.
Avoid posting FSE cache flush messages until the device is fully
resumed to fix the issue.
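A sketch of the gating idea, with a hypothetical runtime-PM state
enum and a stubbed HTT send; the actual driver tracks its resume
state differently:

enum dp_rtpm_state {
    DP_RTPM_SUSPENDED,
    DP_RTPM_RESUMING,
    DP_RTPM_RESUMED,
};

struct dp_fst_ctx {
    enum dp_rtpm_state rtpm_state;
};

/* Stub for the real HTT FSE cache flush send. */
static int htt_fse_cache_flush_send_stub(struct dp_fst_ctx *ctx)
{
    (void)ctx;
    return 0;
}

/*
 * Skip queuing the FSE cache flush HTT message until the device is
 * fully resumed; otherwise many of them pile up in HTC during runtime
 * resume and get downloaded to FW/HW in one burst.
 */
int dp_fse_cache_flush(struct dp_fst_ctx *ctx)
{
    if (ctx->rtpm_state != DP_RTPM_RESUMED)
        return 0;                         /* drop; a later flush covers it */

    return htt_fse_cache_flush_send_stub(ctx);
}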
Change-Id: I2be21406e8c7a9dcd86df3e93c2568797defcad0
CRs-Fixed: 2843541
Add the previously freed nbuf and buffer start address info to the
rx descriptor. This helps in debugging use-after-free access of rx
buffers.
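A simplified view of the extra debug fields; the descriptor layout
and helper below are illustrative, not the driver's actual rx
descriptor:

#include <stdint.h>
#include <stddef.h>

typedef void *qdf_nbuf_t;                 /* opaque network buffer handle */

/* Simplified rx descriptor carrying the new debug fields. */
struct rx_desc_dbg {
    qdf_nbuf_t nbuf;                      /* currently attached nbuf */
    qdf_nbuf_t prev_nbuf;                 /* nbuf that was last freed */
    uint8_t   *prev_buf_start;            /* data start of that freed buffer */
};

/* Record the old buffer before releasing it so use-after-free can be traced. */
void rx_desc_record_free(struct rx_desc_dbg *desc, uint8_t *buf_start)
{
    desc->prev_nbuf = desc->nbuf;
    desc->prev_buf_start = buf_start;
    desc->nbuf = NULL;
}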
Change-Id: I1c883bf049ce75dd0413b85946fe2982648d8004
CRs-Fixed: 2827151
In the existing implementation, the below allocations for monitor
mode are done at pdev attach and init time:
a. 64 monitor buffers for the RxDMA monitor buffer ring
b. Link descriptor memory for the monitor link descriptor ring
This memory is wasted for customers not using monitor mode and for
the low memory profile.
To optimize this memory, allocate all buffers and link descriptor
memory at monitor vdev creation time.
Change-Id: I873c76d2f625a782532a101037915b0353928a5b
CRs-Fixed: 2829402
rx_ring_history is an array of pointers; the address of an array
element is always non-NULL, so the NULL check always passed, which
leads to a NULL pointer dereference. Fix the check to test the
pointer stored in the array instead.
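The pattern behind the bug, shown with a stripped-down soc structure;
only the shape of the check matters here:

#include <stdio.h>

#define MAX_REO_DEST_RINGS 4

struct rx_history { int index; };

struct soc_sketch {
    struct rx_history *rx_ring_history[MAX_REO_DEST_RINGS];
};

void rx_history_check(struct soc_sketch *soc, int i)
{
    /*
     * Buggy check: "if (!&soc->rx_ring_history[i])" tests the address
     * of the array slot, which is never NULL, so the guard never
     * triggers and an unallocated pointer gets dereferenced later.
     *
     * Correct check: test the pointer stored in the slot.
     */
    if (!soc->rx_ring_history[i]) {
        printf("rx_ring_history[%d] not allocated, skip\n", i);
        return;
    }

    /* soc->rx_ring_history[i] is now safe to dereference. */
}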
Change-Id: I401203a6f2a5930869cf4002ac0e714d3fdba62f
CRs-Fixed: 2844038
Scenario:
When a new FISA FST entry is initialized, the host starts a timer to
send the HTT message DP_HTT_FST_CACHE_INVALIDATE_FULL to FW in 5 ms.
If WOW suspend happens at the same time, the PCIe bus gets suspended.
5 ms later, sending the HTT message tries to prevent PCIe L1 in order
to update the CE SRNG HP register and hits an assert because the PCIe
bus is already suspended.
Suspend and cancel the FSE cache flushing timer in dp_bus_suspend and
resume it in dp_bus_resume.
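A sketch of pausing and re-arming the flush timer around bus
suspend/resume; the timer object and FST context here are stand-ins
for the driver's qdf timer and FISA state:

#include <stdbool.h>

struct flush_timer { bool armed; };

struct rx_fst_sketch {
    struct flush_timer cache_flush_timer;
    bool fse_cache_flush_allow;
};

static void timer_stop(struct flush_timer *t)  { t->armed = false; }
static void timer_start(struct flush_timer *t) { t->armed = true;  }

/* From dp_bus_suspend: stop the pending 5 ms flush before the PCIe
 * bus goes down so the HTT message cannot race with the suspend. */
void rx_fst_flush_timer_suspend(struct rx_fst_sketch *fst)
{
    fst->fse_cache_flush_allow = false;
    timer_stop(&fst->cache_flush_timer);
}

/* From dp_bus_resume: allow flushes again and re-arm the timer. */
void rx_fst_flush_timer_resume(struct rx_fst_sketch *fst)
{
    fst->fse_cache_flush_allow = true;
    timer_start(&fst->cache_flush_timer);
}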
CRs-Fixed: 2843214
Change-Id: Ie2bc115a0de068335d6c46749f52d205cc21f5a3
Account for the Tx buffers allocated for IPA during
init. Add this memory to the overall Tx nbuf memory
allocations. Ensure that the nbuf size is taken from the
end pointer to the head pointer of the nbuf.
Change-Id: Ie3a46c7e7674f3f2e1bf9e0791a7eb53d4bb0b21
CRs-Fixed: 2831015
When the driver is doing IPA rx buffer SMMU mapping,
qdf_spin_lock_bh is used to protect the rx descriptor pool,
but a might-sleep function is called by the API ipa_is_ready.
This causes a kernel panic about a sleeping function called
from an invalid context, with the following call trace:
Call trace:
___might_sleep+0x204/0x208
__might_sleep+0x50/0x88
__mutex_lock_common+0x5c/0x1078
mutex_lock_nested+0x40/0x50
ipa3_is_ready+0x2c/0x60
ipa_is_ready+0x24/0x38
dp_ipa_handle_rx_buf_pool_smmu_mapping+0x2dc/0x6d0 [wlan]
Move the ipa_is_ready check function call outside of the spin lock.
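A sketch of the reordering; a pthread mutex stands in for
qdf_spin_lock_bh and the readiness check is stubbed, since the point
is only that the may-sleep call happens before the lock is taken:

#include <stdbool.h>
#include <pthread.h>

/* Stub for ipa_is_ready(), which may sleep (it takes a mutex internally). */
static bool ipa_ready_stub(void)
{
    return true;
}

struct rx_desc_pool {
    pthread_mutex_t lock;                 /* qdf_spin_lock_bh in the driver */
};

void rx_buf_pool_smmu_mapping(struct rx_desc_pool *pool)
{
    /* Do the may-sleep readiness check before taking the lock. */
    if (!ipa_ready_stub())
        return;

    pthread_mutex_lock(&pool->lock);
    /* ... walk the rx descriptor pool and update SMMU mappings ... */
    pthread_mutex_unlock(&pool->lock);
}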
Change-Id: I5d3a79dff8a045791834733514a40f7c1ccb0d8b
CRs-Fixed: 2839292
Low memory profiles like the 256M and 16M profiles support
only NSS Wi-Fi offload mode, and HOST data path APIs are
not used in NSS offload mode.
Disable the HOST data path APIs which are not used in both
NSS Wi-Fi offload mode and in HOST mode (in NSS offload mode).
CRs-Fixed: 2831478
Change-Id: I6895054a6c96bd446c2df7761ce65feef662a3cc
As part of monitor mode init, add a check for a null monitor vdev
before accessing vdev_id, in case a previous cleanup of the monitor
vdev is still ongoing.
Change-Id: I0165684c8aa2abea73fa8c0d1692dac789fb20f6
CRs-Fixed: 2835758
During interrupt mask initialization, the bitmaps of the tx ring
mask and reo dest ring mask are filled in host mode based
on the number of rings.
When NSS offload mode and dynamic HW mode are enabled,
the interrupt reset is done based on the modified ring count.
This causes invalid hal_srng access for the interrupt
contexts which are not reset.
So, reset the tx ring mask and reo ring mask based on the host
mode ring count. Also reset when the mask value is non-zero.
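Loosely, the reset walks the interrupt contexts and clears the mask
bits derived from the host-mode ring count, skipping contexts whose
mask is already zero; the sketch below only approximates that shape,
with an assumed context count and mask layout:

#include <stdint.h>

#define NUM_INT_CONTEXTS 11               /* assumed interrupt context count */

struct intr_cfg_sketch {
    uint32_t tx_ring_mask[NUM_INT_CONTEXTS];
    uint32_t reo_dest_ring_mask[NUM_INT_CONTEXTS];
};

/* Clear mask bits for the first host_ring_count rings in every context
 * that still has a non-zero mask, so no stale interrupt context is left
 * pointing at a ring the host no longer owns. */
void intr_mask_reset(struct intr_cfg_sketch *cfg, int host_ring_count)
{
    uint32_t rings = (1u << host_ring_count) - 1;
    int i;

    for (i = 0; i < NUM_INT_CONTEXTS; i++) {
        if (cfg->tx_ring_mask[i])
            cfg->tx_ring_mask[i] &= ~rings;
        if (cfg->reo_dest_ring_mask[i])
            cfg->reo_dest_ring_mask[i] &= ~rings;
    }
}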
Change-Id: I6e3ef2df7a1f71aa17ebf8499467b39bc84f88bf