Currently, we are flushing fragments for a peer as part of rx defrag
cleanup. There is the possibility that the frag flush is called while
the frags are being accessed in parallel in the rx defrag store
fragment path, which will cause a use-after-free issue.
Add a lock in the rx defrag store fragment path to avoid any
parallel flush while accessing the frags.
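A minimal sketch of the locking, assuming a per-TID defrag spinlock
(the field name here is illustrative, not the exact driver field):

    /* Serialize the store-fragment path against the flush done
     * from rx defrag cleanup (illustrative lock name). */
    qdf_spin_lock_bh(&rx_tid->defrag_tid_lock);
    /* ... enqueue and inspect fragments on the rx_tid frag list ... */
    qdf_spin_unlock_bh(&rx_tid->defrag_tid_lock);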
Change-Id: Ic27caab58429d776449f4b774eb7163ffa6215ac
CRs-Fixed: 2995156
Fragments in open/WEP security modes do not carry consecutive PN
numbers, so there is no point in dropping such frames.
Change-Id: If75a99cb89c6b0997e9b07afc582c4141c277bb8
The GCMP header and MIC are not removed from received
fragments, which results in an incorrect ethertype
and the presence of an LLC hdr in the data when the frames
are sent to the network stack.
Fix is to add support for GCMP in the rx de-fragmentation
path.
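A rough sketch of the added handling, mirroring the existing CCMP
path; the length constants are assumptions (GCMP uses an 8-byte
header and a 16-byte MIC):

    #define WLAN_IEEE80211_GCMP_HEADERLEN 8   /* assumed constant */
    #define WLAN_IEEE80211_GCMP_MICLEN    16  /* assumed constant */

    /* Each fragment is its own MPDU: strip the GCMP header from the
     * front and the MIC from the tail before the 802.11 -> 802.3
     * conversion, so ethertype/LLC parsing sees clean payload. */
    qdf_nbuf_pull_head(frag, WLAN_IEEE80211_GCMP_HEADERLEN);
    qdf_nbuf_trim_tail(frag, WLAN_IEEE80211_GCMP_MICLEN);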
Change-Id: I83ed29a766e40e32f4b712342ebd40d08a2c65e0
CRs-Fixed: 2941879
IPA unmapping is skipped for the WBM internal error case;
do IPA unmapping for rx buffers received in the WBM error case.
While reinjecting an RX buffer back to REO, the software rx desc
unmapped field is not set in sequence, which may lead to a
map-without-unmap error in the IPA module. So reset the unmapped
field after IPA mapping is complete.
Change-Id: I42c1eaa1620f975d47ce2938c95b6b89dbbd3eca
CRs-Fixed: 2952671
While Rx buffers are getting unmapped from the net rx context, if
IPA pipes are enabled at the same time from the MC thread context,
this leads to a race condition and IPA map/unmap going out of sync.
To fix this, introduce an IPA mapping lock, and handle IPA mapping
with the lock held.
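A minimal sketch of the serialization, assuming a soc-level lock
around a generic SMMU map/unmap helper (names illustrative):

    /* Both the net-rx-context unmap and the MC-thread pipe enable
     * take this lock, keeping IPA map/unmap strictly ordered. */
    qdf_spin_lock_bh(&soc->ipa_rx_buf_map_lock);
    dp_ipa_handle_rx_buf_smmu_mapping(soc, nbuf, size, create);
    qdf_spin_unlock_bh(&soc->ipa_rx_buf_map_lock);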
Change-Id: I9fa71bdb6d4e4aa93fc795cc5dd472a181325991
CRs-Fixed: 2945063
CVE-2020-26145
Broadcast and multicast frames should never be fragmented. Several devices
process broadcast fragments as normal unfragmented frames. Moreover, some
devices accept plaintext fragmented broadcast or multicast frames in
protected Wi-Fi networks. An adversary can abuse this to inject packets
by encapsulating them in a fragmented plaintext broadcast frame. Even
unicast packets can be encapsulated in broadcast Wi-Fi frames and hence
be injected.
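The receiver-side mitigation is simple; a hedged sketch (using the
generic 802.11 header helpers, not necessarily the exact driver code):

    struct ieee80211_frame *wh = (struct ieee80211_frame *)rx_pkt_hdr;

    /* Fragments must be unicast: drop group-addressed fragments so
     * plaintext broadcast fragments cannot be used for injection. */
    if (IEEE80211_IS_MULTICAST(wh->i_addr1))
        return QDF_STATUS_E_DEFRAG_ERROR;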
Change-Id: I3181a05e177cf9374a14edb748bc5001d058e0f3
CRs-Fixed: 2893212
Modify the check to ensure the packet number is consecutive across
fragments, and drop the fragments if the check fails.
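Roughly, the tightened check walks the fragment chain like this
(sketch; variable names and the PN helper are illustrative):

    /* Each fragment of an MPDU must carry a PN of exactly
     * prev_pn + 1; anything else indicates replay/injection. */
    for (cur = frag_head; cur; cur = qdf_nbuf_next(cur)) {
        cur_pn = dp_rx_frag_get_pn(cur); /* illustrative helper */
        if (cur != frag_head && cur_pn != prev_pn + 1)
            return QDF_STATUS_E_FAILURE; /* drop all fragments */
        prev_pn = cur_pn;
    }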
Change-Id: I2ca0ef6211594ba35aae894e6a385d3d5778bff6
CRs-Fixed: 2874369
There is a possibility of receiving fragmented packets just before
tid setup is done, so rate limit the tid-not-setup error log
to avoid excessive console logging.
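For instance, switching the print to the rate-limited variant
(assuming the dp rate-limited log macros):

    /* dp_err_rl() suppresses messages beyond the configured burst,
     * so a flood of early fragments cannot stall the console. */
    dp_err_rl("rx tid setup not done for peer_id %d tid %d",
              peer_id, tid);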
Change-Id: I372d3904650fcbf2ad11313da1087da52a0d3dc6
CRs-Fixed: 2884897
The memory below 0x2000 is reserved for target use,
so no memory in this region should be used by the host.
But on some third-party platforms, we observe that such
memory is allocated for RX buffers, which causes an HW/FW
NOC error, after which RX is stuck. To address this,
re-allocate RX buffers whenever such a low-address buffer appears.
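A hedged sketch of the workaround (the bound and the allocation
parameters are illustrative):

    #define DP_TARGET_RESERVED_PADDR_END 0x2000 /* reserved region */

    /* Re-allocate until the nbuf DMA address falls outside the
     * target-reserved low region; simplified sketch. */
    while (nbuf && qdf_nbuf_get_frag_paddr(nbuf, 0) <
                   DP_TARGET_RESERVED_PADDR_END) {
        qdf_nbuf_free(nbuf);
        nbuf = qdf_nbuf_alloc(soc->osdev, buf_size, 0, 4, FALSE);
        /* map the new buffer before re-checking its address */
    }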
Change-Id: Iad118e82f3fe10f92cbf5f7388dc0960542fc03c
CRs-Fixed: 2707190
Use macros like dp_cdp_debug, dp_cdp_err and dp_cdp_info to
print logs for QDF_MODULE_ID_CDP.
Change-Id: I550eefa82c3417b8bf83378d4a9c6f382098fea6
CRs-Fixed: 2855937
While performing the MIC check for MPDU-fragmented packets, the
host expects the last fragment to hold the full 8 bytes of the MIC,
but this is not true for MPDU-level fragmentation: since the MIC is
part of the payload, it can be split across the last two fragments.
Fix this logic by extracting the MIC from the last two fragments
when the last fragment doesn't hold the full 8 bytes of the MIC.
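A sketch of the extraction, assuming an 8-byte MIC and the QDF copy
helper (variable names illustrative):

    uint32_t last_len = qdf_nbuf_len(last_frag);
    uint8_t mic[8]; /* 8-byte MIC */

    if (last_len >= sizeof(mic)) {
        qdf_nbuf_copy_bits(last_frag, last_len - sizeof(mic),
                           sizeof(mic), mic);
    } else {
        /* MIC is split: its head sits at the tail of the
         * second-to-last fragment. */
        uint32_t rem = sizeof(mic) - last_len;

        qdf_nbuf_copy_bits(prev_frag, qdf_nbuf_len(prev_frag) - rem,
                           rem, mic);
        qdf_nbuf_copy_bits(last_frag, 0, last_len, mic + rem);
    }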
Change-Id: I41aaa35d9a18ac0222ab55be6822f9c9d7f15982
CRs-Fixed: 2790661
The logging macros implicitly take care of embedding the function name
in the log, hence there is no need to include __func__ again.
Getting rid of the redundant __func__ reduces the driver memory footprint.
Change-Id: I26bfac840ac6732ac83fb008db8e1702996eb21e
CRs-Fixed: 2774457
ring_desc is already validated in dp_rx_err_process before being
passed to dp_rx_defrag_store_fragment.
Hence remove the additional checks for ring_desc.
Change-Id: Ib863ea4d512075beed58f09ff6167aa2a556efea
CRs-Fixed: 2771408
Skip the recording of the rx error ring and rx reinject ring
history if the memory allocation for the history
fails during attach.
Change-Id: Ifa74937d0c37e6013ccaec851dab7d9e53057ac2
CRs-Fixed: 2767579
Do not do PN check for open security mode defrag frames as it
is not applicable.
Change-Id: I7ed1073953c08b191c15c659a0d216eb7ed49b31
CRs-Fixed: 2753520
Add support to get the peer reference with a module id,
to help debug peer reference related issues.
Change-Id: Ie20c7e710b9784b52f2e0f3d7488509282528a00
Use the unified version of the dp_peer_find_by_id API,
which takes a peer reference.
Also use the unified peer ref release API, dp_peer_unref_delete.
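Usage then follows a simple get/release pattern (sketch):

    struct dp_peer *peer = dp_peer_find_by_id(soc, peer_id);

    if (!peer)
        return; /* no reference was taken */
    /* ... use peer safely; the reference keeps it alive ... */
    dp_peer_unref_delete(peer);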
Change-Id: Ibb516a933020a42a5584dbbbba59f8d9b72dcaa4
hdr_ptr points into skb buffer data and is assigned from a 6-byte
array; convert it through a uint8_t pointer to avoid a static-analysis
overflow warning.
tid is protected by an assertion, but execution still needs to break
out to avoid a static-analysis warning.
Fix a use-after-free of ast_entry.
Change-Id: I0835f93291cf3da2b4fd57d8c9a90f20a60c11ee
CRs-Fixed: 2751678
In dp_srng_init, max_buffer_length and prefetch_timer are used
while uninitialized.
In dp_bucket_index, overrunning the array cdp_sw_enq_delay leads to
out-of-bounds access.
In dp_rx_defrag_fraglist_insert, cur is first NULL-checked, but
cur is then set to qdf_nbuf_next and is accessed without a
NULL check. Thus do a NULL check again before dereferencing
cur to avoid a potential NULL pointer dereference.
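The dp_rx_defrag_fraglist_insert fix amounts to (sketch):

    cur = qdf_nbuf_next(cur);
    if (!cur) /* re-check: next may be NULL at the tail */
        break;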
In htt_t2h_stats_handler, soc could be NULL while cmn_init_done
is dereferenced. Fix this by NULL-checking soc first and only then
dereferencing cmn_init_done.
Change-Id: Ie6a33347d34862f30ba04a10096d3892af7571d3
CRs-Fixed: 2751573
We have come across scenarios where rxdma pushes
a certain entry more than once to the reo exception
ring. In this scenario, when we try to unmap a buffer,
it can lead to multiple unmaps of the same buffer.
Handle this case by skipping the processing of such a
duplicate entry, if already unmapped, and proceeding to the
next entry.
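A hedged sketch of the duplicate handling, using the software rx
descriptor's unmapped field mentioned earlier in this log:

    if (rx_desc->unmapped) {
        /* Duplicate exception-ring entry: the buffer was already
         * unmapped, so skip it instead of unmapping twice. */
        goto next_entry; /* illustrative label */
    }
    qdf_nbuf_unmap_single(soc->osdev, rx_desc->nbuf,
                          QDF_DMA_FROM_DEVICE);
    rx_desc->unmapped = 1;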
Change-Id: Iae66f27e432f795ba4730911029fa1d63a75cb06
CRs-Fixed: 2739176
Deliver fragmented frames directly to stack without reinjecting back
to REO. Handle PN check for fragmented frames before delivery.
Drop the frames on PN check failure.
Change-Id: I7865def0d39fa83378073e07d318c34dccc6c6e5
CRs-Fixed: 2739870
The same link descriptor address/cookie is observed back to back
in the WBM idle link desc ring.
Add a duplicate link desc check when the host
refills link descriptors to WBM through the SW2WBM release ring,
and likewise for the REO reinject ring.
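A minimal sketch of the check, assuming the last released
address/cookie pair is cached on the soc (fields illustrative):

    /* Flag back-to-back repeats before releasing the descriptor. */
    if (soc->last_rel_info.paddr == buf_info.paddr &&
        soc->last_rel_info.sw_cookie == buf_info.sw_cookie) {
        dp_err_rl("duplicate link desc: paddr 0x%llx cookie 0x%x",
                  (uint64_t)buf_info.paddr, buf_info.sw_cookie);
        return QDF_STATUS_E_FAILURE;
    }
    soc->last_rel_info = buf_info;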
Change-Id: Iaf9defd87670776fa9488d7f650efa3c08fefa60
CRs-Fixed: 2739879
The rx rings are relatively small. Any
duplicate entry sent by the hardware cannot be back-tracked
by looking at the ring contents alone.
Hence add a history to track the entries for the
REO2SW, REO exception and SW2REO rings.
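A sketch of such a history: a small power-of-two circular buffer of
per-entry snapshots (type and size here are illustrative):

    #define DP_RX_HIST_MAX 2048 /* power of two, assumed size */

    struct dp_rx_ring_history {
        qdf_atomic_t index;
        struct hal_buf_info entry[DP_RX_HIST_MAX];
    };

    static void dp_rx_history_add(struct dp_rx_ring_history *hist,
                                  struct hal_buf_info *info)
    {
        uint32_t idx = qdf_atomic_inc_return(&hist->index) &
                       (DP_RX_HIST_MAX - 1);

        hist->entry[idx] = *info; /* snapshot for post-mortem */
    }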
Change-Id: I9b0b311950d60a9421378ce0fcc0535be450f713
CRs-Fixed: 2739181
DP RX changes to support an RX buffer pool: a pre-allocated pool
of buffers which will be utilized during low-memory conditions.
Change-Id: I8d89a865f989d4e88c10390861e9d4be72ae0299
CRs-Fixed: 2731517
Currently, when running downlink traffic with
fragmentation enabled, the SW2REO ring is getting
full. Increase the SW2REO ring size to 128 entries.
Change-Id: If43bc72a8cc173d44953ca367573800243b1cc5d
CRs-Fixed: 2738309
Ratelimit the defrag path error logs, and add stats
for fragmented packets received out of order, which
in turn leads to a sequence number mismatch in the defrag path.
Change-Id: I17d4c1cff214a8c8a05abf576701824b293d2883
CRs-Fixed: 2740805
In lithium a peer will have only a single peer_id, hence remove
the peer_ids array from the dp_peer structure.
Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
Restrict DMA map/unmap to the buffer size for packets in the rx
process. This gives a 2-3% CPU gain at peak throughput.
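Roughly (sketch, using the QDF nbytes map/unmap variants):

    /* Map only buf_size bytes instead of the full skb, saving
     * cache maintenance on the unused tail of the buffer. */
    qdf_nbuf_map_nbytes_single(soc->osdev, nbuf,
                               QDF_DMA_FROM_DEVICE, buf_size);
    /* ... rx processing ... */
    qdf_nbuf_unmap_nbytes_single(soc->osdev, nbuf,
                                 QDF_DMA_FROM_DEVICE, buf_size);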
Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
The pointer rx_msdu_link_desc returned from the call to function
dp_rx_cookie_2_mon_link_desc may be NULL and may be
de-referenced later.
CRs-Fixed: 2645199
Change-Id: I9ccba61df9571fcc99c5d5493194d5ae43a71a7f
pdev_id is being initialized to 0. Since 0 is a valid pdev_id, it
is being accessed even though no pdev is present for that id.
Initialize pdev_id to 0xFF by default, and add checks in the API to
detect the valid pdev_id value corresponding to the lmac_id.
Change-Id: I2b2a38783615494ccc08e265702815f7e562214b
PDEV was being obtained from the lmac_id by directly indexing the
pdev_list array. Instead, we need to use dp_get_pdev_for_lmac_id.
Change-Id: I1c4a0f3df5db59390e17666a5f712c5412e22bb1
CRs-Fixed: 2627909
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework, the user can configure at compile
time a 2K SKB for data buffers, monitor status rings, monitor
descriptor rings and monitor destination rings, and a 4K SKB for
monitor buffers.
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
Crash scenario:
(1) frag data A is dropped and the related RX desc A is replenished
and reused, but pdev->free_list_tail still points to RX desc A.
(2) frag data B/C comes in and defrag fails; pdev->free_list_head
then points to the B-->C RX descs, but pdev->free_list_tail still
points to A.
(3) for the defrag failure case, the host only replenishes 1 RX
buffer, so RX desc B is replenished, while C is freed back to the
RX desc pool.
(4) dp_rx_add_desc_list_to_free_list sets RX desc A-->next =
free_list, so free_list points to C instead.
(5) when the buffer replenished for RX desc A in step (1) is
indicated to the host by the REO2Dst ring, RX desc A-->nbuf actually
points to another RX desc, and invalid skb access happens.
Solution (see the sketch below):
a. reset the tail pointer last in dp_rx_add_desc_list_to_free_list.
b. reset the tail pointer to the head in dp_rx_add_to_free_desc_list
if head->next is NULL.
c. set the correct rx_bufs number for replenish when dp_rx_defrag
fails.
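A simplified sketch of solutions (a) and (b) against the descriptor
free-list helpers named above (element types elided):

    /* (b) dp_rx_add_to_free_desc_list: if the new node is the only
     * one on the list, the tail must follow the head. */
    new->next = *head;
    *head = new;
    if (!(*head)->next)
        *tail = *head;

    /* (a) dp_rx_add_desc_list_to_free_list: splice the local list
     * onto the pool free list first, and reset the caller's tail
     * pointer only at the very end. */
    (*local_tail)->next = pool->free_list;
    pool->free_list = *local_head;
    *local_head = NULL;
    *local_tail = NULL; /* reset tail last */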
Change-Id: Ib297baea3605a09dd7d85d1f5ceb95db48a2e1f1
CRs-Fixed: 2603676
1. Move all LMAC rings to SOC from pDEV
2. Dynamically obtain lmac->pdev mapping while handling LMAC interrupts
Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
Currently the wlan host re-injects defrag data with RBM 6 to
REO; this data buffer will go through REO-->REO2SW4-->IPA-->FW2RXDMA.
Fix the below issues introduced by this RX buffer path:
a. FW assert due to the FW2RXDMA DMA address not being 4-byte aligned.
b. host skb double allocation due to qdf_nbuf_linearize() for frag skb.
c. Invalid RBM 6 for fragment RX due to RX buffer reuse.
Change-Id: I36d831fc14b6b9aa0cea32682823de348f7eecd3
CRs-Fixed: 2591453
When a lot of fragment frames with an invalid peer_id are indicated
to the host through the reo exception ring, excessive logging causes
the srng lock to be held for a long time.
Degrade the log level as a WAR to avoid excessive logging.
Change-Id: Iffb18ad723d1ac5955868cc8ec99bb0198785ee5
CRs-Fixed: 2584610
Currently defragmented packets use HAL_RX_BUF_RBM_SW1_RBM as the RBM
value for the defragmented packets which are re-injected into REO.
Thus, if REO encounters any error while handling these packets, they
would end up in WBM2SW1 ring (via WBM), which is managed by the FW. The
FW will eventually recycle these buffers back to RXDMA via its refill
process. As a part of defragmentation, host does a 802.11 -> 802.3
header conversion. This is resulting in an address which is not 4
byte aligned. Hence, when RXDMA tries to use these addresses (after FW
recycles them), it may lead to issues.
Change the RBM value of the defragmented buffers which are
re-injected. Now, if REO ends up throwing an error for these
packets, they will end up in WBM2SW3, which is managed by the host.
The host can then drop these packets and replenish RXDMA with 4 byte
aligned buffers (via FW).
Change-Id: I9d9c25385978d5be855699feb28d292c6f3fffdd
CRs-Fixed: 2572483
Remove pdev and vdev control path handles from data path.
Instead send pdev_id and vdev_id along with opaque soc
handle in ol_if_ops.
Change-Id: I6ee083f07e464f283da0d70ada70a4e10e18e1b2
Fix compilation error in DP_PRINT which is
seen when compiled for lithium datapath.
CRs-Fixed: 2567068
Change-Id: I4bb7bb14a2e521645be6e2677a44a78fa79fcb0b
A fresh skb from the reo exception ring is expected to have an
skb->len of 0, but we are unexpectedly receiving one with a len
greater than 0, so drop the packet.
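The guard is straightforward (sketch; the surrounding ring-entry
loop is implied):

    /* A freshly replenished buffer must be untouched; a non-zero
     * length means a stale or duplicated descriptor, so drop it. */
    if (qdf_unlikely(qdf_nbuf_len(nbuf) > 0)) {
        qdf_nbuf_free(nbuf);
        continue; /* move on to the next ring entry */
    }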
Change-Id: I1079b40dbb3180dd1ccd87f47668f75d10003934
CRs-Fixed: 2556164
For RX defrag, if an incorrect SW peer ID is obtained from the REO
exception ring descriptor (the expectation may be the AP BSS peer ID,
but it is replaced by another peer ID, such as the SAP self peer,
which does not go through dp_peer_rx_init), then dp_rx_defrag_cleanup
gets no chance to run dp_rx_clear_saved_desc_info to free
dst_ring_desc, since rx_tid[].array is NULL, and a memory leak
happens.
Call dp_rx_clear_saved_desc_info unconditionally in
dp_rx_defrag_cleanup.
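So the cleanup becomes unconditional (sketch; only the relevant call
is shown):

    void dp_rx_defrag_cleanup(struct dp_peer *peer, unsigned int tid)
    {
        /* Free the saved REO destination descriptor even when
         * rx_tid[tid].array was never initialized for this peer. */
        dp_rx_clear_saved_desc_info(peer, tid);
        /* ... existing frag-queue and waitlist teardown ... */
    }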
Change-Id: Ib1ebfbd976c817d5238ee48196388a8c88189ebc
CRs-Fixed: 2549913
Currently for the REO reinject path, the first fragment is in the
linear part of the skb buffer while the other fragments are
appended to the skb buffer as non-linear paged data. The other
point is that for the REO reinject buffer, l3_header_padding is
not there, meaning the ethernet header is right after struct
rx_pkt_tlvs.
The above implementation has issues when the WLAN IPA path is
enabled.
Firstly, IPA assumes data buffers are linear. Thus the skb buffer
needs to be linearized before reinjecting into REO.
Secondly, when WLAN does the IPA pipe connection, the RX pkt offset
is hard-coded to RX_PKT_TLVS_LEN + L3_HEADER_PADDING. Thus
L3_HEADER_PADDING needs to be padded before the ethernet header and
after struct rx_pkt_tlvs.
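A hedged sketch of the two adjustments before reinjection (macro
names taken from the text above; error handling simplified):

    /* (1) IPA assumes linear buffers. */
    if (qdf_nbuf_linearize(head) == -ENOMEM)
        return QDF_STATUS_E_NOMEM;

    /* (2) Recreate the L3_HEADER_PADDING gap between rx_pkt_tlvs
     * and the ethernet header, matching the pipe's hard-coded
     * RX pkt offset of RX_PKT_TLVS_LEN + L3_HEADER_PADDING. */
    qdf_nbuf_push_head(head, L3_HEADER_PADDING);
    qdf_mem_move(qdf_nbuf_data(head),
                 qdf_nbuf_data(head) + L3_HEADER_PADDING,
                 RX_PKT_TLVS_LEN);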
Change-Id: I36d41bc91d28c2580775a1d2e431e139ff02e19e
CRs-Fixed: 2469315
During the cleanup path in STA mode, remove a false assert, as there
is just the self dp_peer with no TX and RX TIDs setup.
Change-Id: Id6be7a6b3823c41ddbff67926bda240a4e9b6bd0
CRs-Fixed: 2547680
Add the following HAL macros:
1. HAL_RX_MSDU0_BUFFER_ADDR_LSB
2. HAL_RX_MSDU_DESC_INFO_PTR_GET
3. HAL_ENT_MPDU_DESC_INFO
4. HAL_DST_MPDU_DESC_INFO
Add relevant function pointers to retrieve
descriptor info from the macros based
on the chipset.
Change-Id: I99ce7566a668180c7849eedea915b6f23a8dbf35
CRs-Fixed: 2522133