-Wimplicit-fallthrough is being enabled by default. Some compilers,
such as clang, require the fallthrough attribute rather than just a
fallthrough comment.
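For illustration, a minimal standalone contrast between the two forms,
assuming a compiler with the GCC/clang statement-attribute extension
(kernel-style code typically hides it behind a fallthrough macro):

    #include <stdio.h>

    static void handle(int level)
    {
        switch (level) {
        case 2:
            printf("level 2\n");
            /* fall through */            /* comment form: clang still warns */
        case 1:
            printf("level 1\n");
            __attribute__((fallthrough)); /* attribute form: gcc and clang accept */
        case 0:
            printf("level 0\n");
            break;
        }
    }

    int main(void)
    {
        handle(2);   /* prints levels 2, 1 and 0 */
        return 0;
    }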
Change-Id: I443da8d7f5e1771dceb3386c4458b0da6a5e9476
CRs-Fixed: 3218236
Add packet logging support by registering
Rx and Tx callback functions that log the
initial 32 Tx and Rx packets.
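As a rough sketch of the counting logic only, with entirely
hypothetical names and types (the driver's actual registration API
differs):

    #include <stdint.h>

    #define INITIAL_PKT_LOG_COUNT 32   /* hypothetical cap: first 32 packets per direction */

    /* Hypothetical callback type, invoked once per packet. */
    typedef void (*pkt_log_cb)(void *ctx, const uint8_t *data, uint32_t len);

    struct pkt_log_state {
        pkt_log_cb cb;       /* registered Tx or Rx logging callback */
        void *ctx;
        uint32_t logged;     /* packets logged so far in this direction */
    };

    /* Called from the Tx or Rx path; stops after the initial packets. */
    static void pkt_log_run_cb(struct pkt_log_state *st,
                               const uint8_t *data, uint32_t len)
    {
        if (st->cb && st->logged < INITIAL_PKT_LOG_COUNT) {
            st->cb(st->ctx, data, len);
            st->logged++;
        }
    }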
Change-Id: I91b59b7c5f65f505e3ee730c497347be28955128
CRs-Fixed: 3224881
Initialize txrx_ref_handle for the whunt bypass,
which complains about use of an uninitialized txrx_ref_handle
because it cannot detect that txrx_ref_handle is
populated by another API.
Change-Id: I0f69cf4c8442a5655559622a5825d7af35b9ae5e
CRs-Fixed: 3127788
A simple alloc is used in the RX path, which skips
certain debug logic. When the nbuf is freed we should
skip this debug logic as well, else it will be reported as a
double free. This is triggered only when debug
is enabled.
Change-Id: Iadb40071fb733cc4de3291784df5075d5a099a8e
Add two hal soc ops, hal_msdu_desc_info_set and
hal_mpdu_desc_info_set, for the Beryllium datapath.
Attach hal_rx_msdu_ext_desc_info_get_ptr_be to its function
pointer.
Get the REO queue descriptor address as a 5-byte value, since in
the Beryllium datapath it is obtained from the rx packet TLV.
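A minimal sketch of assembling the 5-byte (40-bit) address from two
TLV words (field names here are hypothetical):

    #include <stdint.h>

    /* addr_31_0 and addr_39_32 stand in for the low and high address
     * fields of the rx packet TLV; only 8 bits of the high word are
     * significant for a 40-bit descriptor address. */
    static uint64_t reo_qdesc_addr(uint32_t addr_31_0, uint32_t addr_39_32)
    {
        return (uint64_t)addr_31_0 |
               ((uint64_t)(addr_39_32 & 0xff) << 32);
    }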
Change-Id: Ia58597384e1a3d5eec493b865a88bab4f209502d
CRs-Fixed: 3084532
Currently, we flush fragments for a peer as part of rx defrag
cleanup. There is a possibility that the frag flush is called while
the rx defrag store fragment path is accessing the frags in
parallel, which will cause a use-after-free issue.
Extend the lock in the rx defrag store fragment path to avoid any
parallel flush while accessing the frags.
Change-Id: I1e897b195f61d80ea6738e9a93f7bcaaa04adc97
CRs-Fixed: 3065414
There is a race condition: tid_lock is unlocked early in
dp_rx_defrag_store_fragment, descriptors are replenished
from dp_peer_flush_frags, and the same descriptor is then
replenished again in the fragmented path.
To resolve this, extend the lock period until all the operations
on tid.head_frag_desc are done.
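A self-contained sketch of the locking pattern, with a pthread mutex
standing in for the driver's tid_lock and the descriptor handling
reduced to a list update:

    #include <pthread.h>
    #include <stddef.h>

    struct frag_desc { struct frag_desc *next; };

    struct rx_tid {
        pthread_mutex_t tid_lock;          /* stands in for tid_lock */
        struct frag_desc *head_frag_desc;
    };

    /* Store path: keep tid_lock held across *all* operations on
     * head_frag_desc instead of unlocking right after the list update,
     * so a concurrent flush cannot replenish the same descriptor. */
    static void store_fragment(struct rx_tid *tid, struct frag_desc *frag)
    {
        pthread_mutex_lock(&tid->tid_lock);

        frag->next = tid->head_frag_desc;   /* enqueue the fragment */
        tid->head_frag_desc = frag;
        /* ...reinject/replenish work on head_frag_desc stays here... */

        pthread_mutex_unlock(&tid->tid_lock);
    }

    /* Flush path takes the same lock before touching the list. */
    static void flush_frags(struct rx_tid *tid)
    {
        pthread_mutex_lock(&tid->tid_lock);
        tid->head_frag_desc = NULL;         /* frees/replenish happen here */
        pthread_mutex_unlock(&tid->tid_lock);
    }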
Change-Id: I6d2abb2119e3bebf739de9e41334d58ba87ee391
CRs-Fixed: 3068165
Concatenate the MLO peer bit indication with the
peer_id when accessing the peer map
object.
Change-Id: Ia603a728101e83829a8906d1b847f42389e78ca6
CRs-Fixed: 3039326
This change includes the following:
1) Changes needed to increase Tx rings to 4
2) Use the WBM2SW4 ring for rx error in QCN9224
3) memset the srng at alloc to avoid populating RBM_id
in the per-packet path, and enable implicit RBM
Change-Id: Icbd5ac2378273b8f3c6adc41c611e29551fff22f
Currently, we flush fragments for a peer as part of rx defrag
cleanup. There is a possibility that the frag flush is called while
the rx defrag store fragment path is accessing the frags in
parallel, which will cause a use-after-free issue.
Add a lock in the rx defrag store fragment path to avoid any
parallel flush while accessing the frags.
Change-Id: Ic27caab58429d776449f4b774eb7163ffa6215ac
CRs-Fixed: 2995156
Fragments do not have consecutive PNs in open/WEP security
modes, so there is no point in dropping such frames.
Change-Id: If75a99cb89c6b0997e9b07afc582c4141c277bb8
The GCMP header and MIC are not removed from received
fragments, which results in an incorrect ethertype
and the presence of an LLC hdr in the data when the frames
are sent to the network stack.
The fix is to add support for GCMP in the rx de-fragmentation
path.
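A minimal sketch of the decap step on a flat buffer (the driver works
on nbufs with pull/trim helpers; 8 bytes of header and 16 bytes of MIC
are the GCMP sizes):

    #include <stdint.h>
    #include <string.h>

    #define GCMP_HDR_LEN 8    /* PN/key-id header after the 802.11 header */
    #define GCMP_MIC_LEN 16   /* MIC appended to the frame body */

    /* Strip the GCMP header from the front and the MIC from the tail so
     * the LLC/ethertype parsing that follows sees plain payload bytes.
     * Returns the new payload length, or 0 if the frame is too short. */
    static uint32_t gcmp_decap(uint8_t *payload, uint32_t len)
    {
        uint32_t new_len;

        if (len < GCMP_HDR_LEN + GCMP_MIC_LEN)
            return 0;

        new_len = len - GCMP_HDR_LEN - GCMP_MIC_LEN;
        memmove(payload, payload + GCMP_HDR_LEN, new_len);
        return new_len;   /* MIC is dropped by the shortened length */
    }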
Change-Id: I83ed29a766e40e32f4b712342ebd40d08a2c65e0
CRs-Fixed: 2941879
IPA unmapping is skipped for the WBM internal error case;
do IPA unmapping for rx buffers received in the WBM error case.
While reinjecting an RX buffer back to REO, the software rx desc
unmapped field is not set in sequence, which may lead to a
map-without-unmap error in the IPA module. So reset the unmapped
field after IPA mapping is complete.
Change-Id: I42c1eaa1620f975d47ce2938c95b6b89dbbd3eca
CRs-Fixed: 2952671
If IPA pipes are enabled from the MC thread context while Rx
buffers are being unmapped from the net rx context, this
leads to a race condition and IPA map/unmap goes out of sync.
To fix this, introduce an IPA mapping lock; IPA mapping needs to
be handled with the lock held.
Change-Id: I9fa71bdb6d4e4aa93fc795cc5dd472a181325991
CRs-Fixed: 2945063
CVE-2020-26145
Broadcast and multicast frames should never be fragmented. Several devices
process broadcast fragments as normal unfragmented frames. Moreover, some
devices accept plaintext fragmented broadcast or multicast frames in
protected Wi-Fi networks. An adversary can abuse this to inject packets
by encapsulating them in a fragmented plaintext broadcast frame. Even
unicast packets can be encapsulated in broadcast Wi-Fi frames and hence
be injected.
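A minimal sketch of the resulting drop check (field extraction is
simplified; the driver reads these bits from the 802.11 header/TLVs):

    #include <stdbool.h>
    #include <stdint.h>

    #define IEEE80211_FC1_MORE_FRAG 0x04   /* more-fragments bit, frame control byte 1 */

    /* A frame is a fragment if the more-fragments bit is set or the
     * fragment number is non-zero; group-addressed (broadcast/multicast)
     * frames must never be fragmented, so such frames are dropped. */
    static bool drop_group_addressed_frag(const uint8_t *dest_mac,
                                          uint8_t fc1, uint16_t frag_no)
    {
        bool is_frag  = (fc1 & IEEE80211_FC1_MORE_FRAG) || frag_no;
        bool is_group = dest_mac[0] & 0x01;   /* group bit of the DA */

        return is_frag && is_group;
    }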
Change-Id: I3181a05e177cf9374a14edb748bc5001d058e0f3
CRs-Fixed: 2893212
Modify the check to ensure the packet number is consecutive across
fragments, and drop the fragments if the check fails.
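In essence (a sketch; the driver tracks the 48-bit PN per fragment
chain):

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns false (drop the chain) at the first PN gap: fragments of
     * one MPDU must carry strictly consecutive packet numbers. */
    static bool frag_chain_pn_ok(const uint64_t *pn, int n_frags)
    {
        for (int i = 1; i < n_frags; i++)
            if (pn[i] != pn[i - 1] + 1)
                return false;
        return true;
    }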
Change-Id: I2ca0ef6211594ba35aae894e6a385d3d5778bff6
CRs-Fixed: 2874369
There is a possibility of receiving fragmented packets just before
tid setup is done, so rate-limit the tid-not-setup error log
to avoid excessive console logging.
Change-Id: I372d3904650fcbf2ad11313da1087da52a0d3dc6
CRs-Fixed: 2884897
The memory below 0x2000 is reserved for target use,
so any memory in this region should not be used by the host.
But on some third-party platforms, we observe that such
memory is allocated for RX buffers, which causes a HW/FW
NOC error, and then RX is stuck. To address this,
re-allocate RX buffers whenever a low-address buffer appears.
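A minimal sketch of the validity check applied after mapping (the
re-allocation loop around it is omitted):

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_RESERVED_PADDR_LOW 0x2000   /* target-owned region */

    /* Reject a buffer whose DMA address falls in the reserved region;
     * the caller frees it and allocates a replacement, since handing
     * such an address to HW/FW triggers the NOC error. */
    static bool rx_buf_paddr_valid(uint64_t paddr)
    {
        return paddr >= TARGET_RESERVED_PADDR_LOW;
    }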
Change-Id: Iad118e82f3fe10f92cbf5f7388dc0960542fc03c
CRs-Fixed: 2707190
Use macros like dp_cdp_debug, dp_cdp_err, dp_cdp_info to
print logs for QDF_MODULE_ID_CDP
Change-Id: I550eefa82c3417b8bf83378d4a9c6f382098fea6
CRs-Fixed: 2855937
While performing the MIC check for MPDU-fragmented packets, the
host expects the last fragment to hold the full 8 bytes of the MIC,
but this is not true in the case of MPDU-level fragmentation: since
the MIC is part of the payload, it can be split across the last two
fragments.
Fix the logic by extracting the MIC from the last two fragments when
the last fragment does not hold the full 8 bytes.
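A minimal sketch of stitching the MIC together from the last two
fragments (flat buffers stand in for the driver's nbufs):

    #include <stdint.h>
    #include <string.h>

    #define MIC_LEN 8   /* TKIP Michael MIC length */

    /* If the last fragment holds fewer than 8 MIC bytes, the remainder
     * sits at the tail of the second-to-last fragment. */
    static void extract_mic(const uint8_t *prev_frag, uint32_t prev_len,
                            const uint8_t *last_frag, uint32_t last_len,
                            uint8_t mic[MIC_LEN])
    {
        if (last_len >= MIC_LEN) {
            memcpy(mic, last_frag + last_len - MIC_LEN, MIC_LEN);
        } else {
            uint32_t from_prev = MIC_LEN - last_len;

            memcpy(mic, prev_frag + prev_len - from_prev, from_prev);
            memcpy(mic + from_prev, last_frag, last_len);
        }
    }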
Change-Id: I41aaa35d9a18ac0222ab55be6822f9c9d7f15982
CRs-Fixed: 2790661
The logging macros implicitly take care of embedding the function name
in the log, hence there is no need to include __func__ again.
Getting rid of the redundant __func__ reduces the driver memory footprint.
Change-Id: I26bfac840ac6732ac83fb008db8e1702996eb21e
CRs-Fixed: 2774457
ring_desc is validated in dp_rx_err_process before being passed
to dp_rx_defrag_store_fragment.
Hence remove the additional checks for ring_desc.
Change-Id: Ib863ea4d512075beed58f09ff6167aa2a556efea
CRs-Fixed: 2771408
Skip the recording of rx error ring and rx reinject
history if the memory allocation for the history
fails during the attach.
Change-Id: Ifa74937d0c37e6013ccaec851dab7d9e53057ac2
CRs-Fixed: 2767579
Do not do PN check for open security mode defrag frames as it
is not applicable.
Change-Id: I7ed1073953c08b191c15c659a0d216eb7ed49b31
CRs-Fixed: 2753520
Add support to get the peer reference with a module id,
to help debug peer reference related issues.
Change-Id: Ie20c7e710b9784b52f2e0f3d7488509282528a00
Use the unified version of the dp_peer_find_by_id API,
which takes a peer reference.
Also use the unified peer ref release API, dp_peer_unref_delete.
Change-Id: Ibb516a933020a42a5584dbbbba59f8d9b72dcaa4
hdr_ptr is in the skb buffer data and is assigned a 6-byte array;
use a uint8_t pointer conversion to avoid a static analysis
overflow warning.
tid is protected by an assertion, but execution needs to break
to avoid a static analysis warning.
Fix a use-after-free of ast_entry.
Change-Id: I0835f93291cf3da2b4fd57d8c9a90f20a60c11ee
CRs-Fixed: 2751678
In dp_srng_init, max_buffer_length and prefetch_timer are used
while uninitialized.
In dp_bucket_index, overrunning the array cdp_sw_enq_delay leads to
an out-of-bounds access.
In dp_rx_defrag_fraglist_insert, cur is first NULL-checked, but
cur is then set to qdf_nbuf_next and accessed without a
NULL check. Thus do a NULL check again before dereferencing
cur to avoid a potential NULL pointer dereference.
In htt_t2h_stats_handler, soc could be NULL while cmn_init_done
is dereferenced. Fix this by NULL-checking soc first and then
dereferencing cmn_init_done.
Change-Id: Ie6a33347d34862f30ba04a10096d3892af7571d3
CRs-Fixed: 2751573
We have come across scenarios where rxdma pushes
a certain entry more than once to the reo exception
ring. In this scenario, when we try to unmap a buffer,
it can lead to multiple unmaps of the same buffer.
Handle this case by skipping the processing of this
duplicate entry, if already unmapped, and proceeding to the
next entry.
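The guard amounts to something like this sketch (the real rx
descriptor carries more state):

    #include <stdbool.h>

    struct rx_desc {
        bool unmapped;   /* set once the buffer has been DMA-unmapped */
    };

    /* Returns true if this ring entry is a duplicate that was already
     * unmapped; the caller then skips it and moves to the next entry. */
    static bool rx_err_entry_is_dup(struct rx_desc *desc)
    {
        if (desc->unmapped)
            return true;

        desc->unmapped = true;   /* first (and only) unmap happens now */
        return false;
    }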
Change-Id: Iae66f27e432f795ba4730911029fa1d63a75cb06
CRs-Fixed: 2739176
Deliver fragmented frames directly to stack without reinjecting back
to REO. Handle PN check for fragmented frames before delivery.
Drop the frames on PN check failure.
Change-Id: I7865def0d39fa83378073e07d318c34dccc6c6e5
CRs-Fixed: 2739870
The same back-to-back link descriptor address/cookie is observed
in the WBM idle link desc ring.
Add a duplicate link desc check when the host
refills link descriptors to WBM through the SW2WBM release ring,
and also the REO reinject ring.
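A sketch of the back-to-back duplicate check (state handling
simplified to one remembered entry):

    #include <stdbool.h>
    #include <stdint.h>

    /* Compare the descriptor about to be refilled against the last one
     * written; a repeated address/cookie pair is flagged instead of
     * being handed to HW a second time. */
    static bool link_desc_is_dup(uint64_t *last_paddr, uint32_t *last_cookie,
                                 uint64_t paddr, uint32_t cookie)
    {
        bool dup = (*last_paddr == paddr) && (*last_cookie == cookie);

        *last_paddr = paddr;
        *last_cookie = cookie;
        return dup;
    }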
Change-Id: Iaf9defd87670776fa9488d7f650efa3c08fefa60
CRs-Fixed: 2739879
The rx rings are relatively small. Any
duplicate entry sent by the hardware cannot be back-tracked
by looking at the ring contents alone.
Hence add a history to track the entries for the
REO2SW, REO exception and SW2REO rings.
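A sketch of such a history (sizes and fields are illustrative; the
driver records per-ring descriptors with an atomic write index):

    #include <stdint.h>
    #include <string.h>

    #define RING_HIST_MAX 1024   /* power of two for cheap wrap-around */

    struct ring_hist_entry {
        uint64_t timestamp;
        uint32_t desc[4];        /* copy of the HW descriptor words */
    };

    struct ring_history {
        uint32_t index;          /* monotonically increasing cursor */
        struct ring_hist_entry entry[RING_HIST_MAX];
    };

    /* Record each consumed descriptor so a duplicate entry can be
     * back-tracked later, which the live ring contents cannot show. */
    static void ring_hist_record(struct ring_history *hist,
                                 const uint32_t *desc, uint64_t now)
    {
        uint32_t idx = hist->index++ & (RING_HIST_MAX - 1);

        hist->entry[idx].timestamp = now;
        memcpy(hist->entry[idx].desc, desc, sizeof(hist->entry[idx].desc));
    }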
Change-Id: I9b0b311950d60a9421378ce0fcc0535be450f713
CRs-Fixed: 2739181
DP RX changes to support an RX buffer pool: a pre-allocated pool
of buffers that is utilized during low-memory conditions.
Change-Id: I8d89a865f989d4e88c10390861e9d4be72ae0299
CRs-Fixed: 2731517
Currently, when running downlink traffic with
fragmentation enabled, the SW2REO ring is getting
full. Increase the SW2REO ring size to 128 entries.
Change-Id: If43bc72a8cc173d44953ca367573800243b1cc5d
CRs-Fixed: 2738309
Rate-limit the defrag path error logs and add stats
for fragmented packets received out of order, which
in turn leads to a sequence number mismatch in the defrag path.
Change-Id: I17d4c1cff214a8c8a05abf576701824b293d2883
CRs-Fixed: 2740805
In Lithium a peer will have only a single peer_id, hence remove
the peer_ids array from the dp_peer structure.
Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
Restrict DMA map/unmap to the buffer size for packets in the rx
process. This gives a 2-3% CPU gain in peak throughput.
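For illustration with the raw Linux DMA API (the driver goes through
qdf_nbuf wrappers; rx_buf_len is the configured RX buffer size):

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    /* Map/unmap only the bytes the HW can actually fill rather than the
     * whole skb data area, saving cache maintenance on every packet. */
    static dma_addr_t rx_map_buffer(struct device *dev, struct sk_buff *skb,
                                    size_t rx_buf_len)
    {
        return dma_map_single(dev, skb->data, rx_buf_len, DMA_FROM_DEVICE);
    }

    static void rx_unmap_buffer(struct device *dev, dma_addr_t paddr,
                                size_t rx_buf_len)
    {
        dma_unmap_single(dev, paddr, rx_buf_len, DMA_FROM_DEVICE);
    }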
Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
The pointer rx_msdu_link_desc returned from the call to
dp_rx_cookie_2_mon_link_desc may be NULL and may be
dereferenced later.
CR-Fixed: 2645199
Change-Id: I9ccba61df9571fcc99c5d5493194d5ae43a71a7f
pdev_id is being initialized to 0. Since 0 is a valid pdev_id,
the pdev is accessed even though no pdev is present for that id.
Initialize pdev_id to 0xFF by default. Add checks in the API to
detect a valid pdev_id value corresponding to the lmac_id.
Change-Id: I2b2a38783615494ccc08e265702815f7e562214b
PDEV was being obtained using lmac_id by directly indexing the
pdev_list array. Instead, we need to use dp_get_pdev_for_lmac_id.
Change-Id: I1c4a0f3df5db59390e17666a5f712c5412e22bb1
CRs-Fixed: 2627909
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework, the user can configure 2K SKBs for
data buffers, monitor status rings, monitor descriptor rings and
monitor destination rings, and 4K SKBs for monitor buffers, at
compile time.
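The compile-time knobs might look like this sketch (macro names are
hypothetical; the real framework exposes equivalent defines through the
build configuration):

    /* 2K SKBs for data and monitor status/descriptor/destination rings. */
    #ifndef DP_RX_DATA_BUFFER_SIZE
    #define DP_RX_DATA_BUFFER_SIZE    2048
    #endif

    #ifndef DP_RX_MON_STATUS_BUF_SIZE
    #define DP_RX_MON_STATUS_BUF_SIZE 2048
    #endif

    /* 4K SKBs for monitor buffers. */
    #ifndef DP_RX_MON_BUFFER_SIZE
    #define DP_RX_MON_BUFFER_SIZE     4096
    #endif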
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646