In lithium, a peer will have only a single peer_id; hence remove the
peer_ids array from the dp_peer structure.
Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
Restrict DMA map/unmap to the buffer size for packets in the rx
process. This gives a 2-3% CPU gain in peak throughput.
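As a rough sketch, the map/unmap calls can pass the pool's buffer size
instead of covering the whole skb; the per-pool buf_size field shown
here is illustrative:

    /* Limit cache maintenance to the bytes the HW can actually
     * write, rather than the full skb allocation. */
    qdf_nbuf_map_nbytes_single(soc->osdev, nbuf,
                               QDF_DMA_FROM_DEVICE,
                               rx_desc_pool->buf_size);
    /* ... rx processing ... */
    qdf_nbuf_unmap_nbytes_single(soc->osdev, nbuf,
                                 QDF_DMA_FROM_DEVICE,
                                 rx_desc_pool->buf_size);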
Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
The pointer rx_msdu_link_desc returned from the call to
dp_rx_cookie_2_mon_link_desc may be NULL and may be dereferenced
later.
CRs-Fixed: 2645199
Change-Id: I9ccba61df9571fcc99c5d5493194d5ae43a71a7f
pdev_id is being initialized to 0. Since 0 is a valid pdev_id, the
pdev is accessed even though no pdev is present for that id.
Initialize pdev_id to 0xFF by default, and add checks in the API to
detect the valid pdev_id value corresponding to the lmac_id.
Change-Id: I2b2a38783615494ccc08e265702815f7e562214b
The PDEV was being obtained from the lmac_id by directly indexing the
pdev_list array. Instead, we need to use dp_get_pdev_for_lmac_id.
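A minimal sketch of the safer lookup (the error handling shown is
illustrative):

    /* Resolve the pdev through the mapping API instead of trusting
     * lmac_id as a direct index into pdev_list. */
    struct dp_pdev *pdev = dp_get_pdev_for_lmac_id(soc, lmac_id);

    if (qdf_unlikely(!pdev))
            return;    /* no pdev mapped to this lmac_id */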
Change-Id: I1c4a0f3df5db59390e17666a5f712c5412e22bb1
CRs-Fixed: 2627909
Add a framework to configure varying buffer sizes for both data and
monitor buffers. For example, with this framework, the user can
configure a 2K SKB for data buffers, monitor status rings, monitor
descriptor rings, and monitor destination rings, and a 4K SKB for
monitor buffers, at compile time.
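A hedged sketch of what such compile-time knobs could look like; the
macro names are assumptions for illustration:

    /* Illustrative defaults; overridable at build time. */
    #ifndef RX_DATA_BUFFER_SIZE
    #define RX_DATA_BUFFER_SIZE    2048  /* data + monitor status/desc/dst */
    #endif

    #ifndef RX_MONITOR_BUFFER_SIZE
    #define RX_MONITOR_BUFFER_SIZE 4096  /* monitor buffer ring */
    #endif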
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
Crash scenario:
(1) Frag data A is dropped and the related RX desc A is replenished
and reused, but pdev->free_list_tail still points to RX desc A.
(2) Frag data B/C arrives and defrag fails; pdev->free_list_head then
points to the B-->C RX desc list, but pdev->free_list_tail still
points to A.
(3) In the defrag failure case, the host replenishes only 1 RX buffer;
RX desc B is replenished, while C is freed back to the RX desc pool.
(4) dp_rx_add_desc_list_to_free_list sets RX desc A-->next =
free_list, so free_list points to C instead.
(5) When the RX desc A buffer replenished in step (1) is indicated to
the host by the REO2Dst ring, RX desc A-->nbuf actually points to
another RX desc, and an invalid skb access happens.
Solution:
a. Reset the tail pointer in dp_rx_add_desc_list_to_free_list at the
end, as sketched below.
b. Reset the tail pointer to be the same as the head in
dp_rx_add_to_free_desc_list if head->next is NULL.
c. Set the correct rx_bufs number for replenish when dp_rx_defrag
fails.
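A minimal sketch of fix (b), assuming the free-list element union
embeds the rx descriptor as its first member:

    /* In dp_rx_add_to_free_desc_list: after linking the new
     * descriptor at the head, a single-element list must also update
     * the tail, so the tail can never be left pointing at a recycled
     * descriptor such as RX desc A above. */
    ((union dp_rx_desc_list_elem_t *)new)->next = *head;
    *head = (union dp_rx_desc_list_elem_t *)new;
    if (!(*head)->next)
            *tail = *head;    /* reset tail same as head */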
Change-Id: Ib297baea3605a09dd7d85d1f5ceb95db48a2e1f1
CRs-Fixed: 2603676
1. Move all LMAC rings from the pDEV to the SOC.
2. Dynamically obtain the lmac->pdev mapping while handling LMAC
interrupts.
Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
Currently, the wlan host re-injects defrag data with RBM 6 to REO;
this data buffer then traverses REO-->REO2SW4-->IPA-->FW2RXDMA. Fix
the below issues introduced by this RX buffer path:
a. FW assert due to the FW2RXDMA DMA address not being 4-byte aligned.
b. Host skb double allocation due to qdf_nbuf_linearize() for frag
skbs.
c. Invalid RBM 6 for fragment RX due to RX buffer reuse.
Change-Id: I36d831fc14b6b9aa0cea32682823de348f7eecd3
CRs-Fixed: 2591453
When a lot of fragment frames with an invalid peer_id are indicated
to the host through the REO exception ring, excessive logging causes
the srng lock to be held for a long time.
Degrade the log level as a WAR to avoid the excessive logging.
Change-Id: Iffb18ad723d1ac5955868cc8ec99bb0198785ee5
CRs-Fixed: 2584610
Currently, defragmented packets that are re-injected into REO use
HAL_RX_BUF_RBM_SW1_RBM as the RBM value. Thus, if REO encounters any
error while handling these packets, they end up in the WBM2SW1 ring
(via WBM), which is managed by the FW. The FW will eventually recycle
these buffers back to RXDMA via its refill process. As part of
defragmentation, the host does an 802.11 -> 802.3 header conversion,
which results in an address that is not 4-byte aligned. Hence, when
RXDMA tries to use these addresses (after the FW recycles them), it
may lead to issues.
Change the RBM value of the defragmented buffers which are
re-injected. Now, if REO ends up throwing an error for these packets,
they will end up in WBM2SW3, which is managed by the host. The host
can then drop these packets and replenish RXDMA with 4-byte aligned
buffers (via the FW).
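The core of such a change is an RBM swap at reinject time; the
descriptor-population helper named below is hypothetical, and the
mapping of HAL_RX_BUF_RBM_SW3_BM to the host-managed WBM2SW3 ring is
an assumption:

    /* Route REO errors for reinjected fragments to the host-managed
     * ring instead of the FW-managed WBM2SW1 ring (sketch only). */
    dp_rx_defrag_reinject_desc_set(msdu, paddr, cookie,
                                   HAL_RX_BUF_RBM_SW3_BM);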
Change-Id: I9d9c25385978d5be855699feb28d292c6f3fffdd
CRs-Fixed: 2572483
Remove the pdev and vdev control path handles from the data path.
Instead, send pdev_id and vdev_id along with the opaque soc handle in
ol_if_ops.
Change-Id: I6ee083f07e464f283da0d70ada70a4e10e18e1b2
Fix a compilation error in DP_PRINT which is seen when compiling for
the lithium datapath.
CRs-Fixed: 2567068
Change-Id: I4bb7bb14a2e521645be6e2677a44a78fa79fcb0b
We expect a fresh skb with skb->len of 0 from the REO exception ring,
but are unexpectedly receiving one with a len greater than 0, so drop
the packet.
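A sketch of the check, with the drop path kept illustrative:

    /* A freshly replenished buffer reaped from the REO exception
     * ring must have length 0; anything else indicates reuse. */
    if (qdf_unlikely(qdf_nbuf_len(nbuf))) {
            qdf_nbuf_free(nbuf);
            continue;    /* skip to the next ring entry */
    }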
Change-Id: I1079b40dbb3180dd1ccd87f47668f75d10003934
CRs-Fixed: 2556164
For RX defrag, an incorrect SW peer ID may be obtained from the REO
exception ring descriptor: the expected AP BSS peer ID can be replaced
by another peer ID, such as the SAP self peer, which never runs
dp_peer_rx_init. In that case dp_rx_defrag_cleanup gets no chance to
run dp_rx_clear_saved_desc_info to free dst_ring_desc, since
rx_tid[].array is NULL, and a memory leak results.
Always call dp_rx_clear_saved_desc_info in dp_rx_defrag_cleanup.
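Sketched shape of the fix (surrounding teardown elided):

    void dp_rx_defrag_cleanup(struct dp_peer *peer, unsigned int tid)
    {
            /* Free any saved dst_ring_desc unconditionally, even for
             * peers that never ran dp_peer_rx_init and thus have a
             * NULL rx_tid[tid].array. */
            dp_rx_clear_saved_desc_info(peer, tid);
            /* ... existing per-tid defrag teardown ... */
    }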
Change-Id: Ib1ebfbd976c817d5238ee48196388a8c88189ebc
CRs-Fixed: 2549913
Currently, for the REO reinject path, the first fragment is in the
linear part of the skb buffer while the other fragments are appended
to the skb buffer as non-linear paged data. Additionally, for the REO
reinject buffer, l3_header_padding is not present, meaning the
ethernet header is right after struct rx_pkt_tlvs.
The above implementation has issues when the WLAN IPA path is
enabled.
Firstly, IPA assumes data buffers are linear. Thus the skb buffer
needs to be linearized before reinjecting into REO.
Secondly, when WLAN does the IPA pipe connection, the RX pkt offset is
hard-coded to RX_PKT_TLVS_LEN + L3_HEADER_PADDING. Thus
L3_HEADER_PADDING needs to be padded before the ethernet header and
after struct rx_pkt_tlvs.
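A rough sketch of the two fix-ups; the exact placement of the padding
relative to the existing headroom manipulation is illustrative:

    /* IPA assumes linear data buffers. */
    if (qdf_nbuf_linearize(nbuf))
            goto drop;    /* illustrative error path */

    /* Keep L3_HEADER_PADDING between struct rx_pkt_tlvs and the
     * ethernet header, matching the hard-coded IPA RX pkt offset of
     * RX_PKT_TLVS_LEN + L3_HEADER_PADDING. */
    qdf_nbuf_push_head(nbuf, RX_PKT_TLVS_LEN + L3_HEADER_PADDING);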
Change-Id: I36d41bc91d28c2580775a1d2e431e139ff02e19e
CRs-Fixed: 2469315
In the cleanup path in STA mode, remove a false assert, as there is
just the self dp_peer with no TX and RX TIDs set up.
Change-Id: Id6be7a6b3823c41ddbff67926bda240a4e9b6bd0
CRs-Fixed: 2547680
Add the following HAL macros:
1. HAL_RX_MSDU0_BUFFER_ADDR_LSB
2. HAL_RX_MSDU_DESC_INFO_PTR_GET
3. HAL_ENT_MPDU_DESC_INFO
4. HAL_DST_MPDU_DESC_INFO
Add the relevant function pointers to retrieve descriptor info from
these macros based on the chipset.
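A sketch of the dispatch pattern shared by this and the following
hal_rx changes: the per-chipset attach fills an ops table, and thin
generic wrappers call through it (member names are illustrative):

    struct hal_hw_txrx_ops {
            void *(*rx_msdu_desc_info_get_ptr)(void *msdu_details);
            uint32_t (*rx_mpdu_get_to_ds)(uint8_t *buf);
            /* one pointer per chipset-dependent accessor */
    };

    static inline void *
    hal_rx_msdu_desc_info_get_ptr(void *msdu_details,
                                  struct hal_soc *hal_soc)
    {
            return hal_soc->ops->rx_msdu_desc_info_get_ptr(msdu_details);
    }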
Change-Id: I99ce7566a668180c7849eedea915b6f23a8dbf35
CRs-Fixed: 2522133
Implement the hal_rx_get_mpdu_sequence_control_valid API based on the
chipset, as the macro to retrieve the sequence control valid value is
chipset dependent.
Change-Id: I01a006094d0330060e9ff1a91200c48c2426f38d
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_addr4 API based on the chipset, as the
macro to retrieve the addr4 value is chipset dependent.
Change-Id: Ie35d01de1619a8ab540bb1b2019a15b436efb7d4
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_addr3 API based on the chipset, as the
macro to retrieve the addr3 value is chipset dependent.
Change-Id: I3983599b656e82170de5905c08daee3ec164e7a0
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_addr2 API based on the chipset, as the
macro to retrieve the addr2 value is chipset dependent.
Change-Id: I4026db892d4f2f41db72c50f780ba898b8a17fa7
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_addr1 API based on the chipset, as the
macro to retrieve the addr1 value is chipset dependent.
Change-Id: I7ed88f2243d397c9d605a08d3b93e17f0004c63d
CRs-Fixed: 2522133
Implement the hal_rx_get_mpdu_frame_control_valid API based on the
chipset, as the macro to retrieve the frame control valid value is
chipset dependent.
Change-Id: I49d16ae44b2e9567ff746d2088058f0c1025ea40
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_fr_ds API based on the chipset, as the
macro to retrieve the fr_ds bit value is chipset dependent.
Change-Id: I6d41d02ac50cae752567d98645f0447cc122a84f
CRs-Fixed: 2522133
Implement the hal_rx_mpdu_get_to_ds API based on the chipset, as the
macro to retrieve the to_ds bit value is chipset dependent.
Change-Id: I36d9d14e226bcc604b91d8aecbe52836c5a12272
CRs-Fixed: 2522133
The Rx defrag waitlist was not getting cleared during
dp_peer_rx_cleanup in STA mode, even though the tid was getting
deleted. This created a scenario where the next time
dp_rx_defrag_waitlist_remove was called, it tried to access
now-invalid memory. If a vdev was disconnected in the middle of
receiving traffic, the tid would be deleted but the rx frag waitlist
entry would not. Upon reconnecting, the reception of the next frag
would cause a crash due to the now-invalid memory in the waitlist.
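A sketch of the intended cleanup ordering inside dp_peer_rx_cleanup
(loop bounds illustrative):

    /* Drop any pending defrag waitlist entry before the tid itself
     * is deleted, so a later dp_rx_defrag_waitlist_remove() cannot
     * walk freed memory. */
    for (tid = 0; tid < DP_MAX_TIDS; tid++)
            dp_rx_defrag_waitlist_remove(peer, tid);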
Change-Id: I5bb1a31f38fa45128d0f35fafaddaf729c99489d
CRs-Fixed: 2538879
Currently, as part of rx frag packet handling, the nbuf is unmapped,
but the corresponding rx desc unmapped bit is not set. The fragment is
stored until all the frags are received; only after all the frags are
received is the defrag packet reinjected back to REO. If the driver is
unloaded in between, the rx desc buffer pool is freed as part of
unload. During buffer free, the corresponding rx desc unmapped bit is
checked, and if the bit is not set the buffer is unmapped and then
freed. This leads to a double unmap for the stored frag list buffers.
As part of this change, set the unmapped bit during nbuf unmap.
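The shape of the fix, sketched:

    /* Record the unmap on the descriptor so the unload-time pool
     * sweep skips buffers already unmapped in the frag path. */
    qdf_nbuf_unmap_single(soc->osdev, rx_desc->nbuf,
                          QDF_DMA_FROM_DEVICE);
    rx_desc->unmapped = 1;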
Change-Id: I24cf1e88f3102bc985f95d2dc325509308a7bef9
CRs-Fixed: 2532302
Remove the usage of void pointers for ring descriptors and instead
use an opaque pointer, dp_ring_desc_t.
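A minimal sketch of the opaque-handle idiom (the prototype shown is
illustrative):

    struct dp_ring_desc;    /* layout never exposed to callers */
    typedef struct dp_ring_desc *dp_ring_desc_t;

    /* Callers can now pass only a dp_ring_desc_t, not an arbitrary
     * void pointer. */
    void dp_rx_process_desc(struct dp_soc *soc, dp_ring_desc_t desc);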
Change-Id: Ia1e9a3da9eaa3cccf297b2135b52a72f2fe21431
CRs-Fixed: 2484409
Add a new CDP API, cdp_register_rx_mic_error_ind_handler, to register
the rx MIC error callback. Also, define a new structure,
cdp_rx_mic_err_info, for holding MIC error information.
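A hypothetical registration sketch; the structure fields and the call
shape below are assumptions, not the actual definitions:

    struct cdp_rx_mic_err_info {
            uint8_t ta_mac_addr[QDF_MAC_ADDR_SIZE];  /* assumed field */
            uint16_t vdev_id;                        /* assumed field */
    };

    /* assumed call shape */
    cdp_register_rx_mic_error_ind_handler(soc, osif_rx_mic_error_cb);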
Change-Id: I4d5d6426b1d5f04848afd48f6dbf51edba291a20
CRs-Fixed: 2488424
In some cases, HW fills an unexpected peer_id into the RX PKT TLV. If
the peer related to this peer_id happens to be valid by coincidence
but never actually runs dp_peer_rx_init (like the SAP vdev self peer),
then invalid access to the peer rx tid will happen.
Add a SW WAR that checks the peer tid array; if it is not initialized,
free the rx nbuf.
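Sketch of the WAR check at the point the fragment is handled:

    /* Peers that never ran dp_peer_rx_init() (e.g. the SAP vdev self
     * peer) have rx_tid[tid].array uninitialized; drop the frame
     * instead of dereferencing it. */
    if (!peer->rx_tid[tid].array) {
            qdf_nbuf_free(nbuf);
            return;
    }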
Change-Id: Icf196b4f92eb341e1ace5128c681d24c41dff6cd
CRs-Fixed: 2468537
An SMMU fault is seen in the REO reinject path since the skb buffer
is not mapped to the IPA SMMU domain. Thus, map the skb buffer to the
IPA domain in the REO reinject path.
Change-Id: I293353bfce961324af6c6372611e8a57cc13f347
CRs-Fixed: 2463376
In the Rx path, we do not need to both flush and invalidate the data
cache. Hence, replace the map/unmap flag with QDF_DMA_FROM_DEVICE.
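The resulting map call, sketched:

    /* RX buffers only need CPU-side cache invalidation, not a flush
     * to the device, so FROM_DEVICE is sufficient. */
    qdf_nbuf_map_single(soc->osdev, nbuf, QDF_DMA_FROM_DEVICE);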
Change-Id: I3de0c73e11a08a875114167a55fe9fe4432f1dd4
CRs-Fixed: 2449712
Certain fields are now copied from the reo_destination ring
descriptor into the nbuf cb. This avoids accessing the RX TLVs, which
are spread across two cache lines, and helps improve small pkt
(64 byte) performance.
Change-Id: Iee95813356c3dd90ab2e8a2756913def89710fb1
CRs-Fixed: 2426703
Per the Linux Kernel coding style, as enforced by the kernel
checkpatch script, pointers should not be explicitly compared to
NULL. Therefore within dp replace any such comparisons with logical
operations performed on the pointer itself.
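For example:

    if (!peer)    /* rather than: if (peer == NULL) */
            return QDF_STATUS_E_FAILURE;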
Change-Id: I61f3adab1208682d36235421f44a048e35df346d
CRs-Fixed: 2418258
Discard fragmented pkts if the msdu count is greater than 1; also
some code cleanup.
Change-Id: I1d0857b5e22f0e4763cfa355b8c92bde697d7540
CRs-Fixed: 2382353
Check if peer is valid before access during flush.
Check waitlist during peer cleanup even if REO queue was
not allocated, since these are independent.
Add a timeout to avoid calling flush function too frequently.
Change-Id: Ib8da2014f81a48ccc3ca6a330209a942ac0998a2
peer->rx_tid[tid].array is initialized when the peer TID is set up.
It seems we are processing the fragmented Rx packet before the peer
TID is set up. Drop the fragmented packet in this case.
Change-Id: Ic076e59a9074efff9fed9f9154aa973c41f67341
CRs-Fixed: 2388684
Reduce the log level from info to debug for received fragmented
packets in the dp_rx_defrag path.
Change-Id: I0d1c7bf91e0337a56ea9e52565e0cbdf47a1772d
CRs-Fixed: 2385483
Link descriptors should be returned to WBM release ring if
fragments are not reinjected due to defrag errors.
Change-Id: Ia37db9f195f6848092918cf7cc221dc50e827ac5
Need to use tid_lock to protect the rx_tid data structure in the
dp_rx_reorder_flush_frag function; otherwise there will be race
conditions.
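Sketch of the locking, using the lock and field names from the commit
text:

    qdf_spin_lock_bh(&peer->rx_tid[tid].tid_lock);
    /* ... flush the fragment queue for this tid ... */
    qdf_spin_unlock_bh(&peer->rx_tid[tid].tid_lock);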
CRs-Fixed: 2357226
Change-Id: I860a64b529f0548eced7b537c4180a2c57175a56
Change the rx defrag related log level from info to debug; it is not
necessary to print logs for normal rx defrag operation by default.
This avoids a panic caused by excessive logging when a lot of
fragment data is received.
Change-Id: Id712d546a760377a6f59b321f73d7ae5ca4af564
CRs-Fixed: 2353869
Currently, buffers reaped in the REO exception ring handler are
always replenished into the 5G MAC ring.
Fix this by using the appropriate MAC ring for the replenish.
Change-Id: I04f5a1179a7df4b018b6a0b435e2a0421ef534e5