The dp_rx_process stack frame has grown to exceed the stack frame
size limit of 4096 bytes. dp_rx_deliver_to_stack_no_peer is a large
function that should not be inlined; inlining it into another
function significantly increases the caller's stack usage.
Since dp_rx_deliver_to_stack_no_peer is not called frequently from
dp_rx_process, making it non-inline has negligible impact on the
core rx datapath. Hence change dp_rx_deliver_to_stack_no_peer to a
non-inline function.
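A minimal sketch of the change; the parameter list shown here is an
assumption, not necessarily the exact driver signature:

    /* Before: expanded into dp_rx_process, inflating its stack frame */
    static inline void dp_rx_deliver_to_stack_no_peer(struct dp_soc *soc,
                                                      qdf_nbuf_t nbuf);

    /* After: the helper keeps its locals in its own stack frame */
    static void dp_rx_deliver_to_stack_no_peer(struct dp_soc *soc,
                                               qdf_nbuf_t nbuf);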
Change-Id: Ib042f74c1f5a9cbe5fd947a24f004bb2fecf1fb1
CRs-Fixed: 2636365
Host rx return_buffer_manager should always be 4 or 6. Add a check
for an invalid return_buffer_manager value in the ring descriptor.
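A sketch of the kind of check intended; the accessor, the stats
counter and the enum names for the valid values (4 and 6) are
assumptions:

    rbm = hal_rx_ret_buf_manager_get(ring_desc);       /* assumed accessor */
    if (qdf_unlikely(rbm != HAL_RX_BUF_RBM_SW1_BM &&   /* value 4 */
                     rbm != HAL_RX_BUF_RBM_SW3_BM)) {  /* value 6 */
            DP_STATS_INC(soc, rx.err.invalid_rbm, 1);
            /* skip this descriptor instead of trusting its contents */
            continue;
    }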
Change-Id: I509dd58ddd89e6a0ce1bffa509dcfabbd0fbc975
CRs-Fixed: 2632372
If the RX packets reaped from the REO2SW ring hit
rx_reap_loop_pkt_limit, reaping of the REO2SW ring breaks out and
stops. For the scattered-msdu case, however, all related buffers
must be received in one pass for further processing; otherwise
dp_rx_sg_create cannot handle them correctly.
(1) Make sure all buffers of a scattered msdu have been received
before allowing the break when rx_reap_loop_pkt_limit is hit.
(2) Refine the skb unmap location: if the msdu_scatter_wait_break
logic is hit, the same skb may otherwise be unmapped twice (not
related to the current issue).
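Roughly, point (1) amounts to deferring the break until the current
msdu is complete; the local names below are illustrative only:

    if (qdf_unlikely(num_reaped >= rx_reap_loop_pkt_limit) &&
        is_prev_msdu_last)
            break;  /* only stop at an msdu boundary, never mid-scatter,
                     * so dp_rx_sg_create() later sees every buffer */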
Change-Id: I85d385ee9c3b1a5ed56ae5e5b68636d04968553f
CRs-Fixed: 2632082
The rx descriptor obtained using the cookie can be NULL if the
cookie is invalid. Hence, dereferencing the rx descriptor without
any validation can cause an invalid address access.
Fix this by validating the rx descriptor obtained using the cookie
from the hal ring descriptor.
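A sketch of the validation; the lookup helper and the stats counter
name are assumptions:

    rx_desc = dp_rx_cookie_2_va_rxdma_buf(soc, rx_buf_cookie);
    if (qdf_unlikely(!rx_desc)) {
            DP_STATS_INC(soc, rx.err.invalid_cookie, 1);
            continue;  /* do not dereference a descriptor we never found */
    }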
Change-Id: Ib584f0d8175b581d15b0e1c67d2f6ed9119ecbfc
CRs-Fixed: 2629254
We are seeing an invalid memory access crash in the
dp_get_pdev_for_mac_id call from dp_rx_process_invalid_peer, due to
an invalid mac id being passed, probably because of stack
corruption.
Use dp_get_pdev_for_lmac_id from dp_rx_process_invalid_peer
instead, which asserts on an invalid mac id.
Change-Id: I0737132b5bbdd2fcbdb714d4643a69184ae3821e
CRs-Fixed: 2618432
Currently mesh mode uses channel numbering derived from APIs that
do not support 6GHz channel numbering, because the 6GHz channel
numbers overlap with the 2.4GHz and 5GHz ones.
Add support to obtain the correct channel number (and auxiliary
information such as band and frequency) through the new APIs that
support 6GHz.
Change-Id: Ib0b39ebae2a22bd6b2b5d17b9058c3c2100e0d59
CRs-Fixed: 2605229
Packets delivered to FISA via the exception error path do not have
TLVs, but FISA handling requires those additional TLVs. Since dp rx
core handling skips the TLVs, save the TLV length info in nbuf->cb
so that the TLVs can be recovered in FISA.
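A sketch of the idea; the helpers for stashing and reading back the
length from nbuf->cb are hypothetical stand-ins:

    /* core rx path: record how much TLV header will be skipped */
    qdf_nbuf_set_rx_tlv_len(nbuf, RX_PKT_TLVS_LEN);   /* hypothetical */

    /* FISA exception path: walk back to the start of the TLVs */
    rx_tlv_hdr = qdf_nbuf_data(nbuf) - qdf_nbuf_get_rx_tlv_len(nbuf);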
Change-Id: I53fab2e19abcbf82697ea6f53a4ddf3ea0dd0699
CRs-Fixed: 2620844
Rather than extracting msdu end pkt tlv information on a per-field
basis in the fast data path, extract the msdu end pkt tlv
information at once and store it in a local structure.
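Illustrative shape of the change; the structure and field names are
approximations of what a one-shot extraction might look like:

    struct hal_rx_msdu_metadata {        /* filled once per msdu */
            uint32_t l3_hdr_pad;
            uint32_t sa_idx;
            uint32_t da_idx;
            uint8_t  sa_is_valid;
            uint8_t  da_is_mcbc;
    };

    hal_rx_msdu_metadata_get(hal_soc, rx_tlv_hdr, &msdu_metadata);
    /* hot loop then reads msdu_metadata.* instead of re-parsing the TLV */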
Change-Id: I0877ba4f824d480cc0851c72090f010852d0d203
Maintain packet counters for each peer based on protocol (a sketch
of the counter layout follows the list). The following 3 protocols
are supported:
* ICMP (IPv4)
* ARP (IPv4)
* EAP
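A sketch of how such per-peer counters could be laid out; the names
are illustrative, not the driver's:

    struct protocol_trace_count {
            uint16_t egress_cnt;
            uint16_t ingress_cnt;
    };

    struct dp_peer_proto_stats {
            struct protocol_trace_count icmp;   /* IPv4 ICMP */
            struct protocol_trace_count arp;    /* IPv4 ARP  */
            struct protocol_trace_count eap;    /* EAP       */
    };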
Change-Id: I56dd9bbedd7b6698b7d155a524b242e8cabd76c3
CRs-Fixed: 2604877
In the function that processes rxdma err frames, add a null check
before calling the vdev callback function. If the callback is
valid, deliver the skb to the stack; otherwise free the skb.
Change-Id: I7c481eb8f702d9109c4a9f79db7e050ece6c3689
CRs-Fixed: 2607658
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework the user can configure, at compile
time, 2K SKBs for data buffers, monitor status rings, monitor
descriptor rings and monitor destination rings, and 4K SKBs for
monitor buffers.
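Conceptually the framework boils down to compile-time size knobs
along these lines; the macro names and defaults are illustrative:

    #ifndef RX_DATA_BUFFER_SIZE
    #define RX_DATA_BUFFER_SIZE     2048   /* data/status/desc/dest rings */
    #endif

    #ifndef RX_MONITOR_BUFFER_SIZE
    #define RX_MONITOR_BUFFER_SIZE  4096   /* monitor buffer ring */
    #endif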
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
When a long msdu is received it may be spread across multiple
nbufs, and there is no corresponding logic for this case.
qdf_set_pkt_len will invoke pskb_expand_head to renew the skb->head
buffer, but rx_tlv_hdr still points into the original skb->data
buffer, so an invalid access will happen.
As a WAR, drop the nbufs related to this msdu after dp_rx_sg_create
is done.
Change-Id: Iceb09fd04e4d768325018a8ddd4261ab4f75991a
CRs-Fixed: 2597927
If the vdev is marked as delete in progress, then the
packets received on that particular vdev should not be
submitted to the network stack.
Drop the packets if the vdev is marked to be deleted.
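A sketch of the drop, assuming a delete-in-progress style flag on
the vdev:

    if (qdf_unlikely(vdev->delete_in_progress)) {  /* assumed flag name */
            qdf_nbuf_free(nbuf);   /* do not hand the frame to the stack */
            continue;
    }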
CRs-Fixed: 2599128
Change-Id: I0996669bea368b6c0afc10e1d929bc2f314ae6fe
Change set 2 of ctrl_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle
Change-Id: I3f180c9c360d564f0b229b447074ad23b7c0a737
multipass_rx_pkt_drop is a peer-level stats counter used to count
multipass rx dropped frames. Accumulate this counter at the vdev
and pdev levels.
Also initialize the multipass_en flag to false at vdev attach.
Change-Id: Idaa85a71c80eefb9359abb026402b71aa28ad6a2
CRs-Fixed: 2595551
1. Move all LMAC rings to SOC from pDEV
2. Dynamically obtain lmac->pdev mapping while handling LMAC interrupts
Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
There is a chance of leaking RX buffers if the peer disconnects
while we are in the middle of processing the list of MSDUs in an
AMSDU.
Change-Id: I0081ec96da95ea570903dbd5d91c866c8c141667
Add a check to validate invalid_peer_head_msdu before accessing it,
to avoid a NULL dereference.
Change-Id: I9218bdd1100b48a32240546f380b1437ae72c406
CRs-Fixed: 2585651
Check for a vdev_id mismatch when delivering NBUFs to the stack, to
avoid holding the peer reference while the nbuf list is handed to
the stack.
Change-Id: Ic475e00d5b1793ada7b26b7af3322ca2fa51836f
Change cmn_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle
Change-Id: I5716a87cad56b1dfe8dd56f193bbb6ff923a6af1
Remove pdev and vdev control path handles from data path.
Instead send pdev_id and vdev_id along with opaque soc
handle in ol_if_ops.
Change-Id: I6ee083f07e464f283da0d70ada70a4e10e18e1b2
Add a check to drop unicast frames being sent back to the
originating VAP after hmmc conversion of IGMP control packets.
Change-Id: Ic25812a7848af793075a0cb483100ebcf59d85b2
When a particular vdev is deleted, the corresponding rx packets
which have been queued to the rx thread are not flushed. Hence when
such packets are submitted to the network stack, the dev for these
skbs will be invalid, since we have already freed the adapter.
Flush out the packets in the rx thread queues before deleting the
vdev.
CRs-Fixed: 2543392
Change-Id: I2490d0f5ce965f62152613a17a59232521ca058f
Add an atomic variable to indicate IPA pipes are connected.
Use it to ensure that SMMU mapping for rx buffers is sent
to IPA even if REO is not remapped but IPA pipes are connected.
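A sketch of the intent, using qdf atomic helpers; the field name
and the mapping call are assumptions:

    qdf_atomic_set(&soc->ipa_pipes_enabled, 1);    /* on pipe connect */

    /* rx buffer replenish/free path */
    if (qdf_atomic_read(&soc->ipa_pipes_enabled))
            dp_ipa_rx_buf_smmu_map(soc, nbuf);     /* hypothetical call */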
Change-Id: I5d82dc073fc2f0de6df102f7bfd2a1e945297aa8
CRs-Fixed: 2552128
Check if the REO ring is near full at the end of dp_rx_process. In
case the ring is near full, reap the packets in the ring (and
replenish, send to the upper layer) for as long as the quota
allows. Ignore the HIF yield time limit in such cases.
This change is needed to prevent back-pressure from the REO ring
(in case it gets full). Back-pressure from the REO ring (to LMAC)
may lead to a watchdog and eventually a FW crash. Hence, avoid such
a scenario by reaping as many packets as the 'quota' allows when
the REO ring is in the aforementioned condition.
A side-effect of this change is that at times the RX softirq may
run longer (till the quota limit) than the configured HIF yield
time. However, this logic is not expected to kick in on perf
builds. The issue is reported for a defconfig build where lots of
debug options are enabled in the kernel, which can slow the
processing down.
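In outline, the tail of dp_rx_process gains a near-full re-entry
along these lines; the helper and label names are stand-ins:

    if (quota && dp_rx_reo_ring_near_full(soc, hal_ring)) {  /* stand-in */
            /* keep reaping until quota runs out; skip the hif
             * yield-time check so back-pressure cannot build up */
            goto more_data;
    }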
Change-Id: I2eb6544c159ec5957d10386b1750fd96473fe13a
CRs-Fixed: 2540964
While handling a frag with no peer, do not set the packet length,
as this is already done while handling the fragment before
re-injection into the REO. Without this, qdf_nbuf_set_pktlen will
fail while doing a skb_put on a non-linear packet.
Also, do not use the L2 header offset while doing a pull head for
the RX frag.
Change-Id: Ie1faeebf548b589ad524b31d51444c5934a7b976
CRs-Fixed: 2502756
Implement hal_rx_tlv_get_tcp_chksum API
to retrieve tcp_udp_checksum value
based on the chipset.
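The chipset-dependent hal_rx_* APIs in this series all follow the
same dispatch pattern; a minimal sketch with assumed types and an
assumed signature (the real ops table and TLV layout live in the
per-chipset hal headers):

    static inline uint16_t
    hal_rx_tlv_get_tcp_chksum(hal_soc_handle_t hal_soc_hdl, uint8_t *rx_tlv)
    {
            struct hal_soc *hal_soc = (struct hal_soc *)hal_soc_hdl;

            /* per-chipset implementation attached at hal attach time */
            return hal_soc->ops->hal_rx_tlv_get_tcp_chksum(rx_tlv);
    }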
Change-Id: Ifab970f10af06f8c0cdbd14d57cb66b49bae1648
CRs-Fixed: 2522133
Implement hal_rx_mpdu_get_addr2 API
based on the chipset as
the macro to retrieve addr2 value is
chipset dependent.
Change-Id: I4026db892d4f2f41db72c50f780ba898b8a17fa7
CRs-Fixed: 2522133
Implement hal_rx_mpdu_get_addr1 API
based on the chipset as the macro to
retrieve addr1 value is
chipset dependent.
Change-Id: I7ed88f2243d397c9d605a08d3b93e17f0004c63d
CRs-Fixed: 2522133
Implement hal_rx_mpdu_get_fr_ds API
based on the chipset as the macro to
retrieve for_ds value is chipset
dependent.
Change-Id: I6d41d02ac50cae752567d98645f0447cc122a84f
CRs-Fixed: 2522133
Implement hal_rx_mpdu_get_to_ds API based
on the chipset as the macro to retrieve
to_ds bit value is chipset dependent.
Change-Id: I36d9d14e226bcc604b91d8aecbe52836c5a12272
CRs-Fixed: 2522133
Implement hal_rx_get_mpdu_mac_ad4_valid API
based on the chipset as the macro to retrieve
mac_ad4 value is chipset dependent.
Change-Id: I9d7cc90e798d4f0775e915fe6edcb1a1f5129490
CRs-Fixed: 2522133
Implement hal_rx_msdu_end_dat API based on
the chipset as the macro to retrieve da_is_valid
value is chipset dependent.
Change-Id: I79f06eaa2576e7516c21f963b2c149aac7f62c64
CRs-Fixed: 2522133
Implement hal_rx_msdu_end_l3_hdr_padding_get API
based on the chipset as the macro to retrieve the
l3_hdr_padding value is chipset dependent.
Change-Id: Ice1fc2d70e339dc1d80fa6f34f37c5a7aa074be5
CRs-Fixed: 2522133
Implement hal_rx_desc_is_first_msdu API
based on the chipset as the macro to
retrieve first_msdu bit value is chipset
dependent.
Change-Id: I8e8a3aceb225b591b96e6f8453ffbebf1f78e529
CRs-Fixed: 2522133
Implement hal_rx_msdu_end_sa_idx API
based on the chipset as the macro to
retrieve sa_idx value is chipset
dependent.
Change-Id: Ib874520be9e7ad778c2a9a3c415e5c3047450b31
CRs-Fixed: 2522133
Implement hal_rx_msdu_end_sa_is_valid_get API
based on the chipset as the macro to retrieve
sa_idx valid bit is chipset dependent.
Change-Id: I8bcb7030554331922ed12ea9da3ef51cd64b5c40
CRs-Fixed: 2522133
Implement hal_rx_msdu_end_da_is_mcbc_get based
on the chipset as the macro to retrieve
mcbc value is chipset dependent.
Change-Id: I860d259515c31345501080577d7a34beb97e5f60
CRs-Fixed: 2522133
Add runtime PM APIs to record the last busy mark set by the data
path during dp rx processing.
Currently, for MCL, in some scenarios a runtime sync suspend is
issued which puts the link into a low power state without waiting
for the link inactivity timeout. This leads to throughput
degradation in the rx direction, since the rx data path extends the
timer by marking last busy to avoid an immediate runtime suspend,
but runtime sync suspend does not take the link idle timeout value
into account before suspending the link. So, using these APIs,
before issuing a runtime sync suspend, check the duration since the
data path rx process last marked busy. If that duration is less
than the runtime delay, just do a runtime put.
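A sketch of the check; the recorded field and the delay value are
hypothetical, and the hif runtime-PM calls are shown approximately:

    /* rx path: note when we last extended the inactivity timer */
    soc->rx_last_busy = qdf_system_ticks();            /* assumed field */
    hif_pm_runtime_mark_last_busy(hif_ctx);

    /* before issuing a runtime sync suspend: */
    if (qdf_system_ticks_to_msecs(qdf_system_ticks() - soc->rx_last_busy) <
        runtime_delay_ms) {                            /* assumed value */
            hif_pm_runtime_put(hif_ctx);   /* defer; link is still busy */
            return;
    }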
Change-Id: I694ef6ddec8f11fed44bf3b9e5ae18ad93b6ec24
CRs-Fixed: 2512326
Local peer_id is being cleaned up across DP, HDD and PS/WMA.
So, any references to local peer_id/sta_id will be replaced
by peer mac address and all interactions between the layers
will be based on peer mac address.
Cleanup references to peer_local_id from the
following APIs:
1) dp_rx_process
2) dp_rx_process_rxdma_err
Change-Id: Ia1c8bd091759132367718e6c6d5f67cf5a98e507
CRs-Fixed: 2512706
MDNS packets, if forwarded on a NAN vdev, can lead to potential
flooding of the air interface. Hence, do not forward them.
Change-Id: Idfdedfb0b5b553745440587448230013f3b56a7d
CRs-Fixed: 2503360
In dp_rx_process there are two cases in which vdev will be NULL.
The first is the invalid peer case, which happens when packets
arrive on the REO ring without any valid peer and vdev set. The
second is fetching more data under the NAPI no-yield condition when
there are no buffers to fetch. Currently dp_rx_process calls the
GRO flush at the end of nbuf processing and references vdev without
checking its sanity. So add a vdev NULL sanity check to prevent a
NULL pointer dereference.
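A sketch of the guard; the flush wrapper name is hypothetical:

    /* vdev may be NULL in the two cases above, so check before flushing */
    if (qdf_likely(vdev))
            dp_rx_send_gro_flush_ind(soc, napi_id);   /* hypothetical */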
Change-Id: Ie2d480108118d9b83373a450aecabee57675c41d
CRs-Fixed: 2507067
In case of TCP packets being processed by dp_rx_process, send out a
GRO flush indication to the thread.
CRs-Fixed: 2500152
Change-Id: I4f464456d423e4680955992c0acf0ed5f4e618b8
This feature enables the user to change the HW mode dynamically
from DBS to DBS_SBS mode and vice versa. Currently, HW mode
configuration is only possible through an INI setting, requiring a
subsequent reboot.
Relevant DP changes are:
1. Add API cdp_txrx_handle_pdev_status_change to pass
pdev 'up' or 'down' status to DP module
2. Add pdev-status check in dp_rx_process_invalid_peer
3. Add pdev-status check in dp_tx_comp_handler to free
buffer and release descriptor
Change-Id: I74b144abb1b0dc41a26a18ad28f872e6457e9653
CRs-fixed: 2490212
Tags are programmed using wlanconfig commands. Rx IPv4/v6 TCP/UDP
packets matching a 5-tuple are tagged using HawkeyeV2 hardware.
Tags are populated in the skb->cb in the REO/exception/monitor data
path and sent to the upper stack.
CRs-Fixed: 2502311
Change-Id: I7c999e75fab43b6ecb6f9d9fd4b0351f0b9cfda8