Currently all the rx ring descriptor contents are left
intact even after these entries are processed. This can,
at times, lead to stale entries being processed, if the
head pointer of any ring is updated before the updated
contents of the ring descriptor are reflected in memory.
This can lead to scenarios where the host driver reads a
stale value of sw_cookie and frees/unmaps a currently
in-use buffer, thereby leading to the hardware accessing
an unmapped memory region.
The sw_cookie is an integral part of all the rx ring
processing. Hence, always mark the sw_cookie as invalid
after dequeuing an entry from the REO2SW ring, and check
the validity of the sw_cookie before processing an entry
from the REO2SW ring. If the invalid bit in the sw_cookie
is set, skip the entry and move on to the next entry in
the ring.
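A minimal sketch of the scheme, assuming a hypothetical
invalid-bit position in the cookie word (the real sw_cookie
layout in the driver is chipset specific):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed bit position, for illustration only. */
    #define RX_COOKIE_INVALID_BIT (1u << 31)

    /* Mark the cookie invalid as soon as the entry is dequeued,
     * so a stale re-read of this descriptor can be detected. */
    static inline void rx_cookie_invalidate(uint32_t *cookie)
    {
            *cookie |= RX_COOKIE_INVALID_BIT;
    }

    /* Checked before processing an entry; a set invalid bit
     * means the entry is stale and must be skipped. */
    static inline bool rx_cookie_is_valid(uint32_t cookie)
    {
            return !(cookie & RX_COOKIE_INVALID_BIT);
    }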
Change-Id: I0e78fa662b8ba33e64687a4dee4d1a5875ddb4bf
CRs-Fixed: 2730718
The issue scenario is that a valid peer is fetched from the
peer_id in dp_rx_process and the peer ref count is released
prior to invoking dp_rx_deliver_to_stack. In parallel, the
peer is freed in a different context. This results in a
use-after-free within dp_rx_check_delivery_to_stack, since
the stale peer is dereferenced to update stats.
Fix is to decrement the peer ref count after
dp_rx_deliver_to_stack returns.
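A simplified sketch of the corrected ordering; the
lookup/unref helpers below are stand-ins for the driver's
real peer reference APIs:

    struct dp_peer;

    /* Stand-ins: the lookup takes a reference, unref drops it. */
    extern struct dp_peer *peer_find_by_id(unsigned int peer_id);
    extern void peer_unref(struct dp_peer *peer);
    extern void deliver_to_stack(struct dp_peer *peer, void *nbufs);

    static void rx_deliver(unsigned int peer_id, void *nbufs)
    {
            struct dp_peer *peer = peer_find_by_id(peer_id);

            if (!peer)
                    return;
            /* Deliver while the reference is still held ... */
            deliver_to_stack(peer, nbufs);
            /* ... and drop it only afterwards, so a concurrent
             * peer free cannot race with the stats update in
             * the delivery path. */
            peer_unref(peer);
    }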
Change-Id: I145247f7795f926faba66c05927fdae0599f0cad
CRs-Fixed: 2720396
Configure the client as an isolated peer if it is part of
the isolation list, while creating/associating the node or
adding the peer to the isolation list.
Do not forward packets to or from clients in the isolation
list; instead, deliver them to the upper stack.
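A hedged sketch of the forwarding gate, with an assumed
per-peer isolation flag (names are illustrative):

    #include <stdbool.h>

    struct dp_peer {
            bool isolation; /* set when on the isolation list */
    };

    /* Intra-BSS forwarding is skipped for isolated peers; the
     * frame is handed to the upper network stack instead. */
    static bool intrabss_fwd_allowed(const struct dp_peer *src,
                                     const struct dp_peer *dst)
    {
            return !src->isolation && !dst->isolation;
    }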
CRs-Fixed: 2689868
Change-Id: I67fd4dee0fb76c993746cdd66c70c241d407239a
In lithium a peer will have only a single peer_id, hence
remove the peer_ids array from the dp_peer structure.
Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
FISA RX aggregation is not necessary for non-regular RX
delivery, as it requires an extra FISA flush and may also
impact regular dp_rx_process() RX FISA aggregation.
Add an exception frame flag for non-regular RX delivery, so
that the FISA path can identify such frames and bypass FISA RX.
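A minimal sketch of the bypass check, assuming a
hypothetical flag bit kept in the nbuf control block (the
real field layout differs):

    #include <stdbool.h>
    #include <stdint.h>

    struct nbuf_cb {
            uint8_t is_exception : 1; /* non-regular RX delivery */
    };

    /* FISA skips aggregation for exception frames, so no extra
     * FISA flush is needed for them. */
    static bool fisa_should_bypass(const struct nbuf_cb *cb)
    {
            return cb->is_exception;
    }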
Change-Id: Ic06cb72b516221754b124a673ab6c4f392947897
CRs-Fixed: 2680255
Add debug info support for rx descriptors to log the caller
function name and timestamp in the replenish and free
scenarios.
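A sketch of the kind of record this adds, with illustrative
names (the real driver keeps it under a debug config option):

    #include <stdint.h>

    struct rx_desc_dbg_info {
            const char *func; /* caller captured via __func__ */
            uint64_t    ts;   /* replenish/free timestamp */
    };

    struct rx_desc {
            struct rx_desc_dbg_info dbg;
    };

    extern uint64_t dbg_timestamp(void); /* stand-in timer API */

    /* Record who touched the descriptor and when. */
    #define rx_desc_update_dbg_info(desc) \
            do { \
                    (desc)->dbg.func = __func__; \
                    (desc)->dbg.ts = dbg_timestamp(); \
            } while (0)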
Change-Id: I1d9b855d14f705094f241bae653f33a94d0e39b7
CRs-Fixed: 2677288
Two back-to-back identical RX descs are received from the
REO2SW1 ring. After the first RX desc is processed, the
RX_desc nbuf is set to NULL. When the second REO entry (same
RX desc) is processed, dp_rx_desc_nbuf_sanity_check()
accesses the RX_desc nbuf, and the NULL nbuf access leads to
a panic.
As a WAR, check the RX_desc in_use flag first to avoid the
invalid access to the nbuf, and move
dp_rx_desc_nbuf_sanity_check() after that check.
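A sketch of the WAR ordering with simplified types:

    #include <stdbool.h>

    struct rx_desc {
            void *nbuf;
            bool  in_use;
    };

    extern bool nbuf_sanity_check(const struct rx_desc *desc);

    /* A descriptor already reaped by an earlier duplicate entry
     * has in_use cleared and nbuf NULL, so test in_use before
     * any nbuf dereference. */
    static bool rx_desc_ok_to_process(const struct rx_desc *desc)
    {
            if (!desc->in_use)
                    return false; /* duplicate/stale entry: skip */
            return nbuf_sanity_check(desc);
    }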
Change-Id: Ib9455c76af85cf83587c1428b20a9ad9e93a9499
CRs-Fixed: 2672088
Check the return status of the osif->rx function and, in
case of failure, drop the skb. This is needed when an OOR
error frame is received: if the frame was not delivered to
the stack, it needs to be dropped.
Add an error counter to the periodic stats to determine how
many Rx packets were rejected or dropped because delivery to
the stack failed.
Add the new status check for delivering rx frames to the
stack under an MCC-specific flag, DELIVERY_TO_STACK_STATUS_CHECK.
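A sketch of the guarded delivery path; the status enum and
helpers are stand-ins, only the flag name comes from the
change itself:

    #include <stdint.h>

    enum rx_status { RX_STATUS_SUCCESS = 0, RX_STATUS_FAILURE };

    struct rx_stats {
            uint32_t delivered;
            uint32_t stack_delivery_fail; /* periodic-stats counter */
    };

    typedef enum rx_status (*osif_rx_fn)(void *dev, void *nbufs);

    extern void nbuf_list_free(void *nbufs);

    #ifdef DELIVERY_TO_STACK_STATUS_CHECK
    static void rx_deliver_to_stack(osif_rx_fn osif_rx, void *dev,
                                    void *nbufs, struct rx_stats *st)
    {
            if (osif_rx(dev, nbufs) != RX_STATUS_SUCCESS) {
                    /* e.g. an OOR error frame the stack refused */
                    st->stack_delivery_fail++;
                    nbuf_list_free(nbufs);
                    return;
            }
            st->delivered++;
    }
    #endif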
Change-Id: I9b1c795f168774669783cc601e68003a7747a279
CRs-Fixed: 2672498
Do a logical split of dp_soc_attach and dp_pdev_attach into
allocation and initialization routines, and of dp_soc_detach
and dp_pdev_detach into de-initialization and free routines.
Change-Id: I23bdca0ca86db42a4d0b2554cd60d99bb207a647
Fix the skb leak in dp_rx_process when the rx descriptor
cookie validity check fails. This skb would otherwise be
cleaned up only by the rx desc and nbuf free function called
during driver unload; this change ensures that the skb is
released and the rx desc is added to the free list during
dp_rx_process itself.
Also add the skb map/unmap functions, line numbers, and
whether the nbuf is mapped or unmapped to the nbuf tracking
table. This debug info will be logged once the skb is leaked.
Change-Id: I52dbf38922be20fc0aaea380e0e572af16de773e
CRs-Fixed: 2662992
In the OOR error handling scenario, an msdu is spread across
two nbufs. Due to this, there is a mismatch between the msdu
count fetched from the MPDU desc details and the count
fetched from the rx link descriptor.
Fix is to create a frag list for the case where the msdu is
spread across multiple nbufs.
Change-Id: I1d600a0988b373e68aad6ef815fb2d775763b7cb
CRs-Fixed: 2665963
Split dp_rx_pdev_attach into dp_rx_pdev_desc_pool_alloc,
dp_rx_pdev_desc_pool_init and dp_rx_pdev_buffers_alloc, and
dp_rx_pdev_detach into dp_rx_pdev_desc_pool_free,
dp_rx_pdev_desc_pool_deinit and dp_rx_pdev_buffers_free APIs.
This split is made because dp_pdev_init is introduced as
part of this FR, and these APIs will be called from
dp_pdev_init/dp_pdev_deinit or dp_pdev_attach/dp_pdev_detach
accordingly, to maintain the symmetry of the DP init and
deinit paths.
Change-Id: Ib543ddae90b90f4706004080b1f2b7d0e5cfbfbc
CRs-Fixed: 2663595
Restrict DMA map/unmap to the buffer size for packets in the
rx process path. This gives a 2-3% CPU gain at peak throughput.
Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
Remove the lock around REO destination ring access, because
the 4 rings are accessed on 4 individual cores.
Change-Id: Ia3f92cc5136dbdbeea1e9cda8d52b474356a3e1a
CRs-Fixed: 2660901
Support RX 2K jump/OOR frame handling from the REO2TCL ring:
(a) configure the REO error destination ring register to
route 2K jump/OOR frames to the REO2TCL ring.
(b) for 2K jump RX frames, accept only ARP frames and drop
the rest; meanwhile, send a delba action frame to the remote
peer once the first 2K jump data frame is received.
(c) for OOR RX frames, accept ARP/EAPOL/DHCP/IPV6_DHCP
frames and drop everything else.
The accept rules are sketched below.
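A hedged sketch of the accept rules; the real driver derives
the packet type from the rx TLVs, and the enums here are
illustrative:

    #include <stdbool.h>

    enum pkt_type { PKT_ARP, PKT_EAPOL, PKT_DHCP,
                    PKT_IPV6_DHCP, PKT_OTHER };
    enum reo_err  { ERR_2K_JUMP, ERR_OOR };

    static bool rx_err_frame_accept(enum reo_err err,
                                    enum pkt_type type)
    {
            switch (err) {
            case ERR_2K_JUMP:
                    /* only ARP passes; a delba is also sent on
                     * the first 2K jump data frame */
                    return type == PKT_ARP;
            case ERR_OOR:
                    return type == PKT_ARP || type == PKT_EAPOL ||
                           type == PKT_DHCP || type == PKT_IPV6_DHCP;
            }
            return false;
    }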
Change-Id: I7cb33279a8ba543686da4eba547e40f86813e057
CRs-Fixed: 2631949
Remove the debug dump call to dp_rx_desc_dump(), as the rx
descriptor obtained from the cookie is invalid.
Change-Id: I106ebc2f872e43079abd6e6e493c90022fd09c3b
CRs-Fixed: 2638059
The dp_rx_process stack frame has grown to exceed the stack
frame size limit of 4096. dp_rx_deliver_to_stack_no_peer is
a big function which should not be inlined; inlining it
greatly increases the stack size consumed by the caller
function.
Since dp_rx_deliver_to_stack_no_peer is not called very
frequently from dp_rx_process, changing it to a non-inline
function does not hurt the core rx datapath much. Hence
change dp_rx_deliver_to_stack_no_peer to a non-inline
function.
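For illustration, keeping such a cold helper out of line is
a one-attribute change (GCC/Clang syntax shown):

    /* The helper's locals no longer count against the
     * caller's stack frame once it is a real call. */
    static void __attribute__((noinline))
    deliver_to_stack_no_peer(void *soc, void *nbuf)
    {
            /* rarely-taken no-peer delivery path */
            (void)soc;
            (void)nbuf;
    }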
Change-Id: Ib042f74c1f5a9cbe5fd947a24f004bb2fecf1fb1
CRs-Fixed: 2636365
The host rx return_buffer_manager should always be 4 or 6.
Add a check for an invalid return_buffer_manager value in
the ring descriptor.
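A minimal sketch of the check; the macro names and their
mapping to the values 4 and 6 are assumptions for
illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define RBM_SW1_BM 4 /* assumed host-owned RBM values */
    #define RBM_SW3_BM 6

    static bool rx_rbm_is_valid(uint8_t rbm)
    {
            return rbm == RBM_SW1_BM || rbm == RBM_SW3_BM;
    }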
Change-Id: I509dd58ddd89e6a0ce1bffa509dcfabbd0fbc975
CRs-Fixed: 2632372
If the RX packets reaped from the REO2SW ring hit
rx_reap_loop_pkt_limit, the REO2SW ring reaping breaks and
stops, but for the scattered msdu case all related buffers
should be received in one pass for further processing,
otherwise dp_rx_sg_create cannot handle them correctly.
(1) Make sure all buffers of a scattered msdu have been
received before allowing the break when
rx_reap_loop_pkt_limit is hit.
(2) Refine the skb unmap location: if the
msdu_scatter_wait_break logic is hit, the same skb may
otherwise be unmapped twice (not part of the current issue).
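A sketch of the break rule in (1), with a stand-in for the
msdu-continuation test read from the ring entry:

    #include <stdbool.h>
    #include <stdint.h>

    extern bool entry_is_msdu_continuation(const void *entry);

    /* When the reap budget is hit midway through a scattered
     * msdu, keep reaping until its last buffer (no
     * continuation bit) has been collected, then break. */
    static bool rx_reap_should_break(uint32_t reaped,
                                     uint32_t limit,
                                     const void *entry)
    {
            if (reaped < limit)
                    return false;
            return !entry_is_msdu_continuation(entry);
    }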
Change-Id: I85d385ee9c3b1a5ed56ae5e5b68636d04968553f
CRs-Fixed: 2632082
The rx descriptor obtained using the cookie can be NULL if
the cookie is invalid; hence dereferencing the rx descriptor
without any validation can cause an invalid address access.
Fix this by validating the rx descriptor obtained using the
cookie from the hal ring descriptor.
Change-Id: Ib584f0d8175b581d15b0e1c67d2f6ed9119ecbfc
CRs-Fixed: 2629254
We are seeing an invalid memory access crash in the
dp_get_pdev_for_mac_id call from dp_rx_process_invalid_peer,
due to an invalid mac id being passed, probably due to some
stack corruption.
Instead, use dp_get_pdev_for_lmac_id from
dp_rx_process_invalid_peer, which asserts on an invalid
mac id.
Change-Id: I0737132b5bbdd2fcbdb714d4643a69184ae3821e
CRs-Fixed: 2618432
Currently mesh mode uses channel numbering derived from APIs
that do not support 6GHz channel numbering, due to the
overloading of 6GHz channel numbers with 2.4GHz and 5GHz.
Add support to obtain the correct channel number (and
auxiliary information like band and frequency) through the
new APIs that support 6GHz.
Change-Id: Ib0b39ebae2a22bd6b2b5d17b9058c3c2100e0d59
CRs-Fixed: 2605229
Packets delivered to FISA via the exception error path do
not have TLVs, while FISA handling requires these additional
TLVs. The dp rx core handling skips the TLVs, so save the
TLV length info in nbuf->cb so that the TLVs can be
recovered back in FISA.
Change-Id: I53fab2e19abcbf82697ea6f53a4ddf3ea0dd0699
CRs-Fixed: 2620844
Rather than extracting the msdu end pkt tlv information on a
per-field basis in the fast data path, extract the msdu end
pkt tlv information at once and store it in a local
structure.
Change-Id: I0877ba4f824d480cc0851c72090f010852d0d203
Maintain packet counters for each peer based on protocol.
The following 3 protocols are supported (see the sketch
after the list):
* ICMP (IPv4)
* ARP (IPv4)
* EAP
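A minimal sketch of the per-peer counters, with illustrative
names; the real driver classifies the frame from its
ethertype/IP protocol fields:

    #include <stdint.h>

    enum peer_proto { PROTO_ICMP, PROTO_ARP, PROTO_EAP,
                      PROTO_MAX };

    struct peer_proto_stats {
            uint64_t rx_pkts[PROTO_MAX];
            uint64_t tx_pkts[PROTO_MAX];
    };

    static void peer_proto_count_rx(struct peer_proto_stats *st,
                                    enum peer_proto proto)
    {
            st->rx_pkts[proto]++;
    }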
Change-Id: I56dd9bbedd7b6698b7d155a524b242e8cabd76c3
CRs-Fixed: 2604877
In the function that processes rxdma err frames, add a null
check before calling the vdev callback function. If it is
valid, deliver the skb to the stack; otherwise free the skb.
Change-Id: I7c481eb8f702d9109c4a9f79db7e050ece6c3689
CRs-Fixed: 2607658
Add a framework to configure varying buffer sizes for both
data and monitor buffers.
For example, with this framework the user can configure, at
compile time, 2K SKBs for data buffers, monitor status
rings, monitor descriptor rings and monitor destination
rings, and 4K SKBs for monitor buffers.
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
A long msdu is received which is spread across multiple
nbufs, and there is no corresponding logic for this case.
qdf_set_pkt_len will invoke pskb_expand_head to renew the
skb->head buffer, but rx_tlv_hdr still points to the
original skb->data buffer, so an invalid access will happen.
As a WAR, drop the nbufs related to this msdu after
dp_rx_sg_create is done.
Change-Id: Iceb09fd04e4d768325018a8ddd4261ab4f75991a
CRs-Fixed: 2597927
If the vdev is marked as delete in progress, then the
packets received on that particular vdev should not be
submitted to the network stack.
Drop the packets if the vdev is marked to be deleted.
CRs-Fixed: 2599128
Change-Id: I0996669bea368b6c0afc10e1d929bc2f314ae6fe
Change set 2 of ctrl_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle
Change-Id: I3f180c9c360d564f0b229b447074ad23b7c0a737
multipass_rx_pkt_drop is a peer-level stats counter used to
count multipass rx dropped frames. Accumulate this counter
at the vdev and pdev levels.
Also initialize the multipass_en flag to false at vdev
attach.
Change-Id: Idaa85a71c80eefb9359abb026402b71aa28ad6a2
CRs-Fixed: 2595551
1. Move all LMAC rings to SOC from pDEV
2. Dynamically obtain lmac->pdev mapping while handling LMAC interrupts
Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
There is a chance of leaking RX buffers if the peer
disconnects while we are in the middle of processing the
list of MSDUs in an AMSDU.
Change-Id: I0081ec96da95ea570903dbd5d91c866c8c141667
Add a check to validate invalid_peer_head_msdu before
accessing it, to avoid a NULL dereference.
Change-Id: I9218bdd1100b48a32240546f380b1437ae72c406
CRs-Fixed: 2585651
Check for a vdev_id mismatch when delivering NBUFs to the
stack, to avoid holding the peer reference while giving the
nbuf list to the stack.
Change-Id: Ic475e00d5b1793ada7b26b7af3322ca2fa51836f
Change cmn_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle
Change-Id: I5716a87cad56b1dfe8dd56f193bbb6ff923a6af1
Remove pdev and vdev control path handles from data path.
Instead send pdev_id and vdev_id along with opaque soc
handle in ol_if_ops.
Change-Id: I6ee083f07e464f283da0d70ada70a4e10e18e1b2
Add a check to drop unicast frames being sent back to the
same originating VAP after hmmc conversion of IGMP control
packets.
Change-Id: Ic25812a7848af793075a0cb483100ebcf59d85b2
When a particular vdev is deleted, the corresponding rx
packets which have been queued to the rx thread are not
flushed. Hence when such packets are submitted to the
network stack, the dev for this skb will be invalid,
since we have already freed the adapter.
Flush out the packets in the rx thread queues, before
deleting the vdev.
CRs-Fixed: 2543392
Change-Id: I2490d0f5ce965f62152613a17a59232521ca058f
Add an atomic variable to indicate IPA pipes are connected.
Use it to ensure that SMMU mapping for rx buffers is sent
to IPA even if REO is not remapped but IPA pipes are connected.
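A minimal sketch of the flag and the mapping decision, using
C11 atomics as a stand-in for the driver's qdf_atomic_t:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Set on IPA pipe connect, cleared on disconnect. */
    static atomic_bool ipa_pipes_connected;

    /* Send SMMU mappings for rx buffers to IPA whenever the
     * pipes are up, even if REO is not remapped to IPA. */
    static bool need_ipa_smmu_map(bool reo_remapped)
    {
            return reo_remapped ||
                   atomic_load(&ipa_pipes_connected);
    }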
Change-Id: I5d82dc073fc2f0de6df102f7bfd2a1e945297aa8
CRs-Fixed: 2552128
Check if the REO ring is near full at the end of
dp_rx_process. In case the ring is near full, reap the
packets in the ring (and replenish, send to the upper layer)
as long as the quota allows, ignoring the HIF yield time
limit in such cases.
This change is needed to prevent back pressure from the REO
ring (in case it gets full). Back pressure from the REO ring
(to LMAC) may lead to a watchdog and eventually a FW crash.
Hence, avoid such a scenario by reaping as many packets as
the 'quota' allows when the REO ring is in the
aforementioned condition.
A side-effect of this change is that at times the RX softirq
may run longer (till the quota limit) than the configured
HIF yield time. However, this logic is not expected to kick
in on perf builds. The issue was reported on a defconfig
build where lots of debug options are enabled in the kernel,
which can slow the processing down.
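A sketch of the drain loop at the tail of dp_rx_process();
the near-full test and reap helpers are stand-ins:

    #include <stdbool.h>
    #include <stdint.h>

    extern bool reo_ring_near_full(void *hal_ring);
    extern uint32_t reo_reap_and_deliver(void *hal_ring,
                                         uint32_t budget);

    /* Ignore the HIF yield-time limit here: keep reaping up
     * to the remaining quota while the ring stays near full,
     * to relieve back pressure toward LMAC. */
    static void rx_drain_if_near_full(void *hal_ring,
                                      uint32_t quota_left)
    {
            while (quota_left && reo_ring_near_full(hal_ring)) {
                    uint32_t done;

                    done = reo_reap_and_deliver(hal_ring,
                                                quota_left);
                    if (!done)
                            break; /* ring drained */
                    quota_left -= done;
            }
    }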
Change-Id: I2eb6544c159ec5957d10386b1750fd96473fe13a
CRs-Fixed: 2540964
While handling a frag with no peer, do not set the packet
length, as this is already done while handling the fragment
before re-injection into the REO. Without this,
qdf_nbuf_set_pktlen will fail while doing a skb_put on a
non-linear packet.
Also, do not use the L2 header offset while doing a pull
head for the RX frag.
Change-Id: Ie1faeebf548b589ad524b31d51444c5934a7b976
CRs-Fixed: 2502756
Implement hal_rx_tlv_get_tcp_chksum API
to retrieve tcp_udp_checksum value
based on the chipset.
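A sketch of the per-chipset dispatch pattern this follows:
the hal layer fills an ops table at attach time and the API
indirects through it (names here are simplified):

    #include <stdint.h>

    struct hal_ops {
            uint16_t (*rx_tlv_get_tcp_chksum)(uint8_t *rx_tlv_hdr);
    };

    struct hal_soc {
            const struct hal_ops *ops;
    };

    static uint16_t hal_rx_tlv_get_tcp_chksum(struct hal_soc *hal,
                                              uint8_t *rx_tlv_hdr)
    {
            return hal->ops->rx_tlv_get_tcp_chksum(rx_tlv_hdr);
    }

The same pattern applies to the chipset-dependent
hal_rx_mpdu_get_addr2 change below.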
Change-Id: Ifab970f10af06f8c0cdbd14d57cb66b49bae1648
CRs-Fixed: 2522133
Implement hal_rx_mpdu_get_addr2 API
based on the chipset as
the macro to retrieve addr2 value is
chipset dependent.
Change-Id: I4026db892d4f2f41db72c50f780ba898b8a17fa7
CRs-Fixed: 2522133