Commit Graph

284 Commits

Author SHA1 Message Date
Rakesh Pillai
d7a0b3f14c qcacmn: Handle the nbuf sanity failure gracefully
The nbuf sanity check can fail when the HW posts the
same buffer twice. This case can be handled gracefully
by simply skipping the processing of the corresponding rx
descriptor.

Change-Id: I471bb9f364a51937e85249996e427f15872bda97
CRs-Fixed: 2738558
2020-08-01 21:08:54 -07:00
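The entry above describes skipping an rx descriptor whose nbuf sanity check fails (the HW posted the same buffer twice). A minimal C sketch of that skip-on-failure idea; the struct layout and function names are illustrative stand-ins, not the actual qcacmn definitions:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative stand-ins for the driver's rx descriptor and nbuf. */
    struct nbuf { void *data; };
    struct rx_desc { struct nbuf *nbuf; };

    /* True only when the descriptor still owns a valid buffer. */
    static bool rx_desc_nbuf_sanity_check(struct rx_desc *desc)
    {
        return desc && desc->nbuf != NULL;
    }

    /* Per-entry reap step: on sanity failure, skip this descriptor
     * instead of asserting, so a duplicate HW post is tolerated. */
    static void process_ring_entry(struct rx_desc *desc)
    {
        if (!rx_desc_nbuf_sanity_check(desc))
            return;    /* duplicate/stale entry: count it and move on */

        /* ... normal rx processing of desc->nbuf ... */
    }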
Manikanta Pubbisetty
fa2844b787 qcacmn: DP RX changes for RX buffer pool support
DP RX changes to support RX buffer pool, this is a pre-allocated pool
of buffers which will be utilized during low memory conditions.

Change-Id: I8d89a865f989d4e88c10390861e9d4be72ae0299
CRs-Fixed: 2731517
2020-08-01 13:13:36 -07:00
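The buffer-pool change above is about falling back to a pre-allocated reserve when normal allocation fails under memory pressure. A hedged sketch of that pattern; the pool size, names and allocation call are assumptions for illustration only:

    #include <stdlib.h>

    #define RX_POOL_SIZE 1024

    /* Hypothetical pre-allocated pool, refilled outside the hot path. */
    static void *rx_buf_pool[RX_POOL_SIZE];
    static int rx_buf_pool_count;

    static void *rx_buffer_alloc(size_t len)
    {
        void *buf = malloc(len);            /* normal allocation path */

        if (buf)
            return buf;

        /* Low-memory condition: fall back to the reserved pool. */
        if (rx_buf_pool_count > 0)
            return rx_buf_pool[--rx_buf_pool_count];

        return NULL;                        /* replenish retried later */
    }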
Jinwei Chen
f3bffbfdd2 qcacmn: handle IPA buffer smmu map/unmap correctly
Handle IPA buffer smmu map/unmap with the changes below:
(1) Do IPA smmu unmap for RX buffers received from the REO
exception/WBM RX release/REO DST/RXDMA DST rings.
(2) Align the IPA smmu map length to qdf_nbuf_map_nbytes_single()
with a fixed length.

Change-Id: I1ed46b31ed31f5b7e4e2484d519bc85d35ce1e69
CRs-Fixed: 2728644
2020-07-23 13:17:09 -07:00
Rakesh Pillai
eec104893e qcacmn: Add stats for stale cookie read from rx ring
Add a count for the number of times we read a stale
cookie value from the REO2SW ring.

Change-Id: I4b20fa93f5261b4ccb9479b7b3469a294703a184
CRs-Fixed: 2736028
2020-07-23 13:17:06 -07:00
Jinwei Chen
87d4f73245 qcacmn: Handle raw frames and invalid flow_idx stats
Make sure to drop raw Rx frames, as neither the driver nor the stack
is expected to handle them.
Add a counter for received packets with an invalid fisa flow_idx.

Change-Id: I5107c554b8ce6a9a7973f2aeca44bb0f360dc2df
CRs-Fixed: 2733981
2020-07-20 04:47:00 -07:00
Rakesh Pillai
28f1bf3f4e qcacmn: Invalidate ring desc cookie after processing
Currently all the rx ring descriptor contents are left
intact even after these entries are processed. This can,
at times, lead to stale entries being processed if the
head pointer of any ring is updated before the updated
contents of the ring descriptor get reflected in memory.

This can lead to scenarios where the host driver reads a
stale value of sw_cookie and frees/unmaps a currently in-use
buffer, thereby leading to the hardware accessing an unmapped
memory region.

The sw_cookie is an integral part of all the rx ring
processing. Hence we always mark the sw_cookie as invalid
after dequeuing an entry from the REO2SW ring, and we check
the validity of the sw_cookie every time before processing an
entry from the REO2SW ring. If the invalid bit in the
sw_cookie is set, we just skip this entry and move on to the
next entry in the ring.

Change-Id: I0e78fa662b8ba33e64687a4dee4d1a5875ddb4bf
CRs-Fixed: 2730718
2020-07-18 00:00:04 -07:00
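The cookie-invalidation commit above marks the sw_cookie invalid right after an entry is dequeued and skips any entry whose invalid bit is still set on a later (stale) read. A simplified bit-flag sketch; the actual cookie layout in qcacmn is not shown here and the bit position is an assumption:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative: reserve one cookie bit as the "invalid" marker. */
    #define RX_COOKIE_INVALID_BIT (1u << 31)

    static inline bool cookie_is_invalid(uint32_t cookie)
    {
        return (cookie & RX_COOKIE_INVALID_BIT) != 0;
    }

    /* Reap step: skip stale entries, and invalidate the cookie right
     * after use so a re-read of the same ring slot is ignored. */
    static void reap_entry(uint32_t *ring_cookie)
    {
        if (cookie_is_invalid(*ring_cookie))
            return;                     /* stale descriptor, move on */

        /* ... process the descriptor referenced by *ring_cookie ... */

        *ring_cookie |= RX_COOKIE_INVALID_BIT;
    }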
Aniruddha Paul
1b7f50b096 qcacmn: Update the Tx/Rx Delay histograms
Update the Host Tx/Rx per packet delay histogram

Change-Id: I40c3c05b2eb90427bd83783f13f2a7a3df41d232
2020-07-17 21:25:58 -07:00
Yeshwanth Sriram Guntuka
90e1136afb qcacmn: Decrement peer ref cnt after dp_rx_deliver_to_stack
The issue scenario is that a valid peer is fetched from the
peer_id in dp_rx_process and the peer ref count is released
prior to invoking dp_rx_deliver_to_stack. In parallel,
the peer is freed in a different context. This results in a
use-after-free within dp_rx_check_delivery_to_stack since the
stale peer is dereferenced to update stats.

The fix is to decrement the peer ref count after dp_rx_deliver_to_stack.

Change-Id: I145247f7795f926faba66c05927fdae0599f0cad
CRs-Fixed: 2720396
2020-07-02 08:48:21 -07:00
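The fix above is purely an ordering change: the peer reference is released only after delivery (and the stats update it performs) has completed. A minimal refcount sketch with hypothetical names; the real code uses the dp_peer reference APIs:

    #include <stdatomic.h>

    struct peer {
        atomic_int    ref_cnt;
        unsigned long to_stack_pkts;   /* stat updated during delivery */
    };

    static void peer_ref_release(struct peer *peer)
    {
        if (atomic_fetch_sub(&peer->ref_cnt, 1) == 1) {
            /* last reference dropped: peer may be freed here */
        }
    }

    static void deliver_to_stack(struct peer *peer, int npkts)
    {
        peer->to_stack_pkts += npkts;  /* use-after-free if the ref was
                                          already dropped by the caller */
    }

    static void rx_deliver(struct peer *peer, int npkts)
    {
        deliver_to_stack(peer, npkts); /* 1. deliver and update stats  */
        peer_ref_release(peer);        /* 2. only then drop the refcnt */
    }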
Balaganapathy Palanisamy
a497ea80aa qcacmn: Add peer isolation support per vap
Configure the client as an isolated peer if it is part of the isolation
list while creating/associating the node or adding the peer
to the isolation list.
Do not forward packets to and from clients in the isolation
list; instead, deliver them to the upper stack.

CRs-Fixed: 2689868

Change-Id: I67fd4dee0fb76c993746cdd66c70c241d407239a
2020-06-03 08:08:13 -07:00
Chaithanya Garrepalli
1d144f88bd qcacmn: store peer_id instead of peer_ids array in dp_peer
In Lithium, a peer will have only a single peer_id; hence remove
the peer_ids array from the dp_peer structure.

Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
2020-05-29 13:13:37 -07:00
Yeshwanth Sriram Guntuka
1173b39f0f qcacmn: Update peer rx mpdu count per mcs rate
Update the peer rx mpdu count per mcs rate as part
of the peer stats update.

Change-Id: I945d32c7701f5f5c9bfbbaa6ab4576b94389c84c
CRs-Fixed: 2688068
2020-05-21 20:34:59 -07:00
Jinwei Chen
800b1b181b qcacmn: add exception frame flag for non-regular RX delivery
FISA RX aggregation is not necessary for non-regular RX delivery,
as it requires an extra FISA flush and may also impact regular
dp_rx_process() RX FISA aggregation.
Add an exception frame flag for non-regular RX delivery, so that the
FISA path can identify such frames and bypass FISA RX.

Change-Id: Ic06cb72b516221754b124a673ab6c4f392947897
CRs-Fixed: 2680255
2020-05-14 13:04:20 -07:00
Yeshwanth Sriram Guntuka
8a2c60e8f5 qcacmn: Add debug info support for rx descriptors
Add debug info support for rx descriptors to log
the caller func name and timestamp in replenish
and free scenario.

Change-Id: I1d9b855d14f705094f241bae653f33a94d0e39b7
CRs-Fixed: 2677288
2020-05-13 12:39:51 -07:00
Jinwei Chen
ac1aea6e59 qcacmn: WAR for duplicate RX desc issue from REO2SW ring
Two back-to-back identical RX descriptors are received from
the REO2SW1 ring. After the first RX desc is processed,
the RX_desc nbuf is set to NULL.
When the second REO entry for the same RX desc is processed,
dp_rx_desc_nbuf_sanity_check() accesses the RX_desc nbuf, and
accessing the NULL nbuf leads to a panic.

As a WAR, check the RX_desc in_use flag first to avoid
invalid access to the nbuf, and move
dp_rx_desc_nbuf_sanity_check() after it.

Change-Id: Ib9455c76af85cf83587c1428b20a9ad9e93a9499
CRs-Fixed: 2672088
2020-05-04 21:27:59 -07:00
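The WAR above reorders two checks: the descriptor's in_use flag is consulted before the nbuf sanity check, so the second reap of the same descriptor (whose nbuf is already NULL) is skipped without dereferencing the NULL pointer. A hedged sketch with illustrative fields:

    #include <stdbool.h>
    #include <stddef.h>

    struct rx_desc {
        void *nbuf;     /* set to NULL once the first copy is processed */
        bool  in_use;   /* cleared when the descriptor is returned      */
    };

    static bool nbuf_sanity_check(struct rx_desc *d)
    {
        return d->nbuf != NULL;         /* would touch nbuf contents    */
    }

    static bool should_process(struct rx_desc *d)
    {
        if (!d->in_use)                 /* checked FIRST: duplicate hit */
            return false;
        return nbuf_sanity_check(d);    /* safe now: nbuf still owned   */
    }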
Nisha Menon
a2f5ca7c04 qcacmn: Check return status of deliver to stack function
Check the return status of the osif->rx function and, in case
of failure, drop the skb. This is needed when an OOR error frame
is received; if the frame was not delivered to the stack, it
needs to be dropped.
Add an error counter to the periodic stats to determine how many Rx
packets were rejected or dropped because delivery to the
stack failed.
Add the new status check for delivering rx frames to the stack
under the MCC-specific flag DELIVERY_TO_STACK_STATUS_CHECK.

Change-Id: I9b1c795f168774669783cc601e68003a7747a279
CRs-Fixed: 2672498
2020-05-03 18:19:10 -07:00
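A sketch of the status check described above: the osif rx callback's return value is inspected, and on failure the skb is dropped and counted, gated by the build-time flag the commit names. The callback signature and stats field are assumptions, not the actual qcacmn API:

    #include <stdbool.h>

    struct rx_stats { unsigned long deliver_fail_drops; };

    /* Hypothetical osif receive hook: returns false when the stack
     * rejects the frame (e.g. an OOR error frame). */
    static bool osif_rx(void *skb) { (void)skb; return true; }
    static void free_skb(void *skb) { (void)skb; }

    static void deliver_or_drop(void *skb, struct rx_stats *st)
    {
    #ifdef DELIVERY_TO_STACK_STATUS_CHECK
        if (!osif_rx(skb)) {
            free_skb(skb);              /* drop on delivery failure   */
            st->deliver_fail_drops++;   /* surfaced in periodic stats */
            return;
        }
    #else
        osif_rx(skb);
        (void)st;
    #endif
    }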
phadiman
1f3652debc qcacmn: Datapath init-deinit changes
Do a logical split of dp_soc_attach and
dp_pdev_attach into allocation and initialization routines,
and of dp_soc_detach and dp_pdev_detach into
de-initialization and free routines.

Change-Id: I23bdca0ca86db42a4d0b2554cd60d99bb207a647
2020-05-02 21:59:42 -07:00
Nisha Menon
12cc7c6558 qcacmn: Fix SKB buffer memory leak in dp_pdev_nbuf_alloc_and_map
Fix the skb leak in dp_rx_process where the rx descriptor cookie
validity check fails. This skb would otherwise be cleaned up only by
the rx desc and nbuf free function called during driver unload; this
change ensures the skb is released and the rx desc is added to the
free list during dp_rx_process itself.

Add the skb map and unmap functions, line numbers, and whether the
nbuf is mapped or unmapped to the nbuf tracking table. This debug info
will be logged once the skb is leaked.

Change-Id: I52dbf38922be20fc0aaea380e0e572af16de773e
CRs-Fixed: 2662992
2020-04-30 18:21:23 -07:00
Yeshwanth Sriram Guntuka
7dad533e6c qcacmn: Handle scattered msdu in OOR error scenario
In the OOR error handling scenario, an msdu is spread across
two nbufs. Due to this, there is a mismatch between the
msdu count fetched from the MPDU desc details and the count
fetched from the rx link descriptor.

The fix is to create a frag list for the case where an msdu
is spread across multiple nbufs.

Change-Id: I1d600a0988b373e68aad6ef815fb2d775763b7cb
CRs-Fixed: 2665963
2020-04-28 03:59:14 -07:00
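The fix above chains the continuation nbufs of one msdu into a frag list so the msdu counts from the MPDU descriptor and the rx link descriptor agree. A simplified linked-list sketch; the real code would use the qdf_nbuf frag-list helpers rather than these hypothetical types:

    #include <stddef.h>

    struct nbuf {
        struct nbuf *frag_next;   /* continuation buffers of the msdu */
        size_t       len;
    };

    /* Chain the continuation buffers onto the head nbuf so the msdu is
     * handed to the error path as a single logical frame. */
    static void build_frag_list(struct nbuf *head, struct nbuf **frags, int n)
    {
        struct nbuf *tail = head;

        for (int i = 0; i < n; i++) {
            tail->frag_next = frags[i];
            head->len += frags[i]->len;   /* total msdu length on head */
            tail = frags[i];
        }
        tail->frag_next = NULL;
    }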
phadiman
b133d310ec qcacmn: Split dp_rx_pdev_attach and dp_rx_pdev_detach
Split dp_rx_pdev_attach into dp_rx_pdev_desc_pool_alloc,
dp_rx_pdev_desc_pool_init, dp_rx_pdev_buffers_alloc and
dp_rx_pdev_detach into dp_rx_pdev_desc_pool_free,
dp_rx_pdev_desc_pool_deinit, dp_rx_pdev_buffers_free APIs.

This split is made because dp_pdev_init is introduced
as part of this FR, and these APIs will be called from
dp_pdev_init/dp_pdev_deinit or dp_pdev_attach/dp_pdev_detach
accordingly, to maintain symmetry with the DP init
and deinit paths.

Change-Id: Ib543ddae90b90f4706004080b1f2b7d0e5cfbfbc
CRs-Fixed: 2663595
2020-04-27 18:02:37 -07:00
Varsha Mishra
c71df5eef4 qcacmn: Restrict DMA Map/UnMap up to buffer size
Restrict DMA Map/UnMap up to the buffer size for packets in the rx process.
This gives a 2-3% CPU gain at peak throughput.

Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
2020-04-23 16:26:49 -07:00
Varsha Mishra
4c39342f6a qcacmn: Lockless access of reo destination rings
Remove the lock around REO destination ring access because the 4 rings
are accessed on 4 individual cores.

Change-Id: Ia3f92cc5136dbdbeea1e9cda8d52b474356a3e1a
CRs-Fixed: 2660901
2020-04-11 07:35:07 -07:00
Jinwei Chen
b3e587db52 qcacmn: Support RX 2K jump/OOR frame handling from REO2TCL ring
Support RX 2K jump/OOR frame handling from REO2TCL ring.
(a) Configure the REO error destination ring register to route 2K jump
/OOR frames to the REO2TCL ring.
(b) For 2K jump RX frames, only accept ARP frames and drop others;
meanwhile, send a delba action frame to the remote peer once the first
2K jump data is received.
(c) For OOR RX frames, accept ARP/EAPOL/DHCP/IPV6_DHCP frames; otherwise
drop them.

Change-Id: I7cb33279a8ba543686da4eba547e40f86813e057
CRs-Fixed: 2631949
2020-03-24 19:58:16 -07:00
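The routing policy in (b) and (c) above boils down to a small per-error-type accept/drop decision. An illustrative sketch; the enums and classifier are assumptions, and the real frame-type parsing lives in the rx TLV helpers:

    #include <stdbool.h>

    enum reo_err { ERR_2K_JUMP, ERR_OOR };
    enum pkt_type { PKT_ARP, PKT_EAPOL, PKT_DHCP, PKT_IPV6_DHCP, PKT_OTHER };

    /* True when the errored frame should still be delivered. */
    static bool accept_reo_err_frame(enum reo_err err, enum pkt_type pkt)
    {
        switch (err) {
        case ERR_2K_JUMP:
            /* only ARP passes; a delba is also sent on the first one */
            return pkt == PKT_ARP;
        case ERR_OOR:
            return pkt == PKT_ARP || pkt == PKT_EAPOL ||
                   pkt == PKT_DHCP || pkt == PKT_IPV6_DHCP;
        }
        return false;
    }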
Saket Jha
42f305e423 qcacmn: Avoid accessing invalid cookie rx_descriptor
Remove debug dump call to dp_rx_desc_dump() as cookie rx_descriptor is
invalid.

Change-Id: I106ebc2f872e43079abd6e6e493c90022fd09c3b
CRs-Fixed: 2638059
2020-03-16 19:55:12 -07:00
Rakesh Pillai
f2e0f22bf7 qcacmn: Fix stack frame overflow for dp_rx_process
The dp_rx_process stack frame has grown to exceed the
stack frame size limit of 4096. dp_rx_deliver_to_stack_no_peer
is a big function which should not be inlined; inlining it
into another function greatly increases the stack size consumed
by the caller.

Since dp_rx_deliver_to_stack_no_peer is not called very
frequently from dp_rx_process, changing it to a non-inline
function does not affect the core rx datapath much. Hence
change dp_rx_deliver_to_stack_no_peer to a non-inline
function.

Change-Id: Ib042f74c1f5a9cbe5fd947a24f004bb2fecf1fb1
CRs-Fixed: 2636365
2020-03-16 03:47:48 -07:00
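Keeping the rarely-taken helper out-of-line, as described above, keeps its locals off dp_rx_process's stack frame. An illustrative GCC/Clang-style sketch (not the qcacmn source):

    /* A large, rarely-called helper marked noinline so its locals do
     * not inflate the caller's stack frame. */
    __attribute__((noinline))
    static void deliver_to_stack_no_peer(void *soc, void *nbuf_list)
    {
        char scratch[512];            /* stands in for large local state */

        (void)soc; (void)nbuf_list; (void)scratch;
    }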
Saket Jha
d364435727 qcacmn: Validate rx return_buffer_manager
The host rx return_buffer_manager should always be 4 or 6. Add a check
for an invalid return_buffer_manager value in the ring descriptor.

Change-Id: I509dd58ddd89e6a0ce1bffa509dcfabbd0fbc975
CRs-Fixed: 2632372
2020-03-06 18:52:30 -08:00
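A literal reading of the check above, with the two legal values hard-coded for illustration; the function names are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* Per the commit, the host's return_buffer_manager must be 4 or 6. */
    static bool rbm_is_valid(uint8_t rbm)
    {
        return rbm == 4 || rbm == 6;
    }

    /* Ring-descriptor handling bails out early on an invalid value. */
    static bool validate_ring_desc(uint8_t rbm, unsigned long *err_cnt)
    {
        if (!rbm_is_valid(rbm)) {
            (*err_cnt)++;      /* count and skip the corrupt descriptor */
            return false;
        }
        return true;
    }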
Jinwei Chen
356c6b714e qcacmn: fix null skb accessing due to incomplete scattered msdu
If the RX packets reaped from the REO2SW ring hit rx_reap_loop_pkt_limit,
REO2SW ring reaping will break and stop; but in the scattered msdu case,
all related buffers should be received in one pass for further processing,
otherwise dp_rx_sg_create cannot handle them correctly.

(1) Make sure all buffers for a scattered msdu are received before
allowing the break when rx_reap_loop_pkt_limit is hit.
(2) Refine the skb unmap location: if the msdu_scatter_wait_break logic is
hit, the same skb may otherwise be unmapped twice (not the current issue).

Change-Id: I85d385ee9c3b1a5ed56ae5e5b68636d04968553f
CRs-Fixed: 2632082
2020-03-04 09:56:49 -08:00
Rakesh Pillai
7a26da4e7e qcacmn: Validate the rx descriptor before dereferencing
The rx descriptor obtained using the cookie
can be NULL if the cookie is invalid. Hence
dereferencing the rx descriptor without any
validation can cause invalid address access.

Fix this by validating the rx descriptor
which has been obtained using the cookie from
the hal ring descriptor.

Change-Id: Ib584f0d8175b581d15b0e1c67d2f6ed9119ecbfc
CRs-Fixed: 2629254
2020-03-03 18:55:00 -08:00
Vivek
10d8aff10e qcacmn: Use right API for retrieving pdev based on id
We are seeing an invalid memory access crash
in the dp_get_pdev_for_mac_id call from
dp_rx_process_invalid_peer, due to an invalid mac id being
passed, probably because of stack corruption.

We should instead use dp_get_pdev_for_lmac_id from
dp_rx_process_invalid_peer, which asserts on an invalid
mac id.

Change-Id: I0737132b5bbdd2fcbdb714d4643a69184ae3821e
CRs-Fixed: 2618432
2020-02-24 22:02:03 -08:00
Aditya Sathish
041409f98f qcacmn: Add 6GHz support for chan/freq/band usage in mesh mode
Currently mesh mode uses channel numbering which is derived
from APIs that don't support 6GHz channel numbering, due to the
overloading of 6GHz channels with 2.4GHz and 5GHz.

Add support to obtain the correct channel number (and auxiliary
information like band and frequency) through the new APIs that
support 6GHz.

Change-Id: Ib0b39ebae2a22bd6b2b5d17b9058c3c2100e0d59
CRs-Fixed: 2605229
2020-02-20 08:32:09 -08:00
Manjunathappa Prakash
c4667b8b12 qcacmn: Save Rx TLV offset info so as to recover in FISA
Packets delivered to FISA via the exception error path do not have TLVs,
but FISA handling requires these additional TLVs. DP rx core handling
skips the TLVs, so save the TLV length info in nbuf->cb so that the TLVs
can be recovered in FISA.

Change-Id: I53fab2e19abcbf82697ea6f53a4ddf3ea0dd0699
CRs-Fixed: 2620844
2020-02-20 06:39:48 -08:00
Manjunathappa Prakash
b896f0eade qcacmn: Add NULL check for vdev before accessing
Add NULL check for valid vdev before accessing it.

Change-Id: I977671bd7f612a30e1cb3b72d6b46200eaf1a34c
CRs-Fixed: 2606040
2020-02-12 11:59:15 -08:00
Manjunathappa Prakash
5d73e075e8 qcacmn: Add support to deliver the packets to FISA
Hook up the FISA-specific callback to deliver Rx packets to the
FISA framework.

Change-Id: Ia16c857764c0c3bf99c3855eac01659eb03c7608
CRs-Fixed: 2599917
2020-02-12 11:58:59 -08:00
syed touqeer pasha
6997a37a1e qcacmn: Extract msdu end TLV information at once during Rx fast path
Rather than extracting the msdu end pkt tlv information on a per-field
basis in the fast data path, extract the msdu end pkt tlv information
at once and store it in a local structure.

Change-Id: I0877ba4f824d480cc0851c72090f010852d0d203
2020-02-05 02:28:41 -08:00
Adil Saeed Musthafa
bbc4de06d7 qcacmn: per-peer protocol count tracking
Maintain packet counters for each peer based on protocol. The following 3
protocols are supported:
* ICMP (IPv4)
* ARP (IPv4)
* EAP

Change-Id: I56dd9bbedd7b6698b7d155a524b242e8cabd76c3
CRs-Fixed: 2604877
2020-02-04 14:17:14 -08:00
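A sketch of the per-peer protocol accounting described above: frames are classified into the three tracked protocols and a per-peer counter is bumped. The EtherType/IP-protocol classification and field names are assumptions for illustration:

    #include <stdint.h>

    enum proto { PROTO_ICMP, PROTO_ARP, PROTO_EAP, PROTO_MAX };

    struct peer_proto_stats {
        uint64_t rx[PROTO_MAX];
        uint64_t tx[PROTO_MAX];
    };

    /* EtherType/IP-protocol classification: ARP, EAPOL, IPv4 ICMP. */
    static int classify(uint16_t ethertype, uint8_t ip_proto)
    {
        if (ethertype == 0x0806)                  return PROTO_ARP;
        if (ethertype == 0x888E)                  return PROTO_EAP;
        if (ethertype == 0x0800 && ip_proto == 1) return PROTO_ICMP;
        return -1;                                /* not tracked */
    }

    static void count_rx(struct peer_proto_stats *s, uint16_t et, uint8_t ipp)
    {
        int p = classify(et, ipp);

        if (p >= 0)
            s->rx[p]++;
    }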
Nisha Menon
4f6336687c qcacmn: Add vdev callback null check in rxdma err processing
In the function that processes rxdma err frames, add a null check
before calling the vdev callback function. If it is valid, deliver
the skb to the stack; otherwise free the skb.

Change-Id: I7c481eb8f702d9109c4a9f79db7e050ece6c3689
CRs-Fixed: 2607658
2020-01-30 11:15:55 -08:00
Shashikala Prabhu
03a9f5b19c qcacmn: Add framework to configure varying data/monitor buf size
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework, the user can configure 2K SKBs for data
buffers, monitor status rings, monitor descriptor rings and monitor
destination rings, and 4K SKBs for monitor buffers, at compile time.

Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
2020-01-29 18:08:33 -08:00
Jinwei Chen
0b92469595 qcacmn: fix invalid accessing to rx_tlv_hdr due to scattered msdu
A long msdu is received and is spread across multiple nbufs; there is
no corresponding logic for this case. qdf_set_pkt_len will invoke
pskb_expand_head to renew the skb->head buffer, but rx_tlv_hdr still
points to the original skb->data buffer, so invalid access will happen.
As a WAR, drop the nbufs related to this msdu after dp_rx_sg_create is
done.

Change-Id: Iceb09fd04e4d768325018a8ddd4261ab4f75991a
CRs-Fixed: 2597927
2020-01-26 16:23:50 -08:00
Rakesh Pillai
c1aeb35c3c qcacmn: Do not process RX packet if vdev is pending delete
If the vdev is marked as delete in progress, then the
packets received on that particular vdev should not be
submitted to the network stack.

Drop the packets if the vdev is marked to be deleted.

CRs-Fixed: 2599128
Change-Id: I0996669bea368b6c0afc10e1d929bc2f314ae6fe
2020-01-21 14:03:05 -08:00
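The check above is a single early exit before frames are handed to the stack. A minimal sketch with illustrative names:

    #include <stdbool.h>

    struct vdev { bool delete_in_progress; };

    static void free_nbuf_list(void *nbuf_list)  { (void)nbuf_list; }
    static void stack_deliver(void *nbuf_list)   { (void)nbuf_list; }

    static void rx_deliver_to_stack(struct vdev *vdev, void *nbuf_list)
    {
        if (vdev->delete_in_progress) {
            free_nbuf_list(nbuf_list);   /* drop: vdev is being torn down */
            return;
        }
        stack_deliver(nbuf_list);
    }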
Pavankumar Nandeshwar
0ce3870654 qcacmn: Modify set 2 of ctrl_ops in dp for umac-dp decoupling
Change set 2 of ctrl_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle

Change-Id: I3f180c9c360d564f0b229b447074ad23b7c0a737
2020-01-20 17:52:06 -08:00
Ankit Kumar
53581e92fd qcacmn: Update counter in multipass rx packet drop
multipass_rx_pkt_drop is a peer-level stats counter
used to count dropped multipass rx frames.
Accumulate this counter at the vdev and pdev levels.

This change also initializes the multipass_en flag to false at
vdev attach.

Change-Id: Idaa85a71c80eefb9359abb026402b71aa28ad6a2
CRs-Fixed: 2595551
2020-01-17 19:50:47 -08:00
Amit Shukla
1edfe5ae7c qcacmn: Data path changes for Dynamic Mode Change FR. Changes include:
1. Move all LMAC rings to SOC from pDEV
2. Dynamically obtain lmac->pdev mapping while handling LMAC interrupts

Change-Id: Ib017d49243405b62fc34099c01a2b898b25341d0
2020-01-16 17:14:39 -08:00
Linux Build Service Account
2105b28e4c Merge "qcacmn: Reduce excessive console logging" 2020-01-12 19:35:57 -08:00
Chaithanya Garrepalli
79b64ac4ba qcacmn: fix potential memory leak in dp_rx_process
There is a chance of leaking RX buffers if a peer disconnects
while we are in the middle of processing the list of MSDUs in an
AMSDU.

Change-Id: I0081ec96da95ea570903dbd5d91c866c8c141667
2020-01-11 22:28:04 -08:00
Venkata Sharath Chandra Manchala
09d116aee9 qcacmn: Avoid invalid invalid_peer_head_msdu list
Add a check to validate invalid_peer_head_msdu before accessing it,
to avoid a NULL dereference.

Change-Id: I9218bdd1100b48a32240546f380b1437ae72c406
CRs-Fixed: 2585651
2020-01-10 20:12:26 -08:00
Chaithanya Garrepalli
52511a17d1 qcacmn: check for vdev_id mismatch to deliver NBUFs to stack
Check for a vdev_id mismatch when delivering NBUFs to the stack, to avoid
holding a peer reference while giving the nbuf list to the stack.

Change-Id: Ic475e00d5b1793ada7b26b7af3322ca2fa51836f
2019-12-31 01:24:43 -08:00
Pavankumar Nandeshwar
a234716d1d qcacmn: cmn_ops changes in datapath for umac-dp decoupling
Change cmn_ops APIs to replace pdev, vdev and peer
dp handles with pdev_id, vdev_id and peer mac address
along with dp soc handle

Change-Id: I5716a87cad56b1dfe8dd56f193bbb6ff923a6af1
2019-12-27 03:24:59 -08:00
Saket Jha
6ef034094f qcacmn: Initialize qdf_nbuf_t before use
Initialize qdf_nbuf_t variables before use.

Change-Id: I4922de4c9f31c98dc745802dbb23a0757795769d
CRs-Fixed: 2587451
2019-12-20 20:06:36 -08:00
Pavankumar Nandeshwar
4c7b81b540 qcacmn: removal of cp handles and changes for ol_if_ops
Remove pdev and vdev control path handles from data path.
Instead send pdev_id and vdev_id along with opaque soc
handle in ol_if_ops.

Change-Id: I6ee083f07e464f283da0d70ada70a4e10e18e1b2
2019-12-04 07:45:10 -08:00
Mainak Sen
8bc9b42eb3 qcacmn: Loopback check for ucast frame after hmmc
Add a check to drop a unicast frame being sent back to the same
originating VAP after hmmc conversion of IGMP control packets.

Change-Id: Ic25812a7848af793075a0cb483100ebcf59d85b2
2019-12-01 19:16:31 -08:00
Rakesh Pillai
534a143d8f qcacmn: Add support to flush rx packets for a vdev
When a particular vdev is deleted, the corresponding rx
packets which have been queued to the rx thread are not
flushed. Hence when such packets are submitted to the
network stack, the dev for this skb will be invalid,
since we have already freed the adapter.

Flush out the packets in the rx thread queues, before
deleting the vdev.

CRs-Fixed: 2543392
Change-Id: I2490d0f5ce965f62152613a17a59232521ca058f
2019-11-01 00:18:13 -07:00
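A sketch of the flush described above: the rx-thread queue is walked and every queued packet belonging to the vdev being deleted is dropped before the adapter (and its netdev) is freed. The queue layout and names are assumptions:

    #include <stdint.h>
    #include <stdlib.h>

    struct queued_nbuf {
        struct queued_nbuf *next;
        uint8_t             vdev_id;
    };

    /* Remove and free every queued rx packet that belongs to vdev_id,
     * so nothing referencing the soon-to-be-freed dev gets delivered. */
    static void rx_thread_flush_vdev(struct queued_nbuf **head, uint8_t vdev_id)
    {
        struct queued_nbuf **pp = head;

        while (*pp) {
            if ((*pp)->vdev_id == vdev_id) {
                struct queued_nbuf *victim = *pp;

                *pp = victim->next;
                free(victim);
            } else {
                pp = &(*pp)->next;
            }
        }
    }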