1. Add a flag to support the hlos tid override feature in dp vdev
2. Update tid from nbuf->priority in dp_tx_send
3. Update nbuf->priority from tid in dp_rx_process
Change-Id: I66e8d77733a667f3f60b77e0d7bb444f7c5ad93d
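Illustrative sketch of the tid override idea (the flag name and
helpers are hypothetical, not the actual driver code):

    #include <stdbool.h>
    #include <stdint.h>

    #define DP_TID_MASK 0x7 /* tid lives in the low 3 bits of priority */

    /* Hypothetical, trimmed-down vdev carrying the new flag. */
    struct dp_vdev {
        bool hlos_tid_override_en;
    };

    /* TX path: take the tid from the nbuf priority when enabled. */
    static uint8_t dp_tx_get_tid(struct dp_vdev *vdev,
                                 uint32_t nbuf_priority,
                                 uint8_t default_tid)
    {
        if (vdev->hlos_tid_override_en)
            return nbuf_priority & DP_TID_MASK;
        return default_tid;
    }

    /* RX path: reflect the received tid back into the nbuf priority. */
    static uint32_t dp_rx_set_priority(struct dp_vdev *vdev, uint8_t tid,
                                       uint32_t nbuf_priority)
    {
        if (vdev->hlos_tid_override_en)
            return (nbuf_priority & ~DP_TID_MASK) | (tid & DP_TID_MASK);
        return nbuf_priority;
    }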
Add debug support for the VDEV refcount: take the
refcount by module id and decrement the corresponding
refcount with the same module id.
Change-Id: I15c075816994ba70155fefbc0bce208b20fb9a59
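Minimal userspace sketch of per-module refcount debugging (the
module-id enum and field names are hypothetical):

    #include <assert.h>
    #include <stdatomic.h>

    /* Hypothetical module ids that take vdev references. */
    enum dp_mod_id { DP_MOD_ID_TX, DP_MOD_ID_RX, DP_MOD_ID_MISC,
                     DP_MOD_ID_MAX };

    struct dp_vdev {
        atomic_int ref_cnt;                  /* aggregate refcount */
        atomic_int mod_refs[DP_MOD_ID_MAX];  /* per-module debug counts */
    };

    static void dp_vdev_get_ref(struct dp_vdev *vdev, enum dp_mod_id id)
    {
        atomic_fetch_add(&vdev->ref_cnt, 1);
        atomic_fetch_add(&vdev->mod_refs[id], 1);
    }

    static void dp_vdev_unref(struct dp_vdev *vdev, enum dp_mod_id id)
    {
        int prev = atomic_fetch_sub(&vdev->mod_refs[id], 1);

        /* Catch a module releasing a reference it never took. */
        assert(prev > 0);
        (void)prev;
        atomic_fetch_sub(&vdev->ref_cnt, 1);
    }

The per-module counters make it obvious which caller leaked or
over-released a reference, which the single aggregate count cannot show.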
Add a new API, dp_vdev_get_ref_by_id(), which returns a vdev
pointer after taking a reference on it. The caller of this API
has to ensure that the reference is released by calling the
dp_vdev_unref_delete() API.
A new lock, soc->vdev_map_lock, is introduced to protect the
vdev id to object array.
Change-Id: I883e328932e35ef31254125492dbae20cebe0e00
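Sketch of the get-ref-by-id pattern (types, table size and the
pthread lock are stand-ins for the real soc structures). Taking the
reference inside the lock is what makes the returned pointer safe to
use after the lock is dropped:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    #define MAX_VDEV 17 /* hypothetical table size */

    struct dp_vdev { atomic_int ref_cnt; };

    struct dp_soc {
        pthread_spinlock_t vdev_map_lock;  /* protects vdev_id_map */
        struct dp_vdev *vdev_id_map[MAX_VDEV];
    };

    /* Returns the vdev with a reference held, or NULL if the id is
     * unused. The caller must release it via dp_vdev_unref_delete(). */
    static struct dp_vdev *dp_vdev_get_ref_by_id(struct dp_soc *soc,
                                                 int vdev_id)
    {
        struct dp_vdev *vdev;

        if (vdev_id < 0 || vdev_id >= MAX_VDEV)
            return NULL;

        pthread_spin_lock(&soc->vdev_map_lock);
        vdev = soc->vdev_id_map[vdev_id];
        if (vdev)
            atomic_fetch_add(&vdev->ref_cnt, 1);
        pthread_spin_unlock(&soc->vdev_map_lock);

        return vdev;
    }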
Remove the peer backpointer in the AST entry and store the
peer_id instead.
Assign the peer_id to the AST entry in the AST MAP event,
and also add the AST entry to the peer's AST list.
In the AST map & AST unmap APIs, use AST find by vdev_id.
Change-Id: I74d9828dc309149d98f6f577b5c8304cb087fd76
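Sketch of the design change (structures and the lookup table are
hypothetical): resolving the peer_id on demand means a stale id simply
fails the lookup instead of leaving a dangling backpointer behind:

    #include <stddef.h>
    #include <stdint.h>

    #define HTT_INVALID_PEER 0xffff /* hypothetical invalid-id marker */
    #define MAX_PEERS 64            /* hypothetical table size */

    struct dp_peer { int ref_cnt; };

    struct dp_soc { struct dp_peer *peer_id_map[MAX_PEERS]; };

    /* After the change the AST entry keeps only the id, not a pointer. */
    struct dp_ast_entry { uint16_t peer_id; };

    static struct dp_peer *dp_ast_get_peer(struct dp_soc *soc,
                                           struct dp_ast_entry *ast)
    {
        if (ast->peer_id == HTT_INVALID_PEER ||
            ast->peer_id >= MAX_PEERS)
            return NULL;
        return soc->peer_id_map[ast->peer_id];
    }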
Add support to take the peer reference with a module id,
to help debug peer reference related issues.
Change-Id: Ie20c7e710b9784b52f2e0f3d7488509282528a00
Use the unified version of the dp_peer_find_by_id API,
which takes a peer reference.
Also use the unified peer reference release API,
dp_peer_unref_delete.
Change-Id: Ibb516a933020a42a5584dbbbba59f8d9b72dcaa4
We have come across scenarios where rxdma pushes
a certain entry more than once to the REO exception
ring. In this scenario, when we try to unmap a buffer,
it can lead to multiple unmaps of the same buffer.
Handle this case by skipping the processing of this
duplicate entry, if already unmapped, and proceeding to
the next entry.
Change-Id: Iae66f27e432f795ba4730911029fa1d63a75cb06
CRs-Fixed: 2739176
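Sketch of the duplicate-entry guard (the unmapped flag and helper
names are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    struct rx_desc {
        void *nbuf;
        bool unmapped;  /* set once the buffer has been dma-unmapped */
    };

    /* Hypothetical unmap hook; stands in for the real dma unmap. */
    static void dp_rx_nbuf_unmap(struct rx_desc *desc)
    {
        desc->unmapped = true;
    }

    /* Returns false when the entry is a duplicate and must be skipped. */
    static bool dp_rx_exception_entry_process(struct rx_desc *desc)
    {
        if (desc->unmapped)
            return false;  /* already handled: move to the next entry */

        dp_rx_nbuf_unmap(desc);
        /* ...continue normal exception-path processing... */
        return true;
    }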
Memory optimization of the monitor status ring by allocating buffers
during replenish using alloc_skb (Linux API).
It creates a buffer of the required size rather than the 4K size
previously requested via dev_alloc_skb.
Change-Id: I3ae5e403de28c4570f8ac3b50d3ca878a9e4b2f9
CRs-Fixed: 2733931
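Kernel-style sketch of the replenish-time allocation (the size macro,
function name and GFP flag choice are assumptions, not the driver's):

    #include <linux/skbuff.h>

    #define DP_RX_MON_STATUS_BUF_SIZE 2048 /* hypothetical size */

    /* Replenish one monitor status ring buffer with a right-sized
     * skb; alloc_skb() sizes the data area to the request, unlike
     * the fixed 4K request previously made through dev_alloc_skb(). */
    static struct sk_buff *dp_mon_status_buf_alloc(void)
    {
        return alloc_skb(DP_RX_MON_STATUS_BUF_SIZE, GFP_ATOMIC);
    }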
The Rx desc pool has a flag to identify whether frag or nbuf
operations need to be performed for alloc, map, prep and
free of monitor destination buffers.
This flag will be set only for the mon destination desc pool,
if the RX_MON_MEM_FRAG feature is enabled.
In all other cases it will be set to zero and the default nbuf
operations will be used.
This flag gets initialized at the time of pdev rx_desc_pool
initialization and gets reset during pdev deinit.
The mon destination buffer will have frag support if the
RX_MON_MEM_FRAG flag is set.
Change-Id: I67c6c823ee4f114035b884c024a1a9054a40665b
CRs-Fixed: 2741757
Add WMI support for WMI_TWT_SESSION_STATS_EVENTID. This event
contains stats for a given TWT session.
Change-Id: I01d5f7b30da803ee713a14c1d1124b8af7161bca
CRs-Fixed: 2609951
The rx rings are relatively small. A duplicate entry
sent by hardware cannot be back-tracked by looking
at the ring contents alone.
Hence add a history to track the entries for the
REO2SW, REO exception and SW2REO rings.
Change-Id: I9b0b311950d60a9421378ce0fcc0535be450f713
CRs-Fixed: 2739181
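Minimal sketch of a per-ring history (depth and record fields are
hypothetical): each reaped entry is logged into a circular buffer so a
duplicate pushed by hardware can be back-tracked later:

    #include <stdatomic.h>
    #include <stdint.h>

    #define DP_RX_HIST_MAX 2048 /* hypothetical depth, power of two */

    struct dp_buf_info_record {
        uint64_t paddr;
        uint32_t sw_cookie;
        uint64_t timestamp;
    };

    struct dp_rx_ring_history {
        atomic_uint index;
        struct dp_buf_info_record entry[DP_RX_HIST_MAX];
    };

    static void dp_rx_ring_record_entry(struct dp_rx_ring_history *hist,
                                        uint64_t paddr, uint32_t cookie,
                                        uint64_t now)
    {
        uint32_t idx = atomic_fetch_add(&hist->index, 1) &
                       (DP_RX_HIST_MAX - 1);

        hist->entry[idx].paddr = paddr;
        hist->entry[idx].sw_cookie = cookie;
        hist->entry[idx].timestamp = now;
    }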
The nbuf sanity check can fail when HW posts the
same buffer twice. This case can be handled gracefully
by simply skipping the processing of the corresponding rx
descriptor.
Change-Id: I471bb9f364a51937e85249996e427f15872bda97
CRs-Fixed: 2738558
DP RX changes to support an RX buffer pool: a pre-allocated pool
of buffers to be utilized during low-memory conditions.
Change-Id: I8d89a865f989d4e88c10390861e9d4be72ae0299
CRs-Fixed: 2731517
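Single-context sketch of the reserve-pool idea (names, sizes and the
use of malloc are stand-ins; the real pool needs locking): buffers set
aside at init are handed out only when the regular allocation fails:

    #include <stddef.h>
    #include <stdlib.h>

    #define DP_RX_BUF_POOL_SIZE 1024 /* hypothetical pool depth */
    #define DP_RX_BUF_LEN 2048

    struct dp_rx_buf_pool {
        void *buf[DP_RX_BUF_POOL_SIZE];
        int count;
    };

    static void dp_rx_buf_pool_init(struct dp_rx_buf_pool *pool)
    {
        pool->count = 0;
        for (int i = 0; i < DP_RX_BUF_POOL_SIZE; i++) {
            void *b = malloc(DP_RX_BUF_LEN);

            if (!b)
                break;
            pool->buf[pool->count++] = b;
        }
    }

    /* Try the normal allocator first; fall back to the reserve. */
    static void *dp_rx_buf_alloc(struct dp_rx_buf_pool *pool)
    {
        void *b = malloc(DP_RX_BUF_LEN);

        if (b)
            return b;
        if (pool->count > 0)
            return pool->buf[--pool->count]; /* low-memory fallback */
        return NULL;
    }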
Add a count for the number of times we read a stale
cookie value from the REO2SW ring.
Change-Id: I4b20fa93f5261b4ccb9479b7b3469a294703a184
CRs-Fixed: 2736028
Make sure to drop raw Rx frames, as neither the driver nor the
stack is expected to handle them.
Add a counter for packets received with an invalid FISA flow_idx.
Change-Id: I5107c554b8ce6a9a7973f2aeca44bb0f360dc2df
CRs-Fixed: 2733981
Currently all the rx ring descriptor contents are left
intact even after these entries are processed. This can,
at times, lead to stale entries being processed, if the
head pointer of a ring is updated before the updated
contents of the ring descriptor get reflected in memory.
This can lead to scenarios where the host driver reads a
stale value of sw_cookie and frees/unmaps a currently in-use
buffer, thereby leading to the hardware accessing an unmapped
memory region.
The sw_cookie is an integral part of all rx ring
processing. Hence we always mark the sw_cookie as invalid
after dequeuing an entry from the REO2SW ring, and every time
we check the validity of the sw_cookie before we
process an entry from the REO2SW ring. If the invalid bit in the
sw_cookie is set, we just skip this entry and move on to the
next entry in the ring.
Change-Id: I0e78fa662b8ba33e64687a4dee4d1a5875ddb4bf
CRs-Fixed: 2730718
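Sketch of the cookie invalidation scheme (the bit position is a
hypothetical choice): the bit is set right after dequeue, so a re-read
of stale ring memory is caught before the descriptor is looked up:

    #include <stdbool.h>
    #include <stdint.h>

    #define DP_RX_COOKIE_INVALID_BIT (1u << 31) /* hypothetical bit */

    /* Mark the cookie in the ring descriptor right after dequeue. */
    static void dp_rx_cookie_invalidate(uint32_t *ring_desc_cookie)
    {
        *ring_desc_cookie |= DP_RX_COOKIE_INVALID_BIT;
    }

    /* Check before processing: a set bit means the entry is stale. */
    static bool dp_rx_cookie_is_valid(uint32_t ring_desc_cookie)
    {
        return !(ring_desc_cookie & DP_RX_COOKIE_INVALID_BIT);
    }

An entry whose cookie fails the check is skipped and the reap loop
moves straight on to the next ring entry.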
The issue scenario is that a valid peer is fetched from the
peer_id in dp_rx_process and the peer refcount is released
prior to invoking dp_rx_deliver_to_stack. In parallel,
the peer is freed in a different context. This results in a
use-after-free within dp_rx_check_delivery_to_stack, since the
stale peer is dereferenced to update stats.
The fix is to decrement the peer refcount after dp_rx_deliver_to_stack.
Change-Id: I145247f7795f926faba66c05927fdae0599f0cad
CRs-Fixed: 2720396
Configure the client as an isolated peer if it is part of the
isolation list while creating/associating the node or adding the
peer to the isolation list.
Do not forward packets to or from clients in the isolation
list; instead, deliver them to the upper stack.
CRs-Fixed: 2689868
Change-Id: I67fd4dee0fb76c993746cdd66c70c241d407239a
In Lithium, a peer will have only a single peer_id; hence remove
the peer_ids array from the dp_peer structure.
Change-Id: Ib98270b7fd98f1199b862e4608f990687914b7cc
FISA RX aggregation is not necessary for non-regular RX delivery,
as it requires an extra FISA flush and may also impact regular
dp_rx_process() RX FISA aggregation.
Add an exception frame flag for non-regular RX delivery, so that
the FISA path can identify such frames and bypass FISA RX.
Change-Id: Ic06cb72b516221754b124a673ab6c4f392947897
CRs-Fixed: 2680255
Add debug info support for rx descriptors to log
the caller function name and timestamp in the replenish
and free scenarios.
Change-Id: I1d9b855d14f705094f241bae653f33a94d0e39b7
CRs-Fixed: 2677288
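Sketch of the per-descriptor debug record (field names, sizes and the
clock source are assumptions): the replenish and free paths stamp the
caller and time so a descriptor's last transitions can be reconstructed:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define DP_FUNC_NAME_LEN 32

    struct rx_desc_dbg_info {
        char replenish_caller[DP_FUNC_NAME_LEN];
        uint64_t replenish_ts;
        char free_caller[DP_FUNC_NAME_LEN];
        uint64_t free_ts;
    };

    static uint64_t dp_now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull +
               (uint64_t)ts.tv_nsec;
    }

    /* Invoked as dp_rx_desc_update_dbg(info, __func__, true) on
     * replenish, and with false on free. */
    static void dp_rx_desc_update_dbg(struct rx_desc_dbg_info *info,
                                      const char *func, bool is_replenish)
    {
        if (is_replenish) {
            snprintf(info->replenish_caller, DP_FUNC_NAME_LEN, "%s", func);
            info->replenish_ts = dp_now_ns();
        } else {
            snprintf(info->free_caller, DP_FUNC_NAME_LEN, "%s", func);
            info->free_ts = dp_now_ns();
        }
    }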
Two back-to-back identical RX descriptors are received from
the REO2SW1 ring. After the first RX desc is processed,
the RX desc nbuf is set to NULL.
When the second REO entry (same RX desc) is processed,
dp_rx_desc_nbuf_sanity_check() accesses the RX desc nbuf, and
the NULL nbuf access leads to a panic.
As a WAR, check the RX desc in_use flag first to avoid
invalid access to the nbuf, and move
dp_rx_desc_nbuf_sanity_check() after that check.
Change-Id: Ib9455c76af85cf83587c1428b20a9ad9e93a9499
CRs-Fixed: 2672088
Check the return status of the osif->rx function and, in case
of failure, drop the skb. This is needed when an OOR error frame
is received: if the frame was not delivered to the stack, it
needs to be dropped.
Add an error counter to the periodic stats to determine how many
Rx packets were rejected or dropped because delivery to the
stack failed.
Add the new status check for delivering rx frames to the stack
under an MCC-specific flag, DELIVERY_TO_STACK_STATUS_CHECK.
Change-Id: I9b1c795f168774669783cc601e68003a7747a279
CRs-Fixed: 2672498
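Sketch of the delivery status check (types, names and the use of a
plain callback are stand-ins for the osif layer):

    #include <stdlib.h>

    enum status { STATUS_SUCCESS, STATUS_E_FAILURE };

    struct nbuf { char data[64]; };
    struct rx_stats { unsigned long delivery_failed; };

    /* Hypothetical stack-delivery callback (stands in for osif->rx). */
    typedef enum status (*osif_rx_fn)(void *ctx, struct nbuf *buf);

    /* Compiled under DELIVERY_TO_STACK_STATUS_CHECK in the driver. */
    static void dp_rx_deliver(osif_rx_fn rx, void *ctx, struct nbuf *buf,
                              struct rx_stats *stats)
    {
        if (rx(ctx, buf) != STATUS_SUCCESS) {
            stats->delivery_failed++;  /* surfaced in periodic stats */
            free(buf);                 /* drop: the stack rejected it */
        }
    }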
Do a logical split of dp_soc_attach and dp_pdev_attach
into allocation and initialization routines,
and of dp_soc_detach and dp_pdev_detach into
de-initialization and free routines.
Change-Id: I23bdca0ca86db42a4d0b2554cd60d99bb207a647
Fix the skb leak in dp_rx_process when the rx descriptor cookie
validity check fails. This skb would be cleaned up as part of the
rx desc and nbuf free function called during driver unload;
however, this change ensures that the skb is released and the rx
desc is added to the free list during dp_rx_process itself.
Record the skb map/unmap functions, line numbers and whether the
nbuf is mapped or unmapped in the nbuf tracking table. This debug
info will be logged once the skb is found to be leaked.
Change-Id: I52dbf38922be20fc0aaea380e0e572af16de773e
CRs-Fixed: 2662992
In the OOR error handling scenario, the msdu is spread across
two nbufs. Due to this, there is a mismatch between the
msdu count fetched from the MPDU desc details and the count
fetched from the rx link descriptor.
The fix is to create a frag list for the case where the msdu
is spread across multiple nbufs.
Change-Id: I1d600a0988b373e68aad6ef815fb2d775763b7cb
CRs-Fixed: 2665963
Split dp_rx_pdev_attach into dp_rx_pdev_desc_pool_alloc,
dp_rx_pdev_desc_pool_init and dp_rx_pdev_buffers_alloc, and
dp_rx_pdev_detach into dp_rx_pdev_desc_pool_free,
dp_rx_pdev_desc_pool_deinit and dp_rx_pdev_buffers_free APIs.
This split is made because dp_pdev_init is introduced
as part of this FR, and these APIs will be called from
dp_pdev_init/dp_pdev_deinit or dp_pdev_attach/dp_pdev_detach
accordingly, to maintain symmetry with the DP init
and deinit paths.
Change-Id: Ib543ddae90b90f4706004080b1f2b7d0e5cfbfbc
CRs-Fixed: 2663595
Restrict DMA map/unmap to the buffer size for packets in the rx
process. This gives a 2-3% CPU gain at peak throughput.
Change-Id: Iaf5e9f6f734d80b6d2c234bd8e679cf2a81c7e2c
CRs-Fixed: 2660698
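Kernel-style sketch of the idea (the size macro and function names
are hypothetical): mapping and unmapping only the buffer length the
hardware can fill, rather than the full skb allocation, trims the
per-packet cache-maintenance span:

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    #define RX_DATA_BUFFER_SIZE 2048 /* hypothetical rx buffer length */

    static dma_addr_t dp_rx_nbuf_map(struct device *dev,
                                     struct sk_buff *skb)
    {
        return dma_map_single(dev, skb->data, RX_DATA_BUFFER_SIZE,
                              DMA_FROM_DEVICE);
    }

    static void dp_rx_nbuf_unmap(struct device *dev, dma_addr_t paddr)
    {
        dma_unmap_single(dev, paddr, RX_DATA_BUFFER_SIZE,
                         DMA_FROM_DEVICE);
    }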
Remove the lock around REO destination ring access, because the
4 rings are accessed on 4 individual cores.
Change-Id: Ia3f92cc5136dbdbeea1e9cda8d52b474356a3e1a
CRs-Fixed: 2660901
Support RX 2K jump/OOR frame handling from the REO2TCL ring.
(a) Configure the REO error destination ring register to route 2K
jump/OOR frames to the REO2TCL ring.
(b) For 2K jump RX frames, only accept ARP frames and drop others;
meanwhile, send a delba action frame to the remote peer once the
first 2K jump data frame is received.
(c) For OOR RX frames, accept ARP/EAPOL/DHCP/IPV6_DHCP frames and
drop everything else.
Change-Id: I7cb33279a8ba543686da4eba547e40f86813e057
CRs-Fixed: 2631949
Remove the debug dump call to dp_rx_desc_dump(), as the rx descriptor
obtained from the cookie is invalid.
Change-Id: I106ebc2f872e43079abd6e6e493c90022fd09c3b
CRs-Fixed: 2638059
The dp_rx_process stack frame has grown to exceed the
stack frame size limit of 4096. dp_rx_deliver_to_stack_no_peer
is a big function which should not be inlined: inlining it
into its caller greatly increases the caller's stack usage.
Since dp_rx_deliver_to_stack_no_peer is not called very
frequently from dp_rx_process, making it non-inline
does not hurt the core rx datapath much. Hence
change dp_rx_deliver_to_stack_no_peer to a non-inline
function.
Change-Id: Ib042f74c1f5a9cbe5fd947a24f004bb2fecf1fb1
CRs-Fixed: 2636365
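Sketch of the fix pattern (the body is a stand-in): forbidding
inlining keeps the helper's large locals out of the caller's
stack-limited frame:

    /* Rarely called from the hot loop, so keep its large locals out
     * of the caller's 4K-limited stack frame. */
    static __attribute__((noinline)) void
    dp_rx_deliver_to_stack_no_peer(void *soc, void *nbuf)
    {
        unsigned char scratch[512]; /* stands in for big local state */

        (void)soc; (void)nbuf; (void)scratch;
    }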
The host rx return_buffer_manager should always be 4 or 6. Add a
check for invalid return_buffer_manager values in the ring descriptor.
Change-Id: I509dd58ddd89e6a0ce1bffa509dcfabbd0fbc975
CRs-Fixed: 2632372
If the RX packets reaped from the REO2SW ring hit
rx_reap_loop_pkt_limit, REO2SW ring reaping will break and stop;
but for the scattered msdu case, all related buffers should be
received in one pass for further processing, otherwise
dp_rx_sg_create cannot handle them correctly.
(1) Make sure all buffers for a scattered msdu are received before
allowing the break when rx_reap_loop_pkt_limit is hit.
(2) Refine the skb unmap location, since if the
msdu_scatter_wait_break logic is hit, the same skb may otherwise be
unmapped twice (not the current issue).
Change-Id: I85d385ee9c3b1a5ed56ae5e5b68636d04968553f
CRs-Fixed: 2632082
The rx descriptor obtained using the cookie
can be NULL if the cookie is invalid. Hence
dereferencing the rx descriptor without any
validation can cause an invalid address access.
Fix this by validating the rx descriptor
obtained using the cookie from the hal ring
descriptor.
Change-Id: Ib584f0d8175b581d15b0e1c67d2f6ed9119ecbfc
CRs-Fixed: 2629254
We are seeing an invalid memory access crash
in the dp_get_pdev_for_mac_id call from
dp_rx_process_invalid_peer, due to an invalid mac id being
passed, probably caused by stack corruption.
We should instead use dp_get_pdev_for_lmac_id from
dp_rx_process_invalid_peer, which asserts on an invalid
mac id.
Change-Id: I0737132b5bbdd2fcbdb714d4643a69184ae3821e
CRs-Fixed: 2618432
Currently mesh mode uses channel numbering derived
from APIs that don't support 6GHz channel numbering, due to the
overloading of 6GHz channel numbers with 2.4GHz and 5GHz ones.
Add support to obtain the correct channel number (and auxiliary
information like band and frequency) through the new APIs that
support 6GHz.
Change-Id: Ib0b39ebae2a22bd6b2b5d17b9058c3c2100e0d59
CRs-Fixed: 2605229
Packets delivered to FISA via the exception error path do not have
TLVs. FISA handling requires these additional TLVs, but DP rx core
handling skips them; save the TLV length info in nbuf->cb so that
the TLVs can be recovered in FISA.
Change-Id: I53fab2e19abcbf82697ea6f53a4ddf3ea0dd0699
CRs-Fixed: 2620844
Rather than extracting the msdu end pkt tlv information field by
field during the fast data path, extract the msdu end pkt tlv
information at once and store it in a local structure.
Change-Id: I0877ba4f824d480cc0851c72090f010852d0d203
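Sketch of the batched extraction (the structure, field names and
offsets are illustrative, not the real TLV layout): all needed
msdu_end fields are pulled once into a stack structure instead of
re-parsing the TLV for each field:

    #include <stdint.h>

    /* Hypothetical subset of msdu_end TLV fields for the fast path. */
    struct hal_rx_msdu_metadata {
        uint32_t l3_hdr_pad;
        uint32_t sa_idx;
        uint32_t da_idx;
        uint8_t  sa_is_valid;
        uint8_t  da_is_mcbc;
    };

    /* One pass over the TLV instead of one helper call per field.
     * Offsets below are placeholders for the real HW definitions. */
    static void hal_rx_msdu_metadata_get(const uint8_t *rx_tlv_hdr,
                                         struct hal_rx_msdu_metadata *m)
    {
        m->l3_hdr_pad  = rx_tlv_hdr[0] & 0x3;
        m->sa_idx      = rx_tlv_hdr[4] | (rx_tlv_hdr[5] << 8);
        m->da_idx      = rx_tlv_hdr[6] | (rx_tlv_hdr[7] << 8);
        m->sa_is_valid = rx_tlv_hdr[8] & 0x1;
        m->da_is_mcbc  = (rx_tlv_hdr[8] >> 1) & 0x1;
    }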
Maintain packet counters for each peer based on protocol. The
following 3 protocols are supported:
* ICMP (IPv4)
* ARP (IPv4)
* EAP
Change-Id: I56dd9bbedd7b6698b7d155a524b242e8cabd76c3
CRs-Fixed: 2604877
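Sketch of the classification (counter structure and macro names are
hypothetical): frames are matched on EtherType, plus the IPv4
protocol field for ICMP, and counted per peer:

    #include <stddef.h>
    #include <stdint.h>

    #define ETHERTYPE_IPV4 0x0800
    #define ETHERTYPE_ARP  0x0806
    #define ETHERTYPE_EAP  0x888e /* EAPOL */
    #define IP_PROTO_ICMP  1

    struct dp_peer_proto_stats {
        uint64_t icmp;
        uint64_t arp;
        uint64_t eap;
    };

    static void dp_peer_count_proto(struct dp_peer_proto_stats *st,
                                    const uint8_t *eth_hdr, size_t len)
    {
        uint16_t ethertype;

        if (len < 14) /* need a full Ethernet header */
            return;
        ethertype = (eth_hdr[12] << 8) | eth_hdr[13];

        if (ethertype == ETHERTYPE_ARP)
            st->arp++;
        else if (ethertype == ETHERTYPE_EAP)
            st->eap++;
        else if (ethertype == ETHERTYPE_IPV4 && len >= 14 + 20 &&
                 eth_hdr[14 + 9] == IP_PROTO_ICMP) /* IPv4 proto byte */
            st->icmp++;
    }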
In the function that processes rxdma err frames, add a NULL check
before calling the vdev callback function. If the callback is valid,
deliver the skb to the stack; otherwise, free the skb.
Change-Id: I7c481eb8f702d9109c4a9f79db7e050ece6c3689
CRs-Fixed: 2607658
Add a framework to configure varying buffer sizes for both data and
monitor buffers.
For example, with this framework, the user can configure at compile
time a 2K SKB for data buffers, monitor status rings, monitor
descriptor rings and monitor destination rings, and a 4K SKB for
monitor buffers.
Change-Id: I212d04ff6907e71e9c80b69834aa07ecc6db4d2e
CRs-Fixed: 2604646
A long msdu is received which is spread across multiple nbufs, and
there is no corresponding logic for this case.
qdf_set_pkt_len will invoke pskb_expand_head to renew the skb->head
buffer, but rx_tlv_hdr still points to the original skb->data
buffer, so invalid access will happen.
As a WAR, drop the nbufs related to this msdu after dp_rx_sg_create
is done.
Change-Id: Iceb09fd04e4d768325018a8ddd4261ab4f75991a
CRs-Fixed: 2597927