Certain fields are now retrieved from the REO destination
ring descriptor and stored in the nbuf cb. This avoids accessing
the RX TLVs, which are spread across two cache lines, and helps
improve small packet (64 byte) performance.
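A minimal sketch of the idea follows; the struct layout and names
(reo_desc_fields, rx_cb, cache_reo_fields) are illustrative, not the
driver's actual definitions.

    #include <stdbool.h>
    #include <stdint.h>

    #define RX_CB_FIRST_MSDU 0x1
    #define RX_CB_LAST_MSDU  0x2

    struct reo_desc_fields {       /* subset parsed from the REO dest ring */
        uint16_t msdu_len;
        uint8_t  l3_hdr_pad;
        bool     first_msdu;
        bool     last_msdu;
    };

    struct rx_cb {                 /* lives in the nbuf/skb cb area */
        uint16_t msdu_len;
        uint8_t  l3_hdr_pad;
        uint8_t  flags;
    };

    static void cache_reo_fields(struct rx_cb *cb,
                                 const struct reo_desc_fields *d)
    {
        /* Copy once here; later stages read cb instead of the
         * RX TLVs, which span two cache lines. */
        cb->msdu_len   = d->msdu_len;
        cb->l3_hdr_pad = d->l3_hdr_pad;
        cb->flags      = (d->first_msdu ? RX_CB_FIRST_MSDU : 0) |
                         (d->last_msdu  ? RX_CB_LAST_MSDU  : 0);
    }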
Change-Id: Iee95813356c3dd90ab2e8a2756913def89710fb1
CRs-Fixed: 2426703
Track PPDU Id history from monitor status and destination
rings, and display as part of monitor stats.
Change-Id: I7b8985f93b1cdb6eb5210bba5a65e9bfb617a710
Set tid to invalid before enqueuing the frame to the transmit
classifier. When tid is not set to invalid, the HW does not classify it.
Change-Id: I4fb96a010fed1e0178d7d70a83503e1dcf5f755a
1. Reduce trace level of stats from fatal to info_high
2. Move stats APIs to dp_stats.c
Change-Id: I0b27a254075610fbbafcc8b09b72403ccb83473a
CRs-Fixed: 2446189
Count broadcast stats on the rx side after rx_tlv_hdr is extracted.
After removing rx_tlv_hdr, nbuf->data points to the actual payload.
Change-Id: I20e4029fd84bc3d76e2fd47eba74b9bef72a32c0
1) Add a counter to track the invalid TX release source
2) Capture the descriptor that caused the issue
3) Print the invalid release source
CRs-Fixed: 2380964
Change-Id: I0dc410ae2e7c9df58ef53e3f20ca7979d086659e
Currently, if a BA session for a tid is already active, we tear down
the current session and do not send an addba response. Clients are
expected to send a new addba request, but this is not happening.
Therefore, accept the new addba request after flushing the tid queue,
and then set up the queue with the latest addba request values.
Also, do not reset the ssn to 0, so that the old sequence numbers
can continue to be used.
Change-Id: I129ce74d7414c3255d5a4360f39fec9ed3d7650f
CRs-fixed: 2428837
Peer stats struct members are initialized to 0, so when other functions
update peer stats, the 0 rssi value is recorded as the minimum
rssi since it is the least value. So initialize the rssi with an
invalid value instead.
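A minimal sketch of the intent, with illustrative names (INVALID_RSSI,
peer_rx_stats) rather than the actual driver fields:

    #include <stdint.h>

    #define INVALID_RSSI 0xFF   /* assumed sentinel outside the valid range */

    struct peer_rx_stats {
        uint8_t min_rssi;
        uint8_t last_rssi;
    };

    static void peer_rx_stats_init(struct peer_rx_stats *s)
    {
        /* Seed with the sentinel, not 0, so the zero-initialized field
         * can never be mistaken for the minimum rssi. */
        s->min_rssi = INVALID_RSSI;
    }

    static void peer_rx_stats_update(struct peer_rx_stats *s, uint8_t rssi)
    {
        s->last_rssi = rssi;
        if (s->min_rssi == INVALID_RSSI || rssi < s->min_rssi)
            s->min_rssi = rssi;
    }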
Change-Id: Ife033598c7ac47a9a26595f97bff058570d3e91f
CRs-Fixed: 2403735
Add target specific HW header for ipq6018 compilation.
Remove the 8074 header dependencies for 6018 compilation.
Change-Id: I8e45e3e039a4596c6722538405dcd381918fa6b1
CCE stats are reported across rings today. Since HKv2 is an SMP, this
creates the possibility of missing stats when two or more CPUs are
updating the counters for the same protocol type. To avoid
this, we can either use locks or store stats on a per-CPU basis.
Since tagging happens on the core data path, locks are not preferred.
Instead, we use per-ring counters, since each CPU operates on one REO
ring. Exception packets today go to Ring-0 (normal rx on REO-0 also
goes to Ring-0), hence tags are maintained separately for the exception
ring as well.
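A simplified sketch of the per-ring counter layout (sizes and names
are illustrative):

    #include <stdint.h>

    #define MAX_REO_RINGS  4
    #define MAX_EXCEPTION  1
    #define MAX_PROTO_TAGS 24

    struct cce_tag_stats {
        uint64_t ring[MAX_REO_RINGS + MAX_EXCEPTION][MAX_PROTO_TAGS];
    };

    static inline void cce_tag_inc(struct cce_tag_stats *s,
                                   unsigned int ring_id, unsigned int tag)
    {
        s->ring[ring_id][tag]++;        /* single writer per ring: no lock */
    }

    static uint64_t cce_tag_total(const struct cce_tag_stats *s,
                                  unsigned int tag)
    {
        uint64_t sum = 0;

        for (unsigned int r = 0; r < MAX_REO_RINGS + MAX_EXCEPTION; r++)
            sum += s->ring[r][tag];     /* aggregate only on the stats path */
        return sum;
    }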
Change-Id: Ib01934619fb026bf64a8ac998eaec2d4df7f5a6e
CRs-Fixed: 2438143
Observed that when IPA offload is enabled, RX packets
are not routed correctly to the IPA ring. Currently only
IX0 of the REO_DESTINATION_CTRL_IX registers is remapped,
which covers only the 3-bit reo_destination_indication
range 0 to 7.
The fix is to remap the REO_DESTINATION_CTRL_IX2|3 registers
so that reo_destination_indication values in the range 16 to
31 can also be routed to the REO2IPA ring when IPA offload
is enabled. When IPA offload is disabled, the saved values
of IX2 and IX3 are written back to HW.
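A rough sketch of the save/remap/restore flow; the register indices,
accessor helpers and remap values below are placeholders, not the
actual HAL interface:

    #include <stdint.h>

    #define REO_DEST_CTRL_IX2 2      /* placeholder register indices */
    #define REO_DEST_CTRL_IX3 3

    static uint32_t fake_regs[8];    /* stand-in register file for the sketch */
    static uint32_t hw_read_reg(uint32_t ix)              { return fake_regs[ix]; }
    static void     hw_write_reg(uint32_t ix, uint32_t v) { fake_regs[ix] = v; }

    static uint32_t ix2_saved, ix3_saved;

    static void reo_remap_for_ipa(uint32_t ix2_ipa, uint32_t ix3_ipa)
    {
        /* Remember the defaults so they can be restored later. */
        ix2_saved = hw_read_reg(REO_DEST_CTRL_IX2);
        ix3_saved = hw_read_reg(REO_DEST_CTRL_IX3);

        /* Point destination indications 16..31 at the REO2IPA ring. */
        hw_write_reg(REO_DEST_CTRL_IX2, ix2_ipa);
        hw_write_reg(REO_DEST_CTRL_IX3, ix3_ipa);
    }

    static void reo_remap_restore(void)
    {
        /* IPA offload disabled: write the saved values back to HW. */
        hw_write_reg(REO_DEST_CTRL_IX2, ix2_saved);
        hw_write_reg(REO_DEST_CTRL_IX3, ix3_saved);
    }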
Change-Id: I3428b450ab10076d27c7628a3729e8cec088bd94
CRs-Fixed: 2434331
Revert change "Limit maxinum nss number as 2 for MCL platform" and
align with WIN function: hal_rx_msdu_start_nss_get_8074v2().
besides, only increase rx.nss[NSS - 1] when NSS is > 0 and rx packet
type is 11N/AC/AX.
Change-Id: I0a64f00a3d252c806216cc3196e71290f111c88a
CRs-Fixed: 2429329
Currently, RX buffer mappings to the IPA domain are released when
the REO2IPA ring is disabled, but before the IPA pipes are disabled.
There is a chance that IPA is still accessing buffers whose
mappings have been released, which could lead to an SMMU fault.
The fix is to defer releasing the SMMU mappings to the IPA domain
until the IPA pipes are disabled.
Change-Id: I62ac99e6a9b83cfd1e70a17ffacdea3ca3720a5c
CRs-Fixed: 2436812
A deadlock happens when the control path takes scn_lock and calls
cdp_peer_delete(), which in turn tries to acquire vdev_list_lock,
while another core is handling peer_unmap, holds vdev_list_lock,
and tries to release the reference of the control peer, which in
turn tries to acquire scn_lock, leading to deadlock.
Pass the vdev object instead of the vdev id to dp_reset_and_release_peer_mem,
which avoids the need for vdev_list_lock and fixes this deadlock.
Change-Id: I805bb4958b9b5ec4b29d70ba67e7410572201a60
CRs-Fixed: 2426717
Add stop_th and start_th for platforms with QCA_LL_TX_FLOW_CONTROL_V2
disabled, which use a pdev-based tx_desc pool. Change the pdev tx_desc
pool size from 1056 to 900; the default stop_th is 15% and start_th is
25%, exactly the same settings as QCA_LL_TX_FLOW_CONTROL_V2. Pause the
netif tx queues for all vdevs when stop_th is reached instead of dropping
frames. Reducing the pdev pool size can significantly reduce firmware WMM
drops. Both host and firmware frame drops lead to bad TCP throughput.
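For reference, the threshold arithmetic worked out from the percentages
above (the macro names are illustrative, and which descriptor count the
thresholds apply to is an assumption of this sketch):

    #include <stdio.h>

    #define PDEV_TX_DESC_POOL_SIZE 900
    #define STOP_TH_PERCENT        15
    #define START_TH_PERCENT       25

    int main(void)
    {
        int stop_th  = PDEV_TX_DESC_POOL_SIZE * STOP_TH_PERCENT / 100;
        int start_th = PDEV_TX_DESC_POOL_SIZE * START_TH_PERCENT / 100;

        /* Queues are paused at stop_th and resumed at start_th. */
        printf("stop_th=%d start_th=%d\n", stop_th, start_th); /* 135, 225 */
        return 0;
    }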
Change-Id: I77daf8c9fdef624f8ec479885b7705deb1fef142
CRs-Fixed: 2437471
Add a sanity check to avoid sending an htt
txrx_stats request with an invalid mac_id
parameter to firmware.
Change-Id: Iae980bbffdcf6759b6d467849c5ebc65628f17ba
CRs-Fixed: 2438693
Currently, frames are dropped if they arrive before the
peer is registered with the data path module by the control path.
So cache the rx frames and flush them to the upper layer when the
peer is registered with the data path.
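A minimal sketch of the cache-and-flush idea, with simplified types
(frame, peer_rx_cache) in place of the driver's nbuf/peer structures:

    #include <stdbool.h>
    #include <stddef.h>

    struct frame { struct frame *next; };

    struct peer_rx_cache {
        bool          registered;
        struct frame *head, *tail;
    };

    static void rx_deliver(struct frame *f) { (void)f; /* to upper layer */ }

    static void rx_on_frame(struct peer_rx_cache *p, struct frame *f)
    {
        if (p->registered) {             /* normal path */
            rx_deliver(f);
            return;
        }
        f->next = NULL;                  /* peer not ready: cache the frame */
        if (p->tail)
            p->tail->next = f;
        else
            p->head = f;
        p->tail = f;
    }

    static void rx_on_peer_registered(struct peer_rx_cache *p)
    {
        p->registered = true;
        while (p->head) {                /* flush cached frames in order */
            struct frame *f = p->head;

            p->head = f->next;
            rx_deliver(f);
        }
        p->tail = NULL;
    }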
Change-Id: I086122fffdcf33e25ba57774ef944550cdd2fa20
CRs-Fixed: 2329308
With this feature, using appropriate commands, link layer, network layer,
transport layer and some application protocols can be tagged with
user-provided tag values for easier identification of protocols. The
supported protocols today are as follows:
ARP, DHCPv4, DHCPv6, DNS over TCP (v4), DNS over TCP (v6), DNS over UDP
(v4), DNS over UDP (v6), ICMPv4, ICMPv6, TCPv4, TCPv6, UDPv4,
UDPv6, IPv4, IPv6, EAP.
Receive packets are tagged by hardware. Tags are applied based on the
first matching rule. Hence it is recommended that the rules be
programmed such that tags are configured from the application layer
down to the data link layer to get the expected results.
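A small sketch of why rule order matters: the tag from the first
matching rule wins, so narrower higher-layer rules (e.g. DNS over UDP)
should be programmed before broader lower-layer ones (e.g. UDPv4).
The rule table type below is illustrative, not the driver's.

    #include <stdbool.h>
    #include <stdint.h>

    struct tag_rule {
        bool     (*match)(const uint8_t *pkt, uint32_t len);
        uint16_t tag;
    };

    static uint16_t classify(const struct tag_rule *rules, int n,
                             const uint8_t *pkt, uint32_t len)
    {
        for (int i = 0; i < n; i++)
            if (rules[i].match(pkt, len))
                return rules[i].tag;   /* first match terminates the search */
        return 0;                      /* no rule matched: leave untagged */
    }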
Change-Id: Ibdc2bd2b78234f482074955e89fb93f05988eaca
Currently there is no support for updating tx stats for
non-QoS frames. Update tx stats for non-QoS data frames.
Change-Id: Ib61dd6e3b0c676b5af806c4b1e5cca22ea6ffaa5
CRs-Fixed: 2421925
FW assigns the pdev_id to lmac_id mapping for each platform (HK, CYP, etc.).
This information is passed to the host via the WMI_MAC_PHY_CAPABILITIES TLV.
Instead of hard-coding the mapping again in the host, use the info sent by FW.
Change-Id: I01ad81e97a1b4aa1f0fea3951f6e8285a0f0c039
Currently we are updating sojourn stats for all tids,
but rate stats are maintained only for data tids 0-7.
Updating stats for a tid value exceeding 7 leads to an
assert.
Update tx rate, rx rate and sojourn stats correctly.
Change-Id: Ib2f5791aa7b8e85aac6283c46f58ce7e821a559b
CRs-Fixed: 2428648 2429575
Refactor the code to move the serialization of
rx reorder queue setup wmi command to target-if
layer.
CRs-Fixed: 2431099
Change-Id: I6b383f5e875fec55c3586dfee576894f6eb35f73
Support WDI 3.0 SW path intra-BSS forwarding. The major
difference for WDI 3.0 is the metadata info passed
from the IPA driver in skb->cb[].
Previously the intra-BSS fwd decision was made by FW, which
passed fw_desc to IPA, and the IPA driver passed it on to the
WLAN driver. For WDI 3.0, FW is not involved in the RX
path, and the SW path intra-BSS fwd decision has to be made
in the WLAN driver.
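A rough sketch of such a SW forwarding decision; the metadata layout
and the peer lookup below are assumptions for illustration, not the
actual WDI 3.0 format:

    #include <stdbool.h>
    #include <stdint.h>

    struct ipa_rx_meta {          /* assumed layout packed into skb->cb[] */
        uint16_t vdev_id;
        uint16_t da_peer_id;
        bool     da_is_valid;
        bool     da_is_mcbc;
    };

    /* Stub lookup for the sketch; the real driver consults its peer table. */
    static bool peer_vdev_id(uint16_t peer_id, uint16_t *vdev_id)
    {
        (void)peer_id;
        (void)vdev_id;
        return false;
    }

    static bool intrabss_fwd_needed(const struct ipa_rx_meta *m)
    {
        uint16_t da_vdev;

        if (!m->da_is_valid || m->da_is_mcbc)
            return false;                 /* only unicast with a known DA */
        if (!peer_vdev_id(m->da_peer_id, &da_vdev))
            return false;
        return da_vdev == m->vdev_id;     /* same BSS: forward in SW */
    }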
Change-Id: Ibc2246620490905fd992a2df31cc6f241cc63592
CRs-Fixed: 2432831
Currently vdev deletion happens as part of the last
peer's unmap, and vdev creation happens in a different context invoked
by the upper layer in dp_vdev_attach_wifi3.
In a scenario where the old vdev is being deleted and a new one is being
created with the same vdev id (e.g. P2P -> SAP transition), it is
possible that the old and new vdevs co-exist in the pdev->vdev_list
at the same time. In such a case, the API (get_vdev_from_vdev_id) can
return the old stale vdev even when a new vdev has been created. This
is not the expected behaviour.
Check the "delete.pending" flag before returning the vdev. If the flag
is set, skip that vdev.
Also change some logging to the dp_ logging functions.
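A simplified sketch of the lookup guard described above (list and flag
names condensed for illustration):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct vdev {
        struct vdev *next;
        uint8_t      vdev_id;
        bool         delete_pending;
    };

    static struct vdev *get_vdev_from_vdev_id(struct vdev *list, uint8_t id)
    {
        for (struct vdev *v = list; v; v = v->next) {
            if (v->vdev_id != id)
                continue;
            if (v->delete_pending)
                continue;            /* old instance on its way out: skip */
            return v;                /* the live vdev for this id */
        }
        return NULL;
    }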
CRs-Fixed: 2432430
Change-Id: Icadd84059dfbab12910678e5b24007010c15b6a8
Print OFDMA tx and rx rate stats received
from firmware as part of txrx_stats 9.
Change-Id: I081016480bdb0d161e33be7ee789497c43ad6cc0
CRs-Fixed: 2415731
On low-memory systems, skb allocation failures are observed in
dp_rx_buffers_replenish during the driver loading phase, within the
dp_rx_pdev_attach code path. Per kernel memory team analysis, the
failures are caused by a large number of atomic allocations.
Currently, when replenishing RX buffers, the srng spinlock is
grabbed first and then skb buffers are requested, which makes
skb allocation happen in atomic context.
The fix is to create a new API for refilling rx buffers during
driver load. In the API, skbs are allocated into a
pool without the srng lock held. The srng lock is grabbed only
when refilling the already allocated skbs into the rxdma ring.
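A condensed sketch of the two-phase refill; the lock/post helpers and
sizes are stand-ins for the srng and skb APIs:

    #include <stdlib.h>

    #define POOL_SIZE 32

    /* Stubs standing in for srng lock/unlock and descriptor programming. */
    static void ring_lock(void)      { }
    static void ring_unlock(void)    { }
    static void ring_post(void *buf) { (void)buf; } /* ring takes ownership */

    static void rx_refill_for_driver_load(void)
    {
        void *pool[POOL_SIZE];
        int n = 0;

        /* Phase 1: allocate with no lock held, so allocation is not atomic
         * and may sleep/reclaim on low-memory systems. */
        for (; n < POOL_SIZE; n++) {
            pool[n] = malloc(2048);      /* stand-in for the skb allocation */
            if (!pool[n])
                break;
        }

        /* Phase 2: only the short ring-update step runs under the lock. */
        ring_lock();
        for (int i = 0; i < n; i++)
            ring_post(pool[i]);
        ring_unlock();
    }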
Change-Id: I9d45a42c9f2b0af7bf1d8031adb474d66ff9db59
CRs-Fixed: 2425798
The datatype of the average and round-up tx rate was uint32_t, which
truncates the upper bits of the average and round-up tx rate value.
Change the datatype of the average and round-up tx rate from uint32_t
to uint64_t.
Change-Id: I3809ba7cfc73ec0b4609981e70d57112ac958f33
CRs-Fixed: 2409751
Add a CDP function which processes the peer delete response
and deletes the static ast entry for the peer
Change-Id: Id646979ed07bd82609ca08e7ef3201382d07b1de
CRs-fixed: 2385115
In dp_peer_add_ast(), call the CP callback inside the ast lock
to ensure the ast add and delete commands are issued in
sequence.
Change-Id: If76bf8dca34555e56074e74792f3283b6b5a28d3
CRs-fixed: 2424637
On low-memory systems, skb allocation failures are observed in
dp_tx_ipa_uc_attach during the driver loading phase. Per kernel
memory team analysis, the failures are caused by a large number of
atomic allocations. Currently, when allocating IPA TX skbs, the
srng spinlock is grabbed first and then skb buffers are
requested, which makes skb allocation happen in atomic context.
The fix is to make the skb allocations with the srng unlocked, since
this is safe from race conditions during driver load. This ensures
skb allocations happen in process context.
Change-Id: I1624276c087c8247d672fb7cea5daded07ab93a3
CRs-Fixed: 2426489
Maintain counts of MSDUs transmitted with AMPDU aggregation and
MSDUs transmitted without AMPDU aggregation. These counts are
obtained from per-PPDU stats.
Change-Id: I7c38846534bbe5f2fd0a9b3602a9d453e1c1f449
Currently the host dumps the TLV at info level if HW indicates RX
data with an invalid sw_peer_id. This introduces an excessive logging
issue and can lead to a host panic.
Degrade the log level to debug.
Change-Id: Ia24f411bdfd9ec8e315874340d3bf046c245a72b
CRs-Fixed: 2420640
Add support to measure interrupt latencies, store them
in different "bins", and display the constructed histogram
through a user command.
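A minimal sketch of the binning step; the bucket boundaries below are
illustrative, not the driver's actual bins:

    #include <stdint.h>

    #define NUM_BINS 5
    static const uint32_t bin_upper_us[NUM_BINS - 1] = { 100, 250, 500, 1000 };

    struct irq_latency_hist {
        uint64_t bins[NUM_BINS];
    };

    static void hist_record(struct irq_latency_hist *h, uint32_t latency_us)
    {
        int i;

        for (i = 0; i < NUM_BINS - 1; i++)
            if (latency_us <= bin_upper_us[i])
                break;                /* falls in bin i */
        h->bins[i]++;                 /* last bin catches everything larger */
    }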
Change-Id: I268be6b39745d550ac7db5d321c2d1fdb2d17d41
CRs-Fixed: 2365812
When the status is FLOW_POOL_ACTIVE_PAUSED, all queues are expected to
be paused. Still, some in-flight packets can make it to the driver. In
such a case, the function dp_tx_desc_alloc takes a default action of
WLAN_WAKE_NON_PRIORITY_QUEUE. This is incorrect, since it will wake all
non-priority queues when they should be paused.
Make the default action WLAN_NETIF_ACTION_TYPE_NONE. If this is the
action to be taken, do not call the pause_cb from dp_tx_desc_alloc.
Rate-limit the logs in case dp_tx_desc_alloc fails.
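A simplified sketch of the corrected default (enums condensed for
illustration): statuses with no specific mapping resolve to a "none"
action, and the pause callback is never invoked for "none".

    enum pool_status  { POOL_ACTIVE_UNPAUSED, POOL_ACTIVE_PAUSED, POOL_INVALID };
    enum netif_action { NETIF_ACTION_NONE, NETIF_WAKE_NON_PRIO_QUEUE,
                        NETIF_PAUSE_ALL_QUEUES };

    static enum netif_action action_for_status(enum pool_status s)
    {
        switch (s) {
        case POOL_ACTIVE_UNPAUSED:
            return NETIF_WAKE_NON_PRIO_QUEUE;
        case POOL_ACTIVE_PAUSED:
            return NETIF_PAUSE_ALL_QUEUES;
        default:
            return NETIF_ACTION_NONE;   /* was the wake action: wrong default */
        }
    }

    static void maybe_pause(enum pool_status s,
                            void (*pause_cb)(enum netif_action))
    {
        enum netif_action a = action_for_status(s);

        if (a == NETIF_ACTION_NONE)
            return;                     /* do not invoke pause_cb for "none" */
        pause_cb(a);
    }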
Change-Id: I1ef3018e90576d2c3aaa0d10d56e9b155681271b
CRs-Fixed: 2421813
In some cases, the msdu_len retrieved from the msdu_start TLV is
greater than RX_BUFFER_SIZE. In such cases, when qdf_nbuf_set_pktlen
is called, it expands the skb to accommodate the bigger packet.
This makes the rx_tlv_hdr variable invalid, since it continues to point
to the older skb->data. Accessing this rx_tlv_hdr will cause an exception.
Drop the packet if msdu_len > RX_BUFFER_SIZE. Such packets are not
expected while reaping the WBM RX Release ring.
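A minimal sketch of the guard; the RX_BUFFER_SIZE value and the drop
counter are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define RX_BUFFER_SIZE 2048   /* illustrative mapped RX buffer size */

    static uint64_t rx_len_err_drops;

    /* Returns true when the frame should be dropped before
     * qdf_nbuf_set_pktlen() is ever called on it. */
    static bool rx_msdu_len_invalid(uint32_t msdu_len)
    {
        if (msdu_len > RX_BUFFER_SIZE) {
            rx_len_err_drops++;   /* oversized claim from HW: drop */
            return true;
        }
        return false;
    }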
Change-Id: I60890e4d3ee0afa451884d4df458eb50be280280
CRs-Fixed: 2426245
Filter out data packets from the mon destination
ring in m_copy mode, as we already receive the
first 100 bytes of data in the mon status ring.
Change-Id: I09321aa42b850d7b3cdc0bda4c7c51f79a6f850e
The IPA driver adds is_txr_rn_db_pcie_addr and
is_evt_rn_db_pcie_addr in the ipa_wdi_pipe_setup_info and
ipa_wdi_pipe_setup_info_smmu structures to indicate whether the
doorbell address is a DDR address or a PCIe memory-mapped address.
Set the address flags accordingly for the IPA transfer and event
rings.
Change-Id: Ia55d14535db3818439e3884cfb61c3a1d81b86fb
CRs-Fixed: 2422162
At the time of dp vdev registration, register the tx completion
callback so that tx completions are delivered for packets
whose tx completion is requested.
Change-Id: Ic3402f41a0962111901271fc8669974dfea2eb1c
CRs-Fixed: 2416554
Not typecasting the 64-bit value to a 16-bit value before assigning it
causes corruption of nearby structure members. Typecasting this value
fixes the corruption.
Change-Id: Iddaf697eb90241f9948aeaafe7985b8c2ea63048
Peer attach failed due to a memory allocation failure,
which triggered a soc restart. As part of the soc restart,
peer detach tried to free the AST hash table and entries,
which were never allocated.
Free the peer hash table and AST hash table only if
the memory was allocated.
CRs-Fixed: 2425963
Change-Id: Ib7a624c91f544f1a8da2b96b4d342a13b9f3b162
Increase the maximum allowed OFDMA users from the
previously set 8 to 37, for cases where more than
8 clients are connected and the PPDU has information
for multiple users, such as in Trigger frames.
Change-Id: I6b0b9092674514c831694457fc71e68fee512e1f
The RX_BUFFER_SIZE macro was introduced by mistake during a rebase. The
macro is already present in hal_rx.h. Remove the duplicate macro.
CRs-Fixed: 2382076
Change-Id: Ia66079d6d4543b4e3fd99e9f0c0376353a92aa9c