In ME6 we currently use a workaround to send the native WiFi packet
through the exception path, as the older HKv1 hardware did not
support single-packet AMSDU. Since the hardware supports this
starting from HKv2, send ME packets as normal packets to FW with
the DMS indication in the peer-based metadata for Beryllium.
Change-Id: Ic154b438df2c811c845e7c2eaadf252985d419ad
This change enables packet logging for the fragments from the RX
monitor status ring and sends them to the pktlog module for
post-processing through a WDI event.
Change-Id: I283a20e7d0fa1f9b88223a989beda529beff6718
CRs-Fixed: 3074184
Whenever firmware receives the sync interrupt, it updates the MLO
offset in the scratch registers. Additionally, firmware sends an
HTT message.
The values from the HTT message are copied into two structures:
1. into the dp_pdev structure, for monitor mode usage
2. into the scn structure, for updating the pktlog header
A new WDI event is introduced for MLO timestamp sync into the
pktlog header. The host sends the WDI event once it receives the
HTT message; the data is parsed in the pktlog handler and the
pktlog header is eventually updated with those values.
Whenever pktlog memory is being filled, the latest value from the
scn structure is written into the pktlog header. Upon receiving a
new HTT message, the value in the scn structure is updated and
subsequently reflected in the pktlog records as well. All 8 bytes
from the HTT message are required to calculate the MLO timestamp
offset; the calculation itself is taken care of by the pktlog
parsing script.
Change-Id: If1d40b4a889e58b358f0feae692958bca2b8e1de
CRs-Fixed: 3074184
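Illustrative sketch for the change above: a minimal C model of how
the 8 bytes from the HTT sync message could be fanned out to the
dp_pdev and scn copies and later stamped into the pktlog header.
All structure and function names here are hypothetical placeholders,
not the driver's actual definitions.

    #include <stdint.h>

    /* hypothetical holder for the 8 bytes carried in the HTT sync msg */
    struct mlo_tstamp_offset {
            uint32_t offset_lo;
            uint32_t offset_hi;
    };

    /* hypothetical stand-ins for dp_pdev, scn and the pktlog header */
    struct pdev_sketch { struct mlo_tstamp_offset mlo_tstamp; };
    struct scn_sketch { struct mlo_tstamp_offset mlo_tstamp; };
    struct pktlog_hdr_sketch { uint32_t mlo_offset_lo, mlo_offset_hi; };

    /* HTT message handler: copy the values into both consumers */
    void htt_mlo_tstamp_sync(struct pdev_sketch *pdev,
                             struct scn_sketch *scn,
                             const struct mlo_tstamp_offset *msg)
    {
            pdev->mlo_tstamp = *msg;   /* monitor mode usage */
            scn->mlo_tstamp = *msg;    /* pktlog header usage */
    }

    /* called while filling pktlog memory: stamp the latest values */
    void pktlog_hdr_stamp(struct pktlog_hdr_sketch *hdr,
                          const struct scn_sketch *scn)
    {
            hdr->mlo_offset_lo = scn->mlo_tstamp.offset_lo;
            hdr->mlo_offset_hi = scn->mlo_tstamp.offset_hi;
    }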
- A new pktlog mode called "hybrid" is added, for QCN9224 only.
- Create WDI event for hybrid mode
- Add dummy APIs for monitor filter setting
- Send WMI_PKTLOG_EVENT_HYBRID_TX to FW for umac TLV subscription
Change-Id: I47f4e14bfc766f29a0ab4a8c07ab19e0d919e66b
CRs-Fixed: 3074184
Define MAX MCS as per 11be
Define packet type as per 11be
Add new RU values
Add new BW gains
CRs-Fixed: 3068482
Change-Id: I18b2460011e2869608fe24743e4373e3e8f3fc1a
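Illustrative sketch for the 11be definitions above; the names and
values below are placeholders only (EHT extends MCS up to 13 and
adds multi-RU combinations), not the driver's actual macros.

    /* placeholder names; the real macros live in the rate/stats headers */
    #define SKETCH_MAX_MCS_11BE 14      /* MCS 0..13 for EHT */

    enum sketch_pkt_type {
            SKETCH_DOT11_BE = 6,        /* placeholder packet-type value */
    };

    enum sketch_ru_size {
            SKETCH_RU_52_26,            /* examples of new multi-RU sizes */
            SKETCH_RU_106_26,
            SKETCH_RU_996_484,
            SKETCH_RU_4X996,
    };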
In the WIN case, hal_soc is freed in the WiFi down path, so this
pointer is not valid at pdev_detach or soc_detach.
Change to populate the DMAC source ring flag into dp_soc, as access
is needed at pdev_detach or soc_detach.
Change-Id: I628746bdd05ba3791d3d0e6b6dfdf160ed368e9a
Split the setup API for the IPA Rx Refill Ring into allocation and
initialization APIs and call them at pdev attach and pdev init
respectively.
Similarly, split the cleanup API into deinitialization and free APIs
and call them at pdev deinit and detach respectively.
Move procedures related to the IPA Tx and WBM rings from pdev to soc.
Also add cdp support to set the IPA cfg parameter in the soc_cfg
struct.
Change-Id: I04bb9270ae2dff51746a42b358fbe346f1087f84
CRs-Fixed: 3053542
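A minimal sketch of the alloc/init vs deinit/free split described
above, using hypothetical function names (the real APIs take
dp_soc/dp_pdev arguments); it only illustrates which lifecycle phase
calls which half.

    static int ipa_rx_refill_ring_alloc(void)   { /* memory only */ return 0; }
    static int ipa_rx_refill_ring_init(void)    { /* program ring to HW */ return 0; }
    static void ipa_rx_refill_ring_deinit(void) { /* undo init */ }
    static void ipa_rx_refill_ring_free(void)   { /* undo alloc */ }

    int pdev_attach(void)  { return ipa_rx_refill_ring_alloc(); }
    int pdev_init(void)    { return ipa_rx_refill_ring_init(); }
    void pdev_deinit(void) { ipa_rx_refill_ring_deinit(); }
    void pdev_detach(void) { ipa_rx_refill_ring_free(); }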
CCE disable was done specifically for 8074 V1 and is no longer
needed for future releases; hence remove the CCE check in the TX
per-packet path.
Change-Id: Ia31c5e2cb181e4a3409fa1f1abce8c55524b1b05
Fix the invalid VAP instance error seen when RCC is enabled and
the ol_ath_stats (enhanced stats) feature is disabled.
Populate the GI as well, similar to the other rate stats, in the
CFR handler, where this information would not be available if
ol_ath_stats is disabled.
CRs-Fixed: 3056621
Change-Id: Id1af6359408cb977ba0becf298121ee10dd5ef72
Rx path changes for multichip MLO
1. Create an ini for the Rx ring mask for each chip
2. Configure hash-based routing for each chip based on
   lmac_peer_id_msb
3. Peer setup changes to configure lmac_peer_id_msb to enable
   hash-based routing
4. Rx replenish changes to provide buffers back to the owner SOC
   of the REO ring
Change-Id: Ibbe6e81f9e62d88d9bb289a082dd14b4362252c4
DP peer changes required for multi-chip MLO.
This change includes:
1) Adding the MLO peer to the global peer hash at the ML context
2) Adding the ML peer to the peer ID object table on all partner
   chips
Change-Id: I230a6c1b14484c587b190a9a318fe9ffb1caea11
Changes needed for MLO soc attach to pass chip_id and dp_ml_context
from the upper layer.
This change also takes care of assigning the appropriate RBM id for
IDLE link descriptors based on chip_id.
Change-Id: I8f5f08c524d91942e6e458f048700b7bdd900107
Changes to have a HW cookie conversion context per desc pool.
This context will be used to program the CMEM of the other SOC in
the multi-chip MLO case.
Change-Id: I5ec68813e8fcb6d124698a52f5553acf9a7b1795
Prefetch the RX HW desc, SW desc and SKB in a pipelined fashion in
the first loop of RX processing.
This improved TPUT by 200 Mbps and provided a 10% CPU gain
(single core).
PINE with other optimizations: 3960 Mbps @ 100% core-3
PINE + pipeline prefetch: 4130 Mbps @ 90% core-3
Change-Id: I47f351601b264eb3a2b50e4154229d55da738724
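A simplified sketch of the pipelined prefetch above, assuming
generic descriptor/SKB types and the GCC __builtin_prefetch
intrinsic; the real loop operates on REO ring entries and the
driver's Rx descriptor and nbuf objects.

    struct hw_desc { char ring_entry[64]; };
    struct sw_desc { void *skb; };

    /* process entry i while warming the caches for entry i + 1 */
    void rx_process_loop(struct hw_desc **hw, struct sw_desc **sw, int num)
    {
            int i;

            for (i = 0; i < num; i++) {
                    if (i + 1 < num) {
                            __builtin_prefetch(hw[i + 1]);      /* next HW desc */
                            __builtin_prefetch(sw[i + 1]);      /* next SW desc */
                            __builtin_prefetch(sw[i + 1]->skb); /* next SKB */
                    }
                    /* ... process hw[i] / sw[i] here ... */
            }
    }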
Added an API to do a batch invalidation of REO descs; this showed
an improvement of 40 to 45 Mbps.
Note: this change is applicable only for cached descriptors.
PINE with default driver: 3189 Mbps @ 100% core-3
PINE with skb prefetch: 3469 Mbps @ 100% core-3
PINE with skb prefetch + batch inv: 3506 Mbps @ 100% core-3
Change-Id: Ic2cf294972acfe5765448a18bed7e903562836c3
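A rough sketch of the batched invalidation above, assuming a
hypothetical cache-invalidate primitive over a contiguous range of
cached REO descriptors; the actual code uses the HAL/QDF cache
maintenance APIs.

    #include <stddef.h>
    #include <stdint.h>

    #define SKETCH_REO_ENTRY_SIZE 64   /* placeholder descriptor size */

    /* hypothetical primitive: invalidate [addr, addr + size) from D-cache */
    void cache_inval_range(void *addr, size_t size);

    /* one range invalidation for 'num' contiguous cached REO entries,
     * instead of invalidating each entry separately
     */
    void reo_batch_invalidate(void *first_entry, uint32_t num)
    {
            cache_inval_range(first_entry,
                              (size_t)num * SKETCH_REO_ENTRY_SIZE);
    }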
The force-use-BA64 ini config is no longer needed, because
gRxAggregationSize can apply the same setting and is more flexible.
This change removes the config.
Change-Id: Ie780489849f8b701481a628a9bca2b4112460bd8
CRs-Fixed: 3076982
The dynamic GRO feature is enabled by default and is aimed at
specific customers. Add an ini control to allow other customers to
enable or disable this feature.
Change-Id: I7f505599327ac131b3cdac9b4d9e038861b1aeb6
CRs-Fixed: 3074689
For IPQ products, there is one refill ring, which is of hardware
type; the host replenishes buffers onto this ring so that hardware
can use them for Rx.
In IPA offload mode, the buffer replenishment model is different
from the one mentioned above. There are 3 refill rings, out of
which 2 are software refill rings (1 for the host and 1 for IPA),
and the last ring is a hardware ring given to FW.
The ring given to IPA is used to refill buffers after processing
regular Rx packets, and the ring given to the host is used to
refill buffers after processing exception packets. Since there are
2 entities refilling buffers, the hardware ring given to FW
multiplexes these 2 software rings and provides the buffers to
hardware.
Make changes to follow the above replenishment model for SDX+Pine
integration.
Change-Id: I0d9e4ec811a3023a258e0a6b9ee22ccdffcebafa
CRs-Fixed: 3049633
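A schematic sketch of the multiplexing step in the three-ring model
above. In the real design the FW performs this step when feeding
the hardware ring; the names below are hypothetical and the code
only illustrates the buffer flow.

    #include <stdbool.h>

    struct sw_ring { int id; };
    struct hw_ring { int id; };

    /* hypothetical ring helpers */
    bool sw_ring_pop_buffer(struct sw_ring *r, void **buf);
    void hw_ring_post_buffer(struct hw_ring *r, void *buf);

    /* drain the host (exception path) and IPA (regular Rx) SW refill
     * rings into the single HW refill ring consumed by hardware
     */
    void refill_mux(struct sw_ring *host_ring, struct sw_ring *ipa_ring,
                    struct hw_ring *fw_ring)
    {
            void *buf;

            while (sw_ring_pop_buffer(host_ring, &buf))
                    hw_ring_post_buffer(fw_ring, buf);
            while (sw_ring_pop_buffer(ipa_ring, &buf))
                    hw_ring_post_buffer(fw_ring, buf);
    }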
The TID in the RX frame header may be larger than the maximum
allowed TID value; this leads to an out-of-bounds array access and
eventually a kernel crash. This change adds a TID check and
discards such frames when necessary.
Change-Id: Ie9e7a1816d197d05cf845e81251ef7772721b849
CRs-Fixed: 3071743
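A minimal sketch of the TID check above, using placeholder names
for the TID limit and the drop decision; the real check compares
against the driver's TID array size in the Rx path.

    #include <stdbool.h>
    #include <stdint.h>

    #define SKETCH_MAX_TID 16   /* placeholder for the driver's TID array size */

    /* reject frames whose TID would index past the per-peer TID array */
    bool rx_tid_is_valid(uint8_t tid)
    {
            return tid < SKETCH_MAX_TID;
    }

    /* usage in the Rx path:
     *   if (!rx_tid_is_valid(tid)) {
     *           drop_frame(nbuf);
     *           return;
     *   }
     */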
Some platforms don't have mon_register_intr_ops enabled, so add a
similar macro around dp_mon_register_intr_ops to resolve
compilation issues.
Change-Id: Id9c7bb45d965005d4dd0dde3a08f254464244147
CRs-fixed: 3075651
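A sketch of the guard pattern above, with a hypothetical config
flag name and a simplified signature; when monitor intr-ops support
is compiled out, the call collapses to an inline no-op so every
platform builds.

    /* SKETCH_MON_INTR_OPS is a hypothetical flag, not the driver's macro */
    #ifdef SKETCH_MON_INTR_OPS
    void dp_mon_register_intr_ops(void *soc);   /* real implementation */
    #else
    static inline void dp_mon_register_intr_ops(void *soc)
    {
            (void)soc;   /* monitor intr ops not supported: no-op */
    }
    #endif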
Check for an invalid QoS tag, which can be set for non-MSCS clients
connected on the same VAP as MSCS clients. TID override needs to be
avoided for those clients.
Change-Id: I651a354e740fe6aee74f94b59ac2e6f154a80c6d
Remove the assert when HTT_TX_FW2WBM_TX_STATUS_MEC_NOTIFY is
received and mec_fw_offload is enabled.
Change-Id: I1b9c876822e3e9c05b5035af82afa484106e880a
For DA MC/BC RX frames, the host should not use da_idx to access
soc->ast_table, as this da_idx is not valid and out-of-bounds
access might happen.
Change-Id: I5c78d869e32536effe2635e561cb6881cdc97c38
CRs-Fixed: 3072841
As part of a code change, we are removing the csum_enabled flag
check. With that, csum_enabled becomes redundant since it is no
longer used in the code base, so remove the redundant code as part
of the cleanup.
Change-Id: Iac411b20f06436053b1969a1af9e3b3ee418c34c
CRs-Fixed: 3070858
Currently, to set the checksum enable flags in the Tx descriptor,
we check the csum_enabled flag along with the stack checksum
offload request. When roaming from a latency-critical connection to
a non-latency-critical connection, we request the stack to
calculate the checksum and set the csum_enabled flag to 0. For
packets already queued, where the stack has not calculated the
checksum and the csum_enabled flag is set to 0, we will not set the
checksum enable flags in the Tx descriptors.
To fix the issue, remove the csum_enabled flag check and directly
check whether the stack has calculated the checksum for those
packets.
Change-Id: I8d7754afc3d0a33315b85b0113cd3062e5783e28
CRs-Fixed: 3070858
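A simplified sketch of the fix above, expressed with the standard
Linux skb convention where CHECKSUM_PARTIAL means the stack left
the checksum for hardware; the driver operates on qdf_nbuf
wrappers, so the names here are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    #define SKETCH_CHECKSUM_PARTIAL 3   /* placeholder for CHECKSUM_PARTIAL */

    struct skb_sketch { uint8_t ip_summed; };

    /* before: hw_csum = vdev->csum_enabled && stack requested offload
     * after:  decide purely per packet from what the stack asked for
     */
    bool tx_needs_hw_checksum(const struct skb_sketch *skb)
    {
            return skb->ip_summed == SKETCH_CHECKSUM_PARTIAL;
    }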
Define a new cdp interface to fetch peer delay stats.
Define a new cdp interface to fetch peer jitter stats.
These interface APIs will be used by upper layers to fetch per-peer
delay and jitter stats for the telemetry stats feature.
Change-Id: I96ee6a861fa2626e7e1fba3df7df9ec64ff7e946
CRs-Fixed: 3071493
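A rough sketch of what the two cdp entry points might look like;
the names, argument lists and stats containers below are
placeholders, not the actual cdp signatures added by this change.

    #include <stdint.h>

    struct cdp_delay_stats_sketch { uint32_t avg_delay_us; };
    struct cdp_jitter_stats_sketch { uint32_t avg_jitter_us; };

    /* placeholder ops-table entries keyed by vdev id and peer MAC */
    struct cdp_host_stats_ops_sketch {
            int (*txrx_get_peer_delay_stats)(void *soc, uint8_t vdev_id,
                                             const uint8_t *peer_mac,
                                             struct cdp_delay_stats_sketch *buf);
            int (*txrx_get_peer_jitter_stats)(void *soc, uint8_t vdev_id,
                                              const uint8_t *peer_mac,
                                              struct cdp_jitter_stats_sketch *buf);
    };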
In WBM error processing, read the peer_id from peer_meta_data
instead of sw_peer_id.
This change is needed because we need to process the Rx packet on
the ML peer, but in the MLO case the sw_peer_id field contains the
link_peer_id whereas peer_meta_data has the ml_peer_id.
Change-Id: I3f469adfdf7efa88cb081e94fa9fe0c54c1fb078
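A minimal sketch of the WBM error path change above, with
hypothetical field extractors; in MLO the metadata-derived ID is
the ml_peer_id, while sw_peer_id still carries the link peer.

    #include <stdint.h>

    struct wbm_err_desc_sketch {
            uint16_t sw_peer_id;     /* link peer id in the MLO case  */
            uint32_t peer_metadata;  /* carries the ml peer id in MLO */
    };

    /* hypothetical extractor; the real mask/shift come from the HW header */
    #define SKETCH_PEER_METADATA_PEER_ID(meta) ((uint16_t)((meta) & 0xffff))

    uint16_t wbm_err_get_peer_id(const struct wbm_err_desc_sketch *desc)
    {
            /* before: return desc->sw_peer_id; */
            return SKETCH_PEER_METADATA_PEER_ID(desc->peer_metadata);
    }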
When the monitor KO is not loaded, return QDF_STATUS_E_FAILURE from
dp_monitor_tx_add_to_comp_queue() so that the caller will free the
buffer.
Change-Id: Idebcf81121767ccd93d95308433241fcf0a93c93
Currently, we flush fragments for a peer as part of rx defrag
cleanup. There is a possibility that the frag flush is called while
we are accessing frags in the rx defrag store-fragment path in
parallel, which will cause a use-after-free issue.
Extend the lock in the rx defrag store-fragment path to avoid any
parallel flush while the frags are being accessed.
Change-Id: I1e897b195f61d80ea6738e9a93f7bcaaa04adc97
CRs-Fixed: 3065414
There is a race condition: tid_lock is unlocked early in
dp_rx_defrag_store_fragment while descriptors are replenished from
dp_peer_flush_frags, and the same descriptor is then replenished
again in the fragmented path.
To resolve this, extend the lock period until all operations on
tid.head_frag_desc are done.
Change-Id: I6d2abb2119e3bebf739de9e41334d58ba87ee391
CRs-Fixed: 3068165
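A simplified sketch of the extended lock scope described in the two
defrag changes above, using a pthread mutex as a stand-in for
tid_lock; the point is that every operation on the head fragment
descriptor now happens under the lock.

    #include <pthread.h>

    struct rx_tid_sketch {
            pthread_mutex_t tid_lock;
            void *head_frag_desc;
    };

    void defrag_store_fragment(struct rx_tid_sketch *tid, void *frag)
    {
            pthread_mutex_lock(&tid->tid_lock);

            /* previously the lock was released here, allowing a parallel
             * flush to replenish the same descriptor; now the whole
             * sequence below stays under tid_lock
             */
            tid->head_frag_desc = frag;
            /* ... remaining operations on head_frag_desc ... */

            pthread_mutex_unlock(&tid->tid_lock);
    }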
Avoid tracing data packets at high throughput levels. This helps save
CPU cycles.
CRs-Fixed: 3059712
Change-Id: Ideca04e142a2f4080cd5758fe27f0b526ea4f9ad
Add timestamps for the last TX/RX in peer stats. These stats need
to be passed on to upper layers on demand.
CRs-Fixed: 3059712
Change-Id: If0d8b7dc3c139e4d1b670bf98f3f574f02ea9715
Add support to invoke HIF runtime PM APIs in the TX packet path
based on the throughput level. At high throughputs, where runtime
PM is not expected to provide power gains, invoking the APIs is
detrimental to performance.
Hence, invoke the runtime PM APIs based on the throughput level.
CRs-Fixed: 3059712
Change-Id: I49d111bffa8a37c68abbdd54911f9ecc22430562
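A schematic sketch of gating the runtime PM calls on throughput;
the helper names below are placeholders for the actual HIF/DP APIs,
and the get/put pair simply marks bus activity to defer runtime
suspend.

    #include <stdbool.h>

    /* placeholders for the HIF runtime PM hooks and the tput indicator */
    void hif_rtpm_get(void);
    void hif_rtpm_put(void);
    bool tput_level_high(void);

    /* TX path: at high throughput the bus stays active anyway, so the
     * runtime PM book-keeping is skipped to save CPU cycles
     */
    void tx_mark_runtime_pm_activity(void)
    {
            if (tput_level_high())
                    return;

            hif_rtpm_get();
            /* ... hand the packet to the TX ring ... */
            hif_rtpm_put();
    }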