Each DXE control block is associated with a specific channel.
The channel lock is always taken before accessing a control block.
There is no need to have an extra (useless) spinlock for the control
block skb.
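
A minimal sketch of the resulting pattern, with simplified, hypothetical
struct layouts (dxe_ch, dxe_ctl) standing in for the driver's real ones:
the channel lock alone serializes access to ctl->skb, so no per-skb
spinlock is needed.

    #include <linux/skbuff.h>
    #include <linux/spinlock.h>

    struct dxe_ctl {
        struct sk_buff *skb;
        struct dxe_ctl *next;
    };

    struct dxe_ch {
        spinlock_t lock;            /* serializes all control block accesses */
        struct dxe_ctl *head_blk_ctl;
    };

    static void dxe_reap_one(struct dxe_ch *ch)
    {
        struct sk_buff *skb;

        /* The channel lock already protects ctl->skb; a separate
         * skb spinlock would only duplicate this. */
        spin_lock_bh(&ch->lock);
        skb = ch->head_blk_ctl->skb;
        ch->head_blk_ctl->skb = NULL;
        ch->head_blk_ctl = ch->head_blk_ctl->next;
        spin_unlock_bh(&ch->lock);

        if (skb)
            dev_kfree_skb(skb);
    }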
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
The wcn36xx_start_tx function retrieves the buffer descriptor from the
channel control queue and starts filling in tx buffer information.
However, nothing prevents the same buffer from being accessed in a
concurrent tx call, leading to potential buffer corruption and firmware
crash (observed during iperf testing). The channel control queue should
only be accessed and updated with the channel lock held.
Fix this issue by using a local buffer descriptor which is then copied
into the DMA ring by the thread-safe wcn36xx_dxe_tx_frame.
Note that the buffer descriptor is only a few bytes, so the copy
overhead introduced is insignificant. Moreover, this keeps the locked
section minimal.
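
A hedged sketch of that approach; the types and function bodies below are
simplified stand-ins for the driver's wcn36xx_start_tx and
wcn36xx_dxe_tx_frame, intended only to show the stack-local descriptor
being copied into the ring under the channel lock.

    #include <linux/skbuff.h>
    #include <linux/spinlock.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct wcn36xx_tx_bd {
        u32 word[8];            /* simplified: the real BD is only a few words */
    };

    struct dxe_ctl {
        void *bd_cpu_addr;      /* DMA-able slot for this descriptor's BD */
        struct sk_buff *skb;
        struct dxe_ctl *next;
    };

    struct dxe_ch {
        spinlock_t lock;
        struct dxe_ctl *head_blk_ctl;
    };

    static int dxe_tx_frame(struct dxe_ch *ch, const struct wcn36xx_tx_bd *bd,
                            struct sk_buff *skb)
    {
        spin_lock_bh(&ch->lock);
        /* Copy the caller's local BD into the ring slot while locked, so a
         * concurrent tx call cannot scribble over the same descriptor. */
        memcpy(ch->head_blk_ctl->bd_cpu_addr, bd, sizeof(*bd));
        ch->head_blk_ctl->skb = skb;
        ch->head_blk_ctl = ch->head_blk_ctl->next;
        spin_unlock_bh(&ch->lock);
        return 0;
    }

    static int start_tx(struct dxe_ch *ch, struct sk_buff *skb)
    {
        struct wcn36xx_tx_bd bd = {};   /* stack-local, a few bytes: cheap to copy */

        bd.word[0] = 0x1;               /* fill in tx buffer information unlocked */
        return dxe_tx_frame(ch, &bd, skb);
    }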
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: Ramon Fried <rfried@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
The IRQ reason was not checked for errors. Although error handling is
not currently supported, it is still useful to output an error value to
the log if the DMA operation failed.
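
A small illustrative sketch of that idea; the register offset and error
bit below are placeholders, not the real WCN36xx layout.

    #include <linux/bits.h>
    #include <linux/device.h>
    #include <linux/io.h>

    #define DXE_CH_STATUS_REG       0x0     /* placeholder offset */
    #define DXE_CH_STAT_INT_ERR     BIT(1)  /* placeholder error bit */

    static void dxe_tx_done_isr(struct device *dev, void __iomem *base)
    {
        u32 int_reason = readl(base + DXE_CH_STATUS_REG);

        /* No real error handling yet, but at least make a failed DMA
         * operation visible in the log. */
        if (int_reason & DXE_CH_STAT_INT_ERR)
            dev_err(dev, "DXE DMA error, int_reason 0x%08x\n", int_reason);
    }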
Signed-off-by: Ramon Fried <rfried@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
DXE channel defaults used hardcoded magic values. Add bit definitions
of the control register and calculate these values at compile time for
clarity.
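
For illustration, this is roughly what replacing a magic channel-default
value with named, compile-time-composed bits looks like; the field names,
shifts and the resulting value are invented for the example.

    #include <linux/bits.h>

    #define DXE_CH_CTRL_EN          BIT(0)
    #define DXE_CH_CTRL_EOP         BIT(6)
    #define DXE_CH_CTRL_PRIO_SHIFT  17
    #define DXE_CH_CTRL_PRIO(p)     ((p) << DXE_CH_CTRL_PRIO_SHIFT)

    /* Was a bare magic number such as 0x60041; now the meaning of each
     * bit is visible and the value is composed by the compiler. */
    #define DXE_CH_DEFAULT_CTL_TX_L \
        (DXE_CH_CTRL_EN | DXE_CH_CTRL_EOP | DXE_CH_CTRL_PRIO(3))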
Signed-off-by: Ramon Fried <rfried@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
DXE descriptor control registers used hardcoded magic values. Add bit
definitions of the control register and calculate these values at compile
time for clarity. No functional changes.
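
The same idea applied to the per-descriptor control words, again with
invented bit positions: each descriptor flag gets a name and the
composite values are built at compile time.

    #include <linux/bits.h>

    #define DXE_DESC_CTRL_VLD   BIT(0)  /* descriptor owned by the DMA engine */
    #define DXE_DESC_CTRL_EOP   BIT(8)  /* last descriptor of the frame */
    #define DXE_DESC_CTRL_BD_H  BIT(12) /* descriptor carries a buffer descriptor */

    #define DXE_CTRL_TX_BD   (DXE_DESC_CTRL_VLD | DXE_DESC_CTRL_BD_H)
    #define DXE_CTRL_TX_SKB  (DXE_DESC_CTRL_VLD | DXE_DESC_CTRL_EOP)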
Signed-off-by: Ramon Fried <rfried@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
The CCU block in WCNSS is configured for appropriate routing of
interrupts from the DXE to the application CPU. This does not depend on
the iris version (wcn3660 vs wcn3680), but rather on whether the SoC has
a riva or a pronto built in.
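
A sketch of keying that configuration on the SoC variant rather than the
iris chip; the register offsets and routing value are hypothetical, only
the riva-vs-pronto decision mirrors the change.

    #include <linux/io.h>
    #include <linux/types.h>

    #define CCU_RIVA_INT_SELECT     0x0200  /* hypothetical register offset */
    #define CCU_PRONTO_INT_SELECT   0x1b00  /* hypothetical register offset */
    #define DXE_IRQ_ROUTE_APPS_CPU  0xe     /* hypothetical routing value */

    static void ccu_route_dxe_irqs(void __iomem *ccu_base, bool is_pronto)
    {
        /* The register to poke depends on whether the SoC integrates a
         * riva or a pronto block, not on the attached iris
         * (wcn3660/wcn3680). */
        unsigned int offset = is_pronto ? CCU_PRONTO_INT_SELECT :
                                          CCU_RIVA_INT_SELECT;

        writel(DXE_IRQ_ROUTE_APPS_CPU, ccu_base + offset);
    }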
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Split the wcnss mmio space into explicit regions for the ccu and dxe, and
acquire these from the node referenced by the qcom,mmio phandle.
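
A hedged sketch of how the two regions could be resolved through that
phandle using the standard OF helpers; the region order (0 = ccu,
1 = dxe) is an assumption made for illustration.

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/io.h>
    #include <linux/of.h>
    #include <linux/of_address.h>

    static int map_wcnss_mmio(struct device *dev, void __iomem **ccu_base,
                              void __iomem **dxe_base)
    {
        struct device_node *mmio_node;
        struct resource res;
        int ret;

        mmio_node = of_parse_phandle(dev->of_node, "qcom,mmio", 0);
        if (!mmio_node)
            return -EINVAL;

        ret = of_address_to_resource(mmio_node, 0, &res);   /* ccu region */
        if (ret)
            goto out;
        *ccu_base = devm_ioremap_resource(dev, &res);
        if (IS_ERR(*ccu_base)) {
            ret = PTR_ERR(*ccu_base);
            goto out;
        }

        ret = of_address_to_resource(mmio_node, 1, &res);   /* dxe region */
        if (ret)
            goto out;
        *dxe_base = devm_ioremap_resource(dev, &res);
        if (IS_ERR(*dxe_base))
            ret = PTR_ERR(*dxe_base);
    out:
        of_node_put(mmio_node);
        return ret;
    }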
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
wcn36xx implements a ring buffer for transmitted frames for each
(high and low priority) DMA channel. The ring buffers are lockless:
new frames are inserted at the head of the queue, while finished
packets are reaped from the tail.
Unfortunately, the list manipulations are missing any kind of barriers,
so they are susceptible to various races: for example, a TX completion
handler might read an updated desc->ctrl before the head has actually
advanced, and then null out the ctl->skb pointer while it is still
being used in the TX path.
Simplify things here by adding a spinlock when traversing the ring.
This change increased stability for me without adding any noticeable
overhead on my platform (Xperia Z).
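
A minimal sketch of the fix under simplified types: the per-channel
spinlock is held while walking the ring and reaping completed
descriptors, so the completion handler cannot clear ctl->skb while the
TX path is still using it. The "valid" bit name is hypothetical.

    #include <linux/bits.h>
    #include <linux/skbuff.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    #define DXE_CTRL_VLD    BIT(0)  /* hypothetical "still owned by DMA" bit */

    struct dxe_desc {
        u32 ctrl;
    };

    struct dxe_ctl {
        struct dxe_desc *desc;
        struct sk_buff *skb;
        struct dxe_ctl *next;
    };

    struct dxe_ch {
        spinlock_t lock;
        struct dxe_ctl *tail_blk_ctl;
    };

    static void dxe_reap_tx_ring(struct dxe_ch *ch)
    {
        struct dxe_ctl *ctl;
        unsigned long flags;

        spin_lock_irqsave(&ch->lock, flags);
        for (ctl = ch->tail_blk_ctl;
             ctl->skb && !(ctl->desc->ctrl & DXE_CTRL_VLD);
             ctl = ctl->next) {
            /* Hardware is done with this descriptor: free the skb and
             * clear the slot while the lock keeps the TX path from
             * reusing it. */
            dev_kfree_skb_irq(ctl->skb);
            ctl->skb = NULL;
        }
        ch->tail_blk_ctl = ctl;
        spin_unlock_irqrestore(&ch->lock, flags);
    }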
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>