qcacmn: Properly handle RX REO reinject packets

Currently, on the REO reinject path the first fragment sits in
the linear part of the skb while the remaining fragments are
appended to the skb as non-linear paged data. In addition, the
REO reinject buffer carries no l3_header_padding, so the
ethernet header immediately follows struct rx_pkt_tlvs.
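
In other words, a reinjected frame currently arrives laid out
roughly as follows (a simplified view, field widths not to scale):

	linear part: | struct rx_pkt_tlvs | ethernet header | first fragment |
	paged frags: | second fragment | third fragment | ...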

This layout causes problems when the WLAN IPA path is enabled.

First, IPA assumes data buffers are linear, so the skb must be
linearized before it is reinjected into REO.
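
A rough sketch of how that could look inside
dp_ipa_handle_rx_reo_reinject() (the handler added by this change).
This is only an illustration, not the actual dp_ipa.c code; the
qdf_nbuf helpers are assumed to behave like their skb counterparts,
with qdf_nbuf_linearize() returning 0 on success:

qdf_nbuf_t dp_ipa_handle_rx_reo_reinject(struct dp_soc *soc, qdf_nbuf_t nbuf)
{
	/* IPA consumes linear buffers: collapse any paged fragments
	 * into the linear area before the frame is reinjected.
	 */
	if (qdf_nbuf_is_nonlinear(nbuf) && qdf_nbuf_linearize(nbuf)) {
		qdf_nbuf_free(nbuf);
		return NULL;	/* caller skips the reinject */
	}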

Second, when WLAN sets up the IPA pipe connection, the RX packet
offset is hard-coded to RX_PKT_TLVS_LEN + L3_HEADER_PADDING.
L3_HEADER_PADDING therefore has to be inserted between struct
rx_pkt_tlvs and the ethernet header.
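
Continuing the sketch above (again an illustration only, assuming
the buffer has headroom to spare and using the RX_PKT_TLVS_LEN and
L3_HEADER_PADDING definitions from the existing headers), the
padding can be created by growing the head and sliding the TLVs up:

	/* Grow the head by L3_HEADER_PADDING and slide the TLVs up so
	 * the layout becomes
	 *   | rx_pkt_tlvs | L3_HEADER_PADDING | ethernet header | data |
	 * matching the RX_PKT_TLVS_LEN + L3_HEADER_PADDING offset that
	 * the IPA pipe setup assumes.
	 */
	if (qdf_nbuf_headroom(nbuf) < L3_HEADER_PADDING) {
		qdf_nbuf_free(nbuf);
		return NULL;
	}
	qdf_nbuf_push_head(nbuf, L3_HEADER_PADDING);
	qdf_mem_move(qdf_nbuf_data(nbuf),
		     qdf_nbuf_data(nbuf) + L3_HEADER_PADDING,
		     RX_PKT_TLVS_LEN);

	return nbuf;
}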

Change-Id: I36d41bc91d28c2580775a1d2e431e139ff02e19e
CRs-Fixed: 2469315
Jia Ding
2019-08-07 14:17:31 +08:00
committed by nshrivas
Parent: ae310478dd
Commit: fef509bc58
3 changed files with 69 additions and 2 deletions

dp_ipa.h

@@ -113,6 +113,8 @@ bool dp_reo_remap_config(struct dp_soc *soc, uint32_t *remap1,
uint32_t *remap2);
bool dp_ipa_is_mdm_platform(void);
qdf_nbuf_t dp_ipa_handle_rx_reo_reinject(struct dp_soc *soc, qdf_nbuf_t nbuf);
#else
static inline int dp_ipa_uc_detach(struct dp_soc *soc, struct dp_pdev *pdev)
{
@@ -136,5 +138,12 @@ static inline QDF_STATUS dp_ipa_handle_rx_buf_smmu_mapping(struct dp_soc *soc,
{
	return QDF_STATUS_SUCCESS;
}

static inline qdf_nbuf_t dp_ipa_handle_rx_reo_reinject(struct dp_soc *soc,
						       qdf_nbuf_t nbuf)
{
	return nbuf;
}
#endif
#endif /* _DP_IPA_H_ */
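
The #else stub above simply returns the nbuf, so callers on the
reinject path can invoke the handler unconditionally even when IPA
support is compiled out. A hypothetical call site (variable names
are illustrative, not taken from this change) would look like:

	/* Hypothetical caller just before handing the frame back to
	 * the REO ring: no-op fix-up when IPA support is compiled out.
	 */
	head = dp_ipa_handle_rx_reo_reinject(soc, head);
	if (!head)
		return QDF_STATUS_E_FAILURE;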