The csum_valid bit being set to zero in the v5 csum offload header has two
possible meanings. Either 1) the checksum is bad, or 2) the checksum was
not calculated. When a packet is received with such a header, we need to
manually checksum the packet to avoid reporting potentially valid packets
as having bad checksums.
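The fallback described above can be sketched in userspace C. This is a minimal illustration, not the driver's actual code: the `csum_offload_hdr` struct and `packet_csum_ok()` are hypothetical names, and the software path uses the standard RFC 1071 internet checksum.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical view of the v5 offload header's validity bit. */
struct csum_offload_hdr {
	uint8_t csum_valid : 1;	/* 0 = bad OR not calculated; 1 = verified */
};

/* Standard 16-bit internet checksum (RFC 1071). */
static uint16_t ip_checksum(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t sum = 0;

	while (len > 1) {
		sum += ((uint32_t)p[0] << 8) | p[1];
		p += 2;
		len -= 2;
	}
	if (len)
		sum += (uint32_t)p[0] << 8;
	while (sum >> 16)
		sum = (sum & 0xFFFF) + (sum >> 16);
	return (uint16_t)~sum;
}

/* If csum_valid is 0, verify in software instead of reporting the
 * packet as bad, since the hardware may simply not have checked it. */
static int packet_csum_ok(const struct csum_offload_hdr *hdr,
			  const void *pkt, size_t len)
{
	if (hdr->csum_valid)
		return 1;			/* hardware verified it */
	return ip_checksum(pkt, len) == 0;	/* fall back to software */
}
```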
Change-Id: I6a85d7a01c844be625c11c80ba381ac4dbd0366d
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Steer packets from LLC to different CPU cores for rmnet processing

by assigning a different RX queue. Also bypass rmnet_offload and
rmnet_shs for LL packets.
Change-Id: I459dabe8dd02132614f0e2cf461c89274f18223c
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
DL markers received over this channel need to be silently dropped, as
processing them will interfere with the standard DL marker processing on
the default channel.
Change-Id: Id6b36c3f877bf15768e3ac0a5ea8803656375a2b
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Packets are now sent over a dedicated MHI channel when indicated by the
DFC driver.
The new dedicated channel is controlled by the rmnet driver. Buffers are allocated
and supplied to it as needed from a recyclable pool for RX on the channel,
and packets will be sent to it and freed manually once the channel
indicates that they have been sent.
Low latency packets can be aggregated like standard QMAP packets, but have
their own aggregation state to prevent mixing default and low latency
flows, and to allow each type of flow to use their own send functions
(i.e. dev_queue_xmit() versus rmnet_ll_send_skb()).
Low latency packets also have their own load-balancing scheme, and do not
need to use the SHS module for balancing. To facilitate this, we mark the
low latency packets with a non-zero priority value upon receipt from the
MHI channel and avoid sending any such marked packets to the SHS ingress
hook.
DFC has been updated with a new netlink message type to handle swapping a
list of bearers from one channel to another. The actual swap is performed
asynchronously, and separate netlink ACKs will be sent to the userspace
socket when the switch has been completed.
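The priority-marking scheme above can be illustrated with a small userspace sketch. All names here are hypothetical stand-ins (in the driver the mark would live in skb->priority, and the marker value is an assumption):

```c
#include <stdint.h>

/* Hypothetical packet metadata; in the driver this is skb->priority. */
struct pkt {
	uint32_t priority;
};

/* Hypothetical non-zero marker for low latency packets. */
#define RMNET_LL_PRIORITY 0x4C4C0001u

/* Mark packets received on the dedicated low latency MHI channel. */
static void rmnet_ll_mark(struct pkt *p)
{
	p->priority = RMNET_LL_PRIORITY;
}

/* On ingress, marked packets bypass the SHS load-balancing hook and
 * use the low latency balancing scheme instead. */
static int should_call_shs_hook(const struct pkt *p)
{
	return p->priority != RMNET_LL_PRIORITY;
}
```

The key property is that the mark is set exactly once, at receipt from the MHI channel, so a single comparison at the ingress hook is enough to separate the two flow types.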
Change-Id: I93861d4b004f399ba203d76a71b2f01fa5c0d5d2
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Define the hooks to be used by perf tether ingress and egress.
CRs-Fixed: 2813607
Change-Id: I68c4cc1e73c60e784fd4117679b3a373d29f539c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
RmNet would previously update the page offset and page length values
contained within each skb_frag_t in the SKB received from the physical
driver during deaggregation. This ensured that the next data to be
deaggregated was always at the "start" of the SKB.
This approach is problematic as it creates a race between the RmNet
deaggregation logic and any userspace application listening on a standard
packet socket (i.e. PACKET_RX/TX_RING socket options were not set, so
packet_rcv() is used by the kernel for handling the socket). Since
packet_rcv() creates a clone of the incoming SKB and queues it until the
point where userspace can read it, the cloned SKB in the queue and the
original SKB being processed by RmNet will refer to the same
skb_shared_info struct lying at the end of the buffer pointed to by
skb->head. This means that when RmNet updates these values inside of the
skb_frag_t struct in the SKB as it processes them, the same changes will
be reflected in the cloned SKB waiting in the queue for the packet socket.
When userspace calls recv() to listen for data, this SKB will then be
copied into the user provided buffer via skb_copy_datagram_iter(). This
copy will result in -EFAULT being returned to the user since each page in
the SKB will have length 0.
This updates the deaggregation logic to maintain the current position in
the SKB being deaggregated via different means to avoid this race with
userspace. Instead of receiving the current fragment to start checking,
rmnet_frag_deaggregate_one() receives the offset into the SKB to start at
and now returns the number of bytes that it has placed into a descriptor
struct to the calling function.
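The offset-based loop can be sketched in userspace C. This is a simplified illustration of the pattern, not the driver's code: the frame layout (a 1-byte length prefix per packet) and the function names are hypothetical, and the point is that the shared buffer is never written to.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame: concatenated packets in one shared buffer,
 * each prefixed by a 1-byte payload length for illustration. */
struct frame {
	const uint8_t *data;
	size_t len;
};

/* Instead of mutating shared fragment state, take a read-only offset
 * and return the number of bytes consumed for one packet (0 = done).
 * Clones sharing the underlying buffer are unaffected. */
static size_t deagg_one(const struct frame *f, size_t offset)
{
	size_t pkt_len;

	if (offset >= f->len)
		return 0;
	pkt_len = (size_t)f->data[offset] + 1;	/* length byte + payload */
	if (offset + pkt_len > f->len)
		return 0;			/* truncated packet */
	return pkt_len;
}

/* Caller keeps a local offset; only local state advances. */
static int count_packets(const struct frame *f)
{
	size_t off = 0, used;
	int n = 0;

	while ((used = deagg_one(f, off)) > 0) {
		off += used;
		n++;
	}
	return n;
}
```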
Change-Id: I9d0f5d8be6d47a69d6b0260fd20907ad69e377ff
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
This patch adds the necessary handling for the physical device driver to
use multiple pages to hold data in the SKBs. Essentially, the following
changes are implemented:
- rmnet_frag_descriptor struct now holds a list of frags, instead of a
single one. Pushing, pulling, and trimming APIs are updated to use
this new format.
- QMAP deaggregation now loops over each element in skb_shinfo->frags
looking for data. Packets are allowed to be split across multiple
pages. All pages containing data for a particular packet will be added
to the frag_descriptor struct representing it.
- a new API, rmnet_frag_header_ptr(), has been added for safely accessing
packet headers. This API, modeled after skb_header_pointer(), handles
the fact that headers could potentially be split across 2 pages. A
pointer to the location of the header is returned in the usual case
where the header is physically contiguous. If not, the header is
linearized into the user-provided buffer to allow normal header struct
read access.
- this new header access API is used in all places on the DL path when
headers are needed, including QMAP command processing, QMAPv1
handling, QMAPv5 checksum offload, and QMAPv5 coalescing.
- RSB/RSC segmentation handling is updated to add all necessary pages
containing packet data to the newly created descriptor. Additionally,
the pages containing L3 and L4 headers are added as well, as this
allows easier downstream processing, and guarantees that the header
data will not be freed until all packets that need them have been
converted into SKBs.
- as all frag_descriptors are now guaranteed to contain the L3 and L4
header data (and because they are no longer guaranteed to be on the
same page), the hdr_ptr member has been removed as it no longer serves
a purpose.
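The fast-path/slow-path behavior of a header accessor modeled after skb_header_pointer() can be sketched as follows. This is an illustrative userspace version under assumed types, not the rmnet_frag_header_ptr() implementation itself:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical fragment list: each frag is a (data, len) span, mirroring
 * the frag list a descriptor now carries. */
struct frag {
	const uint8_t *data;
	size_t len;
};

/* Return a direct pointer when the requested header is contiguous
 * within one frag; otherwise linearize it into the caller's buffer
 * and return that. Returns NULL if the header runs out of bounds. */
static const void *frag_header_ptr(const struct frag *frags, size_t nfrags,
				   size_t offset, size_t hdr_len, void *buf)
{
	size_t i, copied = 0;

	/* Find the frag containing the start of the header. */
	for (i = 0; i < nfrags; i++) {
		if (offset < frags[i].len)
			break;
		offset -= frags[i].len;
	}
	if (i == nfrags)
		return NULL;

	/* Fast path: header fully inside this frag. */
	if (offset + hdr_len <= frags[i].len)
		return frags[i].data + offset;

	/* Slow path: copy across frag boundaries into buf. */
	for (; i < nfrags && copied < hdr_len; i++) {
		size_t n = frags[i].len - offset;

		if (n > hdr_len - copied)
			n = hdr_len - copied;
		memcpy((uint8_t *)buf + copied, frags[i].data + offset, n);
		copied += n;
		offset = 0;
	}
	return copied == hdr_len ? buf : NULL;
}
```

Callers read the header through the returned pointer either way, so the split-page case needs no special handling at the call site.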
Change-Id: Iebb677a6ae7e442fa55e0d131af59cde1b5ce18a
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
When calculating the length of the IPv6 header chain, lengths of the IPv6
extension headers are not checked against the overall packet lengths and
thus it's possible to parse past the end of the packet when the packet is
malformed.
This adds the necessary bounds checking to ensure that parsing stops if the
end of the packet is reached to avoid the following:
Unable to handle kernel paging request at virtual address
pc : rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
lr : rmnet_frag_ipv6_skip_exthdr+0x68/0x108 [rmnet_core]
Call trace:
rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
DATARMNET29e8d137c4+0x1a0/0x3e0 [rmnet_offload]
rmnet_frag_ingress_handler+0x294/0x404 [rmnet_core]
rmnet_rx_handler+0x1b4/0x284 [rmnet_core]
__netif_receive_skb_core+0x740/0xd2c
__netif_receive_skb+0x44/0x158
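The shape of the bounds check can be sketched in userspace C. This is a simplified model, not the rmnet_frag_ipv6_skip_exthdr() code: it assumes the generic IPv6 extension header layout (first byte = next header, second byte = length in 8-octet units excluding the first 8) and handles only two extension header types for brevity.

```c
#include <stdint.h>
#include <stddef.h>

#define NEXTHDR_TCP	6
#define NEXTHDR_ROUTING	43
#define NEXTHDR_DEST	60

static int is_exthdr(uint8_t nexthdr)
{
	return nexthdr == NEXTHDR_ROUTING || nexthdr == NEXTHDR_DEST;
}

/* Walk the extension header chain, but fail instead of reading past
 * the end of the packet when a header length is malformed. */
static int skip_exthdr(const uint8_t *pkt, size_t pkt_len,
		       size_t start, uint8_t nexthdr, size_t *out_off)
{
	size_t off = start;

	while (is_exthdr(nexthdr)) {
		size_t hdrlen;

		if (off + 2 > pkt_len)
			return -1;	/* header start out of bounds */
		hdrlen = ((size_t)pkt[off + 1] + 1) * 8;
		if (off + hdrlen > pkt_len)
			return -1;	/* length runs past packet end */
		nexthdr = pkt[off];
		off += hdrlen;
	}
	*out_off = off;
	return nexthdr;
}
```

Without the second check, a malformed length byte sends the walk past the buffer, which is exactly the paging fault shown in the trace above.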
Change-Id: Ib2e2ebce733bd4d14a3dfc175133638b15015277
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Similar to the UL direction, allow the hardware checksum offload support
to be toggled off with the NETIF_F_RXCSUM flag through ethtool.
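A usage sketch with standard ethtool offload controls; the interface name (rmnet_data0) is a placeholder and will vary by target.

```shell
# Disable hardware RX checksum offload (clears NETIF_F_RXCSUM):
ethtool -K rmnet_data0 rx off
# Re-enable it:
ethtool -K rmnet_data0 rx on
# Confirm the current setting:
ethtool -k rmnet_data0 | grep rx-checksumming
```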
Change-Id: I38e43cf9c13363eee340793878be7639f18254e3
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Allows calculation of the average buffer utilization for RSB/RSC packets.
Change-Id: Id719b97ceffc62b1b9ce28bfab8ec32c6604529c
Acked-by: Conner Huff <chuff@codeaurora.org>
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
This brings the RmNet and DFC modules up to date with the 4.19 tip as of
commit 9b38611ea527 ("rmnet: Reduce synchronize_rcu calls").
As part of this, the rmnet_ctl driver was also incorporated, using commit
4ceee3aafb7d ("rmnet_ctl: Add IPC logging and optimizations")
Change-Id: Ic45d46074c7401dfed408c769cfb6462dac0d4ee
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Initial commit of rmnet_core net device driver in DLKM form
in datarmnet. This requires rmnet to be disabled in the
kernel and for it to be loaded before dependent modules.
CRs-Fixed: 2558810
Change-Id: I742e85033fa0999bf9069d43ce73ab9a622a8388
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>