Commit graph

14 Commits

Author SHA1 Message Date
Sean Tranchetti
6899297aa7 rmnet_core: Manually checksum csum_valid = 0 packets
The csum_valid bit being set to zero in the v5 csum offload header has two
possible meanings. Either 1) the checksum is bad, or 2) the checksum was
not calculated. When a packet is received with such a header, we need to
manually checksum the packet to avoid reporting potentially valid packets
as having bad checksums.

Change-Id: I6a85d7a01c844be625c11c80ba381ac4dbd0366d
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2021-06-16 15:03:51 -07:00
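
A minimal sketch of the manual re-checksumming this commit describes, assuming
an IPv4 packet and generic kernel checksum helpers; the function name is
illustrative, not the driver's actual API:

  #include <linux/ip.h>
  #include <linux/skbuff.h>
  #include <net/checksum.h>

  /* Hypothetical helper: returns true if the transport checksum verifies
   * in software, so a "csum_valid = 0" packet whose checksum was simply
   * not calculated is not falsely reported as bad. */
  static bool rmnet_manual_csum_valid(struct sk_buff *skb)
  {
          const struct iphdr *iph = ip_hdr(skb);
          unsigned int hlen = iph->ihl * 4;
          __wsum csum;

          /* Sum the transport header and payload in software... */
          csum = skb_checksum(skb, hlen, skb->len - hlen, 0);

          /* ...then fold in the IPv4 pseudo-header; zero means valid. */
          return csum_tcpudp_magic(iph->saddr, iph->daddr,
                                   skb->len - hlen, iph->protocol, csum) == 0;
  }
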
Subash Abhinov Kasiviswanathan
af7b029c04 rmnet_core: LL receive packet steering
Steer packets from LLC to different CPU cores for rmnet processing
by assigning a different RX queue. Also bypass rmnet_offload and
rmnet_shs for LL packets.

Change-Id: I459dabe8dd02132614f0e2cf461c89274f18223c
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2021-05-14 10:31:29 -07:00
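
A rough sketch of the RX-queue-based steering idea, assuming queue 0 carries
default-channel traffic; the queue index and function name are assumptions:

  #include <linux/skbuff.h>

  #define RMNET_LL_RX_QUEUE 1     /* assumed: default channel uses queue 0 */

  static void rmnet_ll_steer(struct sk_buff *skb)
  {
          /* Recording a distinct RX queue lets the stack steer LL
           * traffic to different CPU cores than the default channel. */
          skb_record_rx_queue(skb, RMNET_LL_RX_QUEUE);
  }
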
Sean Tranchetti
9e18715d96 rmnet_core: Discard DL markers received over LL channel
DL markers received over this channel need to be silently dropped, as
processing them will interfere with the standard DL marker processing on
the default channel.

Change-Id: Id6b36c3f877bf15768e3ac0a5ea8803656375a2b
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2021-05-14 10:30:34 -07:00
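
One way the silent drop could look, using the classic little-endian QMAP
header layout where the command/data bit marks command frames such as DL
markers; the helper name and channel flag are illustrative:

  #include <linux/skbuff.h>

  struct rmnet_map_header {
          u8 pad_len:6;
          u8 reserved_bit:1;
          u8 cd_bit:1;            /* set => command frame (incl. DL markers) */
          u8 mux_id;
          __be16 pkt_len;
  } __aligned(1);

  static bool rmnet_ll_drop_dl_marker(struct sk_buff *skb, bool is_ll_channel)
  {
          const struct rmnet_map_header *maph = (void *)skb->data;

          if (is_ll_channel && maph->cd_bit) {
                  consume_skb(skb);       /* silent drop: no error counter */
                  return true;
          }
          return false;
  }
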
Sean Tranchetti
b38dff7d79 rmnet_core: Add Low Latency framework
Packets are now sent over a dedicated MHI channel when indicated by the
DFC driver.

The new dedicated channel is controlled by the rmnet driver. For RX,
buffers are allocated from a recyclable pool and supplied to the channel
as needed; for TX, packets are sent to the channel and freed manually once
it indicates that they have been sent.

Low latency packets can be aggregated like standard QMAP packets, but have
their own aggregation state to prevent mixing default and low latency
flows, and to allow each type of flow to use its own send function
(i.e. dev_queue_xmit() versus rmnet_ll_send_skb()).

Low latency packets also have their own load-balancing scheme, and do not
need to use the SHS module for balancing. To facilitate this, we mark the
low latency packets with a non-zero priority value upon receipt from the
MHI channel and avoid sending any such marked packets to the SHS ingress
hook.

DFC has been updated with a new netlink message type to handle swapping a
list of bearers from one channel to another. The actual swap is performed
asynchronously, and separate netlink ACKs will be sent to the userspace
socket when the switch has been completed.

Change-Id: I93861d4b004f399ba203d76a71b2f01fa5c0d5d2
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2021-05-12 17:02:51 -07:00
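
A sketch of the priority-marking scheme described above; the marker value
and the SHS hook pointer name are assumptions:

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static void (*rmnet_shs_ingress_hook)(struct sk_buff *skb); /* assumed hook */
  #define RMNET_LL_RX_PRIO 1      /* any non-zero value serves as the mark */

  static void rmnet_ll_rx_mark(struct sk_buff *skb)
  {
          /* Tag LL packets on receipt from the MHI channel... */
          skb->priority = RMNET_LL_RX_PRIO;
  }

  static void rmnet_deliver(struct sk_buff *skb)
  {
          /* ...so only unmarked (default channel) packets reach SHS. */
          if (!skb->priority && rmnet_shs_ingress_hook)
                  rmnet_shs_ingress_hook(skb);
          else
                  netif_receive_skb(skb);
  }
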
Subash Abhinov Kasiviswanathan
eec13e2c69 core: Add hooks for perf tether ingress and egress
Define the hooks to be used by perf tether ingress and egress.

CRs-Fixed: 2813607
Change-Id: I68c4cc1e73c60e784fd4117679b3a373d29f539c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2021-04-26 19:39:44 -06:00
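
Such hooks commonly follow the kernel's RCU function-pointer pattern,
sketched here with an assumed signature:

  #include <linux/rcupdate.h>
  #include <linux/skbuff.h>

  static void (*perf_tether_ingress_hook)(struct sk_buff *skb) __rcu;

  static void call_perf_tether_ingress(struct sk_buff *skb)
  {
          void (*hook)(struct sk_buff *skb);

          rcu_read_lock();
          hook = rcu_dereference(perf_tether_ingress_hook);
          if (hook)
                  hook(skb);      /* runs only while the perf module is loaded */
          rcu_read_unlock();
  }
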
Sean Tranchetti
4c8ad36ec7 rmnet_core: Don't alter page offset or page length during deaggregation
RmNet would previously update the page offset and page length values
contained within each skb_frag_t in the SKB received from the physical
driver during deaggregation. This ensured that the next data to be
deaggregated was always at the "start" of the SKB.

This approach is problematic as it creates a race between the RmNet
deaggregation logic and any userspace application listening on a standard
packet socket (i.e. PACKET_RX/TX_RING socket options were not set, so
packet_rcv() is used by the kernel for handling the socket). Since
packet_rcv() creates a clone of the incoming SKB and queues it until the
point where userspace can read it, the cloned SKB in the queue and the
original SKB being processed by RmNet will refer to the same
skb_shared_info struct lying at the end of the buffer pointed to by
skb->head. This means that when RmNet updates these values inside of the
skb_frag_t struct in the SKB as it processes them, the same changes will
be reflected in the cloned SKB waiting in the queue for the packet socket.
When userspace calls recv() to listen for data, this SKB will then be
copied into the user provided buffer via skb_copy_datagram_iter(). This
copy will result in -EFAULT being returned to the user since each page in
the SKB will have length 0.

This updates the deaggregation logic to maintain the current position in
the SKB being deaggregated via different means, avoiding this race with
userspace. Instead of receiving the fragment at which to start checking,
rmnet_frag_deaggregate_one() now receives the offset into the SKB at which
to start, and returns the number of bytes it has placed into a descriptor
struct to the calling function, as sketched after this entry.

Change-Id: I9d0f5d8be6d47a69d6b0260fd20907ad69e377ff
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2021-02-10 16:37:17 -08:00
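
The reworked loop might look like this sketch; the helper's signature is
simplified from the commit text (the real function also takes port context
and a descriptor list to fill):

  #include <linux/skbuff.h>

  int rmnet_frag_deaggregate_one(struct sk_buff *skb, u32 start);

  static void rmnet_deaggregate(struct sk_buff *skb)
  {
          u32 offset = 0;
          int consumed;

          while (offset < skb->len) {
                  /* Reads at 'offset' without mutating the shared
                   * skb_shared_info frags, so packet-socket clones still
                   * see the original page offsets and lengths. */
                  consumed = rmnet_frag_deaggregate_one(skb, offset);
                  if (consumed <= 0)
                          break;
                  offset += consumed;
          }
  }
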
Conner Huff
4fe7a4add4 datarmnet: Fastforward core from data-kernel.lnx.1.1 to data-kernel.lnx.1.2
Catches up to commit b4d76675a6feeb57f7188d5354e1cf82b7adb012.

Change-Id: Ib1f00be9799712bd4ab0381cde648287f98a61a1
Signed-off-by: Conner Huff <chuff@codeaurora.org>
2021-02-01 18:46:12 -08:00
Sean Tranchetti
06bf42ec24 rmnet_core: Allow scatter-gather on the DL path
This patch adds the necessary handling for the physical device driver to
use multiple pages to hold data in the SKBs. Essentially, the following
changes are implemented:
  - rmnet_frag_descriptor struct now holds a list of frags instead of a
    single one. Pushing, pulling, and trimming APIs are updated to use
    this new format.
  - QMAP deaggregation now loops over each element in skb_shinfo->frags
    looking for data. Packets are allowed to be split across multiple
    pages. All pages containing data for a particular packet will be added
    to the frag_descriptor struct representing it.
  - a new API, rmnet_frag_header_ptr(), has been added for safely accessing
    packet headers (see the sketch after this entry). This API, modeled
    after skb_header_pointer(), handles
    the fact that headers could potentially be split across 2 pages. A
    pointer to the location of the header is returned in the usual case
    where the header is physically contiguous. If not, the header is
    linearized into the user-provided buffer to allow normal header struct
    read access.
  - this new header access API is used in all places on the DL path when
    headers are needed, including QMAP command processing, QMAPv1
    handling, QMAPv5 checksum offload, and QMAPv5 coalescing.
  - RSB/RSC segmentation handling is updated to add all necessary pages
    containing packet data to the newly created descriptor. Additionally,
    the pages containing L3 and L4 headers are added as well, as this
    allows easier downstream processing, and guarantees that the header
    data will not be freed until all packets that need them have been
    converted into SKBs.
  - as all frag_descriptors are now guaranteed to contain the L3 and L4
    header data (and because they are no longer guaranteed to be on the
    same page), the hdr_ptr member has been removed as it no longer serves
    a purpose.

Change-Id: Iebb677a6ae7e442fa55e0d131af59cde1b5ce18a
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-08-19 10:44:16 -06:00
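
The sketch referenced above: the contract of rmnet_frag_header_ptr() as the
commit describes it, plus a typical call site. The descriptor internals are
elided and the reader function is illustrative:

  #include <linux/ip.h>

  struct rmnet_frag_descriptor;   /* holds a list of page frags */

  /* Returns a direct pointer when the header is physically contiguous;
   * otherwise linearizes 'len' bytes into 'buf' and returns 'buf'.
   * NULL if the descriptor holds fewer than off + len bytes. */
  void *rmnet_frag_header_ptr(struct rmnet_frag_descriptor *frag_desc,
                              u32 off, u32 len, void *buf);

  /* Typical call site: read an IPv4 header that may straddle two pages. */
  static u8 rmnet_read_ip_proto(struct rmnet_frag_descriptor *frag_desc)
  {
          struct iphdr hdr, *iph;

          iph = rmnet_frag_header_ptr(frag_desc, 0, sizeof(hdr), &hdr);
          return iph ? iph->protocol : 0;
  }
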
Sean Tranchetti
c583c04062 core: rmnet: validate ipv6 extension header lengths
When calculating the length of the IPv6 header chain, lengths of the IPv6
extension headers are not checked against the overall packet lengths and
thus it's possible to parse past the end of the packet when the packet is
malformed.

This adds the necessary bounds checking to ensure that parsing stops if the
end of the packet is reached to avoid the following:
Unable to handle kernel paging request at virtual address
pc : rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
lr : rmnet_frag_ipv6_skip_exthdr+0x68/0x108 [rmnet_core]
Call trace:
  rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
  DATARMNET29e8d137c4+0x1a0/0x3e0 [rmnet_offload]
  rmnet_frag_ingress_handler+0x294/0x404 [rmnet_core]
  rmnet_rx_handler+0x1b4/0x284 [rmnet_core]
  __netif_receive_skb_core+0x740/0xd2c
  __netif_receive_skb+0x44/0x158

Change-Id: Ib2e2ebce733bd4d14a3dfc175133638b15015277
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-07-28 15:04:31 -06:00
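
The essence of the bounds check, sketched over a linear buffer for clarity
(the real code walks page frags, and AH/fragment headers use different
length rules):

  #include <linux/errno.h>
  #include <net/ipv6.h>

  static int skip_exthdrs(const u8 *pkt, u32 pkt_len, u32 off, u8 nexthdr)
  {
          while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) {
                  const struct ipv6_opt_hdr *hp;

                  /* The extension header itself... */
                  if (off + sizeof(*hp) > pkt_len)
                          return -EINVAL;
                  hp = (const struct ipv6_opt_hdr *)(pkt + off);

                  /* ...and its full advertised length must fit the packet,
                   * or parsing stops instead of running past the end. */
                  if (off + ipv6_optlen(hp) > pkt_len)
                          return -EINVAL;

                  nexthdr = hp->nexthdr;
                  off += ipv6_optlen(hp);
          }
          return off;
  }
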
Sean Tranchetti
94d3f634df rmnet: Control DL csum offload with RXCSUM
Similar to the UL direction, allow the hardware checksum offload support
to be toggled off with the NETIF_F_RXCSUM flag through ethtool.

Change-Id: I38e43cf9c13363eee340793878be7639f18254e3
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-05-18 17:35:32 -06:00
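
A sketch of the gating on the RX side; the helper name is illustrative:

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static void rmnet_set_rx_csum(struct net_device *dev, struct sk_buff *skb,
                                bool hw_csum_valid)
  {
          /* Honors `ethtool -K <dev> rx off`: with RXCSUM cleared, the
           * hardware result is ignored and the stack verifies in software. */
          if ((dev->features & NETIF_F_RXCSUM) && hw_csum_valid)
                  skb->ip_summed = CHECKSUM_UNNECESSARY;
          else
                  skb->ip_summed = CHECKSUM_NONE;
  }
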
Subash Abhinov Kasiviswanathan
2204dee91b core: Remove dl marker v1 usage
This removes the dl marker v1 definitions, handlers, and users, since
these are no longer used.

This also fixes the following CFI warning-
CFI failure (target: rmnet_perf_core_handle_map_control_end.cfi_jt+0x0/0x4 [rmnet_perf]):
WARNING: CPU: 1 PID: 0 at kernel/cfi.c:29 __ubsan_handle_cfi_check_fail+0x4c/0x54
pstate: 60400005 (nZCv daif +PAN -UAO)
pc: __ubsan_handle_cfi_check_fail+0x4c/0x54
lr: __ubsan_handle_cfi_check_fail+0x4c/0x54
Call trace:
__ubsan_handle_cfi_check_fail+0x4c/0x54
__cfi_check+0x204/0x220 [rmnet_perf]
rmnet_map_dl_trl_notify_v2+0x58/0x88 [rmnet_core]
rmnet_frag_flow_command+0x110/0x120 [rmnet_core]
rmnet_frag_ingress_handler+0xe0/0x3bc [rmnet_core]
rmnet_rx_handler+0x1cc/0x2a4 [rmnet_core]
__netif_receive_skb_core+0x554/0xdc4
process_backlog$4cc8cf18b485f47de9fc54109f04daea+0x1a4/0x314
net_rx_action$4cc8cf18b485f47de9fc54109f04daea+0x144/0x578
__do_softirq+0x250/0x580
irq_exit+0xcc/0xd0
handle_IPI+0x228/0x3b0
efi_header_end+0x148/0x17c
el1_irq+0x108/0x200
lpm_cpuidle_enter$a9941074ca35bb1f25355cf2ff310eae+0x57c/0x5c8
cpuidle_enter_state+0x130/0x334
cpuidle_enter+0x38/0x50
do_idle.llvm.4847091502713502628+0x1e4/0x2ec
cpu_startup_entry+0x24/0x28
__cpu_disable+0x0/0xbc

CRs-Fixed: 2647192
Change-Id: I5dc625f0791aff9738b2128f99c79d7e0dadf26d
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-03-22 21:21:49 -06:00
Sean Tranchetti
1ea6d3f011 net: ethernet: qualcomm: rmnet: Track RSB/RSC byte counts
Allows calculation of the average buffer utilization for RSB/RSC packets.

Change-Id: Id719b97ceffc62b1b9ce28bfab8ec32c6604529c
Acked-by: Conner Huff <chuff@codeaurora.org>
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-02-26 18:22:34 -08:00
Sean Tranchetti
eeb4944964 core: rmnet: Fastforward to 4.19 tip
This brings the RmNet and DFC modules up to date with the 4.19 tip as of
commit 9b38611ea527 ("rmnet: Reduce synchronize_rcu calls").

As part of this, the rmnet_ctl driver was also incorporated, using commit
4ceee3aafb7d ("rmnet_ctl: Add IPC logging and optimizations").

Change-Id: Ic45d46074c7401dfed408c769cfb6462dac0d4ee
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-01-23 13:31:14 -07:00
Subash Abhinov Kasiviswanathan
08d4972b2a core: Rmnet initial commit
Initial commit of the rmnet_core net device driver in dlkm form
in datarmnet. This requires rmnet to be disabled in the
kernel and for it to be loaded before dependent modules.

CRs-Fixed: 2558810
Change-Id: I742e85033fa0999bf9069d43ce73ab9a622a8388
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-12-10 15:22:43 -07:00