UDP GSO packets from the network stack currently do not honor the
gso_max_size limit set in a driver. As a result, some enforcement
needs to be done in the driver itself prior to transmitting to
hardware.
Instead of setting the gso_max_size on the rmnet devices, the
gso_max_size is instead set on the physical device. This
ensures that the network stack processing happens with the
maximum gso size possible for the TCP case.
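A minimal sketch of the kind of driver-side enforcement described above;
the helper name and the simple skb->len comparison against gso_max_size
are assumptions, not the driver's actual code:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int rmnet_enforce_gso_limit(struct sk_buff *skb,
                                   struct net_device *phys_dev)
{
        struct sk_buff *segs, *next;

        if (!skb_is_gso(skb) || skb->len <= phys_dev->gso_max_size)
                return dev_queue_xmit(skb);

        /* Software-segment so that no single skb exceeds the HW limit */
        segs = skb_gso_segment(skb, netif_skb_features(skb));
        if (IS_ERR_OR_NULL(segs)) {
                kfree_skb(skb);
                return -EINVAL;
        }

        consume_skb(skb);
        while (segs) {
                next = segs->next;
                segs->next = NULL;
                dev_queue_xmit(segs);
                segs = next;
        }

        return 0;
}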
CRs-Fixed: 2981039
Change-Id: I5280ea79f868e2b933f2604f8a33fbf33687f76c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Add strings to display the cumulative result of the segmentation of
TSO packets on LL channel per rmnet device
CRs-Fixed: 2982506
Change-Id: I2454080d89c7a39a67004e0143463cd35772d45c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
The csum_valid bit being set to zero in the v5 csum offload header has two
possible meanings. Either 1) the checksum is bad, or 2) the checksum was
not calculated. When a packet is received with such a header, we need to
manually checksum the packet to avoid reporting potentially valid packets
as having bad checksums.
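An illustrative sketch of the DL handling this implies (not the exact
driver code); the point is that a cleared csum_valid bit maps to
CHECKSUM_NONE so the packet is verified in software rather than being
reported as bad:

#include <linux/skbuff.h>

static void rmnet_map_v5_rx_csum(struct sk_buff *skb, bool csum_valid)
{
        if (csum_valid) {
                /* Hardware vouched for the checksum */
                skb->ip_summed = CHECKSUM_UNNECESSARY;
        } else {
                /* Bad or simply not computed: force software validation */
                skb->ip_summed = CHECKSUM_NONE;
        }
}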
Change-Id: I6a85d7a01c844be625c11c80ba381ac4dbd0366d
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
maxtype should be set to the value of the highest attribute type that
is valid, not the number of valid attributes accepted. This ensures that
attributes with types higher than this number are ignored properly.
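For illustration (using the rmnet link attributes as an example; the
affected parse site may differ), maxtype is the highest valid type
constant rather than a count of attributes:

#include <net/netlink.h>
#include <uapi/linux/if_link.h>

static int rmnet_parse_link_attrs(struct nlattr *data,
                                  const struct nla_policy *policy,
                                  struct netlink_ext_ack *extack)
{
        struct nlattr *tb[IFLA_RMNET_MAX + 1];

        /* maxtype = IFLA_RMNET_MAX: attributes with a larger type are
         * skipped instead of being parsed as if they were valid.
         */
        return nla_parse_nested(tb, IFLA_RMNET_MAX, data, policy, extack);
}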
Change-Id: Ife53a4a9a7327a8d89e709320eb1c6cb50992922
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Some drivers may not set the transport header when queueing packets
to the network stack. The network stack will end up setting the
transport offset to 0 in this case.
When these packets arrive in rmnet and have to be checked for
TCP ACKs, the standard TCP header helpers cannot be used, as they
rely on the transport header offset being set.
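A sketch of the workaround this implies, deriving the TCP header from
the network header instead of trusting skb_transport_header(); the
helper name is an assumption and IPv6 extension headers are omitted:

#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/skbuff.h>
#include <linux/tcp.h>

static struct tcphdr *rmnet_locate_tcp_hdr(struct sk_buff *skb)
{
        if (skb->protocol == htons(ETH_P_IP)) {
                struct iphdr *iph = ip_hdr(skb);

                if (iph->protocol != IPPROTO_TCP)
                        return NULL;
                return (struct tcphdr *)((u8 *)iph + iph->ihl * 4);
        }

        if (skb->protocol == htons(ETH_P_IPV6)) {
                struct ipv6hdr *ip6h = ipv6_hdr(skb);

                /* IPv6 extension headers ignored for brevity */
                if (ip6h->nexthdr != IPPROTO_TCP)
                        return NULL;
                return (struct tcphdr *)(ip6h + 1);
        }

        return NULL;
}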
CRs-Fixed: 2965067
Change-Id: I50d53b7762fd75d1b17ccc6765abe65568aaa4e0
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Break the dependency between rmnet_core and rmnet_ctl
so that rmnet_core can be compiled without
rmnet_ctl on Yocto-based targets.
Change-Id: I3bfa3dbcd24f9343d073107f40b6d98c77aba881
Signed-off-by: Conner Huff <chuff@codeaurora.org>
Add a check to ensure that the rmnet_port value returned
by rmnet_get_port is not NULL before we go ahead
and try to queue up an skb to be transmitted via it.
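A minimal sketch of the guard, assuming the driver's struct rmnet_port
and the rmnet_get_port() lookup; the transmit call itself is elided:

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int rmnet_ll_tx(struct sk_buff *skb, struct net_device *real_dev)
{
        struct rmnet_port *port = rmnet_get_port(real_dev);

        if (unlikely(!port)) {
                /* Nothing attached to this device: drop instead of crashing */
                kfree_skb(skb);
                return -ENODEV;
        }

        /* ... safe to queue skb for transmission via port ... */
        return 0;
}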
Change-Id: I7890f4f2bcbb8dd300957c4fb12ec77f0412f4a6
Signed-off-by: Conner Huff <chuff@codeaurora.org>
The TCP ancillary bit is now sent in almost all cases, unlike earlier
where it was sent only for TCP DL traffic.
This bit will now be unset only in adverse scenarios, so this change
ensures that the ACK queue state is updated when the ancillary bit is
unset.
CRs-Fixed: 2957344
Change-Id: I4e1f26c9d3fabc64401284e36b48d82a4f3a5161
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
This provides the flexibility to use TARGET_BOARD_PLATFORM to maintain
a common codebase while differentiating between products with
TARGET_PRODUCT.
CRs-Fixed: 2955427
Change-Id: I824730142b6a68968c6bb44fec841ed49642ef7a
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Steer packets from LLC to different CPU cores for rmnet processing
by assigning a different RX queue. Also bypass rmnet_offload and
rmnet_shs for LL packets.
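A sketch of the steering idea only; the queue index is a placeholder,
not the value the driver actually uses:

#include <linux/skbuff.h>

static void rmnet_ll_steer_rx(struct sk_buff *skb)
{
        /* Record a non-default RX queue so LL packets are processed on
         * different CPU cores than the default channel traffic.
         */
        skb_record_rx_queue(skb, 1);
}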
Change-Id: I459dabe8dd02132614f0e2cf461c89274f18223c
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
LLC acks are now sent in a workqueue context.
Change-Id: Ic162e6ad7575f9a6e73e5c96d7bc14c26c8ffe61
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
ULSO is not supported on the LL endpoint. If such skbs are received by the
rmnet driver, they must be segmented in software before transmitting them.
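A sketch of the software fallback, reusing the rmnet_ll_send_skb() name
from this series but assuming its signature; passing no feature flags to
skb_gso_segment() forces full software segmentation:

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void rmnet_ll_tx_maybe_segment(struct sk_buff *skb)
{
        struct sk_buff *segs, *next;

        if (!skb_is_gso(skb)) {
                rmnet_ll_send_skb(skb);
                return;
        }

        /* Segment in software; no HW offloads assumed on this path */
        segs = skb_gso_segment(skb, 0);
        if (IS_ERR_OR_NULL(segs)) {
                kfree_skb(skb);
                return;
        }

        consume_skb(skb);
        while (segs) {
                next = segs->next;
                segs->next = NULL;
                rmnet_ll_send_skb(segs);
                segs = next;
        }
}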
Change-Id: I0103d06c6bfa8eb96cfbde85f68b1b45034a93e5
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Allows the use of different uplink aggregation parameters for the
default and low-latency uplink aggregation states. To facilitate this,
both contexts now have their own page recycling elements instead of a
single port-wide list, as well as their own instance of the
rmnet_egress_agg_params struct.
To configure these parameters, a new netlink attribute has been created
for specifying which aggregation state the given IFLA_RMNET_UL_AGG_PARAMS
attribute should apply to. For compatibility with user space, the default
state will be chosen if this element is not provided.
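An illustrative sketch of the fallback behavior; the attribute and state
names below are placeholders, not the actual uapi additions:

#include <net/netlink.h>

enum {
        RMNET_AGG_STATE_DEFAULT,   /* hypothetical state id */
        RMNET_AGG_STATE_LL,        /* hypothetical state id */
};

static u8 rmnet_get_agg_state(struct nlattr *state_attr)
{
        /* Older userspace never sends the selector attribute, so fall
         * back to the default aggregation state in that case.
         */
        if (!state_attr)
                return RMNET_AGG_STATE_DEFAULT;

        return nla_get_u8(state_attr);
}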
Change-Id: Ia340bbb479d9427658a153a2019f4891da0b741c
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
DL markers received over this channel need to be silently dropped, as
processing them will interfere with the standard DL marker processing on
the default channel.
Change-Id: Id6b36c3f877bf15768e3ac0a5ea8803656375a2b
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Allows the use of the LL channel on IPA based targets. MHI specific
functionality is split into the new rmnet_ll_mhi.c file, and IPA is
placed in rmnet_ll_ipa.c. rmnet_ll.c works as a generic interface to the
core rmnet module, and handles calling specific functions in the active
HW module to provide the low latency channel functionality.
Change-Id: Id3e77b8433134872eba09818fc662fc109687d80
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Add support to switch bearers between default and lower latency
channels via QMAP commands.
Change-Id: I6662f59c713e8e3ab7409f50871bec11d9908c67
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Packets are now sent over a dedicated MHI channel when indicated by the
DFC driver.
New dedicated channel is controlled by rmnet driver. Buffers are allocated
and supplied to it as needed from a recyclable pool for RX on the channel,
and packets will be sent to it and freed manually once the channel
indicates that they have been sent.
Low latency packets can be aggregated like standard QMAP packets, but have
their own aggregation state to prevent mixing default and low latency
flows, and to allow each type of flow to use its own send function
(i.e. dev_queue_xmit() versus rmnet_ll_send_skb()).
Low latency packets also have their own load-balancing scheme, and do not
need to use the SHS module for balancing. To facilitate this, we mark the
low latency packets with a non-zero priority value upon receipt from the
MHI channel and avoid sending any such marked packets to the SHS ingress
hook.
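A sketch of the marking only; the non-zero value is a placeholder for
whatever constant the driver uses:

#include <linux/skbuff.h>

static void rmnet_ll_mark_rx(struct sk_buff *skb)
{
        /* Any non-zero priority lets later hooks (e.g. SHS ingress)
         * recognize and bypass low latency packets.
         */
        skb->priority = 1;
}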
DFC has been updated with a new netlink message type to handle swapping a
list of bearers from one channel to another. The actual swap is performed
asynchronously, and separate netlink ACKs will be sent to the userspace
socket when the switch has been completed.
Change-Id: I93861d4b004f399ba203d76a71b2f01fa5c0d5d2
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Define the hooks to be used by perf tether ingress and egress.
CRs-Fixed: 2813607
Change-Id: I68c4cc1e73c60e784fd4117679b3a373d29f539c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
This adds support for the new transmit offload header as well as
the handling needed for it in the core driver.
CRs-Fixed: 2810638
Change-Id: I8ce2e0772209faf3d585e7d9d8d56eceb695d586
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
RmNet would previously update the page offset and page length values
contained within each skb_frag_t in the SKB received from the physical
driver during deaggregation. This ensured that the next data to be
deaggregated was always at the "start" of the SKB.
This approach is problematic as it creates a race between the RmNet
deaggregation logic and any userspace application listening on a standard
packet socket (i.e. PACKET_RX/TX_RING socket options were not set, so
packet_rcv() is used by the kernel for handling the socket). Since
packet_rcv() creates a clone of the incoming SKB and queues it until the
point where userspace can read it, the cloned SKB in the queue and the
original SKB being processed by RmNet will refer to the same
skb_shared_info struct lying at the end of the buffer pointed to by
skb->head. This means that when RmNet updates these values inside of the
skb_frag_t struct in the SKB as it processes them, the same changes will
be reflected in the cloned SKB waiting in the queue for the packet socket.
When userspace calls recv() to listen for data, this SKB will then be
copied into the user provided buffer via skb_copy_datagram_iter(). This
copy will result in -EFAULT being returned to the user since each page in
the SKB will have length 0.
This updates the deaggregation logic to maintain the current position in
the SKB being deaggregated via different means to avoid this race with
userspace. Instead of receiving the current fragment to start checking,
rmnet_frag_deaggregate_one() now receives the offset into the SKB to start
at and returns the number of bytes that it has placed into a descriptor
struct to the calling function.
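A sketch of the calling pattern this implies; the exact parameters of
rmnet_frag_deaggregate_one() in the driver may differ:

#include <linux/skbuff.h>

static void rmnet_frag_deaggregate(struct sk_buff *skb,
                                   struct rmnet_port *port)
{
        u32 offset = 0;

        while (offset < skb->len) {
                int consumed;

                /* Bytes placed into a new frag descriptor, or <= 0 on
                 * error. The shared frags are never modified, so clones
                 * held by packet sockets stay intact.
                 */
                consumed = rmnet_frag_deaggregate_one(skb, port, offset);
                if (consumed <= 0)
                        break;

                offset += consumed;
        }
}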
Change-Id: I9d0f5d8be6d47a69d6b0260fd20907ad69e377ff
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
The Kbuild flag for the LE based target was not being picked up
by cflags as expected. As an unintended consequence, when
rmnet_shs tried to include rmnet_trace.h, the LE flag that was
not used to build rmnet_core.ko was treated as unknown, causing
a compilation issue. To get around this constraint, the
featurization flags in rmnet_trace.h are instead changed to
filter by kernel version first for LE targets, and then
from there to filter by target name specifically.
Change-Id: I0f9a40f2ed7bfacc492cf2bb99b816c86ec710ed
Signed-off-by: Conner Huff <chuff@codeaurora.org>
These changes set TRACE_INCLUDE_PATH according to the target directory
structure. A target-specific flag was added to detect the correct
source path.
Change-Id: I04463c7c30a700f6d697a5de5df69cf9de7805ce
Signed-off-by: Mayank Vishwari <mayankvi@codeaurora.org>
Port and adjust code as needed to enable compilation
of rmnet_core.ko and rmnet_ctl.ko for taro.
Change-Id: I1ef4ea71115827f49dc3bd49aaf516eff91c2138
Signed-off-by: Conner Huff <chuff@codeaurora.org>
Ensure that the generic netlink response has a zeroed (memset) payload
before sending it to userspace.
Change-Id: Ie2fa92ce80bb3c0716e779cebeaedb2d31d759c1
Acked-by: Ryan Chapman <rchapman@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: Kaustubh Pandey <kapandey@codeaurora.org>
Include a mention of the kernel header files in the .bb recipe file
so that we no longer need to use a relative path to include the
QMI header.
Change-Id: I266e167eb5970da3cd9206d289135248ddc4f791
Signed-off-by: Conner Huff <chuff@codeaurora.org>
A relative path is required for now for compilation
to work for both the ftrace and QMI headers.
Also introduce Makefile.am for autotools compilation
and make changes to the Makefile structure.
Change-Id: Iff673d79c5424c78e4d9763517c18dff5c731e95
Signed-off-by: Conner Huff <chuff@codeaurora.org>
This patch adds the necessary handling for the physical device driver to
use multiple pages to hold data in the SKBs. Essentially, the following
changes are implemented:
- rmnet_frag_descriptor struct now holds a list of frags, instead of a
single one. Pushing, pulling, and trimming APIs are updated to use
this new format.
- QMAP deaggregation now loops over each element in skb_shinfo->frags
looking for data. Packets are allowed to be split across multiple
pages. All pages containing data for a particular packet will be added
to the frag_descriptor struct representing it.
- a new API, rmnet_frag_header_ptr() has been added for safely accessing
packet headers. This API, modeled after skb_header_pointer(), handles
the fact that headers could potentially be split across 2 pages. A
pointer to the location of the header is returned in the usual case
where the header is physically contiguous. If not, the header is
linearized into the user-provided buffer to allow normal header struct
read access (see the usage sketch after this list).
- this new header access API is used in all places on the DL path when
headers are needed, including QMAP command processing, QMAPv1
handling, QMAPv5 checksum offload, and QMAPv5 coalescing.
- RSB/RSC segmentation handling is updated to add all necessary pages
containing packet data to the newly created descriptor. Additionally,
the pages containing L3 and L4 headers are added as well, as this
allows easier downstream processing, and guarantees that the header
data will not be freed until all packets that need them have been
converted into SKBs.
- as all frag_descriptors are now guaranteed to contain the L3 and L4
header data (and because they are no longer guaranteed to be on the
same page), the hdr_ptr member has been removed as it no longer serves
a purpose.
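Usage sketch for rmnet_frag_header_ptr(); the signature is assumed to
mirror skb_header_pointer(), and the function, frag_desc, and ip_len
below are placeholders for a caller, the descriptor being parsed, and
its computed L3 header length:

#include <linux/errno.h>
#include <linux/tcp.h>

static int rmnet_example_read_tcp(struct rmnet_frag_descriptor *frag_desc,
                                  u32 ip_len)
{
        struct tcphdr buf, *th;

        th = rmnet_frag_header_ptr(frag_desc, ip_len, sizeof(*th), &buf);
        if (!th)
                return -EINVAL; /* header runs past the descriptor */

        /* th is safe to read whether the header was contiguous or was
         * linearized into buf because it straddled two pages.
         */
        return th->doff * 4;
}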
Change-Id: Iebb677a6ae7e442fa55e0d131af59cde1b5ce18a
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Re-send QMAP DFC_CONFIG command if no ack is received upon
first QMAP indication.
Change-Id: I33ec5cbbf3550d1df03ca1dd990bf8ad58ad9582
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
When calculating the length of the IPv6 header chain, lengths of the IPv6
extension headers are not checked against the overall packet length and
thus it's possible to parse past the end of the packet when the packet is
malformed.
This adds the necessary bounds checking to ensure that parsing stops if the
end of the packet is reached to avoid the following:
Unable to handle kernel paging request at virtual address
pc : rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
lr : rmnet_frag_ipv6_skip_exthdr+0x68/0x108 [rmnet_core]
Call trace:
rmnet_frag_ipv6_skip_exthdr+0xc0/0x108 [rmnet_core]
DATARMNET29e8d137c4+0x1a0/0x3e0 [rmnet_offload]
rmnet_frag_ingress_handler+0x294/0x404 [rmnet_core]
rmnet_rx_handler+0x1b4/0x284 [rmnet_core]
__netif_receive_skb_core+0x740/0xd2c
__netif_receive_skb+0x44/0x158
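A minimal sketch of the bounds checking being described; the function
name and parameters are assumptions, not the driver's exact
implementation:

#include <linux/errno.h>
#include <linux/ipv6.h>
#include <net/ipv6.h>

static int rmnet_ipv6_skip_exthdr_bounded(const u8 *pkt, u32 pkt_len,
                                          u32 start, u8 *nexthdrp)
{
        u8 nexthdr = *nexthdrp;

        while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) {
                const struct ipv6_opt_hdr *hp;

                /* The option header itself must lie within the packet */
                if (start + sizeof(*hp) > pkt_len)
                        return -EINVAL;

                hp = (const struct ipv6_opt_hdr *)(pkt + start);
                if (nexthdr == NEXTHDR_FRAGMENT)
                        start += 8;
                else if (nexthdr == NEXTHDR_AUTH)
                        start += ipv6_authlen(hp);
                else
                        start += ipv6_optlen(hp);

                /* The advertised length must not run past the packet */
                if (start > pkt_len)
                        return -EINVAL;

                nexthdr = hp->nexthdr;
        }

        *nexthdrp = nexthdr;
        return start;
}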
Change-Id: Ib2e2ebce733bd4d14a3dfc175133638b15015277
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Detects TCP pure ACKs so they can be put into the dedicated TX queues
even if they contain various options.
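A sketch of such a test; a segment is treated as a pure ACK when only
the ACK flag is set and the TCP length equals the header length (doff
already covers any options present):

#include <linux/tcp.h>
#include <linux/types.h>

static bool rmnet_is_pure_tcp_ack(const struct tcphdr *th, u32 tcp_len)
{
        if (!th->ack || th->syn || th->fin || th->rst || th->psh || th->urg)
                return false;

        /* No payload beyond the header, options included */
        return tcp_len == (u32)(th->doff * 4);
}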
Change-Id: I6a9b714ccb58616ff49a150467d33a348d88ec64
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Add a watchdog timer to recover from a potential data stall when data is
not going to the expected DRB and no DFC indication is received.
Change-Id: Iaa4b4814967cf9400c36115a083922376d23928d
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Fix a null pointer dereference issue when data packets trigger the
queuing of powersave work before the powersave workqueue is initialized.
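An illustrative guard (the field names are assumptions): only queue the
powersave work once the workqueue exists:

#include <linux/workqueue.h>

static void rmnet_queue_ps_work(struct rmnet_port *port)
{
        /* ps_wq / ps_work are hypothetical names for the powersave
         * workqueue and work item.
         */
        if (!port->ps_wq)
                return;

        queue_work(port->ps_wq, &port->ps_work);
}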
Change-Id: Ia3515a7aaa47cb41568c39462bca73ceae11ea9c
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
This patch adds trace events to help with debugging the GSO feature
by identifying the packets (and their lengths) that are using
the segmentation offload feature.
Add source and destination port number info
in the GSO trace events to differentiate between
the flows.
CRs-Fixed: 2697145
Change-Id: I4f9786afa799cb1589bd07393c0922913037390d
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Re-arrange the IPA registration sequence to handle cases where rmnet_ctl
is initialized before IPA is ready.
Change-Id: Ic8416ad8f96f818e32f1320e997287ebfe755d03
Acked-by: Weiyi Chen <weiyic@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Reduce the minimum allowed UL aggregation timeout to 1ms
as a lower limit might be preferable to reduce latency for
sporadic traffic scenarios.
CRs-fixed: 2692360
Change-Id: Iba1c02232fa83d7cac112bd4b3f625128e2da88b
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Cleanup entries in the task boost list when the pid becomes inactive.
Change-Id: I0b1b2ef81cda470cd08b31ab4e78f81d346b9b70
Acked-by: Ryan Chapman <rchapman@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Similar to the UL direction, allow the hardware checksum offload support
to be toggled off with the NETIF_F_RXCSUM flag through ethtool.
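A sketch of how the DL path can honor the toggle: only trust the
offloaded result when NETIF_F_RXCSUM is still enabled on the device
(the helper name is an assumption):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void rmnet_rx_csum_apply(struct sk_buff *skb, bool hw_csum_valid)
{
        if ((skb->dev->features & NETIF_F_RXCSUM) && hw_csum_valid)
                skb->ip_summed = CHECKSUM_UNNECESSARY;
        else
                skb->ip_summed = CHECKSUM_NONE; /* let software verify */
}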
Change-Id: I38e43cf9c13363eee340793878be7639f18254e3
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Do not remove module if the dependent modules are removed.
CRs-Fixed: 2683697
Change-Id: I35539aff061fe57a85f0bb8eb3dcf40499eca760
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>