Snap for 7518683 from 40a113903d to android12-5.10-keystone-qcom-release

Change-Id: I59364128cfae382c0c930a9c317326c6a7196f2d
Android Build Coastguard Worker, 2021-07-03 00:00:24 +00:00
60 changed files with 12717 additions and 16225 deletions


@@ -1279,3 +1279,239 @@ Description: This entry shows the configured size of WriteBooster buffer.
0400h corresponds to 4GB.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_version
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the HPB specification version.
The full information about the descriptor can be found in the UFS
HPB (Host Performance Booster) Extension specification.
Example: version 1.2.3 = 0123h
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_control
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows an indication of the HPB control mode.
00h: Host control mode
01h: Device control mode
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_region_size
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows bHPBRegionSize, from which the HPB region size
is calculated as follows (in bytes):
HPB Region size = 512B * 2^bHPBRegionSize
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_number_lu
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of HPB LUs supported by
the device.
00h: HPB is not supported by the device.
01h ~ 20h: Maximum number of HPB LUs supported by the device
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_subregion_size
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows bHPBSubRegionSize, from which the HPB sub-region
size is calculated as follows (in bytes); the result shall be a multiple of
the logical block size:
HPB Sub-Region size = 512B x 2^bHPBSubRegionSize
bHPBSubRegionSize shall not exceed bHPBRegionSize.
The file is read only.
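Both size formulas above are plain power-of-two shifts. The helper sketch below is illustrative only: the bHPB* field names come from the descriptors above, while the function names are assumptions for the example, not part of the driver.
/* Illustrative decoding of the geometry fields described above. */
#include <stdint.h>
static inline uint64_t hpb_region_bytes(uint8_t bHPBRegionSize)
{
	return 512ULL << bHPBRegionSize;      /* 512B * 2^bHPBRegionSize */
}
static inline uint64_t hpb_subregion_bytes(uint8_t bHPBSubRegionSize)
{
	return 512ULL << bHPBSubRegionSize;   /* 512B * 2^bHPBSubRegionSize */
}
/* Example: bHPBRegionSize = 0x0C gives 512B << 12 = 2 MiB regions. */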
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_max_active_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of active HPB regions
supported by the device.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_lu_max_active_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of HPB regions assigned to
the HPB logical unit.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_pinned_region_start_offset
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the start offset of the HPB pinned region.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_number_pinned_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of HPB pinned regions assigned to
the HPB logical unit.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/hit_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of reads that were changed to HPB reads.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/miss_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of reads that could not be changed to
HPB reads.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rb_noti_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of response UPIUs that have
recommendations for activating sub-regions and/or inactivating regions.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rb_active_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of active sub-regions recommended by
response UPIUs.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rb_inactive_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of inactive regions recommended by
response UPIUs.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/map_req_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of read buffer commands for
activating sub-regions recommended by response UPIUs.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_params/requeue_timeout_ms
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the requeue timeout threshold for write buffer
commands, in ms. This value can be changed by writing a proper integer to
this entry.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/max_data_size_hpb_single_cmd
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum HPB data size for using a single HPB
command.
=== ========
00h 4KB
01h 8KB
02h 12KB
...
FFh 1024KB
=== ========
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/flags/hpb_enable
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the status of HPB.
== ============================
0 HPB is not enabled.
1 HPB is enabled.
== ============================
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode, reads are the major source of activation
trials. Once this threshold has been met, the region is added to the
"to-be-activated" list. Since we reset the read counter upon
write, this includes sending a read buffer (RB) command updating the
region PPNs as well.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode, we think of the regions as "buckets".
Those buckets are filled with reads and emptied on writes.
We use entries_per_srgn - the number of blocks in a sub-region - as
our bucket size. This applies because HPB 1.0 only concerns
single-block reads. Once the bucket size is crossed, we trigger
normalization work - not only to avoid overflow, but mainly
because we want to keep those counters normalized, as we are
using those read counts as a comparative score to make various decisions.
The normalization divides (shifts right) the read counter by
the normalization_factor. If, during consecutive normalizations,
an active region has exhausted its reads, inactivate it.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: Region deactivation is often due to the fact that eviction took
place: a region becomes active at the expense of another. This
happens when the max-active-regions limit has been crossed.
In host mode, eviction is considered an extreme measure. We
want to verify that the entering region has enough reads, and
that the exiting region has far fewer reads. eviction_thld_enter is
the minimum number of reads that a region must have in order to be
considered as a candidate to evict another region.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: Same as above, but for the exiting region. A region is considered
a candidate for eviction only if it has fewer reads than
eviction_thld_exit.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In order not to hang on to "cold" regions, we shall inactivate
a region that has had no READ access for a predefined amount of
time - read_timeout_ms. If read_timeout_ms has expired and the
region is dirty, it is less likely that we can make any use of
HPB-READing it, so we inactivate it. Still, deactivation has
its overhead, and we may still benefit from HPB-READing this
region if it is clean - see read_timeout_expiries.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: If the region's read timeout has expired but the region is clean,
just rewind its timer for another spin. Do that as long as it
is clean and has not exhausted its read_timeout_expiries threshold.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: The frequency at which the delayed worker that checks the
read timeouts is awakened.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode, the host is the originator of map requests.
To avoid flooding the device with map requests, use a simple throttling
mechanism that limits the number of inflight map requests.
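Taken together, the host-control-mode parameters above drive a simple per-region accounting loop. The sketch below models that loop in plain C; the structure and helper names (region_stats, hpb_note_read, and so on) are illustrative assumptions, not the actual ufshpb driver code.
/* Illustrative model of the per-region counters described above. */
#include <stdbool.h>
#include <stdint.h>
struct region_stats {
	uint32_t reads;   /* bucket filled by single-block reads */
	bool active;      /* region currently has host-side L2P entries */
};
/* A read fills the bucket; once activation_thld is met, an inactive
 * region becomes a candidate for the "to-be-activated" list. */
static bool hpb_note_read(struct region_stats *r, uint32_t activation_thld)
{
	r->reads++;
	return !r->active && r->reads >= activation_thld;
}
/* A write dirties the mapping, so the read counter is reset. */
static void hpb_note_write(struct region_stats *r)
{
	r->reads = 0;
}
/* Periodic normalization: shift the counter right by normalization_factor
 * so read counts stay comparable between regions. An active region whose
 * count reaches zero is a candidate for inactivation. */
static bool hpb_normalize(struct region_stats *r, uint32_t normalization_factor)
{
	r->reads >>= normalization_factor;
	return r->active && r->reads == 0;
}
eviction_thld_enter and eviction_thld_exit then gate whether such an activation may evict an already-active region once the max-active-regions limit is reached, while read_timeout_ms and read_timeout_expiries bound how long a cold region stays active.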


@@ -50,10 +50,3 @@ KernelVersion: v5.12
Contact: Hridya Valsaraju <hridya@google.com>
Description: This file is read-only and contains a map_counter indicating the
number of distinct device mappings of the attachment.
What: /sys/kernel/dmabuf/buffers/<inode_number>/mmap_count
Date: January 2021
KernelVersion: v5.10
Contact: Kalesh Singh <kaleshsingh@google.com>
Description: This file is read-only and contains a counter indicating the
number of times the buffer has been mapped with mmap().


@@ -54,6 +54,7 @@ v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgrou
5-3-3. IO Latency
5-3-3-1. How IO Latency Throttling Works
5-3-3-2. IO Latency Interface Files
5-3-4. IO Priority
5-4. PID
5-4-1. PID Interface Files
5-5. Cpuset
@@ -1848,6 +1849,60 @@ IO Latency Interface Files
duration of time between evaluation events. Windows only elapse
with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~
A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute (blkio.prio.class on cgroup v1). The
following values are accepted for that attribute:
no-change
Do not modify the I/O priority class.
none-to-rt
For requests that do not have an I/O priority class (NONE),
change the I/O priority class into RT. Do not modify
the I/O priority class of other requests.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
priority class RT, change it into BE. Do not modify the I/O priority
class of requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| none-to-rt     | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
The numerical value that corresponds to each I/O priority class is as follows:
+-------------------------------+---+
| IOPRIO_CLASS_NONE | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time) | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE | 3 |
+-------------------------------+---+
The algorithm to set the I/O priority class for a request is as follows:
- Translate the I/O priority class policy into a number.
- Change the request I/O priority class into the maximum of the I/O priority
class policy number and the numerical I/O priority class (see the fragment
below).
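This is exactly the computation performed by the new blkcg_ioprio_track() rq-qos hook in block/blk-ioprio.c, added later in this change; condensed:
/* Higher class numbers mean lower priority (except NONE), so max_t() keeps
 * the lower effective priority of the bio's own class and the cgroup policy;
 * a bio with class NONE simply inherits the cgroup class. */
bio->bi_ioprio = max_t(u16, bio->bi_ioprio,
		       IOPRIO_PRIO_VALUE(blkcg->prio_policy, 0));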
PID
---

File diff suppressed because it is too large.


@@ -2,19 +2,25 @@
# commonly used symbols
add_uevent_var
alloc_io_pgtable_ops
__alloc_skb
alloc_workqueue
__arch_copy_from_user
__arch_copy_to_user
arm64_const_caps_ready
arm64_use_ng_mappings
bcmp
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
bpf_trace_run5
bpf_trace_run6
bus_register
bus_unregister
cancel_delayed_work
cancel_delayed_work_sync
cancel_work_sync
capable
@@ -23,6 +29,8 @@
cdev_init
__cfi_slowpath
__check_object_size
__class_register
class_unregister
clk_bulk_disable
clk_bulk_enable
clk_bulk_prepare
@@ -54,7 +62,9 @@
cpumask_next
cpu_number
__cpu_online_mask
__cpu_possible_mask
crc32_le
_ctype
debugfs_create_dir
debugfs_create_file
debugfs_create_u32
@@ -62,8 +72,10 @@
debugfs_remove
default_llseek
delayed_work_timer_fn
del_timer
del_timer_sync
destroy_workqueue
dev_close
dev_coredumpv
dev_driver_string
_dev_err
@@ -108,6 +120,7 @@
devm_pinctrl_register
devm_platform_ioremap_resource
devm_regmap_add_irq_chip
devm_regmap_field_alloc
__devm_regmap_init
__devm_regmap_init_i2c
__devm_regmap_init_mmio_clk
@@ -175,15 +188,20 @@
drm_helper_probe_single_connector_modes
drm_mode_vrefresh
enable_irq
eth_mac_addr
eth_platform_get_mac_address
ethtool_op_get_link
eth_type_trans
eth_validate_addr
event_triggers_call
find_next_bit
find_next_zero_bit
finish_wait
flush_work
flush_workqueue
free_io_pgtable_ops
free_irq
gcd
generic_handle_irq
generic_mii_ioctl
get_device
@@ -232,12 +250,13 @@
idr_alloc_cyclic
idr_destroy
idr_find
idr_for_each
idr_get_next
idr_remove
ieee80211_get_channel_khz
init_net
__init_swait_queue_head
init_timer_key
init_uts_ns
init_wait_entry
__init_waitqueue_head
iomem_resource
@@ -276,6 +295,8 @@
irq_to_desc
is_vmalloc_addr
jiffies
jiffies_to_msecs
jiffies_to_usecs
kasan_flag_enabled
kasprintf
kernel_connect
@@ -284,6 +305,7 @@
kernel_sendmsg
kfree
kfree_const
kfree_sensitive
kfree_skb
__kmalloc
kmalloc_caches
@@ -297,8 +319,12 @@
ktime_get
ktime_get_mono_fast_ns
ktime_get_real_ts64
kvfree
kvfree_call_rcu
kvmalloc_node
__list_add_valid
__list_del_entry_valid
__local_bh_enable_ip
__log_post_read_mmio
__log_read_mmio
__log_write_mmio
@@ -313,10 +339,12 @@
memremap
memset
memstart_addr
memunmap
mii_ethtool_gset
mii_nway_restart
misc_deregister
misc_register
mod_delayed_work_on
mod_timer
module_layout
__msecs_to_jiffies
@@ -333,10 +361,15 @@
netdev_err
netdev_info
netdev_warn
netif_carrier_on
netif_napi_add
__netif_napi_del
__nla_parse
nla_put
no_llseek
nr_cpu_ids
nvmem_cell_get
nvmem_cell_put
nvmem_cell_read
of_address_to_resource
of_alias_get_id
@@ -348,6 +381,7 @@
of_device_is_compatible
of_device_uevent_modalias
of_dma_configure_id
of_find_device_by_node
of_find_property
of_fwnode_ops
of_genpd_add_provider_onecell
@@ -380,8 +414,11 @@
of_property_read_u32_index
of_property_read_variable_u32_array
of_property_read_variable_u8_array
of_prop_next_u32
of_reserved_mem_lookup
param_ops_bool
param_ops_charp
param_ops_int
param_ops_uint
__pci_register_driver
pci_unregister_driver
@@ -419,10 +456,12 @@
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
preempt_schedule
preempt_schedule_notrace
prepare_to_wait_event
printk
pskb_expand_head
__pskb_pull_tail
put_device
__put_task_struct
qcom_smem_state_register
@@ -451,10 +490,13 @@
regcache_cache_only
regcache_mark_dirty
regcache_sync
register_netdevice_notifier
register_reboot_notifier
__register_rpmsg_driver
regmap_bulk_read
regmap_bulk_write
regmap_field_read
regmap_field_update_bits_base
__regmap_init
regmap_irq_get_virq
regmap_multi_reg_write
@@ -480,6 +522,7 @@
reset_control_assert
reset_control_deassert
reset_control_reset
round_jiffies_up
rpmsg_register_device
rpmsg_send
rpmsg_unregister_device
@@ -490,6 +533,9 @@
rproc_del
rproc_free
rproc_remove_subdev
rtnl_is_locked
rtnl_lock
rtnl_unlock
schedule
schedule_timeout
scnprintf
@@ -507,10 +553,14 @@
single_open
single_release
skb_clone
skb_copy
skb_copy_bits
skb_copy_expand
skb_dequeue
skb_pull
skb_push
skb_put
skb_queue_head
skb_queue_purge
skb_queue_tail
skb_trim
@@ -557,16 +607,22 @@
strncpy
strpbrk
strsep
__sw_hweight16
__sw_hweight32
__sw_hweight64
__sw_hweight8
synchronize_irq
synchronize_net
synchronize_rcu
syscon_node_to_regmap
syscon_regmap_lookup_by_phandle
sysfs_create_link
sysfs_remove_link
sysrq_mask
system_power_efficient_wq
system_wq
tasklet_init
tasklet_kill
__tasklet_schedule
thermal_cooling_device_unregister
trace_event_buffer_commit
@@ -597,6 +653,7 @@
uart_write_wakeup
__udelay
unregister_chrdev_region
unregister_netdevice_notifier
unregister_reboot_notifier
unregister_rpmsg_driver
usb_deregister
@@ -623,6 +680,7 @@
usbnet_write_cmd_async
usbnet_write_cmd_nopm
usb_register_driver
__usecs_to_jiffies
usleep_range
vabits_actual
vfree
@@ -657,12 +715,10 @@
iommu_group_ref_get
iommu_put_dma_cookie
of_dma_is_coherent
param_ops_int
pci_bus_type
pci_device_group
# required by asix.ko
eth_mac_addr
genphy_resume
mdiobus_alloc_size
mdiobus_free
@@ -679,7 +735,6 @@
phy_print_status
phy_start
phy_stop
skb_copy_expand
usbnet_change_mtu
usbnet_get_drvinfo
usbnet_get_link
@@ -687,98 +742,23 @@
usbnet_set_link_ksettings
usbnet_unlink_rx_urbs
# required by ath.ko
freq_reg_info
reg_initiator_name
wiphy_apply_custom_regulatory
# required by ath10k_core.ko
bcmp
cancel_delayed_work
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
cfg80211_calculate_bitrate
cfg80211_find_elem_match
cfg80211_find_vendor_elem
cfg80211_get_bss
cfg80211_put_bss
__cfg80211_send_event_skb
cfg80211_vendor_cmd_reply
cpu_latency_qos_add_request
cpu_latency_qos_remove_request
device_get_mac_address
device_set_wakeup_enable
firmware_request_nowarn
guid_gen
idr_for_each
ieee80211_alloc_hw_nm
ieee80211_beacon_cntdwn_is_complete
ieee80211_beacon_get_template
ieee80211_beacon_get_tim
ieee80211_beacon_loss
ieee80211_beacon_update_cntdwn
ieee80211_bss_get_elem
ieee80211_channel_to_freq_khz
ieee80211_connection_loss
ieee80211_csa_finish
ieee80211_find_sta
ieee80211_find_sta_by_ifaddr
ieee80211_free_hw
ieee80211_free_txskb
ieee80211_hdrlen
ieee80211_iterate_active_interfaces_atomic
ieee80211_iterate_stations_atomic
ieee80211_iter_chan_contexts_atomic
ieee80211_manage_rx_ba_offl
ieee80211_next_txq
ieee80211_proberesp_get
ieee80211_queue_delayed_work
ieee80211_queue_work
ieee80211_radar_detected
ieee80211_ready_on_channel
ieee80211_register_hw
ieee80211_remain_on_channel_expired
ieee80211_report_low_ack
ieee80211_restart_hw
ieee80211_rx_napi
ieee80211_scan_completed
__ieee80211_schedule_txq
ieee80211_sta_register_airtime
ieee80211_stop_queue
ieee80211_stop_queues
ieee80211_tdls_oper_request
ieee80211_tx_dequeue
ieee80211_txq_get_depth
ieee80211_txq_may_transmit
ieee80211_txq_schedule_start
ieee80211_tx_rate_update
ieee80211_tx_status
ieee80211_tx_status_irqsafe
ieee80211_unregister_hw
ieee80211_wake_queue
ieee80211_wake_queues
init_dummy_netdev
init_uts_ns
__kfifo_alloc
__kfifo_free
__local_bh_enable_ip
__nla_parse
nla_put
param_ops_ulong
regulatory_hint
skb_copy
skb_dequeue_tail
skb_queue_head
skb_realloc_headroom
strlcat
strscpy
__sw_hweight16
__sw_hweight8
thermal_cooling_device_register
vzalloc
wiphy_read_of_freq_limits
wiphy_rfkill_set_hw_state
wiphy_to_ieee80211_hw
# required by ath10k_pci.ko
pci_clear_master
@@ -801,11 +781,9 @@
iommu_map
# required by ax88179_178a.ko
ethtool_op_get_link
ethtool_op_get_ts_info
mii_ethtool_get_link_ksettings
mii_ethtool_set_link_ksettings
netif_carrier_on
# required by bam_dma.ko
dma_async_device_register
@@ -815,7 +793,6 @@
of_dma_controller_free
of_dma_controller_register
pm_runtime_irq_safe
tasklet_kill
tasklet_setup
vchan_dma_desc_free_list
vchan_find_desc
@@ -823,6 +800,59 @@
vchan_tx_desc_free
vchan_tx_submit
# required by cfg80211.ko
bpf_trace_run10
bpf_trace_run7
debugfs_rename
dev_change_net_namespace
__dev_get_by_index
dev_get_by_index
device_add
device_del
device_rename
genlmsg_multicast_allns
genlmsg_put
genl_register_family
genl_unregister_family
get_net_ns_by_fd
get_net_ns_by_pid
inet_csk_get_port
key_create_or_update
key_put
keyring_alloc
ktime_get_coarse_with_offset
memcmp
netif_rx_ni
netlink_broadcast
netlink_register_notifier
netlink_unicast
netlink_unregister_notifier
net_ns_type_operations
nla_find
nla_memcpy
nla_put_64bit
nla_reserve
__nla_validate
__put_net
rb_erase
rb_insert_color
register_pernet_device
request_firmware_nowait
rfkill_alloc
rfkill_blocked
rfkill_destroy
rfkill_pause_polling
rfkill_register
rfkill_resume_polling
rfkill_set_hw_state
rfkill_unregister
skb_add_rx_frag
__sock_create
trace_print_array_seq
unregister_pernet_device
verify_pkcs7_signature
wireless_nlevent_flush
# required by clk-qcom.ko
__clk_determine_rate
clk_fixed_factor_ops
@@ -836,7 +866,6 @@
__clk_mux_determine_rate_closest
divider_ro_round_rate_parent
of_find_node_opts_by_path
of_prop_next_u32
pm_genpd_remove_subdomain
# required by clk-rpmh.ko
@@ -858,7 +887,6 @@
gpiod_get_value_cansleep
gpiod_set_debounce
gpiod_to_irq
system_power_efficient_wq
# required by fastrpc.ko
dma_buf_attach
@@ -925,9 +953,6 @@
i2c_put_dma_safe_msg_buf
of_machine_is_compatible
# required by i2c-qup.ko
__usecs_to_jiffies
# required by i2c-rk3x.ko
clk_notifier_register
clk_notifier_unregister
@@ -953,12 +978,88 @@
mipi_dsi_device_unregister
of_find_mipi_dsi_host_by_node
# required by mac80211.ko
alloc_netdev_mqs
__alloc_percpu_gfp
arc4_crypt
arc4_setkey
call_rcu
crc32_be
crypto_aead_decrypt
crypto_aead_encrypt
crypto_aead_setauthsize
crypto_aead_setkey
crypto_alloc_aead
crypto_alloc_shash
crypto_alloc_skcipher
crypto_destroy_tfm
__crypto_memneq
crypto_shash_digest
crypto_shash_finup
crypto_shash_setkey
crypto_shash_update
crypto_skcipher_decrypt
crypto_skcipher_encrypt
crypto_skcipher_setkey
__crypto_xor
dev_alloc_name
dev_fetch_sw_netstats
dev_printk
dev_queue_xmit
ether_setup
flush_delayed_work
free_netdev
free_percpu
get_random_u32
__hw_addr_init
__hw_addr_sync
__hw_addr_unsync
kernel_param_lock
kernel_param_unlock
kfree_skb_list
ktime_get_seconds
ktime_get_with_offset
napi_gro_receive
netdev_set_default_ethtool_ops
netif_carrier_off
netif_receive_skb
netif_receive_skb_list
netif_rx
netif_tx_stop_all_queues
netif_tx_wake_queue
net_ratelimit
__per_cpu_offset
prandom_bytes
prandom_u32
___pskb_trim
rcu_barrier
register_inet6addr_notifier
register_inetaddr_notifier
register_netdevice
rhashtable_free_and_destroy
rhashtable_insert_slow
rhltable_init
__rht_bucket_nested
rht_bucket_nested
rht_bucket_nested_insert
round_jiffies
round_jiffies_relative
sg_init_one
skb_checksum_help
skb_clone_sk
skb_complete_wifi_ack
skb_ensure_writable
__skb_get_hash
__skb_gso_segment
system_freezable_wq
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_netdevice_many
unregister_netdevice_queue
# required by msm.ko
__bitmap_andnot
__bitmap_weight
bpf_trace_run1
bpf_trace_run2
bpf_trace_run6
bpf_trace_run8
clk_get_parent
__clk_hw_register_divider
@@ -975,10 +1076,8 @@
component_master_add_with_match
component_master_del
component_unbind_all
_ctype
debugfs_create_bool
debugfs_create_u64
del_timer
dev_coredumpm
devfreq_recommended_opp
devfreq_resume_device
@@ -1198,12 +1297,9 @@
kthread_create_worker
kthread_destroy_worker
kthread_queue_work
kvfree
kvmalloc_node
llist_add_batch
memdup_user_nul
memparse
memunmap
mipi_dsi_create_packet
mipi_dsi_host_register
mipi_dsi_host_unregister
@@ -1211,20 +1307,16 @@
mutex_lock_interruptible
mutex_trylock_recursive
nsecs_to_jiffies
nvmem_cell_get
nvmem_cell_put
of_clk_hw_onecell_get
of_device_is_available
of_drm_find_bridge
of_drm_find_panel
of_find_device_by_node
of_find_matching_node_and_match
of_get_compatible_child
of_graph_get_endpoint_by_regs
of_graph_get_next_endpoint
of_graph_get_remote_port_parent
of_icc_get
param_ops_charp
phy_calibrate
phy_configure
pid_task
@@ -1240,7 +1332,6 @@
regulator_get
regulator_put
reservation_ww_class
round_jiffies_up
sched_set_fifo
schedule_timeout_interruptible
__sg_page_iter_dma_next
@@ -1280,7 +1371,6 @@
dma_pool_create
dma_pool_destroy
dma_pool_free
flush_work
free_pages
gen_pool_dma_alloc_align
gen_pool_dma_zalloc_align
@@ -1391,7 +1481,6 @@
cpufreq_get_driver_data
cpufreq_register_driver
cpufreq_unregister_driver
__cpu_possible_mask
dev_pm_opp_adjust_voltage
dev_pm_opp_disable
dev_pm_opp_enable
@@ -1450,9 +1539,6 @@
# required by qcom_hwspinlock.ko
devm_hwspin_lock_register
devm_regmap_field_alloc
regmap_field_read
regmap_field_update_bits_base
# required by qcom_pil_info.ko
__memset_io
@@ -1476,7 +1562,6 @@
__num_online_cpus
# required by qcom_spmi-regulator.ko
jiffies_to_usecs
regulator_disable_regmap
regulator_enable_regmap
regulator_is_enabled_regmap
@@ -1486,14 +1571,16 @@
rproc_get_by_child
try_wait_for_completion
# required by qrtr-smd.ko
__pskb_pull_tail
# required by qcom_tsens.ko
debugfs_lookup
devm_thermal_zone_of_sensor_register
thermal_zone_device_update
thermal_zone_get_slope
# required by qrtr-tun.ko
_copy_to_iter
# required by qrtr.ko
__alloc_skb
autoremove_wake_function
datagram_poll
do_wait_intr_irq
@@ -1507,7 +1594,6 @@
refcount_dec_and_mutex_lock
release_sock
sk_alloc
skb_copy_bits
skb_copy_datagram_iter
skb_free_datagram
__skb_pad
@@ -1526,7 +1612,6 @@
sock_queue_rcv_skb
sock_register
sock_unregister
synchronize_rcu
# required by reboot-mode.ko
devres_add
@@ -1545,8 +1630,6 @@
# required by rmtfs_mem.ko
alloc_chrdev_region
__class_register
class_unregister
# required by rtc-pm8xxx.ko
devm_request_any_context_irq
@@ -1605,9 +1688,6 @@
snd_soc_of_parse_aux_devs
snd_soc_of_parse_card_name
# required by snd-soc-rl6231.ko
gcd
# required by snd-soc-rt5663.ko
regcache_cache_bypass
snd_soc_add_component_controls
@@ -1666,7 +1746,6 @@
spi_delay_exec
spi_finalize_current_message
spi_get_next_queued_message
tasklet_init
# required by spmi-pmic-arb.ko
irq_domain_set_info
@@ -1686,7 +1765,6 @@
dma_sync_sg_for_cpu
dma_sync_sg_for_device
__free_pages
preempt_schedule
__sg_page_iter_next
# required by ufs_qcom.ko


@@ -115,35 +115,6 @@
cdev_device_add
cdev_device_del
cdev_init
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
cfg80211_chandef_create
cfg80211_ch_switch_notify
cfg80211_connect_done
cfg80211_del_sta_sinfo
cfg80211_disconnected
cfg80211_external_auth_request
cfg80211_find_elem_match
cfg80211_get_bss
cfg80211_ibss_joined
cfg80211_inform_bss_frame_data
cfg80211_mgmt_tx_status
cfg80211_michael_mic_failure
cfg80211_new_sta
cfg80211_port_authorized
cfg80211_put_bss
cfg80211_ready_on_channel
cfg80211_remain_on_channel_expired
cfg80211_roamed
cfg80211_rx_mgmt_khz
cfg80211_scan_done
cfg80211_sched_scan_results
cfg80211_sched_scan_stopped
cfg80211_sched_scan_stopped_rtnl
__cfg80211_send_event_skb
cfg80211_unlink_bss
cfg80211_unregister_wdev
cfg80211_vendor_cmd_reply
__cfi_slowpath
__check_object_size
__class_create
@@ -887,9 +858,6 @@
idr_for_each
idr_preload
idr_remove
ieee80211_channel_to_freq_khz
ieee80211_freq_khz_to_channel
ieee80211_get_channel_khz
iio_device_unregister
import_iovec
in6_pton
@@ -1503,7 +1471,6 @@
regulator_set_voltage
regulator_set_voltage_sel_regmap
regulator_unregister
regulatory_hint
release_firmware
__release_region
remap_pfn_range
@@ -1871,6 +1838,7 @@
__traceiter_android_vh_of_i2c_get_board_info
__traceiter_android_vh_pagecache_get_page
__traceiter_android_vh_rmqueue
__traceiter_android_vh_setscheduler_uclamp
__traceiter_android_vh_thermal_pm_notify_suspend
__traceiter_android_vh_timerfd_create
__traceiter_android_vh_typec_store_partner_src_caps
@@ -1940,6 +1908,7 @@
__tracepoint_android_vh_of_i2c_get_board_info
__tracepoint_android_vh_pagecache_get_page
__tracepoint_android_vh_rmqueue
__tracepoint_android_vh_setscheduler_uclamp
__tracepoint_android_vh_thermal_pm_notify_suspend
__tracepoint_android_vh_timerfd_create
__tracepoint_android_vh_typec_store_partner_src_caps
@@ -2193,11 +2162,6 @@
watchdog_register_device
watchdog_set_restart_priority
watchdog_unregister_device
wiphy_apply_custom_regulatory
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
woken_wake_function
work_busy
__xfrm_state_destroy


@@ -166,12 +166,6 @@
ida_alloc_range
ida_destroy
ida_free
ieee80211_channel_to_freq_khz
ieee80211_connection_loss
ieee80211_find_sta
ieee80211_get_hdrlen_from_skb
ieee80211_queue_delayed_work
ieee80211_stop_rx_ba_session
__init_swait_queue_head
init_timer_key
init_wait_entry
@@ -1225,17 +1219,10 @@
tcpci_unregister_port
# required by wl18xx.ko
__cfg80211_alloc_event_skb
__cfg80211_send_event_skb
ieee80211_radar_detected
kstrtou8_from_user
# required by wlcore.ko
bcmp
__cfg80211_alloc_reply_skb
cfg80211_find_elem_match
cfg80211_find_vendor_elem
cfg80211_vendor_cmd_reply
complete_all
consume_skb
device_create_bin_file
@@ -1244,40 +1231,6 @@
dev_pm_set_dedicated_wake_irq
disable_irq_nosync
get_random_u32
ieee80211_alloc_hw_nm
ieee80211_ap_probereq_get
ieee80211_beacon_get_tim
ieee80211_chswitch_done
ieee80211_cqm_beacon_loss_notify
ieee80211_cqm_rssi_notify
ieee80211_csa_finish
ieee80211_free_hw
ieee80211_free_txskb
ieee80211_freq_khz_to_channel
ieee80211_hdrlen
ieee80211_iterate_active_interfaces_atomic
ieee80211_iterate_interfaces
ieee80211_nullfunc_get
ieee80211_probereq_get
ieee80211_proberesp_get
ieee80211_pspoll_get
ieee80211_queue_work
ieee80211_ready_on_channel
ieee80211_register_hw
ieee80211_remain_on_channel_expired
ieee80211_report_low_ack
ieee80211_restart_hw
ieee80211_rx_napi
ieee80211_scan_completed
ieee80211_sched_scan_results
ieee80211_sched_scan_stopped
ieee80211_sta_ps_transition
ieee80211_stop_queue
ieee80211_stop_queues
ieee80211_tx_status
ieee80211_unregister_hw
ieee80211_wake_queue
ieee80211_wake_queues
jiffies_to_msecs
jiffies_to_usecs
__local_bh_enable_ip
@@ -1286,14 +1239,12 @@
no_seek_end_llseek
_raw_spin_trylock
request_firmware_nowait
rfc1042_header
skb_dequeue
skb_push
skb_put
skb_queue_head
skb_trim
vscnprintf
wiphy_to_ieee80211_hw
# required by wlcore_sdio.ko
platform_device_add


@@ -92,38 +92,6 @@
cdev_device_add
cdev_device_del
cdev_init
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
cfg80211_cac_event
cfg80211_chandef_create
cfg80211_ch_switch_notify
cfg80211_classify8021d
cfg80211_connect_done
cfg80211_del_sta_sinfo
cfg80211_disconnected
cfg80211_external_auth_request
cfg80211_find_elem_match
cfg80211_ft_event
cfg80211_get_bss
cfg80211_inform_bss_data
cfg80211_inform_bss_frame_data
cfg80211_mgmt_tx_status
cfg80211_michael_mic_failure
cfg80211_new_sta
cfg80211_pmksa_candidate_notify
cfg80211_put_bss
cfg80211_radar_event
cfg80211_ready_on_channel
cfg80211_remain_on_channel_expired
cfg80211_roamed
cfg80211_rx_mgmt_khz
cfg80211_scan_done
cfg80211_sched_scan_results
cfg80211_sched_scan_stopped
__cfg80211_send_event_skb
cfg80211_tdls_oper_request
cfg80211_unlink_bss
cfg80211_vendor_cmd_reply
__cfi_slowpath
__check_object_size
check_preempt_curr
@@ -829,9 +797,6 @@
idr_for_each
idr_get_next
idr_remove
ieee80211_channel_to_freq_khz
ieee80211_freq_khz_to_channel
ieee80211_get_channel_khz
iio_alloc_pollfunc
iio_buffer_init
iio_buffer_put
@@ -1482,7 +1447,6 @@
regulator_set_voltage_time
regulator_set_voltage_time_sel
regulator_sync_voltage
regulatory_hint
release_firmware
release_pages
__release_region
@@ -2179,11 +2143,6 @@
__warn_printk
watchdog_init_timeout
watchdog_set_restart_priority
wiphy_apply_custom_regulatory
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
wireless_send_event
woken_wake_function
work_busy


@@ -121,37 +121,6 @@
cdev_device_add
cdev_device_del
cdev_init
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
cfg80211_calculate_bitrate
cfg80211_chandef_create
cfg80211_ch_switch_notify
cfg80211_connect_done
cfg80211_del_sta_sinfo
cfg80211_disconnected
cfg80211_external_auth_request
cfg80211_ft_event
cfg80211_get_bss
cfg80211_gtk_rekey_notify
cfg80211_inform_bss_frame_data
cfg80211_mgmt_tx_status
cfg80211_michael_mic_failure
cfg80211_new_sta
cfg80211_pmksa_candidate_notify
cfg80211_put_bss
cfg80211_ready_on_channel
cfg80211_remain_on_channel_expired
cfg80211_roamed
cfg80211_rx_mgmt_khz
cfg80211_rx_unprot_mlme_mgmt
cfg80211_scan_done
cfg80211_sched_scan_results
__cfg80211_send_event_skb
cfg80211_stop_iface
cfg80211_tdls_oper_request
cfg80211_unlink_bss
cfg80211_update_owe_info_event
cfg80211_vendor_cmd_reply
__cfi_slowpath
cgroup_path_ns
cgroup_taskset_first
@@ -1084,9 +1053,6 @@
idr_preload
idr_remove
idr_replace
ieee80211_freq_khz_to_channel
ieee80211_get_channel_khz
ieee80211_hdrlen
iio_channel_get_all
iio_read_channel_processed
import_iovec
@@ -1960,7 +1926,6 @@
regulator_set_mode
regulator_set_voltage
regulator_unregister_notifier
regulatory_set_wiphy_regd
release_firmware
__release_region
release_sock
@@ -2015,8 +1980,10 @@
rproc_add_subdev
rproc_alloc
rproc_boot
rproc_coredump
rproc_coredump_add_custom_segment
rproc_coredump_add_segment
rproc_coredump_cleanup
rproc_coredump_set_elf_info
rproc_coredump_using_sections
rproc_del
@@ -2480,8 +2447,6 @@
__traceiter_android_vh_cpuidle_psci_enter
__traceiter_android_vh_cpuidle_psci_exit
__traceiter_android_vh_dump_throttled_rt_tasks
__traceiter_android_vh_force_compatible_post
__traceiter_android_vh_force_compatible_pre
__traceiter_android_vh_freq_table_limits
__traceiter_android_vh_ftrace_dump_buffer
__traceiter_android_vh_ftrace_format_check
@@ -2496,6 +2461,7 @@
__traceiter_android_vh_logbuf
__traceiter_android_vh_logbuf_pr_cont
__traceiter_android_vh_printk_hotplug
__traceiter_android_vh_rproc_recovery
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_show_max_freq
__traceiter_android_vh_show_resume_epoch_val
@@ -2589,8 +2555,6 @@
__tracepoint_android_vh_cpuidle_psci_enter
__tracepoint_android_vh_cpuidle_psci_exit
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_force_compatible_post
__tracepoint_android_vh_force_compatible_pre
__tracepoint_android_vh_freq_table_limits
__tracepoint_android_vh_ftrace_dump_buffer
__tracepoint_android_vh_ftrace_format_check
@@ -2604,10 +2568,12 @@
__tracepoint_android_vh_jiffies_update
__tracepoint_android_vh_logbuf
__tracepoint_android_vh_logbuf_pr_cont
__tracepoint_android_vh_oom_check_panic
__tracepoint_android_vh_printk_hotplug
__tracepoint_android_vh_process_killed
__tracepoint_android_vh_psi_event
__tracepoint_android_vh_psi_group
__tracepoint_android_vh_rproc_recovery
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_show_max_freq
__tracepoint_android_vh_show_resume_epoch_val
@@ -2922,10 +2888,6 @@
wakeup_source_register
wakeup_source_unregister
__warn_printk
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
wireless_send_event
woken_wake_function
work_busy


@@ -317,9 +317,6 @@
idr_find
idr_for_each
idr_remove
ieee80211_channel_to_freq_khz
ieee80211_freq_khz_to_channel
ieee80211_get_channel_khz
iget_failed
iget5_locked
ignore_console_lock_warning
@@ -1956,40 +1953,11 @@
# required by sprdwl_ng.ko
bcmp
build_skb
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
cfg80211_chandef_create
cfg80211_ch_switch_notify
cfg80211_connect_done
cfg80211_cqm_rssi_notify
cfg80211_del_sta_sinfo
cfg80211_disconnected
cfg80211_find_elem_match
cfg80211_get_bss
cfg80211_ibss_joined
cfg80211_inform_bss_data
cfg80211_mgmt_tx_status
cfg80211_michael_mic_failure
cfg80211_new_sta
cfg80211_put_bss
cfg80211_ready_on_channel
cfg80211_remain_on_channel_expired
cfg80211_roamed
cfg80211_rx_mgmt
cfg80211_rx_unprot_mlme_mgmt
cfg80211_scan_done
cfg80211_sched_scan_results
__cfg80211_send_event_skb
cfg80211_tdls_oper_request
cfg80211_unlink_bss
cfg80211_unregister_wdev
cfg80211_vendor_cmd_reply
console_printk
consume_skb
_ctype
dev_get_by_index
down_timeout
freq_reg_info
genlmsg_put
jiffies_to_usecs
kfree_skb_list
@@ -2007,7 +1975,6 @@
register_inet6addr_notifier
register_inetaddr_notifier
register_netdevice
regulatory_hint
rtnl_lock
rtnl_unlock
simple_open
@@ -2017,10 +1984,6 @@
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_netdevice_queue
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
# required by sunwave_fp.ko
input_unregister_device


@@ -34,8 +34,6 @@
cancel_delayed_work_sync
cancel_work_sync
capable
cfg80211_inform_bss_data
cfg80211_put_bss
__cfi_slowpath
__check_object_size
__class_create
@@ -565,10 +563,6 @@
# required by mac80211_hwsim.ko
alloc_netdev_mqs
__cfg80211_alloc_event_skb
__cfg80211_alloc_reply_skb
__cfg80211_send_event_skb
cfg80211_vendor_cmd_reply
debugfs_attr_read
debugfs_attr_write
dev_alloc_name
@@ -583,28 +577,6 @@
hrtimer_forward
hrtimer_init
hrtimer_start_range_ns
ieee80211_alloc_hw_nm
ieee80211_beacon_cntdwn_is_complete
ieee80211_beacon_get_tim
ieee80211_csa_finish
ieee80211_free_hw
ieee80211_free_txskb
ieee80211_get_buffered_bc
ieee80211_get_tx_rates
ieee80211_iterate_active_interfaces_atomic
ieee80211_probereq_get
ieee80211_queue_delayed_work
ieee80211_radar_detected
ieee80211_ready_on_channel
ieee80211_register_hw
ieee80211_remain_on_channel_expired
ieee80211_rx_irqsafe
ieee80211_scan_completed
ieee80211_stop_queues
ieee80211_stop_tx_ba_cb_irqsafe
ieee80211_tx_status_irqsafe
ieee80211_unregister_hw
ieee80211_wake_queues
init_net
__netdev_alloc_skb
netif_rx
@@ -619,7 +591,6 @@
nla_put
param_ops_ushort
register_pernet_device
regulatory_hint
rhashtable_destroy
rhashtable_init
rhashtable_insert_slow
@@ -635,7 +606,6 @@
skb_trim
skb_unlink
unregister_pernet_device
wiphy_apply_custom_regulatory
# required by md-mod.ko
ack_all_badblocks
@@ -940,9 +910,6 @@
devm_mfd_add_devices
# required by virt_wifi.ko
cfg80211_connect_done
cfg80211_disconnected
cfg80211_scan_done
__dev_get_by_index
dev_printk
__module_get
@@ -952,13 +919,8 @@
rtnl_link_unregister
skb_clone
unregister_netdevice_many
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
# required by virt_wifi_sim.ko
ieee80211_get_channel_khz
release_firmware
request_firmware
@@ -1325,3 +1287,28 @@
_raw_read_unlock
_raw_write_lock
_raw_write_unlock
# required by gs_usb.ko
usb_kill_anchored_urbs
alloc_candev_mqs
register_candev
free_candev
can_change_mtu
open_candev
usb_anchor_urb
usb_unanchor_urb
alloc_can_skb
can_get_echo_skb
alloc_can_err_skb
close_candev
can_put_echo_skb
can_free_echo_skb
unregister_candev
# required by vcan.ko
sock_efree
# required by slcan.ko
tty_mode_ioctl
tty_hangup
hex_asc_upper


@@ -1,3 +1,8 @@
CONFIG_CFG80211=m
CONFIG_NL80211_TESTMODE=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_CRDA_SUPPORT is not set
CONFIG_MAC80211=m
CONFIG_QRTR=m
CONFIG_QRTR_TUN=m
CONFIG_SCSI_UFS_QCOM=m


@@ -97,6 +97,7 @@ CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SCMVERSION=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y
@@ -257,6 +258,7 @@ CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_VSOCKETS=y
CONFIG_BPF_JIT=y
CONFIG_CAN=y
CONFIG_BT=y
CONFIG_BT_RFCOMM=y
CONFIG_BT_RFCOMM_TTY=y
@@ -266,11 +268,6 @@ CONFIG_BT_HCIUART=y
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIUART_BCM=y
CONFIG_BT_HCIUART_QCA=y
CONFIG_CFG80211=y
CONFIG_NL80211_TESTMODE=y
CONFIG_CFG80211_CERTIFICATION_ONUS=y
CONFIG_CFG80211_REG_CELLULAR_HINTS=y
CONFIG_MAC80211=y
CONFIG_RFKILL=y
CONFIG_PCI=y
CONFIG_PCIEPORTBUS=y


@@ -50,3 +50,8 @@ CONFIG_PHY_HI3660_USB=m
CONFIG_PINCTRL_SINGLE=m
CONFIG_DMABUF_HEAPS_CMA=m
CONFIG_DMABUF_HEAPS_SYSTEM=m
CONFIG_CFG80211=m
CONFIG_NL80211_TESTMODE=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_CRDA_SUPPORT is not set
CONFIG_MAC80211=m


@@ -73,6 +73,7 @@ CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SCMVERSION=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y
@@ -233,6 +234,7 @@ CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_VSOCKETS=y
CONFIG_BPF_JIT=y
CONFIG_CAN=y
CONFIG_BT=y
CONFIG_BT_RFCOMM=y
CONFIG_BT_RFCOMM_TTY=y
@@ -242,13 +244,6 @@ CONFIG_BT_HCIUART=y
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIUART_BCM=y
CONFIG_BT_HCIUART_QCA=y
CONFIG_CFG80211=y
CONFIG_NL80211_TESTMODE=y
CONFIG_CFG80211_CERTIFICATION_ONUS=y
CONFIG_CFG80211_REG_CELLULAR_HINTS=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_CRDA_SUPPORT is not set
CONFIG_MAC80211=y
CONFIG_RFKILL=y
CONFIG_PCI=y
CONFIG_PCIEPORTBUS=y


@@ -133,6 +133,13 @@ config BLK_WBT
dynamically on an algorithm loosely based on CoDel, factoring in
the realtime performance of the disk.
config BLK_WBT_MQ
bool "Enable writeback throttling by default"
default y
depends on BLK_WBT
help
Enable writeback throttling by default for request-based block devices.
config BLK_CGROUP_IOLATENCY
bool "Enable support for latency based cgroup IO protection"
depends on BLK_CGROUP=y
@@ -155,12 +162,14 @@ config BLK_CGROUP_IOCOST
distributes IO capacity between different groups based on
their share of the overall weight distribution.
config BLK_WBT_MQ
bool "Multiqueue writeback throttling"
default y
depends on BLK_WBT
help
Enable writeback throttling by default on multiqueue devices.
config BLK_CGROUP_IOPRIO
bool "Cgroup I/O controller for assigning an I/O priority class"
depends on BLK_CGROUP
help
Enable the .prio interface for assigning an I/O priority class to
requests. The I/O priority class affects the order in which an I/O
scheduler and block devices process requests. Only some I/O schedulers
and some block devices support I/O priorities.
config BLK_DEBUG_FS
bool "Block layer debugging information in debugfs"


@@ -9,6 +9,12 @@ config MQ_IOSCHED_DEADLINE
help
MQ version of the deadline IO scheduler.
config MQ_IOSCHED_DEADLINE_CGROUP
tristate
default y
depends on MQ_IOSCHED_DEADLINE
depends on BLK_CGROUP
config MQ_IOSCHED_KYBER
tristate "Kyber I/O scheduler"
default y


@@ -17,9 +17,12 @@ obj-$(CONFIG_BLK_DEV_BSGLIB) += bsg-lib.o
obj-$(CONFIG_BLK_CGROUP) += blk-cgroup.o
obj-$(CONFIG_BLK_CGROUP_RWSTAT) += blk-cgroup-rwstat.o
obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
obj-$(CONFIG_BLK_CGROUP_IOPRIO) += blk-ioprio.o
obj-$(CONFIG_BLK_CGROUP_IOLATENCY) += blk-iolatency.o
obj-$(CONFIG_BLK_CGROUP_IOCOST) += blk-iocost.o
obj-$(CONFIG_MQ_IOSCHED_DEADLINE) += mq-deadline.o
mq-deadline-y += mq-deadline-main.o
mq-deadline-$(CONFIG_MQ_IOSCHED_DEADLINE_CGROUP)+= mq-deadline-cgroup.o
obj-$(CONFIG_MQ_IOSCHED_KYBER) += kyber-iosched.o
bfq-y := bfq-iosched.o bfq-wf2q.o bfq-cgroup.o
obj-$(CONFIG_IOSCHED_BFQ) += bfq.o


@@ -4640,9 +4640,6 @@ static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
{
struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
if (!atomic_read(&hctx->elevator_queued))
return false;
/*
* Avoiding lock: a race on bfqd->busy_queues should cause at
* most a call to dispatch for nothing
@@ -5557,7 +5554,6 @@ static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
bfq_insert_request(hctx, rq, at_head);
atomic_inc(&hctx->elevator_queued);
}
}
@@ -5925,7 +5921,6 @@ static void bfq_finish_requeue_request(struct request *rq)
bfq_completed_request(bfqq, bfqd);
bfq_finish_requeue_request_body(bfqq);
atomic_dec(&rq->mq_hctx->elevator_queued);
spin_unlock_irqrestore(&bfqd->lock, flags);
} else {


@@ -31,6 +31,7 @@
#include <linux/tracehook.h>
#include <linux/psi.h>
#include "blk.h"
#include "blk-ioprio.h"
#define MAX_KEY_LEN 100
@@ -1181,15 +1182,18 @@ int blkcg_init_queue(struct request_queue *q)
if (preloaded)
radix_tree_preload_end();
ret = blk_iolatency_init(q);
if (ret)
goto err_destroy_all;
ret = blk_ioprio_init(q);
if (ret)
goto err_destroy_all;
ret = blk_throtl_init(q);
if (ret)
goto err_destroy_all;
ret = blk_iolatency_init(q);
if (ret) {
blk_throtl_exit(q);
goto err_destroy_all;
}
return 0;
err_destroy_all:

block/blk-ioprio.c (new file, 262 lines)

@@ -0,0 +1,262 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Block rq-qos policy for assigning an I/O priority class to requests.
*
* Using an rq-qos policy for assigning I/O priority class has two advantages
* over using the ioprio_set() system call:
*
* - This policy is cgroup based so it has all the advantages of cgroups.
* - While ioprio_set() does not affect page cache writeback I/O, this rq-qos
* controller affects page cache writeback I/O for filesystems that support
* associating a cgroup with writeback I/O. See also
* Documentation/admin-guide/cgroup-v2.rst.
*/
#include <linux/blk-cgroup.h>
#include <linux/blk-mq.h>
#include <linux/blk_types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "blk-ioprio.h"
#include "blk-rq-qos.h"
/**
* enum prio_policy - I/O priority class policy.
* @POLICY_NO_CHANGE: (default) do not modify the I/O priority class.
* @POLICY_NONE_TO_RT: modify IOPRIO_CLASS_NONE into IOPRIO_CLASS_RT.
* @POLICY_RESTRICT_TO_BE: modify IOPRIO_CLASS_NONE and IOPRIO_CLASS_RT into
* IOPRIO_CLASS_BE.
* @POLICY_ALL_TO_IDLE: change the I/O priority class into IOPRIO_CLASS_IDLE.
*
* See also <linux/ioprio.h>.
*/
enum prio_policy {
POLICY_NO_CHANGE = 0,
POLICY_NONE_TO_RT = 1,
POLICY_RESTRICT_TO_BE = 2,
POLICY_ALL_TO_IDLE = 3,
};
static const char *policy_name[] = {
[POLICY_NO_CHANGE] = "no-change",
[POLICY_NONE_TO_RT] = "none-to-rt",
[POLICY_RESTRICT_TO_BE] = "restrict-to-be",
[POLICY_ALL_TO_IDLE] = "idle",
};
static struct blkcg_policy ioprio_policy;
/**
* struct ioprio_blkg - Per (cgroup, request queue) data.
* @pd: blkg_policy_data structure.
*/
struct ioprio_blkg {
struct blkg_policy_data pd;
};
/**
* struct ioprio_blkcg - Per cgroup data.
* @cpd: blkcg_policy_data structure.
* @prio_policy: One of the enum prio_policy values (POLICY_*) defined above.
*/
struct ioprio_blkcg {
struct blkcg_policy_data cpd;
enum prio_policy prio_policy;
};
static inline struct ioprio_blkg *pd_to_ioprio(struct blkg_policy_data *pd)
{
return pd ? container_of(pd, struct ioprio_blkg, pd) : NULL;
}
static struct ioprio_blkcg *blkcg_to_ioprio_blkcg(struct blkcg *blkcg)
{
return container_of(blkcg_to_cpd(blkcg, &ioprio_policy),
struct ioprio_blkcg, cpd);
}
static struct ioprio_blkcg *
ioprio_blkcg_from_css(struct cgroup_subsys_state *css)
{
return blkcg_to_ioprio_blkcg(css_to_blkcg(css));
}
static struct ioprio_blkcg *ioprio_blkcg_from_bio(struct bio *bio)
{
struct blkg_policy_data *pd = blkg_to_pd(bio->bi_blkg, &ioprio_policy);
if (!pd)
return NULL;
return blkcg_to_ioprio_blkcg(pd->blkg->blkcg);
}
static int ioprio_show_prio_policy(struct seq_file *sf, void *v)
{
struct ioprio_blkcg *blkcg = ioprio_blkcg_from_css(seq_css(sf));
seq_printf(sf, "%s\n", policy_name[blkcg->prio_policy]);
return 0;
}
static ssize_t ioprio_set_prio_policy(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off)
{
struct ioprio_blkcg *blkcg = ioprio_blkcg_from_css(of_css(of));
int ret;
if (off != 0)
return -EIO;
/* kernfs_fop_write_iter() terminates 'buf' with '\0'. */
ret = sysfs_match_string(policy_name, buf);
if (ret < 0)
return ret;
blkcg->prio_policy = ret;
return nbytes;
}
static struct blkg_policy_data *
ioprio_alloc_pd(gfp_t gfp, struct request_queue *q, struct blkcg *blkcg)
{
struct ioprio_blkg *ioprio_blkg;
ioprio_blkg = kzalloc(sizeof(*ioprio_blkg), gfp);
if (!ioprio_blkg)
return NULL;
return &ioprio_blkg->pd;
}
static void ioprio_free_pd(struct blkg_policy_data *pd)
{
struct ioprio_blkg *ioprio_blkg = pd_to_ioprio(pd);
kfree(ioprio_blkg);
}
static struct blkcg_policy_data *ioprio_alloc_cpd(gfp_t gfp)
{
struct ioprio_blkcg *blkcg;
blkcg = kzalloc(sizeof(*blkcg), gfp);
if (!blkcg)
return NULL;
blkcg->prio_policy = POLICY_NO_CHANGE;
return &blkcg->cpd;
}
static void ioprio_free_cpd(struct blkcg_policy_data *cpd)
{
struct ioprio_blkcg *blkcg = container_of(cpd, typeof(*blkcg), cpd);
kfree(blkcg);
}
#define IOPRIO_ATTRS \
{ \
.name = "prio.class", \
.seq_show = ioprio_show_prio_policy, \
.write = ioprio_set_prio_policy, \
}, \
{ } /* sentinel */
/* cgroup v2 attributes */
static struct cftype ioprio_files[] = {
IOPRIO_ATTRS
};
/* cgroup v1 attributes */
static struct cftype ioprio_legacy_files[] = {
IOPRIO_ATTRS
};
static struct blkcg_policy ioprio_policy = {
.dfl_cftypes = ioprio_files,
.legacy_cftypes = ioprio_legacy_files,
.cpd_alloc_fn = ioprio_alloc_cpd,
.cpd_free_fn = ioprio_free_cpd,
.pd_alloc_fn = ioprio_alloc_pd,
.pd_free_fn = ioprio_free_pd,
};
struct blk_ioprio {
struct rq_qos rqos;
};
static void blkcg_ioprio_track(struct rq_qos *rqos, struct request *rq,
struct bio *bio)
{
struct ioprio_blkcg *blkcg = ioprio_blkcg_from_bio(bio);
/*
* Except for IOPRIO_CLASS_NONE, higher I/O priority numbers
* correspond to a lower priority. Hence, the max_t() below selects
* the lower priority of bi_ioprio and the cgroup I/O priority class.
* If the cgroup policy has been set to POLICY_NO_CHANGE == 0, the
* bio I/O priority is not modified. If the bio I/O priority equals
* IOPRIO_CLASS_NONE, the cgroup I/O priority is assigned to the bio.
*/
bio->bi_ioprio = max_t(u16, bio->bi_ioprio,
IOPRIO_PRIO_VALUE(blkcg->prio_policy, 0));
}
static void blkcg_ioprio_exit(struct rq_qos *rqos)
{
struct blk_ioprio *blkioprio_blkg =
container_of(rqos, typeof(*blkioprio_blkg), rqos);
blkcg_deactivate_policy(rqos->q, &ioprio_policy);
kfree(blkioprio_blkg);
}
static struct rq_qos_ops blkcg_ioprio_ops = {
.track = blkcg_ioprio_track,
.exit = blkcg_ioprio_exit,
};
int blk_ioprio_init(struct request_queue *q)
{
struct blk_ioprio *blkioprio_blkg;
struct rq_qos *rqos;
int ret;
blkioprio_blkg = kzalloc(sizeof(*blkioprio_blkg), GFP_KERNEL);
if (!blkioprio_blkg)
return -ENOMEM;
ret = blkcg_activate_policy(q, &ioprio_policy);
if (ret) {
kfree(blkioprio_blkg);
return ret;
}
rqos = &blkioprio_blkg->rqos;
rqos->id = RQ_QOS_IOPRIO;
rqos->ops = &blkcg_ioprio_ops;
rqos->q = q;
/*
* Registering the rq-qos policy after activating the blk-cgroup
* policy guarantees that ioprio_blkcg_from_bio(bio) != NULL in the
* rq-qos callbacks.
*/
rq_qos_add(q, rqos);
return 0;
}
static int __init ioprio_init(void)
{
return blkcg_policy_register(&ioprio_policy);
}
static void __exit ioprio_exit(void)
{
blkcg_policy_unregister(&ioprio_policy);
}
module_init(ioprio_init);
module_exit(ioprio_exit);

block/blk-ioprio.h (new file, 19 lines)

@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _BLK_IOPRIO_H_
#define _BLK_IOPRIO_H_
#include <linux/kconfig.h>
struct request_queue;
#ifdef CONFIG_BLK_CGROUP_IOPRIO
int blk_ioprio_init(struct request_queue *q);
#else
static inline int blk_ioprio_init(struct request_queue *q)
{
return 0;
}
#endif
#endif /* _BLK_IOPRIO_H_ */


@@ -939,6 +939,21 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
q->sched_debugfs_dir = NULL;
}
static const char *rq_qos_id_to_name(enum rq_qos_id id)
{
switch (id) {
case RQ_QOS_WBT:
return "wbt";
case RQ_QOS_LATENCY:
return "latency";
case RQ_QOS_COST:
return "cost";
case RQ_QOS_IOPRIO:
return "ioprio";
}
return "unknown";
}
void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
{
debugfs_remove_recursive(rqos->debugfs_dir);


@@ -26,6 +26,8 @@ struct blk_mq_tags {
* request pool
*/
spinlock_t lock;
ANDROID_OEM_DATA(1);
};
extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,


@@ -1668,6 +1668,42 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
}
EXPORT_SYMBOL(blk_mq_run_hw_queue);
/*
* Is the request queue handled by an IO scheduler that does not respect
* hardware queues when dispatching?
*/
static bool blk_mq_has_sqsched(struct request_queue *q)
{
struct elevator_queue *e = q->elevator;
if (e && e->type->ops.dispatch_request &&
!(e->type->elevator_features & ELEVATOR_F_MQ_AWARE))
return true;
return false;
}
/*
* Return the preferred queue to dispatch from (if any) for a non-mq-aware IO
* scheduler.
*/
static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
{
struct blk_mq_hw_ctx *hctx;
/*
* If the IO scheduler does not respect hardware queues when
* dispatching, we just don't bother with multiple HW queues and
* dispatch from hctx for the current CPU since running multiple queues
* just causes lock contention inside the scheduler and pointless cache
* bouncing.
*/
hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
raw_smp_processor_id());
if (!blk_mq_hctx_stopped(hctx))
return hctx;
return NULL;
}
/**
* blk_mq_run_hw_queues - Run all hardware queues in a request queue.
* @q: Pointer to the request queue to run.
@@ -1675,13 +1711,22 @@ EXPORT_SYMBOL(blk_mq_run_hw_queue);
*/
void blk_mq_run_hw_queues(struct request_queue *q, bool async)
{
struct blk_mq_hw_ctx *hctx;
struct blk_mq_hw_ctx *hctx, *sq_hctx;
int i;
sq_hctx = NULL;
if (blk_mq_has_sqsched(q))
sq_hctx = blk_mq_get_sq_hctx(q);
queue_for_each_hw_ctx(q, hctx, i) {
if (blk_mq_hctx_stopped(hctx))
continue;
/*
* Dispatch from this hctx either if there's no hctx preferred
* by IO scheduler or if it has requests that bypass the
* scheduler.
*/
if (!sq_hctx || sq_hctx == hctx ||
!list_empty_careful(&hctx->dispatch))
blk_mq_run_hw_queue(hctx, async);
}
}
@@ -1694,13 +1739,22 @@ EXPORT_SYMBOL(blk_mq_run_hw_queues);
*/
void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
{
struct blk_mq_hw_ctx *hctx;
struct blk_mq_hw_ctx *hctx, *sq_hctx;
int i;
sq_hctx = NULL;
if (blk_mq_has_sqsched(q))
sq_hctx = blk_mq_get_sq_hctx(q);
queue_for_each_hw_ctx(q, hctx, i) {
if (blk_mq_hctx_stopped(hctx))
continue;
/*
* Dispatch from this hctx either if there's no hctx preferred
* by IO scheduler or if it has requests that bypass the
* scheduler.
*/
if (!sq_hctx || sq_hctx == hctx ||
!list_empty_careful(&hctx->dispatch))
blk_mq_delay_run_hw_queue(hctx, msecs);
}
}
@@ -2740,7 +2794,6 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
goto free_hctx;
atomic_set(&hctx->nr_active, 0);
atomic_set(&hctx->elevator_queued, 0);
if (node == NUMA_NO_NODE)
node = set->numa_node;
hctx->numa_node = node;


@@ -35,6 +35,8 @@ struct blk_mq_ctx {
struct request_queue *queue;
struct blk_mq_ctxs *ctxs;
struct kobject kobj;
ANDROID_OEM_DATA_ARRAY(1, 2);
} ____cacheline_aligned_in_smp;
void blk_mq_exit_queue(struct request_queue *q);


@@ -16,6 +16,7 @@ enum rq_qos_id {
RQ_QOS_WBT,
RQ_QOS_LATENCY,
RQ_QOS_COST,
RQ_QOS_IOPRIO,
};
struct rq_wait {
@@ -78,19 +79,6 @@ static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
return rq_qos_id(q, RQ_QOS_LATENCY);
}
static inline const char *rq_qos_id_to_name(enum rq_qos_id id)
{
switch (id) {
case RQ_QOS_WBT:
return "wbt";
case RQ_QOS_LATENCY:
return "latency";
case RQ_QOS_COST:
return "cost";
}
return "unknown";
}
static inline void rq_wait_init(struct rq_wait *rq_wait)
{
atomic_set(&rq_wait->inflight, 0);


@@ -1030,6 +1030,7 @@ static struct elevator_type kyber_sched = {
#endif
.elevator_attrs = kyber_sched_attrs,
.elevator_name = "kyber",
.elevator_features = ELEVATOR_F_MQ_AWARE,
.elevator_owner = THIS_MODULE,
};

block/mq-deadline-cgroup.c (new file, 126 lines)

@@ -0,0 +1,126 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/blk-cgroup.h>
#include <linux/ioprio.h>
#include "mq-deadline-cgroup.h"
static struct blkcg_policy dd_blkcg_policy;
static struct blkcg_policy_data *dd_cpd_alloc(gfp_t gfp)
{
struct dd_blkcg *pd;
pd = kzalloc(sizeof(*pd), gfp);
if (!pd)
return NULL;
pd->stats = alloc_percpu_gfp(typeof(*pd->stats),
GFP_KERNEL | __GFP_ZERO);
if (!pd->stats) {
kfree(pd);
return NULL;
}
return &pd->cpd;
}
static void dd_cpd_free(struct blkcg_policy_data *cpd)
{
struct dd_blkcg *dd_blkcg = container_of(cpd, typeof(*dd_blkcg), cpd);
free_percpu(dd_blkcg->stats);
kfree(dd_blkcg);
}
static struct dd_blkcg *dd_blkcg_from_pd(struct blkg_policy_data *pd)
{
return container_of(blkcg_to_cpd(pd->blkg->blkcg, &dd_blkcg_policy),
struct dd_blkcg, cpd);
}
/*
* Convert an association between a block cgroup and a request queue into a
* pointer to the mq-deadline information associated with a (blkcg, queue) pair.
*/
struct dd_blkcg *dd_blkcg_from_bio(struct bio *bio)
{
struct blkg_policy_data *pd;
pd = blkg_to_pd(bio->bi_blkg, &dd_blkcg_policy);
if (!pd)
return NULL;
return dd_blkcg_from_pd(pd);
}
static size_t dd_pd_stat(struct blkg_policy_data *pd, char *buf, size_t size)
{
static const char *const prio_class_name[] = {
[IOPRIO_CLASS_NONE] = "NONE",
[IOPRIO_CLASS_RT] = "RT",
[IOPRIO_CLASS_BE] = "BE",
[IOPRIO_CLASS_IDLE] = "IDLE",
};
struct dd_blkcg *blkcg = dd_blkcg_from_pd(pd);
int res = 0;
u8 prio;
for (prio = 0; prio < ARRAY_SIZE(blkcg->stats->stats); prio++)
res += scnprintf(buf + res, size - res,
" [%s] dispatched=%u inserted=%u merged=%u",
prio_class_name[prio],
ddcg_sum(blkcg, dispatched, prio) +
ddcg_sum(blkcg, merged, prio) -
ddcg_sum(blkcg, completed, prio),
ddcg_sum(blkcg, inserted, prio) -
ddcg_sum(blkcg, completed, prio),
ddcg_sum(blkcg, merged, prio));
return res;
}
static struct blkg_policy_data *dd_pd_alloc(gfp_t gfp, struct request_queue *q,
struct blkcg *blkcg)
{
struct dd_blkg *pd;
pd = kzalloc(sizeof(*pd), gfp);
if (!pd)
return NULL;
return &pd->pd;
}
static void dd_pd_free(struct blkg_policy_data *pd)
{
struct dd_blkg *dd_blkg = container_of(pd, typeof(*dd_blkg), pd);
kfree(dd_blkg);
}
static struct blkcg_policy dd_blkcg_policy = {
.cpd_alloc_fn = dd_cpd_alloc,
.cpd_free_fn = dd_cpd_free,
.pd_alloc_fn = dd_pd_alloc,
.pd_free_fn = dd_pd_free,
.pd_stat_fn = dd_pd_stat,
};
int dd_activate_policy(struct request_queue *q)
{
return blkcg_activate_policy(q, &dd_blkcg_policy);
}
void dd_deactivate_policy(struct request_queue *q)
{
blkcg_deactivate_policy(q, &dd_blkcg_policy);
}
int __init dd_blkcg_init(void)
{
return blkcg_policy_register(&dd_blkcg_policy);
}
void __exit dd_blkcg_exit(void)
{
blkcg_policy_unregister(&dd_blkcg_policy);
}
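dd_blkcg_init()/dd_blkcg_exit() and dd_activate_policy()/dd_deactivate_policy() above form the policy lifecycle. A minimal sketch of how a scheduler module might wire them in (hypothetical demo_* names, not code from this commit):
#include <linux/module.h>
#include "mq-deadline-cgroup.h"
static int __init demo_init(void)
{
        int ret;
        /* Register the blkcg policy once, before any queue can use it. */
        ret = dd_blkcg_init();
        if (ret)
                return ret;
        /*
         * A real scheduler would now register its elevator_type; its
         * init_sched hook would call dd_activate_policy(q), its exit_sched
         * hook dd_deactivate_policy(q), and the hot paths would bump the
         * statistics via ddcg_count().
         */
        return 0;
}
static void __exit demo_exit(void)
{
        dd_blkcg_exit();
}
module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");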

block/mq-deadline-cgroup.h (new file, 114 lines)
View File

@@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0 */
#if !defined(_MQ_DEADLINE_CGROUP_H_)
#define _MQ_DEADLINE_CGROUP_H_
#include <linux/blk-cgroup.h>
struct request_queue;
/**
* struct io_stats_per_prio - I/O statistics per I/O priority class.
* @inserted: Number of inserted requests.
* @merged: Number of merged requests.
* @dispatched: Number of dispatched requests.
* @completed: Number of I/O completions.
*/
struct io_stats_per_prio {
local_t inserted;
local_t merged;
local_t dispatched;
local_t completed;
};
/* I/O statistics per I/O cgroup per I/O priority class (IOPRIO_CLASS_*). */
struct blkcg_io_stats {
struct io_stats_per_prio stats[4];
};
/**
* struct dd_blkcg - Per cgroup data.
* @cpd: blkcg_policy_data structure.
* @stats: I/O statistics.
*/
struct dd_blkcg {
struct blkcg_policy_data cpd; /* must be the first member */
struct blkcg_io_stats __percpu *stats;
};
/*
* Count one event of type 'event_type' and with I/O priority class
* 'prio_class'.
*/
#define ddcg_count(ddcg, event_type, prio_class) do { \
if (ddcg) { \
struct blkcg_io_stats *io_stats = get_cpu_ptr((ddcg)->stats); \
\
BUILD_BUG_ON(!__same_type((ddcg), struct dd_blkcg *)); \
BUILD_BUG_ON(!__same_type((prio_class), u8)); \
local_inc(&io_stats->stats[(prio_class)].event_type); \
put_cpu_ptr(io_stats); \
} \
} while (0)
/*
* Returns the total number of ddcg_count(ddcg, event_type, prio_class) calls
* across all CPUs. No locking or barriers since it is fine if the returned
* sum is slightly outdated.
*/
#define ddcg_sum(ddcg, event_type, prio) ({ \
unsigned int cpu; \
u32 sum = 0; \
\
BUILD_BUG_ON(!__same_type((ddcg), struct dd_blkcg *)); \
BUILD_BUG_ON(!__same_type((prio), u8)); \
for_each_present_cpu(cpu) \
sum += local_read(&per_cpu_ptr((ddcg)->stats, cpu)-> \
stats[(prio)].event_type); \
sum; \
})
#ifdef CONFIG_BLK_CGROUP
/**
* struct dd_blkg - Per (cgroup, request queue) data.
* @pd: blkg_policy_data structure.
*/
struct dd_blkg {
struct blkg_policy_data pd; /* must be the first member */
};
struct dd_blkcg *dd_blkcg_from_bio(struct bio *bio);
int dd_activate_policy(struct request_queue *q);
void dd_deactivate_policy(struct request_queue *q);
int __init dd_blkcg_init(void);
void __exit dd_blkcg_exit(void);
#else /* CONFIG_BLK_CGROUP */
static inline struct dd_blkcg *dd_blkcg_from_bio(struct bio *bio)
{
return NULL;
}
static inline int dd_activate_policy(struct request_queue *q)
{
return 0;
}
static inline void dd_deactivate_policy(struct request_queue *q)
{
}
static inline int dd_blkcg_init(void)
{
return 0;
}
static inline void dd_blkcg_exit(void)
{
}
#endif /* CONFIG_BLK_CGROUP */
#endif /* _MQ_DEADLINE_CGROUP_H_ */
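The ddcg_count()/ddcg_sum() macros above are an instance of the usual lockless per-CPU counting pattern: writers bump a CPU-local local_t, readers sum every CPU and tolerate a slightly stale total. A self-contained sketch of that pattern (hypothetical demo_* names, not part of this commit; the per-CPU storage is allocated the same way dd_cpd_alloc() does above):
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <asm/local.h>
struct demo_stats {
        local_t events;
};
static struct demo_stats __percpu *demo_stats;
static int demo_stats_alloc(void)
{
        demo_stats = alloc_percpu(struct demo_stats);
        return demo_stats ? 0 : -ENOMEM;
}
static void demo_count(void)
{
        struct demo_stats *s = get_cpu_ptr(demo_stats);
        local_inc(&s->events);          /* CPU-local increment, no shared lock */
        put_cpu_ptr(demo_stats);
}
static u32 demo_sum(void)
{
        unsigned int cpu;
        u32 sum = 0;
        /* No locking or barriers: a slightly stale total is acceptable. */
        for_each_present_cpu(cpu)
                sum += local_read(&per_cpu_ptr(demo_stats, cpu)->events);
        return sum;
}
int demo_example(void)
{
        if (!demo_stats && demo_stats_alloc())
                return -ENOMEM;
        demo_count();
        return demo_sum();
}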

block/mq-deadline-main.c (new file, 1170 lines)

File diff suppressed because it is too large.

View File

@@ -1,822 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* MQ Deadline i/o scheduler - adaptation of the legacy deadline scheduler,
* for the blk-mq scheduling framework
*
* Copyright (C) 2016 Jens Axboe <axboe@kernel.dk>
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/elevator.h>
#include <linux/bio.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/compiler.h>
#include <linux/rbtree.h>
#include <linux/sbitmap.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/*
* See Documentation/block/deadline-iosched.rst
*/
static const int read_expire = HZ / 2; /* max time before a read is submitted. */
static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */
static const int writes_starved = 2; /* max times reads can starve a write */
static const int fifo_batch = 16; /* # of sequential requests treated as one
by the above parameters. For throughput. */
struct deadline_data {
/*
* run time data
*/
/*
* requests (deadline_rq s) are present on both sort_list and fifo_list
*/
struct rb_root sort_list[2];
struct list_head fifo_list[2];
/*
* next in sort order. read, write or both are NULL
*/
struct request *next_rq[2];
unsigned int batching; /* number of sequential requests made */
unsigned int starved; /* times reads have starved writes */
/*
* settings that change how the i/o scheduler behaves
*/
int fifo_expire[2];
int fifo_batch;
int writes_starved;
int front_merges;
spinlock_t lock;
spinlock_t zone_lock;
struct list_head dispatch;
};
static inline struct rb_root *
deadline_rb_root(struct deadline_data *dd, struct request *rq)
{
return &dd->sort_list[rq_data_dir(rq)];
}
/*
* get the request after `rq' in sector-sorted order
*/
static inline struct request *
deadline_latter_request(struct request *rq)
{
struct rb_node *node = rb_next(&rq->rb_node);
if (node)
return rb_entry_rq(node);
return NULL;
}
static void
deadline_add_rq_rb(struct deadline_data *dd, struct request *rq)
{
struct rb_root *root = deadline_rb_root(dd, rq);
elv_rb_add(root, rq);
}
static inline void
deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
{
const int data_dir = rq_data_dir(rq);
if (dd->next_rq[data_dir] == rq)
dd->next_rq[data_dir] = deadline_latter_request(rq);
elv_rb_del(deadline_rb_root(dd, rq), rq);
}
/*
* remove rq from rbtree and fifo.
*/
static void deadline_remove_request(struct request_queue *q, struct request *rq)
{
struct deadline_data *dd = q->elevator->elevator_data;
list_del_init(&rq->queuelist);
/*
* We might not be on the rbtree, if we are doing an insert merge
*/
if (!RB_EMPTY_NODE(&rq->rb_node))
deadline_del_rq_rb(dd, rq);
elv_rqhash_del(q, rq);
if (q->last_merge == rq)
q->last_merge = NULL;
}
static void dd_request_merged(struct request_queue *q, struct request *req,
enum elv_merge type)
{
struct deadline_data *dd = q->elevator->elevator_data;
/*
* if the merge was a front merge, we need to reposition request
*/
if (type == ELEVATOR_FRONT_MERGE) {
elv_rb_del(deadline_rb_root(dd, req), req);
deadline_add_rq_rb(dd, req);
}
}
static void dd_merged_requests(struct request_queue *q, struct request *req,
struct request *next)
{
/*
* if next expires before rq, assign its expire time to rq
* and move into next position (next will be deleted) in fifo
*/
if (!list_empty(&req->queuelist) && !list_empty(&next->queuelist)) {
if (time_before((unsigned long)next->fifo_time,
(unsigned long)req->fifo_time)) {
list_move(&req->queuelist, &next->queuelist);
req->fifo_time = next->fifo_time;
}
}
/*
* kill knowledge of next, this one is a goner
*/
deadline_remove_request(q, next);
}
/*
* move an entry to dispatch queue
*/
static void
deadline_move_request(struct deadline_data *dd, struct request *rq)
{
const int data_dir = rq_data_dir(rq);
dd->next_rq[READ] = NULL;
dd->next_rq[WRITE] = NULL;
dd->next_rq[data_dir] = deadline_latter_request(rq);
/*
* take it off the sort and fifo list
*/
deadline_remove_request(rq->q, rq);
}
/*
* deadline_check_fifo returns 0 if there are no expired requests on the fifo,
* 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
*/
static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
{
struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
/*
* rq is expired!
*/
if (time_after_eq(jiffies, (unsigned long)rq->fifo_time))
return 1;
return 0;
}
/*
* For the specified data direction, return the next request to
* dispatch using arrival ordered lists.
*/
static struct request *
deadline_fifo_request(struct deadline_data *dd, int data_dir)
{
struct request *rq;
unsigned long flags;
if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE))
return NULL;
if (list_empty(&dd->fifo_list[data_dir]))
return NULL;
rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
if (data_dir == READ || !blk_queue_is_zoned(rq->q))
return rq;
/*
* Look for a write request that can be dispatched, that is one with
* an unlocked target zone.
*/
spin_lock_irqsave(&dd->zone_lock, flags);
list_for_each_entry(rq, &dd->fifo_list[WRITE], queuelist) {
if (blk_req_can_dispatch_to_zone(rq))
goto out;
}
rq = NULL;
out:
spin_unlock_irqrestore(&dd->zone_lock, flags);
return rq;
}
/*
* For the specified data direction, return the next request to
* dispatch using sector position sorted lists.
*/
static struct request *
deadline_next_request(struct deadline_data *dd, int data_dir)
{
struct request *rq;
unsigned long flags;
if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE))
return NULL;
rq = dd->next_rq[data_dir];
if (!rq)
return NULL;
if (data_dir == READ || !blk_queue_is_zoned(rq->q))
return rq;
/*
* Look for a write request that can be dispatched, that is one with
* an unlocked target zone.
*/
spin_lock_irqsave(&dd->zone_lock, flags);
while (rq) {
if (blk_req_can_dispatch_to_zone(rq))
break;
rq = deadline_latter_request(rq);
}
spin_unlock_irqrestore(&dd->zone_lock, flags);
return rq;
}
/*
* deadline_dispatch_requests selects the best request according to
* read/write expire, fifo_batch, etc
*/
static struct request *__dd_dispatch_request(struct deadline_data *dd)
{
struct request *rq, *next_rq;
bool reads, writes;
int data_dir;
if (!list_empty(&dd->dispatch)) {
rq = list_first_entry(&dd->dispatch, struct request, queuelist);
list_del_init(&rq->queuelist);
goto done;
}
reads = !list_empty(&dd->fifo_list[READ]);
writes = !list_empty(&dd->fifo_list[WRITE]);
/*
* batches are currently reads XOR writes
*/
rq = deadline_next_request(dd, WRITE);
if (!rq)
rq = deadline_next_request(dd, READ);
if (rq && dd->batching < dd->fifo_batch)
/* we have a next request and are still entitled to batch */
goto dispatch_request;
/*
* at this point we are not running a batch. select the appropriate
* data direction (read / write)
*/
if (reads) {
BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
if (deadline_fifo_request(dd, WRITE) &&
(dd->starved++ >= dd->writes_starved))
goto dispatch_writes;
data_dir = READ;
goto dispatch_find_request;
}
/*
* there are either no reads or writes have been starved
*/
if (writes) {
dispatch_writes:
BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
dd->starved = 0;
data_dir = WRITE;
goto dispatch_find_request;
}
return NULL;
dispatch_find_request:
/*
* we are not running a batch, find best request for selected data_dir
*/
next_rq = deadline_next_request(dd, data_dir);
if (deadline_check_fifo(dd, data_dir) || !next_rq) {
/*
* A deadline has expired, the last request was in the other
* direction, or we have run out of higher-sectored requests.
* Start again from the request with the earliest expiry time.
*/
rq = deadline_fifo_request(dd, data_dir);
} else {
/*
* The last req was the same dir and we have a next request in
* sort order. No expired requests so continue on from here.
*/
rq = next_rq;
}
/*
* For a zoned block device, if we only have writes queued and none of
* them can be dispatched, rq will be NULL.
*/
if (!rq)
return NULL;
dd->batching = 0;
dispatch_request:
/*
* rq is the selected appropriate request.
*/
dd->batching++;
deadline_move_request(dd, rq);
done:
/*
* If the request needs its target zone locked, do it.
*/
blk_req_zone_write_lock(rq);
rq->rq_flags |= RQF_STARTED;
return rq;
}
/*
* One confusing aspect here is that we get called for a specific
* hardware queue, but we may return a request that is for a
* different hardware queue. This is because mq-deadline has shared
* state for all hardware queues, in terms of sorting, FIFOs, etc.
*/
static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
struct request *rq;
spin_lock(&dd->lock);
rq = __dd_dispatch_request(dd);
spin_unlock(&dd->lock);
if (rq)
atomic_dec(&rq->mq_hctx->elevator_queued);
return rq;
}
static void dd_exit_queue(struct elevator_queue *e)
{
struct deadline_data *dd = e->elevator_data;
BUG_ON(!list_empty(&dd->fifo_list[READ]));
BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
kfree(dd);
}
/*
* initialize elevator private data (deadline_data).
*/
static int dd_init_queue(struct request_queue *q, struct elevator_type *e)
{
struct deadline_data *dd;
struct elevator_queue *eq;
eq = elevator_alloc(q, e);
if (!eq)
return -ENOMEM;
dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node);
if (!dd) {
kobject_put(&eq->kobj);
return -ENOMEM;
}
eq->elevator_data = dd;
INIT_LIST_HEAD(&dd->fifo_list[READ]);
INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
dd->sort_list[READ] = RB_ROOT;
dd->sort_list[WRITE] = RB_ROOT;
dd->fifo_expire[READ] = read_expire;
dd->fifo_expire[WRITE] = write_expire;
dd->writes_starved = writes_starved;
dd->front_merges = 1;
dd->fifo_batch = fifo_batch;
spin_lock_init(&dd->lock);
spin_lock_init(&dd->zone_lock);
INIT_LIST_HEAD(&dd->dispatch);
q->elevator = eq;
return 0;
}
static int dd_request_merge(struct request_queue *q, struct request **rq,
struct bio *bio)
{
struct deadline_data *dd = q->elevator->elevator_data;
sector_t sector = bio_end_sector(bio);
struct request *__rq;
if (!dd->front_merges)
return ELEVATOR_NO_MERGE;
__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
if (__rq) {
BUG_ON(sector != blk_rq_pos(__rq));
if (elv_bio_merge_ok(__rq, bio)) {
*rq = __rq;
return ELEVATOR_FRONT_MERGE;
}
}
return ELEVATOR_NO_MERGE;
}
static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct deadline_data *dd = q->elevator->elevator_data;
struct request *free = NULL;
bool ret;
spin_lock(&dd->lock);
ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
spin_unlock(&dd->lock);
if (free)
blk_mq_free_request(free);
return ret;
}
/*
* add rq to rbtree and fifo
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
const int data_dir = rq_data_dir(rq);
/*
* This may be a requeue of a write request that has locked its
* target zone. If it is the case, this releases the zone lock.
*/
blk_req_zone_write_unlock(rq);
if (blk_mq_sched_try_insert_merge(q, rq))
return;
blk_mq_sched_request_inserted(rq);
if (at_head || blk_rq_is_passthrough(rq)) {
if (at_head)
list_add(&rq->queuelist, &dd->dispatch);
else
list_add_tail(&rq->queuelist, &dd->dispatch);
} else {
deadline_add_rq_rb(dd, rq);
if (rq_mergeable(rq)) {
elv_rqhash_add(q, rq);
if (!q->last_merge)
q->last_merge = rq;
}
/*
* set expire time and add to fifo list
*/
rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
}
}
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
spin_lock(&dd->lock);
while (!list_empty(list)) {
struct request *rq;
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
dd_insert_request(hctx, rq, at_head);
atomic_inc(&hctx->elevator_queued);
}
spin_unlock(&dd->lock);
}
/*
* Nothing to do here. This is defined only to ensure that the .finish_request()
* method is called upon request completion.
*/
static void dd_prepare_request(struct request *rq)
{
}
/*
* For zoned block devices, write unlock the target zone of
* completed write requests. Do this while holding the zone lock
* spinlock so that the zone is never unlocked while deadline_fifo_request()
* or deadline_next_request() are executing. This function is called for
* all requests, whether or not these requests complete successfully.
*
* For a zoned block device, __dd_dispatch_request() may have stopped
* dispatching requests if all the queued requests are write requests directed
* at zones that are already locked due to on-going write requests. To ensure
* write request dispatch progress in this case, mark the queue as needing a
* restart to ensure that the queue is run again after completion of the
* request and zones being unlocked.
*/
static void dd_finish_request(struct request *rq)
{
struct request_queue *q = rq->q;
if (blk_queue_is_zoned(q)) {
struct deadline_data *dd = q->elevator->elevator_data;
unsigned long flags;
spin_lock_irqsave(&dd->zone_lock, flags);
blk_req_zone_write_unlock(rq);
if (!list_empty(&dd->fifo_list[WRITE]))
blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
spin_unlock_irqrestore(&dd->zone_lock, flags);
}
}
static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
if (!atomic_read(&hctx->elevator_queued))
return false;
return !list_empty_careful(&dd->dispatch) ||
!list_empty_careful(&dd->fifo_list[0]) ||
!list_empty_careful(&dd->fifo_list[1]);
}
/*
* sysfs parts below
*/
static ssize_t
deadline_var_show(int var, char *page)
{
return sprintf(page, "%d\n", var);
}
static void
deadline_var_store(int *var, const char *page)
{
char *p = (char *) page;
*var = simple_strtol(p, &p, 10);
}
#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \
static ssize_t __FUNC(struct elevator_queue *e, char *page) \
{ \
struct deadline_data *dd = e->elevator_data; \
int __data = __VAR; \
if (__CONV) \
__data = jiffies_to_msecs(__data); \
return deadline_var_show(__data, (page)); \
}
SHOW_FUNCTION(deadline_read_expire_show, dd->fifo_expire[READ], 1);
SHOW_FUNCTION(deadline_write_expire_show, dd->fifo_expire[WRITE], 1);
SHOW_FUNCTION(deadline_writes_starved_show, dd->writes_starved, 0);
SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0);
SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0);
#undef SHOW_FUNCTION
#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \
{ \
struct deadline_data *dd = e->elevator_data; \
int __data; \
deadline_var_store(&__data, (page)); \
if (__data < (MIN)) \
__data = (MIN); \
else if (__data > (MAX)) \
__data = (MAX); \
if (__CONV) \
*(__PTR) = msecs_to_jiffies(__data); \
else \
*(__PTR) = __data; \
return count; \
}
STORE_FUNCTION(deadline_read_expire_store, &dd->fifo_expire[READ], 0, INT_MAX, 1);
STORE_FUNCTION(deadline_write_expire_store, &dd->fifo_expire[WRITE], 0, INT_MAX, 1);
STORE_FUNCTION(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX, 0);
STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0);
STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
#undef STORE_FUNCTION
#define DD_ATTR(name) \
__ATTR(name, 0644, deadline_##name##_show, deadline_##name##_store)
static struct elv_fs_entry deadline_attrs[] = {
DD_ATTR(read_expire),
DD_ATTR(write_expire),
DD_ATTR(writes_starved),
DD_ATTR(front_merges),
DD_ATTR(fifo_batch),
__ATTR_NULL
};
#ifdef CONFIG_BLK_DEBUG_FS
#define DEADLINE_DEBUGFS_DDIR_ATTRS(ddir, name) \
static void *deadline_##name##_fifo_start(struct seq_file *m, \
loff_t *pos) \
__acquires(&dd->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
\
spin_lock(&dd->lock); \
return seq_list_start(&dd->fifo_list[ddir], *pos); \
} \
\
static void *deadline_##name##_fifo_next(struct seq_file *m, void *v, \
loff_t *pos) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
\
return seq_list_next(v, &dd->fifo_list[ddir], pos); \
} \
\
static void deadline_##name##_fifo_stop(struct seq_file *m, void *v) \
__releases(&dd->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
\
spin_unlock(&dd->lock); \
} \
\
static const struct seq_operations deadline_##name##_fifo_seq_ops = { \
.start = deadline_##name##_fifo_start, \
.next = deadline_##name##_fifo_next, \
.stop = deadline_##name##_fifo_stop, \
.show = blk_mq_debugfs_rq_show, \
}; \
\
static int deadline_##name##_next_rq_show(void *data, \
struct seq_file *m) \
{ \
struct request_queue *q = data; \
struct deadline_data *dd = q->elevator->elevator_data; \
struct request *rq = dd->next_rq[ddir]; \
\
if (rq) \
__blk_mq_debugfs_rq_show(m, rq); \
return 0; \
}
DEADLINE_DEBUGFS_DDIR_ATTRS(READ, read)
DEADLINE_DEBUGFS_DDIR_ATTRS(WRITE, write)
#undef DEADLINE_DEBUGFS_DDIR_ATTRS
static int deadline_batching_show(void *data, struct seq_file *m)
{
struct request_queue *q = data;
struct deadline_data *dd = q->elevator->elevator_data;
seq_printf(m, "%u\n", dd->batching);
return 0;
}
static int deadline_starved_show(void *data, struct seq_file *m)
{
struct request_queue *q = data;
struct deadline_data *dd = q->elevator->elevator_data;
seq_printf(m, "%u\n", dd->starved);
return 0;
}
static void *deadline_dispatch_start(struct seq_file *m, loff_t *pos)
__acquires(&dd->lock)
{
struct request_queue *q = m->private;
struct deadline_data *dd = q->elevator->elevator_data;
spin_lock(&dd->lock);
return seq_list_start(&dd->dispatch, *pos);
}
static void *deadline_dispatch_next(struct seq_file *m, void *v, loff_t *pos)
{
struct request_queue *q = m->private;
struct deadline_data *dd = q->elevator->elevator_data;
return seq_list_next(v, &dd->dispatch, pos);
}
static void deadline_dispatch_stop(struct seq_file *m, void *v)
__releases(&dd->lock)
{
struct request_queue *q = m->private;
struct deadline_data *dd = q->elevator->elevator_data;
spin_unlock(&dd->lock);
}
static const struct seq_operations deadline_dispatch_seq_ops = {
.start = deadline_dispatch_start,
.next = deadline_dispatch_next,
.stop = deadline_dispatch_stop,
.show = blk_mq_debugfs_rq_show,
};
#define DEADLINE_QUEUE_DDIR_ATTRS(name) \
{#name "_fifo_list", 0400, .seq_ops = &deadline_##name##_fifo_seq_ops}, \
{#name "_next_rq", 0400, deadline_##name##_next_rq_show}
static const struct blk_mq_debugfs_attr deadline_queue_debugfs_attrs[] = {
DEADLINE_QUEUE_DDIR_ATTRS(read),
DEADLINE_QUEUE_DDIR_ATTRS(write),
{"batching", 0400, deadline_batching_show},
{"starved", 0400, deadline_starved_show},
{"dispatch", 0400, .seq_ops = &deadline_dispatch_seq_ops},
{},
};
#undef DEADLINE_QUEUE_DDIR_ATTRS
#endif
static struct elevator_type mq_deadline = {
.ops = {
.insert_requests = dd_insert_requests,
.dispatch_request = dd_dispatch_request,
.prepare_request = dd_prepare_request,
.finish_request = dd_finish_request,
.next_request = elv_rb_latter_request,
.former_request = elv_rb_former_request,
.bio_merge = dd_bio_merge,
.request_merge = dd_request_merge,
.requests_merged = dd_merged_requests,
.request_merged = dd_request_merged,
.has_work = dd_has_work,
.init_sched = dd_init_queue,
.exit_sched = dd_exit_queue,
},
#ifdef CONFIG_BLK_DEBUG_FS
.queue_debugfs_attrs = deadline_queue_debugfs_attrs,
#endif
.elevator_attrs = deadline_attrs,
.elevator_name = "mq-deadline",
.elevator_alias = "deadline",
.elevator_features = ELEVATOR_F_ZBD_SEQ_WRITE,
.elevator_owner = THIS_MODULE,
};
MODULE_ALIAS("mq-deadline-iosched");
static int __init deadline_init(void)
{
return elv_register(&mq_deadline);
}
static void __exit deadline_exit(void)
{
elv_unregister(&mq_deadline);
}
module_init(deadline_init);
module_exit(deadline_exit);
MODULE_AUTHOR("Jens Axboe");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MQ deadline IO scheduler");

View File

@@ -1,5 +1,5 @@
BRANCH=android12-5.10
KMI_GENERATION=7
KMI_GENERATION=8
LLVM=1
DEPMOD=depmod

View File

@@ -572,6 +572,7 @@ struct binder_transaction {
*/
spinlock_t lock;
ANDROID_VENDOR_DATA(1);
ANDROID_OEM_DATA_ARRAY(1, 2);
};
/**

View File

@@ -63,6 +63,7 @@
#include <trace/hooks/user.h>
#include <trace/hooks/cpuidle_psci.h>
#include <trace/hooks/fips140.h>
#include <trace/hooks/remoteproc.h>
/*
* Export tracepoints that act as a bare tracehook (ie: have no trace event
@@ -321,8 +322,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_v4l2subdev_set_fmt);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_v4l2subdev_set_frame_interval);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_scmi_timeout_sync);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_find_new_ilb);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_force_compatible_pre);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_force_compatible_post);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_uid);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_user);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_freq_qos_add_request);
@@ -339,3 +338,4 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_force_compatible_pre);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_force_compatible_post);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_print_transaction_info);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_setscheduler_uclamp);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rproc_recovery);

View File

@@ -52,13 +52,6 @@ static ssize_t exporter_name_show(struct dma_buf *dmabuf,
return sysfs_emit(buf, "%s\n", dmabuf->exp_name);
}
static ssize_t mmap_count_show(struct dma_buf *dmabuf,
struct dma_buf_stats_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%d\n", dmabuf->mmap_count);
}
static ssize_t size_show(struct dma_buf *dmabuf,
struct dma_buf_stats_attribute *attr,
char *buf)
@@ -69,13 +62,10 @@ static ssize_t size_show(struct dma_buf *dmabuf,
static struct dma_buf_stats_attribute exporter_name_attribute =
__ATTR_RO(exporter_name);
static struct dma_buf_stats_attribute size_attribute = __ATTR_RO(size);
static struct dma_buf_stats_attribute mmap_count_attribute =
__ATTR_RO(mmap_count);
static struct attribute *dma_buf_stats_default_attrs[] = {
&exporter_name_attribute.attr,
&size_attribute.attr,
&mmap_count_attribute.attr,
NULL,
};
ATTRIBUTE_GROUPS(dma_buf_stats_default);

View File

@@ -149,59 +149,6 @@ static struct file_system_type dma_buf_fs_type = {
.kill_sb = kill_anon_super,
};
#ifdef CONFIG_DMABUF_SYSFS_STATS
static void dma_buf_vma_open(struct vm_area_struct *vma)
{
struct dma_buf *dmabuf = vma->vm_file->private_data;
dmabuf->mmap_count++;
/* call the heap provided vma open() op */
if (dmabuf->exp_vm_ops->open)
dmabuf->exp_vm_ops->open(vma);
}
static void dma_buf_vma_close(struct vm_area_struct *vma)
{
struct dma_buf *dmabuf = vma->vm_file->private_data;
if (dmabuf->mmap_count)
dmabuf->mmap_count--;
/* call the heap provided vma close() op */
if (dmabuf->exp_vm_ops->close)
dmabuf->exp_vm_ops->close(vma);
}
static int dma_buf_do_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
int ret;
struct file *orig_vm_file = vma->vm_file;
/* call this first because the exporter might override vma->vm_ops */
ret = dmabuf->ops->mmap(dmabuf, vma);
if (ret)
return ret;
if (orig_vm_file != vma->vm_file)
return 0;
/* save the exporter provided vm_ops */
dmabuf->exp_vm_ops = vma->vm_ops;
dmabuf->vm_ops = *(dmabuf->exp_vm_ops);
/* override open() and close() to provide buffer mmap count */
dmabuf->vm_ops.open = dma_buf_vma_open;
dmabuf->vm_ops.close = dma_buf_vma_close;
vma->vm_ops = &dmabuf->vm_ops;
dmabuf->mmap_count++;
return ret;
}
#else /* CONFIG_DMABUF_SYSFS_STATS */
static int dma_buf_do_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
return dmabuf->ops->mmap(dmabuf, vma);
}
#endif /* CONFIG_DMABUF_SYSFS_STATS */
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
@@ -220,7 +167,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
return dma_buf_do_mmap(dmabuf, vma);
return dmabuf->ops->mmap(dmabuf, vma);
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)

View File

@@ -30,6 +30,8 @@ struct mmc_bus_ops {
int (*hw_reset)(struct mmc_host *);
int (*sw_reset)(struct mmc_host *);
bool (*cache_enabled)(struct mmc_host *);
ANDROID_VENDOR_DATA_ARRAY(1, 2);
};
void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);

View File

@@ -39,6 +39,7 @@
#include <linux/virtio_ring.h>
#include <asm/byteorder.h>
#include <linux/platform_device.h>
#include <trace/hooks/remoteproc.h>
#include "remoteproc_internal.h"
@@ -1725,6 +1726,7 @@ int rproc_trigger_recovery(struct rproc *rproc)
release_firmware(firmware_p);
unlock_mutex:
trace_android_vh_rproc_recovery(rproc);
mutex_unlock(&rproc->lock);
return ret;
}

View File

@@ -183,3 +183,12 @@ config SCSI_UFS_CRYPTO
Enabling this makes it possible for the kernel to use the crypto
capabilities of the UFS device (if present) to perform crypto
operations on data being transferred to/from the device.
config SCSI_UFS_HPB
bool "Support UFS Host Performance Booster"
depends on SCSI_UFSHCD
help
The UFS HPB feature improves random read performance. It caches the
device's L2P (logical-to-physical) map in host DRAM, and the driver issues
HPB READ commands that piggyback the physical page number so the device
can bypass the L2P address translation in its FTL (flash translation layer).
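As a rough host-side illustration of the help text above (hypothetical types and names only, not the ufshpb driver API): an HPB-style read looks up the cached physical page number for the target LBA and, when the cached mapping is valid, piggybacks it on the command so the device can skip its own FTL lookup.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
struct hpb_l2p_cache {
        uint64_t *ppn;          /* cached physical page number per LBA */
        bool *valid;            /* mapping still current (no intervening write) */
        size_t nr_entries;
};
struct read_cmd {
        uint64_t lba;
        uint32_t len;
        bool has_hpb_entry;
        uint64_t hpb_entry;     /* physical hint piggybacked on the READ */
};
static void build_read(const struct hpb_l2p_cache *c, struct read_cmd *cmd,
                       uint64_t lba, uint32_t len)
{
        cmd->lba = lba;
        cmd->len = len;
        cmd->has_hpb_entry = false;
        /* Use the hint only when the host-side mapping is present and valid;
         * otherwise fall back to a normal READ and let the device do the
         * FTL lookup itself. */
        if (lba < c->nr_entries && c->valid[lba]) {
                cmd->hpb_entry = c->ppn[lba];
                cmd->has_hpb_entry = true;
        }
}
int main(void)
{
        uint64_t ppn[8] = { [3] = 0x1234 };
        bool valid[8] = { [3] = true };
        struct hpb_l2p_cache cache = { .ppn = ppn, .valid = valid, .nr_entries = 8 };
        struct read_cmd cmd;
        build_read(&cache, &cmd, 3, 1);
        printf("lba=3 hpb_entry=%#llx (hint used: %d)\n",
               (unsigned long long)cmd.hpb_entry, cmd.has_hpb_entry);
        return 0;
}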

View File

@@ -8,6 +8,7 @@ ufshcd-core-y += ufshcd.o ufs-sysfs.o
ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o
ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
ufshcd-core-$(CONFIG_SCSI_UFS_HPB) += ufshpb.o
obj-$(CONFIG_SCSI_UFS_DWC_TC_PCI) += tc-dwc-g210-pci.o ufshcd-dwc.o tc-dwc-g210.o
obj-$(CONFIG_SCSI_UFS_DWC_TC_PLATFORM) += tc-dwc-g210-pltfrm.o ufshcd-dwc.o tc-dwc-g210.o

View File

@@ -522,6 +522,8 @@ UFS_DEVICE_DESC_PARAM(device_version, _DEV_VER, 2);
UFS_DEVICE_DESC_PARAM(number_of_secure_wpa, _NUM_SEC_WPA, 1);
UFS_DEVICE_DESC_PARAM(psa_max_data_size, _PSA_MAX_DATA, 4);
UFS_DEVICE_DESC_PARAM(psa_state_timeout, _PSA_TMT, 1);
UFS_DEVICE_DESC_PARAM(hpb_version, _HPB_VER, 2);
UFS_DEVICE_DESC_PARAM(hpb_control, _HPB_CONTROL, 1);
UFS_DEVICE_DESC_PARAM(ext_feature_sup, _EXT_UFS_FEATURE_SUP, 4);
UFS_DEVICE_DESC_PARAM(wb_presv_us_en, _WB_PRESRV_USRSPC_EN, 1);
UFS_DEVICE_DESC_PARAM(wb_type, _WB_TYPE, 1);
@@ -554,6 +556,8 @@ static struct attribute *ufs_sysfs_device_descriptor[] = {
&dev_attr_number_of_secure_wpa.attr,
&dev_attr_psa_max_data_size.attr,
&dev_attr_psa_state_timeout.attr,
&dev_attr_hpb_version.attr,
&dev_attr_hpb_control.attr,
&dev_attr_ext_feature_sup.attr,
&dev_attr_wb_presv_us_en.attr,
&dev_attr_wb_type.attr,
@@ -627,6 +631,10 @@ UFS_GEOMETRY_DESC_PARAM(enh4_memory_max_alloc_units,
_ENM4_MAX_NUM_UNITS, 4);
UFS_GEOMETRY_DESC_PARAM(enh4_memory_capacity_adjustment_factor,
_ENM4_CAP_ADJ_FCTR, 2);
UFS_GEOMETRY_DESC_PARAM(hpb_region_size, _HPB_REGION_SIZE, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_number_lu, _HPB_NUMBER_LU, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_subregion_size, _HPB_SUBREGION_SIZE, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_max_active_regions, _HPB_MAX_ACTIVE_REGS, 2);
UFS_GEOMETRY_DESC_PARAM(wb_max_alloc_units, _WB_MAX_ALLOC_UNITS, 4);
UFS_GEOMETRY_DESC_PARAM(wb_max_wb_luns, _WB_MAX_WB_LUNS, 1);
UFS_GEOMETRY_DESC_PARAM(wb_buff_cap_adj, _WB_BUFF_CAP_ADJ, 1);
@@ -664,6 +672,10 @@ static struct attribute *ufs_sysfs_geometry_descriptor[] = {
&dev_attr_enh3_memory_capacity_adjustment_factor.attr,
&dev_attr_enh4_memory_max_alloc_units.attr,
&dev_attr_enh4_memory_capacity_adjustment_factor.attr,
&dev_attr_hpb_region_size.attr,
&dev_attr_hpb_number_lu.attr,
&dev_attr_hpb_subregion_size.attr,
&dev_attr_hpb_max_active_regions.attr,
&dev_attr_wb_max_alloc_units.attr,
&dev_attr_wb_max_wb_luns.attr,
&dev_attr_wb_buff_cap_adj.attr,
@@ -905,6 +917,7 @@ UFS_FLAG(disable_fw_update, _PERMANENTLY_DISABLE_FW_UPDATE);
UFS_FLAG(wb_enable, _WB_EN);
UFS_FLAG(wb_flush_en, _WB_BUFF_FLUSH_EN);
UFS_FLAG(wb_flush_during_h8, _WB_BUFF_FLUSH_DURING_HIBERN8);
UFS_FLAG(hpb_enable, _HPB_EN);
static struct attribute *ufs_sysfs_device_flags[] = {
&dev_attr_device_init.attr,
@@ -918,6 +931,7 @@ static struct attribute *ufs_sysfs_device_flags[] = {
&dev_attr_wb_enable.attr,
&dev_attr_wb_flush_en.attr,
&dev_attr_wb_flush_during_h8.attr,
&dev_attr_hpb_enable.attr,
NULL,
};
@@ -953,6 +967,7 @@ static ssize_t _name##_show(struct device *dev, \
static DEVICE_ATTR_RO(_name)
UFS_ATTRIBUTE(boot_lun_enabled, _BOOT_LU_EN);
UFS_ATTRIBUTE(max_data_size_hpb_single_cmd, _MAX_HPB_SINGLE_CMD);
UFS_ATTRIBUTE(current_power_mode, _POWER_MODE);
UFS_ATTRIBUTE(active_icc_level, _ACTIVE_ICC_LVL);
UFS_ATTRIBUTE(ooo_data_enabled, _OOO_DATA_EN);
@@ -976,6 +991,7 @@ UFS_ATTRIBUTE(wb_cur_buf, _CURR_WB_BUFF_SIZE);
static struct attribute *ufs_sysfs_attributes[] = {
&dev_attr_boot_lun_enabled.attr,
&dev_attr_max_data_size_hpb_single_cmd.attr,
&dev_attr_current_power_mode.attr,
&dev_attr_active_icc_level.attr,
&dev_attr_ooo_data_enabled.attr,
@@ -1048,6 +1064,9 @@ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
UFS_UNIT_DESC_PARAM(hpb_lu_max_active_regions, _HPB_LU_MAX_ACTIVE_RGNS, 2);
UFS_UNIT_DESC_PARAM(hpb_pinned_region_start_offset, _HPB_PIN_RGN_START_OFF, 2);
UFS_UNIT_DESC_PARAM(hpb_number_pinned_regions, _HPB_NUM_PIN_RGNS, 2);
UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
@@ -1065,6 +1084,9 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
&dev_attr_physical_memory_resourse_count.attr,
&dev_attr_context_capabilities.attr,
&dev_attr_large_unit_granularity.attr,
&dev_attr_hpb_lu_max_active_regions.attr,
&dev_attr_hpb_pinned_region_start_offset.attr,
&dev_attr_hpb_number_pinned_regions.attr,
&dev_attr_wb_buf_alloc_units.attr,
NULL,
};

View File

@@ -122,12 +122,14 @@ enum flag_idn {
QUERY_FLAG_IDN_WB_EN = 0x0E,
QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
QUERY_FLAG_IDN_HPB_RESET = 0x11,
QUERY_FLAG_IDN_HPB_EN = 0x12,
};
/* Attribute idn for Query requests */
enum attr_idn {
QUERY_ATTR_IDN_BOOT_LU_EN = 0x00,
QUERY_ATTR_IDN_RESERVED = 0x01,
QUERY_ATTR_IDN_MAX_HPB_SINGLE_CMD = 0x01,
QUERY_ATTR_IDN_POWER_MODE = 0x02,
QUERY_ATTR_IDN_ACTIVE_ICC_LVL = 0x03,
QUERY_ATTR_IDN_OOO_DATA_EN = 0x04,
@@ -195,6 +197,9 @@ enum unit_desc_param {
UNIT_DESC_PARAM_PHY_MEM_RSRC_CNT = 0x18,
UNIT_DESC_PARAM_CTX_CAPABILITIES = 0x20,
UNIT_DESC_PARAM_LARGE_UNIT_SIZE_M1 = 0x22,
UNIT_DESC_PARAM_HPB_LU_MAX_ACTIVE_RGNS = 0x23,
UNIT_DESC_PARAM_HPB_PIN_RGN_START_OFF = 0x25,
UNIT_DESC_PARAM_HPB_NUM_PIN_RGNS = 0x27,
UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS = 0x29,
};
@@ -235,6 +240,8 @@ enum device_desc_param {
DEVICE_DESC_PARAM_PSA_MAX_DATA = 0x25,
DEVICE_DESC_PARAM_PSA_TMT = 0x29,
DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
DEVICE_DESC_PARAM_HPB_VER = 0x40,
DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -283,6 +290,10 @@ enum geometry_desc_param {
GEOMETRY_DESC_PARAM_ENM4_MAX_NUM_UNITS = 0x3E,
GEOMETRY_DESC_PARAM_ENM4_CAP_ADJ_FCTR = 0x42,
GEOMETRY_DESC_PARAM_OPT_LOG_BLK_SIZE = 0x44,
GEOMETRY_DESC_PARAM_HPB_REGION_SIZE = 0x48,
GEOMETRY_DESC_PARAM_HPB_NUMBER_LU = 0x49,
GEOMETRY_DESC_PARAM_HPB_SUBREGION_SIZE = 0x4A,
GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS = 0x4B,
GEOMETRY_DESC_PARAM_WB_MAX_ALLOC_UNITS = 0x4F,
GEOMETRY_DESC_PARAM_WB_MAX_WB_LUNS = 0x53,
GEOMETRY_DESC_PARAM_WB_BUFF_CAP_ADJ = 0x54,
@@ -327,8 +338,10 @@ enum {
/* Possible values for dExtendedUFSFeaturesSupport */
enum {
UFS_DEV_HPB_SUPPORT = BIT(7),
UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
};
#define UFS_DEV_HPB_SUPPORT_VERSION 0x310
#define POWER_DESC_MAX_SIZE 0x62
#define POWER_DESC_MAX_ACTV_ICC_LVLS 16
@@ -460,6 +473,41 @@ struct utp_cmd_rsp {
u8 sense_data[UFS_SENSE_SIZE];
};
struct ufshpb_active_field {
__be16 active_rgn;
__be16 active_srgn;
};
#define HPB_ACT_FIELD_SIZE 4
/**
* struct utp_hpb_rsp - Response UPIU structure
* @residual_transfer_count: Residual transfer count DW-3
* @reserved1: Reserved double words DW-4 to DW-7
* @sense_data_len: Sense data length DW-8 U16
* @desc_type: Descriptor type of sense data
* @additional_len: Additional length of sense data
* @hpb_op: HPB operation type
* @lun: LUN of response UPIU
* @active_rgn_cnt: Active region count
* @inactive_rgn_cnt: Inactive region count
* @hpb_active_field: HPB regions and sub-regions recommended for activation
* @hpb_inactive_field: HPB regions and sub-regions to be inactivated
*/
struct utp_hpb_rsp {
__be32 residual_transfer_count;
__be32 reserved1[4];
__be16 sense_data_len;
u8 desc_type;
u8 additional_len;
u8 hpb_op;
u8 lun;
u8 active_rgn_cnt;
u8 inactive_rgn_cnt;
struct ufshpb_active_field hpb_active_field[2];
__be16 hpb_inactive_field[2];
};
#define UTP_HPB_RSP_SIZE 40
/**
* struct utp_upiu_rsp - general upiu response structure
* @header: UPIU header structure DW-0 to DW-2
@@ -470,6 +518,7 @@ struct utp_upiu_rsp {
struct utp_upiu_header header;
union {
struct utp_cmd_rsp sr;
struct utp_hpb_rsp hr;
struct utp_upiu_query qr;
};
};
@@ -543,6 +592,8 @@ struct ufs_dev_info {
u32 d_wb_alloc_units;
bool b_rpm_dev_flush_capable;
u8 b_presrv_uspc_en;
/* UFS HPB related flag */
bool hpb_enabled;
};
/**

View File

@@ -23,6 +23,7 @@
#include "ufs-debugfs.h"
#include "ufs_bsg.h"
#include "ufshcd-crypto.h"
#include "ufshpb.h"
#include <asm/unaligned.h>
#include <linux/blkdev.h>
@@ -730,7 +731,7 @@ static inline void ufshcd_utmrl_clear(struct ufs_hba *hba, u32 pos)
*/
static inline void ufshcd_outstanding_req_clear(struct ufs_hba *hba, int tag)
{
__clear_bit(tag, &hba->outstanding_reqs);
clear_bit(tag, &hba->outstanding_reqs);
}
/**
@@ -1956,15 +1957,19 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
{
bool queue_resume_work = false;
ktime_t curr_t = ktime_get();
unsigned long flags;
if (!ufshcd_is_clkscaling_supported(hba))
return;
spin_lock_irqsave(hba->host->host_lock, flags);
if (!hba->clk_scaling.active_reqs++)
queue_resume_work = true;
if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress)
if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress) {
spin_unlock_irqrestore(hba->host->host_lock, flags);
return;
}
if (queue_resume_work)
queue_work(hba->clk_scaling.workq,
@@ -1980,21 +1985,26 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
hba->clk_scaling.busy_start_t = curr_t;
hba->clk_scaling.is_busy_started = true;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
{
struct ufs_clk_scaling *scaling = &hba->clk_scaling;
unsigned long flags;
if (!ufshcd_is_clkscaling_supported(hba))
return;
spin_lock_irqsave(hba->host->host_lock, flags);
hba->clk_scaling.active_reqs--;
if (!hba->outstanding_reqs && scaling->is_busy_started) {
scaling->tot_busy_t += ktime_to_us(ktime_sub(ktime_get(),
scaling->busy_start_t));
scaling->busy_start_t = 0;
scaling->is_busy_started = false;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
static inline int ufshcd_monitor_opcode2dir(u8 opcode)
@@ -2020,15 +2030,20 @@ static inline bool ufshcd_should_inform_monitor(struct ufs_hba *hba,
static void ufshcd_start_monitor(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
int dir = ufshcd_monitor_opcode2dir(*lrbp->cmd->cmnd);
unsigned long flags;
spin_lock_irqsave(hba->host->host_lock, flags);
if (dir >= 0 && hba->monitor.nr_queued[dir]++ == 0)
hba->monitor.busy_start_ts[dir] = ktime_get();
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
static void ufshcd_update_monitor(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
int dir = ufshcd_monitor_opcode2dir(*lrbp->cmd->cmnd);
unsigned long flags;
spin_lock_irqsave(hba->host->host_lock, flags);
if (dir >= 0 && hba->monitor.nr_queued[dir] > 0) {
struct request *req = lrbp->cmd->request;
struct ufs_hba_monitor *m = &hba->monitor;
@@ -2052,6 +2067,7 @@ static void ufshcd_update_monitor(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
/* Push forward the busy start of monitor */
m->busy_start_ts[dir] = now;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
/**
@@ -2070,10 +2086,21 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
trace_android_vh_ufs_send_command(hba, lrbp);
ufshcd_add_command_trace(hba, task_tag, "send");
ufshcd_clk_scaling_start_busy(hba);
__set_bit(task_tag, &hba->outstanding_reqs);
if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
ufshcd_start_monitor(hba, lrbp);
ufshcd_writel(hba, 1 << task_tag, REG_UTP_TRANSFER_REQ_DOOR_BELL);
if (ufshcd_has_utrlcnr(hba)) {
set_bit(task_tag, &hba->outstanding_reqs);
ufshcd_writel(hba, 1 << task_tag,
REG_UTP_TRANSFER_REQ_DOOR_BELL);
} else {
unsigned long flags;
spin_lock_irqsave(hba->host->host_lock, flags);
set_bit(task_tag, &hba->outstanding_reqs);
ufshcd_writel(hba, 1 << task_tag,
REG_UTP_TRANSFER_REQ_DOOR_BELL);
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
/* Make sure that doorbell is committed immediately */
wmb();
}
@@ -2637,7 +2664,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
struct ufshcd_lrb *lrbp;
struct ufs_hba *hba;
unsigned long flags;
int tag;
int err = 0;
@@ -2654,6 +2680,43 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
if (!down_read_trylock(&hba->clk_scaling_lock))
return SCSI_MLQUEUE_HOST_BUSY;
switch (hba->ufshcd_state) {
case UFSHCD_STATE_OPERATIONAL:
case UFSHCD_STATE_EH_SCHEDULED_NON_FATAL:
break;
case UFSHCD_STATE_EH_SCHEDULED_FATAL:
/*
* pm_runtime_get_sync() is used at error handling preparation
* stage. If a scsi cmd, e.g. the SSU cmd, is sent from hba's
* PM ops, it can never be finished if we let SCSI layer keep
* retrying it, which gets err handler stuck forever. Neither
* can we let the scsi cmd pass through, because UFS is in bad
* state, the scsi cmd may eventually time out, which will get
* err handler blocked for too long. So, just fail the scsi cmd
* sent from PM ops, err handler can recover PM error anyways.
*/
if (hba->pm_op_in_progress) {
hba->force_reset = true;
set_host_byte(cmd, DID_BAD_TARGET);
cmd->scsi_done(cmd);
goto out;
}
fallthrough;
case UFSHCD_STATE_RESET:
err = SCSI_MLQUEUE_HOST_BUSY;
goto out;
case UFSHCD_STATE_ERROR:
set_host_byte(cmd, DID_ERROR);
cmd->scsi_done(cmd);
goto out;
default:
dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
__func__, hba->ufshcd_state);
set_host_byte(cmd, DID_BAD_TARGET);
cmd->scsi_done(cmd);
goto out;
}
hba->req_abort_count = 0;
err = ufshcd_hold(hba, true);
@@ -2664,7 +2727,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
WARN_ON(ufshcd_is_clkgating_allowed(hba) &&
(hba->clk_gating.state != CLKS_ON));
lrbp = &hba->lrb[tag];
if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
if (hba->pm_op_in_progress)
set_host_byte(cmd, DID_BAD_TARGET);
@@ -2674,6 +2736,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
goto out;
}
lrbp = &hba->lrb[tag];
WARN_ON(lrbp->cmd);
lrbp->cmd = cmd;
lrbp->sense_bufflen = UFS_SENSE_SIZE;
@@ -2693,6 +2756,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->req_abort_skip = false;
err = ufshpb_prep(hba, lrbp);
if (err == -EAGAIN) {
lrbp->cmd = NULL;
ufshcd_release(hba);
goto out;
}
ufshcd_comp_scsi_upiu(hba, lrbp);
err = ufshcd_map_sg(hba, lrbp);
@@ -2704,51 +2774,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
spin_lock_irqsave(hba->host->host_lock, flags);
switch (hba->ufshcd_state) {
case UFSHCD_STATE_OPERATIONAL:
case UFSHCD_STATE_EH_SCHEDULED_NON_FATAL:
break;
case UFSHCD_STATE_EH_SCHEDULED_FATAL:
/*
* pm_runtime_get_sync() is used at error handling preparation
* stage. If a scsi cmd, e.g. the SSU cmd, is sent from hba's
* PM ops, it can never be finished if we let SCSI layer keep
* retrying it, which gets err handler stuck forever. Neither
* can we let the scsi cmd pass through, because UFS is in bad
* state, the scsi cmd may eventually time out, which will get
* err handler blocked for too long. So, just fail the scsi cmd
* sent from PM ops, err handler can recover PM error anyways.
*/
if (hba->pm_op_in_progress) {
hba->force_reset = true;
set_host_byte(cmd, DID_BAD_TARGET);
goto out_compl_cmd;
}
fallthrough;
case UFSHCD_STATE_RESET:
err = SCSI_MLQUEUE_HOST_BUSY;
goto out_compl_cmd;
case UFSHCD_STATE_ERROR:
set_host_byte(cmd, DID_ERROR);
goto out_compl_cmd;
default:
dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
__func__, hba->ufshcd_state);
set_host_byte(cmd, DID_BAD_TARGET);
goto out_compl_cmd;
}
ufshcd_send_command(hba, tag);
spin_unlock_irqrestore(hba->host->host_lock, flags);
goto out;
out_compl_cmd:
scsi_dma_unmap(lrbp->cmd);
lrbp->cmd = NULL;
spin_unlock_irqrestore(hba->host->host_lock, flags);
ufshcd_release(hba);
if (!err)
cmd->scsi_done(cmd);
out:
up_read(&hba->clk_scaling_lock);
return err;
@@ -2903,7 +2929,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
int err;
int tag;
struct completion wait;
unsigned long flags;
down_read(&hba->clk_scaling_lock);
@@ -2923,34 +2948,30 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
req->timeout = msecs_to_jiffies(2 * timeout);
blk_mq_start_request(req);
init_completion(&wait);
lrbp = &hba->lrb[tag];
if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
err = -EBUSY;
goto out;
}
init_completion(&wait);
lrbp = &hba->lrb[tag];
WARN_ON(lrbp->cmd);
err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
if (unlikely(err))
goto out_put_tag;
goto out;
hba->dev_cmd.complete = &wait;
ufshcd_add_query_upiu_trace(hba, tag, "query_send");
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
spin_lock_irqsave(hba->host->host_lock, flags);
ufshcd_send_command(hba, tag);
spin_unlock_irqrestore(hba->host->host_lock, flags);
err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout);
out:
ufshcd_add_query_upiu_trace(hba, tag,
err ? "query_complete_err" : "query_complete");
out_put_tag:
out:
blk_put_request(req);
out_unlock:
up_read(&hba->clk_scaling_lock);
@@ -4895,6 +4916,26 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
return scsi_change_queue_depth(sdev, depth);
}
static void ufshcd_hpb_destroy(struct ufs_hba *hba, struct scsi_device *sdev)
{
/* skip well-known LU */
if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) ||
!(hba->dev_info.hpb_enabled) || !ufshpb_is_allowed(hba))
return;
ufshpb_destroy_lu(hba, sdev);
}
static void ufshcd_hpb_configure(struct ufs_hba *hba, struct scsi_device *sdev)
{
/* skip well-known LU */
if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) ||
!(hba->dev_info.hpb_enabled) || !ufshpb_is_allowed(hba))
return;
ufshpb_init_hpb_lu(hba, sdev);
}
/**
* ufshcd_slave_configure - adjust SCSI device configurations
* @sdev: pointer to SCSI device
@@ -4904,6 +4945,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
struct ufs_hba *hba = shost_priv(sdev->host);
struct request_queue *q = sdev->request_queue;
ufshcd_hpb_configure(hba, sdev);
blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
if (hba->quirks & UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE)
blk_queue_update_dma_alignment(q, PAGE_SIZE - 1);
@@ -4927,6 +4970,9 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
struct ufs_hba *hba;
hba = shost_priv(sdev->host);
ufshcd_hpb_destroy(hba, sdev);
/* Drop the reference as it won't be needed anymore */
if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) {
unsigned long flags;
@@ -5037,6 +5083,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
*/
pm_runtime_get_noresume(hba->dev);
}
if (scsi_status == SAM_STAT_GOOD)
ufshpb_rsp_upiu(hba, lrbp);
break;
case UPIU_TRANSACTION_REJECT_UPIU:
/* TODO: handle Reject UPIU Response */
@@ -5083,6 +5132,24 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
return result;
}
static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
u32 intr_mask)
{
if (!ufshcd_is_auto_hibern8_supported(hba) ||
!ufshcd_is_auto_hibern8_enabled(hba))
return false;
if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
return false;
if (hba->active_uic_cmd &&
(hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER ||
hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT))
return false;
return true;
}
/**
* ufshcd_uic_cmd_compl - handle completion of uic command
* @hba: per adapter instance
@@ -5096,6 +5163,10 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
{
irqreturn_t retval = IRQ_NONE;
spin_lock(hba->host->host_lock);
if (ufshcd_is_auto_hibern8_error(hba, intr_status))
hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
if ((intr_status & UIC_COMMAND_COMPL) && hba->active_uic_cmd) {
hba->active_uic_cmd->argument2 |=
ufshcd_get_uic_cmd_result(hba);
@@ -5116,6 +5187,7 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
if (retval == IRQ_HANDLED)
ufshcd_add_uic_command_trace(hba, hba->active_uic_cmd,
"complete");
spin_unlock(hba->host->host_lock);
return retval;
}
@@ -5152,7 +5224,7 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
lrbp->cmd = NULL;
/* Do not touch lrbp after scsi done */
cmd->scsi_done(cmd);
__ufshcd_release(hba);
ufshcd_release(hba);
update_scaling = true;
} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
@@ -5164,25 +5236,23 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
update_scaling = true;
}
}
if (ufshcd_is_clkscaling_supported(hba) && update_scaling)
hba->clk_scaling.active_reqs--;
}
if (update_scaling)
ufshcd_clk_scaling_update_busy(hba);
}
}
/**
* ufshcd_transfer_req_compl - handle SCSI and query command completion
* ufshcd_trc_handler - handle transfer requests completion
* @hba: per adapter instance
* @use_utrlcnr: get completed requests from UTRLCNR
*
* Returns
* IRQ_HANDLED - If interrupt is valid
* IRQ_NONE - If invalid interrupt
*/
static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
static irqreturn_t ufshcd_trc_handler(struct ufs_hba *hba, bool use_utrlcnr)
{
unsigned long completed_reqs;
u32 tr_doorbell;
unsigned long completed_reqs = 0;
/* Resetting interrupt aggregation counters first and reading the
* DOOR_BELL afterward allows us to handle all the completed requests.
@@ -5195,8 +5265,24 @@ static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
!(hba->quirks & UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR))
ufshcd_reset_intr_aggr(hba);
if (use_utrlcnr) {
u32 utrlcnr;
utrlcnr = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_LIST_COMPL);
if (utrlcnr) {
ufshcd_writel(hba, utrlcnr,
REG_UTP_TRANSFER_REQ_LIST_COMPL);
completed_reqs = utrlcnr;
}
} else {
unsigned long flags;
u32 tr_doorbell;
spin_lock_irqsave(hba->host->host_lock, flags);
tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
completed_reqs = tr_doorbell ^ hba->outstanding_reqs;
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
if (completed_reqs) {
__ufshcd_transfer_req_compl(hba, completed_reqs);
@@ -5705,7 +5791,7 @@ out:
/* Complete requests that have door-bell cleared */
static void ufshcd_complete_requests(struct ufs_hba *hba)
{
ufshcd_transfer_req_compl(hba);
ufshcd_trc_handler(hba, false);
ufshcd_tmc_handler(hba);
}
@@ -5954,13 +6040,11 @@ static void ufshcd_err_handler(struct work_struct *work)
ufshcd_set_eh_in_progress(hba);
spin_unlock_irqrestore(hba->host->host_lock, flags);
ufshcd_err_handling_prepare(hba);
/* Complete requests that have door-bell cleared by h/w */
ufshcd_complete_requests(hba);
spin_lock_irqsave(hba->host->host_lock, flags);
if (hba->ufshcd_state != UFSHCD_STATE_ERROR)
hba->ufshcd_state = UFSHCD_STATE_RESET;
/* Complete requests that have door-bell cleared by h/w */
ufshcd_complete_requests(hba);
/*
* A full reset and restore might have happened after preparation
* is finished, double check whether we should stop.
@@ -6043,12 +6127,11 @@ static void ufshcd_err_handler(struct work_struct *work)
}
lock_skip_pending_xfer_clear:
spin_lock_irqsave(hba->host->host_lock, flags);
/* Complete the requests that are cleared by s/w */
ufshcd_complete_requests(hba);
hba->silence_err_logs = false;
spin_lock_irqsave(hba->host->host_lock, flags);
hba->silence_err_logs = false;
if (err_xfer || err_tm) {
needs_reset = true;
goto do_reset;
@@ -6198,37 +6281,23 @@ static irqreturn_t ufshcd_update_uic_error(struct ufs_hba *hba)
return retval;
}
static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
u32 intr_mask)
{
if (!ufshcd_is_auto_hibern8_supported(hba) ||
!ufshcd_is_auto_hibern8_enabled(hba))
return false;
if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
return false;
if (hba->active_uic_cmd &&
(hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER ||
hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT))
return false;
return true;
}
/**
* ufshcd_check_errors - Check for errors that need s/w attention
* @hba: per-adapter instance
* @intr_status: interrupt status generated by the controller
*
* Returns
* IRQ_HANDLED - If interrupt is valid
* IRQ_NONE - If invalid interrupt
*/
static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status)
{
bool queue_eh_work = false;
irqreturn_t retval = IRQ_NONE;
spin_lock(hba->host->host_lock);
hba->errors |= UFSHCD_ERROR_MASK & intr_status;
if (hba->errors & INT_FATAL_ERRORS) {
ufshcd_update_evt_hist(hba, UFS_EVT_FATAL_ERR,
hba->errors);
@@ -6285,6 +6354,9 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
* itself without s/w intervention or errors that will be
* handled by the SCSI core layer.
*/
hba->errors = 0;
hba->uic_error = 0;
spin_unlock(hba->host->host_lock);
return retval;
}
@@ -6319,13 +6391,17 @@ static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
*/
static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
{
unsigned long flags;
struct request_queue *q = hba->tmf_queue;
struct ctm_info ci = {
.hba = hba,
.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL),
};
spin_lock_irqsave(hba->host->host_lock, flags);
ci.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
blk_mq_tagset_busy_iter(q->tag_set, ufshcd_compl_tm, &ci);
spin_unlock_irqrestore(hba->host->host_lock, flags);
return ci.ncpl ? IRQ_HANDLED : IRQ_NONE;
}
@@ -6342,22 +6418,17 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
{
irqreturn_t retval = IRQ_NONE;
hba->errors = UFSHCD_ERROR_MASK & intr_status;
if (ufshcd_is_auto_hibern8_error(hba, intr_status))
hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
if (hba->errors)
retval |= ufshcd_check_errors(hba);
if (intr_status & UFSHCD_UIC_MASK)
retval |= ufshcd_uic_cmd_compl(hba, intr_status);
if (intr_status & UFSHCD_ERROR_MASK || hba->errors)
retval |= ufshcd_check_errors(hba, intr_status);
if (intr_status & UTP_TASK_REQ_COMPL)
retval |= ufshcd_tmc_handler(hba);
if (intr_status & UTP_TRANSFER_REQ_COMPL)
retval |= ufshcd_transfer_req_compl(hba);
retval |= ufshcd_trc_handler(hba, ufshcd_has_utrlcnr(hba));
return retval;
}
@@ -6378,7 +6449,6 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
struct ufs_hba *hba = __hba;
int retries = hba->nutrs;
spin_lock(hba->host->host_lock);
intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
hba->ufs_stats.last_intr_status = intr_status;
hba->ufs_stats.last_intr_ts = ktime_get();
@@ -6410,7 +6480,6 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
}
spin_unlock(hba->host->host_lock);
return retval;
}
@@ -6587,7 +6656,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
int err = 0;
int tag;
struct completion wait;
unsigned long flags;
u8 upiu_flags;
down_read(&hba->clk_scaling_lock);
@@ -6600,13 +6668,13 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
tag = req->tag;
WARN_ON_ONCE(!ufshcd_valid_tag(hba, tag));
init_completion(&wait);
if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
err = -EBUSY;
goto out;
}
lrbp = &hba->lrb[tag];
init_completion(&wait);
lrbp = &hba->lrb[tag];
WARN_ON(lrbp->cmd);
lrbp->cmd = NULL;
lrbp->sense_bufflen = 0;
@@ -6644,10 +6712,8 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
spin_lock_irqsave(hba->host->host_lock, flags);
ufshcd_send_command(hba, tag);
spin_unlock_irqrestore(hba->host->host_lock, flags);
ufshcd_send_command(hba, tag);
/*
* ignore the returning value here - ufshcd_check_query_response is
* bound to fail since dev_cmd.query and dev_cmd.type were left empty.
@@ -6766,7 +6832,6 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
u32 pos;
int err;
u8 resp = 0xF, lun;
unsigned long flags;
host = cmd->device->host;
hba = shost_priv(host);
@@ -6785,11 +6850,9 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
err = ufshcd_clear_cmd(hba, pos);
if (err)
break;
__ufshcd_transfer_req_compl(hba, pos);
}
}
spin_lock_irqsave(host->host_lock, flags);
ufshcd_transfer_req_compl(hba);
spin_unlock_irqrestore(host->host_lock, flags);
out:
hba->req_abort_count = 0;
@@ -6965,19 +7028,16 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
* will fail, due to spec violation, scsi err handling next step
* will be to send LU reset which, again, is a spec violation.
* To avoid these unnecessary/illegal steps, first we clean up
* the lrb taken by this cmd and mark the lrb as in_use, then
* queue the eh_work and bail.
* the lrb taken by this cmd and re-set it in outstanding_reqs,
* then queue the eh_work and bail.
*/
if (lrbp->lun == UFS_UPIU_UFS_DEVICE_WLUN) {
ufshcd_update_evt_hist(hba, UFS_EVT_ABORT, lrbp->lun);
spin_lock_irqsave(host->host_lock, flags);
if (lrbp->cmd) {
__ufshcd_transfer_req_compl(hba, (1UL << tag));
__set_bit(tag, &hba->outstanding_reqs);
set_bit(tag, &hba->outstanding_reqs);
spin_lock_irqsave(host->host_lock, flags);
hba->force_reset = true;
ufshcd_schedule_eh_work(hba);
}
spin_unlock_irqrestore(host->host_lock, flags);
goto out;
}
@@ -6990,9 +7050,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
if (!err) {
cleanup:
spin_lock_irqsave(host->host_lock, flags);
__ufshcd_transfer_req_compl(hba, (1UL << tag));
spin_unlock_irqrestore(host->host_lock, flags);
out:
err = SUCCESS;
} else {
@@ -7022,19 +7080,16 @@ out:
static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
{
int err;
unsigned long flags;
ufshpb_reset_host(hba);
/*
* Stop the host controller and complete the requests
* cleared by h/w
*/
ufshcd_hba_stop(hba);
spin_lock_irqsave(hba->host->host_lock, flags);
hba->silence_err_logs = true;
ufshcd_complete_requests(hba);
hba->silence_err_logs = false;
spin_unlock_irqrestore(hba->host->host_lock, flags);
/* scale up clocks to max frequency before full reinitialization */
ufshcd_set_clk_freq(hba, true);
@@ -7420,6 +7475,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
{
int err;
u8 model_index;
u8 b_ufs_feature_sup;
u8 *desc_buf;
struct ufs_dev_info *dev_info = &hba->dev_info;
@@ -7447,9 +7503,26 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
/* getting Specification Version in big endian format */
dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];
model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
(b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
bool hpb_en = false;
ufshpb_get_dev_info(hba, desc_buf);
if (!ufshpb_is_legacy(hba))
err = ufshcd_query_flag_retry(hba,
UPIU_QUERY_OPCODE_READ_FLAG,
QUERY_FLAG_IDN_HPB_EN, 0,
&hpb_en);
if (ufshpb_is_legacy(hba) || (!err && hpb_en))
dev_info->hpb_enabled = true;
}
err = ufshcd_read_string_desc(hba, model_index,
&dev_info->model, SD_ASCII_STD);
if (err < 0) {
@@ -7678,6 +7751,10 @@ static int ufshcd_device_geo_params_init(struct ufs_hba *hba)
else if (desc_buf[GEOMETRY_DESC_PARAM_MAX_NUM_LUN] == 0)
hba->dev_info.max_lu_supported = 8;
if (hba->desc_size[QUERY_DESC_IDN_GEOMETRY] >=
GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS)
ufshpb_get_geo_info(hba, desc_buf);
out:
kfree(desc_buf);
return err;
@@ -7820,6 +7897,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
}
ufs_bsg_probe(hba);
ufshpb_init(hba);
scsi_scan_host(hba->host);
pm_runtime_put_sync(hba->dev);
@@ -7965,6 +8043,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async)
/* Enable Auto-Hibernate if configured */
ufshcd_auto_hibern8_enable(hba);
ufshpb_reset(hba);
out:
spin_lock_irqsave(hba->host->host_lock, flags);
if (ret)
@@ -8012,6 +8091,10 @@ out:
static const struct attribute_group *ufshcd_driver_groups[] = {
&ufs_sysfs_unit_descriptor_group,
&ufs_sysfs_lun_attributes_group,
#ifdef CONFIG_SCSI_UFS_HPB
&ufs_sysfs_hpb_stat_group,
&ufs_sysfs_hpb_param_group,
#endif
NULL,
};
@@ -8731,6 +8814,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
req_link_state = UIC_LINK_OFF_STATE;
}
ufshpb_suspend(hba);
/*
* If we can't transition into any of the low power modes
* just gate the clocks.
@@ -8849,6 +8934,7 @@ enable_gating:
hba->dev_info.b_rpm_dev_flush_capable = false;
ufshcd_clear_ua_wluns(hba);
ufshcd_release(hba);
ufshpb_resume(hba);
out:
if (hba->dev_info.b_rpm_dev_flush_capable) {
schedule_delayed_work(&hba->rpm_dev_flush_recheck_work,
@@ -8948,6 +9034,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
/* Enable Auto-Hibernate if configured */
ufshcd_auto_hibern8_enable(hba);
ufshpb_resume(hba);
if (hba->dev_info.b_rpm_dev_flush_capable) {
hba->dev_info.b_rpm_dev_flush_capable = false;
cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
@@ -9178,6 +9266,7 @@ EXPORT_SYMBOL(ufshcd_shutdown);
void ufshcd_remove(struct ufs_hba *hba)
{
ufs_bsg_remove(hba);
ufshpb_remove(hba);
ufs_sysfs_remove_nodes(hba->dev);
blk_cleanup_queue(hba->tmf_queue);
blk_mq_free_tag_set(&hba->tmf_tag_set);


@@ -653,6 +653,31 @@ struct ufs_hba_variant_params {
u32 wb_flush_threshold;
};
#ifdef CONFIG_SCSI_UFS_HPB
/**
* struct ufshpb_dev_info - UFSHPB device related info
* @num_lu: the number of user logical units, used to check whether all LUs
* have finished initialization
* @rgn_size: device reported HPB region size
* @srgn_size: device reported HPB sub-region size
* @slave_conf_cnt: counter used to check whether all LUs have finished initialization
* @hpb_disabled: flag to check if HPB is disabled
* @max_hpb_single_cmd: device reported bMAX_DATA_SIZE_FOR_SINGLE_CMD value
* @is_legacy: flag indicating a legacy (HPB 1.0) device
* @control_mode: either host or device
*/
struct ufshpb_dev_info {
int num_lu;
int rgn_size;
int srgn_size;
atomic_t slave_conf_cnt;
bool hpb_disabled;
u8 max_hpb_single_cmd;
bool is_legacy;
u8 control_mode;
};
#endif
struct ufs_hba_monitor {
unsigned long chunk_size;
@@ -863,6 +888,10 @@ struct ufs_hba {
bool wb_enabled;
struct delayed_work rpm_dev_flush_recheck_work;
#ifdef CONFIG_SCSI_UFS_HPB
struct ufshpb_dev_info ufshpb_dev;
#endif
struct ufs_hba_monitor monitor;
#ifdef CONFIG_SCSI_UFS_CRYPTO
@@ -1166,6 +1195,11 @@ static inline u32 ufshcd_vops_get_ufs_hci_version(struct ufs_hba *hba)
return ufshcd_readl(hba, REG_UFS_VERSION);
}
static inline bool ufshcd_has_utrlcnr(struct ufs_hba *hba)
{
return (hba->ufs_version >= ufshci_version(3, 0));
}
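ufshcd_has_utrlcnr() gates the reworked completion path: UFSHCI 3.0 and later controllers expose a UTP Transfer Request List Completion Notification Register (added further below in ufshci.h as REG_UTP_TRANSFER_REQ_LIST_COMPL). The sketch below is an editor's illustration of the register-level idea under that assumption, not the series' actual handler (which is ufshcd_trc_handler()):
/*
 * Editor's sketch only.  Assumed behaviour: UTRLCNR reports completed
 * request tags and is acknowledged by writing the same bits back, so the
 * doorbell no longer has to be diffed under the host lock.
 */
static unsigned long ufshcd_example_completed_tags(struct ufs_hba *hba)
{
	u32 tags;
	if (!ufshcd_has_utrlcnr(hba))
		return 0;	/* older controllers: keep the doorbell-based path */
	tags = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_LIST_COMPL);
	if (tags)
		ufshcd_writel(hba, tags, REG_UTP_TRANSFER_REQ_LIST_COMPL);
	return tags;
}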
static inline int ufshcd_vops_clk_scale_notify(struct ufs_hba *hba,
bool up, enum ufs_notify_change_status status)
{
@@ -1230,8 +1264,13 @@ static inline int ufshcd_vops_pwr_change_notify(struct ufs_hba *hba,
static inline void ufshcd_vops_setup_xfer_req(struct ufs_hba *hba, int tag,
bool is_scsi_cmd)
{
if (hba->vops && hba->vops->setup_xfer_req)
return hba->vops->setup_xfer_req(hba, tag, is_scsi_cmd);
if (hba->vops && hba->vops->setup_xfer_req) {
unsigned long flags;
spin_lock_irqsave(hba->host->host_lock, flags);
hba->vops->setup_xfer_req(hba, tag, is_scsi_cmd);
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
}
static inline void ufshcd_vops_setup_task_mgmt(struct ufs_hba *hba,


@@ -39,6 +39,7 @@ enum {
REG_UTP_TRANSFER_REQ_DOOR_BELL = 0x58,
REG_UTP_TRANSFER_REQ_LIST_CLEAR = 0x5C,
REG_UTP_TRANSFER_REQ_LIST_RUN_STOP = 0x60,
REG_UTP_TRANSFER_REQ_LIST_COMPL = 0x64,
REG_UTP_TASK_REQ_LIST_BASE_L = 0x70,
REG_UTP_TASK_REQ_LIST_BASE_H = 0x74,
REG_UTP_TASK_REQ_DOOR_BELL = 0x78,

drivers/scsi/ufs/ufshpb.c (new file, 2910 lines; diff suppressed because it is too large)

drivers/scsi/ufs/ufshpb.h (new file, 321 lines)

@@ -0,0 +1,321 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Universal Flash Storage Host Performance Booster
*
* Copyright (C) 2017-2021 Samsung Electronics Co., Ltd.
*
* Authors:
* Yongmyung Lee <ymhungry.lee@samsung.com>
* Jinyoung Choi <j-young.choi@samsung.com>
*/
#ifndef _UFSHPB_H_
#define _UFSHPB_H_
/* hpb response UPIU macro */
#define HPB_RSP_NONE 0x0
#define HPB_RSP_REQ_REGION_UPDATE 0x1
#define HPB_RSP_DEV_RESET 0x2
#define MAX_ACTIVE_NUM 2
#define MAX_INACTIVE_NUM 2
#define DEV_DATA_SEG_LEN 0x14
#define DEV_SENSE_SEG_LEN 0x12
#define DEV_DES_TYPE 0x80
#define DEV_ADDITIONAL_LEN 0x10
/* hpb map & entries macro */
#define HPB_RGN_SIZE_UNIT 512
#define HPB_ENTRY_BLOCK_SIZE 4096
#define HPB_ENTRY_SIZE 0x8
#define PINNED_NOT_SET U32_MAX
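The map and entry macros above pin down the basic HPB arithmetic: one HPB_ENTRY_SIZE (8-byte) L2P entry per HPB_ENTRY_BLOCK_SIZE (4 KiB) logical block, with sizes expressed in HPB_RGN_SIZE_UNIT (512-byte) units. A stand-alone editor's sketch of that arithmetic, assuming the device reports the region size as a power-of-two exponent over the 512-byte unit; the exponent value of 16 is hypothetical, not taken from this patch:
/* Editor's illustration, not part of this patch. */
#include <stdio.h>

#define HPB_RGN_SIZE_UNIT	512
#define HPB_ENTRY_BLOCK_SIZE	4096
#define HPB_ENTRY_SIZE		0x8

int main(void)
{
	unsigned int rgn_size_exp = 16;	/* hypothetical descriptor value */
	unsigned long long rgn_bytes = (unsigned long long)HPB_RGN_SIZE_UNIT << rgn_size_exp;
	unsigned long long entries = rgn_bytes / HPB_ENTRY_BLOCK_SIZE;	/* one entry per 4 KiB block */
	unsigned long long map_bytes = entries * HPB_ENTRY_SIZE;	/* 8 bytes per entry */

	printf("region: %llu MiB of LBA space, %llu KiB of cached L2P map\n",
	       rgn_bytes >> 20, map_bytes >> 10);
	return 0;
}
With an exponent of 16 this works out to a 32 MiB region backed by 64 KiB of cached map.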
/* hpb support chunk size */
#define HPB_LEGACY_CHUNK_HIGH 1
#define HPB_MULTI_CHUNK_LOW 7
#define HPB_MULTI_CHUNK_HIGH 256
/* hpb vendor defined opcode */
#define UFSHPB_READ 0xF8
#define UFSHPB_READ_BUFFER 0xF9
#define UFSHPB_READ_BUFFER_ID 0x01
#define UFSHPB_WRITE_BUFFER 0xFA
#define UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID 0x01
#define UFSHPB_WRITE_BUFFER_PREFETCH_ID 0x02
#define UFSHPB_WRITE_BUFFER_INACT_ALL_ID 0x03
#define HPB_WRITE_BUFFER_CMD_LENGTH 10
#define MAX_HPB_READ_ID 0x7F
#define HPB_READ_BUFFER_CMD_LENGTH 10
#define LU_ENABLED_HPB_FUNC 0x02
#define HPB_RESET_REQ_RETRIES 10
#define HPB_MAP_REQ_RETRIES 5
#define HPB_REQUEUE_TIME_MS 0
#define HPB_SUPPORT_VERSION 0x200
#define HPB_SUPPORT_LEGACY_VERSION 0x100
enum UFSHPB_MODE {
HPB_HOST_CONTROL,
HPB_DEVICE_CONTROL,
};
enum UFSHPB_STATE {
HPB_INIT = 0,
HPB_PRESENT = 1,
HPB_SUSPEND,
HPB_FAILED,
HPB_RESET,
};
enum HPB_RGN_STATE {
HPB_RGN_INACTIVE,
HPB_RGN_ACTIVE,
/* pinned regions are always active */
HPB_RGN_PINNED,
};
enum HPB_SRGN_STATE {
HPB_SRGN_UNUSED,
HPB_SRGN_INVALID,
HPB_SRGN_VALID,
HPB_SRGN_ISSUED,
};
/**
* struct ufshpb_lu_info - UFSHPB logical unit related info
* @num_blocks: the number of logical blocks
* @pinned_start: the start region number of the pinned region
* @num_pinned: the number of pinned regions
* @max_active_rgns: maximum number of active regions
*/
struct ufshpb_lu_info {
int num_blocks;
int pinned_start;
int num_pinned;
int max_active_rgns;
};
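Given this per-LU information and a region that spans a known number of 4 KiB blocks, the per-LU region count is a round-up division. A hypothetical helper, shown only to make that relationship concrete (the name is not part of the driver):
/* Editor's illustration only; DIV_ROUND_UP() comes from <linux/kernel.h>. */
static inline u32 example_rgns_per_lu(const struct ufshpb_lu_info *lu_info,
				      u32 blocks_per_rgn)
{
	return DIV_ROUND_UP((u32)lu_info->num_blocks, blocks_per_rgn);
}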
struct ufshpb_map_ctx {
struct page **m_page;
unsigned long *ppn_dirty;
};
struct ufshpb_subregion {
struct ufshpb_map_ctx *mctx;
enum HPB_SRGN_STATE srgn_state;
int rgn_idx;
int srgn_idx;
bool is_last;
/* subregion reads - for host mode */
unsigned int reads;
/* below information is used by rsp_list */
struct list_head list_act_srgn;
};
struct ufshpb_region {
struct ufshpb_lu *hpb;
struct ufshpb_subregion *srgn_tbl;
enum HPB_RGN_STATE rgn_state;
int rgn_idx;
int srgn_cnt;
/* below information is used by rsp_list */
struct list_head list_inact_rgn;
/* below information is used by lru */
struct list_head list_lru_rgn;
unsigned long rgn_flags;
#define RGN_FLAG_DIRTY 0
#define RGN_FLAG_UPDATE 1
/* region reads - for host mode */
spinlock_t rgn_lock;
unsigned int reads;
/* region "cold" timer - for host mode */
ktime_t read_timeout;
unsigned int read_timeout_expiries;
struct list_head list_expired_rgn;
};
#define for_each_sub_region(rgn, i, srgn) \
for ((i) = 0; \
((i) < (rgn)->srgn_cnt) && ((srgn) = &(rgn)->srgn_tbl[i]); \
(i)++)
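for_each_sub_region() walks a region's sub-region table. A minimal usage sketch built only from the definitions above; count_valid_srgns() is a hypothetical helper, not driver code:
static int count_valid_srgns(struct ufshpb_region *rgn)
{
	struct ufshpb_subregion *srgn;
	int i, valid = 0;

	for_each_sub_region(rgn, i, srgn)
		if (srgn->srgn_state == HPB_SRGN_VALID)
			valid++;

	return valid;
}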
/**
* struct ufshpb_req - HPB related request structure (write/read buffer)
* @req: block layer request structure
* @bio: bio for this request
* @hpb: the ufshpb_lu this request belongs to
* @list_req: ufshpb_req mempool list
* @mctx: L2P map information
* @mctx: L2P map information
* @rgn_idx: target region index
* @srgn_idx: target sub-region index
* @lun: target logical unit number
* @m_page: L2P map information data for pre-request
* @len: length of host-side cached L2P map in m_page
* @lpn: start LPN of L2P map in m_page
*/
struct ufshpb_req {
struct request *req;
struct bio *bio;
struct ufshpb_lu *hpb;
struct list_head list_req;
union {
struct {
struct ufshpb_map_ctx *mctx;
unsigned int rgn_idx;
unsigned int srgn_idx;
unsigned int lun;
} rb;
struct {
struct page *m_page;
unsigned int len;
unsigned long lpn;
} wb;
};
};
struct victim_select_info {
struct list_head lh_lru_rgn; /* LRU list of regions */
int max_lru_active_cnt; /* supported hpb #region - pinned #region */
atomic_t active_cnt;
};
/**
* ufshpb_params - ufs hpb parameters
* @requeue_timeout_ms - requeue threshold of wb command (0x2)
* @activation_thld - min reads [IOs] to activate/update a region
* @normalization_factor - shift right the region's reads
* @eviction_thld_enter - min reads [IOs] for the entering region in eviction
* @eviction_thld_exit - max reads [IOs] for the exiting region in eviction
* @read_timeout_ms - timeout [ms] from the last read IO to the region
* @read_timeout_expiries - number of allowed timeout expiries
* @timeout_polling_interval_ms - interval at which timeouts are checked
* @inflight_map_req - number of inflight map requests
*/
struct ufshpb_params {
unsigned int requeue_timeout_ms;
unsigned int activation_thld;
unsigned int normalization_factor;
unsigned int eviction_thld_enter;
unsigned int eviction_thld_exit;
unsigned int read_timeout_ms;
unsigned int read_timeout_expiries;
unsigned int timeout_polling_interval_ms;
unsigned int inflight_map_req;
};
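The kernel-doc above describes the host-control-mode tunables in prose; the sketch below restates two of them in code, purely as an editor's illustration (these helpers are hypothetical and not taken from ufshpb.c):
/*
 * normalization_factor: periodic right-shift (decay) of per-region read
 * counters; activation_thld: read count at which a region becomes an
 * activation candidate.
 */
static void example_normalize_reads(unsigned int *rgn_reads,
				    const struct ufshpb_params *p)
{
	*rgn_reads >>= p->normalization_factor;
}

static bool example_activation_candidate(unsigned int rgn_reads,
					 const struct ufshpb_params *p)
{
	return rgn_reads >= p->activation_thld;
}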
struct ufshpb_stats {
u64 hit_cnt;
u64 miss_cnt;
u64 rb_noti_cnt;
u64 rb_active_cnt;
u64 rb_inactive_cnt;
u64 map_req_cnt;
u64 pre_req_cnt;
u64 umap_req_cnt;
};
struct ufshpb_lu {
int lun;
struct scsi_device *sdev_ufs_lu;
spinlock_t rgn_state_lock; /* protects rgn/srgn state */
struct ufshpb_region *rgn_tbl;
atomic_t hpb_state;
spinlock_t rsp_list_lock;
struct list_head lh_act_srgn; /* hold rsp_list_lock */
struct list_head lh_inact_rgn; /* hold rsp_list_lock */
/* pre request information */
struct ufshpb_req *pre_req;
int num_inflight_pre_req;
int throttle_pre_req;
int num_inflight_map_req;
struct list_head lh_pre_req_free;
int cur_read_id;
int pre_req_min_tr_len;
int pre_req_max_tr_len;
/* cached L2P map management worker */
struct work_struct map_work;
/* for selecting victim */
struct victim_select_info lru_info;
struct work_struct ufshpb_normalization_work;
struct delayed_work ufshpb_read_to_work;
unsigned long work_data_bits;
#define TIMEOUT_WORK_RUNNING 0
/* pinned region information */
u32 lu_pinned_start;
u32 lu_pinned_end;
/* HPB related configuration */
u32 rgns_per_lu;
u32 srgns_per_lu;
u32 last_srgn_entries;
int srgns_per_rgn;
u32 srgn_mem_size;
u32 entries_per_rgn_mask;
u32 entries_per_rgn_shift;
u32 entries_per_srgn;
u32 entries_per_srgn_mask;
u32 entries_per_srgn_shift;
u32 pages_per_srgn;
bool is_hcm;
struct ufshpb_stats stats;
struct ufshpb_params params;
struct kmem_cache *map_req_cache;
struct kmem_cache *m_page_cache;
struct list_head list_hpb_lu;
};
struct ufs_hba;
struct ufshcd_lrb;
#ifndef CONFIG_SCSI_UFS_HPB
static int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { return 0; }
static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
static void ufshpb_resume(struct ufs_hba *hba) {}
static void ufshpb_suspend(struct ufs_hba *hba) {}
static void ufshpb_reset(struct ufs_hba *hba) {}
static void ufshpb_reset_host(struct ufs_hba *hba) {}
static void ufshpb_init(struct ufs_hba *hba) {}
static void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
static void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
static void ufshpb_remove(struct ufs_hba *hba) {}
static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
static bool ufshpb_is_legacy(struct ufs_hba *hba) { return false; }
#else
int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_resume(struct ufs_hba *hba);
void ufshpb_suspend(struct ufs_hba *hba);
void ufshpb_reset(struct ufs_hba *hba);
void ufshpb_reset_host(struct ufs_hba *hba);
void ufshpb_init(struct ufs_hba *hba);
void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev);
void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev);
void ufshpb_remove(struct ufs_hba *hba);
bool ufshpb_is_allowed(struct ufs_hba *hba);
void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
bool ufshpb_is_legacy(struct ufs_hba *hba);
extern struct attribute_group ufs_sysfs_hpb_stat_group;
extern struct attribute_group ufs_sysfs_hpb_param_group;
#endif
#endif /* End of Header */


@@ -139,10 +139,6 @@ struct blk_mq_hw_ctx {
* shared across request queues.
*/
atomic_t nr_active;
/**
* @elevator_queued: Number of queued requests on hctx.
*/
atomic_t elevator_queued;
/** @cpuhp_online: List to store request if CPU is going to die */
struct hlist_node cpuhp_online;


@@ -588,6 +588,8 @@ struct request_queue {
#define BLK_MAX_WRITE_HINTS 5
u64 write_hints[BLK_MAX_WRITE_HINTS];
ANDROID_OEM_DATA(1);
};
/* Keep blk_queue_flag_name[] in sync with the definitions below */


@@ -381,9 +381,6 @@ struct dma_buf_ops {
* @sysfs_entry: for exposing information about this buffer in sysfs.
* The attachment_uid member of @sysfs_entry is protected by dma_resv lock
* and is incremented on each attach.
* @mmap_count: number of times buffer has been mmapped.
* @exp_vm_ops: the vm ops provided by the buffer exporter.
* @vm_ops: the overridden vm_ops used to track mmap_count of the buffer.
*
* This represents a shared buffer, created by calling dma_buf_export(). The
* userspace representation is a normal file descriptor, which can be created by
@@ -427,9 +424,6 @@ struct dma_buf {
unsigned int attachment_uid;
struct kset *attach_stats_kset;
} *sysfs_entry;
int mmap_count;
const struct vm_operations_struct *exp_vm_ops;
struct vm_operations_struct vm_ops;
#endif
};


@@ -172,6 +172,8 @@ extern struct request *elv_rb_find(struct rb_root *, sector_t);
/* Supports zoned block devices sequential write constraint */
#define ELEVATOR_F_ZBD_SEQ_WRITE (1U << 0)
/* Supports scheduling on multiple hardware queues */
#define ELEVATOR_F_MQ_AWARE (1U << 1)
#endif /* CONFIG_BLOCK */
#endif


@@ -468,13 +468,6 @@ struct mm_struct {
*/
atomic_t has_pinned;
/**
* @write_protect_seq: Locked when any thread is write
* protecting pages mapped by this mm to enforce a later COW,
* for instance during page table copying for fork().
*/
seqcount_t write_protect_seq;
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
#endif
@@ -483,6 +476,18 @@ struct mm_struct {
spinlock_t page_table_lock; /* Protects page tables and some
* counters
*/
/*
* With some kernel config, the current mmap_lock's offset
* inside 'mm_struct' is at 0x120, which is very optimal, as
* its two hot fields 'count' and 'owner' sit in 2 different
* cachelines, and when mmap_lock is highly contended, both
* of the 2 fields will be accessed frequently, current layout
* will help to reduce cache bouncing.
*
* So please be careful with adding new fields before
* mmap_lock, which can easily push the 2 fields into one
* cacheline.
*/
struct rw_semaphore mmap_lock;
struct list_head mmlist; /* List of maybe swapped mm's. These
@@ -503,7 +508,15 @@ struct mm_struct {
unsigned long stack_vm; /* VM_STACK */
unsigned long def_flags;
/**
* @write_protect_seq: Locked when any thread is write
* protecting pages mapped by this mm to enforce a later COW,
* for instance during page table copying for fork().
*/
seqcount_t write_protect_seq;
spinlock_t arg_lock; /* protect the below fields */
unsigned long start_code, end_code, start_data, end_data;
unsigned long start_brk, brk, start_stack;
unsigned long arg_start, arg_end, env_start, env_end;


@@ -313,6 +313,8 @@ struct mmc_card {
unsigned int bouncesz; /* Bounce buffer size */
struct workqueue_struct *complete_wq; /* Private workqueue */
ANDROID_VENDOR_DATA(1);
};
static inline bool mmc_large_sector(struct mmc_card *card)


@@ -485,6 +485,7 @@ struct mmc_host {
/* Host Software Queue support */
bool hsq_enabled;
ANDROID_VENDOR_DATA(1);
ANDROID_OEM_DATA(1);
unsigned long private[] ____cacheline_aligned;


@@ -0,0 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM remoteproc
#define TRACE_INCLUDE_PATH trace/hooks
#if !defined(_TRACE_HOOK_RPROC_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HOOK_RPROC_H
#include <linux/tracepoint.h>
#include <trace/hooks/vendor_hooks.h>
struct rproc;
DECLARE_HOOK(android_vh_rproc_recovery,
TP_PROTO(struct rproc *rproc),
TP_ARGS(rproc));
#endif /* _TRACE_HOOK_RPROC_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
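DECLARE_HOOK() is tracepoint-backed, so a vendor module attaches to android_vh_rproc_recovery through the generated register helper; probes receive the void *data cookie followed by the TP_PROTO arguments. An editor's sketch under that assumption; the probe, init, and exit names are hypothetical:
#include <linux/module.h>
#include <trace/hooks/remoteproc.h>

/* Hypothetical vendor probe; not part of this change. */
static void example_rproc_recovery_probe(void *data, struct rproc *rproc)
{
	pr_info("remoteproc recovery vendor hook fired\n");
}

static int __init example_hook_init(void)
{
	return register_trace_android_vh_rproc_recovery(example_rproc_recovery_probe, NULL);
}
module_init(example_hook_init);

static void __exit example_hook_exit(void)
{
	unregister_trace_android_vh_rproc_recovery(example_rproc_recovery_probe, NULL);
}
module_exit(example_hook_exit);

MODULE_LICENSE("GPL v2");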


@@ -370,14 +370,6 @@ DECLARE_RESTRICTED_HOOK(android_rvh_find_new_ilb,
TP_PROTO(struct cpumask *nohz_idle_cpus_mask, int *ilb),
TP_ARGS(nohz_idle_cpus_mask, ilb), 1);
DECLARE_HOOK(android_vh_force_compatible_pre,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_force_compatible_post,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_RESTRICTED_HOOK(android_rvh_force_compatible_pre,
TP_PROTO(void *unused),
TP_ARGS(unused), 1);


@@ -2107,7 +2107,6 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
* offlining of the chosen destination CPU, so take the hotplug
* lock to ensure that the migration succeeds.
*/
trace_android_vh_force_compatible_pre(NULL);
trace_android_rvh_force_compatible_pre(NULL);
cpus_read_lock();
if (!cpumask_available(new_mask))
@@ -2133,7 +2132,6 @@ out_set_mask:
WARN_ON(set_cpus_allowed_ptr(p, override_mask));
out_free_mask:
cpus_read_unlock();
trace_android_vh_force_compatible_post(NULL);
trace_android_rvh_force_compatible_post(NULL);
free_cpumask_var(new_mask);
}


@@ -613,6 +613,8 @@ struct cfs_rq {
int throttle_count;
struct list_head throttled_list;
#endif /* CONFIG_CFS_BANDWIDTH */
ANDROID_VENDOR_DATA_ARRAY(1, 16);
#endif /* CONFIG_FAIR_GROUP_SCHED */
};


@@ -3244,9 +3244,6 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
u32 off = skb_mac_header_len(skb);
int ret;
if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
return -ENOTSUPP;
ret = skb_cow(skb, len_diff);
if (unlikely(ret < 0))
return ret;
@@ -3258,17 +3255,11 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
if (skb_is_gso(skb)) {
struct skb_shared_info *shinfo = skb_shinfo(skb);
/* SKB_GSO_TCPV4 needs to be changed into
* SKB_GSO_TCPV6.
*/
/* SKB_GSO_TCPV4 needs to be changed into SKB_GSO_TCPV6. */
if (shinfo->gso_type & SKB_GSO_TCPV4) {
shinfo->gso_type &= ~SKB_GSO_TCPV4;
shinfo->gso_type |= SKB_GSO_TCPV6;
}
/* Header must be checked, and gso_segs recomputed. */
shinfo->gso_type |= SKB_GSO_DODGY;
shinfo->gso_segs = 0;
}
skb->protocol = htons(ETH_P_IPV6);
@@ -3283,9 +3274,6 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
u32 off = skb_mac_header_len(skb);
int ret;
if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
return -ENOTSUPP;
ret = skb_unclone(skb, GFP_ATOMIC);
if (unlikely(ret < 0))
return ret;
@@ -3297,17 +3285,11 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
if (skb_is_gso(skb)) {
struct skb_shared_info *shinfo = skb_shinfo(skb);
/* SKB_GSO_TCPV6 needs to be changed into
* SKB_GSO_TCPV4.
*/
/* SKB_GSO_TCPV6 needs to be changed into SKB_GSO_TCPV4. */
if (shinfo->gso_type & SKB_GSO_TCPV6) {
shinfo->gso_type &= ~SKB_GSO_TCPV6;
shinfo->gso_type |= SKB_GSO_TCPV4;
}
/* Header must be checked, and gso_segs recomputed. */
shinfo->gso_type |= SKB_GSO_DODGY;
shinfo->gso_segs = 0;
}
skb->protocol = htons(ETH_P_IP);