Merge branch 'android12-5.10' into android12-5.10-lts

Sync up with android12-5.10 for the following commits:

973a19f620 ANDROID: Update the ABI representation
55d7c4eca6 ANDROID: Update symbol list for mtk
46e579716d FROMGIT: selinux: use __GFP_NOWARN with GFP_NOWAIT
53ccd64e35 ANDROID: GKI: 6/18/2021 KMI update
c3fe3880d6 ANDROID: power: Add ANDROID_OEM_DATA_ARRAY in freq_qos_request.
cb99d1b88c ANDROID: gic: change  gic resume vendor hook para
0c24ee770c BACKPORT: FROMGIT: kasan: disable freed user page poisoning with HW tags
339a43c9bd BACKPORT: FROMGIT: arm64: mte: handle tags zeroing at page allocation time
3cb3beda6f FROMGIT: kasan: use separate (un)poison implementation for integrated init
1773be5077 ANDROID: Add SND_VERBOSE_PROCFS for alsa framework
ae618c699c FROMGIT: scsi: ufs: Utilize Transfer Request List Completion Notification Register
7613068f95 BACKPORT: FROMGIT: scsi: ufs: Optimize host lock on transfer requests send/compl paths
bfcb3876f5 FROMGIT: scsi: ufs: qcom: Use ufshci_version() function
641d850a48 FROMGIT: scsi: ufs: core: Use a function to calculate versions
a0679e6b5c FROMGIT: scsi: ufs: Remove a redundant command completion logic in error handler
53fa86c571 BACKPORT: FROMGIT: scsi: ufs: core: Introduce HBA performance monitor sysfs nodes
fe4ba3ccfc ANDROID: GKI: USB: add Android ABI padding to some structures
531cba772c FROMGIT: usb: typec: tcpm: Introduce snk_vdo_v1 for SVDM version 1.0
e5474ff19b ANDROID: GKI: enable CONFIG_PCI_IOV=y
da33f6fa6c ANDROID: mm: Add hooks to filemap_fault for oem's optimization
97e5f9c0f8 FROMLIST: mm: compaction: fix wakeup logic of proactive compaction
71fdbce075 FROMLIST: mm: compaction: support triggering of proactive compaction by user
e6c0526092 ANDROID: minor fixups of xt_IDLETIMER support
15fdca98a0 FROMGIT: usb: typec: Add the missed altmode_id_remove() in typec_register_altmode()
57cb3d1f7b FROMGIT: usb: typec: tcpm: Relax disconnect threshold during power negotiation
c2ca0980e6 FROMGIT: usb: typec: tcpm: Ignore Vsafe0v in PR_SWAP_SNK_SRC_SOURCE_ON state
c4707b7f87 FROMGIT: usb: typec: tcpci: Fix up sink disconnect thresholds for PD
bba0d8a87e ANDROID: GKI: Enable some necessary CFG80211 configs for WIFI
49f5842539 ANDROID: Add send_sig_info to the reserved symbol list
76081a5f72 FROMLIST: kbuild: mkcompile_h: consider timestamp if KBUILD_BUILD_TIMESTAMP is set

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I557b967c58e6e50d619fefe4e67253dbb2886506
Author: Greg Kroah-Hartman
Date:   2021-06-21 11:01:27 +02:00
48 changed files with 8266 additions and 6240 deletions


@@ -987,6 +987,132 @@ Description: This entry shows the target state of an UFS UIC link
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_enable
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the status of performance monitor enablement
and it can be used to start/stop the monitor. When the monitor
is stopped, the performance data collected is also cleared.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_chunk_size
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file tells the monitor to focus on requests transferring
data of specific chunk size (in Bytes). 0 means any chunk size.
It can only be changed when monitor is disabled.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_sectors
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how many sectors (in 512 Bytes) have been
sent from device to host after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_busy
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how long (in micro seconds) has been spent
sending data from device to host after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_nr_requests
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how many read requests have been sent after
monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_max
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the maximum latency (in micro seconds) of
read requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_min
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the minimum latency (in micro seconds) of
read requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_avg
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the average latency (in micro seconds) of
read requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_sum
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the total latency (in micro seconds) of
read requests sent after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_sectors
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how many sectors (in 512 Bytes) have been sent
from host to device after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_busy
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how long (in micro seconds) has been spent
sending data from host to device after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_nr_requests
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows how many write requests have been sent after
monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_max
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the maximum latency (in micro seconds) of write
requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_min
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the minimum latency (in micro seconds) of write
requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_avg
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the average latency (in micro seconds) of write
requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_sum
Date: January 2021
Contact: Can Guo <cang@codeaurora.org>
Description: This file shows the total latency (in micro seconds) of write
requests after monitor gets started.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
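The monitor counters above compose naturally: `read_req_latency_sum` divided by `read_nr_requests` should agree with `read_req_latency_avg`. A minimal shell sketch of that relationship — the sysfs path glob is taken from the ABI entries, but the counter values below are illustrative stand-ins, not reads from real hardware:

```shell
#!/bin/sh
# Sketch of consuming the UFS performance monitor nodes described above.
# On a real device the glob resolves to a concrete platform-device directory.
MON="/sys/bus/platform/drivers/ufshcd/*/monitor"

# On real hardware (as root) you would do:
#   echo 1 > $MON/monitor_enable          # start monitoring (stop also clears data)
#   sum=$(cat $MON/read_req_latency_sum)  # total read latency, microseconds
#   nr=$(cat $MON/read_nr_requests)       # read requests observed so far
# Sample values stand in for the sysfs reads here:
sum=123000
nr=820

# Average latency per read request, in microseconds; this should match
# what the read_req_latency_avg node reports.
avg=$((sum / nr))
echo "average read latency: ${avg} us"
```

The same arithmetic applies to the `write_*` counters; only the node names differ.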


@@ -127,7 +127,8 @@ compaction_proactiveness
This tunable takes a value in the range [0, 100] with a default value of
20. This tunable determines how aggressively compaction is done in the
background. On write of non zero value to this tunable will immediately
trigger the proactive compaction. Setting it to 0 disables proactive compaction.
Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
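The tunable lives under `/proc/sys/vm`. A short config fragment showing the behaviour the patch above adds (requires root on a kernel carrying the FROMLIST change; shown for illustration rather than as a runnable script):

```shell
# Read the current proactiveness (default 20).
cat /proc/sys/vm/compaction_proactiveness

# With the patch above, writing any non-zero value both sets the
# proactiveness and immediately triggers a proactive compaction run.
echo 30 > /proc/sys/vm/compaction_proactiveness

# Writing 0 disables proactive compaction entirely.
echo 0 > /proc/sys/vm/compaction_proactiveness
```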

android/abi_gki_aarch64.xml: Normal file → Executable file (diff suppressed because it is too large)


@@ -552,6 +552,7 @@
__drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
__drm_atomic_helper_crtc_reset
drm_atomic_helper_disable_all
drm_atomic_helper_disable_plane
drm_atomic_helper_duplicate_state
drm_atomic_helper_fake_vblank
@@ -574,6 +575,7 @@
drm_atomic_private_obj_init
drm_atomic_set_crtc_for_plane
drm_atomic_set_fb_for_plane
drm_atomic_set_mode_for_crtc
drm_atomic_state_alloc
drm_atomic_state_clear
__drm_atomic_state_free
@@ -641,6 +643,7 @@
drmm_mode_config_init
drm_mode_config_reset
drm_mode_convert_to_umode
drm_mode_copy
drm_mode_duplicate
drm_mode_equal
drm_mode_equal_no_clocks
@@ -1615,6 +1618,7 @@
skb_clone
skb_copy
skb_dequeue
skb_dequeue_tail
skb_pull
skb_push
skb_put


@@ -21,6 +21,7 @@
__alloc_percpu
__alloc_skb
alloc_workqueue
all_vm_events
android_debug_symbol
android_rvh_probe_register
anon_inode_getfd
@@ -31,6 +32,7 @@
arch_timer_read_counter
arm64_const_caps_ready
arm64_use_ng_mappings
__arm_smccc_hvc
__arm_smccc_smc
arp_tbl
atomic_notifier_call_chain
@@ -169,18 +171,24 @@
clk_unprepare
clocks_calc_mult_shift
__close_fd
cma_alloc
cma_release
compat_alloc_user_space
compat_ptr_ioctl
complete
complete_all
completion_done
component_add
component_add_typed
component_bind_all
component_del
component_master_add_with_match
component_master_del
component_match_add_release
component_match_add_typed
component_unbind_all
config_ep_by_speed
config_group_init_type_name
console_drivers
console_suspend_enabled
console_unlock
@@ -195,6 +203,7 @@
cpu_bit_bitmap
cpufreq_add_update_util_hook
cpufreq_cpu_get
cpufreq_cpu_put
cpufreq_disable_fast_switch
cpufreq_driver_fast_switch
cpufreq_driver_resolve_freq
@@ -232,9 +241,13 @@
cpumask_next_and
cpu_number
__cpu_online_mask
cpu_pm_register_notifier
cpu_pm_unregister_notifier
__cpu_possible_mask
__cpu_present_mask
cpu_scale
cpus_read_lock
cpus_read_unlock
cpu_subsys
cpu_topology
crc32_le
@@ -306,10 +319,13 @@
dev_get_by_name
dev_get_regmap
dev_get_stats
device_add
device_add_disk
device_create
device_create_bin_file
device_create_file
device_create_with_groups
device_del
device_destroy
device_for_each_child
device_get_child_node_count
@@ -320,18 +336,22 @@
device_link_add
device_link_remove
device_property_present
device_property_read_string
device_property_read_u32_array
device_register
device_remove_bin_file
device_remove_file
device_rename
device_set_of_node_from_dev
device_show_bool
device_store_bool
device_unregister
_dev_info
__dev_kfree_skb_any
__devm_alloc_percpu
devm_blk_ksm_init
devm_clk_bulk_get
devm_clk_bulk_get_optional
devm_clk_get
devm_clk_get_optional
devm_clk_register
@@ -350,8 +370,10 @@
devm_gpio_free
devm_gpio_request
devm_gpio_request_one
devm_hwrng_register
devm_i2c_new_dummy_device
devm_iio_channel_get
devm_iio_channel_get_all
devm_iio_device_alloc
__devm_iio_device_register
devm_input_allocate_device
@@ -371,6 +393,7 @@
devm_of_phy_get_by_index
__devm_of_phy_provider_register
devm_of_platform_populate
devm_of_pwm_get
devm_phy_create
devm_phy_get
devm_pinctrl_get
@@ -386,7 +409,9 @@
__devm_regmap_init_i2c
__devm_regmap_init_mmio_clk
devm_regulator_get
devm_regulator_get_exclusive
devm_regulator_get_optional
devm_regulator_put
devm_regulator_register
devm_regulator_register_notifier
devm_regulator_unregister_notifier
@@ -470,10 +495,16 @@
dma_free_attrs
dma_get_sgtable_attrs
dma_heap_add
dma_heap_buffer_alloc
dma_heap_bufferfd_alloc
dma_heap_buffer_free
dma_heap_find
dma_heap_get_dev
dma_heap_get_name
dma_heap_put
dmam_alloc_attrs
dma_map_page_attrs
dma_map_resource
dma_map_sg_attrs
dmam_free_coherent
dma_mmap_attrs
@@ -489,6 +520,7 @@
dma_sync_single_for_cpu
dma_sync_single_for_device
dma_unmap_page_attrs
dma_unmap_resource
dma_unmap_sg_attrs
do_exit
do_wait_intr_irq
@@ -642,6 +674,7 @@
failure_tracking
fd_install
fget
find_get_pid
find_last_bit
find_next_bit
find_next_zero_bit
@@ -658,6 +691,10 @@
font_vga_8x16
for_each_kernel_tracepoint
fput
frame_vector_create
frame_vector_destroy
frame_vector_to_pages
frame_vector_to_pfns
free_irq
free_netdev
__free_pages
@@ -694,6 +731,7 @@
gen_pool_set_algo
gen_pool_size
gen_pool_virt_to_phys
getboottime64
get_cpu_device
get_cpu_idle_time
get_cpu_idle_time_us
@@ -703,6 +741,7 @@
__get_free_pages
get_governor_parent_kobj
get_kernel_pages
get_pid_task
get_random_bytes
get_random_u32
__get_task_comm
@@ -712,6 +751,7 @@
get_user_pages
get_user_pages_fast
get_user_pages_remote
get_vaddr_frames
get_zeroed_page
gic_nonsecure_priorities
gov_attr_set_get
@@ -725,6 +765,7 @@
gpiochip_lock_as_irq
gpiochip_unlock_as_irq
gpiod_direction_input
gpiod_direction_output
gpiod_direction_output_raw
gpiod_get_raw_value
gpiod_set_debounce
@@ -737,6 +778,7 @@
gpio_to_desc
handle_level_irq
handle_nested_irq
handle_simple_irq
hashlen_string
have_governor_per_policy
hci_alloc_dev
@@ -764,6 +806,7 @@
i2c_smbus_write_i2c_block_data
i2c_transfer
i2c_transfer_buffer_flags
i2c_unregister_device
icc_link_create
icc_node_add
icc_node_create
@@ -774,6 +817,9 @@
icc_put
icc_set_bw
icc_sync_state
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_destroy
idr_find
@@ -783,10 +829,12 @@
ieee80211_channel_to_freq_khz
ieee80211_freq_khz_to_channel
ieee80211_get_channel_khz
iio_alloc_pollfunc
iio_buffer_init
iio_buffer_put
iio_channel_get
iio_channel_release
iio_dealloc_pollfunc
iio_device_attach_buffer
__iio_device_register
iio_device_unregister
@@ -796,6 +844,7 @@
iio_read_channel_attribute
iio_read_channel_processed
iio_read_channel_raw
iio_trigger_notify_done
inc_zone_page_state
in_egroup_p
init_net
@@ -834,14 +883,17 @@
iommu_put_dma_cookie
iommu_unmap
__ioremap
ioremap_cache
iounmap
iput
ipv6_skip_exthdr
irq_create_mapping_affinity
irq_create_of_mapping
irq_dispose_mapping
__irq_domain_add
irq_domain_remove
irq_domain_simple_ops
irq_domain_xlate_onetwocell
irq_domain_xlate_twocell
irq_find_mapping
irq_get_irqchip_state
@@ -850,14 +902,20 @@
irq_of_parse_and_map
irq_set_affinity_hint
irq_set_chained_handler_and_data
irq_set_chip
irq_set_chip_and_handler_name
irq_set_chip_data
irq_set_irq_type
irq_set_irq_wake
irq_set_parent
irq_to_desc
irq_work_queue
irq_work_run
irq_work_sync
is_dma_buf_file
is_vmalloc_addr
iterate_fd
jiffies_64_to_clock_t
jiffies
jiffies_to_msecs
jiffies_to_usecs
@@ -892,8 +950,10 @@
kmem_cache_create
kmem_cache_destroy
kmem_cache_free
kobject_add
kobject_create_and_add
kobject_del
kobject_init
kobject_init_and_add
kobject_put
kobject_uevent
@@ -945,6 +1005,7 @@
kvmalloc_node
led_classdev_flash_register_ext
led_classdev_flash_unregister
led_classdev_unregister
led_get_flash_fault
led_set_brightness_sync
led_set_flash_brightness
@@ -968,6 +1029,9 @@
lzo1x_1_compress
lzo1x_decompress_safe
lzorle1x_1_compress
match_hex
match_int
match_token
mbox_chan_received_data
mbox_client_txdone
mbox_controller_register
@@ -975,16 +1039,27 @@
mbox_free_channel
mbox_request_channel
mbox_send_message
media_create_intf_link
media_create_pad_link
media_device_cleanup
media_device_init
__media_device_register
media_device_unregister
media_devnode_create
media_devnode_remove
media_entity_pads_init
media_entity_remove_links
media_pipeline_start
media_pipeline_stop
memblock_end_of_DRAM
memchr
memcmp
memcpy
__memcpy_fromio
__memcpy_toio
memdup_user
memmove
memory_read_from_buffer
memparse
memremap
memset64
@@ -995,10 +1070,12 @@
migrate_swap
mipi_dsi_attach
mipi_dsi_dcs_read
mipi_dsi_dcs_write
mipi_dsi_dcs_write_buffer
mipi_dsi_detach
mipi_dsi_driver_register_full
mipi_dsi_driver_unregister
mipi_dsi_generic_read
mipi_dsi_generic_write
mipi_dsi_host_register
mipi_dsi_host_unregister
@@ -1013,6 +1090,7 @@
mmc_free_host
mmc_gpio_get_cd
mmc_gpio_get_ro
mmc_hw_reset
mmc_of_parse
mmc_regulator_get_supply
mmc_regulator_set_ocr
@@ -1049,6 +1127,7 @@
netif_carrier_on
netif_napi_add
netif_receive_skb
netif_receive_skb_list
netif_rx
netif_rx_ni
netif_tx_stop_all_queues
@@ -1156,6 +1235,7 @@
of_remove_property
of_reserved_mem_device_init_by_idx
of_reserved_mem_lookup
of_root
of_thermal_get_trip_points
of_translate_address
on_each_cpu
@@ -1184,7 +1264,12 @@
pause_cpus
PDE_DATA
__per_cpu_offset
perf_event_create_kernel_counter
perf_event_disable
perf_event_enable
perf_event_release_kernel
perf_event_update_userpage
perf_num_counters
perf_pmu_migrate_context
perf_pmu_register
perf_pmu_unregister
@@ -1213,9 +1298,12 @@
pinctrl_utils_free_map
pinctrl_utils_reserve_map
pin_user_pages_fast
pin_user_pages_remote
platform_bus_type
platform_device_add
platform_device_add_data
platform_device_alloc
platform_device_del
platform_device_put
platform_device_register
platform_device_register_full
@@ -1224,6 +1312,7 @@
platform_driver_unregister
platform_get_irq
platform_get_irq_byname
platform_get_irq_byname_optional
platform_get_irq_optional
platform_get_resource
platform_get_resource_byname
@@ -1257,6 +1346,7 @@
power_supply_get_by_name
power_supply_get_drvdata
power_supply_get_property
power_supply_put
power_supply_register
power_supply_reg_notifier
power_supply_set_property
@@ -1265,16 +1355,20 @@
prepare_to_wait_event
print_hex_dump
printk
printk_deferred
proc_create
proc_create_data
proc_create_single_data
proc_mkdir
proc_remove
proc_set_user
put_device
put_disk
__put_page
put_pid
__put_task_struct
put_unused_fd
put_vaddr_frames
pwm_apply_state
pwmchip_add
pwmchip_remove
@@ -1332,6 +1426,7 @@
register_netdev
register_netdevice
register_netdevice_notifier
register_oom_notifier
register_pernet_subsys
register_pm_notifier
register_reboot_notifier
@@ -1346,6 +1441,8 @@
regmap_field_update_bits_base
__regmap_init
regmap_irq_get_domain
regmap_raw_read
regmap_raw_write
regmap_read
regmap_update_bits_base
regmap_write
@@ -1355,6 +1452,7 @@
regulator_enable
regulator_enable_regmap
regulator_get
regulator_get_current_limit_regmap
regulator_get_optional
regulator_get_voltage
regulator_get_voltage_sel_regmap
@@ -1369,6 +1467,7 @@
regulator_notifier_call_chain
regulator_put
regulator_set_current_limit
regulator_set_current_limit_regmap
regulator_set_mode
regulator_set_voltage
regulator_set_voltage_sel_regmap
@@ -1425,13 +1524,17 @@
sched_setattr_nocheck
sched_set_normal
sched_setscheduler
sched_setscheduler_nocheck
sched_uclamp_used
schedule
schedule_timeout
schedutil_cpu_util
scmi_driver_register
scmi_driver_unregister
scmi_protocol_register
scnprintf
scsi_device_quiesce
__scsi_iterate_devices
sdio_claim_host
sdio_claim_irq
sdio_disable_func
@@ -1453,6 +1556,7 @@
sdio_writel
sdio_writesb
send_sig
send_sig_info
seq_hex_dump
seq_lseek
seq_open
@@ -1493,6 +1597,7 @@
__sg_page_iter_start
shmem_file_setup
si_mem_available
si_meminfo
simple_attr_open
simple_attr_read
simple_attr_release
@@ -1514,6 +1619,8 @@
skb_queue_tail
skb_realloc_headroom
skb_trim
smp_call_function_single
snd_card_add_dev_attr
snd_ctl_boolean_mono_info
snd_jack_set_key
snd_pcm_format_physical_width
@@ -1580,6 +1687,7 @@
spmi_register_write
spmi_register_zero_write
sprintf
sprint_symbol_no_offset
srcu_init_notifier_head
srcu_notifier_call_chain
srcu_notifier_chain_register
@@ -1620,6 +1728,7 @@
synchronize_irq
synchronize_net
synchronize_rcu
synchronize_srcu
syscon_node_to_regmap
syscon_regmap_lookup_by_compatible
syscon_regmap_lookup_by_phandle
@@ -1649,9 +1758,12 @@
tasklet_init
tasklet_kill
__tasklet_schedule
tasklist_lock
__task_pid_nr_ns
__task_rq_lock
thermal_cooling_device_unregister
thermal_of_cooling_device_register
thermal_zone_device_update
thermal_zone_get_temp
thermal_zone_get_zone_by_name
tick_nohz_get_idle_calls_cpu
@@ -1668,8 +1780,11 @@
trace_event_raw_init
trace_event_reg
trace_handle_return
__traceiter_android_rvh_cpu_overutilized
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_dequeue_task_fair
__traceiter_android_rvh_enqueue_task
__traceiter_android_rvh_enqueue_task_fair
__traceiter_android_rvh_find_busiest_group
__traceiter_android_rvh_find_energy_efficient_cpu
__traceiter_android_rvh_finish_prio_fork
@@ -1687,11 +1802,30 @@
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_binder_transaction_init
__traceiter_android_vh_cgroup_set_task
__traceiter_android_vh_commit_creds
__traceiter_android_vh_em_cpu_energy
__traceiter_android_vh_exit_creds
__traceiter_android_vh_finish_update_load_avg_se
__traceiter_android_vh_iommu_alloc_iova
__traceiter_android_vh_iommu_free_iova
__traceiter_android_vh_override_creds
__traceiter_android_vh_prepare_update_load_avg_se
__traceiter_android_vh_revert_creds
__traceiter_android_vh_rwsem_init
__traceiter_android_vh_rwsem_wake
__traceiter_android_vh_rwsem_write_finished
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_selinux_avc_insert
__traceiter_android_vh_selinux_avc_lookup
__traceiter_android_vh_selinux_avc_node_delete
__traceiter_android_vh_selinux_avc_node_replace
__traceiter_android_vh_selinux_is_initialized
__traceiter_android_vh_set_memory_nx
__traceiter_android_vh_set_memory_ro
__traceiter_android_vh_set_memory_rw
__traceiter_android_vh_set_memory_x
__traceiter_android_vh_set_module_permit_after_init
__traceiter_android_vh_set_module_permit_before_init
__traceiter_android_vh_set_wake_flags
__traceiter_android_vh_syscall_prctl_finished
__traceiter_cpu_frequency
@@ -1700,9 +1834,13 @@
__traceiter_rwmmio_post_read
__traceiter_rwmmio_read
__traceiter_rwmmio_write
__traceiter_sched_update_nr_running_tp
trace_output_call
__tracepoint_android_rvh_cpu_overutilized
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_enqueue_task
__tracepoint_android_rvh_enqueue_task_fair
__tracepoint_android_rvh_find_busiest_group
__tracepoint_android_rvh_find_energy_efficient_cpu
__tracepoint_android_rvh_finish_prio_fork
@@ -1720,11 +1858,30 @@
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_binder_transaction_init
__tracepoint_android_vh_cgroup_set_task
__tracepoint_android_vh_commit_creds
__tracepoint_android_vh_em_cpu_energy
__tracepoint_android_vh_exit_creds
__tracepoint_android_vh_finish_update_load_avg_se
__tracepoint_android_vh_iommu_alloc_iova
__tracepoint_android_vh_iommu_free_iova
__tracepoint_android_vh_override_creds
__tracepoint_android_vh_prepare_update_load_avg_se
__tracepoint_android_vh_revert_creds
__tracepoint_android_vh_rwsem_init
__tracepoint_android_vh_rwsem_wake
__tracepoint_android_vh_rwsem_write_finished
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_selinux_avc_insert
__tracepoint_android_vh_selinux_avc_lookup
__tracepoint_android_vh_selinux_avc_node_delete
__tracepoint_android_vh_selinux_avc_node_replace
__tracepoint_android_vh_selinux_is_initialized
__tracepoint_android_vh_set_memory_nx
__tracepoint_android_vh_set_memory_ro
__tracepoint_android_vh_set_memory_rw
__tracepoint_android_vh_set_memory_x
__tracepoint_android_vh_set_module_permit_after_init
__tracepoint_android_vh_set_module_permit_before_init
__tracepoint_android_vh_set_wake_flags
__tracepoint_android_vh_syscall_prctl_finished
__tracepoint_cpu_frequency
@@ -1735,6 +1892,8 @@
__tracepoint_rwmmio_post_read
__tracepoint_rwmmio_read
__tracepoint_rwmmio_write
__tracepoint_sched_update_nr_running_tp
tracepoint_srcu
trace_print_array_seq
trace_print_flags_seq
trace_print_symbols_seq
@@ -1755,6 +1914,7 @@
typec_get_drvdata
typec_mux_get_drvdata
typec_mux_register
typec_mux_set
typec_mux_unregister
typec_partner_set_identity
typec_register_partner
@@ -1794,6 +1954,7 @@
ufshcd_uic_hibern8_exit
unlock_page
unmap_mapping_range
unpin_user_page
unpin_user_pages
unregister_blkdev
__unregister_chrdev
@@ -1805,11 +1966,13 @@
unregister_netdev
unregister_netdevice_notifier
unregister_netdevice_queue
unregister_oom_notifier
unregister_pernet_subsys
unregister_pm_notifier
unregister_reboot_notifier
unregister_rpmsg_driver
unregister_shrinker
unregister_syscore_ops
unregister_virtio_device
unregister_virtio_driver unregister_virtio_driver
up up
@@ -1817,13 +1980,22 @@
update_rq_clock update_rq_clock
up_read up_read
up_write up_write
usb_add_function
usb_add_gadget_udc usb_add_gadget_udc
usb_add_hcd usb_add_hcd
usb_copy_descriptors
usb_create_hcd usb_create_hcd
usb_create_shared_hcd usb_create_shared_hcd
usb_debug_root usb_debug_root
usb_del_gadget_udc usb_del_gadget_udc
usb_disabled usb_disabled
usb_ep_alloc_request
usb_ep_autoconfig
usb_ep_dequeue
usb_ep_disable
usb_ep_enable
usb_ep_free_request
usb_ep_queue
usb_ep_set_halt usb_ep_set_halt
usb_ep_set_maxpacket_limit usb_ep_set_maxpacket_limit
usb_gadget_giveback_request usb_gadget_giveback_request
@@ -1834,6 +2006,8 @@
usb_get_maximum_speed usb_get_maximum_speed
usb_hcd_is_primary_hcd usb_hcd_is_primary_hcd
usb_hcd_poll_rh_status usb_hcd_poll_rh_status
usb_interface_id
usb_put_function_instance
usb_put_hcd usb_put_hcd
usb_remove_hcd usb_remove_hcd
usb_role_switch_get usb_role_switch_get
@@ -1842,6 +2016,7 @@
usb_role_switch_set_role usb_role_switch_set_role
usb_role_switch_unregister usb_role_switch_unregister
usb_speed_string usb_speed_string
usb_string_id
__usecs_to_jiffies __usecs_to_jiffies
usleep_range usleep_range
uuid_null uuid_null
@@ -1851,18 +2026,24 @@
v4l2_async_notifier_unregister v4l2_async_notifier_unregister
v4l2_async_register_subdev v4l2_async_register_subdev
v4l2_async_unregister_subdev v4l2_async_unregister_subdev
v4l2_compat_ioctl32
v4l2_ctrl_handler_free v4l2_ctrl_handler_free
v4l2_ctrl_handler_init_class v4l2_ctrl_handler_init_class
v4l2_ctrl_handler_setup v4l2_ctrl_handler_setup
v4l2_ctrl_new_custom v4l2_ctrl_new_custom
v4l2_ctrl_new_std v4l2_ctrl_new_std
v4l2_ctrl_new_std_menu v4l2_ctrl_new_std_menu
v4l2_ctrl_request_complete
__v4l2_ctrl_s_ctrl __v4l2_ctrl_s_ctrl
v4l2_ctrl_subscribe_event v4l2_ctrl_subscribe_event
v4l2_device_register v4l2_device_register
v4l2_device_register_subdev
__v4l2_device_register_subdev_nodes __v4l2_device_register_subdev_nodes
v4l2_device_unregister v4l2_device_unregister
v4l2_device_unregister_subdev
v4l2_event_queue
v4l2_event_queue_fh v4l2_event_queue_fh
v4l2_event_subdev_unsubscribe
v4l2_event_subscribe v4l2_event_subscribe
v4l2_event_unsubscribe v4l2_event_unsubscribe
v4l2_fh_add v4l2_fh_add
@@ -1870,9 +2051,9 @@
v4l2_fh_exit v4l2_fh_exit
v4l2_fh_init v4l2_fh_init
v4l2_fh_is_singular v4l2_fh_is_singular
v4l2_fh_open
v4l2_m2m_buf_queue v4l2_m2m_buf_queue
v4l2_m2m_buf_remove v4l2_m2m_buf_remove
v4l2_m2m_buf_remove_by_buf
v4l2_m2m_ctx_init v4l2_m2m_ctx_init
v4l2_m2m_ctx_release v4l2_m2m_ctx_release
v4l2_m2m_dqbuf v4l2_m2m_dqbuf
@@ -1899,17 +2080,38 @@
v4l2_m2m_suspend v4l2_m2m_suspend
v4l2_m2m_try_schedule v4l2_m2m_try_schedule
v4l2_src_change_event_subscribe v4l2_src_change_event_subscribe
v4l2_subdev_call_wrappers
v4l2_subdev_init v4l2_subdev_init
v4l2_subdev_link_validate
v4l2_subdev_link_validate_default
v4l_bound_align_image v4l_bound_align_image
vabits_actual vabits_actual
vb2_buffer_done vb2_buffer_done
vb2_common_vm_ops
vb2_create_framevec
vb2_destroy_framevec
vb2_dma_contig_memops vb2_dma_contig_memops
vb2_fop_mmap
vb2_fop_poll
vb2_fop_release
vb2_ioctl_create_bufs
vb2_ioctl_dqbuf
vb2_ioctl_expbuf
vb2_ioctl_prepare_buf
vb2_ioctl_qbuf
vb2_ioctl_querybuf
vb2_ioctl_reqbufs
vb2_ioctl_streamoff
vb2_ioctl_streamon
vb2_ops_wait_finish vb2_ops_wait_finish
vb2_ops_wait_prepare vb2_ops_wait_prepare
vb2_plane_cookie vb2_plane_cookie
vb2_plane_vaddr vb2_plane_vaddr
vb2_queue_init vb2_queue_init
vb2_queue_release vb2_queue_release
vb2_request_object_is_buffer
vb2_request_queue
vb2_request_validate
vchan_dma_desc_free_list vchan_dma_desc_free_list
vchan_init vchan_init
vchan_tx_desc_free vchan_tx_desc_free
@@ -1937,7 +2139,9 @@
vmap vmap
vm_event_states vm_event_states
vmf_insert_pfn_prot vmf_insert_pfn_prot
vm_map_ram
vm_node_stat vm_node_stat
vm_unmap_ram
vm_zone_stat vm_zone_stat
vring_del_virtqueue vring_del_virtqueue
vring_interrupt vring_interrupt
@@ -1951,6 +2155,7 @@
wait_for_completion_interruptible wait_for_completion_interruptible
wait_for_completion_interruptible_timeout wait_for_completion_interruptible_timeout
wait_for_completion_killable wait_for_completion_killable
wait_for_completion_killable_timeout
wait_for_completion_timeout wait_for_completion_timeout
wait_woken wait_woken
__wake_up __wake_up
@@ -1959,6 +2164,7 @@
wakeup_source_add wakeup_source_add
wakeup_source_create wakeup_source_create
wakeup_source_register wakeup_source_register
wakeup_source_remove
wakeup_source_unregister wakeup_source_unregister
__warn_printk __warn_printk
watchdog_init_timeout watchdog_init_timeout
@@ -1985,3 +2191,6 @@
zlib_deflateInit2 zlib_deflateInit2
zlib_deflateReset zlib_deflateReset
zlib_deflate_workspacesize zlib_deflate_workspacesize
# preserved by --additions-only
v4l2_m2m_buf_remove_by_buf

View File

@@ -1233,6 +1233,8 @@
 # required by virtio_pci.ko
 irq_set_affinity_hint
 pci_alloc_irq_vectors_affinity
+pci_disable_sriov
+pci_enable_sriov
 pci_find_capability
 pci_find_ext_capability
 pci_find_next_capability
@@ -1242,6 +1244,7 @@
 pci_irq_vector
 pci_release_selected_regions
 pci_request_selected_regions
+pci_vfs_assigned
 synchronize_irq
 virtio_device_freeze
 virtio_device_restore

View File

@@ -267,13 +267,14 @@ CONFIG_BT_HCIUART_BCM=y
 CONFIG_BT_HCIUART_QCA=y
 CONFIG_CFG80211=y
 CONFIG_NL80211_TESTMODE=y
-# CONFIG_CFG80211_DEFAULT_PS is not set
-# CONFIG_CFG80211_CRDA_SUPPORT is not set
+CONFIG_CFG80211_CERTIFICATION_ONUS=y
+CONFIG_CFG80211_REG_CELLULAR_HINTS=y
 CONFIG_MAC80211=y
 CONFIG_RFKILL=y
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCIEAER=y
+CONFIG_PCI_IOV=y
 CONFIG_PCI_HOST_GENERIC=y
 CONFIG_PCIE_DW_PLAT_EP=y
 CONFIG_PCIE_QCOM=y
@@ -436,7 +437,6 @@ CONFIG_SND=y
 CONFIG_SND_HRTIMER=y
 CONFIG_SND_DYNAMIC_MINORS=y
 # CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_VERBOSE_PROCFS is not set
 # CONFIG_SND_DRIVERS is not set
 CONFIG_SND_USB_AUDIO=y
 CONFIG_SND_SOC=y

View File

@@ -37,6 +37,7 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t *ptep, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
 void flush_mte_state(void);
@@ -53,6 +54,9 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request,
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void mte_zero_clear_page_tags(void *addr)
+{
+}
 static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
 {
 }

View File

@@ -13,6 +13,7 @@
 #ifndef __ASSEMBLY__
 #include <linux/personality.h> /* for READ_IMPLIES_EXEC */
+#include <linux/types.h> /* for gfp_t */
 #include <asm/pgtable-types.h>
 
 struct page;
@@ -28,10 +29,13 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | __GFP_CMA, vma, vaddr)
+struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+						unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
+void tag_clear_highpage(struct page *to);
+#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)

View File

@@ -36,6 +36,26 @@ SYM_FUNC_START(mte_clear_page_tags)
 	ret
 SYM_FUNC_END(mte_clear_page_tags)
 
+/*
+ * Zero the page and tags at the same time
+ *
+ * Parameters:
+ *	x0 - address to the beginning of the page
+ */
+SYM_FUNC_START(mte_zero_clear_page_tags)
+	mrs	x1, dczid_el0
+	and	w1, w1, #0xf
+	mov	x2, #4
+	lsl	x1, x2, x1
+	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1	// clear the tag
+
+1:	dc	gzva, x0
+	add	x0, x0, x1
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+	ret
+SYM_FUNC_END(mte_zero_clear_page_tags)
+
 /*
  * Copy the tags from the source page to the destination one
  * x0 - address of the destination page

View File

@@ -968,3 +968,29 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
 	debug_exception_exit(regs);
 }
 NOKPROBE_SYMBOL(do_debug_exception);
+
+/*
+ * Used during anonymous page fault handling.
+ */
+struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+						unsigned long vaddr)
+{
+	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO | __GFP_CMA;
+
+	/*
+	 * If the page is mapped with PROT_MTE, initialise the tags at the
+	 * point of allocation and page zeroing as this is usually faster than
+	 * separate DC ZVA and STGM.
+	 */
+	if (vma->vm_flags & VM_MTE)
+		flags |= __GFP_ZEROTAGS;
+
+	return alloc_page_vma(flags, vma, vaddr);
+}
+
+void tag_clear_highpage(struct page *page)
+{
+	mte_zero_clear_page_tags(page_address(page));
+	page_kasan_tag_reset(page);
+	set_bit(PG_mte_tagged, &page->flags);
+}

View File

@@ -46,9 +46,13 @@
 #endif
 
 #ifdef CONFIG_KASAN_HW_TAGS
-#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
+#define TCR_MTE_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
 #else
-#define TCR_KASAN_HW_FLAGS 0
+/*
+ * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on
+ * TBI being enabled at EL1.
+ */
+#define TCR_MTE_FLAGS TCR_TBI1 | TCR_TBID1
 #endif
 
 /*
@@ -462,7 +466,7 @@ SYM_FUNC_START(__cpu_setup)
 	msr_s	SYS_TFSRE0_EL1, xzr
 
 	/* set the TCR_EL1 bits */
-	mov_q	mte_tcr, TCR_KASAN_HW_FLAGS
+	mov_q	mte_tcr, TCR_MTE_FLAGS
 1:
 #endif
 	msr	mair_el1, x5

View File

@@ -244,6 +244,8 @@ CONFIG_BT_HCIUART_BCM=y
 CONFIG_BT_HCIUART_QCA=y
 CONFIG_CFG80211=y
 CONFIG_NL80211_TESTMODE=y
+CONFIG_CFG80211_CERTIFICATION_ONUS=y
+CONFIG_CFG80211_REG_CELLULAR_HINTS=y
 # CONFIG_CFG80211_DEFAULT_PS is not set
 # CONFIG_CFG80211_CRDA_SUPPORT is not set
 CONFIG_MAC80211=y
@@ -252,6 +254,7 @@ CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCIEAER=y
 CONFIG_PCI_MSI=y
+CONFIG_PCI_IOV=y
 CONFIG_PCIE_DW_PLAT_EP=y
 CONFIG_PCI_ENDPOINT=y
 CONFIG_FW_LOADER_USER_HELPER=y
@@ -390,7 +393,6 @@ CONFIG_SND=y
 CONFIG_SND_HRTIMER=y
 CONFIG_SND_DYNAMIC_MINORS=y
 # CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_VERBOSE_PROCFS is not set
 # CONFIG_SND_DRIVERS is not set
 CONFIG_SND_USB_AUDIO=y
 CONFIG_SND_SOC=y

View File

@@ -1,5 +1,5 @@
 BRANCH=android12-5.10
-KMI_GENERATION=6
+KMI_GENERATION=7
 LLVM=1
 DEPMOD=depmod

View File

@@ -193,6 +193,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cma_alloc_start);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cma_alloc_finish);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rmqueue);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_pagecache_get_page);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_fault_get_page);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_fault_cache_page);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_enable_thermal_genl_check);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_thermal_pm_notify_suspend);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_fill_prdt);

View File

@@ -1269,7 +1269,7 @@ static inline void gic_cpu_pm_init(void) { }
 #ifdef CONFIG_PM
 void gic_resume(void)
 {
-	trace_android_vh_gic_resume(gic_data.domain, gic_data.dist_base);
+	trace_android_vh_gic_resume(&gic_data);
 }
 EXPORT_SYMBOL_GPL(gic_resume);

View File

@@ -838,9 +838,9 @@ static u32 ufs_qcom_get_ufs_hci_version(struct ufs_hba *hba)
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 
 	if (host->hw_ver.major == 0x1)
-		return UFSHCI_VERSION_11;
+		return ufshci_version(1, 1);
 	else
-		return UFSHCI_VERSION_20;
+		return ufshci_version(2, 0);
 }
 
 /**

View File

@@ -207,6 +207,242 @@ static const struct attribute_group ufs_sysfs_default_group = {
 	.attrs = ufs_sysfs_ufshcd_attrs,
 };
 
+static ssize_t monitor_enable_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%d\n", hba->monitor.enabled);
+}
+
+static ssize_t monitor_enable_store(struct device *dev,
+				    struct device_attribute *attr,
+				    const char *buf, size_t count)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	unsigned long value, flags;
+
+	if (kstrtoul(buf, 0, &value))
+		return -EINVAL;
+
+	value = !!value;
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	if (value == hba->monitor.enabled)
+		goto out_unlock;
+
+	if (!value) {
+		memset(&hba->monitor, 0, sizeof(hba->monitor));
+	} else {
+		hba->monitor.enabled = true;
+		hba->monitor.enabled_ts = ktime_get();
+	}
+
+out_unlock:
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	return count;
+}
+
+static ssize_t monitor_chunk_size_show(struct device *dev,
+				       struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%lu\n", hba->monitor.chunk_size);
+}
+
+static ssize_t monitor_chunk_size_store(struct device *dev,
+					struct device_attribute *attr,
+					const char *buf, size_t count)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	unsigned long value, flags;
+
+	if (kstrtoul(buf, 0, &value))
+		return -EINVAL;
+
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	/* Only allow chunk size change when monitor is disabled */
+	if (!hba->monitor.enabled)
+		hba->monitor.chunk_size = value;
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	return count;
+}
+
+static ssize_t read_total_sectors_show(struct device *dev,
+				       struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%lu\n", hba->monitor.nr_sec_rw[READ]);
+}
+
+static ssize_t read_total_busy_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.total_busy[READ]));
+}
+
+static ssize_t read_nr_requests_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%lu\n", hba->monitor.nr_req[READ]);
+}
+
+static ssize_t read_req_latency_avg_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	struct ufs_hba_monitor *m = &hba->monitor;
+
+	return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[READ]),
+						 m->nr_req[READ]));
+}
+
+static ssize_t read_req_latency_max_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_max[READ]));
+}
+
+static ssize_t read_req_latency_min_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_min[READ]));
+}
+
+static ssize_t read_req_latency_sum_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_sum[READ]));
+}
+
+static ssize_t write_total_sectors_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%lu\n", hba->monitor.nr_sec_rw[WRITE]);
+}
+
+static ssize_t write_total_busy_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.total_busy[WRITE]));
+}
+
+static ssize_t write_nr_requests_show(struct device *dev,
+				      struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%lu\n", hba->monitor.nr_req[WRITE]);
+}
+
+static ssize_t write_req_latency_avg_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	struct ufs_hba_monitor *m = &hba->monitor;
+
+	return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[WRITE]),
+						 m->nr_req[WRITE]));
+}
+
+static ssize_t write_req_latency_max_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_max[WRITE]));
+}
+
+static ssize_t write_req_latency_min_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_min[WRITE]));
+}
+
+static ssize_t write_req_latency_sum_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%llu\n",
+			  ktime_to_us(hba->monitor.lat_sum[WRITE]));
+}
+
+static DEVICE_ATTR_RW(monitor_enable);
+static DEVICE_ATTR_RW(monitor_chunk_size);
+static DEVICE_ATTR_RO(read_total_sectors);
+static DEVICE_ATTR_RO(read_total_busy);
+static DEVICE_ATTR_RO(read_nr_requests);
+static DEVICE_ATTR_RO(read_req_latency_avg);
+static DEVICE_ATTR_RO(read_req_latency_max);
+static DEVICE_ATTR_RO(read_req_latency_min);
+static DEVICE_ATTR_RO(read_req_latency_sum);
+static DEVICE_ATTR_RO(write_total_sectors);
+static DEVICE_ATTR_RO(write_total_busy);
+static DEVICE_ATTR_RO(write_nr_requests);
+static DEVICE_ATTR_RO(write_req_latency_avg);
+static DEVICE_ATTR_RO(write_req_latency_max);
+static DEVICE_ATTR_RO(write_req_latency_min);
+static DEVICE_ATTR_RO(write_req_latency_sum);
+
+static struct attribute *ufs_sysfs_monitor_attrs[] = {
+	&dev_attr_monitor_enable.attr,
+	&dev_attr_monitor_chunk_size.attr,
+	&dev_attr_read_total_sectors.attr,
+	&dev_attr_read_total_busy.attr,
+	&dev_attr_read_nr_requests.attr,
+	&dev_attr_read_req_latency_avg.attr,
+	&dev_attr_read_req_latency_max.attr,
+	&dev_attr_read_req_latency_min.attr,
+	&dev_attr_read_req_latency_sum.attr,
+	&dev_attr_write_total_sectors.attr,
+	&dev_attr_write_total_busy.attr,
+	&dev_attr_write_nr_requests.attr,
+	&dev_attr_write_req_latency_avg.attr,
+	&dev_attr_write_req_latency_max.attr,
+	&dev_attr_write_req_latency_min.attr,
+	&dev_attr_write_req_latency_sum.attr,
+	NULL
+};
+
+static const struct attribute_group ufs_sysfs_monitor_group = {
+	.name = "monitor",
+	.attrs = ufs_sysfs_monitor_attrs,
+};
+
 static ssize_t ufs_sysfs_read_desc_param(struct ufs_hba *hba,
 					 enum desc_idn desc_id,
 					 u8 desc_index,
@@ -769,6 +1005,7 @@ static const struct attribute_group ufs_sysfs_attributes_group = {
 static const struct attribute_group *ufs_sysfs_groups[] = {
 	&ufs_sysfs_default_group,
+	&ufs_sysfs_monitor_group,
 	&ufs_sysfs_device_descriptor_group,
 	&ufs_sysfs_interconnect_descriptor_group,
 	&ufs_sysfs_geometry_descriptor_group,

View File

@@ -637,23 +637,12 @@ int ufshcd_wait_for_register(struct ufs_hba *hba, u32 reg, u32 mask,
 */
 static inline u32 ufshcd_get_intr_mask(struct ufs_hba *hba)
 {
-	u32 intr_mask = 0;
-
-	switch (hba->ufs_version) {
-	case UFSHCI_VERSION_10:
-		intr_mask = INTERRUPT_MASK_ALL_VER_10;
-		break;
-	case UFSHCI_VERSION_11:
-	case UFSHCI_VERSION_20:
-		intr_mask = INTERRUPT_MASK_ALL_VER_11;
-		break;
-	case UFSHCI_VERSION_21:
-	default:
-		intr_mask = INTERRUPT_MASK_ALL_VER_21;
-		break;
-	}
-
-	return intr_mask;
+	if (hba->ufs_version == ufshci_version(1, 0))
+		return INTERRUPT_MASK_ALL_VER_10;
+	if (hba->ufs_version <= ufshci_version(2, 0))
+		return INTERRUPT_MASK_ALL_VER_11;
+
+	return INTERRUPT_MASK_ALL_VER_21;
 }
/** /**
@@ -664,10 +653,22 @@ static inline u32 ufshcd_get_intr_mask(struct ufs_hba *hba)
 */
 static inline u32 ufshcd_get_ufs_version(struct ufs_hba *hba)
 {
-	if (hba->quirks & UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION)
-		return ufshcd_vops_get_ufs_hci_version(hba);
-
-	return ufshcd_readl(hba, REG_UFS_VERSION);
+	u32 ufshci_ver;
+
+	if (hba->quirks & UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION)
+		ufshci_ver = ufshcd_vops_get_ufs_hci_version(hba);
+	else
+		ufshci_ver = ufshcd_readl(hba, REG_UFS_VERSION);
+
+	/*
+	 * UFSHCI v1.x uses a different version scheme, in order
+	 * to allow the use of comparisons with the ufshci_version
+	 * function, we convert it to the same scheme as ufs 2.0+.
+	 */
+	if (ufshci_ver & 0x00010000)
+		return ufshci_version(1, ufshci_ver & 0x00000100);
+
+	return ufshci_ver;
 }
/** /**
@@ -729,7 +730,7 @@ static inline void ufshcd_utmrl_clear(struct ufs_hba *hba, u32 pos)
 */
 static inline void ufshcd_outstanding_req_clear(struct ufs_hba *hba, int tag)
 {
-	__clear_bit(tag, &hba->outstanding_reqs);
+	clear_bit(tag, &hba->outstanding_reqs);
 }
 
 /**
@@ -899,8 +900,7 @@ static inline bool ufshcd_is_hba_active(struct ufs_hba *hba)
 u32 ufshcd_get_local_unipro_ver(struct ufs_hba *hba)
 {
 	/* HCI version 1.0 and 1.1 supports UniPro 1.41 */
-	if ((hba->ufs_version == UFSHCI_VERSION_10) ||
-	    (hba->ufs_version == UFSHCI_VERSION_11))
+	if (hba->ufs_version <= ufshci_version(1, 1))
 		return UFS_UNIPRO_VER_1_41;
 	else
 		return UFS_UNIPRO_VER_1_6;
@@ -1956,15 +1956,19 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
 {
 	bool queue_resume_work = false;
 	ktime_t curr_t = ktime_get();
+	unsigned long flags;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
 		return;
 
+	spin_lock_irqsave(hba->host->host_lock, flags);
 	if (!hba->clk_scaling.active_reqs++)
 		queue_resume_work = true;
 
-	if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress)
+	if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress) {
+		spin_unlock_irqrestore(hba->host->host_lock, flags);
 		return;
+	}
 
 	if (queue_resume_work)
 		queue_work(hba->clk_scaling.workq,
@@ -1980,22 +1984,91 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
 		hba->clk_scaling.busy_start_t = curr_t;
 		hba->clk_scaling.is_busy_started = true;
 	}
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
 
 static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
 {
 	struct ufs_clk_scaling *scaling = &hba->clk_scaling;
+	unsigned long flags;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
 		return;
 
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	hba->clk_scaling.active_reqs--;
 	if (!hba->outstanding_reqs && scaling->is_busy_started) {
 		scaling->tot_busy_t += ktime_to_us(ktime_sub(ktime_get(),
 					scaling->busy_start_t));
 		scaling->busy_start_t = 0;
 		scaling->is_busy_started = false;
 	}
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
+
+static inline int ufshcd_monitor_opcode2dir(u8 opcode)
+{
+	if (opcode == READ_6 || opcode == READ_10 || opcode == READ_16)
+		return READ;
+	else if (opcode == WRITE_6 || opcode == WRITE_10 || opcode == WRITE_16)
+		return WRITE;
+	else
+		return -EINVAL;
+}
+
+static inline bool ufshcd_should_inform_monitor(struct ufs_hba *hba,
+						struct ufshcd_lrb *lrbp)
+{
+	struct ufs_hba_monitor *m = &hba->monitor;
+
+	return (m->enabled && lrbp && lrbp->cmd &&
+		(!m->chunk_size || m->chunk_size == lrbp->cmd->sdb.length) &&
+		ktime_before(hba->monitor.enabled_ts, lrbp->issue_time_stamp));
+}
+
+static void ufshcd_start_monitor(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+	int dir = ufshcd_monitor_opcode2dir(*lrbp->cmd->cmnd);
+	unsigned long flags;
+
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	if (dir >= 0 && hba->monitor.nr_queued[dir]++ == 0)
+		hba->monitor.busy_start_ts[dir] = ktime_get();
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
+}
+
+static void ufshcd_update_monitor(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+	int dir = ufshcd_monitor_opcode2dir(*lrbp->cmd->cmnd);
+	unsigned long flags;
+
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	if (dir >= 0 && hba->monitor.nr_queued[dir] > 0) {
+		struct request *req = lrbp->cmd->request;
+		struct ufs_hba_monitor *m = &hba->monitor;
+		ktime_t now, inc, lat;
+
+		now = lrbp->compl_time_stamp;
+		inc = ktime_sub(now, m->busy_start_ts[dir]);
+		m->total_busy[dir] = ktime_add(m->total_busy[dir], inc);
+		m->nr_sec_rw[dir] += blk_rq_sectors(req);
+
+		/* Update latencies */
+		m->nr_req[dir]++;
+		lat = ktime_sub(now, lrbp->issue_time_stamp);
+		m->lat_sum[dir] += lat;
+		if (m->lat_max[dir] < lat || !m->lat_max[dir])
+			m->lat_max[dir] = lat;
+		if (m->lat_min[dir] > lat || !m->lat_min[dir])
+			m->lat_min[dir] = lat;
+
+		m->nr_queued[dir]--;
+		/* Push forward the busy start of monitor */
+		m->busy_start_ts[dir] = now;
+	}
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
+}
/** /**
* ufshcd_send_command - Send SCSI or device management commands * ufshcd_send_command - Send SCSI or device management commands
* @hba: per adapter instance * @hba: per adapter instance
@@ -2012,8 +2085,21 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
 	trace_android_vh_ufs_send_command(hba, lrbp);
 	ufshcd_add_command_trace(hba, task_tag, "send");
 	ufshcd_clk_scaling_start_busy(hba);
-	__set_bit(task_tag, &hba->outstanding_reqs);
-	ufshcd_writel(hba, 1 << task_tag, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+	if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+		ufshcd_start_monitor(hba, lrbp);
+	if (ufshcd_has_utrlcnr(hba)) {
+		set_bit(task_tag, &hba->outstanding_reqs);
+		ufshcd_writel(hba, 1 << task_tag,
+			      REG_UTP_TRANSFER_REQ_DOOR_BELL);
+	} else {
+		unsigned long flags;
+
+		spin_lock_irqsave(hba->host->host_lock, flags);
+		set_bit(task_tag, &hba->outstanding_reqs);
+		ufshcd_writel(hba, 1 << task_tag,
+			      REG_UTP_TRANSFER_REQ_DOOR_BELL);
+		spin_unlock_irqrestore(hba->host->host_lock, flags);
+	}
 	/* Make sure that doorbell is committed immediately */
 	wmb();
 }
@@ -2307,7 +2393,7 @@ static void ufshcd_enable_intr(struct ufs_hba *hba, u32 intrs)
 {
 	u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
 
-	if (hba->ufs_version == UFSHCI_VERSION_10) {
+	if (hba->ufs_version == ufshci_version(1, 0)) {
 		u32 rw;
 		rw = set & INTERRUPT_MASK_RW_VER_10;
 		set = rw | ((set ^ intrs) & intrs);
@@ -2327,7 +2413,7 @@ static void ufshcd_disable_intr(struct ufs_hba *hba, u32 intrs)
 {
 	u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
 
-	if (hba->ufs_version == UFSHCI_VERSION_10) {
+	if (hba->ufs_version == ufshci_version(1, 0)) {
 		u32 rw;
 		rw = (set & INTERRUPT_MASK_RW_VER_10) &
 			~(intrs & INTERRUPT_MASK_RW_VER_10);
@@ -2490,8 +2576,7 @@ static int ufshcd_compose_devman_upiu(struct ufs_hba *hba,
 	u8 upiu_flags;
 	int ret = 0;
 
-	if ((hba->ufs_version == UFSHCI_VERSION_10) ||
-	    (hba->ufs_version == UFSHCI_VERSION_11))
+	if (hba->ufs_version <= ufshci_version(1, 1))
 		lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE;
 	else
 		lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
@@ -2518,8 +2603,7 @@ static int ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	u8 upiu_flags;
 	int ret = 0;
 
-	if ((hba->ufs_version == UFSHCI_VERSION_10) ||
-	    (hba->ufs_version == UFSHCI_VERSION_11))
+	if (hba->ufs_version <= ufshci_version(1, 1))
 		lrbp->command_type = UTP_CMD_TYPE_SCSI;
 	else
 		lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
@@ -2579,7 +2663,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 {
 	struct ufshcd_lrb *lrbp;
 	struct ufs_hba *hba;
-	unsigned long flags;
 	int tag;
 	int err = 0;
@@ -2596,6 +2679,43 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	if (!down_read_trylock(&hba->clk_scaling_lock))
 		return SCSI_MLQUEUE_HOST_BUSY;
 
+	switch (hba->ufshcd_state) {
+	case UFSHCD_STATE_OPERATIONAL:
+	case UFSHCD_STATE_EH_SCHEDULED_NON_FATAL:
+		break;
+	case UFSHCD_STATE_EH_SCHEDULED_FATAL:
+		/*
+		 * pm_runtime_get_sync() is used at error handling preparation
+		 * stage. If a scsi cmd, e.g. the SSU cmd, is sent from hba's
+		 * PM ops, it can never be finished if we let SCSI layer keep
+		 * retrying it, which gets err handler stuck forever. Neither
+		 * can we let the scsi cmd pass through, because UFS is in bad
+		 * state, the scsi cmd may eventually time out, which will get
+		 * err handler blocked for too long. So, just fail the scsi cmd
+		 * sent from PM ops, err handler can recover PM error anyways.
+		 */
+		if (hba->pm_op_in_progress) {
+			hba->force_reset = true;
+			set_host_byte(cmd, DID_BAD_TARGET);
+			cmd->scsi_done(cmd);
+			goto out;
+		}
+		fallthrough;
+	case UFSHCD_STATE_RESET:
+		err = SCSI_MLQUEUE_HOST_BUSY;
+		goto out;
+	case UFSHCD_STATE_ERROR:
+		set_host_byte(cmd, DID_ERROR);
+		cmd->scsi_done(cmd);
+		goto out;
+	default:
+		dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
+				__func__, hba->ufshcd_state);
+		set_host_byte(cmd, DID_BAD_TARGET);
+		cmd->scsi_done(cmd);
+		goto out;
+	}
+
 	hba->req_abort_count = 0;
 
 	err = ufshcd_hold(hba, true);
@@ -2606,8 +2726,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	WARN_ON(ufshcd_is_clkgating_allowed(hba) &&
 		(hba->clk_gating.state != CLKS_ON));
 
-	lrbp = &hba->lrb[tag];
-	if (unlikely(lrbp->in_use)) {
+	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
 		if (hba->pm_op_in_progress)
 			set_host_byte(cmd, DID_BAD_TARGET);
 		else
@@ -2616,6 +2735,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 		goto out;
 	}
 
+	lrbp = &hba->lrb[tag];
 	WARN_ON(lrbp->cmd);
 	lrbp->cmd = cmd;
 	lrbp->sense_bufflen = UFS_SENSE_SIZE;
@@ -2646,51 +2766,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	/* Make sure descriptors are ready before ringing the doorbell */
 	wmb();
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
-	switch (hba->ufshcd_state) {
-	case UFSHCD_STATE_OPERATIONAL:
-	case UFSHCD_STATE_EH_SCHEDULED_NON_FATAL:
-		break;
-	case UFSHCD_STATE_EH_SCHEDULED_FATAL:
-		/*
-		 * pm_runtime_get_sync() is used at error handling preparation
-		 * stage. If a scsi cmd, e.g. the SSU cmd, is sent from hba's
-		 * PM ops, it can never be finished if we let SCSI layer keep
-		 * retrying it, which gets err handler stuck forever. Neither
-		 * can we let the scsi cmd pass through, because UFS is in bad
-		 * state, the scsi cmd may eventually time out, which will get
-		 * err handler blocked for too long. So, just fail the scsi cmd
-		 * sent from PM ops, err handler can recover PM error anyways.
-		 */
-		if (hba->pm_op_in_progress) {
-			hba->force_reset = true;
-			set_host_byte(cmd, DID_BAD_TARGET);
-			goto out_compl_cmd;
-		}
-		fallthrough;
-	case UFSHCD_STATE_RESET:
-		err = SCSI_MLQUEUE_HOST_BUSY;
-		goto out_compl_cmd;
-	case UFSHCD_STATE_ERROR:
-		set_host_byte(cmd, DID_ERROR);
-		goto out_compl_cmd;
-	default:
-		dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
-				__func__, hba->ufshcd_state);
-		set_host_byte(cmd, DID_BAD_TARGET);
-		goto out_compl_cmd;
-	}
 	ufshcd_send_command(hba, tag);
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
-	goto out;
-
-out_compl_cmd:
-	scsi_dma_unmap(lrbp->cmd);
-	lrbp->cmd = NULL;
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
-	ufshcd_release(hba);
-	if (!err)
-		cmd->scsi_done(cmd);
 out:
 	up_read(&hba->clk_scaling_lock);
 	return err;
@@ -2845,7 +2921,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 	int err;
 	int tag;
 	struct completion wait;
-	unsigned long flags;
 
 	down_read(&hba->clk_scaling_lock);
@@ -2865,34 +2940,30 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 	req->timeout = msecs_to_jiffies(2 * timeout);
 	blk_mq_start_request(req);
 
-	init_completion(&wait);
-	lrbp = &hba->lrb[tag];
-	if (unlikely(lrbp->in_use)) {
+	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
 		err = -EBUSY;
 		goto out;
 	}
 
+	init_completion(&wait);
+	lrbp = &hba->lrb[tag];
 	WARN_ON(lrbp->cmd);
 	err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
 	if (unlikely(err))
-		goto out_put_tag;
+		goto out;
 
 	hba->dev_cmd.complete = &wait;
 
 	ufshcd_add_query_upiu_trace(hba, tag, "query_send");
 	/* Make sure descriptors are ready before ringing the doorbell */
 	wmb();
-	spin_lock_irqsave(hba->host->host_lock, flags);
 	ufshcd_send_command(hba, tag);
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 	err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout);
-
-out:
 	ufshcd_add_query_upiu_trace(hba, tag,
 			err ? "query_complete_err" : "query_complete");
 
-out_put_tag:
+out:
 	blk_put_request(req);
 out_unlock:
 	up_read(&hba->clk_scaling_lock);
@@ -5024,6 +5095,24 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	return result;
 }
 
+static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
+					u32 intr_mask)
+{
+	if (!ufshcd_is_auto_hibern8_supported(hba) ||
+	    !ufshcd_is_auto_hibern8_enabled(hba))
+		return false;
+
+	if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
+		return false;
+
+	if (hba->active_uic_cmd &&
+	    (hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER ||
+	    hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT))
+		return false;
+
+	return true;
+}
+
 /**
  * ufshcd_uic_cmd_compl - handle completion of uic command
  * @hba: per adapter instance
@@ -5037,6 +5126,10 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
 {
 	irqreturn_t retval = IRQ_NONE;
 
+	spin_lock(hba->host->host_lock);
+	if (ufshcd_is_auto_hibern8_error(hba, intr_status))
+		hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
+
 	if ((intr_status & UIC_COMMAND_COMPL) && hba->active_uic_cmd) {
 		hba->active_uic_cmd->argument2 |=
 			ufshcd_get_uic_cmd_result(hba);
@@ -5057,6 +5150,7 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
 	if (retval == IRQ_HANDLED)
 		ufshcd_add_uic_command_trace(hba, hba->active_uic_cmd,
 					     "complete");
+	spin_unlock(hba->host->host_lock);
 	return retval;
 }
@@ -5075,11 +5169,14 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
 	bool update_scaling = false;
 
 	for_each_set_bit(index, &completed_reqs, hba->nutrs) {
+		if (!test_and_clear_bit(index, &hba->outstanding_reqs))
+			continue;
 		lrbp = &hba->lrb[index];
-		lrbp->in_use = false;
 		lrbp->compl_time_stamp = ktime_get();
 		cmd = lrbp->cmd;
 		if (cmd) {
+			if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+				ufshcd_update_monitor(hba, lrbp);
 			trace_android_vh_ufs_compl_command(hba, lrbp);
 			ufshcd_add_command_trace(hba, index, "complete");
 			result = ufshcd_transfer_rsp_status(hba, lrbp);
@@ -5090,7 +5187,7 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
 			lrbp->cmd = NULL;
 			/* Do not touch lrbp after scsi done */
 			cmd->scsi_done(cmd);
-			__ufshcd_release(hba);
+			ufshcd_release(hba);
 			update_scaling = true;
 		} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
 			lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
@@ -5102,28 +5199,23 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
 			update_scaling = true;
 		}
-		if (ufshcd_is_clkscaling_supported(hba) && update_scaling)
-			hba->clk_scaling.active_reqs--;
+		if (update_scaling)
+			ufshcd_clk_scaling_update_busy(hba);
 	}
-
-	/* clear corresponding bits of completed commands */
-	hba->outstanding_reqs ^= completed_reqs;
-	ufshcd_clk_scaling_update_busy(hba);
 }
 /**
- * ufshcd_transfer_req_compl - handle SCSI and query command completion
+ * ufshcd_trc_handler - handle transfer requests completion
  * @hba: per adapter instance
+ * @use_utrlcnr: get completed requests from UTRLCNR
  *
  * Returns
  *  IRQ_HANDLED - If interrupt is valid
  *  IRQ_NONE    - If invalid interrupt
  */
-static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
+static irqreturn_t ufshcd_trc_handler(struct ufs_hba *hba, bool use_utrlcnr)
 {
-	unsigned long completed_reqs;
-	u32 tr_doorbell;
+	unsigned long completed_reqs = 0;
 
 	/* Resetting interrupt aggregation counters first and reading the
 	 * DOOR_BELL afterward allows us to handle all the completed requests.
@@ -5136,8 +5228,24 @@ static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
 	    !(hba->quirks & UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR))
 		ufshcd_reset_intr_aggr(hba);
 
-	tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
-	completed_reqs = tr_doorbell ^ hba->outstanding_reqs;
+	if (use_utrlcnr) {
+		u32 utrlcnr;
+
+		utrlcnr = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_LIST_COMPL);
+		if (utrlcnr) {
+			ufshcd_writel(hba, utrlcnr,
+				      REG_UTP_TRANSFER_REQ_LIST_COMPL);
+			completed_reqs = utrlcnr;
+		}
+	} else {
+		unsigned long flags;
+		u32 tr_doorbell;
+
+		spin_lock_irqsave(hba->host->host_lock, flags);
+		tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+		completed_reqs = tr_doorbell ^ hba->outstanding_reqs;
+		spin_unlock_irqrestore(hba->host->host_lock, flags);
+	}
 
 	if (completed_reqs) {
 		__ufshcd_transfer_req_compl(hba, completed_reqs);
@@ -5646,7 +5754,7 @@ out:
 /* Complete requests that have door-bell cleared */
 static void ufshcd_complete_requests(struct ufs_hba *hba)
 {
-	ufshcd_transfer_req_compl(hba);
+	ufshcd_trc_handler(hba, false);
 	ufshcd_tmc_handler(hba);
 }
@@ -5895,13 +6003,11 @@ static void ufshcd_err_handler(struct work_struct *work)
 	ufshcd_set_eh_in_progress(hba);
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
 	ufshcd_err_handling_prepare(hba);
+	/* Complete requests that have door-bell cleared by h/w */
+	ufshcd_complete_requests(hba);
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	if (hba->ufshcd_state != UFSHCD_STATE_ERROR)
 		hba->ufshcd_state = UFSHCD_STATE_RESET;
-
-	/* Complete requests that have door-bell cleared by h/w */
-	ufshcd_complete_requests(hba);
-
 	/*
 	 * A full reset and restore might have happened after preparation
 	 * is finished, double check whether we should stop.
@@ -5984,12 +6090,11 @@ static void ufshcd_err_handler(struct work_struct *work)
 	}
 
 lock_skip_pending_xfer_clear:
-	spin_lock_irqsave(hba->host->host_lock, flags);
-
 	/* Complete the requests that are cleared by s/w */
 	ufshcd_complete_requests(hba);
-	hba->silence_err_logs = false;
 
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	hba->silence_err_logs = false;
 	if (err_xfer || err_tm) {
 		needs_reset = true;
 		goto do_reset;
@@ -6022,19 +6127,6 @@ lock_skip_pending_xfer_clear:
 do_reset:
 	/* Fatal errors need reset */
 	if (needs_reset) {
-		unsigned long max_doorbells = (1UL << hba->nutrs) - 1;
-
-		/*
-		 * ufshcd_reset_and_restore() does the link reinitialization
-		 * which will need atleast one empty doorbell slot to send the
-		 * device management commands (NOP and query commands).
-		 * If there is no slot empty at this moment then free up last
-		 * slot forcefully.
-		 */
-		if (hba->outstanding_reqs == max_doorbells)
-			__ufshcd_transfer_req_compl(hba,
-						    (1UL << (hba->nutrs - 1)));
-
 		hba->force_reset = false;
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 		err = ufshcd_reset_and_restore(hba);
@@ -6152,37 +6244,23 @@ static irqreturn_t ufshcd_update_uic_error(struct ufs_hba *hba)
 	return retval;
 }
 
-static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
-					u32 intr_mask)
-{
-	if (!ufshcd_is_auto_hibern8_supported(hba) ||
-	    !ufshcd_is_auto_hibern8_enabled(hba))
-		return false;
-
-	if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
-		return false;
-
-	if (hba->active_uic_cmd &&
-	    (hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER ||
-	    hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT))
-		return false;
-
-	return true;
-}
-
 /**
  * ufshcd_check_errors - Check for errors that need s/w attention
  * @hba: per-adapter instance
+ * @intr_status: interrupt status generated by the controller
  *
  * Returns
  *  IRQ_HANDLED - If interrupt is valid
  *  IRQ_NONE    - If invalid interrupt
  */
-static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
+static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status)
 {
 	bool queue_eh_work = false;
 	irqreturn_t retval = IRQ_NONE;
 
+	spin_lock(hba->host->host_lock);
+	hba->errors |= UFSHCD_ERROR_MASK & intr_status;
+
 	if (hba->errors & INT_FATAL_ERRORS) {
 		ufshcd_update_evt_hist(hba, UFS_EVT_FATAL_ERR,
 				       hba->errors);
@@ -6239,6 +6317,9 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
 	 * itself without s/w intervention or errors that will be
 	 * handled by the SCSI core layer.
 	 */
+	hba->errors = 0;
+	hba->uic_error = 0;
+	spin_unlock(hba->host->host_lock);
 	return retval;
 }
@@ -6273,13 +6354,17 @@ static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
  */
 static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
 {
+	unsigned long flags;
 	struct request_queue *q = hba->tmf_queue;
 	struct ctm_info ci = {
 		.hba	 = hba,
-		.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL),
 	};
 
+	spin_lock_irqsave(hba->host->host_lock, flags);
+	ci.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
 	blk_mq_tagset_busy_iter(q->tag_set, ufshcd_compl_tm, &ci);
+	spin_unlock_irqrestore(hba->host->host_lock, flags);
+
 	return ci.ncpl ? IRQ_HANDLED : IRQ_NONE;
 }
@@ -6296,22 +6381,17 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
 {
 	irqreturn_t retval = IRQ_NONE;
 
-	hba->errors = UFSHCD_ERROR_MASK & intr_status;
-
-	if (ufshcd_is_auto_hibern8_error(hba, intr_status))
-		hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
-
-	if (hba->errors)
-		retval |= ufshcd_check_errors(hba);
-
 	if (intr_status & UFSHCD_UIC_MASK)
 		retval |= ufshcd_uic_cmd_compl(hba, intr_status);
 
+	if (intr_status & UFSHCD_ERROR_MASK || hba->errors)
+		retval |= ufshcd_check_errors(hba, intr_status);
+
 	if (intr_status & UTP_TASK_REQ_COMPL)
 		retval |= ufshcd_tmc_handler(hba);
 
 	if (intr_status & UTP_TRANSFER_REQ_COMPL)
-		retval |= ufshcd_transfer_req_compl(hba);
+		retval |= ufshcd_trc_handler(hba, ufshcd_has_utrlcnr(hba));
 
 	return retval;
 }
@@ -6332,7 +6412,6 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
 	struct ufs_hba *hba = __hba;
 	int retries = hba->nutrs;
 
-	spin_lock(hba->host->host_lock);
 	intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
 	hba->ufs_stats.last_intr_status = intr_status;
 	hba->ufs_stats.last_intr_ts = ktime_get();
@@ -6364,7 +6443,6 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
 		ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
 	}
 
-	spin_unlock(hba->host->host_lock);
 	return retval;
 }
@@ -6541,7 +6619,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	int err = 0;
 	int tag;
 	struct completion wait;
-	unsigned long flags;
 	u8 upiu_flags;
 
 	down_read(&hba->clk_scaling_lock);
@@ -6554,13 +6631,13 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	tag = req->tag;
 	WARN_ON_ONCE(!ufshcd_valid_tag(hba, tag));
 
-	init_completion(&wait);
-	lrbp = &hba->lrb[tag];
-	if (unlikely(lrbp->in_use)) {
+	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
 		err = -EBUSY;
 		goto out;
 	}
 
+	init_completion(&wait);
+	lrbp = &hba->lrb[tag];
 	WARN_ON(lrbp->cmd);
 	lrbp->cmd = NULL;
 	lrbp->sense_bufflen = 0;
@@ -6571,15 +6648,10 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	ufshcd_prepare_lrbp_crypto(NULL, lrbp);
 	hba->dev_cmd.type = cmd_type;
 
-	switch (hba->ufs_version) {
-	case UFSHCI_VERSION_10:
-	case UFSHCI_VERSION_11:
+	if (hba->ufs_version <= ufshci_version(1, 1))
 		lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE;
-		break;
-	default:
+	else
 		lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
-		break;
-	}
 
 	/* update the task tag in the request upiu */
 	req_upiu->header.dword_0 |= cpu_to_be32(tag);
@@ -6603,10 +6675,8 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	/* Make sure descriptors are ready before ringing the doorbell */
 	wmb();
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
-	ufshcd_send_command(hba, tag);
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	ufshcd_send_command(hba, tag);
 
 	/*
 	 * ignore the returning value here - ufshcd_check_query_response is
 	 * bound to fail since dev_cmd.query and dev_cmd.type were left empty.
@@ -6725,7 +6795,6 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
 	u32 pos;
 	int err;
 	u8 resp = 0xF, lun;
-	unsigned long flags;
 
 	host = cmd->device->host;
 	hba = shost_priv(host);
@@ -6744,11 +6813,9 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
 			err = ufshcd_clear_cmd(hba, pos);
 			if (err)
 				break;
+			__ufshcd_transfer_req_compl(hba, pos);
 		}
 	}
-	spin_lock_irqsave(host->host_lock, flags);
-	ufshcd_transfer_req_compl(hba);
-	spin_unlock_irqrestore(host->host_lock, flags);
 
 out:
 	hba->req_abort_count = 0;
@@ -6924,20 +6991,16 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 	 * will fail, due to spec violation, scsi err handling next step
 	 * will be to send LU reset which, again, is a spec violation.
 	 * To avoid these unnecessary/illegal steps, first we clean up
-	 * the lrb taken by this cmd and mark the lrb as in_use, then
-	 * queue the eh_work and bail.
+	 * the lrb taken by this cmd and re-set it in outstanding_reqs,
+	 * then queue the eh_work and bail.
 	 */
 	if (lrbp->lun == UFS_UPIU_UFS_DEVICE_WLUN) {
 		ufshcd_update_evt_hist(hba, UFS_EVT_ABORT, lrbp->lun);
+
+		__ufshcd_transfer_req_compl(hba, (1UL << tag));
+		set_bit(tag, &hba->outstanding_reqs);
 		spin_lock_irqsave(host->host_lock, flags);
-		if (lrbp->cmd) {
-			__ufshcd_transfer_req_compl(hba, (1UL << tag));
-			__set_bit(tag, &hba->outstanding_reqs);
-			lrbp->in_use = true;
-			hba->force_reset = true;
-			ufshcd_schedule_eh_work(hba);
-		}
+		hba->force_reset = true;
+		ufshcd_schedule_eh_work(hba);
 		spin_unlock_irqrestore(host->host_lock, flags);
 		goto out;
 	}
@@ -6950,9 +7013,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 	if (!err) {
 cleanup:
-		spin_lock_irqsave(host->host_lock, flags);
 		__ufshcd_transfer_req_compl(hba, (1UL << tag));
-		spin_unlock_irqrestore(host->host_lock, flags);
 out:
 		err = SUCCESS;
 	} else {
@@ -6982,19 +7043,15 @@ out:
 static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
 {
 	int err;
-	unsigned long flags;
 
 	/*
 	 * Stop the host controller and complete the requests
 	 * cleared by h/w
 	 */
 	ufshcd_hba_stop(hba);
-
-	spin_lock_irqsave(hba->host->host_lock, flags);
 	hba->silence_err_logs = true;
 	ufshcd_complete_requests(hba);
 	hba->silence_err_logs = false;
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	/* scale up clocks to max frequency before full reinitialization */
 	ufshcd_set_clk_freq(hba, true);
@@ -9265,10 +9322,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	/* Get UFS version supported by the controller */
 	hba->ufs_version = ufshcd_get_ufs_version(hba);
 
-	if ((hba->ufs_version != UFSHCI_VERSION_10) &&
-	    (hba->ufs_version != UFSHCI_VERSION_11) &&
-	    (hba->ufs_version != UFSHCI_VERSION_20) &&
-	    (hba->ufs_version != UFSHCI_VERSION_21))
+	if (hba->ufs_version < ufshci_version(1, 0))
 		dev_err(hba->dev, "invalid UFS version 0x%x\n",
 			hba->ufs_version);


@@ -188,7 +188,6 @@ struct ufs_pm_lvl_states {
  * @crypto_key_slot: the key slot to use for inline crypto (-1 if none)
  * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
- * @in_use: indicates that this lrb is still in use
  */
 struct ufshcd_lrb {
 	struct utp_transfer_req_desc *utr_descriptor_ptr;
@@ -218,7 +217,6 @@ struct ufshcd_lrb {
 #endif
 
 	bool req_abort_skip;
-	bool in_use;
 };
 
 /**
@@ -655,6 +653,25 @@ struct ufs_hba_variant_params {
 	u32 wb_flush_threshold;
 };
 
+struct ufs_hba_monitor {
+	unsigned long chunk_size;
+
+	unsigned long nr_sec_rw[2];
+	ktime_t total_busy[2];
+
+	unsigned long nr_req[2];
+	/* latencies*/
+	ktime_t lat_sum[2];
+	ktime_t lat_max[2];
+	ktime_t lat_min[2];
+
+	u32 nr_queued[2];
+	ktime_t busy_start_ts[2];
+
+	ktime_t enabled_ts;
+	bool enabled;
+};
+
 /**
  * struct ufs_hba - per adapter private structure
  * @mmio_base: UFSHCI base register address
@@ -846,6 +863,8 @@ struct ufs_hba {
 	bool wb_enabled;
 	struct delayed_work rpm_dev_flush_recheck_work;
 
+	struct ufs_hba_monitor	monitor;
+
 #ifdef CONFIG_SCSI_UFS_CRYPTO
 	union ufs_crypto_capabilities crypto_capabilities;
 	union ufs_crypto_cap_entry *crypto_cap_array;
@@ -1146,6 +1165,11 @@ static inline u32 ufshcd_vops_get_ufs_hci_version(struct ufs_hba *hba)
 	return ufshcd_readl(hba, REG_UFS_VERSION);
 }
 
+static inline bool ufshcd_has_utrlcnr(struct ufs_hba *hba)
+{
+	return (hba->ufs_version >= ufshci_version(3, 0));
+}
+
 static inline int ufshcd_vops_clk_scale_notify(struct ufs_hba *hba,
 			bool up, enum ufs_notify_change_status status)
 {


@@ -39,6 +39,7 @@ enum {
 	REG_UTP_TRANSFER_REQ_DOOR_BELL		= 0x58,
 	REG_UTP_TRANSFER_REQ_LIST_CLEAR		= 0x5C,
 	REG_UTP_TRANSFER_REQ_LIST_RUN_STOP	= 0x60,
+	REG_UTP_TRANSFER_REQ_LIST_COMPL		= 0x64,
 	REG_UTP_TASK_REQ_LIST_BASE_L		= 0x70,
 	REG_UTP_TASK_REQ_LIST_BASE_H		= 0x74,
 	REG_UTP_TASK_REQ_DOOR_BELL		= 0x78,
@@ -74,13 +75,17 @@ enum {
 #define MINOR_VERSION_NUM_MASK		UFS_MASK(0xFFFF, 0)
 #define MAJOR_VERSION_NUM_MASK		UFS_MASK(0xFFFF, 16)
 
-/* Controller UFSHCI version */
-enum {
-	UFSHCI_VERSION_10 = 0x00010000, /* 1.0 */
-	UFSHCI_VERSION_11 = 0x00010100, /* 1.1 */
-	UFSHCI_VERSION_20 = 0x00000200, /* 2.0 */
-	UFSHCI_VERSION_21 = 0x00000210, /* 2.1 */
-};
+/*
+ * Controller UFSHCI version
+ * - 2.x and newer use the following scheme:
+ *   major << 8 + minor << 4
+ * - 1.x has been converted to match this in
+ *   ufshcd_get_ufs_version()
+ */
+static inline u32 ufshci_version(u32 major, u32 minor)
+{
+	return (major << 8) + (minor << 4);
+}
 
 /*
  * HCDDID - Host Controller Identification Descriptor


@@ -572,8 +572,10 @@ typec_register_altmode(struct device *parent,
 	int ret;
 
 	alt = kzalloc(sizeof(*alt), GFP_KERNEL);
-	if (!alt)
+	if (!alt) {
+		altmode_id_remove(parent, id);
 		return ERR_PTR(-ENOMEM);
+	}
 
 	alt->adev.svid = desc->svid;
 	alt->adev.mode = desc->mode;


@@ -22,8 +22,12 @@
 #define	PD_RETRY_COUNT_DEFAULT			3
 #define	PD_RETRY_COUNT_3_0_OR_HIGHER		2
 #define	AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV	3500
-#define	AUTO_DISCHARGE_PD_HEADROOM_MV		850
-#define	AUTO_DISCHARGE_PPS_HEADROOM_MV		1250
+#define	VSINKPD_MIN_IR_DROP_MV			750
+#define	VSRC_NEW_MIN_PERCENT			95
+#define	VSRC_VALID_MIN_MV			500
+#define	VPPS_NEW_MIN_PERCENT			95
+#define	VPPS_VALID_MIN_MV			100
+#define	VSINKDISCONNECT_PD_MIN_PERCENT		90
 
 #define tcpc_presenting_rd(reg, cc) \
 	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
@@ -364,11 +368,13 @@ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum ty
 		threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
 	} else if (mode == TYPEC_PWR_MODE_PD) {
 		if (pps_active)
-			threshold = (95 * requested_vbus_voltage_mv / 100) -
-				AUTO_DISCHARGE_PD_HEADROOM_MV;
+			threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+				     VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
+				     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
 		else
-			threshold = (95 * requested_vbus_voltage_mv / 100) -
-				AUTO_DISCHARGE_PPS_HEADROOM_MV;
+			threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+				     VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
+				     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
 	} else {
 		/* 3.5V for non-pd sink */
 		threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;


@@ -402,6 +402,8 @@ struct tcpm_port {
unsigned int nr_src_pdo; unsigned int nr_src_pdo;
u32 snk_pdo[PDO_MAX_OBJECTS]; u32 snk_pdo[PDO_MAX_OBJECTS];
unsigned int nr_snk_pdo; unsigned int nr_snk_pdo;
u32 snk_vdo_v1[VDO_MAX_OBJECTS];
unsigned int nr_snk_vdo_v1;
u32 snk_vdo[VDO_MAX_OBJECTS]; u32 snk_vdo[VDO_MAX_OBJECTS];
unsigned int nr_snk_vdo; unsigned int nr_snk_vdo;
@@ -1611,18 +1613,16 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev,
 		 */
 		if ((port->data_role == TYPEC_DEVICE || svdm_version >= SVDM_VER_2_0) &&
 		    port->nr_snk_vdo) {
-			/*
-			 * Product Type DFP and Connector Type are not defined in SVDM
-			 * version 1.0 and shall be set to zero.
-			 */
-			if (svdm_version < SVDM_VER_2_0)
-				response[1] = port->snk_vdo[0] & ~IDH_DFP_MASK
-					      & ~IDH_CONN_MASK;
-			else
-				response[1] = port->snk_vdo[0];
-			for (i = 1; i < port->nr_snk_vdo; i++)
-				response[i + 1] = port->snk_vdo[i];
-			rlen = port->nr_snk_vdo + 1;
+			if (svdm_version < SVDM_VER_2_0) {
+				for (i = 0; i < port->nr_snk_vdo_v1; i++)
+					response[i + 1] = port->snk_vdo_v1[i];
+				rlen = port->nr_snk_vdo_v1 + 1;
+			} else {
+				for (i = 0; i < port->nr_snk_vdo; i++)
+					response[i + 1] = port->snk_vdo[i];
+				rlen = port->nr_snk_vdo + 1;
+			}
 		}
 		break;
 	case CMD_DISCOVER_SVID:
@@ -2629,6 +2629,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
 			} else {
 				next_state = SNK_WAIT_CAPABILITIES;
 			}
+
+			/* Threshold was relaxed before sending Request. Restore it back. */
+			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+							       port->pps_data.active,
+							       port->supply_voltage);
 			tcpm_set_state(port, next_state, 0);
 			break;
 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
@@ -2642,6 +2647,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
 			    port->send_discover)
 				port->vdm_sm_running = true;
 
+			/* Threshold was relaxed before sending Request. Restore it back. */
+			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+							       port->pps_data.active,
+							       port->supply_voltage);
+
 			tcpm_set_state(port, SNK_READY, 0);
 			break;
 		case DR_SWAP_SEND:
@@ -3361,6 +3371,12 @@ static int tcpm_pd_send_request(struct tcpm_port *port)
 	if (ret < 0)
 		return ret;
 
+	/*
+	 * Relax the threshold as voltage will be adjusted after Accept Message plus tSrcTransition.
+	 * It is safer to modify the threshold here.
+	 */
+	tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
+
 	memset(&msg, 0, sizeof(msg));
 	msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
 				  port->pwr_role,
@@ -3458,6 +3474,9 @@ static int tcpm_pd_send_pps_request(struct tcpm_port *port)
 	if (ret < 0)
 		return ret;
 
+	/* Relax the threshold as voltage will be adjusted right after Accept Message. */
+	tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
+
 	memset(&msg, 0, sizeof(msg));
 	msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
 				  port->pwr_role,
@@ -4285,6 +4304,10 @@ static void run_state_machine(struct tcpm_port *port)
 		port->hard_reset_count = 0;
 		ret = tcpm_pd_send_request(port);
 		if (ret < 0) {
+			/* Restore back to the original state */
+			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+							       port->pps_data.active,
+							       port->supply_voltage);
 			/* Let the Source send capabilities again. */
 			tcpm_set_state(port, SNK_WAIT_CAPABILITIES, 0);
 		} else {
@@ -4295,6 +4318,10 @@ static void run_state_machine(struct tcpm_port *port)
 	case SNK_NEGOTIATE_PPS_CAPABILITIES:
 		ret = tcpm_pd_send_pps_request(port);
 		if (ret < 0) {
+			/* Restore back to the original state */
+			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+							       port->pps_data.active,
+							       port->supply_voltage);
 			port->pps_status = ret;
 			/*
 			 * If this was called due to updates to sink
@@ -5332,6 +5359,7 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
 		}
 		break;
 	case PR_SWAP_SNK_SRC_SINK_OFF:
+	case PR_SWAP_SNK_SRC_SOURCE_ON:
 		/* Do nothing, vsafe0v is expected during transition */
 		break;
 	default:
@@ -6084,6 +6112,22 @@ sink:
 		return ret;
 	}
 
+	/* If sink-vdos is found, sink-vdos-v1 is expected for backward compatibility. */
+	if (port->nr_snk_vdo) {
+		ret = fwnode_property_count_u32(fwnode, "sink-vdos-v1");
+		if (ret < 0)
+			return ret;
+		else if (ret == 0)
+			return -ENODATA;
+
+		port->nr_snk_vdo_v1 = min(ret, VDO_MAX_OBJECTS);
+		ret = fwnode_property_read_u32_array(fwnode, "sink-vdos-v1",
+						     port->snk_vdo_v1,
+						     port->nr_snk_vdo_v1);
+		if (ret < 0)
+			return ret;
+	}
+
 	return 0;
 }


@@ -85,6 +85,8 @@ extern int sysctl_compact_memory;
 extern unsigned int sysctl_compaction_proactiveness;
 extern int sysctl_compaction_handler(struct ctl_table *table, int write,
 			void *buffer, size_t *length, loff_t *ppos);
+extern int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
+		int write, void *buffer, size_t *length, loff_t *ppos);
 extern int sysctl_extfrag_threshold;
 extern int sysctl_compact_unevictable_allowed;


@@ -39,16 +39,18 @@ struct vm_area_struct;
 #define ___GFP_HARDWALL 0x100000u
 #define ___GFP_THISNODE 0x200000u
 #define ___GFP_ACCOUNT 0x400000u
+#define ___GFP_ZEROTAGS 0x800000u
+#define ___GFP_SKIP_KASAN_POISON 0x1000000u
 #ifdef CONFIG_CMA
-#define ___GFP_CMA 0x800000u
+#define ___GFP_CMA 0x2000000u
 #else
 #define ___GFP_CMA 0
 #endif
 #ifdef CONFIG_LOCKDEP
 #ifdef CONFIG_CMA
-#define ___GFP_NOLOCKDEP 0x1000000u
+#define ___GFP_NOLOCKDEP 0x4000000u
 #else
-#define ___GFP_NOLOCKDEP 0x800000u
+#define ___GFP_NOLOCKDEP 0x2000000u
 #endif
 #else
 #define ___GFP_NOLOCKDEP 0
@@ -226,19 +228,28 @@ struct vm_area_struct;
 * %__GFP_COMP address compound page metadata.
 *
 * %__GFP_ZERO returns a zeroed page on success.
+ *
+ * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
+ * __GFP_ZERO is set.
+ *
+ * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
+ * on deallocation. Typically used for userspace pages. Currently only has an
+ * effect in HW tags mode.
 */
 #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP ((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
+#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
 #ifdef CONFIG_CMA
-#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
 #else
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
 #endif
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
@@ -320,7 +331,8 @@ struct vm_area_struct;
 #define GFP_DMA __GFP_DMA
 #define GFP_DMA32 __GFP_DMA32
 #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
-#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE)
+#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | \
+			 __GFP_SKIP_KASAN_POISON)
 #define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
 	 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
 #define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)


@@ -265,6 +265,14 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+#ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+
+static inline void tag_clear_highpage(struct page *page)
+{
+}
+
+#endif
+
 static inline void zero_user_segments(struct page *page,
 		unsigned start1, unsigned end1,
 		unsigned start2, unsigned end2)


@@ -2,6 +2,7 @@
 #ifndef _LINUX_KASAN_H
 #define _LINUX_KASAN_H
 
+#include <linux/bug.h>
 #include <linux/static_key.h>
 #include <linux/types.h>
@@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
-#ifdef CONFIG_KASAN
-
-struct kasan_cache {
-	int alloc_meta_offset;
-	int free_meta_offset;
-	bool is_kmalloc;
-};
-
 #ifdef CONFIG_KASAN_HW_TAGS
 
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
@@ -101,11 +94,14 @@ static inline bool kasan_has_integrated_init(void)
 	return kasan_enabled();
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_enabled(void)
 {
-	return true;
+	return IS_ENABLED(CONFIG_KASAN);
 }
 
 static inline bool kasan_has_integrated_init(void)
@@ -113,8 +109,30 @@ static inline bool kasan_has_integrated_init(void)
 	return false;
 }
 
+static __always_inline void kasan_alloc_pages(struct page *page,
+					      unsigned int order, gfp_t flags)
+{
+	/* Only available for integrated init. */
+	BUILD_BUG();
+}
+
+static __always_inline void kasan_free_pages(struct page *page,
+					     unsigned int order)
+{
+	/* Only available for integrated init. */
+	BUILD_BUG();
+}
+
 #endif /* CONFIG_KASAN_HW_TAGS */
 
+#ifdef CONFIG_KASAN
+
+struct kasan_cache {
+	int alloc_meta_offset;
+	int free_meta_offset;
+	bool is_kmalloc;
+};
+
 slab_flags_t __kasan_never_merge(void);
 static __always_inline slab_flags_t kasan_never_merge(void)
 {
@@ -130,20 +148,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_alloc_pages(struct page *page,
-						unsigned int order, bool init)
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_poison_pages(struct page *page,
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order, init);
+		__kasan_poison_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_free_pages(struct page *page,
-						unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_unpoison_pages(struct page *page,
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order, init);
+		__kasan_unpoison_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -285,21 +303,15 @@ void kasan_restore_multi_shot(bool enabled);
 
 #else /* CONFIG_KASAN */
 
-static inline bool kasan_enabled(void)
-{
-	return false;
-}
-static inline bool kasan_has_integrated_init(void)
-{
-	return false;
-}
 static inline slab_flags_t kasan_never_merge(void)
 {
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_poison_pages(struct page *page, unsigned int order,
+				      bool init) {}
+static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
+					bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}


@@ -573,6 +573,7 @@ struct vm_fault {
 					 */
 	unsigned long vma_flags;
 	pgprot_t vma_page_prot;
+	ANDROID_OEM_DATA_ARRAY(1, 2);
 };
 
 /* page entry size for vm->huge_fault() */


@@ -783,6 +783,7 @@ typedef struct pglist_data {
 	enum zone_type kcompactd_highest_zoneidx;
 	wait_queue_head_t kcompactd_wait;
 	struct task_struct *kcompactd;
+	bool proactive_compact_trigger;
 #endif
 	/*
 	 * This is a per-node reserve of pages that are not available


@@ -138,6 +138,9 @@ enum pageflags {
 #endif
 #ifdef CONFIG_64BIT
 	PG_arch_2,
+#endif
+#ifdef CONFIG_KASAN_HW_TAGS
+	PG_skip_kasan_poison,
 #endif
 	__NR_PAGEFLAGS,
@@ -444,6 +447,12 @@ TESTCLEARFLAG(Young, young, PF_ANY)
 PAGEFLAG(Idle, idle, PF_ANY)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD)
+#else
+PAGEFLAG_FALSE(SkipKASanPoison)
+#endif
+
 /*
  * PageReported() is used to track reported free pages within the Buddy
  * allocator. We can use the non-atomic version of the test and set


@@ -93,6 +93,7 @@ struct freq_qos_request {
 	enum freq_qos_req_type type;
 	struct plist_node pnode;
 	struct freq_constraints *qos;
+	ANDROID_OEM_DATA_ARRAY(1, 2);
 };


@@ -22,6 +22,7 @@
 #include <linux/sched.h> /* for current && schedule_timeout */
 #include <linux/mutex.h> /* for struct mutex */
 #include <linux/pm_runtime.h> /* for runtime PM */
+#include <linux/android_kabi.h>
 
 struct usb_device;
 struct usb_driver;
@@ -257,6 +258,11 @@ struct usb_interface {
 	struct device dev; /* interface specific device info */
 	struct device *usb_dev;
 	struct work_struct reset_ws; /* for resets in atomic context */
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 #define to_usb_interface(d) container_of(d, struct usb_interface, dev)
@@ -402,6 +408,11 @@ struct usb_host_bos {
 	struct usb_ssp_cap_descriptor *ssp_cap;
 	struct usb_ss_container_id_descriptor *ss_id;
 	struct usb_ptm_cap_descriptor *ptm_cap;
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 int __usb_get_extra_descriptor(char *buffer, unsigned size,
@@ -465,6 +476,11 @@ struct usb_bus {
 	struct mon_bus *mon_bus; /* non-null when associated */
 	int monitored; /* non-zero when monitored */
 #endif
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 struct usb_dev_state;
@@ -709,6 +725,11 @@ struct usb_device {
 	u16 hub_delay;
 	unsigned use_generic_driver:1;
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 #define to_usb_device(d) container_of(d, struct usb_device, dev)
@@ -1210,6 +1231,11 @@ struct usb_driver {
 	unsigned int supports_autosuspend:1;
 	unsigned int disable_hub_initiated_lpm:1;
 	unsigned int soft_unbind:1;
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 #define to_usb_driver(d) container_of(d, struct usb_driver, drvwrap.driver)
@@ -1595,6 +1621,12 @@ struct urb {
 	int error_count; /* (return) number of ISO errors */
 	void *context; /* (in) context for completion */
 	usb_complete_t complete; /* (in) completion routine */
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
+
 	struct usb_iso_packet_descriptor iso_frame_desc[];
 					/* (in) ISO ONLY */
 };


@@ -25,6 +25,7 @@
 #include <linux/rwsem.h>
 #include <linux/interrupt.h>
 #include <linux/idr.h>
+#include <linux/android_kabi.h>
 
 #define MAX_TOPO_LEVEL 6
@@ -225,6 +226,11 @@ struct usb_hcd {
 	 * (ohci 32, uhci 1024, ehci 256/512/1024).
 	 */
 
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
+
 	/* The HC driver's private data is stored at the end of
 	 * this structure.
 	 */
@@ -410,6 +416,10 @@ struct hc_driver {
 	/* Call for power on/off the port if necessary */
 	int (*port_power)(struct usb_hcd *hcd, int portnum, bool enable);
 
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 static inline int hcd_giveback_urb_in_bh(struct usb_hcd *hcd)
@@ -561,6 +571,11 @@ struct usb_tt {
 	spinlock_t lock;
 	struct list_head clear_list; /* of usb_tt_clear */
 	struct work_struct clear_work;
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 struct usb_tt_clear {


@@ -23,6 +23,8 @@
 #ifndef __LINUX_USB_USBNET_H
 #define __LINUX_USB_USBNET_H
 
+#include <linux/android_kabi.h>
+
 /* interface from usbnet core to each USB networking link we handle */
 struct usbnet {
 	/* housekeeping */
@@ -83,6 +85,11 @@ struct usbnet {
 # define EVENT_LINK_CHANGE 11
 # define EVENT_SET_RX_MODE 12
 # define EVENT_NO_IP_ALIGN 13
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
+	ANDROID_KABI_RESERVE(3);
+	ANDROID_KABI_RESERVE(4);
 };
 
 static inline struct usb_driver *driver_of(struct usb_interface *intf)
@@ -172,6 +179,9 @@ struct driver_info {
 	int out; /* tx endpoint */
 
 	unsigned long data; /* Misc driver specific data */
+
+	ANDROID_KABI_RESERVE(1);
+	ANDROID_KABI_RESERVE(2);
 };
 
 /* Minidrivers are just drivers using the "usbnet" core as a powerful


@@ -85,6 +85,12 @@
 #define IF_HAVE_PG_ARCH_2(flag,string)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) ,{1UL << flag, string}
+#else
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string)
+#endif
+
 #define __def_pageflag_names \
 	{1UL << PG_locked, "locked" }, \
 	{1UL << PG_waiters, "waiters" }, \
@@ -112,7 +118,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \
 IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \
 IF_HAVE_PG_IDLE(PG_young, "young" ) \
 IF_HAVE_PG_IDLE(PG_idle, "idle" ) \
-IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" )
+IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \
+IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags) \
 	(flags) ? __print_flags(flags, "|", \


@@ -7,15 +7,14 @@
 #if !defined(_TRACE_HOOK_GIC_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _TRACE_HOOK_GIC_H
 
-#include <linux/irqdomain.h>
 #include <linux/tracepoint.h>
 #include <trace/hooks/vendor_hooks.h>
 
+struct gic_chip_data;
+
 DECLARE_HOOK(android_vh_gic_resume,
-	TP_PROTO(struct irq_domain *domain, void __iomem *dist_base),
-	TP_ARGS(domain, dist_base));
+	TP_PROTO(struct gic_chip_data *gd),
+	TP_ARGS(gd));
 
 /* macro versions of hooks are no longer required */
 
 #endif /* _TRACE_HOOK_GIC_H */


@@ -42,6 +42,12 @@ DECLARE_HOOK(android_vh_pagecache_get_page,
 	TP_PROTO(struct address_space *mapping, pgoff_t index,
 		int fgp_flags, gfp_t gfp_mask, struct page *page),
 	TP_ARGS(mapping, index, fgp_flags, gfp_mask, page));
+DECLARE_HOOK(android_vh_filemap_fault_get_page,
+	TP_PROTO(struct vm_fault *vmf, struct page **page, bool *retry),
+	TP_ARGS(vmf, page, retry));
+DECLARE_HOOK(android_vh_filemap_fault_cache_page,
+	TP_PROTO(struct vm_fault *vmf, struct page *page),
+	TP_ARGS(vmf, page));
 DECLARE_HOOK(android_vh_meminfo_proc_show,
 	TP_PROTO(struct seq_file *m),
 	TP_ARGS(m));


@@ -2858,7 +2858,7 @@ static struct ctl_table vm_table[] = {
 		.data = &sysctl_compaction_proactiveness,
 		.maxlen = sizeof(sysctl_compaction_proactiveness),
 		.mode = 0644,
-		.proc_handler = proc_dointvec_minmax,
+		.proc_handler = compaction_proactiveness_sysctl_handler,
 		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},


@@ -2650,6 +2650,30 @@ int sysctl_compact_memory;
 */
 unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
 
+int compaction_proactiveness_sysctl_handler(struct ctl_table *table, int write,
+		void *buffer, size_t *length, loff_t *ppos)
+{
+	int rc, nid;
+
+	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (rc)
+		return rc;
+
+	if (write && sysctl_compaction_proactiveness) {
+		for_each_online_node(nid) {
+			pg_data_t *pgdat = NODE_DATA(nid);
+
+			if (pgdat->proactive_compact_trigger)
+				continue;
+
+			pgdat->proactive_compact_trigger = true;
+			wake_up_interruptible(&pgdat->kcompactd_wait);
+		}
+	}
+
+	return 0;
+}
+
 /*
  * This is the entry point for compacting all nodes via
  * /proc/sys/vm/compact_memory
@@ -2694,7 +2718,8 @@ void compaction_unregister_node(struct node *node)
 
 static inline bool kcompactd_work_requested(pg_data_t *pgdat)
 {
-	return pgdat->kcompactd_max_order > 0 || kthread_should_stop();
+	return pgdat->kcompactd_max_order > 0 || kthread_should_stop() ||
+		pgdat->proactive_compact_trigger;
 }
 
 static bool kcompactd_node_suitable(pg_data_t *pgdat)
@@ -2843,11 +2868,15 @@ static int kcompactd(void *p)
 	while (!kthread_should_stop()) {
 		unsigned long pflags;
+		long timeout;
+
+		timeout = sysctl_compaction_proactiveness ?
+			msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC) :
+			MAX_SCHEDULE_TIMEOUT;
 
 		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
 		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
-				kcompactd_work_requested(pgdat),
-				msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC))) {
+				kcompactd_work_requested(pgdat), timeout) &&
+			!pgdat->proactive_compact_trigger) {
 
 			psi_memstall_enter(&pflags);
 			kcompactd_do_work(pgdat);
@@ -2859,10 +2888,20 @@ static int kcompactd(void *p)
 		if (should_proactive_compact_node(pgdat)) {
 			unsigned int prev_score, score;
 
-			if (proactive_defer) {
+			/*
+			 * On wakeup of proactive compaction by sysctl
+			 * write, ignore the accumulated defer score.
+			 * Anyway, if the proactive compaction didn't
+			 * make any progress for the new value, it will
+			 * be further deferred by 2^COMPACT_MAX_DEFER_SHIFT
+			 * times.
+			 */
+			if (proactive_defer &&
+			    !pgdat->proactive_compact_trigger) {
 				proactive_defer--;
 				continue;
 			}
+
 			prev_score = fragmentation_score_node(pgdat);
 			proactive_compact_node(pgdat);
 			score = fragmentation_score_node(pgdat);
@@ -2873,6 +2912,8 @@ static int kcompactd(void *p)
 			proactive_defer = score < prev_score ?
 					0 : 1 << COMPACT_MAX_DEFER_SHIFT;
 		}
+		if (pgdat->proactive_compact_trigger)
+			pgdat->proactive_compact_trigger = false;
 	}
 
 	return 0;


@@ -2728,13 +2728,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	struct inode *inode = mapping->host;
 	pgoff_t offset = vmf->pgoff;
 	pgoff_t max_off;
-	struct page *page;
+	struct page *page = NULL;
 	vm_fault_t ret = 0;
+	bool retry = false;
 
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
 		return VM_FAULT_SIGBUS;
 
+	trace_android_vh_filemap_fault_get_page(vmf, &page, &retry);
+	if (unlikely(retry))
+		goto out_retry;
+	if (unlikely(page))
+		goto page_ok;
+
 	/*
 	 * Do we have something in the page cache already?
 	 */
@@ -2790,6 +2797,7 @@ retry_find:
 		goto out_retry;
 	}
 
+page_ok:
 	/*
 	 * Found the page and have a reference on it.
 	 * We must recheck i_size under page lock.
@@ -2835,8 +2843,10 @@ out_retry:
 	 * re-find the vma and come back and find our hopefully still populated
 	 * page.
 	 */
-	if (page)
+	if (page) {
+		trace_android_vh_filemap_fault_cache_page(vmf, page);
 		put_page(page);
+	}
 	if (fpin)
 		fput(fpin);
 	return ret | VM_FAULT_RETRY;


@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init)
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,


@@ -238,6 +238,38 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	return &alloc_meta->free_track[0];
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
+{
+	/*
+	 * This condition should match the one in post_alloc_hook() in
+	 * page_alloc.c.
+	 */
+	bool init = !want_init_on_free() && want_init_on_alloc(flags);
+
+	if (flags & __GFP_SKIP_KASAN_POISON)
+		SetPageSkipKASanPoison(page);
+
+	if (flags & __GFP_ZEROTAGS) {
+		int i;
+
+		for (i = 0; i != 1 << order; ++i)
+			tag_clear_highpage(page + i);
+	} else {
+		kasan_unpoison_pages(page, order, init);
+	}
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	/*
+	 * This condition should match the one in free_pages_prepare() in
+	 * page_alloc.c.
+	 */
+	bool init = want_init_on_free();
+
+	kasan_poison_pages(page, order, init);
+}
+
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_set_tagging_report_once(bool state)


@@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
+		kasan_poison_pages(element, (unsigned long)pool->pool_data,
+				   false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
kasan_unpoison_range(element, __ksize(element)); kasan_unpoison_range(element, __ksize(element));
else if (pool->alloc == mempool_alloc_pages) else if (pool->alloc == mempool_alloc_pages)
kasan_alloc_pages(element, (unsigned long)pool->pool_data, false); kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
false);
} }
static __always_inline void add_element(mempool_t *pool, void *element) static __always_inline void add_element(mempool_t *pool, void *element)


@@ -395,7 +395,7 @@ int page_group_by_mobility_disabled __read_mostly;
 static DEFINE_STATIC_KEY_TRUE(deferred_pages);
 
 /*
- * Calling kasan_free_pages() only after deferred memory initialization
+ * Calling kasan_poison_pages() only after deferred memory initialization
  * has completed. Poisoning pages during deferred memory init will greatly
  * lengthen the process and cause problem in large memory systems as the
  * deferred pages initialization is done with interrupt disabled.
@@ -407,15 +407,12 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
-	if (static_branch_unlikely(&deferred_pages))
-		return;
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return static_branch_unlikely(&deferred_pages) ||
+	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+	       PageSkipKASanPoison(page);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -466,13 +463,11 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+	       PageSkipKASanPoison(page);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1257,10 +1252,16 @@ out:
 	return ret;
 }
 
-static void kernel_init_free_pages(struct page *page, int numpages)
+static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
 {
 	int i;
 
+	if (zero_tags) {
+		for (i = 0; i < numpages; i++)
+			tag_clear_highpage(page + i);
+		return;
+	}
+
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
 	for (i = 0; i < numpages; i++) {
@@ -1276,7 +1277,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
-	bool init;
+	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1347,10 +1348,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	init = want_init_on_free();
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
-	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
+	if (kasan_has_integrated_init()) {
+		if (!skip_kasan_poison)
+			kasan_free_pages(page, order);
+	} else {
+		bool init = want_init_on_free();
+
+		if (init)
+			kernel_init_free_pages(page, 1 << order, false);
+		if (!skip_kasan_poison)
+			kasan_poison_pages(page, order, init);
+	}
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible. s390
@@ -2345,8 +2353,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
-	bool init;
-
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
@@ -2358,10 +2364,16 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 * kasan_alloc_pages and kernel_init_free_pages must be
 	 * kept together to avoid discrepancies in behavior.
 	 */
-	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
-	kasan_alloc_pages(page, order, init);
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
+	if (kasan_has_integrated_init()) {
+		kasan_alloc_pages(page, order, gfp_flags);
+	} else {
+		bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+
+		kasan_unpoison_pages(page, order, init);
+		if (init)
+			kernel_init_free_pages(page, 1 << order,
+					       gfp_flags & __GFP_ZEROTAGS);
+	}
 
 	kernel_unpoison_pages(page, 1 << order);
 	set_page_owner(page, order, gfp_flags);


@@ -112,7 +112,7 @@ static void notify_netlink_uevent(const char *iface, struct idletimer_tg *timer)
 	res = snprintf(iface_msg, NLMSG_MAX_SIZE, "INTERFACE=%s",
 		       iface);
 	if (NLMSG_MAX_SIZE <= res) {
-		pr_err("message too long (%d)", res);
+		pr_err("message too long (%d)\n", res);
 		return;
 	}
 
@@ -122,25 +122,25 @@ static void notify_netlink_uevent(const char *iface, struct idletimer_tg *timer)
 		       state ? "active" : "inactive");
 	if (NLMSG_MAX_SIZE <= res) {
-		pr_err("message too long (%d)", res);
+		pr_err("message too long (%d)\n", res);
 		return;
 	}
 
 	if (state) {
 		res = snprintf(uid_msg, NLMSG_MAX_SIZE, "UID=%u", timer->uid);
 		if (NLMSG_MAX_SIZE <= res)
-			pr_err("message too long (%d)", res);
+			pr_err("message too long (%d)\n", res);
 	} else {
 		res = snprintf(uid_msg, NLMSG_MAX_SIZE, "UID=");
 		if (NLMSG_MAX_SIZE <= res)
-			pr_err("message too long (%d)", res);
+			pr_err("message too long (%d)\n", res);
 	}
 
 	time_ns = timespec64_to_ns(&ts);
 	res = snprintf(timestamp_msg, NLMSG_MAX_SIZE, "TIME_NS=%llu", time_ns);
 	if (NLMSG_MAX_SIZE <= res) {
 		timestamp_msg[0] = '\0';
-		pr_err("message too long (%d)", res);
+		pr_err("message too long (%d)\n", res);
 	}
 
 	pr_debug("putting nlmsg: <%s> <%s> <%s> <%s>\n", iface_msg, state_msg,
@@ -323,12 +323,12 @@ static int idletimer_tg_create(struct idletimer_tg_info *info)
 	ret = sysfs_create_file(idletimer_tg_kobj, &info->timer->attr.attr);
 	if (ret < 0) {
-		pr_debug("couldn't add file to sysfs");
+		pr_debug("couldn't add file to sysfs\n");
 		goto out_free_attr;
 	}
 
 	list_add(&info->timer->entry, &idletimer_tg_list);
-	pr_debug("timer type value is 0.");
+	pr_debug("timer type value is 0.\n");
 	info->timer->timer_type = 0;
 	info->timer->refcnt = 1;
 	info->timer->send_nl_msg = false;
@@ -389,7 +389,7 @@ static int idletimer_tg_create_v1(struct idletimer_tg_info_v1 *info)
 	ret = sysfs_create_file(idletimer_tg_kobj, &info->timer->attr.attr);
 	if (ret < 0) {
-		pr_debug("couldn't add file to sysfs");
+		pr_debug("couldn't add file to sysfs\n");
 		goto out_free_attr;
 	}
 
@@ -397,7 +397,7 @@ static int idletimer_tg_create_v1(struct idletimer_tg_info_v1 *info)
 	kobject_uevent(idletimer_tg_kobj,KOBJ_ADD);
 
 	list_add(&info->timer->entry, &idletimer_tg_list);
-	pr_debug("timer type value is %u", info->timer_type);
+	pr_debug("timer type value is %u\n", info->timer_type);
 	info->timer->timer_type = info->timer_type;
 	info->timer->refcnt = 1;
 	info->timer->send_nl_msg = (info->send_nl_msg != 0);


@@ -70,15 +70,23 @@ UTS_VERSION="$(echo $UTS_VERSION $CONFIG_FLAGS $TIMESTAMP | cut -b -$UTS_LEN)"
 # Only replace the real compile.h if the new one is different,
 # in order to preserve the timestamp and avoid unnecessary
 # recompilations.
-# We don't consider the file changed if only the date/time changed.
+# We don't consider the file changed if only the date/time changed,
+# unless KBUILD_BUILD_TIMESTAMP was explicitly set (e.g. for
+# reproducible builds with that value referring to a commit timestamp).
 # A kernel config change will increase the generation number, thus
 # causing compile.h to be updated (including date/time) due to the
 # changed comment in the
 # first line.
 
+if [ -z "$KBUILD_BUILD_TIMESTAMP" ]; then
+   IGNORE_PATTERN="UTS_VERSION"
+else
+   IGNORE_PATTERN="NOT_A_PATTERN_TO_BE_MATCHED"
+fi
+
 if [ -r $TARGET ] && \
-      grep -v 'UTS_VERSION' $TARGET > .tmpver.1 && \
-      grep -v 'UTS_VERSION' .tmpcompile > .tmpver.2 && \
+      grep -v $IGNORE_PATTERN $TARGET > .tmpver.1 && \
+      grep -v $IGNORE_PATTERN .tmpcompile > .tmpver.2 && \
       cmp -s .tmpver.1 .tmpver.2; then
    rm -f .tmpcompile
 else

@@ -300,26 +300,27 @@ static struct avc_xperms_decision_node
 	struct avc_xperms_decision_node *xpd_node;
 	struct extended_perms_decision *xpd;
 
-	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep, GFP_NOWAIT);
+	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep,
+				     GFP_NOWAIT | __GFP_NOWARN);
 	if (!xpd_node)
 		return NULL;
 
 	xpd = &xpd_node->xpd;
 	if (which & XPERMS_ALLOWED) {
 		xpd->allowed = kmem_cache_zalloc(avc_xperms_data_cachep,
-						GFP_NOWAIT);
+						GFP_NOWAIT | __GFP_NOWARN);
 		if (!xpd->allowed)
 			goto error;
 	}
 	if (which & XPERMS_AUDITALLOW) {
 		xpd->auditallow = kmem_cache_zalloc(avc_xperms_data_cachep,
-						GFP_NOWAIT);
+						GFP_NOWAIT | __GFP_NOWARN);
 		if (!xpd->auditallow)
 			goto error;
 	}
 	if (which & XPERMS_DONTAUDIT) {
 		xpd->dontaudit = kmem_cache_zalloc(avc_xperms_data_cachep,
-						GFP_NOWAIT);
+						GFP_NOWAIT | __GFP_NOWARN);
 		if (!xpd->dontaudit)
 			goto error;
 	}
@@ -347,7 +348,7 @@ static struct avc_xperms_node *avc_xperms_alloc(void)
 {
 	struct avc_xperms_node *xp_node;
 
-	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT);
+	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT | __GFP_NOWARN);
 	if (!xp_node)
 		return xp_node;
 	INIT_LIST_HEAD(&xp_node->xpd_head);
@@ -505,7 +506,7 @@ static struct avc_node *avc_alloc_node(struct selinux_avc *avc)
 {
 	struct avc_node *node;
 
-	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT);
+	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT | __GFP_NOWARN);
 	if (!node)
 		goto out;