Commit Graph

46 Commits

Author:  Raghavendra Rao Ananta
SHA1:    5bd75403be
Message: Merge remote-tracking branch 'remotes/origin/tmp-f686d9f' into msm-lahaina
* remotes/origin/tmp-f686d9f:
  ANDROID: update abi_gki_aarch64.xml for 5.2-rc6
  Linux 5.2-rc6
  Revert "iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock"
  Bluetooth: Fix regression with minimum encryption key size alignment
  tcp: refine memory limit test in tcp_fragment()
  x86/vdso: Prevent segfaults due to hoisted vclock reads
  SUNRPC: Fix a credential refcount leak
  Revert "SUNRPC: Declare RPC timers as TIMER_DEFERRABLE"
  net :sunrpc :clnt :Fix xps refcount imbalance on the error path
  NFS4: Only set creation opendata if O_CREAT
  ANDROID: gki_defconfig: workaround to enable configs
  ANDROID: gki_defconfig: more configs for partners
  ARM: 8867/1: vdso: pass --be8 to linker if necessary
  KVM: nVMX: reorganize initial steps of vmx_set_nested_state
  KVM: PPC: Book3S HV: Invalidate ERAT when flushing guest TLB entries
  habanalabs: use u64_to_user_ptr() for reading user pointers
  nfsd: replace Jeff by Chuck as nfsd co-maintainer
  inet: clear num_timeout reqsk_alloc()
  PCI/P2PDMA: Ignore root complex whitelist when an IOMMU is present
  net: mvpp2: debugfs: Add pmap to fs dump
  ipv6: Default fib6_type to RTN_UNICAST when not set
  net: hns3: Fix inconsistent indenting
  net/af_iucv: always register net_device notifier
  net/af_iucv: build proper skbs for HiperTransport
  net/af_iucv: remove GFP_DMA restriction for HiperTransport
  doc: fix documentation about UIO_MEM_LOGICAL using
  MAINTAINERS / Documentation: Thorsten Scherer is the successor of Gavin Schenk
  docs: fb: Add TER16x32 to the available font names
  MAINTAINERS: fpga: hand off maintainership to Moritz
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 507
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 506
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 505
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 504
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 503
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 502
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 501
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 499
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 498
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 497
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 496
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 495
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 491
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 490
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 489
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 488
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 487
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 486
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 485
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 484
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 482
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 481
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 480
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 479
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 477
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 475
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 474
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 473
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 472
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 471
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 469
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 468
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 467
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 466
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 465
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 464
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 463
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 462
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 461
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 460
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 459
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 457
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 456
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 455
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 454
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 452
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 451
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 250
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 248
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 247
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 246
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 245
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 244
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 243
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 239
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 238
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 237
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 235
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 233
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 232
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 231
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 230
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 226
  KVM: arm/arm64: Fix emulated ptimer irq injection
  net: dsa: mv88e6xxx: fix shift of FID bits in mv88e6185_g1_vtu_loadpurge()
  tests: kvm: Check for a kernel warning
  kvm: tests: Sort tests in the Makefile alphabetically
  KVM: x86/mmu: Allocate PAE root array when using SVM's 32-bit NPT
  KVM: x86: Modify struct kvm_nested_state to have explicit fields for data
  fanotify: update connector fsid cache on add mark
  quota: fix a problem about transfer quota
  drm/i915: Don't clobber M/N values during fastset check
  powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac
  ovl: make i_ino consistent with st_ino in more cases
  scsi: qla2xxx: Fix hardlockup in abort command during driver remove
  scsi: ufs: Avoid runtime suspend possibly being blocked forever
  scsi: qedi: update driver version to 8.37.0.20
  scsi: qedi: Check targetname while finding boot target information
  hvsock: fix epollout hang from race condition
  net/udp_gso: Allow TX timestamp with UDP GSO
  net: netem: fix use after free and double free with packet corruption
  net: netem: fix backlog accounting for corrupted GSO frames
  net: lio_core: fix potential sign-extension overflow on large shift
  tipc: pass tunnel dev as NULL to udp_tunnel(6)_xmit_skb
  ip6_tunnel: allow not to count pkts on tstats by passing dev as NULL
  ip_tunnel: allow not to count pkts on tstats by setting skb's dev to NULL
  apparmor: reset pos on failure to unpack for various functions
  apparmor: enforce nullbyte at end of tag string
  apparmor: fix PROFILE_MEDIATES for untrusted input
  RDMA/efa: Handle mmap insertions overflow
  tun: wake up waitqueues after IFF_UP is set
  drm: return -EFAULT if copy_to_user() fails
  net: remove duplicate fetch in sock_getsockopt
  tipc: fix issues with early FAILOVER_MSG from peer
  bnx2x: Check if transceiver implements DDM before access
  xhci: detect USB 3.2 capable host controllers correctly
  usb: xhci: Don't try to recover an endpoint if port is in error state.
  KVM: fix typo in documentation
  drm/panfrost: Make sure a BO is only unmapped when appropriate
  md: fix for divide error in status_resync
  soc: ixp4xx: npe: Fix an IS_ERR() vs NULL check in probe
  arm64/mm: don't initialize pgd_cache twice
  MAINTAINERS: Update my email address
  arm64/sve: <uapi/asm/ptrace.h> should not depend on <uapi/linux/prctl.h>
  ovl: fix typo in MODULE_PARM_DESC
  ovl: fix bogus -Wmaybe-unitialized warning
  ovl: don't fail with disconnected lower NFS
  mmc: core: Prevent processing SDIO IRQs when the card is suspended
  mmc: sdhci: sdhci-pci-o2micro: Correctly set bus width when tuning
  brcmfmac: sdio: Don't tune while the card is off
  mmc: core: Add sdio_retune_hold_now() and sdio_retune_release()
  brcmfmac: sdio: Disable auto-tuning around commands expected to fail
  mmc: core: API to temporarily disable retuning for SDIO CRC errors
  Revert "brcmfmac: disable command decode in sdio_aos"
  ARM: ixp4xx: include irqs.h where needed
  ARM: ixp4xx: mark ixp4xx_irq_setup as __init
  ARM: ixp4xx: don't select SERIAL_OF_PLATFORM
  firmware: trusted_foundations: add ARMv7 dependency
  usb: dwc2: Use generic PHY width in params setup
  RDMA/efa: Fix success return value in case of error
  IB/hfi1: Handle port down properly in pio
  IB/hfi1: Handle wakeup of orphaned QPs for pio
  IB/hfi1: Wakeup QPs orphaned on wait list after flush
  IB/hfi1: Use aborts to trigger RC throttling
  IB/hfi1: Create inline to get extended headers
  IB/hfi1: Silence txreq allocation warnings
  IB/hfi1: Avoid hardlockup with flushlist_lock
  KVM: PPC: Book3S HV: Only write DAWR[X] when handling h_set_dawr in real mode
  KVM: PPC: Book3S HV: Fix r3 corruption in h_set_dabr()
  fs/namespace: fix unprivileged mount propagation
  vfs: fsmount: add missing mntget()
  cifs: fix GlobalMid_Lock bug in cifs_reconnect
  SMB3: retry on STATUS_INSUFFICIENT_RESOURCES instead of failing write
  staging: erofs: add requirements field in superblock
  arm64: ssbd: explicitly depend on <linux/prctl.h>
  block: fix page leak when merging to same page
  block: return from __bio_try_merge_page if merging occured in the same page
  Btrfs: fix failure to persist compression property xattr deletion on fsync
  riscv: remove unused barrier defines
  usb: chipidea: udc: workaround for endpoint conflict issue
  MAINTAINERS: Change QCOM repo location
  mmc: mediatek: fix SDIO IRQ detection issue
  mmc: mediatek: fix SDIO IRQ interrupt handle flow
  mmc: core: complete HS400 before checking status
  riscv: mm: synchronize MMU after pte change
  MAINTAINERS: Update my email address to use @kernel.org
  ANDROID: update abi_gki_aarch64.xml for 5.2-rc5
  riscv: dts: add initial board data for the SiFive HiFive Unleashed
  riscv: dts: add initial support for the SiFive FU540-C000 SoC
  dt-bindings: riscv: convert cpu binding to json-schema
  dt-bindings: riscv: sifive: add YAML documentation for the SiFive FU540
  arch: riscv: add support for building DTB files from DT source data
  drm/i915/gvt: ignore unexpected pvinfo write
  lapb: fixed leak of control-blocks.
  tipc: purge deferredq list for each grp member in tipc_group_delete
  ax25: fix inconsistent lock state in ax25_destroy_timer
  neigh: fix use-after-free read in pneigh_get_next
  tcp: fix compile error if !CONFIG_SYSCTL
  hv_sock: Suppress bogus "may be used uninitialized" warnings
  be2net: Fix number of Rx queues used for flow hashing
  net: handle 802.1P vlan 0 packets properly
  Linux 5.2-rc5
  tcp: enforce tcp_min_snd_mss in tcp_mtu_probing()
  tcp: add tcp_min_snd_mss sysctl
  tcp: tcp_fragment() should apply sane memory limits
  tcp: limit payload size of sacked skbs
  Revert "net: phylink: set the autoneg state in phylink_phy_change"
  bpf: fix nested bpf tracepoints with per-cpu data
  bpf: Fix out of bounds memory access in bpf_sk_storage
  vsock/virtio: set SOCK_DONE on peer shutdown
  net: dsa: rtl8366: Fix up VLAN filtering
  net: phylink: set the autoneg state in phylink_phy_change
  powerpc/32: fix build failure on book3e with KVM
  powerpc/booke: fix fast syscall entry on SMP
  powerpc/32s: fix initial setup of segment registers on secondary CPU
  x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callback
  net: add high_order_alloc_disable sysctl/static key
  tcp: add tcp_tx_skb_cache sysctl
  tcp: add tcp_rx_skb_cache sysctl
  sysctl: define proc_do_static_key()
  hv_netvsc: Set probe mode to sync
  net: sched: flower: don't call synchronize_rcu() on mask creation
  net: dsa: fix warning same module names
  sctp: Free cookie before we memdup a new one
  net: dsa: microchip: Don't try to read stats for unused ports
  qmi_wwan: extend permitted QMAP mux_id value range
  qmi_wwan: avoid RCU stalls on device disconnect when in QMAP mode
  qmi_wwan: add network device usage statistics for qmimux devices
  qmi_wwan: add support for QMAP padding in the RX path
  bpf, x64: fix stack layout of JITed bpf code
  Smack: Restore the smackfsdef mount option and add missing prefixes
  bpf, devmap: Add missing RCU read lock on flush
  bpf, devmap: Add missing bulk queue free
  bpf, devmap: Fix premature entry free on destroying map
  ftrace: Fix NULL pointer dereference in free_ftrace_func_mapper()
  module: Fix livepatch/ftrace module text permissions race
  tracing/uprobe: Fix obsolete comment on trace_uprobe_create()
  tracing/uprobe: Fix NULL pointer dereference in trace_uprobe_create()
  tracing: Make two symbols static
  tracing: avoid build warning with HAVE_NOP_MCOUNT
  tracing: Fix out-of-range read in trace_stack_print()
  gfs2: Fix rounding error in gfs2_iomap_page_prepare
  net: phylink: further mac_config documentation improvements
  nfc: Ensure presence of required attributes in the deactivate_target handler
  btrfs: start readahead also in seed devices
  x86/kasan: Fix boot with 5-level paging and KASAN
  cfg80211: report measurement start TSF correctly
  cfg80211: fix memory leak of wiphy device name
  cfg80211: util: fix bit count off by one
  mac80211: do not start any work during reconfigure flow
  cfg80211: use BIT_ULL in cfg80211_parse_mbssid_data()
  mac80211: only warn once on chanctx_conf being NULL
  mac80211: drop robust management frames from unknown TA
  gpu: ipu-v3: image-convert: Fix image downsize coefficients
  gpu: ipu-v3: image-convert: Fix input bytesperline for packed formats
  gpu: ipu-v3: image-convert: Fix input bytesperline width/height align
  thunderbolt: Implement CIO reset correctly for Titan Ridge
  ARM: davinci: da8xx: specify dma_coherent_mask for lcdc
  ARM: davinci: da850-evm: call regulator_has_full_constraints()
  timekeeping: Repair ktime_get_coarse*() granularity
  Revert "ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops"
  ANDROID: update abi_gki_aarch64.xml
  mm/devm_memremap_pages: fix final page put race
  PCI/P2PDMA: track pgmap references per resource, not globally
  lib/genalloc: introduce chunk owners
  PCI/P2PDMA: fix the gen_pool_add_virt() failure path
  mm/devm_memremap_pages: introduce devm_memunmap_pages
  drivers/base/devres: introduce devm_release_action()
  mm/vmscan.c: fix trying to reclaim unevictable LRU page
  coredump: fix race condition between collapse_huge_page() and core dumping
  mm/mlock.c: change count_mm_mlocked_page_nr return type
  mm: mmu_gather: remove __tlb_reset_range() for force flush
  fs/ocfs2: fix race in ocfs2_dentry_attach_lock()
  mm/vmscan.c: fix recent_rotated history
  mm/mlock.c: mlockall error for flag MCL_ONFAULT
  scripts/decode_stacktrace.sh: prefix addr2line with $CROSS_COMPILE
  mm/list_lru.c: fix memory leak in __memcg_init_list_lru_node
  mm: memcontrol: don't batch updates of local VM stats and events
  PCI: PM: Skip devices in D0 for suspend-to-idle
  ANDROID: Removed extraneous configs from gki
  powerpc/bpf: use unsigned division instruction for 64-bit operations
  bpf: fix div64 overflow tests to properly detect errors
  bpf: sync BPF_FIB_LOOKUP flag changes with BPF uapi
  bpf: simplify definition of BPF_FIB_LOOKUP related flags
  cifs: add spinlock for the openFileList to cifsInodeInfo
  cifs: fix panic in smb2_reconnect
  x86/fpu: Don't use current->mm to check for a kthread
  KVM: nVMX: use correct clean fields when copying from eVMCS
  vfio-ccw: Destroy kmem cache region on module exit
  block/ps3vram: Use %llu to format sector_t after LBDAF removal
  libata: Extend quirks for the ST1000LM024 drives with NOLPM quirk
  bcache: only set BCACHE_DEV_WB_RUNNING when cached device attached
  bcache: fix stack corruption by PRECEDING_KEY()
  arm64/sve: Fix missing SVE/FPSIMD endianness conversions
  blk-mq: remove WARN_ON(!q->elevator) from blk_mq_sched_free_requests
  blkio-controller.txt: Remove references to CFQ
  block/switching-sched.txt: Update to blk-mq schedulers
  null_blk: remove duplicate check for report zone
  blk-mq: no need to check return value of debugfs_create functions
  io_uring: fix memory leak of UNIX domain socket inode
  block: force select mq-deadline for zoned block devices
  binder: fix possible UAF when freeing buffer
  drm/amdgpu: return 0 by default in amdgpu_pm_load_smu_firmware
  drm/amdgpu: Fix bounds checking in amdgpu_ras_is_supported()
  ANDROID: x86 gki_defconfig: enable DMA_CMA
  ANDROID: Fixed x86 regression
  ANDROID: gki_defconfig: enable DMA_CMA
  Input: synaptics - enable SMBus on ThinkPad E480 and E580
  net: mvpp2: prs: Use the correct helpers when removing all VID filters
  net: mvpp2: prs: Fix parser range for VID filtering
  mlxsw: spectrum: Disallow prio-tagged packets when PVID is removed
  mlxsw: spectrum_buffers: Reduce pool size on Spectrum-2
  selftests: tc_flower: Add TOS matching test
  mlxsw: spectrum_flower: Fix TOS matching
  selftests: mlxsw: Test nexthop offload indication
  mlxsw: spectrum_router: Refresh nexthop neighbour when it becomes dead
  mlxsw: spectrum: Use different seeds for ECMP and LAG hash
  net: tls, correctly account for copied bytes with multiple sk_msgs
  vrf: Increment Icmp6InMsgs on the original netdev
  cpuset: restore sanity to cpuset_cpus_allowed_fallback()
  net: ethtool: Allow matching on vlan DEI bit
  linux-next: DOC: RDS: Fix a typo in rds.txt
  x86/kgdb: Return 0 from kgdb_arch_set_breakpoint()
  mpls: fix af_mpls dependencies for real
  selinux: fix a missing-check bug in selinux_sb_eat_lsm_opts()
  selinux: fix a missing-check bug in selinux_add_mnt_opt( )
  arm64: tlbflush: Ensure start/end of address range are aligned to stride
  usb: typec: Make sure an alt mode exist before getting its partner
  KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroy
  KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
  KVM: arm64: Implement vq_present() as a macro
  xdp: check device pointer before clearing
  bpf: net: Set sk_bpf_storage back to NULL for cloned sk
  Btrfs: fix race between block group removal and block group allocation
  clocksource/drivers/arm_arch_timer: Don't trace count reader functions
  i2c: pca-platform: Fix GPIO lookup code
  thunderbolt: Make sure device runtime resume completes before taking domain lock
  drm: add fallback override/firmware EDID modes workaround
  i2c: acorn: fix i2c warning
  arm64: Don't unconditionally add -Wno-psabi to KBUILD_CFLAGS
  drm/edid: abstract override/firmware EDID retrieval
  platform/mellanox: mlxreg-hotplug: Add devm_free_irq call to remove flow
  platform/x86: mlx-platform: Fix parent device in i2c-mux-reg device registration
  platform/x86: intel-vbtn: Report switch events when event wakes device
  platform/x86: asus-wmi: Only Tell EC the OS will handle display hotkeys from asus_nb_wmi
  ARM: mvebu_v7_defconfig: fix Ethernet on Clearfog
  x86/resctrl: Prevent NULL pointer dereference when local MBM is disabled
  x86/resctrl: Don't stop walking closids when a locksetup group is found
  iommu/arm-smmu: Avoid constant zero in TLBI writes
  drm/i915/perf: fix whitelist on Gen10+
  drm/i915/sdvo: Implement proper HDMI audio support for SDVO
  drm/i915: Fix per-pixel alpha with CCS
  drm/i915/dmc: protect against reading random memory
  drm/i915/dsi: Use a fuzzy check for burst mode clock check
  Input: imx_keypad - make sure keyboard can always wake up system
  selinux: log raw contexts as untrusted strings
  ptrace: restore smp_rmb() in __ptrace_may_access()
  IB/hfi1: Correct tid qp rcd to match verbs context
  IB/hfi1: Close PSM sdma_progress sleep window
  IB/hfi1: Validate fault injection opcode user input
  geneve: Don't assume linear buffers in error handler
  vxlan: Don't assume linear buffers in error handler
  net: openvswitch: do not free vport if register_netdevice() is failed.
  net: correct udp zerocopy refcnt also when zerocopy only on append
  drm/amdgpu/{uvd,vcn}: fetch ring's read_ptr after alloc
  ovl: fix wrong flags check in FS_IOC_FS[SG]ETXATTR ioctls
  riscv: Fix udelay in RV32.
  drm/vmwgfx: fix a warning due to missing dma_parms
  riscv: export pm_power_off again
  drm/vmwgfx: Honor the sg list segment size limitation
  RISC-V: defconfig: enable clocks, serial console
  drm/vmwgfx: Use the backdoor port if the HB port is not available
  bpf: lpm_trie: check left child of last leftmost node for NULL
  Revert "fuse: require /dev/fuse reads to have enough buffer capacity"
  ALSA: ice1712: Check correct return value to snd_i2c_sendbytes (EWS/DMX 6Fire)
  ALSA: oxfw: allow PCM capture for Stanton SCS.1m
  ALSA: firewire-motu: fix destruction of data for isochronous resources
  s390/ctl_reg: mark __ctl_set_bit and __ctl_clear_bit as __always_inline
  s390/boot: disable address-of-packed-member warning
  ANDROID: update gki aarch64 ABI representation
  cgroup: Fix css_task_iter_advance_css_set() cset skip condition
  drm/panfrost: Require the simple_ondemand governor
  drm/panfrost: make devfreq optional again
  drm/gem_shmem: Use a writecombine mapping for ->vaddr
  mmc: sdhi: disallow HS400 for M3-W ES1.2, RZ/G2M, and V3H
  ASoC: Intel: sst: fix kmalloc call with wrong flags
  ASoC: core: Fix deadlock in snd_soc_instantiate_card()
  cgroup/bfq: revert bfq.weight symlink change
  ARM: dts: am335x phytec boards: Fix cd-gpios active level
  ARM: dts: dra72x: Disable usb4_tm target module
  nfp: ensure skb network header is set for packet redirect
  tcp: fix undo spurious SYNACK in passive Fast Open
  mpls: fix af_mpls dependencies
  ibmvnic: Fix unchecked return codes of memory allocations
  ibmvnic: Refresh device multicast list after reset
  ibmvnic: Do not close unopened driver during reset
  mpls: fix warning with multi-label encap
  net: phy: rename Asix Electronics PHY driver
  ipv6: flowlabel: fl6_sock_lookup() must use atomic_inc_not_zero
  net: ipv4: fib_semantics: fix uninitialized variable
  Input: iqs5xx - get axis info before calling input_mt_init_slots()
  Linux 5.2-rc4
  drm: panel-orientation-quirks: Add quirk for GPD MicroPC
  drm: panel-orientation-quirks: Add quirk for GPD pocket2
  counter/ftm-quaddec: Add missing dependencies in Kconfig
  staging: iio: adt7316: Fix build errors when GPIOLIB is not set
  x86/fpu: Update kernel's FPU state before using for the fsave header
  MAINTAINERS: Karthikeyan Ramasubramanian is MIA
  i2c: xiic: Add max_read_len quirk
  ANDROID: update ABI representation
  gpio: pca953x: hack to fix 24 bit gpio expanders
  net/mlx5e: Support tagged tunnel over bond
  net/mlx5e: Avoid detaching non-existing netdev under switchdev mode
  net/mlx5e: Fix source port matching in fdb peer flow rule
  net/mlx5e: Replace reciprocal_scale in TX select queue function
  net/mlx5e: Add ndo_set_feature for uplink representor
  net/mlx5: Avoid reloading already removed devices
  net/mlx5: Update pci error handler entries and command translation
  RAS/CEC: Convert the timer callback to a workqueue
  RAS/CEC: Fix binary search function
  x86/mm/KASLR: Compute the size of the vmemmap section properly
  can: purge socket error queue on sock destruct
  can: flexcan: Remove unneeded registration message
  can: af_can: Fix error path of can_init()
  can: m_can: implement errata "Needless activation of MRAF irq"
  can: mcp251x: add support for mcp25625
  dt-bindings: can: mcp251x: add mcp25625 support
  can: xilinx_can: use correct bittiming_const for CAN FD core
  can: flexcan: fix timeout when set small bitrate
  can: usb: Kconfig: Remove duplicate menu entry
  lockref: Limit number of cmpxchg loop retries
  uaccess: add noop untagged_addr definition
  x86/insn-eval: Fix use-after-free access to LDT entry
  kbuild: use more portable 'command -v' for cc-cross-prefix
  s390/unwind: correct stack switching during unwind
  scsi: hpsa: correct ioaccel2 chaining
  btrfs: Always trim all unallocated space in btrfs_trim_free_extents
  netfilter: ipv6: nf_defrag: accept duplicate fragments again
  powerpc/32s: fix booting with CONFIG_PPC_EARLY_DEBUG_BOOTX
  drm/meson: fix G12A primary plane disabling
  drm/meson: fix primary plane disabling
  drm/meson: fix G12A HDMI PLL settings for 4K60 1000/1001 variations
  block, bfq: add weight symlink to the bfq.weight cgroup parameter
  cgroup: let a symlink too be created with a cftype file
  powerpc/64s: __find_linux_pte() synchronization vs pmdp_invalidate()
  powerpc/64s: Fix THP PMD collapse serialisation
  powerpc: Fix kexec failure on book3s/32
  drm/nouveau/secboot/gp10[2467]: support newer FW to fix SEC2 failures on some boards
  drm/nouveau/secboot: enable loading of versioned LS PMU/SEC2 ACR msgqueue FW
  drm/nouveau/secboot: split out FW version-specific LS function pointers
  drm/nouveau/secboot: pass max supported FW version to LS load funcs
  drm/nouveau/core: support versioned firmware loading
  drm/nouveau/core: pass subdev into nvkm_firmware_get, rather than device
  block: free sched's request pool in blk_cleanup_queue
  bpf: expand section tests for test_section_names
  bpf: more msg_name rewrite tests to test_sock_addr
  bpf, bpftool: enable recvmsg attach types
  bpf, libbpf: enable recvmsg attach types
  bpf: sync tooling uapi header
  bpf: fix unconnected udp hooks
  vfio/mdev: Synchronize device create/remove with parent removal
  vfio/mdev: Avoid creating sysfs remove file on stale device removal
  pktgen: do not sleep with the thread lock held.
  net: mvpp2: Use strscpy to handle stat strings
  net: rds: fix memory leak in rds_ib_flush_mr_pool
  ipv6: fix EFAULT on sendto with icmpv6 and hdrincl
  ipv6: use READ_ONCE() for inet->hdrincl as in ipv4
  soundwire: intel: set dai min and max channels correctly
  soundwire: stream: fix bad unlock balance
  x86/fpu: Use fault_in_pages_writeable() for pre-faulting
  nvme-rdma: use dynamic dma mapping per command
  nvme: Fix u32 overflow in the number of namespace list calculation
  vfio/mdev: Improve the create/remove sequence
  SoC: rt274: Fix internal jack assignment in set_jack callback
  ALSA: hdac: fix memory release for SST and SOF drivers
  ASoC: SOF: Intel: hda: use the defined ppcap functions
  ASoC: core: move DAI pre-links initiation to snd_soc_instantiate_card
  ASoC: Intel: cht_bsw_rt5672: fix kernel oops with platform_name override
  ASoC: Intel: cht_bsw_nau8824: fix kernel oops with platform_name override
  ASoC: Intel: bytcht_es8316: fix kernel oops with platform_name override
  ASoC: Intel: cht_bsw_max98090: fix kernel oops with platform_name override
  Revert "gfs2: Replace gl_revokes with a GLF flag"
  arm64: Silence gcc warnings about arch ABI drift
  parisc: Fix crash due alternative coding for NP iopdir_fdc bit
  parisc: Use lpa instruction to load physical addresses in driver code
  parisc: configs: Remove useless UEVENT_HELPER_PATH
  parisc: Use implicit space register selection for loading the coherence index of I/O pdirs
  usb: gadget: udc: lpc32xx: fix return value check in lpc32xx_udc_probe()
  usb: gadget: dwc2: fix zlp handling
  usb: dwc2: Set actual frame number for completed ISOC transfer for none DDMA
  usb: gadget: udc: lpc32xx: allocate descriptor with GFP_ATOMIC
  usb: gadget: fusb300_udc: Fix memory leak of fusb300->ep[i]
  usb: phy: mxs: Disable external charger detect in mxs_phy_hw_init()
  usb: dwc2: Fix DMA cache alignment issues
  usb: dwc2: host: Fix wMaxPacketSize handling (fix webcam regression)
  ARM64: trivial: s/TIF_SECOMP/TIF_SECCOMP/ comment typo fix
  drm/komeda: Potential error pointer dereference
  drm/komeda: remove set but not used variable 'kcrtc'
  x86/CPU: Add more Icelake model numbers
  hwmon: (pmbus/core) Treat parameters as paged if on multiple pages
  hwmon: (pmbus/core) mutex_lock write in pmbus_set_samples
  hwmon: (core) add thermal sensors only if dev->of_node is present
  Revert "fib_rules: return 0 directly if an exactly same rule exists when NLM_F_EXCL not supplied"
  net: aquantia: fix wol configuration not applied sometimes
  ethtool: fix potential userspace buffer overflow
  Fix memory leak in sctp_process_init
  net: rds: fix memory leak when unload rds_rdma
  ipv6: fix the check before getting the cookie in rt6_get_cookie
  ipv4: not do cache for local delivery if bc_forwarding is enabled
  selftests: vm: Fix test build failure when built by itself
  tools: bpftool: Fix JSON output when lookup fails
  mmc: also set max_segment_size in the device
  mtip32xx: also set max_segment_size in the device
  rsxx: don't call dma_set_max_seg_size
  nvme-pci: don't limit DMA segement size
  s390/qeth: handle error when updating TX queue count
  s390/qeth: fix VLAN attribute in bridge_hostnotify udev event
  s390/qeth: check dst entry before use
  s390/qeth: handle limited IPv4 broadcast in L3 TX path
  ceph: fix error handling in ceph_get_caps()
  ceph: avoid iput_final() while holding mutex or in dispatch thread
  ceph: single workqueue for inode related works
  cgroup: css_task_iter_skip()'d iterators must be advanced before accessed
  drm/amd/amdgpu: add RLC firmware to support raven1 refresh
  drm/amd/powerplay: add set_power_profile_mode for raven1_refresh
  drm/amdgpu: fix ring test failure issue during s3 in vce 3.0 (V2)
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 450
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 449
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 448
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 446
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 445
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 444
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 443
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 442
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 440
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 438
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 437
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 435
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 434
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 433
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 432
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 431
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 430
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 429
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 428
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 426
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 424
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 423
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 422
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 421
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 420
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 419
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 418
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 417
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 416
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 414
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 412
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 411
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 410
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 409
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 408
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 407
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 406
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 405
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 404
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 403
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 402
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 401
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 400
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 399
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 398
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 397
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 396
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 395
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 394
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 393
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 392
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 391
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 390
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 389
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 388
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 387
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 380
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 378
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 377
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 376
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 375
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 373
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 372
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 371
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 370
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 367
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 365
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 364
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 363
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 362
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 354
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 353
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 352
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 351
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 350
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 349
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 348
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 347
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 346
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 345
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 344
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 343
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 342
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 341
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 340
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 339
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 338
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 336
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 335
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 334
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 333
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 332
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 330
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 328
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 326
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 325
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 324
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 323
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 322
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 321
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 320
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 316
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 315
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 314
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 313
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 312
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 311
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 310
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 309
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 308
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 307
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 305
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 301
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 300
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 299
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 297
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 296
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 294
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 292
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 291
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 290
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 289
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 288
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 287
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 286
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 285
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 284
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 283
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 282
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 281
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 280
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 278
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 277
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 276
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 275
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 274
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 273
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 272
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 271
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 270
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 269
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 268
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 267
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 266
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 265
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 264
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 263
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 262
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 260
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 258
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 257
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 256
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 254
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 253
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 252
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 251
  lib/test_stackinit: Handle Clang auto-initialization pattern
  block: Drop unlikely before IS_ERR(_OR_NULL)
  xen/swiotlb: don't initialize swiotlb twice on arm64
  s390/mm: fix address space detection in exception handling
  HID: logitech-dj: Fix 064d:c52f receiver support
  Revert "HID: core: Call request_module before doing device_add"
  Revert "HID: core: Do not call request_module() in async context"
  Revert "HID: Increase maximum report size allowed by hid_field_extract()"
  tests: fix pidfd-test compilation
  signal: improve comments
  samples: fix pidfd-metadata compilation
  arm64: arch_timer: mark functions as __always_inline
  arm64: smp: Moved cpu_logical_map[] to smp.h
  arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
  selftests/bpf: move test_lirc_mode2_user to TEST_GEN_PROGS_EXTENDED
  USB: Fix chipmunk-like voice when using Logitech C270 for recording audio.
  USB: usb-storage: Add new ID to ums-realtek
  udmabuf: actually unmap the scatterlist
  net: fix indirect calls helpers for ptype list hooks.
  net: ipvlan: Fix ipvlan device tso disabled while NETIF_F_IP_CSUM is set
  scsi: smartpqi: unlock on error in pqi_submit_raid_request_synchronous()
  scsi: ufs: Check that space was properly alloced in copy_query_response
  udp: only choose unbound UDP socket for multicast when not in a VRF
  net/tls: replace the sleeping lock around RX resync with a bit lock
  Revert "net/tls: avoid NULL-deref on resync during device removal"
  block: aoe: no need to check return value of debugfs_create functions
  net: dsa: sja1105: Fix link speed not working at 100 Mbps and below
  net: phylink: avoid reducing support mask
  scripts/checkstack.pl: Fix arm64 wrong or unknown architecture
  kbuild: tar-pkg: enable communication with jobserver
  kconfig: tests: fix recursive inclusion unit test
  kbuild: teach kselftest-merge to find nested config files
  nvmet: fix data_len to 0 for bdev-backed write_zeroes
  MAINTAINERS: Hand over skd maintainership
  ASoC: sun4i-i2s: Add offset to RX channel select
  ASoC: sun4i-i2s: Fix sun8i tx channel offset mask
  ASoC: max98090: remove 24-bit format support if RJ is 0
  ASoC: da7219: Fix build error without CONFIG_I2C
  ASoC: SOF: Intel: hda: Fix COMPILE_TEST build error
  drm/arm/hdlcd: Allow a bit of clock tolerance
  drm/arm/hdlcd: Actually validate CRTC modes
  drm/arm/mali-dp: Add a loop around the second set CVAL and try 5 times
  drm/komeda: fixing of DMA mapping sg segment warning
  netfilter: ipv6: nf_defrag: fix leakage of unqueued fragments
  habanalabs: Read upper bits of trace buffer from RWPHI
  arm64: arch_k3: Fix kconfig dependency warning
  drm: don't block fb changes for async plane updates
  drm/vc4: fix fb references in async update
  drm/msm: fix fb references in async update
  drm/amd: fix fb references in async update
  drm/rockchip: fix fb references in async update
  xen-blkfront: switch kcalloc to kvcalloc for large array allocation
  drm/mediatek: call mtk_dsi_stop() after mtk_drm_crtc_atomic_disable()
  drm/mediatek: clear num_pipes when unbind driver
  drm/mediatek: call drm_atomic_helper_shutdown() when unbinding driver
  drm/mediatek: unbind components in mtk_drm_unbind()
  drm/mediatek: fix unbind functions
  net: sfp: read eeprom in maximum 16 byte increments
  selftests: set sysctl bc_forwarding properly in router_broadcast.sh
  ANDROID: update gki aarch64 ABI representation
  net: ethernet: mediatek: Use NET_IP_ALIGN to judge if HW RX_2BYTE_OFFSET is enabled
  net: ethernet: mediatek: Use hw_feature to judge if HWLRO is supported
  net: ethernet: ti: cpsw_ethtool: fix ethtool ring param set
  ANDROID: gki_defconfig: Enable CMA, SLAB_FREELIST (RANDOM and HARDENED) on x86
  bpf: udp: Avoid calling reuseport's bpf_prog from udp_gro
  bpf: udp: ipv6: Avoid running reuseport's bpf_prog from __udp6_lib_err
  rcu: locking and unlocking need to always be at least barriers
  ANDROID: gki_defconfig: enable SLAB_FREELIST_RANDOM, SLAB_FREELIST_HARDENED
  ANDROID: gki_defconfig: enable CMA and increase CMA_AREAS
  ASoC: SOF: fix DSP oops definitions in FW ABI
  ASoC: hda: fix unbalanced codec dev refcount for HDA_DEV_ASOC
  ASoC: SOF: ipc: replace fw ready bitfield with explicit bit ordering
  ASoC: SOF: bump to ABI 3.6
  ASoC: SOF: soundwire: add initial soundwire support
  ASoC: SOF: uapi: mirror firmware changes
  ASoC: Intel: Baytrail: add quirk for Aegex 10 (RU2) tablet
  xfs: inode btree scrubber should calculate im_boffset correctly
  mmc: sdhci_am654: Fix SLOTTYPE write
  usb: typec: ucsi: ccg: fix memory leak in do_flash
  ANDROID: update gki aarch64 ABI representation
  habanalabs: Fix virtual address access via debugfs for 2MB pages
  drm/komeda: Constify the usage of komeda_component/pipeline/dev_funcs
  x86/power: Fix 'nosmt' vs hibernation triple fault during resume
  mm/vmalloc: Avoid rare case of flushing TLB with weird arguments
  mm/vmalloc: Fix calculation of direct map addr range
  PM: sleep: Add kerneldoc comments to some functions
  drm/i915/gvt: save RING_HEAD into vreg when vgpu switched out
  sparc: perf: fix updated event period in response to PERF_EVENT_IOC_PERIOD
  mdesc: fix a missing-check bug in get_vdev_port_node_info()
  drm/i915/gvt: add F_CMD_ACCESS flag for wa regs
  sparc64: Fix regression in non-hypervisor TLB flush xcall
  packet: unconditionally free po->rollover
  Update my email address
  net: hns: Fix loopback test failed at copper ports
  Linux 5.2-rc3
  net: dsa: mv88e6xxx: avoid error message on remove from VLAN 0
  mm, compaction: make sure we isolate a valid PFN
  include/linux/generic-radix-tree.h: fix kerneldoc comment
  kernel/signal.c: trace_signal_deliver when signal_group_exit
  drivers/iommu/intel-iommu.c: fix variable 'iommu' set but not used
  spdxcheck.py: fix directory structures
  kasan: initialize tag to 0xff in __kasan_kmalloc
  z3fold: fix sheduling while atomic
  scripts/gdb: fix invocation when CONFIG_COMMON_CLK is not set
  mm/gup: continue VM_FAULT_RETRY processing even for pre-faults
  ocfs2: fix error path kobject memory leak
  memcg: make it work on sparse non-0-node systems
  mm, memcg: consider subtrees in memory.events
  prctl_set_mm: downgrade mmap_sem to read lock
  prctl_set_mm: refactor checks from validate_prctl_map
  kernel/fork.c: make max_threads symbol static
  arch/arm/boot/compressed/decompress.c: fix build error due to lz4 changes
  arch/parisc/configs/c8000_defconfig: remove obsoleted CONFIG_DEBUG_SLAB_LEAK
  mm/vmalloc.c: fix typo in comment
  lib/sort.c: fix kernel-doc notation warnings
  mm: fix Documentation/vm/hmm.rst Sphinx warnings
  treewide: fix typos of SPDX-License-Identifier
  crypto: ux500 - fix license comment syntax error
  MAINTAINERS: add I2C DT bindings to ARM platforms
  MAINTAINERS: add DT bindings to i2c drivers
  mwifiex: Fix heap overflow in mwifiex_uap_parse_tail_ies()
  iwlwifi: mvm: change TLC config cmd sent by rs to be async
  iwlwifi: Fix double-free problems in iwl_req_fw_callback()
  iwlwifi: fix AX201 killer sku loading firmware issue
  iwlwifi: print fseq info upon fw assert
  iwlwifi: clear persistence bit according to device family
  iwlwifi: fix load in rfkill flow for unified firmware
  iwlwifi: mvm: remove d3_sram debugfs file
  bpf, riscv: clear high 32 bits for ALU32 add/sub/neg/lsh/rsh/arsh
  libbpf: Return btf_fd for load_sk_storage_btf
  HID: a4tech: fix horizontal scrolling
  HID: hyperv: Add a module description line
  net: dsa: sja1105: Don't store frame type in skb->cb
  block: print offending values when cloned rq limits are exceeded
  blk-mq: Document the blk_mq_hw_queue_to_node() arguments
  blk-mq: Fix spelling in a source code comment
  block: Fix bsg_setup_queue() kernel-doc header
  block: Fix rq_qos_wait() kernel-doc header
  block: Fix blk_mq_*_map_queues() kernel-doc headers
  block: Fix throtl_pending_timer_fn() kernel-doc header
  block: Convert blk_invalidate_devt() header into a non-kernel-doc header
  block/partitions/ldm: Convert a kernel-doc header into a non-kernel-doc header
  leds: avoid flush_work in atomic context
  cgroup: Include dying leaders with live threads in PROCS iterations
  cgroup: Implement css_task_iter_skip()
  cgroup: Call cgroup_release() before __exit_signal()
  netfilter: nf_tables: fix module autoload with inet family
  Revert "lockd: Show pid of lockd for remote locks"
  ALSA: hda/realtek - Update headset mode for ALC256
  fs/adfs: fix filename fixup handling for "/" and "//" names
  fs/adfs: move append_filetype_suffix() into adfs_object_fixup()
  fs/adfs: remove truncated filename hashing
  fs/adfs: factor out filename fixup
  fs/adfs: factor out object fixups
  fs/adfs: factor out filename case lowering
  fs/adfs: factor out filename comparison
  ovl: doc: add non-standard corner cases
  pstore/ram: Run without kernel crash dump region
  MAINTAINERS: add Vasily Gorbik and Christian Borntraeger for s390
  MAINTAINERS: Farewell Martin Schwidefsky
  pstore: Set tfm to NULL on free_buf_for_compression
  nds32: add new emulations for floating point instruction
  nds32: Avoid IEX status being incorrectly modified
  math-emu: Use statement expressions to fix Wshift-count-overflow warning
  net: correct zerocopy refcnt with udp MSG_MORE
  ethtool: Check for vlan etype or vlan tci when parsing flow_rule
  net: don't clear sock->sk early to avoid trouble in strparser
  net-gro: fix use-after-free read in napi_gro_frags()
  net: dsa: tag_8021q: Create a stable binary format
  net: dsa: tag_8021q: Change order of rx_vid setup
  net: mvpp2: fix bad MVPP2_TXQ_SCHED_TOKEN_CNTR_REG queue value
  docs cgroups: add another example size for hugetlb
  NFSv4.1: Fix bug only first CB_NOTIFY_LOCK is handled
  NFSv4.1: Again fix a race where CB_NOTIFY_LOCK fails to wake a waiter
  ipv4: tcp_input: fix stack out of bounds when parsing TCP options.
  mlxsw: spectrum: Prevent force of 56G
  mlxsw: spectrum_acl: Avoid warning after identical rules insertion
  SUNRPC: Fix a use after free when a server rejects the RPCSEC_GSS credential
  net: dsa: mv88e6xxx: fix handling of upper half of STATS_TYPE_PORT
  SUNRPC fix regression in umount of a secure mount
  r8169: fix MAC address being lost in PCI D3
  treewide: Add SPDX license identifier - Kbuild
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 225
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 224
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 223
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 222
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 221
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 220
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 218
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 217
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 216
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 215
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 214
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 213
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 211
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 210
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 209
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 207
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 203
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 201
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 200
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 199
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 198
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 197
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 195
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 194
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 193
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 191
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 190
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 188
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 185
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 183
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 182
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 180
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 179
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 178
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 177
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 176
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 175
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 174
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 173
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 172
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 171
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 170
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 167
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 166
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 165
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 164
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 162
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 161
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 160
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 159
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 158
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 155
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 154
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 153
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 151
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 150
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 149
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 148
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 147
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 145
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 144
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 143
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 142
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 140
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 139
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 138
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 137
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 136
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 135
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 133
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 132
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 131
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 130
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 129
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 128
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 127
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 126
  net: core: support XDP generic on stacked devices.
  netvsc: unshare skb in VF rx handler
  udp: Avoid post-GRO UDP checksum recalculation
  nvme-tcp: fix queue mapping when queue count is limited
  nvme-rdma: fix queue mapping when queue count is limited
  fpga: zynqmp-fpga: Correctly handle error pointer
  selftests: vm: install test_vmalloc.sh for run_vmtests
  userfaultfd: selftest: fix compiler warning
  kselftest/cgroup: fix incorrect test_core skip
  kselftest/cgroup: fix unexpected testing failure on test_core
  kselftest/cgroup: fix unexpected testing failure on test_memcontrol
  xtensa: Fix section mismatch between memblock_reserve and mem_reserve
  signal/ptrace: Don't leak uninitialized kernel memory with PTRACE_PEEK_SIGINFO
  mwifiex: Abort at too short BSS descriptor element
  mwifiex: Fix possible buffer overflows at parsing bss descriptor
  drm/i915/gvt: Assign NULL to the pointer after memory free.
  drm/i915/gvt: Check if cur_pt_type is valid
  x86: intel_epb: Do not build when CONFIG_PM is unset
  crypto: hmac - fix memory leak in hmac_init_tfm()
  crypto: jitterentropy - change back to module_init()
  ARM: dts: Drop bogus CLKSEL for timer12 on dra7
  KVM: PPC: Book3S HV: Restore SPRG3 in kvmhv_p9_guest_entry()
  KVM: PPC: Book3S HV: Fix lockdep warning when entering guest on POWER9
  KVM: PPC: Book3S HV: XIVE: Fix page offset when clearing ESB pages
  KVM: PPC: Book3S HV: XIVE: Take the srcu read lock when accessing memslots
  KVM: PPC: Book3S HV: XIVE: Do not clear IRQ data of passthrough interrupts
  KVM: PPC: Book3S HV: XIVE: Introduce a new mutex for the XIVE device
  drm/i915/gvt: Fix cmd length of VEB_DI_IECP
  drm/i915/gvt: refine ggtt range validation
  drm/i915/gvt: Fix vGPU CSFE_CHICKEN1_REG mmio handler
  drm/i915/gvt: Fix GFX_MODE handling
  drm/i915/gvt: Update force-to-nonpriv register whitelist
  drm/i915/gvt: Initialize intel_gvt_gtt_entry in stack
  ima: show rules with IMA_INMASK correctly
  evm: check hash algorithm passed to init_desc()
  scsi: libsas: delete sas port if expander discover failed
  scsi: libsas: only clear phy->in_shutdown after shutdown event done
  scsi: scsi_dh_alua: Fix possible null-ptr-deref
  scsi: smartpqi: properly set both the DMA mask and the coherent DMA mask
  scsi: zfcp: fix to prevent port_remove with pure auto scan LUNs (only sdevs)
  scsi: zfcp: fix missing zfcp_port reference put on -EBUSY from port_remove
  scsi: libcxgbi: add a check for NULL pointer in cxgbi_check_route()
  net: phy: dp83867: Set up RGMII TX delay
  net: phy: dp83867: do not call config_init twice
  net: phy: dp83867: increase SGMII autoneg timer duration
  net: phy: dp83867: fix speed 10 in sgmii mode
  net: phy: marvell10g: report if the PHY fails to boot firmware
  net: phylink: ensure consistent phy interface mode
  cgroup: Use css_tryget() instead of css_tryget_online() in task_get_css()
  blk-mq: Fix memory leak in error handling
  usbip: usbip_host: fix stub_dev lock context imbalance regression
  net: sh_eth: fix mdio access in sh_eth_close() for R-Car Gen2 and RZ/A1 SoCs
  MIPS: uprobes: remove set but not used variable 'epc'
  s390/crypto: fix possible sleep while spinlock is acquired
  MIPS: pistachio: Build uImage.gz by default
  MIPS: Make virt_addr_valid() return bool
  MIPS: Bounds check virt_addr_valid
  CIFS: cifs_read_allocate_pages: don't iterate through whole page array on ENOMEM
  RDMA/efa: Remove MAYEXEC flag check from mmap flow
  mlx5: avoid 64-bit division
  IB/hfi1: Validate page aligned for a given virtual address
  IB/{qib, hfi1, rdmavt}: Correct ibv_devinfo max_mr value
  IB/hfi1: Ensure freeze_work work_struct is canceled on shutdown
  IB/rdmavt: Fix alloc_qpn() WARN_ON()
  ASoC: sun4i-codec: fix first delay on Speaker
  drm/amdgpu: reserve stolen vram for raven series
  media: venus: hfi_parser: fix a regression in parser
  selftests: bpf: fix compiler warning in flow_dissector test
  arm64: use the correct function type for __arm64_sys_ni_syscall
  arm64: use the correct function type in SYSCALL_DEFINE0
  arm64: fix syscall_fn_t type
  block: don't protect generic_make_request_checks with blk_queue_enter
  block: move blk_exit_queue into __blk_release_queue
  selftests: bpf: complete sub-register zero extension checks
  selftests: bpf: move sub-register zero extension checks into subreg.c
  ovl: detect overlapping layers
  drm/i915/icl: Add WaDisableBankHangMode
  ALSA: fireface: Use ULL suffixes for 64-bit constants
  signal/arm64: Use force_sig not force_sig_fault for SIGKILL
  nl80211: fill all policy .type entries
  mac80211: free peer keys before vif down in mesh
  ANDROID: ABI out: Use the extension .xml rather than .out
  drm/mediatek: respect page offset for PRIME mmap calls
  drm/mediatek: adjust ddp clock control flow
  ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops
  KVM: PPC: Book3S HV: XIVE: Fix the enforced limit on the vCPU identifier
  KVM: PPC: Book3S HV: XIVE: Do not test the EQ flag validity when resetting
  KVM: PPC: Book3S HV: XIVE: Clear file mapping when device is released
  KVM: PPC: Book3S HV: Don't take kvm->lock around kvm_for_each_vcpu
  KVM: PPC: Book3S: Use new mutex to synchronize access to rtas token list
  KVM: PPC: Book3S HV: Use new mutex to synchronize MMU setup
  KVM: PPC: Book3S HV: Avoid touching arch.mmu_ready in XIVE release functions
  Revert "drivers: thermal: tsens: Add new operation to check if a sensor is enabled"
  net/mlx5e: Disable rxhash when CQE compress is enabled
  net/mlx5e: restrict the real_dev of vlan device is the same as uplink device
  net/mlx5: Allocate root ns memory using kzalloc to match kfree
  net/mlx5: Avoid double free in fs init error unwinding path
  net/mlx5: Avoid double free of root ns in the error flow path
  net/mlx5: Fix error handling in mlx5_load()
  Documentation: net-sysfs: Remove duplicate PHY device documentation
  llc: fix skb leak in llc_build_and_send_ui_pkt()
  selftests: pmtu: Fix encapsulating device in pmtu_vti6_link_change_mtu
  dfs_cache: fix a wrong use of kfree in flush_cache_ent()
  fs/cifs/smb2pdu.c: fix buffer free in SMB2_ioctl_free
  cifs: fix memory leak of pneg_inbuf on -EOPNOTSUPP ioctl case
  xenbus: Avoid deadlock during suspend due to open transactions
  xen/pvcalls: Remove set but not used variable
  tracing: Avoid memory leak in predicate_parse()
  habanalabs: fix bug in checking huge page optimization
  mmc: sdhci: Fix SDIO IRQ thread deadlock
  dpaa_eth: use only online CPU portals
  net: mvneta: Fix err code path of probe
  net: stmmac: Do not output error on deferred probe
  Btrfs: fix race updating log root item during fsync
  Btrfs: fix wrong ctime and mtime of a directory after log replay
  ARC: [plat-hsdk] Get rid of inappropriate PHY settings
  ARC: [plat-hsdk]: Add support of Vivante GPU
  ARC: [plat-hsdk]: enable creg-gpio controller
  Btrfs: fix fsync not persisting changed attributes of a directory
  btrfs: qgroup: Check bg while resuming relocation to avoid NULL pointer dereference
  btrfs: reloc: Also queue orphan reloc tree for cleanup to avoid BUG_ON()
  Btrfs: incremental send, fix emission of invalid clone operations
  Btrfs: incremental send, fix file corruption when no-holes feature is enabled
  btrfs: correct zstd workspace manager lock to use spin_lock_bh()
  btrfs: Ensure replaced device doesn't have pending chunk allocation
  ia64: fix build errors by exporting paddr_to_nid()
  ASoC: SOF: Intel: hda: fix the hda init chip
  ASoC: SOF: ipc: fix a race, leading to IPC timeouts
  ASoC: SOF: control: correct the copy size for bytes kcontrol put
  ASoC: SOF: pcm: remove warning - initialize workqueue on open
  ASoC: SOF: pcm: clear hw_params_upon_resume flag correctly
  ASoC: SOF: core: fix error handling with the probe workqueue
  ASoC: SOF: core: remove snd_soc_unregister_component in case of error
  ASoC: SOF: core: remove DSP after unregistering machine driver
  ASoC: soc-core: fixup references at soc_cleanup_card_resources()
  arm64/module: revert to unsigned interpretation of ABS16/32 relocations
  KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  kvm: fix compile on s390 part 2
  xprtrdma: Use struct_size() in kzalloc()
  tools headers UAPI: Sync kvm.h headers with the kernel sources
  perf record: Fix s390 missing module symbol and warning for non-root users
  perf machine: Read also the end of the kernel
  perf test vmlinux-kallsyms: Ignore aliases to _etext when searching on kallsyms
  perf session: Add missing swap ops for namespace events
  perf namespace: Protect reading thread's namespace
  tools headers UAPI: Sync drm/drm.h with the kernel
  s390/crypto: fix gcm-aes-s390 selftest failures
  s390/zcrypt: Fix wrong dispatching for control domain CPRBs
  s390/pci: fix assignment of bus resources
  s390/pci: fix struct definition for set PCI function
  s390: mark __cpacf_check_opcode() and cpacf_query_func() as __always_inline
  s390: add unreachable() to dump_fault_info() to fix -Wmaybe-uninitialized
  tools headers UAPI: Sync drm/i915_drm.h with the kernel
  tools headers UAPI: Sync linux/fs.h with the kernel
  tools headers UAPI: Sync linux/sched.h with the kernel
  tools arch x86: Sync asm/cpufeatures.h with the kernel
  tools include UAPI: Update copy of files related to new fspick, fsmount, fsconfig, fsopen, move_mount and open_tree syscalls
  perf arm64: Fix mksyscalltbl when system kernel headers are ahead of the kernel
  perf data: Fix 'strncat may truncate' build failure with recent gcc
  arm64: Fix the arm64_personality() syscall wrapper redirection
  rtw88: Make some symbols static
  rtw88: avoid circular locking between local->iflist_mtx and rtwdev->mutex
  rsi: Properly initialize data in rsi_sdio_ta_reset
  rtw88: fix unassigned rssi_level in rtw_sta_info
  rtw88: fix subscript above array bounds compiler warning
  fuse: extract helper for range writeback
  fuse: fix copy_file_range() in the writeback case
  mmc: meson-gx: fix irq ack
  mmc: tmio: fix SCC error handling to avoid false positive CRC error
  mmc: tegra: Fix a warning message
  memstick: mspro_block: Fix an error code in mspro_block_issue_req()
  mac80211: mesh: fix RCU warning
  nl80211: fix station_info pertid memory leak
  mac80211: Do not use stack memory with scatterlist for GMAC
  ALSA: line6: Assure canceling delayed work at disconnection
  configfs: Fix use-after-free when accessing sd->s_dentry
  ALSA: hda - Force polling mode on CNL for fixing codec communication
  i2c: synquacer: fix synquacer_i2c_doxfer() return value
  i2c: mlxcpld: Fix wrong initialization order in probe
  i2c: dev: fix potential memory leak in i2cdev_ioctl_rdwr
  RDMA/core: Fix panic when port_data isn't initialized
  RDMA/uverbs: Pass udata on uverbs error unwind
  RDMA/core: Clear out the udata before error unwind
  net: aquantia: tcp checksum 0xffff being handled incorrectly
  net: aquantia: fix LRO with FCS error
  net: aquantia: check rx csum for all packets in LRO session
  net: aquantia: tx clean budget logic error
  vhost: scsi: add weight support
  vhost: vsock: add weight support
  vhost_net: fix possible infinite loop
  vhost: introduce vhost_exceeds_weight()
  virtio: Fix indentation of VIRTIO_MMIO
  virtio: add unlikely() to WARN_ON_ONCE()
  iommu/vt-d: Set the right field for Page Walk Snoop
  iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock
  iommu: Add missing new line for dma type
  drm/etnaviv: lock MMU while dumping core
  block: Don't revalidate bdev of hidden gendisk
  loop: Don't change loop device under exclusive opener
  drm/imx: ipuv3-plane: fix atomic update status query for non-plus i.MX6Q
  drm/qxl: drop WARN_ONCE()
  iio: temperature: mlx90632 Relax the compatibility check
  iio: imu: st_lsm6dsx: fix PM support for st_lsm6dsx i2c controller
  staging:iio:ad7150: fix threshold mode config bit
  fuse: add FUSE_WRITE_KILL_PRIV
  fuse: fallocate: fix return with locked inode
  PCI: PM: Avoid possible suspend-to-idle issue
  ACPI: PM: Call pm_set_suspend_via_firmware() during hibernation
  ACPI/PCI: PM: Add missing wakeup.flags.valid checks
  ovl: support the FS_IOC_FS[SG]ETXATTR ioctls
  soundwire: stream: fix out of boundary access on port properties
  net: tulip: de4x5: Drop redundant MODULE_DEVICE_TABLE()
  selftests/tls: add test for sleeping even though there is data
  net/tls: fix no wakeup on partial reads
  selftests/tls: test for lowat overshoot with multiple records
  net/tls: fix lowat calculation if some data came from previous record
  dpaa2-eth: Make constant 64-bit long
  dpaa2-eth: Use PTR_ERR_OR_ZERO where appropriate
  dpaa2-eth: Fix potential spectre issue
  bonding/802.3ad: fix slave link initialization transition states
  io_uring: Fix __io_uring_register() false success
  net: ethtool: Document get_rxfh_context and set_rxfh_context ethtool ops
  net: stmmac: dwmac-mediatek: modify csr_clk value to fix mdio read/write fail
  net: stmmac: fix csr_clk can't be zero issue
  net: stmmac: update rx tail pointer register to fix rx dma hang issue.
  ip_sockglue: Fix missing-check bug in ip_ra_control()
  ipv6_sockglue: Fix a missing-check bug in ip6_ra_control()
  efi: Allow the number of EFI configuration tables entries to be zero
  efi/x86: Add missing error handling to old_memmap 1:1 mapping code
  parisc: Fix compiler warnings in float emulation code
  parisc/slab: cleanup after /proc/slab_allocators removal
  bpf: sockmap, fix use after free from sleep in psock backlog workqueue
  net: sched: don't use tc_action->order during action dump
  cxgb4: Revert "cxgb4: Remove SGE_HOST_PAGE_SIZE dependency on page size"
  net: fec: fix the clk mismatch in failed_reset path
  habanalabs: Avoid using a non-initialized MMU cache mutex
  habanalabs: fix debugfs code
  uapi/habanalabs: add opcode for enable/disable device debug mode
  habanalabs: halt debug engines on user process close
  selftests: rtc: rtctest: specify timeouts
  selftests/harness: Allow test to configure timeout
  selftests/ftrace: Add checkbashisms meta-testcase
  selftests/ftrace: Make a script checkbashisms clean
  media: smsusb: better handle optional alignment
  test_firmware: Use correct snprintf() limit
  genwqe: Prevent an integer overflow in the ioctl
  parport: Fix mem leak in parport_register_dev_model
  fpga: dfl: expand minor range when registering chrdev region
  fpga: dfl: Add lockdep classes for pdata->lock
  fpga: dfl: afu: Pass the correct device to dma_mapping_error()
  fpga: stratix10-soc: fix use-after-free on s10_init()
  w1: ds2408: Fix typo after 49695ac468 (reset on output_write retry with readback)
  kheaders: Do not regenerate archive if config is not changed
  kheaders: Move from proc to sysfs
  drm/amd/display: Don't load DMCU for Raven 1 (v2)
  drm/i915: Maintain consistent documentation subsection ordering
  scripts/sphinx-pre-install: make it handle Sphinx versions
  docs: Fix conf.py for Sphinx 2.0
  vt/fbcon: deinitialize resources in visual_init() after failed memory allocation
  xfs: fix broken log reservation debugging
  clocksource/drivers/timer-ti-dm: Change to new style declaration
  ASoC: core: lock client_mutex while removing link components
  ASoC: simple-card: Restore original configuration of DAI format
  {nl,mac}80211: allow 4addr AP operation on crypto controlled devices
  mac80211_hwsim: mark expected switch fall-through
  mac80211: fix rate reporting inside cfg80211_calculate_bitrate_he()
  mac80211: remove set but not used variable 'old'
  mac80211: handle deauthentication/disassociation from TDLS peer
  gpio: fix gpio-adp5588 build errors
  pinctrl: stmfx: Fix compile issue when CONFIG_OF_GPIO is not defined
  staging: kpc2000: Add dependency on MFD_CORE to kconfig symbol 'KPC2000'
  perf/ring-buffer: Use regular variables for nesting
  perf/ring-buffer: Always use {READ,WRITE}_ONCE() for rb->user_page data
  perf/ring_buffer: Add ordering to rb->nest increment
  perf/ring_buffer: Fix exposing a temporarily decreased data_head
  x86/CPU/AMD: Don't force the CPB cap when running under a hypervisor
  x86/boot: Provide KASAN compatible aliases for string routines
  ALSA: hda/realtek - Enable micmute LED for Huawei laptops
  Input: uinput - add compat ioctl number translation for UI_*_FF_UPLOAD
  Input: silead - add MSSL0017 to acpi_device_id
  cxgb4: offload VLAN flows regardless of VLAN ethtype
  hsr: don't prune the master node from the node_db
  net: mvpp2: cls: Fix leaked ethtool_rx_flow_rule
  docs: fix multiple doc build warnings in enumeration.rst
  lib/list_sort: fix kerneldoc build error
  docs: fix numaperf.rst and add it to the doc tree
  doc: Cope with the deprecation of AutoReporter
  doc: Cope with Sphinx logging deprecations
  bpf: sockmap, restore sk_write_space when psock gets dropped
  selftests: bpf: add zero extend checks for ALU32 and/or/xor
  bpf, riscv: clear target register high 32-bits for and/or/xor on ALU32
  spi: abort spi_sync if failed to prepare_transfer_hardware
  ALSA: hda/realtek - Set default power save node to 0
  ipv4/igmp: fix build error if !CONFIG_IP_MULTICAST
  powerpc/kexec: Fix loading of kernel + initramfs with kexec_file_load()
  MIPS: TXx9: Fix boot crash in free_initmem()
  MIPS: remove a space after -I to cope with header search paths for VDSO
  MIPS: mark ginvt() as __always_inline
  ipv4/igmp: fix another memory leak in igmpv3_del_delrec()
  bnxt_en: Device serial number is supported only for PFs.
  bnxt_en: Reduce memory usage when running in kdump kernel.
  bnxt_en: Fix possible BUG() condition when calling pci_disable_msix().
  bnxt_en: Fix aggregation buffer leak under OOM condition.
  ipv6: Fix redirect with VRF
  net: stmmac: fix reset gpio free missing
  mISDN: make sure device name is NUL terminated
  net: macb: save/restore the remaining registers and features
  media: dvb: warning about dvb frequency limits produces too much noise
  net/tls: don't ignore netdev notifications if no TLS features
  net/tls: fix state removal with feature flags off
  net/tls: avoid NULL-deref on resync during device removal
  Documentation: add TLS offload documentation
  Documentation: tls: RSTify the ktls documentation
  Documentation: net: move device drivers docs to a submenu
  mISDN: Fix indenting in dsp_cmx.c
  ocelot: Don't allocate another multicast list, use __dev_mc_sync
  Validate required parameters in inet6_validate_link_af
  xhci: Use %zu for printing size_t type
  xhci: Convert xhci_handshake() to use readl_poll_timeout_atomic()
  xhci: Fix immediate data transfer if buffer is already DMA mapped
  usb: xhci: avoid null pointer deref when bos field is NULL
  usb: xhci: Fix a potential null pointer dereference in xhci_debugfs_create_endpoint()
  xhci: update bounce buffer with correct sg num
  media: usb: siano: Fix false-positive "uninitialized variable" warning
  spi: spi-fsl-spi: call spi_finalize_current_message() at the end
  ALSA: hda/realtek - Check headset type by unplug and resume
  powerpc/perf: Fix MMCRA corruption by bhrb_filter
  powerpc/powernv: Return for invalid IMC domain
  HID: logitech-hidpp: Add support for the S510 remote control
  HID: multitouch: handle faulty Elo touch device
  selftests: netfilter: add flowtable test script
  netfilter: nft_flow_offload: IPCB is only valid for ipv4 family
  netfilter: nft_flow_offload: don't offload when sequence numbers need adjustment
  netfilter: nft_flow_offload: set liberal tracking mode for tcp
  netfilter: nf_flow_table: ignore DF bit setting
  ASoC: Intel: sof-rt5682: fix AMP quirk support
  ASoC: Intel: sof-rt5682: fix for codec button mapping
  clk: ti: clkctrl: Fix clkdm_clk handling
  clk: imx: imx8mm: fix int pll clk gate
  clk: sifive: restrict Kconfig scope for the FU540 PRCI driver
  RDMA/hns: Fix PD memory leak for internal allocation
  netfilter: nat: fix udp checksum corruption
  selftests: netfilter: missing error check when setting up veth interface
  RDMA/srp: Rename SRP sysfs name after IB device rename trigger
  ipvs: Fix use-after-free in ip_vs_in
  ARC: [plat-hsdk]: Add missing FIFO size entry in GMAC node
  ARC: [plat-hsdk]: Add missing multicast filter bins number to GMAC node
  samples, bpf: suppress compiler warning
  samples, bpf: fix to change the buffer size for read()
  bpf: Check sk_fullsock() before returning from bpf_sk_lookup()
  bpf: fix out-of-bounds read in __bpf_skc_lookup
  Documentation/networking: fix af_xdp.rst Sphinx warnings
  netfilter: nft_fib: Fix existence check support
  netfilter: nf_queue: fix reinject verdict handling
  dmaengine: sprd: Add interrupt support for 2-stage transfer
  dmaengine: sprd: Fix the right place to configure 2-stage transfer
  dmaengine: sprd: Fix block length overflow
  dmaengine: sprd: Fix the incorrect start for 2-stage destination channels
  dmaengine: sprd: Add validation of current descriptor in irq handler
  dmaengine: sprd: Fix the possible crash when getting descriptor status
  tty: max310x: Fix external crystal register setup
  serial: sh-sci: disable DMA for uart_console
  serial: imx: remove log spamming error message
  tty: serial: msm_serial: Fix XON/XOFF
  USB: serial: option: add Telit 0x1260 and 0x1261 compositions
  USB: serial: pl2303: add Allied Telesis VT-Kit3
  USB: serial: option: add support for Simcom SIM7500/SIM7600 RNDIS mode
  dmaengine: tegra210-adma: Fix spelling
  dmaengine: tegra210-adma: Fix channel FIFO configuration
  dmaengine: tegra210-adma: Fix crash during probe
  dmaengine: mediatek-cqdma: sleeping in atomic context
  dmaengine: dw-axi-dmac: fix null dereference when pointer first is null
  perf/x86/intel/ds: Fix EVENT vs. UEVENT PEBS constraints
  USB: rio500: update Documentation
  USB: rio500: simplify locking
  USB: rio500: fix memory leak in close after disconnect
  USB: rio500: refuse more than one device at a time
  usbip: usbip_host: fix BUG: sleeping function called from invalid context
  USB: sisusbvga: fix oops in error path of sisusb_probe
  USB: Add LPM quirk for Surface Dock GigE adapter
  media: usb: siano: Fix general protection fault in smsusb
  usb: mtu3: fix up undefined reference to usb_debug_root
  USB: Fix slab-out-of-bounds write in usb_get_bos_descriptor
  Input: elantech - enable middle button support on 2 ThinkPads
  dmaengine: fsl-qdma: Add improvement
  dmaengine: jz4780: Fix transfers being ACKed too soon
  gcc-plugins: Fix build failures under Darwin host
  MAINTAINERS: Update Stefan Wahren email address
  netfilter: nf_tables: fix oops during rule dump
  ARC: mm: SIGSEGV userspace trying to access kernel virtual memory
  ARC: fix build warnings
  ARM: dts: bcm: Add missing device_type = "memory" property
  soc: bcm: brcmstb: biuctrl: Register writes require a barrier
  soc: brcmstb: Fix error path for unsupported CPUs
  ARM: dts: dra71x: Disable usb4_tm target module
  ARM: dts: dra71x: Disable rtc target module
  ARM: dts: dra76x: Disable usb4_tm target module
  ARM: dts: dra76x: Disable rtc target module
  ASoC: simple-card: Fix configuration of DAI format
  ASoC: Intel: soc-acpi: Fix machine selection order
  ASoC: rt5677-spi: Handle over reading when flipping bytes
  ASoC: soc-dpm: fixup DAI active unbalance
  pinctrl: intel: Clear interrupt status in mask/unmask callback
  pinctrl: intel: Use GENMASK() consistently
  parisc: Allow building 64-bit kernel without -mlong-calls compiler option
  parisc: Kconfig: remove ARCH_DISCARD_MEMBLOCK
  staging: wilc1000: Fix some double unlock bugs in wilc_wlan_cleanup()
  staging: vc04_services: prevent integer overflow in create_pagelist()
  Staging: vc04_services: Fix a couple error codes
  staging: wlan-ng: fix adapter initialization failure
  staging: kpc2000: double unlock in error handling in kpc_dma_transfer()
  staging: kpc2000: Fix build error without CONFIG_UIO
  staging: kpc2000: fix build error on xtensa
  staging: erofs: set sb->s_root to NULL when failing from __getname()
  ARM: imx: cpuidle-imx6sx: Restrict the SW2ISO increase to i.MX6SX
  firmware: imx: SCU irq should ONLY be enabled after SCU IPC is ready
  arm64: imx: Fix build error without CONFIG_SOC_BUS
  ima: fix wrong signed policy requirement when not appraising
  x86/ima: Check EFI_RUNTIME_SERVICES before using
  stacktrace: Unbreak stack_trace_save_tsk_reliable()
  HID: wacom: Sync INTUOSP2_BT touch state after each frame if necessary
  HID: wacom: Correct button numbering 2nd-gen Intuos Pro over Bluetooth
  HID: wacom: Send BTN_TOUCH in response to INTUOSP2_BT eraser contact
  HID: wacom: Don't report anything prior to the tool entering range
  HID: wacom: Don't set tool type until we're in range
  ASoC: cs42xx8: Add regcache mask dirty
  regulator: tps6507x: Fix boot regression due to testing wrong init_data pointer
  ASoC: fsl_asrc: Fix the issue about unsupported rate
  spi: bitbang: Fix NULL pointer dereference in spi_unregister_master
  Input: elan_i2c - increment wakeup count if wake source
  wireless: Skip directory when generating certificates
  ASoC: ak4458: rstn_control - return a non-zero on error only
  ASoC: soc-pcm: BE dai needs prepare when pause release after resume
  ASoC: ak4458: add return value for ak4458_probe
  ASoC: cs4265: readable register too low
  ASoC: SOF: fix error in verbose ipc command parsing
  ASoC: SOF: fix race in FW boot timeout handling
  ASoC: SOF: nocodec: fix undefined reference
  iio: adc: ti-ads8688: fix timestamp is not updated in buffer
  iio: dac: ds4422/ds4424 fix chip verification
  HID: rmi: Use SET_REPORT request on control endpoint for Acer Switch 3 and 5
  HID: logitech-hidpp: add support for the MX5500 keyboard
  HID: logitech-dj: add support for the Logitech MX5500's Bluetooth Mini-Receiver
  HID: i2c-hid: add iBall Aer3 to descriptor override
  spi: Fix Raspberry Pi breakage
  ARM: dts: dra76x: Update MMC2_HS200_MANUAL1 iodelay values
  ARM: dts: am57xx-idk: Remove support for voltage switching for SD card
  bus: ti-sysc: Handle devices with no control registers
  ARM: dts: Configure osc clock for d_can on am335x
  iio: imu: mpu6050: Fix FIFO layout for ICM20602
  lkdtm/bugs: Adjust recursion test to avoid elision
  lkdtm/usercopy: Move the KERNEL_DS test to non-canonical
  iio: adc: ads124: avoid buffer overflow
  iio: adc: modify NPCM ADC read reference voltage

Change-Id: I98c823993370027391cc21dfb239c3049f025136
Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
2019-07-01 17:41:24 -07:00
Waiman Long
ad53fa10fa locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs
The percpu event counts used by qspinlock code can be useful for
other locking code as well. So a new set of lockevent_* counting APIs
is introduced with the lock event names extracted out into the new
lock_events_list.h header file for easier addition in the future.

The existing qstat_inc() calls are replaced by either lockevent_inc() or
lockevent_cond_inc() calls.

The qstat_hop() call is renamed to lockevent_pv_hop(). The "reset_counters"
debugfs file is also renamed to ".reset_counts".
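
Roughly, a converted call site looks like this (the event names and the
condition variable below are just illustrative):

	/* before: PV-stat specific counting */
	qstat_inc(qstat_lock_pending, true);

	/* after: generic lock event counting */
	lockevent_inc(lock_pending);

	/* count only when the condition holds */
	lockevent_cond_inc(lock_slowpath, waited);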

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20190404174320.22416-8-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10 10:56:03 +02:00
Waiman Long
733000c7ff locking/qspinlock: Remove unnecessary BUG_ON() call
With the > 4 nesting levels case handled by the commit:

  d682b596d9 ("locking/qspinlock: Handle > 4 slowpath nesting levels")

the BUG_ON() call in encode_tail() will never actually be triggered.

Remove it.
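
For reference, the remaining helper then looks roughly like this (a
sketch based on the description above, not a verbatim copy of the diff):

	static inline __pure u32 encode_tail(int cpu, int idx)
	{
		u32 tail;

		tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
		tail |= idx << _Q_TAIL_IDX_OFFSET;	/* idx < 4 is guaranteed */

		return tail;
	}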

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1551057253-3231-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-28 07:55:38 +01:00
Waiman Long
412f34a82c locking/qspinlock_stat: Track the no MCS node available case
Track the number of slowpath locking operations that are being done
without any MCS node available, as well as rename lock_index[123] to
make them more descriptive.

Using these stat counters is one way to find out if a code path is
being exercised.

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: James Morse <james.morse@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SRINIVAS <srinivas.eeda@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Link: https://lkml.kernel.org/r/1548798828-16156-3-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:03:30 +01:00
Waiman Long
d682b596d9 locking/qspinlock: Handle > 4 slowpath nesting levels
Four queue nodes per CPU are allocated to enable up to 4 nesting levels
using the per-CPU nodes. Nested NMIs are possible in some architectures.
Still it is very unlikely that we will ever hit more than 4 nested
levels with contention in the slowpath.

When that rare condition happens, however, it is likely that the system
will hang or crash shortly after that. It is not good and we need to
handle this exception case.

This is done by spinning directly on the lock using repeated trylock.
This alternative code path should only be used when there are nested
NMIs. Assuming that the locks used by those NMI handlers will not be
heavily contended, a simple TAS lock should work out.
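
A hedged sketch of the shape of the fallback (simplified, not the
literal diff):

	idx = node->count++;
	tail = encode_tail(smp_processor_id(), idx);

	/*
	 * All 4 per-CPU MCS nodes are in use (e.g. deeply nested NMIs):
	 * don't queue, just spin on the lock word with trylock.
	 */
	if (unlikely(idx >= MAX_NODES)) {
		while (!queued_spin_trylock(lock))
			cpu_relax();
		goto release;
	}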

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: James Morse <james.morse@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: SRINIVAS <srinivas.eeda@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Link: https://lkml.kernel.org/r/1548798828-16156-2-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:03:29 +01:00
Waiman Long
0fa809ca7f locking/pvqspinlock: Extend node size when pvqspinlock is configured
The qspinlock code supports up to 4 levels of slowpath nesting using
four per-CPU mcs_spinlock structures. For 64-bit architectures, they
fit nicely in one 64-byte cacheline.

For para-virtualized (PV) qspinlocks it needs to store more information
in the per-CPU node structure than there is space for. It uses a trick
to use a second cacheline to hold the extra information that it needs.
So PV qspinlock needs to access two extra cachelines for its information
whereas the native qspinlock code only needs one extra cacheline.

Freshly added counter profiling of the qspinlock code, however, revealed
that it was very rare to use more than two levels of slowpath nesting.
So it doesn't make sense to penalize PV qspinlock code in order to have
four mcs_spinlock structures in the same cacheline to optimize for a case
in the native qspinlock code that rarely happens.

Extend the per-CPU node structure to have two more long words when PV
qspinlock locks are configured to hold the extra data that it needs.

As a result, the PV qspinlock code will enjoy the same benefit of using
just one extra cacheline like the native counterpart, for most cases.
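
The per-CPU node container then looks roughly like this (a sketch; the
name of the reserved field is illustrative):

	struct qnode {
		struct mcs_spinlock mcs;
	#ifdef CONFIG_PARAVIRT_SPINLOCKS
		long reserved[2];	/* room for the PV-only state */
	#endif
	};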

[ mingo: Minor changelog edits. ]

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1539697507-28084-2-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-17 08:37:32 +02:00
Waiman Long
1222109a53 locking/qspinlock_stat: Count instances of nested lock slowpaths
Queued spinlock supports up to 4 levels of lock slowpath nesting -
user context, soft IRQ, hard IRQ and NMI. However, we are not sure how
often the nesting happens.

So add 3 more per-CPU stat counters to track the number of instances where
nesting index goes to 1, 2 and 3 respectively.

On a dual-socket 64-core 128-thread Zen server, the following were the
new stat counter values under different circumstances:

         State                         slowpath   index1   index2   index3
         -----                         --------   ------   ------   -------
  After bootup                         1,012,150    82       0        0
  After parallel build + perf-top    125,195,009    82       0        0

So the chance of having more than 2 levels of nesting is extremely low.

[ mingo: Minor changelog edits. ]

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1539697507-28084-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-17 08:37:31 +02:00
Peter Zijlstra
7aa54be297 locking/qspinlock, x86: Provide liveness guarantee
On x86 we cannot do fetch_or() with a single instruction and thus end up
using a cmpxchg() loop, which reduces determinism. Replace the fetch_or()
with a composite operation: tas-pending + load.

Using two instructions of course opens a window we previously did not
have. Consider the scenario:

	CPU0		CPU1		CPU2

 1)	lock
	  trylock -> (0,0,1)

 2)			lock
			  trylock /* fail */

 3)	unlock -> (0,0,0)

 4)					lock
					  trylock -> (0,0,1)

 5)			  tas-pending -> (0,1,1)
			  load-val <- (0,1,0) from 3

 6)			  clear-pending-set-locked -> (0,0,1)

			  FAIL: _2_ owners

where 5) is our new composite operation. When we consider each part of
the qspinlock state as a separate variable (as we can when
_Q_PENDING_BITS == 8) then the above is entirely possible, because
tas-pending will only RmW the pending byte, so the later load is able
to observe prior tail and lock state (but not earlier than its own
trylock, which operates on the whole word, due to coherence).

To avoid this we need 2 things:

 - the load must come after the tas-pending (obviously, otherwise it
   can trivially observe prior state).

 - the tas-pending must be a full word RmW instruction, it cannot be an XCHGB for
   example, such that we cannot observe other state prior to setting
   pending.

On x86 we can realize this by using "LOCK BTS m32, r32" for
tas-pending followed by a regular load.

Note that observing later state is not a problem:

 - if we fail to observe a later unlock, we'll simply spin-wait for
   that store to become visible.

 - if we observe a later xchg_tail(), there is no difference from that
   xchg_tail() having taken place before the tas-pending.
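
A hedged sketch of the shape of the x86 override (the helper that does
the "LOCK BTS" is a hypothetical wrapper here, not the literal
arch/x86/include/asm/qspinlock.h code):

	static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
	{
		u32 val;

		/* hypothetical wrapper around LOCK BTS: full-word RmW on the pending bit */
		val  = x86_test_and_set_pending(lock) ? _Q_PENDING_VAL : 0;

		/* regular load afterwards; observing *later* state is harmless */
		val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}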

Suggested-by: Will Deacon <will.deacon@arm.com>
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Fixes: 59fb586b4a ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath")
Link: https://lkml.kernel.org/r/20181003130957.183726335@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:54 +02:00
Peter Zijlstra
756b1df4c2 locking/qspinlock: Rework some comments
While working my way through the code again; I felt the comments could
use help.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:54 +02:00
Peter Zijlstra
53bf57fab7 locking/qspinlock: Re-order code
Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL)
such that we lose the indent. This also results in a more natural code
flow IMO.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:53 +02:00
Waiman Long
81d3dc9a34 locking/qspinlock: Add stat tracking for pending vs. slowpath
Currently, the qspinlock_stat code tracks only statistical counts in the
PV qspinlock code. However, it may also be useful to track the number
of locking operations done via the pending code vs. the MCS lock queue
slowpath for the non-PV case.

The qspinlock stat code is modified to do that. The stat counter
pv_lock_slowpath is renamed to lock_slowpath so that it can be used by
both the PV and non-PV cases.

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1524738868-31318-14-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:53 +02:00
Will Deacon
ae75d9089f locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking
When reaching the head of an uncontended queue on the qspinlock slow-path,
using a try_cmpxchg() instead of a cmpxchg() operation to transition the
lock word to _Q_LOCKED_VAL generates slightly better code for x86 and
pretty much identical code for arm64.
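
Roughly, the difference at the call site (a simplified sketch):

	/* cmpxchg(): compare the return value against the expected old value */
	if (atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL) == val)
		goto release;

	/* try_cmpxchg(): the compare is implicit and "val" is refreshed on failure */
	if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
		goto release;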

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-13-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:52 +02:00
Will Deacon
9d4646d14d locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()
The qspinlock slowpath must ensure that the MCS node is fully initialised
before it can be reached by another CPU. This is currently enforced
by using a RELEASE operation when updating the tail and also when linking
the node into the waitqueue, since the control dependency off xchg_tail
is insufficient to enforce the required ordering, see:

  95bcade33a ("locking/qspinlock: Ensure node is initialised before updating prev->next")

Back-to-back RELEASE operations may be expensive on some architectures,
particularly those that implement them using fences under the hood. We
can replace the two RELEASE operations with a single smp_wmb() fence and
use RELAXED operations for the subsequent publishing of the node.
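
In outline (a sketch under the description above, not the exact diff):

	node->locked = 0;
	node->next = NULL;

	/* one fence orders the node initialisation before both publishes */
	smp_wmb();

	old = xchg_tail(lock, tail);		/* can now be RELAXED */
	if (old & _Q_TAIL_MASK) {
		prev = decode_tail(old);
		WRITE_ONCE(prev->next, node);	/* RELAXED publish */
	}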

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-12-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:52 +02:00
Will Deacon
c131a198c4 locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node
When a locker reaches the head of the queue and takes the lock, a
concurrent locker may enqueue and force the lock holder to spin
whilst its node->next field is initialised. Rather than open-code
a READ_ONCE/cpu_relax() loop, this can be implemented using
smp_cond_load_relaxed() instead.
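
The change is essentially (sketch):

	/* before: open-coded wait for the successor to appear */
	while (!(next = READ_ONCE(node->next)))
		cpu_relax();

	/* after: let the architecture wait for the store efficiently */
	next = smp_cond_load_relaxed(&node->next, (VAL));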

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-10-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:50 +02:00
Will Deacon
f9c811fac4 locking/qspinlock: Use atomic_cond_read_acquire()
Rather than dig into the counter field of the atomic_t inside the
qspinlock structure so that we can call smp_cond_load_acquire(), use
atomic_cond_read_acquire() instead, which operates on the atomic_t
directly.
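
Roughly (sketch):

	/* before: reach into the atomic_t's counter field */
	val = smp_cond_load_acquire(&lock->val.counter,
				    !(VAL & _Q_LOCKED_PENDING_MASK));

	/* after: operate on the atomic_t itself */
	val = atomic_cond_read_acquire(&lock->val,
				       !(VAL & _Q_LOCKED_PENDING_MASK));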

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-8-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:49 +02:00
Will Deacon
c61da58d8a locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue
When a queued locker reaches the head of the queue, it claims the lock
by setting _Q_LOCKED_VAL in the lockword. If there isn't contention, it
must also clear the tail as part of this operation so that subsequent
lockers can avoid taking the slowpath altogether.

Currently this is expressed as a cmpxchg() loop that practically only
runs up to two iterations. This is confusing to the reader and unhelpful
to the compiler. Rewrite the cmpxchg() loop without the loop, so that a
failed cmpxchg() implies that there is contention and we just need to
write to _Q_LOCKED_VAL without considering the rest of the lockword.
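
The resulting shape is roughly (a simplified sketch):

	if ((val & _Q_TAIL_MASK) == tail) {
		/* only waiter: one shot at taking the lock and clearing the tail */
		if (atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL) == val)
			goto release;
	}

	/*
	 * The cmpxchg() failed, so somebody queued behind us and the tail
	 * must be preserved; just set the locked byte.
	 */
	set_locked(lock);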

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-7-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:48 +02:00
Will Deacon
59fb586b4a locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath
The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg() loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg() loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg() loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see
a lockword of (n,0,0). With pending now set, simply queuing could lead
to deadlock as the head of the queue may not have observed the pending
flag being cleared. Conversely, if the head of the queue did observe
pending being cleared, then it could transition the lock from (n,0,0) ->
(0,0,1) meaning that any attempt to "undo" our setting of the pending
bit could race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.
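
A hedged sketch of the new pending-bit fast path (simplified; the
undo/queue details described above are omitted):

	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);

	/* somebody is queued or already pending: fall back to queuing */
	if (unlikely(val & ~_Q_LOCKED_MASK))
		goto queue;

	/* only the owner is left: wait for it to go away, then take the lock */
	if (val & _Q_LOCKED_MASK)
		atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));

	clear_pending_set_locked(lock);		/* 0,1,0 -> 0,0,1 */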

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-6-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:47 +02:00
Will Deacon
6512276d97 locking/qspinlock: Bound spinning on pending->locked transition in slowpath
If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever be
observed since concurrent lockers could continuously set pending and
hand over the lock amongst themselves, leading to starvation. Whilst
this would probably resolve in practice, it means that it is not
possible to prove liveness properties about the lock and means that lock
acquisition time is unbounded.

Rather than removing the pending->locked spinning from the slowpath
altogether (which has been shown to heavily penalise a 2-threaded
locking stress test on x86), this patch replaces the explicit spinning
with a call to atomic_cond_read_relaxed and allows the architecture to
provide a bound on the number of spins. For architectures that can
respond to changes in cacheline state in their smp_cond_load implementation,
it should be sufficient to use the default bound of 1.
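
Concretely, the wait becomes something like this (sketch;
_Q_PENDING_LOOPS is the architecture-chosen bound, defaulting to 1):

	if (val == _Q_PENDING_VAL) {
		int cnt = _Q_PENDING_LOOPS;

		/* give the pending->locked transition a bounded chance to finish */
		val = atomic_cond_read_relaxed(&lock->val,
					       (VAL != _Q_PENDING_VAL) || !cnt--);
	}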

Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-4-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:46 +02:00
Will Deacon
625e88be1f locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock'
'struct __qspinlock' provides a handy union of fields so that
subcomponents of the lockword can be accessed by name, without having to
manage shifts and masks explicitly and take endianness into account.

This is useful in qspinlock.h and also potentially in arch headers, so
move the 'struct __qspinlock' into 'struct qspinlock' and kill the extra
definition.
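
For little-endian configurations the merged structure looks roughly like
this (sketch; the big-endian variant mirrors the byte order):

	typedef struct qspinlock {
		union {
			atomic_t val;

			struct {
				u8	locked;
				u8	pending;
			};
			struct {
				u16	locked_pending;
				u16	tail;
			};
		};
	} arch_spinlock_t;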

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-04-27 09:48:45 +02:00
Will Deacon
11dc13224c locking/qspinlock: Ensure node->count is updated before initialising node
When queuing on the qspinlock, the count field for the current CPU's head
node is incremented. This needn't be atomic because locking in e.g. IRQ
context is balanced and so an IRQ will return with node->count as it
found it.

However, the compiler could in theory reorder the initialisation of
node[idx] before the increment of the head node->count, causing an
IRQ to overwrite the initialised node and potentially corrupt the lock
state.

Avoid the potential for this harmful compiler reordering by placing a
barrier() between the increment of the head node->count and the subsequent
node initialisation.
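
The fix boils down to (sketch):

	idx = node->count++;
	tail = encode_tail(smp_processor_id(), idx);

	node += idx;

	/*
	 * Keep the compiler from reordering the node initialisation above
	 * the count increment; an IRQ taken in between must see the bumped
	 * count so that it picks the next free index.
	 */
	barrier();

	node->locked = 0;
	node->next = NULL;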

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1518528177-19169-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-02-13 14:50:14 +01:00
Will Deacon
95bcade33a locking/qspinlock: Ensure node is initialised before updating prev->next
When a locker ends up queuing on the qspinlock locking slowpath, we
initialise the relevant mcs node and publish it indirectly by updating
the tail portion of the lock word using xchg_tail. If we find that there
was a pre-existing locker in the queue, we subsequently update their
->next field to point at our node so that we are notified when it's our
turn to take the lock.

This can be roughly illustrated as follows:

  /* Initialise the fields in node and encode a pointer to node in tail */
  tail = initialise_node(node);

  /*
   * Exchange tail into the lockword using an atomic read-modify-write
   * operation with release semantics
   */
  old = xchg_tail(lock, tail);

  /* If there was a pre-existing waiter ... */
  if (old & _Q_TAIL_MASK) {
	prev = decode_tail(old);
	smp_read_barrier_depends();

	/* ... then update their ->next field to point to node. */
	WRITE_ONCE(prev->next, node);
  }

The conditional update of prev->next therefore relies on the address
dependency from the result of xchg_tail ensuring order against the
prior initialisation of node. However, since the release semantics of
the xchg_tail operation apply only to the write portion of the RmW,
then this ordering is not guaranteed and it is possible for the CPU
to return old before the writes to node have been published, consequently
allowing us to point prev->next to an uninitialised node.

This patch fixes the problem by making the update of prev->next a RELEASE
operation, which also removes the reliance on dependency ordering.
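
With the fix, the publish in the illustration above becomes (sketch):

	if (old & _Q_TAIL_MASK) {
		prev = decode_tail(old);

		/* RELEASE orders the initialisation of node before the publish */
		smp_store_release(&prev->next, node);
	}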

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1518528177-19169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-02-13 14:50:14 +01:00
Paul E. McKenney
548095dea6 locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()
Queued spinlocks are not used by DEC Alpha, and furthermore operations
such as READ_ONCE() and release/relaxed RMW atomics are being changed
to imply smp_read_barrier_depends().  This commit therefore removes the
now-redundant smp_read_barrier_depends() from queued_spin_lock_slowpath(),
and adjusts the comments accordingly.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
2017-12-04 10:52:55 -08:00
Paul E. McKenney
d3a024abbc locking: Remove spin_unlock_wait() generic definitions
There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore removes spin_unlock_wait() and related
definitions from core code.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
2017-08-17 08:08:58 -07:00
Stafford Horne
5671360f29 locking/qspinlock: Explicitly include asm/prefetch.h
In architectures that use qspinlock, like x86, prefetch is loaded
indirectly via the asm/qspinlock.h include.  On other architectures, like
OpenRISC, which may want to use asm-generic/qspinlock.h, the build will
fail without the asm/prefetch.h include.

Fix this by including it directly.

Signed-off-by: Stafford Horne <shorne@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170707195658.23840-1-shorne@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-07-08 11:01:11 +02:00
Pan Xinhui
0dceeaf599 locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()
queued_spin_lock_slowpath() should not worry about another
queued_spin_lock_slowpath() running in interrupt context and
changing node->count by accident, because node->count keeps
the same value every time we enter/leave queued_spin_lock_slowpath().

On some architectures this_cpu_dec() will save/restore irq flags,
which has high overhead. Use the much cheaper __this_cpu_dec() instead.
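
The change is a one-liner of the form (sketch; the per-CPU variable name
is illustrative):

	/* before: may save/restore IRQ flags on some architectures */
	this_cpu_dec(mcs_nodes[0].count);

	/* after: safe because node->count is balanced across IRQ entry/exit */
	__this_cpu_dec(mcs_nodes[0].count);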

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman.Long@hpe.com
Link: http://lkml.kernel.org/r/1465886247-3773-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
[ Rewrote changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-27 11:37:41 +02:00
Peter Zijlstra
33ac279677 locking/barriers: Introduce smp_acquire__after_ctrl_dep()
Introduce smp_acquire__after_ctrl_dep(); this construct is not
uncommon, but the lack of this barrier is.

Use it to better express smp_rmb() uses in WRITE_ONCE(), the IPC
semaphore code and the qspinlock code.
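
The intended usage pattern is roughly (sketch):

	/* spin with plain, control-dependent loads ... */
	while (!(flags = READ_ONCE(*ptr)))
		cpu_relax();

	/* ... then upgrade the control dependency to ACQUIRE ordering */
	smp_acquire__after_ctrl_dep();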

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-14 11:55:14 +02:00
Peter Zijlstra
1f03e8d291 locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()
This new form allows using hardware assisted waiting.

Some hardware (ARM64 and x86) allow monitoring an address for changes,
so by providing a pointer we can use this to replace the cpu_relax()
with hardware optimized methods in the future.
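
The qspinlock conversion is roughly (sketch):

	/* before: condition only, no address for the hardware to monitor */
	smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK));

	/* after: pass the address; VAL names the freshly loaded value */
	smp_cond_load_acquire(&lock->val.counter,
			      !(VAL & _Q_LOCKED_PENDING_MASK));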

Requested-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-14 11:54:27 +02:00
Peter Zijlstra
055ce0fd1b locking/qspinlock: Add comments
I figured we need to document the spin_is_locked() and
spin_unlock_wait() constraints somewhere.

Ideally 'someone' would rewrite Documentation/atomic_ops.txt and we
could find a place in there. But currently that document is stale to
the point of hardly being useful.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <waiman.long@hpe.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-08 14:44:01 +02:00
Peter Zijlstra
8d53fa1904 locking/qspinlock: Clarify xchg_tail() ordering
While going over the code I noticed that xchg_tail() is a RELEASE but
had no obvious pairing commented.

It pairs with a somewhat unique address dependency through
decode_tail().

So the store-release of xchg_tail() is paired by the address
dependency of the load portion of xchg_tail(), followed by the
dereference of the pointer computed from that load.

The @old -> @prev transformation itself is pure, and therefore does
not depend on external state, so that is immaterial wrt. ordering.
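
The pairing can be sketched in C11 as follows (illustrative only, not
the kernel source; the array, encoding and function names are made up
for the example):

  #include <stdatomic.h>

  struct qnode { struct qnode *next; };
  static struct qnode nodes[64];        /* stand-in for the per-CPU MCS array */

  static struct qnode *decode_tail(unsigned int tail)
  {
          return &nodes[tail - 1];      /* encoding is illustrative only */
  }

  /*
   * The RELEASE half is the exchange that publishes our tail; the
   * pairing half is the address dependency from the old tail value,
   * through decode_tail(), to the store into the predecessor's node.
   */
  static void publish_tail(_Atomic unsigned int *tail, unsigned int mine,
                           struct qnode *me)
  {
          unsigned int old = atomic_exchange_explicit(tail, mine,
                                                      memory_order_release);
          if (old)
                  decode_tail(old)->next = me;  /* dependent dereference */
  }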

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <waiman.long@hpe.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-08 14:44:00 +02:00
Peter Zijlstra
2c61002271 locking/qspinlock: Fix spin_unlock_wait() some more
While this prior commit:

  54cf809b95 ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")

... fixes spin_is_locked() and spin_unlock_wait() for the usage
in ipc/sem and netfilter, it does not in fact work right for the
usage in task_work and futex.

So while the 2 locks crossed problem:

	spin_lock(A)		spin_lock(B)
	if (!spin_is_locked(B)) spin_unlock_wait(A)
	  foo()			foo();

... works with the smp_mb() injected by both spin_is_locked() and
spin_unlock_wait(), this is not sufficient for:

	flag = 1;
	smp_mb();		spin_lock()
	spin_unlock_wait()	if (!flag)
				  // add to lockless list
	// iterate lockless list

... because in this scenario, the store from spin_lock() can be delayed
past the load of flag, uncrossing the variables and losing the
guarantee.

This patch reworks spin_is_locked() and spin_unlock_wait() to work in
both cases by exploiting the observation that while the lock byte
store can be delayed, the contender must have registered itself
visibly in other state contained in the word.

It also allows for architectures to override both functions, as PPC
and ARM64 have an additional issue for which we currently have no
generic solution.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Giovanni Gherdovich <ggherdovich@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <waiman.long@hpe.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org # v4.2 and later
Fixes: 54cf809b95 ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-08 14:29:08 +02:00
Waiman Long
cb037fdad6 locking/qspinlock: Use smp_cond_acquire() in pending code
The newly introduced smp_cond_acquire() was used to replace the
slowpath lock acquisition loop. Similarly, the new function can also
be applied to the pending bit locking loop. This patch uses the new
function in that loop.
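
A rough before/after of the pending-bit wait loop this refers to
(paraphrased, not the exact hunk):

  /* before: a load-acquire in a spin loop */
  while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
          cpu_relax();

  /* after: control dependency plus read barrier via the new helper */
  smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_MASK));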

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1449778666-13593-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-29 10:02:42 +01:00
Waiman Long
cd0272fab7 locking/pvqspinlock: Queue node adaptive spinning
In an overcommitted guest where some vCPUs have to be halted to make
forward progress in other areas, it is highly likely that a vCPU later
in the spinlock queue will be spinning while the ones earlier in the
queue would have been halted. The spinning in the later vCPUs is then
just a waste of precious CPU cycles because they are not going to
get the lock anytime soon, as the earlier ones have to be woken up and
take their turn to get the lock.

This patch implements an adaptive spinning mechanism where the vCPU
will call pv_wait() if the previous vCPU is not running.
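
A rough sketch of the adaptive check (field names and the exact
spin/threshold logic are illustrative, not lifted from the patch):

  /*
   * While spinning in pv_wait_node(): if the vCPU queued ahead of us
   * is not running, we cannot make progress soon either, so halt
   * instead of burning cycles.
   */
  if (READ_ONCE(prev->state) != vcpu_running)
          pv_wait(&node->state, vcpu_halted);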

Linux kernel builds were run in KVM guest on an 8-socket, 4
cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
Haswell-EX system. Both systems are configured to have 32 physical
CPUs. The kernel build times before and after the patch were:

                    Westmere                    Haswell
  Patch         32 vCPUs    48 vCPUs    32 vCPUs    48 vCPUs
  -----         --------    --------    --------    --------
  Before patch   3m02.3s     5m00.2s     1m43.7s     3m03.5s
  After patch    3m03.0s     4m37.5s     1m43.0s     2m47.2s

For 32 vCPUs, this patch doesn't cause any noticeable change in
performance. For 48 vCPUs (over-committed), there is about 8%
performance improvement.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-8-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-12-04 11:39:51 +01:00
Waiman Long
1c4941fd53 locking/pvqspinlock: Allow limited lock stealing
This patch allows one attempt for the lock waiter to steal the lock
when entering the PV slowpath. To prevent lock starvation, the pending
bit will be set by the queue head vCPU when it is in the active lock
spinning loop to disable any lock stealing attempt.  This helps to
reduce the performance penalty caused by lock waiter preemption while
not having many of the downsides of a real unfair lock.

The pv_wait_head() function was renamed as pv_wait_head_or_lock()
as it was modified to acquire the lock before returning. This is
necessary because of possible lock stealing attempts from other tasks.
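
Loosely, the entry-point steal attempt looks like this (paraphrased;
the helper used in the patch differs, queued_spin_trylock() is shown
only for illustration):

  /*
   * One steal attempt before queueing.  The trylock only succeeds
   * while neither the pending bit nor the tail is set, and the queue
   * head sets the pending bit while it actively spins, which turns
   * stealing off again and prevents starvation.
   */
  if (queued_spin_trylock(lock))
          return;                 /* lock stolen, no need to queue */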

Linux kernel builds were run in KVM guest on an 8-socket, 4
cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
Haswell-EX system. Both systems are configured to have 32 physical
CPUs. The kernel build times before and after the patch were:

                    Westmere                    Haswell
  Patch         32 vCPUs    48 vCPUs    32 vCPUs    48 vCPUs
  -----         --------    --------    --------    --------
  Before patch   3m15.6s    10m56.1s     1m44.1s     5m29.1s
  After patch    3m02.3s     5m00.2s     1m43.7s     3m03.5s

For the overcommitted case (48 vCPUs), this patch is able to reduce
kernel build time by more than 54% for Westmere and 44% for Haswell.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447190336-53317-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-12-04 11:39:51 +01:00
Peter Zijlstra
b3e0b1b6d8 locking, sched: Introduce smp_cond_acquire() and use it
Introduce smp_cond_acquire() which combines a control dependency and a
read barrier to form acquire semantics.

This primitive has two benefits:

 - it documents control dependencies,
 - it's typically cheaper than using smp_load_acquire() in a loop.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-12-04 10:33:41 +01:00
Waiman Long
aa68744f80 locking/qspinlock: Avoid redundant read of next pointer
With optimistic prefetch of the next node cacheline, the next pointer
may have been properly initialized. As a result, the reading
of node->next in the contended path may be redundant. This patch
eliminates the redundant read if the next pointer value is not NULL.
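
Roughly, the change amounts to reusing the value obtained by the
prefetch logic instead of re-reading it unconditionally (paraphrased,
not the exact hunk):

  if (!next) {
          /* the prefetch path saw no successor yet: wait for one */
          while (!(next = READ_ONCE(node->next)))
                  cpu_relax();
  }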

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-4-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-11-23 10:02:00 +01:00
Waiman Long
81b5598665 locking/qspinlock: Prefetch the next node cacheline
A queue head CPU, after acquiring the lock, will have to notify
the next CPU in the wait queue that it has become the new queue
head. This involves loading a new cacheline from the MCS node of the
next CPU. That operation can be expensive and add to the latency of
locking operation.

This patch adds code to optimistically prefetch the next MCS node
cacheline if the next pointer is defined and it has been spinning
for the MCS lock for a while. This reduces the locking latency and
improves the system throughput.

The performance change will depend on whether the prefetch overhead
can be hidden within the latency of the lock spin loop. On really
short critical sections, there may not be any performance gain at all. With
longer critical sections, however, it was found to have a performance
boost of 5-10% over a range of different queue depths with a spinlock
loop microbenchmark.
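
A short sketch of the optimistic prefetch (paraphrased; READ_ONCE()
and prefetchw() are existing kernel helpers):

  /*
   * While waiting for the MCS lock: if the successor has already
   * linked itself in, warm its node's cacheline for write so the
   * later queue-head handover does not stall on a cache miss.
   */
  next = READ_ONCE(node->next);
  if (next)
          prefetchw(next);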

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-11-23 10:01:59 +01:00
Waiman Long
64d816cba0 locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()
This patch replaces the cmpxchg() and xchg() calls in the native
qspinlock code with the more relaxed _acquire or _release versions of
those calls to enable other architectures to adopt queued spinlocks
with less memory barrier performance overhead.
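
As a hedged illustration of the substitution (the real call sites are
in the patch; this only shows the general shape):

  /* acquiring the lock needs ACQUIRE semantics, not a full barrier */
  if (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0)
          return;                 /* uncontended fast path */

The release-side operations are relaxed in the same spirit, letting
architectures with native acquire/release instructions avoid the cost
of full memory barriers.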

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-2-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-11-23 10:01:58 +01:00
Peter Zijlstra
43b3f02899 locking/qspinlock/x86: Fix performance regression under unaccelerated VMs
Dave ran into horrible performance on a VM without PARAVIRT_SPINLOCKS
set and Linus noted that the test-and-set implementation was retarded.

One should spin on the variable with a load, not a RMW.

While there, remove 'queued' from the name, as the lock isn't queued
at all, but a simple test-and-set.
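
The corrected idea, sketched in portable C11 (the kernel version lives
in arch code; this is only an analogue): spin with plain loads and
attempt the atomic RMW only once the lock has been observed free.

  #include <stdatomic.h>
  #include <stdbool.h>

  static void tas_lock(_Atomic bool *locked)
  {
          do {
                  while (atomic_load_explicit(locked, memory_order_relaxed))
                          ;       /* read-only spin, no cacheline bouncing */
          } while (atomic_exchange_explicit(locked, true,
                                            memory_order_acquire));
  }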

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Dave Chinner <david@fromorbit.com>
Tested-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: stable@vger.kernel.org # v4.2+
Link: http://lkml.kernel.org/r/20150904152523.GR18673@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-09-11 07:49:42 +02:00
Waiman Long
75d2270280 locking/pvqspinlock: Only kick CPU at unlock time
For an over-committed guest with more vCPUs than physical CPUs
available, it is possible that a vCPU may be kicked twice before
getting the lock - once before it becomes queue head and once again
before it gets the lock. All this CPU kicking and halting (VMEXIT)
can be expensive and slow down system performance.

This patch adds a new vCPU state (vcpu_hashed) which enables the code
to delay CPU kicking until unlock time. Once this state is set,
the new lock holder will set _Q_SLOW_VAL and fill in the hash table
on behalf of the halted queue head vCPU. The original vcpu_halted
state will be used by pv_wait_node() only to differentiate other
queue nodes from the queue head.
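
For reference, a sketch of the three node states this implies (values
and comments are illustrative):

  enum vcpu_state {
          vcpu_running = 0,
          vcpu_halted,    /* pv_wait_node(): halted, kicked to resume spinning */
          vcpu_hashed,    /* pv_wait_head(): halted, kicked only at unlock time */
  };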

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1436647018-49734-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-03 10:57:11 +02:00
Waiman Long
a23db284fe locking/pvqspinlock: Implement simple paravirt support for the qspinlock
Provide a separate (second) version of the spin_lock_slowpath for
paravirt along with a special unlock path.

The second slowpath is generated by adding a few pv hooks to the
normal slowpath, but where those will compile away for the native
case, they expand into special wait/wake code for the pv version.

The actual MCS queue can use extra storage in the mcs_nodes[] array to
keep track of state and therefore uses directed wakeups.

The head contender has no such storage directly visible to the
unlocker.  So the unlocker searches a hash table with open addressing
using a simple binary Galois linear feedback shift register.
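
The "compile away" aspect can be pictured with the usual stub pattern
(a simplification; the hook names follow the patch, the signatures may
differ):

  /* native build: the hooks are empty inlines and vanish entirely */
  static __always_inline void pv_wait_node(struct mcs_spinlock *node) { }
  static __always_inline void pv_kick_node(struct mcs_spinlock *node) { }

  /* pv build: the same hook names expand into real halt/kick code */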

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1429901803-29771-9-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:37:05 +02:00
Peter Zijlstra (Intel)
2aa79af642 locking/qspinlock: Revert to test-and-set on hypervisors
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-8-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:58 +02:00
Waiman Long
2c83e8e949 locking/qspinlock: Use a simple write to grab the lock
Currently, atomic_cmpxchg() is used to get the lock. However, this
is not really necessary if there is more than one task in the queue
and the queue head doesn't need to reset the tail code. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligible to get the lock as long as it checks that
both the lock and pending bits are not set. The current pending bit
waiting code will ensure that the bit will not be set as soon as the
tail code in the lock is set.
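
A simplified sketch of the resulting lock-grab step (hedged; helper and
macro names follow the qspinlock code, and the in-tree version sits
inside the slowpath's retry loop):

  /*
   * If we are still the last queued CPU, claim the lock and clear the
   * tail with one cmpxchg.  If that fails, or if other CPUs are
   * already queued behind us, the tail can no longer revert to ours,
   * so a plain byte store of the locked value is sufficient.
   */
  if ((val & _Q_TAIL_MASK) == tail &&
      atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL) == val)
          return;                 /* lock taken, queue emptied */

  set_locked(lock);               /* simple write: lock byte = 1 */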

With that change, there is some slight improvement in the performance
of the queued spinlock in the 5M loop micro-benchmark run on a 4-socket
Westmere-EX machine as shown in the tables below.

		[Standalone/Embedded - same node]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	 2324/2321	2248/2265	 -3%/-2%
       4	 2890/2896	2819/2831	 -2%/-2%
       5	 3611/3595	3522/3512	 -2%/-2%
       6	 4281/4276	4173/4160	 -3%/-3%
       7	 5018/5001	4875/4861	 -3%/-3%
       8	 5759/5750	5563/5568	 -3%/-3%

		[Standalone/Embedded - different nodes]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	12242/12237	12087/12093	 -1%/-1%
       4	10688/10696	10507/10521	 -2%/-2%

It was also found that this change produced a much bigger performance
improvement in the newer IvyBridge-EX chip, essentially closing
the performance gap between the ticket spinlock and queued spinlock.

The disk workload of the AIM7 benchmark was run on a 4-socket
Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
on a 3.14 based kernel. The results of the test runs were:

                AIM7 XFS Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            5678233    3.17       96.61       5.81
  qspinlock             5750799    3.13       94.83       5.97

                AIM7 EXT4 Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            1114551   16.15      509.72       7.11
  qspinlock             2184466    8.24      232.99       6.01

The ext4 filesystem run had a much higher spinlock contention than
the xfs filesystem run.

The "ebizzy -m" test was also run with the following results:

  kernel               records/s  Real Time   Sys Time    Usr Time
  -----                ---------  ---------   --------    --------
  ticketlock             2075       10.00      216.35       3.49
  qspinlock              3023       10.00      198.20       4.80

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-7-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:55 +02:00
Peter Zijlstra (Intel)
69f9cae909 locking/qspinlock: Optimize for smaller NR_CPUS
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.

By growing the pending bit to a byte, we reduce the tail to 16 bits.
This means we can use xchg16 for the tail part and do away with all
the repeated cmpxchg() operations.

This in turn allows us to unconditionally acquire; the locked state
as observed by the wait loops cannot change. And because both locked
and pending are now a full byte we can use simple stores for the
state transition, obviating one atomic operation entirely.

This optimization is needed to make the qspinlock achieve performance
parity with ticket spinlock at light load.

All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
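
A hedged picture of the resulting word layout (little-endian shown;
the struct and field names are illustrative, not copied from the
header):

  #include <stdint.h>

  struct qspinlock_layout {       /* 32-bit lock word */
          uint8_t  locked;        /* bits  0- 7: locked byte          */
          uint8_t  pending;       /* bits  8-15: pending byte         */
          uint16_t tail;          /* bits 16-31: CPU + node index,
                                     sufficient while NR_CPUS < 2^14  */
  };

With this split the tail can be swapped with a 16-bit xchg and the
locked/pending transitions become plain byte stores.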

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-6-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:48 +02:00
Waiman Long
6403bd7d0e locking/qspinlock: Extract out code snippets for the next patch
This is a preparatory patch that extracts out the following 2 code
snippets to prepare for the next performance optimization patch.

 1) the logic for the exchange of new and previous tail code words
    into a new xchg_tail() function.
 2) the logic for clearing the pending bit and setting the locked bit
    into a new clear_pending_set_locked() function.

This patch also simplifies the trylock operation before queuing by
calling queued_spin_trylock() directly.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-5-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:41 +02:00
Peter Zijlstra (Intel)
c1fb159db9 locking/qspinlock: Add pending bit
Because the qspinlock needs to touch a second cacheline (the per-cpu
mcs_nodes[]), add a pending bit and allow a single in-word spinner
before we punt to the second cacheline.

It is possible to observe the pending bit without the locked bit when
the last owner has just released but the pending owner has not yet
taken ownership.

In this case we would normally queue -- because the pending bit is
already taken. However, in this case the pending bit is guaranteed
to be released 'soon', therefore wait for it and avoid queueing.
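
A heavily simplified C11 sketch of the pending-bit path (bit values
are illustrative; the kernel version also handles the tail and the
races described above):

  #include <stdatomic.h>

  enum { LOCKED = 1u << 0, PENDING = 1u << 8 };

  static void lock_pending_path(_Atomic unsigned int *val)
  {
          /* announce ourselves as the single in-word spinner */
          atomic_fetch_or_explicit(val, PENDING, memory_order_relaxed);

          /* wait for the current owner to drop the locked byte */
          while (atomic_load_explicit(val, memory_order_relaxed) & LOCKED)
                  ;

          /* take the lock and retire the pending bit in one step */
          atomic_fetch_add_explicit(val, LOCKED - PENDING,
                                    memory_order_acquire);
  }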

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-4-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:32 +02:00
Waiman Long
a33fda35e3 locking/qspinlock: Introduce a simple generic 4-byte queued spinlock
This patch introduces a new generic queued spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queued spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much faster in high contention situations especially when
the spinlock is embedded within the data structure to be protected.

Only in light to moderate contention where the average queue depth
is around 1-3 will this queued spinlock be potentially a bit slower
due to the higher slowpath overhead.

This queued spinlock is especially suited to NUMA machines with a large
number of cores as the chance of spinlock contention is much higher
in those machines. The cost of contention is also higher because of
slower inter-node memory traffic.

Due to the fact that spinlocks are acquired with preemption disabled,
the process will not be migrated to another CPU while it is trying
to get a spinlock. Ignoring interrupt handling, a CPU can only be
contending in one spinlock at any one time. Counting soft IRQ, hard
IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting
activities.  By allocating a set of per-cpu queue nodes and using them
to form a waiting queue, we can encode the queue node address into a
much smaller 24-bit size (including CPU number and queue node index)
leaving one byte for the lock.

Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to
be used later.
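
A hedged sketch of the tail encoding described above (the offsets and
helper name are illustrative, not copied from the patch):

  /* 4 nodes per CPU: task, soft IRQ, hard IRQ and NMI context */
  static inline unsigned int encode_tail(int cpu, int idx)
  {
          return ((cpu + 1) << 2 | idx) << 8;   /* 0 means "no tail" */
  }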

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-08 12:36:25 +02:00