Commit Graph

914 Commits

Author SHA1 Message Date
Raghavendra Rao Ananta
c249464a77 Merge remote-tracking branch 'remotes/origin/tmp-aefd2d632eb0' into msm-lahaina
* remotes/origin/tmp-aefd2d632eb0:
  ANDROID: binder: fix sleeping from invalid function caused by RT inheritance
  FROMGIT: scsi: ufs: override auto suspend tunables for ufs
  FROMGIT: scsi: core: allow auto suspend override by low-level driver
  Linux 5.4-rc6
  net: fix installing orphaned programs
  net: cls_bpf: fix NULL deref on offload filter removal
  selftests: bpf: Skip write only files in debugfs
  selftests: net: reuseport_dualstack: fix uninitialized parameter
  r8169: fix wrong PHY ID issue with RTL8168dp
  net: dsa: bcm_sf2: Fix IMP setup for port different than 8
  net: phylink: Fix phylink_dbg() macro
  gve: Fixes DMA synchronization.
  inet: stop leaking jiffies on the wire
  ixgbe: Remove duplicate clear_bit() call
  Documentation: networking: device drivers: Remove stray asterisks
  e1000: fix memory leaks
  i40e: Fix receive buffer starvation for AF_XDP
  igb: Fix constant media auto sense switching when no cable is connected
  net: ethernet: arc: add the missed clk_disable_unprepare
  ANDROID: gki_defconfig: enable CONFIG_KEYBOARD_GPIO
  NFS: Fix an RCU lock leak in nfs4_refresh_delegation_stateid()
  NFSv4: Don't allow a cached open with a revoked delegation
  arm64: apply ARM64_ERRATUM_843419 workaround for Brahma-B53 core
  arm64: Brahma-B53 is SSB and spectre v2 safe
  arm64: apply ARM64_ERRATUM_845719 workaround for Brahma-B53 core
  igb: Enable media autosense for the i350.
  igb/igc: Don't warn on fatal read failures when the device is removed
  tcp: increase tcp_max_syn_backlog max value
  net: increase SOMAXCONN to 4096
  netdevsim: Fix use-after-free during device dismantle
  rxrpc: Fix handling of last subpacket of jumbo packet
  usb: dwc3: gadget: fix race when disabling ep with cancelled xfers
  ANDROID: Remove KVM_INTEL allmodconfig workaround
  ANDROID: Fix x86_64 allmodconfig build
  iocost: don't nest spin_lock_irq in ioc_weight_write()
  s390/idle: fix cpu idle time calculation
  s390/unwind: fix mixing regs and sp
  s390/cmm: fix information leak in cmm_timeout_handler()
  arm64: cpufeature: Enable Qualcomm Falkor errata 1009 for Kryo
  KVM: vmx, svm: always run with EFER.NXE=1 when shadow paging is active
  kvm: call kvm_arch_destroy_vm if vm creation fails
  efi/efi_test: Lock down /dev/efi_test and require CAP_SYS_ADMIN
  x86, efi: Never relocate kernel below lowest acceptable address
  efi: libstub/arm: Account for firmware reserved memory at the base of RAM
  efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness
  efi/tpm: Return -EINVAL when determining tpm final events log size fails
  efi: Make CONFIG_EFI_RCI2_TABLE selectable on x86 only
  hv_netvsc: Fix error handling in netvsc_attach()
  hv_netvsc: Fix error handling in netvsc_set_features()
  cxgb4: fix panic when attaching to ULD fail
  net: annotate lockless accesses to sk->sk_napi_id
  FROMLIST: scsi: ufs-qcom: enter and exit hibern8 during clock scaling
  FROMLIST: scsi: ufs: export hibern8 entry and exit
  ANDROID: scsi: ufs: UFS crypto variant operations API
  ANDROID: gki_defconfig: enable inline encryption
  FROMLIST: ext4: add inline encryption support
  FROMLIST: f2fs: add inline encryption support
  FROMLIST: fscrypt: add inline encryption support
  FROMLIST: scsi: ufs: Add inline encryption support to UFS
  FROMLIST: scsi: ufs: UFS crypto API
  FROMLIST: scsi: ufs: UFS driver v2.1 spec crypto additions
  FROMLIST: block: blk-crypto for Inline Encryption
  ANDROID: block: Fix bio_crypt_should_process WARN_ON
  ALSA: timer: Fix mutex deadlock at releasing card
  FROMLIST: block: Add encryption context to struct bio
  io_uring: ensure we clear io_kiocb->result before each issue
  parisc: fix frame pointer in ftrace_regs_caller()
  net: annotate accesses to sk->sk_incoming_cpu
  FROMLIST: block: Keyslot Manager for Inline Encryption
  FROMLIST: f2fs: add support for IV_INO_LBLK_64 encryption policies
  FROMLIST: ext4: add support for IV_INO_LBLK_64 encryption policies
  FROMLIST: fscrypt: add support for IV_INO_LBLK_64 policies
  FROMLIST: docs: ioctl-number: document fscrypt ioctl numbers
  FROMLIST: fscrypt: zeroize fscrypt_info before freeing
  FROMLIST: fscrypt: remove struct fscrypt_ctx
  FROMLIST: fscrypt: invoke crypto API for ESSIV handling
  mlxsw: core: Unpublish devlink parameters during reload
  qed: Optimize execution time for nvm attributes configuration.
  vxlan: fix unexpected failure of vxlan_changelink()
  qed: fix spelling mistake "queuess" -> "queues"
  SUNRPC: Destroy the back channel when we destroy the host transport
  SUNRPC: The RDMA back channel mustn't disappear while requests are outstanding
  SUNRPC: The TCP back channel mustn't disappear while requests are outstanding
  drm/amdgpu: enable -msse2 for GCC 7.1+ users
  drm/amdgpu: fix stack alignment ABI mismatch for GCC 7.1+
  drm/amdgpu: fix stack alignment ABI mismatch for Clang
  drm/radeon: Fix EEH during kexec
  drm/amdgpu/gmc10: properly set BANK_SELECT and FRAGMENT_SIZE
  drm/amdgpu/powerplay/vega10: allow undervolting in p7
  dc.c: use kzalloc without test
  drm/amd/display: setting the DIG_MODE to the correct value.
  drm/amd/display: Passive DP->HDMI dongle detection fix
  drm/amd/display: add 50us buffer as WA for pstate switch in active
  drm/amd/display: Allow inverted gamma
  drm/amd/display: do not synchronize "drr" displays
  drm/amdgpu: If amdgpu_ib_schedule fails return back the error.
  drm/sched: Set error to s_fence if HW job submission failed.
  drm/amdgpu/gfx10: update gfx golden settings for navi12
  drm/amdgpu/gfx10: update gfx golden settings for navi14
  drm/amdgpu/gfx10: update gfx golden settings
  drm/amd/display: Change Navi14's DWB flag to 1
  drm/amdgpu/sdma5: do not execute 0-sized IBs (v2)
  drm/amdgpu: Fix SDMA hang when performing VKexample test
  iwlwifi: fw api: support new API for scan config cmd
  mt76: dma: fix buffer unmap with non-linear skbs
  mt76: mt76x2e: disable pcie_aspm by default
  ALSA: hda - Fix mutex deadlock in HDMI codec driver
  usb: cdns3: gadget: Fix g_audio use case when connected to Super-Speed host
  usb: cdns3: gadget: reset EP_CLAIMED flag while unloading
  gfs2: Fix initialisation of args for remount
  iommu/vt-d: Fix panic after kexec -p for kdump
  iommu/amd: Apply the same IVRS IOAPIC workaround to Acer Aspire A315-41
  iommu/ipmmu-vmsa: Remove dev_err() on platform_get_irq() failure
  nl80211: fix validation of mesh path nexthop
  nl80211: Disallow setting of HT for channel 14
  USB: serial: whiteheat: fix line-speed endianness
  USB: serial: whiteheat: fix potential slab corruption
  MAINTAINERS: Change to my personal email address
  drm/i915: Fix PCH reference clock for FDI on HSW/BDW
  net: rtnetlink: fix a typo fbd -> fdb
  net/smc: fix refcounting for non-blocking connect()
  bonding: fix using uninitialized mode_lock
  net: fec_ptp: Use platform_get_irq_xxx_optional() to avoid error message
  net: fec_main: Use platform_get_irq_byname_optional() to avoid error message
  MAINTAINERS: remove Dave Watson as TLS maintainer
  vxlan: check tun_info options_len properly
  erspan: fix the tun_info options_len check for erspan
  net: hisilicon: Fix ping latency when deal with high throughput
  net/mlx4_core: Dynamically set guaranteed amount of counters per VF
  net/mlx5e: Initialize on stack link modes bitmap
  net/mlx5e: Fix ethtool self test: link speed
  net/mlx5e: Fix handling of compressed CQEs in case of low NAPI budget
  net/mlx5e: Don't store direct pointer to action's tunnel info
  net/mlx5: Fix NULL pointer dereference in extended destination
  net/mlx5: Fix rtable reference leak
  net/mlx5e: Only skip encap flows update when encap init failed
  net/mlx5e: Replace kfree with kvfree when free vhca stats
  net/mlx5e: Remove incorrect match criteria assignment line
  net/mlx5e: Determine source port properly for vlan push action
  net/mlx5: Fix flow counter list auto bits struct
  net: mscc: ocelot: refuse to overwrite the port's native vlan
  net: mscc: ocelot: fix vlan_filtering when enslaving to bridge before link is up
  wimax: i2400: Fix memory leak in i2400m_op_rfkill_sw_toggle
  drm/i915/tgl: Fix doc not corresponding to code
  ANDROID: staging: ion: Fix dynamic heap ID assignment
  ANDROID: Fix typo for FROMLIST: section
  drm/panfrost: Don't dereference bogus MMU pointers
  ANDROID: media: increase video max frame number
  drm/panfrost: fix -Wmissing-prototypes warnings
  net: hisilicon: Fix "Trying to free already-free IRQ"
  fjes: Handle workqueue allocation failure
  Revert "sched: Rework pick_next_task() slow-path"
  arm64: cpufeature: Enable Qualcomm Falkor/Kryo errata 1003
  drm/etnaviv: fix dumping of iommuv2
  drm/etnaviv: reinstate MMUv1 command buffer window check
  drm/etnaviv: fix deadlock in GPU coredump
  arm64: Ensure VM_WRITE|VM_SHARED ptes are clean by default
  um-ubd: Entrust re-queue to the upper layers
  nvme-multipath: remove unused groups_only mode in ana log
  nvme-multipath: fix possible io hang after ctrl reconnect
  powerpc/powernv: Fix CPU idle to be called with IRQs disabled
  sched/topology: Allow sched_asym_cpucapacity to be disabled
  sched/topology: Don't try to build empty sched domains
  USB: gadget: Reject endpoints with 0 maxpacket value
  powerpc/prom_init: Undo relocation before entering secure mode
  ANDROID: dummy_cpufreq: Implement get()
  ANDROID: gki_defconfig: enable CONFIG_CPUSETS
  ANDROID: virtio: virtio_input: Set the amount of multitouch slots in virtio input
  scsi: qla2xxx: stop timer in shutdown path
  hwmon: (ina3221) Fix read timeout issue
  net: usb: lan78xx: Disable interrupts before calling generic_handle_irq()
  Revert "ANDROID: Revert "kheaders: make headers archive reproducible""
  net: dsa: sja1105: improve NET_DSA_SJA1105_TAS dependency
  net: ethernet: ftgmac100: Fix DMA coherency issue with SW checksum
  net: fix sk_page_frag() recursion from memory reclaim
  udp: fix data-race in udp_set_dev_scratch()
  net: dpaa2: Use the correct style for SPDX License Identifier
  net: add READ_ONCE() annotation in __skb_wait_for_more_packets()
  net: use skb_queue_empty_lockless() in busy poll contexts
  net: use skb_queue_empty_lockless() in poll() handlers
  udp: use skb_queue_empty_lockless()
  net: add skb_queue_empty_lockless()
  ANDROID: gki_defconfig: enable CONFIG_CPU_FREQ_GOV_CONSERVATIVE
  RDMA/hns: Prevent memory leaks of eq->buf_list
  RISC-V: Add PCIe I/O BAR memory mapping
  RDMA/iw_cxgb4: Avoid freeing skb twice in arp failure case
  RDMA/mlx5: Use irq xarray locking for mkey_table
  ANDROID: fix VIDEOBUF2_CORE dependency in 'allmodconfig' builds
  UAS: Revert commit 3ae62a4209 ("UAS: fix alignment of scatter/gather segments")
  usb-storage: Revert commit 747668dbc0 ("usb-storage: Set virt_boundary_mask to avoid SG overflows")
  usbip: Fix free of unallocated memory in vhci tx
  usbip: tools: Fix read_usb_vudc_device() error path handling
  usb: xhci: fix __le32/__le64 accessors in debugfs code
  usb: xhci: fix Immediate Data Transfer endianness
  xhci: Fix use-after-free regression in xhci clear hub TT implementation
  USB: ldusb: fix control-message timeout
  USB: ldusb: use unsigned size format specifiers
  USB: ldusb: fix ring-buffer locking
  USB: Skip endpoints with 0 maxpacket length
  io_uring: don't touch ctx in setup after ring fd install
  ANDROID: modpost: fix up merge issues due to namespace removal
  Revert "ALSA: hda: Flush interrupts on disabling"
  perf/headers: Fix spelling s/EACCESS/EACCES/, s/privilidge/privilege/
  perf/x86/uncore: Fix event group support
  perf/x86/amd/ibs: Handle erratum #420 only on the affected CPU family (10h)
  perf/x86/amd/ibs: Fix reading of the IBS OpData register and thus precise RIP validity
  perf/core: Start rejecting the syscall with attr.__reserved_2 set
  vringh: fix copy direction of vringh_iov_push_kern()
  vsock/virtio: remove unused 'work' field from 'struct virtio_vsock_pkt'
  virtio_ring: fix stalls for packed rings
  riscv: for C functions called only from assembly, mark with __visible
  riscv: fp: add missing __user pointer annotations
  riscv: add missing header file includes
  riscv: mark some code and data as file-static
  riscv: init: merge split string literals in preprocessor directive
  riscv: add prototypes for assembly language functions from head.S
  io_uring: Fix leaked shadow_req
  fix memory leak in large read decrypt offload
  Linux 5.4-rc5
  usb: cdns3: gadget: Don't manage pullups
  usb: dwc3: remove the call trace of USBx_GFLADJ
  usb: gadget: configfs: fix concurrent issue between composite APIs
  usb: dwc3: pci: prevent memory leak in dwc3_pci_probe
  usb: gadget: composite: Fix possible double free memory bug
  usb: gadget: udc: atmel: Fix interrupt storm in FIFO mode.
  usb: renesas_usbhs: fix type of buf
  usb: renesas_usbhs: Fix warnings in usbhsg_recip_handler_std_set_device()
  usb: gadget: udc: renesas_usb3: Fix __le16 warnings
  usb: renesas_usbhs: fix __le16 warnings
  usb: cdns3: include host-export.h for cdns3_host_init
  usb: mtu3: fix missing include of mtu3_dr.h
  usb: fsl: Check memory resource before releasing it
  usb: dwc3: select CONFIG_REGMAP_MMIO
  ANDROID: add README.md
  selftests: fib_tests: add more tests for metric update
  ipv4: fix route update on metric change.
  net: Zeroing the structure ethtool_wolinfo in ethtool_get_wol()
  ALSA: bebob: Fix prototype of helper function to return negative value
  cxgb4: request the TX CIDX updates to status page
  netns: fix GFP flags in rtnl_net_notifyid()
  net: ethernet: Use the correct style for SPDX License Identifier
  net/smc: keep vlan_id for SMC-R in smc_listen_work()
  net/smc: fix closing of fallback SMC sockets
  ANDROID: virt_wifi: Add data ops for scan data simulation
  ANDROID: Allow DRM_IOCTL_MODE_*_DUMB for render clients.
  riscv: cleanup do_trap_break
  net: hwbm: if CONFIG_NET_HWBM unset, make stub functions static
  ANDROID: cpufreq: create dummy cpufreq driver
  net: mvneta: make stub functions static inline
  net: sch_generic: Use pfifo_fast as fallback scheduler for CAN hardware
  nbd: verify socket is supported during setup
  ata: libahci_platform: Fix regulator_get_optional() misuse
  nbd: handle racing with error'ed out commands
  nbd: protect cmd->status with cmd->lock
  Revert "ANDROID: x86: Remove a useless warning message"
  ANDROID: init: GKI: enable hidden configs for media
  ANDROID: gki_defconfig: add FORTIFY_SOURCE, remove SPMI_MSM_PMIC_ARB
  Revert "Revert "Revert "Revert "x86/mm: Identify the end of the kernel area to be reserved""""
  build.config.*: Link android-mainline kernels with LLD
  ANDROID: ALSA: jack: Update supported jack switch types
  ANDROID: ASoC: compress: fix unsigned integer overflow check
  io_uring: fix bad inflight accounting for SETUP_IOPOLL|SETUP_SQTHREAD
  io_uring: use cached copies of sq->dropped and cq->overflow
  FROMLIST: iommu: Export core IOMMU functions to kernel modules
  FROMLIST: PCI: Export PCI ACS and DMA searching functions to modules
  FROMLIST: of: Export of_phandle_iterator_args() to modules
  ARM: dts: stm32: relax qspi pins slew-rate for stm32mp157
  io_uring: Fix race for sqes with userspace
  io_uring: Fix broken links with offloading
  io_uring: Fix corrupted user_data
  xen: issue deprecation warning for 32-bit pv guest
  kvm: Allocate memslots and buses before calling kvm_arch_init_vm
  powerpc/powernv/eeh: Fix oops when probing cxl devices
  irqchip/sifive-plic: Skip contexts except supervisor in plic_init()
  ANDROID: soc: qcom: Add required header to irq.h
  ACPI: processor: Add QoS requests for all CPUs
  cifs: Fix cifsInodeInfo lock_sem deadlock when reconnect occurs
  CIFS: Fix use after free of file info structures
  CIFS: Fix retry mid list corruption on reconnects
  scsi: sd: define variable dif as unsigned int instead of bool
  ANDROID: v4l2-compat-ioctl32.c: copy reserved fields
  scsi: target: cxgbit: Fix cxgbit_fw4_ack()
  IB/core: Avoid deadlock during netlink message handling
  ANDROID: of: property: Enable of_devlink by default
  virt_wifi: fix refcnt leak in module exit routine
  net: remove unnecessary variables and callback
  vxlan: add adjacent link to limit depth level
  net: core: add ignore flag to netdev_adjacent structure
  macsec: fix refcnt leak in module exit routine
  team: fix nested locking lockdep warning
  bonding: use dynamic lockdep key instead of subclass
  bonding: fix unexpected IFF_BONDING bit unset
  net: core: add generic lockdep keys
  net: core: limit nested device depth
  keys: Fix memory leak in copy_net_ns
  ANDROID: of: property: Make sure child dependencies don't block probing of parent
  ANDROID: driver core: Allow fwnode_operations.add_links to differentiate errors
  ANDROID: driver core: Allow a device to wait on optional suppliers
  ANDROID: driver core: Add device link support for SYNC_STATE_ONLY flag
  FROMGIT: docs: driver-model: Add documentation for sync_state
  FROMGIT: driver: core: Improve documentation for fwnode_operations.add_links()
  FROMGIT: of: property: Minor code formatting/style clean ups
  i2c: stm32f7: remove warning when compiling with W=1
  i2c: stm32f7: fix a race in slave mode with arbitration loss irq
  i2c: stm32f7: fix first byte to send in slave mode
  i2c: mt65xx: fix NULL ptr dereference
  RDMA/nldev: Skip counter if port doesn't match
  irqchip/gic-v3-its: Use the exact ITSList for VMOVP
  FROMLIST: drivers: pinctrl: msm: setup GPIO chip in hierarchy
  FROMLIST: drivers: irqchip: pdc: Add irqchip set/get state calls
  FROMLIST: genirq: Introduce irq_chip_get/set_parent_state calls
  FROMLIST: drivers: irqchip: pdc: additionally set type in SPI config registers
  FROMLIST: dt-bindings/interrupt-controller: pdc: add SPI config register
  FROMLIST: of: irq: document properties for wakeup interrupt parent
  FROMLIST: drivers: irqchip: add PDC irqdomain for wakeup capable GPIOs
  FROMLIST: drivers: irqchip: pdc: Do not toggle IRQ_ENABLE during mask/unmask
  FROMLIST: drivers: irqchip: qcom-pdc: update max PDC interrupts
  FROMLIST: irqdomain: add bus token DOMAIN_BUS_WAKEUP
  gfs2: Fix memory leak when gfs2meta's fs_context is freed
  ALSA: hda/realtek - Fix 2 front mics of codec 0x623
  ALSA: hda/realtek - Add support for ALC623
  ALSA: usb-audio: Add DSD support for Gustard U16/X26 USB Interface
  netfilter: nft_payload: fix missing check for matching length in offloads
  ipvs: move old_secure_tcp into struct netns_ipvs
  ipvs: don't ignore errors in case refcounting ip_vs module fails
  ANDROID: drop patches/ symbolic link
  mfd: mt6397: Fix probe after changing mt6397-core
  net: phy: smsc: LAN8740: add PHY_RST_AFTER_CLK_EN flag
  MIPS: tlbex: Fix build_restore_pagemask KScratch restore
  io_uring: correct timeout req sequence when inserting a new entry
  io_uring: correct timeout req sequence when waiting timeout
  io_uring: revert "io_uring: optimize submit_and_wait API"
  MIPS: bmips: mark exception vectors as char arrays
  xsk: Fix registration of Rx-only sockets
  net/flow_dissector: switch to siphash
  MAINTAINERS: Update the Spreadtrum SoC maintainer
  ANDROID: Revert "ANDROID: Removed check for asm-goto"
  riscv: cleanup <asm/bug.h>
  riscv: Fix undefined reference to vmemmap_populate_basepages
  riscv: Fix implicit declaration of 'page_to_section'
  riscv: fix fs/proc/kcore.c compilation with sparsemem enabled
  ANDROID: sdcardfs: evict dentries on fscrypt key removal
  ANDROID: fscrypt: add key removal notifier chain
  ANDROID: move up spin_unlock_bh() ahead of remove_proc_entry()
  ANDROID: Kconfig.gki: Add hidden MMC config support
  ANDROID: Kconfig.gki: Add Hidden QCOM configs
  ANDROID: Kconfig.gki: Add SND_PCM_ELD to HIDDEN_DRM configs
  ANDROID: Kconfig.gki: Add extra audio selections
  ANDROID: Kconfig.gki: Add extra GKI_HIDDEN_REGMAP_CONFIGS selections
  ANDROID: Four part re-add of asm-goto usage [4/4]
  ANDROID: Four part re-add of asm-goto usage [3/4]
  ANDROID: Four part re-add of asm-goto usage [2/4]
  ANDROID: Four part re-add of asm-goto usage [1/4]
  ANDROID: Move out patches/ and replace by link to kernel/common-patches project
  of: reserved_mem: add missing of_node_put() for proper ref-counting
  of: unittest: fix memory leak in unittest_data_add
  dt-bindings: riscv: Fix CPU schema errors
  MAINTAINERS: Remove Gregory and Brian for ARCH_BRCMSTB
  drm/v3d: Fix memory leak in v3d_submit_cl_ioctl
  panfrost: Properly undo pm_runtime_enable when deferring a probe
  dmaengine: cppi41: Fix cppi41_dma_prep_slave_sg() when idle
  posix-cpu-timers: Fix two trivial comments
  timers/sched_clock: Include local timekeeping.h for missing declarations
  lib/vdso: Make clock_getres() POSIX compliant again
  fuse: redundant get_fuse_inode() calls in fuse_writepages_fill()
  fuse: Add changelog entries for protocols 7.1 - 7.8
  fuse: truncate pending writes on O_TRUNC
  fuse: flush dirty data/metadata before non-truncate setattr
  netfilter: nf_tables_offload: restore basechain deletion
  netfilter: nf_flow_table: set timeout before insertion into hashes
  rtlwifi: rtl_pci: Fix problem of too small skb->len
  iwlwifi: pcie: 0x2720 is qu and 0x30DC is not
  iwlwifi: pcie: add workaround for power gating in integrated 22000
  iwlwifi: mvm: handle iwl_mvm_tvqm_enable_txq() error return
  iwlwifi: pcie: fix all 9460 entries for qnj
  iwlwifi: pcie: fix PCI ID 0x2720 configs that should be soc
  rtlwifi: Fix potential overflow on P2P code
  iwlwifi: pcie: fix merge damage on making QnJ exclusive
  scripts/nsdeps: use alternative sed delimiter
  virtiofs: Remove set but not used variable 'fc'
  fs/dax: Fix pmd vs pte conflict detection
  opp: Reinitialize the list_kref before adding the static OPPs again
  bpf: Fix use after free in bpf_get_prog_name
  ALSA: hda: Add Tigerlake/Jasperlake PCI ID
  scsi: qla2xxx: Fix partial flash write of MBI
  scsi: qla2xxx: Initialized mailbox to prevent driver load failure
  scsi: lpfc: Honor module parameter lpfc_use_adisc
  ipv6: include <net/addrconf.h> for missing declarations
  net: openvswitch: free vport unless register_netdevice() succeeds
  selftests: Make l2tp.sh executable
  net: sched: taprio: fix -Wmissing-prototypes warnings
  bnxt_en: Avoid disabling pci device in bnxt_remove_one() for already disabled device.
  bnxt_en: Minor formatting changes in FW devlink_health_reporter
  bnxt_en: Adjust the time to wait before polling firmware readiness.
  bnxt_en: Fix devlink NVRAM related byte order related issues.
  bnxt_en: Fix the size of devlink MSIX parameters.
  net: stmmac: Fix the problem of tso_xmit
  dynamic_debug: provide dynamic_hex_dump stub
  bpf: Fix use after free in subprog's jited symbol removal
  RDMA/uverbs: Prevent potential underflow
  KVM: nVMX: Don't leak L1 MMIO regions to L2
  ARC: perf: Accommodate big-endian CPU
  ARC: [plat-hsdk]: Enable on-board SPI ADC IC
  ARC: [plat-hsdk]: Enable on-board SPI NOR flash IC
  KVM: SVM: Fix potential wrong physical id in avic_handle_ldr_update
  cpufreq: Cancel policy update work scheduled before freeing
  s390/kaslr: add support for R_390_GLOB_DAT relocation type
  s390/zcrypt: fix memleak at release
  ALSA: usb-audio: Fix copy&paste error in the validator
  perf/aux: Fix AUX output stopping
  kvm: clear kvmclock MSR on reset
  KVM: x86: fix bugon.cocci warnings
  KVM: VMX: Remove specialized handling of unexpected exit-reasons
  selftests: kvm: fix sync_regs_test with newer gccs
  selftests: kvm: vmx_dirty_log_test: skip the test when VMX is not supported
  selftests: kvm: consolidate VMX support checks
  selftests: kvm: vmx_set_nested_state_test: don't check for VMX support twice
  KVM: Don't shrink/grow vCPU halt_poll_ns if host side polling is disabled
  selftests: kvm: synchronize .gitignore to Makefile
  kvm: x86: Expose RDPID in KVM_GET_SUPPORTED_CPUID
  cpuidle: haltpoll: Take 'idle=' override into account
  ACPI: NFIT: Fix unlock on error in scrub_show()
  tracing: Fix race in perf_trace_buf initialization
  x86/cpu/vmware: Fix platform detection VMWARE_PORT macro
  x86/cpu/vmware: Use the full form of INL in VMWARE_HYPERCALL, for clang/llvm
  xdp: Handle device unregister for devmap_hash map type
  r8152: add device id for Lenovo ThinkPad USB-C Dock Gen 2
  ipv4: fix IPSKB_FRAG_PMTU handling with fragmentation
  ARM: 8926/1: v7m: remove register save to stack before svc
  Input: st1232 - fix reporting multitouch coordinates
  Revert "pwm: Let pwm_get_state() return the last implemented state"
  mmc: mxs: fix flags passed to dmaengine_prep_slave_sg
  virtiofs: Retry request submission from worker context
  virtiofs: Count pending forgets as in_flight forgets
  virtiofs: Set FR_SENT flag only after request has been sent
  virtiofs: No need to check fpq->connected state
  virtiofs: Do not end request in submission context
  fuse: don't advise readdirplus for negative lookup
  drm/komeda: Fix typos in komeda_splitter_validate
  drm/komeda: Don't flush inactive pipes
  i2c: aspeed: fix master pending state handling
  mmc: cqhci: Commit descriptors before setting the doorbell
  mmc: sdhci-omap: Fix Tuning procedure for temperatures < -20C
  ALSA: hda/realtek - Add support for ALC711
  perf/aux: Fix tracking of auxiliary trace buffer allocation
  fuse: don't dereference req->args on finished request
  opp: core: Revert "add regulators enable and disable"
  cifs: Fix missed free operations
  CIFS: avoid using MID 0xFFFF
  cifs: clarify comment about timestamp granularity for old servers
  cifs: Handle -EINPROGRESS only when noblockcnt is set
  PM: QoS: Drop frequency QoS types from device PM QoS
  cpufreq: Use per-policy frequency QoS
  PM: QoS: Introduce frequency QoS
  Linux 5.4-rc4
  hwmon: (nct7904) Fix the incorrect value of vsen_mask & tcpu_mask & temp_mode in nct7904_data struct.
  perf/x86/intel/pt: Fix base for single entry topa
  KVM: arm64: pmu: Reset sample period on overflow handling
  KVM: arm64: pmu: Set the CHAINED attribute before creating the in-kernel event
  arm64: KVM: Handle PMCR_EL0.LC as RES1 on pure AArch64 systems
  KVM: arm64: pmu: Fix cycle counter truncation
  net: reorder 'struct net' fields to avoid false sharing
  net: dsa: fix switch tree list
  net: ethernet: dwmac-sun8i: show message only when switching to promisc
  net: aquantia: add an error handling in aq_nic_set_multicast_list
  net: netem: correct the parent's backlog when corrupted packet was dropped
  net: netem: fix error path for corrupted GSO frames
  macb: propagate errors when getting optional clocks
  xen/netback: fix error path of xenvif_connect_data()
  net: hns3: fix mis-counting IRQ vector numbers issue
  scripts/gdb: fix debugging modules on s390
  kernel/events/uprobes.c: only do FOLL_SPLIT_PMD for uprobe register
  mm/thp: allow dropping THP from page cache
  mm/vmscan.c: support removing arbitrary sized pages from mapping
  mm/thp: fix node page state in split_huge_page_to_list()
  proc/meminfo: fix output alignment
  mm/init-mm.c: include <linux/mman.h> for vm_committed_as_batch
  mm/filemap.c: include <linux/ramfs.h> for generic_file_vm_ops definition
  mm: include <linux/huge_mm.h> for is_vma_temporary_stack
  zram: fix race between backing_dev_show and backing_dev_store
  mm/memcontrol: update lruvec counters in mem_cgroup_move_account
  ocfs2: fix panic due to ocfs2_wq is null
  hugetlbfs: don't access uninitialized memmaps in pfn_range_valid_gigantic()
  mm: memblock: do not enforce current limit for memblock_phys* family
  mm: memcg: get number of pages on the LRU list in memcgroup base on lru_zone_size
  mm/gup: fix a misnamed "write" argument, and a related bug
  mm/gup_benchmark: add a missing "w" to getopt string
  ocfs2: fix error handling in ocfs2_setattr()
  mm: memcg/slab: fix panic in __free_slab() caused by premature memcg pointer release
  mm/memunmap: don't access uninitialized memmap in memunmap_pages()
  mm/memory_hotplug: don't access uninitialized memmaps in shrink_pgdat_span()
  mm/page_owner: don't access uninitialized memmaps when reading /proc/pagetypeinfo
  scripts/gdb: fix lx-dmesg when CONFIG_PRINTK_CALLER is set
  mm/memory-failure.c: don't access uninitialized memmaps in memory_failure()
  fs/proc/page.c: don't access uninitialized memmaps in fs/proc/page.c
  drivers/base/memory.c: don't access uninitialized memmaps in soft_offline_page_store()
  xdp: Prevent overflow in devmap_hash cost calculation for 32-bit builds
  filldir[64]: remove WARN_ON_ONCE() for bad directory entries
  scsi: ufs-bsg: Wake the device before sending raw upiu commands
  scsi: lpfc: Check queue pointer before use
  mips: vdso: Fix __arch_get_hw_counter()
  MAINTAINERS: Use @kernel.org address for Paul Burton
  scsi: qla2xxx: fixup incorrect usage of host_byte
  selftests/bpf: More compatible nc options in test_tc_edt
  net/mlx5: fix memory leak in mlx5_fw_fatal_reporter_dump
  net/mlx5: prevent memory leak in mlx5_fpga_conn_create_cq
  net/mlx5e: TX, Fix consumer index of error cqe dump
  net/mlx5e: kTLS, Enhance TX resync flow
  net/mlx5e: kTLS, Save a copy of the crypto info
  net/mlx5e: kTLS, Remove unneeded cipher type checks
  net/mlx5e: kTLS, Limit DUMP wqe size
  net/mlx5e: kTLS, Fix missing SQ edge fill
  net/mlx5e: kTLS, Fix page refcnt leak in TX resync error flow
  net/mlx5e: kTLS, Save by-value copy of the record frags
  net/mlx5e: kTLS, Save only the frag page to release at completion
  net/mlx5e: kTLS, Size of a Dump WQE is fixed
  net/mlx5e: kTLS, Release reference on DUMPed fragments in shutdown flow
  net/mlx5e: Tx, Zero-memset WQE info struct upon update
  net/mlx5e: Tx, Fix assumption of single WQEBB of NOP in cleanup flow
  usb: cdns3: Error out if USB_DR_MODE_UNKNOWN in cdns3_core_init_role()
  ARM: dts: bcm2837-rpi-cm3: Avoid leds-gpio probing issue
  USB: ldusb: fix read info leaks
  IB/core: Use rdma_read_gid_l2_fields to compare GID L2 fields
  RDMA/qedr: Fix reported firmware version
  RDMA/siw: free siw_base_qp in kref release routine
  tracing: Fix "gfp_t" format for synthetic events
  RDMA/iwcm: move iw_rem_ref() calls out of spinlock
  iw_cxgb4: fix ECN check on the passive accept
  net: usb: lan78xx: Connect PHY before registering MAC
  vsock/virtio: discard packets if credit is not respected
  vsock/virtio: send a credit update when buffer size is changed
  mlxsw: spectrum_trap: Push Ethernet header before reporting trap
  ASoC: SOF: control: return true when kcontrol values change
  ASoC: stm32: sai: fix sysclk management on shutdown
  ASoC: Intel: sof-rt5682: add a check for devm_clk_get
  ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting
  net: ensure correct skb->tstamp in various fragmenters
  net: bcmgenet: reset 40nm EPHY on energy detect
  net: bcmgenet: soft reset 40nm EPHYs before MAC init
  net: phy: bcm7xxx: define soft_reset for 40nm EPHY
  net: bcmgenet: don't set phydev->link from MAC
  bus: ti-sysc: Fix watchdog quirk handling
  ARM: OMAP2+: Add pdata for OMAP3 ISP IOMMU
  ARM: OMAP2+: Plug in device_enable/idle ops for IOMMUs
  iommu/vt-d: Return the correct dma mask when we are bypassing the IOMMU
  iommu/amd: Check PM_LEVEL_SIZE() condition in locked section
  nvme-pci: Set the prp2 correctly when using more than 4k page
  HID: i2c-hid: add Trekstor Primebook C11B to descriptor override
  symbol namespaces: revert to previous __ksymtab name scheme
  modpost: make updating the symbol namespace explicit
  modpost: delegate updating namespaces to separate function
  HID: logitech-hidpp: do all FF cleanup in hidpp_ff_destroy()
  HID: logitech-hidpp: rework device validation
  HID: logitech-hidpp: split g920_get_config()
  HID: i2c-hid: Remove runtime power management
  x86/boot/acpi: Move get_cmdline_acpi_rsdp() under #ifdef guard
  x86/hyperv: Set pv_info.name to "Hyper-V"
  ACPI: CPPC: Set pcc_data[pcc_ss_id] to NULL in acpi_cppc_processor_exit()
  dmaengine: qcom: bam_dma: Fix resource leak
  scsi: lpfc: remove left-over BUILD_NVME defines
  scsi: core: try to get module before removing device
  scsi: hpsa: add missing hunks in reset-patch
  scsi: target: core: Do not overwrite CDB byte 1
  net: Update address for MediaTek ethernet driver in MAINTAINERS
  ipv4: fix race condition between route lookup and invalidation
  ipv4: Return -ENETUNREACH if we can't create route but saddr is valid
  net: phy: micrel: Update KSZ87xx PHY name
  net: phy: micrel: Discern KSZ8051 and KSZ8795 PHYs
  io_uring: fix logic error in io_timeout
  io_uring: fix up O_NONBLOCK handling for sockets
  drm/amdgpu/vce: fix allocation size in enc ring test
  drm/amdgpu: fix error handling in amdgpu_bo_list_create
  drm/amdgpu: fix potential VM faults
  drm/amdgpu: user pages array memory leak fix
  drm/amdgpu/vcn: fix allocation size in enc ring test
  drm/amdgpu/uvd7: fix allocation size in enc ring test (v2)
  drm/amdgpu/uvd6: fix allocation size in enc ring test (v2)
  IB/hfi1: Use a common pad buffer for 9B and 16B packets
  IB/hfi1: Avoid excessive retry for TID RDMA READ request
  RDMA/mlx5: Clear old rate limit when closing QP
  net: dsa: microchip: Add shared regmap mutex
  net: dsa: microchip: Do not reinit mutexes on KSZ87xx
  net: stmmac: fix argument to stmmac_pcs_ctrl_ane()
  dpaa2-eth: Fix TX FQID values
  dpaa2-eth: add irq for the dpmac connect/disconnect event
  usb: hso: obey DMA rules in tiocmget
  Btrfs: check for the full sync flag while holding the inode lock during fsync
  Btrfs: fix qgroup double free after failure to reserve metadata for delalloc
  coccinelle: api/devm_platform_ioremap_resource: remove useless script
  ALSA: hda - Force runtime PM on Nvidia HDMI codecs
  dm cache: fix bugs when a GFP_NOWAIT allocation fails
  ARM: davinci_all_defconfig: enable GPIO backlight
  ARM: davinci: dm365: Fix McBSP dma_slave_map entry
  binder: Don't modify VMA bounds in ->mmap handler
  btrfs: tracepoints: Fix bad entry members of qgroup events
  btrfs: tracepoints: Fix wrong parameter order for qgroup events
  stop_machine: Avoid potential race behaviour
  EDAC/ghes: Fix Use after free in ghes_edac remove path
  ALSA: hda/realtek - Enable headset mic on Asus MJ401TA
  ALSA: usb-audio: Disable quirks for BOSS Katana amplifiers
  kheaders: substituting --sort in archive creation
  powerpc/32s: fix allow/prevent_user_access() when crossing segment boundaries.
  net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow
  net: phy: Fix "link partner" information disappear issue
  rxrpc: use rcu protection while reading sk->sk_user_data
  drm/i915: Fixup preempt-to-busy vs resubmission of a virtual request
  drm/i915/userptr: Never allow userptr into the mappable GGTT
  drm/i915: Favor last VBT child device with conflicting AUX ch/DDC pin
  drm/i915/execlists: Refactor -EIO markup of hung requests
  Revert "blackhole_netdev: fix syzkaller reported issue"
  arm64: tags: Preserve tags for addresses translated via TTBR1
  arm64: mm: fix inverted PAR_EL1.F check
  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
  md/raid0: fix warning message for parameter default_layout
  kthread: make __kthread_queue_delayed_work static
  pinctrl: aspeed-g6: Rename SD3 to EMMC and rework pin groups
  pinctrl: aspeed-g6: Fix UART13 group pinmux
  pinctrl: aspeed-g6: Make SIG_DESC_CLEAR() behave intuitively
  pinctrl: aspeed-g6: Fix I3C3/I3C4 pinmux configuration
  pinctrl: aspeed-g6: Fix I2C14 SDA description
  pinctrl: aspeed-g6: Sort pins for sanity
  dt-bindings: pinctrl: aspeed-g6: Rework SD3 function and groups
  perf kmem: Fix memory leak in compact_gfp_flags()
  usercopy: Avoid soft lockups in test_check_nonzero_user()
  pinctrl: berlin: as370: fix a typo s/spififib/spdifib
  ACPI: processor: Avoid NULL pointer dereferences at init time
  USB: serial: ti_usb_3410_5052: clean up serial data access
  USB: serial: ti_usb_3410_5052: fix port-close races
  xtensa: fix change_bit in exclusive access option
  HID: intel-ish-hid: fix wrong error handling in ishtp_cl_alloc_tx_ring()
  RISC-V: fix virtual address overlapped in FIXADDR_START and VMEMMAP_START
  net: usb: sr9800: fix uninitialized local variable
  net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3
  net: stmmac: make tc_flow_parsers static
  davinci_cpdma: make cpdma_chan_split_pool static
  net: i82596: fix dma_alloc_attr for sni_82596
  sctp: change sctp_prot .no_autobind with true
  sched: etf: Fix ordering of packets with same txtime
  net: avoid potential infinite loop in tc_ctl_action()
  net: dsa: sja1105: Use the correct style for SPDX License Identifier
  tcp: fix a possible lockdep splat in tcp_done()
  arm: dts: mediatek: Update mt7629 dts to reflect the latest dt-binding
  net: ethernet: mediatek: Fix MT7629 missing GMII mode support
  Revert "Input: elantech - enable SMBus on new (2018+) systems"
  net/sched: fix corrupted L2 header with MPLS 'push' and 'pop' actions
  net: avoid errors when trying to pop MPLS header on non-MPLS packets
  net: cavium: Use the correct style for SPDX License Identifier
  net: dsa: microchip: Use the correct style for SPDX License Identifier
  PCI: PM: Fix pci_power_up()
  xtensa: virt: fix PCI IO ports mapping
  libata/ahci: Fix PCS quirk application
  vfio/type1: Initialize resv_msi_base
  8250-men-mcb: fix error checking when get_num_ports returns -ENODEV
  USB: usblp: fix use-after-free on disconnect
  usb: udc: lpc32xx: fix bad bit shift operation
  usb: cdns3: Fix dequeue implementation.
  USB: legousbtower: fix a signedness bug in tower_probe()
  USB: legousbtower: fix memleak on disconnect
  USB: ldusb: fix memleak on disconnect
  net: ethernet: broadcom: have drivers select DIMLIB as needed
  net: Update address for vrf and l3mdev in MAINTAINERS
  net: bcmgenet: Set phydev->dev_flags only for internal PHYs
  blackhole_netdev: fix syzkaller reported issue
  ARM: dts: bcm2835-rpi-zero-w: Fix bus-width of sdhci
  sparc64: disable fast-GUP due to unexplained oopses
  btrfs: qgroup: Always free PREALLOC META reserve in btrfs_delalloc_release_extents()
  drm/panfrost: Handle resetting on timeout better
  blk-rq-qos: fix first node deletion of rq_qos_del()
  blkcg: Fix multiple bugs in blkcg_activate_policy()
  xfs: change the seconds fields in xfs_bulkstat to signed
  tools headers UAPI: Sync sched.h with the kernel
  rbd: cancel lock_dwork if the wait is interrupted
  ceph: just skip unrecognized info in ceph_reply_info_extra
  tools headers kvm: Sync kvm.h headers with the kernel sources
  tools headers kvm: Sync kvm headers with the kernel sources
  tools headers kvm: Sync kvm headers with the kernel sources
  perf c2c: Fix memory leak in build_cl_output()
  perf tools: Fix mode setting in copyfile_mode_ns()
  perf annotate: Fix multiple memory and file descriptor leaks
  io_uring: consider the overflow of sequence for timeout req
  perf tools: Fix resource leak of closedir() on the error paths
  perf evlist: Fix fix for freed id arrays
  perf jvmti: Link against tools/lib/ctype.h to have weak strlcpy()
  scripts: setlocalversion: fix a bashism
  kbuild: update comment about KBUILD_ALLDIRS
  virtio-fs: don't show mount options
  nvme-tcp: fix possible leakage during error flow
  nvmet-loop: fix possible leakage during error flow
  btrfs: don't needlessly create extent-refs kernel thread
  iommu/amd: Fix incorrect PASID decoding from event log
  iommu/ipmmu-vmsa: Only call platform_get_irq() when interrupt is mandatory
  iommu/rockchip: Don't use platform_get_irq to implicitly count irqs
  dmaengine: sprd: Fix the possible memory leak issue
  dmaengine: xilinx_dma: Fix control reg update in vdma_channel_set_config
  dmaengine: xilinx_dma: Fix 64-bit simple AXIDMA transfer
  x86/apic/x2apic: Fix a NULL pointer deref when handling a dying cpu
  x86/hyperv: Make vapic support x2apic mode
  KVM: PPC: Book3S HV: XIVE: Ensure VP isn't already in use
  arm64: hibernate: check pgd table allocation
  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
  net: aquantia: correctly handle macvlan and multicast coexistence
  net: aquantia: do not pass lro session with invalid tcp checksum
  net: aquantia: when cleaning hw cache it should be toggled
  net: aquantia: temperature retrieval fix
  gpio: lynxpoint: set default handler to be handle_bad_irq()
  gpio: merrifield: Move hardware initialization to callback
  gpio: lynxpoint: Move hardware initialization to callback
  gpio: intel-mid: Move hardware initialization to callback
  gpiolib: Initialize the hardware with a callback
  gpio: merrifield: Restore use of irq_base
  xtensa: drop EXPORT_SYMBOL for outs*/ins*
  mm/memory-failure: poison read receives SIGKILL instead of SIGBUS if mmaped more than once
  mm/slab.c: fix kernel-doc warning for __ksize()
  xarray.h: fix kernel-doc warning
  bitmap.h: fix kernel-doc warning and typo
  fs/fs-writeback.c: fix kernel-doc warning
  fs/libfs.c: fix kernel-doc warning
  fs/direct-io.c: fix kernel-doc warning
  mm, compaction: fix wrong pfn handling in __reset_isolation_pfn()
  mm, hugetlb: allow hugepage allocations to reclaim as needed
  lib/test_meminit: add a kmem_cache_alloc_bulk() test
  mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations
  lib/generic-radix-tree.c: add kmemleak annotations
  mm/slub: fix a deadlock in show_slab_objects()
  mm, page_owner: rename flag indicating that page is allocated
  mm, page_owner: decouple freeing stack trace from debug_pagealloc
  mm, page_owner: fix off-by-one error in __set_page_owner_handle()
  xtensa: fix type conversion in __get_user_[no]check
  xtensa: clean up assembly arguments in uaccess macros
  block: Fix elv_support_iosched()
  parisc: Remove 32-bit DMA enforcement from sba_iommu
  parisc: Fix vmap memory leak in ioremap()/iounmap()
  parisc: prefer __section from compiler_attributes.h
  parisc: sysctl.c: Use CONFIG_PARISC instead of __hppa_ define
  firmware: dmi: Fix unlikely out-of-bounds read in save_mem_devices
  riscv: tlbflush: remove confusing comment on local_flush_tlb_all()
  riscv: dts: HiFive Unleashed: add default chosen/stdout-path
  riscv: remove the switch statement in do_trap_break()
  drm/panfrost: Add missing GPU feature registers
  bpf: lwtunnel: Fix reroute supplying invalid dst
  xtensa: fix {get,put}_user() for 64bit values
  kmemleak: Do not corrupt the object_list during clean-up
  nvme-tcp: Initialize sk->sk_ll_usec only with NET_RX_BUSY_POLL
  nvme: Wait for reset state when required
  nvme: Prevent resets during paused controller state
  nvme: Restart request timers in resetting state
  nvme: Remove ADMIN_ONLY state
  nvme-pci: Free tagset if no IO queues
  hrtimer: Annotate lockless access to timer->base
  staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS
  ARM: imx_v6_v7_defconfig: Enable CONFIG_DRM_MSM
  arm64: dts: imx8mn: Use correct clock for usdhc's ipg clk
  arm64: dts: imx8mm: Use correct clock for usdhc's ipg clk
  arm64: dts: imx8mq: Use correct clock for usdhc's ipg clk
  platform/x86: i2c-multi-instantiate: Fail the probe if no IRQ provided
  ARM: dts: imx7s: Correct GPT's ipg clock source
  ARM: dts: vf610-zii-scu4-aib: Specify 'i2c-mux-idle-disconnect'
  drm/ttm: fix handling in ttm_bo_add_mem_to_lru
  drm/ttm: Restore ttm prefaulting
  drm/ttm: fix busy reference in ttm_mem_evict_first
  ARM: dts: imx6q-logicpd: Re-Enable SNVS power key
  ath10k: fix latency issue for QCA988x
  virtio-fs: Change module name to virtiofs.ko
  dmaengine: imx-sdma: fix size check for sdma script_number
  dmaengine: tegra210-adma: fix transfer failure
  arm64: dts: lx2160a: Correct CPU core idle state name
  dmaengine: sprd: Fix the link-list pointer register configuration issue
  batman-adv: Avoid free/alloc race when handling OGM buffer
  batman-adv: Avoid free/alloc race when handling OGM2 buffer
  netdevsim: Fix error handling in nsim_fib_init and nsim_fib_exit
  net/ibmvnic: Fix EOI when running in XIVE mode.
  net: lpc_eth: avoid resetting twice
  tcp: annotate sk->sk_wmem_queued lockless reads
  tcp: annotate sk->sk_sndbuf lockless reads
  tcp: annotate sk->sk_rcvbuf lockless reads
  tcp: annotate tp->urg_seq lockless reads
  tcp: annotate tp->snd_nxt lockless reads
  tcp: annotate tp->write_seq lockless reads
  tcp: annotate tp->copied_seq lockless reads
  tcp: annotate tp->rcv_nxt lockless reads
  tcp: add rcu protection around tp->fastopen_rsk
  vhost/test: stop device before reset
  tools/virtio: xen stub
  mailmap: Add Simon Arlott (replacement for expired email address)
  rxrpc: Fix possible NULL pointer access in ICMP handling
  drm/amdgpu/sdma5: fix mask value of POLL_REGMEM packet for pipe sync
  drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1
  Revert "drm/radeon: Fix EEH during kexec"
  Input: synaptics-rmi4 - avoid processing unknown IRQs
  btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group()
  drm/msm/dsi: Implement reset correctly
  Btrfs: add missing extents release on file extent cluster relocation error
  x86/boot/64: Round memory hole size up to next PMD page
  x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area
  arm64: Fix kcore macros after 52-bit virtual addressing fallout
  tools/virtio: more stubs
  net/smc: receive pending data after RCV_SHUTDOWN
  net/smc: receive returns without data
  net/smc: fix SMCD link group creation with VLAN id
  net: update net_dim documentation after rename
  r8169: fix jumbo packet handling on resume from suspend
  arm64: dts: rockchip: Fix override mode for rk3399-kevin panel
  arm64: dts: rockchip: Fix usb-c on Hugsun X99 TV Box
  arm64: dts: rockchip: fix RockPro64 sdmmc settings
  ARM: 8914/1: NOMMU: Fix exc_ret for XIP
  ARM: 8908/1: add __always_inline to functions called from __get_user_check()
  HID: google: add magnemite/masterball USB ids
  MAINTAINERS: Add BCM2711 to BCM2835 ARCH
  dma-buf/resv: fix exclusive fence get
  drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50
  dm snapshot: rework COW throttling to fix deadlock
  dm snapshot: introduce account_start_copy() and account_end_copy()
  drm/tiny: Kconfig: Remove always-y THERMAL dep. from TINYDRM_REPAPER
  platform/x86: intel_punit_ipc: Avoid error message when retrieving IRQ
  platform/x86: classmate-laptop: remove unused variable
  opp: of: drop incorrect lockdep_assert_held()
  PM: sleep: include <linux/pm_runtime.h> for pm_wq
  cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown
  ACPI: PM: Drop Dell XPS13 9360 from LPS0 Idle _DSM blacklist
  net: silence KCSAN warnings about sk->sk_backlog.len reads
  net: annotate sk->sk_rcvlowat lockless reads
  net: silence KCSAN warnings around sk_add_backlog() calls
  tcp: annotate lockless access to tcp_memory_pressure
  net: add {READ|WRITE}_ONCE() annotations on ->rskq_accept_head
  net: avoid possible false sharing in sk_leave_memory_pressure()
  tun: remove possible false sharing in tun_flow_update()
  netfilter: conntrack: avoid possible false sharing
  netns: fix NLM_F_ECHO mechanism for RTM_NEWNSID
  scsi: ch: Make it possible to open a ch device multiple times again
  scsi: fix kconfig dependency warning related to 53C700_LE_ON_BE
  scsi: sni_53c710: fix compilation error
  scsi: scsi_dh_alua: handle RTPG sense code correctly during state transitions
  scsi: qla2xxx: fix a potential NULL pointer dereference
  net: usb: qmi_wwan: add Telit 0x1050 composition
  act_mirred: Fix mirred_init_module error handling
  net: taprio: Fix returning EINVAL when configuring without flags
  s390/qeth: Fix initialization of vnicc cmd masks during set online
  s390/qeth: Fix error handling during VNICC initialization
  phylink: fix kernel-doc warnings
  sctp: add chunks to sk_backlog when the newsk sk_socket is not set
  bonding: fix potential NULL deref in bond_update_slave_arr
  net: stmmac: fix disabling flexible PPS output
  net: stmmac: fix length of PTP clock's name string
  ARM: mm: alignment: use "u32" for 32-bit instructions
  ARM: mm: fix alignment handler faults under memory pressure
  ARM: dts: Use level interrupt for omap4 & 5 wlcore
  drivers/amba: fix reset control error handling
  ASoC: simple_card_utils.h: Fix potential multiple redefinition error
  ASoC: msm8916-wcd-digital: add missing MIX2 path for RX1/2
  drm/amdgpu/powerplay: fix typo in mvdd table setup
  iwlwifi: pcie: change qu with jf devices to use qu configuration
  iwlwifi: exclude GEO SAR support for 3168
  iwlwifi: pcie: fix memory leaks in iwl_pcie_ctxt_info_gen3_init
  iwlwifi: dbg_ini: fix memory leak in alloc_sgtable
  iwlwifi: pcie: fix rb_allocator workqueue allocation
  iwlwifi: pcie: fix indexing in command dump for new HW
  iwlwifi: mvm: fix race in sync rx queue notification
  iwlwifi: mvm: force single phy init
  iwlwifi: fix ACPI table revision checks
  iwlwifi: don't access trans_cfg via cfg
  memstick: jmb38x_ms: Fix an error handling path in 'jmb38x_ms_probe()'
  mmc: sdhci-iproc: fix spurious interrupts on Multiblock reads with bcm2711
  pinctrl: armada-37xx: swap polarity on LED group
  arm64: dts: armada-3720-turris-mox: convert usb-phy to phy-supply
  ip6erspan: remove the incorrect mtu limit for ip6erspan
  Doc: networking/device_drivers/pensando: fix ionic.rst warnings
  NFC: pn533: fix use-after-free and memleaks
  Input: soc_button_array - partial revert of support for newer surface devices
  net_sched: fix backward compatibility for TCA_ACT_KIND
  net_sched: fix backward compatibility for TCA_KIND
  net/mlx5: DR, Allow insertion of duplicate rules
  selftests/bpf: More compatible nc options in test_lwt_ip_encap
  selftests/bpf: Set rp_filter in test_flow_dissector
  llc: fix sk_buff refcounting in llc_conn_state_process()
  llc: fix another potential sk_buff leak in llc_ui_sendmsg()
  llc: fix sk_buff leak in llc_conn_service()
  llc: fix sk_buff leak in llc_sap_state_process()
  dm clone: Make __hash_find static
  rt2x00: remove input-polldev.h header
  ARM: dts: am3874-iceboard: Fix 'i2c-mux-idle-disconnect' usage
  ARM: dts: omap5: fix gpu_cm clock provider name
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
  mac80211: fix scan when operating on DFS channels in ETSI domains
  mac80211: accept deauth frames in IBSS mode
  cfg80211: fix a bunch of RCU issues in multi-bssid code
  nl80211: fix memory leak in nl80211_get_ftm_responder_stats
  ptp: fix typo of "mechanism" in Kconfig help text
  ionic: fix stats memory dereference
  ASoC: core: Fix pcm code debugfs error
  ARM: dts: sun7i: Drop the module clock from the device tree
  dt-bindings: media: sun4i-csi: Drop the module clock
  rxrpc: Fix call crypto state cleanup
  rxrpc: rxrpc_peer needs to hold a ref on the rxrpc_local record
  rxrpc: Fix trace-after-put looking at the put call record
  rxrpc: Fix trace-after-put looking at the put connection record
  rxrpc: Fix trace-after-put looking at the put peer record
  media: dt-bindings: Fix building error for dt_binding_check
  rxrpc: Fix call ref leak
  ALSA: hdac: clear link output stream mapping
  ALSA: hda/realtek: Reduce the Headphone static noise on XPS 9350/9360
  lib: test_user_copy: style cleanup
  net: stmmac: selftests: Fix L2 Hash Filter test
  net: stmmac: gmac4+: Not all Unicast addresses may be available
  net: stmmac: selftests: Check if filtering is available before running
  net: dsa: b53: Do not clear existing mirrored port mask
  arm64: dts: zii-ultra: fix ARM regulator states
  soc: imx: imx-scu: Getting UID from SCU should have response
  pinctrl: stmfx: fix null pointer on remove
  pinctrl: iproc: allow for error from platform_get_irq()
  nvme: retain split access workaround for capability reads
  nvme: fix possible deadlock when nvme_update_formats fails
  pinctrl: ns2: Fix off by one bugs in ns2_pinmux_enable()
  pinctrl: bcm-iproc: Use SPDX header
  pinctrl: armada-37xx: fix control of pins 32 and up
  arm64: dts: rockchip: fix RockPro64 sdhci settings
  arm64: dts: rockchip: fix RockPro64 vdd-log regulator settings
  regulator: qcom-rpmh: Fix PMIC5 BoB min voltage
  ARM: dts: logicpd-torpedo-som: Remove twl_keypad
  cfg80211: wext: avoid copying malformed SSIDs
  mac80211: Reject malformed SSID elements
  mac80211_hwsim: fix incorrect dev_alloc_name failure goto
  MAINTAINERS: Add hp_sdc drivers to parisc arch
  scsi: MAINTAINERS: Update qla2xxx driver
  scsi: zfcp: fix reaction on bit error threshold notification
  scsi: core: save/restore command resid for error handling
  dt-bindings: arm: rockchip: fix Theobroma-System board bindings
  arm64: dts: rockchip: fix Rockpro64 RK808 interrupt line
  HID: Fix assumption that devices have inputs
  ARM: omap2plus_defconfig: Fix selected panels after generic panel changes
  samples/bpf: Add a workaround for asm_inline
  xsk: Fix crash in poll when device does not support ndo_xsk_wakeup
  samples/bpf: Fix build for task_fd_query_user.c
  ASoC: rockchip: i2s: Fix RPM imbalance
  mmc: sh_mmcif: Use platform_get_irq_optional() for optional interrupt
  mmc: renesas_sdhi: Do not use platform_get_irq() to count interrupts
  ACPI: HMAT: ACPI_HMAT_MEMORY_PD_VALID is deprecated since ACPI-6.3
  Input: goodix - add support for 9-bytes reports
  Input: da9063 - fix capability and drop KEY_SLEEP
  ASoC: wm_adsp: Don't generate kcontrols without READ flags
  sysfs: Fixes __BIN_ATTR_WO() macro
  rt2x00: initialize last_reset
  selftests/bpf: test_progs: Don't leak server_fd in test_sockopt_inherit
  selftests/bpf: test_progs: Don't leak server_fd in tcp_rtt
  regulator: pfuze100-regulator: Variable "val" in pfuze100_regulator_probe() could be uninitialized
  ASoC: intel: bytcr_rt5651: add null check to support_button_press
  ASoC: intel: sof_rt5682: add remove function to disable jack
  ASoC: rt5682: add NULL handler to set_jack function
  ASoC: intel: sof_rt5682: use separate route map for dmic
  ASoC: SOF: Intel: hda: Disable DMI L1 entry during capture
  ASoC: SOF: Intel: initialise and verify FW crash dump data.
  ASoC: SOF: Intel: hda: fix warnings during FW load
  ASoC: SOF: pcm: harden PCM STOP sequence
  ASoC: SOF: pcm: fix resource leak in hw_free
  ASoC: SOF: topology: fix parse fail issue for byte/bool tuple types
  ASoC: SOF: loader: fix kernel oops on firmware boot failure
  regulator: lochnagar: Add on_off_delay for VDDCORE
  ASoC: wm_adsp: Fix theoretical NULL pointer for alg_region
  pinctrl: cherryview: restore Strago DMI workaround for all versions
  pinctrl: intel: Allocate IRQ chip dynamic
  HID: prodikeys: make array keys static const, makes object smaller
  HID: fix error message in hid_open_report()
  ASoC: max98373: check for device node before parsing
  regulator: ti-abb: Fix timeout in ti_abb_wait_txdone/ti_abb_clear_all_txdone
  iommu/io-pgtable-arm: Support all Mali configurations
  iommu/io-pgtable-arm: Correct Mali attributes
  iommu/arm-smmu: Free context bitmap in the err path of arm_smmu_init_domain_context
  scsi: qla2xxx: Remove WARN_ON_ONCE in qla2x00_status_cont_entry()
  scsi: sd: Ignore a failure to sync cache due to lack of authorization
  arm64: dts: Fix gpio to pinmux mapping
  libbpf: handle symbol versioning properly for libbpf.a
  arm64: dts: allwinner: a64: sopine-baseboard: Add PHY regulator delay
  arm64: dts: allwinner: a64: Drop PMU node
  arm64: dts: allwinner: a64: pine64-plus: Add PHY regulator delay
  tools: bpf: Use !building_out_of_srctree to determine srctree
  ASoC: topology: Fix a signedness bug in soc_tplg_dapm_widget_create()
  scsi: core: fix dh and multipathing for SCSI hosts without request batching
  scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching
  regulator: da9062: fix suspend_enable/disable preparation
  dt-bindings: fixed-regulator: fix compatible enum
  regulator: fixed: Prevent NULL pointer dereference when !CONFIG_OF
  ASoC: soc-component: fix a couple missing error assignments
  ASoC: wm8994: Do not register inapplicable controls for WM1811
  ASoC: samsung: arndale: Add missing OF node dereferencing
  irqchip/sifive-plic: Switch to fasteoi flow
  irqchip/gic-v3: Fix GIC_LINE_NR accessor
  regulator: core: make regulator_register() EPROBE_DEFER aware
  regulator: of: fix suspend-min/max-voltage parsing
  irqchip/atmel-aic5: Add support for sam9x60 irqchip
  irqchip/al-fic: Add support for irq retrigger

Change-Id: I5e7fd941c93a36889378f480cc27d8ea77d11b39
Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
2019-11-04 17:30:19 -08:00
qctecmdr
f4de4c753b Merge "mm: vmscan: support complete shrinker reclaim" 2019-10-25 02:45:51 -07:00
Liam Mark
1b45604c21 mm: vmscan: support complete shrinker reclaim
Ensure that shrinkers are given the option to completely drop
their caches even when those caches are smaller than the batch size.

This change helps improve memory headroom by ensuring that under
significant memory pressure shrinkers can drop all of their caches.

This change only attempts to call the shrinkers more aggressively
during background memory reclaim, in order to avoid hurting the
performance of direct memory reclaim.
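
For illustration, a hedged userspace simulation (not the kernel's
do_shrink_slab()) of why a batch-only scan loop never touches a cache
smaller than the batch size, and how one final partial batch lets the
shrinker drop everything:

#include <stdio.h>

/* Simulate a shrinker scan: classic batching vs. a final partial batch. */
static long scan_cache(long cache_size, long batch_size, int allow_partial)
{
	long freed = 0, total_scan = cache_size;

	while (total_scan >= batch_size) {	/* batch-only loop */
		freed += batch_size;
		total_scan -= batch_size;
	}
	if (allow_partial && total_scan > 0)	/* behaviour this patch enables */
		freed += total_scan;
	return freed;
}

int main(void)
{
	/* A 100-object cache with the common batch size of 128: */
	printf("batch-only: freed %ld\n", scan_cache(100, 128, 0)); /* 0 */
	printf("with tail:  freed %ld\n", scan_cache(100, 128, 1)); /* 100 */
	return 0;
}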

Change-Id: I8dbc29c054add639e4810e36fd2c8a063e5c52f3
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
2019-10-24 10:22:26 -07:00
Patrick Daly
1839defde4 mm: vmscan: support equal reclaim for anon and file pages
When performing memory reclaim, support treating anonymous and
file-backed pages equally.

Swapping anonymous pages out to memory-backed swap can be efficient
enough to justify treating anonymous and file-backed pages equally.

CRs-Fixed: 648984
Change-Id: I6315b8557020d1e27a34225bb9cefbef1fb43266
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
[swatsrid@codeaurora.org: Fix trivial merge conflicts]
Signed-off-by: Swathi Sridhar <swatsrid@codeaurora.org>
[isaacm@codeaurora.org: Fix trivial merge conflicts]
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
2019-10-24 10:22:10 -07:00
William Kucharski
906d278d75 mm/vmscan.c: support removing arbitrary sized pages from mapping
__remove_mapping() assumes that pages can only be either base pages or
HPAGE_PMD_SIZE.  Ask the page what size it is.

Link: http://lkml.kernel.org/r/20191017164223.2762148-4-songliubraving@fb.com
Fixes: 99cb0dbd47 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Song Liu <songliubraving@fb.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-19 06:32:32 -04:00
Honglei Wang
b11edebbc9 mm: memcg: get number of pages on the LRU list in memcgroup base on lru_zone_size
Commit 1a61ab8038 ("mm: memcontrol: replace zone summing with
lruvec_page_state()") made lruvec_page_state() use per-cpu counters
instead of calculating the value directly from lru_zone_size, with the
idea that this would be more efficient.

Tim has reported that this is not really the case for their database
benchmark, which shows the opposite result: lruvec_page_state is taking
up a huge chunk of CPU cycles (about 25% of the system time, which is
roughly 7% of total CPU cycles) on 5.3 kernels.  The workload is
running on a large machine (96 CPUs), it has many cgroups (500) and it
is heavily direct-reclaim bound.

Tim Chen said:

: The problem can also be reproduced by running simple multi-threaded
: pmbench benchmark with a fast Optane SSD swap (see profile below).
:
:
: 6.15%     3.08%  pmbench          [kernel.vmlinux]            [k] lruvec_lru_size
:             |
:             |--3.07%--lruvec_lru_size
:             |          |
:             |          |--2.11%--cpumask_next
:             |          |          |
:             |          |           --1.66%--find_next_bit
:             |          |
:             |           --0.57%--call_function_interrupt
:             |                     |
:             |                      --0.55%--smp_call_function_interrupt
:             |
:             |--1.59%--0x441f0fc3d009
:             |          _ops_rdtsc_init_base_freq
:             |          access_histogram
:             |          page_fault
:             |          __do_page_fault
:             |          handle_mm_fault
:             |          __handle_mm_fault
:             |          |
:             |           --1.54%--do_swap_page
:             |                     swapin_readahead
:             |                     swap_cluster_readahead
:             |                     |
:             |                      --1.53%--read_swap_cache_async
:             |                                __read_swap_cache_async
:             |                                alloc_pages_vma
:             |                                __alloc_pages_nodemask
:             |                                __alloc_pages_slowpath
:             |                                try_to_free_pages
:             |                                do_try_to_free_pages
:             |                                shrink_node
:             |                                shrink_node_memcg
:             |                                |
:             |                                |--0.77%--lruvec_lru_size
:             |                                |
:             |                                 --0.76%--inactive_list_is_low
:             |                                           |
:             |                                            --0.76%--lruvec_lru_size
:             |
:              --1.50%--measure_read
:                        page_fault
:                        __do_page_fault
:                        handle_mm_fault
:                        __handle_mm_fault
:                        do_swap_page
:                        swapin_readahead
:                        swap_cluster_readahead
:                        |
:                         --1.48%--read_swap_cache_async
:                                   __read_swap_cache_async
:                                   alloc_pages_vma
:                                   __alloc_pages_nodemask
:                                   __alloc_pages_slowpath
:                                   try_to_free_pages
:                                   do_try_to_free_pages
:                                   shrink_node
:                                   shrink_node_memcg
:                                   |
:                                   |--0.75%--inactive_list_is_low
:                                   |          |
:                                   |           --0.75%--lruvec_lru_size
:                                   |
:                                    --0.73%--lruvec_lru_size

The likely culprit is the cache traffic the lruvec_page_state_local
generates.  Dave Hansen says:

: I was thinking purely of the cache footprint.  If it's reading
: pn->lruvec_stat_local->count[idx] is three separate cachelines, so 192
: bytes of cache *96 CPUs = 18k of data, mostly read-only.  1 cgroup would
: be 18k of data for the whole system and the caching would be pretty
: efficient and all 18k would probably survive a tight page fault loop in
: the L1.  500 cgroups would be ~90k of data per CPU thread which doesn't
: fit in the L1 and probably wouldn't survive a tight page fault loop if
: both logical threads were banging on different cgroups.
:
: It's just a theory, but it's why I noted the number of cgroups when I
: initially saw this show up in profiles

Fix the regression by partially reverting the said commit and
calculating the lru size explicitly.
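
A hedged sketch of the shape of the fix (simplified; the struct and
constant here are illustrative stand-ins, not the kernel's
definitions): the LRU size is summed from the exact per-zone counters
instead of folding per-cpu stat deltas.

#define MAX_NR_ZONES 4	/* illustrative constant */

struct lruvec_sizes { unsigned long lru_zone_size[MAX_NR_ZONES]; };

/* Sum the exact per-zone LRU sizes up to and including zone_idx. */
static unsigned long lruvec_lru_size(const struct lruvec_sizes *lruvec,
				     int zone_idx)
{
	unsigned long size = 0;
	int zid;

	for (zid = 0; zid <= zone_idx; zid++)
		size += lruvec->lru_zone_size[zid];
	return size;
}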

Link: http://lkml.kernel.org/r/20190905071034.16822-1-honglei.wang@oracle.com
Fixes: 1a61ab8038 ("mm: memcontrol: replace zone summing with lruvec_page_state()")
Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Tested-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: <stable@vger.kernel.org>	[5.2+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-19 06:32:32 -04:00
Chris Down
1bc63fb127 mm, memcg: make scan aggression always exclude protection
This patch is an incremental improvement on the existing
memory.{low,min} relative reclaim work to base its scan pressure
calculations on how much protection is available compared to the current
usage, rather than how much the current usage is over some protection
threshold.

In the normal case, this change doesn't alter the user experience
much.  One benefit is that it replaces the (somewhat arbitrary) 100%
cutoff with an indefinite slope, which makes it easier to ballpark
a memory.low value.

As well as this, the old methodology doesn't quite apply generically to
machines with varying amounts of physical memory.  Let's say we have a
top level cgroup, workload.slice, and another top level cgroup,
system-management.slice.  We want to roughly give 12G to
system-management.slice, so on a 32GB machine we set memory.low to 20GB
in workload.slice, and on a 64GB machine we set memory.low to 52GB.
However, because these are relative amounts to the total machine size,
while the amount of memory we want to generally be willing to yield to
system.slice is absolute (12G), we end up putting more pressure on
system.slice just because we have a larger machine and a larger workload
to fill it, which seems fairly unintuitive.  With this new behaviour, we
don't end up with this unintended side effect.

Previously the way that memory.low protection works is that if you are
50% over a certain baseline, you get 50% of your normal scan pressure.
This is certainly better than the previous cliff-edge behaviour, but it
can be improved even further by always considering memory under the
currently enforced protection threshold to be out of bounds.  This means
that we can set relatively low memory.low thresholds for variable or
bursty workloads while still getting a reasonable level of protection,
whereas with the previous version we may still trivially hit the 100%
clamp.  The previous 100% clamp is also somewhat arbitrary, whereas this
one is more concretely based on the currently enforced protection
threshold, which is likely easier to reason about.

There is also a subtle issue with the way that proportional reclaim
worked previously -- it promotes having no memory.low, since it makes
pressure higher during low reclaim.  This happens because we base our
scan pressure modulation on how far memory.current is between memory.min
and memory.low, but if memory.low is unset, we only use the overage
method.  In most cromulent configurations, this then means that we end
up with *more* pressure than with no memory.low at all when we're in low
reclaim, which is not really very usable or expected.

With this patch, memory.low and memory.min affect reclaim pressure in a
more understandable and composable way.  For example, from a user
standpoint, "protected" memory now remains untouchable from a reclaim
aggression standpoint, and users can also have more confidence that
bursty workloads will still receive some amount of guaranteed
protection.
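
A hedged sketch of the described calculation (userspace arithmetic
only, assuming a 64-bit unsigned long; function and variable names are
illustrative, not the kernel's): memory under the enforced protection
is excluded from scanning entirely, and the remainder scales with how
far usage exceeds protection.

#include <stdio.h>

static unsigned long scan_target(unsigned long lruvec_size,
				 unsigned long usage,
				 unsigned long protection)
{
	if (!protection)
		return lruvec_size;	/* unprotected: full pressure */
	if (usage < protection)
		usage = protection;	/* fully protected: scan nothing */
	/* scan only the unprotected fraction of the lruvec */
	return lruvec_size - lruvec_size * protection / usage;
}

int main(void)
{
	/* 12G of protection on a cgroup using 16G: scan 1/4 of the lruvec */
	printf("%lu\n", scan_target(1000, 16UL << 30, 12UL << 30)); /* 250 */
	return 0;
}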

Link: http://lkml.kernel.org/r/20190322160307.GA3316@chrisdown.name
Signed-off-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07 15:47:20 -07:00
Chris Down
9de7ca46ad mm, memcg: make memory.emin the baseline for utilisation determination
Roman points out that when we do the low reclaim pass, we scale the
reclaim pressure relative to position between 0 and the maximum
protection threshold.

However, if the maximum protection is based on memory.elow, and
memory.emin is above zero, this means we still may get binary behaviour
on second-pass low reclaim.  This is because we scale starting at 0, not
starting at memory.emin, and since we don't scan at all below emin, we
end up with cliff behaviour.

This should be a fairly uncommon case since usually we don't go into the
second pass, but it makes sense to scale our low reclaim pressure
starting at emin.

You can test this by catting two large sparse files, one in a cgroup
with emin set to some moderate size compared to physical RAM, and
another cgroup without any emin.  In both cgroups, set an elow larger
than 50% of physical RAM.  The one with emin will have less page
scanning, as reclaim pressure is lower.

Rebase on top of and apply the same idea as what was applied to handle
cgroup_memory=disable properly for the original proportional patch
http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name ("mm,
memcg: Handle cgroup_disable=memory when getting memcg protection").

Link: http://lkml.kernel.org/r/20190201051810.GA18895@chrisdown.name
Signed-off-by: Chris Down <chris@chrisdown.name>
Suggested-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Dennis Zhou <dennis@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07 15:47:20 -07:00
Chris Down
9783aa9917 mm, memcg: proportional memory.{low,min} reclaim
cgroup v2 introduces two memory protection thresholds: memory.low
(best-effort) and memory.min (hard protection).  While they generally do
what they say on the tin, there is a limitation in their implementation
that makes them difficult to use effectively: that cliff behaviour often
manifests when they become eligible for reclaim.  This patch implements
more intuitive and usable behaviour, where we gradually mount more
reclaim pressure as cgroups further and further exceed their protection
thresholds.

This cliff edge behaviour happens because we only choose whether or not
to reclaim based on whether the memcg is within its protection limits
(see the use of mem_cgroup_protected in shrink_node), but we don't vary
our reclaim behaviour based on this information.  Imagine the following
timeline, where the numbers are the lruvec size in this zone:

1. memory.low=1000000, memory.current=999999. 0 pages may be scanned.
2. memory.low=1000000, memory.current=1000000. 0 pages may be scanned.
3. memory.low=1000000, memory.current=1000001. 1000001* pages may be
   scanned. (?!)

* Of course, we won't usually scan all available pages in the zone even
  without this patch because of scan control priority, over-reclaim
  protection, etc.  However, as shown by the tests at the end, these
  techniques don't sufficiently throttle such an extreme change in input,
  so cliff-like behaviour isn't really averted by their existence alone.

Here's an example of how this plays out in practice.  At Facebook, we are
trying to protect various workloads from "system" software, like
configuration management tools, metric collectors, etc (see this[0] case
study).  In order to find a suitable memory.low value, we start by
determining the expected memory range within which the workload will be
comfortable operating.  This isn't an exact science -- memory usage deemed
"comfortable" will vary over time due to user behaviour, differences in
composition of work, etc, etc.  As such we need to ballpark memory.low,
but doing this is currently problematic:

1. If we end up setting it too low for the workload, it won't have
   *any* effect (see discussion above).  The group will receive the full
   weight of reclaim and won't have any priority while competing with the
   less important system software, as if we had no memory.low configured
   at all.

2. Because of this behaviour, we end up erring on the side of setting
   it too high, such that the comfort range is reliably covered.  However,
   protected memory is completely unavailable to the rest of the system,
   so we might cause undue memory and IO pressure there when we *know* we
   have some elasticity in the workload.

3. Even if we get the value totally right, smack in the middle of the
   comfort zone, we get extreme jumps between no pressure and full
   pressure that cause unpredictable pressure spikes in the workload due
   to the current binary reclaim behaviour.

With this patch, we can set it to our ballpark estimation without too much
worry.  Any undesirable behaviour, such as too much or too little reclaim
pressure on the workload or system will be proportional to how far our
estimation is off.  This means we can set memory.low much more
conservatively and thus waste less resources *without* the risk of the
workload falling off a cliff if we overshoot.

As a more abstract technical description, this unintuitive behaviour
results in having to give high-priority workloads a large protection
buffer on top of their expected usage to function reliably, as otherwise
we have abrupt periods of dramatically increased memory pressure which
hamper performance.  Having to set these thresholds so high wastes
resources and generally works against the principle of work conservation.
In addition, having proportional memory reclaim behaviour has other
benefits.  Most notably, before this patch it's basically mandatory to set
memory.low to a higher than desirable value because otherwise as soon as
you exceed memory.low, all protection is lost, and all pages are eligible
to scan again.  By contrast, having a gradual ramp in reclaim pressure
means that you now still get some protection when thresholds are exceeded,
which means that one can now be more comfortable setting memory.low to
lower values without worrying that all protection will be lost.  This is
important because workingset size is really hard to know exactly,
especially with variable workloads, so at least getting *some* protection
if your workingset size grows larger than you expect increases user
confidence in setting memory.low without a huge buffer on top being
needed.

Thanks a lot to Johannes Weiner and Tejun Heo for their advice and
assistance in thinking about how to make this work better.

In testing these changes, I intended to verify that:

1. Changes in page scanning become gradual and proportional instead of
   binary.

   To test this, I experimented stepping further and further down
   memory.low protection on a workload that floats around 19G workingset
   when under memory.low protection, watching page scan rates for the
   workload cgroup:

   +------------+-----------------+--------------------+--------------+
   | memory.low | test (pgscan/s) | control (pgscan/s) | % of control |
   +------------+-----------------+--------------------+--------------+
   |        21G |               0 |                  0 | N/A          |
   |        17G |             867 |               3799 | 23%          |
   |        12G |            1203 |               3543 | 34%          |
   |         8G |            2534 |               3979 | 64%          |
   |         4G |            3980 |               4147 | 96%          |
   |          0 |            3799 |               3980 | 95%          |
   +------------+-----------------+--------------------+--------------+

   As you can see, the test kernel (with a kernel containing this
   patch) ramps up page scanning significantly more gradually than the
   control kernel (without this patch).

2. More gradual ramp up in reclaim aggression doesn't result in
   premature OOMs.

   To test this, I wrote a script that slowly increments the number of
   pages held by stress(1)'s --vm-keep mode until a production system
   entered severe overall memory contention.  This script runs in a highly
   protected slice taking up the majority of available system memory.
   Watching vmstat revealed that page scanning continued essentially
   nominally between test and control, without causing forward reclaim
   progress to become arrested.

[0]: https://facebookmicrosites.github.io/cgroup2/docs/overview.html#case-study-the-fbtax2-project

[akpm@linux-foundation.org: reflow block comments to fit in 80 cols]
[chris@chrisdown.name: handle cgroup_disable=memory when getting memcg protection]
  Link: http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name
Link: http://lkml.kernel.org/r/20190124014455.GA6396@chrisdown.name
Signed-off-by: Chris Down <chris@chrisdown.name>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07 15:47:20 -07:00
Minchan Kim
1a4e58cce8 mm: introduce MADV_PAGEOUT
When a process expects no accesses to a certain memory range for a long
time, it can hint the kernel that the pages can be reclaimed instantly
while their data is preserved for future use.  This can reduce
workingset eviction and so end up increasing performance.

This patch introduces the new MADV_PAGEOUT hint to the madvise(2)
syscall.  MADV_PAGEOUT can be used by a process to mark a memory range
as not expected to be used for a long time, so that the kernel reclaims
*any LRU* pages instantly.  The hint can help the kernel decide which
pages to evict proactively.

A note: it intentionally does not apply the SWAP_CLUSTER_MAX LRU page
isolation limit, because the work is automatically bounded by PMD size.
If the PMD size (e.g., 256) causes trouble, we could fix it later by
limiting it to SWAP_CLUSTER_MAX[1].

- man-page material

MADV_PAGEOUT (since Linux x.x)

Do not expect access in the near future, so pages in the specified
regions can be reclaimed instantly regardless of memory pressure.
Thus, an access in the range after a successful operation may cause a
major page fault, but the up-to-date contents are never lost, unlike
with MADV_DONTNEED.  Pages belonging to a shared mapping are only
processed if a write access is allowed for the calling process.

MADV_PAGEOUT cannot be applied to locked pages, Huge TLB pages, or
VM_PFNMAP pages.

[1] https://lore.kernel.org/lkml/20190710194719.GS29695@dhcp22.suse.cz/
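
A hedged userspace usage sketch; the MADV_PAGEOUT fallback constant
below (21) matches the uapi value this patch adds, and is defined here
only in case the system headers lack it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* uapi value added by this patch */
#endif

int main(void)
{
	size_t len = 64 << 20;	/* 64MB we won't touch for a while */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	memset(buf, 1, len);	/* populate the pages */

	/*
	 * Hint the kernel to reclaim the range now; contents are
	 * preserved and a later access takes a (major) fault,
	 * unlike MADV_DONTNEED.
	 */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	printf("first byte still %d after pageout hint\n", buf[0]);
	return 0;
}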

[minchan@kernel.org: clear PG_active on MADV_PAGEOUT]
  Link: http://lkml.kernel.org/r/20190802200643.GA181880@google.com
[akpm@linux-foundation.org: resolve conflicts with hmm.git]
Link: http://lkml.kernel.org/r/20190726023435.214162-5-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: kbuild test robot <lkp@intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Daniel Colascione <dancol@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Tim Murray <timmurray@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-25 17:51:41 -07:00
Minchan Kim
8940b34a4e mm: change PAGEREF_RECLAIM_CLEAN with PAGE_REFRECLAIM
The local variable "references" in shrink_page_list defaults to
PAGEREF_RECLAIM_CLEAN.  It exists to prevent reclaiming dirty pages
when CMA tries to migrate pages.  Strictly speaking, we don't need it,
because CMA already disallows writeout via .may_writepage = 0 in
reclaim_clean_pages_from_list.

Moreover, it prevents anonymous pages from being swapped out even when
force_reclaim = true in shrink_page_list in the upcoming patch.  So
this patch makes the default value of "references" PAGEREF_RECLAIM and
renames force_reclaim to ignore_references to make it clearer.

This is preparatory work for the next patch.

Link: http://lkml.kernel.org/r/20190726023435.214162-3-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Daniel Colascione <dancol@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: kbuild test robot <lkp@intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Tim Murray <timmurray@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-25 17:51:41 -07:00
Yang Shi
0a432dcbeb mm: shrinker: make shrinker not depend on memcg kmem
Currently a shrinker is only allocated and able to work when memcg
kmem is enabled.  But the THP deferred split shrinker is not a slab
shrinker, so it doesn't make much sense to have such a shrinker depend
on memcg kmem.  It should be able to reclaim THPs even when memcg kmem
is disabled.

Introduce a new shrinker flag, SHRINKER_NONSLAB, for non-slab
shrinkers.  When memcg kmem is disabled, only such shrinkers can be
called when shrinking memcg slab.
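
As a hedged kernel-side sketch, a non-slab, memcg-aware shrinker
registration carries the new flag alongside the existing ones (field
values here illustrate the THP deferred split shrinker from memory of
the patch, not a verbatim quote):

static struct shrinker deferred_split_shrinker = {
	.count_objects	= deferred_split_count,	/* callback names assumed */
	.scan_objects	= deferred_split_scan,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
			  SHRINKER_NONSLAB,	/* new flag from this patch */
};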

[yang.shi@linux.alibaba.com: add comment]
  Link: http://lkml.kernel.org/r/1566496227-84952-4-git-send-email-yang.shi@linux.alibaba.com
Link: http://lkml.kernel.org/r/1565144277-36240-4-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:11 -07:00
Yang Shi
7ae88534cd mm: move mem_cgroup_uncharge out of __page_cache_release()
A later patch makes the THP deferred split shrinker memcg aware, but it
needs the page->mem_cgroup information in the THP destructor, which
currently runs after mem_cgroup_uncharge().

So move mem_cgroup_uncharge() from __page_cache_release() to the
compound page destructor, which is called by both THPs and other
compound pages except HugeTLB.  And call it in __put_single_page() for
order-0 pages.

Link: http://lkml.kernel.org/r/1565144277-36240-3-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Suggested-by: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:11 -07:00
Vlastimil Babka
5ee04716c4 mm, reclaim: cleanup should_continue_reclaim()
After commit "mm, reclaim: make should_continue_reclaim perform dryrun
detection", a closer look at the function shows that nr_reclaimed == 0
means the function will always return false.  And since non-zero
nr_reclaimed implies non-zero nr_scanned, testing nr_scanned serves no
purpose, and neither does the test for __GFP_RETRY_MAYFAIL.

This patch thus cleans up the function to test only !nr_reclaimed
upfront, and removes the __GFP_RETRY_MAYFAIL test and the nr_scanned
parameter completely.  The comment is also updated, explaining that
approximating "full LRU list has been scanned" with nr_scanned == 0
didn't really work.
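
A hedged, kernel-style sketch of the resulting control flow
(illustrative simplification, not the verbatim kernel function):

/* Keep reclaiming for this costly allocation? */
static bool should_continue_reclaim_sketch(unsigned long nr_reclaimed,
					   bool compaction_ready,
					   unsigned long inactive_lru_pages,
					   unsigned long pages_for_compaction)
{
	/* dryrun: nothing reclaimed means another pass won't help */
	if (!nr_reclaimed)
		return false;
	/* stop once compaction can take over... */
	if (compaction_ready)
		return false;
	/* ...or when too little inactive memory is left to be useful */
	return inactive_lru_pages > pages_for_compaction;
}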

Link: http://lkml.kernel.org/r/20190806014744.15446-3-mike.kravetz@oracle.com
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:10 -07:00
Hillf Danton
1c6c15971e mm, reclaim: make should_continue_reclaim perform dryrun detection
Patch series "address hugetlb page allocation stalls", v2.

Allocation of hugetlb pages via sysctl or procfs can stall for minutes or
hours.  A simple example on a two node system with 8GB of memory is as
follows:

echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
echo 4096 > /proc/sys/vm/nr_hugepages

Obviously, both allocation attempts will fall short of their 8GB goal.
However, one or both of these commands may stall and not be interruptible.
The issues were initially discussed in mail thread [1] and RFC code at
[2].

This series addresses the issues causing the stalls.  There are two
distinct fixes, a cleanup, and an optimization.  The reclaim patch by
Hillf and the compaction patch by Vlastimil address corner cases in
their respective areas.  hugetlb page allocation could stall due to
either of these issues.  Vlastimil added a cleanup patch after Hillf's
modifications.  The hugetlb patch by Mike is an optimization suggested
during the debug and development process.

[1] http://lkml.kernel.org/r/d38a095e-dc39-7e82-bb76-2c9247929f07@oracle.com
[2] http://lkml.kernel.org/r/20190724175014.9935-1-mike.kravetz@oracle.com

This patch (of 4):

Address the issue of should_continue_reclaim() returning true too often
for __GFP_RETRY_MAYFAIL attempts when nothing was reclaimed but pages
were scanned (!nr_reclaimed && nr_scanned).  This was observed during
hugetlb page allocation, causing stalls for minutes or hours.

We can stop reclaiming pages if compaction reports it can make progress.
There might be side-effects for other high-order allocations that would
potentially benefit from reclaiming more before compaction so that they
would be faster and less likely to stall.  However, the consequences of
premature/over-reclaim are considered worse.

We can also bail out of reclaiming pages if we know that there are not
enough inactive lru pages left to satisfy the costly allocation.

We can also give up reclaiming pages if a dryrun occurs even though
plenty of inactive pages remain; IOW, once a dryrun is detected, we can
be sure we have reclaimed as many pages as we could.

Link: http://lkml.kernel.org/r/20190806014744.15446-2-mike.kravetz@oracle.com
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Tested-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:10 -07:00
Johannes Weiner
1ba6fc9af3 mm: vmscan: do not share cgroup iteration between reclaimers
One of our services observed a high rate of cgroup OOM kills in the
presence of large amounts of clean cache.  Debugging showed that the
culprit is the shared cgroup iteration in page reclaim.

Under high allocation concurrency, multiple threads enter reclaim at the
same time.  Fearing overreclaim when we first switched from the single
global LRU to cgrouped LRU lists, we introduced a shared iteration state
for reclaim invocations - whether 1 or 20 reclaimers are active
concurrently, we only walk the cgroup tree once: the 1st reclaimer
reclaims the first cgroup, the second the second one etc.  With more
reclaimers than cgroups, we start another walk from the top.

This sounded reasonable at the time, but the problem is that reclaim
concurrency doesn't scale with allocation concurrency.  As reclaim
concurrency increases, the amount of memory individual reclaimers get to
scan gets smaller and smaller.  Individual reclaimers may only see one
cgroup per cycle, and that may not have much reclaimable memory.  We see
individual reclaimers declare OOM when there is plenty of reclaimable
memory available in cgroups they didn't visit.

This patch does away with the shared iterator, and every reclaimer is
allowed to scan the full cgroup tree and see all of reclaimable memory,
just like it would on a non-cgrouped system.  This way, when OOM is
declared, we know that the reclaimer actually had a chance.

To still maintain fairness in reclaim pressure, disallow cgroup reclaim
from bailing out of the tree walk early.  Kswapd and regular direct
reclaim already don't bail, so it's not clear why limit reclaim would have
to, especially since it only walks subtrees to begin with.

This change completely eliminates the OOM kills on our service, while
showing no signs of overreclaim - no increased scan rates, %sys time, or
abrupt free memory spikes.  I tested across 100 machines that have 64G of
RAM and host about 300 cgroups each.

[ It's possible overreclaim never was a *practical* issue to begin
  with - it was simply a concern we had on the mailing lists at the
  time, with no real data to back it up. But we have also added more
  bail-out conditions deeper inside reclaim (e.g. the proportional
  exit in shrink_node_memcg) since. Regardless, now we have data that
  suggests full walks are more reliable and scale just fine. ]
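
A hedged kernel-side sketch of the full-tree walk described above
(fragment only; mem_cgroup_iter() is the existing cgroup iterator, the
rest is illustrative, not the verbatim shrink_node()):

struct mem_cgroup *memcg = mem_cgroup_iter(root, NULL, NULL);
do {
	/* every reclaimer scans the whole tree from the root... */
	shrink_node_memcg(pgdat, memcg, sc, &lru_pages);
	/*
	 * ...and no longer bails out early; fairness comes from
	 * everyone doing the full walk, not from a shared cursor.
	 */
} while ((memcg = mem_cgroup_iter(root, memcg, NULL)));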

Link: http://lkml.kernel.org/r/20190812192316.13615-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:08 -07:00
Matthew Wilcox (Oracle)
d8c6546b1a mm: introduce compound_nr()
Replace 1 << compound_order(page) with compound_nr(page).  Minor
improvements in readability.
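
The helper itself is tiny; a hedged sketch of its definition (from
memory of the patch, modulo exact placement in the headers):

static inline unsigned long compound_nr(struct page *page)
{
	/* number of base pages making up this (possibly compound) page */
	return 1UL << compound_order(page);
}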

Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-24 15:54:08 -07:00
Michal Hocko
d2e5fb927e mm, memcg: do not set reclaim_state on soft limit reclaim
Adric Blake has noticed[1] the following warning:

  WARNING: CPU: 7 PID: 175 at mm/vmscan.c:245 set_task_reclaim_state+0x1e/0x40
  [...]
  Call Trace:
   mem_cgroup_shrink_node+0x9b/0x1d0
   mem_cgroup_soft_limit_reclaim+0x10c/0x3a0
   balance_pgdat+0x276/0x540
   kswapd+0x200/0x3f0
   ? wait_woken+0x80/0x80
   kthread+0xfd/0x130
   ? balance_pgdat+0x540/0x540
   ? kthread_park+0x80/0x80
   ret_from_fork+0x35/0x40
  ---[ end trace 727343df67b2398a ]---

which tells us that soft limit reclaim is about to overwrite the
reclaim_state configured up the call chain (kswapd in this case, but
direct reclaim is equally possible).  This means that reclaim stats
would become misleading once the soft reclaim returns and another
reclaim is done.

Fix the warning by dropping set_task_reclaim_state from the soft reclaim
which is always called with reclaim_state set up.

[1] http://lkml.kernel.org/r/CAE1jjeePxYPvw1mw2B3v803xHVR_BNnz0hQUY_JDMN8ny29M6w@mail.gmail.com

Link: http://lkml.kernel.org/r/20190828071808.20410-1-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Adric Blake <promarbler14@gmail.com>
Acked-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hdanton@sina.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-30 18:00:50 -07:00
Mel Gorman
28360f3987 mm, vmscan: do not special-case slab reclaim when watermarks are boosted
Dave Chinner reported a problem pointing a finger at commit 1c30844d2d
("mm: reclaim small amounts of memory when an external fragmentation
event occurs").

The report is extensive:

  https://lore.kernel.org/linux-mm/20190807091858.2857-1-david@fromorbit.com/

and it's worth recording the most relevant parts (colorful language and
typos included).

	When running a simple, steady state 4kB file creation test to
	simulate extracting tarballs larger than memory full of small
	files into the filesystem, I noticed that once memory fills up
	the cache balance goes to hell.

	The workload is creating one dirty cached inode for every dirty
	page, both of which should require a single IO each to clean and
	reclaim, and creation of inodes is throttled by the rate at which
	dirty writeback runs at (via balance dirty pages). Hence the ingest
	rate of new cached inodes and page cache pages is identical and
	steady. As a result, memory reclaim should quickly find a steady
	balance between page cache and inode caches.

	The moment memory fills, the page cache is reclaimed at a much
	faster rate than the inode cache, and evidence suggests that
	the inode cache shrinker is not being called when large batches
	of pages are being reclaimed. In roughly the same time period
	that it takes to fill memory with 50% pages and 50% slab caches,
	memory reclaim reduces the page cache down to just dirty pages
	and slab caches fill the entirety of memory.

	The LRU is largely full of dirty pages, and we're getting spikes
	of random writeback from memory reclaim so it's all going to shit.
	Behaviour never recovers, the page cache remains pinned at just
	dirty pages, and nothing I could tune would make any difference.
	vfs_cache_pressure makes no difference - I would set it so high
	it should trim the entire inode caches in a single pass, yet it
	didn't do anything. It was clear from tracing and live telemetry
	that the shrinkers were pretty much not running except when
	there was absolutely no memory free at all, and then they did
	the minimum necessary to free memory to make progress.

	So I went looking at the code, trying to find places where pages
	got reclaimed and the shrinkers weren't called. There's only one
	- kswapd doing boosted reclaim as per commit 1c30844d2d ("mm:
	reclaim small amounts of memory when an external fragmentation
	event occurs").

The watermark boosting introduced by the commit is triggered in response
to an allocation "fragmentation event".  The boosting was not intended
to target THP specifically and triggers even if THP is disabled.
However, with Dave's perfectly reasonable workload, fragmentation events
can be very common given the ratio of slab to page cache allocations so
boosting remains active for long periods of time.

As high-order allocations might use compaction and compaction cannot
move slab pages the decision was made in the commit to special-case
kswapd when watermarks are boosted -- kswapd avoids reclaiming slab as
reclaiming slab does not directly help compaction.

As Dave notes, this decision means that slab can be artificially
protected for long periods of time and messes up the balance between
slab and page caches.

Removing the special casing can still indirectly help avoid
fragmentation by avoiding fragmentation-causing events due to slab
allocation as pages from a slab pageblock will have some slab objects
freed.  Furthermore, with the special casing, reclaim behaviour is
unpredictable as kswapd sometimes examines slab and sometimes does not
in a manner that is tricky to tune or analyse.

This patch removes the special casing.  The downside is that this is not
a universal performance win.  Some benchmarks that depend on the
residency of data when rereading metadata may see a regression when slab
reclaim is restored to its original behaviour.  Similarly, some
benchmarks that only read-once or write-once may perform better when
page reclaim is too aggressive.  The primary upside is that slab
shrinker is less surprising (arguably more sane but that's a matter of
opinion), behaves consistently regardless of the fragmentation state of
the system and properly obeys VM sysctls.

A fsmark benchmark configuration was constructed similar to what Dave
reported and is codified by the mmtest configuration
config-io-fsmark-small-file-stream.  It was evaluated on a 1-socket
machine to avoid dealing with NUMA-related issues and the timing of
reclaim.  The storage was an SSD Samsung Evo and a fresh trimmed XFS
filesystem was used for the test data.

This is not an exact replication of Dave's setup.  The configuration
scales its parameters depending on the memory size of the SUT to behave
similarly across machines.  The parameters mean the first sample
reported by fs_mark is using 50% of RAM which will barely be throttled
and look like a big outlier.  Dave used fake NUMA to have multiple
kswapd instances which I didn't replicate.  Finally, the number of
iterations differ from Dave's test as the target disk was not large
enough.  While not identical, it should be representative.

  fsmark
                                     5.3.0-rc3              5.3.0-rc3
                                       vanilla          shrinker-v1r1
  Min       1-files/sec     4444.80 (   0.00%)     4765.60 (   7.22%)
  1st-qrtle 1-files/sec     5005.10 (   0.00%)     5091.70 (   1.73%)
  2nd-qrtle 1-files/sec     4917.80 (   0.00%)     4855.60 (  -1.26%)
  3rd-qrtle 1-files/sec     4667.40 (   0.00%)     4831.20 (   3.51%)
  Max-1     1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
  Max-5     1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
  Max-10    1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
  Max-90    1-files/sec     4649.60 (   0.00%)     4780.70 (   2.82%)
  Max-95    1-files/sec     4491.00 (   0.00%)     4768.20 (   6.17%)
  Max-99    1-files/sec     4491.00 (   0.00%)     4768.20 (   6.17%)
  Max       1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
  Hmean     1-files/sec     5004.75 (   0.00%)     5075.96 (   1.42%)
  Stddev    1-files/sec     1778.70 (   0.00%)     1369.66 (  23.00%)
  CoeffVar  1-files/sec       33.70 (   0.00%)       26.05 (  22.71%)
  BHmean-99 1-files/sec     5053.72 (   0.00%)     5101.52 (   0.95%)
  BHmean-95 1-files/sec     5053.72 (   0.00%)     5101.52 (   0.95%)
  BHmean-90 1-files/sec     5107.05 (   0.00%)     5131.41 (   0.48%)
  BHmean-75 1-files/sec     5208.45 (   0.00%)     5206.68 (  -0.03%)
  BHmean-50 1-files/sec     5405.53 (   0.00%)     5381.62 (  -0.44%)
  BHmean-25 1-files/sec     6179.75 (   0.00%)     6095.14 (  -1.37%)

                     5.3.0-rc3      5.3.0-rc3
                       vanilla  shrinker-v1r1
  Duration User         501.82      497.29
  Duration System      4401.44     4424.08
  Duration Elapsed     8124.76     8358.05

This shows a slight skew for the max result, representing a large
outlier; the 1st, 2nd and 3rd quartiles are similar, indicating that
the bulk of the results show little difference.  Note that an earlier
version of the fsmark configuration showed a regression, but that
included more samples taken while memory was still filling.

Note that the elapsed time is higher.  Part of this is that the
configuration included time to delete all the test files when the test
completes -- the test automation handles the possibility of testing
fsmark with multiple thread counts.  Without the patch, many of these
objects would be memory resident which is part of what the patch is
addressing.

There are other important observations that justify the patch.

1. With the vanilla kernel, the number of dirty pages in the system is
   very low for much of the test. With this patch, dirty pages is
   generally kept at 10% which matches vm.dirty_background_ratio which
   is normal expected historical behaviour.

2. With the vanilla kernel, the ratio of Slab/Pagecache is close to
   0.95 for much of the test i.e. Slab is being left alone and
   dominating memory consumption. With the patch applied, the ratio
   varies between 0.35 and 0.45 with the bulk of the measured ratios
   roughly half way between those values. This is a different balance to
   what Dave reported but it was at least consistent.

3. Slabs are scanned throughout the entire test with the patch applied.
   The vanilla kernel has periods with no scan activity and then
   relatively massive spikes.

4. Without the patch, kswapd scan rates are very variable. With the
   patch, the scan rates remain quite steady.

5. Overall vmstats are closer to normal expectations

	                                5.3.0-rc3      5.3.0-rc3
	                                  vanilla  shrinker-v1r1
    Ops Direct pages scanned             99388.00      328410.00
    Ops Kswapd pages scanned          45382917.00    33451026.00
    Ops Kswapd pages reclaimed        30869570.00    25239655.00
    Ops Direct pages reclaimed           74131.00        5830.00
    Ops Kswapd efficiency %                 68.02          75.45
    Ops Kswapd velocity                   5585.75        4002.25
    Ops Page reclaim immediate         1179721.00      430927.00
    Ops Slabs scanned                 62367361.00    73581394.00
    Ops Direct inode steals               2103.00        1002.00
    Ops Kswapd inode steals             570180.00     5183206.00

	o Vanilla kernel is hitting direct reclaim more frequently,
	  not very much in absolute terms but the fact the patch
	  reduces it is interesting
	o "Page reclaim immediate" in the vanilla kernel indicates
	  dirty pages are being encountered at the tail of the LRU.
	  This is generally bad and means in this case that the LRU
	  is not long enough for dirty pages to be cleaned by the
	  background flush in time. This is much reduced by the
	  patch.
	o With the patch, kswapd is reclaiming 10 times more slab
	  pages than with the vanilla kernel. This is indicative
	  of the watermark boosting over-protecting slab

A more complete set of tests were run that were part of the basis for
introducing boosting and while there are some differences, they are well
within tolerances.

Bottom line: special-casing kswapd to avoid reclaiming slab leads to
unpredictable behaviour and can produce abnormal results for normal
workloads.

This patch restores the expected behaviour that slab and page cache
are balanced consistently for a workload with a steady allocation
ratio of slab to pagecache pages.  It also means that workloads which
favour the preservation of slab over pagecache can be tuned via
vm.vfs_cache_pressure, whereas the vanilla kernel effectively ignores
the parameter when boosting is active.

Link: http://lkml.kernel.org/r/20190808182946.GM2739@techsingularity.net
Fixes: 1c30844d2d ("mm: reclaim small amounts of memory when an external fragmentation event occurs")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>	[5.0+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13 16:06:53 -07:00
Yang Shi
fa1e512fac mm: vmscan: check if mem cgroup is disabled or not before calling memcg slab shrinker
Shakeel Butt reported a premature oom on a kernel with
"cgroup_disable=memory", since mem_cgroup_is_root() returns false even
though memcg is actually NULL.  drop_caches is also broken.

This is because commit aeed1d325d ("mm/vmscan.c: generalize
shrink_slab() calls in shrink_node()") removed the !memcg check before
!mem_cgroup_is_root().  And, surprisingly, the root memcg is allocated
even when the memory cgroup is disabled by a kernel boot parameter.

Add a mem_cgroup_disabled() check to make the reclaimer work as
expected.
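
A hedged fragment of the shape of the fix in shrink_slab()
(illustrative, from memory of the patch):

if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
	return shrink_slab_memcg(gfp_mask, nid, memcg, priority);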

Link: http://lkml.kernel.org/r/1563385526-20805-1-git-send-email-yang.shi@linux.alibaba.com
Fixes: aeed1d325d ("mm/vmscan.c: generalize shrink_slab() calls in shrink_node()")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Reported-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Jan Hadrava <had@kam.mff.cuni.cz>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: <stable@vger.kernel.org>	[4.19+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-03 07:02:00 -07:00
Andrew Morton
1732d2b011 mm/vmscan.c: add checks for incorrect handling of current->reclaim_state
Six sites are presently altering current->reclaim_state.  There is a
risk that one function stomps on a caller's value.  Use a helper
function to catch such errors.
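
A hedged kernel-side sketch of such a helper (close to what the patch
adds, from memory):

static void set_task_reclaim_state(struct task_struct *task,
				   struct reclaim_state *rs)
{
	/* Check for an overwrite */
	WARN_ON_ONCE(rs && task->reclaim_state);
	/* Check for the nulling of an already-nulled member */
	WARN_ON_ONCE(!rs && !task->reclaim_state);
	task->reclaim_state = rs;
}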

Cc: Yafang Shao <laoar.shao@gmail.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-16 19:23:21 -07:00
Yafang Shao
0308f7cf19 mm/vmscan.c: calculate reclaimed slab caches in all reclaim paths
There are six different reclaim paths by now:

 - kswapd reclaim path
 - node reclaim path
 - hibernate preallocate memory reclaim path
 - direct reclaim path
 - memcg reclaim path
 - memcg softlimit reclaim path

The slab caches reclaimed in these paths are only accounted in the
first three of those paths.

There are some drawbacks if we don't account the reclaimed slab caches.

 - The sc->nr_reclaimed isn't correct if some slab caches are
   reclaimed in the path.

 - The slab caches may be reclaimed too thoroughly if there are lots
   of reclaimable slab caches and few page caches.

   Let's take an easy example of this case. Suppose one memcg is full
   of slab caches and its limit is 512M; in other words, there are
   approximately 512M of slab caches in this memcg. Then the limit of
   the memcg is reached, memcg reclaim begins, and in this memcg
   reclaim path it will continuously reclaim the slab caches until
   sc->priority drops to 0. After this reclaim stops, you will find
   few slab caches left, less than 20M in my test case. With this
   patch applied, the number is greater than 300M and sc->priority
   only drops to 3.
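
A hedged fragment showing the accounting each reclaim path needs
(illustrative; reclaimed_slab is the existing reclaim_state counter
that shrinkers bump):

if (reclaim_state) {
	/* fold slab pages freed by shrinkers into this path's total */
	sc->nr_reclaimed += reclaim_state->reclaimed_slab;
	reclaim_state->reclaimed_slab = 0;
}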

Link: http://lkml.kernel.org/r/1561112086-6169-3-git-send-email-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-16 19:23:21 -07:00
Yafang Shao
e5ca8071fe mm/vmscan.c: add a new member reclaim_state in struct shrink_control
Patch series "mm/vmscan: calculate reclaimed slab in all reclaim paths".

This patchset is to fix the issues in doing shrink slab.

There're six different reclaim paths by now,
 - kswapd reclaim path
 - node reclaim path
 - hibernate preallocate memory reclaim path
 - direct reclaim path
 - memcg reclaim path
 - memcg softlimit reclaim path

The slab caches reclaimed in these paths are only accounted in the
first three of those paths.  The issues are explained in detail in
patch #2.  We should account the reclaimed slab caches in every
reclaim path.  In order to do so, the struct reclaim_state is placed
into the struct shrink_control.

In the node reclaim path, there is another issue about shrinking slab,
which is addressed in "mm/vmscan: shrink slab in node reclaim"
(https://lore.kernel.org/linux-mm/1559874946-22960-1-git-send-email-laoar.shao@gmail.com/).

This patch (of 2):

The struct reclaim_state is used to record how many slab caches are
reclaimed in one reclaim path.  The struct shrink_control is used to
control one reclaim path.  So we'd better put reclaim_state into
shrink_control.

[laoar.shao@gmail.com: remove reclaim_state assignment from __perform_reclaim()]
Link: http://lkml.kernel.org/r/1561381582-13697-1-git-send-email-laoar.shao@gmail.com
Link: http://lkml.kernel.org/r/1561112086-6169-2-git-send-email-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-16 19:23:21 -07:00
Yang Shi
98879b3b9e mm: vmscan: correct some vmscan counters for THP swapout
Since commit bd4c82c22c ("mm, THP, swap: delay splitting THP after
swapped out"), a THP can be swapped out as a whole.  But nr_reclaimed
and some other vm counters still get incremented by just one even
though a whole THP (512 pages) gets swapped out.

This doesn't make much sense for memory reclaim.

For example, direct reclaim may just need to reclaim SWAP_CLUSTER_MAX
pages, and reclaiming one THP could fulfill that.  But if nr_reclaimed
is not increased correctly, direct reclaim may just waste time
reclaiming more pages, SWAP_CLUSTER_MAX * 512 pages in the worst case.

And it may cause pgsteal_{kswapd|direct} to be greater than
pgscan_{kswapd|direct}, like below:

pgsteal_kswapd 122933
pgsteal_direct 26600225
pgscan_kswapd 174153
pgscan_direct 14678312

nr_reclaimed and nr_scanned must be fixed in parallel otherwise it would
break some page reclaim logic, e.g.

vmpressure: this looks at the scanned/reclaimed ratio so it won't change
semantics as long as scanned & reclaimed are fixed in parallel.

compaction/reclaim: compaction wants a certain number of physical pages
freed up before going back to compacting.

kswapd priority raising: kswapd raises priority if we scan fewer pages
than the reclaim target (which itself is obviously expressed in order-0
pages).  As a result, kswapd can falsely raise its aggressiveness even
when it's making great progress.

Other than nr_scanned and nr_reclaimed, some other counters, e.g.
pgactivate, nr_skipped, nr_ref_keep and nr_unmap_fail need to be fixed too
since they are user visible via cgroup, /proc/vmstat or trace points,
otherwise they would be underreported.

When isolating pages from LRUs, nr_taken is already accounted in base
pages, but nr_scanned and nr_skipped are still accounted in THPs.
That doesn't make much sense either, since it may cause trace points
to underreport the numbers as well.

So account those counters in base pages instead of accounting a THP
as one page.

nr_dirty, nr_unqueued_dirty, nr_congested and nr_writeback are used by
file cache, so they are not impacted by THP swap.

This change may result in a lower steal/scan ratio in some cases, since
a THP may get split during page reclaim and then only some of the tail
pages get reclaimed instead of the whole 512 pages, while nr_scanned is
still accounted as 512, particularly for direct reclaim.  But this
should not be a significant issue.
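
A hedged fragment of the counter fix (illustrative; the helper
page_swapped_out() here is hypothetical shorthand for the success path
in shrink_page_list()):

unsigned int nr_pages = 1 << compound_order(page);	/* 512 for a THP */

if (page_swapped_out(page))		/* hypothetical success check */
	nr_reclaimed += nr_pages;	/* was: nr_reclaimed++ */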

Link: http://lkml.kernel.org/r/1559025859-72759-2-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-12 11:05:46 -07:00
Yang Shi
af5d440365 mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned
Commit 9092c71bb7 ("mm: use sc->priority for slab shrink targets") broke
the relationship between sc->nr_scanned and slab pressure:
sc->nr_scanned can no longer double slab pressure, so it makes no sense
to keep incrementing it.  Actually, it can reduce pressure on slab
shrinking, since an excessive sc->nr_scanned prevents sc->priority from
being raised.
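
The kind of hunk this removes from shrink_page_list(), roughly (a sketch
of the old code, not the verbatim diff):

	/* Double the slab pressure for mapped and swapcache pages */
	if ((page_mapped(page) || PageSwapCache(page)) &&
	    !(PageAnon(page) && !PageSwapBacked(page)))
		sc->nr_scanned++;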

The bonnie test doesn't show that this changes the behavior of the slab
shrinkers.

				w/		w/o
			  /sec    %CP      /sec      %CP
Sequential delete: 	3960.6    94.6    3997.6     96.2
Random delete: 		2518      63.8    2561.6     64.6

The slight increase in "/sec" without the patch is likely caused by the
slight increase in CPU usage.

Link: http://lkml.kernel.org/r/1559025859-72759-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-12 11:05:46 -07:00
Kuo-Hsin Yang
2c012a4ad1 mm: vmscan: scan anonymous pages on file refaults
When file refaults are detected and there are many inactive file pages,
the system never reclaims anonymous pages; file pages are dropped
aggressively even when there are still a lot of cold anonymous pages,
and the system thrashes.  This issue impacts the performance of
applications with large executables, e.g. chrome.

With this patch, when file refault is detected, inactive_list_is_low()
always returns true for file pages in get_scan_count() to enable
scanning anonymous pages.
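
The gist in inactive_list_is_low(), as a sketch (the exact refault
snapshotting and the accessor choice are approximations):

	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
	if (file && lruvec->refaults != refaults) {
		/* refaults seen since the last snapshot: treat the file
		 * inactive list as low so anon gets scanned too */
		inactive_ratio = 0;
	}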

The problem can be reproduced by the following test program.

---8<---
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>

void fallocate_file(const char *filename, off_t size)
{
	struct stat st;
	int fd;

	if (!stat(filename, &st) && st.st_size >= size)
		return;

	fd = open(filename, O_WRONLY | O_CREAT, 0600);
	if (fd < 0) {
		perror("create file");
		exit(1);
	}
	if (posix_fallocate(fd, 0, size)) {
		perror("fallocate");
		exit(1);
	}
	close(fd);
}

long *alloc_anon(long size)
{
	long *start = malloc(size);
	memset(start, 1, size);
	return start;
}

long access_file(const char *filename, long size, long rounds)
{
	int fd, i;
	volatile char *start1, *end1, *start2;
	const int page_size = getpagesize();
	long sum = 0;

	fd = open(filename, O_RDONLY);
	if (fd == -1) {
		perror("open");
		exit(1);
	}

	/*
	 * Some applications, e.g. chrome, use a lot of executable file
	 * pages, map some of the pages with PROT_EXEC flag to simulate
	 * the behavior.
	 */
	start1 = mmap(NULL, size / 2, PROT_READ | PROT_EXEC, MAP_SHARED,
		      fd, 0);
	if (start1 == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	end1 = start1 + size / 2;

	start2 = mmap(NULL, size / 2, PROT_READ, MAP_SHARED, fd, size / 2);
	if (start2 == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	for (i = 0; i < rounds; ++i) {
		struct timeval before, after;
		volatile char *ptr1 = start1, *ptr2 = start2;
		gettimeofday(&before, NULL);
		for (; ptr1 < end1; ptr1 += page_size, ptr2 += page_size)
			sum += *ptr1 + *ptr2;
		gettimeofday(&after, NULL);
		printf("File access time, round %d: %f (sec)
", i,
		       (after.tv_sec - before.tv_sec) +
		       (after.tv_usec - before.tv_usec) / 1000000.0);
	}
	return sum;
}

int main(int argc, char *argv[])
{
	const long MB = 1024 * 1024;
	long anon_mb, file_mb, file_rounds;
	const char filename[] = "large";
	long *ret1;
	long ret2;

	if (argc != 4) {
		printf("usage: thrash ANON_MB FILE_MB FILE_ROUNDS
");
		exit(0);
	}
	anon_mb = atoi(argv[1]);
	file_mb = atoi(argv[2]);
	file_rounds = atoi(argv[3]);

	fallocate_file(filename, file_mb * MB);
	printf("Allocate %ld MB anonymous pages
", anon_mb);
	ret1 = alloc_anon(anon_mb * MB);
	printf("Access %ld MB file pages
", file_mb);
	ret2 = access_file(filename, file_mb * MB, file_rounds);
	printf("Print result to prevent optimization: %ld
",
	       *ret1 + ret2);
	return 0;
}
---8<---

Running the test program on a 2 GB RAM VM with kernel 5.2.0-rc5, the
program fills RAM with 2048 MB of anonymous memory and accesses a 200 MB
file 10 times.  Without this patch, the file cache is dropped
aggressively and every access to the file is served from disk.

  $ ./thrash 2048 200 10
  Allocate 2048 MB anonymous pages
  Access 200 MB file pages
  File access time, round 0: 2.489316 (sec)
  File access time, round 1: 2.581277 (sec)
  File access time, round 2: 2.487624 (sec)
  File access time, round 3: 2.449100 (sec)
  File access time, round 4: 2.420423 (sec)
  File access time, round 5: 2.343411 (sec)
  File access time, round 6: 2.454833 (sec)
  File access time, round 7: 2.483398 (sec)
  File access time, round 8: 2.572701 (sec)
  File access time, round 9: 2.493014 (sec)

With this patch, these file pages can be cached.

  $ ./thrash 2048 200 10
  Allocate 2048 MB anonymous pages
  Access 200 MB file pages
  File access time, round 0: 2.475189 (sec)
  File access time, round 1: 2.440777 (sec)
  File access time, round 2: 2.411671 (sec)
  File access time, round 3: 1.955267 (sec)
  File access time, round 4: 0.029924 (sec)
  File access time, round 5: 0.000808 (sec)
  File access time, round 6: 0.000771 (sec)
  File access time, round 7: 0.000746 (sec)
  File access time, round 8: 0.000738 (sec)
  File access time, round 9: 0.000747 (sec)

Checking the swap-out stats during the test [1], 19006 pages were swapped
out with this patch and 3418 pages without it.  There is more swap-out,
but I think it's within a reasonable range when the file-backed data set
doesn't fit into memory.

$ ./thrash 2000 100 2100 5 1 # ANON_MB FILE_EXEC FILE_NOEXEC ROUNDS PROCESSES
Allocate 2000 MB anonymous pages
active_anon: 1613644, inactive_anon: 348656, active_file: 892, inactive_file: 1384 (kB)
pswpout: 7972443, pgpgin: 478615246
Access 100 MB executable file pages
Access 2100 MB regular file pages
File access time, round 0: 12.165 (sec)
active_anon: 1433788, inactive_anon: 478116, active_file: 17896, inactive_file: 24328 (kB)
File access time, round 1: 11.493 (sec)
active_anon: 1430576, inactive_anon: 477144, active_file: 25440, inactive_file: 26172 (kB)
File access time, round 2: 11.455 (sec)
active_anon: 1427436, inactive_anon: 476060, active_file: 21112, inactive_file: 28808 (kB)
File access time, round 3: 11.454 (sec)
active_anon: 1420444, inactive_anon: 473632, active_file: 23216, inactive_file: 35036 (kB)
File access time, round 4: 11.479 (sec)
active_anon: 1413964, inactive_anon: 471460, active_file: 31728, inactive_file: 32224 (kB)
pswpout: 7991449 (+ 19006), pgpgin: 489924366 (+ 11309120)

With 4 processes accessing non-overlapping parts of a large file, 30316
pages were swapped out with this patch and 5152 pages without it.  The
swap-out number is small compared to pgpgin.

[1]: https://github.com/vovo/testing/blob/master/mem_thrash.c

Link: http://lkml.kernel.org/r/20190701081038.GA83398@google.com
Fixes: e986850598 ("mm,vmscan: only evict file pages when we have plenty")
Fixes: 7c5bd705d8 ("mm: memcg: only evict file pages when we have plenty")
Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Sonny Rao <sonnyrao@chromium.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>	[4.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-12 11:05:39 -07:00
Shakeel Butt
dffcac2cb8 mm/vmscan.c: prevent useless kswapd loops
In production we have noticed hard lockups on large machines running
large jobs, caused by kswapd hoarding the LRU lock within
isolate_lru_pages() when sc->reclaim_idx is 0, which is a small zone.
The LRU was a couple hundred GiB, and the condition
(page_zonenum(page) > sc->reclaim_idx) in isolate_lru_pages() was
basically skipping GiBs of pages while holding the LRU spinlock with
interrupts disabled.

On further inspection, it seems like there are two issues:

(1) If kswapd on the return from balance_pgdat() could not sleep (i.e.
    node is still unbalanced), the classzone_idx is unintentionally set
    to 0 and the whole reclaim cycle of kswapd will try to reclaim only
    the lowest and smallest zone while traversing the whole memory.

(2) Fundamentally isolate_lru_pages() is really bad when the
    allocation has woken kswapd for a smaller zone on a very large machine
    running very large jobs.  It can hoard the LRU spinlock while skipping
    over 100s of GiBs of pages.

This patch only fixes (1); (2) needs a more fundamental solution.  To
fix (1), in the kswapd context, if pgdat->kswapd_classzone_idx is
invalid, use the classzone_idx of the previous kswapd loop; otherwise,
use the one the waker has requested.
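
A sketch of the fix, following the shape described above (exact helper
name and types assumed):

static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
					   enum zone_type prev_classzone_idx)
{
	/* invalid means no waker has requested a zone index */
	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
		return prev_classzone_idx;
	return pgdat->kswapd_classzone_idx;
}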

Link: http://lkml.kernel.org/r/20190701201847.251028-1-shakeelb@google.com
Fixes: e716f2eb24 ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-05 11:12:07 +09:00
Minchan Kim
a58f2cef26 mm/vmscan.c: fix trying to reclaim unevictable LRU page
There was the below bug report from Wu Fangsuo.

On the CMA allocation path, isolate_migratepages_range() could isolate
unevictable LRU pages and reclaim_clean_pages_from_list() can try to
reclaim them if they are clean file-backed pages.

  page:ffffffbf02f33b40 count:86 mapcount:84 mapping:ffffffc08fa7a810 index:0x24
  flags: 0x19040c(referenced|uptodate|arch_1|mappedtodisk|unevictable|mlocked)
  raw: 000000000019040c ffffffc08fa7a810 0000000000000024 0000005600000053
  raw: ffffffc009b05b20 ffffffc009b05b20 0000000000000000 ffffffc09bf3ee80
  page dumped because: VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page))
  page->mem_cgroup:ffffffc09bf3ee80
  ------------[ cut here ]------------
  kernel BUG at /home/build/farmland/adroid9.0/kernel/linux/mm/vmscan.c:1350!
  Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  Modules linked in:
  CPU: 0 PID: 7125 Comm: syz-executor Tainted: G S              4.14.81 #3
  Hardware name: ASR AQUILAC EVB (DT)
  task: ffffffc00a54cd00 task.stack: ffffffc009b00000
  PC is at shrink_page_list+0x1998/0x3240
  LR is at shrink_page_list+0x1998/0x3240
  pc : [<ffffff90083a2158>] lr : [<ffffff90083a2158>] pstate: 60400045
  sp : ffffffc009b05940
  ..
     shrink_page_list+0x1998/0x3240
     reclaim_clean_pages_from_list+0x3c0/0x4f0
     alloc_contig_range+0x3bc/0x650
     cma_alloc+0x214/0x668
     ion_cma_allocate+0x98/0x1d8
     ion_alloc+0x200/0x7e0
     ion_ioctl+0x18c/0x378
     do_vfs_ioctl+0x17c/0x1780
     SyS_ioctl+0xac/0xc0

Wu found it's due to commit ad6b67041a ("mm: remove SWAP_MLOCK in
ttu").  Before that commit, unevictable pages went to cull_mlocked, so
we could not reach the VM_BUG_ON_PAGE line.

To fix the issue, this patch filters out unevictable LRU pages in
reclaim_clean_pages_from_list() on the CMA path.
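
The filter, roughly, in reclaim_clean_pages_from_list(); the
!PageUnevictable() check is the new condition (a sketch, not the
verbatim diff):

	list_for_each_entry_safe(page, next, page_list, lru) {
		if (page_is_file_cache(page) && !PageDirty(page) &&
		    !__PageMovable(page) && !PageUnevictable(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}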

Link: http://lkml.kernel.org/r/20190524071114.74202-1-minchan@kernel.org
Fixes: ad6b67041a ("mm: remove SWAP_MLOCK in ttu")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Debugged-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Tested-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Pankaj Suryawanshi <pankaj.suryawanshi@einfochips.com>
Cc: <stable@vger.kernel.org>	[4.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-06-13 17:34:56 -10:00
Kirill Tkhai
b17f18aff2 mm/vmscan.c: fix recent_rotated history
Johannes pointed out that after commit 886cf1901d ("mm: move
recent_rotated pages calculation to shrink_inactive_list()") we lost all
zone_reclaim_stat::recent_rotated history.

This fixes it.

Link: http://lkml.kernel.org/r/155905972210.26456.11178359431724024112.stgit@localhost.localdomain
Fixes: 886cf1901d ("mm: move recent_rotated pages calculation to shrink_inactive_list()")
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reported-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-06-13 17:34:56 -10:00
Johannes Weiner
205b20cc5a mm: memcontrol: make cgroup stats and events query API explicitly local
Patch series "mm: memcontrol: memory.stat cost & correctness".

The cgroup memory.stat file holds recursive statistics for the entire
subtree.  The current implementation does this tree walk on-demand
whenever the file is read.  This is giving us problems in production.

1. The cost of aggregating the statistics on-demand is high.  A lot of
   system service cgroups are mostly idle and their stats don't change
   between reads, yet we always have to check them.  There are also always
   some lazily-dying cgroups sitting around that are pinned by a handful
   of remaining page cache; the same applies to them.

   In an application that periodically monitors memory.stat in our
   fleet, we have seen the aggregation consume up to 5% CPU time.

2. When cgroups die and disappear from the cgroup tree, so do their
   accumulated vm events.  The result is that the event counters at
   higher-level cgroups can go backwards and confuse some of our
   automation, let alone people looking at the graphs over time.

To address both issues, this patch series changes the stat
implementation to spill counts upwards when the counters change.

The upward spilling is batched using the existing per-cpu cache.  In a
sparse file stress test with 5 level cgroup nesting, the additional cost
of the flushing was negligible (a little under 1% of CPU at 100% CPU
utilization, compared to the 5% of reading memory.stat during regular
operation).
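
The shape of the upward spilling, as a hedged sketch (names follow the
memcg code of this era, but this is not the verbatim series):

void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
{
	long x;

	if (mem_cgroup_disabled())
		return;

	/* batch in the per-cpu counter first */
	x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
		struct mem_cgroup *mi;

		/* spill upwards through the ancestors on overflow */
		for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
			atomic_long_add(x, &mi->vmstats[idx]);
		x = 0;
	}
	__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
}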

This patch (of 4):

memcg_page_state(), lruvec_page_state(), memcg_sum_events() are
currently returning the state of the local memcg or lruvec, not the
recursive state.

In practice there is a demand for both versions, although the callers
that want the recursive counts currently sum them up by hand.

By default, cgroups are considered recursive entities, and generally we
expect more users of the recursive counters, with the local counts being
special cases.  To reflect that in the names, add a _local suffix to the
current implementations.
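
Call sites that want the cgroup's own, non-recursive numbers switch over
accordingly, e.g. (illustrative only):

	/* before: val = memcg_page_state(memcg, MEMCG_CACHE); */
	val = memcg_page_state_local(memcg, MEMCG_CACHE);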

The following patch will re-incarnate these functions with recursive
semantics, but with an O(1) implementation.

[hannes@cmpxchg.org: fix bisection hole]
  Link: http://lkml.kernel.org/r/20190417160347.GC23013@cmpxchg.org
Link: http://lkml.kernel.org/r/20190412151507.2769-2-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 19:52:53 -07:00
Yafang Shao
2fa2690ca6 mm/vmscan.c: don't disable irq again when count pgrefill for memcg
We can use __count_memcg_events() directly because this callsite is
already protected by spin_lock_irq().
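
The change, roughly (the callsite holds pgdat->lru_lock via
spin_lock_irq(), so interrupts are already off):

	spin_lock_irq(&pgdat->lru_lock);
	/* ... */
	/* was: count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned); */
	__count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned);
	/* ... */
	spin_unlock_irq(&pgdat->lru_lock);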

Link: http://lkml.kernel.org/r/1556093494-30798-1-git-send-email-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:51 -07:00
Kirill Tkhai
f46b79120e mm/vmscan.c: simplify shrink_inactive_list()
This merges together duplicated patterns of code.  Also, replace
count_memcg_events() with its irq-careless namesake, because these
callsites already run with interrupts disabled.

Link: http://lkml.kernel.org/r/2ece1df4-2989-bc9b-6172-61e9fdde5bfd@virtuozzo.com
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:50 -07:00
Yafang Shao
3481c37ffa mm/vmscan: drop may_writepage and classzone_idx from direct reclaim begin template
There are three tracepoints using this template, which are
mm_vmscan_direct_reclaim_begin,
mm_vmscan_memcg_reclaim_begin,
mm_vmscan_memcg_softlimit_reclaim_begin.

Regarding mm_vmscan_direct_reclaim_begin, sc.may_writepage is
!laptop_mode, which is a static setting, and reclaim_idx is derived from
gfp_mask, which is already shown in this tracepoint.

Regarding mm_vmscan_memcg_reclaim_begin, may_writepage is !laptop_mode
too, and reclaim_idx is (MAX_NR_ZONES-1); both are static values.

mm_vmscan_memcg_softlimit_reclaim_begin is the same as
mm_vmscan_memcg_reclaim_begin.

So we can drop them all.
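
The begin-tracepoint call sites shrink accordingly (a sketch):

	/* was: trace_mm_vmscan_direct_reclaim_begin(order, sc.may_writepage,
	 *					     sc.gfp_mask, sc.reclaim_idx); */
	trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);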

Link: http://lkml.kernel.org/r/1553736322-32235-1-git-send-email-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:48 -07:00
Johannes Weiner
1a61ab8038 mm: memcontrol: replace zone summing with lruvec_page_state()
Instead of adding up the zone counters, use lruvec_page_state() to get
the node state directly.  This is a bit cheaper and more streamlined.
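
The shape of the change, as a sketch (wherever the zone counters were
being summed):

	/* before:
	 * for (zid = 0; zid < MAX_NR_ZONES; zid++)
	 *	size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
	 */
	size = lruvec_page_state(lruvec, NR_LRU_BASE + lru);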

Link: http://lkml.kernel.org/r/20190228163020.24100-3-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:46 -07:00
Yafang Shao
132bb8cfc9 mm/vmscan: add tracepoints for node reclaim
The page alloc fast path may perform node reclaim and thereby cause a
latency spike.  We should add tracepoints for this event and measure the
latency it causes.  So the below two tracepoints are introduced:
	mm_vmscan_node_reclaim_begin
	mm_vmscan_node_reclaim_end
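
Roughly how they bracket the node reclaim work (a sketch):

	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order, gfp_mask);

	/* ... perform node reclaim ... */

	trace_mm_vmscan_node_reclaim_end(sc.nr_reclaimed);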

Link: http://lkml.kernel.org/r/1551421452-5385-1-git-send-email-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: <shaoyafang@didiglobal.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:46 -07:00
Kirill Tkhai
a222f34158 mm: generalize putback scan functions
This combines the two similar functions move_active_pages_to_lru() and
putback_inactive_pages() into a single move_pages_to_lru().  This
removes duplicate code and makes the object file smaller.

Before:
   text	   data	    bss	    dec	    hex	filename
  57082	   4732	    128	  61942	   f1f6	mm/vmscan.o
After:
   text	   data	    bss	    dec	    hex	filename
  55112	   4600	    128	  59840	   e9c0	mm/vmscan.o

Note that we now check !page_evictable() for pages coming from
shrink_active_list() as well, which shouldn't change any behavior since
that path works with evictable pages only.

Link: http://lkml.kernel.org/r/155290129627.31489.8321971028677203248.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:45 -07:00
Kirill Tkhai
f372d89e5d mm: remove pages_to_free argument of move_active_pages_to_lru()
We may use the input argument list as the output argument too.  This
makes the function more similar to putback_inactive_pages().

Link: http://lkml.kernel.org/r/155290129079.31489.16180612694090502942.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:45 -07:00
Kirill Tkhai
9851ac1359 mm: move nr_deactivate accounting to shrink_active_list()
We know which LRU is not active.

[chris@chrisdown.name: fix build on !CONFIG_MEMCG]
  Link: http://lkml.kernel.org/r/20190322150513.GA22021@chrisdown.name
Link: http://lkml.kernel.org/r/155290128498.31489.18250485448913338607.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:45 -07:00
Kirill Tkhai
886cf1901d mm: move recent_rotated pages calculation to shrink_inactive_list()
Patch series "mm: Generalize putback functions".

putback_inactive_pages() and move_active_pages_to_lru() are very
similar, so this patchset merges them into a single function.

This patch (of 4):

The patch moves the calculation from putback_inactive_pages() to
shrink_inactive_list().  This makes putback_inactive_pages() look more
similar to move_active_pages_to_lru().

To do that, we account activated pages in reclaim_stat::nr_activate.
Since a page may change its LRU type from anon to file cache inside
shrink_page_list() (see ClearPageSwapBacked()), we have to account pages
for both types.  So nr_activate becomes an array.

Previously we used nr_activate to account PGACTIVATE events, but now we
account them in the pgactivate variable (since they count pages in
general, not the sum of hpage_nr_pages()).
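
The resulting stat structure, approximately (layout per this
description, not guaranteed verbatim):

struct reclaim_stat {
	unsigned nr_dirty;
	unsigned nr_unqueued_dirty;
	unsigned nr_congested;
	unsigned nr_writeback;
	unsigned nr_immediate;
	unsigned nr_activate[2];	/* one slot per LRU type: anon, file */
	unsigned nr_ref_keep;
	unsigned nr_unmap_fail;
};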

Link: http://lkml.kernel.org/r/155290127956.31489.3393586616054413298.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 09:47:45 -07:00
Linus Torvalds
0968621917 Merge tag 'printk-for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk
Pull printk updates from Petr Mladek:

 - Allow state reset of printk_once() calls.

 - Prevent crashes when dereferencing invalid pointers in vsprintf().
   Only the first byte is checked for simplicity.

 - Make vsprintf warnings consistent and inlined.

 - Treewide conversion of obsolete %pf, %pF to %ps, %pF printf
   modifiers.

 - Some clean up of vsprintf and test_printf code.

* tag 'printk-for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk:
  lib/vsprintf: Make function pointer_string static
  vsprintf: Limit the length of inlined error messages
  vsprintf: Avoid confusion between invalid address and value
  vsprintf: Prevent crash when dereferencing invalid pointers
  vsprintf: Consolidate handling of unknown pointer specifiers
  vsprintf: Factor out %pO handler as kobject_string()
  vsprintf: Factor out %pV handler as va_format()
  vsprintf: Factor out %p[iI] handler as ip_addr_string()
  vsprintf: Do not check address of well-known strings
  vsprintf: Consistent %pK handling for kptr_restrict == 0
  vsprintf: Shuffle restricted_pointer()
  printk: Tie printk_once / printk_deferred_once into .data.once for reset
  treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively
  lib/test_printf: Switch to bitmap_zalloc()
2019-05-07 09:18:12 -07:00
Johannes Weiner
3b991208b8 mm: fix inactive list balancing between NUMA nodes and cgroups
During !CONFIG_CGROUP reclaim, we expand the inactive list size if it's
thrashing on the node that is about to be reclaimed.  But when cgroups
are enabled, we suddenly ignore the node scope and use the cgroup scope
only.  The result is that pressure bleeds between NUMA nodes depending
on whether cgroups are merely compiled into Linux.  This behavioral
difference is unexpected and undesirable.

When the refault adaptivity of the inactive list was first introduced,
there were no statistics at the lruvec level - the intersection of node
and memcg - so it was better than nothing.

But now that we have that infrastructure, use lruvec_page_state() to
make the list balancing decision always NUMA aware.
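
The balancing decision then reads the refault counter at the lruvec
level through a single path, regardless of CONFIG_MEMCG (a sketch):

	/* before:
	 * refaults = memcg ? memcg_page_state(memcg, WORKINGSET_ACTIVATE)
	 *		    : node_page_state(pgdat, WORKINGSET_ACTIVATE);
	 */
	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);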

[hannes@cmpxchg.org: fix bisection hole]
  Link: http://lkml.kernel.org/r/20190417155241.GB23013@cmpxchg.org
Link: http://lkml.kernel.org/r/20190412144438.2645-1-hannes@cmpxchg.org
Fixes: 2a2e48854d ("mm: vmscan: fix IO/refault regression in cache workingset transition")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-04-19 09:46:05 -07:00
Sakari Ailus
d75f773c86 treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively
%pF and %pf are functionally equivalent to %pS and %ps conversion
specifiers. The former are deprecated, therefore switch the current users
to use the preferred variant.

The changes have been produced by the following command:

	git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
	while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done

And verifying the result.

Link: http://lkml.kernel.org/r/20190325193229.23390-1-sakari.ailus@linux.intel.com
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: sparclinux@vger.kernel.org
Cc: linux-um@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-acpi@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: drbd-dev@lists.linbit.com
Cc: linux-block@vger.kernel.org
Cc: linux-mmc@vger.kernel.org
Cc: linux-nvdimm@lists.01.org
Cc: linux-pci@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
Cc: linux-btrfs@vger.kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: linux-mm@kvack.org
Cc: ceph-devel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Acked-by: David Sterba <dsterba@suse.com> (for btrfs)
Acked-by: Mike Rapoport <rppt@linux.ibm.com> (for mm/memblock.c)
Acked-by: Bjorn Helgaas <bhelgaas@google.com> (for drivers/pci)
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
2019-04-09 14:19:06 +02:00
Andrey Ryabinin
f4b7e272b5 mm: remove zone_lru_lock() function, access ->lru_lock directly
We have a common pattern to access lru_lock from a page pointer:
	zone_lru_lock(page_zone(page))

Which is silly, because it unfolds to this:
	&NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]->zone_pgdat->lru_lock
while we can simply do
	&NODE_DATA(page_to_nid(page))->lru_lock

Remove the zone_lru_lock() function, since it only complicates things.
Use the 'page_pgdat(page)->lru_lock' pattern instead.

[aryabinin@virtuozzo.com: a slightly better version of __split_huge_page()]
  Link: http://lkml.kernel.org/r/20190301121651.7741-1-aryabinin@virtuozzo.com
Link: http://lkml.kernel.org/r/20190228083329.31892-2-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:21 -08:00
Andrey Ryabinin
a7ca12f9d9 mm/workingset: remove unused @mapping argument in workingset_eviction()
workingset_eviction() doesn't use and never did use the @mapping
argument.  Remove it.

Link: http://lkml.kernel.org/r/20190228083329.31892-1-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@surriel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:21 -08:00
Alexey Dobriyan
b9726c26dc numa: make "nr_node_ids" unsigned int
Number of NUMA nodes can't be negative.
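
The declaration change itself is tiny (a sketch):

	extern unsigned int nr_node_ids;	/* was: extern int nr_node_ids; */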

This saves a few bytes on x86_64:

	add/remove: 0/0 grow/shrink: 4/21 up/down: 27/-265 (-238)
	Function                                     old     new   delta
	hv_synic_alloc.cold                           88     110     +22
	prealloc_shrinker                            260     262      +2
	bootstrap                                    249     251      +2
	sched_init_numa                             1566    1567      +1
	show_slab_objects                            778     777      -1
	s_show                                      1201    1200      -1
	kmem_cache_init                              346     345      -1
	__alloc_workqueue_key                       1146    1145      -1
	mem_cgroup_css_alloc                        1614    1612      -2
	__do_sys_swapon                             4702    4699      -3
	__list_lru_init                              655     651      -4
	nic_probe                                   2379    2374      -5
	store_user_store                             118     111      -7
	red_zone_store                               106      99      -7
	poison_store                                 106      99      -7
	wq_numa_init                                 348     338     -10
	__kmem_cache_empty                            75      65     -10
	task_numa_free                               186     173     -13
	merge_across_nodes_store                     351     336     -15
	irq_create_affinity_masks                   1261    1246     -15
	do_numa_crng_init                            343     321     -22
	task_numa_fault                             4760    4737     -23
	swapfile_init                                179     156     -23
	hv_synic_alloc                               536     492     -44
	apply_wqattrs_prepare                        746     695     -51

Link: http://lkml.kernel.org/r/20190201223029.GA15820@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:19 -08:00
Kirill Tkhai
060f005f07 mm/vmscan.c: do not allocate duplicate stack variables in shrink_page_list()
On the path shrink_inactive_list() ---> shrink_page_list() we allocate
stack variables for the statistics twice.  This is completely useless
and consumes much more stack than we really need.

The patch kills the duplicate stack variables in shrink_page_list(),
which reduces stack usage and object file size significantly:

Stack usage:
  Before: vmscan.c:1122:22:shrink_page_list	648	static
  After:  vmscan.c:1122:22:shrink_page_list	616	static

Size of vmscan.o:
           text	   data	    bss	    dec	    hex	filename
  Before: 56866	   4720	    128	  61714	   f112	mm/vmscan.o
  After:  56770	   4720	    128	  61618	   f0b2	mm/vmscan.o
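
A sketch of the resulting pattern: the caller keeps the single
reclaim_stat instance and shrink_page_list() fills it through the
pointer instead of declaring its own copies (call shape per the code of
this era, assumed):

	struct reclaim_stat stat = {};

	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, 0,
					&stat, false);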

Link: http://lkml.kernel.org/r/154894900030.5211.12104993874109647641.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:19 -08:00
Yang Shi
2bb0f34fe3 mm: vmscan: do not iterate all mem cgroups for global direct reclaim
In the current implementation, both kswapd and direct reclaim have to
iterate all mem cgroups.  This was not a problem before offline mem
cgroups could be iterated, but now, with offline mem cgroups included in
the walk, it can be very time consuming.  In our workloads, we saw over
400K mem cgroups accumulated in some cases, while only a few hundred
were online memcgs.  Although kswapd can help reduce the number of
memcgs, direct reclaim still gets hit iterating a large number of
offline memcgs in some cases.  We occasionally experienced
responsiveness problems due to this.

A simple test with perf shows it may take around 220ms to iterate 8K
memcgs in direct reclaim:
             dd 13873 [011]   578.542919: vmscan:mm_vmscan_direct_reclaim_begin
             dd 13873 [011]   578.758689: vmscan:mm_vmscan_direct_reclaim_end
So for 400K, it may take around 11 seconds to iterate all memcgs.

Here, just break the iteration once it reclaims enough pages, as memcg
direct reclaim does.  This may hurt fairness among memcgs, but the
cached iterator cookie helps to achieve fairness more or less.
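
The loop exit, as a sketch (the previously memcg-only condition is
relaxed to cover global reclaim too):

	/* was guarded by !global_reclaim(sc) */
	if (sc->nr_reclaimed >= sc->nr_to_reclaim) {
		mem_cgroup_iter_break(root, memcg);
		break;
	}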

Link: http://lkml.kernel.org/r/1548799877-10949-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:19 -08:00
Kirill Tkhai
a9e7c39fa9 mm/vmscan.c: remove 7th argument of isolate_lru_pages()
We may simply check for sc->may_unmap in isolate_lru_pages() instead of
doing that in both of its callers.
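
Inside isolate_lru_pages(), the mode is now derived from sc instead of
being passed in (a sketch):

	isolate_mode_t mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);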

Link: http://lkml.kernel.org/r/154748280735.29962.15867846875217618569.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:18 -08:00
Wei Yang
8bb4e7a2ee mm: fix some typos in mm directory
No functional change.

Link: http://lkml.kernel.org/r/20190118235123.27843-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:18 -08:00
Dave Chinner
a9a238e83f Revert "mm: slowly shrink slabs with a relatively small number of objects"
This reverts commit 172b06c32b ("mm: slowly shrink slabs with a
relatively small number of objects").

It changed the aggressiveness of shrinker reclaim, causing small-cache
and low-priority reclaim to greatly increase scanning pressure on small
caches.  As a result, light memory pressure has a disproportionate
effect on small caches, and causes large caches to be reclaimed much
faster than previously.

As a result, it greatly perturbs the delicate balance of the VFS caches
(dentry/inode vs file page cache) such that the inode/dentry caches are
reclaimed much, much faster than the page cache and this drives us into
several other caching imbalance related problems.

As such, this is a bad change and needs to be reverted.

[ Needs some massaging to retain the later seekless shrinker
  modifications.]

Link: http://lkml.kernel.org/r/20190130041707.27750-3-david@fromorbit.com
Fixes: 172b06c32b ("mm: slowly shrink slabs with a relatively small number of objects")
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Cc: Wolfgang Walter <linux@stwm.de>
Cc: Roman Gushchin <guro@fb.com>
Cc: Spock <dairinin@gmail.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-02-12 16:33:18 -08:00