Merge android12-5.10.11 (ba15277) into msm-5.10

* refs/heads/tmp-ba15277:
  Linux 5.10.11
  Revert "mm: fix initialization of struct page for holes in memory layout"
  mm: fix initialization of struct page for holes in memory layout
  Commit 9bb48c82aced ("tty: implement write_iter") converted the tty layer to use write_iter. Fix the redirected_tty_write declaration also in n_tty and change the comparisons to use write_iter instead of write.
  fs/pipe: allow sendfile() to pipe again
  interconnect: imx8mq: Use icc_sync_state
  kernfs: wire up ->splice_read and ->splice_write
  kernfs: implement ->write_iter
  kernfs: implement ->read_iter
  bpf: Local storage helpers should check nullness of owner ptr passed
  drm/i915/hdcp: Get conn while content_type changed
  ASoC: SOF: Intel: hda: Avoid checking jack on system suspend
  tcp: Fix potential use-after-free due to double kfree()
  x86/sev-es: Handle string port IO to kernel memory properly
  net: systemport: free dev before on error path
  tty: fix up hung_up_tty_write() conversion
  tty: implement write_iter
  x86/sev: Fix nonistr violation
  pinctrl: qcom: Don't clear pending interrupts when enabling
  pinctrl: qcom: Properly clear "intr_ack_high" interrupts when unmasking
  pinctrl: qcom: No need to read-modify-write the interrupt status
  pinctrl: qcom: Allow SoCs to specify a GPIO function that's not 0
  net: core: devlink: use right genl user_ptr when handling port param get/set
  net: mscc: ocelot: Fix multicast to the CPU port
  tcp: fix TCP_USER_TIMEOUT with zero window
  tcp: do not mess with cloned skbs in tcp_add_backlog()
  net: dsa: b53: fix an off by one in checking "vlan->vid"
  net: Disable NETIF_F_HW_TLS_RX when RXCSUM is disabled
  net: mscc: ocelot: allow offloading of bridge on top of LAG
  ipv6: set multicast flag on the multicast route
  net_sched: reject silly cell_log in qdisc_get_rtab()
  net_sched: avoid shift-out-of-bounds in tcindex_set_parms()
  ipv6: create multicast route with RTPROT_KERNEL
  udp: mask TOS bits in udp_v4_early_demux()
  net_sched: gen_estimator: support large ewma log
  tcp: fix TCP socket rehash stats mis-accounting
  kasan: fix incorrect arguments passing in kasan_add_zero_shadow
  kasan: fix unaligned address is unhandled in kasan_remove_zero_shadow
  skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too
  lightnvm: fix memory leak when submit fails
  cachefiles: Drop superfluous readpages aops NULL check
  nvme-pci: fix error unwind in nvme_map_data
  nvme-pci: refactor nvme_unmap_data
  sh_eth: Fix power down vs. is_opened flag ordering
  selftests/powerpc: Fix exit status of pkey tests
  net: dsa: mv88e6xxx: also read STU state in mv88e6250_g1_vtu_getnext
  octeontx2-af: Fix missing check bugs in rvu_cgx.c
  ASoC: SOF: Intel: fix page fault at probe if i915 init fails
  locking/lockdep: Cure noinstr fail
  sh: Remove unused HAVE_COPY_THREAD_TLS macro
  sh: dma: fix kconfig dependency for G2_DMA
  drm/i915/hdcp: Update CP property in update_pipe
  tools: gpio: fix %llu warning in gpio-watch.c
  tools: gpio: fix %llu warning in gpio-event-mon.c
  netfilter: rpfilter: mask ecn bits before fib lookup
  cls_flower: call nla_ok() before nla_next()
  x86/cpu/amd: Set __max_die_per_package on AMD
  x86/entry: Fix noinstr fail
  drm/i915: Only enable DFP 4:4:4->4:2:0 conversion when outputting YCbCr 4:4:4
  drm/i915: s/intel_dp_sink_dpms/intel_dp_set_power/
  driver core: Extend device_is_dependent()
  driver core: Fix device link device name collision
  drivers core: Free dma_range_map when driver probe failed
  xhci: tegra: Delay for disabling LFPS detector
  xhci: make sure TRB is fully written before giving it to the controller
  usb: cdns3: imx: fix can't create core device the second time issue
  usb: cdns3: imx: fix writing read-only memory issue
  usb: bdc: Make bdc pci driver depend on BROKEN
  usb: udc: core: Use lock when write to soft_connect
  USB: gadget: dummy-hcd: Fix errors in port-reset handling
  usb: gadget: aspeed: fix stop dma register setting.
  USB: ehci: fix an interrupt calltrace error
  ehci: fix EHCI host controller initialization sequence
  serial: mvebu-uart: fix tx lost characters at power off
  stm class: Fix module init return on allocation failure
  intel_th: pci: Add Alder Lake-P support
  io_uring: fix short read retries for non-reg files
  io_uring: fix SQPOLL IORING_OP_CLOSE cancelation state
  io_uring: iopoll requests should also wake task ->in_idle state
  mm: fix numa stats for thp migration
  mm: memcg: fix memcg file_dirty numa stat
  mm: memcg/slab: optimize objcg stock draining
  proc_sysctl: fix oops caused by incorrect command parameters
  x86/setup: don't remove E820_TYPE_RAM for pfn 0
  x86/mmx: Use KFPU_387 for MMX string operations
  x86/topology: Make __max_die_per_package available unconditionally
  x86/fpu: Add kernel_fpu_begin_mask() to selectively initialize state
  irqchip/mips-cpu: Set IPI domain parent chip
  cifs: do not fail __smb_send_rqst if non-fatal signals are pending
  powerpc/64s: fix scv entry fallback flush vs interrupt
  counter:ti-eqep: remove floor
  iio: adc: ti_am335x_adc: remove omitted iio_kfifo_free()
  drivers: iio: temperature: Add delay after the addressed reset command in mlx90632.c
  iio: ad5504: Fix setting power-down state
  iio: common: st_sensors: fix possible infinite loop in st_sensors_irq_thread
  i2c: sprd: depend on COMMON_CLK to fix compile tests
  perf evlist: Fix id index for heterogeneous systems
  can: peak_usb: fix use after free bugs
  can: vxcan: vxcan_xmit: fix use after free bug
  can: dev: can_restart: fix use after free bug
  selftests: net: fib_tests: remove duplicate log test
  xsk: Clear pool even for inactive queues
  ALSA: hda: Balance runtime/system PM if direct-complete is disabled
  gpio: sifive: select IRQ_DOMAIN_HIERARCHY rather than depend on it
  platform/x86: hp-wmi: Don't log a warning on HPWMI_RET_UNKNOWN_COMMAND errors
  platform/x86: intel-vbtn: Drop HP Stream x360 Convertible PC 11 from allow-list
  drm/vc4: Unify PCM card's driver_name
  i2c: octeon: check correct size of maximum RECV_LEN packet
  iov_iter: fix the uaccess area in copy_compat_iovec_from_user
  printk: fix kmsg_dump_get_buffer length calulations
  printk: ringbuffer: fix line counting
  RDMA/cma: Fix error flow in default_roce_mode_store
  RDMA/umem: Avoid undefined behavior of rounddown_pow_of_two()
  drm/amdkfd: Fix out-of-bounds read in kdf_create_vcrat_image_cpu()
  bpf: Reject too big ctx_size_in for raw_tp test run
  arm64: entry: remove redundant IRQ flag tracing
  powerpc: Fix alignment bug within the init sections
  powerpc: Use the common INIT_DATA_SECTION macro in vmlinux.lds.S
  bpf: Prevent double bpf_prog_put call from bpf_tracing_prog_attach
  crypto: omap-sham - Fix link error without crypto-engine
  scsi: ufs: Fix tm request when non-fatal error happens
  scsi: ufs: ufshcd-pltfrm depends on HAS_IOMEM
  scsi: megaraid_sas: Fix MEGASAS_IOC_FIRMWARE regression
  btrfs: print the actual offset in btrfs_root_name
  RDMA/ucma: Do not miss ctx destruction steps in some cases
  pinctrl: mediatek: Fix fallback call path
  pinctrl: aspeed: g6: Fix PWMG0 pinctrl setting
  gpiolib: cdev: fix frame size warning in gpio_ioctl()
  nfsd: Don't set eof on a truncated READ_PLUS
  nfsd: Fixes for nfsd4_encode_read_plus_data()
  x86/xen: fix 'nopvspin' build error
  RISC-V: Fix maximum allowed phsyical memory for RV32
  RISC-V: Set current memblock limit
  libperf tests: Fail when failing to get a tracepoint id
  libperf tests: If a test fails return non-zero
  io_uring: flush timeouts that should already have expired
  drm/nouveau/kms/nv50-: fix case where notifier buffer is at offset 0
  drm/nouveau/mmu: fix vram heap sizing
  drm/nouveau/i2c/gm200: increase width of aux semaphore owner fields
  drm/nouveau/privring: ack interrupts the same way as RM
  drm/nouveau/bios: fix issue shadowing expansion ROMs
  drm/amd/display: Fix to be able to stop crc calculation
  HID: logitech-hidpp: Add product ID for MX Ergo in Bluetooth mode
  drm/amd/display: disable dcn10 pipe split by default
  drm/amdgpu/psp: fix psp gfx ctrl cmds
  riscv: defconfig: enable gpio support for HiFive Unleashed
  dts: phy: add GPIO number and active state used for phy reset
  dts: phy: fix missing mdio device and probe failure of vsc8541-01 device
  x86/xen: Fix xen_hvm_smp_init() when vector callback not available
  x86/xen: Add xen_no_vector_callback option to test PCI INTX delivery
  xen: Fix event channel callback via INTX/GSI
  arm64: make atomic helpers __always_inline
  riscv: cacheinfo: Fix using smp_processor_id() in preemptible
  ALSA: hda/tegra: fix tegra-hda on tegra30 soc
  clk: tegra30: Add hda clock default rates to clock driver
  HID: Ignore battery for Elan touchscreen on ASUS UX550
  HID: logitech-dj: add the G602 receiver
  riscv: Enable interrupts during syscalls with M-Mode
  riscv: Fix sifive serial driver
  riscv: Fix kernel time_init()
  scsi: sd: Suppress spurious errors when WRITE SAME is being disabled
  scsi: scsi_debug: Fix memleak in scsi_debug_init()
  scsi: qedi: Correct max length of CHAP secret
  scsi: ufs: Correct the LUN used in eh_device_reset_handler() callback
  scsi: ufs: Relax the condition of UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL
  x86/hyperv: Fix kexec panic/hang issues
  dm integrity: select CRYPTO_SKCIPHER
  HID: sony: select CONFIG_CRC32
  HID: multitouch: Enable multi-input for Synaptics pointstick/touchpad device
  SUNRPC: Handle TCP socket sends with kernel_sendpage() again
  ASoC: rt711: mutex between calibration and power state changes
  ASoC: Intel: haswell: Add missing pm_ops
  drm/i915: Check for rq->hwsp validity after acquiring RCU lock
  drm/i915/gt: Prevent use of engine->wa_ctx after error
  drm/amd/display: DCN2X Find Secondary Pipe properly in MPO + ODM Case
  drm/amdgpu: remove gpu info firmware of green sardine
  drm/syncobj: Fix use-after-free
  drm/atomic: put state on error path
  dm integrity: conditionally disable "recalculate" feature
  dm integrity: fix a crash if "recalculate" used without "internal_hash"
  dm: avoid filesystem lookup in dm_get_dev_t()
  mmc: sdhci-brcmstb: Fix mmc timeout errors on S5 suspend
  mmc: sdhci-xenon: fix 1.8v regulator stabilization
  mmc: sdhci-of-dwcmshc: fix rpmb access
  mmc: core: don't initialize block size from ext_csd if not present
  pinctrl: ingenic: Fix JZ4760 support
  fs: fix lazytime expiration handling in __writeback_single_inode()
  btrfs: send: fix invalid clone operations when cloning from the same file and root
  btrfs: don't clear ret in btrfs_start_dirty_block_groups
  btrfs: fix lockdep splat in btrfs_recover_relocation
  btrfs: do not double free backref nodes on error
  btrfs: don't get an EINTR during drop_snapshot for reloc
  ACPI: scan: Make acpi_bus_get_device() clear return pointer on error
  dm crypt: fix copy and paste bug in crypt_alloc_req_aead
  crypto: xor - Fix divide error in do_xor_speed()
  ALSA: hda/via: Add minimum mute flag
  ALSA: hda/realtek - Limit int mic boost on Acer Aspire E5-575T
  ALSA: seq: oss: Fix missing error check in snd_seq_oss_synth_make_info()
  platform/x86: ideapad-laptop: Disable touchpad_switch for ELAN0634
  platform/x86: i2c-multi-instantiate: Don't create platform device for INT3515 ACPI nodes
  i2c: bpmp-tegra: Ignore unknown I2C_M flags
  i2c: tegra: Wait for config load atomically while in ISR
  mtd: rawnand: nandsim: Fix the logic when selecting Hamming soft ECC engine
  mtd: rawnand: gpmi: fix dst bit offset when extracting raw payload
  scsi: target: tcmu: Fix use-after-free of se_cmd->priv
  ANDROID: simplify vendor hook definitions
  ANDROID: add macros to create OEM data fields
  ANDROID: dma-buf: fix return type mismatch
  ANDROID: cpu/hotplug: create vendor hook for cpu_up/cpu_down
  FROMLIST: fuse: Introduce passthrough for mmap
  ANDROID: Fix sparse warning in wp_page_copy caused by SPF patchset
  FROMLIST: fuse: Use daemon creds in passthrough mode
  FROMLIST: fuse: Handle asynchronous read and write in passthrough
  FROMLIST: fuse: Introduce synchronous read and write for passthrough
  FROMLIST: fuse: Passthrough initialization and release
  FROMLIST: fuse: Definitions and ioctl for passthrough
  FROMLIST: fuse: 32-bit user space ioctl compat for fuse device
  FROMLIST: fs: Generic function to convert iocb to rw flags
  Revert "FROMLIST: fuse: Definitions and ioctl() for passthrough"
  Revert "FROMLIST: fuse: Passthrough initialization and release"
  Revert "FROMLIST: fuse: Introduce synchronous read and write for passthrough"
  Revert "FROMLIST: fuse: Handle asynchronous read and write in passthrough"
  Revert "FROMLIST: fuse: Use daemon creds in passthrough mode"
  Revert "FROMLIST: fuse: Fix colliding FUSE_PASSTHROUGH flag"
  UPSTREAM: usb: xhci-mtk: fix unreleased bandwidth data
  ANDROID: sched: export task_rq_lock
  ANDROID: GKI: make VIDEOBUF2_DMA_CONTIG under GKI_HIDDEN_MEDIA_CONFIGS
  ANDROID: clang: update to 12.0.1
  FROMLIST: dma-buf: heaps: add chunk heap to dmabuf heaps
  FROMLIST: dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  FROMLIST: mm: failfast mode with __GFP_NORETRY in alloc_contig_range
  FROMLIST: mm: cma: introduce gfp flag in cma_alloc instead of no_warn
  UPSTREAM: kernfs: wire up ->splice_read and ->splice_write
  UPSTREAM: kernfs: implement ->write_iter
  UPSTREAM: kernfs: implement ->read_iter
  UPSTREAM: usb: typec: tcpm: Create legacy PDOs for PD2 connection

Conflicts:
	Documentation/devicetree/bindings
	drivers/dma-buf/heaps/Kconfig
	drivers/dma-buf/heaps/Makefile
	drivers/pinctrl/qcom/pinctrl-msm.h

Change-Id: I6412ddc7b1d215b7ea8bff5815277e13e8143888
Signed-off-by: Ivaylo Georgiev <irgeorgiev@codeaurora.org>

@@ -5,8 +5,8 @@ Description:
 		Provide a place in sysfs for the device link objects in the
 		kernel at any given time. The name of a device link directory,
 		denoted as ... above, is of the form <supplier>--<consumer>
-		where <supplier> is the supplier device name and <consumer> is
-		the consumer device name.
+		where <supplier> is the supplier bus:device name and <consumer>
+		is the consumer bus:device name.

 What:		/sys/class/devlink/.../auto_remove_on
 Date:		May 2020

@@ -4,5 +4,6 @@ Contact: Saravana Kannan <saravanak@google.com>
 Description:
 		The /sys/devices/.../consumer:<consumer> are symlinks to device
 		links where this device is the supplier. <consumer> denotes the
-		name of the consumer in that device link. There can be zero or
-		more of these symlinks for a given device.
+		name of the consumer in that device link and is of the form
+		bus:device name. There can be zero or more of these symlinks
+		for a given device.

@@ -4,5 +4,6 @@ Contact: Saravana Kannan <saravanak@google.com>
 Description:
 		The /sys/devices/.../supplier:<supplier> are symlinks to device
 		links where this device is the consumer. <supplier> denotes the
-		name of the supplier in that device link. There can be zero or
-		more of these symlinks for a given device.
+		name of the supplier in that device link and is of the form
+		bus:device name. There can be zero or more of these symlinks
+		for a given device.

@@ -177,14 +177,20 @@ bitmap_flush_interval:number
 	The bitmap flush interval in milliseconds. The metadata buffers
 	are synchronized when this interval expires.

+allow_discards
+	Allow block discard requests (a.k.a. TRIM) for the integrity device.
+	Discards are only allowed to devices using internal hash.
+
 fix_padding
 	Use a smaller padding of the tag area that is more
 	space-efficient. If this option is not present, large padding is
 	used - that is for compatibility with older kernels.

-allow_discards
-	Allow block discard requests (a.k.a. TRIM) for the integrity device.
-	Discards are only allowed to devices using internal hash.
+legacy_recalculate
+	Allow recalculating of volumes with HMAC keys. This is disabled by
+	default for security reasons - an attacker could modify the volume,
+	set recalc_sector to zero, and the kernel would not detect the
+	modification.

 The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
 allow_discards can be changed when reloading the target (load an inactive

@@ -5977,6 +5977,10 @@
 			This option is obsoleted by the "nopv" option, which
 			has equivalent effect for XEN platform.

+	xen_no_vector_callback
+			[KNL,X86,XEN] Disable the vector callback for Xen
+			event channel interrupts.
+
 	xen_scrub_pages=	[XEN]
 			Boolean option to control scrubbing pages before giving them back
 			to Xen, for use by other domains. Can be also changed at runtime

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 10
+SUBLEVEL = 11
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus

@@ -379,7 +379,7 @@ static int __init xen_guest_init(void)
 	}
 	gnttab_init();
 	if (!xen_initial_domain())
-		xenbus_probe(NULL);
+		xenbus_probe();

 	/*
 	 * Making sure board specific code will not set up ops for

@@ -17,7 +17,7 @@
 #include <asm/lse.h>

 #define ATOMIC_OP(op) \
-static inline void arch_##op(int i, atomic_t *v) \
+static __always_inline void arch_##op(int i, atomic_t *v) \
 { \
 	__lse_ll_sc_body(op, i, v); \
 }
@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
 #undef ATOMIC_OP

 #define ATOMIC_FETCH_OP(name, op) \
-static inline int arch_##op##name(int i, atomic_t *v) \
+static __always_inline int arch_##op##name(int i, atomic_t *v) \
 { \
 	return __lse_ll_sc_body(op##name, i, v); \
 }
@@ -56,7 +56,7 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
 #undef ATOMIC_FETCH_OPS

 #define ATOMIC64_OP(op) \
-static inline void arch_##op(long i, atomic64_t *v) \
+static __always_inline void arch_##op(long i, atomic64_t *v) \
 { \
 	__lse_ll_sc_body(op, i, v); \
 }
@@ -71,7 +71,7 @@ ATOMIC64_OP(atomic64_sub)
 #undef ATOMIC64_OP

 #define ATOMIC64_FETCH_OP(name, op) \
-static inline long arch_##op##name(long i, atomic64_t *v) \
+static __always_inline long arch_##op##name(long i, atomic64_t *v) \
 { \
 	return __lse_ll_sc_body(op##name, i, v); \
 }
@@ -94,7 +94,7 @@ ATOMIC64_FETCH_OPS(atomic64_sub_return)
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_FETCH_OPS

-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	return __lse_ll_sc_body(atomic64_dec_if_positive, v);
 }

@@ -947,13 +947,6 @@ static void set_32bit_cpus_allowed(void)
 asmlinkage void do_notify_resume(struct pt_regs *regs,
 				 unsigned long thread_flags)
 {
-	/*
-	 * The assembly code enters us with IRQs off, but it hasn't
-	 * informed the tracing code of that for efficiency reasons.
-	 * Update the trace code with the current status.
-	 */
-	trace_hardirqs_off();
-
 	do {
 		/* Check valid user FS if needed */
 		addr_limit_user_check();

@@ -165,15 +165,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
 		local_daif_mask();
 		flags = current_thread_info()->flags;
-		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
-			/*
-			 * We're off to userspace, where interrupts are
-			 * always enabled after we restore the flags from
-			 * the SPSR.
-			 */
-			trace_hardirqs_on();
+		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
 			return;
-		}
 		local_daif_restore(DAIF_PROCCTX);
 	}

@@ -63,6 +63,12 @@
 	nop;						\
 	nop;

+#define SCV_ENTRY_FLUSH_SLOT				\
+	SCV_ENTRY_FLUSH_FIXUP_SECTION;			\
+	nop;						\
+	nop;						\
+	nop;
+
 /*
  * r10 must be free to use, r13 must be paca
  */
@@ -70,6 +76,13 @@
 	STF_ENTRY_BARRIER_SLOT;				\
 	ENTRY_FLUSH_SLOT

+/*
+ * r10, ctr must be free to use, r13 must be paca
+ */
+#define SCV_INTERRUPT_TO_KERNEL				\
+	STF_ENTRY_BARRIER_SLOT;				\
+	SCV_ENTRY_FLUSH_SLOT
+
 /*
  * Macros for annotating the expected destination of (h)rfid
  *

@@ -221,6 +221,14 @@ label##3:					\
 	FTR_ENTRY_OFFSET 957b-958b;			\
 	.popsection;

+#define SCV_ENTRY_FLUSH_FIXUP_SECTION			\
+957:							\
+	.pushsection __scv_entry_flush_fixup,"a";	\
+	.align 2;					\
+958:							\
+	FTR_ENTRY_OFFSET 957b-958b;			\
+	.popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION				\
 951:							\
 	.pushsection __rfi_flush_fixup,"a";		\
@@ -254,10 +262,12 @@ label##3:					\

 extern long stf_barrier_fallback;
 extern long entry_flush_fallback;
+extern long scv_entry_flush_fallback;
 extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup;
 extern long __start___entry_flush_fixup, __stop___entry_flush_fixup;
+extern long __start___scv_entry_flush_fixup, __stop___scv_entry_flush_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;

@@ -75,7 +75,7 @@ BEGIN_FTR_SECTION
 	bne	.Ltabort_syscall
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
-	INTERRUPT_TO_KERNEL
+	SCV_INTERRUPT_TO_KERNEL
 	mr	r10,r1
 	ld	r1,PACAKSAVE(r13)
 	std	r10,0(r1)

@@ -2993,6 +2993,25 @@ TRAMP_REAL_BEGIN(entry_flush_fallback)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
 	blr

+/*
+ * The SCV entry flush happens with interrupts enabled, so it must disable
+ * to prevent EXRFI being clobbered by NMIs (e.g., soft_nmi_common). r10
+ * (containing LR) does not need to be preserved here because scv entry
+ * puts 0 in the pt_regs, CTR can be clobbered for the same reason.
+ */
+TRAMP_REAL_BEGIN(scv_entry_flush_fallback)
+	li	r10,0
+	mtmsrd	r10,1
+	lbz	r10,PACAIRQHAPPENED(r13)
+	ori	r10,r10,PACA_IRQ_HARD_DIS
+	stb	r10,PACAIRQHAPPENED(r13)
+	std	r11,PACA_EXRFI+EX_R11(r13)
+	L1D_DISPLACEMENT_FLUSH
+	ld	r11,PACA_EXRFI+EX_R11(r13)
+	li	r10,MSR_RI
+	mtmsrd	r10,1
+	blr
+
 TRAMP_REAL_BEGIN(rfi_flush_fallback)
 	SET_SCRATCH0(r13);
 	GET_PACA(r13);

@@ -145,6 +145,13 @@
 		__stop___entry_flush_fixup = .;
 	}

+	. = ALIGN(8);
+	__scv_entry_flush_fixup : AT(ADDR(__scv_entry_flush_fixup) - LOAD_OFFSET) {
+		__start___scv_entry_flush_fixup = .;
+		*(__scv_entry_flush_fixup)
+		__stop___scv_entry_flush_fixup = .;
+	}
+
 	. = ALIGN(8);
 	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
 		__start___stf_exit_barrier_fixup = .;
@@ -187,6 +194,12 @@
 	.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
 		_sinittext = .;
 		INIT_TEXT
+
+		/*
+		 *.init.text might be RO so we must ensure this section ends on
+		 * a page boundary.
+		 */
+		. = ALIGN(PAGE_SIZE);
 		_einittext = .;
 #ifdef CONFIG_PPC64
 		*(.tramp.ftrace.init);
@@ -200,21 +213,9 @@
 		EXIT_TEXT
 	}

-	.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
-		INIT_DATA
-	}
-
-	.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
-		INIT_SETUP(16)
-	}
-
-	.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
-		INIT_CALLS
-	}
-
-	.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
-		CON_INITCALL
-	}
+	. = ALIGN(PAGE_SIZE);
+
+	INIT_DATA_SECTION(16)

 	. = ALIGN(8);
 	__ftr_fixup : AT(ADDR(__ftr_fixup) - LOAD_OFFSET) {
@@ -242,9 +243,6 @@
 		__stop___fw_ftr_fixup = .;
 	}
 #endif
-
-	.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
-		INIT_RAM_FS
-	}

 	PERCPU_SECTION(L1_CACHE_BYTES)

@@ -290,9 +290,6 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
 	long *start, *end;
 	int i;

-	start = PTRRELOC(&__start___entry_flush_fixup);
-	end = PTRRELOC(&__stop___entry_flush_fixup);
-
 	instrs[0] = 0x60000000; /* nop */
 	instrs[1] = 0x60000000; /* nop */
 	instrs[2] = 0x60000000; /* nop */
@@ -312,6 +309,8 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
 	if (types & L1D_FLUSH_MTTRIG)
 		instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */

+	start = PTRRELOC(&__start___entry_flush_fixup);
+	end = PTRRELOC(&__stop___entry_flush_fixup);
 	for (i = 0; start < end; start++, i++) {
 		dest = (void *)start + *start;
@@ -328,6 +327,25 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
 		patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
 	}

+	start = PTRRELOC(&__start___scv_entry_flush_fixup);
+	end = PTRRELOC(&__stop___scv_entry_flush_fixup);
+	for (; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
+
+		if (types == L1D_FLUSH_FALLBACK)
+			patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1]));
+
+		patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
+	}
+
 	printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i,
 		(types == L1D_FLUSH_NONE) ? "no" :
 		(types == L1D_FLUSH_FALLBACK) ? "fallback displacement" :

@@ -134,7 +134,7 @@ config PA_BITS
 config PAGE_OFFSET
 	hex
-	default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB
+	default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB
 	default 0x80000000 if 64BIT && !MMU
 	default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
 	default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
@@ -247,10 +247,12 @@ config MODULE_SECTIONS
 choice
 	prompt "Maximum Physical Memory"
-	default MAXPHYSMEM_2GB if 32BIT
+	default MAXPHYSMEM_1GB if 32BIT
 	default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW
 	default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY

+	config MAXPHYSMEM_1GB
+		bool "1GiB"
 	config MAXPHYSMEM_2GB
 		bool "2GiB"
 	config MAXPHYSMEM_128GB

@@ -88,7 +88,9 @@
 			phy-mode = "gmii";
 			phy-handle = <&phy0>;
 			phy0: ethernet-phy@0 {
+				compatible = "ethernet-phy-id0007.0771";
 				reg = <0>;
+				reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
 			};
 		};

@@ -64,6 +64,8 @@ CONFIG_HW_RANDOM=y
 CONFIG_HW_RANDOM_VIRTIO=y
 CONFIG_SPI=y
 CONFIG_SPI_SIFIVE=y
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_SIFIVE=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_POWER_RESET=y
 CONFIG_DRM=y

@@ -26,7 +26,16 @@ cache_get_priv_group(struct cacheinfo *this_leaf)
 static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type)
 {
-	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id());
+	/*
+	 * Using raw_smp_processor_id() elides a preemptability check, but this
+	 * is really indicative of a larger problem: the cacheinfo UABI assumes
+	 * that cores have a homonogenous view of the cache hierarchy. That
+	 * happens to be the case for the current set of RISC-V systems, but
+	 * likely won't be true in general. Since there's no way to provide
+	 * correct information for these systems via the current UABI we're
+	 * just eliding the check for now.
+	 */
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id());
 	struct cacheinfo *this_leaf;
 	int index;

@@ -155,6 +155,15 @@ skip_context_tracking:
 	tail do_trap_unknown

 handle_syscall:
+#ifdef CONFIG_RISCV_M_MODE
+	/*
+	 * When running is M-Mode (no MMU config), MPIE does not get set.
+	 * As a result, we need to force enable interrupts here because
+	 * handle_exception did not do set SR_IE as it always sees SR_PIE
+	 * being cleared.
+	 */
+	csrs CSR_STATUS, SR_IE
+#endif
 #if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
 	/* Recover a0 - a7 for system calls */
 	REG_L a0, PT_A0(sp)

@@ -4,6 +4,7 @@
  * Copyright (C) 2017 SiFive
  */

+#include <linux/of_clk.h>
 #include <linux/clocksource.h>
 #include <linux/delay.h>
 #include <asm/sbi.h>
@@ -24,6 +25,8 @@ void __init time_init(void)
 	riscv_timebase = prop;

 	lpj_fine = riscv_timebase / HZ;
+
+	of_clk_init(NULL);
 	timer_probe();
 }

@@ -155,9 +155,10 @@ disable:
 void __init setup_bootmem(void)
 {
 	phys_addr_t mem_start = 0;
-	phys_addr_t start, end = 0;
+	phys_addr_t start, dram_end, end = 0;
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
+	phys_addr_t max_mapped_addr = __pa(~(ulong)0);
 	u64 i;

 	/* Find the memory region containing the kernel */
@@ -179,7 +180,18 @@ void __init setup_bootmem(void)
 	/* Reserve from the start of the kernel to the end of the kernel */
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);

-	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+	dram_end = memblock_end_of_DRAM();
+
+	/*
+	 * memblock allocator is not aware of the fact that last 4K bytes of
+	 * the addressable memory can not be mapped because of IS_ERR_VALUE
+	 * macro. Make sure that last 4k bytes are not usable by memblock
+	 * if end of dram is equal to maximum addressable memory.
+	 */
+	if (max_mapped_addr == (dram_end - 1))
+		memblock_set_current_limit(max_mapped_addr - 4096);
+
+	max_pfn = PFN_DOWN(dram_end);
 	max_low_pfn = max_pfn;
 	set_max_mapnr(max_low_pfn);

@@ -30,7 +30,6 @@ config SUPERH
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_COPY_THREAD_TLS
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DYNAMIC_FTRACE

@@ -63,8 +63,7 @@ config PVR2_DMA
 config G2_DMA
 	tristate "G2 Bus DMA support"
-	depends on SH_DREAMCAST
-	select SH_DMA_API
+	depends on SH_DREAMCAST && SH_DMA_API
 	help
 	  This enables support for the DMA controller for the Dreamcast's
 	  G2 bus. Drivers that want this will generally enable this on

@@ -73,10 +73,8 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs,
 					  unsigned int nr)
 {
 	if (likely(nr < IA32_NR_syscalls)) {
-		instrumentation_begin();
 		nr = array_index_nospec(nr, IA32_NR_syscalls);
 		regs->ax = ia32_sys_call_table[nr](regs);
-		instrumentation_end();
 	}
 }
@@ -91,8 +89,11 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
 	 * or may not be necessary, but it matches the old asm behavior.
 	 */
 	nr = (unsigned int)syscall_enter_from_user_mode(regs, nr);
+	instrumentation_begin();

 	do_syscall_32_irqs_on(regs, nr);
+
+	instrumentation_end();
 	syscall_exit_to_user_mode(regs);
 }
@@ -121,11 +122,12 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 		res = get_user(*(u32 *)&regs->bp,
 		       (u32 __user __force *)(unsigned long)(u32)regs->sp);
 	}
-	instrumentation_end();

 	if (res) {
 		/* User code screwed up. */
 		regs->ax = -EFAULT;
+
+		instrumentation_end();
 		syscall_exit_to_user_mode(regs);
 		return false;
 	}
@@ -135,6 +137,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 	/* Now this is just like a normal syscall. */
 	do_syscall_32_irqs_on(regs, nr);
+
+	instrumentation_end();
 	syscall_exit_to_user_mode(regs);
 	return true;
 }

@@ -16,6 +16,7 @@
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
 #include <asm/idtentry.h>
+#include <linux/kexec.h>
 #include <linux/version.h>
 #include <linux/vmalloc.h>
 #include <linux/mm.h>
@@ -26,6 +27,8 @@
 #include <linux/syscore_ops.h>
 #include <clocksource/hyperv_timer.h>

+int hyperv_init_cpuhp;
+
 void *hv_hypercall_pg;
 EXPORT_SYMBOL_GPL(hv_hypercall_pg);
@@ -424,6 +427,7 @@ void __init hyperv_init(void)
 	register_syscore_ops(&hv_syscore_ops);

+	hyperv_init_cpuhp = cpuhp;
 	return;

 remove_cpuhp_state:

@@ -16,14 +16,25 @@
  * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
  * disables preemption so be careful if you intend to use it for long periods
  * of time.
- * If you intend to use the FPU in softirq you need to check first with
+ * If you intend to use the FPU in irq/softirq you need to check first with
  * irq_fpu_usable() if it is possible.
  */
-extern void kernel_fpu_begin(void);
+
+/* Kernel FPU states to initialize in kernel_fpu_begin_mask() */
+#define KFPU_387	_BITUL(0)	/* 387 state will be initialized */
+#define KFPU_MXCSR	_BITUL(1)	/* MXCSR will be initialized */
+
+extern void kernel_fpu_begin_mask(unsigned int kfpu_mask);
 extern void kernel_fpu_end(void);
 extern bool irq_fpu_usable(void);
 extern void fpregs_mark_activate(void);

+/* Code that is unaware of kernel_fpu_begin_mask() can use this */
+static inline void kernel_fpu_begin(void)
+{
+	kernel_fpu_begin_mask(KFPU_387 | KFPU_MXCSR);
+}
+
 /*
  * Use fpregs_lock() while editing CPU's FPU registers or fpu->state.
  * A context switch will (and softirq might) save CPU's FPU registers to

@@ -74,6 +74,8 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {}

 #if IS_ENABLED(CONFIG_HYPERV)
+extern int hyperv_init_cpuhp;
+
 extern void *hv_hypercall_pg;
 extern void __percpu **hyperv_pcpu_input_arg;

@@ -110,6 +110,8 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
 #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
 #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)

+extern unsigned int __max_die_per_package;
+
 #ifdef CONFIG_SMP
 #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
@@ -118,8 +120,6 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
 extern unsigned int __max_logical_packages;
 #define topology_max_packages()			(__max_logical_packages)

-extern unsigned int __max_die_per_package;
-
 static inline int topology_max_die_per_package(void)
 {
 	return __max_die_per_package;

@@ -569,12 +569,12 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 		u32 ecx;

 		ecx = cpuid_ecx(0x8000001e);
-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
 	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
 		u64 value;

 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		nodes_per_socket = ((value >> 3) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
 	}

 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&

@@ -135,14 +135,32 @@ static void hv_machine_shutdown(void)
 {
 	if (kexec_in_progress && hv_kexec_handler)
 		hv_kexec_handler();

+	/*
+	 * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor
+	 * corrupts the old VP Assist Pages and can crash the kexec kernel.
+	 */
+	if (kexec_in_progress && hyperv_init_cpuhp > 0)
+		cpuhp_remove_state(hyperv_init_cpuhp);
+
+	/* The function calls stop_other_cpus(). */
 	native_machine_shutdown();
+
+	/* Disable the hypercall page when there is only 1 active CPU. */
+	if (kexec_in_progress)
+		hyperv_cleanup();
 }

 static void hv_machine_crash_shutdown(struct pt_regs *regs)
 {
 	if (hv_crash_handler)
 		hv_crash_handler(regs);
+
+	/* The function calls crash_smp_send_stop(). */
 	native_machine_crash_shutdown(regs);
+
+	/* Disable the hypercall page when there is only 1 active CPU. */
+	hyperv_cleanup();
 }
 #endif /* CONFIG_KEXEC_CORE */
 #endif /* CONFIG_HYPERV */

@@ -25,10 +25,10 @@
 #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
 #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)

-#ifdef CONFIG_SMP
 unsigned int __max_die_per_package __read_mostly = 1;
 EXPORT_SYMBOL(__max_die_per_package);

+#ifdef CONFIG_SMP
 /*
  * Check if given CPUID extended toplogy "leaf" is implemented
  */

@@ -121,7 +121,7 @@ int copy_fpregs_to_fpstate(struct fpu *fpu)
 }
 EXPORT_SYMBOL(copy_fpregs_to_fpstate);

-void kernel_fpu_begin(void)
+void kernel_fpu_begin_mask(unsigned int kfpu_mask)
 {
 	preempt_disable();

@@ -141,13 +141,14 @@ void kernel_fpu_begin(void)
 	}
 	__cpu_invalidate_fpregs_state();

-	if (boot_cpu_has(X86_FEATURE_XMM))
+	/* Put sane initial values into the control registers. */
+	if (likely(kfpu_mask & KFPU_MXCSR) && boot_cpu_has(X86_FEATURE_XMM))
 		ldmxcsr(MXCSR_DEFAULT);

-	if (boot_cpu_has(X86_FEATURE_FPU))
+	if (unlikely(kfpu_mask & KFPU_387) && boot_cpu_has(X86_FEATURE_FPU))
 		asm volatile ("fninit");
 }
-EXPORT_SYMBOL_GPL(kernel_fpu_begin);
+EXPORT_SYMBOL_GPL(kernel_fpu_begin_mask);

 void kernel_fpu_end(void)
 {

@@ -665,17 +665,6 @@ static void __init trim_platform_memory_ranges(void)
 static void __init trim_bios_range(void)
 {
-	/*
-	 * A special case is the first 4Kb of memory;
-	 * This is a BIOS owned area, not kernel ram, but generally
-	 * not listed as such in the E820 table.
-	 *
-	 * This typically reserves additional memory (64KiB by default)
-	 * since some BIOSes are known to corrupt low memory. See the
-	 * Kconfig help text for X86_RESERVE_LOW.
-	 */
-	e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
-
 	/*
 	 * special case: Some BIOSes report the PC BIOS
 	 * area (640Kb -> 1Mb) as RAM even though it is not.
@@ -733,6 +722,15 @@ early_param("reservelow", parse_reservelow);
 static void __init trim_low_memory_range(void)
 {
+	/*
+	 * A special case is the first 4Kb of memory;
+	 * This is a BIOS owned area, not kernel ram, but generally
+	 * not listed as such in the E820 table.
+	 *
+	 * This typically reserves additional memory (64KiB by default)
+	 * since some BIOSes are known to corrupt low memory. See the
+	 * Kconfig help text for X86_RESERVE_LOW.
+	 */
 	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
 }

@@ -225,7 +225,7 @@ static inline u64 sev_es_rd_ghcb_msr(void)
 	return __rdmsr(MSR_AMD64_SEV_ES_GHCB);
 }

-static inline void sev_es_wr_ghcb_msr(u64 val)
+static __always_inline void sev_es_wr_ghcb_msr(u64 val)
 {
 	u32 low, high;

@@ -286,6 +286,12 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
 	u16 d2;
 	u8 d1;

+	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
+	if (!user_mode(ctxt->regs) && !access_ok(target, size)) {
+		memcpy(dst, buf, size);
+		return ES_OK;
+	}
+
 	switch (size) {
 	case 1:
 		memcpy(&d1, buf, 1);
@@ -335,6 +341,12 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	u16 d2;
 	u8 d1;

+	/* If instruction ran in kernel mode and the I/O buffer is in kernel space */
+	if (!user_mode(ctxt->regs) && !access_ok(s, size)) {
+		memcpy(buf, src, size);
+		return ES_OK;
+	}
+
 	switch (size) {
 	case 1:
 		if (get_user(d1, s))

@@ -26,6 +26,16 @@
 #include <asm/fpu/api.h>
 #include <asm/asm.h>

+/*
+ * Use KFPU_387. MMX instructions are not affected by MXCSR,
+ * but both AMD and Intel documentation states that even integer MMX
+ * operations will result in #MF if an exception is pending in FCW.
+ *
+ * EMMS is not needed afterwards because, after calling kernel_fpu_end(),
+ * any subsequent user of the 387 stack will reinitialize it using
+ * KFPU_387.
+ */
+
 void *_mmx_memcpy(void *to, const void *from, size_t len)
 {
 	void *p;
@@ -37,7 +47,7 @@ void *_mmx_memcpy(void *to, const void *from, size_t len)
 	p = to;
 	i = len >> 6; /* len/64 */

-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);

 	__asm__ __volatile__ (
 		"1: prefetch (%0)\n"	/* This set is 28 bytes */
@@ -127,7 +137,7 @@ static void fast_clear_page(void *page)
 {
 	int i;

-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);

 	__asm__ __volatile__ (
 		"  pxor %%mm0, %%mm0\n" : :
@@ -160,7 +170,7 @@ static void fast_copy_page(void *to, void *from)
 {
 	int i;

-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);

 	/*
 	 * maybe the prefetch stuff can go before the expensive fnsave...
@@ -247,7 +257,7 @@ static void fast_clear_page(void *page)
 {
 	int i;

-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);

 	__asm__ __volatile__ (
 		"  pxor %%mm0, %%mm0\n" : :
@@ -282,7 +292,7 @@ static void fast_copy_page(void *to, void *from)
 {
 	int i;

-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);

 	__asm__ __volatile__ (
 		"1: prefetch (%0)\n"

@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
 	return 0;
 }

+static bool no_vector_callback __initdata;
+
 static void __init xen_hvm_guest_init(void)
 {
 	if (xen_pv_domain())
@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
 	xen_panic_handler_init();

-	if (xen_feature(XENFEAT_hvm_callback_vector))
+	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;

 	xen_hvm_smp_init();
@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
 }
 early_param("xen_nopv", xen_parse_nopv);

+static __init int xen_parse_no_vector_callback(char *arg)
+{
+	no_vector_callback = true;
+	return 0;
+}
+early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
+
 bool __init xen_hvm_need_lapic(void)
 {
 	if (xen_pv_domain())

@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 	int cpu;

 	native_smp_prepare_cpus(max_cpus);
-	WARN_ON(xen_smp_intr_init(0));

-	xen_init_lock_cpu(0);
+	if (xen_have_vector_callback) {
+		WARN_ON(xen_smp_intr_init(0));
+		xen_init_lock_cpu(0);
+	}

 	for_each_possible_cpu(cpu) {
 		if (cpu == 0)
@@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 static void xen_hvm_cpu_die(unsigned int cpu)
 {
 	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
+		if (xen_have_vector_callback) {
+			xen_smp_intr_free(cpu);
+			xen_uninit_lock_cpu(cpu);
+			xen_teardown_timer(cpu);
+		}
 	}
 }
 #else
@@ -64,14 +68,19 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 void __init xen_hvm_smp_init(void)
 {
-	if (!xen_have_vector_callback)
-		return;
-
+	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
-	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
+	smp_ops.smp_cpus_done = xen_smp_cpus_done;
 	smp_ops.cpu_die = xen_hvm_cpu_die;
+
+	if (!xen_have_vector_callback) {
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+		nopvspin = true;
+#endif
+		return;
+	}
+
+	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
-	smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }

@@ -4,7 +4,7 @@ KMI_GENERATION=0
 LLVM=1
 DEPMOD=depmod
 DTC=dtc
-CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r399163b/bin
+CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r407598/bin
 BUILDTOOLS_PREBUILT_BIN=build/build-tools/path/linux-x86
 STOP_SHIP_TRACEPRINTK=1

@@ -107,6 +107,8 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
 	preempt_enable();

 	// bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
+	if (!min)
+		min = 1;
 	speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
 	tmpl->speed = speed;

@@ -586,6 +586,8 @@ static int acpi_get_device_data(acpi_handle handle, struct acpi_device **device,
 	if (!device)
 		return -EINVAL;

+	*device = NULL;
+
 	status = acpi_get_data_full(handle, acpi_scan_drop_device,
 				    (void **)device, callback);
 	if (ACPI_FAILURE(status) || !*device) {

@@ -9,6 +9,7 @@
 #define CREATE_TRACE_POINTS
 #include <trace/hooks/vendor_hooks.h>
 #include <trace/hooks/sched.h>
+#include <trace/hooks/cpu.h>
 #include <trace/hooks/fpsimd.h>
 #include <trace/hooks/binder.h>
 #include <trace/hooks/rwsem.h>
@@ -133,3 +134,5 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_allow_domain_state);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_map_util_freq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_report_bug);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_em_cpu_energy);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpu_up);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpu_down);

@@ -208,6 +208,16 @@ int device_links_read_lock_held(void)
 #endif
 #endif /* !CONFIG_SRCU */

+static bool device_is_ancestor(struct device *dev, struct device *target)
+{
+	while (target->parent) {
+		target = target->parent;
+		if (dev == target)
+			return true;
+	}
+	return false;
+}
+
 /**
  * device_is_dependent - Check if one device depends on another one
  * @dev: Device to check dependencies for.
@@ -221,7 +231,12 @@ int device_is_dependent(struct device *dev, void *target)
 	struct device_link *link;
 	int ret;

-	if (dev == target)
+	/*
+	 * The "ancestors" check is needed to catch the case when the target
+	 * device has not been completely initialized yet and it is still
+	 * missing from the list of children of its parent device.
+	 */
+	if (dev == target || device_is_ancestor(dev, target))
 		return 1;

 	ret = device_for_each_child(dev, target, device_is_dependent);
@@ -456,7 +471,9 @@ static int devlink_add_symlinks(struct device *dev,
 	struct device *con = link->consumer;
 	char *buf;

-	len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
+	len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
+		  strlen(dev_bus_name(con)) + strlen(dev_name(con)));
+	len += strlen(":");
 	len += strlen("supplier:") + 1;
 	buf = kzalloc(len, GFP_KERNEL);
 	if (!buf)
@@ -470,12 +487,12 @@ static int devlink_add_symlinks(struct device *dev,
 	if (ret)
 		goto err_con;

-	snprintf(buf, len, "consumer:%s", dev_name(con));
+	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
 	ret = sysfs_create_link(&sup->kobj, &link->link_dev.kobj, buf);
 	if (ret)
 		goto err_con_dev;

-	snprintf(buf, len, "supplier:%s", dev_name(sup));
+	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
 	ret = sysfs_create_link(&con->kobj, &link->link_dev.kobj, buf);
 	if (ret)
 		goto err_sup_dev;
@@ -483,7 +500,7 @@ static int devlink_add_symlinks(struct device *dev,
 	goto out;

 err_sup_dev:
-	snprintf(buf, len, "consumer:%s", dev_name(con));
+	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
 	sysfs_remove_link(&sup->kobj, buf);
 err_con_dev:
 	sysfs_remove_link(&link->link_dev.kobj, "consumer");
@@ -506,7 +523,9 @@ static void devlink_remove_symlinks(struct device *dev,
 	sysfs_remove_link(&link->link_dev.kobj, "consumer");
 	sysfs_remove_link(&link->link_dev.kobj, "supplier");

-	len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
+	len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
+		  strlen(dev_bus_name(con)) + strlen(dev_name(con)));
+	len += strlen(":");
 	len += strlen("supplier:") + 1;
 	buf = kzalloc(len, GFP_KERNEL);
 	if (!buf) {
@@ -514,9 +533,9 @@ static void devlink_remove_symlinks(struct device *dev,
 		return;
 	}

-	snprintf(buf, len, "supplier:%s", dev_name(sup));
+	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
 	sysfs_remove_link(&con->kobj, buf);
-	snprintf(buf, len, "consumer:%s", dev_name(con));
+	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
 	sysfs_remove_link(&sup->kobj, buf);
 	kfree(buf);
 }
@@ -737,8 +756,9 @@ struct device_link *device_link_add(struct device *consumer,
 	link->link_dev.class = &devlink_class;
 	device_set_pm_not_required(&link->link_dev);
-	dev_set_name(&link->link_dev, "%s--%s",
-		     dev_name(supplier), dev_name(consumer));
+	dev_set_name(&link->link_dev, "%s:%s--%s:%s",
+		     dev_bus_name(supplier), dev_name(supplier),
+		     dev_bus_name(consumer), dev_name(consumer));
 	if (device_register(&link->link_dev)) {
 		put_device(consumer);
 		put_device(supplier);
@@ -1808,9 +1828,7 @@ const char *dev_driver_string(const struct device *dev)
 	 * never change once they are set, so they don't need special care.
 	 */
 	drv = READ_ONCE(dev->driver);
-	return drv ? drv->name :
-			(dev->bus ? dev->bus->name :
-			(dev->class ? dev->class->name : ""));
+	return drv ? drv->name : dev_bus_name(dev);
 }
 EXPORT_SYMBOL(dev_driver_string);


@@ -612,6 +612,8 @@ dev_groups_failed:
	else if (drv->remove)
		drv->remove(dev);
probe_failed:
+	kfree(dev->dma_range_map);
+	dev->dma_range_map = NULL;
	if (dev->bus)
		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);


@@ -1256,6 +1256,8 @@ static struct tegra_clk_init_table init_table[] __initdata = {
	{ TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
	{ TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
	{ TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+	{ TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 },
+	{ TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 },
	/* must be the last entry */
	{ TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
};


@@ -235,36 +235,6 @@ static ssize_t ti_eqep_position_ceiling_write(struct counter_device *counter,
	return len;
}

-static ssize_t ti_eqep_position_floor_read(struct counter_device *counter,
-					   struct counter_count *count,
-					   void *ext_priv, char *buf)
-{
-	struct ti_eqep_cnt *priv = counter->priv;
-	u32 qposinit;
-
-	regmap_read(priv->regmap32, QPOSINIT, &qposinit);
-
-	return sprintf(buf, "%u\n", qposinit);
-}
-
-static ssize_t ti_eqep_position_floor_write(struct counter_device *counter,
-					    struct counter_count *count,
-					    void *ext_priv, const char *buf,
-					    size_t len)
-{
-	struct ti_eqep_cnt *priv = counter->priv;
-	int err;
-	u32 res;
-
-	err = kstrtouint(buf, 0, &res);
-	if (err < 0)
-		return err;
-
-	regmap_write(priv->regmap32, QPOSINIT, res);
-
-	return len;
-}
-
static ssize_t ti_eqep_position_enable_read(struct counter_device *counter,
					    struct counter_count *count,
					    void *ext_priv, char *buf)

@@ -301,11 +271,6 @@ static struct counter_count_ext ti_eqep_position_ext[] = {
		.read = ti_eqep_position_ceiling_read,
		.write = ti_eqep_position_ceiling_write,
	},
-	{
-		.name = "floor",
-		.read = ti_eqep_position_floor_read,
-		.write = ti_eqep_position_floor_write,
-	},
	{
		.name = "enable",
		.read = ti_eqep_position_enable_read,


@@ -366,6 +366,7 @@ if CRYPTO_DEV_OMAP
config CRYPTO_DEV_OMAP_SHAM
	tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator"
	depends on ARCH_OMAP2PLUS
+	select CRYPTO_ENGINE
	select CRYPTO_SHA1
	select CRYPTO_MD5
	select CRYPTO_SHA256


@@ -13,6 +13,14 @@ config DMABUF_HEAPS_CMA
	  by the Contiguous Memory Allocator (CMA). If your system has these
	  regions, you should say Y here.

+config DMABUF_HEAPS_CHUNK
+	bool "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
+	  are arranged into a list of fixed size chunks taken from CMA.
+
config QCOM_DMABUF_HEAPS
	tristate "QCOM DMA-BUF Heaps"
	depends on DMABUF_HEAPS


@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK) += chunk_heap.o
obj-$(CONFIG_QCOM_DMABUF_HEAPS) += qcom_dma_heaps.o
qcom_dma_heaps-y := qcom_dma_heap.o
qcom_dma_heaps-$(CONFIG_QCOM_DMABUF_HEAPS_SYSTEM) += qcom_system_heap.o qcom_heap_helpers.o \


@@ -0,0 +1,491 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DMA-BUF chunk heap exporter
*
* Copyright (c) 2020 Samsung Electronics Co., Ltd.
* Author: <hyesoo.yu@samsung.com> for Samsung Electronics.
*/
#include <linux/cma.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/dma-mapping.h>
#include <linux/dma-map-ops.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>
#include <linux/scatterlist.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
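
/*
 * One chunk heap is registered per tagged CMA reserved-memory region.
 * Every buffer it exports is built from physically contiguous chunks of
 * (PAGE_SIZE << order) bytes carved out of that CMA area.
 */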
struct chunk_heap {
struct dma_heap *heap;
uint32_t order;
struct cma *cma;
};
struct chunk_heap_buffer {
struct chunk_heap *heap;
struct list_head attachments;
struct mutex lock;
struct sg_table sg_table;
unsigned long len;
int vmap_cnt;
void *vaddr;
};
struct chunk_heap_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
bool mapped;
};
static struct chunk_heap chunk_heaps[MAX_CMA_AREAS] __initdata;
static unsigned int chunk_heap_count __initdata;
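
/*
 * Give each attachment its own copy of the buffer's scatter list so the
 * buffer can be mapped and synced per attached device independently.
 */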
static struct sg_table *dup_sg_table(struct sg_table *table)
{
struct sg_table *new_table;
int ret, i;
struct scatterlist *sg, *new_sg;
new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
if (!new_table)
return ERR_PTR(-ENOMEM);
ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
if (ret) {
kfree(new_table);
return ERR_PTR(-ENOMEM);
}
new_sg = new_table->sgl;
for_each_sgtable_sg(table, sg, i) {
sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
new_sg = sg_next(new_sg);
}
return new_table;
}
static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct chunk_heap_attachment *a;
struct sg_table *table;
a = kzalloc(sizeof(*a), GFP_KERNEL);
if (!a)
return -ENOMEM;
table = dup_sg_table(&buffer->sg_table);
if (IS_ERR(table)) {
kfree(a);
return -ENOMEM;
}
a->table = table;
a->dev = attachment->dev;
attachment->priv = a;
mutex_lock(&buffer->lock);
list_add(&a->list, &buffer->attachments);
mutex_unlock(&buffer->lock);
return 0;
}
static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct chunk_heap_attachment *a = attachment->priv;
mutex_lock(&buffer->lock);
list_del(&a->list);
mutex_unlock(&buffer->lock);
sg_free_table(a->table);
kfree(a->table);
kfree(a);
}
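
/* DMA mappings are created once per attachment and cached via a->mapped. */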
static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
{
struct chunk_heap_attachment *a = attachment->priv;
struct sg_table *table = a->table;
int ret;
if (a->mapped)
return table;
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
return ERR_PTR(ret);
a->mapped = true;
return table;
}
static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
{
struct chunk_heap_attachment *a = attachment->priv;
a->mapped = false;
dma_unmap_sgtable(attachment->dev, table, direction, 0);
}
static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct chunk_heap_attachment *a;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
}
mutex_unlock(&buffer->lock);
return 0;
}
static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct chunk_heap_attachment *a;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
flush_kernel_vmap_range(buffer->vaddr, buffer->len);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
dma_sync_sgtable_for_device(a->dev, a->table, direction);
}
mutex_unlock(&buffer->lock);
return 0;
}
static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct sg_table *table = &buffer->sg_table;
unsigned long addr = vma->vm_start;
struct sg_page_iter piter;
int ret;
for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
struct page *page = sg_page_iter_page(&piter);
ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
vma->vm_page_prot);
if (ret)
return ret;
addr += PAGE_SIZE;
if (addr >= vma->vm_end)
return 0;
}
return 0;
}
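
/* Stitch all pages of the buffer into one contiguous kernel mapping. */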
static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
{
struct sg_table *table = &buffer->sg_table;
int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
struct page **pages = vmalloc(sizeof(struct page *) * npages);
struct page **tmp = pages;
struct sg_page_iter piter;
void *vaddr;
if (!pages)
return ERR_PTR(-ENOMEM);
for_each_sgtable_page(table, &piter, 0) {
WARN_ON(tmp - pages >= npages);
*tmp++ = sg_page_iter_page(&piter);
}
vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
vfree(pages);
if (!vaddr)
return ERR_PTR(-ENOMEM);
return vaddr;
}
static void *chunk_heap_vmap(struct dma_buf *dmabuf)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
void *vaddr;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt) {
vaddr = buffer->vaddr;
} else {
vaddr = chunk_heap_do_vmap(buffer);
if (IS_ERR(vaddr)) {
mutex_unlock(&buffer->lock);
return vaddr;
}
buffer->vaddr = vaddr;
}
buffer->vmap_cnt++;
mutex_unlock(&buffer->lock);
return vaddr;
}
static void chunk_heap_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
mutex_lock(&buffer->lock);
if (!--buffer->vmap_cnt) {
vunmap(buffer->vaddr);
buffer->vaddr = NULL;
}
mutex_unlock(&buffer->lock);
}
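
/* Hand every chunk back to the CMA area once the last reference is dropped. */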
static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
{
struct chunk_heap_buffer *buffer = dmabuf->priv;
struct chunk_heap *chunk_heap = buffer->heap;
struct sg_table *table;
struct scatterlist *sg;
int i;
table = &buffer->sg_table;
for_each_sgtable_sg(table, sg, i)
cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
sg_free_table(table);
kfree(buffer);
}
static const struct dma_buf_ops chunk_heap_buf_ops = {
.attach = chunk_heap_attach,
.detach = chunk_heap_detach,
.map_dma_buf = chunk_heap_map_dma_buf,
.unmap_dma_buf = chunk_heap_unmap_dma_buf,
.begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
.end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
.mmap = chunk_heap_mmap,
.vmap = chunk_heap_vmap,
.vunmap = chunk_heap_vunmap,
.release = chunk_heap_dma_buf_release,
};
struct dma_buf *chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
unsigned long fd_flags, unsigned long heap_flags)
{
struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
struct chunk_heap_buffer *buffer;
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct dma_buf *dmabuf;
struct sg_table *table;
struct scatterlist *sg;
struct page **pages;
unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
unsigned int count, alloced = 0;
unsigned int alloc_order = max_t(unsigned int, pageblock_order, chunk_heap->order);
unsigned int nr_chunks_per_alloc = 1 << (alloc_order - chunk_heap->order);
gfp_t gfp_flags = GFP_KERNEL|__GFP_NORETRY;
int ret = -ENOMEM;
pgoff_t pg;
buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
if (!buffer)
return ERR_PTR(ret);
INIT_LIST_HEAD(&buffer->attachments);
mutex_init(&buffer->lock);
buffer->heap = chunk_heap;
buffer->len = ALIGN(len, chunk_size);
count = buffer->len / chunk_size;
pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
if (!pages)
goto err_pages;
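
/*
 * Grab the largest blocks CMA will hand out (up to pageblock_order) and
 * split each into order-sized chunks. The first pass uses __GFP_NORETRY;
 * only when that opportunistic attempt fails is the flag dropped for a
 * blocking retry.
 */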
while (alloced < count) {
struct page *page;
int i;
while (count - alloced < nr_chunks_per_alloc) {
alloc_order--;
nr_chunks_per_alloc >>= 1;
}
page = cma_alloc(chunk_heap->cma, 1 << alloc_order,
alloc_order, gfp_flags);
if (!page) {
if (gfp_flags & __GFP_NORETRY) {
gfp_flags &= ~__GFP_NORETRY;
continue;
}
break;
}
for (i = 0; i < nr_chunks_per_alloc; i++, alloced++) {
pages[alloced] = page;
page += 1 << chunk_heap->order;
}
}
if (alloced < count)
goto err_alloc;
table = &buffer->sg_table;
if (sg_alloc_table(table, count, GFP_KERNEL))
goto err_alloc;
sg = table->sgl;
for (pg = 0; pg < count; pg++) {
sg_set_page(sg, pages[pg], chunk_size, 0);
sg = sg_next(sg);
}
exp_info.ops = &chunk_heap_buf_ops;
exp_info.size = buffer->len;
exp_info.flags = fd_flags;
exp_info.priv = buffer;
dmabuf = dma_buf_export(&exp_info);
if (IS_ERR(dmabuf)) {
ret = PTR_ERR(dmabuf);
goto err_export;
}
kvfree(pages);
ret = dma_buf_fd(dmabuf, fd_flags);
if (ret < 0) {
dma_buf_put(dmabuf);
return ERR_PTR(ret);
}
return dmabuf;
err_export:
sg_free_table(table);
err_alloc:
for (pg = 0; pg < alloced; pg++)
cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
kvfree(pages);
err_pages:
kfree(buffer);
return ERR_PTR(ret);
}
static const struct dma_heap_ops chunk_heap_ops = {
.allocate = chunk_heap_allocate,
};
#define CHUNK_PREFIX "chunk-"
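
/* Exported heaps show up under /dev/dma_heap/ as "chunk-<CMA area name>". */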
static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
{
struct dma_heap_export_info exp_info;
const char *name = cma_get_name(chunk_heap_info->cma);
size_t len = strlen(CHUNK_PREFIX) + strlen(name) + 1;
char *buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return -ENOMEM;
sprintf(buf, CHUNK_PREFIX"%s", name);
buf[len - 1] = '\0';
exp_info.name = buf;
exp_info.ops = &chunk_heap_ops;
exp_info.priv = chunk_heap_info;
chunk_heap_info->heap = dma_heap_add(&exp_info);
if (IS_ERR(chunk_heap_info->heap)) {
kfree(buf);
return PTR_ERR(chunk_heap_info->heap);
}
return 0;
}
static int __init chunk_heap_init(void)
{
unsigned int i;
for (i = 0; i < chunk_heap_count; i++)
register_chunk_heap(&chunk_heaps[i]);
return 0;
}
module_init(chunk_heap_init);
#ifdef CONFIG_OF_EARLY_FLATTREE
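/*
 * Early flat-DT hook: each "dma_heap,chunk" reserved-memory node becomes a
 * CMA area, with the optional "chunk-order" property selecting the chunk
 * size (default order 4, i.e. 64KiB chunks with 4KiB pages).
 */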
static int __init dmabuf_chunk_heap_area_init(struct reserved_mem *rmem)
{
int ret;
struct cma *cma;
struct chunk_heap *chunk_heap_info;
const __be32 *chunk_order;
phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
phys_addr_t mask = align - 1;
if ((rmem->base & mask) || (rmem->size & mask)) {
pr_err("Incorrect alignment for CMA region\n");
return -EINVAL;
}
ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma);
if (ret) {
pr_err("Reserved memory: unable to setup CMA region\n");
return ret;
}
/* Architecture specific contiguous memory fixup. */
dma_contiguous_early_fixup(rmem->base, rmem->size);
chunk_heap_info = &chunk_heaps[chunk_heap_count];
chunk_heap_info->cma = cma;
chunk_order = of_get_flat_dt_prop(rmem->fdt_node, "chunk-order", NULL);
if (chunk_order)
chunk_heap_info->order = be32_to_cpu(*chunk_order);
else
chunk_heap_info->order = 4;
chunk_heap_count++;
return 0;
}
RESERVEDMEM_OF_DECLARE(dmabuf_chunk_heap, "dma_heap,chunk",
dmabuf_chunk_heap_area_init);
#endif
MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
MODULE_LICENSE("GPL v2");


@@ -292,7 +292,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
	if (align > CONFIG_CMA_ALIGNMENT)
		align = CONFIG_CMA_ALIGNMENT;

-	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
	if (!cma_pages)
		goto free_buffer;


@@ -508,7 +508,8 @@ config GPIO_SAMA5D2_PIOBU
config GPIO_SIFIVE
	bool "SiFive GPIO support"
-	depends on OF_GPIO && IRQ_DOMAIN_HIERARCHY
+	depends on OF_GPIO
+	select IRQ_DOMAIN_HIERARCHY
	select GPIO_GENERIC
	select GPIOLIB_IRQCHIP
	select REGMAP_MMIO


@@ -1960,6 +1960,21 @@ struct gpio_chardev_data {
#endif
};

+static int chipinfo_get(struct gpio_chardev_data *cdev, void __user *ip)
+{
+	struct gpio_device *gdev = cdev->gdev;
+	struct gpiochip_info chipinfo;
+
+	memset(&chipinfo, 0, sizeof(chipinfo));
+
+	strscpy(chipinfo.name, dev_name(&gdev->dev), sizeof(chipinfo.name));
+	strscpy(chipinfo.label, gdev->label, sizeof(chipinfo.label));
+	chipinfo.lines = gdev->ngpio;
+
+	if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
+		return -EFAULT;
+
+	return 0;
+}
+
#ifdef CONFIG_GPIO_CDEV_V1
/*
 * returns 0 if the versions match, else the previously selected ABI version

@@ -1974,6 +1989,41 @@ static int lineinfo_ensure_abi_version(struct gpio_chardev_data *cdata,
	return abiv;
}

+static int lineinfo_get_v1(struct gpio_chardev_data *cdev, void __user *ip,
+			   bool watch)
+{
+	struct gpio_desc *desc;
+	struct gpioline_info lineinfo;
+	struct gpio_v2_line_info lineinfo_v2;
+
+	if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
+		return -EFAULT;
+
+	/* this doubles as a range check on line_offset */
+	desc = gpiochip_get_desc(cdev->gdev->chip, lineinfo.line_offset);
+	if (IS_ERR(desc))
+		return PTR_ERR(desc);
+
+	if (watch) {
+		if (lineinfo_ensure_abi_version(cdev, 1))
+			return -EPERM;
+
+		if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
+			return -EBUSY;
+	}
+
+	gpio_desc_to_lineinfo(desc, &lineinfo_v2);
+	gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
+
+	if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
+		if (watch)
+			clear_bit(lineinfo.line_offset, cdev->watched_lines);
+		return -EFAULT;
+	}
+
+	return 0;
+}
#endif

static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,

@@ -2011,6 +2061,22 @@ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,
	return 0;
}

+static int lineinfo_unwatch(struct gpio_chardev_data *cdev, void __user *ip)
+{
+	__u32 offset;
+
+	if (copy_from_user(&offset, ip, sizeof(offset)))
+		return -EFAULT;
+
+	if (offset >= cdev->gdev->ngpio)
+		return -EINVAL;
+
+	if (!test_and_clear_bit(offset, cdev->watched_lines))
+		return -EBUSY;
+
+	return 0;
+}
+
/*
 * gpio_ioctl() - ioctl handler for the GPIO chardev
 */

@@ -2018,80 +2084,24 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct gpio_chardev_data *cdev = file->private_data;
	struct gpio_device *gdev = cdev->gdev;
-	struct gpio_chip *gc = gdev->chip;
	void __user *ip = (void __user *)arg;
-	__u32 offset;

	/* We fail any subsequent ioctl():s when the chip is gone */
-	if (!gc)
+	if (!gdev->chip)
		return -ENODEV;

	/* Fill in the struct and pass to userspace */
	if (cmd == GPIO_GET_CHIPINFO_IOCTL) {
-		struct gpiochip_info chipinfo;
-
-		memset(&chipinfo, 0, sizeof(chipinfo));
-
-		strscpy(chipinfo.name, dev_name(&gdev->dev),
-			sizeof(chipinfo.name));
-		strscpy(chipinfo.label, gdev->label,
-			sizeof(chipinfo.label));
-		chipinfo.lines = gdev->ngpio;
-		if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
-			return -EFAULT;
-		return 0;
+		return chipinfo_get(cdev, ip);
#ifdef CONFIG_GPIO_CDEV_V1
-	} else if (cmd == GPIO_GET_LINEINFO_IOCTL) {
-		struct gpio_desc *desc;
-		struct gpioline_info lineinfo;
-		struct gpio_v2_line_info lineinfo_v2;
-
-		if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
-			return -EFAULT;
-
-		/* this doubles as a range check on line_offset */
-		desc = gpiochip_get_desc(gc, lineinfo.line_offset);
-		if (IS_ERR(desc))
-			return PTR_ERR(desc);
-
-		gpio_desc_to_lineinfo(desc, &lineinfo_v2);
-		gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
-
-		if (copy_to_user(ip, &lineinfo, sizeof(lineinfo)))
-			return -EFAULT;
-		return 0;
	} else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) {
		return linehandle_create(gdev, ip);
	} else if (cmd == GPIO_GET_LINEEVENT_IOCTL) {
		return lineevent_create(gdev, ip);
-	} else if (cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
-		struct gpio_desc *desc;
-		struct gpioline_info lineinfo;
-		struct gpio_v2_line_info lineinfo_v2;
-
-		if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
-			return -EFAULT;
-
-		/* this doubles as a range check on line_offset */
-		desc = gpiochip_get_desc(gc, lineinfo.line_offset);
-		if (IS_ERR(desc))
-			return PTR_ERR(desc);
-
-		if (lineinfo_ensure_abi_version(cdev, 1))
-			return -EPERM;
-
-		if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
-			return -EBUSY;
-
-		gpio_desc_to_lineinfo(desc, &lineinfo_v2);
-		gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
-
-		if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
-			clear_bit(lineinfo.line_offset, cdev->watched_lines);
-			return -EFAULT;
-		}
-		return 0;
+	} else if (cmd == GPIO_GET_LINEINFO_IOCTL ||
+		   cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
+		return lineinfo_get_v1(cdev, ip,
+				       cmd == GPIO_GET_LINEINFO_WATCH_IOCTL);
#endif /* CONFIG_GPIO_CDEV_V1 */
	} else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL ||
		   cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) {

@@ -2100,16 +2110,7 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
	} else if (cmd == GPIO_V2_GET_LINE_IOCTL) {
		return linereq_create(gdev, ip);
	} else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) {
-		if (copy_from_user(&offset, ip, sizeof(offset)))
-			return -EFAULT;
-
-		if (offset >= cdev->gdev->ngpio)
-			return -EINVAL;
-
-		if (!test_and_clear_bit(offset, cdev->watched_lines))
-			return -EBUSY;
-
-		return 0;
+		return lineinfo_unwatch(cdev, ip);
	}
	return -EINVAL;
}


@@ -80,7 +80,6 @@ MODULE_FIRMWARE("amdgpu/renoir_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin"); MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin"); MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin"); MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/green_sardine_gpu_info.bin");
#define AMDGPU_RESUME_MS 2000 #define AMDGPU_RESUME_MS 2000


@@ -47,7 +47,7 @@ enum psp_gfx_crtl_cmd_id
	GFX_CTRL_CMD_ID_DISABLE_INT = 0x00060000, /* disable PSP-to-Gfx interrupt */
	GFX_CTRL_CMD_ID_MODE1_RST = 0x00070000, /* trigger the Mode 1 reset */
	GFX_CTRL_CMD_ID_GBR_IH_SET = 0x00080000, /* set Gbr IH_RB_CNTL registers */
-	GFX_CTRL_CMD_ID_CONSUME_CMD = 0x000A0000, /* send interrupt to psp for updating write pointer of vf */
+	GFX_CTRL_CMD_ID_CONSUME_CMD = 0x00090000, /* send interrupt to psp for updating write pointer of vf */
	GFX_CTRL_CMD_ID_DESTROY_GPCOM_RING = 0x000C0000, /* destroy GPCOM ring */
	GFX_CTRL_CMD_ID_MAX = 0x000F0000, /* max command ID */


@@ -1034,11 +1034,14 @@ static int kfd_create_vcrat_image_cpu(void *pcrat_image, size_t *size)
				(struct crat_subtype_iolink *)sub_type_hdr);
	if (ret < 0)
		return ret;

-	crat_table->length += (sub_type_hdr->length * entries);
-	crat_table->total_entries += entries;
-	sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
-			sub_type_hdr->length * entries);
+	if (entries) {
+		crat_table->length += (sub_type_hdr->length * entries);
+		crat_table->total_entries += entries;
+
+		sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
+				sub_type_hdr->length * entries);
+	}
#else
	pr_info("IO link not available for non x86 platforms\n");
#endif


@@ -113,7 +113,7 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
	mutex_lock(&adev->dm.dc_lock);

	/* Enable CRTC CRC generation if necessary. */
-	if (dm_is_crc_source_crtc(source)) {
+	if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
		if (!dc_stream_configure_crc(stream_state->ctx->dc,
					     stream_state, enable, enable)) {
			ret = -EINVAL;


@@ -608,8 +608,8 @@ static const struct dc_debug_options debug_defaults_drv = {
	.disable_pplib_clock_request = false,
	.disable_pplib_wm_range = false,
	.pplib_wm_report_mode = WM_REPORT_DEFAULT,
-	.pipe_split_policy = MPC_SPLIT_DYNAMIC,
-	.force_single_disp_pipe_split = true,
+	.pipe_split_policy = MPC_SPLIT_AVOID,
+	.force_single_disp_pipe_split = false,
	.disable_dcc = DCC_ENABLE,
	.voltage_align_fclk = true,
	.disable_stereo_support = true,


@@ -2520,8 +2520,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
	 * if this primary pipe has a bottom pipe in prev. state
	 * and if the bottom pipe is still available (which it should be),
	 * pick that pipe as secondary
-	 * Same logic applies for ODM pipes. Since mpo is not allowed with odm
-	 * check in else case.
+	 * Same logic applies for ODM pipes
	 */
	if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe) {
		preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe->pipe_idx;

@@ -2529,7 +2528,9 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
			secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
			secondary_pipe->pipe_idx = preferred_pipe_idx;
		}
-	} else if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
+	}
+	if (secondary_pipe == NULL &&
+	    dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
		preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe->pipe_idx;
		if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
			secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];


@@ -3007,7 +3007,7 @@ int drm_atomic_helper_set_config(struct drm_mode_set *set,
	ret = handle_conflicting_encoders(state, true);
	if (ret)
-		return ret;
+		goto fail;

	ret = drm_atomic_commit(state);


@@ -388,19 +388,18 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
		return -ENOENT;

	*fence = drm_syncobj_fence_get(syncobj);
-	drm_syncobj_put(syncobj);

	if (*fence) {
		ret = dma_fence_chain_find_seqno(fence, point);
		if (!ret)
-			return 0;
+			goto out;
		dma_fence_put(*fence);
	} else {
		ret = -EINVAL;
	}

	if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
-		return ret;
+		goto out;

	memset(&wait, 0, sizeof(wait));
	wait.task = current;

@@ -432,6 +431,9 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
	if (wait.node.next)
		drm_syncobj_remove_wait(syncobj, &wait);

+out:
+	drm_syncobj_put(syncobj);
+
	return ret;
}
EXPORT_SYMBOL(drm_syncobj_find_fence);


@@ -3387,7 +3387,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
	intel_ddi_init_dp_buf_reg(encoder);

	if (!is_mst)
-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);

	intel_dp_sink_set_decompression_state(intel_dp, crtc_state, true);
	/*

@@ -3469,8 +3469,8 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
	intel_ddi_init_dp_buf_reg(encoder);
	if (!is_mst)
-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
-	intel_dp_configure_protocol_converter(intel_dp);
+		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
+	intel_dp_configure_protocol_converter(intel_dp, crtc_state);
	intel_dp_sink_set_decompression_state(intel_dp, crtc_state,
					      true);
	intel_dp_sink_set_fec_ready(intel_dp, crtc_state);

@@ -3647,7 +3647,7 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
	 * Power down sink before disabling the port, otherwise we end
	 * up getting interrupts from the sink on detecting link loss.
	 */
-	intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
+	intel_dp_set_power(intel_dp, DP_SET_POWER_D3);

	if (INTEL_GEN(dev_priv) >= 12) {
		if (is_mst) {

@@ -3496,22 +3496,22 @@ void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
enable ? "enable" : "disable"); enable ? "enable" : "disable");
} }
/* If the sink supports it, try to set the power state appropriately */ /* If the device supports it, try to set the power state appropriately */
void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode) void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode)
{ {
struct drm_i915_private *i915 = dp_to_i915(intel_dp); struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
int ret, i; int ret, i;
/* Should have a valid DPCD by this point */ /* Should have a valid DPCD by this point */
if (intel_dp->dpcd[DP_DPCD_REV] < 0x11) if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
return; return;
if (mode != DRM_MODE_DPMS_ON) { if (mode != DP_SET_POWER_D0) {
if (downstream_hpd_needs_d0(intel_dp)) if (downstream_hpd_needs_d0(intel_dp))
return; return;
ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
DP_SET_POWER_D3);
} else { } else {
struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp); struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp);
@@ -3520,8 +3520,7 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
* time to wake up. * time to wake up.
*/ */
for (i = 0; i < 3; i++) { for (i = 0; i < 3; i++) {
ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
DP_SET_POWER_D0);
if (ret == 1) if (ret == 1)
break; break;
msleep(1); msleep(1);
@@ -3532,8 +3531,9 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
} }
if (ret != 1) if (ret != 1)
drm_dbg_kms(&i915->drm, "failed to %s sink power state\n", drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Set power to %s failed\n",
mode == DRM_MODE_DPMS_ON ? "enable" : "disable"); encoder->base.base.id, encoder->base.name,
mode == DP_SET_POWER_D0 ? "D0" : "D3");
} }
static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv, static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv,
@@ -3707,7 +3707,7 @@ static void intel_disable_dp(struct intel_atomic_state *state,
* ensure that we have vdd while we switch off the panel. */ * ensure that we have vdd while we switch off the panel. */
intel_edp_panel_vdd_on(intel_dp); intel_edp_panel_vdd_on(intel_dp);
intel_edp_backlight_off(old_conn_state); intel_edp_backlight_off(old_conn_state);
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF); intel_dp_set_power(intel_dp, DP_SET_POWER_D3);
intel_edp_panel_off(intel_dp); intel_edp_panel_off(intel_dp);
} }
@@ -3856,7 +3856,8 @@ static void intel_dp_enable_port(struct intel_dp *intel_dp,
intel_de_posting_read(dev_priv, intel_dp->output_reg); intel_de_posting_read(dev_priv, intel_dp->output_reg);
} }
void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp) void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{ {
struct drm_i915_private *i915 = dp_to_i915(intel_dp); struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 tmp; u8 tmp;
@@ -3875,8 +3876,8 @@ void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp)
drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n", drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n",
enableddisabled(intel_dp->has_hdmi_sink)); enableddisabled(intel_dp->has_hdmi_sink));
tmp = intel_dp->dfp.ycbcr_444_to_420 ? tmp = crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR444 &&
DP_CONVERSION_TO_YCBCR420_ENABLE : 0; intel_dp->dfp.ycbcr_444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0;
if (drm_dp_dpcd_writeb(&intel_dp->aux, if (drm_dp_dpcd_writeb(&intel_dp->aux,
DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1) DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1)
@@ -3929,8 +3930,8 @@ static void intel_enable_dp(struct intel_atomic_state *state,
lane_mask); lane_mask);
} }
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON); intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
intel_dp_configure_protocol_converter(intel_dp); intel_dp_configure_protocol_converter(intel_dp, pipe_config);
intel_dp_start_link_train(intel_dp); intel_dp_start_link_train(intel_dp);
intel_dp_stop_link_train(intel_dp); intel_dp_stop_link_train(intel_dp);


@@ -50,8 +50,9 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
					    int link_rate, u8 lane_count);
int intel_dp_retrain_link(struct intel_encoder *encoder,
			  struct drm_modeset_acquire_ctx *ctx);
-void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
-void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp);
+void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode);
+void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
+					   const struct intel_crtc_state *crtc_state);
void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
					   const struct intel_crtc_state *crtc_state,
					   bool enable);


@@ -488,7 +488,7 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
		    intel_dp->active_mst_links);

	if (first_mst_stream)
-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+		intel_dp_set_power(intel_dp, DP_SET_POWER_D0);

	drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);


@@ -2187,6 +2187,7 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
	if (content_protection_type_changed) {
		mutex_lock(&hdcp->mutex);
		hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
+		drm_connector_get(&connector->base);
		schedule_work(&hdcp->prop_work);
		mutex_unlock(&hdcp->mutex);
	}

@@ -2198,6 +2199,14 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
		desired_and_not_enabled =
			hdcp->value != DRM_MODE_CONTENT_PROTECTION_ENABLED;
		mutex_unlock(&hdcp->mutex);
+
+		/*
+		 * If HDCP already ENABLED and CP property is DESIRED, schedule
+		 * prop_work to update correct CP property to user space.
+		 */
+		if (!desired_and_not_enabled && !content_protection_type_changed) {
+			drm_connector_get(&connector->base);
+			schedule_work(&hdcp->prop_work);
+		}
	}

	if (desired_and_not_enabled || content_protection_type_changed)


@@ -134,11 +134,6 @@ static bool remove_signaling_context(struct intel_breadcrumbs *b,
	return true;
}

-static inline bool __request_completed(const struct i915_request *rq)
-{
-	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
-}
-
__maybe_unused static bool
check_signal_order(struct intel_context *ce, struct i915_request *rq)
{

@@ -257,7 +252,7 @@ static void signal_irq_work(struct irq_work *work)
		list_for_each_entry_rcu(rq, &ce->signals, signal_link) {
			bool release;

-			if (!__request_completed(rq))
+			if (!__i915_request_is_complete(rq))
				break;

			if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL,

@@ -379,7 +374,7 @@ static void insert_breadcrumb(struct i915_request *rq)
	 * straight onto a signaled list, and queue the irq worker for
	 * its signal completion.
	 */
-	if (__request_completed(rq)) {
+	if (__i915_request_is_complete(rq)) {
		if (__signal_request(rq) &&
		    llist_add(&rq->signal_node, &b->signaled_requests))
			irq_work_queue(&b->irq_work);


@@ -3936,6 +3936,9 @@ err:
static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine)
{
	i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
+
+	/* Called on error unwind, clear all flags to prevent further use */
+	memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx));
}

typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);


@@ -126,6 +126,10 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
	struct intel_timeline_cacheline *cl =
		container_of(rcu, typeof(*cl), rcu);

+	/* Must wait until after all *rq->hwsp are complete before removing */
+	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
+	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
+
	i915_active_fini(&cl->active);
	kfree(cl);
}

@@ -133,11 +137,6 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
{
	GEM_BUG_ON(!i915_active_is_idle(&cl->active));
-
-	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
-	i915_vma_put(cl->hwsp->vma);
-	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
-
	call_rcu(&cl->rcu, __rcu_cacheline_free);
}

@@ -179,7 +178,6 @@ cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
		return ERR_CAST(vaddr);
	}

-	i915_vma_get(hwsp->vma);
	cl->hwsp = hwsp;
	cl->vaddr = page_pack_bits(vaddr, cacheline);


@@ -435,7 +435,7 @@ static inline u32 hwsp_seqno(const struct i915_request *rq)
static inline bool __i915_request_has_started(const struct i915_request *rq)
{
-	return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno - 1);
+	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno - 1);
}

/**

@@ -466,11 +466,19 @@ static inline bool __i915_request_has_started(const struct i915_request *rq)
 */
static inline bool i915_request_started(const struct i915_request *rq)
{
+	bool result;
+
	if (i915_request_signaled(rq))
		return true;

-	/* Remember: started but may have since been preempted! */
-	return __i915_request_has_started(rq);
+	result = true;
+	rcu_read_lock(); /* the HWSP may be freed at runtime */
+	if (likely(!i915_request_signaled(rq)))
+		/* Remember: started but may have since been preempted! */
+		result = __i915_request_has_started(rq);
+	rcu_read_unlock();
+
+	return result;
}

/**

@@ -483,10 +491,16 @@ static inline bool i915_request_started(const struct i915_request *rq)
 */
static inline bool i915_request_is_running(const struct i915_request *rq)
{
+	bool result;
+
	if (!i915_request_is_active(rq))
		return false;

-	return __i915_request_has_started(rq);
+	rcu_read_lock();
+	result = __i915_request_has_started(rq) && i915_request_is_active(rq);
+	rcu_read_unlock();
+
+	return result;
}

/**

@@ -510,12 +524,25 @@ static inline bool i915_request_is_ready(const struct i915_request *rq)
	return !list_empty(&rq->sched.link);
}

+static inline bool __i915_request_is_complete(const struct i915_request *rq)
+{
+	return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
+}
+
static inline bool i915_request_completed(const struct i915_request *rq)
{
+	bool result;
+
	if (i915_request_signaled(rq))
		return true;

-	return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno);
+	result = true;
+	rcu_read_lock(); /* the HWSP may be freed at runtime */
+	if (likely(!i915_request_signaled(rq)))
+		result = __i915_request_is_complete(rq);
+	rcu_read_unlock();
+
+	return result;
}

static inline void i915_request_mark_complete(struct i915_request *rq)


@@ -221,7 +221,7 @@ nv50_dmac_wait(struct nvif_push *push, u32 size)
int
nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
-		 const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf,
+		 const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf,
		 struct nv50_dmac *dmac)
{
	struct nouveau_cli *cli = (void *)device->object.client;

@@ -270,7 +270,7 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
	if (ret)
		return ret;

-	if (!syncbuf)
+	if (syncbuf < 0)
		return 0;

	ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF,


@@ -95,7 +95,7 @@ struct nv50_outp_atom {
int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
		     const s32 *oclass, u8 head, void *data, u32 size,
-		     u64 syncbuf, struct nv50_dmac *dmac);
+		     s64 syncbuf, struct nv50_dmac *dmac);
void nv50_dmac_destroy(struct nv50_dmac *);

/*


@@ -76,7 +76,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm,
	int ret;

	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
-			       &oclass, 0, &args, sizeof(args), 0,
+			       &oclass, 0, &args, sizeof(args), -1,
			       &wndw->wimm);
	if (ret) {
		NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret);


@@ -75,7 +75,7 @@ shadow_image(struct nvkm_bios *bios, int idx, u32 offset, struct shadow *mthd)
		nvkm_debug(subdev, "%08x: type %02x, %d bytes\n",
			   image.base, image.type, image.size);

-		if (!shadow_fetch(bios, mthd, image.size)) {
+		if (!shadow_fetch(bios, mthd, image.base + image.size)) {
			nvkm_debug(subdev, "%08x: fetch failed\n", image.base);
			return 0;
		}


@@ -33,7 +33,7 @@ static void
gm200_i2c_aux_fini(struct gm200_i2c_aux *aux)
{
	struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
-	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000);
+	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000);
}

static int

@@ -54,10 +54,10 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
			AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl);
			return -EBUSY;
		}
-	} while (ctrl & 0x03010000);
+	} while (ctrl & 0x07010000);

	/* set some magic, and wait up to 1ms for it to appear */
-	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq);
+	nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq);
	timeout = 1000;
	do {
		ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50));

@@ -67,7 +67,7 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
			gm200_i2c_aux_fini(aux);
			return -EBUSY;
		}
-	} while ((ctrl & 0x03000000) != urep);
+	} while ((ctrl & 0x07000000) != urep);

	return 0;
}


@@ -22,6 +22,7 @@
 * Authors: Ben Skeggs
 */
#include "priv.h"
+#include <subdev/timer.h>

static void
gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)

@@ -31,7 +32,6 @@ gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400));
	u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400));
	nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000);
}

static void

@@ -42,7 +42,6 @@ gf100_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400));
	u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400));
	nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000);
}

static void

@@ -53,7 +52,6 @@ gf100_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400));
	u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400));
	nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000);
}

void

@@ -90,6 +88,12 @@ gf100_ibus_intr(struct nvkm_subdev *ibus)
			intr1 &= ~stat;
		}
	}
+
+	nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002);
+	nvkm_msec(device, 2000,
+		if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f))
+			break;
+	);
}

static int


@@ -22,6 +22,7 @@
 * Authors: Ben Skeggs
 */
#include "priv.h"
+#include <subdev/timer.h>

static void
gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)

@@ -31,7 +32,6 @@ gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800));
	u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800));
	nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000);
}

static void

@@ -42,7 +42,6 @@ gk104_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800));
	u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800));
	nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000);
}

static void

@@ -53,7 +52,6 @@ gk104_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
	u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800));
	u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800));
	nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
-	nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000);
}

void

@@ -90,6 +88,12 @@ gk104_ibus_intr(struct nvkm_subdev *ibus)
			intr1 &= ~stat;
		}
	}
+
+	nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002);
+	nvkm_msec(device, 2000,
+		if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f))
+			break;
+	);
}

static int


@@ -316,9 +316,9 @@ nvkm_mmu_vram(struct nvkm_mmu *mmu)
{
	struct nvkm_device *device = mmu->subdev.device;
	struct nvkm_mm *mm = &device->fb->ram->vram;
-	const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
-	const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
-	const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
+	const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
+	const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
+	const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
	u8 type = NVKM_MEM_KIND * !!mmu->func->kind;
	u8 heap = NVKM_MEM_VRAM;
	int heapM, heapN, heapU;


@@ -1268,6 +1268,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
	card->dai_link = dai_link;
	card->num_links = 1;
	card->name = vc4_hdmi->variant->card_name;
+	card->driver_name = "vc4-hdmi";
	card->dev = dev;
	card->owner = THIS_MODULE;


@@ -910,6 +910,7 @@ config HID_SONY
	depends on NEW_LEDS
	depends on LEDS_CLASS
	select POWER_SUPPLY
+	select CRC32
	help
	  Support for


@@ -387,6 +387,7 @@
#define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401
#define USB_DEVICE_ID_HP_X2 0x074d
#define USB_DEVICE_ID_HP_X2_10_COVER 0x0755
+#define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706

#define USB_VENDOR_ID_ELECOM 0x056e
#define USB_DEVICE_ID_ELECOM_BM084 0x0061


@@ -322,6 +322,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
		USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
	  HID_BATTERY_QUIRK_IGNORE },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+	  HID_BATTERY_QUIRK_IGNORE },
	{}
};


@@ -1869,6 +1869,10 @@ static const struct hid_device_id logi_dj_receivers[] = {
	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
		0xc531),
	 .driver_data = recvr_type_gaming_hidpp},
+	{ /* Logitech G602 receiver (0xc537) */
+	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+		0xc537),
+	 .driver_data = recvr_type_gaming_hidpp},
	{ /* Logitech lightspeed receiver (0xc539) */
	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
		USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1),


@@ -4051,6 +4051,8 @@ static const struct hid_device_id hidpp_devices[] = {
	{ /* MX Master mouse over Bluetooth */
	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012),
	  .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
+	{ /* MX Ergo trackball over Bluetooth */
+	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) },
	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e),
	  .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
	{ /* MX Master 3 mouse over Bluetooth */


@@ -2054,6 +2054,10 @@ static const struct hid_device_id mt_devices[] = {
		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+			USB_VENDOR_ID_SYNAPTICS, 0xce09) },

	/* TopSeed panels */
	{ .driver_data = MT_CLS_TOPSEED,
		MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,


@@ -2542,7 +2542,6 @@ static void hv_kexec_handler(void)
         /* Make sure conn_state is set as hv_synic_cleanup checks for it */
         mb();
         cpuhp_remove_state(hyperv_cpuhp_online);
-        hyperv_cleanup();
 };

 static void hv_crash_handler(struct pt_regs *regs)
@@ -2558,7 +2557,6 @@ static void hv_crash_handler(struct pt_regs *regs)
         cpu = smp_processor_id();
         hv_stimer_cleanup(cpu);
         hv_synic_disable_regs(cpu);
-        hyperv_cleanup();
 };

 static int hv_synic_suspend(void)


@@ -268,6 +268,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
                 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7aa6),
                 .driver_data = (kernel_ulong_t)&intel_th_2x,
         },
+        {
+                /* Alder Lake-P */
+                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6),
+                .driver_data = (kernel_ulong_t)&intel_th_2x,
+        },
         {
                 /* Alder Lake CPU */
                 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),


@@ -64,7 +64,7 @@ static void stm_heartbeat_unlink(struct stm_source_data *data)
 static int stm_heartbeat_init(void)
 {
-        int i, ret = -ENOMEM;
+        int i, ret;

         if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX)
                 return -EINVAL;

@@ -72,8 +72,10 @@ static int stm_heartbeat_init(void)
         for (i = 0; i < nr_devs; i++) {
                 stm_heartbeat[i].data.name =
                         kasprintf(GFP_KERNEL, "heartbeat.%d", i);
-                if (!stm_heartbeat[i].data.name)
+                if (!stm_heartbeat[i].data.name) {
+                        ret = -ENOMEM;
                         goto fail_unregister;
+                }

                 stm_heartbeat[i].data.nr_chans = 1;
                 stm_heartbeat[i].data.link = stm_heartbeat_link;
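
The stm_heartbeat fix above replaces a pre-armed "ret = -ENOMEM" with an assignment at the failure site, so a later success path can never leak a stale error value. A user-space sketch of the idiom (the helper is hypothetical):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The error code is set exactly where the failure happens, never pre-armed. */
static int init_names(char **names, int n)
{
        int i, ret;             /* note: no "ret = -ENOMEM" up front */

        for (i = 0; i < n; i++) {
                names[i] = strdup("heartbeat");
                if (!names[i]) {
                        ret = -ENOMEM;  /* assigned at the failure site */
                        goto fail_free;
                }
        }
        return 0;

fail_free:
        while (--i >= 0)
                free(names[i]);
        return ret;
}

int main(void)
{
        char *names[4];
        int i, ret = init_names(names, 4);

        printf("init_names: %d\n", ret);
        if (ret == 0)
                for (i = 0; i < 4; i++)
                        free(names[i]);
        return 0;
}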


@@ -1026,6 +1026,7 @@ config I2C_SIRF
 config I2C_SPRD
         tristate "Spreadtrum I2C interface"
         depends on I2C=y && (ARCH_SPRD || COMPILE_TEST)
+        depends on COMMON_CLK
         help
           If you say yes to this option, support will be included for the
           Spreadtrum I2C interface.


@@ -347,7 +347,7 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target,
                 if (result)
                         return result;
                 if (recv_len && i == 0) {
-                        if (data[i] > I2C_SMBUS_BLOCK_MAX + 1)
+                        if (data[i] > I2C_SMBUS_BLOCK_MAX)
                                 return -EPROTO;
                         length += data[i];
                 }
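
SMBus block reads begin with a device-supplied length byte that may be at most I2C_SMBUS_BLOCK_MAX (32); the old "> I2C_SMBUS_BLOCK_MAX + 1" comparison let a length of 33 slip through. A sketch of the corrected bound check (user-space C):

#include <stdint.h>
#include <stdio.h>

#define I2C_SMBUS_BLOCK_MAX 32  /* as defined in <uapi/linux/i2c.h> */

/* Returns 0 if the device-supplied length byte is usable, -1 otherwise. */
static int check_block_length(uint8_t len)
{
        /* The old "> I2C_SMBUS_BLOCK_MAX + 1" accepted len == 33. */
        if (len > I2C_SMBUS_BLOCK_MAX)
                return -1;
        return 0;
}

int main(void)
{
        printf("len 32: %d\n", check_block_length(32));  /* accepted */
        printf("len 33: %d\n", check_block_length(33));  /* now rejected */
        return 0;
}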


@@ -80,7 +80,7 @@ static int tegra_bpmp_xlate_flags(u16 flags, u16 *out)
                 flags &= ~I2C_M_RECV_LEN;
         }

-        return (flags != 0) ? -EINVAL : 0;
+        return 0;
 }

 /**
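
The translation loop above strips each flag it understands as it converts it; the fix makes leftover unknown bits non-fatal instead of failing the whole transfer with -EINVAL. A sketch of the mask-and-clear pattern (the flag values are made up, not the real I2C_M_* bits):

#include <stdint.h>
#include <stdio.h>

#define M_RD    0x0001          /* made-up stand-ins for I2C_M_* bits */
#define M_TEN   0x0010
#define OUT_RD  0x01
#define OUT_TEN 0x02

static int xlate_flags(uint16_t flags, uint16_t *out)
{
        *out = 0;
        if (flags & M_RD) {
                *out |= OUT_RD;
                flags &= ~M_RD;
        }
        if (flags & M_TEN) {
                *out |= OUT_TEN;
                flags &= ~M_TEN;
        }
        /* Leftover unknown bits are now ignored instead of -EINVAL. */
        return 0;
}

int main(void)
{
        uint16_t out;

        xlate_flags(M_RD | 0x8000, &out);       /* 0x8000: unknown bit */
        printf("translated: %#x\n", (unsigned)out);
        return 0;
}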


@@ -533,7 +533,7 @@ static int tegra_i2c_poll_register(struct tegra_i2c_dev *i2c_dev,
         void __iomem *addr = i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg);
         u32 val;

-        if (!i2c_dev->atomic_mode)
+        if (!i2c_dev->atomic_mode && !in_irq())
                 return readl_relaxed_poll_timeout(addr, val, !(val & mask),
                                                   delay_us, timeout_us);


@@ -397,16 +397,12 @@ static int tiadc_iio_buffered_hardware_setup(struct device *dev,
         ret = devm_request_threaded_irq(dev, irq, pollfunc_th, pollfunc_bh,
                                 flags, indio_dev->name, indio_dev);
         if (ret)
-                goto error_kfifo_free;
+                return ret;

         indio_dev->setup_ops = setup_ops;
         indio_dev->modes |= INDIO_BUFFER_SOFTWARE;

         return 0;
-
-error_kfifo_free:
-        iio_kfifo_free(indio_dev->buffer);
-        return ret;
 }

 static const char * const chan_name_ain[] = {


@@ -23,35 +23,31 @@
  * @sdata: Sensor data.
  *
  * returns:
- * 0 - no new samples available
- * 1 - new samples available
- * negative - error or unknown
+ * false - no new samples available or read error
+ * true - new samples available
  */
-static int st_sensors_new_samples_available(struct iio_dev *indio_dev,
-                                            struct st_sensor_data *sdata)
+static bool st_sensors_new_samples_available(struct iio_dev *indio_dev,
+                                             struct st_sensor_data *sdata)
 {
         int ret, status;

         /* How would I know if I can't check it? */
         if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr)
-                return -EINVAL;
+                return true;

         /* No scan mask, no interrupt */
         if (!indio_dev->active_scan_mask)
-                return 0;
+                return false;

         ret = regmap_read(sdata->regmap,
                           sdata->sensor_settings->drdy_irq.stat_drdy.addr,
                           &status);
         if (ret < 0) {
                 dev_err(sdata->dev, "error checking samples available\n");
-                return ret;
+                return false;
         }

-        if (status & sdata->sensor_settings->drdy_irq.stat_drdy.mask)
-                return 1;
-
-        return 0;
+        return !!(status & sdata->sensor_settings->drdy_irq.stat_drdy.mask);
 }

 /**
@@ -180,9 +176,15 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
         /* Tell the interrupt handler that we're dealing with edges */
         if (irq_trig == IRQF_TRIGGER_FALLING ||
-            irq_trig == IRQF_TRIGGER_RISING)
+            irq_trig == IRQF_TRIGGER_RISING) {
+                if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) {
+                        dev_err(&indio_dev->dev,
+                                "edge IRQ not supported w/o stat register.\n");
+                        err = -EOPNOTSUPP;
+                        goto iio_trigger_free;
+                }
                 sdata->edge_irq = true;
-        else
+        } else {
                 /*
                  * If we're not using edges (i.e. level interrupts) we
                  * just mask off the IRQ, handle one interrupt, then
@@ -190,6 +192,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
                  * interrupt handler top half again and start over.
                  */
                 irq_trig |= IRQF_ONESHOT;
+        }

         /*
          * If the interrupt pin is Open Drain, by definition this
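
The first hunk above collapses a tri-state int into a bool: "no status register to check" now reads as "assume samples are available" rather than -EINVAL, and read errors degrade to "nothing new". A user-space sketch of the collapsed logic (the register access is stubbed out, not the st_sensors API):

#include <stdbool.h>
#include <stdio.h>

#define DRDY_MASK 0x01

/* Stub for the regmap read: returns <0 on bus error, else fills *status. */
static int read_status(int *status)
{
        *status = DRDY_MASK;    /* pretend the data-ready bit is set */
        return 0;
}

static bool new_samples_available(bool have_status_reg)
{
        int ret, status;

        /* No way to check: claim "available" so the consumer still polls. */
        if (!have_status_reg)
                return true;

        ret = read_status(&status);
        if (ret < 0)
                return false;   /* read errors degrade to "nothing new" */

        return !!(status & DRDY_MASK);
}

int main(void)
{
        printf("available: %d\n", new_samples_available(true));
        return 0;
}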


@@ -187,9 +187,9 @@ static ssize_t ad5504_write_dac_powerdown(struct iio_dev *indio_dev,
                 return ret;

         if (pwr_down)
-                st->pwr_down_mask |= (1 << chan->channel);
-        else
                 st->pwr_down_mask &= ~(1 << chan->channel);
+        else
+                st->pwr_down_mask |= (1 << chan->channel);

         ret = ad5504_spi_write(st, AD5504_ADDR_CTRL,
                         AD5504_DAC_PWRDWN_MODE(st->pwr_down_mode) |
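
The swap of the two mask operations above fixes inverted power-down logic: in this control field a set bit means the channel is powered up, so powering a channel down must clear its bit, and the old code did the opposite. A sketch of the corrected bit handling (user-space C, the register write omitted):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: the field mirrored by pwr_down_mask is
 * "1 = channel powered up", one bit per channel.
 */
static uint8_t pwr_down_mask;

static void set_powerdown(int channel, bool pwr_down)
{
        if (pwr_down)
                pwr_down_mask &= ~(1u << channel);      /* clear = down */
        else
                pwr_down_mask |= (1u << channel);       /* set = up */
}

int main(void)
{
        set_powerdown(0, false);        /* power channel 0 up */
        set_powerdown(1, true);         /* power channel 1 down */
        printf("mask: %#x\n", (unsigned)pwr_down_mask); /* bit 0 only */
        return 0;
}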


@@ -248,6 +248,12 @@ static int mlx90632_set_meas_type(struct regmap *regmap, u8 type)
         if (ret < 0)
                 return ret;

+        /*
+         * Give the mlx90632 some time to reset properly before sending a new I2C command
+         * if this is not done, the following I2C command(s) will not be accepted.
+         */
+        usleep_range(150, 200);
+
         ret = regmap_write_bits(regmap, MLX90632_REG_CONTROL,
                         (MLX90632_CFG_MTYP_MASK | MLX90632_CFG_PWR_MASK),
                         (MLX90632_MTYP_STATUS(type) | MLX90632_PWR_STATUS_HALT));


@@ -131,8 +131,10 @@ static ssize_t default_roce_mode_store(struct config_item *item,
                 return ret;

         gid_type = ib_cache_gid_parse_type_str(buf);
-        if (gid_type < 0)
+        if (gid_type < 0) {
+                cma_configfs_params_put(cma_dev);
                 return -EINVAL;
+        }

         ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type);
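
The added cma_configfs_params_put() closes a reference leak: the function takes a reference on cma_dev before parsing, so the parse-error exit must release it just like every other path. A sketch of the acquire/release pairing (the get/put helpers are hypothetical, not the configfs API):

#include <errno.h>
#include <stdio.h>

static int refs;

static void dev_get(void) { refs++; }   /* hypothetical get/put pair */
static void dev_put(void) { refs--; }

/* parse < 0 simulates ib_cache_gid_parse_type_str() failing */
static int store_mode(int parse)
{
        dev_get();

        if (parse < 0) {
                dev_put();      /* the put the fix adds on this path */
                return -EINVAL;
        }

        /* ... apply the setting ... */
        dev_put();
        return 0;
}

int main(void)
{
        store_mode(-1);
        printf("refs after error path: %d\n", refs);    /* 0: no leak */
        return 0;
}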


@@ -95,8 +95,6 @@ struct ucma_context {
         u64 uid;
         struct list_head list;
-        /* sync between removal event and id destroy, protected by file mut */
-        int destroying;
         struct work_struct close_work;
 };
@@ -122,7 +120,7 @@ static DEFINE_XARRAY_ALLOC(ctx_table);
 static DEFINE_XARRAY_ALLOC(multicast_table);

 static const struct file_operations ucma_fops;
-static int __destroy_id(struct ucma_context *ctx);
+static int ucma_destroy_private_ctx(struct ucma_context *ctx);

 static inline struct ucma_context *_ucma_find_context(int id,
                                                       struct ucma_file *file)
@@ -179,19 +177,14 @@ static void ucma_close_id(struct work_struct *work)
         /* once all inflight tasks are finished, we close all underlying
          * resources. The context is still alive till its explicit destryoing
-         * by its creator.
+         * by its creator. This puts back the xarray's reference.
          */
         ucma_put_ctx(ctx);
         wait_for_completion(&ctx->comp);
         /* No new events will be generated after destroying the id. */
         rdma_destroy_id(ctx->cm_id);

-        /*
-         * At this point ctx->ref is zero so the only place the ctx can be is in
-         * a uevent or in __destroy_id(). Since the former doesn't touch
-         * ctx->cm_id and the latter sync cancels this, there is no races with
-         * this store.
-         */
+        /* Reading the cm_id without holding a positive ref is not allowed */
         ctx->cm_id = NULL;
 }
@@ -204,7 +197,6 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
                 return NULL;

         INIT_WORK(&ctx->close_work, ucma_close_id);
-        refcount_set(&ctx->ref, 1);
         init_completion(&ctx->comp);
         /* So list_del() will work if we don't do ucma_finish_ctx() */
         INIT_LIST_HEAD(&ctx->list);
@@ -218,6 +210,13 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
         return ctx;
 }

+static void ucma_set_ctx_cm_id(struct ucma_context *ctx,
+                               struct rdma_cm_id *cm_id)
+{
+        refcount_set(&ctx->ref, 1);
+        ctx->cm_id = cm_id;
+}
+
 static void ucma_finish_ctx(struct ucma_context *ctx)
 {
         lockdep_assert_held(&ctx->file->mut);
@@ -303,7 +302,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
         ctx = ucma_alloc_ctx(listen_ctx->file);
         if (!ctx)
                 goto err_backlog;
-        ctx->cm_id = cm_id;
+        ucma_set_ctx_cm_id(ctx, cm_id);

         uevent = ucma_create_uevent(listen_ctx, event);
         if (!uevent)
@@ -321,8 +320,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
         return 0;

 err_alloc:
-        xa_erase(&ctx_table, ctx->id);
-        kfree(ctx);
+        ucma_destroy_private_ctx(ctx);
 err_backlog:
         atomic_inc(&listen_ctx->backlog);
         /* Returning error causes the new ID to be destroyed */
@@ -356,8 +354,12 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id,
                 wake_up_interruptible(&ctx->file->poll_wait);
         }

-        if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying)
-                queue_work(system_unbound_wq, &ctx->close_work);
+        if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
+                xa_lock(&ctx_table);
+                if (xa_load(&ctx_table, ctx->id) == ctx)
+                        queue_work(system_unbound_wq, &ctx->close_work);
+                xa_unlock(&ctx_table);
+        }

         return 0;
 }
@@ -461,13 +463,12 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
                 ret = PTR_ERR(cm_id);
                 goto err1;
         }
-        ctx->cm_id = cm_id;
+        ucma_set_ctx_cm_id(ctx, cm_id);

         resp.id = ctx->id;
         if (copy_to_user(u64_to_user_ptr(cmd.response),
                          &resp, sizeof(resp))) {
-                xa_erase(&ctx_table, ctx->id);
-                __destroy_id(ctx);
+                ucma_destroy_private_ctx(ctx);
                 return -EFAULT;
         }
@@ -477,8 +478,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
         return 0;

 err1:
-        xa_erase(&ctx_table, ctx->id);
-        kfree(ctx);
+        ucma_destroy_private_ctx(ctx);
         return ret;
 }
@@ -516,68 +516,73 @@ static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
         rdma_unlock_handler(mc->ctx->cm_id);
 }

-/*
- * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At
- * this point, no new events will be reported from the hardware. However, we
- * still need to cleanup the UCMA context for this ID. Specifically, there
- * might be events that have not yet been consumed by the user space software.
- * mutex. After that we release them as needed.
- */
-static int ucma_free_ctx(struct ucma_context *ctx)
+static int ucma_cleanup_ctx_events(struct ucma_context *ctx)
 {
         int events_reported;
         struct ucma_event *uevent, *tmp;
         LIST_HEAD(list);

-        ucma_cleanup_multicast(ctx);
-
-        /* Cleanup events not yet reported to the user. */
+        /* Cleanup events not yet reported to the user.*/
         mutex_lock(&ctx->file->mut);
         list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) {
-                if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx)
+                if (uevent->ctx != ctx)
+                        continue;
+                if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
+                    xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id,
+                               uevent->conn_req_ctx, XA_ZERO_ENTRY,
+                               GFP_KERNEL) == uevent->conn_req_ctx) {
                         list_move_tail(&uevent->list, &list);
+                        continue;
+                }
+                list_del(&uevent->list);
+                kfree(uevent);
         }
         list_del(&ctx->list);
         events_reported = ctx->events_reported;
         mutex_unlock(&ctx->file->mut);

         /*
-         * If this was a listening ID then any connections spawned from it
-         * that have not been delivered to userspace are cleaned up too.
-         * Must be done outside any locks.
+         * If this was a listening ID then any connections spawned from it that
+         * have not been delivered to userspace are cleaned up too. Must be done
+         * outside any locks.
          */
         list_for_each_entry_safe(uevent, tmp, &list, list) {
-                list_del(&uevent->list);
-                if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
-                    uevent->conn_req_ctx != ctx)
-                        __destroy_id(uevent->conn_req_ctx);
+                ucma_destroy_private_ctx(uevent->conn_req_ctx);
                 kfree(uevent);
         }
-
-        mutex_destroy(&ctx->mutex);
-        kfree(ctx);
         return events_reported;
 }
-static int __destroy_id(struct ucma_context *ctx)
+/*
+ * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (ie
+ * the ctx is not public to the user). This either because:
+ * - ucma_finish_ctx() hasn't been called
+ * - xa_cmpxchg() succeed to remove the entry (only one thread can succeed)
+ */
+static int ucma_destroy_private_ctx(struct ucma_context *ctx)
 {
-        /*
-         * If the refcount is already 0 then ucma_close_id() has already
-         * destroyed the cm_id, otherwise holding the refcount keeps cm_id
-         * valid. Prevent queue_work() from being called.
-         */
-        if (refcount_inc_not_zero(&ctx->ref)) {
-                rdma_lock_handler(ctx->cm_id);
-                ctx->destroying = 1;
-                rdma_unlock_handler(ctx->cm_id);
-                ucma_put_ctx(ctx);
-        }
+        int events_reported;

+        /*
+         * Destroy the underlying cm_id. New work queuing is prevented now by
+         * the removal from the xarray. Once the work is cancled ref will either
+         * be 0 because the work ran to completion and consumed the ref from the
+         * xarray, or it will be positive because we still have the ref from the
+         * xarray. This can also be 0 in cases where cm_id was never set
+         */
         cancel_work_sync(&ctx->close_work);
-        /* At this point it's guaranteed that there is no inflight closing task */
-        if (ctx->cm_id)
+        if (refcount_read(&ctx->ref))
                 ucma_close_id(&ctx->close_work);
-        return ucma_free_ctx(ctx);
+
+        events_reported = ucma_cleanup_ctx_events(ctx);
+        ucma_cleanup_multicast(ctx);
+
+        WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL,
+                           GFP_KERNEL) != NULL);
+        mutex_destroy(&ctx->mutex);
+        kfree(ctx);
+        return events_reported;
 }

 static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
@@ -596,14 +601,17 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
         xa_lock(&ctx_table);
         ctx = _ucma_find_context(cmd.id, file);
-        if (!IS_ERR(ctx))
-                __xa_erase(&ctx_table, ctx->id);
+        if (!IS_ERR(ctx)) {
+                if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
+                                 GFP_KERNEL) != ctx)
+                        ctx = ERR_PTR(-ENOENT);
+        }
         xa_unlock(&ctx_table);

         if (IS_ERR(ctx))
                 return PTR_ERR(ctx);
-        resp.events_reported = __destroy_id(ctx);
+        resp.events_reported = ucma_destroy_private_ctx(ctx);
         if (copy_to_user(u64_to_user_ptr(cmd.response),
                          &resp, sizeof(resp)))
                 ret = -EFAULT;
@@ -1777,15 +1785,16 @@ static int ucma_close(struct inode *inode, struct file *filp)
          * prevented by this being a FD release function. The list_add_tail() in
          * ucma_connect_event_handler() can run concurrently, however it only
          * adds to the list *after* a listening ID. By only reading the first of
-         * the list, and relying on __destroy_id() to block
+         * the list, and relying on ucma_destroy_private_ctx() to block
          * ucma_connect_event_handler(), no additional locking is needed.
          */
         while (!list_empty(&file->ctx_list)) {
                 struct ucma_context *ctx = list_first_entry(
                         &file->ctx_list, struct ucma_context, list);
-                xa_erase(&ctx_table, ctx->id);
-                __destroy_id(ctx);
+                WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
+                                   GFP_KERNEL) != ctx);
+                ucma_destroy_private_ctx(ctx);
         }
         kfree(file);
         return 0;
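
The whole ucma rework above hinges on one idea: exactly one thread may swing the context's xarray slot from the live pointer to XA_ZERO_ENTRY, and only that winner proceeds to tear the context down; every other racer sees the compare-exchange fail and backs off. A user-space sketch of the claim-before-destroy rule using C11 atomics (NULL stands in for XA_ZERO_ENTRY; this is not the xarray API):

#include <stdatomic.h>
#include <stdio.h>

struct ctx {
        int id;
};

/* Stand-in for the xarray slot; NULL plays the role of XA_ZERO_ENTRY. */
static _Atomic(struct ctx *) slot;

/* Returns 1 if this caller won the right to destroy ctx. */
static int claim_for_destroy(struct ctx *ctx)
{
        struct ctx *expected = ctx;

        /* Only one thread can swing the slot from ctx to NULL. */
        return atomic_compare_exchange_strong(&slot, &expected, NULL);
}

int main(void)
{
        struct ctx c = { .id = 1 };

        atomic_store(&slot, &c);
        printf("first claim:  %d\n", claim_for_destroy(&c));    /* 1 */
        printf("second claim: %d\n", claim_for_destroy(&c));    /* 0 */
        return 0;
}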

Some files were not shown because too many files have changed in this diff.