Merge 5.10.28 into android12-5.10
Changes in 5.10.28
	arm64: mm: correct the inside linear map range during hotplug check
	bpf: Fix fexit trampoline.
	virtiofs: Fail dax mount if device does not support it
	ext4: shrink race window in ext4_should_retry_alloc()
	ext4: fix bh ref count on error paths
	fs: nfsd: fix kconfig dependency warning for NFSD_V4
	rpc: fix NULL dereference on kmalloc failure
	iomap: Fix negative assignment to unsigned sis->pages in iomap_swapfile_activate
	ASoC: rt1015: fix i2c communication error
	ASoC: rt5640: Fix dac- and adc- vol-tlv values being off by a factor of 10
	ASoC: rt5651: Fix dac- and adc- vol-tlv values being off by a factor of 10
	ASoC: sgtl5000: set DAP_AVC_CTRL register to correct default value on probe
	ASoC: es8316: Simplify adc_pga_gain_tlv table
	ASoC: soc-core: Prevent warning if no DMI table is present
	ASoC: cs42l42: Fix Bitclock polarity inversion
	ASoC: cs42l42: Fix channel width support
	ASoC: cs42l42: Fix mixer volume control
	ASoC: cs42l42: Always wait at least 3ms after reset
	NFSD: fix error handling in NFSv4.0 callbacks
	kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing
	vhost: Fix vhost_vq_reset()
	io_uring: fix ->flags races by linked timeouts
	scsi: st: Fix a use after free in st_open()
	scsi: qla2xxx: Fix broken #endif placement
	staging: comedi: cb_pcidas: fix request_irq() warn
	staging: comedi: cb_pcidas64: fix request_irq() warn
	ASoC: rt5659: Update MCLK rate in set_sysclk()
	ASoC: rt711: add snd_soc_component remove callback
	thermal/core: Add NULL pointer check before using cooling device stats
	locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
	locking/ww_mutex: Fix acquire/release imbalance in ww_acquire_init()/ww_acquire_fini()
	nvmet-tcp: fix kmap leak when data digest in use
	io_uring: imply MSG_NOSIGNAL for send[msg]()/recv[msg]() calls
	static_call: Align static_call_is_init() patching condition
	ext4: do not iput inode under running transaction in ext4_rename()
	io_uring: call req_set_fail_links() on short send[msg]()/recv[msg]() with MSG_WAITALL
	net: mvpp2: fix interrupt mask/unmask skip condition
	flow_dissector: fix TTL and TOS dissection on IPv4 fragments
	can: dev: move driver related infrastructure into separate subdir
	net: introduce CAN specific pointer in the struct net_device
	can: tcan4x5x: fix max register value
	brcmfmac: clear EAP/association status bits on linkdown events
	ath11k: add ieee80211_unregister_hw to avoid kernel crash caused by NULL pointer
	rtw88: coex: 8821c: correct antenna switch function
	netdevsim: dev: Initialize FIB module after debugfs
	iwlwifi: pcie: don't disable interrupts for reg_lock
	ath10k: hold RCU lock when calling ieee80211_find_sta_by_ifaddr()
	net: ethernet: aquantia: Handle error cleanup of start on open
	appletalk: Fix skb allocation size in loopback case
	net: ipa: remove two unused register definitions
	net: ipa: fix register write command validation
	net: wan/lmc: unregister device when no matching device is found
	net: 9p: advance iov on empty read
	bpf: Remove MTU check in __bpf_skb_max_len
	ACPI: tables: x86: Reserve memory occupied by ACPI tables
	ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()
	ALSA: usb-audio: Apply sample rate quirk to Logitech Connect
	ALSA: hda: Re-add dropped snd_poewr_change_state() calls
	ALSA: hda: Add missing sanity checks in PM prepare/complete callbacks
	ALSA: hda/realtek: fix a determine_headset_type issue for a Dell AIO
	ALSA: hda/realtek: call alc_update_headset_mode() in hp_automute_hook
	ALSA: hda/realtek: fix mute/micmute LEDs for HP 640 G8
	xtensa: fix uaccess-related livelock in do_page_fault
	xtensa: move coprocessor_flush to the .text section
	KVM: SVM: load control fields from VMCB12 before checking them
	KVM: SVM: ensure that EFER.SVME is set when running nested guest or on nested vmexit
	PM: runtime: Fix race getting/putting suppliers at probe
	PM: runtime: Fix ordering in pm_runtime_get_suppliers()
	tracing: Fix stack trace event size
	s390/vdso: copy tod_steering_delta value to vdso_data page
	s390/vdso: fix tod_steering_delta type
	mm: fix race by making init_zero_pfn() early_initcall
	drm/amdkfd: dqm fence memory corruption
	drm/amdgpu: fix offset calculation in amdgpu_vm_bo_clear_mappings()
	drm/amdgpu: check alignment on CPU page for bo map
	reiserfs: update reiserfs_xattrs_initialized() condition
	drm/imx: fix memory leak when fails to init
	drm/tegra: dc: Restore coupling of display controllers
	drm/tegra: sor: Grab runtime PM reference across reset
	vfio/nvlink: Add missing SPAPR_TCE_IOMMU depends
	pinctrl: rockchip: fix restore error in resume
	extcon: Add stubs for extcon_register_notifier_all() functions
	extcon: Fix error handling in extcon_dev_register
	firmware: stratix10-svc: reset COMMAND_RECONFIG_FLAG_PARTIAL to 0
	usb: dwc3: pci: Enable dis_uX_susphy_quirk for Intel Merrifield
	video: hyperv_fb: Fix a double free in hvfb_probe
	firewire: nosy: Fix a use-after-free bug in nosy_ioctl()
	usbip: vhci_hcd fix shift out-of-bounds in vhci_hub_control()
	USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE modem
	usb: musb: Fix suspend with devices connected for a64
	usb: xhci-mtk: fix broken streams issue on 0.96 xHCI
	cdc-acm: fix BREAK rx code path adding necessary calls
	USB: cdc-acm: untangle a circular dependency between callback and softint
	USB: cdc-acm: downgrade message to debug
	USB: cdc-acm: fix double free on probe failure
	USB: cdc-acm: fix use-after-free after probe failure
	usb: gadget: udc: amd5536udc_pci fix null-ptr-dereference
	usb: dwc2: Fix HPRT0.PrtSusp bit setting for HiKey 960 board.
	usb: dwc2: Prevent core suspend when port connection flag is 0
	usb: dwc3: qcom: skip interconnect init for ACPI probe
	usb: dwc3: gadget: Clear DEP flags after stop transfers in ep disable
	soc: qcom-geni-se: Cleanup the code to remove proxy votes
	staging: rtl8192e: Fix incorrect source in memcpy()
	staging: rtl8192e: Change state information from u16 to u8
	driver core: clear deferred probe reason on probe retry
	drivers: video: fbcon: fix NULL dereference in fbcon_cursor()
	riscv: evaluate put_user() arg before enabling user access
	Revert "kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing"
	bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG
	Linux 5.10.28

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ifdbbeda8de3ee22a7aa3f5d3b10becf0aba1a124
 Makefile | 2 +-
diff --git a/Makefile b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 27
+SUBLEVEL = 28
 EXTRAVERSION =
 NAME = Dare mighty things
 
@@ -1450,14 +1450,30 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 
 static bool inside_linear_region(u64 start, u64 size)
 {
+	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
+	u64 end_linear_pa = __pa(PAGE_END - 1);
+
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		/*
+		 * Check for a wrap, it is possible because of randomized linear
+		 * mapping the start physical address is actually bigger than
+		 * the end physical address. In this case set start to zero
+		 * because [0, end_linear_pa] range must still be able to cover
+		 * all addressable physical addresses.
+		 */
+		if (start_linear_pa > end_linear_pa)
+			start_linear_pa = 0;
+	}
+
+	WARN_ON(start_linear_pa > end_linear_pa);
+
 	/*
 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
 	 * accommodating both its ends but excluding PAGE_END. Max physical
 	 * range which can be mapped inside this linear mapping range, must
 	 * also be derived from its end points.
 	 */
-	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
-	       (start + size - 1) <= __pa(PAGE_END - 1);
+	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
 }
 
 int arch_add_memory(int nid, u64 start, u64 size,
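As an aside, the wrap-aware range check this hunk introduces can be exercised outside the kernel. The following is a minimal, illustrative C sketch; the parameters stand in for __pa(_PAGE_OFFSET(vabits_actual)), __pa(PAGE_END - 1) and CONFIG_RANDOMIZE_BASE, and the values are invented for the example:

    #include <assert.h>
    #include <stdint.h>

    static int inside_linear(uint64_t start, uint64_t size,
                             uint64_t start_linear_pa, uint64_t end_linear_pa,
                             int randomize_base)
    {
        /* With a randomized linear map the start PA can end up above the
         * end PA; fall back to [0, end_linear_pa] in that case. */
        if (randomize_base && start_linear_pa > end_linear_pa)
            start_linear_pa = 0;

        return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
    }

    int main(void)
    {
        /* Wrapped case: linear-map start PA above its end PA. */
        assert(inside_linear(0x1000, 0x1000, 0xf000, 0x8000, 1));
        /* The same hotplug range is wrongly rejected without the wrap check. */
        assert(!inside_linear(0x1000, 0x1000, 0xf000, 0x8000, 0));
        return 0;
    }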
@@ -306,7 +306,9 @@ do { \
  * data types like structures or arrays.
  *
  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
- * to the result of dereferencing @ptr.
+ * to the result of dereferencing @ptr. The value of @x is copied to avoid
+ * re-ordering where @x is evaluated inside the block that enables user-space
+ * access (thus bypassing user space protection if @x is a function).
  *
  * Caller must check the pointer with access_ok() before calling this
  * function.
@@ -316,12 +318,13 @@ do { \
 #define __put_user(x, ptr)					\
 ({								\
 	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__typeof__(*__gu_ptr) __val = (x);			\
 	long __pu_err = 0;					\
 								\
 	__chk_user_ptr(__gu_ptr);				\
 								\
 	__enable_user_access();					\
-	__put_user_nocheck(x, __gu_ptr, __pu_err);		\
+	__put_user_nocheck(__val, __gu_ptr, __pu_err);		\
 	__disable_user_access();				\
 								\
 	__pu_err;						\
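The hazard this riscv change guards against is generic to any macro that evaluates an argument inside a privileged window. Below is a hedged userspace sketch of the same pattern; every name in it is invented for illustration and none of this is kernel code:

    #include <stdio.h>

    static int window_open;
    static int sink;

    static void enable_window(void)  { window_open = 1; }
    static void disable_window(void) { window_open = 0; }

    /* Buggy pattern: the argument expression runs inside the window. */
    #define PUT_BUGGY(x) do { enable_window(); sink = (x); disable_window(); } while (0)

    /* Fixed pattern: evaluate the argument first, then open the window. */
    #define PUT_FIXED(x) do { int __val = (x); enable_window(); sink = __val; disable_window(); } while (0)

    static int helper(void)
    {
        /* With PUT_BUGGY this runs while the window is open. */
        return window_open ? -1 : 42;
    }

    int main(void)
    {
        PUT_BUGGY(helper());
        printf("buggy: %d\n", sink);  /* -1: helper observed the open window */
        PUT_FIXED(helper());
        printf("fixed: %d\n", sink);  /* 42 */
        return 0;
    }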
@@ -6,7 +6,7 @@
 #include <vdso/datapage.h>
 
 struct arch_vdso_data {
-	__u64 tod_steering_delta;
+	__s64 tod_steering_delta;
 	__u64 tod_steering_end;
 };
 
@@ -398,6 +398,7 @@ static void clock_sync_global(unsigned long long delta)
 		       tod_steering_delta);
 	tod_steering_end = now + (abs(tod_steering_delta) << 15);
 	vdso_data->arch_data.tod_steering_end = tod_steering_end;
+	vdso_data->arch_data.tod_steering_delta = tod_steering_delta;
 
 	/* Update LPAR offset. */
 	if (ptff_query(PTFF_QTO) && ptff(&qto, sizeof(qto), PTFF_QTO) == 0)
@@ -132,6 +132,7 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
+bool wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
@@ -1554,10 +1554,18 @@ void __init acpi_boot_table_init(void)
 	/*
 	 * Initialize the ACPI boot-time table parser.
 	 */
-	if (acpi_table_init()) {
+	if (acpi_locate_initial_tables())
 		disable_acpi();
-		return;
-	}
+	else
+		acpi_reserve_initial_tables();
+}
+
+int __init early_acpi_boot_init(void)
+{
+	if (acpi_disabled)
+		return 1;
+
+	acpi_table_init_complete();
 
 	acpi_table_parse(ACPI_SIG_BOOT, acpi_parse_sbf);
 
@@ -1570,18 +1578,9 @@ void __init acpi_boot_table_init(void)
 		} else {
 			printk(KERN_WARNING PREFIX "Disabling ACPI support\n");
 			disable_acpi();
-			return;
+			return 1;
 		}
 	}
-}
 
-int __init early_acpi_boot_init(void)
-{
-	/*
-	 * If acpi_disabled, bail out
-	 */
-	if (acpi_disabled)
-		return 1;
-
 	/*
 	 * Process the Multiple APIC Description Table (MADT), if present
@@ -1051,6 +1051,9 @@ void __init setup_arch(char **cmdline_p)
 
 	cleanup_highmap();
 
+	/* Look for ACPI tables and reserve memory occupied by them. */
+	acpi_boot_table_init();
+
 	memblock_set_current_limit(ISA_END_ADDRESS);
 	e820__memblock_setup();
 
@@ -1136,11 +1139,6 @@ void __init setup_arch(char **cmdline_p)
 
 	early_platform_quirks();
 
-	/*
-	 * Parse the ACPI tables for possible boot-time SMP configuration.
-	 */
-	acpi_boot_table_init();
-
 	early_acpi_boot_init();
 
 	initmem_init();
@@ -1655,7 +1655,7 @@ void play_dead_common(void)
 	local_irq_disable();
 }
 
-static bool wakeup_cpu0(void)
+bool wakeup_cpu0(void)
 {
 	if (smp_processor_id() == 0 && enable_start_cpu0)
 		return true;
@@ -246,11 +246,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 	return true;
 }
 
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static bool nested_vmcb_check_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	bool vmcb12_lma;
 
+	/*
+	 * FIXME: these should be done after copying the fields,
+	 * to avoid TOC/TOU races.  For these save area checks
+	 * the possible damage is limited since kvm_set_cr0 and
+	 * kvm_set_cr4 handle failure; EFER_SVME is an exception
+	 * so it is force-set later in nested_prepare_vmcb_save.
+	 */
 	if ((vmcb12->save.efer & EFER_SVME) == 0)
 		return false;
 
@@ -271,7 +278,7 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
 	if (kvm_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
 		return false;
 
-	return nested_vmcb_check_controls(&vmcb12->control);
+	return true;
 }
 
 static void load_nested_vmcb_control(struct vcpu_svm *svm,
@@ -396,7 +403,14 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 	svm->vmcb->save.gdtr = vmcb12->save.gdtr;
 	svm->vmcb->save.idtr = vmcb12->save.idtr;
 	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags);
-	svm_set_efer(&svm->vcpu, vmcb12->save.efer);
+
+	/*
+	 * Force-set EFER_SVME even though it is checked earlier on the
+	 * VMCB12, because the guest can flip the bit between the check
+	 * and now.  Clearing EFER_SVME would call svm_free_nested.
+	 */
+	svm_set_efer(&svm->vcpu, vmcb12->save.efer | EFER_SVME);
+
 	svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
 	svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
 	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
@@ -454,7 +468,6 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
 	int ret;
 
 	svm->nested.vmcb12_gpa = vmcb12_gpa;
-	load_nested_vmcb_control(svm, &vmcb12->control);
 	nested_prepare_vmcb_save(svm, vmcb12);
 	nested_prepare_vmcb_control(svm);
 
@@ -501,7 +514,10 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	if (WARN_ON_ONCE(!svm->nested.initialized))
 		return -EINVAL;
 
-	if (!nested_vmcb_checks(svm, vmcb12)) {
+	load_nested_vmcb_control(svm, &vmcb12->control);
+
+	if (!nested_vmcb_check_save(svm, vmcb12) ||
+	    !nested_vmcb_check_controls(&svm->nested.ctl)) {
 		vmcb12->control.exit_code = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
 		vmcb12->control.exit_info_1 = 0;
@@ -1205,6 +1221,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	 */
 	if (!(save->cr0 & X86_CR0_PG))
 		goto out_free;
+	if (!(save->efer & EFER_SVME))
+		goto out_free;
 
 	/*
 	 * All checks done, we can enter guest mode.  L1 control fields
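The TOC/TOU race named in the FIXME above can be illustrated in plain C. In this hedged sketch, 'shared' merely stands in for the guest-writable VMCB12 control area; the fix is the same shape as the kernel change, copy first, then check and use the snapshot:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ctl { uint32_t asid; };            /* stand-in for guest memory */

    static int check(const struct ctl *c) { return c->asid != 0; }

    int main(void)
    {
        struct ctl shared = { .asid = 1 };
        struct ctl snap;

        /* Racy: check the shared copy, then use it later (TOC/TOU). */
        if (check(&shared)) {
            shared.asid = 0;                  /* "guest" flips it in between */
            printf("racy path uses asid=%u\n", shared.asid);   /* 0 */
        }

        /* Safe: snapshot first, then check and use the snapshot. */
        shared.asid = 1;
        memcpy(&snap, &shared, sizeof(snap));
        if (check(&snap)) {
            shared.asid = 0;                  /* no effect on the snapshot */
            printf("safe path uses asid=%u\n", snap.asid);     /* 1 */
        }
        return 0;
    }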
@@ -1735,7 +1735,7 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
  * add rsp, 8			// skip eth_type_trans's frame
  * ret				// return to its caller
  */
-int arch_prepare_bpf_trampoline(void *image, void *image_end,
+int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
 				struct bpf_tramp_progs *tprogs,
 				void *orig_call)
@@ -1774,6 +1774,15 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 
 	save_regs(m, &prog, nr_args, stack_size);
 
+	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		/* arg1: mov rdi, im */
+		emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
+		if (emit_call(&prog, __bpf_tramp_enter, prog)) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
+	}
+
 	if (fentry->nr_progs)
 		if (invoke_bpf(m, &prog, fentry, stack_size))
 			return -EINVAL;
@@ -1792,8 +1801,7 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 	}
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
-		if (fentry->nr_progs || fmod_ret->nr_progs)
-			restore_regs(m, &prog, nr_args, stack_size);
+		restore_regs(m, &prog, nr_args, stack_size);
 
 		/* call original function */
 		if (emit_call(&prog, orig_call, prog)) {
@@ -1802,6 +1810,9 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 		}
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+		im->ip_after_call = prog;
+		memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
+		prog += X86_PATCH_SIZE;
 	}
 
 	if (fmod_ret->nr_progs) {
@@ -1832,9 +1843,17 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 	 * the return value is only updated on the stack and still needs to be
 	 * restored to R0.
 	 */
-	if (flags & BPF_TRAMP_F_CALL_ORIG)
+	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		im->ip_epilogue = prog;
+		/* arg1: mov rdi, im */
+		emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
+		if (emit_call(&prog, __bpf_tramp_exit, prog)) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
 		/* restore original return value back into RAX */
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+	}
 
 	EMIT1(0x5B); /* pop rbx */
 	EMIT1(0xC9); /* leave */
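Worth noting: the JIT passes the 64-bit trampoline-image pointer to emit_mov_imm64() as two 32-bit halves, "(long) im >> 32" and "(u32) (long) im". A tiny illustrative C sketch of that split and its reassembly (the value is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t im = 0x1122334455667788ULL;
        uint32_t hi = (uint32_t)(im >> 32);   /* upper half of the immediate */
        uint32_t lo = (uint32_t)im;           /* lower half */

        uint64_t rebuilt = ((uint64_t)hi << 32) | lo;
        printf("hi=0x%08x lo=0x%08x rebuilt=0x%016llx\n",
               hi, lo, (unsigned long long)rebuilt);
        return 0;
    }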
@@ -99,37 +99,6 @@
 	LOAD_CP_REGS_TAB(6)
 	LOAD_CP_REGS_TAB(7)
 
-/*
- * coprocessor_flush(struct thread_info*, index)
- *                             a2        a3
- *
- * Save coprocessor registers for coprocessor 'index'.
- * The register values are saved to or loaded from the coprocessor area
- * inside the task_info structure.
- *
- * Note that this function doesn't update the coprocessor_owner information!
- *
- */
-
-ENTRY(coprocessor_flush)
-
-	/* reserve 4 bytes on stack to save a0 */
-	abi_entry(4)
-
-	s32i	a0, a1, 0
-	movi	a0, .Lsave_cp_regs_jump_table
-	addx8	a3, a3, a0
-	l32i	a4, a3, 4
-	l32i	a3, a3, 0
-	add	a2, a2, a4
-	beqz	a3, 1f
-	callx0	a3
-1:	l32i	a0, a1, 0
-
-	abi_ret(4)
-
-ENDPROC(coprocessor_flush)
-
 /*
  * Entry condition:
  *
@@ -245,6 +214,39 @@ ENTRY(fast_coprocessor)
 
 ENDPROC(fast_coprocessor)
 
+	.text
+
+/*
+ * coprocessor_flush(struct thread_info*, index)
+ *                             a2        a3
+ *
+ * Save coprocessor registers for coprocessor 'index'.
+ * The register values are saved to or loaded from the coprocessor area
+ * inside the task_info structure.
+ *
+ * Note that this function doesn't update the coprocessor_owner information!
+ *
+ */
+
+ENTRY(coprocessor_flush)
+
+	/* reserve 4 bytes on stack to save a0 */
+	abi_entry(4)
+
+	s32i	a0, a1, 0
+	movi	a0, .Lsave_cp_regs_jump_table
+	addx8	a3, a3, a0
+	l32i	a4, a3, 4
+	l32i	a3, a3, 0
+	add	a2, a2, a4
+	beqz	a3, 1f
+	callx0	a3
+1:	l32i	a0, a1, 0
+
+	abi_ret(4)
+
+ENDPROC(coprocessor_flush)
+
 	.data
 
 ENTRY(coprocessor_owner)
@@ -112,8 +112,11 @@ good_area:
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto bad_page_fault;
 		return;
+	}
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
@@ -29,6 +29,7 @@
 */
 #ifdef CONFIG_X86
 #include <asm/apic.h>
+#include <asm/cpu.h>
 #endif
 
 #define ACPI_PROCESSOR_CLASS "processor"
@@ -542,6 +543,12 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
 			wait_for_freeze();
 		} else
 			return -ENODEV;
+
+#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
+		/* If NMI wants to wake up CPU0, start CPU0. */
+		if (wakeup_cpu0())
+			start_cpu0();
+#endif
 	}
 
 	/* Never reached */
@@ -780,7 +780,7 @@ acpi_status acpi_os_table_override(struct acpi_table_header *existing_table,
 }
 
 /*
- * acpi_table_init()
+ * acpi_locate_initial_tables()
 *
 * find RSDP, find and checksum SDT/XSDT.
 * checksum all tables, print SDT/XSDT
@@ -788,7 +788,7 @@ acpi_status acpi_os_table_override(struct acpi_table_header *existing_table,
 * result: sdt_entry[] is initialized
 */
 
-int __init acpi_table_init(void)
+int __init acpi_locate_initial_tables(void)
 {
 	acpi_status status;
 
@@ -803,9 +803,45 @@ int __init acpi_table_init(void)
 	status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0);
 	if (ACPI_FAILURE(status))
 		return -EINVAL;
-	acpi_table_initrd_scan();
 
+	return 0;
+}
+
+void __init acpi_reserve_initial_tables(void)
+{
+	int i;
+
+	for (i = 0; i < ACPI_MAX_TABLES; i++) {
+		struct acpi_table_desc *table_desc = &initial_tables[i];
+		u64 start = table_desc->address;
+		u64 size = table_desc->length;
+
+		if (!start || !size)
+			break;
+
+		pr_info("Reserving %4s table memory at [mem 0x%llx-0x%llx]\n",
+			table_desc->signature.ascii, start, start + size - 1);
+
+		memblock_reserve(start, size);
+	}
+}
+
+void __init acpi_table_init_complete(void)
+{
+	acpi_table_initrd_scan();
 	check_multiple_madt();
+}
+
+int __init acpi_table_init(void)
+{
+	int ret;
+
+	ret = acpi_locate_initial_tables();
+	if (ret)
+		return ret;
+
+	acpi_table_init_complete();
 
 	return 0;
 }
 
@@ -97,6 +97,9 @@ static void deferred_probe_work_func(struct work_struct *work)
 
 	get_device(dev);
 
+	kfree(dev->p->deferred_probe_reason);
+	dev->p->deferred_probe_reason = NULL;
+
 	/*
 	 * Drop the mutex while probing each device; the probe path may
 	 * manipulate the deferred list
@@ -1690,8 +1690,8 @@ void pm_runtime_get_suppliers(struct device *dev)
 			      device_links_read_lock_held())
 		if (link->flags & DL_FLAG_PM_RUNTIME) {
 			link->supplier_preactivated = true;
-			refcount_inc(&link->rpm_active);
 			pm_runtime_get_sync(link->supplier);
+			refcount_inc(&link->rpm_active);
 		}
 
 	device_links_read_unlock(idx);
@@ -1704,6 +1704,8 @@ void pm_runtime_get_suppliers(struct device *dev)
 void pm_runtime_put_suppliers(struct device *dev)
 {
 	struct device_link *link;
+	unsigned long flags;
+	bool put;
 	int idx;
 
 	idx = device_links_read_lock();
@@ -1712,7 +1714,11 @@ void pm_runtime_put_suppliers(struct device *dev)
 			      device_links_read_lock_held())
 		if (link->supplier_preactivated) {
 			link->supplier_preactivated = false;
-			if (refcount_dec_not_one(&link->rpm_active))
+			spin_lock_irqsave(&dev->power.lock, flags);
+			put = pm_runtime_status_suspended(dev) &&
+			      refcount_dec_not_one(&link->rpm_active);
+			spin_unlock_irqrestore(&dev->power.lock, flags);
+			if (put)
 				pm_runtime_put(link->supplier);
 		}
 
@@ -1241,6 +1241,7 @@ int extcon_dev_register(struct extcon_dev *edev)
 				sizeof(*edev->nh), GFP_KERNEL);
 	if (!edev->nh) {
 		ret = -ENOMEM;
+		device_unregister(&edev->dev);
 		goto err_dev;
 	}
 
@@ -346,6 +346,7 @@ nosy_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	struct client *client = file->private_data;
 	spinlock_t *client_list_lock = &client->lynx->client_list_lock;
 	struct nosy_stats stats;
+	int ret;
 
 	switch (cmd) {
 	case NOSY_IOC_GET_STATS:
@@ -360,11 +361,15 @@ nosy_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 		return 0;
 
 	case NOSY_IOC_START:
+		ret = -EBUSY;
 		spin_lock_irq(client_list_lock);
-		list_add_tail(&client->link, &client->lynx->client_list);
+		if (list_empty(&client->link)) {
+			list_add_tail(&client->link, &client->lynx->client_list);
+			ret = 0;
+		}
 		spin_unlock_irq(client_list_lock);
 
-		return 0;
+		return ret;
 
 	case NOSY_IOC_STOP:
 		spin_lock_irq(client_list_lock);
@@ -2223,8 +2223,8 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 	uint64_t eaddr;
 
 	/* validate the parameters */
-	if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||
-	    size == 0 || size & AMDGPU_GPU_PAGE_MASK)
+	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+	    size == 0 || size & ~PAGE_MASK)
 		return -EINVAL;
 
 	/* make sure object fit at this offset */
@@ -2289,8 +2289,8 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
 	int r;
 
 	/* validate the parameters */
-	if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||
-	    size == 0 || size & AMDGPU_GPU_PAGE_MASK)
+	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+	    size == 0 || size & ~PAGE_MASK)
 		return -EINVAL;
 
 	/* make sure object fit at this offset */
@@ -2435,7 +2435,7 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
 		after->start = eaddr + 1;
 		after->last = tmp->last;
 		after->offset = tmp->offset;
-		after->offset += after->start - tmp->start;
+		after->offset += (after->start - tmp->start) << PAGE_SHIFT;
 		after->flags = tmp->flags;
 		after->bo_va = tmp->bo_va;
 		list_add(&after->list, &tmp->bo_va->invalids);
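The validation above now masks against the CPU page mask rather than AMDGPU_GPU_PAGE_MASK, since GPU pages can be smaller than CPU pages. As a hedged aside, the "addr & ~PAGE_MASK" alignment idiom it relies on can be shown in isolation (a 4 KiB PAGE_SHIFT is assumed here purely for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  (~((1UL << PAGE_SHIFT) - 1))

    /* addr & ~PAGE_MASK keeps only the low bits; nonzero means misaligned. */
    static int page_aligned(uint64_t addr)
    {
        return (addr & ~PAGE_MASK) == 0;
    }

    int main(void)
    {
        printf("%d\n", page_aligned(0x2000));  /* 1: 8 KiB, page aligned */
        printf("%d\n", page_aligned(0x2100));  /* 0: misaligned */
        return 0;
    }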
@@ -155,7 +155,7 @@ static int dbgdev_diq_submit_ib(struct kfd_dbgdev *dbgdev,
 
 	/* Wait till CP writes sync code: */
 	status = amdkfd_fence_wait_timeout(
-			(unsigned int *) rm_state,
+			rm_state,
 			QUEUESTATE__ACTIVE, 1500);
 
 	kfd_gtt_sa_free(dbgdev->dev, mem_obj);
@@ -1167,7 +1167,7 @@ static int start_cpsch(struct device_queue_manager *dqm)
 	if (retval)
 		goto fail_allocate_vidmem;
 
-	dqm->fence_addr = dqm->fence_mem->cpu_ptr;
+	dqm->fence_addr = (uint64_t *)dqm->fence_mem->cpu_ptr;
 	dqm->fence_gpu_addr = dqm->fence_mem->gpu_addr;
 
 	init_interrupts(dqm);
@@ -1340,8 +1340,8 @@ out:
 	return retval;
 }
 
-int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
-			      unsigned int fence_value,
+int amdkfd_fence_wait_timeout(uint64_t *fence_addr,
+			      uint64_t fence_value,
 			      unsigned int timeout_ms)
 {
 	unsigned long end_jiffies = msecs_to_jiffies(timeout_ms) + jiffies;
@@ -192,7 +192,7 @@ struct device_queue_manager {
 	uint16_t vmid_pasid[VMID_NUM];
 	uint64_t pipelines_addr;
 	uint64_t fence_gpu_addr;
-	unsigned int *fence_addr;
+	uint64_t *fence_addr;
 	struct kfd_mem_obj *fence_mem;
 	bool active_runlist;
 	int sched_policy;
@@ -345,7 +345,7 @@ fail_create_runlist_ib:
 }
 
 int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
-			 uint32_t fence_value)
+			 uint64_t fence_value)
 {
 	uint32_t *buffer, size;
 	int retval = 0;
@@ -283,7 +283,7 @@ static int pm_unmap_queues_v9(struct packet_manager *pm, uint32_t *buffer,
 }
 
 static int pm_query_status_v9(struct packet_manager *pm, uint32_t *buffer,
-			uint64_t fence_address, uint32_t fence_value)
+			uint64_t fence_address, uint64_t fence_value)
 {
 	struct pm4_mes_query_status *packet;
 
@@ -263,7 +263,7 @@ static int pm_unmap_queues_vi(struct packet_manager *pm, uint32_t *buffer,
 }
 
 static int pm_query_status_vi(struct packet_manager *pm, uint32_t *buffer,
-			uint64_t fence_address, uint32_t fence_value)
+			uint64_t fence_address, uint64_t fence_value)
 {
 	struct pm4_mes_query_status *packet;
 
@@ -1006,8 +1006,8 @@ int pqm_get_wave_state(struct process_queue_manager *pqm,
 			   u32 *ctl_stack_used_size,
 			   u32 *save_area_used_size);
 
-int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
-			      unsigned int fence_value,
+int amdkfd_fence_wait_timeout(uint64_t *fence_addr,
+			      uint64_t fence_value,
 			      unsigned int timeout_ms);
 
 /* Packet Manager */
@@ -1043,7 +1043,7 @@ struct packet_manager_funcs {
 			     uint32_t filter_param, bool reset,
 			     unsigned int sdma_engine);
 	int (*query_status)(struct packet_manager *pm, uint32_t *buffer,
 			    uint64_t fence_address, uint64_t fence_value);
	int (*release_mem)(uint64_t gpu_addr, uint32_t *buffer);
 
 	/* Packet sizes */
@@ -1065,7 +1065,7 @@ int pm_send_set_resources(struct packet_manager *pm,
 			  struct scheduling_resources *res);
 int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues);
 int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
-			 uint32_t fence_value);
+			 uint64_t fence_value);
 
 int pm_send_unmap_queue(struct packet_manager *pm, enum kfd_queue_type type,
 			enum kfd_unmap_queues_filter mode,
@@ -215,7 +215,7 @@ static int imx_drm_bind(struct device *dev)
 
 	ret = drmm_mode_config_init(drm);
 	if (ret)
-		return ret;
+		goto err_kms;
 
 	ret = drm_vblank_init(drm, MAX_CRTC);
 	if (ret)
@@ -2499,22 +2499,18 @@ static int tegra_dc_couple(struct tegra_dc *dc)
 	 * POWER_CONTROL registers during CRTC enabling.
 	 */
 	if (dc->soc->coupled_pm && dc->pipe == 1) {
-		u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;
-		struct device_link *link;
-		struct device *partner;
+		struct device *companion;
+		struct tegra_dc *parent;
 
-		partner = driver_find_device(dc->dev->driver, NULL, NULL,
-					     tegra_dc_match_by_pipe);
-		if (!partner)
+		companion = driver_find_device(dc->dev->driver, NULL, (const void *)0,
+					       tegra_dc_match_by_pipe);
+		if (!companion)
 			return -EPROBE_DEFER;
 
-		link = device_link_add(dc->dev, partner, flags);
-		if (!link) {
-			dev_err(dc->dev, "failed to link controllers\n");
-			return -EINVAL;
-		}
+		parent = dev_get_drvdata(companion);
+		dc->client.parent = &parent->client;
 
-		dev_dbg(dc->dev, "coupled to %s\n", dev_name(partner));
+		dev_dbg(dc->dev, "coupled to %s\n", dev_name(companion));
 	}
 
 	return 0;
@@ -3115,6 +3115,12 @@ static int tegra_sor_init(struct host1x_client *client)
 	 * kernel is possible.
 	 */
 	if (sor->rst) {
+		err = pm_runtime_resume_and_get(sor->dev);
+		if (err < 0) {
+			dev_err(sor->dev, "failed to get runtime PM: %d\n", err);
+			return err;
+		}
+
 		err = reset_control_acquire(sor->rst);
 		if (err < 0) {
 			dev_err(sor->dev, "failed to acquire SOR reset: %d\n",
@@ -3148,6 +3154,7 @@ static int tegra_sor_init(struct host1x_client *client)
 		}
 
 		reset_control_release(sor->rst);
+		pm_runtime_put(sor->dev);
 	}
 
 	err = clk_prepare_enable(sor->clk_safe);
@@ -7,12 +7,7 @@ obj-$(CONFIG_CAN_VCAN) += vcan.o
 obj-$(CONFIG_CAN_VXCAN) += vxcan.o
 obj-$(CONFIG_CAN_SLCAN) += slcan.o
 
-obj-$(CONFIG_CAN_DEV) += can-dev.o
-can-dev-y += dev.o
-can-dev-y += rx-offload.o
-
-can-dev-$(CONFIG_CAN_LEDS) += led.o
-
+obj-y += dev/
 obj-y += rcar/
 obj-y += spi/
 obj-y += usb/
drivers/net/can/dev/Makefile | 7 (new file)
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_CAN_DEV) += can-dev.o
+can-dev-y += dev.o
+can-dev-y += rx-offload.o
+
+can-dev-$(CONFIG_CAN_LEDS) += led.o
@@ -747,6 +747,7 @@ EXPORT_SYMBOL_GPL(alloc_can_err_skb);
 struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
 				    unsigned int txqs, unsigned int rxqs)
 {
+	struct can_ml_priv *can_ml;
 	struct net_device *dev;
 	struct can_priv *priv;
 	int size;
@@ -778,7 +779,8 @@ struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
 	priv = netdev_priv(dev);
 	priv->dev = dev;
 
-	dev->ml_priv = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN);
+	can_ml = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN);
+	can_set_ml_priv(dev, can_ml);
 
 	if (echo_skb_max) {
 		priv->echo_skb_max = echo_skb_max;
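For orientation, the layout this hunk preserves, the driver-private area followed by the CAN mid-layer area at the next NETDEV_ALIGN boundary, can be sketched in plain C. This is a hedged illustration of the ALIGN arithmetic only; the struct contents are invented stand-ins:

    #include <stdio.h>
    #include <stdlib.h>

    #define NETDEV_ALIGN 32
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

    struct can_priv    { int state; };   /* stand-in driver-private area */
    struct can_ml_priv { int flags; };   /* stand-in mid-layer area */

    int main(void)
    {
        size_t sizeof_priv = sizeof(struct can_priv);
        size_t ml_off = ALIGN(sizeof_priv, (size_t)NETDEV_ALIGN);

        /* One allocation: driver priv first, mid-layer priv at the next
         * NETDEV_ALIGN boundary, mirroring what alloc_candev_mqs() sets up. */
        char *blob = calloc(1, ml_off + sizeof(struct can_ml_priv));
        struct can_ml_priv *can_ml = (struct can_ml_priv *)(blob + ml_off);

        printf("mid-layer priv at offset %zu\n", ml_off);
        (void)can_ml;
        free(blob);
        return 0;
    }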
@@ -88,7 +88,7 @@
 
 #define TCAN4X5X_MRAM_START 0x8000
 #define TCAN4X5X_MCAN_OFFSET 0x1000
-#define TCAN4X5X_MAX_REGISTER 0x8fff
+#define TCAN4X5X_MAX_REGISTER 0x8ffc
 
 #define TCAN4X5X_CLEAR_ALL_INT 0xffffffff
 #define TCAN4X5X_SET_ALL_INT 0xffffffff
@@ -516,6 +516,7 @@ static struct slcan *slc_alloc(void)
 	int i;
 	char name[IFNAMSIZ];
 	struct net_device *dev = NULL;
+	struct can_ml_priv *can_ml;
 	struct slcan *sl;
 	int size;
 
@@ -538,7 +539,8 @@ static struct slcan *slc_alloc(void)
 
 	dev->base_addr = i;
 	sl = netdev_priv(dev);
-	dev->ml_priv = (void *)sl + ALIGN(sizeof(*sl), NETDEV_ALIGN);
+	can_ml = (void *)sl + ALIGN(sizeof(*sl), NETDEV_ALIGN);
+	can_set_ml_priv(dev, can_ml);
 
 	/* Initialize channel control data */
 	sl->magic = SLCAN_MAGIC;
@@ -153,7 +153,7 @@ static void vcan_setup(struct net_device *dev)
 	dev->addr_len = 0;
 	dev->tx_queue_len = 0;
 	dev->flags = IFF_NOARP;
-	dev->ml_priv = netdev_priv(dev);
+	can_set_ml_priv(dev, netdev_priv(dev));
 
 	/* set flags according to driver capabilities */
 	if (echo)
@@ -141,6 +141,8 @@ static const struct net_device_ops vxcan_netdev_ops = {
 
 static void vxcan_setup(struct net_device *dev)
 {
+	struct can_ml_priv *can_ml;
+
 	dev->type = ARPHRD_CAN;
 	dev->mtu = CANFD_MTU;
 	dev->hard_header_len = 0;
@@ -149,7 +151,9 @@ static void vxcan_setup(struct net_device *dev)
 	dev->flags = (IFF_NOARP|IFF_ECHO);
 	dev->netdev_ops = &vxcan_netdev_ops;
 	dev->needs_free_netdev = true;
-	dev->ml_priv = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);
+
+	can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);
+	can_set_ml_priv(dev, can_ml);
 }
 
 /* forward declaration for rtnl_create_link() */
@@ -71,8 +71,10 @@ static int aq_ndev_open(struct net_device *ndev)
 		goto err_exit;
 
 	err = aq_nic_start(aq_nic);
-	if (err < 0)
+	if (err < 0) {
+		aq_nic_stop(aq_nic);
 		goto err_exit;
+	}
 
 err_exit:
 	if (err < 0)
@@ -1153,7 +1153,7 @@ static void mvpp2_interrupts_unmask(void *arg)
 	u32 val;
 
 	/* If the thread isn't used, don't do anything */
-	if (smp_processor_id() > port->priv->nthreads)
+	if (smp_processor_id() >= port->priv->nthreads)
 		return;
 
 	val = MVPP2_CAUSE_MISC_SUM_MASK |
@@ -2287,7 +2287,7 @@ static void mvpp2_txq_sent_counter_clear(void *arg)
 	int queue;
 
 	/* If the thread isn't used, don't do anything */
-	if (smp_processor_id() > port->priv->nthreads)
+	if (smp_processor_id() >= port->priv->nthreads)
 		return;
 
 	for (queue = 0; queue < port->ntxqs; queue++) {
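This is a classic off-by-one: valid thread ids run from 0 to nthreads - 1, so the old "id > nthreads" test let id == nthreads slip through. A tiny self-contained C sketch of the difference (names invented for illustration):

    #include <assert.h>

    static int in_range_old(unsigned int id, unsigned int nthreads)
    {
        return !(id > nthreads);   /* wrongly accepts id == nthreads */
    }

    static int in_range_new(unsigned int id, unsigned int nthreads)
    {
        return !(id >= nthreads);  /* only ids 0 .. nthreads-1 pass */
    }

    int main(void)
    {
        assert(in_range_old(4, 4) == 1);  /* the bug: out-of-range id passes */
        assert(in_range_new(4, 4) == 0);  /* the fix rejects it */
        return 0;
    }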
@@ -48,16 +48,6 @@
 #define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \
 			(0x0000c01c + 0x1000 * (ee))
 
-#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \
-			GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
-#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \
-			(0x0000c028 + 0x1000 * (ee))
-
-#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \
-			GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
-#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \
-			(0x0000c02c + 0x1000 * (ee))
-
 #define GSI_CH_C_CNTXT_0_OFFSET(ch) \
 		GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP)
 #define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 
 /* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2020 Linaro Ltd.
+ * Copyright (C) 2019-2021 Linaro Ltd.
 */
 
 #include <linux/types.h>
@@ -244,11 +244,15 @@ static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
 	if (ipa->version != IPA_VERSION_3_5_1)
 		bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
 	BUILD_BUG_ON(bit_count > 32);
-	offset_max = ~0 >> (32 - bit_count);
+	offset_max = ~0U >> (32 - bit_count);
 
+	/* Make sure the offset can be represented by the field(s)
+	 * that holds it.  Also make sure the offset is not outside
+	 * the overall IPA memory range.
+	 */
 	if (offset > offset_max || ipa->mem_offset > offset_max - offset) {
 		dev_err(dev, "%s offset too large 0x%04x + 0x%04x > 0x%04x)\n",
-			ipa->mem_offset + offset, offset_max);
+			name, ipa->mem_offset, offset, offset_max);
 		return false;
 	}
 
@@ -261,12 +265,24 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
 	const char *name;
 	u32 offset;
 
-	offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
-	name = "filter/route hash flush";
-	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
-		return false;
+	/* If hashed tables are supported, ensure the hash flush register
+	 * offset will fit in a register write IPA immediate command.
+	 */
+	if (ipa->version != IPA_VERSION_4_2) {
+		offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+		name = "filter/route hash flush";
+		if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+			return false;
+	}
 
-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT);
+	/* Each endpoint can have a status endpoint associated with it,
+	 * and this is recorded in an endpoint register.  If the modem
+	 * crashes, we reset the status endpoint for all modem endpoints
+	 * using a register write IPA immediate command.  Make sure the
+	 * worst case (highest endpoint number) offset of that endpoint
+	 * fits in the register write command field(s) that must hold it.
+	 */
+	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
 	name = "maximal endpoint status";
 	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
 		return false;
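One of the subtle fixes here is "~0" versus "~0U": ~0 is a signed -1, and right-shifting it is (on typical compilers) an arithmetic shift that stays all-ones instead of producing the intended mask. A small C sketch of the difference, with an arbitrary bit count:

    #include <stdio.h>

    int main(void)
    {
        int bit_count = 20;

        /* ~0 is signed -1; >> on it is implementation-defined and is
         * arithmetic on common compilers, so the result stays all-ones. */
        unsigned int bad  = ~0  >> (32 - bit_count);
        unsigned int good = ~0U >> (32 - bit_count);

        printf("bad  = 0x%08x\n", bad);   /* 0xffffffff on typical targets */
        printf("good = 0x%08x\n", good);  /* 0x000fffff */
        return 0;
    }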
@@ -1008,23 +1008,25 @@ static int nsim_dev_reload_create(struct nsim_dev *nsim_dev,
 	nsim_dev->fw_update_status = true;
 	nsim_dev->fw_update_overwrite_mask = 0;
 
-	nsim_dev->fib_data = nsim_fib_create(devlink, extack);
-	if (IS_ERR(nsim_dev->fib_data))
-		return PTR_ERR(nsim_dev->fib_data);
-
 	nsim_devlink_param_load_driverinit_values(devlink);
 
 	err = nsim_dev_dummy_region_init(nsim_dev, devlink);
 	if (err)
-		goto err_fib_destroy;
+		return err;
 
 	err = nsim_dev_traps_init(devlink);
 	if (err)
 		goto err_dummy_region_exit;
 
+	nsim_dev->fib_data = nsim_fib_create(devlink, extack);
+	if (IS_ERR(nsim_dev->fib_data)) {
+		err = PTR_ERR(nsim_dev->fib_data);
+		goto err_traps_exit;
+	}
+
 	err = nsim_dev_health_init(nsim_dev, devlink);
 	if (err)
-		goto err_traps_exit;
+		goto err_fib_destroy;
 
 	err = nsim_dev_port_add_all(nsim_dev, nsim_bus_dev->port_count);
 	if (err)
@@ -1039,12 +1041,12 @@ static int nsim_dev_reload_create(struct nsim_dev *nsim_dev,
 
 err_health_exit:
 	nsim_dev_health_exit(nsim_dev);
+err_fib_destroy:
+	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 err_traps_exit:
 	nsim_dev_traps_exit(devlink);
 err_dummy_region_exit:
 	nsim_dev_dummy_region_exit(nsim_dev);
-err_fib_destroy:
-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 	return err;
 }
 
@@ -1076,15 +1078,9 @@ int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)
 	if (err)
 		goto err_devlink_free;
 
-	nsim_dev->fib_data = nsim_fib_create(devlink, NULL);
-	if (IS_ERR(nsim_dev->fib_data)) {
-		err = PTR_ERR(nsim_dev->fib_data);
-		goto err_resources_unregister;
-	}
-
 	err = devlink_register(devlink, &nsim_bus_dev->dev);
 	if (err)
-		goto err_fib_destroy;
+		goto err_resources_unregister;
 
 	err = devlink_params_register(devlink, nsim_devlink_params,
 				      ARRAY_SIZE(nsim_devlink_params));
@@ -1104,9 +1100,15 @@ int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)
 	if (err)
 		goto err_traps_exit;
 
+	nsim_dev->fib_data = nsim_fib_create(devlink, NULL);
+	if (IS_ERR(nsim_dev->fib_data)) {
+		err = PTR_ERR(nsim_dev->fib_data);
+		goto err_debugfs_exit;
+	}
+
 	err = nsim_dev_health_init(nsim_dev, devlink);
 	if (err)
-		goto err_debugfs_exit;
+		goto err_fib_destroy;
 
 	err = nsim_bpf_dev_init(nsim_dev);
 	if (err)
@@ -1124,6 +1126,8 @@ err_bpf_dev_exit:
 	nsim_bpf_dev_exit(nsim_dev);
 err_health_exit:
 	nsim_dev_health_exit(nsim_dev);
+err_fib_destroy:
+	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 err_debugfs_exit:
 	nsim_dev_debugfs_exit(nsim_dev);
 err_traps_exit:
@@ -1135,8 +1139,6 @@ err_params_unregister:
 				  ARRAY_SIZE(nsim_devlink_params));
 err_dl_unregister:
 	devlink_unregister(devlink);
-err_fib_destroy:
-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 err_resources_unregister:
 	devlink_resources_unregister(devlink, NULL);
 err_devlink_free:
@@ -1153,10 +1155,10 @@ static void nsim_dev_reload_destroy(struct nsim_dev *nsim_dev)
 	debugfs_remove(nsim_dev->take_snapshot);
 	nsim_dev_port_del_all(nsim_dev);
 	nsim_dev_health_exit(nsim_dev);
+	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 	nsim_dev_traps_exit(devlink);
 	nsim_dev_dummy_region_exit(nsim_dev);
 	mutex_destroy(&nsim_dev->port_list_lock);
-	nsim_fib_destroy(devlink, nsim_dev->fib_data);
 }
 
 void nsim_dev_remove(struct nsim_bus_dev *nsim_bus_dev)
|
|||||||
@@ -901,6 +901,8 @@ static int lmc_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 		break;
 	default:
 		printk(KERN_WARNING "%s: LMC UNKNOWN CARD!\n", dev->name);
+		unregister_hdlc_device(dev);
+		return -EIO;
 		break;
 	}
 

@@ -576,13 +576,13 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
 	case WMI_TDLS_TEARDOWN_REASON_TX:
 	case WMI_TDLS_TEARDOWN_REASON_RSSI:
 	case WMI_TDLS_TEARDOWN_REASON_PTR_TIMEOUT:
+		rcu_read_lock();
 		station = ieee80211_find_sta_by_ifaddr(ar->hw,
 						       ev->peer_macaddr.addr,
 						       NULL);
 		if (!station) {
 			ath10k_warn(ar, "did not find station from tdls peer event");
-			kfree(tb);
-			return;
+			goto exit;
 		}
 		arvif = ath10k_get_arvif(ar, __le32_to_cpu(ev->vdev_id));
 		ieee80211_tdls_oper_request(
@@ -593,6 +593,9 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
 			);
 		break;
 	}
 
+exit:
+	rcu_read_unlock();
 	kfree(tb);
 }
 

@@ -6317,17 +6317,20 @@ static int __ath11k_mac_register(struct ath11k *ar)
 	ret = ath11k_regd_update(ar, true);
 	if (ret) {
 		ath11k_err(ar->ab, "ath11k regd update failed: %d\n", ret);
-		goto err_free_if_combs;
+		goto err_unregister_hw;
 	}
 
 	ret = ath11k_debugfs_register(ar);
 	if (ret) {
 		ath11k_err(ar->ab, "debugfs registration failed: %d\n", ret);
-		goto err_free_if_combs;
+		goto err_unregister_hw;
 	}
 
 	return 0;
 
+err_unregister_hw:
+	ieee80211_unregister_hw(ar->hw);
+
 err_free_if_combs:
 	kfree(ar->hw->wiphy->iface_combinations[0].limits);
 	kfree(ar->hw->wiphy->iface_combinations);

@@ -5611,7 +5611,8 @@ static bool brcmf_is_linkup(struct brcmf_cfg80211_vif *vif,
 	return false;
 }
 
-static bool brcmf_is_linkdown(const struct brcmf_event_msg *e)
+static bool brcmf_is_linkdown(struct brcmf_cfg80211_vif *vif,
+			      const struct brcmf_event_msg *e)
 {
 	u32 event = e->event_code;
 	u16 flags = e->flags;
@@ -5620,6 +5621,8 @@ static bool brcmf_is_linkdown(const struct brcmf_event_msg *e)
 	    (event == BRCMF_E_DISASSOC_IND) ||
 	    ((event == BRCMF_E_LINK) && (!(flags & BRCMF_EVENT_MSG_LINK)))) {
 		brcmf_dbg(CONN, "Processing link down\n");
+		clear_bit(BRCMF_VIF_STATUS_EAP_SUCCESS, &vif->sme_state);
+		clear_bit(BRCMF_VIF_STATUS_ASSOC_SUCCESS, &vif->sme_state);
 		return true;
 	}
 	return false;
@@ -6067,7 +6070,7 @@ brcmf_notify_connect_status(struct brcmf_if *ifp,
 		} else
 			brcmf_bss_connect_done(cfg, ndev, e, true);
 		brcmf_net_setcarrier(ifp, true);
-	} else if (brcmf_is_linkdown(e)) {
+	} else if (brcmf_is_linkdown(ifp->vif, e)) {
 		brcmf_dbg(CONN, "Linkdown\n");
 		if (!brcmf_is_ibssmode(ifp->vif) &&
 		    test_bit(BRCMF_VIF_STATUS_CONNECTED,

@@ -2026,7 +2026,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans,
 	int ret;
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 
-	spin_lock_irqsave(&trans_pcie->reg_lock, *flags);
+	spin_lock_bh(&trans_pcie->reg_lock);
 
 	if (trans_pcie->cmd_hold_nic_awake)
 		goto out;
@@ -2111,7 +2111,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans,
 	}
 
 err:
-	spin_unlock_irqrestore(&trans_pcie->reg_lock, *flags);
+	spin_unlock_bh(&trans_pcie->reg_lock);
 	return false;
 }
 
@@ -2149,7 +2149,7 @@ static void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans,
 	 * scheduled on different CPUs (after we drop reg_lock).
 	 */
 out:
-	spin_unlock_irqrestore(&trans_pcie->reg_lock, *flags);
+	spin_unlock_bh(&trans_pcie->reg_lock);
 }
 
 static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
@@ -2403,11 +2403,10 @@ static void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg,
 					 u32 mask, u32 value)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
-	unsigned long flags;
 
-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
+	spin_lock_bh(&trans_pcie->reg_lock);
 	__iwl_trans_pcie_set_bits_mask(trans, reg, mask, value);
-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+	spin_unlock_bh(&trans_pcie->reg_lock);
 }
 
 static const char *get_csr_string(int cmd)

@@ -78,7 +78,6 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 	struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
 	struct iwl_device_cmd *out_cmd;
 	struct iwl_cmd_meta *out_meta;
-	unsigned long flags;
 	void *dup_buf = NULL;
 	dma_addr_t phys_addr;
 	int i, cmd_pos, idx;
@@ -291,11 +290,11 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
 		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
 
-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
+	spin_lock(&trans_pcie->reg_lock);
 	/* Increment and update queue's write index */
 	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
 	iwl_txq_inc_wr_ptr(trans, txq);
-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+	spin_unlock(&trans_pcie->reg_lock);
 
 out:
 	spin_unlock_bh(&txq->lock);

@@ -321,12 +321,10 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
 
 		if (txq->read_ptr == txq->write_ptr) {
-			unsigned long flags;
-
-			spin_lock_irqsave(&trans_pcie->reg_lock, flags);
+			spin_lock(&trans_pcie->reg_lock);
 			if (txq_id == trans->txqs.cmd.q_id)
 				iwl_pcie_clear_cmd_in_flight(trans);
-			spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+			spin_unlock(&trans_pcie->reg_lock);
 		}
 	}
 
@@ -931,7 +929,6 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-	unsigned long flags;
 	int nfreed = 0;
 	u16 r;
 
@@ -962,9 +959,10 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
 	}
 
 	if (txq->read_ptr == txq->write_ptr) {
-		spin_lock_irqsave(&trans_pcie->reg_lock, flags);
+		/* BHs are also disabled due to txq->lock */
+		spin_lock(&trans_pcie->reg_lock);
 		iwl_pcie_clear_cmd_in_flight(trans);
-		spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
+		spin_unlock(&trans_pcie->reg_lock);
 	}
 
 	iwl_pcie_txq_progress(txq);
@@ -1173,7 +1171,6 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 	struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
 	struct iwl_device_cmd *out_cmd;
 	struct iwl_cmd_meta *out_meta;
-	unsigned long flags;
 	void *dup_buf = NULL;
 	dma_addr_t phys_addr;
 	int idx;
@@ -1416,20 +1413,19 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
 		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
 
-	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
+	spin_lock(&trans_pcie->reg_lock);
 	ret = iwl_pcie_set_cmd_in_flight(trans, cmd);
 	if (ret < 0) {
 		idx = ret;
-		spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
-		goto out;
+		goto unlock_reg;
 	}
 
 	/* Increment and update queue's write index */
 	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
 	iwl_pcie_txq_inc_wr_ptr(trans, txq);
 
-	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
-
+unlock_reg:
+	spin_unlock(&trans_pcie->reg_lock);
 out:
 	spin_unlock_bh(&txq->lock);
 free_dup_buf:

@@ -720,8 +720,8 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
 			regval = (!polarity_inverse ? 0x1 : 0x2);
 		}
 
-		rtw_write8_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
+		rtw_write32_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
				regval);
 		break;
 	case COEX_SWITCH_CTRL_BY_PTA:
 		rtw_write32_clr(rtwdev, REG_LED_CFG, BIT_DPDT_SEL_EN);
@@ -731,8 +731,8 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
				PTA_CTRL_PIN);
 
 		regval = (!polarity_inverse ? 0x2 : 0x1);
-		rtw_write8_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
+		rtw_write32_mask(rtwdev, REG_RFE_CTRL8, BIT_MASK_R_RFE_SEL_15,
				regval);
 		break;
 	case COEX_SWITCH_CTRL_BY_ANTDIV:
 		rtw_write32_clr(rtwdev, REG_LED_CFG, BIT_DPDT_SEL_EN);
@@ -758,11 +758,11 @@ static void rtw8821c_coex_cfg_ant_switch(struct rtw_dev *rtwdev, u8 ctrl_type,
 	}
 
 	if (ctrl_type == COEX_SWITCH_CTRL_BY_BT) {
-		rtw_write32_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
-		rtw_write32_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
+		rtw_write8_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
+		rtw_write8_clr(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
 	} else {
-		rtw_write32_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
-		rtw_write32_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
+		rtw_write8_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE1);
+		rtw_write8_set(rtwdev, REG_CTRL_TYPE, BIT_CTRL_TYPE2);
 	}
 }

@@ -1098,11 +1098,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 		cmd->rbytes_done += ret;
 	}
 
+	nvmet_tcp_unmap_pdu_iovec(cmd);
 	if (queue->data_digest) {
 		nvmet_tcp_prep_recv_ddgst(cmd);
 		return 0;
 	}
-	nvmet_tcp_unmap_pdu_iovec(cmd);
 
 	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
 	    cmd->rbytes_done == cmd->req.transfer_len) {

@@ -3727,12 +3727,15 @@ static int __maybe_unused rockchip_pinctrl_suspend(struct device *dev)
 static int __maybe_unused rockchip_pinctrl_resume(struct device *dev)
 {
 	struct rockchip_pinctrl *info = dev_get_drvdata(dev);
-	int ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,
-			       rk3288_grf_gpio6c_iomux |
-			       GPIO6C6_SEL_WRITE_ENABLE);
+	int ret;
 
-	if (ret)
-		return ret;
+	if (info->ctrl->type == RK3288) {
+		ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,
+				   rk3288_grf_gpio6c_iomux |
+				   GPIO6C6_SEL_WRITE_ENABLE);
+		if (ret)
+			return ret;
+	}
 
 	return pinctrl_force_default(info->pctl_dev);
 }

@@ -116,7 +116,6 @@
 	(min(1270, ((ql) > 0) ? (QLA_TGT_DATASEGS_PER_CMD_24XX + \
 		QLA_TGT_DATASEGS_PER_CONT_24XX*((ql) - 1)) : 0))
 #endif
-#endif
 
 #define GET_TARGET_ID(ha, iocb) ((HAS_EXTENDED_IDS(ha))		\
			 ? le16_to_cpu((iocb)->u.isp2x.target.extended)	\
@@ -244,6 +243,7 @@ struct ctio_to_2xxx {
 #ifndef CTIO_RET_TYPE
 #define CTIO_RET_TYPE	0x17		/* CTIO return entry */
 #define ATIO_TYPE7 0x06 /* Accept target I/O entry for 24xx */
+#endif
 
 struct fcp_hdr {
 	uint8_t  r_ctl;

@@ -1269,8 +1269,8 @@ static int st_open(struct inode *inode, struct file *filp)
 	spin_lock(&st_use_lock);
 	if (STp->in_use) {
 		spin_unlock(&st_use_lock);
-		scsi_tape_put(STp);
 		DEBC_printk(STp, "Device already in use.\n");
+		scsi_tape_put(STp);
 		return (-EBUSY);
 	}

@@ -3,7 +3,6 @@
 
 #include <linux/acpi.h>
 #include <linux/clk.h>
-#include <linux/console.h>
 #include <linux/slab.h>
 #include <linux/dma-mapping.h>
 #include <linux/io.h>
@@ -91,14 +90,11 @@ struct geni_wrapper {
 	struct device *dev;
 	void __iomem *base;
 	struct clk_bulk_data ahb_clks[NUM_AHB_CLKS];
-	struct geni_icc_path to_core;
 };
 
 static const char * const icc_path_names[] = {"qup-core", "qup-config",
					       "qup-memory"};
 
-static struct geni_wrapper *earlycon_wrapper;
-
 #define QUP_HW_VER_REG			0x4
 
 /* Common SE registers */
@@ -828,44 +824,11 @@ int geni_icc_disable(struct geni_se *se)
 }
 EXPORT_SYMBOL(geni_icc_disable);
 
-void geni_remove_earlycon_icc_vote(void)
-{
-	struct platform_device *pdev;
-	struct geni_wrapper *wrapper;
-	struct device_node *parent;
-	struct device_node *child;
-
-	if (!earlycon_wrapper)
-		return;
-
-	wrapper = earlycon_wrapper;
-	parent = of_get_next_parent(wrapper->dev->of_node);
-	for_each_child_of_node(parent, child) {
-		if (!of_device_is_compatible(child, "qcom,geni-se-qup"))
-			continue;
-
-		pdev = of_find_device_by_node(child);
-		if (!pdev)
-			continue;
-
-		wrapper = platform_get_drvdata(pdev);
-		icc_put(wrapper->to_core.path);
-		wrapper->to_core.path = NULL;
-
-	}
-	of_node_put(parent);
-
-	earlycon_wrapper = NULL;
-}
-EXPORT_SYMBOL(geni_remove_earlycon_icc_vote);
-
 static int geni_se_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct resource *res;
 	struct geni_wrapper *wrapper;
-	struct console __maybe_unused *bcon;
-	bool __maybe_unused has_earlycon = false;
 	int ret;
 
 	wrapper = devm_kzalloc(dev, sizeof(*wrapper), GFP_KERNEL);
@@ -888,43 +851,6 @@ static int geni_se_probe(struct platform_device *pdev)
 		}
 	}
 
-#ifdef CONFIG_SERIAL_EARLYCON
-	for_each_console(bcon) {
-		if (!strcmp(bcon->name, "qcom_geni")) {
-			has_earlycon = true;
-			break;
-		}
-	}
-	if (!has_earlycon)
-		goto exit;
-
-	wrapper->to_core.path = devm_of_icc_get(dev, "qup-core");
-	if (IS_ERR(wrapper->to_core.path))
-		return PTR_ERR(wrapper->to_core.path);
-	/*
-	 * Put minmal BW request on core clocks on behalf of early console.
-	 * The vote will be removed earlycon exit function.
-	 *
-	 * Note: We are putting vote on each QUP wrapper instead only to which
-	 * earlycon is connected because QUP core clock of different wrapper
-	 * share same voltage domain. If core1 is put to 0, then core2 will
-	 * also run at 0, if not voted. Default ICC vote will be removed ASA
-	 * we touch any of the core clock.
-	 * core1 = core2 = max(core1, core2)
-	 */
-	ret = icc_set_bw(wrapper->to_core.path, GENI_DEFAULT_BW,
-			 GENI_DEFAULT_BW);
-	if (ret) {
-		dev_err(&pdev->dev, "%s: ICC BW voting failed for core: %d\n",
-			__func__, ret);
-		return ret;
-	}
-
-	if (of_get_compatible_child(pdev->dev.of_node, "qcom,geni-debug-uart"))
-		earlycon_wrapper = wrapper;
-	of_node_put(pdev->dev.of_node);
-exit:
-#endif
 	dev_set_drvdata(dev, wrapper);
 	dev_dbg(dev, "GENI SE Driver probed\n");
 	return devm_of_platform_populate(dev);

@@ -1281,7 +1281,7 @@ static int cb_pcidas_auto_attach(struct comedi_device *dev,
	      devpriv->amcc + AMCC_OP_REG_INTCSR);
 
 	ret = request_irq(pcidev->irq, cb_pcidas_interrupt, IRQF_SHARED,
-			  dev->board_name, dev);
+			  "cb_pcidas", dev);
 	if (ret) {
 		dev_dbg(dev->class_dev, "unable to allocate irq %d\n",
 			pcidev->irq);

@@ -4035,7 +4035,7 @@ static int auto_attach(struct comedi_device *dev,
 	init_stc_registers(dev);
 
 	retval = request_irq(pcidev->irq, handle_interrupt, IRQF_SHARED,
-			     dev->board_name, dev);
+			     "cb_pcidas64", dev);
 	if (retval) {
 		dev_dbg(dev->class_dev, "unable to allocate irq %u\n",
 			pcidev->irq);

@@ -1105,7 +1105,7 @@ struct rtllib_network {
 	bool	bWithAironetIE;
 	bool	bCkipSupported;
 	bool	bCcxRmEnable;
-	u16	CcxRmState[2];
+	u8	CcxRmState[2];
 	bool	bMBssidValid;
 	u8	MBssidMask;
 	u8	MBssid[ETH_ALEN];

@@ -1968,7 +1968,7 @@ static void rtllib_parse_mife_generic(struct rtllib_device *ieee,
	     info_element->data[2] == 0x96 &&
	     info_element->data[3] == 0x01) {
 		if (info_element->len == 6) {
-			memcpy(network->CcxRmState, &info_element[4], 2);
+			memcpy(network->CcxRmState, &info_element->data[4], 2);
 			if (network->CcxRmState[0] != 0)
 				network->bCcxRmEnable = true;
 			else

@@ -1177,12 +1177,6 @@ static inline void qcom_geni_serial_enable_early_read(struct geni_se *se,
						       struct console *con) { }
 #endif
 
-static int qcom_geni_serial_earlycon_exit(struct console *con)
-{
-	geni_remove_earlycon_icc_vote();
-	return 0;
-}
-
 static struct qcom_geni_private_data earlycon_private_data;
 
 static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev,
@@ -1233,7 +1227,6 @@ static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev,
 	writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN);
 
 	dev->con->write = qcom_geni_serial_earlycon_write;
-	dev->con->exit = qcom_geni_serial_earlycon_exit;
 	dev->con->setup = NULL;
 	qcom_geni_serial_enable_early_read(&se, dev->con);

@@ -147,17 +147,29 @@ static inline int acm_set_control(struct acm *acm, int control)
 #define acm_send_break(acm, ms) \
	acm_ctrl_msg(acm, USB_CDC_REQ_SEND_BREAK, ms, NULL, 0)
 
-static void acm_kill_urbs(struct acm *acm)
+static void acm_poison_urbs(struct acm *acm)
 {
 	int i;
 
-	usb_kill_urb(acm->ctrlurb);
+	usb_poison_urb(acm->ctrlurb);
 	for (i = 0; i < ACM_NW; i++)
-		usb_kill_urb(acm->wb[i].urb);
+		usb_poison_urb(acm->wb[i].urb);
 	for (i = 0; i < acm->rx_buflimit; i++)
-		usb_kill_urb(acm->read_urbs[i]);
+		usb_poison_urb(acm->read_urbs[i]);
 }
 
+static void acm_unpoison_urbs(struct acm *acm)
+{
+	int i;
+
+	for (i = 0; i < acm->rx_buflimit; i++)
+		usb_unpoison_urb(acm->read_urbs[i]);
+	for (i = 0; i < ACM_NW; i++)
+		usb_unpoison_urb(acm->wb[i].urb);
+	usb_unpoison_urb(acm->ctrlurb);
+}
+
+
 /*
  * Write buffer management.
  * All of these assume proper locks taken by the caller.
@@ -226,9 +238,10 @@ static int acm_start_wb(struct acm *acm, struct acm_wb *wb)
 
 	rc = usb_submit_urb(wb->urb, GFP_ATOMIC);
 	if (rc < 0) {
-		dev_err(&acm->data->dev,
-			"%s - usb_submit_urb(write bulk) failed: %d\n",
-			__func__, rc);
+		if (rc != -EPERM)
+			dev_err(&acm->data->dev,
+				"%s - usb_submit_urb(write bulk) failed: %d\n",
+				__func__, rc);
 		acm_write_done(acm, wb);
 	}
 	return rc;
@@ -313,8 +326,10 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
			acm->iocount.dsr++;
 		if (difference & ACM_CTRL_DCD)
			acm->iocount.dcd++;
-		if (newctrl & ACM_CTRL_BRK)
+		if (newctrl & ACM_CTRL_BRK) {
			acm->iocount.brk++;
+			tty_insert_flip_char(&acm->port, 0, TTY_BREAK);
+		}
 		if (newctrl & ACM_CTRL_RI)
			acm->iocount.rng++;
 		if (newctrl & ACM_CTRL_FRAMING)
@@ -480,11 +495,6 @@ static void acm_read_bulk_callback(struct urb *urb)
 	dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
		 rb->index, urb->actual_length, status);
 
-	if (!acm->dev) {
-		dev_dbg(&acm->data->dev, "%s - disconnected\n", __func__);
-		return;
-	}
-
 	switch (status) {
 	case 0:
 		usb_mark_last_busy(acm->dev);
@@ -649,7 +659,8 @@ static void acm_port_dtr_rts(struct tty_port *port, int raise)
 
 	res = acm_set_control(acm, val);
 	if (res && (acm->ctrl_caps & USB_CDC_CAP_LINE))
-		dev_err(&acm->control->dev, "failed to set dtr/rts\n");
+		/* This is broken in too many devices to spam the logs */
+		dev_dbg(&acm->control->dev, "failed to set dtr/rts\n");
 }
 
 static int acm_port_activate(struct tty_port *port, struct tty_struct *tty)
@@ -731,6 +742,7 @@ static void acm_port_shutdown(struct tty_port *port)
	 * Need to grab write_lock to prevent race with resume, but no need to
	 * hold it due to the tty-port initialised flag.
	 */
+	acm_poison_urbs(acm);
 	spin_lock_irq(&acm->write_lock);
 	spin_unlock_irq(&acm->write_lock);
 
@@ -747,7 +759,8 @@ static void acm_port_shutdown(struct tty_port *port)
 		usb_autopm_put_interface_async(acm->control);
 	}
 
-	acm_kill_urbs(acm);
+	acm_unpoison_urbs(acm);
+
 }
 
 static void acm_tty_cleanup(struct tty_struct *tty)
@@ -1503,12 +1516,16 @@ skip_countries:
 
 	return 0;
 alloc_fail6:
+	if (!acm->combined_interfaces) {
+		/* Clear driver data so that disconnect() returns early. */
+		usb_set_intfdata(data_interface, NULL);
+		usb_driver_release_interface(&acm_driver, data_interface);
+	}
 	if (acm->country_codes) {
 		device_remove_file(&acm->control->dev,
				&dev_attr_wCountryCodes);
 		device_remove_file(&acm->control->dev,
				&dev_attr_iCountryCodeRelDate);
-		kfree(acm->country_codes);
 	}
 	device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
alloc_fail5:
@@ -1540,8 +1557,14 @@ static void acm_disconnect(struct usb_interface *intf)
 	if (!acm)
 		return;
 
-	mutex_lock(&acm->mutex);
 	acm->disconnected = true;
+	/*
+	 * there is a circular dependency. acm_softint() can resubmit
+	 * the URBs in error handling so we need to block any
+	 * submission right away
+	 */
+	acm_poison_urbs(acm);
+	mutex_lock(&acm->mutex);
 	if (acm->country_codes) {
 		device_remove_file(&acm->control->dev,
				&dev_attr_wCountryCodes);
@@ -1560,7 +1583,6 @@ static void acm_disconnect(struct usb_interface *intf)
 		tty_kref_put(tty);
 	}
 
-	acm_kill_urbs(acm);
 	cancel_delayed_work_sync(&acm->dwork);
 
 	tty_unregister_device(acm_tty_driver, acm->minor);
@@ -1602,7 +1624,7 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
 	if (cnt)
 		return 0;
 
-	acm_kill_urbs(acm);
+	acm_poison_urbs(acm);
 	cancel_delayed_work_sync(&acm->dwork);
 	acm->urbs_in_error_delay = 0;
 
@@ -1615,6 +1637,7 @@ static int acm_resume(struct usb_interface *intf)
 	struct urb *urb;
 	int rv = 0;
 
+	acm_unpoison_urbs(acm);
 	spin_lock_irq(&acm->write_lock);
 
 	if (--acm->susp_count)

@@ -498,6 +498,10 @@ static const struct usb_device_id usb_quirk_list[] = {
	/* DJI CineSSD */
	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
 
+	/* Fibocom L850-GL LTE Modem */
+	{ USB_DEVICE(0x2cb7, 0x0007), .driver_info =
+			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+
	/* INTEL VALUE SSD */
	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },

@@ -4322,7 +4322,8 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
 	if (hsotg->op_state == OTG_STATE_B_PERIPHERAL)
 		goto unlock;
 
-	if (hsotg->params.power_down > DWC2_POWER_DOWN_PARAM_PARTIAL)
+	if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL ||
+	    hsotg->flags.b.port_connect_status == 0)
 		goto skip_power_saving;
 
 	/*
@@ -5398,7 +5399,7 @@ int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg)
 	dwc2_writel(hsotg, hprt0, HPRT0);
 
 	/* Wait for the HPRT0.PrtSusp register field to be set */
-	if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))
+	if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 5000))
 		dev_warn(hsotg->dev, "Suspend wasn't generated\n");
 
 	/*

@@ -120,6 +120,8 @@ static const struct property_entry dwc3_pci_intel_properties[] = {
 static const struct property_entry dwc3_pci_mrfld_properties[] = {
	PROPERTY_ENTRY_STRING("dr_mode", "otg"),
	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+	PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
+	PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
	{}
 };

@@ -153,6 +153,11 @@ static int udc_pci_probe(
 	pci_set_master(pdev);
 	pci_try_set_mwi(pdev);
 
+	dev->phys_addr = resource;
+	dev->irq = pdev->irq;
+	dev->pdev = pdev;
+	dev->dev = &pdev->dev;
+
 	/* init dma pools */
 	if (use_dma) {
 		retval = init_dma_pools(dev);
@@ -160,11 +165,6 @@ static int udc_pci_probe(
			goto err_dma;
 	}
 
-	dev->phys_addr = resource;
-	dev->irq = pdev->irq;
-	dev->pdev = pdev;
-	dev->dev = &pdev->dev;
-
 	/* general probing */
 	if (udc_probe(dev)) {
 		retval = -ENODEV;

@@ -2004,10 +2004,14 @@ static void musb_pm_runtime_check_session(struct musb *musb)
 		MUSB_DEVCTL_HR;
 	switch (devctl & ~s) {
 	case MUSB_QUIRK_B_DISCONNECT_99:
-		musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
-		schedule_delayed_work(&musb->irq_work,
-				      msecs_to_jiffies(1000));
-		break;
+		if (musb->quirk_retries && !musb->flush_irq_work) {
+			musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
+			schedule_delayed_work(&musb->irq_work,
+					      msecs_to_jiffies(1000));
+			musb->quirk_retries--;
+			break;
+		}
+		fallthrough;
 	case MUSB_QUIRK_B_INVALID_VBUS_91:
 		if (musb->quirk_retries && !musb->flush_irq_work) {
 			musb_dbg(musb,

@@ -594,6 +594,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
			pr_err("invalid port number %d\n", wIndex);
			goto error;
 		}
+		if (wValue >= 32)
+			goto error;
 		if (hcd->speed == HCD_USB3) {
			if ((vhci_hcd->port_status[rhport] &
			     USB_SS_PORT_STAT_POWER) != 0) {

@@ -42,7 +42,7 @@ config VFIO_PCI_IGD
 
 config VFIO_PCI_NVLINK2
 	def_bool y
-	depends on VFIO_PCI && PPC_POWERNV
+	depends on VFIO_PCI && PPC_POWERNV && SPAPR_TCE_IOMMU
 	help
	  VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs

@@ -332,8 +332,8 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 	vq->error_ctx = NULL;
 	vq->kick = NULL;
 	vq->log_ctx = NULL;
-	vhost_reset_is_le(vq);
 	vhost_disable_cross_endian(vq);
+	vhost_reset_is_le(vq);
 	vq->busyloop_timeout = 0;
 	vq->umem = NULL;
 	vq->iotlb = NULL;

@@ -1344,6 +1344,9 @@ static void fbcon_cursor(struct vc_data *vc, int mode)
 
 	ops->cursor_flash = (mode == CM_ERASE) ? 0 : 1;
 
+	if (!ops->cursor)
+		return;
+
 	ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
		    get_color(vc, info, c, 0));
 }

@@ -1031,7 +1031,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
					  PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
 		if (!pdev) {
			pr_err("Unable to find PCI Hyper-V video\n");
-			kfree(info->apertures);
			return -ENODEV;
 		}
 
@@ -1129,7 +1128,6 @@ getmem_done:
 	} else {
 		pci_dev_put(pdev);
 	}
-	kfree(info->apertures);
 
 	return 0;
 
@@ -1141,7 +1139,6 @@ err2:
 err1:
 	if (!gen2vm)
 		pci_dev_put(pdev);
-	kfree(info->apertures);
 
 	return -ENOMEM;
 }

@@ -626,27 +626,41 @@ int ext4_claim_free_clusters(struct ext4_sb_info *sbi,
 
 /**
  * ext4_should_retry_alloc() - check if a block allocation should be retried
- * @sb: super block
- * @retries: number of attemps has been made
+ * @sb: superblock
+ * @retries: number of retry attempts made so far
  *
- * ext4_should_retry_alloc() is called when ENOSPC is returned, and if
- * it is profitable to retry the operation, this function will wait
- * for the current or committing transaction to complete, and then
- * return TRUE.  We will only retry once.
+ * ext4_should_retry_alloc() is called when ENOSPC is returned while
+ * attempting to allocate blocks.  If there's an indication that a pending
+ * journal transaction might free some space and allow another attempt to
+ * succeed, this function will wait for the current or committing transaction
+ * to complete and then return TRUE.
  */
 int ext4_should_retry_alloc(struct super_block *sb, int *retries)
 {
-	if (!ext4_has_free_clusters(EXT4_SB(sb), 1, 0) ||
-	    (*retries)++ > 1 ||
-	    !EXT4_SB(sb)->s_journal)
+	struct ext4_sb_info *sbi = EXT4_SB(sb);
+
+	if (!sbi->s_journal)
 		return 0;
 
+	if (++(*retries) > 3) {
+		percpu_counter_inc(&sbi->s_sra_exceeded_retry_limit);
+		return 0;
+	}
+
+	/*
+	 * if there's no indication that blocks are about to be freed it's
+	 * possible we just missed a transaction commit that did so
+	 */
 	smp_mb();
-	if (EXT4_SB(sb)->s_mb_free_pending == 0)
-		return 0;
+	if (sbi->s_mb_free_pending == 0)
+		return ext4_has_free_clusters(sbi, 1, 0);
 
-	/*
-	 * it's possible we've just missed a transaction commit here,
-	 * so ignore the returned status
-	 */
 	jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
-	jbd2_journal_force_commit_nested(EXT4_SB(sb)->s_journal);
+	(void) jbd2_journal_force_commit_nested(sbi->s_journal);
 	return 1;
 }

@@ -1474,6 +1474,7 @@ struct ext4_sb_info {
 	struct percpu_counter s_freeinodes_counter;
 	struct percpu_counter s_dirs_counter;
 	struct percpu_counter s_dirtyclusters_counter;
+	struct percpu_counter s_sra_exceeded_retry_limit;
 	struct blockgroup_lock *s_blockgroup_lock;
 	struct proc_dir_entry *s_proc;
 	struct kobject s_kobj;

@@ -1950,13 +1950,13 @@ static int __ext4_journalled_writepage(struct page *page,
 	if (!ret)
 		ret = err;
 
-	if (!ext4_has_inline_data(inode))
-		ext4_walk_page_buffers(NULL, page_bufs, 0, len,
-				       NULL, bput_one);
 	ext4_set_inode_state(inode, EXT4_STATE_JDATA);
out:
 	unlock_page(page);
out_no_pagelock:
+	if (!inline_data && page_bufs)
+		ext4_walk_page_buffers(NULL, page_bufs, 0, len,
+				       NULL, bput_one);
 	brelse(inode_bh);
 	return ret;
 }

@@ -3903,14 +3903,14 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
	 */
 	retval = -ENOENT;
 	if (!old.bh || le32_to_cpu(old.de->inode) != old.inode->i_ino)
-		goto end_rename;
+		goto release_bh;
 
 	new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
				 &new.de, &new.inlined, NULL);
 	if (IS_ERR(new.bh)) {
 		retval = PTR_ERR(new.bh);
 		new.bh = NULL;
-		goto end_rename;
+		goto release_bh;
 	}
 	if (new.bh) {
 		if (!new.inode) {
@@ -3927,15 +3927,13 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
 		handle = ext4_journal_start(old.dir, EXT4_HT_DIR, credits);
 		if (IS_ERR(handle)) {
			retval = PTR_ERR(handle);
-			handle = NULL;
-			goto end_rename;
+			goto release_bh;
 		}
 	} else {
 		whiteout = ext4_whiteout_for_rename(&old, credits, &handle);
 		if (IS_ERR(whiteout)) {
			retval = PTR_ERR(whiteout);
-			whiteout = NULL;
-			goto end_rename;
+			goto release_bh;
 		}
 	}
 
@@ -4072,16 +4070,18 @@ end_rename:
			ext4_resetent(handle, &old,
				      old.inode->i_ino, old_file_type);
			drop_nlink(whiteout);
+			ext4_orphan_add(handle, whiteout);
 		}
 		unlock_new_inode(whiteout);
+		ext4_journal_stop(handle);
 		iput(whiteout);
-
+	} else {
+		ext4_journal_stop(handle);
 	}
+release_bh:
 	brelse(old.dir_bh);
 	brelse(old.bh);
 	brelse(new.bh);
-	if (handle)
-		ext4_journal_stop(handle);
 	return retval;
 }

@@ -1226,6 +1226,7 @@ static void ext4_put_super(struct super_block *sb)
 	percpu_counter_destroy(&sbi->s_freeinodes_counter);
 	percpu_counter_destroy(&sbi->s_dirs_counter);
 	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
+	percpu_counter_destroy(&sbi->s_sra_exceeded_retry_limit);
 	percpu_free_rwsem(&sbi->s_writepages_rwsem);
#ifdef CONFIG_QUOTA
 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
@@ -5008,6 +5009,9 @@ no_journal:
 	if (!err)
 		err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0,
					  GFP_KERNEL);
+	if (!err)
+		err = percpu_counter_init(&sbi->s_sra_exceeded_retry_limit, 0,
+					  GFP_KERNEL);
 	if (!err)
 		err = percpu_init_rwsem(&sbi->s_writepages_rwsem);
 
@@ -5120,6 +5124,7 @@ failed_mount6:
 	percpu_counter_destroy(&sbi->s_freeinodes_counter);
 	percpu_counter_destroy(&sbi->s_dirs_counter);
 	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
+	percpu_counter_destroy(&sbi->s_sra_exceeded_retry_limit);
 	percpu_free_rwsem(&sbi->s_writepages_rwsem);
failed_mount5:
 	ext4_ext_release(sb);

@@ -24,6 +24,7 @@ typedef enum {
	attr_session_write_kbytes,
	attr_lifetime_write_kbytes,
	attr_reserved_clusters,
+	attr_sra_exceeded_retry_limit,
	attr_inode_readahead,
	attr_trigger_test_error,
	attr_first_error_time,
@@ -208,6 +209,7 @@ EXT4_ATTR_FUNC(delayed_allocation_blocks, 0444);
EXT4_ATTR_FUNC(session_write_kbytes, 0444);
EXT4_ATTR_FUNC(lifetime_write_kbytes, 0444);
EXT4_ATTR_FUNC(reserved_clusters, 0644);
+EXT4_ATTR_FUNC(sra_exceeded_retry_limit, 0444);
 
EXT4_ATTR_OFFSET(inode_readahead_blks, 0644, inode_readahead,
		 ext4_sb_info, s_inode_readahead_blks);
@@ -257,6 +259,7 @@ static struct attribute *ext4_attrs[] = {
	ATTR_LIST(session_write_kbytes),
	ATTR_LIST(lifetime_write_kbytes),
	ATTR_LIST(reserved_clusters),
+	ATTR_LIST(sra_exceeded_retry_limit),
	ATTR_LIST(inode_readahead_blks),
	ATTR_LIST(inode_goal),
	ATTR_LIST(mb_stats),
@@ -380,6 +383,10 @@ static ssize_t ext4_attr_show(struct kobject *kobj,
		return snprintf(buf, PAGE_SIZE, "%llu\n",
				(unsigned long long)
				atomic64_read(&sbi->s_resv_clusters));
+	case attr_sra_exceeded_retry_limit:
+		return snprintf(buf, PAGE_SIZE, "%llu\n",
+				(unsigned long long)
+			percpu_counter_sum(&sbi->s_sra_exceeded_retry_limit));
	case attr_inode_readahead:
	case attr_pointer_ui:
		if (!ptr)

@@ -1324,8 +1324,15 @@ static int virtio_fs_fill_super(struct super_block *sb, struct fs_context *fsc)
 
	/* virtiofs allocates and installs its own fuse devices */
	ctx->fudptr = NULL;
-	if (ctx->dax)
+	if (ctx->dax) {
+		if (!fs->dax_dev) {
+			err = -EINVAL;
+			pr_err("virtio-fs: dax can't be enabled as filesystem"
+			       " device does not support it.\n");
+			goto err_free_fuse_devs;
+		}
		ctx->dax_dev = fs->dax_dev;
+	}
	err = fuse_fill_super_common(sb, ctx);
	if (err < 0)
		goto err_free_fuse_devs;

@@ -4401,6 +4401,7 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
struct io_async_msghdr iomsg, *kmsg;
|
struct io_async_msghdr iomsg, *kmsg;
|
||||||
struct socket *sock;
|
struct socket *sock;
|
||||||
unsigned flags;
|
unsigned flags;
|
||||||
|
int min_ret = 0;
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
sock = sock_from_file(req->file, &ret);
|
sock = sock_from_file(req->file, &ret);
|
||||||
@@ -4421,12 +4422,15 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
kmsg = &iomsg;
|
kmsg = &iomsg;
|
||||||
}
|
}
|
||||||
|
|
||||||
flags = req->sr_msg.msg_flags;
|
flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
|
||||||
if (flags & MSG_DONTWAIT)
|
if (flags & MSG_DONTWAIT)
|
||||||
req->flags |= REQ_F_NOWAIT;
|
req->flags |= REQ_F_NOWAIT;
|
||||||
else if (force_nonblock)
|
else if (force_nonblock)
|
||||||
flags |= MSG_DONTWAIT;
|
flags |= MSG_DONTWAIT;
|
||||||
|
|
||||||
|
if (flags & MSG_WAITALL)
|
||||||
|
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
|
||||||
|
|
||||||
ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
|
ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
|
||||||
if (force_nonblock && ret == -EAGAIN)
|
if (force_nonblock && ret == -EAGAIN)
|
||||||
return io_setup_async_msg(req, kmsg);
|
return io_setup_async_msg(req, kmsg);
|
||||||
@@ -4436,7 +4440,7 @@ static int io_sendmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
if (kmsg->iov != kmsg->fast_iov)
|
if (kmsg->iov != kmsg->fast_iov)
|
||||||
kfree(kmsg->iov);
|
kfree(kmsg->iov);
|
||||||
req->flags &= ~REQ_F_NEED_CLEANUP;
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
||||||
if (ret < 0)
|
if (ret < min_ret)
|
||||||
req_set_fail_links(req);
|
req_set_fail_links(req);
|
||||||
__io_req_complete(req, ret, 0, cs);
|
__io_req_complete(req, ret, 0, cs);
|
||||||
return 0;
|
return 0;
|
||||||
@@ -4450,6 +4454,7 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
|
|||||||
struct iovec iov;
|
struct iovec iov;
|
||||||
struct socket *sock;
|
struct socket *sock;
|
||||||
unsigned flags;
|
unsigned flags;
|
||||||
|
int min_ret = 0;
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
sock = sock_from_file(req->file, &ret);
|
sock = sock_from_file(req->file, &ret);
|
||||||
@@ -4465,12 +4470,15 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
|
|||||||
msg.msg_controllen = 0;
|
msg.msg_controllen = 0;
|
||||||
msg.msg_namelen = 0;
|
msg.msg_namelen = 0;
|
||||||
|
|
||||||
flags = req->sr_msg.msg_flags;
|
flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
|
||||||
if (flags & MSG_DONTWAIT)
|
if (flags & MSG_DONTWAIT)
|
||||||
req->flags |= REQ_F_NOWAIT;
|
req->flags |= REQ_F_NOWAIT;
|
||||||
else if (force_nonblock)
|
else if (force_nonblock)
|
||||||
flags |= MSG_DONTWAIT;
|
flags |= MSG_DONTWAIT;
|
||||||
|
|
||||||
|
if (flags & MSG_WAITALL)
|
||||||
|
min_ret = iov_iter_count(&msg.msg_iter);
|
||||||
|
|
||||||
msg.msg_flags = flags;
|
msg.msg_flags = flags;
|
||||||
ret = sock_sendmsg(sock, &msg);
|
ret = sock_sendmsg(sock, &msg);
|
||||||
if (force_nonblock && ret == -EAGAIN)
|
if (force_nonblock && ret == -EAGAIN)
|
||||||
@@ -4478,7 +4486,7 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
|
|||||||
if (ret == -ERESTARTSYS)
|
if (ret == -ERESTARTSYS)
|
||||||
ret = -EINTR;
|
ret = -EINTR;
|
||||||
|
|
||||||
if (ret < 0)
|
if (ret < min_ret)
|
||||||
req_set_fail_links(req);
|
req_set_fail_links(req);
|
||||||
__io_req_complete(req, ret, 0, cs);
|
__io_req_complete(req, ret, 0, cs);
|
||||||
return 0;
|
return 0;
|
||||||
@@ -4630,6 +4638,7 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
struct socket *sock;
|
struct socket *sock;
|
||||||
struct io_buffer *kbuf;
|
struct io_buffer *kbuf;
|
||||||
unsigned flags;
|
unsigned flags;
|
||||||
|
int min_ret = 0;
|
||||||
int ret, cflags = 0;
|
int ret, cflags = 0;
|
||||||
|
|
||||||
sock = sock_from_file(req->file, &ret);
|
sock = sock_from_file(req->file, &ret);
|
||||||
@@ -4659,12 +4668,15 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
1, req->sr_msg.len);
|
1, req->sr_msg.len);
|
||||||
}
|
}
|
||||||
|
|
||||||
flags = req->sr_msg.msg_flags;
|
flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
|
||||||
if (flags & MSG_DONTWAIT)
|
if (flags & MSG_DONTWAIT)
|
||||||
req->flags |= REQ_F_NOWAIT;
|
req->flags |= REQ_F_NOWAIT;
|
||||||
else if (force_nonblock)
|
else if (force_nonblock)
|
||||||
flags |= MSG_DONTWAIT;
|
flags |= MSG_DONTWAIT;
|
||||||
|
|
||||||
|
if (flags & MSG_WAITALL)
|
||||||
|
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
|
||||||
|
|
||||||
ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
|
ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
|
||||||
kmsg->uaddr, flags);
|
kmsg->uaddr, flags);
|
||||||
if (force_nonblock && ret == -EAGAIN)
|
if (force_nonblock && ret == -EAGAIN)
|
||||||
@@ -4677,7 +4689,7 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock,
|
|||||||
if (kmsg->iov != kmsg->fast_iov)
|
if (kmsg->iov != kmsg->fast_iov)
|
||||||
kfree(kmsg->iov);
|
kfree(kmsg->iov);
|
||||||
req->flags &= ~REQ_F_NEED_CLEANUP;
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
||||||
if (ret < 0)
|
if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
|
||||||
req_set_fail_links(req);
|
req_set_fail_links(req);
|
||||||
__io_req_complete(req, ret, cflags, cs);
|
__io_req_complete(req, ret, cflags, cs);
|
||||||
return 0;
|
return 0;
|
||||||
@@ -4693,6 +4705,7 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
|
|||||||
struct socket *sock;
|
struct socket *sock;
|
||||||
struct iovec iov;
|
struct iovec iov;
|
||||||
unsigned flags;
|
unsigned flags;
|
||||||
|
int min_ret = 0;
|
||||||
int ret, cflags = 0;
|
int ret, cflags = 0;
|
||||||
|
|
||||||
sock = sock_from_file(req->file, &ret);
|
sock = sock_from_file(req->file, &ret);
|
||||||
@@ -4717,12 +4730,15 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
|
|||||||
msg.msg_iocb = NULL;
|
msg.msg_iocb = NULL;
|
||||||
msg.msg_flags = 0;
|
msg.msg_flags = 0;
|
||||||
|
|
||||||
flags = req->sr_msg.msg_flags;
|
flags = req->sr_msg.msg_flags | MSG_NOSIGNAL;
|
||||||
if (flags & MSG_DONTWAIT)
|
if (flags & MSG_DONTWAIT)
|
||||||
req->flags |= REQ_F_NOWAIT;
|
req->flags |= REQ_F_NOWAIT;
|
||||||
else if (force_nonblock)
|
else if (force_nonblock)
|
||||||
flags |= MSG_DONTWAIT;
|
flags |= MSG_DONTWAIT;
|
||||||
|
|
||||||
|
if (flags & MSG_WAITALL)
|
||||||
|
min_ret = iov_iter_count(&msg.msg_iter);
|
||||||
|
|
||||||
ret = sock_recvmsg(sock, &msg, flags);
|
ret = sock_recvmsg(sock, &msg, flags);
|
||||||
if (force_nonblock && ret == -EAGAIN)
|
if (force_nonblock && ret == -EAGAIN)
|
||||||
return -EAGAIN;
|
return -EAGAIN;
|
||||||
@@ -4731,7 +4747,7 @@ static int io_recv(struct io_kiocb *req, bool force_nonblock,
|
|||||||
out_free:
|
out_free:
|
||||||
if (req->flags & REQ_F_BUFFER_SELECTED)
|
if (req->flags & REQ_F_BUFFER_SELECTED)
|
||||||
cflags = io_put_recv_kbuf(req);
|
cflags = io_put_recv_kbuf(req);
|
||||||
if (ret < 0)
|
if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
|
||||||
req_set_fail_links(req);
|
req_set_fail_links(req);
|
||||||
__io_req_complete(req, ret, cflags, cs);
|
__io_req_complete(req, ret, cflags, cs);
|
||||||
return 0;
|
return 0;
|
||||||
@@ -6242,7 +6258,6 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
|
|||||||
spin_unlock_irqrestore(&ctx->completion_lock, flags);
|
spin_unlock_irqrestore(&ctx->completion_lock, flags);
|
||||||
|
|
||||||
if (prev) {
|
if (prev) {
|
||||||
req_set_fail_links(prev);
|
|
||||||
io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
|
io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME);
|
||||||
io_put_req_deferred(prev, 1);
|
io_put_req_deferred(prev, 1);
|
||||||
} else {
|
} else {
|
||||||
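
Note on the MSG_WAITALL hunks above: when userspace asked for a full transfer, a short count (ret < min_ret) is now treated as a failed request. A minimal userspace sketch of the same semantic (illustrative only, not kernel code - the socketpair setup and buffer sizes are invented for the demo):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
            int sv[2];
            char buf[10];
            ssize_t ret;
            size_t min_ret = sizeof(buf);   /* full transfer expected */

            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
                    return 1;

            /* Send only 5 of the 10 bytes the reader asks for, then EOF. */
            write(sv[0], "hello", 5);
            close(sv[0]);

            /* MSG_WAITALL: block until all 10 bytes arrive, or EOF/error.
             * EOF forces a short return here, mirroring the ret < min_ret
             * check in the io_recv()/io_recvmsg() hunks above. */
            ret = recv(sv[1], buf, sizeof(buf), MSG_WAITALL);
            if (ret < 0 || (size_t)ret < min_ret)
                    fprintf(stderr, "short transfer: %zd of %zu\n", ret, min_ret);

            close(sv[1]);
            return 0;
    }
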
diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
@@ -170,6 +170,16 @@ int iomap_swapfile_activate(struct swap_info_struct *sis,
 			return ret;
 	}
 
+	/*
+	 * If this swapfile doesn't contain even a single page-aligned
+	 * contiguous range of blocks, reject this useless swapfile to
+	 * prevent confusion later on.
+	 */
+	if (isi.nr_pages == 0) {
+		pr_warn("swapon: Cannot find a single usable page in file.\n");
+		return -EINVAL;
+	}
+
 	*pagespan = 1 + isi.highest_ppage - isi.lowest_ppage;
 	sis->max = isi.nr_pages;
 	sis->pages = isi.nr_pages - 1;
diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
@@ -73,6 +73,7 @@ config NFSD_V4
 	select NFSD_V3
 	select FS_POSIX_ACL
 	select SUNRPC_GSS
+	select CRYPTO
 	select CRYPTO_MD5
 	select CRYPTO_SHA256
 	select GRACE_PERIOD
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
@@ -1189,6 +1189,7 @@ static void nfsd4_cb_done(struct rpc_task *task, void *calldata)
 		switch (task->tk_status) {
 		case -EIO:
 		case -ETIMEDOUT:
+		case -EACCES:
 			nfsd4_mark_cb_down(clp, task->tk_status);
 		}
 		break;
diff --git a/fs/reiserfs/xattr.h b/fs/reiserfs/xattr.h
@@ -43,7 +43,7 @@ void reiserfs_security_free(struct reiserfs_security_handle *sec);
 
 static inline int reiserfs_xattrs_initialized(struct super_block *sb)
 {
-	return REISERFS_SB(sb)->priv_root != NULL;
+	return REISERFS_SB(sb)->priv_root && REISERFS_SB(sb)->xattr_root;
 }
 
 #define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header))
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
@@ -222,10 +222,14 @@ void __iomem *__acpi_map_table(unsigned long phys, unsigned long size);
 void __acpi_unmap_table(void __iomem *map, unsigned long size);
 int early_acpi_boot_init(void);
 int acpi_boot_init (void);
+void acpi_boot_table_prepare (void);
 void acpi_boot_table_init (void);
 int acpi_mps_check (void);
 int acpi_numa_init (void);
 
+int acpi_locate_initial_tables (void);
+void acpi_reserve_initial_tables (void);
+void acpi_table_init_complete (void);
 int acpi_table_init (void);
 int acpi_table_parse(char *id, acpi_tbl_table_handler handler);
 int __init acpi_table_parse_entries(char *id, unsigned long table_size,
@@ -807,9 +811,12 @@ static inline int acpi_boot_init(void)
 	return 0;
 }
 
+static inline void acpi_boot_table_prepare(void)
+{
+}
+
 static inline void acpi_boot_table_init(void)
 {
-	return;
 }
 
 static inline int acpi_mps_check(void)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
@@ -20,6 +20,7 @@
 #include <linux/module.h>
 #include <linux/kallsyms.h>
 #include <linux/capability.h>
+#include <linux/percpu-refcount.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -556,7 +557,8 @@ struct bpf_tramp_progs {
  * fentry = a set of program to run before calling original function
  * fexit = a set of program to run after original function
  */
-int arch_prepare_bpf_trampoline(void *image, void *image_end,
+struct bpf_tramp_image;
+int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
 				struct bpf_tramp_progs *tprogs,
 				void *orig_call);
@@ -565,6 +567,8 @@ u64 notrace __bpf_prog_enter(void);
 void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
 void notrace __bpf_prog_enter_sleepable(void);
 void notrace __bpf_prog_exit_sleepable(void);
+void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr);
+void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr);
 
 struct bpf_ksym {
 	unsigned long start;
@@ -583,6 +587,18 @@ enum bpf_tramp_prog_type {
 	BPF_TRAMP_REPLACE, /* more than MAX */
 };
 
+struct bpf_tramp_image {
+	void *image;
+	struct bpf_ksym ksym;
+	struct percpu_ref pcref;
+	void *ip_after_call;
+	void *ip_epilogue;
+	union {
+		struct rcu_head rcu;
+		struct work_struct work;
+	};
+};
+
 struct bpf_trampoline {
 	/* hlist for trampoline_table */
 	struct hlist_node hlist;
@@ -605,9 +621,8 @@ struct bpf_trampoline {
 	/* Number of attached programs. A counter per kind. */
 	int progs_cnt[BPF_TRAMP_MAX];
 	/* Executable image of trampoline */
-	void *image;
+	struct bpf_tramp_image *cur_image;
 	u64 selector;
-	struct bpf_ksym ksym;
 };
 
 struct bpf_attach_target_info {
@@ -691,6 +706,8 @@ void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym);
 void bpf_image_ksym_del(struct bpf_ksym *ksym);
 void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
+int bpf_jit_charge_modmem(u32 pages);
+void bpf_jit_uncharge_modmem(u32 pages);
 #else
 static inline int bpf_trampoline_link_prog(struct bpf_prog *prog,
 					   struct bpf_trampoline *tr)
@@ -780,7 +797,6 @@ struct bpf_prog_aux {
 	bool func_proto_unreliable;
 	bool sleepable;
 	bool tail_call_reachable;
-	enum bpf_tramp_prog_type trampoline_prog_type;
 	struct hlist_node tramp_hlist;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
diff --git a/include/linux/can/can-ml.h b/include/linux/can/can-ml.h
@@ -44,6 +44,7 @@
 
 #include <linux/can.h>
 #include <linux/list.h>
+#include <linux/netdevice.h>
 
 #define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS)
 #define CAN_EFF_RCV_HASH_BITS 10
@@ -65,4 +66,15 @@ struct can_ml_priv {
 #endif
 };
 
+static inline struct can_ml_priv *can_get_ml_priv(struct net_device *dev)
+{
+	return netdev_get_ml_priv(dev, ML_PRIV_CAN);
+}
+
+static inline void can_set_ml_priv(struct net_device *dev,
+				   struct can_ml_priv *ml_priv)
+{
+	netdev_set_ml_priv(dev, ml_priv, ML_PRIV_CAN);
+}
+
 #endif /* CAN_ML_H */
diff --git a/include/linux/extcon.h b/include/linux/extcon.h
@@ -271,6 +271,29 @@ static inline void devm_extcon_unregister_notifier(struct device *dev,
 				struct extcon_dev *edev, unsigned int id,
 				struct notifier_block *nb) { }
 
+static inline int extcon_register_notifier_all(struct extcon_dev *edev,
+					       struct notifier_block *nb)
+{
+	return 0;
+}
+
+static inline int extcon_unregister_notifier_all(struct extcon_dev *edev,
+						 struct notifier_block *nb)
+{
+	return 0;
+}
+
+static inline int devm_extcon_register_notifier_all(struct device *dev,
+						    struct extcon_dev *edev,
+						    struct notifier_block *nb)
+{
+	return 0;
+}
+
+static inline void devm_extcon_unregister_notifier_all(struct device *dev,
+						       struct extcon_dev *edev,
+						       struct notifier_block *nb) { }
+
 static inline struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
 {
 	return ERR_PTR(-ENODEV);
diff --git a/include/linux/firmware/intel/stratix10-svc-client.h b/include/linux/firmware/intel/stratix10-svc-client.h
@@ -56,7 +56,7 @@
  * COMMAND_RECONFIG_FLAG_PARTIAL:
  * Set to FPGA configuration type (full or partial).
  */
-#define COMMAND_RECONFIG_FLAG_PARTIAL	1
+#define COMMAND_RECONFIG_FLAG_PARTIAL	0
 
 /**
  * Timeout settings for service clients:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
@@ -1633,6 +1633,12 @@ enum netdev_priv_flags {
 #define IFF_L3MDEV_RX_HANDLER	IFF_L3MDEV_RX_HANDLER
 #define IFF_LIVE_RENAME_OK	IFF_LIVE_RENAME_OK
 
+/* Specifies the type of the struct net_device::ml_priv pointer */
+enum netdev_ml_priv_type {
+	ML_PRIV_NONE,
+	ML_PRIV_CAN,
+};
+
 /**
  *	struct net_device - The DEVICE structure.
  *
@@ -1828,6 +1834,7 @@ enum netdev_priv_flags {
  *	@nd_net:	Network namespace this network device is inside
  *
  *	@ml_priv:	Mid-layer private
+ *	@ml_priv_type:	Mid-layer private type
  *	@lstats:	Loopback statistics
  *	@tstats:	Tunnel statistics
  *	@dstats:	Dummy statistics
@@ -2140,8 +2147,10 @@ struct net_device {
 	possible_net_t			nd_net;
 
 	/* mid-layer private */
+	void				*ml_priv;
+	enum netdev_ml_priv_type	ml_priv_type;
+
 	union {
-		void			*ml_priv;
 		struct pcpu_lstats __percpu	*lstats;
 		struct pcpu_sw_netstats __percpu	*tstats;
 		struct pcpu_dstats __percpu	*dstats;
@@ -2340,6 +2349,29 @@ static inline void netdev_reset_rx_headroom(struct net_device *dev)
 	netdev_set_rx_headroom(dev, -1);
 }
 
+static inline void *netdev_get_ml_priv(struct net_device *dev,
+				       enum netdev_ml_priv_type type)
+{
+	if (dev->ml_priv_type != type)
+		return NULL;
+
+	return dev->ml_priv;
+}
+
+static inline void netdev_set_ml_priv(struct net_device *dev,
+				      void *ml_priv,
+				      enum netdev_ml_priv_type type)
+{
+	WARN(dev->ml_priv_type && dev->ml_priv_type != type,
+	     "Overwriting already set ml_priv_type (%u) with different ml_priv_type (%u)!\n",
+	     dev->ml_priv_type, type);
+	WARN(!dev->ml_priv_type && dev->ml_priv,
+	     "Overwriting already set ml_priv and ml_priv_type is ML_PRIV_NONE!\n");
+
+	dev->ml_priv = ml_priv;
+	dev->ml_priv_type = type;
+}
+
 /*
  * Net namespace inlines
  */
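
Note on the netdev_get_ml_priv()/netdev_set_ml_priv() helpers above: the formerly untyped net_device::ml_priv pointer now carries a type tag, so a caller only gets the pointer back when the tag matches. A standalone userspace sketch of that tagged-accessor pattern (names invented for illustration; this is not the kernel API):

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    enum ml_priv_type { ML_PRIV_NONE, ML_PRIV_CAN };

    struct fake_netdev {
            void *ml_priv;
            enum ml_priv_type ml_priv_type;
    };

    static void *get_ml_priv(struct fake_netdev *dev, enum ml_priv_type type)
    {
            /* Refuse to hand out another subsystem's private data. */
            return dev->ml_priv_type == type ? dev->ml_priv : NULL;
    }

    static void set_ml_priv(struct fake_netdev *dev, void *priv,
                            enum ml_priv_type type)
    {
            dev->ml_priv = priv;
            dev->ml_priv_type = type;
    }

    int main(void)
    {
            struct fake_netdev dev = { 0 };
            int can_state = 42;

            set_ml_priv(&dev, &can_state, ML_PRIV_CAN);
            assert(get_ml_priv(&dev, ML_PRIV_CAN) == &can_state);
            assert(get_ml_priv(&dev, ML_PRIV_NONE) == NULL); /* wrong tag */
            puts("tagged accessor ok");
            return 0;
    }
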
diff --git a/include/linux/qcom-geni-se.h b/include/linux/qcom-geni-se.h
@@ -462,7 +462,5 @@ void geni_icc_set_tag(struct geni_se *se, u32 tag);
 int geni_icc_enable(struct geni_se *se);
 
 int geni_icc_disable(struct geni_se *se);
-
-void geni_remove_earlycon_icc_vote(void);
 #endif
 #endif
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
@@ -173,9 +173,10 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
  */
 static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
 {
-#ifdef CONFIG_DEBUG_MUTEXES
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
 	mutex_release(&ctx->dep_map, _THIS_IP_);
-
+#endif
+#ifdef CONFIG_DEBUG_MUTEXES
 	DEBUG_LOCKS_WARN_ON(ctx->acquired);
 	if (!IS_ENABLED(CONFIG_PROVE_LOCKING))
 		/*
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
@@ -431,7 +431,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 
 		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
 		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
-		err = arch_prepare_bpf_trampoline(image,
+		err = arch_prepare_bpf_trampoline(NULL, image,
 						  st_map->image + PAGE_SIZE,
 						  &st_ops->func_models[i], 0,
 						  tprogs, NULL);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
@@ -827,7 +827,7 @@ static int __init bpf_jit_charge_init(void)
 }
 pure_initcall(bpf_jit_charge_init);
 
-static int bpf_jit_charge_modmem(u32 pages)
+int bpf_jit_charge_modmem(u32 pages)
 {
 	if (atomic_long_add_return(pages, &bpf_jit_current) >
 	    (bpf_jit_limit >> PAGE_SHIFT)) {
@@ -840,7 +840,7 @@ static int bpf_jit_charge_modmem(u32 pages)
 	return 0;
 }
 
-static void bpf_jit_uncharge_modmem(u32 pages)
+void bpf_jit_uncharge_modmem(u32 pages)
 {
 	atomic_long_sub(pages, &bpf_jit_current);
 }
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
@@ -59,19 +59,10 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
 			   PAGE_SIZE, true, ksym->name);
 }
 
-static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)
-{
-	struct bpf_ksym *ksym = &tr->ksym;
-
-	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", tr->key);
-	bpf_image_ksym_add(tr->image, ksym);
-}
-
 static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
-	void *image;
 	int i;
 
 	mutex_lock(&trampoline_mutex);
@@ -86,14 +77,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	if (!tr)
 		goto out;
 
-	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
-	image = bpf_jit_alloc_exec_page();
-	if (!image) {
-		kfree(tr);
-		tr = NULL;
-		goto out;
-	}
-
 	tr->key = key;
 	INIT_HLIST_NODE(&tr->hlist);
 	hlist_add_head(&tr->hlist, head);
@@ -101,9 +84,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	mutex_init(&tr->mutex);
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
-	tr->image = image;
-	INIT_LIST_HEAD_RCU(&tr->ksym.lnode);
-	bpf_trampoline_ksym_add(tr);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
@@ -187,10 +167,143 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
 	return tprogs;
 }
 
+static void __bpf_tramp_image_put_deferred(struct work_struct *work)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(work, struct bpf_tramp_image, work);
+	bpf_image_ksym_del(&im->ksym);
+	trace_android_vh_set_memory_nx((unsigned long)im->image, 1);
+	bpf_jit_free_exec(im->image);
+	bpf_jit_uncharge_modmem(1);
+	percpu_ref_exit(&im->pcref);
+	kfree_rcu(im, rcu);
+}
+
+/* callback, fexit step 3 or fentry step 2 */
+static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(rcu, struct bpf_tramp_image, rcu);
+	INIT_WORK(&im->work, __bpf_tramp_image_put_deferred);
+	schedule_work(&im->work);
+}
+
+/* callback, fexit step 2. Called after percpu_ref_kill confirms. */
+static void __bpf_tramp_image_release(struct percpu_ref *pcref)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(pcref, struct bpf_tramp_image, pcref);
+	call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
+}
+
+/* callback, fexit or fentry step 1 */
+static void __bpf_tramp_image_put_rcu_tasks(struct rcu_head *rcu)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(rcu, struct bpf_tramp_image, rcu);
+	if (im->ip_after_call)
+		/* the case of fmod_ret/fexit trampoline and CONFIG_PREEMPTION=y */
+		percpu_ref_kill(&im->pcref);
+	else
+		/* the case of fentry trampoline */
+		call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
+}
+
+static void bpf_tramp_image_put(struct bpf_tramp_image *im)
+{
+	/* The trampoline image that calls original function is using:
+	 * rcu_read_lock_trace to protect sleepable bpf progs
+	 * rcu_read_lock to protect normal bpf progs
+	 * percpu_ref to protect trampoline itself
+	 * rcu tasks to protect trampoline asm not covered by percpu_ref
+	 * (which are few asm insns before __bpf_tramp_enter and
+	 *  after __bpf_tramp_exit)
+	 *
+	 * The trampoline is unreachable before bpf_tramp_image_put().
+	 *
+	 * First, patch the trampoline to avoid calling into fexit progs.
+	 * The progs will be freed even if the original function is still
+	 * executing or sleeping.
+	 * In case of CONFIG_PREEMPT=y use call_rcu_tasks() to wait on
+	 * first few asm instructions to execute and call into
+	 * __bpf_tramp_enter->percpu_ref_get.
+	 * Then use percpu_ref_kill to wait for the trampoline and the original
+	 * function to finish.
+	 * Then use call_rcu_tasks() to make sure few asm insns in
+	 * the trampoline epilogue are done as well.
+	 *
+	 * In !PREEMPT case the task that got interrupted in the first asm
+	 * insns won't go through an RCU quiescent state which the
+	 * percpu_ref_kill will be waiting for. Hence the first
+	 * call_rcu_tasks() is not necessary.
+	 */
+	if (im->ip_after_call) {
+		int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,
+					     NULL, im->ip_epilogue);
+		WARN_ON(err);
+		if (IS_ENABLED(CONFIG_PREEMPTION))
+			call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+		else
+			percpu_ref_kill(&im->pcref);
+		return;
+	}
+
+	/* The trampoline without fexit and fmod_ret progs doesn't call original
+	 * function and doesn't use percpu_ref.
+	 * Use call_rcu_tasks_trace() to wait for sleepable progs to finish.
+	 * Then use call_rcu_tasks() to wait for the rest of trampoline asm
+	 * and normal progs.
+	 */
+	call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+}
+
+static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
+{
+	struct bpf_tramp_image *im;
+	struct bpf_ksym *ksym;
+	void *image;
+	int err = -ENOMEM;
+
+	im = kzalloc(sizeof(*im), GFP_KERNEL);
+	if (!im)
+		goto out;
+
+	err = bpf_jit_charge_modmem(1);
+	if (err)
+		goto out_free_im;
+
+	err = -ENOMEM;
+	im->image = image = bpf_jit_alloc_exec_page();
+	if (!image)
+		goto out_uncharge;
+
+	err = percpu_ref_init(&im->pcref, __bpf_tramp_image_release, 0, GFP_KERNEL);
+	if (err)
+		goto out_free_image;
+
+	ksym = &im->ksym;
+	INIT_LIST_HEAD_RCU(&ksym->lnode);
+	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);
+	bpf_image_ksym_add(image, ksym);
+	return im;
+
+out_free_image:
+	bpf_jit_free_exec(im->image);
+out_uncharge:
+	bpf_jit_uncharge_modmem(1);
+out_free_im:
+	kfree(im);
+out:
+	return ERR_PTR(err);
+}
+
 static int bpf_trampoline_update(struct bpf_trampoline *tr)
 {
-	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
-	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
+	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
 	int err, total;
@@ -200,41 +313,42 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 		return PTR_ERR(tprogs);
 
 	if (total == 0) {
-		err = unregister_fentry(tr, old_image);
+		err = unregister_fentry(tr, tr->cur_image->image);
+		bpf_tramp_image_put(tr->cur_image);
+		tr->cur_image = NULL;
 		tr->selector = 0;
 		goto out;
 	}
 
+	im = bpf_tramp_image_alloc(tr->key, tr->selector);
+	if (IS_ERR(im)) {
+		err = PTR_ERR(im);
+		goto out;
+	}
+
 	if (tprogs[BPF_TRAMP_FEXIT].nr_progs ||
 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
-	/* Though the second half of trampoline page is unused a task could be
-	 * preempted in the middle of the first half of trampoline and two
-	 * updates to trampoline would change the code from underneath the
-	 * preempted task. Hence wait for tasks to voluntarily schedule or go
-	 * to userspace.
-	 * The same trampoline can hold both sleepable and non-sleepable progs.
-	 * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
-	 * programs finish executing.
-	 * Wait for these two grace periods together.
-	 */
-	synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
-
-	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
+	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
 					  &tr->func.model, flags, tprogs,
 					  tr->func.addr);
 	if (err < 0)
 		goto out;
 
-	if (tr->selector)
+	WARN_ON(tr->cur_image && tr->selector == 0);
+	WARN_ON(!tr->cur_image && tr->selector);
+	if (tr->cur_image)
 		/* progs already running at this address */
-		err = modify_fentry(tr, old_image, new_image);
+		err = modify_fentry(tr, tr->cur_image->image, im->image);
 	else
 		/* first time registering */
-		err = register_fentry(tr, new_image);
+		err = register_fentry(tr, im->image);
 	if (err)
 		goto out;
+	if (tr->cur_image)
+		bpf_tramp_image_put(tr->cur_image);
+	tr->cur_image = im;
 	tr->selector++;
 out:
 	kfree(tprogs);
@@ -366,18 +480,12 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 		goto out;
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
 		goto out;
-	bpf_image_ksym_del(&tr->ksym);
-	/* This code will be executed when all bpf progs (both sleepable and
-	 * non-sleepable) went through
-	 * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
-	 * Hence no need for another synchronize_rcu_tasks_trace() here,
-	 * but synchronize_rcu_tasks() is still needed, since trampoline
-	 * may not have had any sleepable programs and we need to wait
-	 * for tasks to get out of trampoline code before freeing it.
+	/* This code will be executed even when the last bpf_tramp_image
+	 * is alive. All progs are detached from the trampoline and the
+	 * trampoline image is patched with jmp into epilogue to skip
+	 * fexit progs. The fentry-only trampoline will be freed via
+	 * multiple rcu callbacks.
 	 */
-	synchronize_rcu_tasks();
-	trace_android_vh_set_memory_nx((unsigned long)tr->image, 1);
-	bpf_jit_free_exec(tr->image);
 	hlist_del(&tr->hlist);
 	kfree(tr);
 out:
@@ -436,8 +544,18 @@ void notrace __bpf_prog_exit_sleepable(void)
 	rcu_read_unlock_trace();
 }
 
+void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)
+{
+	percpu_ref_get(&tr->pcref);
+}
+
+void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)
+{
+	percpu_ref_put(&tr->pcref);
+}
+
 int __weak
-arch_prepare_bpf_trampoline(void *image, void *image_end,
+arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 			    const struct btf_func_model *m, u32 flags,
 			    struct bpf_tramp_progs *tprogs,
 			    void *orig_call)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
@@ -641,7 +641,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 */
 static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
+		      struct mutex_waiter *waiter)
 {
 	if (!waiter) {
 		/*
@@ -717,7 +717,7 @@ fail:
 #else
 static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
+		      struct mutex_waiter *waiter)
 {
 	return false;
 }
@@ -937,6 +937,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct ww_mutex *ww;
 	int ret;
 
+	if (!use_ww_ctx)
+		ww_ctx = NULL;
+
 	might_sleep();
 
 #ifdef CONFIG_DEBUG_MUTEXES
@@ -944,7 +947,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 #endif
 
 	ww = container_of(lock, struct ww_mutex, base);
-	if (use_ww_ctx && ww_ctx) {
+	if (ww_ctx) {
 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
 			return -EALREADY;
 
@@ -961,10 +964,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
 	if (__mutex_trylock(lock) ||
-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
+	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
-		if (use_ww_ctx && ww_ctx)
+		if (ww_ctx)
 			ww_mutex_set_context_fastpath(ww, ww_ctx);
 		preempt_enable();
 		return 0;
@@ -975,7 +978,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	 * After waiting to acquire the wait_lock, try again.
 	 */
 	if (__mutex_trylock(lock)) {
-		if (use_ww_ctx && ww_ctx)
+		if (ww_ctx)
 			__ww_mutex_check_waiters(lock, ww_ctx);
 
 		goto skip_wait;
@@ -1029,7 +1032,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 			goto err;
 		}
 
-		if (use_ww_ctx && ww_ctx) {
+		if (ww_ctx) {
 			ret = __ww_mutex_check_kill(lock, &waiter, ww_ctx);
 			if (ret)
 				goto err;
@@ -1042,7 +1045,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if ((use_ww_ctx && ww_ctx) || !first) {
+		if (ww_ctx || !first) {
 			first = __mutex_waiter_is_first(lock, &waiter);
 			if (first)
 				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
@@ -1055,7 +1058,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * or we must see its unlock and acquire.
 		 */
 		if (__mutex_trylock(lock) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter)))
+		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
 			break;
 
 		spin_lock(&lock->wait_lock);
@@ -1065,7 +1068,7 @@ acquired:
 	__set_current_state(TASK_RUNNING);
 	trace_android_vh_mutex_wait_finish(lock);
 
-	if (use_ww_ctx && ww_ctx) {
+	if (ww_ctx) {
 		/*
 		 * Wound-Wait; we stole the lock (!first_waiter), check the
 		 * waiters as anyone might want to wound us.
@@ -1085,7 +1088,7 @@ skip_wait:
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
 
-	if (use_ww_ctx && ww_ctx)
+	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);
 
 	spin_unlock(&lock->wait_lock);
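
Note on the use_ww_ctx removal above: rather than testing (use_ww_ctx && ww_ctx) at every site, __mutex_lock_common() now normalizes ww_ctx to NULL once when the flag says it is unused, and every later check tests the pointer alone. A tiny standalone sketch of that refactor (illustrative only, not the kernel code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct ww_ctx_demo { int acquired; };

    static int lock_common(struct ww_ctx_demo *ww_ctx, bool use_ww_ctx)
    {
            if (!use_ww_ctx)
                    ww_ctx = NULL;          /* one normalization up front */

            if (ww_ctx)                     /* every later check is just a */
                    ww_ctx->acquired++;     /* pointer test                */
            return ww_ctx ? ww_ctx->acquired : 0;
    }

    int main(void)
    {
            struct ww_ctx_demo ctx = { 0 };

            printf("%d %d\n", lock_common(&ctx, true),   /* 1 */
                              lock_common(&ctx, false)); /* 0 */
            return 0;
    }
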
diff --git a/kernel/static_call.c b/kernel/static_call.c
@@ -149,6 +149,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
 	};
 
 	for (site_mod = &first; site_mod; site_mod = site_mod->next) {
+		bool init = system_state < SYSTEM_RUNNING;
 		struct module *mod = site_mod->mod;
 
 		if (!site_mod->sites) {
@@ -168,6 +169,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
 		if (mod) {
 			stop = mod->static_call_sites +
 			       mod->num_static_call_sites;
+			init = mod->state == MODULE_STATE_COMING;
 		}
 #endif
 
@@ -175,16 +177,8 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
 		     site < stop && static_call_key(site) == key; site++) {
 			void *site_addr = static_call_addr(site);
 
-			if (static_call_is_init(site)) {
-				/*
-				 * Don't write to call sites which were in
-				 * initmem and have since been freed.
-				 */
-				if (!mod && system_state >= SYSTEM_RUNNING)
-					continue;
-				if (mod && !within_module_init((unsigned long)site_addr, mod))
-					continue;
-			}
+			if (!init && static_call_is_init(site))
+				continue;
 
 			if (!kernel_text_address((unsigned long)site_addr)) {
 				/*
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
@@ -2985,7 +2985,8 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 
 	size = nr_entries * sizeof(unsigned long);
 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
-				    sizeof(*entry) + size, flags, pc);
+				    (sizeof(*entry) - sizeof(entry->caller)) + size,
+				    flags, pc);
 	if (!event)
 		goto out;
 	entry = ring_buffer_event_data(event);
diff --git a/mm/memory.c b/mm/memory.c
@@ -170,7 +170,7 @@ static int __init init_zero_pfn(void)
 	zero_pfn = page_to_pfn(ZERO_PAGE(0));
 	return 0;
 }
-core_initcall(init_zero_pfn);
+early_initcall(init_zero_pfn);
 
 /*
  * Only trace rss_stat when there is a 512kb cross over.
diff --git a/net/9p/client.c b/net/9p/client.c
@@ -1617,10 +1617,6 @@ p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
 	}
 
 	p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", count);
-	if (!count) {
-		p9_tag_remove(clnt, req);
-		return 0;
-	}
 
 	if (non_zc) {
 		int n = copy_to_iter(dataptr, count, to);
Some files were not shown because too many files have changed in this diff.