Merge 5.10.54 into android12-5.10-lts
Changes in 5.10.54
	igc: Fix use-after-free error during reset
	igb: Fix use-after-free error during reset
	igc: change default return of igc_read_phy_reg()
	ixgbe: Fix an error handling path in 'ixgbe_probe()'
	igc: Fix an error handling path in 'igc_probe()'
	igb: Fix an error handling path in 'igb_probe()'
	fm10k: Fix an error handling path in 'fm10k_probe()'
	e1000e: Fix an error handling path in 'e1000_probe()'
	iavf: Fix an error handling path in 'iavf_probe()'
	igb: Check if num of q_vectors is smaller than max before array access
	igb: Fix position of assignment to *ring
	gve: Fix an error handling path in 'gve_probe()'
	net: add kcov handle to skb extensions
	bonding: fix suspicious RCU usage in bond_ipsec_add_sa()
	bonding: fix null dereference in bond_ipsec_add_sa()
	ixgbevf: use xso.real_dev instead of xso.dev in callback functions of struct xfrmdev_ops
	bonding: fix suspicious RCU usage in bond_ipsec_del_sa()
	bonding: disallow setting nested bonding + ipsec offload
	bonding: Add struct bond_ipesc to manage SA
	bonding: fix suspicious RCU usage in bond_ipsec_offload_ok()
	bonding: fix incorrect return value of bond_ipsec_offload_ok()
	ipv6: fix 'disable_policy' for fwd packets
	stmmac: platform: Fix signedness bug in stmmac_probe_config_dt()
	selftests: icmp_redirect: remove from checking for IPv6 route get
	selftests: icmp_redirect: IPv6 PMTU info should be cleared after redirect
	pwm: sprd: Ensure configuring period and duty_cycle isn't wrongly skipped
	cxgb4: fix IRQ free race during driver unload
	mptcp: fix warning in __skb_flow_dissect() when do syn cookie for subflow join
	nvme-pci: do not call nvme_dev_remove_admin from nvme_remove
	KVM: x86/pmu: Clear anythread deprecated bit when 0xa leaf is unsupported on the SVM
	perf inject: Fix dso->nsinfo refcounting
	perf map: Fix dso->nsinfo refcounting
	perf probe: Fix dso->nsinfo refcounting
	perf env: Fix sibling_dies memory leak
	perf test session_topology: Delete session->evlist
	perf test event_update: Fix memory leak of evlist
	perf dso: Fix memory leak in dso__new_map()
	perf test maps__merge_in: Fix memory leak of maps
	perf env: Fix memory leak of cpu_pmu_caps
	perf report: Free generated help strings for sort option
	perf script: Fix memory 'threads' and 'cpus' leaks on exit
	perf lzma: Close lzma stream on exit
	perf probe-file: Delete namelist in del_events() on the error path
	perf data: Close all files in close_dir()
	perf sched: Fix record failure when CONFIG_SCHEDSTATS is not set
	ASoC: wm_adsp: Correct wm_coeff_tlv_get handling
	spi: imx: add a check for speed_hz before calculating the clock
	spi: stm32: fixes pm_runtime calls in probe/remove
	regulator: hi6421: Use correct variable type for regmap api val argument
	regulator: hi6421: Fix getting wrong drvdata
	spi: mediatek: fix fifo rx mode
	ASoC: rt5631: Fix regcache sync errors on resume
	bpf, test: fix NULL pointer dereference on invalid expected_attach_type
	bpf: Fix tail_call_reachable rejection for interpreter when jit failed
	xdp, net: Fix use-after-free in bpf_xdp_link_release
	timers: Fix get_next_timer_interrupt() with no timers pending
	liquidio: Fix unintentional sign extension issue on left shift of u16
	s390/bpf: Perform r1 range checking before accessing jit->seen_reg[r1]
	bpf, sockmap: Fix potential memory leak on unlikely error case
	bpf, sockmap, tcp: sk_prot needs inuse_idx set for proc stats
	bpf, sockmap, udp: sk_prot needs inuse_idx set for proc stats
	bpftool: Check malloc return value in mount_bpffs_for_pin
	net: fix uninit-value in caif_seqpkt_sendmsg
	usb: hso: fix error handling code of hso_create_net_device
	dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable}
	efi/tpm: Differentiate missing and invalid final event log table.
	net: decnet: Fix sleeping inside in af_decnet
	KVM: PPC: Book3S: Fix CONFIG_TRANSACTIONAL_MEM=n crash
	KVM: PPC: Fix kvm_arch_vcpu_ioctl vcpu_load leak
	net: sched: fix memory leak in tcindex_partial_destroy_work
	sctp: trim optlen when it's a huge value in sctp_setsockopt
	netrom: Decrease sock refcount when sock timers expire
	scsi: iscsi: Fix iface sysfs attr detection
	scsi: target: Fix protect handling in WRITE SAME(32)
	spi: cadence: Correct initialisation of runtime PM again
	ACPI: Kconfig: Fix table override from built-in initrd
	bnxt_en: don't disable an already disabled PCI device
	bnxt_en: Refresh RoCE capabilities in bnxt_ulp_probe()
	bnxt_en: Add missing check for BNXT_STATE_ABORT_ERR in bnxt_fw_rset_task()
	bnxt_en: Validate vlan protocol ID on RX packets
	bnxt_en: Check abort error state in bnxt_half_open_nic()
	net: hisilicon: rename CACHE_LINE_MASK to avoid redefinition
	net/tcp_fastopen: fix data races around tfo_active_disable_stamp
	ALSA: hda: intel-dsp-cfg: add missing ElkhartLake PCI ID
	net: hns3: fix possible mismatches resp of mailbox
	net: hns3: fix rx VLAN offload state inconsistent issue
	spi: spi-bcm2835: Fix deadlock
	net/sched: act_skbmod: Skip non-Ethernet packets
	ipv6: fix another slab-out-of-bounds in fib6_nh_flush_exceptions
	ceph: don't WARN if we're still opening a session to an MDS
	nvme-pci: don't WARN_ON in nvme_reset_work if ctrl.state is not RESETTING
	Revert "USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE modem"
	afs: Fix tracepoint string placement with built-in AFS
	r8169: Avoid duplicate sysfs entry creation error
	nvme: set the PRACT bit when using Write Zeroes with T10 PI
	sctp: update active_key for asoc when old key is being replaced
	tcp: disable TFO blackhole logic by default
	net: dsa: sja1105: make VID 4095 a bridge VLAN too
	net: sched: cls_api: Fix the the wrong parameter
	drm/panel: raspberrypi-touchscreen: Prevent double-free
	cifs: only write 64kb at a time when fallocating a small region of a file
	cifs: fix fallocate when trying to allocate a hole.
	proc: Avoid mixing integer types in mem_rw()
	mmc: core: Don't allocate IDA for OF aliases
	s390/ftrace: fix ftrace_update_ftrace_func implementation
	s390/boot: fix use of expolines in the DMA code
	ALSA: usb-audio: Add missing proc text entry for BESPOKEN type
	ALSA: usb-audio: Add registration quirk for JBL Quantum headsets
	ALSA: sb: Fix potential ABBA deadlock in CSP driver
	ALSA: hda/realtek: Fix pop noise and 2 Front Mic issues on a machine
	ALSA: hdmi: Expose all pins on MSI MS-7C94 board
	ALSA: pcm: Call substream ack() method upon compat mmap commit
	ALSA: pcm: Fix mmap capability check
	Revert "usb: renesas-xhci: Fix handling of unknown ROM state"
	usb: xhci: avoid renesas_usb_fw.mem when it's unusable
	xhci: Fix lost USB 2 remote wake
	KVM: PPC: Book3S: Fix H_RTAS rets buffer overflow
	KVM: PPC: Book3S HV Nested: Sanitise H_ENTER_NESTED TM state
	usb: hub: Disable USB 3 device initiated lpm if exit latency is too high
	usb: hub: Fix link power management max exit latency (MEL) calculations
	USB: usb-storage: Add LaCie Rugged USB3-FW to IGNORE_UAS
	usb: max-3421: Prevent corruption of freed memory
	usb: renesas_usbhs: Fix superfluous irqs happen after usb_pkt_pop()
	USB: serial: option: add support for u-blox LARA-R6 family
	USB: serial: cp210x: fix comments for GE CS1000
	USB: serial: cp210x: add ID for CEL EM3588 USB ZigBee stick
	usb: gadget: Fix Unbalanced pm_runtime_enable in tegra_xudc_probe
	usb: dwc2: gadget: Fix GOUTNAK flow for Slave mode.
	usb: dwc2: gadget: Fix sending zero length packet in DDMA mode.
	usb: typec: stusb160x: register role switch before interrupt registration
	firmware/efi: Tell memblock about EFI iomem reservations
	tracepoints: Update static_call before tp_funcs when adding a tracepoint
	tracing/histogram: Rename "cpu" to "common_cpu"
	tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop.
	tracing: Synthetic event field_pos is an index not a boolean
	btrfs: check for missing device in btrfs_trim_fs
	media: ngene: Fix out-of-bounds bug in ngene_command_config_free_buf()
	ixgbe: Fix packet corruption due to missing DMA sync
	bus: mhi: core: Validate channel ID when processing command completions
	posix-cpu-timers: Fix rearm racing against process tick
	selftest: use mmap instead of posix_memalign to allocate memory
	io_uring: explicitly count entries for poll reqs
	io_uring: remove double poll entry on arm failure
	userfaultfd: do not untag user pointers
	memblock: make for_each_mem_range() traverse MEMBLOCK_HOTPLUG regions
	hugetlbfs: fix mount mode command line processing
	rbd: don't hold lock_rwsem while running_list is being drained
	rbd: always kick acquire on "acquired" and "released" notifications
	misc: eeprom: at24: Always append device id even if label property is set.
	nds32: fix up stack guard gap
	driver core: Prevent warning when removing a device link from unregistered consumer
	drm: Return -ENOTTY for non-drm ioctls
	drm/amdgpu: update golden setting for sienna_cichlid
	net: dsa: mv88e6xxx: enable SerDes RX stats for Topaz
	net: dsa: mv88e6xxx: enable SerDes PCS register dump via ethtool -d on Topaz
	PCI: Mark AMD Navi14 GPU ATS as broken
	bonding: fix build issue
	skbuff: Release nfct refcount on napi stolen or re-used skbs
	Documentation: Fix intiramfs script name
	perf inject: Close inject.output on exit
	usb: ehci: Prevent missed ehci interrupts with edge-triggered MSI
	drm/i915/gvt: Clear d3_entered on elsp cmd submission.
	sfc: ensure correct number of XDP queues
	xhci: add xhci_get_virt_ep() helper
	skbuff: Fix build with SKB extensions disabled
	Linux 5.10.54

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ifd2823b47ab1544cd1f168b138624ffe060a471e
@@ -45,14 +45,24 @@ how the user addresses are used by the kernel:
 
 1. User addresses not accessed by the kernel but used for address space
    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
-   tagged pointers in this context is allowed with the exception of
-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
-   ``mremap()`` as these have the potential to alias with existing
-   user addresses.
-
-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
-   incorrectly accept valid tagged pointers for the ``brk()``,
-   ``mmap()`` and ``mremap()`` system calls.
+   tagged pointers in this context is allowed with these exceptions:
+
+   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
+     ``mremap()`` as these have the potential to alias with existing
+     user addresses.
+
+     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for the ``brk()``,
+     ``mmap()`` and ``mremap()`` system calls.
+
+   - The ``range.start``, ``start`` and ``dst`` arguments to the
+     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
+     ``userfaultfd()``, as fault addresses subsequently obtained by reading
+     the file descriptor will be untagged, which may otherwise confuse
+     tag-unaware programs.
+
+     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for this system call.
 
 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to
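To make the first category concrete: with the tagged-address ABI in effect, a pointer carrying a tag in bits 63:56 may be passed to address-space management calls such as madvise(), while mmap() hints and the other exceptions listed above must stay untagged. A minimal, hypothetical arm64 userspace sketch (illustration only, not part of this patch):

    #include <stdint.h>
    #include <sys/mman.h>

    int tagged_abi_demo(void)
    {
            /* mmap() must not be given a tagged hint address. */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return -1;

            /* A tag in the top byte is accepted by madvise()/mprotect()... */
            void *tagged = (void *)((uintptr_t)p | (0x5fUL << 56));
            (void)madvise(tagged, 4096, MADV_WILLNEED);

            /* ...but brk(), mmap(), mremap()'s new_address and the
             * UFFDIO_* ranges must remain untagged, per the list above. */
            return munmap(p, 4096);
    }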
@@ -69,17 +69,17 @@ early userspace image can be built by an unprivileged user.
 
 As a technical note, when directories and files are specified, the
 entire CONFIG_INITRAMFS_SOURCE is passed to
-usr/gen_initramfs_list.sh. This means that CONFIG_INITRAMFS_SOURCE
+usr/gen_initramfs.sh. This means that CONFIG_INITRAMFS_SOURCE
 can really be interpreted as any legal argument to
-gen_initramfs_list.sh. If a directory is specified as an argument then
+gen_initramfs.sh. If a directory is specified as an argument then
 the contents are scanned, uid/gid translation is performed, and
 usr/gen_init_cpio file directives are output. If a directory is
-specified as an argument to usr/gen_initramfs_list.sh then the
+specified as an argument to usr/gen_initramfs.sh then the
 contents of the file are simply copied to the output. All of the output
 directives from directory scanning and file contents copying are
 processed by usr/gen_init_cpio.
 
-See also 'usr/gen_initramfs_list.sh -h'.
+See also 'usr/gen_initramfs.sh -h'.
 
 Where's this all leading?
 =========================
@@ -170,7 +170,7 @@ Documentation/driver-api/early-userspace/early_userspace_support.rst for more de
 The kernel does not depend on external cpio tools. If you specify a
 directory instead of a configuration file, the kernel's build infrastructure
 creates a configuration file from that directory (usr/Makefile calls
-usr/gen_initramfs_list.sh), and proceeds to package up that directory
+usr/gen_initramfs.sh), and proceeds to package up that directory
 using the config file (by feeding it to usr/gen_init_cpio, which is created
 from usr/gen_init_cpio.c). The kernel's build-time cpio creation code is
 entirely self-contained, and the kernel's boot-time extractor is also
@@ -751,7 +751,7 @@ tcp_fastopen_blackhole_timeout_sec - INTEGER
	initial value when the blackhole issue goes away.
	0 to disable the blackhole detection.
 
-	By default, it is set to 1hr.
+	By default, it is set to 0 (feature is disabled).
 
 tcp_fastopen_key - list of comma separated 32-digit hexadecimal INTEGERs
	The list consists of a primary key and an optional backup key. The
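Since the default moved from one hour to 0, blackhole detection is now opt-in; administrators who relied on the old behaviour must set the timeout explicitly. A small sketch that reads the knob this entry documents (hypothetical example program):

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/net/ipv4/tcp_fastopen_blackhole_timeout_sec", "r");
            int secs = -1;

            if (!f)
                    return 1;
            if (fscanf(f, "%d", &secs) == 1)
                    printf("TFO blackhole timeout: %d s%s\n", secs,
                           secs ? "" : " (detection disabled)");
            fclose(f);
            return 0;
    }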
@@ -191,7 +191,7 @@ Documentation written by Tom Zanussi
                                 with the event, in nanoseconds. May be
                                 modified by .usecs to have timestamps
                                 interpreted as microseconds.
-    cpu                    int  the cpu on which the event occurred.
+    common_cpu             int  the cpu on which the event occurred.
     ====================== ==== =======================================
 
 Extended error information
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 53
+SUBLEVEL = 54
 EXTRAVERSION =
 NAME = Dare mighty things
 
@@ -59,7 +59,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 
		vma = find_vma(mm, addr);
		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
			return addr;
	}
 
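The one-line change makes the unmapped-area check respect the stack guard gap: vm_start_gap() reports a VM_GROWSDOWN vma as beginning stack_guard_gap bytes below its nominal start, so a new mapping cannot be placed flush against a stack that may still grow. Roughly, simplified from include/linux/mm.h:

    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
            unsigned long vm_start = vma->vm_start;

            if (vma->vm_flags & VM_GROWSDOWN) {
                    /* keep a guard gap below a downward-growing stack */
                    vm_start -= stack_guard_gap;
                    if (vm_start > vma->vm_start)   /* underflow check */
                            vm_start = 0;
            }
            return vm_start;
    }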
@@ -2366,8 +2366,10 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
	if (cpu_has_feature(CPU_FTR_HVMODE)) {
		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
		if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
			vcpu->arch.hfscr |= HFSCR_TM;
+#endif
	}
	if (cpu_has_feature(CPU_FTR_TM_COMP))
		vcpu->arch.hfscr |= HFSCR_TM;
@@ -232,6 +232,9 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
	if (vcpu->kvm->arch.l1_ptcr == 0)
		return H_NOT_AVAILABLE;
 
+	if (MSR_TM_TRANSACTIONAL(vcpu->arch.shregs.msr))
+		return H_BAD_MODE;
+
	/* copy parameters in */
	hv_ptr = kvmppc_get_gpr(vcpu, 4);
	regs_ptr = kvmppc_get_gpr(vcpu, 5);
@@ -254,6 +257,23 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
	if (l2_hv.vcpu_token >= NR_CPUS)
		return H_PARAMETER;
 
+	/*
+	 * L1 must have set up a suspended state to enter the L2 in a
+	 * transactional state, and only in that case. These have to be
+	 * filtered out here to prevent causing a TM Bad Thing in the
+	 * host HRFID. We could synthesize a TM Bad Thing back to the L1
+	 * here but there doesn't seem like much point.
+	 */
+	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) {
+		if (!MSR_TM_ACTIVE(l2_regs.msr))
+			return H_BAD_MODE;
+	} else {
+		if (l2_regs.msr & MSR_TS_MASK)
+			return H_BAD_MODE;
+		if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_TS_MASK))
+			return H_BAD_MODE;
+	}
+
	/* translate lpid */
	l2 = kvmhv_get_nested(vcpu->kvm, l2_hv.lpid, true);
	if (!l2)
@@ -242,6 +242,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
	 * value so we can restore it on the way out.
	 */
	orig_rets = args.rets;
+	if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args)) {
+		/*
+		 * Don't overflow our args array: ensure there is room for
+		 * at least rets[0] (even if the call specifies 0 nret).
+		 *
+		 * Each handler must then check for the correct nargs and nret
+		 * values, but they may always return failure in rets[0].
+		 */
+		rc = -EINVAL;
+		goto fail;
+	}
	args.rets = &args.args[be32_to_cpu(args.nargs)];
 
	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
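The new check matters because args.nargs is guest-controlled while args.args is a fixed-size array: without it, the rets pointer could be computed far past the end of the structure. A standalone sketch of the failure mode, with hypothetical stand-in types rather than the kernel's:

    #include <stddef.h>
    #include <stdint.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct rtas_args_demo {            /* hypothetical stand-in */
            uint32_t nargs;
            uint32_t args[16];         /* fixed capacity */
    };

    /* Return a safe rets pointer, or NULL if the guest-supplied nargs
     * would leave no room for even rets[0]. */
    static uint32_t *rets_ptr(struct rtas_args_demo *a)
    {
            if (a->nargs >= ARRAY_SIZE(a->args))
                    return NULL;               /* would overflow: fail the call */
            return &a->args[a->nargs];         /* rets live after the args */
    }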
@@ -269,9 +280,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 fail:
	/*
	 * We only get here if the guest has called RTAS with a bogus
-	 * args pointer. That means we can't get to the args, and so we
-	 * can't fail the RTAS call. So fail right out to userspace,
-	 * which should kill the guest.
+	 * args pointer or nargs/nret values that would overflow the
+	 * array. That means we can't get to the args, and so we can't
+	 * fail the RTAS call. So fail right out to userspace, which
+	 * should kill the guest.
+	 *
+	 * SLOF should actually pass the hcall return value from the
+	 * rtas handler call in r3, so enter_rtas could be modified to
+	 * return a failure indication in r3 and we could return such
+	 * errors to the guest rather than failing to host userspace.
+	 * However old guests that don't test for failure could then
+	 * continue silently after errors, so for now we won't do this.
	 */
	return rc;
 }
@@ -2041,9 +2041,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
	{
		struct kvm_enable_cap cap;
		r = -EFAULT;
-		vcpu_load(vcpu);
		if (copy_from_user(&cap, argp, sizeof(cap)))
			goto out;
+		vcpu_load(vcpu);
		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
		vcpu_put(vcpu);
		break;
@@ -2067,9 +2067,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
	case KVM_DIRTY_TLB: {
		struct kvm_dirty_tlb dirty;
		r = -EFAULT;
-		vcpu_load(vcpu);
		if (copy_from_user(&dirty, argp, sizeof(dirty)))
			goto out;
+		vcpu_load(vcpu);
		r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty);
		vcpu_put(vcpu);
		break;
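Both hunks fix the same shape of bug: vcpu_load() was taken before a fallible copy_from_user(), and the goto out error path never reached vcpu_put(). Moving the load after the copy keeps acquire and release strictly paired on every path; a sketch of the corrected pattern:

    /* Sketch: do fallible work first, then acquire/use/release in a
     * straight line so no error path can skip the release. */
    long demo_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
    {
            struct kvm_enable_cap cap;
            long r = -EFAULT;

            if (copy_from_user(&cap, argp, sizeof(cap)))
                    goto out;                  /* nothing to undo here */

            vcpu_load(vcpu);
            r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
            vcpu_put(vcpu);
    out:
            return r;
    }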
|
@@ -9,16 +9,6 @@
|
|||||||
#include <asm/errno.h>
|
#include <asm/errno.h>
|
||||||
#include <asm/sigp.h>
|
#include <asm/sigp.h>
|
||||||
|
|
||||||
#ifdef CC_USING_EXPOLINE
|
|
||||||
.pushsection .dma.text.__s390_indirect_jump_r14,"axG"
|
|
||||||
__dma__s390_indirect_jump_r14:
|
|
||||||
larl %r1,0f
|
|
||||||
ex 0,0(%r1)
|
|
||||||
j .
|
|
||||||
0: br %r14
|
|
||||||
.popsection
|
|
||||||
#endif
|
|
||||||
|
|
||||||
.section .dma.text,"ax"
|
.section .dma.text,"ax"
|
||||||
/*
|
/*
|
||||||
* Simplified version of expoline thunk. The normal thunks can not be used here,
|
* Simplified version of expoline thunk. The normal thunks can not be used here,
|
||||||
@@ -27,11 +17,10 @@ __dma__s390_indirect_jump_r14:
|
|||||||
* affects a few functions that are not performance-relevant.
|
* affects a few functions that are not performance-relevant.
|
||||||
*/
|
*/
|
||||||
.macro BR_EX_DMA_r14
|
.macro BR_EX_DMA_r14
|
||||||
#ifdef CC_USING_EXPOLINE
|
larl %r1,0f
|
||||||
jg __dma__s390_indirect_jump_r14
|
ex 0,0(%r1)
|
||||||
#else
|
j .
|
||||||
br %r14
|
0: br %r14
|
||||||
#endif
|
|
||||||
.endm
|
.endm
|
||||||
|
|
||||||
/*
|
/*
|
||||||
|
@@ -27,6 +27,7 @@ void ftrace_caller(void);
 
 extern char ftrace_graph_caller_end;
 extern unsigned long ftrace_plt;
+extern void *ftrace_func;
 
 struct dyn_arch_ftrace { };
 
@@ -57,6 +57,7 @@
  * > brasl %r0,ftrace_caller	# offset 0
  */
 
+void *ftrace_func __read_mostly = ftrace_stub;
 unsigned long ftrace_plt;
 
 static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn)
@@ -120,6 +121,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 
 int ftrace_update_ftrace_func(ftrace_func_t func)
 {
+	ftrace_func = func;
	return 0;
 }
 
@@ -67,13 +67,13 @@ ENTRY(ftrace_caller)
 #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
	aghik	%r2,%r0,-MCOUNT_INSN_SIZE
	lgrl	%r4,function_trace_op
-	lgrl	%r1,ftrace_trace_function
+	lgrl	%r1,ftrace_func
 #else
	lgr	%r2,%r0
	aghi	%r2,-MCOUNT_INSN_SIZE
	larl	%r4,function_trace_op
	lg	%r4,0(%r4)
-	larl	%r1,ftrace_trace_function
+	larl	%r1,ftrace_func
	lg	%r1,0(%r1)
 #endif
	lgr	%r3,%r14
@@ -112,7 +112,7 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
 {
	u32 r1 = reg2hex[b1];
 
-	if (!jit->seen_reg[r1] && r1 >= 6 && r1 <= 15)
+	if (r1 >= 6 && r1 <= 15 && !jit->seen_reg[r1])
		jit->seen_reg[r1] = 1;
 }
 
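The reordering leans on C's guaranteed left-to-right short-circuit evaluation: with &&, the right operand is never evaluated when the left is false, so the range test must come first to guard the array load. In miniature:

    static int seen_demo[16];          /* stand-in for jit->seen_reg[] */

    static void mark_seen(unsigned int r1)
    {
            /* BAD:  seen_demo[r1] would be read before r1 is validated:
             *   if (!seen_demo[r1] && r1 >= 6 && r1 <= 15) ...
             * GOOD: && short-circuits, so the load happens only in range. */
            if (r1 >= 6 && r1 <= 15 && !seen_demo[r1])
                    seen_demo[r1] = 1;
    }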
@@ -684,7 +684,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 
		edx.split.num_counters_fixed = min(cap.num_counters_fixed, MAX_FIXED_COUNTERS);
		edx.split.bit_width_fixed = cap.bit_width_fixed;
-		edx.split.anythread_deprecated = 1;
+		if (cap.version)
+			edx.split.anythread_deprecated = 1;
		edx.split.reserved1 = 0;
		edx.split.reserved2 = 0;
 
@@ -359,7 +359,7 @@ config ACPI_TABLE_UPGRADE
 config ACPI_TABLE_OVERRIDE_VIA_BUILTIN_INITRD
	bool "Override ACPI tables from built-in initrd"
	depends on ACPI_TABLE_UPGRADE
-	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION=""
+	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION_NONE
	help
	  This option provides functionality to override arbitrary ACPI tables
	  from built-in uncompressed initrd.
@@ -571,8 +571,10 @@ static void devlink_remove_symlinks(struct device *dev,
		return;
	}
 
-	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
-	sysfs_remove_link(&con->kobj, buf);
+	if (device_is_registered(con)) {
+		snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+		sysfs_remove_link(&con->kobj, buf);
+	}
	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
	sysfs_remove_link(&sup->kobj, buf);
	kfree(buf);
@@ -4147,8 +4147,6 @@ again:
 
 static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
 {
-	bool need_wait;
-
	dout("%s rbd_dev %p\n", __func__, rbd_dev);
	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
 
@@ -4160,11 +4158,11 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
	 */
	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
-	need_wait = !list_empty(&rbd_dev->running_list);
-	downgrade_write(&rbd_dev->lock_rwsem);
-	if (need_wait)
-		wait_for_completion(&rbd_dev->releasing_wait);
-	up_read(&rbd_dev->lock_rwsem);
+	if (list_empty(&rbd_dev->running_list))
+		return true;
+
+	up_write(&rbd_dev->lock_rwsem);
+	wait_for_completion(&rbd_dev->releasing_wait);
 
	down_write(&rbd_dev->lock_rwsem);
	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
@@ -4250,15 +4248,11 @@ static void rbd_handle_acquired_lock(struct rbd_device *rbd_dev, u8 struct_v,
	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
		down_write(&rbd_dev->lock_rwsem);
		if (rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
-			/*
-			 * we already know that the remote client is
-			 * the owner
-			 */
-			up_write(&rbd_dev->lock_rwsem);
-			return;
+			dout("%s rbd_dev %p cid %llu-%llu == owner_cid\n",
+			     __func__, rbd_dev, cid.gid, cid.handle);
+		} else {
+			rbd_set_owner_cid(rbd_dev, &cid);
		}
-
-		rbd_set_owner_cid(rbd_dev, &cid);
		downgrade_write(&rbd_dev->lock_rwsem);
	} else {
		down_read(&rbd_dev->lock_rwsem);
@@ -4283,14 +4277,12 @@ static void rbd_handle_released_lock(struct rbd_device *rbd_dev, u8 struct_v,
	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
		down_write(&rbd_dev->lock_rwsem);
		if (!rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
-			dout("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n",
+			dout("%s rbd_dev %p cid %llu-%llu != owner_cid %llu-%llu\n",
			     __func__, rbd_dev, cid.gid, cid.handle,
			     rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle);
-			up_write(&rbd_dev->lock_rwsem);
-			return;
+		} else {
+			rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
		}
-
-		rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
		downgrade_write(&rbd_dev->lock_rwsem);
	} else {
		down_read(&rbd_dev->lock_rwsem);
@@ -706,11 +706,18 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
 
	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
-	mhi_chan = &mhi_cntrl->mhi_chan[chan];
-	write_lock_bh(&mhi_chan->lock);
-	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
-	complete(&mhi_chan->completion);
-	write_unlock_bh(&mhi_chan->lock);
+
+	if (chan < mhi_cntrl->max_chan &&
+	    mhi_cntrl->mhi_chan[chan].configured) {
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		write_lock_bh(&mhi_chan->lock);
+		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+		complete(&mhi_chan->completion);
+		write_unlock_bh(&mhi_chan->lock);
+	} else {
+		dev_err(&mhi_cntrl->mhi_dev->dev,
+			"Completion packet for invalid channel ID: %d\n", chan);
+	}
 
	mhi_del_ring_element(mhi_cntrl, mhi_ring);
 }
|
@@ -896,6 +896,7 @@ static int __init efi_memreserve_map_root(void)
|
|||||||
static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
|
static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
|
||||||
{
|
{
|
||||||
struct resource *res, *parent;
|
struct resource *res, *parent;
|
||||||
|
int ret;
|
||||||
|
|
||||||
res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
|
res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
|
||||||
if (!res)
|
if (!res)
|
||||||
@@ -908,7 +909,17 @@ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
|
|||||||
|
|
||||||
/* we expect a conflict with a 'System RAM' region */
|
/* we expect a conflict with a 'System RAM' region */
|
||||||
parent = request_resource_conflict(&iomem_resource, res);
|
parent = request_resource_conflict(&iomem_resource, res);
|
||||||
return parent ? request_resource(parent, res) : 0;
|
ret = parent ? request_resource(parent, res) : 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Given that efi_mem_reserve_iomem() can be called at any
|
||||||
|
* time, only call memblock_reserve() if the architecture
|
||||||
|
* keeps the infrastructure around.
|
||||||
|
*/
|
||||||
|
if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK) && !ret)
|
||||||
|
memblock_reserve(addr, size);
|
||||||
|
|
||||||
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
|
int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
|
||||||
|
@@ -62,9 +62,11 @@ int __init efi_tpm_eventlog_init(void)
|
|||||||
tbl_size = sizeof(*log_tbl) + log_tbl->size;
|
tbl_size = sizeof(*log_tbl) + log_tbl->size;
|
||||||
memblock_reserve(efi.tpm_log, tbl_size);
|
memblock_reserve(efi.tpm_log, tbl_size);
|
||||||
|
|
||||||
if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
|
if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR) {
|
||||||
log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
|
pr_info("TPM Final Events table not present\n");
|
||||||
pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
|
goto out;
|
||||||
|
} else if (log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
|
||||||
|
pr_warn(FW_BUG "TPM Final Events table invalid\n");
|
||||||
goto out;
|
goto out;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -3137,6 +3137,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3[] =
|
|||||||
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER7_SELECT, 0xf0f001ff, 0x00000000),
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER7_SELECT, 0xf0f001ff, 0x00000000),
|
||||||
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER8_SELECT, 0xf0f001ff, 0x00000000),
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER8_SELECT, 0xf0f001ff, 0x00000000),
|
||||||
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER9_SELECT, 0xf0f001ff, 0x00000000),
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER9_SELECT, 0xf0f001ff, 0x00000000),
|
||||||
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSX_DEBUG_1, 0x00010000, 0x00010020),
|
||||||
SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
|
||||||
SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffbfffff, 0x00a00000)
|
SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffbfffff, 0x00a00000)
|
||||||
};
|
};
|
||||||
|
@@ -827,6 +827,9 @@ long drm_ioctl(struct file *filp,
	if (drm_dev_is_unplugged(dev))
		return -ENODEV;
 
+	if (DRM_IOCTL_TYPE(cmd) != DRM_IOCTL_BASE)
+		return -ENOTTY;
+
	is_driver_ioctl = nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END;
 
	if (is_driver_ioctl) {
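Every ioctl command number encodes a type byte alongside direction, size and number; comparing that byte lets DRM cheaply bounce commands aimed at another subsystem with -ENOTTY ("inappropriate ioctl"), instead of misinterpreting them. A sketch of the decoding (DRM_IOCTL_BASE is the character 'd'):

    #include <linux/ioctl.h>

    /* _IOC_TYPE() extracts the type byte packed into an ioctl cmd. */
    static bool cmd_is_drm(unsigned int cmd)
    {
            return _IOC_TYPE(cmd) == DRM_IOCTL_BASE;   /* 'd' */
    }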
@@ -1728,6 +1728,21 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
	if (drm_WARN_ON(&i915->drm, !engine))
		return -EINVAL;
 
+	/*
+	 * Due to d3_entered is used to indicate skipping PPGTT invalidation on
+	 * vGPU reset, it's set on D0->D3 on PCI config write, and cleared after
+	 * vGPU reset if in resuming.
+	 * In S0ix exit, the device power state also transite from D3 to D0 as
+	 * S3 resume, but no vGPU reset (triggered by QEMU devic model). After
+	 * S0ix exit, all engines continue to work. However the d3_entered
+	 * remains set which will break next vGPU reset logic (miss the expected
+	 * PPGTT invalidation).
+	 * Engines can only work in D0. Thus the 1st elsp write gives GVT a
+	 * chance to clear d3_entered.
+	 */
+	if (vgpu->d3_entered)
+		vgpu->d3_entered = false;
+
	execlist = &vgpu->submission.execlist[engine->id];
 
	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
@@ -447,7 +447,6 @@ static int rpi_touchscreen_remove(struct i2c_client *i2c)
	drm_panel_remove(&ts->base);
 
	mipi_dsi_device_unregister(ts->dsi);
-	kfree(ts->dsi);
 
	return 0;
 }
|
@@ -385,7 +385,7 @@ static int ngene_command_config_free_buf(struct ngene *dev, u8 *config)
|
|||||||
|
|
||||||
com.cmd.hdr.Opcode = CMD_CONFIGURE_FREE_BUFFER;
|
com.cmd.hdr.Opcode = CMD_CONFIGURE_FREE_BUFFER;
|
||||||
com.cmd.hdr.Length = 6;
|
com.cmd.hdr.Length = 6;
|
||||||
memcpy(&com.cmd.ConfigureBuffers.config, config, 6);
|
memcpy(&com.cmd.ConfigureFreeBuffers.config, config, 6);
|
||||||
com.in_len = 6;
|
com.in_len = 6;
|
||||||
com.out_len = 0;
|
com.out_len = 0;
|
||||||
|
|
||||||
|
@@ -407,12 +407,14 @@ enum _BUFFER_CONFIGS {
|
|||||||
|
|
||||||
struct FW_CONFIGURE_FREE_BUFFERS {
|
struct FW_CONFIGURE_FREE_BUFFERS {
|
||||||
struct FW_HEADER hdr;
|
struct FW_HEADER hdr;
|
||||||
u8 UVI1_BufferLength;
|
struct {
|
||||||
u8 UVI2_BufferLength;
|
u8 UVI1_BufferLength;
|
||||||
u8 TVO_BufferLength;
|
u8 UVI2_BufferLength;
|
||||||
u8 AUD1_BufferLength;
|
u8 TVO_BufferLength;
|
||||||
u8 AUD2_BufferLength;
|
u8 AUD1_BufferLength;
|
||||||
u8 TVA_BufferLength;
|
u8 AUD2_BufferLength;
|
||||||
|
u8 TVA_BufferLength;
|
||||||
|
} __packed config;
|
||||||
} __attribute__ ((__packed__));
|
} __attribute__ ((__packed__));
|
||||||
|
|
||||||
struct FW_CONFIGURE_UART {
|
struct FW_CONFIGURE_UART {
|
||||||
|
@@ -714,23 +714,20 @@ static int at24_probe(struct i2c_client *client)
|
|||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* If the 'label' property is not present for the AT24 EEPROM,
|
* We initialize nvmem_config.id to NVMEM_DEVID_AUTO even if the
|
||||||
* then nvmem_config.id is initialised to NVMEM_DEVID_AUTO,
|
* label property is set as some platform can have multiple eeproms
|
||||||
* and this will append the 'devid' to the name of the NVMEM
|
* with same label and we can not register each of those with same
|
||||||
* device. This is purely legacy and the AT24 driver has always
|
* label. Failing to register those eeproms trigger cascade failure
|
||||||
* defaulted to this. However, if the 'label' property is
|
* on such platform.
|
||||||
* present then this means that the name is specified by the
|
|
||||||
* firmware and this name should be used verbatim and so it is
|
|
||||||
* not necessary to append the 'devid'.
|
|
||||||
*/
|
*/
|
||||||
|
nvmem_config.id = NVMEM_DEVID_AUTO;
|
||||||
|
|
||||||
if (device_property_present(dev, "label")) {
|
if (device_property_present(dev, "label")) {
|
||||||
nvmem_config.id = NVMEM_DEVID_NONE;
|
|
||||||
err = device_property_read_string(dev, "label",
|
err = device_property_read_string(dev, "label",
|
||||||
&nvmem_config.name);
|
&nvmem_config.name);
|
||||||
if (err)
|
if (err)
|
||||||
return err;
|
return err;
|
||||||
} else {
|
} else {
|
||||||
nvmem_config.id = NVMEM_DEVID_AUTO;
|
|
||||||
nvmem_config.name = dev_name(dev);
|
nvmem_config.name = dev_name(dev);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -75,7 +75,8 @@ static void mmc_host_classdev_release(struct device *dev)
 {
	struct mmc_host *host = cls_dev_to_mmc_host(dev);
	wakeup_source_unregister(host->ws);
-	ida_simple_remove(&mmc_host_ida, host->index);
+	if (of_alias_get_id(host->parent->of_node, "mmc") < 0)
+		ida_simple_remove(&mmc_host_ida, host->index);
	kfree(host);
 }
 
@@ -437,7 +438,7 @@ static int mmc_first_nonreserved_index(void)
 */
 struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 {
-	int err;
+	int index;
	struct mmc_host *host;
	int alias_id, min_idx, max_idx;
 
@@ -450,20 +451,19 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 
	alias_id = of_alias_get_id(dev->of_node, "mmc");
	if (alias_id >= 0) {
-		min_idx = alias_id;
-		max_idx = alias_id + 1;
+		index = alias_id;
	} else {
		min_idx = mmc_first_nonreserved_index();
		max_idx = 0;
+
+		index = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
+		if (index < 0) {
+			kfree(host);
+			return NULL;
+		}
	}
 
-	err = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
-	if (err < 0) {
-		kfree(host);
-		return NULL;
-	}
-
-	host->index = err;
+	host->index = index;
 
	dev_set_name(&host->class_dev, "mmc%d", host->index);
	host->ws = wakeup_source_register(NULL, dev_name(&host->class_dev));
@@ -385,24 +385,85 @@ static int bond_vlan_rx_kill_vid(struct net_device *bond_dev,
 static int bond_ipsec_add_sa(struct xfrm_state *xs)
 {
	struct net_device *bond_dev = xs->xso.dev;
+	struct bond_ipsec *ipsec;
	struct bonding *bond;
	struct slave *slave;
+	int err;
 
	if (!bond_dev)
		return -EINVAL;
 
+	rcu_read_lock();
	bond = netdev_priv(bond_dev);
	slave = rcu_dereference(bond->curr_active_slave);
-	xs->xso.real_dev = slave->dev;
-	bond->xs = xs;
+	if (!slave) {
+		rcu_read_unlock();
+		return -ENODEV;
+	}
 
-	if (!(slave->dev->xfrmdev_ops
-	      && slave->dev->xfrmdev_ops->xdo_dev_state_add)) {
+	if (!slave->dev->xfrmdev_ops ||
+	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
+	    netif_is_bond_master(slave->dev)) {
		slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n");
+		rcu_read_unlock();
		return -EINVAL;
	}
 
-	return slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
+	ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC);
+	if (!ipsec) {
+		rcu_read_unlock();
+		return -ENOMEM;
+	}
+	xs->xso.real_dev = slave->dev;
+
+	err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
+	if (!err) {
+		ipsec->xs = xs;
+		INIT_LIST_HEAD(&ipsec->list);
+		spin_lock_bh(&bond->ipsec_lock);
+		list_add(&ipsec->list, &bond->ipsec_list);
+		spin_unlock_bh(&bond->ipsec_lock);
+	} else {
+		kfree(ipsec);
+	}
+	rcu_read_unlock();
+	return err;
+}
+
+static void bond_ipsec_add_sa_all(struct bonding *bond)
+{
+	struct net_device *bond_dev = bond->dev;
+	struct bond_ipsec *ipsec;
+	struct slave *slave;
+
+	rcu_read_lock();
+	slave = rcu_dereference(bond->curr_active_slave);
+	if (!slave)
+		goto out;
+
+	if (!slave->dev->xfrmdev_ops ||
+	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
+	    netif_is_bond_master(slave->dev)) {
+		spin_lock_bh(&bond->ipsec_lock);
+		if (!list_empty(&bond->ipsec_list))
+			slave_warn(bond_dev, slave->dev,
+				   "%s: no slave xdo_dev_state_add\n",
+				   __func__);
+		spin_unlock_bh(&bond->ipsec_lock);
+		goto out;
+	}
+
+	spin_lock_bh(&bond->ipsec_lock);
+	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+		ipsec->xs->xso.real_dev = slave->dev;
+		if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs)) {
+			slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
+			ipsec->xs->xso.real_dev = NULL;
+		}
+	}
+	spin_unlock_bh(&bond->ipsec_lock);
+out:
+	rcu_read_unlock();
 }
 
 /**
|
|||||||
static void bond_ipsec_del_sa(struct xfrm_state *xs)
|
static void bond_ipsec_del_sa(struct xfrm_state *xs)
|
||||||
{
|
{
|
||||||
struct net_device *bond_dev = xs->xso.dev;
|
struct net_device *bond_dev = xs->xso.dev;
|
||||||
|
struct bond_ipsec *ipsec;
|
||||||
struct bonding *bond;
|
struct bonding *bond;
|
||||||
struct slave *slave;
|
struct slave *slave;
|
||||||
|
|
||||||
if (!bond_dev)
|
if (!bond_dev)
|
||||||
return;
|
return;
|
||||||
|
|
||||||
|
rcu_read_lock();
|
||||||
bond = netdev_priv(bond_dev);
|
bond = netdev_priv(bond_dev);
|
||||||
slave = rcu_dereference(bond->curr_active_slave);
|
slave = rcu_dereference(bond->curr_active_slave);
|
||||||
|
|
||||||
if (!slave)
|
if (!slave)
|
||||||
return;
|
goto out;
|
||||||
|
|
||||||
xs->xso.real_dev = slave->dev;
|
if (!xs->xso.real_dev)
|
||||||
|
goto out;
|
||||||
|
|
||||||
if (!(slave->dev->xfrmdev_ops
|
WARN_ON(xs->xso.real_dev != slave->dev);
|
||||||
&& slave->dev->xfrmdev_ops->xdo_dev_state_delete)) {
|
|
||||||
|
if (!slave->dev->xfrmdev_ops ||
|
||||||
|
!slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
|
||||||
|
netif_is_bond_master(slave->dev)) {
|
||||||
slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
|
slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
|
||||||
return;
|
goto out;
|
||||||
}
|
}
|
||||||
|
|
||||||
slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
|
slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
|
||||||
|
out:
|
||||||
|
spin_lock_bh(&bond->ipsec_lock);
|
||||||
|
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
|
||||||
|
if (ipsec->xs == xs) {
|
||||||
|
list_del(&ipsec->list);
|
||||||
|
kfree(ipsec);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
spin_unlock_bh(&bond->ipsec_lock);
|
||||||
|
rcu_read_unlock();
|
||||||
|
}
|
||||||
|
|
||||||
|
static void bond_ipsec_del_sa_all(struct bonding *bond)
|
||||||
|
{
|
||||||
|
struct net_device *bond_dev = bond->dev;
|
||||||
|
struct bond_ipsec *ipsec;
|
||||||
|
struct slave *slave;
|
||||||
|
|
||||||
|
rcu_read_lock();
|
||||||
|
slave = rcu_dereference(bond->curr_active_slave);
|
||||||
|
if (!slave) {
|
||||||
|
rcu_read_unlock();
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
spin_lock_bh(&bond->ipsec_lock);
|
||||||
|
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
|
||||||
|
if (!ipsec->xs->xso.real_dev)
|
||||||
|
continue;
|
||||||
|
|
||||||
|
if (!slave->dev->xfrmdev_ops ||
|
||||||
|
!slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
|
||||||
|
netif_is_bond_master(slave->dev)) {
|
||||||
|
slave_warn(bond_dev, slave->dev,
|
||||||
|
"%s: no slave xdo_dev_state_delete\n",
|
||||||
|
__func__);
|
||||||
|
} else {
|
||||||
|
slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
|
||||||
|
}
|
||||||
|
ipsec->xs->xso.real_dev = NULL;
|
||||||
|
}
|
||||||
|
spin_unlock_bh(&bond->ipsec_lock);
|
||||||
|
rcu_read_unlock();
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -443,21 +554,37 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
 static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
 {
	struct net_device *bond_dev = xs->xso.dev;
-	struct bonding *bond = netdev_priv(bond_dev);
-	struct slave *curr_active = rcu_dereference(bond->curr_active_slave);
-	struct net_device *slave_dev = curr_active->dev;
+	struct net_device *real_dev;
+	struct slave *curr_active;
+	struct bonding *bond;
+	int err;
 
-	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
-		return true;
+	bond = netdev_priv(bond_dev);
+	rcu_read_lock();
+	curr_active = rcu_dereference(bond->curr_active_slave);
+	real_dev = curr_active->dev;
 
-	if (!(slave_dev->xfrmdev_ops
-	      && slave_dev->xfrmdev_ops->xdo_dev_offload_ok)) {
-		slave_warn(bond_dev, slave_dev, "%s: no slave xdo_dev_offload_ok\n", __func__);
-		return false;
+	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
+		err = false;
+		goto out;
	}
 
-	xs->xso.real_dev = slave_dev;
-	return slave_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
+	if (!xs->xso.real_dev) {
+		err = false;
+		goto out;
+	}
+
+	if (!real_dev->xfrmdev_ops ||
+	    !real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
+	    netif_is_bond_master(real_dev)) {
+		err = false;
+		goto out;
+	}
+
+	err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
+out:
+	rcu_read_unlock();
+	return err;
 }
 
 static const struct xfrmdev_ops bond_xfrmdev_ops = {
@@ -974,8 +1101,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
		return;
 
 #ifdef CONFIG_XFRM_OFFLOAD
-	if (old_active && bond->xs)
-		bond_ipsec_del_sa(bond->xs);
+	bond_ipsec_del_sa_all(bond);
 #endif /* CONFIG_XFRM_OFFLOAD */
 
	if (new_active) {
@@ -1051,10 +1177,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
	}
 
 #ifdef CONFIG_XFRM_OFFLOAD
-	if (new_active && bond->xs) {
-		xfrm_dev_state_flush(dev_net(bond->dev), bond->dev, true);
-		bond_ipsec_add_sa(bond->xs);
-	}
+	bond_ipsec_add_sa_all(bond);
 #endif /* CONFIG_XFRM_OFFLOAD */
 
	/* resend IGMP joins since active slave has changed or
@@ -3293,6 +3416,9 @@ static int bond_master_netdev_event(unsigned long event,
		return bond_event_changename(event_bond);
	case NETDEV_UNREGISTER:
		bond_remove_proc_entry(event_bond);
+#ifdef CONFIG_XFRM_OFFLOAD
+		xfrm_dev_state_flush(dev_net(bond_dev), bond_dev, true);
+#endif /* CONFIG_XFRM_OFFLOAD */
		break;
	case NETDEV_REGISTER:
		bond_create_proc_entry(event_bond);
@@ -4726,7 +4852,8 @@ void bond_setup(struct net_device *bond_dev)
 #ifdef CONFIG_XFRM_OFFLOAD
	/* set up xfrm device ops (only supported in active-backup right now) */
	bond_dev->xfrmdev_ops = &bond_xfrmdev_ops;
-	bond->xs = NULL;
+	INIT_LIST_HEAD(&bond->ipsec_list);
+	spin_lock_init(&bond->ipsec_lock);
 #endif /* CONFIG_XFRM_OFFLOAD */
 
	/* don't acquire bond device's netif_tx_lock when transmitting */
@@ -3433,6 +3433,11 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
	.serdes_irq_enable = mv88e6390_serdes_irq_enable,
	.serdes_irq_status = mv88e6390_serdes_irq_status,
	.gpio_ops = &mv88e6352_gpio_ops,
+	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
+	.serdes_get_strings = mv88e6390_serdes_get_strings,
+	.serdes_get_stats = mv88e6390_serdes_get_stats,
+	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
+	.serdes_get_regs = mv88e6390_serdes_get_regs,
	.phylink_validate = mv88e6341_phylink_validate,
 };
 
@@ -4205,6 +4210,11 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
	.gpio_ops = &mv88e6352_gpio_ops,
	.avb_ops = &mv88e6390_avb_ops,
	.ptp_ops = &mv88e6352_ptp_ops,
+	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
+	.serdes_get_strings = mv88e6390_serdes_get_strings,
+	.serdes_get_stats = mv88e6390_serdes_get_stats,
+	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
+	.serdes_get_regs = mv88e6390_serdes_get_regs,
	.phylink_validate = mv88e6341_phylink_validate,
 };
 
@@ -590,7 +590,7 @@ static struct mv88e6390_serdes_hw_stat mv88e6390_serdes_hw_stats[] = {
 
 int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port)
 {
-	if (mv88e6390_serdes_get_lane(chip, port) == 0)
+	if (mv88e6xxx_serdes_get_lane(chip, port) == 0)
		return 0;
 
	return ARRAY_SIZE(mv88e6390_serdes_hw_stats);
@@ -602,7 +602,7 @@ int mv88e6390_serdes_get_strings(struct mv88e6xxx_chip *chip,
	struct mv88e6390_serdes_hw_stat *stat;
	int i;
 
-	if (mv88e6390_serdes_get_lane(chip, port) == 0)
+	if (mv88e6xxx_serdes_get_lane(chip, port) == 0)
		return 0;
 
	for (i = 0; i < ARRAY_SIZE(mv88e6390_serdes_hw_stats); i++) {
@@ -638,7 +638,7 @@ int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
	int lane;
	int i;
 
-	lane = mv88e6390_serdes_get_lane(chip, port);
+	lane = mv88e6xxx_serdes_get_lane(chip, port);
	if (lane == 0)
		return 0;
 
@@ -350,6 +350,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
		if (dsa_is_cpu_port(ds, port))
			v->pvid = true;
		list_add(&v->list, &priv->dsa_8021q_vlans);
+
+		v = kmemdup(v, sizeof(*v), GFP_KERNEL);
+		if (!v)
+			return -ENOMEM;
+
+		list_add(&v->list, &priv->bridge_vlans);
	}
 
	((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
@@ -1633,11 +1633,16 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 
 	if ((tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) &&
 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
-		u16 vlan_proto = tpa_info->metadata >>
-			RX_CMP_FLAGS2_METADATA_TPID_SFT;
+		__be16 vlan_proto = htons(tpa_info->metadata >>
+					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
 		u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
 
-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
+		if (eth_type_vlan(vlan_proto)) {
+			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
+		} else {
+			dev_kfree_skb(skb);
+			return NULL;
+		}
 	}
 
 	skb_checksum_none_assert(skb);
@@ -1858,9 +1863,15 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
 		u32 meta_data = le32_to_cpu(rxcmp1->rx_cmp_meta_data);
 		u16 vtag = meta_data & RX_CMP_FLAGS2_METADATA_TCI_MASK;
-		u16 vlan_proto = meta_data >> RX_CMP_FLAGS2_METADATA_TPID_SFT;
+		__be16 vlan_proto = htons(meta_data >>
+					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
 
-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
+		if (eth_type_vlan(vlan_proto)) {
+			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
+		} else {
+			dev_kfree_skb(skb);
+			goto next_rx;
+		}
 	}
 
 	skb_checksum_none_assert(skb);
@@ -9830,6 +9841,12 @@ int bnxt_half_open_nic(struct bnxt *bp)
 {
 	int rc = 0;
 
+	if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
+		netdev_err(bp->dev, "A previous firmware reset has not completed, aborting half open\n");
+		rc = -ENODEV;
+		goto half_open_err;
+	}
+
 	rc = bnxt_alloc_mem(bp, false);
 	if (rc) {
 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
@@ -11480,6 +11497,10 @@ static void bnxt_fw_reset_task(struct work_struct *work)
 	}
 	bp->fw_reset_timestamp = jiffies;
 	rtnl_lock();
+	if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
+		rtnl_unlock();
+		goto fw_reset_abort;
+	}
 	bnxt_fw_reset_close(bp);
 	if (bp->fw_cap & BNXT_FW_CAP_ERR_RECOVER_RELOAD) {
 		bp->fw_reset_state = BNXT_FW_RESET_STATE_POLL_FW_DOWN;
@@ -12901,7 +12922,8 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
 	if (netif_running(netdev))
 		bnxt_close(netdev);
 
-	pci_disable_device(pdev);
+	if (pci_is_enabled(pdev))
+		pci_disable_device(pdev);
 	bnxt_free_ctx_mem(bp);
 	kfree(bp->ctx);
 	bp->ctx = NULL;
@@ -479,15 +479,16 @@ struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev)
 		if (!edev)
 			return ERR_PTR(-ENOMEM);
 		edev->en_ops = &bnxt_en_ops_tbl;
-		if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
-			edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
-		if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
-			edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
 		edev->net = dev;
 		edev->pdev = bp->pdev;
 		edev->l2_db_size = bp->db_size;
 		edev->l2_db_size_nc = bp->db_size;
 		bp->edev = edev;
 	}
+	edev->flags &= ~BNXT_EN_FLAG_ROCE_CAP;
+	if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
+		edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
+	if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
+		edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
 	return bp->edev;
 }
 
@@ -420,7 +420,7 @@ static int cn23xx_pf_setup_global_input_regs(struct octeon_device *oct)
 	 * bits 32:47 indicate the PVF num.
 	 */
 	for (q_no = 0; q_no < ern; q_no++) {
-		reg_val = oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
+		reg_val = (u64)oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
 
 		/* for VF assigned queues. */
 		if (q_no < oct->sriov_info.pf_srn) {
@@ -2643,6 +2643,9 @@ static void detach_ulds(struct adapter *adap)
 {
 	unsigned int i;
 
+	if (!is_uld(adap))
+		return;
+
 	mutex_lock(&uld_mutex);
 	list_del(&adap->list_node);
 
@@ -7145,10 +7148,13 @@ static void remove_one(struct pci_dev *pdev)
 		 */
 		destroy_workqueue(adapter->workq);
 
-		if (is_uld(adapter)) {
-			detach_ulds(adapter);
-			t4_uld_clean_up(adapter);
-		}
+		detach_ulds(adapter);
+
+		for_each_port(adapter, i)
+			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+				unregister_netdev(adapter->port[i]);
+
+		t4_uld_clean_up(adapter);
 
 		adap_free_hma_mem(adapter);
 
@@ -7156,10 +7162,6 @@ static void remove_one(struct pci_dev *pdev)
 
 		cxgb4_free_mps_ref_entries(adapter);
 
-		for_each_port(adapter, i)
-			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
-				unregister_netdev(adapter->port[i]);
-
 		debugfs_remove_recursive(adapter->debugfs_root);
 
 		if (!is_t4(adapter->params.chip))
@@ -581,6 +581,9 @@ void t4_uld_clean_up(struct adapter *adap)
 {
 	unsigned int i;
 
+	if (!is_uld(adap))
+		return;
+
 	mutex_lock(&uld_mutex);
 	for (i = 0; i < CXGB4_ULD_MAX; i++) {
 		if (!adap->uld[i].handle)
@@ -1340,13 +1340,16 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	err = register_netdev(dev);
 	if (err)
-		goto abort_with_wq;
+		goto abort_with_gve_init;
 
 	dev_info(&pdev->dev, "GVE version %s\n", gve_version_str);
 	gve_clear_probe_in_progress(priv);
 	queue_work(priv->gve_wq, &priv->service_task);
 	return 0;
 
+abort_with_gve_init:
+	gve_teardown_priv_resources(priv);
+
 abort_with_wq:
 	destroy_workqueue(priv->gve_wq);
 
@@ -131,7 +131,7 @@
 /* buf unit size is cache_line_size, which is 64, so the shift is 6 */
 #define PPE_BUF_SIZE_SHIFT		6
 #define PPE_TX_BUF_HOLD			BIT(31)
-#define CACHE_LINE_MASK			0x3F
+#define SOC_CACHE_LINE_MASK		0x3F
 #else
 #define PPE_CFG_QOS_VMID_GRP_SHIFT	8
 #define PPE_CFG_RX_CTRL_ALIGN_SHIFT	11
@@ -531,8 +531,8 @@ hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 #if defined(CONFIG_HI13X1_GMAC)
 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV
 		| TX_RELEASE_TO_PPE | priv->port << TX_POOL_SHIFT);
-	desc->data_offset = (__force u32)cpu_to_be32(phys & CACHE_LINE_MASK);
-	desc->send_addr = (__force u32)cpu_to_be32(phys & ~CACHE_LINE_MASK);
+	desc->data_offset = (__force u32)cpu_to_be32(phys & SOC_CACHE_LINE_MASK);
+	desc->send_addr = (__force u32)cpu_to_be32(phys & ~SOC_CACHE_LINE_MASK);
 #else
 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
 	desc->send_addr = (__force u32)cpu_to_be32(phys);
@@ -134,7 +134,8 @@ struct hclge_mbx_vf_to_pf_cmd {
 	u8 mbx_need_resp;
 	u8 rsv1[1];
 	u8 msg_len;
-	u8 rsv2[3];
+	u8 rsv2;
+	u16 match_id;
 	struct hclge_vf_to_pf_msg msg;
 };
 
@@ -144,7 +145,8 @@ struct hclge_mbx_pf_to_vf_cmd {
 	u8 dest_vfid;
 	u8 rsv[3];
 	u8 msg_len;
-	u8 rsv1[3];
+	u8 rsv1;
+	u16 match_id;
 	struct hclge_pf_to_vf_msg msg;
 };
 
@@ -47,6 +47,7 @@ static int hclge_gen_resp_to_vf(struct hclge_vport *vport,
 
 	resp_pf_to_vf->dest_vfid = vf_to_pf_req->mbx_src_vfid;
 	resp_pf_to_vf->msg_len = vf_to_pf_req->msg_len;
+	resp_pf_to_vf->match_id = vf_to_pf_req->match_id;
 
 	resp_pf_to_vf->msg.code = HCLGE_MBX_PF_VF_RESP;
 	resp_pf_to_vf->msg.vf_mbx_msg_code = vf_to_pf_req->msg.code;
@@ -2518,6 +2518,16 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
 
 static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
 {
+	struct hnae3_handle *nic = &hdev->nic;
+	int ret;
+
+	ret = hclgevf_en_hw_strip_rxvtag(nic, true);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to enable rx vlan offload, ret = %d\n", ret);
+		return ret;
+	}
+
 	return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
 				       false);
 }
@@ -7657,6 +7657,7 @@ err_flashmap:
 err_ioremap:
 	free_netdev(netdev);
 err_alloc_etherdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_mem_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -2227,6 +2227,7 @@ err_sw_init:
 err_ioremap:
 	free_netdev(netdev);
 err_alloc_netdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_mem_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -3759,6 +3759,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 err_ioremap:
 	free_netdev(netdev);
 err_alloc_etherdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -931,6 +931,7 @@ static void igb_configure_msix(struct igb_adapter *adapter)
 **/
 static int igb_request_msix(struct igb_adapter *adapter)
 {
+	unsigned int num_q_vectors = adapter->num_q_vectors;
 	struct net_device *netdev = adapter->netdev;
 	int i, err = 0, vector = 0, free_vector = 0;
 
@@ -939,7 +940,13 @@ static int igb_request_msix(struct igb_adapter *adapter)
 	if (err)
 		goto err_out;
 
-	for (i = 0; i < adapter->num_q_vectors; i++) {
+	if (num_q_vectors > MAX_Q_VECTORS) {
+		num_q_vectors = MAX_Q_VECTORS;
+		dev_warn(&adapter->pdev->dev,
+			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
+			 adapter->num_q_vectors, MAX_Q_VECTORS);
+	}
+	for (i = 0; i < num_q_vectors; i++) {
 		struct igb_q_vector *q_vector = adapter->q_vector[i];
 
 		vector++;
@@ -1678,14 +1685,15 @@ static bool is_any_txtime_enabled(struct igb_adapter *adapter)
 **/
 static void igb_config_tx_modes(struct igb_adapter *adapter, int queue)
 {
-	struct igb_ring *ring = adapter->tx_ring[queue];
 	struct net_device *netdev = adapter->netdev;
 	struct e1000_hw *hw = &adapter->hw;
+	struct igb_ring *ring;
 	u32 tqavcc, tqavctrl;
 	u16 value;
 
 	WARN_ON(hw->mac.type != e1000_i210);
 	WARN_ON(queue < 0 || queue > 1);
+	ring = adapter->tx_ring[queue];
 
 	/* If any of the Qav features is enabled, configure queues as SR and
 	 * with HIGH PRIO. If none is, then configure them with LOW PRIO and
@@ -3616,6 +3624,7 @@ err_sw_init:
 err_ioremap:
 	free_netdev(netdev);
 err_alloc_etherdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_mem_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -4836,6 +4845,8 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
 			       DMA_TO_DEVICE);
 	}
 
+	tx_buffer->next_to_watch = NULL;
+
 	/* move us one more past the eop_desc for start of next pkt */
 	tx_buffer++;
 	i++;
@@ -532,7 +532,7 @@ static inline s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data)
 	if (hw->phy.ops.read_reg)
 		return hw->phy.ops.read_reg(hw, offset, data);
 
-	return 0;
+	return -EOPNOTSUPP;
 }
 
 void igc_reinit_locked(struct igc_adapter *);
@@ -207,6 +207,8 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
 			       DMA_TO_DEVICE);
 	}
 
+	tx_buffer->next_to_watch = NULL;
+
 	/* move us one more past the eop_desc for start of next pkt */
 	tx_buffer++;
 	i++;
@@ -5221,6 +5223,7 @@ err_sw_init:
 err_ioremap:
 	free_netdev(netdev);
 err_alloc_etherdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_mem_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -1825,7 +1825,8 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 				struct sk_buff *skb)
 {
 	if (ring_uses_build_skb(rx_ring)) {
-		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
+		unsigned long mask = (unsigned long)ixgbe_rx_pg_size(rx_ring) - 1;
+		unsigned long offset = (unsigned long)(skb->data) & mask;
 
 		dma_sync_single_range_for_cpu(rx_ring->dev,
 					      IXGBE_CB(skb)->dma,
@@ -11081,6 +11082,7 @@ err_ioremap:
 	disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
 	free_netdev(netdev);
 err_alloc_etherdev:
+	pci_disable_pcie_error_reporting(pdev);
 	pci_release_mem_regions(pdev);
 err_pci_reg:
 err_dma:
@@ -211,7 +211,7 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
 static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
 					  u32 *mykey, u32 *mysalt)
 {
-	struct net_device *dev = xs->xso.dev;
+	struct net_device *dev = xs->xso.real_dev;
 	unsigned char *key_data;
 	char *alg_name = NULL;
 	int key_len;
@@ -260,12 +260,15 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
 **/
 static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 {
-	struct net_device *dev = xs->xso.dev;
-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
+	struct net_device *dev = xs->xso.real_dev;
+	struct ixgbevf_adapter *adapter;
+	struct ixgbevf_ipsec *ipsec;
 	u16 sa_idx;
 	int ret;
 
+	adapter = netdev_priv(dev);
+	ipsec = adapter->ipsec;
+
 	if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
 		netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n",
 			   xs->id.proto);
@@ -383,11 +386,14 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 **/
 static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs)
 {
-	struct net_device *dev = xs->xso.dev;
-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
+	struct net_device *dev = xs->xso.real_dev;
+	struct ixgbevf_adapter *adapter;
+	struct ixgbevf_ipsec *ipsec;
 	u16 sa_idx;
 
+	adapter = netdev_priv(dev);
+	ipsec = adapter->ipsec;
+
 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
 		sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX;
 
@@ -5160,7 +5160,8 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
 	new_bus->priv = tp;
 	new_bus->parent = &pdev->dev;
 	new_bus->irq[0] = PHY_IGNORE_INTERRUPT;
-	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x", pci_dev_id(pdev));
+	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x-%x",
+		 pci_domain_nr(pdev->bus), pci_dev_id(pdev));
 
 	new_bus->read = r8169_mdio_read_reg;
 	new_bus->write = r8169_mdio_write_reg;
@@ -889,18 +889,20 @@ int efx_set_channels(struct efx_nic *efx)
 		if (efx_channel_is_xdp_tx(channel)) {
 			efx_for_each_channel_tx_queue(tx_queue, channel) {
 				tx_queue->queue = next_queue++;
-				netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
-					  channel->channel, tx_queue->label,
-					  xdp_queue_number, tx_queue->queue);
+
 				/* We may have a few left-over XDP TX
 				 * queues owing to xdp_tx_queue_count
 				 * not dividing evenly by EFX_MAX_TXQ_PER_CHANNEL.
 				 * We still allocate and probe those
 				 * TXQs, but never use them.
 				 */
-				if (xdp_queue_number < efx->xdp_tx_queue_count)
+				if (xdp_queue_number < efx->xdp_tx_queue_count) {
+					netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+						  channel->channel, tx_queue->label,
+						  xdp_queue_number, tx_queue->queue);
 					efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
 					xdp_queue_number++;
+				}
 			}
 		} else {
 			efx_for_each_channel_tx_queue(tx_queue, channel) {
@@ -912,6 +914,7 @@ int efx_set_channels(struct efx_nic *efx)
 			}
 		}
 	}
+	WARN_ON(xdp_queue_number != efx->xdp_tx_queue_count);
 
 	rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
 	if (rc)
@@ -399,6 +399,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
 	struct device_node *np = pdev->dev.of_node;
 	struct plat_stmmacenet_data *plat;
 	struct stmmac_dma_cfg *dma_cfg;
+	int phy_mode;
 	int rc;
 
 	plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL);
@@ -413,10 +414,11 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
 		*mac = NULL;
 	}
 
-	plat->phy_interface = device_get_phy_mode(&pdev->dev);
-	if (plat->phy_interface < 0)
-		return ERR_PTR(plat->phy_interface);
+	phy_mode = device_get_phy_mode(&pdev->dev);
+	if (phy_mode < 0)
+		return ERR_PTR(phy_mode);
 
+	plat->phy_interface = phy_mode;
 	plat->interface = stmmac_of_get_mac_mode(np);
 	if (plat->interface < 0)
 		plat->interface = plat->phy_interface;
@@ -2496,7 +2496,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
 			   hso_net_init);
 	if (!net) {
 		dev_err(&interface->dev, "Unable to create ethernet device\n");
-		goto exit;
+		goto err_hso_dev;
 	}
 
 	hso_net = netdev_priv(net);
@@ -2509,13 +2509,13 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
 				      USB_DIR_IN);
 	if (!hso_net->in_endp) {
 		dev_err(&interface->dev, "Can't find BULK IN endpoint\n");
-		goto exit;
+		goto err_net;
 	}
 	hso_net->out_endp = hso_get_ep(interface, USB_ENDPOINT_XFER_BULK,
 				       USB_DIR_OUT);
 	if (!hso_net->out_endp) {
 		dev_err(&interface->dev, "Can't find BULK OUT endpoint\n");
-		goto exit;
+		goto err_net;
 	}
 	SET_NETDEV_DEV(net, &interface->dev);
 	SET_NETDEV_DEVTYPE(net, &hso_type);
@@ -2524,18 +2524,18 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
 	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
 		hso_net->mux_bulk_rx_urb_pool[i] = usb_alloc_urb(0, GFP_KERNEL);
 		if (!hso_net->mux_bulk_rx_urb_pool[i])
-			goto exit;
+			goto err_mux_bulk_rx;
 		hso_net->mux_bulk_rx_buf_pool[i] = kzalloc(MUX_BULK_RX_BUF_SIZE,
 							   GFP_KERNEL);
 		if (!hso_net->mux_bulk_rx_buf_pool[i])
-			goto exit;
+			goto err_mux_bulk_rx;
 	}
 	hso_net->mux_bulk_tx_urb = usb_alloc_urb(0, GFP_KERNEL);
 	if (!hso_net->mux_bulk_tx_urb)
-		goto exit;
+		goto err_mux_bulk_rx;
 	hso_net->mux_bulk_tx_buf = kzalloc(MUX_BULK_TX_BUF_SIZE, GFP_KERNEL);
 	if (!hso_net->mux_bulk_tx_buf)
-		goto exit;
+		goto err_free_tx_urb;
 
 	add_net_device(hso_dev);
 
@@ -2543,7 +2543,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
 	result = register_netdev(net);
 	if (result) {
 		dev_err(&interface->dev, "Failed to register device\n");
-		goto exit;
+		goto err_free_tx_buf;
 	}
 
 	hso_log_port(hso_dev);
@@ -2551,8 +2551,21 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
 	hso_create_rfkill(hso_dev, interface);
 
 	return hso_dev;
-exit:
-	hso_free_net_device(hso_dev, true);
+err_free_tx_buf:
+	remove_net_device(hso_dev);
+	kfree(hso_net->mux_bulk_tx_buf);
+err_free_tx_urb:
+	usb_free_urb(hso_net->mux_bulk_tx_urb);
+err_mux_bulk_rx:
+	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
+		usb_free_urb(hso_net->mux_bulk_rx_urb_pool[i]);
+		kfree(hso_net->mux_bulk_rx_buf_pool[i]);
+	}
+err_net:
+	free_netdev(net);
+err_hso_dev:
+	kfree(hso_dev);
 	return NULL;
 }
 
@@ -751,7 +751,10 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 		cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
 	cmnd->write_zeroes.length =
 		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
-	cmnd->write_zeroes.control = 0;
+	if (nvme_ns_has_pi(ns))
+		cmnd->write_zeroes.control = cpu_to_le16(NVME_RW_PRINFO_PRACT);
+	else
+		cmnd->write_zeroes.control = 0;
 	return BLK_STS_OK;
 }
 
@@ -2596,7 +2596,9 @@ static void nvme_reset_work(struct work_struct *work)
 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
 	int result;
 
-	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
+	if (dev->ctrl.state != NVME_CTRL_RESETTING) {
+		dev_warn(dev->ctrl.device, "ctrl state %d is not RESETTING\n",
+			 dev->ctrl.state);
 		result = -ENODEV;
 		goto out;
 	}
@@ -3003,7 +3005,6 @@ static void nvme_remove(struct pci_dev *pdev)
 	if (!pci_device_is_present(pdev)) {
 		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
 		nvme_dev_disable(dev, true);
-		nvme_dev_remove_admin(dev);
 	}
 
 	flush_work(&dev->ctrl.reset_work);
@@ -5264,7 +5264,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
 static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
 {
 	if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
-	    (pdev->device == 0x7340 && pdev->revision != 0xc5))
+	    (pdev->device == 0x7340 && pdev->revision != 0xc5) ||
+	    (pdev->device == 0x7341 && pdev->revision != 0x00))
 		return;
 
 	pci_info(pdev, "disabling ATS\n");
@@ -5279,6 +5280,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
 /* AMD Navi14 dGPU */
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7341, quirk_amd_harvest_no_ats);
 #endif /* CONFIG_PCI_ATS */
 
 /* Freescale PCIe doesn't support MSI in RC mode */
@@ -180,13 +180,10 @@ static int sprd_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 			}
 		}
 
-		if (state->period != cstate->period ||
-		    state->duty_cycle != cstate->duty_cycle) {
-			ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
-					      state->period);
-			if (ret)
-				return ret;
-		}
+		ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
+				      state->period);
+		if (ret)
+			return ret;
 
 		sprd_pwm_write(spc, pwm->hwpwm, SPRD_PWM_ENABLE, 1);
 	} else if (cstate->enabled) {
@@ -366,9 +366,8 @@ static struct hi6421_regulator_info
 
 static int hi6421_regulator_enable(struct regulator_dev *rdev)
 {
-	struct hi6421_regulator_pdata *pdata;
+	struct hi6421_regulator_pdata *pdata = rdev_get_drvdata(rdev);
 
-	pdata = dev_get_drvdata(rdev->dev.parent);
 	/* hi6421 spec requires regulator enablement must be serialized:
 	 * - Because when BUCK, LDO switching from off to on, it will have
 	 *   a huge instantaneous current; so you can not turn on two or
@@ -385,9 +384,10 @@ static int hi6421_regulator_enable(struct regulator_dev *rdev)
 
 static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
 {
-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
-	u32 reg_val;
+	struct hi6421_regulator_info *info;
+	unsigned int reg_val;
 
+	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
 	if (reg_val & info->mode_mask)
 		return REGULATOR_MODE_IDLE;
@@ -397,9 +397,10 @@ static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
 
 static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
 {
-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
-	u32 reg_val;
+	struct hi6421_regulator_info *info;
+	unsigned int reg_val;
 
+	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
 	if (reg_val & info->mode_mask)
 		return REGULATOR_MODE_STANDBY;
@@ -410,9 +411,10 @@ static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
 static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
 					 unsigned int mode)
 {
-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
-	u32 new_mode;
+	struct hi6421_regulator_info *info;
+	unsigned int new_mode;
 
+	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
 	switch (mode) {
 	case REGULATOR_MODE_NORMAL:
 		new_mode = 0;
@@ -434,9 +436,10 @@ static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
 static int hi6421_regulator_buck_set_mode(struct regulator_dev *rdev,
 					  unsigned int mode)
 {
-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
-	u32 new_mode;
+	struct hi6421_regulator_info *info;
+	unsigned int new_mode;
 
+	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
 	switch (mode) {
 	case REGULATOR_MODE_NORMAL:
 		new_mode = 0;
@@ -459,7 +462,9 @@ static unsigned int
 hi6421_regulator_ldo_get_optimum_mode(struct regulator_dev *rdev,
 				      int input_uV, int output_uV, int load_uA)
 {
-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+	struct hi6421_regulator_info *info;
+
+	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
 
 	if (load_uA > info->eco_microamp)
 		return REGULATOR_MODE_NORMAL;
@@ -543,14 +548,13 @@ static int hi6421_regulator_probe(struct platform_device *pdev)
 	if (!pdata)
 		return -ENOMEM;
 	mutex_init(&pdata->lock);
-	platform_set_drvdata(pdev, pdata);
 
 	for (i = 0; i < ARRAY_SIZE(hi6421_regulator_info); i++) {
 		/* assign per-regulator data */
 		info = &hi6421_regulator_info[i];
 
 		config.dev = pdev->dev.parent;
-		config.driver_data = info;
+		config.driver_data = pdata;
 		config.regmap = pmic->regmap;
 
 		rdev = devm_regulator_register(&pdev->dev, &info->desc,
@@ -440,39 +440,10 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
 	struct device *dev = container_of(kobj, struct device, kobj);
 	struct iscsi_iface *iface = iscsi_dev_to_iface(dev);
 	struct iscsi_transport *t = iface->transport;
-	int param;
-	int param_type;
+	int param = -1;
 
 	if (attr == &dev_attr_iface_enabled.attr)
 		param = ISCSI_NET_PARAM_IFACE_ENABLE;
-	else if (attr == &dev_attr_iface_vlan_id.attr)
-		param = ISCSI_NET_PARAM_VLAN_ID;
-	else if (attr == &dev_attr_iface_vlan_priority.attr)
-		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
-	else if (attr == &dev_attr_iface_vlan_enabled.attr)
-		param = ISCSI_NET_PARAM_VLAN_ENABLED;
-	else if (attr == &dev_attr_iface_mtu.attr)
-		param = ISCSI_NET_PARAM_MTU;
-	else if (attr == &dev_attr_iface_port.attr)
-		param = ISCSI_NET_PARAM_PORT;
-	else if (attr == &dev_attr_iface_ipaddress_state.attr)
-		param = ISCSI_NET_PARAM_IPADDR_STATE;
-	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
-		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
-	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
-		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
-	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
-		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
-	else if (attr == &dev_attr_iface_tcp_wsf.attr)
-		param = ISCSI_NET_PARAM_TCP_WSF;
-	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
-		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
-	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
-		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
-	else if (attr == &dev_attr_iface_cache_id.attr)
-		param = ISCSI_NET_PARAM_CACHE_ID;
-	else if (attr == &dev_attr_iface_redirect_en.attr)
-		param = ISCSI_NET_PARAM_REDIRECT_EN;
 	else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
 		param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
 	else if (attr == &dev_attr_iface_header_digest.attr)
@@ -509,6 +480,38 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
 		param = ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN;
 	else if (attr == &dev_attr_iface_initiator_name.attr)
 		param = ISCSI_IFACE_PARAM_INITIATOR_NAME;
+
+	if (param != -1)
+		return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
+
+	if (attr == &dev_attr_iface_vlan_id.attr)
+		param = ISCSI_NET_PARAM_VLAN_ID;
+	else if (attr == &dev_attr_iface_vlan_priority.attr)
+		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
+	else if (attr == &dev_attr_iface_vlan_enabled.attr)
+		param = ISCSI_NET_PARAM_VLAN_ENABLED;
+	else if (attr == &dev_attr_iface_mtu.attr)
+		param = ISCSI_NET_PARAM_MTU;
+	else if (attr == &dev_attr_iface_port.attr)
+		param = ISCSI_NET_PARAM_PORT;
+	else if (attr == &dev_attr_iface_ipaddress_state.attr)
+		param = ISCSI_NET_PARAM_IPADDR_STATE;
+	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
+		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
+	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
+		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
+	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
+		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
+	else if (attr == &dev_attr_iface_tcp_wsf.attr)
+		param = ISCSI_NET_PARAM_TCP_WSF;
+	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
+		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
+	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
+		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
+	else if (attr == &dev_attr_iface_cache_id.attr)
+		param = ISCSI_NET_PARAM_CACHE_ID;
+	else if (attr == &dev_attr_iface_redirect_en.attr)
+		param = ISCSI_NET_PARAM_REDIRECT_EN;
 	else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV4) {
 		if (attr == &dev_attr_ipv4_iface_ipaddress.attr)
 			param = ISCSI_NET_PARAM_IPV4_ADDR;
@@ -599,32 +602,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
 			return 0;
 	}
 
-	switch (param) {
-	case ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO:
-	case ISCSI_IFACE_PARAM_HDRDGST_EN:
-	case ISCSI_IFACE_PARAM_DATADGST_EN:
-	case ISCSI_IFACE_PARAM_IMM_DATA_EN:
-	case ISCSI_IFACE_PARAM_INITIAL_R2T_EN:
-	case ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN:
-	case ISCSI_IFACE_PARAM_PDU_INORDER_EN:
-	case ISCSI_IFACE_PARAM_ERL:
-	case ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH:
-	case ISCSI_IFACE_PARAM_FIRST_BURST:
-	case ISCSI_IFACE_PARAM_MAX_R2T:
-	case ISCSI_IFACE_PARAM_MAX_BURST:
-	case ISCSI_IFACE_PARAM_CHAP_AUTH_EN:
-	case ISCSI_IFACE_PARAM_BIDI_CHAP_EN:
-	case ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL:
-	case ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN:
-	case ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN:
-	case ISCSI_IFACE_PARAM_INITIATOR_NAME:
-		param_type = ISCSI_IFACE_PARAM;
-		break;
-	default:
-		param_type = ISCSI_NET_PARAM;
-	}
-
-	return t->attr_is_visible(param_type, param);
+	return t->attr_is_visible(ISCSI_NET_PARAM, param);
 }
 
 static struct attribute *iscsi_iface_attrs[] = {
@@ -84,6 +84,7 @@ MODULE_PARM_DESC(polling_limit_us,
  * struct bcm2835_spi - BCM2835 SPI controller
  * @regs: base address of register map
  * @clk: core clock, divided to calculate serial clock
+ * @clk_hz: core clock cached speed
  * @irq: interrupt, signals TX FIFO empty or RX FIFO ¾ full
  * @tfr: SPI transfer currently processed
  * @ctlr: SPI controller reverse lookup
@@ -124,6 +125,7 @@ MODULE_PARM_DESC(polling_limit_us,
 struct bcm2835_spi {
 	void __iomem *regs;
 	struct clk *clk;
+	unsigned long clk_hz;
 	int irq;
 	struct spi_transfer *tfr;
 	struct spi_controller *ctlr;
@@ -1082,19 +1084,18 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
 				    struct spi_transfer *tfr)
 {
 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
-	unsigned long spi_hz, clk_hz, cdiv;
+	unsigned long spi_hz, cdiv;
 	unsigned long hz_per_byte, byte_limit;
 	u32 cs = bs->prepare_cs[spi->chip_select];
 
 	/* set clock */
 	spi_hz = tfr->speed_hz;
-	clk_hz = clk_get_rate(bs->clk);
 
-	if (spi_hz >= clk_hz / 2) {
+	if (spi_hz >= bs->clk_hz / 2) {
 		cdiv = 2; /* clk_hz/2 is the fastest we can go */
 	} else if (spi_hz) {
 		/* CDIV must be a multiple of two */
-		cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
+		cdiv = DIV_ROUND_UP(bs->clk_hz, spi_hz);
 		cdiv += (cdiv % 2);
 
 		if (cdiv >= 65536)
@@ -1102,7 +1103,7 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
 	} else {
 		cdiv = 0; /* 0 is the slowest we can go */
 	}
-	tfr->effective_speed_hz = cdiv ? (clk_hz / cdiv) : (clk_hz / 65536);
+	tfr->effective_speed_hz = cdiv ? (bs->clk_hz / cdiv) : (bs->clk_hz / 65536);
 	bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv);
 
 	/* handle all the 3-wire mode */
@@ -1318,6 +1319,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
 		return bs->irq ? bs->irq : -ENODEV;
 
 	clk_prepare_enable(bs->clk);
+	bs->clk_hz = clk_get_rate(bs->clk);
 
 	err = bcm2835_dma_init(ctlr, &pdev->dev, bs);
 	if (err)
@@ -517,6 +517,12 @@ static int cdns_spi_probe(struct platform_device *pdev)
 		goto clk_dis_apb;
 	}
 
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+	pm_runtime_get_noresume(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
 	ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs);
 	if (ret < 0)
 		master->num_chipselect = CDNS_SPI_DEFAULT_NUM_CS;
@@ -531,11 +537,6 @@ static int cdns_spi_probe(struct platform_device *pdev)
 	/* SPI controller initializations */
 	cdns_spi_init_hw(xspi);
 
-	pm_runtime_set_active(&pdev->dev);
-	pm_runtime_enable(&pdev->dev);
-	pm_runtime_use_autosuspend(&pdev->dev);
-	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
-
 	irq = platform_get_irq(pdev, 0);
 	if (irq <= 0) {
 		ret = -ENXIO;
@@ -566,6 +567,9 @@ static int cdns_spi_probe(struct platform_device *pdev)
 
 	master->bits_per_word_mask = SPI_BPW_MASK(8);
 
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+
 	ret = spi_register_master(master);
 	if (ret) {
 		dev_err(&pdev->dev, "spi_register_master failed\n");
@@ -66,8 +66,7 @@ struct spi_imx_data;
 struct spi_imx_devtype_data {
 	void (*intctrl)(struct spi_imx_data *, int);
 	int (*prepare_message)(struct spi_imx_data *, struct spi_message *);
-	int (*prepare_transfer)(struct spi_imx_data *, struct spi_device *,
-				struct spi_transfer *);
+	int (*prepare_transfer)(struct spi_imx_data *, struct spi_device *);
 	void (*trigger)(struct spi_imx_data *);
 	int (*rx_available)(struct spi_imx_data *);
 	void (*reset)(struct spi_imx_data *);
@@ -572,11 +571,10 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
 }
 
 static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
-				       struct spi_device *spi,
-				       struct spi_transfer *t)
+				       struct spi_device *spi)
 {
 	u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
-	u32 clk = t->speed_hz, delay;
+	u32 clk, delay;
 
 	/* Clear BL field and set the right value */
 	ctrl &= ~MX51_ECSPI_CTRL_BL_MASK;
@@ -590,7 +588,7 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
 	/* set clock speed */
 	ctrl &= ~(0xf << MX51_ECSPI_CTRL_POSTDIV_OFFSET |
 		  0xf << MX51_ECSPI_CTRL_PREDIV_OFFSET);
-	ctrl |= mx51_ecspi_clkdiv(spi_imx, t->speed_hz, &clk);
+	ctrl |= mx51_ecspi_clkdiv(spi_imx, spi_imx->spi_bus_clk, &clk);
 	spi_imx->spi_bus_clk = clk;
 
 	if (spi_imx->usedma)
@@ -702,13 +700,12 @@ static int mx31_prepare_message(struct spi_imx_data *spi_imx,
 }
 
 static int mx31_prepare_transfer(struct spi_imx_data *spi_imx,
-				 struct spi_device *spi,
-				 struct spi_transfer *t)
+				 struct spi_device *spi)
 {
 	unsigned int reg = MX31_CSPICTRL_ENABLE | MX31_CSPICTRL_MASTER;
 	unsigned int clk;
 
-	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, t->speed_hz, &clk) <<
+	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, spi_imx->spi_bus_clk, &clk) <<
 		MX31_CSPICTRL_DR_SHIFT;
 	spi_imx->spi_bus_clk = clk;
 
@@ -807,14 +804,13 @@ static int mx21_prepare_message(struct spi_imx_data *spi_imx,
 }
 
 static int mx21_prepare_transfer(struct spi_imx_data *spi_imx,
-				 struct spi_device *spi,
-				 struct spi_transfer *t)
+				 struct spi_device *spi)
 {
 	unsigned int reg = MX21_CSPICTRL_ENABLE | MX21_CSPICTRL_MASTER;
 	unsigned int max = is_imx27_cspi(spi_imx) ? 16 : 18;
 	unsigned int clk;
 
-	reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, t->speed_hz, max, &clk)
+	reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, spi_imx->spi_bus_clk, max, &clk)
 		<< MX21_CSPICTRL_DR_SHIFT;
 	spi_imx->spi_bus_clk = clk;
 
@@ -883,13 +879,12 @@ static int mx1_prepare_message(struct spi_imx_data *spi_imx,
 }
 
 static int mx1_prepare_transfer(struct spi_imx_data *spi_imx,
-				struct spi_device *spi,
-				struct spi_transfer *t)
+				struct spi_device *spi)
 {
 	unsigned int reg = MX1_CSPICTRL_ENABLE | MX1_CSPICTRL_MASTER;
 	unsigned int clk;
 
-	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, t->speed_hz, &clk) <<
+	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, spi_imx->spi_bus_clk, &clk) <<
 		MX1_CSPICTRL_DR_SHIFT;
 	spi_imx->spi_bus_clk = clk;
 
@@ -1195,6 +1190,16 @@ static int spi_imx_setupxfer(struct spi_device *spi,
 	if (!t)
 		return 0;
 
+	if (!t->speed_hz) {
+		if (!spi->max_speed_hz) {
+			dev_err(&spi->dev, "no speed_hz provided!\n");
+			return -EINVAL;
+		}
+		dev_dbg(&spi->dev, "using spi->max_speed_hz!\n");
+		spi_imx->spi_bus_clk = spi->max_speed_hz;
+	} else
+		spi_imx->spi_bus_clk = t->speed_hz;
+
 	spi_imx->bits_per_word = t->bits_per_word;
 
 	/*
@@ -1236,7 +1241,7 @@ static int spi_imx_setupxfer(struct spi_device *spi,
 		spi_imx->slave_burst = t->len;
 	}
 
-	spi_imx->devtype_data->prepare_transfer(spi_imx, spi, t);
+	spi_imx->devtype_data->prepare_transfer(spi_imx, spi);
 
 	return 0;
 }
--- a/drivers/spi/spi-mt65xx.c
+++ b/drivers/spi/spi-mt65xx.c
@@ -434,13 +434,23 @@ static int mtk_spi_fifo_transfer(struct spi_master *master,
 	mtk_spi_setup_packet(master);
 
 	cnt = xfer->len / 4;
-	iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+	if (xfer->tx_buf)
+		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+
+	if (xfer->rx_buf)
+		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);
 
 	remainder = xfer->len % 4;
 	if (remainder > 0) {
 		reg_val = 0;
-		memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
-		writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+		if (xfer->tx_buf) {
+			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+		}
+		if (xfer->rx_buf) {
+			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
+			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
+		}
 	}
 
 	mtk_spi_enable_transfer(master);
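With the mediatek fix, each direction of the FIFO transfer is touched only when its buffer exists, and the 1-3 trailing bytes are staged through one register-sized temporary. A rough user-space model of the word-plus-remainder copy, with the register simulated by a local (nothing here is the driver's API):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
	const uint8_t tx[7] = { 1, 2, 3, 4, 5, 6, 7 };
	uint8_t rx[7] = { 0 };
	uint32_t reg_val;
	size_t len = sizeof(tx), cnt = len / 4, rem = len % 4, i;

	/* full 32-bit words: one register access per word */
	for (i = 0; i < cnt; i++) {
		memcpy(&reg_val, tx + i * 4, 4);	/* writel(...) stand-in */
		memcpy(rx + i * 4, &reg_val, 4);	/* readl(...) stand-in */
	}

	/* trailing 1-3 bytes go through one padded register access;
	 * the fix only touches tx_buf/rx_buf here when they are non-NULL
	 */
	if (rem) {
		reg_val = 0;
		memcpy(&reg_val, tx + cnt * 4, rem);
		memcpy(rx + cnt * 4, &reg_val, rem);
	}

	printf("last byte: %d (expect 7)\n", rx[6]);
	return 0;
}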
--- a/drivers/spi/spi-stm32.c
+++ b/drivers/spi/spi-stm32.c
@@ -1946,6 +1946,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
 		master->can_dma = stm32_spi_can_dma;
 
 	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_get_noresume(&pdev->dev);
 	pm_runtime_enable(&pdev->dev);
 
 	ret = spi_register_master(master);
@@ -1967,6 +1968,8 @@ static int stm32_spi_probe(struct platform_device *pdev)
 
 err_pm_disable:
 	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 err_dma_release:
 	if (spi->dma_tx)
 		dma_release_channel(spi->dma_tx);
@@ -1983,9 +1986,14 @@ static int stm32_spi_remove(struct platform_device *pdev)
 	struct spi_master *master = platform_get_drvdata(pdev);
 	struct stm32_spi *spi = spi_master_get_devdata(master);
 
+	pm_runtime_get_sync(&pdev->dev);
+
 	spi_unregister_master(master);
 	spi->cfg->disable(spi);
 
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 	if (master->dma_tx)
 		dma_release_channel(master->dma_tx);
 	if (master->dma_rx)
@@ -1993,7 +2001,6 @@ static int stm32_spi_remove(struct platform_device *pdev)
 
 	clk_disable_unprepare(spi->clk);
 
-	pm_runtime_disable(&pdev->dev);
 
 	pinctrl_pm_select_sleep_state(&pdev->dev);
 
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -25,7 +25,7 @@
 #include "target_core_alua.h"
 
 static sense_reason_t
-sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char *, u32, bool);
+sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char, u32, bool);
 static sense_reason_t sbc_execute_unmap(struct se_cmd *cmd);
 
 static sense_reason_t
@@ -279,14 +279,14 @@ static inline unsigned long long transport_lba_64_ext(unsigned char *cdb)
 }
 
 static sense_reason_t
-sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *ops)
+sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, struct sbc_ops *ops)
 {
 	struct se_device *dev = cmd->se_dev;
 	sector_t end_lba = dev->transport->get_blocks(dev) + 1;
 	unsigned int sectors = sbc_get_write_same_sectors(cmd);
 	sense_reason_t ret;
 
-	if ((flags[0] & 0x04) || (flags[0] & 0x02)) {
+	if ((flags & 0x04) || (flags & 0x02)) {
 		pr_err("WRITE_SAME PBDATA and LBDATA"
 			" bits not supported for Block Discard"
 			" Emulation\n");
@@ -308,7 +308,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 	}
 
 	/* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
-	if (flags[0] & 0x10) {
+	if (flags & 0x10) {
 		pr_warn("WRITE SAME with ANCHOR not supported\n");
 		return TCM_INVALID_CDB_FIELD;
 	}
@@ -316,7 +316,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 	 * Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
 	 * translated into block discard requests within backend code.
 	 */
-	if (flags[0] & 0x08) {
+	if (flags & 0x08) {
 		if (!ops->execute_unmap)
 			return TCM_UNSUPPORTED_SCSI_OPCODE;
 
@@ -331,7 +331,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 	if (!ops->execute_write_same)
 		return TCM_UNSUPPORTED_SCSI_OPCODE;
 
-	ret = sbc_check_prot(dev, cmd, &cmd->t_task_cdb[0], sectors, true);
+	ret = sbc_check_prot(dev, cmd, flags >> 5, sectors, true);
 	if (ret)
 		return ret;
 
@@ -686,10 +686,9 @@ sbc_set_prot_op_checks(u8 protect, bool fabric_prot, enum target_prot_type prot_
 }
 
 static sense_reason_t
-sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
+sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char protect,
 	       u32 sectors, bool is_write)
 {
-	u8 protect = cdb[1] >> 5;
 	int sp_ops = cmd->se_sess->sup_prot_ops;
 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
 	bool fabric_prot = false;
@@ -737,7 +736,7 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
 		fallthrough;
 	default:
 		pr_err("Unable to determine pi_prot_type for CDB: 0x%02x "
-		       "PROTECT: 0x%02x\n", cdb[0], protect);
+		       "PROTECT: 0x%02x\n", cmd->t_task_cdb[0], protect);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -812,7 +811,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
 
@@ -826,7 +825,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
 
@@ -840,7 +839,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
 
@@ -861,7 +860,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
 
@@ -875,7 +874,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
 
@@ -890,7 +889,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
 
@@ -949,7 +948,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		size = sbc_get_size(cmd, 1);
 		cmd->t_task_lba = get_unaligned_be64(&cdb[12]);
 
-		ret = sbc_setup_write_same(cmd, &cdb[10], ops);
+		ret = sbc_setup_write_same(cmd, cdb[10], ops);
 		if (ret)
 			return ret;
 		break;
@@ -1048,7 +1047,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		size = sbc_get_size(cmd, 1);
 		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);
 
-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
+		ret = sbc_setup_write_same(cmd, cdb[1], ops);
 		if (ret)
 			return ret;
 		break;
@@ -1066,7 +1065,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		 * Follow sbcr26 with WRITE_SAME (10) and check for the existence
 		 * of byte 1 bit 3 UNMAP instead of original reserved field
 		 */
-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
+		ret = sbc_setup_write_same(cmd, cdb[1], ops);
 		if (ret)
 			return ret;
 		break;
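The target-core rework passes the relevant CDB flags byte by value, so WRITE SAME(32) can hand in byte 10 while WRITE SAME(10/16) hand in byte 1; the PROTECT bits are the top three bits of whichever byte applies. A small standalone demonstration of the extraction (CDB contents invented):

#include <stdio.h>

int main(void)
{
	unsigned char cdb10[10] = { 0 };	/* e.g. WRITE SAME(10) */
	unsigned char cdb32[32] = { 0 };	/* e.g. WRITE SAME(32) */

	cdb10[1] = 0xE8;	/* WRPROTECT=7 in bits 7:5 of byte 1 */
	cdb32[10] = 0x48;	/* WRPROTECT=2 in bits 7:5 of byte 10 */

	/* new calling convention: the caller picks the byte, the callee
	 * (or the caller itself) shifts out the top three bits
	 */
	printf("10-byte CDB protect: %u\n", cdb10[1] >> 5);
	printf("32-byte CDB protect: %u\n", cdb32[10] >> 5);
	return 0;
}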
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -47,6 +47,7 @@
 
 #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
 #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
+#define USB_PING_RESPONSE_TIME		400	/* ns */
 
 /* Protect struct usb_device->state and ->children members
  * Note: Both are also protected by ->dev.sem, except that ->state can
@@ -181,8 +182,9 @@ int usb_device_supports_lpm(struct usb_device *udev)
 }
 
 /*
- * Set the Maximum Exit Latency (MEL) for the host to initiate a transition from
- * either U1 or U2.
+ * Set the Maximum Exit Latency (MEL) for the host to wakup up the path from
+ * U1/U2, send a PING to the device and receive a PING_RESPONSE.
+ * See USB 3.1 section C.1.5.2
  */
 static void usb_set_lpm_mel(struct usb_device *udev,
 		struct usb3_lpm_parameters *udev_lpm_params,
@@ -192,35 +194,37 @@ static void usb_set_lpm_mel(struct usb_device *udev,
 		unsigned int hub_exit_latency)
 {
 	unsigned int total_mel;
-	unsigned int device_mel;
-	unsigned int hub_mel;
 
 	/*
-	 * Calculate the time it takes to transition all links from the roothub
-	 * to the parent hub into U0.  The parent hub must then decode the
-	 * packet (hub header decode latency) to figure out which port it was
-	 * bound for.
-	 *
-	 * The Hub Header decode latency is expressed in 0.1us intervals (0x1
-	 * means 0.1us).  Multiply that by 100 to get nanoseconds.
+	 * tMEL1. time to transition path from host to device into U0.
+	 * MEL for parent already contains the delay up to parent, so only add
+	 * the exit latency for the last link (pick the slower exit latency),
+	 * and the hub header decode latency. See USB 3.1 section C 2.2.1
+	 * Store MEL in nanoseconds
 	 */
 	total_mel = hub_lpm_params->mel +
-		(hub->descriptor->u.ss.bHubHdrDecLat * 100);
+		max(udev_exit_latency, hub_exit_latency) * 1000 +
+		hub->descriptor->u.ss.bHubHdrDecLat * 100;
 
 	/*
-	 * How long will it take to transition the downstream hub's port into
-	 * U0?  The greater of either the hub exit latency or the device exit
-	 * latency.
-	 *
-	 * The BOS U1/U2 exit latencies are expressed in 1us intervals.
-	 * Multiply that by 1000 to get nanoseconds.
+	 * tMEL2. Time to submit PING packet. Sum of tTPTransmissionDelay for
+	 * each link + wHubDelay for each hub. Add only for last link.
+	 * tMEL4, the time for PING_RESPONSE to traverse upstream is similar.
+	 * Multiply by 2 to include it as well.
 	 */
-	device_mel = udev_exit_latency * 1000;
-	hub_mel = hub_exit_latency * 1000;
-	if (device_mel > hub_mel)
-		total_mel += device_mel;
-	else
-		total_mel += hub_mel;
+	total_mel += (__le16_to_cpu(hub->descriptor->u.ss.wHubDelay) +
+		      USB_TP_TRANSMISSION_DELAY) * 2;
+
+	/*
+	 * tMEL3, tPingResponse. Time taken by device to generate PING_RESPONSE
+	 * after receiving PING. Also add 2100ns as stated in USB 3.1 C 1.5.2.4
+	 * to cover the delay if the PING_RESPONSE is queued behind a Max Packet
+	 * Size DP.
+	 * Note these delays should be added only once for the entire path, so
+	 * add them to the MEL of the device connected to the roothub.
+	 */
+	if (!hub->hdev->parent)
+		total_mel += USB_PING_RESPONSE_TIME + 2100;
 
 	udev_lpm_params->mel = total_mel;
 }
@@ -4040,6 +4044,47 @@ static int usb_set_lpm_timeout(struct usb_device *udev,
 	return 0;
 }
 
+/*
+ * Don't allow device intiated U1/U2 if the system exit latency + one bus
+ * interval is greater than the minimum service interval of any active
+ * periodic endpoint. See USB 3.2 section 9.4.9
+ */
+static bool usb_device_may_initiate_lpm(struct usb_device *udev,
+					enum usb3_link_state state)
+{
+	unsigned int sel;		/* us */
+	int i, j;
+
+	if (state == USB3_LPM_U1)
+		sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
+	else if (state == USB3_LPM_U2)
+		sel = DIV_ROUND_UP(udev->u2_params.sel, 1000);
+	else
+		return false;
+
+	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
+		struct usb_interface *intf;
+		struct usb_endpoint_descriptor *desc;
+		unsigned int interval;
+
+		intf = udev->actconfig->interface[i];
+		if (!intf)
+			continue;
+
+		for (j = 0; j < intf->cur_altsetting->desc.bNumEndpoints; j++) {
+			desc = &intf->cur_altsetting->endpoint[j].desc;
+
+			if (usb_endpoint_xfer_int(desc) ||
+			    usb_endpoint_xfer_isoc(desc)) {
+				interval = (1 << (desc->bInterval - 1)) * 125;
+				if (sel + 125 > interval)
+					return false;
+			}
+		}
+	}
+	return true;
+}
+
 /*
  * Enable the hub-initiated U1/U2 idle timeouts, and enable device-initiated
  * U1/U2 entry.
@@ -4112,20 +4157,23 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
 	 * U1/U2_ENABLE
 	 */
 	if (udev->actconfig &&
-	    usb_set_device_initiated_lpm(udev, state, true) == 0) {
-		if (state == USB3_LPM_U1)
-			udev->usb3_lpm_u1_enabled = 1;
-		else if (state == USB3_LPM_U2)
-			udev->usb3_lpm_u2_enabled = 1;
-	} else {
-		/* Don't request U1/U2 entry if the device
-		 * cannot transition to U1/U2.
-		 */
-		usb_set_lpm_timeout(udev, state, 0);
-		hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
+	    usb_device_may_initiate_lpm(udev, state)) {
+		if (usb_set_device_initiated_lpm(udev, state, true)) {
+			/*
+			 * Request to enable device initiated U1/U2 failed,
+			 * better to turn off lpm in this case.
+			 */
+			usb_set_lpm_timeout(udev, state, 0);
+			hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
+			return;
+		}
 	}
-}
 
+	if (state == USB3_LPM_U1)
+		udev->usb3_lpm_u1_enabled = 1;
+	else if (state == USB3_LPM_U2)
+		udev->usb3_lpm_u2_enabled = 1;
+}
 /*
  * Disable the hub-initiated U1/U2 idle timeouts, and disable device-initiated
  * U1/U2 entry.
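usb_device_may_initiate_lpm() above denies device-initiated LPM when the exit latency plus one 125 us bus interval does not fit in a periodic endpoint's service interval, the interval being decoded as 2^(bInterval-1) microframes. A standalone sketch of that comparison (values invented):

#include <stdio.h>

/* Returns 1 if a device with the given exit latency (us) may initiate
 * LPM given one periodic endpoint's bInterval, 0 otherwise.  Mirrors the
 * "sel + one bus interval must fit in the service interval" rule.
 */
static int may_initiate_lpm(unsigned int sel_us, unsigned char bInterval)
{
	unsigned int interval = (1u << (bInterval - 1)) * 125;	/* us */

	return sel_us + 125 <= interval;
}

int main(void)
{
	printf("sel=400us, bInterval=4 -> %s\n",
	       may_initiate_lpm(400, 4) ? "allow" : "deny");	/* 1000us */
	printf("sel=400us, bInterval=2 -> %s\n",
	       may_initiate_lpm(400, 2) ? "allow" : "deny");	/* 250us */
	return 0;
}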
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -502,10 +502,6 @@ static const struct usb_device_id usb_quirk_list[] = {
 	/* DJI CineSSD */
 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
 
-	/* Fibocom L850-GL LTE Modem */
-	{ USB_DEVICE(0x2cb7, 0x0007), .driver_info =
-			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
-
 	/* INTEL VALUE SSD */
 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
 
--- a/drivers/usb/dwc2/gadget.c
+++ b/drivers/usb/dwc2/gadget.c
@@ -2749,12 +2749,14 @@ static void dwc2_hsotg_complete_in(struct dwc2_hsotg *hsotg,
 		return;
 	}
 
-	/* Zlp for all endpoints, for ep0 only in DATA IN stage */
+	/* Zlp for all endpoints in non DDMA, for ep0 only in DATA IN stage */
 	if (hs_ep->send_zlp) {
-		dwc2_hsotg_program_zlp(hsotg, hs_ep);
 		hs_ep->send_zlp = 0;
-		/* transfer will be completed on next complete interrupt */
-		return;
+		if (!using_desc_dma(hsotg)) {
+			dwc2_hsotg_program_zlp(hsotg, hs_ep);
+			/* transfer will be completed on next complete interrupt */
+			return;
+		}
 	}
 
 	if (hs_ep->index == 0 && hsotg->ep0_state == DWC2_EP0_DATA_IN) {
@@ -3900,9 +3902,27 @@ static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
 				__func__);
 		}
 	} else {
+		/* Mask GINTSTS_GOUTNAKEFF interrupt */
+		dwc2_hsotg_disable_gsint(hsotg, GINTSTS_GOUTNAKEFF);
+
 		if (!(dwc2_readl(hsotg, GINTSTS) & GINTSTS_GOUTNAKEFF))
 			dwc2_set_bit(hsotg, DCTL, DCTL_SGOUTNAK);
 
+		if (!using_dma(hsotg)) {
+			/* Wait for GINTSTS_RXFLVL interrupt */
+			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
+						    GINTSTS_RXFLVL, 100)) {
+				dev_warn(hsotg->dev, "%s: timeout GINTSTS.RXFLVL\n",
+					 __func__);
+			} else {
+				/*
+				 * Pop GLOBAL OUT NAK status packet from RxFIFO
+				 * to assert GOUTNAKEFF interrupt
+				 */
+				dwc2_readl(hsotg, GRXSTSP);
+			}
+		}
+
 		/* Wait for global nak to take effect */
 		if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
 					    GINTSTS_GOUTNAKEFF, 100))
@@ -4348,6 +4368,9 @@ static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now)
 	epctl = dwc2_readl(hs, epreg);
 
 	if (value) {
+		/* Unmask GOUTNAKEFF interrupt */
+		dwc2_hsotg_en_gsint(hs, GINTSTS_GOUTNAKEFF);
+
 		if (!(dwc2_readl(hs, GINTSTS) & GINTSTS_GOUTNAKEFF))
 			dwc2_set_bit(hs, DCTL, DCTL_SGOUTNAK);
 		// STALL bit will be set in GOUTNAKEFF interrupt handler
--- a/drivers/usb/gadget/udc/tegra-xudc.c
+++ b/drivers/usb/gadget/udc/tegra-xudc.c
@@ -3861,6 +3861,7 @@ static int tegra_xudc_probe(struct platform_device *pdev)
 	return 0;
 
 free_eps:
+	pm_runtime_disable(&pdev->dev);
 	tegra_xudc_free_eps(xudc);
free_event_ring:
 	tegra_xudc_free_event_ring(xudc);
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -703,7 +703,8 @@ EXPORT_SYMBOL_GPL(ehci_setup);
 static irqreturn_t ehci_irq (struct usb_hcd *hcd)
 {
 	struct ehci_hcd		*ehci = hcd_to_ehci (hcd);
-	u32			status, masked_status, pcd_status = 0, cmd;
+	u32			status, current_status, masked_status, pcd_status = 0;
+	u32			cmd;
 	int			bh;
 	unsigned long		flags;
 
@@ -715,19 +716,22 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
 	 */
 	spin_lock_irqsave(&ehci->lock, flags);
 
-	status = ehci_readl(ehci, &ehci->regs->status);
+	status = 0;
+	current_status = ehci_readl(ehci, &ehci->regs->status);
+restart:
 
 	/* e.g. cardbus physical eject */
-	if (status == ~(u32) 0) {
+	if (current_status == ~(u32) 0) {
 		ehci_dbg (ehci, "device removed\n");
 		goto dead;
 	}
+	status |= current_status;
 
 	/*
 	 * We don't use STS_FLR, but some controllers don't like it to
 	 * remain on, so mask it out along with the other status bits.
 	 */
-	masked_status = status & (INTR_MASK | STS_FLR);
+	masked_status = current_status & (INTR_MASK | STS_FLR);
 
 	/* Shared IRQ? */
 	if (!masked_status || unlikely(ehci->rh_state == EHCI_RH_HALTED)) {
@@ -737,6 +741,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
 
 	/* clear (just) interrupts */
 	ehci_writel(ehci, masked_status, &ehci->regs->status);
+
+	/* For edge interrupts, don't race with an interrupt bit being raised */
+	current_status = ehci_readl(ehci, &ehci->regs->status);
+	if (current_status & INTR_MASK)
+		goto restart;
+
 	cmd = ehci_readl(ehci, &ehci->regs->command);
 	bh = 0;
 
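The ehci_irq() change accumulates status across re-reads: after acknowledging what it saw, the handler reads the register again and restarts if a new edge bit appeared in the window. A toy model of the accumulate-and-restart loop with a simulated register (this is not HCD code; the event sequence is invented):

#include <stdio.h>
#include <stdint.h>

#define INTR_MASK 0x3F

/* simulated status register: a new edge interrupt fires during the
 * first acknowledgement, which a single-read handler would drop
 */
static uint32_t hw_events[] = { 0x01, 0x02, 0x00 };
static int hw_idx;

static uint32_t read_status(void) { return hw_events[hw_idx]; }
static void ack_status(uint32_t bits) { (void)bits; hw_idx++; }

int main(void)
{
	uint32_t status = 0, current_status = read_status();

restart:
	status |= current_status;
	ack_status(current_status & INTR_MASK);

	/* don't race with an interrupt bit raised between read and ack:
	 * re-read and loop if anything new is pending
	 */
	current_status = read_status();
	if (current_status & INTR_MASK)
		goto restart;

	printf("handled bits: 0x%02x (expect 0x03)\n", status);
	return 0;
}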
--- a/drivers/usb/host/max3421-hcd.c
+++ b/drivers/usb/host/max3421-hcd.c
@@ -153,8 +153,6 @@ struct max3421_hcd {
 	 */
 	struct urb *curr_urb;
 	enum scheduling_pass sched_pass;
-	struct usb_device *loaded_dev;	/* dev that's loaded into the chip */
-	int loaded_epnum;		/* epnum whose toggles are loaded */
 	int urb_done;			/* > 0 -> no errors, < 0: errno */
 	size_t curr_len;
 	u8 hien;
@@ -492,39 +490,17 @@ max3421_set_speed(struct usb_hcd *hcd, struct usb_device *dev)
  * Caller must NOT hold HCD spinlock.
  */
 static void
-max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
-		    int force_toggles)
+max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum)
 {
-	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
-	int old_epnum, same_ep, rcvtog, sndtog;
-	struct usb_device *old_dev;
+	int rcvtog, sndtog;
 	u8 hctl;
 
-	old_dev = max3421_hcd->loaded_dev;
-	old_epnum = max3421_hcd->loaded_epnum;
-
-	same_ep = (dev == old_dev && epnum == old_epnum);
-	if (same_ep && !force_toggles)
-		return;
-
-	if (old_dev && !same_ep) {
-		/* save the old end-points toggles: */
-		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
-
-		rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
-		sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
-
-		/* no locking: HCD (i.e., we) own toggles, don't we? */
-		usb_settoggle(old_dev, old_epnum, 0, rcvtog);
-		usb_settoggle(old_dev, old_epnum, 1, sndtog);
-	}
-
 	/* setup new endpoint's toggle bits: */
 	rcvtog = usb_gettoggle(dev, epnum, 0);
 	sndtog = usb_gettoggle(dev, epnum, 1);
 	hctl = (BIT(rcvtog + MAX3421_HCTL_RCVTOG0_BIT) |
 		BIT(sndtog + MAX3421_HCTL_SNDTOG0_BIT));
 
-	max3421_hcd->loaded_epnum = epnum;
 	spi_wr8(hcd, MAX3421_REG_HCTL, hctl);
 
 	/*
@@ -532,7 +508,6 @@ max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
 	 * address-assignment so it's best to just always load the
 	 * address whenever the end-point changed/was forced.
 	 */
-	max3421_hcd->loaded_dev = dev;
 	spi_wr8(hcd, MAX3421_REG_PERADDR, dev->devnum);
 }
 
@@ -667,7 +642,7 @@ max3421_select_and_start_urb(struct usb_hcd *hcd)
 	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
 	struct urb *urb, *curr_urb = NULL;
 	struct max3421_ep *max3421_ep;
-	int epnum, force_toggles = 0;
+	int epnum;
 	struct usb_host_endpoint *ep;
 	struct list_head *pos;
 	unsigned long flags;
@@ -777,7 +752,6 @@ done:
 			usb_settoggle(urb->dev, epnum, 0, 1);
 			usb_settoggle(urb->dev, epnum, 1, 1);
 			max3421_ep->pkt_state = PKT_STATE_SETUP;
-			force_toggles = 1;
 		} else
 			max3421_ep->pkt_state = PKT_STATE_TRANSFER;
 	}
@@ -785,7 +759,7 @@ done:
 	spin_unlock_irqrestore(&max3421_hcd->lock, flags);
 
 	max3421_ep->last_active = max3421_hcd->frame_number;
-	max3421_set_address(hcd, urb->dev, epnum, force_toggles);
+	max3421_set_address(hcd, urb->dev, epnum);
 	max3421_set_speed(hcd, urb->dev);
 	max3421_next_transfer(hcd, 0);
 	return 1;
@@ -1380,6 +1354,16 @@ max3421_urb_done(struct usb_hcd *hcd)
 	status = 0;
 	urb = max3421_hcd->curr_urb;
 	if (urb) {
+		/* save the old end-points toggles: */
+		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
+		int rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
+		int sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
+		int epnum = usb_endpoint_num(&urb->ep->desc);
+
+		/* no locking: HCD (i.e., we) own toggles, don't we? */
+		usb_settoggle(urb->dev, epnum, 0, rcvtog);
+		usb_settoggle(urb->dev, epnum, 1, sndtog);
+
 		max3421_hcd->curr_urb = NULL;
 		spin_lock_irqsave(&max3421_hcd->lock, flags);
 		usb_hcd_unlink_urb_from_ep(hcd, urb);
|
|||||||
* Inform the usbcore about resume-in-progress by returning
|
* Inform the usbcore about resume-in-progress by returning
|
||||||
* a non-zero value even if there are no status changes.
|
* a non-zero value even if there are no status changes.
|
||||||
*/
|
*/
|
||||||
|
spin_lock_irqsave(&xhci->lock, flags);
|
||||||
|
|
||||||
status = bus_state->resuming_ports;
|
status = bus_state->resuming_ports;
|
||||||
|
|
||||||
mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
|
mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
|
||||||
|
|
||||||
spin_lock_irqsave(&xhci->lock, flags);
|
|
||||||
/* For each port, did anything change? If so, set that bit in buf. */
|
/* For each port, did anything change? If so, set that bit in buf. */
|
||||||
for (i = 0; i < max_ports; i++) {
|
for (i = 0; i < max_ports; i++) {
|
||||||
temp = readl(ports[i]->addr);
|
temp = readl(ports[i]->addr);
|
||||||
|
--- a/drivers/usb/host/xhci-pci-renesas.c
+++ b/drivers/usb/host/xhci-pci-renesas.c
@@ -207,8 +207,7 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
 		return 0;
 
 	case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
-		dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
-		break;
+		return 0;
 
 	case RENESAS_ROM_STATUS_ERROR: /* Error State */
 	default: /* All other states are marked as "Reserved states" */
@@ -225,12 +224,13 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
 	u8 fw_state;
 	int err;
 
-	/*
-	 * Only if device has ROM and loaded FW we can skip loading and
-	 * return success. Otherwise (even unknown state), attempt to load FW.
-	 */
-	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
-		return 0;
+	/* Check if device has ROM and loaded, if so skip everything */
+	err = renesas_check_rom(pdev);
+	if (err) { /* we have rom */
+		err = renesas_check_rom_state(pdev);
+		if (!err)
+			return err;
+	}
 
 	/*
 	 * Test if the device is actually needing the firmware. As most
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -631,7 +631,14 @@ static const struct pci_device_id pci_ids[] = {
 	{ /* end: all zeroes */ }
 };
 MODULE_DEVICE_TABLE(pci, pci_ids);
 
+/*
+ * Without CONFIG_USB_XHCI_PCI_RENESAS renesas_xhci_check_request_fw() won't
+ * load firmware, so don't encumber the xhci-pci driver with it.
+ */
+#if IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS)
 MODULE_FIRMWARE("renesas_usb_fw.mem");
+#endif
 
 /* pci driver glue; this is a "new style" PCI driver module */
 static struct pci_driver xhci_pci_driver = {
--- a/drivers/usb/renesas_usbhs/fifo.c
+++ b/drivers/usb/renesas_usbhs/fifo.c
@@ -101,6 +101,8 @@ static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo,
 #define usbhsf_dma_map(p)	__usbhsf_dma_map_ctrl(p, 1)
 #define usbhsf_dma_unmap(p)	__usbhsf_dma_map_ctrl(p, 0)
 static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
+static void usbhsf_tx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
+static void usbhsf_rx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
 struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
 {
 	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
@@ -123,6 +125,11 @@ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
 		if (chan) {
 			dmaengine_terminate_all(chan);
 			usbhsf_dma_unmap(pkt);
+		} else {
+			if (usbhs_pipe_is_dir_in(pipe))
+				usbhsf_rx_irq_ctrl(pipe, 0);
+			else
+				usbhsf_tx_irq_ctrl(pipe, 0);
 		}
 
 		usbhs_pipe_clear_without_sequence(pipe, 0, 0);
--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -159,6 +159,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
 	{ USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */
 	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+	{ USB_DEVICE(0x10C4, 0x8A5B) }, /* CEL EM3588 ZigBee USB Stick */
 	{ USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
 	{ USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
@@ -206,8 +207,8 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
-	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 Display serial interface */
-	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 M.2 Key E serial interface */
+	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 M.2 Key E serial interface */
+	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 Display serial interface */
 	{ USB_DEVICE(0x199B, 0xBA30) },	/* LORD WSDA-200-USB */
 	{ USB_DEVICE(0x19CF, 0x3000) },	/* Parrot NMEA GPS Flight Recorder */
 	{ USB_DEVICE(0x1ADB, 0x0001) },	/* Schweitzer Engineering C662 Cable */
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -238,6 +238,7 @@ static void option_instat_callback(struct urb *urb);
 #define QUECTEL_PRODUCT_UC15			0x9090
 /* These u-blox products use Qualcomm's vendor ID */
 #define UBLOX_PRODUCT_R410M			0x90b2
+#define UBLOX_PRODUCT_R6XX			0x90fa
 /* These Yuga products use Qualcomm's vendor ID */
 #define YUGA_PRODUCT_CLM920_NC5			0x9625
 
@@ -1101,6 +1102,8 @@ static const struct usb_device_id option_ids[] = {
 	/* u-blox products using Qualcomm vendor ID */
 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
 	  .driver_info = RSVD(1) | RSVD(3) },
+	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
+	  .driver_info = RSVD(3) },
 	/* Quectel products using Quectel vendor ID */
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
 	  .driver_info = NUMEP2 },
--- a/drivers/usb/storage/unusual_uas.h
+++ b/drivers/usb/storage/unusual_uas.h
@@ -45,6 +45,13 @@ UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
 
+/* Reported-by: Julian Sikorski <belegdol@gmail.com> */
+UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+		"LaCie",
+		"Rugged USB3-FW",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_IGNORE_UAS),
+
 /*
  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
  * commands in UAS mode. Observed with the 1.28 firmware; are there others?
--- a/drivers/usb/typec/stusb160x.c
+++ b/drivers/usb/typec/stusb160x.c
@@ -739,10 +739,6 @@ static int stusb160x_probe(struct i2c_client *client)
 	typec_set_pwr_opmode(chip->port, chip->pwr_opmode);
 
 	if (client->irq) {
-		ret = stusb160x_irq_init(chip, client->irq);
-		if (ret)
-			goto port_unregister;
-
 		chip->role_sw = fwnode_usb_role_switch_get(fwnode);
 		if (IS_ERR(chip->role_sw)) {
 			ret = PTR_ERR(chip->role_sw);
@@ -752,6 +748,10 @@ static int stusb160x_probe(struct i2c_client *client)
 				ret);
 			goto port_unregister;
 		}
+
+		ret = stusb160x_irq_init(chip, client->irq);
+		if (ret)
+			goto role_sw_put;
 	} else {
 		/*
 		 * If Source or Dual power role, need to enable VDD supply
@@ -775,6 +775,9 @@ static int stusb160x_probe(struct i2c_client *client)
 
 	return 0;
 
+role_sw_put:
+	if (chip->role_sw)
+		usb_role_switch_put(chip->role_sw);
 port_unregister:
 	typec_unregister_port(chip->port);
 all_reg_disable:
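Moving stusb160x_irq_init() after the role-switch lookup means the new role_sw_put label must release the role switch before falling through to port unregistration, keeping teardown a mirror of setup. A generic model of that goto-unwind pattern (resources are stand-ins, not the driver's objects):

#include <stdio.h>

static int acquire(const char *what, int fail)
{
	if (fail) {
		printf("failed to acquire %s\n", what);
		return -1;
	}
	printf("acquired %s\n", what);
	return 0;
}

static void release(const char *what) { printf("released %s\n", what); }

int main(void)
{
	if (acquire("port", 0))
		goto out;
	if (acquire("role switch", 0))
		goto port_unregister;
	if (acquire("irq", 1))	/* simulate the late failure */
		goto role_sw_put;

	return 0;

	/* unwind strictly in reverse order of acquisition */
role_sw_put:
	release("role switch");
port_unregister:
	release("port");
out:
	return 1;
}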
--- a/fs/afs/cmservice.c
+++ b/fs/afs/cmservice.c
@@ -29,16 +29,11 @@ static void SRXAFSCB_TellMeAboutYourself(struct work_struct *);
 
 static int afs_deliver_yfs_cb_callback(struct afs_call *);
 
-#define CM_NAME(name) \
-	char afs_SRXCB##name##_name[] __tracepoint_string =	\
-		"CB." #name
-
 /*
  * CB.CallBack operation type
  */
-static CM_NAME(CallBack);
 static const struct afs_call_type afs_SRXCBCallBack = {
-	.name		= afs_SRXCBCallBack_name,
+	.name		= "CB.CallBack",
 	.deliver	= afs_deliver_cb_callback,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_CallBack,
@@ -47,9 +42,8 @@ static const struct afs_call_type afs_SRXCBCallBack = {
 /*
  * CB.InitCallBackState operation type
  */
-static CM_NAME(InitCallBackState);
 static const struct afs_call_type afs_SRXCBInitCallBackState = {
-	.name		= afs_SRXCBInitCallBackState_name,
+	.name		= "CB.InitCallBackState",
 	.deliver	= afs_deliver_cb_init_call_back_state,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_InitCallBackState,
@@ -58,9 +52,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState = {
 /*
  * CB.InitCallBackState3 operation type
  */
-static CM_NAME(InitCallBackState3);
 static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
-	.name		= afs_SRXCBInitCallBackState3_name,
+	.name		= "CB.InitCallBackState3",
 	.deliver	= afs_deliver_cb_init_call_back_state3,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_InitCallBackState,
@@ -69,9 +62,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
 /*
  * CB.Probe operation type
  */
-static CM_NAME(Probe);
 static const struct afs_call_type afs_SRXCBProbe = {
-	.name		= afs_SRXCBProbe_name,
+	.name		= "CB.Probe",
 	.deliver	= afs_deliver_cb_probe,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_Probe,
@@ -80,9 +72,8 @@ static const struct afs_call_type afs_SRXCBProbe = {
 /*
  * CB.ProbeUuid operation type
  */
-static CM_NAME(ProbeUuid);
 static const struct afs_call_type afs_SRXCBProbeUuid = {
-	.name		= afs_SRXCBProbeUuid_name,
+	.name		= "CB.ProbeUuid",
 	.deliver	= afs_deliver_cb_probe_uuid,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_ProbeUuid,
@@ -91,9 +82,8 @@ static const struct afs_call_type afs_SRXCBProbeUuid = {
 /*
  * CB.TellMeAboutYourself operation type
  */
-static CM_NAME(TellMeAboutYourself);
 static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
-	.name		= afs_SRXCBTellMeAboutYourself_name,
+	.name		= "CB.TellMeAboutYourself",
 	.deliver	= afs_deliver_cb_tell_me_about_yourself,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_TellMeAboutYourself,
@@ -102,9 +92,8 @@ static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
 /*
  * YFS CB.CallBack operation type
  */
-static CM_NAME(YFS_CallBack);
 static const struct afs_call_type afs_SRXYFSCB_CallBack = {
-	.name		= afs_SRXCBYFS_CallBack_name,
+	.name		= "YFSCB.CallBack",
 	.deliver	= afs_deliver_yfs_cb_callback,
 	.destructor	= afs_cm_destructor,
 	.work		= SRXAFSCB_CallBack,
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -5883,6 +5883,9 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
 	devices = &fs_info->fs_devices->devices;
 	list_for_each_entry(device, devices, dev_list) {
+		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+			continue;
+
 		ret = btrfs_trim_free_extents(device, &group_trimmed);
 		if (ret) {
 			dev_failed++;
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4401,7 +4401,7 @@ bool check_session_state(struct ceph_mds_session *s)
 		break;
 	case CEPH_MDS_SESSION_CLOSING:
 		/* Should never reach this when we're unmounting */
-		WARN_ON_ONCE(true);
+		WARN_ON_ONCE(s->s_ttl);
 		fallthrough;
 	case CEPH_MDS_SESSION_NEW:
 	case CEPH_MDS_SESSION_RESTARTING:
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -3466,7 +3466,7 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
 					     char *buf)
 {
 	struct cifs_io_parms io_parms = {0};
-	int nbytes;
+	int rc, nbytes;
 	struct kvec iov[2];
 
 	io_parms.netfid = cfile->fid.netfid;
@@ -3474,13 +3474,25 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
 	io_parms.tcon = tcon;
 	io_parms.persistent_fid = cfile->fid.persistent_fid;
 	io_parms.volatile_fid = cfile->fid.volatile_fid;
-	io_parms.offset = off;
-	io_parms.length = len;
 
-	/* iov[0] is reserved for smb header */
-	iov[1].iov_base = buf;
-	iov[1].iov_len = io_parms.length;
-	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
+	while (len) {
+		io_parms.offset = off;
+		io_parms.length = len;
+		if (io_parms.length > SMB2_MAX_BUFFER_SIZE)
+			io_parms.length = SMB2_MAX_BUFFER_SIZE;
+
+		/* iov[0] is reserved for smb header */
+		iov[1].iov_base = buf;
+		iov[1].iov_len = io_parms.length;
+		rc = SMB2_write(xid, &io_parms, &nbytes, iov, 1);
+		if (rc)
+			break;
+		if (nbytes > len)
+			return -EINVAL;
+		buf += nbytes;
+		off += nbytes;
+		len -= nbytes;
+	}
+	return rc;
 }
 
 static int smb3_simple_fallocate_range(unsigned int xid,
@@ -3504,11 +3516,6 @@ static int smb3_simple_fallocate_range(unsigned int xid,
 			(char **)&out_data, &out_data_len);
 	if (rc)
 		goto out;
-	/*
-	 * It is already all allocated
-	 */
-	if (out_data_len == 0)
-		goto out;
 
 	buf = kzalloc(1024 * 1024, GFP_KERNEL);
 	if (buf == NULL) {
@@ -3631,6 +3638,24 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
 		goto out;
 	}
 
+	if (keep_size == true) {
+		/*
+		 * We can not preallocate pages beyond the end of the file
+		 * in SMB2
+		 */
+		if (off >= i_size_read(inode)) {
+			rc = 0;
+			goto out;
+		}
+		/*
+		 * For fallocates that are partially beyond the end of file,
+		 * clamp len so we only fallocate up to the end of file.
+		 */
+		if (off + len > i_size_read(inode)) {
+			len = i_size_read(inode) - off;
+		}
+	}
+
 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
 		/*
 		 * At this point, we are trying to fallocate an internal
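The cifs fallocate path now writes in capped chunks and advances by the bytes the server actually accepted. A user-space model of the loop with a stub transport; the 64 KiB cap stands in for SMB2_MAX_BUFFER_SIZE and the stub's behavior is invented:

#include <stdio.h>
#include <stddef.h>

#define MAX_CHUNK (64 * 1024)	/* stand-in for SMB2_MAX_BUFFER_SIZE */

/* stub transport: pretends to write up to MAX_CHUNK bytes at a time */
static int stub_write(size_t off, size_t len, size_t *nbytes)
{
	(void)off;
	*nbytes = len;		/* a real server may write less */
	return 0;
}

static int write_range(size_t off, size_t len)
{
	size_t nbytes;
	int rc = 0;

	while (len) {
		size_t chunk = len > MAX_CHUNK ? MAX_CHUNK : len;

		rc = stub_write(off, chunk, &nbytes);
		if (rc)
			break;
		if (nbytes > len)
			return -1;	/* server wrote more than asked: bail */
		off += nbytes;
		len -= nbytes;
		printf("wrote %zu, %zu left\n", nbytes, len);
	}
	return rc;
}

int main(void)
{
	return write_range(0, 150 * 1024) ? 1 : 0;
}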
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -77,7 +77,7 @@ enum hugetlb_param {
 static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
 	fsparam_u32   ("gid",		Opt_gid),
 	fsparam_string("min_size",	Opt_min_size),
-	fsparam_u32   ("mode",		Opt_mode),
+	fsparam_u32oct("mode",		Opt_mode),
 	fsparam_string("nr_inodes",	Opt_nr_inodes),
 	fsparam_string("pagesize",	Opt_pagesize),
 	fsparam_string("size",		Opt_size),
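The hugetlbfs one-liner switches the mode= parser to octal, which is how mode strings like 0755 are meant to be read; parsed as decimal, 755 yields unrelated permission bits. A quick illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *opt = "0755";	/* typical mode= mount option */
	unsigned long dec = strtoul(opt, NULL, 10);
	unsigned long oct = strtoul(opt, NULL, 8);

	/* only the octal reading produces rwxr-xr-x (0755) */
	printf("as decimal: %#lo (wrong bits)\n", dec);
	printf("as octal:   %#lo (expected)\n", oct);
	return 0;
}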
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4916,6 +4916,7 @@ static int io_connect(struct io_kiocb *req, bool force_nonblock,
 struct io_poll_table {
 	struct poll_table_struct pt;
 	struct io_kiocb *req;
+	int nr_entries;
 	int error;
 };
 
@@ -5098,11 +5099,11 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
 	struct io_kiocb *req = pt->req;
 
 	/*
-	 * If poll->head is already set, it's because the file being polled
-	 * uses multiple waitqueues for poll handling (eg one for read, one
-	 * for write). Setup a separate io_poll_iocb if this happens.
+	 * The file being polled uses multiple waitqueues for poll handling
+	 * (e.g. one for read, one for write). Setup a separate io_poll_iocb
+	 * if this happens.
 	 */
-	if (unlikely(poll->head)) {
+	if (unlikely(pt->nr_entries)) {
 		struct io_poll_iocb *poll_one = poll;
 
 		/* already have a 2nd entry, fail a third attempt */
@@ -5124,7 +5125,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
 		*poll_ptr = poll;
 	}
 
-	pt->error = 0;
+	pt->nr_entries++;
 	poll->head = head;
 
 	if (poll->events & EPOLLEXCLUSIVE)
@@ -5210,11 +5211,16 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
 
 	ipt->pt._key = mask;
 	ipt->req = req;
-	ipt->error = -EINVAL;
+	ipt->error = 0;
+	ipt->nr_entries = 0;
 
 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+	if (unlikely(!ipt->nr_entries) && !ipt->error)
+		ipt->error = -EINVAL;
 
 	spin_lock_irq(&ctx->completion_lock);
+	if (ipt->error)
+		io_poll_remove_double(req);
 	if (likely(poll->head)) {
 		spin_lock(&poll->head->lock);
 		if (unlikely(list_empty(&poll->wait.entry))) {
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -856,7 +856,7 @@ static ssize_t mem_rw(struct file *file, char __user *buf,
 	flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);
 
 	while (count > 0) {
-		int this_len = min_t(int, count, PAGE_SIZE);
+		size_t this_len = min_t(size_t, count, PAGE_SIZE);
 
 		if (write && copy_from_user(page, buf, this_len)) {
 			copied = -EFAULT;
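Widening this_len to size_t matters once count can exceed INT_MAX: min_t(int, ...) casts both sides to a signed 32-bit int, so the clamp can come out negative instead of capping at one page. A sketch of the difference (PAGE_SIZE assumed 4096; the negative result relies on the usual wrapping conversion on common LP64 targets):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	size_t count = 0x80000000UL;	/* > INT_MAX */

	int bad = min_t(int, count, PAGE_SIZE);	/* (int)count wraps negative */
	size_t good = min_t(size_t, count, PAGE_SIZE);

	printf("min_t(int, ...)    = %d\n", bad);	/* negative */
	printf("min_t(size_t, ...) = %zu\n", good);	/* 4096 */
	return 0;
}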
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1242,23 +1242,21 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
 }
 
 static __always_inline int validate_range(struct mm_struct *mm,
-					  __u64 *start, __u64 len)
+					  __u64 start, __u64 len)
 {
 	__u64 task_size = mm->task_size;
 
-	*start = untagged_addr(*start);
-
-	if (*start & ~PAGE_MASK)
+	if (start & ~PAGE_MASK)
 		return -EINVAL;
 	if (len & ~PAGE_MASK)
 		return -EINVAL;
 	if (!len)
 		return -EINVAL;
-	if (*start < mmap_min_addr)
+	if (start < mmap_min_addr)
 		return -EINVAL;
-	if (*start >= task_size)
+	if (start >= task_size)
 		return -EINVAL;
-	if (len > task_size - *start)
+	if (len > task_size - start)
 		return -EINVAL;
 	return 0;
 }
@@ -1318,7 +1316,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		vm_flags |= VM_UFFD_MINOR;
 	}
 
-	ret = validate_range(mm, &uffdio_register.range.start,
+	ret = validate_range(mm, uffdio_register.range.start,
 			     uffdio_register.range.len);
 	if (ret)
 		goto out;
@@ -1527,7 +1525,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 	if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
 		goto out;
 
-	ret = validate_range(mm, &uffdio_unregister.start,
+	ret = validate_range(mm, uffdio_unregister.start,
 			     uffdio_unregister.len);
 	if (ret)
 		goto out;
@@ -1679,7 +1677,7 @@ static int userfaultfd_wake(struct userfaultfd_ctx *ctx,
 	if (copy_from_user(&uffdio_wake, buf, sizeof(uffdio_wake)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_wake.start, uffdio_wake.len);
+	ret = validate_range(ctx->mm, uffdio_wake.start, uffdio_wake.len);
 	if (ret)
 		goto out;
 
@@ -1719,7 +1717,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 			   sizeof(uffdio_copy)-sizeof(__s64)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_copy.dst, uffdio_copy.len);
+	ret = validate_range(ctx->mm, uffdio_copy.dst, uffdio_copy.len);
 	if (ret)
 		goto out;
 	/*
@@ -1776,7 +1774,7 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 			   sizeof(uffdio_zeropage)-sizeof(__s64)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_zeropage.range.start,
+	ret = validate_range(ctx->mm, uffdio_zeropage.range.start,
 			     uffdio_zeropage.range.len);
 	if (ret)
 		goto out;
@@ -1826,7 +1824,7 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 			   sizeof(struct uffdio_writeprotect)))
 		return -EFAULT;
 
-	ret = validate_range(ctx->mm, &uffdio_wp.range.start,
+	ret = validate_range(ctx->mm, uffdio_wp.range.start,
 			     uffdio_wp.range.len);
 	if (ret)
 		return ret;
@@ -68,6 +68,7 @@ typedef int drm_ioctl_compat_t(struct file *filp, unsigned int cmd,
 			       unsigned long arg);
 
 #define DRM_IOCTL_NR(n)		_IOC_NR(n)
+#define DRM_IOCTL_TYPE(n)	_IOC_TYPE(n)
 #define DRM_MAJOR       226
 
 /**
@@ -207,7 +207,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
  */
 #define for_each_mem_range(i, p_start, p_end) \
 	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,	\
-			     MEMBLOCK_NONE, p_start, p_end, NULL)
+			     MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
 
 /**
  * for_each_mem_range_rev - reverse iterate through memblock areas from
@@ -218,7 +218,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
  */
 #define for_each_mem_range_rev(i, p_start, p_end) \
 	__for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE, \
-				 MEMBLOCK_NONE, p_start, p_end, NULL)
+				 MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
 
 /**
  * for_each_reserved_mem_range - iterate over all reserved memblock areas
@@ -4157,6 +4157,9 @@ enum skb_ext_id {
 #endif
 #if IS_ENABLED(CONFIG_MPTCP)
 	SKB_EXT_MPTCP,
+#endif
+#if IS_ENABLED(CONFIG_KCOV)
+	SKB_EXT_KCOV_HANDLE,
 #endif
 	SKB_EXT_NUM, /* must be last */
 };
@@ -4612,5 +4615,35 @@ static inline void skb_reset_redirect(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_KCOV
+static inline void skb_set_kcov_handle(struct sk_buff *skb,
+				       const u64 kcov_handle)
+{
+	/* Do not allocate skb extensions only to set kcov_handle to zero
+	 * (as it is zero by default). However, if the extensions are
+	 * already allocated, update kcov_handle anyway since
+	 * skb_set_kcov_handle can be called to zero a previously set
+	 * value.
+	 */
+	if (skb_has_extensions(skb) || kcov_handle) {
+		u64 *kcov_handle_ptr = skb_ext_add(skb, SKB_EXT_KCOV_HANDLE);
+
+		if (kcov_handle_ptr)
+			*kcov_handle_ptr = kcov_handle;
+	}
+}
+
+static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
+{
+	u64 *kcov_handle = skb_ext_find(skb, SKB_EXT_KCOV_HANDLE);
+
+	return kcov_handle ? *kcov_handle : 0;
+}
+#else
+static inline void skb_set_kcov_handle(struct sk_buff *skb,
+				       const u64 kcov_handle) { }
+static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
+#endif /* CONFIG_KCOV */
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
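The helpers above avoid allocating an skb extension just to store the default value of zero, while still letting a previously set handle be cleared. A user-space analogue of that lazy-storage pattern (the toy struct and malloc-based slot are invented for illustration; real skb extensions are refcounted):

    #include <stdint.h>
    #include <stdlib.h>
    #include <stdio.h>

    /* Toy packet with a lazily allocated "extension" slot. */
    struct pkt {
        uint64_t *kcov_handle;
    };

    static void pkt_set_kcov_handle(struct pkt *p, uint64_t handle)
    {
        /* Mirror the kernel comment: don't allocate storage just to hold
         * zero, but do update existing storage so a previously set value
         * can be cleared. */
        if (!p->kcov_handle && !handle)
            return;
        if (!p->kcov_handle)
            p->kcov_handle = malloc(sizeof(*p->kcov_handle));
        if (p->kcov_handle)
            *p->kcov_handle = handle;
    }

    static uint64_t pkt_get_kcov_handle(const struct pkt *p)
    {
        return p->kcov_handle ? *p->kcov_handle : 0;
    }

    int main(void)
    {
        struct pkt p = { 0 };

        pkt_set_kcov_handle(&p, 0);   /* no allocation happens */
        printf("%llu\n", (unsigned long long)pkt_get_kcov_handle(&p)); /* 0 */
        pkt_set_kcov_handle(&p, 42);  /* allocates, stores 42 */
        printf("%llu\n", (unsigned long long)pkt_get_kcov_handle(&p)); /* 42 */
        free(p.kcov_handle);
        return 0;
    }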
@@ -199,6 +199,11 @@ struct bond_up_slave {
  */
 #define BOND_LINK_NOCHANGE -1
 
+struct bond_ipsec {
+	struct list_head list;
+	struct xfrm_state *xs;
+};
+
 /*
  * Here are the locking policies for the two bonding locks:
  * Get rcu_read_lock when reading or RTNL when writing slave list.
@@ -247,7 +252,9 @@ struct bonding {
 #endif /* CONFIG_DEBUG_FS */
 	struct rtnl_link_stats64 bond_stats;
 #ifdef CONFIG_XFRM_OFFLOAD
-	struct xfrm_state *xs;
+	struct list_head ipsec_list;
+	/* protecting ipsec_list */
+	spinlock_t ipsec_lock;
 #endif /* CONFIG_XFRM_OFFLOAD */
 };
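The bonding hunk replaces the single cached xfrm_state pointer with a spinlock-protected list, so adding a second offloaded SA no longer silently drops the first. A minimal user-space analogue of the pattern (names and types invented):

    #include <pthread.h>
    #include <stdlib.h>

    struct sa_entry {
        struct sa_entry *next;
        void *xs;                     /* stand-in for struct xfrm_state * */
    };

    struct bond_state {
        struct sa_entry *ipsec_list;  /* every offloaded SA, not just the last */
        pthread_mutex_t ipsec_lock;   /* protects ipsec_list */
    };

    /* With a single-pointer field, a second add would overwrite the first
     * SA; keeping a list preserves them all. */
    static int bond_add_sa(struct bond_state *bond, void *xs)
    {
        struct sa_entry *e = malloc(sizeof(*e));

        if (!e)
            return -1;
        e->xs = xs;
        pthread_mutex_lock(&bond->ipsec_lock);
        e->next = bond->ipsec_list;
        bond->ipsec_list = e;
        pthread_mutex_unlock(&bond->ipsec_lock);
        return 0;
    }

    int main(void)
    {
        struct bond_state bond = { NULL, PTHREAD_MUTEX_INITIALIZER };
        int sa1, sa2;

        bond_add_sa(&bond, &sa1);
        bond_add_sa(&bond, &sa2);     /* both entries survive */
        return 0;
    }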
@@ -174,6 +174,34 @@ enum afs_vl_operation {
 	afs_VL_GetCapabilities		= 65537,	/* AFS Get VL server capabilities */
 };
 
+enum afs_cm_operation {
+	afs_CB_CallBack			= 204,	/* AFS break callback promises */
+	afs_CB_InitCallBackState	= 205,	/* AFS initialise callback state */
+	afs_CB_Probe			= 206,	/* AFS probe client */
+	afs_CB_GetLock			= 207,	/* AFS get contents of CM lock table */
+	afs_CB_GetCE			= 208,	/* AFS get cache file description */
+	afs_CB_GetXStatsVersion		= 209,	/* AFS get version of extended statistics */
+	afs_CB_GetXStats		= 210,	/* AFS get contents of extended statistics data */
+	afs_CB_InitCallBackState3	= 213,	/* AFS initialise callback state, version 3 */
+	afs_CB_ProbeUuid		= 214,	/* AFS check the client hasn't rebooted */
+};
+
+enum yfs_cm_operation {
+	yfs_CB_Probe			= 206,	/* YFS probe client */
+	yfs_CB_GetLock			= 207,	/* YFS get contents of CM lock table */
+	yfs_CB_XStatsVersion		= 209,	/* YFS get version of extended statistics */
+	yfs_CB_GetXStats		= 210,	/* YFS get contents of extended statistics data */
+	yfs_CB_InitCallBackState3	= 213,	/* YFS initialise callback state, version 3 */
+	yfs_CB_ProbeUuid		= 214,	/* YFS check the client hasn't rebooted */
+	yfs_CB_GetServerPrefs		= 215,
+	yfs_CB_GetCellServDV		= 216,
+	yfs_CB_GetLocalCell		= 217,
+	yfs_CB_GetCacheConfig		= 218,
+	yfs_CB_GetCellByNum		= 65537,
+	yfs_CB_TellMeAboutYourself	= 65538,	/* get client capabilities */
+	yfs_CB_CallBack			= 64204,
+};
+
 enum afs_edit_dir_op {
 	afs_edit_dir_create,
 	afs_edit_dir_create_error,
@@ -435,6 +463,32 @@ enum afs_cb_break_reason {
 	EM(afs_YFSVL_GetCellName,	"YFSVL.GetCellName") \
 	E_(afs_VL_GetCapabilities,	"VL.GetCapabilities")
 
+#define afs_cm_operations \
+	EM(afs_CB_CallBack,		"CB.CallBack") \
+	EM(afs_CB_InitCallBackState,	"CB.InitCallBackState") \
+	EM(afs_CB_Probe,		"CB.Probe") \
+	EM(afs_CB_GetLock,		"CB.GetLock") \
+	EM(afs_CB_GetCE,		"CB.GetCE") \
+	EM(afs_CB_GetXStatsVersion,	"CB.GetXStatsVersion") \
+	EM(afs_CB_GetXStats,		"CB.GetXStats") \
+	EM(afs_CB_InitCallBackState3,	"CB.InitCallBackState3") \
+	E_(afs_CB_ProbeUuid,		"CB.ProbeUuid")
+
+#define yfs_cm_operations \
+	EM(yfs_CB_Probe,		"YFSCB.Probe") \
+	EM(yfs_CB_GetLock,		"YFSCB.GetLock") \
+	EM(yfs_CB_XStatsVersion,	"YFSCB.XStatsVersion") \
+	EM(yfs_CB_GetXStats,		"YFSCB.GetXStats") \
+	EM(yfs_CB_InitCallBackState3,	"YFSCB.InitCallBackState3") \
+	EM(yfs_CB_ProbeUuid,		"YFSCB.ProbeUuid") \
+	EM(yfs_CB_GetServerPrefs,	"YFSCB.GetServerPrefs") \
+	EM(yfs_CB_GetCellServDV,	"YFSCB.GetCellServDV") \
+	EM(yfs_CB_GetLocalCell,		"YFSCB.GetLocalCell") \
+	EM(yfs_CB_GetCacheConfig,	"YFSCB.GetCacheConfig") \
+	EM(yfs_CB_GetCellByNum,		"YFSCB.GetCellByNum") \
+	EM(yfs_CB_TellMeAboutYourself,	"YFSCB.TellMeAboutYourself") \
+	E_(yfs_CB_CallBack,		"YFSCB.CallBack")
+
 #define afs_edit_dir_ops \
 	EM(afs_edit_dir_create,		"create") \
 	EM(afs_edit_dir_create_error,	"c_fail") \
@@ -567,6 +621,8 @@ afs_server_traces;
 afs_cell_traces;
 afs_fs_operations;
 afs_vl_operations;
+afs_cm_operations;
+yfs_cm_operations;
 afs_edit_dir_ops;
 afs_edit_dir_reasons;
 afs_eproto_causes;
@@ -647,20 +703,21 @@ TRACE_EVENT(afs_cb_call,
 
 	    TP_STRUCT__entry(
 		    __field(unsigned int,	call		)
-		    __field(const char *,	name		)
 		    __field(u32,		op		)
+		    __field(u16,		service_id	)
 			     ),
 
 	    TP_fast_assign(
 		    __entry->call	= call->debug_id;
-		    __entry->name	= call->type->name;
 		    __entry->op		= call->operation_ID;
+		    __entry->service_id	= call->service_id;
 			   ),
 
-	    TP_printk("c=%08x %s o=%u",
+	    TP_printk("c=%08x %s",
 		      __entry->call,
-		      __entry->name,
-		      __entry->op)
+		      __entry->service_id == 2501 ?
+		      __print_symbolic(__entry->op, yfs_cm_operations) :
+		      __print_symbolic(__entry->op, afs_cm_operations))
 	    );
 
 TRACE_EVENT(afs_call,
@@ -3356,6 +3356,8 @@ continue_func:
 	if (tail_call_reachable)
 		for (j = 0; j < frame; j++)
 			subprog[ret_prog[j]].tail_call_reachable = true;
+	if (subprog[0].tail_call_reachable)
+		env->prog->aux->tail_call_reachable = true;
 
 	/* end of for() loop means the last insn of the 'subprog'
 	 * was reached. Doesn't matter whether it was JA or EXIT
@@ -5,6 +5,13 @@
  */
 #include <linux/dma-map-ops.h>
 
+static struct page *dma_common_vaddr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -33,6 +40,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long off = vma->vm_pgoff;
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
@@ -44,7 +52,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(page) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;
@@ -991,6 +991,11 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
 	if (!p)
 		goto out;
 
+	/* Protect timer list r/w in arm_timer() */
+	sighand = lock_task_sighand(p, &flags);
+	if (unlikely(sighand == NULL))
+		goto out;
+
 	/*
 	 * Fetch the current sample and update the timer's expiry time.
 	 */
@@ -1001,11 +1006,6 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
 
 	bump_cpu_timer(timer, now);
 
-	/* Protect timer list r/w in arm_timer() */
-	sighand = lock_task_sighand(p, &flags);
-	if (unlikely(sighand == NULL))
-		goto out;
-
 	/*
 	 * Now re-arm for the new expiry time.
 	 */
@@ -212,6 +212,7 @@ struct timer_base {
 	unsigned int		cpu;
 	bool			next_expiry_recalc;
 	bool			is_idle;
+	bool			timers_pending;
 	DECLARE_BITMAP(pending_map, WHEEL_SIZE);
 	struct hlist_head	vectors[WHEEL_SIZE];
 } ____cacheline_aligned;
@@ -601,6 +602,7 @@ static void enqueue_timer(struct timer_base *base, struct timer_list *timer,
 		 * can reevaluate the wheel:
 		 */
 		base->next_expiry = bucket_expiry;
+		base->timers_pending = true;
 		base->next_expiry_recalc = false;
 		trigger_dyntick_cpu(base, timer);
 	}
@@ -1581,6 +1583,7 @@ static unsigned long __next_timer_interrupt(struct timer_base *base)
 	}
 
 	base->next_expiry_recalc = false;
+	base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA);
 
 	return next;
 }
@@ -1632,7 +1635,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
 	u64 expires = KTIME_MAX;
 	unsigned long nextevt;
-	bool is_max_delta;
 
 	/*
 	 * Pretend that there is no timer pending if the cpu is offline.
@@ -1645,7 +1647,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
 	if (base->next_expiry_recalc)
 		base->next_expiry = __next_timer_interrupt(base);
 	nextevt = base->next_expiry;
-	is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
 
 	/*
 	 * We have a fresh next event. Check whether we can forward the
@@ -1663,7 +1664,7 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
 		expires = basem;
 		base->is_idle = false;
 	} else {
-		if (!is_max_delta)
+		if (base->timers_pending)
 			expires = basem + (u64)(nextevt - basej) * TICK_NSEC;
 		/*
 		 * If we expect to sleep more than a tick, mark the base idle.
@@ -1946,6 +1947,7 @@ int timers_prepare_cpu(unsigned int cpu)
 		base = per_cpu_ptr(&timer_bases[b], cpu);
 		base->clk = jiffies;
 		base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA;
+		base->timers_pending = false;
 		base->is_idle = false;
 	}
 	return 0;
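The timer hunks replace a sentinel comparison made at use time (nextevt == base->clk + NEXT_TIMER_MAX_DELTA) with a timers_pending flag recorded where the next event is actually computed, since base->clk may be forwarded in between. A rough user-space sketch of the idea; everything outside the flag handling is invented:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_DELTA ((1UL << 30) - 1)  /* sentinel distance, illustrative */

    struct base {
        unsigned long clk;
        unsigned long next_expiry;
        bool timers_pending;             /* recorded when next_expiry is computed */
    };

    /* Producer: note "no timers" explicitly instead of leaving it encoded
     * in the sentinel value of next_expiry. */
    static void recalc(struct base *b, unsigned long next)
    {
        b->timers_pending = (next != b->clk + MAX_DELTA);
        b->next_expiry = next;
    }

    /* Consumer: trust the recorded flag; re-deriving it here would go
     * wrong once clk has been forwarded after recalc() ran. */
    static unsigned long next_event(const struct base *b, unsigned long basej)
    {
        return b->timers_pending ? b->next_expiry - basej : MAX_DELTA;
    }

    int main(void)
    {
        struct base b = { .clk = 100 };

        recalc(&b, 100 + MAX_DELTA);     /* nothing pending */
        b.clk = 200;                     /* clk forwarded afterwards */
        printf("%lu\n", next_event(&b, 200));
        return 0;
    }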
Some files were not shown because too many files have changed in this diff.