Merge 5.10.83 into android-5.10
Changes in 5.10.83
bpf: Fix toctou on read-only map's constant scalar tracking
ACPI: Get acpi_device's parent from the parent field
USB: serial: option: add Telit LE910S1 0x9200 composition
USB: serial: option: add Fibocom FM101-GL variants
usb: dwc2: gadget: Fix ISOC flow for elapsed frames
usb: dwc2: hcd_queue: Fix use of floating point literal
usb: dwc3: gadget: Ignore NoStream after End Transfer
usb: dwc3: gadget: Check for L1/L2/U3 for Start Transfer
usb: dwc3: gadget: Fix null pointer exception
net: nexthop: fix null pointer dereference when IPv6 is not enabled
usb: chipidea: ci_hdrc_imx: fix potential error pointer dereference in probe
usb: typec: fusb302: Fix masking of comparator and bc_lvl interrupts
usb: hub: Fix usb enumeration issue due to address0 race
usb: hub: Fix locking issues with address0_mutex
binder: fix test regression due to sender_euid change
ALSA: ctxfi: Fix out-of-range access
ALSA: hda/realtek: Add quirk for ASRock NUC Box 1100
ALSA: hda/realtek: Fix LED on HP ProBook 435 G7
media: cec: copy sequence field for the reply
Revert "parisc: Fix backtrace to always include init funtion names"
HID: wacom: Use "Confidence" flag to prevent reporting invalid contacts
staging/fbtft: Fix backlight
staging: greybus: Add missing rwsem around snd_ctl_remove() calls
staging: rtl8192e: Fix use after free in _rtl92e_pci_disconnect()
fuse: release pipe buf after last use
xen: don't continue xenstore initialization in case of errors
xen: detect uninitialized xenbus in xenbus_init
KVM: PPC: Book3S HV: Prevent POWER7/8 TLB flush flushing SLB
tracing/uprobe: Fix uprobe_perf_open probes iteration
tracing: Fix pid filtering when triggers are attached
mmc: sdhci-esdhc-imx: disable CMDQ support
mmc: sdhci: Fix ADMA for PAGE_SIZE >= 64KiB
mdio: aspeed: Fix "Link is Down" issue
powerpc/32: Fix hardlockup on vmap stack overflow
PCI: aardvark: Deduplicate code in advk_pcie_rd_conf()
PCI: aardvark: Update comment about disabling link training
PCI: aardvark: Implement re-issuing config requests on CRS response
PCI: aardvark: Simplify initialization of rootcap on virtual bridge
PCI: aardvark: Fix link training
proc/vmcore: fix clearing user buffer by properly using clear_user()
netfilter: ctnetlink: fix filtering with CTA_TUPLE_REPLY
netfilter: ctnetlink: do not erase error code with EINVAL
netfilter: ipvs: Fix reuse connection if RS weight is 0
netfilter: flowtable: fix IPv6 tunnel addr match
ARM: dts: BCM5301X: Fix I2C controller interrupt
ARM: dts: BCM5301X: Add interrupt properties to GPIO node
ARM: dts: bcm2711: Fix PCIe interrupts
ASoC: qdsp6: q6routing: Conditionally reset FrontEnd Mixer
ASoC: qdsp6: q6asm: fix q6asm_dai_prepare error handling
ASoC: topology: Add missing rwsem around snd_ctl_remove() calls
ASoC: codecs: wcd934x: return error code correctly from hw_params
net: ieee802154: handle iftypes as u32
firmware: arm_scmi: pm: Propagate return value to caller
NFSv42: Don't fail clone() unless the OP_CLONE operation failed
ARM: socfpga: Fix crash with CONFIG_FORTIFY_SOURCE
drm/nouveau/acr: fix a couple NULL vs IS_ERR() checks
scsi: mpt3sas: Fix kernel panic during drive powercycle test
drm/vc4: fix error code in vc4_create_object()
net: marvell: prestera: fix double free issue on err path
iavf: Prevent changing static ITR values if adaptive moderation is on
ALSA: intel-dsp-config: add quirk for JSL devices based on ES8336 codec
mptcp: fix delack timer
firmware: smccc: Fix check for ARCH_SOC_ID not implemented
ipv6: fix typos in __ip6_finish_output()
nfp: checking parameter process for rx-usecs/tx-usecs is invalid
net: stmmac: fix system hang caused by eee_ctrl_timer during suspend/resume
net: stmmac: retain PTP clock time during SIOCSHWTSTAMP ioctls
net: ipv6: add fib6_nh_release_dsts stub
net: nexthop: release IPv6 per-cpu dsts when replacing a nexthop group
ice: fix vsi->txq_map sizing
ice: avoid bpf_prog refcount underflow
scsi: core: sysfs: Fix setting device state to SDEV_RUNNING
scsi: scsi_debug: Zero clear zones at reset write pointer
erofs: fix deadlock when shrink erofs slab
net/smc: Ensure the active closing peer first closes clcsock
mlxsw: Verify the accessed index doesn't exceed the array length
mlxsw: spectrum: Protect driver from buggy firmware
net: marvell: mvpp2: increase MTU limit when XDP enabled
nvmet-tcp: fix incomplete data digest send
net/ncsi : Add payload to be 32-bit aligned to fix dropped packets
PM: hibernate: use correct mode for swsusp_close()
drm/amd/display: Set plane update flags for all planes in reset
tcp_cubic: fix spurious Hystart ACK train detections for not-cwnd-limited flows
lan743x: fix deadlock in lan743x_phy_link_status_change()
net: phylink: Force link down and retrigger resolve on interface change
net: phylink: Force retrigger in case of latched link-fail indicator
net/smc: Fix NULL pointer dereferencing in smc_vlan_by_tcpsk()
net/smc: Fix loop in smc_listen
nvmet: use IOCB_NOWAIT only if the filesystem supports it
igb: fix netpoll exit with traffic
MIPS: loongson64: fix FTLB configuration
MIPS: use 3-level pgtable for 64KB page size on MIPS_VA_BITS_48
tls: splice_read: fix record type check
tls: fix replacing proto_ops
net/sched: sch_ets: don't peek at classes beyond 'nbands'
net: vlan: fix underflow for the real_dev refcnt
net/smc: Don't call clcsock shutdown twice when smc shutdown
net: hns3: fix VF RSS failed problem after PF enable multi-TCs
net: mscc: ocelot: don't downgrade timestamping RX filters in SIOCSHWTSTAMP
net: mscc: ocelot: correctly report the timestamping RX filters in ethtool
tcp: correctly handle increased zerocopy args struct size
sched/scs: Reset task stack state in bringup_cpu()
f2fs: set SBI_NEED_FSCK flag when inconsistent node block found
ceph: properly handle statfs on multifs setups
smb3: do not error on fsync when readonly
iommu/amd: Clarify AMD IOMMUv2 initialization messages
vhost/vsock: fix incorrect used length reported to the guest
tracing: Check pid filtering when creating events
xen: sync include/xen/interface/io/ring.h with Xen's newest version
xen/blkfront: read response from backend only once
xen/blkfront: don't take local copy of a request from the ring page
xen/blkfront: don't trust the backend response data blindly
xen/netfront: read response from backend only once
xen/netfront: don't read data from request on the ring page
xen/netfront: disentangle tx_skb_freelist
xen/netfront: don't trust the backend response data blindly
tty: hvc: replace BUG_ON() with negative return value
s390/mm: validate VMA in PGSTE manipulation functions
shm: extend forced shm destroy to support objects from several IPC nses
net: stmmac: platform: fix build warning when with !CONFIG_PM_SLEEP
drm/amdgpu/gfx9: switch to golden tsc registers for renoir+
Linux 5.10.83

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ief47fb0c545e95bd269b07477eacd4c8713f287d
@@ -37,8 +37,7 @@ conn_reuse_mode - INTEGER

0: disable any special handling on port reuse. The new
connection will be delivered to the same real server that was
servicing the previous connection. This will effectively
disable expire_nodest_conn.
servicing the previous connection.

bit 1: enable rescheduling of new connections when it is safe.
That is, whenever expire_nodest_conn and for TCP sockets, when
Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 82
SUBLEVEL = 83
EXTRAVERSION =
NAME = Dare mighty things
@@ -480,11 +480,17 @@
#address-cells = <3>;
#interrupt-cells = <1>;
#size-cells = <2>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie", "msi";
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &gicv2 GIC_SPI 144
IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &gicv2 GIC_SPI 145
IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &gicv2 GIC_SPI 146
IRQ_TYPE_LEVEL_HIGH>;
msi-controller;
msi-parent = <&pcie0>;
@@ -242,6 +242,8 @@

gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <2>;
};

pcie0: pcie@12000 {
@@ -408,7 +410,7 @@
i2c0: i2c@18009000 {
compatible = "brcm,iproc-i2c";
reg = <0x18009000 0x50>;
interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
clock-frequency = <100000>;
@@ -33,7 +33,7 @@ extern void __iomem *sdr_ctl_base_addr;
u32 socfpga_sdram_self_refresh(u32 sdr_base);
extern unsigned int socfpga_sdram_self_refresh_sz;

extern char secondary_trampoline, secondary_trampoline_end;
extern char secondary_trampoline[], secondary_trampoline_end[];

extern unsigned long socfpga_cpu1start_addr;
@@ -20,14 +20,14 @@

static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
int trampoline_size = &secondary_trampoline_end - &secondary_trampoline;
int trampoline_size = secondary_trampoline_end - secondary_trampoline;

if (socfpga_cpu1start_addr) {
/* This will put CPU #1 into reset. */
writel(RSTMGR_MPUMODRST_CPU1,
rst_manager_base_addr + SOCFPGA_RSTMGR_MODMPURST);

memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size);
memcpy(phys_to_virt(0), secondary_trampoline, trampoline_size);

writel(__pa_symbol(secondary_startup),
sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff));
@@ -45,12 +45,12 @@ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle)

static int socfpga_a10_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
int trampoline_size = &secondary_trampoline_end - &secondary_trampoline;
int trampoline_size = secondary_trampoline_end - secondary_trampoline;

if (socfpga_cpu1start_addr) {
writel(RSTMGR_MPUMODRST_CPU1, rst_manager_base_addr +
SOCFPGA_A10_RSTMGR_MODMPURST);
memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size);
memcpy(phys_to_virt(0), secondary_trampoline, trampoline_size);

writel(__pa_symbol(secondary_startup),
sys_manager_base_addr + (socfpga_cpu1start_addr & 0x00000fff));
@@ -3189,7 +3189,7 @@ config STACKTRACE_SUPPORT
config PGTABLE_LEVELS
int
default 4 if PAGE_SIZE_4KB && MIPS_VA_BITS_48
default 3 if 64BIT && !PAGE_SIZE_64KB
default 3 if 64BIT && (!PAGE_SIZE_64KB || MIPS_VA_BITS_48)
default 2

config MIPS_AUTO_PFN_OFFSET
@@ -1721,8 +1721,6 @@ static inline void decode_cpucfg(struct cpuinfo_mips *c)

static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);

/* All Loongson processors covered here define ExcCode 16 as GSExc. */
c->options |= MIPS_CPU_GSEXCEX;
@@ -1783,6 +1781,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
panic("Unknown Loongson Processor ID!");
break;
}

decode_configs(c);
}
#else
static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) { }
@@ -57,8 +57,6 @@ SECTIONS
{
. = KERNEL_BINARY_TEXT_START;

_stext = .; /* start of kernel text, includes init code & data */

__init_begin = .;
HEAD_TEXT_SECTION
MLONGCALL_DISCARD(INIT_TEXT_SECTION(8))
@@ -82,6 +80,7 @@ SECTIONS
/* freed after init ends here */

_text = .; /* Text and read-only data */
_stext = .;
MLONGCALL_KEEP(INIT_TEXT_SECTION(8))
.text ALIGN(PAGE_SIZE) : {
TEXT_TEXT
@@ -333,11 +333,11 @@ label:
mfspr r1, SPRN_SPRG_THREAD
lwz r1, TASK_CPU - THREAD(r1)
slwi r1, r1, 3
addis r1, r1, emergency_ctx@ha
addis r1, r1, emergency_ctx-PAGE_OFFSET@ha
#else
lis r1, emergency_ctx@ha
lis r1, emergency_ctx-PAGE_OFFSET@ha
#endif
lwz r1, emergency_ctx@l(r1)
lwz r1, emergency_ctx-PAGE_OFFSET@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2
SAVE_NVGPRS(r11)
@@ -867,6 +867,7 @@ static void flush_guest_tlb(struct kvm *kvm)
"r" (0) : "memory");
}
asm volatile("ptesync": : :"memory");
// POWER9 congruence-class TLBIEL leaves ERAT. Flush it now.
asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST : : :"memory");
} else {
for (set = 0; set < kvm->arch.tlb_sets; ++set) {
@@ -877,7 +878,9 @@ static void flush_guest_tlb(struct kvm *kvm)
rb += PPC_BIT(51); /* increment set number */
}
asm volatile("ptesync": : :"memory");
asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
// POWER9 congruence-class TLBIEL leaves ERAT. Flush it now.
if (cpu_has_feature(CPU_FTR_ARCH_300))
asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
}
}
@@ -988,6 +988,7 @@ EXPORT_SYMBOL(get_guest_storage_key);
int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
unsigned long *oldpte, unsigned long *oldpgste)
{
struct vm_area_struct *vma;
unsigned long pgstev;
spinlock_t *ptl;
pgste_t pgste;
@@ -997,6 +998,10 @@ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
WARN_ON_ONCE(orc > ESSA_MAX);
if (unlikely(orc > ESSA_MAX))
return -EINVAL;

vma = find_vma(mm, hva);
if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
return -EFAULT;
ptep = get_locked_pte(mm, hva, &ptl);
if (unlikely(!ptep))
return -EFAULT;
@@ -1089,10 +1094,14 @@ EXPORT_SYMBOL(pgste_perform_essa);
int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
unsigned long bits, unsigned long value)
{
struct vm_area_struct *vma;
spinlock_t *ptl;
pgste_t new;
pte_t *ptep;

vma = find_vma(mm, hva);
if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
return -EFAULT;
ptep = get_locked_pte(mm, hva, &ptl);
if (unlikely(!ptep))
return -EFAULT;
@@ -1117,9 +1126,13 @@ EXPORT_SYMBOL(set_pgste_bits);
*/
int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
{
struct vm_area_struct *vma;
spinlock_t *ptl;
pte_t *ptep;

vma = find_vma(mm, hva);
if (!vma || hva < vma->vm_start || is_vm_hugetlb_page(vma))
return -EFAULT;
ptep = get_locked_pte(mm, hva, &ptl);
if (unlikely(!ptep))
return -EFAULT;
@@ -1110,15 +1110,10 @@ struct fwnode_handle *acpi_node_get_parent(const struct fwnode_handle *fwnode)
/* All data nodes have parent pointer so just return that */
return to_acpi_data_node(fwnode)->parent;
} else if (is_acpi_device_node(fwnode)) {
acpi_handle handle, parent_handle;
struct device *dev = to_acpi_device_node(fwnode)->dev.parent;

handle = to_acpi_device_node(fwnode)->handle;
if (ACPI_SUCCESS(acpi_get_parent(handle, &parent_handle))) {
struct acpi_device *adev;

if (!acpi_bus_get_device(parent_handle, &adev))
return acpi_fwnode_handle(adev);
}
if (dev)
return acpi_fwnode_handle(to_acpi_device(dev));
}

return NULL;
@@ -80,6 +80,7 @@ enum blkif_state {
|
||||
BLKIF_STATE_DISCONNECTED,
|
||||
BLKIF_STATE_CONNECTED,
|
||||
BLKIF_STATE_SUSPENDED,
|
||||
BLKIF_STATE_ERROR,
|
||||
};
|
||||
|
||||
struct grant {
|
||||
@@ -89,6 +90,7 @@ struct grant {
|
||||
};
|
||||
|
||||
enum blk_req_status {
|
||||
REQ_PROCESSING,
|
||||
REQ_WAITING,
|
||||
REQ_DONE,
|
||||
REQ_ERROR,
|
||||
@@ -543,10 +545,10 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
|
||||
|
||||
id = get_id_from_freelist(rinfo);
|
||||
rinfo->shadow[id].request = req;
|
||||
rinfo->shadow[id].status = REQ_WAITING;
|
||||
rinfo->shadow[id].status = REQ_PROCESSING;
|
||||
rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
|
||||
|
||||
(*ring_req)->u.rw.id = id;
|
||||
rinfo->shadow[id].req.u.rw.id = id;
|
||||
|
||||
return id;
|
||||
}
|
||||
@@ -554,11 +556,12 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
|
||||
static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
|
||||
{
|
||||
struct blkfront_info *info = rinfo->dev_info;
|
||||
struct blkif_request *ring_req;
|
||||
struct blkif_request *ring_req, *final_ring_req;
|
||||
unsigned long id;
|
||||
|
||||
/* Fill out a communications ring structure. */
|
||||
id = blkif_ring_get_request(rinfo, req, &ring_req);
|
||||
id = blkif_ring_get_request(rinfo, req, &final_ring_req);
|
||||
ring_req = &rinfo->shadow[id].req;
|
||||
|
||||
ring_req->operation = BLKIF_OP_DISCARD;
|
||||
ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
|
||||
@@ -569,8 +572,9 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
|
||||
else
|
||||
ring_req->u.discard.flag = 0;
|
||||
|
||||
/* Keep a private copy so we can reissue requests when recovering. */
|
||||
rinfo->shadow[id].req = *ring_req;
|
||||
/* Copy the request to the ring page. */
|
||||
*final_ring_req = *ring_req;
|
||||
rinfo->shadow[id].status = REQ_WAITING;
|
||||
|
||||
return 0;
|
||||
}
|
||||
@@ -703,6 +707,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
|
||||
{
|
||||
struct blkfront_info *info = rinfo->dev_info;
|
||||
struct blkif_request *ring_req, *extra_ring_req = NULL;
|
||||
struct blkif_request *final_ring_req, *final_extra_ring_req = NULL;
|
||||
unsigned long id, extra_id = NO_ASSOCIATED_ID;
|
||||
bool require_extra_req = false;
|
||||
int i;
|
||||
@@ -747,7 +752,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
|
||||
}
|
||||
|
||||
/* Fill out a communications ring structure. */
|
||||
id = blkif_ring_get_request(rinfo, req, &ring_req);
|
||||
id = blkif_ring_get_request(rinfo, req, &final_ring_req);
|
||||
ring_req = &rinfo->shadow[id].req;
|
||||
|
||||
num_sg = blk_rq_map_sg(req->q, req, rinfo->shadow[id].sg);
|
||||
num_grant = 0;
|
||||
@@ -798,7 +804,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
|
||||
ring_req->u.rw.nr_segments = num_grant;
|
||||
if (unlikely(require_extra_req)) {
|
||||
extra_id = blkif_ring_get_request(rinfo, req,
|
||||
&extra_ring_req);
|
||||
&final_extra_ring_req);
|
||||
extra_ring_req = &rinfo->shadow[extra_id].req;
|
||||
|
||||
/*
|
||||
* Only the first request contains the scatter-gather
|
||||
* list.
|
||||
@@ -840,10 +848,13 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
|
||||
if (setup.segments)
|
||||
kunmap_atomic(setup.segments);
|
||||
|
||||
/* Keep a private copy so we can reissue requests when recovering. */
|
||||
rinfo->shadow[id].req = *ring_req;
|
||||
if (unlikely(require_extra_req))
|
||||
rinfo->shadow[extra_id].req = *extra_ring_req;
|
||||
/* Copy request(s) to the ring page. */
|
||||
*final_ring_req = *ring_req;
|
||||
rinfo->shadow[id].status = REQ_WAITING;
|
||||
if (unlikely(require_extra_req)) {
|
||||
*final_extra_ring_req = *extra_ring_req;
|
||||
rinfo->shadow[extra_id].status = REQ_WAITING;
|
||||
}
|
||||
|
||||
if (new_persistent_gnts)
|
||||
gnttab_free_grant_references(setup.gref_head);
|
||||
@@ -1415,8 +1426,8 @@ static enum blk_req_status blkif_rsp_to_req_status(int rsp)
|
||||
static int blkif_get_final_status(enum blk_req_status s1,
|
||||
enum blk_req_status s2)
|
||||
{
|
||||
BUG_ON(s1 == REQ_WAITING);
|
||||
BUG_ON(s2 == REQ_WAITING);
|
||||
BUG_ON(s1 < REQ_DONE);
|
||||
BUG_ON(s2 < REQ_DONE);
|
||||
|
||||
if (s1 == REQ_ERROR || s2 == REQ_ERROR)
|
||||
return BLKIF_RSP_ERROR;
|
||||
@@ -1449,7 +1460,7 @@ static bool blkif_completion(unsigned long *id,
|
||||
s->status = blkif_rsp_to_req_status(bret->status);
|
||||
|
||||
/* Wait the second response if not yet here. */
|
||||
if (s2->status == REQ_WAITING)
|
||||
if (s2->status < REQ_DONE)
|
||||
return false;
|
||||
|
||||
bret->status = blkif_get_final_status(s->status,
|
||||
@@ -1557,7 +1568,7 @@ static bool blkif_completion(unsigned long *id,
|
||||
static irqreturn_t blkif_interrupt(int irq, void *dev_id)
|
||||
{
|
||||
struct request *req;
|
||||
struct blkif_response *bret;
|
||||
struct blkif_response bret;
|
||||
RING_IDX i, rp;
|
||||
unsigned long flags;
|
||||
struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
|
||||
@@ -1568,54 +1579,76 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
|
||||
|
||||
spin_lock_irqsave(&rinfo->ring_lock, flags);
|
||||
again:
|
||||
rp = rinfo->ring.sring->rsp_prod;
|
||||
rmb(); /* Ensure we see queued responses up to 'rp'. */
|
||||
rp = READ_ONCE(rinfo->ring.sring->rsp_prod);
|
||||
virt_rmb(); /* Ensure we see queued responses up to 'rp'. */
|
||||
if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
|
||||
pr_alert("%s: illegal number of responses %u\n",
|
||||
info->gd->disk_name, rp - rinfo->ring.rsp_cons);
|
||||
goto err;
|
||||
}
|
||||
|
||||
for (i = rinfo->ring.rsp_cons; i != rp; i++) {
|
||||
unsigned long id;
|
||||
unsigned int op;
|
||||
|
||||
RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
|
||||
id = bret.id;
|
||||
|
||||
bret = RING_GET_RESPONSE(&rinfo->ring, i);
|
||||
id = bret->id;
|
||||
/*
|
||||
* The backend has messed up and given us an id that we would
|
||||
* never have given to it (we stamp it up to BLK_RING_SIZE -
|
||||
* look in get_id_from_freelist.
|
||||
*/
|
||||
if (id >= BLK_RING_SIZE(info)) {
|
||||
WARN(1, "%s: response to %s has incorrect id (%ld)\n",
|
||||
info->gd->disk_name, op_name(bret->operation), id);
|
||||
/* We can't safely get the 'struct request' as
|
||||
* the id is busted. */
|
||||
continue;
|
||||
pr_alert("%s: response has incorrect id (%ld)\n",
|
||||
info->gd->disk_name, id);
|
||||
goto err;
|
||||
}
|
||||
if (rinfo->shadow[id].status != REQ_WAITING) {
|
||||
pr_alert("%s: response references no pending request\n",
|
||||
info->gd->disk_name);
|
||||
goto err;
|
||||
}
|
||||
|
||||
rinfo->shadow[id].status = REQ_PROCESSING;
|
||||
req = rinfo->shadow[id].request;
|
||||
|
||||
if (bret->operation != BLKIF_OP_DISCARD) {
|
||||
op = rinfo->shadow[id].req.operation;
|
||||
if (op == BLKIF_OP_INDIRECT)
|
||||
op = rinfo->shadow[id].req.u.indirect.indirect_op;
|
||||
if (bret.operation != op) {
|
||||
pr_alert("%s: response has wrong operation (%u instead of %u)\n",
|
||||
info->gd->disk_name, bret.operation, op);
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (bret.operation != BLKIF_OP_DISCARD) {
|
||||
/*
|
||||
* We may need to wait for an extra response if the
|
||||
* I/O request is split in 2
|
||||
*/
|
||||
if (!blkif_completion(&id, rinfo, bret))
|
||||
if (!blkif_completion(&id, rinfo, &bret))
|
||||
continue;
|
||||
}
|
||||
|
||||
if (add_id_to_freelist(rinfo, id)) {
|
||||
WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
|
||||
info->gd->disk_name, op_name(bret->operation), id);
|
||||
info->gd->disk_name, op_name(bret.operation), id);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (bret->status == BLKIF_RSP_OKAY)
|
||||
if (bret.status == BLKIF_RSP_OKAY)
|
||||
blkif_req(req)->error = BLK_STS_OK;
|
||||
else
|
||||
blkif_req(req)->error = BLK_STS_IOERR;
|
||||
|
||||
switch (bret->operation) {
|
||||
switch (bret.operation) {
|
||||
case BLKIF_OP_DISCARD:
|
||||
if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
|
||||
if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
|
||||
struct request_queue *rq = info->rq;
|
||||
printk(KERN_WARNING "blkfront: %s: %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret->operation));
|
||||
|
||||
pr_warn_ratelimited("blkfront: %s: %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret.operation));
|
||||
blkif_req(req)->error = BLK_STS_NOTSUPP;
|
||||
info->feature_discard = 0;
|
||||
info->feature_secdiscard = 0;
|
||||
@@ -1625,15 +1658,15 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
|
||||
break;
|
||||
case BLKIF_OP_FLUSH_DISKCACHE:
|
||||
case BLKIF_OP_WRITE_BARRIER:
|
||||
if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
|
||||
printk(KERN_WARNING "blkfront: %s: %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret->operation));
|
||||
if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
|
||||
pr_warn_ratelimited("blkfront: %s: %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret.operation));
|
||||
blkif_req(req)->error = BLK_STS_NOTSUPP;
|
||||
}
|
||||
if (unlikely(bret->status == BLKIF_RSP_ERROR &&
|
||||
if (unlikely(bret.status == BLKIF_RSP_ERROR &&
|
||||
rinfo->shadow[id].req.u.rw.nr_segments == 0)) {
|
||||
printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret->operation));
|
||||
pr_warn_ratelimited("blkfront: %s: empty %s op failed\n",
|
||||
info->gd->disk_name, op_name(bret.operation));
|
||||
blkif_req(req)->error = BLK_STS_NOTSUPP;
|
||||
}
|
||||
if (unlikely(blkif_req(req)->error)) {
|
||||
@@ -1646,9 +1679,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
|
||||
fallthrough;
|
||||
case BLKIF_OP_READ:
|
||||
case BLKIF_OP_WRITE:
|
||||
if (unlikely(bret->status != BLKIF_RSP_OKAY))
|
||||
dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
|
||||
"request: %x\n", bret->status);
|
||||
if (unlikely(bret.status != BLKIF_RSP_OKAY))
|
||||
dev_dbg_ratelimited(&info->xbdev->dev,
|
||||
"Bad return from blkdev data request: %#x\n",
|
||||
bret.status);
|
||||
|
||||
break;
|
||||
default:
|
||||
@@ -1674,6 +1708,14 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
|
||||
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
|
||||
err:
|
||||
info->connected = BLKIF_STATE_ERROR;
|
||||
|
||||
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
|
||||
|
||||
pr_alert("%s disabled for further use\n", info->gd->disk_name);
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
|
||||
|
@@ -112,9 +112,7 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
|
||||
scmi_pd_data->domains = domains;
|
||||
scmi_pd_data->num_domains = num_domains;
|
||||
|
||||
of_genpd_add_provider_onecell(np, scmi_pd_data);
|
||||
|
||||
return 0;
|
||||
return of_genpd_add_provider_onecell(np, scmi_pd_data);
|
||||
}
|
||||
|
||||
static const struct scmi_device_id scmi_id_table[] = {
|
||||
|
@@ -50,7 +50,7 @@ static int __init smccc_soc_init(void)
|
||||
arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
|
||||
ARM_SMCCC_ARCH_SOC_ID, &res);
|
||||
|
||||
if (res.a0 == SMCCC_RET_NOT_SUPPORTED) {
|
||||
if ((int)res.a0 == SMCCC_RET_NOT_SUPPORTED) {
|
||||
pr_info("ARCH_SOC_ID not implemented, skipping ....\n");
|
||||
return 0;
|
||||
}
|
||||
|
@@ -137,6 +137,11 @@ MODULE_FIRMWARE("amdgpu/green_sardine_rlc.bin");
|
||||
#define mmTCP_CHAN_STEER_5_ARCT 0x0b0c
|
||||
#define mmTCP_CHAN_STEER_5_ARCT_BASE_IDX 0
|
||||
|
||||
#define mmGOLDEN_TSC_COUNT_UPPER_Renoir 0x0025
|
||||
#define mmGOLDEN_TSC_COUNT_UPPER_Renoir_BASE_IDX 1
|
||||
#define mmGOLDEN_TSC_COUNT_LOWER_Renoir 0x0026
|
||||
#define mmGOLDEN_TSC_COUNT_LOWER_Renoir_BASE_IDX 1
|
||||
|
||||
enum ta_ras_gfx_subblock {
|
||||
/*CPC*/
|
||||
TA_RAS_BLOCK__GFX_CPC_INDEX_START = 0,
|
||||
@@ -4147,19 +4152,38 @@ failed_kiq_read:
|
||||
|
||||
static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
|
||||
{
|
||||
uint64_t clock;
|
||||
uint64_t clock, clock_lo, clock_hi, hi_check;
|
||||
|
||||
amdgpu_gfx_off_ctrl(adev, false);
|
||||
mutex_lock(&adev->gfx.gpu_clock_mutex);
|
||||
if (adev->asic_type == CHIP_VEGA10 && amdgpu_sriov_runtime(adev)) {
|
||||
clock = gfx_v9_0_kiq_read_clock(adev);
|
||||
} else {
|
||||
WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
|
||||
clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
|
||||
((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
|
||||
switch (adev->asic_type) {
|
||||
case CHIP_RENOIR:
|
||||
preempt_disable();
|
||||
clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Renoir);
|
||||
clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Renoir);
|
||||
hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Renoir);
|
||||
/* The SMUIO TSC clock frequency is 100MHz, which sets 32-bit carry over
|
||||
* roughly every 42 seconds.
|
||||
*/
|
||||
if (hi_check != clock_hi) {
|
||||
clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Renoir);
|
||||
clock_hi = hi_check;
|
||||
}
|
||||
preempt_enable();
|
||||
clock = clock_lo | (clock_hi << 32ULL);
|
||||
break;
|
||||
default:
|
||||
amdgpu_gfx_off_ctrl(adev, false);
|
||||
mutex_lock(&adev->gfx.gpu_clock_mutex);
|
||||
if (adev->asic_type == CHIP_VEGA10 && amdgpu_sriov_runtime(adev)) {
|
||||
clock = gfx_v9_0_kiq_read_clock(adev);
|
||||
} else {
|
||||
WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
|
||||
clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
|
||||
((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
|
||||
}
|
||||
mutex_unlock(&adev->gfx.gpu_clock_mutex);
|
||||
amdgpu_gfx_off_ctrl(adev, true);
|
||||
break;
|
||||
}
|
||||
mutex_unlock(&adev->gfx.gpu_clock_mutex);
|
||||
amdgpu_gfx_off_ctrl(adev, true);
|
||||
return clock;
|
||||
}
|
||||
|
||||
|
@@ -1963,8 +1963,8 @@ static int dm_resume(void *handle)
|
||||
|
||||
for (i = 0; i < dc_state->stream_count; i++) {
|
||||
dc_state->streams[i]->mode_changed = true;
|
||||
for (j = 0; j < dc_state->stream_status->plane_count; j++) {
|
||||
dc_state->stream_status->plane_states[j]->update_flags.raw
|
||||
for (j = 0; j < dc_state->stream_status[i].plane_count; j++) {
|
||||
dc_state->stream_status[i].plane_states[j]->update_flags.raw
|
||||
= 0xffffffff;
|
||||
}
|
||||
}
|
||||
|
@@ -207,11 +207,13 @@ int
|
||||
gm200_acr_wpr_parse(struct nvkm_acr *acr)
|
||||
{
|
||||
const struct wpr_header *hdr = (void *)acr->wpr_fw->data;
|
||||
struct nvkm_acr_lsfw *lsfw;
|
||||
|
||||
while (hdr->falcon_id != WPR_HEADER_V0_FALCON_ID_INVALID) {
|
||||
wpr_header_dump(&acr->subdev, hdr);
|
||||
if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
|
||||
return -ENOMEM;
|
||||
lsfw = nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id);
|
||||
if (IS_ERR(lsfw))
|
||||
return PTR_ERR(lsfw);
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@@ -161,11 +161,13 @@ int
|
||||
gp102_acr_wpr_parse(struct nvkm_acr *acr)
|
||||
{
|
||||
const struct wpr_header_v1 *hdr = (void *)acr->wpr_fw->data;
|
||||
struct nvkm_acr_lsfw *lsfw;
|
||||
|
||||
while (hdr->falcon_id != WPR_HEADER_V1_FALCON_ID_INVALID) {
|
||||
wpr_header_v1_dump(&acr->subdev, hdr);
|
||||
if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
|
||||
return -ENOMEM;
|
||||
lsfw = nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id);
|
||||
if (IS_ERR(lsfw))
|
||||
return PTR_ERR(lsfw);
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@@ -389,7 +389,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
|
||||
|
||||
bo = kzalloc(sizeof(*bo), GFP_KERNEL);
|
||||
if (!bo)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
return NULL;
|
||||
|
||||
bo->madv = VC4_MADV_WILLNEED;
|
||||
refcount_set(&bo->usecnt, 0);
|
||||
|
@@ -2578,6 +2578,9 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
|
||||
return;
|
||||
|
||||
switch (equivalent_usage) {
|
||||
case HID_DG_CONFIDENCE:
|
||||
wacom_wac->hid_data.confidence = value;
|
||||
break;
|
||||
case HID_GD_X:
|
||||
wacom_wac->hid_data.x = value;
|
||||
break;
|
||||
@@ -2610,7 +2613,8 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
|
||||
}
|
||||
|
||||
if (usage->usage_index + 1 == field->report_count) {
|
||||
if (equivalent_usage == wacom_wac->hid_data.last_slot_field)
|
||||
if (equivalent_usage == wacom_wac->hid_data.last_slot_field &&
|
||||
wacom_wac->hid_data.confidence)
|
||||
wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
|
||||
}
|
||||
}
|
||||
@@ -2625,6 +2629,8 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
|
||||
|
||||
wacom_wac->is_invalid_bt_frame = false;
|
||||
|
||||
hid_data->confidence = true;
|
||||
|
||||
for (i = 0; i < report->maxfield; i++) {
|
||||
struct hid_field *field = report->field[i];
|
||||
int j;
|
||||
|
@@ -300,6 +300,7 @@ struct hid_data {
|
||||
bool tipswitch;
|
||||
bool barrelswitch;
|
||||
bool barrelswitch2;
|
||||
bool confidence;
|
||||
int x;
|
||||
int y;
|
||||
int pressure;
|
||||
|
@@ -927,10 +927,8 @@ static int __init amd_iommu_v2_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
pr_info("AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>\n");
|
||||
|
||||
if (!amd_iommu_v2_supported()) {
|
||||
pr_info("AMD IOMMUv2 functionality not available on this system\n");
|
||||
pr_info("AMD IOMMUv2 functionality not available on this system - This is not a bug.\n");
|
||||
/*
|
||||
* Load anyway to provide the symbols to other modules
|
||||
* which may use AMD IOMMUv2 optionally.
|
||||
@@ -947,6 +945,8 @@ static int __init amd_iommu_v2_init(void)
|
||||
|
||||
amd_iommu_register_ppr_notifier(&ppr_nb);
|
||||
|
||||
pr_info("AMD IOMMUv2 loaded and initialized\n");
|
||||
|
||||
return 0;
|
||||
|
||||
out:
|
||||
|
@@ -1199,6 +1199,7 @@ void cec_received_msg_ts(struct cec_adapter *adap,
|
||||
if (abort)
|
||||
dst->rx_status |= CEC_RX_STATUS_FEATURE_ABORT;
|
||||
msg->flags = dst->flags;
|
||||
msg->sequence = dst->sequence;
|
||||
/* Remove it from the wait_queue */
|
||||
list_del_init(&data->list);
|
||||
|
||||
|
@@ -263,7 +263,6 @@ static struct esdhc_soc_data usdhc_imx8qxp_data = {
|
||||
.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
|
||||
| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
|
||||
| ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
|
||||
| ESDHC_FLAG_CQHCI
|
||||
| ESDHC_FLAG_STATE_LOST_IN_LPMODE
|
||||
| ESDHC_FLAG_CLK_RATE_LOST_IN_PM_RUNTIME,
|
||||
};
|
||||
@@ -272,7 +271,6 @@ static struct esdhc_soc_data usdhc_imx8mm_data = {
|
||||
.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
|
||||
| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
|
||||
| ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
|
||||
| ESDHC_FLAG_CQHCI
|
||||
| ESDHC_FLAG_STATE_LOST_IN_LPMODE,
|
||||
};
|
||||
|
||||
|
@@ -774,7 +774,19 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
|
||||
len -= offset;
|
||||
}
|
||||
|
||||
BUG_ON(len > 65536);
|
||||
/*
|
||||
* The block layer forces a minimum segment size of PAGE_SIZE,
|
||||
* so 'len' can be too big here if PAGE_SIZE >= 64KiB. Write
|
||||
* multiple descriptors, noting that the ADMA table is sized
|
||||
* for 4KiB chunks anyway, so it will be big enough.
|
||||
*/
|
||||
while (len > host->max_adma) {
|
||||
int n = 32 * 1024; /* 32KiB*/
|
||||
|
||||
__sdhci_adma_write_desc(host, &desc, addr, n, ADMA2_TRAN_VALID);
|
||||
addr += n;
|
||||
len -= n;
|
||||
}
|
||||
|
||||
/* tran, valid */
|
||||
if (len)
|
||||
@@ -3955,6 +3967,7 @@ struct sdhci_host *sdhci_alloc_host(struct device *dev,
|
||||
* descriptor for each segment, plus 1 for a nop end descriptor.
|
||||
*/
|
||||
host->adma_table_cnt = SDHCI_MAX_SEGS * 2 + 1;
|
||||
host->max_adma = 65536;
|
||||
|
||||
return host;
|
||||
}
|
||||
@@ -4618,10 +4631,12 @@ int sdhci_setup_host(struct sdhci_host *host)
|
||||
* be larger than 64 KiB though.
|
||||
*/
|
||||
if (host->flags & SDHCI_USE_ADMA) {
|
||||
if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC)
|
||||
if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC) {
|
||||
host->max_adma = 65532; /* 32-bit alignment */
|
||||
mmc->max_seg_size = 65535;
|
||||
else
|
||||
} else {
|
||||
mmc->max_seg_size = 65536;
|
||||
}
|
||||
} else {
|
||||
mmc->max_seg_size = mmc->max_req_size;
|
||||
}
|
||||
|
@@ -339,7 +339,8 @@ struct sdhci_adma2_64_desc {
|
||||
|
||||
/*
|
||||
* Maximum segments assuming a 512KiB maximum requisition size and a minimum
|
||||
* 4KiB page size.
|
||||
* 4KiB page size. Note this also allows enough for multiple descriptors in
|
||||
* case of PAGE_SIZE >= 64KiB.
|
||||
*/
|
||||
#define SDHCI_MAX_SEGS 128
|
||||
|
||||
@@ -541,6 +542,7 @@ struct sdhci_host {
|
||||
unsigned int blocks; /* remaining PIO blocks */
|
||||
|
||||
int sg_count; /* Mapped sg entries */
|
||||
int max_adma; /* Max. length in ADMA descriptor */
|
||||
|
||||
void *adma_table; /* ADMA descriptor table */
|
||||
void *align_buffer; /* Bounce buffer */
|
||||
|
@@ -679,9 +679,9 @@ static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev, u16 rss_size)
|
||||
roundup_size = ilog2(roundup_size);
|
||||
|
||||
for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
|
||||
tc_valid[i] = !!(hdev->hw_tc_map & BIT(i));
|
||||
tc_valid[i] = 1;
|
||||
tc_size[i] = roundup_size;
|
||||
tc_offset[i] = rss_size * i;
|
||||
tc_offset[i] = (hdev->hw_tc_map & BIT(i)) ? rss_size * i : 0;
|
||||
}
|
||||
|
||||
hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_TC_MODE, false);
|
||||
|
@@ -719,12 +719,31 @@ static int iavf_get_per_queue_coalesce(struct net_device *netdev, u32 queue,
|
||||
*
|
||||
* Change the ITR settings for a specific queue.
|
||||
**/
|
||||
static void iavf_set_itr_per_queue(struct iavf_adapter *adapter,
|
||||
struct ethtool_coalesce *ec, int queue)
|
||||
static int iavf_set_itr_per_queue(struct iavf_adapter *adapter,
|
||||
struct ethtool_coalesce *ec, int queue)
|
||||
{
|
||||
struct iavf_ring *rx_ring = &adapter->rx_rings[queue];
|
||||
struct iavf_ring *tx_ring = &adapter->tx_rings[queue];
|
||||
struct iavf_q_vector *q_vector;
|
||||
u16 itr_setting;
|
||||
|
||||
itr_setting = rx_ring->itr_setting & ~IAVF_ITR_DYNAMIC;
|
||||
|
||||
if (ec->rx_coalesce_usecs != itr_setting &&
|
||||
ec->use_adaptive_rx_coalesce) {
|
||||
netif_info(adapter, drv, adapter->netdev,
|
||||
"Rx interrupt throttling cannot be changed if adaptive-rx is enabled\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
itr_setting = tx_ring->itr_setting & ~IAVF_ITR_DYNAMIC;
|
||||
|
||||
if (ec->tx_coalesce_usecs != itr_setting &&
|
||||
ec->use_adaptive_tx_coalesce) {
|
||||
netif_info(adapter, drv, adapter->netdev,
|
||||
"Tx interrupt throttling cannot be changed if adaptive-tx is enabled\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
rx_ring->itr_setting = ITR_REG_ALIGN(ec->rx_coalesce_usecs);
|
||||
tx_ring->itr_setting = ITR_REG_ALIGN(ec->tx_coalesce_usecs);
|
||||
@@ -747,6 +766,7 @@ static void iavf_set_itr_per_queue(struct iavf_adapter *adapter,
|
||||
* the Tx and Rx ITR values based on the values we have entered
|
||||
* into the q_vector, no need to write the values now.
|
||||
*/
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -788,9 +808,11 @@ static int __iavf_set_coalesce(struct net_device *netdev,
|
||||
*/
|
||||
if (queue < 0) {
|
||||
for (i = 0; i < adapter->num_active_queues; i++)
|
||||
iavf_set_itr_per_queue(adapter, ec, i);
|
||||
if (iavf_set_itr_per_queue(adapter, ec, i))
|
||||
return -EINVAL;
|
||||
} else if (queue < adapter->num_active_queues) {
|
||||
iavf_set_itr_per_queue(adapter, ec, queue);
|
||||
if (iavf_set_itr_per_queue(adapter, ec, queue))
|
||||
return -EINVAL;
|
||||
} else {
|
||||
netif_info(adapter, drv, netdev, "Invalid queue value, queue range is 0 - %d\n",
|
||||
adapter->num_active_queues - 1);
|
||||
|
@@ -83,8 +83,13 @@ static int ice_vsi_alloc_arrays(struct ice_vsi *vsi)
|
||||
if (!vsi->rx_rings)
|
||||
goto err_rings;
|
||||
|
||||
/* XDP will have vsi->alloc_txq Tx queues as well, so double the size */
|
||||
vsi->txq_map = devm_kcalloc(dev, (2 * vsi->alloc_txq),
|
||||
/* txq_map needs to have enough space to track both Tx (stack) rings
|
||||
* and XDP rings; at this point vsi->num_xdp_txq might not be set,
|
||||
* so use num_possible_cpus() as we want to always provide XDP ring
|
||||
* per CPU, regardless of queue count settings from user that might
|
||||
* have come from ethtool's set_channels() callback;
|
||||
*/
|
||||
vsi->txq_map = devm_kcalloc(dev, (vsi->alloc_txq + num_possible_cpus()),
|
||||
sizeof(*vsi->txq_map), GFP_KERNEL);
|
||||
|
||||
if (!vsi->txq_map)
|
||||
|
@@ -2397,7 +2397,18 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog)
|
||||
ice_stat_str(status));
|
||||
goto clear_xdp_rings;
|
||||
}
|
||||
ice_vsi_assign_bpf_prog(vsi, prog);
|
||||
|
||||
/* assign the prog only when it's not already present on VSI;
|
||||
* this flow is a subject of both ethtool -L and ndo_bpf flows;
|
||||
* VSI rebuild that happens under ethtool -L can expose us to
|
||||
* the bpf_prog refcount issues as we would be swapping same
|
||||
* bpf_prog pointers from vsi->xdp_prog and calling bpf_prog_put
|
||||
* on it as it would be treated as an 'old_prog'; for ndo_bpf
|
||||
* this is not harmful as dev_xdp_install bumps the refcount
|
||||
* before calling the op exposed by the driver;
|
||||
*/
|
||||
if (!ice_is_xdp_ena_vsi(vsi))
|
||||
ice_vsi_assign_bpf_prog(vsi, prog);
|
||||
|
||||
return 0;
|
||||
clear_xdp_rings:
|
||||
@@ -2527,6 +2538,11 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
|
||||
if (xdp_ring_err)
|
||||
NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed");
|
||||
} else {
|
||||
/* safe to call even when prog == vsi->xdp_prog as
|
||||
* dev_xdp_install in net/core/dev.c incremented prog's
|
||||
* refcount so corresponding bpf_prog_put won't cause
|
||||
* underflow
|
||||
*/
|
||||
ice_vsi_assign_bpf_prog(vsi, prog);
|
||||
}
|
||||
|
||||
|
@@ -8032,7 +8032,7 @@ static int igb_poll(struct napi_struct *napi, int budget)
|
||||
if (likely(napi_complete_done(napi, work_done)))
|
||||
igb_ring_irq_enable(q_vector);
|
||||
|
||||
return min(work_done, budget - 1);
|
||||
return work_done;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@@ -4652,11 +4652,13 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu)
|
||||
mtu = ALIGN(MVPP2_RX_PKT_SIZE(mtu), 8);
|
||||
}
|
||||
|
||||
if (port->xdp_prog && mtu > MVPP2_MAX_RX_BUF_SIZE) {
|
||||
netdev_err(dev, "Illegal MTU value %d (> %d) for XDP mode\n",
|
||||
mtu, (int)MVPP2_MAX_RX_BUF_SIZE);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (MVPP2_RX_PKT_SIZE(mtu) > MVPP2_BM_LONG_PKT_SIZE) {
|
||||
if (port->xdp_prog) {
|
||||
netdev_err(dev, "Jumbo frames are not supported with XDP\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
if (priv->percpu_pools) {
|
||||
netdev_warn(dev, "mtu %d too high, switching to shared buffers", mtu);
|
||||
mvpp2_bm_switch_buffers(priv, false);
|
||||
@@ -4942,8 +4944,8 @@ static int mvpp2_xdp_setup(struct mvpp2_port *port, struct netdev_bpf *bpf)
|
||||
bool running = netif_running(port->dev);
|
||||
bool reset = !prog != !port->xdp_prog;
|
||||
|
||||
if (port->dev->mtu > ETH_DATA_LEN) {
|
||||
NL_SET_ERR_MSG_MOD(bpf->extack, "XDP is not supported with jumbo frames enabled");
|
||||
if (port->dev->mtu > MVPP2_MAX_RX_BUF_SIZE) {
|
||||
NL_SET_ERR_MSG_MOD(bpf->extack, "MTU too large for XDP");
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
|
@@ -439,8 +439,8 @@ static int prestera_port_bridge_join(struct prestera_port *port,
|
||||
|
||||
br_port = prestera_bridge_port_add(bridge, port->dev);
|
||||
if (IS_ERR(br_port)) {
|
||||
err = PTR_ERR(br_port);
|
||||
goto err_brport_create;
|
||||
prestera_bridge_put(bridge);
|
||||
return PTR_ERR(br_port);
|
||||
}
|
||||
|
||||
if (bridge->vlan_enabled)
|
||||
@@ -454,8 +454,6 @@ static int prestera_port_bridge_join(struct prestera_port *port,
|
||||
|
||||
err_port_join:
|
||||
prestera_bridge_port_put(br_port);
|
||||
err_brport_create:
|
||||
prestera_bridge_put(bridge);
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@@ -234,6 +234,7 @@ static void mlxsw_m_port_remove(struct mlxsw_m *mlxsw_m, u8 local_port)
|
||||
static int mlxsw_m_port_module_map(struct mlxsw_m *mlxsw_m, u8 local_port,
|
||||
u8 *last_module)
|
||||
{
|
||||
unsigned int max_ports = mlxsw_core_max_ports(mlxsw_m->core);
|
||||
u8 module, width;
|
||||
int err;
|
||||
|
||||
@@ -249,6 +250,9 @@ static int mlxsw_m_port_module_map(struct mlxsw_m *mlxsw_m, u8 local_port,
|
||||
if (module == *last_module)
|
||||
return 0;
|
||||
*last_module = module;
|
||||
|
||||
if (WARN_ON_ONCE(module >= max_ports))
|
||||
return -EINVAL;
|
||||
mlxsw_m->module_to_port[module] = ++mlxsw_m->max_ports;
|
||||
|
||||
return 0;
|
||||
|
@@ -2052,9 +2052,14 @@ static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
|
||||
struct mlxsw_sp *mlxsw_sp = priv;
|
||||
struct mlxsw_sp_port *mlxsw_sp_port;
|
||||
enum mlxsw_reg_pude_oper_status status;
|
||||
unsigned int max_ports;
|
||||
u8 local_port;
|
||||
|
||||
max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
|
||||
local_port = mlxsw_reg_pude_local_port_get(pude_pl);
|
||||
|
||||
if (WARN_ON_ONCE(!local_port || local_port >= max_ports))
|
||||
return;
|
||||
mlxsw_sp_port = mlxsw_sp->ports[local_port];
|
||||
if (!mlxsw_sp_port)
|
||||
return;
|
||||
|
@@ -568,10 +568,13 @@ void mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
|
||||
u8 domain_number, u16 sequence_id,
|
||||
u64 timestamp)
|
||||
{
|
||||
unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
|
||||
struct mlxsw_sp_port *mlxsw_sp_port;
|
||||
struct mlxsw_sp1_ptp_key key;
|
||||
u8 types;
|
||||
|
||||
if (WARN_ON_ONCE(local_port >= max_ports))
|
||||
return;
|
||||
mlxsw_sp_port = mlxsw_sp->ports[local_port];
|
||||
if (!mlxsw_sp_port)
|
||||
return;
|
||||
|
@@ -2177,6 +2177,7 @@ static void mlxsw_sp_router_neigh_ent_ipv4_process(struct mlxsw_sp *mlxsw_sp,
|
||||
char *rauhtd_pl,
|
||||
int ent_index)
|
||||
{
|
||||
u64 max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS);
|
||||
struct net_device *dev;
|
||||
struct neighbour *n;
|
||||
__be32 dipn;
|
||||
@@ -2185,6 +2186,8 @@ static void mlxsw_sp_router_neigh_ent_ipv4_process(struct mlxsw_sp *mlxsw_sp,
|
||||
|
||||
mlxsw_reg_rauhtd_ent_ipv4_unpack(rauhtd_pl, ent_index, &rif, &dip);
|
||||
|
||||
if (WARN_ON_ONCE(rif >= max_rifs))
|
||||
return;
|
||||
if (!mlxsw_sp->router->rifs[rif]) {
|
||||
dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Incorrect RIF in neighbour entry\n");
|
||||
return;
|
||||
|
@@ -2410,6 +2410,7 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
|
||||
char *sfn_pl, int rec_index,
|
||||
bool adding)
|
||||
{
|
||||
unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
|
||||
struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
|
||||
struct mlxsw_sp_bridge_device *bridge_device;
|
||||
struct mlxsw_sp_bridge_port *bridge_port;
|
||||
@@ -2422,6 +2423,9 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
|
||||
int err;
|
||||
|
||||
mlxsw_reg_sfn_mac_unpack(sfn_pl, rec_index, mac, &fid, &local_port);
|
||||
|
||||
if (WARN_ON_ONCE(local_port >= max_ports))
|
||||
return;
|
||||
mlxsw_sp_port = mlxsw_sp->ports[local_port];
|
||||
if (!mlxsw_sp_port) {
|
||||
dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Incorrect local port in FDB notification\n");
|
||||
|
@@ -922,8 +922,7 @@ static int lan743x_phy_reset(struct lan743x_adapter *adapter)
|
||||
}
|
||||
|
||||
static void lan743x_phy_update_flowcontrol(struct lan743x_adapter *adapter,
|
||||
u8 duplex, u16 local_adv,
|
||||
u16 remote_adv)
|
||||
u16 local_adv, u16 remote_adv)
|
||||
{
|
||||
struct lan743x_phy *phy = &adapter->phy;
|
||||
u8 cap;
|
||||
@@ -951,7 +950,6 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
|
||||
|
||||
phy_print_status(phydev);
|
||||
if (phydev->state == PHY_RUNNING) {
|
||||
struct ethtool_link_ksettings ksettings;
|
||||
int remote_advertisement = 0;
|
||||
int local_advertisement = 0;
|
||||
|
||||
@@ -988,18 +986,14 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
|
||||
}
|
||||
lan743x_csr_write(adapter, MAC_CR, data);
|
||||
|
||||
memset(&ksettings, 0, sizeof(ksettings));
|
||||
phy_ethtool_get_link_ksettings(netdev, &ksettings);
|
||||
local_advertisement =
|
||||
linkmode_adv_to_mii_adv_t(phydev->advertising);
|
||||
remote_advertisement =
|
||||
linkmode_adv_to_mii_adv_t(phydev->lp_advertising);
|
||||
|
||||
lan743x_phy_update_flowcontrol(adapter,
|
||||
ksettings.base.duplex,
|
||||
local_advertisement,
|
||||
lan743x_phy_update_flowcontrol(adapter, local_advertisement,
|
||||
remote_advertisement);
|
||||
lan743x_ptp_update_latency(adapter, ksettings.base.speed);
|
||||
lan743x_ptp_update_latency(adapter, phydev->speed);
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -811,12 +811,6 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
|
||||
switch (cfg.rx_filter) {
|
||||
case HWTSTAMP_FILTER_NONE:
|
||||
break;
|
||||
case HWTSTAMP_FILTER_ALL:
|
||||
case HWTSTAMP_FILTER_SOME:
|
||||
case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
|
||||
case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
|
||||
case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
|
||||
case HWTSTAMP_FILTER_NTP_ALL:
|
||||
case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
|
||||
case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
|
||||
case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
|
||||
@@ -935,7 +929,10 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
|
||||
SOF_TIMESTAMPING_RAW_HARDWARE;
|
||||
info->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON) |
|
||||
BIT(HWTSTAMP_TX_ONESTEP_SYNC);
|
||||
info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL);
|
||||
info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
|
||||
BIT(HWTSTAMP_FILTER_PTP_V2_EVENT) |
|
||||
BIT(HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
|
||||
BIT(HWTSTAMP_FILTER_PTP_V2_L4_EVENT);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@@ -557,7 +557,6 @@ struct nfp_net_dp {
|
||||
* @exn_name: Name for Exception interrupt
|
||||
* @shared_handler: Handler for shared interrupts
|
||||
* @shared_name: Name for shared interrupt
|
||||
* @me_freq_mhz: ME clock_freq (MHz)
|
||||
* @reconfig_lock: Protects @reconfig_posted, @reconfig_timer_active,
|
||||
* @reconfig_sync_present and HW reconfiguration request
|
||||
* regs/machinery from async requests (sync must take
|
||||
@@ -640,8 +639,6 @@ struct nfp_net {
|
||||
irq_handler_t shared_handler;
|
||||
char shared_name[IFNAMSIZ + 8];
|
||||
|
||||
u32 me_freq_mhz;
|
||||
|
||||
bool link_up;
|
||||
spinlock_t link_status_lock;
|
||||
|
||||
|
@@ -1347,7 +1347,7 @@ static int nfp_net_set_coalesce(struct net_device *netdev,
|
||||
* ME timestamp ticks. There are 16 ME clock cycles for each timestamp
|
||||
* count.
|
||||
*/
|
||||
factor = nn->me_freq_mhz / 16;
|
||||
factor = nn->tlv_caps.me_freq_mhz / 16;
|
||||
|
||||
/* Each pair of (usecs, max_frames) fields specifies that interrupts
|
||||
* should be coalesced until
|
||||
|
@@ -258,6 +258,7 @@ int stmmac_mdio_register(struct net_device *ndev);
|
||||
int stmmac_mdio_reset(struct mii_bus *mii);
|
||||
void stmmac_set_ethtool_ops(struct net_device *netdev);
|
||||
|
||||
int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags);
|
||||
void stmmac_ptp_register(struct stmmac_priv *priv);
|
||||
void stmmac_ptp_unregister(struct stmmac_priv *priv);
|
||||
int stmmac_resume(struct device *dev);
|
||||
|
@@ -47,6 +47,13 @@
|
||||
#include "dwxgmac2.h"
|
||||
#include "hwif.h"
|
||||
|
||||
/* As long as the interface is active, we keep the timestamping counter enabled
|
||||
* with fine resolution and binary rollover. This avoid non-monotonic behavior
|
||||
* (clock jumps) when changing timestamping settings at runtime.
|
||||
*/
|
||||
#define STMMAC_HWTS_ACTIVE (PTP_TCR_TSENA | PTP_TCR_TSCFUPDT | \
|
||||
PTP_TCR_TSCTRLSSR)
|
||||
|
||||
#define STMMAC_ALIGN(x) ALIGN(ALIGN(x, SMP_CACHE_BYTES), 16)
|
||||
#define TSO_MAX_BUFF_SIZE (SZ_16K - 1)
|
||||
|
||||
@@ -508,8 +515,6 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
|
||||
{
|
||||
struct stmmac_priv *priv = netdev_priv(dev);
|
||||
struct hwtstamp_config config;
|
||||
struct timespec64 now;
|
||||
u64 temp = 0;
|
||||
u32 ptp_v2 = 0;
|
||||
u32 tstamp_all = 0;
|
||||
u32 ptp_over_ipv4_udp = 0;
|
||||
@@ -518,11 +523,6 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
|
||||
u32 snap_type_sel = 0;
|
||||
u32 ts_master_en = 0;
|
||||
u32 ts_event_en = 0;
|
||||
u32 sec_inc = 0;
|
||||
u32 value = 0;
|
||||
bool xmac;
|
||||
|
||||
xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
|
||||
|
||||
if (!(priv->dma_cap.time_stamp || priv->adv_ts)) {
|
||||
netdev_alert(priv->dev, "No support for HW time stamping\n");
|
||||
@@ -684,42 +684,17 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
|
||||
priv->hwts_rx_en = ((config.rx_filter == HWTSTAMP_FILTER_NONE) ? 0 : 1);
|
||||
priv->hwts_tx_en = config.tx_type == HWTSTAMP_TX_ON;
|
||||
|
||||
if (!priv->hwts_tx_en && !priv->hwts_rx_en)
|
||||
stmmac_config_hw_tstamping(priv, priv->ptpaddr, 0);
|
||||
else {
|
||||
value = (PTP_TCR_TSENA | PTP_TCR_TSCFUPDT | PTP_TCR_TSCTRLSSR |
|
||||
tstamp_all | ptp_v2 | ptp_over_ethernet |
|
||||
ptp_over_ipv6_udp | ptp_over_ipv4_udp | ts_event_en |
|
||||
ts_master_en | snap_type_sel);
|
||||
stmmac_config_hw_tstamping(priv, priv->ptpaddr, value);
|
||||
priv->systime_flags = STMMAC_HWTS_ACTIVE;
|
||||
|
||||
/* program Sub Second Increment reg */
|
||||
stmmac_config_sub_second_increment(priv,
|
||||
priv->ptpaddr, priv->plat->clk_ptp_rate,
|
||||
xmac, &sec_inc);
|
||||
temp = div_u64(1000000000ULL, sec_inc);
|
||||
|
||||
/* Store sub second increment and flags for later use */
|
||||
priv->sub_second_inc = sec_inc;
|
||||
priv->systime_flags = value;
|
||||
|
||||
/* calculate default added value:
|
||||
* formula is :
|
||||
* addend = (2^32)/freq_div_ratio;
|
||||
* where, freq_div_ratio = 1e9ns/sec_inc
|
||||
*/
|
||||
temp = (u64)(temp << 32);
|
||||
priv->default_addend = div_u64(temp, priv->plat->clk_ptp_rate);
|
||||
stmmac_config_addend(priv, priv->ptpaddr, priv->default_addend);
|
||||
|
||||
/* initialize system time */
|
||||
ktime_get_real_ts64(&now);
|
||||
|
||||
/* lower 32 bits of tv_sec are safe until y2106 */
|
||||
stmmac_init_systime(priv, priv->ptpaddr,
|
||||
(u32)now.tv_sec, now.tv_nsec);
|
||||
if (priv->hwts_tx_en || priv->hwts_rx_en) {
|
||||
priv->systime_flags |= tstamp_all | ptp_v2 |
|
||||
ptp_over_ethernet | ptp_over_ipv6_udp |
|
||||
ptp_over_ipv4_udp | ts_event_en |
|
||||
ts_master_en | snap_type_sel;
|
||||
}
|
||||
|
||||
stmmac_config_hw_tstamping(priv, priv->ptpaddr, priv->systime_flags);
|
||||
|
||||
memcpy(&priv->tstamp_config, &config, sizeof(config));
|
||||
|
||||
return copy_to_user(ifr->ifr_data, &config,
|
||||
@@ -747,6 +722,66 @@ static int stmmac_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
|
||||
sizeof(*config)) ? -EFAULT : 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* stmmac_init_tstamp_counter - init hardware timestamping counter
|
||||
* @priv: driver private structure
|
||||
* @systime_flags: timestamping flags
|
||||
* Description:
|
||||
* Initialize hardware counter for packet timestamping.
|
||||
* This is valid as long as the interface is open and not suspended.
|
||||
* Will be rerun after resuming from suspend, case in which the timestamping
|
||||
* flags updated by stmmac_hwtstamp_set() also need to be restored.
|
||||
*/
|
||||
int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags)
{
bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
struct timespec64 now;
u32 sec_inc = 0;
u64 temp = 0;
int ret;

if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
return -EOPNOTSUPP;

ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
if (ret < 0) {
netdev_warn(priv->dev,
"failed to enable PTP reference clock: %pe\n",
ERR_PTR(ret));
return ret;
}

stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags);
priv->systime_flags = systime_flags;

/* program Sub Second Increment reg */
stmmac_config_sub_second_increment(priv, priv->ptpaddr,
priv->plat->clk_ptp_rate,
xmac, &sec_inc);
temp = div_u64(1000000000ULL, sec_inc);

/* Store sub second increment for later use */
priv->sub_second_inc = sec_inc;

/* calculate default added value:
 * formula is :
 * addend = (2^32)/freq_div_ratio;
 * where, freq_div_ratio = 1e9ns/sec_inc
 */
temp = (u64)(temp << 32);
priv->default_addend = div_u64(temp, priv->plat->clk_ptp_rate);
stmmac_config_addend(priv, priv->ptpaddr, priv->default_addend);

/* initialize system time */
ktime_get_real_ts64(&now);

/* lower 32 bits of tv_sec are safe until y2106 */
stmmac_init_systime(priv, priv->ptpaddr, (u32)now.tv_sec, now.tv_nsec);

return 0;
}
EXPORT_SYMBOL_GPL(stmmac_init_tstamp_counter);

/**
 * stmmac_init_ptp - init PTP
 * @priv: driver private structure
@@ -757,9 +792,11 @@ static int stmmac_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
static int stmmac_init_ptp(struct stmmac_priv *priv)
{
bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
int ret;

if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
return -EOPNOTSUPP;
ret = stmmac_init_tstamp_counter(priv, STMMAC_HWTS_ACTIVE);
if (ret)
return ret;

priv->adv_ts = 0;
/* Check if adv_ts can be enabled for dwmac 4.x / xgmac core */
@@ -2721,10 +2758,6 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
stmmac_mmc_setup(priv);

if (init_ptp) {
ret = clk_prepare_enable(priv->plat->clk_ptp_ref);
if (ret < 0)
netdev_warn(priv->dev, "failed to enable PTP reference clock: %d\n", ret);

ret = stmmac_init_ptp(priv);
if (ret == -EOPNOTSUPP)
netdev_warn(priv->dev, "PTP not supported by HW\n");
@@ -5238,7 +5271,6 @@ int stmmac_suspend(struct device *dev)
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
u32 chan;
int ret;

if (!ndev || !netif_running(ndev))
return 0;
@@ -5280,13 +5312,6 @@ int stmmac_suspend(struct device *dev)

stmmac_mac_set(priv, priv->ioaddr, false);
pinctrl_pm_select_sleep_state(priv->device);
/* Disable clock in case of PWM is off */
clk_disable_unprepare(priv->plat->clk_ptp_ref);
ret = pm_runtime_force_suspend(dev);
if (ret) {
mutex_unlock(&priv->lock);
return ret;
}
}
mutex_unlock(&priv->lock);

@@ -5351,12 +5376,6 @@ int stmmac_resume(struct device *dev)
priv->irq_wake = 0;
} else {
pinctrl_pm_select_default_state(priv->device);
/* enable the clk previously disabled */
ret = pm_runtime_force_resume(dev);
if (ret)
return ret;
if (priv->plat->clk_ptp_ref)
clk_prepare_enable(priv->plat->clk_ptp_ref);
/* reset the phy so that it's ready */
if (priv->mii)
stmmac_mdio_reset(priv->mii);

@@ -9,6 +9,7 @@
*******************************************************************************/

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/of.h>
@@ -778,9 +779,52 @@ static int __maybe_unused stmmac_runtime_resume(struct device *dev)
return stmmac_bus_clks_config(priv, true);
}

static int __maybe_unused stmmac_pltfr_noirq_suspend(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
int ret;

if (!netif_running(ndev))
return 0;

if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
/* Disable clock in case of PWM is off */
clk_disable_unprepare(priv->plat->clk_ptp_ref);

ret = pm_runtime_force_suspend(dev);
if (ret)
return ret;
}

return 0;
}

static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
int ret;

if (!netif_running(ndev))
return 0;

if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
/* enable the clk previously disabled */
ret = pm_runtime_force_resume(dev);
if (ret)
return ret;

stmmac_init_tstamp_counter(priv, priv->systime_flags);
}

return 0;
}

const struct dev_pm_ops stmmac_pltfr_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_suspend, stmmac_pltfr_resume)
SET_RUNTIME_PM_OPS(stmmac_runtime_suspend, stmmac_runtime_resume, NULL)
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_noirq_suspend, stmmac_pltfr_noirq_resume)
};
EXPORT_SYMBOL_GPL(stmmac_pltfr_pm_ops);

@@ -61,6 +61,13 @@ static int aspeed_mdio_read(struct mii_bus *bus, int addr, int regnum)

iowrite32(ctrl, ctx->base + ASPEED_MDIO_CTRL);

rc = readl_poll_timeout(ctx->base + ASPEED_MDIO_CTRL, ctrl,
!(ctrl & ASPEED_MDIO_CTRL_FIRE),
ASPEED_MDIO_INTERVAL_US,
ASPEED_MDIO_TIMEOUT_US);
if (rc < 0)
return rc;

rc = readl_poll_timeout(ctx->base + ASPEED_MDIO_DATA, data,
data & ASPEED_MDIO_DATA_IDLE,
ASPEED_MDIO_INTERVAL_US,

@@ -644,6 +644,7 @@ static void phylink_resolve(struct work_struct *w)
struct phylink_link_state link_state;
struct net_device *ndev = pl->netdev;
bool mac_config = false;
bool retrigger = false;
bool cur_link_state;

mutex_lock(&pl->state_mutex);
@@ -657,6 +658,7 @@ static void phylink_resolve(struct work_struct *w)
link_state.link = false;
} else if (pl->mac_link_dropped) {
link_state.link = false;
retrigger = true;
} else {
switch (pl->cur_link_an_mode) {
case MLO_AN_PHY:
@@ -673,6 +675,19 @@ static void phylink_resolve(struct work_struct *w)
case MLO_AN_INBAND:
phylink_mac_pcs_get_state(pl, &link_state);

/* The PCS may have a latching link-fail indicator.
 * If the link was up, bring the link down and
 * re-trigger the resolve. Otherwise, re-read the
 * PCS state to get the current status of the link.
 */
if (!link_state.link) {
if (cur_link_state)
retrigger = true;
else
phylink_mac_pcs_get_state(pl,
&link_state);
}

/* If we have a phy, the "up" state is the union of
 * both the PHY and the MAC */
if (pl->phydev)
@@ -680,6 +695,15 @@ static void phylink_resolve(struct work_struct *w)

/* Only update if the PHY link is up */
if (pl->phydev && pl->phy_state.link) {
/* If the interface has changed, force a
 * link down event if the link isn't already
 * down, and re-resolve.
 */
if (link_state.interface !=
pl->phy_state.interface) {
retrigger = true;
link_state.link = false;
}
link_state.interface = pl->phy_state.interface;

/* If we have a PHY, we need to update with
@@ -721,7 +745,7 @@ static void phylink_resolve(struct work_struct *w)
else
phylink_link_up(pl, link_state);
}
if (!link_state.link && pl->mac_link_dropped) {
if (!link_state.link && retrigger) {
pl->mac_link_dropped = false;
queue_work(system_power_efficient_wq, &pl->resolve);
}

@@ -126,21 +126,17 @@ struct netfront_queue {
|
||||
|
||||
/*
|
||||
* {tx,rx}_skbs store outstanding skbuffs. Free tx_skb entries
|
||||
* are linked from tx_skb_freelist through skb_entry.link.
|
||||
*
|
||||
* NB. Freelist index entries are always going to be less than
|
||||
* PAGE_OFFSET, whereas pointers to skbs will always be equal or
|
||||
* greater than PAGE_OFFSET: we use this property to distinguish
|
||||
* them.
|
||||
* are linked from tx_skb_freelist through tx_link.
|
||||
*/
|
||||
union skb_entry {
|
||||
struct sk_buff *skb;
|
||||
unsigned long link;
|
||||
} tx_skbs[NET_TX_RING_SIZE];
|
||||
struct sk_buff *tx_skbs[NET_TX_RING_SIZE];
|
||||
unsigned short tx_link[NET_TX_RING_SIZE];
|
||||
#define TX_LINK_NONE 0xffff
|
||||
#define TX_PENDING 0xfffe
|
||||
grant_ref_t gref_tx_head;
|
||||
grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
|
||||
struct page *grant_tx_page[NET_TX_RING_SIZE];
|
||||
unsigned tx_skb_freelist;
|
||||
unsigned int tx_pend_queue;
|
||||
|
||||
spinlock_t rx_lock ____cacheline_aligned_in_smp;
|
||||
struct xen_netif_rx_front_ring rx;
|
||||
@@ -173,6 +169,9 @@ struct netfront_info {
|
||||
bool netback_has_xdp_headroom;
|
||||
bool netfront_xdp_enabled;
|
||||
|
||||
/* Is device behaving sane? */
|
||||
bool broken;
|
||||
|
||||
atomic_t rx_gso_checksum_fixup;
|
||||
};
|
||||
|
||||
@@ -181,33 +180,25 @@ struct netfront_rx_info {
|
||||
struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX - 1];
|
||||
};
|
||||
|
||||
static void skb_entry_set_link(union skb_entry *list, unsigned short id)
|
||||
{
|
||||
list->link = id;
|
||||
}
|
||||
|
||||
static int skb_entry_is_link(const union skb_entry *list)
|
||||
{
|
||||
BUILD_BUG_ON(sizeof(list->skb) != sizeof(list->link));
|
||||
return (unsigned long)list->skb < PAGE_OFFSET;
|
||||
}
|
||||
|
||||
/*
|
||||
* Access macros for acquiring freeing slots in tx_skbs[].
|
||||
*/
|
||||
|
||||
static void add_id_to_freelist(unsigned *head, union skb_entry *list,
|
||||
unsigned short id)
|
||||
static void add_id_to_list(unsigned *head, unsigned short *list,
|
||||
unsigned short id)
|
||||
{
|
||||
skb_entry_set_link(&list[id], *head);
|
||||
list[id] = *head;
|
||||
*head = id;
|
||||
}
|
||||
|
||||
static unsigned short get_id_from_freelist(unsigned *head,
|
||||
union skb_entry *list)
|
||||
static unsigned short get_id_from_list(unsigned *head, unsigned short *list)
|
||||
{
|
||||
unsigned int id = *head;
|
||||
*head = list[id].link;
|
||||
|
||||
if (id != TX_LINK_NONE) {
|
||||
*head = list[id];
|
||||
list[id] = TX_LINK_NONE;
|
||||
}
|
||||
return id;
|
||||
}
|
||||
|
||||
@@ -363,7 +354,7 @@ static int xennet_open(struct net_device *dev)
|
||||
unsigned int i = 0;
|
||||
struct netfront_queue *queue = NULL;
|
||||
|
||||
if (!np->queues)
|
||||
if (!np->queues || np->broken)
|
||||
return -ENODEV;
|
||||
|
||||
for (i = 0; i < num_queues; ++i) {
|
||||
@@ -391,27 +382,47 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
||||
unsigned short id;
|
||||
struct sk_buff *skb;
|
||||
bool more_to_do;
|
||||
const struct device *dev = &queue->info->netdev->dev;
|
||||
|
||||
BUG_ON(!netif_carrier_ok(queue->info->netdev));
|
||||
|
||||
do {
|
||||
prod = queue->tx.sring->rsp_prod;
|
||||
if (RING_RESPONSE_PROD_OVERFLOW(&queue->tx, prod)) {
|
||||
dev_alert(dev, "Illegal number of responses %u\n",
|
||||
prod - queue->tx.rsp_cons);
|
||||
goto err;
|
||||
}
|
||||
rmb(); /* Ensure we see responses up to 'rp'. */
|
||||
|
||||
for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
|
||||
struct xen_netif_tx_response *txrsp;
|
||||
struct xen_netif_tx_response txrsp;
|
||||
|
||||
txrsp = RING_GET_RESPONSE(&queue->tx, cons);
|
||||
if (txrsp->status == XEN_NETIF_RSP_NULL)
|
||||
RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
|
||||
if (txrsp.status == XEN_NETIF_RSP_NULL)
|
||||
continue;
|
||||
|
||||
id = txrsp->id;
|
||||
skb = queue->tx_skbs[id].skb;
|
||||
id = txrsp.id;
|
||||
if (id >= RING_SIZE(&queue->tx)) {
|
||||
dev_alert(dev,
|
||||
"Response has incorrect id (%u)\n",
|
||||
id);
|
||||
goto err;
|
||||
}
|
||||
if (queue->tx_link[id] != TX_PENDING) {
|
||||
dev_alert(dev,
|
||||
"Response for inactive request\n");
|
||||
goto err;
|
||||
}
|
||||
|
||||
queue->tx_link[id] = TX_LINK_NONE;
|
||||
skb = queue->tx_skbs[id];
|
||||
queue->tx_skbs[id] = NULL;
|
||||
if (unlikely(gnttab_query_foreign_access(
|
||||
queue->grant_tx_ref[id]) != 0)) {
|
||||
pr_alert("%s: warning -- grant still in use by backend domain\n",
|
||||
__func__);
|
||||
BUG();
|
||||
dev_alert(dev,
|
||||
"Grant still in use by backend domain\n");
|
||||
goto err;
|
||||
}
|
||||
gnttab_end_foreign_access_ref(
|
||||
queue->grant_tx_ref[id], GNTMAP_readonly);
|
||||
@@ -419,7 +430,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
||||
&queue->gref_tx_head, queue->grant_tx_ref[id]);
|
||||
queue->grant_tx_ref[id] = GRANT_INVALID_REF;
|
||||
queue->grant_tx_page[id] = NULL;
|
||||
add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
|
||||
add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, id);
|
||||
dev_kfree_skb_irq(skb);
|
||||
}
|
||||
|
||||
@@ -429,13 +440,20 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
|
||||
} while (more_to_do);
|
||||
|
||||
xennet_maybe_wake_tx(queue);
|
||||
|
||||
return;
|
||||
|
||||
err:
|
||||
queue->info->broken = true;
|
||||
dev_alert(dev, "Disabled for further use\n");
|
||||
}
|
||||
|
||||
struct xennet_gnttab_make_txreq {
|
||||
struct netfront_queue *queue;
|
||||
struct sk_buff *skb;
|
||||
struct page *page;
|
||||
struct xen_netif_tx_request *tx; /* Last request */
|
||||
struct xen_netif_tx_request *tx; /* Last request on ring page */
|
||||
struct xen_netif_tx_request tx_local; /* Last request local copy*/
|
||||
unsigned int size;
|
||||
};
|
||||
|
||||
@@ -451,7 +469,7 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
|
||||
struct netfront_queue *queue = info->queue;
|
||||
struct sk_buff *skb = info->skb;
|
||||
|
||||
id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
|
||||
id = get_id_from_list(&queue->tx_skb_freelist, queue->tx_link);
|
||||
tx = RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
|
||||
ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
|
||||
WARN_ON_ONCE(IS_ERR_VALUE((unsigned long)(int)ref));
|
||||
@@ -459,34 +477,37 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
|
||||
gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
|
||||
gfn, GNTMAP_readonly);
|
||||
|
||||
queue->tx_skbs[id].skb = skb;
|
||||
queue->tx_skbs[id] = skb;
|
||||
queue->grant_tx_page[id] = page;
|
||||
queue->grant_tx_ref[id] = ref;
|
||||
|
||||
tx->id = id;
|
||||
tx->gref = ref;
|
||||
tx->offset = offset;
|
||||
tx->size = len;
|
||||
tx->flags = 0;
|
||||
info->tx_local.id = id;
|
||||
info->tx_local.gref = ref;
|
||||
info->tx_local.offset = offset;
|
||||
info->tx_local.size = len;
|
||||
info->tx_local.flags = 0;
|
||||
|
||||
*tx = info->tx_local;
|
||||
|
||||
/*
|
||||
* Put the request in the pending queue, it will be set to be pending
|
||||
* when the producer index is about to be raised.
|
||||
*/
|
||||
add_id_to_list(&queue->tx_pend_queue, queue->tx_link, id);
|
||||
|
||||
info->tx = tx;
|
||||
info->size += tx->size;
|
||||
info->size += info->tx_local.size;
|
||||
}
|
||||
|
||||
static struct xen_netif_tx_request *xennet_make_first_txreq(
|
||||
struct netfront_queue *queue, struct sk_buff *skb,
|
||||
struct page *page, unsigned int offset, unsigned int len)
|
||||
struct xennet_gnttab_make_txreq *info,
|
||||
unsigned int offset, unsigned int len)
|
||||
{
|
||||
struct xennet_gnttab_make_txreq info = {
|
||||
.queue = queue,
|
||||
.skb = skb,
|
||||
.page = page,
|
||||
.size = 0,
|
||||
};
|
||||
info->size = 0;
|
||||
|
||||
gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
|
||||
gnttab_for_one_grant(info->page, offset, len, xennet_tx_setup_grant, info);
|
||||
|
||||
return info.tx;
|
||||
return info->tx;
|
||||
}
|
||||
|
||||
static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
|
||||
@@ -499,35 +520,27 @@ static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
|
||||
xennet_tx_setup_grant(gfn, offset, len, data);
|
||||
}
|
||||
|
||||
static struct xen_netif_tx_request *xennet_make_txreqs(
|
||||
struct netfront_queue *queue, struct xen_netif_tx_request *tx,
|
||||
struct sk_buff *skb, struct page *page,
|
||||
static void xennet_make_txreqs(
|
||||
struct xennet_gnttab_make_txreq *info,
|
||||
struct page *page,
|
||||
unsigned int offset, unsigned int len)
|
||||
{
|
||||
struct xennet_gnttab_make_txreq info = {
|
||||
.queue = queue,
|
||||
.skb = skb,
|
||||
.tx = tx,
|
||||
};
|
||||
|
||||
/* Skip unused frames from start of page */
|
||||
page += offset >> PAGE_SHIFT;
|
||||
offset &= ~PAGE_MASK;
|
||||
|
||||
while (len) {
|
||||
info.page = page;
|
||||
info.size = 0;
|
||||
info->page = page;
|
||||
info->size = 0;
|
||||
|
||||
gnttab_foreach_grant_in_range(page, offset, len,
|
||||
xennet_make_one_txreq,
|
||||
&info);
|
||||
info);
|
||||
|
||||
page++;
|
||||
offset = 0;
|
||||
len -= info.size;
|
||||
len -= info->size;
|
||||
}
|
||||
|
||||
return info.tx;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -574,19 +587,34 @@ static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
|
||||
return queue_idx;
|
||||
}
|
||||
|
||||
static void xennet_mark_tx_pending(struct netfront_queue *queue)
|
||||
{
|
||||
unsigned int i;
|
||||
|
||||
while ((i = get_id_from_list(&queue->tx_pend_queue, queue->tx_link)) !=
|
||||
TX_LINK_NONE)
|
||||
queue->tx_link[i] = TX_PENDING;
|
||||
}
|
||||
|
||||
static int xennet_xdp_xmit_one(struct net_device *dev,
|
||||
struct netfront_queue *queue,
|
||||
struct xdp_frame *xdpf)
|
||||
{
|
||||
struct netfront_info *np = netdev_priv(dev);
|
||||
struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
|
||||
struct xennet_gnttab_make_txreq info = {
|
||||
.queue = queue,
|
||||
.skb = NULL,
|
||||
.page = virt_to_page(xdpf->data),
|
||||
};
|
||||
int notify;
|
||||
|
||||
xennet_make_first_txreq(queue, NULL,
|
||||
virt_to_page(xdpf->data),
|
||||
xennet_make_first_txreq(&info,
|
||||
offset_in_page(xdpf->data),
|
||||
xdpf->len);
|
||||
|
||||
xennet_mark_tx_pending(queue);
|
||||
|
||||
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
|
||||
if (notify)
|
||||
notify_remote_via_irq(queue->tx_irq);
|
||||
@@ -611,6 +639,8 @@ static int xennet_xdp_xmit(struct net_device *dev, int n,
|
||||
int drops = 0;
|
||||
int i, err;
|
||||
|
||||
if (unlikely(np->broken))
|
||||
return -ENODEV;
|
||||
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
|
||||
return -EINVAL;
|
||||
|
||||
@@ -640,7 +670,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
{
|
||||
struct netfront_info *np = netdev_priv(dev);
|
||||
struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
|
||||
struct xen_netif_tx_request *tx, *first_tx;
|
||||
struct xen_netif_tx_request *first_tx;
|
||||
unsigned int i;
|
||||
int notify;
|
||||
int slots;
|
||||
@@ -649,6 +679,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
unsigned int len;
|
||||
unsigned long flags;
|
||||
struct netfront_queue *queue = NULL;
|
||||
struct xennet_gnttab_make_txreq info = { };
|
||||
unsigned int num_queues = dev->real_num_tx_queues;
|
||||
u16 queue_index;
|
||||
struct sk_buff *nskb;
|
||||
@@ -656,6 +687,8 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
/* Drop the packet if no queues are set up */
|
||||
if (num_queues < 1)
|
||||
goto drop;
|
||||
if (unlikely(np->broken))
|
||||
goto drop;
|
||||
/* Determine which queue to transmit this SKB on */
|
||||
queue_index = skb_get_queue_mapping(skb);
|
||||
queue = &np->queues[queue_index];
|
||||
@@ -706,21 +739,24 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
}
|
||||
|
||||
/* First request for the linear area. */
|
||||
first_tx = tx = xennet_make_first_txreq(queue, skb,
|
||||
page, offset, len);
|
||||
offset += tx->size;
|
||||
info.queue = queue;
|
||||
info.skb = skb;
|
||||
info.page = page;
|
||||
first_tx = xennet_make_first_txreq(&info, offset, len);
|
||||
offset += info.tx_local.size;
|
||||
if (offset == PAGE_SIZE) {
|
||||
page++;
|
||||
offset = 0;
|
||||
}
|
||||
len -= tx->size;
|
||||
len -= info.tx_local.size;
|
||||
|
||||
if (skb->ip_summed == CHECKSUM_PARTIAL)
|
||||
/* local packet? */
|
||||
tx->flags |= XEN_NETTXF_csum_blank | XEN_NETTXF_data_validated;
|
||||
first_tx->flags |= XEN_NETTXF_csum_blank |
|
||||
XEN_NETTXF_data_validated;
|
||||
else if (skb->ip_summed == CHECKSUM_UNNECESSARY)
|
||||
/* remote but checksummed. */
|
||||
tx->flags |= XEN_NETTXF_data_validated;
|
||||
first_tx->flags |= XEN_NETTXF_data_validated;
|
||||
|
||||
/* Optional extra info after the first request. */
|
||||
if (skb_shinfo(skb)->gso_size) {
|
||||
@@ -729,7 +765,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
gso = (struct xen_netif_extra_info *)
|
||||
RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
|
||||
|
||||
tx->flags |= XEN_NETTXF_extra_info;
|
||||
first_tx->flags |= XEN_NETTXF_extra_info;
|
||||
|
||||
gso->u.gso.size = skb_shinfo(skb)->gso_size;
|
||||
gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
|
||||
@@ -743,12 +779,12 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
}
|
||||
|
||||
/* Requests for the rest of the linear area. */
|
||||
tx = xennet_make_txreqs(queue, tx, skb, page, offset, len);
|
||||
xennet_make_txreqs(&info, page, offset, len);
|
||||
|
||||
/* Requests for all the frags. */
|
||||
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
|
||||
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
|
||||
tx = xennet_make_txreqs(queue, tx, skb, skb_frag_page(frag),
|
||||
xennet_make_txreqs(&info, skb_frag_page(frag),
|
||||
skb_frag_off(frag),
|
||||
skb_frag_size(frag));
|
||||
}
|
||||
@@ -759,6 +795,8 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
|
||||
/* timestamp packet in software */
|
||||
skb_tx_timestamp(skb);
|
||||
|
||||
xennet_mark_tx_pending(queue);
|
||||
|
||||
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
|
||||
if (notify)
|
||||
notify_remote_via_irq(queue->tx_irq);
|
||||
@@ -816,7 +854,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
|
||||
RING_IDX rp)
|
||||
|
||||
{
|
||||
struct xen_netif_extra_info *extra;
|
||||
struct xen_netif_extra_info extra;
|
||||
struct device *dev = &queue->info->netdev->dev;
|
||||
RING_IDX cons = queue->rx.rsp_cons;
|
||||
int err = 0;
|
||||
@@ -832,24 +870,22 @@ static int xennet_get_extras(struct netfront_queue *queue,
|
||||
break;
|
||||
}
|
||||
|
||||
extra = (struct xen_netif_extra_info *)
|
||||
RING_GET_RESPONSE(&queue->rx, ++cons);
|
||||
RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
|
||||
|
||||
if (unlikely(!extra->type ||
|
||||
extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
|
||||
if (unlikely(!extra.type ||
|
||||
extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
|
||||
if (net_ratelimit())
|
||||
dev_warn(dev, "Invalid extra type: %d\n",
|
||||
extra->type);
|
||||
extra.type);
|
||||
err = -EINVAL;
|
||||
} else {
|
||||
memcpy(&extras[extra->type - 1], extra,
|
||||
sizeof(*extra));
|
||||
extras[extra.type - 1] = extra;
|
||||
}
|
||||
|
||||
skb = xennet_get_rx_skb(queue, cons);
|
||||
ref = xennet_get_rx_ref(queue, cons);
|
||||
xennet_move_rx_slot(queue, skb, ref);
|
||||
} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
|
||||
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
|
||||
|
||||
queue->rx.rsp_cons = cons;
|
||||
return err;
|
||||
@@ -907,7 +943,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
|
||||
struct sk_buff_head *list,
|
||||
bool *need_xdp_flush)
|
||||
{
|
||||
struct xen_netif_rx_response *rx = &rinfo->rx;
|
||||
struct xen_netif_rx_response *rx = &rinfo->rx, rx_local;
|
||||
int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
|
||||
RING_IDX cons = queue->rx.rsp_cons;
|
||||
struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
|
||||
@@ -991,7 +1027,8 @@ next:
|
||||
break;
|
||||
}
|
||||
|
||||
rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
|
||||
RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx_local);
|
||||
rx = &rx_local;
|
||||
skb = xennet_get_rx_skb(queue, cons + slots);
|
||||
ref = xennet_get_rx_ref(queue, cons + slots);
|
||||
slots++;
|
||||
@@ -1046,10 +1083,11 @@ static int xennet_fill_frags(struct netfront_queue *queue,
|
||||
struct sk_buff *nskb;
|
||||
|
||||
while ((nskb = __skb_dequeue(list))) {
|
||||
struct xen_netif_rx_response *rx =
|
||||
RING_GET_RESPONSE(&queue->rx, ++cons);
|
||||
struct xen_netif_rx_response rx;
|
||||
skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
|
||||
|
||||
RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
|
||||
|
||||
if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
|
||||
unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
|
||||
|
||||
@@ -1064,7 +1102,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
|
||||
|
||||
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
|
||||
skb_frag_page(nfrag),
|
||||
rx->offset, rx->status, PAGE_SIZE);
|
||||
rx.offset, rx.status, PAGE_SIZE);
|
||||
|
||||
skb_shinfo(nskb)->nr_frags = 0;
|
||||
kfree_skb(nskb);
|
||||
@@ -1158,12 +1196,19 @@ static int xennet_poll(struct napi_struct *napi, int budget)
|
||||
skb_queue_head_init(&tmpq);
|
||||
|
||||
rp = queue->rx.sring->rsp_prod;
|
||||
if (RING_RESPONSE_PROD_OVERFLOW(&queue->rx, rp)) {
|
||||
dev_alert(&dev->dev, "Illegal number of responses %u\n",
|
||||
rp - queue->rx.rsp_cons);
|
||||
queue->info->broken = true;
|
||||
spin_unlock(&queue->rx_lock);
|
||||
return 0;
|
||||
}
|
||||
rmb(); /* Ensure we see queued responses up to 'rp'. */
|
||||
|
||||
i = queue->rx.rsp_cons;
|
||||
work_done = 0;
|
||||
while ((i != rp) && (work_done < budget)) {
|
||||
memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
|
||||
RING_COPY_RESPONSE(&queue->rx, i, rx);
|
||||
memset(extras, 0, sizeof(rinfo.extras));
|
||||
|
||||
err = xennet_get_responses(queue, &rinfo, rp, &tmpq,
|
||||
@@ -1288,17 +1333,18 @@ static void xennet_release_tx_bufs(struct netfront_queue *queue)
|
||||
|
||||
for (i = 0; i < NET_TX_RING_SIZE; i++) {
|
||||
/* Skip over entries which are actually freelist references */
|
||||
if (skb_entry_is_link(&queue->tx_skbs[i]))
|
||||
if (!queue->tx_skbs[i])
|
||||
continue;
|
||||
|
||||
skb = queue->tx_skbs[i].skb;
|
||||
skb = queue->tx_skbs[i];
|
||||
queue->tx_skbs[i] = NULL;
|
||||
get_page(queue->grant_tx_page[i]);
|
||||
gnttab_end_foreign_access(queue->grant_tx_ref[i],
|
||||
GNTMAP_readonly,
|
||||
(unsigned long)page_address(queue->grant_tx_page[i]));
|
||||
queue->grant_tx_page[i] = NULL;
|
||||
queue->grant_tx_ref[i] = GRANT_INVALID_REF;
|
||||
add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
|
||||
add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, i);
|
||||
dev_kfree_skb_irq(skb);
|
||||
}
|
||||
}
|
||||
@@ -1378,6 +1424,9 @@ static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
|
||||
struct netfront_queue *queue = dev_id;
|
||||
unsigned long flags;
|
||||
|
||||
if (queue->info->broken)
|
||||
return IRQ_HANDLED;
|
||||
|
||||
spin_lock_irqsave(&queue->tx_lock, flags);
|
||||
xennet_tx_buf_gc(queue);
|
||||
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
||||
@@ -1390,6 +1439,9 @@ static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
|
||||
struct netfront_queue *queue = dev_id;
|
||||
struct net_device *dev = queue->info->netdev;
|
||||
|
||||
if (queue->info->broken)
|
||||
return IRQ_HANDLED;
|
||||
|
||||
if (likely(netif_carrier_ok(dev) &&
|
||||
RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
|
||||
napi_schedule(&queue->napi);
|
||||
@@ -1411,6 +1463,10 @@ static void xennet_poll_controller(struct net_device *dev)
|
||||
struct netfront_info *info = netdev_priv(dev);
|
||||
unsigned int num_queues = dev->real_num_tx_queues;
|
||||
unsigned int i;
|
||||
|
||||
if (info->broken)
|
||||
return;
|
||||
|
||||
for (i = 0; i < num_queues; ++i)
|
||||
xennet_interrupt(0, &info->queues[i]);
|
||||
}
|
||||
@@ -1482,6 +1538,11 @@ static int xennet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
|
||||
|
||||
static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
|
||||
{
|
||||
struct netfront_info *np = netdev_priv(dev);
|
||||
|
||||
if (np->broken)
|
||||
return -ENODEV;
|
||||
|
||||
switch (xdp->command) {
|
||||
case XDP_SETUP_PROG:
|
||||
return xennet_xdp_set(dev, xdp->prog, xdp->extack);
|
||||
@@ -1859,13 +1920,15 @@ static int xennet_init_queue(struct netfront_queue *queue)
|
||||
snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
|
||||
devid, queue->id);
|
||||
|
||||
/* Initialise tx_skbs as a free chain containing every entry. */
|
||||
/* Initialise tx_skb_freelist as a free chain containing every entry. */
|
||||
queue->tx_skb_freelist = 0;
|
||||
queue->tx_pend_queue = TX_LINK_NONE;
|
||||
for (i = 0; i < NET_TX_RING_SIZE; i++) {
|
||||
skb_entry_set_link(&queue->tx_skbs[i], i+1);
|
||||
queue->tx_link[i] = i + 1;
|
||||
queue->grant_tx_ref[i] = GRANT_INVALID_REF;
|
||||
queue->grant_tx_page[i] = NULL;
|
||||
}
|
||||
queue->tx_link[NET_TX_RING_SIZE - 1] = TX_LINK_NONE;
|
||||
|
||||
/* Clear out rx_skbs */
|
||||
for (i = 0; i < NET_RX_RING_SIZE; i++) {
|
||||
@@ -2134,6 +2197,9 @@ static int talk_to_netback(struct xenbus_device *dev,
|
||||
if (info->queues)
|
||||
xennet_destroy_queues(info);
|
||||
|
||||
/* For the case of a reconnect reset the "broken" indicator. */
|
||||
info->broken = false;
|
||||
|
||||
err = xennet_create_queues(info, &num_queues);
|
||||
if (err < 0) {
|
||||
xenbus_dev_fatal(dev, err, "creating queues");
|
||||
|
@@ -8,6 +8,7 @@
|
||||
#include <linux/uio.h>
|
||||
#include <linux/falloc.h>
|
||||
#include <linux/file.h>
|
||||
#include <linux/fs.h>
|
||||
#include "nvmet.h"
|
||||
|
||||
#define NVMET_MAX_MPOOL_BVEC 16
|
||||
@@ -266,7 +267,8 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
|
||||
|
||||
if (req->ns->buffered_io) {
|
||||
if (likely(!req->f.mpool_alloc) &&
|
||||
nvmet_file_execute_io(req, IOCB_NOWAIT))
|
||||
(req->ns->file->f_mode & FMODE_NOWAIT) &&
|
||||
nvmet_file_execute_io(req, IOCB_NOWAIT))
|
||||
return;
|
||||
nvmet_file_submit_buffered_io(req);
|
||||
} else
|
||||
|
@@ -688,10 +688,11 @@ static int nvmet_try_send_r2t(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
|
||||
static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
|
||||
{
|
||||
struct nvmet_tcp_queue *queue = cmd->queue;
|
||||
int left = NVME_TCP_DIGEST_LENGTH - cmd->offset;
|
||||
struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
|
||||
struct kvec iov = {
|
||||
.iov_base = (u8 *)&cmd->exp_ddgst + cmd->offset,
|
||||
.iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset
|
||||
.iov_len = left
|
||||
};
|
||||
int ret;
|
||||
|
||||
@@ -705,6 +706,10 @@ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
|
||||
return ret;
|
||||
|
||||
cmd->offset += ret;
|
||||
left -= ret;
|
||||
|
||||
if (left)
|
||||
return -EAGAIN;
|
||||
|
||||
if (queue->nvme_sq.sqhd_disabled) {
|
||||
cmd->queue->snd_cmd = NULL;
|
||||
|
@@ -306,11 +306,6 @@ static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg)
|
||||
return readl(pcie->base + reg);
|
||||
}
|
||||
|
||||
static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
|
||||
{
|
||||
return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
|
||||
}
|
||||
|
||||
static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
|
||||
{
|
||||
u32 val;
|
||||
@@ -384,16 +379,9 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
|
||||
|
||||
static void advk_pcie_issue_perst(struct advk_pcie *pcie)
|
||||
{
|
||||
u32 reg;
|
||||
|
||||
if (!pcie->reset_gpio)
|
||||
return;
|
||||
|
||||
/* PERST does not work for some cards when link training is enabled */
|
||||
reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
|
||||
reg &= ~LINK_TRAINING_EN;
|
||||
advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
|
||||
|
||||
/* 10ms delay is needed for some cards */
|
||||
dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
|
||||
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
|
||||
@@ -401,53 +389,46 @@ static void advk_pcie_issue_perst(struct advk_pcie *pcie)
|
||||
gpiod_set_value_cansleep(pcie->reset_gpio, 0);
|
||||
}
|
||||
|
||||
static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
|
||||
static void advk_pcie_train_link(struct advk_pcie *pcie)
|
||||
{
|
||||
int ret, neg_gen;
|
||||
struct device *dev = &pcie->pdev->dev;
|
||||
u32 reg;
|
||||
int ret;
|
||||
|
||||
/* Setup link speed */
|
||||
/*
|
||||
* Setup PCIe rev / gen compliance based on device tree property
|
||||
* 'max-link-speed' which also forces maximal link speed.
|
||||
*/
|
||||
reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
|
||||
reg &= ~PCIE_GEN_SEL_MSK;
|
||||
if (gen == 3)
|
||||
if (pcie->link_gen == 3)
|
||||
reg |= SPEED_GEN_3;
|
||||
else if (gen == 2)
|
||||
else if (pcie->link_gen == 2)
|
||||
reg |= SPEED_GEN_2;
|
||||
else
|
||||
reg |= SPEED_GEN_1;
|
||||
advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
|
||||
|
||||
/*
|
||||
* Enable link training. This is not needed in every call to this
|
||||
* function, just once suffices, but it does not break anything either.
|
||||
* Set maximal link speed value also into PCIe Link Control 2 register.
|
||||
* Armada 3700 Functional Specification says that default value is based
|
||||
* on SPEED_GEN but tests showed that default value is always 8.0 GT/s.
|
||||
*/
|
||||
reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
|
||||
reg &= ~PCI_EXP_LNKCTL2_TLS;
|
||||
if (pcie->link_gen == 3)
|
||||
reg |= PCI_EXP_LNKCTL2_TLS_8_0GT;
|
||||
else if (pcie->link_gen == 2)
|
||||
reg |= PCI_EXP_LNKCTL2_TLS_5_0GT;
|
||||
else
|
||||
reg |= PCI_EXP_LNKCTL2_TLS_2_5GT;
|
||||
advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
|
||||
|
||||
/* Enable link training after selecting PCIe generation */
|
||||
reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
|
||||
reg |= LINK_TRAINING_EN;
|
||||
advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
|
||||
|
||||
/*
|
||||
* Start link training immediately after enabling it.
|
||||
* This solves problems for some buggy cards.
|
||||
*/
|
||||
reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
|
||||
reg |= PCI_EXP_LNKCTL_RL;
|
||||
advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
|
||||
|
||||
ret = advk_pcie_wait_for_link(pcie);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA);
|
||||
neg_gen = reg & PCI_EXP_LNKSTA_CLS;
|
||||
|
||||
return neg_gen;
|
||||
}
|
||||
|
||||
static void advk_pcie_train_link(struct advk_pcie *pcie)
|
||||
{
|
||||
struct device *dev = &pcie->pdev->dev;
|
||||
int neg_gen = -1, gen;
|
||||
|
||||
/*
|
||||
* Reset PCIe card via PERST# signal. Some cards are not detected
|
||||
* during link training when they are in some non-initial state.
|
||||
@@ -458,41 +439,18 @@ static void advk_pcie_train_link(struct advk_pcie *pcie)
|
||||
* PERST# signal could have been asserted by pinctrl subsystem before
|
||||
* probe() callback has been called or issued explicitly by reset gpio
|
||||
* function advk_pcie_issue_perst(), making the endpoint going into
|
||||
* fundamental reset. As required by PCI Express spec a delay for at
|
||||
* least 100ms after such a reset before link training is needed.
|
||||
* fundamental reset. As required by PCI Express spec (PCI Express
|
||||
* Base Specification, REV. 4.0 PCI Express, February 19 2014, 6.6.1
|
||||
* Conventional Reset) a delay for at least 100ms after such a reset
|
||||
* before sending a Configuration Request to the device is needed.
|
||||
* So wait until PCIe link is up. Function advk_pcie_wait_for_link()
|
||||
* waits for link at least 900ms.
|
||||
*/
|
||||
msleep(PCI_PM_D3COLD_WAIT);
|
||||
|
||||
/*
|
||||
* Try link training at link gen specified by device tree property
|
||||
* 'max-link-speed'. If this fails, iteratively train at lower gen.
|
||||
*/
|
||||
for (gen = pcie->link_gen; gen > 0; --gen) {
|
||||
neg_gen = advk_pcie_train_at_gen(pcie, gen);
|
||||
if (neg_gen > 0)
|
||||
break;
|
||||
}
|
||||
|
||||
if (neg_gen < 0)
|
||||
goto err;
|
||||
|
||||
/*
|
||||
* After successful training if negotiated gen is lower than requested,
|
||||
* train again on negotiated gen. This solves some stability issues for
|
||||
* some buggy gen1 cards.
|
||||
*/
|
||||
if (neg_gen < gen) {
|
||||
gen = neg_gen;
|
||||
neg_gen = advk_pcie_train_at_gen(pcie, gen);
|
||||
}
|
||||
|
||||
if (neg_gen == gen) {
|
||||
dev_info(dev, "link up at gen %i\n", gen);
|
||||
return;
|
||||
}
|
||||
|
||||
err:
|
||||
dev_err(dev, "link never came up\n");
|
||||
ret = advk_pcie_wait_for_link(pcie);
|
||||
if (ret < 0)
|
||||
dev_err(dev, "link never came up\n");
|
||||
else
|
||||
dev_info(dev, "link up\n");
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -692,6 +650,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
u32 reg;
|
||||
unsigned int status;
|
||||
char *strcomp_status, *str_posted;
|
||||
int ret;
|
||||
|
||||
reg = advk_readl(pcie, PIO_STAT);
|
||||
status = (reg & PIO_COMPLETION_STATUS_MASK) >>
|
||||
@@ -716,6 +675,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
case PIO_COMPLETION_STATUS_OK:
|
||||
if (reg & PIO_ERR_STATUS) {
|
||||
strcomp_status = "COMP_ERR";
|
||||
ret = -EFAULT;
|
||||
break;
|
||||
}
|
||||
/* Get the read result */
|
||||
@@ -723,9 +683,11 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
*val = advk_readl(pcie, PIO_RD_DATA);
|
||||
/* No error */
|
||||
strcomp_status = NULL;
|
||||
ret = 0;
|
||||
break;
|
||||
case PIO_COMPLETION_STATUS_UR:
|
||||
strcomp_status = "UR";
|
||||
ret = -EOPNOTSUPP;
|
||||
break;
|
||||
case PIO_COMPLETION_STATUS_CRS:
|
||||
if (allow_crs && val) {
|
||||
@@ -743,6 +705,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
*/
|
||||
*val = CFG_RD_CRS_VAL;
|
||||
strcomp_status = NULL;
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
/* PCIe r4.0, sec 2.3.2, says:
|
||||
@@ -758,21 +721,24 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
* Request and taking appropriate action, e.g., complete the
|
||||
* Request to the host as a failed transaction.
|
||||
*
|
||||
* To simplify implementation do not re-issue the Configuration
|
||||
* Request and complete the Request as a failed transaction.
|
||||
* So return -EAGAIN and caller (pci-aardvark.c driver) will
|
||||
* re-issue request again up to the PIO_RETRY_CNT retries.
|
||||
*/
|
||||
strcomp_status = "CRS";
|
||||
ret = -EAGAIN;
|
||||
break;
|
||||
case PIO_COMPLETION_STATUS_CA:
|
||||
strcomp_status = "CA";
|
||||
ret = -ECANCELED;
|
||||
break;
|
||||
default:
|
||||
strcomp_status = "Unknown";
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
if (!strcomp_status)
|
||||
return 0;
|
||||
return ret;
|
||||
|
||||
if (reg & PIO_NON_POSTED_REQ)
|
||||
str_posted = "Non-posted";
|
||||
@@ -782,7 +748,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
|
||||
dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
|
||||
str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
|
||||
|
||||
return -EFAULT;
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int advk_pcie_wait_pio(struct advk_pcie *pcie)
|
||||
@@ -790,13 +756,13 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
|
||||
struct device *dev = &pcie->pdev->dev;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < PIO_RETRY_CNT; i++) {
|
||||
for (i = 1; i <= PIO_RETRY_CNT; i++) {
|
||||
u32 start, isr;
|
||||
|
||||
start = advk_readl(pcie, PIO_START);
|
||||
isr = advk_readl(pcie, PIO_ISR);
|
||||
if (!start && isr)
|
||||
return 0;
|
||||
return i;
|
||||
udelay(PIO_RETRY_DELAY);
|
||||
}
|
||||
|
||||
@@ -984,7 +950,6 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
|
||||
static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
|
||||
{
|
||||
struct pci_bridge_emul *bridge = &pcie->bridge;
|
||||
int ret;
|
||||
|
||||
bridge->conf.vendor =
|
||||
cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
|
||||
@@ -1004,19 +969,14 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
|
||||
/* Support interrupt A for MSI feature */
|
||||
bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
|
||||
|
||||
/* Indicates supports for Completion Retry Status */
|
||||
bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
|
||||
|
||||
bridge->has_pcie = true;
|
||||
bridge->data = pcie;
|
||||
bridge->ops = &advk_pci_bridge_emul_ops;
|
||||
|
||||
/* PCIe config space can be initialized after pci_bridge_emul_init() */
|
||||
ret = pci_bridge_emul_init(bridge, 0);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
/* Indicates supports for Completion Retry Status */
|
||||
bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
|
||||
|
||||
return 0;
|
||||
return pci_bridge_emul_init(bridge, 0);
|
||||
}
|
||||
|
||||
static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
|
||||
@@ -1068,6 +1028,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
|
||||
int where, int size, u32 *val)
|
||||
{
|
||||
struct advk_pcie *pcie = bus->sysdata;
|
||||
int retry_count;
|
||||
bool allow_crs;
|
||||
u32 reg;
|
||||
int ret;
|
||||
@@ -1090,18 +1051,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
|
||||
(le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
|
||||
PCI_EXP_RTCTL_CRSSVE);
|
||||
|
||||
if (advk_pcie_pio_is_running(pcie)) {
|
||||
/*
|
||||
* If it is possible return Completion Retry Status so caller
|
||||
* tries to issue the request again instead of failing.
|
||||
*/
|
||||
if (allow_crs) {
|
||||
*val = CFG_RD_CRS_VAL;
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
*val = 0xffffffff;
|
||||
return PCIBIOS_SET_FAILED;
|
||||
}
|
||||
if (advk_pcie_pio_is_running(pcie))
|
||||
goto try_crs;
|
||||
|
||||
/* Program the control register */
|
||||
reg = advk_readl(pcie, PIO_CTRL);
|
||||
@@ -1120,30 +1071,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
|
||||
/* Program the data strobe */
|
||||
advk_writel(pcie, 0xf, PIO_WR_DATA_STRB);
|
||||
|
||||
/* Clear PIO DONE ISR and start the transfer */
|
||||
advk_writel(pcie, 1, PIO_ISR);
|
||||
advk_writel(pcie, 1, PIO_START);
|
||||
retry_count = 0;
|
||||
do {
|
||||
/* Clear PIO DONE ISR and start the transfer */
|
||||
advk_writel(pcie, 1, PIO_ISR);
|
||||
advk_writel(pcie, 1, PIO_START);
|
||||
|
||||
ret = advk_pcie_wait_pio(pcie);
|
||||
if (ret < 0) {
|
||||
/*
|
||||
* If it is possible return Completion Retry Status so caller
|
||||
* tries to issue the request again instead of failing.
|
||||
*/
|
||||
if (allow_crs) {
|
||||
*val = CFG_RD_CRS_VAL;
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
*val = 0xffffffff;
|
||||
return PCIBIOS_SET_FAILED;
|
||||
}
|
||||
ret = advk_pcie_wait_pio(pcie);
|
||||
if (ret < 0)
|
||||
goto try_crs;
|
||||
|
||||
/* Check PIO status and get the read result */
|
||||
ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
|
||||
if (ret < 0) {
|
||||
*val = 0xffffffff;
|
||||
return PCIBIOS_SET_FAILED;
|
||||
}
|
||||
retry_count += ret;
|
||||
|
||||
/* Check PIO status and get the read result */
|
||||
ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
|
||||
} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
|
||||
|
||||
if (ret < 0)
|
||||
goto fail;
|
||||
|
||||
if (size == 1)
|
||||
*val = (*val >> (8 * (where & 3))) & 0xff;
|
||||
@@ -1151,6 +1096,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
|
||||
*val = (*val >> (8 * (where & 3))) & 0xffff;
|
||||
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
|
||||
try_crs:
|
||||
/*
|
||||
* If it is possible, return Completion Retry Status so that caller
|
||||
* tries to issue the request again instead of failing.
|
||||
*/
|
||||
if (allow_crs) {
|
||||
*val = CFG_RD_CRS_VAL;
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
|
||||
fail:
|
||||
*val = 0xffffffff;
|
||||
return PCIBIOS_SET_FAILED;
|
||||
}
|
||||
|
||||
static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
|
||||
@@ -1159,6 +1118,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
|
||||
struct advk_pcie *pcie = bus->sysdata;
|
||||
u32 reg;
|
||||
u32 data_strobe = 0x0;
|
||||
int retry_count;
|
||||
int offset;
|
||||
int ret;
|
||||
|
||||
@@ -1200,19 +1160,22 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
|
||||
/* Program the data strobe */
|
||||
advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB);
|
||||
|
||||
/* Clear PIO DONE ISR and start the transfer */
|
||||
advk_writel(pcie, 1, PIO_ISR);
|
||||
advk_writel(pcie, 1, PIO_START);
|
||||
retry_count = 0;
|
||||
do {
|
||||
/* Clear PIO DONE ISR and start the transfer */
|
||||
advk_writel(pcie, 1, PIO_ISR);
|
||||
advk_writel(pcie, 1, PIO_START);
|
||||
|
||||
ret = advk_pcie_wait_pio(pcie);
|
||||
if (ret < 0)
|
||||
return PCIBIOS_SET_FAILED;
|
||||
ret = advk_pcie_wait_pio(pcie);
|
||||
if (ret < 0)
|
||||
return PCIBIOS_SET_FAILED;
|
||||
|
||||
ret = advk_pcie_check_pio_status(pcie, false, NULL);
|
||||
if (ret < 0)
|
||||
return PCIBIOS_SET_FAILED;
|
||||
retry_count += ret;
|
||||
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
ret = advk_pcie_check_pio_status(pcie, false, NULL);
|
||||
} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
|
||||
|
||||
return ret < 0 ? PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
|
||||
static struct pci_ops advk_pcie_ops = {
|
||||
|
@@ -3675,7 +3675,7 @@ _scsih_ublock_io_device(struct MPT3SAS_ADAPTER *ioc, u64 sas_address)

shost_for_each_device(sdev, ioc->shost) {
sas_device_priv_data = sdev->hostdata;
if (!sas_device_priv_data)
if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
continue;
if (sas_device_priv_data->sas_target->sas_address
!= sas_address)

@@ -4628,6 +4628,7 @@ static void zbc_rwp_zone(struct sdebug_dev_info *devip,
|
||||
struct sdeb_zone_state *zsp)
|
||||
{
|
||||
enum sdebug_z_cond zc;
|
||||
struct sdeb_store_info *sip = devip2sip(devip, false);
|
||||
|
||||
if (zbc_zone_is_conv(zsp))
|
||||
return;
|
||||
@@ -4639,6 +4640,10 @@ static void zbc_rwp_zone(struct sdebug_dev_info *devip,
|
||||
if (zsp->z_cond == ZC4_CLOSED)
|
||||
devip->nr_closed--;
|
||||
|
||||
if (zsp->z_wp > zsp->z_start)
|
||||
memset(sip->storep + zsp->z_start * sdebug_sector_size, 0,
|
||||
(zsp->z_wp - zsp->z_start) * sdebug_sector_size);
|
||||
|
||||
zsp->z_non_seq_resource = false;
|
||||
zsp->z_wp = zsp->z_start;
|
||||
zsp->z_cond = ZC1_EMPTY;
|
||||
|
@@ -816,7 +816,7 @@ store_state_field(struct device *dev, struct device_attribute *attr,
|
||||
|
||||
mutex_lock(&sdev->state_mutex);
|
||||
if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
|
||||
ret = count;
|
||||
ret = 0;
|
||||
} else {
|
||||
ret = scsi_device_set_state(sdev, state);
|
||||
if (ret == 0 && state == SDEV_RUNNING)
|
||||
|
@@ -187,7 +187,6 @@ static struct fbtft_display display = {
|
||||
},
|
||||
};
|
||||
|
||||
#ifdef CONFIG_FB_BACKLIGHT
|
||||
static int update_onboard_backlight(struct backlight_device *bd)
|
||||
{
|
||||
struct fbtft_par *par = bl_get_data(bd);
|
||||
@@ -231,9 +230,6 @@ static void register_onboard_backlight(struct fbtft_par *par)
|
||||
if (!par->fbtftops.unregister_backlight)
|
||||
par->fbtftops.unregister_backlight = fbtft_unregister_backlight;
|
||||
}
|
||||
#else
|
||||
static void register_onboard_backlight(struct fbtft_par *par) { };
|
||||
#endif
|
||||
|
||||
FBTFT_REGISTER_DRIVER(DRVNAME, "solomon,ssd1351", &display);
|
||||
|
||||
|
@@ -128,7 +128,6 @@ static int fbtft_request_gpios(struct fbtft_par *par)
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_FB_BACKLIGHT
|
||||
static int fbtft_backlight_update_status(struct backlight_device *bd)
|
||||
{
|
||||
struct fbtft_par *par = bl_get_data(bd);
|
||||
@@ -161,6 +160,7 @@ void fbtft_unregister_backlight(struct fbtft_par *par)
|
||||
par->info->bl_dev = NULL;
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(fbtft_unregister_backlight);
|
||||
|
||||
static const struct backlight_ops fbtft_bl_ops = {
|
||||
.get_brightness = fbtft_backlight_get_brightness,
|
||||
@@ -198,12 +198,7 @@ void fbtft_register_backlight(struct fbtft_par *par)
|
||||
if (!par->fbtftops.unregister_backlight)
|
||||
par->fbtftops.unregister_backlight = fbtft_unregister_backlight;
|
||||
}
|
||||
#else
|
||||
void fbtft_register_backlight(struct fbtft_par *par) { };
|
||||
void fbtft_unregister_backlight(struct fbtft_par *par) { };
|
||||
#endif
|
||||
EXPORT_SYMBOL(fbtft_register_backlight);
|
||||
EXPORT_SYMBOL(fbtft_unregister_backlight);
|
||||
|
||||
static void fbtft_set_addr_win(struct fbtft_par *par, int xs, int ys, int xe,
|
||||
int ye)
|
||||
@@ -853,13 +848,11 @@ int fbtft_register_framebuffer(struct fb_info *fb_info)
|
||||
fb_info->fix.smem_len >> 10, text1,
|
||||
HZ / fb_info->fbdefio->delay, text2);
|
||||
|
||||
#ifdef CONFIG_FB_BACKLIGHT
|
||||
/* Turn on backlight if available */
|
||||
if (fb_info->bl_dev) {
|
||||
fb_info->bl_dev->props.power = FB_BLANK_UNBLANK;
|
||||
fb_info->bl_dev->ops->update_status(fb_info->bl_dev);
|
||||
}
|
||||
#endif
|
||||
|
||||
return 0;
|
||||
|
||||
|
@@ -192,7 +192,11 @@ int gbaudio_remove_component_controls(struct snd_soc_component *component,
|
||||
unsigned int num_controls)
|
||||
{
|
||||
struct snd_card *card = component->card->snd_card;
|
||||
int err;
|
||||
|
||||
return gbaudio_remove_controls(card, component->dev, controls,
|
||||
num_controls, component->name_prefix);
|
||||
down_write(&card->controls_rwsem);
|
||||
err = gbaudio_remove_controls(card, component->dev, controls,
|
||||
num_controls, component->name_prefix);
|
||||
up_write(&card->controls_rwsem);
|
||||
return err;
|
||||
}
|
||||
|
@@ -2551,13 +2551,14 @@ static void _rtl92e_pci_disconnect(struct pci_dev *pdev)
|
||||
free_irq(dev->irq, dev);
|
||||
priv->irq = 0;
|
||||
}
|
||||
free_rtllib(dev);
|
||||
|
||||
if (dev->mem_start != 0) {
|
||||
iounmap((void __iomem *)dev->mem_start);
|
||||
release_mem_region(pci_resource_start(pdev, 1),
|
||||
pci_resource_len(pdev, 1));
|
||||
}
|
||||
|
||||
free_rtllib(dev);
|
||||
} else {
|
||||
priv = rtllib_priv(dev);
|
||||
}
|
||||
|
@@ -86,7 +86,11 @@ static int __write_console(struct xencons_info *xencons,
|
||||
cons = intf->out_cons;
|
||||
prod = intf->out_prod;
|
||||
mb(); /* update queue values before going on */
|
||||
BUG_ON((prod - cons) > sizeof(intf->out));
|
||||
|
||||
if ((prod - cons) > sizeof(intf->out)) {
|
||||
pr_err_once("xencons: Illegal ring page indices");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
|
||||
intf->out[MASK_XENCONS_IDX(prod++, intf->out)] = data[sent++];
|
||||
@@ -115,6 +119,9 @@ static int domU_write_console(uint32_t vtermno, const char *data, int len)
|
||||
while (len) {
|
||||
int sent = __write_console(cons, data, len);
|
||||
|
||||
if (sent < 0)
|
||||
return sent;
|
||||
|
||||
data += sent;
|
||||
len -= sent;
|
||||
|
||||
@@ -138,7 +145,11 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
|
||||
cons = intf->in_cons;
|
||||
prod = intf->in_prod;
|
||||
mb(); /* get pointers before reading ring */
|
||||
BUG_ON((prod - cons) > sizeof(intf->in));
|
||||
|
||||
if ((prod - cons) > sizeof(intf->in)) {
|
||||
pr_err_once("xencons: Illegal ring page indices");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
while (cons != prod && recv < len)
|
||||
buf[recv++] = intf->in[MASK_XENCONS_IDX(cons++, intf->in)];
|
||||
|
@@ -425,15 +425,15 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
|
||||
data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
|
||||
if (IS_ERR(data->phy)) {
|
||||
ret = PTR_ERR(data->phy);
|
||||
if (ret == -ENODEV) {
|
||||
data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
|
||||
if (IS_ERR(data->phy)) {
|
||||
ret = PTR_ERR(data->phy);
|
||||
if (ret == -ENODEV)
|
||||
data->phy = NULL;
|
||||
else
|
||||
goto err_clk;
|
||||
}
|
||||
if (ret != -ENODEV)
|
||||
goto err_clk;
|
||||
data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
|
||||
if (IS_ERR(data->phy)) {
|
||||
ret = PTR_ERR(data->phy);
|
||||
if (ret == -ENODEV)
|
||||
data->phy = NULL;
|
||||
else
|
||||
goto err_clk;
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -4628,8 +4628,6 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
|
||||
if (oldspeed == USB_SPEED_LOW)
|
||||
delay = HUB_LONG_RESET_TIME;
|
||||
|
||||
mutex_lock(hcd->address0_mutex);
|
||||
|
||||
/* Reset the device; full speed may morph to high speed */
|
||||
/* FIXME a USB 2.0 device may morph into SuperSpeed on reset. */
|
||||
retval = hub_port_reset(hub, port1, udev, delay, false);
|
||||
@@ -4940,7 +4938,6 @@ fail:
|
||||
hub_port_disable(hub, port1, 0);
|
||||
update_devnum(udev, devnum); /* for disconnect processing */
|
||||
}
|
||||
mutex_unlock(hcd->address0_mutex);
|
||||
return retval;
|
||||
}
|
||||
|
||||
@@ -5115,6 +5112,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
|
||||
struct usb_port *port_dev = hub->ports[port1 - 1];
|
||||
struct usb_device *udev = port_dev->child;
|
||||
static int unreliable_port = -1;
|
||||
bool retry_locked;
|
||||
|
||||
/* Disconnect any existing devices under this port */
|
||||
if (udev) {
|
||||
@@ -5170,8 +5168,11 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
|
||||
unit_load = 100;
|
||||
|
||||
status = 0;
|
||||
for (i = 0; i < PORT_INIT_TRIES; i++) {
|
||||
|
||||
for (i = 0; i < PORT_INIT_TRIES; i++) {
|
||||
usb_lock_port(port_dev);
|
||||
mutex_lock(hcd->address0_mutex);
|
||||
retry_locked = true;
|
||||
/* reallocate for each attempt, since references
|
||||
* to the previous one can escape in various ways
|
||||
*/
|
||||
@@ -5179,6 +5180,8 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
|
||||
if (!udev) {
|
||||
dev_err(&port_dev->dev,
|
||||
"couldn't allocate usb_device\n");
|
||||
mutex_unlock(hcd->address0_mutex);
|
||||
usb_unlock_port(port_dev);
|
||||
goto done;
|
||||
}
|
||||
|
||||
@@ -5200,12 +5203,14 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
|
||||
}
|
||||
|
||||
/* reset (non-USB 3.0 devices) and get descriptor */
|
||||
usb_lock_port(port_dev);
|
||||
status = hub_port_init(hub, udev, port1, i);
|
||||
usb_unlock_port(port_dev);
|
||||
if (status < 0)
|
||||
goto loop;
|
||||
|
||||
mutex_unlock(hcd->address0_mutex);
|
||||
usb_unlock_port(port_dev);
|
||||
retry_locked = false;
|
||||
|
||||
if (udev->quirks & USB_QUIRK_DELAY_INIT)
|
||||
msleep(2000);
|
||||
|
||||
@@ -5298,6 +5303,10 @@ loop:
|
||||
usb_ep0_reinit(udev);
|
||||
release_devnum(udev);
|
||||
hub_free_dev(udev);
|
||||
if (retry_locked) {
|
||||
mutex_unlock(hcd->address0_mutex);
|
||||
usb_unlock_port(port_dev);
|
||||
}
|
||||
usb_put_dev(udev);
|
||||
if ((status == -ENOTCONN) || (status == -ENOTSUPP))
|
||||
break;
|
||||
@@ -5839,6 +5848,8 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
|
||||
bos = udev->bos;
|
||||
udev->bos = NULL;
|
||||
|
||||
mutex_lock(hcd->address0_mutex);
|
||||
|
||||
for (i = 0; i < PORT_INIT_TRIES; ++i) {
|
||||
|
||||
/* ep0 maxpacket size may change; let the HCD know about it.
|
||||
@@ -5848,6 +5859,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
|
||||
if (ret >= 0 || ret == -ENOTCONN || ret == -ENODEV)
|
||||
break;
|
||||
}
|
||||
mutex_unlock(hcd->address0_mutex);
|
||||
|
||||
if (ret < 0)
|
||||
goto re_enumerate;
|
||||
|
@@ -1198,6 +1198,8 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
}
ctrl |= DXEPCTL_CNAK;
} else {
hs_req->req.frame_number = hs_ep->target_frame;
hs_req->req.actual = 0;
dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
return;
}
@@ -2856,9 +2858,12 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)

do {
hs_req = get_ep_head(hs_ep);
if (hs_req)
if (hs_req) {
hs_req->req.frame_number = hs_ep->target_frame;
hs_req->req.actual = 0;
dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req,
-ENODATA);
}
dwc2_gadget_incr_frame_num(hs_ep);
/* Update current frame number value. */
hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
@@ -2911,8 +2916,11 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)

while (dwc2_gadget_target_frame_elapsed(ep)) {
hs_req = get_ep_head(ep);
if (hs_req)
if (hs_req) {
hs_req->req.frame_number = ep->target_frame;
hs_req->req.actual = 0;
dwc2_hsotg_complete_request(hsotg, ep, hs_req, -ENODATA);
}

dwc2_gadget_incr_frame_num(ep);
/* Update current frame number value. */
@@ -3001,8 +3009,11 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)

while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
hs_req = get_ep_head(hs_ep);
if (hs_req)
if (hs_req) {
hs_req->req.frame_number = hs_ep->target_frame;
hs_req->req.actual = 0;
dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
}

dwc2_gadget_incr_frame_num(hs_ep);
/* Update current frame number value. */

@@ -59,7 +59,7 @@
#define DWC2_UNRESERVE_DELAY (msecs_to_jiffies(5))

/* If we get a NAK, wait this long before retrying */
#define DWC2_RETRY_WAIT_DELAY 1*1E6L
#define DWC2_RETRY_WAIT_DELAY (1 * NSEC_PER_MSEC)

/**
* dwc2_periodic_channel_available() - Checks that a channel is available for a

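The hcd_queue hunk above swaps the floating-point literal `1*1E6L` for the integer constant `NSEC_PER_MSEC`. As a rough aside (a standalone userspace sketch, not part of the patch; the value of `NSEC_PER_MSEC` is assumed to match the kernel's definition), the two spellings evaluate to the same number, but `1E6L` is a `long double` literal, so the old macro pulled floating-point arithmetic into kernel code, which is not allowed there:

```c
#include <assert.h>
#include <stdio.h>

/* Assumed to match the kernel's definition (include/vdso/time64.h). */
#define NSEC_PER_MSEC 1000000L

int main(void)
{
	long delay_int = 1 * NSEC_PER_MSEC; /* pure integer arithmetic */
	long double delay_fp = 1 * 1E6L;    /* 1E6L is a long double literal */

	/* Same numeric value, but only the first form avoids the FPU. */
	assert((long)delay_fp == delay_int);
	printf("retry delay: %ld ns\n", delay_int);
	return 0;
}
```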
@@ -310,13 +310,24 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
int link_state;

/*
* Initiate remote wakeup if the link state is in U3 when
* operating in SS/SSP or L1/L2 when operating in HS/FS. If the
* link state is in U1/U2, no remote wakeup is needed. The Start
* Transfer command will initiate the link recovery.
*/
link_state = dwc3_gadget_get_link_state(dwc);
if (link_state == DWC3_LINK_STATE_U1 ||
link_state == DWC3_LINK_STATE_U2 ||
link_state == DWC3_LINK_STATE_U3) {
switch (link_state) {
case DWC3_LINK_STATE_U2:
if (dwc->gadget->speed >= USB_SPEED_SUPER)
break;

fallthrough;
case DWC3_LINK_STATE_U3:
ret = __dwc3_gadget_wakeup(dwc);
dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
ret);
break;
}
}

@@ -3370,6 +3381,14 @@ static void dwc3_gadget_endpoint_command_complete(struct dwc3_ep *dep,
if (cmd != DWC3_DEPCMD_ENDTRANSFER)
return;

/*
* The END_TRANSFER command will cause the controller to generate a
* NoStream Event, and it's not due to the host DP NoStream rejection.
* Ignore the next NoStream event.
*/
if (dep->stream_capable)
dep->flags |= DWC3_EP_IGNORE_NEXT_NOSTREAM;

dep->flags &= ~DWC3_EP_END_TRANSFER_PENDING;
dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
dwc3_gadget_ep_cleanup_cancelled_requests(dep);
@@ -3592,14 +3611,6 @@ void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
WARN_ON_ONCE(ret);
dep->resource_index = 0;

/*
* The END_TRANSFER command will cause the controller to generate a
* NoStream Event, and it's not due to the host DP NoStream rejection.
* Ignore the next NoStream event.
*/
if (dep->stream_capable)
dep->flags |= DWC3_EP_IGNORE_NEXT_NOSTREAM;

if (!interrupt)
dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
else

@@ -1267,6 +1267,8 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(2) },
{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */
.driver_info = NCTRL(0) | ZLP },
{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200), /* Telit LE910S1 flashing device */
.driver_info = NCTRL(0) | ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
.driver_info = RSVD(1) },
@@ -2094,6 +2096,9 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */
{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */
{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) }, /* Fibocom FM101-GL (laptop MBIM) */
{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff), /* Fibocom FM101-GL (laptop MBIM) */
.driver_info = RSVD(4) },
{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */
{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */

@@ -665,18 +665,6 @@ static int tcpm_set_cc(struct tcpc_dev *dev, enum typec_cc_status cc)
ret);
goto done;
}
ret = fusb302_i2c_mask_write(chip, FUSB_REG_MASK,
FUSB_REG_MASK_BC_LVL |
FUSB_REG_MASK_COMP_CHNG,
FUSB_REG_MASK_COMP_CHNG);
if (ret < 0) {
fusb302_log(chip, "cannot set SRC interrupt, ret=%d",
ret);
goto done;
}
chip->intr_comp_chng = true;
break;
case TYPEC_CC_RD:
ret = fusb302_i2c_mask_write(chip, FUSB_REG_MASK,
FUSB_REG_MASK_BC_LVL |
FUSB_REG_MASK_COMP_CHNG,
@@ -686,7 +674,21 @@ static int tcpm_set_cc(struct tcpc_dev *dev, enum typec_cc_status cc)
ret);
goto done;
}
chip->intr_comp_chng = true;
chip->intr_bc_lvl = false;
break;
case TYPEC_CC_RD:
ret = fusb302_i2c_mask_write(chip, FUSB_REG_MASK,
FUSB_REG_MASK_BC_LVL |
FUSB_REG_MASK_COMP_CHNG,
FUSB_REG_MASK_COMP_CHNG);
if (ret < 0) {
fusb302_log(chip, "cannot set SRC interrupt, ret=%d",
ret);
goto done;
}
chip->intr_bc_lvl = true;
chip->intr_comp_chng = false;
break;
default:
break;

@@ -494,7 +494,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
virtio_transport_free_pkt(pkt);

len += sizeof(pkt->hdr);
vhost_add_used(vq, head, len);
vhost_add_used(vq, head, 0);
total_len += len;
added = true;
} while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));

@@ -846,7 +846,7 @@ static struct notifier_block xenbus_resume_nb = {

static int __init xenbus_init(void)
{
int err = 0;
int err;
uint64_t v = 0;
xen_store_domain_type = XS_UNKNOWN;

@@ -886,6 +886,29 @@ static int __init xenbus_init(void)
err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
if (err)
goto out_error;
/*
* Uninitialized hvm_params are zero and return no error.
* Although it is theoretically possible to have
* HVM_PARAM_STORE_PFN set to zero on purpose, in reality it is
* not zero when valid. If zero, it means that Xenstore hasn't
* been properly initialized. Instead of attempting to map a
* wrong guest physical address return error.
*
* Also recognize all bits set as an invalid value.
*/
if (!v || !~v) {
err = -ENOENT;
goto out_error;
}
/* Avoid truncation on 32-bit. */
#if BITS_PER_LONG == 32
if (v > ULONG_MAX) {
pr_err("%s: cannot handle HVM_PARAM_STORE_PFN=%llx > ULONG_MAX\n",
__func__, v);
err = -EINVAL;
goto out_error;
}
#endif
xen_store_gfn = (unsigned long)v;
xen_store_interface =
xen_remap(xen_store_gfn << XEN_PAGE_SHIFT,
@@ -920,8 +943,10 @@ static int __init xenbus_init(void)
*/
proc_create_mount_point("xen");
#endif
return 0;

out_error:
xen_store_domain_type = XS_UNKNOWN;
return err;
}

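The xenbus_init() hunk above refuses to map a store PFN of zero or of all-ones. A minimal standalone sketch of that sentinel check (illustrative only; the sample PFN value is made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Mirrors the added check: treat 0 and ~0 as "Xenstore not initialized"
 * instead of trying to map them as a guest frame number. */
static int store_pfn_is_valid(uint64_t v)
{
	return v != 0 && ~v != 0;
}

int main(void)
{
	uint64_t samples[] = { 0, UINT64_MAX, 0xfeffcULL /* hypothetical PFN */ };

	for (int i = 0; i < 3; i++)
		printf("0x%llx -> %s\n", (unsigned long long)samples[i],
		       store_pfn_is_valid(samples[i]) ? "valid" : "invalid");
	return 0;
}
```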
@@ -52,8 +52,7 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
struct ceph_fs_client *fsc = ceph_inode_to_client(d_inode(dentry));
struct ceph_mon_client *monc = &fsc->client->monc;
struct ceph_statfs st;
u64 fsid;
int err;
int i, err;
u64 data_pool;

if (fsc->mdsc->mdsmap->m_num_data_pg_pools == 1) {
@@ -99,12 +98,14 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
buf->f_namelen = NAME_MAX;

/* Must convert the fsid, for consistent values across arches */
buf->f_fsid.val[0] = 0;
mutex_lock(&monc->mutex);
fsid = le64_to_cpu(*(__le64 *)(&monc->monmap->fsid)) ^
le64_to_cpu(*((__le64 *)&monc->monmap->fsid + 1));
for (i = 0 ; i < sizeof(monc->monmap->fsid) / sizeof(__le32) ; ++i)
buf->f_fsid.val[0] ^= le32_to_cpu(((__le32 *)&monc->monmap->fsid)[i]);
mutex_unlock(&monc->mutex);

buf->f_fsid = u64_to_fsid(fsid);
/* fold the fs_cluster_id into the upper bits */
buf->f_fsid.val[1] = monc->fs_cluster_id;

return 0;
}

@@ -2618,12 +2618,23 @@ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end,
|
||||
tcon = tlink_tcon(smbfile->tlink);
|
||||
if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
|
||||
server = tcon->ses->server;
|
||||
if (server->ops->flush)
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
else
|
||||
if (server->ops->flush == NULL) {
|
||||
rc = -ENOSYS;
|
||||
goto strict_fsync_exit;
|
||||
}
|
||||
|
||||
if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
|
||||
smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
|
||||
if (smbfile) {
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
cifsFileInfo_put(smbfile);
|
||||
} else
|
||||
cifs_dbg(FYI, "ignore fsync for file not open for write\n");
|
||||
} else
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
}
|
||||
|
||||
strict_fsync_exit:
|
||||
free_xid(xid);
|
||||
return rc;
|
||||
}
|
||||
@@ -2635,6 +2646,7 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
|
||||
struct cifs_tcon *tcon;
|
||||
struct TCP_Server_Info *server;
|
||||
struct cifsFileInfo *smbfile = file->private_data;
|
||||
struct inode *inode = file_inode(file);
|
||||
struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
|
||||
|
||||
rc = file_write_and_wait_range(file, start, end);
|
||||
@@ -2651,12 +2663,23 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
|
||||
tcon = tlink_tcon(smbfile->tlink);
|
||||
if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
|
||||
server = tcon->ses->server;
|
||||
if (server->ops->flush)
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
else
|
||||
if (server->ops->flush == NULL) {
|
||||
rc = -ENOSYS;
|
||||
goto fsync_exit;
|
||||
}
|
||||
|
||||
if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
|
||||
smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
|
||||
if (smbfile) {
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
cifsFileInfo_put(smbfile);
|
||||
} else
|
||||
cifs_dbg(FYI, "ignore fsync for file not open for write\n");
|
||||
} else
|
||||
rc = server->ops->flush(xid, tcon, &smbfile->fid);
|
||||
}
|
||||
|
||||
fsync_exit:
|
||||
free_xid(xid);
|
||||
return rc;
|
||||
}
|
||||
|
@@ -1420,6 +1420,7 @@ page_hit:
|
||||
nid, nid_of_node(page), ino_of_node(page),
|
||||
ofs_of_node(page), cpver_of_node(page),
|
||||
next_blkaddr_of_node(page));
|
||||
set_sbi_flag(sbi, SBI_NEED_FSCK);
|
||||
err = -EINVAL;
|
||||
out_err:
|
||||
ClearPageUptodate(page);
|
||||
|
@@ -856,17 +856,17 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
|
||||
goto out_put_old;
|
||||
}
|
||||
|
||||
get_page(newpage);
|
||||
|
||||
if (!(buf->flags & PIPE_BUF_FLAG_LRU))
|
||||
lru_cache_add(newpage);
|
||||
|
||||
/*
|
||||
* Release while we have extra ref on stolen page. Otherwise
|
||||
* anon_pipe_buf_release() might think the page can be reused.
|
||||
*/
|
||||
pipe_buf_release(cs->pipe, buf);
|
||||
|
||||
get_page(newpage);
|
||||
|
||||
if (!(buf->flags & PIPE_BUF_FLAG_LRU))
|
||||
lru_cache_add(newpage);
|
||||
|
||||
err = 0;
|
||||
spin_lock(&cs->req->waitq.lock);
|
||||
if (test_bit(FR_ABORTED, &cs->req->flags))
|
||||
|
@@ -1396,8 +1396,7 @@ static int nfs4_xdr_dec_clone(struct rpc_rqst *rqstp,
|
||||
status = decode_clone(xdr);
|
||||
if (status)
|
||||
goto out;
|
||||
status = decode_getfattr(xdr, res->dst_fattr, res->server);
|
||||
|
||||
decode_getfattr(xdr, res->dst_fattr, res->server);
|
||||
out:
|
||||
res->rpc_status = status;
|
||||
return status;
|
||||
|
@@ -124,9 +124,13 @@ ssize_t read_from_oldmem(char *buf, size_t count,
|
||||
nr_bytes = count;
|
||||
|
||||
/* If pfn is not ram, return zeros for sparse dump files */
|
||||
if (pfn_is_ram(pfn) == 0)
|
||||
memset(buf, 0, nr_bytes);
|
||||
else {
|
||||
if (pfn_is_ram(pfn) == 0) {
|
||||
tmp = 0;
|
||||
if (!userbuf)
|
||||
memset(buf, 0, nr_bytes);
|
||||
else if (clear_user(buf, nr_bytes))
|
||||
tmp = -EFAULT;
|
||||
} else {
|
||||
if (encrypted)
|
||||
tmp = copy_oldmem_page_encrypted(pfn, buf,
|
||||
nr_bytes,
|
||||
@@ -135,10 +139,10 @@ ssize_t read_from_oldmem(char *buf, size_t count,
|
||||
else
|
||||
tmp = copy_oldmem_page(pfn, buf, nr_bytes,
|
||||
offset, userbuf);
|
||||
|
||||
if (tmp < 0)
|
||||
return tmp;
|
||||
}
|
||||
if (tmp < 0)
|
||||
return tmp;
|
||||
|
||||
*ppos += nr_bytes;
|
||||
count -= nr_bytes;
|
||||
buf += nr_bytes;
|
||||
|
@@ -177,7 +177,7 @@ struct bpf_map {
|
||||
atomic64_t usercnt;
|
||||
struct work_struct work;
|
||||
struct mutex freeze_mutex;
|
||||
u64 writecnt; /* writable mmap cnt; protected by freeze_mutex */
|
||||
atomic64_t writecnt;
|
||||
};
|
||||
|
||||
static inline bool map_value_has_spin_lock(const struct bpf_map *map)
|
||||
@@ -1260,6 +1260,7 @@ void bpf_map_charge_move(struct bpf_map_memory *dst,
|
||||
void *bpf_map_area_alloc(u64 size, int numa_node);
|
||||
void *bpf_map_area_mmapable_alloc(u64 size, int numa_node);
|
||||
void bpf_map_area_free(void *base);
|
||||
bool bpf_map_write_active(const struct bpf_map *map);
|
||||
void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
|
||||
int generic_map_lookup_batch(struct bpf_map *map,
|
||||
const union bpf_attr *attr,
|
||||
|
@@ -132,6 +132,16 @@ static inline struct ipc_namespace *get_ipc_ns(struct ipc_namespace *ns)
|
||||
return ns;
|
||||
}
|
||||
|
||||
static inline struct ipc_namespace *get_ipc_ns_not_zero(struct ipc_namespace *ns)
|
||||
{
|
||||
if (ns) {
|
||||
if (refcount_inc_not_zero(&ns->count))
|
||||
return ns;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
extern void put_ipc_ns(struct ipc_namespace *ns);
|
||||
#else
|
||||
static inline struct ipc_namespace *copy_ipcs(unsigned long flags,
|
||||
@@ -148,6 +158,11 @@ static inline struct ipc_namespace *get_ipc_ns(struct ipc_namespace *ns)
|
||||
return ns;
|
||||
}
|
||||
|
||||
static inline struct ipc_namespace *get_ipc_ns_not_zero(struct ipc_namespace *ns)
|
||||
{
|
||||
return ns;
|
||||
}
|
||||
|
||||
static inline void put_ipc_ns(struct ipc_namespace *ns)
|
||||
{
|
||||
}
|
||||
|
@@ -158,7 +158,7 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
|
||||
* Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring
|
||||
* subscriptions and synchronises with wait4(). Also used in procfs. Also
|
||||
* pins the final release of task.io_context. Also protects ->cpuset and
|
||||
* ->cgroup.subsys[]. And ->vfork_done.
|
||||
* ->cgroup.subsys[]. And ->vfork_done. And ->sysvshm.shm_clist.
|
||||
*
|
||||
* Nests both inside and outside of read_lock(&tasklist_lock).
|
||||
* It must not be nested with write_lock_irq(&tasklist_lock),
|
||||
|
@@ -501,6 +501,7 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh,
|
||||
struct fib6_config *cfg, gfp_t gfp_flags,
|
||||
struct netlink_ext_ack *extack);
|
||||
void fib6_nh_release(struct fib6_nh *fib6_nh);
|
||||
void fib6_nh_release_dsts(struct fib6_nh *fib6_nh);
|
||||
|
||||
int call_fib6_entry_notifiers(struct net *net,
|
||||
enum fib_event_type event_type,
|
||||
|
@@ -47,6 +47,7 @@ struct ipv6_stub {
|
||||
struct fib6_config *cfg, gfp_t gfp_flags,
|
||||
struct netlink_ext_ack *extack);
|
||||
void (*fib6_nh_release)(struct fib6_nh *fib6_nh);
|
||||
void (*fib6_nh_release_dsts)(struct fib6_nh *fib6_nh);
|
||||
void (*fib6_update_sernum)(struct net *net, struct fib6_info *rt);
|
||||
int (*ip6_del_rt)(struct net *net, struct fib6_info *rt, bool skip_notify);
|
||||
void (*fib6_rt_update)(struct net *net, struct fib6_info *rt,
|
||||
|
@@ -19,6 +19,8 @@
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#define NL802154_GENL_NAME "nl802154"
|
||||
|
||||
enum nl802154_commands {
|
||||
@@ -150,10 +152,9 @@ enum nl802154_attrs {
|
||||
};
|
||||
|
||||
enum nl802154_iftype {
|
||||
/* for backwards compatibility TODO */
|
||||
NL802154_IFTYPE_UNSPEC = -1,
|
||||
NL802154_IFTYPE_UNSPEC = (~(__u32)0),
|
||||
|
||||
NL802154_IFTYPE_NODE,
|
||||
NL802154_IFTYPE_NODE = 0,
|
||||
NL802154_IFTYPE_MONITOR,
|
||||
NL802154_IFTYPE_COORD,
|
||||
|
||||
|
@@ -1,21 +1,53 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
/******************************************************************************
|
||||
* ring.h
|
||||
*
|
||||
* Shared producer-consumer ring macros.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to
|
||||
* deal in the Software without restriction, including without limitation the
|
||||
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
|
||||
* sell copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
||||
* DEALINGS IN THE SOFTWARE.
|
||||
*
|
||||
* Tim Deegan and Andrew Warfield November 2004.
|
||||
*/
|
||||
|
||||
#ifndef __XEN_PUBLIC_IO_RING_H__
|
||||
#define __XEN_PUBLIC_IO_RING_H__
|
||||
|
||||
/*
|
||||
* When #include'ing this header, you need to provide the following
|
||||
* declaration upfront:
|
||||
* - standard integers types (uint8_t, uint16_t, etc)
|
||||
* They are provided by stdint.h of the standard headers.
|
||||
*
|
||||
* In addition, if you intend to use the FLEX macros, you also need to
|
||||
* provide the following, before invoking the FLEX macros:
|
||||
* - size_t
|
||||
* - memcpy
|
||||
* - grant_ref_t
|
||||
* These declarations are provided by string.h of the standard headers,
|
||||
* and grant_table.h from the Xen public headers.
|
||||
*/
|
||||
|
||||
#include <xen/interface/grant_table.h>
|
||||
|
||||
typedef unsigned int RING_IDX;
|
||||
|
||||
/* Round a 32-bit unsigned constant down to the nearest power of two. */
|
||||
#define __RD2(_x) (((_x) & 0x00000002) ? 0x2 : ((_x) & 0x1))
|
||||
#define __RD2(_x) (((_x) & 0x00000002) ? 0x2 : ((_x) & 0x1))
|
||||
#define __RD4(_x) (((_x) & 0x0000000c) ? __RD2((_x)>>2)<<2 : __RD2(_x))
|
||||
#define __RD8(_x) (((_x) & 0x000000f0) ? __RD4((_x)>>4)<<4 : __RD4(_x))
|
||||
#define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8 : __RD8(_x))
|
||||
@@ -27,82 +59,79 @@ typedef unsigned int RING_IDX;
|
||||
* A ring contains as many entries as will fit, rounded down to the nearest
|
||||
* power of two (so we can mask with (size-1) to loop around).
|
||||
*/
|
||||
#define __CONST_RING_SIZE(_s, _sz) \
|
||||
(__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \
|
||||
sizeof(((struct _s##_sring *)0)->ring[0])))
|
||||
|
||||
#define __CONST_RING_SIZE(_s, _sz) \
|
||||
(__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \
|
||||
sizeof(((struct _s##_sring *)0)->ring[0])))
|
||||
/*
|
||||
* The same for passing in an actual pointer instead of a name tag.
|
||||
*/
|
||||
#define __RING_SIZE(_s, _sz) \
|
||||
(__RD32(((_sz) - (long)&(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
|
||||
#define __RING_SIZE(_s, _sz) \
|
||||
(__RD32(((_sz) - (long)(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
|
||||
|
||||
/*
|
||||
* Macros to make the correct C datatypes for a new kind of ring.
|
||||
*
|
||||
* To make a new ring datatype, you need to have two message structures,
|
||||
* let's say struct request, and struct response already defined.
|
||||
* let's say request_t, and response_t already defined.
|
||||
*
|
||||
* In a header where you want the ring datatype declared, you then do:
|
||||
*
|
||||
* DEFINE_RING_TYPES(mytag, struct request, struct response);
|
||||
* DEFINE_RING_TYPES(mytag, request_t, response_t);
|
||||
*
|
||||
* These expand out to give you a set of types, as you can see below.
|
||||
* The most important of these are:
|
||||
*
|
||||
* struct mytag_sring - The shared ring.
|
||||
* struct mytag_front_ring - The 'front' half of the ring.
|
||||
* struct mytag_back_ring - The 'back' half of the ring.
|
||||
* mytag_sring_t - The shared ring.
|
||||
* mytag_front_ring_t - The 'front' half of the ring.
|
||||
* mytag_back_ring_t - The 'back' half of the ring.
|
||||
*
|
||||
* To initialize a ring in your code you need to know the location and size
|
||||
* of the shared memory area (PAGE_SIZE, for instance). To initialise
|
||||
* the front half:
|
||||
*
|
||||
* struct mytag_front_ring front_ring;
|
||||
* SHARED_RING_INIT((struct mytag_sring *)shared_page);
|
||||
* FRONT_RING_INIT(&front_ring, (struct mytag_sring *)shared_page,
|
||||
* PAGE_SIZE);
|
||||
* mytag_front_ring_t front_ring;
|
||||
* SHARED_RING_INIT((mytag_sring_t *)shared_page);
|
||||
* FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
|
||||
*
|
||||
* Initializing the back follows similarly (note that only the front
|
||||
* initializes the shared ring):
|
||||
*
|
||||
* struct mytag_back_ring back_ring;
|
||||
* BACK_RING_INIT(&back_ring, (struct mytag_sring *)shared_page,
|
||||
* PAGE_SIZE);
|
||||
* mytag_back_ring_t back_ring;
|
||||
* BACK_RING_INIT(&back_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
|
||||
*/
|
||||
|
||||
#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t) \
|
||||
\
|
||||
/* Shared ring entry */ \
|
||||
union __name##_sring_entry { \
|
||||
__req_t req; \
|
||||
__rsp_t rsp; \
|
||||
}; \
|
||||
\
|
||||
/* Shared ring page */ \
|
||||
struct __name##_sring { \
|
||||
RING_IDX req_prod, req_event; \
|
||||
RING_IDX rsp_prod, rsp_event; \
|
||||
uint8_t pad[48]; \
|
||||
union __name##_sring_entry ring[1]; /* variable-length */ \
|
||||
}; \
|
||||
\
|
||||
/* "Front" end's private variables */ \
|
||||
struct __name##_front_ring { \
|
||||
RING_IDX req_prod_pvt; \
|
||||
RING_IDX rsp_cons; \
|
||||
unsigned int nr_ents; \
|
||||
struct __name##_sring *sring; \
|
||||
}; \
|
||||
\
|
||||
/* "Back" end's private variables */ \
|
||||
struct __name##_back_ring { \
|
||||
RING_IDX rsp_prod_pvt; \
|
||||
RING_IDX req_cons; \
|
||||
unsigned int nr_ents; \
|
||||
struct __name##_sring *sring; \
|
||||
};
|
||||
|
||||
#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t) \
|
||||
\
|
||||
/* Shared ring entry */ \
|
||||
union __name##_sring_entry { \
|
||||
__req_t req; \
|
||||
__rsp_t rsp; \
|
||||
}; \
|
||||
\
|
||||
/* Shared ring page */ \
|
||||
struct __name##_sring { \
|
||||
RING_IDX req_prod, req_event; \
|
||||
RING_IDX rsp_prod, rsp_event; \
|
||||
uint8_t __pad[48]; \
|
||||
union __name##_sring_entry ring[1]; /* variable-length */ \
|
||||
}; \
|
||||
\
|
||||
/* "Front" end's private variables */ \
|
||||
struct __name##_front_ring { \
|
||||
RING_IDX req_prod_pvt; \
|
||||
RING_IDX rsp_cons; \
|
||||
unsigned int nr_ents; \
|
||||
struct __name##_sring *sring; \
|
||||
}; \
|
||||
\
|
||||
/* "Back" end's private variables */ \
|
||||
struct __name##_back_ring { \
|
||||
RING_IDX rsp_prod_pvt; \
|
||||
RING_IDX req_cons; \
|
||||
unsigned int nr_ents; \
|
||||
struct __name##_sring *sring; \
|
||||
}; \
|
||||
\
|
||||
/*
|
||||
* Macros for manipulating rings.
|
||||
*
|
||||
@@ -119,94 +148,99 @@ struct __name##_back_ring { \
|
||||
*/
|
||||
|
||||
/* Initialising empty rings */
|
||||
#define SHARED_RING_INIT(_s) do { \
|
||||
(_s)->req_prod = (_s)->rsp_prod = 0; \
|
||||
(_s)->req_event = (_s)->rsp_event = 1; \
|
||||
memset((_s)->pad, 0, sizeof((_s)->pad)); \
|
||||
#define SHARED_RING_INIT(_s) do { \
|
||||
(_s)->req_prod = (_s)->rsp_prod = 0; \
|
||||
(_s)->req_event = (_s)->rsp_event = 1; \
|
||||
(void)memset((_s)->__pad, 0, sizeof((_s)->__pad)); \
|
||||
} while(0)
|
||||
|
||||
#define FRONT_RING_ATTACH(_r, _s, _i, __size) do { \
|
||||
(_r)->req_prod_pvt = (_i); \
|
||||
(_r)->rsp_cons = (_i); \
|
||||
(_r)->nr_ents = __RING_SIZE(_s, __size); \
|
||||
(_r)->sring = (_s); \
|
||||
#define FRONT_RING_ATTACH(_r, _s, _i, __size) do { \
|
||||
(_r)->req_prod_pvt = (_i); \
|
||||
(_r)->rsp_cons = (_i); \
|
||||
(_r)->nr_ents = __RING_SIZE(_s, __size); \
|
||||
(_r)->sring = (_s); \
|
||||
} while (0)
|
||||
|
||||
#define FRONT_RING_INIT(_r, _s, __size) FRONT_RING_ATTACH(_r, _s, 0, __size)
|
||||
|
||||
#define BACK_RING_ATTACH(_r, _s, _i, __size) do { \
|
||||
(_r)->rsp_prod_pvt = (_i); \
|
||||
(_r)->req_cons = (_i); \
|
||||
(_r)->nr_ents = __RING_SIZE(_s, __size); \
|
||||
(_r)->sring = (_s); \
|
||||
#define BACK_RING_ATTACH(_r, _s, _i, __size) do { \
|
||||
(_r)->rsp_prod_pvt = (_i); \
|
||||
(_r)->req_cons = (_i); \
|
||||
(_r)->nr_ents = __RING_SIZE(_s, __size); \
|
||||
(_r)->sring = (_s); \
|
||||
} while (0)
|
||||
|
||||
#define BACK_RING_INIT(_r, _s, __size) BACK_RING_ATTACH(_r, _s, 0, __size)
|
||||
|
||||
/* How big is this ring? */
|
||||
#define RING_SIZE(_r) \
|
||||
#define RING_SIZE(_r) \
|
||||
((_r)->nr_ents)
|
||||
|
||||
/* Number of free requests (for use on front side only). */
|
||||
#define RING_FREE_REQUESTS(_r) \
|
||||
#define RING_FREE_REQUESTS(_r) \
|
||||
(RING_SIZE(_r) - ((_r)->req_prod_pvt - (_r)->rsp_cons))
|
||||
|
||||
/* Test if there is an empty slot available on the front ring.
|
||||
* (This is only meaningful from the front. )
|
||||
*/
|
||||
#define RING_FULL(_r) \
|
||||
#define RING_FULL(_r) \
|
||||
(RING_FREE_REQUESTS(_r) == 0)
|
||||
|
||||
/* Test if there are outstanding messages to be processed on a ring. */
|
||||
#define RING_HAS_UNCONSUMED_RESPONSES(_r) \
|
||||
#define RING_HAS_UNCONSUMED_RESPONSES(_r) \
|
||||
((_r)->sring->rsp_prod - (_r)->rsp_cons)
|
||||
|
||||
#define RING_HAS_UNCONSUMED_REQUESTS(_r) \
|
||||
({ \
|
||||
unsigned int req = (_r)->sring->req_prod - (_r)->req_cons; \
|
||||
unsigned int rsp = RING_SIZE(_r) - \
|
||||
((_r)->req_cons - (_r)->rsp_prod_pvt); \
|
||||
req < rsp ? req : rsp; \
|
||||
})
|
||||
#define RING_HAS_UNCONSUMED_REQUESTS(_r) ({ \
|
||||
unsigned int req = (_r)->sring->req_prod - (_r)->req_cons; \
|
||||
unsigned int rsp = RING_SIZE(_r) - \
|
||||
((_r)->req_cons - (_r)->rsp_prod_pvt); \
|
||||
req < rsp ? req : rsp; \
|
||||
})
|
||||
|
||||
/* Direct access to individual ring elements, by index. */
|
||||
#define RING_GET_REQUEST(_r, _idx) \
|
||||
#define RING_GET_REQUEST(_r, _idx) \
|
||||
(&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
|
||||
|
||||
#define RING_GET_RESPONSE(_r, _idx) \
|
||||
(&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
|
||||
|
||||
/*
|
||||
* Get a local copy of a request.
|
||||
* Get a local copy of a request/response.
|
||||
*
|
||||
* Use this in preference to RING_GET_REQUEST() so all processing is
|
||||
* Use this in preference to RING_GET_{REQUEST,RESPONSE}() so all processing is
|
||||
* done on a local copy that cannot be modified by the other end.
|
||||
*
|
||||
* Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
|
||||
* to be ineffective where _req is a struct which consists of only bitfields.
|
||||
* to be ineffective where dest is a struct which consists of only bitfields.
|
||||
*/
|
||||
#define RING_COPY_REQUEST(_r, _idx, _req) do { \
|
||||
/* Use volatile to force the copy into _req. */ \
|
||||
*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx); \
|
||||
#define RING_COPY_(type, r, idx, dest) do { \
|
||||
/* Use volatile to force the copy into dest. */ \
|
||||
*(dest) = *(volatile typeof(dest))RING_GET_##type(r, idx); \
|
||||
} while (0)
|
||||
|
||||
#define RING_GET_RESPONSE(_r, _idx) \
|
||||
(&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
|
||||
#define RING_COPY_REQUEST(r, idx, req) RING_COPY_(REQUEST, r, idx, req)
|
||||
#define RING_COPY_RESPONSE(r, idx, rsp) RING_COPY_(RESPONSE, r, idx, rsp)
|
||||
|
||||
/* Loop termination condition: Would the specified index overflow the ring? */
|
||||
#define RING_REQUEST_CONS_OVERFLOW(_r, _cons) \
|
||||
#define RING_REQUEST_CONS_OVERFLOW(_r, _cons) \
|
||||
(((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
|
||||
|
||||
/* Ill-behaved frontend determination: Can there be this many requests? */
|
||||
#define RING_REQUEST_PROD_OVERFLOW(_r, _prod) \
|
||||
#define RING_REQUEST_PROD_OVERFLOW(_r, _prod) \
|
||||
(((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
|
||||
|
||||
/* Ill-behaved backend determination: Can there be this many responses? */
|
||||
#define RING_RESPONSE_PROD_OVERFLOW(_r, _prod) \
|
||||
(((_prod) - (_r)->rsp_cons) > RING_SIZE(_r))
|
||||
|
||||
#define RING_PUSH_REQUESTS(_r) do { \
|
||||
virt_wmb(); /* back sees requests /before/ updated producer index */ \
|
||||
(_r)->sring->req_prod = (_r)->req_prod_pvt; \
|
||||
#define RING_PUSH_REQUESTS(_r) do { \
|
||||
virt_wmb(); /* back sees requests /before/ updated producer index */\
|
||||
(_r)->sring->req_prod = (_r)->req_prod_pvt; \
|
||||
} while (0)
|
||||
|
||||
#define RING_PUSH_RESPONSES(_r) do { \
|
||||
virt_wmb(); /* front sees responses /before/ updated producer index */ \
|
||||
(_r)->sring->rsp_prod = (_r)->rsp_prod_pvt; \
|
||||
#define RING_PUSH_RESPONSES(_r) do { \
|
||||
virt_wmb(); /* front sees resps /before/ updated producer index */ \
|
||||
(_r)->sring->rsp_prod = (_r)->rsp_prod_pvt; \
|
||||
} while (0)
|
||||
|
||||
/*
|
||||
@@ -239,40 +273,40 @@ struct __name##_back_ring { \
|
||||
* field appropriately.
|
||||
*/
|
||||
|
||||
#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do { \
|
||||
RING_IDX __old = (_r)->sring->req_prod; \
|
||||
RING_IDX __new = (_r)->req_prod_pvt; \
|
||||
virt_wmb(); /* back sees requests /before/ updated producer index */ \
|
||||
(_r)->sring->req_prod = __new; \
|
||||
virt_mb(); /* back sees new requests /before/ we check req_event */ \
|
||||
(_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) < \
|
||||
(RING_IDX)(__new - __old)); \
|
||||
#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do { \
|
||||
RING_IDX __old = (_r)->sring->req_prod; \
|
||||
RING_IDX __new = (_r)->req_prod_pvt; \
|
||||
virt_wmb(); /* back sees requests /before/ updated producer index */\
|
||||
(_r)->sring->req_prod = __new; \
|
||||
virt_mb(); /* back sees new requests /before/ we check req_event */ \
|
||||
(_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) < \
|
||||
(RING_IDX)(__new - __old)); \
|
||||
} while (0)
|
||||
|
||||
#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do { \
|
||||
RING_IDX __old = (_r)->sring->rsp_prod; \
|
||||
RING_IDX __new = (_r)->rsp_prod_pvt; \
|
||||
virt_wmb(); /* front sees responses /before/ updated producer index */ \
|
||||
(_r)->sring->rsp_prod = __new; \
|
||||
virt_mb(); /* front sees new responses /before/ we check rsp_event */ \
|
||||
(_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) < \
|
||||
(RING_IDX)(__new - __old)); \
|
||||
#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do { \
|
||||
RING_IDX __old = (_r)->sring->rsp_prod; \
|
||||
RING_IDX __new = (_r)->rsp_prod_pvt; \
|
||||
virt_wmb(); /* front sees resps /before/ updated producer index */ \
|
||||
(_r)->sring->rsp_prod = __new; \
|
||||
virt_mb(); /* front sees new resps /before/ we check rsp_event */ \
|
||||
(_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) < \
|
||||
(RING_IDX)(__new - __old)); \
|
||||
} while (0)
|
||||
|
||||
#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do { \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
|
||||
if (_work_to_do) break; \
|
||||
(_r)->sring->req_event = (_r)->req_cons + 1; \
|
||||
virt_mb(); \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
|
||||
#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do { \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
|
||||
if (_work_to_do) break; \
|
||||
(_r)->sring->req_event = (_r)->req_cons + 1; \
|
||||
virt_mb(); \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
|
||||
} while (0)
|
||||
|
||||
#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do { \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
|
||||
if (_work_to_do) break; \
|
||||
(_r)->sring->rsp_event = (_r)->rsp_cons + 1; \
|
||||
virt_mb(); \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
|
||||
#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do { \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
|
||||
if (_work_to_do) break; \
|
||||
(_r)->sring->rsp_event = (_r)->rsp_cons + 1; \
|
||||
virt_mb(); \
|
||||
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
|
||||
} while (0)
|
||||
|
||||
|
||||
|
ipc/shm.c
@@ -62,9 +62,18 @@ struct shmid_kernel /* private to the kernel */
|
||||
struct pid *shm_lprid;
|
||||
struct user_struct *mlock_user;
|
||||
|
||||
/* The task created the shm object. NULL if the task is dead. */
|
||||
/*
|
||||
* The task created the shm object, for
|
||||
* task_lock(shp->shm_creator)
|
||||
*/
|
||||
struct task_struct *shm_creator;
|
||||
struct list_head shm_clist; /* list by creator */
|
||||
|
||||
/*
|
||||
* List by creator. task_lock(->shm_creator) required for read/write.
|
||||
* If list_empty(), then the creator is dead already.
|
||||
*/
|
||||
struct list_head shm_clist;
|
||||
struct ipc_namespace *ns;
|
||||
} __randomize_layout;
|
||||
|
||||
/* shm_mode upper byte flags */
|
||||
@@ -115,6 +124,7 @@ static void do_shm_rmid(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
|
||||
struct shmid_kernel *shp;
|
||||
|
||||
shp = container_of(ipcp, struct shmid_kernel, shm_perm);
|
||||
WARN_ON(ns != shp->ns);
|
||||
|
||||
if (shp->shm_nattch) {
|
||||
shp->shm_perm.mode |= SHM_DEST;
|
||||
@@ -225,10 +235,43 @@ static void shm_rcu_free(struct rcu_head *head)
|
||||
kvfree(shp);
|
||||
}
|
||||
|
||||
static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s)
|
||||
/*
|
||||
* It has to be called with shp locked.
|
||||
* It must be called before ipc_rmid()
|
||||
*/
|
||||
static inline void shm_clist_rm(struct shmid_kernel *shp)
|
||||
{
|
||||
list_del(&s->shm_clist);
|
||||
ipc_rmid(&shm_ids(ns), &s->shm_perm);
|
||||
struct task_struct *creator;
|
||||
|
||||
/* ensure that shm_creator does not disappear */
|
||||
rcu_read_lock();
|
||||
|
||||
/*
|
||||
* A concurrent exit_shm may do a list_del_init() as well.
|
||||
* Just do nothing if exit_shm already did the work
|
||||
*/
|
||||
if (!list_empty(&shp->shm_clist)) {
|
||||
/*
|
||||
* shp->shm_creator is guaranteed to be valid *only*
|
||||
* if shp->shm_clist is not empty.
|
||||
*/
|
||||
creator = shp->shm_creator;
|
||||
|
||||
task_lock(creator);
|
||||
/*
|
||||
* list_del_init() is a nop if the entry was already removed
|
||||
* from the list.
|
||||
*/
|
||||
list_del_init(&shp->shm_clist);
|
||||
task_unlock(creator);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
static inline void shm_rmid(struct shmid_kernel *s)
|
||||
{
|
||||
shm_clist_rm(s);
|
||||
ipc_rmid(&shm_ids(s->ns), &s->shm_perm);
|
||||
}
|
||||
|
||||
|
||||
@@ -283,7 +326,7 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
|
||||
shm_file = shp->shm_file;
|
||||
shp->shm_file = NULL;
|
||||
ns->shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
||||
shm_rmid(ns, shp);
|
||||
shm_rmid(shp);
|
||||
shm_unlock(shp);
|
||||
if (!is_file_hugepages(shm_file))
|
||||
shmem_lock(shm_file, 0, shp->mlock_user);
|
||||
@@ -306,10 +349,10 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
|
||||
*
|
||||
* 2) sysctl kernel.shm_rmid_forced is set to 1.
|
||||
*/
|
||||
static bool shm_may_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
|
||||
static bool shm_may_destroy(struct shmid_kernel *shp)
|
||||
{
|
||||
return (shp->shm_nattch == 0) &&
|
||||
(ns->shm_rmid_forced ||
|
||||
(shp->ns->shm_rmid_forced ||
|
||||
(shp->shm_perm.mode & SHM_DEST));
|
||||
}
|
||||
|
||||
@@ -340,7 +383,7 @@ static void shm_close(struct vm_area_struct *vma)
|
||||
ipc_update_pid(&shp->shm_lprid, task_tgid(current));
|
||||
shp->shm_dtim = ktime_get_real_seconds();
|
||||
shp->shm_nattch--;
|
||||
if (shm_may_destroy(ns, shp))
|
||||
if (shm_may_destroy(shp))
|
||||
shm_destroy(ns, shp);
|
||||
else
|
||||
shm_unlock(shp);
|
||||
@@ -361,10 +404,10 @@ static int shm_try_destroy_orphaned(int id, void *p, void *data)
|
||||
*
|
||||
* As shp->* are changed under rwsem, it's safe to skip shp locking.
|
||||
*/
|
||||
if (shp->shm_creator != NULL)
|
||||
if (!list_empty(&shp->shm_clist))
|
||||
return 0;
|
||||
|
||||
if (shm_may_destroy(ns, shp)) {
|
||||
if (shm_may_destroy(shp)) {
|
||||
shm_lock_by_ptr(shp);
|
||||
shm_destroy(ns, shp);
|
||||
}
|
||||
@@ -382,48 +425,97 @@ void shm_destroy_orphaned(struct ipc_namespace *ns)
|
||||
/* Locking assumes this will only be called with task == current */
|
||||
void exit_shm(struct task_struct *task)
|
||||
{
|
||||
struct ipc_namespace *ns = task->nsproxy->ipc_ns;
|
||||
struct shmid_kernel *shp, *n;
|
||||
for (;;) {
|
||||
struct shmid_kernel *shp;
|
||||
struct ipc_namespace *ns;
|
||||
|
||||
if (list_empty(&task->sysvshm.shm_clist))
|
||||
return;
|
||||
task_lock(task);
|
||||
|
||||
/*
|
||||
* If kernel.shm_rmid_forced is not set then only keep track of
|
||||
* which shmids are orphaned, so that a later set of the sysctl
|
||||
* can clean them up.
|
||||
*/
|
||||
if (!ns->shm_rmid_forced) {
|
||||
down_read(&shm_ids(ns).rwsem);
|
||||
list_for_each_entry(shp, &task->sysvshm.shm_clist, shm_clist)
|
||||
shp->shm_creator = NULL;
|
||||
/*
|
||||
* Only under read lock but we are only called on current
|
||||
* so no entry on the list will be shared.
|
||||
*/
|
||||
list_del(&task->sysvshm.shm_clist);
|
||||
up_read(&shm_ids(ns).rwsem);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Destroy all already created segments, that were not yet mapped,
|
||||
* and mark any mapped as orphan to cover the sysctl toggling.
|
||||
* Destroy is skipped if shm_may_destroy() returns false.
|
||||
*/
|
||||
down_write(&shm_ids(ns).rwsem);
|
||||
list_for_each_entry_safe(shp, n, &task->sysvshm.shm_clist, shm_clist) {
|
||||
shp->shm_creator = NULL;
|
||||
|
||||
if (shm_may_destroy(ns, shp)) {
|
||||
shm_lock_by_ptr(shp);
|
||||
shm_destroy(ns, shp);
|
||||
if (list_empty(&task->sysvshm.shm_clist)) {
|
||||
task_unlock(task);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/* Remove the list head from any segments still attached. */
|
||||
list_del(&task->sysvshm.shm_clist);
|
||||
up_write(&shm_ids(ns).rwsem);
|
||||
shp = list_first_entry(&task->sysvshm.shm_clist, struct shmid_kernel,
|
||||
shm_clist);
|
||||
|
||||
/*
|
||||
* 1) Get pointer to the ipc namespace. It is worth to say
|
||||
* that this pointer is guaranteed to be valid because
|
||||
* shp lifetime is always shorter than namespace lifetime
|
||||
* in which shp lives.
|
||||
* We taken task_lock it means that shp won't be freed.
|
||||
*/
|
||||
ns = shp->ns;
|
||||
|
||||
/*
|
||||
* 2) If kernel.shm_rmid_forced is not set then only keep track of
|
||||
* which shmids are orphaned, so that a later set of the sysctl
|
||||
* can clean them up.
|
||||
*/
|
||||
if (!ns->shm_rmid_forced)
|
||||
goto unlink_continue;
|
||||
|
||||
/*
|
||||
* 3) get a reference to the namespace.
|
||||
* The refcount could be already 0. If it is 0, then
|
||||
* the shm objects will be free by free_ipc_work().
|
||||
*/
|
||||
ns = get_ipc_ns_not_zero(ns);
|
||||
if (!ns) {
|
||||
unlink_continue:
|
||||
list_del_init(&shp->shm_clist);
|
||||
task_unlock(task);
|
||||
continue;
|
||||
}
|
||||
|
||||
/*
|
||||
* 4) get a reference to shp.
|
||||
* This cannot fail: shm_clist_rm() is called before
|
||||
* ipc_rmid(), thus the refcount cannot be 0.
|
||||
*/
|
||||
WARN_ON(!ipc_rcu_getref(&shp->shm_perm));
|
||||
|
||||
/*
|
||||
* 5) unlink the shm segment from the list of segments
|
||||
* created by current.
|
||||
* This must be done last. After unlinking,
|
||||
* only the refcounts obtained above prevent IPC_RMID
|
||||
* from destroying the segment or the namespace.
|
||||
*/
|
||||
list_del_init(&shp->shm_clist);
|
||||
|
||||
task_unlock(task);
|
||||
|
||||
/*
|
||||
* 6) we have all references
|
||||
* Thus lock & if needed destroy shp.
|
||||
*/
|
||||
down_write(&shm_ids(ns).rwsem);
|
||||
shm_lock_by_ptr(shp);
|
||||
/*
|
||||
* rcu_read_lock was implicitly taken in shm_lock_by_ptr, it's
|
||||
* safe to call ipc_rcu_putref here
|
||||
*/
|
||||
ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
|
||||
|
||||
if (ipc_valid_object(&shp->shm_perm)) {
|
||||
if (shm_may_destroy(shp))
|
||||
shm_destroy(ns, shp);
|
||||
else
|
||||
shm_unlock(shp);
|
||||
} else {
|
||||
/*
|
||||
* Someone else deleted the shp from namespace
|
||||
* idr/kht while we have waited.
|
||||
* Just unlock and continue.
|
||||
*/
|
||||
shm_unlock(shp);
|
||||
}
|
||||
|
||||
up_write(&shm_ids(ns).rwsem);
|
||||
put_ipc_ns(ns); /* paired with get_ipc_ns_not_zero */
|
||||
}
|
||||
}
|
||||
|
||||
static vm_fault_t shm_fault(struct vm_fault *vmf)
|
||||
@@ -680,7 +772,11 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
|
||||
if (error < 0)
|
||||
goto no_id;
|
||||
|
||||
shp->ns = ns;
|
||||
|
||||
task_lock(current);
|
||||
list_add(&shp->shm_clist, ¤t->sysvshm.shm_clist);
|
||||
task_unlock(current);
|
||||
|
||||
/*
|
||||
* shmid gets reported as "inode#" in /proc/pid/maps.
|
||||
@@ -1573,7 +1669,8 @@ out_nattch:
|
||||
down_write(&shm_ids(ns).rwsem);
|
||||
shp = shm_lock(ns, shmid);
|
||||
shp->shm_nattch--;
|
||||
if (shm_may_destroy(ns, shp))
|
||||
|
||||
if (shm_may_destroy(shp))
|
||||
shm_destroy(ns, shp);
|
||||
else
|
||||
shm_unlock(shp);
|
||||
|
@@ -129,6 +129,21 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
|
||||
return map;
|
||||
}
|
||||
|
||||
static void bpf_map_write_active_inc(struct bpf_map *map)
|
||||
{
|
||||
atomic64_inc(&map->writecnt);
|
||||
}
|
||||
|
||||
static void bpf_map_write_active_dec(struct bpf_map *map)
|
||||
{
|
||||
atomic64_dec(&map->writecnt);
|
||||
}
|
||||
|
||||
bool bpf_map_write_active(const struct bpf_map *map)
|
||||
{
|
||||
return atomic64_read(&map->writecnt) != 0;
|
||||
}
|
||||
|
||||
static u32 bpf_map_value_size(struct bpf_map *map)
|
||||
{
|
||||
if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
|
||||
@@ -590,11 +605,8 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
|
||||
{
|
||||
struct bpf_map *map = vma->vm_file->private_data;
|
||||
|
||||
if (vma->vm_flags & VM_MAYWRITE) {
|
||||
mutex_lock(&map->freeze_mutex);
|
||||
map->writecnt++;
|
||||
mutex_unlock(&map->freeze_mutex);
|
||||
}
|
||||
if (vma->vm_flags & VM_MAYWRITE)
|
||||
bpf_map_write_active_inc(map);
|
||||
}
|
||||
|
||||
/* called for all unmapped memory region (including initial) */
|
||||
@@ -602,11 +614,8 @@ static void bpf_map_mmap_close(struct vm_area_struct *vma)
|
||||
{
|
||||
struct bpf_map *map = vma->vm_file->private_data;
|
||||
|
||||
if (vma->vm_flags & VM_MAYWRITE) {
|
||||
mutex_lock(&map->freeze_mutex);
|
||||
map->writecnt--;
|
||||
mutex_unlock(&map->freeze_mutex);
|
||||
}
|
||||
if (vma->vm_flags & VM_MAYWRITE)
|
||||
bpf_map_write_active_dec(map);
|
||||
}
|
||||
|
||||
static const struct vm_operations_struct bpf_map_default_vmops = {
|
||||
@@ -656,7 +665,7 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
|
||||
goto out;
|
||||
|
||||
if (vma->vm_flags & VM_MAYWRITE)
|
||||
map->writecnt++;
|
||||
bpf_map_write_active_inc(map);
|
||||
out:
|
||||
mutex_unlock(&map->freeze_mutex);
|
||||
return err;
|
||||
@@ -1088,6 +1097,7 @@ static int map_update_elem(union bpf_attr *attr)
|
||||
map = __bpf_map_get(f);
|
||||
if (IS_ERR(map))
|
||||
return PTR_ERR(map);
|
||||
bpf_map_write_active_inc(map);
|
||||
if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
|
||||
err = -EPERM;
|
||||
goto err_put;
|
||||
@@ -1129,6 +1139,7 @@ free_value:
|
||||
free_key:
|
||||
kfree(key);
|
||||
err_put:
|
||||
bpf_map_write_active_dec(map);
|
||||
fdput(f);
|
||||
return err;
|
||||
}
|
||||
@@ -1151,6 +1162,7 @@ static int map_delete_elem(union bpf_attr *attr)
|
||||
map = __bpf_map_get(f);
|
||||
if (IS_ERR(map))
|
||||
return PTR_ERR(map);
|
||||
bpf_map_write_active_inc(map);
|
||||
if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
|
||||
err = -EPERM;
|
||||
goto err_put;
|
||||
@@ -1181,6 +1193,7 @@ static int map_delete_elem(union bpf_attr *attr)
|
||||
out:
|
||||
kfree(key);
|
||||
err_put:
|
||||
bpf_map_write_active_dec(map);
|
||||
fdput(f);
|
||||
return err;
|
||||
}
|
||||
@@ -1485,6 +1498,7 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
|
||||
map = __bpf_map_get(f);
|
||||
if (IS_ERR(map))
|
||||
return PTR_ERR(map);
|
||||
bpf_map_write_active_inc(map);
|
||||
if (!(map_get_sys_perms(map, f) & FMODE_CAN_READ) ||
|
||||
!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
|
||||
err = -EPERM;
|
||||
@@ -1526,6 +1540,7 @@ free_value:
|
||||
free_key:
|
||||
kfree(key);
|
||||
err_put:
|
||||
bpf_map_write_active_dec(map);
|
||||
fdput(f);
|
||||
return err;
|
||||
}
|
||||
@@ -1552,8 +1567,7 @@ static int map_freeze(const union bpf_attr *attr)
|
||||
}
|
||||
|
||||
mutex_lock(&map->freeze_mutex);
|
||||
|
||||
if (map->writecnt) {
|
||||
if (bpf_map_write_active(map)) {
|
||||
err = -EBUSY;
|
||||
goto err_put;
|
||||
}
|
||||
@@ -3978,6 +3992,9 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
|
||||
union bpf_attr __user *uattr,
|
||||
int cmd)
|
||||
{
|
||||
bool has_read = cmd == BPF_MAP_LOOKUP_BATCH ||
|
||||
cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH;
|
||||
bool has_write = cmd != BPF_MAP_LOOKUP_BATCH;
|
||||
struct bpf_map *map;
|
||||
int err, ufd;
|
||||
struct fd f;
|
||||
@@ -3990,16 +4007,13 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
|
||||
map = __bpf_map_get(f);
|
||||
if (IS_ERR(map))
|
||||
return PTR_ERR(map);
|
||||
|
||||
if ((cmd == BPF_MAP_LOOKUP_BATCH ||
|
||||
cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH) &&
|
||||
!(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
|
||||
if (has_write)
|
||||
bpf_map_write_active_inc(map);
|
||||
if (has_read && !(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
|
||||
err = -EPERM;
|
||||
goto err_put;
|
||||
}
|
||||
|
||||
if (cmd != BPF_MAP_LOOKUP_BATCH &&
|
||||
!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
|
||||
if (has_write && !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
|
||||
err = -EPERM;
|
||||
goto err_put;
|
||||
}
|
||||
@@ -4012,8 +4026,9 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
|
||||
BPF_DO_BATCH(map->ops->map_update_batch);
|
||||
else
|
||||
BPF_DO_BATCH(map->ops->map_delete_batch);
|
||||
|
||||
err_put:
|
||||
if (has_write)
|
||||
bpf_map_write_active_dec(map);
|
||||
fdput(f);
|
||||
return err;
|
||||
}
|
||||
|
@@ -3486,7 +3486,22 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
|
||||
|
||||
static bool bpf_map_is_rdonly(const struct bpf_map *map)
|
||||
{
|
||||
return (map->map_flags & BPF_F_RDONLY_PROG) && map->frozen;
|
||||
/* A map is considered read-only if the following condition are true:
|
||||
*
|
||||
* 1) BPF program side cannot change any of the map content. The
|
||||
* BPF_F_RDONLY_PROG flag is throughout the lifetime of a map
|
||||
* and was set at map creation time.
|
||||
* 2) The map value(s) have been initialized from user space by a
|
||||
* loader and then "frozen", such that no new map update/delete
|
||||
* operations from syscall side are possible for the rest of
|
||||
* the map's lifetime from that point onwards.
|
||||
* 3) Any parallel/pending map update/delete operations from syscall
|
||||
* side have been completed. Only after that point, it's safe to
|
||||
* assume that map value(s) are immutable.
|
||||
*/
|
||||
return (map->map_flags & BPF_F_RDONLY_PROG) &&
|
||||
READ_ONCE(map->frozen) &&
|
||||
!bpf_map_write_active(map);
|
||||
}
|
||||
|
||||
static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
|
||||
|
@@ -560,8 +560,8 @@ static int bringup_cpu(unsigned int cpu)
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* Reset stale stack state from the last time this CPU was online.
|
||||
*/
|
||||
* Reset stale stack state from the last time this CPU was online.
|
||||
*/
|
||||
scs_task_reset(idle);
|
||||
kasan_unpoison_task_stack(idle);
|
||||
|
||||
|
@@ -688,7 +688,7 @@ static int load_image_and_restore(void)
|
||||
goto Unlock;
|
||||
|
||||
error = swsusp_read(&flags);
|
||||
swsusp_close(FMODE_READ);
|
||||
swsusp_close(FMODE_READ | FMODE_EXCL);
|
||||
if (!error)
|
||||
error = hibernation_restore(flags & SF_PLATFORM_MODE);
|
||||
|
||||
@@ -978,7 +978,7 @@ static int software_resume(void)
|
||||
/* The snapshot device should not be opened while we're running */
|
||||
if (!hibernate_acquire()) {
|
||||
error = -EBUSY;
|
||||
swsusp_close(FMODE_READ);
|
||||
swsusp_close(FMODE_READ | FMODE_EXCL);
|
||||
goto Unlock;
|
||||
}
|
||||
|
||||
@@ -1013,7 +1013,7 @@ static int software_resume(void)
|
||||
pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
|
||||
return error;
|
||||
Close_Finish:
|
||||
swsusp_close(FMODE_READ);
|
||||
swsusp_close(FMODE_READ | FMODE_EXCL);
|
||||
goto Finish;
|
||||
}
|
||||
|
||||
|
@@ -1506,14 +1506,26 @@ __event_trigger_test_discard(struct trace_event_file *file,
|
||||
if (eflags & EVENT_FILE_FL_TRIGGER_COND)
|
||||
*tt = event_triggers_call(file, entry, event);
|
||||
|
||||
if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
|
||||
(unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
|
||||
!filter_match_preds(file->filter, entry))) {
|
||||
__trace_event_discard_commit(buffer, event);
|
||||
return true;
|
||||
}
|
||||
if (likely(!(file->flags & (EVENT_FILE_FL_SOFT_DISABLED |
|
||||
EVENT_FILE_FL_FILTERED |
|
||||
EVENT_FILE_FL_PID_FILTER))))
|
||||
return false;
|
||||
|
||||
if (file->flags & EVENT_FILE_FL_SOFT_DISABLED)
|
||||
goto discard;
|
||||
|
||||
if (file->flags & EVENT_FILE_FL_FILTERED &&
|
||||
!filter_match_preds(file->filter, entry))
|
||||
goto discard;
|
||||
|
||||
if ((file->flags & EVENT_FILE_FL_PID_FILTER) &&
|
||||
trace_event_ignore_this_pid(file))
|
||||
goto discard;
|
||||
|
||||
return false;
|
||||
discard:
|
||||
__trace_event_discard_commit(buffer, event);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@@ -2462,12 +2462,22 @@ static struct trace_event_file *
|
||||
trace_create_new_event(struct trace_event_call *call,
|
||||
struct trace_array *tr)
|
||||
{
|
||||
struct trace_pid_list *no_pid_list;
|
||||
struct trace_pid_list *pid_list;
|
||||
struct trace_event_file *file;
|
||||
|
||||
file = kmem_cache_alloc(file_cachep, GFP_TRACE);
|
||||
if (!file)
|
||||
return NULL;
|
||||
|
||||
pid_list = rcu_dereference_protected(tr->filtered_pids,
|
||||
lockdep_is_held(&event_mutex));
|
||||
no_pid_list = rcu_dereference_protected(tr->filtered_no_pids,
|
||||
lockdep_is_held(&event_mutex));
|
||||
|
||||
if (pid_list || no_pid_list)
|
||||
file->flags |= EVENT_FILE_FL_PID_FILTER;
|
||||
|
||||
file->event_call = call;
|
||||
file->tr = tr;
|
||||
atomic_set(&file->sm_ref, 0);
|
||||
|
@@ -1312,6 +1312,7 @@ static int uprobe_perf_open(struct trace_event_call *call,
|
||||
return 0;
|
||||
|
||||
list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
|
||||
tu = container_of(pos, struct trace_uprobe, tp);
|
||||
err = uprobe_apply(tu->inode, tu->offset, &tu->consumer, true);
|
||||
if (err) {
|
||||
uprobe_perf_close(call, event);
|
||||
|
@@ -181,9 +181,6 @@ int register_vlan_dev(struct net_device *dev, struct netlink_ext_ack *extack)
|
||||
if (err)
|
||||
goto out_unregister_netdev;
|
||||
|
||||
/* Account for reference in struct vlan_dev_priv */
|
||||
dev_hold(real_dev);
|
||||
|
||||
vlan_stacked_transfer_operstate(real_dev, dev, vlan);
|
||||
linkwatch_fire_event(dev); /* _MUST_ call rfc2863_policy() */
|
||||
|
||||
|
@@ -606,6 +606,9 @@ static int vlan_dev_init(struct net_device *dev)
|
||||
if (!vlan->vlan_pcpu_stats)
|
||||
return -ENOMEM;
|
||||
|
||||
/* Get vlan's reference to real_dev */
|
||||
dev_hold(real_dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@@ -924,15 +924,36 @@ static void remove_nexthop(struct net *net, struct nexthop *nh,
|
||||
/* if any FIB entries reference this nexthop, any dst entries
|
||||
* need to be regenerated
|
||||
*/
|
||||
static void nh_rt_cache_flush(struct net *net, struct nexthop *nh)
|
||||
static void nh_rt_cache_flush(struct net *net, struct nexthop *nh,
|
||||
struct nexthop *replaced_nh)
|
||||
{
|
||||
struct fib6_info *f6i;
|
||||
struct nh_group *nhg;
|
||||
int i;
|
||||
|
||||
if (!list_empty(&nh->fi_list))
|
||||
rt_cache_flush(net);
|
||||
|
||||
list_for_each_entry(f6i, &nh->f6i_list, nh_list)
|
||||
ipv6_stub->fib6_update_sernum(net, f6i);
|
||||
|
||||
/* if an IPv6 group was replaced, we have to release all old
|
||||
* dsts to make sure all refcounts are released
|
||||
*/
|
||||
if (!replaced_nh->is_group)
|
||||
return;
|
||||
|
||||
/* new dsts must use only the new nexthop group */
|
||||
synchronize_net();
|
||||
|
||||
nhg = rtnl_dereference(replaced_nh->nh_grp);
|
||||
for (i = 0; i < nhg->num_nh; i++) {
|
||||
struct nh_grp_entry *nhge = &nhg->nh_entries[i];
|
||||
struct nh_info *nhi = rtnl_dereference(nhge->nh->nh_info);
|
||||
|
||||
if (nhi->family == AF_INET6)
|
||||
ipv6_stub->fib6_nh_release_dsts(&nhi->fib6_nh);
|
||||
}
|
||||
}
|
||||
|
||||
static int replace_nexthop_grp(struct net *net, struct nexthop *old,
|
||||
@@ -1111,7 +1132,7 @@ static int replace_nexthop(struct net *net, struct nexthop *old,
|
||||
err = replace_nexthop_single(net, old, new, extack);
|
||||
|
||||
if (!err) {
|
||||
nh_rt_cache_flush(net, old);
|
||||
nh_rt_cache_flush(net, old, new);
|
||||
|
||||
__remove_nexthop(net, new, NULL);
|
||||
nexthop_put(new);
|
||||
@@ -1355,11 +1376,15 @@ static int nh_create_ipv6(struct net *net, struct nexthop *nh,
|
||||
/* sets nh_dev if successful */
|
||||
err = ipv6_stub->fib6_nh_init(net, fib6_nh, &fib6_cfg, GFP_KERNEL,
|
||||
extack);
|
||||
if (err)
|
||||
if (err) {
|
||||
/* IPv6 is not enabled, don't call fib6_nh_release */
|
||||
if (err == -EAFNOSUPPORT)
|
||||
goto out;
|
||||
ipv6_stub->fib6_nh_release(fib6_nh);
|
||||
else
|
||||
} else {
|
||||
nh->nh_flags = fib6_nh->fib_nh_flags;
|
||||
|
||||
}
|
||||
out:
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@@ -3911,7 +3911,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
|
||||
}
|
||||
#ifdef CONFIG_MMU
|
||||
case TCP_ZEROCOPY_RECEIVE: {
|
||||
struct tcp_zerocopy_receive zc;
|
||||
struct tcp_zerocopy_receive zc = {};
|
||||
int err;
|
||||
|
||||
if (get_user(len, optlen))
|
||||
@@ -3929,7 +3929,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
|
||||
lock_sock(sk);
|
||||
err = tcp_zerocopy_receive(sk, &zc);
|
||||
release_sock(sk);
|
||||
if (len == sizeof(zc))
|
||||
if (len >= offsetofend(struct tcp_zerocopy_receive, err))
|
||||
goto zerocopy_rcv_sk_err;
|
||||
switch (len) {
|
||||
case offsetofend(struct tcp_zerocopy_receive, err):
|
||||
|
@@ -337,8 +337,6 @@ static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
|
||||
return;
|
||||
|
||||
if (tcp_in_slow_start(tp)) {
|
||||
if (hystart && after(ack, ca->end_seq))
|
||||
bictcp_hystart_reset(sk);
|
||||
acked = tcp_slow_start(tp, acked);
|
||||
if (!acked)
|
||||
return;
|
||||
@@ -398,6 +396,9 @@ static void hystart_update(struct sock *sk, u32 delay)
|
||||
struct bictcp *ca = inet_csk_ca(sk);
|
||||
u32 threshold;
|
||||
|
||||
if (after(tp->snd_una, ca->end_seq))
|
||||
bictcp_hystart_reset(sk);
|
||||
|
||||
if (hystart_detect & HYSTART_ACK_TRAIN) {
|
||||
u32 now = bictcp_clock_us(sk);
|
||||
|
||||
|
@@ -1018,6 +1018,7 @@ static const struct ipv6_stub ipv6_stub_impl = {
|
||||
.ip6_mtu_from_fib6 = ip6_mtu_from_fib6,
|
||||
.fib6_nh_init = fib6_nh_init,
|
||||
.fib6_nh_release = fib6_nh_release,
|
||||
.fib6_nh_release_dsts = fib6_nh_release_dsts,
|
||||
.fib6_update_sernum = fib6_update_sernum_stub,
|
||||
.fib6_rt_update = fib6_rt_update,
|
||||
.ip6_del_rt = ip6_del_rt,
|
||||
|
@@ -193,7 +193,7 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
|
||||
#if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
|
||||
/* Policy lookup after SNAT yielded a new policy */
|
||||
if (skb_dst(skb)->xfrm) {
|
||||
IPCB(skb)->flags |= IPSKB_REROUTED;
|
||||
IP6CB(skb)->flags |= IP6SKB_REROUTED;
|
||||
return dst_output(net, sk, skb);
|
||||
}
|
||||
#endif
|
||||
|
@@ -3570,6 +3570,25 @@ void fib6_nh_release(struct fib6_nh *fib6_nh)
|
||||
fib_nh_common_release(&fib6_nh->nh_common);
|
||||
}
|
||||
|
||||
void fib6_nh_release_dsts(struct fib6_nh *fib6_nh)
|
||||
{
|
||||
int cpu;
|
||||
|
||||
if (!fib6_nh->rt6i_pcpu)
|
||||
return;
|
||||
|
||||
for_each_possible_cpu(cpu) {
|
||||
struct rt6_info *pcpu_rt, **ppcpu_rt;
|
||||
|
||||
ppcpu_rt = per_cpu_ptr(fib6_nh->rt6i_pcpu, cpu);
|
||||
pcpu_rt = xchg(ppcpu_rt, NULL);
|
||||
if (pcpu_rt) {
|
||||
dst_dev_put(&pcpu_rt->dst);
|
||||
dst_release(&pcpu_rt->dst);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
|
||||
gfp_t gfp_flags,
|
||||
struct netlink_ext_ack *extack)
|
||||
|
@@ -368,9 +368,10 @@ static void schedule_3rdack_retransmission(struct sock *sk)
|
||||
|
||||
/* reschedule with a timeout above RTT, as we must look only for drop */
|
||||
if (tp->srtt_us)
|
||||
timeout = tp->srtt_us << 1;
|
||||
timeout = usecs_to_jiffies(tp->srtt_us >> (3 - 1));
|
||||
else
|
||||
timeout = TCP_TIMEOUT_INIT;
|
||||
timeout += jiffies;
|
||||
|
||||
WARN_ON_ONCE(icsk->icsk_ack.pending & ICSK_ACK_TIMER);
|
||||
icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
|
||||
|
Some files were not shown because too many files have changed in this diff.