Merge 5.10.132 into android12-5.10-lts

Changes in 5.10.132
	ALSA: hda - Add fixup for Dell Latitidue E5430
	ALSA: hda/conexant: Apply quirk for another HP ProDesk 600 G3 model
	ALSA: hda/realtek: Fix headset mic for Acer SF313-51
	ALSA: hda/realtek - Fix headset mic problem for a HP machine with alc671
	ALSA: hda/realtek - Fix headset mic problem for a HP machine with alc221
	ALSA: hda/realtek - Enable the headset-mic on a Xiaomi's laptop
	xen/netback: avoid entering xenvif_rx_next_skb() with an empty rx queue
	fix race between exit_itimers() and /proc/pid/timers
	mm: split huge PUD on wp_huge_pud fallback
	tracing/histograms: Fix memory leak problem
	net: sock: tracing: Fix sock_exceed_buf_limit not to dereference stale pointer
	ip: fix dflt addr selection for connected nexthop
	ARM: 9213/1: Print message about disabled Spectre workarounds only once
	ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction
	wifi: mac80211: fix queue selection for mesh/OCB interfaces
	cgroup: Use separate src/dst nodes when preloading css_sets for migration
	btrfs: return -EAGAIN for NOWAIT dio reads/writes on compressed and inline extents
	drm/panfrost: Put mapping instead of shmem obj on panfrost_mmu_map_fault_addr() error
	drm/panfrost: Fix shrinker list corruption by madvise IOCTL
	fs/remap: constrain dedupe of EOF blocks
	nilfs2: fix incorrect masking of permission flags for symlinks
	sh: convert nommu io{re,un}map() to static inline functions
	Revert "evm: Fix memleak in init_desc"
	ext4: fix race condition between ext4_write and ext4_convert_inline_data
	ARM: dts: imx6qdl-ts7970: Fix ngpio typo and count
	spi: amd: Limit max transfer and message size
	ARM: 9209/1: Spectre-BHB: avoid pr_info() every time a CPU comes out of idle
	ARM: 9210/1: Mark the FDT_FIXED sections as shareable
	net/mlx5e: kTLS, Fix build time constant test in TX
	net/mlx5e: kTLS, Fix build time constant test in RX
	net/mlx5e: Fix capability check for updating vnic env counters
	drm/i915: fix a possible refcount leak in intel_dp_add_mst_connector()
	ima: Fix a potential integer overflow in ima_appraise_measurement
	ASoC: sgtl5000: Fix noise on shutdown/remove
	ASoC: tas2764: Add post reset delays
	ASoC: tas2764: Fix and extend FSYNC polarity handling
	ASoC: tas2764: Correct playback volume range
	ASoC: tas2764: Fix amp gain register offset & default
	ASoC: Intel: Skylake: Correct the ssp rate discovery in skl_get_ssp_clks()
	ASoC: Intel: Skylake: Correct the handling of fmt_config flexible array
	net: stmmac: dwc-qos: Disable split header for Tegra194
	sysctl: Fix data races in proc_dointvec().
	sysctl: Fix data races in proc_douintvec().
	sysctl: Fix data races in proc_dointvec_minmax().
	sysctl: Fix data races in proc_douintvec_minmax().
	sysctl: Fix data races in proc_doulongvec_minmax().
	sysctl: Fix data races in proc_dointvec_jiffies().
	tcp: Fix a data-race around sysctl_tcp_max_orphans.
	inetpeer: Fix data-races around sysctl.
	net: Fix data-races around sysctl_mem.
	cipso: Fix data-races around sysctl.
	icmp: Fix data-races around sysctl.
	ipv4: Fix a data-race around sysctl_fib_sync_mem.
	ARM: dts: at91: sama5d2: Fix typo in i2s1 node
	ARM: dts: sunxi: Fix SPI NOR campatible on Orange Pi Zero
	drm/i915/selftests: fix a couple IS_ERR() vs NULL tests
	drm/i915/gt: Serialize TLB invalidates with GT resets
	sysctl: Fix data-races in proc_dointvec_ms_jiffies().
	icmp: Fix a data-race around sysctl_icmp_ratelimit.
	icmp: Fix a data-race around sysctl_icmp_ratemask.
	raw: Fix a data-race around sysctl_raw_l3mdev_accept.
	ipv4: Fix data-races around sysctl_ip_dynaddr.
	nexthop: Fix data-races around nexthop_compat_mode.
	net: ftgmac100: Hold reference returned by of_get_child_by_name()
	ima: force signature verification when CONFIG_KEXEC_SIG is configured
	ima: Fix potential memory leak in ima_init_crypto()
	sfc: fix use after free when disabling sriov
	seg6: fix skb checksum evaluation in SRH encapsulation/insertion
	seg6: fix skb checksum in SRv6 End.B6 and End.B6.Encaps behaviors
	seg6: bpf: fix skb checksum in bpf_push_seg6_encap()
	sfc: fix kernel panic when creating VF
	net: atlantic: remove deep parameter on suspend/resume functions
	net: atlantic: remove aq_nic_deinit() when resume
	KVM: x86: Fully initialize 'struct kvm_lapic_irq' in kvm_pv_kick_cpu_op()
	net/tls: Check for errors in tls_device_init
	mm: sysctl: fix missing numa_stat when !CONFIG_HUGETLB_PAGE
	virtio_mmio: Add missing PM calls to freeze/restore
	virtio_mmio: Restore guest page size on resume
	netfilter: br_netfilter: do not skip all hooks with 0 priority
	scsi: hisi_sas: Limit max hw sectors for v3 HW
	cpufreq: pmac32-cpufreq: Fix refcount leak bug
	platform/x86: hp-wmi: Ignore Sanitization Mode event
	net: tipc: fix possible refcount leak in tipc_sk_create()
	NFC: nxp-nci: don't print header length mismatch on i2c error
	nvme-tcp: always fail a request when sending it failed
	nvme: fix regression when disconnect a recovering ctrl
	net: sfp: fix memory leak in sfp_probe()
	ASoC: ops: Fix off by one in range control validation
	pinctrl: aspeed: Fix potential NULL dereference in aspeed_pinmux_set_mux()
	ASoC: SOF: Intel: hda-loader: Clarify the cl_dsp_init() flow
	ASoC: wm5110: Fix DRE control
	ASoC: dapm: Initialise kcontrol data for mux/demux controls
	ASoC: cs47l15: Fix event generation for low power mux control
	ASoC: madera: Fix event generation for OUT1 demux
	ASoC: madera: Fix event generation for rate controls
	irqchip: or1k-pic: Undefine mask_ack for level triggered hardware
	x86: Clear .brk area at early boot
	soc: ixp4xx/npe: Fix unused match warning
	ARM: dts: stm32: use the correct clock source for CEC on stm32mp151
	Revert "can: xilinx_can: Limit CANFD brp to 2"
	nvme-pci: phison e16 has bogus namespace ids
	signal handling: don't use BUG_ON() for debugging
	USB: serial: ftdi_sio: add Belimo device ids
	usb: typec: add missing uevent when partner support PD
	usb: dwc3: gadget: Fix event pending check
	tty: serial: samsung_tty: set dma burst_size to 1
	vt: fix memory overlapping when deleting chars in the buffer
	serial: 8250: fix return error code in serial8250_request_std_resource()
	serial: stm32: Clear prev values before setting RTS delays
	serial: pl011: UPSTAT_AUTORTS requires .throttle/unthrottle
	serial: 8250: Fix PM usage_count for console handover
	x86/pat: Fix x86_has_pat_wp()
	Linux 5.10.132

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I450f357105f90b1b9549dea5de62dc9a160d4ba9
Greg Kroah-Hartman
2022-07-28 17:17:36 +02:00
112 changed files with 602 additions and 288 deletions

@@ -988,7 +988,7 @@ cipso_cache_enable - BOOLEAN
 cipso_cache_bucket_size - INTEGER
	The CIPSO label cache consists of a fixed size hash table with each
	hash bucket containing a number of cache entries.  This variable limits
-	the number of entries in each hash bucket; the larger the value the
+	the number of entries in each hash bucket; the larger the value is, the
	more CIPSO label mappings that can be cached.  When the number of
	entries in a given hash bucket reaches this limit adding new entries
	causes the oldest entry in the bucket to be removed to make room.
@@ -1093,7 +1093,7 @@ ip_autobind_reuse - BOOLEAN
	option should only be set by experts.
	Default: 0

-ip_dynaddr - BOOLEAN
+ip_dynaddr - INTEGER
	If set non-zero, enables support for dynamic addresses.
	If set to a non-zero value larger than 1, a kernel log
	message will be printed when dynamic address rewriting

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 131
+SUBLEVEL = 132
 EXTRAVERSION =
 NAME = Dare mighty things

@@ -226,7 +226,7 @@
	reg = <0x28>;
	#gpio-cells = <2>;
	gpio-controller;
-	ngpio = <32>;
+	ngpios = <62>;
 };

 sgtl5000: codec@a {

@@ -1125,7 +1125,7 @@
	clocks = <&pmc PMC_TYPE_PERIPHERAL 55>, <&pmc PMC_TYPE_GCK 55>;
	clock-names = "pclk", "gclk";
	assigned-clocks = <&pmc PMC_TYPE_CORE PMC_I2S1_MUX>;
-	assigned-parrents = <&pmc PMC_TYPE_GCK 55>;
+	assigned-clock-parents = <&pmc PMC_TYPE_GCK 55>;
	status = "disabled";
 };

@@ -543,7 +543,7 @@
	compatible = "st,stm32-cec";
	reg = <0x40016000 0x400>;
	interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
-	clocks = <&rcc CEC_K>, <&clk_lse>;
+	clocks = <&rcc CEC_K>, <&rcc CEC>;
	clock-names = "cec", "hdmi-cec";
	status = "disabled";
 };

@@ -169,7 +169,7 @@
 flash@0 {
	#address-cells = <1>;
	#size-cells = <1>;
-	compatible = "mxicy,mx25l1606e", "winbond,w25q128";
+	compatible = "mxicy,mx25l1606e", "jedec,spi-nor";
	reg = <0>;
	spi-max-frequency = <40000000>;
 };

@@ -27,6 +27,7 @@ enum {
	MT_HIGH_VECTORS,
	MT_MEMORY_RWX,
	MT_MEMORY_RW,
+	MT_MEMORY_RO,
	MT_ROM,
	MT_MEMORY_RWX_NONCACHED,
	MT_MEMORY_RW_DTCM,

@@ -164,5 +164,31 @@ static inline unsigned long user_stack_pointer(struct pt_regs *regs)
	((current_stack_pointer | (THREAD_SIZE - 1)) - 7) - 1;	\
 })

+/*
+ * Update ITSTATE after normal execution of an IT block instruction.
+ *
+ * The 8 IT state bits are split into two parts in CPSR:
+ *	ITSTATE<1:0> are in CPSR<26:25>
+ *	ITSTATE<7:2> are in CPSR<15:10>
+ */
+static inline unsigned long it_advance(unsigned long cpsr)
+{
+	if ((cpsr & 0x06000400) == 0) {
+		/* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */
+		cpsr &= ~PSR_IT_MASK;
+	} else {
+		/* We need to shift left ITSTATE<4:0> */
+		const unsigned long mask = 0x06001c00;	/* Mask ITSTATE<4:0> */
+		unsigned long it = cpsr & mask;
+		it <<= 1;
+		it |= it >> (27 - 10);	/* Carry ITSTATE<2> to correct place */
+		it &= mask;
+		cpsr &= ~mask;
+		cpsr |= it;
+	}
+	return cpsr;
+}
+
 #endif /* __ASSEMBLY__ */
 #endif
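
A side note on the hunk above: the ITSTATE rotation is dense bit-twiddling, so here is a standalone user-space sketch that checks one advance step (illustrative only, not part of the patch; PSR_IT_MASK and the split CPSR layout are copied from the code above):

#include <assert.h>

#define PSR_IT_MASK	0x0600fc00	/* ITSTATE<1:0> in bits 26:25, ITSTATE<7:2> in bits 15:10 */

static unsigned long it_advance(unsigned long cpsr)
{
	if ((cpsr & 0x06000400) == 0) {
		/* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */
		cpsr &= ~PSR_IT_MASK;
	} else {
		/* Shift ITSTATE<4:0> left by one within the split layout */
		const unsigned long mask = 0x06001c00;
		unsigned long it = cpsr & mask;
		it <<= 1;
		it |= it >> (27 - 10);	/* carry ITSTATE<2> back down to bit 10 */
		it &= mask;
		cpsr &= ~mask;
		cpsr |= it;
	}
	return cpsr;
}

int main(void)
{
	/* ITSTATE<4:0> = 0b00111: bits 26:25 hold 0b11, bits 12:10 hold 0b001 */
	unsigned long cpsr = (0x3UL << 25) | (0x1UL << 10);

	cpsr = it_advance(cpsr);
	/* One instruction later ITSTATE<4:0> = 0b01110: bits 26:25 = 0b10, bits 12:10 = 0b011 */
	assert(cpsr == ((0x2UL << 25) | (0x3UL << 10)));
	return 0;
}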

@@ -935,6 +935,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
	if (type == TYPE_LDST)
		do_alignment_finish_ldst(addr, instr, regs, offset);

+	if (thumb_mode(regs))
+		regs->ARM_cpsr = it_advance(regs->ARM_cpsr);
+
	return 0;

 bad_or_fault:

@@ -296,6 +296,13 @@ static struct mem_type mem_types[] __ro_after_init = {
		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
		.domain    = DOMAIN_KERNEL,
	},
+	[MT_MEMORY_RO] = {
+		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
+			     L_PTE_XN | L_PTE_RDONLY,
+		.prot_l1   = PMD_TYPE_TABLE,
+		.prot_sect = PMD_TYPE_SECT,
+		.domain    = DOMAIN_KERNEL,
+	},
	[MT_ROM] = {
		.prot_sect = PMD_TYPE_SECT,
		.domain    = DOMAIN_KERNEL,
@@ -490,6 +497,7 @@ static void __init build_mem_type_table(void)

			/* Also setup NX memory mapping */
			mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_XN;
+			mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_XN;
		}
	if (cpu_arch >= CPU_ARCH_ARMv7 && (cr & CR_TRE)) {
		/*
@@ -569,6 +577,7 @@ static void __init build_mem_type_table(void)
		mem_types[MT_ROM].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
		mem_types[MT_MINICLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
		mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
+		mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
 #endif

		/*
@@ -588,6 +597,8 @@ static void __init build_mem_type_table(void)
			mem_types[MT_MEMORY_RWX].prot_pte |= L_PTE_SHARED;
			mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_S;
			mem_types[MT_MEMORY_RW].prot_pte |= L_PTE_SHARED;
+			mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_S;
+			mem_types[MT_MEMORY_RO].prot_pte |= L_PTE_SHARED;
			mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
			mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= PMD_SECT_S;
			mem_types[MT_MEMORY_RWX_NONCACHED].prot_pte |= L_PTE_SHARED;
@@ -648,6 +659,8 @@ static void __init build_mem_type_table(void)
	mem_types[MT_MEMORY_RWX].prot_pte |= kern_pgprot;
	mem_types[MT_MEMORY_RW].prot_sect |= ecc_mask | cp->pmd;
	mem_types[MT_MEMORY_RW].prot_pte |= kern_pgprot;
+	mem_types[MT_MEMORY_RO].prot_sect |= ecc_mask | cp->pmd;
+	mem_types[MT_MEMORY_RO].prot_pte |= kern_pgprot;
	mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot;
	mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= ecc_mask;
	mem_types[MT_ROM].prot_sect |= cp->pmd;
@@ -1342,7 +1355,7 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
		map.pfn = __phys_to_pfn(__atags_pointer & SECTION_MASK);
		map.virtual = FDT_FIXED_BASE;
		map.length = FDT_FIXED_SIZE;
-		map.type = MT_ROM;
+		map.type = MT_MEMORY_RO;
		create_mapping(&map);
	}

@@ -108,8 +108,7 @@ static unsigned int spectre_v2_install_workaround(unsigned int method)
 #else
 static unsigned int spectre_v2_install_workaround(unsigned int method)
 {
-	pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
-		smp_processor_id());
+	pr_info_once("Spectre V2: workarounds disabled by configuration\n");

	return SPECTRE_VULNERABLE;
 }
@@ -209,10 +208,10 @@ static int spectre_bhb_install_workaround(int method)
			return SPECTRE_VULNERABLE;

		spectre_bhb_method = method;
-	}

-	pr_info("CPU%u: Spectre BHB: using %s workaround\n",
-		smp_processor_id(), spectre_bhb_method_name(method));
+		pr_info("CPU%u: Spectre BHB: enabling %s workaround for all CPUs\n",
+			smp_processor_id(), spectre_bhb_method_name(method));
+	}

	return SPECTRE_MITIGATED;
 }

@@ -14,6 +14,7 @@
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <asm/probes.h>
+#include <asm/ptrace.h>
 #include <asm/kprobes.h>

 void __init arm_probes_decode_init(void);
@@ -35,31 +36,6 @@ void __init find_str_pc_offset(void);
 #endif

-/*
- * Update ITSTATE after normal execution of an IT block instruction.
- *
- * The 8 IT state bits are split into two parts in CPSR:
- *	ITSTATE<1:0> are in CPSR<26:25>
- *	ITSTATE<7:2> are in CPSR<15:10>
- */
-static inline unsigned long it_advance(unsigned long cpsr)
-{
-	if ((cpsr & 0x06000400) == 0) {
-		/* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */
-		cpsr &= ~PSR_IT_MASK;
-	} else {
-		/* We need to shift left ITSTATE<4:0> */
-		const unsigned long mask = 0x06001c00;	/* Mask ITSTATE<4:0> */
-		unsigned long it = cpsr & mask;
-		it <<= 1;
-		it |= it >> (27 - 10);	/* Carry ITSTATE<2> to correct place */
-		it &= mask;
-		cpsr &= ~mask;
-		cpsr |= it;
-	}
-	return cpsr;
-}
-
 static inline void __kprobes bx_write_pc(long pcv, struct pt_regs *regs)
 {
	long cpsr = regs->ARM_cpsr;

@@ -271,8 +271,12 @@ static inline void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
 #endif /* CONFIG_HAVE_IOREMAP_PROT */

 #else /* CONFIG_MMU */
-#define iounmap(addr)		do { } while (0)
-#define ioremap(offset, size)	((void __iomem *)(unsigned long)(offset))
+static inline void __iomem *ioremap(phys_addr_t offset, size_t size)
+{
+	return (void __iomem *)(unsigned long)offset;
+}
+
+static inline void iounmap(volatile void __iomem *addr) { }
 #endif /* CONFIG_MMU */

 #define ioremap_uc	ioremap

@@ -419,6 +419,8 @@ static void __init clear_bss(void)
 {
	memset(__bss_start, 0,
	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }

 static unsigned long get_cmd_line_ptr(void)

@@ -88,6 +88,8 @@ const char * const *arch_get_ima_policy(void)
	if (IS_ENABLED(CONFIG_IMA_ARCH_POLICY) && arch_ima_get_secureboot()) {
		if (IS_ENABLED(CONFIG_MODULE_SIG))
			set_module_sig_enforced();
+		if (IS_ENABLED(CONFIG_KEXEC_SIG))
+			set_kexec_sig_enforced();
		return sb_arch_rules;
	}
	return NULL;

@@ -8142,15 +8142,17 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
  */
 static void kvm_pv_kick_cpu_op(struct kvm *kvm, unsigned long flags, int apicid)
 {
-	struct kvm_lapic_irq lapic_irq;
-
-	lapic_irq.shorthand = APIC_DEST_NOSHORT;
-	lapic_irq.dest_mode = APIC_DEST_PHYSICAL;
-	lapic_irq.level = 0;
-	lapic_irq.dest_id = apicid;
-	lapic_irq.msi_redir_hint = false;
+	/*
+	 * All other fields are unused for APIC_DM_REMRD, but may be consumed by
+	 * common code, e.g. for tracing.  Defer initialization to the compiler.
+	 */
+	struct kvm_lapic_irq lapic_irq = {
+		.delivery_mode = APIC_DM_REMRD,
+		.dest_mode = APIC_DEST_PHYSICAL,
+		.shorthand = APIC_DEST_NOSHORT,
+		.dest_id = apicid,
+	};

-	lapic_irq.delivery_mode = APIC_DM_REMRD;
	kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL);
 }
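
The rewrite above is safe because C guarantees that members omitted from a designated initializer are zero-initialized, so the dropped explicit stores (level = 0, msi_redir_hint = false) still happen implicitly. A minimal sketch of that rule (struct and field names invented for illustration):

#include <assert.h>
#include <stdbool.h>

struct irq_like {
	int delivery_mode;
	int level;		/* not named in the initializer, so zeroed */
	bool msi_redir_hint;	/* likewise */
};

int main(void)
{
	struct irq_like irq = { .delivery_mode = 42 };

	assert(irq.level == 0 && !irq.msi_redir_hint);
	return 0;
}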

@@ -78,10 +78,20 @@ static uint8_t __pte2cachemode_tbl[8] = {
	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
 };

-/* Check that the write-protect PAT entry is set for write-protect */
+/*
+ * Check that the write-protect PAT entry is set for write-protect.
+ * To do this without making assumptions how PAT has been set up (Xen has
+ * another layout than the kernel), translate the _PAGE_CACHE_MODE_WP cache
+ * mode via the __cachemode2pte_tbl[] into protection bits (those protection
+ * bits will select a cache mode of WP or better), and then translate the
+ * protection bits back into the cache mode using __pte2cm_idx() and the
+ * __pte2cachemode_tbl[] array. This will return the really used cache mode.
+ */
 bool x86_has_pat_wp(void)
 {
-	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
+	uint16_t prot = __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP];
+
+	return __pte2cachemode_tbl[__pte2cm_idx(prot)] == _PAGE_CACHE_MODE_WP;
 }

 enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)

@@ -471,6 +471,10 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
	if (slew_done_gpio_np)
		slew_done_gpio = read_gpio(slew_done_gpio_np);

+	of_node_put(volt_gpio_np);
+	of_node_put(freq_gpio_np);
+	of_node_put(slew_done_gpio_np);
+
	/* If we use the frequency GPIOs, calculate the min/max speeds based
	 * on the bus frequencies
	 */

@@ -790,6 +790,7 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
	ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs,
				 DRM_MODE_CONNECTOR_DisplayPort);
	if (ret) {
+		drm_dp_mst_put_port_malloc(port);
		intel_connector_free(intel_connector);
		return NULL;
	}

@@ -736,6 +736,20 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
	mutex_lock(&gt->tlb_invalidate_lock);
	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);

+	spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
+
+	for_each_engine(engine, gt, id) {
+		struct reg_and_bit rb;
+
+		rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
+		if (!i915_mmio_reg_offset(rb.reg))
+			continue;
+
+		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
+	}
+
+	spin_unlock_irq(&uncore->lock);
+
	for_each_engine(engine, gt, id) {
		/*
		 * HW architecture suggest typical invalidation time at 40us,
@@ -750,7 +764,6 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
		if (!i915_mmio_reg_offset(rb.reg))
			continue;

-		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
		if (__intel_wait_for_register_fw(uncore,
						 rb.reg, rb.bit, 0,
						 timeout_us, timeout_ms,

@@ -4788,8 +4788,8 @@ static int live_lrc_layout(void *arg)
			continue;

		hw = shmem_pin_map(engine->default_state);
-		if (IS_ERR(hw)) {
-			err = PTR_ERR(hw);
+		if (!hw) {
+			err = -ENOMEM;
			break;
		}
		hw += LRC_STATE_OFFSET / sizeof(*hw);
@@ -4965,8 +4965,8 @@ static int live_lrc_fixed(void *arg)
			continue;

		hw = shmem_pin_map(engine->default_state);
-		if (IS_ERR(hw)) {
-			err = PTR_ERR(hw);
+		if (!hw) {
+			err = -ENOMEM;
			break;
		}
		hw += LRC_STATE_OFFSET / sizeof(*hw);

@@ -427,8 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
	if (args->retained) {
		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_add_tail(&bo->base.madv_list,
-				      &pfdev->shrinker_list);
+			list_move_tail(&bo->base.madv_list,
+				       &pfdev->shrinker_list);
		else if (args->madv == PANFROST_MADV_WILLNEED)
			list_del_init(&bo->base.madv_list);
	}

@@ -484,7 +484,7 @@ err_map:
 err_pages:
	drm_gem_shmem_put_pages(&bo->base);
 err_bo:
-	drm_gem_object_put(&bo->base.base);
+	panfrost_gem_mapping_put(bomapping);
	return ret;
 }

@@ -66,7 +66,6 @@ static struct or1k_pic_dev or1k_pic_level = {
		.name = "or1k-PIC-level",
		.irq_unmask = or1k_pic_unmask,
		.irq_mask = or1k_pic_mask,
-		.irq_mask_ack = or1k_pic_mask_ack,
	},
	.handle = handle_level_irq,
	.flags = IRQ_LEVEL | IRQ_NOPROBE,

@@ -259,7 +259,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
	.tseg2_min = 1,
	.tseg2_max = 128,
	.sjw_max = 128,
-	.brp_min = 2,
+	.brp_min = 1,
	.brp_max = 256,
	.brp_inc = 1,
 };
@@ -272,7 +272,7 @@ static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
	.tseg2_min = 1,
	.tseg2_max = 16,
	.sjw_max = 16,
-	.brp_min = 2,
+	.brp_min = 1,
	.brp_max = 256,
	.brp_inc = 1,
 };

@@ -385,7 +385,7 @@ static void aq_pci_shutdown(struct pci_dev *pdev)
	}
 }

-static int aq_suspend_common(struct device *dev, bool deep)
+static int aq_suspend_common(struct device *dev)
 {
	struct aq_nic_s *nic = pci_get_drvdata(to_pci_dev(dev));
@@ -398,17 +398,15 @@ static int aq_suspend_common(struct device *dev, bool deep)
	if (netif_running(nic->ndev))
		aq_nic_stop(nic);

-	if (deep) {
-		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
-		aq_nic_set_power(nic);
-	}
+	aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
+	aq_nic_set_power(nic);

	rtnl_unlock();

	return 0;
 }

-static int atl_resume_common(struct device *dev, bool deep)
+static int atl_resume_common(struct device *dev)
 {
	struct pci_dev *pdev = to_pci_dev(dev);
	struct aq_nic_s *nic;
@@ -421,11 +419,6 @@ static int atl_resume_common(struct device *dev, bool deep)
	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);

-	if (deep) {
-		/* Reinitialize Nic/Vecs objects */
-		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
-	}
-
	if (netif_running(nic->ndev)) {
		ret = aq_nic_init(nic);
		if (ret)
@@ -450,22 +443,22 @@ err_exit:

 static int aq_pm_freeze(struct device *dev)
 {
-	return aq_suspend_common(dev, true);
+	return aq_suspend_common(dev);
 }

 static int aq_pm_suspend_poweroff(struct device *dev)
 {
-	return aq_suspend_common(dev, true);
+	return aq_suspend_common(dev);
 }

 static int aq_pm_thaw(struct device *dev)
 {
-	return atl_resume_common(dev, true);
+	return atl_resume_common(dev);
 }

 static int aq_pm_resume_restore(struct device *dev)
 {
-	return atl_resume_common(dev, true);
+	return atl_resume_common(dev);
 }

 static const struct dev_pm_ops aq_pm_ops = {

@@ -1747,6 +1747,19 @@ cleanup_clk:
	return rc;
 }

+static bool ftgmac100_has_child_node(struct device_node *np, const char *name)
+{
+	struct device_node *child_np = of_get_child_by_name(np, name);
+	bool ret = false;
+
+	if (child_np) {
+		ret = true;
+		of_node_put(child_np);
+	}
+
+	return ret;
+}
+
 static int ftgmac100_probe(struct platform_device *pdev)
 {
	struct resource *res;
@@ -1860,7 +1873,7 @@ static int ftgmac100_probe(struct platform_device *pdev)

		/* Display what we found */
		phy_attached_info(phy);
-	} else if (np && !of_get_child_by_name(np, "mdio")) {
+	} else if (np && !ftgmac100_has_child_node(np, "mdio")) {
		/* Support legacy ASPEED devicetree descriptions that decribe a
		 * MAC with an embedded MDIO controller but have no "mdio"
		 * child node. Automatically scan the MDIO bus for available

@@ -231,8 +231,7 @@ mlx5e_set_ktls_rx_priv_ctx(struct tls_context *tls_ctx,
	struct mlx5e_ktls_offload_context_rx **ctx =
		__tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_RX);

-	BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_rx *) >
-		     TLS_OFFLOAD_CONTEXT_SIZE_RX);
+	BUILD_BUG_ON(sizeof(priv_rx) > TLS_DRIVER_STATE_SIZE_RX);

	*ctx = priv_rx;
 }

@@ -63,8 +63,7 @@ mlx5e_set_ktls_tx_priv_ctx(struct tls_context *tls_ctx,
	struct mlx5e_ktls_offload_context_tx **ctx =
		__tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_TX);

-	BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_tx *) >
-		     TLS_OFFLOAD_CONTEXT_SIZE_TX);
+	BUILD_BUG_ON(sizeof(priv_tx) > TLS_DRIVER_STATE_SIZE_TX);

	*ctx = priv_tx;
 }

@@ -536,7 +536,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(vnic_env)
	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
	struct mlx5_core_dev *mdev = priv->mdev;

-	if (!MLX5_CAP_GEN(priv->mdev, nic_receive_steering_discard))
+	if (!mlx5e_stats_grp_vnic_env_num_stats(priv))
		return;

	MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);

@@ -1916,7 +1916,10 @@ static int efx_ef10_try_update_nic_stats_vf(struct efx_nic *efx)

	efx_update_sw_stats(efx, stats);
 out:
+	/* releasing a DMA coherent buffer with BH disabled can panic */
+	spin_unlock_bh(&efx->stats_lock);
	efx_nic_free_buffer(efx, &stats_buf);
+	spin_lock_bh(&efx->stats_lock);
	return rc;
 }

@@ -411,8 +411,9 @@ fail1:
 static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
 {
	struct pci_dev *dev = efx->pci_dev;
+	struct efx_ef10_nic_data *nic_data = efx->nic_data;
	unsigned int vfs_assigned = pci_vfs_assigned(dev);
-	int rc = 0;
+	int i, rc = 0;

	if (vfs_assigned && !force) {
		netif_info(efx, drv, efx->net_dev, "VFs are assigned to guests; "
@@ -420,10 +421,13 @@ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
		return -EBUSY;
	}

-	if (!vfs_assigned)
+	if (!vfs_assigned) {
+		for (i = 0; i < efx->vf_count; i++)
+			nic_data->vf[i].pci_dev = NULL;
		pci_disable_sriov(dev);
-	else
+	} else {
		rc = -EBUSY;
+	}

	efx_ef10_sriov_free_vf_vswitching(efx);
	efx->vf_count = 0;

@@ -363,6 +363,7 @@ bypass_clk_reset_gpio:
	data->fix_mac_speed = tegra_eqos_fix_speed;
	data->init = tegra_eqos_init;
	data->bsp_priv = eqos;
+	data->sph_disable = 1;

	err = tegra_eqos_init(pdev, eqos);
	if (err < 0)

@@ -2427,7 +2427,7 @@ static int sfp_probe(struct platform_device *pdev)

	platform_set_drvdata(pdev, sfp);

-	err = devm_add_action(sfp->dev, sfp_cleanup, sfp);
+	err = devm_add_action_or_reset(sfp->dev, sfp_cleanup, sfp);
	if (err < 0)
		return err;

@@ -495,6 +495,7 @@ void xenvif_rx_action(struct xenvif_queue *queue)
	queue->rx_copy.completed = &completed_skbs;

	while (xenvif_rx_ring_slots_available(queue) &&
+	       !skb_queue_empty(&queue->rx_queue) &&
	       work_done < RX_BATCH_SIZE) {
		xenvif_rx_skb(queue);
		work_done++;

@@ -122,7 +122,9 @@ static int nxp_nci_i2c_fw_read(struct nxp_nci_i2c_phy *phy,
	skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN);

	r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len);
-	if (r != frame_len) {
+	if (r < 0) {
+		goto fw_read_exit_free_skb;
+	} else if (r != frame_len) {
		nfc_err(&client->dev,
			"Invalid frame length: %u (expected %zu)\n",
			r, frame_len);
@@ -166,7 +168,9 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy,
		return 0;

	r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen);
-	if (r != header.plen) {
+	if (r < 0) {
+		goto nci_read_exit_free_skb;
+	} else if (r != header.plen) {
		nfc_err(&client->dev,
			"Invalid frame payload length: %u (expected %u)\n",
			r, header.plen);
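
Why the old single check misbehaved: i2c_master_recv() returns either a negative errno or the number of bytes read, and comparing a negative int against an unsigned length converts the int to a huge unsigned value, so every I/O error fell into the "Invalid frame length" print with a garbage length. A small sketch of that integer promotion (values illustrative):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	int r = -121;		/* e.g. -EREMOTEIO from a failed transfer */
	size_t frame_len = 16;

	if (r != frame_len)	/* r is converted to size_t, becoming a huge value */
		printf("Invalid frame length: %u (expected %zu)\n", r, frame_len);
	return 0;
}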

@@ -4460,6 +4460,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
	nvme_stop_keep_alive(ctrl);
	flush_work(&ctrl->async_event_work);
	cancel_work_sync(&ctrl->fw_act_work);
+	if (ctrl->ops->stop_ctrl)
+		ctrl->ops->stop_ctrl(ctrl);
 }
 EXPORT_SYMBOL_GPL(nvme_stop_ctrl);

@@ -478,6 +478,7 @@ struct nvme_ctrl_ops {
	void (*free_ctrl)(struct nvme_ctrl *ctrl);
	void (*submit_async_event)(struct nvme_ctrl *ctrl);
	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
+	void (*stop_ctrl)(struct nvme_ctrl *ctrl);
	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
 };

@@ -3234,7 +3234,8 @@ static const struct pci_device_id nvme_id_table[] = {
				NVME_QUIRK_DISABLE_WRITE_ZEROES|
				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
	{ PCI_DEVICE(0x1987, 0x5016),	/* Phison E16 */
-		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN |
+				NVME_QUIRK_BOGUS_NID, },
	{ PCI_DEVICE(0x1b4b, 0x1092),	/* Lexar 256 GB SSD */
		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST |
				NVME_QUIRK_IGNORE_DEV_SUBNQN, },

@@ -1057,6 +1057,14 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
	}
 }

+static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl)
+{
+	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
+
+	cancel_work_sync(&ctrl->err_work);
+	cancel_delayed_work_sync(&ctrl->reconnect_work);
+}
+
 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
 {
	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
@@ -2236,9 +2244,6 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&ctrl->err_work);
-	cancel_delayed_work_sync(&ctrl->reconnect_work);
-
	nvme_rdma_teardown_io_queues(ctrl, shutdown);
	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
	if (shutdown)
@@ -2288,6 +2293,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
	.submit_async_event	= nvme_rdma_submit_async_event,
	.delete_ctrl		= nvme_rdma_delete_ctrl,
	.get_address		= nvmf_get_address,
+	.stop_ctrl		= nvme_rdma_stop_ctrl,
 };

 /*

@@ -1149,8 +1149,7 @@ done:
	} else if (ret < 0) {
		dev_err(queue->ctrl->ctrl.device,
			"failed to send request %d\n", ret);
-		if (ret != -EPIPE && ret != -ECONNRESET)
-			nvme_tcp_fail_request(queue->request);
+		nvme_tcp_fail_request(queue->request);
		nvme_tcp_done_send_req(queue);
	}
	return ret;
@@ -2136,9 +2135,6 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
-	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
-
	nvme_tcp_teardown_io_queues(ctrl, shutdown);
	blk_mq_quiesce_queue(ctrl->admin_q);
	if (shutdown)
@@ -2178,6 +2174,12 @@ out_fail:
	nvme_tcp_reconnect_or_remove(ctrl);
 }

+static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl)
+{
+	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
+	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
+}
+
 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl)
 {
	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
@@ -2500,6 +2502,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
	.submit_async_event	= nvme_tcp_submit_async_event,
	.delete_ctrl		= nvme_tcp_delete_ctrl,
	.get_address		= nvmf_get_address,
+	.stop_ctrl		= nvme_tcp_stop_ctrl,
 };

 static bool

@@ -235,11 +235,11 @@ int aspeed_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned int function,
	const struct aspeed_sig_expr **funcs;
	const struct aspeed_sig_expr ***prios;

-	pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
-
	if (!pdesc)
		return -EINVAL;

+	pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name);
+
	prios = pdesc->prios;

	if (!prios)

@@ -62,6 +62,7 @@ enum hp_wmi_event_ids {
	HPWMI_BACKLIT_KB_BRIGHTNESS	= 0x0D,
	HPWMI_PEAKSHIFT_PERIOD		= 0x0F,
	HPWMI_BATTERY_CHARGE_PERIOD	= 0x10,
+	HPWMI_SANITIZATION_MODE		= 0x17,
 };

 struct bios_args {
@@ -629,6 +630,8 @@ static void hp_wmi_notify(u32 value, void *context)
		break;
	case HPWMI_BATTERY_CHARGE_PERIOD:
		break;
+	case HPWMI_SANITIZATION_MODE:
+		break;
	default:
		pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
		break;

@@ -2738,6 +2738,7 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
	struct hisi_hba *hisi_hba = shost_priv(shost);
	struct device *dev = hisi_hba->dev;
	int ret = sas_slave_configure(sdev);
+	unsigned int max_sectors;

	if (ret)
		return ret;
@@ -2755,6 +2756,12 @@ static int slave_configure_v3_hw(struct scsi_device *sdev)
		}
	}

+	/* Set according to IOMMU IOVA caching limit */
+	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+
+	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
+
	return 0;
 }

@@ -735,7 +735,7 @@ static const struct of_device_id ixp4xx_npe_of_match[] = {
 static struct platform_driver ixp4xx_npe_driver = {
	.driver = {
		.name           = "ixp4xx-npe",
-		.of_match_table = of_match_ptr(ixp4xx_npe_of_match),
+		.of_match_table = ixp4xx_npe_of_match,
	},
	.probe = ixp4xx_npe_probe,
	.remove = ixp4xx_npe_remove,

@@ -28,6 +28,7 @@
 #define AMD_SPI_RX_COUNT_REG	0x4B
 #define AMD_SPI_STATUS_REG	0x4C

+#define AMD_SPI_FIFO_SIZE	70
 #define AMD_SPI_MEM_SIZE	200

 /* M_CMD OP codes for SPI */
@@ -245,6 +246,11 @@ static int amd_spi_master_transfer(struct spi_master *master,
	return 0;
 }

+static size_t amd_spi_max_transfer_size(struct spi_device *spi)
+{
+	return AMD_SPI_FIFO_SIZE;
+}
+
 static int amd_spi_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
@@ -278,6 +284,8 @@ static int amd_spi_probe(struct platform_device *pdev)
	master->flags = SPI_MASTER_HALF_DUPLEX;
	master->setup = amd_spi_master_setup;
	master->transfer_one_message = amd_spi_master_transfer;
+	master->max_transfer_size = amd_spi_max_transfer_size;
+	master->max_message_size = amd_spi_max_transfer_size;

	/* Register the controller with SPI framework */
	err = devm_spi_register_master(dev, master);

@@ -23,6 +23,7 @@
 #include <linux/sysrq.h>
 #include <linux/delay.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/tty.h>
 #include <linux/ratelimit.h>
 #include <linux/tty_flip.h>
@@ -571,6 +572,9 @@ serial8250_register_ports(struct uart_driver *drv, struct device *dev)

		up->port.dev = dev;

+		if (uart_console_enabled(&up->port))
+			pm_runtime_get_sync(up->port.dev);
+
		serial8250_apply_quirks(up);
		uart_add_one_port(drv, &up->port);
	}

@@ -2953,8 +2953,10 @@ static int serial8250_request_std_resource(struct uart_8250_port *up)
	case UPIO_MEM32BE:
	case UPIO_MEM16:
	case UPIO_MEM:
-		if (!port->mapbase)
+		if (!port->mapbase) {
+			ret = -EINVAL;
			break;
+		}

		if (!request_mem_region(port->mapbase, size, "serial")) {
			ret = -EBUSY;

@@ -1326,6 +1326,15 @@ static void pl011_stop_rx(struct uart_port *port)
	pl011_dma_rx_stop(uap);
 }

+static void pl011_throttle_rx(struct uart_port *port)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->lock, flags);
+	pl011_stop_rx(port);
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
 static void pl011_enable_ms(struct uart_port *port)
 {
	struct uart_amba_port *uap =
@@ -1717,9 +1726,10 @@ static int pl011_allocate_irq(struct uart_amba_port *uap)
  */
 static void pl011_enable_interrupts(struct uart_amba_port *uap)
 {
+	unsigned long flags;
	unsigned int i;

-	spin_lock_irq(&uap->port.lock);
+	spin_lock_irqsave(&uap->port.lock, flags);

	/* Clear out any spuriously appearing RX interrupts */
	pl011_write(UART011_RTIS | UART011_RXIS, uap, REG_ICR);
@@ -1741,7 +1751,14 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap)
	if (!pl011_dma_rx_running(uap))
		uap->im |= UART011_RXIM;
	pl011_write(uap->im, uap, REG_IMSC);
-	spin_unlock_irq(&uap->port.lock);
+	spin_unlock_irqrestore(&uap->port.lock, flags);
+}
+
+static void pl011_unthrottle_rx(struct uart_port *port)
+{
+	struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port);
+
+	pl011_enable_interrupts(uap);
 }

 static int pl011_startup(struct uart_port *port)
@@ -2116,6 +2133,8 @@ static const struct uart_ops amba_pl011_pops = {
	.stop_tx	= pl011_stop_tx,
	.start_tx	= pl011_start_tx,
	.stop_rx	= pl011_stop_rx,
+	.throttle	= pl011_throttle_rx,
+	.unthrottle	= pl011_unthrottle_rx,
	.enable_ms	= pl011_enable_ms,
	.break_ctl	= pl011_break_ctl,
	.startup	= pl011_startup,

@@ -361,8 +361,7 @@ static void enable_tx_dma(struct s3c24xx_uart_port *ourport)
	/* Enable tx dma mode */
	ucon = rd_regl(port, S3C2410_UCON);
	ucon &= ~(S3C64XX_UCON_TXBURST_MASK | S3C64XX_UCON_TXMODE_MASK);
-	ucon |= (dma_get_cache_alignment() >= 16) ?
-		S3C64XX_UCON_TXBURST_16 : S3C64XX_UCON_TXBURST_1;
+	ucon |= S3C64XX_UCON_TXBURST_1;
	ucon |= S3C64XX_UCON_TXMODE_DMA;
	wr_regl(port, S3C2410_UCON, ucon);
@@ -634,7 +633,7 @@ static void enable_rx_dma(struct s3c24xx_uart_port *ourport)
			S3C64XX_UCON_DMASUS_EN |
			S3C64XX_UCON_TIMEOUT_EN |
			S3C64XX_UCON_RXMODE_MASK);
-	ucon |= S3C64XX_UCON_RXBURST_16 |
+	ucon |= S3C64XX_UCON_RXBURST_1 |
		0xf << S3C64XX_UCON_TIMEOUT_SHIFT |
		S3C64XX_UCON_EMPTYINT_EN |
		S3C64XX_UCON_TIMEOUT_EN |

@@ -1934,11 +1934,6 @@ static int uart_proc_show(struct seq_file *m, void *v)
 }
 #endif

-static inline bool uart_console_enabled(struct uart_port *port)
-{
-	return uart_console(port) && (port->cons->flags & CON_ENABLED);
-}
-
 static void uart_port_spin_lock_init(struct uart_port *port)
 {
	spin_lock_init(&port->lock);

@@ -70,6 +70,8 @@ static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
	*cr3 |= USART_CR3_DEM;
	over8 = *cr1 & USART_CR1_OVER8;

+	*cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
+
	if (over8)
		rs485_deat_dedt = delay_ADE * baud * 8;
	else

@@ -855,7 +855,7 @@ static void delete_char(struct vc_data *vc, unsigned int nr)
	unsigned short *p = (unsigned short *) vc->vc_pos;

	vc_uniscr_delete(vc, nr);
-	scr_memcpyw(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2);
+	scr_memmovew(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2);
	scr_memsetw(p + vc->vc_cols - vc->state.x - nr, vc->vc_video_erase_char,
			nr * 2);
	vc->vc_need_wrap = 0;

@@ -4198,7 +4198,6 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
	}

	evt->count = 0;
-	evt->flags &= ~DWC3_EVENT_PENDING;
	ret = IRQ_HANDLED;

	/* Unmask interrupt */
@@ -4211,6 +4210,9 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
		dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval);
	}

+	/* Keep the clearing of DWC3_EVENT_PENDING at the end */
+	evt->flags &= ~DWC3_EVENT_PENDING;
+
	return ret;
 }

@@ -1023,6 +1023,9 @@ static const struct usb_device_id id_table_combined[] = {
	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) },
	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) },
	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) },
+	/* Belimo Automation devices */
+	{ USB_DEVICE(FTDI_VID, BELIMO_ZTH_PID) },
+	{ USB_DEVICE(FTDI_VID, BELIMO_ZIP_PID) },
	/* ICP DAS I-756xU devices */
	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7560U_PID) },
	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) },

@@ -1568,6 +1568,12 @@
 #define CHETCO_SEASMART_LITE_PID	0xA5AE /* SeaSmart Lite USB Adapter */
 #define CHETCO_SEASMART_ANALOG_PID	0xA5AF /* SeaSmart Analog Adapter */

+/*
+ * Belimo Automation
+ */
+#define BELIMO_ZTH_PID			0x8050
+#define BELIMO_ZIP_PID			0xC811
+
 /*
  * Unjo AB
  */

@@ -1776,6 +1776,7 @@ void typec_set_pwr_opmode(struct typec_port *port,
			partner->usb_pd = 1;
			sysfs_notify(&partner_dev->kobj, NULL,
				     "supports_usb_power_delivery");
+			kobject_uevent(&partner_dev->kobj, KOBJ_CHANGE);
		}
		put_device(partner_dev);
	}

@@ -62,6 +62,7 @@
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
+#include <linux/pm.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/virtio.h>
@@ -543,6 +544,28 @@ static const struct virtio_config_ops virtio_mmio_config_ops = {
	.get_shm_region = vm_get_shm_region,
 };

+#ifdef CONFIG_PM_SLEEP
+static int virtio_mmio_freeze(struct device *dev)
+{
+	struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
+
+	return virtio_device_freeze(&vm_dev->vdev);
+}
+
+static int virtio_mmio_restore(struct device *dev)
+{
+	struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);
+
+	if (vm_dev->version == 1)
+		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
+
+	return virtio_device_restore(&vm_dev->vdev);
+}
+
+static const struct dev_pm_ops virtio_mmio_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore)
+};
+#endif
+
 static void virtio_mmio_release_dev(struct device *_d)
 {
@@ -807,7 +830,9 @@ static struct platform_driver virtio_mmio_driver = {
		.name		= "virtio-mmio",
		.of_match_table	= virtio_mmio_match,
		.acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match),
-		.pm		= &virtio_mmio_pm_ops,
+#ifdef CONFIG_PM_SLEEP
+		.pm		= &virtio_mmio_pm_ops,
+#endif
	},
 };

@@ -7480,7 +7480,19 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
	if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) ||
	    em->block_start == EXTENT_MAP_INLINE) {
		free_extent_map(em);
-		ret = -ENOTBLK;
+		/*
+		 * If we are in a NOWAIT context, return -EAGAIN in order to
+		 * fallback to buffered IO. This is not only because we can
+		 * block with buffered IO (no support for NOWAIT semantics at
+		 * the moment) but also to avoid returning short reads to user
+		 * space - this happens if we were able to read some data from
+		 * previous non-compressed extents and then when we fallback to
+		 * buffered IO, at btrfs_file_read_iter() by calling
+		 * filemap_read(), we fail to fault in pages for the read buffer,
+		 * in which case filemap_read() returns a short read (the number
+		 * of bytes previously read is > 0, so it does not return -EFAULT).
+		 */
+		ret = (flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOTBLK;
		goto unlock_err;
	}

@@ -1288,7 +1288,7 @@ int begin_new_exec(struct linux_binprm * bprm)
	bprm->mm = NULL;

 #ifdef CONFIG_POSIX_TIMERS
-	exit_itimers(me->signal);
+	exit_itimers(me);
	flush_itimer_signals();
 #endif

@@ -4691,16 +4691,17 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
		return -EOPNOTSUPP;

	ext4_fc_start_update(inode);
+	inode_lock(inode);
+	ret = ext4_convert_inline_data(inode);
+	inode_unlock(inode);
+	if (ret)
+		goto exit;

	if (mode & FALLOC_FL_PUNCH_HOLE) {
		ret = ext4_punch_hole(file, offset, len);
		goto exit;
	}

-	ret = ext4_convert_inline_data(inode);
-	if (ret)
-		goto exit;
-
	if (mode & FALLOC_FL_COLLAPSE_RANGE) {
		ret = ext4_collapse_range(file, offset, len);
		goto exit;

@@ -4074,15 +4074,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)

	trace_ext4_punch_hole(inode, offset, length, 0);

-	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
-	if (ext4_has_inline_data(inode)) {
-		down_write(&EXT4_I(inode)->i_mmap_sem);
-		ret = ext4_convert_inline_data(inode);
-		up_write(&EXT4_I(inode)->i_mmap_sem);
-		if (ret)
-			return ret;
-	}
-
	/*
	 * Write out all dirty pages to avoid race conditions
	 * Then release them.

@@ -198,6 +198,9 @@ static inline int nilfs_acl_chmod(struct inode *inode)

 static inline int nilfs_init_acl(struct inode *inode, struct inode *dir)
 {
+	if (S_ISLNK(inode->i_mode))
+		return 0;
+
	inode->i_mode &= ~current_umask();
	return 0;
 }

@@ -71,7 +71,8 @@ static int generic_remap_checks(struct file *file_in, loff_t pos_in,
	 * Otherwise, make sure the count is also block-aligned, having
	 * already confirmed the starting offsets' block alignment.
	 */
-	if (pos_in + count == size_in) {
+	if (pos_in + count == size_in &&
+	    (!(remap_flags & REMAP_FILE_DEDUP) || pos_out + count == size_out)) {
		bcount = ALIGN(size_in, bs) - pos_in;
	} else {
		if (!IS_ALIGNED(count, bs))

@@ -261,7 +261,8 @@ struct css_set {
	 * List of csets participating in the on-going migration either as
	 * source or destination.  Protected by cgroup_mutex.
	 */
-	struct list_head mg_preload_node;
+	struct list_head mg_src_preload_node;
+	struct list_head mg_dst_preload_node;
	struct list_head mg_node;

	/*

@@ -442,6 +442,12 @@ static inline int kexec_crash_loaded(void) { return 0; }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */

+#ifdef CONFIG_KEXEC_SIG
+void set_kexec_sig_enforced(void);
+#else
+static inline void set_kexec_sig_enforced(void) {}
+#endif
+
 #endif /* !defined(__ASSEBMLY__) */

 #endif /* LINUX_KEXEC_H */

@@ -82,7 +82,7 @@ static inline void exit_thread(struct task_struct *tsk)
 extern void do_group_exit(int);

 extern void exit_files(struct task_struct *);
-extern void exit_itimers(struct signal_struct *);
+extern void exit_itimers(struct task_struct *);

 extern pid_t kernel_clone(struct kernel_clone_args *kargs);
 struct task_struct *fork_idle(int);


@@ -403,6 +403,11 @@ static const bool earlycon_acpi_spcr_enable EARLYCON_USED_OR_UNUSED;
 static inline int setup_earlycon(char *buf) { return 0; }
 #endif
 
+static inline bool uart_console_enabled(struct uart_port *port)
+{
+	return uart_console(port) && (port->cons->flags & CON_ENABLED);
+}
+
 struct uart_port *uart_get_console(struct uart_port *ports, int nr,
 				   struct console *c);
 int uart_parse_earlycon(char *p, unsigned char *iotype, resource_size_t *addr,


@@ -75,7 +75,7 @@ static inline bool raw_sk_bound_dev_eq(struct net *net, int bound_dev_if,
 					   int dif, int sdif)
 {
 #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
-	return inet_bound_dev_eq(!!net->ipv4.sysctl_raw_l3mdev_accept,
+	return inet_bound_dev_eq(READ_ONCE(net->ipv4.sysctl_raw_l3mdev_accept),
 				 bound_dev_if, dif, sdif);
 #else
 	return inet_bound_dev_eq(true, bound_dev_if, dif, sdif);


@@ -1466,7 +1466,7 @@ void __sk_mem_reclaim(struct sock *sk, int amount);
 /* sysctl_mem values are in pages, we convert them in SK_MEM_QUANTUM units */
 static inline long sk_prot_mem_limits(const struct sock *sk, int index)
 {
-	long val = sk->sk_prot->sysctl_mem[index];
+	long val = READ_ONCE(sk->sk_prot->sysctl_mem[index]);
 
 #if PAGE_SIZE > SK_MEM_QUANTUM
 	val <<= PAGE_SHIFT - SK_MEM_QUANTUM_SHIFT;


@@ -721,7 +721,7 @@ int tls_sw_fallback_init(struct sock *sk,
 			 struct tls_crypto_info *crypto_info);
 
 #ifdef CONFIG_TLS_DEVICE
-void tls_device_init(void);
+int tls_device_init(void);
 void tls_device_cleanup(void);
 void tls_device_sk_destruct(struct sock *sk);
 int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
@@ -741,7 +741,7 @@ static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
 	return tls_get_ctx(sk)->rx_conf == TLS_HW;
 }
 #else
-static inline void tls_device_init(void) {}
+static inline int tls_device_init(void) { return 0; }
 static inline void tls_device_cleanup(void) {}
 static inline int


@@ -98,7 +98,7 @@ TRACE_EVENT(sock_exceed_buf_limit,
 	TP_STRUCT__entry(
 		__array(char, name, 32)
-		__field(long *, sysctl_mem)
+		__array(long, sysctl_mem, 3)
 		__field(long, allocated)
 		__field(int, sysctl_rmem)
 		__field(int, rmem_alloc)
@@ -110,7 +110,9 @@ TRACE_EVENT(sock_exceed_buf_limit,
 	TP_fast_assign(
 		strncpy(__entry->name, prot->name, 32);
-		__entry->sysctl_mem = prot->sysctl_mem;
+		__entry->sysctl_mem[0] = READ_ONCE(prot->sysctl_mem[0]);
+		__entry->sysctl_mem[1] = READ_ONCE(prot->sysctl_mem[1]);
+		__entry->sysctl_mem[2] = READ_ONCE(prot->sysctl_mem[2]);
 		__entry->allocated = allocated;
 		__entry->sysctl_rmem = sk_get_rmem0(sk, prot);
 		__entry->rmem_alloc = atomic_read(&sk->sk_rmem_alloc);
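
The __field to __array change is the whole fix: the entry used to record the address of prot->sysctl_mem, but the ring buffer can be read long after the protocol is unregistered, leaving a dangling pointer. Copying the three limits at trace time, each through READ_ONCE(), also pairs with the WRITE_ONCE() stores on the sysctl side. A condensed sketch of the capture (names are illustrative):

	struct entry_sketch {
		long sysctl_mem[3];	/* values captured at trace time, not a pointer */
	};

	static void capture_limits(struct entry_sketch *e, const long *sysctl_mem)
	{
		e->sysctl_mem[0] = READ_ONCE(sysctl_mem[0]);	/* low */
		e->sysctl_mem[1] = READ_ONCE(sysctl_mem[1]);	/* pressure */
		e->sysctl_mem[2] = READ_ONCE(sysctl_mem[2]);	/* high */
	}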


@@ -755,7 +755,8 @@ struct css_set init_css_set = {
 	.task_iters		= LIST_HEAD_INIT(init_css_set.task_iters),
 	.threaded_csets		= LIST_HEAD_INIT(init_css_set.threaded_csets),
 	.cgrp_links		= LIST_HEAD_INIT(init_css_set.cgrp_links),
-	.mg_preload_node	= LIST_HEAD_INIT(init_css_set.mg_preload_node),
+	.mg_src_preload_node	= LIST_HEAD_INIT(init_css_set.mg_src_preload_node),
+	.mg_dst_preload_node	= LIST_HEAD_INIT(init_css_set.mg_dst_preload_node),
 	.mg_node		= LIST_HEAD_INIT(init_css_set.mg_node),
 
 	/*
@@ -1230,7 +1231,8 @@ static struct css_set *find_css_set(struct css_set *old_cset,
 	INIT_LIST_HEAD(&cset->threaded_csets);
 	INIT_HLIST_NODE(&cset->hlist);
 	INIT_LIST_HEAD(&cset->cgrp_links);
-	INIT_LIST_HEAD(&cset->mg_preload_node);
+	INIT_LIST_HEAD(&cset->mg_src_preload_node);
+	INIT_LIST_HEAD(&cset->mg_dst_preload_node);
 	INIT_LIST_HEAD(&cset->mg_node);
 
 	/* Copy the set of subsystem state objects generated in
@@ -2578,21 +2580,27 @@ int cgroup_migrate_vet_dst(struct cgroup *dst_cgrp)
  */
 void cgroup_migrate_finish(struct cgroup_mgctx *mgctx)
 {
-	LIST_HEAD(preloaded);
 	struct css_set *cset, *tmp_cset;
 
 	lockdep_assert_held(&cgroup_mutex);
 
 	spin_lock_irq(&css_set_lock);
 
-	list_splice_tail_init(&mgctx->preloaded_src_csets, &preloaded);
-	list_splice_tail_init(&mgctx->preloaded_dst_csets, &preloaded);
-
-	list_for_each_entry_safe(cset, tmp_cset, &preloaded, mg_preload_node) {
+	list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_src_csets,
+				 mg_src_preload_node) {
 		cset->mg_src_cgrp = NULL;
 		cset->mg_dst_cgrp = NULL;
 		cset->mg_dst_cset = NULL;
-		list_del_init(&cset->mg_preload_node);
+		list_del_init(&cset->mg_src_preload_node);
+		put_css_set_locked(cset);
+	}
+
+	list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_dst_csets,
+				 mg_dst_preload_node) {
+		cset->mg_src_cgrp = NULL;
+		cset->mg_dst_cgrp = NULL;
+		cset->mg_dst_cset = NULL;
+		list_del_init(&cset->mg_dst_preload_node);
 		put_css_set_locked(cset);
 	}
@@ -2634,7 +2642,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset,
 	src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root);
 
-	if (!list_empty(&src_cset->mg_preload_node))
+	if (!list_empty(&src_cset->mg_src_preload_node))
 		return;
 
 	WARN_ON(src_cset->mg_src_cgrp);
@@ -2645,7 +2653,7 @@
 	src_cset->mg_src_cgrp = src_cgrp;
 	src_cset->mg_dst_cgrp = dst_cgrp;
 	get_css_set(src_cset);
-	list_add_tail(&src_cset->mg_preload_node, &mgctx->preloaded_src_csets);
+	list_add_tail(&src_cset->mg_src_preload_node, &mgctx->preloaded_src_csets);
 }
 
 /**
@@ -2670,7 +2678,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
 	/* look up the dst cset for each src cset and link it to src */
 	list_for_each_entry_safe(src_cset, tmp_cset, &mgctx->preloaded_src_csets,
-				 mg_preload_node) {
+				 mg_src_preload_node) {
 		struct css_set *dst_cset;
 		struct cgroup_subsys *ss;
 		int ssid;
@@ -2689,7 +2697,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
 		if (src_cset == dst_cset) {
 			src_cset->mg_src_cgrp = NULL;
 			src_cset->mg_dst_cgrp = NULL;
-			list_del_init(&src_cset->mg_preload_node);
+			list_del_init(&src_cset->mg_src_preload_node);
 			put_css_set(src_cset);
 			put_css_set(dst_cset);
 			continue;
@@ -2697,8 +2705,8 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx)
 		src_cset->mg_dst_cset = dst_cset;
 
-		if (list_empty(&dst_cset->mg_preload_node))
-			list_add_tail(&dst_cset->mg_preload_node,
+		if (list_empty(&dst_cset->mg_dst_preload_node))
+			list_add_tail(&dst_cset->mg_dst_preload_node,
 				      &mgctx->preloaded_dst_csets);
 		else
 			put_css_set(dst_cset);
@@ -2949,7 +2957,8 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 		goto out_finish;
 
 	spin_lock_irq(&css_set_lock);
-	list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, mg_preload_node) {
+	list_for_each_entry(src_cset, &mgctx.preloaded_src_csets,
+			    mg_src_preload_node) {
 		struct task_struct *task, *ntask;
 
 		/* all tasks in src_csets need to be migrated */
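
With one node per role, the teardown above walks each preload list over its own member instead of splicing both onto a single head, and it must use the _safe iterator because nodes are unlinked mid-walk. A generic sketch of that drain idiom (struct and field names are illustrative):

	struct preload_sketch {
		struct list_head node;
	};

	static void drain_preload(struct list_head *head)
	{
		struct preload_sketch *p, *tmp;

		/* _safe variant: the current entry is unlinked inside the loop */
		list_for_each_entry_safe(p, tmp, head, node) {
			list_del_init(&p->node);	/* re-init so list_empty() stays meaningful */
			/* the reference taken at preload time is dropped here */
		}
	}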


@@ -784,7 +784,7 @@ void __noreturn do_exit(long code)
 
 #ifdef CONFIG_POSIX_TIMERS
 		hrtimer_cancel(&tsk->signal->real_timer);
-		exit_itimers(tsk->signal);
+		exit_itimers(tsk);
 #endif
 		if (tsk->mm)
 			setmax_mm_hiwater_rss(&tsk->signal->maxrss, tsk->mm);


@@ -29,6 +29,15 @@
 #include <linux/vmalloc.h>
 #include "kexec_internal.h"
 
+#ifdef CONFIG_KEXEC_SIG
+static bool sig_enforce = IS_ENABLED(CONFIG_KEXEC_SIG_FORCE);
+
+void set_kexec_sig_enforced(void)
+{
+	sig_enforce = true;
+}
+#endif
+
 static int kexec_calculate_store_digests(struct kimage *image);
 
 /*
@@ -159,7 +168,7 @@ kimage_validate_signature(struct kimage *image)
 					   image->kernel_buf_len);
 
 	if (ret) {
-		if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) {
+		if (sig_enforce) {
 			pr_notice("Enforced kernel signature verification failed (%d).\n", ret);
 			return ret;
 		}


@@ -1925,12 +1925,12 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
 	bool autoreap = false;
 	u64 utime, stime;
 
-	BUG_ON(sig == -1);
+	WARN_ON_ONCE(sig == -1);
 
 	/* do_notify_parent_cldstop should have been called instead. */
-	BUG_ON(task_is_stopped_or_traced(tsk));
+	WARN_ON_ONCE(task_is_stopped_or_traced(tsk));
 
-	BUG_ON(!tsk->ptrace &&
-	       (tsk->group_leader != tsk || !thread_group_empty(tsk)));
+	WARN_ON_ONCE(!tsk->ptrace &&
+		     (tsk->group_leader != tsk || !thread_group_empty(tsk)));
 
 	/* Wake up all pidfd waiters */


@@ -560,14 +560,14 @@ static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp,
 		if (*negp) {
 			if (*lvalp > (unsigned long) INT_MAX + 1)
 				return -EINVAL;
-			*valp = -*lvalp;
+			WRITE_ONCE(*valp, -*lvalp);
 		} else {
 			if (*lvalp > (unsigned long) INT_MAX)
 				return -EINVAL;
-			*valp = *lvalp;
+			WRITE_ONCE(*valp, *lvalp);
 		}
 	} else {
-		int val = *valp;
+		int val = READ_ONCE(*valp);
 		if (val < 0) {
 			*negp = true;
 			*lvalp = -(unsigned long)val;
@@ -586,9 +586,9 @@ static int do_proc_douintvec_conv(unsigned long *lvalp,
 	if (write) {
 		if (*lvalp > UINT_MAX)
 			return -EINVAL;
-		*valp = *lvalp;
+		WRITE_ONCE(*valp, *lvalp);
 	} else {
-		unsigned int val = *valp;
+		unsigned int val = READ_ONCE(*valp);
 		*lvalp = (unsigned long)val;
 	}
 	return 0;
@@ -962,7 +962,7 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
 		if ((param->min && *param->min > tmp) ||
 		    (param->max && *param->max < tmp))
 			return -EINVAL;
-		*valp = tmp;
+		WRITE_ONCE(*valp, tmp);
 	}
 
 	return 0;
@@ -1028,7 +1028,7 @@ static int do_proc_douintvec_minmax_conv(unsigned long *lvalp,
 		    (param->max && *param->max < tmp))
 			return -ERANGE;
-		*valp = tmp;
+		WRITE_ONCE(*valp, tmp);
 	}
 
 	return 0;
@@ -1196,9 +1196,9 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table,
 				err = -EINVAL;
 				break;
 			}
-			*i = val;
+			WRITE_ONCE(*i, val);
 		} else {
-			val = convdiv * (*i) / convmul;
+			val = convdiv * READ_ONCE(*i) / convmul;
 			if (!first)
 				proc_put_char(&buffer, &left, '\t');
 			proc_put_long(&buffer, &left, val, false);
@@ -1279,9 +1279,12 @@ static int do_proc_dointvec_jiffies_conv(bool *negp, unsigned long *lvalp,
 	if (write) {
 		if (*lvalp > INT_MAX / HZ)
 			return 1;
-		*valp = *negp ? -(*lvalp*HZ) : (*lvalp*HZ);
+		if (*negp)
+			WRITE_ONCE(*valp, -*lvalp * HZ);
+		else
+			WRITE_ONCE(*valp, *lvalp * HZ);
 	} else {
-		int val = *valp;
+		int val = READ_ONCE(*valp);
 		unsigned long lval;
 		if (val < 0) {
 			*negp = true;
@@ -1327,9 +1330,9 @@ static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp,
 		if (jif > INT_MAX)
 			return 1;
-		*valp = (int)jif;
+		WRITE_ONCE(*valp, (int)jif);
 	} else {
-		int val = *valp;
+		int val = READ_ONCE(*valp);
 		unsigned long lval;
 		if (val < 0) {
 			*negp = true;
@@ -1397,8 +1400,8 @@ int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
  * @ppos: the current position in the file
 *
 * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
- * values from/to the user buffer, treated as an ASCII string. 
- * The values read are assumed to be in 1/1000 seconds, and 
+ * values from/to the user buffer, treated as an ASCII string.
+ * The values read are assumed to be in 1/1000 seconds, and
 * are converted into jiffies.
 *
 * Returns 0 on success.
@@ -2814,6 +2817,17 @@ static struct ctl_table vm_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two_hundred,
 	},
+#ifdef CONFIG_NUMA
+	{
+		.procname	= "numa_stat",
+		.data		= &sysctl_vm_numa_stat,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= sysctl_vm_numa_stat_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+#endif
 #ifdef CONFIG_HUGETLB_PAGE
 	{
 		.procname	= "nr_hugepages",
@@ -2830,15 +2844,6 @@ static struct ctl_table vm_table[] = {
 		.mode		= 0644,
 		.proc_handler	= &hugetlb_mempolicy_sysctl_handler,
 	},
-	{
-		.procname	= "numa_stat",
-		.data		= &sysctl_vm_numa_stat,
-		.maxlen		= sizeof(int),
-		.mode		= 0644,
-		.proc_handler	= sysctl_vm_numa_stat_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
-	},
 #endif
 	{
 		.procname	= "hugetlb_shm_group",
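
All of the sysctl hunks in this series apply one pattern: the proc handler now stores the integer with WRITE_ONCE() and every lockless reader loads it with READ_ONCE(), which marks the concurrent access as intentional (KCSAN-clean) and keeps the compiler from tearing, refetching, or caching it. A minimal sketch of the pairing (the tunable name is hypothetical):

	static int example_tunable;		/* hypothetical sysctl-backed int */

	/* proc handler side: plain store becomes WRITE_ONCE() */
	static void example_write(int new_val)
	{
		WRITE_ONCE(example_tunable, new_val);
	}

	/* fast-path side: plain load becomes READ_ONCE() */
	static int example_read(void)
	{
		return READ_ONCE(example_tunable);
	}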


@@ -1051,15 +1051,24 @@ retry_delete:
 }
 
 /*
- * This is called by do_exit or de_thread, only when there are no more
- * references to the shared signal_struct.
+ * This is called by do_exit or de_thread, only when nobody else can
+ * modify the signal->posix_timers list. Yet we need sighand->siglock
+ * to prevent the race with /proc/pid/timers.
  */
-void exit_itimers(struct signal_struct *sig)
+void exit_itimers(struct task_struct *tsk)
 {
+	struct list_head timers;
 	struct k_itimer *tmr;
 
-	while (!list_empty(&sig->posix_timers)) {
-		tmr = list_entry(sig->posix_timers.next, struct k_itimer, list);
+	if (list_empty(&tsk->signal->posix_timers))
+		return;
+
+	spin_lock_irq(&tsk->sighand->siglock);
+	list_replace_init(&tsk->signal->posix_timers, &timers);
+	spin_unlock_irq(&tsk->sighand->siglock);
+
+	while (!list_empty(&timers)) {
+		tmr = list_first_entry(&timers, struct k_itimer, list);
 		itimer_delete(tmr);
 	}
 }
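
The shape of the fix is a classic detach-then-drain: hold the lock only long enough to move the whole list onto a private head with list_replace_init(), then delete the timers lock-free, so a concurrent /proc/pid/timers walk can never observe a half-torn-down list. A self-contained sketch of the idiom (the struct is a stand-in for struct k_itimer):

	struct timer_sketch {
		struct list_head list;
	};

	static void drain_timers(spinlock_t *lock, struct list_head *shared)
	{
		LIST_HEAD(private);
		struct timer_sketch *t;

		spin_lock_irq(lock);
		list_replace_init(shared, &private);	/* shared is now empty */
		spin_unlock_irq(lock);

		while (!list_empty(&private)) {
			t = list_first_entry(&private, struct timer_sketch, list);
			list_del(&t->list);
			/* teardown of t happens without the lock held */
		}
	}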


@@ -3943,6 +3943,8 @@ static int parse_var_defs(struct hist_trigger_data *hist_data)
 
 			s = kstrdup(field_str, GFP_KERNEL);
 			if (!s) {
+				kfree(hist_data->attrs->var_defs.name[n_vars]);
+				hist_data->attrs->var_defs.name[n_vars] = NULL;
 				ret = -ENOMEM;
 				goto free;
 			}


@@ -4609,6 +4609,19 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
 
 static vm_fault_t create_huge_pud(struct vm_fault *vmf)
 {
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
+	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+	/* No support for anonymous transparent PUD pages yet */
+	if (vma_is_anonymous(vmf->vma))
+		return VM_FAULT_FALLBACK;
+	if (vmf->vma->vm_ops->huge_fault)
+		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+	return VM_FAULT_FALLBACK;
+}
+
+static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
+{
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 	/* No support for anonymous transparent PUD pages yet */
@@ -4623,19 +4636,7 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
 split:
 	/* COW or write-notify not handled on PUD level: split pud.*/
 	__split_huge_pud(vmf->vma, vmf->pud, vmf->address);
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-	return VM_FAULT_FALLBACK;
-}
-
-static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
-{
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	/* No support for anonymous transparent PUD pages yet */
-	if (vma_is_anonymous(vmf->vma))
-		return VM_FAULT_FALLBACK;
-	if (vmf->vma->vm_ops->huge_fault)
-		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 	return VM_FAULT_FALLBACK;
 }


@@ -1012,9 +1012,24 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net,
 		return okfn(net, sk, skb);
 
 	ops = nf_hook_entries_get_hook_ops(e);
-	for (i = 0; i < e->num_hook_entries &&
-	      ops[i]->priority <= NF_BR_PRI_BRNF; i++)
-		;
+	for (i = 0; i < e->num_hook_entries; i++) {
+		/* These hooks have already been called */
+		if (ops[i]->priority < NF_BR_PRI_BRNF)
+			continue;
+
+		/* These hooks have not been called yet, run them. */
+		if (ops[i]->priority > NF_BR_PRI_BRNF)
+			break;
+
+		/* take a closer look at NF_BR_PRI_BRNF. */
+		if (ops[i]->hook == br_nf_pre_routing) {
+			/* This hook diverted the skb to this function,
+			 * hooks after this have not been run yet.
+			 */
+			i++;
+			break;
+		}
+	}
 
 	nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev,
 			   sk, net, okfn);


@@ -5606,7 +5606,6 @@ static int bpf_push_seg6_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len
 	if (err)
 		return err;
 
-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
 
 	return seg6_lookup_nexthop(skb, NULL, 0);


@@ -1249,7 +1249,7 @@ static int inet_sk_reselect_saddr(struct sock *sk)
 	if (new_saddr == old_saddr)
 		return 0;
 
-	if (sock_net(sk)->ipv4.sysctl_ip_dynaddr > 1) {
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) > 1) {
 		pr_info("%s(): shifting inet->saddr from %pI4 to %pI4\n",
 			__func__, &old_saddr, &new_saddr);
 	}
@@ -1304,7 +1304,7 @@ int inet_sk_rebuild_header(struct sock *sk)
 	 * Other protocols have to map its equivalent state to TCP_SYN_SENT.
 	 * DCCP maps its DCCP_REQUESTING state to TCP_SYN_SENT. -acme
 	 */
-	if (!sock_net(sk)->ipv4.sysctl_ip_dynaddr ||
+	if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) ||
 	    sk->sk_state != TCP_SYN_SENT ||
 	    (sk->sk_userlocks & SOCK_BINDADDR_LOCK) ||
 	    (err = inet_sk_reselect_saddr(sk)) != 0)


@@ -240,7 +240,7 @@ static int cipso_v4_cache_check(const unsigned char *key,
 	struct cipso_v4_map_cache_entry *prev_entry = NULL;
 	u32 hash;
 
-	if (!cipso_v4_cache_enabled)
+	if (!READ_ONCE(cipso_v4_cache_enabled))
 		return -ENOENT;
 
 	hash = cipso_v4_map_cache_hash(key, key_len);
@@ -297,13 +297,14 @@
 int cipso_v4_cache_add(const unsigned char *cipso_ptr,
 		       const struct netlbl_lsm_secattr *secattr)
 {
+	int bkt_size = READ_ONCE(cipso_v4_cache_bucketsize);
 	int ret_val = -EPERM;
 	u32 bkt;
 	struct cipso_v4_map_cache_entry *entry = NULL;
 	struct cipso_v4_map_cache_entry *old_entry = NULL;
 	u32 cipso_ptr_len;
 
-	if (!cipso_v4_cache_enabled || cipso_v4_cache_bucketsize <= 0)
+	if (!READ_ONCE(cipso_v4_cache_enabled) || bkt_size <= 0)
 		return 0;
 
 	cipso_ptr_len = cipso_ptr[1];
@@ -323,7 +324,7 @@ int cipso_v4_cache_add(const unsigned char *cipso_ptr,
 	bkt = entry->hash & (CIPSO_V4_CACHE_BUCKETS - 1);
 	spin_lock_bh(&cipso_v4_cache[bkt].lock);
-	if (cipso_v4_cache[bkt].size < cipso_v4_cache_bucketsize) {
+	if (cipso_v4_cache[bkt].size < bkt_size) {
 		list_add(&entry->list, &cipso_v4_cache[bkt].list);
 		cipso_v4_cache[bkt].size += 1;
 	} else {
@@ -1200,7 +1201,8 @@ static int cipso_v4_gentag_rbm(const struct cipso_v4_doi *doi_def,
 	/* This will send packets using the "optimized" format when
 	 * possible as specified in section 3.4.2.6 of the
 	 * CIPSO draft. */
-	if (cipso_v4_rbm_optfmt && ret_val > 0 && ret_val <= 10)
+	if (READ_ONCE(cipso_v4_rbm_optfmt) && ret_val > 0 &&
+	    ret_val <= 10)
 		tag_len = 14;
 	else
 		tag_len = 4 + ret_val;
@@ -1604,7 +1606,7 @@ int cipso_v4_validate(const struct sk_buff *skb, unsigned char **option)
 			 * all the CIPSO validations here but it doesn't
 			 * really specify _exactly_ what we need to validate
 			 * ... so, just make it a sysctl tunable. */
-			if (cipso_v4_rbm_strictvalid) {
+			if (READ_ONCE(cipso_v4_rbm_strictvalid)) {
 				if (cipso_v4_map_lvl_valid(doi_def,
 							   tag[3]) < 0) {
 					err_offset = opt_iter + 3;


@@ -1229,7 +1229,7 @@ static int fib_check_nh_nongw(struct net *net, struct fib_nh *nh,
 
 	nh->fib_nh_dev = in_dev->dev;
 	dev_hold(nh->fib_nh_dev);
-	nh->fib_nh_scope = RT_SCOPE_HOST;
+	nh->fib_nh_scope = RT_SCOPE_LINK;
 	if (!netif_carrier_ok(nh->fib_nh_dev))
 		nh->fib_nh_flags |= RTNH_F_LINKDOWN;
 	err = 0;
@@ -1831,7 +1831,7 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event,
 			goto nla_put_failure;
 		if (nexthop_is_blackhole(fi->nh))
 			rtm->rtm_type = RTN_BLACKHOLE;
-		if (!fi->fib_net->ipv4.sysctl_nexthop_compat_mode)
+		if (!READ_ONCE(fi->fib_net->ipv4.sysctl_nexthop_compat_mode))
 			goto offload;
 	}


@@ -497,7 +497,7 @@ static void tnode_free(struct key_vector *tn)
 		tn = container_of(head, struct tnode, rcu)->kv;
 	}
 
-	if (tnode_free_size >= sysctl_fib_sync_mem) {
+	if (tnode_free_size >= READ_ONCE(sysctl_fib_sync_mem)) {
 		tnode_free_size = 0;
 		synchronize_rcu();
 	}


@@ -261,11 +261,12 @@ bool icmp_global_allow(void)
 	spin_lock(&icmp_global.lock);
 	delta = min_t(u32, now - icmp_global.stamp, HZ);
 	if (delta >= HZ / 50) {
-		incr = sysctl_icmp_msgs_per_sec * delta / HZ ;
+		incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
 		if (incr)
 			WRITE_ONCE(icmp_global.stamp, now);
 	}
-	credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst);
+	credit = min_t(u32, icmp_global.credit + incr,
+		       READ_ONCE(sysctl_icmp_msgs_burst));
 	if (credit) {
 		/* We want to use a credit of one in average, but need to randomize
 		 * it for security reasons.
@@ -289,7 +290,7 @@ static bool icmpv4_mask_allow(struct net *net, int type, int code)
 		return true;
 
 	/* Limit if icmp type is enabled in ratemask. */
-	if (!((1 << type) & net->ipv4.sysctl_icmp_ratemask))
+	if (!((1 << type) & READ_ONCE(net->ipv4.sysctl_icmp_ratemask)))
 		return true;
 
 	return false;
@@ -327,7 +328,8 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
 		vif = l3mdev_master_ifindex(dst->dev);
 		peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, vif, 1);
-		rc = inet_peer_xrlim_allow(peer, net->ipv4.sysctl_icmp_ratelimit);
+		rc = inet_peer_xrlim_allow(peer,
+					   READ_ONCE(net->ipv4.sysctl_icmp_ratelimit));
 		if (peer)
 			inet_putpeer(peer);
 	}
 out:


@@ -148,16 +148,20 @@ static void inet_peer_gc(struct inet_peer_base *base,
 			 struct inet_peer *gc_stack[],
 			 unsigned int gc_cnt)
 {
+	int peer_threshold, peer_maxttl, peer_minttl;
 	struct inet_peer *p;
 	__u32 delta, ttl;
 	int i;
 
-	if (base->total >= inet_peer_threshold)
+	peer_threshold = READ_ONCE(inet_peer_threshold);
+	peer_maxttl = READ_ONCE(inet_peer_maxttl);
+	peer_minttl = READ_ONCE(inet_peer_minttl);
+
+	if (base->total >= peer_threshold)
 		ttl = 0; /* be aggressive */
 	else
-		ttl = inet_peer_maxttl
-				- (inet_peer_maxttl - inet_peer_minttl) / HZ *
-					base->total / inet_peer_threshold * HZ;
+		ttl = peer_maxttl - (peer_maxttl - peer_minttl) / HZ *
+			base->total / peer_threshold * HZ;
 
 	for (i = 0; i < gc_cnt; i++) {
 		p = gc_stack[i];
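
Worked example of the interpolation above, with illustrative values (peer_threshold = 65536, peer_maxttl = 600*HZ, peer_minttl = 120*HZ) and a pool half full (base->total = 32768): evaluating strictly left to right, (600*HZ - 120*HZ)/HZ = 480, then 480 * 32768 / 65536 = 240, then * HZ gives 240*HZ, so ttl = 600*HZ - 240*HZ = 360*HZ. Entries idle for six minutes become collectable, and a fuller pool drives the TTL linearly toward the two-minute floor; the /HZ ... *HZ ordering keeps the intermediate products from overflowing.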


@@ -882,7 +882,7 @@ static void __remove_nexthop_fib(struct net *net, struct nexthop *nh)
 		/* __ip6_del_rt does a release, so do a hold here */
 		fib6_info_hold(f6i);
 		ipv6_stub->ip6_del_rt(net, f6i,
-				      !net->ipv4.sysctl_nexthop_compat_mode);
+				      !READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode));
 	}
 }
@@ -1173,7 +1173,8 @@ out:
 	if (!rc) {
 		nh_base_seq_inc(net);
 		nexthop_notify(RTM_NEWNEXTHOP, new_nh, &cfg->nlinfo);
-		if (replace_notify && net->ipv4.sysctl_nexthop_compat_mode)
+		if (replace_notify &&
+		    READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode))
 			nexthop_replace_notify(net, new_nh, &cfg->nlinfo);
 	}


@@ -5598,7 +5598,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
 		if (nexthop_is_blackhole(rt->nh))
 			rtm->rtm_type = RTN_BLACKHOLE;
 
-		if (net->ipv4.sysctl_nexthop_compat_mode &&
+		if (READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode) &&
 		    rt6_fill_node_nexthop(skb, rt->nh, &nh_flags) < 0)
 			goto nla_put_failure;


@@ -188,6 +188,8 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
 	}
 #endif
 
+	hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+
 	skb_postpush_rcsum(skb, hdr, tot_len);
 
 	return 0;
@@ -240,6 +242,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
 	}
 #endif
 
+	hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+
 	skb_postpush_rcsum(skb, hdr, sizeof(struct ipv6hdr) + hdrlen);
 
 	return 0;
@@ -301,7 +305,6 @@ static int seg6_do_srh(struct sk_buff *skb)
 		break;
 	}
 
-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
 
 	return 0;


@@ -435,7 +435,6 @@ static int input_action_end_b6(struct sk_buff *skb, struct seg6_local_lwt *slwt)
 	if (err)
 		goto drop;
 
-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
 
 	seg6_lookup_nexthop(skb, NULL, 0);
@@ -467,7 +466,6 @@ static int input_action_end_b6_encap(struct sk_buff *skb,
 	if (err)
 		goto drop;
 
-	ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
 	skb_set_transport_header(skb, sizeof(struct ipv6hdr));
 
 	seg6_lookup_nexthop(skb, NULL, 0);


@@ -145,8 +145,8 @@ u16 __ieee80211_select_queue(struct ieee80211_sub_if_data *sdata,
 	bool qos;
 
 	/* all mesh/ocb stations are required to support WME */
-	if (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
-	    sdata->vif.type == NL80211_IFTYPE_OCB)
+	if (sta && (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+		    sdata->vif.type == NL80211_IFTYPE_OCB))
 		qos = true;
 	else if (sta)
 		qos = sta->sta.wme;


@@ -489,6 +489,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
 	sock_init_data(sock, sk);
 	tipc_set_sk_state(sk, TIPC_OPEN);
 	if (tipc_sk_insert(tsk)) {
+		sk_free(sk);
 		pr_warn("Socket create failed; port number exhausted\n");
 		return -EINVAL;
 	}


@@ -1390,9 +1390,9 @@ static struct notifier_block tls_dev_notifier = {
 	.notifier_call	= tls_dev_event,
 };
 
-void __init tls_device_init(void)
+int __init tls_device_init(void)
 {
-	register_netdevice_notifier(&tls_dev_notifier);
+	return register_netdevice_notifier(&tls_dev_notifier);
 }
 
 void __exit tls_device_cleanup(void)


@@ -905,7 +905,12 @@ static int __init tls_register(void)
 	if (err)
 		return err;
 
-	tls_device_init();
+	err = tls_device_init();
+	if (err) {
+		unregister_pernet_subsys(&tls_proc_ops);
+		return err;
+	}
+
 	tcp_register_ulp(&tcp_tls_ulp_ops);
 
 	return 0;
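
The tls_register() change above completes the unwind chain: once tls_device_init() can fail, the pernet subsystem registered just before it must be rolled back on that path. A generic sketch of the pattern (step/undo names are hypothetical):

	static int register_sketch(void)
	{
		int err;

		err = step_a();		/* e.g. a pernet/proc registration */
		if (err)
			return err;

		err = step_b();		/* e.g. a device/notifier init that can now fail */
		if (err) {
			undo_a();	/* unwind step_a before propagating the error */
			return err;
		}

		return 0;
	}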


@@ -73,7 +73,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
 {
 	long rc;
 	const char *algo;
-	struct crypto_shash **tfm, *tmp_tfm = NULL;
+	struct crypto_shash **tfm, *tmp_tfm;
 	struct shash_desc *desc;
 
 	if (type == EVM_XATTR_HMAC) {
@@ -118,16 +118,13 @@ unlock:
 alloc:
 	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm),
 		       GFP_KERNEL);
-	if (!desc) {
-		crypto_free_shash(tmp_tfm);
+	if (!desc)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	desc->tfm = *tfm;
 
 	rc = crypto_shash_init(desc);
 	if (rc) {
-		crypto_free_shash(tmp_tfm);
 		kfree(desc);
 		return ERR_PTR(rc);
 	}


@@ -396,7 +396,8 @@ int ima_appraise_measurement(enum ima_hooks func,
 		goto out;
 	}
 
-	status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
+	status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value,
+				 rc < 0 ? 0 : rc, iint);
 	switch (status) {
 	case INTEGRITY_PASS:
 	case INTEGRITY_PASS_IMMUTABLE:
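
The clamp matters because rc carries either an xattr length or a negative errno; passed unchecked into a length parameter, a negative int would be converted to a huge unsigned size. A two-line sketch of the guard (function name is illustrative):

	/* rc is an xattr length on success, a negative errno on failure */
	static size_t xattr_len_or_zero(int rc)
	{
		return rc < 0 ? 0 : (size_t)rc;
	}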


@@ -205,6 +205,7 @@ out_array:
 			crypto_free_shash(ima_algo_array[i].tfm);
 	}
+	kfree(ima_algo_array);
 out:
 	crypto_free_shash(ima_shash_tfm);
 	return rc;


@@ -937,6 +937,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
 	SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+	SND_PCI_QUIRK(0x103c, 0x82b4, "HP ProDesk 600 G3", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
 	SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
 	SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
