Merge tag 'usb-serial-4.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial into usb-next

Johan writes:

USB-serial fixes for v4.6-rc7

Here are some more new device ids.

Signed-off-by: Johan Hovold <johan@kernel.org>
Committed by Greg Kroah-Hartman, 2016-05-09 09:26:56 +02:00
214 changed files with 2537 additions and 1220 deletions


@@ -48,6 +48,9 @@ Felix Kuhling <fxkuehl@gmx.de>
Felix Moeller <felix@derklecks.de>
Filipe Lautert <filipe@icewall.org>
Franck Bui-Huu <vagabon.xyz@gmail.com>
+Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com>
+Frank Rowand <frowand.list@gmail.com> <frank.rowand@am.sony.com>
+Frank Rowand <frowand.list@gmail.com> <frank.rowand@sonymobile.com>
Frank Zago <fzago@systemfabricworks.com>
Greg Kroah-Hartman <greg@echidna.(none)>
Greg Kroah-Hartman <gregkh@suse.de>
@@ -79,6 +82,7 @@ Kay Sievers <kay.sievers@vrfy.org>
Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
Koushik <raghavendra.koushik@neterion.com>
+Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Leonid I Ananiev <leonid.i.ananiev@intel.com>
Linas Vepstas <linas@austin.ibm.com>
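
For reference, a minimal sketch of how .mailmap entries like the ones added above are consumed (assuming a checkout of this tree; git check-mailmap ships with git itself, and git log/shortlog apply the same rewriting):

  $ git check-mailmap 'Frank Rowand <frowand@mvista.com>'
  Frank Rowand <frowand.list@gmail.com>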


@@ -2,7 +2,7 @@
The ARC HS can be configured with a pipeline performance monitor for counting
CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters.
+are 100+ hardware conditions dynamically mapped to up to 32 counters.
It also supports overflow interrupts.
Required properties:


@@ -2,7 +2,7 @@
The ARC700 can be configured with a pipeline performance monitor for counting
CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters
+are 100+ hardware conditions dynamically mapped to up to 32 counters
Note that:
 * The ARC 700 PCT does not support interrupts; although HW events may be


@@ -192,7 +192,6 @@ nodes to be present and contain the properties described below.
can be one of:
"allwinner,sun6i-a31"
"allwinner,sun8i-a23"
-"arm,psci"
"arm,realview-smp"
"brcm,bcm-nsp-smp"
"brcm,brahma-b15"


@@ -6,8 +6,8 @@ RK3xxx SoCs.
Required properties :
 - reg : Offset and length of the register set for the device
- - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or
-   "rockchip,rk3288-i2c".
+ - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c",
+   "rockchip,rk3228-i2c" or "rockchip,rk3288-i2c".
 - interrupts : interrupt number
 - clocks : parent clock


@@ -581,15 +581,16 @@ Specify "[Nn]ode" for node order
"Zone Order" orders the zonelists by zone type, then by node within each "Zone Order" orders the zonelists by zone type, then by node within each
zone. Specify "[Zz]one" for zone order. zone. Specify "[Zz]one" for zone order.
Specify "[Dd]efault" to request automatic configuration. Autoconfiguration Specify "[Dd]efault" to request automatic configuration.
will select "node" order in following case.
(1) if the DMA zone does not exist or
(2) if the DMA zone comprises greater than 50% of the available memory or
(3) if any node's DMA zone comprises greater than 70% of its local memory and
the amount of local memory is big enough.
Otherwise, "zone" order will be selected. Default order is recommended unless On 32-bit, the Normal zone needs to be preserved for allocations accessible
this is causing problems for your system/application. by the kernel, so "zone" order will be selected.
On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.
Default order is recommended unless this is causing problems for your
system/application.
============================================================== ==============================================================
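
As a rough usage sketch of the knob documented above (assuming a kernel built with CONFIG_NUMA and procfs mounted; the sysctl name is vm.numa_zonelist_order):

  $ cat /proc/sys/vm/numa_zonelist_order
  # echo Node > /proc/sys/vm/numa_zonelist_order

The same ordering can also be requested at boot via the numa_zonelist_order= kernel parameter.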


@@ -6027,7 +6027,7 @@ F: include/scsi/*iscsi*
ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
M: Or Gerlitz <ogerlitz@mellanox.com>
-M: Sagi Grimberg <sagig@mellanox.com>
+M: Sagi Grimberg <sagi@grimberg.me>
M: Roi Dayan <roid@mellanox.com>
L: linux-rdma@vger.kernel.org
S: Supported
@@ -6037,7 +6037,7 @@ Q: http://patchwork.kernel.org/project/linux-rdma/list/
F: drivers/infiniband/ulp/iser/
ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
-M: Sagi Grimberg <sagig@mellanox.com>
+M: Sagi Grimberg <sagi@grimberg.me>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
L: linux-rdma@vger.kernel.org
L: target-devel@vger.kernel.org
@@ -6400,7 +6400,7 @@ F: mm/kmemleak.c
F: mm/kmemleak-test.c
KPROBES
-M: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
+M: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
M: "David S. Miller" <davem@davemloft.net>
M: Masami Hiramatsu <mhiramat@kernel.org>
@@ -11071,6 +11071,15 @@ S: Maintained
F: drivers/clk/ti/
F: include/linux/clk/ti.h
+TI ETHERNET SWITCH DRIVER (CPSW)
+M: Mugunthan V N <mugunthanvnm@ti.com>
+R: Grygorii Strashko <grygorii.strashko@ti.com>
+L: linux-omap@vger.kernel.org
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/ethernet/ti/cpsw*
+F: drivers/net/ethernet/ti/davinci*
TI FLASH MEDIA INTERFACE DRIVER
M: Alex Dubov <oakad@yahoo.com>
S: Maintained
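
A quick way to see the effect of a MAINTAINERS addition like the CPSW entry above (a hypothetical invocation; get_maintainer.pl lives in scripts/ of this tree):

  $ ./scripts/get_maintainer.pl -f drivers/net/ethernet/ti/cpsw.c

which should now list Mugunthan V N as maintainer and linux-omap/netdev as the lists to Cc.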


@@ -1,8 +1,8 @@
VERSION = 4
PATCHLEVEL = 6
SUBLEVEL = 0
-EXTRAVERSION = -rc5
-NAME = Blurry Fish Butt
+EXTRAVERSION = -rc6
+NAME = Charred Weasel
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"


@@ -35,8 +35,10 @@ config ARC
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
+select OF_RESERVED_MEM
select PERF_USE_VMALLOC
select HAVE_DEBUG_STACKOVERFLOW
+select HAVE_GENERIC_DMA_COHERENT
config MIGHT_HAVE_PCI
bool


@@ -18,6 +18,12 @@
#define STATUS_AD_MASK (1<<STATUS_AD_BIT)
#define STATUS_IE_MASK (1<<STATUS_IE_BIT)
+/* status32 Bits as encoded/expected by CLRI/SETI */
+#define CLRI_STATUS_IE_BIT 4
+#define CLRI_STATUS_E_MASK 0xF
+#define CLRI_STATUS_IE_MASK (1 << CLRI_STATUS_IE_BIT)
#define AUX_USER_SP 0x00D
#define AUX_IRQ_CTRL 0x00E
#define AUX_IRQ_ACT 0x043 /* Active Intr across all levels */
@@ -100,6 +106,13 @@ static inline long arch_local_save_flags(void)
:
: "memory");
+/* To be compatible with irq_save()/irq_restore()
+ * encode the irq bits as expected by CLRI/SETI
+ * (this was needed to make CONFIG_TRACE_IRQFLAGS work)
+ */
+temp = (1 << 5) |
+((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
+(temp & CLRI_STATUS_E_MASK);
return temp;
}
@@ -108,7 +121,7 @@ static inline long arch_local_save_flags(void)
*/
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
-return !(flags & (STATUS_IE_MASK));
+return !(flags & CLRI_STATUS_IE_MASK);
}
static inline int arch_irqs_disabled(void)
@@ -128,11 +141,32 @@ static inline void arc_softirq_clear(int irq)
#else
#ifdef CONFIG_TRACE_IRQFLAGS
.macro TRACE_ASM_IRQ_DISABLE
bl trace_hardirqs_off
.endm
.macro TRACE_ASM_IRQ_ENABLE
bl trace_hardirqs_on
.endm
#else
.macro TRACE_ASM_IRQ_DISABLE
.endm
.macro TRACE_ASM_IRQ_ENABLE
.endm
#endif
.macro IRQ_DISABLE scratch
clri
+TRACE_ASM_IRQ_DISABLE
.endm
.macro IRQ_ENABLE scratch
+TRACE_ASM_IRQ_ENABLE
seti
.endm


@@ -69,8 +69,11 @@ ENTRY(handle_interrupt)
clri ; To make status32.IE agree with CPU internal state
-lr r0, [ICAUSE]
+#ifdef CONFIG_TRACE_IRQFLAGS
+TRACE_ASM_IRQ_DISABLE
+#endif
+lr r0, [ICAUSE]
mov blink, ret_from_exception
b.d arch_do_IRQ
@@ -169,6 +172,11 @@ END(EV_TLBProtV)
.Lrestore_regs:
+# Interrpts are actually disabled from this point on, but will get
+# reenabled after we return from interrupt/exception.
+# But irq tracer needs to be told now...
+TRACE_ASM_IRQ_ENABLE
ld r0, [sp, PT_status32] ; U/K mode at time of entry
lr r10, [AUX_IRQ_ACT]


@@ -341,6 +341,9 @@ END(call_do_page_fault)
.Lrestore_regs:
+# Interrpts are actually disabled from this point on, but will get
+# reenabled after we return from interrupt/exception.
+# But irq tracer needs to be told now...
TRACE_ASM_IRQ_ENABLE
lr r10, [status32]


@@ -13,6 +13,7 @@
#ifdef CONFIG_BLK_DEV_INITRD
#include <linux/initrd.h>
#endif
+#include <linux/of_fdt.h>
#include <linux/swap.h>
#include <linux/module.h>
#include <linux/highmem.h>
@@ -136,6 +137,9 @@ void __init setup_arch_memory(void)
memblock_reserve(__pa(initrd_start), initrd_end - initrd_start);
#endif
+early_init_fdt_reserve_self();
+early_init_fdt_scan_reserved_mem();
memblock_dump_all();
/*----------------- node/zones setup --------------------------*/


@@ -860,7 +860,7 @@
ti,no-idle-on-init;
reg = <0x50000000 0x2000>;
interrupts = <100>;
-dmas = <&edma 52>;
+dmas = <&edma 52 0>;
dma-names = "rxtx";
gpmc,num-cs = <7>;
gpmc,num-waitpins = <2>;


@@ -884,7 +884,7 @@
gpmc: gpmc@50000000 {
compatible = "ti,am3352-gpmc";
ti,hwmods = "gpmc";
-dmas = <&edma 52>;
+dmas = <&edma 52 0>;
dma-names = "rxtx";
clocks = <&l3s_gclk>;
clock-names = "fck";


@@ -99,13 +99,6 @@
#cooling-cells = <2>;
};
-extcon_usb1: extcon_usb1 {
-compatible = "linux,extcon-usb-gpio";
-id-gpio = <&gpio7 25 GPIO_ACTIVE_HIGH>;
-pinctrl-names = "default";
-pinctrl-0 = <&extcon_usb1_pins>;
-};
hdmi0: connector {
compatible = "hdmi-connector";
label = "hdmi";
@@ -349,12 +342,6 @@
>;
};
-extcon_usb1_pins: extcon_usb1_pins {
-pinctrl-single,pins = <
-DRA7XX_CORE_IOPAD(0x37ec, PIN_INPUT_PULLUP | MUX_MODE14) /* uart1_rtsn.gpio7_25 */
->;
-};
tpd12s015_pins: pinmux_tpd12s015_pins {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x37b0, PIN_OUTPUT | MUX_MODE14) /* gpio7_10 CT_CP_HPD */
@@ -706,10 +693,6 @@
pinctrl-0 = <&usb1_pins>;
};
-&omap_dwc3_1 {
-extcon = <&extcon_usb1>;
-};
&omap_dwc3_2 {
extcon = <&extcon_usb2>;
};


@@ -4,6 +4,157 @@
 * published by the Free Software Foundation.
 */
&pllss {
/*
* See TRM "2.6.10 Connected outputso DPLLS" and
* "2.6.11 Connected Outputs of DPLLJ". Only clkout is
* connected except for hdmi and usb.
*/
adpll_mpu_ck: adpll@40 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-s-clock";
reg = <0x40 0x40>;
clocks = <&devosc_ck &devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow", "clkinphif";
clock-output-names = "481c5040.adpll.dcoclkldo",
"481c5040.adpll.clkout",
"481c5040.adpll.clkoutx2",
"481c5040.adpll.clkouthif";
};
adpll_dsp_ck: adpll@80 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x80 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5080.adpll.dcoclkldo",
"481c5080.adpll.clkout",
"481c5080.adpll.clkoutldo";
};
adpll_sgx_ck: adpll@b0 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0xb0 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c50b0.adpll.dcoclkldo",
"481c50b0.adpll.clkout",
"481c50b0.adpll.clkoutldo";
};
adpll_hdvic_ck: adpll@e0 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0xe0 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c50e0.adpll.dcoclkldo",
"481c50e0.adpll.clkout",
"481c50e0.adpll.clkoutldo";
};
adpll_l3_ck: adpll@110 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x110 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5110.adpll.dcoclkldo",
"481c5110.adpll.clkout",
"481c5110.adpll.clkoutldo";
};
adpll_isp_ck: adpll@140 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x140 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5140.adpll.dcoclkldo",
"481c5140.adpll.clkout",
"481c5140.adpll.clkoutldo";
};
adpll_dss_ck: adpll@170 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x170 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5170.adpll.dcoclkldo",
"481c5170.adpll.clkout",
"481c5170.adpll.clkoutldo";
};
adpll_video0_ck: adpll@1a0 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x1a0 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c51a0.adpll.dcoclkldo",
"481c51a0.adpll.clkout",
"481c51a0.adpll.clkoutldo";
};
adpll_video1_ck: adpll@1d0 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x1d0 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c51d0.adpll.dcoclkldo",
"481c51d0.adpll.clkout",
"481c51d0.adpll.clkoutldo";
};
adpll_hdmi_ck: adpll@200 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x200 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5200.adpll.dcoclkldo",
"481c5200.adpll.clkout",
"481c5200.adpll.clkoutldo";
};
adpll_audio_ck: adpll@230 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x230 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5230.adpll.dcoclkldo",
"481c5230.adpll.clkout",
"481c5230.adpll.clkoutldo";
};
adpll_usb_ck: adpll@260 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x260 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5260.adpll.dcoclkldo",
"481c5260.adpll.clkout",
"481c5260.adpll.clkoutldo";
};
adpll_ddr_ck: adpll@290 {
#clock-cells = <1>;
compatible = "ti,dm814-adpll-lj-clock";
reg = <0x290 0x30>;
clocks = <&devosc_ck &devosc_ck>;
clock-names = "clkinp", "clkinpulow";
clock-output-names = "481c5290.adpll.dcoclkldo",
"481c5290.adpll.clkout",
"481c5290.adpll.clkoutldo";
};
};
&pllss_clocks {
timer1_fck: timer1_fck {
#clock-cells = <0>;
@@ -23,6 +174,24 @@
reg = <0x2e0>;
};
/* CPTS_RFT_CLK in RMII_REFCLK_SRC, usually sourced from auiod */
cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&adpll_video0_ck 1
&adpll_video1_ck 1
&adpll_audio_ck 1>;
ti,bit-shift = <1>;
reg = <0x2e8>;
};
/* REVISIT: Set up with a proper mux using RMII_REFCLK_SRC */
cpsw_125mhz_gclk: cpsw_125mhz_gclk {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <125000000>;
};
sysclk18_ck: sysclk18_ck {
#clock-cells = <0>;
compatible = "ti,mux-clock";
@@ -79,37 +248,6 @@
compatible = "fixed-clock"; compatible = "fixed-clock";
clock-frequency = <1000000000>; clock-frequency = <1000000000>;
}; };
sysclk4_ck: sysclk4_ck {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <222000000>;
};
sysclk6_ck: sysclk6_ck {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <100000000>;
};
sysclk10_ck: sysclk10_ck {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <48000000>;
};
cpsw_125mhz_gclk: cpsw_125mhz_gclk {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <125000000>;
};
cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <250000000>;
};
};
&prcm_clocks {
@@ -138,6 +276,49 @@
clock-div = <78125>;
};
/* L4_HS 220 MHz*/
sysclk4_ck: sysclk4_ck {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&adpll_l3_ck 1>;
ti,clock-mult = <1>;
ti,clock-div = <1>;
};
/* L4_FWCFG */
sysclk5_ck: sysclk5_ck {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&adpll_l3_ck 1>;
ti,clock-mult = <1>;
ti,clock-div = <2>;
};
/* L4_LS 110 MHz */
sysclk6_ck: sysclk6_ck {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&adpll_l3_ck 1>;
ti,clock-mult = <1>;
ti,clock-div = <2>;
};
sysclk8_ck: sysclk8_ck {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&adpll_usb_ck 1>;
ti,clock-mult = <1>;
ti,clock-div = <1>;
};
sysclk10_ck: sysclk10_ck {
compatible = "ti,divider-clock";
reg = <0x324>;
ti,max-div = <7>;
#clock-cells = <0>;
clocks = <&adpll_usb_ck 1>;
};
aud_clkin0_ck: aud_clkin0_ck {
#clock-cells = <0>;
compatible = "fixed-clock";


@@ -6,6 +6,32 @@
#include "dm814x-clocks.dtsi" #include "dm814x-clocks.dtsi"
/* Compared to dm814x, dra62x does not have hdic, l3 or dss PLLs */
&adpll_hdvic_ck {
status = "disabled";
};
&adpll_l3_ck {
status = "disabled";
};
&adpll_dss_ck {
status = "disabled";
};
/* Compared to dm814x, dra62x has interconnect clocks on isp PLL */
&sysclk4_ck {
clocks = <&adpll_isp_ck 1>;
};
&sysclk5_ck {
clocks = <&adpll_isp_ck 1>;
};
&sysclk6_ck {
clocks = <&adpll_isp_ck 1>;
};
/*
 * Compared to dm814x, dra62x has different shifts and more mux options.
 * Please add the extra options for ysclk_14 and 16 if really needed.


@@ -98,12 +98,20 @@
clock-frequency = <32768>;
};
-sys_32k_ck: sys_32k_ck {
+sys_clk32_crystal_ck: sys_clk32_crystal_ck {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <32768>;
};
+sys_clk32_pseudo_ck: sys_clk32_pseudo_ck {
+#clock-cells = <0>;
+compatible = "fixed-factor-clock";
+clocks = <&sys_clkin1>;
+clock-mult = <1>;
+clock-div = <610>;
+};
virt_12000000_ck: virt_12000000_ck {
#clock-cells = <0>;
compatible = "fixed-clock";
@@ -2170,4 +2178,12 @@
ti,bit-shift = <22>;
reg = <0x0558>;
};
+sys_32k_ck: sys_32k_ck {
+#clock-cells = <0>;
+compatible = "ti,mux-clock";
+clocks = <&sys_clk32_crystal_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>;
+ti,bit-shift = <8>;
+reg = <0x6c4>;
+};
};


@@ -1,6 +1,6 @@
/dts-v1/;
-#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/clock/qcom,gcc-msm8974.h>
#include "skeleton.dtsi"
@@ -460,8 +460,6 @@
clock-names = "core", "iface";
#address-cells = <1>;
#size-cells = <0>;
-dmas = <&blsp2_dma 20>, <&blsp2_dma 21>;
-dma-names = "tx", "rx";
};
spmi_bus: spmi@fc4cf000 {
@@ -479,16 +477,6 @@
interrupt-controller;
#interrupt-cells = <4>;
};
-blsp2_dma: dma-controller@f9944000 {
-compatible = "qcom,bam-v1.4.0";
-reg = <0xf9944000 0x19000>;
-interrupts = <GIC_SPI 239 IRQ_TYPE_LEVEL_HIGH>;
-clocks = <&gcc GCC_BLSP2_AHB_CLK>;
-clock-names = "bam_clk";
-#dma-cells = <1>;
-qcom,ee = <0>;
-};
};
smd {


@@ -661,6 +661,7 @@
};
&pcie_bus_clk {
+clock-frequency = <100000000>;
status = "okay";
};


@@ -143,19 +143,11 @@
};
&pfc {
-pinctrl-0 = <&scif_clk_pins>;
-pinctrl-names = "default";
scif0_pins: serial0 {
renesas,groups = "scif0_data_d";
renesas,function = "scif0";
};
-scif_clk_pins: scif_clk {
-renesas,groups = "scif_clk";
-renesas,function = "scif_clk";
-};
ether_pins: ether {
renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
renesas,function = "eth";
@@ -229,11 +221,6 @@
status = "okay";
};
-&scif_clk {
-clock-frequency = <14745600>;
-status = "okay";
-};
&ether {
pinctrl-0 = <&ether_pins &phy1_pins>;
pinctrl-names = "default";
@@ -414,6 +401,7 @@
};
&pcie_bus_clk {
+clock-frequency = <100000000>;
status = "okay";
};


@@ -1083,9 +1083,8 @@
pcie_bus_clk: pcie_bus_clk {
compatible = "fixed-clock";
#clock-cells = <0>;
-clock-frequency = <100000000>;
+clock-frequency = <0>;
clock-output-names = "pcie_bus";
-status = "disabled";
};
/* External SCIF clock */
@@ -1094,7 +1093,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
-status = "disabled";
};
/* External USB clock - can be overridden by the board */
@@ -1112,7 +1110,6 @@
/* This value must be overridden by the board. */
clock-frequency = <0>;
clock-output-names = "can_clk";
-status = "disabled";
};
/* Special CPG clocks */


@@ -71,6 +71,7 @@ struct platform_device *__init imx_add_sdhci_esdhc_imx(
if (!pdata)
pdata = &default_esdhc_pdata;
-return imx_add_platform_device(data->devid, data->id, res,
-ARRAY_SIZE(res), pdata, sizeof(*pdata));
+return imx_add_platform_device_dmamask(data->devid, data->id, res,
+ARRAY_SIZE(res), pdata, sizeof(*pdata),
+DMA_BIT_MASK(32));
}


@@ -461,7 +461,7 @@ static struct clockdomain ipu_7xx_clkdm = {
.cm_inst = DRA7XX_CM_CORE_AON_IPU_INST,
.clkdm_offs = DRA7XX_CM_CORE_AON_IPU_IPU_CDOFFS,
.dep_bit = DRA7XX_IPU_STATDEP_SHIFT,
-.flags = CLKDM_CAN_HWSUP_SWSUP,
+.flags = CLKDM_CAN_SWSUP,
};
static struct clockdomain mpu1_7xx_clkdm = {


@@ -737,7 +737,8 @@ void __init omap5_init_late(void)
#ifdef CONFIG_SOC_DRA7XX
void __init dra7xx_init_early(void)
{
-omap2_set_globals_tap(-1, OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
+omap2_set_globals_tap(DRA7XX_CLASS,
+OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
omap2_set_globals_prcm_mpu(OMAP2_L4_IO_ADDRESS(OMAP54XX_PRCM_MPU_BASE));
omap2_control_base_init();
omap4_pm_init_early();


@@ -274,6 +274,10 @@ static inline void omap5_irq_save_context(void)
*/
static void irq_save_context(void)
{
+/* DRA7 has no SAR to save */
+if (soc_is_dra7xx())
+return;
if (!sar_base)
sar_base = omap4_get_sar_ram_base();
@@ -290,6 +294,9 @@ static void irq_sar_clear(void)
{
u32 val;
u32 offset = SAR_BACKUP_STATUS_OFFSET;
+/* DRA7 has no SAR to save */
+if (soc_is_dra7xx())
+return;
if (soc_is_omap54xx())
offset = OMAP5_SAR_BACKUP_STATUS_OFFSET;


@@ -198,7 +198,6 @@ void omap_sram_idle(void)
int per_next_state = PWRDM_POWER_ON;
int core_next_state = PWRDM_POWER_ON;
int per_going_off;
-int core_prev_state;
u32 sdrc_pwr = 0;
mpu_next_state = pwrdm_read_next_pwrst(mpu_pwrdm);
@@ -278,16 +277,20 @@ void omap_sram_idle(void)
sdrc_write_reg(sdrc_pwr, SDRC_POWER);
/* CORE */
-if (core_next_state < PWRDM_POWER_ON) {
-core_prev_state = pwrdm_read_prev_pwrst(core_pwrdm);
-if (core_prev_state == PWRDM_POWER_OFF) {
+if (core_next_state < PWRDM_POWER_ON &&
+    pwrdm_read_prev_pwrst(core_pwrdm) == PWRDM_POWER_OFF) {
omap3_core_restore_context();
omap3_cm_restore_context();
omap3_sram_restore_context();
omap2_sms_restore_context();
-}
-}
+} else {
+/*
+ * In off-mode resume path above, omap3_core_restore_context
+ * also handles the INTC autoidle restore done here so limit
+ * this to non-off mode resume paths so we don't do it twice.
+ */
omap3_intc_resume_idle();
+}
pwrdm_post_transition(NULL);


@@ -40,8 +40,7 @@ static void __init shmobile_setup_delay_hz(unsigned int max_cpu_core_hz,
void __init shmobile_init_delay(void)
{
struct device_node *np, *cpus;
-bool is_a7_a8_a9 = false;
-bool is_a15 = false;
+unsigned int div = 0;
bool has_arch_timer = false;
u32 max_freq = 0;
@@ -55,27 +54,22 @@ void __init shmobile_init_delay(void)
if (!of_property_read_u32(np, "clock-frequency", &freq))
max_freq = max(max_freq, freq);
-if (of_device_is_compatible(np, "arm,cortex-a8") ||
-    of_device_is_compatible(np, "arm,cortex-a9")) {
-is_a7_a8_a9 = true;
-} else if (of_device_is_compatible(np, "arm,cortex-a7")) {
-is_a7_a8_a9 = true;
-has_arch_timer = true;
-} else if (of_device_is_compatible(np, "arm,cortex-a15")) {
-is_a15 = true;
+if (of_device_is_compatible(np, "arm,cortex-a8")) {
+div = 2;
+} else if (of_device_is_compatible(np, "arm,cortex-a9")) {
+div = 1;
+} else if (of_device_is_compatible(np, "arm,cortex-a7") ||
+    of_device_is_compatible(np, "arm,cortex-a15")) {
+div = 1;
has_arch_timer = true;
}
}
of_node_put(cpus);
-if (!max_freq)
+if (!max_freq || !div)
return;
-if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) {
-if (is_a7_a8_a9)
-shmobile_setup_delay_hz(max_freq, 1, 3);
-else if (is_a15)
-shmobile_setup_delay_hz(max_freq, 2, 4);
-}
+if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
+shmobile_setup_delay_hz(max_freq, 1, div);
}


@@ -70,7 +70,6 @@
i2c3 = &i2c3;
i2c4 = &i2c4;
i2c5 = &i2c5;
-i2c6 = &i2c6;
};
};


@@ -201,15 +201,12 @@
i2c2: i2c@58782000 {
compatible = "socionext,uniphier-fi2c";
-status = "disabled";
reg = <0x58782000 0x80>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <0 43 4>;
-pinctrl-names = "default";
-pinctrl-0 = <&pinctrl_i2c2>;
clocks = <&i2c_clk>;
-clock-frequency = <100000>;
+clock-frequency = <400000>;
};
i2c3: i2c@58783000 {
@@ -227,12 +224,15 @@
i2c4: i2c@58784000 {
compatible = "socionext,uniphier-fi2c";
+status = "disabled";
reg = <0x58784000 0x80>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <0 45 4>;
+pinctrl-names = "default";
+pinctrl-0 = <&pinctrl_i2c4>;
clocks = <&i2c_clk>;
-clock-frequency = <400000>;
+clock-frequency = <100000>;
};
i2c5: i2c@58785000 {
@@ -245,16 +245,6 @@
clock-frequency = <400000>;
};
-i2c6: i2c@58786000 {
-compatible = "socionext,uniphier-fi2c";
-reg = <0x58786000 0x80>;
-#address-cells = <1>;
-#size-cells = <0>;
-interrupts = <0 26 4>;
-clocks = <&i2c_clk>;
-clock-frequency = <400000>;
-};
system_bus: system-bus@58c00000 {
compatible = "socionext,uniphier-system-bus";
status = "disabled";


@@ -68,7 +68,7 @@ void *memset(void *s, int c, size_t count)
"=r" (charcnt), /* %1 Output */ "=r" (charcnt), /* %1 Output */
"=r" (dwordcnt), /* %2 Output */ "=r" (dwordcnt), /* %2 Output */
"=r" (fill8reg), /* %3 Output */ "=r" (fill8reg), /* %3 Output */
"=r" (wrkrega) /* %4 Output */ "=&r" (wrkrega) /* %4 Output only */
: "r" (c), /* %5 Input */ : "r" (c), /* %5 Input */
"0" (s), /* %0 Input/Output */ "0" (s), /* %0 Input/Output */
"1" (count) /* %1 Input/Output */ "1" (count) /* %1 Input/Output */


@@ -384,3 +384,5 @@ SYSCALL(ni_syscall)
SYSCALL(ni_syscall)
SYSCALL(mlock2)
SYSCALL(copy_file_range)
+COMPAT_SYS_SPU(preadv2)
+COMPAT_SYS_SPU(pwritev2)


@@ -12,7 +12,7 @@
#include <uapi/asm/unistd.h>
-#define NR_syscalls 380
+#define NR_syscalls 382
#define __NR__exit __NR_exit


@@ -390,5 +390,7 @@
#define __NR_membarrier 365
#define __NR_mlock2 378
#define __NR_copy_file_range 379
+#define __NR_preadv2 380
+#define __NR_pwritev2 381
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */


@@ -11,7 +11,7 @@ typedef struct {
spinlock_t list_lock;
struct list_head pgtable_list;
struct list_head gmap_list;
-unsigned long asce_bits;
+unsigned long asce;
unsigned long asce_limit;
unsigned long vdso_base;
/* The mmu context allocates 4K page tables. */


@@ -26,12 +26,28 @@ static inline int init_new_context(struct task_struct *tsk,
mm->context.has_pgste = 0;
mm->context.use_skey = 0;
#endif
-if (mm->context.asce_limit == 0) {
+switch (mm->context.asce_limit) {
+case 1UL << 42:
+/*
+ * forked 3-level task, fall through to set new asce with new
+ * mm->pgd
+ */
+case 0:
/* context created by exec, set asce limit to 4TB */
-mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-_ASCE_USER_BITS | _ASCE_TYPE_REGION3;
mm->context.asce_limit = STACK_TOP_MAX;
-} else if (mm->context.asce_limit == (1UL << 31)) {
+mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+_ASCE_USER_BITS | _ASCE_TYPE_REGION3;
+break;
+case 1UL << 53:
+/* forked 4-level task, set new asce with new mm->pgd */
+mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+_ASCE_USER_BITS | _ASCE_TYPE_REGION2;
+break;
+case 1UL << 31:
+/* forked 2-level compat task, set new asce with new mm->pgd */
+mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+_ASCE_USER_BITS | _ASCE_TYPE_SEGMENT;
+/* pgd_alloc() did not increase mm->nr_pmds */
mm_inc_nr_pmds(mm);
}
crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm));
@@ -42,7 +58,7 @@ static inline int init_new_context(struct task_struct *tsk,
static inline void set_user_asce(struct mm_struct *mm)
{
-S390_lowcore.user_asce = mm->context.asce_bits | __pa(mm->pgd);
+S390_lowcore.user_asce = mm->context.asce;
if (current->thread.mm_segment.ar4)
__ctl_load(S390_lowcore.user_asce, 7, 7);
set_cpu_flag(CIF_ASCE);
@@ -71,7 +87,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
{
int cpu = smp_processor_id();
-S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
+S390_lowcore.user_asce = next->context.asce;
if (prev == next)
return;
if (MACHINE_HAS_TLB_LC)


@@ -52,8 +52,8 @@ static inline unsigned long pgd_entry_type(struct mm_struct *mm)
return _REGION2_ENTRY_EMPTY;
}
-int crst_table_upgrade(struct mm_struct *, unsigned long limit);
-void crst_table_downgrade(struct mm_struct *, unsigned long limit);
+int crst_table_upgrade(struct mm_struct *);
+void crst_table_downgrade(struct mm_struct *);
static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
{


@@ -175,7 +175,7 @@ extern __vector128 init_task_fpu_regs[__NUM_VXRS];
regs->psw.mask = PSW_USER_BITS | PSW_MASK_BA; \
regs->psw.addr = new_psw; \
regs->gprs[15] = new_stackp; \
-crst_table_downgrade(current->mm, 1UL << 31); \
+crst_table_downgrade(current->mm); \
execve_tail(); \
} while (0)


@@ -110,8 +110,7 @@ static inline void __tlb_flush_asce(struct mm_struct *mm, unsigned long asce)
static inline void __tlb_flush_kernel(void)
{
if (MACHINE_HAS_IDTE)
-__tlb_flush_idte((unsigned long) init_mm.pgd |
-init_mm.context.asce_bits);
+__tlb_flush_idte(init_mm.context.asce);
else
__tlb_flush_global();
}
@@ -133,8 +132,7 @@ static inline void __tlb_flush_asce(struct mm_struct *mm, unsigned long asce)
static inline void __tlb_flush_kernel(void)
{
if (MACHINE_HAS_TLB_LC)
-__tlb_flush_idte_local((unsigned long) init_mm.pgd |
-init_mm.context.asce_bits);
+__tlb_flush_idte_local(init_mm.context.asce);
else
__tlb_flush_local();
}
@@ -148,8 +146,7 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
 * only ran on the local cpu.
 */
if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list))
-__tlb_flush_asce(mm, (unsigned long) mm->pgd |
-mm->context.asce_bits);
+__tlb_flush_asce(mm, mm->context.asce);
else
__tlb_flush_full(mm);
}


@@ -89,7 +89,8 @@ void __init paging_init(void)
asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
pgd_type = _REGION3_ENTRY_EMPTY;
}
-S390_lowcore.kernel_asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
+init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
+S390_lowcore.kernel_asce = init_mm.context.asce;
clear_table((unsigned long *) init_mm.pgd, pgd_type,
sizeof(unsigned long)*2048);
vmem_map_init();


@@ -174,7 +174,7 @@ int s390_mmap_check(unsigned long addr, unsigned long len, unsigned long flags)
if (!(flags & MAP_FIXED))
addr = 0;
if ((addr + len) >= TASK_SIZE)
-return crst_table_upgrade(current->mm, TASK_MAX_SIZE);
+return crst_table_upgrade(current->mm);
return 0;
}
@@ -191,7 +191,7 @@ s390_get_unmapped_area(struct file *filp, unsigned long addr,
return area;
if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) {
/* Upgrade the page table to 4 levels and retry. */
-rc = crst_table_upgrade(mm, TASK_MAX_SIZE);
+rc = crst_table_upgrade(mm);
if (rc)
return (unsigned long) rc;
area = arch_get_unmapped_area(filp, addr, len, pgoff, flags);
@@ -213,7 +213,7 @@ s390_get_unmapped_area_topdown(struct file *filp, const unsigned long addr,
return area;
if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) {
/* Upgrade the page table to 4 levels and retry. */
-rc = crst_table_upgrade(mm, TASK_MAX_SIZE);
+rc = crst_table_upgrade(mm);
if (rc)
return (unsigned long) rc;
area = arch_get_unmapped_area_topdown(filp, addr, len,


@@ -76,81 +76,52 @@ static void __crst_table_upgrade(void *arg)
__tlb_flush_local();
}
-int crst_table_upgrade(struct mm_struct *mm, unsigned long limit)
+int crst_table_upgrade(struct mm_struct *mm)
{
unsigned long *table, *pgd;
-unsigned long entry;
-int flush;
-BUG_ON(limit > TASK_MAX_SIZE);
-flush = 0;
-repeat:
+/* upgrade should only happen from 3 to 4 levels */
+BUG_ON(mm->context.asce_limit != (1UL << 42));
table = crst_table_alloc(mm);
if (!table)
return -ENOMEM;
spin_lock_bh(&mm->page_table_lock);
-if (mm->context.asce_limit < limit) {
pgd = (unsigned long *) mm->pgd;
-if (mm->context.asce_limit <= (1UL << 31)) {
-entry = _REGION3_ENTRY_EMPTY;
-mm->context.asce_limit = 1UL << 42;
-mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-_ASCE_USER_BITS |
-_ASCE_TYPE_REGION3;
-} else {
-entry = _REGION2_ENTRY_EMPTY;
-mm->context.asce_limit = 1UL << 53;
-mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-_ASCE_USER_BITS |
-_ASCE_TYPE_REGION2;
-}
-crst_table_init(table, entry);
+crst_table_init(table, _REGION2_ENTRY_EMPTY);
pgd_populate(mm, (pgd_t *) table, (pud_t *) pgd);
mm->pgd = (pgd_t *) table;
+mm->context.asce_limit = 1UL << 53;
+mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+_ASCE_USER_BITS | _ASCE_TYPE_REGION2;
mm->task_size = mm->context.asce_limit;
-table = NULL;
-flush = 1;
-}
spin_unlock_bh(&mm->page_table_lock);
-if (table)
-crst_table_free(mm, table);
-if (mm->context.asce_limit < limit)
-goto repeat;
-if (flush)
on_each_cpu(__crst_table_upgrade, mm, 0);
return 0;
}
-void crst_table_downgrade(struct mm_struct *mm, unsigned long limit)
+void crst_table_downgrade(struct mm_struct *mm)
{
pgd_t *pgd;
+/* downgrade should only happen from 3 to 2 levels (compat only) */
+BUG_ON(mm->context.asce_limit != (1UL << 42));
if (current->active_mm == mm) {
clear_user_asce();
__tlb_flush_mm(mm);
}
-while (mm->context.asce_limit > limit) {
pgd = mm->pgd;
-switch (pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) {
-case _REGION_ENTRY_TYPE_R2:
-mm->context.asce_limit = 1UL << 42;
-mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-_ASCE_USER_BITS |
-_ASCE_TYPE_REGION3;
-break;
-case _REGION_ENTRY_TYPE_R3:
-mm->context.asce_limit = 1UL << 31;
-mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-_ASCE_USER_BITS |
-_ASCE_TYPE_SEGMENT;
-break;
-default:
-BUG();
-}
mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN);
+mm->context.asce_limit = 1UL << 31;
+mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+_ASCE_USER_BITS | _ASCE_TYPE_SEGMENT;
mm->task_size = mm->context.asce_limit;
crst_table_free(mm, (unsigned long *) pgd);
-}
if (current->active_mm == mm)
set_user_asce(mm);
}


@@ -457,7 +457,7 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
zdev->dma_table = dma_alloc_cpu_table();
if (!zdev->dma_table) {
rc = -ENOMEM;
-goto out_clean;
+goto out;
}
/*
@@ -477,18 +477,22 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8);
if (!zdev->iommu_bitmap) {
rc = -ENOMEM;
-goto out_reg;
+goto free_dma_table;
}
rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
(u64) zdev->dma_table);
if (rc)
-goto out_reg;
-return 0;
+goto free_bitmap;
-out_reg:
+return 0;
+free_bitmap:
+vfree(zdev->iommu_bitmap);
+zdev->iommu_bitmap = NULL;
+free_dma_table:
dma_free_cpu_table(zdev->dma_table);
-out_clean:
+zdev->dma_table = NULL;
+out:
return rc;
}


@@ -115,7 +115,7 @@ static __initconst const u64 amd_hw_cache_event_ids
/*
 * AMD Performance Monitor K7 and later.
 */
-static const u64 amd_perfmon_event_map[] =
+static const u64 amd_perfmon_event_map[PERF_COUNT_HW_MAX] =
{
[PERF_COUNT_HW_CPU_CYCLES] = 0x0076,
[PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,


@@ -3639,6 +3639,7 @@ __init int intel_pmu_init(void)
case 78: /* 14nm Skylake Mobile */
case 94: /* 14nm Skylake Desktop */
+case 85: /* 14nm Skylake Server */
x86_pmu.late_ack = true;
memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));


@@ -63,7 +63,7 @@ static enum {
#define LBR_PLM (LBR_KERNEL | LBR_USER)
-#define LBR_SEL_MASK 0x1ff /* valid bits in LBR_SELECT */
+#define LBR_SEL_MASK 0x3ff /* valid bits in LBR_SELECT */
#define LBR_NOT_SUPP -1 /* LBR filter not supported */
#define LBR_IGN 0 /* ignored */
@@ -610,8 +610,10 @@ static int intel_pmu_setup_hw_lbr_filter(struct perf_event *event)
 * The first 9 bits (LBR_SEL_MASK) in LBR_SELECT operate
 * in suppress mode. So LBR_SELECT should be set to
 * (~mask & LBR_SEL_MASK) | (mask & ~LBR_SEL_MASK)
+ * But the 10th bit LBR_CALL_STACK does not operate
+ * in suppress mode.
 */
-reg->config = mask ^ x86_pmu.lbr_sel_mask;
+reg->config = mask ^ (x86_pmu.lbr_sel_mask & ~LBR_CALL_STACK);
if ((br_type & PERF_SAMPLE_BRANCH_NO_CYCLES) &&
(br_type & PERF_SAMPLE_BRANCH_NO_FLAGS) &&


@@ -136,9 +136,21 @@ static int __init pt_pmu_hw_init(void)
struct dev_ext_attribute *de_attrs;
struct attribute **attrs;
size_t size;
+u64 reg;
int ret;
long i;
+if (boot_cpu_has(X86_FEATURE_VMX)) {
+/*
+ * Intel SDM, 36.5 "Tracing post-VMXON" says that
+ * "IA32_VMX_MISC[bit 14]" being 1 means PT can trace
+ * post-VMXON.
+ */
+rdmsrl(MSR_IA32_VMX_MISC, reg);
+if (reg & BIT(14))
+pt_pmu.vmx = true;
+}
attrs = NULL;
for (i = 0; i < PT_CPUID_LEAVES; i++) {
@@ -269,20 +281,23 @@ static void pt_config(struct perf_event *event)
reg |= (event->attr.config & PT_CONFIG_MASK);
+event->hw.config = reg;
wrmsrl(MSR_IA32_RTIT_CTL, reg);
}
-static void pt_config_start(bool start)
+static void pt_config_stop(struct perf_event *event)
{
-u64 ctl;
+u64 ctl = READ_ONCE(event->hw.config);
+/* may be already stopped by a PMI */
+if (!(ctl & RTIT_CTL_TRACEEN))
+return;
-rdmsrl(MSR_IA32_RTIT_CTL, ctl);
-if (start)
-ctl |= RTIT_CTL_TRACEEN;
-else
ctl &= ~RTIT_CTL_TRACEEN;
wrmsrl(MSR_IA32_RTIT_CTL, ctl);
+WRITE_ONCE(event->hw.config, ctl);
/*
 * A wrmsr that disables trace generation serializes other PT
 * registers and causes all data packets to be written to memory,
@@ -291,7 +306,6 @@ static void pt_config_start(bool start)
 * The below WMB, separating data store and aux_head store matches
 * the consumer's RMB that separates aux_head load and data load.
 */
-if (!start)
wmb();
}
@@ -942,11 +956,17 @@ void intel_pt_interrupt(void)
if (!ACCESS_ONCE(pt->handle_nmi))
return;
-pt_config_start(false);
+/*
+ * If VMX is on and PT does not support it, don't touch anything.
+ */
+if (READ_ONCE(pt->vmx_on))
+return;
if (!event)
return;
+pt_config_stop(event);
buf = perf_get_aux(&pt->handle);
if (!buf)
return;
void intel_pt_handle_vmx(int on)
{
struct pt *pt = this_cpu_ptr(&pt_ctx);
struct perf_event *event;
unsigned long flags;
/* PT plays nice with VMX, do nothing */
if (pt_pmu.vmx)
return;
/*
* VMXON will clear RTIT_CTL.TraceEn; we need to make
* sure to not try to set it while VMX is on. Disable
* interrupts to avoid racing with pmu callbacks;
* concurrent PMI should be handled fine.
*/
local_irq_save(flags);
WRITE_ONCE(pt->vmx_on, on);
if (on) {
/* prevent pt_config_stop() from writing RTIT_CTL */
event = pt->handle.event;
if (event)
event->hw.config = 0;
}
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(intel_pt_handle_vmx);
/*
 * PMU callbacks
 */
@@ -992,6 +1041,9 @@ static void pt_event_start(struct perf_event *event, int mode)
struct pt *pt = this_cpu_ptr(&pt_ctx);
struct pt_buffer *buf = perf_get_aux(&pt->handle);
+if (READ_ONCE(pt->vmx_on))
+return;
if (!buf || pt_buffer_is_full(buf, pt)) {
event->hw.state = PERF_HES_STOPPED;
return;
@@ -1014,7 +1066,8 @@ static void pt_event_stop(struct perf_event *event, int mode)
 * see comment in intel_pt_interrupt().
 */
ACCESS_ONCE(pt->handle_nmi) = 0;
-pt_config_start(false);
+pt_config_stop(event);
if (event->hw.state == PERF_HES_STOPPED)
return;


@@ -65,6 +65,7 @@ enum pt_capabilities {
struct pt_pmu {
struct pmu pmu;
u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
+bool vmx;
};
/**
@@ -107,10 +108,12 @@ struct pt_buffer {
 * struct pt - per-cpu pt context
 * @handle: perf output handle
 * @handle_nmi: do handle PT PMI on this cpu, there's an active event
+ * @vmx_on: 1 if VMX is ON on this cpu
 */
struct pt {
struct perf_output_handle handle;
int handle_nmi;
+int vmx_on;
};
#endif /* __INTEL_PT_H__ */


@@ -718,6 +718,7 @@ static int __init rapl_pmu_init(void)
break;
case 60: /* Haswell */
case 69: /* Haswell-Celeron */
+case 70: /* Haswell GT3e */
case 61: /* Broadwell */
case 71: /* Broadwell-H */
rapl_cntr_mask = RAPL_IDX_HSW;


@@ -285,6 +285,10 @@ static inline void perf_events_lapic_init(void) { }
static inline void perf_check_microcode(void) { } static inline void perf_check_microcode(void) { }
#endif #endif
#ifdef CONFIG_CPU_SUP_INTEL
extern void intel_pt_handle_vmx(int on);
#endif
#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
extern void amd_pmu_enable_virt(void); extern void amd_pmu_enable_virt(void);
extern void amd_pmu_disable_virt(void); extern void amd_pmu_disable_virt(void);


@@ -256,7 +256,8 @@ static void clear_irq_vector(int irq, struct apic_chip_data *data)
struct irq_desc *desc;
int cpu, vector;
-BUG_ON(!data->cfg.vector);
+if (!data->cfg.vector)
+return;
vector = data->cfg.vector;
for_each_cpu_and(cpu, data->domain, cpu_online_mask)


@@ -389,12 +389,6 @@ default_entry:
/* Make changes effective */
wrmsr
-/*
- * And make sure that all the mappings we set up have NX set from
- * the beginning.
- */
-orl $(1 << (_PAGE_BIT_NX - 32)), pa(__supported_pte_mask + 4)
enable_paging:
/*


@@ -3103,6 +3103,8 @@ static __init int vmx_disabled_by_bios(void)
static void kvm_cpu_vmxon(u64 addr)
{
+intel_pt_handle_vmx(1);
asm volatile (ASM_VMX_VMXON_RAX
: : "a"(&addr), "m"(addr)
: "memory", "cc");
@@ -3172,6 +3174,8 @@ static void vmclear_local_loaded_vmcss(void)
static void kvm_cpu_vmxoff(void)
{
asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc");
+intel_pt_handle_vmx(0);
}
static void hardware_disable(void)


@@ -32,8 +32,9 @@ early_param("noexec", noexec_setup);
void x86_configure_nx(void)
{
-/* If disable_nx is set, clear NX on all new mappings going forward. */
-if (disable_nx)
+if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
+__supported_pte_mask |= _PAGE_NX;
+else
__supported_pte_mask &= ~_PAGE_NX;
}


@@ -27,6 +27,12 @@ static bool xen_pvspin = true;
static void xen_qlock_kick(int cpu)
{
+int irq = per_cpu(lock_kicker_irq, cpu);
+/* Don't kick if the target's kicker interrupt is not initialized. */
+if (irq == -1)
+return;
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
}


@@ -538,7 +538,6 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
u8 *order, u64 *snap_size); u8 *order, u64 *snap_size);
static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id, static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
u64 *snap_features); u64 *snap_features);
static u64 rbd_snap_id_by_name(struct rbd_device *rbd_dev, const char *name);
static int rbd_open(struct block_device *bdev, fmode_t mode) static int rbd_open(struct block_device *bdev, fmode_t mode)
{ {
@@ -3127,9 +3126,6 @@ static void rbd_watch_cb(u64 ver, u64 notify_id, u8 opcode, void *data)
struct rbd_device *rbd_dev = (struct rbd_device *)data; struct rbd_device *rbd_dev = (struct rbd_device *)data;
int ret; int ret;
if (!rbd_dev)
return;
dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__, dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,
rbd_dev->header_name, (unsigned long long)notify_id, rbd_dev->header_name, (unsigned long long)notify_id,
(unsigned int)opcode); (unsigned int)opcode);
@@ -3263,6 +3259,9 @@ static void rbd_dev_header_unwatch_sync(struct rbd_device *rbd_dev)
ceph_osdc_cancel_event(rbd_dev->watch_event); ceph_osdc_cancel_event(rbd_dev->watch_event);
rbd_dev->watch_event = NULL; rbd_dev->watch_event = NULL;
dout("%s flushing notifies\n", __func__);
ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
} }
/* /*
@@ -3642,21 +3641,14 @@ static void rbd_exists_validate(struct rbd_device *rbd_dev)
static void rbd_dev_update_size(struct rbd_device *rbd_dev) static void rbd_dev_update_size(struct rbd_device *rbd_dev)
{ {
sector_t size; sector_t size;
bool removing;
/* /*
* Don't hold the lock while doing disk operations, * If EXISTS is not set, rbd_dev->disk may be NULL, so don't
* or lock ordering will conflict with the bdev mutex via: * try to update its size. If REMOVING is set, updating size
* rbd_add() -> blkdev_get() -> rbd_open() * is just useless work since the device can't be opened.
*/ */
spin_lock_irq(&rbd_dev->lock); if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) &&
removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags); !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
spin_unlock_irq(&rbd_dev->lock);
/*
* If the device is being removed, rbd_dev->disk has
* been destroyed, so don't try to update its size
*/
if (!removing) {
size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE; size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
dout("setting size to %llu sectors", (unsigned long long)size); dout("setting size to %llu sectors", (unsigned long long)size);
set_capacity(rbd_dev->disk, size); set_capacity(rbd_dev->disk, size);
@@ -4191,7 +4183,7 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
__le64 features; __le64 features;
__le64 incompat; __le64 incompat;
} __attribute__ ((packed)) features_buf = { 0 }; } __attribute__ ((packed)) features_buf = { 0 };
u64 incompat; u64 unsup;
int ret; int ret;
ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name, ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
@@ -4204,9 +4196,12 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
if (ret < sizeof (features_buf)) if (ret < sizeof (features_buf))
return -ERANGE; return -ERANGE;
incompat = le64_to_cpu(features_buf.incompat); unsup = le64_to_cpu(features_buf.incompat) & ~RBD_FEATURES_SUPPORTED;
if (incompat & ~RBD_FEATURES_SUPPORTED) if (unsup) {
rbd_warn(rbd_dev, "image uses unsupported features: 0x%llx",
unsup);
return -ENXIO; return -ENXIO;
}
*snap_features = le64_to_cpu(features_buf.features); *snap_features = le64_to_cpu(features_buf.features);
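As an aside, the check added to _rbd_dev_v2_snap_features() boils down to "fail if any incompatible feature bit falls outside the supported set". A minimal user-space sketch with made-up feature bits (RBD_FEATURES_SUPPORTED_EXAMPLE is a stand-in for the driver's constant, not its real value):

#include <stdint.h>
#include <stdio.h>

#define RBD_FEATURES_SUPPORTED_EXAMPLE 0x1ULL	/* hypothetical: only bit 0 supported */

int main(void)
{
	uint64_t incompat = 0x5;	/* hypothetical: image reports bits 0 and 2 */
	uint64_t unsup = incompat & ~RBD_FEATURES_SUPPORTED_EXAMPLE;

	if (unsup)	/* 0x4 here: the image requires a feature this code lacks */
		printf("image uses unsupported features: 0x%llx\n",
		       (unsigned long long)unsup);
	return 0;
}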
@@ -5187,6 +5182,10 @@ out_err:
return ret; return ret;
} }
/*
* rbd_dev->header_rwsem must be locked for write and will be unlocked
* upon return.
*/
static int rbd_dev_device_setup(struct rbd_device *rbd_dev) static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
{ {
int ret; int ret;
@@ -5195,7 +5194,7 @@ static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
ret = rbd_dev_id_get(rbd_dev); ret = rbd_dev_id_get(rbd_dev);
if (ret) if (ret)
return ret; goto err_out_unlock;
BUILD_BUG_ON(DEV_NAME_LEN BUILD_BUG_ON(DEV_NAME_LEN
< sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH); < sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH);
@@ -5236,8 +5235,9 @@ static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
/* Everything's ready. Announce the disk to the world. */ /* Everything's ready. Announce the disk to the world. */
set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags); set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
add_disk(rbd_dev->disk); up_write(&rbd_dev->header_rwsem);
add_disk(rbd_dev->disk);
pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name, pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name,
(unsigned long long) rbd_dev->mapping.size); (unsigned long long) rbd_dev->mapping.size);
@@ -5252,6 +5252,8 @@ err_out_blkdev:
unregister_blkdev(rbd_dev->major, rbd_dev->name); unregister_blkdev(rbd_dev->major, rbd_dev->name);
err_out_id: err_out_id:
rbd_dev_id_put(rbd_dev); rbd_dev_id_put(rbd_dev);
err_out_unlock:
up_write(&rbd_dev->header_rwsem);
return ret; return ret;
} }
@@ -5442,6 +5444,7 @@ static ssize_t do_rbd_add(struct bus_type *bus,
spec = NULL; /* rbd_dev now owns this */ spec = NULL; /* rbd_dev now owns this */
rbd_opts = NULL; /* rbd_dev now owns this */ rbd_opts = NULL; /* rbd_dev now owns this */
down_write(&rbd_dev->header_rwsem);
rc = rbd_dev_image_probe(rbd_dev, 0); rc = rbd_dev_image_probe(rbd_dev, 0);
if (rc < 0) if (rc < 0)
goto err_out_rbd_dev; goto err_out_rbd_dev;
@@ -5471,6 +5474,7 @@ out:
return rc; return rc;
err_out_rbd_dev: err_out_rbd_dev:
up_write(&rbd_dev->header_rwsem);
rbd_dev_destroy(rbd_dev); rbd_dev_destroy(rbd_dev);
err_out_client: err_out_client:
rbd_put_client(rbdc); rbd_put_client(rbdc);
@@ -5577,12 +5581,6 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
return ret; return ret;
rbd_dev_header_unwatch_sync(rbd_dev); rbd_dev_header_unwatch_sync(rbd_dev);
/*
* flush remaining watch callbacks - these must be complete
* before the osd_client is shutdown
*/
dout("%s: flushing notifies", __func__);
ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
/* /*
* Don't free anything from rbd_dev->disk until after all * Don't free anything from rbd_dev->disk until after all

View File

@@ -193,12 +193,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
wall_time = cur_wall_time - j_cdbs->prev_cpu_wall; wall_time = cur_wall_time - j_cdbs->prev_cpu_wall;
j_cdbs->prev_cpu_wall = cur_wall_time; j_cdbs->prev_cpu_wall = cur_wall_time;
if (cur_idle_time <= j_cdbs->prev_cpu_idle) {
idle_time = 0;
} else {
idle_time = cur_idle_time - j_cdbs->prev_cpu_idle; idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
j_cdbs->prev_cpu_idle = cur_idle_time; j_cdbs->prev_cpu_idle = cur_idle_time;
}
if (ignore_nice) { if (ignore_nice) {
u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE]; u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];

View File

@@ -813,6 +813,11 @@ static int core_get_max_pstate(void)
if (err) if (err)
goto skip_tar; goto skip_tar;
/* For level 1 and 2, bits[23:16] contain the ratio */
if (tdp_ctrl)
tdp_ratio >>= 16;
tdp_ratio &= 0xff; /* ratios are only 8 bits long */
if (tdp_ratio - 1 == tar) { if (tdp_ratio - 1 == tar) {
max_pstate = tar; max_pstate = tar;
pr_debug("max_pstate=TAC %x\n", max_pstate); pr_debug("max_pstate=TAC %x\n", max_pstate);
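For illustration, here is the bit handling from the hunk above in isolation, using a made-up MSR value: per the comment, a non-zero TDP control selects level 1 or 2, whose ratio sits in bits [23:16], and the ratio field is only 8 bits wide either way.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t tdp_ratio = 0x00170000;	/* hypothetical MSR contents: ratio 0x17 in bits [23:16] */
	int tdp_ctrl = 2;			/* hypothetical: TDP level 2 selected */

	if (tdp_ctrl)
		tdp_ratio >>= 16;		/* move bits [23:16] down to [7:0] */
	tdp_ratio &= 0xff;			/* ratios are only 8 bits long */

	printf("ratio = 0x%llx\n", (unsigned long long)tdp_ratio);	/* prints 0x17 */
	return 0;
}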

View File

@@ -63,6 +63,14 @@ static void to_talitos_ptr(struct talitos_ptr *ptr, dma_addr_t dma_addr,
ptr->eptr = upper_32_bits(dma_addr); ptr->eptr = upper_32_bits(dma_addr);
} }
static void copy_talitos_ptr(struct talitos_ptr *dst_ptr,
struct talitos_ptr *src_ptr, bool is_sec1)
{
dst_ptr->ptr = src_ptr->ptr;
if (!is_sec1)
dst_ptr->eptr = src_ptr->eptr;
}
static void to_talitos_ptr_len(struct talitos_ptr *ptr, unsigned int len, static void to_talitos_ptr_len(struct talitos_ptr *ptr, unsigned int len,
bool is_sec1) bool is_sec1)
{ {
@@ -1083,21 +1091,20 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
sg_count = dma_map_sg(dev, areq->src, edesc->src_nents ?: 1, sg_count = dma_map_sg(dev, areq->src, edesc->src_nents ?: 1,
(areq->src == areq->dst) ? DMA_BIDIRECTIONAL (areq->src == areq->dst) ? DMA_BIDIRECTIONAL
: DMA_TO_DEVICE); : DMA_TO_DEVICE);
/* hmac data */ /* hmac data */
desc->ptr[1].len = cpu_to_be16(areq->assoclen); desc->ptr[1].len = cpu_to_be16(areq->assoclen);
if (sg_count > 1 && if (sg_count > 1 &&
(ret = sg_to_link_tbl_offset(areq->src, sg_count, 0, (ret = sg_to_link_tbl_offset(areq->src, sg_count, 0,
areq->assoclen, areq->assoclen,
&edesc->link_tbl[tbl_off])) > 1) { &edesc->link_tbl[tbl_off])) > 1) {
tbl_off += ret;
to_talitos_ptr(&desc->ptr[1], edesc->dma_link_tbl + tbl_off * to_talitos_ptr(&desc->ptr[1], edesc->dma_link_tbl + tbl_off *
sizeof(struct talitos_ptr), 0); sizeof(struct talitos_ptr), 0);
desc->ptr[1].j_extent = DESC_PTR_LNKTBL_JUMP; desc->ptr[1].j_extent = DESC_PTR_LNKTBL_JUMP;
dma_sync_single_for_device(dev, edesc->dma_link_tbl, dma_sync_single_for_device(dev, edesc->dma_link_tbl,
edesc->dma_len, DMA_BIDIRECTIONAL); edesc->dma_len, DMA_BIDIRECTIONAL);
tbl_off += ret;
} else { } else {
to_talitos_ptr(&desc->ptr[1], sg_dma_address(areq->src), 0); to_talitos_ptr(&desc->ptr[1], sg_dma_address(areq->src), 0);
desc->ptr[1].j_extent = 0; desc->ptr[1].j_extent = 0;
@@ -1126,11 +1133,13 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
if (edesc->desc.hdr & DESC_HDR_MODE1_MDEU_CICV) if (edesc->desc.hdr & DESC_HDR_MODE1_MDEU_CICV)
sg_link_tbl_len += authsize; sg_link_tbl_len += authsize;
if (sg_count > 1 && if (sg_count == 1) {
(ret = sg_to_link_tbl_offset(areq->src, sg_count, areq->assoclen, to_talitos_ptr(&desc->ptr[4], sg_dma_address(areq->src) +
sg_link_tbl_len, areq->assoclen, 0);
&edesc->link_tbl[tbl_off])) > 1) { } else if ((ret = sg_to_link_tbl_offset(areq->src, sg_count,
tbl_off += ret; areq->assoclen, sg_link_tbl_len,
&edesc->link_tbl[tbl_off])) >
1) {
desc->ptr[4].j_extent |= DESC_PTR_LNKTBL_JUMP; desc->ptr[4].j_extent |= DESC_PTR_LNKTBL_JUMP;
to_talitos_ptr(&desc->ptr[4], edesc->dma_link_tbl + to_talitos_ptr(&desc->ptr[4], edesc->dma_link_tbl +
tbl_off * tbl_off *
@@ -1138,8 +1147,10 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
dma_sync_single_for_device(dev, edesc->dma_link_tbl, dma_sync_single_for_device(dev, edesc->dma_link_tbl,
edesc->dma_len, edesc->dma_len,
DMA_BIDIRECTIONAL); DMA_BIDIRECTIONAL);
} else tbl_off += ret;
to_talitos_ptr(&desc->ptr[4], sg_dma_address(areq->src), 0); } else {
copy_talitos_ptr(&desc->ptr[4], &edesc->link_tbl[tbl_off], 0);
}
/* cipher out */ /* cipher out */
desc->ptr[5].len = cpu_to_be16(cryptlen); desc->ptr[5].len = cpu_to_be16(cryptlen);
@@ -1151,11 +1162,13 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
edesc->icv_ool = false; edesc->icv_ool = false;
if (sg_count > 1 && if (sg_count == 1) {
(sg_count = sg_to_link_tbl_offset(areq->dst, sg_count, to_talitos_ptr(&desc->ptr[5], sg_dma_address(areq->dst) +
areq->assoclen, 0);
} else if ((sg_count =
sg_to_link_tbl_offset(areq->dst, sg_count,
areq->assoclen, cryptlen, areq->assoclen, cryptlen,
&edesc->link_tbl[tbl_off])) > &edesc->link_tbl[tbl_off])) > 1) {
1) {
struct talitos_ptr *tbl_ptr = &edesc->link_tbl[tbl_off]; struct talitos_ptr *tbl_ptr = &edesc->link_tbl[tbl_off];
to_talitos_ptr(&desc->ptr[5], edesc->dma_link_tbl + to_talitos_ptr(&desc->ptr[5], edesc->dma_link_tbl +
@@ -1178,8 +1191,9 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
edesc->dma_len, DMA_BIDIRECTIONAL); edesc->dma_len, DMA_BIDIRECTIONAL);
edesc->icv_ool = true; edesc->icv_ool = true;
} else } else {
to_talitos_ptr(&desc->ptr[5], sg_dma_address(areq->dst), 0); copy_talitos_ptr(&desc->ptr[5], &edesc->link_tbl[tbl_off], 0);
}
/* iv out */ /* iv out */
map_single_talitos_ptr(dev, &desc->ptr[6], ivsize, ctx->iv, map_single_talitos_ptr(dev, &desc->ptr[6], ivsize, ctx->iv,
@@ -2629,21 +2643,11 @@ struct talitos_crypto_alg {
struct talitos_alg_template algt; struct talitos_alg_template algt;
}; };
static int talitos_cra_init(struct crypto_tfm *tfm) static int talitos_init_common(struct talitos_ctx *ctx,
struct talitos_crypto_alg *talitos_alg)
{ {
struct crypto_alg *alg = tfm->__crt_alg;
struct talitos_crypto_alg *talitos_alg;
struct talitos_ctx *ctx = crypto_tfm_ctx(tfm);
struct talitos_private *priv; struct talitos_private *priv;
if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
talitos_alg = container_of(__crypto_ahash_alg(alg),
struct talitos_crypto_alg,
algt.alg.hash);
else
talitos_alg = container_of(alg, struct talitos_crypto_alg,
algt.alg.crypto);
/* update context with ptr to dev */ /* update context with ptr to dev */
ctx->dev = talitos_alg->dev; ctx->dev = talitos_alg->dev;
@@ -2661,10 +2665,33 @@ static int talitos_cra_init(struct crypto_tfm *tfm)
return 0; return 0;
} }
static int talitos_cra_init(struct crypto_tfm *tfm)
{
struct crypto_alg *alg = tfm->__crt_alg;
struct talitos_crypto_alg *talitos_alg;
struct talitos_ctx *ctx = crypto_tfm_ctx(tfm);
if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
talitos_alg = container_of(__crypto_ahash_alg(alg),
struct talitos_crypto_alg,
algt.alg.hash);
else
talitos_alg = container_of(alg, struct talitos_crypto_alg,
algt.alg.crypto);
return talitos_init_common(ctx, talitos_alg);
}
static int talitos_cra_init_aead(struct crypto_aead *tfm) static int talitos_cra_init_aead(struct crypto_aead *tfm)
{ {
talitos_cra_init(crypto_aead_tfm(tfm)); struct aead_alg *alg = crypto_aead_alg(tfm);
return 0; struct talitos_crypto_alg *talitos_alg;
struct talitos_ctx *ctx = crypto_aead_ctx(tfm);
talitos_alg = container_of(alg, struct talitos_crypto_alg,
algt.alg.aead);
return talitos_init_common(ctx, talitos_alg);
} }
static int talitos_cra_init_ahash(struct crypto_tfm *tfm) static int talitos_cra_init_ahash(struct crypto_tfm *tfm)

View File

@@ -1866,7 +1866,7 @@ static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
i7_dev = get_i7core_dev(mce->socketid); i7_dev = get_i7core_dev(mce->socketid);
if (!i7_dev) if (!i7_dev)
return NOTIFY_BAD; return NOTIFY_DONE;
mci = i7_dev->mci; mci = i7_dev->mci;
pvt = mci->pvt_info; pvt = mci->pvt_info;

View File

@@ -3168,7 +3168,7 @@ static int sbridge_mce_check_error(struct notifier_block *nb, unsigned long val,
mci = get_mci_for_node_id(mce->socketid); mci = get_mci_for_node_id(mce->socketid);
if (!mci) if (!mci)
return NOTIFY_BAD; return NOTIFY_DONE;
pvt = mci->pvt_info; pvt = mci->pvt_info;
/* /*

View File

@@ -202,29 +202,44 @@ static const struct variable_validate variable_validate[] = {
{ NULL_GUID, "", NULL }, { NULL_GUID, "", NULL },
}; };
/*
* Check if @var_name matches the pattern given in @match_name.
*
* @var_name: an array of @len non-NUL characters.
* @match_name: a NUL-terminated pattern string, optionally ending in "*". A
* final "*" character matches any trailing characters in @var_name,
* including the case when there are none left in @var_name.
* @match: on output, the number of non-wildcard characters in @match_name
* that @var_name matches, regardless of the return value.
* @return: whether @var_name fully matches @match_name.
*/
static bool static bool
variable_matches(const char *var_name, size_t len, const char *match_name, variable_matches(const char *var_name, size_t len, const char *match_name,
int *match) int *match)
{ {
for (*match = 0; ; (*match)++) { for (*match = 0; ; (*match)++) {
char c = match_name[*match]; char c = match_name[*match];
char u = var_name[*match];
/* Wildcard in the matching name means we've matched */ switch (c) {
if (c == '*') case '*':
/* Wildcard in @match_name means we've matched. */
return true; return true;
/* Case sensitive match */ case '\0':
if (!c && *match == len) /* @match_name has ended. Has @var_name too? */
return true; return (*match == len);
if (c != u) default:
/*
* We've reached a non-wildcard char in @match_name.
* Continue only if there's an identical character in
* @var_name.
*/
if (*match < len && c == var_name[*match])
continue;
return false; return false;
if (!c)
return true;
} }
return true; }
} }
bool bool
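To make the documented semantics concrete, here is a standalone user-space copy of the rewritten matcher together with a few sample calls. The variable names are EFI-style but chosen purely as examples, and the helper is a local re-implementation for experimentation, not part of the patch.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Local copy of the matching logic shown above, for experimentation only. */
static bool matches(const char *var_name, size_t len, const char *match_name,
		    int *match)
{
	for (*match = 0; ; (*match)++) {
		char c = match_name[*match];

		switch (c) {
		case '*':
			/* Wildcard: any remaining characters of var_name match. */
			return true;
		case '\0':
			/* Pattern ended; it matches only if var_name ended too. */
			return *match == len;
		default:
			if ((size_t)*match < len && c == var_name[*match])
				continue;
			return false;
		}
	}
}

int main(void)
{
	int m;

	printf("%d\n", matches("Boot0001", 8, "Boot*", &m));	/* 1: wildcard match, m == 4 */
	printf("%d\n", matches("Boot", 4, "Boot*", &m));	/* 1: '*' may match zero characters */
	printf("%d\n", matches("BootOrder", 9, "Boot", &m));	/* 0: var_name is longer than the pattern */
	return 0;
}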

View File

@@ -360,7 +360,7 @@ static struct cpuidle_ops psci_cpuidle_ops __initdata = {
.init = psci_dt_cpu_init_idle, .init = psci_dt_cpu_init_idle,
}; };
CPUIDLE_METHOD_OF_DECLARE(psci, "arm,psci", &psci_cpuidle_ops); CPUIDLE_METHOD_OF_DECLARE(psci, "psci", &psci_cpuidle_ops);
#endif #endif
#endif #endif

View File

@@ -63,10 +63,6 @@ bool amdgpu_has_atpx(void) {
return amdgpu_atpx_priv.atpx_detected; return amdgpu_atpx_priv.atpx_detected;
} }
bool amdgpu_has_atpx_dgpu_power_cntl(void) {
return amdgpu_atpx_priv.atpx.functions.power_cntl;
}
/** /**
* amdgpu_atpx_call - call an ATPX method * amdgpu_atpx_call - call an ATPX method
* *
@@ -146,6 +142,13 @@ static void amdgpu_atpx_parse_functions(struct amdgpu_atpx_functions *f, u32 mas
*/ */
static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx) static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
{ {
/* make sure required functions are enabled */
/* dGPU power control is required */
if (atpx->functions.power_cntl == false) {
printk("ATPX dGPU power cntl not present, forcing\n");
atpx->functions.power_cntl = true;
}
if (atpx->functions.px_params) { if (atpx->functions.px_params) {
union acpi_object *info; union acpi_object *info;
struct atpx_px_params output; struct atpx_px_params output;

View File

@@ -62,12 +62,6 @@ static const char *amdgpu_asic_name[] = {
"LAST", "LAST",
}; };
#if defined(CONFIG_VGA_SWITCHEROO)
bool amdgpu_has_atpx_dgpu_power_cntl(void);
#else
static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; }
#endif
bool amdgpu_device_is_px(struct drm_device *dev) bool amdgpu_device_is_px(struct drm_device *dev)
{ {
struct amdgpu_device *adev = dev->dev_private; struct amdgpu_device *adev = dev->dev_private;
@@ -1485,7 +1479,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
if (amdgpu_runtime_pm == 1) if (amdgpu_runtime_pm == 1)
runtime = true; runtime = true;
if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl()) if (amdgpu_device_is_px(ddev))
runtime = true; runtime = true;
vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime); vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime);
if (runtime) if (runtime)

View File

@@ -910,7 +910,10 @@ static int gmc_v7_0_late_init(void *handle)
{ {
struct amdgpu_device *adev = (struct amdgpu_device *)handle; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS)
return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
else
return 0;
} }
static int gmc_v7_0_sw_init(void *handle) static int gmc_v7_0_sw_init(void *handle)

View File

@@ -870,7 +870,10 @@ static int gmc_v8_0_late_init(void *handle)
{ {
struct amdgpu_device *adev = (struct amdgpu_device *)handle; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS)
return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
else
return 0;
} }
#define mmMC_SEQ_MISC0_FIJI 0xA71 #define mmMC_SEQ_MISC0_FIJI 0xA71

View File

@@ -1796,6 +1796,11 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
req_payload.start_slot = cur_slots; req_payload.start_slot = cur_slots;
if (mgr->proposed_vcpis[i]) { if (mgr->proposed_vcpis[i]) {
port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
port = drm_dp_get_validated_port_ref(mgr, port);
if (!port) {
mutex_unlock(&mgr->payload_lock);
return -EINVAL;
}
req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots; req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;
req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi; req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi;
} else { } else {
@@ -1823,6 +1828,9 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
mgr->payloads[i].payload_state = req_payload.payload_state; mgr->payloads[i].payload_state = req_payload.payload_state;
} }
cur_slots += req_payload.num_slots; cur_slots += req_payload.num_slots;
if (port)
drm_dp_put_port(port);
} }
for (i = 0; i < mgr->max_payloads; i++) { for (i = 0; i < mgr->max_payloads; i++) {
@@ -2128,6 +2136,8 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr)
if (mgr->mst_primary) { if (mgr->mst_primary) {
int sret; int sret;
u8 guid[16];
sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE); sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);
if (sret != DP_RECEIVER_CAP_SIZE) { if (sret != DP_RECEIVER_CAP_SIZE) {
DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n"); DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
@@ -2142,6 +2152,16 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr)
ret = -1; ret = -1;
goto out_unlock; goto out_unlock;
} }
/* Some hubs forget their guids after they resume */
sret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16);
if (sret != 16) {
DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
ret = -1;
goto out_unlock;
}
drm_dp_check_mstb_guid(mgr->mst_primary, guid);
ret = 0; ret = 0;
} else } else
ret = -1; ret = -1;

View File

@@ -572,6 +572,24 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
goto fail; goto fail;
} }
/*
* Set the GPU linear window to be at the end of the DMA window, where
* the CMA area is likely to reside. This ensures that we are able to
* map the command buffers while having the linear window overlap as
* much RAM as possible, so we can optimize mappings for other buffers.
*
* For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads
* to different views of the memory on the individual engines.
*/
if (!(gpu->identity.features & chipFeatures_PIPE_3D) ||
(gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) {
u32 dma_mask = (u32)dma_get_required_mask(gpu->dev);
if (dma_mask < PHYS_OFFSET + SZ_2G)
gpu->memory_base = PHYS_OFFSET;
else
gpu->memory_base = dma_mask - SZ_2G + 1;
}
ret = etnaviv_hw_reset(gpu); ret = etnaviv_hw_reset(gpu);
if (ret) if (ret)
goto fail; goto fail;
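A small worked example of the placement rule described in the comment above, with made-up values for PHYS_OFFSET and the DMA mask; the point is that the 2 GiB linear window is anchored to the top of the DMA window unless RAM starts too close to that boundary.

#include <stdint.h>
#include <stdio.h>

#define SZ_2G		0x80000000u
#define PHYS_OFFSET	0x10000000u	/* hypothetical start of RAM */

int main(void)
{
	uint32_t dma_mask = 0xffffffffu;	/* hypothetical 32-bit DMA window */
	uint32_t memory_base;

	if (dma_mask < PHYS_OFFSET + SZ_2G)
		memory_base = PHYS_OFFSET;		/* window would not fit: start at RAM base */
	else
		memory_base = dma_mask - SZ_2G + 1;	/* 0x80000000: cover the top 2 GiB */

	printf("memory_base = 0x%x\n", memory_base);	/* prints 0x80000000 */
	return 0;
}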
@@ -1566,7 +1584,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
{ {
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct etnaviv_gpu *gpu; struct etnaviv_gpu *gpu;
u32 dma_mask;
int err = 0; int err = 0;
gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL); gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL);
@@ -1576,18 +1593,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
gpu->dev = &pdev->dev; gpu->dev = &pdev->dev;
mutex_init(&gpu->lock); mutex_init(&gpu->lock);
/*
* Set the GPU linear window to be at the end of the DMA window, where
* the CMA area is likely to reside. This ensures that we are able to
* map the command buffers while having the linear window overlap as
* much RAM as possible, so we can optimize mappings for other buffers.
*/
dma_mask = (u32)dma_get_required_mask(dev);
if (dma_mask < PHYS_OFFSET + SZ_2G)
gpu->memory_base = PHYS_OFFSET;
else
gpu->memory_base = dma_mask - SZ_2G + 1;
/* Map registers: */ /* Map registers: */
gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev)); gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev));
if (IS_ERR(gpu->mmio)) if (IS_ERR(gpu->mmio))

View File

@@ -2608,10 +2608,152 @@ static void evergreen_agp_enable(struct radeon_device *rdev)
WREG32(VM_CONTEXT1_CNTL, 0); WREG32(VM_CONTEXT1_CNTL, 0);
} }
static const unsigned ni_dig_offsets[] =
{
NI_DIG0_REGISTER_OFFSET,
NI_DIG1_REGISTER_OFFSET,
NI_DIG2_REGISTER_OFFSET,
NI_DIG3_REGISTER_OFFSET,
NI_DIG4_REGISTER_OFFSET,
NI_DIG5_REGISTER_OFFSET
};
static const unsigned ni_tx_offsets[] =
{
NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1,
NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1,
NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1,
NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1,
NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1,
NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1
};
static const unsigned evergreen_dp_offsets[] =
{
EVERGREEN_DP0_REGISTER_OFFSET,
EVERGREEN_DP1_REGISTER_OFFSET,
EVERGREEN_DP2_REGISTER_OFFSET,
EVERGREEN_DP3_REGISTER_OFFSET,
EVERGREEN_DP4_REGISTER_OFFSET,
EVERGREEN_DP5_REGISTER_OFFSET
};
/*
* Assumption is that EVERGREEN_CRTC_MASTER_EN is enabled for the requested crtc.
* We go from crtc to connector, which is not reliable since it
* should really be the opposite direction. If the crtc is enabled, then
* find the dig_fe which selects this crtc and ensure that it is enabled.
* If such a dig_fe is found, then find the dig_be which selects that dig_fe and
* ensure that it is enabled and in DP_SST mode.
* If UNIPHY_PLL_CONTROL1.enable is set, we should disconnect the timing
* from the dp symbol clocks.
*/
static bool evergreen_is_dp_sst_stream_enabled(struct radeon_device *rdev,
unsigned crtc_id, unsigned *ret_dig_fe)
{
unsigned i;
unsigned dig_fe;
unsigned dig_be;
unsigned dig_en_be;
unsigned uniphy_pll;
unsigned digs_fe_selected;
unsigned dig_be_mode;
unsigned dig_fe_mask;
bool is_enabled = false;
bool found_crtc = false;
/* loop through all running dig_fe to find selected crtc */
for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {
dig_fe = RREG32(NI_DIG_FE_CNTL + ni_dig_offsets[i]);
if (dig_fe & NI_DIG_FE_CNTL_SYMCLK_FE_ON &&
crtc_id == NI_DIG_FE_CNTL_SOURCE_SELECT(dig_fe)) {
/* found running pipe */
found_crtc = true;
dig_fe_mask = 1 << i;
dig_fe = i;
break;
}
}
if (found_crtc) {
/* loop through all running dig_be to find selected dig_fe */
for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {
dig_be = RREG32(NI_DIG_BE_CNTL + ni_dig_offsets[i]);
/* is this dig_fe selected by the dig_be? */
digs_fe_selected = NI_DIG_BE_CNTL_FE_SOURCE_SELECT(dig_be);
dig_be_mode = NI_DIG_FE_CNTL_MODE(dig_be);
if (dig_fe_mask & digs_fe_selected &&
/* is the dig_be in sst mode? */
dig_be_mode == NI_DIG_BE_DPSST) {
dig_en_be = RREG32(NI_DIG_BE_EN_CNTL +
ni_dig_offsets[i]);
uniphy_pll = RREG32(NI_DCIO_UNIPHY0_PLL_CONTROL1 +
ni_tx_offsets[i]);
/* dig_be enable and tx is running */
if (dig_en_be & NI_DIG_BE_EN_CNTL_ENABLE &&
dig_en_be & NI_DIG_BE_EN_CNTL_SYMBCLK_ON &&
uniphy_pll & NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE) {
is_enabled = true;
*ret_dig_fe = dig_fe;
break;
}
}
}
}
return is_enabled;
}
/*
* Blank dig when in dp sst mode
* Dig ignores crtc timing
*/
static void evergreen_blank_dp_output(struct radeon_device *rdev,
unsigned dig_fe)
{
unsigned stream_ctrl;
unsigned fifo_ctrl;
unsigned counter = 0;
if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) {
DRM_ERROR("invalid dig_fe %d\n", dig_fe);
return;
}
stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
evergreen_dp_offsets[dig_fe]);
if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) {
DRM_ERROR("dig %d should be enabled\n", dig_fe);
return;
}
stream_ctrl &= ~EVERGREEN_DP_VID_STREAM_CNTL_ENABLE;
WREG32(EVERGREEN_DP_VID_STREAM_CNTL +
evergreen_dp_offsets[dig_fe], stream_ctrl);
stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
evergreen_dp_offsets[dig_fe]);
while (counter < 32 && stream_ctrl & EVERGREEN_DP_VID_STREAM_STATUS) {
msleep(1);
counter++;
stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
evergreen_dp_offsets[dig_fe]);
}
if (counter >= 32 )
DRM_ERROR("counter exceeds %d\n", counter);
fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]);
fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET;
WREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe], fifo_ctrl);
}
void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save) void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)
{ {
u32 crtc_enabled, tmp, frame_count, blackout; u32 crtc_enabled, tmp, frame_count, blackout;
int i, j; int i, j;
unsigned dig_fe;
if (!ASIC_IS_NODCE(rdev)) { if (!ASIC_IS_NODCE(rdev)) {
save->vga_render_control = RREG32(VGA_RENDER_CONTROL); save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
@@ -2651,7 +2793,17 @@ void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *sav
break; break;
udelay(1); udelay(1);
} }
/*
* We should disable the dig if it drives dp sst, but we are in
* radeon_device_init and the topology is not yet known (it only
* becomes available after radeon_modeset_init). The method
* radeon_atom_encoder_dpms_dig would do the job if we initialized it
* properly; for now we do it manually.
*/
if (ASIC_IS_DCE5(rdev) &&
evergreen_is_dp_sst_stream_enabled(rdev, i ,&dig_fe))
evergreen_blank_dp_output(rdev, dig_fe);
/*we could remove 6 lines below*/
/* XXX this is a hack to avoid strange behavior with EFI on certain systems */ /* XXX this is a hack to avoid strange behavior with EFI on certain systems */
WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]); tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);

View File

@@ -250,8 +250,43 @@
/* HDMI blocks at 0x7030, 0x7c30, 0x10830, 0x11430, 0x12030, 0x12c30 */ /* HDMI blocks at 0x7030, 0x7c30, 0x10830, 0x11430, 0x12030, 0x12c30 */
#define EVERGREEN_HDMI_BASE 0x7030 #define EVERGREEN_HDMI_BASE 0x7030
/*DIG block*/
#define NI_DIG0_REGISTER_OFFSET (0x7000 - 0x7000)
#define NI_DIG1_REGISTER_OFFSET (0x7C00 - 0x7000)
#define NI_DIG2_REGISTER_OFFSET (0x10800 - 0x7000)
#define NI_DIG3_REGISTER_OFFSET (0x11400 - 0x7000)
#define NI_DIG4_REGISTER_OFFSET (0x12000 - 0x7000)
#define NI_DIG5_REGISTER_OFFSET (0x12C00 - 0x7000)
#define NI_DIG_FE_CNTL 0x7000
# define NI_DIG_FE_CNTL_SOURCE_SELECT(x) ((x) & 0x3)
# define NI_DIG_FE_CNTL_SYMCLK_FE_ON (1<<24)
#define NI_DIG_BE_CNTL 0x7140
# define NI_DIG_BE_CNTL_FE_SOURCE_SELECT(x) (((x) >> 8 ) & 0x3F)
# define NI_DIG_FE_CNTL_MODE(x) (((x) >> 16) & 0x7 )
#define NI_DIG_BE_EN_CNTL 0x7144
# define NI_DIG_BE_EN_CNTL_ENABLE (1 << 0)
# define NI_DIG_BE_EN_CNTL_SYMBCLK_ON (1 << 8)
# define NI_DIG_BE_DPSST 0
/* Display Port block */ /* Display Port block */
#define EVERGREEN_DP0_REGISTER_OFFSET (0x730C - 0x730C)
#define EVERGREEN_DP1_REGISTER_OFFSET (0x7F0C - 0x730C)
#define EVERGREEN_DP2_REGISTER_OFFSET (0x10B0C - 0x730C)
#define EVERGREEN_DP3_REGISTER_OFFSET (0x1170C - 0x730C)
#define EVERGREEN_DP4_REGISTER_OFFSET (0x1230C - 0x730C)
#define EVERGREEN_DP5_REGISTER_OFFSET (0x12F0C - 0x730C)
#define EVERGREEN_DP_VID_STREAM_CNTL 0x730C
# define EVERGREEN_DP_VID_STREAM_CNTL_ENABLE (1 << 0)
# define EVERGREEN_DP_VID_STREAM_STATUS (1 <<16)
#define EVERGREEN_DP_STEER_FIFO 0x7310
# define EVERGREEN_DP_STEER_FIFO_RESET (1 << 0)
#define EVERGREEN_DP_SEC_CNTL 0x7280 #define EVERGREEN_DP_SEC_CNTL 0x7280
# define EVERGREEN_DP_SEC_STREAM_ENABLE (1 << 0) # define EVERGREEN_DP_SEC_STREAM_ENABLE (1 << 0)
# define EVERGREEN_DP_SEC_ASP_ENABLE (1 << 4) # define EVERGREEN_DP_SEC_ASP_ENABLE (1 << 4)
@@ -266,4 +301,15 @@
# define EVERGREEN_DP_SEC_N_BASE_MULTIPLE(x) (((x) & 0xf) << 24) # define EVERGREEN_DP_SEC_N_BASE_MULTIPLE(x) (((x) & 0xf) << 24)
# define EVERGREEN_DP_SEC_SS_EN (1 << 28) # define EVERGREEN_DP_SEC_SS_EN (1 << 28)
/*DCIO_UNIPHY block*/
#define NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1 (0x6600 -0x6600)
#define NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1 (0x6640 -0x6600)
#define NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1 (0x6680 - 0x6600)
#define NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1 (0x66C0 - 0x6600)
#define NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1 (0x6700 - 0x6600)
#define NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1 (0x6740 - 0x6600)
#define NI_DCIO_UNIPHY0_PLL_CONTROL1 0x6618
# define NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE (1 << 0)
#endif #endif

View File

@@ -230,22 +230,13 @@ EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo) void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo)
{ {
struct ttm_bo_device *bdev = bo->bdev; int put_count = 0;
struct ttm_mem_type_manager *man;
lockdep_assert_held(&bo->resv->lock.base); lockdep_assert_held(&bo->resv->lock.base);
if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) { put_count = ttm_bo_del_from_lru(bo);
list_del_init(&bo->swap); ttm_bo_list_ref_sub(bo, put_count, true);
list_del_init(&bo->lru); ttm_bo_add_to_lru(bo);
} else {
if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG))
list_move_tail(&bo->swap, &bo->glob->swap_lru);
man = &bdev->man[bo->mem.mem_type];
list_move_tail(&bo->lru, &man->lru);
}
} }
EXPORT_SYMBOL(ttm_bo_move_to_lru_tail); EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);

View File

@@ -267,11 +267,23 @@ static int virtio_gpu_crtc_atomic_check(struct drm_crtc *crtc,
return 0; return 0;
} }
static void virtio_gpu_crtc_atomic_flush(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
unsigned long flags;
spin_lock_irqsave(&crtc->dev->event_lock, flags);
if (crtc->state->event)
drm_crtc_send_vblank_event(crtc, crtc->state->event);
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
}
static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = { static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = {
.enable = virtio_gpu_crtc_enable, .enable = virtio_gpu_crtc_enable,
.disable = virtio_gpu_crtc_disable, .disable = virtio_gpu_crtc_disable,
.mode_set_nofb = virtio_gpu_crtc_mode_set_nofb, .mode_set_nofb = virtio_gpu_crtc_mode_set_nofb,
.atomic_check = virtio_gpu_crtc_atomic_check, .atomic_check = virtio_gpu_crtc_atomic_check,
.atomic_flush = virtio_gpu_crtc_atomic_flush,
}; };
static void virtio_gpu_enc_mode_set(struct drm_encoder *encoder, static void virtio_gpu_enc_mode_set(struct drm_encoder *encoder,

View File

@@ -3293,19 +3293,19 @@ static const struct vmw_cmd_entry vmw_cmd_entries[SVGA_3D_CMD_MAX] = {
&vmw_cmd_dx_cid_check, true, false, true), &vmw_cmd_dx_cid_check, true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_DEFINE_QUERY, &vmw_cmd_dx_define_query, VMW_CMD_DEF(SVGA_3D_CMD_DX_DEFINE_QUERY, &vmw_cmd_dx_define_query,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_ok, VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_dx_cid_check,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_BIND_QUERY, &vmw_cmd_dx_bind_query, VMW_CMD_DEF(SVGA_3D_CMD_DX_BIND_QUERY, &vmw_cmd_dx_bind_query,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_QUERY_OFFSET, VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_QUERY_OFFSET,
&vmw_cmd_ok, true, false, true), &vmw_cmd_dx_cid_check, true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_ok, VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_dx_cid_check,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_ok, VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_dx_cid_check,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_READBACK_QUERY, &vmw_cmd_invalid, VMW_CMD_DEF(SVGA_3D_CMD_DX_READBACK_QUERY, &vmw_cmd_invalid,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_invalid, VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_dx_cid_check,
true, false, true), true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_VIEWPORTS, &vmw_cmd_dx_cid_check, VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_VIEWPORTS, &vmw_cmd_dx_cid_check,
true, false, true), true, false, true),

View File

@@ -574,7 +574,7 @@ static int vmw_fb_set_par(struct fb_info *info)
old_mode = NULL; old_mode = NULL;
} else if (!vmw_kms_validate_mode_vram(vmw_priv, } else if (!vmw_kms_validate_mode_vram(vmw_priv,
mode->hdisplay * mode->hdisplay *
(var->bits_per_pixel + 7) / 8, DIV_ROUND_UP(var->bits_per_pixel, 8),
mode->vdisplay)) { mode->vdisplay)) {
drm_mode_destroy(vmw_priv->dev, mode); drm_mode_destroy(vmw_priv->dev, mode);
return -EINVAL; return -EINVAL;
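The change above is purely a readability cleanup: DIV_ROUND_UP(n, 8), as commonly defined in the kernel headers, expands to the same arithmetic as (n + 7) / 8. A quick user-space check, with the macro spelled out locally rather than taken from a kernel header:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int bpp;

	/* Both expressions give the bytes needed to store bpp bits per pixel. */
	for (bpp = 1; bpp <= 32; bpp++)
		if ((bpp + 7) / 8 != DIV_ROUND_UP(bpp, 8))
			printf("mismatch at bpp=%u\n", bpp);
	printf("done\n");	/* no mismatches are printed */
	return 0;
}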

View File

@@ -975,10 +975,10 @@ config I2C_XLR
config I2C_XLP9XX config I2C_XLP9XX
tristate "XLP9XX I2C support" tristate "XLP9XX I2C support"
depends on CPU_XLP || COMPILE_TEST depends on CPU_XLP || ARCH_VULCAN || COMPILE_TEST
help help
This driver enables support for the on-chip I2C interface of This driver enables support for the on-chip I2C interface of
the Broadcom XLP9xx/XLP5xx MIPS processors. the Broadcom XLP9xx/XLP5xx MIPS and Vulcan ARM64 processors.
This driver can also be built as a module. If so, the module will This driver can also be built as a module. If so, the module will
be called i2c-xlp9xx. be called i2c-xlp9xx.

View File

@@ -116,8 +116,8 @@ struct cpm_i2c {
cbd_t __iomem *rbase; cbd_t __iomem *rbase;
u_char *txbuf[CPM_MAXBD]; u_char *txbuf[CPM_MAXBD];
u_char *rxbuf[CPM_MAXBD]; u_char *rxbuf[CPM_MAXBD];
u32 txdma[CPM_MAXBD]; dma_addr_t txdma[CPM_MAXBD];
u32 rxdma[CPM_MAXBD]; dma_addr_t rxdma[CPM_MAXBD];
}; };
static irqreturn_t cpm_i2c_interrupt(int irq, void *dev_id) static irqreturn_t cpm_i2c_interrupt(int irq, void *dev_id)

View File

@@ -671,7 +671,9 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
return -EIO; return -EIO;
} }
clk_prepare_enable(i2c->clk); ret = clk_enable(i2c->clk);
if (ret)
return ret;
for (i = 0; i < num; i++, msgs++) { for (i = 0; i < num; i++, msgs++) {
stop = (i == num - 1); stop = (i == num - 1);
@@ -695,7 +697,7 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
} }
out: out:
clk_disable_unprepare(i2c->clk); clk_disable(i2c->clk);
return ret; return ret;
} }
@@ -747,7 +749,9 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
return -ENOENT; return -ENOENT;
} }
clk_prepare_enable(i2c->clk); ret = clk_prepare_enable(i2c->clk);
if (ret)
return ret;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
i2c->regs = devm_ioremap_resource(&pdev->dev, mem); i2c->regs = devm_ioremap_resource(&pdev->dev, mem);
@@ -799,6 +803,10 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, i2c); platform_set_drvdata(pdev, i2c);
clk_disable(i2c->clk);
return 0;
err_clk: err_clk:
clk_disable_unprepare(i2c->clk); clk_disable_unprepare(i2c->clk);
return ret; return ret;
@@ -810,6 +818,8 @@ static int exynos5_i2c_remove(struct platform_device *pdev)
i2c_del_adapter(&i2c->adap); i2c_del_adapter(&i2c->adap);
clk_unprepare(i2c->clk);
return 0; return 0;
} }
@@ -821,6 +831,8 @@ static int exynos5_i2c_suspend_noirq(struct device *dev)
i2c->suspended = 1; i2c->suspended = 1;
clk_unprepare(i2c->clk);
return 0; return 0;
} }
@@ -830,7 +842,9 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
struct exynos5_i2c *i2c = platform_get_drvdata(pdev); struct exynos5_i2c *i2c = platform_get_drvdata(pdev);
int ret = 0; int ret = 0;
clk_prepare_enable(i2c->clk); ret = clk_prepare_enable(i2c->clk);
if (ret)
return ret;
ret = exynos5_hsi2c_clock_setup(i2c); ret = exynos5_hsi2c_clock_setup(i2c);
if (ret) { if (ret) {
@@ -839,7 +853,7 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
} }
exynos5_i2c_init(i2c); exynos5_i2c_init(i2c);
clk_disable_unprepare(i2c->clk); clk_disable(i2c->clk);
i2c->suspended = 0; i2c->suspended = 0;
return 0; return 0;

View File

@@ -75,6 +75,7 @@
/* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */ /* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */
#define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59 #define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59
#define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a #define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a
#define PCI_DEVICE_ID_INTEL_DNV_SMT 0x19ac
#define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15 #define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15
#define ISMT_DESC_ENTRIES 2 /* number of descriptor entries */ #define ISMT_DESC_ENTRIES 2 /* number of descriptor entries */
@@ -180,6 +181,7 @@ struct ismt_priv {
static const struct pci_device_id ismt_ids[] = { static const struct pci_device_id ismt_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) }, { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) }, { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_DNV_SMT) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) }, { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) },
{ 0, } { 0, }
}; };

View File

@@ -855,6 +855,7 @@ static struct rk3x_i2c_soc_data soc_data[3] = {
static const struct of_device_id rk3x_i2c_match[] = { static const struct of_device_id rk3x_i2c_match[] = {
{ .compatible = "rockchip,rk3066-i2c", .data = (void *)&soc_data[0] }, { .compatible = "rockchip,rk3066-i2c", .data = (void *)&soc_data[0] },
{ .compatible = "rockchip,rk3188-i2c", .data = (void *)&soc_data[1] }, { .compatible = "rockchip,rk3188-i2c", .data = (void *)&soc_data[1] },
{ .compatible = "rockchip,rk3228-i2c", .data = (void *)&soc_data[2] },
{ .compatible = "rockchip,rk3288-i2c", .data = (void *)&soc_data[2] }, { .compatible = "rockchip,rk3288-i2c", .data = (void *)&soc_data[2] },
{}, {},
}; };

View File

@@ -691,7 +691,8 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
NULL); NULL);
/* Couldn't find default GID location */ /* Couldn't find default GID location */
WARN_ON(ix < 0); if (WARN_ON(ix < 0))
goto release;
zattr_type.gid_type = gid_type; zattr_type.gid_type = gid_type;

View File

@@ -48,6 +48,7 @@
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <rdma/ib.h>
#include <rdma/ib_cm.h> #include <rdma/ib_cm.h>
#include <rdma/ib_user_cm.h> #include <rdma/ib_user_cm.h>
#include <rdma/ib_marshall.h> #include <rdma/ib_marshall.h>
@@ -1103,6 +1104,9 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
struct ib_ucm_cmd_hdr hdr; struct ib_ucm_cmd_hdr hdr;
ssize_t result; ssize_t result;
if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
return -EACCES;
if (len < sizeof(hdr)) if (len < sizeof(hdr))
return -EINVAL; return -EINVAL;

View File

@@ -1574,6 +1574,9 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
struct rdma_ucm_cmd_hdr hdr; struct rdma_ucm_cmd_hdr hdr;
ssize_t ret; ssize_t ret;
if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
return -EACCES;
if (len < sizeof(hdr)) if (len < sizeof(hdr))
return -EINVAL; return -EINVAL;

View File

@@ -48,6 +48,8 @@
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <rdma/ib.h>
#include "uverbs.h" #include "uverbs.h"
MODULE_AUTHOR("Roland Dreier"); MODULE_AUTHOR("Roland Dreier");
@@ -709,6 +711,9 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
int srcu_key; int srcu_key;
ssize_t ret; ssize_t ret;
if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
return -EACCES;
if (count < sizeof hdr) if (count < sizeof hdr)
return -EINVAL; return -EINVAL;

View File

@@ -1860,6 +1860,7 @@ EXPORT_SYMBOL(ib_drain_rq);
void ib_drain_qp(struct ib_qp *qp) void ib_drain_qp(struct ib_qp *qp)
{ {
ib_drain_sq(qp); ib_drain_sq(qp);
if (!qp->srq)
ib_drain_rq(qp); ib_drain_rq(qp);
} }
EXPORT_SYMBOL(ib_drain_qp); EXPORT_SYMBOL(ib_drain_qp);

View File

@@ -1390,6 +1390,8 @@ int iwch_register_device(struct iwch_dev *dev)
dev->ibdev.iwcm->add_ref = iwch_qp_add_ref; dev->ibdev.iwcm->add_ref = iwch_qp_add_ref;
dev->ibdev.iwcm->rem_ref = iwch_qp_rem_ref; dev->ibdev.iwcm->rem_ref = iwch_qp_rem_ref;
dev->ibdev.iwcm->get_qp = iwch_get_qp; dev->ibdev.iwcm->get_qp = iwch_get_qp;
memcpy(dev->ibdev.iwcm->ifname, dev->rdev.t3cdev_p->lldev->name,
sizeof(dev->ibdev.iwcm->ifname));
ret = ib_register_device(&dev->ibdev, NULL); ret = ib_register_device(&dev->ibdev, NULL);
if (ret) if (ret)

View File

@@ -162,7 +162,7 @@ static int create_cq(struct c4iw_rdev *rdev, struct t4_cq *cq,
cq->bar2_va = c4iw_bar2_addrs(rdev, cq->cqid, T4_BAR2_QTYPE_INGRESS, cq->bar2_va = c4iw_bar2_addrs(rdev, cq->cqid, T4_BAR2_QTYPE_INGRESS,
&cq->bar2_qid, &cq->bar2_qid,
user ? &cq->bar2_pa : NULL); user ? &cq->bar2_pa : NULL);
if (user && !cq->bar2_va) { if (user && !cq->bar2_pa) {
pr_warn(MOD "%s: cqid %u not in BAR2 range.\n", pr_warn(MOD "%s: cqid %u not in BAR2 range.\n",
pci_name(rdev->lldi.pdev), cq->cqid); pci_name(rdev->lldi.pdev), cq->cqid);
ret = -EINVAL; ret = -EINVAL;

View File

@@ -580,6 +580,8 @@ int c4iw_register_device(struct c4iw_dev *dev)
dev->ibdev.iwcm->add_ref = c4iw_qp_add_ref; dev->ibdev.iwcm->add_ref = c4iw_qp_add_ref;
dev->ibdev.iwcm->rem_ref = c4iw_qp_rem_ref; dev->ibdev.iwcm->rem_ref = c4iw_qp_rem_ref;
dev->ibdev.iwcm->get_qp = c4iw_get_qp; dev->ibdev.iwcm->get_qp = c4iw_get_qp;
memcpy(dev->ibdev.iwcm->ifname, dev->rdev.lldi.ports[0]->name,
sizeof(dev->ibdev.iwcm->ifname));
ret = ib_register_device(&dev->ibdev, NULL); ret = ib_register_device(&dev->ibdev, NULL);
if (ret) if (ret)

View File

@@ -185,6 +185,10 @@ void __iomem *c4iw_bar2_addrs(struct c4iw_rdev *rdev, unsigned int qid,
if (pbar2_pa) if (pbar2_pa)
*pbar2_pa = (rdev->bar2_pa + bar2_qoffset) & PAGE_MASK; *pbar2_pa = (rdev->bar2_pa + bar2_qoffset) & PAGE_MASK;
if (is_t4(rdev->lldi.adapter_type))
return NULL;
return rdev->bar2_kva + bar2_qoffset; return rdev->bar2_kva + bar2_qoffset;
} }
@@ -270,7 +274,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
/* /*
* User mode must have bar2 access. * User mode must have bar2 access.
*/ */
if (user && (!wq->sq.bar2_va || !wq->rq.bar2_va)) { if (user && (!wq->sq.bar2_pa || !wq->rq.bar2_pa)) {
pr_warn(MOD "%s: sqid %u or rqid %u not in BAR2 range.\n", pr_warn(MOD "%s: sqid %u or rqid %u not in BAR2 range.\n",
pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid); pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
goto free_dma; goto free_dma;
@@ -1895,13 +1899,27 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
void c4iw_drain_sq(struct ib_qp *ibqp) void c4iw_drain_sq(struct ib_qp *ibqp)
{ {
struct c4iw_qp *qp = to_c4iw_qp(ibqp); struct c4iw_qp *qp = to_c4iw_qp(ibqp);
unsigned long flag;
bool need_to_wait;
spin_lock_irqsave(&qp->lock, flag);
need_to_wait = !t4_sq_empty(&qp->wq);
spin_unlock_irqrestore(&qp->lock, flag);
if (need_to_wait)
wait_for_completion(&qp->sq_drained); wait_for_completion(&qp->sq_drained);
} }
void c4iw_drain_rq(struct ib_qp *ibqp) void c4iw_drain_rq(struct ib_qp *ibqp)
{ {
struct c4iw_qp *qp = to_c4iw_qp(ibqp); struct c4iw_qp *qp = to_c4iw_qp(ibqp);
unsigned long flag;
bool need_to_wait;
spin_lock_irqsave(&qp->lock, flag);
need_to_wait = !t4_rq_empty(&qp->wq);
spin_unlock_irqrestore(&qp->lock, flag);
if (need_to_wait)
wait_for_completion(&qp->rq_drained); wait_for_completion(&qp->rq_drained);
} }

View File

@@ -530,7 +530,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
sizeof(struct mlx5_wqe_ctrl_seg)) / sizeof(struct mlx5_wqe_ctrl_seg)) /
sizeof(struct mlx5_wqe_data_seg); sizeof(struct mlx5_wqe_data_seg);
props->max_sge = min(max_rq_sg, max_sq_sg); props->max_sge = min(max_rq_sg, max_sq_sg);
props->max_sge_rd = props->max_sge; props->max_sge_rd = MLX5_MAX_SGE_RD;
props->max_cq = 1 << MLX5_CAP_GEN(mdev, log_max_cq); props->max_cq = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1; props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1;
props->max_mr = 1 << MLX5_CAP_GEN(mdev, log_max_mkey); props->max_mr = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
@@ -671,8 +671,8 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
struct mlx5_ib_dev *dev = to_mdev(ibdev); struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev; struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_hca_vport_context *rep; struct mlx5_hca_vport_context *rep;
int max_mtu; u16 max_mtu;
int oper_mtu; u16 oper_mtu;
int err; int err;
u8 ib_link_width_oper; u8 ib_link_width_oper;
u8 vl_hw_cap; u8 vl_hw_cap;

View File

@@ -500,9 +500,6 @@ static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
* skb_shinfo(skb)->nr_frags, skb_is_gso(skb)); * skb_shinfo(skb)->nr_frags, skb_is_gso(skb));
*/ */
if (!netif_carrier_ok(netdev))
return NETDEV_TX_OK;
if (netif_queue_stopped(netdev)) if (netif_queue_stopped(netdev))
return NETDEV_TX_BUSY; return NETDEV_TX_BUSY;

View File

@@ -45,6 +45,8 @@
#include <linux/export.h> #include <linux/export.h>
#include <linux/uio.h> #include <linux/uio.h>
#include <rdma/ib.h>
#include "qib.h" #include "qib.h"
#include "qib_common.h" #include "qib_common.h"
#include "qib_user_sdma.h" #include "qib_user_sdma.h"
@@ -2067,6 +2069,9 @@ static ssize_t qib_write(struct file *fp, const char __user *data,
ssize_t ret = 0; ssize_t ret = 0;
void *dest; void *dest;
if (WARN_ON_ONCE(!ib_safe_file_access(fp)))
return -EACCES;
if (count < sizeof(cmd.type)) { if (count < sizeof(cmd.type)) {
ret = -EINVAL; ret = -EINVAL;
goto bail; goto bail;

View File

@@ -1637,9 +1637,9 @@ bail:
spin_unlock_irqrestore(&qp->s_hlock, flags); spin_unlock_irqrestore(&qp->s_hlock, flags);
if (nreq) { if (nreq) {
if (call_send) if (call_send)
rdi->driver_f.schedule_send_no_lock(qp);
else
rdi->driver_f.do_send(qp); rdi->driver_f.do_send(qp);
else
rdi->driver_f.schedule_send_no_lock(qp);
} }
return err; return err;
} }

View File

@@ -1452,13 +1452,6 @@ static int usbvision_probe(struct usb_interface *intf,
printk(KERN_INFO "%s: %s found\n", __func__, printk(KERN_INFO "%s: %s found\n", __func__,
usbvision_device_data[model].model_string); usbvision_device_data[model].model_string);
/*
* this is a security check.
* an exploit using an incorrect bInterfaceNumber is known
*/
if (ifnum >= USB_MAXINTERFACES || !dev->actconfig->interface[ifnum])
return -ENODEV;
if (usbvision_device_data[model].interface >= 0) if (usbvision_device_data[model].interface >= 0)
interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0]; interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0];
else if (ifnum < dev->actconfig->desc.bNumInterfaces) else if (ifnum < dev->actconfig->desc.bNumInterfaces)

View File

@@ -1645,7 +1645,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
* Will sleep if required for nonblocking == false. * Will sleep if required for nonblocking == false.
*/ */
static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb, static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,
int nonblocking) void *pb, int nonblocking)
{ {
unsigned long flags; unsigned long flags;
int ret; int ret;
@@ -1666,9 +1666,9 @@ static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,
/* /*
* Only remove the buffer from done_list if v4l2_buffer can handle all * Only remove the buffer from done_list if v4l2_buffer can handle all
* the planes. * the planes.
* Verifying planes is NOT necessary since it already has been checked
* before the buffer is queued/prepared. So it can never fail.
*/ */
ret = call_bufop(q, verify_planes_array, *vb, pb);
if (!ret)
list_del(&(*vb)->done_entry); list_del(&(*vb)->done_entry);
spin_unlock_irqrestore(&q->done_lock, flags); spin_unlock_irqrestore(&q->done_lock, flags);
@@ -1748,7 +1748,7 @@ int vb2_core_dqbuf(struct vb2_queue *q, unsigned int *pindex, void *pb,
struct vb2_buffer *vb = NULL; struct vb2_buffer *vb = NULL;
int ret; int ret;
ret = __vb2_get_done_vb(q, &vb, nonblocking); ret = __vb2_get_done_vb(q, &vb, pb, nonblocking);
if (ret < 0) if (ret < 0)
return ret; return ret;
@@ -2297,6 +2297,16 @@ unsigned int vb2_core_poll(struct vb2_queue *q, struct file *file,
if (!vb2_is_streaming(q) || q->error) if (!vb2_is_streaming(q) || q->error)
return POLLERR; return POLLERR;
/*
* If this quirk is set and QBUF hasn't been called yet then
* return POLLERR as well. This only affects capture queues, output
* queues will always initialize waiting_for_buffers to false.
* This quirk is set by V4L2 for backwards compatibility reasons.
*/
if (q->quirk_poll_must_check_waiting_for_buffers &&
q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))
return POLLERR;
/* /*
* For output streams you can call write() as long as there are fewer * For output streams you can call write() as long as there are fewer
* buffers queued than there are buffers available. * buffers queued than there are buffers available.

View File

@@ -49,7 +49,7 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
vec = frame_vector_create(nr); vec = frame_vector_create(nr);
if (!vec) if (!vec)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
ret = get_vaddr_frames(start, nr, write, 1, vec); ret = get_vaddr_frames(start & PAGE_MASK, nr, write, true, vec);
if (ret < 0) if (ret < 0)
goto out_destroy; goto out_destroy;
/* We accept only complete set of PFNs */ /* We accept only complete set of PFNs */

View File

@@ -74,6 +74,11 @@ static int __verify_planes_array(struct vb2_buffer *vb, const struct v4l2_buffer
return 0; return 0;
} }
static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb)
{
return __verify_planes_array(vb, pb);
}
/** /**
* __verify_length() - Verify that the bytesused value for each plane fits in * __verify_length() - Verify that the bytesused value for each plane fits in
* the plane length and that the data offset doesn't exceed the bytesused value. * the plane length and that the data offset doesn't exceed the bytesused value.
@@ -437,6 +442,7 @@ static int __fill_vb2_buffer(struct vb2_buffer *vb,
} }
static const struct vb2_buf_ops v4l2_buf_ops = { static const struct vb2_buf_ops v4l2_buf_ops = {
.verify_planes_array = __verify_planes_array_core,
.fill_user_buffer = __fill_v4l2_buffer, .fill_user_buffer = __fill_v4l2_buffer,
.fill_vb2_buffer = __fill_vb2_buffer, .fill_vb2_buffer = __fill_vb2_buffer,
.copy_timestamp = __copy_timestamp, .copy_timestamp = __copy_timestamp,
@@ -765,6 +771,12 @@ int vb2_queue_init(struct vb2_queue *q)
q->is_output = V4L2_TYPE_IS_OUTPUT(q->type); q->is_output = V4L2_TYPE_IS_OUTPUT(q->type);
q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK) q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK)
== V4L2_BUF_FLAG_TIMESTAMP_COPY; == V4L2_BUF_FLAG_TIMESTAMP_COPY;
/*
* For compatibility with vb1: if QBUF hasn't been called yet, then
* return POLLERR as well. This only affects capture queues, output
* queues will always initialize waiting_for_buffers to false.
*/
q->quirk_poll_must_check_waiting_for_buffers = true;
return vb2_core_queue_init(q); return vb2_core_queue_init(q);
} }
@@ -818,14 +830,6 @@ unsigned int vb2_poll(struct vb2_queue *q, struct file *file, poll_table *wait)
poll_wait(file, &fh->wait, wait); poll_wait(file, &fh->wait, wait);
} }
/*
* For compatibility with vb1: if QBUF hasn't been called yet, then
* return POLLERR as well. This only affects capture queues, output
* queues will always initialize waiting_for_buffers to false.
*/
if (q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))
return POLLERR;
return res | vb2_core_poll(q, file, wait); return res | vb2_core_poll(q, file, wait);
} }
EXPORT_SYMBOL_GPL(vb2_poll); EXPORT_SYMBOL_GPL(vb2_poll);

View File

@@ -223,6 +223,13 @@ int __detach_context(struct cxl_context *ctx)
cxl_ops->link_ok(ctx->afu->adapter, ctx->afu)); cxl_ops->link_ok(ctx->afu->adapter, ctx->afu));
flush_work(&ctx->fault_work); /* Only needed for dedicated process */ flush_work(&ctx->fault_work); /* Only needed for dedicated process */
/*
* Wait until no further interrupts are presented by the PSL
* for this context.
*/
if (cxl_ops->irq_wait)
cxl_ops->irq_wait(ctx);
/* release the reference to the group leader and mm handling pid */ /* release the reference to the group leader and mm handling pid */
put_pid(ctx->pid); put_pid(ctx->pid);
put_pid(ctx->glpid); put_pid(ctx->glpid);

View File

@@ -274,6 +274,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_PSL_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */ #define CXL_PSL_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */
#define CXL_PSL_DSISR_An_AE (1ull << (63-5)) /* AFU Error */ #define CXL_PSL_DSISR_An_AE (1ull << (63-5)) /* AFU Error */
#define CXL_PSL_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */ #define CXL_PSL_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */
#define CXL_PSL_DSISR_PENDING (CXL_PSL_DSISR_TRANS | CXL_PSL_DSISR_An_PE | CXL_PSL_DSISR_An_AE | CXL_PSL_DSISR_An_OC)
/* NOTE: Bits 32:63 are undefined if DSISR[DS] = 1 */ /* NOTE: Bits 32:63 are undefined if DSISR[DS] = 1 */
#define CXL_PSL_DSISR_An_M DSISR_NOHPTE /* PTE not found */ #define CXL_PSL_DSISR_An_M DSISR_NOHPTE /* PTE not found */
#define CXL_PSL_DSISR_An_P DSISR_PROTFAULT /* Storage protection violation */ #define CXL_PSL_DSISR_An_P DSISR_PROTFAULT /* Storage protection violation */
@@ -855,6 +856,7 @@ struct cxl_backend_ops {
u64 dsisr, u64 errstat); u64 dsisr, u64 errstat);
irqreturn_t (*psl_interrupt)(int irq, void *data); irqreturn_t (*psl_interrupt)(int irq, void *data);
int (*ack_irq)(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask); int (*ack_irq)(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask);
void (*irq_wait)(struct cxl_context *ctx);
int (*attach_process)(struct cxl_context *ctx, bool kernel, int (*attach_process)(struct cxl_context *ctx, bool kernel,
u64 wed, u64 amr); u64 wed, u64 amr);
int (*detach_process)(struct cxl_context *ctx); int (*detach_process)(struct cxl_context *ctx);

Some files were not shown because too many files have changed in this diff.